Provenance of High Throughput Biomedical Experiments

Abstract:

The field of translational biomedical informatics seeks to integrate knowledge from basic science, directed research into diseases, and clinical insights into a form that can be used to discover effective treatments for diseases. Currently, representations of experimental provenance reside in models specific to each sub-domain: biospecimen management tools track the histories of biospecimens, how they are handled and disposed of; high throughput assay-based experiments are described in a format specifically designed for that data; and experimental workflow systems, such as Laboratory Information Management Systems (LIMS), each represent their portion of the research pipeline using models specifically designed for those tasks. In recent years, a concept of a general-purpose provenance model has emerged from the computational workflow domain.

In bioinformatics there has been an explosion of data due to the use of high-throughput assays, such as microarrays, for research in biology and biomedicine. These assays produce data for experiments on the current gene expression of cells, commonly expressed polymorphisms, the epigenetic regulation of genes, and many other subjects. During this time, the community has developed and adopted a standard for describing these experiments and the data that they generate: the MAGE (MicroArray and Gene Expression) standard. Adoption of this standard, along with data sharing requirements from funding institutions, has resulted in the publication of tens of thousands of high throughput experiments over the last ten years, making it the de facto format for describing experiments in biomedicine. As with other parts of the translational research pipeline, these experimental representations are primarily representations of workflow and are not currently integrated with other types of biomedical data. We propose a vision for a common model of provenance representations across the translational research pipeline, and show that one of the largest sources of data in that pipeline, microarray-based experiments, can be accurately represented in general-purpose models of provenance that are already used to represent computational workflows.

We demonstrate methods and tools to generate RDF representations of MAGE-TAB, a commonly used MAGE format, and mappings of MAGE documents to two general-purpose provenance representations: OPM (Open Provenance Model) and PML (Proof Markup Language). Through a use case simulation and round-trip analysis of representative examples, we show that the data represented in MAGE documents can be completely represented in OPM and PML. The success in mapping MAGE documents into general-purpose provenance models demonstrates the promise of implementing the translational research provenance vision.
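To illustrate the kind of mapping described above, the following is a minimal sketch of how one step of a MAGE-TAB SDRF processing chain might be expressed as OPM-style RDF triples. The namespace URI, resource names, and the exact property spellings are illustrative assumptions; the actual mapping rules are defined by the tools described in this work.

```python
# Hypothetical sketch: one MAGE-TAB derivation chain (extract -> labeling ->
# labeled extract -> hybridization -> raw data file) expressed as OPM-like
# triples. OPM models steps as Processes and materials/files as Artifacts,
# linked by "used" and "wasGeneratedBy" edges; URIs here are made up.

OPM = "http://openprovenance.org/model/opmo#"  # assumed namespace
EX = "http://example.org/experiment/"          # illustrative experiment URIs

def triple(s, p, o):
    """Format one subject/predicate/object triple in N-Triples syntax."""
    return f"<{s}> <{p}> <{o}> ."

graph = [
    triple(EX + "labeling", OPM + "used", EX + "extract1"),
    triple(EX + "labeledExtract1", OPM + "wasGeneratedBy", EX + "labeling"),
    triple(EX + "hybridization", OPM + "used", EX + "labeledExtract1"),
    triple(EX + "rawData.CEL", OPM + "wasGeneratedBy", EX + "hybridization"),
]

for t in graph:
    print(t)
```

Round-trip analysis, in this setting, amounts to checking that such a graph can be converted back into an SDRF row without losing any of the original columns.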

History

Date: July 18, 2011, 23:05:30
Created By: James McCusker

Related Projects:

Inference Web
Principal Investigator: Deborah L. McGuinness
Description: The Inference Web is a Semantic Web based knowledge provenance infrastructure that supports interoperable explanations of sources, assumptions, learned information, and answers as an enabler for trust. Provenance - if users (humans and agents) are to use and integrate data from unknown, uncertain, or multiple sources, they need provenance metadata for evaluation Interoperability - more systems are using varied sources and multiple information manipulation engines, thus increasing interoperability requirements Explanation/Justification - if information has been manipulated (i.e., by sound deduction or by heuristic processes), information manipulation trace information should be available Trust - if some sources are more trustworthy than others, trust ratings are desired The Inference Web consists of two important components: Proof Markup Language (PML) Ontology - Semantic Web based representation for exchanging explanations including provenance information - annotating the sources of knowledge justification information - annotating the steps for deriving the conclusions or executing workflows trust information - annotating trustworthiness assertions about knowledge and sources IW Toolkit - Web-based and standalone tools that facilitate human users to browse, debug, explain, and abstract the knowledge encoded in PML.

Related Research Areas:

Knowledge Provenance
Lead Professor: Deborah L. McGuinness
Description: Knowledge Provenance
Concepts: Provenance
Semantic eScience
Lead Professor: Peter Fox
Description:
Science has fully entered a new mode of operation. E-science, defined as a combination of science, informatics, computer science, cyberinfrastructure, and information technology, is changing the way all of these disciplines do both their individual and collaborative work.
As semantic technologies have been gaining momentum in various e-Science areas (for example, W3C's interest group for semantic web health care and life science), it is important to offer semantic-based methodologies, tools, and middleware to facilitate scientific knowledge modeling, logic-based hypothesis checking, semantic data integration and application composition, and integrated knowledge discovery and data analysis for different e-Science applications.
Partially influenced by the Artificial Intelligence community, Semantic Web researchers have largely focused on formal aspects of semantic representation languages or general-purpose semantic application development, with inadequate consideration of requirements from specific science areas. On the other hand, general science researchers are growing ever more dependent on the web, but they have no coherent agenda for exploring the emerging semantic web technologies. This situation urgently calls for the development of a multi-disciplinary field to foster the growth of e-Science applications based on semantic technologies and related knowledge-based approaches.

Concepts: eScience