Provenance Capture in Data Access and Data Manipulation Software


Presented at the AGU Fall Meeting 2013

We'd love your feedback

Please email Patrick West with comments, questions, suggestions.
What we're looking for:

  • Comments on the model
  • Feedback on the use of provenance and ping-back in the response header
  • Reviewers for the model, design documents, and technology infrastructure
  • Collaborators on this project
  • Any other questions, comments, or suggestions are welcome!
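As a sketch of the ping-back idea mentioned above: the W3C PROV-AQ working group note suggests that a server can advertise provenance and ping-back URIs for a resource through HTTP Link headers, using the `prov#has_provenance` and `prov#pingback` link relations. The following Python sketch (all URIs are hypothetical examples, not real endpoints) shows how a client might parse such a header:

```python
import re

def parse_link_header(header):
    """Return a {relation: target-URI} dict for a comma-separated Link header."""
    links = {}
    for target, rel in re.findall(r'<([^>]+)>\s*;\s*rel="([^"]+)"', header):
        links[rel] = target
    return links

# A hypothetical Link header a data server might attach to a response:
header = ('<http://example.org/prov/fig1>; '
          'rel="http://www.w3.org/ns/prov#has_provenance", '
          '<http://example.org/pingback/fig1>; '
          'rel="http://www.w3.org/ns/prov#pingback"')

links = parse_link_header(header)
print(links["http://www.w3.org/ns/prov#has_provenance"])
print(links["http://www.w3.org/ns/prov#pingback"])
```

A client that finds the `has_provenance` URI can dereference it to retrieve a provenance record; the `pingback` URI lets downstream users report derived products back to the data provider.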


There is an increasing need to trace data products back to their origins, whether they are images or charts in a report, data obtained from a sensor on an instrument, or a generated dataset referenced in a research paper, a government report on the environment, or a publication or poster presentation. Yet most software applications that perform data access and manipulation keep only a limited history of the data, i.e., its provenance.

Imagine the following scenario: a report being drafted for a government agency contains a figure showing multiple graphs and plots related to global climate. The graphs and plots are generated by an algorithm in an iPython Notebook, developed by a researcher using a particular data portal; the algorithm pulls data from four data sets on that portal. That data is aggregated over the time dimension, constrained to a few parameters, accessed using a particular piece of data access software, and converted from one datatype to another. All of the processing was carried out by three researchers from a public university, on a project funded by the same government agency requesting the report, with one Principal Investigator and two Co-Investigators. In this scenario, today we are lucky to get a blob of text under the figure that says a couple of things about it and cites a publication written a few years ago. Data citation, data publishing information, licensing information, and provenance are all lacking in such derived data products.

What we really want is the ability to trace the figure all the way back to the original datasets, including what was done to those datasets, and to see information about the researchers, the project, the funding agency, the award, and the organizations collaborating on the project. In this paper we discuss the need for such information and traceback features, as well as new technologies and standards that can help us become better data stewards. Specifically, we discuss the recently published PROV recommendation from the W3C, and existing and new features in the OPeNDAP software stack that can help incorporate citation, licensing, and provenance information, along with the ability to click through and retrieve that information.
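The traceback chain described above can be expressed with the W3C PROV-O vocabulary: the figure is an entity derived from the source datasets, generated by a processing activity that is associated with the researchers. The following Python sketch (all entity names and URIs are hypothetical) emits such a record as Turtle using only standard-library code:

```python
# Minimal PROV-O derivation record for the report-figure scenario.
PREFIXES = """\
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex:   <http://example.org/> .
"""

def derivation_record(figure, activity, agent, sources):
    """Emit Turtle stating that `figure` was generated by `activity`,
    which used each source dataset and was associated with `agent`."""
    srcs = ", ".join(f"ex:{s}" for s in sources)
    lines = [PREFIXES]
    lines.append(f"ex:{figure} a prov:Entity ;")
    lines.append(f"    prov:wasGeneratedBy ex:{activity} ;")
    lines.append(f"    prov:wasDerivedFrom {srcs} .")
    lines.append(f"ex:{activity} a prov:Activity ;")
    lines.append(f"    prov:wasAssociatedWith ex:{agent} ;")
    lines.append(f"    prov:used {srcs} .")
    return "\n".join(lines)

record = derivation_record(
    "climate_figure", "aggregate_and_plot", "researcher1",
    ["dataset1", "dataset2", "dataset3", "dataset4"])
print(record)
```

A record like this, published alongside the figure (or linked from a response header), is what would let a reader click through from the figure to the datasets, the processing steps, and the people involved.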


Date               Created By     Link
December 9, 2013   Patrick West   Download
December 8, 2013   Patrick West   Download
November 25, 2013  Patrick West   Download
November 25, 2013  Patrick West   Download

Related Projects:

Deep Carbon Observatory Data Science (DCO-DS)
Principal Investigator: Peter Fox
Co-Investigators: John S. Erickson and Jim Hendler
Description: Given this increasing data deluge, it is clear that each of the Directorates in the Deep Carbon Observatory faces diverse data science and data management needs in fulfilling both its decadal strategic objectives and its day-to-day tasks. This project will assess in detail the data science and data management needs of each DCO directorate and of the DCO as a whole, using a combination of informatics methods: use case development, requirements analysis, inventories, and interviews.
Semantic Provenance Capture in Data Ingest Systems (SPCDIS)
Principal Investigator: Peter Fox
Co-Investigator: Deborah L. McGuinness
Description: The goal of this project is to develop a semantically enabled data ingest capability at the RPI Tetherless World Constellation, based within the NCAR High Altitude Observatory and in collaboration with the University of Texas at El Paso, the University of Michigan, and McGuinness Associates.
Virtual Solar Terrestrial Observatory (VSTO)
Principal Investigator: Peter Fox
Co-Investigator: Deborah L. McGuinness
Description: VSTO is a collaborative project between the High Altitude Observatory and Scientific Computing Division of the National Center for Atmospheric Research and McGuinness Associates. VSTO is funded by a grant from the National Science Foundation, Computer and Information Science and Engineering (CISE) in the Shared Cyberinfrastructure (SCI) division.

Related Research Areas:

Data Science
Lead Professor: Peter Fox
Description: Science has fully entered a new mode of operation. Data science is advancing the inductive conduct of science, driven by the greater volume, complexity, and heterogeneity of data being made available over the Internet. Data science combines aspects of data management, library science, computer science, and physical science using supporting cyberinfrastructure and information technology. As such, it is changing the way all of these disciplines do both their individual and collaborative work.

Data science is helping scientists face new global problems of a magnitude, complexity, and interdisciplinary nature whose progress is presently limited by the lack of available tools and of a fully trained, agile workforce.

At present, there is a lack of formal training in the key cognitive and skill areas that would enable graduates to become key participants in eScience collaborations. The need is to teach key methodologies in application areas based on real research experience, and to build a skill set.

At the heart of this new way of doing science, especially experimental and observational science but also increasingly computational science, is the generation of data.

Concepts: eScience
Semantic eScience
Lead Professor: Peter Fox
Science has fully entered a new mode of operation. E-science, defined as a combination of science, informatics, computer science, cyberinfrastructure and information technology is changing the way all of these disciplines do both their individual and collaborative work.
As semantic technologies have been gaining momentum in various e-Science areas (for example, the W3C's Semantic Web Health Care and Life Sciences Interest Group), it is important to offer semantic-based methodologies, tools, and middleware to facilitate scientific knowledge modeling, logic-based hypothesis checking, semantic data integration, application composition, and integrated knowledge discovery and data analysis for different e-Science applications.
Partially influenced by the Artificial Intelligence community, Semantic Web researchers have largely focused on the formal aspects of semantic representation languages or on general-purpose semantic application development, with inadequate consideration of the requirements of specific science areas. On the other hand, general science researchers are growing ever more dependent on the web, but they have no coherent agenda for exploring the emerging trends in semantic web technologies. This urgently calls for the development of a multi-disciplinary field to foster the growth and development of e-Science applications based on semantic technologies and related knowledge-based approaches.

Concepts: eScience
Xinformatics
Lead Professor: Peter Fox
Description: In the last 2-3 years, Informatics has attained greater visibility across a broad range of disciplines, especially in light of great successes in bio- and biomedical informatics and significant challenges posed by the explosion of data and information resources. Xinformatics is intended to provide both the common informatics knowledge and how it is implemented in specific disciplines, e.g. X = astro, geo, chem, etc. Informatics' theoretical basis arises from information science, cognitive science, social science, and library science, as well as computer science. As such, it aggregates these studies and adds both the practice of information processing and the engineering of information systems.
Concepts: eScience