Across many fields involving complex computing, software systems are being augmented with workflow logging functionality. The log data can be effectively organized using declarative structured languages such as OWL; however, such declarative encodings alone are not enough to facilitate understandable workflow systems with high-quality explanations. In this paper, we present our approach for visually explaining OWL-encoded workflow logs for complex systems, which includes the following steps: (i) identifying and normalizing provenance in workflow logs using the provenance interlingua PML2, (ii) building from this provenance information, together with supplemental log data, an abstracted workflow representation known as a RITE network (capable of storing workflow state Relationships, Identifiers, Types, and Explanations), and (iii) visualizing the workflow log by displaying its provenance information as a directed acyclic graph and presenting supplemental explanations for individual workflow states and relationships. To demonstrate these techniques, we describe the design of a workflow explainer for the Generalized Integrated Learning Architecture (GILA) -- a multi-agent platform designed to use multiple learners to solve problems such as resolving airspace allocation conflicts. We also comment on how our approach can be generalized to explain other complex workflow systems.
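The abstracted representation in step (ii) can be sketched as a small data structure. The following is a minimal illustration, not the paper's actual implementation: the class and field names (RiteNode, antecedents, topological_order) are hypothetical, chosen only to show how workflow states carrying Relationships, Identifiers, Types, and Explanations naturally form a directed acyclic graph that can be traversed for display.

```python
# Hypothetical sketch of a RITE-style workflow state (illustrative names,
# not the paper's API). Each state stores an Identifier, a Type, an
# Explanation, and Relationships to antecedent states; together the
# states form a directed acyclic graph.
from dataclasses import dataclass, field

@dataclass
class RiteNode:
    identifier: str                # unique workflow-state identifier
    node_type: str                 # e.g. "source", "inference", "learned-info"
    explanation: str               # human-readable justification for this state
    antecedents: list = field(default_factory=list)  # provenance relationships

def topological_order(nodes):
    """Return state identifiers in dependency order (antecedents first),
    the order in which a DAG visualization would lay them out."""
    seen, order = set(), []

    def visit(node):
        if node.identifier in seen:
            return
        seen.add(node.identifier)
        for antecedent in node.antecedents:
            visit(antecedent)
        order.append(node.identifier)

    for node in nodes:
        visit(node)
    return order
```

A visualizer built on such a structure would emit one graph node per state, one edge per antecedent relationship, and attach each state's explanation as supplemental detail.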
The Generalized Integrated Learning Architecture (GILA) is a general-purpose integrated multi-agent platform that solves domain problems by learning from a problem-solution pair submitted by a human expert. One of the key purposes of GILA is to learn how humans solve complex problems and apply this knowledge to future problems. A complex problem domain known as the Airspace Control Scenario has been chosen to drive the development of GILA and evaluate its performance. The objective of this problem domain is to resolve conflicts in a collection of airspace allocations for aircraft.
The Inference Web is a Semantic Web-based knowledge provenance infrastructure that supports interoperable explanations of sources, assumptions, learned information, and answers as an enabler for trust.