Workflow-Explorer: Explainable AI through Provenance Tagging

Devin De Silva

As artificial intelligence systems evolve into complex, multi-stage workflows orchestrating Large Language Models (LLMs) and dynamic code execution, their inherent non-determinism and opacity pose significant barriers to trust and accountability. To address this challenge, we present a novel hybrid framework for generating traceable, on-demand explanations for programmatic AI workflows. Our approach leverages the ProvONE and EO (Explanation) Ontologies to capture high-fidelity execution traces within a structured knowledge graph. Departing from traditional open-ended retrieval, we utilize the deterministic nature of the ontology schema to programmatically generate a comprehensive library of synthetic questions mapped to executable SPARQL queries. These queries serve as tools for an LLM-based explainer, which employs a least-to-most prompting strategy to iteratively retrieve provenance data and construct context-aware answers. We demonstrate the efficacy of our framework by implementing it within ChatBS-NexGen, a dynamic LLM unit-testing system. Our results show that this approach effectively transforms opaque execution logs into interactive, factually grounded explanations, significantly enhancing the transparency of autonomous AI processes.
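The core idea of mapping a fixed library of synthetic questions to executable queries over a provenance graph can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the real system uses ProvONE/EO terms and SPARQL over a knowledge graph, whereas here plain triples and pattern-matching functions stand in for those components, and all identifiers (`exec:run1`, `prov:used`, etc.) are assumed names.

```python
# Toy provenance graph as (subject, predicate, object) triples, loosely
# modeled on ProvONE/PROV-style relations (all names are assumptions).
GRAPH = [
    ("exec:run1", "prov:wasAssociatedWith", "program:unit_test_gen"),
    ("data:output1", "prov:wasGeneratedBy", "exec:run1"),
    ("exec:run1", "prov:used", "data:input1"),
]

def match(graph, s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a wildcard,
    mimicking a basic SPARQL triple pattern."""
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Synthetic question library: each question template is paired with an
# executable, parameterized query. An LLM-based explainer would select
# and fill these templates rather than writing queries from scratch.
QUESTION_LIBRARY = {
    "Which execution generated {output}?":
        lambda g, output: [t[2] for t in match(g, s=output, p="prov:wasGeneratedBy")],
    "Which inputs did {execution} use?":
        lambda g, execution: [t[2] for t in match(g, s=execution, p="prov:used")],
}

# Answer a question by looking up its template and running the query.
answer = QUESTION_LIBRARY["Which execution generated {output}?"](GRAPH, "data:output1")
print(answer)  # ['exec:run1']
```

In the described framework, each template would instead carry a SPARQL query string executed against the knowledge graph, and a least-to-most prompting loop would chain several such retrievals (e.g., first find the execution, then its inputs) before composing the final explanation.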
