Devin De Silva
As artificial intelligence systems evolve into complex, multi-stage workflows orchestrating Large Language Models (LLMs) and dynamic code execution, their inherent non-determinism and opacity pose significant barriers to trust and accountability. To address this challenge, we present a novel hybrid framework for generating traceable, on-demand explanations for programmatic AI workflows. Our approach leverages ProvONE and the Explanation Ontology (EO) to capture high-fidelity execution traces within a structured knowledge graph. Departing from traditional open-ended retrieval, we exploit the deterministic nature of the ontology schema to programmatically generate a comprehensive library of synthetic questions mapped to executable SPARQL queries. These queries serve as tools for an LLM-based explainer, which employs a least-to-most prompting strategy to iteratively retrieve provenance data and construct context-aware answers. We demonstrate the efficacy of our framework by implementing it within ChatBS-NexGen, a dynamic LLM unit-testing system. Our results show that this approach effectively transforms opaque execution logs into interactive, factually grounded explanations, significantly enhancing the transparency of autonomous AI processes.
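
To illustrate the question-to-query mapping described above, the sketch below shows one way a synthetic question from the library could be paired with a parameterized SPARQL query over the provenance knowledge graph and exposed as a tool for the LLM explainer. This is a minimal, illustrative sketch rather than the project's actual code: the rdflib usage, the namespace IRIs, the PROV/ProvONE property names, the trace file name, and the example IRIs are all assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation): one synthetic
# question mapped to a parameterized SPARQL query, executed with rdflib over
# a ProvONE/EO-style provenance graph and returned as structured tool output.
from rdflib import Graph

# Assumed ProvONE namespace IRI.
PROVONE = "http://purl.dataone.org/provone/2015/01/15/ontology#"

# One entry from the synthetic question library: a natural-language template
# paired with an executable SPARQL template (property names are assumptions).
QUESTION_LIBRARY = {
    "which_program_generated_output": {
        "question": "Which program execution produced the output {output}?",
        "sparql": """
            PREFIX prov: <http://www.w3.org/ns/prov#>
            PREFIX provone: <{provone}>
            SELECT ?execution ?program WHERE {{
                <{output}> prov:wasGeneratedBy ?execution .
                ?execution prov:wasAssociatedWith ?program .
                ?program a provone:Program .
            }}
        """,
    }
}


def answer_tool(graph: Graph, question_id: str, **params) -> list[dict]:
    """Run the SPARQL query bound to a synthetic question; return result rows."""
    entry = QUESTION_LIBRARY[question_id]
    query = entry["sparql"].format(provone=PROVONE, **params)
    return [row.asdict() for row in graph.query(query)]


if __name__ == "__main__":
    g = Graph()
    g.parse("execution_trace.ttl")  # hypothetical serialized execution trace
    rows = answer_tool(
        g,
        "which_program_generated_output",
        output="http://example.org/trace/output_42",  # hypothetical entity IRI
    )
    print(rows)
```

In this framing, the LLM explainer never writes SPARQL itself; it selects a question from the pre-generated library, fills in the parameters, and grounds its answer in the rows returned from the provenance graph.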
Links:
- Final paper: https://drive.google.com/file/d/1Z6yqzmL3wWDnjr2r7BQ3bwg-0CAgkLjG/
- Final presentation (slides): https://docs.google.com/presentation/d/1F8-kcmQHkZSzdiS7e5aS-MQdx0jquwnH/
- Final presentation (video): https://youtu.be/9bZKSlGx2JU
- GitHub repository: https://github.com/DevinDeSilva/WorkflowExplorer.git