Creating, Interpreting, and Repurposing Visual Messages

The World Wide Web is a vast, diverse, and dynamic ecosystem of content authored and consumed with innumerable frequency. Although content may have an original purpose, it can and will be repurposed in new and unexpected ways. Research in the Semantic Web has led to the pragmatic adoption of several technologies that establish the groundwork for increased machine-to-machine interoperability. Nevertheless, the human factor remains – and is paramount – in both the original and semantic webs, and so is the task of transforming mechanistic content for human consumption and comprehension. Unfortunately, traditional transformation methods produce isolated artifacts incapable of inspection, elaboration, and extension. This paper presents a methodology for creating accountable visual artifacts – visual messages that preserve associations to their source content, allow traceability to their creation, and provide guidance for proper interpretation. The system is described by following two RDF triples through the originating visualization process, into application-specific formats, and back to an RDF-based visual transcription. We conclude by describing future work that will further increase machine accessibility, while maintaining human approachability, of visual messages on the web.


Associated Projects

The Inference Web is a Semantic Web-based knowledge provenance infrastructure that supports interoperable explanations of sources, assumptions, learned information, and answers as an enabler for trust.

The LOGD project investigates the role of Semantic Web technologies, especially Linked Data, in producing, enhancing, and utilizing government data published on and other websites. A large portion of the government data published on the Web is not necessarily ready for mashups. The Tetherless World Constellation (TWC) is now publishing over 8 billion RDF triples converted from hundreds of government-related datasets from and other sources (e.g.