Archive for June, 2009

What’s in

June 25th, 2009

A recent article by Tim Berners-Lee, “Putting Government Data online”, has drawn significant attention to the datasets published at the US government website. As Berners-Lee discusses the Semantic Web techniques that can be used to bring those data into RDF space (something we are now working on), we would like to share our initial investigation of the contents of these government datasets.


* Update: we have now published 5 billion triples from hundreds of datasets.

I. Translating the dataset catalog into RDF

The catalog of the datasets is published in CSV format. We converted it into RDF using simple CSV parsing, keeping the translation minimal: (i) properties are created directly from the column names; (ii) each table row is mapped to an instance of pmlp:Dataset; (iii) all non-header cells are mapped to literals – we do not create new URIs at this point. The output of our work is published on the TW website.
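The translation recipe above is small enough to sketch in a few lines of Python. This is a minimal illustration emitting N-Triples; the namespace URIs and per-row URIs are placeholders of our own (they are not fixed in the text), and literal escaping is omitted:

```python
import csv
import io

# Placeholder namespaces -- the actual URIs behind pmlp: and the minted
# properties are assumptions for illustration only.
PMLP = "http://example.org/pmlp#"
PROP = "http://example.org/catalog#"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def catalog_to_ntriples(csv_text):
    """Apply the three rules: (i) mint a property per column name,
    (ii) type each row as pmlp:Dataset, (iii) keep every cell a literal.
    Sketch only: literal values are not escaped."""
    lines = []
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text))):
        s = f"<http://example.org/catalog/dataset/{i}>"  # row URI (assumption)
        lines.append(f"{s} <{RDF_TYPE}> <{PMLP}Dataset> .")
        for column, value in row.items():
            prop = f"<{PROP}{column.strip().replace(' ', '_')}>"
            lines.append(f'{s} {prop} "{value}" .')
    return "\n".join(lines)

sample = "Title,Format\nToxics Release Inventory,CSV\nPrivate Foundations Study,XLS\n"
print(catalog_to_ntriples(sample))  # 6 triples: one rdf:type plus two properties per row
```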

(We are now starting on more integration work – extracting multiple objects from single tables, linking into the Linked Open Data cloud, etc. – and will publish a new version when that is done. The purpose of this first pass was simply to make the catalog more available to the RDF community.)

II. Browse and query the RDF graph

As an example, we can browse the dataset in Tabulator and then use a SPARQL web service to query it. For example, we can use a SPARQL query to list the datasets published in CSV format:

III. Observations on the RDF graph

Using this service we can answer some basic questions about the datasets:

1. How many datasets are published, and how many among them can be easily converted into RDF?

There are 332 datasets, which can be partitioned by type: raw data catalog (301) and tool catalog (31).

Not all of the datasets have a link to downloadable data: some offer only browsable data via their own websites, while others publish datasets in multiple formats. As of today, the online static files associated with the datasets are distributed as follows: 204 datasets offer a CSV dump, 10 offer an XML dump, and 21 offer an XLS dump.

2. How are the datasets categorized?

Category (number of datasets)
Geography and Environment: 227
Labor Force, Employment, and Earnings: 30
Social Insurance and Human Services: 30
Health and Nutrition: 11
Law Enforcement, Courts, and Prisons: 7
Population: 4
Other: 3
Prices: 3
Business Enterprise: 2
Education: 2
Energy and Utilities: 2
Federal Government Finances and Employment: 2
Income, Expenditures, Poverty, and Wealth: 2
Science and Technology: 2
Transportation: 2
Construction and Housing: 1
International Statistics: 1
National Security and Veterans Affairs: 1

3. What are some of the key items in the dataset?

4. What are the sources of the datasets?

The majority of the datasets are published by the EPA; they contain environmental data partitioned by US state across three individual years. The rest come from other government agencies – the distribution is as follows:

IV. Getting the datasets linked

Although the datasets are not explicitly linked, we see a number of opportunities for connecting them to one another (and into the Linked Open Data cloud):

  • A large percentage of the files have some sort of geo-tagging, so they can be linked to DBpedia or GeoNames (and then presented via map services).
  • Some datasets are subsets of other datasets; e.g. the EPA’s “2005 Toxics Release Inventory data for the state of Georgia” is a subset of “2005 Toxics Release Inventory National data file of all US States and Territories”, making for easier “internal” linking of the datasets.
  • A number of the datasets contain temporal information, e.g. the IRS’s “Tax Year 1992 Private Foundations Study” … “Tax Year 2005 Private Foundations Study”, which provides an opportunity for mashups using timelines and the like.
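The geo-linking opportunity in the first bullet can be sketched very simply: map a place name found in a record to the corresponding DBpedia resource URI (DBpedia resource names replace spaces with underscores) and emit an owl:sameAs triple. The local URI scheme here is hypothetical, and real linking would of course need name disambiguation:

```python
OWL_SAMEAS = "http://www.w3.org/2002/07/owl#sameAs"

def dbpedia_sameas(local_uri, place_name):
    """Emit one owl:sameAs N-Triple linking a local geo-tagged resource to
    the DBpedia resource named after the place (spaces become underscores)."""
    target = "http://dbpedia.org/resource/" + place_name.replace(" ", "_")
    return f"<{local_uri}> <{OWL_SAMEAS}> <{target}> ."

# A hypothetical state resource, as might come from an EPA state-by-state file:
print(dbpedia_sameas("http://example.org/geo/New_Mexico", "New Mexico"))
```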

V. Conclusions

We are committed to getting more of the data online soon (in RDF), and then investigating data integration and knowledge discovery. In order to get our datasets linked to the linked data cloud, we will use SPARQL for extracting entities and our Semantic MediaWiki as a platform to capture the owl:sameAs mappings. Scalable dataset publishing is also challenging, as some of these are very large datasets; e.g. the “2005-2007 American Community Survey Three-Year PUMS Population File” has a 1.1 GB zipped CSV file. Moreover, some datasets are not directly available in one file but via a web service. Our current plan is to produce RDF documents available for download soon, and to work on bringing more of these datasets into live, SPARQLable forms as we can.

Li Ding, Dominic DiFranzo and Jim Hendler


I will pay delicious $100 for hierarchical tagging

June 19th, 2009

Just saw Jim’s post “What is the Semantic Web really all about?”

I have been wondering about this problem too. What is the Semantic Web? Yesterday I asked a question, “Why do few (or no?) Web 2.0 sites provide hierarchical tagging?”, on LinkedIn and got some pretty good answers:

For your convenience, I attached my LinkedIn post at the end of this blog.

There are two things in the answers that drew my attention:
* Many do _not_ believe tags, or even hierarchical tags, are semantic; to them, “semantics” means at least RDF or triples;
* Some believe that even implementing a hierarchical tagging system is not easy, in engineering or social terms.

I think these two beliefs, among many other reasons, may explain in part why the “Semantic Web” is still far from a reality. The first is an overestimation of what “semantics” is: triples are one way to express semantics, but it is an open question whether they are _the_ way. The second is an underestimation of “Web” scale: realizing a knowledge system, even a conceptually “simple” one, on the Web can lead to serious scalability problems, both for machines (can you return sub-second responses for all queries?) and for people (in changing their way of thinking).

Here is what I believe about the “semantic web” (note the lack of capitalization). First, it is not necessarily “the Semantic Web” (just as there is no “the Mobile Web”) as defined by W3C standards or the layer-cake model. Semantics is a way of organizing things; RDF and OWL are some ways to express it, but other ways should be encouraged too and sometimes work better. Second, tools and services should be “web-ish”, something like a semanticized version of YouTube or Gmail; after all, “web users” are rarely bioinformaticians or masters of a Java-based ontology editor. Third, start deployment with very, very basic semantics like trees (yeah, I know some will protest) and sameAs, but do it in a very, very efficient way – if we can’t even come up with a Web-efficient tree reasoner, how realistic is it that we can come up with a Web-efficient RDF or OWL reasoner?

Now I’m prepared to dodge tomatoes :D

by Jie Bao


My original post on LinkedIn (reorganized a bit)

Why do few (or no?) Web 2.0 sites provide hierarchical tagging?

Gmail labels and delicious tags are flat, which is constantly troublesome for me. I have to add (unnecessarily) many tags even when they could easily be inferred. I haven’t found an alternative that lets me organize my tags in a tree or network. Is there a technical or marketing reason?

People have been talking about the semantic web for a while and are looking for a killer app. It’s apparent that hierarchical tagging is semantic, is in high demand, and is relatively easy to do. Why is there none on popular sites?

PS 1: Let me clarify some situations where hierarchical tagging would save me a lot of time: recently I have been reading a book by Qian Mu, a historian, and tagging my notes on delicious with the tag “qianmu”; I also want all those notes to be tagged with “history”, but I always have to add both “qianmu” and “history”.

Sometimes I want more than one tag to be inferred. For example, when I add “wuxu” (the year 1898), I want the tags “qing”, “china” and “reform” to be added too. You will find how troublesome it is to add all 4 tags together when you have about 10 notes on “wuxu”.

In another example, I want to share my tags in both Chinese and English. If I could define subclass relations between pairs of tags, each in a different language, I would not have to always add both tags.

Now I have about 1000 tags on delicious. I am really, really in desperate need of a hierarchy. I’m willing to pay delicious $100 for such a service.

PS 2: Further clarification: I don’t believe I need a tagging system that always requires me to pick terms from a tree, DAG, or network. I can still freely add tags. But I need some way to clean up my tags from time to time and organize them. It is just like how I clean up my “download” folder: put things into different folders, and if a folder gets too big, make subfolders.
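To show how little machinery the inference I am asking for needs, here is a minimal sketch; the tag DAG is made up to match my examples above:

```python
# A made-up tag DAG following the examples above: each tag implies broader tags.
IMPLIES = {
    "qianmu": {"history"},
    "wuxu": {"qing", "reform"},
    "qing": {"china"},
}

def expand(tags):
    """Return the given tags plus everything they transitively imply,
    so tagging a note "wuxu" also tags it "qing", "reform" and "china"."""
    result = set(tags)
    frontier = list(tags)
    while frontier:
        for implied in IMPLIES.get(frontier.pop(), ()):
            if implied not in result:
                result.add(implied)
                frontier.append(implied)
    return result

print(sorted(expand({"wuxu"})))  # ['china', 'qing', 'reform', 'wuxu']
```

The hard part, as the LinkedIn answers point out, is not this loop; it is doing it efficiently and usably at Web scale.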


What is the Semantic Web?

June 19th, 2009

The twittering of #semtech2009 got me pretty frustrated – it seems the “big O” Ontology story was way too prevalent, and while linked data had a good showing, the relationship between linking and ontologies was forgotten much of the time. It motivated me to write up some thoughts on my Nature blog site, in an entry entitled “What is the Semantic Web really all about?” – I look forward to your comments there or here…

Jim H
