Archive for December, 2008

Get a senior scientist blogging (my response)

December 26th, 2008

During some random Web surfing (something I don’t get nearly enough time to do these days), I ran into the Science Blogging Challenge (aka “get a senior scientist blogging”), and it got me thinking about how I got into blogging, and more recently how I got into twittering (which seems to fit my insane lifestyle better). I sent the following entry to the competition, nominating a few people who were instrumental in getting me blogging and, more recently, getting me to tweet.

Here’s what I said:

My motivation to start blogging actually came from a different senior scientist starting his blog. In January 2006, one of my colleagues started a blog, and it got some big notice; since the blogger was Tim Berners-Lee, that made some sense. My first real blog post (I had contributed blog comments and done an occasional “guest shot” on other people’s blogs) was called “Time to get a blog” and mentions the influence of Tim’s blogging. I cannot tell you who convinced Tim to blog, but I know that Danny Weitzner was one of the influences.
However, while Tim’s starting to blog is what finally got me to do it, the person who really got me blogging is Jennifer Golbeck (who blogs in a bunch of different places). She is the one who convinced me to get my act together and walk the walk if I was going to claim to be a Professor of All Things Web, as I now try to be. She’s also the one who got me signed up on Orkut, Facebook (beta), and a bunch of other social networking sites long before they became popular, and if I’m not mistaken she’s probably the person who got me my Gmail invitation way back when. So Jen should definitely be considered in the “I got a senior scientist to blog” category.
Meanwhile, the propagation continues: Peter Fox, who attended this past Sci Foo and is an occasional blogger, has joined my lab, and he and I are trying to convince several of our colleagues, especially Deborah McGuinness, to get blogging.
I’d also like to point out that while blogging continues to be interesting to look at as a mechanism for propagating science, I’m finding these days that microblogging (I’m jahendler on Twitter) has been gaining popularity, especially among the social scientists, and it may be an even better fit for some of the busy senior scientists you’re trying to reach (if they can just learn to use the messaging on their cell phones). I credit “eingang” (Michelle Hoyle) for getting me twittering, and I notice that a quick message from my phone during a lecture or seminar is a good way to share a thought or a pointer (although I find it also is fun to add personal observations and such, since it humanizes the scientists who use it).
So anyway, here are three entries for the contest:
Danny Weitzner for helping to get Tim Berners-Lee blogging
Jen Golbeck for getting me blogging
Michelle Hoyle for getting me micro-blogging
Jim H.


What sweet spot?

December 20th, 2008

I wanted to leave a comment on the Clark & Parsia blog in response to Kendall’s entry entitled “Our Approach to Modeling, Fidelity, and KR.” However, to leave such a comment I would have to log in, and I have way too many accounts right now, so I thought I’d write my response as a new entry (and by the time I finished, it was too long to be just a comment anyway).

I don’t disagree with the overall “spectrum” that Kendall offers, but his point is that they have picked a point in the middle, and since they are in the middle they can model more than the scalers and scale more than the modelers. The problem is that the middle is very, very wide, and thus there are many places in this space where such a claim could be made. So, for example, a large triple store that can do a small amount of inferencing, say Garlik’s JXT as one example, would scale even better and could still claim to do more modeling than a pure triple store.
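To make concrete what “a small amount of inferencing” over a triple store can look like, here is a toy sketch in Python (all names and data are invented for illustration; this is not how Garlik’s or anyone else’s actual engine works): a forward-chaining closure over just two RDFS rules, subclass transitivity (rdfs11) and type inheritance (rdfs9). Even this tiny amount of reasoning answers queries a pure triple store cannot, while staying cheap enough to scale.

```python
TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def rdfs_closure(triples):
    """Saturate a set of (subject, predicate, object) triples under two
    RDFS rules: subClassOf transitivity and type inheritance."""
    closed = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (s, p, o) in closed:
            if p == SUBCLASS:
                # rdfs11: if s sub o and o sub o2, then s sub o2
                for (s2, p2, o2) in closed:
                    if p2 == SUBCLASS and s2 == o:
                        new.add((s, SUBCLASS, o2))
            elif p == TYPE:
                # rdfs9: instances inherit types up the class hierarchy
                for (s2, p2, o2) in closed:
                    if p2 == SUBCLASS and s2 == o:
                        new.add((s, TYPE, o2))
        if not new <= closed:
            closed |= new
            changed = True
    return closed

# Hypothetical data: a store that only knows direct facts...
triples = {
    ("ex:store1", TYPE, "ex:TripleStore"),
    ("ex:TripleStore", SUBCLASS, "ex:Database"),
    ("ex:Database", SUBCLASS, "ex:Software"),
}
# ...can, after closure, answer "what is ex:store1?" at every level.
inferred = rdfs_closure(triples)
assert ("ex:store1", TYPE, "ex:Software") in inferred
```

The design point is exactly the tradeoff under discussion: these two rules always terminate in polynomial time, so the store stays scalable, but the price is that anything requiring fuller DL expressivity (disjointness, cardinality, and so on) is simply not derivable.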

On the other end, there is the idea that decidability is somehow a sweet spot (despite the known exponential behaviors of DLs) as compared with a more expressively modeled but perhaps heuristic (or incomplete) logic. A system of the latter kind could claim both to have more expressivity than a DL system and to be more scalable; it just couldn’t guarantee to have all the answers. In fact, right now the systems that probably score highest on modeling power vs. scalability fall in this camp. The thing is, their answer sets would be somewhat different.

In my opinion, the real problem with this blog entry is the idea that there is one sweet spot (Kendall called it “the” sweet spot), which implies that there is a general best answer. This is the point I cannot really live with, and one I have spent much of my recent career trying to debunk. Depending on what you are trying to do, there are many possible sweet spots. There is a set of problems for which what C&P are doing is exactly the right thing, but there are also many for which it is not.

And that is the key thing: we in the field have to get much better at understanding where the tradeoffs are and what various kinds of applications require. Google taught us years ago that sometimes finding a good answer quickly can be an incredibly powerful thing. Expert systems taught us that for many applications complex modeling is too expensive. Yet there are systems running in real applications that use expert-level modeling, because sometimes it is the thing you need despite the cost (and the ROI is high enough).

The other problem I have with the argument actually has nothing to do with the issues of logic and such. The traditional database community has for a long time made a similar claim: that there is a particular place in the expressivity/scalability space that is “the” correct place. They have spent years claiming that their particular sweet spot is the only interesting one; it has certainly proven to be a very important one, achieving far more commercial success than the DL stuff. However, lately we’ve been learning that there exist problems where we need more expressivity, and thus other things have to be explored. The people in the DB community who’ve started looking at graph stores are, indeed, seeing that there are some applications, both in enterprises and especially on the Web, where a small amount of added expressivity makes a huge difference. (Anyone who has witnessed my debates with Ullman has certainly heard this argued…)

Anyway, when I gave the first talk at the DARPA Agent Markup Language (DAML) program, lo these many years ago, I showed a slide with the word “THE” under a kill ring and stated that on the Web there is no “the”. Whether you are in the database community, among the adherents of DL, one of the people who cite my work, or anyone else: remember that you are exploring one sweet spot that can be important to some set of applications, but there are many others, and we all win when we remember that.

Cheers – Jim Hendler

p.s. Clearly this is not meant in any way as an anti-C&P comment; I was just riffing off of what Kendall wrote.


URL daily (Radical translation)

December 11th, 2008


“Radical translation is a term invented by American philosopher W. V. O. Quine to describe the situation in which a linguist is attempting to translate a completely unknown language, which is unrelated to his own, and is therefore forced to rely solely on the observed behavior of its speakers in relation to their environment.”

“Quine tells a story (Quine 1960) to illustrate his point, in which an explorer is trying to puzzle out the meaning of the word ‘gavagai’. He observes that the word is used in the presence of rabbits, but is unable to determine whether it means ‘undetached rabbit part’, ‘fusion of all rabbits’, ‘temporal stage of a rabbit’, or ‘the universal rabbithood’.”

“Radical translation” carries a criticism of strong AI similar to John Searle’s Chinese Room argument

“…(Searle 1980), which attempts to show that a symbol-processing machine like a computer can never be properly described as having a “mind” or “understanding“, regardless of how intelligently it may behave.”

While language translation is by itself very interesting work, I wonder how Chinese was translated into English for the first time. Here are some examples:

1. Proper names of real-world entities, such as elephant (象), can be easily translated.


2. Mythical figures, such as the dragon (龙), carry different meanings in different cultures.


3. Non-accessible things, such as the philosophical term Tao (道), cause more difficulties because they do not have a clear-cut definition even in their native language.

4. Another example is the term “china”, which is also used to refer to high-quality porcelain or ceramic ware, originally made in China. This sense is a good example of radical translation, where Quine’s rabbit is replaced by porcelain and “gavagai” is replaced by “china”.


The above philosophical arguments and real-world translation examples lead to the following thoughts on social norms:

1. Meaning is closer to Quine’s notion of ontological commitment, where the definition is socially agreed upon.

2. While understanding and translation may be done by one person, the correctness of these acts is evaluated by social peers.

3. It is worth reading Searle’s The Construction of Social Reality (1995) (Wikipedia provides a nice overview).

Li Ding, 2008-12-11
