Archive for August, 2008

We’ve come a long way, maybe… (preprint for an editorial for IEEE Intelligent Systems)

August 14th, 2008

The various 50th anniversary events for AI in America that happened a couple of years ago threatened to make me think about the fact that, shall we say, I'm no longer eligible for young researcher awards. Luckily, I kept very busy running events and special issues, so I managed to avoid thinking about it. However, some recent events have caused me to realize that I've been doing AI for a reasonably long time, and to reflect on some aspects of the progress that has been made in the more than thirty years since my first publication in the field.

Recent events

The first of these events was a recent panel entitled "Artificial Intelligence Theory and Practice: Hard Challenges and Opportunities Ahead," chaired by Eric Horvitz, President of the Association for the Advancement of Artificial Intelligence, at the Microsoft Faculty Summit. There were seven of us on the panel, representing a reasonable range of AI fields, and I think it was a pretty interesting discussion (at the time of this writing the video of the panel has not yet been posted to the Web, but it should be by the time you're reading this, so search for it; I think you'll enjoy watching it). Scarily, at some point I realized that I was the person on the panel who had been doing AI the longest (although I edged out a couple of others, whose hair was as white as mine, only because I started in my freshman year of college). I also realized that I've been working on more-or-less the same problems for my whole career, but strangely, at different times I've been seen as anything from a mainstream AI researcher to, more recently, someone doing research outside the mainstream of the field. That's one theme I'll return to.

A second theme arises from the use of the term "AI-Complete problem" on the panel, with several panelists giving examples of what they thought such problems were. The odd part is that (that age thing again) I remember when the term was first introduced, and people really meant it to denote a problem on which, to demonstrate significant capabilities, you would need to solve the whole range of problems in AI, ranging from vision and robotics to language and planning. Nowadays, it seems to me, the term is used to mean "very hard problem," which is something very different. I'll return to this as well.

Another event was a meeting of the investigators on one of the large DARPA-funded projects that I'm involved in. Those outside the US, or lucky enough to be funded from other sources, may not know that in recent years DARPA has pushed for AI researchers to form teams with industrial partners and try to solve hard "go/no-go" problems. In the case of this particular project, in the second year we had to run our system on a set of problems, run human subjects on the same set of problems, and show that our system outperformed the humans. The problems were in a relatively complicated domain, and DARPA set some pretty difficult ground rules, the main one being that we couldn't build in a lot of domain knowledge to solve the problem. Rather, we had to use a set of learning technologies (mostly related to explanation-based learning) to accomplish the task. Amazingly, we passed.

Tying it together

So what is it that brings these three themes together? My answer is that all three of them relate to a narrowing of the goals of the AI field over the past decade or so.

To start with, a large part of my career has been focused on scaling knowledge-based inferencing in one way or another. The reason is not that I think this is critically important for applied AI, although I do, but rather that one of the things that is clear to me when I interact with my computer is that we each have very different kinds of memory. My computer never really forgets anything I tell it (well, there was that hard disk crash a couple of years back, but that isn't what I mean), while my memory is pretty porous. However, with the exception of some kinds of purely statistical inference, my computer also doesn't seem able to put things together the way I would. I may forget the details of some particular restaurant I ate in, or the name of my middle school math teacher, but I sure can integrate a lot of other information about restaurants and middle school math in ways that my computer still can't. This is similar to the theme of my earlier letter "Computers play chess, humans play go" [[insert ref]], but in this case I'm emphasizing that we humans do something really amazing with our memories that computer models still don't come near. We also seem to be the entities with the most sophisticated symbolic reasoning capabilities that we know of, one reason I'm as yet unconvinced that all the success with probabilistic models is getting us nearer to understanding human intelligence.

And speaking of human intelligence, I think that takes us to the second theme. The original concept of "AI-Complete" included the idea that solving such problems would teach us something about intelligence writ large. That is, while there were always engineering goals in AI, and one of the reasons this magazine started (back when it was called IEEE Expert) was to reflect that, there was a general feeling in the field that the goal of AI included an understanding of intelligence. Not necessarily human intelligence in the sense of cognitive modeling, but just as we know a lot more about how birds fly from having learned the aerodynamics of making planes fly faster, looking at the differences between computers and humans solving problems situated in the real world and requiring a lot of knowledge seemed like a way to learn more about humans and thought. I also remember hearing at early AI conferences, though rarely if ever any more, that looking at problems humans were much better at than computers was a good way to get inspiration as to which AI problems to attack: the challenge made them compelling, and steps toward a solution could, again, help us understand more about what intelligence was.

And what about the success of the DARPA project? How could that tie into the narrowing of AI? I guess my reaction came from the fact that, of all the researchers in the room, I was the only one who was really surprised that we were able to outperform the humans on this test. The point is not that the others were so blasé, but rather that in today's AI milieu we are used to seeing AI programs outperform humans, whether at searching for information, predicting traffic patterns, beating the world's chess champion, or solving narrowly defined problems. My surprise in this case came from the fact that the problem we were solving was actually pretty hard, but not artificially so due to data overload, a bad interface, or the like.

In the problem we attacked, there are many plausible solutions but only a few good ones, and the restriction was that there was only a small amount of training data (in fact, one expert trace) from which the computer could learn preferences. This was the kind of problem we once said we'd be able to get AI to do "someday," and I was very glad to see that, at least for this particular case (and with an investment of a very large amount of time and money), that day had come. However, I was also a bit chagrined to realize how long it's been since the last time I had that sort of thrill. While I've joined others in the field of AI in celebrating our successes, many of which I have pointed out in these pages over the past few years, this experience made me realize how rarely I feel the thrill I felt in the early days of my AI career when, I have to admit, it was a pretty amazing accomplishment when we got the computer to do anything that seemed "smart."

The consequences

So why do these things bother me? After all, this seeming change in our direction has enabled AI to accomplish significant engineering advances and to become a stronger and better-understood technology. In fact, on the panel I mentioned earlier, one of my younger colleagues expressed how proud she was that when she first took AI it seemed pretty ad hoc, and now when she teaches it the course is full of much stronger theoretical material. My rejoinder, grumpy old man that I've become, was that when I first took AI, I was hooked for life on the first day, when Roger Schank, who was teaching the course, said something to the effect that almost nothing he would teach us was proven to be correct, and that any one of us was just as likely to come up with a key insight as the people whose work we'd be studying. I found this exhilarating; my colleague seems to have found it a weakness.

But here's the thing! Despite all the major AI successes of the past decade, and the great strides we have made, what Roger said is still true! When it comes to really understanding the amazing symbolic processor that is the human mind, we still know very little. While I don't mind that we have more techniques to teach our students, I think it is important that we don't become enthralled by what Marvin Minsky has referred to as "physics envy." We should admit, gloriously and deliberately, that when it comes to understanding intelligence, our field is still in its very early days. The challenge remains, and it is one of the greatest intellectual challenges of our, nay of all, times: to understand thought, consciousness and intelligence. The best and brightest students go where the most exciting problems are, and we've got one of the all-time winners! Let's not forget that fact.

So as my days as Editor-in-Chief wane, and you read this, my penultimate letter, I hope you will remember that although the focus of this magazine includes "systems," with an emphasis on bringing AI theories into practice, it also includes "intelligent," and that's something that we as a field mustn't ignore. We've made a lot of progress, but the journey is far from over, and the original goal is still far over the horizon.

Happy Sailing,

Jim Hendler


Cuil, Semantic Search

August 13th, 2008

Last week, Cuil.com caught my eye. It made a very good impression on me in just 5 seconds (BTW, 10 seconds is the survival maximum for any website with me). First I tried, as many people may do, my name. It didn't disappoint me, hitting my pages quite precisely. I also love the grid-based layout. A few minutes later, I found its "Explore by Category" option. It looks like Cuil has some sort of ontology hierarchy for web pages.

A few Google results reveal that Cuil may use some clustering technique to build such hierarchies. It is interesting to ask whether such hierarchies will indeed improve the search experience. When I search for "Semantic Web", Cuil recommends that I browse "Ontology (computer Science)" and some of its subcategories; it also suggests that I look at James Hendler's homepage. I would say that this will be very useful for exploration.
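For illustration, here is a minimal sketch, in Python, of the general kind of technique that could induce such a hierarchy: agglomerative clustering over tf-idf vectors of page text. The page texts are invented, and of course Cuil's actual pipeline is not public.

    # A minimal sketch, NOT Cuil's actual (unpublished) pipeline:
    # agglomerative clustering over tf-idf vectors of invented page texts.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from scipy.cluster.hierarchy import linkage, fcluster

    pages = [
        "semantic web ontology rdf owl reasoning",
        "ontology computer science knowledge representation",
        "restaurant reviews dining food guide",
        "food recipes cooking dinner",
    ]
    vectors = TfidfVectorizer().fit_transform(pages).toarray()

    # Each row of `tree` records one merge of two clusters; cutting the
    # tree at different heights yields nested groups, i.e. an induced
    # category hierarchy.
    tree = linkage(vectors, method="average", metric="cosine")
    print(fcluster(tree, t=2, criterion="maxclust"))  # e.g. [1 1 2 2]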

Building metadata using machine learning is a cool thing. On the other hand, I believe that human intervention is also critical. When Wikipedia knowledge is used in clustering, I would expect some gain in recall or precision. As "Ontology (computer Science)" is a Wikipedia page, I guess that Cuil may already use Wikipedia information in its results.

Also, don't forget the "network effect". For a while now I have maintained a prefix-based, syntactic Gmail label hierarchy. I would really like to share part of that hierarchy with my friends, so that when I send a mail labeled "party", they don't need to relabel it themselves. If millions of users could share their small hierarchies (not only on Gmail, but also on Flickr, YouTube, Twine, etc.), each connected somehow to the hierarchies of friends and family, eventually we would have a very large network of ontologies that might improve search far more than we can now. Just a random thought; a sketch of the idea follows.
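To make the idea concrete, here's a minimal sketch in Python of how prefix-based labels (e.g. "social/party/2008", all names invented; nothing here reflects an actual Gmail feature) form trees, and how two users' trees could be merged into a shared hierarchy.

    # A minimal sketch: prefix-based labels as trees, plus a recursive
    # merge of two users' hierarchies. All label names are invented.

    def build_tree(labels):
        """Turn prefix-based labels into a nested dict of categories."""
        tree = {}
        for label in labels:
            node = tree
            for part in label.split("/"):
                node = node.setdefault(part, {})
        return tree

    def merge(a, b):
        """Recursively union two hierarchies."""
        merged = dict(a)
        for key, subtree in b.items():
            merged[key] = merge(merged.get(key, {}), subtree)
        return merged

    mine = build_tree(["social/party", "social/family", "work/papers"])
    friend = build_tree(["social/party/2008", "travel/photos"])
    print(merge(mine, friend))
    # {'social': {'party': {'2008': {}}, 'family': {}},
    #  'work': {'papers': {}}, 'travel': {'photos': {}}}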

P.S. I found one interesting thing: Cuil caches my wiki page at Iowa State University. However, that page has been offline since no later than May 2008, while Cuil officially launched only on July 28, 2008. It seems its crawler has been alive for a while.

Jie Bao


Captcha, Turing Test, and Semantic Web

August 6th, 2008

On the web, nobody knows you are a dog… or a human. That's why there are programs on the web to identify someone as a human (as opposed to bots, or dogs, or cats, or…). The most popular are CAPTCHAs. A CAPTCHA is based on a simple assumption: no OCR agent so far can be as smart as a human is. To me, it looks like a super-simplified Turing test: an AI program has "real" intelligence like a human's if, when both are asked the same question, another human can't tell which is the AI and which is the human.

I can't help wondering what test we will use to identify a human on the web one day, when OCR agents get smart enough to pass the CAPTCHA test (I strongly believe that day is not far away). Math? That will be easy for a good program. Scrabble? Maybe, but not that secure. Asking for a Shakespeare sonnet? Or the end year of World War II? That looks more likely to succeed. But… there are two issues.

First, an agent may have access to a knowledge base. With projects like DBpedia, human knowledge is being turned into knowledge bases at a speed never seen before in history. A query such as "the end year of World War II" may be answered by a semantic web agent fairly quickly; a minimal sketch of such a query is below. I can imagine that someday we will have to design increasingly hard questions (about art, say) to identify a human and fight spamming.
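Here is a minimal sketch, in Python, of how an agent might ask DBpedia's public SPARQL endpoint that very question. The property name dbo:endDate is my assumption for illustration; the actual DBpedia vocabulary may model the date under a different predicate.

    # A minimal sketch: querying DBpedia's SPARQL endpoint for the end
    # date of World War II. The predicate dbo:endDate is an assumption;
    # DBpedia may record this fact differently.
    import json
    import urllib.parse
    import urllib.request

    query = """
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?end WHERE { dbr:World_War_II dbo:endDate ?end }
    """
    url = "https://dbpedia.org/sparql?" + urllib.parse.urlencode(
        {"query": query, "format": "application/sparql-results+json"})
    with urllib.request.urlopen(url) as resp:
        results = json.load(resp)
    for binding in results["results"]["bindings"]:
        print(binding["end"]["value"])  # ideally something like 1945-09-02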

The other issue is that a human may have NO access to a knowledge base. Many, many people in the world do not know "the end year of World War II", even if they are knowledgeable about other things. They may not even know where to find such knowledge. Also, they can get bored when repeatedly asked such CAPTCHA questions and quit; technically, that means they failed the test and thus are not "human". When CAPTCHAs become increasingly hard (asking about art, say), more and more people may fail for one reason or another (including boredom). That will also lead to the failure of the identification system.

Will the semantic web help spammers by making agents smarter? :) Maybe. Let's wait and see.

Jie Bao
