
We’ve come a long way, maybe… (preprint for an editorial for IEEE Intelligent Systems)

August 14th, 2008

The various 50th anniversary events for AI in America that happened a couple of years ago threatened to make me think about the fact that, shall we say, I’m no longer eligible for young researcher awards. Luckily, I kept very busy running events and special issues, so I managed to avoid thinking about it. However, some recent events have caused me to realize that I’ve been doing AI for a reasonably long time, and to reflect on some aspects of the progress that has been made in the more than thirty years since my first publication in the field.

Recent events

The first of these events was a recent panel entitled “Artificial Intelligence Theory and Practice: Hard Challenges and Opportunities Ahead,” chaired by Eric Horvitz, President of the Association for the Advancement of Artificial Intelligence, at the Microsoft Faculty Summit. There were seven of us on the panel, representing a reasonable range of AI fields, and I think it was a pretty interesting discussion (at the time of this writing the video of the panel has not yet been posted to the Web, but it should be by the time you’re reading this, so search for it – I think you’ll enjoy watching it). Scarily, at some point I realized that I was the person on the panel who’d been doing AI the longest (although I edged out a couple of others, whose hair was as white as mine, only because I started in my freshman year of college). I also realized that I’ve been working on more-or-less the same problems for my whole career, but strangely, at different times I’ve been seen as anything from a mainstream AI researcher to, more recently, someone doing research outside the mainstream of the field. That’s one theme I’ll return to.

A second theme arises from the use of the term “AI-Complete Problem” on the panel, with several people giving examples of what they thought such problems were. The odd part is that (that age thing again) I remember when the term was first introduced, and people really meant it to denote a problem on which, to demonstrate significant capabilities, you would need to solve the whole range of problems in AI – ranging from vision and robotics to language and planning. On the panel, however, it seemed to me that the term was being used to mean “very hard problem,” which is something very different. I’ll return to this as well.

Another event was a meeting of the investigators from one of the large DARPA-funded projects that I’m involved in. Those outside the US, or lucky enough to be funded from other sources, may not know that in recent years DARPA has pushed for AI researchers to form teams with industrial partners and try to solve hard “go/no-go” problems. In the case of this particular project, in the second year we had to run our system on a set of problems, run human subjects on the same set of problems, and show that our system outperformed the humans. The problems were in a relatively complicated domain, and DARPA set some pretty difficult ground rules – the main one being that we couldn’t build in a lot of domain knowledge to solve the problem. Rather, we had to use a set of learning technologies (mostly related to explanation-based learning) to accomplish the task. Amazingly, we passed.

Tying it together

So what is it that brings these three themes together? My answer is that all three of them relate to a narrowing of the goals of the AI field over the past decade or so.

To start with, a large part of my work throughout my career has been focused on scaling knowledge-based inferencing in one way or another. The reason is not that I think this is critically important for applied AI (although I do), but rather that one of the things that is clear to me when I interact with my computer is that we each have very different kinds of memory. My computer never really forgets anything I tell it (well, there was that hard disk crash a couple of years back, but that isn’t what I mean), while my memory is pretty porous. However, with the exception of some kinds of purely statistical inference, my computer also doesn’t seem able to put things together the way I would. I may forget the details of some particular restaurant I ate in, or the name of my middle school math teacher, but I sure can integrate a lot of other information about restaurants and middle school math in ways that my computer still can’t. This is similar to the theme of my earlier letter “Computers play chess, humans play go” [[insert ref]], but in this case I’m emphasizing that we humans do something really amazing with our memories that computer models still don’t come near. We also seem to be the entities with the most sophisticated symbolic reasoning capabilities that we know of, one reason I’m as yet unconvinced that all the success with probabilistic models is getting us nearer to understanding human intelligence.

And speaking of human intelligence, I think that takes us to the second theme. The original concept of “AI-Complete” included the idea that solving the problems would teach us something about intelligence writ large. That is, while there were always engineering goals in AI, and one of the reasons this magazine started (back when it was called IEEE Expert) was to reflect that, there tended to be a general feeling in the field that the goal of AI included an understanding of intelligence. Not necessarily human intelligence in the sense of cognitive modeling; but just as we know a lot more about how birds fly from having learned the aerodynamics of making planes fly faster, looking at the difference between computers and humans solving problems situated in the real world and needing a lot of knowledge seemed like a way to learn more about humans and thought. I also remember something I used to hear at early AI conferences, but rarely if ever hear any more: that looking at problems that humans were much better at than computers was a good way to get inspiration as to what AI problems to attack – the challenge made them compelling, and steps toward solution could, again, help us understand more about what intelligence was.

And what about the success of the DARPA project? How could that tie into the narrowing of AI? I guess my reaction came from the fact that, of all the researchers in the room, I was the only one who was really surprised that we were able to outperform the humans on this test. The point is not that the others were so blasé, but rather that in today’s AI milieu we are used to seeing AI programs outperform humans, whether at searching for information, predicting traffic patterns, beating the world’s chess champion, or solving narrowly defined problems. My surprise in this case came from the fact that the problem we were solving was actually pretty hard – and not artificially so due to data overload, a bad interface, or the like.

In the problem we attacked, there were many plausible solutions but only a few good ones, and the restriction was that there was only a small amount of training data (in fact, one expert trace) from which the computer could learn preferences. This was the kind of problem that we once said we’d be able to get AI to do “someday,” and I was very glad to see that, at least for this particular case (and with an investment of a very large amount of time and money), that day had come. However, I was also a bit chagrined to realize how long it’s been since the last time I had that sort of thrill. While I’ve joined others in the field of AI in celebrating our successes, many of which I have pointed out in these pages over the past few years, this experience made me realize how rare it is that I feel the thrill I felt in the early days of my AI career – when, I have to admit, it was a pretty amazing accomplishment to get the computer to do anything that seemed “smart.”

The consequences

So why do these things bother me? After all, this seeming change in our direction has enabled AI to accomplish significant engineering advances and to become a stronger and better-understood technology. In fact, on the panel I mentioned earlier, one of my younger colleagues expressed how proud she was that when she first took AI the course seemed pretty ad hoc, and now when she teaches it the course is full of much stronger theoretical material. My rejoinder, grumpy old man that I’ve become, was that when I first took AI, I was hooked for life on the first day, when Roger Schank, who was teaching the course, said something to the effect that almost nothing he would teach us had been proven correct, and that any one of us was just as likely to come up with a key insight as the people whose work we’d be studying. I found this exhilarating; my colleague seems to have found it a weakness.

But here’s the thing! Despite all the major AI successes of the past decade, and the great strides we have made, what Roger said is still true! When it comes to really understanding the amazing symbolic processor that is the human mind, we still know very little. While I don’t mind that we have more techniques to teach our students, I think it is important that we don’t become enthralled by what Marvin Minsky has referred to as “physics envy.” We should admit, gloriously and deliberately, that when it comes to understanding intelligence our field is still in its very early days. The challenge remains, and it is one of the greatest intellectual challenges of our, nay of all, times – to understand thought, consciousness and intelligence. The best and brightest students go where the most exciting problems are, and we’ve got one of the all-time winners! Let’s not forget that fact.

So as my days as Editor-in-Chief wane, and you read this, my penultimate letter, I hope you will remember that although the focus of this magazine includes “systems,” with an emphasis on bringing AI theories into practice, it also includes “intelligent,” and that’s something that we as a field mustn’t ignore. We’ve made a lot of progress, but the journey is far from over, and the original goal is still far over the horizon.

Happy Sailing,

Jim Hendler
