Agent of Change

Pattie Maes believes software agents are ready for prime time.

Pattie Maes (pattie@media.mit.edu) received a PhD in artificial intelligence from the University of Brussels, then moved to MIT to work on autonomous robots with Rod Brooks, a pioneer in making machines think independently. But Maes grew frustrated with the frailty of robotic hardware and redirected her expertise in AI toward software. At the Media Lab, she is creating software agents, which she hopes will reach everyday users in the near future. Over the last few years, she has led research on several working prototype systems, including a mail reader, a meeting scheduler, and a Usenet news-filtering program. Scott Berkun spoke with the MIT professor about her agent applications and why agents have been difficult to build using traditional AI techniques.

Wired: I've read about attempts to make agent-based systems for more than 10 years. Why is it so difficult?

Maes:

I don't think it's hard: I think people have taken the wrong approach, especially in the early days of artificial intelligence. People were very self-confident; they were convinced AI would be the solution to many problems. They put forward a goal that was ambitious, and that I believe we may never achieve: to build agents that are very intelligent, have common-sense knowledge, and understand why people do things. AI researchers have been trying to do this for 15 or 20 years, and haven't seen significant results. The idea of agents really isn't new. There have been people working on agents all along - they just haven't produced many results yet.

So, how are the approaches you're using different from those of the past?

We have a less ambitious target. We don't try to build agents that can do everything or are omniscient. We try to build agents that help with the more repetitive, predictable tasks and behaviors.

So, there would be a specific agent for each specific task?

Right - that's what we've been building so far. The system learns about its user's habits, interests, and behaviors with respect to that task. It can detect patterns and then offer to automate them on behalf of the user. Recently, we have augmented that approach with collaboration - agents can share the knowledge they have learned about their respective users, which is helpful to people who work in groups and share habits or interests. So those are the techniques we've been exploring: observing user behavior, detecting regularities, watching for correlations among users, and exploiting them.
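
A minimal sketch, in Python, of the observe-and-automate loop Maes describes. The class, method names, and confidence threshold are illustrative assumptions, not the Media Lab's actual code:

    from collections import defaultdict

    class TaskAgent:
        # Watches a user, detects regularities, and offers to automate them.

        def __init__(self, confidence_threshold=0.8):
            # How sure the agent must be before offering to act (set by the user).
            self.confidence_threshold = confidence_threshold
            # situation -> {action: count}, built up by observing the user.
            self.memory = defaultdict(lambda: defaultdict(int))

        def observe(self, situation, user_action):
            # Record what the user actually did in this situation.
            self.memory[situation][user_action] += 1

        def suggest(self, situation):
            # Offer the most frequent action once the pattern is strong enough.
            actions = self.memory.get(situation)
            if not actions:
                return None
            total = sum(actions.values())
            action, count = max(actions.items(), key=lambda kv: kv[1])
            confidence = count / total
            if confidence >= self.confidence_threshold:
                return action, confidence  # the user still decides whether to accept
            return None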

How do the users maintain control with these systems?

We think it's important to keep the users in control, or at least always give them the impression they are in control. In all of the systems we build, the users decide whether to give the agent autonomous control over each activity. So it's the users who decide whether the agent is allowed to act on their behalf, and how confident the agent has to be before it is allowed to do so. Users can also instruct agents, giving them rules for special situations. You can tell the system whether a rule is soft or hard - soft meaning it is accepted as a default that can be overridden by what the agent learns, hard meaning the agent can never override it.
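
One way to picture the soft/hard distinction, again as a hedged Python sketch rather than anything taken from Maes' systems: a hard rule always wins, while a soft rule only serves as a default until the agent has learned something better.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        situation: str
        action: str
        hard: bool  # a hard rule can never be overridden by what the agent learns

    def resolve(rules, learned_action, situation):
        # Hard rules win outright; soft rules are defaults the agent may override.
        for rule in rules:
            if rule.situation == situation:
                if rule.hard:
                    return rule.action
                return learned_action or rule.action
        return learned_action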

How do you see the Internet affecting your work?

The Internet is part of the motivation for agents - it's going to be impossible, if it isn't already, for people to deal with the complexity of the online world. I'm convinced that the only solution is to have agents that help us manage the complexity of information. I don't think designing better interfaces is going to do it. There will be so many different things going on, so much new information and software becoming available, we will need agents that are our alter egos; they will know what we are interested in, and monitor databases and parts of networks.

It won't be how great your software is, it will be how great your agent is.

I'm convinced there will be great pieces of software, but you'll need an agent to help you find them.

I see a few problems with the idea of agents: One is that they are never more than 90 percent accurate. Another is that they take a significant amount of time to learn my behavior.

I agree that we will never get 100 percent accuracy - agents will always make mistakes. But whenever you delegate to someone - be it human or program - you give up some accuracy. If you give a task to someone else, it will never be done quite the way you want. Delegation is the only way to cope with how much work you have. If you had an infinite amount of time, you wouldn't need to delegate, but no one has that kind of time. For example, I never had time to read newsgroups or find the ones I wanted, but the newsreader agent we built suggests articles to me, which finally gives me time to read them. Also, you have to be careful which tasks you delegate - if the cost of a mistake is high, don't let someone else do it. But many tasks are low-risk. If my newsreader agent gives me an article I don't want, or forgets to give me one I do want, it has already done more than I could do without it. You just have to be aware of the cost of a mistake for a particular task and adjust the agent's autonomy accordingly.

We think the learning time is not necessarily a negative feature. Users will have less difficulty accepting agents if the agents gradually earn their trust. Trust has to be earned, and that always takes time.

We did increase the learning rate once we explored having agents collaborate. We found that agents were independently learning the same things - for instance, that messages from mailing lists or newsgroups have a lower priority than personal mail. With collaboration, agents can start from shared libraries of experience.
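
A rough Python illustration of how a new agent might bootstrap from shared libraries of experience; the data layout and the merge_experience() helper are assumptions made for the example, not the actual collaboration protocol:

    def merge_experience(local, shared_libraries):
        # Each memory maps a message category to accumulated priority votes,
        # e.g. {"mailing-list": {"low": 40}, "personal": {"high": 55}}.
        merged = {category: dict(votes) for category, votes in local.items()}
        for peer in shared_libraries:
            for category, votes in peer.items():
                bucket = merged.setdefault(category, {})
                for priority, count in votes.items():
                    bucket[priority] = bucket.get(priority, 0) + count
        return merged

    # A brand-new agent need not start from scratch:
    new_agent_memory = merge_experience({}, [
        {"mailing-list": {"low": 40}, "personal": {"high": 55}},
        {"mailing-list": {"low": 22, "high": 3}},
    ])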

What about someone else using my machine - or my agent?

Security is a general computer issue - it's not unique to agents, and it will mature as computers advance. That said, people will certainly want their agents secured - particularly the knowledge an agent has about them.

How long will it be before something like the mail system you've developed becomes a product?

I don't think it will be long - I suspect in the next two years.

How will agents change the way people use and think about computers?

I hope agents will make people feel more comfortable dealing with the overload of information, more in control - confident that agents working on their behalf are reliable, never tire, and are always looking out for them. One problem I see is that people will question who is responsible for an agent's actions - for example, when an agent takes up too much time on a machine or purchases something on your behalf that you don't want. Agents will raise a lot of interesting issues, but I'm convinced we won't be able to live without them.