
Lord of the Robots

"To give a theoretical example: 30 years from now, instead of growing a tree, cutting it down and building a table, we’d just grow a table."

The director of MIT’s Artificial Intelligence Lab says the age of smart, mobile machines is already beginning. You just have to know where to find them—say, in oil wells.

Q&A with Rodney Brooks

“Computer! Turn on the lights!” Rodney Brooks, director of MIT’s Artificial Intelligence
Laboratory—the largest A.I. lab in the world—strides into his ninth-floor office in
Cambridge, MA. Despite his demand, the room stays dark. “Computer!” he repeats,
sitting down at the conference table.

“I’m…already…listening,” comes a HAL-like voice from the wall. Brooks redirects his
request toward a small microphone on the table, this time enunciating more clearly:
“Turn on the lights!”

A pleasant tweeting sound signals digital comprehension. The lights click on. Brooks
grins, his longish, graying curls bouncing on either side of his face, and admits his
entrance was a somewhat rough demonstration of “pervasive computing.” That’s a
vision of a post-PC future in which sensors and microprocessors are wired into cars,
offices and homes—and carried in shirt pockets—to retrieve information, communicate
and do various tasks through speech and gesture interfaces. “My staff laughs at me,”
says Brooks, noting he could have simply flicked the light switch, “but I have to live with
my technology.”

In the not-too-distant future, a lot more people may be living with technologies that
Brooks’s lab is developing. To help make pervasive computing a reality, researchers in
his lab and MIT’s Laboratory for Computer Science are developing—in an effort Brooks
codirects called Project Oxygen—the requisite embeddable and wearable devices,
interfaces and communications protocols. Others are building better vision systems that
do things like interpret lip movements to increase the accuracy of speech recognition
software.

Brooks’s A.I. Lab is also a tinkerer’s paradise filled with robotic machines ranging from
mechanical legs to “humanoids” that use humanlike expressions and gestures as
intuitive human-robot interfaces—something Brooks believes will be critical to people
accepting robots in their lives. The first generation of relatively mundane versions of
these machines is already marching out of the lab. The robotics company Brooks
cofounded—Somerville, MA-based iRobot—is one of many companies planning to launch new robot products this year, such as autonomous floor cleaners and industrial tools built to take on dirty, dangerous work like inspecting oil wells.

Of course, autonomous oil well inspectors aren’t as thrilling as the robotic servants
earlier visionaries predicted we’d own by now. But as Brooks points out, robotics and
artificial intelligence have indeed worked their way into everyday life, though in less
dramatic ways (see “A.I. Reboots,” TR March 2002). In conversations with TR senior editor
David Talbot, Brooks spoke (with occasional interruptions from his omnipresent
computer) about what we can expect from robotics, A.I. and the faceless voice from the
hidden speaker in his wall.

TR: The military has long been the dominant funder of robotics and A.I. research. How
have the September 11 terror attacks influenced these fields?

BROOKS: There was an initial push to get robots out into the field quickly, and this started
around 10 a.m. on September 11 when John Blitch [director of robotics technology for
the National Institute for Urban Search and Rescue in Santa Barbara, CA] called iRobot,
along with other companies, to get robots down to New York City to look for survivors in the rubble. That was just the start of a push to get things into service that were not quite ready—and weren’t necessarily meant for those particular jobs. In general, there has been an urgency to move things from development into deployment much more quickly than anyone assumed would be necessary before September 11. I think people
saw there was a real role for robots to keep people out of harm’s way.

TR: What else besides…

COMPUTER: I’m…already…listening.

BROOKS: Go to sleep. Go to sleep. Go…to…sleep.

COMPUTER: Going…to…sleep.

BROOKS: As long as we don’t say the “C” word now, we’ll be okay.

TR: Did any other robots get called for active duty?

BROOKS: Things that were in late research-and-development stages have been pushed
through, like iRobot’s “Packbot” robots. These are robots that a soldier can carry and
deploy. They roll on tracks through mud and water and send back video and other
sensory information from remote locations without a soldier going into the line of fire.
They can go into rubble; they can go where there are booby traps. Packbots were sent
for search duty at the World Trade Center site and are moving into large-scale military
deployment more quickly than expected. There is more pressure to develop mine-finding robots.

TR: How are you balancing military and commercial robot research?

BROOKS: When I became A.I. Lab director four and a half years ago, the Department of
Defense was providing 95 percent of our research funding. I thought that was just too
much, from any perspective. Now it’s at about 65 percent, with more corporate funding.

TR: What’s the future of commercial robots?

BROOKS: There has been a great deal of movement toward commercial robots. Last
November, Electrolux started selling home-cleaning robots in Sweden. They plan to sell them under the Eureka brand in the U.S. There are a bunch of companies that
plan to bring out home-cleaning robots later this year, including Dyson in the U.K.,
Kärcher in Germany and Procter & Gamble in the U.S. Another growing area is
remote-presence robots; these are being investigated more closely, for example, to
perform remote inspections above ground at oil drilling sites. Many companies are
starting to invest in that area. iRobot just completed three years of testing on oil well
robots that actually go underground; we’re now starting to manufacture the first batch of
these.

TR: How is that different from other industrial robots, like spot welders, that have been
around for years?

BROOKS: These robots act entirely autonomously. It’s impossible to communicate via
radio with an underground robot, and extreme depths make even a lightweight
fiber-optic tether impractical. If they get in trouble, they need to reconfigure themselves
and get back to the surface. They have a level of autonomy and intelligence not even
matched by the Mars rover Sojourner, which could get instructions from Earth. You don’t
need a crew of workers with tons of cable or tons of piping for underground inspections
and maintenance. You take this robot—which weighs a few hundred pounds—program
it with instructions, and it crawls down the well. It carries a bunch of sensors to measure flow rates, pressures, water levels, all sorts of things that tell you the health of the well and what to do to increase oil production. They will eventually open and close
sleeves that let fluids feed into the main well pipe and make adjustments. But the first
versions we’re selling this year will just do data collection.

TR: The computer that turned on the lights is part of MIT’s Project Oxygen, which aims to
enable a world of pervasive computing. As codirector, what are your objectives?

BROOKS: With Project Oxygen, we’re mostly concentrating on getting pervasive
computing working in an office environment. But the different companies investing in
Project Oxygen obviously have different takes on it. Philips is much more interested in
technologies to make information services more available within the home. Delta
Electronics is interested in the future of large-screen displays—things that can be done
if you have wall-sized displays you can sell to homeowners. Nokia is interested in
selling information services. They call a cell phone a “terminal.” They want to deliver stuff to this terminal and find new ways for us to interact with it. Already, Nokia has a
service in Finland where you point the cell phone at a soda machine and it bills you for
the soda. In Japan, 30 million people already browse the Web on their cell phones
through NTT’s i-mode. All these technologies are providing services from computing in
everyday environments. We are trying to identify the next things, to see how we can
improve upon or go beyond what these companies are doing.

TR: To that end, Project Oxygen is developing a handheld device called an “H21” and an
embedded-sensor suite called an “E21.” But what, exactly, will we do with these
tools—besides turn on the lights?

BROOKS: The idea is that we should have all our information services always available,
no matter what we are doing, and as unobtrusive as possible. If I pick up your cell phone
today and make a call, it charges you, not me. With our prototype H21s, when you pick
one up and use it, it recognizes your face and customizes itself to you—it knows your
schedule and where you want to be. You can talk to it, ask it for directions or make calls
from it. It provides you access to the Web under voice or stylus command. And it can
answer your questions rather than just giving you Web pages that you have to crawl
through.

The E21s provide the same sorts of services in a pervasive environment. The walls
become screens, and the system handles multiple people by tracking them and
responding to each person individually. We are experimenting with new sorts of user
interfaces much like current whiteboards, except with software systems understanding
what you are saying to other people, what you are sketching or writing, and connecting
you with, for instance, a mechanical-design system as you work. Instead of drawing you, alone, into the computer’s virtual desktop, the system supports you as you work with other people in a more natural way.

TR: How common will pervasive computing become in the next five to 10 years?

BROOKS: First we have to overcome a major challenge—making these devices work
anywhere. As you move around, your wireless environment changes drastically. There
are campuswide networks and cellular networks in different places, with different protocols. You want the handoff between those protocols to be seamless. You want these handheld devices to work independently of the service providers. Hari Balakrishnan [an assistant
professor at MIT’s Laboratory for Computer Science] and students have demonstrated
the capability—which has drawn great interest from the corporate partners—of a totally roaming Internet, which we don’t have right now. That’s something I expect will be
out there commercially in five years.

TR: And in 10 years?

BROOKS: In 10 years, we’ll see better vision systems in handheld units and in the wall
units. This will be coupled with much better speech interfaces. In 10 years the
commercial systems will be using computer vision to look at your face as you’re talking
to improve recognition of what you are saying. In a few years, cameras and microphone arrays will be in the ceiling of your office, tracking people and discriminating who is speaking when, so that the office can understand who wants to do what and provide each person with the appropriate information. We’re already demonstrating
that in our Intelligent Room here in the A.I. Lab. I’ll be talking to you—then I’ll point, and
up on the wall comes a Web page that relates to what I’m saying. It’s like Star Trek, in
that the computer will always be available.

TR: What is the state of A.I. research?

BROOKS: There’s this stupid myth out there that A.I. has failed, but A.I. is everywhere
around you every second of the day. People just don’t notice it. You’ve got A.I. systems in
cars, tuning the parameters of the fuel injection systems. When you land in an airplane,
your gate gets chosen by an A.I. scheduling system. Every time you use a piece of
Microsoft software, you’ve got an A.I. system trying to figure out what you’re doing, like
writing a letter, and it does a pretty damned good job. Every time you see a movie with
computer-generated characters, they’re all little A.I. characters behaving as a group.
Every time you play a video game, you’re playing against an A.I. system.

TR: But a robotic lawn mower still can’t be relied upon to cut the grass as well as a
person. What are the major problems that still need solving?

BROOKS: Perception is still difficult. Indoors, cleaning robots can estimate where they
are and which part of the floor they’re cleaning, but they still can’t do it as well as a person can. Outdoors, where the ground isn’t flat and landmarks aren’t reliable, they
can’t do it. Vision systems have gotten very good at detecting motion, tracking things and
even picking out faces from other objects. But there’s no artificial-vision system that can
say, “Oh, that’s a cell phone, that’s a small clock and that’s a piece of sushi.” We still
don’t have general “object recognition.” Not only do we not have it solved—I don’t think
anyone has a clue. I don’t think you can even get funding to work on that, because it is
just so far off. It’s waiting for an Einstein—or three—to come along with a different way of
thinking about the problem. But in the meantime, there are a lot of robots that can do without it.
The trick is finding places where robots can be useful, like oil wells, without being able
to do visual object recognition.

TR: Your new book Flesh and Machines: How Robots Will Change Us argues that the
distinctions between man and machine will someday be irrelevant. What does that
mean?

BROOKS: Technologies are being developed that interface our nervous systems directly
to silicon. For example, tens of thousands of people have cochlear implants in which
electrical signals stimulate neurons so they can hear again. Researchers at the A.I. Lab
are experimenting with direct interfacing to nervous systems to build better prosthetic
legs and bypass diseased parts of the brain. Over the next 30 years or so we are going
to put more and more robotic technology into our bodies. We’ll start to merge with the
silicon and steel of our robots. We’ll also start to build robots using biological materials.
The material of us and the material of our robots will converge to be one and the same,
and the sacred boundaries of our bodies will be breached. This is the crux of my
argument.

TR: What are some of the wilder long-term ideas your lab is working on or that you’ve
been thinking about?

BROOKS: Really long term—really way out—we’d like to hijack biology to build machines.
We’ve got a project here where Tom Knight [senior research scientist at the A.I. Lab] and
his students have engineered E. coli bacteria to do very simple computations and
produce different proteins as a result. I think the really interesting stuff is a lot further
down the line, where we’d have digital control over what is going on inside cells, so that
they, as a group, can do different things. To give a theoretical example: 30 years from
now, instead of growing a tree, cutting it down and building a table, we’d just grow a
table. We’d change our industrial infrastructure so we can grow things instead of
building them. We’re a long way away from this. But it would be almost like a free lunch.
You feed them sugar and get them to do something useful!

TR: Project Oxygen. Robots. Growing tables. What’s the common intellectual theme for
you?

BROOKS: It all started when I was 10 years old and built my first computer, in the early
1960s. I would switch it on and the lights flashed and it did stuff. That’s the common
thread—the excitement of building something new that is able to do something that
normally requires a creature, an intelligence of some level.

TR: That excitement is still there?

BROOKS: Oh yeah.

http://www.techreview.com/articles/qa0402.asp?p=0
