Elephants Don’t Play Chess
18 November, 2010
“Elephants Don’t Play Chess” (PDF) is a paper by Rodney Brooks of the MIT Artificial Intelligence Laboratory. In it, Brooks introduces Nouvelle AI, also called fundamentalist AI, and compares and contrasts it with the then-mainstream symbol-system approach, also called classical AI.
Brooks also used this paper to introduce his subsumption architecture, a distributed system that senses and then reacts, much like the reflexes of the human nervous system. The subsumption architecture was an improvement on the classical sense-plan-act systems of the time. Time has shown that the subsumption architecture can itself be improved by hybrid approaches in which sensing, planning, and reacting run asynchronously.
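As a rough illustration (not code from the paper), a greatly simplified subsumption-style control loop might look like the sketch below: each layer independently maps sensor readings to an action, and a higher-priority layer subsumes whatever the layers beneath it would have done. The sensor names, thresholds, and actions are invented for the example.

```python
def seek_light(sensors):
    """Highest layer: head toward a bright spot when one is visible."""
    if sensors["light"] > 0.8:            # hypothetical normalised light reading
        return "turn_toward_light"
    return None                           # inactive; defer to the layers below

def avoid_obstacle(sensors):
    """Middle layer: back away when something is too close."""
    if sensors["range"] < 0.3:            # hypothetical range reading in metres
        return "reverse"
    return None

def wander(sensors):
    """Lowest layer: always has a default action."""
    return "forward"

# Layers listed from highest to lowest priority; the first active layer's
# action wins, i.e. a higher layer subsumes the ones beneath it.
LAYERS = [seek_light, avoid_obstacle, wander]

def control_step(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(control_step({"light": 0.1, "range": 1.0}))   # -> "forward"
print(control_step({"light": 0.1, "range": 0.2}))   # -> "reverse"
```

The point of the structure is that no layer consults a central world model or plan; each one reacts to the sensors directly, and the lowest layer always has something to do.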
Classical AI is described as intelligence found within the sum of the parts of the system: to improve a classical AI system, its individual parts must be made smarter. These systems are criticized for relying on finite sets of symbols that the systems themselves do not understand and that require a human to interpret. Further, the symbols are heavily dependent on the specific task the system was built for, and it is argued that these dependencies produce brittle systems that cannot adapt or scale as the problems change.
Nouvelle AI is described as intelligence found within each part independently: to improve a Nouvelle AI system, more parts can be added. These are behavior-based systems, in which each part understands its own behaviors and knows nothing of the other parts. To implement them, the developers departed from symbolic AI in favor of the physical grounding hypothesis, which states that an intelligent system’s representations must be grounded in the physical world, removing the need for a predetermined finite set of symbols. The proposed systems would use the world itself as their environment and would not abstract reality down to symbols.
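Continuing the hypothetical sketch above, making such a system more capable means adding another grounded behavior rather than enlarging a shared symbol set; the new layer reads the world directly through its sensors and knows nothing about the other layers. The follow_wall behavior and right_range sensor here are invented for illustration.

```python
def follow_wall(sensors):
    """New layer: hug a wall when one is detected on the right-hand side."""
    if sensors.get("right_range", float("inf")) < 0.5:   # hypothetical sensor
        return "track_wall"
    return None

# Slot the new behaviour into the priority stack from the earlier sketch;
# none of the existing layers need to change.
LAYERS.insert(2, follow_wall)   # below avoid_obstacle, above wander
```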
After introducing the physical grounding hypothesis, Brooks presents several robots built with the Nouvelle AI approach. These robots seem promising, yet all of them are task-specific and do not learn new behaviors or tasks over time. Brooks’ defense lies in his “But It Can’t Do X” argument, but this defense mischaracterizes the critiques he has received. A larger argument can be made: even though the systems Brooks and his colleagues propose are more behavior-based, they remain task-specific and do not provide open-ended learning of new tasks. Nor do these systems feature autonomous, animal-like online learning.
While Brooks was right about the need for the physical grounding hypothesis, other requirements exist for systems to have true human-like intelligence. These requirements point towards a model of Autonomous Mental Development, in which neither the system’s goal nor its world representation is known at programming time.