Autonomous Driving in Traffic: Boss and the Urban Challenge

12 December, 2010

The paper “Autonomous Driving in Traffic: Boss and the Urban Challenge” by C. Urmson et al. describes DARPA’s Grand Challenges and the progress that has been made as a result of these challenges [1]. Specifically, the article focuses on the DARPA 2007 Urban Challenge and the winning car known as Boss. Boss was the first car to complete the requirements of the challenge and did so 7.5% faster than the second place finisher. This was the third challenge from DARPA focusing on autonomous driving and the first focusing on urban areas.

As part of the final round of the competition, the vehicles were required to drive 52 miles around a former military base. The military base was used to reflect a typical urban environment, yet many factors of real urban environments were missing. First, only midsized cars or larger were allowed on the course, removing potential obstacles such as litter, bicycles, and pedestrians [2]. Second, the only traffic moderation devices on the course were stop signs, and the exact locations of both the stop signs and the stop-lines were provided to the teams as highly accurate GPS waypoints. Simplifications such as these make it difficult to claim that these vehicles are ready to drive autonomously in a real urban environment.

The winning vehicle completed the challenge close to 20 minutes faster than its nearest competitor; however, even the winning vehicle made 17 mistakes during the final round. Three of these errors were the result of software bugs and sensors incorrectly perceiving their surroundings. No errors related to mechanical malfunctions occurred, and the systems do not appear to have been designed to handle such failures.

Difficult as naive path finding is on its own, a mechanical failure such as a sensor malfunction or a flat tire while navigating at high speed could be catastrophic. It will be interesting to see how autonomous cars handle unpleasant scenarios like these, as it is safe to assume that current test drives occur only after rigorous maintenance and manual inspection of the vehicle and its components. Further, a machine must guarantee that it is 100% error free before drivers can be allowed to stop paying attention to the road. Otherwise, drivers will still have to monitor the vehicle and spring into action when errors occur, severely constraining the benefits of autonomous vehicles.

The original article was written in 2008 and predicted deployment of autonomous haul trucks as soon as 2010. These challenges have helped to reignite the work on autonomous vehicles that began in the 1980s, but much work is still required. Google has been working privately on its own autonomous vehicles, which have collectively logged over 140,000 miles autonomously [3]. However, even Google's autonomous cars require a human to drive the route beforehand to map out the road conditions. All of the research in the field shows what an exciting time it is to be working on artificial intelligence and machine learning problems.

[1] Urmson, C., et al. "Autonomous Driving in Traffic: Boss and the Urban Challenge". AI Magazine. Summer 2009.
[2] Urmson, C., and Whittaker, W. "Self-Driving Cars and the Urban Challenge". IEEE Intelligent Systems. March/April 2008.
[3] Thrun, S. “What we’re driving at”. The Official Google Blog. Oct 09, 2010. http://googleblog.blogspot.com/2010/10/what-were-driving-at.html

Elephants Don’t Play Chess

18 November, 2010

“Elephants Don’t Play Chess” (PDF) is a paper written by Rodney Brooks of the MIT Artificial Intelligence Laboratory. Brooks introduces Nouvelle AI, also called fundamentalist AI. He compares and contrasts his Nouvelle AI with the then-mainstream symbol system, also called classical AI.

Brooks also used this paper to introduce his subsumption architecture, a distributed system that senses and then reacts, much like the nervous system of the human body. The subsumption architecture was an improvement on the classical sense-plan-act systems. Time has shown that the subsumption architecture could be improved through the use of a hybrid approach that can sense and react asynchronously.
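Brooks describes the architecture as layered behaviors in which higher-priority layers can suppress (subsume) the outputs of lower ones, with no central planner in between. A minimal sketch of that priority scheme might look like the following; the class and behavior names here are my own illustration, not Brooks's implementation:

```python
class Behavior:
    """A layer mapping sensor readings to an action, or deferring (None)."""
    def act(self, sensors):
        raise NotImplementedError

class Wander(Behavior):
    # Lowest layer: always proposes a default action.
    def act(self, sensors):
        return "move-forward"

class AvoidObstacle(Behavior):
    # Higher layer: subsumes lower layers when an obstacle is close.
    def act(self, sensors):
        if sensors.get("obstacle_distance", float("inf")) < 1.0:
            return "turn-away"
        return None  # defer to lower layers

def subsumption_step(layers, sensors):
    """The highest-priority layer that produces an action wins."""
    for layer in layers:  # ordered highest priority first
        action = layer.act(sensors)
        if action is not None:
            return action
    return "idle"

layers = [AvoidObstacle(), Wander()]
print(subsumption_step(layers, {"obstacle_distance": 0.5}))  # turn-away
print(subsumption_step(layers, {"obstacle_distance": 5.0}))  # move-forward
```

Each layer couples sensing directly to action, which is the key departure from the classical approach of building a world model and planning over it.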

Classical AI is described as the intelligence being found within the sum of the parts of the system. To improve the intelligence of classical AI systems, the parts of the system must be made smarter. These systems are criticized for being too reliant on finite sets of symbols. These symbols are not understood by the systems and require a human to interpret them. Further, the systems' symbols are heavily dependent on the specific task of the system. It is argued that these dependencies cause brittle systems that cannot adapt or scale as the problems change.

Nouvelle AI is described as the intelligence being found within each part independently. To improve the intelligence of a Nouvelle AI system, more parts can be added to the system. These systems are represented as behavior-based AI systems, where each part of the system understands its own behaviors and lacks knowledge of the other parts of the system. To implement these systems, the developers departed from symbolic AI and moved to their physical grounding hypothesis. The physical grounding hypothesis states that system intelligence requires representations to be grounded in the physical world, thus removing the need for a predetermined finite set of symbols. The proposed systems would use the world as their environment and would not abstract reality down to symbols.

After introducing the physical grounding hypothesis, Brooks introduces the reader to multiple robots that have been created using the Nouvelle AI approach. These robots seem promising, yet all of the robots described are task-specific and do not learn new behaviors or tasks over time. Brooks' defense of this lies in his "But It Can't Do X" argument. This defense is a mischaracterization of the critiques that Brooks has received. There is a much larger argument that could be made: even though the AI systems Brooks and his colleagues have proposed are more behavior-based, they are still task-specific and do not provide open-ended learning of new tasks. Further, these systems do not feature autonomous animal-like online learning.

While Brooks was right about the need for the physical grounding hypothesis, there are other requirements that exist for systems to have true human-like intelligence. These requirements lead towards a model of Autonomous Mental Development, where the goal of the system as well as the world representation are unknown at programming time.

Autonomous Mental Development by Robots and Animals

8 November, 2010

“Autonomous Mental Development by Robots and Animals”[1] was one of the first publications to introduce a new field of artificial intelligence research, called Autonomous Mental Development (AMD), to the mainstream scientific research community. The paper was written after the first Workshop on Development and Learning took place at Michigan State University in 2000.

The authors of the article differentiate AMD from traditional artificial intelligence such as knowledge-based, learning-based, or genetic-search based approaches. The authors state that the difference between traditional and developmental robotic programming is based on five factors: task specificity; awareness of tasks at time of programming; representation of unknown tasks; animal-like online learning; and open-ended learning. While AMD differs from more traditional approaches in at least these five ways, it is the combination of these five factors that defines AMD. Up to the point of the publication, initial progress in the field allowed robots to learn objects and places that they had interacted with.

Researchers working on problems of AMD are trying to build programs that understand and know what they are doing, as opposed to programs that are simply following instructions that the human programmer has provided. This separation has been called strong vs. weak AI, respectively. I think the article could have benefited from acknowledging the difference, since the target audience was that of a mainstream science publication and not one solely for the artificial intelligence community.

One of the figures in the article describes how developmental programs in a machine brain are similar to those of a human brain. Specifically, the figure shows human development starting in the genes and leading to an adult brain. The growth of this human brain is affected by the environment that it lives in, and by the environment of its ancestors through the inheritance of special genetic traits. Further, the article states that humans "raise the developmental robot by interacting with it in real time". While this interaction would be necessary, the account leaves out the need for previous generations of developmental programs to pass on to their offspring what helped them to be successful.

The first research published in the area of AMD[2] covered facial recognition, feature selection, feature extraction, and self-organization. Since that research began, many questions have been raised as to how systems that are fully autonomous, task-nonspecific, and open-ended could be created, and the problem space still has areas that are not well defined.

This article provides a good introduction to AMD and the problems the field is trying to solve. To this day, interest in AMD has continued to grow, as shown by the creation of the IEEE CIS AMD Newsletter, started in 2004, and the IEEE Transactions on Autonomous Mental Development journal, started in 2009. Hopefully we will see a day where machines can learn new tasks and correct their mistakes.

[1] Weng et al. "Autonomous Mental Development by Robots and Animals". Science Magazine. Volume 291, Number 5504, Issue of 26 Jan 2001, pp. 599-600.
[2] Weng et al. “Learning Recognition and Segmentation Using the Cresceptron”. International Journal of Computer Vision. Volume 25, Number 2, Issue of Nov 1997, pp. 105-139.

A Critique of “The Chinese Room”

4 November, 2010

John Searle's 1980 paper "Minds, Brains, and Programs" (Searle 1980) and subsequent papers by Searle are collectively known as "The Chinese Room Arguments". They were the first papers to differentiate between strong and weak AI. Searle provided a counter-argument to those seeking artificial intelligence through his reasoning that computers are not demonstrating intelligence because they rely on formal syntax and a finite set of symbols. In this argument, he argued that weak AI is no different from any other program following instructions.

To formalize, weak AI is defined as "the assertion that machines could act as if they were intelligent", whereas strong AI is defined as "the assertion that machines could act because they are actually thinking and not simulating" (Russell and Norvig). Using these definitions, if a computer does not understand, then it is not a mind, and thus represents weak AI, not strong AI.

Comparing this with A.M. Turing's 1950 paper "Computing Machinery and Intelligence", Searle ignores what Turing calls his "polite convention" (Turing). Turing's "polite convention" is that one should trust another's thought process and only verify the conclusions that are reached. An example of this is a student learning a second natural language. They may first learn some rules about the language, and when asked a question in that language they may take extra time to apply those rules and derive a response. It is up to the questioner to be polite: to trust that the student has not memorized the question-and-answer pair, and not to fault the student for learning a few rules to help with language adoption.

One of the strongest rebuttals is the Systems Reply. In Searle's initial argument, he states that if the human does not understand and the paper does not understand, no understanding exists. This argument relies on intuition and is not grounded well. The reply states that the view of the entire system from the outside does, however, represent understanding. A contradiction of Searle's argument could be as simple as noting that neither hydrogen nor oxygen alone can make an object wet, yet together as water they have a new property (Russell and Norvig). Applying the argument in the opposite direction, Russell and Norvig question whether the brain is any different from the Chinese room. They state that the brain is just a pile of cells acting blindly according to the laws of biochemistry, so what is different about these cells compared to those of the liver?

I shall finish this critique by mentioning Searle’s four axioms (Searle 1990). Axioms 1 and 2 are provided to draw a clear separation between computers and brains. Axiom 1 states that computers are syntactic, but it is well known that computers are also manipulated by electrical current. It is also well known that the human brain is manipulated through electrical current, thus should not brains be syntactic as well? If brains are syntactic like computers are, then Axiom 3 cannot hold.

Bibliography

Russell, Stuart, and Peter Norvig. “Philosophical Foundations.” Artificial Intelligence: A Modern Approach. New Jersey: Prentice Hall, 2010. pp. 1020-1040.
Searle, John. 1980. “Minds, Brains, and Programs”. Behavioral and Brain Sciences 3, pp. 417-424.
Searle, John. 1990. "Is the Brain's Mind a Computer Program?". Scientific American. 262: 26-31.
Turing, A.M. "Computing Machinery and Intelligence". Mind, New Series, Vol. 59, No. 236. (Oct., 1950), p. 446.


Writing a critique of a scholarly article

5 September, 2010

For the first assignment in my Artificial Intelligence course, we are to write a critique of Alan Turing's "Computing Machinery and Intelligence". Now, I'm pretty sure I've written a critique before, and I know that it is not a summary of the paper, but I think having the summer off from graduate school allowed for a relapse on the exact mechanics of a critique.

I took this as an opportunity to ask my friend Jim, who is currently studying for a Master of Fine Arts at Chicago State University, for some tips on writing a critique. I'm pretty sure I'm not alone, so I'd like to share the tips and offer others the opportunity to give feedback on them.

Here are the tips:

  1. Pick a side of the argument that the author made (agree/disagree).
  2. Say what parts of the argument you agree/disagree with.
  3. Spend more time pointing out flaws vs. pointing out strengths.
  4. If the author has a point/counterpoint, double-check that the point/counterpoint are actually opposites of each other.
  5. If the writing is historical in nature, look for predictions that turned out wrong.
  6. Ask what the author could have done better in their analysis.

I also found useful the document titled “How to do a Close Reading” by Patricia Kain of the Writing Center at Harvard University. Although writing a critique and a close reading are, in my understanding, two very different activities, there are useful tips given on how to analyze writings and approach them from different viewpoints.

Those are the tips I’ve got so far. Let me know what you think of them or if you have any to add by leaving a comment to this post.
