Autonomous Mental Development by Robots and Animals

8 November, 2010

“Autonomous Mental Development by Robots and Animals”[1] was one of the first publications to introduce a new field of artificial intelligence research, called Autonomous Mental Development (AMD), to the mainstream scientific research community. The paper was written after the first Workshop on Development and Learning, held at Michigan State University in 2000.

The authors of the article differentiate AMD from traditional artificial intelligence approaches such as knowledge-based, learning-based, or genetic-search-based methods. They state that the difference between traditional and developmental robot programming rests on five factors: task specificity; awareness of tasks at programming time; representation of unknown tasks; animal-like online learning; and open-ended learning. While AMD differs from more traditional approaches in at least these five ways, it is the combination of these factors that defines AMD. By the time of publication, initial progress in the field had allowed robots to learn objects and places that they had interacted with.

Researchers working on problems of AMD are trying to build programs that understand and know what they are doing, as opposed to programs that simply follow the instructions a human programmer has provided. This separation has been called strong versus weak AI. I think the article could have benefited from acknowledging that distinction, since the target audience was readers of a mainstream science publication rather than the artificial intelligence community alone.

One of the figures in the article describes how developmental programs in a machine brain are similar to those of a human brain. Specifically, the figure shows human development starting from the genes and leading to an adult brain. The growth of the human brain is shaped by the environment it lives in, and by the environment of its ancestors through inherited genetic traits. Further, the article states that humans “raise the developmental robot by interacting with it in real time”. While this is a worthwhile goal, it leaves out the need for previous generations of developmental programs to pass on to their offspring what helped them succeed.

The first research published in the area of AMD[2] covered facial recognition, feature selection, feature extraction, and self-organization. Since research began, many questions have been raised about how systems that are fully autonomous, task-nonspecific, and open-ended could be created, and the problem space still has areas that are not well defined.

This article provides a good introduction to AMD and the problems the field is trying to solve. To this day, interest in AMD has continued to grow, as shown by the creation of the IEEE CIS AMD Newsletter, started in 2004, and the journal IEEE Transactions on Autonomous Mental Development, started in 2009. Hopefully we will see a day when machines can learn new tasks and correct their mistakes.

[1] Weng et al. “Autonomous Mental Development by Robots and Animals”. Science, Vol. 291, No. 5504, 26 Jan 2001, pp. 599-600.
[2] Weng et al. “Learning Recognition and Segmentation Using the Cresceptron”. International Journal of Computer Vision. Volume 25, Number 2, Issue of Nov 1997, pp. 105-139.

A Critique of “The Chinese Room”

4 November, 2010

John Searle’s 1980 paper “Minds, Brains, and Programs” (Searle 1980) and his subsequent papers are collectively known as “The Chinese Room Argument”. These were the first papers to differentiate between strong and weak AI. Searle offered a counter-argument to those pursuing artificial intelligence, reasoning that computers do not demonstrate intelligence because they rely only on formal syntax and a finite set of symbols. In his view, weak AI is no different from any other program following instructions.

To formalize, weak AI is defined as “the assertion that machines could act as if they were intelligent”, whereas strong AI is defined as “the assertion that machines could act because they are actually thinking and not simulating” (Russell and Norvig). Using these definitions, if a computer does not understand, then it is not a mind, and thus represents weak AI, not strong AI.

Comparing this with A.M. Turing’s 1950 paper “Computing Machinery and Intelligence”, Searle ignores what Turing calls his “polite convention” (Turing). Turing’s “polite convention” is that one should trust another’s thought process and only verify the conclusions that are reached. An example of this is a student learning a second natural language. They may first learn some rules about the language, and when asked a question in that language they may take extra time to recall those rules and derive a response. It is polite of the questioner to trust that the student has not memorized the question-and-answer pair, and not to object to the student having learned a few rules to help with adopting the language.

One of the strongest rebuttals is the Systems Reply. In Searle’s initial argument, he states that if the human does not understand and the paper does not understand, then no understanding exists. This argument relies on intuition and is not well grounded. The reply states that the system as a whole, viewed from the outside, does exhibit understanding. A contradiction of Searle’s argument could be as simple as pointing out that neither hydrogen nor oxygen alone can make an object wet, yet together they have a new ability (Russell and Norvig). Applying the argument in the opposite direction, Russell and Norvig question whether the brain is any different from the Chinese Room. They state that the brain is just a pile of cells acting blindly according to the laws of biochemistry, so what is different about these cells compared to those of the liver?

I shall finish this critique by mentioning Searle’s four axioms (Searle 1990). Axioms 1 and 2 are provided to draw a clear separation between computers and brains. Axiom 1 states that computers are syntactic, but it is well known that computers are also manipulated by electrical current. It is also well known that the human brain is manipulated through electrical current, so should not brains be syntactic as well? If brains are syntactic like computers and yet clearly give rise to understanding, then Axiom 3, which holds that syntax by itself is not sufficient for semantics, cannot hold.

Bibliography

Russell, Stuart, and Peter Norvig. “Philosophical Foundations.” Artificial Intelligence: A Modern Approach. New Jersey: Prentice Hall, 2010. pp. 1020-1040.
Searle, John. 1980. “Minds, Brains, and Programs”. Behavioral and Brain Sciences 3, pp. 417-424.
Searle, John. 1990. “Is the Brain’s Mind a Computer Program?”. Scientific American, 262: 26-31.
Turing, A.M. “Computing Machinery and Intelligence”. Mind, New Series, Vol. 59, No. 236 (Oct. 1950), pp. 433-460.


A Critique of “Computing Machinery and Intelligence”

2 October, 2010

Alan Turing’s 1950 paper “Computing Machinery and Intelligence” was ground-breaking work that focused on the future possibilities of artificial intelligence and the criticisms it would face. The paper sought to move the question of “whether machines can think” towards “whether a machine can imitate a human successfully enough to cause the human to believe they are communicating with another human”.

Turing had a goal of making the question more specific and measurable, yet 60 years later no machine has passed the “Turing Test”. Requiring that a machine with artificial intelligence pass the Turing Test is still a very vague goal. Systems have been created that can beat the world’s best chess player, and others that predict which movies customers would enjoy most. While both of these exceed the ability of a single human, the Turing Test sets the bar too loosely, as it is not described in the paper in terms that could actually be measured. This is likely one of the major reasons that Turing’s claim that “in about fifty years’ time it will be possible to program computers … to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning” has still not been validated.

In fact, the mathematical argument presented in Objection 3 further supports the point that the Turing Test is too vague. Turing states that there will always be something of greater intelligence on a given scale, yet he fails to specify when comparison with another intelligent agent becomes unnecessary. Responding to that objection would have been a great opportunity to state a more measurable goal for the Turing Test.

While it is easy to argue that no machine has passed the Turing Test, a host of machines have in fact tricked humans into thinking they were having a conversation with another human. Chat programs such as ELIZA and CyberLover have fooled their participants in conversations lasting over five minutes, yet they were never credited with passing the Turing Test. For this reason, many researchers focus on practical skill tests instead of the Turing Test in their research.

I would like to move on now to some smaller criticisms of the paper. In one part, Turing describes the method for training a machine. He states that the programmer should obtain “how [the action] is done, and then translate the answer into the form of an instruction table.” While this process will work fine in most cases, it leaves out the instances of intuition that would be present in a human but absent from the machine. Randomness is unlikely to be a substitute for intuition, or for the feeling a chess player gets when deciding that a move simply feels odd.
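To make this concern concrete, an “instruction table” can be read as a fixed lookup from situations to actions. The following is a minimal Python sketch of that reading; the situations, actions, and the choose_action helper are hypothetical examples of my own, not from Turing’s paper. Anything the programmer did not anticipate falls through to a default, which is exactly where a human would rely on intuition.

    # A minimal sketch of an "instruction table": a fixed mapping from
    # situations to actions, as a programmer might transcribe it.
    # The entries below are hypothetical examples.
    instruction_table = {
        "opponent threatens my queen": "move the queen to safety",
        "opponent castles early": "advance the kingside pawns",
    }

    def choose_action(situation):
        """Follow the table literally; there is no provision for intuition."""
        if situation in instruction_table:
            return instruction_table[situation]
        # A human player might sense that "something feels odd" here;
        # the table can only fall back to a default (or a random guess).
        return "no instruction available"

    print(choose_action("opponent threatens my queen"))  # move the queen to safety
    print(choose_action("unfamiliar piece sacrifice"))   # no instruction available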

I do not agree with the necessity of Objection 9. Turing defends this objection by hoping to put competitors into a ‘telepathy-proof room’. If extra-sensory perception and the four items Turing lists (telepathy, clairvoyance, precognition, and psychokinesis) can be learned by humans, there does not seem to be a reason that an intelligent machine could not learn the same skills, provided it is given the necessary sensors. It sounds like Turing responds to Objection 9 with the very attitude he criticizes in Objection 2, by burying his “head in the sand.”

Turing stated that the Turing Test should “only permit digital computers to take part in our game,” yet only a few pages later he goes back on that statement by saying, “the feature of using electricity is thus seen to be only a very superficial similarity. … [W]e should look rather for mathematical analogies of function.”

Towards the end of the paper, Turing discusses the learning process and the formulation of rules that the system should follow. He decides that the “random method seems to be better than the systematic.” Yet as with most rules, there is a specific ordering and weighting of their importance. I propose an example of two imaginary rules: (1) a machine should not take part in any violence; (2) a machine should keep its owner happy. With a random selection of which rule to follow, the machine may decide to rid the world of the owner’s enemies to keep its owner happy, even though this is in clear violation of (1). The decision-making process of a machine needs to be more sophisticated than a random choice among its rules, as the sketch below illustrates.
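Here is a minimal Python sketch of that point, assuming the two hypothetical rules above and a hand-written table of which rules each candidate action violates (all names and judgments below are my own illustration, not Turing’s). An ordered evaluation that filters candidates by rule priority never returns the violent action, while a random choice between candidates can.

    import random

    # Hypothetical rules, listed in priority order: rule (1) outranks rule (2).
    RULES = [
        "do not take part in any violence",   # (1)
        "keep the owner happy",               # (2)
    ]

    # Hypothetical judgments about which rules each candidate action violates.
    VIOLATIONS = {
        "rid the world of the owner's enemies": {"do not take part in any violence"},
        "play the owner's favourite song": set(),
    }

    def choose_randomly(actions):
        """Random selection: may return an action that violates rule (1)."""
        return random.choice(actions)

    def choose_with_priorities(actions):
        """Keep only actions allowed by each rule, highest priority first."""
        remaining = list(actions)
        for rule in RULES:
            allowed = [a for a in remaining if rule not in VIOLATIONS.get(a, set())]
            if allowed:  # never let a lower-priority rule override a higher one
                remaining = allowed
        return remaining[0] if remaining else None

    candidates = ["rid the world of the owner's enemies", "play the owner's favourite song"]
    print(choose_randomly(candidates))         # sometimes returns the violent action
    print(choose_with_priorities(candidates))  # play the owner's favourite song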

Overall, while the paper was quite groundbreaking for its time, there are a couple of loopholes in Turing’s argument that invite criticism. This paper was one of the foundations of artificial intelligence and will continue to be revisited by current and future researchers.

Turing, A.M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.
