11 January, 2011 § 1 Comment
Today marks the second day of the Spring semester at Michigan State University. Each semester the Department of Computer Science and Engineering lines up speakers from academia and industry to come to campus and give presentations on their research or line of work.
Last semester brought Dr. Greg Wilson, influential author of Making Software and Beautiful Code; Dr. Wolfgang Bauer, MSU University Distinguished Professor and researcher at the National Superconducting Cyclotron Laboratory; Dr. Zhiewei (Jerry) Cen, Reliability Engineer at Google; Dr. Doug L. James, Associate Professor of Computer Science at Cornell University; and many more.
This semester features research talks from schools such as the Colorado School of Mines, Carnegie Mellon University, UCLA, and McGill University.
The talks are mostly on Fridays, usually running from about 11:30am to 12:30pm in room 1234 Anthony Hall.
If you are near East Lansing, you should stop by and see the interesting research going on.
24 November, 2010 § 1 Comment
Inspired by sites like Eric Marcos’ Mathtrain.tv and the Simple English Wikipedia, I’ve decided to make some short screencasts of NP-Complete problems. I hope that these screencasts help to explain the problem and frame it in a way that is easy to understand.
Here is the first video that I’ve made, covering the Vertex Cover problem:
Let me know what you think and what I should add or remove from the video.
18 November, 2010 § Leave a Comment
“Elephants Don’t Play Chess” (PDF) is a paper written by Rodney Brooks of the MIT Artificial Intelligence Laboratory. Brooks introduces Nouvelle AI, also called fundamentalist AI, and compares and contrasts it with the then-mainstream symbol-system approach, also called classical AI.
Brooks also used this paper to introduce his subsumption architecture, a distributed system that senses and then reacts, much like the nervous system of the human body. The subsumption architecture was an improvement on the classical sense-plan-act systems. Time has since shown that the subsumption architecture could be improved through a hybrid approach that senses and reacts asynchronously.
In classical AI, intelligence is described as residing in the sum of the system's parts; to improve the intelligence of a classical system, its parts must be made smarter. These systems are criticized for being too reliant on finite sets of symbols. The symbols are not understood by the systems themselves and require a human to interpret them. Further, the symbols are heavily dependent on the system's specific task. It is argued that these dependencies produce brittle systems that cannot adapt or scale as the problems change.
In Nouvelle AI, by contrast, intelligence is found within each part independently; to improve the intelligence of a Nouvelle AI system, more parts can be added. These are behavior-based systems, where each part understands its own behaviors and lacks knowledge of the other parts. To implement them, the developers departed from symbolic AI and moved to the physical grounding hypothesis, which states that intelligence requires representations to be grounded in the physical world, removing the need for a predetermined finite set of symbols. The proposed systems would use the world itself as their environment rather than abstracting reality down to symbols.
After introducing the physical grounding hypothesis, Brooks introduces the reader to multiple robots built on the Nouvelle AI approach. These robots seem promising, yet all of them are task-specific and do not learn new behaviors or tasks over time. Brooks' defense lies in his “But It Can't Do X” argument, which mischaracterizes the critiques he has received. There is a much larger argument to be made: even though the systems Brooks and his colleagues propose are more behavior-based, they are still task-specific and do not provide open-ended learning of new tasks. Further, these systems do not feature autonomous, animal-like online learning.
While Brooks was right about the need for the physical grounding hypothesis, other requirements exist for systems to have true human-like intelligence. These requirements point towards a model of Autonomous Mental Development, where both the goal of the system and the world representation are unknown at programming time.
2 October, 2010 § Leave a Comment
Alan Turing’s 1950 paper titled “Computing Machinery and Intelligence” was a ground-breaking paper that focused on the future possibilities and expected criticisms within artificial intelligence. The paper sought to move the question from “can machines think” to “can a machine imitate a human successfully enough to cause the human to believe they are communicating with another human”.
Turing's goal was to make the question more specific and measurable, yet 60 years later no machine has passed the “Turing Test”. Requiring that an artificially intelligent machine pass the Turing Test is still a very vague goal. Systems have been created that can beat the world’s best chess player, and others that predict which movies customers will enjoy most. While both are beyond the ability of a single human, the Turing Test sets the bar too loosely, as the paper never describes it in measurable terms. This is likely one of the major reasons that Turing’s claim of “in about fifty years’ time it will be possible to program computers … to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning” has still not been validated.
In fact, the mathematical argument presented in Objection 4 reinforces the point about the Turing Test being too vague. Turing states that there will always be something of greater intelligence on any given scale, yet he fails to specify when the comparison with another intelligent agent becomes unnecessary. Responding to Objection 4 could have been a great opportunity to state a more measurable goal for the Turing Test.
While it is easy to argue that no machine has passed the Turing Test, a host of machines actually have tricked humans into thinking they were having a conversation with another human. Chat programs such as ELIZA and CyberLover have tricked their participants over conversations longer than five minutes, yet they were never awarded the prize of winning the Turing Test. For this reason, many researchers focus on practical skill tests instead of the Turing Test in their research.
I would like to move on now to some smaller criticisms of the paper. In one part, Turing describes the method for training a machine: the programmer should obtain “how [the action] is done, and then translate the answer into the form of an instruction table.” While this process will work fine in most cases, it leaves out the instances of intuition that would be present in a human. Randomness is unlikely to be a substitute for intuition, or for the feeling a chess player gets when a move simply seems odd.
I do not agree with the necessity of Objection 9. Turing defends this objection by hoping to put competitors into a ‘telepathy-proof room’. If extra-sensory perception and its four items can be learned by humans, there seems to be no reason an intelligent machine could not learn the same skills, provided it is given the necessary sensors. It sounds like Turing responds to Objection 9 with the same move as Objection 2, burying his “head in the sand.”
Turing stated that the Turing Test should “only permit digital computers to take part in our game,” yet only a few printed pages later does Turing go back on that statement by saying, “the feature of using electricity is thus seen to be only a very superficial similarity. … [W]e should look rather for mathematical analogies of function.”
Towards the end of the paper, Turing discusses the learning process and the formulation of rules the system should follow. He decides that the “random method seems to be better than the systematic.” Yet as with most rules in our universe, there is a specific ordering and weighting of their importance. Consider two imaginary rules: (1) a machine should not take part in any violence, and (2) a machine should keep its owner happy. With a random selection of which rule to follow, the machine may decide to rid the world of the owner’s enemies to keep its owner happy, in clear violation of (1). The decision-making process of machines must necessarily be more sophisticated than a random method of choosing which rule to apply.
Overall, while the paper was quite groundbreaking for its time, there are a couple of loopholes in Turing’s argument that are prone to criticism. This paper was one of the foundations of artificial intelligence and will continue to be revisited by current and future researchers.
Turing, A.M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.
24 September, 2010 § 1 Comment
I spend most of my time working in Visual Studio and like its support for post-build steps, which can be used to copy files, register DLLs, or run unit tests.
In my work at MSU I need to use a makefile, and I have been writing my code test-first. As such, I found I was spending a lot of time typing make and then typing ./runTests. Ideally, I could just have the tests run automatically after each build.
Looking around on the internet for quick steps to add a post-build step to make wasn’t very fruitful. I ran into a bunch of old forum posts that mentioned Visual Studio-generated makefiles with post-build steps. After thinking about the idea for a bit, the solution sprang to mind.
It turns out it is really simple and quite intuitive. Here’s my current makefile:
CC=g++
CFLAGS=-c -Wall -pedantic-errors -Wextra -Wunused -Werror

all: findRoute findRoute-test runTests

rebuild: clean all

runTests: findRoute-test
	./findRoute-test.exe

findRoute: main.o cities.o flight_charges.o traffic_controller.o
	$(CC) main.o cities.o flight_charges.o traffic_controller.o -o findRoute

findRoute-test: main-test.o cities.o flight_charges.o traffic_controller.o
	$(CC) main-test.o cities.o flight_charges.o traffic_controller.o -o findRoute-test

main.o: main.cpp
	$(CC) $(CFLAGS) main.cpp

cities.o: cities.cpp
	$(CC) $(CFLAGS) cities.cpp

flight_charges.o: flight_charges.cpp
	$(CC) $(CFLAGS) flight_charges.cpp

traffic_controller.o: traffic_controller.cpp
	$(CC) $(CFLAGS) traffic_controller.cpp

main-test.o: main-test.cpp
	$(CC) $(CFLAGS) main-test.cpp

clean:
	rm -rf *o findRoute*
The runTests target above lists findRoute-test as a prerequisite. If that target builds successfully, make simply executes the recipe line, which runs the test executable.
I found this very intuitive, since running the tests is no different from running any other executable, like the compiler. I hope this helps someone else who is thinking of doing the same or a variation.
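Stripped down to its essentials, the pattern is just a target whose recipe runs the freshly built binary (the names below are placeholders, not from my project):

```makefile
# Minimal sketch of a post-build test step (hypothetical names).
all: myprog runTests       # building "all" also runs the tests

runTests: myprog           # tests require a successful build first
	./myprog --run-tests   # recipe runs only after myprog is up to date

myprog: main.o
	$(CXX) main.o -o myprog
```

Because runTests depends on myprog, make rebuilds the program first and skips the tests entirely if the build fails.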
Do you see anything above that could be improved? I love getting comments that help me get better.