MSU Computer Science lecture series

11 January, 2011 § 1 Comment

Today marks the second day of the Spring semester at Michigan State University. Each semester the department of Computer Science and Engineering lines up speakers from within academia and industry to come to campus and give presentations on their research or line of work.

Last semester brought Dr. Greg Wilson, influential author of Making Software and Beautiful Code; Dr. Wolfgang Bauer, MSU University Distinguished Professor and researcher at the National Superconducting Cyclotron Laboratory; Dr. Zhiewei (Jerry) Cen, Reliability Engineer at Google; Dr. Doug L. James, Associate Professor of Computer Science at Cornell University; and many more.

This semester features research talks from schools such as the Colorado School of Mines, Carnegie Mellon University, UCLA, and McGill University.

The talks are mostly on Fridays and usually start around 11:30am and end around 12:30pm in room 1234 Anthony Hall.

If you are near East Lansing, you should stop on by the talks and see the interesting research going on.

Simple algorithm overviews – Part 1: Vertex Cover

24 November, 2010 § 1 Comment

Inspired by sites like Eric Marcos’ Mathtrain.tv and the Simple English Wikipedia, I’ve decided to make some short screencasts of NP-Complete problems. I hope these screencasts help explain each problem and frame it in a way that is easy to understand.

Here is the first video that I’ve made, covering the Vertex Cover problem:

Let me know what you think and what I should add or remove from the video.
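
For anyone who prefers reading code to watching a video, here is a minimal C++ sketch (my own illustration, not taken from the video) of the classic 2-approximation for Vertex Cover: repeatedly pick an uncovered edge and add both of its endpoints to the cover.

#include <iostream>
#include <set>
#include <utility>
#include <vector>

// Classic 2-approximation: for each edge not yet covered, add BOTH of
// its endpoints. The resulting cover is at most twice the optimal size.
std::set<int> approxVertexCover(const std::vector<std::pair<int, int>>& edges) {
    std::set<int> cover;
    for (const auto& e : edges) {
        // Skip edges already covered by a previously chosen vertex.
        if (cover.count(e.first) || cover.count(e.second))
            continue;
        cover.insert(e.first);
        cover.insert(e.second);
    }
    return cover;
}

int main() {
    // A small example graph: a 5-vertex path plus one chord.
    std::vector<std::pair<int, int>> edges = {
        {0, 1}, {1, 2}, {2, 3}, {3, 4}, {1, 3}};
    std::set<int> cover = approxVertexCover(edges);

    std::cout << "Cover:";
    for (int v : cover)
        std::cout << ' ' << v;
    std::cout << '\n';
    return 0;
}

The cover this returns is guaranteed to be at most twice the size of an optimal one, which is a nice contrast with the NP-completeness of finding the optimum exactly.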

Elephants Don’t Play Chess

18 November, 2010 § Leave a comment

“Elephants Don’t Play Chess” (PDF) is a paper written by Rodney Brooks of the MIT Artificial Intelligence Laboratory. Brooks introduces Nouvelle AI, also called fundamentalist AI, and compares and contrasts it with the then-mainstream symbol system approach, also called classical AI.

Brooks also used this paper to introduce his subsumption architecture, a distributed system that senses and then reacts, much like the nervous system of the human body. The subsumption architecture was an improvement on classical sense-plan-act systems. Time has shown that the subsumption architecture could itself be improved through hybrid approaches that sense and react asynchronously.
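
As a rough illustration only (the behaviors, thresholds, and names below are my own, not Brooks’), a subsumption-style controller can be sketched as a small stack of sense-react layers in which a higher layer suppresses the ones beneath it whenever it fires:

#include <iostream>
#include <string>

// Toy sensor reading; the fields and thresholds are invented for illustration.
struct Sensors {
    double obstacleDistance;  // meters to the nearest obstacle
    bool goalVisible;
};

// Layer 0: wander aimlessly (lowest priority).
// Layer 1: head toward a goal when one is visible (subsumes wandering).
// Layer 2: avoid obstacles (subsumes everything below it).
std::string subsumptionStep(const Sensors& s) {
    if (s.obstacleDistance < 0.5)  // the highest layer wins whenever it fires
        return "turn-away";
    if (s.goalVisible)
        return "move-toward-goal";
    return "wander";
}

int main() {
    Sensors s{0.3, true};
    std::cout << subsumptionStep(s) << '\n';  // prints "turn-away"
    return 0;
}

The point of the sketch is that there is no central planner: each layer reads the sensors and reacts, and priority comes purely from the layering.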

In classical AI, intelligence is described as residing in the sum of the system’s parts; to make such a system smarter, its individual parts must be made smarter. These systems are criticized for being too reliant on finite sets of symbols. The symbols are not understood by the systems themselves and require a human to interpret them. Further, the symbols are heavily dependent on the system’s specific task. Brooks argues that these dependencies produce brittle systems that cannot adapt or scale as the problems change.

In Nouvelle AI, intelligence is described as residing in each part independently; to make a Nouvelle AI system smarter, more parts can be added. These systems are behavior-based: each part of the system understands its own behaviors and lacks knowledge of the other parts. To implement them, the developers departed from symbolic AI and adopted the physical grounding hypothesis, which states that intelligence requires representations to be grounded in the physical world, removing the need for a predetermined, finite set of symbols. The proposed systems use the world itself as their model rather than abstracting reality down to symbols.

After introducing the physical grounding hypothesis, Brooks introduces the reader to multiple robots built in the Nouvelle AI style. These robots seem promising, yet all of them are task-specific and do not learn new behaviors or tasks over time. Brooks’ defense of this lies in his “But It Can’t Do X” argument. This defense is a mischaracterization of the critiques Brooks has received. There is a much larger argument to be made: even though the systems Brooks and his colleagues propose are more behavior-based, they are still task-specific and do not provide open-ended learning of new tasks, nor do they feature autonomous, animal-like online learning.

While Brooks was right about the need for the physical grounding hypothesis, other requirements exist for systems to achieve true human-like intelligence. These requirements lead towards a model of Autonomous Mental Development, where neither the system’s goal nor its world representation is known at programming time.

A Critique of “Computing Machinery and Intelligence”

2 October, 2010 § Leave a comment

Alan Turing’s 1950 paper titled “Computing Machinery and Intelligence” was a ground-breaking work that focused on the future possibilities of, and expected criticisms within, artificial intelligence. The paper sought to move the question of “whether machines can think” towards “whether a machine can imitate a human well enough to cause a human to believe they are communicating with another human.”

Turing’s goal was to make the question more specific and measurable, yet 60 years later no machine has passed the “Turing Test,” and requiring that a machine with artificial intelligence pass it remains a vague goal. Systems have been created that can beat the world’s best chess player, and others can predict which movies customers will enjoy most. While both are beyond the ability of a single human, the Turing Test is never pinned down in the paper as something concretely measurable. This is likely one of the major reasons that Turing’s claim that “in about fifty years’ time it will be possible to program computers … to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning” has still not been validated.

In fact, the mathematical argument presented in Objection 3 further backs up the point about the Turing Test being too vague. Turing states that, on any scale, there will always be something of greater intelligence, yet he fails to specify when the comparison with another intelligent agent becomes unnecessary. His response to that objection would have been a great opportunity to state a more measurable goal for the Turing Test.

While it is easy to argue that no machine has passed the Turing Test, there have been a host of machines that actually have tricked humans into thinking they are having a conversation with another human. Chat programs such as ELIZA and CyberLover have tricked their participants with conversations lasting over five minutes, yet they were never awarded the prize of winning the Turing Test. Because of this, many researchers focus on practical skill tests instead of the Turing Test for their research.

I would like to move on now to some smaller criticisms of the paper. In one part of the paper, Turing describes the method for training a machine. He states that the programmer should obtain “how [the action] is done, and then translate the answer into the form of an instruction table.” While this process will work fine for most cases, it leaves out instances of intuition that would be present in a human. Randomness is unlikely to be a substitute for intuition, or for the feeling a chess player gets when a move simply feels odd.

I do not agree with Turing’s handling of Objection 9. He answers the objection by hoping to put competitors into a ‘telepathy-proof room’. If extra-sensory perception and its four items can be learned by humans, there does not seem to be any reason an intelligent machine could not learn the same skills, provided it is given the necessary sensors. It sounds like Turing responds to Objection 9 in the manner of Objection 2, by burying his “head in the sand.”

Turing stated that the Turing Test should “only permit digital computers to take part in our game,” yet only a few printed pages later does Turing go back on that statement by saying, “the feature of using electricity is thus seen to be only a very superficial similarity. … [W]e should look rather for mathematical analogies of function.”

Towards the end of the paper, Turing discusses the learning process and the formulation of rules that the system should follow. He decides that the “random method seems to be better than the systematic.” Yet as with most rules in our universe, there is a specific ordering and weighting of their importance. I propose an example of two imaginary rules: (1) A machine should not take part in any violence. (2) A machine should keep its owner happy. With a random selection of rules to follow, the machine may decide to rid the world of the owner’s enemies to keep its owner happy, even though this is in clear violation of (1). The decision-making process of a machine must be more sophisticated than a random method of choosing which rule to follow.
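
To make the worry concrete, here is a small sketch (the rules, weights, and function names are invented for illustration and are not from Turing’s paper) contrasting random rule selection with a simple priority ordering:

#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

// An imaginary rule: a name plus a priority (lower number = more important).
struct Rule {
    std::string name;
    int priority;
};

// Random selection: any rule may be chosen, so rule (2) can override rule (1).
const Rule& pickRandom(const std::vector<Rule>& rules) {
    return rules[std::rand() % rules.size()];
}

// Prioritized selection: the most important applicable rule always wins.
const Rule& pickByPriority(const std::vector<Rule>& rules) {
    const Rule* best = &rules[0];
    for (const Rule& r : rules)
        if (r.priority < best->priority)
            best = &r;
    return *best;
}

int main() {
    std::vector<Rule> rules = {
        {"(2) keep the owner happy", 2},
        {"(1) take part in no violence", 1}};
    std::cout << "random:   " << pickRandom(rules).name << '\n';
    std::cout << "priority: " << pickByPriority(rules).name << '\n';
    return 0;
}

With the random method nothing prevents rule (2) from being chosen when rule (1) should dominate; the priority ordering makes the dominance explicit.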

Overall, while the paper was quite groundbreaking for its time, there are a couple of loopholes in Turing’s argument that invite criticism. This paper was one of the foundations of artificial intelligence and will continue to be revisited by current and future researchers.

Turing, A.M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.

How to add a Makefile post-build step

24 September, 2010 § 1 Comment

I spend most of my time working in Visual Studio and like its support for post-build steps. These can be used to copy files, register DLLs, or run unit tests.

In my work at MSU I need to use a makefile and have been writing my code test-first. As such, I found I was spending a lot of time typing make and then typing ./runTests. Ideally, I could just have the tests run automatically after each build.

Looking around on the internet for quick steps to add a post-build step to make wasn’t so fruitful. I ran into a bunch of old forum posts that mentioned Visual Studio-generated makefiles with post-build steps. A little thought about the idea brought the solution straight to mind.

It turns out it is really simple and quite intuitive. Here’s my current makefile:

CC=g++
CFLAGS=-c -Wall -pedantic-errors -Wextra -Wunused -Werror

all: findRoute findRoute-test runTests

rebuild: clean all
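
# Post-build step: building runTests first builds findRoute-test, then runs the tests.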

runTests: findRoute-test
	./findRoute-test.exe

findRoute: main.o cities.o flight_charges.o traffic_controller.o
	$(CC) main.o cities.o flight_charges.o traffic_controller.o -o findRoute

findRoute-test: main-test.o cities.o flight_charges.o traffic_controller.o
	$(CC) main-test.o cities.o flight_charges.o traffic_controller.o -o findRoute-test

main.o: main.cpp
	$(CC) $(CFLAGS) main.cpp

cities.o: cities.cpp
	$(CC) $(CFLAGS) cities.cpp

flight_charges.o: flight_charges.cpp
	$(CC) $(CFLAGS) flight_charges.cpp

traffic_controller.o: traffic_controller.cpp
	$(CC) $(CFLAGS) traffic_controller.cpp

main-test.o: main-test.cpp
	$(CC) $(CFLAGS) main-test.cpp

clean:
	rm -rf *.o findRoute*

The runTests target above lists the findRoute-test target as a prerequisite. If that prerequisite builds successfully, make simply executes the recipe on the next line, which runs the test executable.

I found this very intuitive, since running the tests is no different from running any other executable, such as the compiler. I hope this helps someone else who is thinking of doing the same or a variation.

Do you see anything above that could be improved? I love getting comments that help me get better.

Comments on the “Great ISA Debate”

20 May, 2010 § 2 Comments

As computer architectures have grown and developed over time, many solutions have been proposed. This paper discusses the competition between the IBM System/360 and the Burroughs B5000, as well as its legacy in the debate between the Reduced Instruction Set Computer (RISC) and the Complex Instruction Set Computer (CISC). The end of this paper comments on the current state of computer architecture and where things are headed.

IBM System/360 vs Burroughs B5000

The System/360 and the B5000 differed in many fundamental ways. Figure 1 shows a table of just some of the differences:

Figure 1. Key decisions made by the IBM System/360 and the Burroughs B5000

                   IBM System/360 [1]               Burroughs B5000 [4]
Address size       24 bits                          Program Reference Table with 1024 entries
Character size     4 bit for binary coded decimal   6 bit (8 characters for 48 bit word)
FP size            32/64 bit                        48 bit
Instruction size   Variable: 16/32/64 bit           12 bit
Integer size       32 bit                           48 bit
Register style     General purpose registers        Stack based

The IBM System/360 made the significant decision to be binary compatible across its machines and to have all of them work off a common instruction set [1]. To this day, binary compatibility remains central: the x86 architecture on desktop PCs allows programs written ten years ago to run on machines built yesterday.

Another significant introduction from the IBM System/360 was byte addressability, whereas the B5000 was word addressed [1]. Byte addressability allowed the address size to be larger than the word size and made it possible to use characters located at any address.

The B5000 used a stack-based register style [4], which is still in use today by the Java Virtual Machine. This means that there are no registers available to the assembly programmer, and anything that needs to be referenced must be pushed onto the stack. Another important feature was the B5000’s descriptor system, which used a base and a limit; this, along with the separation between instructions and data, removed the possibility of stack overflow attacks. This allows modern-day virtual machines, like the JVM, to have a more secure sandbox than binary compatible systems that lack the stack architecture.
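
As an informal aside (a toy example of the stack style in general, not the B5000’s actual instruction set), the difference from a register machine is that operands are pushed and operators pop their arguments from the stack instead of naming registers:

#include <iostream>
#include <stack>

// Toy stack machine: no named registers, every operand lives on the stack.
// Evaluates (3 + 4) * 2 using push/add/mul steps.
int main() {
    std::stack<int> s;

    s.push(3);
    s.push(4);

    int b = s.top(); s.pop();      // add: pop two operands,
    int a = s.top(); s.pop();      // push the result
    s.push(a + b);

    s.push(2);

    int d = s.top(); s.pop();      // mul
    int c = s.top(); s.pop();
    s.push(c * d);

    std::cout << s.top() << '\n';  // prints 14
    return 0;
}

The JVM’s bytecode works the same way, which is part of why the comparison to the B5000 comes up so often.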

The B5000’s descriptor-based memory organization is an early form of what is today called segmented virtual memory, which is used by many machines. Burroughs was also the first to start talking about how multiple processors would work in the system [4]. It wasn’t long after that the IBM 360 series added support for multiple processors.

RISC vs. CISC

In the early 1980s David Patterson and David Ditzel introduced a reversal of direction for instruction set architectures, from the Complex Instruction Set Computer (CISC) to their Reduced Instruction Set Computer (RISC).

The processor field was then dominated by the VAX-11/78x systems, which used a CISC architecture and provided the best performance [3]. Patterson and Ditzel believed they could top CISC with a reduced instruction set by taking advantage of the memory hierarchy and the increasing performance per dollar of memory, by letting chip designers finish designs faster, and by optimizing the instructions used most often instead of creating special instructions for edge cases.

RISC took notice of Moore’s Law and the exponential increase in memory performance, and aimed to reduce the number of instructions available while making heavier use of the ones that remained [5]. RISC proponents argued that chip designers could implement a reduced instruction set in far less time than a CISC design; in the time needed to design a new CISC, top-of-the-line performance would have doubled and the gains of the newly designed CISC would be worthless [5]. But these arguments had their detractors. The VAX systems architects argued that the gap between the performance of instructions and the performance of main memory was large, and that this gap continued to favor an instruction set catering to high-level languages, similar to the design of the B5000 [2].

Patterson and Ditzel cited Amdahl’s Law in trying to optimize the most well-used parts of a system to have the largest effect, and in doing so, pulled work away from the processor and gave it to the compiler. This allowed the CPU to get smaller and make room for other components on the chip. Time has agreed with this argument, as the dominant CPU for mobile devices today is the Advanced RISC Machine (ARM) chip.
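
For reference, the arithmetic behind that argument is just Amdahl’s Law; the 80%/2x and 5%/10x figures below are invented for illustration and are not from either paper:

#include <iostream>

// Amdahl's Law: overall speedup when a fraction f of execution time
// is sped up by a factor s, and the remaining (1 - f) is unchanged.
double amdahl(double f, double s) {
    return 1.0 / ((1.0 - f) + f / s);
}

int main() {
    // Example: optimize the instructions responsible for 80% of execution
    // time by 2x, versus optimizing a 5% edge case by 10x.
    std::cout << "common case: " << amdahl(0.80, 2.0) << "x\n";   // ~1.67x
    std::cout << "edge case:   " << amdahl(0.05, 10.0) << "x\n";  // ~1.05x
    return 0;
}

Speeding up the common case by a modest factor beats a large speedup on a rarely used instruction, which is exactly the argument for dropping the rarely used instructions.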

One of the main arguments for RISC was that its reduced instruction set would allow implementers to innovate at a much higher pace and surpass the developments of CISC implementers. This claim has not held its ground, as Intel has continued to use the x86 architecture (CISC) in its desktop CPUs and has been able to keep up with the pace of innovation seen in other ISAs.

Neither paper was clear about the definition of RISC or CISC. As time has gone on, RISC has remained a load/store architecture, but the complexity of both has increased.

In Conclusion

The IBM System/360 was designed to have an ISA that was good for microcoded processors, paving the way for the CISC, while the Burroughs B5000 was about pipelining and VLSI, leading to RISC.

The key difference between RISC and CISC is the separation of hardware and software responsibilities. RISC’s goal was to move complexity away from hardware and into software, where bugs can be fixed more cheaply and quickly; this also gave compilers more work in choosing efficient instruction sequences. CISC’s goal was to make the hardware more intelligent and the compiler writer’s job easier.

Today, RISC is used in embedded devices and high-end servers, whereas CISC dominates the desktop market and the lower-end server market.

References

  1. Amdahl, Blaauw, and Brooks. Architecture of the IBM System/360. IBM Journal of Research and Development, 8(2):87-101, April 1964.
  2. Clark, Douglas W. and Strecker, William D. Comments on “The Case for the Reduced Instruction Set Computer,” by Patterson and Ditzel.
  3. Hennessy, John L. and Patterson, David A. Computer Architecture: A Quantitative Approach. 4th Edition. Page 3. 2006.
  4. Lonergan and King. Design of the B5000 system. Datamation, vol. 7, no. 5, pp. 28-32, May, 1961.
  5. Patterson, David A. and Ditzel, David R. The Case for the Reduced Instruction Set Computer.

An Evaluation of the Future of Computing

13 May, 2010 § 2 Comments

This article was written by Brendan Grebur and Jared Wein.

Current technology can no longer support the ever-shrinking size of transistors and switches. To combat this limitation, researchers are focused on novel approaches to either replace current MOSFET transistors or supplement their abilities. Many notable technologies have been leveraged toward the problem, though none has yet been realized at large scale. The articles explore two areas of technology aimed at resolving our computational situation. First, manufacturing hardware on the molecular scale presents new challenges as the peculiarities of quantum mechanics begin to reveal themselves; advances in nanotube technology and an ever-growing knowledge of quantum physics present new opportunities and even challenge our classical view of computation. Second, the actual means of computing are explored as the electron is abandoned for photons or nanofluids.

This paper will cover different research areas that intend to pave the way for the future of computing. The first part of this paper will describe, compare, and contrast the techniques being researched. The second part of this paper will describe two techniques that are predicted for adoption within the authors’ lifetime. The last part of this paper will cover the research area that offers the greatest improvement and largest chance to fundamentally change and improve computing.

Trillion Crummy Components [5]

The ability to perform in the face of massive component failure and faults deeply concerned system architects and operators in the early days of computing. A fortuitous advancement into integrated circuits rendered the issue moot. However, as the scales approached nanometers, the problems resurfaced. This paper looks into a method for developing FPGAs with a nanoswitch/CMOS combination.

In manufacturing ever smaller components, failure rates inevitably increase as little room is left for error. Researchers explored building nanoscale systems containing faulty components and attempted to cope with them dynamically. Wires connected by nanoswitches on the scale of 15 nanometers were produced through nanoimprint lithography (NIL) techniques.

The design suffers from failures and faults in the nanoswitches that connect the CMOS transistors to form the configurable logic system. Scanning the system for faulty components yields a map of the functional ones, which can then be dynamically connected through the FPGA configuration. Replacing components typically implemented in CMOS with nanoswitches freed a significant amount of space, enough to obtain an eightfold increase in logic density, which translates to a multiple-generation chip improvement. In the end, even with 50% of the switches nonoperational, production yield remained at 99.7%, and the performance of the FPGAs was only slightly affected by the nonfunctional components.

The authors of this paper are trying to show that faulty components can be overcome through redundancy, and the work gives an initial look at how faulty components can be routed around through switches. The paper leaves many important questions unanswered, such as the minimum amount of redundancy needed for each of the necessary functional units. The authors assume the CMOS transistors will not fail, only the wires and switches. When this is extended to CMOS transistors, or their replacements, how many of those will need to be replicated? The answer could essentially undo the advantage of reducing component sizes, since more components become necessary to maintain functionality.
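
To get a feel for why redundancy can rescue yield at all (the numbers below are purely illustrative and are not the paper’s defect model), here is a quick sketch of the probability that a block still has enough working switches even when half of them are bad:

#include <cmath>
#include <iostream>

// Probability that at least k of n components work, if each works
// independently with probability p (a binomial tail). Illustrative only;
// the paper's actual defect model is more involved than this.
double atLeastK(int n, int k, double p) {
    double total = 0.0;
    for (int i = k; i <= n; ++i) {
        double comb = 1.0;  // C(n, i), built up incrementally
        for (int j = 0; j < i; ++j)
            comb = comb * (n - j) / (j + 1);
        total += comb * std::pow(p, i) * std::pow(1.0 - p, n - i);
    }
    return total;
}

int main() {
    // Need 8 working switches; provision 24 in total, half of which are broken.
    std::cout << atLeastK(24, 8, 0.5) << '\n';  // ~0.97
    return 0;
}

Of course, the paper’s open question stands: every bit of this redundancy costs area, which is exactly the resource that shrinking the components was supposed to buy back.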

Carbon Nanotubes [3]

Single-walled carbon nanotubes (SWNT) exhibit a variety of advantages over silicon when used in transistors. Some of these include reduced electron scattering, better heat dissipation, and less chemically reactive material. Researchers have already produced p-type and n-type transistors which outperform their silicon based counterparts in speed and mobility.

A hurdle for implementation remains in correctly placing and orienting the SWNTs within the system, as a single nanotube can measure 1 nm in width. However, a ring oscillator was constructed on a single nanotube and sustained operation in excess of 70 MHz, affirming the potential of the technology. Further progress will depend on advancements in the construction of nanotubes.

The other application resides in constructing switches from nanotubes for nonvolatile memory. Inherent characteristics could allow impressive switch density, along with switching speeds in excess of 100 GHz, all using minimal power consumption.

Restrictive production methods have prevented full exploitation, though novel implementations could resolve these problems. One company used mats of criss-crossing nanotubes capable of being locked in certain directions, a simple but quite effective approach with lifetimes of 50 million cycles and switching between states in under 3 ns.

Further exploiting SWNT characteristics, nanotubes can replace copper interconnects, as they do not exhibit copper’s limitations at the nanoscale. The bandwidth of nanotubes is more than three orders of magnitude greater than that of copper, allowing bundles of nanotubes to carry current with lower losses than copper wires.

Quantum computing also benefits from the utilization of nanotubes. Singled-out electrons can be ‘caged’ within the carbon structure to represent a quantum bit, and the increased spin relaxation time of particles within nanotube quantum dots is advantageous for constructing quantum computers.

This paper looks into completely replacing our dependence on silicon and copper for constructing circuits. SWNTs offer superior performance in almost all respects but have not become ubiquitous due to large-scale manufacturing issues. The material is so versatile, atomically abundant, and inherently scalable that once the manufacturing hurdles are overcome we are sure to see it overwhelm the market.

Molecular, Chemical, and Organic [6]

Chemical computing currently operates at the macroscopic level to implement complex algebraic operations. Optical, electrical, or chemical signals are used to represent data. Fluorescence cues have been the most successful, but implementation at a reduced scale remains the sticking point.

One approach attempts to use single molecules as temporary bridges between electrodes, but accurate measurements of its effectiveness are rarely obtained.

Scaling remains an issue, even at the theoretical level, for circuits constructed from organic molecules. Researchers must deal with quantum mechanical effects when operating at the molecular level, and the theorized behavior may be an artifact of the approximations in their calculations.

Some attempts to embrace quantum phenomena have found success. A cascading effect seen when lining up carbon monoxide molecules results from the tunneling of vibrational energy from one molecule to another. The positioning of the molecules can represent binary encodings and perform logic operations.

Traditional electron-based signaling is abandoned here in favor of natural molecular interactions that represent data or perform useful work. Again, other materials are researched for their potential to outperform current technology. Many issues arise from quantum phenomena and from accurately placing or observing individual molecules. Such limitations will inevitably inhibit the widespread application of organic computing.

Quantum Computers [2]

Quantum computing represents an inevitable shift in our quest for ever smaller computation scales. Only recently has the combination of quantum theory and computer science been accepted as practical. Beginning with Peter Shor’s discovery, quantum computing entered the scene by undermining one of the most important foundations of cryptography, the difficulty of factoring.

Much of the theoretical framework of quantum computing was almost rendered useless by the introduction of imprecision and quantum noise. Since computation occurs on infinite-precision data represented by the amplitudes of configurations, any errors that enter the system would propagate and corrupt the results. However, it has been shown that error-correcting codes, coupled with redundancy, can prevent such events from destroying the computation.
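
As a loose classical analogy only (a majority-vote repetition code, not one of the actual quantum error-correcting codes the paper refers to), redundancy lets an occasional flipped bit be corrected before it corrupts the result:

#include <iostream>

// Classical 3-bit repetition code: store each logical bit three times
// and decode by majority vote. A single flipped copy is corrected.
int encodeAndCorrect(int bit, int flippedCopy) {
    int copies[3] = {bit, bit, bit};
    copies[flippedCopy] ^= 1;                      // one copy gets flipped
    int ones = copies[0] + copies[1] + copies[2];  // majority vote
    return ones >= 2 ? 1 : 0;
}

int main() {
    // A logical 1 survives a single-copy error no matter which copy flips.
    for (int i = 0; i < 3; ++i)
        std::cout << encodeAndCorrect(1, i);
    std::cout << '\n';  // prints 111
    return 0;
}

Quantum codes are considerably more subtle, since arbitrary amplitudes cannot simply be copied, but the underlying idea of spreading one logical unit of information across several physical ones is the same.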

Further discoveries solidified the feasibility of quantum computers, specifically the threshold theorem. The only hindrance lies in the implementation of quantum bits and the manner in which they are controlled.

One of the greatest advancements to stem from this field was the introduction of quantum key distribution. The unconditional detection of eavesdropping allows a level of security unparalleled by traditional methods. Implementation details are surprisingly tolerant of imperfections in device endpoints and error rates. Such technology is currently marketed and actively investigated by many large technology companies.

Quantum algorithms have forced us to reconsider standard approaches to classical computation problems. Algorithms ranging from database search to estimating Gaussian sums have runtimes significantly smaller than any current approach.

Quantum computing has certainly altered every aspect of computer science. Not only will circuit technology be completely redesigned, but the manner in which information can be represented opens up endless possibilities. The functionality is there, but much is still needed to actually access it.

Optical Computing [1]

By replacing electrons with photons, computers would no longer suffer common sources of failure, including electromagnetic disturbances and short circuits. Even more advantageous attributes of light are its immense bandwidth, low-loss transmission, and unprecedented capacity when used for storage. Optical devices also tend to be less expensive to manufacture and operate at higher frequencies, resulting in superb computational speeds.

Currently constructed logic gates operate on the order of nanoseconds, while photonic switches can reach the femtosecond range. From figures like these, it would seem only natural for the technology to completely replace silicon. However, some important factors continue to prevent a fully optical solution.

One issue concerns the immense difficulty of cascading large numbers of logic gates together. Half-adders can currently be produced, but extending past that presents a greater challenge. Another issue relates to the materials used for constructing switches, which demand large numbers of photons to function properly. Such optical intensity generates counterproductive signals, including stimulated Brillouin scattering and self-phase modulation.

By exploiting particles that travel at the universal speed limit, we have reached the ceiling for transmission speed. In addition, producing photons is far more efficient than producing electrons, while potentially decreasing thermal output. The technology to effectively link the necessary number of optical logic gates together remains to be found.

Nanofluidic Computing [4]

Fluid interactions at the micro-scale can be manipulated to perform computational logic or to serve as a means of memory storage. These interactions are known to be slower than current silicon technology, so the aim is simply to supplement it.

One type of microfluidic device implemented logic gates without electrical control by using the resistance of fluid flows. This approach has the advantage of running multiple logic gates simultaneously from the same flow, so cascading gates could be implemented easily.

Another type used a hybrid approach where an electrical current would restrict the flow of potassium ions between silicon dioxide plates. A binary state could then be represented and logic gates created from them. Essentially, the standard transistor is replaced by fluid.

Due to limitations in the technology, its most likely role in standard computing systems would be as a reliable communication line. Applications could also extend to analyzing other fluids and performing actions based on the presence of a substance. Since microfluidic systems require only small amounts of test material, they may find a place in blood tests or in testing hazardous samples.

This area of research is at an early stage and seems to be limited by computational speed and the ability to cascade components. Perhaps the parallel capability and small sample size could prove useful in the future, but for now there appear to be no practical applications.

Summary

A Trillion Crummy Components attempts to resolve a problem we are likely to face soon. By adding redundant components accessible through the crossbar technique, we can compensate for failures and maintain a functional system. Here we deal only with a scaling issue while keeping CMOS technology. All the other papers looked to completely replace CMOS, most with the goal of ground-up nanoscale construction. Some simply used other materials to complete the same tasks with different signals representing data, i.e. photons, chemicals, or fluids. One paper, on quantum computing, explored an area that could completely revolutionize everything we know about computing, challenging the very concept of data representation with superpositions and infinite-precision variables.

Out of all these radically different approaches, some can certainly be considered more viable than others. Carbon nanotubes offer nothing but advantages over our silicon devices: their inherently miniature size supports massive density, coupled with power efficiency, better heat dissipation, and astounding mechanical switching speed. With more research to perfect the manufacturing process, full-scale realization should be upon us shortly.

Optical computing also offers a formidable technology we are likely to see soon. Optics are already heavily used in the computing industry for networking, data storage (CD-ROM), and scanning. Replacing electrical communication and data representation with light significantly decreases latency and operates flawlessly in the face of electromagnetic interference. Photons offer massive bandwidth as a result of overlapping frequencies without corruption. Optical computing offers such an increase in computational speed (on the order of 10⁷) that it would prove an invaluable commodity, worthy of any development cost. Cascadability appears to be the biggest challenge, but exploration into suitable materials could resolve the problem.

The crummy-computing techniques discussed above will likely appear as transitions are made to the aforementioned devices, but they are restricted to traditional silicon. The future appears to point toward a radical departure from the norm. Organic and nanofluidic computing are hindered by a lack of speed or reliability. Finally, quantum computing suffers from a lack of large-scale qubit implementations and from the difficulty of properly applying quantum error correction. We are sure to see the birth of quantum computers, but not within this lifetime.

Future Impact

One technology stands apart from the rest in its potential to forever change computation. Quantum computing has already provided a security technology guaranteed by physics to be unbreakable. A technology that allows for accurate representation of what is effectively an infinite-precision variable is, without doubt, an incredible advancement; no other technology visited provides such a feature. With claims as dramatic as factoring in polynomial time, problems once considered intractable become approachable. Many other traditional computing problems have been redesigned to run on quantum computers, experiencing substantial decreases in runtime. With such raw computational power, humans are offered the chance to look further into the physics governing our world through simulation. Quantum computers are sure to offer us a wealth of information and possibilities to change the way we live and think.

References

[1] Abdeldayem, H. and Frazier, D. O. “Optical computing: need and challenge.” Communications of the ACM. Special Issue: Beyond Silicon: New Computing Paradigms (Sep 2007): 60-62.

[2] Bacon, D. and Leung, D. “Toward a world with quantum computers.” Communications of the ACM. Special Issue: Beyond Silicon: New Computing Paradigms (Sep 2007): 55-59.

[3] Kong, J. “Computation with carbon nanotube devices.” Communications of the ACM. Special Issue: Beyond Silicon: New Computing Paradigms (Sep 2007): 40-42.

[4] Marr, D. W. M., and Munakata, T. “Micro/nanofluidic computing.” Communications of the ACM. Special Issue: Beyond Silicon: New Computing Paradigms (Sep 2007): 64-68.

[5] Robinett, et al. “Computing with a trillion crummy components.” Communications of the ACM. Special Issue: Beyond Silicon: New Computing Paradigms (Sep 2007): 35-39.

[6] Stadler, R. “Molecular, chemical, and organic computing.” Communications of the ACM. Special Issue: Beyond Silicon: New Computing Paradigms (Sep 2007): 43-45.
