Another semester is around the corner…

2 January, 2011 § 1 Comment

January 10th will mark the beginning of my last semester at Michigan State. Come May 2011, I’ll be graduating with my Master’s in Computer Science.

I’ve been attending Michigan State University since September 2004, when I started on my Bachelor’s in Computer Science. I graduated in May 2008 and began working full-time at TechSmith Corporation. I had been working at TechSmith since May 2007 as a software engineering intern, so the transition from intern to full-time was easy, especially since I had already been working 40-hour weeks during the summers. I enrolled in the graduate program in January 2009 while continuing to work full-time at TechSmith. Two and a half years later, I will be completing the graduate program.

I’ve learned a lot since 2008, from feeding my desire to keep learning and experimenting all the way to learning how to handle failure. These haven’t been an easy two and a half years, but I wouldn’t trade them for anything. It’s been great taking the advanced courses and discovering new tools for learning. I have really gotten into different teaching styles, such as inverted classrooms and hybrid classes that incorporate other lecture styles.

One tip that I often give to new graduate students is to look online for course materials from other schools. I have stumbled upon great treasures in recorded lectures on computer architecture and natural language processing. Watching these lectures online before attending class gives you multiple perspectives on the material and an extra pass through the discussions.

For this last semester I will be taking two courses: Advanced Operating Systems (CSE 812) and Translation of Programming Languages (CSE 450).

Translation of Programming Languages is often referred to as “the compilers course.” It is the first undergraduate-level course I will have taken as a graduate student (you are allowed three), but I never got a chance to take it as an undergrad and felt I had really missed out.

I’m looking forward to this semester and all that it brings. I will try to post some of my research from these courses as the semester continues.

The Thirteen Dwarfs

12 May, 2010 § Leave a comment

This article was written by Brendan Grebur and Jared Wein.


Adapting to the next step in processor technology requires a reevaluation of current computational techniques. Many-core architecture presents a fundamentally different perspective on both software and hardware systems. With no clearly effective model to follow, we are left to our own devices for advancement. The dwarf approach provides guidance, but does not guarantee an effective grasp of the underlying issues. It does, however, provide a starting point for exploring how data interacts at extreme levels of parallelism. Of course, as with any method, pitfalls may arise.

This paper is a critique of the research presented by [Asanovic 06]. We present the motivation for their work, a comparison between their proposal and present-day solutions, and a look at the advantages and disadvantages of their proposal.


The sixteen years from 1986 to 2002 showed tremendous growth in processor performance. Around 2002, the industry hit a “brick wall” when it ran into limits in memory, power, and instruction-level parallelism [Hennessy and Patterson 07]. While performance gains slowed, performance requirements continued to grow at an even higher pace than before.

As an example, Intel has been forecasting an “Era of Tera”. Intel claims that the future holds a time when there is demand for teraflops of computing power, terabits per second of communication bandwidth, and terabytes of data storage [Dubey 05]. To prepare for this era, Intel has declared a “sea change” in computing and put its resources into multi- and many-core processor architectures [Fried 04].

Previously, multi- and many-core computer architectures have not enjoyed lasting mainstream success [Asanovic 06]. One suspected cause of the poor adoption is the increased complexity of parallel programming. [Asanovic 06] developed thirteen independent paradigms, the “dwarfs,” that represent the active areas of parallel computing. After compiling their list, they compared it with Intel’s list and noticed heavy overlap between the two, which reassured them that they were on the right path.

Advantages of the Dwarfs

The first advantage of the proposed dwarfs is that there are relatively few of them to be concerned with. Psychology research has shown that humans cope easily with understanding about seven, plus or minus two, items [Miller 56]. The number of dwarfs, at thirteen, is not too far from this desirable number, and it keeps the test space conceivably small. Had the number of proposed dwarfs been in the hundreds, it might have become too costly to determine whether a system can handle all of them.

Another advantage concerns benchmarking software. Some of the SPEC benchmark programs work on computational problems that are no longer active research areas and are less relevant when measuring system performance. The dwarf approach aims to focus on active areas within computer science, and to adapt those focus areas over time to stay relevant.

The years since assembly programming have shown that higher-level languages can provide software developers with increased productivity while maintaining software quality. The goal of the parallel programming dwarfs is to extend these common problems into programming frameworks that help developers program at an even higher level while maintaining efficiency and quality. The dwarfs lend themselves to being treated in the same regard as object-oriented design patterns. One study used the dwarfs to easily find places within existing sequentially-programmed applications where refactoring could introduce parallel programming solutions [Pankratius 07]. The researchers in that study achieved performance improvements of up to 8x using autotuning and the parallel programming patterns described by the dwarfs.
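As a rough illustration of the kind of pattern such a framework might capture, the “map” style of data parallelism (a close cousin of the MapReduce dwarf) can be sketched in Python. This sketch is my own, not code from [Asanovic 06] or [Pankratius 07], and the kernel is invented for illustration; note that CPython threads will not actually speed up CPU-bound work because of the global interpreter lock, so this only shows the structure of the pattern.

```python
from concurrent.futures import ThreadPoolExecutor

def expensive_kernel(x):
    # Stand-in for an independent, per-item computation.
    return sum(i * i for i in range(x))

def parallel_map(func, data, workers=4):
    # The "map" pattern: each item is processed independently,
    # so a runtime is free to schedule items across workers
    # without locks or coordination between them.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, data))

results = parallel_map(expensive_kernel, [1000, 2000, 3000, 4000])
```

The appeal of such a framework is exactly what the dwarfs argue for: the developer states *what* is independent, and the scheduling details stay behind the abstraction.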

Disadvantages of the Dwarfs

There are a number of questions that haven’t been answered by the Berkeley researchers. Below we present a few issues that we are either unsure of or believe are disadvantages of the dwarf paradigm.

First, it is not well explained how a combination of dwarfs is supported on a given system. When a system claims to support two dwarfs independently, there does not appear to be a deterministic answer as to whether the system will support the combination of the two. Further, the interactions between dwarfs do not appear to be well documented.

Next, [Asanovic 06] cites the power of autotuning for sequential programs, but admits that there have been no successful implementations of an autotuner for parallel programs. The problem space for parallel programs is much larger than that of sequential programs, and various factors will have to be elided to make developing such an autotuner reasonable. Further, it is unclear whether the parameters found during autotuning will remain optimal when the actual software runs on datasets vastly different from the autotuned dataset.
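To make the autotuning idea concrete, here is a minimal hypothetical sketch in Python: it exhaustively times one tunable parameter (a “tile size”) on a toy kernel and keeps the fastest setting. The kernel and parameter are invented for illustration; a parallel autotuner would also have to search thread counts, schedules, and data layouts, which is exactly the combinatorial explosion discussed above.

```python
import timeit

def run_kernel(tile_size, n=256):
    # Toy tunable kernel: sum a list in chunks of `tile_size`.
    # The answer is the same for every tile size; only speed varies.
    data = list(range(n))
    return sum(sum(data[i:i + tile_size]) for i in range(0, n, tile_size))

def autotune(candidate_tiles, repeats=5):
    # Time each candidate configuration and keep the fastest.
    best_tile, best_time = None, float("inf")
    for tile in candidate_tiles:
        t = timeit.timeit(lambda: run_kernel(tile), number=repeats)
        if t < best_time:
            best_tile, best_time = tile, t
    return best_tile

best = autotune([8, 16, 32, 64, 128])
```

The caveat in the paragraph above shows up even here: the winning tile size is only “best” for this `n` and this machine, and may not transfer to a different dataset.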

Last, the area of parallel programming presents a virtual graveyard of companies and ideas that failed to reach the market successfully. Multi- and many-core architectures have so far failed to deliver the programmer productivity and quality that the uniprocessor did. Only time will tell how the field of software development reacts and whether it can adopt the shift in computing paradigms.


Computing performance improvements have recently hit a “brick wall,” and major chip makers have pushed a shift toward multicore systems. There are many complexities in multi- and many-core systems that can lead to disappointing performance gains. Some of the problems may reside in the complexity of implementing common parallel computing patterns.

The researchers in the Par Lab at the University of California, Berkeley have been attacking multi- and many-core problems since 2006. From their research, they created a list of thirteen dwarfs that represent active areas in parallel computing. These dwarfs provide common patterns that can be reproduced and used for benchmarking, and for creating programming frameworks that abstract away complex problems.


References

Asanovic et al. The Landscape of Parallel Computing Research: A View from Berkeley. Technical Report UCB/EECS-2006-183, UC Berkeley. Dec 2006.

Dubey, P. Recognition, Mining and Synthesis Moves Computers to the Era of Tera. Technology@Intel Magazine. Feb 2005.

Fried, I. For Intel, the future has two cores. ZDNet. 2004.

Hennessy, J. and Patterson, D. Computer Architecture: A Quantitative Approach, 4th edition, Morgan Kaufmann, San Francisco, 2007.

Miller, G. A. “The magical number seven, plus or minus two: Some limits on our capacity for processing information”. Psychological Review. Vol 63, Iss 2, pp 81–97. 1956.

Pankratius et al. Software Engineering for Multicore Systems – An Experience Report. Institute for Program Structures and Data Organization. Dec 2007.

The Missing Memristor

11 May, 2010 § 3 Comments

Hewlett Packard recently announced that they have fabricated memristor chips and plan to bring them to market by 2013 [8]. This paper is an evaluation of the memristor and focuses on the potential impact of its introduction.


The memristor is an electrical circuit element similar to a resistor, but with the potential to retain its state when power is turned off. It was first described by Leon Chua in 1971 as the fourth fundamental circuit element, based on a relationship between charge and flux [1]. Memristors are about half the size of the transistors found in current flash storage devices such as iPods and portable USB drives, allowing the capacity of these devices to double. Further, Hewlett Packard has claimed the ability to improve the data-write cycle limit tenfold, from flash technology’s 100,000 to the memristor’s 1,000,000 write cycles [3].

A Computer Architecture Perspective

The memristor promises to reshape the memory hierarchy seen in almost all computers today. Because the memristor’s state is nonvolatile and its width is planned to reach about four nanometers, large amounts of memory could be stored directly on the processing chip. The memory hierarchy in today’s computers may have three or four levels of volatile cache, another level of volatile DRAM, and finally a level of nonvolatile storage. Putting nonvolatile memory close to the logic unit on the chip has the potential to make memory accesses 1,000 times faster [1].

Memristor caches may be able to retain their state even as power to the device comes and goes. During current computer start-up routines, the bootloader must repopulate the translation lookaside buffer (TLB) at each boot. With memristor technology, the TLB could retain its state between power cycles, allowing a much quicker start-up. As the technology matures, the same thinking could be extended all the way to resuming running processes when the machine starts back up.

Initially, Hewlett Packard’s goal is to place memristors between DRAM and disk technology, eventually spreading in both directions to replace both [4]. Given the memristor’s speed and data-integrity guarantees, this goal is likely to bring hard-drive-like reliability at speeds faster than DRAM. Consumers are likely to see these speed increases when writing to their iPods and USB drives, since the higher write-cycle limit allows the same sector to be rewritten repeatedly, whereas current flash implementations must avoid writing to the same sector too often.

Another use for memristors is computing logic. Using material implication, a group of three memristors can compute any Boolean logic function [2]. The motivation for using memristors in logic computation comes from the ability to run the computations at the nanoscale. The use of material implication complicates the Boolean logic, but not enough to offset the gains in size reduction; the gains are enough for one memristor to replace fifteen transistors [9].
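The flavor of implication-based logic can be simulated in ordinary Python. The sketch below is my own illustration, not code from [2]: `imp` models the material-implication primitive, and NAND, which is universal for Boolean logic, is built from two IMP operations across a trio of memristors p, q, and a cleared working device s.

```python
def imp(p, q):
    # Material implication: p IMP q  ==  (NOT p) OR q.
    # In the memristive realization, the result overwrites q's state.
    return (not p) or q

def nand(p, q):
    # NAND from IMP alone, using a third, initially cleared memristor s:
    s = False
    s = imp(p, s)   # s = NOT p
    s = imp(q, s)   # s = (NOT q) OR (NOT p) = NAND(p, q)
    return s
```

Since every Boolean function can be expressed with NAND gates, this gives one way to see why a small group of memristors suffices for arbitrary logic.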

Combining the use of memristors for storage and logic may also open a new application for the device. Hypothetically, a crossbar of memristors could re-purpose groups of memristors for data-intensive or compute-intensive operations. Memristors are also very power-efficient [1]. Embedding memristors within sensor networks could allow those networks to collect more data and have much longer lifetimes.

The power wall and the memory wall could effectively be written off if the memristor lives up to its claims.

Impact for the General User

As previously mentioned, memristor technology will allow much faster write times than flash technology. General computer users will be able to sync their portable devices in 1/1000th of the time it takes today. Not only will devices sync faster, they will store more data than could ever have been conceived of before. HP expects to reach a storage density of about 20 gigabytes per square centimeter by 2013. As a comparison, the surface area of an Apple iPod Nano is about 64 cm², which could translate to over 1.25 terabytes of data on the surface of the device alone.
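The back-of-the-envelope arithmetic behind that capacity figure checks out, taking the 20 GB/cm² density and 64 cm² area from the text and assuming binary units for the gigabyte-to-terabyte conversion:

```python
density_gb_per_cm2 = 20     # HP's projected areal density for 2013
surface_area_cm2 = 64       # approximate iPod Nano surface area
total_gb = density_gb_per_cm2 * surface_area_cm2  # 1280 GB
total_tb = total_gb / 1024  # GB -> TB (binary units)
print(total_tb)             # 1.25
```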

General users can expect to see devices that use this extra storage space to continuously collect data. The vast amounts of data that can be collected are in line with Intel’s “Era of Tera” forecast [5]. Devices will be able to use the data for facial recognition of past acquaintances, weather forecasting through crowd-sourced data collection, and even more interesting applications that have yet to be thought of.

Critiques of the Memristor

To implement the memristor, titanium dioxide is used as the semiconductor instead of the traditional silicon. There is still much to be understood about titanium dioxide, as a team from the National Institute of Standards and Technology (NIST) noted [6]. Silicon has had many years to mature as a semiconductor in electrical circuits, whereas researchers are just beginning to understand how to use titanium dioxide. “The fundamentals of why these metal oxides switch the way they do are not well understood,” said NIST researcher Curt Richter [3]. Further time and research may resolve the unknowns, but at this point there may not be enough known about the material to be sure of its promise.

Hewlett Packard and Intel have a long history of working together to introduce new technology, such as their work on the EPIC Itanium architecture. Intel was approached to work on memristor technology with Hewlett Packard but declined the offer. Instead, Intel is focusing its energy on phase-change memory [4]. Intel already shipped phase-change memory samples in 2008 and plans to ship mass quantities in 2010 [7], as opposed to Hewlett Packard’s expected 2013 release date for the memristor.

In Conclusion

The memristor presents an amazing opportunity for change in the computer memory hierarchy, storage capacity, and nonvolatile state. Lab results suggest we are just at the cusp of the memristor’s capabilities. That said, there are still unknowns about the technology that will challenge its adoption. The material used as the semiconductor is relatively new to electronic circuits, and competing technologies appear to be closer to market than the memristor.

Many of the publications on the memristor come from its main researcher, R. Stanley Williams of Hewlett Packard. Many of the claims have not been validated by a third party and could easily be exaggerated. If given the opportunity to invest in memristor technology, I would advise taking a deeper look into phase-change memory and the other competitors that look to be closer to reaching the market. Computing technology changes very quickly, and there is a low probability that the computing environment three years from now will be unchanged from today.


References

[1] Adee, S. The Mysterious Memristor. IEEE Spectrum. May 2008.

[2] Borghetti et al. ‘Memristive’ switches enable ‘stateful’ logic operations via material implication. Nature. Vol 464. Apr 8, 2010.

[3] Bourzac, K. Memristor Memory Readied for Production. Technology Review. MIT Press. Apr 2010.

[4] Foremski, T. The amazing memristor – beyond Moore’s law and beyond digital computing. ZDNet: IMHO. Apr 19, 2010.

[5] Garver, S. and Crepps, B. The New Era of Tera-scale Computing. Intel Software Network. Jan 15, 2009.

[6] Jones, W. A New Twist on Memristance. IEEE Spectrum. Jun 2009.

[7] Miller, M. Memristors: A Flash Competitor that Works Like Brain Synapses. PCMag: Forward Thinking. Apr 14, 2010.

[8] Null, C. Memristor technology gets real; commercial release planned for 2013. Yahoo! News: Today in Tech: The Working Guy. Apr 2010.

[9] Williams, R. S. How We Found the Missing Memristor. IEEE Spectrum. Dec 2008.

Another semester begins

19 January, 2010 § 2 Comments

January 11th marked the beginning of my third of five semesters for my Master’s in Computer Science. This semester, as in previous semesters, I am taking two courses.

Along with the coursework, I will continue my role as the Graduate Representative to the Computer Science & Engineering Advisory Committee as well as the Graduate Advisor for the Black Poet Society.

If you have anything that you would like me to bring up to either group, you can send me an email. I will also be attending the Computer Science and Engineering Graduate Student Association (CSEGA) meetings.

Graduate School

16 November, 2009 § 1 Comment

seems like every
day my computer goes to
sleep before I do
