Finished reading “Best Kept Secrets of Peer Code Review”

17 June, 2009 § 3 Comments

Today I finished reading “Best Kept Secrets of Peer Code Review.” I’ll be giving a presentation on it tomorrow to the development staff at work, and I hope to publish some notes about the book here soon.

In short, it is a free book from Smart Bear Inc. that covers the history and techniques of peer code review. Smart Bear is the maker of Code Collaborator, web-based code review software that companies host internally to facilitate reviews.

I haven’t found a digital copy of the book, but one may well exist if you want to save a tree. There is also a PDF on their website covering best practices for peer code review that is worth a look.

Moving Away from Exceptional Code

14 June, 2009 § Leave a comment

I once took a training course taught by Ron Jeffries and Chet Hendrickson on Test Driven Development. One of the participants asked Ron and Chet what their recommendations were for testing exceptional code paths. At first I didn’t think much of the question, since every unit testing framework I’ve used has a built-in way to test exceptional code paths.

  • VSTS Unit Testing (.NET) has an attribute that you can put on a test to say that it is expecting an exception:
    [TestMethod, ExpectedException( typeof ( ApplicationException ) ) ]
    public void CreationOfFooWithNullParameterShouldThrowApplicationException()
    {
       var foo = new Foo( null ); //should throw an ApplicationException
    }
  • Google Test (C++) wraps the call that is expected to throw in a macro:
     ASSERT_THROW(Foo(5), bar_exception);
  • unittest (Python) has a special assert to test if an exception was raised, similar to Google Test:
    def testsample(self):
       self.assertRaises(ValueError, random.sample, self.seq, 20)

The answer Ron gave wasn’t what I expected, but it led me to think about the way we use exceptions within our codebase.

From what I remember, Ron said that he simply doesn’t test exceptional cases. How? He doesn’t have exceptional cases. He recommended using the Null Object pattern to get away from most of the exceptional cases.

Developers often have to pollute their code with checks to see if a value is null, like so:

XmlNode* userNode = xmlParser.GetNode( "user" );
return ( userNode == NULL ) ? "" : userNode->Text();

In the case above, the call to GetNode searches the XML document for a node with a tagName of “user”. If it finds one, it returns a pointer to that node; if not, it returns NULL. This makes a lot of sense to developers, but it erodes some of the value of abstraction and good object-oriented programming.

First, any client who calls GetNode has to know how GetNode behaves internally (that it returns NULL when it can’t find the node). Second, there is effectively a runtime type check going on here: the caller has to determine whether the value returned from GetNode is an actual XmlNode or not.

One way to refactor away from this is to use the Null Object pattern. Martin Fowler covers it in his book Refactoring: Improving the Design of Existing Code. In the section entitled “Introduce Null Object,” Ron Jeffries tells a story describing how he used Null Object to treat missing objects as actual objects.

The story describes how, when a record didn’t exist in the database, they would return an object like MissingPersonRecord, derived from PersonRecord. Calls to methods on a MissingPersonRecord would simply return some default behavior and not complain. For example, when a MissingPersonRecord is asked for its name, it responds with a blank string.
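A minimal sketch of that idea in C++ might look like the following. The class and function names here are my own illustration of the story, not Fowler’s or Ron’s actual code, and the lookup is a toy stand-in for a real database query:

#include <iostream>
#include <string>

// A real person record loaded from the database.
class PersonRecord {
public:
    explicit PersonRecord( const std::string& name ) : name_( name ) {}
    virtual ~PersonRecord() {}
    virtual std::string Name() const { return name_; }
private:
    std::string name_;
};

// The Null Object: stands in for a record that was not found.
// Every query answers with a harmless default instead of complaining.
class MissingPersonRecord : public PersonRecord {
public:
    MissingPersonRecord() : PersonRecord( "" ) {}
    virtual std::string Name() const { return ""; } // blank string, as in the story
};

// Toy stand-in for the real database lookup; it never returns NULL.
PersonRecord* FindPerson( const std::string& name ) {
    static PersonRecord ron( "Ron" );
    static MissingPersonRecord missing;
    return ( name == "Ron" ) ? &ron : static_cast<PersonRecord*>( &missing );
}

int main() {
    // Neither call site checks for NULL; the missing record just prints nothing.
    std::cout << FindPerson( "Ron" )->Name() << std::endl;
    std::cout << FindPerson( "Chet" )->Name() << std::endl;
    return 0;
}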

This frees every caller from worrying about missing people in the database. Applying the same idea to the earlier XML example, see how the code has changed:

XmlNode* userNode = xmlParser.GetNode( "user" );
return userNode->Text();

There is now no change in behavior when the node doesn’t exist, and the caller needs no knowledge of how GetNode works (beyond the fact that it will never return NULL 😉 ).
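Here is one way GetNode could uphold that guarantee. This is a sketch with hypothetical names (XmlNode, NullXmlNode, FindNodeInDocument), not the API of any real parser:

#include <cstddef>
#include <string>

class XmlNode {
public:
    explicit XmlNode( const std::string& text ) : text_( text ) {}
    virtual ~XmlNode() {}
    virtual std::string Text() const { return text_; }
private:
    std::string text_;
};

// Null Object handed out when the requested tag is missing.
class NullXmlNode : public XmlNode {
public:
    NullXmlNode() : XmlNode( "" ) {}
    virtual std::string Text() const { return ""; }
};

class XmlParser {
public:
    // Never returns NULL: a missing tag yields the shared null node instead.
    XmlNode* GetNode( const std::string& tagName ) {
        XmlNode* found = FindNodeInDocument( tagName );
        if ( found == NULL ) {
            static NullXmlNode nullNode; // one stateless instance can be shared
            return &nullNode;
        }
        return found;
    }

private:
    // Stand-in for the real document search; elided in this sketch.
    XmlNode* FindNodeInDocument( const std::string& /*tagName*/ ) { return NULL; }
};

Since the null node holds no state, a single shared instance is enough; callers can chain calls on the result without ever dereferencing NULL.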

One library I have used that follows this pattern is the TinyXml library in C++. When a caller asks for an XML node, there is no need to check whether that node really exists before chaining calls on it. This allows you to write code like so:

TiXmlNode* userNode = xmlParser.GetNode( "user" );
return userNode->FirstChild()->FirstChild()->Text();

A disadvantage is that when a Null Object stands in for a real object, the underlying error can take a while to find. You may not know there is a problem until a user object is displayed on screen without a required field like a “name”.

To track that down, you would have to debug until you found the Null Object and then determine why a real object wasn’t there. Another approach is to have the Null Object log a message in the event log when specific methods are called on it, but this can create too much noise, especially if you use the Null Object pattern the way TinyXml does.
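If you do want that breadcrumb trail, a logging variant is a small change. This is again only a sketch (the class name and the logging destination are my own choices), and in a chaining-heavy style like TinyXml’s it would quickly become far too chatty:

#include <iostream>
#include <string>

// Same base interface as in the sketch above.
class XmlNode {
public:
    virtual ~XmlNode() {}
    virtual std::string Text() const = 0;
};

// A Null Object that leaves a breadcrumb every time it is consulted.
class LoggingNullXmlNode : public XmlNode {
public:
    virtual std::string Text() const {
        // A real system might write to the event log; stderr keeps the sketch simple.
        std::cerr << "warning: Text() called on a missing XML node" << std::endl;
        return "";
    }
};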

So, that is a little introduction to the Null Object pattern. To summarize, you can use it to improve abstraction and polymorphism, and to reduce the lines of code per method.

An interesting take on the Hawthorne Effect

14 June, 2009 § Leave a comment

While I was taking my undergraduate courses at Michigan State for my Bachelor’s degree in Computer Science, I took a Human-Computer Interaction course taught by Dr. Eileen Kraemer, a visiting scholar from the University of Georgia. In the course, we talked about the Hawthorne Effect and how it can skew results during usability studies.

The Hawthorne Effect is based on research findings showing that participants in a study behave differently than normal when they know they are being observed. This often comes up in usability studies: a participant’s results can be called into question because the user might not have done what they just did had a researcher not been standing next to them. This is one of the major flaws of formal usability research.

To work around this, some software tracks usage data anonymously and reports it back to the software company. This data can be presumed free of the Hawthorne Effect, since users can be assumed to be using the software in their normal ways, without feeling that they should perform actions differently than usual. This is often done in prerelease alpha/beta software, but it is becoming the norm with final releases as well, where the pool of testers expands enormously.

In the usability realm, the Hawthorne Effect is something researchers would like to limit. Yet in the project management realm, it is actually something to strive for.

In Peopleware: Productive Projects and Teams, the authors make a compelling case for deliberately inducing the Hawthorne Effect in employees. They claim that changing environmental variables in the workplace to provide a better working environment will indeed get more work out of employees. The gain comes from the Hawthorne Effect itself: employees are generally open to a change (as long as it is not drastic) and respond by increasing their output. But the effect wears off with time, so the authors propose changing the environment often, arguing that this controlled chaos brings a better return on investment.

I found this approach to the Hawthorne Effect very interesting, since it is a complete reversal of the usability researchers’ approach. Instead of trying to minimize the effect, these project managers are trying to harness it. Fittingly, the term was coined from research at Hawthorne Works, a large factory complex outside of Chicago, where researchers changed lighting levels in the workplace and observed varying increases in output.

I wonder why I never looked at the Hawthorne Effect from this angle before.

Finished reading Peopleware

8 June, 2009 § 1 Comment

Today I finished reading Peopleware: Productive Projects and Teams by Tom DeMarco and Timothy Lister. The book is about project management and working with teams to get the most out of knowledge workers and deliver a great product.

I’m not a project manager and I don’t have plans to be one. I very much like developing software and getting my hands dirty in algorithms, interfaces, unit tests, and more. But reading this book gave me a perspective that being a software developer doesn’t.

There are so many aspects to supporting a software development team through a product lifecycle, and the book gave me great examples of what not to do. Among them: paging employees over a loudspeaker system when phone calls or visitors come in; requiring developers to answer their own phones so calls don’t get rerouted to administrative assistants; and a whole slew of others.

I even learned that in Australia, a common form of employee protest is not to strike when workers are upset with working conditions, but to “work to rule,” following every practice by the book. Code submissions that once took a day now take a week as the developer tests the software on twelve different platforms, goes through many code reviews, and profiles the slightest change. Doctors have used the same “by the book” tactic to stretch a routine appendectomy out to a whole week.

The whole “by the book” idea was really interesting, since it called attention to two problems the workers were facing: they were unhappy with a specific working condition, and they were also drawing attention to unrealistic practices that were supposed to be followed yet would never make sense in reality.

Peopleware is a little old now and as such doesn’t cover newer agile software development techniques. The ideas in the book are still worth looking into, though, and it’s a short read at only around 230 pages. I recommend it if you’re ever curious how to be a better project manager, or simply what goes through the head of your project manager at times.

Book Reading Update

31 May, 2009 § 1 Comment

Today I returned three books to the MSU Library. I received Effective STL, The Pragmatic Programmer, and Code Complete 2nd Edition through the Michigan eLibrary, otherwise known as MeL.

MeL offers the ability to check out a book from any public library in Michigan through your local public library.

Checkout policies differ from those of your local library. For example, at Michigan State I can check out a book for six months due to my graduate student standing, whereas through MeL I can only check out a book for three weeks (with an optional three-week renewal). Also, any overdue fines at MSU are voided when the book is returned, whereas MeL charges a $2/day late fee.

If you live in Michigan I highly recommend you take a look at MeL if you can’t find the book you are looking for in your local library.

After returning the books, I checked out two books from the MSU Libraries:

  1. Le Ton beau de Marot: In Praise of the Music of Language by Douglas Hofstadter
  2. Data Mining Techniques: For Marketing, Sales, and Customer Relationship Management by Michael J. A. Berry and Gordon S. Linoff

I first learned of Le Ton beau de Marot while reading through a discussion about the Linux port of Google Chrome. From Wikipedia:

Le Ton beau de Marot: In Praise of the Music of Language, published by Basic Books in 1997, is a book by Douglas Hofstadter in which he explores the meaning, strengths, failings, and beauty of translation. Hofstadter himself refers to it as “my ruminations on the art of translation”.

Translation between frames of reference — languages, cultures, modes of expression, or indeed between one person’s thoughts and another — becomes an element in many of the same concepts Hofstadter has addressed in prior works, such as reference and self-reference, structure and function, and artificial intelligence.

The second book I checked out was recommended by Kurt Thearling. I am going to take a Data Mining course in the Fall at MSU as part of my Master’s in Computer Science, and I wanted to get a good understanding of Data Mining before the course starts. From Kurt:

An excellent introduction to the techniques of data mining as well as the application of data mining to real world business problems. This is the first book that I recommend to anyone interested in learning about data mining.

I am currently finishing up Peopleware, and after that I plan to read Le Ton beau de Marot.
