Saturday, August 25, 2007

Controlling Change Requests

One of the challenges in the software development process is controlling the number of change requests (CRs). You get tons of exotic enhancement requests that will never make it into the product. Next, you have an enormous number of small bugs that should be fixed. And among those thousands of CRs, a bunch are already fixed, but it is too expensive to find out which ones.

I propose that enhancement requests older than 2 years (2 major versions) be removed. Apparently they are not important enough. If they are important, they will come back.

Another recommendation: when doing triage (deciding if and when a CR will be fixed), the corresponding code should be annotated with the CR number. This makes it easier for the programmer to see which CRs apply to that part of the code. He/she can then decide to fix the CR as well, update the CR text, or perhaps even close the CR as already fixed.
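As a sketch of what such an annotation could look like — all class names and CR numbers below are invented for illustration, and a plain comment convention (// CR-4711) would work just as well:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical annotation linking code to open change requests.
@Retention(RetentionPolicy.RUNTIME)
@interface CR {
    int[] value(); // the CR numbers that touch this code
}

public class InvoiceParser {

    // Triage decided that CR 4711 (rounding bug) and CR 4923 (locale
    // issue) belong here; the next programmer editing this method sees
    // both at a glance and may fix or close them along the way.
    @CR({4711, 4923})
    public long parseAmountInCents(String raw) {
        return Math.round(Double.parseDouble(raw.trim()) * 100);
    }

    public static void main(String[] args) throws Exception {
        CR crs = InvoiceParser.class
                .getMethod("parseAmountInCents", String.class)
                .getAnnotation(CR.class);
        System.out.println("Open CRs: "
                + java.util.Arrays.toString(crs.value()));
    }
}
```

Because the annotation is retained at runtime, a small tool could scan the codebase and report which classes carry the most open CRs.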

Today's bug-tracking systems integrate with version control systems, enabling you to see which CRs are fixed in which piece of code. This should be improved so you can also see where future CRs reside. That would show you which parts of the code are the weakest.

Sunday, August 19, 2007

Your own framework

Your own framework is going to solve all the problems. It is going to be faster, easier to understand, more robust, and more lightweight.

There is one problem: it takes time. Probably about 3 major releases to get it right. And guess what happens after 3 years of hard work? A new technology has arisen, making the existing technologies obsolete.

Take Lucene - I've been told it is a very elegant framework. There is a new major v5 on the horizon to get everything right. The problem, however, is that frameworks like GWT and J2S solve a core problem: an easy way of writing client-side code.

Lesson: as a company, don't write your own frameworks unless it is your core business. Even then, take into account that it is going to be outdated in 5 years.

Tuesday, August 14, 2007

Unit testing

Unit testing, aka white box testing, means you write a test that is aware of the implementation. It means you can write fine-grained tests specific to a set of requirements. Integration tests (or whatever you want to call them) test multiple artifacts cooperating together. Theoretically, a unit test tests the implementation of one method and nothing else. However, this distinction is hard to maintain as the tests evolve.

Suppose you start with two different test folders: unit-test and integration-test. The first couple of days everything works nicely. One day you decide to refactor a piece of code. Luckily, you are disciplined enough to move the tests along with the code to the new packages. However, one of the methods that was white box tested had a piece extracted into a new method (let's say within the same class). Suddenly, the test you had written is no longer testing solely one method. Hmmm.
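To make the scenario concrete, here is a sketch with invented names: the test was written when buildSlug did everything inline; after the trimming and lowercasing were extracted into normalize(), the same "unit" test silently covers two methods.

```java
public class SlugBuilder {

    // Originally buildSlug did all the work inline; later the
    // trimming/lowercasing was extracted into normalize(). The old
    // "unit" test of buildSlug still passes, but it now exercises
    // two methods without anyone noticing.
    public String buildSlug(String title) {
        return normalize(title).replace(' ', '-');
    }

    String normalize(String title) {
        return title.trim().toLowerCase();
    }

    public static void main(String[] args) {
        SlugBuilder b = new SlugBuilder();
        // The pre-refactoring test: still green, no longer "one method".
        System.out.println(b.buildSlug("  Hello World  ")); // hello-world
    }
}
```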

Thinking about this: if I write a unit test for a method and this method calls a couple of JVM routines, is it a unit test or an integration test from the beginning?

The point I'm trying to make is that if you want to be really strict about separating unit and integration tests, it takes an enormous effort to keep the separation sound. With large teams - people come and go - it will in time become an impossible task.

Thursday, August 9, 2007

In the last couple of years, software code and architecture moved from being porcelain to rubber, but it is still not fluid. We need fluidity; we need some kind of dynamic flow that will re-establish balance within the code. For example, some packages or classes have become too large, or there is obsolete code that needs to be removed. This is now all manual labor. Organisms have something called homeostasis, a mechanism to restore internal equilibrium. The premise is already fulfilled: we have the metrics, we have the refactorings. Only some kind of automation is needed to put it together. It might be a little odd, though, to come into the office in the morning and find your whole application turned upside down.

Monday, August 6, 2007

Unit testing

After a couple of years of unit testing, I still haven't found an elegant way to make sure that a method calls another method. For example, I want to test method x, which calls method y. Method y enforces a business rule; for example, it checks that certain reserved characters are not present in a string passed as a parameter. I can do some litmus tests against x, indirectly testing whether it calls y. However, this gives no 100% guarantee. In fact, it introduces an extra dependency: if the business rule changes, y changes, and I have to update some tests of x...
Another approach is of course to use mocks. However, this requires interfaces, factories/IoC, etc. A lot of overhead for a simple test.
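For what it's worth, the mock doesn't have to come from a framework; a hand-rolled recording stub keeps the overhead visible. All names below (Validator, Importer) are invented for illustration, assuming y sits behind an interface.

```java
// "Method y" lives behind this interface: the business rule.
interface Validator {
    void checkReservedChars(String s);
}

// A hand-rolled mock that simply records how it was called.
class RecordingValidator implements Validator {
    int calls = 0;
    String lastArg;

    public void checkReservedChars(String s) {
        calls++;
        lastArg = s;
    }
}

public class Importer {
    private final Validator validator;

    public Importer(Validator validator) {
        this.validator = validator;
    }

    // "Method x": must delegate to the validator before importing.
    public void importRecord(String record) {
        validator.checkReservedChars(record);
        // ... actual import work would go here ...
    }

    public static void main(String[] args) {
        RecordingValidator mock = new RecordingValidator();
        new Importer(mock).importRecord("abc");
        System.out.println("y called " + mock.calls
                + " time(s) with \"" + mock.lastArg + "\"");
    }
}
```

The test now asserts only "x calls y", not the business rule itself, so a rule change no longer ripples into the tests of x.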

A futuristic alternative is to define an aspect that checks that at least one method called from x is y. Admittedly, I have no idea how to implement this, but it won't be easy or flexible to do in a TDD manner.

Another desperate attempt: if y were to throw an exception in case of failure, you could check the stack trace. Obviously, still brittle.
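A rough sketch of that stack-trace idea, with all names invented: feed x an input that makes y throw, then scan the trace for y's method name. Rename y and the check silently breaks, which is exactly the brittleness mentioned.

```java
public class StackTraceCheck {

    // "Method y": the business rule, throws on a reserved character.
    static void y(String s) {
        if (s.contains("/")) {
            throw new IllegalArgumentException("reserved char: /");
        }
    }

    // "Method x": delegates to y before doing its real work.
    static void x(String s) {
        y(s);
        // ... rest of x ...
    }

    public static void main(String[] args) {
        boolean sawY = false;
        try {
            x("bad/input"); // force the business rule to fire
        } catch (IllegalArgumentException e) {
            // Look for y's frame as evidence that x went through y.
            for (StackTraceElement frame : e.getStackTrace()) {
                if (frame.getMethodName().equals("y")) {
                    sawY = true;
                }
            }
        }
        System.out.println("x called y: " + sawY);
    }
}
```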

So the question remains how to solve this. It gives me an unpleasant feeling that such a basic problem has not yet been solved. Perhaps someone else has a bright idea.