I always say this and I'll say it again: London is a long way to go from Norwich for the evening. On this occasion it was worth it, as it always is for ACCU London. This dark, cold, late February evening had the added drawback of torrential rain. To make matters worse, while looking for the JP Morgan building at 125 London Wall, we reached the junction with Moorgate to find a sign suggesting we had been walking in the wrong direction. With faith in a printed Google map and iPhone GPS, we forged on another fifty yards and found 125 London Wall exactly where we expected.
I have been in many offices belonging to a number of financial corporations and JP Morgan is no different to any of them, except for the lifts! Instead of calling a lift by pressing a button next to it, you go to a set of small screens in the middle of the lobby. On one of these screens you select the floor you want and it tells you which lift to take. The assigned lift then opens and takes you to the selected floor. Being a techie I couldn't help thinking how cool this was, but I did find myself wondering what you would do if you changed your mind about which floor you wanted once inside. We ascended 17 floors in what felt like hardly any time at all. However, the view from the window confirmed just how high we were.
Test-Driven Development (TDD) and the benefits it brings are well understood by most software developers, and even by most companies and managers. Still, every year at the ACCU conference someone gives an introductory presentation on TDD. So I was intrigued when I read about Steve and Nat's presentation on Sustainable TDD, as it sounded like the next step.
Steve Freeman and Nat Pryce have a book to sell: Growing Object-Oriented Software [1]. Their presentation was based around one section of the book. It was only about 45 minutes long, but there was a fairly long discussion afterwards. During the initial 45 minutes Steve did the majority of the talking and took us through some simple techniques that improve the readability and maintainability of unit test code.
Steve started off by showing us some lengthy, quite messy unit tests of the sort we have all probably seen, or even written, at one time or another. Then there were some examples and discussion of how to name test methods effectively. Instead of naming test methods after the method under test, we should give them names that describe the behaviour being tested. For example:
holdsItemsInTheOrderTheyWereAdded()
canHoldMultipleReferencesToTheSameItem()
throwsAnExceptionWhenRemovingAnItemItDoesntHold()
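A test written to one of these names might look something like this (my own minimal JUnit 4 sketch, with a made-up Basket class; this is not code from the talk):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;

import org.junit.Test;

public class BasketTest {
    @Test
    public void holdsItemsInTheOrderTheyWereAdded() {
        // Basket is a hypothetical class, here only to show the naming style in context
        Basket basket = new Basket();
        basket.add("Deerstalker Hat");
        basket.add("Tweed Cape");
        // The method name already tells the reader what this assertion checks
        assertEquals(Arrays.asList("Deerstalker Hat", "Tweed Cape"), basket.items());
    }
}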
The problem with “magic numbers”, literals used directly in code, has been understood for some time but, as Steve explained, they still get used in test code, so we should try to use self-describing variables instead. For example:
static final Chat UNUSED_CHAT = null;
static final int INVALID_ID = 666;
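Used at the point of the call, the constants make the intent obvious where bare literals would not (the handler and its method below are invented purely for illustration):

// Opaque: why null, and what is special about 666?
handler.recordFailure(null, 666);

// Self-describing: the chat is not used and the id is deliberately invalid
handler.recordFailure(UNUSED_CHAT, INVALID_ID);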
Often tests require one or more complex objects to be constructed before the test can be carried out. This setup code can often be very verbose:
Order order = new Order(
    new Customer("Sherlock Holmes",
        new Address("221b Baker Street",
            "London",
            new PostCode("NW1", "3RX"))));
order.addLine(new OrderLine("Deerstalker Hat", 1));
order.addLine(new OrderLine("Tweed Cape", 1));
The verbosity can be reduced by using a builder, similar to those described in item 2 of Effective Java [2]:
new OrderBuilder()
    .fromCustomer(
        new CustomerBuilder()
            .withAddress(new AddressBuilder().withNoPostcode().build())
            .build())
    .build();
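The builders themselves are straightforward to write. A rough sketch of what an OrderBuilder along these lines might look like (the default value and the single-argument Order constructor are my assumptions, not code from the book):

public class OrderBuilder {
    // A sensible default means each test only states the details it actually cares about
    private Customer customer = new CustomerBuilder().build();

    public OrderBuilder fromCustomer(Customer customer) {
        this.customer = customer;
        return this; // returning this is what lets the calls be chained
    }

    public Order build() {
        return new Order(customer);
    }
}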
Steve described quite a few examples of how you might use builders to repeatedly build test objects with different properties. Although this technique could be useful, it only really pays off where you have a large number of objects to construct, or many different permutations of a single object that takes a large number of parameters.
Steve then went on to describe a technique that I consider a little controversial. He suggested that the message parameter of JUnit's asserts should be used to help diagnose the problem when a test fails. For example:
assertEquals("balance", 16301, customer.getBalance());
This to me is tantamount to using comments: someone could change the test to test something else and not bother to update the message. However, in simple assertions like this, with one-word descriptions, that is unlikely and the message is likely to be very useful.
Then Steve explained something that struck my colleagues and me as pure genius in its simplicity and potential usefulness:
Date startDate = namedDate(1000, "startDate");
Date endDate = namedDate(2000, "endDate");
Date namedDate(long timeValue, final String name) {
    return new Date(timeValue) {
        public String toString() { return name; }
    };
}
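The named dates are then used in assertions just as ordinary Date values would be, for example (the payment object and its getter are invented for illustration):

assertEquals("payment date", startDate, payment.getDate());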
Here, if an assertion involving startDate or endDate fails, instead of the actual dates being reported:
java.lang.AssertionError: payment date
Expected: [Thu Jan 01 01:00:01 GMT 1970]
got: [Thu Jan 01 01:00:02 GMT 1970]
you get a description of the date:
java.lang.AssertionError: payment date
Expected: [startDate]
got: [endDate]
I think the potential usefulness of this technique speaks for itself.
Discussion continued and Alan Stokes pointed out that only code with unit tests should be refactored, and asked how you can therefore refactor test code, as it has no tests of its own. The answer was that you first break your production code so that the tests fail, then refactor the test code making sure it still fails, and finally fix the production code and check that the tests pass again.
Those were the highlights of the presentation for me, although Steve and Nat did cover some other techniques and examples. It was certainly enough for me to buy their book.
References
[1] Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce, ISBN-13: 978-0321503626
[2] Effective Java by Joshua Bloch, ISBN-13: 978-0321356680