
What’s the London school of TDD?

I’ve been hearing about the London school of TDD, and I’m puzzled. This post by Jason Gorman was helpful, but his examples confused me.

In describing classic TDD (as in TDD by Example), he uses as his example a program to express integers as Roman numerals. Then he gives an example for the London school (as in Growing Object-Oriented Software, Guided by Tests) that is bigger and more complex. I’m not sure how to compare them.

I’m wondering whether London-style TDD provides the same advantages as classic TDD. From Wikipedia:

TDD can lead to more modularized, flexible, and extensible code. This effect often comes about because the methodology requires that the developers think of the software in terms of small units that can be written and tested independently and integrated together later. This leads to smaller, more focused classes, looser coupling, and cleaner interfaces.

I’m just starting to read the second book; I’ll know more as I get into it. But I’m really curious: does London school take the same sort of tiny steps that classic TDD advocates? Does it retain the benefits of TDD? I figure you get test coverage, but do you also get emergent design? Simple* design?

* See the four rules of simple design, as explained by Ron Jeffries.

Assert false, return true?

This is probably a pretty quick question. It’s about writing good tests again, and I would really appreciate input from experienced TDDers.

The Java w/ TDD for n00bs book I’m working through suggests writing a test to check a boolean. He instructs us to write the test like this: Assert.assertFalse(testobject.testCriteriaMethod()); Then he wants us to produce a red result by creating testCriteriaMethod() and simply having it return true.
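In case it helps, here’s a minimal JUnit 4 sketch of what he’s asking for; TestObject is just my invented name for the class under test, the rest follows the snippet above:

```java
import org.junit.Assert;
import org.junit.Test;

public class TestObjectTest {

    @Test
    public void criteriaShouldNotBeMet() {
        TestObject testobject = new TestObject();
        // Step one in the book: assert that the boolean is false.
        Assert.assertFalse(testobject.testCriteriaMethod());
    }
}

// Step two: deliberately return true, purely to watch the test go red.
class TestObject {
    public boolean testCriteriaMethod() {
        return true; // hard-coded wrong answer
    }
}
```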

I love failing tests. I don’t have any objection to making them, but my gut says they should mean something, and this one doesn’t.

I’ve been told that a compilation error counts as a failed test. I think I would have treated it that way: taken the compile error as my red, then made the test pass instead of deliberately failing it.

Is that a mistake? Is this what teachers mean when they tell me to start with a red test? If so, how does it help?

GeePaw sez…

There are two occasions on which I purposely would type that, compile, and run.

  1. When adding a test to my xUnit rig isn’t trivial. I constantly rotate between xUnit rigs, so I do this a lot. The resulting red bar says nothing to me about the code aside from “yes, it’s connected” (see the sketch below).
  2. When I’m stalling for time. I’m not stalling for time on *passing* the test, though. I’m stalling for time on whether it’s the test I want. Another example of TDD’s distributed design process. Sometimes that extra half-minute is just what I need.

(I have to tell you, the first case is far more embarrassing, because one has to explain about the time one added 17 new passing tests in just a half-hour, by the inadvertent expediency of not telling xUnit they were there.)
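If it helps to see case 1, here’s the kind of throwaway red I mean, sketched in JUnit 4; the only information in that failure is that the rig actually ran the new test:

```java
import org.junit.Assert;
import org.junit.Test;

public class NewFeatureTest {

    @Test
    public void isThisTestEvenWiredUp() {
        // Deliberately red: a failure here proves only that the rig
        // sees this test. Replace the body once you know what you want.
        Assert.fail("connected -- now write the real test");
    }
}
```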

What makes a good unit test?

The other day, we had the task of excluding some projects from a report our system creates. The projects belonged to a category that we could easily identify. We set out to build a test, so we could start coding.

How do we test a report? We could look at it and see whether it did what we wanted, but we were looking for a unit test we could write to do this. That’s when we remembered we could mock the report, and we set about looking at what information we could get from it.

All we came up with was counting the number of lines, and making sure it was the same as the number of projects not in Category A. I’m pretty sure that was lame. The way I described it to my pair was this:

If someone comes along and does something to exclude a different category (B), and that breaks our code somehow, our test could still pass, because there might happen to be the same number of excluded lines from Category B as from Category A. Or a page-break rule could change the number of lines.
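To make the worry concrete, here’s a JUnit 4 sketch with invented names (Project, Category, reportLines): the first test is roughly the line-count check we wrote, the second pins down which projects actually appear:

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class ReportExclusionTest {

    // Invented stand-ins for our real domain objects.
    enum Category { A, B }

    static class Project {
        final String name;
        final Category category;
        Project(String name, Category category) {
            this.name = name;
            this.category = category;
        }
    }

    // Hypothetical report logic: keep every project not in the excluded category.
    static List<String> reportLines(List<Project> projects, Category excluded) {
        List<String> lines = new ArrayList<String>();
        for (Project p : projects) {
            if (p.category != excluded) {
                lines.add(p.name);
            }
        }
        return lines;
    }

    private final List<Project> projects = Arrays.asList(
            new Project("alpha", Category.A),
            new Project("beta", Category.B),
            new Project("gamma", Category.B));

    @Test
    public void weakTest_onlyCountsLines() {
        // Passes whenever the totals happen to match, even if the code
        // had excluded Category B instead of Category A.
        assertEquals(2, reportLines(projects, Category.A).size());
    }

    @Test
    public void strongerTest_asksWhichProjectsAppear() {
        // Pins down the actual contents: fails if an A project sneaks in
        // or a non-A project is dropped, regardless of the line count.
        assertEquals(Arrays.asList("beta", "gamma"),
                reportLines(projects, Category.A));
    }
}
```

The weak test can go green for the wrong reasons; the stronger one only passes if exactly the right projects survive.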

One thing I’m thinking of is that we can produce red then green, but still not have learned much. And I’m aware that from an empirical perspective, passing tests really don’t tell us anything. It’s the failing tests that tell us stuff. Passing tests are just indicators of an absence of failing tests.

What I want to know is this: how do we write strong tests? How do we focus in on the real essence, and write a test that’s tight enough to be really meaningful?


GeePawHill sez…

Your insight is correct. The right number of passing tests is always a judgment call, and in the beginning, that judgment is still pretty weak. It’s also easy to “overcode”, that is, write more code than your tests call for. It’s why TDD is such a tricky sport to master.

Anyway, with a report, I wonder if what’s needed is a more micro approach. (I call them microtests, actually.)

If I’m creating a report, there are several things I might want to demonstrate. Since I restrict a single object to having a single responsibility, I suspect my Report class will be what I call a host class. All it does is hook up a few other classes and let them run. For me, that’s liable to be untested. After all, declaring a couple of instances and cross-connecting them is pretty hard to break. Remember that TDD is about testing things you think can break.
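Here’s the shape I mean, sketched with invented names; the host’s wiring is too simple to be worth a test, and each collaborator earns its own microtests:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical names throughout: the shape of a host class, not your real code.
class Project {}
class Pages {}

interface ProjectFilter { List<Project> include(List<Project> projects); }
interface LineFormatter { String format(Project project); }
interface Paginator     { Pages paginate(List<String> lines); }

public class Report {

    private final ProjectFilter filter;    // decides which projects appear
    private final LineFormatter formatter; // turns one project into one line
    private final Paginator paginator;     // deals with page breaks

    public Report(ProjectFilter filter, LineFormatter formatter, Paginator paginator) {
        this.filter = filter;
        this.formatter = formatter;
        this.paginator = paginator;
    }

    // Pure hookup: declare the collaborators and let them run.
    // This method is hard to get wrong, so I'm liable not to test it.
    public Pages render(List<Project> projects) {
        List<String> lines = new ArrayList<String>();
        for (Project project : filter.include(projects)) {
            lines.add(formatter.format(project));
        }
        return paginator.paginate(lines);
    }
}
```

The interesting behavior, and therefore the testing, lives in the collaborators.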

My feeling is that I just don’t know enough about the rules you wanted in the report. Does it have to be paginated? Does it need to deal with 0 line-items? Many? Different types? Or is the important thing that it loads its incoming fields and calculates derived fields? Does it sort? And so on and so forth. Give me some more detail and we can sketch how the TDD might really go.

I constantly start one test, e.g. ReportTest, only to immediately decide that it’s too much, and I need something much, much smaller to get started. Thank God for stacks: I push the starting test onto the stack and go after the next one, and so on, until I find something I’m sure I can do.