This is probably a pretty quick question. It’s about writing good tests again, and I would really appreciate input from experienced TDDers.
The Java w/ TDD for n00bs book I’m working through suggests writing a test to check a boolean. The author instructs us to write the test like this: Assert.assertFalse(testobject.testCriteriaMethod()); and then to produce a red result by creating testCriteriaMethod() as a stub that simply returns true.
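For concreteness, here is roughly how I read that instruction in JUnit 4 (the class and method names are my placeholders, not the book’s):

```java
import org.junit.Assert;
import org.junit.Test;

public class CriteriaTest {
    @Test
    public void criteriaIsNotMet() {
        TestObject testobject = new TestObject();
        // The book's assertion: we expect false...
        Assert.assertFalse(testobject.testCriteriaMethod());
    }
}

class TestObject {
    // ...but the first implementation deliberately returns true,
    // so the first run is guaranteed to go red.
    public boolean testCriteriaMethod() {
        return true;
    }
}
```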
I love failing tests. I don’t have any objection to making them, but my gut says they should mean something, and this one doesn’t.
I’ve been told that a compilation error counts as a failed test. I think I would have treated it that way: let the missing method’s compile error serve as my red, and then made the test pass on its first run instead of deliberately failing it.
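To be concrete about my alternative (again my own sketch, not the book’s): I’d write the Assert.assertFalse(...) line before TestObject has the method at all, treat the resulting compile error as my red, and then write only this:

```java
class TestObject {
    // Written only after the assertion failed to compile (no such
    // method yet); returns false so the test passes on its very
    // next run.
    public boolean testCriteriaMethod() {
        return false;
    }
}
```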
Is that a mistake? Is this what teachers mean when they tell me to start with a red test? If so, how does it help?
There are two occasions on which I would purposely type that, compile, and run.
- When adding a test to my xUnit rig isn’t trivial. I constantly rotate between xUnit rigs, so I do this a lot. The resulting red bar tells me nothing about the code beyond “yes, it’s connected” (see the sketch after this list).
- When I’m stalling for time. I’m not stalling for time on *passing* the test, though; I’m stalling for time on whether it’s the test I want. That’s another example of TDD’s distributed design process. Sometimes that extra half-minute is just what I need.
(I have to tell you, the first case is far more embarrassing, because one has to explain the time one added 17 new passing tests in just half an hour, by the inadvertent expedient of not telling xUnit they were there.)
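For the first case, the kind of throwaway check I mean looks something like this (a minimal sketch; the names are mine):

```java
import org.junit.Assert;
import org.junit.Test;

public class WiringTest {
    @Test
    public void rigActuallySeesThisTest() {
        // Deliberately red: the only information in this failure is
        // "yes, the runner found this test". Delete it once the rig
        // is proven to be wired up.
        Assert.fail("If this bar is red, the rig is connected.");
    }
}
```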