Some mocks

“I don’t use a knife when I eat, if I need one it means the food hasn’t been cut up enough.”
“Forks are unnecessary, I can do everything I want with a pointed knife.” [1]

One of the things we realised when writing “Growing Object-Oriented Software”:http://www.growing-object-oriented-software.com is that the arguments about mocks in principle are often meaningless. Some of us think about objects in terms of Alan Kay’s emphasis on message passing, others don’t. In my world, I’m interested in the protocols of how objects communicate, not what’s inside them, so testing based on interactions is a natural fit. If that’s not the kind of structure you’re working with right now, then testing with mocks is probably not the right technique.

This post was triggered by Arlo Belshee’s post on “The No Mocks Book”:http://arlobelshee.com/post/the-no-mocks-book. I think he has a really good point, buried in some weaker arguments (the developers I work with don’t use mocks just to minimise design churn; they’re as ruthless as anyone when it comes to fixing structure). His valuable point is that it’s a really good idea to try setting yourself style constraints to re-examine old habits, as in this “object-oriented calisthenics exercise”:http://binstock.blogspot.co.uk/2008/04/perfecting-oos-small-classes-and-short.html. As I “once wrote”:http://www.higherorderlogic.com/2008/06/test-driven-development-a-cognitive-justification/, Test-Driven Development itself can have this property of forcing a developer to stop and think about what’s really needed rather than pushing ahead with an implementation.

As for Arlo’s example, I don’t think I can provide a “better” solution without knowing more detail. As he points out in the comments, this is legacy code, so it’s harder to use mocks for interface discovery. I think Arlo’s partner is right that ProjectFile.LoadFrom is problematic. For me, the confusion is likely to come from combining reading bytes from a disk with the construction of a domain structure; I’d expect better structure if I separated them. In practice, what I’d probably do is some “annealing” by inlining code and looking for a better partition. Finally, it would be great if Arlo could finish up the reworking; I can believe that he has a much better solution in mind, but I’m struggling with the value of this step.

There is one more thing we agree on: the idea of building a top-level language that makes sense in the domain. Calling methods on the same object, like a Smalltalk cascade, is one way, although there’s nothing in the type to reveal the protocol—how the calls relate to each other. We did this in “jMock1”:http://jmock.org/jmock1.html, where we used interface chaining to guide the programmer and the IDE as to what to do next. Arlo’s example is simple enough that the top-level can be inspected to see if it makes sense. I’ve worked in other domains where things are complicated enough that we really did need some checked examples to make us feel confident that we’d got it right.
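For reference, the chaining style in jMock1 looked roughly like this (written from memory, inside a MockObjectTestCase; PriceFetcher and the other names here are invented for illustration). Each call in the chain returns a narrower builder interface, so the IDE’s completion list only offers the steps that make sense next:

// A sketch of jMock1-style interface chaining, from memory: each
// method returns a builder type that constrains what can follow.
Mock fetcher = mock(PriceFetcher.class);
fetcher.expects(once())
       .method("priceOf")
       .with(eq(instrument))
       .will(returnValue(price));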

1) of course, most of the world eats with their hands or chopsticks, so this is a culturally-oppressive metaphor

On the composability of Hamcrest matchers

A recent discussion on the Hamcrest Java users list got me thinking that I should write up a little style guide, in particular about how to create custom Hamcrest matchers.

Reporting negative scenarios

The issue, as raised by “rdekleijn”, was that he wasn’t getting useful error messages when testing a negative scenario. The original version looked something like this, including a custom matcher:

public class OnComposabilityExampleTest {
  @Test public void
  wasNotAcceptedByThisCall() {
    assertThat(theObjectReturnedByTheCall(), 
               not(hasReturnCode(HTTP_ACCEPTED)));
  }

  private Matcher<ThingWithReturnCode> 
  hasReturnCode(final int expectedReturnCode) {
    return new TypeSafeDiagnosingMatcher<ThingWithReturnCode>() {
      @Override protected boolean 
      matchesSafely(ThingWithReturnCode actual, Description mismatch) {
        final int returnCode = actual.getReturnCode();
        if (expectedReturnCode != returnCode) {
          mismatch.appendText("return code was ")
                  .appendValue(returnCode);
          return false;
        }
        return true;
      }

      @Override
      public void describeTo(Description description) {
        description.appendText("a ThingWithReturnCode equal to ")
                   .appendValue(expectedReturnCode);
      }
    };
  }
}

which produces an unhelpful error because the received object doesn’t have a readable toString() method.

java.lang.AssertionError: 
Expected: not a ThingWithReturnCode equal to <202>
     but: was 

The problem is that the not() matcher only knows that the matcher it wraps has accepted the value. It can’t ask the inner matcher for a mismatch description because, at that level, the value has actually matched. This is probably a design flaw in Hamcrest (an early version had a way to extract a printable representation of the thing being checked), but we can use this moment to think about improving the design of the test by working with Hamcrest, which is designed to be very composable.
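To see why the report is so thin, it helps to know what the default mismatch description does. Unless a matcher overrides it, the description falls back to something like this (a paraphrase of BaseMatcher’s behaviour, not a quote of the source):

// Roughly what org.hamcrest.BaseMatcher does by default: print
// "was " followed by the value's toString(). If the object has no
// useful toString(), the message tells us nothing.
@Override
public void describeMismatch(Object item, Description description) {
  description.appendText("was ").appendValue(item);
}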

Separating concerns

The first thing to notice is that the custom matcher is doing too much: it’s extracting the value and checking that it matches. A better design would split the two activities and delegate the decision about the validity of the return code to an inner matcher.

public class OnComposabilityExampleTest {
  @Test public void
  wasNotAcceptedByThisCall() {
    assertThat(theObjectReturnedByTheCall(), 
               hasReturnCode(not(equalTo(HTTP_ACCEPTED))));
  }

  private Matcher<ThingWithReturnCode> 
  hasReturnCode(final Matcher<Integer> codeMatcher) {
    return new TypeSafeDiagnosingMatcher<ThingWithReturnCode>() {
      @Override protected boolean 
      matchesSafely(ThingWithReturnCode actual, Description mismatch) {
        final int returnCode = actual.getReturnCode();
        if (!codeMatcher.matches(returnCode)) {
          mismatch.appendText("return code ");
          codeMatcher.describeMismatch(returnCode, mismatch);
          return false;
        }
        return true;
      }

      @Override
      public void describeTo(Description description) {
        description.appendText("a ThingWithReturnCode with code ")
                   .appendDescriptionOf(codeMatcher);
      }
    };
  }
}

which gives the much clearer error:

java.lang.AssertionError: 
Expected: a ThingWithReturnCode with code not <202>
     but: return code was <202>

Now the assertion line in the test reads better, and we have the flexibility to make assertions such as hasReturnCode(greaterThan(25)) without changing our custom matcher.

Built-in support

This is such a common situation that we’ve included some infrastructure in the Hamcrest libraries to make it easier. There’s a template FeatureMatcher, which extracts a “feature” from an object and passes it to a matcher. In this case, it would look like:

private Matcher<ThingWithReturnCode> 
hasReturnCode(final Matcher<Integer> codeMatcher) {
  return new FeatureMatcher<ThingWithReturnCode, Integer>(
          codeMatcher, "ThingWithReturnCode with code", "code") {
    @Override 
    protected Integer featureValueOf(ThingWithReturnCode actual) {
      return actual.getReturnCode();
    }
  };
}

and produces an error:

java.lang.AssertionError: 
Expected: ThingWithReturnCode with code not <202>
     but: code was <202>

The FeatureMatcher handles the checking of the extracted value and the reporting.

Finally, in this case, getReturnCode() conforms to Java’s bean format so, if you don’t mind that the method reference is not statically checked, the simplest thing would be to avoid writing a custom matcher and use the built-in hasProperty matcher instead.

public class OnComposabilityExampleTest {
  @Test public void
  wasNotAcceptedByThisCall() {
    assertThat(theObjectReturnedByTheCall(), 
               hasProperty("returnCode", not(equalTo(HTTP_ACCEPTED))));
  }
}

which gives the error:

java.lang.AssertionError: 
Expected: hasProperty("returnCode", not <202>)
     but: property 'returnCode' was <202>

Correcting “Growing Object-Oriented Software”

We appear to have sold enough copies of Growing Object-Oriented Software to require another print run (which is nice). We’re allowed to make some minor corrections as long as they don’t affect the paging.

In a modern spirit of crowdsourcing, please let us know of anything we should fix in the next round by commenting on this post. We can’t promise we can get it in, but we will try.

Thanks in advance.

Doing pair programming tests right

In her rant on the state of the industry, Liz Keogh mentioned coding in the interview, which triggered several comments and a post from Rob Bowley, who reminded us of Ivan Moore’s excellent post. I think actually typing on a computer is essential, which is why I’ve been doing it for ten years (enough with whiteboard coding), but I’ve also seen examples of cargo-cult code interviews where the team didn’t quite get the point:

It’s a senior responsibility
Pair programming tests should be conducted by senior developers. First, this shows that the team thinks actual coding is important enough that senior people have to get involved; it’s not just something they delegate. Second, no matter how smart, juniors will not have seen many different approaches, so they’re more likely to dismiss alternatives (technical and human) as bad style. They just don’t have the history. There are times when a tight group of young guns is just what you need, but not always.
Do it together
Be present for the work. Don’t just send the candidate off and tell them to submit a solution; the discussion is what’s important. Otherwise, it turns into a measure of how well someone can read a specification. It also suggests that you think your time is too valuable to actually work with a candidate, which is not attractive. And, please, don’t play the “intentionally vague” specification game, which translates to “Can you guess what I’m thinking?” (unless you’re interviewing Derren Brown).

Be ready
Have your exercise ready. Your candidate has probably taken a day off work, so the least you can do is not waste their time (and, by implication, yours). Picking the next item off the backlog is fine, as long as it doesn’t turn out to be a configuration bug or to have already been fixed. One alternative is a canned example, which has the benefit of being consistent across candidates. An example that is too simple, however, is a good primary filter but limits what you can learn about the candidate, such as larger-scale design skills.
Have a proper setup
Your netbook is cute, portable, and looks great. That doesn’t make it suitable for pairing, not least because some candidates might have visibility issues and the keyboard will have keys in the wrong places. Use a proper workstation with a good monitor so you can both see, and talk about, the code.
Allow enough time
Sometimes things take a while to settle. People need to relax in and you need time to get over your initial flash response to the candidate. Most of us do not need developers who can perform well under stress. I’ve seen great candidates that only opened up after 30 minutes. You also need to work on an example that’s interesting enough to have alternatives, which takes time. If you’re worried about wasting effort on obvious misfits, then stage the exercise so you can break early. You’re going to work with a successful candidate for some time, so it’s not worth skimping.
Give something back
This is something that Ivan mentioned. No matter how unsuitable, your candidate spent time and possibly money to come to see you, and deserves more than a cup of tea. Try to show them something new in return. If you can’t do that, then either you don’t know enough to be interviewing (remember, it should be a senior) or you messed up the selection criteria, which means you’re not ready.

Another reason not to log directly in your code

I’ve been ranting for some time that it’s a bad idea to mix logging directly with production code. The right thing to do is to introduce a collaborator that has a responsibility to provide structured notifications to the outside world about what’s happening inside an object. I won’t go through the whole discussion here but, somehow, I don’t think I’m winning this one.

Recently, a team I know provided another reason to avoid mixing production logging with code. They have a system that processes messages and have been asked to record all the accepted messages for later reconciliation with an upstream system. They did what most Java teams would do and logged incoming messages in the class that processes them. Then they associated a special appender with that class’s logger that writes its entries to a file somewhere for later checking. The appenders are configured in a separate XML file.

One day the inevitable happened and they renamed the message processing class during a refactoring. This broke the reference in the XML configuration and the logging stopped. It wasn’t caught for a little while because there wasn’t a test. So, lesson one is that, if it matters, there should have been a test for it. But this is a pretty rigorous team that puts a lot of effort into doing things right (I’ve seen much worse), so how did they miss it?

I think part of it is the effort required to test logging. A unit test won’t do because the structure includes configuration, and acceptance tests run slowly because loggers buffer to improve performance. And part of it is to do with using a side effect of system infrastructure to implement a service. There’s nothing in the language of the implementation code that describes the meaning of reporting received messages: “it’s just logging”.

Once again, if I want my code to do something, I should just say so…

Update: I’ve had several responses here and on other media about how teams might avoid this particular failure. All of them are valid, and I know there are techniques for doing what I’m supposed to while using a logging framework.

I was trying to make a different point—that some code techniques seem to lead me in better directions than others, and that a logging framework isn’t one of them. Once again I find that the trickiness in testing an example like this is a clue that I should be looking at my design again. If I introduce a collaboration to receive structured notifications, I can separate the concepts of handling messages and reporting progress. Once I’ve split out the code to support the reconciliation messages, I can test and administer it separately—with a clear relationship between the two functions.
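To make that concrete, here’s a minimal sketch of the collaboration I have in mind (MessageEvents, MessageProcessor, and the method names are all invented for illustration):

// A hypothetical notification collaborator: the processor reports
// domain events in domain terms and neither knows nor cares how
// they are recorded.
public interface MessageEvents {
  void messageAccepted(Message message);
  void messageRejected(Message message, String reason);
}

public class MessageProcessor {
  private final MessageEvents events;

  public MessageProcessor(MessageEvents events) {
    this.events = events;
  }

  public void process(Message message) {
    // ... domain logic decides whether to accept the message ...
    events.messageAccepted(message);
  }
}

One implementation of MessageEvents could write accepted messages to a file for the reconciliation job; another could delegate to a logger; a unit test can just use a mock. Either way, the reconciliation requirement now has a name in the code, and renaming MessageProcessor can’t silently switch it off.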

None of this guarantees a perfect design, but I find I do better if I let the code do the work.

Test-First Development 1968

Seeing Kevlin Henney again at the Goto conference reminded me of a quotation he cited at Agile on the Beach last month.

In 1968, NATO funded a conference with the then provocative title of Software Engineering. Many people feel that this is the moment when software development lost its way, but the report itself is more lively than its title suggests.

It turns out that “outside in” development, with early testing, is older than we thought. Here’s a quote from the report by Alan Perlis:

I’d like to read three sentences to close this issue.

  1. A software system can best be designed if the testing is interlaced with the designing instead of being used after the design.
  2. A simulation which matches the requirements contains the control which organizes the design of the system.
  3. Through successive repetitions of this process of interlaced testing and design the model ultimately becomes the software system itself. I think that it is the key of the approach that has been suggested, that there is no such question as testing things after the fact with simulation models, but that in effect the testing and the replacement of simulations with modules that are deeper and more detailed goes on with the simulation model controlling, as it were, the place and order in which these things are done.

It’s all out there in our history; we just have to be able to find it.

An example of an unhedged software call option

At a client, we’ve been reworking some particularly hairy calculation code. For better or worse, the convention is that we call a FooFetcher to get hold of a Foo when we need one. Here’s an example that returns Transfers, which are payments to and from an account. In this case, we’re mostly getting hold of Transfers directly because we can identify them1.

public interface TransferFetcher {
  Transfer      fetchFor(TransferId id);
  Transfer      fetchOffsetFor(Transfer transfer);
  Set<Transfer> fetchOutstandingFor(Client client, CustomerReference reference);
  Transfer      fetchFor(CustomerReference reference);
}

This looks like a reasonable design—all the methods are to do with retrieving Transfers—but it’s odd that only one of them returns a collection of Transfers. That’s a clue.

When we looked at the class, we discovered that the fetchOutstandingFor() method has a different implementation from the other methods and pulls in several dependencies that only it needs. In addition, unlike the other methods, it has only one caller (apart from its tests, of course). It doesn’t really fit in the Fetcher implementation, which is now inconsistent.

It’s easy to imagine how this method got added. The programmers needed to get a feature written, and the code already had a dependency that was concerned with Transfers. It was quicker to add a method to the existing Fetcher, even if that meant making it much more complicated, than to introduce a new collaborator. They sold a Call Option—they cashed in the immediate benefit at the cost of weakening the model. The team would be ahead so long as no-one needed to change that code.

The option got called on us. As part of our reworking, we needed to change how Transfer objects were constructed so we could handle a new kind of transaction. The structure we planned meant changing another object, say Accounts, to depend on a TransferFetcher, but the current implementation of TransferFetcher depended on Accounts to implement fetchOutstandingFor(). We had a dependency loop. We should have taken a diversion and moved the behaviour of fetchOutstandingFor() into an appropriate object, but then we had our own delivery pressures. In the end, we found a workaround that allowed us to finish the task we were in the middle of, with a note to come back and fix the Fetcher.
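The diversion we should have taken looks something like this (OutstandingTransfers is an invented name; the point is the shape of the split, not the details):

// Hypothetical split: the plain lookups stay together, while the
// outstanding-transfers query, with its extra dependencies, gets a
// home of its own. Accounts can now depend on TransferFetcher
// without creating a dependency loop.
public interface TransferFetcher {
  Transfer fetchFor(TransferId id);
  Transfer fetchOffsetFor(Transfer transfer);
  Transfer fetchFor(CustomerReference reference);
}

public interface OutstandingTransfers {
  Set<Transfer> fetchOutstandingFor(Client client, CustomerReference reference);
}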

The cost of recovery includes not just the effort of investigating and applying a solution (which would have been less when the code was introduced) but also the drag on motivation. It’s a huge gumption trap to be making steady progress towards a goal and then be knocked off course by an unnecessary design flaw. The research described in The Progress Principle suggests that small blockers like this have a disproportionate impact compared to their size. Time to break for a cup of tea.

I believe that software quality is a cumulative property. It’s the accumulation of many small good or bad design decisions that either make a codebase productive to work with or just too expensive to maintain.

…and, right on cue, Rangwald talks about The Tyranny of the Urgent.


1) The details of the domain have been changed to protect the innocent, so please don’t worry too much about the specifics.

Thanks to @aparker42 for his comments

Going to Goto (twice)

I’ll be at Goto Aarhus October 9-14 this year, giving a presentation and a workshop on Nat Pryce’s and my material on using Test-Driven Development at multiple levels, guiding the design of system components as well as the objects within them.

If you register with the code free1250, you’ll get a discount of 1250 DKK and Goto will donate the same amount to Computers for Charities.

Some of us are then rushing to Goto Amsterdam, where I’ll be giving the talk again on Friday. Again the code free1250 will do something wonderful, but I’m not quite sure what.

Is Dependency Injection like Facebook?

The problem with social networks

I think Paul Adams’ talk about online vs. offline social networks contains a good description of how Dependency Injection goes bad, particularly when using one of the many automated frameworks.

Adams describes a research subject, Debbie, who in “real life” has friends and contacts from very different walks of life. She has friends from college with alternative lifestyles who post images from their favourite LA gay bars. She also trains local 10-year-olds in competitive swimming. Both the college friends and the swimming kids have “friended” her. She was horrified to discover that these two worlds had inadvertently become linked through her social networking account.

This is the “Facebook problem”. The assumption that all relationships are equivalent was good enough for college dorms but doesn’t really scale to the rest of the world, hence Google+. As Adams points out,

Facebook itself is not the problem here. The problem here is that these different parts of Debbie’s life that would never have been exposed to each other offline were linked online.

Like most users, Debbie wasn’t thinking of the bigger picture when she bound the whole of her life together. She was just connecting to people she knew and commenting on some pictures of guys with cute buns.

Simile alert!

Let’s revisit the right-hand side of that illustration.

This is Nat’s diagram for the Ports and Adapters pattern. It illustrates how some people (including us) think system components should be built, with the domain logic in the centre protected from the accidental complexity of the outside world by a layer of adapters. I do not want to have my web code inadvertently linked directly to my persistence code (or even connected to LA gay bars).
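As a rough illustration in code (all the names here are invented), the domain defines the ports and knows nothing about the adapters:

// Port: an interface defined by the domain, in domain terms.
public interface PaymentLedger {
  void record(Payment payment);
}

// Domain logic depends only on the port.
public class Billing {
  private final PaymentLedger ledger;

  public Billing(PaymentLedger ledger) {
    this.ledger = ledger;
  }

  public void charge(Customer customer, Money amount) {
    ledger.record(new Payment(customer, amount));
  }
}

// Adapter: lives at the edge, translating between the port and the
// accidental complexity of the outside world.
public class JdbcPaymentLedger implements PaymentLedger {
  public void record(Payment payment) {
    // SQL and connection handling stay out here, never in Billing
  }
}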

That’s the trouble with the use of DI frameworks in the systems I’ve seen: there’s only one level of relationship, “get me an object from the container”. When I’m adding a feature, I just want to get hold of some component—and here’s an easy way to do it. It takes a lot of rigour to step back at every access and consider whether I’m introducing a subtle link between components that really shouldn’t know about each other.

I know that most of the frameworks support managing different contexts but it seems that, frankly, that’s more thinking and organisation than most teams have time for at the beginning of a project. As for cleaning up after the fact, well it’s a good way to make a living if the company can afford it and you like solving complex puzzles. More critical, however, is that the Ports and Adapters structure is recursive. Trying to manage the environments of multiple levels of subsystem with most current containers would be, in Keith Braithwaite’s words, “impossible and/or insane”.

new again

The answer, I believe, is to save the DI frameworks for the real boundaries of the system, the parts which might change from installation to installation. Otherwise, I gather object assembly into specialised areas of the code where I can build up the run-time structure of the system with the deft use of constructors and new. It’ll look a bit complex but no worse than the equivalent DI structure (and everyone should learn to read code that looks like lisp).
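Concretely, the kind of assembly area I have in mind looks something like this (all the component names are invented):

// A hypothetical assembly point: the run-time structure is spelled
// out with constructors in one place, so the object graph is visible
// and every dependency is explicit. Only the genuinely variable
// edges need come from configuration or a container.
public class ApplicationAssembly {
  public static Application assemble(Configuration config) {
    final Database database = Database.connectTo(config.databaseUrl());
    return new Application(
        new WebFrontEnd(
            new OrderHandler(
                new OrderBook(database),
                new PricingPolicy())),
        new AuditTrail(database));
  }
}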

If I later find that I can’t get access to some component that I think I need, that’s not necessarily a bad thing. It’s telling me that I’m introducing a new dependency and sometimes that’s a hint that a component is in the wrong place, or that I’m trying to use it from the wrong place. The coding bump is a design feedback mechanism that I miss when I can just pull objects out of a container. If I do a good job, I should find that, most of the time, I have just the right components at the time that I need them.

Test-Driven Development and Embracing Failure

At the last London XpDay, some teams talked about their “post-XP” approach. In particular, they don’t do much Test-Driven Development because they find it’s not worth the effort. I visited one of them, Forward, and saw how they’d partitioned their system into composable actors, each of which was small enough to fit into a couple of screens of Ruby. They release new code to a single server in their farm, watching the traffic statistics that result. If it’s successful, they carefully propagate it out to the rest of the farm. If not, they pull it and try something else. In their world, the improvement in traffic statistics, the end benefit of the feature, is what they look for, not the implemented functionality.

I think this fits into Dave Snowden’s Cynefin framework, where he distinguishes between the ordered and unordered domains. In the ordered domain, causes lead to effects. This might be difficult to see and require an expert to interpret, but essentially we expect to see the same results when we repeat an action. In the complex, unordered domain, there is no such promise. For example, we know that flocking birds are driven by three simple rules but we can’t predict exactly where a flock will go next. Groups of people are even more complex, as conscious individuals can change the structure of a system whilst being part of it. We need different techniques for working with ordered and unordered systems, as anyone who’s tried to impose order on a gang of unruly programmers will know.

Loosely, we use rules and expert knowledge for ordered systems: the appropriate actions can be decided from outside the system. Much of the software we’re commissioned to build is about lowering the cost of expertise by encoding human decision-making. This works for, say, ticket processing, but is problematic for complex domains where the result of an action is literally unknowable. There, the best we can do to influence a system is to try probing it and be prepared to respond quickly to whatever happens. Joseph Pelrine uses the example of a house party—a good host knows when to introduce people, when to top up the drinks, and when to rescue someone from that awful bore from IT. A party where everyone is instructed to re-enact all the moves from last time is unlikely to be equally successful1. Online start-ups are another example of operating in a complex environment: the Internet. Nobody really knows what all those people will do, so the best option is to act, to ship something, and then respond as the behaviour becomes clearer.

Snowden distinguishes between “fail-safe” and “safe-fail” initiatives. We use fail-safe techniques for ordered systems because we know what’s supposed to happen and it’s more effective to get things right—we want a build system that just works. We use safe-fail techniques for unordered systems because the best we can do is to try different actions, none of which is large enough to damage the system, until we find something that takes us in the right direction—with a room full of excitable children we might try playing a video to see if it calms them down.

At the technical level, Test-Driven Development is largely fail-safe. It allows us, amongst other benefits, to develop code that just works (for multiple meanings of “work”). We take a little extra time around the writing of the code, which more than pays back within the larger development cycle. At higher levels, TDD can support safe-fail development because it lowers the cost of changing our mind later. This allows us to take an interim decision now about which small feature to implement next or which design to choose. We can afford to revisit it later when we’ve seen the result without crashing the whole project.

Continuous deployment environments such as at Forward2, on the other hand, emphasise “safe-fail”. The system is partitioned up so that no individual change can damage it, and the feedback loop is tight enough that the team can detect and respond to changes very quickly. That said, even the niftiest lean start-up will have fail-safe elements too: a sustained network failure or a data breach could be the end of the company. Start-ups that fail to understand this end up teetering on the edge of disaster.

We’ve learned a lot over the last ten years about how to tune our development practices. Test-Driven Development is no more “over” than Object-Orientation is; it’s just that we understand better how to apply it. I think our early understanding was coloured by the fact that the original eXtreme Programming project, C3, was payroll, an ordered system; I don’t want my pay cheque worked out by trying some numbers and seeing who complains3. We learned to Embrace Change, that it’s a sign of a healthy development environment rather than a problem. As we’ve expanded into less predictable domains, we’re also learning to Embrace Failure.


1) this is a pretty good description of many “Best Practice” initiatives
2) Fred George has been documenting safe-fail in the organisation of his development group too; he calls it “Programmer Anarchy”
3) although I’ve seen shops that come close to this