Steve Freeman


Speaking and tutoring at QCon London, 7-11 March

Nat and I will be running our “TDD at the System Scale” tutorial at QCon London 2011. Sign up soon.

I’ll also be presenting an engaging rant on why we should aspire to living and working in a world where stuff just works.

If you quote the promotion code FREE100 when you sign up, QCon will give you a £100 discount and donate the same amount to the charity Crisis.

What are we being primed for?

The excellent BBC popular science programme Bang Goes the Theory recently reproduced this experiment on priming. In the original experiment, the subjects were primed by being asked to write sentences based on sets of words: one set was neutral, the other contained words related to an elderly stereotype. The result was that

participants for whom an elderly stereotype was primed walked more slowly down the hallway when leaving the experiment than did control participants, consistent with the content of that stereotype.

In the “Bang” experiment, they took two queues of people entering the Science Museum and placed pictures of the elderly and infirm around one queue, and of the young and active around the other. The result was the same: people in the queue with the elderly images took significantly longer to walk into the building.

It’s striking that such a small thing can affect how we behave.

Now, look around your work environment and consider what it’s priming you for. Are you seeing artefacts of purpose and effectiveness? Or does it speak of regimentation and decay? Now look at your computer screen. Are you seeing an environment that emphasises productivity and quality? Or does it speak of control and ugliness?

It’s amazing that some of us get anything done at all.

This isn’t about spending lots of money to look nice (although that espresso machine is appreciated). I suspect that the sort of “funky, creative” offices that get commissioned from designers dressed in black are usually an upmarket version of motivational posters.

My guess is that a truly productive environment must have some “authenticity” for the people who spend most of their days in it. Most geeks I know would be happy with a trestle-table provided they get to spend the difference on a good chair and powerful kit, and other disciplines might have other priorities.

But then, perhaps every environment is authentic since the organisation is making clear what it really values most. And what might that imply?…

Bad code isn’t Technical Debt, it’s an unhedged Call Option

I’d been meaning to write this up for a while, and now Nat Pryce has written up the 140 character version.

Payoff from writing a call.

This is all Chris Matts’ idea. He realised that the problem with the “Technical Debt” metaphor is that for managers debt can be a good thing. Executives can be required to take on more debt because it makes the finances work better; it might even be encouraged by tax breaks. This is not the same debt as your personal credit card. Chris came up with a better metaphor: the Call Option.

I “write” a Call Option when I sell someone the right, but not the obligation, to buy in the future an agreed quantity of something at a price that is fixed now. So, for a payment now, I agree to sell you 10,000 chocolate santas[1] at 56 pence each, at any time up to 10th December. You’re prepared to pay the premium because you want to know that you’ll have santas in your stores at a price you can sell.

From my side, if the price of the santas stays low, I get to keep your payment and I’m ahead. But I also run the risk of having to provide these santas when the price has rocketed to 72 pence. I can protect myself by making arrangements with another party to acquire them at 56 pence or less, or by actually having them in stock. Or, I can take a chance and just collect the premium. This is called an unhedged, or “Naked”, Call. In the financial world this is risky because it has unlimited downside: I have to supply the santas whatever they cost me to provide.
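The santa arithmetic can be sketched as a tiny payoff calculation. (This is my illustration, not from the original post; the 5p premium is an assumed figure, while the 56p strike and 72p spot come from the example above.)

```java
// Payoff, in pence per santa, for the writer of a naked call option.
public class NakedCall {
    static int writerPayoff(int premium, int strike, int spotAtExercise) {
        // The holder exercises only when the market price exceeds the strike,
        // so the writer's loss grows without limit as the spot price rises.
        int exerciseCost = Math.max(0, spotAtExercise - strike);
        return premium - exerciseCost;
    }

    public static void main(String[] args) {
        // Price stays low at 50p: the option expires and I keep the premium.
        System.out.println(NakedCall.writerPayoff(5, 56, 50));  // 5
        // Price rockets to 72p: I must supply at 56p, losing 16p per santa.
        System.out.println(NakedCall.writerPayoff(5, 56, 72));  // -11
    }
}
```

The asymmetry is the point: the writer's upside is capped at the premium, while the downside is unbounded, which is exactly the shape of a quick hack that might one day have to be reworked under pressure.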

Call options are a better model than debt for cruddy code (without tests) because they capture the unpredictability of what we do. If I slap in a feature without cleaning up, then I get the benefit immediately; I collect the premium. If I never see that code again, then I’m ahead and, in retrospect, it would have been foolish to have spent time cleaning it up.

On the other hand, if a radical new feature comes in that I have to do, all those quick fixes suddenly become very expensive to work with. Examples I’ve seen are a big new client that requires a port to a different platform, or a new regulatory requirement that needs a new report. I get equivalent problems if there’s a failure I have to interpret and fix just before a deadline, or the team members turn over completely and no-one remembers the tacit knowledge that helps the code make sense. The market has moved away from where I thought it was going to be and my option has been called.

Even if it is more expensive to do things cleanly (and I’m not convinced of that beyond a two-week horizon), it’s also less risky. A messy system is full of unhedged calls, each of which can cost an unpredictable amount should it ever be exercised. We’ve all seen what this can do in the financial markets, and the scary thing is that failure, if it comes, can be sudden: everything is fine until it isn’t. I’ve seen a few systems which are just too hard to change to keep up with the competition, and their owners are in real trouble.

So that makes refactoring like buying an option too. I pay a premium now so that I have more choices about where I might take the code later. This is a mundane and obvious activity in many aspects of business—although not, it seems, software development. I don’t need to spend this money if I know exactly what will happen, if I have perfect knowledge of the relevant parts of the future, but I don’t recall when I last saw this happen.

So, the next time you have to deal with implausible delivery dates, don’t talk about Technical Debt. Debt is predictable and can be managed; it’s just another tool. Try talking about an Unhedged Call. Now all we need is a way to price Code Smells.

[1] There is an apocryphal story about a trader buying chocolate santa futures and forgetting to sell them on. Eventually a truckload turned up at the Wall Street headquarters.

Machiavelli on code quality

As the doctors say of a wasting disease, to start with, it is easy to cure but difficult to diagnose. After a time, unless it has been diagnosed and treated at the outset, it becomes easy to diagnose but difficult to cure.

— Niccolò Machiavelli, The Prince

via Dee Hock, Birth of the Chaordic Age

Not a charter for hackers

Just had to turn off comments since this post has become a spam target. Sorry.

Update: Kent since tweeted this nice one-liner:
a complete engineer can code for latency or throughput and knows when and how to switch

Kent Beck’s excellent keynote at the Startup Lessons Learned Conference has been attracting some attention on The Interweb. In particular, it seems like he’s now saying that careful engineering is wasteful—just copy and tweak those files to get a result now. I can already hear how this will be cited by frustrated proprietors and managers around the world (more on this in a moment).

What I think he actually said is that we should make engineering trade-offs. When we’re concerned with learning, we want the fastest turnaround possible. It’s like a physics apparatus: it’s over-engineered if it lasts beyond the experiment. But if we’re delivering stuff that people will actually use, that we want them to rely on, then the trade-off is different and we should do all that testing, refactoring, and so on that he’s been talking about for the last ten years. Kent brushes over that engineering stuff in his talk, but it’s easy to forget how rare timely, quality delivery is in the field.

My favourite part is Kent’s answer to the last question. A stressed manager or owner asks how to get his developers to stop being so careful and just ship something. Kent’s reply is to present the developers with the real problem, not the manager’s solution, and let them find a way. What the manager really wants is cheap feedback on some different options. The developers, given a chance, might find a better solution altogether, without being forced into arbitrarily dropping quality.

Good developers insist on maintaining quality, partly to maintain pride in their work (as Deming proposed), but also because we’ve all learned that it’s the best route to sustained delivery.

As Brian Marick pointed out recently, it’s about achieving more (much more) through relationships, not one side or another achieving dominance.

Mark Twain again

We should be careful to get out of an experience only the wisdom that is in it—and stop there—lest we be like the cat that sits down on a hot stove-lid. She will never sit down on a hot stove-lid again, and that is well; but also she will never sit down on a cold one any more.

via Gemba Panta Rei

Twain also wrote of opera, “that sort of intense but incoherent noise which always so reminds me of the time the orphan asylum burned down.”

Calling an Oracle stored procedure with a Table parameter with Spring’s StoredProcedure class

I don’t normally do this sort of thing, but this took my colleague Tony Lawrence and me a while to figure out and we didn’t find a good explanation on the web. This will be a very dull posting unless you need to fix this particular problem. Sorry about that.

We happen to be using the Spring database helper classes to talk to Oracle with stored procedures. It turns out that there’s a bug in the driver that means that you have to jump through a few hoops to pass values in when the input parameter type is a table. This should be equivalent to an array, but apparently it isn’t, so you have to set up the callable statement correctly. Where to do this was not obvious (to us) in the Spring framework.

Here’s an example stored procedure declaration:


CREATE PACKAGE db_package AS
  -- list_type is an index-by table declared inside the package;
  -- VARCHAR2(50) matches the element length used in the Java code below
  TYPE list_type IS TABLE OF VARCHAR2(50) INDEX BY BINARY_INTEGER;
  PROCEDURE a_stored_procedure(table_in IN list_type);
END db_package;

The table_in parameter type list_type is declared within a package, which means we can’t declare the parameter as an OracleTypes.ARRAY when setting up the statement. Instead, we declare it as the type of the table contents, OracleTypes.VARCHAR.

class MyProcedure extends StoredProcedure {
  public MyProcedure(DataSource dataSource) {
    super(dataSource, "db_package.a_stored_procedure");
    // declared as the element type, not OracleTypes.ARRAY (see above)
    declareParameter(new SqlParameter("table_in", OracleTypes.VARCHAR));
    compile();
  }
  void call(String... values) {
    execute(withParameters(values));
  }

Here’s the money quote. When setting up the parameter, you need to provide it with a SqlTypeValue. Don’t use one of the helper base classes that come out of the box, but create an implementation directly. That gives you access to the statement, which you can cast and set appropriately.

  private Map<String, Object> withParameters(String... values) {
    return ImmutableMap.of("table_in",
                           oracleIndexTableWith(50, values));
  }

  private <T> SqlTypeValue
  oracleIndexTableWith(final int elemMaxLen, final T... values) {
    return new SqlTypeValue() {
      public void setTypeValue(
          PreparedStatement statement, int paramIndex,
          int sqlType, String typeName) throws SQLException {
        // cast to the Oracle statement to reach the index-by table support
        ((OracleCallableStatement) statement).setPlsqlIndexTable(
            paramIndex, values, values.length, values.length,
            sqlType, elemMaxLen);
      }
    };
  }

That’s it. Happy copy and paste.

Responding to Brian Marick

Brian’s been paying us the compliment of taking our book seriously and working through our extended example, translating it to Ruby.

He has a point of contention in that he’s doubtful about the value of our end-to-end tests. To be more precise, he’s doubtful about the value of our automated end-to-end tests, a view shared by J. B. Rainsberger, Arlo Belshee, and Jim Shore. That’s a pretty serious group. I think the answer, as always, is “it depends”.

There are real advantages to writing automated end-to-end tests. As Nat pointed out in an extended message to the mailing list for the book,

Most significantly to me, however, is the difference between “testing” end-to-end or through the GUI and “test-driving”. A lot of people who are evangelical about TDD for coding do not use end-to-end tests for driving design at the system scale. I have found that writing tests gives useful design feedback, no matter what the scale.

For example, during Arlo and Jim’s session, I was struck by how many of the “failure stories” described situations where the acceptance tests were actually doing their job: revealing problems (such as deployment difficulties) that needed to be fixed.

Automating an end-to-end test helps me think more carefully about what exactly I care about in the next feature. Automating tests for many features encourages me to work out a language to describe them, which clarifies how I describe the system and makes new features easier to test.

And then there’s scale. Pretty much anything will work for a small system (although Alan Shalloway has a story about how even a quick demonstrator project can get out of hand). For larger systems, things get complicated, people come and go, and the team isn’t quite as confident as it needs to be about where things are connected. Perhaps these are symptoms of weaknesses in the team culture, but it seems wasteful to me to take the design experience we gained while writing the features and not encode it somewhere.

Of course this comes at a price. Effective end-to-end tests take skill, experience, and (most important) commitment. Not every system I’ve seen has been programmed by people who are as rigorous as Nat about making the test code expressive or allowing testing to drive the design. Worse, a large collection of badly written end-to-end tests (a pattern I’ve seen a few times) is a huge drag on development. Is that price worth paying? It (ahem) depends, and part of the skill is in finding the right places to test.

So, let me turn Brian’s final question around. What would it take to make automated end-to-end tests less scary?

London XpDay 7th & 8th December

XP Day

There are still a few places left for the London XpDay, an event designed by practitioners for practitioners.

We’re trying the half-Open Space format again, with a day of prepared sessions (some promising experience reports this year) leading to a day of ad-hoc sessions. This means we can have a conference that’s more responsive to the needs of the attendees in the room—if I want to cover a topic I can propose a session.

And we have some interesting keynotes. Apart from Mark Striebeck, talking about scaling up some agile techniques as only Google can, we’re continuing our tradition of bringing in “outside” speakers to trigger discussion. We have Doron Swade (who built the calculating engine in the Science Museum) talking about Babbage, and storyteller Terry Saunders.

Nat and I will also be using the opportunity to launch our book in the UK.

Friday 13th, Talking at Skills Matter

Prove you’re not superstitious! I’ll be giving my talk on Sustainable TDD at Skills Matter on Friday, 13th November. Sign up here (if you dare).

This talk is about the qualities we look for in test code that keep the development “habitable.” We want to make sure the tests pull their weight by making them expressive, so that we can tell what’s important when we read them and when they fail, and by making sure they don’t become a maintenance drag themselves. We need to apply as much care and attention to the tests as we do to the production code, although the coding styles may differ. Difficulty in testing might imply that we need to change our test code, but often it’s a hint that our design ideas are wrong and that we ought to change the production code. In practice, these qualities are all related to and support each other. Test-driven development combines testing, specification, and design into one holistic activity.

I just ran it at the BBC and people seemed to like it.

If you miss this opportunity, you can always see it at QCon San Francisco.