Steve Freeman

Agile Programming

Lean and Agile: should cousins marry?

Dave West has written a cautionary posting (Lean and Agile: Marriage Made in Heaven or Oxymoron?) on the dangers of taking a simplistic view of Lean and Agile. He’s right that a naive reading of a Lean approach to software will just trap us in another metaphor, manufacturing, that’s as inappropriate (or appropriate) as we once thought building construction was. And yes, there will be teams that adopt the techniques without understanding the meaning behind them, just as we’ve seen with all the Agile methodologies. I particularly like his reminder to show the bigger picture by making visible the current product backlog, with all its inconsistencies and misunderstandings.

I think there is a place for the production features of Lean in software practice, such as the idea that when I get something working, it really works and it stays working. I think there’s value within development in staying focussed on the task at hand and getting to “Done, done”. For some teams, just getting their system to a state where they can safely make and deploy changes is a huge advance. As Martin Fowler points out, there are quite a few “Agile” teams that can’t achieve that. Working in a reliable code base where all the mechanical stuff just works is so much more productive than the alternative, which counts, I think, as removing waste.

David Anderson claims that nowadays the most significant waste is usually around development rather than within it. It’s in the weakness of the organisation’s communication and prioritisation, the stuff that Dave has pointed out we need to support; teams waste effort when they don’t communicate and understand.

Once I started exploring deeper into the Lean material, I discovered the Toyota Product Development System, which some think is even more important than their Production system. From a Production point of view it looks very wasteful, because they work on multiple possible solutions but choose only one, which allows them always to deliver on time. From a Product point of view it’s worth the cost, because slipping the date is too expensive, and it’s not total waste because they record what they’ve learned from the failures and use it in the next generation of products. I think this is where much of the activity that Dave wants to remind us about belongs, and many software teams just don’t spend enough time exploring the range of features and technical issues available to them.

To mangle Dave’s metaphor, I don’t think we should expect wedding bells because the couple is already too closely related. There will be mistakes of interpretation in the field, such as dismissing retrospectives as waste, but, properly understood, the underlying values share too much DNA.

“Hammers considered harmful”

Here’s another post on the lines of: “Hammers considered harmful. Every time I use one, it strips the threads from my screws.” One of the clues is in the list of symptoms at the end of the first paragraph: “mammoth test set-ups”. The tests were complaining but not being heard.

In truth, we’ve done a dreadful job of explaining where interaction-based techniques are relevant and where they aren’t. I keep bumping into codebases that are supposed to be written that way but where the unit tests have baroque, inflexible setups because the team weren’t listening to the tests. I even saw Lasse Koskela, who knows what he’s doing, during a programming demo at the recent Agile Conference, slip into writing expectations for a simple clock object that should have just returned a series of values; J.B. Rainsberger, being more forthright than me, called him on it.
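To illustrate the distinction, here’s a hypothetical sketch (in Python rather than Java, with invented names): a clock that feeds back a canned series of values is a stub to query against, not a collaborator to set expectations on. The test below checks the result, not how many times or in what order the clock was called.

```python
class StubClock:
    """A stub: returns a canned series of times. No expectations are set on it."""
    def __init__(self, *times):
        self._times = iter(times)

    def now(self):
        return next(self._times)


class Stopwatch:
    """The object under test: measures elapsed time via a collaborating clock."""
    def __init__(self, clock):
        self._clock = clock

    def time(self, action):
        start = self._clock.now()
        action()
        return self._clock.now() - start


# The test queries state; it does not assert *how* the clock was used.
elapsed = Stopwatch(StubClock(10, 25)).time(lambda: None)
assert elapsed == 15
```

Writing expectations for each call to `now()` would pin the test to an implementation detail, which is exactly the kind of baroque setup that the tests were complaining about.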

Romilly, one of my collaborators, once said that when he started working with Nat and me he was surprised by how simple our unit tests are and how few expectations we set. That degree of focus is one of the points we try to get across whenever we talk about our approach. I find that the best use of interaction-based testing is to nudge me into thinking about objects and their relationships, not as a single solution for all my TDD needs.

In the meantime, we’re working on making our ideas more accessible.

Test-Driven Development. A Cognitive Justification?

It’s been a busy week. Michael Feathers has an interesting post on the nature of Test-Driven Development, to which Keith has responded. I think Michael overstated my position on “most” people (it was probably a bar discussion), but over the years I’ve seen a lot of TDD code that doesn’t look right. Incidentally, Tim Mackinnon, who was there, tells the story of the origin of Mocks at the bottom of this page.

With that out of the way, I’d like to get to the real point of this posting.

A Cognitive Justification for Test-Driven Development

Two influences coincided for me at XP2008 this week: Dave Snowden talking about social complexity, including current understanding of how the mind works, and Naresh Jain pairing to understand different people’s approaches to Test-Driven Development.

Dave has spent a lot of time exploring how decision-making happens. In particular, it turns out that people don’t actually spend their time carefully working out the trade-offs and then picking the best option. Instead, we employ a “first-fit” approach: work through an ordered list of learned responses and pick the first one that looks good enough. All of this happens subconsciously, then our slower rational brain catches up and justifies the existing decision—we can’t even tell it’s happening. Being an expert means that we’ve built up more patterns to match so that we can respond more quickly and to more complicated situations than a novice, which is obviously a good thing in most situations. It can also be a bad thing because the nature of our perception means that experts literally cannot receive certain kinds of information that falls outside their training, not because they’re inadequate people but because that’s how the brain works.

Part of Dave’s practice is concerned with breaking through what he calls this “Expert Entrainment”. He has developed exercises to shuffle our list of response patterns and allow other ideas to break through the crust of skills we’ve worked so hard to acquire. One motivation for doing this is to stop experts jumping to a known solution when they haven’t really understood the situation.

Naresh, meanwhile, is on a mission to pair program with the world to understand how different people approach Test-Driven Development, with an example problem that he uses with everyone. My preference these days is to start with a very specific example of the use of the system and then, as I add more examples, extract structure by refactoring. As we talked this through, Naresh described another programmer who noticed that the problem was an instance of a more general type of system and coded that up directly; there was nothing in his solution that included the language of the example. The other programmer had used his expertise to recognise an underlying solution and short-circuit the discovery process—that’s why we claim higher rates for experience. This programmer was right about his solution, so why did the leap to a design bother me (apart from my own Expert Entrainment)?

Then it struck me: Test-Driven Development, at least as practised by the school that I follow, progresses by focussing on the immediate, on addressing narrow, concrete examples. Don’t worry about all those ideas buzzing around your head for how the larger structure should be; just make a note and park them. For now, just do something to address this little concrete example. Later on, when you’ve gathered some empirical evidence, you can see if you were right and move the code in that direction.
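As a minimal sketch of that rhythm (Python, with an invented toy problem): narrow, concrete examples come first, and the general rule is only extracted once the examples demand it.

```python
# Step 1: pin down one narrow, concrete example of the behaviour we want.
def test_cheapest_of_one():
    assert cheapest([("apple", 3)]) == "apple"

# Step 2: a later example, added once the first passes, drives out the
# general rule that our expert mind wanted to jump to straight away.
def test_cheapest_of_two():
    assert cheapest([("apple", 3), ("pear", 2)]) == "pear"

# The implementation after refactoring against both examples; the very
# first version was just `return prices[0][0]` -- enough for step 1 only.
def cheapest(prices):
    return min(prices, key=lambda item: item[1])[0]

test_cheapest_of_one()
test_cheapest_of_two()
```

The point is the ordering: the naive `return prices[0][0]` version looks silly, but writing it keeps the larger design parked until there is evidence for it.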

I think what this means is that Test-Driven Development works (or should do) by breaking our first-fit pattern matching. It stops us being expert and steam-rolling over the problem with, literally, the first thing that came into our minds. It forces us out of our comfort zone long enough to consider the real requirements we should be addressing. Even better, starting with a test forces us to think first about the need (what’s the test for that?), and then about a solution that our expert mind is so keen to provide.

Just in case you missed that (and it took me a while to see it), it makes a cognitive difference whether you write the tests first or the code.

The best supporting evidence is Arlo Belshee’s group that implemented Promiscuous Pairing. They found empirically that they were most productive when switching pairs every couple of hours, contrary to what anyone would expect; their view was that they were taking advantage of constantly being in a state of “Beginner’s Mind”. Of course, to make TDD work in practice, we still need all that expertise underneath to draw on, but to support, not to control.

Personally, I’m constantly surprised at the interesting solutions that come up from being very focussed on the immediate and concrete, with a background awareness of the larger picture. By letting go, I discover more possibilities. Very Zen.

Incremental and decremental development

Nat Pryce just wrote this sidebar for our book:

Incremental and Iterative Development

In a project organised as a set of nested feedback loops, development is incremental and iterative.

Incremental Development builds a system feature by feature, not module by module. Each feature is implemented as an end-to-end “slice” through all the modules of the system. The system is always integrated and ready for deployment.

Iterative Development progressively refines the implementation of features in response to feedback until they are good enough for purpose.

Yesterday we realised that there are two other categories of development:

Decremental development is where you improve the system by removing code; most systems could do with more of this.

Detrimental development is where the code you’ve just written has made the system worse.

Actually, we’ve found a couple of references for Decremental Development, one from Kevlin Henney, and a whole research proposal from Peter Sommerlad.

Actually, Nat might have thought of Detrimental Development first.

Programming, it’s really about language

Yesterday, during the XpDay Sampler track at QCon, Keith Braithwaite presented the latest version of his talk on measuring the characteristics of Test-Driven code. Very briefly, many natural phenomena follow a power law distribution (read the slides for more explanation), in spoken language this is usually known as Zipf’s Law. Keith found that tracking the number of methods in a code base for each level of cyclomatic complexity looks like such a power law distribution where the code has comprehensive unit tests, and in practice all the conforming examples were written Test-First; trust a physicist to notice this. This matters because low-complexity methods contain many fewer mistakes.
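To make the claim concrete, here’s a small sketch (Python, with made-up counts) of the check Keith is describing: if the number of methods at each cyclomatic complexity level follows a power law, the points fall on a straight line in log-log space, and the fitted slope is negative.

```python
import math

# Hypothetical data: number of methods found at each complexity level.
methods_per_complexity = {1: 512, 2: 130, 3: 55, 4: 31, 5: 20}

# Under a power law, count ~ C * complexity^(-k), so log(count) is linear
# in log(complexity). A least-squares fit of the slope estimates -k.
xs = [math.log(c) for c in methods_per_complexity]
ys = [math.log(n) for n in methods_per_complexity.values()]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)

assert slope < 0  # method counts fall off steeply as complexity rises
```

A code base dominated by a few enormous, complex methods would show a much flatter (or lumpier) curve, which is the signature Keith reports for code without comprehensive tests.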

Keith used jMock as his example of code at the “good” end of the scale (thanks Keith) and, as he was showing some examples of its implementation, it struck me that a great many of those small, low-complexity methods were syntactic sugar: they were there to attach a meaningful name to a little piece of code. We put a great deal of emphasis in our coding style on readability, on teasing out concepts and expressing them directly in code, and on trying to minimise the accidental noise from the language; we don’t always succeed, but that’s what we’re trying to do.

Is this why our code conforms to Zipf’s Law, because we’re trying to think in terms of language and expression, rather than in terms of procedures? Hmmmm.


The other point about Keith’s discovery is that it doesn’t yet say anything about causality. The first conclusion one might come to is that Test-Driving code leads to a power-law structure, but I’ve seen TDD code that definitely does not have that characteristic. An alternative explanation might be that the sort of people who write that sort of code were amongst the first to be drawn to TDD, and that maybe TDD encourages the trend if you’re already mostly there. I’m not sure what an appropriate experiment would be; perhaps mining some old code that the TDDers wrote before they learned the practice? There are just too many variables.

Asphalt on mud

Dave Nicolette has a nice post called Good-enough today beats complete next year about how Agile helps to focus development on delivering something valuable in time for the business, instead of waiting for the full, perfect solution.

I think it’s important also to distinguish different kinds of quality. The clue is in the phrase “well-graded dirt road”, which means that it doesn’t do much but it is well built within its parameters. This is not the same thing as laying asphalt on mud, which might be as quick to build and looks good, but will be a source of misery and expense as long as it exists—and will be in the way when it’s time to build something more substantial.

Potemkin Agile

I’ve been meaning to write this post for a little while and now I feel triggered by Keith Braithwaite’s especially grumpy contribution.

I’ve had a few discussions around the “Has Agile Lost Its Mojo” session, including with Keith. One person called me an elitist, which is an interesting term of disapproval around an investment bank. Clarke Ching wrote that he bailed out of the related session after five minutes because he was so upset.

First the “Mojo” session, which was intended to be a contentious topic with a cute title that would get people talking on their way to the pub. In that respect it seems to have succeeded. It was also intended to flag what looks like a shift in the community, as ideas that were treated as ludicrous less than ten years ago have become accepted, and even dogma, in some communities. Is this the moment where the venture capitalists oust the founders to bring in experienced management? Maybe it is, judging by the number of organisations with proper sales people who have started using Agile terminology.

It’s true, as Keith points out, that the Agile manifesto is largely platitudes—except that there are still plenty of organisations that don’t act as if it were true, particularly the one about people over process. But I claim that some of the success that Keith, Clarke, and others have been having (apart from the fact that they know what they’re doing) is because the hot-house clique (including Keith) and others went out and made it work, and generated enough noise to make terms like “incremental” acceptable in polite conversation. Very little of the kind of improvements we’re introducing today could not have been introduced before, so something must have changed. I, for one, would be sorry to see the extremists of the Agile movement wither away because, if nothing else, that’s where the ideas get to be “well-tried”. As Dave Snowden just wrote in a much heavier post than this,

Multiple small initiatives showing that there is a different way of doing things are vital, and people prepared to make sacrifices of convention to establish them are to be praised. The witness of community is a part of the history of humanity and one that continues and needs to continue today.

He also has some warnings about Model Communities, but that’s for another day.

As for the “Compromised Agility” session. Again, this was intended to be contentious. I know the presenters and they appear to have been attracting favourable attention from some very senior people at their current client because they’re offering real value instead of faffing around like their competitors. To quote from the client’s CFO, “This is the first time I’ve visited a team where everyone clearly knows exactly what’s going on in the project.” They got the job because they don’t compromise on the stuff they think is important and they managed to find a client that likes that. Is this every client in the world? No, but then it doesn’t have to be. That said, part of Simon and Gus’s point is that too many people burn out early, letting their organisation continue to haemorrhage value because they just can’t face the struggle any more.

Now to the slightly darker part of this posting. The phrase “Potemkin Agile” is a reference to the apocryphal story that Prince Potemkin rigged up fake villages along the Dnieper River to show Catherine II and her court how well his development of the Crimea was going.

As Agile becomes regarded as a good in itself, we should expect to see organisations claim they’ve successfully adopted Agile when the attempt is so half-baked that the result is worse than what came before. I’d include some of Clarke’s horror stories in that category. A long time ago, I went through the Total Quality training at a large corporation. There was good content in the material but most people treated it as a box-ticking exercise to be endured until they could get back to some Real Work, which took the topic right off the agenda. More recently, I’ve seen situations where hit-and-run training has left teams officially “Agile” but lost and miserable; somebody somewhere met their transitioning targets but left out the hard stuff, the follow-up and necessary structural changes. I’ve also seen projects which had all the visible characteristics of an Agile project except that the working code they delivered had no value to the company, because no-one knew what it was for. Think this is just me moaning? Here’s a paper from a respectable business school professor complaining about CIOs who measure value based on a system’s delivery not its use.

Any approach where what people do is misaligned with the organisation is compromised, whether it’s management burning value by only watching costs or technical staff polishing the wrong code. This raises a huge “How do we get there from here?” question, which leaves plenty of room for disagreements like this.

At Citcon Brussels

I’m back from Citcon, so here are a few more notes.

Things people have shown

The worst build I ever worked on…

We had a hallway discussion about some of the difficult build environments we’ve worked on over the years. A bad build can be really unpleasant to work with and a blocker to progress. One project I worked on burned out three developers in a row trying to get a messy build under control.

The discussion reminded me of something I always knew but only figured out recently: a complicated build is often a symptom of design weaknesses. So when I’m thinking about adding another little tweak to the build to fix a problem, I should first take a look at the code to see if there’s a root cause that I should address first. For me, the classic example of fixing the wrong problem is a build that changes the code to set parameters, which means I need to build artefacts for each configuration. Usually this requires lots of copying stuff around, which takes time and is harder to track. The real answer is to have clean artefacts that can be deployed anywhere and to separate out the per-environment features.
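A sketch of that separation (Python, with invented names and keys): the build produces one artefact containing defaults, and deployment overlays the environment-specific values from outside the artefact, so nothing needs rebuilding per environment.

```python
import os

# All environment-specific values live *outside* the artefact; the build
# produces one binary and deployment supplies the differences.
DEFAULTS = {"db_url": "jdbc:h2:mem:dev", "timeout_secs": "5"}

def load_config(environ=os.environ):
    """Overlay deploy-time settings (environment variables) on defaults."""
    config = dict(DEFAULTS)
    for key in config:
        config[key] = environ.get("APP_" + key.upper(), config[key])
    return config

# The same artefact, pointed at production purely by its environment:
prod = load_config({"APP_DB_URL": "jdbc:postgresql://db/prod"})
assert prod["db_url"] == "jdbc:postgresql://db/prod"
assert prod["timeout_secs"] == "5"
```

With this shape there is nothing for the build to copy around: one artefact, many deployments.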

Concurrent builds

As often happens, the most interesting snippet for me was right at the end. Jeffrey Fredrick talked about how his group has an optimistic, rather than pessimistic, approach to running multiple builds. They run all their builds in parallel, rather than having a pipeline of increasingly complicated tests, and people can check in provided they pass the fast check-in build that catches the obvious errors. The corollary is that people can check in even when there are broken secondary builds, which is a bit shocking to the hard core. Usually, any failures settle down as check-ins ease off towards the end of the day.

The idea is to get feedback as soon as possible, and to avoid the problem that some teams have where it’s hard to get a check-in window because it takes too long to confirm the last one. Of course, they have a culture that makes this work: they’re doing shrink-wrap so their release cycle is longer, they have enough hardware to run in parallel, and I assume that people have the initiative to pick up failed builds and fix them.

Why I am a Keyboard Hog

Hi. My name is Steve and I’m (sniff), I’m a Keyboard Hog.

But there may be a solution. I was talking to fellow sufferer Ivan (http://ivan.truemesh.com/) on the train back from SPA (http://www.spaconference.org/spa2007/index.html) and I thought about my, in NLP (http://en.wikipedia.org/wiki/Representational_systems) terms, kinesthetic (http://en.wikipedia.org/wiki/Kinesthetic) tendency[1]. I realised that the important thing for me was to have something in my hands and, with a single keyboard, this meant doing too much of the typing. Recently, at my current client (http://www.easynet.com/gb/en/) we’ve been able to plug in two keyboards thanks to the wonders of USB and I think I’m getting the habit under control. Now I have something to hold, but my pair can type as well. Ivan said he recognised the pattern, but now he’s post-technical it doesn’t affect him.

One day at a time.


[1] Actually, reading through the NLP article, I seem to have all the tendencies at once. I guess it’s like reading a medical textbook.

Misuse Stories [OOPLSA2006]

Vidar Kongsli is talking about “Towards Agile Security in Web Applications” (http://www.oopsla.org/2006/submission/practitioner_reports/towards_agile_security_in_web_applications.html). They’ve done a nice job of integrating the two, which is interesting as the culture of security people tends to be more static.

During planning, they introduced “Misuse Stories”, like user stories but for potential exploits of the system. Once they have Misuse Stories, they can write tests to catch them and roll security into the process — educating the developers along the way. Interestingly, they also found that security is simpler to work with when broken into smaller features. Of course, the hard part is ensuring completeness, since security is a quality of the whole system.
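A misuse story can be tested much like a user story’s acceptance test. This is a hypothetical Python sketch (the story, the `render_comment` function, and the defence are all invented for illustration): “As an attacker, I submit a script in the comment field so it runs in other users’ browsers.”

```python
import html

def render_comment(text):
    """Escape user input before it reaches the page (sketch of the defence)."""
    return "<p>" + html.escape(text) + "</p>"

# The test for the misuse story: the attack payload must come out inert.
rendered = render_comment('<script>alert("pwned")</script>')
assert "<script>" not in rendered
assert "&lt;script&gt;" in rendered
```

Framed this way, the exploit becomes a small, failing test that developers can make pass, which is how security gets rolled into the normal process.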