Friday 15 May 2009

24th March 2009: Theory of Constraints Challenged

My sincere thanks to Kevin Rutherford and Allan Kelly for co-presenting a fascinating session about lean software development to the BCS Kingston & Croydon branch on 24th March this year, entitled "Lean, Constraints, Action!". The audience was excellent too and helped us re-create a famous experiment related by Eliyahu Goldratt in "The Goal".




I had participated in this game previously at the London XP Day 2008 (facilitated by Karl Scotland in an Open Space session). It is designed to demonstrate a counter-intuitive finding: that a lean, pull-oriented flow substantially reduces the amount of inventory or work in progress (WIP) while improving throughput.

However, I had a sneaking feeling that the experiment was biased: in the first "push" simulation the assembly line was not pre-loaded with any WIP, while in the second "pull" simulation each workstation's input buffer was pre-loaded with workpieces, either up to the maximum limit or to 50% of that limit. Since a workpiece can advance at most one workstation per round, an empty six-station line cannot deliver its first finished piece until round six at the earliest. So in a simulation of 10 rounds (equivalent to ten working days - roughly the average cycle time in a six-workstation setup), the push simulation only starts to produce output towards the very end, while the pull simulation produces something from the very first day.



So I got Allan and Kevin to agree to vary the rules a little bit, to try to get closer to a "steady state" from the first "day". Before each of the two simulations, our teams placed three Lego blocks on each of the coasters representing the input buffers of the second through sixth workstations (the first workstation of course has the whole of the product backlog as its input hopper). In fact, as it turned out, four workpieces would have been closer to the true steady state in the pull simulation, even more in the push simulation.

Off our teams went and played the production line for ten rounds each. In the push simulation, the die was passed in order from workstation 1 to workstation 6 during each round, and the number of workpieces transferred to the next input buffer was the number thrown, up to the number of pieces available in the workstation's input buffer. Instances of starvation were rare under this system, but they did occur. At the end we counted the number of pieces that had come off the end of the line and the number still in progress (i.e. on any of the five input buffers for workstations 2 to 6).
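For anyone who wants to replay the push game without a box of Lego, here is a little Python sketch of it as I have described it. The six workstations, the ten rounds, the die and the pre-load of three pieces per buffer come from the session above; the function and variable names are my own, and I have assumed that pieces delivered during a round only become available to the next workstation on the following round, which is consistent with the ten-day cycle time mentioned earlier.

import random

NUM_STATIONS = 6   # workstations on the line
ROUNDS = 10        # "days" in one game
PRELOAD = 3        # pieces placed on each input buffer of stations 2-6 (our varied rule)

def play_push(rounds=ROUNDS, preload=PRELOAD, seed=None):
    """One push game: every round each station throws the die and pushes up to
    that many pieces downstream, limited only by its own input buffer."""
    rng = random.Random(seed)
    buffers = [preload] * (NUM_STATIONS - 1)   # input buffers of stations 2-6
    finished = 0
    for _ in range(rounds):
        # decide every station's move from the buffers as they stood at the
        # start of the round ...
        moves = []
        for station in range(NUM_STATIONS):
            throw = rng.randint(1, 6)
            moves.append(throw if station == 0 else min(throw, buffers[station - 1]))
        # ... then apply all the moves together
        for station, moved in enumerate(moves):
            if station > 0:
                buffers[station - 1] -= moved
            if station == NUM_STATIONS - 1:
                finished += moved              # off the end of the line
            else:
                buffers[station] += moved
    return finished, sum(buffers)              # throughput and remaining WIP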

In the pull simulation, the die was passed in the opposite direction and the input buffers were constrained to a maximum of six workpieces. So if the next input buffer had three pieces already in it and the player threw anything over a 3, they could only pass along 3 more workpieces (subject to their own input buffer holding at least 3, of course). Once again, the results after 10 rounds were compared.
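The pull game differs only in the direction the die travels and in the cap of six pieces per input buffer, so, carrying on from the sketch above (same caveats apply), it might look like this:

WIP_LIMIT = 6   # maximum pieces allowed on any input buffer in the pull game

def play_pull(rounds=ROUNDS, preload=PRELOAD, limit=WIP_LIMIT, seed=None):
    """One pull game: the die travels from station 6 back to station 1, and a
    station may move no more pieces than its throw, its own input buffer and
    the free space on the downstream buffer allow."""
    rng = random.Random(seed)
    buffers = [preload] * (NUM_STATIONS - 1)   # input buffers of stations 2-6
    finished = 0
    for _ in range(rounds):
        for station in reversed(range(NUM_STATIONS)):
            throw = rng.randint(1, 6)
            available = throw if station == 0 else min(throw, buffers[station - 1])
            if station == NUM_STATIONS - 1:
                moved = available              # finished goods are not capped
                finished += moved
            else:
                space = limit - buffers[station]
                moved = min(available, space)  # respect the kanban limit downstream
                buffers[station] += moved
            if station > 0:
                buffers[station - 1] -= moved
    return finished, sum(buffers)              # throughput and remaining WIP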

The results didn't surprise me particularly, but I think some of the others were a little taken aback:



As you can see, the constraint resulted in both lower WIP and lower throughput. This makes sense when you consider that there were far more occasions during the pull game than during the push game when players were unable to process the full number of workpieces indicated by the die.
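If you want to check the effect for yourself rather than rely on a single ten-round game, a small harness like the one below (again just my own sketch, built on the two functions above) averages throughput and end-of-game WIP over many simulated games. I would expect it to show the same pattern we saw on the day: the capped line carries less WIP but also completes fewer pieces.

def compare(games=10000, rounds=ROUNDS):
    """Average throughput and end-of-game WIP over many simulated games."""
    for name, play in (("push", play_push), ("pull", play_pull)):
        total_done = total_wip = 0
        for game in range(games):
            done, wip = play(rounds=rounds, seed=game)
            total_done += done
            total_wip += wip
        print("%s: mean throughput %.1f, mean WIP %.1f"
              % (name, total_done / float(games), total_wip / float(games)))

if __name__ == "__main__":
    compare()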

Looking back at the game notes, I see that they state that if the simulation is run for much longer than 10 days, the pull (or Kanban) system "will rarely produce as much as the traditional push". This may have escaped the attention of some readers (or perhaps it's a more recent edit - I don't know).



My conclusion is that you get nothing for free. The cost of reducing WIP is reduced throughput, which is perfectly acceptable as long as you're aware of it. Software development projects are not production lines in any case, so it is very unlikely that any developer will sit around kicking her or his heels if the work runs out on a given day. There are always low-priority tasks such as fettling the build system, cleaning up the documentation on the project Wiki or answering user support requests - or the developer can simply take the next item off the product backlog and raise the kanban limit temporarily.

Tuesday 5 May 2009

Distributed bug-tracking in Haskell

At the recent SPA 2009 conference, there was a lot of talk about functional programming, Haskell in particular (a couple of years ago, the flavour of the month had been Erlang). Just to prove that Haskell is no longer "just a research language", along comes DisTract, a distributed issue-tracking system that runs in the Firefox browser. If you're already using Git, Darcs, Mercurial or Monotone as your distributed software configuration management solution, the author reasoned, why shouldn't you be able to close bugs while you're off-line, at the same time as you check in your fix? Caveat: I have not tried this yet, but it sounds like a really neat idea. Does anyone know of a user forum?