I've recently worked with a team doing its first agile project (though one or two of the team members had been involved in an agile project before). The most difficult concept to get across was TDD - test-driven development. I found that people didn't really grok the idea until I had pair-programmed with them for a couple of hours. I wonder why that might be.
Dan North has suggested one possibility. He observed that newcomers to TDD often miss the really big payback because they continue to think that TDD is mainly about testing - even if they will admit that writing the tests before the code leads to better-quality code. They never make the transition to treating TDD as a design process that lets them discover the API of the component they're writing, nor to the realisation that TDD is about defining the behaviour of their component and its interactions with the other components of the system.
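To make Dan's point concrete, here's a minimal sketch in Python of what "discovering the API through the tests" looks like. The Account component and all its names are invented for illustration; the point is only that the tests are written first and effectively design the interface, and the code is then written to satisfy them.

```python
import unittest

# The tests come first: they pin down the API we wish we had
# (a hypothetical Account component, names invented for illustration).
class AccountBehaviour(unittest.TestCase):
    def test_deposit_increases_balance(self):
        account = Account()
        account.deposit(100)
        self.assertEqual(account.balance, 100)

    def test_cannot_withdraw_more_than_balance(self):
        account = Account()
        account.deposit(50)
        with self.assertRaises(InsufficientFunds):
            account.withdraw(100)

# Only now do we write just enough code to make the tests pass;
# the shape of the class was discovered through the tests above.
class InsufficientFunds(Exception):
    pass

class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise InsufficientFunds()
        self.balance -= amount

if __name__ == "__main__":
    unittest.main()
```

Notice that decisions we'd normally call "design" - that withdrawal failure is an exception rather than a return code, that balance is a readable attribute - were all made while writing the tests, before a line of the component existed.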
Keith Braithwaite has put forward another consideration. In the physical engineering disciplines, practitioners speed up their work by using gauges. There are many kinds, from the spark plug gap gauge, which is no more than a sliver of metal to slide between the electrodes, to electronic vernier calliper gauges that can be pre-set to a precise dimension with tolerances above and below. Each workpiece is checked against the appropriate gauge(s) at each stage of the process, and workpieces that are out of tolerance are sent back for rework or scrapped. Our unit tests are a bit like that - they provide a safeguard that the software component we're working on still meets all of its requirements after any engineering we've done to it.
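The analogy carries over quite literally: a unit test can behave exactly like a gauge with a tolerance band, and it gets applied to the workpiece after every change. A tiny sketch, again with invented names:

```python
import unittest

# A hypothetical calculation we keep reworking; the test below is the
# gauge that every new version of it must pass through before moving on.
def spark_plug_gap(base_gap_mm, wear_mm):
    return base_gap_mm + wear_mm

class GapGauge(unittest.TestCase):
    def test_gap_is_within_tolerance(self):
        # The gauge: nominal dimension 0.9 mm, tolerance +/- 0.05 mm.
        self.assertAlmostEqual(spark_plug_gap(0.8, 0.1), 0.9, delta=0.05)

if __name__ == "__main__":
    unittest.main()
```

Rework the function however you like; as long as the result slides between the electrodes of the assertion, the workpiece proceeds to the next stage.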
It occurred to me today that unit and acceptance tests, particularly when automated, perform another valuable function in the context of an agile (and especially a lean) development process. Whereas the waterfall processes familiar to most developers are characterised by "quality gates" at a few key stages, in an agile development every single artifact has its own little quality gate, manifested in the appropriate tests. This theoretically frees the development process from the bottlenecks that the traditional quality gates tend to become.
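In practice those little quality gates are just automated test runs whose exit status gates the next step of the build. A minimal sketch of such a gate (the commands and script are my own invention, not any particular team's pipeline):

```python
import subprocess
import sys

# Run the whole suite; a non-zero exit status means some artifact
# failed its gate, so we refuse to promote the build any further.
result = subprocess.run([sys.executable, "-m", "unittest", "discover"])
sys.exit(result.returncode)
```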
I say "theoretically", because in many instances agile development projects have to take place within a quality system that doesn't take advantage of incremental delivery. Instead, continued approval and in many cases funding for the project tends to be contingent on passing the traditional quality gates following requirements analysis, functional specification, high-level design, low-level design, coding, integration, system test and acceptance. Project managers are therefore forced to conjure up some kind of spurious linkage between the milestones laid down in the rigid quality system and some arbitrary points along their product release plan. This can hamper their freedom to adjust the release plan in response to changing circumstances and emerging technical insights.
This could be avoided if the quality system recognised that properly written tests represent every work product of a software development project apart from the code itself. It would then only need to insist on a verification, at each iteration (or perhaps at each release), that the tests comprehensively and comprehensibly represent the requirements the business places on the system under development, and that the required set of tests passes repeatably. I say "the required set" because there is always the possibility that some tests will fail intentionally - for example, tests written for features that are not scheduled for the current release.
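One way to handle those intentionally failing tests is to mark them as expected failures, so the requirement stays on record in the suite without breaking the release's required set. A sketch using Python's unittest (the expectedFailure decorator is real; the Account component and the currency feature are invented for illustration):

```python
import unittest

# A hypothetical component: currency support is planned for a later
# release, so the constructor doesn't accept it yet.
class Account:
    def __init__(self):
        self.balance = 0

class FutureFeature(unittest.TestCase):
    # The requirement is captured as a test now, but marked as an
    # expected failure until the feature lands; the "required set"
    # for this release therefore excludes it.
    @unittest.expectedFailure
    def test_account_can_be_opened_in_euros(self):
        account = Account(currency="EUR")  # raises TypeError today
        self.assertEqual(account.currency, "EUR")

if __name__ == "__main__":
    unittest.main()
```

When the feature is implemented the test starts passing, the framework flags it as an unexpected success, and the marker is removed - the test has graduated into the required set.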
In other words, TDD can be used to eliminate the quality-gate bottlenecks of quality systems that assume waterfall development processes.