I've recently worked with a team doing its first agile project (though one or two team members had been involved in an agile project before). The most difficult concept to get across was TDD - test-driven design. I found that people really didn't grok the idea until I pair-programmed with them for a couple of hours. I wonder why that might be.
Dan North has suggested one possibility. He observed that newcomers to TDD often miss the really big payback because they continue to think of TDD as being mainly about testing - even if they will admit that writing the tests before the code leads to better-quality code. They never make the transition to treating TDD as a design process that lets them discover the API of the component they're writing, nor to the realisation that TDD is really about defining the behaviour of that component and its interactions with the rest of the system.
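To make that concrete, here is the sort of thing a test-first step looks like (a minimal sketch of my own; the Account API is invented purely for illustration, not taken from Dan's writing). The test is written against code that doesn't exist yet, so writing it is an act of API and behaviour design, not of verification:

```python
import unittest

# A minimal, invented example: the test is written first, so every
# line of it is a design decision about what the component is called,
# what you pass it, and what behaviour it promises.
class AccountBehaviourTest(unittest.TestCase):
    def test_transfer_moves_money_between_accounts(self):
        source = Account(balance=100)
        target = Account(balance=0)
        source.transfer(50, to=target)  # this call *is* the API being designed
        self.assertEqual(source.balance, 50)
        self.assertEqual(target.balance, 50)

# Only once the test exists do we write the simplest code that
# satisfies the behaviour it describes.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def transfer(self, amount, to):
        self.balance -= amount
        to.balance += amount

if __name__ == "__main__":
    unittest.main()
```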
Keith Braithwaite has put forward another consideration. In physical engineering disciplines, practitioners speed up their work process by using gauges. There are many kinds, from the simple spark plug gap gauge, which is simply a sliver of metal to slide between the electrodes, to electronic vernier calliper gauges that can be pre-set to a precise dimension with tolerances above and below. Each workpiece is tested at each stage of the process by checking its dimensions with the appropriate gauge(s). Workpieces that are out of tolerance are sent back for rework or scrapped. Our unit tests are a bit like that - they provide a safeguard that the software component we're working on still meets all its requirements following any engineering we've done.
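To stretch the analogy into code: a unit test can quite literally be a gauge, checking a measurement against a tolerance band. A toy sketch (the gap figures and all names are invented for illustration):

```python
import unittest

# A toy "gauge" test: like the sliver of metal slid between the
# electrodes, it checks a dimension against a tolerance band and
# rejects any workpiece that falls outside it.
NOMINAL_GAP_MM = 0.65   # made-up nominal spark plug gap
TOLERANCE_MM = 0.05     # made-up tolerance above and below

def gap_in_tolerance(measured_mm):
    return abs(measured_mm - NOMINAL_GAP_MM) <= TOLERANCE_MM

class SparkPlugGapGaugeTest(unittest.TestCase):
    def test_gap_within_tolerance_passes(self):
        self.assertTrue(gap_in_tolerance(0.63))

    def test_gap_out_of_tolerance_is_rejected(self):
        self.assertFalse(gap_in_tolerance(0.80))  # back for rework

if __name__ == "__main__":
    unittest.main()
```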
It occurred to me today that unit and acceptance tests, particularly if automated, perform another valuable function in the context of an agile (especially a lean) development process. Whereas the waterfall processes familiar to most developers are characterised by "quality gates" at key stages, every single artefact in an agile development has its own little quality gate, manifested in the appropriate tests. This theoretically frees the development process from the usual bottlenecks that quality gates tend to become.
I say "theoretically", because in many instances agile development projects have to take place within a quality system that doesn't take advantage of incremental delivery. Instead, continued approval and in many cases funding for the project tends to be contingent on passing the traditional quality gates following requirements analysis, functional specification, high-level design, low-level design, coding, integration, system test and acceptance. Project managers are therefore forced to conjure up some kind of spurious linkage between the milestones laid down in the rigid quality system and some arbitrary points along their product release plan. This can hamper their freedom to adjust the release plan in response to changing circumstances and emerging technical insights.
This could be avoided if the quality system could recognise that properly written tests represent every work product of a software development project apart from the code itself. It should therefore simply insist on a verification at each iteration (or at each release, perhaps) that the tests comprehensively and comprehensibly represent the requirements of the business on the system under development, and that the required set of tests passes repeatably. I say "the required set" because there's always the possibility that some tests will intentionally fail - e.g. where they have been written to test features that are not yet in the current release.
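Most xUnit frameworks can express that "required set" directly. In Python's unittest, for example, a test written ahead of a feature can be marked as an expected failure or conditionally skipped, so the gate remains a repeatable green bar. A sketch, with invented feature and release names:

```python
import unittest

CURRENT_RELEASE = "1.2"  # invented release number, for illustration

class InvoicingTests(unittest.TestCase):
    def test_invoice_total_is_sum_of_line_items(self):
        # Part of the required set for this release: must pass.
        self.assertEqual(sum([10, 20, 5]), 35)

    @unittest.expectedFailure
    def test_multi_currency_invoicing(self):
        # Written ahead of a feature planned for a later release; its
        # failure is expected, so it doesn't break the quality gate.
        self.fail("multi-currency support not implemented yet")

    @unittest.skipUnless(CURRENT_RELEASE >= "2.0", "scheduled for release 2.0")
    def test_invoice_archiving(self):
        self.fail("would be fleshed out for the 2.0 required set")

if __name__ == "__main__":
    unittest.main()
```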
In other words, TDD can be used to eliminate the quality-gate bottlenecks of quality systems that assume waterfall development processes.
3 comments:
Immo, I kind of like the historical meaning of TDD - Test Driven Development - and the reason is that I feel there are many more advantages to doing TDD than just improving the design.
You learn to take small steps as you develop your code, to roll back when you don't know why the new tests fail, to fake or triangulate an implementation to understand exactly what you want to implement, and to make your code speak. These and other things are what you learn when you practise TDD, and none of them is less important than design, because they make you develop in a fluent way...
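For anyone who hasn't met "fake it" and triangulation: you first hard-code a result to make the first test pass, then add further examples that force a real implementation. A made-up leap-year sketch of the progression (my example, not the commenter's):

```python
import unittest

# Step 1, "fake it": the first test below originally passed with a
# hard-coded implementation, e.g.  def leap_year(year): return True
#
# Step 2, triangulate: the second and third examples force the fake
# to become the real rule.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_typical_leap_year(self):
        self.assertTrue(leap_year(2004))   # the fake satisfied this one

    def test_typical_common_year(self):
        self.assertFalse(leap_year(2003))  # this forced a real implementation

    def test_century_is_not_a_leap_year(self):
        self.assertFalse(leap_year(1900))  # a further triangulation step

if __name__ == "__main__":
    unittest.main()
```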
One of the ways I sometimes think about TDD tests is as termination conditions - like the exit condition of a loop.
You know you can stop coding when you pass the test.
Without the test you code and code and code. An infinite loop.
Adding to these thoughts, I also like the fact that thinking about how a piece of engineering will be used - and abused - once it has left your fingertips forces you to think a lot more deeply about the problem you are trying to solve, and about the problem you eventually do solve: TDD inevitably causes a much more robust and complete API to evolve. And it does this very efficiently and effectively, compared with tracking failures and fixing them later, once that piece of code/API is in productive use. TDD, IMHO, is a major shortcut to the state that a well-used piece of code would (eventually) evolve to naturally.