January 16, 2007
Say it in tests
Visionpace has been working with a client, providing agile coaching, over the last few weeks. Like all of our clients, the organization is full of bright people, making a viable product, and struggling to balance new development projects against existing legacy code. In this instance, the situation has led to a few critical people wearing multiple hats and juggling many different responsibilities.
Because these people are in demand for so many competing things, there always seems to be a bottleneck around them. As one might imagine, the responsibilities that involve human interaction (managing people or supporting customers) take higher priority than those that are more technology-derived (like testing).
Some of the common problems that we’ve seen in this situation include:
- The team has spent a significant amount of time refining and re-defining what the scope of a story is during the iteration.
- During the iteration planning the developers work with the customer proxies to define the stories and acceptance tests for them, but the tests are not always captured at this time.
- The tests are sometimes too narrowly defined or too broadly defined to be of any use in validating the code.
- There is often a desire to select stories that have not been fully clarified. They are the highest priority, or the next logical step, but during iteration planning questions arise that need outside input. At these times the notion is to work on what we know now and fill in the blanks before we run out of defined tasks. Usually the answers to the unknowns change the ‘known’ items and lead to more questions. This cycle repeats a few times before the dust settles.
Some of the smells associated with these problems are:
- Constantly adding tasks during the iteration to account for missed features
- Running out of tasks in the backlog mid-iteration because the features were misunderstood.
- Generating a lot of code inventory during the iteration because it is waiting on testing (developers saying that ‘I think it’s done, but it needs to be tested.’)
- Switching gears from implementing tasks to iteration planning (user story discussion and task breakdown) mid-iteration. This isn’t always bad. It just shouldn’t be the norm.
So what’s the proposed solution to these smells, you ask? Inspect and adapt. We’ve been including the expert user in the iteration planning meetings and asking them to review the user story with the developers. This draws the developers into discussions with the customer proxy about the pros and cons of what is or isn’t included in the story. It creates overall confusion about what needs to be tasked out, and can lead to a feeling of uncertainty about what needs to be implemented. The focus of the iteration planning drifts toward architecture possibilities and eventually a ‘code speak’ between the developers about where we should go. To avoid this we’re going to try something new on the project, something we’ve used successfully in the past with other Visionpace clients.
We’ll break the iteration down into two steps: iteration prep and iteration planning. The goal of iteration prep is to define what the user story is (and isn’t). The output from iteration prep is a set of written acceptance tests and some low-fidelity models of forms, reports, etc. for the user story. The tests are then used in the tasking portion of the iteration planning meeting to narrow the scope of the conversations. The features of the user story are described to the development team via the list of acceptance tests, and the low-fidelity mock-ups are used to explain the tests. (Say it in tests!) Another benefit of this approach is that the acceptance tests are refined during iteration planning, so when a developer feels they have implemented the feature, they have a series of tests to confirm their belief. Finally, the tests from iteration prep are either added to a manual test script or incorporated into an automated testing tool.
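To make the last step concrete, here is a minimal sketch of what an acceptance test captured during iteration prep might look like once incorporated into an automated tool. The user story, function names, and discount amounts are all hypothetical, and the stand-in implementation exists only so the example runs on its own; the real system under test would live elsewhere.

```python
# Hypothetical user story: "A customer can apply a discount code at checkout."

def apply_discount(total, code):
    """Stand-in implementation so the tests below are runnable."""
    discounts = {"SAVE10": 0.10}  # assumed discount table
    if code not in discounts:
        raise ValueError("unknown discount code")
    return round(total * (1 - discounts[code]), 2)

# Acceptance tests written during iteration prep, one per agreed behavior.
def test_valid_code_reduces_total():
    assert apply_discount(100.00, "SAVE10") == 90.00

def test_unknown_code_is_rejected():
    try:
        apply_discount(100.00, "BOGUS")
        assert False, "expected an unknown code to be rejected"
    except ValueError:
        pass  # rejection is the agreed behavior

test_valid_code_reduces_total()
test_unknown_code_is_rejected()
print("all acceptance tests passed")
```

The point is not the mechanics of any particular tool: the list of tests itself is the statement of what the story is and isn’t, and a developer can run it to confirm when the feature is actually done.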
The takeaway from all of this is that roles and responsibilities are leveraged to minimize chaos. The customer (or proxy), the scrum master, and other appropriate parties work to define what the user story is without worrying about underlying architecture or developer-centric details. This definition is delivered to the developers in the form of low-fidelity screen mock-ups and documented acceptance tests, so they can focus on what they are being asked to implement.
Posted by martinolson on January 16, 2007 | Permalink