June 08, 2007
Why use Tasks?
As we coach software development teams in Kansas City, we are often asked about the process of breaking User Stories down into development tasks (anything related to implementing and verifying a User Story). This is one of the areas teams struggle with when adopting an Agile process. It's actually something they often struggle with under most any process, but most Agile approaches rely heavily on it for a number of reasons.
1) Iteration Planning
Tasks are a good last check for iteration planning purposes. Even though we can rely (to some extent) on past actual velocity to get a sense of what a team might be able to achieve in subsequent iterations, breaking stories down into development tasks/hours allows the team to apply what they've learned so far during a given release, namely that some stories may be larger or smaller than previously thought. They can use that information to develop a better, more reliable and realistic iteration plan.
During planning, openly discussing development tasks gives all team members a view of what will take place while implementing the stories selected for the iteration. During that discussion they can help each other clarify and improve their development plan and discuss lesser known areas of the code, database, etc. to help other team members learn and become more productive in their development.
2) Iteration Burndown (how many hours are left to do within a given iteration)
Having tasks with hours estimates enables the team to discuss (during the Daily Stand Up meeting) why certain tasks might be taking longer than planned, why some tasks were overlooked when a story was initially discussed, and why some tasks weren't ultimately needed. With this information, it becomes apparent when developers may be struggling with a given task and may need help.
It also forces more thorough thinking about the tasks needed to complete a story and aids in becoming better at tasking and task estimating during subsequent iteration planning meetings. It also gives the team an earlier sense of whether the goals and story points planned for a given iteration will be achieved, and allows the team to consider adjustments sooner.
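To make the burndown idea concrete, here is a minimal sketch in Python. The task names and hours are entirely hypothetical: each day the team re-estimates the hours remaining on every open task, and the daily total is what gets plotted and discussed at stand-up.

```python
# A burndown sketch; task names and hours are hypothetical. Each entry
# records the hours the team believes remain on a task, day by day.
tasks = {
    "build login form": [6, 4, 2, 0],
    "add users table":  [4, 4, 1, 0],
    "wire up auth API": [8, 8, 9, 5],  # day 3: task turned out bigger than planned
}

def burndown(tasks):
    """Total hours remaining per day across all tasks (the line you plot)."""
    days = len(next(iter(tasks.values())))
    return [sum(remaining[day] for remaining in tasks.values()) for day in range(days)]

print(burndown(tasks))  # [18, 16, 12, 5]
```

Notice that the "auth API" task was re-estimated upward on the third day; a totals-only view hides exactly the kind of struggle the stand-up discussion is meant to surface.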
3) Public Commitment
Defining and estimating tasks makes team commitments to an iteration public knowledge and increases the sense of urgency around getting better at task definition and estimation. It also helps maintain focus on agreed-upon tasks and indirectly trims waste related to gold plating. And it can encourage team members to seek assistance on tasks they might be struggling with, such as when a 2-hour task still isn't done after a couple of days of effort and the team's velocity begins to suffer.
Developing for a story without tasks can lead to stories that don't get done, or don't get done as hoped. Without tasks, the team loses the ability to provide assistance (since nobody knows what you're doing), and overall iteration and ultimately release predictability suffers.
4) Shared and Individual Learning
Discussing, defining, estimating and tracking tasks allows the entire team to learn about the problem domain, especially when the domain or parts of it might be new to certain team members. It also helps all team members become better at planning the work needed for all stories and at defining and estimating tasks.
5) Tasking Encourages Better Design
Thinking through a plan of attack for implementing user stories and creating steps (a.k.a. tasks) to achieve it tends to create a higher level of focus and optimize overall productivity. It also facilitates design discussion, often resulting in better and more complete story implementation.
6) Forecasting Velocity
When you don't have the luxury of running an iteration to get an actual velocity but need to provide stakeholders with some sense of cost and schedule, you need to forecast the team's velocity. Using tasks is very effective for this: estimate team capacity, break stories down into tasks/hours until the capacity is filled, and add up the points for the stories you just tasked. You now have a forecasted velocity to provide a preliminary forecast of cost and schedule.
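The steps above can be sketched in a few lines of Python. All numbers here are hypothetical, and the greedy "stop at the first story that doesn't fit" rule is one simple way to model "until the capacity is filled":

```python
# All numbers hypothetical: fill the team's capacity with tasked stories,
# in priority order, then sum the points of the stories that fit.
capacity_hours = 120  # e.g. 4 developers * 6 focused hours/day * 5 days

# (story points, estimated task hours) per story, highest priority first
stories = [(3, 30), (5, 45), (2, 20), (3, 35), (1, 10)]

def forecast_velocity(stories, capacity_hours):
    points = hours = 0
    for story_points, task_hours in stories:
        if hours + task_hours > capacity_hours:
            break  # capacity is filled; stop tasking
        hours += task_hours
        points += story_points
    return points

print(forecast_velocity(stories, capacity_hours))  # 10
```

Here the first three stories (30 + 45 + 20 = 95 hours) fit within 120 hours, so the forecasted velocity is their 3 + 5 + 2 = 10 points.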
7) Tasks Serve as Reminders
When you task, typically at the beginning of an iteration (during the planning meeting), you have the user's attention and can ask the questions you'll need answered to think through your plan of attack for a given story in terms of the development tasks that will be necessary. Even a few days into the actual iteration, you'll forget at least part of what you discussed with the user if you don't have recorded tasks, and you'll potentially have less access to the user to confirm or reconfirm tasks and/or stories.
8) Talk to the Dog
Having to talk out loud and/or in a public setting about tasks you'd need to complete a user story tends to create greater focus than just beginning to code. It typically creates better overall productivity and thoughtfulness in approaching the implementation of a story.
9) What if you hear, "I can't/won't task."?
If a team or team member simply says they won't task, that's more of a personal discussion. Asking them why they won't task is an obvious starting point, but the answer may ultimately speak more to ego, personality or an underlying resistance to change.
When a team or team member claims they can't task, you have more information to work with. A common stated reason is that they "just have to start coding" and don't really know about the tasks until they start working on the story. One approach is to say, "Fine, then write down the tasks as you uncover them during coding and we'll discuss and learn from those to make you a better up-front tasker." You can also allow a "research" task (2-4 hours) for a developer to spend time looking at the code, database, etc. for the purpose of ultimately tasking a user story.
Above are just a few broad reasons why tasking is useful for software development, in particular for Agile processes.
May 15, 2007
Liar, liar, pants on fire!
Before I start this diatribe, give me a minute to get up on my soapbox. Ah, that’s better. The air up here is not as polluted and I can see more clearly now. Too bad some of you can’t get up on the soapbox with me to enjoy the view, but I digress. OK, before going any further, I warn you that I am about to offend a lot of “kool-aid drinkers”. If you are one of them, tough toenails, because I’ll give you the bottom line right now – you, my friend, are an idiot. If you don’t have thick skin, read no further. Those of you brave enough to face the truth, continue on.
We here at Visionpace make every attempt to practice the Agile methodology of software development. We are proud of the fact that this practice serves both us, as software developers, and our clients extremely well. Many of our clients were reluctant at first, but once they understood the benefits, most of them gave it a chance and haven’t looked back.
Agile practices are a good thing, but I do have one beef that I am VERY passionate about and I am about to discuss (some of you might even say, proselytize) it. In many agile discussions, as well as discussions about other software disciplines, one often hears the phrase, “All comments are lies.” Those of you who believe this are (as I have alluded to above) idiots! However, those of you who do NOT believe it COULD still be idiots, and so, be sure to get a second opinion.
Hey, I get it. Don’t trust comments. They are not always accurate. Duh! I feel that there are a certain percentage of you out there who actually embrace this belief because it validates the fact that you were doing the right thing all those years by NOT putting comments in your code. No, it just validates that you are lazy. Lazy is not, necessarily, a bad trait in a software developer, but NOT commenting your code WITHIN the code makes you, dare I say it, an idiot. That is the last time I will use that term, because, hopefully by now, you are riled up enough to really listen to me.
Other than the fact that someone either knowingly wrote an inaccurate comment, or wrote a comment, modified the code pertaining to it, and then did not update the comment, can anyone out there give me a reason NOT to trust a comment you discover in code? We will exclude that person from the discussion because he is a “psycho programmer”; you know, the one that NEVER follows any kind of standard and uses one-character variable names. Besides, most “psycho programmers” don’t comment anyway.
Here are the reasons why you SHOULD comment your code.
- Let’s get the “trust” issue out of the way right now. If YOU put the comment in yourself, you BETTER believe that the comment is accurate. You did it (primarily) for yourself. The fact that an accurate comment will benefit a maintenance programmer who comes down the pike a year later is an extra benefit; a benefit which I appreciate when I am called in to do the maintenance. I will be more than happy to ASSume that a newly discovered comment is accurate because THAT developer put it in there and, like you, why would they want to lie to themselves? Yes, there is a chance that the comment is now inaccurate. Studies have shown, statistically, that approximately 93% of comments ARE accurate. Right here, you should be asking yourself where I got those numbers, and right here, I will tell you that I made them up. Who would waste their time researching that? Nevertheless, think about it: most comments ARE accurate because the person who writes them WANTS them to be accurate.
- OK, I can tell you are getting a little bit peeved right now. “Dave, I thought our code was supposed to be self-documenting? If I write bug-free code, why do I have to comment?” Oh, grasshopper, you will learn someday. Until then, trust me, write those comments! In all honesty, I, too, am a believer that under most circumstances, the code SHOULD be self-documenting. However, consider the following code:
REPLACE Name WITH UPPER(SUBSTR(LastName, 1, 2)) + LOWER(SUBSTR(LastName, 3))
That code is self-documenting; if someone wanted to take the time to figure out what it does, they could easily do so. The real question they should be asking themselves is... WHY is that code there? It looks kind of goofy anyway. When I first saw it, I thought that perhaps the coder didn’t know about the PROPER function, but then, they were really capitalizing the first TWO letters of the name. Maybe the parameters of the first substring should have been 1, 1. Maybe I should refactor using the PROPER function. I am soooooooo confused! Close to an hour of my time was spent determining WHY this code was there. Imagine how much easier it would have been had the following comment been in place.
DLA/Visionpace 04-01-2004 Mr. Jones, the COO, bought 113 file cabinets at an auction. He has decided that the new company-wide filing system will be based on the first TWO letters of the customer name. EACH drawer will have a two-letter designation and the file will be placed in the drawer based on the first two letters of the customer name. To make it easier to file, he wants the first two letters emphasized for easier reading.
Immediately, we can see that the code is correct and that there is a reason WHY it was written. Additionally, as often happens to me, in some scrum, when asked why this was done or who requested it, the answer is there. Not only for me, but for you when you come in to do the maintenance. Two minutes to initially write the comment or 60 minutes to figure out the details three years later. You do the math.
- Additionally, what one developer feels is self-documenting, another might not. Why not eliminate any potential confusion and explain the reason for the code’s existence? Again, it will save the next person who comes along much valuable time, for there is a real good chance that they will know nothing about the project and the code. Believe me, you will benefit also. One of my favorite phrases is... “When you are coding, you and God know what you are doing. Six months later, only God knows.” I cannot begin to tell you the number of times I have gone into some legacy code and the comment I wrote three years earlier IMMEDIATELY brings back to life the exact reason for the code. Many is the time that it has saved my fanny. Those are the times that I almost dislocate my shoulder patting myself on the back for a job well done.
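To show the filing-cabinet example outside of FoxPro, here is a rough Python equivalent. The function name is my own invention; the point is that the WHY comment carries the same information as the one above:

```python
def file_label(last_name):
    # DLA/Visionpace 04-01-2004: Mr. Jones, the COO, bought 113 file cabinets
    # at an auction. The company-wide filing system is based on the first TWO
    # letters of the customer name, so those two letters are upper-cased for
    # easier reading when filing. This is intentional -- do not "fix" it to
    # standard title case.
    return last_name[:2].upper() + last_name[2:].lower()

print(file_label("johnson"))  # JOhnson
```

The code itself is trivially self-documenting in any language; only the comment explains why it exists and stops the next maintainer from "correcting" it to PROPER-style casing.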
If you have made it this far, you are just not getting enough billable hours in. However, thank you for reading along and agreeing or disagreeing. Either way, I would appreciate hearing your thoughts on this very polarizing topic. I promise to bring an open mind if you do the same, but I will say right up front, that it will take a Herculean effort to change my mind. Care to give it a try?
December 11, 2006
On Being Agile
As you've worked on or with teams making the change to more of an Agile Software Development approach, you've probably heard comments along the lines of, "Hey, this is just assigning a name to what we've already been doing." -or- "These are all just common sense concepts."
As we coach teams on adopting Agile principles, we try to respond to comments like this by confirming that yes, Agile principles mostly represent filtering out activities that don't seem to be useful (at least not in every situation) and doing more of the remaining activities most of the time. These activities (or Agile principles) tend to be the items that people list as being useful for delivering working and tested code on a frequent, consistent basis. These also tend to be the elements that people list as existing on past teams they viewed as successful.
We encourage teams to not get bogged down in calling the process Agile (or not) but to just focus on identifying and alleviating their software development pains by applying principles that may or may not be a traditional part of one or more Agile methodologies. It goes without saying that every team and situation is different, so one way to view Agile principles is that they provide a framework of useful concepts that you can introduce (gradually in some cases) to attempt to fix a broken process. Some of them will be common sense (depending on which team member is viewing them), some fill in holes in a current process that isn't working and others add "just enough" structure to provide useful metrics and oversight for management and all stakeholders.
So on the topic of being or becoming Agile, the debate shouldn't focus on "installing yet another methodology". It should focus more on identifying and admitting process pains and deciding which principles (and how deeply you employ them) will be useful to begin addressing the pain.
November 27, 2006
Size, Layers and Intuition
As the holidays approach and the cold weather begins moving into Kansas City, it's a good time to reflect on our efforts and goals in the areas of custom software development and agile process coaching.
In the area of User Story Points estimation we made one seemingly subtle change in that we refer to the activity as "Sizing" a User Story rather than "Estimating". For us, it seemed to further focus the discussion on the size of a given User Story, especially when triangulating the story relative to other already-sized stories. This has been useful in a variety of ways, including helping move people past struggling to understand the concept of a story point.
We look at discussing and sizing a story in terms of the elements common to most stories (at least in terms of our software development), which tend to fall in the area of the classic three layers: UI, Business and Data. Traditionally, teams have relied mostly on free-form discussion and developer (sizer) intuition to determine point values and to triangulate. While we agree that expert developer opinion and intuition is highly valuable, sessions can sometimes be prone to thrashing (excess discussion), fatigue and personality dominance.
When we conduct sizing sessions (using the Planning Poker approach), we read a User Story card and then the customer answers questions from the sizers (developers) until there are no more questions. This is fairly standard in most Planning Poker approaches. But instead of sizers flipping one card to reveal their overall size opinions, they use three cards: one for the UI layer, one for the business layer and one for the data layer.
The idea is this: sizers are already thinking about the complexity, and therefore the amount of work, involved in implementing a user story, and they usually do so in terms of design, testing and development along the lines of the three classic layers of UI, business and data. So we have them assign story layer point values (using the same point scale they would for overall story points) to generate discussion about differing opinions and to confirm assumptions, even when their size opinions are similar the first time they flip their cards.
In some cases, we'll even have the sizers shout out the objects (Forms, Reports, Classes, Controls, Tables, Stored Procs, Views, etc.) they had in mind across the layers and record those on the wall, partly as reminders of effort we already accounted for in sizing other stories. The resulting layer size opinions and object counts help with triangulation toward an overall story size opinion. This is useful for current sizing sessions and for future triangulation as new stories are uncovered during subsequent iterations.
The final consideration is developer (sizer) intuition. Even with object counts and layer assumptions in place, if the group's gut feel is that similar stories shouldn't share the same overall point assignment, because of design, acceptance testing, etc., then the story gets moved to another point column.
Doing the above does add some time to the sizing session, but it also saves time by reducing thrashing and directing a portion of the discussion along the lines of the story layers. The net result is that no more time is taken, and there is typically a higher degree of confidence in story sizes.
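As a sketch of how the three-card mechanic surfaces disagreement, consider the spread between the highest and lowest card per layer. The sizer names, votes and point values below are all made up; a large spread in a layer flags where the discussion should focus:

```python
# Names, votes and scale are hypothetical. Each sizer flips one point
# card per layer; the per-layer spread flags disagreement to discuss.
votes = {  # sizer -> (UI, business, data) points
    "Ann":  (2, 3, 1),
    "Ben":  (2, 5, 1),
    "Cara": (2, 3, 2),
}

def layer_spreads(votes):
    layers = ("UI", "business", "data")
    per_layer = zip(*votes.values())  # regroup the votes by layer
    return {name: max(v) - min(v) for name, v in zip(layers, per_layer)}

print(layer_spreads(votes))  # {'UI': 0, 'business': 2, 'data': 1}
```

Here everyone agrees on the UI layer, so discussion time goes to the business layer, where opinions differ by two cards.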
September 27, 2006
The problem with percent-complete
Does this sound familiar?
PM: “Jason, what is the status of the XYZ Modulator? Last week you were at 55% and management is breathing down my neck.”
Jason: “Well, I implemented the auto-reload, and the changes from QA. I’d say we’re about 70-75% done.”
PM: “Great, 20% more than last week! That will make the Powers that Be happy.”
… One week later …
PM: “Jason I need an update on the project status.”
Jason: “Uhhh, yeah, about that. Well, I found that the auto-reload didn’t work for half the new data types, so I spent the last week fixing that.”
PM: “Oh. Well what should I put as the percent complete? It’s 75% now.”
Jason: “Ummm, how about 80%...???”
When I witness these conversations, or hear about them, I see red flags everywhere. Percent complete seems to have taken on a life of its own outside the Gantt chart. It has become, in some instances, the measure of all work. Don’t get me wrong: I think it’s valid to say that we are X% of the way to implementing a piece of functionality, provided you also include a reference to what you’re measuring against. (E.g., “We initially thought this was going to take 30 hours. That’s still valid, and we’re about 50% there.” From this, one can deduce that about 15 hours have been spent on the code, and about 15 hours remain.)
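The arithmetic in that parenthetical can be written out as a tiny sketch (numbers hypothetical): the percent figure is only meaningful next to the baseline estimate it is a percentage of.

```python
# Hypothetical numbers: a percent-complete figure only means something
# alongside the baseline estimate it is a percentage of.
def status(estimate_hours, hours_spent):
    percent = hours_spent / estimate_hours * 100
    remaining = estimate_hours - hours_spent
    return percent, remaining

percent, remaining = status(estimate_hours=30, hours_spent=15)
print(f"{percent:.0f}% of a 30-hour estimate; about {remaining} hours remain")
# 50% of a 30-hour estimate; about 15 hours remain
```

Strip the estimate out of that status and "50%" tells a listener almost nothing, which is exactly the problem with the dialogue above.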
One problem with the initial, and far too common, example above lies in not having an underlying measure to use as a reference. If you don’t include some reference, the use of percent complete gets clouded. Even trying to say “I’ve been working on the code for the last three days and I’m 60% through” leaves information out of the status. Were you spending all of your time for the last three days on the code, or were you working on other tasks as well? What are the plans moving forward? Are you saying that you anticipate being done in two more days? In 16 hours? You’re 60% of what?
Another problem is using percent complete as the only measure of progress in a project. With percent complete as the only project metric, a project can appear to be progressing at a steady pace all the time when it’s not. The risk is that a culture develops in which developers are conditioned to give some number greater than the last number they gave. Unfortunately for the developer, the number can’t go above 100%.
So what usually happens is this: a feature is started, the developer spends, say, two days working on it, feels they are making good progress, and says at the first status meeting that they are 30% (or some other optimistic number) complete. Over the next two days they find that they didn’t account for a needed piece of architecture in the first two days, so they spend the time implementing it. In the larger picture, in terms of having the feature implemented, they are at the same place as at the last status meeting, but with a better architecture. Since the metric is percent complete, not updating it would indicate that they didn’t do anything for the last two days. So they have to give a number to justify their efforts, and they choose a number like 15% and report that they are now 45% complete.
You get the picture. Before too long the status chart shows the code at 80% complete, while in terms of hours spent to date over the overall hours forecast, the number is much less than 80%. Usually the percent complete then starts to increase much more slowly, in increments of 1-5%, until the measurement is finally reset. Eventually the reporting becomes a dance that everyone feels they have to perform, but nobody on the team has any faith in it. Unfortunately the memo to the executives is seldom sent…
It doesn’t always work out this way, but often I find that projects that rely on percent complete do so because management likes it. It’s an easy thing for management to understand. In addition, the management in question often hangs their hat on the number and holds it to assumed (and often undiscussed) standards.
Some of these standards include:
- If you’ve spent time working on the project and the percent complete didn’t increase, you didn’t provide any value to the project. (As described in the refactoring example above.)
- Percent complete shouldn’t go backward. If a house is 50% complete this week, it won’t be 40% complete next week unless some outside factor is in play (like a wind storm). Software development doesn’t work like this.
- Percent complete will increase at the same rate during the entire project. Reporting that the project is 20% complete after two weeks often allows people to assume that the project will be done in eight more weeks.
- Percent complete is perceived to always be accurate. If you say that you are 86% complete, it is assumed that there are empirical calculations to back that up. Often, the only thing backing the number up is what a developer pulled out of the air during a meeting. Even worse, each developer throws out a number for the things they have been working on, so there isn’t any consistency between the metrics for the different tasks the team is working on.
This is why Visionpace follows the model of forecasting pieces of work in ‘story points’ and then, when we start working on a specific feature, breaking the feature into tasks. The tasks are estimated in hours, and ideally the entire team is involved in determining the estimates. This allows for multiple viewpoints and voices, and tends to create more realistic estimates. As the tasks are worked on, the status meeting focuses on:
- What did you work on yesterday (how many hours did you get to spend working on the task)?
- What are you going to work on today (how many hours are left on the task at hand)?
- What are the roadblocks, if any?
By asking these questions regularly, the entire team is involved in all of the development. Tasks that are underestimated initially are identified quickly, and the factors that led to the underestimation are presented to the entire team so they can be understood for the next estimating session. Additionally, with this approach, if a developer is struggling with a task, it is apparent to the entire team and assistance can be offered sooner rather than later. One final benefit of measuring tasks in hours is that it’s easier to plan when a task is expected to be implemented. (Consider the statement, ‘I think there are six hours remaining on this task. I’m out half a day tomorrow for my kid’s doctor appointment, so I expect to have it done before lunch the day after tomorrow.’ This explains, in succinct detail, what is going on with the task.)
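That last parenthetical statement amounts to simple arithmetic. Here is a sketch (hours and schedule entirely hypothetical) of turning "hours remaining" into a completion forecast:

```python
# Hypothetical schedule: walk forward through each day's available hours
# until the remaining work is covered.
def days_until_done(hours_remaining, available_hours_per_day):
    for day, available in enumerate(available_hours_per_day, start=1):
        hours_remaining -= available
        if hours_remaining <= 0:
            return day
    return None  # not enough capacity in the days given

# Six hours left; half a day available tomorrow (3 hrs), a full day after (6 hrs).
print(days_until_done(6, [3, 6]))  # 2, i.e. done the day after tomorrow
```

No percent-complete figure supports that kind of forecast; hours remaining against known availability does.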
If you’re interested in this approach feel free to contact me at Visionpace. We deliver custom software development following agile principles in Kansas City.
March 29, 2006
When the story doesn’t fit
On a recent .NET consulting project in Kansas City, where we were doing software development and agile project management coaching, the team was struggling with sizing our user stories to fit our iterations.
Invariably during our projects we find that some of the user stories we came up with during the initial story workshop don’t really fit the project development as written. Sometimes they are too large for the time allotted and/or have some high-priority tasks and some not-so-high-priority tasks. Other times we’ve learned so much about the project that the initial story doesn’t encompass all the facets it should. The question is what to do with these stories. Should the entire story be tasked out and carried over to the next iteration? Should only the necessary tasks be focused on, and the story carried over? Or should the story be broken down into smaller stories?
The answer, like a lot of answers when following an agile approach, is: it depends. One can make an argument for any of the scenarios listed. Since I’m the kind of person that has an ‘I want to show progress on the burn down’ perspective, I prefer to break the story into smaller stories that ‘slice the cake’. (The stories aren’t all user interface or all back-end processing. Rather, they have some user interface, some middle-tier logic and some back-end processing.) What’s important is to capture the stories based on the current understanding, scope them to an appropriate size as much as possible so they don’t have to be broken down again later, and make sure that nothing falls through the cracks.
Once a story is broken into new stories, the next thing to consider is how to forecast the new stories. Often there is the desire to say that since the original story was forecast as five points, and since we broke it into three new stories (one that is three points and a second that is one point), the remaining story must also be one point (5 = 3 + 1 + 1). Don’t do this. Another pitfall is for the team to say, “When we started, these stories would have all been threes, but since we have a better idea about what we’re doing, they all look like ones now.” The key is to forecast each of the resulting stories independently and triangulate them against the existing stories for the project, making sure they are put in the right bucket.
The take away is to make sure the user stories reflect what you are doing in an iteration rather than trying to make the iteration fit the stories defined. If this means that some stories need to be removed and rewritten, do it. Just pay attention to the stories that are created to make sure they will fit in the future and that you don’t lose anything in the transition. Also remember that the forecasting for the stories follows the same triangulation used in the original story workshop.
November 10, 2005
Can IM help with software support and rollouts?
August 11, 2005
Finding the Perfect Coach
Have you ever sat through a great training class, learned quite a bit of useful information and were ready to apply it, only to get back and not be able to apply it within a reasonable period of time?
Maybe what you need instead is a coach.
In a recent interview, I discussed how our PerfectCoaching offering can assist a team with becoming productive on new technologies or best practices faster than with more traditional training techniques.
You can listen to it here.
For more information about PerfectCoaching check out the FAQ.
For a free white paper that uncovers what's wrong with traditional software development training and what to do about it, click here.
May 25, 2005
As a software development and developer training firm, we've been involved with hundreds of clients and many times that number of students, all trying to solve the same core problem: how to be successful and get the best return on investment in completing software development projects and educating developers.
For various reasons, clients have attacked the problem from a number of angles, but they usually involve the two classical approaches: outsourcing, or sending internal/staff developers to training classes and asking them to complete a project. While there are certainly cases where one or both of these approaches have worked in terms of getting something delivered, both have fundamental flaws that tend to yield sub-par results. When asked, most clients feel that these traditional approaches are far from perfect and tend to have more stories relating unsatisfactory outcomes than success stories, or at least make comments like, "It could've been a lot better."
From a pure cost standpoint, the perfect solution for most clients is to have internal staff work on projects, or to hire additional staff, versus paying outsourcing rates. The challenge is in sufficiently training current staff, or hiring and training new staff, in the technologies needed for internal initiatives.
There's a growing trend in Developer Coaching where experts join a team to both train developers and help deliver software features. When the project is complete, the client is left with working code and staff developers with modernized skills. In most cases, Coaching represents an optimal blend of outsourcing and training. At Visionpace, we have leveraged the skills of our trainers and consultants to create a systematic approach to integrated feature delivery and developer training called "Perfect Coaching". For more information, check out http://www.visionpace.com/perfectcoaching.html and download the Coaching white paper at http://www.visionpace.com/coaching.html.