March 1st, 2008
I’m giving a presentation next month on the politics associated with introducing a software metrics initiative in a typical IT organization. Most presentations, articles, and books that I’ve seen in this area focus on why it’s important to measure various things about software development … and what things we should be measuring … and how we should go about doing so.
Those are all important topics, and I suspect a lot of people need advice and guidance about the why, the what, and the how. But none of that guidance gives us any insight into a depressing metric about metrics initiatives: only about 10% of newly-initiated metrics programs survive more than 18 months. They may be started with great fanfare and enthusiasm, and with the best of intentions; but they often have trouble getting past the first annual budget review, and they may suffer from lack of support, hostility, or outright mutiny.
The reason, quite simply, is politics. People are people, and it’s human nature to react in a variety of ways — not always positive! — to the notion of having one’s work measured. Based on my consulting work, project reviews, and visits to many, many IT organizations over the years, I’ve assembled a “baker’s dozen” of common political problems associated with metrics initiatives. Most of them will strike you as obvious common sense — but that doesn’t mean they don’t happen.
Here’s the list:
- Metrics are often used to “punish” people — e.g., to criticize them for bugs, or to fire the people with the lowest productivity.
- A common perception is that newly-introduced metrics will be used to punish people — even if that’s not what management had in mind.
- Metrics are sometimes used (or misused) as leverage in highly political negotiations about deadlines, budgets, and staffing in high-pressure, risky projects.
- “Unintended consequences” — the introduction of a metrics initiative is likely to have “feedback loop” consequences that nobody expected or intended.
- IT organizations sometimes introduce a metrics initiative that measures hundreds of different things, thus overwhelming everyone with a mountain of data.
- Other IT organizations introduce a metrics initiative that focuses on only one measurement — e.g., programmer productivity measured in lines of code per person-month.
- Management doesn’t realize that, in many cases, you get what you measure — e.g., if you create the impression that people will be measured by how many lines of code they write, then they’ll write lots of code, even if it’s buggy, stupid code.
- The Hawthorne Effect: people change their behavior, at least temporarily, simply because they know they’re being observed and measured.
- The perception that the metrics data gathered by the newly-introduced metrics initiative will be kept secret, and not shared with the people doing the work.
- The perception that the metrics data will be completely ignored by management.
- The perception that, even if management does review the metrics results, they won’t take appropriate action — e.g., they’ll try to hide or bury the problem, or blame someone else for the embarrassing metrics.
- The perception that the metrics results are not credible (sometimes, again, because management doesn’t want the world to see just how bad the metrics really are).
- The perception that gathering/recording of metrics data will take too much time, and that it’s not productive — e.g., the reaction from software engineers that “we should be doing our work, not spending all of our time measuring the work that we don’t have time to do!”
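To make the “you get what you measure” problem concrete, here’s a toy sketch (in Python, with entirely hypothetical numbers) of the single-number productivity metric mentioned above, lines of code per person-month, and how easily it rewards the wrong behavior:

```python
def loc_per_person_month(lines_of_code: int, person_months: float) -> float:
    """Naive productivity metric: lines of code per person-month."""
    return lines_of_code / person_months

# Two hypothetical programmers who solved the same problem
# in the same two person-months of effort:
concise = loc_per_person_month(lines_of_code=800, person_months=2)   # 400.0
verbose = loc_per_person_month(lines_of_code=4000, person_months=2)  # 2000.0

# The metric scores the verbose (and possibly buggier) solution
# five times higher than the concise one -- so if people believe
# this is how they'll be judged, verbose code is what you'll get.
assert verbose > concise
```

This is a deliberately simplistic sketch, not a recommendation; its only point is that a single, easily-gamed number creates exactly the feedback loop described in the list above.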
If you have any additional items to add to this list, please let me know. And if you’ve seen (or created) any particularly clever ideas to solve, reduce, minimize, or avoid any of these problems, please let me know about that, too.
Meanwhile, I’ll add some additional thoughts and ideas about this area, as time permits, over the next few weeks.