The politics of software metrics


March 1st, 2008

I’m giving a presentation next month on the politics associated with introducing a software metrics initiative in a typical IT organization. Most presentations, articles, and books that I’ve seen in this area focus on why it’s important to measure various things about software development … and what things we should be measuring … and how we should go about doing so.

Those are all important topics, and I suspect a lot of people need advice and guidance about the why, the what, and the how. But none of that material gives us much insight into a depressing metric about metrics initiatives: only about 10% of newly-initiated metrics programs survive more than 18 months. They may be started with great fanfare and enthusiasm, and with the best of intentions; but they often have trouble getting past the first annual budget review, and they may suffer from lack of support, hostility, or outright mutiny.

The reason, quite simply, is politics. People are people, and it’s human nature to react in a variety of ways — not always positive! — to the notion of having one’s work measured. Based on my consulting work, project reviews, and visits to many, many IT organizations over the years, I’ve assembled a “baker’s dozen” list of 13 common political problems associated with metrics initiatives. Most of them will strike you as obvious and “common sense” — but that doesn’t mean they don’t happen.

Here’s the list:

  1. Metrics are often used to “punish” people — e.g., to criticize them for bugs, or to fire the people with the lowest productivity.
  2. A common perception is that newly-introduced metrics will be used to punish people — even if that’s not what management had in mind.
  3. Metrics are sometimes used (or misused) as leverage in highly political negotiations about deadlines, budgets, and staffing in high-pressure, risky projects.
  4. “Unintended consequences” — the introduction of a metrics initiative is likely to have “feedback loop” consequences that nobody expected or intended.
  5. IT organizations sometimes introduce a metrics initiative that measures hundreds of different things, thus overwhelming everyone with a mountain of data.
  6. Other IT organizations introduce a metrics initiative that focuses on only one measurement — e.g., programmer productivity measured in lines of code per person-month (a sketch of how such a figure is typically computed appears after this list).
  7. Management doesn’t realize that, in many cases, you get what you measure — e.g., if you create the impression that people will be measured by how many lines of code they write, then they’ll write lots of code, even if it’s buggy, stupid code.
  8. The Hawthorne Effect: people change their behavior simply because they know they are being observed and measured.
  9. The perception that the metrics data gathered by the newly-introduced metrics initiative will be kept secret, and not shared with the people doing the work.
  10. The perception that the metrics data will be completely ignored by management.
  11. The perception that, even if management does review the metrics results, they won’t take appropriate action — e.g., they’ll try to hide or bury the problem, or blame someone else for the embarrassing metrics.
  12. The perception that the metrics results are not credible (sometimes, again, because management doesn’t want the world to see just how bad the metrics really are).
  13. The perception that gathering/recording of metrics data will take too much time, and that it’s not productive — e.g., the reaction from software engineers that “we should be doing our work, not spending all of our time measuring the work that we don’t have time to do!”
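Regarding items 6 and 7: here is a minimal, hypothetical sketch (in Python, assuming a git repository is available; none of these function names come from a real metrics tool) of how a lines-of-code-per-person-month figure might be computed, and why it rewards sheer volume: every pasted or padded line counts just as much as a useful one.

```python
# Hypothetical sketch only: derive "LOC per person-month" from git history.
# Assumes the git command-line tool is installed and the script runs inside
# (or is pointed at) a repository; nothing here is a standard metrics product.
import subprocess
from collections import defaultdict

def loc_added_by_author(repo_path="."):
    """Sum the lines added by each author across the repository's history."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--format=AUTHOR:%aN"],
        capture_output=True, text=True, check=True,
    ).stdout
    totals = defaultdict(int)
    author = None
    for line in log.splitlines():
        if line.startswith("AUTHOR:"):
            author = line[len("AUTHOR:"):]
        elif line.strip() and author is not None:
            added, _removed, _path = line.split("\t", 2)
            if added.isdigit():              # binary files report "-"
                totals[author] += int(added)
    return totals

def loc_per_person_month(loc_added, person_months):
    """The naive productivity metric of item 6: lines of code per person-month."""
    return loc_added / person_months

if __name__ == "__main__":
    for author, loc in loc_added_by_author().items():
        # person-months would come from timesheets; 6 is just a placeholder
        print(author, round(loc_per_person_month(loc, 6)))
```

Note that nothing in this calculation distinguishes buggy, copy-pasted, or needlessly verbose code from good code, which is exactly the "you get what you measure" trap of item 7.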

If you have any additional items to add to this list, please let me know. And if you’ve seen (or created) any particularly clever ideas to solve, reduce, minimize, or avoid any of these problems, please let me know about that, too.

Meanwhile, I’ll add some additional thoughts and ideas about this area, as time permits, over the next few weeks.

5 responses about “The politics of software metrics”

  1. Joe Cascio said:

    The only serious metrics project I was in any way involved in was among the 90% that didn’t make it. The reason, as I recall (it was many years ago), was that it didn’t seem to yield anything particularly useful. It’s like it was telling barbers how many hairs on someone’s head got cut and ended up on the floor. Perhaps accurate but not particularly useful for giving stylish cuts.

    And that brings me to really the most serious objection I’d make about metrics. The only thing that counts in the end is user satisfaction, and how on earth do you measure that, or even assuming you could, how then to attribute it to individual coders?

  2. Alfonso Guerra said:

    The fears echoed by your list, and the backlash they engender against metrics, are the second biggest problem I have with the extreme programming religion: the lack of dispassionate measurements of the process to prove its value. The lack of metrics means the emperor has no clothes, and any claim to its superiority backed only by "it feels better" is mere hand-waving.

    Going with your gut doesn’t fly for improving performance in software, nor does it fly in improving the software development process. You can’t fix what you refuse to measure. How can you determine the better practices from different XP shops without measuring them? How can you determine the benefits gained from making changes at cost X without measuring them?

    The claim that software metrics are too hard, because developers are human and development is art rather than science, is a cop-out. Practically every other human endeavor has a metric available for it, including sports and the arts. Practically every other field has seen tremendous leaps forward in productivity and quality except software development. Why? Because developers are the last shadow of the old-school data processing priesthood: Luddites who fear change while promoting it everywhere else, the last ones with control.

    Anyone who claims IDEs provide no more than 8% productivity gains over punched cards would face a lot of criticism from the anti-metrics crowd, but they wouldn't be able to disprove it. In fact, without measuring for results, the stand-up sessions and pair-programming sessions favored by XPers could just as easily be claimed to reduce productivity due to the loss in programming time.

    The potential for metrics to be misused is no guarantee that they will be. If you find the current state of metrics lacking, then develop more accurate ones and demonstrate their proper use. But promoting "feel-good programmery" and mystic coding ability without practical, reproducible measurements is just promoting another religion.

  3. Ram said:

    I don’t fully agree with Alfonso. Creating a very good software application is an art (it involves deep thought, creativity, doing things differently, analysis). Organizations commoditize it: you want to treat it as an engineering process, to make sure you meet the deadlines, the requirements, and the budget. So it becomes a case of "ditch the creativity," except for things like being creative in building reusable components to increase productivity and reduce defects; that "creativity" will be presented as the value addition.

    From my experience, every project may need to tune the metrics to be effective. You need to identify which parameters in your project differ from the standard ones: skill levels, technology, team structure, requirements stability and detail, user skill level, etc. You may need to create additional measurements. Most of the projects we do have to meet the client's requirements, so you need to watch carefully whether you have the right metrics.

    Success will depend on answering the following questions. How accurate are the metrics in your organization? (Do you have several years of historical data from projects where metrics were used successfully?) Is the data collection automated, or is it an overhead? Do you collect data as close to real time as possible for your analysis, or do you do it "to comply"? Who gets the credit for a successful metrics-driven project in your organization, and who gets the blame for a project in trouble? Is your senior management willing to listen to you?

  4. ed said:

    Interesting comments! For some reason, I decided to do a Google search for "Dilbert" and "metrics". I found this site, with a reference to the "Dilbert Barometer": http://tabletumlnews.powerblogs.com/posts/1178255423.shtml

  5. Arun Kumar said:

    I am not clever.

    I fail to understand what they mean by data collection when it comes to IT companies. Aren't they supposed to be technology drivers? I believe that if you implement a good process with appropriately sophisticated tools that seamlessly track the process, so that you don't do explicit data collection for the sake of metrics analysis but simply allow it to be inferred from the tools you already use, then at least some of the grudging and misunderstanding could be avoided.
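A minimal sketch of the kind of tool-inferred data collection this last comment describes, assuming Python and a git repository (the function names are illustrative only, not part of any standard metrics product): activity and bug-fix counts are derived from the version-control history as a side effect of normal work, with no extra data entry by the developers.

```python
# Hypothetical sketch: infer simple activity metrics from git history instead
# of asking developers to record anything by hand. Assumes the git CLI is
# installed and a repository is available at repo_path.
import subprocess
from collections import Counter
from datetime import datetime

def commits(repo_path="."):
    """Yield (iso_date, subject) pairs straight from the repository history."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%aI\t%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in log.splitlines():
        date, _, subject = line.partition("\t")
        yield date, subject

def weekly_activity(repo_path="."):
    """Count commits per ISO week, and how many of them look like bug fixes."""
    per_week, fixes = Counter(), Counter()
    for date, subject in commits(repo_path):
        year, week, _ = datetime.fromisoformat(date).isocalendar()
        key = f"{year}-W{week:02d}"
        per_week[key] += 1
        if "fix" in subject.lower():         # crude heuristic, for illustration
            fixes[key] += 1
    return per_week, fixes
```

Whether counts like these are the right things to measure is a separate question (see the list above), but the collection itself costs the team nothing.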
