Software Quality Metrics


DRAMMAN

Any opinions or experience developing SW quality metrics? I am looking to establish SW quality metrics that monitor performance during the development process and post-release.
 

yodon

Leader
Super Moderator
Software quality metrics are challenging and can encourage bad behavior if not watched closely. For example, if you proclaim that lines of code per day per developer is good, you'll get lots of lines of code but not necessarily quality. If you proclaim that the number of bugs fixed per day per developer is good, you'll get plenty of bugs fixed but not necessarily improved quality. If you proclaim that the number of bugs found by test is good, they'll find bugs, but many that aren't very useful.

Static analysis tools can give good insight (e.g., code complexity) but don't get religious over the numbers. There are often very good reasons for added complexity in code.
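For illustration only, here is a minimal Python sketch of where a complexity number can come from; the function names and the sample code are made up, and real tools such as radon, lizard, or SonarQube compute this far more rigorously.

```python
import ast

# Rough cyclomatic-complexity estimate: 1 + number of decision points.
# This is only an approximation to illustrate where such a number comes from.
_DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                   ast.BoolOp, ast.IfExp)

def estimate_complexity(source: str) -> dict:
    """Return an approximate cyclomatic complexity per function."""
    results = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            decisions = sum(isinstance(child, _DECISION_NODES)
                            for child in ast.walk(node))
            results[node.name] = 1 + decisions
    return results

if __name__ == "__main__":
    sample = '''
def release_gate(bugs, max_severity):
    if not bugs:
        return True
    for b in bugs:
        if b["severity"] >= max_severity and not b["fixed"]:
            return False
    return True
'''
    print(estimate_complexity(sample))  # -> {'release_gate': 5}
```

A high number like this is a prompt to look at the code, not a verdict; as noted above, there are often good reasons for the complexity.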

Clearly, the best measure of software quality is the number of problems reported from the field. But that's more a systemic measure than a measure of the software alone; i.e., why was the bug introduced in the first place, what was the earliest point at which it should have been detected (phase escape), etc.

The absolute worst thing you can do is to try to use the metrics to assess performance. As soon as there's any inkling of management doing that, you can kiss any thought of getting rational data out. Developers will quickly slip into the habit of making the numbers look good rather than focusing on quality. Along the same lines, don't punish developers if bugs do get into the field. If that does occur (and it will - there is no bug-free code), a number of factors had to occur. Presuming the requirements and design were correct and the developer did introduce the bug, why wasn't there a review that caught the error? Why didn't unit and/or integration testing catch the error? Why didn't system testing catch the error? It would be like blaming a goalie in hockey for goals allowed (where was the rest of the defense?).

Since the goal for metrics should be for eventual improvement, I would (instead) set up a feedback / improvement process. I would start with problems reported from the field. Do an analysis on each as to why the problem occurred and where the problem should have been caught. If you have a large number of bugs reported, you may need to employ a prioritization scheme to help focus. Generally, developers get on board if the focus is kept on improvement and not for punitive actions.
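As a rough illustration of that kind of analysis, here is a minimal Python sketch; the record fields, IDs, causes, and phase names are all hypothetical, not a prescribed format.

```python
from collections import Counter

# Hypothetical field-problem records produced by the per-problem analysis:
# for each one, note why it occurred and the earliest phase that should
# have caught it (the "phase escape").
field_problems = [
    {"id": "FP-101", "cause": "missed requirement", "escaped_phase": "requirements review"},
    {"id": "FP-102", "cause": "off-by-one",         "escaped_phase": "unit test"},
    {"id": "FP-103", "cause": "race condition",     "escaped_phase": "integration test"},
    {"id": "FP-104", "cause": "missed requirement", "escaped_phase": "requirements review"},
    {"id": "FP-105", "cause": "config error",       "escaped_phase": "system test"},
]

def phase_escape_pareto(problems):
    """Rank the phases whose checks are being escaped most often."""
    counts = Counter(p["escaped_phase"] for p in problems)
    total = sum(counts.values())
    running = 0
    for phase, n in counts.most_common():
        running += n
        yield phase, n, round(100 * running / total, 1)

for phase, n, cum_pct in phase_escape_pareto(field_problems):
    print(f"{phase:22s} {n:3d}  cumulative {cum_pct}%")
```

A simple Pareto like this is one way to apply the prioritization scheme mentioned above: fix the detection gaps that let the most problems escape first.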
 

Bev D

Heretical Statistician
Leader
Super Moderator
Expanding on what yodon said:

Every metric can be manipulated and abused; a best practice is to have a set of metrics based on people (morale or engagement), quality, delivery and cost. Each of these is of equal importance, and since this is a system, if you work on one you will affect the other 3. So it's important to verify improvement (or at least no negative effect) in all 4.
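Purely as an illustrative sketch (the four dimensions come from the point above; the metric names and numbers are invented), one way to make the "all 4" expectation explicit:

```python
# A hypothetical "balanced set" of metrics: every review covers all four
# dimensions so that improving one is not allowed to silently hurt another.
BALANCED_DIMENSIONS = ("people", "quality", "delivery", "cost")

scorecard = {
    "people":   {"team engagement survey": 4.1},        # 1-5 scale
    "quality":  {"field problems per release": 7},
    "delivery": {"on-time feature completion %": 86},
    "cost":     {"rework hours per sprint": 32},
}

def check_balance(card):
    """Flag any dimension that has no metric attached to it."""
    missing = [d for d in BALANCED_DIMENSIONS if not card.get(d)]
    if missing:
        raise ValueError(f"unbalanced scorecard, missing: {missing}")
    return True

check_balance(scorecard)  # passes; drop a dimension and it raises
```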

An approach I've tried in these types of cases is to take the most recent field failures / complaints and investigate why they occurred and why they escaped. Then initiate 'fixes' and improved processes to better detect, correct and prevent. During this process, ask the developers what metrics they think will help keep them on track and be viewed as great evidence of their improved performance...
 

DRAMMAN

Thanks for the feedback. We are in the early stages of implementing metrics and processes to drive continuous improvement in the SW portion of our products. Do any of you define a difference between "issues", "bugs", "field problems", etc.? Or do people tend to use the terms interchangeably?
 

Bev D

Heretical Statistician
Leader
Super Moderator
Well, as a colleague of mine says, "people have issues, equipment has problems". So I tend not to use the term 'issue'.

I use "Customer complaint" which can be an imprecise description of the problem as they experienced it.

I will also use "error" - if there is a specific error 'code' that is displayed. Sometimes I use the term "fault" if there isn't an error code...it may also be referred to as the "failure mode" => some function failed in a specific manner. The key here is that error or fault should be a precise description of the problem from the effect side. It should not involve descriptions of the cause.

I reserve the term 'bug' for the cause of the failure. I do this because Grace Hopper popularized the term when someone found that a 'bug' (a moth) had caused a short in an electrical component of a computer that caused it to malfunction. So a 'bug' is a cause and not an effect.
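To illustrate how that terminology can be kept separate in practice, here is a hypothetical record layout (the field names and example text are invented, not a standard):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical defect record reflecting the terminology above: the
# complaint and the error/fault (effect side) are kept in separate
# fields from the bug (cause side) so the two are never conflated.
@dataclass
class DefectRecord:
    complaint: str                    # customer's own wording, may be imprecise
    failure_mode: str                 # precise description of the effect
    error_code: Optional[str] = None  # displayed error code, if any
    bug: Optional[str] = None         # the cause, filled in only after analysis

report = DefectRecord(
    complaint="The report screen just hangs",
    failure_mode="Monthly report view never finishes rendering",
)
report.bug = "Unbounded query when the date filter is empty"  # found during investigation
```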

I do recommend the "Apollo method" for a comprehensive description of the cause and effect relationship.

However, you can use any terminology you wish as long as you are clear about the definitions and people in your organization stick to them. Otherwise, confusion shall result!
 

yodon

Leader
Super Moderator
Well, as a colleague of mine says, "people have issues, equipment has problems". So I tend not to use the term 'issue'.

Such discussions make me think of one pundit's approach. He suggested calling things that weren't right "spoilage." Can't say I think that's a good idea nor do I buy in to being overly 'politically correct' but, as Bev points out so well, words have multiple meanings. What things are called could certainly have effects, depending on the company culture. Personally, I use "issue" and have instilled that it's never personal; individuals are never blamed. The culture here is quite accepting of this approach.
 

michellemmm

Quest For Quality
Thanks for the feedback. We are in the early stages of implementing metrics and processes to drive continuous improvement in the SW portion of our products. Do any of you define a difference between "issues", "bugs", "field problems", etc.? Or do people tend to use the terms interchangeably?

I always become concerned when an organization's approach to setting metrics starts at the micro level while top management sets it at the macro level. "Issues" or "bugs" might be significant to some, and "defect density" might be more meaningful to others.


No matter what type of metrics you select, they should measure the effectiveness vs. efficiency of the process and be driven from the company's goals and objectives. Somehow, micro and macro should relate.
 

DRAMMAN

michelle.....completely agree with you.

Anyone....do you have any examples or thoughts on what software metrics have worked well in your organizations? Or even which have not?
 

flyin01

michelle.....completely agree with you.

Anyone....do you have any examples or thoughts on what software metrics have worked well in your organizations? Or even which have not?

My two cents on this is that you should separate the metrics into two parts.

1. Measure the process itself. This could be how efficient the developers are in terms of submitting patches and handling feedback/issues/bugs (responding to them, tagging them, assigning them, etc.). Time measurements, numbers of items.

2. Measure the output of the process, i.e. the SW itself that comes out of your developers' hard efforts. What is important to the customer? Don't guess, ask! Is it how snappy the SW is (ms), or the number of bugs/crashes (#, %)? Have a few metrics that make sense and prioritize them, then focus on these. Don't go for the scorecard with 57 metrics; that will only cause headaches. Don't make the formulas too complex. I am quoting Einstein here: if you cannot explain it to a 6-year-old... :D (A rough sketch of both kinds of measurement follows below.)
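As a made-up illustration of the two kinds of measurement (the issue dates, session counts and timings are all invented, and your own product will define which outputs matter):

```python
from datetime import date
from statistics import median

# Hypothetical issue records for the process side: how quickly the team
# closes incoming items.
issues = [
    {"opened": date(2024, 3, 1), "closed": date(2024, 3, 4)},
    {"opened": date(2024, 3, 2), "closed": date(2024, 3, 9)},
    {"opened": date(2024, 3, 5), "closed": date(2024, 3, 6)},
]

# Hypothetical release figures for the output side: what the customer sees.
sessions = 48_000
crashes = 31
p95_response_ms = 420   # measured however your product defines "snappy"

days_to_close = [(i["closed"] - i["opened"]).days for i in issues]

print(f"Process: median days to close an issue = {median(days_to_close)}")
print(f"Output:  crashes per 1000 sessions     = {1000 * crashes / sessions:.2f}")
print(f"Output:  95th percentile response (ms) = {p95_response_ms}")
```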

Sometimes you do not yet have constructive feedback from the customers in terms of what went wrong (maybe you have not released a final product yet). Then it may be really tricky to find out what matters, but you can benchmark the competition.

I hope it helps! :bigwave:
 