The Challenge of "Good Enough" Software
Originally posted at *** Dead Link Removed ***
Copyright James Bach 1996
Quality is an increasingly critical factor as customers become more sophisticated, technology becomes more complex, and the software business becomes more intensely competitive. But there is a powerful alternative to the orthodox, expensive, and boring methodologies that aim at the best possible quality, an alternative that is one secret to the success of Microsoft and many other companies: the discipline of good enough software development.
I call this discipline the utilitarian approach. It isn't simply a minimized or compromised version of the orthodox approach, any more than a city bus is an airliner without wings, or a station wagon is a race car with faux wood siding and a hatchback. To produce good enough software requires certain processes and methods that are almost universally ignored or devalued by software process formalists and by popular process models like the SEI's CMM [1]. On the other hand, even for projects that do require the utmost in methodology, the utilitarian approach can help in getting the biggest bang for the buck, and making better decisions in an uncertain world.
Views of Quality
Software quality is a simple concept, at least in the textbooks. Just determine your requirements, and systematically assure that your requirements are achieved. Assure that the project is fully staffed and has adequate time to do its work. Assure that the quality assurance process is present in every phase of the development process, from requirements definition to final testing. Oh, and remember that it's absolutely critical that management be committed to quality on the unquestioned faith that it is always worth whatever it will cost. Otherwise, ha ha, forget the whole thing.
Software quality is not so simple in the field: where requirements shift and waver like a desert mirage, projects are perpetually understaffed and behind schedule, software quality assurance is often a fancy word for ad hoc testing, and management seems more interested in business than in the niceties and necessities of software engineering. Oh, and remember that there's lots of money to be made if you can sell the right product at the right time, or even something close enough to being right.
We often talk about quality as if it were a substance; something that could, in principle, be read from a gauge. But quality is more like an optical illusion-- a complex abstraction that emerges partly from the observed, partly from the observer, and partly from the process of observation itself. Behind the veneer of metrics and Ishikawa diagrams, quality is just a convenient rendezvous for a set of ideas regarding goodness. As Jerry Weinberg says, "quality is value to some person" [2]. While I agree with that definition, there are also some higher order patterns that we can discern in the use of the word:
Aesthetic view
Quality is elegance; an ineffable experience of goodness. This view is often held by software developers, who may describe it in terms of various attributes, but cannot unambiguously describe the idea itself. We need to consider this view because it enhances morale and leads to pride in workmanship. A danger of this view is that it can become a cloak for perfectionists and underachievers.
Manufacturing view
Quality is correctness; conformance to specifications. This view leaves open the whole problem of creating a quality specification and so isn't of much help in the problem of design, nor in the problem of comparing the relative importance of various nonconformances. A danger of this view is that we may create perfect products that satisfy no one.
Customer view
Quality is fitness for use; whatever satisfies the customer. The problem here is that quality becomes not only subjective, as in the aesthetic view, but we also lose control of it altogether. Who are the customers? How do we incorporate their values into a product they haven't yet seen? There may be more than one customer, or the customers may not know what they want, or what they want may be some shade of unachievable. A danger of this view is that we will find ourselves chasing a will-o'-the-wisp instead of creating a product that will be happily accepted once customers do experience it.
Measurable quality factors?
Mostly smoke and mirrors.
With respect to any of the above three viewpoints, we can identify certain factors that are characteristic of quality software. The ISO 9126 standard [3] contains 21 such attributes, arranged in six areas: functionality, reliability, usability, efficiency, maintainability, and portability. The Encyclopedia of Software Engineering includes an excellent article on software quality factors [4], including ideas on how to measure those factors.
On the face of it, quality assurance would seem to be a matter of assessing the product for the presence of each quality factor with respect to each of the three views. But the problem is not that simple. In a particular context, some factors are more important than others. Some factors detract from others, as in the classic functionality-versus-portability conflict. For most of the factors we can identify, there are neither straightforward nor inexpensive ways to measure them, nor to compose those individual factors, once measured, into an overall quality metric. Furthermore, no matter how we tend to these factors in general, a single bug in the product may possibly negate everything else that works right.
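To see why composing measured factors into an overall metric is so treacherous, consider a naive weighted sum over the six ISO 9126 areas. The scores and weights below are invented purely for illustration; the arbitrariness of the weights is exactly the point.

```python
# A naive "overall quality" metric: weight each ISO 9126 area and sum.
# Every number here is invented for illustration.
scores = {
    "functionality":   0.9,
    "reliability":     0.6,
    "usability":       0.8,
    "efficiency":      0.7,
    "maintainability": 0.5,
    "portability":     0.4,
}
weights = {
    "functionality":   0.30,
    "reliability":     0.25,
    "usability":       0.20,
    "efficiency":      0.10,
    "maintainability": 0.10,
    "portability":     0.05,
}

overall = sum(scores[k] * weights[k] for k in scores)
print(round(overall, 3))  # prints 0.72: one tidy number

# ...which conceals the tradeoffs: shift weight from portability to
# functionality and the "quality" changes without touching the product.
weights["functionality"], weights["portability"] = 0.35, 0.0
print(round(sum(scores[k] * weights[k] for k in scores), 3))  # prints 0.745
```

The single number looks authoritative, but it moves whenever the weighting does, and the weighting is a judgment call that the formula quietly hides.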
Finally, let's face facts: our clients, whether internal or external, will never know the quality of our products. They will form a perception of quality and act on that basis, whether or not that matches our perception. Customer perception depends among other things on their values, skill level, past experience, and profile of use. Some of these factors we can study; none of them do we control.
So, we have some idea what software quality is, but no certain idea. We have some methods to produce it and measure it, but no certain methods. Quality is inhibited by the complexity of the system, invisibility of the system, sensitivity to tiny mistakes, entropy due to development, interaction with external components, and interaction with the individual user. In summary, creating products of the best possible quality is a very, very expensive proposition, while on the other hand, our clients may not even notice the difference between the best possible quality and pretty good quality. This leads us to three critical questions:
How much of which quality factors would be adequate?
How do we measure it adequately?
How do we control it adequately?
One answer to these questions is to punt on the whole notion of measurable quality. Instead of using our heads to create a universal metric of quality, and then working to optimize everything against that metric, we can use our heads to directly consider the problems that quality is supposed to solve. When those problems are solved, we automatically have good enough quality.
The utilitarian view of quality
Utilitarianism is a nineteenth-century ethical philosophy. A branch of Consequentialism, it asserts that the rightness or wrongness of an action is a function of how it affects other people.
The utilitarian view of quality is framed in terms of positive and negative consequences. The quality of something should be considered good enough when the potential positive consequences of creating or employing it acceptably outweigh the potential negatives in the judgment of key stakeholders. This view incorporates all the other views above, but replaces blind perfectionism with vigilant moderation. It applies both to projects and products. It focuses us on identifying problems and improving our problem-solving capabilities. For an interesting look at how a problem-oriented attitude is fundamentally different from a goal-oriented attitude (quality being a goal), see Bobb Biehl's book with the straight-ahead title Stop Setting Goals If You Would Rather Solve Problems [5].
One way of expressing the utilitarian view is to say that quality is the optimum set of solutions to a given set of problems. In these terms, the answers to the three critical questions above follow from the process of understanding the problems we face, studying the tradeoffs, and matching them with appropriate processes. We boldly cut corners. A danger of the utilitarian approach is that we may cut too many corners.
Figure 1 lays out the relationship between utilitarian SQA and quality. The SQA process examines the product and all known problems that relate to the project. When problems are observed with the product, the perception of product quality drops, while perceived risk of shipping increases. The opposite happens when quality is observed in the product. But, quality per se is not the determinant of when we ship. We ship when we believe the risks to be acceptably low-- however low or high that may be in absolute terms. At that point, quality is automatically good enough.
Our challenge lies in predicting, controlling, and measuring the consequences of creating and employing the product. In terms of employing the product, those consequences are mirrored in product quality. In terms of creating the product, those consequences are mirrored in the quality of the process, staff, and resources. We can certainly improve quality by operating directly on some chosen metric, doing whatever is necessary to drive that metric in the desired direction, but I assert that in doing so we lose sight of the full spectrum of the product, project, and customer. Instead, by working the consequences side of the mirror, we can see the whole problem.
For example, we might decide that a reasonable quality metric is the number of known defects in the product. If we follow the orthodox approach, we would improve quality either by preventing defects or by fixing them before we shipped. Either way, we would minimize the number of defects in order to maximize quality. If we follow the utilitarian approach, however, we would examine the consequences of each problem, and decide on a case-by-case basis which were important to fix. The quality metric would then either take care of itself or else become irrelevant. Prevention is a concern, but not blanket prevention, only prevention of important problems, and only prevention to the extent necessary.
To be good enough is to be good at knowing the difference between important and unimportant; necessary and unnecessary. The orthodox approach glosses over such considerations, or tends to translate any discussion of them into a battle between Good Engineering and Bad Management.
Apple shipped the first version of HyperCard with about 500 known bugs in it, yet the product was a smashing success. The HyperCard QA team chose the right bugs to ship with. They also chose the right qualities. I'm not sure how many thousands of bugs were shipped with Windows 3.1, but you can bet that it was at least several. I was working at Apple when Macintosh System 7.0 shipped, and it went out with thousands of bugs. Successful software quality managers will tell you, it isn't the number of bugs that matters, it's the effect of each bug.
You know, no matter what other approach to quality we talk about, we all use the utilitarian approach. We all make judgments of risk, whether we hide that in terms of some quality metric like failure density or whether we avoid all explicit metrics and processes. The issue is how effectively we assess and control risk. Call me crazy, but I believe that we will be better judges of risk if we admit that's what we're doing and learn how to do it directly, rather than indirectly through psychological games like "six sigma" and slogans like "quality without compromises."
From problems to products:
the double-cycle project model
In order to explore the means by which we can create good enough software, let's start by examining the simple project model in figure 2. The model consists of the following elements:
Problems
Problems are the motivators of the project. In the absence of problems, there would be no project. Problems can be defined broadly as the difference between an actual and a desired state, or more narrowly as some work to be done. Either way, we can also characterize them in terms of an ecology of causes and consequences. In other words, problems represent risk. A major part of utilitarian software development is choosing which problems to avoid, which to accept, and which to solve. That process is one of risk management. Problems, in this model, include project problems as well as product problems. To do a project well enough is to be left with an acceptable set of problems at the end.
Staff
The project staff is the agent that solves all the hard problems. They employ processes to turn those problems into products. They also solve problems even without any discernible processes or direction. I call this "process heroism." People are thus the most versatile and critical part of the project.
Resources
Resources include anything that money can buy, such as brute labor, software, hardware, office space, and information. Resources are important to note chiefly because they support the project's staff and processes. They also can create problems, as in the need to maintain equipment; or they can contribute directly to products, as when an application framework is used as a foundation for development.