Predicting software defect potentials
This is a highly complex task that depends on factors including
- Client understanding of their own requirements
- Team skills in similar applications
- Methodologies used for defect prevention
- Pretest defect removal methods
- The cyclomatic complexity of code
- Test case design methods
- Test coverage percent
- Test library control methods
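These factors make precise prediction impossible, but rough sizing heuristics exist. Below is a minimal sketch using a rule-of-thumb power law often attributed to Capers Jones (defect potential roughly equal to function points raised to the 1.25 power); the exponent and example numbers are illustrative assumptions, not figures from this book:

```python
def defect_potential(function_points: float, exponent: float = 1.25) -> float:
    """Rule-of-thumb total defect potential (requirements, design,
    code, document, and bad-fix defects combined).

    The 1.25 exponent is a rough industry heuristic often attributed
    to Capers Jones; real results vary with all of the factors listed
    above (team skills, defect prevention, complexity, and so on).
    """
    return function_points ** exponent

# Under this heuristic, a 1,000-function-point application carries
# roughly 5,623 potential defects (1000 ** 1.25).
print(round(defect_potential(1000)))
```

Note how the power law grows faster than linearly: ten times the function points yields well over ten times the defect potential, which matches the general observation that large systems are disproportionately defect-prone.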
The software engineering community has an ethical and professional obligation to assist clients in understanding and eliminating toxic requirements. The clients themselves cannot be expected to understand the intricacies of putting business applications onto computers, so it is up to the software engineering community to keep their clients from implementing harmful requirements. This community should have a kind of Hippocratic Oath that includes the concept of “first, do no harm.”
In chronological order, these seven fundamental topics should be explored as part of the requirements gathering process:
1. The outputs that should be produced by the application
2. The inputs that will enter the software application
3. The logical files that must be maintained by the application
4. The entities and relationships that will be in the logical files of the application
5. The inquiry types that can be made against the application
6. The interfaces between the application, other systems, and users
7. Key algorithms and business rules that must be present in the application
There are also 13 ancillary topics that should be resolved during the requirements gathering phase:
1. The size of the application in function points and source code
2. The schedule of the application from requirements to delivery
3. The cost of the application by activity and also in terms of cost per function point
4. The quality levels in terms of defects, reliability, and ease-of-use criteria
5. The hardware platform(s) on which the application will operate
6. The software platform(s) such as operating systems and databases
7. The security criteria for the application and its companion databases
8. The performance criteria, if any, for the application
9. The training requirements or form of tutorial materials that may be needed
10. The installation requirements for putting the application onto the host platforms
11. The reuse criteria for the application in terms of both reused materials going into the application and also whether features of the application may be aimed at subsequent reuse by downstream applications
12. The use cases or major tasks users are expected to be able to perform via the application
13. The control flow or sequence of information moving through the application
These 13 supplemental topics are not the only items that can be included in requirements, but none of these 13 should be omitted by accident given that they can all have a significant effect on software projects.
Defect Discovery Rates by Software Application Users
Clients will discover a majority of latent defects as the software goes into production and begins to accumulate usage hours. The rate at which latent defects are discovered is surprisingly variable. The three major factors that influence the discovery of latent defects are
1. Defect discovery goes up with the number of users.
2. Defect discovery goes up with the number of usage hours.
3. Defect discovery goes down with increasing application size.
In other words, software that is used by 10,000 people will uncover latent defects at a faster rate than software used by 10 people. Software that executes 24 hours a day, every day of the week, will uncover latent defects at a faster rate than software that operates only an hour a day or only once or twice a month. A software application of 100 function points will uncover a higher percentage of its latent defects than an application of 10,000 function points, mainly because a larger share of the features are exercised in small applications than in large ones.
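The three directional relationships above can be sketched as a toy model. The formula below is my own illustration (not a model from the book): logarithms stand in for the diminishing returns of each factor, and only the directions of the effects are meaningful.

```python
import math

def relative_discovery_rate(users: int, hours_per_month: float,
                            size_fp: float) -> float:
    """Illustrative relative rate of latent-defect discovery.

    Encodes only the three directional relationships from the text:
    the rate rises with the number of users, rises with usage hours,
    and falls with application size in function points. The specific
    functional form is an assumption for demonstration.
    """
    return (math.log10(1 + users)
            * math.log10(1 + hours_per_month)
            / math.log10(10 + size_fp))

# A small, heavily used application versus a large, lightly used one:
small_heavy = relative_discovery_rate(users=10_000, hours_per_month=720, size_fp=100)
large_light = relative_discovery_rate(users=10, hours_per_month=20, size_fp=10_000)
print(small_heavy > large_light)  # the widely used small app surfaces defects faster
```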
Testing is one of the oldest forms of software defect removal and has provided more than 50 years of accumulated information. This is why there are so many books on testing compared to other forms of quality and defect removal.
Because software defect removal efficiency averages only about 85% and essentially never equals 100%, there will always be latent defects present in software at the time of delivery.
Software Development and Maintenance
Low Quality
1. Low quality stretches out testing and makes delivery dates unpredictable.
2. Low quality makes defect repairs and rework the major software cost driver.
3. Low quality leads to overtime and/or major cost overruns.
4. Low quality after release leads to expensive customer support.
5. Low quality after release leads to expensive post-release maintenance.
6. Low quality after release can lead to litigation for contract projects.
High Quality
1. High quality shortens testing schedules and improves delivery schedules.
2. High quality reduces repairs and rework by more than 50%.
3. High quality reduces unplanned overtime and reduces cost overruns.
4. High quality after release leads to inexpensive customer support.
5. High quality after release leads to lower maintenance and support costs.
6. High quality lowers the odds of litigation for contract projects.
Software as a Marketed Commodity
Low Quality
1. Low quality necessitates repairs and recalls and lowers profit levels.
2. Low quality reduces customer satisfaction.
3. Low quality can reduce market share.
4. Low quality can give advantages to higher-quality competitors.
5. Low quality raises the odds of litigation with software contractors.
6. Low quality can lead to criminal charges in some situations.
High Quality
1. High quality reduces repairs and raises profit levels.
2. High quality raises customer satisfaction and repeat business.
3. High quality can expand market share.
4. High quality can give advantages over low-quality competitors.
5. High quality reduces the odds of litigation with software contractors.
6. High quality reduces the odds of software causing life-threatening problems.
Software as a Method of Human Effort Reduction
Low Quality
1. Low quality increases down time when equipment cannot be used.
2. Low quality can slow transaction speed and degrade worker performance.
3. Low quality can lead to accidents or transaction errors.
4. Low quality causes errors that require worker effort to correct.
5. Low quality leads to increases in invalid defect reports.
6. Low quality leads to consequential damages and expensive business problems.
High Quality
1. High quality leads to few outages and little down time.
2. High quality optimizes human worker performance.
3. High quality reduces the odds of accidents and transaction errors.
4. High quality and low error rates mean low user effort for repairs.
5. High-quality software has fewer invalid defect reports.
6. High quality reduces consequential damages and business problems.
Software and Innovative New Kinds of Products
Low Quality
1. Low quality can keep new users from trying novel products.
2. Low quality can cause novel products to fail in use.
3. Low-quality software with excessive defects discourages users when learning new products.
4. Low quality and numerous defects can lead to user mistakes and human problems.
5. Low quality and numerous defects can lead to recalls from vendors.
High Quality
1. High quality tends to attract new users.
2. High quality minimizes operational failures.
3. High quality keeps users interested and focused when learning new products.
4. High quality correlates with fewer user mistakes and human problems.
5. High quality minimizes recalls and disruptions.
Bibliographical Information
Economics of Software Quality
By: Capers Jones; Olivier Bonsignour
Publisher: Addison-Wesley Professional
Pub. Date: July 24, 2011
ISBN-10: 0-13-258220-1
These are notes I made after reading this book.
This page was last updated on Wednesday, December 11, 2024.