Ten myths about Quality Assurance in software development

Everybody would agree that quality is an important part of the software development process. However, the complexity involved in delivering quality is often poorly understood and the amount of effort it requires tends to be underestimated.

1. Quality assurance is testing

You need to worry if people start using “quality assurance” and “testing” as interchangeable terms. The reality is that testing is just one part of quality assurance.

Good quality assurance should encompass the entire development process, from the very start of requirements gathering all the way through to maintenance. Not only does this involve a range of different test techniques, but it should also take in the standards, processes, documentation and sign-off gates used throughout the entire development life-cycle.

2. You can eliminate all the bugs from a system

Expectations need to be managed. One of the great unprovable laws of computing is that all systems have bugs. You will never eliminate all of the bugs in a system; it's a matter of chasing them down to an acceptable level.

The testing expert Boris Beizer estimated his private bug rate at 1.5 bugs per executable line of code, including typing errors. The majority of these bugs are found and corrected by the developer as the code is being written, but much of the testing effort can be seen as weeding out as many of the remaining bugs as possible.

On larger systems, the maintenance phase of the life-cycle is principally concerned with managing an ongoing bug list. There is a list of "known defects" that are tolerated because the overall level of system quality is regarded as sufficient. It's important that everybody understands this and agrees on what an acceptable level of defects might be.

3. You should automate all of your testing

Automated testing can accelerate the overall testing effort, but it does not eliminate the need for manual testing. Effective testing is best achieved through a combination of automated and manual testing.

Automated tests can reduce the need for some repetitive manual testing but they tend to use the same set of inputs on each occasion. A piece of software that consistently passes a rigid set of automated tests may not fare so well once it is subjected to the more random and unpredictable inputs of human testers. The expert eye of a seasoned quality assurance professional will provide a more rigorous test than an automated script.
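To make the contrast concrete, here is a minimal sketch in Python using the hypothesis library for property-based testing. The add_discount function is a hypothetical example, not taken from any real project; the point is that the fixed-input test replays the same values forever, while the property-based test generates fresh, unpredictable inputs on every run, recovering a little of the variety a human tester brings.

```python
# A minimal sketch contrasting a fixed-input automated test with a
# property-based test. `add_discount` is a hypothetical example function.
from hypothesis import given, strategies as st

def add_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return price * (1 - percent / 100)

def test_discount_fixed_inputs():
    # Conventional automated test: exactly the same inputs on every run.
    assert add_discount(100.0, 10.0) == 90.0

@given(price=st.floats(min_value=0, max_value=1e6),
       percent=st.floats(min_value=0, max_value=100))
def test_discount_properties(price, percent):
    # Property-based test: hypothesis generates varied inputs each run,
    # probing edge cases that a fixed test would never exercise.
    discounted = add_discount(price, percent)
    assert 0 <= discounted <= price
```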

It can also be very difficult to bed in any kind of reliable automated testing in the early stages of a project or for new functionality. Most development is in flux at first, and it can be tough to decide when best to start building in test automation. Some software platforms also suffer from a relative shortage of test frameworks, which can further limit the scope of automation.

4. Testing is easy

Quality assurance professionals are often underestimated and undervalued, mainly by people who do not quite understand what value they bring to a project.

Really good quality assurance professionals are like gold dust. They combine deep knowledge of test techniques with a genuine enthusiasm for quality. They can find faults that everyone else on the project overlooks. They will make your testing more efficient by anticipating where defects are most likely to be found. They also bring a broader perspective to the project, based on a deep understanding of both the business requirements and the development process.

5. Only the quality assurance team needs to be involved in testing

Quality assurance professionals really add value because they care deeply about quality and have a superior grasp of what to look for when testing a system. However, quality should be something that everybody takes some responsibility for.

It can be dangerous to leave quality assurance to a separate team of testers, as it reinforces the idea that only a specialist can usefully test software. It also implies a sequential model of development based on functional silos, where business analysts write requirements, technical architects design solutions, developers write code and quality assurance tests the end result.

This sequential model feels dated and it can encourage team members to absolve themselves of responsibility for the overall quality of the system. More modern, agile development approaches help to counter this by encouraging a more collaborative approach. Techniques such as continuous integration and iterative releases can also help to foster a shared responsibility for system quality.

6. The more testing you do the better

Many projects start with the intention of achieving 100% test coverage for the system. This is unrealistic and rarely achieved, as coverage tends to shrink in response to changing development schedules. The result is that decisions about which areas to test are made on the fly rather than through a more systematic approach to determining priorities.

Any decisions about priority should take into account risk and business imperatives, so that the areas with the greatest potential impact receive the greatest coverage. This risk-based approach accepts that complete test coverage is unrealistic, but it prepares you to make more informed decisions about the most sensible areas to concentrate on.
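As an illustration of what such prioritisation might look like in practice, here is a minimal sketch in Python. The areas and the scoring scale are invented purely for the example; the idea is simply to rank test areas by the product of how likely a defect is and how much damage it would cause.

```python
# A minimal sketch of risk-based test prioritisation: each area is scored
# by the likelihood of failure and the business impact of a defect, and
# testing effort is ranked by the product of the two. The areas and
# scores below are invented for illustration.
from dataclasses import dataclass

@dataclass
class TestArea:
    name: str
    likelihood: int  # 1 (stable, simple code) to 5 (new, complex code)
    impact: int      # 1 (cosmetic) to 5 (revenue or safety critical)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

areas = [
    TestArea("payment processing", likelihood=3, impact=5),
    TestArea("report formatting", likelihood=4, impact=1),
    TestArea("user registration", likelihood=2, impact=4),
]

# The areas with the greatest potential impact receive the greatest coverage.
for area in sorted(areas, key=lambda a: a.risk, reverse=True):
    print(f"{area.name}: risk score {area.risk}")
```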

7. You can leave quality assurance to the end

A lot of projects are planned with a certain amount of testing to be carried out once development has been completed. This can seem sensible as it allows you to test and fix the completed system in its entirety through a number of quality assurance cycles.

The catch is that the time available for these quality assurance cycles tends to get squeezed as the project wears on. The inevitable delays that creep into development can make the later stages a rushed affair. When you are faced with a choice between a round of testing and the addition of a new feature, it's easy to skimp on the quality assurance.

It is also a very inefficient approach to testing, as major bugs can be left to fester in the system until the later stages of the project. It is always cheaper to fix bugs early in the development cycle than towards the end, by which point they are likely to have become more deep-seated and the code is no longer fresh in the developer's mind.

8. Performance testing is only worth doing on a production environment

Performance testing is often left to a set of load tests at the tail-end of a development schedule. This approach tends to concentrate on finding the points at which a system crashes rather than ensuring an acceptable level of performance throughout the system. It also leaves things far too late, as by this stage any serious performance problems will be costly and time-consuming to fix.

It's always best to work performance testing into the development life-cycle. Use code profiling tools to check for bottlenecks in code that may come back to haunt you. Define metrics for performance during the design phase and use prototypes to evaluate architectural choices. Above all, plan and monitor system performance throughout the entire development rather than waiting for the "big bang" of load testing.
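For instance, Python's built-in cProfile and pstats modules make this kind of early check cheap enough to run during everyday development. The build_report function below is a deliberately naive, hypothetical stand-in for real application code:

```python
# A minimal sketch of profiling during development using Python's
# built-in cProfile module. `build_report` is a hypothetical stand-in
# for real application code.
import cProfile
import pstats

def build_report(n: int = 200_000) -> str:
    # Deliberately naive string building, to give the profiler
    # something to find.
    report = ""
    for i in range(n):
        report += f"row {i}\n"
    return report

profiler = cProfile.Profile()
profiler.enable()
build_report()
profiler.disable()

# Print the ten most expensive calls by cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```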

9. Security and quality are different activities

Security testing is often relegated to a single audit of a system just before it goes live. As with any last-minute testing, this only creates extra cost, as issues are far cheaper to fix if they are caught earlier in the development process. Last-minute assessments such as penetration tests can provide valuable assurance before go-live, but they should not be the first test of security vulnerabilities.

A genuine commitment to security requires something more substantial than an audit. Ideally, a risk-based approach to identifying and remedying vulnerabilities should be applied throughout the development process. Security checks should be built into the architecture design and code review processes. Above all, you should develop a coherent idea of what the risks are and how they have been addressed by the system.
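One small example of what "built in" can mean: a security concern expressed as an ordinary unit test, so that it runs with every build rather than waiting for a final audit. The sketch below uses Python's sqlite3 module, and find_user is a hypothetical illustration rather than real application code.

```python
# A minimal sketch of a security check written as an ordinary unit test,
# so injection defects surface during development rather than in a
# last-minute audit. `find_user` is a hypothetical illustration.
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterised query: user input is never spliced into the SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?",
                       (username,))
    return cur.fetchone()

def test_injection_payload_is_treated_as_data():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    # A classic injection payload should match no rows, not every row.
    assert find_user(conn, "' OR '1'='1") is None
    assert find_user(conn, "alice") == (1, "alice")
```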

10. Quality assurance adds cost

It can be tempting to see quality assurance as an overhead, and when schedules start to slip it is often the first thing to be cut back. This is a false economy.

I generally find that a willingness to skimp on quality assurance is a sign of inexperience. Anyone who has watched a project sink into a quagmire of endless bug-fixing will never try to cut back on quality assurance. There is no such thing as a "quick and dirty" project – the "dirty" always remains long after the "quick" has been forgotten.

A project that is beset by quality assurance difficulties is a gruelling experience. It's also an expensive one, as you end up pouring resources into fixing bugs that could and should have been caught earlier in the development process. It blows a hole in profitability, damages the reputation of your business, undermines user confidence and demoralises development teams. Quality assurance really isn't a luxury.