What makes a good test automation suite?

Writing a good unit test suite is hard. A lot of test suites don't provide any meaningful benefits, merely serving as an unreliable and unloved extension of the main code base.

Unit test suites can help to make engineering more productive, support more frequent releases, and reduce the rate at which defects are found in production. They can also become a nightmarish time-sink that undermines delivery and erodes confidence in a system. What are the characteristics of a successful test suite?

It should be fast

A test suite should run continuously and provide fast feedback. You want it to be integrated as a natural part of the engineering workflow, so it executes every time an engineer checks code into a repository.

If your test suite doesn't execute quickly, you are less likely to realise any productivity benefits and more likely to fall victim to broken builds. Adding tests becomes a hassle that requires too great an investment to be worth it. Engineers won't be able to verify changes before integrating them into the code base. Test failures become more difficult to fix if they are not discovered quickly. Overall, your tests are just less useful if they are slow to run.
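One common way to keep the per-check-in run fast is to tag slower tests and exclude them from it. A minimal sketch, assuming pytest; the function and the slow marker here are purely illustrative:

    import pytest

    def apply_discount(total: float, rate: float) -> float:
        # Hypothetical production function under test.
        return total * (1 - rate)

    def test_discount_is_applied_to_order_total():
        # Fast, in-memory check that can run on every check-in.
        assert apply_discount(total=100, rate=0.1) == pytest.approx(90)

    @pytest.mark.slow
    def test_full_catalogue_import_completes():
        # Stand-in for an expensive end-to-end check that would run
        # on a schedule rather than on every commit.
        assert True

Running pytest -m "not slow" on each check-in keeps feedback quick, while the full suite, slow tests included, can run less frequently. (A custom marker like slow would be registered in the project's pytest configuration.)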

It should be able to detect regression

A good test suite should reduce the need for manual regression testing by verifying that previously implemented features are still working as expected. It should support code refactoring by giving engineers confidence that their changes are not breaking anything.

This implies that a unit test suite should be comprehensive. Test coverage can be a controversial subject, but good coverage doesn't have to mean isolated tests for every single method, and it may not be practical to cover every single behaviour either. A risk-based approach is normally the most sensible one, where greater attention is given to the higher-value features that pose the greatest risk if they fail.
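To make that concrete, here is a minimal sketch of risk-based regression coverage. The refund calculation is a hypothetical stand-in for a high-value feature that earns focused tests, including one that pins down a previously fixed defect:

    def calculate_refund(amount_paid: float, restocking_fee: float) -> float:
        # Hypothetical high-value function: refunds are assumed to carry
        # the greatest risk if they regress, so they get focused coverage.
        return max(amount_paid - restocking_fee, 0.0)

    def test_refund_deducts_restocking_fee():
        assert calculate_refund(amount_paid=50.0, restocking_fee=5.0) == 45.0

    def test_refund_never_goes_negative():
        # Guards a previously fixed defect so it cannot quietly return:
        # a fee larger than the payment must not produce a negative refund.
        assert calculate_refund(amount_paid=5.0, restocking_fee=10.0) == 0.0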

It should serve as documentation

Requirements documents and code comments tend to go out of date, creating a gap between the code base and any documentation that describes it. A good test suite can help to bridge this documentation gap by describing the intent of the code.

However, this is not a given. It depends on the tests being derived from a solid set of use cases that clearly describe the intended interactions of the system. It also demands that the tests be consistently organised, understandable, and written in plain language. It should be easy to determine what a test is for, how to run it, and what results should be expected.
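For example, a test named and structured in plain language can read as a use case in its own right. A small sketch, assuming pytest, with a hypothetical Account class:

    import pytest

    class Account:
        # Hypothetical class used to illustrate plain-language tests.
        def __init__(self, balance: float):
            self.balance = balance

        def withdraw(self, amount: float) -> None:
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    def test_withdrawal_is_rejected_when_balance_is_insufficient():
        # The name and body read as a use case: given an account holding
        # 10.00, a withdrawal of 25.00 should be refused.
        account = Account(balance=10.0)
        with pytest.raises(ValueError):
            account.withdraw(25.0)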

It should be reliable

There's no point having a test automation suite unless you are confident that it can provide a consistent and reliable indication of the health of the code base. This means that a test suite should routinely pass and not fail for arbitrary reasons.

This is a critical aspect of a healthy engineering team culture. It's easy for "broken window syndrome" to kick in when a team becomes accustomed to tests failing. They just stop caring about new failures, as the tests are routinely broken anyway. This rather defeats the objective of having tests in the first place.
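A common source of arbitrary failures is a hidden dependency on the real clock, random numbers, or shared state. One way to remove it, sketched below with a hypothetical token-expiry function, is to pass the variable input in explicitly so the test is deterministic:

    from datetime import datetime, timezone
    from typing import Optional

    def is_token_expired(expires_at: datetime,
                         now: Optional[datetime] = None) -> bool:
        # Hypothetical function. Accepting "now" as a parameter removes
        # the hidden dependency on the real clock.
        if now is None:
            now = datetime.now(timezone.utc)
        return now >= expires_at

    def test_token_is_expired_at_the_exact_expiry_time():
        moment = datetime(2024, 1, 1, tzinfo=timezone.utc)
        # Deterministic: the same inputs give the same result every run.
        assert is_token_expired(expires_at=moment, now=moment)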

It should provide a contract between engineers

If engineers are going to collaborate efficiently on a code base, they need certain guarantees about the reliability of the code. They need to be confident that the latest version of the code base is stable, that it's not suddenly going to break when other engineers' changes are merged into it, and that any changes they commit will not undermine it.

A reliable test suite can provide a contract between engineers that helps to enforce these guarantees. So long as a check-in passes the tests, an engineer can be reasonably confident that their code has not broken anything.

It should be reportable

A test yields a result, sometimes with supporting information such as error messages. Large test suites can return a lot of information for each run. You need some mechanism for reporting on this so that it's visible to everybody and easily understood. It should be immediately obvious when a test has failed. There should also be sufficient information available to allow engineers to figure out why a test has failed and start working on a resolution.
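At the level of an individual test, descriptive assertion messages help here, turning a bare failure in the report into something an engineer can start diagnosing. A small sketch with a hypothetical invoice function:

    def calculate_invoice_total(net: float, tax_rate: float) -> float:
        # Hypothetical function under test.
        return round(net * (1 + tax_rate), 2)

    def test_invoice_total_includes_tax():
        total = calculate_invoice_total(net=100.0, tax_rate=0.2)
        # If this fails, the report already points towards a likely cause.
        assert total == 120.0, f"expected 120.0 including 20% tax, got {total}"

Beyond individual messages, most runners can also emit machine-readable results, for example pytest's --junitxml option, which reporting dashboards can aggregate and make visible to the whole team.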

It should be easy to maintain

Your test suite shouldn't become a time sink that requires engineering effort every time you add a new feature. You should be able to add, remove, and change tests relatively easily. This is easier to achieve if tests are isolated and atomic so you can implement a new feature without having to work through a cascade of dependent test code.
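Fixtures are one way to keep tests isolated and atomic. A minimal sketch, assuming pytest; the basket fixture is purely illustrative:

    import pytest

    @pytest.fixture
    def basket():
        # Every test receives a fresh basket, so no test depends on
        # state left behind by another.
        return []

    def test_new_basket_starts_empty(basket):
        assert basket == []

    def test_adding_an_item_grows_the_basket(basket):
        basket.append("apple")
        # Isolated and atomic: this test can be changed or removed
        # without touching any other test.
        assert len(basket) == 1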

It should be recognised as valuable

There are still naysayers who deny the value of test automation. After all, test suites take a lot of work to build and maintain. This extra overhead needs to be worth the effort, as it consumes resources that could otherwise be spent building new features.

A test suite shouldn't be seen as "extra" code, but as a necessary part of delivering a system. You write code both to implement and verify a feature, preferably starting with the test code. Over time, this should make engineers more productive, systems more reliable, and help to reduce the defects found in production.
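As a small illustration of starting with the test code, here is a hypothetical slugify helper specified by a test written before the implementation existed:

    def test_slugify_lowercases_and_replaces_spaces():
        # Written first, this test specifies the feature before it exists.
        assert slugify("Hello World") == "hello-world"

    def slugify(text: str) -> str:
        # Minimal implementation added afterwards to make the test pass.
        return text.lower().replace(" ", "-")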