Testing, uncertainty and expectation

The interpretation of a test result by a human being is influenced by the tester's state of mind before the test is run.

For example, if we run a test and experience a failure:

  • If the failure is in an area where we are 'looking for bugs', e.g. a newly written piece of code, we are disappointed but not unnerved, because we expect to find bugs in new code - and finding them is the purpose of the test.

  • But what if the failure is in an area we trust, to the degree that we have no specific tests there? Suppose, for example, we experience a database or operating system failure. That failure is unnerving: it undermines our confidence in the platform as a whole and challenges the very assumption of reliability that led us to plan no DB or OS tests in the first place.

Our predisposition towards software aligns closely with our perception of risk.

If we perceive that the likelihood of failure of a platform or component is low (even though the impact would be catastrophic), we are unlikely to test that platform or component. Our thinking is, "We are so unlikely to expose a failure here - why bother?" We might also attribute the notion of (bad) luck to such areas: "If that test exposed a bug, we'd be so unlucky."

Essentially, risk-based testing focuses our attention on the modes of failure that are perceived as dangerous enough (severe, to the degree that they threaten the objectives of the system) and likely enough that we would be negligent if we ignored them.

In so doing, we identify tests, or ideas for tests, that have a good chance of finding bugs that matter. The usual introduction to test design techniques makes this very point: "techniques have a good chance of finding a bug". We respond to uncertainty about where the bugs might be by pre-emptively directing our bug-detection effort at the riskiest areas.
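To make that prioritisation concrete, here is a minimal sketch in Python, assuming a simple likelihood × impact scoring rule; the 1-5 scales, the example failure modes, and the scores are illustrative assumptions, not a prescribed method.

```python
# A minimal sketch of risk-based test prioritisation.
# The 1-5 scales and the likelihood * impact rule are illustrative
# assumptions, not a prescribed standard.

def risk_score(likelihood: int, impact: int) -> int:
    """Score a mode of failure; higher scores deserve tests sooner."""
    return likelihood * impact

failure_modes = [
    # (description, likelihood 1-5, impact 1-5)
    ("Newly written payment-calculation code", 4, 4),
    ("Session handling after recent refactor", 3, 4),
    ("Report layout glitch", 3, 1),
    ("Database engine corrupts data", 1, 5),
]

# Attend to the most dangerous-and-likely modes of failure first.
for desc, likelihood, impact in sorted(
    failure_modes, key=lambda m: risk_score(m[1], m[2]), reverse=True
):
    print(f"{risk_score(likelihood, impact):>2}  {desc}")
```

Note how the database failure mode sinks to the bottom of the list despite its catastrophic impact: precisely the "why bother?" effect described above.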

You can also look at product risks as modes of failure that, if they occurred, we know we would prioritise and (usually) fix quickly. You can therefore think of product risks as virtual or speculative bug reports.
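As a hypothetical sketch of that framing, a risk register entry could carry the same fields as the bug report we would file if the risk materialised; the class and field names here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SpeculativeBugReport:
    """A product risk written as the bug report we'd file if it occurred."""
    summary: str        # the failure mode, phrased as a defect
    severity: str       # how badly it would hurt if it happened
    likelihood: str     # how plausible we currently judge it to be
    fix_priority: str   # the response we'd commit to on discovery

risk = SpeculativeBugReport(
    summary="Nightly import silently drops records on duplicate keys",
    severity="high",
    likelihood="medium",
    fix_priority="fix within one day of discovery",
)
```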

It’s a good strategy. However, if we judge that some aspect of a system is not risky and it then fails during testing, it is rather unnerving. So it’s sensible to prepare yourself and your team to see, for example, problems in trusted but untested infrastructure early in proceedings.
