Regression Testing v Recession Testing

Anne-Marie Charrett wrote a blog post that I commented on extensively. I’ve reproduced the comment here:

“Some things to agree with here, and plenty to disagree with too…

1. Regression testing isn’t about finding bugs in the same way as one might test new software to detect bugs (testing actually does not detect bugs, it exposes failure. Whatever.) It is about detecting unwanted changes in functionality caused by a change to software or its environment. Good regression tests are not necessarily ‘good functional tests’. They are tests that will flag up changes in behaviour – some changes will be acceptable, some won’t. A set of tests that merely achieves 80% branch coverage will probably be adequate to demonstrate functional equivalence of two versions of software with a high level of confidence – economically. They might be lousy functional tests “to detect bugs”. But that’s OK – ‘bug detection’ is a different objective.

2. Regression testing is one of four anti-regression approaches. Impact analysis from a technical point of view and from a business point of view are the two preventative approaches. Static code analysis is a rarely used regression detection approach. Fourthly… and finally… regression testing is what most organisations attempt to do. It seems to be the ‘easiest option’ and ‘least disruptive to the developers’. (Except that it isn’t easy, and regression bugs are an embarrassing pain for developers.) The point is one can’t consider regression testing in isolation. It is one of four weapons in our armoury (although the technical approaches require tools). It is also over-relied on and done badly (see 1 above and 3 below).

3. If regression testing is about demonstrating functional equivalence (or not), then who should do it? The answer is clear. Developers introduce the changes. They understand, or should understand, the potential impact of planned changes on the code base before they proceed. Demonstrating functional equivalence is a purely technical activity – call it checking if you must. Tools can do it very effectively and efficiently if the tests are well directed (80% branch coverage is a rule of thumb). It is an activity that should be done by technicians.

Of course, what happens mostly is that developers are unable to perform accurate technical impact analyses, and they don’t unit test well, so they have no tests and certainly nothing automated. They may not be interested in, and/or paid to do, testing. So the poor old system or acceptance testers, working purely from the user interface, are obliged to give it their best shot. Of course, they try to re-use their documented tests or their exploratory nous to create good ones. And fail badly. Not only are tests driven from the UI point of view unlikely to cover the software that might be affected, but the testers are generally uninformed of the potential impact of software changes, so they have no steer to choose good tests in the first place. By and large, they aren’t technical and aren’t privy to the musings of the developers before they perform the code changes, so they are pretty much in the dark.

So UI-driven manual or automated regression testing is usually of low value (but high expense) *when intended to demonstrate functional equivalence*. That is not to say that UI-driven testing has no value. Far from it. It is central to assessing the business impact of changes. Unwanted side-effects may not be bugs in code; they are a natural outcome of the software changes requested by users. A common example is a configuration change in an ERP system: the users may not get what they wanted from the ‘simple change’. Ill-judged configuration changes in ERP systems designed to perform straight-through processing can have catastrophic effects. I know of one example that caused 75 man-years of manual data clean-up effort. The software worked perfectly – there was no bug. The business using the software did not understand the impact of configuration changes.

Last year I wrote four short papers on Anti-Regression Approaches (including regression testing) in which I expand on the points above. You can see them here: http://gerrardconsulting.com/index.php?q=node/479 ”
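To make the functional-equivalence point in (1) concrete, here is a minimal sketch in Python, assuming the old and new versions of a routine can be exercised side by side; the routine and its inputs are hypothetical:

```python
# Minimal functional-equivalence check: run the same inputs through the old
# and new versions of a routine and flag any behavioural difference.
# old_price/new_price are hypothetical stand-ins for the two versions.

def old_price(quantity: int, unit_cost: float) -> float:
    return round(quantity * unit_cost, 2)

def new_price(quantity: int, unit_cost: float) -> float:
    # refactored version whose behaviour should be unchanged
    return round(unit_cost * quantity, 2)

def test_functional_equivalence():
    cases = [(1, 9.99), (0, 5.00), (100, 0.01), (3, 19.95)]
    for quantity, unit_cost in cases:
        assert new_price(quantity, unit_cost) == old_price(quantity, unit_cost), (
            f"behaviour changed for inputs {(quantity, unit_cost)}"
        )
```

These are not good functional tests; they simply flag any difference in behaviour between the two versions, which is the regression objective.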

Anti-Regression Approaches: Anti-Regression Strategy – Making it Work

The first four articles in this series have set out the main approaches to combating regression in changing software systems. From a business and technical viewpoint, we have considered both pre-change regression prevention (impact analysis) and post-change regression detection (regression testing). In this final article of the series, we’ll consider three emerging approaches that promise to reduce the regression threat, and present some considerations for an effective anti-regression strategy with a recap of the main messages of the article series.

Three Approaches: Test, Behaviour and Acceptance Test-Driven Development

There is an increasing amount of discussion on development approaches based on the test-driven model. Ten years or so ago, before lightweight (later named Agile) approaches became widely publicized, test-driven development (TDD) was rare. Some TDD happened, but mostly in high-integrity environments where component development and testing was driven by the need to meet formal functional and structural test coverage targets.

Over the course of the last ten years, however, the notion of developers creating automated tests, typically based on stories and discussions with on-site customers, has become more common. The leaders in the Agile community tend to preach behaviour-driven (BDD) and even acceptance test-driven development (ATDD) to improve, and make accessible, the test assets in Agile projects. These approaches are also an attempt to move the Agile emphasis from coding to the delivery of stakeholder value.

The advocates of these approaches (see for example testdriven.net, gojko.net, behaviour-driven.org, ATDD in Practice) would say that the approaches are different and, of course, in some respects they are. But from the point of view of our discussion of anti-regression approaches, the relevance is this:

  1. Regression testing performed by developers is probably the most efficient way to demonstrate functional equivalence of software (given the limited scope of unit testing).
  2. The test-driven paradigm ensures that regression test assets are acquired and maintained in synchrony with the code – so are accurate and constantly reusable.
  3. The existence of a set of trusted regression tests means that the programmer is protected (to some degree) from introducing regressions when they change code (to enhance, fix bugs in or refactor code).
  4. Programmers, once they commit to the test-first approach, tend to find their design and coding activities more predictable and less stressful.
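As a rough illustration of points 2 and 3 above, here is a minimal test-first sketch in Python/pytest; the function and examples are hypothetical, but once this test passes it travels with the code as a regression check for every later change:

```python
# Test written first: it defines the expected behaviour and, once passing,
# doubles as a regression check for every later change to the code.
# apply_discount is a hypothetical function under test.

import pytest

def apply_discount(total: float, loyalty_years: int) -> float:
    """5% off per loyalty year, capped at 25%."""
    discount = min(0.05 * loyalty_years, 0.25)
    return round(total * (1 - discount), 2)

@pytest.mark.parametrize("total, years, expected", [
    (100.0, 0, 100.0),   # no loyalty, no discount
    (100.0, 2, 90.0),    # 10% off
    (100.0, 10, 75.0),   # capped at 25%
])
def test_apply_discount(total, years, expected):
    assert apply_discount(total, years) == expected
```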

These approaches obviously increase the effort at the front-end and many programmers are not adopting (and may never adopt) them. However, the trend toward test-first does seem to be gaining momentum.

A natural extension of test-first in Agile and potentially more structured environments is the notion of live specifications. In this approach, the automated tests become the independent and executable definition of the behaviour of the system. The tests define the behaviour of a system by example, and can be considered to be executable specifications (of a sort). Of course, examples alone cannot define the behaviour of systems completely and some level of logical specification will always be required. However, the live-specification approach holds great promise, particularly as a way of reducing regressions.

The ideal seems to be that where a change is required by users, the live specification is changed, new tests added and existing tests changed or retired as required. The software changes are made in parallel. The new and changed tests are run to demonstrate the changes work as required, and the existing (unchanged) tests are, by definition, the regression test pack. The format, content and structure of such live-specifications are evolving and a small number of organisations claim some successes. It will be interesting to see examples of the approach in action.
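A minimal sketch of the live-specification idea, with the examples held as data so that rows can be added, changed or retired alongside the code; the feature and field names are purely illustrative:

```python
# The examples below act as a crude executable specification: each row is one
# scenario. New behaviour is specified by adding or editing rows; the rows
# left untouched are, by definition, the regression pack for that feature.

SHIPPING_EXAMPLES = [
    {"weight_kg": 0.5, "express": False, "expected_fee": 3.00},
    {"weight_kg": 0.5, "express": True,  "expected_fee": 6.00},
    {"weight_kg": 5.0, "express": False, "expected_fee": 8.00},
]

def shipping_fee(weight_kg: float, express: bool) -> float:
    base = 3.00 if weight_kg <= 1.0 else 8.00
    return base * 2 if express else base

def test_shipping_examples():
    for example in SHIPPING_EXAMPLES:
        actual = shipping_fee(example["weight_kg"], example["express"])
        assert actual == example["expected_fee"], f"failed for {example}"
```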

Unified Requirements and Systems Testing

The test-first approaches discussed above are gaining popularity in Agile environments. But what can be done in structured, waterfall, larger developments?

Some years ago (in my first Eurostar paper in 1993), I proposed a ‘Unified Approach to System Functional Testing’. In that paper, I suggested that a tabular notation for capturing examples or test cases could be used to create crude prototypes, review checklists and structured walkthroughs of requirements. These ‘behaviours’, as I called them, could be used to test requirements documents, but also reused as the basis of both system and acceptance testing later on. Other interests took priority and I didn’t take this proposal much further until recently.

Several developments in the industry make me believe that a practical implementation of this unified approach is now possible and attractive to practitioners. See for example the model-based papers here: www.geocities.com/model_based_testing/online_papers.htm or the tool described here: teststories.info. To date, these approaches have focused on high formality and embedded/industrial applications.

Our approach involves the following activities:

  1. Requirements are tabulated to allow cross-referencing.
  2. Requirements are analysed, and stories comprising feature descriptions and a covering set of scenarios and examples (acceptance criteria) are created.
  3. The scenarios are mapped to paths through the business process and a data dictionary; paper and automated prototypes can be generated from the scenarios.
  4. Using scenario walkthroughs, the requirements are evaluated, omissions and ambiguities identified and fixed.
  5. The process paths, scenarios and examples may be incorporated into software development contracts, if required.
  6. The process paths, scenarios and examples are re-used as the basis of the acceptance test which is conducted in the familiar way.

Essentially, the requirements are ‘exampled’, with features identified and a set of acceptance criteria defined for each – in a structured language. It is the structure of the scenarios that allows tabular test definitions for use in manual procedures, as well as skeletal automated tests, to be generated automatically. There are several benefits deriving from this approach, but the two that concern us here are:

  • The definition of tests and the ability to generate automated scripts occur before code is written, which means that the test-first approach is viable for all projects, not just Agile ones.
  • The database of requirements, processes, process paths, features, examples and data dictionary is cross-referenced. The database can be used to support more detailed business-oriented impact analysis.

The first benefit has been discussed in the previous section. The second has great potential.

The business knowledge captured in the process will allow some very interesting what-if questions to be asked and answered. If a business process is to change, the system features, requirements, scenarios and tests affected can be traced. If a system feature is to be changed, the scenarios, tests, requirements and business processes affected can be traced. This knowledge should provide, at least at a high level, a better understanding of the impact of change. Further, it promotes the notion of live specifications and Trusted Requirements.
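A small sketch of the kind of what-if query such a cross-referenced database makes possible; the structures and identifiers are hypothetical:

```python
# Trace from a changed system feature to the scenarios, tests, requirements and
# business processes that reference it, giving a first cut of the change impact.

feature_to_scenarios = {
    "F-checkout": ["S-001", "S-002"],
    "F-refund":   ["S-010"],
}
scenario_index = {
    "S-001": {"requirement": "R-101", "process": "Order-to-Cash", "test": "test_s_001"},
    "S-002": {"requirement": "R-102", "process": "Order-to-Cash", "test": "test_s_002"},
    "S-010": {"requirement": "R-150", "process": "Returns",       "test": "test_s_010"},
}

def impact_of_feature_change(feature: str) -> dict:
    affected = [scenario_index[s] for s in feature_to_scenarios.get(feature, [])]
    return {
        "scenarios":    feature_to_scenarios.get(feature, []),
        "requirements": sorted({a["requirement"] for a in affected}),
        "processes":    sorted({a["process"] for a in affected}),
        "tests":        sorted({a["test"] for a in affected}),
    }

print(impact_of_feature_change("F-checkout"))
```

Run against F-checkout, the query returns the two affected scenarios, their requirements, the Order-to-Cash process and the tests to re-run.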

There is a real possibility that the (typically) huge investment in requirements capture will not be wasted and the requirements may be accurately maintained in parallel with a covering set of scenarios. Further, the business knowledge captured in the requirements and the database can be retained for the lifetime of the system in question.

Improving Software Analysis Tools

The key barrier to performing better technical impact analyses is the lack (and expense) of appropriate tools to provide a range of source code analyses. Tools that provide visualisations of the architecture, relationships between components and hierarchical views of these relationships are emerging. Some obvious challenges make life somewhat difficult though:

  1. Tools are usually language-dependent, so mixed-language environments are troublesome.
  2. The source code for third-party components used in your system may not be available.
  3. Visualisation software is available, but for realistically sized systems the graphical models can become huge and unworkable.

These tools are obviously aimed at architects, designers and developers and are naturally technical in nature.
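As a very small illustration of the sort of relationship extraction these tools automate – here restricted to Python sources and module-level imports – consider this sketch:

```python
# Build a crude component-dependency map by scanning Python files for imports.
# Real analysis tools do far more (cross-language parsing, hierarchy, metrics),
# but the underlying idea is the same: derive relationships from source code.

import ast
from pathlib import Path

def import_map(source_dir: str) -> dict[str, set[str]]:
    dependencies: dict[str, set[str]] = {}
    for path in Path(source_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        imports = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imports.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imports.add(node.module)
        dependencies[str(path)] = imports
    return dependencies

if __name__ == "__main__":
    for module, deps in import_map(".").items():
        print(module, "->", sorted(deps))
```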

An example of how tools are evolving in this area is Structure101g (headwaysoftware.com). This tool can perform detailed structural analyses of several languages (“java, C/C++ and anything”) but can, in principle, provide visualisations, navigation and query facilities for any structural model. For example, with the necessary plugins, the tool can provide insights into XML/XSLT libraries and web site maps at varying levels of abstraction.

As tools like this become better established and more affordable, they will surely become ‘must-haves’ for architects and developers in large systems environments.

Anti-Regression Strategy – Making it Happen

We’ll close this article series with some guidelines summarised from this and previous articles. The Roman numerals in brackets refer to the article numbers in this series.

  1. Regressions in working software affect business users, technical support, testers, developers, software and project management and stakeholders. It is everyone’s problem (I, V).
  2. A comprehensive anti-regression strategy would include both regression prevention and detection techniques from a technical and business viewpoint. (I, II).
  3. Impact analysis can be performed from both business and technical viewpoints. (all)
  4. Technical impact analysis really needs tool support; consider open source or proprietary tools (or consider building your own to meet your specific objectives).
  5. Regression testing may be your main defence against regression, but should never be the only one; impact analysis prevents regression and informs good regression testing. (I, II, IV).
  6. Regression testing can typically be performed at the component, system or business level. These test levels have different objectives, owners and may be automated to different degrees (III).
  7. Regression tests may be created in a test-driven regime, or as part of requirements or design based approaches. Reuse of tests saves time, but check that these tests actually meet your anti-regression objectives (III).
  8. Regression tests become less effective over time; review your test pack regularly, especially when you are about to add to it. (This could be daily in an Agile environment!) (III)
  9. Analyses of production data will tell you the formats, volumes and patterns of data that are most common – use them as a source of test data and a model for coverage; but don’t forget to include negative tests too! (III)
  10. If you need to be selective in the tests you retain and execute, then you’ll need an agreed process, forum, decision-maker (or makers) and criteria for selection (agreed with all stakeholders in 1 above) (III).
  11. Most regression testing can and should be automated. Understand your context (objectives, test levels, risk areas, developer/tester motivations and capabilities etc.) before defining your automation strategy (III, IV).
  12. Consider what test levels, system stimulation and outcome detection methods, ownership, capabilities and tool usability are required before defining an automation regime (IV).
  13. Creating an automation regime retrospectively is difficult and expensive; test-first approaches build regression testing into the DNA of project teams (V).
  14. There is a lot of thinking, activity and new approaches/tools being developed to support requirements testing, exampling, live-specs and test automation; take a look (V).

I wish you the best of luck in your anti-regression initiatives.

I’d like to express sincere thanks to the Eurostar Team for asking me to write this article series and attendees at the Test Management Summit for inspiring it.

Paul Gerrard
23 August 2010.