Testing the Code that Checks Code

Twitter is a lousy medium for debate, don’t you think?

I had a very brief exchange with Michael Bolton below.  (Others have commented on this thread this afternoon). To my apparently contradictory (and possibly stupid) comment, Michael responded with a presumably perplexed “?”

This blog post is a partial explanation of what I said, and why I said it. You might call it an exercise in pedantry. (Without pedantry, there is less joy in the world – discuss). There’s probably a longer debate to be had, but certainly not on Twitter. Copenhagen perhaps, Michael? My response was to the third tweet, the ‘Lesson’:

3) Lesson: don’t blindly trust … your automated checks, lest they fail to reveal important problems in production code.

I took the lesson tweet out of context and deliberately ignored the first two tweets; I’ll comment on those below. For the third, I also ignored the ‘don’t blindly trust your test code’ aspect, and here’s why. If you have test code that operates at all, and you have automated checks that operate, you presumably trust the test code already. You will have already done whatever testing of the test code you deemed appropriate. I was more concerned with the second aspect: don’t blindly trust the checks.

But you know what? My goal with automation is exactly that – to blindly trust automated checks.

If you have an automated check that runs at all, then given the same operating environment, test data, software versions, configuration and so on, you would hardly expect the repeated check to reveal anything new (unless it detected a failure – doing its job). If it did ‘fail’, then it really ought to flag some kind of alarm. If you are not paying attention or are ignoring the alarm, then on your own head be it. But if I have to be paying attention all the time, effectively babysitting, then my automation is failing. It is failing to replace my manual labour (often the justification for automating in the first place).

A single check is most likely to be run as part of a larger collection of tests, perhaps thousands, so the notification process needs to be integrated with some form of automated interpretation or at least triggered when some pre-defined threshold is exceeded.

Why blindly? Well, we humans are cursed by our own shortcomings. We have low attention-spans, are blind to things we see but aren’t paying attention to and of course, we are limited in what we can observe and assimilate anyway. We use tools to replace humans not least because of our poor ability to pay attention.

So I want my automation to act as if I’m not there and to raise alarms in ways that do not require me to be watching at the time. I want my phone to buzz, or my email client to bong, or my chatOps terminal to beep at me. Better still, I want the automation to choose who to notify. I want to be the CC: or BCC: in the message, not necessarily the To: all the time.
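
To make this concrete, here is a minimal sketch of the kind of scaffolding I have in mind – summarise a run of checks and raise the alarm only when a pre-defined threshold is exceeded. It’s a sketch only: the result shape, the threshold and the webhook URL are invented for illustration, not part of any particular framework.

    # Sketch only: summarise check results and notify a channel when a
    # failure threshold is exceeded. The webhook URL is a placeholder.
    import json
    import urllib.request
    from dataclasses import dataclass

    @dataclass
    class CheckResult:
        name: str
        passed: bool
        detail: str = ""

    FAILURE_THRESHOLD = 1  # alarm on any failure; tune per suite

    def notify_if_needed(results, webhook="https://chatops.example.com/hook"):
        failures = [r for r in results if not r.passed]
        if len(failures) < FAILURE_THRESHOLD:
            return  # no news is good news - nobody needs to watch
        payload = json.dumps({
            "summary": "%d of %d checks failed" % (len(failures), len(results)),
            "failures": [{"check": r.name, "detail": r.detail} for r in failures],
        }).encode("utf-8")
        req = urllib.request.Request(
            webhook, data=payload,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)  # the channel buzzes; I don't babysit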

I deliberately took an interpretation of Michael’s comment that he probably didn’t intend. (That’s Twitter for you).

When my automated checks run, I don’t expect to have to evaluate whether the test code is doing ‘the right thing’ every time. But over time, things do change – the environment, configuration and software under test – so I need to pay attention to whether these changes impact my test code. Potentially, the check needs adjustment, re-ordering, enhancing, replacing or removing altogether. The time to do this is before the test code is run – during requirements discussions or in collaboration with developers.

I believe this was what Michael intended to highlight: your test code needs to evolve with the system under test and you must pay attention to that.

Now, my response to the tweet suggests that, rather than babysitting your automated checks, you should spend your time more wisely – testing the system in ways your test code cannot (economically).

To the other tweets:

1) What would you discover if you submitted your check code to the same review, scrutiny, and testing as your production code?

2) If you’re not scrutinizing test code, why do you trust it any more than your production code? Especially when no problems are reported?

Test code can be trivial, but it can sometimes be more complex than the system under test. It’s the old, old story: ‘who tests the test code, and how?’ I have worked on a few projects where test code was treated like any other code – high-integrity projects and the like – but even then I didn’t see much ‘test the test code’ activity. I’d say there are some common factors that make it less likely you would test your test code, and feel safe (enough) not doing so.

  1. Test code is built incrementally, usually, so that it is ‘tried’ in isolation. Your test code might simulate a web or mobile transaction, for example. If you can watch it move to fields, enter data and check the visible outcomes correctly, most testers would be satisfied that it works as a simple check. What other test is required than re-running it, expecting the same outcome each time?
  2. Where the check is data-driven, of course, the code uses prepared data to fill, click or check parameterised fields, buttons and outcomes respectively. On a GUI app this can be visibly checked. Should you try invalid data (not included in your planned test data) and so on? Why bother? If the test code fails, then that is notification enough that you screwed up – fix it. If the test code flags false negatives when, for example, your environment changes, then you have a choice: tidy up your environment, or add code to accommodate acceptable environmental variations.
  3. Now, when your test code loses synchronisation or encounters a real mismatch of outcomes, your code needs handlers for these situations. These handlers might be custom-built for every check (an expensive solution) or utilise system-wide procedures to log, recover, re-start or hand off, depending on the nature of the tests or failures. This ought to be where your framework or scaffolding code comes in (see the sketch after this list).
  4. Surely the test code needs testing more than just ‘using it’? The thing is, your test code is not handed over to users for them to enter extreme, dubious or poor-quality data. All the data it will ever handle is in the test suite you use to test the system under test. Another tester might add new rows of test data to feed it, but problems that arise are as likely to be caused by other things as by new test data. At any rate, what tests would you apply to your test code? Your test data, selected to exercise extremes in your system under test, is probably quite well suited to testing the test code anyway.
  5. When problems do arise when your test code is run, it is more likely to be caused by environmental/data problems or software changes, so your test code will be adapted in parallel with these changes, or made more resilient to variations (bearing in mind the original purpose of the test code).
  6. Your scaffolding code or home-grown test framework handles this, doesn’t it? Pretty much the same arguments apply: scaffolding and frameworks are likely to be made more robust through use, evolution and adaptation than through a lot of planned tests.
  7. Who tests the tests? Who tests the tests of the tests? Who tests the tests of …
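
On point 3, here is a minimal sketch of what a system-wide handler in scaffolding code might look like. The wrapper and its recovery policy are invented for illustration; a real framework would supply its own hooks to log, recover, re-start or hand off.

    # Sketch only: one shared recovery wrapper instead of custom handlers
    # built into every check.
    import logging
    import time

    log = logging.getLogger("scaffolding")

    def run_with_recovery(check, retries=1, settle_seconds=2.0):
        """Run one check; distinguish real mismatches from lost synchronisation."""
        for attempt in range(retries + 1):
            try:
                check()                    # the automated check itself
                return True
            except AssertionError as exc:  # a real mismatch of outcomes
                log.error("%s failed: %s", check.__name__, exc)
                return False               # genuine failure: report, don't retry
            except Exception as exc:       # lost synchronisation, timeout etc.
                log.warning("%s hit %r; recovering", check.__name__, exc)
                time.sleep(settle_seconds) # crude recovery: let things settle
        log.error("%s abandoned after %d retries", check.__name__, retries)
        return False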

Your scaffolding, framework, handlers, analysis, notification and messaging capabilities need a little more attention. In fact, your test tools might need to be certified in some way (as standard C, C++ and Java compilers are, for example) to be acceptable to use at all.

I’m not suggesting that under all circumstances your test code doesn’t need testing. But it seems to me that, in most situations, the code that actually performs a check may be tested well enough by your test data, and that most exceptions arise as you develop and refine that code, so they can be fixed before you come to rely on it.


Anti-Regression Approaches: Impact Analysis and Regression Testing Compared and Combined: Part IV: Automated Regression Testing

In Parts I and II of this article series, we introduced the nature of regression, impact analysis and regression prevention. In Part III we looked at Regression Testing and how we select regression tests. This article focuses on the automation of regression testing.

Automated Regression Testing is One Part of Anti-Regression

Sometimes it feels like more has been written about test automation, especially GUI test automation, than any other testing subject. My motivation in writing this article series was that most things of significance in test automation had been said 8, 10 or 15 years ago, and not much progress has been made since (notwithstanding the technology changes that have occurred). I suggest there has been a lack of progress because significant and sustained success with the automation of (what is primarily) regression testing is still not assured. Evidence of failure, or at least of troublesome implementations of automation, is still widespread.

My argument in the January 2010 Test Management Summit was that perhaps the reason for failure in test automation was that people didn’t think it through before they started. In this context, ‘started’ often means getting a good deal on a proprietary GUI Test Automation tool.

It’s obvious – buying a tool isn’t the best first step. Automating tests through the user interface may not be the most effective way to achieve anti-regression objectives; test automation may not be an effective approach at all, and it certainly shouldn’t be the only one considered. Test execution automation promises reliable, error-free, rapid, unattended test execution. In some environments the promise is delivered; in most, it is not.

In the mid-1990s, informal surveys revealed that very few (in one survey, fewer than 1% of) test automation users achieved ‘significant benefits’. The percentage is much higher nowadays – maybe as high as 50% – but that is probably because most practitioners have learnt their lessons the hard way. Regardless, success is not assured.

Much has been written on the challenges and pitfalls of test automation. The lessons learned by practitioners in the mid-90s are substantially the same as those facing practitioners today, and it’s a cause of some frustration that many companies still haven’t learnt them. In this article, there isn’t space to repeat those lessons. The papers, books and blogs referenced at the end of this article focus on implementing automation, primarily from a user-interface point of view, and sometimes as an end in itself. To complement these texts, to bring them up to date and to focus them on our anti-regression objective, the remainder of this article sets out some wider considerations.

Regression test objectives and (or versus?) automation

The three main regression test objectives are set out below together with some suggestions for test automation. Although the objectives are distinct, the differences between regression testing and automation for the three objectives are somewhat blurred.

  1. To detect unwanted changes to trusted functionality.
     Source of tests: functional system tests; integration tests.
     Automation considerations: consider the criteria in references 6, 7 and 8. Most likely to be automated using drivers to component and sub-system interfaces.
  2. To detect unwanted changes (to support technical refactoring).
     Source of tests: test-first, test-driven environments generate automated tests naturally.
     Automation considerations: consider reference 9 and the discussion of testing in TDD and Agile in general.
  3. To demonstrate to stakeholders that they can still do business.
     Source of tests: acceptance tests, business process flows, ‘end to end’ tests.
     Automation considerations: consider the criteria in references 6, 7 and 8, but expect mostly manual testing for demonstration purposes. See reference 10 for an introduction to Acceptance-Test Driven Development.

Regression objectives reframed: detecting regression v providing confidence

Of the three regression test objectives above, objectives 1 and 2 are similar. What differentiates them is who (and where) they come from. Objective 1 comes from a system supplier perspective and tests are most likely to be sourced from system or integration tests that were previously run (either manually or automated). Objective 2 comes from a developer or technical perspective where the aim is to perform some refactoring in a safe environment. By and large, ‘safe’ refactoring is most viable in a Test-Driven environment where all unit tests are automated, probably in a Continuous Integration regime. (Although refactoring at any level benefits from automated regression testing).

If objectives 1 and 2 require tests to demonstrate ‘functional equivalence’, regression test coverage can be based on the need to exercise the underlying code and cover the system functionality. Potentially, tests based on equivalence partitioning ought to cover the branches in code (though not housekeeping or error-handling functionality – see below). Tests covering edge conditions or boundary values should verify the ‘precision’ of those decisions. So a reasonable guideline could be: use automation to cover functional paths through the system, and data-drive those tests to expand the coverage of boundary conditions (see the sketch below). The automation does not necessarily have to execute tests that would be recognisable to the user, if the objective is to demonstrate functional equivalence.
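
As a sketch of that guideline – one functional path, data-driven across a boundary – here is what a parameterised check might look like in pytest. The discount() function and its threshold are invented stand-ins for the system under test.

    # Sketch only: data-drive one functional path across a boundary.
    import pytest

    def discount(order_value):
        # Invented stand-in for the system under test: 10% off at 100 or more.
        return 10 if order_value >= 100 else 0

    @pytest.mark.parametrize("order_value,expected", [
        (99.99, 0),    # just below the boundary
        (100.00, 10),  # on the boundary
        (100.01, 10),  # just above it
    ])
    def test_discount_boundary(order_value, expected):
        assert discount(order_value) == expected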

Objective 3 – to provide confidence to stakeholders – is slightly different. In this case, the purpose of a regression test is to demonstrate to end users that they can execute business transactions and use the system to support their business. In this respect, these tests could be automated, and some automated tests that fall under categories 1 and 2 above will be helpful. But experience of testing GUI applications in particular suggests that end users sometimes only trust their own eyes and need hands-on experience to gain the confidence that is required. Potentially, a set of automated tests might be used to drive a number of ‘end to end’ transactions, and reconciliation or control reports could be generated for inspection by end users. There is a large spectrum of possibilities, of course. In summary, automated tests can help, but in some environments the need for manual tests as a ‘confidence builder’ cannot be avoided.

At what level(s) should we automate regression tests?

In Part III of this article series, we identified three levels at which regression testing might be implemented – at the component, system and business (or integrated system) levels. These levels should be considered as complementary and the choice is where to place emphasis, rather than which to include or exclude. The choice of automation at these levels is not really the point. Rather, a level of regression testing may be chosen primarily to achieve an objective, partly on the value of information generated and partly because of the ease with which the tests can be automated.

What are the technical considerations for automation?

At the most fundamental, technical level, there are four aspects that must be considered: how the system under test is stimulated; how the test outcomes of interest (with respect to regression) will be detected and captured; how actual outcomes will be compared with expected outcomes; and how the architecture of the system influences the approach. Each is discussed in turn below.

Mechanisms for stimulating the system under test

This aspect reflects how a test is driven, by either a user or an automated tool. Nowadays, the number of user and technical interfaces in use is large – and growing. The most common are presented below, with some suggestions.

PC/Workstation-based applications and clients
  • Proprietary or open source GUI-object based drivers
  • Hardware (keyboard, video, mouse) based tools – physically connected to clients
  • Software based automation tools driving clients working across VNC connections
Browser/web-based applications
  • Proprietary object-based agents
  • Open source JavaScript-based agents
  • Open source script languages and GUI toolkits
Web-Server-based functionality (HTTP)
  • Proprietary or open source webserver/HTTP/S drivers
Web services
  • Proprietary or open source web services drivers
Mobile applications
  • Mobile OS simulators driven by integrated or separate GUI based toolkits
Embedded
  • Typically Java-based toolkits
Error, failure, spate, race conditions or other situations
  • May be simulated by instrumentation, load generation tools or manipulation of the test environment or infrastructure
Environments
  • Don’t forget that environmental conditions influence the behaviour of ALL systems under test.

There are an increasing number of proprietary and open source unit and acceptance testing frameworks available to manage and control the test execution engines above.
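
As one concrete (and hedged) example of the browser-level mechanisms above, the sketch below uses Selenium WebDriver from Python to stimulate a search form. The URL, field name and selector are placeholders; any real page will differ.

    # Sketch only: stimulate a web application through the browser.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("https://app.example.com/search")
        box = driver.find_element(By.NAME, "q")   # locate the input field
        box.send_keys("regression")               # enter data
        box.submit()                              # fire the request
        results = driver.find_elements(By.CSS_SELECTOR, ".result")
        assert results, "expected at least one search result"
    finally:
        driver.quit()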

Outcome/Output detection and capture

A regression can be detected in as many ways as any outcome (output, change of state etc.) of a system can be exposed and detected. Here’s a list of common outcome/output formats that we may have to deal with. This is not a definitive list.

Browser-rendered output
  • The state of any object in the Document Object Model (DOM), exposed by a GUI tool
Any screen-based output
  • Image recognition by hardware or software based agents
Transaction response times
  • Any automated tool with response time capture capability
Database changes
  • Appropriate SQL or database query tool
Message output and content
  • Raw packets captured by network sniffers
  • Message payloads captured and analysed by protocol-specific tools
Client or server system resources
  • CPU, i/o, memory, network traffic etc. detected by performance monitors
Application or other infrastructure – changes of state
  • (Database, enterprise messaging, object request brokers etc. etc.) – dedicated system/resource monitors or custom-built instrumentation etc.
Changes in accessibility or usability (adherence to standards etc.)
  • Web page HTML scanners, character-based screen or report scanners or screen image scanners
Security (server)
  • Port scanning and server-penetration tools
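
Two of these detection mechanisms sketched in Python: a transaction response time captured against a requirement, and database state inspected afterwards. The URL, database, table and 2-second threshold are assumptions for illustration only.

    # Sketch only: capture a response time, then inspect database state.
    import sqlite3
    import time
    import urllib.request

    start = time.perf_counter()
    urllib.request.urlopen("https://app.example.com/orders/new")  # the transaction
    elapsed = time.perf_counter() - start
    assert elapsed < 2.0, "response took %.2fs, over the 2s requirement" % elapsed

    conn = sqlite3.connect("app.db")  # an appropriate database query tool
    rows = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    assert rows == 1, "expected exactly one new order row"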

Comparison of Outcomes

A fundamental aspect of regression testing is comparison of actual outcomes (in whatever format from whatever source above) to expected outcomes. If we are running a test again, the comparison is between the new ‘actual’ output/outcome and previously captured ‘baseline’ output/outcome.

Simple comparison of numbers, text, system states, images, mark-up language, database content, reports, message payloads and system resources is not enough. We need our automation to have the capability to:

Filter content: we may not need to compare ‘everything’. Subsets of database records, screen/image regions, branches or leaves in marked up text, some objects and states but not others etc. may be filtered out (of both actual and baseline content).

Mask content: within the content that remains after filtering, we may wish to mask out certain patterns, such as image regions that do not contain field borders; textual report columns or rows that contain dates/times, page numbers or varying/unique record ids; screen fields or objects of certain colours or sizes, or that are hidden/visible; patterns of text that can be matched using regular expressions; and so on.

Calculate from content: the value, significance or meaning of content may have to be calculated: perhaps the number of rows displayed on a screen is significant; or the error message, number or status code displayed on a screen image must be extracted by text recognition; or a formula must be evaluated whose variables are extracted from an output report; and so on.

Identify content meeting/exceeding a threshold: the significance of output is determined by its proximity to thresholds such as: CPU, memory or network bandwidth usage compared to pre-defined limits; the value of a purchase order exceeds some limit; the response time of a transaction exceeds a requirement and so on.
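
A minimal sketch of the first two capabilities – filtering, then masking volatile patterns in both baseline and actual output before a straight comparison. The patterns here are examples only; real reports will need their own.

    # Sketch only: filter, then mask volatile patterns, in both baseline
    # and actual output, before comparing.
    import re

    MASKS = [
        (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
        (re.compile(r"\bID-\d+\b"), "<RECORD-ID>"),
        (re.compile(r"Page \d+ of \d+"), "<PAGE>"),
    ]

    def normalise(report, skip_prefix="#"):
        # Filter: drop lines we never want to compare.
        lines = [l for l in report.splitlines() if not l.startswith(skip_prefix)]
        text = "\n".join(lines)
        # Mask: replace volatile content with stable tokens.
        for pattern, token in MASKS:
            text = pattern.sub(token, text)
        return text

    def outcomes_match(baseline, actual):
        return normalise(baseline) == normalise(actual)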

System Architecture

The architecture of a system may have a significant influence over the choice of regression approach and automation in particular. An example will illustrate. An increasingly common software model is the MVC or model-view-controller architecture. Simplistically (from Wikipedia):

“The model is used to manage information and notify observers when that information changes; the view renders the model into a form suitable for interaction, typically a user interface element; the controller receives input and initiates a response by making calls on model objects. MVC is often seen in web applications where the view is the HTML or XHTML generated by the app. The controller receives GET or POST input and decides what to do with it, handing over to domain objects (i.e. the model) that contain the business rules and know how to carry out specific tasks such as processing a new subscription.”

A change to a ‘read-only’ view may be completely cosmetic and have no impact on models or controllers. Why regression test other views, models or controllers? Why automate testing at all – a manual inspection may suffice.

If a controller changes, the user interaction may be affected in terms of data captured and/or presented but the request/response dialogue may allow complete control of the transaction and examination of the outcome. In many situations, automated control of requests to and from controllers (e.g. HTTP GETs and POSTs) is easier to achieve than automating tests through the GUI (i.e. a rendered web page).
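
A sketch of that controller-level approach, using the Python requests library to POST the same form data a rendered page would submit and examine the response directly. The endpoint, fields and expected response text are invented.

    # Sketch only: drive the controller directly, below the GUI.
    import requests

    resp = requests.post(
        "https://app.example.com/subscriptions",
        data={"email": "test@example.com", "plan": "basic"},
    )
    assert resp.status_code == 200
    assert "subscription confirmed" in resp.text.lower()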

Note that cross-browser test automation, to verify the behaviour and appearance of a system’s web pages across different browser types, for example, cannot be handled this way. (Some functional automation may be possible, but some usability/accessibility tests will always be manual).

It is clear that the number and variety of ways a system can be stimulated, and in which potentially regressive outcomes can be observed, is huge. Few tools, if any, proprietary or open source, have all the capabilities we need. The message is clear – don’t ever assume the only way to automate regression testing is to use a GUI-based test execution tool!

Regression test automation – summary

In summary, we strongly advise you to bear in mind the following considerations:

  1. What is the outcome of your impact analysis?
  2. What are the objectives of your anti-regression effort?
  3. How could regressions manifest themselves?
  4. How could those regressions be detected?
  5. How can the system under test be stimulated to exercise the modes of operation of concern?
  6. Where in the development and test process is it feasible to implement the regression testing and automation?
  7. What technology, tools, harnesses, custom utilities, skills, resources and environments do you need to implement the automated regression test regime?
  8. What will be your criteria for automating (new or existing, manual) tests?

Test Automation References

  1. Brian Marick, 1997, Classic Testing Mistakes,
    http://www.exampler.com/testing-com/writings/classic/checklist.html
  2. James Bach, 1999, Test Automation Snake Oil,
    http://www.satisfice.com/articles/test_automation_snake_oil.pdf
  3. Cem Kaner, James Bach, Bret Pettichord, 2002, Lessons Learned in Software Testing, John Wiley and Sons
  4. Dorothy Graham, Paul Gerrard, 1999, The CAST Report, Fourth Edition
  5. Paul Gerrard, 1998, Selecting and Implementing a CAST Tool,
    http://gerrardconsulting.com/?q=node/532
  6. Brian Marick, 1998, When Should a Test be Automated?
    http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=ART&ObjectId=2010
  7. Paul Gerrard, 1997, Testing GUI Applications,
    http://gerrardconsulting.com/?q=node/514
  8. Paul Gerrard, 2006, Automation below the GUI (blog posting),
    http://gerrardconsulting.com/index.php?q=node/555
  9. Scott Ambler, 2002-10, Introduction to Test-Driven Design,
    http://www.agiledata.org/essays/tdd.html
  10. Naresh Jain, 2007, Acceptance-Test Driven Development,
    http://www.slideshare.net/nashjain/acceptance-test-driven-development-350264

In the final article of this series, we’ll consider how an anti-regression approach can be formulated, implemented and managed and take a step back to summarise and recap the main messages of these articles.

Paul Gerrard
21 June 2010.