Test Management is Dead, Long Live Test Management

Do you remember the ‘Testing is Dead’ meme that kicked off in 2011 or so? It was triggered by a presentation by Alberto Savoia here. It caused quite a stir, some copycat presentations and a lot of counter-arguments. But I always felt most people missed the point being made. You just had to strip out the dramatics and the Doors music.

The real message was that for some organisations, the old ways wouldn’t work any more, and as time has passed, that prediction has come true. With the advent of Digital, mobile, IoT, analytics, machine learning and artificial intelligence, some organisations are changing the way they develop software, and as a consequence, testing changes too.

As testing shifts left, with testers working more collaboratively with the business and developers, test teams are being disbanded and/or distributed across teams. With no test team to manage, the role of the test manager is affected. Or eliminated.

Test management thrives; test managers come and go.

It is helpful to think of testing as less of a role and more of an activity that people undertake in their projects or organisations. Everyone tests, but some people specialise and make a career of it. In the same way, test management is an activity associated with testing. Whether you are the tester in a team or running all the testing in a 10,000 man-year programme, you have test management activities.

For better or for worse, many companies have decided that the role of test managers is no longer required. Responsibility for testing in a larger project or programme is distributed to smaller, Agile teams. There might be only one tester in the team. The developers in the team take more responsibility for testing and run their own unit tests. There’s no need for a test manager as such – there is no test team. But many of the activities of test management still need to be done. It might be as mundane as keeping good records of tests planned and/or executed. It could be taking the overall project view of test coverage (of developer versus tester versus user acceptance testing, for example).

There might not be a dedicated test manager, but some critical test management activities need to be performed. Perhaps the team jointly fulfil the role of a virtual test manager!

Historically, the testing certification schemes have focused attention on the processes you need to follow – usually in structured or waterfall projects. There’s a lot of attention given to formality and documentation as a result (and the test management schemes follow the same pattern). The processes you follow, the test techniques you use, and the content and structure of reporting vary wherever you work. I call these things logistics.

Logistics are important, but vary in every situation.

In my thinking about testing, as far as possible, I try to be context-neutral. (Except my stories, which are grounded in real experience).

As a consultant to projects and companies, I never knew what situation would underpin my next assignment. Every organisation, project, business domain, company culture, and technology stack is different. As a consequence, I avoided having fixed views on how things should be done, but over twenty-five years of strategy consulting, test management and testing, certain patterns and some guiding principles emerged. I have written about these before[1].

To the point.

Simon Knight at Gurock asked me to create a series of articles on Test Management, but with a difference. Essentially, the fourteen articles describe what I call “Logistics-Free Test Management”. To some people that’s an oxymoron. But that is only because, in many places, we have become accustomed to treating test management as logistics management. Logistics aren’t unique to testing.

Logistics are important, but they don’t define test management.

I believe we need to think about testing as a discipline where logistics choices are made in parallel with the testing thinking. Test Management follows the same pattern. Logistics are important, but they aren’t testing. Test management aims to support the choices, sources of knowledge, test thinking and decision making separately from the practicalities – the logistics – of documentation, test process, environments and technologies used.

I derived the idea of a New Model for Testing – a way of visualising the thought processes of testers – in 2014 or so. Since then, I have presented it to thousands of testers and developers and I get very few objections. Honestly!

However, some people do say, with commitment, “that’s not new!”. And indeed it isn’t.

If the New Model reflects how you think, then it should be a comfortable fit. It is definitely not new to you!

One of the first talks I gave on the New Model is here. (Jump to 43m 50s to skip the value-of-testing talk and the long introduction.)

The New Model for Testing

Now, I might get a book out of the material (hard-copy and/or ebook formats), but more importantly, I’m looking to create an online and classroom course to share my thinking and guidance on test management.

Rather than offer you specific behaviours and templates to apply, I will try to describe the goals, motivations, thought processes, sources of knowledge and principles of application, and use stories from my own experience to illustrate them. There will also be suggestions for further study and things to think about as exercises or homework.

You will need to adjust these lessons to your specific situation. It requires that you think for yourself – and that is no bad thing.

Here’s the deal in a nutshell: I’ll give you some interesting questions to ask. You need to get the answers from your own customers, suppliers and colleagues and decide what to do next.

I’ll be exploring these ideas in my session at the next Assurance Leadership Forum on 25 July. See the programme here and book a place.

In the meantime, if you want to know more, leave a comment or do get in touch at my usual email address.


[1] The Axioms of Testing in the Tester’s Pocketbook for example, https://testaxioms.com


New Model Testing: A New Test Process and Tool

Last month, I presented a webinar for the EuroSTAR conference. “New Model Testing: A New Test Process and Tool” can be seen below. To see it, you have to enter some details – this is not under my control, but down to the EuroSTAR conference folk. You can also see the slides below.

The documentation prepared in structured/waterfall or agile projects is often of dubious value. In structured projects, the planning documentation is prepared in a knowledge vacuum – the requirements are not stable, and the system under test is not available. In agile projects – where time is short and other priorities exist – not much may get written down anyway. I believe the only test documentation that can be captured reliably and trusted is that captured contemporaneously with exploration and testing.

The only way to do this would be to use a pair tester or a bot to capture the thoughts of a tester as they express them. I’ve been working on a prototype robot that can capture the findings of the tester as they explore and test. The bot can be driven by a paired tester, but it also has a speech recognition front-end so it can be used as a virtual pair.

From using the bot, it is clear that a new exploration and planning metaphor is required – I suggest Surveying – and we also need a new test process.

In the webinar, I describe my experiences of building and using the bot for paired testing and also propose a new test process suitable for both high integrity and agile environments. The bot – codenamed Cervaya™ – builds a model of the system as you explore and captures test ideas, risks and questions and generates structured test documentation as a by-product.
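To make the capture-and-report idea more concrete, here is a minimal sketch of the pattern: tag each utterance as an idea, risk, question or observation as it is spoken or typed, and generate structured documentation as a by-product of the session. This is not the Cervaya implementation – the note kinds, the keyword convention and all of the names below (Note, SessionRecorder and so on) are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Illustrative note kinds - the real bot's vocabulary is not described here.
KINDS = ("idea", "risk", "question", "observation")

@dataclass
class Note:
    kind: str        # one of KINDS
    text: str        # what the tester said or typed
    when: datetime = field(default_factory=datetime.now)

class SessionRecorder:
    """Capture notes contemporaneously and emit structured documentation."""

    def __init__(self, charter: str):
        self.charter = charter
        self.notes: List[Note] = []

    def capture(self, utterance: str) -> Note:
        # If the utterance starts with a known keyword ("risk: ..."), use it;
        # otherwise record it as a plain observation.
        kind, _, rest = utterance.partition(":")
        kind = kind.strip().lower()
        if kind in KINDS and rest:
            note = Note(kind, rest.strip())
        else:
            note = Note("observation", utterance.strip())
        self.notes.append(note)
        return note

    def report(self) -> str:
        # Generate a simple Markdown summary grouped by kind - the "by-product".
        lines = [f"# Session: {self.charter}", ""]
        for kind in KINDS:
            matching = [n for n in self.notes if n.kind == kind]
            if not matching:
                continue
            lines.append(f"## {kind.title()}s")
            lines += [f"- {n.when:%H:%M} {n.text}" for n in matching]
            lines.append("")
        return "\n".join(lines)

if __name__ == "__main__":
    session = SessionRecorder("Explore the checkout flow")
    session.capture("question: what happens if the basket is emptied mid-payment?")
    session.capture("risk: currency rounding differs between basket and invoice")
    session.capture("idea: try a basket with 1,000 line items")
    session.capture("the totals page loads noticeably slowly")
    print(session.report())
```

Running the example prints a structured session report grouped by note kind – the point being that the documentation falls out of the exploration rather than being written up before or after it.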

If you are interested in collaborating – perhaps as a Beta Tester – I’d be delighted to hear from you.

The slides can be seen here:

The New Model and Testing v Checking

It was inevitable that people would compare my formulation of a New Model for Testing with the James Bach & Michael Bolton distinction between ‘testing’ and ‘checking’. I’ve rather avoided giving an opinion online, although I have had face-to-face conversations with people who both like and dislike the idea, and I have expressed some opinions privately. The topic came up again last week at StarWest, so I thought I’d put the record straight.

I should say that I agree with Cem Kaner’s rebuttal of the testing v checking proposal. It is clear to me that although the definition of checking is understandable, the definition of testing is less so, as evidenced by the volume of comments on the authors’ own blogs. In my discussions with people who support the idea, it was easy to agree on checking as a definition, but testing, as defined, seemed much harder for them to defend in the face of experience.

Part of the argument for highlighting what checking is, is to posit that we cannot rely on checking alone, particularly checking with tools. Certainly, brainless use of tools to check – a not infrequent occurrence – is to be decried. But then again, brainless use of anything… It is just plain wrong, though, to claim that we cannot rely on automated tests. Some products cannot be tested in any other way. Whether you like it or not – that’s just the way it is.

One reason I’ve been reticent on the subject is I honestly thought people would see through this distinction quickly, and it would be withdrawn or at least refined into something more useful.

Some, probably many, have adopted the checking definition. But few have adopted the testing definition, such as it is, and debated it with any conviction. It looks like I have to break cover.

It would be easy to compare exploration in the New Model with the B & B view of testing and my testing with their view of checking. There are some parallels but comparing them only serves to highlight the differences in perspectives of the authors. We don’t think the same, and that’s for sure.

From my perspective, perhaps the most prominent argument against the testing v checking split is the notion that somehow testing (if it is simply their label for what I call exploration) and checking are alternatives. The sidelining of checking as something less valuable, intellectual or effective doesn’t match experience. The New Model reflects this in that the tester explores sources of information to create models that inform testing. Whether these tests are in fact checks is important, but the choice of scripting as a means of recording a test for use in execution (by tools or people) is one of logistics – it is, dare I say, context-specific.

The exploration comes before the test. If you do not understand what the system should (or should not) do, you cannot formulate a meaningful test. You can enquire what a system might do, but who is to say whether that behaviour is correct or otherwise, without some input from a source of knowledge other than the system itself. The SUT cannot be its own test oracle. The challenges you apply during exploration of the SUT are not tests – they are trials of your understanding (your model) of the actual behaviour of the SUT.

Now, in most situations, it is extremely hard to trust that a requirement, however stated, is complete, correct, unambiguous – perfect. In this way, one might never comfortably decide the time is right for testing or checking (as I have scoped it). The New Model implies one should persevere to improve the requirements/sources and align them with your mental model before committing to (coding or) testing. Of course, one has to make that transition sometime, and that’s where judgement comes in. Who can say what that judgement should be, except that it is a personal, consensual or company-cultural decision to proceed.

Exploration is a dynamic activity, whereby you do not usually have a fully formed view of what the system should do, so you have to think, model and try things based on the model as it stands. Your current model enables you to make predictions on behaviour and to try these predictions on the SUT or stakeholders or against any other source of knowledge that is germane to the challenge at hand.

Now, I fully appreciate our sources of knowledge are fallible. This is part and parcel of the software development process and why there are feedback loops in (my version of) exploration. But I argue that the exploration part of the test process (enquiring, modelling, predicting and challenging) is the same for developers as it is for testers.

The critical step in transitioning from exploration to testing – or, in the case of a developer, to writing code based on their understanding (synonymous with the ‘model’ in their head) – is the point at which the developer or tester believes they understand the need and trusts their model. Until that moment, they remain in the exploration state, are uncertain to some degree and are not yet confident (if that is the right term) that they could decide whether a system behaviour is correct, incorrect or just ‘interesting’.

If a developer or tester proceeds to coding or testing before they trust their model, then it’s likely the wrong thing will be built or tested or it will be tested badly.

Now just to take it further, a tester would not raise a defect report while they are uncertain of the required behaviour of the SUT. Only when they are confident enough to test would it be reasonable to do so. If you are not in a position to say ‘this works correctly according to my understanding of the requirement (however specified)’ you are not testing, you are exploring your sources of information or the SUT.

In the discussion above, the New Model appears to align with this rather uncertain process called exploration.

Now, let’s return to the subject of testing versus checking. Versus is the wrong word, I am sure. Testing and checking are not opposed, nor are they alternatives. Some tests can be scripted in some way, for use by people or automated tools. To reach the point in your understanding where you can flip from exploration to testing, you have to have done the groundwork. In many ways, it takes more effort, thinking and modelling to reach the understanding required to construct a script or procedure to execute a check than to simply learn what a system does through exploration, valuable though that process is.

As an aside, it’s very hard to delineate where scripted and unscripted testing begin and end anyway. If I say, ‘test X like so, and keep your eyes open’, is that a script or an exploratory charter?

In no way can checking be regarded as less sophisticated, less useful, less effective or requiring less effort than testing without a script. The comparison is spurious because, for example, some systems can be tested in no other way than with scripted tooling. In describing software products, Philip Armour (in his book ‘The Laws of Software Process’) says that software is not a product; rather, ‘the product is the knowledge contained in the software’. Software is not a product, it is a medium.

The only way a human can test this knowledge product is through the layers of hardware (and software) that must be utilised in very specific ways. In almost every respect, testing is dependent on some automated support. So, as Cem says, at some level, ‘all tests are automated… all tests are manual’.

Since the vast majority of software has no user interface, it can only ever be checked using tools. (Is testing in the B & B world only appropriate to a user interface then?) On the other hand, some user interfaces cannot be automated in any viable way (often because of the automated technology between human and the knowledge product). That’s life, not a philosophical distinction.

The case can be made that people following scripts by rote might be blinkered and miss certain aspects of incorrect behaviour. This is certainly the case, especially if people are asked to follow scripts blindly. But in thirty years of testing, I have never known a tester to be asked to be so blinkered. In fact, the opposite is often true: testers are routinely briefed to look out for anything anomalous, precisely to address the risk of oversight. Of course, humans make mistakes and oversight is inevitable. However, it could also be said that working to a script makes the tester more eagle-eyed – the formality of scripted testing, possibly witnessed (akin to pairing, in fact), is a serious business.

On the other hand, people who have been asked to test without scripts might be unfocused, almost lazy, unthinking and sloppy. They are hard to hold to account, and a charter that merely lists features or scenarios to cover in some way, produced without thinking or modelling, is unsafe.

What is a charter anyway? It’s a high level script. It might not specify test steps, but more significantly it usually defines scope. An oversight in a scripted test might let a bug through. An oversight in a charter might let a whole feature go untested.

Enough. The point is this. It is meaningless and perverse to compare undisciplined, unskilled testing – scripted or unscripted – with its skilled, disciplined counterpart. We should be paying attention to the skills of the testers we employ to do the job. A good tester, investing effort on the left-hand side of the New Model, will succeed whether they script or not. For that reason alone, we should treat the scripted/unscripted dichotomy as a matter of logistics and ignore it when assessing testers’ skills.

We should be thankful that, depending on your project, some or all testing can be scripted/automated, and leave it at that.