Test Management is Dead, Long Live Test Management


Do you remember the ‘Testing is Dead’ meme that kicked off in 2011 or so? It was triggered by a presentation by Alberto Savoia here. It caused quite a stir, some copycat presentations and a lot of counter-arguments. But I always felt most people missed the point being made. You just had to strip out the dramatics and Doors music.

The real message was that for some organisations, the old ways wouldn’t work any more, and as time has passed, that prediction has come true. With the advent of Digital, mobile, IoT, analytics, machine learning and artificial intelligence, some organisations are changing the way they develop software, and as a consequence, testing changes too.

As testing shifts left, with testers working more collaboratively with the business and developers, test teams are being disbanded and/or distributed across teams. With no test team to manage, the role of the test manager is affected. Or eliminated.

Test management thrives; test managers come and go.

It is helpful to think of testing as less of a role and more of an activity that people undertake in their projects or organisations. Everyone tests, but some people specialise and make a career of it. In the same way, test management is an activity associated with testing. Whether you are the tester in a team or running all the testing in a 10,000 man-year programme, you have test management activities.

For better or for worse, many companies have decided that the role of test managers is no longer required. Responsibility for testing in a larger project or programme is distributed to smaller, Agile teams. There might be only one tester in the team. The developers in the team take more responsibility for testing and run their own unit tests. There’s no need for a test manager as such – there is no test team. But many of the activities of test management still need to be done. It might be as mundane as keeping good records of tests planned and/or executed. It could be taking the overall project view on test coverage (of developer v tester v user acceptance testing for example).

There might not be a dedicated test manager, but some critical test management activities need to be performed. Perhaps the team jointly fulfil the role of a virtual test manager!

Historically, the testing certification schemes have focused attention on the processes you need to follow—usually in structured or waterfall projects. There’s a lot of attention given to formality and documentation as a result (and the test management schemes follow the same pattern). The processes you follow, the test techniques you use, the content and structure of reporting vary wherever you work. I call these things logistics.

Logistics are important, but vary in every situation.

In my thinking about testing, as far as possible, I try to be context-neutral. (Except my stories, which are grounded in real experience).

As a consultant to projects and companies, I never knew what situation would underpin my next assignment. Every organisation, project, business domain, company culture, and technology stack is different. As a consequence, I avoided having fixed views on how things should be done, but over twenty-five years of strategy consulting, test management and testing, certain patterns and some guiding principles emerged. I have written about these before[1].

To the point.

Simon Knight at Gurock asked me to create a series of articles on Test Management, but with a difference. Essentially, the fourteen articles describe what I call “Logistics-Free Test Management”. To some people that’s an oxymoron. But that is only because, in many places, we have become accustomed to treating test management as logistics management. Logistics aren’t unique to testing.

Logistics are important, but they don’t define test management.

I believe we need to think about testing as a discipline where logistics choices are made in parallel with the testing thinking. Test Management follows the same pattern. Logistics are important, but they aren’t testing. Test management aims to support the choices, sources of knowledge, test thinking and decision making separately from the practicalities – the logistics – of documentation, test process, environments and technologies used.

I derived the idea of a New Model for Testing – a way of visualising the thought processes of testers – in 2014 or so. Since then, I have presented it to thousands of testers and developers and I get very few objections. Honestly!

However, some people do say, with commitment, “that’s not new!”. And indeed it isn’t.

If the New Model reflects how you think, then it should be a comfortable fit. It is definitely not new to you!

One of the first talks I gave on the New Model is here. (Skip to 43m 50s to bypass the long introduction and the talk on the value of testing.)

The New Model for Testing

Now, I might get a book out of the material (hard-copy and/or ebook formats), but more importantly, I’m looking to create an online and classroom course to share my thinking and guidance on test management.

Rather than offer you specific behaviours and templates to apply, I will try to describe the goals, motivations, thought processes, the sources of knowledge and the principles of application and use stories from my own experience to illustrate them. There will also be suggestions for further study and things to think about as exercises or homework.

You will need to adjust these lessons to your specific situation. It requires that you think for yourself – and that is no bad thing.

Here’s the deal in a nutshell: I’ll give you some interesting questions to ask. You need to get the answers from your own customers, suppliers and colleagues and decide what to do next.

I’ll be exploring these ideas in my session at the next Assurance Leadership Forum on 25 July. See the programme here and book a place.

In the meantime, if you want to know more, leave a comment or do get in touch at my usual email address.

 

[1] The Axioms of Testing in the Tester’s Pocketbook for example, https://testaxioms.com

 


Testing the Code that Checks Code


Twitter is a lousy medium for debate, don’t you think?

I had a very brief exchange with Michael Bolton below.  (Others have commented on this thread this afternoon). To my apparently contradictory (and possibly stupid) comment, Michael responded with a presumably perplexed “?”

This blog is a partial explanation of what I said, and why I said it. You might call it an exercise in pedantry. (Without pedantry, there is less joy in the world – discuss). There’s probably a longer debate to be had, but certainly not on Twitter. Copenhagen perhaps, Michael? My response was to 3) Lesson.

3) Lesson: don’t blindly trust … your automated checks, lest they fail to reveal important problems in production code.

I took the lesson tweet out of context and ignored the first two tweets deliberately, and I’ll comment on those below. For the third, I also ignored the “don’t blindly trust your test code” aspect too and here’s why. If you have test code that operates at all, and you have automated checks that operate, you presumably trust the test code already. You will have already done whatever testing of the test code you deemed appropriate. I was more concerned with the second aspect. Don’t blindly trust the checks.

But you know what? My goal with automation is exactly that – to blindly trust automated checks.

If you have an automated check that runs at all, then given the same operating environment, test data, software versions, configuration and so on, you would hardly expect the repeated check to reveal anything new (unless it detected a failure – doing its job). If it did ‘fail’, then it really ought to flag some kind of alarm. If you are not paying attention or are ignoring the alarm, then on your own head be it. But if I have to be paying attention all the time, effectively babysitting – then my automation is failing. It is failing to replace my manual labour (often the justification for automating in the first place).

A single check is most likely to be run as part of a larger collection of tests, perhaps thousands, so the notification process needs to be integrated with some form of automated interpretation or at least triggered when some pre-defined threshold is exceeded.
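The idea of triggering an alarm only when a pre-defined threshold is exceeded can be sketched very simply. This is a minimal illustration, not any particular framework; the names (`CheckResult`, `summarise`, `should_alarm`, the 5% threshold) are all hypothetical:

```python
# A minimal sketch of threshold-based alerting over a batch of check results.
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool

FAILURE_THRESHOLD = 0.05  # raise the alarm if more than 5% of checks fail

def summarise(results):
    """Return (failure_rate, names_of_failed_checks) for a batch of results."""
    failed = [r.name for r in results if not r.passed]
    return len(failed) / len(results), failed

def should_alarm(results, threshold=FAILURE_THRESHOLD):
    """Automated interpretation: no human watches; this decides if anyone is told."""
    rate, _ = summarise(results)
    return rate > threshold

results = [CheckResult("login", True), CheckResult("search", True),
           CheckResult("checkout", False), CheckResult("logout", True)]
rate, failed = summarise(results)
print(f"{rate:.0%} failed: {failed}")  # 25% failed: ['checkout']
print(should_alarm(results))           # True
```

In a real suite of thousands of checks, the same shape applies: the runner aggregates outcomes, and only a breach of the threshold interrupts a human.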

Why blindly? Well, we humans are cursed by our own shortcomings. We have short attention spans, are blind to things we see but aren’t paying attention to, and of course, we are limited in what we can observe and assimilate anyway. We use tools to replace humans not least because of our poor ability to pay attention.

So I want my automation to act as if I’m not there and to raise alarms in ways that do not require me to be watching at the time. I want my phone to buzz, or my email client to bong, or my chatOps terminal to beep at me. Better still, I want the automation to choose who to notify. I want to be the CC: or BCC: in the message, not necessarily the To: all the time.
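The “choose who to notify” idea amounts to a small routing policy: critical failures interrupt someone directly (To:), everything else lands in a digest (CC:). A hedged sketch, with entirely hypothetical addresses and field names:

```python
# Severity-based routing for check notifications: only failures interrupt a
# human; passing checks go to a digest nobody has to watch in real time.
def route(result):
    """Decide who is notified, and how prominently, for one check result."""
    if result["status"] == "failed" and result["critical"]:
        return {"to": ["oncall@example.com"], "cc": ["team@example.com"]}
    if result["status"] == "failed":
        return {"to": ["team@example.com"], "cc": []}
    # Passing checks: nobody is interrupted; a daily digest picks these up.
    return {"to": [], "cc": ["digest@example.com"]}

print(route({"status": "failed", "critical": True}))
print(route({"status": "passed", "critical": False}))
```

The point of the design is that the automation, not the tester, decides whether anyone needs to look.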

I deliberately took an interpretation of Michael’s comment that he probably didn’t intend. (That’s Twitter for you).

When my automated checks run, I don’t expect to have to evaluate whether the test code is doing ‘the right thing’ every time. But over time, things do change – the environment, configuration and software under test – so I need to pay attention to whether these changes impact my test code. Potentially, the check needs adjustment, re-ordering, enhancing, replacing or removing altogether. The time to do this is before the test code is run – during requirements discussions or in collaboration with developers.

I believe this was what Michael intended to highlight: your test code needs to evolve with the system under test and you must pay attention to that.

Now, my response to the tweet suggests rather than babysit your automated checks, you should spend your time more wisely – testing the system in ways your test code cannot (economically).

To the other tweets:

1) What would you discover if you submitted your check code to the same review, scrutiny, and testing as your production code?

2) If you’re not scrutinizing test code, why do you trust it any more than your production code? Especially when no problems are reported?

Test code can be trivial, but can sometimes be more complex than the system under test. It’s the old, old story: “who tests the test code, and how?”. I have worked on a few projects where test code was treated like any other code – high-integrity projects and the like. But even then I didn’t see much ‘test the test code’ activity. I’d say there are some common factors that make it less likely you would test your test code, and feel safe (enough) not doing so.

  1. Test code is built incrementally, usually, so that it is ‘tried’ in isolation. Your test code might simulate a web or mobile transaction, for example. If you can watch it move to fields, enter data and check the outcomes correctly, most testers would be satisfied it works as a simple check. What other test is required than re-running it, expecting the same outcome each time?
  2. Where the check is data-driven, of course, the code uses prepared data to fill, click or check parameterised fields, buttons and outcomes respectively. On a GUI app this can be visibly checked. Should you try invalid data (not included in your planned test data) and so on? Why bother? If the test code fails, then that is notification enough that you screwed up – fix it. If the test code flags false negatives when for example your environment changes, then you have a choice: tidy up your environment, or add code to accommodate acceptable environmental variations.
  3. Now, when your test code loses synchronisation or encounters a real mismatch of outcomes, your code needs handlers for these situations. These handlers might be custom-built for every check (an expensive solution) or utilise system-wide procedures to log, recover, re-start or hand-off depending on the nature of the tests or failures. This ought to be where your framework or scaffolding code comes in.
  4. Surely the test code needs testing more than just ‘using it’? The thing is, your test code is not handed over to users for them to enter extreme, dubious or poor quality data. All the data it will ever handle is in the test suite you use to test the system under test. Another tester might add new rows of test data to feed it, but problems arising are as likely to be due to other things than new test data. At any rate, what tests would you apply to your test code? Your test data, selected to exercise extremes in your system under test is probably quite well suited to testing the test code anyway.
  5. When problems do arise when your test code is run, it is more likely to be caused by environmental/data problems or software changes, so your test code will be adapted in parallel with these changes, or made more resilient to variations (bearing in mind the original purpose of the test code).
  6. Your scaffolding code or home-grown test frameworks handle this, don’t they? Pretty much the same arguments above apply. They are likely to be made more robust through use, evolution and adaptation than by a lot of planned tests.
  7. Who tests the tests? Who tests the tests of the tests? Who tests the tests of …
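Points 2 and 3 above can be sketched together: a data-driven check whose mismatches are handed to a shared, framework-level handler rather than custom per-check recovery code. Everything here is illustrative – `check_discount`, `run_check` and the handler are invented names, and the check itself is a tiny stand-in for a scripted UI or API check:

```python
# A data-driven check with a shared failure handler.
def check_discount(order_total, expected_discount):
    """The check itself: orders of 100 or more get a 10% discount."""
    actual = 0.1 if order_total >= 100 else 0.0
    assert actual == expected_discount, f"{actual} != {expected_discount}"

def run_check(check, row, on_failure):
    """Framework-style wrapper: run one data row; hand any mismatch to a
    shared handler (log, recover, notify) instead of per-check code."""
    try:
        check(*row)
        return "pass"
    except AssertionError as exc:
        on_failure(check.__name__, row, exc)
        return "fail"

failures = []
log = lambda name, row, exc: failures.append((name, row, str(exc)))

# Prepared data rows drive the check; the last row is deliberately wrong.
data = [(50, 0.0), (100, 0.1), (150, 0.1), (99, 0.1)]
outcomes = [run_check(check_discount, row, log) for row in data]
print(outcomes)  # ['pass', 'pass', 'pass', 'fail']
```

Note that the check code itself gets exercised by the very data rows it exists to run – which is the argument in point 4 for why little separate ‘test the test code’ activity happens in practice.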

Your scaffolding, framework, handlers, analysis, notification and messaging capabilities need a little more attention. In fact, your test tools might need to be certified in some way (like standard C, C++ or Java compilers, for example) to be acceptable to use at all.

I’m not suggesting that under all circumstances, your test code doesn’t need testing. But it seems to me, that in most situations, the code that actually performs a check might be tested by your test data well enough and that most exceptions arise as you develop and refine that code and can be fixed before you come to rely on it.

 


What is Digital?


Revolution

If you are not working on a “Digital” project, the hype that surrounds the whole concept of Digital and that is bombarding business and IT professionals appears off-putting to say the least. But it would be wrong to ignore it. The Digital Transformation programmes that many organisations are embarking on are affecting business across all industry and government sectors. There is no doubt that it also affects people in their daily lives.

That sounds like yet another hype-fuelled statement intended to grab attention. It is attention-grabbing, but it’s also true. The scope of Digital[1] is growing to encompass the entirety of IT-related disciplines and the business that depends on it: that is – all business.

It is becoming clear that the scope and scale of Digital will include all the traditional IT of the past, but when fully realised it will include the following too:

  • The IoT – every device of interest or value in the world will become connected; sensors of all types and purposes will be connected – by the billion – to the internet.
  • Autonomous vehicles – cars, planes, ships, drones, buses will become commonplace in the next ten years or so. Each will be a “place on the move”, fully connected and communicating with its environment.
  • Our home, workplace, public and private spaces will be connected. Our mobile, portable or wearable devices will interact with their environment and each other – without human intervention.
  • Robots will take over more and more physical tasks and make some careers obsolete and humans redundant. Robots will clean the city, fight our wars and care for the elderly.
  • Software in the form of ‘bots’ will be our guardian angel and a constant irritant – notifying us of the latest offers and opportunities as we traverse our Smart Cities[2].
  • The systems we use will be increasingly intelligent, but AI won’t be limited to corporates. Voice control may well be the preferred user-interface on many devices in the home and our car.
  • The operations or ‘Digital Storm’ of commerce, government, medicine, the law and warfare will be transformed in the next few years. The lives of mid-21st century citizens could be very different from ours.

Motivation

Still not convinced that Digital will change the world we live in? The suggested scale of change is overwhelming. Why is this happening? Is it hype or is it truly the way the world is going?

The changes that are taking place really are significant because it appears that this decade – the 2010s – is the point at which several technological and social milestones are being reached. This decade is witness to some tremendous human and technological achievements.

  1. One third of the world is connected; there are plans to connect the remaining two-thirds[3].
  2. The range of small devices that can be assembled into useful things has exploded. Their costs are plummeting.
  3. Local and low power networking technologies can connect these devices.
  4. Artificial Intelligence, which has promised so much for so many years, is finally delivering in the form of Machine Learning.
  5. Virtual and Augmented Reality-based systems are coming. Sony VR launched (13/10/2016) to over 1.8 million people and Samsung VR starts at under $100.
  6. Robotics, drone technology and 3D printing are now viable and workable whilst falling in cost.

Almost all businesses have committed to transform themselves using these technological advances – at speed – and they are calling it Digital Transformation.

Ambition

If you talk to people working in leading/bleeding-edge Digital projects, it is obvious that the ambition of these projects is unprecedented. The origin of these projects can be traced to some critical, long-standing assumptions being blown away. It’s easy to imagine some Digital expert convincing their client to do some blue-sky thinking for their latest and greatest project. “The rules of the game have changed,” they might advise:

  • There need be no human intervention in the interactions between your prospects and customers and your systems[4].
  • Your sales and marketing messages can be created, sent to customers, followed up and changed almost instantly.
  • You have the full range of data from the smallest locale to global in all media formats at your disposal.
  • Autonomous drones, trucks and cars can transport products, materials and people.
  • Physical products need not be ordered, held in stock and delivered at all – 3D printing might remove those constraints.
  • And so on.

Systems of Systems and Ecosystems

According to NASA the Space Shuttle[5] – with 2.5 million parts and 230 miles of wire – is (or was) the most complex machine ever built by man. With about a billion parts, a Nimitz class supercarrier[6] is somewhat more complex. Of course, it comprises many, many machines that together comprise the super-complex system of systems – the modern aircraft carrier.

A supercarrier has hundreds of thousands of interconnected systems and, with its crew of 5,000–6,000 people, could be compared to an average town afloat. Once at sea, the floating town is completely isolated except for its radio communications with base and other ships.

The supercarrier is comparable to what people are now calling Smart Cities. Wikipedia suggests this definition[7]:

“A smart city is an urban development vision to integrate multiple information and communication technology (ICT) and IoT  solutions in a secure fashion to manage a city’s assets – the city’s assets include, but are not limited to, local departments’ information systems, schools, libraries, transportation systems, hospitals, power plants, water supply networks, waste management, law enforcement, and other community services.”

The systems of a Smart City might not be as complex as those of an aircraft carrier, but in terms of scale, the number of nodes and endpoints within the system might be anything from a million to billions.

A smart city is not just bigger than an aircraft carrier – it also has the potential to be far more complex. The inhabitants and many of the systems move in the realm of the city and beyond. They move and interact with each other in unpredictable ways. On top of that, the inhabitants are not hand-picked like the military; crooks, spies and terrorists can usually come and go as they please.

Unlike a ship isolated at sea, the smart city is extremely vulnerable to attack from individuals and unfriendly governments, and is comparatively unprepared for attack.

But it’s even more complicated than that.

Nowadays, every individual carries their own mobile system – a phone at least – with them. Every car, bus and truck might be connected. Some will be driverless. Every trash can, streetlight, office building, power point, network access point is a Machine to Machine (M2M) component of a Digital Ecosystem which has been defined thus:

“A Digital Ecosystem is a distributed, adaptive, open socio-technical system with properties of self-organisation, scalability and sustainability inspired from natural ecosystems”[8].

Systems of Every Scale

The picture I’ve been painting has probably given you the impression that the Digital systems being now architected and built are all of terrifying scale. But my real point is this: The scale of Digital ranges from the trivial to the largest systems mankind has ever attempted to build.

The simplest system might be, for example, a home automation product – where you can control the heating, lighting, TV and other devices using a console, your mobile phone or office PC. The number of components or nodes might be ten to thirty. A medium complexity system might be a factory automation, monitoring and management system where the number of components could be several thousand. The number of nodes in a Smart City will run into the millions.

The range of systems we now deal with spans a few dozen to millions of nodes. In the past, a super-complex system might have hundreds of interconnected servers. Today, systems are connected using services or microservices – provided by servers. In the future, every node on a network – even a simple sensor – is a server of some kind, and there could be millions of them.

Systems with Social Impact

It might seem obvious to you now, but there is no avoiding the fact that Digital systems almost certainly have a social impact on a few, many or all citizens who encounter them. There are potentially huge consequences for us all as systems become more integrated with each other and with the fabric of society.

The scary notion of Big Brother[9] is set to become a reality – systems that monitor our every move, our buying, browsing and social activities – already exist. Deep or Machine Learning algorithms generate suggestions of what to buy, where to shop, who to meet, when to pay bills. They are designed to push notifications to us minute by minute.

Law enforcement will be a key user of CCTV, traffic, people and asset movement and our behaviours. Their goal might be to prevent crime by identifying suspicious behaviour and controlling the movement of law enforcement agents to places of high risk. But these systems have the potential to infringe our civil liberties too.

The legal frameworks of all nations embarking on Digital futures are some way behind the technology and the vision of a Digital Future that some governments are now forming.

In the democratic states, civil liberties and the rules of law are very closely monitored and protected. In non-democratic or rogue states, there may be no limit to what might be done.

Ecosystems of Ecosystems

The span of Digital covers commerce, agriculture, health, government, the media in its various forms and the military; it will affect the care, travel, logistics, and manufacturing industries. There isn’t much that Digital won’t affect in one way or another.

A systems view does not do it justice – it seems more appropriate to consider Digital systems as ecosystems within ecosystems.

This text is derived from the first chapter of Paul’s book, “Digital Assurance”. If you want a free copy of the book, you can request one here.

[1] From now on I’ll use the word Digital to represent Digital Transformation, Projects and the wide range of disciplines required in the ‘Digital World’.

[2] See for example, http://learn.hitachiconsulting.com/Engineering-the-New-Reality

[3] Internet.org is a Facebook-led organisation intending to bring the Internet to all humans on the planet.

[4] Referred to as ‘Autonomous Business Models’.

[5] http://spaceflight.nasa.gov/shuttle/upgrades/upgrades5.html

[6] http://science.howstuffworks.com/aircraft-carrier1.htm

[7] https://en.wikipedia.org/wiki/Smart_city

[8] https://en.wikipedia.org/wiki/Digital_ecosystem

[9] No, not the reality TV show. I mean the despotic leader of the totalitarian state, Oceania in George Orwell’s terrifying vision, “1984”.

 
