All testing is exploratory. There are quite a few definitions of exploratory testing, but the easiest to work with is the one on Cem Kaner’s site.
“Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”
The usual assumption is that this style of testing applies to software that *exists* and where the knowledge of software behaviour is primarily to be gathered from exploring the software itself.
But I’d like to posit that if one takes the view:
- All the artefacts of a project are subjected to testing
- Testers test systems, not just software in isolation
- The learning process is a group activity that includes users, analysts, process and software designers, developers, implementers, operations staff, system administrators, testers, trainers, stakeholders and the senior user, most if not all of whom need to test, interpret and make decisions
- All of these people have their own objectives and learning challenges, and use exploration to overcome them
… then all of the activities from requirements elicitation onwards use testing and exploration.
Exploration wasn’t invented by the Romans, but the word *explorare* is Latin. It’s hard to see how the human race could have populated the entire planet without doing a little exploration. Plato’s dialogues, recording Socrates’ method of questioning, are documented explorations of ideas.
Exploration is in many ways like play, but requires a perspicacious thought process. Testing is primarily driven by the tester’s ability to create appropriate and useful test models. An individual may hold all the knowledge necessary to test in their head whilst collecting, absorbing and interpreting information from a variety of sources, including the system under test. Teams may operate in a similar way, but often need coordination and control, and are accountable to stakeholders who need planning, script and/or test log documentation. Whatever. At some point before, during and/or after the ‘test’, they take guidance from their stakeholders, feed back the information they gather and adjust their activity where necessary.
Testing requires an interaction between the tester, their sources of knowledge and the object(s) under test. The source of knowledge may be people, documents or the system under test. The source of knowledge provides insights as to what to test and provides our oracle of expectations. The “exploration” is mostly of the source of knowledge. The execution of tests and consideration of outcomes confirms our beliefs – or not.
The real action takes place in the head of the tester. Consider the point where a tester reflects on what they have just learned. They may replay events in their mind and the “what just happened?” question triggers one of those aha! moments. Something isn’t quite right. So they retrace their steps, reproduce the experience, look at variations in the path of thinking and posit challenges to what they test. They question and repeatedly ask, “what if?”
Of course, the scenario above could apply to testing some software, but it could just as easily apply to a review of requirements, a design or some documented test cases. The thought processes are the same. The actual outcome of a “what if?” question might be a test of some software. But it could also be a search in code for a variable, a step-through of a printed code listing, or a check of a decision table in a requirement or a state transition diagram. The outcome of the activity might be some knowledge, more questions to ask, or some written or remembered tests that will be used to question some software sooner or later.
This is an exploratory post, by the way :O)