School’s Out!

Teacher: Paul, make a sentence starting with the letter I.

Paul: I is…

Teacher: No, no, no, don't say "I is", you say "I am".

Paul: OK, I am the ninth letter of the alphabet.

This blog is my response to James Bach's comments on his blog about my postings on testing axioms, "Does a set of irrefutable test axioms exist?" and "The 12 Axioms of Testing". There are a lot of comments – all interesting – but many need a separate response. So, read the following as if it were a conversation – it might make more sense.

PG: = Paul

Text in the standard font (not highlighted) = James.

Here we go… James writes…

Paul Gerrard believes there are irrefutable testing axioms.

PG: I'm not sure whether I do or I don't. My previous blog asks whether there could be such axioms. This is just an interesting thought experiment – interesting for me, anyway. 😉

This is not surprising, since all axioms are by definition irrefutable.

PG: Agreed – "irrefutable axioms" is tautological. I changed my blog title quickly – you probably got the first version, I didn't amend the other blog posting. Irrefutable is the main word in that title so I'll leave it as it is.

To call something an axiom is to say you will cover your ears and hum whenever someone calls that principle into question.

PG: It's an experiment, James. I'm listening and not humming.

An axiom is a fundamental assumption on which the rest of your reasoning will be based.

PG: Not all the time. If we encounter an 'exception' in daily life – and in our business we see exceptions all the damn time – we must challenge all such axioms. The axiom must explain the phenomena or be changed or abandoned. Over time, proposals gain credibility and evolve into axioms, or are abandoned.

They are not universal axioms for our field.

PG: (I assume you mean "there are no".) Now, that is the question I'm posing! I'm open to the possibility. I sense there's a good one.

Instead they are articles of Paul’s philosophy.

PG: Nope – I'm undecided. My philosophy, if I have one, is, "everything is up for grabs".

As such, I’m glad to see them. I wish more testing authors would put their cards on the table that way.

PG: Well thanks (thinks… damned with faint praise 😉 ).

I think what Paul means is not that his axioms are irrefutable, but that they are necessary and sufficient as a basis for understanding what he considers to be good testing.

PG: Hmm, I hadn't quite thought of it like that, but keep going. These aren't MY axioms any more than Newton's laws belonged to him – they were 'discovered'. It took me an hour to sketch them out – I've never used them in this format, but I do suspect they have been, in some implicit way, my guide. I hope they have been yours too. If not…

In other words, they define his school of software testing.

PG: WHAT! Pause while I get up off the floor haha. Deep breath, Paul. This is news to me, James!

They are the result of many choices Paul has made that he could have made differently. For instance, he could have treated testing as an activity rather than speaking of tests as artifacts. He went with the artifact option, which is why one of his axioms speaks of test sequencing. I don’t think in terms of test artifacts, primarily, so I don’t speak of sequencing tests, usually. Usually, I speak of chartering test sessions and focusing test attention.

PG: I didn't use the word artifact anywhere. I regard testing as an activity that produces Project Intelligence – information, knowledge, evidence, data – whatever you like – that has some value to the tester but more to the stakeholders of testing. We should think of our stakeholders before we commit to a test approach and not be dogmatic. (The stakeholder axiom). How can you not agree with that one? The sequencing axiom suggests you put most valuable/interesting/useful tests up front as you might not have time to do every test – you might be stopped at any time in fact. Test Charters and Sessions are right in line with at least half of the axioms. I do read stuff occasionally 🙂 Next question please!

PG: No, these aren't the result. They are thoughts – instincts, even – that I've had for many years and have tried to articulate. I'm posing a question: do all testers share some testing instincts? I won't be convinced that my proposed axioms are anywhere close until they've been tested and perfected through experience. I took some care to consider the 'school'.

Sometimes people complain that declaring a school of testing fragments the craft. But I think the craft is already fragmented, and we should explore and understand the various philosophies that are out there. Paul’s proposed axioms seem a pretty fair representation of what I sometimes call the Chapel Hill School, since the Chapel Hill Symposium in 1972 was the organizing moment for many of those ideas, perhaps all of them. The book Program Test Methods, by Bill Hetzel, was the first book dedicated to testing. It came out of that symposium.

PG: Hmm. This worries me a lot. I am not a 'school', thank you very much. Too many schools push dogma, demand obedience to school rules and mark people for life. They put up barriers to entry and exit and require members to sing the same school song. No thanks. I'm not a school.

It reminds me of Groucho Marx. "I wouldn't want to join any club that would have me as a member."

The Chapel Hill School is usually called “traditional testing”, but it’s important to understand that this tradition was not well established before 1972. Jerry Weinberg’s writings on testing, in his authoritative 1961 textbook on programming, presented a more flexible view. I think the Chapel Hill school has not achieved its vision; it was largely out of dissatisfaction with it that the Context-Driven school was created.

PG: In my questioning post, I used 'old school' and 'new school' just to label one obvious choice – pre-meditated v contemporaneous design and execution to illustrate that axioms should support or allow both – as both are appropriate in different contexts. I could have used school v no-school or structured v ad-hoc or … well anything you like. This is a distraction.

But I am confused. You call the CH symposium a school and label that "traditional". What did the symposium of 1972 call themselves? Traditional? A school? I'm sure they didn't wake up the day after thinking "we are a school" and "we are traditional". How do those labels help the discussion? In this context, I can't figure out whether 'school' is a good thing or bad. I only know one group who call themselves a school. I think 'brand' is a better label.

One of his axioms is “5. The Coverage Axiom: You must have a mechanism to define a target for the quantity of testing, measure progress towards that goal and assess the thoroughness in a quantifiable way.” This is not an axiom for me. I rarely quantify coverage. I think quantification that is not grounded in measurement theory is no better than using numerology or star signs to run your projects. I generally use narrative and qualitative assessment, instead.

PG: Good point. The words quantity and quantifiable imply numeric measurement – that wasn't my intention. Do you have a form of words I should use that would encompass quantitative and qualitative assessment? I could suggest: "You must have a means of evaluating narratively, qualitatively or quantitatively the testing you plan to do or have done". When someone asks how much testing we plan to do, have done or have left to do, I think we should be able to provide answers. "I don't know" is not a good answer – if you want to stay hired.

For you context-driven hounds out there

PG: Sir, Yes Sir! 😉

practice your art by picking one of his axioms and showing how it is possible to have good testing, in some context, while rejecting that principle. Post your analysis as a comment to this blog, if you want.

PG: Yes please!

In any social activity (as opposed to a mathematical or physical system), any attempt to say “this is what it must be” boils down to a question of values or definitions. The Context-Driven community declared our values with our seven principles. But we don’t call our principles irrefutable. We simply say here is one school of thought, and we like it better than any other, for the moment.

PG: I don't think I'm saying "this is what it must be" at all. What is "it", what is "must be"? I'm asking testers to consider the proposal and ask whether it has some value as a guide to choosing their actions. I'm not particularly religious, but I think "murder is wrong". The fact that I don't use the ten commandments from day to day does not mean that I don't see value in them as a set of guiding principles for Christians. Every religion has its own set of principles, but I don't think many would argue murder is acceptable. So even religions are able to find some common ground. In this analogy, school = religion. Why can't we find common ground between schools of thought?

I'm extremely happy to amend, remove or add to the axioms as folk comment. Either all my suggestions will be completely shot down or some might be left standing. I'm up for trying. I firmly believe that there are some things all testers could agree on no matter how abstract. Are they axioms? Are they motherhood and apple pie? Let's find out. These abstractions could have some value other than just as debating points. But let's have that debate.

By the way – my only condition in all this is you use the blog the proposed axioms appear on. If you want to defend the proposed axioms – be my guest.

Thanks for giving this some thought – I appreciate it.


With most humble apologies to Rudyard Kipling…

If you can test when tests are always failing,
When failing tests are sometimes blamed on you,
If you can trace a path when dev teams doubt that,
Defend your test and reason with them too.

If you can test and not be tired by testing,
Be reviewed, and give reviewers what they seek,
Or, being challenged, take their comments kindly,
And don’t resist, just make the changes that you need.

If you find bugs then make them not your master;
If you use tools – but not make tools your aim;
If you can meet with marketing and salesmen
And treat those two stakeholders just the same.

If you can give bad news with sincerity
And not be swayed by words that sound insane
Or watch the tests you gave your life to de-scoped,
And run all test scripts from step one again.

If you can analyse all the known defects
Tell it how it is, be fair and not be crass;
And fail, fail, fail, fail, fail repeatedly;
And never give up hope for that one pass.

If you explore and use imagination
To run those tests on features never done,
And keep going when little code is working
And try that last test boundary plus one.

If you can talk with managers and users,
And work with dev and analysts with ease,
If you can make them see your test objective,
Then you’ll see their risks and priorities.

If you can get your automation working
With twelve hours manual testing done in one –
Your final test report will make some reading
And without doubt – a Tester you’ll become!

By the way, to read If… by Rudyard Kipling you can see it here:

Certification – a personal history

As a matter of record, I wanted to post a note on my involvement with the testing certification scheme best known in the UK (and many other countries) as the ISEB Testing Certificate Scheme. I want to post some other messages commenting on the ISEB, ISTQB and perhaps other schemes too, so a bit of background might be useful.

In 1997, a small group of people in the UK started to discuss the possibility of establishing a testing certification scheme. At that time, Dorothy Graham and I were probably the most prominent. There was some interest in the US too, I recall, and I briefly set up a page on the Evolutif website promoting the idea of a joint European/US scheme and asking for expressions of interest in starting a group to formulate a structure, a syllabus, an examination and so on. Not very much came of that, but Dot and I in particular drafted an outline syllabus, which was just a list of topics, about a page long.

The Europe/US collaboration didn’t seem to be going anywhere, so we decided to start in the UK only to begin with. At the same time, we had been talking to people at ISEB who seemed interested in administering the certification scheme itself. At that time ISEB was a certifying organisation with charitable status, independent of the British Computer Society (BCS). That year, ISEB decided to merge into the BCS. ISEB still had its own identity and brand, but was a subsidiary of BCS from then on.

ISEB, having experience of running several schemes for several years (whereas we had no experience at all), suggested we form a certification ‘board’ with a chair, terms of reference and constitution. The first meeting of the new board took place on 14th January 1998. I became the first Chair of the board. I still have the Terms of Reference for the board, dated 17 May 1998. Here are the objectives of the scheme and the board, extracted from that document:

Objectives of the Qualification
• To gain industry recognition for testing as an essential and professional software engineering specialisation.
• To provide, through the BCS Professional Development Scheme and the Industry Structure Model, a standard framework for the development of testers’ careers.
• To enable professionally qualified testers to be recognised by employers, customers and peers, and raise the profile of testers.
• To promote consistent and good testing practice within all software engineering disciplines.
• To identify testing topics that are relevant and of value to industry.
• To enable software suppliers to hire certified testers and thereby gain commercial advantage over their competitors by advertising their tester recruitment policy.
• To provide an opportunity for testers or those with an interest in testing to acquire an industry recognised qualification in the subject.

Objectives of the Certification Board
The Certification Board aims to deliver a syllabus and administrative infrastructure for a qualification in software testing which is useful and commercially viable.
• To be useful it must be sufficiently relevant, practical, thorough and quality-oriented so it will be recognised by IT employers (whether in-house developers or commercial software suppliers) to differentiate amongst prospective and current staff; it will then be viewed as an essential qualification to attain by those staff.
• To be commercially viable it must be brought to the attention of all of its potential customers and must seem to them to represent good value for money at a price that meets ISEB’s financial objectives.

The Syllabus evolved and was agreed by the summer. The first course and examination took place on 20-22 October 1998, and the successful candidates were formally awarded their certificates at the December 1998 SIGIST meeting in London. In the same month, I resigned as Chair but remained on the board. I subsequently submitted my own training materials for accreditation.

Since the scheme started, over 36,000 Foundation examinations have been taken with a pass rate of about 90%. Since 2002 more than 2,500 Practitioner exams have been taken, with a relatively modest pass rate of approximately 60%.

The International Software Testing Qualifications Board (ISTQB) was established in 2002. This group aims to establish a truly international scheme and now has regional boards in 33 countries. ISEB have used the ISTQB Foundation syllabus since 2004, but continue to use their own Practitioner syllabus. ISTQB are developing a new Practitioner-level syllabus to be launched soon, but ISEB have already publicised their intention to launch their own Practitioner syllabus too. It’s not clear yet what the current ISEB-accredited training providers will do with TWO schemes. It isn’t obvious what the market will think of two schemes either.

Interesting times lie ahead.