Certification – a personal history

As a matter of record, I wanted to post a note on my involvement with the testing certification scheme best known in the UK (and many other countries) as the ISEB Testing Certificate Scheme. I want to post some other messages commenting on the ISEB, ISTQB and perhaps other schemes too, so a bit of background might be useful.

In 1997, a small group of people in the UK started to discuss the possibility of establishing a testing certification scheme. At that time, Dorothy Graham and I were probably the most prominent. There was some interest in the US too, I recall, and I briefly set up a page on the Evolutif website promoting the idea of a joint European/US scheme, and asking for expressions of interest in starting a group to formulate a structure, a syllabus, an examination and so on. Not very much came of that, but Dot and I, in particular, drafted an outline syllabus which was just a list of topics, about a page long.

The Europe/US collaboration didn’t seem to be going anywhere, so we decided to start in the UK only to begin with. At the same time, we had been talking to people at ISEB who seemed interested in administering the certification scheme itself. At that time ISEB was a certifying organisation having charitable status, independent of the British Computer Society (BCS). That year, ISEB decided to merge into the BCS. ISEB still had its own identity and brand, but was a subsidiary of BCS from then on.

ISEB, having experience of running several schemes for several years (whereas we had no experience at all), suggested we form a certification ‘board’ with a chair, terms of reference and constitution. The first meeting of the new board took place on 14th January 1998. I became the first Chair of the board. I still have the Terms of Reference for the board, dated 17 May 1998. Here are the objectives of the scheme and the board, extracted from that document:

Objectives of the Qualification
• To gain industry recognition for testing as an essential and professional software engineering specialisation.
• To provide a standard framework for the development of testers’ careers through the BCS Professional Development Scheme and the Industry Structure Model.
• To enable professionally qualified testers to be recognised by employers, customers and peers, and raise the profile of testers.
• To promote consistent and good testing practice within all software engineering disciplines.
• To identify testing topics that are relevant and of value to industry.
• To enable software suppliers to hire certified testers and thereby gain commercial advantage over their competitors by advertising their tester recruitment policy.
• To provide an opportunity for testers or those with an interest in testing to acquire an industry recognised qualification in the subject.

Objectives of the Certification Board
The Certification Board aims to deliver a syllabus and administrative infrastructure for a qualification in software testing which is useful and commercially viable.
• To be useful, it must be sufficiently relevant, practical, thorough and quality-oriented that IT employers (whether in-house developers or commercial software suppliers) will recognise it as a way to differentiate amongst prospective and current staff; those staff will then view it as an essential qualification to attain.
• To be commercially viable it must be brought to the attention of all of its potential customers and must seem to them to represent good value for money at a price that meets ISEB’s financial objectives.

The Syllabus evolved and was agreed by the summer. The first course and examination took place on 20-22 October 1998, and the successful candidates were formally awarded their certificates at the December 1998 SIGIST meeting in London. In the same month, I resigned as Chair but remained on the board. I subsequently submitted my own training materials for accreditation.

Since the scheme started, over 36,000 Foundation examinations have been taken with a pass rate of about 90%. Since 2002 more than 2,500 Practitioner exams have been taken, with a relatively modest pass rate of approximately 60%.

The International Software Testing Qualification Board (ISTQB) was established in 2002. This group aims to establish a truly international scheme and now has regional boards in 33 countries. ISEB have used the ISTQB Foundation syllabus since 2004, but continue to use their own Practitioner syllabus. ISTQB are developing a new Practitioner level syllabus to be launched soon, but ISEB have already publicised their intention to launch their own Practitioner syllabus too. It’s not clear yet what the current ISEB accredited training providers will do with TWO schemes. It isn’t obvious what the market will think of two schemes either.

Interesting times lie ahead.

27 Test Environments not enough?

I was in Nieuwegein, Holland last week giving my ERP Lessons Learned talk as part of the EuroSTAR – Testnet mini-event. After the presentation, I got talking to people and the conversation came around to test environments, and how many you need.

One of the big issues in ERP implementations is the need for multiple, expensive test environments. Some projects have environments running into double figures (and I’m not talking about desktop environments for developers, here). Well, my good friend said, his project currently has 27 environments, and that still isn’t enough for what they want to do. The 27 didn’t include the test environments required by their interfacing systems. It’s a massive project, needless to say, but TWENTY SEVEN? The mind boggles.

Is this a record? Can you beat that? I’d be delighted to hear from you if you can!

Why are our estimates always too low?

At last week’s Test Management Forum, Susan Windsor introduced a lively session on estimation – from the top down. All good stuff. But during the discussion, I was reminded of a funny story (well I thought it was funny at the time).

Maybe twenty years ago (my memory isn’t as good as it used to be), I was working at a telecoms company as a development team leader. Around 7pm one evening, I was sat opposite my old friend Hugh. The office was quiet, we were the only people still there. He was tidying up some documentation, I was trying to get some stubborn bug fixed (I’m guessing here). Anyway. Along came the IT director. He was going home and he paused at our desks to say hello, how’s it going etc.

Hugh gave him a brief review of progress and said in closing, “we go live a week on Friday – two weeks early”. Our IT director was pleased but then highly perplexed. His response was, “this project is seriously ahead of schedule”. Off he went scratching his head. As the lift doors closed, Hugh and I burst out laughing. This situation had never arisen before. What a problem to dump on him! How would he deal with this challenge? What could he possibly tell the business? It could be the end of his career! Delivering early? Unheard of!

It’s a true story, honestly. But it also reminded me that if estimation is an approximate process, our estimation errors in the long run (over or under), expressed as a percentage, should balance statistically around a mean value of zero, and that mean would represent the average actual time or cost it took for our projects to deliver.

Statistically, if we are dealing with a project that is delayed (or advanced!) by unpredictable, unplanned events, we should be overestimating as much as we underestimate, shouldn’t we? But clearly this isn’t the case. Overestimation, and delivering early, is a situation so rare it’s almost unheard of. Why is this? Here’s a stab at a few reasons why we consistently ‘underestimate’.

First (and possibly foremost), we don’t underestimate at all. Our estimates are reasonably accurate, but we are consistently squeezed to fit pre-defined timescales or budgets. We ask for six people for eight weeks, but we get four people for four weeks. How does this happen? If we’ve been honest in our estimates, surely we should negotiate a scope reduction if our bid for resources or time is rejected? Whether we descope a selection of tests or not, when the time comes to deliver, our testing is unfinished. Of course, go live is a bumpy period – production is where the remaining bugs are encountered and fixed in a desperate phase of recovery. Achieving a reasonable level of stability takes as long as we predicted. We just delivered too early.
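The ‘squeeze’ can be sketched with a toy simulation. All the numbers below (a typical effort of about 100 days, the noise levels, a 95-day imposed budget) are hypothetical, chosen only to illustrate the asymmetry: honest estimates with symmetric errors average out to roughly zero, but capping them at an imposed budget produces the systematic ‘underestimation’ we keep seeing.

```python
import random

random.seed(42)

# Hypothetical numbers: 10,000 projects with an actual effort of ~100 days.
actuals = [random.gauss(100, 15) for _ in range(10_000)]

# Honest estimates: actual effort plus symmetric (unbiased) noise.
honest = [a + random.gauss(0, 10) for a in actuals]
mean_error = sum(e - a for e, a in zip(honest, actuals)) / len(actuals)
print(f"mean error, honest estimates:   {mean_error:+.1f} days")  # close to zero

# The squeeze: every estimate is capped at a pre-defined budget,
# regardless of what the estimator actually asked for.
budget = 95
squeezed = [min(e, budget) for e in honest]
mean_error_sq = sum(e - a for e, a in zip(squeezed, actuals)) / len(actuals)
print(f"mean error, squeezed estimates: {mean_error_sq:+.1f} days")  # clearly negative
```

The point is that the bias doesn’t come from the estimators: the same unbiased estimates, once truncated from above by an imposed budget, can only err in one direction.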

Secondly, we are forced to estimate optimistically. Breakthroughs, which are few and far between, are assumed to be certainties. The last project, which was so troublesome, was surely an anomaly; it will always be better next time. Of course, this is nonsense. One definition of madness is to expect a different outcome from the same situation and inputs.

Thirdly, our estimates are irrelevant. Unless the project can deliver in some mysterious predetermined time and cost constraints, it won’t happen at all. Where the vested interests of individuals dominate, it could conceivably be better for a supplier to overcommit, and live with a loss-making, troublesome post-go live situation. In the same vein, the customer may actually decide to proceed with a no-hoper project because certain individuals’ reputation, credibility and perhaps jobs depend on the go live dates. Remarkable as it may seem, individuals within customer and supplier companies may actually collude to stage a doomed project that doesn’t benefit the customer and loses the supplier money. Just call me cynical.

Assuming project teams aren’t actually incompetent, it’s reasonable to assume that project execution is never ‘wrong’ – execution just takes as long as it takes. There are only errors in estimation. Unfortunately, estimators are suppressed, overruled, and pressured into aligning their activities with imposed budgets and timescales, and so they appear to have been wrong.