Test Organisation Maturity Assessment
The Need for a Test Process Maturity Model
Gerrard Consulting has been conducting test process improvement projects since 1991. To help us improve our clients' test practices by focusing on what matters most, we have developed and refined a test process improvement methodology. All process improvement methods require an initial assessment of current practices, which is used to measure current capability, identify shortcomings and guide the improvement process. For several years we sought a process model that would help us assess an organisation's testing maturity objectively, and that could be used to identify a set of appropriate improvements.
The Capability Maturity Model (CMM) is the best-known Software Engineering process model. Its foundation is an incremental set of maturity levels. Each level consists of a set of Software Engineering practices that an organisation must adopt to reach that level; to reach the next level, the organisation must implement the practices identified as part of that level, and so on. The CMM attempts to provide a sequenced set of process improvements leading to the ultimate process capability.
The CMM and related models have been found lacking when it comes to testing practices. The detail presented in these models is sparse, to say the least. Attempts have been made to refine the CMM as well as to come up with alternative testing-specific models. We have found that even these models did not match the way we conducted testing improvement projects. These models are based on assessments that identify whether certain good practices are used or not, and present a staged sequence of process improvements. The recommended process improvements consist of the good practices not currently being adopted. The assumption is that these practices will increase testing effectiveness and improve software quality.
Remedy-Based Models are Inadequate
Several problems with such approaches (particularly the CMM) have been documented, but we would emphasise one in particular. We believe that these models are all solution- or 'remedy-based' and so miss the point. Consider what might happen if a doctor adopted a remedy-based diagnosis process. If you had a headache and the doctor asked you a series of questions relating to possible remedies ('are you taking aspirin?', 'are you taking penicillin?'), you would probably be perplexed. These questions are not related to the problem and would be very unsatisfactory, unless of course you were a hypochondriac and wanted to take lots of pills.
Process assessments that are remedy-based are also unsatisfactory. Most organisations wishing to improve their test practices have one or more specific problems they wish to solve; for example, 'testing costs too much', 'our testing isn't effective enough', 'testing takes too long'. Answering NO to questions such as 'do you conduct inspections?', 'do you use a tool?', 'are incidents logged?' should not mean that inspections, tools and incident logging are automatically the best things to do next. The remedies recommended may be based on the sequencing of practices in the model, not on whether they will help the organisation solve its software development problems.
We fear that many organisations use remedy-oriented approaches blindly. Assuming that an organisation's problems can be solved by adopting the 'next level' practices may be dangerous: the cost or difficulty of adopting new practices may outweigh the marginal benefit of using them. For example, an organisation might use 80% of CMM level 2 practices and 60% of level 3 practices, yet could not be assessed at any level higher than level 1. If the organisation adopted the last 20% of level 2 practices, would it automatically benefit? There might be some benefit, but it is more likely that those practices have not been adopted precisely because the benefits are marginal.
We believe that process improvement methods that use remedy-based approaches are inadequate because they do not take existing problems, objectives and constraints into consideration.
Process Models and Process Improvements
In our experience, the major barriers to improved practices are organisational, not technical. Most of the difficulties in the implementation of improved practices are associated with changing management perceptions, overcoming people's natural resistance to change and implementing workable processes and management controls.
For example, management may say 'testing takes too long' and believe that an automated tool can help. Buying a tool without further analysis of the problems would probably waste more time than it saves: time is spent getting the tool to work, the tool doesn't deliver the benefits promised, the situation is made worse, and the tool ends up as shelfware.
The underlying problem is most likely a combination of problems: management does not understand the objectives of testing; the cost of testing is high but difficult to pin down; developers, testers and users may never have been trained in testing; the quality of the product delivered into testing is poor, so it takes forever to get right. To address the management problem, a mix of improvements is most likely required: management awareness training; testing training; improved definition of the test stages and their objectives; measurement of the quality of the product at each stage; and so on.
We believe that the assessment model must take account of the fact that not all improvements are a good idea straight away. Some 'improvements' are expensive; some save time but demand dramatic changes to the way people work; some improve the quality of the testing but take longer to perform. Very few improvements save time, improve quality, cause minimal change and pay back within two weeks. Recommended improvements must take account of other objectives, constraints and priorities.
The Test Organisation Maturity Model (TOM™)
Gerrard Consulting has developed the Test Organisation Maturity model, TOM™, with one primary concern: the outcome of the assessment should address the problems actually being experienced. The assessment process is based on a relatively simple questionnaire that can be completed, and a TOM™ score derived, without the assistance of a consultant.
The questionnaire is governed by the following:
- The questions focus on organisational rather than technical issues and the answers, in most cases, can be provided by management or practitioners (try both and compare).
- The number of questions asked is small (twenty).
- The objectives of the organisation assessed should be taken into consideration and prioritised. (Do we want to get better, or do we want to save money?)
- Questions relate directly to the symptoms, not remedies. (What's going wrong, now?)
- Symptoms are prioritised. (Release decisions are made on 'gut feel' and that's bad, but we are more concerned that our sub-system testing is poor).
- The scoring system is simple. All scores and priorities are rated from one to five.
The Improvement Model
A potential process improvement may help to solve several problems. The improvement model is a simple scoring/weighting calculation that prioritises potential improvements, based on the assessment scores and priorities. The model has a library of 83 potential testing improvements. For each symptom, a selection of improvements has been deemed most appropriate, and weighted against the objectives and constraints.
When the questionnaire is completed, the scores are entered, and a prioritised action list of potential process improvements is generated.
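The actual library of 83 improvements and its weightings are not published here, but the kind of scoring/weighting calculation described above can be sketched in a few lines. In this sketch, the symptom names, improvement names, weights and the exact formula are all illustrative assumptions, not the real model: an improvement scores higher the worse (lower) the score and the higher the priority of each symptom it addresses.

```python
# Hypothetical fragment of an improvement library: each improvement maps
# the symptoms it addresses to a weight (how strongly it helps).
# All names and numbers here are invented for illustration.
IMPROVEMENTS = {
    "testing training":             {"test design techniques unused": 3,
                                     "testers untrained": 5},
    "define test stage objectives": {"release decided on gut feel": 4},
    "incident logging":             {"release decided on gut feel": 2,
                                     "defect reports lost": 5},
}

def prioritise(symptom_scores, symptom_priorities):
    """Rank improvements. A symptom contributes more the lower its
    maturity score (5 - score) and the higher its priority (1-5),
    scaled by the library weight. Improvements that address no
    reported problem (total 0) are dropped."""
    ranking = []
    for name, addresses in IMPROVEMENTS.items():
        total = sum(
            weight * (5 - symptom_scores[s]) * symptom_priorities.get(s, 1)
            for s, weight in addresses.items()
            if s in symptom_scores
        )
        ranking.append((total, name))
    return [name for total, name in sorted(ranking, reverse=True) if total > 0]

scores = {"test design techniques unused": 2, "testers untrained": 1,
          "release decided on gut feel": 4, "defect reports lost": 5}
priorities = {"test design techniques unused": 3, "testers untrained": 5,
              "release decided on gut feel": 2, "defect reports lost": 1}
print(prioritise(scores, priorities))
# ['testing training', 'define test stage objectives', 'incident logging']
```

Note how 'defect reports lost' contributes nothing despite its weight of 5: its score of 5 means the symptom is absent, so no remedy is recommended for it. This is the essential difference from a remedy-based model.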
How the TOM™ Questionnaire is used
When completed, the questionnaire can be used to calculate a TOM™ level. Since you must answer twenty questions with scores of 1-5, you can score a minimum of 20 and a maximum of 100. If you repeat the questionnaire after a period, you can track progress (or regression) in each area. If you send the completed questionnaire to us, we will enter the data into our TOM™ database; we are using assessment data to survey testing practices across industry. The database also has a built-in improvement model: based on the assessment data entered, the model generates up to seventy prioritised improvement suggestions. You can use these to identify the improvements that are likely to give the most benefit to your organisation.
Completing the Questionnaire
The questionnaire has four parts. Parts one and two are for administrative and analysis purposes. Parts three and four are used to calculate a TOM™ level and are then analysed in the testing improvement model to generate prioritised improvement suggestions.
You may complete a questionnaire for your entire organisation or for a particular project. You only need to complete part one once. For each assessment in your organisation, you need to complete parts two, three and four.
Part 1 - About Your Organisation
We use this information to analyse the assessment data we have collected across industry.
Part 2 - About You (the Assessor)
This information lets us contact you easily once we have analysed the data on your questionnaire.
Part 3 - Your Objectives and Constraints
You may have overriding objectives or constraints that will influence which improvements will help you most. For example, some improvements raise quality but may be expensive to implement. Rate the objectives and constraints in the range one to five. You can give different objectives the same weightings, e.g. 5, 3, 5, 2, 2, or grade them all differently, e.g. 5, 1, 3, 4, 2.
Part 4 - Your Symptoms
This part asks questions about twenty symptoms of poor test process maturity. The questions must be answered with a score (1-5) and a priority (1-5).
For each numbered question, there are three examples of maturity. If one of the three columns headed Low, Medium or High resembles your situation, score a 1, 3 or 5 respectively. If you are 'in between' score 2 or 4. You must score each question between 1 and 5. If you score less than 5 for a symptom, you should assign a priority to that symptom. The priority indicates how much 'it hurts'. A priority 1 symptom can be ignored. A priority 5 is extremely painful and must be addressed.
Add up the individual scores for each of the symptoms. The result is a TOM™ maturity level.
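The arithmetic above is simply a sum, but the bounds follow from the questionnaire rules: twenty questions, each scored 1-5, so every TOM™ level falls between 20 and 100. A minimal sketch (the function name is ours; the validation reflects the rules stated above):

```python
def tom_level(scores):
    """Sum twenty symptom scores (each 1-5) into a TOM maturity level.

    The result always lies between 20 (every symptom at its worst)
    and 100 (fully mature on every symptom)."""
    if len(scores) != 20:
        raise ValueError("the questionnaire has exactly twenty questions")
    if any(not 1 <= s <= 5 for s in scores):
        raise ValueError("each question must be scored between 1 and 5")
    return sum(scores)

# A mid-range organisation scoring 3 on every symptom:
print(tom_level([3] * 20))  # 60
```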
Send completed questionnaires for the attention of Paul Gerrard:
PO Box 347
or FAX to +44 (0)1628 630398
We will return a printed version of the assessment and a prioritised list of up to seventy potential test process improvements FREE.
All information provided on these questionnaires will be treated in confidence.