What is the best ratio of testers to developers in an agile team?

You may or may not find this response useful. 🙂

“It depends”.

The “it depends” response is an old joke. I think I was advised by David Gelperin in the early 90s that if someone says “it depends” your response should be “ahh, you must be a consultant!”

But it does depend. It always has and always will. The context-driven guys provide a little more information – “it depends on context”. But this doesn’t answer the question, of course – we still get asked by people who really do need an answer, i.e. project managers who need to plan and resource teams.

As an aside, there’s an interesting discussion of “stupid questions” here. This question isn’t stupid, but the blog post is interesting.

In what follows – let me assume you’ve been asked the question by a project manager.

The ‘best’ dev/tester ratio is possibly the most context-specific question in testing. What are the influences on the answer?

  • What is the capability/competence of the developers and testers respectively and absolutely?
  • What do dev and test WANT to do versus what you (as a manager) want them to do?
  • To what degree are the testers involved in early testing? (Do they just system test, or are they involved from concept through to acceptance?)
  • What is the risk-profile of the project?
  • Do stakeholders care if the system works or not?
  • What is the scale of the development?
  • What is the ratio of new/custom code versus reused (and trusted) code/infrastructure?
  • How trustworthy is the to-be-reused code anyway?
  • How testable will the delivered system be?
  • Do resources come in whole numbers or fractions?
  • And so on, and so on…

Even if you had the answers to these questions to six significant digits – you still aren’t much wiser because some other pieces of information are missing. These are possibly known to the project manager who is asking the question:

  • How much budget is available? (knowing this – he has an answer already)
  • Does the project manager trust your estimates and recommendations or does he want references to industry ‘standards’? i.e. he wants a crutch, not an answer.
  • Is the project manager competent and honest?

So we’re left with this awkward situation. Are you being asked the question to make the project manager feel better; to give him reassurance he has the right answer already? Does he know his budget is low and needs to articulate a case for justifying more? Does he think the budget is too high and wants a case for spending less?

Does he regard you as competent and trust what you say anyway? This final point could depend on his competence as much as yours! References to ‘higher authorities’ satisfy some people (if all they want is back-covering), but other folk want personal, direct, relevant experience and data.

I think a bit of von Neumann game theory may be required to analyse the situation!

Here’s a suggestion. Suppose the PM says he has 4 developers and needs to know how many testers are required. I’d suggest he has a choice:

  • 4 dev – 1 tester: the onus is on the devs to do good testing; the tester will advise, cherry-pick areas to test and focus on high-impact problems. The PM needs to micro-manage the devs, and the tester is a free agent.
  • 4 dev – 2 testers: testers partner with dev to ‘keep them honest’. Testers pair up to help with dev testing (whether TDD or not). Testers keep track of the coverage and focus on covering gaps and doing system-level testing. PM manages dev based on tester output.
  • 4 dev – 3 testers: testers accountable for testing. Testers shadow developers in all dev test activities. System testing is thorough. Testers set targets for achievement and provide evidence of it to PM. PM manages on the basis of test reports.
  • 4 dev – 4 testers: testers take ownership of all testing. But is this still Agile??? 😉

Perhaps it’s worth asking the PM for dev and tester job specs and working out what proportion of their activities are actually dev and test? Don’t hire testers at all – just hire good developers (i.e. those who can test). If he has poor developers (who can’t/won’t test) then the ratio of testers goes up because someone has to do their job for them.

Why are our estimates always too low?

At last week’s Test Management Forum, Susan Windsor introduced a lively session on estimation – from the top down. All good stuff. But during the discussion, I was reminded of a funny story (well I thought it was funny at the time).

Maybe twenty years ago (my memory isn’t as good as it used to be), I was working at a telecoms company as a development team leader. Around 7pm one evening, I was sat opposite my old friend Hugh. The office was quiet, we were the only people still there. He was tidying up some documentation, I was trying to get some stubborn bug fixed (I’m guessing here). Anyway. Along came the IT director. He was going home and he paused at our desks to say hello, how’s it going etc.

Hugh gave him a brief review of progress and said in closing, “we go live a week on Friday – two weeks early”. Our IT director was pleased but then highly perplexed. His response was, “this project is seriously ahead of schedule”. Off he went scratching his head. As the lift doors closed, Hugh and I burst out laughing. This situation had never arisen before. What a problem to dump on him! How would he deal with this challenge? What could he possibly tell the business? It could be the end of his career! Delivering early? Unheard of!

It’s a true story, honestly. But it also reminded me that if estimation is an approximate process, then in the long run our estimation errors (over or under), expressed as a percentage, should balance statistically around a mean of zero – on average, our estimates would match the actual time or cost our projects took to deliver.

Statistically, if we are dealing with projects that are delayed (or advanced!) by unpredictable, unplanned events, we should be overestimating as often as we underestimate, shouldn’t we? But clearly this isn’t the case. Overestimation – delivering early – is so rare it’s almost unheard of. Why is this? Here’s a stab at a few reasons why we consistently ‘underestimate’.
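The statistical argument above can be sketched in a few lines of code. This is an illustrative simulation only – the error distribution and the two-thirds “squeeze” factor are my assumptions, not measured data – but it shows how unbiased estimates cancel out in the long run, while a systematic squeeze produces the consistent ‘underestimation’ we actually see:

```python
import random

random.seed(1)

# True effort of 10,000 hypothetical projects, in person-weeks.
actuals = [random.uniform(10, 50) for _ in range(10_000)]

# Honest estimates: actual effort plus symmetric noise of +/-20%.
honest = [a * random.uniform(0.8, 1.2) for a in actuals]

# Squeezed estimates: the same honest figures cut by a third
# to fit a predetermined budget.
squeezed = [h * (2 / 3) for h in honest]

def mean_error(estimates):
    """Average signed error as a fraction of actual effort."""
    return sum((e - a) / a for e, a in zip(estimates, actuals)) / len(actuals)

print(f"honest   mean error: {mean_error(honest):+.1%}")    # close to zero
print(f"squeezed mean error: {mean_error(squeezed):+.1%}")  # about -33%
```

With symmetric noise, over- and under-runs balance around zero; with the squeeze applied, every estimate is biased low and the mean error sits near minus one third, regardless of how accurate the original estimators were.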

First (and possibly foremost): we don’t underestimate at all. Our estimates are reasonably accurate, but we consistently get squeezed to fit pre-defined timescales or budgets. We ask for six people for eight weeks, but we get four people for four weeks – a two-thirds cut, from 48 person-weeks to 16. How does this happen? If we’ve been honest in our estimates, surely we should negotiate a scope reduction if our bid for resources or time is rejected? Whether we descope a selection of tests or not, when the time comes to deliver, our testing is unfinished. Of course, go-live is a bumpy period – production is where the remaining bugs are encountered and fixed in a desperate phase of recovery. To achieve a reasonable level of stability takes as long as we predicted. We just delivered too early.

Secondly, we are forced to estimate optimistically. Breakthroughs, which are few and far between, are assumed to be certainties. The last project, which was so troublesome, was of course an anomaly, and it will always be better next time. This, of course, is nonsense. One definition of madness is to expect a different outcome from the same situation and inputs.

Thirdly, our estimates are irrelevant. Unless the project can deliver within some mysterious predetermined time and cost constraints, it won’t happen at all. Where the vested interests of individuals dominate, it could conceivably be better for a supplier to overcommit and live with a loss-making, troublesome post-go-live situation. In the same vein, the customer may actually decide to proceed with a no-hoper project because certain individuals’ reputations, credibility and perhaps jobs depend on the go-live dates. Remarkable as it may seem, individuals within customer and supplier companies may actually collude to stage a doomed project that doesn’t benefit the customer and loses the supplier money. Just call me cynical.

Assuming project teams aren’t actually incompetent, it’s reasonable to assume that project execution is never ‘wrong’ – execution just takes as long as it takes. There are only errors in estimation. Unfortunately, estimators are suppressed, overruled, pressured into aligning their activities with imposed budgets and timescales, and they appear to have been wrong.