I love things that make my job easier. A statistical model for estimating isn’t normally something I’d put in the ‘make my job easy’ box, but I might just have found one that works.
I caught up with William W. Davis, MSPM, PMP and Project Management Superhero. He’s taken the PERT (Program Evaluation and Review Technique) estimating approach to the next level by letting you add a dash of professional judgement in with the numbers.
I started off by asking him why he felt that was necessary.
William, why isn’t normal PERT good enough?
The PERT formula usefully calculates an expected value for an uncertainty with bell-shaped properties. But expected values are only about 50% reliable. What if you want an estimate that is, say, 75% reliable? Or 90% reliable? PERT can’t give you those kinds of estimates.
Moreover, PERT can’t rationally adjust estimates to incorporate an estimator’s knowledge and intuition about an uncertainty. PERT is all head and no heart. Yet decision-making invokes both our intellect and our emotions, and that is just as true when we make estimates about project uncertainties.
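For readers who haven’t met it before, the classic PERT formula weights the most likely outcome and averages it with the two extremes. A minimal sketch in Python (the sample numbers are illustrative):

```python
def pert_expected_value(optimistic: float, most_likely: float,
                        pessimistic: float) -> float:
    """Classic PERT: weighted average of a three-point estimate."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# An illustrative task estimated at 5 days (optimistic),
# 10 days (most likely) and 20 days (pessimistic):
print(pert_expected_value(5, 10, 20))  # about 10.8 days
```

This single number is the expected value William is describing: useful, but it says nothing about how reliable the estimate is.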
OK, I get it. So what does the S offer?
The S in SPERT offers superpowers to all project managers.
I like the sound of that!
The first superpower is statistics. The American Statistical Association defines statistics as, “the science of learning from data, and of measuring, controlling, and communicating uncertainty.” For me, statistics is about learning, measuring, controlling and communicating uncertainty to my project stakeholders.
Statistics is a scientific superpower. If project managers can harness that power, they can address the two chief reasons why anyone estimates anything.
Remind me of those again…
We estimate to:
- Align expectations among many stakeholders, so everyone knows what to expect about future uncertainties, and
- Make better, more informed decisions with respect to those future uncertainties.
Ah, yes. You mentioned more than one superpower?
The second superpower is sensing. Statistical PERT lets estimators rationally adjust their estimates based on their sense of how likely the most likely outcome really is. That subjective judgement is what SPERT uses to shape the final estimate.
Can you explain how it works?
Statistical PERT is a five-step process (but a SPERT template makes it only three steps). The five steps are:
1. Identify a minimum, most likely and maximum outcome for some uncertainty
2. Calculate the expected value using the PERT formula
3. Make a subjective judgement about how likely the most likely outcome really is
4. Calculate a standard deviation
5. Choose any probabilistic estimate that fits your desired risk level

SPERT templates do Steps 2 and 4 for you, and Excel’s statistical functions make Step 5 a snap.
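To make the five steps concrete, here is a minimal Python sketch. The ratio multiplier that converts the subjective judgement into a standard deviation is an assumption for illustration, not an official SPERT value (the real templates are Excel workbooks); `NormalDist.inv_cdf` plays the role of Excel’s `NORM.INV`:

```python
from statistics import NormalDist

def spert_estimate(minimum, most_likely, maximum, ratio, confidence):
    """Sketch of the five SPERT steps for one uncertainty.

    `ratio` stands in for the estimator's subjective judgement of how
    likely the most-likely outcome really is (Step 3); the value used
    below is illustrative, not an official SPERT multiplier.
    """
    # Step 2: expected value via the PERT formula
    mean = (minimum + 4 * most_likely + maximum) / 6
    # Step 4: standard deviation scaled by the subjective ratio
    std_dev = (maximum - minimum) * ratio
    # Step 5: the estimate matching your desired confidence level
    # (the equivalent of Excel's NORM.INV)
    return NormalDist(mean, std_dev).inv_cdf(confidence)

# Step 1: a 5 / 10 / 20 day three-point estimate, then asking for
# an 80%-reliable estimate rather than the ~50%-reliable expected value
estimate = spert_estimate(5, 10, 20, ratio=0.2, confidence=0.80)
```

With these illustrative numbers the 80%-reliable estimate comes out noticeably higher than the ~10.8-day expected value, which is exactly the point: the confidence level is now explicit and shareable.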
You have a SPERT template that you’ve chosen to offer for free. Why is that?
I offer all Statistical PERT example workbooks and templates free to everyone.
I want to remove barriers that keep people from exploring their own statistical superpowers.
I want to encourage all businesspeople – especially project managers – to use statistics to quickly align stakeholder expectations and improve executive decision-making.
What led you to develop it in the first place?
Two years ago, I surveyed peer project managers and asked them this question: “How confident do you strive to be when estimating your projects?” Their anonymous responses ranged from 50% to 100%! But none of these project managers calculated their confidence levels, and their sponsors had no idea how much risk they were assuming by approving their project budgets and schedules.
I realised that project managers need an easy way to communicate their sense of confidence and risk to other people. I didn’t find a suitable way to do that, so I created a way.
Great, now we all benefit! Tell me, what trends have you spotted in estimating?
VersionOne, maker of agile project management software, incorporated Monte Carlo simulation into their software two years ago. Agilists using VersionOne can now see bell-shaped curves that forecast when their releases will be finished.
But, collectively, we haven’t solved the basic problem that we just don’t estimate the unknown future very well. And we do an even worse job of communicating our sense of confidence and risk about our project uncertainties to our stakeholders.
What’s your take on #NoEstimates?
The #NoEstimates crowd is mostly agilists who eschew traditional project management, and that includes the traditional, and often failed, ways of estimating projects.
Rather than not estimating at all, I believe we should move from a predictive model to a forecasting model that allows for many possible outcomes, both plausible and implausible. Statistics can move us towards a project forecasting model.
Remember what the purpose of statistics is: to learn, measure, control and communicate uncertainty. Not estimating means we aren’t communicating uncertainty to our stakeholders.
So given all that we’ve talked about today, what’s the big thing that PMs should be watching out for?
Listen carefully whenever you hear anyone share a deterministic (single-value) estimate of some project uncertainty. When you hear or see such an estimate, ask for clarification: “Is this an optimistic, most likely, or pessimistic outcome? How reliable is this estimate?”
When estimates are statistically derived, even single-value estimates come with a shareable confidence level. For example, if I say, “I’ll finish the task by Friday,” I don’t convey any sense of confidence or risk. But if I say, “There’s an 80% chance I’ll finish the task by Friday,” now I’ve conveyed both a measure of confidence and risk about whether I’ll finish the task by Friday.
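That Friday example also works in reverse: given a statistically derived estimate, you can read off the confidence attached to any single date. A small sketch, where the mean and standard deviation are made-up numbers for illustration (`NormalDist.cdf` is the equivalent of Excel’s `NORM.DIST`):

```python
from statistics import NormalDist

# Illustrative, statistically derived task estimate: expected duration
# of 4 working days with a standard deviation of 1.5 days.
task = NormalDist(mu=4.0, sigma=1.5)

# Probability of finishing within 5 working days, i.e. "by Friday"
confidence = task.cdf(5.0)
print(f"There's a {confidence:.0%} chance I'll finish the task by Friday.")
```

The single value (“Friday”) is still there, but it now travels with a measure of confidence and risk that stakeholders can act on.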
And that’s a rule to live by. Thanks, William!