Project Management Failures – Standish (Chaos)
This article is about developing and implementing projects, and above all about what we forget to learn. Here too, the subject is the various change drivers that underpin an organisation's capacity to change. This again underlines the importance of measuring those drivers and steering on the results. The Business Fitscan tools help with this.
Project Management Failures - Standish (Chaos) reports (1994-2015)
(Published on 20 January 2016)
The Standish (Chaos) report has been published for a number of years now and it is always a sobering read for those involved in project management and, in particular, projects involving computer technology. Of particular note is that there has been no significant change in the statistics quoted below since the mid-1990s.
For those who just want the headlines, the 1994 report indicated that "a staggering 31.1% of projects will be cancelled before they ever get completed. Further results indicate 52.7% of projects will cost 189% of their original estimates. The cost of these failures and overruns are just the tip of the proverbial iceberg. The lost opportunity costs are not measurable, but could easily be in the trillions of dollars. One just has to look to the City of Denver to realise the extent of this problem. The failure to produce reliable software to handle baggage at the new Denver airport was costing the city $1.1 million per day. Based on this research, The Standish Group estimates that in 1995 American companies and government agencies would spend $81 billion for cancelled software projects. These same organisations would pay an additional $59 billion for software projects that will be completed but will exceed their original time estimates. Risks are always a factor when pushing the technology envelope, but many of these projects were as mundane as a driver's license database, a new accounting package, or an order entry system. On the success side, the average is only 16.2% for software projects that are completed on-time and on-budget."
Of course, this is the 1994 report; when compared to the 2015 report (in the context of the purpose of this article), the following items stand out:
- Standish changed the definition of 'success' from one largely focused on time, cost and scope to one in which success is measured less against achieving a predefined target and more against a 'satisfactory' result.
- The impact of this change in definition was quantified: those who considered their project a success using the traditional definition ranged from 36%–41%, in contrast to a range of 27%–31% for those using the updated definition.
- On page 11 of the report they consider the factors of success, providing a list of 10 with two additions (not listed in the 1994 report): 'emotional maturity', ranked second highest on the list, and 'optimisation', ranked 4th highest. Emotional maturity is described as 'skills which include managing expectations and gaining consensus, which in turn would cause a high satisfaction', and the report goes further to propose that 'having a skilled emotional maturity environment helps 80% of projects enjoy success'. Optimisation is not defined.
The report concludes that ‘over the last 20 years the project management field has experienced increasing layers of project management processes, tools, governance, compliance, and oversight. Yet these activities and products have done nothing to improve project success.’
What I found particularly interesting when comparing both reports was that the list of factors that survey participants considered to be the causes of project failure (the reports use terms such as 'impaired' and 'challenged' rather than failure) remained largely the same, except for the two described above. Secondly, in the 2015 report, Standish proposes that the Agile PM methodology has gone against the trend by achieving much greater project success, but is becoming increasingly challenged by the creeping bureaucracy of traditional PM practices.
The more I looked at these reports and the conclusions drawn, the more suspicious I became as to whether the core problem of project failure had actually been found and, if not, on what basis the Agile conclusion in the 2015 report was justified. I am not suggesting that Agile isn't the answer to the question asked, but is correlation sufficient justification (the number of Agile projects deemed successful vs non-Agile projects that aren't successful)? Does Agile address each cause and, if so, how (there were many, ranging from 10 to 13 after removing duplications)?
I decided to check out the first part of this by trying to derive the root cause using the Theory of Constraints thinking process tool, the Current Reality Tree (CRT). The CRT assumes that the core problem must first be identified before it is possible to consider a solution. Note that I did not include the optimisation and emotional maturity factors because a) the method does not require every undesirable effect to be mentioned and b) the report provides no succinct definition for these items.
For many this tree (see below) will be unfamiliar, but a quick review will suggest to you that it describes a number of statements and their relationships to each other. The boxes with a yellow background correspond to the 'causes' noted by the 1994 Standish report (being more comprehensive than those described in the 2015 report), with the remaining boxes being my additions, inserted to create the logical links tying everything together.
I won't get into the detail of how to create or read the tree, but readers should note the following:
- None of the 'causes' that the Standish report collected from those surveyed appears to be the root cause; i.e. they appear to be symptoms, often linked to each other, providing no real insight into the real root cause.
- The root cause proposed by this exercise is '(1) The organisation does not know why changing the traditional project evaluation method(s) is necessary'. This cause was not specifically mentioned in the Standish reports, but the proposal that Agile is a better alternative suggests that they were not too far away from coming to the same conclusion. However, knowing what the root cause is is not the same as knowing why the cause has persisted, and without that knowledge there is a risk that any proposed solution will fail in the long run (again, the 2015 Chaos report alludes to this in its reference to a growing tendency towards bureaucracy even after adopting Agile). These further steps, essentially encapsulated in the Evaporating Cloud technique, are part of the same TOC methodology but are not addressed in this particular article.
- All the causes point to a failure of the current project evaluation AND management methods.
Does this sound right to you? If you follow the arrows, you will see how I developed the logic to justify this root cause. Because I documented it in this way, the reasoning is transparent and available for others to scrutinise.
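The mechanics of reading a CRT can be sketched in code: model the tree as a directed graph of cause-to-effect edges, and the candidate root causes are the nodes that cause other things but are themselves uncaused. A minimal sketch follows; note that the node names here are illustrative placeholders of my own, not the actual statements from the tree in this article.

```python
# Sketch: a Current Reality Tree as a directed graph.
# An edge (cause, effect) means "cause leads to effect".
# Node labels below are hypothetical, for illustration only.

def root_causes(edges):
    """Return nodes that cause other nodes but are never an effect.

    In CRT terms these are the candidate root causes: every
    undesirable effect in the tree should trace back to one."""
    causes = {c for c, _ in edges}
    effects = {e for _, e in edges}
    return sorted(causes - effects)

edges = [
    ("evaluation method never questioned", "unrealistic expectations"),
    ("evaluation method never questioned", "incomplete requirements"),
    ("incomplete requirements", "changing requirements"),
    ("changing requirements", "schedule overrun"),
    ("unrealistic expectations", "schedule overrun"),
]

print(root_causes(edges))  # ['evaluation method never questioned']
```

Everything except the bottom node appears as an effect of something else, which mirrors the observation above: most surveyed 'causes' are symptoms sitting mid-tree, and only the uncaused node at the base is a candidate root cause.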
So what’s different about this approach?
- It makes the logic of individuals visible and available for everyone to scrutinise and validate (isn't this exactly how Wikipedia became such a definitive reference?)
- In its final form, it demonstrates that valid causes can be found without significant effort and expense being applied to data validation and mining (correlation is not causation). Sure, data will help, but now we have a context for where and what to look for.
- It provides significant insight into where management should focus their attention rather than fighting fires that only address the symptoms
- Conversations need not jump down rabbit holes based on personal opinions. If others want to contribute to the debate, the method encourages them to add, modify or delete the existing logic, on the proviso that any proposed changes are presented with the same rigour as shown in the tree.
For those who want to know more about how to create such a tree