Hardwired for failure? Part Two White Paper

White Paper

  • White Paper
  • Communication
  • Behaviour
  • Business solutions
  • Governance
  • Knowledge management
  • Project management
  • Project planning
  • Requirements
  • Strategy
  • PRINCE2

Author: Richard Caton

April 23, 2018

 20 min read

This is the second and final part of a two-part White Paper. The two parts of the paper explore:

  • the concept of cognitive biases from the field of psychology and the role they play in enhancing the conditions that lead to project failure; and
  • what the project management community can do to mitigate their impact.

Part one of this White Paper provided a definition for project failure. It stated that a failed project is one that:

  • is cancelled with large sums written off and/or
  • has missed business critical milestones and/or
  • is significantly over budget particularly where the longer-term value, return or benefits do not mitigate this and/or
  • has products or outputs that are fundamentally not fit for purpose and/or
  • has poor take up by users.

It then looked at the extent of project failure against the backdrop of the current aspirations for project management and promoted best practice. It looked at the traditional reasons given for why projects fail and the classic view of what we expect from people working on projects: compliance with, and obedience to, the project plan. It then went on to look at how organizational culture and human psychology play a role in increasing the propensity for projects to fail.

Part one described heuristics as mental shortcuts or “rules of thumb” that the brain uses subconsciously to make decisions or judgements. It explained that cognitive biases are the errors in judgement that are the consequence of heuristics. Cognitive biases arise because the brain operates two different systems of cognition: intuition and reasoning.

Jonathan Haidt’s analogy of a rider on top of an elephant, from his book The Happiness Hypothesis1, was used to show the difference between the controlled, reasoned and rational self and the automatic, implicit and emotional self. The rider represents our rational side and the elephant our emotional side. The rider holds the reins and is generally in control of the elephant. But should the elephant disagree with the rider, the rider will be overpowered by the elephant’s will.

Part two looks at a number of these cognitive biases in detail and how they map to key aspects of the project and programme management lifecycle. It goes on to provide a number of specific frameworks, tools, techniques, strategies and approaches for reducing the negative impact of cognitive biases with the ultimate goal of improving rates of project success.

Cognitive biases

There are at least 100 cognitive biases that have been described, tested and documented. They affect our ability to:

  • Analyze the evidence
  • Forecast the likelihood of success
  • Plan accurately
  • Make informed decisions
  • Look at our work objectively in hindsight.

These areas map closely across the project lifecycle (start up, initiation, delivery, close) and the programme lifecycle (identification, definition, delivery/benefits realization, close). Below are the cognitive biases that this author believes are most pertinent to project and programme management.

  • Confirmation bias
  • Anchoring effect
  • Optimism bias
  • Self-illusory bias
  • Overconfidence effect
  • Planning fallacy
  • Escalation of commitment / sunk cost fallacy
  • Hindsight bias
  • Choice support bias

Analyzing the evidence and approving and initiating initiatives

3.1 CONFIRMATION BIAS

‘The tendency for people to favour information that confirms their already held beliefs or expectations.’2

This involves selective recall of information or the interpretation of ambiguous information in order to support existing positions. When considering a question, we tend to look for evidence to answer the question but we don’t tend to look for information that would refute the statement.

In one experiment using a fictional child custody case, participants were told that Parent A was reasonably suited to being the guardian in a variety of ways. Parent B had a mix of both positive but also negative qualities: they had a close relationship with the child but also had a job that would take them away for long periods of time. When asked, “Which parent should have custody of the child?” most participants chose Parent B as they focused on the positive attributes3. But when asked, “Which parent should be denied custody of the child?” they also answered Parent B as they centred on the negative attributes. People tend to use a ‘positive test strategy’ looking for evidence to support a question posed. However, best practice says we should test questions by seeking to refute them, using counterfactuals and playing devil’s advocate.

In projects we seek progress and put forward recommendations for approval. They are presented to a decision-maker framed towards eliciting a positive outcome. Whilst decision-makers do challenge the thinking contained in proposals, they start from a non-neutral, often one-sided recommendation.

In other cases, an expectation can be set for those making proposals about the answer that is wanted. Evidence is then steered to justify the preferred recommendation. So, we look for confirming evidence and make strong cases for why things should progress. Equally, opposing views can be side-lined, or opinions sought selectively, to provide the desired justification for the investment.

Are projects looking for the counterfactuals and playing devil’s advocate enough? Are we looking at all the available facts without bias? Are we giving unwelcome views a fair hearing?

In the case of the Seattle tunnel project (discussed in part one), an engineer hired by Seattle’s Mayor, an opponent of the scheme, had warned of unprecedented risks because of poor, inconsistent soil conditions and a high water table. Other stakeholders had warned of issues should the tunnel-boring machine, the largest in the world, get stuck. However, a consultant brought in by the city council, which was supportive of the tunnel, had concluded that the risks were acceptable.

3.2 ANCHORING EFFECT

‘The tendency to rely heavily on the first piece of information offered (the ‘anchor’) when making decisions.’4

Dan Ariely has shown that the price paid for an item can be influenced by entirely arbitrary factors5. In an experiment at MIT he was able to influence the price that a group of MBA students would be willing to pay for a bottle of Côtes du Rhône Jaboulet Parallel purely by asking them to look at the last two digits of their social security number: the higher that pair of digits, the more money a person was willing to offer to buy the wine in an auction.

There is a parallel in project management when scoping out projects and trying to estimate elements such as budget and cost. Feasibility studies are conducted with information which, at that stage, is likely to be incomplete or based on significant assumptions, and there can be pressure for the work to go ahead. With this in mind, estimates are likely to fall on the low side. Even with strong caveats, sponsors are likely to hold the initial estimates as the benchmark by which all future information is measured. This can be compounded because spreadsheets and Gantt charts are not good at reflecting caveats, ambiguity or information that is entered only as an assumption.

Forecasting likelihood of success

4.1 OPTIMISM BIAS

‘The tendency to believe that one is at less risk of experiencing a negative event compared to other people.’6

People have a tendency to believe that things go less wrong for them than for other people; the outside world is kinder to them than to others. Optimism bias is thought to be driven by factors such as self-enhancement (a component of well-being), self-presentation (the projection of a personal image in social settings) and perceived control, where people believe they have more control over events than others do.

For example, although smokers acknowledge that smoking increases health risks, they believe that the extent of the increased risk will be lower than what non-smokers believe, however well-proven the risks are. In addition, they believe they are at a lower risk of suffering ill health effects compared to other smokers7.

4.2 SELF-ILLUSORY BIAS

‘The tendency for people to evaluate themselves more positively than others evaluate them.’8

A (possibly apocryphal) anecdote concerning Muhammad Ali amusingly illustrates this. Prior to a flight, a flight attendant reminded Muhammad Ali to fasten his seat belt. Muhammad Ali replied: “Superman don’t need no seatbelt!”, to which the flight attendant retorted “Superman don’t need no airplane, either.”

  • In a survey of a cohort of US car drivers, 93% rated themselves as having better skills than average and 88% rated themselves as safer than average.9
  • In a study of faculty staff at the University of Nebraska-Lincoln, 90% rated themselves as above average with two thirds rating themselves in the top quartile.10

This concerns our relative skill or ability compared to others. We think we are better than average. But we cannot all be better than average.

4.3 OVERCONFIDENCE EFFECT

The tendency to ‘subjectively believe that one’s judgement is better or more reliable than it objectively is.’11

The overconfidence effect is driven by misjudging one’s own nominal performance or ability, misjudging performance relative to other people’s, and holding the belief that one knows the truth. Where people are answering difficult questions on subjects with which they are not familiar, confidence will exceed accuracy. In one study, clinicians who were 100% certain of their diagnosis were incorrect 40% of the time.12

A study by Duke University asked chief financial officers of major companies to estimate the likely performance of the stock market. The result was evidence that these industry leaders had no actual insight into the short-term future of the stock market. However, the participants were oblivious to this fact and were far too confident in their predictions. Their actual ability should only have merited a prediction such as ‘There is an 80% chance that the S&P13 return next year will be between -10% and +30%.’14 But admitting to that level of ignorance would not have been tolerated.

Organizations who accept predictions of overconfident experts take risks that they should in fact be avoiding. Society values optimism in general and as Daniel Kahneman says ‘Admission that one is merely guessing is especially unacceptable when the stakes are high. Acting on pretended knowledge is often the preferred solution.’15 ‘Overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion.’ — Daniel Kahneman.16

Planning

5.1 PLANNING FALLACY

The tendency to underestimate and be too optimistic about task completion times.17

The optimism bias, self-illusory bias, and the overconfidence effect are all factors driving the planning fallacy.

A case study involving Daniel Kahneman describes his attempt to design a new curriculum, together with a textbook, for teaching judgement and decision-making in high schools. He pulled together an experienced and knowledgeable team and developed initial material as a test. He then asked the team to gauge how long it would take to complete the textbook. As an expert in cognitive biases, he knew that a group discussion would reduce the quality of the answer, so he asked everyone to make a private prediction.

The average was two years, with a range from 18 months to two and a half years. At that point he asked the curriculum expert to think of other similar projects he was aware of. The expert had not considered this question previously. He sheepishly admitted that around 40% of teams at a similar point had abandoned their project before completion and that those that finished had taken between seven and ten years. In fact, the book was finished after eight years, though it was never used, as the client’s enthusiasm for the work had by then disappeared.18

Research shows that even worst-case estimates are still shorter than reality. One study asked psychology students to predict how long it would take to complete their senior theses. The average was around 34 days. Then they were asked to predict duration ‘if everything went as well as it possibly could’ and ‘if everything went as poorly as it could’, the results being around 27 and 47 days respectively. The actual average turned out to be 55 days with around 30% completing within the time they predicted.19
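The scale of the gap in the thesis study can be expressed as simple overrun ratios. A quick illustrative calculation, using the rounded figures quoted above:

```python
# Figures from the thesis study quoted above (Buehler et al., rounded).
predicted_avg = 34   # students' average prediction, in days
worst_case_avg = 47  # average 'if everything went as poorly as it could'
actual_avg = 55      # actual average completion time

# Actual time exceeded the average prediction by ~62%, and even the
# average worst-case estimate by ~17%.
overrun_vs_prediction = actual_avg / predicted_avg   # ~1.62
overrun_vs_worst_case = actual_avg / worst_case_avg  # ~1.17
```

In other words, reality overshot not just the central estimate but the stated worst case, which is the striking finding of the study.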

Hofstadter’s Law: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.”20

We may know that things took longer than expected last time, but we are confident that this time things will be better, perhaps in the belief that we learn by experience. We treat previous situations as the exception when in fact they are the norm. We have a tendency to attribute past prediction failures to reasons outside our control or to dismiss them as one-offs. Project plans are commonly built on a forward view, one that expects things to go to plan, and far less commonly on comparative experience of similar endeavours.

If a project’s best guess is the same as its best-case scenario, there is a high risk that the estimate will prove inaccurate when tested in the real world.

It would seem reasonable to think that it is impossible to foresee all the issues that will arise in a project and to understand their impact on time and cost. Thinking about the categorizations provided by the ‘Johari Window’ method, brought into the popular consciousness by Donald Rumsfeld, we may know that we have ‘known unknowns’, but by definition ‘unknown unknowns’ are beyond comprehension. We also tend to discount the risk of low-probability events with high impact.

We may fail to take into account the impact of resource gaps that occur through sickness, holidays or people leaving a team. These gaps reduce a project’s capacity, but they can also affect its decision-making ability at crucial moments. Project managers may try to build these planned and unplanned resource gaps into project plans, but it is not uncommon for approval bodies to challenge their inclusion. And when people are replaced on projects, it takes time for them to become productive.

In project, programme and change management we are often embarking on journeys to create new and bespoke solutions to unique problems, with teams that may not be used to working with each other. Sponsors may not be as knowledgeable or as experienced in the project domain as they are in their ‘day jobs’. In that sense we have a good environment for ignorance, one in which overconfidence can flourish.

Authorizing continued investment

6.1 ESCALATION COMMITMENT AND SUNK COST FALLACY

Escalation Commitment: ‘the continued commitment and allocation of resources to a course of action that is failing in the hope of recouping losses. Also called creeping commitment.’21

Sunk cost fallacy: when one continues “a behaviour or endeavour as a result of previously invested resources (time, money or effort).”22

In other words, the continued course of action is justified by the money already spent rather than by an objective view of expected future costs and returns.

Managers and politicians may want to avoid a situation where the outcome of a project is an absolute failure, being a total write-off with zero value delivered. Continuing to invest may re-calibrate the sense of wasted expenditure, with the size of additional budget required to keep going feeling relatively small. It can also appear safer from a career perspective to continue to invest in a venture, even if that is irrational in purely economic terms, if some value can be derived and in order to avoid the embarrassing consequences of complete abandonment.

The case of the Garden Bridge in London is a good example of the fallout from a decision to halt a project, based on a re-assessment of the business case, and to write off a significant amount of public money.

The report by the former chair of the Public Accounts Committee concluded that ‘What started life as a project costing an estimated £60m is likely to end up costing more than £200m’23. It was therefore decided to stop the project at that point with £37 million being written off. A very public blame game between the current and previous London Mayor ensued.

PRINCE2® requires the continued justification of the business case to be confirmed at the end of each stage, with the expectation that the project is closed prematurely if the justification is no longer met. Although premature closure is the expected, and indeed the economically rational, course of action, the hypothesis is that it occurs less often than our frameworks would suggest. Are project teams and project boards too close to their own projects to make an objective judgement?

Reviewing and learning lessons

7.1 HINDSIGHT BIAS

‘The tendency to see events that have already occurred as being more easily predicted than they would have been before taking place.’24

When we say ‘I knew it!’, we are often displaying a classic case of hindsight bias. A further, more serious, example is the criticism often levelled at intelligence services when a terrorist attack occurs. Although it is easy to plot backwards from an actual attack through known people’s histories and build a picture of their motivations on a trajectory towards a future event, it is a totally different proposition to build forward from ambiguous pictures of multiple suspects to a definitive future event.

Where people have not reached a desired outcome, they may use ‘retroactive pessimism as a defense mechanism, concluding that chances of success were not too good to begin with.’25 People may believe incorrectly that failure was inevitable and unavoidable or, at the very least, that it was impossible to foresee the challenges.

When seeking to improve the success of future projects by learning from experiences, a risk is that people often believe the end result was inevitable. They therefore may not look for lessons and feel they cannot learn as much from the situation. An element of groupthink may also start to emerge: it is likely that those involved in a failed project will find comfort in thinking that the outcome was inevitable. It is after all a protection mechanism, in order to save face, both to colleagues in the organization and to oneself.

7.2 CHOICE SUPPORT BIAS

The tendency to attribute more positive features to chosen options than to foregone options.26

Studies in this area show that when people reflect on previous decisions, they enhance the positive aspects of their choice. We tend to enhance the benefits of the decision made and, in some situations, minimize the other options that were available but not selected. We sometimes attribute additional positives that did not exist to the choice we made and additional negatives that did not exist to the options we forewent. This distortion can reduce regret about decisions made, potentially supporting people’s well-being and acting as a defence mechanism. However, it reduces accountability for decisions made and, in particular, limits the possibility of learning from past experience.27 There is a link to the self-serving bias, where people “claim more responsibility for successes than failures.”

Tools and techniques for countering the effects of cognitive biases

Dilip Soman, in his University of Toronto BE101x course on Behavioural Economics, looks into overarching strategies for overcoming these biases28. There are two main options for countering cognitive biases: either help people make better choices or make the environments in which they are making decisions easier to navigate. These options are called debiasing and rebiasing. Debiasing involves fixing the cause; rebiasing involves imposing a secondary mechanism to cancel out the original bias. Rebiasing is more commonly known as ‘nudging’, after the Thaler and Sunstein book Nudge: Improving Decisions About Health, Wealth and Happiness.29 Richard Thaler’s contributions to behavioural economics earned him the Nobel Memorial Prize in Economic Sciences in 2017.

Debiasing covers three main approaches:

  • Making people more motivated to be correct:
    • Incentivization
    • Enhanced accountability
    • Tapping into the desire to be socially correct by using peer pressure and making public commitments
  • Cognitive strategies:
    • Training people to think differently and better
    • Using templates that encourage capturing the right data to make a decision
    • Model-based approaches. These involve looking back at a number of previous experiences, deciding what are thought to be the most important factors and then creating models and processes that better account for those factors in future decision-making
  • Technological strategies:
    • Case-based decision support systems. These involve finding the single most similar previous example and predicting that the outcome of that historic experience will hold for the current one
    • Utilizing online databases to provide the right data

Rebiasing (nudging) has four dimensions:

  • Boosting self-control (the right behaviour is known but staying committed is challenging) or activating a desired behaviour (steering to an unknown but correct path)
  • Self-imposed (the individual knows they have a self-control issue) vs externally imposed (imposed by a third party)
  • Mindful (encouraging cognition and better thinking) vs Mindless (less thinking and more automated)
  • Encouragement (of a desired behaviour) vs Discouragement (of a competing activity getting in the way of making better decisions)

There are also a number of specific strategies for overcoming particular biases.

Geologists at Royal Dutch Shell aimed to reduce their overconfidence by considering previous case studies. However, Daniel Kahneman is not optimistic about our ability to overcome overconfident optimism. He says that:

‘The main obstacle is that subjective confidence is determined by the coherence of the story one has constructed, not by the quality and amount of information that supports it.’30

In essence, we construct a narrative which is then difficult to ignore.

His counter-suggestion is a ‘premortem’, a concept created by Gary Klein. At the point of decision-making, a workshop is called. Participants are asked to project themselves forward by one year and imagine that the proposed plan was instigated but that the outcome has been a disaster. They are then invited to explain why that was the case. This is a very useful approach to reducing groupthink, creating a setting in which doubts about a decision can be voiced without being seen as disloyalty or as undermining the judgement of a leader. The premortem legitimizes doubt and encourages all involved to consider possible threats, reducing the opportunity for optimism to go untested.31

PRINCE2 promotes the idea of looking for lessons from previous projects, but these are likely to be qualitative lessons drawn from lessons logs and reports. One way to enhance this is a technique known as reference class forecasting. This builds a dataset of projected time and cost for different types of project versus what happened in reality, and identifies the ratios between estimates and actuals. These ratios can then be applied to new project plans to gain a more realistic view of the likely outcome. Because this approach is data-driven, it is a more robust way of communicating this consistently demonstrated problem to decision-makers. A more granular approach would be to keep track of how many days or weeks are delivered exactly as per the project plan and to use this ratio to obtain a more realistic view. Using more quantitative data in this way may improve project planning significantly.
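As a sketch of how such a ratio could be derived and applied (the data, function names and use of a simple mean ratio are illustrative assumptions, not prescribed by the technique):

```python
# Reference class forecasting sketch. Historic projects are recorded as
# (estimated duration, actual duration) pairs; the uplift ratio derived
# from them is applied to a new project's raw estimate. All figures
# below are hypothetical.

def uplift_ratio(history):
    """Mean ratio of actual to estimated duration across past projects."""
    ratios = [actual / estimate for estimate, actual in history]
    return sum(ratios) / len(ratios)

def reference_class_forecast(raw_estimate, history):
    """Adjust a raw estimate using the reference class uplift ratio."""
    return raw_estimate * uplift_ratio(history)

# Hypothetical reference class: (estimated weeks, actual weeks)
past_projects = [(20, 28), (12, 18), (30, 45), (16, 20)]

raw_estimate = 24  # weeks, taken from the new project's plan
adjusted = reference_class_forecast(raw_estimate, past_projects)
# With this data the uplift ratio is ~1.41, giving an adjusted
# forecast of ~34 weeks.
```

A richer implementation would use the distribution of ratios, for example an uplift percentile chosen to match the organization’s appetite for risk of overrun, rather than a simple mean.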

One rule of thumb for improving the accuracy of forecasts is to make a credible estimate and then double it. Although a doubled estimate is unlikely to be presented to the board, when a project presents a time plan it might be useful for board members to apply the rule themselves to gauge what might be more realistic in practice, and to consider the question: “if this project took twice as long and cost twice as much, what would be the impact?”

Decision-makers need to be wary of creating an echo chamber to only reflect back welcome opinions. Projects need to be more open to people holding opposing views or who play devil’s advocate. Dilip Soman suggests getting advice from different perspectives particularly from people that are different from you.32 This strategy involves getting an external viewpoint.

Another strategy Soman proposes is that decision-makers should actively seek several reasons why a proposal might be wrong; this means looking for counterfactuals. He also notes that, when comparing opinions, we commonly make binary choices: which of the two is correct? Averaging the opinions, he states, is the better strategy.
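Averaging, as Soman suggests, rather than picking one of several competing estimates, can be as simple as the following (the numbers are hypothetical):

```python
# Four reviewers give independent duration estimates, in weeks.
# Rather than debating which single estimate is 'correct', take the mean;
# independent errors tend partly to cancel out in the average.
estimates = [10, 14, 18, 12]

average_estimate = sum(estimates) / len(estimates)
# average_estimate is 13.5 weeks
```

The estimates should be gathered privately before any group discussion, as in Kahneman’s curriculum anecdote above, so that anchoring on the first voiced number is avoided.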

In terms of capturing lessons, external analysis may help reveal and communicate more lessons than the current processes. In PRINCE2, project assurance is a function of the project board or delegated from them to others. Given the issues identified in this paper, should project assurance be a role for the sponsoring group to also consider?

Lessons reports are likely to be written by those working on the project. Even if they are well placed to comment, given their proximity to what happened, they may not have the necessary objective view of the real causes of issues. They may also feel some pressure to downplay criticism of the project, the project team and the project board, given that the project board itself approves the report. Post-project structured debriefing workshops33 are useful for looking back at the project from different perspectives. Run by a facilitator, the project team, and in some cases other stakeholders, gather for a 90-minute session during which the questions ‘what went well?’, ‘what didn’t go so well?’ and ‘what would be done differently next time?’ are answered. After action review workshops34 are a similar concept. All attendees participate and, as a result, more perspectives are given and heard. Dialogue enables a more open discussion than a written report often allows. A short report is then produced, capturing the points raised. Arguably more learning, across more people, takes place via this approach than via traditional lessons reports.

Conclusion

In part one of this White Paper, we showed that the classic project management view of how to reduce project failure is by no means the whole picture. More, or better, adherence to process will not reduce project failure to the extent we would wish or may think is possible. Experience and data demonstrate this. There may be diminishing returns in following this course of action. Part one argued that there may be a genuine opportunity to do better. Although change management as a discipline contributes towards understanding how people act in the real world, it tends to focus on ‘them’, the actors and stakeholders outside the project team, rather than the ‘us’, the project team itself and how it plans and makes decisions. We may be relying too much on the effectiveness of a command and control approach and not considering other elements of organizational theory.

Part two has looked to borrow from psychology and behavioural economics and to show the role that cognitive biases may be playing in project management. These cognitive biases are caused by heuristics which are ingrained human traits. Because those traits are operating at the subconscious level they are very challenging to influence. But being aware of them, using debiasing and rebiasing to take their effects into account and, also by using specific techniques, could enable a reduction in project failure.

Some work has been done in looking at project planning and why projects fail through the lens of cognitive biases35, but there is certainly more to be done in understanding their scale and impact. There are particular opportunities for the industry to create datasets around ideas like reference class forecasting. Utilizing these approaches may help us achieve project objectives and deliver successful projects more consistently and effectively, thereby meeting the ambitious objectives and aspirations set out by our industry leaders.

Questions for reflection

10.1 FROM PART ONE

Thinking about Morgan’s Organizational Metaphors, do you see traits of political systems, organisms and flux and transformation as well as the machine metaphor in your projects?

Have you seen examples of strategic misrepresentation where benefits are deliberately overstated and/or costs deliberately understated to suit a particular preferred course of action?

Are projects open enough to considering inconvenient facts and opinions or are they side-lined or ignored? Do projects stop as they should, based on rational economic arguments or are reasons found to press ahead?

10.2 FROM PART TWO

Can you identify situations where you have seen any of the cognitive biases in this paper demonstrated on projects you have worked on?

Do projects look for counterfactuals (“why shouldn’t this decision be approved?”) and reasons not to go ahead at key decision-making points, or is the bias towards finding reasons why they should?

How can the ‘outside view’ thinking be brought into projects more often?

Are there opportunities to conduct more structured debriefings or after action reviews to complement or inform the lessons reports that are written?

How can the industry develop and share more aggregated, quantitative information to support reference class forecasting?

End Notes

1 Haidt, J. (2007) The Happiness Hypothesis: Putting Ancient Wisdom to the Test of Modern Science. Arrow Books: London.

2 www.psychologyconcepts.com/confirmation-bias/ [accessed: 24 April 2018].

3 Shafir, E. (1993), ‘Choosing versus rejecting: why some options are both better and worse than others’, Memory and Cognition, 21 (4): 546–56.

4 www.psychologyconcepts.com/anchoring/ [accessed: 24 April 2018].

5 Ariely, D. (2008) Predictably Irrational, Harper Collins: London. Pp. 26.

6 www.psychologyconcepts.com/optimism-bias/ [accessed: 24 April 2018].

7 Weinstein, N. D. (1998). ‘Accuracy of smokers’ risk perceptions’. Annals of Behavioral Medicine, 20(2), 135-140.

8 www.psychologyconcepts.com/self-illusory-bias/ [accessed: 24 April 2018].

9 Svenson, O. (1981). ‘Are we all less risky and more skillful than our fellow drivers?’ Acta Psychologica, 47(2), 143-148.

10 Cross, K. P. (1977), ‘Not can, but will college teaching be improved?’ New Directions for Higher Education, 1977: 1–15.

11 www.psychologyconcepts.com/overconfidence-effect/ [accessed: 24 April 2018].

12 Kahneman, D. (2011) Thinking Fast and Slow, Penguin: London. Pp. 263.

13 S&P refers to Standard and Poor’s Financial Services LLC and specifically the S&P 500 a US Stock Market Index based on the 500 largest companies on the New York Stock Exchange or NASDAQ.

14 Kahneman, D. (2011) Thinking Fast and Slow, Penguin: London. Pp. 263.

15 Kahneman, D. (2011) Thinking Fast and Slow, Penguin: London. Pp. 263.

16 Kahneman, D (19 October 2011). ‘Don’t Blink! The Hazards of Confidence’. New York Times. Available from: https://www.nytimes.com/2011/10/23/magazine/dont-blink-the-hazards-of-confidence.html [accessed: 24 April 2018].

17 Buehler, R., Griffin, D., Ross, M. (1994), ‘Exploring the “planning fallacy”: Why people underestimate their task completion times’, Journal of Personality and Social Psychology, 67(3), pp. 366-381.

18 Kahneman, D. (2011) Thinking, Fast and Slow, Penguin: London. pp. 245-247.

19 Buehler, R., Griffin, D., Ross, M. (1994), ‘Exploring the “planning fallacy”: Why people underestimate their task completion times’, Journal of Personality and Social Psychology, 67(3), pp. 366-381.

20 Hofstadter, D. (1979) Gödel, Escher, Bach: An Eternal Golden Braid, Basic Books: New York.

21 https://psychologydictionary.o... [accessed: 24 April 2018].

22 https://www.behavioraleconomics.com/mini-encyclopedia-of-be/sunk-cost-fallacy/ [accessed: 24 April 2018]; Arkes, H. R., Blumer, C. (1985), ‘The psychology of sunk cost’, Organizational Behavior and Human Decision Processes, 35, pp. 124-140.

23 https://www.theguardian.com/uk-news/2017/aug/14/london-garden-bridge-project-scrapped-sadiq-khan [accessed: 24 April 2018].

24 www.psychologyconcepts.com/hindsight-bias/ [accessed: 24 April 2018].

25 http://isiarticles.com/bundles/Article/pre/pdf/38216.pdf [accessed: 24 April 2018].

26 Mather, M., Johnson, M. K. (2000), ‘Choice-supportive source monitoring: Do our decisions seem better to us as we age?’, Psychology and Aging, 15(4), pp. 596-606.

27 Mather, M., Shafir, E., Johnson, M. K. (2000), ‘Misremembrance of options past: Source monitoring and choice’, Psychological Science, 11(2), pp. 132-138.

28 Soman, D. Behavioural Economics in Action, University of Toronto BE101x, course available from: www.edx.org/ [accessed: 24 April 2018].

29 Thaler, R., Sunstein, C. (2008) Nudge: Improving Decisions about Health, Wealth and Happiness, Yale University Press: New Haven.

30 Kahneman, D. (2011) Thinking, Fast and Slow, Penguin: London. p. 264.

31 Kahneman, D. (2011) Thinking, Fast and Slow, Penguin: London. p. 264.

32 Soman, D. Behavioural Economics in Action, University of Toronto BE101x, course available from: www.edx.org/ [accessed: 24 April 2018].

33 http://www.civildefence.govt.nz/assets/Uploads/publications/is-06-05-organizational-debriefing.pdf [accessed: 24 April 2018]; http://www.wiltshire.police.uk/information/documents/publication-scheme/policies-and-procedures/crime-reduction-and-delivery/523-critical-incident-debrief-procedure/file [accessed: 24 April 2018]; https://www.app.college.police.uk/app-content/operations/briefing-and-debriefing/ [accessed: 24 April 2018]; https://hbr.org/2015/07/debrie... [accessed: 24 April 2018].

34 https://www.cebma.org/wp-content/uploads/Guide-to-the-after_action_review.pdf [accessed: 24 April 2018].

35 https://www.nao.org.uk/report/optimism-bias-paper/?utm_source=LinkedIn&utm_medium=Post&utm_campaign=2013-optimism-bias [accessed: 24 April 2018].

About the Author

Richard Caton has been involved in project and programme management for over 20 years. He is currently Head of the Regeneration Divisional Programme Office at the London Borough of Hackney. Previously at Hackney he managed programmes relating to organizational culture change, the transformation of adult social care and elements of the borough’s response to London 2012, as well as developing Hackney’s approach to project management.

Richard is a champion of sharing and developing PPM best practice. He was a founder of the Project and Programme Management Community of Practice, an online network, primarily for local and regional government, for developing and sharing best practice, which has grown to over 1,800 members. He is also Chair of the London PPM Forum for local and regional government PPM practitioners. He is a regular reviewer of PPM best practice guides, including P3O, P3M3 and Management of Portfolios, and contributes to sector best practice such as the APM’s Public Sector Group Research Report.

Over the past five years, Richard has studied psychology and behavioural economics modules through institutions such as Duke University, the University of Toronto and the University of Queensland via Coursera, edX and other MOOC (massive open online course) platforms. His interest in these fields and their potential application to better understanding the causes of project failure has culminated in this White Paper.
