Risk-taking and the “Freedom to Fail”

January 5th, 2011
“Take calculated risks–that is quite different from being rash.”
General George S. Patton

Especially in high-tech, progress is made only by taking risks. Taking a risk implies the possibility of partial or complete failure. So how can we decide what risks to take, and how can we encourage the identification of those risks and the willingness to go forward with them, i.e., institute a culture of risk-taking?

One of the most important factors in encouraging risk-taking is the “freedom to fail.” That does not mean that developers should charge off with whatever wild ideas come to mind, with no thought of the likelihood of success, not to mention the cost or ultimate value of the project. Risks can and should be managed, but the organization needs to have a culture in which a project can fail without the participants being branded as failures. Everyone who takes risks will occasionally fail, and if the organization punishes those failures, all who are willing to take risks will eventually suffer punishment and will either leave the organization (voluntarily or involuntarily) or adopt the learned behavior of not taking risks. An organization staffed with people unwilling to take risks is doomed to mediocrity.

How a developer identifies reasonable risk, and how risks are managed through the development process, are subjects about which whole books have been written. What is of concern here is management’s responsibility for ensuring a culture of taking reasonable risks: look at the risks, look at the potential rewards, and decide whether to take the risks and proceed. Lip service to instructions such as Patton’s is meaningless if failure is punished, and developers will soon learn whether managers mean what they say. If they do not, developers will avoid taking risks, and the organization will make little or no progress.

So, managers must balance the developer’s urge to build whatever strikes his or her fancy against the fear of punishment if a project can’t be successfully completed or is not useful in the marketplace. What is needed is partly communication–what level of risk will be tolerated, how that level is ascertained for any proposal, and how the developer needs to manage the risk–and partly management action. Those actions include reviewing, but not micromanaging, the developer’s evaluation of risk and its subsequent management. For example, I had one Vice President at Cisco who, at every review, looked at the difference between “planned” and “committed” cost and schedule: if the difference was, in his opinion, too small, he concluded that the project was not taking enough risks or was underestimating the risks. That is, he wanted the project to establish a margin, a “management reserve”: time and budget to handle contingencies that cannot be fully accounted for in advance. Such an approach is one method of risk management, and this VP insisted on both risk-taking and this approach to managing the risk.
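
As a minimal sketch of that “planned versus committed” check, consider the following. The class name, field names, and the 10% minimum reserve are illustrative assumptions for this post, not an actual review tool or a threshold that VP used:

```python
# Sketch of a "planned vs. committed" margin check. All names and the
# 10% threshold are illustrative, not a prescribed method.

from dataclasses import dataclass

@dataclass
class ProjectPlan:
    planned_cost: float    # engineering estimate, e.g. person-months
    committed_cost: float  # figure committed to management/customer
    planned_weeks: float   # estimated schedule
    committed_weeks: float # committed schedule

    def cost_reserve(self) -> float:
        """Management reserve on cost, as a fraction of the commitment."""
        return (self.committed_cost - self.planned_cost) / self.committed_cost

    def schedule_reserve(self) -> float:
        """Management reserve on schedule, as a fraction of the commitment."""
        return (self.committed_weeks - self.planned_weeks) / self.committed_weeks

def review(plan: ProjectPlan, min_reserve: float = 0.10) -> list[str]:
    """Flag margins that look too thin: the project may be underestimating
    its risks or not taking enough of them."""
    findings = []
    if plan.cost_reserve() < min_reserve:
        findings.append(f"cost reserve {plan.cost_reserve():.0%} below {min_reserve:.0%}")
    if plan.schedule_reserve() < min_reserve:
        findings.append(f"schedule reserve {plan.schedule_reserve():.0%} below {min_reserve:.0%}")
    return findings

# Example: planned 50 person-months but committed 52, planned 40 weeks but
# committed 41 -- reserves of roughly 4% and 2%, so both are flagged.
plan = ProjectPlan(planned_cost=50, committed_cost=52,
                   planned_weeks=40, committed_weeks=41)
print(review(plan))
```

The point of the check is the direction of the inference: a suspiciously small gap between plan and commitment suggests either padding-free optimism or risk avoidance, both of which the review should surface.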

Risk management status should be part of any management review and should provide an up-to-date picture of risks resolved, new risks identified, and risks being tracked and actively managed. Managers should consider risk status carefully; if they judge that a risk has become excessive, cannot be mitigated, and cannot be accepted, then the project must be cancelled or modified to work around the risk (e.g., postpone or remove a feature that cannot be completed, at least within acceptable cost and schedule). The key point is that, if risks were honestly evaluated and managed according to a valid plan, developers should not be “blamed” for the project not being completed according to the original plan. Blame should attach only if risks were hidden, deliberately understated, or incompetently managed. Developers will immediately know the truth when they see whether punishment is handed out after a project fails: are the participants reassigned to new, promising projects, or laid off?
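
One common way to keep that picture current is a simple risk register. The sketch below is one possible shape for such a register; the states, fields, and example entries are assumptions for illustration, not a format the article prescribes:

```python
# Sketch of a risk register feeding the review picture described above.
# States, fields, and entries are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum

class RiskState(Enum):
    NEW = "new"              # identified since the last review
    TRACKED = "tracked"      # being watched and actively managed
    RESOLVED = "resolved"    # mitigated or overtaken by events
    EXCESSIVE = "excessive"  # cannot be mitigated or accepted; forces
                             # cancellation or a workaround

@dataclass
class Risk:
    description: str
    state: RiskState
    mitigation: str = ""

def review_summary(register: list[Risk]) -> dict[RiskState, int]:
    """Count risks by state for the management review."""
    summary = {state: 0 for state in RiskState}
    for risk in register:
        summary[risk.state] += 1
    return summary

register = [
    Risk("algorithm may not meet throughput target", RiskState.TRACKED,
         mitigation="prototype early; fall back to removing feature"),
    Risk("vendor library licensing unresolved", RiskState.NEW),
]
print(review_summary(register))
```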

Managers may even want to establish a project risk threshold: “This organization needs to take sufficient risks such that, even if all risks are properly identified and managed, ‘X’ percent of our projects will fail and will need to be cancelled or incur major redirection. If we are operating near this percentage, properly managed projects that fail will be considered part of the cost of doing business, and participants will continue to be regarded as valued contributors to the success of the organization.” Following a policy such as this will increase the confidence of project participants that they can take reasonable (“calculated”) risks, as well as the confidence of management that risks are being honestly discussed.
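
The arithmetic behind such a policy is simple; a rough sketch follows, where the 20% threshold and the project counts are hypothetical numbers chosen only to illustrate the two failure modes the policy guards against:

```python
# Illustrative arithmetic for a project risk threshold. The 20% figure
# and the counts are hypothetical.

def failure_rate(cancelled: int, total: int) -> float:
    return cancelled / total

threshold = 0.20  # the organization's chosen "X percent"

# Suppose 3 of 20 projects this year were cancelled or majorly redirected.
rate = failure_rate(3, 20)  # 15%

if rate > threshold:
    print("Above threshold: examine risk identification and management.")
elif rate < threshold / 2:
    print("Well below threshold: projects may not be taking enough risk.")
else:
    print("Near threshold: properly managed failures are the cost of doing business.")
```

Note that the policy cuts both ways: a failure rate far below the threshold is itself a warning sign that the organization has drifted into risk avoidance.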

“Freedom to fail” also covers the situation in which a project is developed and completed on schedule and on budget, but a management decision is made that a market for the product no longer exists. Managers may question why they failed to see that issue earlier in the project’s life cycle, but the project participants carried out their commitments properly and should not be viewed as having failed.

Admiral Chester Nimitz, commander of the US Pacific Fleet in World War II, understood both risk-taking and the freedom to fail. As for risk-taking, consider his orders before the Battle of Midway in June 1942: “In carrying out the task assigned … you will be governed by the principle of calculated risk, which you shall interpret to mean the avoidance of exposure of your force to attack by superior enemy forces without good prospect of inflicting, as a result of such exposure, greater damage on the enemy.”

As for freedom to fail, his policy was to “give a dog two bites,” i.e., to give subordinates a second chance after a mistake. Several of his subordinates made errors but were given a second chance and went on to success in important roles; several others failed twice and were dismissed or relegated to less important roles.

Merlin Dorfman specializes in software system engineering, software process improvement, and software quality engineering.  He has over 40 years of experience, with Lockheed Martin and Cisco Systems, and in writing and teaching in his areas of specialization.  He has BS and MS degrees from MIT and a PhD from Stanford, all in Aerospace Engineering, and is a registered Professional Engineer in California and Colorado.  He likes to read and to travel, and has taken adult-education courses at Stanford in Theoretical Physics and recent/contemporary history.