July 21, 2012

Subjectivity in Risk Analysis


"There are known knowns; things we know we know.
There are known unknowns; things we know we don't know.

But there are also unknown unknowns – things we don't know we don't know."

—United States Secretary of Defense, Donald Rumsfeld.

And now, on to the main blog entry:

"The Unknown Unknowns"

An interesting comment on the Fukushima Daiichi disaster:

The Japanese Nuclear Commission had the following goal set in 2003: "The mean value of acute fatality risk by radiation exposure resultant from an accident of a nuclear installation to individuals of the public, who live in the vicinity of the site boundary of the nuclear installation, should not exceed the probability of about 1×10⁻⁶ per year (that is, at most about 1 per million years)".

That policy was written only 8 years before the disaster. Their one-in-a-million-years accident occurred about 8 years later.

And the disaster came only 66 years after 1945.
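Under the commission's own model, the chance of seeing such an accident in the eight years after the policy was set is easy to compute. A minimal sketch (Python, assuming a constant-hazard Poisson model; the rate used is the commission's stated target, not a measured value):

```python
import math

rate = 1e-6   # target accidents per year (the commission's stated goal)
years = 8     # time from the 2003 policy to the 2011 accident

# Constant-hazard (Poisson) model: P(at least one accident in `years` years)
p_at_least_one = 1 - math.exp(-rate * years)
print(p_at_least_one)  # ~8e-06: vanishingly unlikely under the stated model
```

That the accident nevertheless happened is the point of the quote: the model, not the dice, was wrong.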

How reliable are probability estimates for real-life systems?

First, let us look at the definition of a 'deterministic system':

"A deterministic system is a conceptual model of the philosophical doctrine of determinism applied to a system for understanding everything that has and will occur in the system, based on the physical outcomes of causality. In a deterministic system, every action, or cause, produces a reaction, or effect, and every reaction, in turn, becomes the cause of subsequent reactions. The totality of these cascading events can theoretically show exactly how the system will exist at any moment in time."

—From Wikipedia: Deterministic system

In a deterministic/mechanistic universe, all forces/variables/interrelationships operating in any system would be identifiable and quantifiable.

However, our universe is NOT a deterministic/mechanistic/discrete system in which cause-effect relationships are known or knowable.

Probabilities for discrete systems like the throw of a die or the toss of a coin, in which the outcomes are well defined, are reliable.

But for real-life systems, which are complex, non-discrete systems marked by non-quantifiable emergent properties, systems in which all the forces/variables/interrelationships involved can never be known, such probabilities cannot be reliable.

Put simply,

• The probabilities associated with simple systems like the throw of a die are known with certainty. The probability that any one face will show up is 1/6, and this cannot be disputed.

• A complex system consisting of thousands or millions of moving parts, electronics, human operators, and weather conditions is, however, a different ballgame. The probabilities calculated can never be certain; the subjectivity involved in the calculations is irreducible.
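The die claim in the first bullet is easy to verify empirically. A quick simulation (Python; the seed and sample size are arbitrary choices):

```python
import random

random.seed(0)   # fixed seed for reproducibility
n = 600_000      # number of simulated throws
counts = [0] * 6

for _ in range(n):
    counts[random.randrange(6)] += 1  # indices 0..5 stand in for faces 1..6

for face, c in enumerate(counts, start=1):
    print(face, round(c / n, 4))      # each frequency is close to 1/6 ≈ 0.1667
```

No such convergence check exists for the second bullet: there is no way to rerun Fukushima 600,000 times.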

If we look for evidence of the above, we will find plenty.

One example:

"We overestimate our ability to predict and serially assign lower than warranted probabilities to extreme events. During the week of the Lehman collapse, bank analysts claimed that they had seen 3 six sigma events (each with probability lower than 1% of 1%)!" -Policy Tensor
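Under a Gaussian model, a "six sigma" event is in fact far rarer than the "1% of 1%" the analysts cited; seeing three in one week is evidence against the model itself, not a run of bad luck. A sketch of the arithmetic (Python):

```python
import math

def normal_tail(z):
    """One-sided Gaussian tail probability P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

p6 = normal_tail(6)
print(p6)                 # ~9.87e-10, about 100,000x rarer than 1e-4
print(p6 < 0.01 * 0.01)   # True: far below "1% of 1%"
```

If events this rare keep arriving, the honest conclusion is that the distribution assumed in the calculation does not describe reality.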

The following are some examples of popular scientists/philosophers/mathematicians whose basic worldview is indeterminism. An indeterminist cannot trust probability/uncertainty calculations for real-life systems.

This is by no means an exhaustive list; there are many more:

• Richard Feynman (physicist, Nobel Prize 1965)
• Fritjof Capra (physicist)
• Benoit Mandelbrot (mathematician)
• Nassim Taleb (mathematician)
• Robert Pirsig (philosopher/metaphysician)
• Robert Laughlin (physicist)
• Werner Heisenberg (quantum physicist, Nobel Prize 1932)
• Max Born (quantum physicist, Nobel Prize 1954)
• Jacques Monod (biologist, Nobel Prize 1965)
• Murray Gell-Mann (physicist, Nobel Prize 1969)
• Ilya Prigogine* (chemist, Nobel Prize 1977)

*Argued for indeterminism in complex systems.


• "If a guy tells me that the probability of failure is 1 in 10⁵, I know he’s full of crap."

- Richard Feynman, Nobel laureate, commenting on the NASA Challenger disaster.

• "We don't know the probability. We don't have enough data, we don't have enough knowledge, we don't have reliable information."

- Benoit Mandelbrot, father of fractal geometry, commenting on the complexity of global economics.

• "We should not talk about small probabilities in any domain. Science cannot deal with them. It is irresponsible to talk about small probabilities and make people rely on them, except for natural systems that have been standing for 3 billion years." - Nassim Taleb.

• "If you compute the frequency of a rare event and your survival depends on such event not taking place (such as nuclear events), then you underestimated that probability." - Nassim Taleb.

Passages from Subjectivity in Risk Analysis by Felix Redmill (CSR.NCL.AC.UK):

• Risk values are arrived at via the process of risk analysis. In many quarters, this is assumed to be objective, and its results - the risk values - to be correct. Yet, as will be shown in subsequent sections of this report, all stages of the process involve subjectivity, in some cases to a considerable extent. Always there is reliance on judgement, and, as in all cases in which judgement is called for, there can be no guarantee that it will be made to a reasonable approximation, even by an expert. Indeed, it may be - and sometimes is - made by an inexperienced novice. The need for judgement introduces subjectivity and bias, and therefore uncertainty and the likelihood of inaccuracy.

The results obtained by one risk analyst are unlikely to be obtained by others starting with the same information.

Further, there is a natural impediment to arriving at 'correct' risk values. Although definitions of risk do not explicitly refer to time, the future is implicit in them.

• Thus, risk may be estimated but it cannot be measured (Gould et al 1988). Risk values cannot be assumed to be 'correct'.

• The decisions on how to define consequence, at both the definition-of-scope and analysis stages, are subjective. So too are the predictions of what the actual consequences might be.

• This omission of possible causes of failure (or dangerous failure, if safety is the main criterion) is not unusual, and no guarantee can be given that it has been avoided. It renders risk calculations spurious.

And finally, a passage from Techné: Research in Philosophy and Technology:

Few if any decisions in actual life are based on probabilities that are known with certainty.

• Strictly speaking, the only clear-cut cases of “risk” (known probabilities) seem to be idealized textbook cases that refer to devices such as dice, coins, or roulette wheels that are supposedly known with certainty to be fair. More typical real-life cases are characterized by uncertainty that does not, primarily, come with exact probabilities.

Hence, almost all decisions are decisions “under uncertainty”. To the extent that we make decisions “under risk”, this does not mean that these decisions are made under conditions of completely known probabilities. Rather, it means that we have chosen to simplify our description of these decision problems by treating them as cases of known probabilities.

This ubiquity of uncertainty applies also in engineering design. An engineer performing a complex design task has to take into account a large number of hazards and eventualities. Some of these eventualities can be treated in terms of probabilities; the failure rates of some components may for instance be reasonably well-known from previous experiences.

However, even when we have a good experience-based estimate of a failure rate, some uncertainty remains about the correctness of this estimate and in particular about its applicability in the context to which we apply it.
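Hansson's point about residual uncertainty in estimated failure rates can be made concrete. A standard example: if a component shows zero failures in n trials, the classical 95% upper confidence bound on its failure rate is still about 3/n (the "rule of three"). A sketch (Python; the sample size is an arbitrary illustration):

```python
def upper_bound_zero_failures(n, confidence=0.95):
    """95% upper confidence bound on failure probability p after observing
    0 failures in n independent trials: solve (1 - p)**n = 1 - confidence."""
    return 1 - (1 - confidence) ** (1.0 / n)

n = 1000
print(upper_bound_zero_failures(n))  # ~0.003: about 3/n, not zero
```

Even a flawless test record, in other words, never licenses a claim that the failure rate is zero, and says nothing about contexts the tests did not cover.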

In addition, in every design process there are uncertainties for which we do not have good or even meaningful probability estimates.

This includes the ways in which humans will interact with new constructions. As one example of this, users sometimes "compensate" for improved technical safety by more risk-taking behaviour. Drivers are known to have driven faster or delayed braking when driving cars with better brakes (Rothengatter 2002). It is not in practice possible to assign meaningful numerical probabilities to these and other human reactions to new and untested designs. It is also difficult to determine adequate probabilities for unexpected failures in new materials and constructions or in complex new software.

We can never escape the uncertainty that refers to the eventuality of new types of failures that we have not been able to foresee.

Of course, whereas reducing risk is obviously desirable, the same may not be said about the reduction of uncertainty. Strictly interpreted, uncertainty reduction is an epistemic goal rather than a practical one.

• Many of the most ethically important safety issues in engineering design refer to hazards that cannot be assigned meaningful probability estimates. It is appropriate that at least two of the most important strategies for safety in engineering design, namely safety factors and multiple safety barriers, deal not only with risk (in the standard, probabilistic sense of the term) but also with uncertainty.

Currently there is a trend in several fields of engineering design towards increased use of probabilistic risk analysis (PRA). This trend may be a mixed blessing since it can lead to a one-sided focus on those dangers that can be assigned meaningful probability estimates. PRA is an important design tool, but it is not the final arbitrator of safe design since it does not deal adequately with issues of uncertainty. Design practices such as safety factors and multiple barriers are indispensable in the design process, and so is ethical reflection and argumentation on issues of safety. Probability calculations can often support, but never supplant, the engineer’s ethically responsible judgment.

- Safe Design by Sven Ove Hansson, Department of Philosophy and the History of Technology, Royal Institute of Technology, Stockholm.

Recommended 1: Role of Richard Feynman in the NASA Challenger disaster, Rogers Commission Report

Some quotes from the above-mentioned report:

• "Feynman was struck by management's claim that the risk of catastrophic malfunction on the shuttle was 1 in 10⁵; i.e., 1 in 100,000. Feynman immediately realized that this claim was risible on its face; as he described, this assessment of risk would entail that NASA could expect to launch a shuttle every day for the next 274 years while suffering, on average, only one accident."
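Feynman's arithmetic checks out: at one launch per day, a 1-in-100,000 per-launch failure probability means one accident roughly every 274 years (Python):

```python
flights_per_accident = 100_000   # NASA management's claimed odds
days_per_year = 365.25

years_between_accidents = flights_per_accident / days_per_year
print(round(years_between_accidents))  # 274
```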

• Feynman was disturbed by two aspects of this practice. First, NASA management assigned a probability of failure to each individual bolt, sometimes claiming a probability of 1 in 10⁸; that is, one in one hundred million. Feynman pointed out that it is impossible to calculate such a remote possibility with any scientific rigor. Secondly, Feynman was bothered not just by this sloppy science but by the fact that NASA claimed that the risk of catastrophic failure was "necessarily" 1 in 10⁵. As the figure itself was beyond belief, Feynman questioned exactly what "necessarily" meant in this context—did it mean that the figure followed logically from other calculations, or did it reflect NASA management's desire to make the numbers fit?

• Feynman suspected that the 1/100,000 figure was wildly fantastical, and made a rough estimate that the true likelihood of shuttle disaster was closer to 1 in 100.
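With hindsight, the two estimates can be compared against the shuttle program's eventual record: 2 orbiters lost in 135 flights, a rate of about 1 in 67. Expected losses over 135 flights under each estimate (Python; the per-flight probabilities are those quoted above):

```python
p_nasa = 1e-5       # management's claimed per-flight failure probability
p_feynman = 1e-2    # Feynman's rough estimate
flights = 135       # total Space Shuttle missions flown

print(flights * p_nasa)     # ~0.001 expected losses: wildly optimistic
print(flights * p_feynman)  # ~1.35 expected losses: close to the actual 2
```

Feynman's rough 1-in-100 guess turned out to be far closer to reality than the official figure.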

Recommended 2: Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident, Volume 2: Appendix F - Personal Observations on Reliability of Shuttle by R. P. Feynman
Recommended 3: Black Swan MindMap
Recommended 4: Policy Tensor
Recommended 5: Philosophy of Risk Homepage
