Straight from Wikipedia: Risk is the potential that a chosen action or activity (including the choice of inaction) will lead to a loss (an undesirable outcome). The notion implies that a choice having an influence on the outcome exists (or existed). Potential losses themselves may also be called “risks”.
Now before you get all pissy about me using Wikipedia as a reference, know that I merely wanted to define what I was talking about, and the Wikipedia definition was quite suitable.
Further definitions – Let’s examine a scenario in which I give you one of two choices:
- I give you $50.
- I flip a coin. Heads – I give you $100. Tails – you get nothing.
Statistically speaking, the two choices are equal: over an infinite number of trials, each is worth $50 on average. A risk neutral person would be agreeable to either scenario. A risk averse person would choose the first option, even if it were for less than $50. A risk inclined person would choose the second option, even if the first were for more than $50. This scenario, with a known probability and a known payout, is just a simple way to define the character types I plan on using.
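If you want to see the arithmetic behind "both choices are equal," here's a quick back-of-the-envelope in Python. The dollar amounts are the $50 and $100 from the scenario above; nothing else is assumed.

```python
# Option 1: a guaranteed $50.
sure_thing = 50

# Option 2: a fair coin flip -- 50% chance of $100, 50% chance of nothing.
coin_flip = 0.5 * 100 + 0.5 * 0

# Averaged over many trials, the two options pay the same.
print(sure_thing, coin_flip)  # 50 and 50.0
```

Which one you'd actually take, of course, is exactly what separates the risk averse from the risk inclined.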
Most people are risk inclined, up until what is risked is of such great value to them that they become risk averse. The lottery is a perfect example of this. It offers terrible odds, and the payout doesn't come close to compensating for them. One version of the lottery offers a 1 in 175 million chance of winning the jackpot, with each ticket costing $1, while the actual jackpot payout averages less than $175 million. Therefore, anyone playing the lottery, whether they understand the statistics or not, is risk inclined. Now consider what would happen if tickets cost $1,000 and the jackpot had the same odds but paid out $175 billion. Nearly everyone who previously bought $1 tickets would stop buying them, because $1,000 is of great value to them. Somewhere between $1 and $1,000 per ticket, people changed from risk inclined to risk averse. If we have a risk averse culture (which I believe to be true), we can infer that the majority of decisions we make, or the outcomes that may follow, are of great value to us.
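Here's the lottery math worked out, using the round numbers from the paragraph above. I'm treating the jackpot as the only prize (real lotteries have smaller prizes too, so this is a rough sketch, not an exact accounting), and the "$170 million" average jackpot is just an illustrative figure for "less than $175 million."

```python
# 1 in 175 million odds of hitting the jackpot, per the text.
odds = 1 / 175_000_000

# $1 ticket, jackpot averaging under $175M (illustrative $170M here):
# expected return is below the ticket price -- a losing bet on average.
ev_cheap = odds * 170_000_000   # about $0.97 back per $1 ticket

# $1000 ticket, $175B jackpot: the cost scaled up 1000x and so did the payout,
# so the bet is no worse than before -- yet almost nobody would take it.
ev_expensive = odds * 175_000_000_000   # $1000 back per $1000 ticket, on average
```

The point is that the second bet is statistically identical to (actually slightly better than) the first, so the only thing that changed is how much the player stands to lose.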
Whenever we make decisions, we intrinsically go through an evaluation process:
- What is the decision to be made?
- What things affect the outcome?
- What is the probability that those things will happen?
- What are the potential outcomes?
There are many things that shape the decisions made in the VP Navy. Most can be quantified in some fashion, like the probability that the airplane will experience a catastrophic wing failure. But some of the biggest forces that shape our decision making are very difficult to quantify, like "What will the Skipper think if I do 'X' and then 'Y' happens?" Unfortunately, the many uncertainties we face often either paralyze us into inaction or make us act only to avoid negative outcomes, rather than acting to cause positive ones.
Let’s consider a detachment to San Diego: The Commodore has expressed interest in hearing about how his squadron supported a Carrier exercise. The Skipper understands full well that no matter how well the crews perform on station, anything that reaches the Commodore will be perceived as nothing more than grandstanding embellishment, because no one ever reports doing poorly. Therefore, the Skipper concludes, perhaps unknowingly, that the only way to look good is to not look bad. Remember that he’s competing against only 3 others for continued advancement.
So the Skipper has to decide how to not look bad. What things affect that outcome? The easiest performance metrics to quantify are mission completion rates, hours flown, and whether the on-station times were met. It’s not surprising, then, given that they are easy to measure, that we judge our performance almost solely on them. This, by the way, contributes to an ever-decreasing emphasis on tactical performance. The other thing that affects whether the Skipper looks bad is liberty incidents, which must be avoided at all costs. No matter how good the on-station performance is, a liberty incident negates any positive outcome.
We know that the odds of taking off on time and completing a mission are somewhere between 50 and 95%. That’s just an experienced guess, but I think everyone’s guess would fall between those two numbers. But given that there are only 5-10 flights, it’s not unreasonable for the Skipper (or det OIC) to eye a number closer to 100%. What are the odds that there will be a liberty incident? Well, experience teaches us that it’s certainly higher on detachment than at home, but probably on the order of 5% or so.
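It's worth seeing how badly the numbers work against eyeing 100%. If each flight completes independently with the same probability (an assumption, since real missions share weather, parts, and crews, but it's good enough for a gut check), the chance of a "perfect" det falls fast with the number of flights. The 50-95% range and the 5-10 flight count are the guesses from above.

```python
def p_perfect_det(per_flight: float, n_flights: int) -> float:
    """Chance every flight completes, assuming each is an independent trial."""
    return per_flight ** n_flights

# Even at the optimistic end of the 50-95% range, a 10-flight clean sweep
# is closer to a coin flip than a sure thing.
print(p_perfect_det(0.95, 10))   # about 0.60
print(p_perfect_det(0.50, 5))    # about 0.03
```

So a Skipper who expects a spotless completion rate is demanding an outcome that even his most optimistic assumptions say will usually not happen.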
A risk averse person would, believing the outcome to be very important, take measures to improve the outcome, regardless of the probability of the outcome being negative. This behavior leads to backup preflights, ops readies, and midnight curfews. This of course lowers morale, and generally makes people pissed off (if you’ve done a backup preflight at 2:00 a.m., you know what I’m talking about). Is this necessarily a bad thing? Some would argue that it’s but one small sacrifice to defend freedom. I would argue that measures like those are the actions of leaders who are paralyzed with fear about looking bad, and are unnecessary. There are more examples of risk averse behavior (like this anonymous blog). This behavior serves to undermine our real performance, lower morale, and in general make us feel like a bunch of douches.
So what exactly has driven our community into this cowardly abyss? For your convenience, I’ve created a list (feel free to comment on more and I’ll edit them in and credit your fake name):
- Promotion system
- Evaluation system
- Information availability
- Media exposure
- Too few openings at high levels
- Little opportunity for success outside the “golden path”
To be concluded…