How the Feedback Trap Hampers Risk Management in Climbing and Outdoor Adventures

(This post may contain affiliate links, which means that if you click on one of the product links and make a purchase, I’ll receive a small commission at no extra cost to you. This helps support the channel and allows us to continue to make videos like this. Thank you for the support!)

“It’s better to be lucky than good.” We’ve probably all heard that expression. Many of us probably understand the intent but also chafe at the idea of accepting it as literally true.

What I am attempting to address in the video is that we probably can’t know the degree to which we have been “lucky” versus the degree to which we have been “good.” This is the dilemma of the “counterfactual,” or theorizing about the effects of the thing that didn’t happen. It’s very hard to remove our biases when we don’t have evidence. Since a counterfactual, by definition, didn’t happen, it can’t have left any evidence. And since there is no evidence, it is hard (though not impossible) to elevate our thinking beyond our preconceived notions and assumptions.

So, we need to develop a systematic way of analyzing our thinking in order to give ourselves at least a better chance of overcoming our biases. This is what the risk assessment process attempts to do.

There is a technical difference between risks and issues: issues have happened, while risks may happen. So assessing a risk, since it hasn’t happened yet, is an exercise in applying previous experience by analogy to a new circumstance. How do we know if we are doing that well? We can’t. But we can create a process that helps us make those assessments more objectively.

In the end, that’s what this video is about.

We can also throw another complicating factor into this equation. You will often hear people who deal with risk professionally, from engineers to soldiers to IT developers, talk about “unknown unknowns.” We have “known knowns”: in the simplest sense, the things we know about. There are also “known unknowns”: the things we know are coming but that remain variables. We know a software update is coming to our server, but we don’t know what it will do to our processing performance. Then there is the stuff that isn’t even on our radar: we have no inkling that a risk is out there, let alone what it might do to, or for, us.

Think of these like blind spots. And here is the interaction: blind spots play on our biases and assumptions. This shows up in our climbing and adventuring when we don’t know or notice that we have made a mistake. If we think of ourselves as safe and competent, it becomes easier to overlook, or miss entirely, the mistakes we aren’t really looking for. Maybe we lack a bit of that self-critical reflex.

So, back to frameworks. We can use frameworks that help us (they are no guarantee, but they help) cut through some of those biases and blind spots. Maybe the framework provided in the video will help you? It has helped me.
