In this article, I was going to talk about survivorship bias. But in these strange and difficult times of lockdowns, social distancing and state-approved daily exercise, there’s only one possible topic of conversation.
You guessed it, it’s co…
Oh, alright then. Let’s talk about cognitive biases.
In 1987, the Journal of the American Veterinary Medical Association published a rather unusual article about cats falling from high-rise buildings.
Cats, the authors observed, suffered fewer injuries when they fell from higher floors of the building. Cats falling from lower floors seemed to be less fortunate, with broken bones aplenty. Very counterintuitive.
One tragic (but hilarious) explanation was that cats relax once they reach terminal velocity, thereby absorbing the impact better. The unfortunate cat falling from a lower height would still be accelerating as it hit the ground, tensed up in a state of feline panic.
While this is a perfectly coherent hypothesis, the more realistic explanation proposed was survivorship bias.
Ask yourself this: Did the authors collect their data in a controlled experimental environment? Presumably not. Throwing kittens out of windows is ill-advised in a civilized society.
Instead, their data came from veterinary practices, where anxious owners had brought their ailing pets for treatment after a fall.
And what linked all of the cats that were brought to the vet?
Well, maybe. I don’t know.
The important link is that they all survived the fall. Deceased kitties never made it to the vet and hence were omitted from the dataset. Unsurprisingly, these were the ones that fell from the greatest heights. The dataset was therefore biased, and the conclusion was faulty.
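The mechanism is easy to reproduce. Below is a minimal sketch in Python, with entirely made-up numbers (a fixed chance of a "lucky" landing, an injury severity that otherwise grows with height, and a lethal threshold are all hypothetical assumptions, not figures from the 1987 study). Once the fatal falls are filtered out, the highest floors suddenly look the gentlest in the vet's records.

```python
import random

random.seed(42)

LUCKY_P = 0.3   # assumed chance of a soft landing (awning, bushes) at any height
LETHAL = 30     # assumed severity above which the cat does not survive

def simulate_fall(floor):
    """Return (survived, severity) for one hypothetical fall."""
    if random.random() < LUCKY_P:
        # Lucky landing: minor scrapes, independent of height.
        severity = random.uniform(0, 3)
    else:
        # Unlucky landing: severity grows with the height of the fall.
        severity = 3 * floor + random.uniform(-2, 2)
    return severity <= LETHAL, severity

def vet_dataset(floor, n=10_000):
    """Mean recorded severity and true death rate for n falls.

    The vet's dataset contains survivors only; the deceased cats
    silently drop out of the statistics.
    """
    results = [simulate_fall(floor) for _ in range(n)]
    survivors = [sev for ok, sev in results if ok]
    death_rate = 1 - len(survivors) / n
    return sum(survivors) / len(survivors), death_rate

for floor in (2, 6, 12, 20):
    mean_sev, deaths = vet_dataset(floor)
    print(f"floor {floor:2}: mean severity seen by vet = {mean_sev:5.1f}, "
          f"true death rate = {deaths:.0%}")
```

With these made-up numbers, the average severity the vet records climbs with height until the unlucky falls become fatal, then collapses: above that point only the lucky cats ever reach the clinic, while the true death rate jumps to roughly 70%. The survivors-only view and reality tell opposite stories.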
What does any of this have to do with pipelines? A lot, as usual.
As pipeline professionals, our ultimate goal is to prevent incidents, and – much like the authors of the 1987 study – we pore over historical data in an attempt to understand why they happen. So, is there anything we can learn from the cat story?
Well, in this context, survivorship bias would be a tendency to limit the scope of our studies to pipelines that have survived (i.e. never failed). I don’t think we can be accused of that. Indeed, a huge amount of effort goes into analyzing pipeline failures, and the resulting failure statistics are regularly used as a basis for risk assessments.
There is no evidence that we overlook failed pipelines as if they were dead cats.
But here’s some food for thought: What if we do the opposite? What if we focus too much on pipeline failures and not enough on their absence?
The reverse survivorship bias.
When learning about the causes of pipeline failures, we need to pay close attention to all pipelines, whether they be pipelines that narrowly avoided failure in the past, pipelines that are likely to fail in the future, or even pipelines that are entirely healthy. The data describing and explaining their condition is all valuable for decision support. That’s why ROSEN is developing predictive models that leverage condition data from tens of thousands of in-service pipelines as well as failure statistics.
After all, what if the best way to reach zero incidents is by learning from pipelines that have had zero incidents?
I know. You heard it here first.
Until next time, take care everyone.