Here’s a simple truth: It’s better to bend than to break, and it’s best to be prepared
for the worst. This age-old wisdom goes by a new name in slide-rule circles: resilience engineering. The field
starts with the insight that it’s smart to design and maintain systems so they have some give. That means building
technologies that offer extra capacity to handle sudden loads, plenty of warning when normal operations are beginning to break
down, backup systems in case things do go wrong, diverse digital architectures so that a single bug doesn’t produce
widespread failure, and decentralization so that when (not “if”) communication breaks down, things don’t
grind to a halt.
Resilience engineering as an academic idea was born in response to the 2003 space shuttle Columbia
disaster. The spacecraft disintegrated on re-entry because its thermal protection had been damaged by a piece of foam
that broke off the external fuel tank during launch. But investigators identified a larger issue: NASA had responded to budget cuts in the 1990s by
adopting a “faster, better, cheaper” approach, launching more missions with fewer resources. Safety margins gradually
narrowed, information sharing withered and overconfidence ballooned without anyone really noticing. The organization had become
brittle and prone to disaster.
When a system looks solid year after year, it’s easy to become complacent,
like the generals behind France’s old Maginot Line—which, after all, was pretty good at keeping the Germans out,
though useless once they found another way in. It’s just a short step from complacency to pure arrogance: Why worry
about lifeboats when the Titanic is unsinkable? Resilience is about having enough lifeboats anyway.
Resilience engineering is a specialized field, but applying its principles to the ordinary world takes
only common sense.