Failure itself isn’t that interesting. What happens after it is.

In modern systems, failure isn’t the end. It’s expected. Links drop. Nodes disappear. Data becomes incomplete. And still… the mission continues.

So the real question is not: “Does it work?”
But: 👉 what happens when it stops working as intended?

• How does the system adapt?
• What information is still usable?
• What decisions are made under uncertainty?
• How do operators respond?

That’s where things break. Not at the moment of failure. But in what happens next.

We don’t need better tests. We need a better understanding of how systems behave under loss.
Gert-Jan van den Ham’s Post
More Relevant Posts
-
Drift is how complex systems begin to fail. Not through collapse. Through misalignment.

In large systems, signals travel through many layers:

• infrastructure
• organizations
• algorithms
• human decisions

At first the deviations are small. A delay here. A slightly distorted signal there. A decision made with incomplete information. Nothing dramatic.

But complex systems amplify small differences. Signals that once moved in synchrony begin to separate. Decisions start reacting to outdated states. Resources move according to yesterday’s reality. Friction slowly accumulates inside the system.

From the outside everything still appears functional. But internally coherence is already drifting. And when enough drift accumulates, the system begins to behave unpredictably. Not because it lacks intelligence. But because its signals are no longer aligned.

Tomorrow: Structural Friction.
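One ingredient of this drift can be made concrete: decisions reacting to outdated states. A minimal sketch, assuming a fixed observation lag (the `LAG` value and the drift measurement are illustrative, not from the post):

```python
from collections import deque

LAG = 3  # steps of delay between reality and the signal a controller sees


def run(steps: int, lag: int) -> list[float]:
    """Track |true state - observed state| when the observation lags."""
    true_state = 0.0
    pipeline = deque([0.0] * lag, maxlen=lag)  # signal still in transit
    drift = []
    for _ in range(steps):
        true_state += 1.0            # reality keeps moving
        observed = pipeline[0]       # controller sees yesterday's reality
        pipeline.append(true_state)  # today's signal enters the pipeline
        drift.append(true_state - observed)
    return drift


print(run(6, LAG))  # [1.0, 2.0, 3.0, 3.0, 3.0, 3.0]
```

Nothing here is broken, yet every decision is made against a state that is permanently a few steps behind: a small, constant friction that the system never reports as a failure.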
-
Two systems see the same signal. They don’t arrive at the same moment.

Most adaptive systems are optimized to respond, not to interpret. It looks consistent. It isn’t.

Until you realize hesitation isn’t a single state.
Sometimes it’s uncertainty.
Sometimes it’s recall effort.
Sometimes the load is already too high.
And sometimes… it’s actual thinking.

Same signal. Different state.

Treat them as one, and timing starts to drift. Not abruptly. Quietly. Sometimes too early. Sometimes exactly when it shouldn’t.

Not because the system fails. But because it decides before it fully understands what’s happening.

This is where most systems go wrong.
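The distinction the post draws can be sketched directly: instead of treating every delay as one state, map it to a cause first. All names here (`HesitationCause`, `Probe`, the thresholds) are illustrative assumptions, not an existing API:

```python
from dataclasses import dataclass
from enum import Enum, auto


class HesitationCause(Enum):
    UNCERTAINTY = auto()     # low confidence in the answer
    RECALL_EFFORT = auto()   # had to fetch from slow storage
    OVERLOAD = auto()        # load is already too high
    DELIBERATION = auto()    # actual thinking


@dataclass
class Probe:
    confidence: float  # 0..1, how sure the peer is of its answer
    load: float        # 0..1, current resource pressure
    cache_miss: bool   # answer was not immediately available


def interpret(p: Probe) -> HesitationCause:
    """Map one observed delay to a cause, rather than reacting to the delay itself."""
    if p.load > 0.8:
        return HesitationCause.OVERLOAD
    if p.cache_miss:
        return HesitationCause.RECALL_EFFORT
    if p.confidence < 0.5:
        return HesitationCause.UNCERTAINTY
    return HesitationCause.DELIBERATION
```

A responder that branches on the cause (back off on `OVERLOAD`, wait on `DELIBERATION`) decides after interpreting; one that branches on the delay alone decides before it understands.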
-
Everyone talks about making systems smarter. Very few ask a more important question:

What should the system never be allowed to do?

Most architectures are capability-first: we add models, tools, agents, and then monitor, correct, and patch. This works until it doesn’t. Because the system is still free to enter states it should never reach.

There is another approach: design the system so those states cannot exist in the first place.

Not controlled. Not corrected. Excluded.

Invalid behaviour is excluded, not corrected.
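This is the type-driven design idea often phrased as "make invalid states unrepresentable." A minimal sketch under assumed names (`ClosedConnection`, `OpenConnection` are illustrative):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ClosedConnection:
    host: str

    def open(self) -> "OpenConnection":
        return OpenConnection(self.host)


@dataclass(frozen=True)
class OpenConnection:
    host: str

    def send(self, payload: bytes) -> int:
        return len(payload)  # stand-in for real I/O

    def close(self) -> ClosedConnection:
        return ClosedConnection(self.host)


# "Send on a closed connection" cannot be expressed: ClosedConnection has
# no send method. The invalid state is not checked for and corrected;
# it is excluded by construction.
conn = ClosedConnection("example.org").open()
print(conn.send(b"ping"))  # 4
```

The same idea scales up: instead of runtime guards that catch a forbidden state after the system enters it, the state space is carved so the forbidden state has no representation at all.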
-
Most data teams are trying to improve reliability by changing tools. That rarely works.

Reliability doesn’t come from the platform. It comes from whether you can answer two questions:

• What changed?
• What will happen when this runs?

If those answers aren’t clear, everything feels fragile. If they are, even legacy systems become predictable.
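The first question is answerable with something as plain as a diff of the deployed and proposed configuration before anything runs. A minimal sketch (the config keys and `what_changed` helper are illustrative assumptions):

```python
def what_changed(deployed: dict, proposed: dict) -> dict:
    """Return added / removed / modified keys between two config snapshots."""
    return {
        "added":    sorted(proposed.keys() - deployed.keys()),
        "removed":  sorted(deployed.keys() - proposed.keys()),
        "modified": sorted(k for k in deployed.keys() & proposed.keys()
                           if deployed[k] != proposed[k]),
    }


deployed = {"source": "orders_v1", "batch_size": 500}
proposed = {"source": "orders_v2", "batch_size": 500, "dedupe": True}
print(what_changed(deployed, proposed))
# {'added': ['dedupe'], 'removed': [], 'modified': ['source']}
```

Nothing platform-specific is involved: the discipline of producing this answer before every run is what makes the pipeline predictable, on any tool.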
-
Three real stories. Three different decades of technology. The tools all worked. Same ending every time.

(𝘈𝘯𝘵𝘩𝘳𝘰𝘱𝘪𝘤 𝘫𝘶𝘴𝘵 𝘱𝘶𝘣𝘭𝘪𝘴𝘩𝘦𝘥 𝘥𝘢𝘵𝘢 𝘵𝘩𝘢𝘵 𝘦𝘹𝘱𝘭𝘢𝘪𝘯𝘴 𝘸𝘩𝘺. 𝘚𝘭𝘪𝘥𝘦 6.)

Swipe →
-
What I described earlier is not an observation. It’s a missing layer.

Most systems operate on:

Signal → Authority → Execution

And assume that’s enough. It isn’t.

There is a boundary between authority and action. A layer that determines not whether something can execute, but whether it is admissible to execute now.

Full model:

Signal → Authority → Admissibility → Execution

Without this layer, systems don’t fail. They drift. Because the problem is not capability. It’s timing.
-
Why systems fail

Most systems don’t fail because of technology; they fail because of misaligned incentives, unclear ownership, and weak governance.

When you fix the system, people perform better. When you ignore the system, even the best people struggle.
-
We keep trying to fix systems at the level where symptoms appear. Better models. Better controls. Better explanations. But something doesn’t add up.

Because systems don’t drift when something breaks. They drift when everything keeps working. Each decision is valid. Each step is admissible. Each output is correct. And yet, the system moves in the wrong direction. Not by error. By continuity.

That’s the uncomfortable part. Because nothing fails at the interface. The failure happens in something harder to see: the fidelity of the signal that informs the system about reality. Feedback weakens. Signals get filtered. Latency increases. And by the time misalignment becomes visible, the system is already operating outside its viable domain.

So the problem is not only maintaining alignment. It’s preserving the conditions under which alignment can still be detected. Because once that degrades, the system doesn’t just drift from reality. It loses the ability to notice that it has. And at that point, correction is no longer control. It’s reaction.

The shift is subtle: from optimizing outputs to designing conditions of admissible execution. From asking “Is the system correct?” to asking “What constrains what is allowed to become real?”

That’s where coherence stops being a property… and becomes a requirement for viability.
-
“In our current moment we face a new crisis, one that affects our minds more than our bodies: the negative impact of digital technology on our ability to think.” https://lnkd.in/erc9FDfg