Humans Invent Systems and Then Pretend Those Systems Absolve Them
Written by Kurt Stuchell

Humans have always built systems to manage uncertainty. Long before computers, we created rules, hierarchies, procedures, and institutions to keep things from falling apart. Systems exist because individuals get tired, emotional, biased, inconsistent, and overwhelmed. In theory, systems bring order.

And for a while, they usually do.

But over time, something predictable happens. The system stops being a tool and starts becoming a shield.

When outcomes are good, humans take credit. When outcomes are bad, responsibility quietly shifts. “I didn’t decide that.” “That’s just how the process works.” “My hands were tied.” “The system required it.”

This isn’t a modern failure. It’s a human one.

Humans invent systems and then pretend those systems absolve them.

Bureaucracies mastered this move decades ago. Paperwork, forms, approval chains, and compliance language made it possible for no one to feel responsible even when everyone was involved. Corporations refined it further by layering decision-making behind policies, committees, and legal departments.

Technology didn’t create this instinct. It just perfected it.

Now, instead of “procedure,” we say “the algorithm.” Instead of “company policy,” we say “the model.” Instead of “management decided,” we say “the system flagged it.”

The language sounds neutral. Objective. Scientific.

But neutrality is doing a lot of work here.

Systems don’t decide what matters. Humans do. Systems don’t choose acceptable risk. Humans do. Systems don’t determine who bears the cost of being wrong. Humans do.

Every system contains values, whether we admit it or not.

Someone decides what the system optimizes for. Someone decides what data is included and what gets ignored. Someone decides when human judgment is allowed and when it’s discouraged. Someone decides what happens when the system fails quietly versus when it fails publicly.

And someone decides whether those decisions are ever revisited.

Even choosing not to look too closely is a choice.

This is why modern systems feel so unsettling to people. It’s not that they’re complex — humans have always lived with complexity. It’s that responsibility feels harder to locate. Power feels distant. Outcomes feel automated, but accountability feels missing.

That creates the illusion that authority has vanished into the machine.

It hasn’t.

Power does not disappear when systems get complicated. It migrates upward. It becomes quieter. It hides behind layers of technical language and procedural distance. The more complex the system, the easier it is to say, “That’s just how it works.”

But systems don’t “just work.” They work because people authorize them to work that way.

This matters most when things go wrong.

When harm occurs, when decisions affect real lives, when errors are no longer theoretical, the question isn’t whether the system performed as designed. The real question is whether the design itself can be defended by the people who approved it.

That’s why the most important question hasn’t changed, even in an age dominated by technology and AI:

Who decided this?

Not as an accusation. Not as a witch hunt. But as a grounding question that keeps responsibility anchored to reality.

Because if no one decided, then no one is accountable. And if no one is accountable, systems don’t become smarter or safer — they become unanswerable.

That’s the danger. Not intelligence. Not automation. But abdication.

Progress doesn’t mean removing humans from responsibility. It means making sure responsibility remains visible, traceable, and owned — especially when decisions are filtered through systems that feel impersonal.

We don’t need fewer systems. We need fewer excuses hiding behind them.


Tools change. Power doesn’t disappear. Someone always decides. My voice exists to make sure we can still see who that is.
