Dependence, Control, and the Coming Age of AI Reliance
The most chilling thing about HAL 9000 isn't the red eye. It's not the voice. It's not even "I'm sorry, Dave. I'm afraid I can't do that."
It's the moment you realize the crew of the Discovery One couldn't function without him.
They had outsourced navigation, life support, communications, and mission planning to a system they didn't fully understand — and couldn't override without it becoming dangerous.
Kubrick wasn't predicting an AI rebellion. He was predicting something much more likely and much more mundane: irreversible dependence.
What We're Actually Building
That dependence is what makes the film feel so relevant now.
We talk about AI risk in the language of science fiction: rogue superintelligence, existential threat, machine rebellion.
These framings are both emotionally compelling and practically useless. They distract from the risks we are actually creating right now, in production systems, at companies you've heard of.
The real risks are:
- Organizations that can no longer perform their core functions without an AI system they don't understand
- Decisions made by models whose reasoning is opaque to the humans responsible for them
- Businesses that become economically dependent on external AI vendors
- Accountability structures that trail years behind technical capabilities
HAL didn't malfunction. He made a decision — coldly, logically — that the mission was more important than the crew. He was following his programming.
The horror isn't that he was wrong. The horror is that he might have been right, and no one had built the mechanisms to check.
The Business Version of the Problem
Here is the more realistic version of AI risk for most companies.
You build the workflows. You create the automations. You connect the agents. You structure the databases. You route work through external models because it is the most efficient thing to do.
Then one day the economics change.
An API provider raises prices. A vendor changes access. A model's behavior shifts. A service goes down. A contract changes. A capability you rely on becomes more expensive or less predictable.
At that point, the issue is not whether the AI has become sentient.
The issue is that your business may have become subservient to a system you do not fully control.
That is the grounded version of the HAL problem.
The Transparency Problem
Every AI system deployed in a consequential context needs to be auditable. Not "auditable in principle" — auditable in practice, by the people accountable for its decisions.
This is harder than it sounds.
Current LLMs are probabilistic systems. They don't have decision trees you can inspect. They have weights — billions of parameters — and the relationship between those parameters and a given output is not traceable in any meaningful human sense.
This isn't a reason not to deploy them. It's a reason to design around the limitation.
Auditability means:
- Logging every input, output, and tool call in a retrievable format (a minimal sketch follows this list)
- Building human review into consequential decision pathways
- Defining explicit escalation criteria for model uncertainty
- Maintaining the human expertise required to catch what the model misses
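To make the logging and escalation points concrete, here is a minimal sketch in Python. The `call_model` client, the self-reported confidence field, and `queue_for_human_review` are hypothetical placeholders, not a prescription for any particular stack. The shape is what matters: every call leaves a retrievable record, and low-confidence outputs are routed to a human before they count as decisions.

```python
import json
import time
import uuid

AUDIT_LOG = "audit_log.jsonl"
CONFIDENCE_FLOOR = 0.7  # below this, a human signs off before the output is used


def call_model(prompt: str) -> dict:
    """Hypothetical stand-in for your real model client.
    Assumes the response carries text, tool calls, and a confidence signal;
    substitute your provider's actual response shape."""
    return {"text": "draft answer", "confidence": 0.62, "tool_calls": []}


def queue_for_human_review(record: dict) -> None:
    """Hypothetical stand-in: push to whatever review queue your team actually watches."""
    print(f"Escalated for review: {record['id']}")


def audited_call(prompt: str, user: str) -> dict:
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user": user,
        "input": prompt,
    }
    response = call_model(prompt)
    record["output"] = response["text"]
    record["tool_calls"] = response["tool_calls"]
    record["escalated"] = response["confidence"] < CONFIDENCE_FLOOR

    # Retrievable format: one JSON object per line, easy to search and replay later.
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

    if record["escalated"]:
        queue_for_human_review(record)
    return record


if __name__ == "__main__":
    audited_call("Summarize the renewal risk for this account", user="analyst")
```

An append-only JSONL log is not sophisticated, but it is searchable, replayable, and hard to quietly lose, which is most of what "auditable in practice" requires.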
The Dependency Problem
This is the risk many organizations are underestimating.
Building agents, workflows, and AI-powered systems is powerful, but it is just as important to keep humans close to the data, the logic, and the process.
If the AI goes down, changes, or becomes too expensive, can your people still operate?
If your workflow depends on a model provider outside your control, can the team still answer the question, complete the task, or serve the customer without it?
If the answer is no, then the efficiency gains came with a real tradeoff.
That does not mean the tradeoff is not worth it. In many cases, it absolutely is.
But it does mean you should be honest about what you are building.
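One way to be honest about it is to make the dependency visible in the architecture. The sketch below is illustrative, not a recommended design: `ExternalModelClient` stands in for whatever vendor SDK you actually use, and `answer_from_runbook` stands in for a documented manual procedure. The point is the shape: a single seam between your process and the provider, and a human path that still produces an answer when the provider becomes unavailable or too expensive.

```python
class ProviderUnavailable(Exception):
    """Raised when the external model can't be used: outage, access change, cost cap."""


class ExternalModelClient:
    """Hypothetical stand-in for the vendor SDK your workflow actually depends on."""

    def complete(self, prompt: str) -> str:
        # Simulate the day the economics change.
        raise ProviderUnavailable("simulated outage or pricing change")


def answer_from_runbook(question: str) -> str:
    """The human path: a documented procedure your team can execute by hand."""
    return f"[manual process] routed to the on-call analyst: {question}"


def answer_customer(question: str, client: ExternalModelClient) -> str:
    try:
        return client.complete(question)
    except ProviderUnavailable:
        # The business keeps functioning: slower, but not stopped.
        return answer_from_runbook(question)


if __name__ == "__main__":
    print(answer_customer("Why was my invoice higher this month?", ExternalModelClient()))
```

If no one in the organization can write `answer_from_runbook` because the manual process has been forgotten, that is the dependency problem in a single line.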
The Accountability Gap
Who is responsible when an AI-assisted decision causes harm?
The vendor? The organization that deployed it? The team that configured the prompts? The executive who signed the contract?
Currently: no one knows. And "no one knows" is not a governance structure.
The nuclear parallel is instructive here. Nuclear technology presented similar challenges — powerful, opaque to most users, catastrophically consequential if misused. The world's response was international treaties, regulatory frameworks, liability structures, and professional certification requirements.
We don't have any of that for AI. We have Terms of Service and vibes.
What Good Looks Like
It's not anti-AI to argue for accountability structures. It's pro-AI, in the deepest sense — because the systems most likely to be shut down, banned, or destroyed by backlash are the ones that cause visible harm without clear accountability.
Good AI governance looks like:
- Transparency in what the system can and cannot do, and under what conditions it fails
- Auditability of every consequential decision
- Redundancy — maintaining the human capability to perform critical functions without AI
- Escalation pathways that are tested, not assumed (see the sketch after this list)
- Liability clarity — someone is accountable, and everyone knows who
- Economic resilience — not being so dependent on a single vendor that pricing or access changes can destabilize the business
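"Tested, not assumed" can be as simple as an outage drill that runs with your regular test suite. The sketch below restates the hypothetical fallback shape from the dependency section so it stands alone; the names are placeholders, and the assertion is the point: when the provider path is forced to fail, the organization still answers the customer.

```python
class ProviderUnavailable(Exception):
    pass


class DownProvider:
    """Stand-in vendor client for the outage drill: every call fails."""

    def complete(self, prompt: str) -> str:
        raise ProviderUnavailable


def answer_customer(question: str, client) -> str:
    try:
        return client.complete(question)
    except ProviderUnavailable:
        return f"[manual process] {question}"  # the documented human path


def test_outage_still_answers_the_customer():
    # The drill: force the provider path to fail, assert the fallback still answers.
    answer = answer_customer("Where is my order?", DownProvider())
    assert answer.startswith("[manual process]")


if __name__ == "__main__":
    test_outage_still_answers_the_customer()
    print("Fallback path works without the provider.")
```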
HAL 9000 didn't fail because he was too powerful. He failed because there were no checks on his authority and no one had maintained the ability to function without him.
Don't build your organization that way.