We Knew This in 1973

March 2026 · AI, UX Design, Design Leadership, Product Design

"A computer can never be accountable, therefore a computer must never make a management decision."

Stafford Beer wrote that in 1973.

Not last year. Not in response to ChatGPT. Not as a hot take on LinkedIn. Over fifty years ago, before most working designers were born, someone already identified the core problem with handing consequential decisions to a machine.

And yet, here we are.

The irony is the point

We are watching organisations remove experienced designers from the decision-making loop and replace them with AI tools that generate interfaces, flows, and copy at speed. The output looks finished. The process looks efficient. The problem is that no one is accountable for what that output does to the person on the other end.

Beer was not talking about design specifically. He was talking about management, systems, and cybernetics. But the principle maps perfectly. Swap "management decision" for "design decision" and you have a pretty accurate description of what is happening right now in product teams across the industry.

A computer cannot be accountable. It cannot be wrong in a way that matters to it. It cannot sit across from a user who could not complete a task, or a stakeholder asking why the conversion rate dropped, or a legal team asking who signed off on that pattern. It just generates the next token.

Someone has to be accountable. That is the designer.

Polish is not the same as correctness

Here is what makes this particularly tricky right now. AI output does not look rough. It does not look like a sketch or a first draft that needs scrutiny. It looks done.

A wireframe full of gaps invites challenge. A fully rendered screen with copy, components, and a logical-seeming flow does not. It bypasses the natural friction that catches problems early.

We have always known that in UX, how something looks and how something works are two entirely different questions. AI collapses that distinction visually while leaving the gap wide open underneath. The interface can look considered and the underlying decisions can be completely unvalidated.

That is not an AI problem. That is an oversight problem.

Someone has to hold the proxy

One of the defining responsibilities of a UX designer is holding a proxy for users who are not in the room. When decisions are made in sprint planning, in a design crit, in a stakeholder review, the designer is the person asking "but what does the user actually need here?"

AI has no proxy instinct. It has pattern matching. It will produce what similar products have done before, which means it will also reproduce the mistakes, the dark patterns, the inaccessible flows, and the outdated conventions that exist across the web, confidently, at speed, and without flagging any of it as a concern.

An experienced designer knows when something feels wrong even when it looks right. That instinct is built from research, from watching real users struggle, from shipping things and learning what the data showed afterwards. It cannot be prompted into existence.

So what do we do with this?

I am not arguing that AI has no place in the design process. I use it. It has genuine value in speeding up earlier stages of work, exploring directions, generating options, and drafting copy to react to.

But "generating options" and "making design decisions" are not the same thing.

The practical version of Beer's principle, applied to design in 2026, looks something like this:

AI can generate. A designer must evaluate. Every output needs a human who can assess it against real user needs, not just aesthetic conventions.

Speed is not a proxy for quality. If AI is accelerating your output but your team has no way to verify decisions, you are not moving faster. You are accumulating risk faster.

Accountability has to live somewhere. When a design decision harms a user, and some will, someone needs to be able to explain what oversight existed. "The AI suggested it" is not an answer.

Oversight is not a bottleneck. It is the work. The value of an experienced designer in an AI-assisted workflow is not slowing things down. It is being the person who can tell the difference between something that looks right and something that is right.

The part that should concern us

Junior designers learn by doing the work that AI is now absorbing. They learn by making first drafts, getting critique, understanding why something does not work, and iterating. If that pathway disappears without teams building something intentional to replace it, we end up with a generation of designers who can operate AI tools but have never developed the judgement to evaluate what comes out of them.

And then we are in a worse position than 1973. Because at least then, the people making decisions had earned the right to make them.

The principle has not changed

Beer's insight was not really about computers. It was about accountability, and the fact that accountability cannot be delegated to something that does not bear consequences.

Fifty years later, the tools are different. The principle is the same.

Design is a series of decisions made on behalf of users who were not in the room. Those decisions need an accountable human behind them. Not because AI cannot produce something useful; it clearly can. But because when it goes wrong, and it will, someone has to own it.

That someone is you.

This post is part of an ongoing series on design, AI, and what it means to do this work well. If you found it useful, the previous post, "AI Makes Output Cheaper. It Doesn't Make Design Cheaper", covers the related shift in where designer value actually sits.
