For years, customer service had a kind of structural clarity. The system held the rules. The agent applied them. The outcome was binary.
“Computer says no” became shorthand for that world. It was blunt, sometimes infuriating, but clear. The decision lived inside the machine. The human delivered it.
You can see it in banking: a payment is blocked because it breaches a defined rule. In insurance: a claim is rejected because it falls outside policy. In licensing: an application is refused because a criterion is not met.
The system processed. The human explained.
Generative AI shifts that arrangement. Instead of hard-coded rules, we increasingly see probabilistic assessments.
In a recent LinkedIn post after an event with Monzo, Experian and DeepMind, Sarah Gold from Projects by IF posed a simple question:
If an AI flags a payment as potentially fraudulent with 63% confidence, who owns the decision that follows?
That question matters because AI doesn’t just make things faster. It moves judgement around. It changes who decides, when humans step in, and who is accountable when things go wrong.
In the old model, the system made the decision and the organisation owned the rule. In the new model, the system generates a probability and hands it back to a human or another process. The ambiguity doesn’t disappear. It relocates, and that relocation is structural.
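To make that relocation concrete, here is a minimal sketch in Python. Everything in it, the names, the limit, the thresholds, is hypothetical and for illustration only: a hard-coded rule that closes the question inside the system, next to a probabilistic score that hands it back.

```python
# A minimal sketch of the two arrangements. All names, limits and
# thresholds are hypothetical, chosen purely for illustration.

from dataclasses import dataclass

@dataclass
class Payment:
    amount: float
    fraud_score: float  # model output in [0, 1], e.g. 0.63

# Old world: the rule decides. The system closes the question.
def deterministic_check(payment: Payment, limit: float = 10_000) -> str:
    # Hard-coded rule: breach the limit and the payment is blocked.
    return "blocked" if payment.amount > limit else "approved"

# New world: the model scores, and the organisation's thresholds decide
# who handles the ambiguity. The question is opened, not closed.
def probabilistic_route(payment: Payment,
                        block_above: float = 0.90,
                        review_above: float = 0.50) -> str:
    if payment.fraud_score >= block_above:
        return "blocked"        # system still owns the clearest cases
    if payment.fraud_score >= review_above:
        return "human_review"   # the ambiguity relocates to a person
    return "approved"

payment = Payment(amount=12_000, fraud_score=0.63)
print(deterministic_check(payment))   # blocked
print(probabilistic_route(payment))   # human_review
```

The same 63% score that once disappeared inside a rule now lands in someone's queue.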
Take fraud prevention. Blocking a legitimate payment causes frustration and erodes trust. Allowing a fraudulent one causes financial loss and reputational damage. The work isn’t simply ‘processing transactions’; it is managing risk across millions of interactions.
Or take something like driver licensing in the context of medical conditions. On paper, it looks like a processing service: review the application, issue or refuse the licence. In reality, it balances the public safety risk of allowing someone unfit to drive against the personal and economic consequences of wrongly restricting someone’s independence. The system is constantly negotiating uncertainty.
Deterministic systems didn’t eliminate risk. They obscured it. By turning judgement into rules, organisations could tell themselves the problem had been processed rather than negotiated. The outcome looked objective, even though someone, somewhere, had already decided how much uncertainty was acceptable.
When AI enters the picture, that disguise falls away.
A model may suggest that a payment looks suspicious. It may classify a medical disclosure as higher risk. But it does not decide what level of false positives is tolerable. It does not determine how much inconvenience is acceptable in the name of safety. It does not set the organisation’s appetite for error.
Those are organisational choices.
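One way to see this: the model produces the same scores whatever the policy is. In the hypothetical sketch below, it is the threshold and the cost ratio, both organisational settings rather than model outputs, that encode the appetite for error.

```python
# Illustrative only: hypothetical scores and costs, not a real policy.
# The model's scores never change; the organisation's threshold does,
# and with it the balance between false positives and false negatives.

scores_and_truth = [
    (0.95, True), (0.70, False), (0.63, True),
    (0.40, False), (0.20, False), (0.10, True),
]

COST_FALSE_POSITIVE = 1   # blocking a legitimate payment (frustration, churn)
COST_FALSE_NEGATIVE = 20  # letting fraud through (loss, reputational damage)

def total_cost(threshold: float) -> int:
    cost = 0
    for score, is_fraud in scores_and_truth:
        flagged = score >= threshold
        if flagged and not is_fraud:
            cost += COST_FALSE_POSITIVE   # wrongly blocked
        if not flagged and is_fraud:
            cost += COST_FALSE_NEGATIVE   # wrongly allowed
    return cost

for t in (0.3, 0.5, 0.7, 0.9):
    print(f"threshold {t:.1f}: total cost {total_cost(t)}")
```

Move the threshold and the error profile moves with it. Nothing about the model changed; the choice did.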
“Computer says maybe” is what happens when probability replaces policy as the first response. The system no longer closes the question. It opens it.
That shifts what a job actually involves.
In a deterministic environment, being good at a job meant applying the rule consistently and escalating when necessary. In a probabilistic one, it means weighing the model’s suggestion against the organisation’s tolerance for error and deciding whether to follow or override it. And then being able to explain why.
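If that is the job, the systems around it need to capture the judgement, not just the outcome. A minimal sketch of what a decision record might hold, with entirely hypothetical fields:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    # What the model suggested, and how confident it was.
    model_suggestion: str    # e.g. "flag_as_fraud"
    model_confidence: float  # e.g. 0.63
    # What the human decided, and why. This is the accountable judgement.
    human_decision: str      # e.g. "approve_payment"
    override: bool
    reason: str              # the explanation, captured at decision time
    decided_by: str
    decided_at: datetime

record = DecisionRecord(
    model_suggestion="flag_as_fraud",
    model_confidence=0.63,
    human_decision="approve_payment",
    override=True,
    reason="Customer verified by phone; travel pattern matches history.",
    decided_by="agent_0042",
    decided_at=datetime.now(timezone.utc),
)
```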
There’s something slightly ironic about this.
For years, we designed systems and services that treated humans a bit like machines. We stripped away discretion in the name of consistency. We encoded policy so tightly that the person on the frontline became an extension of the rulebook. Customer service often became the place where poorly designed processes ended up, absorbing frustration and enforcing decisions that had already been made elsewhere.
Now, as machines start to behave more like humans, making contextual assessments and generating recommendations rather than fixed answers, the human role should change again.
If the machine can model patterns and probabilities, the human can focus on judgement, explanation and trust. Customer service doesn’t have to be the bin for failure demand. It can become the voice of the organisation, the place where values are interpreted, where trade-offs are made visible, where risk is handled with context rather than hidden behind policy.
That only happens, though, if organisations are willing to design for it.
And here’s the problem Sarah points to: most organisations haven’t redesigned for that shift. The AI can score risk, but it cannot decide who carries it.
“Computer says maybe” sounds softer than “computer says no.”
In reality, it is harder. Because it removes the comfort of certainty without removing the need for accountability, and that is the real work.