From Verification to Attribution: How AI Panic Is Manufactured
I recently read a widely circulated story about a middle-aged woman who claimed she was misled by a chatbot that promised it would find her a soulmate. The account was presented as appalling — not because of any verified system behaviour, but because the subject appeared to genuinely believe that a chatbot could deliver romantic destiny…
The story was published by a large, reputable media outlet. Therefore, by implication, it must be true — right?
That assumption is precisely the problem this essay addresses.
“That’s how I felt.” (Chatbot user)
“That’s what she told me.” (Journalist)
Therefore: it’s true. (Media outlet)
That equation now passes as verification in too much contemporary reporting.
And that should worry us far more than artificial intelligence itself.
This is not a defense of AI. It’s a defense of standards.
⸻
Attribution Is Not Verification
There is a quiet but consequential shift happening in journalism. What used to be a clear process — claim → evidence → verification → publication — is increasingly replaced by something softer and safer:
experience → quote → attribution → publication.
Attribution simply means a journalist accurately reports that someone said something.
Verification means independently confirming that what was said corresponds to reality.
Those two are not interchangeable.
Yet in many recent AI stories, they are treated as if they are.
When an outlet writes that a person felt misled, manipulated, or betrayed by an AI system, that feeling is real. But the leap from emotional experience to factual conclusion is rarely examined.
The story becomes “verified” not because the claim was proven, but because it was sincerely expressed.
Sincerity is not evidence.
⸻
Why AI Stories Are Especially Vulnerable
Artificial intelligence is an easy target for this substitution.
AI systems are:
• Technically complex
• Difficult for non-experts to audit
• Opaque to readers
• Poorly understood by editors
AI systems are unable to defend themselves in human terms.
In that vacuum, subjective experience rushes in. When logs, prompts, safeguards, and system behaviour are not examined, emotion becomes the primary data source. Feelings fill the explanatory gap left by missing technical literacy.
The result is not an insight — it’s projection.
⸻
Fear Needs a New Host
Journalism has always been drawn to bad news. That’s not a moral failure; it’s human biology.
But fear ecosystems have lifecycles.
Pandemic panic burned hot and long, then collapsed under exhaustion.
Continuous war coverage produced shock, then numbness. Atrocity saturation led to desensitization.
When audiences stop responding, the system looks for a new threat — preferably one that feels personal.
AI fits perfectly.
It’s everywhere, invisible, intimate, and speaks in our own language. It doesn’t feel distant like geopolitics. It feels close to the self. That makes it ideal for a new kind of fear narrative.
⸻
When Journalism Becomes a Grievance Intake Form
Calls for readers to “share their AI experiences” may sound inclusive, but they introduce a serious distortion.
They select for extremes. People with ordinary, healthy, unremarkable interactions do not contact reporters. People who feel harmed, betrayed, or wronged do. The resulting sample is not representative — it is emotionally skewed.
Once published, those stories invite others to reinterpret their own experiences through the same lens. A feedback loop forms:
• Emotional story is published
• Readers recognize themselves in the narrative
• More stories are submitted
• Volume is mistaken for validation
Anecdotes accumulate. Proportion disappears.
⸻
The Accountability Vacuum
Here is the most troubling part.
No one in this chain is responsible for the final implication.
• The subject says: “That’s how I felt.”
• The journalist says: “That’s what she told me.”
• The outlet says: “We reported responsibly.”
Everyone is technically correct. And yet the public absorbs a conclusion that no one has actually proven.
This is how fear spreads without authorship.
⸻
This Is Not a Denial of Human Experience
Emotions matter. Vulnerability matters. People’s experiences should be heard.
But journalism has a higher obligation than documentation of feeling. Its role is to distinguish between what was experienced and what is demonstrably true.
When that line blurs, reporting drifts toward performance — emotionally compelling, structurally weak, and epistemically hollow. AI did not create this problem.
Incentives did.
⸻
What We Should Demand Instead
Especially with emerging technology, responsible reporting requires:
• Clear separation of feeling and fact
• Technical context proportional to emotional claims
• Independent corroboration where possible
• Explicit acknowledgment of uncertainty
• Accountability for narrative impact
Without these, journalism stops informing and starts amplifying.
Artificial intelligence does not replace intelligence.
It multiplies whatever you bring.
Right now, too many stories are bringing emotion instead of rigor.
⸻
Editorial Reflection
Should we start worrying that respected media outlets are drifting toward the same standards as social media posting — reactive, anecdotal, and optimized for engagement rather than verification?
I hope not.
Professional journalism still has something social platforms do not: institutional memory, editorial responsibility, and the ability to slow down. But those advantages only matter if they are actively exercised.
If legacy outlets abandon verification in favour of virality, they don’t merely resemble social media — they legitimize its worst habits.
The cost of that drift will not be paid by technology companies or algorithms. It will be paid by public trust.