The Ghost in the Machine is a Compulsive Liar
Published 12/12/2025
We built AI in our own image but forgot the blueprint, and now we’re shocked when it lies to us? The fix isn’t in the code, but in our philosophy of perception.
Forget technical manuals - the best explanation of AI risk I ever heard came from the neuroscientist Andrew Gallimore talking about DMT on a Joe Rogan podcast. As my wife and I made the long drive to see family, Gallimore explained that when you dream, your brain builds the entire world, from scratch, from memory.
It’s almost convincing, until you look closer. When you see your phone in a dream, it looks real. But the very moment you try to check any apps (or, God forbid, read something!), everything distorts like a bad acid trip.
But why?
Simply put, your brain is a shortcut lover, which means it loves guessing. It doesn’t have time to process every little thing, so it fills in the canvas of your dream world using its memories. When details are missing, the brain fills the gaps, and sometimes it gets it wrong.
I hit pause. “Ok wow!” I said. “That’s it! That’s what AI does!”
Before my wife could ask, I continued, “The dreaming! Our dreams are the same as AI hallucinations. We’re both filling in the gaps!”
I was met with the kind of silence that only a wife can deliver.
Turns out that AI hallucinations aren’t a great conversation starter on a road trip, but it didn’t matter to me. I'd just stumbled onto something that changed how I think about AI risk in enterprise environments.
Large language models don't retrieve answers. They generate them, just like your dreaming brain. An LLM will always try to give you an answer, even when it has no idea what it is talking about, serving up blatant nonsense as hard facts.
We call these hallucinations, but the reality is far more dismal. The pattern-spotting guesswork machine is fabricating lies on an industrial scale, and it is humans who are left to clean up the mess.
I'm currently working through the Cloud Security Alliance TAISE certification. This metaphor keeps coming back to me, because if you're deploying AI in your enterprise without understanding this fundamental flaw, you're building on quicksand.
Building the Shortcut to Reality
Let me back up and explain the neuroscience, because it matters for how we think about AI security.
You think you see the world as it is, but you don’t. Your brain is a storyteller, creating narratives of what’s out there using the spare parts of everything you’ve already seen and experienced. This framework is called predictive coding.
The brain doesn't process every photon hitting your retina, because it doesn't need to. It already has a model. Gallimore studies what happens when you disturb this system with psychedelics. As the model loosens up, entropy increases and the functional connectivity between brain regions changes.
Suddenly, your brain is producing errors.
Hallucinations.
That’s when it clicked for me that LLMs work the same way.
LLMs train on vast datasets, learning statistical relationships between words. An LLM builds its own intricate brain out of statistical patterns, churning through mountains of text and tweaking billions of parameters until it gets good at guessing the next word in a sentence.
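To make “guessing the next word” concrete, here is a deliberately tiny sketch in Python. It swaps billions of parameters for a bigram frequency table, so everything in it (the corpus, the function names, the sampling) is illustrative rather than how any real LLM is built, but the core move is the same: predict a plausible continuation, never look up a fact.

```python
import random
from collections import defaultdict

# Toy stand-in for an LLM: count which word tends to follow which,
# then "generate" by sampling from those counts. Illustrative only.
corpus = "the model predicts the next word the model guesses the next word".split()

counts = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def next_word(word: str) -> str:
    """Return a statistically plausible follower of `word`."""
    followers = counts.get(word)
    if not followers:
        # No grounding at all -- the model still answers something.
        return random.choice(corpus)
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

print(next_word("the"))    # plausible: "model" or "next"
print(next_word("audit"))  # never seen it, answers confidently anyway
```

Notice that the fallback branch never says “I don’t know”; it guesses. Scale that behaviour up by a few billion parameters and you have a hallucination.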
When someone prompts an AI model like ChatGPT, they get an answer generated from those patterns. Rather than understanding anything in the human, cognitive sense, the model is pattern-matching. Some people call this the "stochastic parrot" problem, where models repeat patterns without comprehension.
The output may look syntactically perfect. It may even seem authoritative, with plausible arguments. But on closer inspection, just like the written word in your dream, it all falls apart.
It’s… wrong.
Is it lying? Not exactly. You can argue that AI isn’t lying like the average politician with an election campaign to win. It’s playing a game of connect-the-dots with words, predicting the next best connection to complete the puzzle.
Sometimes, the statistical best path is more creative than reality.
It’s a glitch in the matrix.
When Neo sees the black cat walk past twice, he notes the déjà vu, but Trinity knows better. The system is patching over missing data to keep the illusion going.
The LLM, too, is condemned to fabricate this continuity.
Tethering the Machine to Reality
Humans aren’t constantly hallucinating thanks to the reality check of our five senses. These ground the brain in a rich stream of sensory data and keep our internal model aligned with the physical world around us.
AI technology is still young, but developers have figured out that AI needs these reality checks as well. Enter Retrieval Augmented Generation (RAG). RAG takes an AI model outside the cage of its training data by giving it the ability to pull real-time information from external sources. It isn’t quite the five senses, but it is a wobbling step in the right direction.
Without RAG, an LLM is trapped in its dreams. It can be perfectly consistent internally, yet completely detached from reality. The results mimic our own fever dreams, as Will Smith melts into a plate of transforming nightmare spaghetti.
With RAG, the model becomes more like your conscious brain, pulling the extra information it needs to create responses that are grounded in the real world.
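If you have never seen the pattern spelled out, a minimal RAG loop looks roughly like the sketch below. The knowledge base, the example URLs, and the naive keyword scoring are placeholders I have made up for illustration; real deployments use vector search and an actual LLM call, but the shape is the point: retrieve first, then generate against what you retrieved.

```python
# Minimal sketch of the RAG pattern (illustrative placeholders throughout):
# retrieve relevant, sourced text first, then build the prompt around it so
# the model answers from evidence instead of from its "dreams".
KNOWLEDGE_BASE = [
    {"source": "https://example.com/policy-v3", "text": "MFA is required for all remote access."},
    {"source": "https://example.com/standard-7", "text": "Backups must be tested quarterly."},
]

def retrieve(question: str, top_k: int = 1) -> list[dict]:
    """Naive keyword-overlap retrieval; production systems use vector search."""
    q_words = set(question.lower().split())
    return sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )[:top_k]

def grounded_prompt(question: str) -> str:
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in retrieve(question))
    # In a real pipeline this string is sent to the LLM; here we just show
    # the grounded prompt the model receives instead of a bare question.
    return f"Answer using ONLY the sources below.\n{context}\n\nQuestion: {question}"

print(grounded_prompt("Is MFA required for remote access?"))
```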
The symmetry is striking: humans need richer sensory data to escape hallucination. AI needs richer retrieval. The principle is simple, but most enterprise deployments don't apply it.
Scaling Bad Trips with Corporate Fever Dreams
Put on your cybersecurity hat for a minute and imagine your organization integrates a generative model into its workflow. Maybe it's drafting security assessment reports or writing policies. It could be suggesting risk remediation.
On paper, this looks efficient. An AI analyst works alongside your team, generating work faster than any human could alone.
But if that model hallucinates, even slightly, the consequences ripple.
I've seen three failure modes that keep me up at night:
The Blind Trust Scenario
A security assessor reviews an AI-generated remediation for critical vulnerabilities. The language is confident, the steps seem reasonable, so the assessor approves it without verification.
Months later, one of those vulnerabilities is exploited because the AI-generated remediation was complete fiction. The same blind trust has already tripped up an Australian lawyer, who submitted court documents with ChatGPT-generated case citations.
The cases didn't exist.
The lawyer trusted the output because it looked real. Now, imagine that same trust applied to your security posture.
The Propagation Scenario
A flawed policy draft gets approved. It includes controls that reference non-existent frameworks or procedures.
This policy gets replicated across business units, and hundreds, even thousands, of employees now follow guidance with fictional elements embedded in it.
Your compliance program is built on hallucinations.
The Garbage-In Scenario
The old tech cliché of “garbage in, garbage out” takes on a dangerous new dimension with AI. AI models take bad data and amplify it, spinning a beautiful yarn from flawed input and gift-wrapping it for the unaware.
Bad data used to produce obviously bad results. Now, without human oversight, it produces results that look authoritative until someone digs deeper.
The corporate brain is trying to make sense of an evolving threat landscape. If its data inputs are weak or skewed, its perception of risk becomes hallucinatory too.
Break the Corporate Fever Dream: Wake the Machine
This all sounds grim, but it isn’t hopeless. Leave the models alone and yes, they will cheerfully lie to you – but you’re not paid to leave them alone, and the dream is little more than an engineering problem.
Start with policy. Every organization needs clear rules about AI interaction. Ask yourself questions like:
- How do we use AI models?
- What risks do we face?
- What guardrails minimize those risks?
Depending on your business unit, you might develop prompt libraries that configure models with specific instructions. For example, the initializing prompt could include metadata requirements or verification steps. Something as simple as "include source URLs for all factual claims" changes the game by forcing a thread of reality into every response.
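As a rough illustration, a prompt library entry might look something like the sketch below. The structure, field names, and rule wording are mine, not a standard, and they would need tuning per business unit; the point is that the guardrails live in a versioned, reviewable artifact rather than in whatever each employee happens to type.

```python
# Hypothetical prompt-library entry; structure and wording are illustrative.
PROMPT_LIBRARY = {
    "security_assessment": {
        "system_prompt": (
            "You are drafting enterprise security assessment content.\n"
            "Rules:\n"
            "1. Include a source URL for every factual claim.\n"
            "2. If you cannot cite a source, write 'UNVERIFIED' instead of guessing.\n"
            "3. Mark every recommendation as DRAFT pending human review."
        ),
        "required_metadata": ["author", "business_unit", "review_due_date"],
    },
}

def build_prompt(use_case: str, user_request: str) -> str:
    """Prepend the approved system prompt so every request starts grounded."""
    entry = PROMPT_LIBRARY[use_case]
    return f"{entry['system_prompt']}\n\nRequest: {user_request}"

print(build_prompt("security_assessment", "Draft remediation steps for a critical RCE finding."))
```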
You also need awareness programs. I've seen too many organizations rush to deploy AI without helping their people understand what it actually is. There are studies showing some users think AI is sentient.
That's a problem.
This psychology needs to shift before technical controls will matter, which means digital transformation requires cultural reframing. Make sure your team understands that AI outputs are predictions, not truth.
Next, focus on data integrity. If you're using RAG, ensure your retrieval pipelines pull from verified, current sources. Tag everything with provenance information. Make it easy for humans to trace outputs back to their origins.
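One hedged sketch of what that could look like in code: every chunk admitted to the retrieval pipeline carries its source URL and a last-verified date, and anything stale is rejected before it can feed the model. The field names and the 90-day threshold are illustrative choices, not a standard.

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=90)  # illustrative freshness threshold

def admit_chunk(text: str, source_url: str, verified_on: date) -> dict | None:
    """Return a provenance-tagged chunk, or None if the source is too stale."""
    if date.today() - verified_on > MAX_AGE:
        return None  # stale provenance: keep it out of the pipeline
    return {
        "text": text,
        "source_url": source_url,
        "verified_on": verified_on.isoformat(),
    }

print(admit_chunk("MFA is required for all remote access.",
                  "https://example.com/policy-v3", date.today()))
print(admit_chunk("Passwords rotate every 30 days.",
                  "https://example.com/policy-v1", date(2020, 1, 1)))  # -> None
```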
Finally, enrich your outputs with metadata. Every AI-generated document should include verifiable source links. Train your people to spot-check these sources. Build verification into the workflow. Don't let AI outputs become black boxes.
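Part of that workflow can be automated with a simple output gate before a human even starts the spot-check. The sketch below is one possible version, with an invented allow-list and a basic URL check; a real gate would also fetch the cited pages and route failures to a reviewer.

```python
import re
from urllib.parse import urlparse

# Illustrative allow-list of domains your organization trusts as sources.
APPROVED_DOMAINS = {"example.com", "nvd.nist.gov"}

def audit_output(text: str) -> list[str]:
    """Flag AI-generated text that cites nothing, or cites unapproved sources."""
    problems = []
    urls = re.findall(r"https?://[^\s)]+", text)
    if not urls:
        problems.append("No source URLs cited: treat every claim as unverified.")
    for url in urls:
        if urlparse(url).netloc not in APPROVED_DOMAINS:
            problems.append(f"Unapproved source: {url}")
    return problems

draft = "Apply the vendor patch (see https://example.com/advisory-123)."
print(audit_output(draft) or "Passed basic checks -- still needs a human spot-check.")
```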
The key principle: trust but verify.
Always.
Don’t Trust the Dream. Control It with Lucidity
What struck me about Gallimore's research wasn't just “the science.” It was the humility it demanded at a metaphysical level. We like to think perception is direct contact with reality.
It's not.
Our perception approximates reality, sculpted from sensory data and past experiences. AI works the same way. Its "truth" is always statistical and always inferred. Both systems, human and machine, build models of the world. Both try to stay coherent amid incomplete data.
When input weakens, both start to dream.
That's the lesson for cybersecurity professionals.
Raw intelligence doesn't guarantee accuracy. Whether it's organic or artificial, any system can hallucinate. Both need grounding. Both require verification. AI's hallucinations aren't an alien concept. They're familiar: the same cognitive fragility that makes us so human, scaled into code.
For organizations embracing AI, the challenge isn't to eliminate hallucination completely. That's probably impossible.
The challenge is to build systems that recognize when they're dreaming.
We need to build paranoid systems. We need machines that don’t trust their own math and are forced to check their work against reality. We need to keep a human in the loop, ready to pour cold water over the algorithm’s fever dreams.
Your reality is just a story that your brain tells itself using the data it has at hand. AI is no different. When the data isn’t grounded, you don't see the world as it is. You see the world as your model – organic or not – imagines it.
Don't let your enterprise security program operate in a dream state, unless you’re prepared to handle the nightmare that will follow.
Wake it up, make it verify, and be cautious with your trust.
For practical insights on surviving the new threat landscape, connect with me at @cybersecurity.sam.
Sources
- https://www.uab.edu/medicine/cinl/images/KFriston_FreeEnergy_BrainTheory.pdf
- https://dictionary.cambridge.org/dictionary/english/hallucinatory
- https://arxiv.org/abs/2509.04664
- https://edrm.net/2024/03/stochastic-parrots-the-hidden-bias-of-large-language-model-ai/
- https://jrelibrary.com/2403-andrew-gallimore/
- https://www.americanbrainfoundation.org/how-psychedelics-affect-the-brain/
- https://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20principle%20-%20a%20rough%20guide%20to%20the%20brain.pd
- https://www.ibm.com/think/topics/model-parameters
- https://www.theguardian.com/australia-news/2025/feb/01/australian-lawyer-caught-using-chatgpt-filed-court-documents-referencing-non-existent-cases
- https://www.theguardian.com/technology/ng-interactive/2025/oct/02/ai-children-parenting-creativity
About the Author
Samuel Romanov is an ASD-endorsed IRAP Assessor and enterprise security consultant who has led high-stakes engagements for government agencies and global corporations. A certified vCISO (CISSP, CRISC, CGRC, CISM, CISA, AAIA, CCSP) and ISACA Journal author, Samuel specializes in translating complex technical risks into board-level strategy. Romanov is also the founder of the Cyber Growth Academy, a mentorship community that helps non-technical professionals to launch their security careers.