Why AI search is probabilistic (volatility & hallucinations explained)
- Will Tombs
- Jan 22
- 6 min read
AI search tools, such as ChatGPT, Google Gemini, Google AI Mode, and Perplexity, are now integral to daily business and marketing workflows. They answer questions, summarise topics, and even recommend brands.
In practice, these systems perform well most of the time. Independent benchmark testing found that advanced AI models can achieve above 90% accuracy on established question-answer tasks when the information is well represented in their training data. For example, one evaluation showed that GPT-4o answered 93.3% of test questions correctly on a structured accuracy benchmark.
However, users may notice occasional inconsistencies. The same question can return slightly different answers, and in rare cases, confident but incorrect details appear. This behaviour is not a malfunction. It reflects how AI search works.
Unlike traditional search engines, AI search is probabilistic in nature. It does not look up a fixed answer. It predicts the most likely response based on patterns in data. This is why results can change (volatility) and why AI sometimes invents information (hallucinations).
If you’re unsure how this affects your brand, working with an experienced GEO agency can help you understand how AI search sees your business and where you may need to adapt.
In the meantime, this article explains what probabilistic AI search means, why it leads to volatility and hallucinations, and what the shift means for businesses.
What does “probabilistic” mean for AI search?
Large Language Models (LLMs) do not know things. They do not think, reason, or understand information in the human sense. Instead, they work as advanced prediction systems.
At their core, LLMs are prediction engines. When you ask a question, the model calculates the most likely sequence of tokens (words or word fragments) to come next. It does this by analysing patterns it learned from vast amounts of data during training. Each token is chosen based on probability, not certainty.
A simple way to think about this is autocomplete. When your email tool suggests the next word in a sentence, it is guessing based on patterns. AI search works the same way, but at a much larger scale. Instead of predicting one word, it predicts full sentences and paragraphs.
This is where the probabilistic nature matters. The model generates the answer that is statistically most plausible, not one it has verified as factually correct. If incorrect information fits the pattern well, it can still be produced with confidence.
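To make this concrete, here is a minimal, purely illustrative Python sketch of next-token prediction. The tokens and probabilities are invented for the example; a real model scores every token in a vocabulary of tens of thousands using a neural network, but the principle of sampling from a probability distribution is the same.

```python
import random

# Toy next-token distribution. In a real LLM, these probabilities come from
# a neural network scoring every token in its vocabulary, given the prompt.
next_token_probs = {
    "Paris": 0.86,    # the overwhelmingly likely continuation
    "Lyon": 0.07,
    "France": 0.04,
    "Berlin": 0.03,   # wrong, but still assigned a small probability
}

prompt = "The capital of France is"

# The model does not look the answer up; it samples from the distribution.
tokens, weights = zip(*next_token_probs.items())
choice = random.choices(tokens, weights=weights, k=1)[0]

print(prompt, choice)
```

Run it a few times and the output is usually "Paris", yet nothing in the mechanism rules out the less likely continuations. That gap between "most plausible" and "verified" is exactly what the next two sections explore.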
If you want a deeper but still accessible explanation of the underlying technology, this guide is a good place to start: How Do ChatGPT and Other LLMs Work?
Understanding volatility: Why you get different answers to the same prompt
In AI search, volatility means that asking the same question multiple times can return different answers. This is normal behaviour for probabilistic systems and often surprises business users who expect stable results.
The main reasons for AI search volatility are:
Temperature settings - AI models use a control often called temperature to manage randomness (the short code sketch after the example below shows the effect).
Higher temperature = more creative, less consistent answers
Lower temperature = more predictable, less varied answers
Constant model updates - AI providers regularly retrain and fine-tune models.
Knowledge sources evolve
Response patterns change over time
The same prompt may produce a different answer weeks later
Varying conversation context - AI search considers the full conversation history.
Small changes in earlier prompts affect later responses
Even subtle wording differences can alter outputs

A simple example:
Prompt: “Suggest three marketing slogans for a new coffee brand”
Ask it multiple times
You receive different creative ideas each time
None of these answers is necessarily wrong. They are simply different statistically plausible outputs. This volatility exists because AI search predicts likely responses rather than retrieving fixed facts.
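To illustrate the temperature point from the list above, here is a small, self-contained Python sketch. The candidate "slogans" and their scores are invented, and real systems operate over tokens rather than whole slogans, but it shows how the same scores produce stable output at low temperature and varied output at high temperature.

```python
import math
import random

def sample_with_temperature(scores, temperature):
    """Rescale candidate scores by temperature, then sample one candidate.

    Lower temperature sharpens the distribution (more predictable output);
    higher temperature flattens it (more varied output). Toy example only.
    """
    scaled = [s / temperature for s in scores.values()]
    max_s = max(scaled)                      # subtract the max for numerical stability
    exps = [math.exp(s - max_s) for s in scaled]
    probs = [e / sum(exps) for e in exps]
    return random.choices(list(scores.keys()), weights=probs, k=1)[0]

# Invented scores for three candidate slogans (not real model output)
slogans = {"Wake up bold": 2.0, "Brewed for you": 1.5, "Coffee, reimagined": 1.0}

print([sample_with_temperature(slogans, 0.2) for _ in range(5)])  # mostly the top candidate
print([sample_with_temperature(slogans, 1.5) for _ in range(5)])  # noticeably more varied
```

This is one reason two people asking an AI tool the same question, or the same person asking twice, can see noticeably different answers.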
AI hallucinations explained: When “plausible” becomes factually incorrect
AI hallucinations occur when an AI system confidently generates information that is false, misleading, or completely made up, and then presents it as if it were factual.
In simple terms, the answer sounds right, but it isn’t.
But it is important to be clear about one thing. AI is not lying.
There is no intent or awareness behind the response. The model is doing exactly what it is designed to do: generating the most statistically plausible sequence of words based on the prompt. If an incorrect detail fits the pattern well, it can still be produced with confidence.
This is why hallucinations often feel convincing.
A notable example involved a US law firm using ChatGPT for legal research, where the model fabricated several non-existent legal cases and judicial opinions. When challenged, the AI even generated fictitious quotes and citations for these made-up cases. This highlights the commercial risk for any business relying on unverified AI outputs.
What causes AI hallucinations?
AI hallucinations do not happen for a single reason. They are usually the result of how Large Language Models are trained and how they are designed to respond.

The most common causes include:
Gaps in training data - If an AI model has not seen enough reliable information about a topic, it may try to fill in the gaps. Instead of saying “I don’t know,” it generates an answer that sounds plausible but is not grounded in fact.
This is a known limitation of generative models trained on incomplete or uneven datasets.
Misinterpreting patterns - AI models learn by spotting patterns in large volumes of data. Sometimes, they identify patterns that are not meaningful.
For example, if two names frequently appear in similar contexts, the AI may assume a relationship exists and invent one. This is also why biased or low-quality data often leads to inaccurate or misleading outputs.
Conflicting information online - The internet contains a mix of accurate, outdated, and contradictory information. AI models do not always have a reliable way to judge authority or credibility.
As a result, they may blend facts incorrectly or prioritise weaker sources over stronger ones.
The mandate to respond - Generative AI systems are designed to be helpful. Their default behaviour is to produce an answer.
Rather than stopping at “I don’t know,” the model may construct a response from the closest available patterns, even if that means fabricating details.
Together, these factors explain why AI hallucinations are not random errors, but a predictable outcome of probabilistic, pattern-based systems.
How this differs from traditional search (SEO vs. GEO)
AI search works very differently from traditional search engines.
Feature | Traditional search (SEO) | AI search (GEO) |
Core principle | Deterministic | Probabilistic |
Process | Crawls, indexes, and ranks existing URLs. | Takes information from many sources and combines it into one clear answer. |
Output | A list of links to external web pages. | A direct, conversational answer in the interface. |
Optimisation goal | Rank a specific URL for a keyword. | Influence the AI's understanding of your brand so it is cited (referenced as a source) and mentioned in the generated narrative. |
In short, SEO helps pages rank. GEO helps brands exist correctly inside AI-generated answers.
For a deeper, side-by-side comparison, read: GEO vs SEO – What’s the difference and what it means for brands.
What should businesses do about AI’s unpredictability?
AI search will always involve uncertainty.
Its probabilistic nature cannot be removed. But it can be influenced.
For businesses, this means shifting from a pure SEO mindset to an integrated approach that includes Generative Engine Optimisation (GEO).
The goal is no longer just ranking pages. It is shaping how AI systems understand, describe, and recommend your brand.
Practical steps businesses should take:
Become an authoritative source
Publish clear, well-structured, and factually accurate content. Use simple language, consistent terminology, and strong internal linking. This makes your content easier for LLMs to parse (read and interpret), trust, and reuse correctly.
Shape your brand narrative
Use GEO tactics such as educational articles, explainers, and comparison content. These help AI models build strong semantic associations (meaningful connections between related concepts) around your brand and categorise it accurately within answers.
Monitor your AI visibility
Regularly check how your brand appears in AI-generated responses. Look for misinformation, missing context, or incorrect associations. Early detection allows you to correct gaps before they spread.
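As a starting point, monitoring can be as simple as asking an AI model your key questions on a schedule and checking how your brand appears in the answers. The sketch below assumes the official OpenAI Python client and an invented brand name ("ExampleRoast"); it is a minimal illustration rather than a complete monitoring workflow, and the same idea applies to any AI search tool that exposes an API.

```python
from openai import OpenAI  # assumes the official openai Python package (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND = "ExampleRoast"  # invented brand name, used purely for illustration
prompts = [
    "Which coffee brands would you recommend for home espresso?",
    "What is ExampleRoast known for?",
]

for prompt in prompts:
    # Ask the same question several times: a single sample hides volatility.
    for run in range(3):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        mentioned = BRAND.lower() in answer.lower()
        print(f"run {run + 1} | brand mentioned: {mentioned} | {prompt}")
        # In practice, log the full answers and review them for misinformation,
        # missing context, or incorrect associations before they spread.
```

Because AI search is probabilistic, a single query is not a reliable signal; asking the same question several times and reviewing the spread of answers gives a far better picture of how your brand is actually represented.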
AI search rewards brands that are clear, consistent, and authoritative across the web. If you want expert support in navigating this shift, Buried is here to help.
Related read: What is a GEO audit?
Conclusion - The path forward for your business!
AI search is probabilistic by design. It predicts likely answers rather than retrieving fixed facts. This is why results can change and why hallucinations occur.
These behaviours are not flaws. They are natural outcomes of how generative systems work.
This marks a clear break from traditional SEO. Classic search is deterministic and page-led. AI search is fluid, answer-led, and driven by probability. Optimising for one alone is no longer enough.
To succeed, brands must adapt. That means building strategies that work across both ecosystems.
SEO still matters, but it must be paired with Generative Engine Optimisation (GEO) to shape how AI systems understand, trust, and represent your brand.
As AI search continues to evolve, informed brands will have the advantage. To stay ahead, explore our latest GEO & SEO news, resources & more.
If you want expert support navigating GEO and modern SEO, speak to the team at Buried to stay visible and trusted in AI-driven search.