When AI Invents Things About Your Business
There is a difference between an AI that gets your check-in time wrong and an AI that tells a customer you have a rooftop pool when you do not have one.
Both are accuracy problems. They are not the same problem.
Getting a fact wrong (an outdated price, a changed phone number, an old amenity list) is a knowledge gap. The AI has information that used to be true, or misread something. This is covered elsewhere. What we are talking about here is fabrication: AI inventing things that were never true.
How Fabrication Happens
AI language models predict text. When a model does not have clear, confident information about a specific business, it fills gaps by pattern-matching against similar businesses in its training data. A hotel in a hill station gets described with amenities common to hill station hotels. A boutique property in a known tourist area gets attributed the characteristics of boutique properties the model has seen described in similar contexts.
The model is not trying to deceive. It is completing a pattern. But the output is a statement of fact about your specific business that is entirely fabricated.
Three conditions make fabrication more likely:
- Thin web presence. If there is limited authoritative content about your business, the model has less to work with and relies more on pattern-completion.
- Entity confusion. If your business name or category overlaps with another entity, the model may blend the two. Your restaurant gets attributed the wine cellar of a different restaurant with a similar name in the same city.
- Training data gaps. Newer businesses, recently renovated properties, or businesses that went through significant changes may have a training data profile that reflects an older, different version of the business.
What Gets Invented
Based on what we see in AI responses across hospitality, the most common fabrications fall into a few categories:
- Amenities. Pools, spas, restaurants, gyms. If a property type typically has them and your property does not, an AI may state that you do. The customer shows up expecting a spa. There is no spa.
- Awards and recognition. "Award-winning" is a common phrase in hospitality marketing. AI sometimes attributes awards to properties that have not received them, or attributes awards from one property to a sibling property in the same group.
- Services and policies. Airport transfers you never offered, pet policies you never set, restaurant concepts that were proposed but never built. If it appeared anywhere in your web presence as a plan or a possibility, it may surface in AI answers as a current fact.
- Historical facts. Founding year, original owner, historical significance. These are often pattern-matched from properties in similar contexts and may have no relationship to your actual history.
The Operational Risk Is Real
The sequence that should worry you: AI states you have a feature. Customer books based on that feature. Customer arrives. Feature does not exist. Customer is disappointed. Customer leaves a review describing the disappointment. AI reads that review. AI now has fresh evidence of a service failure associated with your property.
Fabrication creates a feedback loop. The invention becomes a complaint becomes a negative signal that reinforces a distorted AI profile of your business. The original error was the AI's. The review fallout is yours.
This is not hypothetical. In markets we have tracked, we have found AI responses citing specific amenities, hours, and services for properties that confirmed those details were inaccurate. In several cases, the properties were unaware their AI profile contained these fabrications until they ran an audit.
Why This Is Distinct from Regular Inaccuracy
Inaccuracy is a knowledge refresh problem. The information existed, it changed, and the AI has not caught up. The fix is making the current information authoritative and accessible: update your website, update your Google Business Profile, get fresh citations.
Fabrication is a knowledge vacuum problem. The AI is inventing because it does not have enough authoritative information to work with. The fix is different: fill the vacuum with structured, specific, machine-readable facts so the model has less reason to pattern-complete.
A page that explicitly states "We do not have a pool" is not just good customer communication. It is a signal to AI systems. Absence of a fact, stated clearly, is more reliable than leaving the AI to guess.
Structured Data as Hallucination Prevention
Schema markup does something underappreciated in the context of fabrication. It creates a machine-readable, high-confidence source of truth for specific facts about your business. When an AI system has access to a well-structured LocalBusiness schema with your amenities, hours, and services explicitly defined, it has less reason to pattern-complete from similar businesses.
Think of it as occupying the vacuum before the AI fills it with something worse.
The most useful schema types for fabrication prevention:
- LocalBusiness or LodgingBusiness. Explicit amenityFeature fields let you state what you have and, where supported, what you do not.
- FAQ schema. A FAQ page that directly answers "Do you have a pool?" with "No, we do not have a pool" is readable by both humans and AI systems.
- Menu or Offer schema. For restaurants or businesses with specific services, explicit structured offers prevent fabricated service descriptions.
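As a concrete illustration of the first bullet, here is a minimal JSON-LD sketch for a lodging business. The property name and details are placeholders; the pattern to note is that `amenityFeature` with a `LocationFeatureSpecification` lets you assert an amenity's absence (`"value": false`) just as explicitly as its presence.

```json
{
  "@context": "https://schema.org",
  "@type": "LodgingBusiness",
  "name": "Example Hillside Inn",
  "telephone": "+1-555-0100",
  "amenityFeature": [
    {
      "@type": "LocationFeatureSpecification",
      "name": "Free WiFi",
      "value": true
    },
    {
      "@type": "LocationFeatureSpecification",
      "name": "Swimming Pool",
      "value": false
    }
  ]
}
```

The `"value": false` entry is the part most properties skip. It turns "we never mentioned a pool" into "we explicitly state there is no pool," which is exactly the vacuum-filling move described above.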
The Case for Regular AI Audits
AI fabrication is not a one-time event you fix and move on from. Models update. Search grounding sources change. New content about your business enters the training or retrieval pipeline and shifts how you are characterized.
A fabrication that did not exist three months ago may exist today because a model update changed how your entity profile is assembled. The only way to know what AI is saying about you right now is to ask it, regularly, across the platforms your customers use.
This does not need to be a complex operation. A monthly audit, running a standard set of queries across ChatGPT and Gemini and recording the responses, gives you a baseline. Deviations from that baseline (new claims, new characterizations, new invented features) are your signal to investigate and correct.
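The baseline-and-diff loop above can be sketched in a few lines. This is a sketch under stated assumptions: `ask_model` is a placeholder you would wire to whichever chat API you use, the query list is illustrative, and "deviation" here is a simple exact-text comparison, so any change in an answer flags that query for human review.

```python
"""Sketch of a monthly AI-audit baseline check.

ask_model() is a placeholder: connect it to the platforms
your customers actually use (ChatGPT, Gemini, etc.).
"""
import json
from pathlib import Path

# Illustrative query set; tailor these to your own business facts.
AUDIT_QUERIES = [
    "Does Example Hillside Inn have a swimming pool?",
    "What amenities does Example Hillside Inn offer?",
    "What awards has Example Hillside Inn won?",
]


def ask_model(query: str) -> str:
    # Placeholder: replace with a real API call for each platform.
    raise NotImplementedError


def run_audit(baseline_path: Path, responses: dict[str, str]) -> list[str]:
    """Compare this month's responses to the stored baseline.

    Returns the queries whose answers changed since last run;
    those are the ones to review for new fabrications. The new
    responses are then saved as the next baseline.
    """
    if baseline_path.exists():
        baseline = json.loads(baseline_path.read_text())
    else:
        baseline = {}  # first run: every answer counts as new
    changed = [q for q, answer in responses.items() if baseline.get(q) != answer]
    baseline_path.write_text(json.dumps(responses, indent=2))
    return changed
```

An exact-match diff is deliberately crude: it over-flags harmless rewording, but for a monthly check across a dozen queries, a few false positives are cheaper than a missed fabrication.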
The businesses that will have the cleanest AI profiles in two years are the ones running this process now, not the ones that react only after a pattern of disappointed customers tells them something is wrong.