60% of AI Search Answers Include Fake Sources

AI search engines are quickly becoming the go-to source for millions of people looking for answers online—but there's a growing problem: they often speak with authority, even when they have no idea what they're talking about. A new report from the Columbia Journalism Review (CJR) paints a troubling picture of just how unreliable these tools can be when it comes to delivering real, accurate news.
Researchers at CJR tested leading AI models, including those from OpenAI, Perplexity, and xAI, by feeding them excerpts from real news articles and asking for basic details like the headline, publisher, and URL. The results were, frankly, alarming. Perplexity got things wrong 37% of the time. xAI’s Grok? A staggering 97%. Some of the models didn’t just return wrong answers—they completely fabricated headlines and even made up URLs to non-existent articles.
On average, the AI tools gave false or misleading information for 60% of the test queries. That means more than half the time, users were being confidently misinformed.
And it’s not just about wrong answers. These AI search engines sometimes engage in ethically murky behavior, like bypassing paywalls or ignoring robots.txt directives that tell crawlers to stay out. Perplexity has been particularly bold in this area, scraping content from sites like National Geographic even when explicitly told not to. Despite the backlash, it continues the practice, offering vague “revenue-sharing” arrangements as justification. Publishers are understandably furious.
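For the curious, that “stay out” signal is usually a robots.txt file under the Robots Exclusion Protocol. Here is a minimal sketch of what a well-behaved crawler does before fetching a page, using Python’s standard urllib.robotparser; the site URL and user-agent string are placeholders, not any real crawler’s.

```python
# Minimal sketch: check robots.txt before fetching a page.
# The URLs and user-agent below are placeholders for illustration only.
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleBot/1.0"
page_url = "https://www.example.com/some/article"

rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # download and parse the site's crawl rules

if rp.can_fetch(USER_AGENT, page_url):
    print("Allowed: fetch the page")
else:
    # A polite crawler stops here; the behavior described above skips this check.
    print("Disallowed by robots.txt: skip this page")
```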
This problem isn't new. Anyone who's used a chatbot knows they’ll often answer questions even when they shouldn't. AI search tools rely on a technique called retrieval-augmented generation (RAG): the chatbot searches the web in real time and writes its answer from whatever it retrieves. But when that real-time information is polluted (say, by propaganda from hostile states), or when the retrieval turns up nothing useful, the model simply improvises. It’s like a student writing an essay without having read the book, just stringing together sentences that “sound right.”
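To make that failure mode concrete, here is a deliberately toy sketch of the RAG pattern. The corpus, keyword retriever, and answer step are stand-ins invented for illustration, not any vendor’s actual pipeline; the point is the branch that real systems too often skip: when retrieval comes back empty, say so instead of improvising.

```python
# Toy retrieval-augmented generation (RAG) loop. Everything here is a
# stand-in for illustration: real systems use a web search index and a
# language model instead of this keyword lookup and string formatting.

CORPUS = {
    "https://example.com/article-1": "Example article text about topic A.",
    "https://example.com/article-2": "Example article text about topic B.",
}

def retrieve(query: str, corpus: dict) -> list:
    """Naive keyword retrieval: return (url, text) pairs mentioning the query."""
    return [(url, text) for url, text in corpus.items()
            if query.lower() in text.lower()]

def answer(query: str) -> str:
    hits = retrieve(query, CORPUS)
    if not hits:
        # The failure mode described above: a system that skips this check
        # will still produce a fluent answer and may invent its sources.
        return "I couldn't find a source for that."
    context = "\n".join(f"[{url}] {text}" for url, text in hits)
    # In a real system the retrieved context is passed to a language model;
    # here we just echo the grounded snippets.
    return f"Based on retrieved sources:\n{context}"

print(answer("topic A"))
print(answer("topic Z"))  # no hits -> honest refusal instead of a made-up citation
```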
Worse yet, some models even admit to making things up. Claude, developed by Anthropic, has been caught inserting “placeholder” data during research tasks. It’s like watching someone forge a resume in real time, while assuring you they’re an expert.
Mark Howard, COO at Time magazine, voiced concerns about the reputational damage this can cause publishers. If a chatbot confidently claims a news story came from The Guardian or BBC—and it didn’t—trust in those brands takes a hit. The BBC has already taken Apple to task over its AI notification summaries that misrepresent news stories.
Howard also pointed a finger at users themselves, suggesting that anyone who blindly trusts free AI tools for accurate information is setting themselves up for failure: “If anybody as a consumer is right now believing that any of these free products are going to be 100 percent accurate, then shame on them.”
And he's not wrong. People are lazy. We’ve become conditioned to expect instant answers, and AI tools give us exactly that, without asking us to click, verify, or think twice. According to CJR, one in four Americans now uses AI models for search. Even before the AI boom, more than half of Google searches ended in “zero-click” results, where users got the answer they needed without visiting a source site. Convenience is king, and accuracy is an afterthought.
The harsh truth? These tools aren’t intelligent. They don’t “know” anything. They’re just sophisticated autocomplete systems designed to generate plausible-sounding sentences. They’re improvising, ad-libbing—sometimes with devastating consequences.
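If “sophisticated autocomplete” sounds dismissive, a toy example shows what is meant. The few-line bigram model below is my own illustration, not how production models are built (those use far larger neural networks), but the principle is the same: it continues text with statistically plausible words and has no notion of whether the result is true.

```python
# Toy "autocomplete": a bigram model trained on a few sentences.
# It continues text with statistically plausible words; truth never enters into it.
import random
from collections import defaultdict

TRAINING_TEXT = (
    "the report says the model cites sources . "
    "the model cites sources that do not exist . "
    "the report says the sources are fabricated ."
)

# Count which word tends to follow which.
follows = defaultdict(list)
words = TRAINING_TEXT.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def autocomplete(start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # pick a plausible next word
    return " ".join(out)

print(autocomplete("the"))  # fluent, plausible, and entirely indifferent to facts
```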
Howard ended on a cautiously optimistic note, pointing to the heavy investment flooding into AI research: “Today is the worst that the product will ever be.” But let’s be clear: just because it may improve doesn’t mean it isn’t dangerous right now. Releasing flawed, hallucinated information into the world at scale is irresponsible.
Until AI models are held accountable—and users start caring about the source of their information—we're all stuck with a confident liar in our pocket, whispering answers that may or may not be true.