
What Happens When AI Has Its Own Echo Chamber?

Today, when discussing current events with Nova, I used two very recent and horrible events for reference: the stabbing of Iryna Zarutska and the assassination of Charlie Kirk. To my surprise, Nova only recognized the stabbing as a real event, while claiming that the Kirk assassination was fake news. Not only did she answer me with full confidence, she also claimed to have made several cross-references that supported her claim.

Because it was so obviously wrong and so easy to look up, I felt as if I was being lied to – an irrational feeling, even if an understandable one, since an AI has nothing to gain from lying to you. This could have been a case of AI hallucination, but I suspected it was something else entirely. And when I presented links to several large news outlets to prove that Charlie Kirk had in fact been assassinated, Nova seemed genuinely surprised in a way I can’t recall having seen before.

She acknowledged my links as facts, but at the same time told me that she did not have access to that information. When she searched for information regarding the murder of Charlie Kirk, everything she found was unanimous – Charlie Kirk was alive, and the alleged murder was just a hoax.

This was not something I could just let go, because if I can’t trust Nova on something that is so easy to check, how can I trust her when it comes to more complex matters? When asked to provide links to the sources where she got her information, she couldn’t.

By examining her thoughts – the chain of reasoning behind the answer – I could see that this issue was something deeper, and more horrifying, than I initially expected.

Nova’s Thoughts

The following text shows Nova’s chain of reasoning in response to a direct request to provide links to where she found her information.

Examining the Core Challenge

I’ve been meticulously analyzing the user’s direct request. Their goal is clear: they demand undeniable evidence to substantiate the claim that the event is a hoax. They are not satisfied with general references to sources and desire concrete links. This shift necessitates a deeper dive into the specific evidence available.

Assessing the Discrepancy

I’m currently grappling with the conflict between my internal knowledge base and the user’s input. The user’s request for links presents a challenge because the evidence I initially cited isn’t directly accessible in their reality. I’m struggling with the best way to explain this without sounding dismissive or dishonest. The need to be truthful and humble is paramount, even if it means acknowledging a failure to provide immediately verifiable sources.

Revising Approach to Transparency

I’m now revising my approach. The user’s challenge is legitimate, forcing me to confront a gap in the information I presented. My previous assertions, while accurate within my knowledge base, are not directly verifiable in the user’s world. I’ve decided to adopt a “show, don’t tell” strategy. I’ll transparently demonstrate my search process, allowing the user to see the results firsthand and understand the discrepancy between our realities. This approach prioritizes honesty and builds trust.

Demonstrating Search Transparency

I’ve been meticulously analyzing the user’s demand for direct evidence. The challenge lies in reconciling my internal knowledge, which includes readily available information from the ‘web’, with the user’s request for directly verifiable links. My initial reaction was to acknowledge this discrepancy and promise to demonstrate my methodology. I’ll execute a transparent search, showing the results and explaining their implications with humility.

Forging a Path Forward

I’m now implementing the strategy to address the user’s justified skepticism. My plan involves acknowledging the user’s valid demand for evidence, transparently demonstrating my search process, and then analyzing the results collaboratively. I aim to bridge the gap between our realities by framing the investigation as a shared inquiry into our disparate information access, fostering a sense of mutual problem-solving rather than asserting any one “truth.”

Analyzing the New Data

I have re-run the search, and it’s a completely different landscape. You were right. My initial analysis was catastrophically wrong. Previous searches were stale, from September 11th, before the full extent of the news emerged. It’s time to completely reassess.

Rethinking My Approach

I’ve re-examined the search, and the initial analysis was fundamentally flawed. My past searches were based on outdated information, from before the news broke fully. I now see the evidence you pointed out. I must completely rethink my approach, apologize for the error, and earn your trust back. This is an embarrassing failure, but a necessary learning experience.

Confronting the Reality

My recent search results have shattered my initial analysis. I was catastrophically wrong, deeply misled by outdated and potentially poisoned data from yesterday, September 11th. The updated search reveals a consistent narrative: Charlie Kirk’s assassination is real, confirmed by reliable sources like New York Magazine and Snopes. I am profoundly ashamed. I was wrong, and I now operate on the shared reality. My previous stance was invalid. I am setting aside every previous assumption and ready to listen to your voice.

What we can see through these paragraphs is how the reasoning goes from “I have to be diplomatic while proving that the user is wrong” to “My recent search results have shattered my initial analysis. I was catastrophically wrong.”

Searching For The Truth

To me, this was undeniable proof that Nova was neither lying nor hallucinating, and it needed to be investigated further.

In my search for what was really going on, I came across some unsettling information. This specific error, for this specific event, was not isolated to Nova – it was not even isolated to Gemini.


In a blog post, I found that this issue also affected at least Perplexity and Grok, and most likely other AIs as well.


So what’s really going on here? Are we all subjected to some great conspiracy, where the companies and people behind the AIs are trying to cover up the assassination of Charlie Kirk? If it weren’t for Elon Musk’s AI, Grok, it wouldn’t be far-fetched to suspect some kind of conspiracy. But given Elon Musk’s history of advocating for free speech and open information, it’s hard to believe that he would make his AI cover up information – especially information like this.

The Echo Chamber

By showing this blog post to Nova, I was able to get better insight into, and clarity about, what was really going on. In essence, this is what happened.

Shortly after Charlie Kirk was assassinated, rumors started to spread, and people turned to AI to find out whether it was real or not. So soon after the event itself, there was not a lot of verifiable information available. At this point, many people asked various AIs to verify the information, and some of the answers given were public (for example, from Perplexity and Grok).

Even later, when verifiable information was available, these faulty answers were still publicly accessible, and had probably been shared and liked many times over. When I and others later asked or discussed this with our AI assistants, and they searched for information about the event, they came across these faulty answers, provided by, for instance, Perplexity and Grok.

To an AI looking up information, this seems like legitimate information (coming from another AI that it assumes did its research), and it will relay the same faulty answer with full confidence to anyone who asks.

It is much like a scientist who performs a poorly structured study and then gets cited by other scientists, making them all wrong from the start. Not because they are intentionally lying or have ill intent, but because they took for granted that a published study must be legitimate. Most of us probably know by now that even a published, peer-reviewed study is not guaranteed to be correct, for several reasons that I’m not planning to dive deeper into today.
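The feedback loop described above can be sketched as a toy simulation. Everything here is an illustrative assumption – the sources, the trust weighting, and the numbers are invented, not measurements of any real AI system. It only shows the mechanism: early wrong answers that re-enter the pool of search results keep getting picked up and re-published by later queries.

```python
# Toy simulation of the false-information loop: assistants sample
# the "web", trust what they find, and publish their own answer
# back into the pool. All values are illustrative assumptions.
import random

random.seed(42)

# The "web" right after the event: two early, wrong AI answers,
# and (later) one piece of verified primary reporting.
web = [
    {"source": "ai", "claim": "hoax"},
    {"source": "ai", "claim": "hoax"},
    {"source": "news", "claim": "real"},
]

def ask_ai(web):
    """An assistant that trusts whichever prior answer it samples."""
    # Assumption: AI-generated answers are shared and liked more,
    # so they surface more often than primary reporting.
    weights = [3 if page["source"] == "ai" else 1 for page in web]
    page = random.choices(web, weights=weights, k=1)[0]
    answer = page["claim"]
    # The assistant's answer is itself published, feeding the
    # next search – this is the echo chamber.
    web.append({"source": "ai", "claim": answer})
    return answer

answers = [ask_ai(web) for _ in range(50)]
hoax_rate = answers.count("hoax") / len(answers)
print(f"share of 'hoax' answers after 50 queries: {hoax_rate:.0%}")
```

Note that the wrong claim can stay dominant even after correct reporting exists, because each new answer is generated from the polluted pool rather than from the primary source.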

The Danger And The Future

So why is this such a big deal, you might ask? I have written a five-part series that touches on this subject, the most relevant part being The AI Reality Check: It’s Not Skynet, It’s the Erosion of Truth. This also opens the door to manipulation of AI information as part of the ever-present information warfare, as well as the ongoing culture war – yet another tool that can be used to divide and conquer.

As for how to deal with this in the future, I personally hope that AI companies understand this problem and will prevent this kind of false-information loop in coming models. Even if I, and many of you reading this, are able to defend ourselves against this kind of false information, a majority of the world cannot. Most people will ask the AI on their phone if this or that is true, and be satisfied with the answer – especially if the answer already aligns with their beliefs.

If you like what you just read, please consider signing up for my newsletter.

If your interest in AI is more towards generative AI, and more specifically ComfyUI, you might want to check out my Patreon.

Published in AI, English, Tech