‘I hadn’t verified a single thing’: Using ChatGPT for Iran war news changed how I trust information

Opinion · By Becca Caddy · Published 11 April 2026

AI search is better but is our judgement worse?

Hand holding an iPhone showing ChatGPT updates on the Iran war. (Image credit: Future / Graham Barlow)

A few weeks ago I started deliberately using ChatGPT to follow the latest news about the Iran war. It was partly a test to see how chatbots compare to traditional news sites at presenting real-time information, and partly because the pace of the news at the time was overwhelming.

But at some point I noticed I hadn't clicked through to verify a single thing. I'd just been absorbing whatever ChatGPT told me.


Tracking the change

Using AI for search hasn't always been a good idea. Not all that long ago, ChatGPT didn't have access to real-time information. Google's AI Overviews was recommending people add glue to pizza to help the cheese stick and suggesting eating a rock a day. The problems with relying on AI for real-time, accurate information were obvious and easy to spot.

But a lot has improved in the past year. Models are now more accurate, information is more up to date (with many chatbots now accessing the internet in real time), and sources are more likely to be cited.

AI search has shifted into what Ofcom recently called "answer engines" — tools that don't just point us towards information, but provide it directly, in plain conversational language.

All of this sounds good, and in many ways it is. For low-stakes, quick queries, like a recipe, a definition, a travel tip or buying advice, I believe AI search can be genuinely useful. The conversational format also helps you drill down, ask the right follow-up questions and find what you need faster than clicking through a list of links.

But I also think that this improvement in itself is a problem.

The case against better answers

AI brain learning (Image credit: Getty Images / Yuichiro Chino)

When AI search was obviously more flawed, many of us stayed alert. Now that it's better and more reliable, I worry that we're less likely to question it. And the conversational format plays a pretty major role in that.

We're wired to treat fluent, coherent language as credible. When something reads like a confident explanation, it's much harder for us to step back and interrogate it — even when we know we should. I've written about this same pattern across other areas of AI: in therapy, in relationships, in health advice. It’s very easy to offload our thinking to whatever AI tool we’re using and become way less likely to apply our own judgement.

Ellen Scott, whom I spoke to about this in a work context, called it "smoothout": a kind of cognitive offloading where the effort of evaluating information gets absorbed by AI. It removes the friction that used to make you think.

Traditional search wasn't perfect, but it had that friction built in. You'd typically scan a list of links, look at the sources and make quick judgements about credibility. It was active, even when it felt automatic. AI search replaces all of that with a single synthesized answer delivered in a conversational (and sometimes sycophantic) tone, which means you're sitting back and receiving information rather than evaluating it.

We know from Pew Research that when an AI summary appears in search results, people are significantly less likely to click through to original sources. So, AI is effectively answering your question and reducing the likelihood that you'll check it.

The failures that remain

Of course, AI search still isn't completely reliable either.

Hallucinations — where a chatbot confidently generates something that isn't true — haven't gone away. Citations are also still sometimes misleading or broken.

And there's another problem: sycophancy. Even though this is something AI companies are actively addressing, we know that AI systems still have a tendency to agree with you. This is often because these systems are optimized to feel like a good and natural conversation, but not necessarily to tell you the truth.

What makes this worse is that the improvements in accuracy make the remaining errors harder to spot. When a tool is obviously unreliable, I think we stay more critical of it. But when it's mostly right, I worry we stop checking, just like I did in my own experiment.

Building better systems and better judgement

Woman using the internet for information and browsing (Image credit: Getty Images / Francesco Carta fotografo)

The standard answer here is that people need better media literacy for the AI age, which I believe they do. Understanding what these systems are doing, treating AI outputs as a starting point rather than a conclusion, learning to question fluent confident language, all of that is incredibly important.

But the times we're most likely to reach for AI search — during fast-moving situations, when we need answers to high-stakes questions, in emotionally overwhelming events — are exactly the times when verification matters most and critical thinking is hardest.

In previous reporting, I've spoken to therapists and doctors who've noticed the same pattern: patients often turn to AI during moments of crisis or distress, precisely when they're least likely to scrutinize what they're being told. That's why the burden can't sit entirely with users.

If AI tools are going to sit at the center of how people find information, their design choices matter enormously. That has to mean clear attribution, interfaces that prompt you to check claims and follow up with other sources, and tools that show you what they didn't include, not just what they did.

AI search has gotten better; there's no doubt about it. I just think we need to be honest about what better actually means for how we find, process and understand information in the long run.

Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality. 
