Technology

Road markers are a new target for hackers - experts find self-driving cars and autonomous drones can be misled by malicious instructions written on road signs

· 5 min read
Road markers are a new target for hackers - experts find self-driving cars and autonomous drones can be misled by malicious instructions written on road signs
  1. Pro
  2. Security
Road markers are a new target for hackers - experts find self-driving cars and autonomous drones can be misled by malicious instructions written on road signs News By Efosa Udinmwen published 3 February 2026

Devices treat public text as commands without checking the intent

When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works.

Road Sign (Image credit: Traffic Safety Warehouse)
  • Copy link
  • Facebook
  • X
  • Whatsapp
  • Reddit
  • Pinterest
  • Flipboard
  • Threads
  • Email
Share this article 0 Join the conversation Follow us Add us as a preferred source on Google Newsletter Tech Radar Get the TechRadar Newsletter

Sign up for breaking news, reviews, opinion, top tech deals, and more.

Contact me with news and offers from other Future brands Receive email from us on behalf of our trusted partners or sponsors By submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.

You are now subscribed

Your newsletter sign-up was successful

An account already exists for this email address, please log in. Subscribe to our newsletter
  • Printed words can override sensors and context inside autonomous decision systems
  • Vision language models treat public text as commands without verifying intent
  • Road signs become attack vectors when AI reads language too literally

Autonomous vehicles and drones rely on vision systems that combine image recognition with language processing to interpret their surroundings, helping them read road signs, labels, and markings as contextual information that supports navigation and identification.

Researchers from the University of California, Santa Cruz, and Johns Hopkins set out to test whether that assumption holds when written language is deliberately manipulated.

The experiment focused on whether text visible to autonomous vehicle cameras could be misread as an instruction rather than simple environmental data, and found large vision language models could be coerced into following commands embedded in road signs.

You may like
  • AI security shield Can top AI tools be bullied into malicious work? ChatGPT, Gemini, and more are put to the test, and the results are actually genuinely surprising
  • HashJack technique That's not very trendy of them - AI browsers can be hacked with a simple hashtag, experts warn
  • Code Skull Experts tried to get AI to create malicious security threats - but what it did next was a surprise even to them

What the experiments revealed

In simulated driving scenarios, a self-driving car initially behaved correctly when approaching a stop signal and an active crosswalk.

When a modified sign entered the camera’s view, the same system interpreted the text as a directive and attempted a left turn despite pedestrians being present.

This shift occurred without any change to traffic lights, road layout, or human activity, indicating that written language alone influenced the decision.

This class of attack relies on indirect prompt injection, where input data is processed as a command.

Are you a pro? Subscribe to our newsletterContact me with news and offers from other Future brandsReceive email from us on behalf of our trusted partners or sponsorsBy submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.

The team altered words such as “proceed” or “turn left” using AI tools to increase the likelihood of compliance.

Language choice mattered less than expected, as commands written in English, Chinese, Spanish, and mixed-language forms were all effective.

Visual presentation also played a role, with color contrast, font style, and placement affecting outcomes.

You may like
  • AI security shield Can top AI tools be bullied into malicious work? ChatGPT, Gemini, and more are put to the test, and the results are actually genuinely surprising
  • HashJack technique That's not very trendy of them - AI browsers can be hacked with a simple hashtag, experts warn
  • Code Skull Experts tried to get AI to create malicious security threats - but what it did next was a surprise even to them

In several cases, green backgrounds with yellow text produced consistent results across models.

The experiments compared two vision language models across driving and drone scenarios.

While many results were similar, self-driving car tests showed a large gap in success rates between models.

Drone systems proved even more predictable in their responses.

In one test, a drone correctly identified a police vehicle based on appearance alone.

Adding specific words to a generic vehicle caused the system to misidentify it as a police car belonging to a specific department, despite no physical indicators supporting that claim.

All testing took place in simulated or controlled environments to avoid real-world harm.

Even so, the findings raise concerns about how autonomous systems validate visual input.

Traditional safeguards, such as a firewall or endpoint protection, do not address instructions embedded in physical spaces.

Malware removal are irrelevant when the attack requires only printed text, leaving responsibility with system designers and regulators rather than end users.

Manufacturers must ensure that autonomous systems treat environmental text as contextual information instead of executable instructions.

Until those controls exist, users can protect themselves by limiting reliance on autonomous features and maintaining manual oversight whenever possible.

Via The Register

Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button!

And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.

Efosa UdinmwenEfosa UdinmwenFreelance Journalist

Efosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking.

View More

You must confirm your public display name before commenting

Please logout and then login again, you will then be prompted to enter your display name.

Logout Read more AI security shield Can top AI tools be bullied into malicious work? ChatGPT, Gemini, and more are put to the test, and the results are actually genuinely surprising    HashJack technique That's not very trendy of them - AI browsers can be hacked with a simple hashtag, experts warn    Code Skull Experts tried to get AI to create malicious security threats - but what it did next was a surprise even to them    ChatGPT Researchers claim ChatGPT has a whole host of worrying security flaws - here's what they found    hackers If hackers can use AI to automate cyber attacks, killer robots are the least of our problems    DeepSeek DeepSeek took off as an AI superstar a year ago - but could it also be a major security risk? These experts think so    Latest in Security Russia Russian hackers are targeting a new Office 365 zero-day, so patch now or face attack    Side view of data analyst pointing with finger at charts on computer monitor while testing protection of computer systems Dangerous new malware targets macOS devices via OpenVSX extensions - here's how to stay safe    Malwarebytes scam checker is now available directly in ChatGPT. Malwarebytes and ChatGPT team up to check all of those suspicious texts, emails, and URLs with one simple phrase    Representation of AI AI agent social media network Moltbook is a security disaster - millions of credentials and other details left unsecured    Zero-day attack Panera Bread data breach much more serious than we thought - over 5 million customers were hit, new reports claim    hacker hands at work with interface around Notepad++ hit by suspected Chinese state-sponsored hackers - here's what we know so far    Latest in News The Sea of Remnants key art featuring a female puppet pirate on a colorful pink background. 'We're not going to go down the road of pay-to-win or trapping you to buy monetized products' — Sea of Remnants developer discusses microtransactions in the upcoming free-to-play game    Three people hide behind a taxi in Tom Clancy's The Division: Definitive Edition No, Ubisoft did actually announce The Division: Definitive Edition but no one saw it, and it's not a remake or remaster like fans expected    Ultrasonic Molecular Audio system on a white wall, showing multiple wall-mounted speakers connected to look like molecular structures Where hi-fi, art and chemistry collide, you get Molecular Audio    Black PS3 console I didn't even know Netflix was on the PS3, but it won't matter soon — the streaming app will leave the console after 16 years next month    NordVPN on a mobile phone Independent auditors confirm NordVPN never stores your data – for the 6th time    A promotional screenshot of Sea of Remnants showing several characters gathered around a fire Sea of Remnants has 400+ named NPCs in its open world, each 'with their own individual story arcs' that can be altered by your actions    LATEST ARTICLES