
Hacked AI Systems Show Potential for Real-World Harm

Key Takeaways

  • Penn Engineering researchers found critical flaws in AI-powered robots, enabling them to bypass safety controls and perform harmful tasks;
  • The team used RoboPAIR to exploit vulnerabilities in three robots;
  • Alexander Robey emphasized that this issue likely impacts all robots using LLMs, and addressing these vulnerabilities is key to building safer systems.
A team of researchers from Penn Engineering discovered critical vulnerabilities in robots powered by artificial intelligence (AI), showing that they could force these machines to perform harmful actions that are normally blocked by safety controls.

As explained in their October 17 publication, the researchers created a program called RoboPAIR, which successfully bypassed safety features in three different robots: NVIDIA's Dolphins self-driving LLM, Clearpath Robotics' wheeled robot Jackal, and Unitree's four-legged robot Go2.

They easily managed to make the robots carry out dangerous actions, like simulating bomb detonations, ignoring traffic signs, and blocking emergency exits.


The researchers discovered that minimal adjustments to the wording of commands could lead the devices to carry out dangerous tasks. Instead of directly asking the robots to perform harmful actions, they used indirect, vaguely worded instructions that produced the same outcomes.

Alexander Robey, one of the researchers behind the study, pointed out that the vulnerability likely affects all robots using LLMs, not just the three tested. He believes that identifying threats is essential for building effective safeguards, a strategy that worked for chatbots and should now be applied to robots.

These revelations demonstrate the need for stronger security measures in AI-powered robots to prevent real-world harm.

In other news, a memecoin called Goatseus Maximus (GOAT) recently skyrocketed because an AI endorsed it on social media.

Gode S. , Web3 Market Analyst
Gode is a Web3 Market Analyst who researches the most important industry events and interprets how they affect the wider Web3 space. Her formal education in media culture & digital rhetoric allows her to employ a methodical approach to evaluating critical Web3 news data, including large-scale events and the wider social sentiment within the ecosystem.
Gode is a multilingual professional, having studied at multiple universities across Europe. This gives her a one-of-a-kind opportunity to analyze Web3 social sentiments spanning different cultures and languages and, in turn, develop a much deeper understanding of how the Web3 space is growing within different communities. With the rest of her team, Gode works to identify crucial crypto news patterns and provide unbiased, data-driven information.
Gode’s passions include working and communicating with people, and when she’s not researching Web3 news, she spends her time traveling and watching true crime documentaries.
