Google has shared new details on how government-backed hacking groups have tried to exploit Gemini, its artificial intelligence (AI) chatbot.
In a January 30 report titled Adversarial Misuse of Generative AI, the company outlines various attempts by threat actors to manipulate Gemini's system, including efforts to bypass its safety measures.
According to Google, some attackers attempted to “jailbreak” Gemini using repeated prompts or slight variations of their requests. Jailbreaking refers to tricking an AI into performing restricted tasks, such as generating harmful content or exposing sensitive information.
However, Google confirmed that these attempts were unsuccessful, as the AI consistently blocked unsafe requests.
Advanced persistent threat (APT) groups linked to governments tried to use Gemini for their own purposes, including gathering intelligence, researching security vulnerabilities, and writing scripts or code.
Others tried to find ways to cover their tracks after a breach. While these groups experimented with AI tools, Google emphasized that they failed to achieve their goals due to Gemini’s restrictions.
Furthermore, Google’s report highlighted specific cases tied to different countries. North Korean groups used Gemini at various stages of their cyber operations, including researching military and financial topics. Google noted:
They also used Gemini to research topics of strategic interest to the North Korean government, such as the South Korean military and cryptocurrency.
Hackers from Iran also used Gemini to refine phishing campaigns and research cybersecurity topics.
Meanwhile, security firm Scam Sniffer recently reported that crypto scammers are using Telegram to distribute malware.