Google has issued a warning about the potential security risks associated with artificial intelligence (AI) after state-sponsored hackers attempted to exploit its Gemini AI model. However, their efforts to jailbreak the system were ultimately unsuccessful. Google’s threat intelligence group (GTIG) has posted a paper on ‘Adversarial Misuse of Generative AI,’ which explains how threat actors have conversed with their AI chatbot, Gemini. Hackers try to jailbreak Google’s Gemini AI but fail with simple tricks Google reports that threat actors attempted to jailbreak Gemini AI using prompts, including efforts by government-backed advanced persistent threat (APT) groups to leverage the AI for malicious activities. However, the American multinational corporation found no evidence of advanced jailbreak attempts. Instead, the hackers used straightforward approaches, such as typing different phrases or repeating the same request many times. Google commented that these attempts were unsuccessful. Jailbreaks with the help of AI are prompt injection attacks that try to evade restrictions and make an AI model perform forbidden actions, such as revealing sensitive information or creating harmful content. According to Google, one APT actor attempted to exploit Gemini using publicly available jailbreak prompts to generate malicious code. However, the attempt failed, as Gemini responded with a safety-filtered output. Google mentioned that the attackers sought to achieve different evil objectives with the help of Gemini, such as gathering information on targets, identifying vulnerabilities from open sources, and writing code and scripts. Furthermore, some attempts were made to support post-compromise actions, for instance, to avoid detection. Iran, China, and North Korea exploit Google’s Gemini AI for cyber operations According to Google, Iran-based APT groups primarily leveraged AI for crafting phishing campaigns. They also used Gemini to conduct reconnaissance on defense experts and organizations and generate cybersecurity-related content. Meanwhile, China’s APT actors have leveraged Gemini to troubleshoot code, scripting, and development tasks. They also used AI to explore methods for gaining deeper access to target networks. Google’s threat intelligence group and North Korean APT groups have used Gemini in different phases of the attack life cycle, such as pre-attack research and development. The report said: They also used Gemini to research of strategic interest to the North Korean government, such as the South Korean military and cryptocurrency. – GTIG Last year, North Korean hackers stole $1.3 billion in digital assets, according to Chainalysis. Cryptopolitan Academy: FREE Web3 Resume Cheat Sheet - Download Now