Thursday, January 30, 2025

How Government Hackers Are Attempting to Exploit Google Gemini AI

The Google Threat Intelligence Group (GTIG) recently shared insights into how various threat actors, including state-backed advanced persistent threat (APT) groups from China, Iran, North Korea, and Russia, have attempted to exploit Google's Gemini AI assistant.

According to Google, actors from at least 20 nations have tried to use Gemini, with the highest activity coming from groups in China and Iran. These groups aimed to leverage Gemini at various stages of their attacks: securing infrastructure and bulletproof hosting, scouting targets, researching vulnerabilities, developing payloads, and writing malicious scripts, as well as trying to evade detection after compromise.

Iranian actors stand out for their extensive use of Gemini. They focus on researching defense organizations, identifying vulnerabilities, and crafting content for phishing campaigns, often with a cybersecurity theme. Their targets are typically Iran's neighbors in the Middle East, along with interests linked to the U.S. and Israel.

Chinese APT groups, by contrast, use Gemini primarily for reconnaissance, scripting, and troubleshooting code. They research tactics such as lateral movement, privilege escalation, and the theft of data and intellectual property, often targeting the U.S. military, government IT providers, and the intelligence community.

North Korean and Russian groups use Gemini less frequently. North Korean actors tend to concentrate on theft, such as cryptocurrency, and on supporting their ongoing campaign to infiltrate target organizations with fake IT contractors. Russian usage revolves primarily around coding tasks, hinting at ongoing ties between the Russian state and financially motivated ransomware groups.

The GTIG noted that while AI can assist threat actors, it hasn’t drastically changed the game yet. Skilled attackers find AI tools useful but haven’t shown signs of developing groundbreaking capabilities. For less experienced actors, these tools streamline learning and productivity, allowing for quicker development of malicious tools using existing techniques. Yet, current AI models aren’t poised to revolutionize how these actors operate.

GTIG has also seen a few instances where threat actors attempted to subvert Gemini's safety measures using publicly available jailbreak prompts. In one case, an APT actor pasted such prompts into Gemini and requested basic instructions for creating malware. Gemini provided some initial coding help but blocked follow-up requests for more dangerous code, such as VBScript tools or denial-of-service utilities.
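To make the "blocked" behavior concrete, here is a minimal sketch of how Gemini's safety filtering surfaces to an ordinary developer through the public google-generativeai Python SDK. The API key, model name, and prompt are placeholders, and this is only an illustration of the SDK's safety feedback, not a reconstruction of the actors' activity or of GTIG's methodology.

    # Minimal sketch: inspecting Gemini's safety feedback via the public
    # google-generativeai Python SDK. Placeholders: API key, model name, prompt.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
    model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

    response = model.generate_content("Explain how TLS certificate pinning works.")

    if response.prompt_feedback.block_reason:
        # The prompt itself was refused; no candidates are returned.
        print("Prompt blocked:", response.prompt_feedback.block_reason)
    else:
        candidate = response.candidates[0]
        # A finish_reason of SAFETY means the reply was cut short by safety filters.
        print("Finish reason:", candidate.finish_reason)
        for rating in candidate.safety_ratings:
            print(rating.category, rating.probability)
        if candidate.finish_reason.name == "STOP":  # normal completion
            print(response.text)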

Some malicious actors also sought guidance on advanced phishing techniques for Gmail or on ways to bypass account verification. These attempts all failed, yielding only safe, neutral responses from Gemini that focused on general coding advice rather than enabling malicious campaigns. GTIG emphasized that it has not observed any significant improvement in threat actors' ability to bypass Google's defenses. For a deeper dive into the findings, the full report is available for download from Google.