Latest News

Research: AI agents are highly vulnerable to hijacking attacks

Written on Aug 15, 2025

Some of the most widely used AI agents and assistants from Microsoft, Google, OpenAI and other major companies are susceptible to being hijacked with little or no user interaction, according to new research from Zenity Labs.  

Zenity researchers said hackers can exfiltrate data, manipulate critical workflows across targeted organizations and, in some cases, even impersonate users.  

Beyond infiltrating these agents, the researchers said, attackers could also gain memory persistence, letting them maintain long-term access and control.  

Researchers demonstrated vulnerabilities in multiple popular AI agents:  

  • OpenAI’s ChatGPT could be compromised using an email-based prompt injection that granted them access to connected Google Drive accounts.  

  • Microsoft Copilot Studio’s customer-support agent leaked entire CRM databases, and researchers identified more than 3,000 agents in the wild that were at risk of leaking internal tools.  

  • Salesforce’s Einstein platform was manipulated to reroute customer communications to researcher-controlled email accounts.  

  • Attackers could turn Google’s Gemini and Microsoft 365’s Copilot into insider threats, targeting users with social-engineering attacks and stealing sensitive conversations.  

Zenity Labs disclosed its findings to the companies, and some of them issued patches immediately. 

The research comes as AI agents advance rapidly in enterprise environments and as major companies encourage their employees to embrace the technology as a significant productivity boost.