Some of the most widely used AI agents and assistants from Microsoft, Google, OpenAI and other major companies are susceptible to being hijacked with little or no user interaction, according to new research from Zenity Labs.
Zenity researchers said hackers can exfiltrate data, manipulate critical workflows across targeted organizations and, in some cases, even impersonate users.
Beyond infiltrating these agents, the researchers said, attackers could also gain memory persistence, letting them maintain long-term access and control.
The researchers demonstrated vulnerabilities across multiple popular AI agents.
Zenity Labs disclosed its findings to the affected companies, some of which issued patches immediately.
The research comes as adoption of AI agents accelerates in enterprise environments and as major companies encourage employees to embrace the technology as a significant productivity boost.