Global – Generative AI is fuelling a sharp rise in cyberattacks, with 62% of organisations hit by deepfake incidents in a 12-month period, according to new research from Gartner.
The survey, conducted between March and May 2025 among 302 cybersecurity leaders in North America, EMEA and Asia-Pacific, reveals that AI systems are increasingly being exploited for social engineering and automated process manipulation.
Beyond deepfakes, 32% of companies reported prompt-based exploits that manipulated AI models to generate biased or harmful outputs. The infrastructure behind enterprise AI applications is also under fire, with 29% of respondents confirming attacks targeting chatbots and assistants through adversarial prompting (source: BW Marketing World).
Gartner warns that while 67% of cybersecurity leaders see genAI risks as requiring significant defence changes, sweeping overhauls are premature. Instead, firms should strengthen core security controls and build targeted measures, as adversarial AI use becomes a mainstream threat to business resilience.
Discover more AI insights in our Artificial Intelligence topic.
Strategic opportunity
Build deepfake resilience by stress-testing AI systems and creating rapid-response protocols to safeguard brand integrity and consumer trust