Threats from malicious cyber activity are likely to increase as nation-states, financially motivated criminals, and novices increasingly incorporate artificial intelligence into their routines, the UK’s top intelligence agency said.
The assessment, from the UK’s Government Communications Headquarters, predicted ransomware will be the biggest threat to get a boost from AI over the next two years. AI will lower barriers to entry, a change that will bring a surge of new entrants into the criminal enterprise. More experienced threat actors—such as nation-states, the commercial firms that serve them, and financially motivated crime groups—will likely also benefit, as AI allows them to identify vulnerabilities and bypass security defenses more efficiently.
“The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term,” Lindy Cameron, CEO of the GCHQ’s National Cyber Security Centre, said. Cameron and other UK intelligence officials said that their country must ramp up defenses to counter the growing threat.
The assessment, which was published Wednesday, focused on the effect AI is likely to have in the next two years. The chances of AI increasing the volume and impact of cyber attacks in that timeframe were described as “almost certain,” the GCHQ’s highest confidence rating. Other, more-specific predictions listed as almost certain were:
- AI improving capabilities in reconnaissance and social engineering, making them more effective and harder to detect
- More impactful attacks against the UK as threat actors use AI to analyze exfiltrated data faster and more effectively, and use it to train AI models
- Beyond the two-year threshold, the commoditization of AI-enabled tools in criminal and commercial markets, improving the capabilities of financially motivated and state actors
- Continued adoption of AI by ransomware criminals and other threat actors already using it, a trend that will persist into 2025 and beyond
The area of biggest impact from AI, Wednesday’s assessment said, would be in social engineering, particularly for less-skilled actors.
“Generative AI (GenAI) can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing,” intelligence officials wrote. “This will highly likely increase over the next two years as models evolve and uptake increases.”
The assessment added: “To 2025, GenAI and large language models (LLMs) will make it difficult for everyone, regardless of their level of cyber security understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts.”