Old-school, crude phishing attacks – the kind often dismissed as laughably obvious – are still used for a simple reason: they continue to work. But with the democratisation of AI, these crude, mistake-ridden emails are fast becoming a thing of the past. AI-based social-engineering attacks are easy to create, more sophisticated and highly personalised. Yet the same tool that so threatens organisations’ defences can also be used to protect them – but only if it’s done right.
AI-driven phishing attacks are on the rise, with one report estimating that they have increased by 60% in the past year alone. Phishing and other social-engineering-based cyberattacks are now incredibly difficult to spot, and can be produced quickly and at scale – all thanks to artificial intelligence.
AI dramatically changes the nature of cyber-attacks. “Traditional attacks often required a lot of manual effort – writing phishing emails and targeting individuals, or coding malware variants line-by-line,” says Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa. This meant that phishing emails were, for the most part, easier to spot thanks to bad grammar, typos and the generic wording of mass roll-outs.
Today, thanks to AI, attackers can automate these processes at massive scale. Generative AI can craft highly personalised phishing messages that mimic the writing style of trusted individuals and adapt to responses dynamically. Even technical security defences aren’t spared this new threat.
“AI is increasingly being used by cybercriminals to enhance the effectiveness of their attacks,” Collard says. “In terms of malware, AI can be used to automatically generate and test code to identify the most effective variants. And when it comes to evading detection, it can mimic an organisation’s network patterns and simulate normal activity – essentially hiding in plain sight.”
She adds that some AI bots can even adapt in real time if they detect they’re being watched, buying attackers time to exfiltrate data or cause damage quickly.
“All of this means cybercrime is no longer limited to advanced nation states, criminal syndicates, or highly skilled hackers,” Collard explains. “With the rise of crimeware-as-a-service and the democratisation of AI-powered tools, anyone with the right motivation can become a serious cyber threat.”
The devastating sophistication of deepfakes
Deepfakes also pose a significant threat to organisations’ and individuals’ cybersecurity. Deepfake scams, which emulate a person’s appearance or voice, are on the rise worldwide, with a recent US report revealing that deepfake fraud calls result in far greater financial damage than traditional phone scams.
A well-known case is the use of deepfake videos of executives at British engineering firm Arup in a Teams call, instructing an employee to transfer $25 million to a fraudulent account.
As AI grows more advanced by the day, it becomes increasingly versatile for both good and bad uses. From a cybersecurity perspective, the upshot is that AI is driving cyber threats that can be far more devastating.
“Organisations need to be aware that polymorphic malware using machine learning can continuously alter its code to bypass detection,” Collard warns. “These attacks are so dangerous because they exploit both human trust and automation – humans are more easily tricked and traditional defences struggle to keep up.”
Fighting AI with AI
In the wrong hands, AI can be a formidable enemy. But in the right hands, it can be an organisation’s strongest ally, contends Collard. “Machine learning can be used to detect anomalies in user behaviour, network traffic or email communication,” she says. “AI can help triage alerts, predict attack paths and automate responses.”
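To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn’s IsolationForest to flag unusual login sessions. The feature set (login hour, data transferred, failed attempts) and the example values are illustrative assumptions, not a production design or a description of any specific vendor’s tooling.

```python
# Minimal sketch of ML-based anomaly detection on user behaviour,
# using scikit-learn's IsolationForest. The features are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" activity: office-hours logins, modest traffic,
# few failed login attempts.
normal = np.column_stack([
    rng.normal(10, 2, 500),    # login hour (centred around 10:00)
    rng.normal(50, 15, 500),   # MB transferred per session
    rng.poisson(0.2, 500),     # failed login attempts
])

# Train on historical behaviour assumed to be mostly benign.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new sessions: a 3 a.m. login moving 900 MB after repeated
# failed attempts should stand out from the learned baseline.
sessions = np.array([
    [11.0, 45.0, 0],   # ordinary session
    [3.0, 900.0, 6],   # suspicious session
])
for session, label in zip(sessions, model.predict(sessions)):
    status = "anomalous" if label == -1 else "normal"
    print(f"{session} -> {status}")
```

In practice, as Collard notes below, such models are only as useful as the data they are trained on and the professionals who interpret their output.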
But she emphasises that AI needs to be trained on high-quality, diverse data – and overseen by skilled professionals who understand both the technology and human behaviour. “It’s not just about having AI,” she comments, “but knowing how to use it wisely.”
“Businesses should prepare by investing in people and their skills,” says Collard. “With the right people and skills in place, processes and technology will make all the difference.
“Train your staff on the new realities of AI-enhanced threats, especially around phishing and social engineering. Deploy AI-powered detection tools, and ensure you have the right talent to manage them.”
She also recommends revisiting your organisation’s incident-response plan and simulating how you’d handle an AI-driven attack. “Finally, remember to adequately protect your AI systems when you deploy them, as they can become a vulnerability themselves.”