Once the domain of elite spies and con artists, social engineering is now in the hands of anyone with an internet connection – and AI is the accomplice. Supercharged by generative tools and deepfake technology, today’s social engineering attacks are no longer sloppy phishing attempts. They’re targeted, psychologically precise, and frighteningly scalable.
Welcome to Social Engineering 2.0, where the manipulators don’t need to know you personally. Their AI already does.
Deception at machine speed
Social engineering works because it bypasses firewalls and technical defences. It attacks human trust. From fake bank alerts to long-lost Nigerian princes, these scams have traditionally relied on generic hooks and low-effort deceit. But that has changed – and it continues to change.
“AI is augmenting and automating the way social engineering is carried out,” says Anna Collard, SVP of Content Strategy & Evangelist at KnowBe4 Africa. “Traditional phishing markers like spelling errors or bad grammar are a thing of the past. AI can mimic writing styles, generate emotionally resonant messages, and even recreate voices or faces – all within minutes.”
The result? Cybercriminals now wield the capabilities of psychological profilers. By scraping publicly available data – from social media to company bios – AI can construct detailed personal dossiers. “Instead of one-size-fits-all lures, AI enables criminals to create bespoke attacks,” Collard explains. “It’s like giving every scammer access to their own digital intelligence agency.”
The new face of manipulation: Deepfakes
One of the most chilling evolutions of AI-powered deception is the rise of deepfakes – synthetic video and audio designed to impersonate real people. “There are documented cases where AI-generated voices have been used to impersonate CEOs and trick staff into wiring millions,” notes Collard.
In South Africa, a recent deepfake video circulating on WhatsApp featured a convincingly faked endorsement by FSCA Commissioner Unathi Kamlana promoting a fraudulent trading platform. Nedbank had to publicly distance itself from the scam.
“We’ve seen deepfakes used in romance scams, political manipulation, even extortion,” says Collard. One emerging tactic involves simulating a child’s voice to convince a parent that their child has been kidnapped – complete with background noise, sobs, and a fake abductor demanding money.
“It’s not just deception anymore,” Collard warns. “It’s psychological manipulation at scale.”
The Scattered Spider effect
One cybercrime group exemplifying this threat is Scattered Spider. Known for its fluency in English and deep understanding of Western corporate culture, the group specialises in highly convincing social engineering campaigns. “What makes them so effective,” notes Collard, “is their ability to sound legitimate, form quick rapport, and exploit internal processes – often tricking IT staff or help-desk agents.” Their human-centric approach, amplified by AI tools – such as audio deepfakes that spoof victims’ voices to gain initial access – shows how cultural familiarity, psychological insight, and automation are combining to redefine what cyber threats look like. It’s not just about technical access – it’s about trust, timing, and manipulation.
Social engineering at scale
What once took skilled con artists days or weeks of interaction – establishing trust, crafting believable pretexts, and subtly nudging behaviour – can now be done by AI in the blink of an eye. “AI has industrialised the tactics of social engineering,” says Collard. “It can perform psychological profiling, identify emotional triggers, and deliver personalised manipulation with unprecedented speed.”
The classic stages – reconnaissance, pretexting, rapport-building – are now automated, scalable, and tireless. Unlike human attackers, AI doesn’t get sloppy or fatigued; it learns, adapts, and improves with every interaction.
The biggest shift? “No one has to be a high-value target anymore,” Collard explains. “A receptionist, an HR intern, or a help-desk agent – all may hold the keys to the kingdom. It’s not about who you are – it’s about what access you have.”
Building cognitive resilience
In this new terrain, technical solutions alone won’t cut it. “Awareness has to go beyond ‘don’t click the link,’” says Collard. She advocates for building ‘digital mindfulness’ and ‘cognitive resilience’ – the ability to pause, interrogate context, and resist emotional triggers.
This means:
- Training staff to recognise emotional manipulation, not just suspicious URLs.
- Running simulations using AI-generated lures, not outdated phishing templates.
- Rehearsing calm, deliberate decision-making under pressure to counter panic-based manipulation.
Collard recommends unconventional tactics, too. “Ask HR interviewees to place their hand in front of their face during video calls – it can help spot deepfakes in hiring scams,” she says. Families and teams should also consider pre-agreed code words or secrets for emergency communications, in case AI-generated voices impersonate loved ones.
Defence in depth – human and machine
While attackers now have AI tools, so too do defenders. Behavioural analytics, real-time content scanning, and anomaly detection systems are evolving rapidly. But Collard warns: “Technology will never replace critical thinking. The organisations that win will be the ones combining human insight with machine precision.”
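To make the defensive side a little more concrete, here is a minimal sketch of the kind of behavioural check such analytics systems build on: flagging a login at an hour far outside a user’s historical pattern. The data, threshold, and function name are illustrative assumptions for this article, not any vendor’s implementation; production behavioural analytics weigh many more signals (device, location, typing cadence) and feed far richer models.

```python
# Minimal, hypothetical sketch of a behavioural anomaly check.
# Real systems combine many signals; this flags only unusual login hours.
from statistics import mean, stdev

# Baseline: hours (0-23) at which a user has historically logged in.
baseline_login_hours = [8, 9, 9, 10, 8, 9, 17, 9, 8, 10]

def is_anomalous(login_hour: int, history: list[int], z_threshold: float = 2.0) -> bool:
    """Flag a login whose hour deviates strongly from the user's baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:  # no variation in history; any deviation is suspicious
        return login_hour != mu
    z = abs(login_hour - mu) / sigma  # distance from baseline, in std deviations
    return z > z_threshold

# A 03:00 login stands out against a daytime baseline and would trigger
# step-up verification rather than silent approval; a 09:00 login would not.
print(is_anomalous(3, baseline_login_hours))   # True
print(is_anomalous(9, baseline_login_hours))   # False
```

The point of such checks is not to block users outright but to route unusual behaviour to stronger verification – exactly the pause-and-question reflex Collard wants humans to develop, applied by machines.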
And with AI lures growing more persuasive, the question is no longer whether you’ll be targeted – but whether you’ll be prepared. “This is a race,” Collard concludes. “But I remain hopeful. If we invest in education, in critical thinking and digital mindfulness, in the discipline of questioning what we see and hear – we’ll have a fighting chance.”