The Impact of AI on Social Engineering

Artificial intelligence (AI) continues to attract attention due to the speed of recent improvements in AI technologies and concerns about the consequences of these advancements. In cybersecurity, AI’s impact on social engineering is especially notable.
It’s already tricky to discern whether the emails, texts, and calls people receive while at work are genuine, and AI improvements aren’t going to make things any easier. This article examines the impact of AI on social engineering tactics and what it might mean for defending against attacks based on psychological manipulation.

More Persuasive Messages

Social engineering’s success ultimately depends on the ability to persuade. Many tactics help to increase the chances of tricking people, such as creating urgency or exploiting cognitive biases. However, one of the most effective methods of persuasion is to mirror the conversational style and tone that the recipient is used to seeing from the sender.

With impressive advancements in natural language processing (NLP), AI systems can now analyze vast amounts of data to understand personal preferences, behavior patterns, tone of voice, colloquialisms, and demographics.

NLP helps threat actors craft highly personalized, believable messages that are far more likely to resonate with the targets of social engineering attacks. As AI technologies evolve, they also get better at creating content that can bypass spam filters. By understanding and mimicking human-like writing patterns, AI can create messages that are more likely to reach the intended recipient and less likely to get flagged as suspicious.

Let’s not forget that social engineering is not restricted to petty scams. Sophisticated attacks like business email compromise (BEC) resulted in $50 billion worth of losses to companies in the last decade. The success of BEC is predicated on successfully masquerading as a trusted senior figure at an organization. AI advancements increase the likelihood of this success.

Thankfully, there is a flip side: the same improvements in AI can also boost the ability to detect social engineering. By analyzing large volumes of text, NLP and machine learning algorithms can extract the features common to social engineering scams and flag suspicious messages automatically.
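To make this concrete, here is a minimal sketch of the idea, assuming Python with scikit-learn and using a tiny toy dataset in place of the large labeled corpus a real detector would need:

    # Toy illustration: learn the textual features of scam messages.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny hand-labeled corpus: 1 = social engineering, 0 = benign
    messages = [
        "URGENT: wire the outstanding invoice today or the account is suspended",
        "Your password expires in 1 hour, verify your credentials now",
        "CEO here, I need gift cards purchased before the board meeting",
        "Minutes from yesterday's project sync are attached for review",
        "Reminder: the quarterly all-hands moves to 3pm on Thursday",
        "The updated travel policy is now available on the intranet",
    ]
    labels = [1, 1, 1, 0, 0, 0]

    # Word n-grams capture urgency cues and phrasing style
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
        LogisticRegression(),
    )
    model.fit(messages, labels)

    suspect = "Urgent: transfer funds for a delayed supplier invoice today"
    print(model.predict_proba([suspect])[0][1])  # estimated probability of a scam

In practice, a classifier like this would be trained on millions of labeled messages and combined with signals such as sender reputation and link analysis, but the underlying approach is the same: learn the statistical fingerprints of scam language from examples.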

Improved Writing Quality

By far the most attention-grabbing improvement in AI’s recent history is in the form of generative algorithms like those that power ChatGPT. In a dizzying two-month period, ChatGPT soared past 100 million users, the fastest any app has ever reached that milestone.

While many people have now experimented with ChatGPT and how it spits out words based on prompts, the nefarious uses of generative AI in social engineering have somewhat slipped under the radar. It’s incredibly easy to get an app like ChatGPT to create an urgent email based on little more than a one-sentence query, such as “Please write an email to the finance department requesting an urgent transfer of funds in relation to a delayed invoice to a company supplier.” A query like this produces an email in flawless English that conveys a believable sense of urgency.

This democratization of improved writing quality may prove a boon for scammers all over the world who were previously hampered by non-native language skills. Generative AI may well amplify both the volume and the success of text and email scams, which become harder to spot as fakes because they no longer contain the misspellings and grammar flaws common to previous scams.

Deepfake Scams

Deep learning algorithms can now create highly realistic fake audio and video content. For example, a slew of videos masquerading as Hollywood actor Tom Cruise fooled many people online, and that was almost two years ago. In social engineering, convincing deepfakes help cybercriminals impersonate trusted individuals or organizations to deceive victims into revealing sensitive information or performing actions that compromise their own or their organization’s security.

Deepfakes in social engineering are particularly effective when used as part of voice phishing (vishing) attacks. A good deepfake of someone’s voice is incredibly hard to detect, far harder than a deepfake video.

The public-facing profiles of many company CEOs make them prime candidates for deepfake scams. With plenty of webinars, videos, and other content to feed into deepfake tools, threat actors can create convincing voice or video clones of high-ranking executives and weaponize them in social engineering attacks.

Automated Campaigns

Improvements in AI technology have the potential to let threat actors scale and automate their social engineering attacks. Automation is powerful because it allows cybercriminals to target far more users and increase the chances of success through sheer volume.

Advanced AI algorithms can sift through massive datasets to extract patterns and information about potential targets. Useful information includes personal details, interests, activities, and affiliations, all of which help to personalize and target social engineering attacks more effectively.

AI also excels at automating the mundane, time-consuming tasks that traditionally restricted the scalability of social engineering attacks. For threat actors, one such task is sending out phishing emails or other deceptive messages en masse. AI-powered bots can distribute millions of emails in a short time, dramatically widening an attack’s reach.

One of the most worrying features of AI in terms of its impact on social engineering is that AI systems can learn from their environments and adapt their behavior accordingly. Threat actors could use this self-learning capability to continuously evolve their methods based on what is working (or not working) in their social engineering campaigns. Over time, this doesn’t just improve scalability; the self-improving nature of AI algorithms also makes attacks more likely to succeed.

Can Anything Stop AI-Based Social Engineering Attacks?

AI improvements may equip adversaries with extra tools and techniques for conducting social engineering attacks, but that doesn’t mean these attacks are impossible to defend against. Best-practice advice still applies, including training users effectively and having a multi-layered cybersecurity defense strategy in place.
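As one small illustration of what a technical layer in that strategy can look like, the sketch below (assuming Python with the dnspython package, and using example.com purely as a placeholder domain) checks whether a domain publishes SPF and DMARC records. These email authentication standards make it easier for mail servers to reject messages that spoof a trusted sender, a common ingredient of BEC attacks:

    # Illustrative check for SPF and DMARC records on a sender's domain.
    # Requires the dnspython package (pip install dnspython).
    import dns.resolver

    def get_txt_records(name):
        """Return all TXT record strings for a DNS name, or [] if none exist."""
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []
        return [b"".join(rdata.strings).decode() for rdata in answers]

    def check_email_auth(domain):
        spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
        dmarc = [r for r in get_txt_records("_dmarc." + domain) if r.startswith("v=DMARC1")]
        print(domain + ": SPF " + ("found" if spf else "MISSING")
              + ", DMARC " + ("found" if dmarc else "MISSING"))

    check_email_auth("example.com")  # placeholder domain for illustration

Checks like this are no substitute for user training, but they show how layered, automatable controls complement awareness work on the defensive side.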

At DIESEC, we’re passionate about helping companies defend against social engineering. Our team of security experts performs social engineering tests under real conditions in your company to test the readiness of your users. We’ll then help you craft an effective social engineering awareness campaign that equips users to spot the signs and techniques threat actors commonly use to manipulate people into taking specific actions or revealing confidential information.

Learn more by contacting us today.