AI and Cybersecurity: Examining the Pros and Cons

Rapid advancements in artificial intelligence (AI) took the world by storm when OpenAI released ChatGPT, powered by the GPT-3.5 large language model, to the general public in November 2022. The tool reached 100 million users in just two months, a clear case of tech hype translating into mass adoption. While the general public played around with it, the proficiency of the underlying model sparked frenetic discussions about the impact of AI on a wide range of professions and industries, including cybersecurity.

It’s important to remember, though, that ChatGPT is just one type of AI technology, and there are other ways in which AI might impact cybersecurity. This article presents an overview of the pros and cons of AI in cybersecurity and answers the question of whether human security expertise will ever become obsolete.


Benefits of AI in Cybersecurity

Rather than start off by playing into fear, uncertainty, and doubt, let’s take a look at some of the exciting, positive applications and benefits of AI for strengthening cybersecurity defenses.

Improved Phishing Detection

Phishing is a low-hanging-fruit cyber attack: it’s not technically challenging to carry out, yet it works remarkably well. Between 2015 and 2021, the average annual cost of phishing attacks on companies almost quadrupled, from $3.8 million to $14.8 million. These fraudulent emails exploit flaws in human psychology rather than technical weaknesses in apps or systems.

Machine learning models are particularly suited to detecting phishing emails since the task at hand is ultimately one of classification: is this email phishing or not? These algorithms can be trained on large corpora of labeled emails, and their accuracy improves as they see more examples. AI-powered phishing detection can analyze emails and more accurately spot suspicious links, attachments, or subtle abnormalities in the sender’s email address that human users may miss.
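
To make this concrete, here is a minimal Python sketch of such a classifier using scikit-learn; the toy emails below are illustrative stand-ins for a real labeled training corpus, and a production system would use far richer features.

```python
# Minimal sketch of an ML phishing classifier using scikit-learn;
# the toy emails below stand in for a real labeled training corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Verify your account now or it will be suspended",
    "Your invoice for last month is attached",
    "Click here to claim your prize urgently",
    "Team meeting moved to 3pm tomorrow",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF text features feeding a linear classifier; production systems
# add sender reputation, link targets, and attachment metadata.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Classify a new, unseen email against the toy model
print(model.predict(["Verify your password now to keep your account"]))
```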


Assisting with Malware Analysis

Today’s malicious code is more sophisticated than ever. Skilled hackers write code that mutates to evade traditional signature-based detection tools such as antivirus software. With new malware samples inundating companies every day, human security analysts can quickly become overwhelmed manually analyzing files for malicious code.

AI has the potential to assist with malware analysis by automating certain phases of the process, such as static analysis. Machine learning algorithms excel at pattern recognition, aiding the search for telltale signs such as packers that encrypt portions of executable code, calls to particular system APIs, and suspicious strings.
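
As a rough sketch of what static feature extraction can look like, the following Python checks byte entropy as a proxy for packing and scans for a short, hypothetical list of suspicious strings; real pipelines parse executable formats properly and use many more signals.

```python
# Sketch of static feature extraction; the entropy threshold and
# suspicious-string list are illustrative assumptions.
import math
from collections import Counter

SUSPICIOUS_STRINGS = [b"cmd.exe", b"VirtualAlloc", b"WriteProcessMemory"]

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 = maximally random)."""
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def static_features(data: bytes) -> dict:
    entropy = byte_entropy(data)
    return {
        "entropy": round(entropy, 2),
        "likely_packed": entropy > 7.2,  # high entropy hints at packing
        "strings_found": [s for s in SUSPICIOUS_STRINGS if s in data],
    }

# A stand-in byte blob; a real pipeline would read an executable file
sample = b"MZ\x90\x00" + b"VirtualAlloc" + bytes(range(256)) * 4
print(static_features(sample))
```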

When malware is run in a controlled environment and its behavior observed (dynamic analysis), machine learning algorithms are also useful for identifying suspicious behavior patterns, such as attempts to modify system files or communicate with external servers. AI systems can therefore effectively triage malware so that analysts are presented with the most suspicious files for further analysis rather than having to do all this preliminary work themselves.
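
The triage idea can be sketched as a simple scoring function over sandbox behavior reports; the report format and weights below are hypothetical, for illustration only.

```python
# Sketch of behavior-based triage over sandbox reports; the report
# format and weights are hypothetical, for illustration only.
BEHAVIOR_WEIGHTS = {
    "modifies_system_files": 5,
    "reads_browser_credentials": 5,
    "contacts_external_server": 3,
    "creates_scheduled_task": 2,
}

def triage_score(report: dict) -> int:
    """Sum the weights of every suspicious behavior the sandbox observed."""
    return sum(w for behavior, w in BEHAVIOR_WEIGHTS.items()
               if report.get(behavior))

reports = [
    {"file": "invoice.exe", "contacts_external_server": True},
    {"file": "update.exe", "modifies_system_files": True,
     "reads_browser_credentials": True},
]

# Surface the most suspicious samples to analysts first
for report in sorted(reports, key=triage_score, reverse=True):
    print(report["file"], triage_score(report))
```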


Automating Vulnerability Management

Unpatched vulnerabilities still play a role in about 60 percent of cybersecurity breaches. That figure might sound surprising, but it’s understandable given the constraints under which IT and security teams operate. Applying patches in a timely manner across the many systems and apps of a complex IT environment is no easy task, especially when other priorities compete for your staff’s attention.

AI can streamline and automate many aspects of vulnerability management. Machine learning algorithms that spot patterns and characteristics associated with known vulnerabilities help you detect flaws faster, including ones you might not otherwise know about. AI systems can also assist remediation workflows by identifying the appropriate patch or workaround and applying it to the affected systems. Lastly, AI vulnerability management tools can validate that a patch has been applied correctly.
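
As a simplified sketch of the prioritization step, the following Python assumes a hypothetical vulnerability feed with CVSS scores, exploit availability, and asset criticality; the field names, weights, and placeholder IDs are illustrative, and real tools blend many more signals.

```python
# Sketch of risk-based patch prioritization; the feed format, field
# names, and weights are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str            # illustrative placeholder IDs below
    cvss: float            # base severity score, 0-10
    exploit_public: bool   # is a public exploit known?
    asset_critical: bool   # does it affect a business-critical system?

def priority(v: Vulnerability) -> float:
    """Weight severity up when exploitation is likely and impact is high."""
    score = v.cvss
    if v.exploit_public:
        score += 3.0
    if v.asset_critical:
        score += 2.0
    return score

vulns = [
    Vulnerability("CVE-0000-0001", 9.8, False, False),
    Vulnerability("CVE-0000-0002", 7.5, True, True),
]

# Patch the highest-risk findings first, not just the highest CVSS
for v in sorted(vulns, key=priority, reverse=True):
    print(v.cve_id, priority(v))
```

Note how the lower-severity flaw with a public exploit on a critical asset outranks the higher CVSS score, which is the point of risk-based prioritization over severity alone.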


Stronger User Identity Security

Account hijacking remains an important and widespread initial access vector. Hackers use stolen login details, brute-force attacks, or social engineering techniques to obtain valid credentials and take control of user accounts. Often, this access slips under the radar while hackers probe the network for ways to escalate their privileges or move laterally between systems.

A big part of mitigating the impact of account hijacking is being able to detect suspicious user account activity. AI algorithms are well suited to this task, using observed user behavior as training data to build a baseline of what’s normal for each user. That baseline can then be used in real time to assess the likelihood of an account hijack based on how far current behavior deviates from the norm.
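
Here is a deliberately simple sketch of the baselining idea, assuming the only feature is the hour of day at which a user logs in; production tools model many more dimensions of behavior than this single z-score check.

```python
# Sketch of per-user behavioral baselining; the single feature
# (login hour) and z-score approach are simplifying assumptions.
import statistics

def baseline(login_hours: list[int]) -> tuple[float, float]:
    """Mean and standard deviation of a user's historical login hours."""
    return statistics.mean(login_hours), statistics.stdev(login_hours)

def anomaly_score(hour: int, mean: float, stdev: float) -> float:
    """How many standard deviations this login sits from the user's norm."""
    return abs(hour - mean) / stdev if stdev else 0.0

history = [9, 9, 10, 8, 9, 10, 9]      # this user logs in around 9am
mean, stdev = baseline(history)

print(anomaly_score(9, mean, stdev))   # typical login -> low score
print(anomaly_score(3, mean, stdev))   # 3am login -> flag for review
```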


Potential Drawbacks of AI in Cybersecurity

The drawbacks of AI often attract the most media attention. But what exactly are those drawbacks when it comes specifically to cybersecurity?

  • False Positives and Negatives: One of the main disadvantages is the potential for both false positives (where benign activities are flagged as malicious) and false negatives (where actual threats go undetected). These errors can lead to unnecessary alerts, wasted resources, or actual breaches of your defenses, so it’s important not to rely on AI alone (the sketch after this list shows how quickly false positives can add up).
  • Algorithmic Bias: If the AI model that powers a specific security solution is trained on biased data, the tool may reinforce or amplify these biases, leading to unfair or inaccurate results. For example, an AI system trained on data from a specific set of network traffic might not perform well when applied to a different segment of your network.
  • Resource Intensive: Training AI models can require significant computational resources, as well as expertise in machine learning and data science. Deploying and maintaining these systems can also be costly and complex.
  • Lack of Transparency: The most advanced systems, such as large language models, remain black boxes: they offer little explanation of why something was flagged as a detection, warning, or threat. This opacity means nobody really knows how the algorithm powering a specific security tool makes its decisions, or whether a better decision was possible. In a field as important as cybersecurity, there is little room for decisions you don’t understand.
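
To see how quickly false positives can swamp a security team, here is a small back-of-the-envelope calculation in Python; the event volume, threat rate, and detector accuracy below are illustrative assumptions, not measured figures.

```python
# Back-of-the-envelope look at the base-rate problem; every number
# here is an illustrative assumption, not a measured figure.
events = 1_000_000           # events scanned per day
threat_rate = 0.0001         # 1 in 10,000 events is actually malicious
detection_rate = 0.99        # the model catches 99% of real threats
false_positive_rate = 0.01   # and wrongly flags 1% of benign events

true_threats = events * threat_rate
true_positives = true_threats * detection_rate
false_positives = (events - true_threats) * false_positive_rate

precision = true_positives / (true_positives + false_positives)
print(f"Alerts per day: {true_positives + false_positives:,.0f}")
print(f"Alerts that are real threats: {precision:.1%}")  # roughly 1%
```

Even with 99 percent accuracy on both sides, only about one alert in a hundred is a real threat when genuine attacks are rare, which is why human review still matters.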


Why Security Expertise Will Always Be Valuable

There is a lot of online commentary about how AI will wipe out a whole range of jobs, including many of those in cybersecurity. But security remains a specialized field, where human input, creativity, and expertise are always likely to prove valuable. Most likely, AI will augment the role of the traditional security analyst, reducing the time spent on menial tasks in that type of position.

When it comes to more specialized security roles, such as penetration testing and red teaming, the need for human expertise is even greater. These roles require ethical hackers with years of experience who know how to think like real-world threat actors.

At DIESEC, our security experts thrive on helping you improve your cyber defenses through comprehensive penetration tests and red team exercises. These services probe your defenses for loopholes with the same technical knowledge, creativity, and flexibility that real hackers would bring.

Contact us today to learn more.