When Bad Guys Use AI and ML in Cyberattacks, What Do You Do? 

Artificial intelligence (AI) has become vital in defending organizations against crippling cybersecurity attacks. If your organization is not leveraging AI as a core component of your Security Operations Center (SOC), you risk falling behind those who are out to do you harm. 

Companies are investing heavily in AI. The market for cybersecurity AI is expected to grow by about 23% annually, from $8.8 billion in sales in 2019 to $38.2 billion by 2026. Gartner predicts that global spending on security and risk management will grow by more than 11% in 2023.

At the same time, adversaries are also using AI—as well as its subset, machine learning—to mount more automated, aggressive, and coordinated attacks. In today’s environment, it’s not just about using automation to fight machines with machines; it’s about using the right tools and technologies to fight intelligence with intelligence.

By using AI and machine learning, hackers can more efficiently build a deeper understanding of how organizations are trying to keep them from penetrating their environments.

For example, natural language processing, the same technology that makes many customer-service applications more effective, can also create better phishing emails. Recent attacks have used machine learning to mimic the usual digital activity of users, all the better to cover the tracks of perpetrators, according to reports.

“You can buy a service that will create phishing emails and send them out for you,” said Keri Pearlson, executive director of Cybersecurity at MIT Sloan (CAMS). “That service isn’t necessarily designed to create bad emails. It’s designed to create emails.” 

Hacking as a Service

We’re all getting those emails, often unnoticed in our daily inbox flow. “You probably get six or seven thousand a day, as well as quite legitimate ones,” Pearlson adds. “Those tools are not designed for nefarious purposes, but they can be turned into something for nefarious purposes.”

Dark Web marketplaces also offer a number of AI and machine learning hacker tools—and even help-desk support to turn them into working attacks, noted Pearlson. CAMS conducts research on the operations and strategic issues that affect cybersecurity and recently published a report noting the emergence of cyberattacks-as-a-service (CaaS) on the Dark Web. 

The report said hackers can use AI to leverage personal information collected from social media to automatically generate phishing emails that can rack up open rates as high as 60%. That’s higher than typical spear-phishing campaigns, in which hackers do the job manually. 

Data Quantity and Quality Are Key

Data is the first line of defense to keep your own AI safe and to fight back against cybercriminals who may be using the technology. AI and ML tools and applications are only as good as the models and data they are built on, so keeping those models up to date with the latest threat intelligence is key.

The more data you can collect, the better. But for AI and ML to be most effective as defensive weapons, the data must carry complete, relevant, and rich context collected from every potential source, whether that is at the endpoint, on the network, or in the cloud. You also have to clean and normalize that data so you can define the outcomes you expect the models to produce.
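
To make “complete, relevant, and rich context” a little more concrete, here is a minimal, hypothetical sketch (not from the article) that maps endpoint, network, and cloud events with different field names onto one shared schema before any model sees them; all field names and values are invented for illustration.

```python
from datetime import datetime, timezone

# Hypothetical raw events from three sources, each with its own field names.
endpoint_event = {"host": "laptop-42", "proc": "powershell.exe", "ts": "2024-01-15T09:30:00Z"}
network_event  = {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7", "bytes": 48213, "time": 1705311000}
cloud_event    = {"account": "prod-api", "action": "AssumeRole", "eventTime": "2024-01-15T09:31:12Z"}

def normalize(event: dict, source: str) -> dict:
    """Map source-specific fields onto one shared schema so a single model can consume them."""
    ts_raw = event.get("ts") or event.get("eventTime") or event.get("time")
    # Accept either ISO-8601 strings or Unix epoch seconds.
    if isinstance(ts_raw, (int, float)):
        ts = datetime.fromtimestamp(ts_raw, tz=timezone.utc)
    else:
        ts = datetime.fromisoformat(ts_raw.replace("Z", "+00:00"))
    return {
        "timestamp": ts.isoformat(),
        "source": source,  # endpoint, network, or cloud
        "entity": event.get("host") or event.get("src_ip") or event.get("account"),
        "detail": event.get("proc") or event.get("action") or event.get("dst_ip"),
    }

records = [
    normalize(endpoint_event, "endpoint"),
    normalize(network_event, "network"),
    normalize(cloud_event, "cloud"),
]
for record in records:
    print(record)
```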

In this context, AI and machine learning are changing the paradigm in cybersecurity by enabling organizations to:

  • Prevent malware and other attack modes from entering your environment more efficiently.
  • Detect sophisticated threats, including new modes of attack that have never been seen before (see the sketch after this list).
  • Automate the response and use what is learned from each incident to prevent the same or similar threats from successfully penetrating your defenses again.
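
As one illustration of the detection bullet above, the following hypothetical sketch uses scikit-learn’s IsolationForest to flag events that deviate from a learned baseline of normal activity. The features, sample values, and contamination setting are assumptions for illustration, not a recommended configuration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per login event: hour of day, bytes transferred, failed attempts.
baseline = np.array([
    [9, 1200, 0], [10, 800, 0], [14, 1500, 1], [11, 950, 0], [15, 1100, 0],
    [9, 1300, 0], [13, 700, 0], [10, 1000, 1], [16, 900, 0], [12, 1250, 0],
])

# Fit on historical "normal" activity; contamination is the assumed fraction of outliers.
model = IsolationForest(contamination=0.05, random_state=42).fit(baseline)

# Score new events; a prediction of -1 means the event looks anomalous relative to the baseline.
new_events = np.array([
    [10, 1100, 0],     # looks like normal business-hours activity
    [3, 250000, 7],    # 3 a.m., unusually large transfer, repeated failures
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY" if label == -1 else "ok"
    print(event, status)
```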

Information Sharing and Security-by-Design

Security-by-design and information sharing are additional tools in your arsenal, particularly if your cybersecurity teams are using AI-based tools and technologies for automated prevention, detection, and response. 

With a security-by-design model, AI and machine learning can be implemented across the entire organization, including applications, devices, networks, and clouds.  

“That’s something that we’re still figuring out—how to deal with some of the AI-enabled tools,” said Josephine Wolff, assistant professor of cybersecurity policy at the Tufts Fletcher School of Law and Diplomacy. 

Experts recommend security professionals learn about AI, encourage a security-by-design mindset when deploying AI and machine learning applications in their networks, and share threat information internally and with other organizations. 

A number of industry groups are trying to enable information flow, such as the Forum of Incident Response and Security Teams and the Cyber Threat Alliance. Many key industries—such as aviation, financial services, and healthcare—have information sharing and analysis centers (ISACs) to help enterprises collaborate on cybersecurity and work together through the National Council of ISACs. 
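
One widely used vehicle for this kind of sharing is the STIX format. The snippet below hand-builds a minimal STIX 2.1 indicator for a fictitious phishing sender address; the address, labels, and timestamps are invented for illustration, and a real deployment would typically publish such objects through a TAXII server or an ISAC’s sharing platform.

```python
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

# Minimal STIX 2.1 indicator describing a (fictitious) phishing sender address.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Phishing sender observed in AI-generated campaign",
    "pattern": "[email-addr:value = 'billing@example-invoices.test']",
    "pattern_type": "stix",
    "valid_from": now,
    "labels": ["phishing"],
}

# Serialized JSON like this is what gets exchanged with sharing partners.
print(json.dumps(indicator, indent=2))
```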

Adding AI or any other technology can both solve problems and create new attack surfaces for hackers to exploit, said Wolff. That trade-off has to be managed. “Every time you implement some new security control, you’ve potentially created a new attack surface, even if you’ve closed off one, as well,” said Wolff. “You have to adjust your sense of what the threats are.” 

New Tools, New Attack Surfaces

In a recent paper for the Brookings Institution, Wolff noted that building better AI means using more data to fine-tune the systems; but the more data that is made accessible, the easier it becomes for bad actors to figure out the AI models.

Security teams need to navigate those risks. “It’s good to have a deep understanding of what the application is. That’s what lets you build the threat model of what could go wrong,” said Wolff. 

AI as a whole is often treated like a black box, said Pearlson. Most automation can be tested simply: enter data and check that it produces the expected result. But AI and ML are designed to learn, which complicates the testing.

“They’re designed to find the needle in the haystack. So how do you know that the output is actually the right needle and that the system hasn’t been compromised?” she asked. “You’re designing to find something you wouldn’t have seen. Yet you don’t know whether that’s because it’s the actual answer or because something was hacked along the way.” 
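
One common way teams approach Pearlson’s question is to keep a fixed “canary” set of human-verified benign and malicious samples and re-check the detector against it after every retrain: if the known needles stop being found, something has drifted or been tampered with. Below is a minimal, hypothetical sketch of that idea; the detector, thresholds, and sample data are stand-ins, not part of any real product.

```python
# Hypothetical canary check: re-score a fixed set of labeled samples after each
# model update and fail loudly if the detector's behavior has changed.

def canary_check(model_predict, canaries: list[tuple[dict, bool]], min_accuracy: float = 1.0) -> bool:
    """Return True if the model still labels the known samples correctly."""
    correct = sum(1 for sample, expected in canaries if model_predict(sample) == expected)
    return correct / len(canaries) >= min_accuracy

# Fixed, human-verified examples: (event features, is_malicious).
canaries = [
    ({"failed_logins": 0, "bytes_out": 1_200}, False),
    ({"failed_logins": 9, "bytes_out": 400_000}, True),
]

# Stand-in for a real detector; a production check would call the deployed model instead.
toy_detector = lambda e: e["failed_logins"] > 5 or e["bytes_out"] > 100_000

if not canary_check(toy_detector, canaries):
    raise RuntimeError("Detector failed canary set -- investigate before trusting its alerts.")
print("Canary set passed.")
```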

Adversaries share information on the Dark Web, Pearlson noted. “They’re really good at sharing information. And we outside (the Dark Web) are not good at sharing information.” 

Organizations keep quiet to avoid damage to their reputations, their customers, their supply chains, and other consequences. “People are afraid they put a target on their back if they tell how somebody got in,” she said.

Information sharing is a powerful tool, even if many organizations find it counterintuitive. Adversaries have no trouble sharing tips and best practices for taking down networks. As defenders, organizations need to become more open to sharing information, both to help others and to learn before the next attack. 

The richer the data you have, the more you will need to rely on artificial intelligence to strengthen your defenses. But keep in mind your adversaries are doing the same, so it’s smart to always stay at least one step ahead.  

End Points

  • Adversaries are increasingly using artificial intelligence and machine learning to mount more efficient and aggressive attacks.
  • To defend against intelligence-based attacks, organizations need to make AI and ML core elements of their cybersecurity strategies.
  • Rich, contextualized, and complete data, along with shared intelligence and security-by-design, are key.
