Artificial intelligence, growing more potent and easier to use, threatens to compound the already considerable challenges companies face as they deal with cyberattacks, researchers warn.
While rare, researchers say, early signs of such advanced attacks have already been detected.

Earlier this year, cybersecurity firm Darktrace Inc. spotted a never-before-seen attack at a client company in India that used rudimentary machine learning to observe and learn patterns of normal user behavior inside a network, Chief Executive Nicole Eagan said. The software then began to mimic that normal behavior, effectively blending into the background and becoming harder for security tools to spot. Darktrace declined to discuss the case in greater detail.
It wasn’t exactly clear what the goal of the attack was, but Ms. Eagan said the use of AI and machine learning in cyber breaches opens up a range of dangerous scenarios, from the ability of intruders to more easily scan networks for unpatched ports to the automated composition of emails that match the tone and writing style of someone the intended target knows.
“We do imagine that there will be a time when attackers use machine learning and artificial intelligence as part of the attack. We have seen early signs of that,” she said.
Early manifestations of machine learning in cyberattacks already can be found.
For years, a service called Death by Captcha has used machine learning models to quickly defeat the familiar CAPTCHA system, in which people prove they are human by entering a string of squiggly letters. Using a process called optical character recognition, the software identifies and learns from millions of different images of those blurry figures until it’s trained to recognize them and solve the CAPTCHA.
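The learning process described above can be sketched in miniature. The toy below is not Death by Captcha's actual software; it is a nearest-neighbor classifier over invented 3x3 "glyph" bitmaps, showing the same principle: collect many distorted examples of each character, then label an unseen distorted character by the training example it most resembles.

```python
# Illustrative sketch only: a tiny nearest-neighbour "OCR" trained on
# noisy copies of known glyphs. The glyph bitmaps and sizes are
# invented for this example.
import random

# Toy 3x3 bitmaps standing in for rendered CAPTCHA letters.
GLYPHS = {
    "A": (0, 1, 0, 1, 1, 1, 1, 0, 1),
    "C": (1, 1, 1, 1, 0, 0, 1, 1, 1),
    "T": (1, 1, 1, 0, 1, 0, 0, 1, 0),
}

def distort(bitmap, flips=1, rng=random):
    """Flip a pixel to simulate CAPTCHA-style distortion."""
    pixels = list(bitmap)
    for i in rng.sample(range(len(pixels)), flips):
        pixels[i] ^= 1
    return tuple(pixels)

def train(samples_per_glyph=200):
    """Gather clean and distorted labeled examples (the 'training set')."""
    samples = [(bmp, ch) for ch, bmp in GLYPHS.items()]
    samples += [(distort(bmp), ch) for ch, bmp in GLYPHS.items()
                for _ in range(samples_per_glyph)]
    return samples

def classify(bitmap, training_set):
    """Label an unseen glyph by its nearest training example."""
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(training_set, key=lambda s: dist(s[0], bitmap))[1]

model = train()
# "Solve" a three-letter challenge made of freshly distorted glyphs.
solved = "".join(classify(distort(GLYPHS[c]), model) for c in "CAT")
```

Real services train on millions of rendered CAPTCHA images rather than hand-drawn bitmaps, but the loop is the same: more labeled distortions seen, more distortions recognized.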
Another tool, Sentry MBA, enables hackers to automatically test stolen usernames and passwords across a large number of sites. It uses machine learning to do optical character recognition, similar to Death by Captcha. It can then masquerade as Safari, Firefox or another web browser to make a set of login requests look like they are coming from many different users instead of from a single attacker’s computer, said Shuman Ghosemajumder, chief technology officer at Shape Security. The practice could give criminals control of millions of accounts each day.
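The browser-masquerading step is simple to picture. The sketch below is not Sentry MBA's code; it is a minimal illustration, with made-up credentials and no network calls, of how rotating the User-Agent header per request makes one machine's login attempts resemble traffic from many different browsers.

```python
# Illustrative only: rotate the User-Agent header across login attempts
# so each request appears to come from a different browser.
import itertools

USER_AGENTS = [
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15) Safari/605.1.15",
    "Mozilla/5.0 (Windows NT 10.0; rv:109.0) Gecko/20100101 Firefox/115.0",
    "Mozilla/5.0 (X11; Linux x86_64) Chrome/120.0 Safari/537.36",
]

def build_login_requests(credentials):
    """Pair each username/password with a rotated browser identity."""
    ua_cycle = itertools.cycle(USER_AGENTS)
    return [
        {"user": user, "password": pw,
         "headers": {"User-Agent": next(ua_cycle)}}
        for user, pw in credentials
    ]

# Hypothetical stolen credential list, for illustration only.
requests_out = build_login_requests(
    [("alice", "pw1"), ("bob", "pw2"), ("carol", "pw3")]
)
```

Defenders accordingly look past self-reported browser identity to signals that are harder to fake, such as timing and behavioral patterns.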
Data scientists at ZeroFOX Inc. in 2016 built a neural network that parsed Twitter data to write phishing posts that targeted specific users. Phishing is a common attack method in which hackers use fake emails or other tools to trick employees into giving them access to a target system or victim. The research project established that algorithms could analyze a person’s social media feeds to craft highly targeted social engineering attacks.
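To see why social feeds make such effective raw material, consider an even cruder version of the idea than ZeroFOX's neural network: plain word-frequency analysis. The sketch below, with an invented handle and posts, extracts a user's most-discussed topic and wraps a lure around it.

```python
# Hedged sketch, far simpler than ZeroFOX's actual model: frequency
# analysis of a user's posts is enough to personalize a lure.
from collections import Counter

STOPWORDS = {"the", "a", "to", "and", "i", "of", "in", "my", "for",
             "was", "were", "at", "go", "again"}

def top_interest(posts):
    """Return the most frequent non-stopword across a user's posts."""
    words = (w.lower().strip(".,!?")
             for post in posts for w in post.split())
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return counts.most_common(1)[0][0]

def craft_lure(handle, posts):
    """Build a personalized phishing-style message around that topic."""
    topic = top_interest(posts)
    return f"@{handle} loved your take on {topic} - more here: <link>"

# Hypothetical target feed, invented for this example.
posts = [
    "Great day of skiing in the Alps!",
    "Skiing conditions were perfect.",
    "Can't wait to go skiing again.",
]
lure = craft_lure("target_user", posts)
```

A real attacker would swap the frequency counter for a language model that also imitates tone and timing, which is precisely the escalation the ZeroFOX research demonstrated.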
“Attackers try to match the phishing attack to users by extracting data on them. You can scale a lot of the crime economy by utilizing a form of basic machine learning,” said Tomer Weingarten, founder and CEO of SentinelOne, a company that deploys machine learning and artificial intelligence to defend against attacks at the endpoints of a network. “It happens with every phishing campaign you see. I would call it statistical analysis, a form of machine learning.”
The growing sophistication of fast-moving, modern cyberattacks is forcing companies to employ similar technologies to defend themselves. By turning to artificial intelligence, companies also can help plug the gaps left by a shortage of cybersecurity talent and the growing scale of attacks.
Mastercard Inc. is experimenting with software to automate the response to phishing attacks. Every incoming email is sent through a platform that uses machine learning to analyze each message and produce a risk score. High-risk emails are quarantined for review by a security analyst, who determines the appropriate action before delivery.
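The score-then-quarantine flow can be sketched in a few lines. This is not Mastercard's platform: the scoring function below is a stand-in of naive keyword and sender heuristics (with an assumed allow-listed domain), where a production system would call a trained model.

```python
# Simplified triage sketch: score each inbound email, quarantine anything
# above a threshold for analyst review, deliver the rest.

def risk_score(email):
    """Stand-in for a trained model: crude sender/keyword heuristics."""
    score = 0.0
    if email["sender_domain"] not in {"example.com"}:  # assumed allow-list
        score += 0.4
    lowered = email["body"].lower()
    if any(w in lowered for w in ("verify your account", "urgent", "password")):
        score += 0.5
    return min(score, 1.0)

def triage(emails, threshold=0.7):
    """Split the inbound stream into delivered and quarantined mail."""
    delivered, quarantined = [], []
    for email in emails:
        bucket = quarantined if risk_score(email) >= threshold else delivered
        bucket.append(email)
    return delivered, quarantined

# Invented sample messages, for illustration only.
inbox = [
    {"sender_domain": "example.com", "body": "Lunch at noon?"},
    {"sender_domain": "not-example.net",
     "body": "URGENT: verify your account now"},
]
delivered, quarantined = triage(inbox)
```

The value of the automated pipeline is less in the scoring itself than in the routing: only the small high-risk slice reaches a human analyst, which is what compresses response time from hours to minutes.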
On a good day, human security analysts are alerted to a potential phishing attack and fix the problem in an hour, says Ron Green, Mastercard’s chief security officer. The new software, built as part of a security automation initiative led by the U.S. National Security Agency and Johns Hopkins University, can identify and fix the problem within minutes.
Security experts are quick to point out that technologies such as machine learning can be put to legal and illegal use, and that it’s only a matter of time before the most advanced forms of AI are used by attackers.
“It’s inevitable that we see the other side start to use the same set of tools,” said Joe Levy, CTO of cybersecurity firm Sophos.