Resistant AI raises $2.75 million to protect algorithms from adversarial attacks

Resistant AI has raised $2.75 million in venture capital to develop an artificial intelligence system that protects algorithms from automated attacks.

Index Ventures and Credo Ventures led the investment, which also included participation by Seedcamp; Daniel Dines, CEO of UiPath; and Michal Pechoucek, CTO of Avast. The Prague-based company focuses on the growing problem of hackers harnessing AI to manipulate machine learning systems.

The need to combat such attacks confirms experts' predictions that cybersecurity would eventually see a kind of AI arms race, as attackers and their targets alike seek to turn automation to their advantage.

“Companies are just now learning how to deploy AI,” said Resistant AI co-founder and CEO Martin Rehak. “And on the other side, we see criminals and fraudsters learning how to use those processes for their benefit and how to steal money at scale. Our job is to protect the AI and machine learning models.”


Founded in 2019, Resistant AI’s team includes a core group that worked at Cognitive Security, which was acquired by Cisco Systems in 2013. That team originally began working on AI for security back in 2006, Rehak said, at a moment when such technology seemed far over the horizon.

“The first five years, when I told anyone what we were doing, they told me I was crazy,” he said.

At Cisco, the AI-related work became more central. But eventually, the group struck out on its own to specifically focus on the issue of AI being used to attack AI. Or, as Rehak explains, AI being used to attack various automated decision-making systems.

Experts have grown increasingly worried about the rise of so-called adversarial attacks, in which an outsider introduces elements into a machine learning model's inputs that are designed to disrupt or manipulate it.

As Resistant AI got started, it decided to focus first on financial companies, which had begun turning to automated systems to approve applications for various products.

Fraud attempts can occur in several ways. In one basic scenario, people submit utility bills or bank statements in which names have been changed to fool the algorithm-driven verification systems used for opening accounts or approving financing and loans. Resistant's AI can intervene by detecting visual anomalies or identifying data that seems suspicious, stopping those documents before they enter the approval system.
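The kind of "suspicious data" check described above can be as simple as arithmetic. The following sketch is purely illustrative (it is not Resistant AI's product logic): it verifies that the transactions listed on a parsed bank statement actually reconcile with the stated closing balance, a test that statements edited to inflate a balance often fail.

```python
def reconciles(opening, transactions, closing, tolerance=0.005):
    """Check that opening balance plus transactions equals the closing balance.

    All parameter names and the tolerance are assumptions for this sketch.
    """
    return abs(opening + sum(transactions) - closing) <= tolerance

# Genuine statement: 1000.00 + 250.00 - 75.50 == 1174.50
print(reconciles(1000.00, [250.00, -75.50], 1174.50))  # True

# Closing balance edited upward to pass an affluence check
print(reconciles(1000.00, [250.00, -75.50], 9174.50))  # False
```

A real system would combine many such consistency signals with visual-layout analysis, but even this one line of arithmetic catches a naive edit.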

Resistant’s service can also review the decisions being made by a financial system. It considers all the inputs and looks for correlations or inconsistencies within large batches. For example, a single request for approval might seem benign, but within a group of 100,000 requests it may share abnormalities with several other requests.
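One way to picture this batch-level review is as a nearest-neighbor check over application feature vectors. The sketch below is an assumption about how such a check could work, not Resistant AI's actual method: it flags applications whose features sit unusually close to several others in the same batch, which can indicate coordinated submissions under different identities.

```python
import numpy as np

def flag_near_duplicates(features, threshold=0.5, min_matches=3):
    """Return indices of rows that lie unusually close to many other rows.

    features: (n, d) array of per-application feature vectors
    (amounts, timings, document-layout statistics -- all hypothetical).
    """
    flagged = []
    for i, row in enumerate(features):
        # Euclidean distance from this application to every other one
        dists = np.linalg.norm(features - row, axis=1)
        close = np.sum(dists < threshold) - 1  # exclude the row itself
        if close >= min_matches:
            flagged.append(i)
    return flagged

# A batch of 1,000 ordinary-looking applications...
batch = np.random.default_rng(0).normal(size=(1000, 8))
# ...but five of them share an almost identical profile.
batch[:5] = batch[0] + 0.01 * np.random.default_rng(1).normal(size=(5, 8))

print(flag_near_duplicates(batch))  # → [0, 1, 2, 3, 4]
```

Each flagged request looks benign on its own; only the batch view reveals the cluster.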

“That way we can see that someone under different identities is actually fingerprinting the system and trying to find the vulnerability,” Rehak said.

By “fingerprinting,” Rehak means that someone is submitting a range of documents and information to try to understand how a company’s algorithms and machine learning function.

The goal of such an attack can be twofold. First, the hacker may be trying to figure out the parameters of the algorithms in order to commit fraud. But they may also be probing to learn enough about the algorithm to copy it, either to sell that knowledge to others who want to commit fraud or even to competitors of the company being attacked.
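Fingerprinting can be surprisingly cheap. As a toy illustration (the rule, names, and numbers are all invented for this sketch), an attacker who can only see accept/reject answers can binary-search an approval threshold with a few dozen probe applications:

```python
def bank_model(income, debt):
    # Hidden rule the attacker wants to discover (assumed for this sketch)
    return income - 2 * debt > 50_000

def fingerprint(oracle, low, high, debt=0):
    """Binary-search the income cutoff by querying the oracle like a user."""
    while high - low > 1:
        mid = (low + high) // 2
        if oracle(mid, debt):
            high = mid  # approved: cutoff is at or below mid
        else:
            low = mid   # rejected: cutoff is above mid
    return high  # smallest income approved at this debt level

cutoff = fingerprint(bank_model, 0, 200_000)
print(cutoff)  # → 50001: the threshold recovered in ~18 queries
```

Spread across many fake identities, these probes look like ordinary applications, which is why batch-level correlation, rather than per-request screening, is what exposes them.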

In both cases, the hackers are increasingly using AI to automate and adapt their own methodology for probing these machine learning systems, Rehak said.

Going forward, Resistant plans to use the money to expand its staff of 20 people and extend its sales operations in Western Europe.
