More security for artificial intelligence
neurocat GmbH tricks self-learning systems and makes them more reliable
The Adlershof-based neurocat GmbH increases the security of artificial intelligence applications. The company’s young team is leading the effort to develop the industry’s security standards, which it is advancing together with global corporations, industry associations and large testing organisations.
Stephan Hinze and Florens Greßner met through a mutual friend. They started talking, and the conversation quickly turned to Greßner’s favourite topic: artificial intelligence (AI). The 24-year-old mathematician specialises in the mathematical structures behind self-learning IT systems. He left a lasting impression on Hinze, who has 15 years of business experience in private equity and high-frequency trading.
They soon started making plans. “We are convinced that AI will fundamentally change many industries. This shift holds great potential for young companies,” says Hinze. They seized their chance in late 2017, when they founded neurocat GmbH. At the Innovation and Start-Up Centre (IGZ) in Adlershof, the two founders have gathered a dozen mathematicians and computer scientists around them who develop security solutions for AI applications, fuelled by copious amounts of pizza. Co-founder Sebastian Kotte takes care of finances and stays in close contact with Frank Kretschmer, the company’s business angel, who has supported them since day one.
“We quickly realised that we didn’t want to develop AI applications but instead to secure them,” says Greßner. Neurocat shows AI users from the automotive industry, digital manufacturing, public authorities and healthcare companies how their AI systems can be tricked: making pedestrians invisible to the sensors of self-driving cars, simulating false speed limits, or deactivating the personnel safety systems of industrial machines. These are realistic scenarios; according to the founders, the web is replete with manuals on how to fool AI systems. Besides known hacks and attacks it develops itself, the start-up also addresses security gaps and works to protect AI systems. “We analyse the mathematical functions working in the background for robustness and clarity, validate the architecture and performance of self-learning systems, and verify the systems we optimise,” explains Felix Assion, the head of research.
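Attacks of this kind typically rest on so-called adversarial examples: tiny, targeted perturbations of an input that flip a neural network’s decision while remaining nearly invisible to a human. As an illustration only, and not neurocat’s actual tooling, here is a minimal sketch of the well-known fast gradient sign method (FGSM) in Python with PyTorch; the `model`, `image` and `label` arguments are hypothetical placeholders for a classifier and its input.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge an input image so a classifier
    misreads it, while the change stays nearly invisible to a human."""
    image = image.clone().detach().requires_grad_(True)
    output = model(image)
    loss = F.cross_entropy(output, label)
    model.zero_grad()
    loss.backward()
    # Step in the direction that maximally increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return adversarial.clamp(0.0, 1.0).detach()
```

The perturbation is bounded by `epsilon`, so the altered image looks unchanged to a person, yet the classifier’s output can shift to a wrong label, which is exactly the failure mode a robustness audit is meant to expose.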
The company’s services are in high demand. Its customers already include one of the three largest car manufacturers in the world. This is partly because neurocat is committed to security standards and advocates the development of a DIN standard for AI security. “One of our goals is to develop a quality seal for AI systems,” says Hinze. The team is currently preparing a further standardisation project under the umbrella of the German Association of the Automotive Industry (VDA) and is in the middle of setting up a showcase for the Web Summit 2018 in Lisbon.
Since its founding, neurocat has gained significant attention. To Hinze, this is a prerequisite for growing horizontally and scaling the business model. The security of self-learning systems is relevant everywhere: in self-driving cars, smart cities, digital administration, digital manufacturing and personalised medicine. “We look at things that are structurally similar across applications and develop matching testing methods,” he says. Preparations for the company’s path of growth are well underway.
By Peter Trechow for Adlershof Journal