Sustainable reinforcement of cyber security and the rule of law in the digital age

Research Project Provides Court-Admissible Methods for Forensic Analysis of AI Systems

Press release

Launch of the “Forensik intelligenter Systeme” (Forensics of Intelligent Systems, FIS) research project with three project teams whose research concepts were selected from a pool of nine brief concepts.

The Forensics of Intelligent Systems research project is dedicated to developing interdisciplinary approaches for ensuring forensic readiness in the event of manipulation of continuously learning AI systems. The project is funded by the Cyberagentur and carried out jointly by Atos, a global leader in AI-supported digital transformation, the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, the Fraunhofer Institute for Applied and Integrated Security AISEC, the Institut für Internet-Sicherheit (Institute for Internet Security, Westphalian University of Applied Sciences) and the University of Cologne. It aims to sustainably strengthen cyber security and the rule of law in the digital age.

The Forensics of Intelligent Systems research project supports German security agencies in investigating cyber attacks on AI systems. Its goal is to develop suitable methods for detecting manipulative interventions such as data poisoning and attacks on the behavior of AI-controlled systems, and to preserve the corresponding evidence. The focus is on continuously learning AI systems, in particular neural networks for image and video analysis that are used in critical applications. To achieve forensic readiness, the project combines technical innovation with legal expertise. Forensic readiness refers to preparatory measures for recording and storing digital evidence in AI systems, so that in the event of a security incident (e.g., an attack or data manipulation) this evidence is available for legally sound analysis and is admissible in court.
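The idea of recording evidence in a tamper-evident way can be illustrated with a minimal, hypothetical sketch (not drawn from the project itself): an append-only event log in which each entry's hash commits to its predecessor, so that any retroactive manipulation of recorded evidence breaks the chain and becomes detectable. All function and field names here are illustrative assumptions.

```python
import hashlib
import json

def append_event(log, event):
    """Append a training/inference event, hash-chained to the previous entry."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)  # canonical serialization
    h = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": h})

def verify_chain(log):
    """Recompute every hash; any retroactive edit to an entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_event(log, {"type": "update", "batch": 1, "data_digest": "abc123"})
append_event(log, {"type": "update", "batch": 2, "data_digest": "def456"})
print(verify_chain(log))                  # True: chain intact
log[0]["event"]["data_digest"] = "evil"   # simulate retroactive manipulation
print(verify_chain(log))                  # False: tampering detected
```

In a real deployment this role is typically played by signed, write-once audit logs; the sketch only shows why preparing such records *before* an incident is what makes later analysis legally sound.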

Further developing forensic evidence of attacks on AI

Fraunhofer AISEC is contributing its expertise in cyber security and, together with the project partners, is further developing existing detection methods to ensure the forensic readiness of AI systems. In a next step, the partners will validate the legal compliance of the resulting methods and of their results. The attack scenarios addressed include inversion, evasion and data poisoning attacks. Model stealing attacks play a particularly important role here, as they can serve as preparatory steps for further attacks. When simulating such attacks, the AISEC team adopts the defender's, i.e. the forensic, perspective (Blue Team).
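To make the model stealing scenario concrete, here is a deliberately simplified, hypothetical illustration (not a method from the project): an attacker recovers the decision boundary of a black-box "victim" classifier purely by querying it, and the number and pattern of queries is exactly the kind of trace a forensically prepared system would log. The victim model, its secret threshold, and the attack strategy are all assumptions made for this toy example.

```python
# Hypothetical black-box "victim" model: returns 1 if the input
# exceeds a secret decision threshold the attacker does not know.
SECRET = 0.37

def victim(x):
    return 1 if x >= SECRET else 0

# Model stealing sketch: recover the decision boundary via binary
# search over the input space, using only black-box queries.
lo, hi = 0.0, 1.0
queries = 0
while hi - lo > 1e-6:
    mid = (lo + hi) / 2
    queries += 1
    if victim(mid):
        hi = mid   # boundary lies at or below mid
    else:
        lo = mid   # boundary lies above mid
stolen = (lo + hi) / 2
print(round(stolen, 3), queries)  # prints: 0.37 20
```

The stolen boundary then enables follow-up attacks (e.g., crafting evasion inputs just below it), which is why the press release singles out model stealing as a preparatory step; from the Blue Team side, the telltale artifact is the dense, systematic query sequence.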

“The project improves confidence in AI systems by providing forensic readiness in the event of targeted attacks. We are thus creating the basis for forensically verifiable and legally sound AI, and this is a key factor for safety-critical applications. Our research helps to advance the technological and legal framework of the digital age,” says Philip Sperl, Head of the Cognitive Security Technologies Department at Fraunhofer AISEC.