Cognitive Security Technologies

Artificial intelligence and IT security

The Cognitive Security Technologies department advances research at the intersection of Artificial Intelligence (AI) and IT security.

There are two main aspects:

Application of AI methods in IT security

Modern IT systems are characterized by rapidly growing complexity. The development of current and future information and communication technology opens up new, previously unforeseen challenges: from the increasing networking of even the smallest communicating units into the Internet of Things, to the global connection of critical infrastructures to unsecured communication networks, to the protection of digital identities. Modern society is confronted with the challenge of ensuring the security and stability of all these systems.

In order for IT security to keep pace with this rapid development, automation must be further developed and rethought. The Cognitive Security Technologies department researches and develops partially automated security solutions that use AI to support people in examining and securing security-critical systems.

Security of machine learning and AI algorithms

Just like conventional IT systems, AI systems can be attacked. For example, adversarial examples can manipulate a face recognition model, allowing attackers to gain unauthorized access to security-critical access management systems that rely on it. Similar attack scenarios also affect areas such as autonomous driving, where humans must rely on the robustness and stability of assistance systems.
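The attack idea can be sketched in a few lines. The following is a minimal, illustrative example using a toy logistic-regression "recognizer" in place of a real face recognition model, and the fast gradient sign method as one well-known way to craft adversarial examples; all weights and inputs are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear "recognizer": accepts (class 1) or rejects (class 0) an input.
# Weights are illustrative, not from any real system.
w = np.array([2.0, -3.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, y, eps):
    """Fast gradient sign method: nudge the input along the sign of the
    loss gradient so a small perturbation flips the model's decision."""
    grad_x = (sigmoid(w @ x + b) - y) * w  # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

x = np.array([0.5, 0.1])          # legitimate input, accepted by the model
x_adv = fgsm(x, y=1, eps=0.3)     # adversarial perturbation bounded by eps

print(predict(x), predict(x_adv))  # prints "1 0": the decision flips
```

Although each coordinate of the input changes by at most 0.3, the model's decision is inverted, which is exactly the failure mode adversarial examples exploit.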

The Cognitive Security Technologies department at Fraunhofer AISEC is researching how such weaknesses in AI algorithms can be found and eliminated. The department also offers tests for hardening such AI systems.


Deep learning and AI require very high computing power. Fraunhofer AISEC therefore maintains several GPU clusters that are optimized for deep learning. These resources are continuously expanded with the latest technology, making it possible to train the latest models quickly and efficiently and to keep development cycles short.

Fraunhofer AISEC is the German leader in the field of hardening and robustness analysis of AI methods. Through top-class publications at international conferences and close cooperation with our industry partners, the Cognitive Security Technologies department knows the current challenges and offers appropriate solutions.

One of the main areas of research is, for example, the development of a test procedure that evaluates AI models for their vulnerability and derives suitable key performance indicators (KPIs). This allows model owners to estimate the vulnerability of their own systems, comparable to classical penetration tests. In a second step, the models can then be hardened accordingly.
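One natural KPI of this kind is robust accuracy: the share of inputs a model still classifies correctly after an adversarial perturbation of a given budget. The sketch below is a simplified illustration, not the department's actual test procedure; it evaluates a toy linear classifier on synthetic data under the fast gradient sign method for several perturbation budgets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary dataset: two Gaussian clusters (illustrative stand-in
# for the data of a model under test).
X = np.vstack([rng.normal(-1.0, 0.5, size=(100, 2)),
               rng.normal(+1.0, 0.5, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Fixed toy classifier standing in for the model being evaluated.
w, b = np.array([1.0, 1.0]), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(X):
    return (sigmoid(X @ w + b) > 0.5).astype(int)

def fgsm(X, y, eps):
    # Per-sample input gradient of the cross-entropy loss.
    grad = (sigmoid(X @ w + b) - y)[:, None] * w
    return X + eps * np.sign(grad)

# KPI: robust accuracy per perturbation budget eps (eps = 0 is clean accuracy).
for eps in (0.0, 0.25, 0.5, 1.0):
    acc = np.mean(predict(fgsm(X, y, eps)) == y)
    print(f"eps = {eps:.2f}  robust accuracy = {acc:.2f}")
```

The resulting curve of robust accuracy over eps summarizes, in a single report, how quickly the model degrades under attack, which is the kind of figure a model owner can compare across models or before and after hardening.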

The Cognitive Security Technologies research department has sound expertise in the following areas:

  • Adversarial machine learning
  • Anomaly detection
  • Natural language processing
  • AI-based fuzzing
  • User behaviour analysis
  • Analysis of encrypted network traffic
  • AI for embedded systems
  • General machine learning

Offers at a glance

Our goal is to systematically improve the security of systems and products in close cooperation with our partners and customers. In doing so, we use the potential of the latest AI algorithms to comprehensively evaluate system reliability and to maintain security and robustness over the entire life cycle.

  • Evaluation of AI-based security products, such as face recognition cameras or audio systems for speech synthesis, speech recognition, or voice-based user recognition
  • Explainability of AI methods (Explainable AI)
  • Hardware reversing and pentesting by means of artificial intelligence, e.g. via side-channel attacks on embedded devices
  • Evaluation of the correctness of data sets, both against random errors (such as incorrect annotations) and attacks (adversarial data poisoning)
  • Evaluation of training pipelines for machine learning (ML): Investigation of the correctness of the used preprocessing methods, algorithms and metrics
  • Implementation and further development of approaches from the field of privacy-preserving machine learning: training models on third-party data sets while maintaining the confidentiality of the data sets or models
  • Authentication and security of Human Machine Interfaces (HMI)
  • Support in the evaluation of security log files using Natural Language Processing
  • Information aggregation for system analysis and monitoring using ML-based evaluation of data streams, log files and other data sources
  • Conception and prototyping of high-performance, AI-supported anomaly detection
  • Conception and prototyping of AI-supported fraud detection
  • Automatic creation of situation reports using image, text and audio material (among others through Open Source Intelligence)
  • Development of algorithms in the field of predictive security
  • Creation of automated solutions for implementing GDPR requirements
  • Seminar and training courses on AI for IT security
  • Development of recognition algorithms for deepfake materials
  • Implementation of AI-based elements for IP protection
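As a small, hedged illustration of the anomaly detection offers above: a minimal statistical detector over a single log-derived feature. The data and the scenario (failed logins per hour for one account) are hypothetical, and a production pipeline would combine many features and more capable models; the sketch only shows the basic flag-the-outlier step:

```python
import numpy as np

# Hypothetical feature extracted from security logs:
# failed login attempts per hour for one account.
failed_logins = np.array([1, 0, 2, 1, 3, 0, 1, 2, 1, 0, 48, 1, 2])

def detect_anomalies(x, k=3.5):
    """Flag values whose robust z-score exceeds k.

    Median and MAD are used instead of mean and standard deviation
    because they are far less distorted by the very outliers we are
    trying to detect."""
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    robust_z = 0.6745 * (x - med) / mad  # 0.6745 scales MAD to ~1 std for normal data
    return np.where(np.abs(robust_z) > k)[0]

print(detect_anomalies(failed_logins))  # flags index 10, the hour with 48 failures
```

The same pattern (extract numeric features from logs or network data, score deviations from a learned baseline, alert above a threshold) underlies the more elaborate ML-based detectors referred to above.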

Selected Projects


Identification and early warning system for critical infrastructures



Design, development, integration and demonstration of a highly connected and resilient industrial production



AI-supported protection of highly connected, fully automated industrial production



Selected Publications

Adversarial Machine Learning
  • Sperl P., Kao C., Chen P., Lei X., Böttinger K. (2020) DLA: Dense-Layer-Analysis for Adversarial Example Detection. 5th IEEE European Symposium on Security and Privacy (EuroS&P 2020).
Anomaly Detection
  • Müller, N., Debus, P., Kowatsch, D., & Böttinger, K. (2019, July). Distributed Anomaly Detection of Single Mote Attacks in RPL Networks. Accepted for publication at 16th International Conference on Security and Cryptography (SECRYPT). Scitepress.
  • Schulze, J.-Ph., Mrowca, A., Ren, E., Loeliger, H.-A., Böttinger, K. (2019, July). Context by Proxy: Identifying Contextual Anomalies Using an Output Proxy. Accepted for publication at The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’19).
  • Fischer, F., Xiao, H., Kao, C., Stachelscheid, Y., Johnson, B., Razar, D., Furley, P., Buckley, N., Böttinger, K., Muntean, P., & Grossklags, J. (2019). Stack Overflow Considered Helpful! Deep Learning Security Nudges Towards Stronger Cryptography. Proceedings of the 28th USENIX Security Symposium (USENIX Security).
Natural Language Processing
  • Müller, N., Kowatsch, D., Debus, P., Mirdita, D., & Böttinger, K. (2019, September). On GDPR compliance of companies' privacy policies. Accepted for publication at TSD 2019.
General Machine Learning
  • Müller, N., & Markert, K. (2019, July). Identifying Mislabeled Instances in Classification Datasets. Accepted for publication at IJCNN 2019.
  • Sperl, P., & Böttinger, K. (2019). Side-Channel Aware Fuzzing. In Proceedings of the 24th European Symposium on Research in Computer Security (ESORICS). Springer.
User Behaviour Analysis
  • Engelmann, S., Chen, M., Fischer, F., Kao, C. Y., & Grossklags, J. (2019, January). Clear Sanctions, Vague Rewards: How China’s Social Credit System Currently Defines “Good” and “Bad” Behavior. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 69-78). ACM.

  • K. Böttinger, G. Hansch, and B. Filipovic. “Detecting and Correlating Supranational Threats for Critical Infrastructures”. In 15th European Conference on Cyber Warfare and Security (ECCWS 2016), 2016.
  • K. Böttinger, D. Schuster, and C. Eckert. “Detecting Fingerprinted Data in TLS Traffic”. In Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, 2015, pp. 633–638.
  • H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, and F. Roli. “Is Feature Selection Secure against Training Data Poisoning?”. International Conference on Machine Learning (ICML), vol. 37, 2015.
  • D. Schuster and R. Hesselbarth. “Evaluation of Bistable Ring PUFs Using Single Layer Neural Networks”. In Trust and Trustworthy Computing, Springer, 2014, pp. 101–109.
  • H. Xiao, B. Biggio, B. Nelson, H. Xiao, C. Eckert, and F. Roli. “Support Vector Machines under Adversarial Label Contamination”. Neurocomputing, Special Issue on Advances in Learning with Label Noise, Aug. 2014.

  • H. Xiao and C. Eckert. “Lazy Gaussian Process Committee for Real-Time Online Regression”. In 27th AAAI Conference on Artificial Intelligence (AAAI ’13), 2013.
  • H. Xiao and C. Eckert. “Efficient Online Sequence Prediction with Side Information”. In IEEE International Conference on Data Mining (ICDM), 2013.
  • H. Xiao and C. Eckert. “Indicative Support Vector Clustering with Its Application on Anomaly Detection”. In Proceedings of the 12th International Conference on Machine Learning and Applications (ICMLA 2013), vol. 1, pp. 273–276, 2013.
  • H. Xiao, H. Xiao, and C. Eckert. “OPARS: Objective Photo Aesthetics Ranking System”. Lecture Notes in Computer Science, vol. 7814, pp. 861–864, 2013.
  • H. Xiao, H. Xiao, and C. Eckert. “Learning from Multiple Observers with Unknown Expertise”. Lecture Notes in Computer Science, vol. 7818, part 1, pp. 595–606, 2013.
  • H. Xiao, H. Xiao, and C. Eckert. “Adversarial Label Flips Attack on Support Vector Machines”. In 20th European Conference on Artificial Intelligence (ECAI), 2012.