Cognitive Security Technologies

Artificial Intelligence and IT Security

The Cognitive Security Technologies department conducts research at the intersection of artificial intelligence (AI) and IT security.
The focus is on two aspects:

Applying AI methods in IT security

State-of-the-art IT systems are characterized by fast-growing complexity. The development of current and future information and communication technology introduces new, previously unforeseen challenges: the increasing connectivity of even the smallest communicating units and their convergence into the Internet of Things, the global connection of critical infrastructures to unsecured communication networks, and the protection of digital identities. People face the challenge of ensuring the security and stability of all of these systems.

To keep pace with this rapid development, IT security must advance and rethink automation. The Cognitive Security Technologies research department develops semi-automated security solutions that use AI to support humans in investigating and securing security-critical systems.

Security of machine learning and AI algorithms

Just like conventional IT systems, AI systems can be attacked. Adversarial examples, for instance, make it possible to manipulate AI-based facial recognition, so attackers can gain unauthorized access to sensitive access management systems that rely on AI. Similar attack scenarios affect fields such as autonomous driving, where humans must be able to rely on the robustness and stability of assistance systems.

One research field of the Cognitive Security Technologies department at Fraunhofer AISEC is the exploration of such vulnerabilities in AI algorithms and of solutions to fix them. Furthermore, the department offers tests to harden such AI systems.
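
To illustrate the principle, the following is a minimal sketch of the fast gradient sign method (FGSM), one common way to craft adversarial examples; the model, inputs, and epsilon value are illustrative placeholders, not Fraunhofer AISEC code:

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, image, label, epsilon=0.03):
        """Perturb an input so that the model is likely to misclassify it."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that maximally increases the loss,
        # bounded by the perturbation budget epsilon.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

An imperceptibly small epsilon is often enough to flip the prediction of an undefended classifier.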

GPU clusters with high processing power

Deep Learning and AI require very high processing power. Fraunhofer AISEC therefore maintains several GPU clusters that are specifically optimized for Deep Learning. These resources are continuously upgraded with the latest technology. This provides the ability to train the latest models quickly and efficiently to keep development cycles short.

Offerings

Our goal is to systematically improve the security of systems and products in close cooperation with our partners and customers. In doing so, we use state-of-the-art AI algorithms to comprehensively evaluate system reliability and to maintain reliability and robustness throughout the entire lifecycle.

Evaluate Security

  • Evaluating AI-based security products, such as facial recognition cameras or audio systems for speech synthesis, voice recognition, and voice-based user recognition
  • Explainability of AI methods (Explainable AI)
  • Hardware reverse engineering and pentesting using artificial intelligence, e.g., side-channel attacks on embedded devices
  • Assessing the correctness of datasets, both against random errors (such as incorrect annotations) and against attacks (adversarial data poisoning); one approach is sketched after this list
  • Evaluating machine learning (ML) training pipelines: examining the correctness of the applied preprocessing methods, algorithms, and metrics
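
As a sketch of the dataset-correctness item above, one simple way to flag potentially mislabeled instances is to look for samples whose given label receives low out-of-fold probability (in the spirit of Müller and Markert, 2019, listed under Publications); the classifier choice and threshold are illustrative assumptions:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_predict

    def flag_suspicious_labels(X, y, threshold=0.1):
        """Return indices whose given label gets low out-of-fold probability.

        Assumes y is integer-encoded as 0..n_classes-1.
        """
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        # Out-of-fold probabilities avoid trusting memorized training labels.
        proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
        confidence_in_given_label = proba[np.arange(len(y)), y]
        return np.where(confidence_in_given_label < threshold)[0]

Flagged indices are candidates for manual re-annotation rather than automatic removal.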

Design Security 

  • Implementation and further development of approaches from the field of Privacy-Preserving Machine Learning: training models on third-party datasets while maintaining the confidentiality of the datasets or the models (a sketch follows this list)
  • Authentication and Human Machine Interface (HMI) security
  • Support in the evaluation of security log files using Natural Language Processing
  • Information aggregation for system analysis and monitoring using ML-based analysis of data streams, log files, and other data sources
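
As a sketch of the privacy-preserving machine learning item above, the following shows one training step in the style of DP-SGD (per-sample gradient clipping plus Gaussian noise), a standard building block of the field; the model, loss, and hyperparameters are illustrative assumptions, not the department's implementation:

    import torch

    def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip=1.0, noise_std=0.5):
        """One DP-SGD-style step: clip each sample's gradient, add noise, update."""
        summed = [torch.zeros_like(p) for p in model.parameters()]
        for x, y in zip(xs, ys):
            model.zero_grad()
            loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
            grads = [p.grad.detach() for p in model.parameters()]
            norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            scale = torch.clamp(clip / (norm + 1e-6), max=1.0)
            for acc, g in zip(summed, grads):
                acc += g * scale  # per-sample gradient now has norm <= clip
        with torch.no_grad():
            for p, acc in zip(model.parameters(), summed):
                # Gaussian noise calibrated to the clipping bound masks
                # any single sample's contribution to the update.
                noise = torch.normal(0.0, noise_std * clip, size=acc.shape)
                p -= lr * (acc + noise) / len(xs)

Privacy accounting, i.e., choosing noise_std and tracking the spent privacy budget, is omitted here for brevity.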

Maintain Security

  • Conception and prototyping of performance-aware, AI-assisted anomaly detection (a baseline is sketched after this list)
  • Conception and prototyping of AI-assisted fraud detection
  • Situational awareness using imagery, text, and audio (including open source intelligence)
  • Development of algorithms for predictive security
  • Creation of automated solutions for implementing GDPR (DSGVO) requirements
  • Seminars and training courses on AI for IT security
  • Development of detection algorithms for deepfake material
  • Implementation of AI-based elements for IP protection
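
As a sketch of the anomaly-detection item above, a common baseline is an autoencoder trained on normal data only, with the reconstruction error serving as anomaly score; the architecture and feature dimension are illustrative assumptions:

    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, dim, hidden=8):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
            self.decoder = nn.Linear(hidden, dim)

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def anomaly_scores(model, x):
        """Mean squared reconstruction error per sample; higher means more anomalous."""
        with torch.no_grad():
            return ((model(x) - x) ** 2).mean(dim=1)

After training on benign data only, samples whose score exceeds a threshold calibrated on validation data are flagged for human review.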

Expertise

Fraunhofer AISEC is a national leader in the hardening and robustness analysis of AI methods. Through high-profile publications at international conferences and close cooperation with our industrial partners, the Cognitive Security Technologies department understands the current challenges and provides corresponding solution approaches.

For example, one of the main research areas is the development of a testing procedure that evaluates AI models for their vulnerability and derives appropriate key performance indicators (KPIs). This allows model owners to estimate the vulnerability of their own systems, comparable to classical penetration tests. In a second step, the models can then be hardened accordingly. One such KPI is sketched below.
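
A minimal sketch of one such KPI is robust accuracy: the share of inputs a model still classifies correctly under a bounded attack, here reusing the fgsm_example sketch from above; the data loader and epsilon value are illustrative:

    def robust_accuracy(model, loader, epsilon=0.03):
        """Fraction of samples classified correctly after FGSM perturbation."""
        correct, total = 0, 0
        for x, y in loader:
            adversarial = fgsm_example(model, x, y, epsilon)
            correct += (model(adversarial).argmax(dim=1) == y).sum().item()
            total += y.numel()
        return correct / total

Tracking this value for a growing epsilon yields a vulnerability curve that can be compared across models and hardening steps.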

The Cognitive Security Technologies department has in-depth expertise in the following areas:

  • Adversarial Machine Learning
  • Anomaly Detection
  • Natural Language Processing
  • AI-based fuzzing
  • User Behaviour Analysis
  • Analysis of Encrypted Network Traffic
  • AI for Embedded Systems
  • General Machine Learning

Further Projects

ECOSSIAN

Detection and warning systems for critical infrastructures.

CyberFactory#1

Design, development, integration, and demonstration of highly connected and resilient industrial production.

SeCoIIA

AI-based protection of highly connected, fully automated industrial production.

Publications

  • Müller, N., & Böttinger, K. (2021). Adversarial Vulnerability of Active Transfer Learning. In Symposium on Intelligent Data Analysis (IDA 2021).
  • Sperl, P., Schulze, J.-P., & Böttinger, K. (2021). Activation Anomaly Analysis. In F. Hutter, K. Kersting, J. Lijffijt, & I. Valera (Eds.), Machine Learning and Knowledge Discovery in Databases (pp. 69–84). Cham: Springer International Publishing. ISBN: 978-3-030-67661-2.
  • Dörr, T., Markert, K., Müller, N. M., & Böttinger, K. (2020). Towards Resistant Audio Adversarial Examples. In 1st Security and Privacy on Artificial Intelligence Workshop (SPAI '20), ACM AsiaCCS, Taipei, Taiwan. DOI: https://doi.org/10.1145/3385003.3410921.
  • Markert, K., Mirdita, D., & Böttinger, K. (2020). Adversarial Attacks on Speech Recognition Systems: Language Bias in Literature. In ACM Computer Science in Cars Symposium (CSCS). Online.
  • Markert, K., Parracone, R., Sperl, P., & Böttinger, K. (2020). Visualizing Automatic Speech Recognition. In Annual Computer Security Applications Conference (ACSAC). Online.
  • Müller, N., Kowatsch, D., & Böttinger, K. (2020). Data Poisoning Attacks on Regression Learning and Corresponding Defenses. In 25th IEEE Pacific Rim International Symposium on Dependable Computing (PRDC).
  • Müller, N., Roschmann, S., & Böttinger, K. (2020). Defending Against Adversarial Denial-of-Service Data Poisoning Attacks. In DYNAMICS Workshop, Annual Computer Security Applications Conference (ACSAC).
  • Sperl, P., & Böttinger, K. (2020). Optimizing Information Loss Towards Robust Neural Networks. In DYNAMICS Workshop, Annual Computer Security Applications Conference (ACSAC).
  • Sperl, P., Kao, C., Chen, P., Lei, X., & Böttinger, K. (2020). DLA: Dense-Layer-Analysis for Adversarial Example Detection. In 5th IEEE European Symposium on Security and Privacy (EuroS&P 2020).
  • Müller, N., Debus, P., Kowatsch, D., & Böttinger, K. (2019). Distributed Anomaly Detection of Single Mote Attacks in RPL Networks. In 16th International Conference on Security and Cryptography (SECRYPT). SciTePress.
  • Schulze, J.-P., Mrowca, A., Ren, E., Loeliger, H.-A., & Böttinger, K. (2019). Context by Proxy: Identifying Contextual Anomalies Using an Output Proxy. In 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '19).
  • Fischer, F., Xiao, H., Kao, C., Stachelscheid, Y., Johnson, B., Razar, D., Furley, P., Buckley, N., Böttinger, K., Muntean, P., & Grossklags, J. (2019). Stack Overflow Considered Helpful! Deep Learning Security Nudges Towards Stronger Cryptography. In Proceedings of the 28th USENIX Security Symposium (USENIX Security).
  • Müller, N., Kowatsch, D., Debus, P., Mirdita, D., & Böttinger, K. (2019). On GDPR Compliance of Companies' Privacy Policies. In Text, Speech, and Dialogue (TSD 2019).
  • Müller, N., & Markert, K. (2019). Identifying Mislabeled Instances in Classification Datasets. In International Joint Conference on Neural Networks (IJCNN 2019).
  • Sperl, P., & Böttinger, K. (2019). Side-Channel Aware Fuzzing. In Proceedings of the 24th European Symposium on Research in Computer Security (ESORICS). Springer.
  • Engelmann, S., Chen, M., Fischer, F., Kao, C. Y., & Grossklags, J. (2019). Clear Sanctions, Vague Rewards: How China's Social Credit System Currently Defines "Good" and "Bad" Behavior. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 69–78). ACM.
  • Xiao, H. (2017). Adversarial and Secure Machine Learning (Doctoral dissertation, Universität München).
  • Schneider, P., & Böttinger, K. (2018). High-Performance Unsupervised Anomaly Detection for Cyber-Physical System Networks. In Proceedings of the 2018 Workshop on Cyber-Physical Systems Security and PrivaCy (pp. 1–12). ACM.
  • Fischer, F., Böttinger, K., Xiao, H., Stransky, C., Acar, Y., Backes, M., & Fahl, S. (2017). Stack Overflow Considered Harmful? The Impact of Copy&Paste on Android Application Security. In 2017 IEEE Symposium on Security and Privacy (SP) (pp. 121–136). IEEE.
  • Böttinger, K., Singh, R., & Godefroid, P. (2018). Deep Reinforcement Fuzzing. In IEEE Symposium on Security and Privacy Workshops.
  • Böttinger, K. (2017). Guiding a Colony of Black-Box Fuzzers with Chemotaxis. In 2017 IEEE Security and Privacy Workshops (SPW) (pp. 11–16). IEEE.
  • Böttinger, K. (2016). Fuzzing Binaries with Lévy Flight Swarms. EURASIP Journal on Information Security, 2016(1), 28.
  • Böttinger, K., & Eckert, C. (2016). DeepFuzz: Triggering Vulnerabilities Deeply Hidden in Binaries. In International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment (pp. 25–34). Springer, Cham.
  • Böttinger, K. (2016). Hunting Bugs with Lévy Flight Foraging. In 2016 IEEE Security and Privacy Workshops (SPW) (pp. 111–117). IEEE.
  • Settanni, G., Skopik, F., Shovgenya, Y., Fiedler, R., Carolan, M., Conroy, D., ... & Haustein, M. (2017). A Collaborative Cyber Incident Management System for European Interconnected Critical Infrastructures. Journal of Information Security and Applications, 34, 166–182.
  • Xiao, H., Biggio, B., Nelson, B., Xiao, H., Eckert, C., & Roli, F. (2015). Support Vector Machines under Adversarial Label Contamination. Neurocomputing, 160, 53–62.
  • Xiao, H., Biggio, B., Brown, G., Fumera, G., Eckert, C., & Roli, F. (2015). Is Feature Selection Secure against Training Data Poisoning? In International Conference on Machine Learning (pp. 1689–1698).
  • Böttinger, K., Schuster, D., & Eckert, C. (2015). Detecting Fingerprinted Data in TLS Traffic. In Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security (pp. 633–638). ACM.
  • Schuster, D., & Hesselbarth, R. (2014). Evaluation of Bistable Ring PUFs Using Single Layer Neural Networks. In International Conference on Trust and Trustworthy Computing (pp. 101–109). Springer, Cham.