What are the opportunities and risks of GenAI for cybersecurity?
Generative artificial intelligence is revolutionizing the automated creation of content such as text and images, and also that of program code. This is made possible by neural networks based on parameters numbering in the trillions. But these opportunities also harbor risks: In addition to their transparency and traceability, AI systems also pose significant challenges with their hallucination, degree of robustness against attackers and reliability.
Generative artificial intelligence also poses opportunities and risks in cybersecurity: GenAI can therefore be seen both as a potential threat and as a tool for defense.
Opportunities for GenAI in cybersecurity
1. Proactively increasing cybersecurity levels, including against AI-based attacks
- Real-time analysis: Generative AI can quickly analyze large quantities of data, detect anomalies and identify causes of attacks.
- Predicting paths of attack: Predictive Security AI can detect vulnerabilities and anticipate protective measures..
- Autonomous defense: AI can respond to attacks without human intervention, as long as its decisions are transparent and comprehensible.
2. Support in operational cybersecurity tasks
- Automating routine tasks: Automating routine tasks such as log analysis reduces the workload for skilled workers and counteracts the shortage of skilled labor.
- Automated vulnerability detection: Vulnerabilities can be automatically detected, traced and corrected..
- Support for compliance: Help in implementing and documenting statutory requirements (e. g. the Datenschutz-Grundverordnung (Basic Regulation on Data Protection, DSGVO), the NIS2 Directive or the CRA). This can be achieved through integration in frameworks of the NIST Open Security Controls Assessment Language (OSCAL), the Common Security Advisory Framework (CSAF) or the German Federal Office for Information Security (BSI).
3. Improving the quality of security in software development
- Analyzing software artifacts: Support in software security testing in the supply chain, also in binary code.
- Secure software development: Early detection of vulnerabilities, recommendations for secure code and continuous security analysis through integration with IDEs and CI/CD pipelines..
- More efficient development processes: Generative AI can improve the quality and speed of development while minimizing security risks at an early stage.
Threats
1. Individual usage risks
- Prompt injection: Users can use targeted inputs to manipulate large language models (LLMs) that have been trained on large quantities of text data to understand human language and generate text themselves.
- Information leakage: Confidential data can be inadvertently disclosed in queries or inferred from model responses (inference attacks).
- Malicious code from LLM outputs: The use of AI responses without checking can create security vulnerabilities, especially when generated program code is used without checking.
- Manipulated training data: Training data poisoning can deliberately alter the behavior of a model.
2. AI-generated attack campaigns
- Automatically generated malware: Adaptable malicious code (e.g. polymorphic viruses) can be created with no in-depth expertise.
- Phishing and deepfakes: AI enables the deployment of broad-based campaigns with targeted, highly individualized and seemingly realistic but manipulated content.
- Fake news and disinformation: Widespread automated dissemination of misinformation undermines social trust and democratic processes.
- Autonomous attackers: AI can detect vulnerabilities and develop attack strategies (e.g. hierarchical agent systems).
- Hybrid attacks: The combination of LLMs with traditional security analysis frameworks supporting those responsible for security enables coordinated and dynamic cyberattacks.
3. AI-supported attack preparation
- Vulnerability detection: LLMs help attackers quickly identify vulnerabilities.
- Situation reports through data analysis: The combination of open-source intelligence (OSINT) and other data sources enables precise preparation of attacks, even without technical expertise.
The discussion paper "Generative KI und ihre Auswirkungen auf die Cybersicherheit (Generative AI and its impact on cybersecurity)" (in German) published by the Scientific Working Group of the National Cyber Security Council (in German) in June 2025 provides a good overview of the opportunities and risks. Claudia Eckert, Head of the Fraunhofer Institute for Applied and Integrated Security AISEC, is the lead author of this discussion paper.
The OWASP* Top 10 for LLM Applications from November 2024 identifies the most critical security risks in this area.
Fraunhofer Institute for Applied and Integrated Security