Artificial Intelligence and Cyber Attacks

Bitte benutzen Sie diese Kennung, um auf die Ressource zu verweisen:
https://doi.org/10.48693/413
Open Access logo originally created by the Public Library of Science (PLoS)
Titel: Artificial Intelligence and Cyber Attacks
Autor(en): Saalbach, Klaus
Zusammenfassung: Artificial Intelligence (AI) is commonly understood as the ability of machines to perform tasks that normally require human intelligence and is a key area of advanced computing. A rapidly growing and widespread AI application is the Generative AI where the AI can create content like new images, texts, sounds, and videos based on short instructions, the so-called prompts, which are a key vulnerability if malicious instructions are given. The rapid and uncontrolled expansion made AI a top security matter: On 28 Sep 2023, the US National Security Agency NSA announced the creation of an AI Security Center which will consolidate all AI security-related activities, protect US AI-systems, and defend the homeland against AI-related threats. At the same time, the Central Intelligence Agency (CIA) Director of Artificial Intelligence announced the development of an internal AI-based chatbot to support intelligence analysis. The AI program ChatGPT-4 (Generative Pretrained Transformer) released on 14 March 2023 uses 100 trillion parameters, was trained with a very large data set from multiple sources and is a multimodal, large-scale model that accepts images and text as input. In practice, AI ethics is not achieved by algorithms, but by governance. The producers of AI models have guidelines to make sure that an AI acts ethically and in a responsible manner and is not unlawful, discriminating, aggressive etc. Attempts to circumvent these restrictions are done by prompt injections (special instructions to AI to create restricted content), also called jailbreaks. The key security problems of ChatGPT are the easy access to prompt injections in internet search engines, the simplicity of attacks and the curiosity of the users. Typical attacks are prompt injections with direct commands, imagination, and reverse psychology. These methods facilitate the creation of malware, polymorphic viruses, ransomware, and other malevolent applications. Further problems are hallucinations, contamination of search engines and the efflux of sensitive data. Generative Adversarial Networks (GANs) as subset of generative AI can be misused to break CAPTCHAs and to create fake content such as deepfakes, face swapping and voice cloning. On the other hand, generative AI is also very useful for cyber defense for advanced data analysis, advanced pattern recognition, creation and analysis of threat repositories and code analysis. The rapidly growing capability of AI raised concerns whether this could be harmful for human beings. This paper briefly presents the potential of AI for creation and defense of cyber attacks, the risks of generative AI and the need for a regulatory framework to safeguard the further development.
Bibliografische Angaben: Working Paper. Universität Osnabrück, Fachbereich 1 - Kultur- und Sozialwissenschaften, Institut für Sozialwissenschaften, Osnabrück 2023.
URL: https://doi.org/10.48693/413
https://osnadocs.ub.uni-osnabrueck.de/handle/ds-202310169882
Schlagworte: Artificial Intelligence; Cyber Attacks; Cyber Security; ChatGPT
Erscheinungsdatum: 16-Okt-2023
Lizenzbezeichnung: Attribution 3.0 Germany
URL der Lizenz: http://creativecommons.org/licenses/by/3.0/de/
Publikationstyp: Arbeitspapier [WorkingPaper]
Enthalten in den Sammlungen:FB01 - Hochschulschriften

Dateien zu dieser Ressource:
Datei Beschreibung GrößeFormat 
AI_and_Cyber_Attacks_2023_Saalbach.pdf436,15 kBAdobe PDF
AI_and_Cyber_Attacks_2023_Saalbach.pdf
Miniaturbild
Öffnen/Anzeigen


Diese Ressource wurde unter folgender Copyright-Bestimmung veröffentlicht: Lizenz von Creative Commons Creative Commons