Lethal Autonomous Weapon Systems

Please use this identifier to cite this resource:
https://doi.org/10.48693/444
Title: Lethal Autonomous Weapon Systems
Author(s): Saalbach, Klaus
Abstract: The development of autonomous weapons is in progress due to technical advances, decreasing production costs, and progress in Artificial Intelligence (AI) and the resulting degree of autonomy. Fully autonomous weapon systems are expected to become operational in the next few years. Lethal autonomous weapon systems (LAWS), also known as autonomous weapon systems (AWS), robotic weapons, or killer robots, use sensors and algorithms to independently identify, engage, and destroy a target. In military practice, the development of unmanned drone swarms is the technology closest to full LAWS. This is accompanied by an intense ethical and legal discussion. While substantial progress has been made on the responsible use of AI for military purposes, a ban on LAWS has not yet been achieved. Additional technical risks include errors, reliability issues, hacking, data poisoning, spoofing, unintended engagement, and other scenarios. Among the approximately 800 AI-related projects and unmanned device (UxS) programs of the US Department of Defense (DoD), three programs in particular are steps towards LAWS: the Golden Horde program for collaboration between small bombs, the Replicator program for coordinated mass attacks by unmanned systems from seabed to satellites, and the ongoing development of the new inter-machine language Droidish. While human beings are currently a direct part of the decision process (human-in-the-loop) or at least act as supervisors (human-on-the-loop), the speed and complexity of inter-machine communication between thousands of machines will make it difficult for humans to intervene (human-out-of-the-loop) and could reduce human supervision to a symbolic presence.
Another factor that may undermine human control is the massive expansion of AI capabilities: logical reasoning as discussed in the Q* debate, the difficulty of safeguarding strong AIs (superalignment), the uncertainty of future relations between humans and AI-enabled machines, and the new option that larger AIs can create small AIs and spread them, which could be used as a new kind of cyber attack. This paper briefly presents the status of LAWS development, the US DoD programs Golden Horde, Replicator, and Droidish, and the legal, ethical, and technical challenges for LAWS and AI-enabled weapons.
URL: https://doi.org/10.48693/444
https://osnadocs.ub.uni-osnabrueck.de/handle/ds-2023121910199
Keywords: Lethal Autonomous Weapon Systems; Artificial Intelligence; drone swarms; superalignment
Issue date: 19-Dec-2023
License: Attribution 3.0 Germany
License URL: http://creativecommons.org/licenses/by/3.0/de/
Publication type: Working paper [WorkingPaper]
Appears in collections: FB01 - Hochschulschriften

Files in this resource:
File: Lethal_Autonomous_Weapons_Systems_2023_Saalbach.pdf (418.6 kB, Adobe PDF)


This resource was published under the following copyright terms: Creative Commons license