Toward a Positive Statement of Ethical Principles for Military AI
This chapter responds to the notable absence of formal movement by states toward practical ethical frameworks for military applications of Artificial Intelligence (AI). Despite prominent depictions of autonomous weapon systems as the first step toward a dystopian future, and almost seven years of international discussions under the auspices of the United Nations, only the United States has developed explicit principles for military AI. In the absence of similar efforts by other states, the underlying technologies have continued to advance, particularly in the civilian sphere. The purpose of this chapter, therefore, is to review ethical principles originally developed for civilian applications of AI and to propose a version suited to armed forces that seek to deploy autonomous systems and military AI. The chapter considers the limitations of such an approach and argues for the development of a ‘minimally-just AI’ to ensure that autonomous weapons cannot be used for blatant violations of the laws of war. The Military Ethical AI principles outlined here are intended as a high-level framework and shared language to enable discussion among stakeholders about the ethical and legal concerns that remain with militarized AI.
Keywords: ethical AI principles, militarized artificial intelligence, minimally-just AI, civil-military relations, military ethics