March 26, 2021 Feature

The Department of Defense AI Ethical Principles: A Guide for Legal Counsel to Autonomous Drone Operators

By Frank Coppersmith

Artificial intelligence (AI) is transforming industries, consumer-business relations, and global security. As organizations grapple with deploying this new, enabling technology, leaders and their legal counsel are faced with difficult decisions about the propriety and legality of combining AI with existing capabilities and business processes. Lawyers are now being asked whether and to what extent AI can be included in products, used to deliver services, or incorporated into a company’s day-to-day operations.1 In reply, lawyers are asking engineers and operators tough questions about their confidence in the proposed AI’s accuracy, the risk of unforeseen (and potentially illegal) bias, the nature of the underlying training data, and the potential for harm if the AI behaves in an unexpected way.2

In the face of increasingly capable adversaries and a need to adopt and deploy AI to support its national security mission, the Department of Defense (DoD) has been at the forefront of developing an ethical and regulatory framework for AI that has broad applicability outside of the military.3 Public sector actors and private industry involved with the design, development, and deployment of AI across a wide variety of business activities will find DoD’s newly adopted AI ethical principles to be a useful guide in any situation where the application of AI has outpaced the development of concomitant regulatory and legal guidance. Consideration of and reference to DoD’s ethical principles may prove especially valuable for lawyers advising operators of AI-enhanced drones, helping those operators gain public confidence and regulatory sign-off and thereby secure the freedom of operation needed for successful commercial performance.

What Is AI?

AI consists of computers and software performing tasks that normally require human intelligence, cognition, or mental flexibility—such as reasoning, problem solving, planning, and learning—but conducted with microprocessor speed and precision.4 What makes modern AI different from traditional software engineering (where a human explicitly programs a computer to do a task) is the use of machine learning, a process where computers learn via “trial and error.”5 In one notable example of machine learning, Carnegie Mellon scientists let a drone teach itself to fly by navigating twenty different indoor environments; in just forty hours of flying time and 11,500 collisions, the drone mastered its aerial environment.6 For all of its impact and potential commercial disruption, today’s AI is not based on sophisticated, export-controlled hardware or know-how; rather, it comes from relatively low-cost civilian technology and commercially available algorithms, making AI accessible to nearly anyone.7
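
For readers who want the distinction made concrete, the contrast between explicit programming and trial-and-error learning can be sketched in a few lines of code. This is a toy illustration with invented numbers and an invented learning loop, not drawn from the Carnegie Mellon work:

```python
# Toy contrast between explicit programming and machine learning.
# Task: decide whether an obstacle reading (distance in meters) means "stop".

# Traditional software engineering: a human writes the rule explicitly.
def explicit_stop(distance):
    return distance < 2.0  # the 2.0 m threshold is hand-chosen by a person

# Machine learning: the rule (here, a threshold) is inferred by trial and
# error from labeled experience, e.g., past flights where a crash did or
# did not occur. The data and update rule below are entirely made up.
def learn_threshold(experience, steps=1000, lr=0.01):
    threshold = 0.0
    for _ in range(steps):
        for distance, crashed in experience:
            predicted_stop = distance < threshold
            if crashed and not predicted_stop:
                threshold += lr   # too permissive: raise the threshold
            elif not crashed and predicted_stop:
                threshold -= lr   # too cautious: lower the threshold
    return threshold

experience = [(0.5, True), (1.0, True), (3.0, False), (4.0, False)]
learned = learn_threshold(experience)
# The learned threshold settles between the crash and no-crash examples.
```

The point is that no human ever wrote the learned threshold down; it emerged from the data, which is exactly why the resulting behavior can be hard to inspect or predict.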

For example, many commercial and even low-cost consumer drones are equipped with advanced computer vision to detect obstacles and avoid them autonomously.8 Other drones are manufactured with even more advanced AI, such as the ability to follow and film a user autonomously, avoiding obstacles and keeping the user in focus without explicit instruction. What is next for AI-enabled drones? Automated drone deliveries to consumers: a service that can be critical (medical deliveries) or merely convenient (delivery in under thirty minutes), built on rapidly advancing technology that pushes against the limits of existing regulation.9

U.S. Drone Market

The U.S. is home to the largest drone market in the world, representing half of all global drone investments and projected to triple in size by 2024.10 Commercial drones have redefined a number of industries, but some of the greatest potential comes from pairing innovative drone technology with AI to disrupt package delivery services.11 Autonomous delivery drones are not science fiction: rapid increases in microprocessor power and in the availability of enormous volumes of easily analyzed digital navigational data, coupled with decreases in the cost of data storage, make powerful machine-learning AI available even for routine drone applications. The potential market for drone deliveries is growing. With COVID-19 driving “stay at home” orders and closing storefront retail, consumers have turned to e-commerce, increasing spending 77% year over year.12 For the 2020 holiday season, U.S. businesses were expected to ship over 1.5 billion packages.13

Ever since December 7, 2016, when Amazon Prime Air14 made its first delivery to a customer using a GPS-guided flying drone, companies such as Amazon, Google,15 and the United Parcel Service (UPS)16 have been competing to get authorization to provide drone delivery services for consumer goods. Consumers want drone delivery: 79% of U.S. consumers would be “likely” to request drone delivery if their package could be delivered within an hour, with 73% of consumers saying that they would pay up to $10 for a drone delivery.17

If fully deployed today, Amazon’s Prime Air drones could satisfy between 75% and 90% of all of Amazon’s package deliveries,18 flying directly from its seventy-five fulfillment centers and keeping delivery times under thirty minutes. Nonetheless, to take such a plan to scale and comply with current Federal Aviation Administration (FAA) regulations, Amazon would need over 6,000 operators flying up to 40,000 drones, at a cost to Amazon in salary and benefits alone of over $400 million per year.19 The attraction of AI-enabled drone delivery is clear.

Accordingly, Amazon and other delivery services envision a future where package delivery takes place using AI-enabled drones that have received the necessary FAA approvals, consistent with regulatory guidance, to operate independent of direct operator control. And as businesses and consumers come to rely on AI-enabled drone delivery for urgent and routine services, drone service providers must go beyond mere administrative or legal requirements and look to broader ethical principles as they integrate AI into their global business models.

DoD AI Ethical Principles

The DoD has adopted a series of broad principles related to the use and deployment of AI to drive internal discussion, inform contractual requirements, and address commander authority and accountability across the AI delivery pipeline. From predictive analytics to autonomous vehicles, DoD use cases for AI range from the back office to the battlefield.20 Nonetheless, the nature of modern AI, especially using the process of machine learning, where software is trained on large data sets and not explicitly programmed, presents a unique risk of unintended consequences. Specifically, AI created with machine learning uses data to develop a generalized set of decision-making rules that can be tested and validated for overall accuracy (e.g., the AI is correct 99% of the time); however, the rules themselves are opaque (a “black box”). As such, the AI cannot be evaluated for how and by what factors it makes a decision. While accurate against test data sets, the AI could be relying on improper factors (such as racial or ethnic characteristics otherwise prohibited from use by law) or could react outside of expected norms when presented with circumstances not covered in the training data. Especially frightening is the interaction between different AIs; known as “emergent behavior,” this interaction can fall far outside expected actions and produce results that neither AI’s developer intended.
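
The “accurate but opaque” risk can be made concrete with a toy example. All of the data, the features, and the naive learner below are invented for illustration: a learner that simply optimizes accuracy may quietly settle on a legally prohibited proxy, and nothing in its output reveals which factor drives the decision.

```python
# Invented loan-decision data. Each applicant: (income, zip_code), where
# zip_code stands in for a prohibited proxy (e.g., one correlated with a
# protected characteristic). Label: 1 = loan repaid, 0 = default.
train = [
    ((80, "A"), 1), ((40, "A"), 1), ((75, "A"), 1),
    ((90, "B"), 0), ((30, "B"), 0), ((35, "B"), 0),
]

# A naive learner that picks whichever single-feature rule best predicts
# the label on the training data. It has no notion of which features are
# legally permissible to use -- it only chases accuracy.
def fit_best_single_feature(data):
    best = None
    for idx in range(2):  # try each feature in turn
        for value in {row[0][idx] for row in data}:
            if idx == 0:
                rule = lambda x, v=value: 1 if x[0] >= v else 0   # income threshold
            else:
                rule = lambda x, v=value: 1 if x[1] == v else 0   # zip-code match
            acc = sum(rule(x) == y for x, y in data) / len(data)
            if best is None or acc > best[0]:
                best = (acc, idx, rule)
    return best

acc, feature_used, rule = fit_best_single_feature(train)
# On this data, income is noisy but zip code predicts the label perfectly,
# so the learner silently selects the prohibited proxy (feature index 1)
# while reporting an impressive accuracy figure.
```

The resulting model is 100% accurate on its test data yet denies a high-income applicant from the “wrong” zip code, which is precisely the kind of behavior a test-set accuracy number will never surface on its own.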

For these reasons, the Defense Innovation Board, an independent advisory committee, conducted a fifteen-month effort to identify AI ethical principles for DoD by consulting with leading AI experts across government, academia, industry, and the public. The result was a series of recommendations in a sixty-six-page study issued in the fall of 2019 to help smooth the path of military-civilian cooperation on AI. Encouraged by Silicon Valley, on February 21, 2020, DoD adopted the following AI Ethical Principles for the design, development, and deployment of AI.21

  1. Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable. DoD will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable. DoD’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable. DoD’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycles.
  5. Governable. DoD will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.22
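
In software terms, the “governable” principle resembles a runtime guard wrapped around an autonomous controller. The sketch below (hypothetical class names and limits, not DoD’s design) shows the disengage-on-anomaly idea: detect behavior outside the defined envelope, deactivate the autonomous system, and fail safe rather than silently.

```python
# Hypothetical sketch of the "governable" principle: wrap an autonomous
# controller in a guard that watches for out-of-bounds commands and
# disengages the system when one occurs. Names and limits are invented.
class GovernedController:
    def __init__(self, controller, max_altitude_m=120):
        self.controller = controller
        self.max_altitude_m = max_altitude_m
        self.engaged = True

    def command(self, sensor_state):
        if not self.engaged:
            return "return_to_home"  # deactivated: fail safe, not silent
        action = self.controller(sensor_state)
        # Detect unintended behavior: a command outside the defined envelope.
        if action.get("target_altitude_m", 0) > self.max_altitude_m:
            self.engaged = False     # disengage the autonomous system
            return "return_to_home"
        return action

# A (made-up) controller that misbehaves when conditions turn anomalous.
def flaky_autopilot(state):
    return {"target_altitude_m": 100 if state == "ok" else 500}

drone = GovernedController(flaky_autopilot)
first = drone.command("ok")        # within envelope: passed through
second = drone.command("anomaly")  # out of envelope: guard disengages
third = drone.command("ok")        # stays disengaged after the fault
```

The design choice worth noting is that the guard sits outside the opaque controller: it does not need to understand how the AI reasons, only to recognize and stop behavior that violates the system’s explicit, well-defined envelope.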

While created by DoD, together these principles provide commercial, nongovernmental, and other governmental organizations with a framework to create conditions for responsible and human-centric adoption of AI.23 Any organization that designs, develops, or deploys AI can use these principles as a methodology to ensure ongoing operational freedom and sustainability.

DoD adopted these principles to “earn the trust of the American public, [and] attract and retain a talented digital workforce. . .” after a series of blunders and fears among technologists and engineers that DoD was going to build killer robots.24 Nonetheless, commercial drone operators, especially those considering AI-enabled drones, face the same challenge: how to ensure that the public and regulators trust the highly visible, and potentially threatening, experience of seeing drones in close proximity to homes and businesses with no pilot at the controls. Surveys indicate that consumers have strong opinions about drone use: 80% favor government regulation, and 75% expect to be consulted before drone deliveries begin in their communities.25

As far back as 2015, the U.S. Air Force recognized this concern, looking ahead to a time when “autonomous systems . . . will work synergistically with our airmen as part of an effective human-autonomy team.”26 Teaming with AI-enabled systems that adapt to changing conditions, such as those found in the supply chain, requires the human partner (whether user, customer, or bystander) to have confidence in the AI, a confidence that can arise only from understanding the AI’s reasoning process before, during, and after the encounter.

Prior to adopting the AI principles, DoD stood up the Joint Artificial Intelligence Center (JAIC) in 2018 with the mission of transforming DoD through the adoption, integration, and scaling of AI, including leading and developing DoD’s AI governance framework.27 While industry and academe may be suspicious of military employment of AI, the JAIC has found that “nobody [referring to commercial actors] is very far along in . . . ethics implementation for AI.”28 In fact, DoD is the first military to adopt AI ethics principles, looking to take a leading role in the responsible development and application of this critical emerging technology.29

Killer Robots

There are other reasons for organizations considering deployment of AI-enabled drones to evaluate their plans and operations within the context of the DoD AI principles. Consider the combination of consumer and commercial drones with tools like facial recognition and AI-enabled autonomous flight, which makes it possible for autonomous drones to deliver items to specific individuals.30 Suddenly, technology built for helpful package delivery or disaster relief has the same capabilities and characteristics useful to DoD, potentially transforming these tools into weapon systems. Modified civilian drones can enter denied spaces, seek targets based on facial recognition, and deliver a payload, whether that payload is the latest Amazon package or lethal force.31 The Future of Life Institute, a nonprofit organization that works to mitigate risks facing humanity, particularly risks from advanced AI, fears that such drones will evolve into “killer robots,” a topic dramatized in its groundbreaking sci-fi short film “Slaughterbots,” which depicts the murderous consequences of a world where unregulated, autonomous AI is combined with low-cost drones.32


The ability of drone operators to gain the benefits of AI-enabled systems will come down to whether the public and regulatory communities have confidence that such systems will operate in a safe and appropriate manner and are not “killer robots.” Regulatory schemes may provide a minimum set of requirements and a licensing regime; however, the technology will advance much faster than regulatory guidance can follow. As such, it is up to the organizations operating AI-enabled drones to provide, and the attorneys advising them to encourage, responsible design, development, and use in a manner that is transparent and confidence-building. If the DoD or similar AI ethical principles are seen as simply another compliance tool, organizations will fall short of the benefits the principles can provide when considered at each stage of development. Beyond the law, ethics must be part of our DNA when it comes to AI.

Major participants in the drone-delivery industry recognize, and message on, the importance of public trust: Amazon (“Safety is our top priority”) and UPS (“. . . building a full-scale drone operation based on the rigorous reliability, safety, and control requirements”). Nonetheless, safety is only one consideration in the ethical adoption of AI; public perception and simply doing the right thing are others. As organizations move toward responsible adoption, lawyers advising such clients would be well served to have them adopt AI ethical principles similar to DoD’s, which can serve as valuable guidance as both technology and regulations evolve.


1. Ethics Considerations for Law Firms Implementing AI, Law360 (Nov. 23, 2020),

2. Id.

3. Kathleen Walch, How the Department of Defense Approaches Ethical AI, Forbes (Nov. 29, 2020),

4. B.J. Copeland, Artificial Intelligence, Encyclopaedia Britannica (Aug. 11, 2020),

5. Jake Frankenfield, Machine Learning, Investopedia (Aug. 31, 2020),

6. Andrei Tiburca, AI and the Future of Drones, The Next Web (Dec. 1, 2017),

7. Wilson Pang, In 2021, Off-the-Shelf Datasets Will Be on the Rise for AI Model Development, Venture Beat (Nov. 18, 2020),

8. Ali Husain, AI Meets Drones: Detecting Objects In-Flight with Computer Vision, SkyGrid (Dec. 3, 2020),

9. Malik Murison, 4 Projects Combining Drones with AI, RIIS,

10. The United States Drone Market 2019–2024—US Commercial Drone Unit Sales Will Quadruple Between 2018 and 2024—, BusinessWire (June 18, 2019),

11. Fintan Corrigan, Drones for Deliveries from Medicine to Post, Packages and Pizza, DroneZon (July 2, 2020),

12. Walker Sands Commc’ns, Reinventing Retail: Four Predictions for 2016 and Beyond (2016),

13. Marshal Cohen, Will Holiday 2020 Be Another COVID-19 Retail Casualty?, NPD (Sept. 8, 2020),

14. Amazon Prime Air, Amazon (Dec. 7, 2016),

15. Google Completes First Drone Delivery in the US, TechXplore (Oct. 19, 2019),

16. UPS Flight ForwardTM Drone Delivery, UPS,

17. Walker Sands Commc’ns, supra note 12.

18. Connie Guglielmo, Turns Out Amazon, Touting Drone Delivery, Does Sell Lots of Products That Weigh Less Than 5 Pounds, Forbes (Dec. 2, 2013),

19. Ryan Whitwam, Amazon Completes First Prime Air Drone Delivery, ExtremeTech (Dec. 14, 2016),

20. AI Enters the Front Lines of National Defense and Security, HPC Wire (July 29, 2019),

21. Sydney J. Freedberg Jr., DOD Adopts AI Ethics Principles—but How Will They Be Enforced?, Breaking Def. (Feb. 24, 2020),

22. Id.

23. Press Release, U.S. Dep’t of Def., DOD Adopts Ethical Principles for Artificial Intelligence (Feb. 24, 2020),

24. Walch, supra note 3.

25. Inst. of Mech. Eng’rs, Public Perceptions: Drones. Survey Results 2019,

26. Off. of Chief Scientist, U.S. Air Force, AF/ST TR 15-01, Autonomous Horizons: System Autonomy in the Air Force—A Path to the Future, Vol. I: Human-Autonomy Teaming, at iv (June 2015),

27. Dana Deasy, Welcome from the DoD CIO, JAIC,

28. Freedberg, supra note 21.

29. Id.

30. Kalev Leetaru, AI Package Delivery Drones Are Just Killer Robots in Waiting, Forbes (Apr. 19, 2019),

31. Id.

32. Tiburca, supra note 6.

The material in all ABA publications is copyrighted and may be reprinted by permission only.



Frank Coppersmith is the CEO of Smarter Reality, an expert developer of custom software applications serving entrepreneurs, innovative business leaders, and nontechnical founders with product discovery, user experience design, and technology development. Frank holds an MBA from the Wharton School, a law degree from Samford University, and a degree in electrical engineering from The Citadel. He is also a reservist in the U.S. Air Force JAG Corps, where he serves as the senior reserve legal advisor to the Operations and International Law Directorate.

The views expressed in this article are those of the author and not the position of the U.S. Air Force.