The Defense Department has adopted a series of ethical principles intended to guide the development and use of artificial intelligence on and off the battlefield, including taking “deliberate steps to minimize unintended bias” and ensuring the ability to “deactivate” systems that aren’t behaving as expected.
The principles are based on the recommendations of the Defense Innovation Board, which spent 15 months consulting with AI experts in industry, government, and academia, according to a Feb. 24 DOD release.
They cover five focus areas:
- Responsible—DOD personnel must exercise “judgment and care” when deploying AI technology.
- Equitable—“The department will take deliberate steps to minimize unintended bias in AI capabilities.”
- Traceable—DOD will develop AI processes and operational methods that are transparent and auditable.
- Reliable—AI capabilities will have “explicit, well-defined uses” that are “subject to testing and assurance within those defined uses across their entire life cycles.”
- Governable—A fail-safe will be built into AI capabilities allowing the department to “disengage or deactivate deployed systems that demonstrate unintended behavior.”
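In software terms, the “governable” fail-safe amounts to a runtime guard around a deployed capability. The sketch below is purely illustrative and assumes an invented `GovernedModel` wrapper and made-up output bounds; it does not describe any actual DOD system.

```python
# Illustrative only: a wrapper that disengages a deployed model when its
# output falls outside the range established during testing and assurance.
# GovernedModel and the bounds are hypothetical names for this sketch.

class GovernedModel:
    def __init__(self, model, low, high):
        self.model = model               # the underlying capability (any callable)
        self.low, self.high = low, high  # expected output range from testing
        self.active = True

    def predict(self, x):
        if not self.active:
            raise RuntimeError("model is deactivated")
        y = self.model(x)
        # Fail-safe: deactivate on unintended behavior (out-of-range output)
        if not (self.low <= y <= self.high):
            self.active = False
            raise RuntimeError(f"unintended output {y!r}; model deactivated")
        return y


# Usage: a stand-in "model" that misbehaves on large inputs
guarded = GovernedModel(lambda x: x * 2, low=0, high=100)
print(guarded.predict(10))   # 20, within bounds
try:
    guarded.predict(500)     # yields 1000, outside bounds, so the guard trips
except RuntimeError as err:
    print(err)
print(guarded.active)        # False: the capability stays disengaged
```

The design choice mirrored here is that deactivation is one-way: once unintended behavior is observed, the system stays offline until a human re-enables it, rather than silently resuming.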
The Joint Artificial Intelligence Center will coordinate the implementation of these principles across the department.
“The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order,” Defense Secretary Mark Esper said in a release. “AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behavior.”
The Air Force, which released its own AI strategy in September 2019, has partnered with the AI accelerator at the Massachusetts Institute of Technology to “get basic research at the cutting edge,” Air Force acquisition chief Will Roper told reporters on Feb. 21.
The service already is testing AI in some of its academic settings. For example, Pilot Training Next utilizes an AI coach to more efficiently train the next generation of pilots. Air Education and Training Command and AFWERX, the service’s innovation hub, are working together to find ways to continually improve this capability so more students can take advantage of the new learning methods. The service is also looking at the technology as a way to improve its maintenance practices.
“We think having AI in a plane that is able to interface with people and other machines is something that will be really important for us,” Roper said. “I’m very confident from talking to the same researchers that we’re not going to want to have pilots out of the sky anytime soon, because AI is extremely fragile, at least this generation. There’s no adversary in its mind. Data is all credible and relevant, and of course, the second that we have AI in a drone or a cockpit, our adversaries are going to figure out how to beat it. Just like, if you have stealth, you’re going to try to figure out how to beat it.”
Roper predicted that before long a new breed of hackers will start targeting intelligence or data sets, rather than code, which is why he said there needs to be a bigger emphasis on adversarial AI.
“Hopefully by funding that research, not only will we be making the Air Force stronger, we’ll be seeding a next-generation of AI that we’re all going to need domestically and personally,” Roper said.
Editorial Director John A. Tirpak contributed to this report.