191024-N-II672-0035. Samuel Tangredi, professor, U.S. Naval War College (NWC), delivers an opening speech during the “Beyond the Hype: Artificial Intelligence in Naval and Joint Operations” conference at NWC, Oct. 24. The two-day event focused on the applications, design and potential problems of artificial intelligence in warfighting. (U.S. Navy photo by Mass Communication Specialist 2nd Class Tyler D. John/Released)

This story originally appeared on www.usnwc.edu, where you can see the original story and read more from the Naval War College.

Scholars reached “beyond the hype” to discuss artificial intelligence as a tool for the American military during a U.S. Naval War College conference Oct. 24-25.

Who is accountable when an autonomous warship fires its weapons? How can the federal government harness “big data” in a meaningful way? What are the nation’s adversaries up to, and how can the U.S. compete?

A group of academics from across the military and the private sector examined those and other questions at a conference called “Beyond the Hype: Artificial Intelligence in Naval and Joint Operations.” The event was organized under the auspices of the college’s Institute for Future Warfare Studies and its newly created Leidos Chair for Future Warfare Studies.

Some of the presenters are contributing work toward the upcoming book “AI at War: How Big Data, Artificial Intelligence and Machine Learning Are Changing Naval Warfare.” Sam Tangredi, who is co-editing the book for the Naval Institute Press, coordinated the conference.

Provost Lewis Duncan told the audience that artificial intelligence is a concept that has delivered less than expected.

“That’s why it’s important to have workshops like this one where we look at how we move beyond just the promises and talk about the actual applications that AI will bring to the military warfighting world,” Duncan said in his welcome address.

Tangredi, who is director of the Institute for Future Warfare Studies, said history shows that people get excited about a new technology well before it is ready for use.

“If you read any of our professional journals, you read articles that say, ‘Whoever has AI is going to rule the world! Legacy systems are dead!’” Tangredi told the audience.

“I think we are heading toward the Department of Defense and others starting to realize that this might not be so easy. It’s going to be hard to apply,” he said.

Tangredi charged the group with helping the Defense Department recognize that artificial intelligence holds great promise but, “There’s a lot of hard work before it actually has practical applications.”

In a panel discussion on applying artificial intelligence to warfighting, Lt. Cmdr. Connor McLemore suggested that automation of weapons or navigation systems will become necessary for the United States to compete in complicated scenarios.

“The real question to me is, who is going to be accountable for the consequences of the actions of these AI systems?” said McLemore, an E-2C naval flight officer who did graduate work on and taught operations research at the Naval Postgraduate School.

He said the likely answer is that a person who understands the system will need to remain “in the loop” for accountability.

However, McLemore added, “Another tension here is, if you have an opponent who is using AI systems, then if you slow or simplify your systems to make them understandable (to human decision-makers), the opponent may gain an advantage against you.”

McLemore posed the idea that some situations will call for what he described as “unrestrained AI,” which is automation that is allowed to make decisions faster than a human can follow in real time.

“If you are in a situation where maybe things are going south, and the humans have lost situational awareness, depending on the operational environment, maybe you want unrestrained AI, but you know that you are not going to be competitive otherwise,” he said.

In a panel on the problems of using artificial intelligence for warfighting, Naval War College research professor William Glenney discussed the idea of teaching a computer how to employ the strategic concepts of iconic naval thinker Alfred Thayer Mahan.

Glenney said his research has found that you can’t create a “Mahan in a Box,” in part because the rules of war are sometimes murky and even the definition of success can be unclear.

“Current decision aids featuring AI, such as photo recognition, may help win a battle but will come up short in assisting the operational commander to win the campaign or the war,” Glenney told the group.

“Metaphorically, no one yet has figured out how to put Alfred Thayer Mahan in a box.”

In the closing keynote address, former Deputy Secretary of Defense Robert Work told the group that he is concerned that when discussing AI, military leaders are becoming “enthralled by the technology and technologists.”

Work echoed other speakers when he said he sees artificial intelligence as just a means to pursue the wider application of autonomy and autonomous systems. But he also suggested ways to get at what’s been described as a widening gap between U.S. application of artificial intelligence and that of China.

Work said a centralized effort – such as a bureau of autonomy or an advanced technology panel – is probably called for.

“You are never going to do this by decentralizing innovation. You are never going to get urgent change at significant scale,” he said.

Looking ahead, Tangredi said the discussions in the conference’s working groups will have an impact on the forthcoming “AI at War” book, which is scheduled for publication next year.

Jeanette Steele, U.S. Naval War College Public Affairs
