The U.S. government is increasingly turning to artificial intelligence to reshape its approach to global diplomacy and conflict resolution, with significant implications for national security strategy.
At the Center for Strategic and International Studies (CSIS) in Washington, D.C., researchers at the Futures Lab are pioneering AI applications in diplomatic practice with funding from the Pentagon's Chief Digital and Artificial Intelligence Office. The lab is experimenting with large language models such as ChatGPT and DeepSeek to tackle complex questions of war and peace, moving beyond AI's traditional diplomatic roles in speechwriting and administrative tasks.
One of the lab's flagship initiatives, "Strategic Headwinds," demonstrates AI's potential in peace negotiations. The program was developed by training AI models on hundreds of historical peace treaties alongside current news articles detailing negotiating positions in the Ukraine conflict. The system identifies potential areas of agreement that could lead to a ceasefire, offering diplomats data-driven insights that might otherwise remain hidden.
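The core idea behind a tool like "Strategic Headwinds" can be illustrated with a toy sketch: extract each side's negotiating positions as short statements, then score cross-side pairs for overlap to surface candidate areas of agreement. The sketch below uses simple lexical Jaccard similarity as a stand-in for the embedding- or LLM-based comparison a real system would use, and all position texts are invented for illustration; nothing here reflects the actual CSIS system.

```python
def jaccard(a: str, b: str) -> float:
    """Lexical overlap between two statements (0 = disjoint, 1 = identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def candidate_agreements(side_a, side_b, threshold=0.3):
    """Score every cross-side pair of positions; keep and rank those above threshold."""
    pairs = [(jaccard(pa, pb), pa, pb) for pa in side_a for pb in side_b]
    return sorted((p for p in pairs if p[0] >= threshold), reverse=True)

# Invented example positions -- not actual negotiating language.
side_a = [
    "prisoner exchange monitored by a neutral party",
    "ceasefire along the current line of contact",
]
side_b = [
    "full prisoner exchange monitored by international observers",
    "security guarantees before any ceasefire",
]

for score, pa, pb in candidate_agreements(side_a, side_b):
    print(f"{score:.2f}  A: {pa!r}  B: {pb!r}")
```

In this toy run, only the prisoner-exchange pair clears the threshold, which is the kind of "hidden" common ground such a system is meant to flag for human negotiators.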
"You might eventually have AIs start the negotiation themselves... and the human negotiator say, 'OK, great, now we hash out the final pieces,'" suggests Andrew Moore, an adjunct senior fellow at the Center for a New American Security, who envisions AI tools eventually simulating foreign leaders to help diplomats test crisis responses.
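Moore's idea of simulating foreign leaders maps naturally onto the role-play pattern used with chat models. The sketch below shows how such a simulation prompt might be assembled; the persona and scenario are entirely invented, and the code stops short of calling any model API (a real tool would send `messages` to a chat-completion endpoint).

```python
def build_simulation(persona: dict, scenario: str, diplomat_opening: str) -> list[dict]:
    """Assemble a chat transcript that casts the model as a foreign leader."""
    system = (
        f"You are role-playing {persona['name']}, {persona['role']}. "
        f"Known negotiating style: {persona['style']}. "
        "Stay in character and respond as this leader plausibly would. "
        f"Scenario under simulation: {scenario}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": diplomat_opening},
    ]

persona = {  # invented profile, for illustration only
    "name": "Leader X",
    "role": "head of state of a rival power",
    "style": "opens with maximalist demands, trades concessions late",
}
messages = build_simulation(
    persona,
    scenario="naval standoff over a disputed strait",
    diplomat_opening="We propose a 48-hour pause while inspectors verify cargo.",
)
print(messages[0]["role"], "->", messages[1]["role"])
```

A diplomat could then iterate against the simulated leader's replies to stress-test crisis responses before a real exchange, which is the workflow Moore envisions.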
However, these technologies face significant limitations. Andrew Reddie, founder of the Berkeley Risk and Security Lab, warns of information asymmetry: "Adversaries of the United States have a really significant advantage because we publish everything... and they do not." Nations with less open information environments could exploit this transparency disparity.
Experts also caution that AI systems struggle with novel situations. "If you truly think that your geopolitical challenge is a black swan, AI tools are not going to be useful to you," Reddie notes, highlighting AI's dependence on historical patterns.
The Defense and State departments are conducting their own AI experiments, signaling a broader institutional shift toward computational diplomacy. Benjamin Jensen of CSIS acknowledges these systems need specialized training to understand diplomatic language, citing instances where AI models misinterpreted terms like "deterrence in the Arctic" with unintentionally comic results.
As these technologies mature, policymakers face a critical choice about AI's role in American foreign policy: will it become an invaluable diplomatic assistant providing nuanced insights, or merely another digital tool with limited practical value? The answer will likely shape U.S. diplomatic strategy for decades to come.