With the rapid development of artificial intelligence technology, leading AI developers such as OpenAI and Anthropic are walking a fine line in their work with the U.S. military: making the Pentagon more efficient while ensuring their AI technologies are not used in lethal weapons.

Dr. Radha Plumb, the Pentagon's Chief Digital and AI Officer, told TechCrunch in an interview that AI is not currently being used in weapons, but it gives the Department of Defense a significant advantage in identifying, tracking, and assessing threats.

Dr. Plumb said the Pentagon is accelerating its execution of the "kill chain," the process of identifying, tracking, and eliminating threats using a complex array of sensors, platforms, and weapon systems. Generative AI is showing promise in the planning and strategizing phases of the kill chain; she noted that it can help commanders respond to threats quickly and effectively.

In recent years, the relationship between the Pentagon and AI developers has become increasingly close. In 2024, companies like OpenAI, Anthropic, and Meta relaxed their usage policies, allowing U.S. intelligence and defense agencies to use their AI systems, while still prohibiting these AI technologies from being used to harm humans. This shift has led to rapid collaboration between AI companies and defense contractors.

For example, Meta reached agreements in November with companies including Lockheed Martin and Booz Allen to bring its Llama AI models to the defense sector, and Anthropic has formed a similar partnership with Palantir. Although the technical details of these collaborations remain unclear, Dr. Plumb indicated that using AI even in the planning phase may conflict with the usage policies of several leading developers.

There has been intense debate in the industry over whether AI weapons should be allowed to make life-and-death decisions. Palmer Luckey, founder of Anduril, has noted that the U.S. military has a long history of procuring autonomous weapon systems. Dr. Plumb, however, rejected the idea of fully autonomous weapons, emphasizing that a human must always be involved in the decision to use force.

She argued that the notion of automated systems independently making life-and-death decisions is overly binary; the reality is far more complex. The Pentagon's AI systems operate as a collaboration between humans and machines, with senior leaders involved throughout the decision-making process.

Key Points:

🌐 AI is giving the Pentagon a significant advantage in identifying, tracking, and assessing threats, making military decision-making more efficient.  

🤝 Collaboration between AI developers and the Pentagon is growing closer, but developers remain firmly committed to preventing their AI from being used to harm humans.  

🔍 Debate continues over whether AI weapons should be able to make life-and-death decisions, with the Pentagon emphasizing that a human will always be involved.