In October 2023, Microsoft proposed using the DALL-E image generator developed by OpenAI to help the US Department of Defense design software for military operations, according to a report published by The Intercept the following April.
According to the report, the revelation came a few months after OpenAI quietly lifted its ban on the use of its technologies for military activities, without any official announcement from the company.
Microsoft has invested more than $10 billion in OpenAI, and its name has recently become closely tied to the startup's artificial intelligence technologies.
The Microsoft presentation, titled "Generative Artificial Intelligence with Department of Defense Data" and published by The Intercept, details how the Pentagon could use OpenAI's machine learning tools, including the chatbot ChatGPT and the image generator DALL-E, for tasks such as document analysis and machine maintenance support.
The documents were drawn from a larger collection of materials Microsoft presented at a US Department of Defense training seminar on understanding and teaching artificial intelligence technologies, organized by a US Air Force unit in Los Angeles in October 2023.
The seminar featured presentations from companies specializing in machine learning, including Microsoft and OpenAI, on the capabilities they could offer the Pentagon.
The documents were publicly posted on the website of Alethia Labs, a nonprofit consultancy that advises the federal government on technology, where journalist Jack Poulson discovered them, according to The Intercept.
Alethia Labs has worked extensively with the Pentagon to speed the integration of artificial intelligence technologies into its arsenal, and since last year the firm has held a contract with the Pentagon's artificial intelligence office.
One of Microsoft's presentation slides lists "common" federal uses of OpenAI technology, including uses in the military sector.
One entry under the heading "Training Advanced Computer Vision Systems" reads: "Battle Management Systems: using DALL-E models to generate images to train battle management systems."
As the name suggests, a battle management system is a software suite that helps military commanders direct and control their forces, providing a comprehensive view of the battlefield to coordinate operations such as artillery fire, air strikes, and troop movements in combat zones, according to the report.
The reference to computer vision training implies that images generated by DALL-E could help Pentagon computers better "see" conditions on the battlefield, a decisive advantage in identifying and destroying targets.
The presentation files do not say exactly how DALL-E would be used in battle management systems on the battlefield, but training such systems could involve using DALL-E to supply the Pentagon with synthetic training data: artificially generated imagery that closely resembles real-world scenes.
For instance, vast quantities of fake aerial images produced by DALL-E could be fed to military software designed to detect enemy targets on the ground, with the aim of improving the software's ability to recognize such targets in the real world.
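To make the idea of synthetic training data concrete, here is a minimal, purely illustrative sketch (not taken from the documents): a stand-in generator plays the role of an image model like DALL-E, producing fake "aerial images" with or without a bright patch standing in for a target, and a simple logistic-regression detector is trained entirely on that synthetic imagery. Every name and number here is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_aerial_image(with_target: bool) -> np.ndarray:
    """Illustrative stand-in for a generative model: a noisy 16x16
    'aerial image', optionally containing a bright square 'target'."""
    img = rng.normal(0.0, 1.0, size=(16, 16))
    if with_target:
        r, c = rng.integers(0, 12, size=2)
        img[r:r + 4, c:c + 4] += 4.0  # bright patch simulating a target
    return img

# Build a training set made entirely of synthetic images
X = np.array([synthetic_aerial_image(i % 2 == 0).ravel() for i in range(400)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(400)])

# Train a minimal logistic-regression "target detector" by gradient descent
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(200):
    z = np.clip(X @ w + b, -30, 30)        # clip logits to avoid overflow
    p = 1.0 / (1.0 + np.exp(-z))           # predicted target probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)    # gradient step on weights
    b -= 0.5 * np.mean(p - y)              # gradient step on bias

# Evaluate on fresh synthetic images the detector has never seen
X_test = np.array([synthetic_aerial_image(i % 2 == 0).ravel() for i in range(100)])
y_test = np.array([1 if i % 2 == 0 else 0 for i in range(100)])
z_test = np.clip(X_test @ w + b, -30, 30)
pred = (1.0 / (1.0 + np.exp(-z_test)) > 0.5).astype(int)
accuracy = float(np.mean(pred == y_test))
print(f"detector accuracy on held-out synthetic images: {accuracy:.2f}")
```

The sketch only shows the training loop's shape; real systems would use deep networks and far more realistic generated imagery, which is precisely where the accuracy concerns raised later in this article come in.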
At an event held last month by the Center for Strategic and International Studies, US Navy Captain M. Xavier Lugo suggested just such a military application of synthetic data like that produced by DALL-E: using synthetic images to train drones to better see and understand the world beneath them.
Lugo, who leads the Pentagon's generative artificial intelligence task force and is a member of the Department of Defense's Chief Digital and Artificial Intelligence Office, is listed as a contact at the end of the Microsoft presentation file.
The US Air Force is currently developing an advanced battle management system as part of the Pentagon's larger Joint All-Domain Command and Control (JADC2) project, which aims to network all branches of the US military and use AI-assisted analytics to improve their ability to fight.
Through this project, the Pentagon envisions a future in which data on enemy forces flows easily between Air Force drones, Navy warship radars, Army tanks, and Marines on the ground, improving their ability to destroy targets.
On April 3 of last year, US Central Command announced that it had begun using elements of the project in the Middle East.
However, even setting aside the ethical questions, the effectiveness of this approach is debatable, according to Heidy Khlaaf, a machine learning safety engineer who previously worked with OpenAI. "It is well known that a model's accuracy and its ability to process data correctly deteriorate every time it is trained on AI-generated content," she says.
She argues that DALL-E's images fail to reflect reality accurately even before they reach a combat system: these models cannot reliably render details of the human body such as limbs or fingers, so how can they be trusted to get real-life details right on a battlefield?
(Source: The Intercept)