AI Ethics Under Fire: OpenAI's Deal with the US Military Raises Concerns and Controversy
The world of artificial intelligence just got a little more complicated. BBC technology reporters Chris Vallance and Laura Cress report that OpenAI is reevaluating its agreement with the US military after facing intense backlash.
OpenAI, a leading AI research organization, initially struck a deal with the US government to use its cutting-edge technology in classified military operations. However, the company now admits the deal was 'opportunistic and sloppy', a statement that raises crucial questions about the ethical use of AI in warfare and the balance of power between governments and private companies.
In a statement on Saturday, OpenAI said (https://openai.com/index/our-agreement-with-the-department-of-war/) that its agreement with the Pentagon had more safeguards than any previous AI deployment agreement, including those of its competitor, Anthropic. But the story doesn't end there. On Monday, OpenAI's chief executive, Sam Altman, revealed on X that further changes were underway. The amendments ensure the company's systems won't be used for domestic surveillance of US citizens, and require intelligence agencies such as the NSA to modify their contracts before accessing OpenAI's technology.
Altman acknowledged that the company rushed the initial announcement, saying, 'The issues are super complex, and demand clear communication.' He admitted that their haste made the deal appear opportunistic and poorly planned.
The backlash from users was swift. Following OpenAI's partnership announcement with the Department of Defense, data from Sensor Tower showed a significant surge in ChatGPT uninstalls, with the daily uninstall rate jumping 200% above its normal level.
While OpenAI faces criticism, its rival Anthropic is gaining traction. Anthropic's AI model, Claude, soared to the top of Apple's App Store rankings and remains there as of Tuesday (https://apps.apple.com/us/iphone/charts). Interestingly, Claude had previously been blacklisted by the Trump administration over Anthropic's refusal to compromise on its principle of not creating fully autonomous weapons.
But here's where it gets controversial: despite the blacklist, Claude has been used in the US-Israel war with Iran, as reported by CBS News (https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-s/). This revelation raises questions about the effectiveness of corporate red lines and the potential for AI to be used in ways companies may not intend.
The Pentagon's silence on its dealings with Anthropic adds another layer of intrigue to this story. Meanwhile, the use of AI in the military is multifaceted. It ranges from optimizing logistics to rapidly analyzing vast amounts of data. For instance, Palantir, an American company, provides data analytics tools to governments, including the US, Ukraine, and NATO, for intelligence gathering, surveillance, counterterrorism, and military operations.
The UK Ministry of Defence recently signed a £240 million contract with Palantir for its AI-powered defense platform, Maven. This platform integrates a vast array of military data, from satellite imagery to intelligence reports, which can then be analyzed by commercial AI systems like Claude to aid in strategic decision-making, as explained by Louis Mosley, head of Palantir's UK operations.
However, AI language models have their limitations. They can make errors or even generate false information, a phenomenon known as 'hallucinating.' Lieutenant Colonel Amanda Gustave, chief data officer for NATO's Task Force Maven, emphasized the importance of human oversight, stating that AI decisions would always be subject to human review.
Palantir's stance on autonomous weapons differs from Anthropic's. While they don't advocate for a complete ban, they emphasize the need for human involvement in the decision-making process. With Anthropic now out of the Pentagon's favor, Professor Mariarosaria Taddeo of Oxford University warns that the most safety-conscious player in the game has been sidelined, raising concerns about the future of AI ethics in military applications.
This story is part of the BBC's AI Unpacked week, where we delve into the world of AI and its implications. To explore more, visit AI Unpacked (https://www.bbc.co.uk/topics/cx2408k997jt).
What do you think about the use of AI in military operations? Are companies like OpenAI and Anthropic doing enough to ensure ethical AI deployment? Share your thoughts and join the conversation!