
Google Reverses Stance: It’s Okay to Use AI to Make Weapons
Movieguide® Contributor
Google previously pledged that it wouldn’t allow its AI tech to be used to make weapons. Now, the mega company has had a change of heart.
Google now says countries should be able to use AI for their national security.
On Tuesday, James Manyika, a senior VP at Google parent Alphabet, and Sir Demis Hassabis, chief executive of the Google DeepMind AI lab, said AI should “support national security.” This change of course is driven by the “global competition taking place for AI leadership within an increasingly complex geopolitical landscape” and guided “by core values like freedom, equality, and respect for human rights.”
Google previously said it wouldn’t allow its tools to be made into “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” It said its AI shouldn’t be used for invasive surveillance or “cause overall harm.”
The company made the pledge in 2018 after thousands of staff protested Google’s work on a US military drone project. Manyika and Hassabis say that AI has “evolved rapidly” in the last seven years.
When Google filed for its initial public public offering, it included “don’t be evil” in its ethics code. When the company restructured under Alphabet, that phrase was removed.
Google has been working with Trump’s administration, and its chief executive, Sundar Pichai, even attended the new president’s inauguration last month.
This pledge reversal comes one month after the release of DeepSeek, a top-of-the-line Chinese AI chat platform, which has put pressure on the West to catch up with China’s technology. The Pentagon reported last year that Beijing viewed AI as a “revolution in military affairs” and was using AI to make “autonomous and precision-strike weapons.”
Employees have expressed concern over the pledge reversal. After the Oct. 7 massacre in Israel, many employees criticized Google’s involvement with Israel’s government, arguing its cloud technology could aid military and surveillance operations that would harm Palestinians. Google fired 28 upset employees who entered executive offices, staged a sit-in and live-streamed the event on the internet.
Geoffrey Hinton, known as one of the fathers of AI, quit working for Google in 2023. He warned AI could make humanity extinct within 10 to 20 years.
And it’s not just Google that’s walking back a commitment. Hassabis is also reversing a personal pledge. He previously put his name on a statement, along with many scientists, acknowledging that “mitigating the risk of extinction from AI should be a global priority.”