Google has gone back on a previous pledge not to use AI for weapons and surveillance, The Washington Post reported on Tuesday. The 2018 list of "applications we will not pursue" has been deleted from the company's AI principles.
"As recently as Jan. 30 [the list of banned applications] included weapons, surveillance, technologies that 'cause or are likely to cause overall harm,' and use cases contravening principles of international law and human rights, according to a copy hosted by the Internet Archive," the Post reported.
When asked for comment, a Google spokesperson directed TheWrap to a blog post from the company's head of AI, Demis Hassabis, and its SVP for technology and society, James Manyika, which promises transparency around the company's latest AI developments.
“We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights. And we believe that companies, governments and organizations sharing these values should work together to create AI that protects people, promotes global growth and supports national security,” the blog reads.
Google’s updated AI principles state that the company will use human oversight to make sure the use of its technology conforms to “widely accepted principles of international law and human rights.”
The company first published its AI principles in 2018 after employees protested a Pentagon contract under which Google's computer vision algorithms were used to analyze drone footage. The contract was not renewed.
Thousands of employees signed a letter addressed to CEO Sundar Pichai that stated, “We believe that Google should not be in the business of war.”