The tech giant's parent company, Alphabet, had previously vowed never to develop technologies that "cause or are likely to cause overall harm".
However, the guidelines have now been amended, and the pledge has been removed.
The section also stated that Google would not pursue applications "that gather or use information for surveillance violating internationally accepted norms."
The company now says it will apply “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.”
The AI principles were first published in 2018. Google senior vice president James Manyika and Sir Demis Hassabis, who leads the firm's AI lab, Google DeepMind, explained that with the rise in the use of artificial intelligence, the guidelines were long overdue an update.
In a blog post, they said: “Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications.
“It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers.”
Google, which previously faced criticism over its controversial $1.2 billion cloud computing and AI agreement with the Israeli government, said it needs to work with governments and organisations to "create AI that protects people, promotes global growth, and supports national security".
The pair added: “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.
“And we believe that companies, governments, and organisations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”