Google says it will continue to work with the military on AI, but won't get involved in weapons or spying
Google has published new guidelines describing its principles for the development of artificial intelligence.
In the document -- entitled "Artificial Intelligence at Google: Our Principles" -- the company sets out its objectives for the future of AI. As well as saying it will incorporate its privacy principles into AI, Google says that it will continue to work with the military, but commits to avoid helping with the development of weapons or other technologies that could injure or kill.
Last month, Google faced criticism for its involvement in Project Maven -- a Pentagon program using AI to analyze drone footage. It is against this backdrop that Google has released its artificial intelligence guidelines and principles.
Tackling the issue of military work, CEO Sundar Pichai says:
We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.
Setting out its principles for all to see, Google says that it wants to work on artificial intelligence projects that are socially beneficial, avoid creating or reinforcing unfair bias, are built and tested for safety, are accountable to people, incorporate privacy design principles, and uphold high standards of scientific excellence.
The company also says that it will strive to limit potentially harmful or abusive applications. In its declaration, Google lists four areas in which it will not develop AI technology:
- Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.