Safe and Ethical Artificial Intelligence


Inspired by how bees make collective decisions, researchers are exploring how crowdsourcing techniques may help intelligence analysts produce the best-reasoned analysis from the available data.


The collection of voiceprints on a massive scale by government departments is troubling. But potentially just as problematic is the rise of smart speakers and other devices that work through spoken commands. By definition, these listen all the time to what people say.

Robots and human rights

Computational technologies are deeply implicated in the unequal power relationships between individual citizens, the state and its agencies, and private corporations. If detached from effective national and international systems of checks and balances, they pose a real and worrying threat to our human rights.

Walking robot

Will automation, AI and robotics mean a jobless future, or will their productivity free us to innovate and explore? Is the impact of new technologies to be feared, or a chance to rethink the structure of our working lives and ensure a fairer future for all?

Crowd Thinking

When making tough decisions, humans have long sought advice from a higher power.

Combining the opinions of large numbers of people – experts and non-experts alike – can provide decision support across a wide range of problems.
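The kind of opinion-combining described above can be sketched in a few lines. The snippet below is a minimal, illustrative example only (the function names and data are invented here, not taken from any specific crowdsourcing system): a majority vote aggregates categorical judgements, and a simple mean aggregates numeric estimates, the classic "wisdom of crowds" effect.

```python
from collections import Counter
from statistics import mean

def majority_vote(answers):
    """Return the most common answer among crowd responses."""
    counts = Counter(answers)
    winner, _ = counts.most_common(1)[0]
    return winner

def aggregate_estimate(estimates):
    """Average numeric estimates from many independent guessers."""
    return mean(estimates)

# Hypothetical crowd responses to a categorical question.
votes = ["option A", "option B", "option A", "option A", "option B"]
print(majority_vote(votes))  # the most frequent choice wins

# Hypothetical numeric guesses (e.g. "how many items are in the jar?").
guesses = [980, 1050, 1100, 995, 1025]
print(aggregate_estimate(guesses))
```

Real decision-support systems layer much more on top of this sketch, such as weighting responses by each contributor's track record, but the core idea is that independent errors tend to cancel out when many judgements are pooled.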


To be responsible for how it develops and deploys AI, Google needs to move beyond its current tentative language about encouraging architectures of privacy and ensuring "appropriate human direction and control" without explaining who decides what is appropriate, and on what basis. It needs to embed human rights into the design of AI, and to incorporate safeguards such as human rights impact assessments and independent oversight and review processes into its principles.