Calls to put the brakes on the use of artificial intelligence, and in some cases to ban the technology altogether, are getting louder. Now the UN's human rights chief Michelle Bachelet has joined the chorus of experts urging governments to take stronger action to keep algorithms under control, in a new report that recommends moratoriums on the sale and use of artificial intelligence for high-risk use cases.
Bachelet also advocated for a ban on some AI applications that are contrary to international human rights law, such as the social scoring of individuals based on discriminatory criteria.
Since safeguards aren't yet in place to make sure that the technology is used responsibly, said the UN commissioner, governments should rein in artificial intelligence as a matter of urgency. Without that, algorithms that are potentially detrimental to human rights will continue to be deployed with no oversight, causing more harm to citizens.
Algorithms are already present in every aspect of most people’s lives, stressed Bachelet, pointing to various systems that are currently participating in life-changing decisions such as who gets public services, who gets a job, or what users can see and share online.
“We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight and dealing with the almost inevitable human rights consequences after the fact,” said Bachelet.
The UN commissioner’s comments come off the back of a report that was carried out by her office to investigate how AI might impact human rights such as privacy, health, education, freedom of movement or freedom of expression.
While artificial intelligence was found to be, to a large extent, “a force for good” that can help societies and economies overcome huge challenges, the report also highlighted an “undeniable and steadily growing” impact of AI on certain human rights.
Algorithms, for example, can significantly intrude into users’ privacy. AI systems are fed large amounts of data, including personal data, which encourages organizations to collect, store and process sensitive information about their users or customers, sometimes in secretive ways. The systems can in turn be used to make predictions about people’s personal lives, ranging from the neighborhood they live in to their sleeping patterns.
There are several sectors where the use of those AI systems is particularly concerning, according to the report. They include law enforcement, where algorithms may influence a decision to arrest a suspect, as well as public services, where AI models can help determine everything from welfare entitlements to whether a family should be flagged for visits by childcare services.
In the workplace, the report pointed to the risk of using algorithms to monitor and manage workers. And online, AI systems can be used to support content management decisions, which is instrumental in deciding the type of content that each user is exposed to.
For many of these applications, the risk is two-fold. Not only can the AI system itself be at odds with basic human rights such as the right to privacy; but in some cases, when the algorithm is biased, the technology can also make discriminatory decisions.
For example, the data used to inform AI systems could be biased against certain groups, or it could simply be faulty, out of date or irrelevant, which in turn leads algorithms to make unfair decisions. This is how, for instance, the algorithm that was used to predict student grades in the UK last year ended up allocating higher marks to those living in wealthier areas.
There are enough examples of AI models making harmful – and sometimes plain wrong – decisions to worry. “The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real. This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks,” Bachelet said.
The UN report drew attention, in particular, to the use of algorithms by states for surveillance purposes, and especially biometric technologies like facial recognition, which can identify people in real time and from a distance, potentially enabling unlimited tracking of individuals.
There should be a moratorium on the use of biometric technologies in public spaces, says the report, until it can be demonstrated that the algorithms are accurate and that the systems do not have a discriminatory impact.
It is up to governments, therefore, to implement the right laws to keep the development of AI tools that enable surveillance and other human rights abuses under control, according to the UN commissioner.
Earlier this year, for example, the EU Commission unveiled draft rules on artificial intelligence, which catalogued AI applications based on different levels of risk. At the top of the pyramid were use cases that come with “unacceptable risk”, which violate fundamental rights and should be banned.
The use of facial recognition in public spaces, said the EU Commission, should be banned as it breaches those rights. The proposal, however, came with various exceptions that advocacy groups condemned as allowing for too many loopholes.