A group of 51 digital rights organizations has called on the European Commission to impose a complete ban on the use of facial recognition technologies for mass surveillance – with no exceptions allowed.
The coalition, which comprises activist groups from across the continent such as Big Brother Watch UK, AlgorithmWatch and the European Digital Society, was coordinated by the advocacy network European Digital Rights (EDRi), which delivered the call in the form of an open letter to the European commissioner for Justice, Didier Reynders.
It comes just weeks before the Commission releases much-awaited new rules on the ethical use of artificial intelligence on the continent on 21 April.
The letter urges the Commissioner to support enhanced protection for fundamental human rights in the upcoming laws, in particular in relation to facial recognition and other biometric technologies, when these tools are used in public spaces to carry out mass surveillance.
According to the coalition, no use of facial recognition for the purpose of mass surveillance can justify the harm it might cause to individuals’ rights, such as the rights to privacy, data protection, non-discrimination and free expression.
The technology is often defended as a reasonable tool to deploy in some circumstances, such as monitoring the public in the context of law enforcement, but the signatories to the letter argue that a blanket ban should instead be imposed on all potential use cases.
“Wherever a biometric technology entails mass surveillance, we call for a ban on all uses and applications without exception,” Ella Jakubowska, policy and campaigns officer at EDRi, tells ZDNet. “We think that any use that is indiscriminately or arbitrarily targeting people in a public space is always, and without question, going to infringe on fundamental rights. It’s never going to meet the threshold of necessity and proportionality.”
Based on evidence from within and beyond the EU, EDRi has concluded that the unfettered development of biometric technologies to monitor citizens has severe consequences for human rights.
It has been reported that in China, for instance, the government is using facial recognition to carry out mass surveillance of the Muslim Uighur population living in Xinjiang, through gate-like scanning systems that record biometric features, as well as smartphone fingerprints to track residents’ movements.
But worrying developments of the technology have also occurred much closer to home. Recent research coordinated by EDRi found examples of controversial deployments of biometric technologies for mass surveillance across the vast majority of EU countries.
They range from using facial recognition for queue management in Rome and Brussels airports, to German authorities using the technology to surveil G20 protesters in Hamburg. The European Commission also provided a €4.5 million ($5.3 million) grant to deploy a technology dubbed iBorderCtrl at some European border controls, which picked up on travelers’ gestures to detect those who might be lying when trying to enter an EU country illegally.
In recent months, however, some top EU leaders have shown support for legislation that would limit the scope of facial recognition technologies. In a white paper published last year, in fact, the bloc stated that it would consider banning the technology altogether.
The EU’s vice-president for digital, Margrethe Vestager, has also said that using facial recognition tools to identify citizens automatically is at odds with the bloc’s data protection regime, given that it doesn’t meet one of the GDPR’s key requirements: obtaining an individual’s consent before processing their biometric data.
This won’t be enough to stop the technology from interfering with human rights, according to EDRi. The GDPR leaves space for exemptions when “strictly necessary”, which, coupled with poor enforcement of the rule of consent, has led to examples of facial recognition being used to the detriment of EU citizens, such as those uncovered by EDRi.
“We have evidence of the existing legal framework being misapplied and having enforcement problems. So, although commissioners seem to agree that in principle, these technologies should be banned by the GDPR, that ban doesn’t exist in reality,” says Jakubowska. “This is why we want the Commission to publish a more specific and clear prohibition, which builds on the existing prohibitions in general data protection law.”
EDRi and the 51 organizations that have signed the open letter join a chorus of activist voices that have demanded similar action in the last few years.
Over 43,500 European citizens have signed a “Reclaim Your Face” petition calling for a ban on biometric mass surveillance practices in the EU; and earlier this year, the Council of Europe also called for some applications of facial recognition to be banned, where they have the potential to lead to discrimination.
Pressure is mounting on the European Commission, therefore, ahead of the institution’s publication of new rules on AI that are expected to shape the EU’s place and relevance in what is often described as a race against China and the US.
For Jakubowska, however, this is an opportunity to seize. “These technologies are not inevitable,” she says. “We are at an important tipping point where we could actually prevent a lot of future harms and authoritarian technology practices before they go any further. We don’t have to wait for huge and disruptive impacts on people’s lives before we stop it. This is an incredible opportunity for civil society to interject, at a point where we can still change things.”
As part of the open letter, EDRi has also urged the Commission to carefully review the other potentially dangerous applications of AI, and draw some red lines where necessary.
Among the use cases flagged as potentially problematic are technologies that might impede access to healthcare, social security or justice; systems that make predictions about citizens’ behaviors and thoughts; and algorithms capable of manipulating individuals, presenting a threat to human dignity, agency and collective democracy.