Charting a Course Through the Facial Recognition Debate

Jan 21, 2020

Our 3 December Data Policy Network evening explored the legal and policy implications of facial recognition technology and other algorithmic modelling based on biometric data. These technologies evoke a powerful emotional response, throwing the tension between innovation and public trust into sharp relief. We were lucky to have Olivia Varley-Winter of the Ada Lovelace Institute take us through the Institute's work on public perceptions and its recent report.

Facial recognition is an emotive example of technology provoking public debate, and it raises difficult privacy questions. Unlike other forms of biometric data (e.g., fingerprints, DNA), facial data can be collected at a distance, without an individual's knowledge or consent. And unlike a password, biometric identifiers cannot be changed as a precaution or in response to a data breach. Some of the most controversial uses have involved 'live facial recognition', where processing to match individuals to a watchlist happens in real time. This can reduce the scope for checks and balances (e.g., having the match confirmed by a human operator).

Charting a course through the debate around risks and benefits requires clear, structured reflection. We think the best approach to understanding the impact of facial recognition is to focus on the specific use case rather than the technology in general, to take account of privacy by design, and to ensure that a diverse set of views is considered.

Focusing on the specific use case allows the risks and benefits to be identified clearly. They will vary depending on what the technology is being asked to do, which we'll call the tasking, and the context in which the technology is deployed.

On tasking, Jenny Brennan, also of the Ada Lovelace Institute, sets out a taxonomy in her compelling blog post. [1] She groups tasks into five categories:

- Detection: identifying a face in an image
- Clustering: grouping images containing a specific face
- Matching: comparing a face against a list of 'persons of interest'
- Verifying: a one-to-one match to confirm a user's identity
- Classifying: inferring an attribute (e.g., gender or age) from an image of a face

(The short sketch at the end of this section illustrates the difference between matching and verifying.)

Those tasks can be performed in a range of contexts. For example, passport checks at an airport aim to verify a user's identity in a highly controlled setting: airport staff can ask users to remove hats and glasses, users stand still in a well-lit location, and human immigration officials supervise the system. For a deeper dive on the airport use case, we recommend Jonathan Cantor's talk, which explores the US Department of Homeland Security's deployment at the US border.

Clearly articulating benefits and risks helps organisations deploying the technology engage the public in a dialogue on acceptability. Leading organisations will want to engage proactively, recognising that users may have privacy concerns, seeking to address them by explaining the controls in place, and providing options to empower users who still feel uncomfortable.

A recent survey [2] by the Ada Lovelace Institute (Ada) found support for use cases where there is a clear public benefit and where appropriate safeguards are in place. In contrast, the survey identified very low levels of support for other use cases, including tracking or monitoring individuals and assisting decision-making in a commercial context.

Source: Ada Lovelace Institute, Beyond face value: public attitudes to facial recognition technology, September 2019
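To make the taxonomy above concrete, here is a minimal sketch of the difference between verifying (a one-to-one comparison) and matching (a one-to-many watchlist search). It assumes a hypothetical face-recognition model that reduces each face image to a fixed-length embedding vector; the similarity measure, threshold, and names are illustrative, not any specific vendor's implementation.

```swift
import Foundation

// Hypothetical representation: a face-recognition model reduces each face
// image to a fixed-length embedding vector. All names and the threshold
// below are illustrative.
typealias FaceEmbedding = [Double]

// Cosine similarity between two embeddings (1.0 = identical direction).
func similarity(_ a: FaceEmbedding, _ b: FaceEmbedding) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let magA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let magB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    return dot / (magA * magB)
}

let acceptanceThreshold = 0.8  // illustrative; real systems tune this carefully

// Verifying: a one-to-one comparison against a single enrolled template,
// e.g. confirming that the passport holder is the person at the gate.
func verify(probe: FaceEmbedding, enrolled: FaceEmbedding) -> Bool {
    similarity(probe, enrolled) >= acceptanceThreshold
}

// Matching: a one-to-many search against a watchlist. Every face the
// system sees is compared against all of the watchlist templates.
func match(probe: FaceEmbedding, watchlist: [String: FaceEmbedding]) -> String? {
    watchlist
        .map { (name: $0.key, score: similarity(probe, $0.value)) }
        .filter { $0.score >= acceptanceThreshold }
        .max { $0.score < $1.score }?
        .name
}
```

The asymmetry is the point: verification compares one person against one template they chose to enrol, while matching compares everyone who passes the camera against an entire watchlist, which is why live facial recognition attracts the sharpest scrutiny.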
Ada's work shows that safeguards are a key element in building public acceptance. Privacy by design can help. The list of tasks above shows that not every use case requires 'recognition' in the sense of identifying an individual. For example, a system designed for age verification may not need to store data or identify the individual whose age is being verified.

In other examples, a trusted third party retains access to the facial data. Apps on an iPhone can authenticate users via Face ID but do not have access to Face ID data; the app is only notified as to whether authentication succeeded. This means a user only has to trust Apple's privacy features, [3] not individual app developers. (The sketch at the end of this post illustrates the pattern.) These examples show that privacy can be designed in at both the technical level and the level of the business model built on the technology.

Facial recognition can also borrow from advanced thinking on privacy as it relates to other biometric technologies, whether emerging or well established. Many organisations already use biometric data – from behavioural biometrics for fraud detection to the better known DNA or fingerprint analysis in policing and security. [4] Biometric data is already considered "special category" data, subject to stricter processing rules under the GDPR and additional safeguards under the UK Data Protection Act 2018. [5] The Home Office's Biometrics Strategy recognises that the increased use of biometrics can raise "significant issues of public trust". [6]

As with other emerging technologies, we welcome engagement from a broad range of stakeholders. Ada's work is one example; engagement ranges from the ICO issuing an Opinion [7] to the courts taking a view [8] on the policing use case. We believe these contributions will allow us to frame the debate in a more helpful way, avoiding confusion and generalisations.

This is particularly important in the context of a desire to regulate. On taking office, European Commission President von der Leyen pledged to outline an EU approach to AI by March 2020. [9] A leaked Commission white paper, which included the option of a "time-limited ban on the use of facial recognition technology in public spaces", has been widely reported. [10] Similarly, the City of San Francisco's ordinance [11] restricting the use of facial recognition has also been reported as a ban. [12] But on closer inspection, neither the San Francisco ordinance nor the Commission white paper (assuming the leak is accurate) actually proposes a blanket ban. Does this mean regulators agree that targeting specific use cases can be more effective? Time will tell.
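As a concrete illustration of the Face ID pattern described above, the sketch below uses Apple's LocalAuthentication framework, the public API for requesting biometric authentication on iOS. The function name and fallback behaviour are our own illustrative choices; the key point is that the app receives only a pass/fail result and never touches the biometric data itself.

```swift
import LocalAuthentication

// A sketch of the trusted-third-party pattern: the app asks the operating
// system to authenticate the user and receives only a success/failure
// result. The facial data stays with the platform; the app cannot read it.
func unlockSensitiveFeature(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // Check that biometric authentication is available on this device.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                    error: &error) else {
        completion(false)  // illustrative fallback: treat as not authenticated
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your account") { success, _ in
        // The reply exposes only whether authentication succeeded.
        completion(success)
    }
}
```

This division of labour is what lets a user trust Apple's handling of the biometric template rather than every individual developer: privacy enforced by the platform's design, not by each app's good behaviour.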
[1] Jenny Brennan, Facial recognition: defining terms to clarify challenges, 13 November 2019
[2] Ada Lovelace Institute, Beyond face value: public attitudes to facial recognition technology, 2 September 2019
[3] Apple, About Face ID advanced technology
[4] International Biometrics & Identity Association, Behavioral Biometrics
[5] ICO, Special category data
[6] Home Office, Biometrics Strategy, June 2018
[7] ICO, Blog: Live facial recognition technology – police forces need to slow down and justify its use
[8] R (Bridges) v CCSWP and SSHD [2019] EWHC 2341 (Admin)
[9] Atlantic Council, Von der Leyen, new Commission take aim at AI legislation, 28 October 2019
[10] Reuters, EU mulls five-year ban on facial recognition tech in public areas, 16 January 2020
[11] City and County of San Francisco, Administrative Code – Acquisition of Surveillance Technology, Ordinance No. 190110
[12] Vox, San Francisco's facial recognition technology ban, explained, 14 May 2019