Jan 21, 2020
Our 3 December Data Policy Network evening explored the legal and policy implications of facial recognition technology and other algorithmic modelling based on biometric data. These technologies evoke a powerful emotional response, throwing the tension between innovation and public trust into sharp relief. We were lucky to have Olivia Varley-Winter of the Ada Lovelace Institute take us through the Institute’s work on public perceptions and its recent report.
Facial recognition technology is an emotive example of technology provoking public debate. It raises difficult privacy questions. Unlike other forms of biometric data (e.g., fingerprints, DNA), facial data can be collected at a distance, without an individual’s knowledge or consent. Unlike a password, biometric identifiers cannot be changed as a precaution or in response to a data breach. Some of the most controversial uses have involved ‘live facial recognition’, where processing to match individuals to a watchlist happens in real time. This could reduce the scope for checks and balances (e.g., having the match confirmed by a human operator).
Charting a course through the debate around risks and benefits requires clear, structured reflection. We think the best approach to understanding the impact of facial recognition is to focus on the specific use case rather than the technology in general, to take account of privacy by design, and to ensure that a diverse set of views is considered.
A clear focus on the specific use case allows the risks and benefits to be identified precisely. They will vary depending on what the technology is being asked to do, which we’ll call the tasking, and on the context in which the technology is deployed.
On tasking, Jenny Brennan, also of the Ada Lovelace Institute, sets out a taxonomy in her compelling blog post, grouping tasks into five categories.
Those tasks can be performed in a range of contexts. For example, passport checks at an airport aim to verify a user’s identity in a highly controlled setting: airport staff can ask users to remove hats and glasses, users stand still in a well-lit location, and human immigration officials supervise the system. For a deeper dive on the specific airport use case, we recommend Jonathan Cantor’s talk, which explores the US Department of Homeland Security’s deployment at the US border.
Clearly articulating benefits and risks helps organisations deploying the technology engage the public in a dialogue on acceptability. Leading organisations will want to engage proactively: recognising that users may have privacy concerns, addressing those concerns by explaining the controls in place, and providing options to empower users who still feel uncomfortable.
A recent survey by the Ada Lovelace Institute (Ada) found support for use cases where there is a clear public benefit and where appropriate safeguards are in place. In contrast, the survey identified very low levels of support for other use cases, including tracking or monitoring individuals and assisting decision-making in commercial contexts.
Source: Ada Lovelace Institute, Beyond face value: public attitudes to facial recognition technology, September 2019
Ada’s work shows that safeguards are a key element in building public acceptance. Privacy by design can help. The range of tasks above shows that not every use case requires ‘recognition’ in the sense of identifying an individual. For example, a system designed for age verification may not need to store data or identify the individual whose age is being verified.
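To make that data-minimising design concrete, here is a minimal sketch in Swift. The AgeEstimator interface and verifyAge function are hypothetical names invented for this illustration, not a real library API; the point is that only a pass/fail flag leaves the check, while the image and the raw age estimate are discarded.

```swift
import Foundation

// Hypothetical sketch of an age check that never stores or identifies a face.
// AgeEstimator is an assumed interface, not a real library API.
protocol AgeEstimator {
    func estimateAge(from imageData: Data) -> Int
}

struct AgeCheckResult {
    let isOverThreshold: Bool  // the only value that leaves the check
}

func verifyAge(imageData: Data, threshold: Int, estimator: AgeEstimator) -> AgeCheckResult {
    let estimatedAge = estimator.estimateAge(from: imageData)
    // The image and the raw age estimate go out of scope here and are never
    // persisted, so no identity is created or retained by the system.
    return AgeCheckResult(isOverThreshold: estimatedAge >= threshold)
}
```

Designed this way, the system answers the only question the use case requires without ever building a record of who was checked.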
In other examples, a trusted third party retains access to the facial data. Apps on an iPhone can authenticate users via Face ID but do not have access to the Face ID data itself; the app is only notified as to whether authentication succeeded. This means a user only has to trust Apple’s privacy features, not individual app developers. These examples show that privacy can be designed in both at the technical level and in the business model built on the technology.
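That separation is visible in Apple’s LocalAuthentication framework: the app asks the operating system to authenticate and receives only a Boolean outcome. A minimal sketch (the function name unlockWithBiometrics is our own illustration):

```swift
import LocalAuthentication

// Minimal sketch: the app requests biometric authentication and receives
// only a success flag; the face data never leaves the operating system.
func unlockWithBiometrics() {
    let context = LAContext()
    var error: NSError?

    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
        print("Biometrics unavailable: \(error?.localizedDescription ?? "unknown reason")")
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your account") { success, authError in
        if success {
            // Grant access. The app sees only this flag, never the biometric data.
        } else {
            print("Authentication failed: \(authError?.localizedDescription ?? "unknown reason")")
        }
    }
}
```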
Facial recognition can also borrow from advanced thinking on privacy as it relates to other biometric technologies, whether emerging or well established. Many organisations already use biometric data, from behavioural biometrics for fraud detection to the better-known DNA or fingerprint analysis in policing and security. This type of data is already considered “special category” and subject to stricter processing rules under the GDPR, and to additional safeguards under the UK Data Protection Act 2018. The Home Office’s Biometrics Strategy recognises that the increased use of biometrics can raise “significant issues of public trust”.
As with other emerging technologies, we welcome engagement from a broad range of stakeholders. Ada’s work is one example; engagement ranges from the ICO issuing an Opinion to the courts taking a view on the policing use case. We believe these interventions will allow us to frame the debate in a more helpful way, avoiding confusion and generalisations.
This is particularly important in the context of a desire to regulate. On taking office, European Commission President von der Leyen pledged to outline an EU approach to AI by March 2020. A leaked Commission white paper, which included the option of a “time-limited ban on the use of facial recognition technology in public spaces”, has been widely reported. Similarly, the City of San Francisco’s ordinance restricting the use of facial recognition has also been reported as a ban. On closer inspection, however, neither the San Francisco ordinance nor the Commission white paper (assuming the leak is accurate) actually proposes a blanket ban. Does this mean regulators agree that targeting specific use cases can be more effective? Time will tell.