Privitar

Data Policy Evening
Facial Recognition

What’s happening 

Facial recognition has stepped off the sci-fi movie screen and into our lives. While we welcome the convenience of using it to unlock our phones or breeze through airport security, other uses are more concerning.

Some use cases force us to consider the trade-off between public goods like security and individual freedoms like privacy. The recent use of facial recognition by a landlord at King's Cross sparked public outcry and an ICO investigation. Facial recognition trials by police forces in London and South Wales have also attracted scrutiny and legal challenge. The South Wales case prompted the first ever Information Commissioner's Opinion last week, highlighting the importance of this issue for data protection.

 

Why does it matter? 

Facial recognition can be used to identify and track individuals as they move through public spaces. As with many emerging technologies, we don’t yet fully understand the benefits and the risks presented by different use cases.

Most of us would trade a level of surveillance for the benefit of preventing serious crime. But would we feel the same about facial recognition being used to improve the layout of a shopping centre? Or for a school to track our child’s attendance? Or for an employer to evaluate candidates at interview?

As well as uncertainty over benefits, we also find it hard to assess the risks. Facial recognition, like other AI models, can generate false positives and has been shown to be less accurate at identifying women and ethnic minorities than white men. And the risk increases as models become more complex and try to read our lips or infer our emotions. We need to understand both the benefits and the harms to decide whether a use of the technology is proportionate.

Facial recognition is one example of a swathe of powerful new technologies that pose new and not yet well understood risks. Our response may set the tone for other new applications of AI, such as voice recognition, which can also operate at a distance without our consent, cooperation or knowledge.

The decisions we make today about novel applications of AI shape the society we will live in tomorrow. So who makes these decisions? And how do we engage a diverse and representative set of views to ensure those decisions reflect, and are trusted by, all of society?

Fortunately, last year the Nuffield Foundation took action and set up the Ada Lovelace Institute as a stewardship body to monitor the use and adoption of new AI technologies and to engage with the public on the issues and challenges they raise. And that's exactly what they've been doing, releasing their first major report in September on public attitudes to facial recognition. The report shows that despite low levels of public knowledge about facial recognition, people are forming opinions on the acceptability of the technology and the trade-offs it entails. The lack of knowledge also means that people are unclear about how facial recognition works, when it is being used and by whom, undermining public trust, normalising surveillance and encroaching on privacy and data protection rights.

We’re delighted to welcome Olivia Varley-Winter from the Ada Lovelace Institute to help us navigate these thorny questions.

 

What's the plan?

The aim of the data policy network evening is to combine focused discussion of facial recognition with a chance to chat to one another over food and drinks about what we're working on and thinking about. To get a good balance, the first 30 minutes and the last hour are left open for general conversation, with the middle of the evening structured around a few specific questions for debate.

As usual, the evening will be split between open discussion and a more structured debate around the theme and selected questions.

  • 17.45-18.15: General discussion and networking over food and drinks
  • 18.15-19.00: Introductory reflections from Olivia Varley-Winter
  • 19.00-20.00: Debate of selected questions in groups
  • 20.00-21.00: Back to open discussion
We try to keep the event relatively small to enable group discussions, but if you think there's someone who would be particularly interested in the topic then please let us know and we'd be happy to invite them.

Calendar

3rd December 2019
17:45 - 21:00

Location

Privitar, Alto Tower, 3rd Floor,
5 Hatfields, London SE1 9PG, UK.

 

Tickets

Free, by invitation only



Speaker


Olivia Varley-Winter
Program Manager, Ada Lovelace Institute

Olivia is a data, science and technology policy specialist who has worked on Ada's facial recognition and digital recognition work since its inception.

Over the past ten years she has held policy, research and management roles at the Royal Statistical Society, City & Guilds, Centre for Cities and nfpSynergy, and completed an MSc in environmental policy at the University of Oxford.

Register your interest

Need more info?

Feel free to reach out, and we will help you.