Privitar

Data Policy Network

Proportionality - what's the issue?

Recital 4 of the GDPR states:

“The right to the protection of personal data is not an absolute right; it must be considered in relation to its function in society and be balanced against other fundamental rights, in accordance with the principle of proportionality.”

A lot of current debate around challenges that arise in the context of big data analytics, like fairness in algorithmic decision making or privacy in the information age, comes down to the question of how to balance seemingly conflicting objectives, like personal privacy and national security, or improved health care through personalised medicine and confidentiality of an individual’s medical history. A focus on ‘fairness’ is, in some instances, a question of what is proportional and balanced. However, as many of the public goods that ought to be balanced are not well defined, and the impacts of potential data uses are often hard to anticipate and measure, it can be challenging to strike the right balance and fulfil the public’s reasonable expectations.

As an example, for a long time the public debate about personal privacy vs national security has followed the conventional paradigm that by surveilling the public en masse, and thus asking individuals to give up personal privacy, lives can be saved. But some question whether the right balance has been struck. So what is the right balance, and how do we reason about what it should be? What quantum of privacy should individuals be expected to give up to reduce the probability of a given number of deaths? How should we reason about the relative value of these two competing social goods?

Similar questions appear in the use of data in the healthcare system. As Elizabeth Denham said when giving evidence to Parliament back in January of this year:

“I think we have a lot of stresses and strains in our current model. The NHS is such a valuable resource. When we are talking about health data, people expect and want data to be used to find new solutions—to save babies and find new pathways for treatment. That is a decision that needs to be made by Parliament. We probably need—we do need—a deep public conversation: a meaningful public consultation about the use of health data and the application of machine learning and AI.

The public need to be behind whatever we decide to do, and I think it would need a change in the law to open up those datasets for more broad research and the use of AI.”

If we think preventing violent deaths is a reasonable basis for infringing on privacy, might it not also be reasonable when trying to prevent deaths from illness? What are the differences, and how should we reason about these types of questions?

The promise of the big data age is that the use of sensitive data opens up new possibilities, not just in health but across all sectors and industries. But if these public benefits are only available at the cost of our privacy, which of these opportunities should be realised, which should we choose to pass on, and why?

 

What's the plan?

The aim of the data policy network evening is to combine focused discussion on the question of proportionality with a chance to simply chat to one another over food and drinks about what we’re working on and thinking about. To get a good balance, the first and last hour are left open for people to chat about anything, with the middle part structured around a few specific questions for debate. For those who are interested, a write-up of the last event can now be found here.

The rough plan is as follows:

5.45 – 6.45 People arrive and meet one another over food and drinks
6.45 – 7.00 Reflections on the topic from Sophia Adams Bhatti
7.00 – 8.00 Discussion of selected questions in groups
8.00 – 9.00 Back to open discussion

 

Topics of discussion

After the topic introduction, we’ll split into smaller groups and discuss some relevant questions on the theme. Every 20 minutes we’ll change the question and mix up the groups a little, so everyone gets a chance to discuss each question with different people. We’re still thinking about what the most interesting questions for discussion are (please let me know if you have any suggestions!), but currently the plan is to discuss:

  1. Do the opportunities offered by AI call for legal changes, and a new civic duty; a sort of ‘data tax’? As part of the social contract should citizens be expected to provide their data for certain specific purposes in the public interest, e.g. health research?
  2. Are there any absolutes in privacy? Are there any red lines which should never be crossed, equivalent to the Convention on Certain Conventional Weapons? And if there are no absolutes, how can the value of privacy in relation to other social goods best be measured?
  3. What can we learn from how NICE has balanced competing health objectives in the past? Is there an equivalent of QALYs* for privacy? What are the similarities and differences in composition and use?

*QALY: quality-adjusted life year; see https://en.wikipedia.org/wiki/Quality-adjusted_life_year

 

RSVP below

Calendar

26th November 2018
5:45pm – 9:00pm

Location

Privitar HQ,
6th Floor, South Bank Central,
30 Stamford Street, SE1 9LQ
London, United Kingdom

 

Tickets

Free
by invitation only


Need more info?

Feel free to reach out, and we will help you.