The Data Policy Network
We’re in the foothills of the 4th industrial revolution. A raft of new technologies, perhaps most notably in the field of AI, promise to solve pressing problems and boost economic output. McKinsey estimates the economic benefit of AI alone at around $13T by 2030. The smart world we’re moving into offers new opportunities, but also throws up new risks, and changes how we can and should respond to existing risks.
Whilst it took months or years to prototype new products, build factories, and distribute the hardware of the first three industrial revolutions, today’s software is built in ‘sprints’ measured in weeks, with updates pushed out and installed in seconds. As such, policy makers must become more agile themselves to keep up. Partially this can be done by creating networks and forums for those in the policy community to discuss their projects and challenges and share learning and ideas.
Privitar is committed to driving the responsible growth of the data economy. Our close links with academia, the public sector, and the business community make us well placed to bring this diverse network together to tackle these emerging policy issues. To that end, Privitar hosts networking and discussion evenings every other month. Each event focuses on a particular topic. The theme of our most recent event was ‘accountability’.
Our Second Data Policy Evening – September 2018
Organisations today have access to much greater quantities of personal data, and new tools with which to automatically process, analyse, and act on this data.
This throws up new challenges to accountability. As organisations look to use data in more innovative and complex ways, how does society ensure they can be held to account and do not put individuals at risk by acting irresponsibly? And when new technologies replace humans as sources of analysis and decision making, how do we ensure that those responsible can still be held accountable for the consequences of these decisions?
The second Data Policy Evening explored the theme of accountability through two questions:
1) How do we maintain democratic accountability when political decisions are being embedded in code?
Modern tools change where, how, and by whom decisions and recommendations can be made. Decisions and recommendations which could previously only be made by humans can now be made at scale by algorithms. Those designing these algorithms may be data scientists and developers, not the operational staff previously responsible. Whilst more efficient, and arguably more consistent than humans, this automation raises new policy issues, especially around who is responsible for the recommendations made, and how the recommendations being made are affected by automation.
Algorithmic decision making and recommendation tools, such as COMPAS in the US or HART in the UK, have shown how political decisions are being embedded in code. In these instances the weighting of how an algorithm prioritises false positives against false negatives represents a statement of values. This prioritisation translates to how much the system aims to prevent future crimes, compared to ensuring those who won’t reoffend aren’t kept in jail. These design decisions can also, as arguably happened with COMPAS, produce racially biased results.
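To make this concrete, here is a minimal, hypothetical sketch of how a false-positive/false-negative cost weighting translates directly into a decision threshold. Nothing here reflects how COMPAS or HART actually work; the scores, costs, and function names are illustrative assumptions only.

```python
def decision_threshold(cost_false_positive: float, cost_false_negative: float) -> float:
    """Return the risk score above which a case is flagged.

    Flag when the expected cost of a miss exceeds that of a false alarm:
        p * cost_false_negative > (1 - p) * cost_false_positive
    which rearranges to:
        p > cost_false_positive / (cost_false_positive + cost_false_negative)
    """
    return cost_false_positive / (cost_false_positive + cost_false_negative)


# Hypothetical predicted reoffending probabilities for four individuals.
risk_scores = [0.2, 0.45, 0.6, 0.9]

# Policy A: a wrongful detention is weighted equally with a missed reoffender.
threshold_a = decision_threshold(cost_false_positive=1.0, cost_false_negative=1.0)

# Policy B: preventing reoffending is weighted three times more heavily.
threshold_b = decision_threshold(cost_false_positive=1.0, cost_false_negative=3.0)

flagged_a = [p for p in risk_scores if p > threshold_a]  # flags the two highest scores
flagged_b = [p for p in risk_scores if p > threshold_b]  # flags three of the four
```

The single cost ratio passed to `decision_threshold` is the ‘statement of values’ in miniature: changing one number changes who is flagged, which is precisely why such parameters warrant democratic oversight.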
Are we equipped to deal with this change? How do we ensure that accountable decision makers have the access, understanding, and control to ensure effective democratic oversight is possible?
Three thoughts from the discussion:
- Some argued that we already codify rules to a certain extent (for example, guidelines), so in some ways the change wasn’t as much about the codification as the automation. Perhaps the significance was that decision making could become centralised and operate at scale, meaning a politician, or other decision maker, could have more granular control of how operations ran.
- Others suggested that this codification allowed for greater transparency: because the values were being written down, they became open to challenge, and it became easier to detect where unfair decisions had previously been made. So perhaps this change will actually help democratic accountability.
- There was also a discussion of how society’s priorities, and the parameters for possible decisions, change over time through case law and cultural shifts. A rigid algorithm would therefore become outdated; should such algorithms have a use-by date?
2) In an increasingly complex world, are additional oversight mechanisms required to ensure fundamental rights and freedoms are not put at risk by data controllers?
Many times a day we are asked to make decisions about our personal data. We lack the mental bandwidth or understanding to engage with most of these decisions, so how can the consent we give be considered an effective way of protecting our fundamental rights and freedoms?
The GDPR makes use of consent harder, whilst effectively promoting the legitimate interest basis, where responsibility for determining if something presents an acceptable risk to the data subject is taken on by the controller. But whilst the controller may have the understanding and bandwidth to consider the risks and benefits of processing personal data, do they have the right incentives? With regulators stretched, who will provide effective oversight for how controllers make decisions about processing personal data? Do we need new transparency mechanisms which would allow civil society, or others, to oversee controllers’ decision making processes? If so, what do these look like? And how would they work?
Three thoughts from the discussion:
- It was generally agreed that the answer to question 2 was ‘yes’, so the real questions were how, and by whom. Much of the discussion centred on the burden already placed on the individual, and how increased access to information wasn’t helpful in itself, as it would only add to that burden. Instead, an independent and expert organisation was needed.
- Who would have the expertise, impartiality, and public trust to deliver this function was discussed in depth. The ICO was mentioned, but attendees felt it wasn’t sufficiently resourced; journalists were also suggested, but their independence and trustworthiness were questioned. Also discussed were Which? and a possible new non-ministerial department.
- Another solution discussed was to standardise certain types of processing and how organisations treat data for those purposes, so that organisations could be certified as behaving in a way generally agreed to be acceptable. For common processes, a simple kitemark could reduce the mental burden on users.
What do you think? If you want to continue the discussion please leave a comment below!