Dec 10, 2021
Our Data Policy Network event on 1 December took a deep dive into anonymization policy, or de-identification as it’s called in the US. We focused on the situation in the UK, but many of the challenges apply globally.
UK policy on anonymization, and on data use more broadly, is changing: the ICO is consulting on updated guidance to replace their 2012 code and DCMS is also consulting on broader data protection reforms.
Changes in the UK are important, but the UK is not an island when it comes to anonymization. Changes in Europe also matter, particularly for organizations operating across both jurisdictions. The European Data Protection Board is also due to publish new guidance on anonymization and pseudonymization in 2022. We’ll have to wait to see whether the strict test in the 2014 Article 29 Working Party guidance will change, and what impact jurisprudence (for example, the European Court of Justice judgement in Breyer) will have.
This blog will explore why anonymization matters and argue for a pragmatic approach to anonymization, using an example from health research to illustrate the practical impact that taking a pragmatic approach could have.
It can be tempting to skirt around anonymization by arguing that data protection law offers a choice of routes to data, allowing organizations to choose the appropriate route for their intended use case. Anonymization, that argument goes, is only one possible route, so it doesn’t matter that it can be difficult to use.
We disagree for three main reasons:
We’ve seen a clear ambition from policymakers to encourage data use, including for secondary purposes like research and AI. The UK’s National Data Strategy and AI Strategy are both examples. We believe that a pragmatic approach to anonymization, supported by guidance, could offer an efficient way to achieve this ambition without a significant risk to individuals.
A pragmatic approach requires two main elements: (1) assessing risk based on the context in which data is used, not just the risk posed by the data itself, and (2) accepting a non-zero risk of re-identification.
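These two elements can be sketched, purely illustratively, as a combined risk calculation: the residual re-identification risk is the intrinsic risk of the data, discounted by each contextual control, and the data is treated as anonymous when that residual falls below a small but non-zero threshold. All of the numbers, names, and the threshold below are invented assumptions for illustration, not a legal test or Privitar's methodology.

```python
# Hypothetical sketch: residual re-identification risk combines the risk
# posed by the data itself with reductions from contextual controls.
# Every figure here is an illustrative assumption, not a legal standard.

def residual_risk(data_risk: float, context_controls: list[float]) -> float:
    """Discount intrinsic data risk by each contextual control.

    data_risk: probability of re-identification from the data alone (0-1).
    context_controls: fractional risk reductions from environmental
    controls (e.g. a TRE, contractual bars, training), each in (0, 1).
    """
    risk = data_risk
    for reduction in context_controls:
        risk *= (1.0 - reduction)
    return risk

# Illustrative: moderately de-identified data used inside a controlled
# environment, with contractual and training controls layered on top.
risk = residual_risk(
    data_risk=0.10,                    # assumed risk from the data alone
    context_controls=[0.9, 0.5, 0.3],  # assumed TRE, contract, training effects
)
THRESHOLD = 0.01  # hypothetical "sufficiently remote" bar
print(round(risk, 4), risk < THRESHOLD)  # 0.0035 True
```

The point of the sketch is that the same residual risk can be reached by strengthening either the data transformations or the environmental controls, which is exactly why context belongs in the assessment.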
The presentation from Guy Cohen, Privitar's Head of Policy, on common re-identification techniques explained why these two elements are important.
An example involving health research illustrates this point. Imagine a UK researcher working with patient data. Her research proposal was scrutinized, ensuring that it was of a high scientific value, met ethical requirements and used robust methodologies. She’s been trained in confidentiality and data protection issues, and is contractually barred from attempting re-identification. She may be working with de-identified data in a Trusted Research Environment (TRE) or in a managed IT environment.
In our example, a pragmatic approach to anonymization could mean that the data is anonymous in the researcher’s hands. The controls on the context, taken together with any data transformations, reduce the risk of re-identification. However, re-identification risk is not zero; the researcher can still make inferences about the cohort. But the risk is probably low enough to cross the legal threshold for the data to be considered anonymous.
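The "data transformations" mentioned above can be made concrete with a small, invented example: generalizing quasi-identifiers (such as date of birth and postcode) so that each record is indistinguishable from at least k-1 others, a property known as k-anonymity. The records, field names, and choice of k below are illustrative assumptions, not a description of any particular TRE's practice.

```python
# Illustrative sketch (not a legal standard or any specific TRE's practice)
# of a de-identifying transformation: coarsening quasi-identifiers so each
# record shares them with at least k-1 others (k-anonymity).
from collections import Counter

def generalize(record: dict) -> dict:
    """Coarsen quasi-identifiers: full date of birth -> birth year,
    full UK postcode -> outward code. Diagnosis is kept for utility."""
    return {
        "birth_year": record["dob"][:4],
        "postcode_area": record["postcode"].split(" ")[0],
        "diagnosis": record["diagnosis"],
    }

# Invented patient records for illustration only.
patients = [
    {"dob": "1984-03-12", "postcode": "SW1A 1AA", "diagnosis": "asthma"},
    {"dob": "1984-07-30", "postcode": "SW1A 2BB", "diagnosis": "diabetes"},
    {"dob": "1990-01-05", "postcode": "M1 3CC", "diagnosis": "asthma"},
    {"dob": "1990-11-19", "postcode": "M1 4DD", "diagnosis": "asthma"},
]

generalized = [generalize(p) for p in patients]
# Group sizes over the quasi-identifiers: k is the smallest group.
groups = Counter((r["birth_year"], r["postcode_area"]) for r in generalized)
k = min(groups.values())
print(k)  # 2 -> each patient is hidden among at least one other
```

Note that even here the risk is not zero: as the blog argues, the researcher can still draw inferences about the cohort, which is why the transformation is assessed together with the controls on the environment.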
However, uncertainty about how to apply this pragmatic approach in practice poses challenges for organizations and researchers.
Clarity on anonymization would enable organizations to reduce their data governance overhead by applying only the controls necessary to ensure that data crosses the legal threshold from personal to anonymous.
We’ll publish further thoughts on what the guidance needs to provide in the coming weeks, as a part of our engagement with the ICO on their consultation.