By Marcus Grazette, Europe Policy Lead at Privitar

Our latest Data Policy Network event explored proposals for a new legal duty of loyalty. Organizations using personal data would owe this duty to the individuals whose data is used. Professor Woodrow Hartzog, author of Privacy’s Blueprint, took us through the arguments. You can watch the recording of his presentation below. This blog summarizes the rationale for change, what a duty of loyalty would require, how it would interact with the existing legal regime and what it would mean for organizations.

The rationale for change

Data protection and consumer protection rules aim, among other things, to protect individuals as they interact with organizations seeking to collect and use personal data. However, the current regime is arguably failing to prevent practices that exploit or manipulate individuals for the benefit of organizations.

Dark patterns are a topical example. The FTC describes dark patterns as “design features used to deceive, steer or manipulate users into behavior that is profitable for an online service, but often harmful to users or contrary to their intent.” Dark patterns are widespread and effective.

Professors Richards and Hartzog use dark patterns as one example of practices that fall through gaps in the current legal and regulatory regime. They summarize the US approach to data privacy in three obligations: (1) do not lie, (2) do not harm and (3) follow the Fair Information Practices. The first two arise from the prohibition of deceptive and unfair trade practices, and the FTC sets a relatively low ‘notice and choice’ bar for the third. Taken together, this allows organizations to rely on dense, often unread terms and conditions to justify exploitative data collection and use.

The GDPR takes a more robust approach. It sets a high bar for consent and requires organizations to consider individual rights when they use data under the ‘legitimate interest’ legal basis. Despite this, the UK Parliament’s Joint Committee on Human Rights captured the widely held belief that “the consent model is broken” because it “puts too much onus on the individual…rather than setting a high standard of protection by default.”

What is a duty of loyalty?

A duty of loyalty would require organizations using data to act in the best interests of the individual whose data is being used. A requirement to act in the individual’s best interests is well established in other areas of law. For example, in UK trust law the trustee owes a fiduciary duty to the beneficial owner of an asset held in trust.

How would a duty of loyalty interact with data protection and consumer protection law?

Richards and Hartzog argue that a duty of loyalty closes an important gap, because existing data protection and consumer protection laws overlook the fact that organizations can “exploit their structural and information superiority over people trusting them with their data.” In fiduciary terms, this is called self-dealing – where the fiduciary takes advantage of their position in a transaction to act in their own interests. Imposing a duty of loyalty would ban self-dealing and create a presumption against other types of behavior.

The gap is clear in a US context. A duty of loyalty would impose significant new obligations on organizations and would upend the current ‘notice and consent’ approach. US lawmakers are actively considering the idea: a duty of loyalty appears, in various forms, in several privacy bills, including the Data Care Act, the New York Privacy Act and the Consumer Online Privacy Rights Act.

The idea of a legal duty is more familiar to policymakers in the UK and EU. The GDPR already sets a high bar for consent and requires data protection by design and default. LINC, the French data protection authority’s digital innovation lab, makes the case that existing GDPR provisions are sufficient to tackle dark patterns and has published (in French) guidance on interface design and data protection.

The GDPR also imposes presumptions against some types of behavior. For example, it bans the processing of special category data and international data transfers unless an exemption applies. We’re seeing this type of presumption explicitly extended to the interaction between users and service providers: for example, proposed EU legislation on AI would ban “manipulative and exploitative practices” that “distort” behavior.

Emerging policy ideas, such as data institutions, rely on a similar concept of ‘data stewardship.’ The data institution holds data on behalf of individuals and makes that data available to be used, often for public benefit, educational or charitable aims. The European Commission’s proposed Data Governance Act would create a governance framework for so-called ‘data intermediaries,’ a form of data institution.

What does this mean for organizations?

Complying with a duty of loyalty could require enhanced data governance arrangements. Organizations would need to understand and take account of an individual’s best interests, and of the potential harms an individual could experience as a result of their data being collected and used.

De-identification has a role to play. It can help to ensure a strict approach to data minimization, for example allowing organizations to collect and hold only the data necessary at the level of precision necessary for their purpose. De-identification also helps to mitigate the risk of harm to individuals.
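To make the precision-reduction idea concrete, here is a minimal sketch in Python. The field names, formats and salary bands are purely illustrative assumptions, not a description of any particular product or the approach Privitar takes:

```python
def generalize(record):
    """Reduce each field to the level of precision needed for the purpose.

    Assumes a simple record layout with an ISO date of birth, a UK-style
    postcode and an exact salary; all choices here are hypothetical.
    """
    return {
        # Keep only the year of birth instead of the full date
        "birth_year": record["date_of_birth"][:4],
        # Keep only the outward code of the postcode (e.g. "SW1A")
        "postcode_area": record["postcode"].split()[0],
        # Replace the exact salary with a broad band
        "salary_band": "under-30k" if record["salary"] < 30_000 else "30k-plus",
    }

record = {"date_of_birth": "1985-06-14", "postcode": "SW1A 1AA", "salary": 45_000}
print(generalize(record))
# {'birth_year': '1985', 'postcode_area': 'SW1A', 'salary_band': '30k-plus'}
```

The point of the sketch is that the organization still holds data that is useful for its stated purpose, while the individual’s exposure if the data is misused or breached is reduced.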

Strict data minimization is not just in an individual’s interest, it can improve data-driven projects. Read our blog on better machine learning through data minimization to learn how.