Oct 06, 2021
Our Data Policy Network event on 29 September explored fairness in AI. We started with a conceptual discussion, then quickly moved to the pressing practical issue facing many organizations: how to operationalize fairness in the systems they build and deploy.
Fairness has several mutually incompatible definitions. As Cynthia Dwork explained in her Royal Society lecture, we can assess AI systems against several different notions of fairness. She used statistical parity, equality of odds (sometimes called equality of opportunity), and predictive rate parity as examples. She also showed that it is impossible to satisfy all three definitions of fairness at the same time; they are mutually incompatible.
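The tension between these notions is easy to see in code. The sketch below (entirely synthetic data, and simplified definitions of the three metrics named above) measures the gap between two groups under statistical parity, equality of opportunity (equal true-positive rates), and predictive rate parity (equal precision). Because the groups have different base rates, the gaps cannot all be zero at once:

```python
import numpy as np

# Gap between two groups (g = 0/1) under three fairness notions.
# y_true: actual outcomes, y_pred: classifier decisions, both 0/1.

def statistical_parity_gap(y_pred, g):
    # Difference in positive-decision rates between groups.
    return abs(y_pred[g == 0].mean() - y_pred[g == 1].mean())

def equal_opportunity_gap(y_true, y_pred, g):
    # Difference in true-positive rates (recall) between groups.
    tpr = lambda grp: y_pred[(g == grp) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

def predictive_parity_gap(y_true, y_pred, g):
    # Difference in precision between groups.
    prec = lambda grp: y_true[(g == grp) & (y_pred == 1)].mean()
    return abs(prec(0) - prec(1))

rng = np.random.default_rng(0)
g = rng.integers(0, 2, 1000)
# Base rates differ by group (0.3 vs 0.6), so the three
# metrics cannot all be satisfied simultaneously.
y_true = (rng.random(1000) < np.where(g == 0, 0.3, 0.6)).astype(int)
# A classifier that is equally accurate for both groups.
y_pred = (rng.random(1000) < np.where(y_true == 1, 0.8, 0.2)).astype(int)

print("statistical parity gap:", statistical_parity_gap(y_pred, g))
print("equal opportunity gap: ", equal_opportunity_gap(y_true, y_pred, g))
print("predictive parity gap: ", predictive_parity_gap(y_true, y_pred, g))
```

With these inputs the classifier treats positives and negatives identically across groups, so the equal-opportunity gap is small, yet the statistical parity and predictive parity gaps are substantial: choosing which gap to close is exactly the design decision the post discusses.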
A real-world example underlines the complexity. Researchers identified twenty-three notions of fairness, then used a German loans dataset to test credit scores against them. They set out their findings (spoiler: men were more likely to receive a good credit score, even when they actually had bad credit) and posed the crucial question: is the classifier fair? They conclude that the answer "depends on the notion of fairness one wants to adopt."
Organizations also need to make practical choices when designing and developing AI systems: for example, understanding limitations in the training data, considering whether regional or cultural differences have been properly accounted for, and anticipating how the system's output will be used. Good design can prevent some of the issues around bias and fairness from arising in the first place.
We’ve already seen that statistical notions of fairness compete with one another. The same is true for legal approaches to fairness. For example, laws aiming to protect a subpopulation from discrimination (e.g. the UK’s Equality Act 2010, the EU Charter of Fundamental Rights, or various US civil rights acts) focus on equality of opportunity.
However, positive action initiatives, which focus on statistical parity, exist in parallel. Examples include US provisions to support veterans (a specific subpopulation) into employment, and European Court of Justice rulings (e.g. C-450/93 and C-409/95) allowing positive action in favor of women to reduce under-representation.
Fairness may also entail trade-offs with other objectives, such as privacy or transparency. The DCMS consultation on the UK’s data protection regime proposes a legislative intervention to add ‘bias monitoring, detection and correction in relation to AI systems’ to the list of presumed legitimate interests for which a balancing test is not required. This can be seen as trading privacy (more data is being used) for fairness (using that data to detect and correct bias).
There is broad agreement on the need for organizations to build trust in their use of data, including for AI systems. Researchers found that study participants were “deeply concerned about algorithmic unfairness, they often expected companies to address it regardless of its source, and a company’s response to algorithmic unfairness could substantially impact user trust.” Similarly, the UK’s new National AI Strategy describes an ambition to support “innovation and adoption while protecting the public and building trust.”
There is a significant body of literature on how to achieve this. DCMS describes the current landscape as “fragmented, with a plethora of actors producing guidelines and frameworks.”
AstraZeneca’s approach provides a useful case study. They developed a set of principles to guide their AI projects, with governance structures to support implementation. The approach to data, digital technologies and AI is reflected across the organization, as part of the company-wide Code of Ethics.
Fairness is one of those principles. AstraZeneca considers factors including the input data used to train and validate the system and the context in which the system is deployed. Marghi Sheth, a Data Policy Director at AstraZeneca, set out the approach in her presentation to the Digital Leadership Forum, available on demand.
AstraZeneca is far from alone. A 2019 survey found that 63% of respondents “have an ethics committee that reviews the use of AI” and 70% conduct ethics training for technologists. Although the survey does not drill down into the topics those committees and training programs cover, it’s a safe bet that fairness is among them.
Trustworthiness cuts across all types of data use, not just AI development. Similar approaches can help to build trust for data use in general. We’ve seen examples of organizations defining principles, implementing processes (which may include committees), supporting people (e.g. through training) and deploying technical tools like Privacy Enhancing Technologies.
This approach allows an organization to be transparent about how and why data is being used. In an area like fairness, where there is no one ‘right’ way forward, transparency helps organizations to learn from feedback, engage actively and ultimately to build trust.