Differential privacy leapt from research papers to tech news headlines last year when, in the WWDC keynote, Apple's Senior Vice President of Software Engineering, Craig Federighi, announced Apple's use of the concept to protect user privacy in iOS.

It was the latest instance of a general trend: users and engineers awakening to the importance of privacy protection in software. High-profile privacy violations such as Uber’s ‘God mode’ have demonstrated in stark terms the ease with which employees of a company can misuse sensitive data gathered from their customers.

The amount of sensitive data that is being digitally recorded is rapidly increasing. People now rely on digital services for more of their payments, transportation, navigation, shopping, and healthcare than ever before. This new data collection creates ever-increasing opportunities to violate privacy.

But it also creates exciting opportunities: to improve transportation networks, reduce crime, and cure disease, if the data is made available to the right data scientists and researchers. There is a natural tension between protecting the privacy of the individuals in a dataset and enabling the analytics that could lead to a better world.

Differentially private algorithms are a promising technical solution that could ease this tension, allowing analysts to perform benign aggregate analysis while guaranteeing meaningful protection of each individual’s privacy.
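To make this concrete, here is a minimal sketch of one classic differentially private technique, the Laplace mechanism, applied to a simple count query. The dataset, the epsilon value, and the function names are illustrative assumptions, not something specified in this article.

```python
import numpy as np

def private_count(values, predicate, epsilon):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so adding Laplace noise with scale 1/epsilon
    satisfies epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: ages of individuals in a sensitive dataset.
ages = [34, 29, 41, 57, 23, 62, 38, 45]
noisy = private_count(ages, lambda age: age >= 40, epsilon=0.5)
print(f"Noisy count of people aged 40+: {noisy:.1f}")
```

The analyst still learns roughly how many people are over 40, but the added noise hides whether any one individual was in the data; a smaller epsilon adds more noise, trading accuracy for stronger privacy.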

This developing field of technology is worth considering for any system that seeks to analyse sensitive data.