A data masking technique that replaces a field value with a 'token': a synthetic value that stands in for the real one. The pattern of the generated token is configurable and can be chosen to match the format of the source data, which is useful when downstream systems (e.g. test and development environments) expect realistically formatted values. Tokenization can also be applied consistently, meaning the same value is always replaced with the same token, so that referential integrity is preserved across the dataset.
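A minimal sketch of both properties described above, using Python's standard library. It derives each token deterministically from an HMAC of the input (so the same value always yields the same token) and substitutes each character with one of the same class (digit for digit, letter for letter), leaving separators intact to preserve the format. The key and function names are illustrative, not part of any particular product's API; a production system would also need key management and collision handling.

```python
import hmac
import hashlib
import string

SECRET_KEY = b"example-tokenization-key"  # hypothetical key for illustration

def tokenize(value: str, key: bytes = SECRET_KEY) -> str:
    """Return a format-preserving token for `value`.

    Deterministic: the same (value, key) pair always produces the same
    token, preserving referential integrity. Format-preserving: digits
    map to digits, letters to letters (same case), and punctuation such
    as dashes is kept as-is.
    """
    digest = hmac.new(key, value.encode(), hashlib.sha256).digest()
    out = []
    for i, ch in enumerate(value):
        b = digest[i % len(digest)]
        if ch.isdigit():
            out.append(string.digits[b % 10])
        elif ch.isalpha():
            pool = string.ascii_uppercase if ch.isupper() else string.ascii_lowercase
            out.append(pool[b % 26])
        else:
            out.append(ch)  # keep separators so the layout survives
    return "".join(out)

# Example: a card-number-shaped value keeps its 4-4-4-4 layout,
# and repeated calls return the identical token.
token = tokenize("4111-2222-3333-4444")
```

Real tokenization products typically use format-preserving encryption or a secured token vault rather than this simple keyed-hash substitution, but the interface idea is the same: one input, one stable, format-compatible token.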
