Apr 19, 2022
Data mesh is a hot topic, and quite possibly one of the biggest emerging trends in data science and analytics for the year ahead. That should come as no surprise: many enterprises are investing heavily in their data ecosystems and looking for ways to modernize and optimize their data architecture stacks.
Data mesh is a decentralized, distributed approach to enterprise data management. The approach was developed by Zhamak Dehghani, Director of Next Tech Incubation and principal consultant at Thoughtworks, and a member of its technology advisory board. According to Thoughtworks, a data mesh is intended to “address the common failure modes of the traditional centralized data lake or data platform architecture,” hinging on modern distributed architecture and “self-serve data infrastructure.” There are four key principles to data mesh:

1. Domain-oriented, decentralized data ownership and architecture
2. Data as a product
3. Self-serve data infrastructure as a platform
4. Federated computational governance
Organizations that embrace data mesh as the basis of their modern data architectures can take advantage of their ability to democratize both data access and management.
The idea is that the people in a business domain who work with specific data every day can use self-service infrastructure to create pipelines that take data from that domain’s application data sources and produce data products available in the data mesh. They can build trusted, compliant “data products” once and reuse them across different analytical workloads, rather than repeatedly re-inventing data integration pipelines to recreate the same data for each analytical system. This speeds up both the creation of, and access to, trusted, high-quality, compliant data for analytics.
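The build-once, reuse-everywhere idea can be made concrete with a small sketch. Everything here is illustrative and assumed (the function and field names are not from any Privitar or Denodo API): one domain function validates, standardizes, and deduplicates raw records into a “data product,” which two different analytical consumers then reuse without rebuilding the pipeline.

```python
# Illustrative sketch of a reusable data product (all names are assumptions).

def build_customer_product(raw_records):
    """Build the trusted data product once: validate, standardize, dedupe."""
    seen = set()
    product = []
    for rec in raw_records:
        if not rec.get("email"):           # drop invalid rows
            continue
        key = rec["email"].lower()         # standardize
        if key in seen:                    # deduplicate
            continue
        seen.add(key)
        product.append({"email": key, "region": rec.get("region", "unknown")})
    return product

raw = [
    {"email": "Ana@x.com", "region": "EU"},
    {"email": "ana@x.com", "region": "EU"},  # duplicate of the row above
    {"email": None},                         # invalid row
    {"email": "bo@y.com"},                   # no region recorded
]

product = build_customer_product(raw)

# Two different analytical workloads reuse the SAME product:
total_customers = len(product)
by_region = {}
for row in product:
    by_region[row["region"]] = by_region.get(row["region"], 0) + 1
```

The point is that validation and deduplication logic lives in one place, owned by the domain team, instead of being re-implemented inside every downstream analytical system.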
Data virtualization has risen to the fore as a key enabler of data mesh architecture, providing an agile way to produce, consume, and govern data products.
Data virtualization enables domains to quickly implement data products by creating virtual models on top of any data source. Thanks to its ease of use and the minimal data replication it requires, creating data products is much faster than with traditional alternatives. Data products can also be automatically registered in a global, company-wide data catalog that acts as a data marketplace for the business.
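A plain SQL view gives a rough feel for what a virtual model is: a query-time definition over sources, with no data copied. This is a minimal stand-in, not Denodo’s actual engine or VQL; the table and view names are invented for illustration.

```python
# Sketch: a SQL view as a stand-in for a virtual model over two sources.
# Consumers query the "data product" (the view); no underlying data is replicated.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- two "sources" owned by the domain (names are illustrative)
    CREATE TABLE crm_accounts (account_id INTEGER, name TEXT);
    CREATE TABLE billing (account_id INTEGER, mrr REAL);
    INSERT INTO crm_accounts VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO billing VALUES (1, 500.0), (2, 1200.0);

    -- the virtual "data product": only a definition, evaluated at query time
    CREATE VIEW account_revenue AS
        SELECT a.name, b.mrr
        FROM crm_accounts a JOIN billing b USING (account_id);
""")

rows = conn.execute(
    "SELECT name, mrr FROM account_revenue ORDER BY mrr DESC"
).fetchall()
```

Because the view is just a definition, the domain can later swap or restructure the underlying tables without breaking consumers, which is the decoupling the next paragraph describes.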
Another key benefit of data virtualization in this architecture is that domains can choose the data sources that implement their products and evolve them autonomously. For instance, many business units will already have domain-specific data analytics systems (e.g. data marts) they can reuse with almost no effort and without introducing new skills to their teams. They can also directly reuse applications tailored to their domains (e.g. SaaS applications).
Data virtualization also lends itself naturally to implementing the federated governance principle. First, the layered nature of virtual models enables the easy reuse of definitions across domains. This, in turn, enables the definition of common entities with a consistent representation across all data products, ensuring their interoperability. It also enables developers to easily reuse the data products of other domains, avoiding duplicated effort.
In short, you can unlock the value of distributed data, no matter where it lives, using the power of data virtualization technology.
For organizations to be able to build trusted, compliant data products, safe data must be at the foundation. This is critically important. Data access and privacy tools can de-identify sensitive information and facilitate access to safe and trusted data. They are essential for organizations to ensure that their data stays safe when using it across their organizations and beyond.
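One common de-identification technique is consistent pseudonymization: replacing a sensitive value with a token derived via keyed hashing, so the same input always yields the same token and records remain joinable without exposing the raw value. The sketch below shows the general idea only; it is not Privitar’s mechanism, and the key and function names are assumptions.

```python
# Minimal consistent-pseudonymization sketch (illustrative, not a product API).
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-securely"  # assumption: managed out of band

def pseudonymize(value: str) -> str:
    """Map a sensitive value to a stable, non-reversible token."""
    digest = hmac.new(SECRET_KEY, value.lower().encode(), hashlib.sha256)
    return "tok_" + digest.hexdigest()[:16]

a = pseudonymize("ana@example.com")
b = pseudonymize("ANA@example.com")  # same person, different casing
```

Because the mapping is deterministic under one key, analysts can still count distinct customers or join datasets on the token, while the raw identifier never leaves the governed boundary.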
Privitar recently announced a new strategic partnership with Denodo to help do just that. Together, we are advancing modern data provisioning and putting safe data at the core of the data mesh powered by data virtualization. You’ll be able to use Denodo’s data virtualization to provision data products governed by Privitar policies, so that sensitive data is automatically, transparently, and consistently protected no matter where it is used, and stays compliant with legislation.
Our team of data security and privacy experts are here to answer your questions and discuss how modern data provisioning can fuel business growth.