By Mike Foster, VP of Technology Partners at Privitar, and Ron Yu, Director of Technology & Cloud Alliances at Denodo 

Data mesh is a hot topic, and quite possibly one of the biggest emerging trends in data science and analytics for the year ahead. That should come as no surprise: enterprises are investing heavily in their data ecosystems and looking for ways to modernize and optimize their data architecture stacks.

Defining Data Mesh

Data mesh is a decentralized, distributed approach to enterprise data management. The approach was developed by Zhamak Dehghani, Director of Next Tech Incubation and principal consultant at Thoughtworks, and a member of its technology advisory board.

According to Thoughtworks, a data mesh “address[es] the common failure modes of the traditional centralized data lake or data platform architecture,” hinging on modern distributed architecture and “self-serve data infrastructure.”

There are four key principles to data mesh:

  1. Domain-oriented decentralization of data ownership and architecture
  2. Domain-oriented data served as a product
  3. Self-serve data infrastructure as a platform to enable autonomous, domain-oriented data teams
  4. Federated governance to enable ecosystems and interoperability

Organizations that embrace data mesh as the basis of their modern data architectures can democratize both data access and data management.

The idea is that the people in a business domain who work with specific data every day can use self-service infrastructure to build pipelines that take data from that domain’s application data sources and produce data products, which are then made available in the data mesh. They can build trusted, compliant data products once and reuse them across different analytical workloads, rather than repeatedly re-inventing data integration pipelines to recreate the same data for each analytical system. This speeds up both the creation of, and access to, trusted, high-quality, compliant data for analytics.
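The build-once, reuse-everywhere idea can be sketched in a few lines of Python. All names and data here are hypothetical, and the "product" is deliberately simplified to a filtered, de-identified list; the point is only that one trusted pipeline feeds multiple analytical workloads.

```python
# Hypothetical sketch: a domain team builds one trusted "data product"
# and reuses it across analytical workloads instead of rebuilding pipelines.

RAW_ORDERS = [  # raw application data owned by the sales domain
    {"id": 1, "amount": 120.0, "email": "a@example.com", "status": "paid"},
    {"id": 2, "amount": -5.0,  "email": "b@example.com", "status": "paid"},
    {"id": 3, "amount": 80.0,  "email": "c@example.com", "status": "void"},
]

def orders_data_product():
    """Build the trusted product once: validate, filter, de-identify."""
    return [
        {"id": o["id"], "amount": o["amount"]}           # drop PII (email)
        for o in RAW_ORDERS
        if o["amount"] > 0 and o["status"] == "paid"     # quality/compliance rules
    ]

# Two different analytical workloads reuse the same product:
product = orders_data_product()
revenue_report = sum(o["amount"] for o in product)       # BI workload
training_rows = [(o["amount"],) for o in product]        # ML workload
```

Both consumers see the same validated, de-identified data, so the quality and compliance rules live in one place rather than being re-implemented per analytical system.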

The role of data virtualization in a data mesh

Data virtualization has risen to the fore as a key enabler of data mesh architecture, providing an agile way to produce, consume, and govern data products.

Data virtualization enables domains to quickly implement data products by creating virtual models on top of any data source. Thanks to its ease of use and to the minimal data replication it requires, creating data products is much faster than with traditional alternatives. Data products can also be automatically registered in a global, company-wide data catalog that acts as a data marketplace for the business.
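A minimal Python sketch (all source systems, names, and the catalog are hypothetical) of what a "virtual model" means in practice: the data product is a query definition over live sources, not a copy of the data, and it is registered in a shared catalog where consumers can discover it.

```python
# Hypothetical sketch of a virtual model: the product is a query
# over live sources, evaluated on access, with no data replication.

CRM = {"c1": {"name": "Acme", "region": "EMEA"}}   # source system A
BILLING = {"c1": 950.0}                            # source system B

CATALOG = {}  # company-wide catalog acting as a data marketplace

def register(name, view):
    """Register a data product (a view function) in the global catalog."""
    CATALOG[name] = view

def customer_360():
    """Virtual view: joins the sources at query time, nothing is copied."""
    return [
        {"id": cid, "name": c["name"], "region": c["region"],
         "spend": BILLING.get(cid, 0.0)}
        for cid, c in CRM.items()
    ]

register("customer_360", customer_360)

# A consumer discovers the product in the catalog and queries it.
# Because nothing was replicated, a change in a source is visible
# immediately on the next query:
BILLING["c1"] = 1200.0
rows = CATALOG["customer_360"]()
```

The same decoupling is what lets a virtualization layer swap or upgrade an underlying source without breaking the product's consumers.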

Another key benefit of data virtualization in this architecture is that domains can choose, and autonomously evolve, the data sources that implement their products. For instance, many business units already have domain-specific data analytics systems (e.g. data marts) that they can reuse with almost no effort and without bringing new skills into their teams. They can also directly reuse applications tailored to their domains (e.g. SaaS applications).

Data virtualization also lends itself naturally to implementing the federated governance principle. First, the layered nature of virtual models enables the easy reuse of definitions across domains. This, in turn, enables the definition of common entities with a consistent representation across all data products, ensuring their interoperability. It also enables developers to easily reuse the data products of other domains, avoiding duplicated effort.
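The layered-reuse idea above can be sketched as follows. This is a hypothetical Python illustration, not Denodo's implementation: a common "customer" entity is defined once, and two domain products layer their own views on top of it, so their outputs stay interoperable on the shared fields.

```python
# Hypothetical sketch of layered virtual models: a shared entity
# definition is reused by two domain data products.

SOURCE = [{"cust_id": 7, "cust_name": "acme corp", "country": "DE"}]

def customer_entity():
    """Shared canonical definition: one consistent representation."""
    return [{"customer_id": r["cust_id"],
             "customer_name": r["cust_name"].title(),
             "country": r["country"]} for r in SOURCE]

def marketing_product():
    """Marketing domain layers its own fields on the shared entity."""
    return [{**c, "segment": "EU" if c["country"] == "DE" else "Other"}
            for c in customer_entity()]

def finance_product():
    """Finance domain reuses the same entity, avoiding duplicated mapping logic."""
    return [{**c, "tax_zone": c["country"]} for c in customer_entity()]

# Both products agree on the common fields, so they interoperate:
m, f = marketing_product()[0], finance_product()[0]
```

Because both domains build on `customer_entity()`, a correction to the canonical mapping propagates to every product that layers on top of it.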

In short, you can unlock the value of distributed data, no matter where it lives, using the power of data virtualization technology.

Why you must have safe data at the core

For organizations to build trusted, compliant data products, safe data must be the foundation. This is critically important. Data access and privacy tools can de-identify sensitive information and facilitate access to safe, trusted data; they are essential for keeping data safe as it is used across the organization and beyond.

Privitar recently announced a new strategic partnership with Denodo to help do just that. Through this partnership, we are aligning to advance modern data provisioning and put safe data at the core of a data mesh powered by data virtualization. You’ll be able to combine Denodo’s data virtualization with Privitar policies to deliver data products whose sensitive data is automatically and transparently governed, consistently and in compliance with legislation, no matter where it is used.

To support the launch of the partnership, Privitar will present at Denodo’s Fast Data Strategy Virtual Summit, kicking off on April 27th. Learn more or register here.