Master Data Management – Using Graph Databases

By Muqaddas Mehmood

“Big Data” grows bigger every year, and today’s enterprise leaders not only need to manage larger volumes of data, but critically need to generate insights from the data they already have. Businesses need to stop merely collecting data points and start connecting them. In other words, the relationships between data points often matter more than the individual points themselves. To leverage those relationships, your organization needs a database technology that stores relationship information as a first-class entity. That technology is a graph database.

While traditional relational databases have served the industry well, in most deployments they still demand significant overhead and expert administration to adapt to change. Relational databases require cumbersome joins and indexing when faced with the non-hierarchic relationships that are becoming common in complex IT ecosystems spanning partners, suppliers, and service providers, as well as the more dynamic infrastructures associated with cloud and agile delivery.

Unlike relational databases, graph databases are designed to store interconnected data that is not purely hierarchic. They make it easier to make sense of that data by not forcing intermediate indexing at every turn, and easier to evolve models of real-world infrastructures, business services, social relationships, or business behaviors that are both fluid and multi-dimensional.
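To make that contrast concrete, here is a minimal sketch in plain Python of the property-graph idea: nodes and relationships are both first-class records, so a relationship can carry its own properties and be followed directly, without join tables. The entities and fields are invented for illustration.

```python
# A toy property-graph store: relationships are records in their own
# right, with a type and properties, not rows reconstructed via joins.
nodes = {
    "acme": {"label": "Customer", "name": "Acme Corp"},
    "widget": {"label": "Product", "name": "Widget"},
}
relationships = [
    # (start, type, end, properties) - the relationship itself holds data.
    ("acme", "PURCHASED", "widget", {"quantity": 3}),
]

def outgoing(node_id, rel_type):
    """Follow relationships of one type from a node, returning end nodes."""
    return [nodes[end] for start, t, end, _ in relationships
            if start == node_id and t == rel_type]

print(outgoing("acme", "PURCHASED"))  # Acme's purchased products
```

A real graph database such as Neo4j indexes these traversals natively, but the data model is essentially the one above.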

Current State of Master Data Management:

The world of master data is changing. Data architects and application developers are swapping their relational databases for graph databases to store their master data. This switch enables them to use a data store optimized to discover new insights in existing data, provide a 360-degree view of master data, and answer questions about data relationships in real time.

Your Master Data Management (MDM) program likely uses the same database technology as your transactional applications: a mature, highly tuned relational database (RDBMS). You excel at relational databases because you have many years of experience with them, and most of your data lives there, so it makes sense to keep master data there too. Traditionally, MDM has included Customer, Product, Accounts, Vendor, Partner, and any other highly shareable data in an enterprise.

Master Data, by definition, is highly shared. This tends to cost business agility in a way that ripples throughout the organization. Our architectures struggle to force data into a single definition of the truth, something most of us realize is not feasible in the long run.

The Future of Master Data Management:

MDM programs that attempt to physically retain data in a single location continue to wrestle with the realities of modern information technology. Most enterprise organizations use vendor applications: customer relationship management (CRM) systems, work management systems, accounts payable, accounts receivable, point-of-sale systems, and so on. Because of this, it is not always feasible to move all Master Data to a single location. Even with a CRM system in place, we typically end up with customer information maintained in several systems. The same goes for product and accounting data.

The most successful programs will not strive to find a single physical location for all data, but will provide the standards, tools, and services necessary to deliver a consistent vision of enterprise data. Some data we will be able to store in one place, using the technologies that best fit it. Other data will continue to live in multiple physical systems, due to the increasing use of packaged applications as well as performance and geographically distributed processing needs. Once we understand our environment, we can architect solutions that build on those needs.

The future of Master Data Management will derive value from data and its relationships to other data. MDM will be about supplying consistent, meaningful views of Master Data. In many cases, we will be able to unify data into one location, especially to optimize for query performance and data fit. Graph databases offer exactly that type of data/performance fit, as we will see below. In this paper, we discuss why your master data is a graph and why graph databases like Neo4j are the best technologies for master data.

Today’s enterprises are drowning in “Big Data” – most of which is mission-critical Master Data – and managing its complex relationships can be quite a challenge. Here are some of the most difficult hurdles in MDM that enterprises must face:

• Complex and hierarchical datasets

Master Data such as organizational and product data has deep hierarchies with top-down, lateral, and diagonal connections. Managing such data models with a relational database results in complex and unwieldy code that is slow to run, expensive to build, and time-consuming to maintain.
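As a sketch of why hierarchies suit a graph walk better than recursive self-joins, here is a plain-Python descent over an invented product category tree; a graph database performs the equivalent traversal natively.

```python
# Hypothetical product hierarchy with top-down links; walking it is a
# simple recursion rather than a chain of relational self-joins.
children = {
    "Electronics": ["Computers", "Phones"],
    "Computers": ["Laptops", "Desktops"],
    "Phones": [],
    "Laptops": [],
    "Desktops": [],
}

def descendants(category):
    """Collect every category below `category`, depth-first."""
    result = []
    for child in children.get(category, []):
        result.append(child)
        result.extend(descendants(child))
    return result

print(descendants("Electronics"))
# ['Computers', 'Laptops', 'Desktops', 'Phones']
```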

• Real-time query performance

Master Data systems must integrate with and provide data to a host of applications within the enterprise – sometimes in real time. However, traversing a complex and highly interconnected dataset to provide real-time information is a challenge.
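The traversal problem can be sketched with a breadth-first search over an invented set of linked master records: the query touches only the local neighborhood of the starting record, which is why graph stores can answer relationship questions in real time where table scans cannot.

```python
from collections import deque

# Invented example: customer and account records linked across systems.
links = {
    "cust:1": ["acct:7"],
    "acct:7": ["cust:1", "cust:2"],
    "cust:2": ["acct:7"],
}

def path_between(start, goal):
    """Breadth-first search returning one shortest path of links."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connection found

print(path_between("cust:1", "cust:2"))  # ['cust:1', 'acct:7', 'cust:2']
```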

• Dynamic structure

Master Data is highly dynamic with constant addition and re-organization of nodes, making it difficult for your developers to design systems that accommodate both current and future requirements.

The best data-driven business decisions aren’t based on stale information silos. Instead, you need real-time Master Data with information about data relationships. Graph databases are built from the ground up to support data relationships. With more efficient modeling and querying, organizing your Master Data in a graph yields relevant answers faster and with more flexibility than ever before.

The Struggle of Data Scrubbing

By Mery Ramirez

Data Scrubbing is one of the most common techniques in data science and has been used for various purposes for quite some time. Also known as Data Cleansing, the process is primarily defined as detecting and removing data that is inconsistent, meaningless, or otherwise unusable. Industries such as insurance, banking, telecommunications, and many others depend on data for various purposes. For this reason, it is important to have refined data that does not require constant changes that affect system performance.

Normally, data cleaning is hectic and requires a lot of effort. It is made much easier by software solutions that can clean and scrub the data quickly. Even with good tools, however, there are struggles that need to be considered. And while the terms data scrubbing and data cleansing are often used interchangeably, there is still a minor difference between the two. Before deciding to take on the challenges involved, let us look at the process in more detail.

Steps to Perform Data Scrubbing

There are several steps to perform while doing data scrubbing. Here, we have confined them to the few that matter most, to give a rough idea of what is going to happen and how to sort out the essential work more easily.

Inspection and Audit

The very first step is to identify all the irregularities and inconsistencies present in the data. This can be done in two ways: by reading through the data and pointing out errors manually, or by using a data-scrubbing tool. A complete audit gives a clear picture of what is wrong with the data and what needs to be fixed. With the problem areas identified, one can move on to the next step.
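A minimal audit pass might look like the following sketch, which scans invented customer records and flags missing fields, untrimmed values, and likely duplicates so the cleaning step knows what to fix. The record layout and rules are assumptions for illustration.

```python
records = [
    {"name": "Alice Smith", "email": "alice@example.com"},
    {"name": "  bob  ", "email": ""},                       # padding + missing email
    {"name": "Alice Smith", "email": "alice@example.com"},  # exact repeat
]

def audit(rows):
    """Return a list of (row_index, problem) findings."""
    findings = []
    seen = set()
    for i, row in enumerate(rows):
        if not row.get("email"):
            findings.append((i, "missing email"))
        if row.get("name", "") != row.get("name", "").strip():
            findings.append((i, "untrimmed name"))
        key = (row.get("name", "").strip().lower(), row.get("email", ""))
        if key in seen:
            findings.append((i, "possible duplicate"))
        seen.add(key)
    return findings

print(audit(records))
```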

Data Cleaning

Here comes the real deal; this is where one’s skills start counting. In data cleaning, begin by removing the general errors that can be seen: the inconsistencies and irregularities that disrupt the flow of the data, along with stray words that do not belong. In short, this is the removal of general errors, typos, and the like.
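A simple cleaning routine for such general errors can be sketched as below: trim padding, collapse inner whitespace, and apply a map of known misspellings. The typo map here is invented for illustration; real cleansing tools use far richer dictionaries.

```python
# Known misspellings and their corrections (illustrative only).
TYPO_FIXES = {"recieved": "received", "adress": "address"}

def clean_text(value):
    """Trim, collapse inner whitespace, and apply the typo map."""
    words = value.strip().split()
    fixed = [TYPO_FIXES.get(w.lower(), w) for w in words]
    return " ".join(fixed)

print(clean_text("  order   recieved  "))  # 'order received'
```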

Verification of Data Cleanliness

After removing the general errors, one needs to make sure that all the issues found in the data have really been removed. This can be done with the same data-scrubbing tool used to identify the errors in the first place, or by reviewing the whole dataset manually and asking others for feedback on what still needs to be rectified.
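Verification amounts to re-running the audit checks and insisting on a clean result, as in this sketch (same invented record layout as before):

```python
def is_clean(rows):
    """True when every record passes the audit rules:
    non-empty email and no leading/trailing whitespace in the name."""
    for row in rows:
        if not row.get("email"):
            return False
        if row["name"] != row["name"].strip():
            return False
    return True

cleaned = [{"name": "Bob Jones", "email": "bob@example.com"}]
assert is_clean(cleaned)
print("data verified clean")
```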


Reporting

Writing a report is also valuable, as it documents what was changed and why. Reporting is good practice, especially in a large firm: it puts the whole effort under analysis, records which parts were changed, and presents the outcomes in a written format for submission.
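In its simplest form, such a report is just a tally of the change log; the entries below are invented for illustration.

```python
from collections import Counter

# Each log entry records where a change was made and what it was.
change_log = [
    ("row 1", "filled missing email"),
    ("row 1", "trimmed name"),
    ("row 2", "removed duplicate"),
]

# Summarize how many times each kind of fix was applied.
report = Counter(action for _, action in change_log)
for action, count in sorted(report.items()):
    print(f"{action}: {count}")
```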

Common Errors

Speaking of things to rectify, let us get an idea of the kinds of errors that may need to be fixed. Knowing them makes it easier to identify problems before they cause trouble. The following are some of the common errors one can find while rectifying working data.

Duplicate Data Errors

This error involves the repetition of records or values in your data, whether a single field or an entire block. Duplicate data inflates the size of the dataset quickly and makes downstream work difficult.
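Exact duplicates can be dropped with an order-preserving pass like this sketch (the example rows are invented):

```python
def dedupe(rows):
    """Remove exact repeat records, keeping the first occurrence."""
    seen = set()
    unique = []
    for row in rows:
        key = tuple(sorted(row.items()))  # hashable form of the record
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

rows = [{"id": 1}, {"id": 2}, {"id": 1}]
print(dedupe(rows))  # [{'id': 1}, {'id': 2}]
```

Keying on the full record means only exact repeats are dropped; near-duplicates need fuzzier matching.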

Inconsistent Data Errors

This is an irregularity in the structure or formatting of the data, such as the same value recorded in different formats across records, and it makes a poor impression. The flow of the data is corrected and set consistently so it does not detract from the overall quality. It is fixed either with a scrubbing tool or manually.

General Errors

These include all the typing errors: typos, punctuation mistakes, incorrect capitalization, and more. They can be rectified simply with document-editing software. However, it is important to take care of every error so that no issues are left behind.
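A tiny sketch of this kind of general-error cleanup: collapse whitespace, trim repeated punctuation, and capitalize the first letter. The rules are invented for illustration; real tools apply much richer dictionaries and grammar checks.

```python
import re

def tidy(text):
    """Apply a few general-error fixes to a text field."""
    text = re.sub(r"\s+", " ", text.strip())    # collapse whitespace
    text = re.sub(r"([.!?]){2,}", r"\1", text)  # 'done!!' -> 'done!'
    return text[:1].upper() + text[1:] if text else text

print(tidy("the  report is done!!"))  # 'The report is done!'
```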

Data Scrubbing, as easy as it seems, can become a genuinely hectic task if not done correctly. Make use of the proper tools and knowledge to cleanse data of all kinds of errors, choose from the options available, and you are good to go.