Data Credibility – A Critical Dimension to Data Quality
In the last post, we talked about data quality, specifically as it relates to purchased parts and products. One important dimension of data quality that is being talked about these days is data credibility. Let us see what it means and entails, and how you can make it a part of your data quality and data management systems.
Before we start, let me share some facts and figures that are sure to fascinate you.
- 25% of critical data at the world’s top companies is flawed.
- Best-in-class companies claim they can only access 35% of newly added data. This figure drops to 10% for laggards.
- The average estimated loss due to erroneous data, across 140 companies, was $8,200,000.
- By 2020, the average organization will be handling over 30 zettabytes (30,000,000,000,000,000,000,000 bytes) of data.
(Sources – Gartner, Melissa Data, The Data Warehousing Institute)
Now that I have your attention, let us come back to data credibility. The idea is quite simple – data credibility is a measure of how much trust you can put in the data you are getting. As Malcolm Chisholm puts it in an Information Management article – “Data credibility is the extent to which the good faith of a provider of data or source of data can be relied upon to ensure that the data really represents what the data is supposed to represent, and that there is no intent to misrepresent what the data is supposed to represent.”
The definition is all well and good, but how does it affect me?
Unreliable data is one of the biggest problems holding back companies that are trying to get to the next level. A defining feature of companies that handle large amounts of data is that they work with a number of systems, all connected to each other (as they should be). When a problem surfaces in the latter half of any cycle, its source can be very hard to trace if the data was mismanaged in the initial stages. The resulting confusion spreads wrong information, which hampers processes and costs companies dearly. Credible data is the simple remedy that avoids all of these problems. Sound data management in the initial stages pays off far downstream, while mismanagement has repercussions that companies should be looking to avoid.
How do you get credible data?
Credible data can only be had with robust data management and data governance systems in place. Making sure that the data you have is cleaned, rationalized, and attributed is the first step; ensuring that it remains that way and is not corrupted by incoming data is the second. Tools that can help in this regard should definitely be on any large organization’s radar.
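To make the two steps above concrete, here is a minimal sketch of the kind of rule-based screening such tools perform on incoming records. The field names (“part_id”, “unit_price”, “description”) and the specific rules are hypothetical, chosen only to illustrate checks for completeness and duplication; real material master data would have far richer attribute rules.

```python
def validate_record(record, seen_ids):
    """Return a list of credibility issues found in a single record."""
    issues = []
    part_id = record.get("part_id")
    if not part_id:
        issues.append("missing part_id")
    elif part_id in seen_ids:
        issues.append(f"duplicate part_id: {part_id}")
    else:
        seen_ids.add(part_id)
    price = record.get("unit_price")
    if price is None or price < 0:
        issues.append("unit_price missing or negative")
    if not record.get("description", "").strip():
        issues.append("missing description")
    return issues


def screen_batch(records):
    """Split an incoming batch into clean records and flagged (record, issues) pairs."""
    seen_ids = set()
    clean, flagged = [], []
    for rec in records:
        issues = validate_record(rec, seen_ids)
        if issues:
            flagged.append((rec, issues))
        else:
            clean.append(rec)
    return clean, flagged
```

Running every new batch through a gate like this, before it reaches connected downstream systems, is what keeps already-cleaned data from being corrupted by new data.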
Note – Thanks are due to Prash Chan, for it was his blog that gave me the idea of writing about data credibility.
Blog – Why Do Businesses Overlook The Importance Of Data Quality Improvement?
Case Study – Material Master Data Management for a leading Oil and Gas company
White Paper – Sailing smooth through Data Vortex – Master data management