Data Profiling: What It Is, Tools & Best Practices

Data profiling, also known as data archaeology, is the process of reviewing and cleansing data to better understand its structure and to maintain data quality standards within an organization. In short, it is the practice of examining, analyzing, and creating useful summaries of data.

Data profiling yields a high-level overview that aids in the discovery of data quality issues, risks, and overall trends. It produces critical insights into data that companies can then leverage to their advantage.

Basics of data profiling

Data profiling is the process of reviewing source data and understanding its structure, content, and interrelationships. It also identifies the potential uses of that data in other projects.

Data profiling evaluates data based on factors such as accuracy, consistency, and timeliness to show whether it lacks consistency or accuracy, or contains null values. Depending on the data set, the result can be as simple as per-column descriptive statistics.

Data profiling is a crucial part of:

  • Data warehouse and business intelligence (DW/BI) projects: Data profiling can uncover data quality issues in data sources, and what needs to be corrected in ETL.
  • Data conversion and migration projects: Data profiling can identify data quality issues, which you can handle in scripts and data integration tools copying data from source to target. It can also uncover new requirements for the target system.
  • Source system data quality projects: Data profiling can highlight data suffering from serious or numerous quality issues. It can also highlight the source of the issues (e.g. user inputs, errors in interfaces, data corruption).

Specifically, data profiling sifts through data to determine its legitimacy and quality. Analytical algorithms compute dataset characteristics such as mean, minimum, maximum, percentiles, and frequency to examine the data in detail. Profiling then uncovers metadata, including frequency distributions, key relationships, foreign key candidates, and functional dependencies.

Finally, it uses all of this information to expose how those factors align with your business’s standards and goals.

Data profiling can eliminate costly errors that are common in customer databases. These errors include null (unknown or missing) values, values that should not be present, values with unusually high or low frequency, values that don't follow expected patterns, and values outside the normal range.
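As an illustrative sketch (plain Python, with hypothetical column data), a simple profiling pass can surface nulls and out-of-range values like these:

```python
from statistics import mean

def profile_column(values, low=None, high=None):
    """Count nulls and flag values outside an expected range in one column."""
    present = [v for v in values if v is not None]
    out_of_range = [
        v for v in present
        if (low is not None and v < low) or (high is not None and v > high)
    ]
    return {
        "null_count": len(values) - len(present),
        "null_pct": (len(values) - len(present)) / len(values) * 100,
        "mean": mean(present),
        "min": min(present),
        "max": max(present),
        "out_of_range": out_of_range,
    }

# A customer-age column with one missing value and one impossible value.
ages = [34, 29, None, 41, 250, 38]
report = profile_column(ages, low=0, high=120)
print(report["null_count"], report["out_of_range"])  # 1 [250]
```

Real profiling tools run checks like this across every column automatically; the sketch just shows the shape of the computation.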

Types of data profiling

There are three main types of data profiling:

Content discovery

This looks into individual data records to discover errors. Content discovery identifies which specific rows in a table contain problems, and which systemic issues occur in the data (for example, phone numbers with no area code).
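A minimal sketch of such a row-level check, assuming US-style numbers where a missing area code leaves only seven digits:

```python
import re

def missing_area_code(phone):
    """Flag a phone number whose digit count suggests a missing area code
    (7 digits instead of the 10 expected for US-style numbers)."""
    return len(re.sub(r"\D", "", phone)) == 7

# Content discovery answers: which specific rows contain the problem?
phones = ["555-0134", "(312) 555-0198", "5550172"]
bad_rows = [i for i, p in enumerate(phones) if missing_area_code(p)]
print(bad_rows)  # [0, 2]
```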

Relationship discovery

This discovers how parts of the data are interrelated. For example, the key relationships between database tables, and the references between cells or tables in a spreadsheet. Understanding relationships is crucial to reusing data; related data sources should be united into one or imported in a way that preserves important relationships.
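One relationship-discovery check can be sketched as follows: a column is a foreign-key candidate for another table's key column when every value it holds also appears among those keys. The column names below are hypothetical:

```python
def is_fk_candidate(child_values, parent_keys):
    """True if every non-null child value appears among the parent keys,
    making the child column a foreign-key candidate."""
    keys = set(parent_keys)
    return all(v in keys for v in child_values if v is not None)

order_customer_ids = [1, 3, 1, None, 2]   # values in a child column
customer_ids = [1, 2, 3, 4]               # key column of the parent table
print(is_fk_candidate(order_customer_ids, customer_ids))  # True
```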

Structure discovery

Structure discovery validates that data is consistent and correctly formatted, and performs mathematical checks on the data (e.g. sum, minimum, or maximum). It helps you understand how well data is structured; for example, what percentage of phone numbers do not have the correct number of digits.
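The phone-number example above can be sketched as an aggregate check (pure Python, 10-digit format assumed):

```python
import re

def pct_wrong_length(phones, expected_digits=10):
    """Percent of phone numbers whose digit count differs from the expected length."""
    bad = sum(1 for p in phones if len(re.sub(r"\D", "", p)) != expected_digits)
    return bad / len(phones) * 100

sample = ["(312) 555-0198", "555-0134", "312-555-0161", "3125550"]
print(pct_wrong_length(sample))  # 50.0
```

Where content discovery flags the specific offending rows, structure discovery reports this kind of aggregate percentage.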

Data profiling steps

Ralph Kimball, a data warehouse architecture expert, suggests a four-step process for data profiling:

  1. Use data profiling at project start to discover whether the data is suitable for analysis, and make a "go/no-go" decision on the project.
  2. Identify and correct data quality issues in source data, even before starting to move it into the target database.
  3. Identify data quality issues that can be corrected by Extract-Transform-Load (ETL), while data is moved from source to target. Data profiling can uncover if additional manual processing is needed.
  4. Identify unanticipated business rules, hierarchical structures, and foreign key/primary key relationships. Use them to fine-tune the ETL process.

Benefits of data profiling

By some estimates, bad data can cost businesses 30% or more of their revenue. For most companies, that means millions of dollars wasted, strategies recalculated, and tarnished reputations. And often, the culprit is oversight.

Companies can become so busy collecting data and managing operations that they compromise on the efficacy and quality of data. That could mean lost productivity, missed sales opportunities, and missed chances to improve the bottom line. That is where a data profiling tool comes in.

Once a data profiling application is engaged, it continually analyzes, cleans, and updates data in order to provide critical insights that are available right from your laptop. Specifically, data profiling provides:

Better data quality and credibility

Once data has been analyzed, the application can help eliminate duplications or anomalies. It can determine useful information that could affect business choices, identify quality problems that exist within an organization’s system, and be used to draw certain conclusions about the future health of a company.

Organized sorting

Most databases interact with a diverse set of data that can include blogs, social media, and other big data sources. Profiling can trace data back to its original source and ensure proper encryption for safety. A data profiler can then analyze those different databases, source applications, or tables, and ensure that the data meets standard statistical measures and specific business rules.

Understanding the relationship between available data, missing data, and required data helps an organization chart its future strategy and determine long-term goals. Access to a data profiling application can streamline these efforts.

Predictive decision making

Profiled information can be used to stop small mistakes from becoming big problems. It can also reveal possible outcomes for new scenarios. Data profiling helps create an accurate snapshot of a company’s health to better inform the decision-making process.

Proactive crisis management

Data profiling can help quickly identify and address problems, often before they escalate.

Challenges of data profiling

Data profiling challenges typically stem from the complexity of the work involved. More specifically, you can expect:

Expensive and time-consuming

Due to the sheer volume of data a typical organization collects, implementing a successful data profiling program can become very complex. Without the right tools, hiring trained experts to analyze the results and act on them becomes an expensive and time-consuming task.

Inadequate resources

To start the data profiling process, a company needs all of its data in one place, which is often not the case. If the data lives across different departments and no trained data professional is in place, it can become very difficult to profile the company's data as a whole.

Data profiling vs. data mining

While there is overlap with data mining, data profiling has a different goal in mind. What is the difference?

  • Data profiling helps in the understanding of data and its characteristics, whereas data mining is the process of discovering patterns or trends by analyzing the data.
  • Data profiling focuses on the collection of metadata and then using methods to analyze it to support data management.
  • Data profiling, unlike data mining, produces a summary of the data's characteristics and enables use of the data.

In other words, data profiling is the first of the tools you use to ensure the data is accurate and consistent.

Data profiling and data quality analysis best practices

Basic data profiling techniques:

  • Distinct count and percent: Identifies natural keys, the distinct values in each column that can help process inserts and updates. Handy for tables without headers.
  • Percent of zero/blank/null values: Identifies missing or unknown data. Helps ETL architects set up appropriate default values.
  • Minimum/maximum/average string length: Helps select appropriate data types and sizes in the target database. Enables setting column widths just wide enough for the data, to improve performance.
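The three basic techniques above can be sketched in one pass over a column (plain Python, illustrative data):

```python
def basic_profile(column):
    """Distinct count/percent, percent of blank/null values, and
    min/max/average string length for one column."""
    n = len(column)
    non_blank = [v for v in column if v not in (None, "")]
    lengths = [len(v) for v in non_blank]
    return {
        "distinct_count": len(set(non_blank)),
        "distinct_pct": len(set(non_blank)) / n * 100,
        "blank_null_pct": (n - len(non_blank)) / n * 100,
        "min_len": min(lengths),
        "max_len": max(lengths),
        "avg_len": sum(lengths) / len(lengths),
    }

stats = basic_profile(["alice", "bob", "", "alice", None, "carol"])
print(stats["distinct_count"], stats["min_len"], stats["max_len"])  # 3 3 5
```

A distinct percent near 100% suggests a natural key; the length stats inform column sizing in the target database.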

Advanced data profiling techniques:

  • Key integrity: Ensures keys are always present in the data, using zero/blank/null analysis. It also helps identify orphan keys, which are problematic for ETL and future analysis.
  • Cardinality: Checks relationships like one-to-one, one-to-many, many-to-many, between related data sets. This helps BI tools perform inner or outer joins correctly.
  • Pattern and frequency distributions: Check if data fields are formatted correctly, e.g., if emails are in a valid format. Extremely important for data fields used for outbound communications (emails, phone numbers, addresses).
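Two of these checks, key integrity (orphan keys) and pattern validation (email format), can be sketched as follows; the key values and the email regex are illustrative assumptions, not a production-grade validator:

```python
import re

def find_orphan_keys(fact_keys, dimension_keys):
    """Key integrity: foreign keys in the fact data with no matching
    primary key in the dimension data ("orphan" keys)."""
    return sorted(set(fact_keys) - set(dimension_keys))

# Deliberately simple email shape: something@something.something
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def invalid_emails(values):
    """Pattern check: entries that do not match the simple email shape."""
    return [v for v in values if not EMAIL_RE.match(v)]

print(find_orphan_keys([1, 7, 2], [1, 2, 3]))       # [7]
print(invalid_emails(["a@b.com", "not-an-email"]))  # ['not-an-email']
```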

Data profiling tools: Open source and commercial

Data profiling is a tedious and labor-intensive activity, but it can be automated with tools that make large data projects more feasible. These tools are essential to your data analytics stack.

Open-source data profiling tools

1. Aggregate Profiler (Open Source Data Quality and Profiling)

Key features include:

  • Data profiling, filtering, and governance
  • Similarity checks
  • Data enrichment
  • Real-time alerting for data issues or changes
  • Basket analysis with bubble chart validation
  • Single customer view
  • Dummy data creation
  • Metadata discovery
  • Anomaly discovery and data cleansing tool
  • Hadoop integration

2. Quadient DataCleaner

Key features include:

  • Data quality, data profiling and data wrangling
  • Detect and merge duplicates
  • Boolean analysis
  • Completeness analysis
  • Character set distribution
  • Date gap analysis
  • Reference data matching

3. Talend Open Studio (a suite of open-source tools)

Data quality features include:

  • Customizable data assessment
  • A pattern library
  • Analytics with graphical charts
  • Fraud pattern detection
  • Column set analysis
  • Advanced matching
  • Time column correlation

Commercial data profiling tools

4. Data Profiling in Informatica

Key features include:

  • Data stewardship console which mimics data management workflow
  • Exception handling interface for business users
  • Enterprise data governance
  • Map data quality rules once and deploy on any platform
  • Data standardization, enrichment, de-duplication and consolidation
  • Metadata management

5. Oracle Enterprise Data Quality

Key features include:

  • Data profiling, auditing and dashboards
  • Parsing and standardization including constructed fields, misfiled data, poorly structured data and notes fields
  • Automated match and merge
  • Case management by human operators
  • Address verification
  • Product data verification
  • Integration with Oracle Master Data Management

6. SAS DataFlux

Key features include:

  • Extracts, cleanses, transforms, conforms, aggregates, loads and manages data
  • Supports batch-oriented and real-time Master Data Management
  • Creates real-time, reusable data integration services
  • User-friendly semantic reference data layer
  • Visibility into where data originated and how it was transformed
  • Optional enrichment components

Data profiling in action

With the enormous amount of data available today, companies sometimes get overwhelmed by all the information they’ve collected. As a result, they fail to take full advantage of their data, and its value and usefulness diminish.

Data profiling organizes and manages big data to unlock its full potential and deliver powerful insights.

Domino’s data avalanche

With almost 14,000 locations, Domino’s was already the largest pizza company in the world by 2015. But when the company launched its AnyWare ordering system, it suddenly faced an avalanche of data. Users could now place orders through virtually any type of device or app, including smartwatches, TVs, car entertainment systems, and social media platforms.

That meant Domino’s had data coming at it from all sides. By putting reliable data profiling to work, Domino’s now collects and analyzes data from all of the company’s point-of-sale systems to streamline analysis and improve data quality.

As a result, Domino’s has gained deeper insights into its customer base, enhanced its fraud detection processes, boosted operational efficiency, and increased sales.

Data quality for customer loyalty

Office Depot combines an online presence with ongoing offline strategies. Integration of data is crucial, combining information from three channels: the offline catalog, the online website, and customer call centers.

Among other things, Office Depot uses data profiling to perform checks and quality control on data before it is entered into the company’s data lake. Integrated online and offline data results in a complete 360-degree view of customers. It also provides high-quality data to back-office functions throughout the company.

Higher customer lifetime value with healthy data

Globe Telecom provides connectivity services to more than 94.2 million mobile subscribers and 2 million home broadband customers in the Philippines. Opportunities to expand market share are limited, so it was vital that Globe get a better understanding of its existing customer base so it could grow the lifetime value of each relationship.

To deliver the customer insights the business required, Globe needed data that was healthy and suitable for applications such as data analytics. However, this proved to be a challenge in areas like data scoring, which at that point was manually addressed by using spreadsheets and offline databases to apply validation and data quality rules to existing data.

Today, Globe operates a center of excellence for its data that encompasses data quality, data engineering, and data governance. With healthy data, Globe improved the availability of data quality scores from once a month to every day, increased trusted email addresses by 400%, and achieved higher ROI per marketing campaign.

Metrics include a 30% cost reduction per lead, a 13% improvement in conversion rates, and an 80% increase in click-through rates.
