Clean, white clouds of data, anyone?

SnapLogic’s Summer 2011 Release is another huge step forward in helping our customers eliminate the manual drudgery of cleansing and enriching data as it moves between disparate applications and departments. We’re not the first data integration company to laud the virtues of data quality management, but as with a lot of things here at SnapLogic, we make it a whole lot easier and less expensive. Perhaps most significant is that you can now make data quality management a seamless part of your company’s integration plumbing. As I’ll explain below, this single innovation can have a huge impact on your business, and SnapLogic’s approach is a break from past efforts to tackle this challenge.

Historically, many businesses, especially in the mid-market, would cleanse and enrich their data as a batch process. Often this meant sending a file to a third party or running it through a desktop program, then uploading the cleansed file back to the system it came from.

Larger enterprises could afford expensive, complex on-premise solutions under the banner of data quality management, MDM and data governance, yet even these can become unwieldy as the enterprise introduces more and more cloud-based applications and data sources.

More recently, cloud-based data quality solutions have emerged that offer pay-as-you-go pricing and swift deployment, but they often fail to incorporate real-time cleansing and enrichment as your data moves between multiple, disparate on-premise and cloud-based systems. Or worse yet, they simply assess your data but don’t give you the tools to improve its quality.

So, how does our new solution automate data quality management as part of your integration architecture (“the plumbing”) rather than as an ad hoc batch process? Let me explain with a simple example.

Let’s say each month we want to pull new customer account data out of a MySQL database so we can assign large customers to the enterprise account services team and flag midsize customers for a welcome kit mailing handled by the SMB Programs Manager. Here’s the flow and how it might look using the SnapLogic Designer (sketched in code just after the list):

  1. Pull new account data out of MySQL
  2. Verify, cleanse and enrich data using the Trillium Data Quality Snap
  3. Segment into “Enterprise” and “SMB Accounts” using SnapLogic’s Expression-Based Filtering
  4. Move Enterprise data into Salesforce.com and SMB data into a file for the SMB direct mail program
  5. Output Enterprise data into a flat file, ready for further processing by a business analyst or others evaluating customer acquisition trends.
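
For readers who think in code, here’s a minimal Python sketch of those five steps. To be clear, SnapLogic pipelines are assembled visually in the Designer, so every function below is a hypothetical stand-in for the corresponding Snap, not a real SnapLogic API.

```python
# Hypothetical sketch of the pipeline above. Each helper stands in for a
# Snap you would drag into the SnapLogic Designer; none are real APIs.

def read_new_accounts_from_mysql():
    """Step 1: pull this month's new account records out of MySQL."""
    return []  # stand-in for the MySQL read Snap

def verify_with_trillium(accounts):
    """Step 2: verify, cleanse, and enrich records (Trillium DQ Snap)."""
    return accounts  # stand-in for the Trillium Data Quality Snap

def segment_accounts(accounts):
    """Step 3: split into Enterprise and SMB via filter expressions
    (the real criteria are spelled out later in the post; simplified
    here to employee count only)."""
    enterprise = [a for a in accounts if a["employees"] > 1000]
    smb = [a for a in accounts if a["employees"] <= 1000]
    return enterprise, smb

def load_into_salesforce(accounts):
    """Step 4a: load Enterprise accounts into Salesforce.com."""
    pass  # stand-in for the Salesforce Snap

def write_csv(path, accounts):
    """Steps 4b and 5: write a flat file for mailing or analysis."""
    pass  # stand-in for the CSV writer Snap

def run_monthly_pipeline():
    accounts = read_new_accounts_from_mysql()
    verified = verify_with_trillium(accounts)
    enterprise, smb = segment_accounts(verified)
    load_into_salesforce(enterprise)                  # Step 4: CRM load
    write_csv("smb_mailing.csv", smb)                 # Step 4: SMB mail file
    write_csv("enterprise_accounts.csv", enterprise)  # Step 5: analyst file
```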

So, the first thing we’re going to do with the new account data is verify, cleanse and enrich it against Trillium’s global address verification system. For contacts that can’t be verified, we could even create an exception file and alert the sales operations manager.
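
Here’s a minimal sketch of that exception handling in Python, assuming a hypothetical verify_address wrapper around the Trillium verification call:

```python
import csv

def verify_address(contact):
    """Hypothetical wrapper around Trillium's global address verification;
    returns (cleansed_contact, True) on success or (contact, False) if the
    address can't be verified."""
    return contact, bool(contact.get("postal_code"))  # placeholder rule

def notify_sales_ops(path):
    """Stand-in for an alert to the sales operations manager."""
    print(f"Exception file ready for review: {path}")

def cleanse_and_partition(contacts):
    verified, exceptions = [], []
    for contact in contacts:
        cleansed, ok = verify_address(contact)
        (verified if ok else exceptions).append(cleansed)
    if exceptions:
        # Write the unverifiable contacts to an exception file
        with open("unverified_contacts.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=exceptions[0].keys())
            writer.writeheader()
            writer.writerows(exceptions)
        notify_sales_ops("unverified_contacts.csv")
    return verified
```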

Next, we’re going to filter the accounts into two distinct sets of data: “Enterprise Accounts” and “SMB Accounts.” This is accomplished using plain language, not complicated coding.

For “Enterprise Accounts” we want accounts that meet any of the following criteria:

1) More than 1,000 employees, OR

2) Over $50 million in revenue, OR

3) Four or more offices.

For “SMB Accounts” we want accounts that have:

1) Between 100 and 1,000 employees, OR

2) Fewer than four offices, AND

3) No Reseller Partner handling the account.
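
In the Designer you’d enter those conditions as plain-language expressions. Written out as Python predicates (and assuming the AND groups the last two SMB conditions together, which is how we read it), the logic looks like this:

```python
def is_enterprise(account):
    """Enterprise: >1,000 employees OR >$50M revenue OR 4+ offices."""
    return (account["employees"] > 1000
            or account["revenue"] > 50_000_000
            or account["offices"] >= 4)

def is_smb(account):
    """SMB: 100-1,000 employees, OR (fewer than 4 offices AND no
    reseller partner). Assumes the AND binds the last two conditions."""
    return (100 <= account["employees"] <= 1000
            or (account["offices"] < 4
                and not account.get("reseller_partner")))

# Quick check against a sample record
acme = {"employees": 250, "revenue": 12_000_000, "offices": 2,
        "reseller_partner": None}
assert not is_enterprise(acme) and is_smb(acme)
```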

We want to load the Enterprise accounts into Salesforce.com and the SMB accounts into a CSV for the mailing. Instead of having to code a nested expression or burn cycles with your DBA, you can simply plug the filter logic directly into the SnapLogic Designer and send each data set to the destination of your choosing.
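
To make the flat-file side concrete, here’s what that CSV output might look like using Python’s standard library; the field names are illustrative, not from an actual schema:

```python
import csv

def write_csv(path, accounts):
    """Write accounts to a flat file; field names here are made up."""
    fields = ["name", "employees", "revenue", "offices", "mailing_address"]
    with open(path, "w", newline="") as f:
        # extrasaction="ignore" skips any record keys not in the field list
        writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(accounts)
```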

When the CSV file is ready, you could even choose to send a notification email to the SMB Programs Manager so that she knows her campaign file is waiting.
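
A bare-bones version of that notification using Python’s standard library; the addresses and SMTP host are placeholders:

```python
import smtplib
from email.message import EmailMessage

def notify_smb_programs_manager(csv_path):
    msg = EmailMessage()
    msg["Subject"] = "SMB mailing file is ready"
    msg["From"] = "pipeline@example.com"    # placeholder sender
    msg["To"] = "smb-programs@example.com"  # placeholder recipient
    msg.set_content(f"The welcome-kit mailing file is ready: {csv_path}")
    with smtplib.SMTP("mail.example.com") as smtp:  # placeholder host
        smtp.send_message(msg)
```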

Likewise, even though the Enterprise data is loaded automatically into Salesforce.com, we’ll also output it to a CSV file for review by a sales operations analyst. Of course, we could also feed this data into a BI application like Birst.

The best part is that when we want to run this same operation next month, we can simply push Play or schedule it to run automatically on a day of our choosing. This builds data cleansing directly into the ongoing business process.

