Why citizen integrators are today’s architects of customer experience

By Nada daVeiga

Lately, I’ve been thinking a lot about customer experience (CX) and the most direct, most effective ways for companies to transform it. As I recently blogged, data is the centerpiece – the metaphorical cake, as it were, compared to the martech frosting – of creating winning customer experiences.

That being said, which internal organization could possibly be better positioned than marketing to shape customer experience?

Nearly every enterprise function shapes CX

As it turns out, there are many teams within the modern enterprise that serve as CX architects. Think of all the different groups that contribute to customer engagement, acquisition, retention, and satisfaction: marketing, sales, service, and support are the most obvious, but what about product development, finance, manufacturing, logistics, and shipping? All of these functions impact the customer experience, directly or indirectly, and thus should be empowered to improve it through unbridled data access.

This point of view is reflected in SnapLogic’s new white paper, “Integration in the age of the customer: The five keys to connecting and elevating customer experience.” From it, a key thought:

[W]ho should corral the data? The best outcomes from customer initiatives happen when the business takes control and leads the initiative. The closer the integrators are to the customer, the better they can put themselves in their customers’ shoes and understand their needs. Often, they have a clear handle on metrics, the business processes, the data, and real-world customer experiences, whether they’re in marketing, sales, or service, and are the first to see how the changes they’re making are improving customer experience — or not.

Democratizing data integration

Because most departmental leaders in sales, service, and marketing are typically unfamiliar with programming, they look for integration solutions with click-not-code graphical user interfaces (GUIs) that make customer data integration visual and intuitive. SnapLogic believes that this GUI-driven, democratized data integration is an essential first step in empowering today’s CX architects to gain the analytic insight they need to improve customer experience.

In short, we believe that “citizen integrator” is really just another name for “citizen innovator”: fast, easy, seamless data integration shatters stubborn barriers to CX innovation by igniting exploration and problem-solving creativity.

To learn how to design your integration strategy to improve customer experience across the organization, download the white paper, “Integration in the age of the customer: The five keys to connecting and elevating customer experience.” In it, you’ll find actionable insights on how to optimize your organization’s data integration strategy to unlock CX innovation, including:

  • Why you need to ensure your organization’s integration strategy is customer-focused
  • How to plan around the entire customer lifecycle
  • Which five integration strategies help speed customer analytics and experience initiatives
  • How to put the odds of customer success in your favor

Nada daVeiga is VP Worldwide Pre-Sales, Customer Success, and Professional Services at SnapLogic. Follow her on Twitter @nrdaveiga.

Data integration: The key to operationalizing innovation

By Craig Stewart

It’s not just a tongue twister. Operationalizing innovation has proven to be one of the most elusive management objectives of the new millennium. Consider this sound bite from an executive who’d just participated in an innovation conference in 2005:

The real discussion at the meeting was about … how to operationalize innovation. All roads of discussion led back to that place. How do you make your company into a systemic innovator? There is no common denominator out there, no shared understanding on how to do that.[1]

The good news is that, in the 12 years since, cloud computing has exploded, and a common denominator clearly emerged: data. Specifically, putting the power of data – big data, enterprise data, and data from external sources – and analytics into users’ hands. More good news: An entirely new class of tools[2] has emerged that allows business users to become “citizen data analysts.”

The bad news: There hasn’t been a fast, easy way to perform the necessary integrations between data sources in the cloud – an essential first step and the foundation of citizen data analytics, today’s hottest source of innovation.

Until now.

The SnapLogic Enterprise Integration Cloud is a mature, full-featured Integration Platform-as-a-Service (iPaaS) built in the cloud, for the cloud. Through its visual, automated approach to integration, the SnapLogic Enterprise Integration Cloud uniquely empowers both business and IT users, accelerating analytics initiatives on Amazon Redshift and other cloud data warehouses.

Unlike on-premises ETL or immature cloud tools, SnapLogic combines ease of use, streaming scalability, on-premises and cloud integration, and managed connectors called Snaps. Together, these capabilities present a 10x improvement over legacy ETL solutions like Informatica or other “cloud-washed” solutions originally designed for on-premises use, accelerating integrations from months to days.

By enabling “citizen integrators” to quickly build, deploy, and efficiently manage multiple high-volume, data-intensive integration projects, SnapLogic uniquely delivers:

  • Ease of use for business and IT users through a graphical approach to integration
  • A solution built for scale, offering bulk data movement and streaming data integration
  • Ideal capabilities for hybrid environments, with over 400 Snaps to handle relational, document, unstructured, and legacy data sources
  • Cloud data warehouse-readiness with native support for Amazon Redshift and other popular cloud data warehouses
  • Built-in data governance* by synchronizing data in Redshift at any time interval desired, from real-time to overnight batch

* Why data governance matters

Analytics performed on top of incorrect data yield incorrect results – a detriment, certainly, in the quest to operationalize innovation. Data governance is a significant topic, and a major concern of IT organizations charged with maintaining the consistency of data routinely accessed by citizen data scientist and citizen integrator populations. Gartner estimates that only 10% of self-service BI initiatives are governed[3] to prevent inconsistencies that adversely affect the business.

Data discovery initiatives using desktop analytics tools risk creating inconsistent silos of data. Cloud data warehouses afford increased governance and data centralization. SnapLogic helps to ensure strong data governance by replicating source tables into Redshift clusters, where the data can be periodically synchronized at any time interval desired, from real-time to overnight batch. In this way, data drift is eliminated, allowing all users who access data, whether in Redshift or other enterprise systems, to be confident in its accuracy.
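
To make the synchronization pattern concrete, here is a minimal sketch of the kind of periodic refresh such a pipeline performs, assuming a staged extract already lands in S3. The cluster endpoint, bucket, table, and IAM role names are hypothetical placeholders, and this is an illustrative pattern, not SnapLogic’s implementation:

```python
# Minimal sketch: periodically refresh one source table in Amazon Redshift.
# All connection details, paths, and names below are hypothetical.
import time
import psycopg2  # standard PostgreSQL driver; Redshift speaks the same protocol

def sync_once():
    """Reload the staged S3 extract into Redshift in a single transaction."""
    with psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="warehouse", user="loader", password="..."
    ) as conn, conn.cursor() as cur:
        # DELETE (not TRUNCATE, which commits implicitly in Redshift) keeps
        # the refresh atomic: readers never see a half-loaded table.
        cur.execute("DELETE FROM analytics.customers;")
        cur.execute("""
            COPY analytics.customers
            FROM 's3://example-staging-bucket/customers/'
            IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
            FORMAT AS CSV GZIP;
        """)
    # exiting the `with` block commits on success and rolls back on error

if __name__ == "__main__":
    while True:
        sync_once()
        time.sleep(3600)  # any interval works, from near-real-time to nightly
```

A loop like this, pointed at each replicated source table, is the essence of synchronizing “at any time interval desired”; a production pipeline would add scheduling, monitoring, and incremental loads.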

To find out more about how SnapLogic empowers citizen data scientists, and how a global pharmaceutical company is using SnapLogic to operationalize innovation, get the white paper, “Igniting discovery: How built-for-the-cloud data integration kicks Amazon Redshift into high gear.”

Craig Stewart is Vice President, Product Management at SnapLogic.

[1] “Operationalizing Innovation–THE hot topic,” Bruce Nussbaum, Bloomberg, September 28, 2005. https://www.bloomberg.com/news/articles/2005-09-28/operationalizing-innovation-the-hot-topic

[2] “The 18 Best Analytics Tools Every Business Manager Should Know,” Bernard Marr, Forbes, February 4, 2016. https://www.forbes.com/sites/bernardmarr/2016/02/04/the-18-best-analytics-tools-every-business-manager-should-know/#825e6115d397

[3] “Predicts 2017: Analytics Strategy and Technology,” Kurt Schlegel, et al., Gartner, November 30, 2016. ID: G00316349

James Markarian: Was the Election a Referendum on Predictive Analytics?

In his decades working in the data and analytics industry, SnapLogic CTO James Markarian has witnessed few mainstream events that have sparked as much discussion and elicited as many questions – around the value and accuracy of predictive analytics tools – as our recent election.

In a new blog post on Forbes, James examines where the nation’s top pollsters (who across the board predicted a different election outcome) may have gone wrong, why some predictions succeed and others fail, what businesses that have invested in data analytics can learn from the election, and how new technologies such as integration platform as a service (iPaaS) can help them make sense of all their data to make better predictions.

Be sure to read James’s blog post, “What The Election Taught Us About Predictive Analytics,” on Forbes here.

7 Big Data Predictions for 2017

As data increasingly becomes the means by which businesses compete, companies are restructuring operations to build systems and processes that liberate data access, integration, and analysis up and down the value chain. Effective data management has become so important that the Chief Data Officer is projected to become a standard senior board-level role by 2020, with 92 percent of CIOs stating that a CDO is the best person to determine data strategy.

With this in mind as you evaluate your data strategy for 2017, here are seven predictions to contemplate to build a solid framework for data management and optimization.

  1. Self-Service Data Integration Will Take Off
    Eschewing the IT bottleneck designation and committed to being a strategic partner to the business, IT is transforming its mindset. Rather than be providers of data, IT will enable users to achieve data optimization on a self-service basis. IT will increasingly decentralize app and data integration – via distributed Centers of Excellence based on shared infrastructure, frameworks and best practices – thereby enabling line-of-business heads to gather, integrate and analyze data themselves to discern and quickly act upon insightful trends and patterns of import to their roles and responsibilities. Rather than fish for your data, IT will teach you how to bait the hook. The payoff for IT: satisfying business user demand for fast and easy integrations and accelerated time to value; preserving data integrity, security and governance on a common infrastructure across the enterprise; and freeing up finite IT resources to focus on other strategic initiatives.
  2. Big Data Moves to the Cloud
    As the year takes shape, expect more enterprises to migrate storage and analysis of their big data from traditional on-premises data stores and warehouses to the cloud. For the better part of the last decade, Hadoop’s distributed computing and processing power has made it the standard open source platform for big data infrastructures. But Hadoop is far from perfect. Common user gripes include complexity and instability – not all that surprising given all the software developers regularly contributing their improvements to the platform. Cloud environments are more stable, flexible, elastic, and better suited to handling big data, hence the predicted migration.
  3. Spark Usage Outside of Hadoop Will Surge
    This is the year we will also see more Spark use cases outside of Hadoop environments. While Hadoop limps along, Spark is picking up the pace. Hadoop is still more likely to be used in testing rather than production environments. But users are finding Spark to be more flexible, adaptable, and better suited for certain workloads – machine learning and real-time streaming analytics, for example (see the sketch after this list). Once relegated to the role of Hadoop sidekick, Spark will break free and stand on its own two feet this year. I’m not alone in asking the question: Hadoop needs Spark, but does Spark need Hadoop?
  4. A Big Fish Acquires a Hadoop Distro Vendor?
    Hadoop distribution vendors like Cloudera and Hortonworks paved the way with promising technology and game-changing innovation. But this past year saw growing frustration among customers lamenting increased complexity, instability, and, ultimately, too many failed projects that never left the labs. As Hadoop distro vendors work through some growing pains (not to mention limited funds), could it be that a bigger, deeper-pocketed established player – say Teradata, Oracle, Microsoft, or IBM – might swoop in to buy their sought-after technology and marry it with a more mature organization? I’m not counting it out.
  5. AI and ML Get a Bit More Mainstream
    Off-the-shelf AI (artificial intelligence) and ML (machine learning) platforms are loved for their simplicity, low barrier to entry, and low cost. In 2017, off-the-shelf AI and ML libraries from Microsoft, Google, Amazon, and other vendors will be embedded in enterprise solutions, including mobile varieties. Tasks that have until now been manual and time-consuming will become automated and accelerated, extending into the world of data integration.
  6. Yes, IoT is Coming, Just Not This Year
    Connecting billions and billions of sensor-embedded devices and objects over the internet is inevitable, but don’t yet swallow all the hype. Yes, a lot is being done to harness IoT for specific aims, but progress toward a general-purpose IoT platform is closer to a canter than a gallop. Today’s IoT solutions are so bespoke and purpose-built – the market still nascent, with standards gradually evolving – that a general-purpose, mass-adopted IoT platform to collect, integrate, and report on data in real time will take, well, more time. Like any other transformative movement in the history of enterprise technology, brilliant bits and pieces need to come together as a whole. It’s coming, just not in 2017.
  7. APIs Are Not All They’re Cracked Up to Be
    APIs have long been the glue connecting apps and services, but customers will continue to question their value versus investment in 2017. Few would dispute that APIs are useful in building apps and, in many cases, may be the right choice in this regard. But in situations where the integration of apps and/or data is needed, there are better ways. Case in point is iPaaS (integration platform as a service), which allows you to quickly and easily connect any combination of cloud and on-premises technologies. Expect greater migration this year toward cloud-based enterprise integration platforms – compared to APIs, iPaaS solutions are more agile, better equipped to handle the vagaries of data, more adaptable to change, easier to maintain, and far more productive.
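
As promised in the third prediction above, here is a minimal sketch of Spark running entirely outside Hadoop: a local standalone session doing batch analytics on plain files, with no YARN or HDFS anywhere. The input path and column names are hypothetical:

```python
# Minimal sketch: Spark with no Hadoop cluster - a local standalone session.
# The input path and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .master("local[*]")                 # local threads; no YARN, no HDFS
    .appName("standalone-spark-sketch")
    .getOrCreate()
)

# Read newline-delimited JSON straight from the local filesystem
events = spark.read.json("events/*.json")

# A typical analytics aggregation: event counts and average latency per device
summary = (
    events.groupBy("device_id")
          .agg(F.count("*").alias("events"),
               F.avg("latency_ms").alias("avg_latency_ms"))
)
summary.show()

spark.stop()
```

The same session could swap the local master for a standalone cluster and the JSON source for Kafka-backed structured streaming – exactly the “Spark without Hadoop” pattern the prediction describes.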

I could go on and on, if for no other reason than that predictions are informed “best guesses” about the future. If I’m wrong on two or three of my expectations, my peers will forgive me. In the rapidly changing world of technology, batting .400 is a pretty good average.

Connect with SnapLogic at AWS re:Invent

This week, the SnapLogic team will be supporting one of our partners, Amazon Web Services, in Las Vegas for the annual AWS re:Invent conference. This gathering of the global AWS community will feature hands-on labs and bootcamps, covering topics such as infrastructure maintenance, developer productivity, network security, and application performance.

Continue reading “Connect with SnapLogic at AWS re:Invent”

A Hadoop Data Lake For Banking: A SnapLogic Story

Last week, part of the SnapLogic team was in New York City for the Strata/Hadoop World conference. It’s one of the largest big data events in the U.S. and has grown steadily larger over recent years. The agenda has shifted a bit as well – from largely academic discussions and how-to presentations by open source committers to real-world case studies by non-ISV enterprises.

With that in mind, I’d like to share a story from one of our enterprise customers – a financial institution more than 100 years old, and perhaps not a company you would associate with the cutting edge of data management technologies. Due to the nature of their industry, I can’t share their name.

Like many established companies, this bank’s data processing and storage systems have been acquired or added over the years based on the most pressing needs and compliance requirements at the time. They ultimately found themselves trying to manage an unwieldy mix of 240+ interfaces and applications. Continue reading “A Hadoop Data Lake For Banking: A SnapLogic Story”

SnapLogic Live: Tableau Integration

Data visualization is a hot topic, but it is only useful when the most up-to-date data is available. This SnapLogic Live session features Tableau Integration for customers seeking rapid integration to connect Tableau with other cloud and on-premises applications and data sources.


Continue reading “SnapLogic Live: Tableau Integration”