Moving your data warehouse to the cloud: Look before you jump

By Ravi Dharnikota

Where’s your data warehouse? Is it still on-premises? If so, you’re not alone. Way back in 2011, in its IT predictions for 2012 and beyond, Gartner said, “At year-end 2016, more than 50 percent of Global 1000 companies will have stored customer-sensitive data in the public cloud.”

While it’s hard to find an exact statistic on how many enterprise data warehouses have migrated, cloud warehousing is increasingly popular as companies struggle with growing data volumes, service-level expectations, and the need to integrate structured warehouse data with unstructured data in a data lake.

Cloud data warehousing provides many benefits, but getting there isn’t easy. Migrating an existing data warehouse to the cloud is a complex process of moving schema, data, and ETL. The complexity increases when the database schema must be restructured or data pipelines rebuilt.
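To make the first two of those concrete, here is a minimal sketch of copying one table’s schema and data from an on-premises database to a cloud warehouse, using SQLAlchemy and pandas. The connection URLs and table name are placeholders, and a real migration would also carry over types, constraints, and the ETL jobs that feed the table.

```python
# A minimal sketch of moving one table's schema and data to the cloud.
# The connection URLs and table name are placeholders, not real endpoints;
# the target URL assumes the snowflake-sqlalchemy dialect is installed.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("postgresql://user:pass@onprem-host/warehouse")
target = create_engine("snowflake://user:pass@account/db/schema")

# Stream rows in batches to keep memory bounded. pandas creates the
# target table from the inferred schema if it does not already exist --
# a convenience, not a substitute for deliberate schema conversion.
for chunk in pd.read_sql_table("fact_sales", source, chunksize=50_000):
    chunk.to_sql("fact_sales", target, if_exists="append", index=False)
```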

This post is the first in a “look before you leap” three-part series on how to jump-start your migration of an existing data warehouse to the cloud. As part of that, I’ll also cover how cloud-based data integration solutions can significantly speed your time to value.

Beyond basic: The benefits of cloud data warehousing

Cloud data warehousing is a Data Warehouse as a Service (DWaaS) approach that simplifies the time-consuming and costly management, administration, and tuning activities typical of on-premises data warehouses. But beyond the obvious – the data warehouse being stored in the cloud – there’s more. Processing is also cloud-based, and all major solution providers charge separately for storage and compute resources, both of which are highly scalable.

All of which leads us to a more detailed list of key advantages:

  • Scale up (and down): The volume of data in a warehouse typically grows at a steady pace as history accumulates, with sudden upticks from events such as mergers and acquisitions or the addition of new subject areas. The inherent scalability of a cloud data warehouse allows you to adapt to growth, adding resources incrementally (via automated or manual processes) as data and workload increase. The elasticity of cloud resources lets the data warehouse quickly expand and contract data and processing capacity as needed, with no impact on availability, stability, performance, or security.
  • Scale out: Adding more concurrent users requires the cloud data warehouse to scale out. You can add more resources – either more nodes to an existing cluster or an entirely new cluster, depending on the situation – as the number of concurrent users rises, allowing more users to query the same data without performance degradation (see the sketch after this list).
  • Managed infrastructure: Eliminating the overhead of data center management and operations for the data warehouse frees up resources to focus where value is produced: using the data warehouse to deliver information and insight.
  • Cost savings: On-premises data centers are extremely expensive to build and operate, requiring staff, servers and other hardware, networking, floor space, power, and cooling. (Comparison sites provide hard-dollar data on many of these data center cost elements.) When your data warehouse lives in the cloud, the operating expense in each of these areas is eliminated or substantially reduced.
  • Simplicity: Cloud data warehouse resources can be accessed through a browser and activated with a payment card. Fast self-service removes IT middlemen and democratizes access to enterprise data.
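To illustrate the scale-out point above, here’s what elastic resizing can look like on one concrete platform, assuming an Amazon Redshift cluster managed with boto3. The cluster identifier, node type, and node count are placeholders; other cloud warehouses expose similar controls.

```python
# A hedged sketch of scaling out a cloud data warehouse: resizing an
# Amazon Redshift cluster via boto3. All identifiers are placeholders.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Grow the cluster to 8 nodes ahead of a spike in concurrent users;
# the service redistributes data across the new nodes for you.
redshift.resize_cluster(
    ClusterIdentifier="analytics-warehouse",
    ClusterType="multi-node",
    NodeType="dc2.large",
    NumberOfNodes=8,
)
```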

In my next post, I’ll do a quick review of additional benefits and then dive into data migration. If you’d like to read all the details about the benefits, techniques, and challenges of migrating your data warehouse to the cloud, download the Eckerson Group white paper, “Jump-Start Your Cloud Data Warehouse: Meeting the Challenges of Migrating to the Cloud.”

Ravi Dharnikota is Chief Enterprise Architect at SnapLogic. Follow him on Twitter @rdharn1.

The commoditization of integration

By Dinesh Chandrasekhar

Eight years ago, dozens of integration vendors were offering scores of solutions, all with what seemed to be the same capabilities. Pick any ESB or ETL tool and it seemed to perform the same functions as its competitors. RFPs were no longer a viable way to weed out inferior vendors, as every solution checked all the boxes across the board. Plus, all vendors were ready to lower their prices at the drop of a hat to win your business. It was at this time that the integration market had truly reached commoditization: consumers could easily pick any solution, as there were no true differentiators among them.

But, several factors have changed the landscape since then:

  • NoESB – The NoESB architecture started gaining interest, pushing the idea that an ESB is irrelevant for many integration scenarios. Yet an API gateway was not the right alternative either.
  • Cloudification – The cloudification of pretty much all your favorite on-premises enterprise applications began around the same time. Enterprises that were thinking of a digital transformation couldn’t get too far without a definitive cloud strategy in place.
  • Convergence of ESB and ETL – The lines between application integration and data integration were blurring. CIOs and IT managers didn’t want to deal with two different sets of integration tools. With the onset of mobile and IoT, data volumes were exploding daily, and even data warehouses moved to the cloud. Traditional, legacy ESB and ETL tools were simply not built to serve such big data needs.
  • Agile Integrations – Finally, the DevOps and Agile movements impacted enterprise integration initiatives as well. They gave rise to new user personas in the enterprise: citizen integrators, or citizen developers. These are the line-of-business managers and other non-IT personnel who need quick integrations within their applications to render their data in different views. Reliance on IT to deliver solutions to the business was becoming a major hindrance.

All these factors have influenced the iPaaS (Integration Platform as a Service) market. Now, thousands of companies are already leveraging iPaaS solutions to integrate their cloud and on-premises applications. iPaaS solutions break away from legacy approaches to integration: they are cloud-native, intuitive, fast, and self-starting; they support hybrid architectures; and they offer connectors to a wide range of on-premises and cloud applications.

Now comes the big question: “Will iPaaS solutions be commoditized, too?” At the moment, the answer is a definite no, and there are multiple reasons why. Beyond scale, latency, tenancy, SLAs, the number of connectors, and so on, one of the key areas that will differentiate iPaaS solutions is the developer experience. The user interface of the solution will determine its adoption rate and the value it brings to the enterprise. For a citizen integrator to actually use the system, the interface should be intuitive enough to guide them in building their integration flows quickly, effectively, and, most importantly, without the assistance of IT. This alone will make or break adoption.

iPaaS vendors are trying to enhance this developer experience with features like drag-and-drop connectors, pipeline snippets, a templates library, a starter kit, and mapping enhancements. However, very few vendors offer AI-driven tooling that intelligently predicts the next steps of your integration flow based on learnings from hundreds of other users. AI-assist is a great benefit for citizen integrators, who may be non-technical, and even technically savvy developers welcome a significant boost in their productivity. With innovations like this happening, the iPaaS space is quite far from being commoditized. However, enterprises still need to be wary of cloud-washing iPaaS vendors that offer “1000+” connectors, a thick-client IDE, or an ESB wrapped in a cloud blanket. But that is a post for a different day!

Dinesh Chandrasekhar is Director of Product Marketing at SnapLogic. Follow him on Twitter @AppInt4All.

Mossberg out. Enterprise technology still in

By Gaurav Dhillon

A few weeks ago, the legendary tech journalist, Walt Mossberg, penned his last column. Although tech journalism today is vastly different than it was in 1991, when his first column appeared in the Wall Street Journal, or even five or 10 years ago, voices like Walt’s still matter. They matter because history matters – despite what I see as today’s widely held, yet unspoken belief that nothing much important existed prior to the invention of the iPhone.

Unpacking that further, history matters because the people who learn from it, and take their cues from it, are those who will drive the future.

Enterprise tech history is still unfolding

I like to think of myself as one of those people, certainly one who believes that all history is meaningful, including tech history. As tech journalism’s éminence grise, Walt not only chronicled the industry’s history, he also helped to define it. He was at the helm of a loose cadre of tech journalists and industry pundits, from Robert X. Cringely to Esther Dyson, who could make or break a company with just a few paragraphs.

Walt is now retiring. So what can we learn from him? The premise of his farewell column in Recode is that tech is disappearing, in a good way. “[Personal] tech was once always in your way. Soon, it will be almost invisible,” he wrote, and further, “The big software revolutions, like cloud computing, search engines, and social networks are also still growing and improving, but have become largely established.”

I’ll disagree with Walt on the second point. The cloud computing revolution, which is changing the way enterprises think and operate, is just beginning. We are at a juncture marked by unimaginably large quantities of data, coupled with enterprises’ unquenchable thirst to learn from it. The world has gone mad for artificial intelligence (AI) and analytics, every permutation of which is fueled by one thing: data.

The way we use data will become invisible

In his column, Walt observed that personal tech is now almost invisible; we use and benefit from it in an almost passive way. The way data scientists and business users consume data is anything but. Data is still moved around and manually integrated, on-premises and in the cloud, with processes that haven’t changed much since the 1970s. Think about it – the 1970s! It’s no secret that extract, transform, and load (ETL) processes remain the bane of data consumers’ existence, largely because many enterprises are still using 25-year-old solutions to manage ETL and integrate data.

The good news is, data integration is becoming much easier to do, and is well on its way to becoming invisible. Enterprise integration cloud technology promises to replace slow and cumbersome scripting and manual data movement with fast, open, seamless data pipelines, optimized with AI techniques.

Remember how, as internet use exploded in the late 1990s, the tech industry was abuzz with companies offering all manner of optimization technologies, like load balancing, data mirroring, and throughput optimization? You never hear about those companies these days; we take high-performance internet service for granted, like the old-fashioned dial tone.

I am confident that we are embarking on a similar era for enterprise data integration, one in which modern, cloud-first technologies will make complex data integration processes increasingly invisible, seamlessly baked into the way data is stored and accessed.

Making history with data integration

I had the pleasure of meeting Walt some years ago at his office, a miniature museum with many of the personal tech industry’s greatest inventions on display. There, his love of tech was apparent and abundant. Apple IIe? Nokia Communicator 9000? Palm Treo and original iPod? Of course. If Walt were to stay at his keyboard, in his office, for another couple of years, I’m pretty sure his collection would be joined by a technology with no physical form factor but of even greater import: the enterprise cloud.

Hats off to you, Walt. And while you may have given your final sign-off, “Mossberg out,” enterprise tech is most definitely still in.

Gaurav Dhillon is CEO of SnapLogic. You can follow him on Twitter @gdhillon.

Disconnected data is a drag on innovation

By Scott Behles

What do you consider to be a business’ most valuable asset? Is it the cash it holds? Product inventory? Property perhaps? In the pre-internet age, these traditional assets may have supported businesses and could be easily accounted for on an organization’s balance sheet, but the lifeblood of the 21st-century organization is, without question, data.

Whether it’s customer data, financial data, or increasingly machine data, the insights that can be gleaned from an organization’s data repository are invaluable in developing new products and services, deciding the future roadmap for a business, and gaining competitive advantage.

But are businesses taking full advantage of the data at their fingertips? Particularly in larger enterprises with multiple departments, global offices, and disparate IT systems, data often remains relegated to the department that is considered its primary owner. The finance department handles the accounting data while customer data stays with the marketing and sales teams, for instance.

It’s an antiquated way of handling things, and one that means company leaders and other business decision makers rarely see the full picture of what’s going on across the organization, leading to stifled innovation, unforeseen market threats, and missed opportunities.

Convincing business leaders that this is a serious problem can be a tough sell, though. Unless you can assign a dollar figure to how significantly disconnected data is hurting a business, you’ll likely not get much of a reaction.

Thankfully, our new Disconnected Data research has done just that.

We surveyed 500 business users and IT decision makers at large businesses across the US and UK and found that the wasted time and resources, duplicated work, and missed opportunities caused by disconnected data collectively cost businesses $140 billion annually.

That stat alone might raise eyebrows, but when we dug a little deeper we uncovered that this issue in large businesses is likely having a far greater impact.

First, more than one-fifth of respondents were unaware of what data other departments actually held, and one in six didn’t even know how many data sources existed. Against this backdrop, it’s even more surprising to learn that, on average, workers spent more time searching for, acquiring, entering, or moving data than actually analyzing it and making decisions with it. Workers spending most of their time collecting some, but not all, of the data – at the expense of incorporating it into their decision-making – paints a less-than-rosy picture for large businesses’ data-driven strategies.

To their credit, most of our respondents are aware of this problem. More than half (57%) admitted that their organization is struggling with data silos and nearly the same percentage said that data silos are a barrier to meeting their organization’s business objectives.

The business objectives most affected? Seizing new opportunities and driving innovation. A shocking 72% felt that siloed data was causing their business to miss out on opportunities, and a third stated that it was holding back innovation in product and services.

For us here at SnapLogic, that last stat is the real stinger. We firmly believe that innovation should be priority #1 for any business that wants to succeed and thrive in today’s fast-moving digital era. Without innovation, products and services won’t evolve, which means customers won’t benefit from the latest developments and will start to look elsewhere. If a business can’t innovate, its days are numbered. If disconnected data is standing in the way of that innovation, it’s a problem that must be solved. And quickly.

Read our complete study, “The High Cost of Disconnected Data,” to get all the details.

Scott Behles is Head of Corporate Communications at SnapLogic. Follow him on Twitter @sbehles.

Why citizen integrators are today’s architects of customer experience

By Nada daVeiga

Lately, I’ve been thinking a lot about customer experience (CX) and the most direct, most effective ways for companies to transform it. As I recently blogged, data is the centerpiece – the metaphorical cake, as it were, compared to the martech frosting – of creating winning customer experiences.

That being said, which internal organization could possibly be better positioned than marketing to shape customer experience?

Nearly every enterprise function shapes CX

As it turns out, there are many teams within the modern enterprise that serve as CX architects. Think of all the different groups that contribute to customer engagement, acquisition, retention, and satisfaction: marketing, sales, service, and support are the most obvious, but what about product development, finance, manufacturing, logistics, and shipping? All of these functions impact the customer experience, directly or indirectly, and thus should be empowered to improve it through unbridled data access.

This point of view is reflected in SnapLogic’s new white paper, “Integration in the age of the customer: The five keys to connecting and elevating customer experience.” From it, a key thought:

[W]ho should corral the data? The best outcomes from customer initiatives happen when the business takes control and leads the initiative. The closer the integrators are to the customer, the better they can put themselves in their customers’ shoes and understand their needs. Often, they have a clear handle on metrics, the business processes, the data, and real-world customer experiences, whether they’re in marketing, sales, or service, and are the first to see how the changes they’re making are improving customer experience — or not.

Democratizing data integration

Because departmental leaders in sales, service, and marketing are typically not familiar with programming, they look for integration solutions that provide click-not-code graphical user interfaces (GUIs), enabling a visual, intuitive process that democratizes customer data integration. SnapLogic believes that GUI-driven, democratic data integration is an essential first step in empowering today’s CX architects to gain the analytic insight they need to improve customer experience.

In short, we believe that “citizen integrator” is really just another name for “citizen innovator”: fast, easy, seamless data integration shatters stubborn barriers to CX innovation by igniting exploration and problem-solving creativity.

To learn how to design your integration strategy to improve customer experience across the organization, download the white paper, “Integration in the age of the customer: The five keys to connecting and elevating customer experience.” In it, you’ll find actionable insights on how to optimize your organization’s data integration strategy to unlock CX innovation, including:

  • Why you need to ensure your organization’s integration strategy is customer-focused
  • How to plan around the entire customer lifecycle
  • Which five integration strategies help speed customer analytics and experience initiatives
  • How to put the odds of customer success in your favor

Nada daVeiga is VP Worldwide Pre-Sales, Customer Success, and Professional Services at SnapLogic. Follow her on Twitter @nrdaveiga.

Less frosting, more cake: Data integration transforms customer experience

By Nada daVeiga

I’ll start with the frosting. As far as I can tell, it’s been the Year of the Customer for several years now. During this time, every company has gotten the “customer experience” (CX) religion – improve it or die. Thousands of software applications have emerged during what’s now called the Age of the Customer, focused on improving CX by providing the right individual with the right interaction or information, at the right time.

The Age of the Customer has spawned an entirely new software category, marketing technology (martech), chronicled tirelessly by industry analyst Scott Brinker, who goes by @chiefmartec on Twitter. His oft-shared, visual history of the martech product landscape looks like this:

[Image: Scott Brinker’s Marketing Technology Landscape Supergraphic, 2017]

Of the 2017 marketing technology landscape, Brinker notes:

  • There are now 5,381 solutions on the graphic, 39 percent more than last year
  • There are now 4,891 unique companies on the graphic, up 40 percent from last year
  • Only 4.7 percent of the solutions from 2016 were removed (and another 3.5 percent changed in some fundamental way – their name, their focus, or their ownership)[1]

Where’s the cake?

My point is that there is a lot of frosting here – thousands of applications designed to address the sexiest elements of customer experience. But what’s missing is cake. Data is the cake onto which martech frosting should be added. Integrated enterprise data is the foundation on which effective CX strategies are built; otherwise, you’re just playing an expensive guessing game.

That’s where enterprise integration comes in. With the expansion of digital channels and new customer initiatives, the variety and volume of customer signals are greater than ever. Beyond classical CRM systems for sales and service, understanding the customer lifecycle means bringing together data not only from martech apps but also from sources including social media, websites, field service, quote management apps, and internet-enabled things, from mobile devices to sensors.

Bake the cake – integrate your enterprise data

More than ever, your company needs to focus on the cake of data, and the enterprise integration required to create it. The good news is, today’s enterprise integration cloud solutions make it easier than ever to build a rich data foundation for comprehensive, effective initiatives in the Age of the Customer.

To learn how to design your integration strategy to enable success with your customer initiatives, download the white paper, “Integration in the age of the customer: The five keys to connecting and elevating customer experience.” In it, you’ll find actionable insights on how to optimize your organization’s data integration strategy for the digital customer, including:

  • Why you need to ensure your organization’s integration strategy is customer-focused
  • How to plan around the entire customer lifecycle
  • Which five integration strategies help speed customer analytics and experience initiatives
  • How to put the odds of customer success in your favor

Download the white paper today!

Nada daVeiga is VP Worldwide Pre-Sales, Customer Success, and Professional Services at SnapLogic. Follow her on Twitter @nrdaveiga.

[1] “Marketing Technology Landscape Supergraphic (2017): Martech 5000,” Scott Brinker, May 10, 2017.

Iris – Can you build an integration pipeline for me?

By Namita Prabhu

The promise of artificial intelligence technology is flourishing. From Amazon shopping recommendations and Facebook image recognition to personal assistants like Siri, Cortana, and Alexa, AI is becoming part of our everyday lives, whether we know it or not. These apps use information collected from your past requests to make predictions and deliver results that are tailored to your preferences.

The importance of AI in today’s world is not lost on us at SnapLogic. We always strive to keep up with the latest innovations and technologies, and making our software fast, efficient, and automated for our customers has always been our goal. With the Spring release, SnapLogic launched the SnapLogic Integration Assistant, a recommendation engine that uses artificial intelligence and machine learning to predict the next step in building a data pipeline. Iris, the AI technology that powers the Integration Assistant, uses advanced algorithms to collect information from millions of metadata elements and billions of data flows, making predictions and delivering results that are tailored to the customer’s needs.
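SnapLogic hasn’t published Iris’s internals, but the core idea of learning “what usually comes next” from historical flows can be illustrated with a toy model. The sketch below is a hypothetical first-order transition model; the pipeline corpus and Snap names are invented for illustration.

```python
# A toy, hypothetical illustration of a next-Snap recommender: count
# which Snap historically follows which across past pipelines. The
# corpus and Snap names are invented; the real Iris algorithms differ.
from collections import Counter, defaultdict

historical_pipelines = [
    ["REST Get", "JSON Parser", "Mapper", "Snowflake Insert"],
    ["REST Get", "JSON Parser", "Filter", "Mapper", "S3 Writer"],
    ["File Reader", "CSV Parser", "Mapper", "Snowflake Insert"],
]

# Count transitions: for each Snap, how often each other Snap follows it.
transitions = defaultdict(Counter)
for pipeline in historical_pipelines:
    for current, following in zip(pipeline, pipeline[1:]):
        transitions[current][following] += 1

def suggest_next(snap, k=3):
    """Return the k Snaps that most often followed `snap` in past flows."""
    return [name for name, _ in transitions[snap].most_common(k)]

print(suggest_next("Mapper"))  # ['Snowflake Insert', 'S3 Writer']
```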

Currently, customers build pipelines by searching and selecting from over 400 Snaps in the SnapLogic catalog and dragging and dropping them onto the canvas. Repeating this step for every single Snap, although easy, can make the pipeline-building process tedious and time-consuming. With the Integration Assistant, these operations are simplified: business users are shown the right next steps in the building process, making pipeline building easy and efficient. Besides efficiency and speed, the “self-driving” software shortens the learning curve for line-of-business users managing their data flows while freeing technology staff for higher-value software development. See how it works in this video.

In the next few steps, learn how to enable this feature and start building interactive pipelines yourself.

Right now, we have two ways of building pipelines:

  • Choose a Snap from the SnapLogic catalog
  • Use the Integration Assistant for recommending the right Snaps

How to enable the Integration Assistant feature

By default, the Integration Assistant option is turned off, allowing you to continue building pipelines by selecting Snaps from the SnapLogic catalog. To enable the Integration Assistant, just head to the Settings icon and check the Integration Assistant option.

Once the Integration Assistant is enabled, you’ll immediately see the benefits of the self-guided user interface. Drag the first Snap onto the canvas and the Integration Assistant instantly kicks in, highlighting the next suitable Snap. At the same time, it opens another panel on the right-hand side of the canvas that lists suggested Snaps. These AI-driven Snap recommendations are based on the historical metadata from your previous workflows.

Next, you can click the highlighted Snap or pick from the recommended list by dragging the suitable Snap onto the canvas. This process continues until you select a Snap with a closed output. At that point, the Integration Assistant stops suggesting Snaps and the pipeline is ready for execution.
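Putting that walkthrough into code, the assistant’s behavior resembles the hypothetical loop below. The suggest and choose callbacks stand in for the recommendation engine and the user’s click, and the set of closed-output Snaps is invented for illustration.

```python
# A toy model of the suggestion loop described above. The terminal
# (closed-output) Snap names, suggest(), and choose() are hypothetical
# stand-ins for the real canvas interactions.
CLOSED_OUTPUT = {"Snowflake Insert", "S3 Writer", "Email Sender"}

def build_pipeline(first_snap, suggest, choose):
    """Keep suggesting next Snaps until the chosen Snap has a closed output."""
    pipeline = [first_snap]
    while pipeline[-1] not in CLOSED_OUTPUT:
        candidates = suggest(pipeline[-1])    # e.g., suggest_next() above
        if not candidates:                    # nothing left to recommend
            break
        pipeline.append(choose(candidates))   # the user's pick from the list
    return pipeline
```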

As you can see, the Integration Assistant improves the pipeline-building experience by suggesting the Snaps that best fit your organization, based on its historical metadata flows.

Interested in learning more? Watch a quick demo on our YouTube channel – “SnapLogic Spring 2017: Integration Assistant.”

Namita Prabhu is Senior QA Manager at SnapLogic.