“This is not your father’s ETL. This is not your mother’s message bus. This is not your uncle’s application integration.”

– Rich Dill commenting on the need to think differently about application and data integration with SnapLogic

This week I sat down with Rich Dill, one of SnapLogic’s data integration gurus, who has over twenty years of experience in the data management and data warehousing world. Rich talked about the leap forward in ease of use and user enablement that an integration platform as a service (iPaaS) can provide. He also commented on the latency differences when moving data from one cloud to another, or from a cloud to a data center, compared to moving data from an application to a data warehouse in a data center.

“When using a new technology people tend to use old approaches. And without training what happens is they are not able to take advantage of the features and capabilities of the new technology. It’s the old adage of putting the square peg in a round hole.”

To highlight the power of how SnapLogic brings together multiple styles of integration in a single platform, Rich put together this demonstration, where he creates a data flow, called a pipeline, that is focused on a classic extract, transform and load (ETL) use case and goes much further. Here’s a summary of SnapLogic consuming, transforming and delivering data.

Part 1: ETL as a Service

  1. Rich selects data from two databases, explaining how you can preview data and view it in multiple formats as it flows through the platform.
  2. He reviews how SnapLogic processes JSON documents, which gives the platform the ability to loosely couple the structure of the integration job to the target, and goes on to perform inner and outer Joins before formatting the output and writing the joined data to a File Writer.
  3. Then he goes back and adds a SQL Server Lookup to get additional information.
  4. He runs the pipeline and creates a version of it.
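The inner and outer joins in step 2 can be sketched outside the platform as well; here is a minimal pandas version of the same idea. The tables and column names are made up for illustration and are not taken from the demo:

```python
import pandas as pd

# Two hypothetical source tables, standing in for the two databases.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "name": ["Acme", "Globex", "Initech"],
})
orders = pd.DataFrame({
    "customer_id": [1, 1, 4],
    "amount": [250.0, 100.0, 75.0],
})

# Inner join: only keys present on both sides survive.
inner = customers.merge(orders, on="customer_id", how="inner")

# Outer join: every row from both sides; missing values become NaN.
outer = customers.merge(orders, on="customer_id", how="outer")

print(len(inner))  # 2 rows: customer 1 matched two orders
print(len(outer))  # 5 rows: 2 matched + 2 unmatched customers + 1 unmatched order
```

The joined frame could then be written out to a file, analogous to handing the joined documents to the File Writer.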

Part 2: Managing Change (Ever get asked to add a few more columns in a database and then have to change your data integration task?)

  1. He goes in and modifies the underlying SQL table.
  2. He re-runs the SnapLogic pipeline and shows the new results, without having to make a change. This highlights the flexibility and adaptability of the SnapLogic Elastic Integration Platform.
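The reason the pipeline survives the table change is the document model: each transform names only the fields it cares about, and any fields it does not name pass through untouched. A minimal sketch of the idea, with hypothetical field names:

```python
# Each row travels through the pipeline as a dict (a JSON document).
# Transforms touch only the fields they name and pass the rest through.

def uppercase_name(doc):
    """A transform that only cares about the 'name' field."""
    out = dict(doc)
    out["name"] = out["name"].upper()
    return out

# Original table rows.
rows_v1 = [{"id": 1, "name": "acme"}]

# After someone adds a 'region' column, no pipeline change is needed:
rows_v2 = [{"id": 1, "name": "acme", "region": "west"}]

print([uppercase_name(d) for d in rows_v1])
print([uppercase_name(d) for d in rows_v2])  # the new field flows through
```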

Part 3: Salesforce Data Loading

  1. He brings in the Data Mapper Snap to map data to what’s in Salesforce.
  2. He drags and drops the Salesforce Upsert Snap and determines if he should use the REST or Bulk API.
  3. He uses SmartLink to do a fuzzy search and map the input and output fields.
  4. He reviews the Expression Editor to highlight the kinds of data transformations that are possible.
  5. He shows how data is now inserted into Salesforce and saves this version of the pipeline.
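For reference, the REST-style upsert that the Salesforce Upsert Snap performs can be pictured as a PATCH against an external-ID resource, following Salesforce’s public REST convention. This is a hand-rolled illustration, not the Snap’s implementation: the instance URL, API version, external-ID field and helper name are all assumptions, and the request is only prepared, never sent:

```python
import requests

# Illustrative values only; a real call needs a live instance and OAuth token.
INSTANCE = "https://example.my.salesforce.com"
API_VERSION = "v32.0"

def build_upsert_request(sobject, ext_id_field, ext_id_value, fields, token):
    """Prepare (but do not send) a Salesforce REST upsert keyed by external ID."""
    url = (f"{INSTANCE}/services/data/{API_VERSION}"
           f"/sobjects/{sobject}/{ext_id_field}/{ext_id_value}")
    req = requests.Request(
        "PATCH", url, json=fields,
        headers={"Authorization": f"Bearer {token}"},
    )
    return req.prepare()

prepared = build_upsert_request(
    "Contact", "Legacy_Id__c", "A-1001",
    {"LastName": "Dill", "Email": "rdill@example.com"},
    "dummy-token",
)
print(prepared.method, prepared.url)
```

For high volumes, the Bulk API batches records in jobs instead of issuing one call per record, which is the trade-off behind the REST-versus-Bulk choice in step 2.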

Part 4: RESTful Pipelines

  1. He removes the Data Mapper Snap because the output will be different and he brings in the JSON Formatter.
  2. Here, Rich takes a minute to review how not only is SnapLogic a loosely-coupled document model, but it’s also 100% REST-based. This means that each pipeline is abstracted and, as he puts it, “addressable, usable, consumable, trigger-able, schedule-able as a REST call.”
  3. He goes to Manager > Tasks and creates a new Task and sets it to Trigger.
  4. He executes the Task to demonstrate how, once the pipeline is exposed as a REST endpoint, a mobile device can perform a REST GET against it to bring the data down to the device.
  5. He wraps up the demonstration by changing the pipeline from a JSON document output to an XML document.
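The final step can be pictured as swapping one formatter for another over the same stream of documents. A toy sketch of that swap, with illustrative tag and field names:

```python
import json
import xml.etree.ElementTree as ET

# The same documents, rendered by whichever formatter ends the pipeline.
docs = [{"id": "1", "name": "Acme"}, {"id": "2", "name": "Globex"}]

def format_json(documents):
    """JSON Formatter stand-in: serialize the document list as-is."""
    return json.dumps(documents)

def format_xml(documents, root_tag="records", row_tag="record"):
    """XML Formatter stand-in: one element per document, one child per field."""
    root = ET.Element(root_tag)
    for doc in documents:
        row = ET.SubElement(root, row_tag)
        for key, value in doc.items():
            ET.SubElement(row, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

print(format_json(docs))
print(format_xml(docs))
```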

I’ve embedded the video below. Be sure to check out our Resource center for other information about SnapLogic’s Elastic Integration Platform. Thanks for a great demo Rich!

It’s been a busy few weeks since we announced our Fall 2014 release in September and our Winter release is just around the corner.

With the Fall release, we introduced the Hadooplex and support for Kerberos with SnapReduce. Integration Developer News noted:

“SnapLogic is stretching the capabilities of its Elastic Integration Platform with big enhancements for big data. The SnapLogic Fall 2014 cloud-based iPaaS brings its “snap-based” app integration technology to Hadoop 2.0, making it easier to acquire and prep data – and deliver analytics to users. These pipelines also support parsing and formatting for SequenceFile and RCFile formatting, as well as document (JSON) processing for MapReduce jobs.”

In a detailed review of the Fall release, Enterprise Management Associates (EMA) had this to say about our big data integration capabilities and recent Series D financing news:

“When looking at the technology developments at SnapLogic and the recent infusion of capital, SnapLogic is well positioned to make the case that the company and its solution are moving into a strong leadership position in the big data analytics and integration space. EMA considers SnapLogic as an excellent strategic option for organizations to adopt and implement cloud-based data integration strategies.”

And Jason Bloomberg, author of the Agile Architecture Revolution, wrote on his Intellyx blog that:


“SnapLogic modernizes each element of ELT for today’s modern, cloud-centric, Big Data world. Instead of traditional extraction of structured data, SnapLogic allows for diverse queries across the full variety of data types and structures by streaming all data as JSON documents. Instead of simplistic, point-to-point loading of data, SnapLogic offers elastic, horizontally scalable Pipelines that hide the underlying complexity of data integration from the user. And within Hadoop, Hadooplexes simplify the distribution of YARN-based MapReduce algorithms, allowing users to treat the Hadoop environment as though it were a traditional reporting database.”

Here’s a SnapLogic big data integration demonstration featuring SnapReduce and the Hadooplex. Our November webinar also featured an interesting market perspective from our Chief Scientist, Greg Benson. You can watch it here.

On customer news, Harrison Lewis, CIO at Northgate Markets, was recently featured in a great write-up in CIO Insight, which outlines how the growing retailer tied together more than 64 integrations in five months with SnapLogic. He notes: “We’re no longer simply exchanging data between systems, we’re putting it to use in more strategic ways.” Last week we also announced that Xactly has implemented SnapLogic to automate integrations between Salesforce, Workday, NetSuite and Domo, among other business applications and corporate data sources and systems. The first phase of integrations was completed in less than two weeks. Join us for an interactive webinar with Xactly’s director of IT, Bob Genchi, on December 5th.


On the partnership front, we announced that the SnapLogic Elastic Integration Platform achieved certified integration with the SAP® ERP application, version 6.0. Here’s what Niraj Nagrani, our vice president of engineering had to say about the certification:

“Increasingly, our enterprise customers are making business decisions faster, enabled by real-time data connectivity between hybrid cloud applications and Big Data sources. SAP certification helps ensure that we’re able to deliver best-in-class solutions to SAP customers, allowing them to quickly gain enterprise data access regardless of where it resides.”

And finally, today we formally announced our new Snap and Snap Patterns for Wave – the Salesforce Analytics Cloud. Here’s what my old friend Keith Bigelow, senior vice president and general manager for the Salesforce Analytics Cloud had to say about the partnership:

“With partners such as SnapLogic joining the Analytics Cloud ecosystem, companies can benefit from best practices and industry expertise to extend analytics for every business need, making it easier than ever for anyone to explore and share data instantly, uncover new insights and take action from anywhere.”

It’s an exciting time for the company as we work with our customers to tackle big data and cloud integration challenges in the enterprise with a fast, multi-point and modern integration platform as a service (iPaaS). To sign up for an integration assessment, be sure to contact us.

It’s no secret that there’s a wave of interest in cloud analytics in the enterprise. And as we head into 2015, you can expect to see a slew of predictions that focus on the need for improved big data access and analytics, managing disconnected SaaS silos and API proliferation, as well as the brewing storm of data represented by the Internet of Things. Throughout the year, we’ve covered the rise of the citizen integrator as well as the critical iPaaS requirements. Today we’re starting a series of TechFacts that we want to share based on recent and relevant cloud and big data integration research. In this first post the focus is on the drivers of and barriers to adopting a cloud application and data integration platform that can handle multiple use cases, including powering big data and cloud analytics initiatives. (You can also check out the challenges and use cases here.)

At SnapLogic, we’re regularly speaking to our customers about 5 fundamental changes in the enterprise:

  1. The Need for Speed
  2. The New User (and Buyer) Expectations
  3. Changing Data Volumes, Variety and Velocity
  4. Cloudification and Data Gravity
  5. New Standards, Protocols and Architectural Styles

When it comes to adopting an iPaaS and cloud analytics solution and why same old, same old (SO SO) legacy integration tools won’t cut it, the results of our TechValidate Survey from earlier in the year are worth reviewing:

The survey results are presented as charts covering:

  • Business Drivers for a Cloud Integration Platform
  • Barriers to Cloud-Based Analytics
  • Business Drivers for Cloud-Based Analytics
  • Main Challenges with Traditional On-Premise Technologies for Cloud Integration


Check out the whitepaper with the complete TechValidate survey results here. In my next few posts, we’ll share case studies of some of the survey respondents which highlight the business challenges and key iPaaS and cloud analytics requirements by industry.

Be sure to also register for our webinar in early December with SnapLogic customer Xactly: iPaaS in the Enterprise – What to Look for in a Cloud Integration Platform. And this webinar with industry analyst firm Forrester also dives deep into what’s different and what’s important when it comes to selecting the right iPaaS solution.

I had the opportunity to present to the San Francisco Bay Area Chapter of the Data Management Association (DAMA) this week on the topic of the changes to today’s data management stack. We discussed why CIOs and IT organizations in general are getting SMACT and reviewed some of the new integration challenges of the enterprise:

  • Big data access and analytics
  • Disconnected SaaS silos
  • API proliferation
  • The brewing storm of data represented by the Internet of Things

We then reviewed some of the characteristics and challenges posed by what we call the Integrator’s Dilemma, and why old approaches to data management and same old, same old (SO SO) approaches to cloud and big data integration are not going to cut it in the modern enterprise. What’s different, you ask? Here are 5 changes we discuss with our customers and partners at SnapLogic:

  1. Speed: As Marc Benioff recently noted at Dreamforce 2014, “Companies are no longer competing against each other. They’re competing against speed.” Speed was also identified as the #1 reason companies choose an integration platform as a service (iPaaS), according to our recent TechValidate survey.
  2. User (and Buyer) Expectations: Self-service is a hot topic in the world of analytics, big data integration and iPaaS. We’ve written about the rise of the citizen integrator regularly on this blog. Gartner has recently published a report on the topic: Embrace the Citizen Integrator Approach to Improve Business Users’ Productivity and Agility.
  3. The Data: Of all of the changes in information management infrastructure, what’s new when it comes to big data volume, variety and velocity has been the most widely covered. See the original post on this topic from Gartner’s Doug Laney here.
  4. Cloudification and Data Gravity: With 2015 technology predictions season about to kick in and so much recent attention on the shift to cloud analytics, here’s what Forrester has to say about the state of cloud adoption: “In 2015, cloud adoption will accelerate and technology management groups must adapt to this reality by learning how to add value to their company’s use of these services through facilitation, adaptation and evangelism. The days of fighting the cloud are over. This means major changes are ahead for you, your application architecture, portfolio, and your vendor relationships.”
  5. Standards, Protocols and Architectural Styles: We’ve talked about why the enterprise service bus (ESB) doesn’t fly in the cloud and written extensively about JSON and REST. We’ve also written about why an ELA with a legacy data management vendor in 2014 is like betting on COBOL in 1998. As Craig Stewart, our Sr. Director of Product Management, likes to say: “It’s easy to put rows and columns into a document, but vice-versa doesn’t work.”
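Craig’s point about rows versus documents is easy to demonstrate: a flat row maps to a document in one line, while flattening a nested document forces a lossy choice. A small sketch with made-up data:

```python
import json

# Rows and columns map to documents trivially: one dict per row.
columns = ("id", "name", "region")
row = ("1001", "Acme", "west")
doc = dict(zip(columns, row))
print(json.dumps(doc))

# The reverse is the hard direction: a nested document has no single
# obvious flat shape. Repeated child records force a choice between
# duplication, multiple tables, or dropped structure.
nested = {
    "id": "1001",
    "name": "Acme",
    "orders": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}],
}

# One possible (lossy) flattening: one row per child order,
# duplicating the parent fields onto every row.
flat_rows = [
    {"id": nested["id"], "name": nested["name"], **order}
    for order in nested["orders"]
]
print(flat_rows)  # two rows for one customer: structure became duplication
```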

So what’s the same? Successful cloud application and big data platform adoption requires the right plumbing. Whether it’s orchestrating and streaming data between applications in real time or acquiring, preparing and delivering big data to downstream analytics tools and users, it’s still all about the pipes. While many of the circumstances and the pace have changed, the burden of ensuring your data is timely, relevant and trustworthy has not. Your integration solution must be an innovation on-ramp, not a roadblock.

I’ve embedded the presentation below, which includes a few recommendations for data management practitioners. Feedback appreciated.

Yesterday Hortonworks announced plans for an IPO in 2015 and the big data buzz machine was in full force. Check out some of the headlines:

Personally, I’d like to congratulate my friends at Hortonworks on the exciting news. Professionally, I’d like to note that SnapLogic’s Elastic Integration Platform has been certified on Hortonworks 2.1 (as well as Cloudera 5) and we’re continuing to see more and more opportunities to expand our iPaaS solution to address both streaming application integration use cases (see our recent news about SAP NetWeaver certification here) and big data integration requirements. Last week, our monthly interactive webinar featured an interview with our Chief Scientist (and University of San Francisco professor), Greg Benson. The webinar was a great opportunity to discuss Hadoop, Hive, MapReduce and Spark with Greg. We also addressed how, as the organizing principles of managing big (and small) data are in the midst of being re-written, there continues to be a lot of confusion in the market. Ultimately, the question we sought to answer was: What role will traditional extraction, transformation and loading (ETL) tools play when it comes to big data analytics?

We also talked about trends in the market, the impact of Spark and use cases we’re seeing for a more flexible integration platform as a service (iPaaS), while diving further into the following topics:

  • A Hadoop 2.0 primer
  • Why big data integration? Why now?
  • What’s different and what’s next

Check out the presentation slides below, take a look at the recording here and stay tuned for more on what’s going on in the world of Big Data.

We recently reviewed the many fall tech events happening in the Bay Area and elsewhere and just got back from a few CIO-specific conferences last week. There have been a wide range of topics covered during recent events, including the next wave of cloud computing, cloud analytics, integration platform as a service (iPaaS), big data and big data integration.

Last week in Miami Beach we were at the Technology Business Management (TBM) Conference, which brought together Global 2000 CIOs, CTOs and CFOs to hear from peers and learn about new solutions on the market that can help modernize and transform enterprise IT organizations. While at the TBM Conference we had the opportunity to speak with CIOs and other IT leaders about how SnapLogic customers have reduced the complexity of their integrations by up to 85% and connected data, applications and APIs at least 4x faster. We also discussed common use cases for our platform such as cloud and on-premises application integration, digital marketing, big data analytics, self-service for “citizen integrators,” and the establishment of an agile enterprise integration layer or fabric.

At the Midmarket CIO Forum in Tucson, also last week, we met with CIOs from a variety of industries and reviewed the new data, application and API integration challenges facing today’s IT leaders. We discussed the “Integrator’s Dilemma” and how older, more traditional ETL tools and approaches to integration aren’t built for the new data challenges. Lastly, we talked about how to avoid getting “SMACT,” summed up by the following:

  • Don’t settle for SO SO (same old, same old)
  • The first step to solving the Integrator’s Dilemma is recognizing it exists!
  • When it comes to Social, Mobile, Analytics, Cloud computing and the Internet of Things, don’t wait to integrate!

You can find our presentation slides here. Also be sure to check out some of the social buzz below from TBM Conference and the Midmarket CIO Forum, as well as our infographic here on why CIOs are getting “SMACT”:

“By running natively on Hadoop, SnapLogic delivers powerful application and data integration and extends the reach, performance and utilization of big data platforms.”

– Greg Benson, Chief Scientist at SnapLogic

Hadoop. Hive. MapReduce. Spark. As the organizing principles of managing big (and small) data are in the midst of being re-written, there continues to be a lot of confusion in the market. What role will traditional extraction, transformation and loading (ETL) tools play when it comes to big data analytics? Greg Benson is our in-house expert on all things big data, Hadoop, MapReduce and more, and he will be joining the SnapLogic team of data integration experts next week, on Friday, November 7th, for a webinar to discuss why the same old tools won’t cut it in the new world of big data and its varied integration needs.

In the webinar we’ll be talking about what’s new, what’s hot and what’s happening when it comes to accessing, preparing and delivering big data for a wide variety of use cases. We’ll also feature trends in the market, what’s changing and the impact of Spark (with mention of our new capabilities using the Sparklex). Lastly, we’ll review some of the use cases we’re seeing for a more flexible integration platform as a service (iPaaS) that can handle multiple styles of ingesting, synchronizing and transforming big data sources and dive into our latest platform demonstrations.

We recommend that you check out this webinar if you’re an Information Architect, Big Data Practitioner, Data Warehouse Architect, Business Analyst or BI Practitioner. Register here and we look forward to discussing big data integration more next week!