We are pleased to announce that we will be performing an update this week to the SnapLogic Integration Cloud April 2014 Release.

UI Enhancements

The following enhancements have been made to improve usability.

  • The Pipeline Run Log and Run Pipeline icons in Designer were updated for clarity.
  • The Save icon in info boxes and dialogs now saves the pipeline instead of just applying changes.
  • Pipeline tabs resize to make it easier to access all open pipelines.
  • The Snaplex Health Wall in Dashboard was updated with new icons to more clearly indicate the status of your Snaplexes.


  • Pipeline Tasks now support pipeline parameters. When you create a Task for a pipeline, you can now configure any pipeline parameters defined for that pipeline.


  • OAuth2 support was added for REST.
  • Conversion between date/time and epoch is now supported with the addition of Date Getter methods to the expression language.
  • Replicate Database Schema
    Support has been added for replicating database tables in Redshift by passing the schema in a second output view of the database Select Snap and sending it to the second input view of a Redshift Insert or Bulk Load Snap. See the Replicate a Database Schema in Redshift use case for more information.
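
Outside the expression language, the date/time-to-epoch round trip that the new Date getter methods enable is easy to picture. Here is a minimal Python sketch of the same conversion (an analogy for illustration, not SnapLogic expression syntax):

```python
from datetime import datetime, timezone

def to_epoch(dt: datetime) -> float:
    """Convert a timezone-aware datetime to Unix epoch seconds."""
    return dt.timestamp()

def from_epoch(seconds: float) -> datetime:
    """Convert Unix epoch seconds back to a UTC datetime."""
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

release = datetime(2014, 4, 1, tzinfo=timezone.utc)
epoch = to_epoch(release)            # 1396310400.0
assert from_epoch(epoch) == release  # the round trip is lossless
```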


New Snaps will be available with this release to integrate with:

  • Active Directory
  • Vertica
  • LDAP
  • OpenAir

See the Release Notes for the lists of Snaps that were updated.

“Putting a JSON format on a traditional, XML-based ESB is like making a silk purse out of a sow’s ear.”

- Loraine Lawson’s analogy in reference to the article: Why Buses Don’t Fly in the Cloud: Thoughts on ESBs

I recently wrote about why the legacy enterprise service bus (ESB) won’t fly in the cloud. Loraine Lawson at IT BusinessEdge reviewed the article and asked the question: Does Integration’s Heritage Matter in the Cloud? At SnapLogic we believe so strongly that heritage matters that we rebuilt our elastic integration platform from the ground up to be fast, multi-point and modern. Here’s why we believe that the heritage of integration products matters:

  1. Because the Innovator’s Dilemma creates major hurdles for legacy integration vendors venturing fully into this new area of social, mobile, analytics, cloud and (Internet of) Things, which we’re calling SMACT.
  2. Because attempts to build on their past successes result in experiments and half-baked solutions.

You’d be surprised how many on-premise integration product managers in Silicon Valley spend more time worrying about earnings per share (EPS) threats than the future of the ESB!

So let’s look at the two reasons why integration heritage matters.

The Innovator’s Dilemma Challenge
Clayton Christensen’s words echo throughout the boardrooms of Silicon Valley today as we face so many technology innovations and disruptions to traditional markets. It is extremely difficult to give up on the gravy train that is the perpetual licensing model of software maintenance. Transitioning to a subscription pricing model, let alone the cultural changes that software as a service (SaaS) demands, is no simple option. Even if the company’s executives are willing to make this transition, it’s the shareholders that would be very unhappy going from 30-40% operating margins down to single digits. If you were a fly on the wall in the executive boardroom of a company with on-premise heritage trying to enter the cloud market, “cannibalization” would be the most commonly heard word. And even if the board and executives get it, good luck telling the legacy product teams that their baby is no longer beautiful and the sales team that you’re going to introduce a cloud service that no longer requires the on-premise boat anchor up-front price tag.

Half Baked “Hybrid” Integration Solutions
The other reason why on-premise software companies struggle to escape their heritage is that most meaningful technological innovations cannot be bolted on like the proverbial “lipstick on a pig”; the new offering must be completely redesigned and developed from scratch to meet the latest market requirements. Not many successful companies have the appetite for a complete rewrite, for the reasons mentioned in the “Innovator’s Dilemma” section above. To draw an analogy, it is like an internal combustion engine-based car manufacturer making cosmetic changes to its gas-powered car and expecting to compete with a state-of-the-art electric car like the Tesla. Nissan had to build its Leaf from scratch to cater to the electric car market, with a completely new transmission, a new engine and a new power supply.

Coming back to the specifics of the integration market and why vendor heritage matters, here are some technical reasons why:

  1. Resiliency in the context of integration is the ability of integration flows to handle changes and variations in data structures without breaking down. Most legacy integration products are strongly-typed, or tightly-coupled: the platform needs to know the exact specification of the data it will process before executing the flows. Unfortunately, the SMACT world is not as compliant as we would like. Changes in schemas and data structures are commonplace. The addition of columns to database tables, or a partner accidentally adding data fields to a document that gets sent to you, should not bring your integrations, and thereby your business, to their knees. Resiliency, a weakly-typed and loosely-coupled paradigm, is not something that can be introduced into a product as an afterthought. Introducing it is as involved as replacing a car’s transmission to move from an IC engine to an electric motor. The platform has to be architected on such modern principles from the design phase. Hence, integration heritage does matter.
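
To make the weakly-typed idea concrete, here is a loose Python sketch (my own illustration, not any vendor’s actual runtime): a schema-tolerant step operates on the fields it knows and passes unknown fields through unchanged, so a surprise column doesn’t break the flow.

```python
def enrich(record: dict) -> dict:
    """Schema-tolerant step: compute a total from the fields we know,
    and pass every other field through untouched."""
    out = dict(record)  # keep whatever the partner sent
    out["total"] = record.get("price", 0) * record.get("qty", 0)
    return out

# The expected document shape works...
assert enrich({"price": 10, "qty": 3})["total"] == 30

# ...and a field the flow has never seen simply flows through,
# instead of failing a strongly-typed schema check.
r = enrich({"price": 10, "qty": 3, "new_partner_field": "x"})
assert r["new_partner_field"] == "x"
```

A strongly-typed pipeline would reject the second record outright; the loosely-coupled version keeps the business running while you decide what to do with the new field.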

  2. Legacy integration products with extract, transform and load (ETL) roots were optimized for relational use cases such as moving large volumes of data from relational data sources into relational data warehouses. These products were built to read rows and columns, operate on them and write rows and columns, and they struggle today when it comes to handling hierarchical data. Similarly, the enterprise application integration (EAI) tools were built for message-oriented integrations: they can handle hierarchical data, but are optimized for real-time integrations that handle one message at a time as efficiently as possible. Shedding this heritage to handle broader use cases is no small feat. It’s like changing your car’s engine to be battery powered. Anyone who has had engine trouble knows that mechanics recommend buying a brand new car rather than replacing the engine!
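
The rows-and-columns constraint is easy to see in code. This Python sketch (my own illustration, not any product’s code) shows the extra flattening work a tabular engine must do before it can touch a hierarchical document:

```python
def flatten(doc: dict, prefix: str = "") -> dict:
    """Flatten a nested document into dotted column paths,
    the flat shape a rows-and-columns engine expects."""
    flat = {}
    for key, value in doc.items():
        path = prefix + key
        if isinstance(value, dict):
            flat.update(flatten(value, path + "."))
        else:
            flat[path] = value
    return flat

order = {"id": 7, "customer": {"name": "Acme", "address": {"city": "Boston"}}}
assert flatten(order) == {
    "id": 7,
    "customer.name": "Acme",
    "customer.address.city": "Boston",
}
```

Arrays make it worse still: a repeated element has no fixed column count, which is exactly where the ETL-era engines struggle.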

  3. Lastly, integration products with an on-premise heritage are built with the on-premise mindset. Configurations and product libraries are laid out locally on every physical server, and these local assets need manual attention when it comes to product upgrades and patch fixes. Managing these local files, especially in a highly distributed environment, turns into a nightmare very fast. This is another heritage inheritance that cannot be wished away without a complete product redesign. Think of this lifecycle management of the heritage platform as the oil change you frequently have to do with your IC engine: like most people, you as the car owner need to take time off your busy schedule and take the car to the shop for minor and major service. Teslas need no oil changes, and all product maintenance is software defined; upgrades are downloaded automatically to the car over the mobile network and customers experience no downtime.

In summary, heritage is more of a disadvantage in the rapidly shifting sands of technology innovation. Technology paradigm shifts are still of large magnitude and often demand a new approach and redesign of products and technologies. In this article, we drew an analogy between integration platforms with heritage and IC engines, and between modern integration platforms and electric cars such as Teslas. Of course, one can always rightly argue that at the end of day, both cars will get you to your destination. But, as they say, it’s not the destination but the quality of the journey that makes the destination worth it. And with a modern integration platform as a service (iPaaS), your journey is speedier, more cost-effective and with fewer forced downtimes, making it truly enjoyable.

Next Steps:

  • Read Greg Benson’s posts about the SnapLogic architecture and platform services.
  • Check out some of our resources to learn more about the SnapLogic Integration Cloud.


In my last post I reviewed the classic integration requirements and outlined four new requirements that are driving the demand for integration platform as a service (iPaaS) in the enterprise:

  1. Resiliency
  2. Fluidity in hybrid deployments
  3. Non-existent lifecycle management of the platform
  4. Future-proofing for the world of social, mobile, analytics, cloud, and internet of things (SMACT)

In this post, I’ll review requirement #3: Non-existent lifecycle management of the platform.

With increasingly hybrid deployments (as discussed in iPaaS requirements post #2), lifecycle management can very quickly become a nightmare for users of legacy ESB and ETL integration technologies. Upgrading on-premises integration software, such as the core product libraries, typically means binary updates for every installation across hybrid environments. While each vendor is different, I’m always surprised to realize how many cloud integration installations are simply hosted on-premise software and not true multitenant SaaS. This means the vendor has to upgrade each customer and maintain multiple versions. Nevertheless, the more challenging upgrades are on-premise installations that are customer managed. It’s always amazing to find out how many enterprise customers are running old, unsupported versions of integration software due to the fear of upgrades and the unscalable mindset of “if it ain’t broke, don’t fix it!” Cumbersome manual upgrades of on-premise integration installations are error-prone and result in significant testing cycles and downtime. The bigger the implementation, the bigger the upgrade challenge – and connector libraries can be equally painful. Lastly, local configuration changes and the need to rebuild mappings (see my point on the myth of “map once” here) also demand thorough testing cycles.

SaaS customers are accustomed to interacting with complex business processes (such as opportunity-to-order management in a CRM application) through a simple web interface. Hence, the bar for modern integration platforms is set quite a bit higher: these customers expect the vendor to shield them from as much complexity as possible. There is a similar expectation for managing the lifecycle of the iPaaS.

The new set of requirements around lifecycle are:

  1. Customers want zero desktop installations, period. Customers want to move away from integrated development environments (IDEs) that are extremely developer-centric and require their own upgrades from time to time. Customers want browser-based designers for building integrations, where they can avail themselves of the latest, greatest functionality automatically.

  2. Customers expect the installation of the runtime engine to be self-upgrading as well. This is particularly important for on-premise installations to avoid cumbersome, error-prone tasks and endless testing cycles. Today’s iPaaS should be smart enough to push binary upgrades to every runtime engine, regardless of its location, on-premise or in the cloud. This is particularly efficient with a software-defined integration architecture because each of the runtime engines (we call our data plane the Snaplex) is a stateless container awaiting execution instructions from the control plane.

  3. Customers expect the execution instructions to also include connector libraries and configuration information, which means that customers no longer need to worry about manual configuration changes at every installation location.
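
The control-plane/data-plane split described above can be sketched in a few lines of Python (a toy model under my own assumptions, not SnapLogic’s actual protocol): the stateless runtime keeps no local configuration and simply asks the control plane what to run, so upgrades and configuration changes happen in exactly one place.

```python
# Toy control plane: the single source of truth for engine versions,
# connector libraries and configuration.
CONTROL_PLANE = {
    "engine_version": "4.2",
    "instructions": {"connector": "redshift", "config": {"pool_size": 8}},
}

class StatelessRuntime:
    """A data-plane node that holds no local configuration between runs."""

    def __init__(self):
        self.version = "4.1"  # shipped binary; will self-upgrade on poll

    def poll(self, control_plane: dict) -> dict:
        # Self-upgrade if the control plane publishes a newer binary.
        if control_plane["engine_version"] != self.version:
            self.version = control_plane["engine_version"]
        # Connector and config arrive with the execution instructions,
        # so nothing is ever configured manually on this node.
        return control_plane["instructions"]

node = StatelessRuntime()
task = node.poll(CONTROL_PLANE)
assert node.version == "4.2"            # upgraded without manual steps
assert task["config"]["pool_size"] == 8  # config came from the control plane
```

Because every node pulls the same instructions, adding a tenth or a hundredth runtime changes nothing about how the fleet is upgraded or configured.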

A truly modern iPaaS solution will deliver on all of the above and provide an integration service that eliminates much of the pain of traditional lifecycle management. The cost and risks of not having self-upgrading software are an order of magnitude higher in today’s age of agile software delivery (note that SnapLogic delivers new software innovation on a monthly cadence – check out our March update here). There are great benefits in this approach: for one, customers are always on the latest platform and automatically keep up with the innovation that vendors deliver. And two, they no longer have to plan long and costly upgrade cycles that are always associated with infrastructure downtime and hinder business continuity. But the biggest benefit is that your integration platform is built to run at cloud speed!

In my next and final post of this series, I’ll discuss the importance of choosing an iPaaS that future-proofs your integration investments to tackle challenges posed by the new world of SMACT (Social, Mobile, Analytics, Cloud, and internet of Things).

I attended the #Salesforce1Tour in Boston this week with the SnapLogic team. The keynote started strong with the Boston Police Gaelic Column bagpipes warming up the crowd. Keith Block, who is from Boston, delivered the keynote with a few shout-outs to his Mom in the audience and a few shots at SAP. The keynote presentation contained a lot of familiar content if you attended Dreamforce 2013, but as always, there were some solid local customer stories and a cool mobile demo. Here’s a view of the keynote from the Green Monster seats.

The partner expo was buzzing most of the day. Here’s a few Boston Salesforce MVPs (great to see you @sfdc_nerd and Cathy O!), Sarah and Rich at the SnapLogic booth, and a picture of our partners from Cervello and Birst at the booth beside us.

Early in the day I heard about a new initiative Salesforce has launched to get people to post “selfies” at their events. So I took a few and tweeted them out – here’s Jeff Kaplan (@thinkstrategies), Andrew Sim (@simspot), Craig Weich (@craigweich), and Mr. Nate Bride, who has just launched a new company built on the Force.com Platform. Congratulations!

At some point during the day, however, I received a DM on Twitter from @Salesforce letting me know that I hadn’t quite understood the rules of the Salesforce1Selfie competition. Turns out they’re giving away free passes to Dreamforce for the best screenshot (aka selfie) of your Salesforce1 mobile app. Apparently they weren’t looking for a bunch of goofy pictures of me and the Boston Salesforce gurus. Bummer! Here’s the note:


Just when I was starting to enjoy this Salesforce selfie thing…

I must admit that I am a big Salesforce1 fan. In fact, I now use Salesforce more on my smartphone than I do on my desktop thanks to Salesforce1. So get tweeting and get to Dreamforce 2014!

In the meantime, I couldn’t help putting one more Salesforce selfie out there with the Boston crowd. Guess it’s more SaaSy-selfie than a Salesforce1Selfie (and it’s definitely no Big Papi and Obama selfie). Good times in Boston either way. Thanks Salesforce for another great event!


Over the last 6 months we’ve been talking with CIOs and IT leaders about what we call the “Integrator’s Dilemma.” As SnapLogic CEO Gaurav Dhillon puts it: “The dilemma for enterprise IT organizations is that their legacy integration technologies were built before the era of big data, social, mobile and cloud computing and simply can’t keep up.”

In a recent Wired Innovation Insights article, SnapLogic Director of Product Marketing and integration expert Maneesh Joshi provides a comprehensive overview of why the grand vision of service-oriented architecture (SOA) was dead on arrival (DOA) because of the legacy technology of the enterprise service bus (ESB).

Integration Platform as a Service

The article summarizes how ESBs became an appealing concept for enterprise IT organizations in the on-premises world, but the result was typically high costs, brittle adapters and point-to-point enterprise application integration (EAI) implementations that didn’t fulfill the SOA vision. Public cloud computing and widespread SaaS application adoption have led to a re-imagining of integration. According to the article:

“Widespread adoption of SaaS and cloud-based applications has disrupted the legacy software delivery model in a good way. Today, the speed of projects is an order of magnitude faster than just five years ago and return on investment is measured by actual adoption and revenues generated or costs savings. Both IT and the line of business are getting increasingly accustomed to purchasing and getting up and running with SaaS applications within days, and sometimes even in minutes.”

The article goes on to outline a number of reasons why the legacy ESB technology approach won’t fly in the new world of social, mobile, analytics and cloud (SMAC). Our belief is that whether the use case is cloud analytics (big data), applications (SaaS) or APIs, customers are looking for a fast, multi-point and modern approach to integration. Just as we’re seeing a transition from XML to JSON and from SOAP to REST, to address the need for speed and the Integrator’s Dilemma, we’re seeing a shift from ESBs to integration platform as a service (iPaaS).

One may not replace the other any time soon in the hybrid IT world, but it’s clear that the practicality of the legacy enterprise service bus architecture and the heavy architectural implications that come with it are definitely worth re-thinking in the SMAC era.

Read the article and share your thoughts on Wired or in the comments below. Take a look at the video below on the Integrator’s Dilemma and why we believe buses don’t fly in the cloud. You can also register for our upcoming webinar next week where we’ll be hearing some insights from Forrester Research Principal Analyst Stefan Ried on the future of hybrid IT and what to look for in a cloud integration platform.

In my last post I reviewed the classic integration requirements and outlined four new requirements that are driving the demand for integration platform as a service (iPaaS) in the enterprise:

  1. Resiliency
  2. Fluidity in hybrid deployments
  3. Non-existent lifecycle management of the platform
  4. Future-proofing for the world of social, mobile, analytics, cloud, and internet of things (SMACT)

In this post, I’ll review requirement #2: Fluidity in Hybrid Deployments.

Similar to data structure changes being a common occurrence (as discussed in the Resilience post), the introduction or retirement of applications is something most IT organizations are dealing with in the enterprise. Software as a service (SaaS) continues to fuel worldwide software growth, and infrastructure as a service (IaaS) and platform as a service (PaaS) providers such as Amazon Web Services (AWS) offer customers the flexibility to build up systems such as relational data services (RDS) or Redshift and tear them down on short cycles. Digital marketers are always looking for new channels to expand their addressable markets and additional data sources to enrich their audience profiles. Change is the only constant in the agile enterprise. The impact of these changes on the integration layer is that it is expected to seamlessly transition from connecting on-premise systems to cloud systems or vice versa and yet ensure a high degree of business continuity.

Integration technologies built in the last decade were not designed for the fluidity that is needed in today’s hybrid IT architecture. Even though many legacy integration solution providers offer a dual deployment model – one on the premises and one for the cloud – they are typically not peers when it comes to management, monitoring and configuration (not to mention functionality). Here are some common issues that customers will face with what I call “franken-hybrid” integration technologies:

  • Deployment in the cloud is not the same as deploying the on-premise engine. The provider may require a different set of preparatory and configuration steps for each environment. For example, configurations that were typically local with their on-premise products, such as connection pooling or even driver locations, still must be set locally and manually for each runtime installation. This is a manual, cumbersome, and error-prone approach.

  • Dual management and monitoring dashboards – one for the on-premise execution engine and the other one for the cloud engine – means the administrator needs to manually stay on top of two environments. This is both time consuming and risky.

  • The on-premise engine was built for connecting on-premise systems and often requires many network ports to be open for communications in order to receive data processing instructions or send back monitoring information to the server. If the monitoring and metadata servers are running in the cloud, customers often will be requested to punch holes in their network firewall in order for all of the iPaaS functionality to work.

A truly modern iPaaS solution must deliver what I call “fluidity” in a hybrid deployment. Specifically, there are two things you need to watch out for while making your iPaaS purchasing decision:

  1. When looking at cloud integration solutions from legacy vendors, make sure you look under the hood to ensure they’re not simply “cloud washing” their on-premise product by hosting it in the cloud. Even if the same code base is hosted in the cloud and inside the firewall, and a deployment target can be picked from a drop-down, you will run into management, monitoring and configuration issues. The two runtimes are not peers: they lack the ability to communicate with the monitoring server across the firewall, so managing and monitoring them requires separate consoles. Additionally, on-premise runtimes were designed with configuration files that are local to the installation and not centrally managed. Managing and configuring such hybrid environments becomes a recurring cost with every software upgrade and can add up significantly.
  2. Similarly, don’t fall for the map trap of “map once and run everywhere.” Mappings built once rarely deliver value everywhere, because they are typically very specific to the sources and targets being integrated. Most times they are not transferable from on-premise to the cloud, as typical on-premise endpoints (Oracle ERP, SAP, Teradata, etc.) are very different from cloud endpoints (Salesforce, Workday, Redshift, etc.), which renders the “run everywhere” story quite ineffective. The other issue with this approach is that the “anywhere” really masks a variety of distinct products that the vendor is trying to make appear similar, and that set of distinct products implies management and monitoring headaches. Lastly, mappings are a one-time cost, so re-usability doesn’t fetch you much; it’s the recurring management and monitoring costs that get you anyway.

It is because of the challenges listed above that a software-defined integration platform as a service has become a key enabler of “enterprise cloudification.” A high degree of integration fluidity results in a higher degree of business agility.

Stay tuned for the next post in this iPaaS series, which will review the importance of lifecycle management and the right approach to cloud integration as you prepare for the enterprise shift to social, mobile, analytics, cloud and the internet of things (SMACT). You may also enjoy this post about why ESBs are the wrong approach to cloud services integration.

The SnapLogic team is very excited to announce our new SnapLogic Integration Cloud Free Trial for Amazon Redshift at the AWS Summit in San Francisco. Built on the most recent innovation from SnapLogic called Snap Patterns, which are pre-built, packaged integrations that customers can configure through a step-by-step wizard, our goal is to help you accelerate your time to value with Amazon Redshift by as much as 10x. There’s no coding necessary!

The free trial comes with Snap Patterns for some commonly recurring data integration requirements that challenge Amazon Redshift and cloud data warehousing customers. For starters, the Redshift Loader Patterns take away the complexity of the initial load from Amazon RDS for MySQL, Oracle, SQL Server and PostgreSQL into Amazon Redshift. Users of the SnapLogic Snap Patterns can then use MicroStrategy, QlikView, Tableau, Tidemark or other leading business intelligence (BI) tools to run analytics and visualize their data in Redshift.

Snap Patterns for Amazon Redshift allow organizations to:

  • Accelerate cloud data warehouse adoption with prebuilt patterns that can be configured by an automatically-generated series of steps.

  • Rapidly connect Amazon Redshift to a variety of relational database services including Amazon RDS for MySQL, PostgreSQL, Oracle and SQL Server.

  • Quickly load data into an Amazon S3 bucket and kick off the Amazon Redshift import process in a single step.

  • Easily replicate source tables into their Amazon Redshift clusters and detect daily changes to keep data synchronized.

  • Take advantage of core REST and SOAP Snaps for broader connectivity.

  • Visually design a variety of data operations using a set of core Snaps such as Binary, Flow, Script, Transform and XML.

  • Do sophisticated extract, transform and load (ETL) operations such as slowly changing dimensions type 2 (SCD2) and database lookups without any coding.

  • Start with a free trial of an easy to use wizard and upgrade to the full SnapLogic integration platform as a service (iPaaS) in order to connect to data from Salesforce.com, Teradata, Netezza, SAP, Oracle ERP and other systems without losing any of their existing work.
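
Under the hood, the “single step” S3-to-Redshift load described above amounts to staging files in S3 and issuing a Redshift COPY command against the cluster. Here is a rough Python sketch of that pattern (my own illustration with hypothetical bucket, table and role names, not the Snap Pattern’s actual implementation):

```python
def build_copy_statement(table: str, bucket: str, key_prefix: str,
                         iam_role: str) -> str:
    """Build the Redshift COPY command that imports staged S3 files.

    In a real flow the files are first uploaded to the S3 bucket
    (e.g. with boto3) and this statement is then executed against
    the Redshift cluster to kick off the parallel import.
    """
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}/{key_prefix}' "
        f"IAM_ROLE '{iam_role}' "
        f"FORMAT AS CSV GZIP"
    )

sql = build_copy_statement(
    table="public.orders",                # hypothetical names throughout
    bucket="my-staging-bucket",
    key_prefix="loads/orders/",
    iam_role="arn:aws:iam::123456789012:role/redshift-load",
)
assert sql.startswith("COPY public.orders FROM 's3://my-staging-bucket/")
```

Hand-rolling the staging, the COPY options and the daily change detection is exactly the plumbing the Loader Patterns are meant to hide behind the wizard.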

If you’re ready to get started with the trial, check out the following resources:

Additional resources are available here:

We look forward to hearing your feedback as this trial powers your cloud analytics initiatives. And if you happen to be at the AWS Summit in San Francisco today, please swing by our booth (#621) for more information!