Winter 2017 Release Is Now Available

As enterprises grow and adopt best-of-breed solutions in the cloud, on-premises, or in hybrid environments, integrating data between the varied applications, databases, and data warehouses they use continues to be a challenge. New solutions are adopted rapidly, and technical and non-technical users alike need help meeting the challenge of quickly integrating data from multiple sources into one view to make decisions at the speed of business.

The release includes several new Snaps and Snap updates that make it faster and easier to integrate Workday, NetSuite and Amazon Redshift with other applications and data sources across the enterprise. All three systems are increasingly popular as businesses embrace the cloud to run their business, a “cloud shift” that Gartner says will drive more than $1 trillion in technology spending by 2020.

Here is a brief overview of new and enhanced Snaps:

  • Confluent Kafka: The need for streaming data keeps growing, and today about one-third of the Fortune 500 uses Kafka. SnapLogic is pleased to introduce a new Snap for Confluent’s distribution of Apache Kafka™, an enterprise-ready solution that connects data sources, applications and IoT devices in real time.
  • Teradata: Several new Snaps have been added to the Teradata Snap Pack, expanding support with the Teradata TPT Load Snap, the TPT Update Snap, and the Teradata Export to HDFS Snap, which lets customers easily export data from Teradata to an HDFS cluster without any additional installation or complex configuration.
  • Workday: The Workday Read Snap has been enhanced to provide a simplified output format, making its data even easier for downstream systems to consume.
  • NetSuite: Asynchronous operations support for NetSuite enables more efficient use of NetSuite’s capabilities through new Snaps, including the NetSuite Async Upsert, Async Search, Async Delete List, Async GetList, Check Async Status, and Get Async Result Snaps.
  • Amazon Redshift: Our customers use SnapLogic to connect multiple on-premises data sources and applications to Redshift without any coding. The Winter 2017 release introduces a new Snap that executes multiple Redshift commands in one Snap, making Redshift data integration pipelines even easier to create and manage.
  • Amazon S3: The Winter 2017 release brings additional streaming performance improvements when writing to an Amazon S3 bucket.

Continued Enterprise Focus: Introducing Asset Search Functionality

SnapLogic continues to be the best platform for enterprise IT and LOB teams to integrate applications and data sources without any coding. Enterprises often have thousands of pipelines, files, and accounts, which makes finding a given asset difficult. The Winter 2017 release lets customers quickly search for assets and filter the search results.

Security and Performance Enhancements

Security and performance continue to be focus areas for SnapLogic. To further strengthen password security, the Winter 2017 release enforces enhanced password complexity requirements. Customers can also configure session timeout and idle timeout parameters. In addition, the MongoDB Snap Pack has been extended to support SSL.

SnapLogic is committed to supporting the growing enterprise’s needs. We hope you will find the new Confluent Kafka Snap, the expanded support for Workday, NetSuite, and Amazon Redshift, and the enhanced search and security capabilities useful. Customers can start using the capabilities described in the Winter 2017 release right away. For more information on the Winter 2017 release, including demo videos, see www.snaplogic.com/winter2017.

Enterprise iPaaS and API Management

It’s been said we have entered “the API Economy.” As our partner 3scale describes it: “As web-enabled software becomes the standard for business processes, the ways organizations, partners and customers interface with it have become a critical differentiator in the market place.”

In this post I’ll summarize SnapLogic’s native API management capabilities and introduce the best-in-class partnerships we announced today with 3scale and Restlet to extend the cloud and big data integration capabilities that SnapLogic’s Elastic Integration Platform delivers. In the next set of posts, we’ll provide a deeper overview of our API Management partners.

Exposing SnapLogic Pipelines as APIs
SnapLogic’s unified integration platform as a service (iPaaS) allows citizen integrators and developers to build multi-point integration pipelines that connect cloud and on-premises applications as well as disparate enterprise data sources for big data analytics, and expose them as RESTful APIs. These APIs can be invoked by any authorized user, application, web backend or mobile app through a simple and standardized HTTP call, in order to trigger the execution of the SnapLogic pipeline. Here are two options for exposing SnapLogic pipelines as Tasks:

  • Triggered Tasks: Each Task exposes a cloud URL and, if it is to be executed on an on-premises Snaplex (aka Groundplex), it may also have an on-premises URL. These dataflow pipelines may be invoked using REST GET/POST, passing parameters and optionally a payload in and out. These pipelines use SnapLogic’s standard HTTP basic auth authorization scheme.
  • Ultra Pipelines: These Tasks are “always on”: the pipeline is memory-resident, ready to process incoming requests with millisecond-level invocation overhead, which makes Ultra Pipelines ideal for real-time integration processing. For authentication, an Ultra Pipeline may be assigned a “bearer token,” an arbitrary string that must be passed in the HTTP headers of the invoking HTTPS request. The token is optional, allowing customers to invoke a pipeline with no authentication, which may be appropriate on internal trusted networks. A minimal invocation sketch for both Task types follows this list.
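To make the two invocation styles concrete, here is a minimal sketch of calling each one with Python and the requests library. The URLs, credentials, and bearer token below are hypothetical placeholders, not real endpoints.

```python
# Minimal sketch of invoking SnapLogic Tasks over HTTP (Python + requests).
# All URLs, credentials, and tokens are hypothetical placeholders.
import requests

# Triggered Task: REST POST with pipeline parameters and an optional JSON payload,
# authenticated with the standard HTTP basic auth scheme.
triggered_url = "https://elastic.example.com/api/1/rest/feed/MyOrg/Demo/LoadOrdersTask"  # hypothetical
resp = requests.post(
    triggered_url,
    auth=("integration_user@example.com", "secret"),   # HTTP basic auth
    params={"region": "EMEA"},                         # pipeline parameters
    json=[{"order_id": 1001, "amount": 250.0}],        # optional payload passed in
)
print(resp.status_code, resp.text)

# Ultra Pipeline: always-on endpoint; the optional bearer token travels in the HTTP headers.
ultra_url = "https://ultra.example.com/api/1/rest/feed/MyOrg/Demo/OrderLookup"  # hypothetical
resp = requests.get(
    ultra_url,
    headers={"Authorization": "Bearer 0f3c9a7e-example-token"},  # arbitrary string assigned to the Task
    params={"order_id": 1001},
)
print(resp.json())
```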

SnapLogic Platform APIs
SnapLogic also provides an expanding set of APIs for customer systems to interact with our elastic iPaaS. Examples include:

  • User and Group APIs: Programmatically manage and automate the creation of SnapLogic users and groups.
  • Pipeline Monitoring APIs: Determine pipeline execution status. A common use case is an external enterprise scheduler monitoring your pipelines, as in the sketch after this list.
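As an illustration of that monitoring use case, an external scheduler could poll for a run’s status along the lines of the sketch below. The endpoint path, field names, and state values are assumptions made for the sketch, not the documented API.

```python
# Hypothetical sketch of an external scheduler polling pipeline execution status.
# The endpoint path, field names, and state values are illustrative assumptions.
import time

import requests

MONITOR_URL = "https://elastic.example.com/api/1/rest/runtime/MyOrg"  # hypothetical
AUTH = ("integration_user@example.com", "secret")

def wait_for_completion(run_id: str, poll_seconds: int = 30) -> str:
    """Poll a pipeline run until it reaches a terminal state, then return that state."""
    while True:
        resp = requests.get(f"{MONITOR_URL}/{run_id}", auth=AUTH)
        resp.raise_for_status()
        state = resp.json().get("state")        # assumed field name
        if state not in ("Queued", "Started"):  # assumed non-terminal states
            return state                        # e.g. "Completed" or "Failed"
        time.sleep(poll_seconds)
```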

SnapLogic Snaps Consuming Application APIs
SnapLogic Snaps are the intelligent, introspecting, dynamic connectors that provide the building blocks for pipelines. Snaps interface with applications using the APIs those applications provide. As an application’s APIs evolve, SnapLogic takes care of keeping the Snaps up to date, allowing our customers to take care of the business of their business, not the business of API-level integration.

SnapLogic provides a comprehensive set of 300+ pre-built Snaps, but no collection of connectors will ever be complete. To address the expanding cloud and on-premises application and data integration needs of our customers, SnapLogic provides an SDK that lets customers and partners create their own Snaps for custom endpoints or custom transformations.

Best-in-Class API Management Partnerships
From API authoring and authentication to governance and reporting to protocol translation, our enterprise ISV partners extend SnapLogic’s cloud and big data integration capabilities with best-in-class API Management capabilities. From our partners:

“Making it easy to share digital assets is at the core of what we do. Being able to offer our customers this seamless way to expose their data and application and integration pipelines as RESTful APIs is another way to make that happen.”

– Manfred Bortenschlager, API Market Development Director at 3scale

 

“With over 300 pre-built connectors available in SnapLogic’s Elastic Integration Platform, this partnership enables virtually any data sources, from legacy to cloud, big data and social platforms, to be exposed as RESTful APIs.”

–  Jerome Louvel, Chief Geek and founder at Restlet

Read the press releases to learn more about our API Management partners, and Contact Us for more information.

Rest in Peace Old ETL

The other shoe dropped today. Last year Tibco was taken over by a private equity firm for $4.3 billion. Today it was announced that Informatica will be taken private in a deal worth $5.3 billion.

So in a matter of months, we’ve seen the market impact of the enterprise IT shift away from legacy data management and integration technologies when it comes to dealing with today’s social, mobile, analytics/big data, cloud computing and the Internet of Things (SMACT) applications and data.

No ESB. No ETL. No Kidding!

Last year I posted 10 reasons why old ETL and ESB technologies will continue to struggle in the SMACT era. They are:

  1. Cannibalization of the Core On-Premises Business
  2. Heritage Matters in the Cloud
  3. EAI without the ESB
  4. Beyond ETL
  5. Point to Point Misses the Point
  6. Franken-tegration
  7. Big Data Integration is not Core…or Cloud
  8. Elastic Scale Out
  9. An On-Ramp to On-Prem
  10. Focus and DNA

Last week, SnapLogic’s head of product management wrote the post iPaaS: A new approach to cloud integration. He had this to say about ETL or batch-only data integration:

“ETL is typically used for getting data in and out of a repository (data mart, data warehouse) for analytical purposes, and often addresses data cleansing and quality as well as master data management (MDM) requirements. With the onset of Hadoop to cost-effectively address the collection and storage of structured and unstructured data, however, the relevance of traditional rows-and-columns-centric ETL approaches is now in question.”

Industry analyst and practitioner David Linthicum goes further in his whitepaper, The Death of Traditional Data Integration. The paper covers the rise of services and streaming technologies, and when it comes to ETL he notes:

“Traditional ETL tools only focused on data that had to be copied and changed. Emerging data systems approach the use of large amounts of data by largely leaving data in place, and instead accessing and transforming data where it sits, no matter if it maintains a structure or not, and no matter where it’s located, such as in private clouds, public clouds, or traditional systems. JSON, a lightweight data interchange format, is emerging as the common approach that will allow data integration technology to handle tabular, unstructured, and hierarchical data at the same time. As we progress, the role of JSON will become even more strategic to emerging data integration approaches.”
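As a small aside (not from the whitepaper), the point about JSON is easy to see in a single document: one payload can carry flat, column-like fields, repeating child records, and nested structure at the same time, which is what lets a single format span tabular and hierarchical data.

```python
import json

# One JSON payload mixing flat, tabular-style fields with nested, hierarchical detail.
order = {
    "order_id": 1001,                                   # flat, column-like fields
    "customer": {                                       # nested object (hierarchical)
        "name": "Acme Corp",
        "address": {"city": "San Mateo", "country": "US"},
    },
    "lines": [                                          # repeating child records
        {"sku": "A-100", "qty": 2, "price": 25.0},
        {"sku": "B-200", "qty": 1, "price": 99.0},
    ],
}
print(json.dumps(order, indent=2))
```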

And finally, Gaurav Dhillon, SnapLogic’s co-founder and CEO, who also co-founded and ran Informatica for 12+ years, had this to say recently about the old way versus the new way of ensuring enterprise data and applications stay connected:

“We think it’s false reasoning to say, ‘You have to pick an ESB for this, and ETL for that.’ We believe this is really a ‘connective tissue’ problem. You shouldn’t have to change the data load, or use multiple integration platforms, if you are connecting SaaS apps, or if you are connecting an analytics subsystem. It’s just data momentum. You have larger, massive data containers, sometimes moving more slowly into the data lake. In the cloud connection scenario, you have lots of small containers coming in very quickly. The right product should let you do both. That’s where iPaaS comes in.”

At SnapLogic, we’re focused on delivering a unified, productive, modern and connected platform for companies to address their Integrator’s Dilemma and connect faster, because bulldozers and buses aren’t going to fly in the SMACT era.

I’d like to wish all of my former colleagues at Informatica well as they go through the private equity process. (Please note that SnapLogic is hiring.)

SnapLogic TechTalks: Elastic iPaaS and Big Data Integration in Action

Over the past few months we’ve run a series of TechTalks to help get our customers and partners up to speed on the SnapLogic Elastic Integration Platform. The sessions are 30 minutes and they dive into some of the more advanced aspects of our cloud and big data integration platform as a service (iPaaS). Topics we’ve covered so far include:

  • Java Database Connectivity (JDBC)
  • Event Driven Pipelines
  • Sub Pipelines and Guaranteed Delivery
  • Data Transformations and Mappings
  • Using the SOAP Snap
  • Scaling Your Integrations
  • SnapReduce 2.0

You can check out the recordings here and sign up for the next TechTalk here. We’re always looking for new topics to cover. Customers and partners can log in to our Developer Community and post suggestions, or talk to their SnapLogic customer success manager. And if you’re new to SnapLogic and how we connect enterprise data, applications, and APIs, be sure to check out our Resource Center and Contact Us if you’d like to discuss your cloud and big data integration requirements.
