According to a recent Gartner report, “Systems of innovation that leverage mobile and Web applications will demand coherent data integration and application integration techniques to gain just-in-time insights and activate responses across continuously evolving events. Systems of record that interact with a growing range of endpoints (including machine-generated data and IoT-related data) will involve linking of business flows with data access and analysis of event streams.”

Today we announced the general availability of the SnapLogic Spring 2015 release, which expands our focus on big data integration with new support for the Internet of Things (IoT). Along with many new and updated Snaps, the new release introduces a set of Snaps for Message Queuing Telemetry Transport (MQTT), which is a lightweight machine-to-machine connectivity protocol. The new Snaps will allow SnapLogic customers to create pipelines that connect to an MQTT broker for sensors, mobile and connected devices and stream data to analytical and other applications in real time. Later this year, we’re looking at supporting additional IoT protocols such as AMQP, CoAP, OMA Lightweight M2M, and ETSI Smart M2M.
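To make the MQTT pattern concrete, here is a minimal subscriber sketch in Python using the open-source paho-mqtt client. It illustrates the protocol, not a SnapLogic Snap; the broker host and topic are placeholders.

```python
# Minimal MQTT subscriber sketch using the open-source paho-mqtt client.
# Broker host and topic are illustrative placeholders.
import json
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe once connected; "#" is a wildcard matching all sub-topics.
    client.subscribe("sensors/#")

def on_message(client, userdata, msg):
    # Each message carries a device payload; a pipeline would transform
    # it and stream it on to analytical applications in real time.
    reading = json.loads(msg.payload)
    print(msg.topic, reading)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)  # 1883 is the default MQTT port
client.loop_forever()  # block and dispatch messages as they arrive
```

A Snap-based pipeline wraps this same subscribe-and-stream loop in a configurable, managed component, so no such code needs to be written or operated by hand.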

“The Spring 2015 release is yet another milestone for our company as we continue to focus on platform innovation and ensuring our enterprise customers can connect data, applications and things faster.”

- Gaurav Dhillon, co-founder and CEO, SnapLogic

The Spring 2015 release introduces updates and innovations throughout the SnapLogic Elastic Integration Platform, which was recently recognized by Ovum as an enterprise integration platform as a service (iPaaS) leader. Spring 2015 updates include:

  • Pipeline Execution Statistics: When users execute a pipeline, a duration chart now provides an overview of where the pipeline is spending its time as it is processing the stream of JSON documents. The Pipeline Execution Statistics dialog displays how much CPU time is being consumed and how much memory is being allocated for each Snap in real time, allowing for optimized pipeline design and improved system performance.
  • Low-Latency Ultra Pipelines: Administrators can now set up multiple receiving Ultra Pipelines nodes to ensure there is no single point of failure and achieve higher availability and reliability. Ultra Pipelines now have improved error handling and performance monitoring in the SnapLogic dashboard.
  • Public Monitoring API: In addition to an API for managing users and groups, a new public monitoring API has been introduced, allowing customers to proactively query pipeline status and use their existing monitoring tools to track the health of SnapLogic integrations (see the polling sketch after this list).
  • Multi-Instance Kerberos Authentication: The SnapLogic Hadooplex runs natively as a YARN application on the Hadoop cluster, whether it is running behind the firewall or in the cloud. The Spring 2015 release introduces support for multiple instances authenticated with different Kerberos users on a single Hadoop cluster.
  • Hadoop for Humans: With an easy-to-use cloud-based designer, customers can create SnapReduce pipelines that now support router and copy functions as well as advanced expressions for more complex processing and analytics use cases.
  • New and Updated Snaps: The Spring 2015 release introduces new Snaps for Anaplan, Google BigQuery, SAP HANA and Splunk. The Salesforce Snap now integrates with CipherCloud’s security and compliance platform, and the release also includes many other updated Snaps for applications and technologies including Amazon Redshift, Eloqua, Excel, Expensify, HP Vertica, HDFS, JDBC, LDAP, Microsoft Active Directory, Microsoft SQL Server, MySQL, NetSuite, OpenAir, Oracle, PostgreSQL, PGP, REST, SAP BAPI, ServiceNow, SOAP, and Tableau.
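To give a flavor of the new monitoring API, the sketch below polls a pipeline-status endpoint and flags failed runs. It is a hypothetical illustration: the URL, token, and response fields are placeholders, not SnapLogic’s documented API, so consult the product documentation for the real endpoints.

```python
# Hypothetical sketch of polling a pipeline-monitoring REST API.
# The URL, token, and response fields below are illustrative placeholders,
# not SnapLogic's documented API surface.
import requests

MONITOR_URL = "https://elastic.example.com/api/1/rest/monitor/runs"
HEADERS = {"Authorization": "Bearer <token>"}  # credential elided

def failed_runs():
    resp = requests.get(MONITOR_URL, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    runs = resp.json()  # assume a list of run records
    return [r for r in runs if r.get("state") == "Failed"]

for run in failed_runs():
    # Hand failures to an existing monitoring tool (Nagios, PagerDuty, ...).
    print("ALERT: pipeline run failed:", run.get("pipeline"), run.get("id"))
```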

All SnapLogic Elastic Integration Platform customers have been updated to the Spring 2015 release. To learn more about what’s new, visit: www.snaplogic.com/spring2015.

Last week Ovum introduced their iPaaS Decision Matrix. The report noted:

“The rise of software-as-a-service (SaaS) and the increasing heterogeneity of enterprise applications portfolios and data sources have necessitated a shift to suitable alternatives to traditional integration approaches…Enterprises should consider iPaaS as a means to ease the complexity of hybrid integration as they continue to focus on managing the complex interplay of business needs and persistent budget constraints, and still achieve faster time to integration.”

Welcome to the new era of integration innovation. Is it time to call it Integration 2.0 or are we now at 3.0? I’m not sure, but there is definitely a re-thinking going on in the enterprise as IT organizations seek greater agility, speed and flexibility. As legacy vendors in the market deal with private equity buyouts, waiting for a faster horse is clearly not an option.

Gartner recently outlined 5 reasons to begin converging application and data integration.

Last week Loraine Lawson summarized 5 reasons why integration is changing, drawn from David Linthicum’s whitepaper The Death of Traditional Data Integration. They are:

  1. Cloud computing
  2. Mass data storage
  3. Complex data, such as unstructured data
  4. A service-based approach to data (which is key to mobile apps and wider use of enterprise analytics)
  5. Streaming data, mobile data and the related data security challenges

Today, Valerie Silverthorne wrote about Nine questions for hybrid cloud integration. They are:

  1. What are you not going to move to the cloud?
  2. Where will most of the future development take place?
  3. What is the use case?
  4. What kinds of applications are involved?
  5. Who will manage the integration?
  6. How quickly can this happen?
  7. What’s the next application?
  8. Can I make do with what I have?
  9. Should you trust the vendor?

The bottom line? We’re seeing a remarkably fast transition of late, with enterprise IT shifting away from legacy data management and integration technologies to iPaaS. The reason: the confluence of several transformational technologies, including social, mobile, analytics/big data, cloud computing and the Internet of Things (SMACT), is finally coming together.

Next Steps:

Big data is evolving as a practice and we are quickly approaching a point at which data will be treated as a single source, which will require a different type of architecture. According to John Myers of Enterprise Management Associates (EMA), this architecture will need to be one that is focused beyond a single platform, where operational and analytical workloads work together. This architecture is called a Hybrid Data Ecosystem.

Join us on Wednesday, April 29th for a live webinar with John, Managing Research Director for EMA’s Business Intelligence practice. This webinar will review the drivers associated with big data implementations, evolving technical requirements for big data environments, and how a robust information management layer is important to big data projects.

During the webinar, we’ll also review how recent EMA research describes the following:

  • Use cases that drive big data and the importance of the Internet of Things and streaming applications in big data
  • The impact of cloud implementation avenues for big data projects
  • How the EMA Hybrid Data Ecosystem Information Management Layer coordinates integration between disparate platforms and data types

Register now and join John Myers and the SnapLogic team for this exciting webinar to learn about what constitutes the Hybrid Data Ecosystem – and why it’s a necessity for modern data integration.

Gartner has recently published new research outlining why enterprise IT organizations should begin to converge their application and data integration. According to their research:

“Unnecessary segregated application and data integration efforts lead to counterproductive practices and escalating deployment costs. Integration leaders can use five key drivers to engage stakeholders of data and application integration disciplines to articulate, intersect and work together.”

Written by Eric Thoo and Keith Guttridge, the March 2015 research note provides practical suggestions for enterprise IT organizations that are trying to move at the speed of business, helping them put holistic strategies in place so that application and data integration technology becomes an on-ramp to the cloud and big data rather than a barrier to adoption.

The research is based upon 2014 international survey data, client inquiries over the past 12 months, and interactions with attendees at Gartner’s Application Architecture, Development and Integration Summit and its Business Intelligence and Information Management Summit.

You can download the Gartner research here.

This week SnapLogic co-founder and CEO Gaurav Dhillon was featured in a profile on Business Insider. He shared his perspective on the news that Informatica, the company he co-founded in the ’90s and ran for 12 years through a successful IPO, is being acquired by a private equity firm. He had this to say:

“I genuinely feel that this kind of financial engineering is not good for customers. But from a SnapLogic perspective, we’re cleared for takeoff.”

On Sramana Mitra’s popular One Million by One Million Blog, Gaurav went into more detail about the state of the data and application integration market, Informatica, SnapLogic, and how to build a great company. I’ve embedded the video interview below:

The other shoe dropped today. Last year TIBCO was taken over by a private equity firm for $4.3 billion. Today it was announced that Informatica will be taken private in a deal worth $5.3 billion.

So in a matter of months, we’ve seen the market impact of the enterprise IT shift away from legacy data management and integration technologies when it comes to dealing with today’s social, mobile, analytics/big data, cloud computing and the Internet of Things (SMACT) applications and data.

No ESB. No ETL. No Kidding!

Last year I posted 10 reasons why old ETL and ESB technologies will continue to struggle in the SMACT era. They are:

  1. Cannibalization of the Core On-Premises Business
  2. Heritage Matters in the Cloud
  3. EAI without the ESB
  4. Beyond ETL
  5. Point to Point Misses the Point
  6. Franken-tegration
  7. Big Data Integration is not Core…or Cloud
  8. Elastic Scale Out
  9. An On-Ramp to On-Prem
  10. Focus and DNA

Last week, SnapLogic’s head of product management wrote the post iPaaS: A new approach to cloud integration. He had this to say about ETL, or batch-only data integration:

“ETL is typically used for getting data in and out of a repository (data mart, data warehouse) for analytical purposes, and often addresses data cleansing and quality as well as master data management (MDM) requirements. With the onset of Hadoop to cost-effectively address the collection and storage of structured and unstructured data, however, the relevance of traditional rows-and-columns-centric ETL approaches is now in question.”

Industry analyst and practitioner David Linthicum goes further in his whitepaper: The Death of Traditional Data Integration. The paper covers the rise of services and streaming technologies and when it comes to ETL he notes:

“Traditional ETL tools only focused on data that had to be copied and changed. Emerging data systems approach the use of large amounts of data by largely leaving data in place, and instead accessing and transforming data where it sits, no matter if it maintains a structure or not, and no matter where it’s located, such as in private clouds, public clouds, or traditional systems. JSON, a lightweight data interchange format, is emerging as the common approach that will allow data integration technology to handle tabular, unstructured, and hierarchical data at the same time. As we progress, the role of JSON will become even more strategic to emerging data integration approaches.”
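Linthicum’s point about JSON is easy to see in miniature: the same lightweight format can carry flat, tabular rows and nested, hierarchical records through a single code path. A small sketch (the sample records are invented for illustration):

```python
# JSON can hold tabular and hierarchical records in one format.
# The sample records below are invented for illustration.
import json

doc = json.loads("""
[
  {"id": 1, "amount": 19.99},
  {"id": 2, "amount": 5.00,
   "customer": {"name": "Acme", "tags": ["priority", "west"]}}
]
""")

for record in doc:
    # Flat rows and nested records flow through the same loop;
    # downstream steps can flatten or enrich as needed.
    customer = record.get("customer", {})
    print(record["id"], record["amount"], customer.get("name"))
```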

And finally, Gaurav Dhillon, SnapLogic’s co-founder and CEO who also co-founded and ran Informatica for 12+ years, had this to say recently about the old way versus the new way of ensuring enterprise data and applications stay connected:

“We think it’s false reasoning to say, ‘You have to pick an ESB for this, and ETL for that.’ We believe this is really a ‘connective tissue’ problem. You shouldn’t have to change the data load, or use multiple integration platforms, if you are connecting SaaS apps, or if you are connecting an analytics subsystem. It’s just data momentum. You have larger, massive data containers, sometimes moving more slowly into the data lake. In the cloud connection scenario, you have lots of small containers coming in very quickly. The right product should let you do both. That’s where iPaaS comes in.”

At SnapLogic, we’re focused on delivering a unified, productive, modern and connected platform for companies to address their Integrator’s Dilemma and connect faster, because bulldozers and buses aren’t going to fly in the SMACT era.

I’d like to wish all of my former colleagues at Informatica well as they go through the private equity process. (Please note that SnapLogic is hiring.)

What happens when you’re faced with the challenge of maintaining a legacy data warehouse while dealing with ever-increasing volumes, varieties and velocities of data?

While powerhouses in the days of structured data, legacy data warehouses commonly consisted of RDBMS technologies from the likes of Oracle, IBM, Microsoft, and Teradata. Data was extracted, transformed and loaded into data marts or the enterprise data warehouse with traditional ETL tools, built to handle batch-oriented use cases and running on expensive, multi-core servers. In the era of self-service and big data, enterprise IT is rethinking these technologies and approaches.

In response to these challenges, companies are steadfastly moving to modern data management solutions that consist of NoSQL databases like MongoDB, Hadoop distributions from vendors like Cloudera and Hortonworks, cloud-based systems like Amazon Redshift, and data visualization from Tableau and others. Along this big data and cloud data warehouse journey, many people I speak with have realized that it’s vital not only to modernize their data warehouse implementations, but also to future-proof how they collect and drive data into their new analytics infrastructures. That requires an agile, multi-point data integration solution that is seamless and can handle both structured and unstructured data, whether streaming in real time or batch-oriented. As companies reposition IT and the analytics infrastructure from back-end cost centers to end-to-end partners of the business, service models become an integral part of both the IT and business roadmaps.

The Flexibility, Power and Agility Required for the New Data Warehouse

In most enterprises today, IT’s focus is moving away from spending its valuable resources on undifferentiated heavy lifting and toward delivering differentiating business value. By using SnapLogic to move big data integration management closer to the edge, resources can be freed up for more value-based projects and tasks while streamlining and accelerating the entire data-to-insights process. SnapLogic’s drag-and-drop data pipeline builder and streaming integration platform removes the burden and complexity of data ingestion into systems like Hadoop, transforming data integration from a rigid, time-consuming process into one that end users can manage and control.
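As one concrete flavor of the plumbing such a platform hides, the sketch below writes a file into HDFS over the standard WebHDFS REST interface; the namenode host, HDFS path, and user name are placeholders. A drag-and-drop pipeline abstracts this two-step dance (and the error handling around it) away from the user.

```python
# Writing a file into HDFS via the standard WebHDFS REST API.
# Namenode host, HDFS path, and user name are illustrative placeholders.
import requests

NAMENODE = "http://namenode.example.com:50070"
PATH = "/data/landing/events.json"
USER = "etluser"

# Step 1: ask the namenode where to write. It responds with a redirect
# to a datanode rather than accepting the data itself.
url = f"{NAMENODE}/webhdfs/v1{PATH}?op=CREATE&user.name={USER}&overwrite=true"
resp = requests.put(url, allow_redirects=False)
datanode_url = resp.headers["Location"]

# Step 2: stream the payload to the datanode location.
payload = b'{"event": "example"}\n'
resp = requests.put(datanode_url, data=payload)
resp.raise_for_status()  # expect 201 Created on success
```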

Productivity Gains with Faster Data Integration

A faster approach to data integration not only boosts productivity, but in many cases results in substantive cost savings. In one case, a SnapLogic customer with over 200 integration interfaces, managed and supported by a team of 12, was able to reduce its integration management footprint to less than 2 FTEs, realizing annual hard cost savings of more than 62%, an 8:1 FTE improvement, infrastructure savings of over 50% and a 35% improvement in its dev-ops release schedule. The same customer also realized a net productivity gain and increased speed to market by transferring ownership of its Hadoop data ingest process to data scientists and marketers. This shift made the company more responsive while significantly streamlining its entire data-to-insights process for faster, cheaper and better decision-making across the enterprise.

A More Agile Company

Increased integration agility means having the ability to make faster, better and cheaper moves and changes. SnapLogic’s modular design allows data scientists and marketers to be light on their feet, making adds, moves and changes in a snap with the assurance they require as new ideas arise and new flavors of data sources enter the picture.

By integrating with Hadoop through the SnapLogic Elastic Integration Platform, with its fast, modern, multi-point data integration, customers can seamlessly connect to and stream data from virtually any endpoint, whether cloud-based, ground-based, legacy, structured or unstructured. In addition, by simplifying data integration, SnapLogic customers no longer need to spend valuable IT resources managing and maintaining data pipelines, freeing them to contribute in areas of greater business value.

Randy Hamilton is a Silicon Valley entrepreneur and technologist who writes periodically about industry-related topics including the cloud, big data and IoT. Randy has held positions as Instructor (Open Distributed Systems) at UC Santa Cruz and has enjoyed positions at Basho (Riak NoSQL database), Sun Microsystems, and Outlook Ventures, as well as being one of the founding members and VP Engineering at Match.com.

Next Steps: