Summer Release 2017: Focus on operational efficiency and artificial intelligence

By Dinesh Chandrasekhar

SnapLogic Enterprise Integration Cloud’s Summer 2017 Release (v4.10) is generally available to customers today. We are excited to share many new features and enhancements that revolve around the themes of operational efficiency for digital transformation, continued innovation in the use of Iris Artificial Intelligence, and added support for cloud data warehousing and big data. While we stay focused on offering one of the finest and easiest user experiences in the iPaaS space, we wanted to extend that experience to the operational side as well.

Here are a few key highlights from this release that focus on improving operational efficiency and the use of artificial intelligence:

  • AI-Powered Pipeline Optimization: SnapLogic’s Iris technology now evaluates pipeline configuration and operation to recommend specific performance improvements based on machine learning, delivered to users via dashboard alerts and notifications.
  • Local Snaplex Dashboard: Our customers use Ultra Pipelines in mission-critical use cases that demand uninterrupted data flow across multiple data sources. We have now introduced a local dashboard that provides full visibility into Ultra Pipelines, giving customers increased confidence that their high-priority projects are running without interruption.
  • API Threshold Notifications: In a similar vein on the dashboard front, we are adding monitoring capabilities focused on operations and project development. Real-time alerts and notifications give customers an early warning system for monitoring complex integration projects. Alerts can be set up to notify stakeholders when API limits are approached or exceeded, when users are added to or removed from the platform, and when projects are modified.
  • Project-level Migration: We’ve been listening to your requests! With the fast pace at which integration projects are executed and delivered across the enterprise, it is imperative that project lifecycles not become a bottleneck. We’ve introduced a new feature that automates the migration of projects across departments and organizations, eliminating hand-coding and hard-coding and resulting in seamless project migrations.
  • Public API for Groundplex Installation: A new API is now available to automate the installation and configuration of on-premises SnapLogic nodes, eliminating the time and effort of manually downloading and installing additional nodes. (A sketch of what calling such an API might look like follows this list.)
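
To make the automation idea concrete, here is a minimal sketch of invoking an installation API from Python. The base URL, endpoint path, payload fields, and authentication header are hypothetical placeholders, not the documented SnapLogic API; consult the release documentation for the actual contract.

    # Hypothetical sketch of automating Groundplex node installation via a
    # REST API. The endpoint, payload, and auth scheme are assumptions, not
    # the documented SnapLogic API.
    import requests

    BASE_URL = "https://elastic.snaplogic.com/api/1/rest"  # hypothetical

    def install_groundplex_node(org: str, plex: str, token: str) -> dict:
        """Request installation and configuration of a new on-premises node."""
        response = requests.post(
            f"{BASE_URL}/plex/{org}/{plex}/nodes",  # hypothetical route
            headers={"Authorization": f"Bearer {token}"},
            json={"node_type": "groundplex", "auto_configure": True},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()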

And here are a few key highlights from this release that enhance our already comprehensive integration capabilities for cloud data warehousing and big data:

  • Microsoft Azure SQL Snap Pack: A brand new Snap Pack that allows customers to quickly and easily migrate on-premises databases to Microsoft Azure SQL Database for fast cloud-based reporting and analytics.
  • Teradata Select Snap: Expands the Teradata Snap Pack, allowing users to retrieve data from a Teradata database and display it in a table for easy reporting tasks.
  • Parquet Writer Snap: Allows users to store data in specific partitions in Hadoop HDFS, improving the ability to create reports and derive meaningful insights from big data.
  • Parquet Reader/Writer Snap: Allows users to write Parquet files to Amazon S3, with support for AWS Identity & Access Management (IAM) accounts, in addition to HDFS, expanding SnapLogic’s support for cloud-based data warehousing via Redshift. (An illustration of the partitioned-Parquet technique appears after this list.)
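
For readers unfamiliar with partitioned Parquet, here is a small illustration of the general technique these Snaps automate, written directly against the pyarrow library rather than SnapLogic itself; the bucket, path, and column names are invented for the example.

    # Write a table as partitioned Parquet files -- the storage layout the
    # Parquet Writer Snap produces in HDFS or S3. The columns and bucket
    # below are made up for illustration.
    import pyarrow as pa
    import pyarrow.parquet as pq

    table = pa.table({
        "order_id": [1, 2, 3],
        "region": ["east", "west", "east"],
        "amount": [120.0, 75.5, 99.9],
    })

    # Partitioning by "region" produces region=east/ and region=west/
    # directories, letting query engines skip irrelevant data at read time.
    pq.write_to_dataset(
        table,
        root_path="s3://example-analytics-bucket/orders",  # or an hdfs:// path
        partition_cols=["region"],
    )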

Watch the Summer 2017 Release webinar to learn about all the features and enhancements across our Snaps, IoT, CRM, and more.

Dinesh Chandrasekhar is Director of Product Marketing at SnapLogic. Follow him on Twitter @AppInt4All.

The bigger picture: Strategizing your data warehouse migration

By Ravi Dharnikota

If your organization is moving its data warehouse to the cloud, you can be confident you’re in good company. And if you read my last blog post about the six-step migration process, you can be even more confident that the move will go smoothly. However, don’t pull the trigger just yet. You’ve got a bit more planning to do, this time at a more strategic level.

First, let’s recap the migration process for the data warehouse itself, which I covered in my last post. There, I broke down all the components of this diagram:

Data Warehouse Migration Process

Now, as you can see in the diagram below, the data warehouse migration process is itself part of a bigger picture of migration planning and strategy. Let’s take a look at the important pre-migration steps you can take to help ensure success with the migration itself.

Migration Strategy and Planning

Step 1: Define Goals and Business Case. Start the planning process with a clear picture of the business reasons for migrating your data warehouse to the cloud. Common goals include:

  • Agility in terms of both the business and the IT organization’s data warehousing projects.
  • Performance on the back end, to ensure timeliness and availability of data, and on the front end, for fast end-user query response times.
  • Growth and headroom; the elastic scalability of cloud resources eases capacity planning.
  • Cost savings on hardware, software, services, space, and utilities.
  • Labor savings from reduced needs for database administration, systems administration, scheduling and operations, and maintenance and support.

Step 2: Assess the current data warehouse architecture. If the current architecture is sound, you can plan to migrate to the cloud without redesign or restructuring. If it is sufficient for BI but limited for advanced analytics and big data integration, review and refine data models and processes as part of the migration effort. If it struggles to meet current BI requirements, plan to redesign it as you migrate to the cloud.

Step 3: Define the migration strategy. A “lift and shift” approach is tempting, but it rarely succeeds. Changes are typically needed to adapt data structures, improve processing, and ensure compatibility with the chosen cloud platform. Incremental migration is more common and usually more successful.

As I mentioned in my last blog post, a hybrid strategy is another viable option. Here, your on-premises data warehouse remains in operation as the cloud data warehouse comes online. During this transition phase, you’ll need to synchronize data between the old on-premises warehouse and the new one in the cloud.
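
As a rough illustration of that synchronization step, here is a minimal watermark-based sketch in Python. It treats both warehouses as generic SQL connections (sqlite3 stands in for the real drivers), and the table and column names are placeholders, not a prescribed design.

    # Periodically copy rows changed since the last sync from the on-premises
    # warehouse to the cloud warehouse, using an "updated_at" watermark.
    import sqlite3  # stand-in for the real warehouse drivers on each side

    def sync_increment(source: sqlite3.Connection,
                       target: sqlite3.Connection,
                       table: str,
                       watermark: str) -> str:
        """Copy rows modified after `watermark`; return the new watermark."""
        rows = source.execute(
            f"SELECT id, payload, updated_at FROM {table} WHERE updated_at > ?",
            (watermark,),
        ).fetchall()
        # Upsert into the cloud copy (INSERT OR REPLACE is SQLite dialect;
        # a real warehouse would use MERGE or an equivalent upsert).
        target.executemany(
            f"INSERT OR REPLACE INTO {table} (id, payload, updated_at) "
            "VALUES (?, ?, ?)",
            rows,
        )
        target.commit()
        # Advance the watermark to the newest change we copied.
        return max((r[2] for r in rows), default=watermark)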

Step 4: Select the technology, including the cloud platform you’ll migrate to and the tools you’ll need for the migration. Many types of tools and services can be valuable:

  • Data integration tools are used to build or rebuild the ETL processes that populate the data warehouse. Integration platform as a service (iPaaS) technology is especially well suited for ETL migration. (A toy ETL pass is sketched after this list.)
  • Data warehouse automation tools like WhereScape can be used to deconstruct legacy ETL, reverse engineer and redesign ETL processes, and regenerate ETL processes without the need to reconstruct data mappings and transformation logic.
  • Data virtualization tools such as Denodo provide a virtual layer of data views to support queries that are independent of storage location and adaptable to changing data structures.
  • System integrators and service providers like Atmosera can be helpful when manual effort is needed to extract data mappings and transformation logic that is buried in code.
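
To make the ETL-rebuilding idea concrete, here is a toy extract-transform-load pass of the kind such tools generate or replace. DB-API style connections are assumed, and the staging table, columns, and transformation are illustrative only.

    # A toy ETL pass: extract rows from a source system, apply a small
    # transformation, and load the result into the warehouse.
    def run_etl(source, warehouse, batch_size: int = 1000) -> None:
        cursor = source.execute(
            "SELECT customer_id, raw_amount FROM staging_orders"
        )
        while True:
            batch = cursor.fetchmany(batch_size)
            if not batch:
                break
            # Transform: normalize amounts from cents to dollars.
            cleaned = [(cid, amount / 100.0) for cid, amount in batch]
            warehouse.executemany(
                "INSERT INTO fact_orders (customer_id, amount_usd) VALUES (?, ?)",
                cleaned,
            )
        warehouse.commit()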

Using these tools and services, individually or in combination, can make a remarkable difference, speeding and de-risking the migration process.

Step 5: Migrate and operationalize; start by defining test and acceptance criteria. Plan the testing, then execute the migration process to move schema, data, and processing. Execute the test plan and, when successful, operationalize the cloud data warehouse and migrate users and applications.
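
To give a flavor of what acceptance criteria can look like in practice, here is a minimal sketch that compares row counts and a simple checksum between the legacy and cloud warehouses. The connections, tables, and choice of checksum are placeholder assumptions, not a complete test plan.

    # Gate the cutover on simple parity checks between the old and new
    # warehouses. DB-API style connections are assumed; the checksum column
    # is an example.
    def validate_table(legacy, cloud, table: str) -> bool:
        checks = [
            f"SELECT COUNT(*) FROM {table}",                  # row parity
            f"SELECT COALESCE(SUM(amount), 0) FROM {table}",  # naive checksum
        ]
        for sql in checks:
            old_val = legacy.execute(sql).fetchone()[0]
            new_val = cloud.execute(sql).fetchone()[0]
            if old_val != new_val:
                print(f"MISMATCH in {table}: {sql!r} -> {old_val} vs {new_val}")
                return False
        return True

    # Operationalize only when every migrated table passes:
    # ready = all(validate_table(legacy, cloud, t) for t in migrated_tables)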

Learn more at SnapLogic’s upcoming webinar

To get the full story on data warehouse cloud migration, join me for an informative SnapLogic webinar, “Traditional Data Warehousing is Dead: How digital enterprises are scaling their data to infinity and beyond in the Cloud,” on Wednesday, August 16 at 9:00am PT. I’ll be presenting with Dave Wells, Leader of Data Management Practice, Eckerson Group, and highlighting tangible business benefits that your organization can achieve by moving your data to the cloud. You’ll learn:

  • Best practices, key technologies to consider, and case studies to get you started
  • The potential pitfalls of “cloud-washed” legacy data integration solutions
  • Cloud data warehousing market trends
  • How SnapLogic’s Enterprise Integration Cloud delivers up to a 10X improvement in the speed and ease of data integration

Register today!

Ravi Dharnikota is Chief Enterprise Architect at SnapLogic. Follow him on Twitter @rdharn1.

The commoditization of integration

By Dinesh Chandrasekhar

Eight years ago, dozens of integration vendors were offering scores of solutions, all with what seemed to be the same capabilities. Pick any ESB or ETL tool and it seemed to perform the same functions as its competitors. RFPs were no longer a viable way to weed out inferior vendors, as every solution checked all the boxes. Plus, all vendors were ready to lower their prices at the drop of a hat to win your business. It was at this point that the integration market truly reached commoditization: consumers could easily pick any solution, as there were no true differentiators among them.

But, several factors have changed the landscape since then:

  • NoESB – The NoESB architecture started gaining interest, pushing the idea that the ESB is irrelevant for many integration scenarios. Yet an API gateway was not the right alternative either.
  • Cloudification – The cloudification of pretty much all your favorite on-premises enterprise applications began around the same time. Enterprises that were thinking of a digital transformation couldn’t get too far without a definitive cloud strategy in place.
  • Convergence of ESB and ETL – The lines between application integration and data integration were blurring, and CIOs and IT managers didn’t want to deal with two different sets of integration tools. With the onset of mobile and IoT, data volumes were exploding daily, and even data warehouses moved to the cloud. Traditional, legacy ESB and ETL tools were ill-equipped to serve such big data needs.
  • Agile Integrations – Finally, the DevOps and Agile movements affected enterprise integration initiatives as well, giving rise to new user personas in the enterprise: citizen integrators, or citizen developers. These are the LOB managers and other non-IT personnel who needed quick integrations within their applications to render their data in different views; relying on IT to deliver solutions to the business was becoming a major hindrance.

All these factors have shaped the iPaaS (Integration Platform as a Service) market. Today, thousands of companies are leveraging iPaaS solutions to integrate their cloud and on-premises systems. iPaaS solutions break away from legacy approaches to integration: they are cloud-native, intuitive, fast, and self-starting; they support hybrid architectures; and they offer connectors to a wide range of on-premises and cloud applications.

Now comes the big question – “Will iPaaS solutions be commoditized, too?” At the moment, the answer is a definite no, and there are multiple reasons why. Beyond scale, latency, tenancy, SLAs, the number of connectors, and so on, one of the key areas that will differentiate iPaaS solutions is the developer experience. The user interface of the solution will determine the adoption rate and the value it brings to the enterprise. For a citizen integrator to actually use the system, the interface must be intuitive enough to guide them in building integration flows quickly, effectively, and, most importantly, without the assistance of IT. This alone will make or break system adoption.

iPaaS vendors are trying to enhance this developer experience with features like drag-and-drop connectors, pipeline snippets, template libraries, starter kits, and mapping enhancements. However, very few vendors offer AI-driven tooling that intelligently predicts the next steps in your integration flow based on learnings from hundreds of other users. AI-assist is a great benefit for citizen integrators, who may be non-technical, and even technically savvy developers welcome a significant boost in their productivity. With innovations like this happening, the iPaaS space is still quite far from being commoditized. However, enterprises need to be wary of cloud-washing iPaaS vendors that offer “1000+” connectors, a thick-client IDE, or an ESB wrapped in a cloud blanket. But that is a post for a different day!

Dinesh Chandrasekhar is Director of Product Marketing at SnapLogic. Follow him on Twitter @AppInt4All.

Gaurav Dhillon on Nathan Latka’s “The Top” Podcast

Popular podcast host Nathan Latka has built a large following by getting top CEOs, founders, and entrepreneurs to share the strategies and tactics that set them up for business success. A data industry veteran and self-described “company-builder,” SnapLogic founder and CEO Gaurav Dhillon was recently invited by Nathan to appear as a featured guest on “The Top.”

Nathan is known for his rapid-fire, straight-to-the-point questioning, and Gaurav was more than up to the challenge. In this episode, the two looked back at Gaurav’s founding of Informatica in the ’90s; how he took that company public and helped it grow into a billion-plus dollar business; why he stepped away from Informatica and decided to start SnapLogic; how data integration fuels digital business and why customers are demanding modern solutions like SnapLogic’s that are easy to use and built for the cloud; and how he’s building a fast-growing, innovative business that also has its feet on the ground.

The two also kept it fun, with Gaurav fielding Nathan’s “Famous Five” show-closing questions, including favorite book, most admired CEO, advice to your 20-year-old self, and more.

You can listen to the full podcast episode on “The Top.”

Gartner Names SnapLogic a Leader in the 2017 Enterprise iPaaS Magic Quadrant

For the second year in a row, SnapLogic has been named a Leader in Gartner’s Magic Quadrant for Enterprise Integration Platform as a Service (iPaaS).

Gartner evaluated iPaaS vendors on “completeness of vision” and “ability to execute.” Those named to the Leaders quadrant, as Gartner noted in the report, “have a solid reputation, with notable market presence and a proven track record in enabling … their platforms are well-proven and functionally rich, with regular releases to rapidly address this fast-evolving market.”

In a press release issued today, SnapLogic CTO James Markarian said of the recognition: “Since our inception, we have been laser-focused on delivering a modern enterprise integration platform that is specifically designed to manage the data and application integration demands of today’s hybrid enterprise technology environments. Our Enterprise Integration Cloud eliminates the complexity of legacy integrations, providing a platform that supports fast and easy self-service integration.”

The Enterprise iPaaS Magic Quadrant is embedded below. We’d encourage you to download the complete report as it provides a comprehensive review of all the vendors and the growing market.

Gartner 2017 iPaaS MQ

Thanks to all of SnapLogic’s customers, partners, and employees for the ongoing support and for making SnapLogic’s Enterprise Integration Cloud a leading self-service integration platform connecting applications, data, and things.

Finally viable: Best-of-breed enterprise environments

It’s one of the oldest, most contentious rivalries in the enterprise application arena: What’s better, best-of-breed environments or single-vendor suites? Since the turn of the century, suite vendors have argued that their approach avoids the steep data integration challenges that can be inherent with best-of-breed. On the flip side, point solution vendors say that enterprise suites pack in a lot of “dead wood” but don’t offer the real functionality, or customization potential, that is needed.

However, unlike religion and politics, this is one argument that is headed toward extinction. The biggest barrier to best-of-breed strategies — data integration — is, hands down, easier by an order of magnitude today, thanks to built-for-the-cloud app integration solutions that eliminate previous barriers. As a result, best-of-breed application environments aren’t just viable, they’re readily attainable.

Two dimensions of data integration

There are two ways in which data integration has dramatically improved with native cloud solutions: on the back end, between the applications themselves, and on the front end, from the user experience perspective.

On the back end, one of the first-order implications of a robust data model is the number of connectors a data integration solution provides. SnapLogic has hundreds of Snaps (connectors), and that’s no coincidence: it’s an order of magnitude easier to build and support a SnapLogic connector than an Informatica connector — the integration tool of choice for last-century best-of-breed environments — because our data model fits the modern world.

As a result, customers are up and running with SnapLogic in a day or two, and within minutes we can show them what SnapLogic is capable of doing. Compare that with Informatica and other legacy integration technologies, where developers or consultants can work for weeks or months on the same integration project and still have nothing to show; they can’t deliver quickly because of the limitations of the underlying technology.

The ease of big data integration with SnapLogic has profound implications for the user experience. Instead of having to beg analysts to run ETL (extract, transform, and load) jobs to pull the data set they need, SnapLogic users can get whatever data they want, themselves. They can then analyze it and get answers far faster than under previous best-of-breed regimes.

These are not subtle differences.

The economics of cloud-based integration

The subscription-based pricing model of cloud-based integration services further democratizes data access. Instead of putting the burden on IT to buy and implement an integrated application suite — which can cost upwards of $100 million in a large enterprise — cloud-based integration technology can be acquired at a nominal per-user fee, charged to a corporate credit card. Lines of business have taken advantage of this ease of access, making their own cloud and big data technology moves with the full knowledge and support of IT.

For IT organizations that have embraced their new mission of enablement, the appeal of cloud-based data integration is clear. In addition to allowing business users to work the way they want to, a cloud-based solution is far easier to customize, deploy, and support globally. And it offers an obvious answer to the question, “Do I want to continue feeling the pain of using integrated app suites, or do I want to join the new century?”

Find out more about how and why SnapLogic puts best-of-breed integration within every organization’s grasp. Register for this upcoming webinar featuring a conversation among me, industry analyst and data integration expert David Linthicum, and Gaurav Dhillon, SnapLogic’s CEO and a fellow Informatica alumnus: “We left Informatica. Now you can, too.”


James Markarian is CTO at SnapLogic. Follow him on Twitter @jamesmarkarian.

From helicopter to enabler: The new face of enterprise IT

Can an IT organization effectively run a 2017 business on 25-year-old technology? As someone who played a large hand in developing the data integration technology in question — at Informatica, where I was CTO for nearly two decades — I can tell you that the answer is simple: “No.”

A vastly different primordial landscape

That said, I know that when Informatica was created, it was the best data integration technology of its time. The world was a lot simpler in 1992: there were five databases that mattered, and they were all pretty similar. There were just a few ERP systems: Oracle, SAP, and a young PeopleSoft. Informatica was ideally suited to that software baseline and the scale-up UNIX platforms of the era. The web, obviously, was not in the picture.

IT organizations were also a lot simpler in 1992. If any business person wanted new tech functionality — a new workstation added to a network, or a new report from a client/server system — they put their request into the IT queue, because that was the only way to get it.

IT is still important; it’s just different

Fast-forward 25 years to 2017. Almost everything about that primordial technology landscape, when Informatica roamed the world, is different. For example, now there’s the web, the cloud, NoSQL databases, and best-of-breed application strategies that are actually viable. None of these existed when Informatica started. Every assumption from that time — the compute platform, scale-up versus scale-out, data types, data volumes, and data formats — is different.

IT organizations are radically different, too. The command-and-control IT organization of the past has transformed into a critical enablement function. IT still enables core operations by securing the enterprise and establishing a multitude of technology governance frameworks. But the actual procurement of end-user technology, such as analyzing data aggregated from across systems and across the enterprise, is increasingly in the hands of business users.

In other words, the role of IT is changing, but the importance of IT isn’t. It’s like parenting: as your kids grow, your role changes. It’s less about helicoptering and more about enabling. Parents don’t become less important, but how we deliver value evolves.

This is a good analog for the changes in enterprise IT. The IT organization wants to enable users because it’s nearly impossible to keep up with the blistering pace of business growth and change. If the IT organization tries to control too much, at some point it starts holding the business back.

Smart IT organizations have realized their role in the modern enterprise is to help their business partners become more successful. SnapLogic delivers a vital piece of required technology; we help IT organizations to give their users the self-service data integration services they need, instead of waiting for analysts to run an ETL through Informatica to pull the requested data together. By enabling self-service, SnapLogic is helping lines of business — most companies’ biggest growth drivers — to reach their full potential. If you’re a parent reading this, I know it will sound familiar.

Here’s another way to find out more about why IT organizations are embracing SnapLogic as a critical enabler: read SnapLogic’s new whitepaper, which captures my conversation with Gaurav Dhillon, SnapLogic’s CEO and a fellow Informatica alumnus: “We left Informatica. Now you can, too.”
