Finally viable: Best-of-breed enterprise environments

It’s one of the oldest, most contentious rivalries in the enterprise application arena: What’s better, best-of-breed environments or single-vendor suites? Since the turn of the century, suite vendors have argued that their approach avoids the steep data integration challenges inherent in best-of-breed. On the flip side, point-solution vendors say that enterprise suites pack in a lot of “dead wood” without offering the real functionality, or customization potential, that businesses actually need.

However, unlike religion and politics, this is one argument that is headed toward extinction. The biggest barrier to best-of-breed strategies — data integration — is, hands down, an order of magnitude easier today, thanks to built-for-the-cloud app integration solutions that eliminate the old obstacles. As a result, best-of-breed application environments aren’t just viable, they’re readily attainable.

Two dimensions of data integration

There are two ways in which data integration has dramatically improved with native cloud solutions: on the back end, between the applications themselves, and on the front end, from the user experience perspective.

On the back end, one of the first-order signals of a robust data model is the number of connectors a data integration solution provides. SnapLogic has hundreds of Snaps (connectors), and that’s not coincidental. Our library of Snaps demonstrates our fit with the modern world; it’s an order of magnitude easier to build and support a SnapLogic connector than an Informatica connector — the integration tool of choice for last-century best-of-breed environments — because our data model matches how modern applications actually exchange data.
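To make that contrast concrete, here is a minimal sketch of what a document-oriented connector can boil down to. This is illustrative Python, not SnapLogic’s actual Snap SDK (which this post doesn’t cover), and the endpoint and field names are hypothetical; the point is that when every record is a self-describing JSON document, a connector is mostly a thin streaming wrapper around the source:

```python
# Hypothetical document-oriented connector: fetch records from a REST API
# and emit them as a stream of JSON documents (plain Python dicts). With
# self-describing documents there is no fixed relational schema to map,
# which is what keeps connectors like this small.
import requests

def read_documents(base_url, resource, page_size=100):
    """Yield every record from base_url/resource, one page at a time."""
    page = 1
    while True:
        resp = requests.get(
            f"{base_url}/{resource}",
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        records = resp.json()
        if not records:          # empty page: no more data
            break
        yield from records       # downstream steps see plain JSON documents
        page += 1

# Example: stream contacts from a hypothetical CRM endpoint.
for doc in read_documents("https://api.example-crm.com/v1", "contacts"):
    print(doc["id"], doc.get("email"))
```

Contrast that with a last-century connector, which has to map every source into a fixed relational schema before anything else can happen.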

As a result, customers are up and running with SnapLogic in a day or two; in minutes we can show customers what SnapLogic is capable of doing. Compare that with Informatica and other legacy integration technologies, where developers or consultants can work for weeks or months on the same integration project and still have nothing to show; the limitations of the underlying technology keep them from delivering quickly.

The ease of big data integration with SnapLogic has profound implications for the user experience. Instead of having to beg analysts to run ETL (extract, transform, and load) jobs to pull the data set they need, SnapLogic users can get whatever data they want, themselves. They can then analyze it and get answers far faster than under previous best-of-breed regimes.
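For readers who have never been on the analyst side of that hand-off, the sketch below shows roughly what a classic ETL job looks like, here in plain Python with pandas. The file, table, and column names are hypothetical; the point is that someone had to write, schedule, and rerun something like this for every new data request:

```python
# A bare-bones ETL job: the kind of one-off request business users
# historically had to queue up with an analyst. File, table, and column
# names here are hypothetical.
import sqlite3

import pandas as pd

# Extract: pull the raw data set.
orders = pd.read_csv("raw_orders.csv", parse_dates=["order_date"])

# Transform: clean it up and aggregate to monthly sales by region.
orders = orders.dropna(subset=["customer_id"])
monthly = (
    orders.assign(month=orders["order_date"].dt.to_period("M").astype(str))
          .groupby(["month", "region"], as_index=False)["amount"]
          .sum()
)

# Load: write the result where a BI tool can pick it up.
with sqlite3.connect("reporting.db") as conn:
    monthly.to_sql("monthly_sales", conn, if_exists="replace", index=False)
```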

These are not subtle differences.

The economics of cloud-based integration

The subscription-based pricing model of cloud-based integration services further democratizes data access. Instead of putting the burden on IT to buy and implement an integrated application suite — which can cost upwards of $100 million in a large enterprise — cloud-based integration technology can be acquired at a nominal per-user fee, charged to a corporate credit card. Lines of business have taken advantage of this ease of access, making their own cloud and big data technology moves with the full knowledge and support of IT.

For IT organizations that have embraced their new mission of enablement, the appeal of cloud-based data integration is clear. In addition to allowing business users to work the way they want to, the cloud-based solution is far easier to customize, deploy, and support globally. And it offers an obvious answer to the question, “Do I want to continue feeling the pain of using integrated app suites, or do I want to join the new century?”

Find out more about how and why SnapLogic puts best-of-breed integration within every organization’s grasp. Register for this upcoming webinar featuring a conversation with me, industry analyst and data integration expert David Linthicum, and Gaurav Dhillon, SnapLogic’s CEO and a fellow Informatica alumnus: “We left Informatica. Now you can, too.”


James Markarian is CTO at SnapLogic. Follow him on Twitter @jamesmarkarian.

From helicopter to enabler: The new face of enterprise IT

Can an IT organization effectively run a 2017 business on 25-year-old technology? As someone who had a large hand in developing the data integration technology in question — at Informatica, where I was CTO for nearly two decades — I can tell you that the answer is simple: “No.”

A vastly different primordial landscape

That said, I know that when Informatica was created, it was the best technology for data integration at the time. The world was a lot simpler in 1992: there were five databases that mattered, and they were all pretty similar. There were just a few ERP systems: Oracle, SAP and a young PeopleSoft. Informatica was ideally suited to that software baseline, and the scale-up UNIX platforms of that era. The web, obviously, was not in the picture.

IT organizations were also a lot simpler in 1992. If any business person wanted new tech functionality — a new workstation added to a network, or a new report from a client/server system — they put their request into the IT queue, because that was the only way to get it.

IT is still important; it’s just different

Fast-forward 25 years to 2017. Almost everything about that primordial technology landscape, when Informatica roamed the world, is different. Now we have the web, the cloud, NoSQL databases, and best-of-breed application strategies that are actually viable. None of these existed when Informatica started. Every assumption from that time — the compute platform, scale-up versus scale-out, data types, data volumes and data formats — is different.

IT organizations are radically different, too. The command-and-control IT organization of the past has transformed into a critical enablement function. IT still enables core operations by securing the enterprise and establishing technology governance frameworks. But the actual selection and use of end-user technology, for work like analyzing data aggregated from systems across the enterprise, is increasingly in the hands of business users.

In other words, the role of IT is changing, but the importance of IT isn’t. It’s like parenting; as your kids grow, your role changes. It’s less about helicoptering and more about enabling. Parents don’t become less important, but how they deliver value evolves.

This is a good analog to the changes in enterprise IT. The IT organization wants to enable users because it’s all but impossible to keep up with the blistering pace of business growth and change. If the IT organization tries to control too much, at some point it starts holding the business back.

Smart IT organizations have realized that their role in the modern enterprise is to help their business partners succeed. SnapLogic delivers a vital piece of the required technology: we help IT organizations give their users the self-service data integration they need, instead of making them wait for analysts to run an ETL job through Informatica to pull the requested data together. By enabling self-service, SnapLogic is helping lines of business — most companies’ biggest growth drivers — reach their full potential. If you’re a parent reading this, I know it will sound familiar.

Here’s another way to find out more about why IT organizations are embracing SnapLogic as a critical enabler: read SnapLogic’s new whitepaper that captures my conversation with Gaurav Dhillon, SnapLogic’s CEO and a fellow Informatica alumnus: “We left Informatica. Now you can, too.”


7 Big Data Predictions for 2017

As data increasingly becomes the means by which businesses compete, companies are restructuring operations to build systems and processes that liberate data access, integration and analysis up and down the value chain. Effective data management has become so important that the position of Chief Data Officer is projected to become a standard senior board-level role by 2020, with 92 percent of CIOs stating that a CDO is the best person to determine data strategy.

With this in mind, as you evaluate your data strategy for 2017, here are seven predictions to contemplate as you build a solid framework for data management and optimization.

  1. Self-Service Data Integration Will Take Off
    Eschewing the IT-bottleneck label and committed to being a strategic partner to the business, IT is transforming its mindset. Rather than acting as the sole provider of data, IT will enable users to achieve data optimization on a self-service basis. IT will increasingly decentralize app and data integration – via distributed Centers of Excellence based on shared infrastructure, frameworks and best practices – thereby enabling line-of-business heads to gather, integrate and analyze data themselves to discern and quickly act upon trends and patterns relevant to their roles and responsibilities. Rather than fish for your data, IT will teach you how to bait the hook. The payoff for IT: satisfying business user demand for fast and easy integrations and accelerated time to value; preserving data integrity, security and governance on a common infrastructure across the enterprise; and freeing up finite IT resources to focus on other strategic initiatives.
  2. Big Data Moves to the Cloud
    As the year takes shape, expect more enterprises to migrate storage and analysis of their big data from traditional on-premise data stores and warehouses to the cloud. For the better part of the last decade, Hadoop’s distributed computing and processing power has made it the standard open source platform for big data infrastructures. But Hadoop is far from perfect. Common user gripes include complexity and instability – not all that surprising given the number of developers regularly contributing changes to the platform. Cloud environments are more stable, flexible and elastic, and better suited to handling big data – hence the predicted migration.
  3. Spark Usage Outside of Hadoop Will Surge
    This is the year we will also see more Spark use cases outside of Hadoop environments. While Hadoop limps along, Spark is picking up the pace. Hadoop is still more likely to be used in testing than in production environments. But users are finding Spark more flexible, adaptable and better suited to certain workloads – machine learning and real-time streaming analytics, for example (see the PySpark sketch after this list). Once relegated to the role of Hadoop sidekick, Spark will break free and stand on its own two feet this year. I’m not alone in asking the question: Hadoop needs Spark, but does Spark need Hadoop?
  4. A Big Fish Acquires a Hadoop Distro Vendor?
    Hadoop distribution vendors like Cloudera and Hortonworks paved the way with promising technology and game-changing innovation. But this past year saw growing frustration among customers lamenting increased complexity, instability and, ultimately, too many failed projects that never left the labs. As Hadoop distro vendors work through some growing pains (not to mention limited funds), could it be that a bigger, deeper-pocketed established player – say Teradata, Oracle, Microsoft or IBM – might swoop in to buy their sought-after technology and marry it with a more mature organization? I’m not counting it out.
  5. AI and ML Get a Bit More Mainstream
    Off-the-shelf AI (artificial intelligence) and ML (machine learning) platforms are loved for their simplicity, low barrier to entry and low cost. In 2017, off-the-shelf AI and ML libraries from Microsoft, Google, Amazon and other vendors will be embedded in enterprise solutions, including mobile varieties. Tasks that have until now been manual and time-consuming will become automated and accelerated, extending into the world of data integration.

  6. Yes, IoT Is Coming, Just Not This Year
    Connecting billions and billions of sensor-embedded devices and objects over the internet is inevitable, but don’t yet swallow all the hype. Yes, a lot is being done to harness IoT for specific aims, but the pace of development toward a general-purpose IoT platform is closer to a canter than a gallop. Today’s IoT solutions are so bespoke and purpose-built – and the market still so nascent, with standards only gradually evolving – that a general-purpose, mass-adopted IoT platform to collect, integrate and report on data in real time will take, well, more time. Like any other transformative movement in the history of enterprise technology, brilliant bits and pieces need to come together as a whole. It’s coming, just not in 2017.

  7. APIs Are Not All They’re Cracked Up to Be
    APIs have long been the glue connecting apps and services, but customers will continue to question their value versus the investment they require in 2017. Few would dispute that APIs are useful for building apps and, in many cases, may be the right choice for that job. But where the integration of apps and/or data is needed, there are better ways. Case in point: iPaaS (integration platform as a service), which lets you quickly and easily connect any combination of cloud and on-premise technologies. Expect greater migration this year toward cloud-based enterprise integration platforms – compared with hand-coded API integrations (sketched below), iPaaS solutions are more agile, better equipped to handle the vagaries of data, more adaptable to change, easier to maintain and far more productive.
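To illustrate that last prediction, here is a hypothetical sketch of hand-rolled, point-to-point API glue in Python. The endpoints and field names are invented; the takeaway is the per-connection overhead that an iPaaS absorbs:

```python
# Hand-rolled, point-to-point API glue of the kind prediction No. 7 argues
# against: each pair of systems needs its own calls, field mapping, and
# error handling. Endpoints and field names are hypothetical.
import requests

CRM = "https://api.example-crm.com/v1"
BILLING = "https://api.example-billing.com/v2"

def sync_new_customers():
    # Pull from system A...
    resp = requests.get(f"{CRM}/customers", params={"status": "new"}, timeout=30)
    resp.raise_for_status()

    for customer in resp.json():
        # ...map fields by hand...
        payload = {
            "external_id": customer["id"],
            "name": customer["full_name"],
            "email": customer["email"],
        }
        # ...and push to system B, one brittle call at a time.
        requests.post(f"{BILLING}/accounts", json=payload, timeout=30).raise_for_status()

if __name__ == "__main__":
    sync_new_customers()
```

Multiply this by every pair of systems in the enterprise, then add retries, rate limits and schema drift, and the case for a shared integration platform makes itself.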
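And to ground prediction No. 3: Spark runs perfectly well with no Hadoop cluster in sight. The sketch below is a minimal PySpark example in local mode, using a hypothetical CSV and column names, with Spark’s built-in MLlib handling one of the machine-learning workloads mentioned above:

```python
# Spark with no Hadoop anywhere: local mode, a local CSV file, and Spark's
# built-in MLlib. The data set and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = (
    SparkSession.builder
    .master("local[*]")          # run on local cores, no cluster manager
    .appName("spark-sans-hadoop")
    .getOrCreate()
)

# Read a plain local file; no HDFS involved.
df = spark.read.csv("usage_metrics.csv", header=True, inferSchema=True)

# Assemble feature columns and fit a simple regression model.
assembler = VectorAssembler(
    inputCols=["sessions", "avg_duration"], outputCol="features"
)
model = LinearRegression(
    featuresCol="features", labelCol="churn_score"
).fit(assembler.transform(df))

print(model.coefficients)
spark.stop()
```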

I could go on and on, if for no other reason than that predictions are informed “best guesses” about the future. If I’m wrong on two or three of my expectations, my peers will forgive me. In the rapidly changing world of technology, batting .400 is a pretty good average.