Mossberg out. Enterprise technology still in

By Gaurav Dhillon

A few weeks ago, the legendary tech journalist, Walt Mossberg, penned his last column. Although tech journalism today is vastly different from what it was in 1991, when his first column appeared in the Wall Street Journal, or even five or ten years ago, voices like Walt’s still matter. They matter because history matters – despite what I see as today’s widely held, yet unspoken belief that nothing of much importance existed prior to the invention of the iPhone.

Unpacking that further, history matters because the people who learn from it, and take their cues from it, are those who will drive the future.

Enterprise tech history is still unfolding

I like to think of myself as one of those people, certainly one who believes that all history is meaningful, including tech history. As tech journalism’s eminence grise, Walt not only chronicled the industry’s history, he also helped to define it. He was at the helm of a loose cadre of tech journalists and industry pundits, from Robert X. Cringely to Esther Dyson, who could make or break a company with just a few paragraphs.

Walt is now retiring. So what can we learn from him? The premise of his farewell column in Recode is that tech is disappearing, in a good way. “[Personal] tech was once always in your way. Soon, it will be almost invisible,” he wrote, adding, “The big software revolutions, like cloud computing, search engines, and social networks are also still growing and improving, but have become largely established.”

I’ll disagree with Walt on the second point. The cloud computing revolution, which is changing the way enterprises think and operate, is just beginning. We are at a juncture populated by unimaginably large quantities of data, coupled with an equally unquenchable thirst by enterprises to learn from it. The world has gone mad for artificial intelligence (AI) and analytics, every permutation of which is fueled by one thing: data.

The way we use data will become invisible

In his column, Walt observed that personal tech is now almost invisible. We use and benefit from it in an almost passive way. The way data scientists and business users consume data is anything but. Data is still moved around and manually integrated, on-premises and in the cloud, with processes that haven’t changed much since the 1970s. Think about it – the 1970s! It’s no secret that extract, transform, and load (ETL) processes remain the bane of data consumers’ existence, largely because many enterprises are still using 25-year-old solutions to manage ETL and integrate data.
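To make that concrete, here is a minimal sketch (in Python, with made-up sample data) of the kind of hand-coded ETL script that hasn’t changed in spirit since the mainframe era: extract rows from a source, transform them field by field, and load them into a target.

```python
import csv
import io
import sqlite3

# Hypothetical source data; in practice this would be a file export or database dump.
SOURCE_CSV = """id,name,revenue
1,Acme,1200.50
2,Globex,980.00
"""

def extract(raw_csv):
    """Extract: read raw rows from the source."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: coerce types and normalize fields by hand."""
    return [(int(r["id"]), r["name"].upper(), float(r["revenue"])) for r in rows]

def load(rows, conn):
    """Load: write the cleaned rows into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (id INTEGER, name TEXT, revenue REAL)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(SOURCE_CSV)), conn)
print(conn.execute("SELECT name, revenue FROM accounts").fetchall())
# prints [('ACME', 1200.5), ('GLOBEX', 980.0)]
```

Every step here is bespoke code that someone has to write, schedule, and maintain by hand – which is exactly why this approach doesn’t scale across hundreds of sources and targets.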

Cloud Computing

The good news is, data integration is becoming much easier to do, and is well on its way to becoming invisible. Enterprise integration cloud technology promises to replace slow and cumbersome scripting and manual data movement with fast, open, seamless data pipelines, optimized with AI techniques.
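As a rough illustration of the pipeline idea (this is not SnapLogic’s actual API, just a sketch), the shift is from one-off scripts to small, reusable stages that a platform can compose, reorder, and optimize:

```python
from functools import reduce

# Each stage is a plain function from records to records; a pipeline is just
# an ordered list of stages, which a platform can inspect, reuse, and optimize.
def normalize(records):
    """Clean up a field so downstream stages see consistent values."""
    return [{**r, "email": r["email"].strip().lower()} for r in records]

def dedupe(records):
    """Drop records whose key has already been seen."""
    seen, out = set(), []
    for r in records:
        if r["email"] not in seen:
            seen.add(r["email"])
            out.append(r)
    return out

def run_pipeline(stages, records):
    """Feed the output of each stage into the next."""
    return reduce(lambda data, stage: stage(data), stages, records)

leads = [
    {"email": " Ann@Example.com "},
    {"email": "ann@example.com"},
    {"email": "bob@example.com"},
]
print(run_pipeline([normalize, dedupe], leads))
# prints [{'email': 'ann@example.com'}, {'email': 'bob@example.com'}]
```

Because the pipeline is data rather than script, the platform – not the user – can handle scheduling, retries, and optimization, which is what makes integration start to feel invisible.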

Remember how, as Internet use exploded in the late 1990s, the tech industry was abuzz with companies offering all manner of optimization technologies, like load balancing, data mirroring, and throughput optimization? These days you rarely hear about those companies; we take high-performance internet service for granted, like the old-fashioned dial tone.

I am confident that we are embarking on a similar era for enterprise data integration, one in which modern, cloud-first technologies will make complex data integration processes increasingly invisible, seamlessly baked into the way data is stored and accessed.

Making history with data integration

I had the pleasure of meeting Walt some years ago at his office, a miniature museum with many of the personal tech industry’s greatest inventions on display. There, his love of tech was apparent and abundant. Apple IIe? Nokia Communicator 9000? Palm Treo and original iPod? Of course. If Walt were to be at his keyboard, in his office, for another couple of years, I’m pretty sure his collection would be joined by a technology with no physical form factor, but of even greater import: the enterprise cloud.

Hats off to you, Walt. And while you may have given your final sign-off, “Mossberg out,” enterprise tech is most definitely still in.

Gaurav Dhillon is CEO of SnapLogic. You can follow him on Twitter @gdhillon.

Will the Cloud Save Big Data?

This article was originally published on ITProPortal.

Employees up and down the value chain are eager to dive into big data solutions, hunting for golden nuggets of intelligence to help them make smarter decisions, grow customer relationships and improve business efficiency. To do this, they’ve been faced with a dizzying array of technologies – from open source projects to commercial software products – as they try to wrestle big data to the ground.

Today, much of the attention and momentum centers on some combination of Hadoop, Spark and Redshift – all of which can be springboards for big data work. It’s important to step back, though, and look at where we are in big data’s evolution.

In many ways, big data is in the midst of transition. Hadoop is hitting its pre-teen years, having launched in April 2006 as an official Apache project – and then taking the software world by storm as a framework for distributed storage and processing of data, based on commodity hardware. Apache Spark is now hitting its stride as a “lightning fast” streaming engine for large-scale data processing. And various cloud data warehousing and analytics platforms are emerging, from big names (Amazon Redshift, Microsoft Azure HDInsight and Google BigQuery) to upstart players like Snowflake, Qubole and Confluent.

The challenge is that most big data progress over the past decade has been limited to big companies with big engineering and data science teams. The systems are often complex, immature, hard to manage and change frequently – which might be fine if you’re in Silicon Valley, but doesn’t play well in the rest of the world. What if you’re a consumer goods company like Clorox, or a midsize bank in the Midwest, or a large telco in Australia? Can this be done without deploying 100 Java engineers who know the technology inside and out?

At the end of the day, most companies just want better data and faster answers – they don’t want the technology headaches that come along with it. Fortunately, the “mega trend” of big data is now colliding with another mega trend: cloud computing. While Hadoop and other big data platforms have been maturing slowly, the cloud ecosystem has been maturing more quickly – and the cloud can now help fix a lot of what has hindered big data’s progress.

The problems customers have encountered with on-premises Hadoop are often the same problems that were faced with on-premises legacy systems: there simply aren’t enough of the right people to get everything done. Companies want cutting-edge capabilities, but they don’t want to deal with bugs and broken integrations and rapidly changing versions. Plus, consumption models are changing – we want to consume data, storage and compute on demand. We don’t want to overbuy. We want access to infrastructure when and how we want it, with just as much as we need but no more.

Big Data’s Tipping Point is in the Cloud

In short, the tipping point for big data is about to happen – and it will happen via the cloud. The first wave of “big data via the cloud” was simple: companies like Cloudera put their software on Amazon. But what’s “truly cloud” is not having to manage Hadoop or Spark – moving the complexity back into a hosted infrastructure, so someone else manages it for you. To that end, Amazon, Microsoft and Google now deliver “managed Hadoop” and “managed Spark” – you just worry about the data you have, the questions you have and the answers you want. No need to spin up a cluster, research new products or worry about version management. Just load your data and start processing.

There are three significant and not always obvious benefits to managing big data via the cloud: 1) Predictability – the infrastructure and management burden shifts to cloud providers, and you simply consume services that you can scale up or down as needed; 2) Economics – unlike on-premises Hadoop, where compute and storage were intermingled, the cloud separates compute and storage so you can provision accordingly and benefit from commodity economics; and 3) Innovation – new software, infrastructure and best practices will be deployed continuously by cloud providers, so you can take full advantage without all the upfront time and cost.
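The economics point can be made with simple arithmetic. The unit prices below are hypothetical, but the shape of the calculation holds: when compute and storage are decoupled, you pay for burst compute only while jobs run, instead of keeping a cluster sized for peak load running around the clock.

```python
# Hypothetical unit prices, for illustration only.
STORAGE_PER_TB_MONTH = 23.0   # object storage, per TB per month
COMPUTE_PER_NODE_HOUR = 0.50  # per compute node per hour

def coupled_monthly_cost(nodes_for_peak, hours_in_month=730):
    # Compute and storage intermingled: the cluster sized for peak load
    # runs around the clock, whether or not jobs are running.
    return nodes_for_peak * COMPUTE_PER_NODE_HOUR * hours_in_month

def decoupled_monthly_cost(data_tb, nodes_for_peak, job_hours):
    # Storage billed continuously, compute billed only while jobs run.
    storage = data_tb * STORAGE_PER_TB_MONTH
    compute = nodes_for_peak * COMPUTE_PER_NODE_HOUR * job_hours
    return storage + compute

# Example: 50 TB of data, a 20-node peak, jobs running 60 hours a month.
print(coupled_monthly_cost(20))            # 7300.0
print(decoupled_monthly_cost(50, 20, 60))  # 1750.0
```

The absolute numbers will vary by provider and workload, but the gap between always-on peak capacity and pay-as-you-go consumption is the commodity economics the cloud makes possible.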

Of course, there’s still plenty of hard work to do, but it’s more focused on the data and the business, and not the infrastructure. The great news for mainstream customers (well beyond Silicon Valley) is that another mega-trend is kicking in to revolutionize data integration and data consumption – and that’s the move to self-service. Thanks to new tools and platforms, “self-service integration” is making it fast and easy to create automated data pipelines with no coding, and “self-service analytics” is making it easy for analysts and business users to manipulate data without IT intervention.

All told, these trends are driving a democratization of data that’s very exciting – and will drive significant impact across horizontal functions and vertical industries. Data is thus becoming a more fluid, dynamic and accessible resource for all organizations. IT no longer holds the keys to the kingdom – and developers no longer control the workflow. Just in the nick of time, too, as the volume and velocity of data from digital and social media, mobile tools and edge devices threaten to overwhelm us all. Once the full promise of the Internet of Things, Artificial Intelligence and Machine Learning begins to take hold, the data overflow will be truly inundating.

The only remaining question: What do you want to do with your data?

Ravi Dharnikota is the Chief Enterprise Architect at SnapLogic. 

Talking “SMACT” With CIOs

We recently reviewed the many fall tech events happening in the Bay Area and elsewhere and just got back from a few CIO-specific conferences last week. A wide range of topics was covered at these recent events, including the next wave of cloud computing, cloud analytics, integration platform as a service (iPaaS), big data and big data integration.

Last week in Miami Beach we were at the Technology Business Management (TBM) Conference, which brought together Global 2000 CIOs, CTOs and CFOs to hear from peers and learn about new solutions on the market that can help modernize and transform enterprise IT organizations. While at the TBM Conference we had the opportunity to speak with CIOs and other IT leaders about how SnapLogic customers have reduced the complexity of their integrations by up to 85% and connected data, applications and APIs at least 4x faster. We also discussed common use cases for our platform such as cloud and on-premises application integration, digital marketing, big data analytics, self-service for “citizen integrators,” and the establishment of an agile enterprise integration layer or fabric.

At the Midmarket CIO Forum in Tucson, also last week, we met with CIOs from a variety of industries and reviewed the new data, application and API integration challenges facing today’s IT leaders. We discussed the “Integrator’s Dilemma” and how older, more traditional ETL tools and approaches to integration aren’t built for the new data challenges. Lastly, we talked about how to avoid getting “SMACT,” summed up by the following:

  • Don’t settle for SO SO (same old, same old)
  • The first step to solving the Integrator’s Dilemma is recognizing it exists!
  • When it comes to Social, Mobile, Analytics, Cloud computing and the Internet of Things, don’t wait to integrate!

You can find our presentation slides here. Also be sure to check out some of the social buzz below from TBM Conference and the Midmarket CIO Forum, as well as our infographic here on why CIOs are getting “SMACT”:

[Infographic] Why Are CIOs Getting SMACT?

According to industry analysts, within 5 years the CMO will spend more on IT than the CIO. Check out our new infographic, which explains why CIOs are getting “SMACT” due to the adoption of Social, Mobile, Analytics and Big Data, Cloud Computing and the Internet of Things, and provides some compelling statistics in each of these categories. You can also learn more here about the SnapLogic Elastic Integration Platform, delivering real-time, event-driven application integration and batch-oriented and streaming big data integration for analytics in a single platform.

Here are the sources for all of the SMACT Infographic stats:

In addition to downloading a PDF of the above infographic here, check out some additional SnapLogic resources below all about the new world of social, mobile, analytics and big data, cloud computing and the Internet of Things:

Technology Conference Season Has Arrived!

It’s back to school for the kids and back to the conference center for technology companies and their customers, employees, and partners. With a focus primarily on cloud applications and platforms, big data and analytics, here’s a list of September events in the US that we’re tracking at SnapLogic. What are we missing? Where else should we be? This list of Big Data events is a helpful resource and there’s a good overview of cloud computing events here. Safe travels!

SnapLogic Recognized as a Cloud and Big Data Leader

As we continue to roll out our Elastic Integration Platform to more and more enterprise customers, I’m excited to say that SnapLogic has been recognized this week by two prominent publications. Here’s a summary: The Sand Hill Cloud 50 “represents a unique set of cloud players that stand out from the crowd,” players of various sizes and hues that span the SaaS, PaaS and IaaS, security, storage and services spaces, each with a unique differentiation and value proposition. Here’s what Sand Hill said about SnapLogic:

“Integration platform as a service that connects cloud applications, APIs and disparate data sources with the rest of the enterprise. Named a Visionary in the Gartner Magic Quadrant in the integration platform as a service category. Customers include Netflix, CapitalOne, iRobot and Acxiom. The company enjoys 100 percent growth in YoY bookings.”

Database Trends and Applications magazine introduced the second annual DBTA 100 list of companies that matter most in data. According to the DBTA, “with organizations increasingly seeking to become data-driven entities—companies that actually use the data they are amassing for competitive advantage—DBTA set out to recognize innovative providers of hardware, software, and services….The 100 companies that matter in data comprises both seasoned veterans and disruptive new vendors.” As part of the DBTA 100, we had the opportunity to publish a “View From the Top” company overview. Here’s my summary of SnapLogic and what we’re setting out to do:

Enterprise IT organizations today are facing a dilemma—their legacy integration technologies were built before the era of big data, social, mobile and cloud (SMAC) computing and simply can’t keep up. With respect to Clayton Christensen and his book The Innovator’s Dilemma, we call this The Integrator’s Dilemma. In 2013, SnapLogic introduced the industry’s first Elastic Integration platform as a service (iPaaS). Fast, multi-point and modern, it’s built from the ground up to handle today’s data, application and API connectivity challenges.

  • Data Integration as a Service:  Don’t let old ETL slow down your new analytics. SnapLogic goes beyond rows and columns-centric tools while providing pre-built Snaps for Tableau, Amazon Redshift and big data sources.
  • Cloud Application Integration: Got Salesforce? Workday? ServiceNow? What about SAP and Oracle? Whether it’s cloud-to-cloud or cloud-to-ground connectivity, if you’ve got SaaS, we’ve got Snaps.
  • APIs and the Internet of Things: Keep up to date and be ready for what’s next with Elastic Integration. With over 160 pre-built connectors, called Snaps, and a software-defined multi-tenant architecture that respects data gravity, the SnapLogic Elastic Integration Platform is ideally suited for today’s hybrid IT environments. It’s the Integrator’s Solution for the SMAC era.

I should also note that SnapLogic is included in this week’s BVP Cloudscape: Top 300 Private Cloud Companies. It’s a pretty impressive list of cloud and big data companies. We’re also listed in the 2014 OnDemand 100 Top Private Companies.

To learn more about SnapLogic, visit SnapLogic.com/resources or Contact Us.

Cloud Speed the Primary Business Driver for Integration Platform as a Service (iPaaS)

This week we published the results of a survey we ran in March with TechValidate, which asked about the barriers to software-as-a-service (SaaS) adoption and the business and technical drivers for cloud-based integration services. We’ll be reviewing the details of the research in a webinar on April 25th, which will also provide a detailed overview of the SnapLogic Integration Cloud.

Here are some of the key findings from the research:

  • 56% of survey respondents are running four or more SaaS applications.
  • 43% prioritized application and data integration challenges as a barrier to SaaS application adoption in their companies.
  • 59% of survey respondents listed speed or time to value as the primary business driver for a cloud integration service.
  • 52% said a modern and scalable architecture was the primary technical requirement of an iPaaS.

When asked about the challenges of legacy integration tools for cloud integration (see my colleague’s post on Why Integration Heritage Matters and his summary of Why the Enterprise Service Bus Doesn’t Fly in the Cloud), 43% took issue with the requirement for costly hardware purchases and software installation and configuration, 37% found on-premises integration tools to be too expensive due to the perpetual licensing model, and 35% noted that change management is painful when endpoint changes mean integration rework.

Integration Platform as a Service

As I noted in the press release, the results of this TechValidate survey are in line with the conversations we’re having with our customers, partners and prospects. As SaaS application, analytics and API adoption grows in the enterprise, the ability to connect with other systems is the essential ingredient to long-term customer success. Integration should be a cloud accelerator, not a bottleneck, which is why companies of all sizes are increasingly looking for modern, elastic integration alternatives to power their cloud services initiatives.

I hope you can join us for the webinar next week. You can also download the complete survey results here.