Data and analytics – behind and after an acquisition

By Karen He

Now more than ever, organizations need to move beyond innovation alone to grow their business, stay competitive, and remain relevant. It is the combination of data and analytics that gives businesses the insights to move in the right direction. Amazon and Walmart have done just that: their relentless tactics in the fiercely competitive retail world have set them apart from the pack. A shared tactic, acquiring smaller competitors in their space, is something both companies have done well, and data is at its core.

In late 2016, Walmart strengthened its e-commerce strategy by acquiring Jet.com. It continued into 2017 by acquiring five more online retailers: Moosejaw, ModCloth, Bonobos, Shoebuy.com, and Hayneedle.com.

Amazon, for its part, has expanded its brick-and-mortar strategy just this year, from opening its first physical store in New York City to its pending purchase of Whole Foods.

Acquisitions are easier said than done. The teams behind retail conglomerates like Walmart and Amazon do not buy companies by merely following their instincts. Instead, they rely heavily on data from multiple sources to shape their growth strategy. In acquisitions, data shows retail leaders whether a deal is strategically sound for the business.

The data behind the decision

Nowadays, the amount of data available to organizations is both a blessing and a curse. Businesses are surrounded by deep pockets of their own data, residing in different cloud-based and on-premises applications and databases. For the most part, this volume of enterprise data is extremely hard to retrieve without technical assistance. As a result, businesses continuously face the challenge of spending an immense amount of time and effort simply rounding up and compiling data.

Accordingly, retail leaders gather and analyze data to make an informed decision on whether or not to purchase another company. But most retailers are not in a position to make such decisions, because that insightful yet elusive data resides in multiple places. They pivot from one tool to another, sifting through thousands of data sets before even getting to the analysis. These cumbersome, manual processes deny retailers real-time insights, potentially preventing them from seizing first-mover initiatives and leapfrogging their competitors. Until now.

Companies that can pull data and derive insights in real time are empowered to transform and grow their business. Retailers need data on demand to visualize complete insights and make sound business decisions. In Walmart’s and Amazon’s cases, they tapped extensive data sources to understand whether they would gain more value by growing organically or by spending millions to acquire established companies. Of course, we know what they’ve acquired, but not necessarily what they didn’t acquire, or why.

Post-acquisition alignment

Beyond all the pre-acquisition data and number crunching, both Amazon and Walmart are aware of the many M&A processes needed post-acquisition. Once a company acquires another, the parent must realign the business by consolidating virtually all the departments, operations, and processes between the two companies. In Amazon’s and Walmart’s case, consolidating business operations and supply chains involves complex data migrations. For conglomerates like Amazon and Walmart to have a seamless flow of information, subsidiaries need to migrate the data from all of their systems and applications into the parent company’s.

Without realignment, business users across functions become unproductive, either lacking data outright or burning hours manually stitching together files from disparate systems and applications. A marketing department alone may run at least half a dozen applications, including CRM, marketing campaign automation, web analytics, marketing intelligence, predictive analytics, content management, and social media management systems. Gaps in marketing reports and insights emerge, for example, when marketers work from duplicate records held in different applications, dragging down business performance. To fold in existing operations and processes, companies need a smarter way to connect systems together or migrate data from one system to another.
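To make the duplicate-record problem concrete, here is a minimal Python sketch, with invented field names and sample records, of the reconciliation that otherwise happens by hand when the same contact lives in both a CRM and a campaign tool:

    from datetime import datetime

    def parse_date(date_str):
        return datetime.strptime(date_str, "%Y-%m-%d")

    def merge_contacts(*sources):
        """Deduplicate contacts across systems, keyed on normalized email,
        keeping the most recently updated record for each person."""
        merged = {}
        for source in sources:
            for record in source:
                key = record["email"].strip().lower()
                current = merged.get(key)
                if current is None or parse_date(record["updated"]) > parse_date(current["updated"]):
                    merged[key] = record
        return list(merged.values())

    # Hypothetical exports from two marketing systems.
    crm = [{"email": "Ada@Example.com", "name": "Ada Lovelace", "updated": "2017-05-01"}]
    campaign = [{"email": "ada@example.com", "name": "A. Lovelace", "updated": "2017-06-15"}]

    print(merge_contacts(crm, campaign))
    # [{'email': 'ada@example.com', 'name': 'A. Lovelace', 'updated': '2017-06-15'}]

The merge itself is trivial once both exports sit in one place; getting them there continuously, without manual exports, is the integration problem described above.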

Amazon and Walmart are proof points of how companies must innovate and grow in the competitive retail market. Businesses across industries should also look into their data to unearth growth opportunities. Complete, real-time data and analytics empower business professionals to expand their business and stay competitive in the market.

Learn more about connecting systems and applications to fuel rich data and analytics in this recorded webcast.

Karen He is Product Marketing Manager at SnapLogic. Follow her on Twitter @KarenHeee.

 

Mossberg out. Enterprise technology still in

By Gaurav Dhillon

A few weeks ago, the legendary tech journalist, Walt Mossberg, penned his last column. Although tech journalism today is vastly different than it was in 1991, when his first column appeared in the Wall Street Journal, or even five or 10 years ago, voices like Walt’s still matter. They matter because history matters – despite what I see as today’s widely held, yet unspoken belief that nothing much important existed prior to the invention of the iPhone.

Unpacking that further, history matters because the people who learn from it, and take their cues from it, are those who will drive the future.

Enterprise tech history is still unfolding

I like to think of myself as one of those people, certainly one who believes that all history is meaningful, including tech history. As tech journalism’s eminence grise, Walt not only chronicled the industry’s history, he also helped to define it. He was at the helm of a loose cadre of tech journalists and industry pundits, from Robert X. Cringely to Esther Dyson, who could make or break a company with just a few paragraphs.

Walt is now retiring. So what can we learn from him? The premise of his farewell column in Recode is that tech is disappearing, in a good way. “[Personal] tech was once always in your way. Soon, it will be almost invisible,” he wrote, and further, “The big software revolutions, like cloud computing, search engines, and social networks are also still growing and improving, but have become largely established.”

I’ll disagree with Walt on the second point. The cloud computing revolution, which is changing the way enterprises think and operate, is just beginning. We are at a juncture populated by unimaginably large quantities of data, coupled with an equally unquenchable thirst by enterprises to learn from it. The world has gone mad for artificial intelligence (AI) and analytics, every permutation of which is fueled by one thing: data.

The way we use data will become invisible

In his column, Walt observed that personal tech is now almost invisible; we use and benefit from it in an almost passive way. The way data scientists and business users consume data is anything but. Data is still moved around and manually integrated, on-premises and in the cloud, with processes that haven’t changed much since the 1970s. Think about it – the 1970s! It’s no secret that extract, transform, and load (ETL) processes remain the bane of data consumers’ existence, largely because many enterprises are still using 25-year-old solutions to manage ETL and integrate data.
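For readers who have never had to maintain one, here is a bare-bones, hypothetical Python sketch of that hand-rolled ETL pattern; the file and table names are invented, and real pipelines layer scheduling, error handling, and schema management on top of this same shape:

    import csv
    import sqlite3

    def extract(path):
        # Extract: read raw rows from a source system's export file.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(rows):
        # Transform: normalize fields and drop rows that fail validation.
        cleaned = []
        for row in rows:
            if not row.get("order_id"):
                continue  # skip malformed rows
            cleaned.append((row["order_id"], row["region"].upper(), float(row["amount"])))
        return cleaned

    def load(rows, db_path="warehouse.db"):
        # Load: write the cleaned rows into the target warehouse table.
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT, region TEXT, amount REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
        conn.commit()
        conn.close()

    # Every step here is manual, brittle, and rewritten for each new source,
    # which is exactly the decades-old pain described above.
    load(transform(extract("orders.csv")))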

The good news is, data integration is becoming much easier to do, and is well on its way to becoming invisible. Enterprise integration cloud technology promises to replace slow and cumbersome scripting and manual data movement with fast, open, seamless data pipelines, optimized with AI techniques.

Remember how, as internet use exploded in the late 1990s, the tech industry was abuzz with companies offering all manner of optimization technologies, like load balancing, data mirroring, and throughput optimization? You rarely hear about those companies anymore; we take high-performance internet service for granted, like the old-fashioned dial tone.

I am confident that we are embarking on a similar era for enterprise data integration, one in which modern, cloud-first technologies will make complex data integration processes increasingly invisible, seamlessly baked into the way data is stored and accessed.

Making history with data integration

I had the pleasure of meeting Walt some years ago at his office, a miniature museum with many of the personal tech industry’s greatest inventions on display. There, his love of tech was apparent and abundant. Apple IIe? Nokia Communicator 9000? Palm Treo and original iPod? Of course. If Walt were to stay at his keyboard, in his office, for another couple of years, I’m pretty sure his collection would be joined by a technology with no physical form factor but of even greater import: the enterprise cloud.

Hats off to you, Walt. And while you may have given your final sign-off, “Mossberg out,” enterprise tech is most definitely still in.

Gaurav Dhillon is CEO of SnapLogic. You can follow him on Twitter @gdhillon.

Case study: Connecting the dots at Box

“Data needs to be delivered to a user in the right place at the right time in the right volume.”

So says veteran SnapLogic user Alan Leung, Senior Enterprise Systems Program Manager at Box, who in this case study explains why a cloud-first analytics ecosystem with self-service integration is the right solution for many enterprise companies. Just as Box is on a mission to improve and innovate cloud-based file storage, the company has internally moved toward a cloud-centric infrastructure that benefits from a cloud-based integration platform.

Read the full case study here or take a look at some highlights below:

  • Overall problem: Box needed to more efficiently integrate cloud-based applications, including Salesforce, Zuora, NetSuite, and Tableau, all of which they relied on for daily operations.
  • Challenges: The primary challenge was the APIs – each application’s integration endpoints behaved differently, making it hard to build useful connections quickly and leaving a series of disjointed silos (see the sketch after this list). Manual upload and download processes strained resources and wasted time and effort.
  • Goal: To satisfy the need to aggregate the business data piling up in various applications into a cloud-based warehouse to enable self-service, predictive analytics.
  • Solution needed: A cloud-based integration platform that would vastly reduce or eliminate the time-consuming manual processes users faced.
  • Solution found: With the SnapLogic Elastic Integration Cloud, Alan and his team benefitted from:
    • A platform that did not require sophisticated technical skills
    • The agility to enable quick and efficient integration projects
    • The ability to handle both structured and unstructured data at speed
    • An enhanced ability to quickly analyze and make sense of so much data, allowing the company to “rapidly pivot [our] operations to seize opportunity across every aspect of the business.”

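The endpoint heterogeneity Alan describes is the classic case for an adapter layer: one common interface in front of APIs that all behave differently. Below is a minimal, hypothetical Python sketch; the adapter classes are invented for illustration, and canned records stand in for real calls to systems like Salesforce or Zuora:

    from abc import ABC, abstractmethod

    class SourceAdapter(ABC):
        """One common interface hiding each endpoint's quirks."""

        @abstractmethod
        def fetch_records(self, since):
            """Return records changed after `since`, normalized to plain dicts."""

    class CrmAdapter(SourceAdapter):
        def fetch_records(self, since):
            # Stub standing in for a CRM's paginated REST API; a real adapter
            # would handle auth, paging, and rate limits behind this call.
            return [{"id": "001", "account": "Acme", "updated": "2017-03-02"}]

    class BillingAdapter(SourceAdapter):
        def fetch_records(self, since):
            # Stub standing in for a billing system's asynchronous export job.
            return [{"id": "INV-9", "account": "Acme", "updated": "2017-03-03"}]

    def sync_all(adapters, since):
        """Downstream pipelines see one record shape, whatever the source."""
        return [rec for adapter in adapters for rec in adapter.fetch_records(since)]

    print(sync_all([CrmAdapter(), BillingAdapter()], since="2017-03-01"))

In the case study, SnapLogic’s pre-built Snaps fill this adapter role; the sketch only illustrates the shape of the problem they take off the team’s plate.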
For a quick snapshot, Box currently has 23 applications connected through the platform, with 170 data pipelines processing over 15 billion transactions daily. The team has also eliminated the need to build a single interface in-house, and an ongoing benefit of Box’s partnership with SnapLogic is that new Snaps are continually being created and can be implemented for future integration needs.

Learn more about all of our customers here, and stay tuned for more customer stories.

Podcast: James Markarian and David Linthicum on New Approaches to Cloud Integration

SnapLogic CTO James Markarian recently joined cloud expert David Linthicum as a guest on the Doppler Cloud Podcast. The two discussed the mass movement to the cloud and how this is changing how companies approach both application and data integration.

In this 20-minute podcast, “Data Integration from Different Perspectives,” the pair discuss how to navigate the new realities of hybrid app integration, data and analytics moving to the cloud, user demand for self-service technologies, the emerging impact of AI and ML, and more.

You can listen to the full podcast here.

 

VIDEO: SnapLogic Discusses Big Data on #theCUBE from Strata+Hadoop World San Jose

It’s Big Data Week here in Silicon Valley, with data experts from around the globe convening at Strata+Hadoop World San Jose for a packed week of keynotes, education, networking, and more – and SnapLogic was front and center for all the action.

SnapLogic stopped by theCUBE, the popular video-interview show that live-streams from top tech events, and joined hosts Jeff Frick and George Gilbert for a spirited and wide-ranging discussion of all things Big Data.

First up was SnapLogic CEO Gaurav Dhillon, who discussed SnapLogic’s record-growth year in 2016, the acceleration of Big Data moving to the cloud, SnapLogic’s strong momentum with the AWS Redshift and Microsoft Azure platforms, the emerging applications and benefits of ML and AI, customers increasingly ditching legacy technology in favor of modern, cloud-first, self-service solutions, and more. You can watch Gaurav’s full video here.

Next up was SnapLogic Chief Enterprise Architect Ravi Dharnikota, together with our customer Katharine Matsumoto, Data Scientist at eero. A fast-growing Silicon Valley startup, eero makes a smart wireless networking system that intelligently routes data traffic on your wireless network in a way that reduces buffering and gets rid of dead zones in your home. Katharine leads a small data and analytics team and discussed how, with SnapLogic’s self-service cloud integration platform, she is able to easily connect a myriad of ever-growing apps and systems and make important data accessible to as many as 15 different line-of-business teams, empowering business users and enabling faster business outcomes. The pair also discussed the ML and IoT integration that is helping eero consistently deliver an increasingly smart and powerful product to customers. You can watch Ravi and Katharine’s full video here.

 

7 Big Data Predictions for 2017

As data increasingly becomes the means by which businesses compete, companies are restructuring operations to build systems and processes that liberate data access, integration, and analysis up and down the value chain. Effective data management has become so important that the Chief Data Officer is projected to become a standard senior board-level role by 2020, with 92 percent of CIOs stating that a CDO is the best person to determine data strategy.

With this in mind as you evaluate your data strategy for 2017, here are seven predictions to contemplate as you build a solid framework for data management and optimization.

  1. Self-Service Data Integration Will Take Off
    Eschewing the IT bottleneck designation and committed to being a strategic partner to the business, IT is transforming its mindset. Rather than be providers of data, IT will enable users to achieve data optimization on a self-service basis. IT will increasingly decentralize app and data integration – via distributed Centers of Excellence based on shared infrastructure, frameworks and best practices – thereby enabling line-of-business heads to gather, integrate and analyze data themselves to discern and quickly act upon insightful trends and patterns of import to their roles and responsibilities. Rather than fish for your data, IT will teach you how to bait the hook. The payoff for IT: satisfying business user demand for fast and easy integrations and accelerated time to value; preserving data integrity, security and governance on a common infrastructure across the enterprise; and freeing up finite IT resources to focus on other strategic initiatives.
  2. Big Data Moves to the Cloud
    As the year takes shape, expect more enterprises to migrate storage and analysis of their big data from traditional on-premises data stores and warehouses to the cloud. For the better part of the last decade, Hadoop’s distributed computing and processing power has made it the standard open source platform for big data infrastructures. But Hadoop is far from perfect. Common user gripes include complexity and instability – not all that surprising given all the software developers regularly contributing their improvements to the platform. Cloud environments are more stable, flexible, elastic, and better suited to handling big data, hence the predicted migration.
  3. Spark Usage Outside of Hadoop Will Surge
    This is the year we will also see more Spark use cases outside of Hadoop environments. While Hadoop limps along, Spark is picking up the pace. Hadoop is still more likely to be used in testing than in production environments, and users are finding Spark more flexible, adaptable, and better suited to certain workloads – machine learning and real-time streaming analytics, for example. Once relegated to the role of Hadoop’s sidekick, Spark will break free and stand on its own two feet this year. I’m not alone in asking: Hadoop needs Spark, but does Spark need Hadoop?
  4. A Big Fish Acquires a Hadoop Distro Vendor?
    Hadoop distribution vendors like Cloudera and Hortonworks paved the way with promising technology and game-changing innovation. But this past year saw growing frustration among customers lamenting increased complexity, instability, and, ultimately, too many failed projects that never left the labs. As Hadoop distro vendors work through growing pains (not to mention limited funds), could it be that a bigger, deeper-pocketed established player – say Teradata, Oracle, Microsoft, or IBM – might swoop in to buy their sought-after technology and marry it to a more mature organization? I’m not counting it out.
  5. AI and ML Get a Bit More Mainstream
    Off-the-shelf AI (artificial intelligence) and ML (machine learning) platforms are loved for their simplicity, low barrier to entry, and low cost. In 2017, off-the-shelf AI and ML libraries from Microsoft, Google, Amazon, and other vendors will be embedded in enterprise solutions, including mobile varieties. Tasks that until now have been manual and time-consuming will become automated and accelerated, extending into the world of data integration.

  6. Yes, IoT is Coming, Just Not This Year
    Connecting billions and billions of sensor-embedded devices and objects over the internet is inevitable, but don’t swallow all the hype just yet. Plenty is being done to harness IoT for specific aims, but progress toward a general-purpose IoT platform is closer to a canter than a gallop. Today’s IoT solutions are so bespoke and purpose-built, in a market still nascent and with standards gradually evolving, that a general-purpose, mass-adopted IoT platform to collect, integrate, and report on data in real time will take, well, more time. Like any other transformative movement in the history of enterprise technology, brilliant bits and pieces need to come together as a whole. It’s coming, just not in 2017.

  7. APIs Are Not All They’re Cracked Up to Be
    APIs have long been the glue connecting apps and services, but customers will continue to question their value versus the investment in 2017. Few would dispute that APIs are useful in building apps and, in many cases, may be the right choice in that regard. But where the integration of apps and/or data is needed, there are better ways. A case in point is iPaaS (integration platform as a service), which lets you quickly and easily connect any combination of cloud and on-premises technologies. Expect greater migration this year toward cloud-based enterprise integration platforms – compared to APIs, iPaaS solutions are more agile, better equipped to handle the vagaries of data, more adaptable to change, easier to maintain, and far more productive.

I could go on and on, if for no other reason than that predictions are informed “best guesses” about the future. If I’m wrong on two or three of my expectations, my peers will forgive me. In the rapidly changing world of technology, batting .400 is a pretty good average.

SnapLogic Sits Down with theCUBE at AWS re:Invent to Talk Self-Service Cloud Analytics

SnapLogic was front-and-center at AWS re:Invent last week in Las Vegas, with our team busier than ever meeting with customers and prospects, showcasing our solutions at the booth, and networking into the evening with event-goers interested in all things Cloud, AWS integration and SnapLogic.

Ravi Dharnikota, SnapLogic’s Head of Enterprise Architecture and Big Data Practice, took time out to stop by and visit with John Furrier, co-founder of the live video interview show theCUBE.  Ravi was joined by Matt Glickman, VP of Products with our partner Snowflake Computing, for a wide-ranging discussion on the changing customer requirements for effective data integration, SaaS integration, warehousing and analytics in the cloud.  

The roundtable all agreed — organizations need fast and easy access to all data, no matter the source, format or location — and legacy solutions built for a bygone era simply aren’t cutting it.  Enter SnapLogic and Snowflake, each with a modern solution designed from the ground-up to be cloud-first, self-service, fully scalable and capable of handling all data. Customers using these solutions together — like Kraft Group, owners of the New England Patriots and Gillette Stadium — enjoy dramatic acceleration in time-to-value at a fraction of the cost by eliminating manual configuration, coding and tuning while bringing together diverse data and taking full advantage of the flexibility and scalability of the cloud.

To make it even easier for customers, SnapLogic and Snowflake recently announced tighter technology integration and joint go-to-market programs to help organizations harness all data for new insights, smarter decisions and better business outcomes.

To watch the full video interview on theCUBE, click here.