SnapReduce 2.0 Eases the Big Data Journey

TechTalk: How SnapReduce 2.0 Makes Big Data Elastic

Last week, I sat down for a TechTalk with SnapLogic's Chief Scientist, Greg Benson. Greg uniquely straddles academia and industry and always brings great insight into emerging industry trends. (You can see some of his blog posts here.) In this recorded webinar, we discussed Hadoop and SnapReduce 2.0, SnapLogic's most recent innovation to help customers embark on their big data journey. Greg shared his perspective on technologies like Sqoop, Flume, and HDFS, as well as on both legacy and modern data integration platforms. He has been talking to many SnapLogic customers and prospects who are looking to us to help them with their big data initiatives. IT organizations investing in Hadoop don't want to miss the opportunity to refresh supporting technology investments, such as data integration, at the same time. Customers reach out to SnapLogic because of Gaurav Dhillon's track record in the data management space and the recent news about the company's new elastic integration platform, which can address both application and data integration requirements.

Greg noted that Financial Services, Retail, and Telco are among the top verticals that are early adopters of Hadoop. We discussed some of the drivers of Hadoop adoption in the enterprise, including the lower cost of archiving and analyzing large volumes of data compared to traditional enterprise data warehouses. Not only are traditional data warehouses prohibitively expensive from a hardware and software standpoint, they are also not well suited to the new age of semi-structured and unstructured data. Similarly, these customers are noticing that their traditional ETL technologies are bursting at the seams trying to handle large volumes of unstructured and semi-structured data. Traditional data integration tools and approaches fall short here as well, because they don't natively speak the lingua franca of big data, JSON (JavaScript Object Notation), and they aren't built to scale out.
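To make the JSON point concrete, here's a minimal sketch in Python (the clickstream event and its fields are invented for illustration; this is not SnapLogic code) of the kind of semi-structured record that strains a fixed relational schema:

```python
import json

# A hypothetical clickstream event: nested objects and a variable-length
# array of the kind a flat, row-oriented ETL schema handles poorly.
event = json.loads("""
{
  "user_id": "u-1042",
  "action": "purchase",
  "device": {"os": "iOS", "app_version": "3.2"},
  "items": [
    {"sku": "A17", "qty": 2},
    {"sku": "B03", "qty": 1}
  ]
}
""")

# A JSON-native pipeline can navigate the structure directly; a traditional
# ETL tool would first have to flatten the nested "items" array into rows.
for item in event["items"]:
    print(event["user_id"], item["sku"], item["qty"])
```

A fixed relational design would need a schema change every time a new optional field showed up in the feed, which is exactly the rigidity that JSON-native, scale-out tools are meant to avoid.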

SnapReduce 2.0

Increasingly, customers are looking to make Hadoop their de facto data management platform of the future and to amortize its cost across multiple projects. Over time, more and more of their data management tasks will run on Hadoop, including existing data integration and ETL tasks migrated into Hadoop to run at Hadoop scale. However, customers are pragmatic enough not to expect to turn off their existing data warehousing investments overnight, if ever in some cases. They look to new technologies like SnapLogic to be the conduit between Hadoop and the relational data warehouse, with SnapLogic acting as the delivery mechanism for analytics result sets produced in Hadoop. With this approach, they can continue to deliver value with their existing business intelligence (BI) tools and data management investments while getting their Hadoop competency up and running.
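As a rough sketch of that conduit pattern (not SnapLogic's implementation; the HDFS path, the tab-delimited column layout, and sqlite3 standing in for the enterprise warehouse are all assumptions for illustration):

```python
import csv
import io
import sqlite3
import subprocess

# Read a hypothetical MapReduce result set out of HDFS using the standard
# "hdfs dfs -cat" CLI; tab-delimited text is a common MapReduce output format.
raw = subprocess.run(
    ["hdfs", "dfs", "-cat", "/results/daily_revenue/part-r-00000"],
    capture_output=True, check=True, text=True,
).stdout

# Deliver the result set into a relational table so existing BI tools can
# query it; sqlite3 stands in for the enterprise data warehouse here.
warehouse = sqlite3.connect("warehouse.db")
warehouse.execute(
    "CREATE TABLE IF NOT EXISTS daily_revenue (day TEXT, region TEXT, revenue REAL)"
)
rows = [
    (day, region, float(revenue))
    for day, region, revenue in csv.reader(io.StringIO(raw), delimiter="\t")
]
warehouse.executemany("INSERT INTO daily_revenue VALUES (?, ?, ?)", rows)
warehouse.commit()
```

The point of the pattern is that the BI layer keeps querying a relational table exactly as it always has, while the heavy lifting quietly moves to Hadoop behind the scenes.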

In this TechTalk, Greg also speculated on the future of Hadoop and the big data industry in general. You can view the webinar recording here, and I've posted the slides below. If you're interested in SnapLogic's SnapReduce 2.0 early access program, feel free to sign up here.

