Eight Data Management Requirements for the Enterprise Data Lake

This article originally appeared as a slide show on ITBusinessEdge: Data Lakes – 8 Data Management Requirements.

2016 is the year of the data lake. It will surround, and in some cases drown, the data warehouse, and we’ll see significant technology innovations, methodologies and reference architectures that turn the promise of broader data access and big data insights into a reality. But big data solutions must mature and go beyond the role of being primarily developer tools for highly skilled programmers. The enterprise data lake will allow organizations to track, manage and leverage data they’ve never had access to in the past. New enterprise data management strategies are already leading to more predictive and prescriptive analytics that drive improved customer service experiences, cost savings and an overall competitive advantage when they are aligned with key business initiatives.

SnapLogic and the Data Lake

In this final post in the series based on Mark Madsen’s whitepaper, Will the Data Lake Drown the Data Warehouse?, I’ll summarize SnapLogic’s role in the enterprise data lake.

SnapLogic is the only unified data and application integration platform as a service (iPaaS). The SnapLogic Elastic Integration Platform has 350+ pre-built intelligent connectors – called Snaps – to connect everything from AWS Redshift to Zuora. Its streaming architecture supports real-time, event-based and low-latency enterprise integration requirements, as well as the high volume, variety and velocity of big data integration, all in the same easy-to-use, self-service interface.
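To make the streaming idea concrete, here is a minimal sketch in Python. It is purely illustrative: the function names and file paths are hypothetical stand-ins, not SnapLogic’s actual API. The point is that records flow through each step one at a time, end to end, rather than being staged in bulk between steps:

```python
# Illustrative only: a generator-based pipeline showing the streaming
# integration concept. read_source/transform/write_sink and the file
# names are hypothetical, not SnapLogic's actual API.
import json
from typing import Iterable, Iterator

Record = dict

def read_source(path: str) -> Iterator[Record]:
    """Stream records one at a time instead of loading the full dataset."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def transform(records: Iterable[Record]) -> Iterator[Record]:
    """A Snap-like step: reshape each record as it flows through."""
    for r in records:
        yield {"customer_id": r.get("id"), "total": float(r.get("amount", 0))}

def write_sink(records: Iterable[Record], path: str) -> int:
    """Terminal step: persist the stream; returns the record count."""
    n = 0
    with open(path, "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")
            n += 1
    return n

# Compose the steps; records stream through with constant memory use.
count = write_sink(transform(read_source("orders.jsonl")), "staged.jsonl")
```

Because each step yields records lazily, the same composition handles a low-latency trickle of events or a high-volume batch without changing the pipeline itself.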

Big Data Integration: Don’t Wait for a Faster Horse!

While many believe that he never actually said it, one of my favorite quotes about modern innovation is the Henry Ford line: “If I had asked people what they wanted, they would have said faster horses.” (Note that there is similar disagreement about a famous Steve Jobs quote about listening to customers.)

When it comes to re-thinking the integration plumbing that will be required for the next wave of big data and microservices in the modern enterprise, David Linthicum had this to say in our webinar last week:

“Those who don’t understand the strategic value that new approaches to data integration will have in the emerging world will end up being caught without the technology they need to be successful. I get a call a week from people who are trying to take the existing approaches, patterns and technologies that they leveraged 10, 15, 20 years ago and re-apply them to big data systems, to data lakes, and to new versions of how we’re dealing with cloud-based systems and petabyte-scale databases. They’re falling short, and the reason they’re falling short is that they were engineered to solve a particular static problem.”

He went on to say, “The Data Lake concept and the ability to move in that direction, I think, is going to provide a ton more value within enterprises than we saw in the past.” The lively discussion with SnapLogic co-founder and CEO, Gaurav Dhillon, covered ETL, ESBs, the future of data warehousing and the Internet of Things. Be sure to check out the recording or listen to the podcast on iTunes.

As Ventana Research recently noted in its benchmark research, Big Data Requires Integration Technology. But trying to get your old ETL tool to solve your new big data integration challenges is like hoping your horse will run faster when what you really need is a Tesla. Back to David Linthicum, who has authored 13 books on integration:

“We have to change the way in which these systems exchange information and that is something either you can resist and try to take your existing technology and throw it at the problem and yell at your vendor for not getting the things into their technology they need to get in, or you take this as an opportunity to reinvent the way in which you do data integration, the way in which you approach the problem and ultimately bring more value into your enterprise by automating access to the information that, quite frankly, you need to run your business and will change your business, if used effectively.”

Here’s an overview from our Chief Scientist, Greg Benson, of how SnapLogic is delivering a modern approach to cloud and big data integration with SnapReduce, the Hadooplex, and our investment in building a JSON-centric iPaaS.
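For a sense of why a JSON-centric document model matters for big data integration, here is a minimal, hypothetical Python sketch (it is not SnapReduce or the Hadooplex API). It shows heterogeneous JSON records flowing through a single pipeline and being routed by content, with no fixed up-front schema, which is exactly the variety problem that rigid row-and-column ETL struggles with:

```python
# Illustrative sketch of a JSON-centric (document) data model: records
# with different shapes flow through one pipeline without a fixed
# schema. Not SnapLogic's actual API; the event shapes are made up.
import json

raw_events = [
    '{"type": "web", "user": "a1", "page": "/pricing"}',
    '{"type": "iot", "sensor": "t-9", "temp_c": 21.5}',
    '{"type": "web", "user": "b2", "page": "/docs", "referrer": "g"}',
]

def route(doc: dict) -> str:
    """Route each document by inspecting its content, not a schema."""
    return {"web": "clickstream", "iot": "telemetry"}.get(
        doc.get("type"), "quarantine"
    )

for line in raw_events:
    doc = json.loads(line)
    print(route(doc), "<-", doc)
```

New event shapes simply land in the pipeline and are routed or quarantined at read time, rather than forcing a schema change before any data can move.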