Eight Data Management Requirements for the Enterprise Data Lake

This article originally appeared as a slide show on ITBusinessEdge: Data Lakes – 8 Data Management Requirements.

2016 is the year of the data lake. It will surround, and in some cases drown, the data warehouse, and we’ll see significant technology innovations, methodologies and reference architectures that turn the promise of broader data access and big data insights into a reality. But big data solutions must mature and go beyond their current role as developer tools for highly skilled programmers. The enterprise data lake will allow organizations to track, manage and leverage data they’ve never had access to before. New data management strategies are already enabling more predictive and prescriptive analytics that, when aligned with key business initiatives, drive improved customer service experiences, cost savings and an overall competitive advantage.

The New Data Lake Environment Needs a New Workhorse

In the whitepaper “How to Build an Enterprise Data Lake: Important Considerations Before You Jump In,” industry expert Mark Madsen outlined the principles that must guide the design of your new reference architecture and some of the differences from the traditional data warehouse. In his follow-up paper, “Will the Data Lake Drown the Data Warehouse,” he asks the question, “What does this mean for the tools we’ve been using for the last ten years?”

In this latest post in the series from that paper (see the first post here and the second post here), Mark tackles big data integration using an example.

JSON is the New CSV and Streams are the New Batch

This is the second post in the series from Mark Madsen’s whitepaper, “Will the Data Lake Drown the Data Warehouse?” In the first post, Mark outlined the differences between the data lake and the traditional data warehouse, concluding: “The core capability of a data lake, and the source of much of its value, is the ability to process arbitrary data.”

In this post, Mark reviews the new environment and its new requirements.
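The title’s claim can be made concrete with a small sketch. The data and totals below are illustrative, not from Mark’s paper: a CSV workload is naturally batch (read the whole file, then process), while newline-delimited JSON lends itself to record-at-a-time streaming, since each record is self-describing.

```python
import csv
import io
import json

# Batch style: load an entire CSV file into memory, then process all rows.
csv_text = "id,amount\n1,9.99\n2,24.50\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
batch_total = sum(float(r["amount"]) for r in rows)

# Stream style: handle newline-delimited JSON records one at a time,
# as they arrive, without waiting for a complete file.
ndjson_lines = ['{"id": 1, "amount": 9.99}', '{"id": 2, "amount": 24.50}']
stream_total = 0.0
for line in ndjson_lines:
    record = json.loads(line)  # each JSON record carries its own structure
    stream_total += record["amount"]
```

Both paths compute the same total; the difference is that the streaming loop never needs the full dataset in hand, which is the property that matters when data arrives continuously.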