The future of big data processing lies in the adoption of commercial Hadoop distributions and their supported deployments. The macro use case for big data is the data lake: a massive store of structured and unstructured data that does not carry the restrictions of a traditional data warehouse. Data lakes store everything: any type of data, at any volume, of any scope, that enterprise data users may need, for any reason.
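The key difference from a warehouse is schema-on-read: records are ingested as-is, and structure is imposed only when a consumer queries them. The sketch below illustrates the idea with a hypothetical, minimal "lake" of JSON lines; the record shapes and the `read_sensor_temps` reader are illustrative assumptions, not part of any particular product.

```python
import json
import os
import tempfile

# Hypothetical mini data lake: heterogeneous records are stored as-is
# (schema-on-read), unlike a warehouse that enforces a schema on write.
records = [
    {"type": "clickstream", "user": "u1", "page": "/home"},
    {"type": "sensor", "device": "d7", "temp_c": 21.5},
    {"type": "invoice", "id": 1001, "amount": 99.0, "currency": "USD"},
]

lake_path = os.path.join(tempfile.mkdtemp(), "events.jsonl")
with open(lake_path, "w") as f:
    for rec in records:  # ingest everything; no schema check at write time
        f.write(json.dumps(rec) + "\n")

# Structure is applied only when a consumer reads, per use case:
def read_sensor_temps(path):
    temps = []
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            if rec.get("type") == "sensor":
                temps.append(rec["temp_c"])
    return temps

print(read_sensor_temps(lake_path))  # [21.5]
```

A warehouse would have rejected these three records at load time for having incompatible schemas; the lake defers that decision to each reader.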
Despite the power and potential of data lakes, many enterprises continue to approach this technology with the same data integration approaches and mechanisms they’ve used in the past, none of which work well. How can we tap into the power of the data lake? Continue reading “The Data Lake Data Integration Challenge”
This article originally appeared as a slide show on ITBusinessEdge: Data Lakes – 8 Data Management Requirements.
2016 is the year of the data lake. It will surround, and in some cases drown, the data warehouse, and we'll see significant technology innovations, methodologies and reference architectures that turn the promise of broader data access and big data insights into a reality. But big data solutions must mature and go beyond the role of being primarily developer tools for highly skilled programmers. The enterprise data lake will allow organizations to track, manage and leverage data they've never had access to in the past. New enterprise data management strategies are already leading to more predictive and prescriptive analytics that drive improved customer service experiences, cost savings and an overall competitive advantage when aligned with key business initiatives. Continue reading “Eight Data Management Requirements for the Enterprise Data Lake”