This week, the SnapLogic team will be supporting one of our partners, Amazon Web Services, in Las Vegas for the annual AWS re:Invent conference. This gathering of the global AWS community will feature hands-on labs and bootcamps, covering topics such as infrastructure maintenance, developer productivity, network security, and application performance.
Gone are the days when enterprises had all of their apps and data sources on-premises. Today is the era of big data, cloud, and hybrid deployments. More and more enterprises are rapidly adopting SaaS applications and hosting their solutions in public clouds, including Amazon Web Services and Microsoft Azure. But enterprises soon realize that their SaaS applications and on-premises data sources are not integrated with their public cloud footprint, and that the integration itself becomes an expensive and time-consuming undertaking.
Short answer to that provocative question: no – it’s just changing. Our data integration road show hit four U.S. cities over the past two weeks. The keynote presentation was delivered by James Markarian, SnapLogic’s CTO. He shared his perspective on modern data management, the role of the data warehouse in a hybrid cloud environment, and important considerations for an enterprise data lake strategy.
We were also joined by our partner, Amazon Web Services, who highlighted their cloud data management platform, with an emphasis on Amazon Redshift. Together we highlighted some of the joint SnapLogic/AWS success stories around hybrid cloud data integration, including GameStop, Box, CapitalOne, and eero.
The bulk of James’ keynote focused on the changes to the data landscape that are affecting the role and structure of the data warehouse. He described “data warehousing 1.0” of the 1980s, which introduced a convenient, single place to warehouse all data but was expensive and reliant on scripting to integrate sources. He contrasted that with “data warehousing 2.0” of the 1990s, which saw the rise of ETL processes and data marts but was still rigid and typically on-premises. Since that era, data warehousing has remained largely static. Now, with the dramatic increase in unstructured and poly-structured data, plus the cloudification of data sources, data warehousing 2.0 has fallen short. Enter the data lake. James cautioned the audience not to think of a data lake as an amorphous, no-rules dumping ground for unstructured data. Instead, he identified multiple “zones” within the data lake, each with its own requirements, rules, and uses.
Finally, James illustrated how lakeshore data marts and cloud-based data warehouses – connected using SnapLogic – address some of the risk and high labor costs that can be associated with Hadoop-centric data lakes.
Attendees – some of whom were already far along in their data lake adoption journey and some still in a “data warehouse 2.0” environment – certainly left with food for thought. Watch this space for the next time SnapLogic comes to a city near you.
In the meantime, don’t forget to subscribe to our data management podcast series, SnapTalk.
SnapLogic and Amazon Web Services are hosting a series of exclusive live seminars starting this week in Dallas. Next week we’ll be in Chicago and New York, followed by Palo Alto later in the month. The seminar series is focused on the future of data warehouse solutions and analytics in the modern enterprise. A key question that we’ll address is: Is the Data Warehouse Dead?
Are you currently using Amazon Web Services and struggling to find the right integration solution to connect the platform with other cloud and on-premises applications and data sources? Adopting AWS Redshift and looking for a modern data integration solution? If so, check out this demonstration of SnapLogic for AWS.
Building on our Fall 2015 release, tonight our library of 350+ pre-built intelligent connectors, called Snaps, is being updated with our November 2015 release. Updates to the SnapLogic Elastic Integration Platform ship quarterly, and Snap updates ship monthly. Some of the updates in this Snap release include:
- Select Snap and Update Snap added to our core JDBC Snap Pack, allowing you to fetch data and update tables respectively.
- Updates and improvements made to several Snap Packs including: Anaplan, Box, Google Spreadsheet, JDBC, JMS, AWS Redshift, Salesforce, ServiceNow, Splunk, Vertica and Workday.
- Updates to core Snaps such as Binary, Flow, REST, SOAP, and Transform.
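For readers unfamiliar with what "select" and "update" connectors do under the hood, here is a minimal sketch of the pattern the new JDBC Select and Update Snaps encapsulate: fetch rows matching a condition, then update a table. It uses Python's built-in sqlite3 purely for illustration, and the table name, columns, and data are all hypothetical; the Snaps themselves work against JDBC-accessible databases.

```python
import sqlite3

# An in-memory database stands in for any JDBC-accessible source (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO accounts (id, name, status) VALUES (?, ?, ?)",
    [(1, "Acme", "trial"), (2, "Globex", "trial")],
)

# Roughly what a Select Snap does: fetch rows matching a condition.
rows = conn.execute(
    "SELECT id, name FROM accounts WHERE status = ?", ("trial",)
).fetchall()

# Roughly what an Update Snap does: update the matching rows.
conn.execute("UPDATE accounts SET status = ? WHERE status = ?", ("active", "trial"))
conn.commit()

print(len(rows))  # rows fetched by the select step
```

In a SnapLogic pipeline these two steps would be configured visually rather than hand-coded, which is the point of shipping them as Snaps.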
You can read more about Snaps here. Contact Us if you have questions about our hybrid cloud and big data integration platform or any of our Snaps. Customers and partners can also visit and subscribe to our Trust Site for updates.
I recently had the pleasure of chatting with SnapLogic customer Yelp. Given the nature of their business, Yelp had a lot of customer data that they needed to process and act upon quickly in order to optimize their revenue streams. They decided to adopt the Amazon Redshift data warehouse service to give them the analytics speed and flexibility they needed. So the next question was: how to get the data into Redshift efficiently.
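For context on what "efficiently" means here: the standard bulk-load path into Redshift is the COPY command reading staged files from Amazon S3, which is exactly the kind of plumbing an integration platform automates. Below is a minimal sketch that assembles such a statement; the table name, S3 bucket, and IAM role ARN are hypothetical placeholders, and a real load would execute the statement through a SQL client connected to the cluster.

```python
def build_redshift_copy(table, s3_path, iam_role, fmt="CSV"):
    """Assemble a Redshift COPY statement for a bulk load from S3.

    All identifiers passed in are placeholders for illustration.
    """
    return (
        f"COPY {table} "
        f"FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        f"FORMAT AS {fmt};"
    )

# Hypothetical example values:
stmt = build_redshift_copy(
    "reviews",
    "s3://example-bucket/exports/reviews/",
    "arn:aws:iam::123456789012:role/RedshiftLoadRole",
)
print(stmt)
```

Hand-rolling and maintaining this staging-and-COPY logic for every source is the DIY cost that a build-versus-buy decision weighs.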
Once they discovered that SnapLogic had the out-of-the-box connectors they needed, not only for Redshift but also for data sources such as Salesforce and Workday, it came down to build versus buy. They could have built the integrations using in-house resources, but a DIY approach carried an opportunity cost and a speed penalty. In the end, they chose SnapLogic and estimate that they cut development time in half.
And they’re not done – they are connecting Workday with Redshift next. Yelp told me, “Looking ahead, we’re planning to deploy the Workday Snap to connect our human resources data to Redshift. SnapLogic has proven to be a tremendous asset.” Sounds like a 5-star review. Read more here.