SnapLogic Best Practices: Deploying Projects Between Phases

[update – check out what’s new in our Spring 2016 release – the Metadata Snaps are also useful for Lifecycle Management requirements]

One of the areas where our integrated data services team and partners spend time with customers early in a SnapLogic Elastic Integration Platform deployment is deploying from one project phase to another (Dev -> QA -> Prod). There are a number of different configuration options; in this post I’ll describe one, with a minimal sketch of the promotion step following the assumptions below. First, a few assumptions:

  • The enterprise Lifecycle Management feature is not implemented in this example
  • The phases that are in use are Development, QA and Production
  • Each phase is managed at the project level as a separate project within a single Organization setup
  • The users have the necessary permissions to perform the operations described in this post
  • The enhanced account encryption feature is not in use in the current SnapLogic Org
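
To make the mechanics concrete, here is a minimal sketch of the promotion step in Python. It assumes a pipeline has already been exported from the Dev project as a .slp file (SnapLogic pipelines export as JSON); the project paths and the idea that phase-specific account references appear as strings prefixed with the project path are illustrative assumptions, not the actual .slp schema, so inspect a real export before relying on this:

```python
"""Minimal sketch of the Dev -> QA promotion step described above.

Assumes a pipeline was exported from the Dev project as a .slp file
(SnapLogic pipelines export as JSON). The project paths below, and the
assumption that phase-specific account references appear as strings
prefixed with the project path, are illustrative only -- inspect a real
export to see where such references actually live.
"""
import json

DEV_PROJECT = "/MyOrg/projects/Orders_Dev"   # placeholder paths
QA_PROJECT = "/MyOrg/projects/Orders_QA"

def remap_refs(node, old_prefix, new_prefix):
    """Recursively rewrite any string value that points into the Dev project."""
    if isinstance(node, dict):
        return {k: remap_refs(v, old_prefix, new_prefix) for k, v in node.items()}
    if isinstance(node, list):
        return [remap_refs(v, old_prefix, new_prefix) for v in node]
    if isinstance(node, str) and node.startswith(old_prefix):
        return new_prefix + node[len(old_prefix):]
    return node

with open("orders_pipeline.slp") as f:          # exported from Dev via the UI
    pipeline = json.load(f)

pipeline = remap_refs(pipeline, DEV_PROJECT, QA_PROJECT)

with open("orders_pipeline_qa.slp", "w") as f:  # ready to import into QA
    json.dump(pipeline, f, indent=2)
```

The same remapping applies for QA -> Prod; in practice the real work is deciding which references (accounts, file paths, task schedules) differ per phase.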


Key Announcements from Splunk Conference

Last week we attended the Splunk Worldwide Users’ Conference in Las Vegas. The theme of the event was “Your Data, No Limits,” and its spirit truly embodied new approaches to finding data and innovative opportunities to use it. There were a few key announcements I took away from the conference that I want to share.

First, they announced Hunk, an analytics engine in beta that aims to make Hadoop more understandable for users by combining “the best of both worlds: user experience meets compute intelligence.” With Hunk, a search query is split into a streaming query and a map-reduce job that execute in parallel, letting the platform serve UX needs while keeping up with performance requirements. Last year Splunk announced Hadoop Connect, which is used behind the scenes to pull data, and the Hunk demonstrations were impressive, processing more than a billion records in seconds.
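
To illustrate the split (this is a conceptual sketch in Python, not Splunk or Hunk code), imagine a cheap streaming pass that gives the user an immediate preview while an exhaustive batch pass runs in parallel and replaces the preview when it completes; every name below is an illustrative assumption:

```python
"""Conceptual sketch of the split described above: a quick streaming pass
returns preview results to the UI immediately while a full batch
("map-reduce style") pass runs in parallel and supersedes the preview
when it finishes. All data and names here are illustrative assumptions."""
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

EVENTS = [{"status": s} for s in ("200", "200", "500", "404", "200") * 1000]

def streaming_preview(events, sample_size=100):
    # Cheap pass over a small prefix: approximate counts, available at once.
    return Counter(e["status"] for e in events[:sample_size])

def full_batch(events):
    # Exhaustive pass over everything: exact counts, slower to complete.
    return Counter(e["status"] for e in events)

with ThreadPoolExecutor() as pool:
    batch_future = pool.submit(full_batch, EVENTS)   # kicked off in parallel
    print("preview:", streaming_preview(EVENTS))     # shown to the user now
    print("final:  ", batch_future.result())         # replaces the preview
```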


Splunk also announced Data Models, which let users define how they interpret data and create views of that data in the specific ways they want to visualize it. A drag-and-drop interface then lets them build these models and turn them into dashboards, from which Splunk can create visual representations. The ease of use here, combined with performance and security measures that can be configured as needed, makes Data Models even more valuable.
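
As a rough illustration of the idea (not Splunk’s actual data-model format), a model can be thought of as a constraint that picks out the relevant events plus the fields a user cares about, with a dashboard panel being just an aggregation over that modeled view; the structure and field names below are assumptions:

```python
"""Rough illustration (not Splunk's actual data-model format): a model
names the constraint and fields a user cares about, and a dashboard panel
is an aggregation over the modeled events. Structure and field names are
assumptions for illustration."""
from collections import Counter

web_errors_model = {
    "constraint": lambda e: e.get("status", "").startswith("5"),  # which events belong
    "fields": ["host", "status", "uri"],                          # which attributes matter
}

raw_events = [
    {"host": "web01", "status": "500", "uri": "/cart", "junk": "..."},
    {"host": "web02", "status": "200", "uri": "/"},
    {"host": "web01", "status": "503", "uri": "/cart"},
]

# Apply the model: filter to matching events and project the named fields.
modeled = [
    {f: e.get(f) for f in web_errors_model["fields"]}
    for e in raw_events
    if web_errors_model["constraint"](e)
]

# A dashboard panel is then an aggregation over the modeled view.
print(Counter(e["host"] for e in modeled))   # e.g. errors per host
```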

Overall, I was most impressed by the many implementations of Splunk across IT, ranging from network monitoring to OS error analysis for mobile to application performance management and monitoring. Finally, the concept of “Forwarders” also has huge potential for capturing machine data across industries. Forwarders push data from any machine to the Splunk engine for analysis and visual representation. What’s even more important is that the analysis runs in real time, with the data analyzed and visualized seamlessly for the user.
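
In essence, a forwarder is a small agent that tails local machine data and ships it to a central Splunk instance. The bare-bones Python sketch below captures that shape; the host, port, log path, and raw-TCP input are placeholders (the real Universal Forwarder adds buffering, acknowledgement, compression, and TLS):

```python
"""Bare-bones sketch of what a forwarder does: tail a local log file and
push new lines to a central Splunk instance. Assumes Splunk has a raw TCP
data input listening on the host/port below (both placeholders); the real
Universal Forwarder adds buffering, acknowledgement, and TLS."""
import socket
import time

SPLUNK_HOST, SPLUNK_PORT = "splunk.example.com", 9997  # placeholder input
LOG_PATH = "/var/log/app.log"                          # placeholder source

with socket.create_connection((SPLUNK_HOST, SPLUNK_PORT)) as sock, \
        open(LOG_PATH) as log:
    log.seek(0, 2)                      # start at end of file, like tail -f
    while True:
        line = log.readline()
        if not line:
            time.sleep(0.5)             # wait for new machine data
            continue
        sock.sendall(line.encode())     # forward the event for indexing
```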

With the rapid growth of the SaaS market and enterprise adoption of hundreds of SaaS applications, it’s becoming imperative for the enterprise to be able to run analytics on the data from all of those applications. Unfortunately, you can’t deploy a Forwarder into a SaaS application, but the need to move the data remains. So with all of these announcements, what happens to the application data that needs to be analyzed or loaded into Hadoop? This is where SnapLogic fills the void, with seamless near-real-time integration to any application in the cloud. In keeping with the “data” theme from last week and where SnapLogic comes in, we recently held a webinar explaining how, by using SnapLogic for Splunk, customers can achieve seamless integration between cloud and on-premises applications, making the latest Splunk updates even more valuable.