Introducing SnapLogic Community

We are happy to announce that our new SnapLogic Community is live!

[Screenshot: community overview]

Created by popular demand (it has been running in alpha and beta for a few months), the community is a place where we encourage customers to ask questions about SnapLogic and share their expertise and best practices. Whether you’re looking for help or have tips and tricks to share, the goal is to provide a community where members help each other.

[Screenshot: community categories]

  • Want assistance configuring a particular Snap? Post a question in the Snap Packs category.
  • Interested in sharing a pipeline you have found to be particularly useful? Write about it in the Designing Pipelines category.
  • Want to build your own Snap Pack? Go to the Developing Snaps category.

And yes, we are involved. SnapLogic developers, product managers, field teams, and others review posts as they come in and as they are updated, but we will also rely on community members to support each other.

How to get started

The SnapLogic community is currently for SnapLogic customers only. If you’re a customer with a verified customer email domain, it won’t take us long to approve you and grant you access. 

Go to https://community.snaplogic.com/ to request access. 

When approved, look out for an email with further instructions.

Once you’re approved and logged in, take a look at the content already provided by customers and SnapLogic employees who took part in the alpha and beta releases. Find something particularly useful? Go ahead and “Like” the post. Want to contribute to a thread or have a question? Please do! This community is for you, and you will help drive the direction it takes as it evolves. As your Community Manager, I will occasionally ask for suggestions for the site, but feel free to post any ideas you have to the Community Info category as they come up.

We’re looking forward to growing this community. Sign up today and let us know what you think.

Request access at  https://community.snaplogic.com/.

SnapLogic iPaaS November 2015 Snap Update

Adding on to our Fall 2015 release, tonight our library of 350+ pre-built intelligent connectors, called Snaps, is being updated with our November 2015 release. Updates to the SnapLogic Elastic Integration Platform are quarterly and Snap updates are monthly. Some of the updates in this Snap release include:

  • Select and Update Snaps have been added to our core JDBC Snap Pack, allowing you to fetch data and update tables, respectively.
  • Updates and improvements have been made to several Snap Packs, including: Anaplan, Box, Google Spreadsheet, JDBC, JMS, AWS Redshift, Salesforce, ServiceNow, Splunk, Vertica and Workday.
  • Updates to core Snaps such as Binary, Flow, REST, SOAP, and Transform.

You can read more about Snaps here. Contact Us if you have questions about our hybrid cloud and big data integration platform or any of our Snaps. Customers and partners can also visit and subscribe to our Trust Site for updates.

March 2015 Snap Release for the SnapLogic Elastic Integration Platform

This month, we are planning the delivery of our latest Snap Release.

New Snaps

NetSuite Update will be added to the NetSuite Snap Pack. This Snap provides the ability to update the records of an object in NetSuite.

The SAP HANA Snap Pack will be expanded with the addition of a Stored Procedure Snap. Use this Snap to execute a stored procedure in the database and write any output to the output view.

This release also introduces the Splunk Search Snap, which executes a search query using Splunk’s REST API.

Updated Snaps

As with all releases, we continue to make improvements to our Snaps. Changes in this release focus on the Database Snaps, the NetSuite Snap Pack, the Script Snap, and others. See the Release Notes for more information.

SnapLogic Elastic Integration Platform January 2015 Release

We are pleased to announce that our January 2015 release went live this past weekend. Some of the updates included are:

Faster Pipelines
The Beta subscription feature Ultra Tasks lets a pipeline, known as an Ultra Pipeline, continuously consume documents from external sources. Plain HTTP/S requests can be fed into a pipeline through a FeedMaster that is installed as part of a Groundplex.

[Screenshot: Ultra Pipeline]
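To make the request flow concrete, here is a minimal sketch of feeding a document into an Ultra Pipeline over HTTP/S. The URL, credentials, and payload are illustrative assumptions only; the real feed URL comes from your own task configuration and the FeedMaster installed with your Groundplex.

```python
import requests

# Hypothetical feed URL; the actual URL is provided by your task configuration
# and is served by the FeedMaster installed as part of your Groundplex.
FEED_URL = "https://feedmaster.example.com:8084/feed/my_org/Demo/ultra_task"

document = {"customer_id": 42, "status": "active"}

# Each HTTP/S request becomes one document consumed by the Ultra Pipeline.
response = requests.post(
    FEED_URL,
    json=document,
    auth=("svc_user", "svc_password"),  # placeholder credentials
    timeout=30,
)
response.raise_for_status()
print(response.status_code, response.text)
```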

A new type of account, the Service Account, cannot log into the SnapLogic user interface but can be used to run triggered tasks using basic authentication.

User Experience Productivity
Continuing to make building pipelines easier, the SnapLogic Designer now supports rubber-band select, which allows you to Shift-click and drag around multiple Snaps on the canvas to select them.

[Screenshot: rubber-band select]

In addition, support for common keyboard commands was added for copy/paste/delete, undo/redo, select all, and zoom in/zoom out.

Public APIs
User and Group management is now available to org admins through a Public API. Functionality includes listing, creating, updating and deleting users and groups. See the SnapLogic Developer site for more information (customer login required).

[Screenshot: API documentation]
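The exact routes and request bodies are documented on the SnapLogic Developer site; the snippet below is only a rough sketch of how an org admin might drive such an API over HTTPS with basic authentication. The endpoint path and field names are assumptions, not the documented API.

```python
import requests

BASE_URL = "https://elastic.snaplogic.com"          # assumption: your control plane URL
ADMIN_AUTH = ("org.admin@example.com", "password")  # placeholder org-admin credentials

# Hypothetical endpoint path for illustration; see the Developer site for the real routes.
users_endpoint = f"{BASE_URL}/api/public/users"

# List existing users in the org.
resp = requests.get(users_endpoint, auth=ADMIN_AUTH, timeout=30)
resp.raise_for_status()
print(resp.json())

# Create a new user (field names are assumptions).
new_user = {"email": "new.user@example.com", "first_name": "New", "last_name": "User"}
resp = requests.post(users_endpoint, json=new_user, auth=ADMIN_AUTH, timeout=30)
resp.raise_for_status()
```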

Enhanced Expression Language
Additional functions have been added to the SnapLogic Expression Language to gather more information on your Snaps and pipelines. Additionally, the documentation of the Expression Language was reorganized to match the structure of the functions and properties drop-down list.

[Screenshot: pipeline functions drop-down]

New and Enhanced Snaps
In this release, we introduce the SAP HANA Upsert Snap and the SQL Server Stored Procedure Snap. In addition, enhancements have been made to multiple Snaps including database accounts, REST, Redshift, Salesforce, SOAP, and Workday.

See the release notes for more information.

November 2014 Release for the SnapLogic Elastic Integration Platform

We are pleased to announce that our November 2014 release went live this past weekend. Some of the updates available in this release include:

Security

Enhanced Account Encryption lets you encrypt account credentials used to access endpoints from SnapLogic using a private key/public key model.

[Screenshot: enhanced account encryption]
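This is not SnapLogic’s implementation, but the public/private key model itself can be illustrated in a few lines with the Python cryptography package: a credential encrypted with the public key can only be recovered by whoever holds the matching private key.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate a key pair; in this model the private key stays on your side
# (for example with the Groundplex) and only the public key is used to encrypt.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone with the public key can encrypt an account credential...
ciphertext = public_key.encrypt(b"my-database-password", oaep)

# ...but only the private-key holder can decrypt it.
assert private_key.decrypt(ciphertext, oaep) == b"my-database-password"
```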

Additionally, initial user passwords are now system-generated and the ability to reset a forgotten password has been added.

 

Patterns

Projects can now be saved to show up on the Patterns tab of the catalog, letting you use any pipelines within them as templates.

[Screenshot: creating a project]

SnapReduce

SnapReduce, SnapLogic’s support for big data integration using the Hadoop framework to process large amounts of data across large clusters, has come out of Beta. Support for Kerberos has been introduced as a Beta feature.

 

Lifecycle Management (Beta)

This subscribable feature lets you manage pipeline development by promoting pipelines through phases on the way to production.

[Screenshot: org phases]

UI Enhancements

A few changes have been implemented to make pipeline building easier, including:

  • Unique Snap names when placing multiples of the same Snap on the canvas.
  • Copying and pasting of configured Snaps.
  • Multi-select of Snaps.
  • The ability to turn off the auto-validation that occurs with each save.

 

Pipeline Documentation

With one click, you can download or print a document that describes your pipeline, listing all Snaps, their configurations and the pipeline properties.

[Screenshot: pipeline documentation]

New and Enhanced Snaps

This release introduces the Email Delete Snap, SumoLogic Snap Pack (Beta), and SAP HANA Snap Pack (Beta). In addition, enhancements have been made to SAP Execute, Directory Browser, Google Analytics, File Writer, Aggregate, and database writer Snaps.


See the release notes for detailed information.

SnapLogic Tips and Tricks: Understanding the Task Execute Snap

This article is brought to you by our Senior Director of Product Management, Craig Stewart.

SnapLogic’s Task Execute Snap was introduced in the Summer 2014 release. In the Fall 2014 release, the Task Execute Snap was enhanced with the addition of (transparent) compression and data-type propagation. This Snap is similar to the ForEach Snap (where a single execution of the pipeline is fired off for each incoming data document), but the Task Execute Snap:

  • sends the whole input document
  • can aggregate a number of input rows to stream to the target pipeline
  • sends the SnapLogic data-type information across to the target, preserving date/time, numeric and string types
  • compresses data as it is passed to the target pipeline for optimization of both network and memory use

The target pipeline should be configured to receive the input data (in the case of POSTing the data) or produce the data (in the case of a GET), much like a sub-pipeline, although this is even more loosely coupled. The target pipeline will be invoked once for each batch of input data. Let’s dig into some details. I’ve created three examples:

  1. POST-Type, where we push data to the target pipeline and expect no response
  2. GET-Type, where we get the output of a remotely executed task
  3. POST-and-GET type, where we combine inbound payload and retrieved payload

Example 1: POST-type

For my first example, I created a simple pipeline to consume the data, expecting it to be POSTed as the payload of the URL request. It is made up of just a JSON Formatter and a File Writer, although it could have been any other Snap with a document input:

[Screenshot: JSON Formatter and File Writer pipeline]

And then I created a triggered task in the Manager to invoke the target pipeline:

[Screenshot: triggered task configuration]

And created a pipeline to send the data; in this case I’m selecting from my favourite Oracle database, limiting it to 50,102 rows (an arbitrary number). As you can see, I have configured the Task Execute Snap to use the task I defined earlier, with a batch size of 10,000 rows, implying that it should make 6 calls (5 x 10,000 rows, plus 1 x 102 rows). Each request is made synchronously. Note that as this is all within the same organisation, the Snap handles all of the authentication and authorisation for you.

[Screenshot: Task Execute Snap configuration]

The Task is selected from the drop-down, which introspects the available metadata and shows only the triggerable pipelines from both the current and shared projects. (Note: If the Use On-Premises URL option is checked, it will only show those pipelines where an on-premises URL is available, i.e. running in a Groundplex.) If this option is selected and the Snaplexes are all on-premises, no data will go out through the firewall; it all remains secure between the local nodes.

The Batch size can be adjusted to your requirements, balancing load and memory usage. Each pipeline invocation has a certain overhead in preparing, executing and logging, which should be considered if you use a low number of rows per batch. The higher the number of rows per batch, the higher the memory consumption.

When I run the pipeline, the data is streamed from the source into the Task Execute Snap. With the batch size set to 10,000, rows are aggregated in memory until the input stream completes or the batch size is reached, at which point the batch is sent to the target pipeline as the data payload.
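To make the batching arithmetic concrete, here is a small Python sketch (not SnapLogic code) of the behaviour just described: rows accumulate until the batch size is reached or the input ends, and each full or final partial batch becomes one call to the target task.

```python
def batches(rows, batch_size=10_000):
    """Group an incoming row stream into batches, flushing the final partial batch."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # final partial batch
        yield batch

# 50,102 input rows with a batch size of 10,000 -> 6 calls: 5 x 10,000 and 1 x 102.
calls = list(batches(range(50_102)))
print(len(calls), [len(b) for b in calls])  # 6 [10000, 10000, 10000, 10000, 10000, 102]
```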

Here is the execution run log, where you can see the expected 6 calls passing the data to the target task. Compression takes place automatically, since the Snap knows it can gzip the content and preserve the data types.

[Screenshot: execution run log]

The output of the Task Execute Snap is just the HTTP return code given by the target pipeline:

[Screenshot: output preview]

This is shown in the Dashboard pipeline display as follows:

[Screenshot: Dashboard pipeline display]

 

Example 2: GET-Type

In this example, I have changed the first pipeline to remove the input view, and just execute the target task and receive its output data.

[Screenshot: Task Execute Snap configuration (GET)]

In this case, the batch size is irrelevant. Next, I changed the called pipeline to be my data producer:

[Screenshot: Oracle Select pipeline]

This time, the result is a smaller set of data out of my Oracle database. Next, I created a new Task, this time to my smaller, producer pipeline:

[Screenshot: creating the task]

Now, when I execute the pipeline, the run-time stats are as follows:

[Screenshot: run-time stats]

And the Dashboard Pipeline Display shows the following:

[Screenshot: Dashboard pipeline display]

 

Example 3: POST-and-GET-type

The executed pipeline can also be a data producer. In this example, I am using the same type of calling pipeline, although this time I limited the Oracle SELECT to 52 rows of data. The driving pipeline looks remarkably similar:

[Screenshot: Task Execute Snap configuration (POST and GET)]

Notice that I have a different target URL and a much lower batch size. This time the executed pipeline has an input stream, which takes the inbound payload and, in this case, doubles the data by copying it and unioning the result. It also has an unterminated output, which is returned to the caller.

[Screenshot: Copy and Union in the executed pipeline]
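Functionally, the Copy and Union simply echo every inbound document twice. Purely as an illustration of that behaviour (not of how the Snaps are implemented), the equivalent in Python is:

```python
def copy_and_union(documents):
    """Duplicate the inbound payload: two copies of the stream, unioned together."""
    docs = list(documents)
    return docs + docs

batch = [{"id": 1}, {"id": 2}]
print(copy_and_union(batch))  # [{'id': 1}, {'id': 2}, {'id': 1}, {'id': 2}]
```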

Again I created a task for it in the SnapLogic Integration Cloud Manager:

[Screenshot: task configuration]

Now I have the complete set. The idea of this configuration is that I select a set of data out of my Oracle database, in this case 52 rows, and send it in batches of 10 to the target pipeline, taking the benefit of the data-type preservation, compression, and so on described previously. But this time I will actually get a result set streamed back, again preserving data types and formats.

Here are the run-time execution stats:

[Screenshot: run-time execution stats]

As you can see, this time I both sent and received payloads, allowing the SnapLogic Elastic Integration Platform to handle the authentication, authorisation and payload compression. No messing with headers or any other additional configuration. Here, you see the execution from the Dashboard Pipeline Display:

[Screenshot: Dashboard pipeline display]

Summary

In summary, the Task Execute Snap enables you to pass batches of data to and from target pipelines, automatically aggregating, authenticating, compressing the data payload, and waiting for successful completion. For more SnapLogic best practices and tips and tricks, be sure to check out our TechTalk webinars and recordings.

SnapLogic Tips and Tricks: REST Snap Compression Capabilities

This article is brought to you by our Senior Director of Product Management, Craig Stewart.

In the Fall 2014 release, SnapLogic added a number of new features across the broad range of Snaps. Amongst those was the ability for a REST GET operation to accept gzip-encoded data. When combined with a triggered pipeline in another Snaplex, this can significantly improve performance and reliability (the less time you spend moving data over the wire, the fewer packets are moved, the less scope there is for network errors, and the less time the transfer should take).

As an example, I created a simple pipeline which outputs a set of data, in this case just an Oracle database query returning 101,000 rows of data:

[Screenshot: Oracle Select pipeline]

For this, I created a task so I could call it using the REST GET Snap in the other pipeline:

[Screenshot: task configuration]

To call it, I created a pipeline using the REST GET Snap, which calls this URL:

[Screenshot: REST GET Snap]

As the URLs for triggered pipelines require authentication, I created a Basic Auth account with my credentials and associated it with the REST GET Snap. The URL is copied and pasted from the task created previously. This was all possible in earlier versions of SnapLogic. The change in this version is the ability to add the accept headers:

[Screenshot: REST GET Snap headers]

Now, if the Snap receives data in gzip format, it will automatically uncompress and process the received data (even when it does not come from a SnapLogic triggered pipeline). No additional Snaps are required. The clever bit is that the triggered pipeline will also note that the caller is able to accept gzip format, so it will automatically send the data in that format.

In summary, you just need to add the HTTP headers to the REST GET Snap.
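Outside of SnapLogic, the same idea looks like the sketch below: the caller advertises gzip support with an Accept-Encoding header and the compressed response is decompressed transparently. The URL and credentials are placeholders standing in for the triggered-task URL and Basic Auth account described above.

```python
import requests

TASK_URL = "https://example.snaplogic.test/feed/my_org/Demo/produce_rows_task"  # placeholder task URL
AUTH = ("pipeline.user@example.com", "basic_auth_password")                     # placeholder credentials

# Advertise that we can accept gzip; the server may then compress the payload.
headers = {"Accept-Encoding": "gzip"}

resp = requests.get(TASK_URL, headers=headers, auth=AUTH, timeout=300)
resp.raise_for_status()

# requests decompresses gzip transparently, much as the REST GET Snap now does.
print(resp.headers.get("Content-Encoding"))  # 'gzip' if the server compressed the body
print(len(resp.content), "bytes after decompression")
```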

As an aside, the Task Execute Snap does this compression automatically; that will be covered in a future post. For more SnapLogic Integration Cloud best practices and tips and tricks, be sure to check out our TechTalk webinars and recordings.