SnapLogic continues to democratize big data with several additions to its core integration platform and Snap library. The Winter release adds the ability to translate data pipelines into the Spark data processing framework without scripting. Now, experts and citizen integrators alike can easily choose whether to execute a data pipeline in MapReduce, Spark, or standard mode for optimal performance.
Learn about SnapLogic’s capabilities for big data ingestion, preparation and delivery here.
The Winter 2016 release brings a new “queued” state for data pipeline tasks designed to support large enterprise implementations. Users can set a cluster resource threshold, such that when the threshold is reached, the pipeline task is queued until resources become available. This helps administrators schedule pipeline execution for optimal times and determine appropriate resource allocation.
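The queued-state behavior described above can be modeled simply: tasks run until a configured resource threshold is reached, and any task submitted past that point waits in a queue until capacity frees up. The sketch below is an illustrative model of that scheduling pattern, not SnapLogic's actual implementation; the class and method names are hypothetical.

```python
# Hedged sketch of threshold-based pipeline queuing (illustrative only,
# not SnapLogic's implementation).
from collections import deque

class PipelineScheduler:
    def __init__(self, max_concurrent):
        self.max_concurrent = max_concurrent  # cluster resource threshold
        self.running = set()
        self.queued = deque()

    def submit(self, task):
        """Run the task if under the threshold; otherwise queue it."""
        if len(self.running) < self.max_concurrent:
            self.running.add(task)
            return "running"
        self.queued.append(task)
        return "queued"

    def complete(self, task):
        """On completion, promote the next queued task if capacity allows."""
        self.running.discard(task)
        if self.queued and len(self.running) < self.max_concurrent:
            self.running.add(self.queued.popleft())

sched = PipelineScheduler(max_concurrent=2)
assert sched.submit("pipeline-1") == "running"
assert sched.submit("pipeline-2") == "running"
assert sched.submit("pipeline-3") == "queued"   # threshold reached
sched.complete("pipeline-1")
assert "pipeline-3" in sched.running            # promoted from the queue
```

This mirrors how an administrator-set threshold lets queued work start automatically as resources become available, rather than failing or overloading the cluster.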
Learn more about SnapLogic as a leading integration platform as a service (iPaaS) here.
Further enhancing SnapLogic’s self-service capabilities to support big data integration, the Winter release offers:
New with this release are Snaps for Parquet and ORC, columnar storage solutions designed for big data and Hadoop. The SnapLogic Snap for Parquet, for example, makes it simple to read Parquet files from HDFS and convert the data into documents, or convert documents into the Parquet format and write to HDFS.
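The document-to-columnar conversion at the heart of that workflow can be sketched in a few lines. The pure-Python example below illustrates the row-to-column pivot that columnar formats like Parquet and ORC perform; it is a conceptual sketch under assumed document shapes, not SnapLogic's Snap API or the actual Parquet file format.

```python
# Conceptual sketch: pivoting row "documents" into column arrays, the core
# idea behind columnar formats like Parquet/ORC (illustrative only).

def to_columnar(documents):
    """Convert a list of row documents (dicts) into column arrays."""
    columns = {key: [] for key in documents[0]}
    for doc in documents:
        for key, value in doc.items():
            columns[key].append(value)
    return columns

def to_documents(columns):
    """Reverse: rebuild row documents from column arrays."""
    keys = list(columns)
    length = len(columns[keys[0]])
    return [{k: columns[k][i] for k in keys} for i in range(length)]

docs = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]
cols = to_columnar(docs)
# Columnar layout groups each field's values together:
assert cols == {"id": [1, 2], "name": ["alpha", "beta"]}
assert to_documents(cols) == docs  # round trip back to documents
```

Storing each field's values contiguously is what lets columnar engines scan only the columns a query touches, which is why these formats suit big data workloads.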
Learn more about how SnapLogic works here.
A core strength of the SnapLogic platform is the extensive library of prebuilt Snaps. The Winter 2016 Release features some updates to how Snaps are used:
The Winter 2016 Release also includes many new and updated Snaps, including AWS S3, Anaplan, Eloqua, MySQL, NetSuite, SOAP, and essential data engineering transformations such as pivot and group by.
View SnapLogic’s library of intelligent connectors, called Snaps, here.