Workday integrations “on sale”: How to save up to 90%

By Nada daVeiga

Macy’s. NastyGal. The Limited. BCBG. No, that’s not a shopping itinerary. It’s a short list of major retailers that will soon be closing hundreds of stores; some are declaring bankruptcy. Why? In conjunction with the massive shift to online shopping, retailers have trained consumers to shop only when merchandise is “on sale.”

Why pay retail for Workday integrations?

It’s still a bit of a mystery, then, why many enterprises will pay “full retail” to integrate their applications. Let’s take Workday applications, for example. Workday HCM and Workday Financial Management are rapidly gaining traction as enterprise systems of record; thousands of companies are choosing these apps to help drive digital transformation, and to move with more speed and agility.

However, enterprises are often challenged to implement Workday quickly and cost-effectively. Associated migration, integration, and implementation services typically cost up to two and a half times the Workday software cost* due to:

  • Customization: Most enterprises require at least 12 weeks to tailor core Workday software-as-a-service (SaaS) applications to their business processes and needs; integration with other enterprise applications is a separate, additional implementation phase.
  • Complexity of Workday integration offerings: In addition to third-party products such as Informatica PowerCenter, multiple integration solutions are available from Workday. Depending on requirements, enterprises need to work with one or more Workday integration tools:
    • Workday Integration Cloud Connect provides pre-built integrations to common applications and service providers that extend Workday’s functionality. These include Human Capital Management, Payroll, Payroll Interface, Financial Management, and Spend Management.

      While Workday Cloud Connect has “pre-built” integrations, mapping, changing and customizing them is still labor- and consulting-intensive.

    • Workday Enterprise Interface Builders (EIB) enable simple integrations with Workday, e.g., importing data into Workday from a Microsoft Excel spreadsheet for tasks such as hiring a group of employees or requesting mass compensation changes. Users can also create outbound exports of data from Workday to an Excel spreadsheet.

      However, this feature does not have native integration with Microsoft Active Directory, so employee on- and off-boarding can only be accomplished via manual, labor-intensive EIBs.

    • Workday Studio is a desktop-based integrated development environment used to create complex hosted integrations; Workday Studio Custom Integrations are the most advanced of these.

      The Workday Studio development environment, while powerful, is complex, consulting-heavy and costly to use.

    • Workday Web Services gives customers a programmatic public API for On-Demand Workday Business Management Services (for a sense of what coding directly against this API involves, see the sketch after this list).
  • Reliance on external resources: Across industries and geographies, Workday consultants and programmers are scarce and expensive.
  • Time-intensive manual integrations: Many Workday integrations are built manually and must be individually maintained, incurring “technical debt” that robs resources from future IT initiatives.
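
To make the last two points concrete, here is a minimal sketch of what hand-coding against Workday Web Services can look like in Python, using the zeep SOAP client. The tenant, endpoint URL, and API version below are hypothetical placeholders, and the real labor of field mapping, paging, and error handling is omitted:

```python
from zeep import Client
from zeep.wsse.username import UsernameToken

# Hypothetical tenant and API version; the actual WSDL URL comes from
# your Workday implementation team.
WSDL = ("https://impl-services1.workday.com/ccx/service/"
        "acme_tenant/Human_Resources/v28?wsdl")

client = Client(WSDL, wsse=UsernameToken("isu_integration@acme_tenant", "secret"))

# Fetch one page of workers. Every downstream system still needs its own
# mapping, paging, and retry logic on top of this single call.
response = client.service.Get_Workers(Response_Filter={"Page": 1, "Count": 50})
for worker in response.Response_Data.Worker:
    print(worker.Worker_Descriptor)
```

Multiply that boilerplate across every connected system, and the 2.5x services multiplier becomes easier to understand.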

SnapLogic can reduce the cost of Workday integrations by up to 90%

The SnapLogic Enterprise Integration Cloud uniquely enhances Workday HCM and Workday Financials with a built-for-the-cloud, easy-to-use solution optimized for both IT and business users. The building blocks of the Enterprise Integration Cloud, SnapLogic Workday Snaps, are pre-built connectors that visually abstract the entire Workday API, allowing data and processes to be quickly integrated using pre-built patterns. With SnapLogic’s Enterprise Integration Cloud, companies can use a visual tool to automate HR and financial processes across Workday applications, cloud point solutions, and legacy systems.

The SnapLogic Enterprise Integration Cloud greatly increases the flexibility of HR and financial processes, and eases the pain of adding or retiring applications, enabling teams to focus on more strategic business priorities. It also lets business “citizen integrators,” not just IT specialists, execute Workday integrations without reliance on scarce outside consultants.

Attention shoppers: SnapLogic delivers

  • Faster time to value: Workday integrations can be done in days, not months.

  • Dramatically lower cost: Using the SnapLogic Enterprise Integration Cloud can reduce the time and cost of Workday integrations by up to 90%.*
  • No programming or maintenance required: SnapLogic’s visual approach eliminates the need for specialized consultants or Java/XML programmers to build and maintain integrations by hand.

Find out how SnapLogic can drive down the cost of your organization’s Workday integrations, while enabling new agility. Download the new white paper, “Cause and effect: How rapid Workday integration drives digital transformation.”

Nada daVeiga is VP Worldwide Pre-Sales, Customer Success, and Professional Services at SnapLogic. Follow her on Twitter @nrdaveiga.


*Savings calculated on the average cost of Workday integrations: service fees of 2.5x the Workday license fee. Source: Workday Analyst Day, 2016.

We Left Informatica. Now You Can, Too | Webinar


You can run a modern company on a mainframe. You can also ride a horse to the office. But would it really make sense to do this? Join us on Wednesday, March 22 for a discussion with Informatica’s former CEO Gaurav Dhillon and CTO James Markarian about reinventing data integration for the modern enterprise.


Does your business still run on Informatica? It might make more sense to switch to a more modern platform. Join the conversation, hosted by industry analyst David Linthicum, as our distinguished panel discusses the key business reasons and technology factors driving modern enterprises to embrace data integration built for the cloud.

They will also cover:

  • The evolution of data integration – from the pre-internet, mainframe days of Informatica to today’s modern cloud solutions
  • How they have re-invented application and data integration in the cloud
  • The changing role of IT – from “helicopter” to enabler
  • The cost to modern enterprises of inaction
  • Why sticking to the status quo is not an option

Register for this exclusive webinar here and be sure to join the conversation on Wednesday at 11am PT/ 2pm ET.

VIDEO: SnapLogic Discusses Big Data on #theCUBE from Strata+Hadoop World San Jose

It’s Big Data Week here in Silicon Valley, with data experts from around the globe convening at Strata+Hadoop World San Jose for a packed week of keynotes, education, networking, and more. SnapLogic was front and center for all the action.

SnapLogic stopped by theCUBE, the popular video-interview show that live-streams from top tech events, and joined hosts Jeff Frick and George Gilbert for a spirited and wide-ranging discussion of all things Big Data.

First up was SnapLogic CEO Gaurav Dhillon, who discussed SnapLogic’s record-growth year in 2016, the acceleration of Big Data’s move to the cloud, and SnapLogic’s strong momentum with the AWS Redshift and Microsoft Azure platforms. He also covered the emerging applications and benefits of ML and AI, and why customers are increasingly ditching legacy technology in favor of modern, cloud-first, self-service solutions. You can watch Gaurav’s full video below, and here:

Next up was SnapLogic Chief Enterprise Architect Ravi Dharnikota, together with our customer Katharine Matsumoto, Data Scientist at eero. A fast-growing Silicon Valley startup, eero makes a smart wireless networking system that intelligently routes data traffic on your wireless network in a way that reduces buffering and gets rid of dead zones in your home. Katharine leads a small data and analytics team and discussed how, with SnapLogic’s self-service cloud integration platform, she’s able to easily connect a myriad of ever-growing apps and systems and make important data accessible to as many as 15 different line-of-business teams, empowering business users and enabling faster business outcomes. The pair also discussed the ML and IoT integration that helps eero consistently deliver an increasingly smart and powerful product to customers. You can watch Ravi and Katharine’s full video below, and here:


Azure Data Platform: Reading and writing data to Azure Blob Storage and Azure Data Lake Store

By Prasad Kona

Organizations are increasingly adopting cloud data and analytics platforms like Microsoft Azure. In this first of a series of Azure Data Platform blog posts, I’ll help get you on your way to easier cloud adoption and data integration.

In this post, I focus on ingesting data into the Azure Cloud Data Platform and demonstrate how to read and write data to Microsoft Azure Storage using SnapLogic.

For those who want to dive right in, my 4-minute step-by-step video “Building a simple pipeline to read and write data to Azure Blob storage” shows how to do what you want, without writing any code.

What is Azure Storage?

Azure Storage enables you to store terabytes of data to support small to big data use cases. It is highly scalable, highly available, and can handle millions of requests per second on average. Azure Blob Storage is one of the types of services provided by Azure Storage.

Azure provides two key types of storage for unstructured data: Azure Blob Storage and Azure Data Lake Store.

Azure Blob Storage

Azure Blob Storage stores unstructured object data. A blob can be any type of text or binary data, such as a document or media file. Blob storage is also referred to as object storage.

Azure Data Lake Store

Azure Data Lake Store provides what enterprises look for in storage today. It:

  • Provides additional enterprise-grade security features like encryption and uses Azure Active Directory for authentication and authorization.
  • Is compatible with Hadoop Distributed File System (HDFS) and works with the Hadoop ecosystem including Azure HDInsight.
  • Works with Azure HDInsight clusters, which can be provisioned and configured to directly access data stored in Data Lake Store.
  • Allows data stored in Data Lake Store to be easily analyzed using Hadoop analytic frameworks such as MapReduce, Spark, or Hive.

How do I move my data to the Azure Data Platform?

Let’s look at how you can read and write to the Azure Data Platform using SnapLogic.

For SnapLogic Snaps that support Azure accounts, you have the option to choose either an Azure Storage account or an Azure Data Lake Store account:

[Screenshot: choosing between Azure Storage and Azure Data Lake Store accounts in SnapLogic]

You can configure the Azure Storage account in SnapLogic as shown below, using the Azure storage account name and access key you get from the Azure Portal:

[Screenshot: Azure Storage account configuration in SnapLogic]
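
For comparison, here is a minimal sketch of the same read and write done by hand against the Azure SDK, using the azure-storage-blob Python package. The account name, key, and container below are hypothetical placeholders; the Snaps handle this plumbing for you:

```python
from azure.storage.blob import BlobServiceClient

# The same two credentials the Snap account asks for (hypothetical values).
ACCOUNT_NAME = "mystorageaccount"
ACCOUNT_KEY = "<access key from the Azure Portal>"

service = BlobServiceClient(
    account_url=f"https://{ACCOUNT_NAME}.blob.core.windows.net",
    credential=ACCOUNT_KEY,
)
container = service.get_container_client("demo-container")

# Write a blob, then read it back.
with open("leads.csv", "rb") as f:
    container.upload_blob(name="input/leads.csv", data=f, overwrite=True)

data = container.download_blob("input/leads.csv").readall()
print(len(data), "bytes read back from Blob Storage")
```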

Configuring the Azure Data Lake Store account in SnapLogic, as shown below, uses the Azure Tenant ID, Access ID, and Secret Key that you get from the Azure Portal:

[Screenshot: Azure Data Lake Store account configuration in SnapLogic]
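
Likewise, a minimal hand-coded sketch for Data Lake Store, assuming the azure-datalake-store Python package (for Gen1 stores) and the same three credentials as hypothetical placeholders:

```python
from azure.datalake.store import core, lib

# The same three values the Snap account asks for (hypothetical).
token = lib.auth(
    tenant_id="<tenant id>",
    client_id="<access id>",
    client_secret="<secret key>",
)
adls = core.AzureDLFileSystem(token, store_name="mydatalakestore")

# List a directory and read a file back from the store.
print(adls.ls("/data"))
with adls.open("/data/leads.csv", "rb") as f:
    print(len(f.read()), "bytes read from Data Lake Store")
```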

Put together, you’ve got a simple pipeline that illustrates how to read and write to Azure Blob Storage:

[Screenshot: pipeline that reads and writes Azure Blob Storage]

Here’s the step-by-step video again: Building a simple pipeline to read and write data to Azure Blob storage

In my next blog post, I will describe approaches to moving data from your on-prem databases to Azure SQL Database.

Prasad Kona is an Enterprise Architect at SnapLogic. You can follow him on LinkedIn or Twitter @prasadkona.


Cloud fluency: Does your data integration solution speak the truth?

There’s been a lot of cloud-washing in the enterprise data integration space — vendors are heavily promoting their cloud solutions, yet for many, only a skinny part of their monolithic apps has been “cloudified.”

In an era of “alternative facts,” it’s important to make technology decisions based on truths. Here is an important one: A data integration solution built on genuine, made-for-the-cloud platform as a service (PaaS) technology offers key benefits, including:

  1. Self-service integration by “citizen integrators,” without reliance on IT
  2. For IT organizations, the ability to easily connect multiple data sets, to achieve a bespoke enterprise tech environment

These are in addition to the traditional benefits of cloud solutions: no on-premise installation; continuous, no-fuss upgrades; and the latest software innovation, delivered automatically.

Why “built for the cloud” matters

You can’t get these benefits with “cloudified” software that was originally invented in 1992. Of course, I’m referring to Informatica; while the company promotes its cloud capabilities, the software largely retains a monolithic architecture that resides on-premises, and does most of its work there, too.

In contrast, SnapLogic is purpose-built for the cloud, meaning there are no legacy components that prevent the data and application integration service from running at cloud speed. Data streams between applications, databases, files, social and big data sources via the Snaplex, a self-upgrading, elastic execution grid.

In more everyday terms, SnapLogic has 100% cloud fluency. Our technology was made for the cloud, born in the cloud, and it lives in the cloud.

The consumerization of data integration

Further to point 1 above on “citizen integrators”: industry futurists like R. “Ray” Wang have been talking about the consumerization of IT for more than half a decade. And that is exactly what SnapLogic has mastered. Our great breakthrough, our big innovation, is that we have consumerized the dungeon-like, dark problem of data integration.

Integration used to be a big, boring problem relegated to the back office. We’ve brought it from the dungeon to the front office and into the light. It is amazing to see how people use our product. They go from one user to hundreds of users as they get access to data in a secure, organized and appropriately access-controlled manner. But you don’t have a cast of thousands of IT people enabling all this; users merely help themselves. This is the right model for the modern enterprise.

“An ERP of one”

As for the second major benefit of a true cloud solution — a bespoke enterprise tech environment, at a fraction of the time and cost of traditional means — here’s a customer quote from a young CEO of a hot company that’s a household name.

“Look, we’ve got an ‘ERP of one’ by using SnapLogic — a totally customized enterprise information environment. We can buy the best-of-the-best SaaS offerings, and then with SnapLogic, integrate them into a bespoke ERP system that would cost a bajillion dollars to build ourselves. We can custom mix and match the capabilities that uniquely fit us. We got the bespoke suit at off-the-rack prices by using SnapLogic to customize our enterprise environment.”

To my mind, that’s the big payoff, and an excellent way to think about SnapLogic’s value. We are able to give our customer an “ERP of one” faster and cheaper than they could have ever imagined. This is where the world is going, because of the vanishingly low prices of compute power and storage, and cloud computing.

Today you literally can, without a huge outlay, build your own enterprise technology world. But you need the glue to realize the vision, to bring it all together. That glue is SnapLogic.

Find out more about how and why SnapLogic puts best-of-breed integration within every organization’s grasp. Register for this upcoming webinar featuring a conversation with myself, industry analyst and data integration expert David Linthicum, and James Markarian, SnapLogic’s CTO and also an Informatica alumnus: “We left Informatica. Now you can, too.”


Gaurav Dhillon is CEO at SnapLogic. Follow him on Twitter @gdhillon.

Finally viable: Best-of-breed enterprise environments

It’s one of the oldest, most contentious rivalries in the enterprise application arena: What’s better, best-of-breed environments or single-vendor suites? Since the turn of the century, suite vendors have argued that their approach avoids the steep data integration challenges that can be inherent with best-of-breed. On the flip side, point solution vendors say that enterprise suites pack in a lot of “dead wood” but don’t offer the real functionality, or customization potential, that is needed.

However, unlike religion and politics, this is one argument that is headed toward extinction. The biggest barrier to best-of-breed strategies — data integration — is, hands down, easier by an order of magnitude today, thanks to built-for-the-cloud solutions that eliminate previous barriers. As a result, best-of-breed application environments aren’t just viable, they’re readily attainable.

Two dimensions of data integration

There are two ways in which data integration has dramatically improved with native cloud solutions: on the back end, between the applications themselves, and on the front end, from the user experience perspective.

On the back end, one of the first-order implications of a robust data model is the number of connectors a data integration solution provides. SnapLogic has hundreds of Snaps (connectors) and that’s not coincidental. Our library of Snaps proves our suitability to the modern world; it’s an order of magnitude easier to build and support a SnapLogic connector than an Informatica connector — the integration tool of choice for last-century best-of-breed environments — because our data model fits the modern world.

As a result, customers are up and running with SnapLogic in a day or two; in minutes we can show customers what SnapLogic is capable of doing. By comparison, with Informatica and other legacy integration technologies, developers or consultants can work for weeks or months on the same integration project and still have nothing to show for it. They can’t deliver quickly due to the limitations of the underlying technology.

The ease of data integration with SnapLogic has profound implications for the user experience. Instead of having to beg analysts to run ETL (extract, transform, and load) jobs to pull the data set they need, SnapLogic users can get whatever data they want, themselves. They can then analyze it and get answers far faster than under previous best-of-breed regimes.

These are not subtle differences.

The economics of cloud-based integration

The subscription-based pricing model of cloud-based integration services further democratizes data access. Instead of putting the burden on IT to buy and implement an integrated application suite — which can cost upwards of $100 million in a large enterprise — cloud-based integration technology can be acquired at a nominal per-user fee, charged to a corporate credit card. Lines of business have taken advantage of this ease of access, making their own technology moves with the full knowledge and support of IT.

For IT organizations that have embraced their new mission of enablement, the appeal of cloud-based data integration is clear. In addition to allowing business users to work the way they want to, a cloud-based solution is far easier to customize, deploy, and support globally. And it offers an obvious answer to the question, “Do I want to continue feeling the pain of using integrated app suites, or do I want to join the new century?”

Find out more about how and why SnapLogic puts best-of-breed integration within every organization’s grasp. Register for this upcoming webinar featuring a conversation with myself, industry analyst and data integration expert David Linthicum, and Gaurav Dhillon, SnapLogic’s CEO and also an Informatica alumnus: “We left Informatica. Now you can, too.”


James Markarian is CTO at SnapLogic. Follow him on Twitter @jamesmarkarian.

Deep Dive into SnapLogic Winter 2017 Snaps Release

By Pavan Venkatesh

Data streams with Confluent and migration to Hadoop: In my previous blog post, I explained how future data movement trends will look. In this post, I’ll dig into some of the exciting things we announced as part of the Winter 2017 (4.8) Snaps release. This will also address future data movement trends for customers who want to move data to the cloud from different systems or migrate to Hadoop.

Major highlights in the Winter 2017 release (4.8) include:

  • Support for Confluent Kafka – A distributed messaging system for streaming data
  • Teradata to Hadoop – A quick and easy way to migrate data
  • Enhancements to the Teradata Snap Pack – Using Teradata Parallel Transporter (TPT), customers can quickly load, update, or delete data in Teradata
  • The Redshift Multi-Execute Snap – Allows multiple statements to be executed sequentially, so customers can maintain business logic
  • Enhancements to the MongoDB Snap Pack (Delete and Update) and the DynamoDB Snap Pack (Delete and Delete-item)
  • Workday Read output enhancements – Output is now easier for downstream systems to consume
  • NetSuite Snap Pack improvements – Users can now submit asynchronous operations
  • Security feature enhancements – Including SSL for the MongoDB Snap Pack and invalidating database connection pools when account properties are modified
  • Major performance improvement when writing to an S3 bucket with the S3 File Writer – Users can now configure a buffer size in the Snap so larger blocks are sent to S3 quickly

Confluent Kafka Snap Pack

Kafka is a distributed messaging system based on a publish/subscribe model, offering high throughput and scalability. It is mainly used to ingest data from multiple sources and deliver it to multiple downstream systems. Use cases include website activity tracking, fraud analytics, log aggregation, sales analytics, and others. Confluent is the company that provides the enterprise capability and offering for open source Kafka.

Here at SnapLogic, we have built Kafka Producer and Consumer Snaps as part of the Confluent Snap Pack. A quick dive into Kafka’s architecture and how it works is a good segue before getting into the Snap Pack and pipeline details.

[Diagram: Kafka cluster architecture with Producers, Brokers, and Consumers]

Kafka consists of one or more Producers, which publish messages from one or more upstream systems, and one or more Consumers, which consume messages on behalf of downstream systems. A Kafka cluster comprises one or more servers called Brokers. Messages (a key and a value, or just a value) are fed into a higher-level abstraction called a Topic. Each Topic can carry messages from different Producers, and users can define new Topics for new categories of messages. Producers write messages to Topics; Consumers consume from one or more Topics. Topics are partitioned, replicated, and persisted across Brokers. Messages within a partition are ordered, and each has a sequential ID number called an offset. ZooKeeper usually maintains these offsets; Confluent calls this its coordination kernel.

Kafka also allows multiple Consumers to be configured as part of a Consumer group when consuming from a Topic.

With over 400 Snaps supporting various on-prem systems (relational databases, files, NoSQL databases, and others) and cloud products (NetSuite, Salesforce, Workday, Redshift, Anaplan, and others), the SnapLogic Elastic Integration Cloud combined with the Confluent Kafka Snap Pack is a powerful way to move data between systems in a fast, streaming manner. Customers can realize benefits and generate business outcomes quickly.

With respect to the Confluent Kafka Snap Pack, we support Confluent version 3.0.1 (Kafka v0.9). These Snaps abstract away the complexities: users only have to provide configuration details to build a pipeline that moves data easily. One thing to note is that when multiple Consumer Snaps in a pipeline are configured with the same consumer group, each Consumer Snap is assigned a different subset of the partitions in the Topic.

[Screenshot: Confluent Kafka Producer Snap configuration]

[Screenshot: Confluent Kafka Consumer Snap configuration]

[Screenshot: pipeline with Confluent Kafka Producer and Consumer Snaps]

In the above example, I built a pipeline where sales leads (messages) stored in local files and MySQL are sent to a Topic in Confluent Kafka via Confluent Kafka Producer Snaps. The downstream system, Redshift, consumes these messages from that Topic via a Confluent Kafka Consumer Snap and bulk loads them for historical and auditing needs. The messages are also sent to Tableau as another Consumer, to run analytics on how many leads were generated this year so the customer can compare against last year.
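
For readers curious about what the Producer and Consumer Snaps abstract away, here is a rough hand-coded equivalent using the confluent-kafka Python client. The broker address, topic, and consumer group below are hypothetical placeholders:

```python
import json
from confluent_kafka import Producer, Consumer

BROKERS = "localhost:9092"   # hypothetical broker address
TOPIC = "sales-leads"        # hypothetical topic

# Producer side: publish a lead, as the Producer Snap would.
producer = Producer({"bootstrap.servers": BROKERS})
producer.produce(TOPIC, key="lead-42", value=json.dumps({"name": "Acme Co"}))
producer.flush()

# Consumer side: Consumers sharing a group.id split the Topic's
# partitions among themselves, as the Consumer Snaps do.
consumer = Consumer({
    "bootstrap.servers": BROKERS,
    "group.id": "redshift-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])

msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(msg.topic(), msg.partition(), msg.offset(), msg.value())
consumer.close()
```

Because the Redshift loader and the Tableau feed would use different group.id values, each receives its own full copy of the Topic’s messages.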

Easy migrations from Teradata to Hadoop

There has been a major shift of customers moving from expensive Teradata solutions to Hadoop or other data warehouses. Until now, there has not been an easy way to transfer large amounts of data from Teradata to Hadoop. With this release, we have developed a Teradata Export to HDFS Snap with two goals in mind: 1) ease of use and 2) high performance. This Snap uses the Teradata Connector for Hadoop (TDCH v1.5.1). Customers just have to download this connector from the Teradata website, in addition to the regular JDBC JARs; no installation is required on either the Teradata or Hadoop nodes.

TDCH utilizes MapReduce (MR) as its execution engine: queries get submitted to this framework, and the distributed processes launched by MapReduce make JDBC connections to the Teradata database. The fetched data is loaded directly into the defined HDFS location. The degree of parallelism for these TDCH jobs is defined by the number of mappers (a Snap configuration) used by the MapReduce job; the number of mappers also determines the number of files created in the HDFS location.
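
To see how those pieces fit together, here is an illustrative sketch of launching a comparable TDCH import job by hand from Python. The host, credentials, table, and paths are hypothetical, and the flag names are a best-effort reading of TDCH 1.5’s import tool; treat it as a sketch rather than an exact recipe:

```python
import subprocess

# Hypothetical TDCH invocation: each mapper opens its own JDBC connection
# to Teradata and writes one file under the target HDFS path, so
# -nummappers 8 should yield 8 output files.
subprocess.run([
    "hadoop", "jar", "teradata-connector-1.5.1.jar",
    "com.teradata.connector.common.tool.ConnectorImportTool",
    "-url", "jdbc:teradata://td-prod.example.com/DATABASE=sales",
    "-username", "etl_user",
    "-password", "secret",
    "-jobtype", "hdfs",
    "-fileformat", "textfile",
    "-sourcetable", "leads",
    "-nummappers", "8",
    "-targetpaths", "/data/teradata/leads",
], check=True)
```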

The Snap account details, along with a sample query to extract data from Teradata and load it into HDFS, are shown below:

[Screenshot: Teradata account configuration in SnapLogic]

[Screenshot: Teradata Export to HDFS Snap configuration]


The pipeline to this effect is as follows:

[Screenshot: pipeline with the Teradata Export to HDFS Snap]

As you can see above, just one Snap exports data from Teradata and loads it into HDFS. Customers can later use the HDFS Reader Snap to read the exported files.

The Winter 2017 release equips customers with lots of benefits, from data streaming and easy migrations to security and performance enhancements. More information on the SnapLogic Winter 2017 (4.8) release can be found in the release notes.

Pavan Venkatesh is Senior Product Manager at SnapLogic. Follow him on Twitter @pavankv.