Productivity killer: Disconnected data is holding workers back

By Scott Behles

A productive business is, more often than not, a successful one, and a productive employee is a happy employee. While the modern business world is, admittedly, a little more complex than this, these truisms still hold water.

However, as noted by McKinsey, productivity growth in G20 nations has been stagnating, which is no doubt causing worry for governments, businesses, and their employees.

But why is productivity slowing to a crawl? After all, we live in a time of unparalleled technological advancement, where digital transformation projects have been specifically designed to boost organizational speed and agility, and make our working lives easier and more productive.

As the second part of our study into disconnected data reveals, businesses are struggling to access and integrate enterprise data across their ever-growing number of applications and systems, turning the systems that are meant to help us into ones that ultimately harm productivity.

Nearly all of our respondents (98%) indicated that they are involved in projects that rely on company data from multiple systems and departments, and that they regularly use seven different business apps and systems on average. Those in IT use more than business users (eight compared to five), and companies in the financial services sector use even more, clocking in at an average of nine apps.

This is too much data and too many systems for any one person to handle. When employees are overloaded with multiple systems and the data is not properly integrated, much of the painstaking work of searching for data, data entry, processing, analysis, and integration inevitably falls to employees. This is neither the quickest nor the most efficient way to perform such tasks, it increases the likelihood of error, and it detracts from more valuable, mentally stimulating work.

Shockingly, nine out of ten business users we asked said they’re involved in performing these mind-numbing tasks and, unsurprisingly, 61% of our respondents expressed frustration over project delays caused by poor data integration.

The impact of sub-par data integration on productivity can be significant, particularly for large businesses like those we surveyed. All in all, businesses are wasting 19 working days a year, per employee, by asking their skilled employees to perform these rote tasks, while simultaneously frustrating their workforce.

What’s likely most frustrating for employees is that they’re aware that the route to greater productivity lies in better data integration, but not enough is being done to address it.  Nearly two-thirds of our respondents stated that poor data integration practices, which are too often manual and sidestep automation, are negatively impacting productivity, and an overwhelming majority (91%) pointed to connecting data, applications and systems as an important move for their organization. In fact, over a third – likely the most heavily burdened with manual data tasks – see it as essential. Our respondents speculate that, if these data gaps were closed, they could see a 28% boost in efficiency on average.

When businesses invest in digital transformation projects to improve efficiency, productivity, and competitive edge, they have to take a long-term view of how new systems and apps will interact with each other. Burdening employees with the onerous task of manually migrating data between various systems easily counteracts the productivity benefits these new tools were meant to deliver in the first place, and serves only to frustrate and bore skilled employees. Remember, when all’s said and done, a productive employee is a happy employee.

To review the full results of our study, download “The Productivity Pains of Disconnected Data.”

Miss the first part of our study? Read “The High Cost of Disconnected Data.”

Scott Behles is Head of Corporate Communications at SnapLogic. Follow him on Twitter @sbehles.

How SnapLogic enables faster digital transformation with Azure cloud migration

By Pavan Venkatesh

Digital transformation is a key initiative many organizations are undertaking to deliver value to their customers rapidly. This type of initiative requires fundamental organizational changes, including changes to operations, culture, and leadership; innovation through the adoption of new business models; and improvements to the experience of ecosystem partners and customers.

A recent IDC report shows that “By 2018, 70% of siloed digital transformation (DX) initiatives will ultimately fail because of insufficient collaboration, integration, sourcing, or project management.” Hence, it is essential for organizations to have the appropriate set of digital tools, expertise, mindset, and integration mechanisms to achieve digital transformation.

Organizations should fold a cloud strategy into their digital transformation efforts, one that allows them to migrate data from on-premises environments to the cloud. By migrating data to the cloud, organizations can improve their operational agility and enable teams more quickly.

Microsoft’s cloud-first strategy

Microsoft has fully embraced a cloud-first strategy: SQL Server’s newest capabilities are released first to Azure SQL Database in the cloud, and only later to the on-premises SQL Server database.

In the SnapLogic Enterprise Integration Cloud Summer 2017 Release (4.10), we launched the new Azure SQL DB Snap Pack, which abstracts away the underlying APIs and enables users to quickly move data from an on-premises environment to the Azure cloud.

Azure SQL DB is a relational database-as-a-service that uses the SQL Server engine underneath. It is multi-tenant and can scale out based on application needs with no downtime. SnapLogic provides abstraction-layer components, called Snaps, that allow users to perform various operations on Azure SQL DB without any coding. The following Azure SQL Snaps are provided in the Summer 2017 release:

  • Azure SQL Bulk Load: The Bulk Load Snap enables users to quickly move on-premises data stored in databases such as MySQL and SQL Server, or in files, to Azure SQL DB in the cloud. It uses the BulkCopy API to stream data quickly to Azure SQL DB. This API was introduced in the SQL Server JDBC driver v4.2 and does not rely on the BCP command-line utility, which eliminates the need to generate temporary files during the process, as the data is handled in memory. It is fast! (A rough hand-coded equivalent is sketched after this list.)

This Snap also has the flexibility to be used in both cloud and on-premises execution environments.

  • Azure SQL Bulk Extract: The Bulk Extract Snap allows users to move large amounts of data stored in Azure SQL DB to other downstream systems, such as Azure Blob, Azure Data Lake Store, Azure Data Warehouse, Redshift, and others. This Snap uses the BCP command-line utility to extract data, storing it temporarily on the local system before moving it to the designated target.
  • Azure SQL Execute: This Snap executes various SQL statements (select, insert, delete) and can be used in a pipeline to perform respective database operations.
  • Azure SQL Stored Procedure: This Snap invokes a stored procedure in the Azure SQL DB.
  • Azure SQL Table List: This Snap connects to Azure SQL DB, reads its metadata, and outputs a list of tables in a database.
  • Azure SQL Update: This Snap updates database columns associated with a table based on a given condition.
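
To make the Bulk Load Snap’s job more concrete, here is a minimal hand-coded sketch of the same pattern in Python using pyodbc. This is purely illustrative and is not how the Snap is implemented (the Snap uses the SQL Server JDBC BulkCopy API and requires no code at all); the server, database, credential, and table names below are placeholders.

```python
# Illustrative only: a hand-coded version of what the Azure SQL Bulk Load
# Snap automates. Server, database, credentials, and table are hypothetical.
import pyodbc

src = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=onprem-sql;DATABASE=sales;UID=etl_user;PWD=etl_password"
)
dst = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=sales_cloud;"
    "UID=cloud_user;PWD=cloud_password"
)

# Read the source rows from the on-premises database
rows = src.cursor().execute("SELECT id, region, amount FROM dbo.orders").fetchall()

# Stream batched inserts into Azure SQL DB instead of one round trip per row
cur = dst.cursor()
cur.fast_executemany = True
cur.executemany(
    "INSERT INTO dbo.orders (id, region, amount) VALUES (?, ?, ?)",
    [tuple(r) for r in rows],
)
dst.commit()
```

Connections, batching, retries, and schema handling are exactly the details the Snap hides behind a few settings.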

The Azure SQL Snap Pack supports two types of authentication:

  • SQL Authentication (Username and password)
  • ActiveDirectoryPassword (Azure Active Directory username and password)
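
As a rough guide to how these two options map onto a standard connection, here is a hedged sketch using Microsoft’s ODBC driver from Python; the Snap handles this through its account settings, and the server, database, and credential values below are placeholders.

```python
import pyodbc

# SQL Authentication: a login and password defined in the database itself
sql_auth = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=sales_cloud;"
    "UID=cloud_user;PWD=cloud_password"
)

# ActiveDirectoryPassword: an Azure Active Directory user signs in with AD credentials
aad_auth = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=sales_cloud;"
    "Authentication=ActiveDirectoryPassword;"
    "UID=user@contoso.com;PWD=ad_password"
)
```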

The following are some of the use cases where users can gain value from the Azure SQL Snap Pack:

  • On-premises database migrations (SQL Server, MySQL, or Oracle) to Azure SQL DB in the cloud.
  • Moving data from Azure SQL DB to Azure Data Lake, Redshift, or other cloud data warehouses for analytics.
  • Organizations that are strategically invested in the Microsoft Azure cloud, or in Microsoft technologies more broadly.

Azure SQL sample pipelines

Below is a sample pipeline with details. The goal is to move data stored in on-premises environments, such as files and SQL Server, to Azure SQL DB in the cloud. In the Snap settings, users can select an existing schema name and table name, or create a new table by enabling the corresponding option. The batch size can be tuned based on the data size and how quickly users want to load the data.

In the second pipeline, data is extracted from Azure SQL DB and moved to Azure Data Lake Store so users can run analytics on top of it. More information on Azure Data Lake can be found in my previous blog post.
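
For readers who want to see what this second flow involves under the hood, here is a hedged, hand-coded sketch in Python that extracts a table from Azure SQL DB, stages it as Parquet, and uploads it to Azure Data Lake Store. It is an illustration only, not the pipeline’s implementation; the store name, table, and credentials are hypothetical, and it assumes the pyodbc, pandas, pyarrow, and azure-datalake-store packages are installed.

```python
import pyodbc
import pandas as pd
from azure.datalake.store import core, lib, multithread

# Extract a table from Azure SQL DB (placeholder server and credentials)
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=sales_cloud;"
    "UID=cloud_user;PWD=cloud_password"
)
df = pd.read_sql("SELECT id, region, amount FROM dbo.orders", conn)

# Stage locally as Parquet (pandas uses pyarrow under the hood)
df.to_parquet("orders.parquet")

# Upload the file to Azure Data Lake Store so analytics jobs can pick it up
token = lib.auth(tenant_id="my-tenant-id",
                 username="user@contoso.com",
                 password="ad_password")
adls = core.AzureDLFileSystem(token, store_name="mydatalakestore")
multithread.ADLUploader(adls, lpath="orders.parquet",
                        rpath="/staging/orders.parquet", overwrite=True)
```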

A cloud strategy is imperative for organizations moving toward digital transformation, helping them achieve business agility and enable their workforce quickly. This includes moving data from legacy on-premises systems to the cloud. SnapLogic – an enterprise integration cloud platform – equips customers with the right Snaps, such as Azure SQL DB among more than 400 others, to easily move data to the cloud and meet their digital transformation goals.

Interested in more? Watch the Azure SQL Demo here.

For a complete list of features and functionality in our most recent release, please see the Summer release blog post.

Pavan Venkatesh is a Senior Product Manager at SnapLogic. Follow him on Twitter @pavankv.

Summer Release 2017: Focus on operational efficiency and artificial intelligence

By Dinesh Chandrasekhar

SnapLogic Enterprise Integration Cloud’s Summer Release for 2017 (v4.10) is generally available to customers today. We are excited to share a host of new features and enhancements that revolve around the themes of operational efficiency for digital transformation, continued innovation in the use of Iris Artificial Intelligence, and added support for cloud data warehousing and big data. While we stay focused on offering one of the finest and easiest user experiences in the iPaaS space, we wanted to extend that experience to the operational side as well.

A few key highlights from this release that focus on improving operational efficiencies and the use of Artificial Intelligence:

  • AI-Powered Pipeline Optimization: SnapLogic’s Iris technology now evaluates pipeline configuration and operation to recommend specific performance improvements based on machine learning, delivered to users via dashboard alerts and notifications.
  • Local Snaplex Dashboard: Ultra Pipelines have been used by our customers in mission-critical use cases that warrant uninterrupted data flow across multiple data sources. Now, we have introduced a local dashboard that provides full visibility into Ultra Pipelines. This offers customers increased confidence that their high-priority projects are running automatically without any interruption.
  • API Threshold Notifications: In a similar vein around dashboards, we have added monitoring capabilities that focus on operations and project development. Real-time alerts and notifications provide customers with an early warning system to improve monitoring of complex integration projects. Alerts can be set up to notify stakeholders when API limits are approached or exceeded, when users are added or removed from the platform, and when projects are modified.
  • Project-level Migration: We’ve been listening to your requests! With the fast pace at which integration projects are being executed and delivered across the enterprise, it is important that project lifecycles do not become a bottleneck. We’ve introduced a new feature that automates the migration of projects across departments and organizations, eliminating hand-coding and hard-coding and resulting in seamless project migrations.
  • Public API for Groundplex Installation: A new API is now available to automate the installation and configuration of on-premises SnapLogic nodes, eliminating the time and effort of manually downloading and installing additional nodes.

A few key highlights from this release that focus on enhancing our already comprehensive integration capabilities into cloud data warehousing and big data:

  • Microsoft Azure SQL Snap Pack: A brand new Snap Pack that allows customers to quickly and easily migrate on-premises databases to Microsoft Azure SQL Database for fast cloud-based reporting and analytics.
  • Teradata Select Snap: Expands the Teradata Snap Pack, allowing users to retrieve data from a Teradata database and display it in a table for easy reporting.
  • Parquet Writer Snap: Allows users to store data in specific partitions in Hadoop HDFS, improving the ability to create reports and derive meaningful insights from big data.
  • Parquet Reader/Writer Snap: Allows users to write Parquet files to Amazon S3, with support for AWS Identity & Access Management (IAM) credentials, in addition to HDFS, expanding SnapLogic’s support for cloud-based data warehousing via Redshift. (A rough open-source equivalent of a Parquet write to S3 is sketched below.)
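
The Snap itself is configuration-only, but as a rough open-source point of comparison, writing a Parquet file to S3 from Python with pyarrow and s3fs looks something like the sketch below. This is an assumption-laden illustration, not how the Snap works internally; the bucket path and data are placeholders, and credentials are expected to come from standard AWS IAM configuration.

```python
import pyarrow as pa
import pyarrow.parquet as pq
import s3fs

# Build a small Arrow table from in-memory data (placeholder values)
table = pa.table({"id": [1, 2, 3], "region": ["west", "east", "south"]})

# s3fs picks up AWS IAM credentials from the environment or instance profile
fs = s3fs.S3FileSystem()
pq.write_table(table, "my-bucket/analytics/orders.parquet", filesystem=fs)
```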

Watch the Summer 2017 Release webinar to learn all the features and enhancements across our Snaps, IoT, CRM, and more.

Dinesh Chandrasekhar is Director of Product Marketing at SnapLogic. Follow him on Twitter @AppInt4All.

The bigger picture: Strategizing your data warehouse migration

By Ravi Dharnikota

If your organization is moving its data warehouse to the cloud, you can be confident you’re in good company. And if you read my last blog post about the six-step migration process, you can be even more confident that the move will go smoothly. However, don’t pull the trigger just yet. You’ve got a bit more planning to do, this time at a more strategic level.

First, let’s recap the migration process for the data warehouse itself, which I covered in my last post. In that blog post, I broke down all the components of this diagram:

Data Warehouse Migration Process

Now, as you can see in the diagram below, the data warehouse migration process itself is part of a bigger picture of migration planning and strategy. Let’s take a look at the important pre-migration steps you can take to help ensure success with the migration itself.

Migration Strategy and Planning

Step 1: Define Goals and Business Case. Start the planning process with a clear picture of the business reasons for migrating your data warehouse to the cloud. Common goals include:

  • Agility in terms of both the business and the IT organization’s data warehousing projects.
  • Performance on the back end, to ensure timeliness and availability of data, and on the front end, for fast end-user query response times.
  • Growth and headroom to ease capacity planning; the elastic scalability of cloud resources removes much of that burden.
  • Cost savings on hardware, software, services, space, and utilities.
  • Labor savings from reduced needs for database administration, systems administration, scheduling and operations, and maintenance and support.

Step 2: Assess the current data warehouse architecture. If the current architecture is sound, you can plan to migrate to the cloud without redesign and restructuring. If architecturally sufficient for BI but limited for advanced analytics and big data integration, you should review and refine data models and processes as part of the migration effort. If the current architecture struggles to meet current BI requirements, plan to redesign it as you migrate to the cloud.

Step 3: Define the migration strategy. A “lift and shift” approach is tempting, but it rarely succeeds. Changes are typically needed to adapt data structures, improve processing, and ensure compatibility with the chosen cloud platform. Incremental migration is more common and usually more successful.

As I mentioned in my last blog post, a hybrid strategy is another viable option. Here, your on-premises data warehouse can remain operating as the cloud data warehouse comes online. During this transition phase, you’ll need to synchronize the data between the old on-premises data warehouse and the new one that’s in the cloud.

Step 4: Select the technology, including the cloud platform you’ll migrate to and the tools you’ll need for the migration. There are many types of tools and services that can be valuable:

  • Data integration tools are used to build or rebuild ETL processes to populate the data warehouse. Integration platform as a service (iPaaS) technology is especially well suited for ETL migration.
  • Data warehouse automation tools like WhereScape can be used to deconstruct legacy ETL, reverse engineer and redesign ETL processes, and regenerate ETL processes without the need to reconstruct data mappings and transformation logic.
  • Data virtualization tools such as Denodo provide a virtual layer of data views to support queries that are independent of storage location and adaptable to changing data structures.
  • System integrators and service providers like Atmosera can be helpful when manual effort is needed to extract data mappings and transformation logic that is buried in code.

Using these tools and services, individually or in combination, can make a remarkable difference in your data warehouse migration, serving to speed and de-risk the process.

Step 5: Migrate and operationalize; start by defining test and acceptance criteria. Plan the testing, then execute the migration process to move schema, data, and processing. Execute the test plan and, when successful, operationalize the cloud data warehouse and migrate users and applications.

Learn more at SnapLogic’s upcoming webinar

To get the full story on data warehouse cloud migration, join me for an informative SnapLogic webinar, “Traditional Data Warehousing is Dead: How digital enterprises are scaling their data to infinity and beyond in the Cloud,” on Wednesday, August 16 at 9:00am PT. I’ll be presenting with Dave Wells, Leader of Data Management Practice, Eckerson Group, and highlighting tangible business benefits that your organization can achieve by moving your data to the cloud. You’ll learn:

  • Practical best practices, key technologies to consider, and case studies to get you started
  • The potential pitfalls of “cloud-washed” legacy data integration solutions
  • Cloud data warehousing market trends
  • How SnapLogic’s Enterprise Integration Cloud delivers up to a 10X improvement in the speed and ease of data integration

Register today!

Ravi Dharnikota is Chief Enterprise Architect at SnapLogic. Follow him on Twitter @rdharn1

SnapLogic seeks partners to help drive LATAM growth

By Carlos Hernandez Saca

I am very excited to announce that SnapLogic is now seeking partners across Latin America (LATAM) – Mexico, Brazil, Argentina, Colombia, and Chile – to support our global expansion. This news follows our successful launch of SnapLogic’s global/US partner program, SnapLogic Partner Connect, just over a year ago.

We’ve got momentum. We’ve seen first-hand that SnapLogic dramatically transforms how companies integrate data, applications, and devices for digital business. In December 2016, SnapLogic secured an additional $40 million in funding to accelerate its global expansion. And we see the channel as an important driver of this expansion.

We’re making this call for partners to help us build a rich ecosystem of technology, consulting, and reseller partners in the LATAM market so that joint customers see the benefits of adopting software-as-a-service (SaaS) applications, cloud data management, and big data technologies. SnapLogic’s technology enables this by accelerating integration and driving digital transformation in the enterprise.

The SnapLogic Partner Connect program includes several categories, reflecting the breadth of the SnapLogic platform and depth of adjacent partner solutions. Partner types include:

  • Technology Partners – These are providers of technologies with which SnapLogic integrates, and include partners such as Salesforce, Workday, AWS, Microsoft, Cloudera, and Snowflake.
  • OEM/Managed Services Partners – OEM partners integrate SnapLogic into their product or service offerings. OEM partners include Manthan, Reltio, Planview, and Verizon.
  • Global Systems Integrator Partners – SnapLogic’s GSI partners implement SnapLogic as part of comprehensive digital transformation projects at some of the world’s leading enterprises. These partners include firms such as Accenture, PwC, Cognizant, HCL, Tata Consultancy Services, and Tech Mahindra.
  • Consulting Partners – These are organizations that provide integration consulting and best-practices services, and include partners such as Interworks, Softtek, and Semantix Brazil.
  • Reseller Partners – The roster of SnapLogic international resellers is growing rapidly, increasing SnapLogic’s presence in new markets worldwide. Resellers include MatrixIT in Israel, KPC in France, Sogeti in the United Kingdom, Systemation in the Netherlands, Softtek in Mexico, and DRZ and Semantix in Brazil.

SnapLogic CEO Gaurav Dhillon says, “Extending our partners program into the LATAM channel is a crucial part of our business growth strategy. Working alongside strategic channel partners will provide an exciting opportunity to reach new LATAM customers and help them realize their digital transformation goals through connected data and applications.”

And I agree. The market opportunity for data, application, and device integration is large and growing, representing a unique opportunity for our partners in LATAM. As customers across the region increasingly use our modern, cloud-first platform to integrate and manage data across cloud, big data, IoT, mobile, social, and more, we look forward to working with new partners in LATAM who can help accelerate their digital transformation efforts.

SnapLogic’s self-service integration platform was recognized as a Leader in Gartner’s 2017 Magic Quadrant for Enterprise Integration Platform as a Service (iPaaS). Global enterprise customers such as Adobe, AstraZeneca, Box, Capital One, GameStop, Verizon and Wendy’s rely on SnapLogic to automate business processes, accelerate analytics and drive digital transformation. If you’re interested in partnering with us, we want to hear from you.

Carlos Hernandez Saca is the Sales & Partnership Area Director for Latin America at SnapLogic. You can follow him on Twitter @rivaldo71.

Our live demo series is back

By Rich Dill

By popular demand, we’re bringing back a regular demo series where you can get a first-hand look at SnapLogic’s Enterprise Integration Cloud in action, hear from a technology expert, and have your questions answered … live.

If you want to learn more about what SnapLogic can bring to your application and data integration plans, this demo is for you. Join me Wednesday, August 9 from 10:30AM – 11:00AM PT for the kickoff of our Live Demo Series.

In this month’s demo, I will be highlighting the multiple reasons why the Enterprise Integration Cloud is the best option for quickly connecting and efficiently managing legacy and modern data, cloud application, and API integration needs.

I will be providing an overview of iPaaS and SnapLogic along with a detailed look inside the industry’s first AI-powered integration technology, Iris. I will also highlight how to connect an application to a database in minutes.

Each month we will showcase our industry-leading integration cloud, so if you can’t attend this demo, there will be others for you to join, each with a different theme.

Register now for the SnapLogic Live Demo Series

Rich Dill is Enterprise Solutions Architect at SnapLogic. You can follow him on Twitter @richdill.

Get your game plan on: Data warehouse migration to the cloud

By Ravi Dharnikota

You’ve decided to move your data warehouse to the cloud, and want to get started. Great! It’s easy to see why – in addition to the core benefits I wrote about in my last blog post, there are many more benefits associated with cloud data warehousing: incredibly fast processing, speedy deployment, built-in fault tolerance and disaster recovery and, depending on your cloud provider, strong security and governance.

A six-step reality check

But before you get too excited, it’s time to take a reality check; moving an existing data warehouse to the cloud is not quick, and it isn’t easy. It is definitely not as simple as exporting data from one platform and loading to another. Data is only one of the six warehouse components to be migrated.

Tactically and technically, data warehouse migration is an iterative process and needs many steps to migrate all of the components, as illustrated below. Here’s everything you need to consider in migrating your data warehouse to the cloud.

1) Migrating schema: Before moving warehouse data, you’ll need to migrate table structures and specifications. You may need to make structural changes as part of the migration; indexing and partitioning, for example, may need to be rethought. (A small illustration of schema migration follows the diagram below.)

Data Warehouse Migration Process
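
Purely as an illustration of what step 1 involves (not a SnapLogic feature), here is a hedged Python sketch that uses SQLAlchemy reflection to read an existing table definition from a legacy warehouse and re-emit its CREATE TABLE statement for a target dialect. The connection string, table name, and the choice of mssql as the target are all placeholders.

```python
from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.schema import CreateTable
from sqlalchemy.dialects import mssql

# Reflect an existing table definition from the legacy warehouse
legacy = create_engine("postgresql://etl_user:etl_password@legacy-dw/warehouse")
metadata = MetaData()
orders = Table("orders", metadata, autoload_with=legacy)

# Re-emit the DDL against the target dialect; review indexing and
# partitioning choices by hand before running it on the cloud platform
ddl = CreateTable(orders).compile(dialect=mssql.dialect())
print(ddl)
```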

2) Migrating data: Moving very large volumes of data is process intensive, network intensive, and time-consuming. You’ll need to map out how long the migration will take and whether you can accelerate it. You may need to restructure data as part of schema migration and transform data as part of the data migration. Alternatively, can you transform in-stream, or should you pre-process and then migrate?

3) Migrating ETL: Moving data may be the easy part when compared to migrating ETL processes. You may need to change the code base to optimize for platform performance and change data transformations to sync with data restructuring. You’ll need to determine if data flows should remain intact or be reorganized. As part of the migration, you may need to reduce data latency and deliver near real-time data. If that’s the case, would it make sense to migrate ETL processing to the cloud, as well? Is there a utility to convert your ETL code?

4) Rebuilding data pipelines: With any substantive change to data flow or data transformation, rebuilding data pipelines may be a better choice than migrating existing ETL. You may be able to isolate individual data transformations and package them as executable modules. You’ll need to understand the dependencies among data transformations to construct an optimal workflow, and to weigh the advantages you may gain – performance, agility, reusability, and maintainability – by rebuilding ETL as modular data pipelines using modern, cloud-friendly technology. (A toy illustration of this modular approach follows.)
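
As a toy illustration of the modular approach described above, the sketch below packages two hypothetical row-level transformations as small, reusable functions and composes them into a pipeline. The field names and logic are invented for the example; the point is that each step is isolated, its dependencies are explicit, and steps can be reused or reordered.

```python
from functools import reduce

def standardize_region(row):
    # Normalize a text field so downstream steps see consistent values
    row["region"] = row["region"].strip().lower()
    return row

def add_net_amount(row):
    # Derive a new field from existing ones; depends only on its input row
    row["net_amount"] = row["amount"] - row.get("discount", 0)
    return row

def build_pipeline(*steps):
    # Compose independent steps into one callable, keeping ordering explicit
    return lambda row: reduce(lambda acc, step: step(acc), steps, row)

orders_pipeline = build_pipeline(standardize_region, add_net_amount)
print(orders_pipeline({"region": "  West ", "amount": 120.0, "discount": 20.0}))
```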

5) Migrating metadata: Source-to-target metadata is a crucial part of managing a data warehouse; knowing data lineage and being able to trace and troubleshoot is critical when problems occur. How readily will this metadata transfer to a new cloud platform? Are all of the mappings, transformation logic, dataflows, and workflows locked in proprietary tools or buried in SQL code? You’ll need to determine whether you can export and import this metadata, or whether you’ll have to reverse engineer it or rebuild it from scratch.

6) Migrating users and applications: The final step in the process is migrating users and applications to the new cloud data warehouse, without interrupting business operations. Security and access authorizations may need to be created or changed, and BI and analytics tools should be connected. To do this, what communication is needed and with whom?

Don’t try to do everything at once

A typical enterprise data warehouse contains a large amount of data describing many business subject areas. Migrating an entire data warehouse in a single pass is usually not realistic. Incremental migration is the smart approach when “big bang” migration isn’t practical. Migrating incrementally is a must when undertaking significant design changes as part of the effort.

However, incremental migration brings new considerations. Data location should be transparent from a user point of view throughout the period when some data resides in the legacy data warehouse and some in the new cloud data warehouse. Consider a virtual layer as a point of access to decouple queries from data storage location.

A hybrid strategy is another viable option. With a hybrid approach, your on-premises data warehouse can remain operating as the cloud data warehouse comes online. During this transition phase, you’ll need to synchronize the data between the old on-premises data warehouse and the new one that’s in the cloud.

Cloud migration tools to the rescue

The good news is, there are many tools and services that can be invaluable when migrating your legacy data warehouse to the cloud. In my next post, the third and final in this series, I’ll explore the tools for data integration, data warehouse automation, and data virtualization, and system integrator resources that can speed and de-risk the process.

Learn more at SnapLogic’s upcoming webcast, “Traditional Data Warehousing is Dead: How digital enterprises are scaling their data to infinity and beyond in the Cloud,” on Wednesday, August 16 at 9:00am PT. I’ll be presenting with Dave Wells, Data Management Practice Lead, Eckerson Group, and highlighting tangible business benefits that your organization can achieve by moving your data to the cloud. You’ll learn:

      • Practical best practices, key technologies to consider, and case studies to get you started
      • The potential pitfalls of “cloud-washed” legacy data integration solutions
      • Cloud data warehousing market trends
      • How SnapLogic’s Enterprise Integration Cloud delivers up to a 10X improvement in the speed and ease of data integration

Sign up today!

Ravi Dharnikota is Chief Enterprise Architect at SnapLogic. Follow him on Twitter @rdharn1