Four Critical iPaaS Requirements You Can Ignore Only At Your Own Peril (Part 2)
In my last post I reviewed the classic integration requirements and outlined four new requirements that are driving the demand for integration platform as a service (iPaaS) in the enterprise:
- Resilience in the face of constant change
- Fluidity in hybrid deployments
- Non-existent lifecycle management of the platform
- Future-proofing for the world of social, mobile, analytics, cloud, and internet of things (SMACT)
In this post, I'll review requirement #2: Fluidity in Hybrid Deployments.
Just as data structure changes are a common occurrence (as discussed in the Resilience post), the introduction or retirement of applications is something most IT organizations are dealing with in the enterprise. Software as a service (SaaS) continues to fuel worldwide software growth, and infrastructure as a service (IaaS) and platform as a service (PaaS) providers such as Amazon Web Services (AWS) offer customers the flexibility to build up systems such as Relational Database Service (RDS) or Redshift and tear them down on short cycles. Digital marketers are always looking for new channels to expand their addressable markets and additional data sources to enrich their audience profiles. Change is the only constant in the agile enterprise. The impact of these changes on the integration layer is that it is expected to seamlessly transition from connecting on-premise systems to cloud systems, or vice versa, while ensuring a high degree of business continuity.
Integration technologies built in the last decade were not designed for the fluidity that today's hybrid IT architecture demands. Even though many legacy integration solution providers offer a dual deployment model – one on the premises and one for the cloud – the two are typically not peers when it comes to management, monitoring, and configuration (not to mention functionality). Here are some common issues that customers will face with what I call “franken-hybrid” integration technologies:
- Deploying in the cloud is not the same as deploying the on-premise engine. The provider may impose a different set of preparatory and configuration requirements on each environment. For example, settings that were traditionally local to the on-premise product, such as connection pooling or driver locations, still must be applied locally and by hand for each runtime installation. This approach is manual, cumbersome, and error-prone.
- Dual management and monitoring dashboards – one for the on-premise execution engine and another for the cloud engine – mean the administrator must manually stay on top of two environments. This is both time-consuming and risky.
- The on-premise engine was built for connecting on-premise systems and often requires many network ports to be open in order to receive data processing instructions or send monitoring information back to the server. If the monitoring and metadata servers are running in the cloud, customers are often asked to punch holes in their network firewalls just to make all of the iPaaS functionality work.
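The firewall point above is why cloud-native runtimes typically reverse the direction of communication: the on-premise agent initiates outbound connections to the cloud control plane instead of listening on inbound ports. The following sketch is purely illustrative – the task payload, URL, and stub "control plane" server are invented for the example – but it shows the shape of the pattern:

```python
# Illustrative sketch of the outbound-only ("agent dials out") pattern.
# A stub HTTP server stands in for the cloud control plane; the on-premise
# agent makes a single OUTBOUND request to fetch its instructions, so no
# inbound firewall port ever needs to be opened.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubControlPlane(BaseHTTPRequestHandler):
    """Stands in for the cloud-side monitoring/metadata server."""
    def do_GET(self):
        # The agent polls for work; the control plane replies with a task.
        body = json.dumps({"task": "sync-customers"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # silence per-request logging

def poll_for_task(control_plane_url: str) -> dict:
    """On-premise agent: one outbound call fetches instructions."""
    with urllib.request.urlopen(control_plane_url, timeout=5) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), StubControlPlane)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    task = poll_for_task(f"http://127.0.0.1:{server.server_port}/tasks")
    print(task)  # prints {'task': 'sync-customers'}
    server.shutdown()
```

Because every connection is initiated from inside the network, only standard outbound HTTPS needs to be allowed – something most corporate firewalls already permit.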
A truly modern iPaaS solution must deliver what I call “fluidity” in a hybrid deployment. Specifically, there are two things you need to watch out for while making your iPaaS purchasing decision:
- When looking at cloud integration solutions from legacy vendors, look under the hood to ensure they’re not simply “cloud washing” their on-premise product by hosting it in the cloud. Even if the same code base runs both in the cloud and inside the firewall, and can be targeted via a drop-down option at deployment time, you will run into management, monitoring, and configuration issues. The two runtimes need separate management and monitoring consoles because they are not peers: the on-premise runtime cannot communicate with the monitoring server across the firewall. Additionally, on-premise runtimes were designed around configuration files that are local to each installation rather than centrally managed. Managing and configuring such hybrid environments becomes a recurring cost with every software upgrade, and it adds up significantly.
- Similarly, don’t fall for the “map once and run everywhere” trap. There is little value in mappings that can run everywhere, because mappings are typically very specific to the sources and targets being integrated. More often than not, they are not transferable from on-premise to the cloud, since typical on-premise endpoints (Oracle ERP, SAP, Teradata, etc.) are very different from cloud endpoints (Salesforce, Workday, Redshift, etc.). This renders the “run everywhere” story quite ineffective. The other issue is that “anywhere” actually masks a variety of distinct products the vendor is trying to make appear similar – and a set of distinct products implies management and monitoring headaches. Lastly, mappings are a one-time cost, so reusability doesn’t buy you much; it’s the recurring management and monitoring costs that get you anyway.
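The configuration concern in the first point above – locally managed config files that drift apart across runtime installations – can be made concrete with a small sketch. All runtime names, file paths, and settings here are invented for illustration; the point is that every copy an administrator maintains by hand is a copy that can silently diverge:

```python
# Hypothetical illustration of configuration drift across on-premise
# runtimes. With one locally managed config per installation, every
# upgrade or settings change must be replayed by hand on each runtime,
# and copies inevitably drift from the desired state.

def find_config_drift(central: dict, runtimes: dict) -> dict:
    """Return, per runtime, the settings that differ from the centrally
    managed desired state -- the gap an administrator must close manually.
    Each diff maps a setting name to (local value, desired value)."""
    drift = {}
    for name, local in runtimes.items():
        diffs = {key: (local.get(key), desired)
                 for key, desired in central.items()
                 if local.get(key) != desired}
        if diffs:
            drift[name] = diffs
    return drift

# Desired state, plus two runtimes: one in sync, one that missed an upgrade.
central_config = {"pool_size": 50, "jdbc_driver": "/opt/drivers/ojdbc8.jar"}
runtime_configs = {
    "dc-east-runtime": {"pool_size": 50,
                        "jdbc_driver": "/opt/drivers/ojdbc8.jar"},
    "dc-west-runtime": {"pool_size": 20,
                        "jdbc_driver": "/opt/drivers/ojdbc7.jar"},
}
print(find_config_drift(central_config, runtime_configs))
# prints {'dc-west-runtime': {'pool_size': (20, 50),
#         'jdbc_driver': ('/opt/drivers/ojdbc7.jar', '/opt/drivers/ojdbc8.jar')}}
```

A centrally managed platform makes this entire reconciliation loop unnecessary, which is exactly the recurring cost the bullet above warns about.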
It is because of the challenges listed above that a software-defined integration platform as a service has become a key enabler of “enterprise cloudification.” A high degree of integration fluidity translates into a higher degree of business agility.
Stay tuned for the next post in this iPaaS series, which will review the importance of lifecycle management and the right approach to cloud integration as you prepare for the enterprise shift to social, mobile, analytics, cloud and the internet of things (SMACT). You may also enjoy this post about why ESBs are the wrong approach to cloud services integration.