Self-Service Big Data and AI/ML at Intelligent Data Summit
Recently, I presented at the Intelligent Data Summit, a virtual event hosted by IDevNews. I was joined by SoftwareAG, MapR, and one of our technology partners, Reltio, among others. This timely online event focused on all things AI/ML, data, IoT, and other modern technologies, and it was a pleasure to be a part of it.
During my presentation, “Self-Service Big Data and AI/ML. Reality or Myth?” I dug into how more and more organizations are becoming data-driven, leveraging their data to obtain the insights they need to make key decisions such as launching products at the right time, anticipating their customers’ needs, and entering new markets successfully. Being data-driven can provide a competitive advantage that translates into revenue. “By 2021, insights-driven business will steal $1.8 trillion a year in revenue from competitors that are not insights-driven,” according to Forrester. However, becoming data-driven requires empowering employees across the business, not just the IT department, to make use of technologies such as Big Data, artificial intelligence (AI), and machine learning (ML).
Here are some of my presentation’s takeaways:
- Digital Transformation is being held back by legacy platforms – Most organizations are undergoing Digital Transformation, but technology is holding them back. A Vanson Bourne study found that the top digital transformation limitations include 1) the complexity of using multiple technologies, 2) a lack of resources, and 3) reliance on legacy database technologies. An iPaaS is purpose-built to solve many of the data-related challenges organizations face in their quest to enhance customer, partner, and employee experiences and improve financial performance. By definition, an iPaaS connects different endpoints such as applications, data, IoT, and processes, including on-premises/legacy data sources, so valuable information does not reside in a silo. Ideally, the iPaaS is intuitive and easy to use so employees with varying skillsets can be productive quickly.
- Big Data comes with even bigger problems – Big Data, a promising technology for storing the data necessary to drive insights, comes with even bigger problems. For example, a Gartner survey analysis found that 90% of Data Lake projects are delayed and over budget, while 60% fail outright. Many companies move their on-premises data to the cloud to reduce CapEx but quickly find that their OpEx remains high. Why? Specialized skills are required to manage Spark development, not to mention the significant time needed to configure processing clusters and spin them up and down, which can happen frequently. These organizations also feel the talent shortage and the frustration of being unable to give much of the organization access to the data lake because of the skills gap.
- Machine Learning should not be only for Data Scientists – An increasing number of organizations realize that descriptive analytics isn’t enough and are leveraging ML for predictive and prescriptive analytics. However, they struggle to build and deploy ML-based models because of a global shortage of Data Scientists and a lack of usable data. To solve this problem, they need a no-code paradigm that allows analysts and developers to prepare the data they’ve amassed into something accessible that can be used to train models with different ML algorithms. Comparing the results, the most accurate model can then be used to predict events when real-life data is fed into it.
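To make the last takeaway concrete, here is a minimal sketch of the "try several algorithms, keep the most accurate" workflow a no-code platform performs behind the scenes. The dataset, algorithm choices, and names below are illustrative assumptions using scikit-learn, not part of the presentation or any specific product:

```python
# Sketch: train several candidate algorithms on prepared data,
# then promote the most accurate model for predictions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the prepared business data.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Candidate algorithms a platform might evaluate automatically.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=42),
    "random_forest": RandomForestClassifier(random_state=42),
}

# Train each model and score its accuracy on held-out data.
scores = {name: model.fit(X_train, y_train).score(X_test, y_test)
          for name, model in candidates.items()}

# The most accurate model is the one used on real-life data.
best_name = max(scores, key=scores.get)
best_model = candidates[best_name]
print(f"best model: {best_name} (accuracy {scores[best_name]:.2f})")
```

A no-code tool hides these steps behind a visual interface, but the underlying loop of training, comparing accuracy, and selecting a winner is the same.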
If you want to learn how your company can take advantage of Big Data and build ML models and more, all without a single line of code, here are some resources to get you started:
- A Buyer’s Guide to Modern Application and Data Integration
- Easing the Pain of Big Data: Modern Enterprise Data Architecture
- Extending the Value of Microsoft Dynamics CRM
- SnapLogic vs. Mulesoft Comparison Guide