Video

SnapLogic Success Stories From Business Requirement to Deployed AI Solutions [Integreat 2025]

Transcript:

My colleague, Chris Ward, who runs the Center of Excellence/Center of Enablement, we expand that acronym both ways, and I thought we would tell you three customer stories, from different industries around the world and try to show you how they achieved their results, how they solved their particular business problems, and what that would look like in practice.

So the three customers that we are going to talk about are Aptia, which is a healthcare marketplace; Accor, which is a travel membership program; and Drax, renewable energy generation. And we'll have time for questions and answers at the end as well.

But if we start with Aptia. So Aptia is a long-time SnapLogic customer.

Pension and benefits administration is their business. So they have something like 7,000,000 people that they cover with these health care benefits across 1,100 companies.

They operate in the U.S. and the UK, but also India and Portugal. And so they run this marketplace, this portal where people can see what benefits they're entitled to.

They can request various services. And so onboarding new people, new companies into their system is obviously a key part of their business and then being able to process the interaction between the people, the services they’re entitled to, and the providers that they’re dealing with.

So first of all, they can't have any data loss. They can't have a situation where someone has signed up for health care and then they're not receiving whatever it is.

There’s also a lot of manual data entry that’s involved in this process. It’s only five to eight minutes per participant, but when you multiply that by hundreds and thousands of participants, that adds up fast.

And then depending on the jurisdiction and type of healthcare, there might be payroll deductions to manage. And again, those get updated over time.

So every month, they're making 5,000 to 10,000 different record changes to keep track of all of that as people's salaries and entitlements and whatever change over time. So basically what this boils down to is a massive data challenge that Aptia is dealing with.

So high volumes of data flows: this includes all sorts of data about the people who are covered, the enrollment data, their entitlements. And that's just the data that's internal to Aptia; there's also the integration with all of the health care providers and the employer systems that Aptia has to work with.

So integrations that go beyond the firewall, they're always the fun ones. Payroll is client-specific, and all of this is also subject to compliance regulations, which are then different from one country to another.

And they can't mess around here. On the personal level, you can't mess with people's health care.

They tend not to appreciate that very much. But also there’s all the compliance, the legal oversight that is required in the health care and financial space.

So there's a whole lot of process just to ensure that all of this is working well. And what they were able to do with SnapLogic is deliver basically a similar story to what we heard from Spotify.

So they were able to get a lot of people building pipelines in there. They did actually manage to get to the point that subject matter experts were building pipelines directly.

There are 30-odd people building there, and that has reduced that data entry time by 99%. It's basically eliminated the data entry.

It’s only by exception that someone has to actually go and set hands to keyboard. They’ve also started to bring in data integrators for higher level tasks for those integrations with other systems, especially the ones outside the company.

And they’ve expanded to five different functional groups across the company because of the success. So, so far so good.

But how did they actually do that? So Chris, why don’t you peel back the curtain and we’ll talk about what that looked like in practice?

Absolutely. So if we look to kind of break down the operational workflow as it existed within Aptia, we really think of it as an email processing workflow.

So they had teams manually monitoring email inboxes for various different plan types. So they would take the email in, download the attachments, manually extract the data and insights from those attachments and then look to transpose that into an Excel template.

So we sort of took a step back and looked at what are the four pillars here that we need to accommodate in terms of the solution. So firstly, it's that continuous monitoring automation around the arrival of the email landing in the inbox, the extraction and parsing of the attachments on those emails, and then breaking down the different mapping rules related to each of those attachments as well.

And then what we had was then a structured JSON format, so a consistent format that we could work with that then made it really easy to map into a template that the team could then take forward and present to the business teams. So the key sort of things to point out here is a lot of what we see upfront is just extraction of data.

And then we’ve got the mapping rules, which is where the LLM comes in to support with some of that activity. And then transposing that onto the Excel spreadsheet as well, we’re using the LLM to help with that.

And then so we’re kind of closing the loop here. So we’ve got a sort of end-to-end flow of the email from start to finish.

We're looking to close off and mark that initial email as read as a result of that as well. So in terms of how that's represented in SnapLogic, we've got a pipeline here that you can see on the screen that's really just breaking down those pillars into individual Snaps.

So a few things to point out. We’ve got standard out-of-the-box snaps here for monitoring that email inbox at the start there.

We've got a router snap, which allows us to take the business rules, that routing logic for each of the different plan types and attachments, and then impose some different logic there. And then towards the end, we've got the generation of the plan summary, which is the Excel template that we end up with, and then closing the loop here by finishing with the generation of the emails that get sent to the team as well.
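For readers following along, here is a minimal Python sketch of those same four pillars. The actual Aptia solution was built from SnapLogic Snaps rather than code, so treat this purely as an approximation; the call_llm helper, the mailbox credentials and the fields being extracted are all hypothetical.

```python
import email
import imaplib
import json

from openpyxl import Workbook


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM endpoint you use."""
    raise NotImplementedError("plug in your model provider here")


def process_inbox(host: str, user: str, password: str) -> None:
    """Monitor an inbox, extract plan data from attachments, build an Excel summary."""
    mail = imaplib.IMAP4_SSL(host)
    mail.login(user, password)
    mail.select("INBOX")
    _, ids = mail.search(None, "UNSEEN")              # pillar 1: continuous monitoring
    for msg_id in ids[0].split():
        _, data = mail.fetch(msg_id, "(RFC822)")
        msg = email.message_from_bytes(data[0][1])
        for part in msg.walk():                        # pillar 2: extract and parse attachments
            if not part.get_filename():
                continue
            text = part.get_payload(decode=True).decode(errors="ignore")
            raw = call_llm(                            # pillar 3: mapping rules via the LLM
                "Extract the plan name, participant count and effective date "
                "from this document and return JSON only:\n" + text
            )
            record = json.loads(raw)
            wb = Workbook()                            # pillar 4: plan summary (Excel template)
            ws = wb.active
            ws.append(list(record.keys()))
            ws.append(list(record.values()))
            wb.save(f"plan_summary_{msg_id.decode()}.xlsx")
        mail.store(msg_id, "+FLAGS", "\\Seen")         # close the loop: mark the email as read
```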

Okay. Excellent.

So that’s how this sort of thing works. So we have some standard components.

We have some best practices. We have some patterns and templates that we can put into practice.

And that is able to deal with what was previously a very manual process and automate it so those people could go and concentrate on something else like serving people with their health care needs a bit better. So let’s talk about a different example and I hope you’ll start to see themes develop here.

This also matches up with what Ralph from Boehringer Ingelheim was talking about earlier. So Accor Plus is a travel membership program that operates in the Asia Pacific region.

So Accor operates a number of hotel brands. They have 25-odd hotel brands, from the Fairmont, Sofitel and Raffles at the high end to the sort of place where I might stay, the Ibis, the Mercure or the Tribe, these things.

But in Asia Pac, they also have the extra fun of operating this across 20 different countries. And obviously, in each of these different countries, across all of these brands, the matrix of the skills and job descriptions and remuneration and career advancement etc., etc. was very, very complex and not remotely aligned.

So they were trying to bring this into some sort of harmonization with a single group-wide initiative. So across the region, they're going to have consistency, and this was going to help them achieve the Hay Job Evaluation certification, which is a certification basically of this standardization and predictability of career progression and such.

So the big problem that the Accor group was dealing with was that a lot of these legacy job descriptions were stored as unstructured data. So we're talking PDFs, Word docs, this sort of thing, even scans of printouts, possibly in different languages as well.

And so the role, responsibilities, competencies, leadership expectations, all completely different, defined differently, described differently and materially different even across the different regions from one nation to the next. And they have this strategic drive to standardize all of this to make sense of it so that this would facilitate their recruiting, their onboarding, their performance alignment so someone would know I’m a grade X whatever, what does that actually mean in this country versus that country.

It should be at least somewhat comparable. They also have this Heartist culture, so they're trying to standardize the language and job descriptions and things like that.

Again, just in the process of standardizing all of these systems. And they were trying to do all of this by hand, which just does not scale.

There are delays, there are inconsistencies, there's lots of rework required. And again, you're messing with people's livelihood, you're messing with their pay, with their career progression, their scope for promotion, or, from a corporate perspective, your ability to retain your trained staff rather than have them leave and go to competitors, which is not great.

So you need a scalable process to support all of this. So how did they do it? Yes. So similar to the Aptia example, we looked at kind of breaking down the problem space.

So rather than throwing everything at the large language model and expecting it to solve the problem, we looked at what are the key elements we need to consider here. So, leading with the extraction of the data, which is self-explanatory, we wanted to pull out those legacy job descriptions from Google Drive.

We actually transitioned them into an S3 bucket. They wanted to better leverage their AWS services for other needs based on that data.

We then read that job description in, parse it and then solve for the first order, which is the standardization of the job matrix. So they had, as Dominic mentioned, varying levels of consistency of data across the different geos and in the way that they ordered and structured that data.

So it was: let's find a way to standardize that. Obviously, we know the LLMs excel at that type of work.

So again, coming to a sort of formalized JSON schema there for that data. And then we look to map in and augment with the company culture that Dominic talked about and the new job role mapping and new job description schema.

And also, we help guide the LLM on what good looks like in terms of how we think about writing a good job description. So wrapping all that up into a prompt, the end result is we have a transformed job description.

And then the last step here again is how do we present that in a way that it can be shown to executives and management. So we use the LLM to generate a PowerPoint presentation with that new job description and then map that into SuccessFactors, which is their internal HR system there as well.

So really solving for the end to end process, reducing that time and yes, being able to move faster. So the pipeline, again, very, very similar, reading the data, parsing it initially.

Then we look to augment the additional data around, yes, the company culture, the new job description schema. We're informing the LLM with a system prompt, giving it a specific task, a role, to then transform it into the new job description, and then at the end there, calling out to a Lambda function and uploading that data into S3.
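As a rough illustration of that step, here is a minimal Python sketch of the transform-and-upload pattern. The real pipeline uses SnapLogic Snaps and a Lambda function, so this is only an approximation; the call_llm helper, the schema fields, the bucket layout and the key naming are all assumptions.

```python
import json

import boto3  # assumes AWS credentials are already configured in the environment


def call_llm(system_prompt: str, user_content: str) -> str:
    """Hypothetical wrapper around whichever model endpoint the pipeline calls."""
    raise NotImplementedError("plug in your model provider here")


# Illustrative target schema; the real job description schema is richer.
NEW_JD_SCHEMA = {
    "title": "",
    "grade": "",
    "responsibilities": [],
    "competencies": [],
    "leadership_expectations": [],
}

SYSTEM_PROMPT = (
    "You are an HR specialist. Rewrite the legacy job description into the schema "
    "below, align the wording with the company culture guidelines, and return JSON only.\n"
    f"Schema: {json.dumps(NEW_JD_SCHEMA)}"
)


def transform_job_description(bucket: str, key: str) -> dict:
    """Read a legacy job description from S3, standardize it, and write the result back."""
    s3 = boto3.client("s3")
    legacy_text = (
        s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode(errors="ignore")
    )
    transformed = json.loads(call_llm(SYSTEM_PROMPT, legacy_text))
    s3.put_object(
        Bucket=bucket,
        Key=key.replace("legacy/", "standardized/") + ".json",  # hypothetical key layout
        Body=json.dumps(transformed, indent=2).encode(),
    )
    return transformed
```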

And I think this one’s interesting in particular because I don’t know if you noticed one of the outputs is a PowerPoint presentation. So the classic workflow before would have been okay, we’ve got maybe everything in SuccessFactors and then someone’s generating something and copy pasting it into PowerPoint as a classic example of make work.

That's work someone with domain expertise and access to the systems would previously have had to do. But if SnapLogic can do it, that's not going to replace that person; that person's got better things to do with their time, is the point.

And if we can free them up from that drudgery, that’s where you get the impact, the benefit. So let’s talk a third example on our whirlwind tour of the world.

So we had a U.S. customer and an Asia Pacific customer. This one is closest to home. It's Drax, a renewable energy generator right here in the UK. So they are focused primarily on renewable power generation, but they also have significant operations around sustainable biomass production, and also carbon capture and storage.

So really interesting company as we keep on hearing about how the LLMs are consuming more and more power. It’s good to hear that someone is planning on providing that power in a way that’s clean and sustainable.

And so they had the idea of using AI, not for any of that, but to manage their timesheets. But timesheets are a significant problem because tracking people’s work, if you’ve ever tried to do this, I mean, probably all of us have been at least on the receiving end of tracking our time, you have to go and enter your time into some sort of timesheet portal, but there are also calendar events.

There’s probably a spreadsheet somewhere. There’s some sort of domain system.

In this case, it was Azure DevOps: tasks, sprints, tickets, the DevOps units of work, projects and cost center allocations, and then the catch-all bucket, everything else. That's where the spreadsheets and the scraps of paper on the desk and whatnot all come in.

So all of that had to be gathered because you have to pay people. But if you're doing it by hand, it's very inefficient, it's inconsistent, and people are going to be unhappy with the results if you don't pay them enough, or you could end up overpaying, and then you've wasted a bunch of time.

The delays in doing this also themselves are expensive. It’s not just expensive in terms of the time of the people doing it.

It’s expensive in terms of the results because you’re not able to close the books, you’re not able to do your forecasting for the next financial period. That means you might not get the budget that you need for whatever you’re trying to do next.

The knock-on impacts are huge. So you were directly involved in this one.

Why don’t you tell us how it works?

I was, and I believe you've got some of the Drax folks in the room as well, so very close to home. So, yes, this was an interesting one.

So sort of coming from a professional services background, I could very much relate to the sort of work that we were looking to transform here. Every Friday, I was sort of submitting my time that I spent on sort of working on different projects.

Very time consuming, and the last thing you want to be doing on a Friday afternoon. So yeah, for this, the approach was really to go into the different source systems.

So using the Microsoft Graph APIs to extract all the data, structured data in calendar invites, time spent there, and also going into DevOps for the different work activities that the team were conducting day to day. One interesting thing about this was Drax were very conscious about providing and exposing sensitive information to the LLM.

So they wanted to think about, well, how do we mitigate any risk or any exposure of sensitive data. So we looked at incorporating NVIDIA NeMo Guardrails, which for those who don't know is effectively an LLM firewall.

So we can give it a prompt that’s maybe exposing some data and it will give us, yes, a pass or fail based on whether it’s exposing the data. So that was the first sort of phase there.
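For anyone curious what that pass-or-fail check might look like in code, here is a minimal sketch using the open-source nemoguardrails package. The rails configuration directory (Colang flows plus YAML) is assumed to define the sensitive-data policy and is not shown, and the way a refusal is detected here is purely illustrative.

```python
from nemoguardrails import LLMRails, RailsConfig

# Hypothetical config folder containing the Colang flows and YAML policy.
config = RailsConfig.from_path("./guardrails_config")
rails = LLMRails(config)


def is_safe_to_send(prompt: str) -> bool:
    """Return True only if the guardrails let the prompt through to the LLM."""
    result = rails.generate(messages=[{"role": "user", "content": prompt}])
    content = result["content"] if isinstance(result, dict) else str(result)
    # How a refusal is worded depends entirely on the rails config; adjust accordingly.
    return "blocked" not in content.lower() and "i can't" not in content.lower()
```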

And then moving into mapping in the project codes, so based on their timesheet portal, various mappings and codes that they want to translate to the work activities that are coming from the source there, using an open-source LLM, Llama, which was hosted on premise. And then the result of that is we get a draft timesheet, where again we want to keep the human-in-the-loop component here, not just submit a timesheet.

We want to be able to just sense check the quality of that. So really the last step is the stakeholder there can review what was produced and approve or reject that.

So this is actually something that, yes, again, we were able to wrap up in a series of very simple pipelines, again, sourcing the data from the different source endpoints there, consolidating that, merging it, providing it to the LLM with a system prompt and then taking that structured response and then mapping it into the timesheet there as well towards the end. So again, using the sort of out-of-the-box capabilities that we have around sort of data extraction and mapping and transformation as well as the new AI snaps as well.
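To make that concrete, here is a minimal Python sketch of the gather-then-draft flow. The real solution was a set of SnapLogic pipelines with the guardrails step in front of an on-premise Llama model, so this is only an approximation; the tokens, the work item IDs, the project codes and the call_llm helper are all placeholders.

```python
import json

import requests  # assumes access tokens for Graph and Azure DevOps are already obtained


def call_llm(system_prompt: str, user_content: str) -> str:
    """Hypothetical wrapper around the on-premise Llama endpoint."""
    raise NotImplementedError("plug in your model endpoint here")


def draft_timesheet(graph_token: str, devops_org: str, devops_pat: str,
                    start: str, end: str) -> dict:
    # 1. Pull the week's calendar events from the Microsoft Graph API.
    events = requests.get(
        "https://graph.microsoft.com/v1.0/me/calendarview",
        params={"startDateTime": start, "endDateTime": end},
        headers={"Authorization": f"Bearer {graph_token}"},
        timeout=30,
    ).json().get("value", [])

    # 2. Pull work items from Azure DevOps (the WIQL query to find the IDs is elided).
    work_items = requests.get(
        f"https://dev.azure.com/{devops_org}/_apis/wit/workitems",
        params={"ids": "101,102,103", "api-version": "7.0"},    # placeholder IDs
        auth=("", devops_pat),
        timeout=30,
    ).json().get("value", [])

    # 3. Ask the model for a draft; a human still reviews before anything is submitted.
    draft = call_llm(
        "Map these calendar events and work items onto the project codes provided "
        "and return a draft weekly timesheet as JSON.",
        json.dumps({
            "events": events,
            "work_items": work_items,
            "project_codes": {"PRJ-001": "Asset finance", "PRJ-002": "Internal"},  # placeholders
        }),
    )
    return json.loads(draft)
```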

Excellent. So what have we learned? So one of the things that we try to do at SnapLogic is to improve and progress and help our customers improve.

And so we have various initiatives to try and capture the information. So it’s not that someone delivers Drax or AbbVie or whatever project and then leaves and that information stays on that person’s hard drive.

We want to try and document and share it, and try and make sure that everyone is benefiting from that as much as possible. So this session wants to, in a very small way, give you a taste of what we have learned from these initiatives, and then we'll talk in a moment about how we do that at a more structured level.

Absolutely. So some of these may resonate from the session earlier if you were in with Ben from Spotify.

Some common themes here that we're seeing, I think, generally in the field when we approach and talk about these AI use cases: the first is really to have a clear view on where we feel we can build a case. So for those of you who were in the Slalom workshop earlier, that was very much centered around return on investment and helping develop a really clear business case based on how work is being conducted today within the business, and what the risks and costs are that we can then project and build into a solid ROI model there.

And what that helps us do is it prevents us from just bringing the technology and throwing everything at it, and it helps us narrowly focus on a clear area of the business. And then taking that into really understanding at a detailed level what that workflow looks like today, if it exists, sitting with the business, walking through the process end to end.

We did that with CCB, one of our customers who are doing a very big AI transformation project around asset finance. We're really investing that time, and it helps us understand where the key points of friction are within the process and where we feel we can bring AI to help move the needle in that regard.

The next one is really being able to prototype with as much real data as we can, and that helps us surface any edge cases that may arise in production. So rather than just mocking up test data initially to help build out that initial prototype, using real, concrete data there helps us refine the solution.

And then the validate and refine process is we’re not just delivering one thing, putting it in production and leaving it there. There’s a lot of value that can be realized through activity that happens in a production environment with these AI use cases.

It's really understanding the output of the prompts and the responses from the LLMs to help reiterate and refine the use case going forward as well. So in terms of how we wrap all that up and then take it to customers like yourself and support you on your journey, we have an Agentic Elite engagement, which is a twelve-month enablement pathway that really looks to codify and bring together all these best practices that we're seeing amongst all the different customers and use cases that we're working with in AI.

So our first goal is to accelerate that first production use case as fast as we can, supporting you on that journey, but doing it in a safe way because we want to make sure that the right sort of guardrails, observability and discussion around ROI are happening from the outset rather than down the line. And then as we look to kind of scale the development of AI use cases as well, we want to embark upon upskilling, enabling the wider workforce around the development of AI use cases, again, using best practices.

So we have a Sigma framework library that’s available in our community that looks to document and bring a lot of that sort of best practice and how we think about doing things the right way. So feel free to scan that.

We’ve got some great artifacts on there as well.

Yes. What’s that all about?

So you remember Dayle’s stats, 85% of AI projects don’t go into production. It’s because they usually fail on one of these hurdles.

They either take too long to get started and so they’re not showing any results in a useful time or they’re not perceived as being safe and so someone in legal or compliance or whatever holds up their hand and says stop, stop, stop, stop, stop. This is not going into production.

Or they’re just not used. They’re not adopted, and so they just die on the vine in some way.

And so we’ve now been lucky enough to work with a number of customers on AI. As you saw in Jeremiah’s slides, we were pretty early to the AI game.

We now have a pretty good sense of what works, what doesn’t, and we would love to make sure that your projects are the ones that work. They’re in the 15% that do succeed, that do show concrete results and that we can then go and find where we go next, which ones of your colleagues we can go and tell the story to together so that they can also get the benefit.

But with that, Chris and I are here to answer your questions for the next few minutes before we rejoin everyone else for the fireside chat session. Any questions in the room? Right here at the front.

It’s always the way. Sorry, we’re making you run today.

When you’re using LLMs to populate templates and send notifications and things, how do you test them to the point that you’re happy to put them into production?

Great question. Yes, this is quite close to home.

So I've worked on a lot of internal projects, and that was the first thing that came to my mind: how do we build confidence in the results and be able to track the progress as we build that out. So I think key to that is having some KPIs that we can anchor the solution around.

So it'd be that quality of output, consistency of data, building that in around evaluations, a test framework around what we're building, and constantly evaluating that. So yes, it's having that clear view of what it is.

Are we taking unstructured data, mapping it to a JSON format, looking at the completeness of that schema, and having golden test data sets as well that we can reference to orient the quality of the output? But yes, to the point of the Agentic Elite framework, this is something we really want to think about at the start of the project: understand what those key measurements of quality and success are.
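A golden test set can be as simple as a list of input documents paired with the JSON you expect back; here is a minimal sketch of that kind of evaluation loop. The field_accuracy metric and the 90% threshold are illustrative choices, not anything prescribed by the framework.

```python
import json


def field_accuracy(predicted: dict, expected: dict) -> float:
    """Fraction of expected fields the model reproduced exactly; a crude but useful KPI."""
    if not expected:
        return 1.0
    matches = sum(1 for key, value in expected.items() if predicted.get(key) == value)
    return matches / len(expected)


def evaluate(run_llm, golden_set: list, threshold: float = 0.9) -> bool:
    """Run the extraction prompt over a golden test set and check the average score."""
    scores = []
    for case in golden_set:
        predicted = json.loads(run_llm(case["input"]))   # model output, expected to be JSON
        scores.append(field_accuracy(predicted, case["expected"]))
    average = sum(scores) / len(scores)
    print(f"average field accuracy: {average:.2%}")
    return average >= threshold
```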

And we have built product features in as well. So if you look at the early iterations of the GenAI App Builder, it just gave you a text box to write your prompt.

And that's good as far as it goes, but you've got to keep a lot of state in your head when you're doing that. The new Prompt Composer workbench that's in the latest version of AgentCreator gives you a lot more visibility, because you've got your data, the schema of the input data.

You’ve got your prompts because you’re still doing prompt engineering, but you’ve got an AI assistant to help you with that if you want. And you can see the results in real time.

As you change the prompt, you can see the change in the output data and the output schema. So that already helps a lot.

And then there’s the best practice, the skill set that Chris and team can bring. And yes, we’re continuing to iterate on that.

Another piece is the agent visualizer, which lets you see when you’re building something truly agentic. So the workflow is not deterministic.

The thing is figuring out on its own, to some extent, which tools to call, which agents to involve. Being able to visualize that process is crucial both in the development moment, to understand how the thing is doing or potentially not doing the task, but also later on, to provide the auditability that Ben was talking about, to understand how it achieved a certain result or gave a certain decision.

Yes. And just to add that as well, one quick win we can think about doing is asking the LLM to give us a confidence level based on the task that we’re asking it to conduct.

And that gives us a way of helping understand what the AI thinks, in terms of whether it's in line with what we're asking as well. So we can escalate any confidence levels that are below a certain threshold for human review.

And that goes not just in the build and test and iterate phase, but in production as well. It's a helpful tactic you can use there.
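A minimal sketch of that escalation pattern, assuming a generic call_llm helper and an arbitrary threshold:

```python
import json

CONFIDENCE_THRESHOLD = 0.8  # arbitrary cut-off; tune per use case


def run_with_escalation(call_llm, task_prompt: str) -> dict:
    """Ask the model to self-report confidence and flag low-confidence results for a human."""
    raw = call_llm(
        task_prompt
        + "\nReturn JSON with two keys: 'result' and 'confidence' (a number from 0.0 to 1.0)."
    )
    answer = json.loads(raw)
    if answer.get("confidence", 0.0) < CONFIDENCE_THRESHOLD:
        answer["needs_human_review"] = True   # escalate instead of auto-accepting
    return answer
```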

Great. Next question.

Alright. One more then we’re free.

 I work with quite a few customers who are very versed on the SnapLogic platform. So what would be your recommendations of helping them to, kind of expand their usage of agents and, kind of, yeah, get started, really?

Yeah. So we have a community.

So the Sigma Library that we spoke about has some really good resources there to help understand how we can leverage the platform to develop AI use cases. Beyond that, there's tons of material on the web around concepts like prompt engineering, data and context engineering, that type of work as well.

And really, I think just dive in, just start building, prototyping, using the technology to understand where you could take it to solve problems that exist within the business. Well, there we go.

No, but I will say one thing. One of the things that kills AI projects dead in the water is access to data.

So you have a pretty cool demo and you can vibe-code something up in no time at all. It's gonna be amazing and everyone's gonna go, oh, wonderful.

Can we get production data into it? If you are already a SnapLogic user, then you are already ahead of most of the market because you have that integration fabric, that data fabric available. You can plug it into anything.

So this could be something that's operating behind the scenes, which actually all of the demos or all of the situations we showed today were, or it could be something much more forward facing, like what Jeremiah bravely demoed on stage using Claude Desktop. But all of those require the interaction with all of the back-end systems.

So as I say, as SnapLogic users, you have a big leg up already on your colleagues, so maybe rather than starting with the "I should use AI for something" sort of question, start with "what am I going to use AI for, given that I already have access to all of the data that's needed to feed AI?" Our founder, Gaurav Dhillon, always says that AI dies of data dehydration, and we don't want to do that.

We want to be well hydrated.
