Video
CTO Keynote with Jeremiah Stone, CTO, SnapLogic [Integreat 2025]
Transcript:
Thank you, Dayle, and thank you all for joining us here today. It is an incredible time to be alive, to be in our profession.
You know, I’m thrilled on a regular basis by the progress that academia is making in creating new technologies that can help us advance the species, have a better-run planet, better-run societies, and bring better health care and better services to people across the world. It’s fun to be part of that, and that’s the really great thing about being in the integration space: we bring people together, we bring technology together, and we deliver business outcomes. So thank you for doing what you do every day.
It is unsung heroism and, having been in this space myself, I know that very well. So it’s always great for me to be at these events.
So, you know, Dayle talked about how we think and how we operate, and I think it does make sense to start there, in terms of having a journey together. That’s really our focus with our customer base and our partner base, and it is core to how we think about taking on the challenges of the day. And so, one of the things we really want to reemphasize and recommit to is customer co-innovation: working together to accelerate time to value, to remove waste, friction, and challenges from our daily work, and to really create competitive advantage.
So we are not here to talk about middleware per se, or bits and bytes, but really to work backwards from the business outcomes that we’re critical to helping deliver with our colleagues in different areas. And so I always want to extend the invitation: if you’re working on something hard, we want to be as close as possible to the hard problems, to help you unpack those, deliver great business value, and really use this moment to seize competitive advantage and disrupt your own industries. And it is thrilling to be part of many such initiatives.
We held our CIO advisory council yesterday prior to the event. We have two of these events a year.
And the number of companies in our advisory council that are undertaking really disruptive approaches to being in a competitive environment has really shot up. I think the last couple of years were years of experimentation, of looking at focused, you know, I’d say, technology readiness testing.
Now we’re seeing that shift completely to implementation. And I want to take you through some of how we’re doing our part unapologetically and humbly within our domain to help deliver that.
And really what we’re seeing, I think, is that the move is already afoot: agentic AI, by automating a new domain of business processes and handling types of data and business process flows that we couldn’t previously, is having an impact. It’s having an impact on reducing cycle time in existing business processes.
It’s having an impact reducing the number of manual touches that need to be made, and it’s having an impact moving organizations forward. It’s also wasting a lot of time for people who don’t know what they’re talking about and aren’t doing the proofs of concept, and that’s creating a lot of challenges. But I think we’re very focused on helping to identify where this technology does make sense to apply and helping to bring it into production.
And from our perspective, the fundamentals still apply. We’ve long believed that being data-driven with a composable approach to business architecture is crucial for business process improvement and for delivery of services across the enterprise or to clients, and we still believe that data-driven, service-oriented architecture is enabling for agentic investments as well.
And the good news is we are seeing early adopters are actually working backwards. So saying, we want to reimagine and reinvent a portion of our enterprise landscape.
What can we do now with new technology to approach the problems here, and actually look at certain parts of the architecture to take into a composable approach, or, you know, work on the underlying data connectivity and data management in order to get there? The good news is we’re definitely seeing that you can take an outcome-oriented approach and progressively refactor to deliver, and that’s really exciting.
The impact of that, however, is pretty significant. As we increase automation, as we increase the ability to run processes 24/7, where we’re not constrained by our working hours, it will change the load on our underlying landscapes.
We’re going to have a different kind of volume profile, and anybody in the industrial space here has seen this: when we do operational technology and information technology convergence, IT/OT architectures, it changes the load profile, it changes the frequency and volume of requests, and I think we’re seeing the same thing happen here. So the investment journey that many of us have been on together, moving towards a loosely coupled, API-centric architecture, is even more important if you are now seeking to have an agent.
An agent is nothing more than a loop with a language model in the middle of it, and this starts to really add up if you’re calling enterprise services in the underlying world. So this continues to be a major impact and a major place of investment, particularly for other players in your landscape.
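To make that "loop with a language model in the middle" concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the model is a stub, and the tool name and data are hypothetical; a real agent would call an actual LLM API and real enterprise services.

```python
# Minimal agent sketch: a loop, a (stubbed) language model, and tool calls.

def stub_model(messages):
    """Pretend LLM: asks for one tool call, then produces a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_supplier", "args": {"name": "Acme"}}
    return {"answer": "Supplier Acme looks fine."}

# Hypothetical enterprise service exposed as a tool.
TOOLS = {"lookup_supplier": lambda name: {"name": name, "risk": "low"}}

def run_agent(user_request, max_turns=5):
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_turns):           # the loop
        reply = stub_model(messages)     # the language model in the middle
        if "tool" in reply:              # model asked for an enterprise service
            result = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["answer"]
    return "gave up"

print(run_agent("Check supplier Acme"))  # → Supplier Acme looks fine.
```

Note how every turn of the loop can fan out into calls against underlying systems, which is exactly why the request volume on those systems grows.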
So I think, you know, the vast majority of us work with one or more of the mega-vendors, often three or four, and each of these companies is very much coming out with its own capabilities in the agentic AI space, claiming to be soup-to-nuts, able to solve all things for all people. I think what we’re seeing within these early releases, and even as they mature, is that, as it was with an API-centric or service-oriented architecture, most of the mega-vendors are good within the realm of their domain, but the instant you get out of that domain, it’s very challenging.
And so the data availability to these capabilities is constrained there, meaning that if your business processes stay within the domain, good stuff. How many of our business processes only stay in one domain, though? You know, it is a subset.
Most of our important and impactful processes are multi-domain and cross, you know, the landscape that we have from the front office to the back office, and our ability to take and convert on a customer requirement and deliver that business value is across the space. I think that is really where we see our value: being able to go across these different domains and support a diverse infrastructure, where you have technology purchasing choices and do not have to standardize on an individual area. The challenges we see, though, over and over again, are pretty repeatable.
We can talk about how powerful a given language model is becoming or a certain capability is. However, despite the capabilities of these models, they don’t have access to your data.
They don’t have access to your domain systems. They can’t hit those systems, and if they could, you probably would be in breach of your privacy requirements or your regulatory requirements anyway, without any improved governance.
We also see just tremendous adoption at the line of business, and sort of bring your own AI, where people are working with these systems and bringing them in. Who here has more than 10 or 15 different AI systems running in their landscapes? A hundred, two hundred, no hands, no one wants to admit, alright, there’s some over here.
Certainly every single architecture coming in is bringing its own individual systems, and employees are bringing their own systems as well, whether that’s desktop Claude, OpenAI ChatGPT, etc. And so this then opens up privacy and governance challenges, which are incredibly necessary to manage.
But this isn’t really new, is it? We’ve been dealing with these problems with every new wave of technology, whether it was the expansion of cloud while we were managing our on-premises systems, or the creation of online analytical processing and the separation of data from application to data store. And of course, every time that happened, we had a new acronym and a new technology to try to manage. MCP and A2A, model context protocol and agent-to-agent, the newest acronyms to come out here, are just the latest in what has been steadily accumulating technical debt managed across your landscapes, because of course we have to manage technology through time. We cannot, you know, take any point-in-time system decision and live with it.
In fact it’s the accumulation of decisions through time that dictate our day-to-day activities and the majority of what we work on. And this is what we’re built for.
We’re built to help manage technical debt through time, to help manage the evolution of system architecture through time at a much lower cost and burden on you and your colleagues, so you can continuously evolve your landscapes and bring together everything from the mainframes we have as our heritage systems that continue running, to, you know, our own on-premises technology (about 50% of enterprise technology is still managed on premises or with managed service providers), to cloud technologies as well. That’s fundamentally how we approach it.
But I do want to step back and think about this from a, let’s say, architectural point of view and a philosophical point of view because often when I’m speaking with members of our community that have been with us for some time, we often lose sight of the underlying strategy for how we want to approach these things and that informs how we’ll approach them in the future. So the way that we really approach any new technology adoption, this includes the world of AI, is thinking about things first as highly heterogeneous and diverse.
So as I mentioned, you know, everything from systems we may have implemented a decade or two ago to systems we implemented last week: our focus and goal is to create an abstraction layer, or facade, against all of those systems, and everything SnapLogic touches we turn into a common language runtime, so that you can have a common programming model against it. I think that’s crucial: that’s how we can manage our labor costs within our teams, that’s how we can manage our system architecture through time, and it’s also why we can take on these different paradigms.
Whether we’re talking about integrating processes and systems from an API tier in the business world, or doing event-driven, you know, high-frequency models, or data integration, we can manage those the same way, because we have this common operating model. And that then allows us to integrate things like artificial intelligence in the operation and build-out of the system.
So SnapGPT, as Dayle mentioned, does everything from building and managing your infrastructure, to, in the future, actually handling the administrative and supervisory controls, to including AI in your workflows themselves. That then manifests itself in a common asset catalog that runs across the entire portfolio, and we recently released the ability to expose everything within the SnapLogic portfolio, at the asset level or even at the runtime level, via OpenLineage into external catalog management systems.
I know several people in the room have come and said, ‘Hey, when can we export this into our catalog management, whether it’s Collibra or Alation or an open-source system like DataHub, so that we can manage that as well?’ And if we look at the service-driven space, an area we’ve been working on for quite some time, we are continuously releasing capabilities: being able to manage our loosely coupled architecture via an API management system that handles all lifecycle operations on our services, as well as a best-in-class user experience, whether that’s a user looking to utilize an AI service, say an MCP endpoint, or an API. Multiple user types across that spectrum get a common place to discover all available services, to really improve the composability of new services and systems and drive business velocity, and it’s the same paradigm here for how we will enable secure, managed access for agents.
They really come through the same front door: applying the same process and governance we would apply to programmatic access to agents as well, being able to manage authorization, identification, and also business rules and deterministic processing in a blended fashion with agents. That then opens us up to systems of engagement, whether that is a desktop system, another application system, you know, any kind of a chat interface, etc. And so this is how we tend to think about managing this high degree of diversity and heterogeneity in systems through time, whether that is a back-end transactional system or a customer-facing system that may be coming in to work with your company’s products and systems.
So this is as much a philosophical mindset as it is a technical architecture, one that allows us to radically simplify the bowl of spaghetti that many of our enterprise system landscapes tend to look like, and to simplify the ability to add new investments and accelerate on top. Is that a helpful reminder or summary of how we’re looking at the universe? Okay.
Some good nodding heads. In terms of the impact that this has, though, we’re extremely excited, and when we talk about the world of digital labor, it’s important to see the early adopters that are leaping into the fray. It’s incredibly exciting.
So let me just walk through some of our publicly referenceable members of the community. APTIA. APTIA is, you know, in the healthcare benefits management space, and they were able to go from literally over an hour to seconds in managing explanation-of-benefits processes. This was an application integration use case, or business process management, where they were able to integrate language models into an already defined process and gain tremendous productivity within their processing.
Digital Federal Credit Union, now the largest credit union in the United States, has really reduced their time to manage fraud alerts. They have a case management system and a shared service center that has to look at any identified fraud.
They had pre-existing predictive models in place already, via, you know, training neural networks and other ways to identify outliers and transactions. But those produced a very, I’d say, complicated and sophisticated output from the models.
They then used AgentCreator to create a nice human summary of the case, which is dramatically decreasing their cycle time in doing a shared service center. Spirent completely changed the way they look at customer support, and actually the way that they could improve their ability to sell to new customers by creating a dedicated customer dossier for their salespeople before they went on-site.
And KBS is one of the world’s largest janitorial services companies. Most of their workflow with their customers is actually paper-based.
They do work orders that are filled out on paper, and then they basically write down what the service was. They’re using a combination of computer vision and a classification model to actually classify those workflows that start off, you know, in paper form, to be able to drive their business processes.
So there’s a wide variety here, and I think, you know, if you read Keith Guttridge’s recent research on the difference between everyday AI and transformational AI, this is where we would see these as transformational AI investments. This is not putting a chatbot on the desktop.
This is fundamentally changing deep research. So, from there, I do want to talk about model context protocol.
It is something that is on everybody’s mind. Anybody here experimenting with or working with MCP in your jobs currently? Alright.
Slightly less than half the people that are using AI day to day, so that’s a pre-indicator. We had dinner last night with Gartner colleagues, and they said that MCP inquiries are really overtaking the volume of input there.
So we certainly see that as well. But for those of you that don’t know what model context protocol is, my simplistic way of describing it is this: it’s a way to improve the ability of a language model to use tools, resources, or prompts, because the language model itself is creating the system access.
Well, if you want to do that on a highly reliable basis, you need structured, templated formatting. That’s essentially what MCP gives us.
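As a sketch of what "structured, templated formatting" means in practice: an MCP server advertises each tool with a name, a description, and a JSON Schema for its inputs, so a tool call can be validated deterministically before anything runs. The tool and its fields below are hypothetical, not a real SnapLogic endpoint.

```python
# Hypothetical MCP-style tool descriptor: name, description, input schema.
supplier_tool = {
    "name": "validate_supplier",
    "description": "Run the corporate supplier-validation pipeline.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "company_name": {"type": "string"},
            "duns_number": {"type": "string"},
        },
        "required": ["company_name"],
    },
}

def check_call(tool, args):
    """Tiny validator: reject calls that miss required fields."""
    schema = tool["inputSchema"]
    missing = [k for k in schema["required"] if k not in args]
    return {"ok": not missing, "missing": missing}

print(check_call(supplier_tool, {"company_name": "Global Office Supplies"}))
# → {'ok': True, 'missing': []}
```

The point is that the model's free-form intent is funneled through a fixed template, which is what makes tool use repeatable rather than best-effort.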
But if you do that in a reliable, frequent fashion, you can accelerate innovation, because you reduce the time to open the surface of your enterprise systems to language models and how they actually work together. From a decision-making standpoint, if you’re building processes that use LLMs to help with decision making, doing things like abstracts or summarization of underlying information, MCP can help with that, and it can also create, again, an abstraction layer for us to develop the underlying services through time, with basically a standard way to approach these things. And so we’re looking at MCP the same way we would have looked at other types of interface or integration technology, using the same paradigms we would have used for other types of integration.
So, the first way that we’re supporting MCP is already released. I think it was one or two releases ago, where you can now call MCP endpoints from within SnapLogic processes or pipelines.
So within a standard pipeline, let’s say for example you need to categorize some information like we said in the KBS sample, you could now call a, not only a language model, but you could actually call an endpoint that could be a more sophisticated system with MCP or even call other systems that are exposing data, via MCP as well, and then incorporate that into a SnapLogic pipeline. So that would be the client snap and I know there’s several people in the room that have already been working with this today.
The next thing, however, is actually exposing SnapLogic as an MCP server. Now this gets very interesting, because if in the data plane of SnapLogic you can have an MCP server, that implicitly means that any pipeline or asset you have accessible via SnapLogic can now become accessible to a language model endpoint, whether for a direct user interface, if we see the world evolving to have more and more conversational UIs, or for other processing itself.
And so as an example, one of the research projects we have ongoing right now internally is to MCP enable the core SnapLogic runtime APIs. So many of you probably use our public APIs to extract data or runtime information.
Now we’re making those accessible via MCP so that you could actually have a natural language query. You know, how many pipelines were degraded in the last twenty-four hours, and why, and actually get a meaningful response from the system to then decrease the time and energy it takes to manage this.
Not only that, but if you have SnapLogic as an MCP server, any system you have built that has those kinds of APIs, you could MCP-enable using SnapLogic. Think about that for a second.
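A sketch of what MCP-enabling a runtime API could look like: the natural-language question "how many pipelines were degraded in the last twenty-four hours?" ultimately resolves to a small, deterministic tool like the one below. The run records and field names are fabricated for illustration; the real SnapLogic public APIs will differ.

```python
from datetime import datetime, timedelta

# Fabricated runtime history; a real tool would query the monitoring API.
RUNS = [
    {"pipeline": "orders-sync", "status": "degraded",
     "at": datetime.now() - timedelta(hours=3)},
    {"pipeline": "hr-feed", "status": "ok",
     "at": datetime.now() - timedelta(hours=5)},
    {"pipeline": "billing-load", "status": "degraded",
     "at": datetime.now() - timedelta(days=2)},
]

def degraded_pipelines(last_hours=24):
    """Tool body behind 'which pipelines degraded in the last N hours?'"""
    cutoff = datetime.now() - timedelta(hours=last_hours)
    return sorted(r["pipeline"] for r in RUNS
                  if r["status"] == "degraded" and r["at"] >= cutoff)

print(degraded_pipelines())  # → ['orders-sync']
```

The language model only has to pick this tool and fill in `last_hours`; the counting itself stays deterministic and auditable.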
If you have legacy investments where you’re managing them and the systems administration surface is challenging, you could actually make that easier. But I want to, you know, talk about this in a more concrete example, and before I talk about that, let me make sure that my system that’s been sitting up here for a while, of course, my security and user timeouts are all working well.
So I’ll get that going in the background. Let’s talk about an example where this could be useful that I certainly deal with in my world, and that’s supplier onboarding.
If you’re anything like SnapLogic, your supplier service onboarding process is opaque, deeply frustrating, takes too long but it’s absolutely required to follow a certain approach in order to be done correctly. So in my world what we have to do to onboard a supplier at even a small Silicon Valley startup is I go in and I fill out a form, to, you know, describe my supplier.
I then have to upload that, to, I think it’s actually a Google form that I submit. And then the, you know, the finance team goes through and performs a bunch of business processes to ensure that we can actually onboard these suppliers.
Whether that’s looking at the credit rating of the supplier, validating that the bank details are correct, looking at the risk, you know, does the Better Business Bureau have a flag up on this particular supplier, etc. Does this supplier already exist in the CRM system we’re working in? These are all things that happen in the background.
They’re time-consuming. They’re manual.
They’re error-prone. So imagine now, instead, if you had already built good business rules into how you accomplish these things, how could you expose that to an end user in a way that is highly useful and usable? So as an example here, I’ll go ahead and show a demo, if we can cut to my screen.
Okay. So I have brought my own AI, and I’m even too cheap to pay for the $20-a-month plan.
So here’s my Claude desktop. And what I’d like to do is walk through what supplier onboarding could look like in this world.
So as an example, I have my onboarding form. You can see here, this company I wanna work with, Global Office Supplies.
It’s a demo company but, you know, has all of the necessary information, and I’ve filled it out. I can go ahead and bring this into my Claude workspace.
And then for the purposes of demo, so that you don’t have to watch me be nervous and type, I’ve got some prompts here, but we could type them out instead. We can say, you know, read the supplier onboarding PDF and tell me all about it.
Now, okay, this isn’t much of a parlor trick. All of us have been doing this sort of thing.
I’m already using language models, and you know these built-in services to help us with things like document Q&A, etc. What becomes interesting, however, is when I give Claude extra skills.
So as you can see here, I’ve enabled the SnapLogic supplier onboarding MCP server, and now with my authentication to the system, I can now do things like validate suppliers, create new suppliers, and run analytics on them. So let’s start with the first one.
I’d like to validate the supplier using the SnapLogic system for supplier validation. So what’s happening now is that Claude is taking this prompt, introspecting the services available to it, and calling a tool.
And so the tool call behind this is a very straightforward pipeline that goes ahead and validates using our corporate systems: go ahead and do a Dun & Bradstreet check, go ahead and do a credit check relative to the supplier.
Actually use the data that was in my onboarding tool to follow a prescribed business process, in order to deliver the outcome. Let’s see if the demo gods are with me today.
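For context while the demo runs, the tool behind a call like this can be sketched as a fixed sequence of checks over the onboarding data. Everything below is stubbed and hypothetical; the real pipeline calls live services (Dun & Bradstreet, a credit bureau, bank validation) rather than returning canned results.

```python
# Stubbed validation steps; each stands in for a real external service call.
def dnb_check(supplier):    return {"dnb": "verified"}
def credit_check(supplier): return {"credit": "A"}
def bank_check(supplier):   return {"swift_valid": True}

CHECKS = [dnb_check, credit_check, bank_check]  # the prescribed process

def validate_supplier(supplier):
    """Run every check in a fixed, auditable order and decide approval."""
    report = dict(supplier)
    for check in CHECKS:
        report.update(check(supplier))
    report["approved"] = report["swift_valid"] and report["credit"] in ("A", "B")
    return report

result = validate_supplier({"name": "Global Office Supplies"})
print(result["approved"])  # → True
```

The model chooses to invoke the tool; the business rules themselves stay deterministic, which is why governance is preserved.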
I had this conversation earlier. So, I said, you’re really going to do a live demo? I said, well, that’s my intention.
But let’s see if, Claude’s working with us. It certainly seems to be wanting to.
Come on, Claude. You think I’m going to get the time out here? I did test this earlier today.
So while Claude’s working, you can see, I’m gonna go ahead and kill this and start it up again. Yeah.
It should be good. Three years of doing this.
Is this my first demo face plant? We’ll see. Maybe US-East is down again.
Alright. Well, while that’s ongoing, you can imagine, you know, if it did come back, what’s actually happening in the background? Oh, here we go.
Alright. So supplier validation completed and approved. A little bit longer inference time, but a heck of a lot shorter than if my finance team were doing it, I’ll tell you.
As you can see here, what’s happened is we’ve gone through, using the pipelines in the background, the summaries to go ahead and approve the supplier, and then the detailed risks. So you can see here we’re going through Dun & Bradstreet verification, we’re doing credit rating scoring, banking compliance, actually double-checking that this is the right SWIFT code, etc.
And then actually saying, look, this supplier is cleared for onboarding. You can go ahead and set it up.
So, you know, ‘create this supplier in CRM.’ And again, because of the registration of supplier creation, you’ll see it already knows that what I mean is to go ahead and create the supplier in Salesforce and use the information that’s already been extracted from the document.
And so in the process, we’re using the same underlying tool. So what is the tool that’s actually being called? It is a straightforward SnapLogic pipeline where we’re doing a Salesforce create based on the actual account information.
So you can see this information has been created in Salesforce and we can go ahead and actually access the account directly here. Go ahead and open this up.
So I think, you know, just trying to illustrate here how you can use these technologies to meet your users where they are in the interfaces they’re using to open those underlying, you know, capabilities up and the idea that if you have already invested a lot of time and energy in your data integrations, your application integrations, your actual orchestration, you now can open those up, as well. And then you can do even more interesting and sophisticated things.
Our presales team, for example, has stopped using pre-built, canned productivity reports and is actually just doing automatic generation of what typically would have been done in Tableau or something like that. You can actually start to do real development and delivery of analytical reporting as well within these tools.
And so now I’m combining a data integration pipeline in the background to pull supplier information and performance, and using, you know, what has become pretty nice rapid prototyping if you just want to do a one-off, you know, visualization. Why pull that out into a spreadsheet and do all of your pivot tables, etc., if you can ask the model to develop something for you?
And so, asking for this to do a visualization that I can share with my executives, we’re able to actually build something quite interesting in a relatively nice, ephemeral, almost throwaway type of activity, to understand supplier analytics, etc. I’ll pay for Claude later.
But as you can see, I think reducing the friction to rapidly prototype, to get access to information, and even to automate business processes, this is where we’re headed. We’re able to break down the boundaries between our existing well-structured, well-managed, well-governed enterprise landscape and helping users access that information in a lower-friction way.
And I think that’s kind of illustrative, or evocative, of what we can do with different front-end experiences and different types of systems when you start to really reduce the friction to consume your existing investment. Pretty interesting? Yes? No? Yeah? Alright.
Well done, Claude. Alright.
Back to the presentation. Okay.
So as we can see here, you can reimagine how to apply these technologies. In that case, we’re using the model really to interpret the user’s input and be able to select the right underlying tools where we’re able to maintain our governance.
I think it’s a crucial, crucial point to make that simply because you’re using these tools, you’re not giving up, you’re not sacrificing rigor and process management in the way that we’re designing and operating our enterprises. You can have both: you can have your well-defined, well-scoped, delivered processes, and you can apply the value and velocity these technologies bring to bear, making things well governed, fast, and efficient, but also at the level of quality which you must have. And, you know, our concern, I think our observation, is that if you don’t maintain these two worlds in balance with each other, you’re going to open up a much greater risk envelope and create a world in which you’re not happy with the outcomes.
Okay, so that’s pretty interesting. Everything you have from a SnapLogic point of view you can expose as an MCP endpoint, and you can orchestrate processes and data acquisition. But what if you’ve been investing for twenty or thirty years in other middleware solutions and you have these landscapes already? The vast majority of middleware, of integration technology, in use today was delivered between the years of 1995 and 2006.
We were founded in 2006. The current version of SnapLogic is about ten years old.
In terms of the core runtime architecture, it’s been continuously improving, but we are cloud-native. We are JSON-centric.
We are, let’s say, contemporary with the technologies of today. That isn’t true for any of the systems on the screen.
All of these systems are, you know, columnar, row-oriented, on-premises, self-managed systems that continue to be the backbone of integration for most businesses on the planet. And so one of the requests we have had continuously is: we need to reduce the cost of modernizing these systems.
We cannot even continue to keep, you know, our systems live. Some of these are going completely out of support.
Microsoft BizTalk: many people are dealing with a burning platform on BizTalk right now. Many of us have SAP Process Orchestration, with end of life in a couple of years in the ECC-to-S/4 world.
These are imposed upon us, but you know you can kick the can, you can get extended maintenance. But when it starts to become impossible to be able to drive the velocity you need with your business, then it becomes imperative to move into a more modern and contemporary architecture and this is always something that has been far too expensive, far too time consuming to do.
And so we started a data science project last year, where we first investigated: is it possible to shift modernization from a practitioner-led approach, meaning somebody sits with two monitors, looks at the old system, builds the new system, and just sort of moves back and forth, to a data-driven approach? And what we identified is that not only is it possible, we think we can get an 80% reduction in modernization time and cost.
Time and cost: what is more important there? Well, time. If we can compress by even 50% the time that it takes to do modernization, you reduce your risk profile, you reduce your switchover cost and challenge, and time is not only people time, it’s project time overall and time to develop. So the way that ultimately we’ve come to do this is not a purely technical approach where we’re doing just a technical conversion or lift-and-shift, although for a portion of your workloads, I think that could be very appropriate.
We’re really looking holistically at the entire process of modernization from understanding the portfolio. Many of these portfolios that are decades old, the people that wrote the workloads no longer work with us.
They’re probably retired or passed on. They didn’t document things appropriately so we understand these systems on a break fix basis.
And in order to even begin modernizing and migrating, often we’re looking at a six-to-eight-month project just to understand what we’ve got in any level of useful detail. So what we’re doing is completely automating that, and then having business-centric planning, whether that’s by a given business domain or a process orientation, developing, and then delivering with test and release.
And so what we’re seeing in terms of our early production deployments of SLIM is a 40% reduction in overall integration complexity: the ability to refactor the solutions, again not just lift-and-shift but actually refactor, with a different design methodology around building these things.
Look, for example, at some of these systems (when I say mapping, I mean Informatica): the ability to do massive compression from one system to another, primarily through parameterization and reuse. And then, in particular, the reduction in implementation time.
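To make the parameterization-and-reuse idea concrete, here is a minimal sketch in plain Python. The table names, column names, and pipeline shape are all invented for illustration; this is not SnapLogic’s or SLIM’s actual implementation, just the general pattern of collapsing near-identical legacy mappings into one parameterized definition:

```python
# Hypothetical illustration: three near-identical legacy mappings
# (one per source table) collapse into a single parameterized pipeline.

def sync_table(source_table, target_table, key_column):
    """One reusable pipeline definition replacing N copy-pasted variants."""
    return {
        "extract": f"SELECT * FROM {source_table}",
        "transform": {"dedupe_on": key_column},
        "load": target_table,
    }

# Instead of three separately maintained mappings, three parameter sets:
jobs = [
    sync_table("crm_accounts", "dw_accounts", "account_id"),
    sync_table("crm_contacts", "dw_contacts", "contact_id"),
    sync_table("crm_orders",   "dw_orders",   "order_id"),
]
print(len(jobs), "jobs from 1 pipeline definition")
```

The compression comes from maintaining one definition and many parameter sets rather than many copy-pasted definitions.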
So to look at how we’re approaching this, here are some screenshots of the system. In the analysis phase, we’re basically taking an export of the metadata for these legacy systems, typically in an XML format, uploading that into the system, and then performing a statistical analysis of that metadata.
What are the endpoints? What are the transformations? How are you managing it? From that we actually build out the detailed workflow diagrams and process understanding. From there we do the planning, which I’ll skip over, and then the actual development.
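The statistical pass over legacy metadata described above can be sketched roughly as follows. The XML snippet and its tag names are invented for illustration and do not reflect any real export schema; real exports are far larger, but the counting idea is the same:

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical legacy-export snippet; tag and attribute names are invented.
export = """
<repository>
  <mapping name="m_load_orders">
    <source type="ORACLE"/><target type="SNOWFLAKE"/>
    <transformation type="EXPRESSION"/><transformation type="LOOKUP"/>
  </mapping>
  <mapping name="m_load_customers">
    <source type="SFTP"/><target type="SNOWFLAKE"/>
    <transformation type="EXPRESSION"/>
  </mapping>
</repository>
"""

root = ET.fromstring(export)

# Tally endpoint systems and transformation types across all mappings.
endpoints = Counter(e.get("type") for e in root.iter()
                    if e.tag in ("source", "target"))
transforms = Counter(t.get("type") for t in root.iter("transformation"))

print("endpoints:", dict(endpoints))
print("transformations:", dict(transforms))
```

From counts like these you can see which endpoints and transformation patterns dominate a portfolio before planning any migration work.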
That means building, to those specifications, the actual first drafts of the pipelines themselves, plus test data handling, full CI/CD version control, and then, crucially, test and release management: fully automated, iterative testing in the development cycle, at both the unit level and the regression level.
And, oh, by the way, the test approach we came up with for SLIM is independently available for your SnapLogic environments as well. So you can move unit testing into the development life cycle as well as manage full regression test analysis.
I see some eyes perking up at that because this has been a long-standing gap for us. We’ve closed the gap and now we have the ability to have fully automated unit and regression testing as well.
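As a rough illustration of the two test levels just mentioned, here is a toy sketch in plain Python. The stand-in "pipeline" function, test data, and golden baseline are all invented; this is not the actual SLIM test framework, just the unit-versus-regression pattern:

```python
# Toy stand-in for an integration pipeline: uppercase names, drop rows
# that lack an id. Invented for illustration only.
def pipeline(rows):
    return [dict(r, name=r["name"].upper()) for r in rows if r.get("id")]

# Unit test: one transformation, known input, expected output.
def test_unit():
    assert pipeline([{"id": 1, "name": "ada"}]) == [{"id": 1, "name": "ADA"}]

# Regression test: a full run compared against a stored "golden" baseline.
GOLDEN = [{"id": 1, "name": "ADA"}, {"id": 2, "name": "GRACE"}]

def test_regression():
    live = pipeline([{"id": 1, "name": "ada"},
                     {"id": 2, "name": "grace"},
                     {"name": "no-id"}])
    assert live == GOLDEN

test_unit()
test_regression()
print("all tests passed")
```

Unit tests pin down individual transformations during development; regression tests catch behavior drift across the whole pipeline at release time.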
So, pretty exciting. Talk to your SnapLogic rep.
You know, follow up with me afterwards, and we’re happy to give demos and look at your data in this system. Okay.
So what’s next? Well, the first wave of modern middleware, starting in ’96 with Informatica, TIBCO, etc., was really on-premises, whether ETL or ESB-type technologies. The next wave was cloud, with iPaaS.
Our prediction, and where we’re headed, is really toward agentic integration, and I will disagree with my marketing colleague that this is merely hype. What we mean by agentic integration is what I just showed you with SLIM.
We’re actually applying AI in the actual domain of developing the systems themselves. SnapGPT is an agentic copilot now.
If you’re using SnapGPT, and those of you who are using it have seen the continuous, steady improvement, you can now take the deeper-thinking approach, where it becomes an iterative cycle, whether you’re refactoring existing workloads or creating new ones. That is an agentic system, and as you can see, we’re developing these as well.
So that’s what we mean. How do you get to minimizing the human cost of integration, automation, orchestration, development and management? You do it by automating all the things.
Okay. So we’re committed to this future. We’re all in.
We’re continuing to develop. We started in 2017, building our first artificial neural networks and building up our own expertise.
We’ve had a steady, continuous effort against this. We’re dead set on minimizing the cost, risk, and time spent, and, having just released MCP, we’re looking downstream at how you would have multi-agent systems well orchestrated with each other.
We’re starting on the early research there. You know, come contact us if you’re interested in agent-to-agent systems.
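For context on the MCP mention: MCP messages are JSON-RPC 2.0, and a tool invocation request has roughly the shape below. The tool name and arguments here are hypothetical, invented purely to show the message structure:

```python
import json

# Sketch of the JSON-RPC 2.0 shape MCP uses for a tool invocation
# ("tools/call"); the tool name and arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_pipeline",                  # hypothetical tool name
        "arguments": {"pipeline": "dw_orders"},  # hypothetical arguments
    },
}
print(json.dumps(request, indent=2))
```

Because the envelope is plain JSON-RPC, the same transport can carry tool listing, invocation, and results between an agent and an integration platform.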
Right now we’re really focused on exposing MCP in the use cases where it makes sense. So, in closing, come and talk to us.
Come talk to us if the work you’re doing is stalled because you haven’t figured out how to give AI models or agents secure access to core enterprise data in a way that’s repeatable, highly governed, and useful. Without that central control, you’re risking governance failure, so come talk to us.
Or, forgetting all of that, if your ability to be a high-performing operational group is blocked by technical debt and legacy, then you need more than tools. You need a community, you need expertise, you need guidance.
Everything I’m talking about here is available on our community and technical blog as well, and we really see this as a group activity. So it’s really a wonderful and exciting time, and we just continue to see the pace and velocity of innovation increase.
So, thank you very much.
Enjoy the rest of the event playlist, showcasing our customers, experts, and thought leaders.