Setting The Stage For AI Success In 2026: Lessons from AWS re:Invent

Dominic Wellington
6 min read

This year I was not able to be in Vegas in person, but our SnapLogic team was there in force, talking to customers and partners who gathered in fabulous Las Vegas to hear the latest news about the state of the industry. The conversations spanned AI, data and application integration, governance, lineage, and automation, which aligns well with the topics that AWS executives discussed in their keynotes.

MCP (the Model Context Protocol) is definitely top of mind for many people. That focus is likely to continue now that Anthropic is open-sourcing the protocol and donating it to a new Agentic AI foundation.

What follows is not going to be an exhaustive rundown of everything that was discussed in the keynotes; some of that stays in Vegas, but there was so much to talk about that some announcements even started trickling out before the show. What I am going to try to do here is call out some of the announcements that are most immediately relevant to SnapLogic users.

The year of AI

2025 was definitely the year of AI at AWS re:Invent. Last year, the first mention of AI did not come until halfway through Matt Garman’s opening keynote. This year, we dived in almost immediately, with promises that agents could scale impact by 10x. This bold claim was backed by a renewed focus on developers, which was the through-line for all of the big three keynotes this year.

The promise is that AWS will take care of infrastructure so that developers can build: a refresh of what has, after all, been AWS’ brand promise since day one, updated for the era of AI. It’s no longer sufficient simply to deliver big infrastructure, although there was plenty of that, with a renewed commitment to deliver “scalable and powerful AI infrastructure to power AI workloads at the lowest possible cost.”

Many more AWS customers have joined SnapLogic in building on Amazon Bedrock — twice as many as last year, we are told. All sorts of new models are available, from AWS and from partners such as Mistral.

What many of these developers are finding is that the models are not sufficient on their own. In order to be useful, they need specific contextual knowledge about the process they are to implement. This deep integration between model and data is the key to success. The industry has been doing Retrieval-Augmented Generation (RAG) for a while, but AWS is proposing to go one step further with its new Nova Forge “open training” model. 
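
To make the retrieval step concrete, here is a toy sketch in Python: it ranks stored documents by keyword overlap with a question and prepends the best matches to the prompt, so the model answers with domain context. This is purely illustrative — production RAG systems use vector embeddings and a vector store rather than keyword overlap, and every name below is hypothetical.

```python
# Toy sketch of the retrieval step in RAG: score stored documents
# against a question by keyword overlap, then prepend the top matches
# to the prompt. Illustrative only; real systems use vector embeddings.

def tokenize(text: str) -> set[str]:
    # Lowercase and strip common punctuation from each word.
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank documents by how many question words they share.
    q = tokenize(question)
    scored = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    # Assemble the retrieved context and the question into one prompt.
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = [
    "Returns over $1000 require manager approval.",
    "Our warehouse is located in Reno.",
    "Standard returns are processed within five business days.",
]
print(build_prompt("How are returns processed?", docs))
```

The point of the sketch is the shape of the pattern: the model itself is unchanged, and the domain knowledge arrives at inference time through the prompt. Nova Forge’s “open training” approach goes further by baking that knowledge into the model itself.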

We heard more about this new model from Dr Swami Sivasubramanian. Basically, the problem is how to give models the specific context for the domain in which they are to operate. There have historically been two approaches:

  • Train a model from scratch. This approach is incredibly expensive, because on top of massive amounts of compute power, it also requires deep expertise and skills, as well as a certain amount of time for the training process.
  • Start from a pre-trained model and use fine-tuning or reinforcement learning. This approach works up to a point, but there are limits to how effective it can be. There is also a risk that the model can become too focused on a specific task due to over-training and lose its original capabilities.

The new approach in Nova Forge uses what AWS calls “reinforcement fine-tuning” to avoid the complexity and cost of custom training. These “novellas” deliver a claimed 66% average improvement in accuracy over base models, providing at least some of the benefits of a custom-trained model without the time and expense that would otherwise be required.

Build, then deploy

Building models with more specific contextual knowledge has been one of the factors holding back production deployment of AI, but another factor keeping AI in the “proof of concept jail” has been governance and control. As Matt Garman put it, “boundaries provide confidence.” In a sign of the maturing of the AI market, there was a wide variety of announcements around setting policy and observing agent behaviour. 

New policies in Bedrock AgentCore enable users to place limits on agents’ behaviour, not just the tools and data which they can access. The example used in the keynote was an agent which can only process returns on items that cost less than $1000, with the policy specified in very natural-looking syntax.
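
The underlying idea is a pre-execution gate: the agent proposes an action, and a policy decides whether it may proceed. The Python below is a hypothetical sketch of that pattern mirroring the keynote example — it is not Bedrock AgentCore syntax, and all names in it are invented for illustration.

```python
# Hypothetical sketch of a pre-execution policy gate for an agent action,
# mirroring the keynote example: returns are only allowed under $1000.
# Not Bedrock AgentCore syntax; all names here are illustrative.

from dataclasses import dataclass

@dataclass
class Action:
    name: str      # which tool/operation the agent wants to invoke
    amount: float  # dollar value of the item involved

def allowed(action: Action, limit: float = 1000.0) -> bool:
    """Permit a 'process_return' action only when the item cost is under the limit."""
    if action.name != "process_return":
        return False  # this policy covers returns only; deny anything else
    return action.amount < limit

print(allowed(Action("process_return", 250.0)))   # small return: permitted
print(allowed(Action("process_return", 2500.0)))  # over the limit: blocked
```

The appeal of declaring this as policy rather than burying it in agent code is that the boundary is visible, auditable, and enforced regardless of what the model decides to attempt.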

Even systems that rely on single AI models can change their behaviour over time as the underlying model changes, and of course this problem is exacerbated in agentic systems with dynamic combinations of models and tools. New observability features help to understand how these agents operate and communicate, which is a core requirement for deploying them in production environments where they will be interacting with real customers and real-world systems.
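
A minimal way to picture what such observability records is a trace of every tool call an agent makes. The sketch below is a hypothetical illustration of that idea in plain Python — the decorator, field names, and tool are all invented, and real agent observability tooling captures far richer telemetry.

```python
# Minimal sketch of agent observability: wrap each tool call so that the
# tool name, arguments, result, and timestamp land in an inspectable trace.
# All names here are illustrative, not any vendor's actual API.

import time

trace: list[dict] = []

def observed(tool):
    # Decorator that records every invocation of a tool.
    def wrapper(*args, **kwargs):
        result = tool(*args, **kwargs)
        trace.append({
            "tool": tool.__name__,
            "args": args,
            "result": result,
            "ts": time.time(),
        })
        return result
    return wrapper

@observed
def lookup_order(order_id: str) -> dict:
    # Stand-in for a real tool the agent might call.
    return {"order_id": order_id, "status": "shipped"}

lookup_order("A-123")
print(trace[0]["tool"], trace[0]["result"]["status"])
```

With a trace like this, a drifting model or a misbehaving tool shows up as a change in the recorded calls, which is exactly the evidence needed before letting agents loose on real customers and systems.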

AI comes of age

The last of the big three keynotes was from Dr Werner Vogels, Amazon’s beloved CTO/guru — and it turned out to be his very last, as he is going to cede the stage to younger voices from next year. Perhaps fittingly, he signed off with a philosophical ramble through the past and the future of computing, emphasising how AI requires techies to become “Renaissance developers.”

While Werner was explicitly talking to “pro-code” developers, not to users of low-/no-code platforms like SnapLogic, much of his advice still holds true for us. The entire point of a graphical development environment like SnapLogic’s is that it does not require users to go enormously deep in the technology (to become “I-shaped,” in Werner’s terms) and instead welcomes users with broad expertise and holistic understanding.

As Werner said, so-called “vibe coding” runs into a problem of “verification debt”: AI can generate code faster than you can understand it. A graphical representation goes a long way towards helping more users understand the logic of a pipeline. AI assistants like SnapGPT can help both to generate pipelines and to understand and evolve them, helping SnapLogic users become centaurs: expert humans riding powerful machines.

AI is becoming mainstream. Matt Garman shared that more than fifty AWS customers processed more than a trillion tokens each through Bedrock. That is a lot of usage, but turning it into real-world value requires that models be connected to enterprise data and systems, observed and improved in action, and governed by enterprise policies throughout. Let’s talk about how SnapLogic makes that happen.

Get the blueprint for AI-based automation in this report from Harvard Business Review, “Harnessing the Power of Generative AI and AI Agents.”

Dominic Wellington
Director of Product Marketing for AI and Data at SnapLogic

See SnapLogic in action with an interactive tour!
