Podcast Episode 26
Generative AI’s Impact on the Future of Work and Society
with Jeremiah Stone, SnapLogic and Greg Benson, University of San Francisco
In the season 2 finale of Automating the Enterprise, Dayle Hall engages in a dynamic conversation with data science expert Jeremiah Stone and computer science professor Greg Benson. From unraveling the mysteries of generative AI to discussing its transformative potential in industries like healthcare and academia, this episode sheds light on the present and future of AI-driven innovation.
You’re listening to the SnapLogic Automating the Enterprise podcast. This podcast is designed to give all the organizations out there valuable insights and best practices on how to integrate, automate, and transform their enterprise. I’m your host, Dayle Hall, the CMO of SnapLogic.
This is the last episode of our Season 2. So far, we’ve covered a ton of different topics across the enterprise. We’ve covered AI ethics, navigating AI in healthcare, process automation. But today, we have two very special guests. And I’m proud to have them on this podcast with us as we close out. And I think if anyone tuned in to the end of last season, you heard me talk to, first of all, our CTO, Jeremiah Stone. So Jeremiah is joining us again. Jeremiah, thanks for joining at the end of the season. We saved you for the end to close the season out.
Well, I am a loyal listener, and it is always fun to come on and talk to you.
Great. And then we have another special guest today, who’s actually a SnapLogic chief scientist and also a professor of computer science at the University of San Francisco. Welcome, Dr. Greg Benson.
Great to be here, Dayle. Glad to be a part of the podcast. Also a regular listener.
Yeah. Well, I appreciate that. I’m sure you’re not saying that just to make me feel good and make the host feel all warm and fuzzy about his guests really listening in.
So listen, like I said, we’ve had a ton of different topics. We’ve had some of our customers. We’ve had thought leaders from nonprofits. We’ve had thought leaders from agencies and how the enterprise landscape is changing, but obviously, some of the things that SnapLogic is working on and some of the things that we see in the market. Generative AI is everywhere. And I know we’re going to get to this at the end of the podcast because I’d specifically want to cover what SnapLogic is doing, but the concept of generative AI and then what we’re talking about at SnapLogic, which is generative integration. So we’ll go into that second part in a second.
But I just want to start with just a general question around what you’re seeing in the industry. Jeremiah, let’s kick off with you. So we’re seeing all this hype, I’m going to say hype, and excitement, which is good on both regards, but around generative AI. Why do you think the industry has, it seems like, jumped on this really quickly, like they were just waiting for something to happen and now everyone is talking about generative AI? What are you seeing in the industry? And why does it feel like this has a lot more momentum than some of the other AI discussions we’ve had to this point?
I think it comes down to engagement, Dayle. If I think back over the last 20, 25 years of technology, when you see this kind of excitement, the last time I can remember this level of engagement, excitement was when people were waiting in lines around blocks to get the first iPhones because it was a fundamentally different thing that everybody could touch, feel, and gain new skills, new abilities, new powers.
And I think the capabilities that have been exposed with generative AI have been known to those of us in the field for years, really, since the transformational, no pun intended, work came out in the transformer architecture with models like BERT, T5, or the early GPT models. But really, that launch of ChatGPT was something that so many people across ages, professions, and geographies were able to experience. Really feeling the power and capabilities, I think, has just led to an explosion of imagination of possibility.
I think that’s really what’s driving it, is this is the new fuel in the tank of optimism for the ways that technology can improve the human condition fundamentally. And then that naturally leads into, okay, well, how could this help my business? And given the environment we’re in right now, with volatility and challenges in the macroeconomic environment, high interest rates, quiet quitting, what have you, from the last year, this is really a domain and a space where there’s optimism. People have actually seen it, felt it, experienced reality that was different and positive. And the distance between asking ChatGPT to help you write an email based on a couple of bullet points can very easily be translated in many professional domains. So I really think it’s just the engagement with the technology and the ability to dream what might be with that.
When I joined SnapLogic, CEO Gaurav Dhillon, who’s obviously an industry stalwart, the OG of integration as we call him, used to tell me, and I know he still talks about this, that we’re moving to a point where you’ll be able to talk to your machine. You won’t have to type it; you will be able to say, hey, SnapLogic, connect my enterprise, like Star Trek. When I first heard that three years ago, I mean, it was very inspirational. That would be really cool. It feels like we’re about as close now as we could ever have hoped to be to getting to that point, because of some of those things that have happened recently. So it’s nice to see Gaurav smile as you walk around the office as we talk about the launch of SnapGPT, because it kind of feels like some of that is coming true, don’t you think?
I do. And you make a really good point. When we see technology that in our lifetime has been speculative fiction, science fiction, and suddenly, it’s here, that makes an impact and it’s exciting, just fundamentally. And we started to see it with the voice assistants. And now we see that capability just continuing to improve and accelerate, and it’s tremendously exciting. I think it is that transition from wait a minute, I thought that was just a movie TV thing, that’s actually a thing. I can touch it, and it’s real, give me some more.
I always wanted to be a Trekkie, too. I was a Trekkie. I always want to be in the show.
Okay, Greg, let me ask you a similar question. But from your perspective, you’re obviously a little bit more engaged in a lot of the detailed technical work. And given your work at the University of San Francisco, you’re also probably seeing some influx around what the students are going to be experiencing and how they’re using it. So same question to you: what are you seeing, and is that excitement translating into the detail-oriented data science and technical communities as much as it is in academia? Or are there other concerns? How are they responding?
Yeah. I mean, there’s a lot to say. Taking one step back, this moment in time is like the iPhone moment, the Google Search moment. I think about the renewed optimism that this technology has really brought to all sorts of communities, the business community, the education community. It’s just a great time to be in tech.
So the step function, it’s interesting. One way I’d like to describe it is just think about what you did six months ago if you’re trying to learn something. Six months ago, you would get recommendations, you do your Google searching, you distill information. And now you can really ask incredibly pointed questions and get incredibly detailed responses and answers. It’s transformed- going back to your point about education, when I’m learning new languages now, it’s completely reversed the process for me.
What I can do is I can say, hey, I want to learn the Go programming language. So I say, hey, write this program that extracts information from JSON in a particular way. And it’ll write the program for me. Okay, I can run it, I can test it. But then I can say, hey, you know what, I’m learning this language, can you explain this program to me line by line? And it will do that. And then I can say, I don’t understand this particular thing about the Go language, could you explain that to me? Keep in mind that I’m familiar with these other languages. And it will tell me in the terms of the other languages what this specific concept in Go is.
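Greg’s Go-and-JSON example is easy to make concrete. The snippet below is a hedged sketch of the kind of program he describes asking the model to write — extract a particular piece of information from a JSON payload. The `Record` shape, field names, and sample data are invented for illustration, not anything from the episode.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Record mirrors the fields we want to pull out of each JSON object.
type Record struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// extractEmails unmarshals a JSON array of records and returns just the
// email addresses — the "extract information in a particular way" step.
func extractEmails(data []byte) ([]string, error) {
	var records []Record
	if err := json.Unmarshal(data, &records); err != nil {
		return nil, err
	}
	emails := make([]string, 0, len(records))
	for _, r := range records {
		emails = append(emails, r.Email)
	}
	return emails, nil
}

func main() {
	payload := []byte(`[{"name":"Ada","email":"ada@example.com"},{"name":"Grace","email":"grace@example.com"}]`)
	emails, err := extractEmails(payload)
	if err != nil {
		panic(err)
	}
	fmt.Println(emails) // prints [ada@example.com grace@example.com]
}
```

The point of Greg’s workflow is that once the model produces something like this, you can run and test it, then ask for a line-by-line explanation — for instance, why Go uses struct tags like `json:"email"` to map JSON keys onto fields.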
And the reason why I’m telling you this from the world of education is this is transformative to how students are going to learn, and not just computer science, all sorts of fields. I think we’re seeing in industry the same sort of shock and awe as we’re seeing in higher education, is that how are we going to embrace this? If you heard this thing before, ChatGPT is the most talked about thing that nobody’s used. Lots of people have lots of opinions about ChatGPT. And you ask a question like, so have you used it? No, but I’ve read about it, right? It’s when you go and use it, engage with it in a meaningful way for something that you do, like in my case, teaching computer science or building software, it’s really remarkable.
So yes, to go to your point on education- and the studies are already showing that engagement with this technology is going to have a transformational effect on how quickly students learn computer programming. Harvard, for their next semester, is building up an entire, basically, LLM-based chatbot specifically for their CS1 class to augment or maybe even replace the role of a TA in guiding students through the process of learning this material. Yeah, just like we’re seeing in enterprise software, education is going through a transformational moment with this technology.
Yeah. I always say to my team, actually, whether it’s something in the market, it’s competitors, it’s product, it’s market forces, as a professional, I want to work in times where I’m challenged. Now I don’t want to be too challenged, where I’m stressed out to the point of tearing my hair out. But if you have the opportunity, you get these things that come your way, like product, like competitive forces or whatever, and you have to deal with it, that is exciting to me. And I think given what we’re seeing around generative AI, I think we’re probably just on the cusp.
And that’s my next question to both of you. And I know you’re not fortune tellers. But where are we on the AI journey? Because I feel like, at least in a marketing perspective of tech companies, we’ve been talking about AI for a long time. We went through this AI washing two or three years ago. But is generative AI, is it different to the AI washing? Is it another step on the journey? For listeners out there, where do you put generative AI on this AI journey? And are we a quarter of the way through? I know it’s hard to tell, but where would you say we are today?
Obviously hard to say. In some of my circles, when this really started to emerge last fall, there was a sense that on the one hand, we had this huge step function. The question is with these large language models, how much better in their reasoning, creative, generative abilities for a wide range of tasks, how much better will that get? It’s truly amazing in summarization and generating code and art and lots of different things. Question is a year from now, for example, how much better can it be?
My opinion is that as we focus that technology in very specific problem areas- and you can read about this every day, people are doing that for medical diagnosis, for material science, right? I think as we focus it in these specific domains, like we are doing at SnapLogic for data integration, it’s going to get remarkably better. The core foundation of it, that’s still- because the next step that everybody’s talking about is artificial general intelligence, or AGI, like the idea of something beyond what we’re seeing today. And that’s a very debatable concept. And are we a step closer to that or not? But I would say we’re going to see progress in specific domains using this technology. The open question for me is how much better can the foundational technology become better in reasoning and generating, synthesizing? That remains to be seen in my mind.
I think the way that I think about the question of where are we on the journey is how applicable would this be to my life from a listener perspective, right? Because there is a lot of- and I’ll go back to my mobile example. I distinctly remember a time when I had a flip phone, and it was awesome. And I used it as a phone. And I could text by hitting the number key to get to the next thing. And I had a BlackBerry, and that was like an incremental improvement because it had a full keyboard and whatnot. And then I heard all the hullabaloo and the yearly reports predicting what mobility of compute would do: the combination of maturation in processing power, increasing memory, and then in particular, what that would enable for media, photographs and movies.
And I remember distinctly thinking, okay, I’ve got a great little Nikon digital camera. It’s got a great memory card on it. I take it where I go, and it’s wonderful. I still have that camera. I even have a digital SLR. I never use them. And I think that taught me an important lesson, which is that understanding where technology is on the arc is more about predicting its applicability, how it changes people’s lives. And the reason why I will spend the money on the upgraded version of a given smartphone with the better cameras is because I’ve discovered that it helps me feel better as a father, because I take good pictures of my kids. And I share them, and I get more engagement with my mother, who lives far away, when I share those photographs with her. So it actually impacts my life in a meaningful way. So that’s how I would interpret the question of how much better it will get.
And I would suggest that the only way that most people have felt this kind of technology is in a couple of different domains: in the media that you receive via the different sources that you get your information or entertainment from, in your knowledge and information access, and then in your daily life from a working perspective or even a home perspective. I think what most people don’t realize is that this technology has already been incorporated into web search by the major engines. And you probably didn’t notice it, but your search experience has gotten incredibly better over the last three or four years in terms of the quality of information retrieved, summarized, and presented.
Most of us don’t think about it, but now when you search for a given topic, you now get these little composite views in your screen that give you a summary and then links to more information. That is generative AI that is doing that when that happens. Similarly, when we look to navigate somewhere now in the primary navigation and mapping applications, that is also generative AI creating your access to information about a business or entertainment venue or something like that.
I think the area where we have tended not to feel these things is in our working lives, because the incorporation of these technologies into the smartphone, okay, well, that kind of invaded my brain. It made my working life better, but it was always on. And I think that is something that is going to change as far as the AI hype goes. We’ve heard a lot of hype, but the domains that AI has typically been directed towards have been sort of confined to this office of strategy, of optimization, of supply chains. Those are hidden, esoteric, small-population use cases. But it’s always been there.
I had the good fortune of working at General Electric in a division that used artificial intelligence to predict degradation and failure of industrial assets. And some of the world’s major airlines use artificial intelligence to generate predictions of how the engine will degrade. And we’ve seen the quality and stability of those machines improve. That’s what’s going to change in the next year. A year from now, we will have generative AI in our working life. You already see it with Microsoft and Google with their productivity suites. And increasingly, this will enter into our lives in other ways where now, to Greg’s point, it changes the way you learn, changes the way you create.
And the reason why it does that is because the huge leap forward with transformers and the generative AI we’ve worked with is text to text, or text to image, or image to text. And these are the fundamental building blocks of most business, is we exchange documents, we exchange mail, we exchange information, and that information is the text. And so I think we are, I don’t know, 2% into what that will be. We’ve barely seen anything yet.
But I’m a complete optimist on what this will do, because I think what we often say in the work context is that we’re better editors than we are writers. We’re often good at editing things and making them better based upon our experience, our exposure to our professional domains. And I think allowing us to focus on that portion of the work, which really gives the competitive lift or improvement, is all to the good. We will then not be spending as much time in the unpleasant initial, bad-first-draft phase of any given piece of work product. And that information work is everywhere; that is the professional classes across the world.
And that will then give, I think, a huge equality of opportunity bump broadly in workforces across the planet, because, let’s say, English is not your first language. Now it doesn’t really matter, because you can actually take something written in your native language. I don’t know if you’ve played around with translating using these models, but it’s beautiful. They are now on a par with expert human translators for relatively tactical tasks. And we saw during the hype days that, oh, there were earbuds that would translate for you. And there are things that you can use. I was recently traveling, and my high school Spanish is very poor, but I was able to use Google Translate, which is powered by generative AI, and these other tools to do this. So now translate that into the work context. And I think we’ve barely started to touch what will be.
And the really interesting thing is we’ve reached this point at which we have a sufficient amount of private, proprietary business data that is not exposed to these models, but now we have the tools that make it accessible to enterprise technologists. I think that is what has fundamentally changed: the core technology has actually been relatively mature for, I don’t know, five, seven years. What we’ve seen with ChatGPT is more computing power thrown at an architecture that existed, whatever, seven years ago, when the first paper came out about this stuff. And now that’s what really changes in the working world. We have access to the technology to make this relevant. So semantic search gets better; extractive summarization, abstractive summarization, knowledge work, creative work get better. And that’s what we’ll see over the next year or so.
I think that for me is the key. I mean, look, we’re in SnapLogic. We’re here to build, market, and sell integration technology. And I like what you both said. Greg, you talked specifically around different industries, like in healthcare, you’re going to see massive change, in academia. And what you mentioned, Jeremiah, about the impact on our personal lives. And we just started to delve into there, Jeremiah, about work, like how is this going to work?
And I feel like- look, I’ve been doing marketing for 25 years in the tech industry, and I’ve been talking about the future of work for 25 years in one form or another. You talked about iPhones. At Aruba Networks, we were trying to help people understand Bring Your Own Device policies, because people were bringing in all this technology that IT couldn’t control. So we’re always talking about the future of work.
What do you think the big changes, sooner rather than later, will be around- let’s focus on IT because I know there’s a lot of other things we could focus on. And I know in a marketing team, we’re already using different generative AI tools to help us. But if you’re out there and you’re listening, you’re some kind of engineer, or you’re chief data officer in an enterprise, what are the things that they need to be thinking about? What are they going to get hit with around how their work is going to change in the next 12 months? Greg, why don’t you start with that?
One practical matter, besides just engaging with the technology and understanding what it’s capable of: there’s the broad strokes of the language models that are provided via APIs, or whether you go down the path and fine-tune or train your own model, which is currently more expensive. What your touch point is with the technology is one thing that you’re going to have to sort out, what provider do we use. The other thing that we’re facing internally, and lots of companies are facing, is the rapid change of where this is headed from a regulatory perspective, GDPR. And it’s funny. It’s happened so fast that when there was the letter to pause generative AI for six months, that would be like turning off cryptocurrency. You can’t do it. It’s here.
The cat’s out of the bag.
Yeah. So there’s two things that I would say immediately. One is if you’re a professional and you’re just getting started- and we were at this AWS event yesterday, and a lot of people are at the very beginning phases. A lot of companies are getting top-down directives. There might be some grassroots, but there are now boards who are saying, okay, where is our generative AI? What are our initiatives? Where does it fit into our company? And we want to see some results in the next three to six months.
So you have to understand those touch points and just level up on how you will access the technology. Then there’s the regulatory stuff that, quite honestly, isn’t fully sorted out. But if you’re in an industry that has to deal with sensitive data, that’s going to be part of the equation. A little plug for us: this is something that we’ve been navigating for a while now, and something that we can definitely help our customers navigate as well. So the regulatory and compliance piece is another thing that you’re going to have to be aware of.
Yeah. Jeremiah, what about you? What do you see as the big changes within IT, within the roles that are there, that you think are going to be impacted in the next year or so?
Well, I’ve run internal IT departments, been part of internal IT. My first job out of college was actually as a systems administrator. So having been in the trenches of IT, I think that, number one, end user support is going to get a lot better. The ability for the IT department to create automated knowledge repositories, and for individuals in the organization to self-serve with regard to their problems, I think that’s going to improve quite a bit. So that sort of help desk experience, where you’re having to provide IT support- I think the whole of IT support, and the ability for people to manage their own problems, is going to improve dramatically. And that’s going to compress the time it takes to get to productive usage of tools. So I think that’s one area right out of the gate: IT departments should be looking at how to utilize these tools to support their end users and their access to knowledge.
Similarly, I think the ability for skill transferability, to Greg’s point. He gave a beautiful view earlier how to learn a new language, how to learn a new technology, how to learn an old technology. So the ability to hand over responsibilities for given portions of the enterprise landscape of the estate, as you will, think about that differently. The ability to have fluidity of task and responsibility through transferability of knowledge and how you would set that up, I think, is definitely going to change.
I do think that IT departments are going to be under even more pressure to adopt new tools, new capabilities, and carry more risk for the enterprise because the other thing that is less talked about is the highest-performing models in generative AI are only available via APIs. You can’t download them and run them inside your own four walls. And so now you can only consume them via the extended enterprise. And now, understanding how to manage your proprietary and confidential data and assets suddenly becomes very complex and sophisticated. That’s the one domain of this, where things get much more nuanced. And I would say that that is one area where the major providers of these systems are struggling right now.
And I do think, if we’re going to see one area where the entire domain really starts to spread out and offer us a lot of optionality, it’ll be compute- and memory-constrained models that can be fine-tuned for narrow tasks and then protect intellectual property and assets, and also constrained pretrained models where you have access to actually understand the data they were trained on. Therefore, you don’t have to deal with copyright issues, those sorts of things. So that’s one area. For getting access, identify somebody on the team to become an expert, because there are no experts on this today; identify someone to get savvy on this to get started; and find the right external counsel, because your in-house counsel probably doesn’t have any transferable knowledge or the bandwidth to learn. Those are a couple of areas that come to mind.
Yeah. So Greg, let’s go to you for this question. I’ve done a lot of these podcasts now. We’ve talked about AI in various aspects. We’ve talked about ethical AI. And I’ve talked to some nonprofits that are really focused on this. Steve Nouri developed an organization down in Australia, groups of people that come together to try and make sure that we’re acting responsibly toward the people this is going to impact the most, bringing groups of various skill levels, experience, and backgrounds together to weigh in on that kind of thing.
I have two questions. But the first one is, specifically: does generative AI help or hinder the questions around ethical AI? Or do we still have to answer the underlying challenge of how we make sure the AI is not trained on bad data in the first place, and that generative AI doesn’t make it worse? Do we still have to go back to making sure that the data we have is correct in the first place? How do you think about generative AI and ethical AI together?
Certainly, as you probably have read, there’s been numerous studies even before the recent large language model boom around bias in training data and its impact on maybe using ML to do home loan approvals or filtering out job applicants, and in the bias that has emerged, maybe unwittingly from the people who were sourcing and collecting the training data. Certainly, that problem can exist in these generative AI systems as well. And I think there’s a lot of both good private companies and, as you pointed out, nonprofits that are very much concerned about trying to ensure that we are aware of and try to mitigate ill effects from potential bias in training data.
The one thing I will say about it, the recent step function that we’ve seen in this technology has maybe created even more focus than we’ve seen before on this issue. I would say one of the exciting things that is going to help is the overwhelming amount of public and community engagement around models and the increasing number of public models. And the availability of the source data to train the models is only going to help allow the world as a whole and the community as a whole to evaluate, understand, explore, highlight where there are deficiencies in these systems.
A good example is if you think about perhaps the most successful operating system today, which is Linux by the numbers, right, is publicly available and has been publicly scrutinized and has resulted in an underlying technology platform that we all rely on for everything, for banking, for healthcare. Yes, there’s other operating systems, but it is a widely used, depended on operating system that is public, open source.
And so I think the generative AI, I don’t see it as necessarily helping or hurting. But I see the movement around the open sourcing of models and the community engagement allowing us to help surface issues and help address them going forward.
So the question I was going to ask you, and I think you alluded to this earlier: when generative AI started to take off very publicly, there were some, call them, innovators around the technology world who called for a pause, for altruistic reasons or not. What was your response to that? And I’m not saying that it didn’t have merit. I’m not saying it was a stunt or a marketing tactic or something like that. But when you see that kind of response come out, someone asking to slow down- we just talked about it, you can’t really stop the train at this point- how do you see that? Was it just publicity, or was there a real concern there, looking at it from your academic as well as your data science perspective?
I think it largely was a very genuine concern about humanity. I think that when you see- I forget the quote. But when you see technology indistinguishable from magic, when we’ve seen this incredible leap that by all accounts is not immediately explainable, and to be honest, even the structures, these deep neural networks that we have built with the transformer technology that Jeremiah was talking about, we built these things, do we fully understand them? That’s still not clear.
So when you see this leap, I think there was just a, wow, where’s this going to go next? We took such a big jump forward in the ability to have something reason and summarize and act in a way that is mimicking a lot of human response. Maybe it is just a natural reaction like, well, since we can’t fully explain it, it is truly amazing, where does it go next? And I think that’s fundamentally where the concern came from is could these AI systems start making more decisions? And should they be making more decisions? And where are they inserted into our processes, like healthcare and government and policing and these sorts of things?
By the way, I don’t deny that we, as a society, have to be very vigilant to understand where and how these systems are used. I don’t think anybody’s denying that. So I think the pause all came from a genuine concern about the future of humanity as it intersects with this technology. I think as we pointed out, it’s out there. So now the question in my mind is, how do we respond as a society, how do we respond as governments to understand where do we feel this technology is going to benefit society and where do we see the greatest risks of society? And there’s a lot of work to be done, just like with cars and windshield safety and things like understanding the health risks of cigarettes, we’ve had to deal with lots of things that have been harmful to society. And these are things that we will continue to have to address.
Yeah. Well, what at least gives me some comfort is that I know the things that you’re looking at. And again, you’re shaping the minds of the future engineers and data scientists with the work that you’re doing. And the fact that you’re thinking about it that way, the fact that I know a lot of the processes we’re going through at SnapLogic around things like the SnapGPT product are to make sure that it’s at least as safe and secure as any product we release should be.
But again, as you point out, I don’t know if it was you or Jeremiah that said we’re 1%, 2% into this journey. I think we all have to look at the capabilities, look at the opportunities, but also make sure that we continue to act responsibly for everyone’s benefit.
Well, I appreciate that. Those were great answers. Just as we finish, Jeremiah- and this podcast is not about plugging SnapLogic's products. That being said, it is a good opportunity, with the advent of generative AI and what SnapLogic is doing around generative integration. Tell me why you're excited about what we will have launched by the time this podcast comes out, which is the general availability of SnapGPT.
I could not be more excited about the launch of the work that our team has been slaving over for the past couple of months. Really, what we're opening up is the possibility for the individual with the business problem to describe their problem and move much more quickly to a system that delivers what they need. That's something we've always struggled with in the project delivery, application delivery world: taking someone's intention or their requirements and translating that into a working system. And so what we're able to do today is describe a system-to-system integration or a workload and have the first draft, the prototype of that workload, be generated in seconds. Then you move to editing, not creating, which removes all of that anxiety, burden, and challenge of building something from scratch.
And speaking to our early adopters, it's amazing to hear people say, normally, this would have taken me a day or two, and I created the first draft in an hour. And these are difficult, complex pipelines. Seeing that kind of time compression- I just think about somebody getting home to their family sooner, to live a fuller life. Again, it comes back to how I internalize it: this technology really matters when it improves not only the business and people's ability to deliver their work, but our lives fundamentally.
And I think we have that in our grasp now with SnapGPT and the work we're doing because, ultimately, as we've seen from many well-publicized sources, 50% to 60% of all work in IT is integrating systems together. And if we can erase that and drop it to zero, just think what a change that makes to people's daily lives, their ability to focus on their customers, their colleagues, and delivering value rather than the plumbing that brings the pieces together. I'm just super excited about that.
And as you said, spending more time with their families would be a nice fringe benefit. Jeremiah, thank you so much for joining us on the podcast today.
Thanks for having me, Dayle. I really appreciate it. And I’m excited to continue being a loyal listener and see where this journey takes us. It’s a tremendous time to be doing what we’re doing. And I feel blessed.
Sounds good. Greg, thanks for being part of the podcast today. And I think at some point in the future, we’ll hopefully get you to make a return appearance on this.
I’d be happy to. Yeah, I’ve enjoyed it.
Thanks for joining us, Greg.
So that's it for Season 2 of our podcast. I think this is a great way to end the season. Definitely some exciting developments. When we started recording Season 2, a lot of the generative AI wave was just hitting. And look at where we are now. We're three or four months down the road, and the capabilities that we're seeing come out, not just from our own company but from so many other places. The technology, the innovation that people are able to develop now, is just staggering, absolutely staggering.
I think about what my children are going to be experiencing over the next 5, 10, 15 years. And it's exciting. I get nervous about some of these things too, because we all have to make sure that we continue to act responsibly. Some of this technology, we don't know where it's going to go. And I hope, and I believe, most of the organizations out there are going to act in the same way. But for us, at the end of this Season 2, thank you for listening to this podcast. This has been Automating the Enterprise. I'm Dayle Hall, the CMO of SnapLogic, and we'll see you on the next one.