
Podcast Episode 18

Challenges with AI, Generative AI and Responsible AI

with Bill Wong, Principal Research Director – AI and Data Analytics at Info-Tech Research Group

Looking to stay ahead in the ever-changing world of AI? In this episode, Bill and Dayle discuss responsible AI and the challenges of self-governance in the enterprise. They also explore the most exciting upcoming innovations in the field of AI, from personalized movie-making to the creation of worldwide groups to establish rules and responsibilities for ethical AI.

Full Transcript

Dayle Hall: 

Hi, you’re listening to our podcast, Automating the Enterprise. I’m your host, the CMO of SnapLogic, Dayle Hall. This podcast is designed to give organizations the insights and best practices on how to integrate, automate, and transform their enterprise.

Our guest for today has extensive experience working with some of the biggest names in the tech industry, like IBM, Microsoft, Dell, Oracle, you know, those small companies we’ve all seen a million times. He’s a well-respected key opinion leader and a key contributor to AI research. It’s our honor to have on the podcast today the AI and Data Analytics lead at Info-Tech Research Group, Bill Wong. Bill, welcome to the show.

Bill Wong:

Thank you, Dayle. Great to be here.

Dayle Hall:

What I usually do as we start these, I think it’s always good to give people a little bit of background: how you came into this, how you became this research director, what brought you into AI. Everyone has a different type of background. But give us some context, give me the CliffsNotes of your career and how you ended up where you are today.

Bill Wong:

It goes back to university days. I was a pure math major, just fascinated with how to optimize algorithms at that time. I took computer science and I hated it. I just did not want to code. So I went into business, took an MBA, joined IBM. All my roles were data-type roles. I spent a lot of time in development. I was kind of the executive-briefing go-to person, with a lot of opportunities to meet different customers from around the world. And then after that, I wanted to work with technology that was more pervasive, I guess, and went to Microsoft to work on their big data offerings. And after Microsoft, I went to Oracle to work on their Exadata systems for analytics.

And then I got an offer from Dell to lead their AI practice. And that’s what I was doing for the last five years. And that was great, best of breed. Dell played nice with everybody. I didn’t have to say, hey, IBM is the best or Microsoft’s the best. I could actually pick and choose what was best for the customer. And now I’m at Info-Tech. Again, very agnostic, focused on solutions that customers can deploy based on their unique requirements.

Dayle Hall:

Okay. So that’s good, because what I like about your background is you’ve been on the vendor side, and now you’re doing a lot of client advising. You’ve got some good background; you’ve seen both sides of the equation. On a couple of these other podcasts, it’s been surprising: everyone we’ve talked to has come up through AI in almost a different way. We’ve had software engineers, math professionals like yourself. We’ve even had someone who came up through HR; he’s doing AI for people analytics and so on. I love the background because it gives us a different perspective. You’re clearly much smarter than me on the math thing.

So let’s move on. We’ve got three different sections I’m going to talk about. First, we’re just going to start with common challenges of working with AI. Then we’re going to go into probably the hot topic of the day, which is generative AI. And I don’t know when this podcast will specifically get released, but everyone will have definitely heard of ChatGPT and all the new incarnations. So I’m excited for that. And then we’ll talk about responsible AI at the end.

So let’s start with a couple of things around some of the things that you’re working on right now. What current projects have you got? What are the big things that you’re working on with some of your clients?

Bill Wong:

As you mentioned, there’s a lot of buzz in generative AI. So we’re working with a number of customers, and they’re asking us to help them develop their AI strategy. But with generative AI, it’s kind of in their face. And what I’m seeing right now is IT people being asked by the president of the firm, hey, can we use this today? There’s so much hype out there.

So what I’m working on right now: there’s a lot of great information out there to help customers manage change, and what I’ve attempted to do is take the best of the maturity curves out there. Not that it’s totally new, but I think generative AI causes people to ask, is this legal? Is this ethical? And so, hopefully, folks, when they come to the website, we have this maturity curve that we’re presenting to our customers.

The thing that’s different about this is the axis of maturity. What we see most companies do is first focus on, can this work? They’re very technically focused. Have we got the right data? Have we got the right model? Have we got the right infrastructure? What’s the performance of our predictions, etc.? Naturally, you’re just trying to make it work. But over time, as people get more comfortable with more success (and by the way, you can make a ton of money just focusing on the technical side), as the technology gets more pervasive, people ask, hey, is your algorithm fair? Is the data that I’m providing you private? These types of questions, the AI community has generally called principle-based AI. And so what I want to do is just put it out in people’s faces and say, hey, when you’re looking at AI, you have to start thinking about principles as part of your maturity.

Dayle Hall:

Okay, we’re on a podcast, so people are listening. We will try and make sure that if you’re looking for it, you find your way to this maturity model that Bill’s talking about. But just imagine it’s a journey from technology-centric to principle-based, as Bill mentioned. And over time, you move along, I’ll call it, a flattened S curve that goes all the way from exploration to incorporation. And then very quickly, once you’ve got proliferation, you get to optimization. And then the last piece, which is a little bit longer, is true transformation.

So could you describe for us, Bill, some of those initial journeys? What does exploration and incorporation mean? And then let’s go all the way after that to transformation. What is a true example of transformation along your maturity model?

Bill Wong:

I do spend a lot of time still on exploration. Hospital research centers, universities, they get grants. They’re trying to figure out, hey, can we do this on the existing equipment? Or do we need specialized AI accelerators to do this? In a perfect world, you would get the AI accelerators, especially for the more difficult problems or when you have billions of data points. You can’t do that with your typical server infrastructure. So I do spend a lot of time taking them through best practices on what hardware infrastructure is required and what software is open to them to develop their models.

So I do a lot of that. And then incorporation, that’s usually helping them develop a POC based on what kind of predictions we want this model to make. As you can imagine, the bulk of the folks start off there. Then proliferation: there are a number of great companies out there who have adopted AI.

Now as you get into optimization and transformation, that’s where a lot of companies fall off, because this is where we become more principled, and you think about things like, on these important, let’s say, financial decisions, I want to make sure a bank makes an unbiased decision on whether this person should get a loan or not. So you have to ask the model, tell me how you arrived at that decision.

Explainability or transparency, these are things that just a few years ago, most people did not really build for. Especially if you’re using unsupervised algorithms, that’s kind of the challenge. I think the literature out there is focused more on this now, but it’s still underdeveloped. If I ask people what tools they’re using for things like data classification, or how they’re governing data privacy, these are questions that people don’t really think about. They were just concerned about, let’s make the model work. Let’s make it perform. Now you’re asking me things on how do I preserve data privacy, how am I making sure that I’m transparent about how I’m making my decision. So this field is still growing. It’s still kind of like the Wild West.

And then for transformation, there are a number of companies I would say are great at being what I call AI first. Everything is driven by giving value to their customers using AI technology. But are they also doing this with guiding principles from being responsible? And that’s tough. When I present this, the leading firms, my hat’s off to them, have all made spectacular mistakes. You can take a look at all the big tech companies, and all of them say, okay, this is how you do it. But they’ve all made spectacular mistakes. I’m not going to name them, but everybody’s heard of companies using AI to take a look at resumes, and then all of a sudden, they find it’s quite biased.

And you think it’s so obvious. But again, people are so focused on getting the model to work that they forget about these principles. And nobody’s immune to this. You really have to have somebody thinking about this from the start.

Dayle Hall:

Yeah. And I like that. I actually did a podcast with someone from the HR space, as I mentioned, and we talked a lot about the bias within that, particularly around the HR space. And if you’ve got bias coming in, AI doesn’t fix it. It almost amplifies it a little bit more.

You said something that was really interesting. I’ve never heard it described that way before, unsupported algorithms. A lot of people have talked about the explainability and the transparency of the AI, but what specifically do you mean by unsupported algorithms?

Bill Wong:

I mean unsupervised.

Dayle Hall:

Unsupervised.

Bill Wong:

There’s supervised learning, where humans help train the model. But when it’s unsupervised, you let the algorithms make a decision on their own. It’s kind of difficult then to explain how the decision was arrived at. For example, and one would argue this is fine, image-based processing is typically done with unsupervised algorithms. So how can you tell a dog from a cat? Do you really need to know how that was done? Is that really important? Many will say no, just as long as it’s very reliable, and they’re okay with that. But for things like whether or not a person should get approved for a bank loan, or how much they should get charged for insurance, those kinds of questions, people are almost demanding some level of transparency so that the decision is totally unbiased.
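[Editor’s note: for readers who want to see the distinction Bill is drawing in code, here is a minimal Python/scikit-learn sketch. It is our illustration, not Bill’s, and the loan data is purely synthetic. A supervised model trained on labeled outcomes can at least report which features drove its predictions; an unsupervised model finds structure with no labels to explain.]

```python
# A minimal sketch of supervised vs. unsupervised learning in scikit-learn.
# The "loan" data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # pretend features: income, debt ratio, credit history
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # pretend label: loan repaid (synthetic rule)

# Supervised: humans supply labels, and the fitted model can report
# feature importances, a first step toward explaining a decision.
clf = RandomForestClassifier(random_state=0).fit(X, y)
print("feature importances:", clf.feature_importances_)

# Unsupervised: no labels at all. The algorithm finds structure on its own,
# which is why explaining *why* a point landed in a cluster is harder.
clusters = KMeans(n_clusters=2, n_init="auto", random_state=0).fit_predict(X)
print("cluster assignments:", clusters[:10])
```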

Dayle Hall:

What I’m always pleased about when I talk to people who are influencing companies around this growing field, and we’re still in the infancy of AI, is that I believe people are trying to do the right thing. They’re trying to act responsibly. On one of the podcasts that we had recently, I talked about how AI is not the same as social media when social media sites first hit. It felt like we really did hit some walls and have some major problems before the social media sites said, okay, we’ve got to do something about this. I feel like the people involved in AI have a little bit more caution or a little bit more responsibility to make sure that we do this right. That’s just my opinion. I don’t know whether you see that too.

Bill Wong:

I share that sentiment. It’s good that there’s this awareness level, and people know it’s important. The challenge is to implement this, which gets back to this maturity curve and responsible AI. I think everybody has to try to do the right thing. We can’t expect laws to come in to govern it; that just historically has never worked. And coming from big tech companies, some of them think about it, but usually big tech, they can’t govern themselves, just to put it politely. They hire people who are good at sales, good at programming. They don’t hire people who think about social implications. That’s why Facebook, or Meta, is an easy target for what’s wrong: they simply don’t have the right people making those decisions. And I could go on and on, but I think the big techs would agree that they’re not primed for that. And so we have to take responsibility ourselves.

And like it or not, if they don’t do a good job of it, governments will come down. They will lag, of course, but they’ll come in with a very blunt object and say, hey, it doesn’t matter if we’re going to stifle innovation. There are just too many bad actors, too much bad behavior out there. We’re coming in. That’s the downside if we don’t do something ourselves.

Dayle Hall:

Yeah, agree. Before we move forward, I wanted to ask you a couple of questions. Because we talked a little bit about the challenges of putting AI in an organization, and you talked about where you’ve worked and some of the clients. I want to ask you a couple of questions specifically. We ran a survey around how employees perceive AI. And I’m not going to go through the whole thing. I have two questions specifically. One is about 70% of the people said that they feel their enterprise, their company, is open and has the right culture to be ready to have AI in the business. But only a third of them think that they have the skill set or the capabilities to actually implement it in the right way and get the most out of it.

So most of us think that we’re ready, but only a small portion think that our companies are ready to address it. As you talk to clients and customers, is that something that resonates with you? Do they see that? Do they not see it? How would you advise them to potentially get ready or be in the best position, particularly on that skill set piece, if they’re going to look to do AI?

Bill Wong:

I have the same perspective. I’ve seen that. And hopefully, when they say the culture is ready, that means they’re a data-driven culture, that data is treated as an asset that helps them make better decisions. And I have seen companies do innovative things. One hospital research center hired a physicist, let him get more familiar with algorithms, and he became their homegrown data scientist. And I’m optimistic because more and more of the tools out there take a no-code approach; you don’t need to be a developer. That won’t eliminate, but it helps diminish, the challenge of getting skills. You still need people who are skilled, hopefully within that industry. But yes, it is a bit of a barrier.

I think generative AI reduces that barrier a lot. So again, I’m expecting to see a lot more innovative tools where you don’t have to be the data scientist or the Python programmer.

Dayle Hall:

Yeah, for sure. The second question I had from the survey- so we asked people what they thought the top benefits were. And of everyone we asked, about half of them said they thought it would save time. Just under half said they also thought it would improve productivity. And probably about a third said that it would reduce risks or errors in their work. So my question for you is people in enterprises, that’s what they think it can do, what they would probably like it to do. When you talk to customers and clients, or in your experience with the companies you work for, what do you advise them around setting the right goals first? Do they come to you with those kinds of things, like this is what we want to do, we want to improve productivity? Do they have specific use cases? How do they approach you? What are they looking to get out of potentially using AI?

Bill Wong:

The most value I can give is to help them develop a strategy for AI. And with any kind of disruptive technology, you really need a business strategy. It’s not a standalone silo technology you let the IT people play with. This requires the line of business, the stakeholders of the company, to figure out, what is our company mission, and how does AI align to that? Are we a company that’s known for innovation? Then, yes, AI can be used to augment their work, make them more productive. Are we a company where the lowest cost is the best thing we can do for our customers? Then AI is likely there to automate things, sometimes to replace a lot of things that humans do. So it all starts with the strategy, from a business perspective. Once they have that in place, then how you use it and the policies are a derivative of that.

Dayle Hall:

Right. No, it’s good. Again, there are so many of these types of surveys. But what’s interesting for me is getting your perspective, because you’re probably engaged in more of these conversations across different businesses. So you bring a unique perspective to that. Is there a project you’ve worked on recently that takes some of that guidance you’ve been giving around the business benefit? You don’t have to mention any names, but one where you’ve really been able to show that AI has helped them as an organization, whether it’s exploration or proliferation, all the way to transformation. It doesn’t have to be the full thing, but can you give some examples so the listeners can also be thinking, ah, that’s a project I could do, or our enterprise could potentially do that kind of thing?

Bill Wong:

I’ll give a shout-out to an institution, the University of British Columbia. They have a set of researchers there. We had this little thing called a pandemic a couple of years ago.

Dayle Hall:

I’d heard something about it.

Bill Wong:

And they were tasked, as many were, with the question: can you create new therapies or treatments? If you’re in the field of drug discovery, it’s a horrendously complex environment where hundreds of drug candidates are looked at every year. And you might get a new drug after 10 years and $3 billion worth of research. So that’s the kind of pipeline. And the way they wanted to move the needle was, can we do this faster? If you followed a lot of the COVID research on the virus, a lot of it was spent on how to negate the action that makes the virus active. There’s a field of study called molecular docking. Think of the space shuttle as an analogy: when we send the space shuttle up, before the astronauts can get to the space station, they have to dock, and that dock allows them to enter the vehicle.

So when it comes to viruses, what you want is a molecule that docks with the virus and negates its activity, hence molecular docking. The challenge is there are billions to test. So what they did was take the popular docking programs out there, which predict how well a molecule will negate a virus, and they calculated there wasn’t enough compute power. Even if they had all the compute, it would take years to do all this. So the thought was, why don’t we use deep learning to predict docking scores, because docking is such a compute-intensive thing; it takes tons of compute to do that. Back when 40 billion was a big number (I think it’s still kind of big), that’s what we did, 40 billion. We used deep learning to say, we’re going to predict it. We’re not going to physically run the simulations. We’re just going to use deep learning AI algorithms to predict that.

And by doing that, what they were able to do rather quickly was create a finite list of top candidate molecules that you should test to see if they’re viable against the virus. The work was published in Nature, a well-respected scientific journal; they were the first to publish this, and then slowly you started seeing more and more researchers use the technique, which they call deep docking, a hybrid of molecular docking and deep learning.
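[Editor’s note: the surrogate-model pattern Bill describes can be sketched in a few lines. This is our hedged illustration of the idea, not the UBC team’s code; the fingerprints, the stand-in docking function, and the model choice are all hypothetical.]

```python
# Hedged sketch of the "deep docking" idea: use a learned model to predict
# docking scores so you only run the expensive simulation on top candidates.
# Fingerprints and scores here are random stand-ins, not real chemistry.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
library = rng.integers(0, 2, size=(100_000, 256)).astype(float)  # fake molecular fingerprints

def expensive_docking_score(fp):
    # Stand-in for a real physics-based docking program.
    return fp[:, :32].sum(axis=1) + rng.normal(scale=0.5, size=len(fp))

# Step 1: "dock" only a small random sample the expensive way.
sample_idx = rng.choice(len(library), size=2_000, replace=False)
sample_scores = expensive_docking_score(library[sample_idx])

# Step 2: train a cheap surrogate model on those sampled scores.
surrogate = MLPRegressor(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
surrogate.fit(library[sample_idx], sample_scores)

# Step 3: predict scores for the whole library and shortlist the top
# candidates, which are the only molecules you then dock for real.
predicted = surrogate.predict(library)
top_candidates = np.argsort(predicted)[-1_000:]
print(f"shortlisted {len(top_candidates)} of {len(library)} molecules")
```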

I feel most proud of that because when I talk to people who are doing AI in life sciences, I think we all feel that we’re trying to make a difference in people’s lives. It’s more than just making money. I’m sure people can make money, of course, through drugs, and sure, they should be compensated. But the majority of the people I work with all felt that we’re making a difference, or at least trying to make a difference, in people’s lives. They were kind enough to include me as a contributing author. For me, that was one of the highlights; I really enjoyed that project.

Dayle Hall:

Yeah, no, that’s great. We talked about this a little bit earlier; I don’t think enterprises themselves have bad intentions. But I do like hearing about things like that, using AI for medical research. Hopefully, this is going to be something that’s going to make the world a better place. And that sounds a little bit motherhood and apple pie, I know. But if anything can, this is what it should be used for. I have a 12-year-old and a 15-year-old, and we want to hope that we leave the world in a better place in as many ways as we can. And this is an interesting part of it.

And that brings us on to probably one of the hottest topics of the day, which is generative AI and, obviously, ChatGPT, which my 15-year-old in high school has asked me whether she should use for her homework, to which I said, I don’t know what the school’s policy is yet. But I said it’s an aid, it’s not a solution. It shouldn’t be doing the work for you. You should still understand it. So we’ll see how that goes.

But with that and the advent of other AI platforms, what do you think it should be used for today? And what are you looking forward to in the near future? What do you think the possibilities are of something like ChatGPT?

Bill Wong:

I’m really excited about the possibility of generative AI bringing that kind of creative power to the masses. Like probably a lot of people, I’m spending a lot of time trying to fool the system, see where it falls down. But I think over time, it’s going to continue to get better and better.

I think in the normal use cases you see AI for, you’re going to see generative AI also contribute. The point of entry will be easier. Getting data, as I mentioned before, is a universal problem I’ve seen. But what happens if you could ask the system to generate synthetic data for you to test your model? That barrier is certainly reduced, though there are, let’s say, questions about how good synthetic data is. But again, I think over time, this technology shows real promise of being proliferated.
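[Editor’s note: as a toy illustration of that idea (ours, not any specific product), you can fit a simple generative model to a small real dataset, sample as many synthetic rows as you need to exercise a pipeline, and then sanity-check how faithful they are.]

```python
# Toy sketch of synthetic data generation: fit a Gaussian mixture to a small
# "real" dataset, then sample as many synthetic rows as you need for testing.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Pretend real data: two columns, e.g., income and debt ratio.
real_data = rng.normal(loc=[50_000, 0.3], scale=[15_000, 0.1], size=(200, 2))

gmm = GaussianMixture(n_components=3, random_state=0).fit(real_data)
synthetic, _ = gmm.sample(10_000)  # 10k synthetic rows to stress-test a model

# The open question Bill raises: how faithful is it? A quick sanity check
# compares simple statistics of the real vs. synthetic columns.
print("real means:     ", real_data.mean(axis=0))
print("synthetic means:", synthetic.mean(axis=0))
```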

That’s where guidance is needed. Take education: we’re talking to institutions right now who are struggling. They’re trying to figure out what to do. There are some that are totally banning it, so you can’t use it. And long term, banning technology generally doesn’t work. People will find workarounds. The education institutions are going to be pushed to transform the way they educate. It might be a little more old school: talking to people, asking people to communicate in small groups, doing what you can on site without accessing any kind of compute platform. Because that’s what they’re going to do in real life.

Dayle Hall:

You mentioned it. I don’t think stopping or prohibiting innovation works; whether we like it or not, it will find its way in there. And this goes back to what we talked about earlier, Bill, which is we should all be responsible with it. We should all understand that we each have some level of responsibility. It’s not just saying all the higher-ed institutions have to put in policies or ban it or figure out how to work with it. We should all use it in the right way. And I think if we can act that way. Like you said, big companies make mistakes. We’re all going to make some mistakes. Some higher eds are going to make mistakes. But I think that’s going to be okay.

Outside of higher ed, do you see any specific industry, we call them vertical solutions, that kind of thing, really taking this and driving it forward? Obviously, tech companies picked up on it pretty quickly. I’ve started to get emails from sales development reps saying, hey, we’re the ChatGPT of sales pipeline. I’m like, stop it, stop, stop. We don’t need ChatGPT-washing. But outside of tech companies, who do you think has the potential to really drive this forward?

Bill Wong:

I think the low-hanging fruit is the industries that are text based. That’s the easy one. There are other ones, like image-based video, certainly powerful things, but I think the easy one is text. Examples of that are companies you hire to create, let’s say, marketing copy or new product descriptions. ChatGPT could do the work of 100 people. You don’t really need those kinds of tasks to be done by humans anymore, at least. In the same vein, in the heavy-text space, I’ve started seeing some law firms carefully looking at it and training it; you can tailor, let’s say, ChatGPT upfront with a persona you create that understands legalese. So expect that change may come as well.
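[Editor’s note: tailoring a model with a persona typically comes down to a system prompt. Below is a minimal sketch using OpenAI’s Python client; the model name and prompt wording are our assumptions, and this is illustrative, not a vetted legal workflow.]

```python
# Minimal sketch of giving a chat model a "legalese" persona via a system
# prompt, using OpenAI's Python client. The model name is an assumption;
# nothing here is legal advice or a vetted legal workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; substitute whatever model you use
    messages=[
        {"role": "system",
         "content": "You are a paralegal assistant. Answer in precise legal "
                    "terminology, cite the clause you rely on, and flag any "
                    "point that requires review by a licensed attorney."},
        {"role": "user",
         "content": "Summarize the indemnification clause in this contract: ..."},
    ],
)
print(response.choices[0].message.content)
```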

Any kind of research, even in my profession, there are debates like, why hire an analyst when I can get everything from ChatGPT? But the risk and the challenge is still out there: you have to vet this information. It’s all fed from the internet, and a lot of that, we know, is biased or inaccurate. We have people to check for that. The text-based stuff, those are the kinds of industries where we’re seeing more adoption. Newspapers and journalism have started to see some of that, people using it to write blogs, etc.

Dayle Hall:

Yeah. We’ve tried a couple of things, things like blogs and how that works. It definitely needs more specificity. It needs a responsible person. Because everything can sound the same; theoretically, it is going to be the same, because it’s coming from the same source. So there’s very little differentiation if you do it that way. But there are definitely tons of advantages with it. So it’s going to be interesting to see where that goes.

As we move on from that, let’s talk a little bit about responsible AI. We touched on it a little earlier; we mentioned a couple of points already around explainability and transparency of AI in general. And I think I said the wrong word earlier: it’s unsupervised algorithms.

Bill Wong:

That’s a type of AI algorithm. So, the basics of responsible AI that we’ve seen: there are a lot of nonprofit organizations and government initiatives, and they’ve coalesced on six major themes. When we talk to customers, the things we talk about are privacy, meaning does it respect data privacy and personal privacy? Is the algorithm fair, unbiased? Then, as you mentioned before, explainability and transparency.

The last three are about the system that you build. Is it safe and secure? Sometimes that’s referred to as robustness. Then governance: governance of your data, governance of your model. And the last one, very importantly, is accountability. You can’t just deploy this and say, hey, it’s just the algorithm. Somebody, somewhere, some organization has to be identified so that if something goes wrong, if there’s an error, that’s the organization you go to, the one that takes responsibility for those actions.

That’s the basis you start with, and then certain industries, like healthcare or legal, might have much more in terms of compliance issues that would fall into the realm of responsible AI. But that’s the foundation we see in common regardless of which industry you’re from.

Dayle Hall:

Without giving any names, when you talk to clients or customers, or when you were going through this kind of work in the vendor organizations, does the concept of responsible AI come up proactively? Or do you hear things where you then have to say, well, look, we have to be thinking about how we use it in this way?

And again, what we talked about earlier is enterprises and people in enterprises, they just find it hard to self-govern for this kind of thing. So I’m not saying everyone should be on top of it, but do you feel like you still have to make those points when you talk to clients and customers?

Bill Wong:

I guess it’s the latter. It would be a nice world if everybody came to the realization that this is best for the customer and the business, but I don’t assume that. And if I’m being totally truthful, some people, when you say the word ethics or principles, think it’s going to slow down innovation, slow down adoption. Some. But I’m optimistic that through education, there is a business case to do this: if you do not do this, it is bad for business, and in the long run, it is in your best interest to adopt these principles.

Dayle Hall:

Yeah, no, I think that’s fair, because a lot of this is new innovation, and it’s things that we haven’t necessarily experienced. I don’t expect everyone to know all the things they should be doing with ethical AI or making sure they’re responsible. I’m more encouraged that people like you, and a couple of other AI luminaries we’ve talked to on this podcast, are thinking about it and trying to make sure we get it right. There was one we talked to recently who is creating worldwide groups where we, the people, are the ones talking about what those rules and responsibilities should be. And I think that’s great. We should be involved in this from the start.

When you go into these organizations, with the things that you ask them, do you give them the principles upfront? Or do you not assume, and just say, okay, SnapLogic, what are the questions you should be asking yourselves as you start to use AI?

Bill Wong:

Have a business strategy for AI, and then select your guiding principles going forward. This guides you on what technology you select, and it really comes down to, when you develop the model, whether you are thinking about these principles. So you want to make sure, when you get data, that you worry about things like data lineage, so you know where the data comes from, and if anybody asks, was this trustworthy data, is it unbiased, you can answer. And you want to pick tools out there that can help reinforce and implement the principles you’ve adopted.
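[Editor’s note: as a small illustration of the data-lineage point, here is a sketch under our own assumptions, not any specific governance tool. Even a lightweight manifest recording a dataset’s source, a content hash, and a timestamp lets you answer “where did this training data come from?” later.]

```python
# Lightweight sketch of data lineage: record the source, a content hash, and
# a timestamp for every dataset a model is trained on. File paths and the
# manifest format here are illustrative, not any particular product.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_lineage(data_path: str, source: str, manifest: str = "lineage.json") -> dict:
    digest = hashlib.sha256(Path(data_path).read_bytes()).hexdigest()
    entry = {
        "path": data_path,
        "source": source,                 # e.g., vendor, survey, public dataset
        "sha256": digest,                 # detects silent changes to the data
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    log = json.loads(Path(manifest).read_text()) if Path(manifest).exists() else []
    log.append(entry)
    Path(manifest).write_text(json.dumps(log, indent=2))
    return entry

# Usage (hypothetical file): record_lineage("loans_2023.csv", source="credit-bureau export")
```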

Dayle Hall:

Yeah. No, it’s interesting. Okay, so last question as we wrap up the podcast. What is the most innovative or most exciting thing that you’re expecting or looking forward to over the next two to three years around this specific field? It could be something that will be practical, it could be something totally visionary. But as we leave the podcast, what is Bill really excited about that could be coming to our screens anytime soon?

Bill Wong:

It’s possible- okay, so this is just my own desire for the technology. I think everybody loves a great story, and they have challenges articulating it. This technology has the ability to say, let’s say, I want to create my own Star Trek movie. I want new starships in the vein of what you see in Starfleet. And you define the characters, you define the plot. And you could actually have the software create that. That, to me, would just be the dream. And I think the building blocks are there; it’s just a matter of time till we have our holodecks. But yeah.

Dayle Hall:

I always wanted to be Jean-Luc Picard. So maybe I could be- hopefully with a little bit more hair. But in general, I think it would be great to see something like that come to fruition. Again, I look at these things and think of my 12-year-old and 15-year-old, when they start to work, whatever it is they’re going to do, and what a world they’re coming into with some of these possibilities now.

Bill Wong:

I kind of envy them there. But hopefully, we’ll still be around to see a lot of that. And it isn’t going to be that far in the future, I think.

Dayle Hall:

I hope so, too. Well, Bill, thank you so much. Appreciate you being part of this podcast today. Again, I love these discussions. It takes me away from my daily grind of trying to sell software to people and market software. So I really appreciate it. Bill, thank you so much for being on the podcast today.

Bill Wong:

Thanks, Dayle. Thanks very much. And folks, any questions, please feel free to reach out.

Dayle Hall:

We’ll definitely make sure your contacts get out there when we post this online. Thanks, everyone, for listening in on this latest podcast of Automating the Enterprise. We’ll see you on the next episode.