
Podcast Episode 20

AI4Diversity and Our Future of AI

with Steve Nouri, Founder of AI4Diversity

If you haven’t heard about ChatGPT, you’re being left behind. 😳 Listen to this podcast episode with Steve Nouri, Founder of AI4Diversity, as we discuss how AI is impacting our lives. He also talks about his community AI4Diversity, which connects people to make AI more diverse, inclusive, and people-driven.

Full Transcript

Dayle Hall: 

Hi. You’re listening to our podcast, Automating the Enterprise. I’m your host, Dayle Hall, the SnapLogic CMO. This podcast is designed to give organizations the insights and best practices on how to integrate, automate and transform the enterprise. 

Our guest today is an influential data scientist leader. And I’m not joking when I say this, he has over a million followers on LinkedIn, numbers I can only hope to ever achieve in my lifetime. He is the pioneer of an initiative, a nonprofit called AI4Diversity, which has a mission to use AI to benefit people globally, which I think is a great mission we’re going to dig into. He’s a member of the Forbes Tech Council. And he’s one of Australia’s ICT Professionals of the Year. So not only altruistic, looking at nonprofits, but also an award winner. 

We’re really honored to have, on our podcast today, Steve Nouri. Steve, welcome.

Steve Nouri:

Thanks for having me, Dayle. I love to talk about innovation and the latest trends, and I definitely hope our audience will enjoy it.

Dayle Hall: 

I think they will. I think they will. You have a very unique background, very unique perspective that we’re going to dig into. So let’s start a little bit with the background before we actually get into it. Like how did you- tell me a little bit about your career, the organizations you’ve been in, how you’ve thought about it and culminating in getting to this AI4Diversity nonprofit.

Steve Nouri:

My journey, at least in the beginning, was a very straightforward, cookie-cutter, boring one. I sometimes talk to people who were, say, doctors, and then accidentally became AI scientists, or changed their course of action. But I started as a software engineer, with a Bachelor of Software Engineering, coding all my life. And then at some point, I was on a project, a hospital information system, full of data, a lot of different tables, different records. And we were thinking, what more can we do with this data, instead of just building random stuff?

And what happened is that we found the best thing to do was data mining and some processes that were new back then, and I'm talking about 15 years ago, when data science was not even a word. So I did my Master's in Data Mining. And from there, I transitioned to the world of data science and AI, heading teams at a couple of different organizations here in Australia. I was head of data science at the Australian Computer Society, at Data61, and at a couple of scale-ups and start-ups here in Australia. Recently, it's mostly been advising different companies on their product, their product-market fit, their customer acquisition and marketing. At the same time, I'm working with great teams in different parts of the world.

So it’s just a very interesting journey, I guess, at least for me, from coding to managing teams, and also, starting to do more of a community work. And this is a dream come true to me that I’m now in between things that I love, community, teaching, AI, and that’s the right triangle for me. I guess that’s what I enjoy the most. And I discovered this when I was a lecturer at university. To be very honest, it was like, that’s the most fulfilling thing that I was doing after work. So I was working nine-to-five. And after that, after five, I had lectures at UTS for postgraduate students. And that was the time that I was like, all right, I’m enjoying this so much. I’m tired from the work, but I’m like, I have enough energy to come here to do this. And then from there, I was excited to talk about it on social media as well. So there’s just like, that’s the transition that happened that I- right now, I’m less hands-on, but more of evangelizing and working with the latest tech in the world.

Dayle Hall:  

Yeah. Which is great. I think you have a unique perspective from that because you still advise brands. And obviously, brands are looking at how to leverage AI capabilities, yet you're focused on this nonprofit idea, which we'll get into in a second. But you are unique because a lot of the people that I've talked to on this podcast series around the AI space didn't necessarily come up through being a software engineer like you. They weren't into coding and haven't seen so much change.

If I was to ask you one thing over the last 15, 20 years, because you’ve probably seen it all, what’s the most exciting thing that you have been a part of or that you’ve seen? Because back then, yes, we had data, but the access to it now and the things that we can do with it is just incredibly advanced. What would be the one thing that you say like, this just blew my mind when it happened in the industry?

Steve Nouri:

You are right about me being there for a very long time. Because when I talk about this interesting stuff, or about the old days, at a conference or to my students, some of them can't believe it. Like, you were there when Windows came out, you were there when Google first debuted, and the whole world was-

Dayle Hall: 

I’m right there with you, Steve. I was- I went through it all, too.

Steve Nouri:

I know. It’s- it looks like it’s centuries ago. And people cannot believe that there was a day that there was no Google and there was- the Internet was making some weird sounds to be connecting to the Internet. And in order to have the image loaded, you need to wait and that it will be something crazy, do your work and come back and see if it is loaded. So that’s the history. 

But the flip side of the coin is the tech. And especially the tech that I'm working in, which is data science and AI, is advancing super fast. Think about how much new stuff has arrived in just the last couple of years, or maybe even one year: the ChatGPT story, and then Microsoft and Bing started using ChatGPT. And then Google showed up out of nowhere with their own ChatGPT called Bard, and now they are going to implement it. It all happened within a couple of months; millions of people joined this platform, they are all posting about it, talking about it. It has already changed the way people think about AI. And so for me, it's like, as we get further and further, things go crazier and faster.

So I will just tell you that the whole ChatGPT story, for me, is the craziest thing that has happened in my life. And it is very recent, so it's a little bit trendy right now. But I'm still trying to get my head around that particular moment in time that, I guess, changed history forever. Now we have people believing in what AI can deliver, whereas not long ago, we were still getting memes of, oh, AI cannot distinguish between cats and dogs. What are you talking about?

Dayle Hall: 

Yeah, no, it’s- I think it’s amazing, as you say, not just how quick ChatGPT took off, but then how quickly the other big companies that want to be part of this have built their own version, and I’m sure they were probably working on something, but this has obviously escalated the situation. 

On ChatGPT specifically, do you think- so I don't want to say it's a fad because clearly, it's not a fad, and it's going to grow from strength to strength. But with all these people now looking at how to use it, is it something that you feel is going to be prevalent across all brands? Is it something that everyone's going to have to use? Is it going to cause more problems than it helps at this point? I'd love to get your first take on that.

Steve Nouri:

Yeah, it is an interesting, I guess, revolution that is happening. So first, the most important thing that happened with ChatGPT is not actually ChatGPT itself from a technology perspective, which Yann LeCun said is not that innovative; I might agree or not agree with that particular statement. What happened is that it changed the culture, it changed the way people are thinking. It connected with the public in a way that none of the previous products had connected with the public before. It played the evangelist role for all the AI products. And now all of them are following in the same footsteps to get to the same audience, which is now more open-minded about leveraging AI. At the same time, it's been an engine because it's open, right? If you tell me that Google has a product of the same quality down somewhere in a secret lab, I will believe you, but then, well, so what? What are we talking about, if they have it somewhere hidden? And maybe you say Amazon also has something hidden that is even better than ChatGPT, and I might agree with that. But the difference is, they opened it to the public, so millions of people could just join and leverage it for free, and I use it for free a lot. I'm still using it for free. I might explore the Pro version later if there's a big difference.

But what is happening is that it is changing the culture, and a lot of others are following the same trend. At the same time, it's making something that was not available for free, that was not available for other businesses to leverage, available now. Many different people at different career stages are thinking about how to incorporate it into their own work. For example, I've seen people talking about leveraging ChatGPT for specific purposes, like SEO, or for building a better web page or user interface, or writing content; the list goes on forever.

Dayle Hall:  

Yeah. It’s incredible. I’m not going to- I don’t think I’m giving any secrets away that said, so we have sales kickoff coming up very shortly. I don’t know when this podcast will be going out. But some of the things that I’m looking to write to help me with the narrative, I’ve used it for that to help me call out a couple of things. One of the things you said, Steve, which I think is really insightful is, I think a technology or a trend or something like that, when it changes the culture of how we’re interacting with it, that’s when it becomes mainstream, that’s when you know it’s not going away. You mentioned Windows. Let’s face it, when the first Windows machine PC came out, it changed everything, it changed the culture, the iPhone, there’s a ton of other examples. But the point, if a technology, whether it’s a software or a hardware, whatever it is, when it changes our culture and changes how we behave, then it’s definitely here to stay.

Steve Nouri:

Exactly. And there will be a hype cycle around any new, pioneering, innovative tech that comes to life. At one point, everybody thinks it's just too good to be true. And then they will find some holes in the armor, and some people will be disappointed. A lot of people are actually poking at ChatGPT in many ways. Just think about it: this is now being used by millions of people, querying it daily. And it is not easy to always be 100% accurate and not have any bugs. Any product under this much scrutiny will break one way or another. And ChatGPT is still coming out pretty successful. Google had their advertisement, and they were unfortunate that they couldn't catch one of the bugs; it went mainstream, and the stock market didn't like it. So just think about how much pressure is on OpenAI right now, how much they have to lose. But we're seeing that they're still looking pretty strong.

Dayle Hall: 

That’s right. Like I said, I think that’s here to stay. So let’s move on to- that’s obviously a good backdrop what’s going on in around our world, what the changes you’ve seen. Talk to me a little bit about this nonprofit initiative, this AI4Diversity. So what is it? How does it start? And what’s your mission?

Steve Nouri:

Yeah. So this is very close to my heart. I was part of many organizations here in Australia, and I saw how decision makers make decisions, whether in the government sector, the public sector, or in corporates. I can see that if we are not aware of diversity in decision making, in making these products, there will be an inherent bias injected into the way that AI is built. And the problem is, if something like ChatGPT is biased, then you are impacting the whole world. This is not something being used in one city or a very small environment. It is going to be used globally. It will change the way people think. It will impact people's lives.

So that was the idea back then: thinking about the people that don't have much of a digital footprint, so they don't have data that makes them equal in terms of AI's outcomes and opportunities, and also the people that are not at the same level to help with the decision making. So let's start a movement, bring people on board, make a sort of community-led project to be able to gain the trust of those people. And at the same time, human connection is very important, right? On one side, we are pushing the technology very far, trying to make everything technically advanced, as AI and robots enter into our lives. But at the same time, I think we would love to be more human.

So this particular initiative would connect people face to face. And we would have discussions about the risks and opportunities in AI, how to learn, how to leverage it, and how to learn from each other. Let's trust the people. And if there are entities that have less trust within certain communities, we can replace those voices. Let's say, I know that some people might not trust the government or corporates for many reasons, and some of those reasons might be reasonable. So now, we will have a community-led initiative that you can go to, where you can talk about your issues and learn, and you know that you are not under the influence of anyone that has a secret agenda for AI adoption or technology adoption.

So that’s the main idea. And this particular community, if grows enough would have impact into decision-making, into policymaking. Also, at the same time, we’ll be like a watchdog for making sure that all the main important AI projects are following the same guidelines and following the regulations. And if there is any red flags, it would definitely bring it to the public and make it known. I thought about it like around two years ago, it is an obvious issue. And when we talk about it, everybody can understand it. But I was thinking about it in terms of starting such a movement. And at that point, I was part of many different committees. And I was seeing like at ISO, people are trying hard to come up with standards for ethical frameworks. And then at the same time, European AI Act and those initiatives are coming up. 

The World Economic Forum is working on a lot of responsible AI, ethical frameworks. And most of them are top-down approaches. So this was turning it upside down; it's going to be a bottom-up approach. And there's always a need for a different way of looking at the problem statement. So I shared it on social media. I have the tools to talk to people, thank God that's available. When I shared it on social media, I wasn't expecting that a lot of people would resonate with it. I thought maybe 100 people, maybe 200, would reach out to me and say, oh, that sounds good. Let's think about it.

So within a couple of weeks, I got 4,000 people signing up to volunteer for this project. And 4,000 is a lot of people, and it struck me that I actually needed a pretty deep, decent plan, because you cannot just bring 4,000 people into a Zoom call and say, guys, let's just start. It doesn't work like that. So let's dial it back a little bit. I started an internal newsletter and kept people updated about things. But then things became a little bit slow because we had to think about the basic structure, how to make it happen, how to scale it to the point that we can cater for 4,000 volunteers. And we got around 400,000 members on social media. That's a lot of people watching this. So-

Dayle Hall:

A lot of interest there, clearly.

Steve Nouri:

It is. I think it resonates with a lot of people. And it's the right way to tackle this problem. When we are talking about ethical frameworks, there's always something important in the mix, and that's: according to whom? And once you answer that, you know that there is a need for such a movement in the world. I haven't seen any similar project that thinks about diversity as a holistic approach. We have a lot of different, I guess, more niche communities around gender diversity, maybe around cultural diversity, or other diversity aspects. But we would like to keep it holistic and bring all the parties together. And that's why it's probably resonating with a lot of people.

Dayle Hall: 

Yeah. Well, just a couple of things you mentioned there are interesting. You mentioned trust a few times. And obviously, that's trust in the process, trust in the people behind any kind of AI, making sure it isn't just a black box where they don't know what happens behind the scenes. And you mentioned something else, which is keeping the humans involved. Because AI is something that is amazing, but ultimately, it's there to help us achieve other things, to be great, to get better at things. It's not a replacement for us. So we have to keep humans in it.

And I just wonder, with all those people that are obviously interested, and you talk heavily about community, was there a lack of trust in this process? Did people sign up because they wanted to make sure that they could trust it? And how do you address some of that trust challenge when people are involved in the process? Because I actually think your community is one thing, but broader, across any AI capabilities, whether it's a brand trying to sell an AI tool or a community like the one you talk about, there's still that question: how do you get people past the trust barrier?

Steve Nouri:

That’s going to be challenged anyway, as you said, for any organization working on such a project. And a lot of times, it’s much easier to talk about when you make sure they know that you’re not selling anything and you will not gain any benefit out of this adoption. It’s just much easier. It doesn’t mean that when you’re selling things, you shouldn’t be trusted. But it’s just more of a complex discussion. When there’s no benefit directly, like commercial benefit coming out of that adoption, coming out of that trust, people were like, all right, this particular transaction doesn’t have one end winning, the other end losing at all. It’s going to be just a collaboration. I’m going to sit there, talk about it. And if I join this movement, I might even shape it. So another discussion is that it’s a community-led. So a lot of times, when government or corporates reach out to people, it’s not like you can make a decision for me. I’m like, I’m still going to make a decision for myself or my company, but I’ll get advice from you. But then here, you can literally be part of it, and you can make the decision, like you can be part of the advisory board, you can get voted to be part of it. And that’s just essentially a different way, I guess, to make people feel comfortable.

Dayle Hall: 

Yeah. I definitely understand the no-winner, no-loser transaction kind of thing. But I think with so much access to our data, and then breaches and spam and all that kind of thing, there's still a little bit of reticence to sign up for these things. So the fact that you've had all those people interested and signing up to your newsletter means, clearly, you're doing something right in how you communicate the value and the long-term impact. Can you talk about some of the things that you've started to do, any research that you have, any initial findings? What are you learning so far from the group that you're pulling together?

Steve Nouri:

So I guess most of the stuff that we're doing at this point is logistics: connecting with partners, making sure that we put our heads together to understand how we can get more people involved, how we can put some sort of structure in place that is sustainable and keeps everybody involved, happy, and trusting the organization. So that's the stuff that's happening day to day. And as I said, fairly recently, we started a podcast, a responsible AI podcast, where we've had a couple of great guests. And the latest initiative, I guess, we are launching is a global meetup group. We are going to have meetups in every country and, hopefully, in every city, talking about responsible AI. It is in collaboration with ACM. So that is another interesting thing that has happened in the past couple of days.

Dayle Hall:

What is ACM, if you don’t mind me asking?

Steve Nouri:

ACM, the Association for Computing Machinery, is actually the most prestigious and largest computer science organization in the world; they give away the Turing Award.

Dayle Hall:

I know the Turing Award. I haven’t heard of ACM.

Steve Nouri:

Yes. So they give away the Turing Award. And they're mostly around policymaking, bringing together academics, professors, researchers. All the godfathers of AI have been given the Turing Award. It's the equivalent of a Nobel Prize.

Dayle Hall:

Big deal is what you’re saying.

Steve Nouri:

It is a big deal. So they have credibility out there. They're doing great research projects, and they're connected to professors, academics, researchers. At the same time, we are trying to get together with the public. And I think there is a good synergy between these two organizations: bringing the research, the latest innovative projects being developed by these great minds, in front of people, and trying to collaborate, to work together to get buy-in, to get the sense of the public. We both want to push forward responsible AI from different perspectives, but at the same time, we all have the same vision.

So what is happening is that we partnered with ACM, and AI4Diversity and ACM are going to run these meetup groups in different countries, starting from Australia, obviously, because I'm based in Australia. But it will go on, and hopefully, in different countries, we will have more people putting their hands up to be part of the organizing committee, because it takes a little bit of work to make great events, and we are conscious of that. So we hope to get the team together very soon. And by the end of this year, we have an ambitious goal of running more than 50 meetup groups around the world, which is a pretty ambitious objective. But then again, when I started this project, I had no idea how many people were going to sign up, and now we have that many people. So I think we can do it.

Dayle Hall:

Yeah. No, I think you probably can. When you pull these together, when you have newsletters or when you do meetups, you have to pull concepts together or at least ideas or things you want to focus on. I know you talked about responsible AI. But are you thinking about social issues, challenges that people are facing? Are you pulling your- the people that are interested, how are you going to go out? Because there’s so many things that you could potentially do with this, which again, that could get so nebulous that we may do a lot of talking and actually not get to what you’re trying to do. So what kind of issues are you really thinking about addressing and how you’re using that member group or the people interested to validate that?

Steve Nouri:

I guess the most important part we are focusing on initially is education, and also talking about the latest AI products that are available and how people are perceiving them. So we are getting a pulse check from the public about those AI products, which we actually did with ChatGPT. We asked our community about ChatGPT: should schools and universities allow it, or should we ban it? And the results came in with around 20,000 people participating in that survey. Around 35% of them voted to have ChatGPT allowed, but with very strict guidelines.

So we had four options. Ban ChatGPT got around 20%. Allowing it with looser, lighter guidelines got around 20%. No guidelines got roughly 20%. And very strict guidelines got 35%. Which, to me, suggests that people are open to the idea of having technology incorporated, and specifically ChatGPT, which is changing the way people write content. And writing was, and still is, a very important skill in schools and universities. Allowing such a technology will probably require us to rethink how we assess students. So the response was 20% ban, 35% very strict guidelines. I think people are open, but very conservative, about having this. But to me, it's a good sign.

Dayle Hall: 

First of all, the fact that you have a wide range of views on it is not surprising. I think the fact that the majority said they want strong guidelines means, like you said, they're open to it. So they see it as having benefit, but there's probably a little bit of the unknown about how it's going to impact them or something like that. That's probably why they're like, okay, let's have guidelines; and strict guidelines versus non-strict, you can argue about that. But again, I think the positive thing, as we talked about with ChatGPT, is that there's an acceptance, an openness, to potentially use these things.

And I think from the podcasts that I've done with people involved in AI, it doesn't matter whether they're using it within HR to make sure they have a diverse cross-section of candidates or using it to sell their product better. I think as long as you're open about it, as long as it's not something that goes on in the shadows where you don't really understand how it's working, people will accept it. And like I said, in education, it can have a really positive impact. This is why I think your initiative comes at the perfect time, because people do need that level of control, or need to understand what's happening.

Steve Nouri:

Exactly. Yeah, I totally agree. And I can also see the other side, the flip side of the coin: if you don't think about the risks and the problems and we open the door, it's sometimes not very easy to close it later. It happened with social media, I can tell you that. I don't want to derail the conversation to social media, but once you open it- now social media is out there, and now the researchers are coming in saying, all right, these are the negative impacts. But the cat is out of the bag, and it is very difficult to dial back. In many countries, we have difficulty pushing the nation to rethink it. A lot of times now, it's perceived as one of people's rights to have access, to own it. And I don't know if it's going to happen anytime soon that we talk about putting more, I guess, guidelines around this particular thing. So-

Dayle Hall: 

You think AI could go down the same route? When social media came out, it was incredible, the opportunity to connect with people. We won't talk about what Elon Musk is doing with Twitter specifically, but there was a massive opportunity. And then, like you said, here come the guidelines, or here comes the "my kids use it way more than I would like." So that's a worry for me. Is there a possibility that AI could go the same way?

Steve Nouri:

I think the short answer is yes. So there is a possibility; not only a possibility, I think there's a high probability that AI can go the same way, for the same reason, because of the greed, and because AI spreads the same way that social media spread. Think about, again, going back to the same, I guess, product, but essentially, that's an example: ChatGPT got 1 million, I guess, subscribers in five days, right? That's the way these things spread. And the problem is, if something is wrong, that particular thing will go into every household, every business, every piece of content that is generated. And I don't know if we can put it back where it started. Sometimes, it's not very, very easy. And specifically, if there is an AI war between corporates, you would probably know that it goes beyond the corporates and sometimes gets to the national level, where things get even more complex.

Dayle Hall:

Yeah. Well, when I think back- social media has been around so long, my kids don't even remember; they don't know a world where there was no social media, right? So for them, it's commonplace. But I don't remember, when these platforms were coming online, that we had the same kind of thought about community-based feedback for social. I think what we did is, we were so enamored with it, we let it go. And then, after the fact, we said, hmm, maybe there are some downsides here. Because that's what innovation is, right, essentially, Steve? Innovation is, we want to charge forward, we want to see what we can do. And everyone I've talked to about AI, specifically AI ethics, seems more thoughtful about the potential negative impacts, which I don't recall seeing when social media came online.

So I think your initiative, as I said, feels like it's the right time, the right place. Because I feel like now, we're trying to put the cat back in the bag with social; let's not make the same mistake with AI. And to your point, a million people signing up for a platform in five days- it's already out there.

Steve Nouri:

Yeah, exactly. And this is the problem. Compare it with how we used to adopt technology: we would start leveraging something, see how it reacted and what impact it had, and then when it went wrong, we would fix it, right? Like, let's just start using cars. And then there was a crash. All right, sorry, one person got injured. Now let's think about rules. But the problem is, now we are talking about millions, right? It's like, ah, 1 million people injured. Sorry. Let's just put the thing back.

The other problem is that a lot of times, when things are physical, it's much easier to understand, to evaluate and assess, and also to address. Things like ChatGPT and social media have psychological effects, which take a lot of time to, first, develop, second, assess, and then scientifically address, right? It's not as easy as something like a car accident. So now we're talking about, all right, how much is it changing our focus, what is the impact of social media on depression, what is the impact on people feeling happy about their lives, on regulating dopamine in their bodies; it goes deeper and deeper. And that's not something you can evaluate very quickly. So by the time we actually know that it's not very positive for certain ages, or that there should be some guidelines about the usage or the content, at least millions of people will already have been impacted by it.

Dayle Hall: 

Yeah. And I think that's what I appreciate about the initiative that you have. You talk about community- it's crowdsourcing, it's from people who are being impacted. Because there's definitely that reticence if a Google does it. I'm not picking on Google. But if a big brand does it, there's obviously a little bit of, okay, yeah, but ultimately, they want to make money somehow. If the government puts controls in place, then clearly, they're just trying to control us or tell us what to do. 

So I think this is unique in how you're thinking about it and the people that you have joining. It feels like it's beyond just what a brand should do, beyond what the government should control. Look, I think you have a massive opportunity to really drive a lot of positive change, impact and support for what is great technology. But again, as you said, people just want to make sure that they're going to be safe with it. So it's a great time for this kind of thing. If you ever come to the Bay Area, or if you're going to do these globally, I'm definitely signing up for one of these things. I might even get my 15-year-old daughter to sign up- that would be useful, too.

Steven Nouri:

Yeah. Thanks, Dayle. That's exactly what we are looking for- everybody feels that it's relatable and that they can get something out of it and contribute. So that's great. I definitely will be-

Dayle Hall:

I definitely want to be part of it. I’m definitely interested in this. I know we’re at time. Again, these are the best conversations for me because they take me out of my, I’m a marketer, I’m trying to sell, I’m trying to do this all day. These are brilliant discussions. 

Is there anything that you see within the AI world, within the things that we're discussing, maybe outside of some of the controls? What are you seeing around AI that's really positive today? Or is there an AI debate that's raging that you think is interesting? Like, what are the things on social that get you to click when it's a debate about AI?

Steven Nouri:

So first of all, there's a lot of positivity in AI- doing real work, adding real value to people's lives, right? In healthcare and education, you have AI diagnosing disease much faster, more accurately. So this is very positive for everyone. In the environment, you have AI monitoring and protecting wildlife. These are the real thing, the real deal. We are seeing this daily. It's not just having a recommender system for your Netflix- which is sometimes actually not too bad, because it just gives you a new series to binge. So at the same time, we do see some social impact, which makes me very happy. 

Whenever I'm scrolling through social media or the news, wherever I'm checking, I'm looking for these impactful social projects that are being implemented and are real. Because a lot of the time, people are talking about things that are not yet done, and some of them look more futuristic than others. But the ones that are used in the field just make me super happy that we are doing something for ourselves, for our community and for the earth in general. At the same time, some of the stuff that makes me click is the more controversial content about some of the drawbacks of AI, maybe the risks. Let's say some of the random answers coming out of these language models- some of them are a little bit worrisome. And at the same time, the technology behind it is moving forward. 

People are talking about AGI. And I'm excited to think about it, to talk about it. But I still don't know how close we are or how we are going to achieve it, because there is no sign, for me, at this point that we are getting close to the technology that is needed for AGI to happen. But these are the topics that get me excited. And I'm reading about them all the time.

Dayle Hall:

Yeah. I love- we did a global tour for a brand, and a CEO talked about DALL·E and AI-generated images and so on. And it's just an interesting take on what you ask and what comes out. But as we wrap up, I think there's a lot of opportunity. I appreciate what you've said about how AI is being used for good today- healthcare, how it's going to be used in education. Netflix is useful for me. I also like messing with my friends. When I go to their house, I just put on a bunch of shows that they would never watch, so it suggests random things for them in the future- that's just to mess with them. But massive opportunity. 

I love the initiative that you have around giving us some kind of thoughtful process for how this is going to work. And personally, from me to you, I have a 12-year-old and a 15-year-old. So the things that you do now will help protect them and create a better world for them. So I'm very grateful to you for doing it. I can't wait to be part of it. I'll give you the last word. Anything that you want to sign off with? For people out there listening, how should they think about your initiative, and how should they think about using AI in their lives?

Steven Nouri:

I would plug our initiative again. So as I said, we are going to roll out these meetup groups very soon in different parts of the world. If people are excited to be part of the organizing teams and are committed to running these meetup groups, definitely reach out to me. We are always looking for volunteers who have the time and capacity to help us. At the same time, we are looking for venues all around the world to help us deliver this. It's a not-for-profit initiative, and we are not connected to any sort of government or corporate sponsor at this point. So we are looking forward to leveraging these connections to move this project forward. Those are the two things, I guess, I would leave with your audience, and hopefully, they will help us grow faster.

Dayle Hall:  

Oh, absolutely. I’m no Mark Zuckerberg in terms of followers. But when this comes out, we’ll do what we can to promote it. Again, I think it’s a great initiative. 

Steve, founder of AI4Diversity, thank you so much for joining us on the podcast.

Steven Nouri:

Thanks, Dayle. Thanks for having me.

Dayle Hall:

Thank you, everyone. That’s the wrap of another episode. We appreciate you joining us. We’ll see you on the next episode of Automating the Enterprise.