Podcast Episode 2

Supercharging Your Enterprise Automation With AI & Machine Learning

with Dr. Vinesh Sukumar, Head of AI/ML Product Management at Qualcomm

As the talk around Artificial Intelligence continues to grow exponentially, have you asked yourself if your business is keeping up? In our latest episode, Dr. Vinesh Sukumar, Head of AI/ML Product Management at Qualcomm, shares how AI and ML are driving growth for businesses today.

Full Transcript

Dayle Hall:

Hi. You are listening to our podcast, Automating the Enterprise. I’m your host, Dayle Hall. This podcast is designed to give organizations insights and best practices on how to integrate, automate, and transform their enterprise. Today, we have a very special guest. He’s the senior director, the head of AI and Machine Learning Product Management of Qualcomm. Please welcome to the show, Dr. Vinesh Sukumar.

Vinesh Sukumar:

Hi, everyone. And thank you, Dayle, for providing me the opportunity to talk on your show.

Dayle Hall:

Absolutely, Vinesh. We’re thrilled to have you. Great experience, amazing company. So we’re looking forward to getting to the questions. Why don’t you give us a few minutes around how you came into this role, AI and machine learning specifically? Obviously, we hear a lot of that these days. But how did you come into this role? Where did that passion come from for you specifically?

Vinesh Sukumar:

A quick history of my professional background: I started my career working for the Jet Propulsion Laboratory, JPL, quite a while back, in those days doing circuit designs for imaging cameras, basically quite large CCD-based cameras. Then from a research-based organization, I jumped to a more commercial, consumer-oriented industry. One of the first products I helped design was the Motorola Droid phone. As I recall, about 15 years ago, it was one of the most iconic phones of those days. That's how I started getting into cameras. And one of the first things we noticed is that people started to take a lot of pictures and videos.

Then, over a period of time, I started to do some interesting projects. I hopped over and did things for Apple, for the very first iPhone that they launched. They wanted to put a lot more focus on visual analytics, to classify images and to do detection of certain images. That's how I put my footprint into the computer vision space. I was also doing my doctorate degree in AI and computer vision. But in those days, people didn't really understand what artificial intelligence meant, so it was mostly a theoretical study at that point in time. But as I started to see a lot more practical use cases and examples, it excited me, and I was able to map my theoretical study to practical applications.

So that's how I started to get into the space of CV and AI. In those days, there was no AI; it was all CV, or computer vision. Over time, as people started to get more educated about what artificial intelligence is and what it can actually do in the vision space or the tech space, I got a little more excited about system design. And that's how I started working on architecture, engineering, system design, and the applications that fit, and now I drive the product team to really make sure we can come up with something exciting and improve the user experience from that perspective.

Dayle Hall:

Right. What's interesting from your background is that you've seen some of those technical developments, from talking theoretically about AI, and it feels like we've come a long way in those years. Now we're actually releasing products that have AI built in. So it feels like some of that theory, maybe you saw it in what we thought was going to come out of the movies around artificial intelligence. We're not at that level yet. There are no Terminators, thank God. I just feel we're going so fast. When you talk about the tech development cycle and innovation, there are so many opportunities.

That's actually one of the places I wanted to start, because given your experience, you've seen a lot of technical development. I remember those Motorola platforms and how things were developing. But looking at the enterprise and the opportunities there, what do you think some of those key initial, dip-your-toe-in-the-water opportunities are for enterprises using things like AI and machine learning? What are you seeing enterprises really latch on to?

Vinesh Sukumar:

I think these days, most companies are at a juncture where they're looking at using AI to optimize existing operations rather than radically transforming their business models. One such example would be in the areas of task automation or task elimination, under the category of productivity in the enterprise space. What I mean by that is, during the COVID situation, during the pandemic, there were a lot of these virtual meetings happening on a very regular basis. With a lot of people participating, it became quite difficult to transcribe the meetings, take action items, map them to a specific speaker, and organize follow-up meetings, and it was all quite manual in nature.

Now, with the advent of artificial intelligence, we are making this more automatic. In other words, you now have intelligence built into the application as part of these enterprise models, where it can automatically transcribe these meetings, and these transcriptions are mapped to a specific speaker who's registered as part of the meeting. And if there are any action items, they're recorded, and the system automatically looks at open slots on the different speakers' calendars and makes sure follow-ups are scheduled so that there's continuity.

And if there's any data being shared, that data is actually captured and put into a Confluence page, or even translated into a PowerPoint, all done automatically behind the scenes using deep learning and machine learning. This really helps because these are all manual, labor-related tasks. If you use AI to really help with this productivity, you can actually have people focus on solving problems rather than doing this intermediate work. So from that perspective, that really helped, using one such example of productivity.

Dayle Hall:

Yeah. And I think that is a great entry point for a lot of people. For enterprises, they can look at helping people within their jobs. And sometimes what I'm seeing is that organizations are also attracting different people, depending on the type of role. We know recruiting is really hard these days. But if you're trying to recruit and you can show candidates, hey, we have all these technologies to help you be more productive so you don't have to do the manual stuff, that to me is going to have a big impact on recruiting. It helps people come in knowing, hey, I can be more efficient from day one because you put these technologies in place. Do you think there's a broader impact for these technologies to help things like recruiting?

Vinesh Sukumar:

Oh, absolutely. One of the biggest challenges technical companies usually have, especially when you happen to have specialized fields like AI, is that you want to really make sure you get the right set of people with the right set of experience, so that they can actually contribute from day one. And for that to really happen, you give your recommendations, possibly, to a recruiter. The recruiter then goes through the process of filtering candidates. But if you had a database, and an application could tune your recommendations not based on search words but on an abstract description, this is the individual I'm looking for, then a virtual assistant or smart assistant built into the application could go search the resumes in that existing database and come back and say, this is possibly a candidate of interest who matches your expectations. So I fully anticipate AI helping out there. It's just a matter of how you manifest it to really give the right response.

Dayle Hall:

Again, this is one of those areas where I think we're very much in our infancy around understanding the broader impact on the enterprise. SnapLogic, we are essentially an integration and automation tool. We're not RPA. A lot of people, when they hear the terms automation and AI and ML, which are very much related, think of the tasks that are just repetitive. And you just mentioned it yourself: how can we free up people's time to really go and focus on the more important tasks, where you do need a little bit more, where you want the human experience, you want their intellect on it? Can we automate some of those other tasks so we can spend more of our time helping the business grow that way?

Do you have other examples around making the workforce more productive, more successful, whether when people join or once they're in the enterprise? Is there something that we could do? I know there's always been this concern that AI and ML are going to take jobs away, but it feels like recruiting is just getting harder, trying to find the right skill set. So how can AI and ML actually help us with job shortages, rather than us being worried about them taking people out of the business?

Vinesh Sukumar:

I don't think AI at this stage is really taking over jobs. There's always this misconception that AI is going to completely take over the world. At least not yet; that's not happening at this point in time.

Dayle Hall:

Do you still see that, Vinesh? Are you still hearing that? There are still reports about this, but I don't see it as much. But I still feel that misconception is out there.

Vinesh Sukumar:

Oh, definitely. That misconception is always there. It also, I think, depends on the use cases people are dealing with. For example, people are so used to manually driving a car. If you really want driving to be completely automated from an ADAS perspective, you sit behind the wheel, and there's always this question of how the car is making decisions on taking a right turn or a left turn or stopping, because there's a life at stake in that situation. There's a little bit of a concern there. Then there's a lot of transaction-based activity at financial institutions, with a lot of automation happening in that space. The concern there is, hey, if I give my credit card out there, if I give my banking information out there, how is it going to impact me? There's always a bit of a misconception there, but it really depends on how the data has been constructed and how the applications have been constructed.

And if there's a lot of technical investment that goes in there and it's done the right way, I'm pretty sure most of this will go away. But it's probably going to take some time. An analogy from my professional experience: a photographer always used to have a DSLR camera for everything. With the improved image quality of cameras in mobile phones, DSCs basically got phased out, and mobile cameras became the de facto standard for people to actually take pictures. So it's going to take a cycle of time, I would suspect, but we're going through that phase where people need to get educated. And I'm pretty sure with time, the misconception that AI is going to take over the world, that my privacy is lost, will probably be eliminated.

Dayle Hall:

Yeah, for sure. And you mentioned a little bit earlier that example around Zoom meetings. Obviously, that was accelerated because of the pandemic. The pandemic, whilst terrible in nature for what it meant for most of us and the impact on our lives, and obviously no one really wants to go through that again, has helped organizations and some of us as individuals to actually learn new practices and be able to leverage the technology that's out there. Do you feel the pandemic has helped in other areas, with opportunities and things that we might just not have gotten to if we were all still going to the office and doing the same things the same way?

Vinesh Sukumar:

Oh, absolutely, I believe so. There's an old saying that necessity is the mother of all invention. While the pandemic was bad in many ways, it did provide a platform to do a few things more effectively, especially around video conferencing, as an example. There was a lot more human-to-human interaction over video, and as part of that interaction, it's quite important to emphasize the video image and also the audio streams. One such element was, hey, I don't happen to have an office space in my home. How can I really eliminate my background? How can I replace my background with something else, to really make sure I look good and the garbage in the back doesn't show up? And we use AI to do all that.

You don't even have to be worried about the fact that you're not prepared enough to enter the meeting, because AI can take care of that. Or if you happen to have kids at home, a lot of people sharing the same room, or background noise, which is not really helpful when you have an important meeting with a lot of urgent needs, we use AI to really make sure it only recognizes that specific user's voice, and all of the disturbance around it is completely subdued or eliminated. All of this was possible only using AI. Maybe two or three years ago, this was never the primary focus of attention. Now, post-pandemic, given the usage of these specific applications, there has been a lot more interest from researchers and from the engineering community. And you can see a lot more infusion of data intelligence in these applications, making these experiences much better.

Dayle Hall:

And obviously, for working parents with small kids at home, I'm sure some of those features did come in handy. If we go back to the enterprise and look at the people responsible for these types of technologies, it's generally the IT organization, but lines of business are also looking to implement newer technologies, things to help them be more successful. Do you feel enterprise IT organizations are looking more broadly now at technologies to implement? Are they saying, we need an AI/ML-type initiative? Or are they still saying, we're solving specific use cases, and can we do that with AI? Do they have broader AI and machine learning initiatives, or are they still focused on use cases? And is there a challenge with either of those?

Vinesh Sukumar:

I think I would probably summarize it as a combination of both. There's always going to be a tactical element that you want to resolve, and the question is whether that can be done as a function of AI/ML. A tactical element could be, again using the pandemic as a platform, that a lot of people were working virtually from home. And obviously, things break down. IT resources are limited. You can't be in a position to really work with all of them, nor can you go onsite to get help. So you have the creation of virtual bots acting as virtual assistants for IT. And depending upon the construct of the problem, the virtual bots provided about 50% to 70% of the responses and guided users on what could be done even before it went to a real person. So I think that really helped from a tactical standpoint of addressing those problems.

Strategically, can I really use AI/ML to do a lot more from an enterprise standpoint? Absolutely. There's always been thought about how to put more emphasis on customer stories, how to put more emphasis on virtual assistance. And this could be mapped to use cases. I should probably explain this with an example.

From an enterprise standpoint, there's always this issue of improving the overall performance of the machine. Can I really look at predicting the failure of a specific component even before it happens? Because if it's a critical failure, obviously everything is lost.

Now, the general question from the IT standpoint in the enterprise segment is, can I get a few snippets by monitoring the data, the performance of key modules, that really indicate a failure is about to happen? And can I inform the user, please make sure you do XYZ things, or provide a certain amount of compensation, before the entire component fails? It's mostly diagnostic at this point in time. And that's an important area, to really make sure you don't lose productivity, you don't lose data, especially if it's sensitive data, and then provide recommendations to the user to make changes to the hardware even before the failure happens. So I think this is another example, mostly future-looking, that is being established and is making its way into the enterprise space.

Dayle Hall:

For each enterprise, whether you're trying to solve a bigger initiative, just trying to solve a use case, or thinking about how you can get better, I think there are three specific areas where enterprises are challenged when looking to implement and adopt these AI-type technologies. Now, we know they are here to potentially make our lives easier in terms of what we do. At SnapLogic, for example, we have an assistant that helps you build your integrations without you having to go and enter every single system you have. It identifies them and says, here's what you do. That makes you more productive.

But I think the three areas that we hear about most are, first, getting the organization and the people to adopt it. The second area is the technological challenge: how do I implement it? What do I need to look into? What are the impacts on systems? And the third one is just the cost. Is it more expensive? How do you get the ROI? So if we take each of those separately, let's start with the people and the adoption. Why are organizations, if they are, and do you think they are, still struggling to get people to adopt? What are they nervous about, and how can they overcome it?

Vinesh Sukumar:

I'll probably start off by saying that for AI to really work in those applications, it needs to have the right data. Data is the lifeline of machine learning. Models only know what they have been shown. So when the data that they train on is inaccurate, unorganized, or biased in some way, the model outputs will be faulty. And what that really means is the user experience becomes crappy, and as a result, adoption drops. So you want to avoid that situation to begin with. This is one of the top topics that every organization goes through: how do I eliminate bias? Which is a big topic all by itself.

Dayle Hall:

That probably leads into the next piece, which is the technology. If you have the wrong data coming in, the chances of people actually being productive and recognizing that this is helping them are going to be lower. So adoption will be lower, and people won't trust it as much. Then that comes down to: who's implementing it? Who's actually facing these technology challenges? And how do they, to your point, make sure there's no, we all know the term, crap in, crap out from the data perspective? How do enterprises ensure that the technical piece is done right, that the data they're pulling is believable and correct, so that when they go down this path, they can help drive that adoption?

Vinesh Sukumar:

One of the biggest challenges most enterprise organizations usually face is: how can you make the results predictable? How can you make these results more explainable? How can you make these results consistent, independent of geography? Whether it's in Asia, North America, or Europe, you get the same result. And as you present those results, can I explain the reasoning behind the result? And is it consistent? So this is something that's being heavily looked at, and I think it needs a lot more investment, both from a research standpoint and from a data scientist standpoint: how they actually look at the data, how the model is being constructed from the data, and whether they have enough diversity in those data samples to make the prediction much more accurate. This is easier said than done, and I think it has been a constant focus for folks pretty much across all enterprise organizations.

Dayle Hall:

So then let's go into that last piece around putting these technologies in, which is the cost, the investment, the expectation of what it's going to deliver, and the ROI. How have you seen enterprises go through that process? Where do they draw the line when it's going to take X number of years to pay back? Do they even know the return on these things, or do they just feel they have to do it? How do enterprises make that assessment of the cost-to-ROI benefit these days?

Vinesh Sukumar:

These days in the enterprise, it's not a matter of why to use it, but how quickly it can be used and how it can be used in a proper fashion. The technical complexity continues to be one of the biggest challenges for enterprise use of machine learning. Now, the basic concept of feeding data to an algorithm and letting it learn the characteristics of the data is simple enough. But you start off with simple examples. As I mentioned, one such area happens to be productivity. How can I really improve productivity? How can I improve recommendations, say, with virtual bots? How can I improve the customer story by making sure that if someone is trying to make a purchase online, you're able to retain that same customer or consumer based on their buying patterns?

So I think you've got to continuously study it, implement it, and understand that when you start to build an application, you have certain key performance and key experience indicators. When you put them in production, are you actually able to meet those indicators? If not, can I dynamically change these models? Can I dynamically change these algorithms to customize to a specific user? Mostly, what you're seeing these days is that you have one generic model and you apply it across everyone, which generally doesn't work out.

If you have an opportunity to continue to optimize it for a specific user, by learning the patterns of that user or an application or a device's form factor, you'll have a much better impact. So I think enterprises are starting to understand this, trying to study it and get more data, and modifying their implementations to go from a model-centric view to a data-centric view. And as they continue to do that, more than likely, you're going to see much more success in that segment.

Dayle Hall:

One of the topics we've talked a little bit about is the fear of taking people's jobs. We've talked about making sure the data's correct, so that people can actually use the tool and trust what's coming out of it. What about security in general? Do you see concerns from the enterprise around security, whether with their own data or particularly when they're using these AI tools with customers? Is the security discussion a non-issue? Or is it something that people are very cautious about? Does it stop people from acting?

Vinesh Sukumar:

Security these days is becoming quite an important issue. Now, security obviously manifests in many ways. It could be, one, your biometric signature. A biometric signature could be your voice print or your facial print. Where is it actually getting stored? Is it being stored on the device or in the cloud? And how easy is it to access that biometric signature? Because it's known that a biometric signature cannot be altered: once somebody has it, it's gone forever. So how can I really store it in a fashion that's extremely secure? And what are people and enterprise organizations really doing to protect that asset as much as possible?

Then there's another portion of it, which is that I happen to have these models. All these models are basically binary files, ones and zeros. These binaries contain the algorithms to make a certain prediction, a certain detection, a certain classification. And most of these algorithms have been developed based on tons and tons of research and a lot of human capital investment. How can you make sure these binaries are not corrupted or extracted by some other application, so that somebody else can't work out exactly what the secret sauce is in how those predictions are being made?

Another piece of emphasis is really understanding how we store these assets and keep them in a safe location. So I think it's becoming increasingly important. Moving forward, organizations are putting a lot more emphasis, both on the hardware side and the software side, on really making sure that the templates, the binaries, and the algorithms are all securely protected, and that the chances of somebody extracting useful information from them are lowered as much as possible.

Dayle Hall:

Security has always been a big issue, particularly around software and accessibility to people's data and so on. But once you have AI and ML introduced, with something as important as your biometrics, like you said, obviously you have to be very careful with that. And I think that does slow adoption in certain quarters; certain people still don't trust that it is secure. So that's something I would anticipate becoming more important as we go along. Now, if I said, forget the timeline for a second, forget whether it's a hundred years, maybe it's 10 years, 5 years. If we look at the AI/ML journey and all these opportunities, are we halfway through the journey? Are we 2% through the journey of what's possible? And my question to you would be, however long you end up working in this field, what is the thing that you would be really excited to see during your working time with this technology?

Vinesh Sukumar:

I think AI is entering a phase where it's getting a lot more acceptance across many verticals and segments. Ten, 15 years ago, when I started my professional career, AI was very much a theoretical piece of math. There were no applications around it. Now you see a lot of applications around it. In every facet of life, mobile phones, cars, PCs, glasses, no matter what you pick, you see some element of it. But I think the next important element, I would say the next evolution of AI, is to induce common sense into AI, wherein AI doesn't just do things because the data suggests it has to do certain things.

Instead, it tries to understand the context. It tries to get inputs from various sensory flavors of information and then make a prediction based on the scenario and the context. That's what I mean by common sense. How can I really make sure AI is smart enough that it does not give me the same answer all the time? It needs to understand what the context is and provide a response accordingly. So that's where I think we're getting into the space of a true AI bot, a true AI virtual world. And that's what excites me: the expectation that we are getting to that phase of AI becoming truly intelligent.

Dayle Hall:

I love that. As we come to the end of this, we could talk for three hours, I'm sure. I'm not sure whether anyone could listen to us for three hours. But if I look at the enterprises out there, we've talked a little bit about use cases versus big AI initiatives. We talked about optimizing apps for productivity. You've talked about data as the lifeline of AI and ML. I love the idea of common sense in AI, to really understand the context. If you had a last piece of advice: enterprises are looking at different digital transformation initiatives, and let's say they have multiple ones. What's the piece of advice you'd give to an enterprise starting on this journey, whether it's a use case or a bigger initiative? What do they have to really dig in on now, before they start looking at the technology to implement and the costs and so on? What's the thing they need to think about as they start this journey?

Vinesh Sukumar:

I would say that with enterprise AI as it is right now, like any new technology, it comes with its own risks. But as long as you understand its strengths and plan for success, AI is going to do some fantastic stuff in the enterprise space. That's where I would start the discussions. And then obviously, it's how you look at the data, how you manipulate the data, how you use the data, and what makes the experience much better. So that would be my emphasis moving forward.

Dayle Hall:

That's great. Well, look, again, I appreciate your time, Vinesh. This conversation, whilst relatively brief for a podcast, was very insightful. I've learned a lot, and I appreciate your time. Thank you for being part of our podcast today.

Vinesh Sukumar:

Thank you, Dayle. This was fantastic for me as well. And thank you for providing me the opportunity to talk about Qualcomm AI, my role in Qualcomm, and some of the fantastic relationships I have with the engineering and the research organization. Thank you, Dayle, once again.

Dayle Hall:

Absolutely. We’ll keep an eye on all the things that you’re going to be doing in the industry as one of the leaders on the forefront of this. So we appreciate your time. Thanks for joining. To everyone else, thank you for listening in to this episode of Automating the Enterprise, and we’ll see you on the next one.