Mike Vizard: Hello and welcome to the latest edition of the Techstrong.ai video series. I’m your host, Mike Vizard. Today we’re with Param Kahlon, who is executive vice president and general manager for automation and integration at Salesforce. And we’re talking about the impact AI is going to have on the need for more sophisticated approaches to automation. Param, welcome to the show.
Param Kahlon: Mike. Thank you so much for having me. Good morning.
Mike Vizard: What are we seeing here exactly? We’ve had automation and integration frameworks for years. It seems though that we have lots of people building AI models, but the model’s got to connect to something to go execute something. So how has that changed in the way we need to think about integration and automation?
Param Kahlon: Yeah, it’s a really good question. First of all, we’re seeing a lot of growth in general automation and integration solutions, because think about AI: AI is all about collecting a lot of data across the enterprise and making it available to the AI algorithms so that they can reason over that data. And once the algorithms process the data and make a decision, they need to go take an action on that insight, on that prediction. So what we enable with automation is the ability to go, in real time, take an action on that data. In general, whenever a company is looking at implementing AI to automate a business process, to make a process more efficient, you are in fact using automation and integration. But on top of that, AI is really helping create more models.
So if you look at automation, automation is all about taking a repetitive pattern of work that humans are doing because things are not connected to each other, and integrating or automating that work. But a lot of the repetitive, tedious work we do is really processing unstructured information, information that’s based on a certain domain. AI is making it possible to process that unstructured information so it can be handled in an automated, integrated way by our algorithms. So there’s a lot of scope in how these models are now more accurate at processing that information.
Mike Vizard: So there’s going to be multiple models and they’re all going to be operating at higher levels of scale and things will be processed in real time. And as one wag said, well, it’s one thing to be wrong, it’s another thing to be wrong at scale. So how do I govern all this and kind of put some controls in place where if something does go awry, A, I know about it and B, I can do something that’ll roll it back?
Param Kahlon: That’s a really great question, and I think that is the biggest question. When I speak to customers, they’re wondering, how do we make sure that this thing actually does what it’s supposed to do and doesn’t hallucinate, doesn’t start doing things it wasn’t supposed to do, and then obviously start doing them at scale, so it’s not causing small harm, it’s causing a lot of harm. One of the things we invested in early on in our technology was the ability to do API management. Integration was built around the ability to support APIs that connect systems together, and on top of that was the lifecycle of building that API and executing that API, and the ability to govern that API: making sure you can write policies that enforce things like rate limiting. How many times is the user calling this API? Is this API processing some unstructured information? Is this API going over its consumption pattern? Is there an anomaly in the execution of this?
So we’ve actually taken that product we built around API management and applied it to things like governance and security for calling a large language model in execution. You can detect whether critical personal information is leaving the company’s boundary, and make sure you’re masking that data before it leaves the company boundary if it’s going to execute over a model outside the company’s four walls. Applying those security and governance protocols is extremely important for any AI you’re using to automate a business process. That’s really what we’ve invested in over the course of the past six months: building that trust layer on top of the AI algorithm, so you’re monitoring for information leaks, and you’re monitoring for hallucination and toxicity in the results coming back from the AI algorithms. That is an extremely important part for enterprise customers.
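The gateway-style policies described here, rate limiting a caller and masking personal information before a payload leaves the company boundary, can be sketched in a few lines. This is an illustrative example only; the class name, thresholds, and PII patterns are assumptions for the demo, not Salesforce’s actual API-management implementation.

```python
import re
import time
from collections import defaultdict, deque

# Hypothetical PII patterns; a real deployment would use a far richer detector.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

class GatewayPolicy:
    """Toy policy layer applied before forwarding a request to an LLM."""

    def __init__(self, max_calls=5, window_seconds=60):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = defaultdict(deque)  # user -> timestamps of recent calls

    def allow(self, user, now=None):
        """Rate-limit check: at most max_calls per user per sliding window."""
        now = time.monotonic() if now is None else now
        q = self.calls[user]
        while q and now - q[0] > self.window:
            q.popleft()  # drop calls that fell out of the window
        if len(q) >= self.max_calls:
            return False
        q.append(now)
        return True

    def mask(self, text):
        """Replace detected PII with placeholders before it leaves the boundary."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"<{label}>", text)
        return text

policy = GatewayPolicy(max_calls=2, window_seconds=60)
print(policy.allow("alice", now=0))  # True
print(policy.allow("alice", now=1))  # True
print(policy.allow("alice", now=2))  # False: over the per-window limit
print(policy.mask("Contact jane@example.com, SSN 123-45-6789"))
```

A real anomaly detector would also track consumption patterns over time, as described above, rather than a single fixed threshold.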
Mike Vizard: Do you think that as we see more AI regulations starting to be built, that organizations are going to have to detail not just what’s in the AI model but how it’s being employed and kind of be able to show that they have control over it and that they have some ability to kind of roll things back if necessary?
Param Kahlon: Absolutely. I think part of the first wave of AI was that it’s a black box: it’s going to do what it’s going to do and we just have to trust it. What companies are realizing now is that the black box isn’t sufficient. Especially in industries that are highly governed and regulated, you need to be able to explain exactly what you’re doing and how you can manage and control it. We already have customers, for example, in healthcare and financial services that are essentially saying, “Yes, we have to be able to document how we’re using AI. We have to be able to explain it to our regulators.” And there’s a lot of work happening today to be able to manage the execution of these things. I think that’s a really good point. Being able to build a layer of governance around how you’re executing AI in the enterprise is going to be very, very critical, especially in some of those regulated industries.
Mike Vizard: It seems as we go along that organizations are going to start training LLMs using a smaller amount of data to get to a more accurate set of results that reduces hallucination. But if that happens, we’re going to see, I think applications may have 1, 2, 3, half a dozen LLMs that they’re going to call through an API somewhere. So is this going to stress the API management because we need to be able to do that dynamically?
Param Kahlon: I think that’s a really good point. First of all, the data required to train the LLMs is usually massive; that’s part of the reason why LLMs work better than traditional deep learning algorithms. But companies are definitely using more contextual data to ground the results of the LLM. And one of the approaches companies are trying is essentially triangulating the results across multiple LLM models to pick the one that seems the most appropriate, based on the use case, the situation, and the context. That’s an approach I think we’ll see more and more of. It will definitely drive more compute, both in training the models and in executing those models. So I think that’s a really good observation: we will see companies leverage more compute. But what all of that does is actually create the need for companies to aggregate more data.
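The triangulation idea described above, asking several models the same question and picking the answer they agree on, can be sketched simply. This is a hedged illustration: the three “models” are stand-in functions with hard-coded answers, not real LLM endpoints, and majority voting is just one possible selection rule.

```python
from collections import Counter

# Stand-ins for real LLM endpoints; answers are hard-coded for the demo.
def model_a(prompt): return "Paris"
def model_b(prompt): return "Paris"
def model_c(prompt): return "Lyon"

def triangulate(prompt, models):
    """Send the same prompt to several models and pick the majority answer,
    returning the winner plus a simple agreement score in [0, 1]."""
    answers = [m(prompt) for m in models]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / len(answers)

answer, agreement = triangulate("Capital of France?", [model_a, model_b, model_c])
print(answer, agreement)  # "Paris" wins with 2-of-3 agreement
```

In practice the selection rule might weight models by past accuracy for the use case, rather than a plain vote, which is where the extra compute the conversation mentions comes in.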
We see that as a general trend: as you’re creating more models, training more models, and executing more models, all of that is going to create the need to bring more enterprise data together. A lot of times, as you’re grounding an LLM model in more contextual data, you’re having to pull data not just from within your company’s four walls, but from outside those four walls as well. And all of that, as we’ve seen from our customers, is really exploding the need for the products I’m responsible for at Salesforce, which is the ability to integrate and automate those things.
Mike Vizard: What do you think the impact of all this is going to be on the way IT teams are organized? Historically, we’ve had DevOps teams managing the API integration and we have data science teams that are doing machine learning operations, and those two things need to come together and somehow converge. And of course there’s some data management folks running around with some data engineering skills. So how does this all converge and how will we organize ourselves?
Param Kahlon: It’s another really good question, Mike. What we are seeing in our business is IT and business teams collaborating more than ever before, because what was traditionally in the realm of IT departments to build and code is now, through large language models and natural language prompts, available more and more to the business. So more and more business users are saying, well, I can go create this automation myself, I can go create this workflow myself, and I can give it to somebody in IT to validate, have it run through the DevOps cycles, and make sure it’s scrutinized and automated. That is one trend we are starting to see: more and more business and IT working together. A lot of times we see the formation of a Center of Excellence, built from skill sets that come from IT and business together.
But a lot of times business processes are well understood by the business. IT doesn’t understand the nuances of the different scenarios the business has to take care of, and the IT team can really play a facilitation role to support that. On the AI side, and the question around DevOps, machine learning and data, I see companies hiring data engineers to say, “Well, how can we go create this data lake?” We have our own offering at Salesforce called Data Cloud, which ingests all the data about your customers so that you can reason over large amounts of data, but also present that data, present a contact profile that aggregates data across the different interaction points customers are having with you. So we see a large focus on creating these data lakes and making sure the data can be aggregated and supported, not just for AI algorithms, but for other parts of applications as well.
And we see the data science teams that are working on these algorithms continuing to grow. A lot of times people say there used to be two problems in data science: one was the data, the other was the science. You couldn’t get the data, and if you got the data, you couldn’t get the scientists. Now the data science teams actually feel, for the first time, that they have the data and need more scientists to come in and create those algorithms. So I think data science practices within companies will also grow.
Mike Vizard: How automated can the integration get itself? I mean, are we going to apply AI to the integration process and put something that feels like a natural language interface in front of this so I don’t have to go hire a bunch of specialists and other forms of high priests to go manage integration?
Param Kahlon: We’re not quite there yet. I think that is a journey we’re on. What we have seen is that code generation models are driving developer productivity, and they’re also making it easy for business users to step in and create simpler workflows, simpler automations that do not need IT to be involved every step of the way. In terms of autonomously creating a technical component without any developer or IT skill set, I think we’re a little bit away from that. Today this is a productivity boost, as opposed to removing the engineer, removing the programmer completely from the loop. But I think the day is not far off when business users will be able to create more sophisticated automations themselves.
Where we are seeing success is in things like document processing. Historically, extracting information from documents only worked if you were looking at very structured data sets like tax forms, where everything was pixel-perfect in position. Then you had some success with semi-structured information like ID documents: when you’re looking at driver’s licenses or invoices from different vendors, you knew the same information existed but the layout differed, and you could build a deep learning model to extract it. Now we’re seeing more success with more complex semi-structured or even unstructured documents like contracts, because LLMs can answer a very specific question from a very unstructured form. So we’re seeing more automation in the processing of unstructured information. But for actually writing a more complicated integration that goes across systems, which was historically done through code, I think we’re seeing productivity improvements rather than full automation.
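The contrast drawn here, fixed-layout forms versus asking an LLM a question about an unstructured document, can be sketched as follows. This is an assumption-laden illustration: `fake_llm` is a stub standing in for a real model endpoint, and the field names and prompt format are invented for the demo.

```python
import re

def extract_from_tax_form(text):
    """Structured form: the label and layout are fixed, so a pattern works."""
    m = re.search(r"Total due:\s*\$([\d,\.]+)", text)
    return m.group(1) if m else None

def fake_llm(prompt):
    # Stand-in for a hosted LLM call; the answer is hard-coded for the demo.
    return "Either party may terminate with 30 days written notice."

def extract_from_contract(contract_text, question):
    """Unstructured document: phrase the extraction as a specific question."""
    prompt = f"Document:\n{contract_text}\n\nQuestion: {question}\nAnswer:"
    return fake_llm(prompt)

print(extract_from_tax_form("Form 1040 ... Total due: $1,234.00"))  # 1,234.00
print(extract_from_contract("(full contract text)", "What is the termination clause?"))
```

The first approach breaks the moment the layout changes; the second tolerates arbitrary layouts, which is the shift the conversation describes.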
Mike Vizard: There’s been a longstanding debate in the land of API integration about whether you should have a single centralized cloud service that everything goes through, standardized and managed by a center of excellence, versus bringing the integration closer to where the code and the data are and having a more federated approach that’s maybe a little more difficult to manage. Does AI change that debate in any way?
Param Kahlon: I think AI makes it easier to manage some of these different touchpoints. If you have a more federated approach and you were concerned about security, governance and anomaly detection, I think AI helps with that debate. In general, what we have seen is that businesses need more flexibility to manage and run their business; they need agility at higher speeds. So I’d say AI generally doesn’t settle that debate quite yet. What it does do is help companies detect and manage more unstructured ways of doing business, and it provides more flexibility and agility there.
Mike Vizard: So what’s your best advice to folks as they look at this whole issue? I think just about everybody’s kind of scratching their head going, how do I make this work? And so where do I get started? It’s a little intimidating.
Param Kahlon: Yes. I think everybody looking at how to bring more AI into their enterprise, into their business processes, should start with one tenet, which is: do you trust this technology? I think the question you asked earlier is, what happens if the results coming out of this are not in conformance with the values we stand for? How do we detect it, how do we manage it, and how do we control it? Do we have a strategy around that? Having that trust layer to support the execution is super important.
The second thing I’d say is data. A lot of times, if you are running a company’s business process and you start to leverage a generic LLM that is not trained or grounded on your data, which could seem tempting, you would get mediocre or, at worst, irrelevant results out of it. So having a data strategy, being able to say, I am going to ground this thing in the contextual data I’m collecting about my customers and my business processes, is extremely important.
And the third thing, just like with any transformation, is to invest in people: invest in people who understand what the implications of this technology are and what results it can deliver. I believe AI is going to fundamentally change how companies use enterprise business applications forever, but it has to be done in a way that you do not get burnt in the process. You do not jump into it without knowing the implications of how to leverage it and how to maximize the outcomes. So invest in people who understand those implications, both from a technology perspective and a business process perspective, and who also understand how jobs and how people’s lives are going to be transformed by this.
Because integration, automation and AI are going to fundamentally change some jobs. They are going to create opportunities for people to do new things, but they are also going to take away some of the things that people have done for a very long time and not enjoyed. Understanding how jobs are going to be transformed, how people are going to learn new skills, and how we staff up to make sure we have the people to think through that transformation is extremely important, too.
Mike Vizard: All right, folks. Well, you heard it here. The first question anybody asks after they build their first few AI models is, now what? I think the answer to that question starts with the word integration. Param, thanks for being on the show.
Param Kahlon: Thank you so much, Mike. Thanks for having me.
Mike Vizard: Thank you all for watching the latest episode of Techstrong.ai. You can catch this episode and others on the Techstrong.ai website. We invite you to check those all out, and until we see you again, we’ll catch you next time.