Synopsis: In this interview, Amanda Razani speaks with Manasi Vartak, founder and CEO of Verta, about the risks businesses should be aware of when attempting to harness AI tech.

Amanda Razani: Hello, I’m Amanda Razani, with Techstrong.ai, and I’m excited to be here today with Manasi Vartak, the CEO and founder of Verta. How are you?

Manasi Vartak: I’m doing well. How are you, Amanda?

Amanda Razani: Doing well. Thank you. Can you tell our audience a little bit about Verta and the services you provide?

Manasi Vartak: Absolutely. Verta is an MIT spinoff; that’s where I did my PhD work, and that research turned into Verta. We provide machine learning infrastructure. This means we help data scientists and machine learning teams manage the models they are building, making sure they know what models exist, where they came from, where they are being used, and whether they are performing as expected. We also do model operations, which is taking the R&D and turning it into a scalable unit of ML work, or ML applications, that can be integrated into products. So in technical terms, we do model management and operations. We help companies make sure they can build AI safely, and as rapidly as they hope to.
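
To make the model-management idea concrete, here is a minimal sketch in plain Python of the kind of record a registry might keep per model: identity, lineage, deployment locations, and a check against a performance baseline. All names and fields here are illustrative assumptions, not Verta’s actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ModelRecord:
    """One entry in a model registry: what the model is, where it came from,
    where it runs, and whether it still performs as expected."""
    name: str
    version: str
    training_dataset: str  # lineage: which data produced this model
    owner: str
    deployed_endpoints: list = field(default_factory=list)
    baseline_accuracy: float = 0.0
    registered_at: datetime = field(default_factory=datetime.utcnow)

    def performing_as_expected(self, live_accuracy: float, tolerance: float = 0.05) -> bool:
        # Flag the model if live accuracy drifts below the registered baseline.
        return live_accuracy >= self.baseline_accuracy - tolerance

# Usage: register a model, then check it against live monitoring numbers.
record = ModelRecord(
    name="churn-classifier",
    version="2.1.0",
    training_dataset="s3://datasets/churn/2023-05",  # hypothetical path
    owner="ml-team",
    baseline_accuracy=0.91,
)
record.deployed_endpoints.append("https://api.example.com/v2/churn")
print(record.performing_as_expected(live_accuracy=0.84))  # False: drift detected
```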

Amanda Razani: Wonderful. And we’re going to talk a little bit about that today – the issues and concerns that surround integrating AI and machine learning into business. So to get started, what are some key considerations that businesses should be discussing when trying to implement AI technology into their business?

Manasi Vartak: Got it. And you’re thinking overall AI, right, not just generative AI? Yes, makes sense. When I think about AI, it has a lot of potential to impact every line of business: whether you’re in marketing or sales, whether it’s back office and accounting software, or a user-facing product. Every PaaS or SaaS product you’re building could be up-leveled, or made more valuable, through AI. When businesses think about implementing it, I like to use the framework of “people, process and tools.” It just works really well, and I’ll go into what’s specific about ML there.

In terms of people, you want to make sure you have the right people on the team. That’s data scientists, ML engineers, and also the software engineers who are going to be integrating AI into your applications.

The second one is process. Process can sound a bit mundane, but it’s actually really important, and it starts at the very beginning, when you decide to use AI in your application. You want to know why you’re using AI. What is the benefit you could achieve through it, and what are the costs? Are you going to start a data science group from scratch in order to use AI, or are you going to use off-the-shelf components? Then you go through a governance process, and we’re going to talk about some of the risks of using AI: How do we know that the way we’re building this AI application is going to be unbiased, safe for our customers, and safe for the business? You make sure those questions are answered, and then you get into the technical details: How are we building the AI application? How are we going to manage it, deploy it and monitor it? Getting the process down is actually one of the hardest parts as you look to scale your AI practice.

And then finally, tools. I’ve spent the last decade or more building different kinds of systems for ML, and the right tools can significantly accelerate your process. So I’m a big believer in choosing the right tools for what you’re trying to accomplish.

Amanda Razani: So you talked about those associated risks. How can companies manage the risks that come with generative AI outputs when building products using this type of technology?

Manasi Vartak: That’s a massive topic right now. For folks who are listening and following the regulatory landscape, whether it’s the OpenAI CEO testifying before Congress or the EU AI Act, there are also regulations coming that will make it essentially mandatory for a company to reckon with these risks and have processes in place. Gen AI risks fall into a few different categories, and I’ll highlight a few of them, along with mitigations, depending on the kind of risk we’re talking about.

The first, biggest one, as you noted, is that the outputs of these models are very open-ended. Consider a GPT model that can write ad copy: it could produce ad copy for sneakers, for medicines, or for devices that send objects to space. It’s a wide spectrum. So making sure the outputs of these models actually align with what you are looking for, and meet the quality bar, is pretty critical. And it’s fairly hard to do from a technical perspective, because these models tend to fabricate information, or hallucinate, as it’s called, where they come up with information that is just not factual. It’s typically very hard for the model to tell where that information came from; because it’s such a complex model, it’s hard to say, “Oh, I produced this information from these steps of reasoning.” So validating, spot-checking and manually reviewing the outputs from these models is actually pretty important.

The second one, I would say, is the legal risk associated with using Gen AI. This has to do with whether the model is likely to produce copyrighted content, which depends on how the model was trained and what datasets were used. It also depends on privacy: Are we allowed to use a certain piece of information about the user when we’re making a prediction or recommendation? GDPR and other existing privacy policies actually play a really big role here, so copyright and privacy considerations are fairly significant. And then, as with essentially all ML, there’s the question of what degree of bias the model introduces. Is it treating people of one ethnicity differently from another? Those are the things where you need to get deeper into the model outputs to understand how it’s behaving.
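
Here is a minimal sketch of the kind of automated spot check described above, which routes suspicious generated ad copy to a human reviewer instead of publishing it. The policy list, function name and threshold are illustrative assumptions; real hallucination screening needs much richer checks (fact lookups, citation verification, human review).

```python
import re

BANNED_CLAIMS = ["guaranteed", "cures", "100% safe"]  # hypothetical policy list

def review_gate(ad_copy: str, max_length: int = 300) -> dict:
    """Cheap automated checks on generated ad copy; anything suspicious
    is routed to a human reviewer rather than published directly."""
    issues = []
    if len(ad_copy) > max_length:
        issues.append("too long")
    for phrase in BANNED_CLAIMS:
        if re.search(re.escape(phrase), ad_copy, re.IGNORECASE):
            issues.append(f"banned claim: {phrase}")
    return {"publish": not issues, "needs_human_review": bool(issues), "issues": issues}

print(review_gate("Our sneakers are guaranteed to make you faster!"))
# {'publish': False, 'needs_human_review': True, 'issues': ['banned claim: guaranteed']}
```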

Amanda Razani: Let’s talk about bias a little bit more, because with the open source nature of AI, is there a way to untrain the bias that’s already there? And how do we keep it from continuing?

Manasi Vartak: That’s a great question. I don’t think there is a clear way to keep these models bias-free; what we can do is make them as close to that ideal as possible. There are a few reasons for that. One is that the data we’re using to train them, as you’re noting, might be biased, because historical policies have introduced bias into the data sets, and the models are really learning those biased patterns. There are a few techniques that have been proposed in research to help de-bias models, but only for a small subset of models or types of bias. IBM, for instance, has done really great work here, and its AI Fairness 360 toolkit is one of the strongest tools in this space. So you can use these technical tools, and you can also make sure that your data isn’t biased. Some of the discussion is also from a process and ethics perspective: having those conversations early on about what kind of bias this model might produce, and whether there are ways to mitigate it, maybe through manual review of the outputs being generated, or by assessing bias, again using technology, to see if an output is biased and then choosing not to display that output. So you can add layers of protection as these models and the data get better over time. Having a human in the loop, bringing in our judgment and the company’s charter of what its ethical practices are, and converting that into model output considerations, is a really effective way to go.
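
To illustrate one of the checks that toolkits like AI Fairness 360 automate, here is a minimal sketch computing statistical parity difference on hypothetical model outputs. The data, group labels and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical model outputs: 1 = favorable decision (e.g., loan approved).
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [ 1,   1,   0,   0,   0,   1,   0,   1 ],
})

# Statistical parity difference: P(approved | group B) - P(approved | group A).
# A value near 0 suggests parity; a large negative value suggests group B
# receives the favorable outcome less often.
rates = df.groupby("group")["approved"].mean()
spd = rates["B"] - rates["A"]
print(rates.to_dict(), "SPD:", round(spd, 3))  # {'A': 0.75, 'B': 0.25} SPD: -0.5
```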

Amanda Razani: So in your view, when you’re looking into different businesses and their attempts to implement AI into building different technologies and products, what do you think are some of the key issues and concerns you’re seeing? Maybe some of the roadblocks on their journeys? Can you talk a little bit about that?

Manasi Vartak: The funny thing Gen AI has done is that it has made AI accessible to a lot more people than it previously was, so we’re seeing a shift in the general AI maturity of organizations. Where previously we might not have thought of certain enterprises as being very AI-forward, they can now use these off-the-shelf AI components to rapidly increase their AI maturity. The challenges really do depend on where an organization is on its AI journey. Some typical roadblocks we see: Is their data organized? Is it clean? Is its quality high enough to do good AI? That’s one of them. Getting the right skills on the team continues to be a challenge across all levels. Then, when you begin to scale, when you go from one or two working models and applications to wanting to build dozens of these applications, that’s where the tools and the processes become so critical, because you don’t want to be reinventing the wheel. If you use AI to, say, develop your ad copy, you want to be able to just as quickly use AI to write recruiting job descriptions. Doing that quickly, without spending the six or nine months you spent on the first application, is really important; otherwise, teams just go a lot slower. And then the final one is regulation, and we’re hearing about that more and more. Gen AI has brought a lot more awareness to the concerns with AI in general, so we’re seeing a lot more conversation around what checks we can put in place. You don’t want to hamper innovation, but you want to make sure that people are thinking about the ramifications of their work as they’re building AI-enabled products.

Amanda Razani: Is implementing AI good for all companies? I would think it kind of depends on the cost. Is it cost-effective to be implementing AI at this point in time? Because it can be kind of expensive.

Manasi Vartak: I think that’s a great question. It depends on the use case quite a bit, and I would highly recommend folks do an ROI or cost-benefit analysis. The picture has changed a lot with Gen AI, but Gen AI is only good for a specific set of applications. If you’re looking to, say, do a yes/no classification on whether someone will click on an ad, Gen AI is not going to help you; you’re going to need a completely different kind of AI model. So as with all business decisions, and this is not a great answer, I recognize that, the benefit of AI is going to vary depending on the use case. And sometimes you can get by with simple rules; you don’t need to put machine learning or fancier kinds of AI into your application. If a set of five rules is going to get you by for the next year, that’s great, do that. Then, once you’re at a more refined stage, you can go and invest in building or buying some AI capabilities.
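
The point about starting with rules can be made concrete. Here is a minimal sketch of a five-rule baseline a team might run before investing in a trained model; every field name and threshold is a hypothetical assumption, not a recommendation engine.

```python
def should_show_ad(user: dict) -> bool:
    """A five-rule baseline for ad targeting, standing in for an ML model."""
    if user.get("opted_out"):
        return False                    # rule 1: respect opt-out
    if user.get("sessions_last_week", 0) < 2:
        return False                    # rule 2: require recent engagement
    if user.get("country") not in {"US", "CA"}:
        return False                    # rule 3: launch markets only
    if user.get("age", 0) < 18:
        return False                    # rule 4: adults only
    return user.get("clicked_similar_ad", False)  # rule 5: prior interest

print(should_show_ad({"sessions_last_week": 5, "country": "US",
                      "age": 30, "clicked_similar_ad": True}))  # True
```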

Amanda Razani: A little while ago, you mentioned finding the needed skill sets, and a lack thereof. I know that can be an issue now, but a lot of people are actually concerned that in the future, AI is going to take over a lot of these jobs and the skill sets are not going to be necessary. Can you speak to that? What do you think?

Manasi Vartak: For sure. It’s a very mathematical answer, but the distribution of jobs is going to change with AI. Historically, we might have needed a lot more people doing a certain kind of job; in the future, we’ll need fewer people doing that same job, but we will need those people, or a different set of people, to do other jobs more. So the distribution changes. I always think about AI as augmenting humans as opposed to replacing them. I’ve referenced marketing copy a bunch of times, because I think that’s one of the things Gen AI does well, but even there, you need a human reviewing it. For instance, if a marketing department is writing articles and wants to summarize a particular article from McKinsey, just asking ChatGPT to do that is actually very risky, because ChatGPT will make up a lot of things. So that’s where we still need the humans who are reviewing the work of the AI. They’re also directing the work, whether via prompting or more active involvement. I think AI is going to magnify what individuals can do and augment our abilities. It’s a brand new world; the landscape of jobs will shift over time, but human creativity is still going to be really unique. I hope so.

Amanda Razani: Absolutely. The human element is still going to be important, and that’s a good thing. So looking down the road five years from now, what do you see as some of the solutions that are going to play out with generative AI specifically? How is it going to affect different areas of business?

Manasi Vartak: This field is moving very, very quickly; every month we have mind-blowing discoveries. But making a discovery or building a prototype is different from building a production-grade product. That’s something I like to put out there, because if you go on Twitter or LinkedIn, there are a lot of hot takes: “Oh my god, the Gen AI did this, that and the other.” Between building a prototype and having a solution that works with high accuracy every time, there’s a big drop-off. At Verta, we’ve been working on this on the ML side, general-purpose ML, not just Gen AI, and it takes months to get to that level of validation, to that level of performance. So I think there’s going to be a lag between prototypes that are really exciting and something that meets a business’s requirements every day; we’re going to see that gap. We’re seeing applications across the board, from pharma onward, especially around language, and images as well, of course. But some of the core ML functions are still not addressed by Gen AI. These are what are called discriminative models: Should I recommend this item on an e-commerce website? Should we take this particular action on a loan? That’s still not generative AI; it’s quite different. So I think Gen AI is going to have impact in unexpected places, and what has historically been traditional ML and data science is going to be fairly safe. I do think it is going to democratize intelligent applications for a wider user base. SQL is what folks use to query databases, and now you can query databases using natural language, which means departments other than technology are going to be able to do that every day. That’s going to open up a new kind of skill entirely for them; they’re going to be able to do things they couldn’t before. I’m super excited for that.
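
The natural-language-to-SQL pattern described above looks roughly like this minimal sketch. The `generate_sql` function stands in for an LLM call, which in a real system would be prompted with the database schema and the user’s question; the table, data and question here are all invented for illustration.

```python
import sqlite3

def generate_sql(question: str) -> str:
    """Placeholder for an LLM call that translates natural language to SQL.
    Hard-coded to one translation so the sketch stays runnable offline."""
    assert question == "How many orders came from Ohio?"
    return "SELECT COUNT(*) FROM orders WHERE state = 'OH';"

# Build a tiny in-memory database to query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, state TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "OH"), (2, "CA"), (3, "OH")])

sql = generate_sql("How many orders came from Ohio?")
print(conn.execute(sql).fetchone()[0])  # 2
```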

Amanda Razani: And it’s very exciting what the future holds. This technology is advancing, like you said, so rapidly from one month to the next. It’s incredible; these are really the biggest waves I’ve seen made across the globe in a long time.

Manasi Vartak: Yes. I think the internet was probably the last one, or maybe mobile phones, but I want to say this is massive for sure.

Amanda Razani: Yes, absolutely. Well, thank you so much for coming on the show and sharing your insights with us. And to those of you in the audience: Stay tuned for more great interviews on Techstrong.ai.