Synopsis: In this AI Leadership Insights video interview, Amanda Razani speaks with Demi Ben-Ari, CTO and co-founder of Panorays, about the privacy and security concerns business leaders face with the use of AI.
Amanda Razani: Hello, I’m Amanda Razani with Techstrong.ai, and I’m excited to be here today with Demi Ben-Ari, CTO and co-founder of Panorays. How are you doing today?
Demi Ben-Ari: Doing great, Amanda. Thank you for having me.
Amanda Razani: Happy to have you on our show. So can you explain to our audience what Panorays is and what services you provide?
Demi Ben-Ari: Sure, of course. In the grand scheme of things, we do third-party cyber risk management. Going into particular use cases and how we implement that with our own customers, we’ve created a SaaS platform that covers, end to end, the whole process of auditing any third-party entity and continuously monitoring these aspects. Why am I saying the term third-party? Because it covers a lot of use cases. When people used to hear that term, they thought of vendor management: okay, I’m using vendors to provide my services, and maybe I’m using their services for other related issues. But third-party risk management is much broader, and there are various third-party use cases: implementing technology, subsidiary management, product integrations. Every company in every business today has some kind of technology integration partners, managed service providers, et cetera.
So all of these third-party relationships are first mapped out in the product itself, and then we decide which audit process will be conducted with the third-party entities and how to continuously monitor them from day one onward. And again, we cover customers from all verticals and industries, from the largest supply chains in the world, like TSMC, down to SMB companies. Everybody has the problem of third-party security because there are so many third-party breaches in the world today, and it’s not over. There will only be more and more of them, because every company today leverages hundreds if not thousands of third-party relationships to provide their services. And that’s the challenge we’re helping our customers face.
Amanda Razani: Wonderful. Thanks for sharing. So the first question that comes to my mind, and what’s on almost every business leader’s mind these days, is AI. I know OpenAI’s release of ChatGPT over a year ago opened the doors to using AI globally in so many different ways, and now everyone is trying to harness and implement this technology in their business. But what security and privacy concerns are you seeing with the use of AI, and specifically generative AI?
Demi Ben-Ari: Okay, I want to tackle this in layers. Because what the boom that OpenAI, AI21 and companies like that created in the market is, eventually, is the commoditization of this ability. Everybody can use it now, the average Joe, right? Basically, I can register for ChatGPT with my Gmail and that’s it; I ask cool questions and do stuff. But it’s much, much broader than that, because with this commoditized ability, people do not understand the technology and how it’s leveraged. And then, as security practitioners, we also layer it when we look at the architecture of implementing these types of solutions, and you mentioned that as well. Any portion of the business can implement AI abilities to be better: customer success, sales, that intelligence portion of gathering information and answering questions that used to require a ton of research, and now somebody just answers.
And it extends to the generative AI and conversational AI that have already been implemented and commoditized, because AI technology has existed in the world for the last 20 years. It just wasn’t that approachable. What happened now, with the LLMs and also the prompt engineers, which is the new term that came up, is that they became the bridge between the technology of AI and something a common person can understand and leverage. Any security practitioner now needs to think, when employees leverage these abilities, and again, I’m looking at the company and how it leverages them, in a few layers. What type of information is being processed with that AI ability? Is it self-hosted, or are we using a service? ChatGPT is an example, and let’s take the broader picture: OpenAI, backed by Microsoft. They have different legal agreements and terms and conditions depending on whether you use the direct-to-consumer ChatGPT or the API.
How am I leveraging that, right? Because they literally state that they do not take any liability for whatever you put there. And again, not from bad intent, employees can basically put a lot of confidential information into these tools. They might get value from that, but eventually that information trains future models, or even gets reflected back through the technology. And it has already been proven that without any concrete data, anonymized or whatever, you can take that conversational aspect at the inference level and reconstruct what data the model was trained on. And this is like, wow. You can basically expose all of your most confidential information, both the corporate information, the enterprise information, whatever you have to provide your business, and also sometimes customer data, because you want to add that ability to your product.
It’s cool, it’s precise, you do it with a click of a button, but eventually they touch your customer data and they become your sub-processor in the world of data privacy, et cetera. So there are a lot of considerations to weigh, and you have to have supporting technology that actually covers that. Because we’ve described maybe one discrete use case. What happens when everybody in the company uses it, and you have, say, a thousand employees? Let’s take 10% of them adopting AI technology. Without technology covering all of these loose ends that are opening up, it will be, I think, impossible to actually keep track of it.
Amanda Razani: Absolutely. So what advice do you have for business leaders to make sure that it is secure and safe to use?
Demi Ben-Ari: Okay. Firstly, there are a lot of safety mechanisms that security practitioners can implement, even with the usage of specific SaaS services, because most of these abilities are introduced that way. I think the initial step would be whitelisting. To use something like that in your corporate strategy or your security program, you’ll need to define safety mechanisms for people to be able to adopt it. So even the adoption should be, I won’t say controlled, I would say monitored, because you can’t control everything. At least you need to get visibility into all of the services that are used. So that’s the third-party portion, from what I’m seeing also with customers. Once you have identified the services, you also need to identify what is being passed to them and how. That is a bit deep-tech, so people who came from the process-security side need to leverage the abilities of others, maybe architects or programmers, to be able to understand what risk is posed by that third-party AI usage.
And whatever services you’re using, always consider that it’s more secure if it’s running on your own tenant and in your own environment. So basically, if you can take all of these open-source models we’ve discussed, which have been trained on something, and run them on your own tenant, self-hosted as it’s called, it might be a bit safer. But there’s always a thin line between that and how much it costs you. Because if introducing that ability and time to market are important right now, it’s a business risk that people need to weigh, and maybe it’s worthwhile to take the risk upon yourself because you will be able to drive more business.
Amanda Razani: Now that brings up another question, which is does every business need AI? Is AI a tool that every business should be trying to harness and implement? Or what should they be considering first before just trying to jump on this tool and use it right away?
Demi Ben-Ari: Okay. I really believe that in this era, the AI boom, anybody who doesn’t consider it will become obsolete within, let’s say, five years from now. I’m not really sure about 10; 10 is too long. Five years from now they will become obsolete. Why? Because think of it: the efficiency of driving business is multiplied by orders of magnitude. What people used to do, and it took them a lot of time, you can now do with the click of a button. And I’m not saying it’s replacing people, it’s augmenting them. So now, with far less, you can do much more. If you don’t do that, your operational cost of running the company will be much higher compared to your competitors.
So on that portion of efficiency alone, I think a company that won’t adopt AI will fall behind, and I’m not even speaking about the product portion, leveraging AI capabilities to make your product better. Making the operational aspect of the company and your service better, I think, will be the first step. And then it really depends what type of product you’re providing to your customer base. If it’s a deep-tech product, yes, of course you can make it more efficient with AI mechanisms in various ways, not only with generative AI. That’s my take on it.
Amanda Razani: So how do you think we stand globally from the governance level? Where do you see us going with that? Because I know we do have the EU and the White House looking at this and-
Demi Ben-Ari: Exactly.
Amanda Razani: But where do you think it stands?
Demi Ben-Ari: Right now it’s at the maturity level of kindergarten, in my opinion. Basically, everybody understands that it’s a really, really large risk you’re introducing to a business. They fully understand the magnitude, I think, of what might go wrong, and more and more governments are starting to form regulations around it. I think right now we’re at the stage of educating people about what that means and what risk is posed by these types of relationships and technology. The next phase will be regulators actually mandating companies to do things, companies meaning the providers that offer AI abilities, not the people that use them, because this is the easiest way to implement any proper program. And I think in the next five years we’ll mature into having tooling around that as well, to keep track. But right now people are really, really trying to understand, just like when the data privacy regulation of GDPR started to boom back in 2016.
The actual enforcement happened somewhere in 2018 or later, I would say. It really depends how you look at it, but at least two to five years after. And then of course you’ll have fines and the whole legal aspect when somebody goes outside the scope of the regulation. So I think this will be almost the same evolution process, where somewhere in the first year or two you’ll see more and more regulations. You can see that in the US, it happened in the EU, and more and more will follow, just like data privacy regulations got adopted globally. Most of them follow the same guidelines, by the way, just named differently: GDPR, CCPA, and then compare that to [inaudible 00:12:58] in South Africa and PDPA in other countries across the East. I think that evolution process will happen, and more and more tools will surface to answer these privacy and security needs.
Amanda Razani: So as we’re looking into the future, and maybe this is far into the future or maybe not, I would like to hear your stance on it. I know there are some concerns about quantum computing, and about when quantum computing and AI come together. Do you think we’re prepared, or even close to getting started on being prepared, for the security risks and concerns that will come once quantum and AI combine?
Demi Ben-Ari: Skynet. No, I’m just kidding. It’s a problem, a problem that people do understand, and they have to put safety mechanisms on all of these entities that, combined, could break a lot of stuff. As an example, breaking encryption: people have spoken about quantum computing being able to break the current state-of-the-art encryption mechanisms. It is true, but the evolution can also go the other way, on the good side. So basically, if people are worried that modern encryption will be broken by quantum computing, go back and make a better encryption mechanism. The same goes for AI: you will have to find some kind of safety mechanism for it to be able to operate, because it can’t be stopped, in my opinion. And if we try to stop it rather than adopt it in a secure manner, bad actors will try to evolve it anyway, like everything that has happened in history. I think this won’t be different.
Amanda Razani: Yes, it’ll be both the problem and the solution in many ways. So if there’s one-
Demi Ben-Ari: The chicken and egg going forward.
Amanda Razani: Yes. So if there’s one key takeaway then that you could leave our audience with today, what would that be?
Demi Ben-Ari: I think education. From any perspective, whether you’re technology-oriented and trying to implement AI mechanisms, or you’re on the user’s end, try to understand what you’re doing. Ask a few questions before starting to use any service, and about what the implications of giving it that type of data might be. And data includes even the questions you’re asking all of these entities, right? Take, for instance, all that we spoke about on the conversational AI portion, and also what type of information you’re consuming, because that can also be a risk from your end. Can you trust that mechanism to provide the truth? Is it something that you might consider? Asking ChatGPT is the new reality.
Or again, you can see that people sometimes complain that the information being yielded might be 50% false. So every time you speak with that type of entity, take everything with a grain of salt. Have it as an augmentation, an auxiliary tool for whatever you’re trying to achieve, and, as an intelligent human being, consider whether it’s something you want to trust or not. That’s my take on it.
Amanda Razani: Wonderful. Thank you for coming on our show and sharing your insights with us today.
Demi Ben-Ari: Thank you, Amanda. Have a great rest of the day.