Synopsis: In this AI Leadership Insights interview, Amanda Razani speaks with Yaron Litwin, chief marketing officer of Canopy, about the dangers and risks posed to children and teens by AI.
Amanda Razani: Hello, I’m Amanda Razani with Techstrong.ai. I’m excited to be here today with Yaron Litwin. He is the chief marketing officer for Canopy and Netspark. How are you doing?
Yaron Litwin: I’m great. Thanks so much for having me today. I appreciate it.
Amanda Razani: So can you explain a little bit about your company and the services you provide?
Yaron Litwin: Sure. There are actually two companies. One is Netspark, the overarching global corporate company, which has been developing AI for over 14 years, specifically for identifying pornography and nudity in images and video. And although AI is now the buzzword, we’ve been doing this for 14 years and have developed a homegrown proprietary platform that, when compared to some of the leaders in the space, comes out six times more accurate and faster at identifying those types of explicit images and content. As part of that platform, we launched Canopy, which at its core is real-time filtering using our AI. Around that we’ve built additional features to provide an all-in-one parental control app that can filter content in real time, as opposed to just blocking entire apps and websites. It also has additional parental control features, such as screen time management, which can be set on a per-device basis; perhaps the parent wants to allow more screen time on a laptop for homework than on a mobile device. Another important feature is what we call our anti-sexting feature, which will alert parents if an explicit image is being sent or received. That helps avoid what we’re seeing more often now, which is extortion, where kids are getting blackmailed over images they’ve sent. And that will tie into our AI conversation as well.
Amanda Razani: That’s wonderful work you’re doing, because in today’s environment our children and teens are on screens more and more, at younger ages and for longer periods of time, both in school and recreationally. I’m sure this is a concern all parents and teachers have for the children. So what are the main issues and dangers you are seeing as AI and all this other technology advance? And can you share some use cases or examples you’ve seen?
Yaron Litwin: Sure. First of all, our interview is timely: just this week the Surgeon General came out announcing the dangers of social media for young kids, the amount of time they’re spending on it and the type of content they’re exposed to. So it’s definitely becoming a public health issue. One area that we’re seeing become more concerning when it comes to AI is the ability to edit, or invent, conversations that kids might be having. For example, we’re aware of a recent case where a young teenage boy naively shared a photo of his muscular chest to show off, and someone was able to take that image, manipulate it using AI to turn it into a nude photo, and then try to blackmail the teen, which of course was a very scary situation for both the teen and the family. Those are the kinds of AI risks we’re starting to see out there, where kids can use deepfakes to invent what one kid might be saying about another, or manipulate images in ways that can be very damaging. So with AI, it becomes even more important for parents to be aware of some of those risks, and also to be aware of some of the solutions that might be out there.
Amanda Razani: Absolutely. And those deepfakes, I’ve been seeing examples of them playing out across social media. It’s very scary, because you start to wonder at what point we’ll know what’s real and what’s not with this AI technology.
Yaron Litwin: Yeah, very true. And that’s why I think protecting kids around what they’re sharing online is so important. You might naively think you’re sharing just a photo, but it can very easily be turned into something unintended. So even more so, parents today need to be aware of these risks and find ways to protect their kids, both through communication, the parental intelligence, and through what we’re calling “AI for Good,” which is what Canopy offers: AI that can really help de-risk some of these situations.
Amanda Razani: And especially with the case you gave as an example, I think about children and teenagers: their minds aren’t fully developed, they’re still growing, and there are psychological repercussions when things like this happen and they’re manipulated. Can you speak to that a little bit?
Yaron Litwin: Sure. In a post-COVID world, we’re seeing the mental health of these young kids really being impacted, both from the amount of time they’re spending on these networks, the social media, the online gaming, and also from the type of content and some of the bullying and interaction that happens there. It’s definitely very concerning. The FBI reported about 5,000 cases of these types of sextortion just last year, and the real number is probably much larger. Unfortunately, some of those kids ended up committing suicide as a result of the hardship they had to go through. So this is real, it’s very scary, and the FBI put out an announcement that this is really something for parents to watch out for.
Amanda Razani: So that being said, do you think there’s going to have to be some kind of government regulation? And how do we regulate AI technology that’s open source?
Yaron Litwin: That’s very challenging. Certain states are trying to regulate access to social media and certain platforms, but AI is definitely harder. Not to forecast too far, but the idea of watermarks, of really being able to identify what is AI-generated and what is real, is probably the next wave we’ll see when it comes to these “AI for Good” type solutions. And, of course, we hope to be at the forefront of that.
Amanda Razani: And that’s another thing: we talk about all the scary, bad things about AI and technology, but as you said, there’s also “AI for Good,” and it’s actually helping to solve these issues. Can you speak a little more to that and to how AI is doing good in the world?
Yaron Litwin: Sure. Outside of what we do at Canopy, which again is AI to protect the mental health of young kids and young adults, we are seeing other solutions, such as chatbots that kids may feel more comfortable interacting with than with an adult or their parents, and these chatbots can identify mood and potential risk of depression. So there’s definitely some interesting AI evolving to protect young kids and young adults on the mental health side, and we expect to see much, much more of that.
Amanda Razani: And what do you say to parents? Do you have any suggestions for how they can monitor their children and make sure they’re staying safe, not going to inappropriate sites or being manipulated by AI?
Yaron Litwin: Sure. Ultimately, it comes down to parenting and the open communication parents should have with their children about some of the risks. What’s interesting for us is that the feedback we often get from customers is that having a platform of solutions such as Canopy actually opens up opportunities for them to have those conversations when an instance arises. For example, we had a mother who was notified through our anti-sexting feature that her son was about to send an explicit image, and she shared with us that she used that as an opportunity to explain some of the risks, how to interact online and the habits she wanted her children to have. So sometimes the technology can be the accelerator or enabler of positive conversations, when something real happens. Those are the kinds of examples of having, on one hand, the artificial intelligence that protects, and on the other the parental intelligence and communication that can happen as a result of having both in place.
Amanda Razani: And I know another thing, when we think about the classroom: some teachers are concerned about AI and some are embracing it. I’d love to have your input on generative AI especially, being used to write papers and to find information for projects. What is your stance on that in schools?
Yaron Litwin: The leaders in this space, from Bill Gates on down, really see AI as the next wave of innovation. On one hand, kids definitely need to embrace it and be part of it, not be left behind, because it will change how we do many things. So the question of whether one uses AI for homework or not is definitely a challenge for teachers. But at the same time, perhaps we also need to adapt to how the world is moving, and in certain schools have kids focus on innovation and creativity. Figuring these things out, finding the right prompts and so forth, is definitely a skill set that will need to be used. So I think it’s about finding that delicate balance, and I’m sure that for teachers today those are difficult discussions.
Amanda Razani: It’s a bit of a double-edged sword, I’d say. And it just comes down to how it’s used, as with most technology. It can do so much to automate and make processes more efficient, enhance schoolwork, help children learn more quickly and help them with their writing skills. But we don’t want them just using it to write their papers for them and never learning the skill. So it’s a fine line, for sure.
Yaron Litwin: Exactly.
Amanda Razani: So, last question. With this technology moving as fast as it is, what do you see the world looking like in, say, two to three years?
Yaron Litwin: Oh, wow. Well, I think there will definitely be more awareness around the use of social media and the amount of time we’re online. It took us 100 years to realize that cigarettes are bad for you; I think we’re moving faster on some of the public health risks of social media and screen time. What we’re seeing is that parents are very fearful about this, and hopefully, with some of the regulation, the solutions and the tools, parents will have more confidence and a better understanding of how to parent in this new digital world. So hopefully we’re at a point where we’ll start seeing some of those positive effects take place as well, so that kids can really benefit from the positive side of online and digital experiences, the creativity and innovation that come from that, while also having tools in place to protect them from the bad.
Amanda Razani: Certainly. Well, thank you for coming on our show today and giving your insights into this topic.
Yaron Litwin: Oh, thanks so much for having me. I appreciate it.