Synopsis: In this AI Leadership Insights video series, Mike Vizard speaks with Joe Murphy, tech evangelist with DeepBrain AI, about all things related to deepfakes.
Mike Vizard: Hello and welcome to the latest edition of the Techstrong.ai video series. I’m your host, Mike Vizard. Today we’re with Joe Murphy, who’s a tech evangelist for DeepBrain AI, and we’re talking about all things related to deepfakes. We’re starting to see a lot of them out there; in fact, you might not even be sure if this is me and that’s him, but we’ll see how this plays out from here. Joe, how are you doing?
Joe Murphy: Good, Mike, it’s nice to meet you, and hello to your audience. As you mentioned, I’m with DeepBrain AI. We are a video synthesis company, and we create AI models that are text in and video out. So we create digital humans, essentially. You type a script, we synthesize the video of the model you’ve selected, and it looks like a real person and sounds like a real person. Along the way there’s a lot of controversy and swirling topics around digital humans and deepfakes, so I’m happy to talk to you about it today.
Mike Vizard: Well, before we jump too far into the negative side of this thing, what are some of the positive use cases you’re seeing? What are companies doing with this stuff that we couldn’t do before?
Joe Murphy: Yeah, absolutely. So I would say at a higher level, we view AI in general as an enabler, an amplifier, a tool for humans. Video synthesis is another one of those tools, and it’s an AI tool that has recently taken off in Asia for AI news anchors. What we see is TV stations, and DeepBrain is working with over a dozen TV stations right now, coming to us and asking us to create a virtual human based on their lead anchor or the franchise face of the network. We go through our process, we develop that AI human, and we deliver it to the station. Then they can use that model to present hourly updates throughout the day using the trusted face and voice of the lead anchor, without bringing them into the studio with everything it takes just to do a one- or two-minute segment update. They can now use AI to do that.
I’ll let you know that every news station we’re working with is very transparent with the audience. When they’re using the AI news anchor, it says it right on the screen: this is the AI anchor, here’s an update, here’s what we’re working on for the evening news. Then the real anchor is out in the field working on the stories and comes in and delivers the evening news just as they always would. So in the media world it’s really considered a win-win, because the news anchor is getting more face time with the audience throughout the day, and the network is getting more time with their talent, creating more and more content that they’re not only pushing out on full HD television but also to their social media channels. So media has been a big one.
Also, customer service. We’ve deployed our AI humans in hotels as an AI concierge. It’s a kiosk that you can walk up to and ask, “What time does the pool open?” Or, “What good restaurants are nearby?” All of these, what I call low touch engagements, you can now have an AI human handle with the guests. Then for the high touch engagements, the staff can spend more time when somebody’s complaining about something that’s really hard to handle, like, “The room next door is too loud, do something about it.” That’s not what the AI is there for; that’s what the real agents are there for, because it takes multiple systems and also a bit of nuance to handle requests like that.
Finally, banking is a big one. AI lobby leader is a position that we see across Korea right now, where you can talk to an AI agent and they’ll tell you what documents you need, or they’ll say, “Oh, maybe you could do this on the mobile app, could I show you how to do that?” It’s really a line buster, because the worst thing you can do is wait in line for 20 minutes at the bank, get to the front, and then hear, “Oh, you don’t have the documents,” or, “That person’s not available right now.” So this lobby leader position, again, is just customer service oriented.
Mike Vizard: I get all that, but there are clearly people who are going to use this for nefarious purposes, and that gives rise to this whole conversation about deepfakes. How will we know when somebody is up to something, trying to mimic my boss and send me a video asking me to transfer, I don’t know, half a million dollars into some account somewhere for some important task? And do we lose trust and faith in everything that we’re seeing because we won’t know what to believe?
Joe Murphy: Yeah, so I envision a future where an AI avatar is an extension of our digital footprint. Further down the line, everyone will have their own AI avatar, and it will be another tool that they use. Just like you have an email address, social media accounts and bank accounts online, you’ll have an avatar online as well. But it is a real concern. At DeepBrain we do have deepfake detection software. We have worked with governments, and I can’t share much about that on this public forum, but deepfake detection software is something that is being developed by many companies. Deepfakes and digital humans, in my mind, are very different. Digital humans are opted in; people have chosen that they want this avatar. With deepfakes, anybody can create a deepfake of someone using off-the-shelf tools. It’s a different technology, usually called face swap technology, so it is a little easier to detect than the digital humans or hyperrealistic avatars that we create.
But it is a challenge, and deepfake detection software is out there. On the government side, just last week the Biden administration announced that several of the leading AI companies have signed on to a commitment that when AI-generated content is presented to people, it needs to be labeled as such. The EU AI Act is a little bit further ahead, and China is releasing rules. I do think government involvement and regulation is something that is going to help alleviate this, but it will be a period of growth and learning, and there’ll be some missteps along the way.
Mike Vizard: So if there isn’t some way to verify it, and maybe we’re going to start to assume that everything that isn’t verified is false and go from there, does that change the way we engage with content ultimately? Because today we believe everything that comes across, but maybe we’re going to shift to more of a verify first, trust second kind of mindset.
Joe Murphy: Yeah, and I agree with exactly what you just said, and I’m excited about this. Because I feel like we started in a time, before the internet, when our news sources were all trusted, vetted news sources, and then with social media becoming more of a news source, it wasn’t vetted, it wasn’t verified, and a lot of misinformation is out there. I mean, you can read story after story about it. I’m not going to get into it here because I don’t want to get political. But absolutely social media has become a news source, and already we’re at the point where you can’t believe what you read, you can’t believe what you see, you can’t believe what you hear, and that’s just going to get exacerbated as this technology proliferates and becomes democratized and everybody can use it.
So I envision a future, it’s almost like a pendulum swinging, where news sources were the trusted, verified content, then the pendulum swung toward social media, where people went to get their news, and it’s going to swing back, because these news agencies are going to invest in deepfake detection technology. You’re going to need a ground truth. You’re going to need to be able to go to a source and say, yes, I trust what’s coming out of here, and they have the technical know-how and the due diligence to vet their content before they present it to me. So as the pendulum swings from one side to the other, I think this will be a positive thing, because social media, fake news and misinformation are going to be amplified in the near term, and people will need to get back to their trusted news sources.
I kind of think of it as when you’re standing in the supermarket line at checkout and you see those magazines right there, and it’s all these crazy stories of aliens and Hillary Clinton, whatever it is. You know that’s not real news, and you don’t have to think about it. And I think that’s where, as a society, we’re headed with social media, you’re going to use it more for entertainment, more for socialization, and less for getting news.
Mike Vizard: Mm-hmm. Do you think we’ll find ways to suppress things that are false more aggressively? Because part of that conversation is that there’s all this distribution of stuff and a lot of noise in the system, so if we can recognize it sooner, will we get better at tamping it down a little bit and saying, hey, we’re not going to amplify this thing?
Joe Murphy: Yeah, absolutely. So there will be new technology for detecting audio and video that is AI generated; many companies are working on it. As laws come into place that say you have to label AI-generated content as such, I think that will go a long way toward tamping down the hobbyists or the troublemakers who are just goofing around with it. There will be people who are doing it with more nefarious motives, but that’s just part of the natural cycle when a new technology comes along: people think about ways they can use that technology for good and for bad.
I think about when cell phones first started getting cameras, which is a little bit of a nod to my age. I was around when cell phones didn’t have cameras, and then they added cameras and everybody was like, oh my God, everybody’s going to use this in the locker room, in the bathroom, I don’t want cameras there. But society adjusted, and now every cell phone has a camera. I feel like we’re going to go through a similar transition with deepfakes and also cloned voices, where society will adjust and we’ll learn. And technology will help us along the way; it’s not just society adjusting, it’s technology advancements and society adjusting together.
Mike Vizard: Okay. So what implications does this all have? I mean, some of the things that we’re talking about here with video and AI, people have wrapped their brains around a little bit, but what are we not talking about? What should we be thinking about that people aren’t spending enough time on, the next thing that’s coming? What is that?
Joe Murphy: Yeah, so I think video synthesis and digital humans are going to become more and more commonplace in our day-to-day life. I view this as a good thing. If you go to Korea or China now, you will see AI news anchors, and you will run into kiosks with AI humans in them answering questions and helping out. So again, I always say AI is a tool to help us do our tasks, to help amplify our presence, and I see that continuing. As far as what we don’t know we don’t know, I hesitate to go down that path because I’m no better at predicting the future than most other people out there. I can say I’m predicting a bumpy ride; what those bumps are I can’t really guess.
Mike Vizard: There’s more known than unknown. What’s your best advice to folks, then, given all those things? I mean, a wise man once said you should expect the unexpected, but what should organizations be doing right now to get themselves in the right place for all this?
Joe Murphy: Yeah, my advice right now is to lean into AI. I think with ChatGPT, a great cross section of the public now has experience trying AI, being impressed by AI, being scared by AI. But these tools now are so easy to use. I don’t want people to think that in order to effectively use AI they need to know matrix multiplication and deep learning and neural nets. The tools are at a point now where they’re code free: you can use tools like ChatGPT to generate text, DeepBrain AI to generate video, and there are plenty of tools out there for generating voice and even images; these are all generative AI.
So my advice to the public is lean into it, use it, don’t be intimidated by it. This is a tool that can probably help you in your job in ways you haven’t even thought about yet. I know I use ChatGPT on the daily, but I’m very metered in how I use it. I don’t want to lose my own voice; I want my communications to be in my voice. ChatGPT sometimes writes these emails or paragraphs that sound like a used car salesman who’s just coming at you full force, and you need to be cautious about how and when you use it. But lean into it and try it, and don’t be intimidated by technology like AI, because it really is easier to use than it has ever been before.
Mike Vizard: All right, and folks, you heard it here, the genie’s out of the bottle and not going back in, so you might as well figure out how to wield it for good. And yeah, there will be people wielding it for ill, but we’ll deal with them when the time comes. Hey Joe, thanks for being on the show.
Joe Murphy: Awesome, thanks so much, Mike.
Mike Vizard: And thank you for watching the latest episode of Techstrong.ai, we invite you to check out this episode and others on our website. Until we get to the next episode, of course, we’ll see you next time.