Vancord CyberSound

103 - Deepfake and Democracy: Senator Tony Hwang on the Threat of Deepfake and the Need for Legislation

Vancord Season 1 Episode 103

In the latest episode of CyberSound, hosts Jason Pufahl and Michael Grande engage in a thought-provoking discussion with State Senator Tony Hwang about the burgeoning risks associated with artificial intelligence (AI), particularly focusing on the spread of disinformation through deepfakes. Highlighting a live demonstration created using voice cloning technology, the episode showcases the alarming ease with which convincing yet fake audio conversations can be produced. 

Senator Hwang emphasizes the need for a balanced approach, advocating for both education and regulation to mitigate these risks while still fostering innovation. The conversation delves into the potential misuse of AI in manipulating public opinion, especially in the context of elections, and underscores the critical role of technology companies and consumers in ensuring the integrity of information. The episode highlights the critical balance between embracing AI's benefits and safeguarding against its potential harms, advocating for increased public awareness and legislative action to protect society.

______________
Stay up to date on the latest cybersecurity news and industry insights by
subscribing to our channel and visiting our blog at https://www.vancord.com 💻

Stay Connected with us 🤳
LinkedIn: https://www.linkedin.com/company/vancord
Facebook: https://www.facebook.com/VancordCS
Instagram: https://www.instagram.com/vancordsecurity/
Twitter: https://twitter.com/VancordSecurity

00:02

This is CyberSound, your simplified and fundamentals-focused source for all things cybersecurity.

 

Jason Pufahl  00:11

Welcome to CyberSound. I'm your host, Jason Pufahl, joined, as always now by Michael Grande.

 

Michael Grande  00:16

My regular seat.

 

Jason Pufahl  00:18

Now it's becoming official, right? There's only two of us, so we are not all cramped. And we've got Senator Tony Hwang. Pleasure to have you again; I'm glad that we could find a time to do a redo.

 

Senator Hwang  00:28

Well, thank you very much for having me. We tread new ground every time we get together, and it's always insightful. And I appreciate the opportunity to be able to share and try to be a resource to help the general public.

 

Jason Pufahl  00:43

Yeah, of course. So I think we're going to insert a little audio clip here of a conversation between us from just a few days ago. So we'll let people listen to that for a second. Welcome back, folks, to another episode of CyberSound. Today, we have a special guest joining us, State Senator Tony Hwang, who has been vocal in the state legislature about risks associated with the spread of disinformation due to artificial intelligence. Welcome, Tony.

 

Senator Hwang  01:09

Thank you for having me on, Jason.

 

Michael Grande  01:11

It's great to have you, Senator. So let's dive right in. What exactly concerns you about AI?

 

Senator Hwang  01:16

Well, Mike, AI has immense potential, no doubt. But my concern lies in its misuse. Take deepfakes, for instance: these AI-generated videos can manipulate reality to a dangerous extent. We're talking about impersonating people, spreading false information, and even potentially inciting violence.

 

Michael Grande  01:38

That's a valid concern, Senator; deepfakes are indeed a growing issue. But do you think there's a way to mitigate this risk without stifling innovation?

 

Senator Hwang  01:46

Absolutely, Mike. Education and regulation are key. We need to raise awareness about deepfakes and their implications, while also implementing strict laws to hold perpetrators accountable. But we also need to invest in advanced detection technologies that can help identify and combat the spread of malicious deepfakes.

 

Jason Pufahl  02:06

It sounds like a multifaceted approach is necessary to address this challenge effectively. What are your thoughts?

 

Senator Hwang  02:11

Precisely, Jason, we have to embrace the benefits of AI while remaining vigilant against its potential harms. It's about striking a balance between innovation and security, to ensure a safer future for all.

 

Michael Grande  02:25

Well said, Senator. I feel like the more we spread awareness on this critical issue, the closer we will come as a society to a consensus on where AI is beneficial and useful, and where it can cause irreparable harm.

 

Jason Pufahl  02:38

For those at home listening, none of this exchange was real. It was scripted using ChatGPT and generated using ElevenLabs AI voice cloning technology. Pretty concerning, right? Let's dive in. So, pretty interesting, right? I mean, really convincing.

 

Michael Grande  02:57

Extraordinary, really, would be the word I would use to describe it.

 

Jason Pufahl  03:02

Senator, do you want to speak to this? It was your office that helped put this together, right? So do you want to talk about how that happened?

 

Senator Hwang  03:07

I'm frightened, because this was done without any input, or my voice, right, as you hear it now. And that's the real dramatic impact: that someone could take my audio from a compilation of what we do as part of our legislative work and public communications work, and now create an entire message, and now a dialogue within podcast programs, with sort of the validation of my voice and, obviously, my potential acquiescence, per se, in misrepresenting maybe my viewpoints, although what was said there was right in line with what we talked about. But nevertheless, the potential for ill intent and malfeasance is just unimaginable. And as I said when we did the podcast before, this is a tremendous invasion, in privacy, in identity theft, but most important of all, in the real impact on how we have to manage technology. So it really fits right in with what your podcast is all about: how do we understand what's happening, and the profound impact of artificial intelligence that is all around us, to really understand it so we can protect our businesses, families, and personal identities.

 

Jason Pufahl  04:42

And your reaction is real, which I think is the important part about this, right? Like, you and I have a good enough relationship that I felt comfortable saying, well, let's get this deepfake together. And you had no idea, so it wasn't until you heard it that you realized: holy mackerel, this seems convincing. Everything about this, the message, is legit. You cannot distinguish the voice from your real voice at all, right? It is perfect in a lot of ways.

 

Senator Hwang  05:08

Well, my wife can tell. But I gotta tell you, what was really startling for me was the exchange. Yeah, the flow of exchange that made it real and conversational, rather than a one-way transaction. That is really the frightening part. I mean, we're doing this podcast; we could literally replace the talent with a message and theme based upon, you know, AI-generated messaging. And that really gets to where we are: artificial intelligence is now this profound technological impact on our society, but we really don't fully understand it. I'll be honest with you, you talk to the general public, and they're like: AI? Is this Terminator, where machines are going to take over the world? Or is this going to create efficiencies and help those within our society to advance in technology? Or is it something that could be profoundly an assistance, to help educate in workforce development and innovation and productivity? We just don't know. But the reality is, I try to simplify it: it's supercomputers that take in hundreds of millions, even trillions, of data points. And now we're creating computers with enough power and sophistication that we can create algorithms that take that data. So, you know, the algorithm of what we watch on YouTube and TikTok, and how one video flows from another and another and another, that's all algorithm, based upon your viewing, your interests, and your kind of being put into a trance. How many times have we seen our friends, not just kids, but young adults, just sit there for minutes on end and start laughing, being cued by a machine? And they continue on as though oblivious to you, even though you're right next to them.

 

Michael Grande  07:21

The formula seems to work each time, you know, to sort of entrench you into that state. Jumping off talking about AI, and thank you, that was a really helpful sort of segue, maybe we should define deepfake, because it's almost like another component of this whole conversation.

 

Jason Pufahl  07:43

Yeah, I mean, I think we assume people always know what this stuff is. And a deepfake is not incredibly complicated, in the sense that it's just taking content and creating something from existing images, sounds, or videos, right, that appear to be legitimate, appear to be real, but are completely manufactured through technology. I mean, that's all we're doing. And, you know, I think people have heard about the Taylor Swift things, right, because nobody misses any of the Taylor Swift things. But I think there are more mundane, day-to-day uses for this that, to the Senator's point, we see all the time, which is content that's simply curated and directed at you because you've demonstrated interest, and it continues to simply feed your interests rather than give you diverging viewpoints, which is a huge risk.
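[Editor's note: as a rough illustration of the feedback loop described here, the following is a minimal, hypothetical sketch. The topics, weights, and engagement rule are invented for illustration; real recommendation systems are far more complex, but the reinforcing dynamic is the same.]

```python
import random

# Hypothetical catalog: 100 items, evenly split across four topics.
CATALOG = [
    {"id": i, "topic": topic}
    for i, topic in enumerate(["politics", "sports", "music", "cooking"] * 25)
]

def recommend(interest_scores, catalog, rng):
    """Pick one item, weighting each item by the viewer's accumulated interest in its topic."""
    weights = [1 + interest_scores.get(item["topic"], 0) for item in catalog]
    return rng.choices(catalog, weights=weights, k=1)[0]

def simulate(clicks_topic="politics", steps=200, seed=42):
    """Simulate a viewer who only engages with one topic; the feed adapts to that signal."""
    rng = random.Random(seed)
    scores = {}   # topic -> accumulated engagement
    shown = []
    for _ in range(steps):
        item = recommend(scores, CATALOG, rng)
        shown.append(item["topic"])
        # Engagement signal: each view of the favored topic raises its future weight.
        if item["topic"] == clicks_topic:
            scores[clicks_topic] = scores.get(clicks_topic, 0) + 1
    return shown

if __name__ == "__main__":
    shown = simulate()
    # The favored topic's share of the feed grows over the session.
    print("first 50 items:", shown[:50].count("politics"), "of 50 are politics")
    print("last 50 items: ", shown[-50:].count("politics"), "of 50 are politics")
```

The feed starts roughly uniform, but because every click raises the weight of one topic, the last stretch of the session is dominated by it; no diverging viewpoints are ever surfaced unless the viewer engages with them.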

 

Michael Grande  08:34

I mean, we're talking about all the data points. And I think, you know, as we talk about machine learning, artificial intelligence, and the vast amount of data that we create on a daily basis for ourselves and our businesses and all the other things that we do, is this an onerous process to go through and create? How much work does it take, I guess, to

 

Jason Pufahl  08:59

To create this?

 

Senator Hwang  09:00

Well, it's frightening that it really doesn't, because look at the exchange that was created. It took 15 or 20 minutes. And that is a high level of sophistication. I think the other part of it is, you're right, Jason, we hear about deepfakes, about Taylor Swift, but how many other transactions have occurred that seem innocent? You know, it is not a coincidence that AI has been very much prevalent in our everyday presence. Look, Google phones advertise as their strength that they have cameras that can put on a different smiling face, right, and create a different kind of facial reaction, get rid of blinking eyes and, you know, smirky, closed mouths; that they can artificially manipulate the photo. That's a selling point. That is all AI altering photos, right? In real time. The chat, and ElevenLabs, the one that did the voice audio: these are all features that people use for everyday convenience, to make their lives and their content more interesting and creative, a more perfect kind of presentation. But taken to another degree, in wrongdoers' hands, it can be very dangerous, and it could serve real bad intentions. And how do we get at that? So for us in the legislative body this year, we looked at, obviously, Senate Bill, is it two or three, I've forgotten, it's only been two weeks, but it's the AI bill. It was such a comprehensive bill. But tucked into it were two parts that I thought were very, very important. One was to give some criminal protection, a criminal punishment, to protect against the use of deepfake imagery, false representation, and revenge that's perpetrated against people using AI. Now, one of the questions is, who do you regulate? Who do you punish for these criminal behaviors? Do you catch the person who is perpetrating this? Well, they could be behind eight different protection zones, in different countries, with their identity anonymous.
Who do you protect once that content goes out? Right? The problem is, this ill-begotten and vengeful content goes out, you say it's wrong, you get rid of it. But how many people have screenshotted it? How many people have captured it and created irreparable damage? Right. So, when we looked at a policy, who do you hold responsible for the regulation and the protection in this marketplace? One, you want to catch the perpetrators, no doubt. The other critical piece, I think, is for those technology companies that procure and sell and deliver these sources: they have a higher standard to ensure that these kinds of content, as well as who is presenting that content, are regulated. And that's a really thin line.

 

Michael Grande

And that sort of leads to a question, and I appreciate your review of that bill that was out there. I don't believe it was passed this session,

 

Jason Pufahl  12:42

And that's Senate Bill Two, just to make sure if people want to take a look. 

 

Michael Grande  12:45

But if you could, from your perspective, right, and, you know, perhaps this is just an opinion. But I'm interested to hear where you feel the majority of the responsibility lies in ferreting out, you know, these false images or deepfakes that may exist, or the proliferation of some of the content that may get transmitted on social media. Is it with the media outlets, who, you know, accidentally may, or may continue to, promote something? Is it with the social media companies? Is it with the individual and the consumer? You know, you talked about that long chain of custody, right? All the different hands it could touch. At what point, and where, does the responsibility ultimately lie from a legislative perspective, in your eyes?

 

Senator Hwang  13:33

Well, from my perspective, first and foremost, it is the perpetrator of these kinds of attacks, whether it is, you know, ill intention or just a humorous take. Ultimately, I think the perpetrators need to be first and foremost held accountable for delivering and perpetrating and promoting this content. Now, you're going to say that technology companies have a role; I do believe they have a role. And upon finding that these kinds of ill content are out there, they have a responsibility now to regulate and define where the originating documents came from, where the ISPs are, and then do the due diligence to close that action out and quickly pull that content down. So I think there has to be a flow of accountability. And when I say that, let me repeat: the perpetrators are, first and foremost, the people that need to be held accountable. Don't do it. Right. Number two, it is incumbent, for accountability and responsiveness, upon the content providers and the deliverers to take a look and punish these perpetrators, to not allow these actions to continue. But number three, it is also the protection of the consumer of that product. We, as consumers of that product, cannot be drawn in by it, perpetuating and sharing that information. We need to understand that this is ill-begotten, send a note, raise an alarm, flag it as spam to say this is wrong, and also recognize that continuation of that misinformation could potentially open each individual to liability.

 

Jason Pufahl  15:36

Yeah, don't share it, don't promote it. 

 

Senator Hwang  15:38

Exactly. There has to be an education message to say: it came onto my feed, but if I took a snapshot of it, a screenshot of it, and I'm continuing to promote it and push out that content, then I have a responsibility. I do believe it extends to that; there has to be some degree of personal responsibility. But here's the problem, right: in some ways, people don't know if it's the wrong information.

 

Jason Pufahl  16:07

That's such a great point, because I spend a lot of time doing security awareness training. And generally, I'm able to say, well, look for these telltale signs, look for these markers, and that will help you identify it. In the case of the deepfakes that we played earlier, it's really challenging to identify. It might not be quite as conversational as this, right? So maybe that's a little bit of a cue. But the reality is, most people aren't listening that carefully; they're not that discerning. I think the responsibility is less on whether you can identify it, and more that, if it's been identified as illegitimate content, you have a responsibility not to share it. So at a minimum, you should not proliferate something that is a deepfake, something that's manufactured, something that isn't authentic, even if you haven't been able to identify it individually, personally.

 

Michael Grande  16:57

I think back, you know, to growing up, and all of the literature classes that I would take, and the importance of citing your work and always knowing the sources, right, that were used, and giving credit appropriately. And obviously, we can go on and on about how society has changed from that perspective. But we just don't have that expectation anymore of really relying on, hey, this has been completely, you know, investigated, and it's coming from a truthful source, and we can rely on this information very easily.

 

Senator Hwang  17:35

The content can be so professionally produced, yeah. And then think about that, think about the algorithm: all we get of our news is through this phone, and through the algorithms we are siloed into the same content. And the provider provides that kind of soundbite, so that we no longer use citations, right. You know, I had a chance to talk to about 60 high school students, seniors, who are going to go out into the world, who have a voice. And I said, how many of you get your news from the phone? Every single one of them. How many read the paper? One out of 60 said they read the New York Times, right. But then the question is, where do you get contrasting viewpoints? Well, we don't. You go into the silo. And we wonder why people, and society as a whole, have taken on such a divisive, you know, one-way-or-the-highway kind of tone and manner. I mean, look, make no mistake about it, our environment right now is: I'm right, you're wrong. I thought that only existed in my household. But nevertheless, the conversation right now, on many things, is that people don't even want to talk about it. They don't want to have disagreeing, contrasting viewpoints. Because who wants that? It's confrontational. It creates discomfort. I'm gonna go back to my phone and watch the things that I like to watch; it validates my perspective, reaffirms the things that give me stability and comfort in the world as I know it, and discourages anything against it. In my own personal world, I'm convinced of my viewpoint, that I'm right, right? So that's the power of the AI, the algorithm, the generation. And it's much more subtle, not a widespread, fraudulent, or dramatic impact, but it is kind of the social creep that has changed our society. It's remarkable. Now, I'm old enough to know, and I said to the class, I'm old enough to know when a laptop didn't exist, and a smartphone didn't exist.

And the kids look at me like I'm a dinosaur. But think about that: we grew up in a generation where we didn't have that phone. And think of how fast that has changed, in a span of 30 years, how we've transformed. So when we talk about discourse, and divisiveness, and the real steadfast friction that we have, perhaps we should look at the subtle assimilation and transformation of our society, because we're getting fed the same information over and over again. And we're creatures of comfort.

 

Jason Pufahl  20:33

So, all right, well, I think we're getting to a point where we're gonna have to start thinking about wrapping up. But there's one really, really important topic that we need to cover. And it's germane to everything we just spoke about, which is: we're entering election season. How concerned are you and your peers about the potential for deepfakes, or any of the technology that we just discussed here, to have an impact on the election? Do you envision actual uses of this that can potentially turn the tide for certain candidates, or at least have a real negative impact on other candidates?

 

Senator Hwang  21:09

Oh, absolutely. Because as you get further and further away, and you have many elections in which it's not local, right, not even local anymore, what outlets do you have? You don't; the only outlet is through the algorithm of these phones. And again, going back to the same theme we've had on the podcast: the information you get is so narrowed down and siloed that if you believe in one viewpoint, you're gonna get that fed repeatedly. I mean, one of the most wonderful experiences I had with those high school students was that they saw me as an individual, not as a label, not as an image. But think about that: if you have statewide elections, you know, region-wide elections, how possible is it for me to have a one-to-one exchange with them, to break labels and perceptions? So when you think about where deepfakes and imagery are happening, it may not need to be so explosive, so kind of controversial. What's more dangerous is the insidious, reinforcing creep of information. I was just listening to a podcast that said, in North Korea, a communist, authoritarian, one-person-rule government, they created an imagery campaign that said the ruler could levitate and walk on water. And would you believe it? For those people in that country, they believe it, even though there are alternate sources; right there, they're the ultimate source. And just think of what's happening in Russia and other authoritarian countries. Look, I grew up under an authoritarian, you know, a government that, even though it was a democracy, was under martial law for 38 years. It wasn't odd to me that when the President died, his son took over. It was a democracy, but there was no third rail of independent thought and press. So with that said, even the people from North Korea, and that was the scary part about the podcast, when they left the country and lived in freedom, they still believed in the indoctrination, in the brainwashing.

So the most powerful thing is that we need to understand: where are we getting our information? And how can we define it, process it, and get contrasting viewpoints? It really, you know, elections have consequences. But nevertheless, the flow of information is decidedly partisan and one-sided, and we need to branch out on that.

 

Jason Pufahl  23:57

And ultimately, right, controlling the media has been the way that many elections have been won and lost over the years. So this isn't substantively different, I think. It's the scale and the speed at which it can be done that is the difference today. So the risk and the threat are the same; the medium and the mechanism to actually accomplish it are different.

 

Michael Grande  24:17

And we've talked about the limited number of steps it takes. Take the candidates: there's plenty of information, there's plenty of data points, there's plenty of apps and programs available. You know, everybody essentially becomes a proliferator, if they choose to be, and that's a scary proposition.

 

Senator Hwang  24:36

So that is scary. And the speed with which that gets done is just, you know, mind-boggling. But nevertheless, I think it is very much a heads-up warning for people to think about and process deliberately what they're being shown, and not always take what is being told to them. If something sounds too good to be real, I say this all the time: in an individual's campaign promoting themselves, they're never as good as they promote themselves, nor are they ever as bad as the opposition portrays them. It's common sense. You have an opportunity in this country, that's the great gift, you have an opportunity in this country to pick up the phone, send an email, and demand accountability from those that you elect, and a lot of people just relinquish that. And that is one of the greatest things that we need to be sensitive to: accountability, transparency, and ultimately, self-determination and independent thinking.

 

Jason Pufahl  25:44

And I think you've demonstrated how important it is to be out in the public, viewable, and human, instead of purely this, you know, digital face that everybody gets to recognize you as. So it's important to still get in touch with people on a human level. Senator, it's always great having you on; I enjoyed the conversation. I think this one in particular is on the minds of a lot of people during the election season. I appreciate all the work that you're doing, helping to govern this and helping to identify appropriate legislation in the space, especially in a new field like this. Hopefully, we can have you on again in the future. It's always fun.

 

Senator Hwang  26:25

No, it's a pleasure. And, you know, a reminder as we head into Memorial Day: we think about the sacrifices made by our men and women in the armed forces to give us the rights that we have. Listen, we have one of the greatest countries in the world, with the liberties and rights that we have; we have a free press, we have freedom of thought. And I'll be honest with you, growing up for the years that I did under authoritarian rule, you didn't know any better. And, you know, what you're doing in bringing information out, and what we have in this country, is one of the true greatest gifts that we have. And sometimes we take it for granted. So I appreciate the opportunity; it's a privilege to represent. Thank you.

 

Michael Grande  27:06

Thank you, Senator.

 

Jason Pufahl  27:06

If anybody has comments, feel free to let us know. The best way to get this message out, and it's an important message, so people understand what the risks are, is to like and subscribe. You know, the more people that hear it, the more people will understand what they're up against. So as always, Senator, thank you, and I'm sure we'll talk in the future.

 

Senator Hwang  27:25

Thanks, guys. Have a great week.

 

Jason Pufahl  27:26

Thank you. Bye.

 

27:28

We'd love to hear your feedback. Feel free to get in touch at Vancord on LinkedIn. And remember, stay vigilant, stay resilient. This has been CyberSound.