Vancord CyberSound

089 - Connecticut Senators on the Cutting Edge: AI & Data Privacy in Focus

November 06, 2023 Vancord Season 1 Episode 89
 Connecticut has been a leader in addressing data privacy issues and Artificial Intelligence (AI) regulation through policies that protect residents and foster innovation.

In this episode of CyberSound, Jason and Michael have a thoughtful conversation with Senator Tony Hwang and Senator James Maroney of the State of Connecticut on the complexities of AI, data privacy, and the state’s approach to transparency and accountability in AI usage. The Senators address social equity concerns and emphasize the importance of a balanced and informed approach to these issues. 

00:02

This is CyberSound, your simplified and fundamentals-focused source for all things cybersecurity.


Jason Pufahl  00:11

Welcome to CyberSound. I'm your host, Jason Pufahl, joined in the studio, I'm gonna say in the studio, because we got everything going on now. 


Michael Grande  00:18

Yes. 


Jason Pufahl  00:18

By Michael Grande, so thanks for joining. 


Michael Grande  00:20

Absolutely. 


Jason Pufahl  00:21

And we're privileged today to have two Senators from the State of Connecticut join us, Senator Tony Hwang and Senator James Maroney. Thanks to both of you for joining.


Senator James Maroney  00:32

Thanks for having us.


Senator Tony Hwang  00:33

Thanks for having us.


Jason Pufahl  00:34

We're gonna spend some time today, I think, a bit on AI, and I know that you both came out of an AI Taskforce, so I think this is actually really relevant, and the timing here couldn't be better.


Michael Grande  00:49

So, to jump right into some current legislation that has been going around: with the absence of federal legislation for data privacy, you know, Connecticut, in many ways, is leading a charge with some of the larger states out there. Perhaps discuss Bill 1103, maybe some high points, and what you were really trying to address with that bill?


Senator James Maroney  01:20

Yeah, so I guess I'll start. So you know, with data privacy, Connecticut was the fifth state to pass comprehensive data privacy in 2022; that was Senate Bill Six. This past year, in 2023, we passed Senate Bill Three, which focused on children's data privacy, as well as making some overall amendments. And one notable thing is that, right now, our bill has not been challenged; there has been an injunction against the California Age-Appropriate Design Code, and then some of the other social media bills, but our children's privacy bill, as of yet, has not been challenged. And so we are one of the few states that has passed a law to protect our children while they're online. But Senate Bill 1103 was actually born out of the task force, or working group, that was created from Senate Bill Six, to look at, you know, what other states were doing with data privacy, but then to look at other topics, right. And so to look at artificial intelligence, and when we created the task force, it was before ChatGPT launched, and so we were looking more at algorithmic decision making and the use of, and the potential for, you know, automated decision making, and the potential for algorithmic discrimination, right. There's a book, Virginia Eubanks' Automating Inequality, which cites a lot of examples of ways that algorithms have been used in the provision of government services with discriminatory impacts, right. Whether it was kicking people off the rolls for food stamps in Indiana through an automated system there, or a system in Allegheny County, in Pennsylvania, where the way that they were screening calls for child abuse determined who would get investigated. So we wanted to look at that, you know, within government use, right. And so we started by thinking we would regulate government use of AI. 
And so within that bill, there were a few things that we did. One, transparency, right, we want to know, where are we using AI? So it requires an inventory, by December 31st of this year, of every agency and where they're using AI. Senator Hwang and I were just at the AI Taskforce meeting, and they presented to us, and I guess there are just under 1,400 systems in use by the state that are inventoried. So far, five of them have been identified as using AI, and there are still 540-something to go. But anyways, just so the public knows, are we using it? Where are we using it? And that'll all be published on the state website. The other thing is to create policies and procedures governing the state's use of AI in procurement and implementation. And those policies and procedures have to be put forward by February 1st of 2024. And then one thing that we did say is that after February 1st of 2024, you cannot put any systems using AI in place without doing impact assessments, to understand what the impacts are. You know, what do they look like? Who does them? Those will all be determined by the policies and procedures. So we did give them some flexibility to determine that, but we said that, you know, we need to know that we're not employing any systems that are discriminating against our residents.


Senator Tony Hwang  05:02

And I want to compliment Senator Maroney, he's been a tremendous advocate, and he is one of the Co-Chairs of that Taskforce. The expertise and the broad range of insight that we've been able to gain has been incredibly valuable. And I think it's important to take a step back. You know, the discussion of AI and data privacy and their impact, given even the recent Attorney General lawsuit against Meta, about social media's impact on mental health and the addictiveness of algorithms. These are all upon us; we see it, we hear it. But one of the fascinating things that came out of this meeting today about AI was to have a very experienced State Senator from Washington State talk about, literally a year ago, helping state legislators deal with Zoom and turning on, literally, their computers. And now there is a haste and an enthusiasm to see how we can craft policies to regulate AI and data privacy, when they don't fully understand the very, very consequential and multiple impacts it has, not only from a standpoint of regulatory policy. We as policymakers, we do very well in saying, here's a problem, let's create a law, and let's change it. But with AI, as Senator Maroney talked about, we've got to have real strong concerns in regard to social equity issues. You know, AI, simplified, is a multiple fold of hundreds of thousands of data inputs into algorithms, which can then identify and think based upon the algorithmic analysis. But it depends on who's putting in that data. It really brings to bear the social equity concern, and in the perspective of AI implementation, we have to be very careful. But I think another factor that I did some studies on is the fact that public administration, government, and nonprofits have a unique set of dynamics that is different than the private sector. The private sector can implement, pull their expertise, and just kind of mandate that. 
For us, in government and nonprofits, we have an expertise-driven system that is embedded in discretion and bureaucratic processes. It's very hard to ingrain that into algorithms without the buy-in, and the incorporation of those processes. So we've got a lot of issues at hand. But I want to compliment again; as I began, I'll end my conversation with complimenting Senator Maroney. He has really delved into this, and it feels like we're going around the entire country, Senator Maroney; we've got legislators, we've got policy people from as far west as Washington, as international as England. I think we need to have that broad basis. And in working with Senator Maroney on the data privacy bill and the social media bills, we're very proud to be able to use the work of others that came before us, from California's case law to what was happening in the European Union in regard to data privacy. They're leagues ahead of us in that aspect. In our policy work together, with Senator Maroney leading it, we're able to learn from the lessons of others before us. And being part of this program is critical, to be able to educate your clients and the consumer out there. You know, it sounds like it's relevant, and it is; it sounds like it's all upon us, and yes, it is. But there's so much complexity and depth to it that you really do need experts and perspective as you navigate this.


Jason Pufahl  09:03

So there's so much there I want to comment on, but I think my initial thought really is, I appreciate you bringing up Meta, not necessarily because of the company itself, but because I think it demonstrates how we're all using AI every single day. And people think of it, I think, in terms of the OpenAI generative language models and what the outcomes can be there. But you know, it's Netflix providing you content recommendations, right? It's Facebook or Meta, providing you what it believes to be the most important posts that you should read that day, right. It's obviously all your advertising. So I'll be curious to see how you can create an inventory of all the applications out there that actually use it, because I think as quickly as you write it down, they're going to provide an update that adds AI to their platform, because that obviously is a race today, right? And then on top of that, as much as it's a fascinating technology, privacy is one of the biggest concerns, for sure. And touching on GDPR, and sort of the EU's approach, what I'd say is a pretty progressive approach to privacy; in comparison, the United States has a pretty, you know, trailing perspective, right. We're doing it at the state level in some cases, but there's really nothing federally. I think Connecticut is one of, what, just a half dozen states with comprehensive data privacy laws?


Senator James Maroney  10:36

Yeah, we were the fifth state to pass but there's about a dozen now. So this past year, there were a number more but yeah, still 12 out of 50.


Jason Pufahl  10:48

Yeah, it's amazing to me still, that we don't have something more comprehensive, you know, on that national level.


Michael Grande  10:55

Yeah. Something Senator Hwang made reference to, right, what a great point that was raised by the Senator from Washington. You know, maybe a year or two ago, there was training on just, you know, logging into a virtual Zoom session. And now the expectation is, hey, come along with really substantive and impactful guidance and legislation on these very, very difficult topics. And it does require, you know, an enormous amount of information and education to get there.


Jason Pufahl  11:25

And now, right, if you move your mouse, you'll see in the center of your Zoom screen at the very bottom, something specific to AI, right, because they're embedding now, to your point, Senator Maroney, the ability to capture the video and the audio, extract key points, right, present them in a way that's meaningful. I mean, that that's all leveraging that technology.


Michael Grande  11:46

With respect to ethical considerations, from, you know, at least the state legislative perspective, can you talk through some of the things that may be being discussed at the state level, of how regulation may positively or negatively impact some of the progress in AI? Or what are the steps that are being taken? That would be helpful.


Senator James Maroney  12:10

Yeah. And I think that the balance is, how do you protect residents without hampering innovation, right? And it's going to be that delicate balance that we're trying to strike. And I think the other thing is, how do you not create a patchwork of 50 different pieces of legislation across the country, so that companies have to comply in all different ways. And so, you know, Senator Hwang and I are on the Connecticut Privacy Taskforce, but we've also assembled a multistate taskforce; we have almost 30 states that are represented, and we're meeting tomorrow. And so that's one of our goals, to come up with a framework that can be shared across states. I almost look at it like Lego blocks, because some states are going to be interested in different things. But how do we create these different bits of code, or bits of policy, that you can put together, customized for your state, but compliable, right, so that companies know what to expect in the different states. So some states are going to be interested in hiring algorithms, right, and facial recognition and other things that we may not get into. And to your point, you know, AI is really how you define it, right? And so we know that it's, you know, just typing in a Google search, right, trying to predict what you're going to type, or using Google Maps; it's in so many things that people may not think of as AI, and if you define it too broadly in policy, it draws in almost everything. And so that's the other thing. But you know, the way I'm looking at it, as far as the regulation, is broad guardrails. Right, so requiring impact assessments, and that companies abide by a risk management framework, probably looking to a national model like the NIST AI Risk Management Framework; and ongoing assessment, because we know the models drift, right. ChatGPT, there's a math problem it was solving in February, and then by July, it couldn't solve that same problem. 
And so we see, as the models train and they get new data, they change, right. So we need ongoing assessment, borrowing from cybersecurity: looking at bug bounties, incentivizing people to provide user feedback, looking at red teaming, or adversarial testing, of the models. Those are things that would likely be within your risk management framework, is how we would incorporate them. And then perhaps putting in some sense for companies that if a user identifies an error, you know, you can fix it, right, without getting penalized, if you fix it in a certain timeline, because we want to make sure, again, that you are continuing to look for those problems instead of turning a blind eye to them. So that would be with the testing, but also transparency. So with generative AI, looking at whether images, voice, audio, or video files are created or generated using AI, that there is, whether you call it content authenticity or digital watermarking, some form of metadata attached to the file that's created. But then going a step beyond that, because not everyone would know to right-click and look at their source file to see that, so requiring that social media platforms provide some form of a signal if you're sharing an image that was generated by AI; again, we need to build that trust in what we see. But then also looking at, how do we promote the businesses? Right? So I think in Connecticut, we have the opportunity to lead in health tech and using AI to drive better health outcomes. So how do we convene and bring people together for that? How do we build the skills? You keep hearing, you know, of AI as a co-pilot, or as an assistant, that it's going to make us 30% more efficient. Well, how? Right, so how do we assemble, like, an AI Academy, a Citizens Academy of free video classes that people can go to, to learn how to properly implement AI to help make them more efficient? 
And how do we build the skills in our workforce? I think as a state, for Connecticut, our advantage is our skilled workforce. So looking at the certificate programs, looking at what we're teaching in K-12 and higher ed, but also looking at what the companies need, right, and then making sure we can match that. Again, one of the issues there is that workforce needs often change faster than government's ability to respond. So how do you build flexible workforce development? But that's not something we'll solve now, maybe in the future. And then the third thing is looking at state uses, you know, where are the ways that we can actually deploy AI within our agencies to help improve the citizens' experience, and make it easier for them to access services, and make the operation of government more efficient and user friendly. And I've kind of talked a long time, so I'll let Senator Hwang,


Jason Pufahl  17:26

So actually, if you don't mind, I want to just quickly comment on that, because there's one overarching feeling that I got from that discussion, which is, you're not in a position where you feel like we shouldn't be using AI. You're really coming from the position of, it's here, let's make sure we're transparent about what its capabilities are, we monitor and manage it to ensure that it's used safely, and that we train people on how to use it effectively. So none of your language, which I so appreciate, was, you know, let's be afraid of it, let's find ways to slow and halt the adoption. I applaud you for that, because not everybody has that sentiment, I think.


Senator James Maroney  18:05

Yeah, you know, I look at accountability and transparency, right? Accountability, with the, you know, we're not saying not to use it, not to employ it; what we're saying is, test it, make sure it's safe before you put it out there. You know, during the pandemic, the whole world was on its knees, right, everyone was at home, people were washing their groceries before they brought them in the house, doing all these things. But we didn't just put the vaccines out, right; they had to be tested to make sure they were safe. So we can pause and wait, before launching a new generative AI or a new model, just to make sure it's tested, that it's safe, that it's not creating disparate impacts. And then the transparency: you have a right to know when you're interacting with AI, or when something was generated by AI. But yeah, it's here; we want to make sure that we do those things now to make sure that it's safe, and that we do a little bit of the pause, right, between stimulus and response. We're not saying not to do it, but that we're doing it thoughtfully, and making sure that it's safe, because again, it's that balance. I see more positive potential, you know, AI to drive health outcomes, right. Companies that are going to spin out of Yale, you know, the Yale research on looking at echocardiograms and identifying potential heart problems up to two to five years before they actually happen, so you can do low-cost interventions. So you're saving lives and saving money, right? The same with another company, identifying strokes before they happen by analyzing data from a wearable. So if you're at risk, you have the wearable that will give you the indication. So, there are so many positive health outcomes, and other outcomes as well, that we don't want to stop from happening. We just want to make sure that it's safe.


Senator Tony Hwang  19:52

And I think that the key, as articulated by Senator Maroney, really reinforced the value of the Taskforce. The thoroughness and the extent of what was just articulated, and it was lengthy, showed that there was work done. And we did the same thing on data privacy, where we delved into critical issues and elements of concern. But we had to simplify the process when articulating it as a policy debate. That's the critical difference. And wouldn't you agree, James, that all the meetings we had, all the details that you just quickly articulated, went through hours and hours of trying to understand the intricacies of this very, very prevalent subject matter. But we had to then process it, and simplify the process, when we're drafting bills in words, you know, 300 pages of articulating sections and all that, and trying to explain that to our fellow policymakers, who understand the pressure and the reaction that's needed to address a critical social agenda issue, but nevertheless worry about the unknown. So we spend a lot of time educating our colleagues, trying to explain and simplify the process. At least, that's the way I saw one big part of our mission: as advocates and policymakers that came out of the Taskforce, we committed the time and the education and the awareness so that we can then explain it to our colleagues, as peers, and say, this is why, this is what we've gone into; we've talked to industry, we've talked to stakeholders across the board, we talked about social equity considerations. These all came together, because we were sensitive, as the policymakers and the architects, to do the homework, so that we had credibility. So that when James talks about transparency and accountability, we, as the articulators and the advocates for that policy, had that credibility. 
We're not trying to throw stuff against the wall and hope it gets through, because ultimately, anything we do as policymakers who lead in this is going to be challenged. The forces at hand, the industry leaders, are multi-billion dollar, international powerhouse corporations that have specific interests, first, business interests, in mind. And what we are articulating in policy may hamper their way of doing business. So, you know, through our work collaboratively, we have to have our ducks lined up and be credible, knowledgeable, and expert, using all the resources around us, not just us as policymakers, but really engaging and collaborating. And I'll be clear on this: the data privacy bill, as well as the social media and protecting-youths bill that we passed, were bipartisan. We got everybody's buy-in. But the complexity is challenging, because you think about AI, data privacy, technology, social media; you would think that it would come out of the Energy and Technology Committee. But no, it started in General Law. And why is that? Because Senator Maroney is the chair of that committee, has great expertise, and led it to passage. But the bill that we eventually passed out last session came out of the Judiciary Committee. Remember the confusion, James? And that is another example that we, as a legislative body, can't really box in how we approach this, because, again, as we talked about earlier in the show, the interrelatedness of all of this touches everything that we do. And so, making Judiciary the final committee of cognizance in that area reflected exactly that: our policies on data privacy and regulatory guidelines touched all aspects of what we're trying to do. So when you talk about the first question, of the ethical guidelines and the policy impact, it really is kind of a new frontier. 
And so for me, it's about following the really tremendous work of Senator Maroney and being able to support him, because we want to craft the best bipartisan policy, one that's going to be effective, that's going to be proactive, and that's going to be a good model that we can take to the rest of the country, to states that want to articulate these kinds of policies. I know Senator Maroney is very, very well versed in the education arena. I'll give you an example that just happened in regards to AI and the pushback that you will get in regards to the unknown. Just this January, when ChatGPT became very prominent, the entire New York City Board of Education banned ChatGPT from any usage by its students. Right? And within about four months afterwards, after consideration, after review, and after evaluation, they realized that for them not to utilize that technology would be a disservice to the generation of students who need to live in that environment. And they thought better to rescind that ban, and to incorporate all the stakeholders, the teachers, the educators, the administrators, the outside contributors, technology firms, to be able to use AI and use ChatGPT as part of the curriculum design. I found that to be very interesting. But in a lot of organizations that aren't accustomed to this, the unknown, leading up to Halloween right now, the boogeyman, is one that is powerful, to really contradict or block this kind of initiative. And I hope through our Taskforce, through our bipartisan effort, we can provide answers, we can provide information for people, and, along with your organization, provide this information to kind of overcome the unknown.


Jason Pufahl  26:29

So I really appreciate both of your perspectives on this. And I think we're up against time here, so we're going to want to wrap up. But one of my main takeaways from this has been, you've had to react quickly, with a lot of thought, right, on how best to manage this going forward, in a way that enables it to be utilized, but utilized safely, securely, and with privacy in mind. I mean, six months ago, we really weren't having maybe as public or substantive a conversation around AI as we are now. So I just want to say thanks for convening around this so quickly, for being thoughtful in your approach, for being, I think, cognizant of the fact that it's here, and that it's really all about how we use it appropriately, not about finding ways to hamper its use. Because I think your point is well made: New York reacted quickly, then came to the realization that they were probably a little bit too hasty, and maybe overreached in their prohibition of it. It's here, people are excited about it, and it's really about how do we make sure that we're using it in a responsible way.


Michael Grande  27:42

And set up responsible guardrails, looking forward because the next evolution of this technology, you know, we may not be aware of today, so the laws and the regulations that are being written, have to be thoughtful about the future as well. So thank you so much.


Jason Pufahl  27:57

Yeah, thank you. Thank you very much for joining today. I sincerely appreciate it.


Senator Tony Hwang  28:03

To finish by saying: I began by complimenting Senator Maroney, in the middle I complimented him, and again I'm complimenting Senator Maroney and his leadership and initiative on this. I'm just glad to be able to help in any way I can.


Jason Pufahl  28:18

So maybe that'll be our title for the episode, right? Compliments to Senator Maroney. All right, guys, thank you very much. I appreciate the time, and maybe in the future we'll talk about, you know, where things stand with this. 


Senator James Maroney  28:30

That would be great. 


Jason Pufahl  28:31

Super. Alright. Have a great afternoon. Take care. Thank you. Bye.


28:35

We'd love to hear your feedback. Feel free to get in touch at Vancord on LinkedIn. And remember, stay vigilant, stay resilient. This has been CyberSound.