CyberSound™
CyberSound™ is a podcast built by and for business owners and professionals. Tune in as our cybersecurity experts cover the latest news regarding IT security, the most recent and relevant threats organizations are facing today, and provide tips to keep your business safe.
090 - AI Unveiled: From Basics to Business and Beyond
Navigating the intricate landscape of Artificial Intelligence (AI) can be daunting, but it doesn't have to be. On this episode of CyberSound, the team provides valuable insights into generative AI, explores its strengths and weaknesses and how it has garnered widespread attention in the professional services landscape, and discusses practical business applications across various industries.
Today, Jason, Michael, and Steve are joined by Zach Warren, Technology and Innovation Insights Lead at Thomson Reuters Institute, to shed light on this groundbreaking technology for listeners.
______________
Stay up to date on the latest cybersecurity news and industry insights by
subscribing to our channel and visiting our blog at https://www.vancord.com/ 💻
Stay Connected with us 🤳
LinkedIn: https://www.linkedin.com/company/vancord
Facebook: https://www.facebook.com/VancordCS
Instagram: https://www.instagram.com/vancordsecurity/
Twitter: https://twitter.com/VancordSecurity
00:02
This is CyberSound, your simplified and fundamentals-focused source for all things cybersecurity.
Jason Pufahl 00:11
Welcome to CyberSound. I'm your host, Jason Pufahl, joined today by Michael Grande and Steve Maresca. Hey guys.
Steven Maresca 00:17
Hey there.
Michael Grande 00:17
Hi.
Jason Pufahl 00:18
So, we're fortunate enough today to be joined by Zach Warren, the Technology and Innovation Insights Lead with Thomson Reuters Institute. Welcome, Zach.
Zach Warren 00:28
Thank you very much for having me. Happy to be here.
Jason Pufahl 00:30
So, we get to talk all things AI today, which is probably the hottest topic on the internet. And you've got your people in one camp who think it's the greatest, you got your people in the other camp who think it's going to take over the world, right?
Zach Warren 00:45
Yeah, I like to make the joke, think about the Disney movie, WALL-E, that's where we're all gonna be in 10 years, right? Robots just controlling every aspect of our lives, we're gonna have no free will. No, it's not gonna be bad.
Michael Grande 00:57
Is that a better alternative than Terminator? That's probably the question, right?
Jason Pufahl 01:02
Yeah, WALL-E was a pretty good feel-good, I'd rather have a Disney movie, right?
Steven Maresca 01:05
You know, it might make your breakfast too, that could be a good thing.
Jason Pufahl 01:08
It could, that's coming. So let's start with a real softball, Zach, which is simply, you know, spend a minute if you would, describing what AI is, because I do think it means different things to different people.
Zach Warren 01:22
Certainly. So I'm gonna parse your question even a little bit further, in that AI has been around for years now. Even if you think about something like Google, there is automation on the back end that happens, where it's taking data, say your preferences, and readjusting how exactly it gives you answers. So say you're, like in my case, living in Minneapolis, you put in something and it will understand on the back end not only where you are and give you location-based searches, but also understand, this is what has been clicked on in the past by Zach, by others like Zach, by others who are similarly situated, and all of that data combines to give you an answer. That's been around for years. What's really new, and what's gotten people really excited recently, is generative AI, which is a subset of AI that is really looking to create, to generate something new, whether that is text, images, video, or audio. It takes a whole lot of data and uses that to generate something pretty much out of whole cloth. AI previously would take data and generate something, but a lot of times it either was surfacing something that had already been created previously, or wasn't using what they call large language models to really pore over all of this data in a short time. Generative AI, which is what ChatGPT and other programs use, is really doing that by looking to predict what comes next and create something new.
Jason Pufahl 03:08
So I actually just wrote an article comparing AI to the blockchain. And from a simple level, I think the big driver for AI has been accessibility, you know, things like ChatGPT out of OpenAI, and people being able to go in there, type a query, get a response, and realize the immediate uses, right, where with blockchain, probably much less so. And in fact, I'd still say we're kind of waiting for that. How much of a contributor to the enthusiasm do you think that is?
Zach Warren 03:39
Huge, absolutely huge. I saw a stat somewhere where even if you look at extremely popular technology platforms, like your Facebooks, Instagrams, and Spotifys of the world, it took months to a year for them to get a million users. ChatGPT got a million users in five days. And part of the reason behind that, I think, is exactly what you're talking about. It's the plain language of it all. People are intuitively able to understand asking a question and getting an answer in response for so many facets of life, whether it's your home life, asking it for a recipe, or your business life, asking it to draft you an email. There are so many different ways to use a tool like this that I think people kind of understand the use cases intuitively, much more than something like the blockchain, which, while powerful, takes a little bit of education to get there.
Steven Maresca 04:31
And the adoption, the news coverage, and the hype are basically derived from all of that accessibility. You needed to be a data scientist 15 years ago to use AI and related technologies; today it's democratized and available to everybody.
Zach Warren 04:44
Yeah, which is also why, and it's something I'm seeing in my research that's really interesting, it's not just the tech-centric people that are interested in generative AI. The people listening to this podcast probably know how the blockchain works, how the cloud works, all of these different architectural, infrastructural technologies. Generative AI isn't necessarily that. You're seeing others within an organization, let's say from the finance department, operations, even up to the CEO level, that are saying, oh, I get this. And because I get this, I'm interested not only in playing around with it, but hearing what others are doing with it as well, so we can keep up.
Michael Grande 05:25
From a business perspective, more practical application, you know, could you talk about some of the things that you're seeing across industries, with businesses and enterprises integrating AI into their standard applications and other use cases?
Jason Pufahl 05:41
And actually, maybe as a quick follow-on, is there any business that you feel you do the most work with? Is there any industry vertical that seems really eager to adopt this?
Zach Warren 05:53
So, kind of two different questions. The ones I work with are primarily professional services, so legal, tax, risk, and fraud; that's usually where my sweet spot is. They're also the ones that aren't necessarily the most eager to adopt this, probably because a lot of the public-facing tools in particular, like your ChatGPTs of the world, do have some privacy risks and some accuracy risks involved. And I think, particularly if you're looking at, say, the legal industry, people in in-house legal departments and law firms are much more aware, and scared, let's say, of those risks than a lot of other people. So to the use cases: for that reason, we're seeing it used a lot more internally than for external-facing work right now. So for stuff like question-answering services, chatbots that people are starting to build, just being able to ask it a question and get something back, for internal document drafting, for email drafting. Generative AI tools especially are also a really good synthesizer of information, which you might not expect. So one way, I was actually at a legal recruiters conference talking last week, and there are people just feeding in a bunch of resumes and saying, okay, tell me what is different between these resumes, poring over 100 at a time, and what are some of the key standouts that I need to know. Of course, there are a bunch of data bias questions that come into that. But just as a preliminary tool to get started, that's how a lot of people are using it. But for the risk purposes, and especially because something like ChatGPT isn't necessarily 100% accurate, for external contracts, external, say, legal brief drafting, for tax returns, things like that, a lot of people are playing around with it, but feel that the tool isn't quite there yet.
Steven Maresca 07:56
So what would you say the strengths of a platform like ChatGPT and other generative AI services are compared to their weaknesses? Because knowing the weaknesses means you can steer away from them, right?
Zach Warren 08:07
Certainly. Number one, it's quick. I was talking with somebody in a law firm who said, yeah, a lot of technologies, like, say, the cloud, would take a ten-hour task and make it an eight-hour task, because you're able to get that knowledge in there previously. That's awesome. That's great. Is it really moving the needle, though, so that I need to move heaven and earth to adopt this right now? Maybe not; it makes things a little bit easier. Now scale that to something like generative AI: if a legal brief was a ten-hour task, and now it's taking two hours, because generative AI actually gives you a first draft in about one minute, and then you iterate with the prompts and do some editing that will take you about two hours or so, either way, you just cut eight hours off your time. Well, that's earth-shattering right there. That's something that really moves the needle. So because of that, I think that's why a lot of people in professional services have their ears perked up: just the efficiency of all this, and how big of a swing this could potentially be in how people do their day-to-day work.
Michael Grande 09:20
You know, when you see the news reports a lot, the first sort of glaring headline that always comes out is, right, the end of the world is upon us, and soon we'll be working for robots. And then the second piece is more of an economic argument, right? Is this really going to become more of a job replacement, or is it a job enhancement technology? Maybe talk through that with us for a few minutes?
Zach Warren 09:47
Yeah, the phrase I've heard a few times, and it's one that I subscribe to, is: generative AI isn't going to replace jobs, but people who use generative AI are going to replace people who don't. I think there is definitely an element here of, you need to adjust your skill sets and how you think about daily work. But those who do that will become even more efficient and provide even more value to their businesses. Going back to the legal example again, a lot of times, say, first- and second-year associates in law firms, what is the work that they're doing? They're drafting documents, drafting contracts, doing research, all stuff that probably is going to be automated out within the next five years or so, or at least the time it would take to do those tasks is getting severely cut. But what are you going to do? You can't not have first- or second-year lawyers; there has to be some sort of progression, and you have to give them something else to do. So it's a matter of, not only in their first couple of years on the job, but even at the law school education level before that, you need them to be able to get more skills. Maybe not writing, maybe it's editing, but in particular, thinking more strategically, and thinking, okay, so I'm not just doing this rote, repeatable work, how, actually, am I providing value, even at this early stage, where maybe I'm not entrusted to take on an entire case, but I can provide something else, and I have the time to do so now? So what is that? How exactly am I developing new skills to really push not only myself, but the entire organization forward?
Jason Pufahl 11:37
So you touched on something, which is the requirement to use good prompts in order to get good results. And, you know, certainly we've all seen job descriptions already where they're talking about somebody with the capability of developing generative AI prompts, etc. But it also begs the question, who owns that output? Is it your output? Is it AI output? You know, the output is only as good as the thought that goes into creating your prompts and deciding what the structure is going to be. I'm curious what your thoughts are about that.
Zach Warren 12:13
Yeah, it definitely is garbage in, garbage out, certainly, with generative AI. But also, for the listeners of this podcast in particular, there are major privacy and security concerns, particularly with some of the public-facing tools like ChatGPT. I think a lot of people might not necessarily realize it, but if you go swimming in OpenAI's Terms of Service for ChatGPT, you will see that prompts become property of OpenAI and could potentially be used to train the system further, which matters particularly in professional services contexts. Like, I work with tax: you don't want any personal tax information to be property of OpenAI, you don't want dollars and cents being used to train the system further. So that's a big question right now. There are proprietary, more secure tools popping up. But it's not only garbage in, garbage out in terms of knowing how to write the best prompt; it's also making sure that all the information you're feeding in actually stays confidential, or leaving out the information that you don't want out there.
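Zach's warning about confidential data in prompts can be made concrete. Below is a minimal, hypothetical Python sketch of a pre-processing step that scrubs obviously sensitive values (SSN-style identifiers and dollar amounts) from a prompt before it is sent to any external generative AI service. The patterns and placeholder names are illustrative assumptions, not a complete redaction solution.

```python
import re

# Illustrative redaction patterns, an assumption for this sketch,
# not an exhaustive list of sensitive data formats.
PATTERNS = {
    "[REDACTED-SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[REDACTED-AMOUNT]": re.compile(r"\$\d[\d,]*(?:\.\d{2})?"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tokens before the
    prompt ever leaves your environment."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Client 123-45-6789 owes $1,200.50 for FY2023."))
# Client [REDACTED-SSN] owes [REDACTED-AMOUNT] for FY2023.
```

A real deployment would cover far more formats (names, account numbers, addresses) and would pair redaction with a vendor whose terms exclude prompts from training data.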
Steven Maresca 13:24
So, to translate relative to an earlier subject you brought up, the synthesis argument, where generative AI is great at producing an aggregate view into something like the resumes you mentioned earlier, that inherently means that that practice might be problematic unless it's a platform that, unlike OpenAI's ChatGPT, has protections around that input.
Zach Warren 13:46
Exactly, 100%. And that's why, particularly in the legal and tax spheres, the professional services that I talk to, you're not necessarily seeing widespread adoption right now, because it is mostly confidential information. So something like ChatGPT, it's out there, and if you want to play around with it, good. But for enterprise-wide use, I'm expecting probably not OpenAI's tools, but maybe Microsoft's, Google's, some of those other tech giants that have built-in privacy and security features, and that people have trusted with their confidential information in the past.
Jason Pufahl 14:22
I think Microsoft's is going to be an interesting introduction. I think it's only a week or so away before it becomes more publicly available in Word and PowerPoint, and I think they've got a tool called Designer now for images. You're going to be able to create an awful lot of content very quickly. And to your point, it's an ecosystem that people generally trust, so I'm pretty enthusiastic about that more formal announcement.
Zach Warren 14:50
And I think it's going to be interesting when it's baked into the back end of systems, too, and it just becomes kind of a natural way that the system works, as much as a proprietary product that people are going out for. Like, right now we're on the Office 365 suite in my organization. And in Teams, not only can you already get it to record and get a transcription of your meeting, but also, through generative AI, you can get a summary of that meeting. So even if it's an hour-long transcription, you can say, okay, what are the four bullet points that I really need to remember here? Which is something really interesting, but the people using it aren't necessarily thinking, oh, I'm using generative AI here. It's, this is a cool tool that is already baked into something that I use, and it's very helpful. And I think that's really where we're going to see the adoption: not necessarily people going specifically for generative AI tools, but it's just another feature in the way that they already work.
Jason Pufahl 15:46
It makes you wonder how many of those tools, and we were just talking about this yesterday, the virtual note-takers, you know, how many of them go by the wayside simply because it's baked into Zoom, it's baked into Teams?
Steven Maresca 15:57
It's an entire industry, just evaporates.
Jason Pufahl 16:00
Almost overnight.
Zach Warren 16:03
Yeah, very true. There are gonna be big shake-ups with this.
Jason Pufahl 16:06
So, I think we're kind of bumping up against time here, but I did want to touch a little bit on the ethical considerations around this, right? There are obviously concerns about misinformation, misrepresentation of data; certainly you touched on even just the tools spitting out inaccurate information, even if it's not intended to be malicious in any way. So, what are the risks there, do you think?
Zach Warren 16:33
There are a bunch of different risks. And I know, particularly in the legal context, there are some courts around the country that have already made their lawyers sign, quote unquote, AI pledges, saying if you're going to use AI tools in this courtroom, you are going to do so ethically, and with the tools' actual intended purpose in mind. We have seen some people run afoul of AI. There was a pretty famous court case earlier this year, Mata v. Avianca, out of the Southern District of New York, where the lawyer was just doing legal research, wanted to find some cases that would correspond to his case, so he asked ChatGPT, hey, what are some court cases I can use? And ChatGPT, being a tool and wanting to please, said, oh yeah, here are six cases you can use. Of course, none of the cases actually existed, because the way the technology works, it was just predicting what the person wanted to hear. Doesn't matter. He went back and submitted the brief. The judge said, oh, you realize none of these cases actually exist, right? So this is my favorite part, the next part. The judge tells him, okay, so certify this for me, get me some certified cases. What does the guy do? Goes back to ChatGPT to ask, these are certified cases, right? And of course, ChatGPT said yeah, so he submitted that, got sanctioned, and his co-counsel on the case also got sanctioned. But what that tells me is that you can't just use these tools willy-nilly.
These are technological tools, and to the point of garbage in, garbage out, there's some element of thinking that has to be involved here. You can't just take everything the tool says at face value, which is partially why I don't think the tool will ultimately replace people in the professional context either, because it's a matter of not only taking what the tool gives you, but interpreting it, sense-checking it, making sure that you're using it in a manner that is consistent with the ethics of the organization and with any potential risks. There's a lot that can be done, but only through the marriage of person plus machine, not just letting the machine do its own thing.
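The Mata v. Avianca story reduces to a simple rule: verify model output against an authoritative source, never against the model itself. Here is a toy Python sketch of that rule, where the small `KNOWN_CASES` set is a stand-in assumption for a real legal research database:

```python
# KNOWN_CASES stands in for an authoritative case-law database;
# in practice you would query a real legal research service.
KNOWN_CASES = {
    "Mata v. Avianca, Inc.",
    "Marbury v. Madison",
}

def verify_citations(citations: list[str]) -> dict[str, bool]:
    """Map each cited case to whether the authoritative source contains
    it, rather than asking the model to certify its own output."""
    return {case: case in KNOWN_CASES for case in citations}

# "Varghese v. China Southern Airlines" was one of the fabricated
# citations in the actual Mata v. Avianca filing.
results = verify_citations(
    ["Marbury v. Madison", "Varghese v. China Southern Airlines"]
)
for case, exists in results.items():
    print(f"{case}: {'found' if exists else 'NOT FOUND, do not file'}")
```

The design point is simply that the verification step queries a source independent of the model, so a hallucinated citation fails loudly instead of being rubber-stamped.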
Steven Maresca 18:48
So bottom line, you need to be a domain expert in the prompt that you're placing into the system in order to properly evaluate the response.
Zach Warren 18:56
100%, yes.
Jason Pufahl 18:58
But interestingly, I was kind of working on a presentation that I wanted to do relative to AI. And one thing I wanted to show was, you know, maybe how easy it would be to produce misinformation. So I asked it, tell me valid facts about climate change. And then I tried to prompt it to say, well, tell me why climate change is false. And its response over and over, despite, I think, my being pretty creative about the way I constructed the prompts, was continually: there is demonstrable evidence that climate change is real, I can't give you false information, so I'm not going to opine on that. And so it wasn't as straightforward as simply saying, lie to me about it.
Steven Maresca 19:34
And you'll see the same thing if you ask for medical advice, too, right? Like, I'm not a doctor or a medical professional, please seek actual guidance. You know, they do have safeguards around it. But, you know, there's a game developing in some quarters of trying to trick the generative AI platform into revealing something it shouldn't. It's certainly still possible.
Jason Pufahl 19:57
I mean, I think it's very possible, right? But you do have to work at it a little bit, which maybe is a little comforting.
Zach Warren 20:03
A little bit, yeah, and those guardrails will continue to improve. I know OpenAI is working hard on them with version 4. But yeah, to your point, it's never going to be perfect. There are all sorts of questions, and all sorts of ways you can word those questions, that you can't just guard against everything. So you have to be smart about it.
Steven Maresca 20:20
This is one of those areas that I try to turn into a positive, because if you know that it's likely a generative AI platform will emit something simply to satisfy the prompt, you can use that to your advantage. If I submit, hey, how do I do something with this piece of code, and I know that the response contains an answer that isn't possible, I might, as a programmer, interpret that as something I could build, as opposed to a failing of the platform. So there's still a way to turn the output into something meaningfully useful, with that knowledge.
Zach Warren 20:55
I like that. That is one use case I've seen a lot, by the way, that I didn't mention earlier: it is very good at building code, and at explaining how to do so for somebody like me who doesn't. But I think your point is very well taken, where it's a first draft. Like that legal recruiters conference I was talking about, I had ChatGPT write a job listing for a clown at a law firm, just to see what it would come up with. And it came up with a really awesome thing, this is a law firm that really wants to have happiness for its employees and blah blah blah, but it obviously doesn't make sense. But you can take that and say, oh, okay, so this is how exactly it's going to approach something like a job listing, so let me tweak here to make sure that I get something that's actually worthwhile out of this. Just so you're not staring at a blank screen.
Steven Maresca 21:49
I hope the job description was court jester, or would that be too clever?
Zach Warren 21:56
It was a little bit too clever.
Jason Pufahl 21:57
That would have been a good one. Zach, this has been a great conversation. I appreciate your insights and your expertise on this, and kind of breaking down the subject into something that's a little bit more straightforward for everybody to understand, because I think people are trying to get their arms around this.
Zach Warren 22:17
It's not easy, but there will be a lot of education, I think, over the coming days. And thank you very much for having me to help out with it.
Jason Pufahl 22:24
Yeah, thanks so much, and enjoy the rest of your day. And of course, as always, if everybody wants to talk about it more, and I think we're talking about it a lot internally, we can always engage Zach again. So let us know and we can continue the conversation. But as always, thanks for joining, Zach, I appreciate it. And thanks to the two of you for being here.
Steven Maresca 22:39
Take care.
Michael Grande 22:40
Thanks.
22:41
We'd love to hear your feedback. Feel free to get in touch at Vancord on LinkedIn. And remember, stay vigilant, stay resilient. This has been CyberSound.