Vancord CyberSound

017 - Demystifying AI in Security

November 23, 2021 | Vancord | Season 1, Episode 17

Cybersecurity conversations have increasingly revolved around artificial intelligence (AI) and machine learning. Maybe the better question should be, "Do you need AI in your security products?"

This week, we discuss the basics of AI, how it can work with cybersecurity, and whether it is a silver-bullet solution. Join Jason as he speaks with Vancord Senior Security Engineers Matt Fusaro and Steve Maresca.


[00:00:01.210] - Voiceover

This is CyberSound, your simplified and fundamentals-focused source for all things cybersecurity, with your hosts, Jason Pufahl and Steven Maresca.



[00:00:12.850] - Jason Pufahl

Welcome to CyberSound. I'm your host, Jason Pufahl, joined today by Steve Maresca, Senior Security Engineer at Vancord, and Matt Fusaro, also Senior Security Engineer at Vancord.



[00:00:22.330] - Matt Fusaro

Hey, guys.



[00:00:22.880] - Steve Maresca

Hey.



[00:00:24.550] - Jason Pufahl

So, we chatted a bit about this, but we're going to cover the idea of AI in security products. And I want to hearken back a little bit to Episode Seven, where we talked about security products and silver bullets versus snake oil--the efficacy of security products. Are they what most clients need? Are there better things that people can do as a precursor to some of this complexity? AI has really leapt onto the scene as what feels, a little bit, like a silver bullet: this idea that you're going to have automation--or, you know, some intelligence within your products--solve a lot of problems for you.



[00:01:08.410] - Jason Pufahl

So I think I'll throw out to both of you: Describe if you could a little bit what AI is, and then maybe as we go through it, do we actually need it? Is it a marketing term? Is it legitimate? Let's start diving into that.



[00:01:24.130] - Matt Fusaro

So let's maybe tackle, though: what is AI? That's a tough one.



[00:01:27.070] - Steve Maresca

...or what it isn't?




[00:01:28.510] - Matt Fusaro

Yeah, yeah. It's kind of an evolving term, always changing. I think what people think AI is, is anything that makes a decision on its own, without you interacting with it. And that's not really how it gets developed today, at least. Those are the expectations, not the reality, right?



[00:01:52.730] - Steve Maresca

I mean, I think that it's marketed as something that makes better decisions than people can, or an efficiency aid that makes decisions on your behalf. That's not necessarily the case, is it? At the end of the day, these are tools that are really intended to make challenging problems simpler. And whether that's actually the case is, frankly, an open question, in my opinion. AI, as it's marketed anyway, is supposed to identify things that are hiding. Machine learning is a related term, and the two are often used interchangeably. These are facilities in security products that are meant to reveal the mysterious or deceptive.



[00:02:35.810] - Matt Fusaro

A lot of it, too, is that there are firehoses of data coming in. It's supposed to help you with that--be your augmentation that can take these huge amounts of data and discern something from them; make some type of decision.



[00:02:51.770] - Jason Pufahl

So how does it work, though? Is it, like, Skynet? Does it know a little bit about everything, or is it actually learning anything as it gets data and starts making decisions?



[00:03:03.290] - Steve Maresca

Ultimately, what we see in actual products doesn't remotely resemble what people think of as AI from the Hollywood notion. And that's very important to know.



[00:03:12.290] - Jason Pufahl

I don't know. That's a cool notion though.



[00:03:14.330] - Steve Maresca

Well, I mean, it sounds great, makes for good movies, but the reality isn't quite the case. It's not very smart. If anything, AI is pretty stupid, and you want it to be stupid. It's really a very special-purpose tool: systematizing decisions, bubbling up anomalies, helping identify things that are hidden.

Ultimately, AI works in application by analyzing a baseline, learning what's normal, and then, in some fashion, reporting on deviations from that baseline. And in some cases, making decisions based around that.
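
To make that baseline-then-deviation idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: the metric (hourly login counts) and the three-standard-deviation cutoff are assumptions for the example, not details of any particular product.

```python
# Minimal sketch of the baseline/deviation loop described above.
# Assumptions (illustrative only): the metric is hourly login counts,
# and "anomalous" means more than 3 standard deviations from baseline.
from statistics import mean, stdev

def learn_baseline(history: list[float]) -> tuple[float, float]:
    """Learn what's 'normal' from historical observations."""
    return mean(history), stdev(history)

def is_anomalous(value: float, baseline: tuple[float, float],
                 threshold: float = 3.0) -> bool:
    """Report deviations: flag values far outside the baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu  # any change from a constant baseline counts
    return abs(value - mu) / sigma > threshold

# Usage: learn from ten quiet hours, then score two new observations.
history = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
baseline = learn_baseline(history)
print(is_anomalous(14, baseline))  # False -- within the normal range
print(is_anomalous(90, baseline))  # True  -- a large deviation, worth a look
```

Real products model far richer features than this, but the shape is the same: learn normal, score deviation, and hand the judgment call to a person.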



[00:03:52.730] - Jason Pufahl

You threw around the term "machine learning" before. That just seems like a more appropriate way to describe it--taking in data, understanding what that baseline is, and then, to your point, alerting you when it sees something anomalous.



[00:04:05.990] - Steve Maresca

And honestly, that's a more modern term. AI, in terms of actual use, has a long history. One of the examples I like to cite is logistics aid back in the Gulf War--we're talking early '90s. AI, out of really early research prototypes from MIT, was used to help manage the supply chain on the battlefield. And it's a challenging problem: when you're moving goods, you have interruptions, you have supply shortages, you have stuff being attacked. And in order to shorten that time from delivery to actual use, these were specialized systems developed to find bottlenecks.



[00:04:47.910] - Steve Maresca

Applied today, what that actually means is finding unique pieces of information that are highly, highly sensitive markers of attack, which may not be obvious in an environment that's very noisy. I think Matt and I were talking earlier about how the main challenge in security is that we're dealing with the realm of the unknown, and all prior products really deal with the known bad. AI and machine learning: their goal is to find things that deviate from the norm within your environment but aren't necessarily standout malicious.



[00:05:28.890] - Matt Fusaro

Right. You're dealing with security, which is, like Steve said, a huge topic, a huge problem space. Things are constantly changing, but you're trying to apply a very complex system to that. It's a recipe for disaster, really.



[00:05:45.690] - Jason Pufahl

So then is it really just a good marketing term? Because what you described, I feel like, are things that tools have done for a long time: collect a baseline (especially tools that have really explicit purposes), alert you on the things that are known malicious, and maybe pull out or extract some of the things that deviate a little bit from that baseline. But those still require a human being to evaluate them, to go down that incident-response path, and to make a determination.



[00:06:16.350] - Matt Fusaro

Yeah, I think it ended up being an over-promise. I don't think the people building these systems ever really intended them to be interpreted as a silver bullet that is just going to learn your environment, go off, and have everything it says be true from then on. I think when people were building these things, they pretty much expected them to be another data point that security engineers can use--a decision they may not have come to on their own: "Here's some more data. Go make a smart decision."



[00:06:47.490] - Matt Fusaro

And marketing kind of picked that up and said, "Oh, something is making a decision for you. This must be AI."



[00:06:51.870] - Jason Pufahl

Got to be AI.



[00:06:53.790] - Steve Maresca

All of these related technologies require shepherding by people--that might be the engineering department, that might be the development team behind it--in order to tune them appropriately. I think you said that a second ago. They don't work in isolation. The decisions made and the anomalies identified in one environment might be entirely benign in another, so there's a lot of potential for false positives. And frankly, getting back to, I think, your original question, Jason--why is it out there? What's the purpose of it?--I think it's a differentiator between competitors in a very cluttered space. It's useful, right?



[00:07:33.930] - Steve Maresca

It's just a way of adding robust behavioral analysis capabilities to platforms that might otherwise just know about the known bad.
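
To illustrate Steve's point about false positives--that an anomaly in one environment can be entirely benign in another--here is a toy continuation of the earlier sketch, with made-up traffic numbers: the same observation is unremarkable against one host's learned baseline and wildly abnormal against another's.

```python
# Toy example (made-up numbers): the same value, two baselines.
from statistics import mean, stdev

def zscore(value: float, history: list[float]) -> float:
    """How many standard deviations a value sits from its baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) / sigma

# 500 MB/hr of outbound traffic, judged against two environments:
backup_history = [480, 510, 495, 505, 500, 490, 515]  # a backup server
kiosk_history = [2, 3, 2, 4, 3, 2, 3]                 # a lobby kiosk

print(zscore(500, backup_history) > 3)  # False -- routine for this host
print(zscore(500, kiosk_history) > 3)   # True  -- a red flag here
```

The detection logic is identical in both cases; only the learned baseline differs, which is why these tools need per-environment tuning rather than drop-in deployment.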



[00:07:42.690] - Jason Pufahl

So I guess one of the questions I have, then, going back to that idea of snake oil, for example: does it give people maybe an overdeveloped sense of being secure if you're calling things AI--if you're saying it has an artificial intelligence to it? There are a lot of folks out there who have limited budgets and probably need to make some spend on technology, and they may buy something with the idea that it's just going to somehow knowingly solve their problems. Is the industry doing itself some harm by labeling things as AI, do you think?




[00:08:19.170] - Matt Fusaro

I'd say that probably, the answer is yes. Like I said before, you're over-promising what that system can do. I can see why it was marketed that way: it's a major pain point for a lot of organizations to be able to make decisions on huge data sets, so that was a way for them to say, "Hey, we can solve that pain point." But as we're seeing with products that are out today, they're not fulfilling that promise. It's not doing what's on the label, right? It's not coming into your environment and immediately protecting you. You still need a team of people reading this stuff, tuning it, actually discerning whether the results are usable.



[00:09:02.250] - Steve Maresca

And that's not to say that AI- and machine-learning-capable tools can't be used successfully. It's just that they require that extra investment. Sometimes that's done upfront; sometimes not at all. I'd say that the products that have those capabilities, frankly, are either not very heavily used or are a source of nuisance because of the number of alerts they might generate without the benefit associated with that nurturing. It sounds strange saying something like "nurturing" in the context of computer security, but it's the absolute truth when we're talking about these technologies.



[00:09:37.530] - Steve Maresca

Silicon Valley--I'm not going to describe the particulars--had an image recognition algorithm in that popular series. And computers don't really know what they're looking at; you have to teach them appropriately. That's true for data points like images, and it's true for what Bob is doing, or shouldn't be doing, on a given day on a computer system.



[00:10:00.570] - Matt Fusaro

Yeah. So one thing a lot of people don't know about many of these products is that the vendors won't even license them to you if you're under a certain size, because they need bigger data sets to train these systems--these trainable systems need more data to even be effective. I know in our organization, back in the day, we had gone down the path of seeing if one of these was viable, and they just told us, "Yeah, you're too small; our systems just won't work." And that's not exactly on the front page of the website. They want to lure you in with the AI and machine learning marketing terms, but then you get there to actually use it, and they say, "Well, wait a second...



[00:10:39.690] - Matt Fusaro

…”We need more data. We need more input. We need more feeding of the system."





[00:10:45.330] - Jason Pufahl

Yeah, one of the things that comes to mind is, I've always heard the term "technology rich, people poor." And that really seems apropos here: you can buy a lot of technology, but if you don't have the people to interpret the output, you get limited results--limited successful results. So, in spite of this being "intelligent," it sounds less so the more we talk about it. It's really just taking data and then trying to make some decisions based on that.



[00:11:16.290] - Steve Maresca

And I'll bring it even further back. These are tools--arrows in the quiver--but you still need all the foundational elements to help protect an organization appropriately. You need attention paid to alerts. You need controls that back up your anomaly detection. If those aren't there to begin with, the rarefied heights of behavioral analysis and the signals you get from machine learning and AI are meaningless.



[00:11:45.150] - Steve Maresca

So returning again, as we tend to, to the fundamentals--that's where focus is required. That's where organizations should spend their initial effort. And, if you're considering new products based upon marketing around AI or machine learning, it's probably not worth being on the comparison sheet unless you've done all that pre-work.



[00:12:09.460] - Matt Fusaro

Yeah, I hear that.



[00:12:12.430] - Jason Pufahl

So I think, maybe to wrap it up, there are a couple of things that come to mind. One, definitely, is that if there are security vendors out there listening to this saying, "Hey, AI really does solve problems that you guys aren't addressing," or "You're being unfair to the space," we're happy to have a conversation to explore that. So feel free to reach out to us at Vancord Security on Twitter or Vancord on LinkedIn. It's possible we're being a little bit too negative here.



[00:12:42.490] - Steve Maresca

And it's worth tempering that before we do close: we have experience with these technologies. Matt's worked with expert systems; I've worked with image recognition technology. Those are tiny examples of some of the related material. We believe in the utility of expanded data sets; the technology just needs to be applied appropriately.




[00:13:00.970] - Matt Fusaro

Yeah, and there's a lot more industry use going on now. There are millions of dollars being put in by private equity at this point to enhance AI systems and further these things along--for things like wider supply chains, maybe not as specific as the example you brought up, Steve. But I think in a few more years we'll probably get somewhere where this stuff will be useful.



[00:13:25.930] - Jason Pufahl

So, neither of you feel strongly, though, that we're on the edge of Skynet and a legitimate takeover by the machines, it sounds like to me.



[00:13:37.150] - Matt Fusaro

No, generally speaking, I wouldn't say so.



[00:13:39.130] - Steve Maresca

Any of the tools that act on their own, frankly, have many, many fuses and circuit breakers in place to prevent something from going wrong.



[00:13:48.190] - Jason Pufahl

So I guess we can still unplug them?



[00:13:49.510] - Steve Maresca

I think so.



[00:13:50.410] - Jason Pufahl

Yeah, that will work.



[00:13:50.950] - Steve Maresca

For now.



[00:13:51.730] - Jason Pufahl

All right. So on that note, I appreciate you guys joining and chatting a little bit about this. This is a big space, honestly, and in 15 minutes you can do a little bit of high-level coverage, but there's probably a lot more to dig into here. If people are interested in it, feel free to reach out to us; we're happy to chat more. Steve and Matt, thanks for joining as always. And I hope people got value out of this. Thanks.



[00:14:18.250] - Voiceover

Stay vigilant. Stay resilient. This has been CyberSound.