Vancord CyberSound

059 - Turning the Tables on Attackers

September 13, 2022 Vancord Season 1 Episode 59

Understanding the behavior of attackers can improve your ability to react, strengthen your overall security stance, and give you a leg up in detecting malicious activity.
On today’s episode of CyberSound, Jason, Steve, and Matt discuss common indicators of an incident in hopes of improving your defense tactics, and recommend technologies that may be useful for you to deploy.

00:01

This is CyberSound. Your simplified and fundamentals-focused source for all things cybersecurity, with your hosts, Jason Pufahl, Steven Maresca and Matt Fusaro.


Jason Pufahl  00:13

Welcome to CyberSound. I'm your host, Jason Pufahl, joined, as always, by Steve Maresca and Matt Fusaro. So I think today we're going to talk a little bit about how you might use the behavior or sort of common techniques of attackers to slow them down a little bit, maybe better detect their activities, confuse some of the activities that they've got, right, understanding their techniques, and maybe deploying some technologies that might just otherwise give you an opportunity to react a bit more. You know, at a really high level, Steve, common things that you see deployed or that you recommend people deploy?


Steven Maresca  00:57

Yeah, so basically some terminology up front, because I think this is kind of important. 


Matt Fusaro  01:00

Yeah, it'll be important. 


Steven Maresca  01:03

It's a very unusual area of security. And I think a lot of people hear it being discussed without necessarily understanding what the lay of the land is. Deceptions in a defensive manner include technologies that you may have heard of, like honeypots, honey tokens, canaries, things that, to some degree, have meanings outside of cybersecurity, a canary in a coal mine is obviously the origin of that one. What's the purpose? It's to give you a bit of a leg up to detect activity that should definitely be deemed malicious. That's kind of the intent here. And I'll add to what you had in the opening, Jason, part of this is to gain information about the attacker so that you might, in the moment, defend the network. So what are your common initiators of an incident? Phishing, right? So what happens in a phishing attack that could be defensively meaningful? For example, you know that your users are potentially submitting their identities to a form, right? They're revealing their passwords. How can you use that in an environment? I think that if you use a fake identity that is known, baked into logging systems, and examples of that variety, you can submit that to the form and hope that the attacker attempts a login. I've personally used this with success to get information about where attackers are attempting their logins and their other secondary activity. What does that get you, ultimately?
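
A rough sketch of the decoy-credential idea Steve describes: plant a fake identity that is never used legitimately, then watch authentication logs for any attempt to use it. The decoy username and the log-line format below are illustrative assumptions, not any particular product's output.

```python
import re

# Hypothetical planted identity; it exists only to be stolen, never used.
DECOY_USER = "svc-backup-admin"

# Assumed syslog-style auth log line, e.g.:
#   Failed password for svc-backup-admin from 203.0.113.7 port 51234
ATTEMPT_RE = re.compile(
    r"(?:Failed|Accepted) password for (?P<user>\S+) from (?P<src>\S+)"
)

def check_line(line: str):
    """Return the source address if this log line touches the decoy account."""
    m = ATTEMPT_RE.search(line)
    if m and m.group("user") == DECOY_USER:
        return m.group("src")  # a high-confidence indicator of compromise
    return None

def scan_log(lines):
    """Collect the attacker source addresses seen trying the decoy identity."""
    return sorted({src for line in lines if (src := check_line(line))})
```

Because the decoy account has no legitimate use, any hit from `scan_log` can be treated as malicious without the usual triage.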


Matt Fusaro  02:44

Yeah, at that point, you're starting to get some IOCs that you can start putting into your SIEMs or even your firewall rules, right? You know, if you want to start moving into the defensive area. If you're looking to gain more information, you might not want to be blocking that activity just yet, right, but by submitting to that form and getting more activity from that actor, you're now getting sources, you're getting the behavior, right, what types of things are they trying to get into? Are they looking for Exchange, for example?
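
Matt's point about turning observed activity into SIEM or firewall inputs can be sketched simply: validate and de-duplicate the source addresses you've collected before feeding them to a blocklist. The function name and allowlist behavior are illustrative assumptions, not any particular product's API.

```python
import ipaddress

def build_blocklist(observed_sources, allowlist=()):
    """Turn raw observed source strings into a clean, sorted blocklist.

    Skips anything that isn't a valid IP (so log garbage never becomes a
    firewall rule) and anything explicitly allowlisted (so you don't block
    yourself while you're still gathering intelligence on the actor).
    """
    allow = set(allowlist)
    block = set()
    for src in observed_sources:
        try:
            ip = str(ipaddress.ip_address(src.strip()))
        except ValueError:
            continue  # not an IP address; ignore rather than block garbage
        if ip not in allow:
            block.add(ip)
    return sorted(block)
```

The output is deliberately plain, a sorted list of addresses, so it can be pasted into whatever SIEM watchlist or firewall object-group format you actually use.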


Steven Maresca  03:20

Right, so we're getting some notion of what systems they're targeting, and some notion of which users may have actually submitted their credentials. Now, it's not atypical at all for users to mistype a password in a phishing form. So you're not necessarily tipping off the attacker by submitting a fake account outbound, but it allows you to infer what other users might have actually been hit. And that's really important, because especially when you think of people working from home, using their own devices, the odds of an organization having the visibility to see which user clicked an email from the network traffic is much, much lower than it ever used to be. So you want these secondary indicators as one mechanism to gain visibility into what's going on. That's phishing.


Jason Pufahl  04:10

So I want to actually pause you there for a second, because I think that's a good point. I tend to think of some of these techniques from the perspective of networks or servers deployed in your more traditional data center or business environment, right, where, you know, they sit on unallocated IP address space that you don't expect any of your actual users to ever hit. If you see traffic there, you can probably assume it's malicious. Do these techniques still have value with some of the changes that we've seen today in terms of the hybrid workforce, or the switch, at least, to the out-of-office workforce?


Steven Maresca  04:47

I mean, I wanted to start with that one because I think it proves that there are techniques that are still valuable, given that, you know, more traditional deceptive techniques do depend on that local, internal network visibility.


Matt Fusaro  05:00

Yeah, it's also going to depend on, you know, your staffing, like how many people you actually have to maintain something like a dark net, for example. And if you're not familiar, a dark net would be almost like a trap network that you try to lure attackers into, maybe replicating a couple of things that look similar to your environment, so that you can gain information about an attacker and hopefully block that activity in your production environment.


Steven Maresca  05:25

Yeah, so honestly, that's a good segue into internal defenses, right? What do attackers do when they get a foothold in your network? They run scans, they perform reconnaissance, they want to start staging an attack. A dark net, like Matt described, would be a reserved area in your network where there really is nothing legitimate. Therefore, any traffic destined toward that location should be a bellwether that, you know, someone's either made a typo, which is potentially legitimate, or is searching. And that will tell you which potentially compromised systems are doing reconnaissance. Those are really great ways of gaining early indications of attack before it really gets rolling. Honeypots are a related technology. They're not quite the same as a dark net, they're far more interactive to an attacker, they look like a real system. Ideally, a honeypot pretends to be something that maybe has an attractive keyword in its hostname, you know, accounting files, something like that. They pretend to be vulnerable systems or similar. Same goal: attract attention from an attacker and, in this particular case, make them waste time trying to break in.
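
A minimal sketch of the dark-net premise Steve describes: a listener on an address and port where no legitimate service lives, so any connection at all is worth flagging. The threading details and the choice to simply record source addresses are assumptions for illustration; a real deployment would fire an alert into your SIEM instead.

```python
import socket
import threading

def run_canary_listener(host="127.0.0.1", port=0, hits=None, stop=None):
    """Listen where nothing legitimate exists; record every connecting source.

    Because no real service is advertised here, any connection is either a
    typo or reconnaissance -- the core premise of a dark net. Returns the
    bound port (port=0 lets the OS pick one) and the shared hits list.
    """
    hits = hits if hits is not None else []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    srv.settimeout(0.2)  # short timeout so the loop can notice the stop event
    bound_port = srv.getsockname()[1]

    def loop():
        while stop is None or not stop.is_set():
            try:
                conn, addr = srv.accept()
            except socket.timeout:
                continue
            hits.append(addr[0])  # in practice: alert, don't just record
            conn.close()
        srv.close()

    threading.Thread(target=loop, daemon=True).start()
    return bound_port, hits
```

The listener never answers with a banner or a protocol, which keeps the sketch low-interaction; a honeypot proper would go further and pretend to be a vulnerable service to waste the attacker's time.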


Jason Pufahl  06:39

Yeah, so to some degree, that's the goal, right? Get them focused on something that is unimportant, which gives you an opportunity to respond, rather than them going right after the real, legitimate assets.


Steven Maresca  06:50

Any speed bump in an incident is a help. An old term for this was a tarpit. Far, far more network-oriented, deliberately trying to slow down network traffic. Not very common to encounter today, but the notion, the actual metaphor, is useful here. If you can dream up a way to direct attention away from your actual assets, you've gained some time.


Matt Fusaro  07:15

Yeah, these things used to be quite onerous to deploy, right? I mean, Steve, you helped run some of these in the past. That's not so much the case anymore. It's being built into a lot of products now. Your endpoint security solutions have them now, and there are point solutions, was it Red Canary? Yeah, one of those out there, where you can actually deploy physical hardware or a virtual machine that acts as a honeypot for you. So this technology is attainable now, and it helps you with the alerting and everything. It used to be a lot of work to put something like this up, not so much anymore.


Steven Maresca  07:52

Returning to the fake identities aspect, you mentioned fake tokens in EDR platforms. That's a very useful scenario, because they insert potentially interesting credentials into memory, and anything interacting with them, or attempting to authenticate with them, becomes an immediate indicator of attack. Great example.


Jason Pufahl  08:15

But what I like about that example is, and I do find myself thinking about this, if you use an MDR and it's doing something like fake tokens, that's something your IT staff doesn't have to do; it's taken care of by the platform. Implementing a dark net, or some of these other technologies, they are time-consuming, and you do have to monitor them. And I wonder where they sit in terms of value against other potential security activities.


Steven Maresca  08:42

So I'd say there's a spectrum. Honeypots: onerous, unless you're deploying something off the shelf like Matt mentioned. Other things, you know, for example, creating an identity that sits in Active Directory, or whatever your identity store is, and literally does nothing other than be an illegitimate thing to be used. You set it and forget it. And then if it happens to appear in dark web data, because you're hopefully searching for that, then you know, without a shadow of a doubt, that your local identities have been leaked. It's a great way of determining through inference and monitoring that something's occurred. So, low effort, high fidelity if it gets triggered, right. A similar example, and this is more of an exfiltration and data-access scenario: weaponized documents are used all the time in phishing attacks to actually compromise a system, but similar techniques can be deployed to gain a sense of someone opening a file. Canarytokens.org is a great site, it will generate a document, like a Word doc or an Adobe PDF, with a callback to a system, so you can get an email, for example, if it's opened. Put that in a file share, in a place that shouldn't really be accessed during normal business or legitimate activity. You know, someone snooping, maybe it's an insider threat, maybe it's an attacker trying to determine the value of the material. Related concept: a lot of network file shares, if you're still using them on prem, have a capability to trigger a script if a file has been opened or even listed. That is enough to infer that some types of attacks, like ransomware, are underway. Automatically deny the user access to the share in that script, shut down the attack, and potentially trigger an email. Those are, again, low-effort, low-cost, potentially free ways of knowing without a doubt that something unusual is going on.
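
The Canarytokens-style callback Steve mentions can be sketched as a tiny HTTP listener: a decoy document embeds a token URL (for example, as a remote resource reference), and any fetch of that URL means the document was opened somewhere. The token value and endpoint path below are hypothetical; Canarytokens.org handles the document generation and the alerting for you.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical token baked into a decoy document as a callback URL.
TOKEN = "t-7f3a"
triggered = []  # each entry is the source address that tripped the canary

class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == f"/canary/{TOKEN}":
            # The decoy document was opened: record who fetched the token.
            # In practice you would fire an email/Slack/SIEM alert here.
            triggered.append(self.client_address[0])
        self.send_response(204)  # empty response; reveal nothing to the opener
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet; real alerting happens in do_GET

def start_canary_server(port=0):
    """Start the callback listener in a background thread.

    port=0 lets the OS choose a free port; returns the server object
    (so it can be shut down) and the actual bound port.
    """
    server = HTTPServer(("127.0.0.1", port), CanaryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Like the decoy identity, this is high fidelity: the token URL appears nowhere legitimate, so any hit on it is a signal worth investigating, whether it's an insider snooping or an attacker appraising exfiltrated files.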


Jason Pufahl  10:48

So what strikes me listening to this is, you need to have at least decent documentation if you're going to implement some of these things, right? Because if you start to get alerts for an account that was triggered, and maybe you're at the helpdesk or somewhere else, you could think that that's a real human being, right? So you don't want just one technical person implementing a bunch of these techniques, not necessarily being the one who gets the alerts, and then confusing everybody when they start to see activity. So there is an aspect of this that strikes me that you have to have at least good documentation, if not a reasonably robust security program, to really get value out of these.


Matt Fusaro  11:25

Yeah, I mean, I'd say you probably need to have a robust incident management plan, right? You may not want to tell the rest of your IT team about some of those things, right? Because sometimes you do want to understand if your helpdesk is going places it shouldn't, or, you know, other engineers are going places they shouldn't. But if you have a good threat management system in place, that stuff will get to the right people and be dealt with, and they'll know whether or not it's an issue.


Steven Maresca  11:50

Right, I think that in every one of these scenarios, an alert to the people who should know needs to be sort of a tacit expectation, a piece of it. Because if it's not, you're not getting the value, right.


Matt Fusaro  12:05

And these types of things are really valuable to companies that would probably be targeted by a certain attacker, right? A lot of what you're doing is trying to get information about your attacker that you may not be getting from your standard IOC sources, right? It's not public data anywhere. It's not something that you're going to get from the AlienVaults or Fortinets of the world; you're trying to create your own set of information. And it's typically because you've got something valuable, you may be getting attacked by, I don't know, a zero day, if you will, or an organization we don't know about yet.


Steven Maresca  12:46

Or it's industrial espionage, and your most sensitive information is guarded, you know, you've planted something that at least gives you a heads up that it has been touched when it shouldn't have been. That's the way you want to approach this problem. There are other scenarios, too. As a matter of course, identities are being leaked in dark web data. They might have come from third-party breaches, but they're organizational identities, right? There's nothing inherent in that data that you know to be associated with your organization unless you had a breach. But if there is a plaintext password in dark web data, you might as well use a service that allows you to fetch them for all of your organizational identities, and test them against your Active Directory, against Office 365. If you find one, A, it's a security awareness, educational opportunity with that individual user; B, you can actually lock the account, change the password, impose more rigorous controls, and defend in a way that, frankly, could have been used by an attacker at that moment to gain access. So there are lots of options here. I do think they're worthy of investigation, either when evaluating products, to your point, Matt, in terms of EDR, but also as just rainy day projects. You know, enable something, hopefully you never receive an alert, bias it toward low noise, and hopefully you'll be able to react appropriately early in an attack.
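
One way to check whether a password appears in breach data, without sending the password, or even its full hash, to anyone, is the k-anonymity range scheme used by services like Have I Been Pwned's Pwned Passwords API: you send only the first five hex characters of the password's SHA-1 and match the returned suffixes locally. The sketch below shows just the offline split-and-parse logic; the actual HTTPS fetch of the range response is left out.

```python
import hashlib

def sha1_prefix_suffix(password: str):
    """Split a password's SHA-1 into the 5-char prefix sent to the range
    API and the suffix matched locally. The full hash never leaves your
    machine, which is the point of the k-anonymity scheme."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def suffix_seen(range_response: str, suffix: str) -> int:
    """Parse a 'SUFFIX:COUNT' response body (one entry per line) and return
    how many times the password appeared in breach data (0 = not found)."""
    for line in range_response.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

A nonzero count is the cue Steve describes: treat it as a security-awareness moment for the user, lock or reset the account, and tighten controls before an attacker uses the same leaked credential.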


Jason Pufahl  14:18

Yeah. And honestly, they can be valuable sources of data at low cost, potentially. So, from the standpoint of risk versus expense, or complexity versus expense, some of these aren't that hard to do. They do provide high fidelity, I like that term that you used before, right, high-fidelity data. So you're not guessing, you're not going to have to do a lot of legwork to determine whether or not it's a legitimate alert; you can take action on it, and it gives you information that you otherwise wouldn't have had. So, I think that's a good high-level overview of, you know, how to turn the tables on attackers, how to use some of the data you have at your disposal to buy you some time in the event of an attack, or just give you some information. If there's any follow-up that people want, feel free to reach out to us on LinkedIn at Vancord. We're happy to share tips, techniques, and tricks that we use or that we've seen, or just otherwise talk about the topic a bit more. So, as always, we do appreciate people listening, and hope you got value out of the podcast today.


15:22

We'd love to hear your feedback. Feel free to get in touch at Vancord on LinkedIn or on Twitter at Vancordsecurity. And remember, stay vigilant, stay resilient. This has been CyberSound.