Vancord CyberSound

022 - Ransomware Attack: The First 48

December 28, 2021 · Season 1, Episode 22

If your organization has not been a victim of ransomware, be thankful but remain aware. When you least expect it, someone could click on the wrong link or download the wrong file, and your entire company could be held hostage by attackers demanding a ransom.

What do you do when that happens? Where do you turn for incident response? How will your team prepare? The next few moments of CyberSound may help you get ready for the first steps of that emergency, if it ever happens.


[00:00:01.210] - Speaker 1

This is CyberSound, your simplified and fundamentals-focused source for all things cybersecurity, with your hosts Jason Pufahl and Steven Maresca.


[00:00:11.690] - Jason Pufahl

Welcome to CyberSound. I'm Jason Pufahl, your host, joined as always by Steve Maresca and Matt Fusaro. Hey, guys.


[00:00:19.180] - Matt Fusaro

Hey.


[00:00:19.920] - Steve Maresca

Hey, nice to be here.


[00:00:21.350] - Jason Pufahl

So, we are going to record an episode today about the first couple of hours, or maybe even the first day of response during a ransomware attack.


[00:00:30.520] - Jason Pufahl

Ransomware is top-of-mind for a lot of our clients. We do a lot of incident response, and I think we've done enough of it to really know how critical the beginning of these incidents is.


[00:00:44.270] - Jason Pufahl

So, I'll kind of tee it off. You get hit with ransomware. You're probably the IT guy, or maybe a security person, who identifies it, and you're concerned right away with that containment piece.


[00:01:00.940] - Jason Pufahl

How do you stop it? How do you slow the attack? What's the first major step that somebody should take to protect themselves?


[00:01:08.510] - Matt Fusaro

Right, so I think the first thing we always tell people, especially the business leaders. That's usually who we're talking to first; they get in contact with us and want to know what they should do. Usually we're telling them, "Hey, it's time to take the Internet out."


[00:01:22.730] - Steve Maresca

Yeah. It's disruptive, but that's part of the process. It's one of those key steps that we recommend to stop the bleeding and buy time to respond appropriately.


[00:01:33.070] - Steve Maresca

It impacts business; that's part of the conversation. But it's a recommendation that's tolerable at that phase. Business is already impacted, so it's not necessarily going to do much more harm. And ultimately, the FBI and other law enforcement agencies recommend proceeding in that fashion today, so it's reasonable to act that way. Whether you're a business leader or an IT director, it's important that you feel able to make that type of decision and to support it when it's made.


[00:02:02.950] - Matt Fusaro

Right. It's important to cut that lifeline off from the attacker. We don't want them moving around anymore, introducing any more anomalies or artifacts into the systems that we now need to go and look for or stop.


[00:02:15.470] - Steve Maresca

So, what does that mean ultimately? We're severing the connection. But we think of that in multiple ways. You want to stop outbound communication to the Internet. You want to stop the Internet from reaching in and as a complement, if there are multiple sites involved, stop connectivity between them.
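
As a rough illustration of what "severing the connection" can look like in practice, here is a minimal sketch for a Linux gateway, assuming iptables and root access. The tunnel interface names are placeholders, and real environments should use their own firewall vendor's tooling.

```python
# Minimal containment sketch for a Linux gateway, assuming iptables and
# root access. Interface names are placeholders for illustration only.
import subprocess

def sever_connectivity(site_to_site_ifaces=("tun0", "gre0")):
    """Default-deny inbound, outbound, and routed traffic, then drop
    anything still traversing site-to-site tunnel interfaces."""
    # Default-deny for traffic entering, leaving, or routed through this host.
    # Note: dropping OUTPUT also cuts remote management from this box.
    for chain in ("INPUT", "OUTPUT", "FORWARD"):
        subprocess.run(["iptables", "-P", chain, "DROP"], check=True)
    # Explicitly drop anything crossing the site-to-site links
    for iface in site_to_site_ifaces:
        subprocess.run(
            ["iptables", "-A", "FORWARD", "-i", iface, "-j", "DROP"],
            check=True,
        )

if __name__ == "__main__":
    sever_connectivity()
```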


[00:02:33.450] - Matt Fusaro

Right, exactly.


[00:02:34.610] - Jason Pufahl

So, you make a good point though, Steve: it is a big deal. There's a perception that shutting down the Internet causes business interruptions, when maybe you're not feeling a huge business interruption yet. We've seen events where somebody calls and says, "Hey, we've noticed encrypted files, and we recognize that as maybe the precursor to a larger attack or an ongoing activity." There might not be any other obvious issues. So now you've got a security practitioner potentially making the call to take a pretty drastic step that might not be supported by management.


[00:03:11.440] - Jason Pufahl

So, we're talking about that, "Hey, what do you do in the first couple of hours?" But the reality is, you want to have those conversations early because it's important to have that buy-in from the top to give you permission to take a step like that.


[00:03:24.290] - Jason Pufahl

So, if people are listening to this, I would say immediately, let somebody else who is in a decision-making position hear this because it is critically important. You want to take steps to stop this attack as quickly as you can.


[00:03:35.740] - Matt Fusaro

Yeah, the conversation changes a lot, too, when you start talking about data exfiltration that may have been going on while you've got the Internet up and running. The faster you get cut offline, the less you may have to worry about that. We want to take that out of the equation any time we can.


[00:03:52.330] - Steve Maresca

Right, and that's the data exfiltration question. That's something that may not be known until literal weeks after the incident has, by all rights, concluded from a business operations standpoint. So instead of thinking about it and wondering about it, just cut it off at the knees and prevent it from being a possibility. Might it still have occurred prior to encryption of data? Absolutely, but you're minimizing the potential for it to proceed thereafter.


[00:04:17.090] - Jason Pufahl

So, there's no argument here then. Step one is containment.


[00:04:20.060] - Steve Maresca

Absolutely.


[00:04:20.460] - Jason Pufahl

Deal with that right out of the gates, right?


[00:04:23.210] - Jason Pufahl

Step two: I feel like we're always concerned about backup data, because in these events, clearly, that's one of the main targets. Maybe it's a secondary attack: encrypt something locally, then go after the backup data and make it more difficult to recover. So how do you protect that, and what steps do you take there?




[00:04:39.590] - Matt Fusaro

Yeah, this is another situation of "get it offline." Don't let it be reachable. And if it has to be reachable for some reason, make sure the same credentials used to manage your systems, which may already be compromised, aren't also used to manage your backup system.


[00:04:55.080] - Matt Fusaro

Separate those two. Backups are targeted so much now. We see it in scripts, we see it in the malware itself, searching for backup vendor names to go and look for those systems. So, they're definitely part of the attack path now.


[00:05:10.500] - Steve Maresca

Right, and as a case in point, we've seen exactly the instance Matt's describing. One of our customers with arguably robust backup infrastructure had their data synchronized to Amazon. The attacker gained access to the Amazon account and deleted it; their replica was gone. So, take swift action. It's really the best route to making restoration actually achievable.


[00:05:33.480] - Matt Fusaro

Yeah, that's a good point. Cloud backups are still somewhat new in a lot of organizations. It's not a new technology, but there's buy-in now, and people are moving their backups out there. People forget that there are encryption keys, access credentials, and API keys that grant access to those backups. And if those are compromised, while you may have taken the local system offline, your cloud really can't come offline. So, you've got to protect the credentials.
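
To make that credential point concrete, here is a hedged sketch, assuming AWS and the boto3 library, of two containment moves: deactivating a backup service account's access keys, and checking whether the backup bucket has Object Lock (immutability) enabled. The user name and bucket name are made up for illustration.

```python
# Hedged containment sketch, assuming AWS and boto3. The user name and
# bucket name below are hypothetical placeholders.
import boto3

iam = boto3.client("iam")
s3 = boto3.client("s3")

def deactivate_backup_keys(user_name="backup-service"):
    """Mark every access key for the user Inactive (reversible, unlike
    deletion), cutting off API access via any compromised key."""
    for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
        iam.update_access_key(
            UserName=user_name,
            AccessKeyId=key["AccessKeyId"],
            Status="Inactive",
        )

def object_lock_enabled(bucket="example-backup-bucket"):
    """Return True if the bucket enforces Object Lock; boto3 raises a
    ClientError if no Object Lock configuration exists at all."""
    cfg = s3.get_object_lock_configuration(Bucket=bucket)
    return cfg["ObjectLockConfiguration"].get("ObjectLockEnabled") == "Enabled"
```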


[00:06:05.270] - Jason Pufahl

Generally speaking, similar to shutting down the Internet, is there a risk in stopping these backups midstream? Because that's a likely possibility. You could easily be severing connectivity while backups are occurring.


[00:06:17.930] - Jason Pufahl

It feels like it's worth the risk. I can't imagine there's any reason you'd say, "Wait until the backup finishes." But I'm wondering: thoughts there?







[00:06:26.510] - Matt Fusaro

A lot of it's going to be game-time decisions. I think you have to take a risk-based approach to the in-flight backup. What are you backing up? Are we talking about a transactional database that's going to be severely impacted by not taking that backup? You might end up with hundreds of gigabytes, sometimes terabytes, of data sitting there, and it's going to just keep growing. So, risk has a lot to do with it. But for most systems, you could probably live without that backup. Stop the backup that's in flight and protect what's there.


[00:06:59.280] - Steve Maresca

There's another consideration, of course: modern backup infrastructure is tolerant of an interrupted backup job. It's built around that. Frankly, most systems assume there might be a loss of connection between the source system and the target system.


[00:07:17.620] - Steve Maresca

It's safer to do than it used to be. And to Matt's point: know what you're backing up and understand the function of the backup environment; that's how you make your decision. Ultimately, you want to make sure the backups you have are retained, and that you're not overwriting them with potentially malicious or compromised data. So, it's essential that you protect them, even if there is some secondary fallout.


[00:07:39.740] - Matt Fusaro

Good time to check your retention policies.


[00:07:41.700] - Jason Pufahl

So, I was going to say this is supposed to be everything you do in the first couple of hours, but I'm going to step back again and say, as an early activity, always make sure you have good backups. Too many times we see clients set them and forget them, and never actually check whether the data is recoverable or restorable. So, make sure ahead of time that you actually go through some restoration processes and validate that data.
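
One lightweight way to make that validation routine is sketched below, under the assumption that a checksum manifest was written at backup time; the manifest format and paths are illustrative, not a feature of any particular backup product.

```python
# Restore-validation sketch. Assumes a JSON manifest of
# {relative_path: sha256} captured at backup time; format is illustrative.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MiB chunks to avoid loading it all at once."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def validate_restore(restore_dir: str, manifest_file: str) -> list:
    """Return relative paths that are missing from the test restore or
    whose checksum no longer matches the manifest."""
    manifest = json.loads(Path(manifest_file).read_text())
    root = Path(restore_dir)
    failures = []
    for relpath, expected in manifest.items():
        candidate = root / relpath
        if not candidate.is_file() or sha256_of(candidate) != expected:
            failures.append(relpath)
    return failures
```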


[00:08:05.120] - Steve Maresca

So, bringing that back to what you do in the immediate timeframe: you're securing your backups, but realistically, you need to make sure they're still good. So how do you do that safely? I think this is a segue into credential use and making sure that you access systems safely.





[00:08:20.630] - Steve Maresca

Your backups, you want to check them immediately. You've just locked them down. What do you do? Well, you have to get into them somehow, most likely. Make sure that the system from which they're being accessed is secured. Nothing else can reach the backup infrastructure. Make sure that you're using accounts that are not used throughout the environment—local administrative accounts and things of that sort.


[00:08:40.920] - Steve Maresca

Ultimately, to shift a little bit, what we care about is preservation of your infrastructure, as it stands, because some of it's probably still safe. But, the attacker is present, they're collecting your passwords, they're collecting usernames to use. Minimize what you do in a privileged way so that you don't give the keys away.


[00:09:01.560] - Matt Fusaro

Yeah, I'd say any incident we've been in where re-infection happened, it was because of that: administrative credentials being used in places they shouldn't be, and getting recaptured. A lot of times we'll change passwords just as a protection mechanism, but if you're logging into compromised systems with those rolled credentials, they're just going to get harvested again.


[00:09:23.040] - Steve Maresca

A really good example of that is someone in a C-level position who has a workstation they really need access to. The only way to get into it at that moment happens to be an account that's highly privileged. So, that system is compromised, and it immediately gives access away to the attacker. That's the kind of thing to avoid.


[00:09:42.830] - Jason Pufahl

Yeah, it's important to recognize that during these active attacks, the attackers are potentially live in the system, monitoring your activities. So, you really do want to protect those credentials and take steps to get the attackers out of there as quickly as you can.


[00:09:59.390] - Jason Pufahl

I think, finally, a less technical step, but something we see overlooked a lot, is reaching out to your insurance provider. If you've got cyber-liability insurance, the nice thing about those policies is that they often bring resources to bear. Maybe it's forensics or incident response services; it could be legal services. But you want to notify them. Ideally, they want to be your partner in that activity and help mitigate some of the potential impact.


[00:10:30.440] - Steve Maresca

And, as a sort of complement, they provide legal services, or will involve legal services and counsel. Engage your own internal counsel if you have it, or your own retained counsel; they need to be part of that conversation early. As an outgrowth of that, that's the time to have a conversation about notification requirements. If you're a company with international sites and you're subject to GDPR, for example, you have 72 hours to report. That's not a lot of time, and if you don't do it until the close of the incident, you've already missed your window.
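
As a simple worked example of that 72-hour window (the detection time below is an illustrative placeholder):

```python
# Worked example of the GDPR 72-hour notification window mentioned above;
# the detection timestamp is illustrative.
from datetime import datetime, timedelta, timezone

detected_at = datetime(2021, 12, 28, 9, 30, tzinfo=timezone.utc)
deadline = detected_at + timedelta(hours=72)

print(f"Breach detected: {detected_at.isoformat()}")
print(f"Report deadline: {deadline.isoformat()}")  # 2021-12-31T09:30:00+00:00
```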


[00:11:02.870] - Jason Pufahl

Yeah, incident response is called "incident" response for a reason: it's not something you do every day. And I think when you have an issue like that, you ought to reach out to a partner of some sort. It might be somebody like us, who specifically does the containment and restoration aspect. It could be the insurer, who provides some guidance around legal requirements. But the reality is, you need to know what steps to take initially and, ultimately, what the tail is going to be, what your notification requirements might be, and start thinking about that early, because it does help.


[00:11:34.850] - Jason Pufahl

Do some of that data preservation: the backup preservation, the log preservation. Things that are going to give you information later are really valuable to collect, or at least retain, early on in the incident.
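
For the log-preservation piece specifically, a minimal sketch might copy logs to a separate evidence area and record hashes so their integrity can be demonstrated later. The directory paths are assumptions, and real cases may call for proper forensic imaging instead.

```python
# Evidence-preservation sketch; directory paths are assumptions. Copies
# log files (preserving timestamps) and writes a SHA-256 manifest so the
# copies' integrity can be demonstrated later.
import hashlib
import shutil
from pathlib import Path

def preserve_logs(source_dir="/var/log", dest_dir="/mnt/evidence/logs"):
    src_root = Path(source_dir)
    dest_root = Path(dest_dir)
    dest_root.mkdir(parents=True, exist_ok=True)
    manifest = []
    for src in src_root.rglob("*"):
        if not src.is_file():
            continue
        # Mirror the source layout so same-named files don't collide
        target = dest_root / src.relative_to(src_root)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, target)  # copy2 keeps modification times
        digest = hashlib.sha256(target.read_bytes()).hexdigest()
        manifest.append(f"{digest}  {src}")
    (dest_root / "MANIFEST.sha256").write_text("\n".join(manifest) + "\n")
```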


[00:11:45.140] - Steve Maresca

Right, and to make that more tactical, it's a money-saving exercise to do it early.


[00:11:50.120] - Jason Pufahl

Sure.


[00:11:50.370] - Matt Fusaro

Absolutely.


[00:11:50.710] - Steve Maresca

If you don't have to provide credit monitoring for 50,000 people because it's actually 500, that's a substantial cost difference, just to provide an example. And you can get that data and constrain the scope by taking those early steps.
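
To put rough numbers on that difference (the per-person cost here is purely a made-up assumption, not a quoted market rate):

```python
# Illustrative arithmetic only; the per-person monitoring cost is a
# made-up assumption.
COST_PER_PERSON = 10.00  # hypothetical annual credit-monitoring cost, USD

for affected in (50_000, 500):
    print(f"{affected:>6,} people -> ${affected * COST_PER_PERSON:>10,.2f}")
```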




[00:12:06.420] - Matt Fusaro

A lot of companies are just not in a position to pay out of pocket for an incident to get resolved. So, if you're working hand-in-hand with the insurer, who hopefully is going to accept your claim and pay for all that, work with them and understand their requirements so that, in the end, you're covered for what you're trying to get done.


[00:12:25.410] - Jason Pufahl

Well, and actually, it's a good point that this is a money-saving activity to some degree. If you can get better information through that incident response phase, it gives you information from which to make decisions later when it comes to notification and some of the other requirements. So, the more thorough you can be on that containment and restoration piece, the better off you're going to be.


[00:12:52.050] - Steve Maresca

I have a final comment, I think, regarding just overall communications. It's a real common behavior to not share the fact that an incident has occurred with internal staff or with business partners until long after the fact. The truth is that the best time to start those conversations is early, and it's really going to maximize the perception of readiness, preparedness, and overall attention to the issue if that's done ahead of time. It's not a technical problem. It's purely communications, but crisis communications are rather important. And frankly, they help to maintain business relationships and goodwill with employees, for example.


[00:13:32.390] - Matt Fusaro

Yeah, you're talking about making some major changes to operations during the day: taking things offline, taking the Internet down. That's going to cause a lot of problems. And if you haven't been communicating to all the stakeholders that that's what's going to happen...


[00:13:49.090] - Group

[crosstalk 00:13:49] 


[00:13:49.530] - Jason Pufahl

You can't separate the Internet and then hope that people don't notice you've done it. So, you may as well be transparent about it.


[00:13:55.080] - Steve Maresca

Exactly.



[00:13:56.730] - Jason Pufahl

So, one final thing, and I think it segues right back to the beginning: we've talked about disconnecting the Internet, but you'll probably have conversations pretty quickly about bringing key business systems back up.


[00:14:13.490] - Jason Pufahl

We never have an engagement where you leave the Internet down for a week and expect no fallout. The conversation then is always, "Well, we have to have our ERP available," or, "We have to have invoicing," or some other critical business system.


[00:14:26.970] - Jason Pufahl

Just be prepared to have those conversations, because I think it's okay to make, to your point, Matt, some risk-based decisions: listen, we can't let business completely stop; we've got a couple of things that are critical. Is there a way to open them up safely for some period of time, or at least get some usability from them? You're going to have those continuity-based discussions throughout an incident.


[00:14:49.270] - Jason Pufahl

So, I think finally, just to sort of wrap this up: I think we all agree containment at the very beginning is the most important thing. And taking probably a pretty drastic step early on can really reduce that downstream impact for an incident.


[00:15:05.540] - Jason Pufahl

So, we definitely recommend separating or disconnecting Internet connectivity. Protecting backup data is absolutely critical: make sure you've got an offline copy that can't be encrypted or otherwise affected by the attacker. Be careful how you log into systems throughout an incident, so that you protect those credentials from the attacker. And finally, engage early on with that insurer to get technical or legal guidance throughout, to help you make better decisions.


[00:15:38.530] - Jason Pufahl

All really important steps and all things that probably have to occur, really, frankly, within those first few hours. I mean, the quicker you can make those decisions, the better. Anything that you guys want to add at all to that or does that feel like if you've done that in the first few hours, you're in great shape?






[00:15:56.830] - Matt Fusaro

That's a lot to accomplish in a couple of hours, right? We've seen it take significantly longer. But if you've been thinking about these things beforehand, it's easier to accomplish some of this. The faster you can hit the ground and actually get to restoring and doing things like forensics, the faster you can get things back online and keep people happy, and the easier the incident will be for you.


[00:16:23.390] - Steve Maresca

Yeah, absolutely. These are activities you perform to give yourselves the time to think, evaluate the situation, and act appropriately.


[00:16:32.400] - Jason Pufahl

Clearly, we're trying to give a lot of advice in 15 minutes for a really stressful event.


[00:16:37.640] - Jason Pufahl

So, we do a ton of incident response. If anybody wants to talk further, whether it's "Hey, what should I do to prepare for an incident?" or "What do the next three weeks look like once we've recovered?", we're happy to have those conversations as well.


[00:16:52.570] - Jason Pufahl

Feel free to reach out to us at Vancord on LinkedIn or VancordSecurity on Twitter. We could set up a whole other podcast around other aspects of incident response and provide a ton more information.


[00:17:06.320] - Jason Pufahl

Thanks for joining us. As always, we hope you got some valuable information from this. For me, a major takeaway would be to let your executive leadership hear this, because some of those early-stage decisions are disruptive, and it's really important to have that buy-in early. Hopefully this provides some information to help them make those decisions later on.


[00:17:26.160] - Jason Pufahl

So thanks, everybody. Thanks, Matt. Thanks, Steve.


[00:17:28.570] - Steve Maresca

Thank you.


[00:17:31.090] - Speaker 1

Stay vigilant. Stay resilient. This has been CyberSound.