Webinar: Protecting against multi-vector cyber attacks with MDR and Microsoft 365 Defender

Resource Type: Webinar

Making The Grade: Using MDR To Protect Schools From Cybersecurity Threats

From big universities to small school districts, educational institutions are facing increases in ransomware, phishing, DDoS, and data breach attacks. At the same time, budgets and resources are constrained. Learn how Managed Detection & Response (MDR) services protect institutions against cybersecurity threats 24/7/365 while optimizing IT resources.

Agenda:

  • MDR Overview: Why use Managed Detection and Response services for school districts & universities and how it works
  • Hear from Centennial School District on how they leverage MDR services
  • Quick Demo of the tools that enhance MDR services: Zero Trust Analytics Platform (ZTAP) and MOBILESOC app

Zero-day Targeting Exchange Servers: Steps to Evaluate Your Risk

Join Critical Start’s CISO, Travis Farral, and Microsoft Solutions Principal, Charlie Smith, as they walk through the zero-day targeting Microsoft Exchange Servers.

What will be covered:

  • What happened?
  • What did the attackers do?
  • Who are the attackers?
  • Who is impacted?
  • What should your organization do?

Need more information? Contact us at https://www.criticalstart.com/contact/

Exploiting Enterprise Passwords

No matter how much you think you’ve done to safeguard your data and systems against breaches, common vulnerabilities continue to wreak havoc on enterprises: 80% of hacking-related breaches involve compromised or weak credentials.

Attackers are finding quick access to enterprise domain admins by simply guessing a password and logging in as that domain admin.

Given these challenges, what can you do to shore up your passwords and protect your organization? Join CRITICALSTART’s TEAMARES security experts, Cory Mathews and Joffrin Alexander, as they present, “Exploiting Enterprise Passwords.”

From phishing pages to password spraying, you’ll learn how attackers gain access to passwords and what they do with those passwords once they’ve cracked them, as well as:

– Methods to defend against these attacks, focusing on strong passwords, password manager solutions, and, perhaps most importantly, multi-factor authentication.

– External attacks, from logging in to enterprise solutions such as OWA, VPNs, and file shares to maliciously changing passwords to lock out users’ access – and what you can do to prevent these attacks.

– Proactive defense strategies, including how to ensure you’re using strong passwords and how multi-factor authentication can prevent breaches (see the sketch after this list).

– Internal actions you can take, such as limiting excessive admin rights, and other preventative measures.
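Since the session highlights strong passwords and screening out compromised credentials, here is a minimal sketch of one widely used control (our own Python illustration, not material from the webinar): checking a candidate password against the Have I Been Pwned breached-password corpus via its k-anonymity range API, which never sends the full password over the network.

```python
import hashlib
import urllib.request

def is_password_breached(password: str) -> bool:
    """Check a password against the Have I Been Pwned k-anonymity API.

    Only the first five hex characters of the SHA-1 hash ever leave the
    machine, so the password itself is never transmitted.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-screen-sketch"},  # the API requires a UA
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<hash-suffix>:<count>"; a matching suffix means
    # the password has appeared in known breach corpora at least once.
    return any(line.split(":")[0] == suffix for line in body.splitlines())

if __name__ == "__main__":
    # "Password123" is a perennial password-spraying guess.
    print(is_password_breached("Password123"))
```

Rejecting any password that appears in breach corpora blunts exactly the password-spraying and credential-stuffing attacks the webinar describes.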

Impact of Zero-Day Exploits on Breaches

Organizations are losing the endpoint security battle against new or unknown zero-day attacks. A recent Ponemon study on endpoint security revealed that 68% of IT security experts say their company experienced one or more endpoint attacks that compromised data assets or IT infrastructure in 2019. Of those breaches, 80% were zero-day attacks – and the share of attacks that are zero-days is expected to rise to 42% next year. These exploits frequently go beyond endpoints: some of the most impactful vulnerabilities announced in 2020 involve externally facing network appliances such as firewalls, routers, VPN concentrators, and other devices.

While there is no way to prevent zero-days, there are steps you can take to reduce the impact and/or severity of security incidents whether or not zero-days are involved.   

To find out how you can shore up vulnerabilities, join CRITICALSTART’s TEAMARES experts Cory Mathews, Offensive Security Manager, and Rich Mirch, Senior Adversarial Engineer, for “Impact of Zero-Day Exploits on Breaches” on Aug. 26 at 11 am CT, the third webinar in our series of webcasts exploring the world of cybersecurity breaches.

Key takeaways from this session include:  

  • What a zero-day is – and what it is not  
  • Using a zero-day to breach the perimeter and pivoting into sensitive areas within organizations  
  • Challenges and methods in defending against zero-days, including patching, defense-in-depth, and mature network and endpoint monitoring solutions and teams
  • TEAMARES’ vulnerability research team’s success stories of discovering zero-days and developing full Proof-of-Concept exploits  
  • Examples of infamous zero-day and 1-day events, such as WannaCry using NSA tools released by the Shadow Brokers

Best Practices for Securing OT and SCADA Networks

In the connected world of industrial Internet of Things, prime OT targets include:

  • Supervisory control and data acquisition (SCADA) networks, a common framework of control systems used in industrial operations to provide services such as water, electricity, and natural gas to cities and communities
  • Distributed control systems (DCS) providing automation in large processing plants and manufacturing facilities
  • Building automation systems that connect HVAC, lighting, and other hardware and software systems

Join CRITICALSTART’s TEAMARES security experts Allyn Lynd (Blue team) and Chase Dardaman (Red team) on Wednesday, October 21 at 11 am CT for “Reduce Your Risk: Best Practices for Securing Operational Technology and SCADA Networks.”

In this live, one-hour session, you will learn best practices to:

  • Deploy and maintain security for your OT / SCADA devices (spoiler alert: number one is OT/IT segmentation).
  • Design new OT / SCADA systems that integrate security from the beginning rather than bolting it on after deployment.
  • Ensure your incident response plan has what it takes to limit damages from breaches.
  • Build your business case to secure approval and funding for new security measures.

You will also discover where the biggest risks are, the motivations of attackers, and where to find more best practices and resources.

Not enough? How about actual use cases of OT / SCADA attacks, which Allyn will share from his experience working as an FBI agent.

Lessons Learned From Billion Dollar Breaches

What can we learn from some of the most expensive data breaches in history? 

CRITICALSTART’s TEAMARES’ security expert, Allyn Lynd, recently dove into this topic as he looked back on some of the most infamous breaches during “Lessons Learned from Billion Dollar Breaches,” the second in our five-part series aimed at understanding what causes breaches and what it takes to prevent them. 

Key takeaways: 

  • Organizations continue to make the same types of mistakes in 2020 that other organizations were making over three decades ago. From one of the earliest breaches in 1984, when over 90 million Americans’ data and credit histories were displayed on a public bulletin board, to recent breaches costing billions of dollars, organizations continue to struggle to protect their data.  
  • Data breaches are getting bigger and more expensive. As more data has come online, threat actors realize that data is easier to access and of high value. The average cost of a breach in the U.S. is $8 million, with an average of 32,000 records lost.  
  • Hacking remains the most common cause of breaches, with errors following closely behind – and breaches caused by hackers cost roughly 25% more than those caused by error. 
  • Regulations, fines, and costs to reputation are increasing. Europe’s adoption of GDPR, varying laws across all 50 states in the U.S., and breach impacts on corporate reputations and stock prices all mean organizations need more help than ever before in keeping security practices ahead of current regulations – or risk paying a steep price.  

As threats and costs continue to rise, how do you protect your organization against a breach?  

  • Conduct a risk assessment to get a thorough threat analysis to determine where the most impactful avenues of attack might be and test for specific vulnerabilities in those priority areas.  
  • Perform a perimeter penetration assessment. Test specific threat scenarios and threat actors that can impact your organization to determine how far a malicious actor could go; restricting lateral movement is critical to your cybersecurity strategy.  
  • Develop a remediation roadmap to outline the top objectives of your security assessment. Your plan should strengthen your security posture and include clearly identified steps to achieve specific objectives in key areas. These areas may include general security controls and policy review, network security controls, Windows platform assessments, privileged account access, vulnerability management processes, mobile device management, investigation, blocking, and response capabilities, and user awareness training. 
  • Assess your security tool inventory to identify redundant or unused products, evaluate security architecture to understand proper product placement in the organization and identify pain points with current security products. In addition, conduct a cost analysis of your security product inventory to ensure you are getting what you paid for. 
  • Have an incident response plan, regularly practice it, and update it as needed. 

Additionally, consider two-factor authentication, limiting administrative privileges, and segmenting networks – all standard security controls that can be implemented at minimal cost and will pay for themselves many times over if a breach occurs. They limit the potential for a breach and reduce its severity and impact when one does occur. 

And finally, we invite you to join us Aug. 26 at 11 am CT for our next webinar, “Impact of Zero-Day Exploits on Breaches,” the third in our series, “Once More Unto the Breach.” You’ll learn what a zero-day is, challenges and methods in defending against zero-days – including patching, defense-in-depth, and mature network and endpoint monitoring solutions – and more. 

 Watch the full webinar on-demand now!

Full Transcript:

So, the purpose of today’s presentation, and what I intend to cover with you all, is basically: a little bit about what a breach actually is – I know you all probably know, but I’m going to define it for a variety of reasons – what to do once a breach occurs, how to recover from a breach in the long term, what mistakes to avoid if and when a breach does occur, and, frankly, some of the history of breaches and why we’re concerned about them.

So, my name is Allyn Lynd. I’m a former FBI agent; I’ve been out in the private sector for about two years now. I’ve been doing forensics and incident response basically since 1998. I have worked everything from intrusions into the White House to 9/11, to hackers, counterintelligence, cyber terrorists, copyright, trademark, extortion – whatever you want to call cybercrime, I’ve worked it during that time. So, when I get to the breaches, some of those are going to be ones I actually worked on myself, and there are some insights from where I was and what I was doing with the FBI. Some of these are going to be nameless just because they’re not necessarily public, but the lessons are still valid even though the names of the companies are going to be redacted.

Currently, I’m with CRITICALSTART where I’m one of the two senior managers for the incident response team.

There are three definitions up here on data breach, and honestly they’re there more for regulatory, compliance, and even legal issues, because a breach is defined differently from state to state and country to country. These are all useful definitions to know – I gave you the GDPR, the EU, and the DOJ definitions – but what a breach really boils down to is much simpler than any of them.

A data breach is a confirmed incident – and confirmed is important there – in which sensitive, confidential, or otherwise protected data has been accessed and/or disclosed in an unauthorized fashion. You can see aspects of that reflected in all three of those definitions, but using a generic definition makes things much easier, because you avoid the differences in the regulatory and legal frameworks wherever you, your servers, or the clients you’re helping with data breaches may be.

So, we’re calling this Billion Dollar Breaches – how are we getting to those numbers? How are we assessing the cost of those breaches? There are a lot of things that go into making up the cost of a breach. The first thing that’s important to understand is that it’s not just the fee or fine prescribed by the federal government, or the fine GDPR assesses on you.

The parts that go into making up the cost of your data breach are numerous. It’s the actual investigation – finding out what happened, doing a root cause analysis. It’s things like whether, when you’re responding afterwards, you’re going to set up a call center so people can call in and find out what’s going on – or whatever you’re going to do, whether it’s social media or however you’re going to relay information and next steps.

It’s legal services for defense and compliance, it’s conducting that outreach as I said, it’s identity protection services for individuals whose records may have been lost, it’s preparing notice documentation, it’s auditing and consulting services, it may be a rise in your insurance rates, and it’s going to be a large factor of lost business and lost recurring revenue – I’ll get to the exact numbers later, but that’s going to be the single largest cost.

And then, there are tertiary costs, which don’t necessarily apply to the company that was breached but are associated with the total cost of the breach. For example, in the first case I’m going to talk about, way back in 2000, it was about $2 a card for the card issuer – Visa, Mastercard, whoever it was – to reissue a card. So, if 4.5 million records are taken, that’s another $9 million just in tertiary costs, not borne by the company itself. Again, a lot of this can be covered by insurance, but those tertiary costs – victims who actually lost money, et cetera – add up.

So, what are the most common causes of a data breach? This has pretty much remained the same for years. Occasionally they’ll switch positions in terms of which is most significant in a given year, but hacking is the single most egregious cause – and when I say hacking, that includes credential theft and the use of those credentials. Some people may categorize it differently, but if you put it in that bucket, it is by far the single most prominent cause of breaches. Then come errors – accidental publication, misconfiguration, or failure to patch; social engineering; malware, which some people would lump in with hacking, but a lot of it is drive-by malware, so I don’t want to put it in the same category; inside jobs or insider threats; and lost or stolen computers or media.

The last one – when you go back to the definition I used, where it’s accidental disclosure, it’s not necessarily a hack that causes it. If you lose a laptop – Hotels.com lost a laptop with all their sensitive data on it – that’s a breach whether or not somebody got into your network. If they can get to that data and they can disclose that data, it’s a breach.

As we go forward on this, what’s the history of the data breach? Really, 2005 is when you started seeing this become prominent in the news. But if you think about it, from that same definition I’m using – the confirmed disclosure of confidential information – data breaches have been around for decades. Digital breaches have made it easier, but at root this is espionage. Go back all the way to the Romans or the Greeks, or even the Rosenbergs providing the secrets of the atomic bomb to the Russians – it’s a data breach.

You will usually see 2005 as the marker for data breaches becoming large because that’s honestly when the Privacy Rights Clearinghouse began its chronology of them.

If you went to a Wikipedia page, you wouldn’t see anything much earlier than 2005, but the one that I would say started it for our industry, for digital breaches, is really the 1984 accidental disclosure of Experian records to a bulletin board, where the personal data and credit histories of 90 million Americans were put up on a bulletin board. And for those of you who have not been around long enough to know what a bulletin board is, I apologize, but it’s an older form of internet traffic.

In fact, the only reason Experian is Experian is that at the time they were a different company, and in order to regain any kind of semblance of credibility, they changed their name and reorganized so that people wouldn’t remember who they were.

But what’s happened since 2005 is that more data has come online, and threat actors have realized that it’s easier to get, it’s all in one place, it’s valuable, and there’s very low risk of getting caught – so the digital data breach became more widespread.

By 2010, everybody was seeing one or two in the news a year, or three, or four, and now it’s almost a monthly occurrence that you’ll see something about a data breach.

So, some basic statistics on data breaches. These come from a variety of sources – the Verizon data breach report, IBM’s data breach report, the White House Economic Council’s data breach report – which varied between 2018 and 2020, so you might see slightly different numbers, but about 4% is the average loss of customers post-breach.

Now, there’s actually a larger initial drop – it can be as much as 12% – but long term you’re going to lose 4% of your annual recurring revenue from customers. 314 days: that’s the average life cycle of a breach caused by a hacker. That doesn’t mean it takes 314 days to confirm it once it’s found – the confirmation and containment that follow are much shorter – but from the time they get in until the time it’s been put to bed, that’s 314 days. That’s pretty close to a year. The average cost of a breach in an organization without security automation is generally 95% higher – so almost twice as much as in an organization that does have automated security.

$242 is the average cost per record lost. 30% is the share of companies who experienced a breach who will experience another one within two years – it’s actually gone down a little from what it used to be, but it’s still pretty high. The average cost of a breach – and this is the US number – is $8 million. The average number of records lost is 32,000 and change (and indeed, 32,000 records at $242 each comes to roughly $7.7 million, right in line with that average). And on average, again, breaches caused by hackers – the most prevalent kind – cost 25% more than ones caused by an error. So, the hacker coming in is both the most prevalent and the most costly.

So again, lost business is generally the largest single factor. The largest factor in containing a breach and making it cost less is how fast it’s noticed, identified, and contained. The largest factors adding to the cost of a breach are third-party breaches, compliance failures, and a non-methodical cloud migration. The largest factors in reducing the cost of a breach are having an IR plan in place, having an IR team – whether yours or outsourced – regularly test against that IR plan, the use of encryption, and employee training, followed very closely by two-factor authentication and other methods of securing the data.

The other thing to know is that the larger the breach experienced, the less likely another breach will occur, which when you think about it makes sense: if you have a large breach, you are going to put a lot of money into making sure it doesn’t happen again.

And I think we have a poll at this point and there it is. So, I’m just going to tell you what the poll results were: Do you have a data breach plan, yes or no? One out of four of you responded with yes, three out of four responded no.

So, on the single largest factor in containing a breach, 75% of you said that you don’t have a plan in place. Just think about that as you’re leaving today, and about what you can do to reduce the chances of a breach and reduce the cost of those breaches.

So, I’m going to go through a couple of cases. Some of these are ones I worked, like I said, and I can’t use the names, and you’ll see at the end these span 20 years – so while you’ll see a lot of these from 2005 going forward, you’ve probably never heard of this one.

In 2000, basically what happened was a retailer sent all of their purchase data to a third party so that the third party could do analytics on it – for example: which of our stores sell more Nikes, which sell more Adidas, what should we stock the different stores with, what do we want to bring in seasonally, how do we maximize our shelf space, et cetera. It all made sense to do, and the analytics were a good idea.

What went wrong? Well, the retailer did not get hacked – the third party got hacked. And the first thing that was wrong is there was no, let’s call it redaction, of the data that went to the third party. Was there really any reason for the third party to have the credit cards and the billing information of the purchases, or wouldn’t it have been sufficient just to have the purchase data itself?

I would argue vociferously that just having the types of purchases and when the purchases occurred, and leaving the credit card data completely out, would have been a smarter move. There was also no agreement in place with the third party on how to protect the data. So, you basically have a whole bunch of data – 4.5 million records with full credit cards, full three-digit verification codes on the back, et cetera – all sitting at the third party, unencrypted, just exposed for anybody who was going to come in and get it.

Well, what went right? When the hack did occur, the third party was able to identify it in under 24 hours because they had good security measures and good security tooling in place. They immediately went to the data owner, the retailer, and told them what was happening. They did have a data breach plan in place – and this was before, by the way, there was mandatory reporting – and they decided the right thing to do in this case was to go to law enforcement. They came to us, they gave us the list of all 4.5 million credit cards, and we basically took that down to the card issuers – the Discovers, the Amexes, the Visas – provided them the list of cards, and within a few short hours, all those cards were discontinued.

So, what happened as a result of that? Again, 4.5 million credit cards – the total value on those cards was somewhere around $5 billion. If they had been used, that would have been the loss. Because of the quick reaction, only three cards were used; a total loss of less than $50 in false charges was replaced, and the retailer was able to implement new security policies for dealing with third-party vendors, which actually went really well. They have been breached again in the 20 years since, but what they learned from this one minimized the losses the other two times they were breached as well. So, each time has been educational for them, and they’ve been able to update and advance along with the times.

The next case is one that probably everybody’s heard of, and it’s actually near and dear to my heart: the 2014 Office of Personnel Management (OPM) loss of millions of personnel records such as the SF-86. For those who don’t know, OPM is basically where they store all the background checks on all federal employees. So it’s not just your name and your Social Security number – it’s every bank account you’ve ever had, every friend you’ve ever had and their contact information, every relative you’ve ever had and their contact information, your polygraph results, everywhere you’ve ever lived – anything you could possibly think of knowing about somebody in a background check, they’ve got.

So, this one is honestly just a mess. Prior to these records going online, I can tell you that I myself and a number of the other cybersecurity investigators at the FBI were livid about it. It had been done on paper before that, and we were like, “Just keep sticking to paper. Why does this need to be online?” But really, there were two vendors doing background checks because OPM was getting so backed up – one of them was USIS (U.S. Investigations Services), the other was KeyPoint – and they didn’t want to have to go into various offices. If they were doing a background check for somebody in Dallas, they didn’t want to have to go into the FBI office in Dallas, be given a packet, and then go out and work from hard copy. They just wanted to be able to access it from their computers wherever they might be as investigators and do the background check from there. So, the business need is what drove it, and they didn’t do a good job of implementing security when putting it online.

So, some of the things that went wrong are, again, things we’ve seen before: outdated systems – none of these systems had the ability to encrypt – and they did not notice the hack for a significant amount of time. For a timeline – and I’m going to use the public timeline, I’m not saying it’s the real timeline – in late 2013, what has now been identified as Chinese hackers got into OPM and took out basically the network diagram, but not any personnel records.

They didn’t notice that until 2014. The attackers also got into KeyPoint and USIS and hacked them as well. Nobody noticed that; in fact, OPM went and had an audit done on USIS and KeyPoint which cleared them as secure even though they were in fact already hacked. In the middle of 2014, OPM did notice that some of those hacks had occurred and that people had taken the blueprints for the system – the network diagrams, et cetera – and they actually came up with a plan to try to trick and honeypot the guys who were doing the hack. That didn’t work.

So, that was pretty bad – they were allowing the people in the network, allowing them access to real data to try and catch them, and not catching them. It went on for almost another year, until late 2014 or early 2015, when an alert employee at OPM noticed that there was web traffic going to sites that weren’t really owned by OPM – so instead of going to opm.gov, traffic was going to sites like opmsecurity.org. And they went, “This is bad. We need to check it out.” Then they brought DHS and US-CERT in, and those guys were able to find all of the intrusions. It did take about six months to a year to find them all, and during that time the number of records kept going up. I don’t know what the publicly disclosed figure is at this point, but it’s the vast majority of them.

So, what went right in this? The third-party breach did get detected, and OPM was notified about it. DHS and the FBI actually did a good job exploring the breach and setting up monitoring of the threat actors. There were actually a couple of keyloggers that were stopped from being installed.

So, some of that went well, but for the most part it was a failure, and the results were that credit monitoring services alone are already in excess of $133 million. The reason I say the total figure might go up to $1 billion is that there have been multiple additional laws passed since the initial breach.

After the first breach, the credit monitoring was only for so many years; now it runs to 2025, and there’s currently a law before Congress that says we’re going to provide credit monitoring services to all affected federal employees, current or retired, for the rest of their lives. If they do that, that would be the $1 billion.

As a result of this, they also eventually implemented two-factor authentication, which hadn’t been in place before – again, one of the big controls we talked about – and they did update the security tools in place at OPM. They have since enacted encryption, as they should. But this also brings up other aspects.

Not only is this a problem because of the loss of records and who got them, but for those of us in the intelligence community and in law enforcement, the first question that crossed a lot of our minds is: okay, if they’re in there taking records out and copying records, what’s to say they’re not putting records in? There’s no way at this point of going back and checking whether records were altered so that people who shouldn’t have gotten security clearances did. And knowing the Chinese, did they insert security clearances for people who are their agents? There’s no way to know that at this point.

So again, we’re back to: this was a huge, massive failure on the government’s part that was only salvaged by one individual happening to notice one oddity in the normal course of business.

The other one that probably everybody knows about and is familiar with is the 2017 Equifax loss of about 147 million US PII records. Now, Equifax is almost a category unto itself in size and scope, and in the failures that went along with it.

Initially, the company was hacked via a known Apache Struts vulnerability in one of their web applications. They were told about it and didn’t react; they were told about it a second time and still didn’t react. So again, you’re being told that you’re vulnerable and you’re not doing anything about it – that’s bad.

The other thing going on was that, as a result of not paying close enough attention, they had allowed a security certificate to expire. Since the encryption certificate on one of their security tools was no longer working, they weren’t actually inspecting the encrypted traffic, so they didn’t know where all this traffic was going. Basically, once the attackers gained a foothold, the tooling was unable to see them. They noticed the hack only after they updated that certificate and their tools started working again. The time between when this started and when they found it was almost two years. So for two years, the threat actor – which again was the Chinese – had access to 147 million records of US people: driver’s licenses, credit cards, Social Security numbers, names, dates of birth. All of that was open and available to them.

Then they compounded the problem, once they did find it, by making a number of mistakes – and again, this comes back to how we communicate to victims who have had their records taken. In those communications, there was confusion about which website to go to to check what was going on. They didn’t use their normal site; they set up another website. And in fact, bad guys – not necessarily the same bad guys, but threat actors – set up a number of fake websites where you could supposedly check whether you’d been breached, and at one point the Equifax Twitter feed actually told people to go to one of those fake websites instead of the real one. So, a horrible job of communicating, a horrible job of trying to contain this, and it kept trickling out that there were more records, and more records, and more records – so it built an incredible loss of trust in the corporation.

Again, eventually they did notice the suspicious activity, they took the affected application offline, and they later hired external cybersecurity firms to conduct a forensic investigation – they’re the ones who found all the problems.

The result, just in fines and restitution, is $700 million of loss. That doesn’t count any lost business, reputational damage, third-party damage, insurance, or anything else – just fines, $700 million. Credit reports moved from being freezable for a minor cost to being freezable at no cost as a result of this, which is a good change for the consumer, but it means Equifax lost a source of revenue. And new breach reporting requirements were enacted.

So, here you’ve got three very different organizations coming at this from very different standpoints, but you see that there’s a lot in common between them. The overall lessons: organizations are still making the same mistakes in 2020 that they were making in 1984 – we’ve had over three decades to learn better, and we haven’t. Data breaches are getting bigger and more expensive with each one. A slow response to a data breach increases cost; reducing the time it takes to identify and contain a breach limits the cost and the impact, and also looks better to the customers you want to keep.

Regulations and fines are both increasing and are increasingly becoming a patchwork. In Europe you have GDPR, but when you look across the United States, there are 50 states and not every state has a data breach law, and there’s no federal data breach law. If you look at Singapore, you have different reporting requirements for people from Singapore than anywhere else. So, right now it’s a large traffic jam. You almost need a legal or GRC team on hand – or at least the ability to hire one out – to know who you have to notify and when, so you’re meeting all the requirements.

Companies that aren’t willing to pay for the expertise to secure their data end up paying more in the long run. Data breach impact on stock prices continues to rise – and it’s kind of strange: initially, stocks will actually go up, and then they’ll plummet. Data breach impact on customer retention is getting larger too. It’s about 4% right now, up from three percent and change a few years ago.

What can we do to limit this? What can we do to prevent these kinds of breaches? It’s pretty straightforward: have a response plan and practice it. And I’m not even saying whether it should be internal or external – hire somebody else, keep it in-house, outsource it, whatever – just have that response plan and practice it.

How you respond and how you present to the public about what happened is crucial to keeping customers and keeping the business alive. Disclose fully and quickly; don’t lie, don’t minimize. Now, that doesn’t mean making things up if you don’t know anything yet. Lots of times it’s perfectly acceptable to say: we’ve only been aware of this for the last four hours, or twenty-four hours; we’re looking into it; we don’t know yet; we will continue to communicate and provide information. That’s great, that’s fine.

Maersk had an issue with ransomware – not a breach, but ransomware. Their transparency was a model for customer confidence. They lost, I think, only about half a percent of their customers over it. So that tells you that’s the method to go with: be open, be honest, and be quick – and don’t make mistakes like telling people to go to the wrong website.

Use encryption, automate security wherever possible, have an IR team, have internal employee training, have cyber insurance, and get the board involved. If there’s no executive buy-in, it’s not going to work. It can’t just be your security team doing this – your employees have to believe in it, and they’re not going to believe in it unless executive buy-in occurs.

And finally, there’s two-factor authentication, limiting administrative privileges, and segmenting networks – all standard security controls that can be done at minimal cost and will pay for themselves when and if a breach occurs. They all limit the chance of a breach, they all limit its severity, and therefore they limit its impact.

Well, I will leave it open for questions for a minute or two here so people have a chance to write them, but I do want to go ahead and let everybody know that our next scheduled webinar is going to be Impact of Zero-Day Exploits on Breaches. That’s going to be given by the security manager for our offensive team, Cory Mathews, on August 26th at 11:00 AM. It’s well worth tuning in to hear Cory – he’s a very knowledgeable and well-spoken presenter. If you want to hear from the other side of the house, the attack side as opposed to the defense side, it’s going to be a very good presentation.

So, we do have one question from the audience: Do you see more sophisticated attacks by financially motivated actors or nation-states?

Right now – and it does go back and forth – the number of attacks that we’re seeing against commercial entities, and I’m making that distinction, is higher from financially motivated actors: organized crime, for example Russian organized crime, even if it’s not US organized crime. In the defense industry and intellectual property, it’s sometimes actually impossible to distinguish between a nation-state and a financially motivated actor, especially with countries like China and Russia, but on that side you usually end up seeing the nation-state more than the financially motivated actor.

Okay, another question just popped up: In your opinion, what’s the first and most important step I should take to protect my organization against a breach?

Honestly, the first step is to do your own assessment to see what you’ve got that’s actually worth protecting. If you have the formula for Coke, obviously you’re going to protect that, and then you’re going to tailor your response to that piece of intellectual property.

If you’ve got credit card data, PII, things like that, you’re going to have to identify it. So you have to do a business assessment to see what it is you want to protect, and then do a risk analysis against that to see how likely it is that you’ll be attacked. And then it’s honestly a business decision at that point: how much are we willing to spend in order to prevent or mitigate those risks?

So: identifying what you’ve got that you want to protect, identifying the threat to it, identifying your security posture – and then, are you going to do this in-house or outsource it? Do you have any in-house ability to handle those security controls? Do you want to bring somebody else in to do them? Who’s going to do your monitoring? Because monitoring isn’t once-and-done; monitoring is forever and always.

And then, what you’re going to do in terms of reacting and developing your plans and who in your company needs to be part of that.

What does it take to change the mindset of a company or individual to take security measures before a breach versus after? In general, they don’t seem to change their attitude toward security regardless of how many people tell them and how much loss they suffer. It’s the same game all over again.

So, I will tell you where I’ve had the most success doing that – and I’ve done lots of presentations to various organizations in banking, healthcare, and telecommunications over the 20 years I was in the FBI. What it takes to change them is to get them to understand it in terms of business. You can’t just present it as “here’s a risk to the company”; it has to be presented as hard financial numbers.

So: if we have a breach, this is how much we estimate it’s going to cost, this is our chance of having a breach, so this is what our annualized cost will be, and here’s how much it’s going to cost to protect against that. When you present it in that kind of factual way, you generally get a better response from executive management. I won’t say it’s always there. I gave a talk to a large number of banks – and I’ll just say the names, since it’s not top-secret; this was an open forum.

We were talking about two-factor authentication, and I was saying how it would make a huge difference in the number of business email compromises and fake ACHs, et cetera, that occurred. And not Jamie Dimon, but a high-level executive at JPMorgan Chase was on the same panel I was, and he basically said, “We’re never going to do that. We’re never going to do two-factor authentication because, while that may work for a consumer, most of our transactions are businesses, and by adding even the fraction of a second that authentication costs, we’re going to lose business to our contemporaries, the other banks. So we’re not going to do that, and right now insurance is paying for it.”

But what I think is going to change that in the future is that, as insurance becomes more and more expensive, and as insurers ask whether you’re taking reasonable steps before they pay out, attitudes will shift – but not until some of that stuff is litigated about who’s actually responsible.

So for example – and I’m going to use another real-world example – HUD, the Department of Housing and Urban Development, backs a lot of loans to developers for people who can’t afford housing, Section 8 housing, et cetera. There was a real estate company here in Dallas that had a business email compromise, and close to $3 million in funds were redirected to a Chinese bank instead of their normal bank.

To be fair, the company was breached because they let an employee’s child use the computer that was only supposed to be used for banking; the child was on Facebook and downloaded malware. But the bank didn’t do any kind of checking with this company, which had never sent money to China before and had never sent more than $30,000 or $40,000 for these loans – and all of a sudden it’s $700,000 ACHs to China, and the bank didn’t call them and ask, “Hey, what’s going on?”

And eventually, HUD said, “We want to be paid our money back. That’s our money.” And basically it turned into a fight over insurance – whose insurance was going to pay and who was going to be responsible for it.

And until fights like those get settled, it’s going to be a fight with corporate executives because right now it’s, “Oh, we have cyber insurance that covers it.”

So, hopefully that answered the question you were looking for. If there’s any follow-up to any of those answers, please type it in.

Then again, we do appreciate your time and attendance today, and your participation in the polls, and I would encourage anybody, once again, to sign up for Cory’s presentation next month. It’ll be from the other side – not from the side of defense; we’ll look at the breach from the attacker’s side.

One more question: How effective is open threat intelligence data for protection, blocking, and threat hunting if your company doesn’t want to pay for a service?

It’s good. Is it as good? No, but it’s definitely better than not doing it. A lot of the paid services are great at taking, synthesizing, and amalgamating all the disparate data out there, and that helps you so you’re not having to do that analysis on your own. But frankly, there are even open threat intelligence sources that you don’t have to pay for in the way you would traditionally think of paying.

So, there’s a Carnegie Mellon, FBI, and Department of Homeland Security joint public/private partnership centered in Pittsburgh, and they will actually allow you to join as a company and get their intel feed. The way you pay for it, basically, is by also feeding back to them one of two things: either you give them your threat intel as you’re getting it, so they can help aggregate and process it, or you put somebody in the threat center there – which is what they prefer – and by putting a body in there and helping out, you get access to all of the information they’re getting.

So, there are ways like that to get something a bit better than open-source intelligence – but open-source intelligence is definitely valuable and should not be ignored.

All right, well, I do appreciate everybody’s time. Thank you very much for attending and look forward to seeing you guys all on the next webinar in the series. Thank you very much.

Uncovering Your Security Blind Spots

How big is your security risk and how do you identify and contain those risks? You may be surprised to learn that there are looming threats you can’t see, and attackers may be enjoying a long dwell time within your system – before you even detect an issue.

Quentin Rhoads-Herrera, CRITICALSTART’s Director of Professional Services, explores how big your security risk is throughout your organization, and how to uncover those potential blind spots, in the first part of our new webinar series, “Once More unto the Breach.”

Visit our webinar page here to learn more about the series and save your spot for the next webinar, “Lessons Learned from Billion Dollar Breaches.”

Video Transcript:

Welcome, everyone. Thank you for joining me.

My name is Quentin Rhoads-Herrera, I’m the Director of Professional Services. I’ve been at CRITICALSTART for roughly two and a half years now. I run the team known as TEAMARES. It’s our offensive and defensive security research and product teams, so we conduct everything from red teaming, penetration testing, offensive security research, as well as forensics, incident response, and malware reverse engineering type work.

I’ve been doing professional pentesting for roughly seven years now. I’ve done it at companies as large as the Fortune 30, and I’ve done it for government organizations as part of contracts. So that’s a little bit about me.

Risk Management

Today we’re talking about how to reduce time-to-breach and how you can effectively perform risk mitigation and risk management while increasing efficiency. Part of that, as we begin, is really talking about asset management, process and procedure development, and risk register creation. These are fundamentals that we often see lacking in organizations when we’re doing our incident response tabletops and our penetration testing – partly because asset management was an afterthought, or processes and procedures are being developed late in the company’s growth, or the risk register hasn’t even been started because the company doesn’t understand its risk tolerance, what its threat actors may look like, and what is concerning.

IT Asset Management: Software Governance & Compliance

Starting off, one of the most important things that I see quite often when we discuss risk management is asset management. This is a key fundamental of IT infrastructure and software risk management. If we don’t understand, as a company, what is out there in our infrastructure and what software we’re leveraging, then we’re obviously not going to have insight into how we’re patching, how we’re updating, or how we’re sending these applications through our own risk management fundamentals – so we can see whether the company or vendor is following good software development lifecycle practices. Is the Active Directory team patching? Are they cleaning up their own Active Directory groups and policies, the GPOs, etc.?

This is a huge fundamental piece, and it is often found lacking in a lot of pentests that we do, because we find a lot of systems that are either Windows XP or Windows 7 and still vulnerable to BlueKeep, and we find software that’s several years out of date. Even in the most mature companies we come across, we find this isn’t fully thought through. It depends on the teams reporting up to the risk management teams, and there’s no double-checking or auditing going back. This is kind of a double-edged sword: we trust that the teams standing up infrastructure and deploying software will inform us, the leaders within the organization, that they’re doing this type of work. Then what we commonly find is that when a pentest occurs, the pentesters find the orphan systems – or the web server and database running on somebody’s laptop that nobody knew about for several years and that is vulnerable to several attacks.

Asset Management

This can lead to several issues. In order to actually conduct asset management, we need processes and policies in place to define and enforce how we, as an organization, are going to manage our asset management list. How are we going to instruct users to inform us about new infrastructure being stood up? Are they going through a defined process, or is it the Wild West of IT? Obviously you need systems and organizational structure in place. If you have the Wild West, what ends up happening is you end up with that user running a very vulnerable application on their laptop that nobody ever knew about.

When you do this effectively, you can help avoid orphan systems – the systems nobody knows about, that have gone unpatched for years, and that commonly lead to domain admin because a service account is on them. It also helps with patch management. We commonly find that when we go onto an assessment, we’ll initially scan for just the low-hanging fruit – it’s a common thing most pentesters should do – and what we’ll find is at least one server somewhere within that organization that is vulnerable to EternalBlue or BlueKeep, or something maybe even older, because it’s a system that fell out of asset management.

But outside of just vulnerability management, this also helps enable wide-scale audits – being able to say, “Okay, let me check how effectively my patch management is working. Let me evaluate how effectively we’re putting applications through a software development lifecycle that includes security.” With an asset management system, we can do that. We can say, “All right, all of our infrastructure is included in this asset management list, and it’s all patched, all up to date” – reducing your risk.

Finally, it helps with normalizing and standardizing your infrastructure. If you have a set policy and procedure on how to stand up infrastructure, that then is fed into your asset management. You can make sure that your entire infrastructure uses a set of systems you’ve already approved and that have already gone through your risk assessment process. You’ve already verified that, if you’re using a RHEL (Red Hat) system, you’re using a version that’s been identified as safe and up to date. You can verify that all of the packages installed on that system have been identified as common to Linux or, if they’re custom, have gone through a security development lifecycle. This just helps standardize your environment.

The more standard you can become, the better chance you have of reducing the risk of exploitation. If you standardize your infrastructure – and this includes software – you’re effectively raising the bar: a pentester is going to have to find either a zero-day or get very lucky with a weak password.

Processes and Procedures

Okay, moving on to the next topic. We talked about IT asset management, and one of the requirements for effective asset management is having appropriate policies and procedures in place to enforce the submission of systems and applications to your asset management list. This is a cornerstone for all security teams within any company. You need policies and procedures around password complexity, user management, infrastructure implementation, and even application development and deployment. But what often gets confused is the difference between a policy, a procedure, and a standard. Not to read verbatim, but a policy is effectively a guideline – how I, as the company, say something is to be done. Taking password complexity, a policy would be: all users must have a password that is 12 characters in length, contains alphanumeric characters, and is rotated every 90 days. That’s a policy.

Now, a procedure is how we’re going to accomplish this. Using the password example, we can accomplish it by enforcing rules in Active Directory for password age and password complexity. We can incorporate third-party tools that include dictionaries, or we can get more finite and say, “All local admin passwords are going to be random, and we’re going to use Microsoft LAPS for that” – effectively removing any reliance on a user-chosen local admin password. We’re not relying on their password complexity; we’re relying on the technology Microsoft has provided.

A standard is a detailed description of what must be done to comply – what are we going to do in order to comply with that policy? The standards here would be that we’re going to enforce LAPS and we’re going to enforce password expirations.
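To make the distinction concrete, here is a minimal sketch (our own Python illustration of the example policy above, not something from the webinar) of the kind of compliance check a standard might codify. In practice you would enforce this through Active Directory and LAPS rather than ad hoc code; this just shows the policy’s rules as executable logic.

```python
import re
from datetime import datetime, timedelta

# Rotation window taken from the example policy above (90 days).
MAX_PASSWORD_AGE = timedelta(days=90)

def meets_policy(password: str, last_changed: datetime) -> bool:
    """Return True if a password satisfies the example policy:
    at least 12 characters, a mix of letters and digits, and
    rotated within the last 90 days."""
    long_enough = len(password) >= 12
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"[0-9]", password) is not None
    fresh = datetime.now() - last_changed <= MAX_PASSWORD_AGE
    return long_enough and has_letter and has_digit and fresh

if __name__ == "__main__":
    now = datetime.now()
    print(meets_policy("blue coffee 42", now))    # True: long, letters + digits
    print(meets_policy("short1", now))            # False: too short
    print(meets_policy("longpassword123", now - timedelta(days=120)))  # False: stale
```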

These are, like I said, really critical to a successful security team. Without this set of guidelines and restrictions, your users who are not security-focused aren’t going to understand how to appropriately apply security fundamentals in their everyday work, and that is going to lead to some very interesting situations.

One brief example: a company we did an assessment for was fairly large but fairly new to the security space. Probably within the first day, during a physical assessment, we walked in, went to a conference room, plugged in, and had an enterprise admin password within an hour. This was because the company was still developing its security practice.

After we reported that to them, the following year they started implementing their own SOC and their own policies, procedures, and standards. Password complexity was definitely raised through a lot of restrictions they put in place, including the use of LAPS and separating admin accounts from local user accounts, making it pretty hard for a hacker to gain access.

Risk Register

Outside of that, when we’re talking about risk management and risk policy, we also need to discuss the risk register. A lot of people are really confused about what a risk register really is, and they tend to overthink it. It’s fairly simple: it’s created to help your department get its strategic IT risk management program off the ground.

Its sole purpose in life is to highlight your risk statement, the risk causes, and the risk impacts to your company, as well as the likelihood and impact of each risk. It can effectively be as simple as the Excel sheet in this screenshot, which can be sorted based on your risk catalog and your IT domain. It could be organized by domain – say, Active Directory – or you can get even more finite and say, “This is my risk for the business.” This tool is intended to help you think through your risk.

If you don’t understand, as an organization, what your risks are, what effectively happens is you become one of two things: ultra-paranoid, locking everything down and going full force, or focused more on productivity and less on security, leaving yourself open to vulnerabilities. Either way, you’re not going to be able to successfully explain to your upper leadership why security is important and what the risks are of not funding it, not staffing it, or not following the basic guidelines security wants to push through.

This is a very easy document to put together; it shouldn’t take incredibly long. It can take long depending on how many risks you’ve identified for your organization, but it can be very simple for certain domains within IT. You can make a risk catalog based on application development, or on infrastructure management – you can make it quite particular to your specific area, so you can at least identify what risks exist, the likelihood of each risk occurring, and its impact. That way you can communicate effectively throughout your organization.
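As an illustration of how simple the register can be, here is a minimal Python sketch with made-up example rows, standing in for the spreadsheet described above (the fields and the likelihood-times-impact scoring are a common convention, not a mandated format):

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    domain: str        # e.g. "Active Directory", "Application Development"
    statement: str     # the risk statement: what could happen
    cause: str         # why it could happen
    impact: str        # the consequence to the business
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    severity: int      # 1 (negligible) .. 5 (critical)

    @property
    def score(self) -> int:
        # Plain likelihood-times-impact scoring, the usual heat-map math.
        return self.likelihood * self.severity

register = [
    RiskEntry("Active Directory", "Stale GPOs grant excess privilege",
              "No periodic GPO review", "Privilege escalation", 4, 4),
    RiskEntry("Infrastructure", "Orphan server misses patches",
              "Host absent from asset inventory", "Remote compromise", 3, 5),
]

# Sort so the highest-scoring risks surface first, as the spreadsheet would.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.score:>2}  {entry.domain}: {entry.statement}")
```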

Operationalize Risk Mitigation

But how do we operationalize risk mitigation? How do we take everything we've learned from asset management, from policies and procedures, and from the risk register, and ensure we're complying with it all the way through while reducing risk as much as possible? A lot of people will say, "Just put it to a risk committee and have them vote on it and move forward," but all that really means is that a group of executives or upper leadership has evaluated the risk and said, "Yes, the risk exists, but I'm okay accepting it." Instead, we should be finding ways to reduce that risk by leveraging known industry standards for cybersecurity.

It can be as simple as conducting vulnerability scans throughout your infrastructure. Scanning by CIDR ranges instead of exact IPs not only helps you identify anything in your infrastructure you may have missed, catching IPs that aren't in your asset management and reducing the chance of orphan systems, it also helps you identify unpatched systems and gaps in patching. The same goes for threat intelligence: understanding who your threat actors are helps you reduce risk by identifying their common tactics, techniques, and procedures, and placing defensive technologies in front of those to reduce the possibility of them gaining access.
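Here is a minimal sketch of that CIDR-versus-inventory check: scan a subnet, then diff the hosts that answered against the asset management list to surface candidate orphan systems. The subnet and host lists are made-up examples.

    import ipaddress

    scope = ipaddress.ip_network("10.10.20.0/24")

    inventory = {"10.10.20.5", "10.10.20.17", "10.10.20.42"}   # asset management list
    discovered = {"10.10.20.5", "10.10.20.17", "10.10.20.42",
                  "10.10.20.99"}                                # hosts that answered the scan

    # Anything that answered inside the scanned CIDR but is missing from
    # the inventory is a candidate orphan system.
    orphans = {ip for ip in discovered
               if ipaddress.ip_address(ip) in scope and ip not in inventory}

    print("Possible orphan systems:", sorted(orphans))  # ['10.10.20.99']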

Vulnerability Management

Vulnerability management is a huge aspect of both offensive and defensive security solutions. Without vulnerability management in place, you are effectively trusting that each team is patching on its own and following your policies and procedures, and you have very little insight into what is actually occurring. To start with some very basic definitions: what is a vulnerability? It's a weakness or a flaw in an application or system that can be exploited to gain access the user wouldn't originally have had. It can be a chained vulnerability or a singular one.

Then what is an assessment? It's the process of identifying those vulnerabilities. How do you find a vulnerability? You do it by conducting a vulnerability assessment, a penetration test, or something along those lines that can identify those vulnerabilities and give you the opportunity to patch.

Vulnerability Lifecycle

The lifecycle of a vulnerability assessment really shouldn't be complex; it should be very basic. The more you add to it, the more you reduce its effectiveness. First is discovery, which is your enumeration. In terms of a vulnerability scan, this is where you put in the CIDRs of your internal network, scan it, and look at what is actually on your network.

Like I said, this can help you identify any assets that are not part of your asset management list, and it can also help you identify weaknesses in patching. Next you prioritize, and this is where the risk register comes in. This is where you decide, "To me, an external asset is higher risk than my internal random file share that nobody ever uses." It helps you prioritize what is most important to scan and what carries higher risk than something else.

Then you conduct the assessment. If it's a scan, you just hit scan; if it's an actual penetration test, you conduct your work. Then you report on it: you hand the results over to the individuals who own the infrastructure or devices that were scanned, stating, "Here is the problem, and here are the remediation efforts." This guides them through understanding how to fix it.

We, as security professionals, can't hand something over and say, "Well, there are issues here," without also following up and saying, "Here's how to fix them." And after the owner remediates, it's really on us to confirm that the remediation was put in place and that it worked. This is a very common problem we see.

Recently, TEAMARES found this with VMware Fusion, the Mac VMware application. One of our assessors found a privilege escalation vulnerability in it and submitted it to VMware. VMware issued a patch, but they didn't have anybody test that patch before sending it out, and another researcher outside our company found a workaround for it. That means they developed and issued a patch but never tested the remediation. Did it actually fix the problem, or did it just add complexity? Really, it just added complexity: as an attacker you now had to bypass a signature check, which could easily be done via a race condition. The researcher was able to exploit that and move forward.

As we release vulnerabilities to asset owners and say, "Hey, patch this," they come back and say, "I patched it." We need to own the remediation check. We need to verify that it actually was patched. Without that, there's no point in doing any of this.
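As a minimal sketch of owning that remediation check: after the asset owner reports "patched," re-test the original finding rather than taking their word for it. Here the finding is a vulnerable service banner; the host, port, and version string are illustrative assumptions.

    import socket

    def banner(host: str, port: int, timeout: float = 3.0) -> str:
        # Grab whatever the service announces on connect (SSH, SMTP, etc.).
        with socket.create_connection((host, port), timeout=timeout) as sock:
            return sock.recv(1024).decode(errors="replace").strip()

    def verify_remediated(host: str, port: int, vulnerable_banner: str) -> bool:
        try:
            current = banner(host, port)
        except OSError:
            # Service removed entirely also counts as remediated for this finding.
            return True
        return vulnerable_banner not in current

    # Original finding: an outdated OpenSSH build (example version only).
    print(verify_remediated("10.10.20.5", 22, "OpenSSH_7.2p2"))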

Vulnerability Scanning

One of the fundamental parts of vulnerability management, like I was saying, is vulnerability scanning: having some tool or system in place that can scan across your infrastructure looking for potential vulnerabilities, unpatched systems, and other issues that could impact your infrastructure or your organization and that can be patched. One thing to note: vulnerability scanners are not designed to find zero-days. I know there are breach and attack simulation tools, but those are entirely different. Those are leveraged to find flaws in your defensive technologies using the tactics, techniques, and procedures of red teams or threat actors; they're not designed to find zero-days or vulnerabilities within your systems.

A vulnerability scanner effectively looks for anything that has already been designed and implemented in that scanner, be it Nessus or something else, and checks for the existence of that issue somewhere in your organization. The purpose behind vulnerability scanning can range. Most of the time it's leveraged for compliance needs like PCI; sometimes it's leveraged for continuous checks on patching or asset management, to confirm that the organization is following the standards, policies, and procedures it put in place. It's also used for risk reduction, because running a vulnerability scanner monthly, quarterly, or on whatever cadence helps ensure that the low-hanging fruit, the known vulnerabilities on network-connected devices, is being checked constantly. That way you don't have what we call script kiddies exploiting some system with EternalBlue or something else that's built into Metasploit.
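Here is a minimal sketch of that low-hanging-fruit triage: pull the criticals out of a scanner's CSV export so already-weaponized issues get fixed first. The column names mirror a Nessus-style export but are assumptions here, as is the sample data.

    import csv
    from io import StringIO

    # Stand-in for a real scanner export file.
    export = StringIO("""Host,Name,Severity,CVE
    10.10.20.7,SMBv1 Remote Code Execution (MS17-010),Critical,CVE-2017-0144
    10.10.20.9,SSL Self-Signed Certificate,Medium,
    """)

    # Keep only the critical findings for immediate remediation tickets.
    criticals = [row for row in csv.DictReader(export)
                 if row["Severity"].strip() == "Critical"]

    for row in criticals:
        print(f'{row["Host"].strip()}: {row["Name"]} ({row["CVE"]})')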

You want to raise the bar on your attackers and your pentesters. If the company you hire is commonly finding exploits that are in Metasploit and successfully using them against you, then you need to go back and rethink how you're doing your vulnerability management. The bar is constantly being raised; your pentesters' talent should constantly be challenged and forced to improve, because that means you, as an organization, are also improving.

Penetration Testing

We talked a little bit about vulnerability management and vulnerability scanning, and I even covered breach and attack simulation tools and the many other automated ways of checking for deficiencies in defensive technologies, patching, asset management, and so on. But there's a huge piece of vulnerability management that's commonly left out as people try to automate as much as possible, which is fine. There's a human element to penetration testing that really is paramount to any successful vulnerability management program.

The biggest reason is that a pentesting team, or even an individual, leverages techniques to abuse vulnerabilities, flaws, and misconfigurations in systems that an automated scanner isn't naturally going to pick up, sometimes in very clever ways. For example, a person can identify basic misconfigurations that let them violate or bypass the segmentation rules an organization has put in place to keep people from accessing systems they have no business need to reach. SCADA systems are a common example: we are often tasked with trying to work our way from the guest or user network all the way into the SCADA infrastructure.

What we'll do is fly low and quiet, crawling through the infrastructure looking for even the most minute issue that lets us gain access to those SCADA systems. We've seen a lot of DLP tools, vulnerability scanners, and the like claim they can find exposed passwords and open file shares, and I'm sure some of them can, but they can't cover everything, and that's where the human element plays a pivotal role in the vulnerability management program.

A pentester should not be relying on scanners. They should rely on their own knowledge base and their own ability to crawl through an infrastructure and find those flaws. That's what really separates them from vulnerability scanners and other automated programs.

But there's also a compliance and regulatory need, right? A lot of compliance regimes, PCI among others, require penetration testing. It's also a pivotal part of the software development lifecycle (SDLC): being able to attack your applications before they go into production limits the risk to your organization. We quite often see companies develop applications and simply drop them into their internal production infrastructure or intranet, or publish them live to the internet, without any real security checks on the code they're releasing, which raises a lot of risk for the company. That also tells us they're not effectively doing risk management; they're not identifying that an external application is high risk and should pass through barriers or checkpoints to evaluate its security prior to release.

This is where a penetration test comes into play. A pentester can evaluate the application, either through code-review-assisted testing or black-box testing, and really assess what impact that application would have on the organization if it were breached. Pentesters are also a good way of measuring your risk as it stands now and reducing it long-term. A good pentest report should identify the flaws found in your infrastructure, define those as risks, and show you how to remediate them. Once you remediate those issues, your overall risk should go down; at least, that is the hope and goal of most penetration tests.

Incident Response Tabletop

We've talked a lot about offensive security in regard to vulnerability management and operationalizing your risk mitigation and management. One thing we haven't talked about yet is the defensive methods you can put in place to truly reduce your risk and evaluate where you stand from a risk management perspective. One of the easiest is an incident response tabletop. This really challenges your processes, your procedures, and your communication by simulating a breach, or some other security event that could plausibly occur against your infrastructure or organization, while following your current written policies and procedures.

This isn't something that has to be elaborate. It could be as simple as: "One of my external websites and its connected database has been breached. Start scenario." How are we going to take control of that database and application, evaluate what actually occurred, cut off the bleeding, and start notifying internal teams of the event, having them work through the business continuity plan and the incident response plan to recover from that breach? It's not done as a live engagement like a pentest or red team; as the title indicates, it's done in the conference room, on the phone, or over web chat, walking through the steps of your incident response plan.

This really helps you identify any weaknesses in your plan, allowing you to improve before a real-life scenario occurs, or before you move on to actually testing your plan in red team or purple team situations.

Incident Response Tests

The other option you have is an incident response test. This is more of a live-fire scenario: you have an internal or external red team go after your organization using all the capabilities that team has in order to breach you. You don't tell your blue team or defensive security personnel what's happening. This is a true test of a real-world incident and how your teams are going to respond.

A lot of companies don't do this because they're afraid it may make them, or their defenses, look bad. But this is an accurate test of what you're actually going to face, and by running the test while your defensive teams are blocking IPs, kicking people off the network, and force-changing passwords identified during the breach, you can see from the attacker's perspective how hard it is going to be to breach you. It raises the game for your red team, internal or external, as well as your blue team, and it lets you see how communication is working internally and externally, because communication is a huge factor in breaches. How are your teams following their processes and procedures? How is your technology holding up? Do you have a good EDR solution that's actually preventing red teamers, or at least alerting on them, when they dump memory or access Windows APIs known to be leveraged for malicious purposes?

This also lets you test your policies, standards, and procedures in a real-world scenario. Do you have a policy and procedure in place so that if an account is identified during a breach, its password is changed? How fast did that occur? This gives you really solid insight into what would happen if a breach did occur.
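As a minimal sketch of measuring that procedure, here is one way to compute the gap between an account being flagged as compromised and its password actually being reset. The event log format, account name, and timestamps are made-up examples.

    from datetime import datetime

    # Hypothetical event stream: (account, event kind, ISO timestamp).
    events = [
        ("svc-backup", "flagged",  "2020-06-01T09:14:00"),
        ("svc-backup", "pw_reset", "2020-06-01T09:52:00"),
    ]

    def minutes_to_reset(account: str) -> float:
        times = {kind: datetime.fromisoformat(stamp)
                 for user, kind, stamp in events if user == account}
        return (times["pw_reset"] - times["flagged"]).total_seconds() / 60

    print(f"svc-backup password changed after {minutes_to_reset('svc-backup'):.0f} minutes")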

Increasing Efficiency

There are a lot of tools, software packages, and paid training courses out there that promise to increase your efficiency. I'm a big believer in open source, and a big believer in free where possible; I don't think everything needs to be paid for. This isn't an exhaustive list, but it captures the point I want to make.

We need to leverage automation and technology to increase efficiency; it's a no-brainer. We need vulnerability scanners, we need technology in place that can centralize any potential incident response or breach activity, and we need the ability to crawl our IT infrastructure so scanners populate asset management for us, reducing the human aspect of it. However, it doesn't need to break the bank. On the right side, I have some examples of open source tools for the big fundamentals of risk and vulnerability management, such as asset management tools and vulnerability scanners. These may not be the best; OpenVAS is definitely not the best in the industry, but it's free and decent for very small organizations, and there are other free options that scale a bit further. For incident response management there's TheHive, a project I recently came across that looks pretty decent.
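As a minimal sketch of wiring open source tooling into your workflow, here is a call against TheHive's documented REST case endpoint to open a case when a scan flags something. The instance URL, API key, and field values are placeholder assumptions.

    import requests

    THEHIVE_URL = "http://thehive.internal:9000"   # hypothetical instance
    API_KEY = "REPLACE_ME"                         # per-user key from TheHive admin UI

    case = {
        "title": "Possible orphan host responding on 10.10.20.99",
        "description": "Host answered CIDR sweep but is absent from asset inventory.",
        "severity": 2,                             # 1 low, 2 medium, 3 high
        "tags": ["asset-management", "vuln-scan"],
    }

    resp = requests.post(
        f"{THEHIVE_URL}/api/case",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=case,
        timeout=10,
    )
    resp.raise_for_status()
    # TheHive returns the created case as JSON; field names follow its docs.
    print("Created case:", resp.json().get("caseId"))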

The point is, just because something is open source or free doesn't mean it won't help increase your automation and efficiency. It may take a bit more labor on your side to get it to the point where it's actually paying off, but if it reduces overall cost and increases efficiency, it's a no-brainer and a success in my eyes. That means you can spend your security budget on more important things, like personnel training where paid training is required.

If you have any questions, I'll answer them now. If not, thank you all for joining; I hope you learned something, and feel free to reach out to us if you have any questions.

3 Challenges Facing Cybersecurity Professionals Revealed

Join CRITICALSTART's industry-leading cybersecurity professionals Randy Watkins, Chief Technology Officer, and Jordan Mauriello, Vice President of Managed Services, as they discuss 3 Factors to Consider When Selecting a Managed Detection and Response Service Provider.
