How big is your security risk and how do you identify and contain those risks? You may be surprised to learn that there are looming threats you can’t see, and attackers may be enjoying a long dwell time within your system – before you even detect an issue.
Quentin Rhoads-Herrera, CRITICALSTART’s Director of Professional Services, explores how big your security risk is throughout your organization, and how to uncover those potential blind spots in the first part of our new webinar series “Once More unto the Breach”.
Welcome, everyone. Thank you for joining me.
My name is Quentin Rhoads-Herrera, I’m the Director of Professional Services. I’ve been at CRITICALSTART for roughly two and a half years now. I run the team known as TEAMARES. It’s our offensive and defensive security research and product teams, so we conduct everything from red teaming, penetration testing, offensive security research, as well as forensics, incident response, and malware reverse engineering type work.
I’ve been doing professional pentesting for roughly seven years now. I’ve done it at companies as large as Fortune 30 firms, and I’ve done it for government organizations as part of contracts. So that’s a little bit about me.
Today we’re talking about how to reduce time-to-breach, how to perform risk mitigation and risk management effectively, and how to increase efficiency while doing it. Part of that, when we begin, is really talking about asset management, process and procedure development, and risk register creation. These fundamentals are what we most often see lacking in organizations when we’re doing our Incident Response Tabletops and our penetration testing. Partly that’s because asset management was an afterthought, or processes and procedures were developed late in the company’s growth, or the risk register has not even been started because the company doesn’t understand its risk tolerance, what its threat actors may look like, and what is concerning.
IT Asset Management: Software Governance & Compliance
Starting off, one of the most important things I see quite often when we discuss risk management is asset management. This is a key fundamental of IT infrastructure and software risk management. If we don’t understand, as a company, what is out there in our infrastructure and what software we’re leveraging, then we’re obviously not going to have insight into how we’re patching, how we’re updating, or how we’re sending these applications through our own risk management fundamentals so we can see whether the company or vendor is following good software development lifecycle practices. Is the Active Directory team patching and cleaning up after their own Active Directory groups and policies, the GPOs, etc.?
This is a huge fundamental piece, and it often is found lacking in a lot of the pentests we do. We find a lot of systems that are either Windows XP or Windows 7 still vulnerable to BlueKeep, and software that’s several years outdated. Even in the most mature companies we come across, we find this isn’t fully thought through. It depends on individual teams reporting up to the risk management teams, with no double-checking or auditing going back. This is kind of a double-edged sword: we trust that the teams standing up infrastructure and deploying software are going to inform us, the leaders within the organization, that they’re doing this type of work. Then what we commonly find is that when a pentest occurs, the pentesters find the orphan systems, or the web server and database running on somebody’s laptop that nobody knew about for several years and that is vulnerable to several attacks.
This can lead to several issues. However, in order to actually conduct asset management, we need processes and policies in place to require and enforce how we, as an organization, are going to manage our asset management list. How are we going to instruct users to report new infrastructure being stood up? Are they going through a defined process, or is it the Wild West of IT? Obviously you need systems and organizational structure in place. If you have the Wild West, what ends up happening is you end up with that user running a very vulnerable application on their laptop that nobody ever knew about.
When you do this effectively, you can help avoid orphan systems. These are the systems that nobody knows about, that have been unpatched for years, and that commonly lead to domain admin because a service account is on them. It also helps with patch management. We commonly find that when we go into an assessment, we’ll initially scan for just the low-hanging fruit. It’s a common thing most pentesters should do, and what we’ll find is that there’s at least one server somewhere within that organization that is vulnerable to EternalBlue or BlueKeep, or something maybe even older, because it’s a system that fell out of asset management.
But outside of just vulnerability management, this also helps with enabling wide-scale audits. You can say, “Okay, let me check how effectively my patch management is working. Let me evaluate how effectively we’re putting applications through our software development lifecycle, which should include security.” With an asset management system, we can effectively do that. We can say, “All right, all of the infrastructure we have is included in this asset management list, and it’s all patched and up to date.” That reduces your risk.
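The audit idea above can be sketched as a simple inventory diff: compare what a network scan actually sees against what the asset management list claims exists. The inventory fields and IPs below are made up for illustration.

```python
# Sketch: flag "orphan" hosts that appear in a network scan but are
# missing from the asset inventory. Data here is illustrative only.

def find_orphans(inventory, scanned_hosts):
    """Return hosts seen on the network but absent from the inventory."""
    known = {entry["ip"] for entry in inventory}
    return sorted(ip for ip in scanned_hosts if ip not in known)

inventory = [
    {"ip": "10.0.0.5", "owner": "web-team", "os": "RHEL 8"},
    {"ip": "10.0.0.9", "owner": "db-team", "os": "Windows Server 2019"},
]
scanned = ["10.0.0.5", "10.0.0.9", "10.0.0.23"]  # 10.0.0.23 is unknown

print(find_orphans(inventory, scanned))  # ['10.0.0.23']
```

Anything the diff flags either gets added to the inventory with an owner, or gets investigated as a potential orphan system.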
Finally, it helps with normalizing and standardizing your infrastructure. If you have a set policy and procedure on how to stand up infrastructure, which then feeds into your asset management, you can make sure your entire infrastructure uses a set of systems you’ve already approved and that have already gone through your risk assessment process. You’ve already evaluated that, if you’re using a RHEL system, a Red Hat system, you’re using a version that’s been identified as safe and up to date. You can verify that all of the packages installed on that system are common to Linux, or, if they’re custom, that they’ve gone through a secure development lifecycle. This just helps standardize your environment.
The more standard you can become, the better chance you have of reducing any type of exploitation risk. If you standardize your infrastructure, and this includes software, you’re effectively raising the bar: a pentester is going to have to find either a zero-day or just get very lucky with a weak password.
Processes and Procedures
Okay, moving on to the next topic. We talked about IT asset management, and one of the requirements for effective asset management is having appropriate policies and procedures in place to enforce the submission of systems and applications to your asset management list. This is a cornerstone for all security teams within any company. You need policies and procedures around password complexity, user management, infrastructure implementation, and even application development and deployment. But what often gets confused is the difference between a policy, a procedure, and a standard. Not to read verbatim, but a policy is effectively a set of guidelines: how am I, as the company, saying something is to be done? Taking password complexity, a policy would be: all users must have a password that is 12 characters in length, contains alphanumeric characters, and is rotated every 90 days. That’s a policy.
Now, a procedure is how we’re going to accomplish this. Using the password example, we can accomplish it by enforcing rules in Active Directory for password age or password complexity. We can incorporate third-party tools that include dictionaries, or we can get even more granular and say, “All local admin passwords are going to be random and we’re going to use Microsoft LAPS for that.” That effectively removes the requirement on the user: for local admin, we’re not relying on their password complexity, we’re relying on the technology Microsoft has provided.
A standard is a detailed description of what must be done to comply, so, what are we going to do in order to comply with that policy? The standards here would be that we’re going to enforce LAPS and we’re going to enforce password expirations.
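To make the policy concrete, here is a minimal sketch of a check for the example policy stated above (12+ characters, alphanumeric, rotated within 90 days). The function name and parameters are illustrative; a real deployment would enforce this in Active Directory or LAPS rather than in application code.

```python
import re
from datetime import date, timedelta

# Illustrative check for the example policy: >= 12 chars, contains both
# letters and digits, and was rotated within the last 90 days.
MAX_AGE = timedelta(days=90)

def complies(password, last_changed, today=None):
    today = today or date.today()
    long_enough = len(password) >= 12
    alphanumeric = bool(re.search(r"[A-Za-z]", password)) and bool(re.search(r"\d", password))
    fresh = (today - last_changed) <= MAX_AGE
    return long_enough and alphanumeric and fresh

print(complies("Correct7HorseBatteryStaple", date(2024, 1, 1), today=date(2024, 2, 1)))  # True
print(complies("short1pw", date(2024, 1, 1), today=date(2024, 2, 1)))                    # False
```

The policy states the rule, the procedure says where it’s enforced (AD, LAPS, third-party tools), and the standard pins down the exact values a check like this encodes.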
These are, like I said, really critical to a successful security team. Without this set of guidelines and restrictions, your users who are not security-focused aren’t going to understand how to appropriately apply security fundamentals in their everyday work. That is going to lead to some very interesting situations.
One brief example is a company we did an assessment for. They were a fairly large company, but fairly new to the security space. On the first day, during a physical assessment, we walked in, went to a conference room, plugged in, and had an enterprise admin password within an hour. This was because the company itself was still developing its security practice.
After we reported that to them, the following year they started implementing their own SOC and their own policies, procedures, and standards. Password complexity was definitely raised thanks to the restrictions they put in place, including leveraging LAPS and separating admin accounts from local user accounts, making it pretty complex for a hacker to gain access.
Outside of that, we also need to discuss the risk register when we’re talking about our risk management, our risk policy. A lot of people are really confused about what a risk register really is, and they tend to overthink it. It’s fairly simple. It is created to help your department get its strategic IT risk management program off the ground.
Its sole purpose in life is to highlight your risk statement, the risk causes, and the risk impacts to your company, as well as the likelihood and impact of each risk. It can effectively be as easy as the Excel sheet we have here in this screenshot, which can be sorted based on your risk catalog, your IT risk, and your IT domain. So it could be based on your domain in regards to Active Directory. You can get even more granular and say, “This is my risk for the business.” This tool is intended to help you think through your risk.
So if you don’t understand, as an organization, what your risks are, what effectively happens is one of two things. Either you become ultra-paranoid, lock down everything, and go full force, or you think more about productivity and less about security, leaving yourself open to vulnerabilities. Either way, you’re not going to be able to successfully explain to upper leadership why security is important and what the risks are of not funding security, not staffing it, or not following the basic guidelines security wants to push through.
This is a very easy document to put together, and it shouldn’t take incredibly long. It can get long depending on how many risks you have identified for your organization, but it can be very simple for certain domains within IT. You can make a risk catalog based on application development, or on infrastructure management; you can make it quite particular to your specific area, so you can at least identify what risks exist, the likelihood of each risk occurring, and its impact. That way you can communicate effectively throughout your organization.
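The “Excel sheet” idea above can be sketched just as easily in a few lines of code: each row carries the risk statement, cause, likelihood, and impact, and sorting by likelihood times impact surfaces the biggest exposures first. All entries and 1-5 scores below are illustrative, not real findings.

```python
# Minimal risk register sketch, mirroring the columns described above.
register = [
    {"domain": "Active Directory", "risk": "Stale GPOs grant excess rights",
     "cause": "No periodic GPO review", "likelihood": 4, "impact": 4},
    {"domain": "Application Development", "risk": "App ships without security review",
     "cause": "No security gate in the SDLC", "likelihood": 3, "impact": 5},
    {"domain": "Infrastructure", "risk": "Orphan host misses patches",
     "cause": "Incomplete asset inventory", "likelihood": 5, "impact": 3},
]

# Sort by likelihood x impact so the biggest exposures surface first.
for row in sorted(register, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = row["likelihood"] * row["impact"]
    print(f'{score:>2}  {row["domain"]}: {row["risk"]}')
```

Whether it lives in Excel or a script, the point is the same: a sortable list that forces you to state each risk, its cause, and how much it matters.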
Operationalize Risk Mitigation
But how do we operationalize risk mitigation? How do we take all of these things we’ve learned, from asset management, from policies and procedures, and from the risk register, and effectively ensure we’re complying with them all the way through, reducing risk as much as possible? A lot of people will say, “Well, just put it to a risk committee board and have them vote on it and move forward,” but really all that means is a bunch of executives or upper leadership evaluated it and said, “Well, yeah, the risk exists, but I’m okay accepting it.” Instead, what we should be doing is figuring out ways to reduce that risk by leveraging known industry standards for cybersecurity.
It can be as simple as conducting vulnerability scans throughout your infrastructure. This not only helps you identify what is out in your infrastructure that you may have missed (by scanning CIDRs instead of exact IPs, you can catch hosts that may not be on your asset management list, reducing the chance of orphan systems), but it also helps you identify unpatched systems and gaps in patching. The same with threat intelligence, right? Understanding who your threat actors are can help you reduce risk by identifying their common tactics, techniques, and procedures, and placing defensive technologies in front of them to reduce the possibility of them gaining access.
Vulnerability management is a huge aspect of offensive and defensive security solutions. Without vulnerability management in place, you are effectively trusting that each team is patching on its own and following your policies and procedures, and you have very little insight into what is occurring. Just to start with some very basic definitions: what is a vulnerability? It’s a weakness or a flaw. It’s something in an application or system that can be exploited to gain access the user wouldn’t have had originally. It can be a chained vulnerability or a singular one.
Then what is an assessment? It’s the process of identifying those vulnerabilities. How do you find them? You do that by conducting a vulnerability assessment, a penetration test, or something along those lines that can identify those vulnerabilities and give you the opportunity to patch.
The lifecycle of a vulnerability assessment really shouldn’t be complex; it should be very basic. The more you add to it, the more you’re effectively going to reduce its effectiveness. First is discovery, which is your enumeration. If you think of it from a vulnerability scan perspective, this is where you put in the CIDRs of your internal network, scan it, and effectively look at what is on your network.
Like I said, this can help you identify any assets that are not part of your asset management list, as well as any weaknesses in patching. Next you prioritize, and this is where the risk register comes in. This is you deciding, “Okay, to me an external asset carries more risk than my internal random file share that nobody ever uses.” It helps you prioritize what is more important to scan and what is higher risk than something else.
Then you conduct the assessment. If it’s a scan, you just hit scan; if it’s an actual penetration test, you conduct your work. Then you report on it: you hand it over to all of the individuals who own the infrastructure or devices that were scanned, effectively stating, “Here is the problem, here are the remediation efforts.” This helps guide them through understanding how to fix it.
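The discovery and prioritization steps above can be sketched with the standard library alone: expand each in-scope CIDR into candidate hosts, ordered by the priority the risk register assigned (external ranges first). The ranges, zone labels, and priorities here are illustrative.

```python
import ipaddress

# Illustrative scan scopes: (CIDR, zone, priority). Lower priority
# number means "scan first", per the risk-register reasoning above.
SCOPES = [
    ("203.0.113.0/29", "external", 1),  # internet-facing: highest priority
    ("10.0.0.0/29",    "internal", 2),
]

targets = []
for cidr, zone, priority in sorted(SCOPES, key=lambda s: s[2]):
    # .hosts() expands the CIDR, skipping network and broadcast addresses.
    for host in ipaddress.ip_network(cidr).hosts():
        targets.append((str(host), zone))

print(targets[0])  # ('203.0.113.1', 'external') -- scanned first
```

A real scanner (Nessus, OpenVAS, etc.) would take these targets as input; the point is that scoping by CIDR rather than by known IPs is what lets discovery catch hosts the asset list missed.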
We, as security professionals, can’t hand over something and say, “Well, there are issues here,” without also following up with, “Here’s how to fix them.” And after the owner remediates, it’s really on us to confirm that the remediation was actually put in place and that it worked. This is a very big problem we see.
Recently TEAMARES saw this with VMware Fusion, the Mac VMware application. One of our assessors found a privilege escalation vulnerability in it and submitted it to VMware. VMware issued a patch, but they didn’t have anybody test that patch before sending it out, and another researcher outside of our company found a workaround for it. This means they developed and issued a patch but never tested the remediation. Did it actually fix the problem, or did they just add complexity? Really, they just added complexity. They made it so that, as an attacker, you now had to bypass a signature check, which could easily be done through a race condition. The researcher was able to exploit that and move forward.
When we release vulnerabilities to asset owners and say, “Hey, patch this,” they come back and say, “I patched it.” We need to own the remediation check. We need to verify that it actually was patched. Without that, there’s no point in doing any of this.
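The point about owning the remediation check can be sketched as a simple rule: a finding only moves to closed after a retest fails to reproduce it, never on the owner’s word alone. The function names and the stand-in retest below are illustrative; a real retest would be a re-scan or manual re-exploitation attempt.

```python
# Sketch: a finding is closed only when a retest confirms the fix.
def close_finding(finding, retest):
    """retest(finding) returns True if the issue still reproduces."""
    if retest(finding):
        finding["status"] = "reopened"          # "patched" claim was wrong
    else:
        finding["status"] = "verified-closed"   # fix confirmed by retest
    return finding

finding = {"id": "F-101", "host": "10.0.0.23", "issue": "SMBv1 enabled",
           "status": "reported-fixed"}

# Pretend the re-scan came back clean for this run.
print(close_finding(finding, lambda f: False)["status"])  # verified-closed
```

Encoding this in the workflow, whether in a ticketing system or a script like this, is what prevents the VMware-style situation where a patch ships but the fix was never verified.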
One of the fundamental parts of vulnerability management, like I was saying, is vulnerability scanning: having some sort of tool or system in place that can scan across your infrastructure looking for potential vulnerabilities, unpatched systems, and other issues that could impact your organization and that could be patched. One thing to note: vulnerability scanners are not designed to find zero-days. I know there are breach and attack simulation tools, but those are entirely different. Those are leveraged to find flaws in your defensive technologies using the tactics, techniques, and procedures of red teams or threat actors. They’re not designed to find zero-days or vulnerabilities within your systems.
A vulnerability scanner effectively looks for anything that has already been designed and implemented in that scanner, be it Nessus or something else, and checks for the existence of that issue somewhere within your organization. The purpose behind vulnerability scanning can range. Most of the time it’s leveraged for compliance needs like PCI; sometimes it’s leveraged continuously to check on patching or asset management, to confirm that the organization is following the standards, policies, and procedures it put in place. It’s also used for risk reduction, because having a vulnerability scanner run monthly, quarterly, whatever it is, helps ensure that the low-hanging fruit, the vulnerabilities already known on network-connected devices, is being checked constantly. That way you don’t have what we call in this industry “script kiddies” exploiting some system using EternalBlue or something else that’s built into Metasploit.
You want to raise the bar on your attackers and your pentesters. If you’re hiring a company that is commonly finding exploits that are in Metasploit and using them to exploit you, then you need to go back and rethink how you’re doing your vulnerability management. The bar is constantly being raised; your pentesters’ talent should constantly be challenged, forcing them to get better, because that means you, as an organization, are also improving.
We talked a little bit about vulnerability management and vulnerability scanning. I even covered breach and attack simulation tools, and there are a lot of other automated ways of checking for deficiencies in defensive technologies, in patching, in asset management, and so on. A lot of people are trying to automate as much as possible, which is fine. But there’s a piece of vulnerability management that’s commonly left out: the human element of penetration testing, which really is paramount to any successful vulnerability management program.
The biggest reason behind this is that a pentesting team, or even an individual, is leveraging techniques to abuse vulnerabilities, flaws, and misconfigurations in systems that an automated scanner isn’t naturally going to pick up. Sometimes they’re doing it in a very clever way, for example, leveraging basic misconfigurations in systems to violate or bypass segmentation rules that were put in place to keep people from gaining access to systems they have no business need to reach. SCADA systems are a common case: we do this quite often, where we’re tasked with trying to get from the guest or user network all the way into a SCADA infrastructure.
What we’ll do is sit there and fly low and quiet, crawling through the infrastructure looking for even the most minute issue in order to gain access to those SCADA systems. We’ve seen a lot of DLP tools and a lot of vulnerability scanners claim they can find exposed passwords and open file shares, and I’m sure some of them can, but they can’t cover everything, and that’s where the human element really plays a pivotal role in the vulnerability management program.
A pentester should not be relying on scanners. They should be relying on their own knowledge base and their own ability to crawl through an infrastructure and find those flaws. That’s where they really separate themselves from vulnerability scanners and automated programs.
But there’s also a compliance and regulation need for it, right? A lot of compliance regulations, PCI and others, require penetration testing. It’s also a pivotal part of the software development lifecycle (SDLC): being able to attack your applications before they go into production limits the risk to your organization. We see quite often that companies developing applications will just let them sit within their internal production infrastructure, their intranet, or will publish them live to the internet without any real security checks on the code they’re releasing, which raises a lot of risk to the company. That also tells us they’re not effectively doing risk management; they’re not identifying that an external application is high risk and should go through barriers or checkpoints to evaluate its security prior to release.
This is where a penetration test comes into play. A pentester can evaluate the application, either through code-review-assisted testing or black-box testing, and really assess what impact that application would have on the organization if it were breached. Pentesters are also a good way of measuring your risk as it stands now and reducing it long-term. A good pentest report should identify the flaws found in your infrastructure, define them as risks, and show you how to remediate. Once you remediate those issues, your risk should go down overall; at least, that is the hope and goal of most penetration tests.
Incident Response Tabletop
We talked a lot about offensive security in regard to vulnerability management and operationalizing your risk mitigation and management. One thing we haven’t talked about yet is the defensive methods you can put in place to truly reduce your risk and evaluate where you are from a risk management perspective. One of the easiest is an Incident Response Tabletop. This will really challenge your processes, your procedures, and your communication by simulating a breach, or some type of security event, against your infrastructure and your organization, and working through it by following your current written policies and procedures.
This isn’t something that has to be elaborate. It could be as simple as: one of my external websites and its connected database have been breached, start the scenario. How are we going to take control of that database and application and evaluate what actually occurred? You cut off the bleeding, start notifying internal teams of the event, and have them work through the business continuity plan as well as the incident response plan to recover from that breach. It’s not usually done during a live engagement like a pentest or red team; as the title indicates, it’s done in the conference room, on the phone, or via web chat, walking through the steps of your incident response plan.
This really helps you identify any weaknesses in your plan, allowing you to improve before a real-life scenario occurs, or before you move to actually testing your plan against red team or purple team situations.
Incident Response Tests
The other option you have is an Incident Response Test. This is more of a live-fire scenario: you have either an internal red team or an external red team go after your organization and use all the capabilities that team has in order to breach it. You don’t tell your blue team or your defensive security personnel what’s happening. This is a true test of a real-world incident and how your teams are going to respond.
A lot of companies don’t do this because they’re afraid it may make them, or their defenses, look bad. But really this is an accurate test of what you’re going to face, and by having a pentest done at the same time, you can see from the attackers’ perspective how hard it’s going to be for them to breach you while your defensive teams are blocking IPs, kicking people off the network, and force-changing passwords identified during the breach. It really raises the game for your red team, internal or external, as well as your blue team. You’re also able to see how communication is working internally and externally, because communication is a huge factor when it comes to breaches. How are your teams following those processes and procedures? How is your technology holding up? Do you have a good EDR solution that’s actually preventing red teamers, or at least alerting on them dumping memory or accessing Windows APIs known to be leveraged for malicious purposes?
This also allows you to test your policies, standards, and procedures in a real-world scenario. Do you have a procedure and a policy in place so that if an account is identified during a breach, its password is changed? How fast did that occur? This gives you really solid insight into what would happen if a breach did occur.
There are a lot of tools, a lot of software, and a lot of training out there that you can pay for to increase your efficiency. I’m a big believer in open source. I’m a big believer in free where possible. I don’t think everything should be paid for. This isn’t an extensive list, but it captures the point I want to make.
We need to leverage automation and technology to increase efficiency. It’s a no-brainer. We need vulnerability scanners, and we need technology in place that can centralize any potential incident response or breach activity. We need the ability to crawl our IT infrastructure to reduce the human aspect of asset management, so that scanners can add to our asset management list for us. However, it doesn’t need to break the bank. On the right side, I have some examples of open-source tools for the big fundamentals of risk management and vulnerability management, such as asset management tools and vulnerability scanners. These may not be the best; OpenVAS is definitely not the best in the industry, but it’s free, and it’s decent for very small organizations. There are others out there that are free and a little more wide-scale. For incident response management, there’s TheHive; this is a project I recently came across, and it looks pretty decent.
The point is, just because something is open source or free doesn’t mean it’s not going to increase your automation and your efficiency. It may take a little more labor and work on your side to get it to a point where it’s actually increasing your efficiency. But if it reduces overall cost and increases efficiency, it’s a no-brainer. It’s a success in my eyes. That means you can use your security budget for more important things, like personnel training where paid training is required. Things of that nature.
If you have any questions, I’ll answer them now. If not, thank you all for joining. I hope you learned something from this, and feel free to reach out to us if you have any questions.