Accuracy in AppSec Is Critical to Reducing False Positives

According to a new report from the Neustar International Security Council (NISC), over one-quarter of security alerts fielded within organizations are false positives. Surveying senior security professionals across five European countries and the U.S., the report highlights the need for more advanced, more accurate security solutions to relieve alert-weary cybersecurity teams overwhelmed by massive alert volumes.

Alert Fatigue and Its Causes

Following are some of the key highlights from the report:

More than 41% of organizations experience over 10,000 alerts a day, yet many of those alerts are not critical. Teams need to be able to quickly differentiate between low-fidelity alerts that clutter security analysts’ dashboards and those that pinpoint legitimate potential malicious activity. This expanding volume of low-fidelity alerts has become a source of “noise” that consumes valuable time across the organization, from developers to the security operations center (SOC). Thousands of hours can be wasted annually confirming whether an alert is legitimate or a false positive.

While security tools may trigger alert notifications, this doesn’t mean the activity is malicious. Security configuration errors, inaccuracies in legacy detection tools, and improperly applied security control algorithms can all contribute to false-positive rates. Other contributing factors include:

  • Lack of context in the alert generation process.
  • Inability to consolidate and classify alerts.

Another reason for the deluge in alerts is the fact that many companies deploy multiple security controls that fail to correlate event data. Disparate events may not be linked, with the tools used by security analysts operating in separate silos with little consolidation. Log management and security information and event management (SIEM) systems can perform a correlation between separate products, yet they require significant customization to accurately report events.
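As an illustration of the kind of correlation work a SIEM performs, the sketch below groups normalized events from separate tools by source IP and time window. The event schema, tool names, and five-minute window are assumptions for illustration, not any particular product's behavior:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical events from three separate security tools, normalized
# to a common schema (tool, timestamp, source IP, message).
events = [
    {"tool": "firewall", "ts": datetime(2020, 6, 1, 10, 0, 5),  "src": "10.0.0.7", "msg": "blocked outbound connection"},
    {"tool": "ids",      "ts": datetime(2020, 6, 1, 10, 0, 9),  "src": "10.0.0.7", "msg": "port scan signature"},
    {"tool": "edr",      "ts": datetime(2020, 6, 1, 10, 0, 40), "src": "10.0.0.7", "msg": "suspicious process spawn"},
    {"tool": "ids",      "ts": datetime(2020, 6, 1, 11, 30, 0), "src": "10.0.0.9", "msg": "port scan signature"},
]

def correlate(events, window=timedelta(minutes=5)):
    """Group events by source IP, then split each group into bursts
    whose consecutive events fall within the same time window."""
    by_src = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        by_src[e["src"]].append(e)
    incidents = []
    for src, evs in by_src.items():
        burst = [evs[0]]
        for e in evs[1:]:
            if e["ts"] - burst[-1]["ts"] <= window:
                burst.append(e)
            else:
                incidents.append((src, burst))
                burst = [e]
        incidents.append((src, burst))
    return incidents

for src, burst in correlate(events):
    tools = sorted({e["tool"] for e in burst})
    print(f"{src}: {len(burst)} event(s) from {tools}")
```

Here three events from different tools involving 10.0.0.7 collapse into a single incident, which is exactly the linkage siloed tools fail to provide without a correlation layer.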

Tools like these often require a security analyst to confirm whether an alert is legitimate or a false positive. While such solutions can coordinate and aggregate data to analyze alerts, they don’t address the challenges posed by high false-positive rates.

Further complicating matters are intrusion detection and prevention systems (IDS/IPS) that cannot accurately aggregate multiple alerts. For instance, if a single alert shows that an internal system attempted but failed to connect to an external IP address 50 times, most tools will generate 50 separate failed connection alerts, versus recognizing it as one repeated action.
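The repeated-alert problem above can be sketched in a few lines. This is a hypothetical illustration (the field names are assumptions), showing how an aggregation step collapses 50 identical failed-connection alerts into one alert carrying a repeat count:

```python
from collections import Counter

# 50 identical failed-connection events, as a naive tool would report them.
raw_alerts = [
    {"type": "failed_connection", "src": "10.0.0.7", "dst": "203.0.113.50"}
] * 50

def aggregate(alerts):
    """Collapse identical alerts into one entry with a repeat count."""
    counts = Counter((a["type"], a["src"], a["dst"]) for a in alerts)
    return [
        {"type": t, "src": s, "dst": d, "count": n}
        for (t, s, d), n in counts.items()
    ]

summary = aggregate(raw_alerts)
print(summary)
# One aggregated alert with count=50 instead of 50 separate alerts.
```

The analyst now triages one item with a count of 50, recognizing it as a single repeated action rather than 50 independent incidents.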

Security Alert Overload Introduces Risk and Inefficiencies

Investigating and validating a single alert can require a multitude of tools just to decide whether it should be escalated. According to a report by CRITICALSTART, incident responders spend an average of 2.5 to 5 hours each day investigating alerts.

Unable to cope with the endless stream of alerts, security teams are tuning specific alert features to reduce volume. But this often ratchets up risk, as they may elect to ignore certain categories of alerts or turn off high-volume alert features altogether.

As a result, one of the challenges development teams face in managing alert fatigue in application security (AppSec) is finding the right balance between liberal controls that could flood systems with alerts and stringent alert criteria that could leave teams exposed to false negatives.

While false positives may be annoying and burden teams with additional triage requests, false negatives tend to be more insidious: a tested application function is erroneously flagged as “passing” when, in reality, it contains one or more vulnerabilities. For AppSec teams, the objective is detecting valid threats through quality alerts, supported by the context and evidence needed to inspect them accurately and continuously.
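The trade-off between liberal and stringent alerting can be made concrete with a toy scoring filter. The scores, threshold values, and ground-truth labels here are invented for illustration: a low threshold floods the queue with noise, while a high one silently drops a real vulnerability (a false negative):

```python
# Toy alert stream: each alert has a confidence score and a ground-truth
# label we would not know in practice (True = real vulnerability).
alerts = [
    {"score": 0.95, "real": True},
    {"score": 0.60, "real": True},   # real, but reported with low confidence
    {"score": 0.55, "real": False},
    {"score": 0.30, "real": False},
    {"score": 0.20, "real": False},
]

def triage(alerts, threshold):
    """Escalate only alerts at or above the confidence threshold;
    track real vulnerabilities that the threshold silently drops."""
    escalated = [a for a in alerts if a["score"] >= threshold]
    missed = [a for a in alerts if a["score"] < threshold and a["real"]]
    return escalated, missed

# Liberal setting: everything is escalated (noisy, but nothing missed).
esc, missed = triage(alerts, threshold=0.1)
print(len(esc), len(missed))   # 5 escalated, 0 missed

# Stringent setting: quiet queue, but one real vulnerability slips through.
esc, missed = triage(alerts, threshold=0.9)
print(len(esc), len(missed))   # 1 escalated, 1 missed
```

The stringent run is the dangerous one: the dropped alert never appears on a dashboard, so nothing prompts anyone to look for it.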

Reducing Alert Fatigue With Instrumented AppSec

Fortunately, technologies like instrumentation help automate security testing to reduce false positives and false negatives.

Instrumentation is the ability to record and measure information within an application without changing the application itself. Current “flavors” of security instrumentation include the following technologies:

  • Software Composition Analysis (SCA). SCA inventories and assesses all open-source libraries.
  • Runtime Application Self-Protection (RASP). A RASP monitors threats and attacks while preventing vulnerabilities from being exploited.
  • Interactive Application Security Testing (IAST). An IAST monitors applications for novel vulnerabilities in custom code and libraries.

By instrumenting an application with passive sensors, teams have more access to information about the application and its execution, delivering unprecedented levels of speed and accuracy in identifying vulnerabilities. This unique approach to modern AppSec produces the intelligence and evidence necessary to detect vulnerabilities with virtually no false positives and no false negatives.

At the end of the day, your security tools need to give you fewer, but more significant, alerts that contain the right intelligence to inform your security and development teams. Technologies that use instrumentation, like SCA, IAST, and RASP, achieve high accuracy because they have visibility into an application and its runtime environment as code loads into memory, providing enhanced security logging for analytics.

Featured in DZone | June 1, 2020
