Benchmarking Security Information & Event Management (SIEM) - Sponsored Whitepaper

Sponsored by:
NitroSecurity, Inc
Critical business systems and their associated technologies are typically held to performance benchmarks. In the security space, benchmarks of speed, capacity, and accuracy are common for encryption, packet inspection, assessment, alerting, and other critical protection technologies. But how do you set benchmarks for a tool based on collection, normalization, and correlation of security events from multiple logging devices? And how do you apply those benchmarks to today's diverse network environments?

This is the problem with benchmarking Security Information and Event Management (SIEM) systems, which collect security events from one to thousands of devices, each with its own log data format. If we take every conceivable environment into consideration, it is impossible to benchmark SIEM systems. We can, however, set one baseline environment against which to benchmark, and then include equations so that organizations can extrapolate their own benchmark requirements. That is the approach of this paper.

Consider that network and application firewalls, network and host Intrusion Detection/Prevention systems (IDS/IPS), access controls, sniffers, and Unified Threat Management (UTM) systems all log security events that must be monitored. Every switch, router, load balancer, operating system, server, badge reader, custom or legacy application, and many other IT systems across the enterprise produce logs of security events, along with every new system to follow (such as virtualization). Most have their own log expression formats. Some systems, such as legacy applications, don't produce logs at all.

First we must determine what is important. Do we need all log data from every critical system in order to perform security, response, and audit? Will we need all that data at lightning speed? (Most likely, we will not.) How much data can the network and the collection tool actually handle under load? What is the threshold before the network bottlenecks and/or the SIEM is rendered unusable, not unlike a denial of service (DoS)? These are the variables every organization must weigh as it holds SIEM to the standards that best suit its operational goals.
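To make that extrapolation concrete, here is a minimal sizing sketch in Python. The device classes, counts, per-device event rates, and burst multiplier are illustrative assumptions, not figures from this paper; substitute numbers measured in your own environment.

# Hypothetical EPS sizing sketch. Device counts, per-device event rates,
# and the burst multiplier are assumptions for illustration only.

# class name -> (device count, average events per second per device)
DEVICE_CLASSES = {
    "firewall":      (10, 300),
    "ids_ips":       (5,  150),
    "switch_router": (50,   5),
    "server_os":     (200,  2),
}

PEAK_FACTOR = 10  # assumed burst multiplier during a scan, worm, or incident

def required_eps(classes, peak_factor):
    """Return (sustained EPS, peak EPS) for the given device inventory."""
    sustained = sum(count * rate for count, rate in classes.values())
    return sustained, sustained * peak_factor

if __name__ == "__main__":
    sustained, peak = required_eps(DEVICE_CLASSES, PEAK_FACTOR)
    print(f"Sustained load: {sustained:,.0f} EPS")
    print(f"Peak sizing target: {peak:,.0f} EPS")

A candidate SIEM should arguably be tested against the peak figure rather than the sustained one, since event rates spike precisely when the tool is most needed.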


Why is benchmarking SIEM important? According to the National Institute of Standards and Technology (NIST), SIEM software is a relatively new type of centralized logging software compared to syslog. Our SANS Log Management Survey¹ shows that 51 percent of respondents ranked collecting logs as their most critical challenge, and collecting logs is a basic feature a SIEM system can provide. Further, a recent NetworkWorld article² explains how different SIEM products typically integrate well with selected logging tools, but not with all tools. This is due to the disparity between logging and reporting formats from different systems. There is an effort under way to standardize logs through Mitre's Common Event Expression (CEE) standard event log language.³ But until all logs look alike, normalization is an important SIEM benchmark, and it is measured in events per second (EPS).

Event performance characteristics provide a metric against which most enterprises can judge a SIEM system. The true value of a SIEM platform, however, will be in terms of Mean Time To Remediate (MTTR) or other metrics that can show the ability of rapid incident response to mitigate risk and minimize operational and financial impact. In our second set of benchmarks, covering storage and analysis, we have addressed the ability of SIEM to react within a reasonable MTTR to incidents that require automatic or manual intervention.

Because this document is a benchmark, it does not cover the important requirements that cannot be benchmarked, such as integration with existing systems (agent vs. agentless collection, transport mechanism, ports and protocols, interface with change control, usability of the user interface, storage type, integration with physical security systems, etc.). Other requirements that organizations should consider, but that aren't benchmarked here, include the ability to process connection-specific flow data from network elements, which can further enhance forensic and root-cause analysis. Features such as the ability to learn from new events, make recommendations and store them locally, and filter out incoming events from known infected devices that have been sent to remediation are also important, but they are not benchmarked here either. The variety and type of reports available, report customization features, role-based policy management, and workflow management are additional features to weigh against an individual organization's needs, but they are not included in this benchmark. Finally, organizations should look at a SIEM tool's overall history of false positives, something that can be benchmarked but is not within the scope of this paper. In place of false positives, Table 2 focuses on accuracy rates within ...
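Because normalization throughput is the first benchmark named above, a minimal sketch of the normalization step may help make the EPS metric concrete. The parsers, regular expressions, and common field names below are illustrative assumptions only; they are not the CEE specification or any vendor's actual schema.

# Minimal sketch of SIEM log normalization: map two dissimilar raw log
# formats into one common event schema. Hypothetical formats and fields.
import re

def normalize_firewall(line):
    """Parse a hypothetical firewall log line into the common schema."""
    # e.g. "2014-03-01T12:00:05Z DENY src=10.0.0.5 dst=192.0.2.9"
    m = re.match(r"(\S+) (ALLOW|DENY) src=(\S+) dst=(\S+)", line)
    if m is None:
        return None  # unparsed lines count against accuracy, not just EPS
    ts, action, src, dst = m.groups()
    return {"time": ts, "src_ip": src, "dst_ip": dst,
            "action": action.lower(), "raw": line}

def normalize_auth(line):
    """Parse a hypothetical syslog-style auth failure into the same schema."""
    # e.g. "Mar  1 12:00:06 host1 sshd: Failed password from 10.0.0.5"
    m = re.search(r"Failed password from (\S+)", line)
    if m is None:
        return None
    return {"time": None, "src_ip": m.group(1), "dst_ip": None,
            "action": "auth_failure", "raw": line}

events = [
    normalize_firewall("2014-03-01T12:00:05Z DENY src=10.0.0.5 dst=192.0.2.9"),
    normalize_auth("Mar  1 12:00:06 host1 sshd: Failed password from 10.0.0.5"),
]

# Once events share a schema, correlation is a query over common fields:
suspects = {e["src_ip"] for e in events if e is not None}
print(suspects)  # {'10.0.0.5'} -> both events implicate the same host

Benchmarking normalization in EPS then amounts to timing how many raw lines per second parsers like these, plus the correlation queries behind them, can sustain.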