SIEM for Beginners

We tend to rely on many stand-alone systems to analyze hard-to-understand processes, but thorough log analysis, together with a big picture of what all these systems do, is just as important.

Let’s talk about Security Information & Event Management, or SIEM for short. Such systems collect and analyze information from as many sources as possible – DLP systems, IPS, routers, firewalls, user workstations, servers, and so on. Here are practical examples of threats that only a SIEM can identify correctly:

  • APT attacks – relevant for companies holding valuable information. A SIEM is perhaps the only way to detect the beginning of such an attack: while probing the infrastructure, attackers generate traffic at different points, and the SIEM’s security event correlation makes this activity visible;
  • Detection of various anomalies in the network and on individual nodes, the analysis of which is beyond the reach of other systems;
  • Response to emergency situations and to rapid changes in user behavior.

The principle of “deploy and forget” does not apply here. Absolute protection does not exist, and even the most unlikely risks can backfire, stopping the business and causing huge financial losses. Any software or hardware may fail or be configured incorrectly and let a threat through.

WHAT’S THE NEED FOR SECURITY INFORMATION AND EVENT MANAGEMENT?

  • Regulatory mandates require log management to maintain an audit trail of activity. SIEMs provide a mechanism to deploy a log collection infrastructure rapidly and easily. Alerting and correlation capabilities also satisfy routine log data review requirements, and SIEM reporting capabilities provide audit support as well;
  • A SIEM can pull data from disparate systems into a single pane of glass, allowing for efficient cross-team collaboration in extremely large enterprises;
  • By correlating process activity and network connections from host machines, a SIEM can detect attacks without ever having to inspect packets or payloads;
  • SIEMs store and protect historical logs, and provide tools to quickly navigate and correlate data, thus allowing for rapid, thorough, and court-admissible forensic investigations.
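As a concrete illustration of the correlation idea in the third bullet, here is a minimal sketch in Python. The event schema and the suspicious-process and bad-port lists are invented for the example; real SIEMs use their own normalized formats and threat intelligence feeds. It flags a host where a suspicious process start is followed shortly by an outbound connection to a known-bad port, without inspecting any packet payloads:

```python
from datetime import datetime, timedelta

# Hypothetical normalized events: process starts from host agents,
# connections from firewall or NetFlow logs.
process_events = [
    {"host": "ws-042", "process": "powershell.exe",
     "time": datetime(2023, 1, 5, 10, 0, 0)},
]
network_events = [
    {"host": "ws-042", "dest_port": 4444,
     "time": datetime(2023, 1, 5, 10, 0, 20)},
]

SUSPICIOUS_PROCESSES = {"powershell.exe", "cmd.exe"}   # illustrative only
SUSPICIOUS_PORTS = {4444, 1337}                        # illustrative only
WINDOW = timedelta(minutes=1)

def correlate(proc_events, net_events):
    """Flag hosts where a suspicious process start is followed, within
    WINDOW, by an outbound connection to a known-bad port."""
    alerts = []
    for p in proc_events:
        if p["process"] not in SUSPICIOUS_PROCESSES:
            continue
        for n in net_events:
            if (n["host"] == p["host"]
                    and n["dest_port"] in SUSPICIOUS_PORTS
                    and timedelta(0) <= n["time"] - p["time"] <= WINDOW):
                alerts.append((p["host"], p["process"], n["dest_port"]))
    return alerts

print(correlate(process_events, network_events))
```

Production correlation rules are, of course, far richer (sliding windows, thresholds, suppression), but the core mechanism is the same join across event sources.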

HOW TO SCOPE A SIEM INTEGRATION?

  • Analysis of events and creation of alerts on any network traffic anomalies, unexpected user actions, unidentified devices, etc.;
  • Creation of reports, including ones customized specifically for your needs. For example, a daily report on incidents, a weekly report of the top 10 violators, a report on device performance, etc. Reports are configured flexibly according to their recipients;
  • Monitoring of events from devices / servers / mission-critical systems, and the establishment of appropriate notifications;
  • Logging of all events for the purposes of gathering evidence, analyzing attack vectors, etc.
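A scheduled report such as the weekly top 10 violators is, at its core, a simple aggregation over alert records. A minimal sketch, assuming a hypothetical flat alert format (the field and rule names are illustrative, not tied to any product):

```python
from collections import Counter

# Hypothetical policy-violation alerts as a SIEM might export them.
alerts = [
    {"user": "jdoe",   "rule": "usb-storage-blocked"},
    {"user": "jdoe",   "rule": "blocked-site-visit"},
    {"user": "asmith", "rule": "blocked-site-visit"},
    {"user": "jdoe",   "rule": "usb-storage-blocked"},
]

def top_violators(alert_records, n=10):
    """Count policy-violation alerts per user and return the top n
    as (user, count) pairs, most frequent first."""
    counts = Counter(a["user"] for a in alert_records)
    return counts.most_common(n)

print(top_violators(alerts))
```

In practice the same aggregation would run against the SIEM's log store on a schedule and be rendered per recipient, as described above.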

HOW THE SIEM FUNCTIONS

[Figure: how the SIEM functions]
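One of the first things a SIEM does with incoming data is normalization: raw log lines from heterogeneous sources are parsed into a common schema before correlation and storage. A simplified sketch of that step in Python, using an invented syslog-style line format (not a full RFC 3164/5424 parser):

```python
import re

# Parse "Mon DD HH:MM:SS host program: message" into a common schema.
LINE_RE = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d+\s[\d:]{8})\s"
    r"(?P<host>\S+)\s"
    r"(?P<program>[\w./-]+):\s"
    r"(?P<message>.*)$"
)

def normalize(raw_line):
    """Return a dict with timestamp/host/program/message fields,
    or None for lines that do not match the expected format."""
    m = LINE_RE.match(raw_line)
    if m is None:
        return None  # in practice, unparsed lines go to a dead-letter queue
    return m.groupdict()

event = normalize("Jan  5 10:00:20 fw01 kernel: DROP IN=eth0 SRC=203.0.113.7")
print(event["host"], event["program"])
```

Once every source emits the same schema, the correlation and reporting stages described above can work across firewalls, servers, and workstations uniformly.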

DESIGN & INTEGRATION STEPS

The SIEM implementation should leverage a phased approach, with systematic follow-through of the required stages for solution deployment. The typical SIEM implementation phases are:

REQUIREMENTS GATHERING & ASSESSMENT

A detailed assessment of the company’s environment must be performed, with the goal of inventorying the existing architecture and identifying basic SIEM requirements: understanding the current enterprise security architecture and its critical components, the current tools and procedures used to determine potential risk, and the procedures used to confirm regulatory compliance. This phase also identifies the business objectives to be met by the development and implementation of a SIEM, and captures a clear network map with an inventory of all devices, in order to ensure that the solution is comprehensive.

SYSTEM DESIGN

A detailed technical SIEM deployment design is then created, based on the gathered requirements. This includes converting business requirements into conceptual scenarios, as well as creating technical use cases, logical and physical SIEM architecture designs, and a SIEM integration project plan.

INTEGRATION ACTIVITIES

The system must provide real-time, centralized monitoring and correlation over the entire network security infrastructure, as well as notification of and response to harmful security events. It must also share information security event data with all relevant business units and generate security event data for forensic purposes.

This phase involves configuring and installing the development environment, implementing the technical use cases and the interface component, testing the system configuration, documenting it, rolling out to production, and training and knowledge transfer.

POST-DEPLOYMENT ACTIVITIES

As with most systems, a SIEM needs looking after. Ensuring support for the solution, putting effective 24/7 solution monitoring in place, and preparing for change management, always with an eye on evolving threats, are all a must.

CHOOSING A VENDOR

This is a question that cannot be answered in advance. The integrator typically examines the client’s infrastructure and needs, and figures out the client’s budget.

After that, the vendors make offers and the integrator proposes the most suitable one to the customer. This step is needed because of the lack of compatibility between different vendors.

Sometimes it is believed that if you have a SIEM, there is no need to install DLP, IDS, vulnerability scanners, etc. In fact, this is not the case. A SIEM can track anomalies in network traffic, but it cannot perform the deeper analysis itself. Strictly speaking, a SIEM is useless without other security systems: its main advantage – the collection, storage, and analysis of logs – is reduced to zero without the sources of those logs.

DDoS Stress Testing for Increased Resiliency

You’ve heard of DDoS, right? In short, DDoS stress testing is a service that helps your organization understand just how well you are prepared for the different DDoS attack vectors that, unfortunately, may come your way. The service consists of simulations of DDoS or high load on your IT infrastructure, carried out in a strictly controlled and pre-scheduled manner. What you get is a detailed report on network and server issues related to DDoS resiliency, along with remediation and mitigation advice on how to harden your DDoS mitigation solution, or how to implement one in case you don’t have it yet.

WHY WOULD YOU PROCURE DDoS STRESS TESTING?

Today, DDoS is as easy to inflict on a victim as buying a pizza online. It’s cheap and effective too. By stress testing your IT infrastructure, you will be able to identify and plan for mitigating DDoS-related issues before attacks do happen and harm you. You will also gain insight into your incident response procedures and improve them, or simply gain better control over a DDoS mitigation solution you may have. If you’re looking to purchase such a solution, stress testing may help you choose the right vendor for the job.

HOW DOES IT WORK?

The stress testing process usually starts with a verification and customization procedure. Real DDoS attack vectors are then pointed at the organization’s public-facing IT infrastructure from the outside (real-life scenario) or in a closed environment (on-premise simulation). DDoS attack simulations should be carried out on all applicable layers of the OSI model in a fine-grained, controlled manner, with a “stop” capability available at all times. The process must be supervised at all times by a support member of the service provider and a representative of the tested organization.

PLACE IN THE SECURITY PROCESS

Confidentiality, integrity, and availability, also known as the CIA triad (or the AIC triad, for those wanting to avoid association with a certain intelligence agency), is at the heart of information security; the three work together to make sure your data and systems remain secure. It is wrong to assume one part of the triad is more important than another: every IT system requires a different prioritization of the three, depending on the data, the user community, and the timeliness required for accessing the data. The opposing forces to the triad concepts are disclosure, alteration, and destruction: disclosure is the unauthorized disclosure of information, alteration is the unauthorized modification of data, and destruction is making systems unavailable.


Availability keeps information available when needed. All systems must be usable (available) for business-as-usual operation. Typical availability attacks are the Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks, whose aim is to deny the service (or availability) of a system. Being prepared and informed of weaknesses in your system against DDoS attacks involves stress testing.

WHAT COVERAGE OF STRESS TESTING DO YOU NEED?

Determining the readiness of your organization’s IT infrastructure for DDoS attacks through stress testing must include all known attack vectors and possible sources. Remember, DDoS today is cheap and effective, thus the following characteristics of the testing method and approach must be in place:

  • Attack vectors simulating floods generated by real known botnets;
  • Volumetric attacks with unlimited size and adjustable increments;
  • Service-centric selection of floods on the Application layer;
  • Flexible attack timing and combined vector capability.

The attack scope is very important and must (i) be able to reveal at least the fundamental weaknesses of the target servers and (ii) comply with your security policies and strategy.

A good stress testing vendor will have the expertise and capacity to employ a wide variety of attack vectors to include, but not limited to various HTTP/HTTPS methods and combinations (GET, POST, HEAD, PUT, DELETE, TRACE, CONNECT, OPTIONS, PATCH, etc.), various attacks on WebDAV protocol, SYN-ACK Floods, ACK or ACK-PUSH Floods, Fragmented ACK Floods, RST/FIN Floods, Same Source/Destination Floods (LAND Attack), Fake Session Attacks, UDP Floods, UDP Fragmentation, ICMP Floods, ICMP Fragmentation Floods, Ping Floods, TOS Floods, IP NULL/TCP NULL Attacks, Smurf/Fraggle Attacks, DNS Floods, NTP Floods, various Amplified (Reflective) attacks, Slow Session Attacks, Slow Read Attacks, Slowloris, HTTP Fragmentation, various types of Excessive Verb (HTTP/HTTPS GET Flood), Excessive Verb – Single Session, Multiple Verb – Single Requests, Recursive GET, Random Recursive GET, various Specially Crafted Packets, etc.

INTERNAL vs. EXTERNAL TESTING

In order to establish perimeter resilience to DDoS attacks from a risk management point of view, the assets under threat must be properly identified and listed, followed by an assessment of the critical assets’ vulnerability. Generally, DDoS stress testing is performed either externally or internally.

As the name suggests, the external approach simulates a DDoS attack by deploying resources that are very close in nature to a real-life attack, i.e. originating from the Internet. The attacking “botnet” is simulated from a stress-testing cloud platform. The maximum volume of the simulated test attacks must be discussed with the client and agreed upon prior to starting the tests. Generally, a typical topology for external tests, including a sample legitimate client (a machine used to perform availability tests), is implemented:

[Figure: typical topology for external testing]

In contrast to external testing, internal DDoS stress testing means performing the simulation in a location within the perimeter of the client network. Flood traffic is generated internally and pointed to resources, which are usually part of a purpose-built test environment. Displayed below is a typical network topology for internal testing, where the Internet is simulated with a local network and includes segmented test targets and a simulated legitimate client PC:

[Figure: typical network topology for internal testing]

When performing DDoS stress testing, it is imperative that a detailed test plan is made available in advance and pre-approved by all parties involved. All tests must be performed in stages, with every stage lasting long enough to perform an availability test and measure an approximate download speed from the target server by connecting to it from the simulated client PC. Tests must be designed so that they can be stopped at any time and at any stage at your request. It is highly recommended not to perform tests on the production environment, as the tests’ behavior and possible aftereffects depend on specific target server settings.
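The per-stage availability test mentioned above can be as simple as fetching a page from the target and timing the download. A minimal sketch in Python (the function name and return fields are invented for illustration); it should be run only from the simulated legitimate client, against targets you are authorized to test:

```python
import time
import urllib.request

def availability_check(url, timeout=10):
    """Fetch the target once and report reachability plus an
    approximate download speed in KB/s. Intended for authorized
    availability checks between attack stages only."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
    except OSError:
        # Covers connection refused, timeouts, and DNS failures.
        return {"reachable": False, "kb_per_s": 0.0}
    elapsed = max(time.monotonic() - start, 1e-6)
    return {"reachable": True, "kb_per_s": len(body) / 1024 / elapsed}

# Unreachable port on localhost: reports reachable=False quickly.
print(availability_check("http://127.0.0.1:1/"))
```

Running this between stages, and comparing the measured speed against a pre-attack baseline, gives the objective per-stage availability measurement the test plan calls for.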