Business Continuity & Disaster Recovery 101

Even when all else fails, there is still hope! Business Continuity Planning and Disaster Recovery Planning are here as the last resort to protect your business.

Business Continuity Planning (BCP) and Disaster Recovery Planning (DRP) are an organization’s last corrective control when all other controls have failed! BCP/DRP may prevent or remedy the consequences of force majeure circumstances such as injury, loss of life, or failure of an entire organization.

Furthermore, BCP/DRP provide the advantage of being able to view the organization’s critical processes and assets in a different, often clarifying light. Risk analysis conducted during the BCP/DRP planning stage often leads to immediate mitigating actions.

A potentially crippling disaster may end up having no impact at all, thanks to prudent risk management steps taken as a result of thorough BCP/DRP planning.


Developing a Business Continuity Plan and a Disaster Recovery Plan is essential for a company’s responsiveness to, and ability to recover from, an interruption in normal business functions or catastrophic events. To ensure that all aspects of planning have been considered, the BCP/DRP follow a specific set of requirements to review and implement. Below are the high-level steps to achieving a sound, logical BCP/DRP:

  • Define Project Scope;
  • Business Impact Analysis;
  • Identify Preventive Controls;
  • Recovery Strategy;
  • Plan Design and Development;
  • Implementation, Training, and Testing;
  • BCP/DRP Maintenance.
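The Business Impact Analysis step, for instance, boils down to ranking business functions by how long the organization can tolerate losing them. Here is a minimal sketch of that prioritization; the function names, downtimes, and loss figures are entirely hypothetical:

```python
# Hypothetical Business Impact Analysis output: each critical function
# with its maximum tolerable downtime (MTD, hours) and estimated hourly loss.
functions = [
    {"name": "order processing", "mtd_hours": 4,  "loss_per_hour": 20000},
    {"name": "email",            "mtd_hours": 24, "loss_per_hour": 1000},
    {"name": "payroll",          "mtd_hours": 72, "loss_per_hour": 500},
]

# Recover the functions with the shortest tolerable downtime first;
# break ties by the financial impact of each hour of outage.
recovery_order = sorted(functions,
                        key=lambda f: (f["mtd_hours"], -f["loss_per_hour"]))

for rank, f in enumerate(recovery_order, start=1):
    print(f"{rank}. {f['name']} (MTD {f['mtd_hours']}h)")
```

The resulting order then feeds the Recovery Strategy step: the tightest deadlines dictate where recovery resources go first.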

What is the difference between Business Continuity and Disaster Recovery?

Business Continuity Planning ensures the business will continue to operate prior to, during, and after a disaster.

The focus is on the business in its entirety and making sure critical services and functions provided by the business will still be performed, both if threatened by disruption as well as after the threat has subsided.

Organizations need to consider common threats to their critical functions as well as any associated vulnerabilities that might facilitate a significant disruption. Business Continuity Planning is a long-term strategy for continued successful operation despite inevitable threats and disasters.

Disaster Recovery Planning – while Business Continuity Planning is responsible for the strategic, long-term, business-oriented plan for uninterrupted operation when faced with a threat or disruption, Disaster Recovery Planning provides the tactics. In essence, DRP is a short-term plan for dealing with specific IT-oriented outages.

Mitigating a virus infection with a risk of spreading is an example of a specific IT-oriented disruption that a DRP must address. The focus is on efficiently mitigating the outage impact and the immediate response and recovery of critical IT systems. Disaster Recovery Planning provides a means for immediate response to disasters.


The relation between BCP & DRP – the BCP is an all-inclusive plan that contains, among multiple specific plans, the DRP. The two are closely related because their focus and processes overlap in critical ways.

Continued provision of business-critical services in the face of threats is achieved with the aid of the tactical DRP. The plans, with their different scopes, are organically intertwined.

In order to distinguish between a BCP and a DRP, one needs to realize that the BCP is concerned with the business-critical function or service provided by the company, whereas the DRP focuses on the actual systems and their interoperability so that the business function can be performed.


As mentioned before, the Business Continuity Plan is an umbrella plan that contains other plans, in addition to the Disaster Recovery Plan:

Continuity of Operations Plan (COOP) – describes the procedures required to maintain operations during a disaster. This includes the transfer of personnel to an alternative disaster recovery site and operations of that site.

Continuity of Support Plan – focuses narrowly on the support of specific IT systems and applications. It is also called the IT contingency plan, emphasizing IT over general business support.

Cyber Incident Response Plan (CIRP) – designed to respond to disruptive cyber events, including network-based attacks, worms, computer viruses, Trojan horses, etc.

Business Recovery Plan (BRP) – also known as the business resumption plan, details the steps required to restore normal business operations.

Crisis Communications Plan – used for communicating to staff and the public in the event of a disruptive event. Instructions for notifying the affected members of the organization are an integral part of any BCP/DRP.

Occupant Emergency Plan (OEP) – provides the response procedures for occupants of a facility in the event of a situation posing a potential threat to the health and safety of personnel, the environment, or property.

How does the testing work?


The Disaster Recovery Plan must be an actionable prescription for recovery. Writing the plan is not enough; thorough testing is needed. Information systems are in a constant state of flux, with infrastructure, hardware, software, and configuration changes altering the way the DRP needs to be carried out. Testing the details of the DRP will ensure both the initial and continued efficacy of the plan. The tests must be performed at least annually as an absolute minimum.

Review – the most basic form of initial DRP testing. It involves simply reading the DRP in its entirety.

Checklist – also referred to as consistency testing, lists all necessary components required for a successful recovery and ensures that they are, or will be, readily available should a disaster occur.
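A checklist test of this kind lends itself to automation. Below is a minimal sketch, assuming recovery components can be verified by the presence of files at known locations; the component names and paths are hypothetical:

```python
from pathlib import Path

def missing_components(components):
    """Return the names of recovery components whose path does not exist.

    components: mapping of component name -> filesystem path where it
    should be found (backup archives, config exports, contact lists...).
    """
    return [name for name, path in components.items()
            if not Path(path).exists()]

# Hypothetical checklist; in practice this would run on a schedule and
# alert the DRP owner about anything missing *before* a disaster.
checklist = {
    "database backup": "/backups/db/latest.dump",
    "firewall config": "/backups/net/fw-export.xml",
    "contact list":    "/docs/drp/contacts.pdf",
}
for name in missing_components(checklist):
    print(f"MISSING: {name}")
```

The point of the exercise is that availability of components is confirmed continuously, not discovered during recovery.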

Walkthrough/Tabletop – the goal is to talk through the proposed recovery procedures in a structured manner to determine whether there are any noticeable omissions, gaps, erroneous assumptions, or simply technical missteps that would hinder the recovery process from successfully being carried out.

Simulation (aka Walkthrough Drill) – goes beyond talking about the process and actually has teams carry out the recovery process. The team must respond to a simulated disaster as directed by the DRP.

Parallel Processing – involves the recovery of critical processing components at an alternative computing facility, followed by restoring data from a previous backup. Regular production systems are not interrupted.

Partial & Complete Interruption – extreme caution should be exercised before attempting an actual interruption test. This test causes the organization to actually stop processing normal business at the primary location and use an alternative computing facility.

What is an Independent Audit Good For?

An Information Security audit is a comprehensive assessment that evaluates the current state of Information Security in the business and allows timely actions to be planned to increase the level of security.

An Information Security audit is conducted whenever an independent assessment of the current state of Information Security is needed.

Why do you need an internal audit?

There are a number of reasons to perform internal audits, whether one-time, ad hoc, or regular. Some of these may be:

  • If there is a change in the strategy of the company;
  • In case of mergers or acquisitions;
  • When there are significant changes in the organizational structure of the company or change of leadership;
  • When there are new internal or external requirements for Information Security;
  • In the event of significant changes in the business processes and IT infrastructure.


When performing an internal audit, one needs to take into account and adhere to the following “rules”:

  • Analysis of the organizational and administrative documents of the company;
  • Interviews with employees of the organization: representatives from the business units, the administrators and developers of information systems, professionals in Information Security;
  • Inspection of office space in terms of the physical security of the IT infrastructure;
  • Analysis of the configuration settings of hardware and software;
  • Auditing with specialized tools (vulnerability scanners, security analyzers, data-leakage controls, etc.);
  • Penetration testing;
  • Assessment of the knowledge of workers in the field of Information Security.


An additional, specialized examination can be made that takes into account the particularities of the audited company. If necessary, additional information may be collected during the study phase for use in other projects, which will later save the organization resources and help with the distribution of its budget.


Objective – An independent audit is usually performed either due to regulatory requirements or those of third parties wishing to enter into collaborative or supplier relations (an outsourcing partner, for example). Internal audits are usually mandated by management and are more focused on business operations and their continuity.


Auditors – An independent audit is carried out by an external team, while internal audits are performed by members of staff. While the independent auditor may provide a more “fair view” of the current state, the internal audit may reflect a business’s proprietary technological and organizational characteristics more closely, with in-depth findings.


Reporting – Usually, the independent IT audit will result in the main report being in a format required by auditing standards, with a focus on whether the Information Security claims of the company give a true and fair view and comply with requirements. These reports, whether formal or not, are designed to provide a status snapshot, rather than go into detailed recommendations on how to make things better.


Internal audit should produce a tailored report about how the risks and objectives are being managed – with a focus on helping the business move forward. As such, internal audit reports are expected to contain recommendations for improvement of the organization’s Information Security.

SIEM for Beginners

We tend to use a lot of stand-alone systems for the analysis of not-so-easy-to-understand processes, but thorough log analysis and a big picture of what all these systems do together are of great importance.

Let’s talk about Security Information & Event Management, or SIEM for short. Such systems are used to collect and analyze information from a maximum number of sources – DLP systems, IPSs, routers, firewalls, user workstations, servers, and so on. Practical examples of threats that can only be identified correctly by a SIEM:

  • APT attacks – relevant for companies holding valuable information. SIEM is perhaps the only way to detect the beginning of such an attack (while researching the infrastructure, attackers generate traffic at different points, and a SIEM’s security event correlation makes this activity visible);
  • Detection of various anomalies in the network and on individual nodes, the analysis of which is unattainable for other systems;
  • Response to emergency situations and to rapid changes in user behavior.

The principle of “install and forget“ is not applicable here. Absolute protection does not exist, and even the most unlikely risks can backfire, stopping the business and causing huge financial losses. Any software or hardware may fail or be configured incorrectly and let a threat through.


  • Regulatory mandates require log management to maintain an audit trail of activity. SIEMs provide a mechanism to rapidly and easily deploy a log collection infrastructure. Alerting and correlation capabilities also satisfy routine log data review requirements, and SIEM reporting capabilities provide audit support as well;
  • A SIEM can pull data from disparate systems into a single pane of glass, allowing for efficient cross-team collaboration in extremely large enterprises;
  • By correlating process activity and network connections from host machines, a SIEM can detect attacks without ever having to inspect packets or payloads;
  • SIEMs store and protect historical logs, and provide tools to quickly navigate and correlate data, thus allowing for rapid, thorough, and court-admissible forensic investigations.
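To make the correlation idea concrete, here is a minimal sketch of one hypothetical rule: alert when a single source IP produces several failed logins within a sliding time window, regardless of which device reported them. The event format and thresholds are illustrative, not taken from any particular SIEM product:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

def correlate_failed_logins(events, window=timedelta(minutes=5), threshold=3):
    """events: iterable of (timestamp, source_ip, action) tuples, already
    normalized from any log source (firewall, server, workstation)."""
    recent = defaultdict(deque)   # source_ip -> timestamps of failures
    alerts = []
    for ts, ip, action in sorted(events):
        if action != "login_failed":
            continue
        q = recent[ip]
        q.append(ts)
        while ts - q[0] > window:  # drop failures outside the window
            q.popleft()
        if len(q) >= threshold:
            alerts.append((ip, ts))
    return alerts

t0 = datetime(2024, 1, 1, 12, 0)
sample = [
    (t0,                        "10.0.0.5", "login_failed"),
    (t0 + timedelta(minutes=1), "10.0.0.5", "login_failed"),
    (t0 + timedelta(minutes=2), "10.0.0.5", "login_failed"),
    (t0 + timedelta(minutes=1), "10.0.0.9", "login_ok"),
]
print(correlate_failed_logins(sample))  # one alert for 10.0.0.5
```

Real SIEM rules are far richer, but the principle is the same: events from many sources are normalized, grouped, and matched against patterns no single device can see on its own.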


  • Analysis of events and creation of alerts on any network traffic anomalies, unexpected user actions, unidentified devices, etc.;
  • Creation of reports, including ones customized specifically for your needs – for example, a daily report on incidents, a weekly report of the top 10 violators, a report on device performance, etc. Reports are configured flexibly according to their recipients;
  • Monitoring of events from devices / servers / mission-critical systems, with the establishment of appropriate notifications;
  • Logging of all events for gathering evidence, analyzing attack vectors, etc.




The SIEM implementation should leverage a phased approach, with systematic follow-through of the required stages for solution deployment. The typical SIEM implementation phases are:


Assessment – a detailed assessment of the company’s environment must be performed to inventory the existing architecture and identify basic SIEM requirements: understanding the current enterprise security architecture and its critical components, the current tools and procedures used to determine potential risk, and the procedures used to confirm regulatory compliance. This phase also identifies the business objectives the SIEM must meet and captures a clear network map, with an inventory of all devices, to ensure the comprehensiveness of the solution.


Design – a detailed technical SIEM deployment design is created based on the gathered requirements. This includes converting business requirements into conceptual scenarios, creating technical use cases, producing logical and physical SIEM architecture designs, and drafting a SIEM integration project plan.


Implementation – the system must provide real-time, centralized monitoring and correlation over the entire network security infrastructure, notification of and response to harmful security events, sharing of information security event data with all relevant business units, and generation of security event data for forensic purposes.

This phase involves configuring and installing the development environment, implementing technical use cases and the interface component, testing and documenting the system configuration, rolling out to production, and training and knowledge transfer.


Support & Maintenance – as with most systems, a SIEM also needs looking after. Ensuring support for the solution, putting effective 24/7 monitoring in place, and preparing for change management, always with an eye on evolving threats, are all a must.


How much does it all cost? This is a question that cannot be answered in advance. The integrator typically examines the client’s infrastructure and needs, and figures out the client’s budget.

After that, the vendors make offers and the integrator proposes the most suitable one to the customer. This process is needed because compatibility between different vendors is lacking.

Sometimes it is believed that if you have a SIEM, there is no need to install DLP, IDS, vulnerability scanners, etc. In fact, this is not the case. A SIEM can spot anomalies in the network stream, but it cannot perform the in-depth analysis those dedicated systems do. Strictly speaking, a SIEM is useless without other security systems: its main advantage – the collection, storage, and analysis of logs – is reduced to zero without the sources of those logs.

Vulnerability Assessment – Know Your Weaknesses

Relax, we’ll not be talking about personal and psychological vulnerabilities here. Instead, let’s talk about IT, its inherent vulnerabilities and their assessment.

IT vulnerability assessment, also known as vulnerability analysis, is a deliberate effort to define, identify, and classify the security vulnerabilities in a computer, a network, or an entire communications infrastructure. Furthermore, a vulnerability assessment can be used to forecast the effectiveness of proposed countermeasures and to evaluate their actual effectiveness after they are put into use.


Vulnerability assessment is usually the first step taken toward strengthening an organization’s Information Security. Inasmuch as it provides a picture of the open doors or holes in the security landscape, a vulnerability assessment can be a starting point for rationalizing one’s security strategy, policies, etc. Ultimately, the data collected and rationalized fuels the entire Risk Management process.


While methodology, scope, and timing can differ, a Vulnerability Assessment has to follow certain steps:

  • Determine the scope of assessment;
  • Scan the entire network, including all devices;
  • Identify and confirm found vulnerabilities;
  • Classify and determine vulnerability levels;
  • Prepare vulnerability report.
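The scanning and classification steps can be illustrated with a toy script: a TCP connect scan followed by a crude severity label. This is a sketch only; real assessments use dedicated scanners and CVE/CVSS data, and the port-based severity mapping below is purely illustrative:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Connect scan: a port that accepts a TCP connection is 'open'."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means connected
                open_ports.append(port)
    return open_ports

def classify(port):
    # Toy severity label; a real assessment classifies findings using
    # CVE/CVSS data and service fingerprints, not the port number alone.
    high_risk = {21, 23, 445, 3389}   # FTP, Telnet, SMB, RDP
    return "high" if port in high_risk else "needs review"

# Scan the local machine's well-known ports and label whatever is open.
for port in scan_ports("127.0.0.1", range(1, 1025)):
    print(f"port {port}: {classify(port)}")
```

Only ever run such a scan against systems you are authorized to assess; even this trivial version generates the kind of traffic an IDS is built to flag.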


An important part of, or extension to, vulnerability assessment (depending on the underlying philosophy) – Penetration Testing – is usually performed by a white hat using ethical hacking techniques. Using this method to assess vulnerabilities, security experts deliberately probe a network or system to discover its weaknesses. This process can provide guidelines for the development of countermeasures to prevent a genuine attack.

Imagine you’re in a room with many doors and you want to know which of them are locked and which are not. Vulnerability Assessment does just that – it provides a “list” of unlocked doors. These doors could be used to break into an organization’s communication system, inflicting damage and disrupting operations.

The scope of Vulnerability Assessment is usually all-encompassing, spreading over an entire organization or, at least over an entire critical system the organization uses.

Penetration Testing, on the other hand, may follow a narrower scope. Instead of just listing doors, it goes through each unlocked door to see how far one can reach into the system and what impact such entry can have, thus exposing possible vulnerabilities that were not seen in the Vulnerability Assessment of the first “batch” of doors.


Lack of Vision: Creating a plan for vulnerability assessment is not an easy task. As such, you need to look at it from as many sides as possible and explore every aspect of the vulnerabilities found. Being narrow-minded about such an assessment is one of the biggest mistakes you can make. To adequately examine weaknesses in your infrastructure, you need to put yourself in the shoes of the attacker. What better way to do that than to try even the most outrageous ideas for testing and to simulate even the rarest situations? Don’t exclude any idea before seriously considering it. Also keep in mind that having a member of senior management in the room while brainstorming ways to assess vulnerabilities is a bad idea: suddenly ideas stop flowing and people become afraid to explore different possibilities.

Inadequate Compliance: Complying with laws and regulations is not always enough to secure the information infrastructure of your business. Furthermore, in every country there are examples of government legislation, enforced to increase business security, that sometimes address the business environment in an incomplete fashion. The wise and legal thing to do is to fill these gaps with additional, legally permitted measures that go beyond what the legislation requires.

Bad Reporting: A frequently encountered problem is the lack of a reporting technique. It is nothing new for an external consulting company to just drop off a report full of vulnerabilities and problems, leaving the rest to the client. On other occasions, people focus too much on the problem itself without providing any answers for the weak points in the infrastructure. Another example of bad reporting is concentrating only on categorizing and enumerating the problems found, again with no perspective on finding a solution. Creating a report with a detailed categorization of all problems is vital, but it is only half of the work. The other half involves a detailed analysis of the report and the effort to solve the problems found in it.

Knowledge Gained Does Not Enter Corporate Culture: Although a vulnerability assessment report contains security-sensitive information that cannot be shared lightly with employees, this is no reason to keep staff members in the dark. Security is part of the corporate culture and as such must be embraced by everyone in the company, not as a mandatory requirement, but as something they are involved in. Security staff meetings and discussion of security incidents, both in the company and in other companies, will greatly improve the understanding of security as a group effort.


Determining the Information Security risks in a company is a complex and involved task. In a dynamic and integrated environment, locating and assessing threats and vulnerabilities is simply not enough. Therefore, what you need is not just a simple vulnerability assessment but an integrated process of vulnerability management.

What is vulnerability management and how is it different from vulnerability assessment?

Vulnerability assessment will tell you where and what the vulnerabilities are, while vulnerability management will make sure these vulnerabilities are addressed by actionable measures, such as but not limited to the installation of a patch, a change in network security policy, reconfiguration of software (such as a firewall), educating users about social engineering, etc.


Vulnerability management is the ongoing, cyclical practice of identifying, classifying, remedying, and mitigating vulnerabilities. The process is especially important when treating issues related to software and firmware. Vulnerability management is integral to computer and network security and is accompanied by vulnerability assessment, which provides the initial “food for thought”.

Although vulnerabilities are classified by their severity, they are not directly translated to risks in an organization. A high severity vulnerability may or may not be regarded as a critical risk. The risk definitions are handled in the risk assessment process, part of Risk Management activities.
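That distinction can be sketched in code: the same CVSS severity yields a different risk rank depending on asset criticality. The weighting and thresholds below are hypothetical illustrations, not a standard formula:

```python
def risk_rank(cvss_score, asset_criticality):
    """cvss_score: CVSS base score, 0.0-10.0.
    asset_criticality: 1 (disposable test box) to 5 (mission-critical).

    Returns a coarse risk label; the thresholds are illustrative only.
    """
    exposure = cvss_score * asset_criticality   # 0 .. 50
    if exposure >= 35:
        return "critical"
    if exposure >= 20:
        return "high"
    if exposure >= 10:
        return "medium"
    return "low"

# The same high-severity vulnerability (CVSS 9.8):
print(risk_rank(9.8, 5))   # mission-critical server -> critical
print(risk_rank(9.8, 1))   # isolated lab machine    -> low
```

In a real risk assessment, factors such as exposure to the Internet, compensating controls, and exploit availability would also enter the equation; the point here is only that severity and risk are not the same axis.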

DDoS Stress Testing for Increased Resiliency

You’ve heard of DDoS, right? In short, DDoS stress testing is a specific service that helps your organization understand just how well it is prepared for the different DDoS attack vectors that, unfortunately, may come your way. The service consists of simulations of DDoS attacks or high load on your IT infrastructure, carried out in a strictly controlled and pre-scheduled manner. What you get is a detailed report on network and server issues related to DDoS resiliency, along with remediation and mitigation advice on how to harden your DDoS mitigation solution, or how to implement one in case you don’t have it yet.


Today, DDoS is as easy to inflict on a victim as buying a pizza online. It’s cheap and effective too. By stress testing your IT infrastructure, you will be able to identify and plan for mitigating DDoS-related issues before attacks do happen and harm you. You will also gain insight into your incident response procedures and improve them, or simply gain better control over a DDoS mitigation solution you may have. If you’re looking to purchase such a solution, stress testing may help you choose the right vendor for the job.


The stress testing process usually starts with a verification and customization procedure. Real-life DDoS attack vectors are then pointed at the organization’s public-facing IT infrastructure from the outside (real-life scenario) or in a closed environment (on-premise simulation). DDoS attack simulations should be carried out on all applicable layers of the OSI model in a fine-grained, controlled manner, with a “Stop” capability available at all times. The process must be supervised at all times by a member of the service provider’s support team and a representative of the tested organization.
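The “Stop” capability is the key engineering constraint of any such simulation. Below is a minimal, self-contained sketch of one controlled HTTP flood stage; it runs against a throwaway local server so it harms nothing. Pointing a tool like this at any system you do not have written authorization to test is illegal:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

# Throwaway local target so the sketch stays self-contained and harmless.
class Target(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Target)
url = f"http://127.0.0.1:{server.server_address[1]}/"
threading.Thread(target=server.serve_forever, daemon=True).start()

stop = threading.Event()           # the mandatory "Stop" capability
counts = {"ok": 0, "fail": 0}
lock = threading.Lock()

def flood_worker():
    # Keep requesting until the operator (or the test plan) says stop.
    while not stop.is_set():
        try:
            with urlopen(url, timeout=2) as resp:
                good = resp.status == 200
        except OSError:
            good = False           # the target stopped answering
        with lock:
            counts["ok" if good else "fail"] += 1

with ThreadPoolExecutor(max_workers=8) as pool:
    for _ in range(8):
        pool.submit(flood_worker)
    time.sleep(1.0)                # one short, pre-agreed test stage
    stop.set()                     # operator-requested stop

server.shutdown()
print(f"stage done: ok={counts['ok']} fail={counts['fail']}")
```

A real stress testing platform adds distributed traffic sources, volumetric and protocol-level vectors, and availability measurement from a separate legitimate client, but the stage/stop structure is the same.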


Confidentiality, integrity, and availability – also known as the CIA triad (or AIC triad, for those wanting to avoid association with a certain intelligence agency) – is at the heart of Information Security; the three work together to make sure your data and systems remain secure. It is wrong to assume one part of the triad is more important than another: every IT system will require a different prioritization of the three, depending on the data, the user community, and the timeliness required for accessing the data. The opposing forces to the triad concepts are disclosure, alteration, and destruction. Disclosure is the unauthorized disclosure of information, alteration is the unauthorized modification of data, and destruction is making systems unavailable.


Availability keeps information accessible when needed. All systems must be usable (available) for business-as-usual operation. Typical availability attacks are Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks, whose aim is to deny the service (or availability) of a system. Stress testing is how you stay prepared for DDoS attacks and informed of your system’s weaknesses against them.


Determining the readiness of your organization’s IT infrastructure for DDoS attacks through stress testing must include all known attack vectors and possible sources. Remember, DDoS today is cheap and effective, thus the following characteristics of the testing method and approach must be in place:

  • Attack vectors simulating floods generated by real known botnets;
  • Volumetric attacks with unlimited size and adjustable increments;
  • Service-centric selection of floods on the Application layer;
  • Flexible attack timing and combined vector capability;

The attack scope is very important and must (i.) be able to show at least fundamental weaknesses of the target servers and (ii.) comply with your security policies and strategy.

A good stress testing vendor will have the expertise and capacity to employ a wide variety of attack vectors to include, but not limited to various HTTP/HTTPS methods and combinations (GET, POST, HEAD, PUT, DELETE, TRACE, CONNECT, OPTIONS, PATCH, etc.), various attacks on WebDAV protocol, SYN-ACK Floods, ACK or ACK-PUSH Floods, Fragmented ACK Floods, RST/FIN Floods, Same Source/Destination Floods (LAND Attack), Fake Session Attacks, UDP Floods, UDP Fragmentation, ICMP Floods, ICMP Fragmentation Floods, Ping Floods, TOS Floods, IP NULL/TCP NULL Attacks, Smurf/Fraggle Attacks, DNS Floods, NTP Floods, various Amplified (Reflective) attacks, Slow Session Attacks, Slow Read Attacks, Slowloris, HTTP Fragmentation, various types of Excessive Verb (HTTP/HTTPS GET Flood), Excessive Verb – Single Session, Multiple Verb – Single Requests, Recursive GET, Random Recursive GET, various Specially Crafted Packets, etc.


In order to establish perimeter resilience to DDoS attacks from a risk management point of view, proper identification and listing of the assets under threat is required, followed by an assessment of the critical assets’ vulnerability. Generally, DDoS stress testing is performed either externally or internally.

As the name suggests, the external approach simulates a DDoS attack by deploying resources that are very close in nature to a real-life attack, i.e. originating from the Internet. The attacking “botnet” is simulated from a stress testing cloud platform. The maximum volume of the simulated test attacks must be discussed with the client and agreed upon prior to starting the tests. A typical topology for external tests includes a sample legitimate client (a machine used to perform availability tests).


In contrast to external testing, internal DDoS stress testing means performing the simulation at a location within the perimeter of the client network. Flood traffic is generated internally and pointed at resources that are usually part of a purpose-built test environment. A typical network topology for internal testing simulates the Internet with a local network and includes segmented test targets and a simulated legitimate client PC.


When performing DDoS stress testing, it is imperative that a detailed test plan is made available in advance and pre-approved by all parties involved. All tests must be performed in stages, with every stage lasting long enough to perform an availability test and measure an approximate download speed from the target server by connecting to it from the simulated client PC. Tests must be designed so that they can be stopped at your request at any time and stage. It is highly recommended not to perform tests on the production environment, as their behavior and possible aftereffects depend on specific target server settings.