The Purpose of Intrusion Detection & Prevention Systems

An Intrusion Detection System (IDS) is a detective device designed to detect malicious (including policy-violating) actions. An Intrusion Prevention System (IPS) is primarily a preventive device, designed not only to detect but also to block malicious actions.

Depending on their physical location in the infrastructure and the scope of protection required, IDS and IPS fall into two basic types: network-based and host-based. Both serve the same function; which type is deployed depends on strategic considerations.

Why Are IDS and IPS Necessary?

IDS and IPS devices employ technology that analyzes traffic flows to the protected resource in order to detect and prevent exploits and other abuses of vulnerabilities.

These exploits can manifest themselves as ill-intended interactions with a targeted application or service. The goal is to interrupt and gain control of an application or a machine, enabling the attacker either to disable the target, causing a denial of service, or to gain the rights and permissions available through the target.


There are four types of IDS and IPS events: true positive, true negative, false positive, and false negative. The goal of implementing an IDS or IPS is to achieve only true positives and true negatives.

One should keep in mind that most implementations produce false positives, so monitoring engineers spend time investigating non-malicious events, as well as false negatives, which can lead to intrusions. Proper configuration of the system is therefore of crucial importance: it must reflect the organization’s traffic patterns.
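The four outcomes can be sketched in code. This is an illustrative classification helper, not part of any specific IDS product; the event data is made up:

```python
# Hypothetical sketch: mapping a detection decision against ground truth to
# one of the four IDS/IPS event types described above.

def classify_outcome(alert_raised: bool, truly_malicious: bool) -> str:
    """Classify a single detection decision."""
    if alert_raised and truly_malicious:
        return "true positive"    # correctly flagged an attack
    if alert_raised and not truly_malicious:
        return "false positive"   # benign traffic flagged -> analyst time wasted
    if not alert_raised and truly_malicious:
        return "false negative"   # missed attack -> possible intrusion
    return "true negative"        # benign traffic correctly ignored

# Tally outcomes over a batch of (alert_raised, ground_truth) observations.
events = [(True, True), (True, False), (False, True), (False, False), (True, True)]
counts: dict[str, int] = {}
for alert, truth in events:
    outcome = classify_outcome(alert, truth)
    counts[outcome] = counts.get(outcome, 0) + 1

print(counts)
```

In a real deployment the "ground truth" column is what analysts establish during triage; tuning aims to shrink the false-positive and false-negative buckets.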

IDS are designed to provide readiness to prepare for and deal with cyber attacks. This is accomplished through information collected from a variety of systems and network sources, which is then analyzed for security problems. IDS are generally deployed to:

  • Monitor and analyze user and system activity;
  • Audit system configurations and vulnerabilities;
  • Assess the integrity of critical system and data files;
  • Perform statistical analysis of activity patterns based on matching against known attacks;
  • Detect abnormal activity;
  • Audit operating systems.


An IPS is generally deployed in-line and analyzes network packet traffic as it flows through. In this respect it is similar to an IDS – both attempt to match packet data against a signature database or to detect anomalies against what is pre-defined as “normal” traffic.

In addition to this IDS functionality, an IPS does more than log and alert – it reacts to detected anomalies. This ability to react is what generally makes an IPS more desirable than an IDS.
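A minimal sketch of that shared matching logic, plus the IPS's extra reaction step. The signatures and payloads are made-up examples, not real rule syntax:

```python
# Toy signature database: name -> byte pattern. Real engines use far richer
# rule languages (offsets, protocol decoders, regexes), but the core idea
# of matching payload bytes against known-bad patterns is the same.
SIGNATURES = {
    "sql-injection": b"' OR 1=1 --",
    "path-traversal": b"../../etc/passwd",
}

def inspect(payload: bytes, prevention_mode: bool) -> str:
    """IDS and IPS share detection; only the IPS blocks in-line."""
    for name, pattern in SIGNATURES.items():
        if pattern in payload:
            if prevention_mode:           # IPS: log, alert, and drop the packet
                return f"DROP ({name})"
            return f"ALERT ({name})"      # IDS: log and alert only
    return "PASS"

print(inspect(b"GET /login?user=' OR 1=1 --", prevention_mode=True))   # DROP (sql-injection)
print(inspect(b"GET /login?user=' OR 1=1 --", prevention_mode=False))  # ALERT (sql-injection)
print(inspect(b"GET /index.html", prevention_mode=True))               # PASS
```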


Where should the sensors go? The answer depends on the specifics of one’s environment. The most common locations for intrusion detection/prevention sensors are between the network and the extranet, in the Demilitarized Zone (DMZ), between the servers and the user community, and in the remote access, intranet, and database environments. The aim is to establish a network perimeter that covers all possible points of entry.

Once placed, the sensors must be configured to report to a central management console, from which dedicated administrators will manage the sensors, push new or updated signatures, and review logs. To prevent data tampering, the communication between the sensors and the management console must be secured.

The proper identification of mission-critical systems and points of entry requires the following roles in an organization to be involved in any IDS/IPS deployment:

  • Senior Management
  • Information Security Officers
  • Data owners
  • Network Administrators
  • Database Administrators
  • Operating System Administrators

If the key people representing these roles are not involved, resources will not be used efficiently and the resulting measures will be inadequate. It is strongly advisable to perform a vulnerability and risk assessment prior to implementing an IDS or IPS.

Once the IDS is up and operational, logs must be reviewed and detection must be tailored to the specific needs of the company. Remember, traffic that the IDS/IPS perceives as abnormal may be perfectly normal for the environment. The IDS/IPS must be properly maintained and configured on an ongoing basis.


There are times when you may feel you lack knowledgeable staff to deploy and administer an IDS/IPS. This is where vendors come in. Instead of spending considerable time and money figuring out the hows and whys, specialized teams with the required expertise can get you started and train your personnel.

When choosing a vendor, look for a team that:

  • Minimizes false positives by systematically tuning detection to the characteristics of the particular system;
  • Minimizes false negatives. Note that tuning out false positive alarms can introduce false negatives, and that must not happen;
  • Understands what constitutes a security-relevant event and develops proper reporting;
  • Installs and configures a complete solution;
  • Provides and devises methods to test IDS/IPS;
  • Determines the damage caused by a detected attack, limits further damage, and recovers from the attack;
  • Makes your systems scalable to the size required.


What is a Corporate Anti-Virus System Good for?

Antivirus or anti-virus software (AV), sometimes also referred to as anti-malware software, is developed to detect, remove, and prevent the proliferation of malicious code.

The consequences of malware infection in a corporate environment vary widely – from loss of valuable data and theft of confidential information, to sending unsolicited emails and spam, unauthorized remote access to computers, and malicious attacks on servers.


The most commonly used product for endpoint security is antivirus software. Many of today’s integrated endpoint security offerings have evolved over time from the initial development of antivirus software. Anti-virus products are often ridiculed for their continued inability to stop the spread of malicious software.

Unfortunately, there is no perfect remedy or elixir to stop malware, so antivirus products will still be necessary, though insufficient. Antivirus software is a single layer (of many) for defense-in-depth endpoint protection.


Although antivirus vendors often employ heuristic or statistical methods for malware detection, the predominant means of detecting malware is still signature-based. Such approaches require that a malware specimen be available to the antivirus vendor for the creation of a signature. This is an example of application blacklisting. For rapidly changing malware, or malware that has not been previously encountered, signature-based detection is much less successful.
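A toy illustration of signature-based blacklisting, and of why a slightly modified specimen evades it. The "signature" here is just a SHA-256 digest of fabricated sample bytes; real AV signatures are far richer (byte patterns, heuristics, emulation):

```python
import hashlib

# Fabricated blacklist: digests of previously seen malware specimens.
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"malicious sample bytes").hexdigest(),
}

def is_known_malware(file_bytes: bytes) -> bool:
    """Signature match: flag only files whose digest is already blacklisted."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SHA256

print(is_known_malware(b"malicious sample bytes"))  # True: specimen seen before
print(is_known_malware(b"malicious sample byteZ"))  # False: one changed byte evades the signature
```

The second call is the whole problem in miniature: any change to the specimen produces a new digest, so signature databases perpetually lag behind polymorphic malware.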



Antivirus software was originally designed primarily to detect and remove computer viruses – hence the name. With the invention and proliferation of many other types of malware, antivirus products began providing protection against other computer threats as well. Modern antivirus software can protect against malicious Browser Helper Objects, browser hijackers, ransomware, keyloggers, backdoors, rootkits, Trojans, worms, dialers, adware, and spyware.


Integrating comprehensive antivirus protection provides:

  • Control of all possible intrusion channels for viruses – email, HTTP, FTP, external storage media (floppy, CD, DVD, flash cards, etc.), and file servers;
  • Protection against various types of threats – viruses, network and email “worms”, “Trojan horses”, and unwanted programs (spyware, adware, etc.);
  • Scanning of traffic at the Internet gateway, in addition to installation on endpoint devices (servers, workstations), so threats are filtered before reaching the network;
  • Continuous monitoring and periodic anti-virus scans of all servers and workstations;
  • Automatic notification when an “infection” or “disinfection” event occurs;
  • Protection of mobile devices, etc.;
  • Centralized management and software update distribution.


Today’s organizations require a comprehensive, multi-layer, defense-in-depth security strategy to successfully address malware-related issues. A successful antivirus installation will help protect assets and endpoint devices against targeted attacks, prevent data loss and theft, address security policies, and protect vital company information.

Deploying the best antivirus is usually not enough. It must go hand in hand with other controls that ensure the organization is comprehensively protected. As part of building corporate anti-virus protection, look for vendors that offer a range of services whose scope varies with the needs of the client, which may include:

  • Preparation of proposals to support the selection decision, so the customer is protected against risks around compatibility, system scalability, additional hardware capacity, etc.;
  • Deployment of the solution on a limited segment, reducing implementation risks for the customer by using the results of a “pilot” operation;
  • Preparing instructions and guidelines for further development on the basis of the results of the deployment of a limited segment;
  • Installation and configuration of a complete solution;
  • Standardization of requirements for anti-virus protection system with respect to installation, configuration, and operation of its components;
  • Development of instructional (operating) system documents for administrators and users;
  • Development of custom policies;
  • Conducting internal workshops in order to educate all participants.

Can DLP Solve Leakage Problems?

Information security has many faces and comes with a lot of bells and whistles. We have SIEMs, IDSs and IPSs, and of course DLPs.

As some of you may know, DLP (Data Loss Prevention) is an information traffic control mechanism in an enterprise’s information system. The main objective of DLP systems is to prevent the transmission of confidential information outside the information system. Such transfers, often called leaks, can be both intentional and unintentional.

Practice shows that most known leaks (about three quarters) occur not through malicious intent but because of errors, carelessness, or negligence by employees. The remainder are associated with malicious actors and users of the information systems. Understandably, insiders will try to defeat DLP systems; the outcome depends on many factors, and while success cannot be guaranteed, the risks can be greatly reduced. DLP is necessary because organizations hold large amounts of data whose unauthorized diversion could cause significant damage.

The size of the potential damage is not always directly measurable or fully foreseeable in advance. However, in most cases it is sufficient to consider even the basic consequences in order to appreciate the danger posed by leaks.

For example: the release of top-secret information or copies of original documents to the press or other “inconvenient” parties; the cost of PR and subsequent remediation needed to fix problems caused by the leak; reduced trust and the loss of partners and customers; problems with competitors; and the leakage of schemes, technology, know-how, and more.


This is a complex task with numerous considerations. Although a DLP system is, at its core, a technical complex for protecting information from leaks, its scope goes beyond monitoring and blocking users’ actions with protected information. A modern DLP system is also a tool for controlling the exchange of information, the use of information in the company’s electronic files, and other “useful” areas, such as:

  • Control over the sharing not only of confidential information but also of other content of interest (libel, spam, excessive amounts of data, etc.), as well as over the level of business ethics;
  • Tracking employee loyalty, political attitudes, and beliefs; gathering compromising information; tracking any single interest or suspicious object;
  • Early identification of brain drain – timely detection of actions aimed at finding a new job or changing careers, such as exchanging electronic messages containing a resume with external employers, or visiting job-search sites. This lets you monitor employee satisfaction and labor conditions more efficiently and take corrective action sooner;
  • Monitoring misuse of corporate resources and employee time – regular checks for the storage and use of non-work files (audio, video, photos, etc.) and for the misuse of communication channels (e-mail, Internet, instant messaging) for non-business information exchange.


Integrating a DLP, as some of you may already know, is a complicated matter. The main tasks of a DLP are the monitoring and prevention of a number of data transmission scenarios, such as:

  • Transmission of protected information by email (SMTP, including SSL);
  • Transmission of unencrypted data on the Internet (FTP, HTTP, web-mail, chat);
  • Transmission of encryption-protected information on the Internet (HTTPS, SFTP, SCP (SSH), etc.);
  • Transmission of protected information via instant messengers (ICQ, Jabber, Skype, WebEx Connect, QIP, etc.);
  • Copying of protected information to removable media (USB drives, CD/DVD, flash media, etc.) and mobile devices (smartphones, iPhone, iPad);
  • Printing of documents that contain protected information (monitoring and/or blocking printing on local, network, and virtual printers) and copying of such data;
  • Control over user access to documents containing protected information (logging);
  • Archiving of all transmitted information;
  • Monitoring of user search activities;
  • Control of data transfers between servers and workstations;
  • Monitoring of all storage on network shares (shared folders, workflow systems, databases, e-mail archives, etc.).
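As a rough illustration of the content-inspection side of these tasks, here is a hedged sketch of an outbound message scan. The detector patterns are deliberately simplified examples, not production-grade rules; real DLP products use validated detectors, document fingerprints, and context:

```python
import re

# Illustrative detectors: pattern name -> compiled regex.
DETECTORS = {
    "credit-card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn-like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential-label": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of detectors that fired; an empty list means allow."""
    return [name for name, rx in DETECTORS.items() if rx.search(text)]

msg = "Please find attached the CONFIDENTIAL pricing sheet, card 4111 1111 1111 1111"
hits = scan_outbound(msg)
if hits:
    print("BLOCK:", hits)   # a real DLP would quarantine the message and alert here
else:
    print("ALLOW")
```

The same `scan_outbound` idea applies to every channel in the list above (email bodies, printed documents, files copied to removable media); what differs per channel is the interception point, not the inspection.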



It is believed that introducing a DLP system is justified only when the organization has reached a high level of workflow maturity. In particular, it has developed and implemented policies for handling confidential information, compiled a matrix of the data those policies cover, defined role-based access to different kinds of information, etc.

Of course, the presence of all these mechanisms makes the DLP system more effective, but fully implementing a policy for handling confidential information requires substantial elaboration.

For starters, however, a simpler and very useful approach is to highlight the most critical areas.

In this case, we do not try to build an overall picture of handling all types of sensitive data; instead, we designate several repositories of documents intended solely for use within the organization. The system scans all documents held in these repositories at regular intervals and then flags any attempt to move the protected information outside the organization.
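The repository-scanning approach can be sketched as follows. Whole-document hashing is the simplest possible fingerprint and is assumed here for brevity; real products fingerprint overlapping fragments so that partial copies are caught too. The repository contents are fabricated:

```python
import hashlib

def fingerprint(text: str) -> str:
    """Normalize and hash a document; a crude stand-in for real fingerprinting."""
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

# Periodic scan of the protected repository builds the fingerprint index.
repository = {
    "q3-roadmap.docx": "Internal roadmap: launch project X in Q3.",
    "salaries.xlsx": "Salary table for 2024.",
}
protected = {fingerprint(body) for body in repository.values()}

def leaving_perimeter(content: str) -> bool:
    """True if outbound content matches a protected document."""
    return fingerprint(content) in protected

print(leaving_perimeter("internal roadmap: launch project x in q3."))  # True: flag this transfer
print(leaving_perimeter("Public press release text."))                 # False: allow
```

Note the limitation this design inherits from whole-document hashing: pasting one paragraph of a protected document into a new email would slip through, which is exactly why production systems fingerprint fragments.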


As with almost anything, there are multiple ways to tackle an issue. With DLPs, we have two basic approaches.


The first is an integrated approach, offered by companies that have specialized in these technical solutions for years. It costs about $200-500 per workplace to implement, plus on the order of $20-50 per year per license.

This approach, of course, solves the problem more efficiently; it enables integration, now or in the future, with other systems such as SIEM, RMS, and ERP, and it supports compliance with international information security standards.


The second is trying to use free or low-priced products from multiple vendors; these do not solve the problem comprehensively but only close certain channels of communication.

As a result, we obtain a limited solution that works, in principle, over some channels and sometimes even solves the problem. However, the data is neither structured nor consolidated, efficiency suffers seriously, and there may be serious scalability problems. Companies using this approach are eventually forced into the integrated one.

DLP is sometimes required in certification engagements. You may find yourself looking for a DLP while becoming compliant with the GDPR or ISO 27001.

Business Continuity & Disaster Recovery 101

Even when all else fails, there is still hope! Business Continuity Planning and Disaster Recovery Planning are here as the last resort to protect your business.

Business Continuity Planning (BCP) and Disaster Recovery Planning (DRP) are an organization’s last corrective control when all other controls have failed! BCP/DRP may prevent or provide a remedy for force majeure circumstances such as injury, loss of life, or failure of an entire organization.

Furthermore, BCP/DRP provide the advantage of being able to view the organization’s critical processes and assets in a different, often clarifying light. Risk analysis conducted during a BCP/DRP plan stage often leads to immediate mitigating actions.

A potentially crippling disaster may end up having no impact, thanks to prudent risk management steps taken as a result of thorough BCP/DRP plans.


Developing Business Continuity and Disaster Recovery Plans is essential for a company’s responsiveness and ability to recover from an interruption of normal business functions or from catastrophic events. To ensure that all planning has been considered, BCP/DRP have a specific set of requirements to review and implement. Below are the high-level steps to achieving a sound, logical BCP/DRP:

  • Define Project Scope;
  • Business Impact Analysis;
  • Identify Preventive Controls;
  • Recovery Strategy;
  • Plan Design and Development;
  • Implementation, Training, and Testing;
  • BCP/DRP Maintenance.

What Is the Difference Between Business Continuity and Disaster Recovery?

Business Continuity Planning ensures the business will continue to operate before, during, and after a disaster.

The focus is on the business in its entirety and making sure critical services and functions provided by the business will still be performed, both if threatened by disruption as well as after the threat has subsided.

Organizations need to consider common threats to their critical functions as well as any associated vulnerabilities that might facilitate a significant disruption. Business Continuity Planning is a long-term strategy for continued successful operation despite inevitable threats and disasters.

Disaster Recovery Planning – while Business Continuity Planning is responsible for the strategic, long-term, business-oriented plan for uninterrupted operation in the face of a threat or disruption, Disaster Recovery Planning provides the tactics. In essence, the DRP is a short-term plan for dealing with specific IT-oriented outages.

Mitigating a virus infection with a risk of spreading is an example of a specific IT-oriented disruption that a DRP must address. The focus is on efficiently mitigating the outage impact and the immediate response and recovery of critical IT systems. Disaster Recovery Planning provides a means for immediate response to disasters.


The relation between BCP and DRP: the BCP is an all-inclusive plan that contains, among multiple specific plans, the DRP. The distinction matters because the focus and processes of the two overlap critically.

Continual provision of business-critical services in the face of threats is achieved with the aid of the tactical DRP. The plans, with their different scopes, are organically intertwined.

To distinguish between a BCP and a DRP, realize that the BCP is concerned with the business-critical functions and services provided by the company, whereas the DRP focuses on the actual systems and their interoperability that allow those functions to be performed.


As mentioned before, the Business Continuity Plan is an umbrella plan that contains other plans, in addition to the Disaster Recovery Plan:

Continuity of Operations Plan (COOP) – describes the procedures required to maintain operations during a disaster. This includes the transfer of personnel to an alternative disaster recovery site and operations of that site.

Continuity of Support Plan – focuses narrowly on the support of specific IT systems and applications. It is also called the IT contingency plan, emphasizing IT over general business support.

Cyber Incident Response Plan (CIRP) – designed to respond to disruptive cyber events, including network-based attacks, worms, computer viruses, Trojan horses, etc.

Business Recovery Plan (BRP) – also known as the business resumption plan, details the steps required to restore normal business operations.

Crisis Communications Plan – used for communicating to staff and the public in the event of a disruptive event. Instructions for notifying the affected members of the organization are an integral part of any BCP/DRP.

Occupant Emergency Plan (OEP) – provides the response procedures for occupants of a facility in the event of a situation posing a potential threat to the health and safety of personnel, the environment, or property.

How Does the Testing Work?


The Disaster Recovery Plan must be an actionable prescription for recovery. Writing the plan is not enough, thorough testing is needed. Information systems are in a constant state of flux, with infrastructure, hardware, software, and configuration changes altering the way the DRP needs to be carried out. Testing the details of the DRP will ensure both the initial and continued efficacy of the plan. The tests must be performed on an annual basis as an absolute minimum.

Review – the most basic form of initial DRP testing. It involves simply reading the DRP in its entirety.

Checklist – also referred to as consistency testing, lists all necessary components required for a successful recovery and ensures that they are, or will be, readily available should a disaster occur.

Walkthrough/Tabletop – the goal is to talk through the proposed recovery procedures in a structured manner to determine whether there are any noticeable omissions, gaps, erroneous assumptions, or simply technical missteps that would hinder the recovery process from successfully being carried out.

Simulation (aka Walkthrough Drill) – goes beyond talking about the process and actually has teams carry out the recovery process. The team must respond to a simulated disaster as directed by the DRP.

Parallel Processing – involves recovering critical processing components at an alternative computing facility and then restoring data from a previous backup. Regular production systems are not interrupted.

Partial & Complete Interruption – extreme caution should be exercised before attempting an actual interruption test. This test causes the organization to actually stop processing normal business at the primary location and use an alternative computing facility.
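The checklist (consistency) test in particular lends itself to automation: a script that asserts every required component is in place before a disaster forces the issue. The component names and check results below are purely illustrative:

```python
# Hypothetical DRP checklist: component -> whether the latest check passed.
# In practice each value would come from an automated probe (backup age,
# contract status, document version), not a hard-coded boolean.
required_components = {
    "offsite backups current within 24h": True,
    "alternate site contract active": True,
    "recovery runbook version matches production": False,
    "emergency contact list up to date": True,
}

missing = [name for name, ok in required_components.items() if not ok]

if missing:
    print("CHECKLIST FAILED:", missing)
else:
    print("CHECKLIST PASSED")
```

Running such a script on a schedule turns the annual checklist test into continuous assurance that the DRP remains actionable.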

Fight Back with DDoS Mitigation

Have you ever had a server overloaded by incoming traffic – in other words, a denial of service? It is one of the most common cyber attacks, and it aims to shut down one’s online systems.

DDoS (Distributed Denial of Service) is an attack on a computer system that aims to bring the system to failure, i.e., to create conditions under which legitimate users cannot access the victimized resource. Beyond its direct purpose – resource unavailability and failure of the targeted system – it can be used as a step toward taking over the system (a failing system may expose critical information, for example the version of the code) or to mask other, subsequent attacks.


DDoS attacks can be divided into two basic types: attacks on the channel and attacks on the process. In the first, the channel is simply hammered with an overwhelming mass of specially crafted requests; the second exploits software and network protocol vulnerabilities, crippling hardware performance and thus blocking customers’ access to information system resources.



A network DDoS attack is usually carried out by means of a botnet (zombie network). The botnet consists of a large number of computers infected with special malware, usually used without the consent or knowledge of their owners. Commanded from the control center by the attacker, the botnet starts sending many specially forged requests to the target computer. When those requests consume the available resources, legitimate users are blocked.
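That resource-exhaustion mechanism suggests the simplest detection sketch: a sliding-window request counter per source IP. The threshold and window size below are arbitrary example values, and real mitigation must also handle distributed sources that each stay under any per-IP limit:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # example window
MAX_REQUESTS = 100    # example per-IP threshold within the window

windows: dict[str, deque] = defaultdict(deque)

def on_request(src_ip: str, now: float) -> str:
    """Record a request timestamp and flag sources exceeding the threshold."""
    q = windows[src_ip]
    q.append(now)
    # Drop timestamps that have fallen out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return "flagged" if len(q) > MAX_REQUESTS else "ok"

# A bot hammering 150 requests within one second gets flagged;
# a normal user sending a single request passes.
for i in range(150):
    verdict = on_request("203.0.113.9", now=float(i) / 150)
print(verdict)                               # flagged
print(on_request("198.51.100.7", now=5.0))   # ok
```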


Cloud Protection – a service providing DDoS attack protection based on the provider’s infrastructure. All traffic is redirected to the provider’s proxy, where it is filtered and sent back cleansed of DDoS traffic.


Advantages:

  • No need to invest in special equipment, uplinks, training, etc.;
  • Freedom and availability in the choice of a supplier;
  • Diversification of the hosting and protection against DDoS attacks.


Drawbacks:

  • Lack of complete control over what is happening;
  • Potentially unreliable information from the vendor about the attack situation;
  • Traffic is redirected for filtering outside the customer’s infrastructure.

On-Site Protection – protection at the perimeter of the customer’s own infrastructure using specialized equipment – devices acting as a filter on all ingress traffic entering the client’s network.


Advantages:

  • Total control over the mitigation process;
  • A comprehensive view of the attack;
  • No traffic is redirected for filtering outside the customer’s infrastructure.


Drawbacks:

  • Considerable investment in special equipment, uplinks, training, etc.;
  • Protection is limited by uplink capacity;
  • Need to maintain a crew of trained professionals 24/7.

Why is professional mitigation necessary?

  • Using your own existing equipment? Routers and switches will fold under the load, due to insufficient capacity to deal with DDoS. Stateful in-line firewalls and IPSs are not designed to mitigate such attacks – if they can withstand the flood at all, packets simply pass through them.
  • Software solutions that don’t work: the likes of mod_evasive, iptables, Apache / LiteSpeed tuning, kernel tuning are not capable of handling attack size or complexity, thus being useful on a very limited number of occasions.
  • ISPs won’t help. Your service provider has one way to “help”, and that’s to null-route your traffic for a period at their own discretion. You may even get banned for suffering a DDoS attack and bringing others on the shared resource down.
  • Who do you block? Massive numbers of IPs are attacking you; it seems the whole world is after your resource. You need to block all attacking IPs and allow only the good ones. Can you do that? And how?
  • Human-like attack behavior. It’s not just the sheer flood you’re dealing with. L7 attacks mimic the behavior of real users, thus eating CPU and RAM.
  • Bandwidth is not enough to mitigate. Feasibility is important when provisioning bandwidth. How much do you need, and how much can you afford? Is it worth it?
  • Is your team up to speed? With changing attack methods, your team needs to be able to roll with the punches – tweaking defenses, finding solutions. Can they do that? Quickly?
  • Can you isolate the victim? DDoS attacks inflict collateral damage. When you can’t isolate the victim of an attack, the others on the network suffer too.
  • Insufficient insight into attack details. You only see the symptoms; without attack details you know neither the cause nor the solution.


When you have chosen a good cloud DDoS Mitigation service you will benefit from:

Mitigation Invisibility – depending on the DDoS attack type, the vendor must use different bot verification methods, with at least the large majority of them being almost completely invisible to your visitors, so they don’t experience the mitigation as a hindrance.

Search Engine Friendly – It is important to understand that your website needs to remain visible to search engines, so the vendor must provide full support for the most popular search engines. Also, being open to requests for additional search engine support is a plus.

Multi-Gigabit Protection – Sizable network channels distributed over multiple Points of Presence around the world, empowering the mitigation solution to provide performance and scalability to keep the protected resource going.

Multiple Points of Presence – In order to ensure the lowest latency and lag times globally, the vendor will have placed Points of Presence (PoP) in strategic locations announced with BGP Anycast, thus ensuring your visitors’ traffic goes to the cleansing center that is the geographically closest.

And the Rules of Thumb for On-Site DDoS Mitigation

While so-called proxy shield vendors are abundant, the contemporary market for on-premise solutions is represented by a handful of manufacturers and software developers, each claiming to have the best product for meaningful, cost-effective DDoS mitigation.

On-premise DDoS mitigation solutions from today’s vendors consist of server boxes of one to several U’s, which one is expected to place in the data center, switch on, and watch do the job. Unfortunately, that is not always effective against all floods: roughly 98% of today’s DDoS attacks can be mitigated automatically with hardware, but the remaining 2% require qualified human intervention. Why? DDoS methods are constantly changing to find new vulnerabilities in OS, browser, and protocol implementations. As a result, predefined counter-measure strategies don’t always work, and attack floods do get past the mitigation device.


Constant care – The best vendor will offer not just the hardware, but you will also benefit from round-the-clock care so you’re never alone when a new type of flood arrives. The vendor will be able to intervene in times of need, and place a global monitoring system at your disposal to make sure your content is available to the world.

Custom integration – The vendor engineers must assess your needs and current or planned network structure. They must ensure the best fit in your specific scenario, so you get the most out of the “Box”. Look for vendors that have the knowledge and expertise to do that and gladly place it at your disposal.

Flexible manning – A good vendor will man your protection stack with dedicated remote intervention engineers. Alternatively, you must be able to train your own people to monitor and effectively fend off DDoS attacks – the vendor must offer initial and interim training courses for your staff.


TCO spread over time – instead of spending USD 1/2M on hardware in one go, you should be able to spread the cost over easy, affordable monthly payments. You want protection that doesn’t cost an arm and a leg, with pricing based on affordable monthly installments covering hardware, support, upgrades/updates, and manning requirements.

Tailored support – flexibility in choosing your comfort level in receiving and paying for support is an important aspect of choosing a product or service. Most vendors will give you preset levels of support, while a good vendor will estimate your support requirements and offer you only what you need, when you need it.

Upgrades & updates – Total Cost of Ownership (TCO) can be tricky – usually, you’d have to pay for the initial hardware/software configuration and then factor in the upgrade, maintenance, and update expenditures. A good vendor makes it easy and transparent to assess your TCO.


Failover & redundancy – With DDoS attacks, it is not uncommon to see criminals increase flood magnitude when faced with successful mitigation at first, thus you may have to deal with a situation where the “Box” is not the weak point in your setup, but your own uplink capacity. For those times, when you can’t wait to upgrade your uplink, a versatile vendor will offer to switch you over to their global proxy protection service (if they have one).

Linear scalability – A good “Box” comes preconfigured to protect your entire inbound channel from all types of DDoS attacks. Optionally, larger modules should be available so you can increase the capacity by adding additional mitigation modules that feature linear scalability in protection power. Instead of having to replace the entire solution with a more powerful one in order to meet your needs, a good vendor gives you a Lego-like approach to building your defenses as high as you require by simply adding perfectly integrated modules on top of your existing protection configuration.