Can DLP Solve Leakage Problems?

Information security has many faces and comes with a lot of bells and whistles. We have the SIEMs, the IDSes and IPSes, and of course the DLPs.

As some of you may know, DLP (Data Loss Prevention) is a traffic-control mechanism for the information flowing through an enterprise's systems. The main objective of a DLP system is to prevent confidential information from being transmitted outside the information system. Such transfers, often called leaks, can be both intentional and unintentional.

Practice shows that most known leaks (about three-quarters) occur not through malicious intent, but because of errors, carelessness, or negligence on the part of employees. The rest are associated with malicious actors and users of the information systems. Understandably, insiders will try to defeat the DLP system; the outcome depends on many factors and success can never be guaranteed, but the risks can be greatly reduced. DLP is necessary because organizations hold a lot of data whose unauthorized diversion could cause significant damage.

The size of the potential damage is not always directly measurable or fully foreseeable in advance. In most cases, however, considering even the basic consequences is enough to realize the danger posed by leaks.

For example: the release of top-secret information or copies of original documents to the press or other "inconvenient" parties; the cost of PR and the subsequent decisions needed to fix problems caused by the leak; reduced trust and an outflow of partners and customers; problems with competitors; the leakage of schemes, technology, know-how, and more.

HOW TO SCOPE A DLP INTEGRATION?

This is a complex task with numerous things to take into consideration. Although a DLP system is first of all a technical complex for protecting information from leaks, its scope goes beyond monitoring and blocking users' actions with protected information. A modern DLP system is also a tool for controlling the exchange and use of information in the company's electronic files, and for other "useful" areas, such as:

  • Control over the sharing not only of confidential information but also of other content of interest (libel, spam, excessive amounts of data, etc.), and control over the level of business ethics;
  • Tracking the loyalty of employees, their political attitudes and beliefs, gathering compromising information, or tracking any particular interest or suspicious object;
  • Early identification of brain drain: timely detection of actions aimed at finding a new job or career change, such as exchanging messages containing employee information (a resume) with external employers or visiting job-search sites. This lets you monitor employee satisfaction and labor conditions more efficiently and take corrective action sooner;
  • Monitoring the misuse of corporate resources and employee time: regular monitoring of the storage and use of non-work files (audio, video, photos, etc.) and of communication channels (e-mail, Internet, instant messaging) for non-work information exchange.

HOW IS IT DONE?

Integrating a DLP, as some of you may already know, is a complicated matter. The main tasks of a DLP are monitoring and preventing a number of data transmission scenarios, such as:

  • Transmission of protected information by email (SMTP, including SSL);
  • Transmission of unencrypted data on the Internet (FTP, HTTP, web-mail, chat);
  • Transmission of encrypted information on the Internet (HTTPS, SFTP, SCP (SSH), etc.);
  • Transmission of protected information via instant messengers (ICQ, Jabber, Skype, WebEx Connect, QIP, etc.);
  • Copying of protected information to removable media (USB drives, CD/DVD, flash media, etc.) and mobile devices (smartphones, iPhone, iPad);
  • Printing and copying of documents that contain protected information (monitoring and/or blocking printing on local, network, and virtual printers);
  • Control over user access to documents containing protected information (logging);
  • Archiving of all transmitted information;
  • Monitoring of user search activities;
  • Control of data transfers between servers and workstations;
  • Monitoring of all storage on network shares (shared folders, workflow systems, databases, e-mail archives, etc.).
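To give a feel for the monitoring side, content inspection at its simplest is pattern matching on outbound data. Here is a deliberately minimal Python sketch; the patterns and policy names are illustrative, not a real DLP ruleset:

```python
import re

# Hypothetical patterns a DLP policy might flag; real systems combine
# regexes, dictionaries, document fingerprints, and statistical methods.
POLICIES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential_marking": re.compile(r"\b(?:CONFIDENTIAL|TOP SECRET|INTERNAL ONLY)\b", re.I),
    "ssn_like_id": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def inspect_outbound(message: str) -> list[str]:
    """Return the names of every policy the outbound message violates."""
    return [name for name, pattern in POLICIES.items() if pattern.search(message)]
```

A production DLP applies this kind of inspection at every channel listed above, not just email, and pairs it with blocking and logging actions.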

DLP IS NOT JUST FOR THE BIG FISH


It is believed that introducing a DLP system is justified only once the organization has reached a very high level of workflow maturity: in particular, it has developed and implemented policies for handling confidential information, has compiled a classification of the data covered by them, has defined role-based access to different kinds of information, etc.

Of course, the presence of all these mechanisms makes the DLP system more efficient, but fully implementing a policy for handling confidential information requires substantial elaboration.

For starters, however, a simpler and very useful approach is to highlight the most critical areas.

In this case, we are not trying to build an overall picture of how all types of sensitive data are handled; instead, we designate several repositories of documents intended solely for use within the organization. The system scans all documents held within these repositories at regular intervals and then flags any attempt to move the protected information outside the organization.
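The repository-scanning approach can be sketched with exact-match fingerprints: hash every document in the protected repository and compare outbound payloads against those hashes. Real DLP systems also use partial and fuzzy matching; this minimal Python sketch assumes exact copies:

```python
import hashlib
from pathlib import Path

def fingerprint_repository(repo: Path) -> set[str]:
    """SHA-256 every file in the protected repository (exact-match fingerprints)."""
    return {hashlib.sha256(p.read_bytes()).hexdigest()
            for p in repo.rglob("*") if p.is_file()}

def is_protected(outbound: bytes, fingerprints: set[str]) -> bool:
    """True if the outbound payload is a byte-for-byte copy of a protected document."""
    return hashlib.sha256(outbound).hexdigest() in fingerprints
```

The fingerprint set is rebuilt on each scheduled scan, so newly added documents are covered automatically.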

THE 2 WAYS TO SOLVE THE PROBLEM

As with almost anything, there are multiple ways to tackle an issue. With DLPs, we have two basic approaches.

THE RIGHT WAY…

Through an integrated approach, using companies that have specialized in these technical solutions for years. Implementation costs about $200-500 per workstation, plus on the order of $20-50 per year per license.

This approach, of course, solves the problem more efficiently; it enables integration, now or in the future, with other systems such as SIEM, RMS, ERP, etc., and helps ensure compliance with international standards of information security.

THE WRONG WAY…

Trying to use free or low-priced products from multiple vendors. These do not solve the problem comprehensively, but only close certain channels of communication.

As a result, we obtain a limited solution that works over some channels and sometimes even solves the problem. However, the data is neither structured nor consolidated, efficiency suffers seriously, and there may be serious problems with scalability. Companies using this approach are eventually forced into an integrated one.

DLP is sometimes required in certification engagements: you may find yourself looking for a DLP solution when working toward GDPR compliance or ISO 27001 certification.

Business Continuity & Disaster Recovery 101

Even when all else fails, there is still hope! Business Continuity Planning and Disaster Recovery Planning are here as the last resort to protect your business.

Business Continuity Planning (BCP) and Disaster Recovery Planning (DRP) are an organization’s last corrective control when all other controls have failed! BCP/DRP may prevent or provide a remedy for force majeure circumstances such as injury, loss of life, or failure of an entire organization.

Furthermore, BCP/DRP provide the advantage of being able to view the organization’s critical processes and assets in a different, often clarifying light. Risk analysis conducted during a BCP/DRP plan stage often leads to immediate mitigating actions.

A potentially crippling disaster may end up having no impact at all, thanks to prudent risk management steps taken as a result of thorough BCP/DRP planning.

HOW DO YOU BEGIN?

Developing a Business Continuity Plan and a Disaster Recovery Plan is essential for a company's responsiveness and its ability to recover from an interruption of normal business functions or from catastrophic events. To ensure that all planning aspects have been considered, BCP/DRP development follows a specific set of steps. Below are the high-level steps to achieving a sound, logical BCP/DRP:

  • Define Project Scope;
  • Business Impact Analysis;
  • Identify Preventive Controls;
  • Recovery Strategy;
  • Plan Design and Development;
  • Implementation, Training, and Testing;
  • BCP/DRP Maintenance.
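To make the Business Impact Analysis step a bit more tangible: a BIA typically assigns each critical function a maximum tolerable downtime, and that figure drives recovery priority. A toy Python sketch (the functions and hours are invented for illustration):

```python
# Hypothetical output of a Business Impact Analysis: each critical function
# and its maximum tolerable downtime in hours. All names and numbers invented.
bia_results = {
    "order processing": 4,
    "payroll": 72,
    "customer support portal": 24,
    "core banking ledger": 1,
}

# Recovery priority: the shorter the tolerable downtime, the sooner it is restored.
recovery_order = sorted(bia_results, key=bia_results.get)
```

The resulting ordering is exactly what the Recovery Strategy phase builds on: restore the least-tolerant functions first.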

WHAT IS THE DIFFERENCE BETWEEN BUSINESS CONTINUITY AND DISASTER RECOVERY?

Business Continuity Planning ensures the business continues to operate prior to, during, and after a disaster.

The focus is on the business in its entirety and making sure critical services and functions provided by the business will still be performed, both if threatened by disruption as well as after the threat has subsided.

Organizations need to consider common threats to their critical functions as well as any associated vulnerabilities that might facilitate a significant disruption. Business Continuity Planning is a long-term strategy for continued successful operation despite inevitable threats and disasters.

Disaster Recovery Planning – while Business Continuity Planning is responsible for the strategic, long-term, business-oriented plan for uninterrupted operation in the face of a threat or disruption, Disaster Recovery Planning provides the tactics. In essence, the DRP is a short-term plan for dealing with specific IT-oriented outages.

Mitigating a virus infection with a risk of spreading is an example of a specific IT-oriented disruption that a DRP must address. The focus is on efficiently mitigating the outage impact and the immediate response and recovery of critical IT systems. Disaster Recovery Planning provides a means for immediate response to disasters.


The relation between BCP & DRP: the BCP is an all-inclusive plan that contains, among multiple specific plans, the DRP. The distinction matters because the focus and process of the two overlap critically.

Continual provision of business-critical services in the face of threats is achieved with the aid of the tactical DRP. The two plans, with their different scopes, are organically intertwined.

To distinguish between a BCP and a DRP, keep in mind that the BCP is concerned with the business-critical function or service provided by the company, whereas the DRP focuses on the actual systems and their interoperability that allow that business function to be performed.

SOME RELATED PLANS

As mentioned before, the Business Continuity Plan is an umbrella plan that contains other plans, in addition to the Disaster Recovery Plan:

Continuity of Operations Plan (COOP) – describes the procedures required to maintain operations during a disaster. This includes the transfer of personnel to an alternative disaster recovery site and operations of that site.

Continuity of Support Plan – focuses narrowly on the support of specific IT systems and applications. It is also called the IT contingency plan, emphasizing IT over general business support.

Cyber Incident Response Plan (CIRP) – designed to respond to disruptive cyber events, including network-based attacks, worms, computer viruses, Trojan horses, etc.

Business Recovery Plan (BRP) – also known as the business resumption plan, details the steps required to restore normal business operations.

Crisis Communications Plan – used for communicating to staff and the public in the event of a disruptive event. Instructions for notifying the affected members of the organization are an integral part of any BCP/DRP.

Occupant Emergency Plan (OEP) – provides the response procedures for occupants of a facility in the event of a situation posing a potential threat to the health and safety of personnel, the environment, or property.

HOW DOES THE TESTING WORK?

IT STARTS WITH THE DISASTER RECOVERY PLAN

The Disaster Recovery Plan must be an actionable prescription for recovery. Writing the plan is not enough; thorough testing is needed. Information systems are in a constant state of flux, with infrastructure, hardware, software, and configuration changes altering the way the DRP needs to be carried out. Testing the details of the DRP ensures both the initial and the continued efficacy of the plan. At an absolute minimum, the tests must be performed annually.

Review – the most basic form of initial DRP testing. It involves simply reading the DRP in its entirety.

Checklist – also referred to as consistency testing, lists all necessary components required for a successful recovery and ensures that they are, or will be, readily available should a disaster occur.

Walkthrough/Tabletop – the goal is to talk through the proposed recovery procedures in a structured manner to determine whether there are any noticeable omissions, gaps, erroneous assumptions, or simply technical missteps that would hinder the recovery process from successfully being carried out.

Simulation (aka Walkthrough Drill) – goes beyond talking about the process and actually has teams carry out the recovery process. The team must respond to a simulated disaster as directed by the DRP.

Parallel Processing – involves recovering critical processing components at an alternative computing facility and then restoring data from a previous backup. Regular production systems are not interrupted.

Partial & Complete Interruption – extreme caution should be exercised before attempting an actual interruption test. This test causes the organization to actually stop processing normal business at the primary location and use an alternative computing facility.

Fight Back with DDoS Mitigation

Have you ever experienced a server being overloaded by incoming traffic – or, as we would call it, a denial of service? It is one of the most common cyber attacks, and it aims to shut down one's online systems.

DDoS (Distributed Denial of Service) is an attack on a computer system that aims to bring the system to failure, i.e., to create conditions under which legitimate users cannot access the victimized resource. Beyond its direct purpose – making the resource unavailable and the targeted system fail – it can be used as a step toward taking over the system (a failing system may expose critical content, for example the version of the code, etc.) or to mask other, subsequent attacks.

TYPES OF DDOS ATTACKS

DDoS attacks can be divided into two basic types: attacks on the channel and attacks on the process. In the first, the target is simply hammered with an overwhelming mass of specially crafted requests; the second exploits software and network protocol vulnerabilities to cripple hardware productivity, thus blocking customers' access to information system resources.


HOW DO THINGS WORK?

A network DDoS attack is usually carried out by means of a botnet (zombie network). The botnet consists of a large number of computers infected with special malware, usually used without the consent or knowledge of their owners. The botnet is commanded from a control center (by the attacker) to start sending many specially forged requests to the target computer. When these requests consume the available resources, access for legitimate users is blocked.

TYPES OF DDoS MITIGATION SOLUTIONS

Cloud Protection – a service providing DDoS attack protection based on the provider's infrastructure. All traffic is redirected to the provider's proxy, where it is filtered and sent back cleansed of DDoS traffic.

ADVANTAGES

  • No need to invest in special equipment, uplinks, training, etc.;
  • Freedom and availability in the choice of a supplier;
  • Diversification of the hosting and protection against DDoS attacks;

THE DOWNSIDE

  • Lack of complete control over what is happening;
  • Dependence on the vendor for reliable information on the attack situation;
  • Traffic is redirected for filtering outside the customer’s infrastructure;

On-Site Protection – protection at the perimeter of the customer's own infrastructure using specialized equipment: devices acting as a filter for all ingress traffic entering the client's network.

ADVANTAGES

  • Exercise total control over the mitigation process;
  • A comprehensive view of the attack;
  • No traffic is redirected for filtering outside the customer’s infrastructure;

THE DOWNSIDE

  • Considerable investment in special equipment, uplinks, training, etc.;
  • Protection is limited to an uplink capacity;
  • Need to maintain a crew of trained professionals 24/7.

Why is professional mitigation necessary?

  • Using your own existing equipment? Routers and switches will fold under the load, lacking the capacity to deal with DDoS. Stateful in-line firewalls and IPSes are not designed to mitigate such attacks – if they can withstand the flood at all, the packets simply pass through them.
  • Software solutions that don't work: the likes of mod_evasive, iptables, Apache / LiteSpeed tuning, and kernel tuning cannot handle the attack size or complexity, making them useful only on a very limited number of occasions.
  • ISPs won't help. Your service provider has one way to "help", and that's to null-route your traffic for a period at their own discretion. You may even get banned for suffering a DDoS attack and bringing down others on the shared resource.
  • Who do you block? Massive numbers of IPs are attacking you; it seems the whole world is after your resource. You need to block all the attacking IPs and allow only the good ones. Can you do that? And how?
  • Human-like attack behavior. It's not just the sheer flood you're dealing with: L7 attacks mimic the behavior of real users, eating up CPU and RAM.
  • Bandwidth is not enough to mitigate. Feasibility is important when provisioning bandwidth. How much do you need, and how much can you afford? Is it worth it?
  • Is your team up to speed? With ever-changing attack methods, your team needs to be able to roll with the punches – tweaking defenses, finding solutions. Can they do that? Quickly?
  • Can you isolate the victim? DDoS attacks inflict collateral damage. When you can't isolate the victim of an attack, the others on the network suffer too.
  • Insufficient insight into attack details. You only see the symptoms; without attack details you know neither the cause nor the solution.

THINGS TO LOOK FOR WHEN PROCURING MITIGATION SOLUTIONS

When you have chosen a good cloud DDoS Mitigation service you will benefit from:

Mitigation Invisibility – depending on the DDoS attack type, the vendor must use different bot-verification methods, with at least the large majority of them being almost completely invisible to your visitors, so they don't "feel" the mitigation as a hindrance.

Search Engine Friendly – It is important to understand that your website needs to remain visible to search engines, so the vendor must provide full support for the most popular search engines. Also, being open to requests for additional search engine support is a plus.

Multi-Gigabit Protection – Sizable network channels distributed over multiple Points of Presence around the world, empowering the mitigation solution to provide performance and scalability to keep the protected resource going.

Multiple Points of Presence – to ensure the lowest latency and lag times globally, the vendor will have placed Points of Presence (PoPs) in strategic locations announced with BGP Anycast, ensuring your visitors' traffic goes to the geographically closest cleansing center.

And the rules of thumb for On-Site DDoS Mitigation

While so-called proxy-shield vendors are abundant, the contemporary market supply of on-premise solutions is represented by a handful of manufacturers and software developers, each claiming to have the best product for meaningful, cost-effective DDoS mitigation.

On-premise DDoS mitigation solutions from today's vendors consist of server boxes of one to several U's, which one is expected to place in a data center, switch on, and watch do the job. Unfortunately, that is not always effective against every flood: some 98% of today's DDoS attacks can be mitigated automatically with hardware, but the remaining 2% require qualified human intervention. Why? DDoS methods constantly change to exploit new vulnerabilities in OS, browser, and protocol implementations. As a result, predefined counter-measure strategies don't always work, and attack floods do get past the mitigation device.

THE BOX

Constant care – The best vendor will offer not just the hardware, but you will also benefit from round-the-clock care so you’re never alone when a new type of flood arrives. The vendor will be able to intervene in times of need, and place a global monitoring system at your disposal to make sure your content is available to the world.

Custom integration – The vendor engineers must assess your needs and current or planned network structure. They must ensure the best fit in your specific scenario, so you get the most out of the “Box”. Look for vendors that have the knowledge and expertise to do that and gladly place it at your disposal.

Flexible manning – A good vendor will man your protection stack with dedicated remote intervention engineers. Alternatively, you must be able to train your own people to monitor and effectively fend off DDoS attacks – the vendor must offer initial and interim training courses for your staff.

THE PRICE

TCO spread over time – instead of spending half a million USD on hardware in one go, you should be able to spread the cost over easy, affordable monthly payments. You want to be protected without it costing an arm and a leg, with pricing based on monthly installments covering hardware, support, upgrades/updates, and manning requirements.

Tailored support – flexibility in choosing the comfort level of the support you receive and pay for is an important aspect of choosing a product or service. Most vendors offer preset support levels, while a good vendor will estimate your support requirements and offer you only what you need, when you need it.

Upgrades & updates – Total Cost of Ownership (TCO) can be tricky – usually, you’d have to pay for the initial hardware/software configuration and then factor in the upgrade, maintenance, and update expenditures. A good vendor makes it easy and transparent to assess your TCO.

THE OPTIONS

Failover & redundancy – with DDoS attacks, it is not uncommon for criminals to increase the flood magnitude when their first attempt is successfully mitigated, so you may face a situation where the weak point in your setup is not the "Box" but your own uplink capacity. For those times when you can't wait to upgrade your uplink, a versatile vendor will offer to switch you over to their global proxy protection service (if they have one).

Linear scalability – A good “Box” comes preconfigured to protect your entire inbound channel from all types of DDoS attacks. Optionally, larger modules should be available so you can increase the capacity by adding additional mitigation modules that feature linear scalability in protection power. Instead of having to replace the entire solution with a more powerful one in order to meet your needs, a good vendor gives you a Lego-like approach to building your defenses as high as you require by simply adding perfectly integrated modules on top of your existing protection configuration.

What is an Independent Audit Good For?

An Information Security audit is a comprehensive assessment carried out to evaluate the current state of Information Security in the business and to plan timely actions to increase the level of security.

An Information Security audit is conducted whenever an independent assessment of the state of Information Security is needed.

Why do you need an internal audit?

There are a number of reasons to perform internal audits, whether one-time, ad hoc, or regular. Some of these may be:

  • If there is a change in the strategy of the company;
  • In case of mergers or acquisitions;
  • When there are significant changes in the organizational structure of the company or change of leadership;
  • When there are new internal or external requirements for Information Security;
  • In the event of significant changes in the business processes and IT infrastructure.

THE RULES OF AUDIT

When performing an internal audit, one needs to take into account and adhere to the following “rules”:

  • Analysis of the organizational and administrative documents of the company;
  • Interviews with employees of the organization: representatives from the business units, the administrators and developers of information systems, professionals in Information Security;
  • Inspection of office premises with regard to the physical security of the IT infrastructure;
  • Analysis of the configuration settings of hardware and software;
  • Instrumental checks using specialized tools (security scanners, security analysis, information leakage control, etc.);
  • Penetration testing;
  • Assessment of the knowledge of workers in the field of Information Security.
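The "analysis of configuration settings" item lends itself to partial automation. As a minimal illustration, here is a Python sketch that checks an sshd_config against expected hardening values; the chosen directives and values are common recommendations used purely as an example, not an official baseline:

```python
# Expected values for a few sshd_config directives; these are common hardening
# recommendations, used here purely as an example, not an official baseline.
EXPECTED = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "X11Forwarding": "no",
}

def audit_sshd_config(text: str) -> list[str]:
    """Return a finding for every directive that deviates from the expected value."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split(None, 1)
        if len(parts) == 2:
            settings[parts[0]] = parts[1]
    return [f"{key}: expected {want!r}, found {settings.get(key)!r}"
            for key, want in EXPECTED.items() if settings.get(key) != want]
```

An auditor would run checks like this across many services and feed the findings into the report, alongside the interviews and physical inspections.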


An additional, special examination can be made that takes into account the particularities of the audited company. If necessary, additional information needed for other projects may be collected during the study phase, which will later save the organization resources and help with budget allocation.

INDEPENDENT vs. INTERNAL IT AUDIT

Objective – An independent audit is usually performed either due to regulatory requirements or those of third parties wishing to enter into collaborative or supplier relations – an outsourcing partner, for example. Internal audits are usually mandated by management and are more focused on business operations and their continuity.


Auditors – An independent audit is carried out by an external team, while internal audits are performed by members of staff. While the independent auditor may provide a more “fair view” of the current state, the internal audit may reflect a business’s proprietary technological and organizational characteristics more closely, with in-depth findings.


Reporting – Usually, the independent IT audit will result in the main report being in a format required by auditing standards, with a focus on whether the Information Security claims of the company give a true and fair view and comply with requirements. These reports, whether formal or not, are designed to provide a status snapshot, rather than go into detailed recommendations on how to make things better.


Internal audit should produce a tailored report about how the risks and objectives are being managed – with a focus on helping the business move forward. As such, internal audit reports are expected to contain recommendations for improvement of the organization’s Information Security.

SIEM for Beginners

We tend to use a lot of stand-alone systems to analyze processes that are not easy to understand, but thorough log analysis and a big-picture view of what all the systems are doing together is of great importance.

Let's talk about Security Information & Event Management, or SIEM for short. Such systems collect and analyze information from the maximum possible number of sources – DLP systems, IPS, routers, firewalls, user workstations, servers, and so on. Practical examples of threats that can only be identified correctly by a SIEM:

  • APT attacks – relevant for companies holding valuable information. A SIEM is perhaps the only way to detect the beginning of such an attack: while researching the infrastructure, attackers generate traffic at different points, and the SIEM's security event correlation makes this activity visible;
  • Detection of various anomalies in the network and on individual nodes, the analysis of which is unattainable for other systems;
  • Response to emergency situations and to rapid changes in user behavior.

The principle of "deploy and forget" is not applicable here. Absolute protection does not exist; even the most unlikely risks can materialize, stop the business, and cause huge financial losses. Any software or hardware may fail or be configured incorrectly and let a threat through.

WHAT’S THE NEED FOR INFORMATION SECURITY AND EVENT MANAGEMENT?

  • Regulatory mandates require log management to maintain an audit trail of activity. SIEMs provide a mechanism to rapidly and easily deploy a log collection infrastructure. Alerting and correlation capabilities also satisfy routine log-data review requirements, and SIEM reporting capabilities provide audit support as well;
  • A SIEM can pull data from disparate systems into a single pane of glass, allowing for efficient cross-team collaboration in extremely large enterprises;
  • By correlating process activity and network connections from host machines, a SIEM can detect attacks without ever having to inspect packets or payloads;
  • SIEMs store and protect historical logs, and provide tools to quickly navigate and correlate data, allowing for rapid, thorough, and court-admissible forensic investigations.
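The correlation idea behind these points can be sketched in a few lines of Python; the event shapes, sources, and threshold are invented for illustration:

```python
from collections import defaultdict

# Hypothetical normalized events, as a SIEM collector might emit them.
events = [
    {"source": "vpn",  "user": "alice", "action": "login_failed"},
    {"source": "vpn",  "user": "alice", "action": "login_failed"},
    {"source": "mail", "user": "alice", "action": "login_failed"},
    {"source": "vpn",  "user": "alice", "action": "login_ok"},
    {"source": "mail", "user": "bob",   "action": "login_ok"},
]

def correlate(events, threshold=3):
    """Flag users whose failed logins, counted across all sources, reach the
    threshold before a successful login: a pattern no single system sees alone."""
    failures = defaultdict(int)
    flagged = set()
    for e in events:
        if e["action"] == "login_failed":
            failures[e["user"]] += 1
        elif e["action"] == "login_ok" and failures[e["user"]] >= threshold:
            flagged.add(e["user"])
    return flagged
```

The VPN and mail servers each see too few failures to alert on their own; only the aggregated view crosses the threshold.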

HOW TO SCOPE A SIEM INTEGRATION?

  • Analysis of events and creation of alerts for any network traffic anomalies, unexpected user actions, unidentified devices, etc.;
  • Creation of reports, including ones customized specifically for your needs – for example, a daily report on incidents, a weekly report of the top 10 violators, a report on device performance, etc. Reports are configured flexibly according to their recipients;
  • Monitoring of events from devices / servers / mission-critical systems, with the establishment of appropriate notifications;
  • Logging of all events, for gathering evidence, analyzing attack vectors, etc.
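As a toy illustration of the reporting item above, a "top violators" summary can be produced from normalized incident records (the records and policy names here are invented):

```python
from collections import Counter

# Hypothetical incident records: (user, violated_policy) pairs.
incidents = [
    ("carol", "usb_copy"), ("dave", "webmail_upload"), ("carol", "print_confidential"),
    ("erin", "usb_copy"), ("carol", "usb_copy"), ("dave", "usb_copy"),
]

def top_violators(incidents, n=10):
    """Rank users by incident count: the 'top N violators' weekly report."""
    return Counter(user for user, _ in incidents).most_common(n)
```

In a real deployment the same aggregation runs over the SIEM's event store and is scheduled and routed per recipient.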

HOW THE SIEM FUNCTIONS

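At its core, a SIEM collects raw logs from heterogeneous sources, normalizes them into a common schema, and only then correlates them. A minimal normalization sketch in Python; the log formats and parsers are invented for illustration:

```python
import re

# Hypothetical raw lines from two different sources; the formats are invented
# for illustration, not real product output.
RAW = [
    "fw01 DROP src=203.0.113.7 dst=10.0.0.5 port=445",
    "sshd[812]: Failed password for admin from 203.0.113.7",
]

# One parser per source turns heterogeneous lines into a common schema.
PARSERS = [
    (re.compile(r"^(?P<host>\S+) DROP src=(?P<src>\S+) dst=\S+ port=\d+"), "firewall_drop"),
    (re.compile(r"Failed password for (?P<user>\S+) from (?P<src>\S+)"), "auth_failure"),
]

def normalize(line):
    """Map a raw log line to an {'event': ..., fields...} record, or None."""
    for pattern, event in PARSERS:
        m = pattern.search(line)
        if m:
            return {"event": event, **m.groupdict()}
    return None

records = [normalize(line) for line in RAW]
```

Once both records share the `src` field, correlation rules can tie the firewall drop and the failed login to the same attacker.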

DESIGN & INTEGRATION STEPS

The SIEM implementation should leverage a phased approach, with systematic follow-through of the required stages for solution deployment. The typical SIEM implementation phases are:

REQUIREMENTS GATHERING & ASSESSMENT

A detailed assessment of the company's environment must be performed with the goal of inventorying the existing architecture and identifying basic SIEM requirements: understanding the current enterprise security architecture and its critical components, the current tools and procedures used to determine potential risk, and the procedures used to confirm regulatory compliance. This includes identifying the business objectives to be met by the SIEM, as well as capturing a clear network map with an inventory of all devices, to ensure the solution is comprehensive.

SYSTEM DESIGN

A detailed technical SIEM deployment design is then created, based on the gathered requirements: converting business requirements into conceptual scenarios, creating technical use cases, producing logical and physical SIEM architecture designs, and drawing up the SIEM integration project plan.

INTEGRATION ACTIVITIES

The system must provide real-time, centralized monitoring and correlation over the entire network security infrastructure, along with notification of and response to harmful security events, sharing of information security event data with all relevant business units, and generation of security event data for forensic purposes.

This phase involves configuring and installing the development environment, implementing the technical use cases and the interface component, testing and documenting the system configuration, rolling out to production, and training and knowledge transfer.

POST-DEPLOYMENT ACTIVITIES

As with most systems, a SIEM needs looking after. Ensuring support for the solution, putting effective 24/7 monitoring in place, and establishing change management, always with an eye on evolving threats, are all a must.

CHOOSING A VENDOR

This is a question that cannot be answered in advance. The integrator typically examines the client's infrastructure and needs, and figures out the client's budget.

After that, the vendors make offers and the integrator proposes the most suitable one to the customer. This step is needed because there is little compatibility between different vendors.

Sometimes it is believed that if you have a SIEM, there is no need to install DLP, IDS, vulnerability scanners, etc. In fact, this is not the case. A SIEM can spot anomalies in the network stream, but it cannot perform the deeper analysis itself. Strictly speaking, a SIEM is useless without other security systems: its main advantage – the collection, storage, and analysis of logs – is reduced to zero without the sources of those logs.