Misleading and defeating importance-scanning malware propagation

On the impact of dynamic addressing on malware propagation

Proceedings of the 4th ACM workshop on Recurring malcode, 2006

While malware models have become increasingly accurate over the past few years, none of the existing proposals accounts for the use of Network Address Translation (NAT). This oversight is problematic since many network customers use NAT in their local networks. In fact, measurements we collected from a distributed honeynet show that approximately 19% of the infected hosts reside in NATted domains. To account for this fact, we present a model that can be used to understand the impact of varying levels of NAT deployment on malware that spread by preferentially scanning the IP space. Using this model, we show that NATting impedes malware propagation in several ways and can have a significant impact on non-uniform scanning worms as it invalidates the implicit assumption that vulnerable hosts reside in densely populated subnets.
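
As a rough illustration of the effect this abstract describes, the toy simulation below (hypothetical parameters, not the authors' model) clusters vulnerable hosts into a few dense groups, hides a fraction of them behind NAT so that inbound scans can never reach them, and measures how a subnet-preferential scanner fares as the NAT fraction grows.

```python
import random

# Toy illustration with hypothetical parameters (not the authors' model):
# vulnerable hosts cluster in a few dense /16-like groups, and a fraction of
# them sit behind NAT, so inbound scans can never reach them.
GROUPS, GROUP_SIZE = 256, 4096
DENSE_GROUPS, VULN_PER_GROUP = 16, 200

def spread(nat_fraction, local_pref=0.8, scans=50, rounds=30, seed=1):
    rng = random.Random(seed)
    vulnerable = {(g, a)
                  for g in rng.sample(range(GROUPS), DENSE_GROUPS)
                  for a in rng.sample(range(GROUP_SIZE), VULN_PER_GROUP)}
    # NATted vulnerable hosts exist but are unreachable from the outside.
    reachable = {h for h in vulnerable if rng.random() > nat_fraction}
    infected, half_round = {next(iter(reachable))}, None
    for r in range(rounds):
        new = set()
        for (g, _) in infected:
            for _ in range(scans):
                tg = g if rng.random() < local_pref else rng.randrange(GROUPS)
                tgt = (tg, rng.randrange(GROUP_SIZE))
                if tgt in reachable:
                    new.add(tgt)
        infected |= new
        if half_round is None and len(infected) * 2 >= len(reachable):
            half_round = r
    return len(infected), len(reachable), half_round

for nat in (0.0, 0.2, 0.5):
    infected, reachable, half = spread(nat)
    print(f"NAT fraction {nat}: {infected}/{reachable} reachable hosts infected, "
          f"50% reached at round {half}")
```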

Importance-scanning worm using vulnerable-host distribution

2005

Most Internet worms use random scanning. The distribution of vulnerable hosts on the Internet, however, is highly non-uniform over the IP-address space. This implies that random scanning wastes many scans on invulnerable addresses, and more virulent scanning schemes may take advantage of the non-uniformity of a vulnerable-host distribution. Questions then arise as to how attackers may make use of such information, and how virulent the resulting worm may be.
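
The core mechanism can be sketched in a few lines. In the sketch below (all densities are made-up numbers, not measurements from the paper), the scanner samples a /8-like group with probability proportional to an estimate of how many vulnerable hosts that group contains, and the closing calculation compares the expected per-scan hit probability against uniform scanning.

```python
import random

# Minimal sketch of importance scanning; all densities are hypothetical.
rng = random.Random(0)
GROUPS, HOSTS_PER_GROUP = 256, 2 ** 24
vuln_count = [0] * GROUPS
for g in rng.sample(range(GROUPS), 20):           # 20 "dense" groups (assumed)
    vuln_count[g] = rng.randint(1_000, 5_000)     # vulnerable hosts in group g
total = sum(vuln_count)

def importance_target():
    """Pick a group proportionally to its estimated vulnerable-host count."""
    g = rng.choices(range(GROUPS), weights=vuln_count, k=1)[0]
    return g, rng.randrange(HOSTS_PER_GROUP)

def uniform_target():
    """Classic random scanning: every address is equally likely."""
    return rng.randrange(GROUPS), rng.randrange(HOSTS_PER_GROUP)

# Expected per-scan hit probability, taking vuln_count as the true densities:
p_uniform = total / (GROUPS * HOSTS_PER_GROUP)
p_importance = sum((c / total) * (c / HOSTS_PER_GROUP) for c in vuln_count)
print(f"uniform: {p_uniform:.2e}   importance: {p_importance:.2e}")
```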

Defending against hitlist worms using network address space randomization

2005

Worms are self-replicating malicious programs that represent a major security threat for the Internet, as they can infect and damage a large number of vulnerable hosts at timescales where human responses are unlikely to be effective. Sophisticated worms that use precomputed hitlists of vulnerable targets are especially hard to contain, since they are harder to detect, and spread at rates where even automated defenses may not be able to react in a timely fashion. This paper examines a new proactive defense mechanism called Network Address Space Randomization (NASR) whose objective is to harden networks specifically against hitlist worms. The idea behind NASR is that hitlist information could be rendered stale if nodes are forced to frequently change their IP addresses. NASR limits or slows down hitlist worms and forces them to exhibit features that make them easier to contain at the perimeter. We explore the design space for NASR and present a prototype implementation as well as preliminary experiments examining the effectiveness and limitations of the approach.
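
A back-of-envelope model (an assumption of ours, not the paper's analysis) makes the staleness intuition concrete: if hosts re-randomize their addresses after an exponentially distributed interval with mean T, the fraction of a precomputed hitlist that has gone stale grows as 1 - exp(-t/T) with the hitlist's age t.

```python
import math

# Back-of-envelope sketch under an assumed exponential re-addressing model:
# a hitlist entry collected at time 0 still points at the same host at time t
# with probability exp(-t / T), where T is the mean address lifetime.
def stale_fraction(hitlist_age_hours, mean_address_lifetime_hours):
    return 1.0 - math.exp(-hitlist_age_hours / mean_address_lifetime_hours)

for age in (6, 24, 72):
    print(f"hitlist age {age:>2} h, address lifetime 24 h: "
          f"{stale_fraction(age, 24):.0%} of entries stale")
```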

A self-learning worm using importance scanning

2005

The use of side information by an attacker can help a worm speed up its propagation. This philosophy has been the basis for advanced worm scanning mechanisms such as hitlist scanning, routable scanning, and importance scanning. Some of these scanning methods use information on vulnerable hosts. Such information, however, may not be easy to collect before a worm is released.
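
The general idea, which is all the sketch below attempts to capture (it is not the paper's algorithm, and every parameter is invented), is that a worm can learn the vulnerable-host distribution as it spreads: it records the group of every host it infects and gradually biases future scans toward groups that have already yielded victims.

```python
import random
from collections import Counter

# Illustrative sketch only (not the paper's algorithm, parameters invented):
# the worm starts with uniform scanning, records the /16-like group of every
# infection, and increasingly redirects scans toward groups that already
# yielded victims.
GROUPS = 65536

class SelfLearningScanner:
    def __init__(self, learned_share=0.7, seed=0):
        self.rng = random.Random(seed)
        self.hits = Counter()               # infections observed per group
        self.learned_share = learned_share  # share of scans that follow the estimate

    def record_infection(self, group):
        self.hits[group] += 1

    def next_group(self):
        if self.hits and self.rng.random() < self.learned_share:
            groups = list(self.hits)
            weights = [self.hits[g] for g in groups]
            return self.rng.choices(groups, weights=weights, k=1)[0]
        return self.rng.randrange(GROUPS)   # exploratory uniform scan

# After a handful of infections in two groups, most scans target those groups:
s = SelfLearningScanner()
for g in (4660, 4660, 4660, 9001):
    s.record_infection(g)
print(Counter(s.next_group() for _ in range(1000)).most_common(3))
```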

Internet-scale malware mitigation

Proceedings of the 4th ACM workshop on Recurring malcode, 2006

Security on the Internet today is treated mostly as a data plane problem. IDSs, firewalls, and spam filters all operate on the simple principle of detecting malicious data plane behavior and erecting data plane filters. In this paper we explore how breaking down the barrier between the control and data plane can significantly enhance our understanding of how to detect and filter Internet threats like worms and botnets. Our investigation is guided by two specific goals: using information and anomalies detected on the data plane to inform control plane decision support and using anomalies detected on the control plane to inform data plane filtering. We begin by analyzing the sources of persistent worms and other persistent malicious and misconfigured data plane traffic to understand the scope of this behavior on the control plane. We then analyze how anomalies on the control plane associated with poorly managed networks correlate with the sources of malicious and misconfigured traffic detected on the data plane. Our results show that malicious and misconfigured data plane behavior is widely spread across the control plane, suggesting that constructing only a few control plane filters to block the most infected organizations is ineffective. We demonstrate that networks with data plane anomalies tend to exhibit more routing misconfigurations. Finally, we discuss how these correlations could be used to reject or filter routes and help stop recurring threats like persistent worms.

A Lustrum of Malware Network Communication: Evolution and Insights

2017 IEEE Symposium on Security and Privacy (SP), 2017

Both the operational and academic security communities have used dynamic analysis sandboxes to execute malware samples for roughly a decade. Network information derived from dynamic analysis is frequently used for threat detection, network policy, and incident response. Despite these common and important use cases, the efficacy of the network detection signal derived from such analysis has yet to be studied in depth. This paper seeks to address this gap by analyzing the network communications of 26.8 million samples that were collected over a period of five years. Using several malware and network datasets, our large scale study makes three core contributions. (1) We show that dynamic analysis traces should be carefully curated and provide a rigorous methodology that analysts can use to remove potential noise from such traces. (2) We show that Internet miscreants are increasingly using potentially unwanted programs (PUPs) that rely on a surprisingly stable DNS and IP infrastructure. This indicates that the security community is in need of better protections against such threats, and network policies may provide a solid foundation for such protections. (3) Finally, we see that, for the vast majority of malware samples, network traffic provides the earliest indicator of infection, several weeks and often months before the malware sample is discovered. Therefore, network defenders should rely on automated malware analysis to extract indicators of compromise and not to build early detection systems.

Hotspots: The Root Causes of Non-Uniformity in Self-Propagating Malware

International Conference on Dependable Systems and Networks (DSN'06)

Self-propagating malware like worms and bots can dramatically impact the availability and reliability of the Internet. Techniques for the detection and mitigation of Internet threats using content prevalence and scan detectors are based on assumptions of how threats propagate. Some of these assumptions have recently been called into question by observations of huge discrepancies in the quantity of specific threats detected at different points around the Internet. We call these deviations from uniform propagation "hotspots". This paper quantifies and explains these hotspots, proposing that they arise from two fundamental influences on propagation: algorithmic factors and environmental factors. We use measurement data from sensors deployed at 11 locations around the Internet to demonstrate the impact of these factors on worm and bot propagation. With this understanding, we simulate the outbreak of new threats with hotspots and show how algorithmic and environmental factors reduce the visibility of distributed detectors, resulting in an inability to identify new threats.
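
A small expected-value calculation (hypothetical populations and preference values, not the paper's data) shows how the algorithmic factor alone skews sensor visibility: a darknet whose group contains no infected population sees only the worm's small uniform remainder, while a sensor co-located with an infected population is flooded.

```python
# Hypothetical populations and scan preference (not the paper's measurements):
# expected number of scans landing in a sensor's /16-like group when infected
# hosts direct a share of their scans at their own group.  A darknet in that
# group observes a fixed fraction of these, so the relative skew is what matters.
GROUPS = 256
infected_per_group = {10: 500, 11: 300}       # assumed infected populations
LOCAL_PREF, SCANS_PER_HOST = 0.7, 1000

def expected_scans_into(sensor_group):
    total = 0.0
    for g, n in infected_per_group.items():
        local = LOCAL_PREF if g == sensor_group else 0.0
        remote = (1 - LOCAL_PREF) / GROUPS    # uniform remainder of the scans
        total += n * SCANS_PER_HOST * (local + remote)
    return total

for s in (10, 11, 200):
    print(f"sensor group {s:>3}: ~{expected_scans_into(s):,.0f} scans land nearby")
```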

Limiting Self-Propagating Malware Based on Connection Failure Behavior

Self-propagating malware (e.g., an Internet worm) exploits security loopholes in software to infect servers and then uses them to scan the Internet for more vulnerable servers. While the mechanisms of worm infection and their propagation models are well understood, defense against worms remains an open problem. One branch of defense research investigates the behavioral differences between worm-infected hosts and normal hosts in order to set them apart. One particular observation is that a worm-infected host, which scans the Internet with randomly selected addresses, has a much higher connection-failure rate than a normal host. Rate-limit algorithms have been proposed to control the spread of worms by traffic shaping based on connection failure rate. However, these rate-limit algorithms can work properly only if the failure rates of individual hosts can be measured efficiently and accurately. This paper points out a serious problem in the prior method and proposes a new solution based on a highly efficient double-bitmap data structure, which imposes only a small memory footprint on routers while providing good measurements of connection failure rates whose accuracy can be tuned by system parameters.
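
To give a flavour of bitmap-based failure measurement, here is a minimal per-source sketch using two small bitmaps and linear counting; it is not the paper's double-bitmap design (which shares bits across sources to cut memory further), and all sizes and thresholds are assumptions.

```python
import hashlib
import math

# Illustrative sketch only: per-source failure estimation with two small
# bitmaps and linear counting.  One bitmap records hashed destinations the
# source tried, the other records destinations that answered; the estimated
# failure count is the difference of the two distinct-count estimates.
BITS = 256  # bits per bitmap; an assumed accuracy/memory trade-off

def _bit(dst: str) -> int:
    return int(hashlib.blake2b(dst.encode(), digest_size=4).hexdigest(), 16) % BITS

class FailureEstimator:
    def __init__(self):
        self.tried = 0      # bitmap of destinations contacted
        self.answered = 0   # bitmap of destinations that replied

    def on_syn(self, dst):
        self.tried |= 1 << _bit(dst)

    def on_synack(self, dst):
        self.answered |= 1 << _bit(dst)

    @staticmethod
    def _estimate(bitmap):
        zeros = BITS - bin(bitmap).count("1")
        if zeros == 0:
            return float("inf")                  # bitmap saturated
        return -BITS * math.log(zeros / BITS)    # linear-counting estimate

    def estimated_failures(self):
        return max(0.0, self._estimate(self.tried) - self._estimate(self.answered))

# A scanning host contacts many distinct addresses that never answer:
e = FailureEstimator()
for i in range(120):
    e.on_syn(f"10.0.{i // 256}.{i % 256}")
for i in range(5):
    e.on_synack(f"10.0.0.{i}")
print(round(e.estimated_failures()))             # far above a typical benign host
```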

Malware Avoidance Using Two Epidemic Layers

Malware are malicious software programs deployed by cyber attackers to compromise computers. A solution to this problem is urgently sought by cyber defenders, as the network security community does not yet have solid answers. The main scope of our project is to investigate how malware propagates in networks from a global perspective. We propose a two-layer malware propagation model to describe the development of a given malware at the Internet level. Compared with existing single-layer epidemic models, the proposed model represents malware propagation in large-scale networks more accurately.
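
A generic two-layer susceptible-infected sketch (our own simplification with invented parameters, not the authors' equations) illustrates the modelling idea: the upper layer tracks how many networks have been seeded, while the lower layer grows the infected population inside already-seeded networks.

```python
# Generic two-layer SI sketch with invented parameters (not the authors' model):
# the upper layer counts seeded networks, the lower layer grows the infected
# population inside them.
NETWORKS, HOSTS = 1_000, 500     # networks on the upper layer, hosts per network
beta_in, beta_out, steps = 0.6, 0.002, 60

seeded, infected = 1, 1.0        # seeded networks, infected hosts overall
for _ in range(steps):
    capacity = seeded * HOSTS
    # Lower layer: logistic growth inside networks that already have infections.
    infected += beta_in * infected * (1 - infected / capacity)
    # Upper layer: infected hosts seed so-far-untouched networks.
    new_nets = min(NETWORKS - seeded,
                   beta_out * infected * (NETWORKS - seeded) / NETWORKS)
    seeded += new_nets
    infected += new_nets         # one seed infection per newly reached network

print(f"after {steps} steps: ~{infected:,.0f} infected hosts "
      f"in ~{seeded:,.0f} of {NETWORKS} networks")
```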

Topological protection from the next generation malware: a survey

International Journal of Critical Infrastructures, 2013

The spreading of dangerous malware in interdependent networks of electronic devices has raised deep concern, because infections may propagate from ICT networks to other critical infrastructures, producing the well-known domino effect. Researchers are attempting to develop a high-level analysis of malware propagation, discarding software details, in order to generalise defensive strategies to the maximum extent. It has been suggested that the maximum eigenvalue could act as a threshold for malware spreading. This paper presents a new proof of this statement and an original way to classify the maximum-eigenvalue minimisation problem (which is NP-hard). A study of the Italian Internet autonomous system verifying the theoretical threshold is shown. Finally, the paper shows how to stop a worm in a real LAN using a new sub-optimal algorithm, which suggests which nodes to protect in order to limit worm diffusion according to the spectral paradigm.
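
To make the spectral paradigm concrete, the sketch below (a stand-in for, not a reproduction of, the paper's algorithm, on an assumed random topology) computes the largest adjacency eigenvalue lambda_max, the quantity that mean-field epidemic models compare against the spreading ratio (the infection dies out when beta/delta < 1/lambda_max), and then greedily protects the node whose removal lowers lambda_max the most.

```python
import numpy as np

# Sketch, not the paper's algorithm: lambda_max of the adjacency matrix is the
# spectral threshold quantity; "protecting" a node is modelled as removing its
# edges, and we greedily protect whichever node lowers lambda_max the most.
rng = np.random.default_rng(0)
n = 60
A = (rng.random((n, n)) < 0.08).astype(float)
A = np.triu(A, 1)
A = A + A.T                                      # random undirected graph (assumed)

def lam_max(adj):
    return float(np.max(np.linalg.eigvalsh(adj)))

def greedy_protect(adj, budget):
    adj = adj.copy()
    protected = []
    for _ in range(budget):
        best_node, best_val = None, np.inf
        for v in range(adj.shape[0]):
            if v in protected or adj[v].sum() == 0:
                continue
            trial = adj.copy()
            trial[v, :] = 0
            trial[:, v] = 0
            val = lam_max(trial)
            if val < best_val:
                best_node, best_val = v, val
        protected.append(best_node)
        adj[best_node, :] = 0
        adj[:, best_node] = 0
    return protected, adj

print("lambda_max before:", round(lam_max(A), 3))
nodes, A_protected = greedy_protect(A, budget=5)
print("protected nodes:", nodes, " lambda_max after:", round(lam_max(A_protected), 3))
```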