Cutting Through the Tangled Web: An Information-Theoretic Perspective on Information Warfare
Air Power Australia Analysis 2012-02 20th October 2012
A Monograph by
Lachlan N. Brumley, BSE(Hons),
Dr Carlo Kopp, AFAIAA, SMIEEE, PEng,
Dr Kevin B. Korb, SMIEEE, MAAAI
Text, computer graphics © 2012 Lachlan Brumley, Carlo Kopp, Kevin Korb
Information Warfare in social systems has a long and colourful history dating back to antiquity. Despite the plethora of well documented historical instances, and well known instances in the biological domain, Information Theory based mathematical formalisms are a very recent development, produced over the last two decades. Depicted is an RC-135VW Rivet Joint electronic and signals intelligence aircraft of the 763rd Expeditionary Reconnaissance Squadron in South-West Asia, 2009, a critical asset in both theatre and strategic Information Operations (U.S. Air Force image).
Abstract

Information Warfare, which is the competitive use of information in survival contests, has been a pervasive feature of conflicts since the beginnings of recorded history. The advent of digital technologies for the collection, storage, analysis and distribution of information has not only produced growth in the use of Information Warfare, but has also created numerous new opportunities for its use.

The study of Information Warfare has progressed considerably since the term was first formally used by Rona thirty-six years ago. Robust foundation theory, rooted in Shannon's information theory and mathematical game theory, emerged over a decade ago, with the formalisms now identified as the Borden-Kopp model.

This paper surveys extant and past research in the information-theoretic foundations of Information Warfare. Specifically, both qualitative and quantitative definitions of information are surveyed and closely compared. The most commonly used definitions of Information Warfare are then explored and analysed from an information-theoretic perspective. Research covering the four canonical strategies, which are based on Shannon's definition of information and the Turing machine model, is explored in detail, with a specific focus on the boundary conditions between the four respective canonical strategies and the differences between the Borden and Kopp models. Mappings between qualitative and information-theoretic models of Information Warfare are explored.

The paper also surveys numerous applications for information-theoretic representations of Information Warfare. The advent of information-theoretic models for Information Warfare establishes this as an area of study within the information sciences, in addition to its interest for the social sciences, military science and information systems.

Keywords: Information Warfare, Information Operations, Information Theory, Shannon, Turing Machine, Game Theory, Hypergame, Cognitive Cycle, OODA Loop, Mills' Paradox.
“Oh what a tangled web we weave, When first we practise to deceive!”
Index
- Index
- The Information Warfare Problem
- What is meant by “Information”?
- Processing of Information by an Entity
- What is meant by “Information Warfare”?
- Shannon's Communication Theory and Information Warfare
- The Generality of Information Warfare
- Applications of Information Warfare
- Conclusions
- Endnotes
- References
The Information Warfare Problem
It is widely accepted that the term “Information Warfare” was first used by Thomas Rona in 1976 [3] when discussing the advantages of targeting the information and communication systems an opponent depends upon. This general area was later explored by a number of researchers, with the National Defense University establishing a School of Information Warfare Studies in 1992, and the infoWarCon series of conferences launched shortly thereafter.
The usage of the term “Information Warfare”, for better or worse, encompasses the full gamut of techniques whereby information is employed to gain a competitive advantage in a conflict or dispute. The problems raised by “Information Warfare” are fundamental to nature and are applicable not just to social or computing systems, but to any competitive survival situation where information is exploited by multiple players. In any contest between two machines, for example between a spam generator and a spam filtering engine, both entities obey the very same constraints obeyed by biological organisms exploiting information while competing for survival.
This usage amounted to a “soft” definition of “Information Warfare”, followed in 1997 by a formal definition, produced by the US Air Force. The US Air Force definition focussed primarily on military applications in social systems [60]. The formal account has provided a basis for elaboration in the independently developed information-theoretic formalisms of Borden [7] and Kopp [25], which identified “four canonical strategies of Information Warfare” (see §4); “strategy” in both instances was defined in the game-theoretic sense and was related to Shannon’s information theory. Further research has aimed to establish the range of environments in which the canonical strategies apply and tie them to established research in areas such as game theory, the Observation Orientation Decision Action loop, and the theory of deception, propaganda, and marketing. Applications of the theory to network security and military electronic warfare have also emerged.
Perhaps the most surprising application of the four canonical strategies has been in evolutionary biology. Very little effort was required to establish that the four strategies are indeed biological survival strategies, evolved specifically for gaining an advantage in survival games. In the evolutionary arms race pursued by all organisms, the use of information is a powerful offensive and defensive weapon against competitors, prey and predators.
The four canonical strategies of Information Warfare provide a common mathematical model for problems that arise in biological, military, social and computing systems. This allows, importantly, a unified approach to Information Warfare, simplifying automation. For example, implementing a robust security strategy requires a common model for analyzing and understanding problems which arise from a potentially wide range of an opponent’s offensive and defensive uses of information.
In this paper we shall review recent research in this area and provide a comprehensive description of the information-theoretic foundations of “Information Warfare”. The paper does not aim to address broader issues in strategy and how “Information Warfare” may aid or hinder existing paradigms of conflict, nor will this paper attempt a deeper study of the psychological dimensions of “Information Warfare”.
What is meant by “Information”?
In the context of Information Warfare, information can be either a weapon or a target. Somewhat surprisingly for this literature, as the discussion below details, the definitions and usage of “information” are often left vague.
Information is defined as “knowledge communicated concerning some particular fact, subject, or event; that of which one is apprised or told; intelligence, news” by the Oxford English Dictionary [54]. Another definition from the same source states that it is “separated from, or without the implication of, reference to a person informed: that which inheres in one of two or more alternative sequences, arrangements, etc., that produce different responses in something, and which is capable of being stored in, transferred by, and communicated to inanimate things”. The first definition equates information with news or intelligence regarding a fact, subject or event, while the second describes it as data which can be stored and communicated by machines. Combining these definitions, we may say that information provides knowledge of an object, event or phenomenon that can be stored or communicated by people or machines.
Figure 1: A General Communication System (Shannon, 1948)
The mathematical definition of information comes from Shannon’s [52] work on communication theory, which proposes an abstract model of communication represented by an information channel between an Information Source and a Destination [53]. This model (Figure 1) describes a generalisation of communication and consists of five parts — the Information Source, the Transmitter, the Channel, the Receiver and the Destination.
The Information Source selects the Message to send from a set of possible messages. The Message is mapped by the Transmitter into a Signal and transmitted over the communication channel. The channel is simply the medium that carries the signal, and its physical instantiation depends upon the communication method. The channel is non-ideal, and therefore impairments which damage the signal are inevitably introduced during transmission. These impairments may be additive, such as noise or other signals, or may distort the signal. A frequent model for such impairments is simple additive white Gaussian noise (AWGN), due to its wide applicability in electronic communications and to its mathematical tractability. In Shannon’s model noise is produced by a Noise Source that is connected to the channel. The Receiver collects the signal from the channel and performs the inverse of the Transmitter’s mapping operation, converting the signal with any impairments back into a Message, which is then passed on to the Destination.
In communication theory, information is a measure of the freedom of choice one has when selecting a message for transmission [53]. According to this definition, highly unlikely messages contain much information, while highly predictable messages contain little information. The Shannon information measure (Shannon entropy), H, of a message consisting of N symbols appearing with respective probabilities p_i is given by Equation 1:
$$H = -\sum_{i=1}^{N} p_i \log_2 p_i \qquad (1)$$
Shannon and Weaver provide a simple demonstration of how the probability of a message’s selection affects the amount of information in the transmitted message [53]. Suppose there is a choice between two possible messages, whose probabilities are p_1 and p_2. The measure of information, H, is maximised when each message is equally probable, that is p_1 = p_2 = 1/2. This occurs when one is equally free to select between the two messages. Should one message become more likely than the other, H will decrease, falling toward zero as that message approaches certainty.
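As a minimal illustrative sketch (not drawn from the works cited above), the following Python fragment computes the Shannon entropy of a two-message source and confirms that H peaks at one bit when p_1 = p_2 = 1/2 and falls toward zero as one message becomes near-certain:

```python
import math

def shannon_entropy(probabilities):
    """H = -sum(p_i * log2(p_i)) in bits; zero-probability symbols contribute nothing."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Two-message source: entropy is maximised at p1 = p2 = 0.5 (H = 1 bit)
# and approaches zero as one message becomes almost certain.
for p1 in (0.5, 0.7, 0.9, 0.99):
    print(f"p1 = {p1:.2f}  H = {shannon_entropy([p1, 1 - p1]):.4f} bits")
```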
Shannon [52] also demonstrated that the capacity of a noisy communications channel of a given bandwidth is bounded by the relationship in Equation 2, where C is the channel capacity, W is the channel bandwidth, P is the power of the signal and N is the noise power. The channel capacity is measured in bits per second, the bandwidth in Hertz and the power and noise are measured in Watts. Clearly, how much information a channel can carry can be manipulated via the channel’s bandwidth and signal-to-noise ratio. Given a fixed channel capacity, information transmission will be maximised if the actual code lengths of messages are equal to their Shannon information measure, thus, using an “efficient code” [53].
$$C = W \log_2\left(1 + \frac{P}{N}\right) \qquad (2)$$
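A short sketch, using hypothetical channel figures rather than any values from the paper, illustrates how bandwidth and signal-to-noise ratio together bound the capacity of Equation 2:

```python
import math

def channel_capacity(bandwidth_hz, signal_power_w, noise_power_w):
    """Shannon-Hartley bound C = W * log2(1 + P/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + signal_power_w / noise_power_w)

# Hypothetical 3.1 kHz voice-grade channel at a 30 dB signal-to-noise ratio
# (P/N = 1000): the capacity works out to roughly 31 kbit/s.
print(f"{channel_capacity(3100, 1.0, 0.001):.0f} bit/s")

# Halving the noise power raises capacity far more slowly than doubling the
# bandwidth would, since the SNR term sits inside the logarithm.
print(f"{channel_capacity(3100, 1.0, 0.0005):.0f} bit/s")
```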
Shannon’s definition of information is a purely quantitative measure, determined by the probability of the message’s transmission [55]. Therefore, information as defined by Shannon is a property of a communication signal and should not be confused with the semantics (meaning) of the signal. Weaver linked Shannon information to thermodynamic entropy [55]. Entropy is used in the physical sciences as a measure of the level of organisation and structure. A system with high entropy is highly chaotic or random, while low entropy indicates a well-ordered and predictable system. Following Weaver, Shannon entropy reports how organised the information source is, which determines the rate of information generation [53].
Shannon’s model relativises information to the prior probability distribution of the receiver. Prior probabilities report the predictability of the message and thus its information content, or “surprise value”.
Another definition of information, similar to Shannon’s, is provided by Wiener [61]. Wiener describes information from a cybernetic point of view, where it is a property of the signals communicated between the various components of a system, describing the system’s state and operations. While Shannon’s definition was only applied to communication, Wiener applies his to the control and communication processes in complex mechanical, electronic, biological and organisational systems. Wiener also defines information mathematically in terms of probability, where the probabilities describe the choice between alternatives. Thus, Wiener’s definition also relates information and entropy; in particular, Wiener’s information measures negative entropy, where the information in a communicated message is a measure of the order, or lack of randomness, in the message. Shannon’s measure, on the other hand, was the information source’s entropy, describing the uncertainty of the message being transmitted, which can be interpreted as the number of bits needed in an efficiently utilised (noiseless) channel to report the state of the information source. Thus, in Shannon’s case information might be described as a potentiality, a measure of how likely a signal is to occur. In both these definitions information is a property of the communicated signal.
Bateson [1] instead defines information as a “difference which makes a difference”. According to Bateson, this definition is based upon Kant’s assertion that an object or phenomenon has a potentially infinite number of facts associated with it. Bateson argues that sensory receptors select certain facts from an object or phenomenon, which become information. Bateson suggests that a piece of chalk may be said to have an infinite number of “differences” between itself and the rest of the universe. These differences are mostly useless to an observer; however, a few of these differences are important and convey information to the observer. The filtered subset of important differences for the chalk could include its colour, location, shape and size. In Bateson’s definition of information, which differences are filtered to become information depends upon the perspective of the interested party.
Determining or quantifying the value of an item of information is thus dependent upon the observer, the circumstances of that observer, and the time at which the information is acquired. For instance, knowing which stocks will gain in the market before other observers know can yield a higher value than learning this information at the same time as others. Learning such information under circumstances where it cannot be exploited inevitably diminishes its value.
In games of incomplete information, discussed later, the value of information in reducing uncertainty can be related directly to the payoff in the game [42, 18]. If the information results in a high payoff, otherwise denied, the information is of high value. If the game is iterated or comprises multiple turns or steps, the time at which the information is acquired determines the manner in which the value of information changes over time. In this sense, Bateson’s representation is a qualitative mapping of what modern game theory tells us indirectly about the context and time variant properties of the value of information. This paper will not explore the problem of how to quantitatively determine the value of an item of information, an area well studied in recent game theory, as that problem is distinct from problems arising from the use of information to gain an advantage in a contest or conflict.
Boisot [6] provides a different model of information, arguing that what has previously been called information can instead be considered three different elements — data, information and knowledge. Entities first observe and make sense of data, converting it to information, which is then understood and incorporated into the entity’s knowledge base. Data describes the attributes of objects, while information is a subset of the data, produced by the filtering of an entity’s perceptual or conceptual processes. Boisot’s definition of information is more psychologically oriented and much broader than the mathematical definitions of Shannon or Wiener.1
Definitions of “information” fall into the categories of quantitative or qualitative: the strictly mathematical definitions of Shannon and Wiener versus ordinary language definitions, such as those of the Oxford English Dictionary, Bateson and Boisot. From a mathematical perspective, information is a property of a communicated signal, determined by the probability of that signal. The more likely a signal is, the less information it has, while the less likely it is, the more surprising its arrival and so the more information it possesses. The informal definitions consider information to be descriptions of some aspects of the world that can be transmitted and manipulated by biological organisms and machines. Of course, quantitative and qualitative definitions are potentially compatible and can be used jointly.
When “information” is used in the context of Information Warfare, it is commonly the qualitative meaning that is intended. For example, when describing Information Warfare against computer systems, information may be used to refer to a computer program, stored data or a message sent between systems. The qualitative definitions of information are, however, vague, leading to conflation with distinct concepts such as knowledge, data and belief. Applying Shannon’s definition of information allows Information Warfare to be studied more rigorously. Whereas the mathematical definitions treat information as a property of a communicated signal, under a qualitative interpretation it is likely to be confused with the semantics of the signal.
In any case, the term “information” as it appears in much of the literature is context sensitive and that context must be interpreted carefully if the meaning of the text is to be read as intended (i.e., with high signal-to-noise ratio).
Processing of Information by an Entity
The target of an action to manipulate or impair information used by a competing entity is the victim’s decision-making. From a game theoretic perspective the intention is to compel or entice the victim into making choices which are to the advantage of the attacker. Understanding such mechanisms is therefore important to understanding how Information Warfare produces its intended effect.
Any decision-making mechanism is inherently constrained by the information collection and processing behaviours of the system which it is part of. There are many conceptual models which attempt to describe the information collection and processing behaviours of entities. Such models provide a relatively simple representation of the decision-making process, against which the effects of Information Warfare on the entity’s decision-making process can be studied. One such model of the decision-making cycle is the Observation Orientation Decision Action (OODA) loop model [10, 48], which we have employed in our research due to its generality, and because it is widely used and understood.
The OODA loop model is a method of representing the decision-making and action cycles of an entity. It was originally developed to model the decision-making process of fighter pilots; however, its generality makes it suitable for modelling most decision-making cycles. The OODA loop is commonly used to describe the decision-making process in both military and business strategy [56].
Boyd’s OODA Loop is a four step cyclic model, which describes the information gathering, decision making and actions of an entity, with earlier behaviour providing feedback to the current analysis and decision activities (Figure 2). The model breaks the continuous act of perception and its subsequent decision-making into four discrete steps, which is accurate for many entities or systems. The model can be adapted to systems which are not discrete.
The loop begins with the Observation step, where the entity collects information about the state of its environment. This information may be collected with any sensors the entity possesses.
Figure 2: Boyd’s OODA (Observation Orientation Decision Action) loop model (Boyd 1986). Note the feedback from present Decisions and Actions to future Observations, as well as the control that Orientation has over Observation and Action.
During the Orientation step, the gathered information is combined with the entity’s stored beliefs about the environment and itself, which may include previous experiences, cultural traditions, genetic heritage, and analysis and synthesis methods. All of this is used to update the entity’s internal model of its environment. The internal model represents the entity’s current understanding of the state of its environment and is a product of the entity’s perceptions, beliefs and information processing abilities. This internal model may or may not accurately match reality. Boyd [11] states that the Orientation step is the “schwerpunkt” (focal point or emphasis) of the OODA loop model, as an entity’s Orientation determines how it will interact with the environment, affecting how it Observes, Decides and Acts.
After updating its understanding of its environment, the entity enters the Decision step, where it considers its potential actions and the expected outcomes of these actions. If the entity is a rational decision-maker, it will select the action or actions which it believes will lead to its most preferred outcomes. The possible actions and their expected outcomes that the entity develops are entirely a product of the entity’s model of its environment. Therefore, the entity will only consider actions it believes are possible and the outcomes it believes those actions will have. The entity’s beliefs therefore constrain its decision-making.
During the Action step, the entity performs its selected action or actions. This interaction changes the state of the environment. Any changes the entity causes can be observed in future OODA loop cycles, along with changes caused by other entities. This functions as a feedback loop between the entity and its environment.
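To make the cycle concrete, the sketch below expresses the four steps as a minimal Python class; the method names, the dictionary world model and the toy payoff estimate are illustrative assumptions, not part of Boyd's model itself.

```python
class OODAAgent:
    """Minimal sketch of Boyd's Observation-Orientation-Decision-Action cycle.
    The data structures used here are illustrative assumptions only."""

    def __init__(self, beliefs):
        self.beliefs = dict(beliefs)   # prior experience, doctrine, heritage
        self.world_model = {}          # internal model of the environment

    def observe(self, environment_state):
        # Observation: collect whatever the entity's sensors report.
        return dict(environment_state)

    def orient(self, observations):
        # Orientation: fuse observations with stored beliefs into the internal
        # model -- the step Boyd calls the "schwerpunkt", since it shapes how
        # the entity Observes, Decides and Acts.
        self.world_model = {**observations, **self.beliefs}

    def decide(self, candidate_actions):
        # Decision: pick the action with the best expected outcome, judged
        # solely against the (possibly inaccurate) internal model.
        return max(candidate_actions, key=lambda a: self.world_model.get(a, 0))

    def act(self, chosen_action, environment_state):
        # Action: alter the environment; the change becomes visible to future
        # Observation steps, closing the feedback loop.
        environment_state[chosen_action] = environment_state.get(chosen_action, 0) + 1


# One iteration of the loop over a toy environment.
env = {"advance": 2, "hold": 1}
agent = OODAAgent(beliefs={"advance": 3})
agent.orient(agent.observe(env))
agent.act(agent.decide(["advance", "hold"]), env)
```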
Similar cyclical models of the decision-making process have also been proposed [43, 44, 49]. These models commonly describe the decision-making process as an ongoing feedback loop where an entity’s current beliefs guide how it collects and interprets new information, before basing its decisions upon these beliefs.
Figure 3: A simplified OODA loop model showing how two entities interact with each other through the environment.
A typical and implicit assumption in the OODA loop model is that it involves players in a competitive game, whether one of complete or incomplete information. In either circumstance what information is perceived by these players, and how it is understood or misunderstood, determines the subsequent actions or moves by the players, and the eventual outcomes and payoffs in the game.
In a game of perfect information a player who is unable to understand the environment due to internal processing errors (as we discuss later) will make poor choices in the game. Games of incomplete information are much more interesting, as they introduce uncertainty into the game. This results in some information possessing more value to a player than other information. That value in turn yields a payoff to the competing player, providing a direct benefit if information of high value can be rendered unusable to an opponent.
This is an important nexus in the study of Information Warfare, connecting game theory and information theory and showing the fundamental motivation for the use of information against opponents in such contests. The relationship between games of incomplete information and Information Warfare was explored in some detail by Kopp [27], using the hypergame representation of Fraser and Hipel [18], with subsequent research by Jormakka and Mölsä [24] applying game theory models in relation to the OODA loop.
The Boyd OODA loop model is valuable here as it suggests the study of how the four canonical strategies of Information Warfare specifically impair the information processing functions of a victim, and in turn how this impairs the victim’s decision process. In effect, it explains the inner workings of how impairments to information transmission and processing result in impairments to decision-making and thus failure to secure an optimal payoff in a game.
Any interaction between entities, including communication, occurs through the environment. One entity’s actions manipulate the environment, and this may be perceived by other entities during their Observation steps. Figure 3 demonstrates how this exchange occurs between two entities, although there is no limit, theoretical or otherwise, to how many entities may interact in such a manner. The interactions which occur may be either the implicit or explicit result of the Actions of one entity.
In Boyd’s model, information is collected during the Observation step before being processed and contextualised during the Orientation step. Any information and its derivative products are then used during the Decision step to guide the entity’s future Actions. Offensive acts of Information Warfare, or unintentional errors which produce similar outcomes, affect the victim’s Orientation or Observation steps [12]. The product of these actions may then affect current or future loop iterations, with intentional actions intended to produce the effect desired by the attacker.
Since unintentional errors and Information Warfare attacks can have similar outcomes, Information Warfare attacks may be disguised as unintentional errors in order to temporarily or permanently conceal the attacker’s hostile intentions.
Unintentional errors have various possible causes and, like Information Warfare, may affect an entity during its Observation and Orientation steps. During the Observation step, temporary or permanent errors may affect the entity’s sensors’ ability to correctly and promptly collect information. Errors during the Orientation step may affect how an entity processes and stores information.
Intentional deception is a key element of Information Warfare attacks, where corrupted information is purposefully communicated to a victim in order to alter its behaviour in a specific manner. Entities with incorrect beliefs may also unintentionally deceive, by communicating corrupted information during what they believe is truthful communication (cf. [37]). If this deception is successful, the victim will develop incorrect beliefs similar to those the attacker possesses. Unintentional deception may be more effective than intentional deception, since the attacker’s behaviour will support the perceived veracity of the corrupted information it has communicated. The consistency between an entity’s statements and other behaviour during deception is called multi-channel support [20]. Such behaviour is implicitly available during unintentional deception, yet must be explicitly provided during intentional deception.
A specific type of deception is self-deception, where an entity is both the attacker and victim of the deception attack. Self-deception allows an entity to intentionally manipulate its own beliefs to its own end and has been observed and studied in human decision-making [45, 50]. Self-deception is of interest as the mechanisms which impair the decision cycle are no different to those used in intentional deception, but also because a sophisticated attacker might aim to exploit or reinforce existing self-deception arising in the victim system [20].
Self-deception affects the decision-making process during an entity’s Orientation step [13], where conflicting beliefs are compared. The entity then manipulates its beliefs to remove the conflict, corrupting them so that the decision-making process can continue as normal. It is possible that such corruption is adaptive and even intentional. Trivers [58] has argued that self-deception may aid deception, by allowing a deceiver to corrupt its beliefs in order to provide multi-channel support for its deception against others. Once the deception is complete, the entity may be able to remove its incorrect beliefs and benefit directly from the deception. Regardless, self-deception may allow a person to resolve conflicting beliefs, thereby reducing cognitive dissonance and the psychological discomfort it causes [46]. Another argument suggests that self-deception did not evolve as an adaptation, but is instead an unintentional byproduct of one or more other cognitive processes or structures [59]. It may, of course, be both.
Instances of “situating the appreciation” present as interesting case studies. Some instances are demonstrably deception of the audience by Corruption and/or Subversion, where the author is intentionally avoiding or “spinning” the discussion of the consequences of previous poor decisions. However, many may represent instances of self-deception by Corruption and/or Subversion, intending to avoid cognitive dissonance by post-facto rationalisation. Often the deceptive aspect of “situating the appreciation” reflects an effort to reinforce existing self-deception arising in the victim audience, which may include the author [20].
The problems of self-deception, unintentional deception and other forms of dysfunctional information processing are connected with established research in psychology and sociology, dealing with statistical decision theory and information processing.
The ability to process large volumes of data, which may or may not be rich in information, is inherently constrained by the computational capacity of the system processing this input, and the effectiveness of the algorithms employed for this purpose. In the contexts of sociology, psychology and Information Warfare, this problem has been studied by Toffler, Lewis and Libicki. Toffler identifies the social problems arising from ‘information overload’, where anxiety and social turmoil arise in populations exposed to conditions where the rate of social change and associated volumes of information to be processed exceed the available capacity [57].
Lewis explored the psychological effects on individuals exposed to volumes of information exceeding processing capacity. In a large survey of managers he found effects which he termed ‘Information Fatigue Syndrome’, resulting in “a weariness or overwhelming feeling of being faced with an indigestible or incomprehensible amount of information” with symptoms including “paralysis of analytical capacity”, “a hyper-aroused psychological condition”, and “anxiety and self-doubt” leading to “foolish decisions and flawed conclusions” [38].
Libicki earlier explored the same effect but arising from intentional Information Warfare attacks, as the “induced volume of useless information which hinders discovering the useful information” [39]. Libicki also notes that the possible responses to such a flood of information will either hinder the victim’s agility in adjusting to unexpected events or impose internal obstacles preventing the internal integration of new information [40].
A common thread running through all such research is that strong psychological effects arise when humans are confronted with the problem of processing volumes of information exceeding their capacity, or confronted with information which challenges earlier understanding or beliefs. In both instances the ability to effectively process the information to extract useful meaning is constrained, and the psychological effects arising exacerbate the problem [33].
Berne’s [4, 5] work in transactional analysis, which explores social games in human interaction and dysfunctional human behaviours, notes the frequency with which deception arises in human interactions. In most of the extensive collection of social games which Berne identifies, uncertainty about the players’ perceived intent results in games of incomplete information, where deception or the hiding of information is frequent.
Davis in Heuer [21] argues that “the mind is poorly ‘wired’ to deal effectively with both inherent uncertainty (the natural fog surrounding complex, indeterminate intelligence issues) and induced uncertainty (the man-made fog fabricated by denial and deception operations).” This observation is consistent with our findings on the importance of self-deception, especially in an environment of uncertainty and intentional deception.
Exploitation of human psychology is central to many established and well documented deception techniques, where weaknesses in how the victim analyses and integrates information are exploited commonly. Haswell’s [20] extensive study of classical deception techniques provides a wealth of examples.
These examples have been explored within the framework of information theory based models by Kopp [28], a study later extended to deception techniques used in political and product marketing [30].
The study of information processing and decision mechanisms in a victim system thus provides the bridge connecting game-theoretic views of Information Warfare with the information-theoretic models which describe specific strategies of Information Warfare. While uncertainty in games of incomplete information produces value in information, which motivates players to attack or protect information channels and information gathering and processing mechanisms, game theory cannot tell us anything about how the information is processed by a victim, or how it might be manipulated by an attacker. Information theory can explain the various mechanisms through which a channel can be attacked, and how such attacks will impair a victim’s decision cycle, in turn determining a player’s choices in a game of incomplete information.
What is meant by “Information Warfare”?
There are numerous and often divergent definitions of Information Warfare in current usage, reflecting in part the pervasive nature of the phenomenon and in part the differing perspectives of observers studying the problem.
The various definitions of Information Warfare describe actions such as using information as a weapon, targeting information processing infrastructure and protecting one’s own information and information processing infrastructure. In order to provide a better understanding of Information Warfare, we examine some of the more prominent definitions, along with examples of possible offensive and defensive Information Warfare actions. The core elements of Information Warfare will be revealed through these examples.
An early definition of Information Warfare by Schwartau [51] considers it from a social context, describing various attacks against information systems and telecommunications networks. Schwartau states that “Information Warfare is an electronic conflict in which information is a strategic asset worthy of conquest or destruction”, a definition covering only offensive actions. The overall goals of Information Warfare attacks are identified by Schwartau as the theft of information, modification of information, destruction of information and destruction of the information infrastructure, with the ultimate goals of acquiring money and power and generating fear. Schwartau points out that Information Warfare takes advantage of our modern societies’ dependence on information and information systems and is not restricted to governments or government agencies, as is the case with traditional warfare. Schwartau specifies three different classes of Information Warfare attacks, using a taxonomical approach that focuses on the type of target that is attacked.
The first class of operations is Personal Information Warfare, in which individuals and their personal details, stored in electronic databases, are the targets. Schwartau describes it as “an attack against an individual’s electronic privacy”, in which the attacker views or manipulates data about the individual stored by various companies and government agencies. Schwartau points out that individuals cannot directly protect this information and will often have difficulty correcting any erroneous entries created during an attack. Such an attack could create a false outstanding arrest warrant or supply misinformation to blackmail the individual, although the most common use at this time appears to be “identity theft” typically to facilitate fraudulent use of credit cards.
The second class of operations is Corporate Information Warfare, in which companies are targeted, typically by their competitors. Schwartau describes industrial espionage, spreading disinformation, leaking confidential information and damaging a company’s information systems as potential examples.
Global Information Warfare is the third class of operations and its victims include industries, political spheres of influence, global economic forces, non-national entities and nations. Typical examples of acts within this category include theft of secrets, denial of technology usage and the destruction of communications infrastructure. Schwartau claims that “it would be stupid for a well-financed and motivated group to not attack the technical infrastructure of an adversary,” given the clear vulnerabilities, low risk and large reward of these attacks.
Schwartau’s definition of Information Warfare covers only offensive actions that utilise or affect some sort of electronic information system, which implies that Information Warfare is a modern development. Schwartau’s decision to categorise Information Warfare attacks according to their intended victim recognises that all victims are not equal and that the motivations for attacking them differ.
Libicki’s [39] definition also provides a taxonomy of Information Warfare, but divides the constituent operations by the environment in which they occur. He gives seven distinct types of operational behaviours that can be described as Information Warfare, all of which are “conflicts that involve the protection, manipulation, degradation and denial of information”. Libicki’s seven types of Information Warfare are: Command and Control Warfare, Information Based Warfare, Electronic Warfare, Psychological Warfare, Hacker Warfare, Economic Information Warfare and Cyberwarfare.
Command and Control Warfare attacks the command and communications infrastructure of an opponent, in order to degrade its responses to further military action. Command facilities are destroyed to prevent military decision-making, while communications infrastructure is destroyed to prevent the flow of information between decision-makers and the troops implementing those decisions. Libicki points to the effectiveness of Command and Control Warfare by the United States against Iraqi forces as the main reason that the bulk of those forces were ineffectual during the first Gulf War.
Information Based Warfare is the collection and use of information when planning and implementing military actions. A typical example is using information gained by reconnaissance to assess the effectiveness of previous military attacks or to determine the priority of targets for future strikes — i.e., increasing the situational awareness of the commander.
Electronic Warfare attempts to degrade the physical basis of an opponent’s communications. There are three main targets for Electronic Warfare attacks: radar receivers, communication systems and communicated messages. Anti-radar attacks aim to prevent an opponent’s radar from detecting vehicles, using electronic or physical assaults. Communications systems may be electronically jammed or their physical infrastructure located and destroyed. Cryptography is used to conceal the contents of one’s own communications and to reveal the contents of an opponent’s communications.
Psychological Warfare is defined as the use of information against the human mind, and Libicki divides it into four sub-categories based upon its intended target. Counter-will operations target a country’s national will, aiming to transmit a deceptive message to an entire population. In a military context, messages typically suggest that the country’s present and future military operations are likely to fail. Counter-forces attacks target an opponent’s military troops, aiming to convince them that fighting is against their best interests. Counter-commander operations intend to confuse and disorient an opponent’s military commanders, detrimentally affecting their decision-making abilities. Cultural conflict targets an opponent’s entire culture, attempting to replace their traditions and beliefs with those of the attacker. Libicki states that while cultural conflict has a long history, its implementation is greatly aided by modern technology.
Hacker Warfare consists of attacks against civilian computer networks and systems. Similar attacks against military computer networks Libicki instead categorises as Command and Control Warfare. Some aims of Hacker Warfare include the temporary or complete shutdown of computer systems, the introduction of random data errors, the theft of information or services and the injection of false message traffic. Libicki points out that the behaviours he categorises as Hacker Warfare encapsulate much of the actions that Schwartau defines as Information Warfare.
Economic Information Warfare is defined as the attempt to control the flow of information between competing nations and societies. An Information Blockade attempts to prevent the real-time transfer of information by methods such as jamming and destruction of equipment. Libicki argues that this is difficult to achieve against a determined opponent. Information Imperialism occurs when knowledge-intensive industries become geographically concentrated, which disadvantages those without access to the region. Libicki cites Silicon Valley as an example of Information Imperialism.
Libicki’s category of Cyberwarfare collects a variety of attacks which are currently unlikely or impossible. However, this term is commonly used by the media to describe acts which Libicki categorises as Hacker Warfare. One of these attacks is information terrorism, a type of computer hacking aimed at exploiting systems to attack individuals, which is similar to Schwartau’s Class I Information Warfare. Semantic attacks are another kind of Cyberwarfare, in which computer systems are given seemingly valid information that causes them to produce incorrect output which nonetheless appears correct. Another is simula-warfare, in which simulated warfare replaces conventional warfare. Libicki argues that any competitors who can agree to perform simulated warfare should be capable of negotiation to avoid conflict. Gibson-warfare is another unlikely possibility for Cyberwarfare, where a futuristic conflict occurs between virtual characters inside the system itself. Libicki argues that the current information infrastructure has not developed to the point where these attacks are possible and concedes that in some cases it may never do so.
Libicki points out that Information Warfare is not a recent development and that some of its varieties, such as Psychological Warfare, have a long history in human conflict. He also notes that as the information space has developed due to technological changes, new methods of Information Warfare have evolved. While Libicki proposes seven plausibly distinct forms of Information Warfare, there is some functional overlap between similar attacks. For example, computer hacking may be considered either Hacker Warfare or Command and Control Warfare, depending on whether the attack targets a civilian or military system. Libicki is also dismissive of the effects of Information Warfare operations performed against non-military targets, such as Hacker Warfare. Whereas these operations do not directly deter military operations, their effects on the civilian population may reduce the political support for those responsible for military operations and thereby achieve military objectives. Attacks that lead to economic losses can also undermine a nation’s capability to wage warfare.
Later work by Libicki [40] has focussed on the use of Information Warfare in Cyberspace, which is defined as any networked computer or communications system. While hostile attacks are the obvious method by which one may conquer cyberspace, Libicki proposes that friendly conquest is also possible. Friendly conquest recognises the power of seduction and develops from mutually beneficial relationships, in which one member becomes dependent upon the information systems or services provided by the other. Friendly conquest differs greatly from other hostile attacks, as it is entered into willingly by the victim in exchange for information or access to information systems that the victim values.
Widnall and Fogelman [60] defined Information Warfare for the United States Air Force, describing it in a social context specifically oriented to military operations. Information is said to be the product of the perception and interpretation of phenomena, much as in Boisot’s definition. The acquisition, transmission, storage or transformation of information are described as information functions. They define Information Warfare as “any action to Deny, Exploit, Corrupt or Destroy the enemy’s information and its functions; protecting ourselves against those actions and exploiting our own military information functions” [60]. This covers both offensive and defensive Information Warfare.
Widnall and Fogelman detail six types of offensive Information Warfare attacks. Psychological Operations use information to affect the enemy’s reasoning and thereby its behaviour. Electronic Warfare denies the enemy accurate information from the environment. Military Deception deceives the enemy as to the attacker’s capability or intentions. Physical Destruction targets the enemy’s information systems for destruction. Security Measures conceal the attacker’s military capabilities and intentions from the enemy. During an Information Attack an opponent’s information is directly corrupted without visibly changing its physical container. Of these offensive actions, only Information Attack is considered to be a recent development and not a traditional military operation; the others are as old as warfare itself.
One explicit reason for the Air Force’s interest in Information Warfare is to enhance its ability to accomplish Air Force missions. Another reason is that the Air Force’s dependency on integrated information systems makes their information functions a desirable target for attack by opponents. This problem is no longer restricted to the Air Force and other large organisations, as most modern societies have become dependent upon information systems for their daily operations.
Kuehl [36] provides another military-oriented definition of Information Warfare that considers it in a social context: “Information operations conducted during time of crisis or conflict to achieve or promote specific objectives over a specific adversary or adversaries”. Information Operations are: “Actions taken to affect adversary information and information systems while defending [one’s] own information and information systems”. This implies Information Warfare is a series of offensive and defensive operations that either attack or defend information and information systems, aimed at a specific goal. The requirement that Information Warfare takes place during a crisis or conflict seems to imply that it is the exclusive domain of the military, which contradicts Schwartau’s and Libicki’s definitions.
Information Warfare is clearly beneficial to its users when applied strategically by militaries during war [60, 36, 47]; however, its usefulness in this role is debatable. Knowledgeable competitors will learn to expect Information Warfare attacks before and during military operations and attempt to defend against such strikes [40]. Once an entity reveals its Information Warfare capabilities, much of the surprise factor is lost and knowledgeable opponents will increase their defences against similar attacks in the future. In this way Information Warfare targeted against information systems and communications networks approaches a coevolutionary race, as attackers locate and exploit flaws in these systems, while defenders attempt to correct these flaws as soon as they are observed. This reveals an interesting parallel between Information Warfare attacks and information, in that unexpected attacks, like unexpected information, are more valuable than expected attacks or information.
Denning’s [17] work again takes up Information Warfare in a social context, this time oriented toward information systems and computer security. However, Denning also notes that Information Warfare is not a recent human development, nor restricted to humans for that matter. She defines Information Warfare as offensive and defensive operations performed against information resources. Information resources are objects that operate upon information in some manner.
In any Information Warfare operation there are at least two players, an offensive player who is targeting an information resource and a defensive player who protects the information resource from the operation. Players may be individuals or organisations, who may or may not be nation states and who may or may not be sponsored by others. Offensive players are broadly categorised as insiders, hackers, criminals, corporations, governments or terrorists. As every individual and organisation possesses information resources, every individual and organisation is said to be a potential Defensive player. Information resources are targeted because they are of some value to at least one player. Offensive Information Warfare operations aim to increase the value of an information resource to the attacker and decrease its value to the defender. This framework provides a game-theoretic outlook on Information Warfare, where players select offensive and defensive strategies which result in various outcomes with differing payoffs for the players.
Denning states that there are three overall aims of offensive Information Warfare operations: to increase the availability of the information resource to the attacker; to decrease the availability of the resource to the defender; and to decrease the integrity of the information resource. These aims closely match Schwartau’s stated overall goals for Information Warfare, namely the theft, modification or destruction of information and the destruction of information infrastructure. Stealing or modifying information increases the availability of the information resource to the attacker. Modifying or destroying information decreases the integrity of the information resource. Destroying the information infrastructure decreases the availability of information resources to the defender.
Information resources are protected from Information Warfare attacks by using defensive Information Warfare operations. Denning categorises defensive Information Warfare operations into: prevention, deterrence, indication and warning, detection, emergency preparedness and response. Examples include laws and policies that deter various Information Warfare operations, physical security measures that prevent access to information resources and procedures for dealing with the aftermath of a successful attack.
Denning provides a comprehensive description of Information Warfare of both offensive and defensive kinds. The representation of Information Warfare in a game-theoretic manner importantly allows the application of Game Theory in analysing instances of Information Warfare. While the examples of Information Warfare focus on computer networks, communication systems and other modern information infrastructure, Denning acknowledges the presence of Information Warfare in evolutionary biology.
Borden [7] took Widnall and Fogelman’s [60] definition of Information Warfare and mapped it onto Shannon’s model of information, producing a definition of Information Warfare that also places it in a social context, again with a military orientation. Widnall and Fogelman described four main offensive actions which could be performed against an adversary’s information and information infrastructure — Denial, Exploitation, Corruption and Destruction. Borden argues that these are the four main offensive operations of Information Warfare and that any action said to be Information Warfare may be categorised within one of these strategies.
Degradation involves delaying the use of information or damaging it partially or completely. Thus, Degradation operates upon the information itself. Examples given are hiding information from an adversary’s collection task and jamming a communications channel, thereby delaying the transmission of messages.
Corruption provides false information for the adversary or corrupts information that the enemy already possesses. Some examples are the use of dummies on the battlefield, spoofing transmissions on the adversary’s communications channel and Psychological Operations performed against the enemy or their allies. The use of dummies and spoofing transmissions attempts to supply corrupt information that the adversary will accept as valid, while Psychological Operations target information already possessed by the target.
Denial is “a direct attack on the means of accomplishment”, meaning anything that the adversary uses for information collection and processing. Possibilities include the destruction or disabling of an electro-optic sensor by a High Energy Laser and a virus that destroys the operating system of a computer used by the enemy for decision-making. Denial attacks may either permanently destroy the targeted system or temporarily disable it.
Exploitation is the collection of information directly from the adversary’s own information collection systems. The information collected may be useful for understanding the adversary’s point of view.
The overall aims of these strategies match Denning’s three aims of Information Warfare, since Degradation and Denial both reduce the availability of information, while Corruption reduces the integrity of the information and Exploitation increases the availability of information to the attacker.
Independently of Borden, Kopp [25] also generated four strategies for offensive Information Warfare attacks. However, Kopp started with Shannon’s model of communication and from this derived three types of offensive strategies that have different effects on the channel. These strategies are the Denial of Information, Deception and Mimicry, and Disruption and Destruction. A fourth strategy of Subversion is added, which utilises the channel for its communication to the victim.
Kopp’s work explicitly considers Information Warfare in both social and biological systems, whereas previous analyses only described Information Warfare in a human context. Kopp argues that Information Warfare is a basic evolutionary adaptation resulting from competition for survival, which manifests itself in a variety of areas. Kopp draws examples of the proposed attack strategies from three different domains: the insect world, military electronic warfare, and cyberwar.
A Denial of Information attack conceals or camouflages information from adversaries, preventing its collection and use. Examples include insects that blend into their environments, a stealth fighter which uses its shape and radar-absorbing material to hide from radar, and the use of encryption to hide information from users of a computer system. Denial of Information attacks may be further categorised into either active or passive forms [31]. A passive form of attack attempts to conceal a signal from the victim’s receiver and is described as covert, with the victim unlikely to be aware of the attack. In an active form of attack, the receiver is blanketed by noise so that it cannot discern the signal from the noise. Active forms of attack are inherently overt, alerting the victim to the fact that it is being attacked.
Deception and Mimicry attacks intentionally insert misleading information into a system, which the victim accepts as valid. Examples include harmless insects that mimic the appearance of dangerous species, defensive jamming equipment on an aircraft that emits false radar returns carrying an erroneous position measurement, and techniques used to mask the identity of someone penetrating a network or system. Successful Deception and Mimicry attacks are inherently covert, as they are intended to leave the victim unaware that the information is misleading [31].
Disruption and Destruction describes overt attacks which either disrupt the activities of the victim’s information system or destroy it outright, in order to prevent or delay the collection and processing of information. Examples include beetles that spray noxious fluids onto predators to blind them and an electromagnetic pulse weapon used to destroy a radar and its supporting communications network. A denial of service attack is an example of disruption in the cyberwar domain. Disruption and Destruction attacks are overt in nature, as the victim will notice the effects of the attack on their information receiver [31]. Such attacks may also be further classified, based upon the permanence of the attacks, using existing military terms: “hard-kill” attacks are those that permanently destroy the information sensor, while “soft-kill” attacks temporarily disable the information sensor or system.
Subversion attacks initiate a self-destructive behaviour in the victim’s system, caused by information inserted by the attacker. An example of this attack in insects is a predatory insect that mimics the appearance of food to lure prey. This deception triggers a self-destructive response from the victim. Subversion in aerial warfare can be achieved by the use of deceptive signals that trigger the premature detonation of proximity fuses on guided missiles. In cyberwar, logic bombs and viruses are examples of subversive weapons, in which the system uses its own resources to damage itself. Most examples of Subversion combine with a Deception and Mimicry attack to first insert the self-destructive signal into the victim [31].
| Borden / US DoD | Kopp |
|---|---|
| Degradation | Denial of Information |
| Corruption | Deception and Mimicry |
| Denial [via Destruction] | Disruption and Destruction |
| Denial [via Subversion] | Subversion |
Table 1: A comparison of Borden’s and Kopp’s taxonomies of the canonical Information Warfare strategies
Kopp’s categorisation of offensive Information Warfare strategies largely overlaps with Borden’s. Both describe four canonical offensive Information Warfare strategies, three of which are, for all intents and purposes, identical. However, these two models converged from very different starting points: Borden’s from Widnall and Fogelman and Kopp’s from Shannon’s information theory.
Kopp’s “Denial of Information” attack is the same as Borden’s “Degradation” attack, both describing an attack that partially or completely conceals information from the victim. Kopp further categorises these attacks into passive attacks, which are covert, and active attacks, which are overt. Borden’s analysis also covers the temporary concealment of information, delaying the victim’s reception of information, as a method of Degradation.
Borden’s “Corruption” strategy and Kopp’s “Deception and Mimicry” strategy describe the same behaviour, where a corrupted signal mimics a valid signal and the victim is unable to distinguish between the two. Both attacks aim to reduce the integrity of the information targeted.
The “Denial” and “Disruption and Destruction” strategies also describe the same act, where the victim’s information collection and processing apparatus is temporarily or permanently disabled. Such attacks reduce the availability of information and related processing functions to the victim.
Kopp’s “Subversion” strategy lacks any equivalent in Borden’s taxonomy. Kopp points out that this is due to Borden’s taxonomy following the United States Air Force’s convention of folding the “Subversion” strategy into the “Denial” strategy [31]. Subversion attacks aim to decrease the integrity of the information feeding the victim’s decision-making, and so cause it to act in a self-destructive manner. On the other hand, Borden’s “Exploitation” strategy is not present in Kopp’s taxonomy. Kopp [27] argues that since Exploitation does not “provide an immediate causal effect in the function of the target”, it cannot be an offensive Information Warfare attack. Instead, Exploitation is simply a passive information collection technique.
Borden and Kopp’s models both describe four canonical offensive strategies of Information Warfare, within which any offensive Information Warfare attack may be categorised (Table 1). Henceforth, when further discussing these strategies, the shortest label will be used to identify the attack.
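To make this shared taxonomy easy to refer to in later examples, the short Python sketch below (our own illustration; the enumeration and example labels are not drawn from Borden or Kopp) records the correspondence of Table 1 and tags a few of the attacks discussed above with their canonical strategy.

```python
from enum import Enum

class CanonicalStrategy(Enum):
    """The four canonical strategies, keyed by Kopp's labels, with
    Borden's / US DoD equivalents from Table 1 as the values."""
    DENIAL_OF_INFORMATION = "Degradation"
    DECEPTION_AND_MIMICRY = "Corruption"
    DISRUPTION_AND_DESTRUCTION = "Denial [via Destruction]"
    SUBVERSION = "Denial [via Subversion]"

# Illustrative classification of some examples mentioned in the text.
examples = {
    "stealth aircraft shaping and radar-absorbent materials": CanonicalStrategy.DENIAL_OF_INFORMATION,
    "battlefield dummies and spoofed transmissions":          CanonicalStrategy.DECEPTION_AND_MIMICRY,
    "electromagnetic pulse strike on a radar network":        CanonicalStrategy.DISRUPTION_AND_DESTRUCTION,
    "decoy signal triggering a proximity fuse prematurely":   CanonicalStrategy.SUBVERSION,
}

for attack, strategy in examples.items():
    print(f"{attack}: {strategy.name} (Borden/US DoD: {strategy.value})")
```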
Denning, Borden and Libicki identified three overall aims for Information Warfare operations and these aims should be achieved by the four canonical strategies described by Borden and Kopp. These aims are to increase the availability of the information resource to the attacker, to decrease the availability of the resource to the defender and to decrease the integrity of the information resource. Table 2 shows how the four canonical strategies are capable of achieving these three aims. It should be noted that Subversion attacks only achieve an aim when the unintentional behaviour they induce in the defender happens to achieve that aim.
| Effect on Information Resource | Degradation | Corruption | Denial | Subversion |
|---|---|---|---|---|
| Increase availability to attacker | X | X | | |
| Decrease availability to defender | X | X | X | |
| Decrease integrity of resource | X | X | X | |
Table 2: Canonical Information Warfare strategies (Borden, Kopp) and Aims of Information Warfare (Denning)
| Information Warfare Category | Degradation | Corruption | Denial | Subversion |
|---|---|---|---|---|
| Command and Control Warfare | X | X | X | |
| Information Based Warfare | | | | |
| Electronic Warfare | X | X | X | X |
| Psychological Warfare | X | X | X | |
| Hacker Warfare | X | X | X | |
| Economic Information Warfare | X | X | X | |
| Cyberwarfare | X | X | X | X |
Table 3: Libicki’s categories of Information Warfare and the Canonical Information Warfare strategies which can implement them
Libicki’s categories of possible types of Information Warfare can also be compared against the four canonical strategies to identify which attacks are utilised in each of these types of Information Warfare (Table 3). It is worth noting that Information Based Warfare utilises none of the four canonical strategies. This is because Information Based Warfare is an action similar to Borden’s Exploitation strategy and is therefore not an offensive act of Information Warfare. Subversion attacks only implement one of the types of Information Warfare when they cause an unintended action in the victim with whatever effect that type of Information Warfare requires. For example, Subversion will implement Command and Control Warfare if it incites the victim to damage its own communication systems, as in a Denial attack.
A difficulty in using this taxonomy to categorise attacks was initially identified when attempting to differentiate between Corruption and Subversion attacks. A similar differentiation problem was later observed between other pairs of strategies — Subversion and Denial, Degradation and Corruption, and Degradation and Destruction. This problem has been called “Mills’ Paradox” [32]. While labelled as a paradox, the problem is actually one of classification, where the boundary conditions need to be precisely defined and rigorously applied. Figure 4 shows these classification problems between adjacent strategies, along with the boundary conditions for differentiating between them.
Figure 4: Mills’ Paradox, showing the links between the boundary conditions for classifying Information Warfare attacks (Kopp 2006)
Mills’ Paradox poses the following four questions:
- How do we distinguish a Subversion attack from a Corruption attack?
- How do we distinguish a destructive Subversion attack from a Denial attack?
- How do we distinguish a Degradation attack from a mimicking Corruption attack?
- How do we distinguish an intensive active Degradation attack from a soft kill Denial attack?
The opposite corners of Figure 4 can always be distinguished easily from each other: Degradation attacks from Subversion attacks, and Corruption attacks from Denial attacks. The same is not true for the remaining four boundary conditions. Herein lies the “paradox”, in that of the six boundaries, two are so distinct that no ambiguity exists, whereas the remaining four require very careful analysis to establish exactly where one strategy begins and the other ends. A useful observation in this context is that Subversion and Corruption attacks are focussed on information processing, whereas Degradation and Denial attacks are focussed on the channel.
Distinguishing a Subversion attack from a mimicking Corruption attack requires an understanding of how the victim processes the deceptive input message. In both cases a message is being used to deceive the target system to the attacker’s advantage. The distinction lies in whether the victim is responding to the attack voluntarily or involuntarily. A Corruption attack alters the victim’s perception of external reality and the victim then responds to the environmental change with a voluntary action of some kind. In the Subversion method of attack, some involuntary internal mechanism is triggered to cause harm to the victim. Biological cases often prove difficult to separate.
Distinguishing a destructive Subversion attack from a Denial attack can also be problematic, where the end state is a destructive hard kill of the victim. Superficially the effect is the same — the victim receiver or system is no longer operational. The distinction between these two attacks lies in whether the destruction of the victim was a result of the expenditure of energy by the attacker or by the victim itself. A destructive denial attack invariably sees the attacker delivering the destructive effect via external means, whereas subversion sees the victim deliver the destructive effect against itself, once triggered. The victim’s role in both attacks is involuntary.
Distinguishing a passive Degradation attack from a mimicking Corruption attack can frequently present difficulties, especially in biological systems. The boundary condition is based upon whether the victim misidentifies the attacker or fails to perceive it at all. Mimicry which is designed to camouflage an attacker against the background noise is a passive degradation attack, since the victim cannot perceive the attacker.
Distinguishing an active Degradation attack from a soft kill Denial attack may also be superficially difficult. In both instances the channel has been rendered unusable by an observable attack on the receiver. The boundary condition can be established by determining whether the receiver remains functional or not. An overloading of the receiver to deny its use is quite distinct from a channel which is unusable due to saturation with a jamming signal.
Criteria for the boundary conditions are presented in Table 4.
By contrast with these difficulties, when considering Degradation and Subversion attacks, or the Corruption and Denial attacks, respectively, it is clear that the respective effects of these paired strategies are mutually exclusive. Degradation requires a functional victim system to achieve its effect, but Subversion results in the destruction or serious functional impairment of the victim. The same dichotomy exists between Corruption and Denial, as a deception cannot be effected if the victim system loses its channel or receiver.
| | Degradation | Corruption | Denial | Subversion |
|---|---|---|---|---|
| Degradation | – | Is effect perceived? | Effect on channel or receiver? | – |
| Corruption | Is effect perceived? | – | – | Voluntary or involuntary effect? |
| Denial | Effect on channel or receiver? | – | – | Attacker or victim supplied effect? |
| Subversion | – | Voluntary or involuntary effect? | Attacker or victim supplied effect? | – |
Table 4: The boundary conditions for differentiating between the canonical Information Warfare strategies
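Read as a decision aid, Table 4 amounts to a small lookup from a pair of candidate strategies to the single question that separates them. The sketch below is a hypothetical rendering of that lookup in Python (the function and data structure are ours, not Kopp’s); the two “easy” diagonal pairs simply fall through to the mutual-exclusion observation made above.

```python
# The deciding question for each ambiguous pair of strategies, taken from Table 4.
BOUNDARY_QUESTIONS = {
    frozenset({"Degradation", "Corruption"}): "Is the attack's effect perceived by the victim?",
    frozenset({"Degradation", "Denial"}):     "Is the effect on the channel or on the receiver?",
    frozenset({"Corruption", "Subversion"}):  "Is the victim's response voluntary or involuntary?",
    frozenset({"Denial", "Subversion"}):      "Is the destructive effect supplied by the attacker or by the victim?",
}

def deciding_question(a: str, b: str) -> str:
    """Return the Table 4 boundary question separating strategies a and b,
    or note that the pair is mutually exclusive (the 'easy' diagonals)."""
    return BOUNDARY_QUESTIONS.get(
        frozenset({a, b}),
        "No boundary question needed: the effects of these strategies are mutually exclusive.")

print(deciding_question("Corruption", "Subversion"))
print(deciding_question("Degradation", "Subversion"))  # an easy diagonal
```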
It is also interesting to sort the strategies according to how covert they are. Passive forms of Degradation and all forms of Corruption are inherently covert. Conversely, both Denial and Subversion are ultimately overt, as the victim system is damaged or impaired in function. In a sense, the Denial and Subversion strategies are centred on damaging or impairing the victim’s apparatus for information gathering and processing, whereas the Degradation and Corruption strategies are centred on compromising the information itself.
Information Warfare attacks may be combined, forming a compound Information Warfare strategy [29]. A compound Information Warfare strategy is a partially-ordered directed graph (network) of multiple attacks, which has a precedence relationship described by the graph's (network’s) structure. Each individual attack is designed to drive the victim towards an intended final state and may have multiple predecessor and successor strategies. Overall success of a compound strategy depends upon whether the victim’s end state matches that intended by the attacker.
As an example of a compound attack, consider a communication displaying “an indifference to what is real” as defined by Frankfurt [0], intended not to inform but to distract, misdirect or beguile. A specific instance of such a communication may implement an adaptive compound strategy, which typically utilises some combination of Degradation, Corruption and Subversion as a substitute for the actual informative content of a message. Several examples are detailed by Kopp in [28, 29, 30].
Kopp also defines the concept of a chained compound strategy, where an intermediate victim is used to propagate an attack against the final victim. An example is the exploitation of media organisations by terrorist movements, who use the media organisations to spread news of successful terrorist attacks. Figure 5 shows state transition diagrams for a canonical Information Warfare strategy, a compound Information Warfare strategy and a chained Information Warfare strategy.
Since compound Information Warfare strategies form directed graphs with individual strategies and victims as nodes and dependencies as arcs, they can be analysed using graph theory. One property of interest is that of a cut vertex, the removal of which partitions the graph into two or more disconnected graphs [15]. For a compound strategy, removal of a cut vertex corresponds to the failure of that individual Information Warfare attack; such attacks represent a single point of failure for the entire compound strategy. Attackers may increase the redundancy of their compound strategy to remove any cut vertices, by adding additional functionally identical attacks, analogously to fault-tolerant computer system design. Defenders who are aware of a compound strategy’s structure may attempt to identify cut vertices and focus their defence against those Information Warfare attacks. Another property of interest is parallelisation: as in parallelising software execution, parallel attacks (as in A and B in the compound strategies of Figure 5) can be undertaken simultaneously.
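A minimal sketch of this graph-theoretic view follows (our own construction, with hypothetical attack names). Rather than computing articulation points in general, it applies a simpler test suited to a campaign with one start and one end state: an individual attack is a single point of failure if its failure (removal) leaves no path from the start of the campaign to the intended end state.

```python
from collections import deque

# A hypothetical compound strategy: each attack must succeed before its successors.
# "start" and "goal" bracket the campaign; A and B are parallel attacks, C depends on both.
strategy = {
    "start": ["A", "B"],
    "A": ["C"],
    "B": ["C"],
    "C": ["goal"],
    "goal": [],
}

def reachable(graph, source, target, removed=None):
    """Breadth-first search from source to target, optionally skipping one failed node."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt != removed and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# An attack is a single point of failure if its failure disconnects the
# campaign's start state from the intended end state.
single_points_of_failure = [
    node for node in strategy
    if node not in ("start", "goal")
    and not reachable(strategy, "start", "goal", removed=node)
]
print(single_points_of_failure)  # ['C'] -- A and B are redundant in parallel, C is not
```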
Figure 5: Compound and Chained Information Warfare strategies (Kopp 2005)
In the pursuit of a compound attack, the state of the victim is often crucial. If a predecessor strategy has failed to produce its intended effect, successor attacks in the compound strategy may be ineffective or even counter-productive by betraying the predecessor strategy. In a successful compound attack, the victim’s internal state steps through a series of discrete states reflecting the successful effects of each node (attack) in the compound strategy. This exposes a historically well documented problem in the execution of complex deceptions, as determining the effect of a previously executed strategy may be difficult or impossible.
There are many common elements among the various definitions of Information Warfare. Information Warfare consists of offensive and defensive elements, in which one’s own information and information collection and processing functions are protected, while those of competitors are attacked. Information Operations may describe individual acts, while Information Warfare may describe an overall campaign of Information Operations against one or more competitors. Alternatively, each instance of Information Warfare may be called an Information Warfare attack.
Information Warfare attacks may be performed by a single entity or a group of entities. From the definitions above, the following users of Information Warfare can be identified: citizens, governments, companies, criminals, countries, non-nation states, political groups and business cartels. Any of these may target their attacks within or across these groups. Criminals may perform Information Warfare attacks against companies, while political groups could attack and defend against attacks from other political groups. Information Warfare capabilities have also influenced the design of machines and systems, with many military examples, including stealth aircraft and visual camouflage schemes. Many if not most animal and plant species also utilise Information Warfare strategies to aid their survival and reproduction.
Modern communication networks and computer systems have created a new environment for Information Warfare attacks. In consequence, Information Warfare has often incorrectly been described as a modern development. However, elements of Information Warfare are typically coextensive with human military conflict, including chronicled instances of strategic military deception and psychological warfare. This identifies a classical use of Information Warfare, recently extended to exploit new technology. We can anticipate that this will continue so long as technological advances continue. In biology, there is a clear parallel to co-adaptive arms races between predator and prey species. Narrow definitions of Information Warfare, restricting it to attacks involving computer systems or telecommunications networks or even simply to human activity, miss this broader significance of Information Warfare. A more appropriate concept allows for Information Warfare in any competitive environment where information processing takes place. Identifiable instances of Information Warfare both in human history and in nature support the broad definition [28].
The canonical Information Warfare strategies provide a framework for categorising Information Warfare attacks and for identifying the functional similarities between what may appear to be quite different attacks. For example, a camouflaged insect and a stealth aircraft are both utilising the same canonical strategy — Degradation — against potential observers to achieve the same goal — concealment.
Shannon's Communication Theory and Information Warfare
Borden [7] and Kopp [25] both assert that Information Warfare, being based upon the concept of information, should be analysed in terms of Shannon’s information theory. Borden argues that an action a decision-maker performs on data will reduce the decision-maker’s uncertainty. Measuring that uncertainty before and after the action reveals the change in uncertainty, which may be measured in bits.
An example is the case of Paul Revere [7]. Revere was awaiting information regarding the movement of British troops, who would move either by land or by sea. This information was to be forwarded by Revere to the American Revolutionaries. A lookout in a nearby church tower would observe the British and report the method of their approach by showing “one lantern if by land, two if by sea”. For Revere both approaches were equally probable (p(land) = p(sea) = 1/2), so he had one bit of uncertainty (Equation 3). Revere observed that two lanterns had been lit, which informed him that the British were coming by sea, thereby reducing his uncertainty. Revere’s uncertainty after receiving the message is recalculated with p(land) = 0, p(sea) = 1 (with the usual convention that 0 log₂ 0 is taken to be 0):

$$H = -\left( 0 \log_2 0 + 1 \log_2 1 \right) = 0 \ \mathrm{bits} \qquad (4)$$

As Equation 4 shows, Revere’s uncertainty was reduced to 0, and so the message removed 1 bit of uncertainty.
Equation 3 also demonstrates that when one is equally free to choose between different messages, the amount of information in the transmitted message is maximised. On the other hand, Equation 4 shows that when the probability of a message is certain, the message contains no information. As the probability of a message being selected increases, the amount of information it contains decreases. It is clear that in Shannon’s definition of information, the key aspect of a message is its improbability. So, one feature of Information Warfare is the attacker’s attempt to reduce its own uncertainty or to increase the uncertainty of its target.
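A minimal numerical rendering of the Revere example follows, assuming only the standard convention that 0 log₂ 0 is treated as 0.

```python
from math import log2

def entropy(probabilities):
    """Shannon entropy in bits, with the convention that 0 * log2(0) = 0."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# Before the lanterns are seen: land and sea approaches are equally likely.
print(entropy([0.5, 0.5]))  # 1.0 bit of uncertainty (Equation 3)

# After two lanterns are seen: the sea approach is certain.
print(entropy([0.0, 1.0]))  # 0.0 bits (Equation 4); the message removed 1 bit
```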
Information Warfare may also be considered in terms of its effects on the capacity of an information channel. Kopp states that “Information Warfare in the most fundamental sense amounts to manipulation of a channel carrying information in order to achieve a specific aim in relation to an opponent engaged in a survival conflict” [27]. Borden [8] has described Information Warfare as a “battle for bandwidth”, in which there is a competition over the available capacity of an information channel. The canonical Information Warfare strategies can be explained in terms of how they each affect an information channel [25]. This is achieved by examining the effects of the strategies on the terms of Shannon’s channel capacity formula (Equation 2). An information channel’s capacity (C) can be reduced by decreasing its bandwidth (W), decreasing the power of the signal (P) or increasing the noise in the channel (N).
The four canonical strategies may all be understood as fundamental modes of attack against a victim’s communications channel. Again, all forms of attack on a communications channel can be mapped onto the four canonical strategies, or some compound strategy arising from two or more canonical strategies. Numerous specialised forms of attack exist, and many have to date been mapped into compound or canonical strategies.
Two assumptions are made in this model. The first is that the victim receiver can wholly understand and thus decode the messages it receives, which may or may not be true in general. The second is that some repeatable mapping exists between a message, background noise and the quantitative measures of P and N. A basis for establishing such a mapping lies in Shannon entropy, which shows that a message with an entirely predictable content has no information content [52]:
$$I(m) = -\log_2 p(m) \qquad (5)$$
where I(m) is the information content of the message and p(m) is its probability.
As p(m) → 1, I(m) → 0. If we define noise in this channel as messages without useful content from the receiver’s perspective, this provides a basis for our mapping.
A Degradation attack may render the signal sufficiently noisy that the receiver cannot discern the signal from the background noise on the channel. An active Degradation attack transmits additional noise into the information channel, so that the signal is harder to detect. Injecting sufficient noise into the channel will make N >> P and thereby force C → 0. A passive Degradation attack reduces the power of the signal (P → 0) so that it is too faint to be detected, which also forces C → 0.
A Corruption attack substitutes a corrupted signal for a valid one. In terms of Shannon’s formula, the attack replaces P_Genuine with P_Corrupt, while W and N remain unchanged.
Denial attacks disable or destroy transmission links or information receivers, denying the victim the ability to receive information. These attacks reduce the available bandwidth of the channel (W) and in an effective attack W → 0 or W = 0.
Subversion would not normally be described in terms of channel capacity, as it does not affect the signal transmitted, the contents of the signal, the information channel or the receiver. Instead Subversion is likened to an attack against the decision-making process, which can be modelled by a Turing machine [27].² The program that controls the operation of a Turing machine is a tape that contains a series of symbols. A Subversion attack against such a machine is performed by covertly altering the symbols on the tape, thereby altering the behaviour of the Turing machine in some manner. This can be considered analogous to Corruption attacks, by replacing the target’s original probability distribution with one that misrepresents the situation.
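The mapping above can be illustrated numerically. The sketch below (our own illustration, with arbitrary parameter values) evaluates the Shannon–Hartley capacity of Equation 2 and shows the direction in which each of the first three canonical strategies drives it; Subversion is omitted because, as noted, it acts on the victim’s processing rather than on the channel.

```python
from math import log2

def capacity(W, P, N):
    """Shannon-Hartley channel capacity C = W * log2(1 + P/N), in bits per second."""
    return W * log2(1 + P / N)

W, P, N = 1000.0, 10.0, 1.0              # arbitrary baseline values
print(capacity(W, P, N))                 # baseline capacity

# Degradation (active): inject noise so that N >> P, forcing C towards 0.
print(capacity(W, P, N * 1000))
# Degradation (passive): conceal or attenuate the signal, P -> 0, again C -> 0.
print(capacity(W, P * 1e-6, N))
# Corruption: the corrupt signal replaces the genuine one; W, N and C are
# unchanged, but the capacity now carries the attacker's message.
print(capacity(W, P, N))
# Denial: disable the link or receiver, W -> 0, so C -> 0.
print(capacity(W * 1e-9, P, N))
```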
A deception technique which arises frequently in commercial and political product marketing, but also in many intelligence deceptions, is “deception by omission” [30]. An attacker using this technique will present a message, or multiple messages, which appear to be complete, but which are not. This technique is a form of Passive Degradation, a variant of the first canonical strategy, where P → 0 for the hidden information, thus reducing its contribution to channel capacity to zero.
Fraser and Hipel [18] identify a game strategy termed “strategic surprise”, where one or more players do not know or understand that a particular option in the game may exist. More often than not, players using “deception by omission” are playing exactly this game.
Another deception technique which arises frequently in commercial and political product marketing, but also in many intelligence deceptions, is “deception by saturation”, which is also known as “flooding”. The mapping of this technique is sufficiently interesting to study in more detail [30].
Deception by Saturation arises in two forms, either as an Active Degradation attack, or a soft kill Denial by Destruction attack. In executing a Deception by Saturation attack, the attacker will inundate the victim with messages, most of which are redundant or irrelevant, with the aim of saturating the victim’s channel so the victim cannot gather information which might contradict the attacker’s message. Even an alert victim who may have the capacity to find valid messages embedded in a large volume of redundant messages may be effectively attacked, if the victim does not have the available time to sort through all of the received messages.
Deception by Saturation is distinct from Deception by Omission as it involves the active generation of messages with deceptive intent, whereas the latter involves the omission of messages, again with deceptive intent. Libicki [40] also describes how saturating a victim with low-information messages can conceal the more informative messages that the victim receives. Libicki also notes that analysis of how a victim responds to these low-information messages can aid an attacker in preparing a saturation attack, in particular by ascertaining how it is likely to respond to a flood of such messages.
The messages used to implement the saturation attack can be considered to be noise in the channel, devoid of information content. Where the victim cannot successfully filter a message from the background noise, for whatever reason, the capacity of the channel will degrade. In terms of Shannon’s model for channel capacity (previously cited), the redundant or information-free messages represent noise N, with N >> P resulting in C → 0.
An alternative form of this attack is one in which the victim does have the capability to distinguish the real from the information-free messages, but is unable to perform this operation in reasonable time and thus fails to distinguish between the attacker’s message and the real message.
In terms of Shannon’s model for channel capacity this is a situation where the bandwidth of the channel is inadequate to the problem, that is W << W_Required. As a result the capacity available is not enough to carry the real message and the attack succeeds. Attacks which compromise the available channel bandwidth, rendering it unusable, are classified as soft kill Denial by Destruction attacks.
A third deception technique which arises frequently in commercial and political product marketing, and presents an interesting case for mapping into a canonical strategy, is Deception by Spin. This is a form of Subversion attack and is often used in a compound strategy supported by Deception by Omission, or sometimes Deception by Saturation. A spin attack is based on the idea of presenting an unpalatable but acknowledged or accepted fact, while encouraging the victim to assess that fact from a perspective which is less damaging to the attacker. The victim’s mechanism for critically assessing the unpalatable fact is thus subverted. Alternatively, victims may be encouraged to reassess the unpalatable fact by associating similar, but different, facts with positive ideas, undermining the original, negative assessment of the more relevant fact. We designate the latter as an Indirect Deception by Spin.
In information-theoretic and information processing terms, Deception by Spin is a classical compound Subversion attack which is targeted against the Orientation step of the victim’s OODA loop, discussed previously. As the victim uses its own internal processing resources to infer false conclusions from the received message, the victim has been effectively subverted to an internal state intended by the attacker [13].
Analysing Information Warfare as an attack against an information channel provides three distinct targets for Information Warfare attacks, each of which is uniquely associated with one canonical Information Warfare strategy. These attacks either reduce the capacity of the channel or target messages that are inside the channel. Kopp argues that since there is a kind of attack for each term of the equation and one for the underlying probability distribution (Subversion), these four attacks exhaust the possible canonical Information Warfare strategies [31].
The Generality of Information Warfare
The earlier concepts of Information Warfare focus on its usage in the contemporary social or military domain, specifically their targeting of modern information systems, such as computers and communications networks. Information Warfare, however, is neither a modern invention, nor unique to humans. The use of information in competitive survival contests is quite general.
Three of the methods currently used by the military for Information Warfare have a historical basis [60]. Psychological operations were widely used by the Mongols during their invasions to spread fear about their approaching armies, encouraging “merchants” or survivors to spread word of their attack and strength ahead of their invasion force [14]. Military deception has similarly been used widely throughout history by military leaders [20, 9]. Julius Caesar’s rapid forced marches deceived enemies into thinking no imminent military action was possible. Sun Tzu advocated concealing one’s army — “have a capability, but appear not to” [22] — as a security measure for military warfare. Cases of such concealment leading to battle success abound throughout the history of warfare, demonstrating that Information Warfare strategies are not a modern development.
Indeed, Sun Tzu states that “all warfare is based on deception” [19, 16]. Since deception is involved in the canonical strategies of Information Warfare (especially the Corruption strategy), Sun Tzu’s writings provide a historical basis for the theory of Information Warfare and stress the importance of gathering and protecting information in warfare. Sun Tzu’s statements on Information Warfare include:
Protecting Information:
In making tactical dispositions, the highest pitch you can attain is to conceal them; conceal your dispositions, and you will be safe from the prying of the subtlest spies, from the machinations of the wisest brains.
Gathering Information:
Thus, what enables the wise sovereign and the good general to strike and conquer, and achieve things beyond the reach of ordinary men is foreknowledge.
Deception:
(1) Hence, when able to attack, we must seem unable; when using our forces, we must seem inactive; when we are near, we must make the enemy believe we are far away; when far away, we must make him believe we are near.
(2) Hold out baits to entice the enemy. Feign disorder and crush him.
These statements demonstrate that Sun Tzu understood the importance of information and the advantages provided by both offensive and defensive Information Warfare.
Kuehl [36] identifies three distinct target types of Information Operations: hardware, software and “wetware”. Hardware describes the physical devices that form the information collection, communication and information processing systems. Software describes the coded instructions that control the operation of the hardware devices. Wetware refers to the human mind, and Kuehl uses this term to emphasise that the human cortex is as important as software and hardware in modern information systems. Kuehl also points out that Information Operations against wetware have a long history and states that the teachings of Sun Tzu were infused with the idea that the enemy’s mind was the target that possessed the greatest payoff.
While the majority of descriptions and definitions of Information Warfare focus on its usage by humans and its effects on modern communications, Kopp [25] observes that “the fundamental paradigm of IW/IO [Information Warfare/Information Operations] appears to be a basic evolutionary adaptation resulting from competition in the survival game”. This argument is supported by examples of Information Warfare attacks performed by insects. Denning [17] also acknowledges that biological organisms can perform Information Warfare. Further biological examples of Information Warfare attacks are provided by Kopp and Mills [35], who describe the Information Warfare strategies employed by numerous species of insects, fish and birds. Information Warfare is a fundamental survival mechanism, which many different animal species have separately evolved over millions of years. According to evolutionary theory, traits which improve an organism’s chances of survival and reproduction will spread through its species, while traits that impair survival or reproduction will disappear. As a survival mechanism, Information Warfare should allow prey species to avoid predation or assist predators in catching their prey, and so its strategies will co-evolve just as readily as more obvious survival traits, such as strength and speed. Following Kopp and Mills, prey species use Information Warfare to conceal their existence from predators, to mimic other dangerous species so as to deter predators from attacking them, and to blind any predators that do attack. Meanwhile, predators use Information Warfare to conceal their presence from potential prey and mimic potential mates of their intended prey. The same strategies may be used by both predators and prey, with a species using its camouflage to hide from both its prey and its predators. By providing numerous examples of various animal species utilising Information Warfare, Kopp and Mills have clearly demonstrated that Information Warfare is neither a modern artifact nor a uniquely human endeavour, but instead a mechanism that may aid competition in any environment.
Information Warfare has frequently been misidentified as a recent development because the recent creation of worldwide telecommunications networks and computer systems has produced a new environment for Information Warfare. This new environment provides numerous new targets for attacks, along with many new methods for implementing them. Schwartau [51] and Denning [17] both give examples of these types of attacks. Regardless, the underlying strategies of modern, historical and primeval Information Warfare attacks are all the same.
The growth of telecommunication networks has increased their value to users, in line with Metcalfe’s Law [41]. Metcalfe’s Law, broadly interpreted, states that the value of a telecommunication network increases as the square of the number of devices connected. Therefore, modern telecommunication networks, which allow rapid worldwide communication between individuals and organisations, have become increasingly valuable to their users. Such networks and systems also provide an environment for Information Warfare activities. Each connected device, or its user, is a potential target for Information Warfare attacks performed through that network, along with the network itself. As networks become larger and more valuable, they become much higher value targets for Information Warfare attacks against the network’s users, as more targets can be affected by the disruption or destruction of the larger network. Larger networks also provide attackers with access to more potential targets.
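On this broad reading, for example, a network growing from one hundred to one thousand connected devices increases roughly a hundredfold in indicative value, while presenting an attacker with ten times as many potential points of entry.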
Information Warfare may also occur in competitive social environments other than the military, such as politics or product marketing [30]. In these cases, deception and related forms of Information Warfare are used to promote a group, an idea, or a product, typically among members of the general public. Kopp describes deception by a government department against the rest of the government and the nation’s population, deception by a business that conceals shortcomings of its products from customers, and the use of spin in a marketing campaign to enhance a product’s image in public perception.
In short, Information Warfare is useful across a wide variety of domains, including indeed any domain that offers some competitive advantage of one actor over another, whether in biology, warfare, sports, politics or marketing. Its potential presence should be expected and planned for in all such situations.
Applications of Information Warfare
The study of theoretical concepts of Information Warfare and their application to existing systems has many potential benefits. Information Warfare may occur in any situation where there is communication between two competing entities and may therefore be applied in a wide array of potential areas, some of which we now consider.
An important factor is that the increasing complexity and integration of man-machine systems, typified by networked computing systems, present increasing vulnerabilities to attack. Therefore, a general model for understanding the vulnerabilities of such systems is needed, especially for designers. Information-theoretic models of Information Warfare, such as the Borden-Kopp strategies, provide such general models.
Islam et al. [23] consider the security concerns of various wireless ad-hoc networking protocols from an Information Warfare perspective. Potential attacks against a wireless network are described in terms of canonical and compound Information Warfare strategies, along with the effectiveness of the wireless network protocols in defending against such attacks. This approach allows Islam, Pose and Kopp to identify a potential vulnerability to Subversion attacks in their proposed networking protocol, providing a focus for future security extensions to the protocol.
Kopp [28] has described how ‘perception management’ campaigns make use of Haswell’s techniques and principles of deception [20], allowing government and non-government organisations to use Information Warfare attacks against a victim population. Examples are drawn from the regimes of Nazi Germany and the Soviet Union. These campaigns exhibit most of Haswell’s principles of deception, concentrating on thorough preparation, centralised control of propaganda and message credibility. Kopp notes that such perception management campaigns may have two distinctly different targets. Typically, perception management is performed by authoritarian regimes, where the regime controls the media apparatus and is intent on deceiving its population for its own benefit. Kopp likens this situation to a hypergame [2, 18] that the regime plays against its own population, in which the regime and the victim population possess differing perceptions of reality. This approach utilises deception and makes extensive use of Haswell’s principles of deception to create an illusion of reality for the victim population. Perception management may also be used by regimes or movements to perform deception and propaganda campaigns against the populations of other nations. In that case foreign media organisations are the initial target of a deception, which compels them to distribute further deceptive messages to the victim population, in an example of chained Information Warfare, employing compound strategies of Subversion, Degradation and Corruption. Denial attacks are avoided, as these compromise the intermediate aim of the strategy by damaging the delivery channel, preventing its future use.
The use of the mass media as a conduit for Information Warfare presents a difficult defensive problem. There are three distinct groups which may defend against these attacks — the mass media, the victim populations and the governments of the victim populations. The mass media mostly materially benefit from their role as a conduit and therefore have no interest in preventing the propagation of the Information Warfare attack against the victim population. Governments of the victim populations, especially democracies, may be powerless to stop such an attack, as legislation often prevents direct control of the mass media, while in any case modern technology simplifies the circumvention of government censorship.³ The victim population may be the only entity in a position to identify and resist such attacks. However, it is unlikely that all members of the population will be capable of defending themselves against these attacks, allowing the campaign to deceive much of the population.
Political and product marketing is another area in which deception is common [30]. Such deception is used to market a policy, product or service despite its limitations or deficiencies. Kopp identifies three common types of deception that are used in this area — ‘Deception by Omission’, ‘Deception by Saturation’ and ‘Deception by Spin’. These three types of deception, discussed in detail previously, are mapped into the four canonical Information Warfare strategies, which identifies them as attacks that do not use the Corruption strategy typically associated with deception. Kopp argues that these three deception attacks may be utilised when regulation or legislation prevents Corruption’s untruthful communication, to achieve results similar to Corruption.
Broader and more fundamental issues arise when we consider the impact of the use of Information Warfare techniques on various paradigms of conflict, especially in conflicts involving nation states and non-state actors. Deception and propaganda have been a central part of such conflicts for as long as they have existed.
What has changed with the information revolution is that digital technology permits large amounts of data, and so also information, to be communicated and disseminated very rapidly. Most established paradigms of conflict have evolved in environments where the underlying technology base provided no such capability. As a result they exhibit varying levels of sensitivity to the introduction of systems which can transmit or disseminate data and information on a large scale. Numerous case studies can be found in the impact of networking technologies upon contemporary military systems and organisations [34].
Kopp [26] argues that “a fundamentally different adaptation is required in order to survive and prevail in such an environment. This adaptation is the ability to evolve faster in technology and operational doctrine over potential opponents. Indeed it is worth stating this as an axiom: ‘The player who can evolve technology and doctrine faster than an opponent, all other things being equal, will prevail.’ ”
Investigations of the ideas and applications of Information Warfare may allow potential victims to identify Information Warfare attacks and better defend against them. Knowledge of the structure and properties of an Information Warfare attack also suggests potential defences against it. The theory of Information Warfare can also be used to analyse existing physical and non-physical security systems and provide insight into the vulnerabilities of such systems.
Conclusions
This paper has surveyed existing research covering information-theoretic models for Information Warfare, and performed a critical analysis of definitions, models and canonical strategies for Information Warfare.
In the context of Information Warfare, the term “information” commonly refers to the natural language concept of information. However, Information Warfare may also be considered from the point of view of Shannon’s information and communication theory, providing a formal background for understanding how Information Warfare functions.
All definitions of Information Warfare describe it as a combination of offensive actions, performed against an opponent’s information and information processing capabilities, and defensive actions to protect oneself from such attacks. Successful attacks are intended to affect the victim’s decisions and actions. The many potential offensive Information Warfare attacks can be divided into four canonical strategies, which may be applied in any competitive domain.
While much of the Information Warfare literature focuses on its application to computer systems and telecommunications networks, there are many other situations where Information Warfare arises. These span technological, biological and social systems. This is evidence that Information Warfare is not a recent human invention, but rather an evolved, general capability which provides a competitive survival advantage in situations where information is of benefit to competing entities. For instance, in biological organisms Information Warfare is used to disadvantage competitors, predators, or prey.
An important consideration for architects and designers of computer networks and systems is that increasing complexity and levels of integration present increasing difficulties in defining and validating security mechanisms and models. This is especially true where an attacker can target both the machine and human elements in an information system. We have established that the generality of the four canonical strategies, which are based on information theory, permits their use as a tool and intellectual framework for modelling both attacks and defensive mechanisms in such complex, hybrid computing systems.
The advent of information-theoretic models for Information Warfare establishes this as an area of study within the information sciences, in addition to its interest for the social sciences, military sciences and information systems.
Endnotes
1 To be sure, perceptual and cognitive psychological research over recent decades shows that “filtering” or “subsetting” is far too simple a description of what occurs.
2 Alternatively, however, decision processes within victims could themselves be construed as information-bearing channels.
3 Nevertheless, there are many potential methods by which governments may protect their citizens from Information Warfare attacks. However, the examination of these methods, along with their benefits and drawbacks to both governments and citizens is outside the scope of this paper.
References
[0] Harry G. Frankfurt, On Bullshit, Princeton University Press, Princeton, New Jersey, 2005.
[1] Gregory Bateson. Form, Substance and Difference. In Steps to an Ecology of Mind. Chandler Publishing Co., New York, 1972. URI: http://www.rawpaint.com/library/bateson/formsubstancedifference.html; accessed October, 2012.
[2] Peter. G. Bennett. Hypergames: Developing a Model of Conflict. Futures, 12:489–507, 1980.
[3] Bruce Berkowitz and Robert W. Hahn. Cybersecurity: Who’s Watching the Store? Issues in Science and Technology, Spring 2003. URI: http://www.issues.org/19.3/berkowitz.htm; accessed October, 2012.
[4] Eric Berne. Games People Play: The psychology of human behaviour. Ballantine Books, New York, 1964.
[5] Eric Berne. What Do You Say after You Say Hello?: the psychology of human destiny. Bantam Books, New York, 1973.
[6] Max H. Boisot. Knowledge Assets: Securing Competitive Advantage in the Information Economy. Oxford University Press, Oxford, 1998.
[7] Andrew Borden. What is Information Warfare? Aerospace Power Chronicles, United States Air Force, Air University, Maxwell AFB, 1999. URI: http://www.airpower.maxwell.af.mil/airchronicles/cc/borden.html; accessed October, 2012.
[8] Andrew Borden. Unpublished correspondence between the Author and C. Kopp, 2001.
[9] Partha Bose. Alexander the Great’s Art of Strategy: Business Lessons from the great Empire builder. Allen and Unwin, Crows Nest, Australia, 2003.
[10] John R. Boyd. Patterns of Conflict. Briefing Slides, December 1986. URI: http://www.ausairpower.net/JRB/poc.pdf; accessed October, 2012.
[11] John R. Boyd. Strategic Game of ? and ? Slideshow, June 1987. URI: http://www.ausairpower.net/JRB/strategy.pdf; accessed October, 2012.
[12] L. Brumley, C. Kopp, and K. B. Korb. Misperception, Self-Deception and Information Warfare. In G. Pye and M. Warren, editors, Proceedings of the 6th Australian Information Warfare & Security Conference 2005, pages 71–79, Geelong, Australia, 2005. School of Information Systems, Deakin University, URI: http://www.csse.monash.edu.au/~carlo/InfoWar/Lectures/IWAR-2005-LB.pdf, accessed October, 2012.
[13] L. Brumley, C. Kopp, and K. B. Korb. The Orientation step of the OODA loop and Information Warfare. In C. Vailli and A. Woodward, editors, Proceedings of the 7th Australian Information Warfare & Security Conference 2006, pages 18–25, Perth, Australia, 2006. School of Computer and Information Science, Edith Cowan University. URI: http://www.csse.monash.edu.au/~carlo/InfoWar/Lectures/OODA-Loop-BKK-IWC7-2006.pdf, accessed October, 2012.
[14] James Chambers. The Devil’s Horsemen: The Mongol Invasion of Europe. Cassell Publishers Ltd, London, 1988.
[15] Gary Chartrand. Introductory Graph Theory. Dover Publications, New York, 1977.
[16] James Clavell. The Art of War. Hodder and Stoughton, London, 1981.
[17] Dorothy E. Denning. Information Warfare and Security. Addison-Wesley, Boston, 1999.
[18] Niall M. Fraser and Keith W. Hipel. Conflict Analysis, Models and Resolutions. Elsevier Science Publishing Co. Inc., New York, 1984.
[19] Samuel B. Griffith. The Art of War. Clarendon Press, Oxford, 1963. Translated and with an introduction by Samuel B. Griffith ; with a foreword by B.H. Liddell Hart.
[20] Jock Haswell. The Tangled Web: The Art of Tactical and Strategic Deception. John Goodchild Publishers, Wendover, 1985.
[21] Richards J. Heuer, Jr. Psychology of Intelligence Analysis. Center for the Study of Intelligence, Central Intelligence Agency, Langley, Virginia, USA, 1999. URI: https://www.cia.gov/.../PsychofIntelNew.pdf; accessed October, 2012.
[22] J. H. Huang. Sun Tzu: The new Translation. Quill, New York, 1993.
[23] Muhammad Mahmudul Islam, Ronald Pose, and Carlo Kopp. Suburban Ad-Hoc Networks in Information Warfare. In Graham Pye and Matthew Warren, editors, Proceedings of the 6th Australian Information Warfare & Security Conference 2005 (IWAR 2005), pages 71–79, Geelong, Victoria, November 2005. School of Information Systems, Deakin University, URI: http://www.csse.monash.edu.au/research/san/Suburban...Warfare.pdf, accessed October, 2012.
[24] Jorma Jormakka and Jarmo V. E. Mölsä. Modelling Information Warfare as a Game. Journal of Information Warfare, 4(2):12–25, 2005.
[25] Carlo Kopp. Information Warfare: A Fundamental Paradigm of Infowar. Systems: Enterprise Computing Monthly, pages 46–55, February 2000. URI: http://www.ausairpower.net/OSR-0200.html; accessed October, 2012.
[26] Carlo Kopp. Moore’s Law and its Implications for Information Warfare. In Riccardo Sibilia, editor, Proceedings of the 3rd International Association of Old Crows (AOC) Electronic Warfare Conference, Zurich, 2000, pages –, Alexandria, Virginia, 2000. Association of Old Crows, URI: http://www.ausairpower.net/PDF-A/moore-iw.pdf, accessed October, 2012.
[27] Carlo Kopp. Shannon, Hypergames and Information Warfare. Journal of Information Warfare, 2(2):108–118, 2003; URI: http://www.csse.monash.edu.au/courseware/cse468/2006/Lectures/_JIW-2002-1-CK.pdf, accessed October, 2012.
[28] Carlo Kopp. Classical Deception Techniques and Perception Management vs. the Four Strategies of Information Warfare. In Graham Pye and Matthew Warren, editors, Proceedings of the 6th Australian Information Warfare & Security Conference 2005 (IWAR 2005), pages 81–89, Geelong, Victoria, November 2005. School of Information Systems, Deakin University; URI: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.3599&rep=rep1&type=pdf, accessed October, 2012.
[29] Carlo Kopp. The Analysis of Compound Information Warfare Strategies. In Graham Pye and Matthew Warren, editors, Proceedings of the 6th Australian Information Warfare & Security Conference 2005 (IWAR 2005), pages 90–97, Geelong, Victoria, November 2005. School of Information Systems, Deakin University, URI: http://www.csse.monash.edu.au/~carlo/InfoWar/Lectures/Method-IWC6-05.pdf, accessed October, 2012.
[30] Carlo Kopp. Considerations on Deception Techniques used in Political and Product Marketing. In Craig Valli and Andrew Woodward, editors, Proceedings of the 7th Australian Information Warfare & Security Conference 2006 (IWAR 2006), pages 62–71, Perth, Western Australia, November 2006. School of Computer and Information Science, Edith Cowan University, URI: http://www.csse.monash.edu.au/.../Deception-IWC7-2006-BA.pdf, accessed October, 2012.
[31] Carlo Kopp. CSE 468 Information Conflict Lecture Slides. Lecture Slides, March 2006. URI: http://www.csse.monash.edu.au/.../CSE-468-04.pdf; accessed October, 2012.
[32] Carlo Kopp. The Four Strategies of Information Warfare and their Applications. IO Journal, Vol. 1, Issue 4, Association of Old Crows, 28-33, February 2010.
[33] Carlo Kopp. Lifelong Learning in the Information Age. Defence Today, pages 46–55, August/Sept 2009. Strike Publications, Pty Ltd, Amberley, Australia.
[34] Carlo Kopp. NCW101: An Introduction to Network Centric Warfare. Air Power Australia, Melbourne, Australia, first edition, 2009.
[35] Carlo Kopp and Bruce Mills. Information Warfare and Evolution. In W Hutchinson, editor, Proceedings of the 3rd Australian Information Warfare & Security Conference 2002 (IWAR 2002), pages 352–360, Perth, Western Australia, 2002. Edith Cowan University, URI: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.159.2149&rep=rep1&type=pdf, accessed October, 2012.
[36] Daniel T. Kuehl. Strategic Information Warfare: A Concept. Number 332 in SDSC Working Papers Series. Strategic and Defence Studies Centre, Australian National University, Canberra, Australia, 1999.
[37] Wolfgang Känne. Some varieties of deception. In C. Penco, M. Beaney, and M. Vignolo, editors, Explaining the Mental, Newcastle, 2007. Cambridge Scholars Publishing.
[38] David Lewis. Dying for Information? An Investigation into the Effects of Information Overload in the USA and Worldwide. Reuters Business Information, London, May 1996. Commissioned Survey Report.
[39] Martin C. Libicki. What is Information Warfare? United States Government Printing Office, Washington DC, 1995.
[40] Martin C. Libicki. Conquest in Cyberspace: National Security and Information Warfare. Cambridge University Press, New York, 2007.
[41] Robert M. Metcalfe. Metcalfe’s law: a network becomes more valuable as it reaches more users. InfoWorld, 17(40):53, 1995.
[42] Oskar Morgenstern and John von Neumann. Theory of Games and Economic Behavior. Princeton University Press, Princeton, 1947.
[43] Ulric Neisser. Cognition and Reality. W. H. Freeman, San Francisco, 1976.
[44] Donald A. Norman. The Design of Everyday Things. Doubleday, New York, 1990.
[45] G. A. Quattrone and A. Tversky. Causal versus diagnostic contingencies: On self-deception and the voter’s illusion. Journal of Personality and Social Psychology, 46(2), 1984.
[46] V. S. Ramachandran. The Evolutionary Biology of Self-Deception, Laughter, Dreaming and Depression: Some Clues from Anosognosia. Medical Hypotheses, 47:347–362, 1996.
[47] Gregory Rattray. Strategic Warfare in Cyberspace. The MIT Press, Cambridge, Massachusetts, 2001.
[48] Chester W. Richards. Boyd’s OODA loop. Slideshow, 2001. URI: http://www.dnipogo.org/richards/boyds_ooda_loop.ppt; accessed October, 2012.
[49] S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, Englewood Cliffs, New Jersey, 1995.
[50] B. Sahdra and P. Thagard. Self-Deception and Emotional Coherence. Minds and Machines, 13:213–231, 2003.
[51] Winn Schwartau. Information Warfare: Chaos on the Electronic Superhighway. Thunder’s Mouth Press, New York, 1994.
[52] Claude Shannon. A Mathematical Theory of Communication. Bell System Technical Journal, 27:379–423, 623–656, July, October 1948. URI: http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html, accessed October, 2012.
[53] Claude E. Shannon and Warren Weaver. The Mathematical Theory of Communication. The University of Illinois Press, Urbana, 1949.
[54] J.A. Simpson and E.S.C. Weiner, editors. Oxford English Dictionary. Oxford University Press, New York, second edition, 1989. ‘Information’.
[55] Karl Erik Sveiby. What is Information?, 1994. URI: http://web.archive.org/.../Information.html; accessed October, 2012.
[56] Fred Thompson. The Boyd Cycle and Business Strategy. In Fred Truitt and Isao Yorozu, editors, Proceedings of the Aomori-Atkinson Conference, 1993.
[57] Alvin Toffler. Future Shock. Bantam, reissue edition, 1984.
[58] Robert Trivers. The Elements of a Scientific Theory of Self-Deception. Annals of the New York Academy of Sciences, 907:114–131, 2000.
[59] D. S. Neil Van Leeuwen. The Spandrels of Self-Deception: Prospects for a Biological Theory of a Mental Phenomenon. Philosophical Psychology, 20(3):329–348, June 2007. URI: http://general.ujhb.ac.za/.../2007SpandrelsSelf-Deception.pdf; accessed October, 2012.
[60] Sheila E. Widnall and Ronald R. Fogleman. Cornerstones of Information Warfare. Doctrine/Policy Document, 1997. URI: http://www.iwar.org.uk/iwar/resources/usaf/iw/corner.html; accessed October, 2012.
[61] Norbert Wiener. Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press, Cambridge, Massachusetts, second edition, 1961.
Air Power Australia Analyses ISSN 1832-2433