Unenforced Restrictions

Early systems, in which storage was costly, often relied on users and their programs not to attempt certain actions. Although modern systems can detect and prevent such actions, these early systems could not afford the storage and the programs to do so. Most of these actions would produce unpredictable results and were not directly exploitable; however, a few produced exploitable results. Similar problems appear in modern systems. For example, the UNIX user directory program fingerd may fail to enforce the restriction on the length of its input. The storage beyond the input area is occupied by a privileged program. An attacker can easily exceed the expected length of the input with a rogue program, which can then be executed under the identity and privilege of the overwritten program.
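The flaw can be illustrated with a short fragment of C. The buffer size, names, and input call below are illustrative rather than taken from fingerd itself; the point is simply that the program trusts its caller to respect a length limit that it never enforces, so a longer request overwrites whatever storage follows the buffer.

    #include <stdio.h>

    /* Illustrative sketch of an unenforced length restriction; not the
     * actual fingerd source.  The program assumes a request fits in
     * 512 bytes but never checks. */
    int main(void)
    {
        char request[512];        /* the expected maximum request length */

        scanf("%s", request);     /* unbounded copy: input longer than 512
                                     bytes spills past the end of the buffer */
        printf("lookup: %s\n", request);
        return 0;
    }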
Covert Channels

The term "covert channels" is most often used to describe unintended information flow between compartments in compartmented systems. For example, although compartment A has no authorized path to do so, it may send information to compartment B by changing a variable or condition that B can see. This usually involves cooperation between the owners of the compartments in a manner that is not intended or anticipated by the managers of the system. Alternatively, compartment B may simply gather intelligence about compartment A by observing some condition that is influenced by A's behavior.

The severity of the vulnerability presented by covert channels is usually measured in terms of the bandwidth of the channel (i.e., the number of units of information that might flow per unit of time). Most covert channels are much slower than intended modes of signaling. Nonetheless, because of the speed of computers, covert channels may still represent a source of compromise.

The possibility of covert channels is of most concern when system management relies on the system to prevent data compromises involving the cooperation of two individuals or processes. Under many commercial policies, however, management is prepared to accept the risk of collusion: the system is expected to protect against an individual acting alone, and other mechanisms protect against collusion. The Department of Defense mandatory policy, by contrast, assumes that a single user might operate multiple processes at different levels. The enforcement of label integrity might therefore be compromised by covert channels, and under the mandatory policy the system itself is expected to protect against such compromise.

TYPES OF ATTACKS

Attacks are deliberate and resourceful attempts to interfere with the intended use of a system. The following sections discuss the types of potential attacks.

Browsing

Browsing, the simplest and most straightforward type of attack, is the perusal of large quantities of available data in an attempt to identify compromising information. Browsing may involve searching primary storage for the system password table. The intruder may also browse documentation for restrictions and then test to identify any that are not enforced. Access control is the preferred mechanism for defending against browsing attacks.

Spoofing

Spoofing is an attack in which one person or process pretends to be a person or process that has more privileges. For example, user A can mimic the behavior of user C to make process B believe that user A is user C. In the absence of any other controls, B may be duped into giving user A the data and privileges that were intended for user C.

One way to spoof is to send a false notice to system users informing them that the system's telephone number has been changed. When the users call the new number, they see a screen generated by the hacker's machine that looks like the one they expected from the target system. Believing that they are communicating with the target system, they enter their IDs and passwords. The hacker promptly plays these back to the target system, which accepts the hacker as a legitimate user. Two spoofs occur here. First, the hacker spoofs the users into believing that the accessed system is the target system. Second, the hacker spoofs the target system into believing that he is a legitimate user.

Eavesdropping

Eavesdropping is simply listening in on the conversations between people or systems to obtain certain information. This may be an attack in itself; that is, the information obtained from the conversation might itself be valuable. On the other hand, it may be a means to another attack (e.g., eavesdropping for a system password). Defenses against eavesdropping usually include moving the defense perimeter outward, reducing the amplitude of the communications signal, masking it with noise, or concealing it by the use of secret codes or encryption. Encryption is the most commonly used method of defense.

Exhaustive Attacks

Identifying secret data by testing all possibilities is referred to as an exhaustive attack. For example, one can identify a valid password by testing all possible passwords until a match is found. Exhaustive attacks almost always reveal the desired data. Like most other attacks, however, an exhaustive attack is efficient only when the value of the data obtained is greater than the cost of the attack. Defenses against exhaustive attacks therefore involve increasing the cost of the attack by increasing the number of possibilities to be exhausted. For example, increasing the length of a password increases the cost of an exhaustive attack, and increasing the effective length of a cryptographic key variable makes it more resistant to one.
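How quickly that cost grows can be made concrete with a short calculation. The sketch below assumes an alphabet of 26 lowercase letters and a handful of illustrative lengths; it simply counts the possibilities an attacker would have to exhaust, which grow exponentially with length.

    #include <stdio.h>

    /* Count the candidate passwords an exhaustive attack must try.  The
     * 26-letter alphabet and the lengths shown are illustrative
     * assumptions, not figures from the text. */
    int main(void)
    {
        const double alphabet = 26.0;            /* lowercase letters only */

        for (int length = 4; length <= 10; length += 2) {
            double possibilities = 1.0;
            for (int i = 0; i < length; i++)
                possibilities *= alphabet;       /* alphabet ^ length */
            printf("length %2d: %.0f possible passwords\n",
                   length, possibilities);
        }
        return 0;
    }

Going from six to eight lowercase characters, for example, raises the count from roughly 309 million to roughly 209 billion.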
Trojan Horses

A Trojan horse attack is one in which a hostile or unexpected entity is concealed inside a benign or expected one for the purpose of getting it through some protective barrier or perimeter. Trojan horse attacks usually involve concealing unauthorized data or programs inside authorized ones for the purpose of getting them inside the computer. One defense against such attacks is inspection (i.e., looking inside the Trojan horse). The effectiveness of this defense is improved if the data objects are kept small, simple, and obvious as to their intent.

Viruses

A virus is a Trojan horse program that, whenever executed, attempts to insert a copy of itself into another program, usually in order to perpetuate itself and spread its influence. Viruses exploit large populations of similar systems in which users share the privileges to execute arbitrary programs and to create or write to programs. To get themselves executed, viruses exploit the identity of the infected programs, automatic execution mechanisms, or the ability to trick part of a large user population. Defenses against viruses include differentiating systems along the lines exploited by the viruses and placing limits on sharing, writing, and executing programs.
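Because a virus must write its copy into another program, a change to a program that should never change is a useful signal. The sketch below checks a program against a checksum recorded when it was installed; the folding checksum, file path, and expected value are illustrative assumptions only, and a production check would use a cryptographic hash.

    #include <stdio.h>

    /* Recompute a stored program's checksum and compare it with the value
     * recorded at installation.  A mismatch suggests the program has been
     * written to since then, possibly by a virus inserting a copy of
     * itself.  The checksum, path, and expected value are illustrative. */
    static unsigned long checksum(const char *path)
    {
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return 0;

        unsigned long sum = 0;
        int c;
        while ((c = fgetc(f)) != EOF)
            sum = sum * 31UL + (unsigned char)c;   /* fold each byte in */
        fclose(f);
        return sum;
    }

    int main(void)
    {
        const char *program = "/usr/local/bin/report";  /* hypothetical program */
        const unsigned long expected = 0x1f3a29c4UL;    /* hypothetical value
                                                           recorded at install */

        if (checksum(program) != expected)
            printf("%s has changed since installation\n", program);
        else
            printf("%s matches its recorded checksum\n", program);
        return 0;
    }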