|
Some things need to be said about this chapter and the way it was written. As I sat before my machine, a blank page staring me in the face, I contemplated how I would structure this chapter. There were shadows looming over me and I want to discuss them here.
UNIX folks are a breed unto themselves. Some may know firewalls, some may know scanners, some may know exploit scripts, and so forth. However, they all share one common thing: They know their operating system exceedingly well. The average UNIX system administrator has probably written his own printer drivers on more than one occasion. He has also likely taken the source code for various stock utilities and reworked them to his own particular taste. So this chapter--to be any good at all--has to be filled with technical information of practical value.
Conversely, there are a lot of readers scouring these pages to learn about basic UNIX system security. Perhaps they recently installed Linux or FreeBSD because it was an inexpensive choice for a quick Web server solution. Perhaps they have had a UNIX box serving as a firewall at their offices--maintained by some outside technician--and they want to know what it actually does. Or perhaps this class of readers includes journalists who have no idea about UNIX and their editors have requested that they learn a little bit.
I considered all these things prior to writing even a single paragraph. What was the end result? A long chapter. UNIX folks can cut to the chase by breezing through each section. (There are tidbits here and there where important information appears, so keep an eye out.) The rest of the folks can read the chapter as an entire block and learn the following:
I hope this chapter will be of value to all. Also, because UNIX security is so complex, I am sure I have missed much. However, whole volumes are written on UNIX security and these still sometimes miss information. Therefore, we venture forth together, doing the best we can under the constraints of this book.
The UNIX platform has evolved over the years. Today, it can be defined as a 32- (or 64-) bit multitasking, multiuser, networked operating system. It has advanced security features, including discretionary access control, encryption, and authentication.
UNIX can be secure. However, it is not secure in its native state (that is, out of the box). Out-of-the-box weaknesses exist for every flavor of UNIX, although some distributions are more insecure than others. Certain versions of IRIX (SGI) and most early versions of Linux, for example, have Class A or B holes. (Those holes allow outsiders to gain unauthorized access.) These holes are not a terminal problem (no pun intended); they simply need to be plugged at first installation. That having been done, these versions of UNIX are no different from most other versions of nonsecure UNIX.
What is secure UNIX (or as it is sometimes called, trusted UNIX)? Secure UNIX is any UNIX platform that has been determined by the National Security Agency (NSA) to have excellent security controls. These versions must be on the NSA's Evaluated Product List (EPL). Products on this list have been rigorously tested under various conditions and are considered safe for use involving semi-sensitive data.
This evaluation is performed under the Trusted Product Evaluation Program (TPEP), which is conducted on behalf of the National Computer Security Center; both organizations are elements of the National Security Agency. These are the people who determine which products are "safe" for use in secure and semi-secure environments.
The products are rated according to a predefined index. This index has various levels of "assurance," or classes, of security. As described in the TPEP FAQ:
Cross Reference: "TPEP FAQ: What Is a Class?" can be found online at http://www.radium.ncsc.mil/tpep/process/faq-sect3.html#Q4.
The two UNIX products that are positioned highest on the list (levels B3 and B2, respectively) are identified in Table 17.1. According to the National Security Agency, these are the most secure operating systems on the planet.
Table 17.1. The highest-rated UNIX products on the Evaluated Product List.

Operating System    | Vendor                            | Class
XTS-300 STOP 4.1a*  | Wang Federal, Inc.                | B3
Trusted XENIX 4.0*  | Trusted Information Systems, Inc. | B2
*These operating systems have earlier versions that have all been determined to be in the same category. I have listed only the latest versions of these products.
To examine earlier versions (and their ratings), refer to http://www.radium.ncsc.mil/tpep/epl/epl-by-class.html.
Wang Federal's XTS-300/STOP 4.1a is not just an operating system, but an entire package. It consists of both hardware (Intel 80486 PC/AT, EISA bus system) and software (the STOP 4.1a operating system). It sports a UNIX-like interface at lower levels of the system. At higher levels, it utilizes a hierarchical file system. This operating system has stringent discretionary access control (DAC) and is suitable for sensitive work. STOP 4.1a has the very highest rating of any operating system. As reported by the EPL:
Cross Reference: You can find this report by the EPL online at http://www.radium.ncsc.mil/tpep/epl/epl-by-class.html.
Some night when you have a little play time, you should visit Wang Federal's site (http://www.wangfed.com/). The Technical Overview of the XTS-300 system will dumbfound you. At every level of the system, and for each database, application, user, terminal, and process, there is a level of security. It operates using a construct referred to as "rings of isolation." Each ring is exclusive. To give you an idea of how incredibly tight this security system is, consider this: Ring 0--the highest level of security--is totally unreachable by users. It is within this ring that I/O device drivers reside. Therefore, no one, at any time, can gain unauthorized access to device drivers. Even processes are restricted by ring privileges, allowed to interact only with those other processes that have the same or lesser ring privileges. Incredible. But it gets better. If a terminal is connected to a process that has a very low level of ring privilege, that terminal cannot simultaneously connect to another process or terminal maintaining a higher one. In other words, to connect to the process or terminal with a higher privilege, you must first "cut loose" the lower-privileged process. That is true security (especially because these conventions are enforced within the system itself).
Cross Reference: Wang Federal is the leading provider of TEMPEST technology, which is designed to defeat the interception and analysis of the electronic emanations coming from your computer. These electronic signals can "leak out" and be intercepted (even as far as several hundred yards away). TEMPEST technology can prevent such interception. This prevention generally involves encasing the hardware into a tight, metal construct beyond which radiation and emanations cannot escape. To see a photograph of what such a box looks like, visit http://ww.wangfed.com/products/infosec/pictures/tw3.gif. It looks more like a safe than a computer system.
An interesting bit of trivia: If you search for holes in any of Wang Federal's products, you will be searching a long, long time. However, in one obscure release of STOP (4.0.3), a bug did exist. Very little information is available on this bug, but a Defense Data Network (DDN) advisory was issued about it on June 23, 1995. Check that advisory for yourself. It is rather cryptic and gives away little about the vulnerability, but it is interesting all the same.
Cross Reference: You can find the DDN advisory about STOP online at ftp://nic.ddn.mil/scc/sec-9529.txt.
The next product down is Trusted XENIX, an operating system manufactured by Trusted Information Systems, Inc. You may recognize this name because the company also creates firewall products (such as the famous Firewall Tool Kit and a formidable package called Gauntlet). TIS has developed not just a whole line of security products, but security policies and theories as well. TIS has been in the security business for some time.
Cross Reference: Please take a moment to check out TIS at http://www.tis.com/ or examine products at http://www.tis.com/docs/products/index.html.
Trusted XENIX is a very security-enhanced version of UNIX (and bears little resemblance to the Microsoft XENIX product of years ago). This product's security is based on the Bell and LaPadula model.
Many users may be wondering what the Bell and LaPadula model is. This is a security model utilized by United States military organizations. It is described in Department of Defense Trusted Computer System Evaluation Criteria (also known as the Orange Book, out of the "Rainbow Book" series) as "...an abstract formal treatment of DoD (Department of Defense) security policy...."
As reported in the Orange Book:
NOTE: Find the Orange Book online at http://www.v-one.com/newpages/obook.html.
This sounds complicated, but it isn't really. The model prescribes a series of "rules" of conduct. These rules of conduct may apply to human beings (as in how military top secret and secret messages are sent) or to the levels of access allowed in a given system. If you are deeply interested in learning about the Bell and LaPadula security model, you should acquire the Orange Book. Moreover, there is an excellent paper available that will help you understand not only the basics of that security model but also weaknesses and quirks within it. That paper is titled "A Security Model for Military Message Systems." The authors are Carl Landwehr, Constance L. Heitmeyer, and John McLean. The paper proposes some new concepts with regard to such systems and contrasts these new approaches with the Bell and LaPadula security model. This paper reduces the complexity of the subject matter, allowing the reader to easily understand the concepts.
Cross Reference: "A Security Model for Military Message Systems" can be found online at http://www.itd.nrl.navy.mil/ITD/5540/publications/CHACS/Before1990/1984landwehr-tocs.ps.
Another excellent paper, "On Access Checking in Capability-Based Systems" (by Richard Y. Kain and C. E. Landwehr) demonstrates how some conditions and environments cannot conform to the Bell and LaPadula security model. The information discussed can fill out your knowledge of these types of security models.
Cross Reference: Kain and Landwehr's paper, "On Access Checking in Capability-Based Systems," can be found online at http://www.itd.nrl.navy.mil/ITD/5540/publications/CHACS/Before1990/1987landwehr-tse.ps.
Trusted XENIX has very granular access control, audit capabilities, and access control lists. In addition, the system recognizes four levels of secure users (or privileged users):
Only one of these (the auditor) can alter the logs. This is serious security at work. At this level of system security, the focus is on who, as opposed to what, is operating the system. In other words, operating systems like this do not trust users. Therefore, the construct of the system relies on strict security access policies instituted for humans by humans. The only way to crack such a system is if someone on the inside is "dirty." Each person involved in system maintenance is compartmentalized from the rest. (For example, the person who tends to the installation has his own account and this account [the Trusted System Programmer] can only operate in single-user mode.) The design, therefore, provides a very high level of accountability. Each so-called trusted user is responsible for a separate part of system security. For system security to be totally compromised, these individuals must act in collusion (which is not a likely contingency).
Versions of secure UNIX also exist that occupy a slightly lower level on the EPL. These are extremely secure systems as well and are more commonly found in real-life situations. XTS STOP and TIS Trusted XENIX amount to extreme security measures, way beyond what the average organization or business would require. Such systems are reserved for the super-paranoid. B1 systems abound and they are quite secure. Some of the vendors that provide B1 products are as follows:
NOTE: Again, I have listed only the latest versions of these products. In many instances, earlier versions are also B1 compliant. Please check the EPL for specifics on earlier versions.
This book does not treat implementation and maintenance of secure UNIX distributions. My reasons for this are pretty basic. First, there was not enough space to treat this subject. Second, if you use secure UNIX on a daily basis, it is you (and not I) who probably should have written this book, for your knowledge of security is likely very deep. So, having quickly discussed secure UNIX (a thing that very few of you will ever grapple with), I would like to move forward to detail some practical information.
We are going to start with one machine and work our way outward. (Not a very novel idea, but one that at least will add some order to this chapter.)
Some constants can be observed on all UNIX platforms. Securing any system begins at the time of installation (or at least it should). At the precise moment of installation, the only threat to your security consists of out-of-the-box holes (which are generally well known) and the slim possibility of a trojan horse installed by one of the vendor's programmers. (This contingency is so slight that you would do best not to fret over it. If such a trojan horse exists, news will soon surface about it. Furthermore, there is really no way for you to check whether such a trojan exists. You can apply all the MD5 you like and it will not matter a hoot. If the programmer involved had the necessary privileges and access, the cryptographic checksums will ring true, even when matched against the vendor's database of checksums. The vendor has no knowledge that the trojan horse exists, and therefore ships what it believes to be the most secure distribution possible. These situations are so rare that you needn't worry about them.)
Before all else, your first concern runs to the people who have physical access to the machine. There are two types of those people:
The first group, if they tamper with your box, will likely cause minimal damage, but could easily cause denial of service. They can do this through simple measures, such as disconnecting the SCSI cables and disabling the Ethernet connection. However, in terms of actual access, their avenues will be slim so long as you set your passwords immediately following installation.
TIP: Immediately upon installation, set the root password. Many distributions, like Sun's SunOS or Solaris, will request that you do so. It is generally the last option presented prior to either reboot (SunOS) or bootup (Solaris). However, many distributions do not force a choice prior to first boot. Linux Slackware is one such distribution. AIX (AIX 4.x in particular, which boots directly to the Korn shell) is another. If you have installed such a system, set the root password immediately upon logging in.
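The command itself takes only a moment. On most platforms it is simply the following, run as root (exact prompts vary; on some older systems you just run passwd while already logged in as root):

passwd root

Do this before the machine is ever left unattended or connected to the network.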
Next, there are several things you need to check. Those who have physical proximity but no privilege could still compromise your security. After setting the root password, the first question you should ask yourself is whether your system supports a single-user mode. If so, can you disable it or restrict its use? Many systems support single-user mode. For example, certain DECstations (the 3100, in particular) will allow you to specify your boot option:
Cross Reference: The previous paragraph is excerpted from CIAC-2303, The Console Password for DEC Workstations by Allan L. Van Lehn. This excellent paper can be found online at http://ciac.llnl.gov/ciac/documents/.
Interactive booting will get an intruder to single-user mode, and that hole should be shut immediately after installation by setting the console password on the DEC workstation.
Next, note that the box is only as secure as its location. Certainly, you would not place a machine with sensitive information in a physical location where malicious users can have unrestricted access to it. "Unrestricted access" in this context means access where users could potentially have time, without fear of detection, to take off the cover or otherwise tamper with the hardware. Such tampering could lead to compromise.
TIP: Some workstations have physical weaknesses similar to those inherent in the PC platform. On certain workstations, it is trivial to disable the PROM password. For instance, removing the NVRAM chip on Indigo workstations will kill the PROM password.
As noted in RFC 1244:
So your machine should be located in a safe place. It should be exposed to as little physical contact with untrusted personnel as possible. It should also have a root password and a console password, if applicable.
Your installation media should be kept in a secure place. Remember that installation media can be used to compromise the system. For example, our more mature readers may remember that this can be done with certain versions of AT&T UNIX, particularly SVR3 and V/386. This technique involves inserting the boot floppy, booting from it (as opposed to a fixed disk), and choosing the "magic mode" option. This presents a means through which to obtain a shell.
Remember that when you are installing, you are root. For those distributions that require a boot disk as part of the installation procedure, this is especially important.
Installations that occur solely via CD-ROM are less likely to offer a malicious user leveraged access. However, be advised that these types of installations also pose a risk. You must think as the malicious user thinks. If your SPARC is sitting out on the desk, with the installation media available, you had better take some precautions. Otherwise, a kid can approach, halt the machine (with L1 + A), boot the installation media (with b sd(0,6,2)), and proceed to overwrite your entire disk. A malicious user could also perform this operation with almost any system (for example, by changing the SCSI ID on the hard disk drive). AIX will boot from the CD-ROM if it finds that all other disks are unsuitable for boot.
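One partial defense on Sun hardware, offered as a sketch rather than a universal fix, is to password-protect the OpenBoot PROM so that non-default boot commands (such as booting from the CD-ROM) require a password. On SunOS and Solaris this is typically done as root with the eeprom command (variable names differ slightly across PROM revisions):

eeprom security-mode=command
eeprom security-password=

The second command prompts for the PROM password. Remember, though, that anyone with enough physical access to swap SCSI IDs can probably also reset the NVRAM, so this supplements, rather than replaces, physical security.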
However, it is more often through the use of a boot floppy that system security is breached. Typical examples of installation procedures that require a disk include SolarisX86, some versions of AT&T UNIX, some versions of SCO, and almost all distributions of Linux. If you have such a system, secure those disks. (True, a malicious user can acquire disk images from the Internet or other sources. However, this is not nearly as convenient as having the disk readily available, in close proximity to the workstation. Most onsite breaches are crimes of opportunity. Don't present that opportunity.)
Cross Reference: A fascinating approach to the problem of physical security of workstations is taken in a paper by Dr. J. Douglas Tygar and Bennet Yee, School of Computer Science at Carnegie Mellon University. This paper, Dyad: A System for Using Physically Secure Coprocessors, can be found online at http://www.cni.org/docs/ima.ip-workshop/www/Tygar.Yee.html.
Some definition is in order here, aimed specifically at those using SGI systems (or any other system that is commonly used for graphics, design, or other applications not generally associated with the Internet).
If you are running UNIX, your machine is networked. It makes no difference that you haven't got a "network" (other than the Internet) connected to it. UNIX is a networked operating system by default. That is, unless you otherwise disable networking options, that machine will support most of the protocols used on the Internet. If you have been given such a machine, used primarily for graphical projects, you must either get a technician skilled in security or learn security yourself. By the time that box is plugged into the Net, it should be secure. As I explained earlier in this book, lack of security knowledge has downed the machines of many SGI users. Windowed systems are great (and SGI's is truly beautiful to behold). However, at the heart of such boxes is a thriving, networked UNIX.
In nearly every flavor of UNIX, there is some default password or configuration that can lead to a root compromise. For example, at the beginning of this book, I discussed problems with certain versions of IRIX. I will recount those here briefly.
The following accounts on some versions of IRIX do not require a password to log in:
Cross Reference: To review the default password problem more closely, refer to Silicon Graphics Inc., Security Advisory 19951002-01-I; CERT Advisory CA-95:15--SGI lp Vulnerability. November 27, 1995. ftp://sgigate.sgi.com/Security/19951002-01-I or ftp://info.cert.org/pub/cert_advisories/CA-95%3A15.SGI.lp.vul.
Such problems should be dealt with immediately upon installation. If you are unaware of such weaknesses, contact your vendor or security organizations.
It is assumed that you are going to have more than one user on this machine. (Perhaps you'll have dozens of them.) If you are the system administrator (or the person dictating policy), you will need to set some standard on the use of passwords.
First, recognize that every password system has some inherent weakness. This is critical because passwords are at the very heart of the UNIX security scheme. Any compromise of password security is a major event. Usually, the only remedy is for all users to change their passwords. Today, password schemes are quite advanced, offering both encrypted passwords and, in certain instances, password shadowing.
NOTE: Password shadowing is a scheme in which the /etc/passwd file contains only tokens (or symbols) that serve as an abstract representation of the user's real, encrypted password. That real password is stored elsewhere on the drive, in a place unreachable by crackers.
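As a rough illustration (the account and field values here are invented for the example), a shadowed system might show entries like these:

# /etc/passwd -- world-readable, but the password field holds only a token
jdoe:x:1002:100:J. Doe:/home/jdoe:/bin/sh

# /etc/shadow -- readable by root only; holds the real encrypted password
jdoe:EncryptedPasswordHere:10142:0:99999:7:::

On systems that use the Shadow Suite discussed below, the pwconv utility typically performs this conversion, moving the encrypted passwords out of /etc/passwd and into /etc/shadow.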
Some distributions do not have shadowing as a default feature. I am not presuming here that you are installing the biggest and baddest UNIX system currently available on the market. Maybe you are installing SunOS 4.1.3 on an old SPARC 1, or similarly outdated hardware and software. (Or perhaps you are installing a Slackware version of Linux that does not support shadowing in the current distribution.)
In such a case, the /etc/passwd file will be at least viewable by users. True, the passwords are in encrypted form, but as you learned earlier, it is a trivial task to crack them. If they can be viewed, they can be cracked. (Anything that can be viewed can also be clipped and pasted. All that is required is some term package that can be used to Telnet to your box. Once the /etc/passwd file can be printed to STDOUT, it can be captured or otherwise copied.) This first needs to be remedied.
Passwords in their raw, encrypted form should not be viewable by anyone. Modern technology provides you the tools to hide these passwords, and there is no earthly reason why you shouldn't. There was a time, however, when such hiding was not available. In those olden days, bizarre and fantastic things did sometimes happen. In fact, in the early days of computer technology, security was a largely hit-or-miss situation. Here is an amusing story recounted by Robert Morris and Ken Thompson in their now-classic paper Password Security: A Case History:
Cross Reference: Password Security: A Case History can be found online at http://www.alw.nih.gov/Security/FIRST/papers/password/pwstudy.ps.
If your system supports it, you need password shadowing. If you are using Linux, you can get the Shadow Suite at ftp.cin.net/usr/ggallag/shadow/shadow-current.tar.gz.
For other systems, my suggestion is John F. Haugh II's shadow package. This package is extensive in functionality. For example, not only does it provide basic password shadowing, it can be used to age passwords. It can even restrict the port from which root can log in. Moreover, it supports 16-character passwords (as opposed to the traditional 8). This greatly enhances your password security, forcing crackers to consume considerable resources to crack an even more complex password. Other features of this distribution include the following:
Cross Reference: Shadow is available at ftp://ftp.std.com/src/freeunix/shadow.tar.Z.
As a system administrator, you will also need a password cracker and a series of wordlists. These tools will assist you in determining the strength of your users' passwords.
Cross Reference: Crack is available at ftp://coast.cs.purdue.edu/pub/tools/unix/crack/.
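Invocation is straightforward. A minimal run, sketched here (flags and reporting scripts differ between Crack versions), points the program at a copy of your password file:

Crack -nice 10 /tmp/passwd.copy

Delete the copy of the password file and Crack's output once you have notified the owners of any weak passwords.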
Wordlists vary dramatically, in terms of language, type of word, and so forth. Some consist only of proper names, and others consist of either all upper- or lowercase characters. There are thousands of locations on the Net where these lists reside.
Cross Reference: Two good starting places for wordlists are http://sdg.ncsa.uiuc.edu/~mag/Misc/Wordlists.html and ftp://coast.cs.purdue.edu/pub/dict/.
CAUTION: If you keep password crackers on your local disks, make sure they are not accessible to anyone but you. The same is true of wordlists or any other tool that might conceivably be used against your system (or anyone else's, for that matter). Many security tools fit this description. Be sure to secure all security tools that could potentially enable a cracker.
So, to recap, you have thus far performed the following operations:
Next, you will want to install a program that performs proactive password checking. Users are generally lazy creatures. When asked to supply their desired password, they will often pick passwords that can easily be cracked. Perhaps they use one of their children's names, their birth date, or their department name. On systems without proactive password checking, these characteristically weak passwords go unnoticed until the system administrator "gets around" to checking the strength of them with a tool such as Crack. By then it is often too late.
The purpose of a proactive password checker is to stop the problem before the password gets committed to the passwd file. Thus, when a user enters his desired password, before the password is accepted, it is compared against a wordlist and a series of rules. If the password fails to meet the requirements of this process (for example, it is found to be a weak password choice), the user is forced to make another choice. In this way, at least some bad passwords are screened out at time of submission.
The leading utility for this is passwd+, written by Matt Bishop. This utility has been in fairly wide use, largely because of its high level of functionality. It is a superb utility. For example, you can set the error message that will be received when a user forwards a weak password. In other words, the user is not faced with a cryptic "your password is no good" prompt, for this does not serve to educate the user as to what is a weak or strong password. (Such messages would also probably annoy the user. Users have little tolerance for a program that repeatedly issues such an error message, even if the error is with the user and not the program.) The program also provides the following:
Cross Reference: Matt Bishop's passwd+ is available at ftp://ftp.dartmouth.edu/pub/security/.
To learn more about this program (and the theory and practice Bishop applied to it), you need to get the technical report A Proactive Password Checker, Dartmouth Technical Report PCS-TR90-152. This is not available on the Net from Dartmouth. However, you can request a hardcopy of it by mail from http://www.cs.dartmouth.edu/cgi-bin/mail_tr.pl?tr=TR90-152.
So at this stage you have secured the workstation. It has shadowed passwords and will accept only passwords that are reasonably secure. Later, after your users have recorded their passwords into the database, you will attempt to crack them. The machine is also located in a safe place and neither a console mode nor installation media are available to local, malicious users. Now it is time to consider how this workstation will interact with the outside world.
Just what services do you need to run? For example, are you going to allow the use of r services? These are rlogin and rsh, primarily. These services are notorious for sporting security holes, not just in the distant past, but throughout their history. For example, in August 1996, an advisory was issued regarding an rlogin hole in certain distributions of Linux. The hole was both a Class A and Class B security hole, allowing both local and remote users to gain leveraged access:
The problem is not confined to Linux. Many hard-line users of UNIX "look down" on Linux, taking the position that Linux is not a "real" UNIX operating system. So whenever holes crop up in Linux, the hard-line community takes the "I told you so" position. This is an untenable view. Many distributions of real UNIX have had similar bugs. Consider this IBM advisory (titled "Urgent--AIX Security Exposure"):
This hole was an rlogind problem. On affected versions of AIX, any remote user could issue this command:
rlogin AIX.target.com -l -froot
and immediately gain root access to the machine. This is, of course, a Class A hole. And AIX is not the only distribution that has had problems with the r services. In fact, nearly all UNIX distributions have had some problem or another with these services. I recommend that you shut them down.
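On most systems, shutting them down amounts to commenting out the relevant lines in /etc/inetd.conf. The daemon paths below are typical but vary by platform, so treat this as a sketch:

#shell  stream  tcp  nowait  root  /usr/sbin/in.rshd     in.rshd
#login  stream  tcp  nowait  root  /usr/sbin/in.rlogind  in.rlogind
#exec   stream  tcp  nowait  root  /usr/sbin/in.rexecd   in.rexecd

After editing the file, send inetd a hangup signal (for example, kill -HUP with inetd's process ID, which you can obtain from ps) so that the change takes effect.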
But what if you can't? What if you have to offer at least limited access using the r services? Well, thanks to Wietse Venema, this is not a problem. Venema has produced a collection of hacked utilities that will replace these daemons. These replacements offer enhanced security features and logging capabilities. Moreover, Venema provides an extensive history of their development.
Cross Reference: You can find Venema's hacked tools at ftp://ftp.win.tue.nl/pub/security/.
Cross Reference: The errata, changes, fixes, improvements, and history of these utilities are located at ftp://ftp.win.tue.nl/pub/security/logdaemon-5.6.README.
Also, in the unlikely event that you grab the utilities on-the-fly and fail to read the README file, please heed at least this warning authored by Venema:
Cross Reference: Venema's README file can be found online at ftp://ftp.win.tue.nl/pub/security/logdaemon-5.6.README.
TIP: Many such utilities replace system daemons. I recommend that before using any such utility, you carefully read the installation and readme notes. If you fail to do so, you may end up with a system that doesn't work properly.
CAUTION: Venema has made some awesome contributions to Internet security and is highly respected. However, even he is capable of making minor mistakes. Note that versions of logdaemon prior to 4.9 have a flawed implementation of S/Key, a Bellcore product used for authentication. The hole is not critical (not Class A), but local users can gain unauthorized access. For further background and links to patched versions, see CERT Vendor-Initiated Bulletin VB-95:04, which is located at http://www.beckman.uiuc.edu/groups/biss/VirtualLibrary/mail/cert/msg00012.html.
There are also other solutions to the problem. There are ways, for example, to disable the r services and still provide other forms of remote login. One such solution is Secure Shell (SSH). SSH is available at many locations over the Internet. I prefer this site:
http://escert.upc.es/others/ssh/
SSH is currently available for a wide range of platforms. Here are a few:
As I have discussed previously, SSH provides strong authentication and encryption across remote sessions. It is an excellent replacement for rlogin and even Telnet. Moreover, SSH will defeat many spoofing attacks over IP and DNS. Many administrators suggest that if you are not providing r services, you should remove the /etc/hosts.equiv and .rhosts files. Note that the SSH client supports authentication via .rhosts and /etc/hosts.equiv. If you are going to use SSH, it is recommended that you keep one or both of these files. Before actually implementing SSH on your system, it would be wise to study the RFC related to this issue. It is titled "The SSH (Secure Shell) Remote Login Protocol."
Cross Reference: "The SSH (Secure Shell) Remote Login Protocol" by T. Ylonen (Helsinki University of Technology) can be found online at http://www.cs.hut.fi/ssh/RFC.
CAUTION: The files /etc/hosts.equiv and .rhosts should be routinely checked. Any alteration of or aberration in these files is one indication of a possible compromise of your system security. Moreover, the file /etc/hosts.equiv should be examined closely. The symbols +, !, -, and # should not appear within this file. This file is different in construct from other files, and these characters may permit remote individuals to gain unrestricted access. (See RFC 91:12 and related RFCs.)
Moreover, you will probably want to enforce a strict policy regarding .rhosts files on your machine. That is, you should strictly forbid users on your machine from establishing .rhosts files in their own /home directories. You can apply all the security in the world to your personal use of .rhosts and it will not matter if users spring a hole in your security with their own.
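A quick way to audit this policy, sketched here with paths you will need to adjust to your own layout, is to search the home directories for .rhosts files and check /etc/hosts.equiv for the dangerous characters mentioned in the preceding caution:

find /home -name .rhosts -print
grep '[+!#-]' /etc/hosts.equiv

Running a check like this from cron and reviewing the output regularly costs little and catches both careless users and some intrusions.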
There is disagreement in the security field on the finger utility issue. Some administrators argue that leaving the finger service intact will have an almost negligible effect on security. Their view is that on a large system, it could take ages for a cracker to build a reliable database of users and processes. Moreover, it is argued that with the introduction of dynamically allocated IP addresses, this information may be flawed for the purposes of cracking (for example, the command finger @target.host.com will reveal only those users currently logged on to the machine). This may be true in many distributions of fingerd, but not all. Still, administrators argue that crackers will meet with much duplicate and useless information by attempting to build a database this way. These contingencies would theoretically foil a cracker by frustrating his quest. Plainly stated, this technique is viewed as too much trouble. Perhaps. But as you will see soon, that is not really true. (Moreover, for certain distributions, this is not even an issue.) Try issuing this command against an Ultrix fingerd:
finger @@target.host.com
The listing you will receive in response will shock you. On certain versions of the Ultrix fingerd, this command will return a list of all users in the passwd file.
My feeling is that the functionality of remote finger queries should be eliminated altogether (or at least restricted in terms of output). Experimentation with finger queries (against your server or someone else's) will reveal some very interesting things. First, know this: fingering any character that might appear in the structure of a path will reveal whole lists of people. For example, suppose that you structure your directories for users as /u1, /u2, /u3, and so on. If you do, try fingering this:
finger 4@my.host.com
Alas, even though you have no users named 4, and even though none of your users has the character 4 within his username, a list of users still appears. If a cracker knows that you structure your disk organization in this manner, he can build your entire passwd file in less than an hour.
However, if you feel the need to allow finger services, I suggest using some "secure" form of finger, such as sfingerd, the highly customizable finger daemon written by Laurent Demailly. One of its main features is that it grants access to plan files through a chrooted directory. sfingerd (which nearly always comes with the full source) is available at ftp://hplyot.obspm.fr:/net/sfingerd-1.8.tar.gz.
Other known finger daemons, varying in their ability to restrict certain behavior, are listed in Table 17.2.
Table 17.2. Other finger daemons and their characteristics.

Daemon      | Location and General Characteristics
fingerd-1.0 | ftp://kiwi.foobar.com/pub/fingerd.tar.gz. Offers extensive logging and allows restrictions on forwarding.
cfinger     | ftp://sunsite.unc.edu:/pub/Linux/system/network/finger/cfingerd-1.3.2.lsm. Can be used to provide selective finger services, denying one user but allowing another. For queries from authorized users, scripts can be executed on a finger query.
rfingerd    | ftp.technet.sg:/pub/unix/bsdi/rfingerd.tgz. An interesting twist: a Perl daemon. Allows a lot of conditional execution and restriction, for example, if ($user_finger_request eq 'foo') { perform_this_operation }. Easy to use, small, lightweight. (It is Perl, after all.)
There are other reasons to disable finger. The .plan file is one. On ISP machines, the .plan file is usually of little significance and is used for its most popularly known purpose: to provide a little extra info in response to a finger inquiry. However, in networks connected to the Net as a matter of course (especially in the corporate climate), the .plan file may serve other purposes (for example, status reports on projects). This type of information could be considered sensitive.
If you feel the need to run finger, restrict its use to people within your network. Or, if that is not possible, download one of the secure finger daemons and examine both the code and the documentation. Only then should you make your choice.
NOTE: It is reported in several documents, including the Arts and Sciences UNIX System Administrator Guidelines at Duke University, that you should not use GNU fingerd version 1.37. Apparently, there is a hole in that version that allows users to access privileged files.
The next step is to examine what other remote services you will offer. Here are some questions you should ask yourself:
Telnet is not an inherently dangerous service to provide, but some versions are not secure. Moreover, even in "tight" versions of Telnet, a minor problem may exist.
NOTE: One good example is Red Hat Linux 4.0. The problem is not serious, but Telnet in that distribution may reveal more information than you want it to. Suppose that you have disabled finger services, r services, and the EXPN command in Sendmail. Suppose further, however, that you do allow Telnet sessions from untrusted addresses. (In other words, you are not running firewall software and have no other means of excluding untrusted, unknown, or suspicious addresses.) With this configuration, you feel reasonably confident that no one can identify valid usernames on your system. But is that really true? No. The Telnet package on Red Hat 4.0 distributions will cut the connection between the requesting address and the server if an invalid username is given. However, if the username is valid (but the password is incorrect), the server issues a subsequent login prompt so that a retry can be initiated. By running a nicely hacked Perl script, a cracker can effectively determine valid user IDs on your system through a sort of "brute force" technique. True, you would recognize this in your logs from the run of connection requests from the remote host. However, even little things like this can assist an outsider in gleaning information about your system.
Telnet is not radically different from other system processes. It, too, has been found vulnerable to a wide range of attacks. Such holes crop up periodically. One was discovered in 1995 by Sam Hartman of MIT's Kerberos Development Team (with confirmation and programming assistance provided by John Hawkinson, also at MIT). This hole was rather obscure, but could provide a remote user with root access. As discussed by Hartman in a public advisory ("Telnet Vulnerability: Shared Libraries"):
The hole discovered by Hartman was common to not just one version of telnetd, but several:
Take note, then. If you are new to UNIX, or if you have been running your system without frequently checking bug lists for vulnerabilities, your telnetd could have this problem. If so, you will need to install the patch. Contact your vendor or visit your vendor's site for patches.
Unfortunately, many of the locations for patches are no longer current. However, the document does provide a scheme to "test" your telnetd to see if it is vulnerable. For that reason alone, the document has significant value.
Cross Reference: You can read "Telnet Vulnerability: Shared Libraries" on the WWW at http://geek-girl.com/bugtraq/1995_4/0032.html.
Earlier versions of Telnet have also had problems. It would not serve you well for me to list them all here. Rather, it is better to suggest that you consider why you are providing Telnet. Again, this breaks down to necessity. If you can avoid offering Telnet services, then by all means do so. However, if you must offer some form of remote login, consider SSH.
Also, SSH is not your only recourse. I am simply assuming at this stage that the imaginary machine that we are securing does not yet have a firewall or other forms of security applicable to Telnet. Other options do exist. These are two of those options (to be discussed later):
What you use will depend on your particular circumstances. In some instances (for example, where you are an ISP), you really cannot use a blanket exclusionary scheme. There is no guarantee that all Telnet connections will be initiated on your subnet, nor that all will come from your PPP users. Some of these individuals will be at work or other locations. They will be looking to check their mail at lunch hour and so on. If you provide shell services, exclusionary schemes are therefore impractical.
The exception to this is if you have restricted shell use to one machine (perhaps a box named shell.provider.com). In this situation, exclusionary schemes can be implemented with a limited amount of hassle. Perhaps you only have 150 shell users. If so, you can request that these individuals forward to you a list of likely addresses that they will be coming from. These can be added to the list of allowable hosts. In many instances, they may be coming from a dynamically allocated IP at a different provider. In this case, you must decide whether you want to allow all users from that network. Generally, however, most shell users will be logging in from work with a fixed IP. It would not be a significant amount of effort to allow these addresses through your filter.
Without installing any of this software, the decision to allow Telnet is a difficult one. Like most TCP/IP services, Telnet affects the system at large. For example, many cracking expeditions start with Telnet. Crackers can test your vulnerability to overflow attacks and CGI exploits by initiating a Telnet session to port 80. They can also attempt to glean valid usernames by initiating a Telnet session to port 25 and so on. (Moreover, Telnet is one way for a remote user to identify your operating system type and version. This will tell the seasoned cracker which holes to try.)
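You can see exactly what an outsider sees by probing your own machine the same way. For instance, connecting to the SMTP port often reveals the mailer and its version, and lets you test whether commands such as VRFY and EXPN give away account names (the output depends entirely on your mailer and its configuration):

telnet my.host.com 25
vrfy root
expn postmaster
quit

If these commands confirm account names, consider disabling them; in recent sendmail versions, for example, the novrfy and noexpn privacy options do so.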
For the moment, let us assume that telnetd has been disabled and that your choice is to use SSH instead.
NOTE: One hole worth noting is the environment variable passing technique. This hole emerged in November 1995 and therefore it is within the realm of possibility that your system may be affected. The bug affected even many "secure" versions of Telnet that used Kerberos-based authentication. Because the bug was so serious (allowing remote users to gain root access, yet another Class A hole), you should examine the full advisory about it. The technique involved passing local environment variables to the remote target using the ENVIRONMENT option in all Telnet versions conforming to RFC 1408 or RFC 1572. The full advisory is at http://ciac.llnl.gov/ciac/bulletins/g-01.shtml.
Deciding whether to provide FTP access is a bit less perplexing. There are few reasons to allow totally unrestricted, anonymous FTP. Usually, this is done when you are offering a software distribution, free or otherwise, or when you are maintaining an archive of information that is of interest to the general Internet population. In either case, you have almost certainly allocated a machine expressly for this purpose, which doesn't run many other services and holds only information that has already been backed up.
I am against anonymous FTP unless you have a reason. This is mainly because FTP (like Telnet and most other protocols) affects the entire system:
Cross Reference: The previous paragraph is excerpted from Barbara Fraser's Site Security Handbook (update and draft version; June 1996, CMU. draft-ietf-ssh-handbook-04.txt), which can be found online at http://info.internet.isi.edu:80/in-drafts/files/draft-ietf-ssh-handbook-04.txt.
Clearly, the worst situation is anonymous FTP with a writable directory (for example, /incoming). Fully anonymous FTP with a writable directory makes you a prime stop for those practicing the FTP "bounce" attack technique.
Briefly, the FTP bounce technique involves using one FTP server as a disguise to gain access to another FTP server that has refused the cracker a connection. The typical situation is where the target machine is configured to deny connections from a certain IP address hierarchy. The cracker's machine has an IP address within that hierarchy and therefore some or all of the FTP directories on the target machine are inaccessible to him. So the cracker uses another machine (the "intermediary") to access the target. The cracker accomplishes this by writing to the intermediary's FTP directory a file that contains commands to connect to the target and retrieve some file there. When the intermediary connects to the target, it is coming from its own address (and not the cracker's). The target therefore honors the connection request and forwards the requested file.
FTP bounce attacks have not been a high-profile (or high-priority) issue within security circles, mainly because they are rare and do not generally involve cracking attempts. (Most bounce attacks probably originate overseas. The United States has export restrictions on a variety of products, most commonly those with high-level encryption written into the program. Bounce attacks are purportedly used to circumvent restrictions at U.S. FTP sites.)
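If you must offer a writable incoming directory, one common precaution, shown here as a sketch (substitute your own anonymous FTP root for /home/ftp), is to make the directory writable but not readable, so that outsiders can deposit files but cannot list or retrieve what others have left:

chown root /home/ftp/incoming
chmod 1733 /home/ftp/incoming

An administrator then reviews and moves the deposited files by hand, which removes much of the site's appeal as a staging point.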
You need to first examine what version of ftpd you have running on your system. Certain versions are flawed or easily misconfigured.
If you are using a version of wu_ftpd that predates April 1993, you need to update immediately. As reported in CERT Advisory 93:06 ("wuarchive ftpd Vulnerability"):
This advice may initially seem pointless to you because of how old these versions are. However, many systems still run these versions of wu_ftpd. (Most of them are legacy systems. But again, not everyone is using the latest and the greatest.)
So much for the older versions of wu_ftpd. Now I want to discuss the newer ones. On January 4, 1997, a bug in version 2.4 was discovered (credit: Aleph1 and David Greenman) and posted to the Internet. This is critical because 2.4 is the most widely used version. Moreover, it is relatively new. If you are now using 2.4 (and have not heard of this bug), you need to acquire the patch immediately. The patch was posted to the Internet by David Greenman, the principal architect of the FreeBSD Project. At the time of this writing, the patch was available only on a mailing list (BUGTRAQ). Also, at the time of this writing, the bug had not yet been appended to the searchable database at BUGTRAQ. (That database is located at http://www.geek-girl.com/bugtraq/search.html.) In keeping with the advisory at the beginning of this book about unauthorized printing of mail from individuals, I will not print the patch here. By the time this book reaches the shelves, however, the posting will be archived and can be retrieved at BUGTRAQ using the following search strings:
NOTE: These strings should be entered exactly as they appear in the list.
In a more general sense, FTP security is a subject that is best treated by studying FTP technology at its core. FTP technology has changed vastly since its introduction. The actual FTP specification was originally set forth in RFC 959, "File Transfer Protocol (FTP)," more than a decade ago. Since that time, much has been done to improve the security of this critical application.
The document that you really need to get is titled "FTP Security Extensions." It was authored by M. Horowitz (Cygnus Solutions) and S. J. Lunt (Bellcore). This IDraft (Internet draft) was authored in November 1996 and as reported in the abstract portion of that draft:
Cross Reference: "FTP Security Extensions" is located at http://info.internet.isi.edu/0/in-drafts/files/draft-ietf-cat-ftpsec-09.txt.
The document begins by reiterating the commonly asserted problem with FTP; namely, that passwords are passed in clear text. This is a problem over local and wide area networks. The paper covers various strides in security of the protocol and this serves as a good starting place for understanding the nature of FTP security.
Finally, there are a few steps you can take to ensure that your FTP server is more secure:
Finally, I recommend heavy FTP logging.
The best advice I can give here about TFTPD is this: Turn it off. TFTP is a seldom-used protocol and poses a significant security risk, even if the version you are using has been deemed safe.
NOTE: Some versions are explicitly not safe. One is the TFTP provided in AIX, in version 3.x. The patch control number for that is ix22628. It is highly unlikely that you are using such a dated version of AIX. However, if you have acquired an older RS/6000, take note of this problem, which allows remote users to grab /etc/passwd.
In Chapter 9, "Scanners," I discussed TFTP and a scanner made specifically for finding open TFTP holes (CONNECT). Because the knowledge of TFTP vulnerabilities is so widespread, there are very few system administrators who will take the chance to run it. Don't be the exception to that rule.
NOTE: Perhaps you think that the number of individuals that can exploit TFTP is small. After all, it requires some decent knowledge of UNIX. Or does it? Check out the TFTPClient32 for Windows 95. This is a tool that can help a cracker (with minimal knowledge of UNIX) crack your TFTP server. You can download a copy at http://papa.indstate.edu:8888/ftp/main!winsock-l!Windows95!FTP.html.
Disabling TFTPD is a trivial matter (no pun intended). You simply comment it out in inetd.conf, thus preventing inetd from starting it. However, if you are intent on running TFTP, there are several things you might consider.
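The line you are commenting out looks roughly like the following (daemon paths differ by platform). If you truly must run TFTP, many versions of the daemon also accept a -s flag that confines it to a single directory, typically /tftpboot:

#tftp   dgram   udp   wait   root   /usr/sbin/in.tftpd   in.tftpd -s /tftpboot

As with the r services, signal inetd with kill -HUP after editing the file so the change takes effect.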
NOTE: If you are new to UNIX (probably a Linux user), I suggest that you read the man pages on TFTP. If, in addition to being new to UNIX, you opted against installing man pages (and other documentation), you should at least visit these pages: http://flash.compatible.com/cloop-html/tftp.html
All these pages discuss weaknesses in the TFTP distribution. Moreover, I suggest that you acquire a copy of RFC 1350, which is the official specification for TFTP. The most reliable site I know for this is http://www.freesoft.org/Connected/RFC/1350/.
Gopher is now a somewhat antiquated protocol. However, it is fast and efficient. If you are running it, hats off to you. I am a big Gopher fan because it delivers information to my desk almost instantaneously (as opposed to HTTP network services, which are already completely saturated).
Gopher has not been a traditionally weak service in terms of security, but there are some issues worthy of mention. The University of Minnesota Gopher server is probably the most popular Gopher server ever written (available at boombox.micro.umn.edu). I would estimate that even today, better than half of all Gopher servers are running some version of this popular product. Of those, probably 10 percent are vulnerable to an old bug. That bug is present in both Gopher and Gopher+ in all versions acquired prior to August of 1993. As reported in CERT Advisory CA-93:11, UMN UNIX Gopher and Gopher+ Vulnerabilities:
That hole was also reported in a Defense Data Network Bulletin (DDN Security Bulletin 9315, August 9, 1993), which can be viewed at http://www.arc.com/database/Security_Bulletins/DDN/sec-9315.txt.
I think that the majority of crackers know little about Gopher. However, there have been some well-publicized bugs. One is that Gopher can proxy an FTP session; therefore, even if you are restricted from accessing an FTP directory on a machine, you can perform a bounce attack using Gopher as the launch pad. This presents an issue regarding firewall security. For example, if the network FTP server is behind the firewall but the Gopher server is not (and these belong to the same network), the blocked access to the FTP server will mean nothing.
In its default state, Gopher has very poor logging capabilities compared to other networked services. And, while the FTP proxying problem is not completely critical, it is something to be mindful of.
Few sites are still using Gopher and that is too bad. It is a great protocol for distribution of text, audio, or other media. Nowadays especially, a Gopher server may provide a much more vibrant and robust response than an HTTP server, simply because fewer people use it. It is not pretty, but it works like a charm. There have been relatively few security problems with Gopher (beyond those mentioned here).
Many people criticize Network File System (NFS) because its record in security has been a spotty one. However, the benefits of using NFS are considerable. The problem lies in the method of authentication for nonsecure NFS. There is simply not enough control over who can generate a "valid" NFS request.
The problem is not so much in NFS itself as it is in the proficiency of the system administrator. Exported file systems may or may not pose a risk, depending upon how they are exported. Permissions are a big factor. Certainly, if you have reason to believe that your users are going to generate (even surreptitiously) their own .rhosts files (something you should expressly prohibit), exporting /export/home is a very bad idea because these directories will naturally contain read/write permissions.
Some tools can help you automate the process of examining (and closing) NFS holes. One of them is NFSbug, written by Leendert van Doorn. This tool (generally distributed as a shar file) is designed to scan for commonly known NFS holes. Before you finish your security audit and place your box out on main street, I suggest running this utility against your system (before crackers do). NFSbug is available at ftp://ftp.cs.vu.nl/pub/leendert/nfsbug.shar.
TIP: For a superb illustration of how crackers attack NFS, you should obtain the paper "Improving the Security of Your Site by Breaking Into It" (Dan Farmer and Wietse Venema). Contained within that paper is a step-by-step analysis of such an attack. That paper can reliably be retrieved from http://www.craftwork.com/papers/security.html.
CAUTION: Never provide NFS write access to privileged files or areas and have these shared out to the Net. If you do, you are asking for trouble. Try to keep everything read-only.
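What read-only sharing looks like depends on your flavor of UNIX. On a SunOS-style /etc/exports, for instance, a read-only export restricted to known hosts might look roughly like this (the path and hostnames are examples):

/export/docs  -ro,access=trusted1:trusted2

After editing, re-export (exportfs -a on SunOS) and verify with showmount -e that nothing is shared more broadly than you intended.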
Please do not suppose that NFS is an avenue rarely used by crackers. As reported in a 1995 Defense Data Network Advisory, NFS problems continue to occur:
Cross Reference: The previous paragraph is excerpted from DDN Security Bulletin 9501, which can be found online at ftp://nic.ddn.mil/scc/sec-9501.txt.
I would avoid running NFS. There are some problems with it. One is that even if you use "enhanced" or "secure" NFS (that is, the form of NFS that utilizes DES in authentication), you may meet with trouble along the way. The DES key is derived from the user's password. This presents an obvious problem. Assuming that shadowing is installed on the box, this may present one way for a cracker to reach the passwd listings. The only real value of the DES-enhanced versions is that the authentication exchange is time-stamped. Time-stamping eliminates the possibility of a cracker monitoring the exchange and later playing it back.
NOTE: You can block NFS traffic at the router level. You do this by applying filtering to ports 111 and 2049. However, this may have little or no bearing on crackers inside your network. I prefer a combination of these techniques. That is, if you must run NFS, use an enhanced version with DES authentication as well as a router-based blocking scheme.
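What such a filter looks like depends entirely on your router. As one illustration only, Cisco-style extended access list entries blocking the portmapper and NFS ports from outside might resemble the following:

access-list 101 deny udp any any eq 111
access-list 101 deny tcp any any eq 111
access-list 101 deny udp any any eq 2049
access-list 101 deny tcp any any eq 2049

Apply the list inbound on the external interface; as noted, it does nothing about attackers already inside your network.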
My suggestion is that you visit the following links for NFS security. Each offers either a different view of the problem and possible solutions or important information about NFS and RPC calls:
HTTP is run more often than any other protocol these days, primarily because the WWW has become such a popular publishing medium. In the most general sense, HTTP is not inherently insecure. However, there are some things you should be aware of. The number one problem with HTTP is not with HTTP at all, but with the system administrator providing the service. Do not run httpd as root! If you fail to heed this advice, you will be a very sad system administrator. Even the slightest weakness in a CGI program can mean total compromise of your system if you are running httpd as root. This means that remote users can execute processes as root.
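With NCSA-derived servers (including Apache), the usual arrangement is that the server starts as root only long enough to bind port 80 and then hands each request to a child process running as an unprivileged account. The relevant httpd.conf directives look like this (use whatever unprivileged user and group exist on your system):

User nobody
Group nogroup

Check with ps that the child processes answering requests really are running as that user and not as root.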
Moreover, though I treat CGI security at a later point in this book, here is some solid advice: If you are responsible for writing CGI programs, examine the code closely. Is there any possibility that someone can pass commands to the shell by embedding metacharacters in the input?
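As a minimal sketch (in Bourne shell, with purely illustrative paths), here is the kind of check a CGI script should make before handing user input to any other program:

    #!/bin/sh
    # Reject any query containing shell metacharacters. A real script
    # must also decode %xx escapes before making this test.
    if echo "$QUERY_STRING" | grep '[;&|`$<>()!*?]' > /dev/null
    then
        echo "Content-type: text/plain"
        echo ""
        echo "Query rejected: illegal characters."
        exit 0
    fi

    echo "Content-type: text/plain"
    echo ""
    # Only now is it reasonably safe to use the input -- and even
    # then, always quote it.
    /usr/bin/grep "$QUERY_STRING" /usr/local/httpd/data/phonelist.txt

The point is not this particular filter, but the habit: treat every byte that arrives from the network as hostile until proven otherwise.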
Also, consider the question of running httpd as a chrooted process. Many advisories suggest that this provides greater security. In my opinion, httpd should not be run in a chrooted environment: the security gains are minimal, and your ability to use CGI is severely restricted. Under normal circumstances, users can execute CGI programs from beneath their own directory structure--say, from /~usr/public_html/cgi-bin or some similar directory that has been designated as the user cgi-bin home. If you run httpd in a chrooted environment, your users cannot run those scripts unless they, too, are under a chrooted environment. Moreover, the security gains are spurious at best: in order to allow any form of CGI on the machine, you would also need to place a Perl interpreter or C-based binaries inside the chrooted area, which defeats the purpose of the exercise. Unless you feel there is an absolute need to run httpd chrooted, I would argue against it; it makes access too restricted to provide CGI effectively.
One valuable program that can help you with testing CGI applications is CGIWRAP, which is a relatively new program that does the following:
CGIWRAP was written by Nathan Neulinger and released in 1995. It is available at various locations across the Net. I have found this location to be reliable: ftp://ftp.cc.umr.edu/pub/cgi/cgiwrap/.
It is reported that CGIWRAP has been verified to work on the following platforms:
Because HTTP is a relatively new protocol, and because it has become the most popular one (with users, anyway), I imagine that tools of this sort will emerge on a grand scale. While none of them can guarantee your security at a given site, you should at least have knowledge of them.
Specific problems can be found in various implementations of HTTP, mainly in servers. One of those servers is NCSA httpd. Version 1.3 had a buffer overflow vulnerability, for example. If you are or have been using 1.3, upgrade immediately. For information on the impact of the problem, go to these sources:
You can take some basic precautions:
Also, note NCSA's warning regarding DNS-based authentication:
HTTP security has undergone many changes, particularly in the past two years. One is the development of safer httpd servers. (There have been a variety of problems with servers in the past, including but not limited to problems with stack overflows and bad CGI.) The other push has been in the area of developing concrete security solutions for the entire protocol. A few important proposals are outlined in the following sections.
Secure Hypertext Transfer Protocol (S-HTTP) was developed by Enterprise Integration Technologies (now part of VeriFone's Internet Commerce Division). S-HTTP incorporates RSA and Kerberos-based encryption and authentication. As described in the IDraft on S-HTTP:
RSA and Kerberos-based authentication and encryption make for a pretty strong brew.
Secure Sockets Layer (SSL) is a method conceived by the folks at Netscape. It is a three-tiered system for securing two-way connections, using RSA and DES for authentication and encryption plus MD5 for integrity verification. You will want to learn more about it, and the place to start is the specification itself. That document, titled "The SSL Protocol" (IDraft), was authored by Alan O. Freier and Philip Karlton (Netscape Communications) with Paul C. Kocher. It is located at http://home.netscape.com/eng/ssl3/ssl-toc.html.
A very interesting paper on HTTP security is "Evaluating Hypertext Servers for Performance and Data Security" (Suresh Srinivasan, Senior Researcher, Thomson Technology Services Group). In it, the author contrasts a number of security proposals or standards for HTTP and HTML.
HTTP security is still an emerging field. See Chapter 30, "Language, Extensions, and Security," for further information.
Before you actually connect this machine to the Internet (or any network, for that matter), you will want to perform a backup. This will preserve a record of the file system as it was when you installed it. Depending on how large the system is (perhaps you have installed everything), you might want to avoid using tape. I recommend backing up the system to flopticals, a Jaz drive, or even an old SCSI drive you have lying around, simply because restoring from such media is generally faster. If you suspect that there has been an intrusion, you will want to do your comparisons as quickly as possible. Even if you are not concerned with speed, however, do the backup to some medium that is not available to other users, and secure it in a location accessible only to you or trusted personnel.
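As one hedged example (the device name is platform-specific and purely illustrative), a level-zero dump of the root file system to a spare SCSI disk might look like this:

    # Full (level 0) dump of / to the raw device of a spare disk:
    dump 0f /dev/rsd1c /

    # Later, an interactive restore lets you browse and compare
    # the archived tree:
    #   restore -i -f /dev/rsd1c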
At this stage, you have done the following:
The next step is to set up your logging and file integrity controls and procedures. Let's begin with logging. For this, you will use a product called TCP_WRAPPERS.
TCP_WRAPPERS is a program written by Wietse Venema. It is likely that no other tool more easily or efficiently facilitates monitoring of connections to your machine. The program works by wrapping the system daemons: it intercepts each incoming connection request, records the request, its time, and most importantly its origin, and then hands the connection to the real daemon. For these reasons, TCP_WRAPPERS is one of the most critical evidence-gathering tools available. It is also free. (A lot of the best UNIX software is free.)
Before installing TCP_WRAPPERS, you must read the paper that announced the program's existence. That paper (titled "TCP WRAPPER: Network Monitoring, Access Control, and Booby Traps") can be found at http://www.raptor.com/lib/tcp_wrapper.ps. The paper is significant for several reasons. First, it shows what TCP_WRAPPERS can do for you. Second, it includes a real-life example, a somewhat gripping tale (complete with logs) of Venema's efforts to pin down and apprehend a cracker.
NOTE: Like most good security applications, TCP_WRAPPERS grew out of necessity. Apparently, the Eindhoven University of Technology--where Venema was stationed--was under considerable attack by a cracker. The attacks were particularly insidious and frustrating because the cracker would often delete the entire hard disk drive of machines he compromised. (Naturally, as Venema reflects in the paper, it was difficult to determine anything about the attacks because the data was erased.) A solution was in order and Venema found a simple one: Create a daemon that intercepted any connect request, recorded the request and its origin, and then passed that request on to the native system daemons. To find out what happened, get the paper. It is a good story.
Cross Reference: TCP_WRAPPERS can be retrieved from many locations on the Internet, but I prefer its home: ftp://ftp.win.tue.nl/pub/security/TCP_WRAPPERS_7.4.tar.gz.
TCP_WRAPPERS is easy to install on most UNIX platforms, it takes very minimal resources to run, and it can provide extensive logs on who is accessing your system. In short, this program is a must. For implementation and design pointers, please obtain the paper described earlier.
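To give you a feel for the setup (the paths, host names, and address block below are examples only), wrapping a daemon amounts to one edit in inetd.conf plus two small access files:

    # /etc/inetd.conf -- hand telnet connections to tcpd, which logs
    # them and then invokes the real daemon:
    telnet  stream  tcp  nowait  root  /usr/local/bin/tcpd  in.telnetd

    # /etc/hosts.deny -- deny everything not explicitly allowed:
    ALL: ALL

    # /etc/hosts.allow -- admit only hosts and networks you trust:
    in.telnetd, in.ftpd: .mydomain.com 192.168.1.

After editing inetd.conf, send inetd a HUP signal so that it rereads its configuration.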
Those seriously into security will undoubtedly crack a smile about TCP_Dump. It is a great utility, used to analyze the traffic on your network. It is also the program that Shimomura was running on his network when Kevin Mitnick purportedly implemented a successful spoofing attack against it. A journalist or two were amazed at how Shimomura had obtained such detailed information about the attack. No magic here; he was simply running TCP_Dump.
TCP_Dump will tell you quite a lot about connections. Its output can be extremely verbose, and for that reason it is sometimes considered more comprehensive than TCP_WRAPPERS. In truth, the two should not be compared, because they do different things: TCP_WRAPPERS is designed primarily to tell you who connected and when; TCP_Dump is designed to tell you what they did.
TCP_Dump is reportedly (though loosely) based on a previous program called etherfind. TCP_Dump is really a network sniffer and a good one.
CAUTION: Just to remind you, programs like TCP_Dump can ultimately eat a lot of hard disk drive space, depending largely on the frequency of traffic on your network. If you plan to have a lot of traffic, you might consider "pruning" the level of sniffing that TCP_Dump actually does. It has various options that allow you to restrict the "listening" to certain protocols, if you like. Equally, you can have the program run "full on," in which case I would recommend a nice RAID to eat the output. (That is a joke, of course, unless your network is very large and frequented by hundreds of people.)
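For example (the interface name le0 and the file paths are illustrative), you might restrict TCP_Dump to the traffic you actually care about, or write the raw packets to a file for later analysis:

    # Capture only telnet and ftp control traffic on interface le0:
    tcpdump -i le0 -w /var/log/td.telnet-ftp 'port 23 or port 21'

    # Or watch everything except the NFS chatter you already expect:
    tcpdump -i le0 'not port 111 and not port 2049'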
TCP_Dump is another excellent tool to gather evidence against unlawful intrusions. Moreover, even if you never experience a break-in, TCP_Dump can teach you much about your network and the traffic conducted on it. (And perhaps it can even identify problems you were previously unaware of.)
Next, you will want to install TripWire. I discussed TripWire in previous chapters, so I will not cover it extensively here. I have already given pointers on where the tool is located. Here, I suggest that you acquire the following papers:
TripWire is not your only choice for file and system reconciliation. One that obtains very good results is called binaudit. It was written by Matt Bishop, also the author of passwd+. This system is more often referred to as the RIACS Auditing Package.
Cross Reference: You can find binaudit at ftp://nob.cs.ucdavis.edu/pub/sec-tools/binaudit.tar.
The system operates against a master list of file values that are maintained in the file /usr/audit/audit.lst. This system is not quite as comprehensive as TripWire but requires very low overhead and a minimum of hassle in setup and maintenance.
Whatever tool you use, you should have at least one that checks file system integrity. Call me a little paranoid, but I would generate a complete file system integrity check even before connecting the machine to a network. This will provide you with a ready-made database from the beginning.
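Even a crude baseline beats none at all. As a minimal sketch (assuming no dedicated tool is installed yet, and using illustrative paths), record checksums and long listings of the system directories and store the results off-line:

    # Record a checksum and a long listing for every system binary,
    # writing the results to removable media kept off the machine:
    find /bin /sbin /usr/bin /usr/sbin /etc -type f -print | \
        xargs sum > /floppy/baseline.sums
    ls -lR /bin /sbin /usr/bin /usr/sbin /etc > /floppy/baseline.ls

    # Later, regenerate both files and compare:
    #   diff /floppy/baseline.sums /tmp/current.sums

This is no substitute for TripWire's cryptographic checksums, but it costs nothing and takes minutes.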
Security of the X Window System is an obscure area of concern, but one of importance. If you reexamine the Evaluated Products List, you will see that X Window-based products are not in evidence. The X Window System is probably the most fluid, networked windowed system ever designed, but its security has a poor reputation.
The main argument against the use of X is the xhost hole. When an X server has access control turned off, anyone anywhere on the Internet can open additional X windows and begin running programs arbitrarily. This hole is easily closed (it is the difference between xhost + and xhost -, actually), but people remain reluctant to allow remote X sessions. (Again, it is all about poor administration and not poor design of the program.)
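The commands themselves are trivial; the host name below is hypothetical:

    xhost -                       # turn access control on; deny everyone
    xhost +trusted.mydomain.com   # then admit specific hosts, if you must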
Some interesting approaches have been taken to remedy the problem. In this next section, I will highlight some of the problems with X Window System security. As I do so, I will be using snippets from various papers to make my point. The content of these papers is what you really need. Again, as I mentioned at the beginning, this book is a roadmap for you. If you have a practical interest in the security of X, you will want to retrieve each of the cited papers.
As noted by G. Winfield Treese and Alec Wolman in their paper "X Through the Firewall and Other Application Relays":
The first point, then, is that X is not simply a windowing system. It looks and behaves much like a garden-variety windowing system, but that is just the smaller picture. Connections are sent to the X server. The X server can serve any valid X client, whether that client be on the same machine or miles away. As noted by John Fisher, in his article "Securing X Windows":
Therefore, X is much like any other protocol in UNIX. It works on the client/server model and provides access across the Internet and a multitude of systems and architectures. It is important that users new to UNIX realize this. When a valid connection has been initiated, anything can happen (as noted in the X11R5 Xserver manual page):
Once that connection has been initiated, the attacker can destroy windows, create new windows, capture keystrokes and passwords, and carry on just about any activity taking place in the X environment.
The process by which security is maintained in X relies on a Magic Cookie. This is a 128-bit value, generated in a pseudo-random fashion. This value is distributed to clients and stored in the .Xauthority file. This authentication scheme is known as a "medium strength" measure and can theoretically be defeated. It is considered weak because of the following:
Cross Reference: The previous paragraph is excerpted from an article by Francois Staes that appeared in The X Advisor. The article, titled "X Security," can be found online at http://www.unx.com/DD/advisor/docs/nov95/nov95.fstaes.shtml.
True, if you have enabled access control, there is little likelihood of an outsider grabbing your .Xauthority file. However, you should not rely on simple access control to prevent penetration of your network. Efforts have been made to shore up X security and there is no reason you should not take advantage of them. Additional security measures should be taken because basic X security schemes have been identified as flawed in the past. As noted by the CERT bulletin titled "X Authentication Vulnerability":
Furthermore, there are many programs available (such as xkey, xscan, xspy, and watchwin) that automate the task of either cracking an X server or exploiting the server once it has been cracked. So I would first advise against running X across the Internet or even across the network. In my experience, small companies seldom have valid reasons to have X servers running on their machines (at least, not machines connected to the Internet).
However, if you insist on running X in this manner, there are some steps you can take. For example, Farmer and Venema suggest at the very least removing all instances of xhost + from not only the main Xsession file, but from all .xsession files on the system. (Oh, you could forbid the creation of any such file. However, practically, users might ignore you. I would run a script periodically--perhaps often enough to make it a cron job--that searched out these transgressions.)
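A minimal sketch of such a check follows (the Xsession path and home-directory layout are assumptions; adjust them for your system). Run it from cron and the output will be mailed to the job's owner:

    #!/bin/sh
    # Hunt for "xhost +" in the system-wide Xsession file and in each
    # user's X startup files. Note that this also flags lines such as
    # "xhost +somehost", which deserve a look anyway.
    for f in /usr/lib/X11/xdm/Xsession /home/*/.xsession /home/*/.xinitrc
    do
        [ -f "$f" ] || continue
        if grep 'xhost *+' "$f" > /dev/null
        then
            echo "WARNING: 'xhost +' found in $f"
        fi
    done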
Other sources suggest possible use of a Kerberized Xlib or utilization of the Identification Protocol defined in RFC 1413. Your choices will depend on your particular network configuration. Unfortunately, the security of the X Window system could consume an entire book by itself, so I will simply say that before making a decision about running X servers on your network, download the papers I have already cited. Those (and the papers I am about to cite) will help you to make an informed decision. Here are some tips on X security:
X Window System Security. Ben Gross and Baba Buehler. Beckman Institute System Services.
On the (in)Security of the Windowing System X. Marc VanHeyningen. Indiana University. September 14, 1994.
Security in the X11 Environment. Pangolin, University of Bristol, UK. January 1995.
Security in Open Systems. John Barkley, editor (with Lisa Carnahan, Richard Kuhn, Robert Bagwill, Anastase Nakassis, Michael Ransom, John Wack, Karen Olsen, Paul Markovitz, and Shu-Jen Chang). U.S. Department of Commerce. See the section "The X Window System" by Robert Bagwill.
Security Enhancements of the DEC MLS+ System: The Trusted X Window System. November 1995.
Evolution of a Trusted B3 Window System Prototype. J. Epstein, J. McHugh, R. Pascale, C. Martin, D. Rothnie, H. Orman, A. Marmor-Squires, M. Branstad, and B. Danner. In Proceedings of the 1992 IEEE Symposium on Security and Privacy, 1992.
A Prototype B3 Trusted X Window System. J. Epstein, J. McHugh, R. Pascale, H. Orman, G. Benson, C. Martin, A. Marmor-Squires, B. Danner, and M. Branstad. In Proceedings of the 7th Computer Security Applications Conference. December 1991.
Improving X Window Security. Linda Mui. UNIX World 9(12). December 1992.
Security and the X Window System. Dennis Sheldrick. UNIX World 9(1):103. January 1992.
The X Window System. Robert W. Scheifler and Jim Gettys. ACM Transactions on Graphics, (5)2:79-109. April 1986.
X Window Terminals. Björn Engberg and Thomas Porcher. Digital Technical Journal of Digital Equipment Corporation, 3(4):26-36. Fall 1991.
Your next step is to apply all available or known patches for your operating system. Many of these patches will correct serious security problems that have become known since your operating system distribution was first released. These packages most often consist of little more than a shell script or the replacement of a shared resource, but they are very important.
A comprehensive listing of all patches and their locations is beyond the scope of this book. However, the following are a few important links:
Cross Reference: Patches for Sun operating systems and Solaris can be found at ftp://sunsolve1.sun.com/pub/patches/.
Patches for the HP-UX operating system can be found at http://support.mayfield.hp.com/patches/html/patches.html.
Patches for Ultrix can be found at ftp://ftp.service.digital.com/pub/ultrix/.
Patches for the AIX operating system can be found at ftp://software.watson.ibm.com.
You should consult your vendor about how to determine whether patches have been installed. Most operating systems have a tool (or script) designed to perform this check with ease, and for those with licenses still in good standing, support is just a click away. Most vendors maintain a list of the patches that should be installed for any given version of their product--a list covering every patch applicable to that version up to the present.
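On Solaris, for example, showrev lists every patch currently installed; other vendors ship comparable tools, so ask yours which one to use:

    # List installed patches on a Solaris system:
    showrev -p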
All right. It is time to connect your machine to the Internet (and probably a local area network as well). Before doing that, however, you should take one final step. That step involves security policies and procedures. All the security in the world will not help you if your employees (or other users) run rampant throughout the system.
Your policies and procedures will vary dramatically, depending on what your network is used for and who is using it. Before you ever connect to the Net (or any network), you should have a set of policies and procedures. In my opinion, they should be written. People adhere to things more stringently when they are written. Also, if you are away and have appointed someone else to handle the machine (or network), that person can quickly refer to your policies and procedures.
Many companies and organizations do not have such policies and procedures and this leads to confusion and disagreement among management personnel. (Moreover, the lack of a written policy greatly weakens security and response time.)
Rather than take up space here discussing how those documents should be drafted, I refer you to RFC 1244, the Site Security Handbook. Although some of the more technical advice may be dated within it, much of the people-based advice is very good indeed.
Cross Reference: RFC 1244 is located at http://www.net.ohio-state.edu/hypertext/rfc1244/intro.html.
Another interesting source (and a different outlook) on security policies and procedures is a Defense Data Network circular titled "COMMUNICATIONS SECURITY: DDN Security Management Procedures for Host Administrators." It defines some of the measures undertaken by the DDN.
Cross Reference: "COMMUNICATIONS SECURITY: DDN Security Management Procedures for Host Administrators" is located at http://csrc.ncsl.nist.gov/secalert/ddn/DCA_Circular.310-P115-1.
Another, newer and more comprehensive, document is "Protection of TCP/IP Based Network Elements: Security Checklist Version 1.8," authored by Dale Drew at MCI. This document is a quick checklist that covers not just policies and procedures but all elements of network security. It is an excellent start.
Cross Reference: "Protection of TCP/IP Based Network: Elements: Security Checklist Version 1.8" is located at http://www.security.mci.net/check.html.
Finally, at the end of this chapter, there is a list of publications, journals, Web pages, and books in which you can find valuable information on setting user policies.
Practical UNIX and Internet Security (Second Edition). Simson Garfinkel and Gene Spafford. O'Reilly & Associates, Inc. 1996. ISBN 1-56592-148-8.
UNIX Security: A Practical Tutorial. N. Derek Arnold. McGraw-Hill. 1993. ISBN 0-07-002560-6.
UNIX System Security. David A. Curry. Addison-Wesley Publishing Company, Inc. 1992. ISBN 0-201-56327-4.
UNIX System Security. Rick Farrow. Addison-Wesley Publishing Company, Inc. 1990. ISBN 0-201-57030-0.
The Cuckoo's Egg. Cliff Stoll. Doubleday. 1989. ISBN 0-385-24946-2.
UNIX System Security. Patrick H. Wood and Stephen G. Kochan. Hayden Books. 1985. ISBN 0-8104-6267-2.
Computer Security Basics. Deborah Russell and G. T. Gangemi, Sr. O'Reilly & Associates, Inc. July 1991. ISBN 0-937175-71-4.
Computer Crime: A Crimefighter's Handbook (First Edition). David Icove, Karl Seger, and William VonStorch; Consulting Editor Eugene H. Spafford. O'Reilly & Associates, Inc. August 1995. ISBN 1-56592-086-4.
Before you actually begin designing your network, there are several papers you need to read. These will assist you in understanding how to structure your network and how to implement good security procedures. Here are the papers, their locations, and what they will do for you:
You have still another task. Go back to Chapter 9, acquire as many of the scanners listed there as possible, and attack your machine over the network. The results will provide you with still more diagnostic information about your machine.
This chapter covers only the surface of UNIX security. For this, I do apologize. However, many volumes could be written about this subject. It is an evolving, dynamic field in which many of your decisions will be based on the way your network is constructed and the users that populate it.
In my view, a little UNIX security knowledge is a valuable thing for all system administrators, no matter what platform they actually use. This is primarily because UNIX evolved side by side with the Internet. Many valuable lessons have been learned through that evolutionary process, and Microsoft has wisely applied those lessons to the design of Windows NT.
Securing a UNIX server is more an art than a science. As the saying goes, there are a dozen ways to skin a cat. In UNIX security, the climate is just so. I have seen system administrators develop their entire security scheme from scratch. I have seen others who rely largely on router-based security measures. You have numerous choices. Only you can determine what works and what doesn't.
In short, the field of UNIX security is truly engrossing. I have encountered few operating systems that are as elegant or that offer as many different ways to approach the same problem. This versatility contributes to the level of difficulty in securing a UNIX server. Thus, the act of securing a UNIX box is a challenge that amounts to real hacking.