Analysis of the Security of Windows NT
1 March 1999
63
8. NT versus UNIX with NFS and NIS
The UNIX operating system was designed in the early seventies at AT&T Bell
Laboratories. The authors had previously worked on the Multics system and
they applied many lessons learned in that project when designing UNIX. The name,
incidentally, was meant as a pun on Multics, although the design was simplified in
most respects. Traditionally, UNIX implementation has been centred around a single
monolithic kernel with two access control rings, supervisor state and user state. The
kernel contains all code necessary to operate the system, such as the filesystem and
device drivers, as well as the traditional code for process handling.
In the late seventies ARPA decided to fund research into the field of long-distance
computer communication, and as part of that research, researchers at Berkeley devel-
oped the Berkeley Software Distribution, or BSD for short. Among many other inven-
tions, BSD contained the first implementation of what was to become TCP/IP. Since
the code for the BSD TCP/IP implementation was developed under government fund-
ing, it was, and is, widely available, and many (most) other vendors have based their
implementations of the TCP/IP stack on the code from Berkeley. Berkeley worked
steadily on removing all original AT&T code from the Berkeley software release and
has now achieved a code base totally free from any obligations originally made to
AT&T.
With networking in place, the researchers at Berkeley designed a set of protocols and
applications for accessing computers over the local area network. These are collec-
tively known as the r protocols, since the commands that invoke them typically begin
with the letter r. These protocols were designed to permit remote terminal access (via
rlogin and telnet), remote file transfer, as opposed to file access (via rcp and
ftp), and remote command execution (via rsh). These protocols are all very similar
from a security perspective.
The Berkeley line of UNIX started one of the earliest divisions of UNIX into what is
today a multitude of different systems from different vendors. The lineage of any
UNIX system can be traced back to either AT&T UNIX or Berkeley UNIX. Today it
is estimated that some 10-15 different UNIX implementations are in use world wide.
Even though they all share the same ancestry, it is difficult, especially from certain
security perspectives, to discuss UNIX as a homogeneous system, since many of the
implementations available today differ from each other in vital respects.
With the market shifting towards single-user workstations, much research was being
conducted in the field of distributed computing. In the early eighties the vendor Sun
Microsystems published a set of standards on how to make files on a file server avail-
able remotely via TCP/IP, specifically via remote procedure call, RPC (another Sun
development). Sun named this NFS, the Network File System, and it soon became
the de facto standard for remote file access, with many vendors implementing
compatible products.
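The remote procedure call idea underlying NFS can be illustrated with a short sketch. Note that this is not Sun's actual ONC RPC protocol on which NFS is built; Python's standard xmlrpc module is used here purely as a stand-in, and the read_file procedure and in-memory file table are hypothetical, serving only to show the pattern: the client invokes what looks like a local function, and the RPC layer carries the request over TCP to the server, which executes it and returns the result.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# A toy "file server" procedure: it returns the contents of a named file
# from an in-memory table (standing in for the server's filesystem).
FILES = {"/etc/motd": "Welcome to the server\n"}

def read_file(path):
    return FILES[path]

# Export the procedure over RPC; port 0 lets the OS pick a free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(read_file)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client calls read_file() as if it were local; the RPC layer
# marshals the arguments, ships them over TCP, and unmarshals the reply.
client = ServerProxy(f"http://127.0.0.1:{port}")
print(client.read_file("/etc/motd"))
```

The essential point, which carries over to NFS, is that the caller never manipulates the remote data directly; every access is a procedure call answered by the server.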
In order to make it possible to treat a large number of workstations as a single homoge-
neous installation, however, remote file access was not enough. There are a number of
other relevant data structures, traditionally stored in configuration files, that have to be
distributed to the workstations. Moreover, distributing these data in their entirety every
time a single request was made would be prohibitive in terms of network traffic, since