
Security Issues in WWW


Contents


1: Overview

The Hypertext Transfer Protocol (HTTP/1.0) draft, proposed by the IETF HTTP Working Group, makes some initial suggestions about the possible security threats involved in HTTP. This page covers those security considerations along with other important ones.

This web page attempts to cover these issues and present up-to-date information on the current status of security as it relates to the WWW. It includes pointers to sources of additional information, mailing lists, tools, etc.

This web page is evolving over the course of the COMP290 seminar on WWW/Mosaic. Because of this, portions of the page may be in a state of flux or under construction.

Additional sources of general information which I have recently found (4-16-95) and still need to organize:

(Table of Contents)

2: Existing Authentication Access Mechanisms

Authentication access mechanisms are the basic methods used by CERN HTTP and NCSA HTTP to provide control over document access.

2.1: Mosaic/NCSA Authentication

The current methods in NCSA Mosaic for X, version 2.0, and NCSA HTTP for restricting access to documents and providing authentication are based on several criteria:

These passwords have no correspondence to individuals' UNIX passwords on specific systems. They are merely uuencoded, not encrypted, as they pass across the network; this is not a secure method of end-to-end transmission.
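As a minimal illustration of why this is weak, anyone who can capture the encoded string can reverse it; no key is involved. The following Python sketch (the username and password are made up) shows the round trip:

# Encoding, whether uuencode or base64, is reversible by any eavesdropper.
import binascii

credentials = b"webuser:secret"          # hypothetical username:password pair
encoded = binascii.b2a_uu(credentials)   # roughly what crosses the network
print(encoded)                           # looks scrambled...
print(binascii.a2b_uu(encoded))          # ...but decodes right back to b'webuser:secret'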

There are two levels at which authentication can work: per-directory and per-server. An excerpt from the Mosaic User Authentication Tutorial: "Per-directory authentication means that users with write access to part of the filesystem that is being served can control access to their files as they wish. They need not have root access on the system or write access to the server's primary config files."

Per-directory access control is maintained by using a file named ".htaccess" that resides in each directory to be protected. The server reads this file whenever it accesses a document in that directory or in any subdirectory under it. The access control specification given on a per-directory basis may be overridden by the global access configuration file (see below).
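A minimal example of what such a file might look like follows. The paths, realm name, and user and group names are made up, and the exact directives supported depend on your server version, so treat this as a sketch rather than a template:

AuthType Basic
AuthName ExampleRealm
AuthUserFile /usr/local/etc/httpd/conf/.htpasswd
AuthGroupFile /usr/local/etc/httpd/conf/.htgroup

<Limit GET>
require user alice
require group staff
</Limit>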

Per-server access control is determined by a global access configuration file, whose location is specified by the AccessConfig variable in the server's configuration files. For examples of directives to use when writing a global access control file, see the Resource Configuration File documentation.
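The global file is organized around per-directory sections. A bare-bones sketch, assuming NCSA-style directives (the path is made up; consult the server documentation for the full directive list):

<Directory /usr/local/etc/httpd/htdocs>
Options Indexes FollowSymLinks
AllowOverride All
</Directory>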

In addition to restricting access, server features may also be disabled in specified directories. For example, it may not be safe to enable all server features in users' home directories; in particular, you may want to disable server-side includes there. A server-side include lets httpd create documents on the fly by parsing files for information such as the current date, last modification date, size, etc. This parsing is performed as the server's user (root), not as the actual document owner.
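For instance, a section like the following in the global access configuration file leaves directory indexing on but omits Includes from the Options line, disabling server-side includes under home directories (directive names follow NCSA-style configuration; the path is made up):

<Directory /home>
Options Indexes
AllowOverride None
</Directory>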

See the Mosaic User Authentication Tutorial for additional information on controlling access to WWW documents using NCSA HTTP. This tutorial walks the user through the access configuration setup and provides many useful examples.

For additional information see:

2.2: CERN Authentication

Because NCSA built their server on CERN's original HTTP server, the authentication mechanisms are very similar. The levels of access protection that can be achieved are again per-directory and per-server, although the mechanism used to carry this out is slightly different. The CERN server has several configuration files which define access and authentication rules for the HTTP daemon. Good examples of how to use these files, and how to combine them to get the right kind of protection for your server and file system, can be found in Protected CERN Server Setup.

For more information (although Mosaic's Documentation is much better) see:

(Table of Contents)

3: Proposals For Secure Servers/HTTP

There are a number of proposals which document schemes for providing security features within or underneath HTTP. I have tried to include as many of these as possible in this section. Because the two most popular UNIX servers are those produced by CERN and NCSA, most of the proposed implementations below are for one server or the other. I will not cover security information for non-UNIX servers.

3.1: NCSA HTTP

3.1.1: PGP/PEM Encryption Scheme

NCSA has proposed a PGP/PEM encryption scheme for use with NCSA HTTP. The proposal uses encryption/decryption routines that are external to both the Mosaic browser and the HTTP server. The browser and server call these routines to encrypt and decrypt their communications, providing user authentication and a secure transport method. The programs have "hooks" for use with PGP (Pretty Good Privacy) or RIPEM; both methods use RSA encryption. Note that the U.S. government has strict rules on the export of these technologies. Persons who wish to use Mosaic and HTTP with PEM or PGP encryption will need to communicate beforehand and find a tamper-proof way to exchange their public keys. See "NCSA httpd/Mosaic: Using PGP/PEM Authorization" for browser and server installation details.
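Because the cryptography lives in external programs, each end of the connection does little more than pipe data through those programs. The Python sketch below is conceptual only; the command name and flag are hypothetical, and the real hooks are described in the NCSA documentation:

# Conceptual sketch: hand the message body to an external PGP/RIPEM-style
# program and return the ciphertext it writes to standard output.
import subprocess

def encrypt_for_peer(message: bytes, peer_key_id: str) -> bytes:
    result = subprocess.run(
        ["pgp-encrypt", "--recipient", peer_key_id],  # hypothetical command and flag
        input=message, stdout=subprocess.PIPE, check=True)
    return result.stdout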

3.1.2: Secure NCSA HTTPD

These are pointers to on-line documentation for Secure NCSA HTTPD, which was developed jointly by Enterprise Integration Technologies, RSA Data Security, Inc., and NCSA. Secure NCSA HTTPD uses S-HTTP (see Section 3.6) to allow secure commercial transactions to take place over the Web.

Detailed information may be found at:

3.2: CERN HTTP

CERN has proposed a Public Key Protection Scheme for use with CERN HTTP. In the Basic HTTP Protection Scheme, the user name and password pass unencrypted over the network. One basic solution to this security hole is to encrypt this information with the public key of the server. CERN's Public Key Protection Scheme consists of the following steps (a sketch of the client side appears after the list):
  1. Server sends an Unauthorized status. When a server receives a request for a protected document, it must send its public key in the WWW-Authenticate field of the reply, in addition to an Unauthorized status message.
  2. Client authenticates itself. The client/browser prompts for a username and password and generates a random encryption key. The user name, password, browser's IP address, timestamp, and the generated encryption key are concatenated with colons as separators. This string is encrypted with the server's public key. The client then places the encrypted string in the Authorization field and sends the next request.
  3. Server checks authentication and authorization.
  4. Server sends an encrypted reply. The server adds DEK-Info:, Key-Info:, and MIC-Info: fields to the header (these fields are used by the client to decrypt the document, per RFC 1421) and sends back the encrypted document, which is pure binary.
  5. Client decrypts the reply from server.
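The following Python sketch shows how step 2's authorization string might be assembled. The field order mirrors the list above, and encrypt_with_public_key() is a hypothetical helper standing in for RSA encryption with the server's key:

import time

def build_authorization(username, password, client_ip, session_key, server_public_key):
    # Concatenate the fields with colons, as described in step 2.
    cleartext = ":".join([username, password, client_ip, str(int(time.time())), session_key])
    # Encrypt the whole string with the server's public key (hypothetical helper).
    return encrypt_with_public_key(cleartext, server_public_key)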

There is also a proposal for a RIPEM-based HTTP authorization scheme, which has already been implemented by NCSA.

3.3: The IETF HTTP Security Working Group

The HTTP Security Working Group is a proposed group not yet ratified by the IETF. Because it is only proposed, its charter, goals, and associated web pages are currently under construction. Its basic goal, however, is to develop requirements and specifications for providing security services to HTTP. This working group should be watched for future developments in secure HTTP.

3.4: Shen

Shen is a security scheme proposed by CERN. It provides for three separate security-related mechanisms.

3.5: Netscape Communications SSL

SSL (the Secure Sockets Layer) is Netscape Communications' proposal for providing security beneath application protocols such as HTTP. It provides data encryption, server authentication, and message integrity, and is built on RSA's public key cryptography.

Currently this protocol is implemented in the Netscape browser and the Netsite Commerce Server, allowing a user to transfer credit card and other personal information from any Netscape browser to a Netsite server.

Bank of America is using Netscape to provide real-time online credit card authorizations. First Data Card Services Electronic Funds Services (EFS), the world's largest credit card payment processor, is also providing the same services using Netscape. MCI is providing a similar service: a secure online shopping mall.

For additional information see:

3.6: S-HTTP

S-HTTP (Secure HTTP) is yet another secure HyperText Transfer Protocol proposal, designed by Enterprise Integration Technologies (EIT). S-HTTP is backwards compatible with HTTP and is designed to incorporate different cryptographic message formats, including PEM, PGP, and PKCS-7, into WWW browsers and servers. Non-S-HTTP browsers and servers should be able to communicate with S-HTTP implementations without a discernible difference, unless they request protected documents. S-HTTP does not require any client-side public keys, which means that users do not have to pre-establish public keys to participate in secure transactions, unlike NCSA's approach described above.

EIT also has presentations on the web in MS PowerPoint for Windows 3.0 format; they claim these should be portable to Macintosh PowerPoint.

Additional information can be obtained by mailing shttp-info@eit.com or examining the following links:

3.7: Other proposed mechanisms

3.7.1: AT&T Bell Laboratory

David M. Kristol of AT&T Bell Laboratories has proposed an extension to HTTP which provides security through the use of "wrappers." Wrapping is just what it sounds like: the body of the WWW communication is wrapped by adding headers and footers, and is often encoded. Enough information is present in the headers for the recipient to decode the message. For more information, see A Proposed Extension Mechanism for HTTP.

3.7.2: SimpleMD5

SimpleMD5 is another specification of a secure authentication scheme built on top of HTTP/1.0. This scheme does not provide a secure transfer mechanism or encryption of body content. It is designed primarily to facilitate a secure access authentication mechanism.

The scheme is based on a "challenge-response" mechanism using a "nonce" value: a value used for a single transaction, which both the server and client must agree upon. A checksum of the password combined with the nonce value is sent across the network, so the password itself is never sent in the clear.
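A minimal sketch of the idea in Python follows; the exact fields and their ordering are defined by the SimpleMD5 specification, so this only illustrates the general mechanism:

import hashlib
import os

def make_nonce():
    # The server generates a one-time value and sends it with its challenge.
    return os.urandom(16).hex()

def response_digest(password, nonce):
    # The client hashes the password together with the nonce; only this
    # digest crosses the network, never the password itself.
    return hashlib.md5(f"{nonce}:{password}".encode()).hexdigest()

# The server, which remembers the nonce it issued and knows the password,
# computes the same digest and compares it with the client's response.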

This model has been proposed by Spyglass, Inc. Spyglass has also proposed an Enhanced Mosaic Security Framework; Enhanced Mosaic is designed to work with S-HTTP. For additional information see:

3.7.3: Digest Security Scheme

This is a minimal security scheme; its goals, excerpted from the Simple Digest Security Scheme document, are as follows:

The most important goal of the designers is to implement security without using patented or export-restricted software. This is a CERN-supported proposal, although it is not entirely clear from the Web page whose idea it is.

(Table of Contents)

4: Secure Browsers

4.1: Secure NCSA Mosaic

Secure NCSA Mosaic was developed by EIT in cooperation with RSA and the NCSA. Additional sources of information include:

(Table of Contents)

5: Firewalls and WWW Proxies

WWW proxy servers run on firewall machines to provide WWW access to clients that reside inside the firewall. Security firewalls are designed to restrict direct interaction between hosts behind the firewall and all other outside hosts on the Internet; see the sources listed below for more information on firewalls.

The CERN HTTP server may be configured to run as a proxy server. An additional benefit of proxy servers is that they may be set up to provide caching, which can significantly cut network traffic costs and provide faster response times.
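A rough sketch of the relevant directives in the CERN server's configuration file follows; the cache path and size are made up, and the exact directive set depends on the server version, so consult the CERN httpd documentation before using this:

Pass http:*
Pass ftp:*
Pass gopher:*

Caching On
CacheRoot /usr/local/www/cache
CacheSize 100 M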

If you are unable to run a CERN HTTP proxy server, NCSA HTTP may be modified to run behind firewalls using the SOCKS package. Normally NCSA HTTP requires a direct Internet connection to run correctly. SOCKS is a package designed to run Internet clients from behind firewalls without interfering with the firewall's security requirements. The SOCKS package includes a modified version of Mosaic for X 2.0. This version is not supported by NCSA.

Proxies can be set up to connect to other proxies: an inner proxy server may connect to the outside world through an outer proxy server. This may be a security hole. For additional information, see Authentication Between Clients and Proxies.

Proxy Related Papers from the WWW '94 Conference Proceedings:

Sources of additional information: (should be a mosaic file kicking around somewhere)

(Table of Contents)

6: Miscellaneous

6.1: Security and Robots

Robots, webcrawlers, and spiders are programs that automatically traverse the web, collecting indexing and cataloging information. These programs can be quite useful; however, poorly written "depth-first" searching robots have the potential to bring servers to a standstill by recursively downloading information from CGI scripts that generate an infinite number of links. Robots can also overload servers by producing "rapid-fire" requests for information. The suggested retrieval rate for a robot is no more than one document per minute. For additional information, see Guidelines for Robot Writers or Ethical Web Agents.

It is often useful to protect certain areas of the server from access by robots, or to prevent them from accessing the server completely. The method used to protect certain areas from robots is to provide a single file, available via HTTP at the local URL "/robots.txt". This file, maintained by the system administrator, specifies the access policy for the server. Policies can be specified on a per-robot (or user-agent) basis, or may use wildcards to indicate exclusion of all robots. An example "/robots.txt" file follows (note: our installation does not actually use one of these files):

# robots.txt for http://www.cs.unc.com/

#Allows arachnophilia to look everywhere it wants
User-agent: arachnophilia 
Disallow:

#Disallows any other user-agent from accessing the /tmp directory tree
User-agent: *
Disallow: /tmp/ 
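
As an illustration of the client side of this convention, a well-behaved robot fetches the policy file before crawling and checks each URL against it. The following Python sketch uses the standard library's robots.txt parser against the hypothetical server from the example above:

from urllib.robotparser import RobotFileParser

parser = RobotFileParser("http://www.cs.unc.com/robots.txt")
parser.read()  # fetch and parse the policy file

print(parser.can_fetch("arachnophilia", "/tmp/scratch.html"))  # True: allowed everywhere
print(parser.can_fetch("SomeOtherBot", "/tmp/scratch.html"))   # False: /tmp/ is disallowed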

This approach can be easily implemented on existing servers, and the access policy can be determined through a single document retrieval. Robots can follow the protocol of accessing the "/robots.txt" file to determine whether they should continue to access the server. Additional sources of information:

6.2: Tools

This section contains a compilation of free tools on the web which provide security-related assistance, or are security-related products themselves. This list will hopefully grow during the semester.

6.3: Examples of Secure Server/Browser Packages

6.4: CGI Script Security

For additional information see: (Table of Contents)

7: WWW Security Mailing Lists

I will add to this section as I find new mailing lists on the net. (Table of Contents)

Please e-mail me if you have any problems with this page!

hanes@cs.unc.edu