Internet Security Professional Reference: How to Buy a Firewall


Evaluating Reporting and Accounting

The most neglected area in firewalls is alerting and reporting. Any firewall that cannot send out an alert when it detects an attack is deficient. The two leaders in reporting and accounting are Raptor’s Eagle and Digital’s AltaVista Firewall.

Eagle offers a series of alerting capabilities based on frequency. For example, it’s simple to define “if someone tries to telnet in more than 100 times in five minutes, then we’ve got a problem.” Once an alert is triggered, Eagle can play a sound, send mail, or otherwise notify you.
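
The frequency rule amounts to counting events inside a sliding time window. Here is a minimal sketch of that idea in Python; the five-minute window, 100-attempt threshold, and simulated event stream come from the example above, but the code is illustrative only and is not Eagle's actual rule language.

# Minimal sketch of a frequency-based alert rule in the spirit of
# "more than 100 telnet attempts in five minutes." The event source,
# threshold, and alert action are hypothetical placeholders.
import time
from collections import deque

WINDOW_SECONDS = 5 * 60     # five-minute window
THRESHOLD = 100             # alert after this many events in the window

events = deque()            # timestamps of recent telnet attempts

def record_attempt(now=None):
    """Record one telnet attempt and return True if the alert fires."""
    now = now if now is not None else time.time()
    events.append(now)
    # Drop timestamps that have aged out of the window.
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()
    return len(events) > THRESHOLD

# Example: simulate a burst of attempts, one per second.
if __name__ == "__main__":
    start = 0.0
    for i in range(150):
        if record_attempt(start + i):
            print(f"alert: {len(events)} telnet attempts "
                  f"within {WINDOW_SECONDS} seconds")
            break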

AltaVista Firewall has a different strategy. It has a series of states: green, yellow, orange, and red. At each level, you can define events that move the firewall up to the next level and take some action. For example, you may decide that if the firewall detects too many telnet failures, move to “yellow alert,” disable the telnet proxy for two hours, and send yourself mail. You can even have AltaVista Firewall shut down the firewall entirely, if warranted (as when running out of disk space). After a period of time, which varies from platform to platform, the firewall lowers its alert status.
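
The escalation scheme is essentially a small state machine: events push the alert level up, each transition runs an action, and the level decays back down after a timeout. The rough sketch below, with invented names and thresholds, shows the shape of that logic; it is not AltaVista Firewall's configuration interface.

# Sketch of an escalating alert-state machine: green -> yellow -> orange -> red,
# with an action on each transition and automatic decay after a timeout.
# All names, thresholds, and timings here are invented for illustration.
import time

STATES = ["green", "yellow", "orange", "red"]

class AlertState:
    def __init__(self, decay_seconds=2 * 60 * 60):
        self.level = 0
        self.decay_seconds = decay_seconds
        self.raised_at = None

    def escalate(self, reason, action=None):
        if self.level < len(STATES) - 1:
            self.level += 1
        self.raised_at = time.time()
        print(f"moving to {STATES[self.level]} alert: {reason}")
        if action:
            action()            # e.g., disable a proxy, send mail

    def decay(self, now=None):
        """Drop one level once the decay period has elapsed."""
        now = now if now is not None else time.time()
        if self.level > 0 and self.raised_at and now - self.raised_at >= self.decay_seconds:
            self.level -= 1
            self.raised_at = now
            print(f"lowering alert status to {STATES[self.level]}")

# Example: too many telnet failures moves the firewall to yellow.
state = AlertState()
state.escalate("too many telnet failures",
               action=lambda: print("disabling telnet proxy, sending mail"))
# A few hours later the status drops back toward green.
state.decay(now=time.time() + 3 * 60 * 60)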

AltaVista Firewall shows its current state graphically by changing the background color of the console to match the state.

The Unix-based firewalls, including LSLI’s PORTUS, TIS’ Gauntlet, and Milkyway’s Black Hole, hand-wave away this issue by suggesting that the network manager could write a tool to analyze the logs and generate alerts from them.
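
For what it's worth, the sort of home-grown tool those vendors have in mind might look like the following sketch, which scans a log file for telnet failures and mails an alert. The log path, line format, and mail command are all assumptions for illustration; real log formats vary by product and platform.

# Hypothetical log watcher: count telnet failures in a firewall log and
# mail the network manager if a threshold is exceeded (assumes a local MTA).
import re
import subprocess

LOG_FILE = "/var/log/firewall.log"          # hypothetical location
PATTERN = re.compile(r"telnet .* (denied|failed)", re.IGNORECASE)
THRESHOLD = 100

def scan_and_alert():
    try:
        with open(LOG_FILE) as log:
            hits = sum(1 for line in log if PATTERN.search(line))
    except FileNotFoundError:
        return
    if hits > THRESHOLD:
        subprocess.run(
            ["mail", "-s", f"firewall alert: {hits} telnet failures", "netmgr"],
            input=f"{hits} telnet failures found in {LOG_FILE}\n",
            text=True,
        )

if __name__ == "__main__":
    scan_and_alert()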

Similarly, the task of summarizing reports and distributing them to the network manager is handled poorly in most products. Global’s Centri, for example, e-mails ill-designed summaries every 15 minutes by default. Any network manager subjected to that barrage of output would quickly start ignoring the messages, along with any problems they report. The transmission interval can be changed, but the summaries remain long, verbose, and hard to read no matter how often you get them.

Raptor also sells an add-on package for Eagle that helps analyze traffic.

Exceptions to the poor reporting rule are TIS’ Gauntlet, Digital’s AltaVista Firewall, and Milkyway’s Black Hole. Gauntlet and AltaVista Firewall both have sensible reporting strategies that can automatically generate and send reports at selected intervals. Milkyway has gone overboard on reporting capabilities: it stores logging information in a relational database (Postgres) and lets the network manager use either pre-written scripts or SQL queries to generate reports.
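
As an illustration of the database approach, a report such as "denied connections per service and source host" is a short SQL query. The sketch below uses Python's built-in sqlite3 purely so it runs standalone; the real product uses Postgres, and the table and column names here are invented rather than Milkyway's actual schema.

# Hypothetical report query against a firewall log database.
# sqlite3 is used only so the sketch runs standalone; schema is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE connections (
                    service TEXT, src_host TEXT, allowed INTEGER, ts TEXT)""")
conn.executemany(
    "INSERT INTO connections VALUES (?, ?, ?, ?)",
    [("telnet", "10.1.1.5", 0, "1997-03-01 09:00"),
     ("ftp",    "10.1.1.7", 1, "1997-03-01 09:05"),
     ("telnet", "10.1.1.5", 0, "1997-03-01 09:06")],
)

# Summary report: denied connections per service and source host.
report = conn.execute("""
    SELECT service, src_host, COUNT(*) AS denied
    FROM connections
    WHERE allowed = 0
    GROUP BY service, src_host
    ORDER BY denied DESC
""").fetchall()

for service, src_host, denied in report:
    print(f"{service:8} {src_host:15} {denied} denied")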

Evaluating Firewall Performance

The products evaluated in this section are as follows:

  Network-1 Firewall/Plus
  Global Internet Centri
  Raptor Eagle
  Check Point Software Firewall-1
  Digital AltaVista Firewall

Firewall performance is a sticky area. In many environments, a firewall is connected to a WAN link at T1 (1.544 Mbps) or E1 (2.048 Mbps) speeds. In those situations, performance is not much of an issue because the WAN side is so slow. More and more organizations, however, are both increasing the speed of their WAN connections (such as to the Internet) beyond T1/E1 and deploying firewalls internally to keep the support group away from the payroll database or the marketing people away from a project they shouldn’t know about. In those situations, performance is an issue, and it can be an important one.

In this section, you can read the results of an Opus One test of five NT-based firewalls to see which can handle the load and keep on going. NT was chosen for this test because it offered the most reproducible environment for comparing performance. Because the hardware and much of the operating system are held constant, the evaluation offers useful and meaningful comparisons.

To see just how fast these firewalls could run, Opus One set them up on an Intel-based Pentium 133 MHz system using two 3COM Corporation 3C595 NICs. Firewall/Plus and Firewall-1 called for NT 4.0. AltaVista, Eagle, and Centri all ran on NT 3.51 with Service Pack 5. No service packs were installed on the NT 4 systems.

To keep Microsoft honest, Opus One also installed a Unix-based firewall, Livermore Software Laboratory’s PORTUS, on the same hardware platform running Sun’s Solaris. The same tests were run through PORTUS as through the NT-based firewalls, just to see how things turned out.

A 300 MHz Alpha-based test system sat on each side of the firewall, and the two test systems talked to each other through the firewall. Opus One used two different benchmarks to test performance under different conditions, and each benchmark was repeated five times.


Note:  If any single iteration of the benchmark was more than 5 percent away from the mean, which might have indicated LAN or disk problems rather than firewall performance, all five iterations were repeated and the original results discarded. The numbers reported here are simple arithmetic means.
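
That acceptance rule is easy to express directly. The sketch below computes the arithmetic mean of five runs and flags any run more than 5 percent away from it, which would trigger a full re-run; the sample figures are made up.

# Acceptance check for a set of benchmark iterations: compute the mean and
# flag any run deviating from it by more than 5 percent. Sample data invented.
def accept_iterations(results, tolerance=0.05):
    mean = sum(results) / len(results)
    outliers = [r for r in results if abs(r - mean) / mean > tolerance]
    return mean, outliers

runs = [5.10, 5.05, 5.20, 5.15, 5.08]       # e.g., Mbps per iteration
mean, outliers = accept_iterations(runs)
if outliers:
    print(f"discard and repeat: {outliers} deviate more than 5% from {mean:.2f}")
else:
    print(f"accepted; reported figure is the mean: {mean:.2f}")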

The first benchmark was used to test the proxy capabilities of the firewall. This benchmark asked the firewall to look very closely at the data passing through it. The FTP (File Transfer Protocol) application was an ideal way to test proxy capabilities.

The FTP benchmark includes only the host CPU time needed to read the data from disk; on each test run, the received data were discarded (written to the bit bucket) rather than stored, so each transfer involves only a single trip to the disk. FTP transfers were run in both directions (that is, both pulling and pushing data), and the two directions were averaged. File sizes were chosen to be large (4 MB) to minimize the effects of start-up and shutdown delays.
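
Reducing the two directions to a single figure is simple arithmetic: compute the throughput of the "get" and the "put" separately, then take the average. The timings in this sketch are invented sample values, not measured results.

# Averaging the two FTP directions into one throughput figure.
# Timings below are hypothetical, for illustration only.
FILE_SIZE_BYTES = 4 * 1024 * 1024           # 4 MB test file

def throughput_mbps(seconds):
    return (FILE_SIZE_BYTES * 8) / seconds / 1_000_000

get_seconds, put_seconds = 7.2, 7.9         # invented sample timings
average = (throughput_mbps(get_seconds) + throughput_mbps(put_seconds)) / 2
print(f"average FTP throughput: {average:.2f} Mbps")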

The second benchmark was a pure data benchmark and should have required little or no analysis by the firewall. This benchmark used a commonly available testing tool known as TTCP, which simply exercises the TCP protocol to push data chunks of varying sizes through a TCP pipe.
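
In case TTCP is unfamiliar, the heart of it is no more than a loop that pushes fixed-size buffers through a TCP connection and reports throughput, as in this bare-bones sketch. The host, port, buffer size, and byte count here are placeholders, and the real tool has many more options.

# Bare-bones TTCP-style data pump: push fixed-size buffers through a TCP
# connection and report throughput. All parameters are placeholders.
import socket
import time

def pump(host="127.0.0.1", port=5001, buf_size=1024, total_bytes=100 * 1024 * 1024):
    buf = b"\x00" * buf_size
    sent = 0
    start = time.time()
    with socket.create_connection((host, port)) as sock:
        while sent < total_bytes:
            sock.sendall(buf)
            sent += buf_size
    elapsed = time.time() - start
    print(f"sent {sent} bytes in {elapsed:.1f} s "
          f"({sent * 8 / elapsed / 1_000_000:.1f} Mbps)")

# A matching receiver simply accepts the connection and reads until EOF.
if __name__ == "__main__":
    pump()   # requires a listener on port 5001, e.g. "nc -l 5001 > /dev/null"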

The benchmark version of TTCP had been enhanced for the test lab environment by using TCP option negotiation to increase the size of the TCP window. In a normal LAN environment, particularly a 100 Mbps one, this enhancement would increase performance significantly. Experience showed, however, that it was wasted effort here: none of the proxy-based firewalls supported the option. Because larger TCP window sizes were not supported, those results are not reported here.
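
Enlarging the TCP window comes down to asking the stack for bigger socket buffers before the connection is made, so that the window scale option can be negotiated during the handshake. The following generic sketch shows the idea; it is not the modified TTCP code used in the test.

# Requesting larger socket buffers before connecting, so that TCP can
# negotiate a bigger window (the window scale option). Generic sketch only.
import socket

WINDOW_BYTES = 256 * 1024       # hypothetical enlarged window

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, WINDOW_BYTES)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, WINDOW_BYTES)
# The buffers must be set before connect() for the scale option to be
# negotiated during the TCP handshake.
print("requested send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
sock.close()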

The FTP benchmark, also modified to support larger TCP window sizes than a normal FTP client/server, was similarly restricted. The charts below show TTCP testing that used a TCP buffer size of 1024 octets and a total data transfer of 100 MB per iteration.

