Summary
A careful implementation of the firewall architecture can save a tremendous amount of resources in maintenance. Carole runs through the basic rules. (3,500 words)
WIZARD'S GUIDE TO SECURITY
By Carole Fennelly
Implementing firewall software is not really that hard. Maintaining it is. If you can take a step back and look down the road at the possible traffic jams, you can make maintenance easier by spending a little extra time with the implementation. This column will attempt to offer some advice that could save you some maintenance headaches.
Know your firewall
It doesn't matter what type of firewall you are installing; you're going to have
to take the time to learn it. For vendor-proprietary firewalls, this
means that you will probably have to take a class to learn the
vendor interface. Of course, this does not necessarily make you a
firewall expert, but it is worth taking the time to learn how to use
the product in the manner intended. Some vendors are pretty good
at supporting backward compatibility so that future releases just
require reading over the documentation.
For a firewall based on open source standards, there is more that you have to learn, and the management interface may not be as easy to use. The bright side is that the technology learned can be used in other places. Once you develop the technical skills, you can customize the firewall.
Prune the architecture
It's easy to get carried away when designing a security
architecture. Just remember: the more complex you make it, the
harder it will be to maintain while still performing efficiently. Once
you've learned more about the particular firewall that you are
implementing, see where you can streamline the architecture. For
example, if you are requiring users to authenticate on the firewall
before going out to Web sites, you will take a performance hit and
add a lot of maintenance. Is it worth it? If it is that important
to make sure that users are not going to inappropriate sites, it
might be better to implement a Web-caching product that also
provides filtering.
Along with the services, consider how many firewalls are in the architecture. If you're looking at an architecture with more than 50 firewalls, you will need a centralized management mechanism. For very large implementations, a firewall farm that runs on high-capacity systems may be a more efficient method than maintaining hundreds of little firewalls. Some firewalls support load balancing, which uses the resources efficiently and provides redundancy (see Resources).
Can't get there from here
Quite a lot of time is wasted in firewall implementations because
the basic network connectivity is not there. It cannot be stressed
enough: test your network routes before you even begin to install
the firewall software. Otherwise, you will find yourself wasting an
inordinate amount of time trying to debug firewall rules when it's
not a firewall problem. Almost every firewall admin I know uses
traceroute and snoop for testing. In his SunWorld column last month,
Peter Galvin brought up a very useful feature of the Solaris route
command called route get. I found it immediately useful during a
recent firewall implementation (thanks, Peter!):
# route get ns2.someplace.com
   route to: ns2.someplace.com
destination: default
       mask: default
    gateway: rtr-r-37.company.com
  interface: le0
      flags: <UP,GATEWAY,DONE>
 recvpipe  sendpipe  ssthresh  rtt,msec  rttvar  hopcount      mtu     expire
        0         0         0         0       0         0     1500          0
#
The nice thing about this command is that it tells you the netmask and the interface being used. The fact that it is native to Solaris is another plus.
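When scripting connectivity checks before an install, the route get output can be parsed with standard tools. A minimal sketch, using the output style shown above; the parse_route helper is an illustrative name, not a standard command:

```shell
# Sample output in the style of `route get` shown above; on a live
# system you would pipe `route get <host>` into the same awk instead.
sample_route_output='   route to: ns2.someplace.com
destination: default
        mask: default
     gateway: rtr-r-37.company.com
   interface: le0
       flags: <UP,GATEWAY,DONE>'

parse_route() {
    # Print "gateway interface" from route-get style output on stdin
    awk '$1 == "gateway:"   {gw=$2}
         $1 == "interface:" {ifc=$2}
         END {print gw, ifc}'
}

echo "$sample_route_output" | parse_route
```

Recording the gateway and interface for each critical destination before the firewall goes in gives you something to diff against when a rule "mysteriously" stops working later.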
Have you seen my keys?
Have you ever found yourself frantically searching for your car keys
when you're already late for something important? It's frustrating
that something as mundane as misplaced keys can have such an impact.
Vendor firewalls have license keys with various mechanisms for
activation. Don't wait until you have firewall consultants waiting
to configure the firewall to find out how to activate the keys. Make
sure you understand how the license keys work. An incorrect date on
your system could keep the licensing from working. Make extra copies
of the license keys (including a hard copy) and keep them in a safe
place.
Firewall rules
Blue laws
It is a source of amusement for many people to review the so-called
blue laws of a region (see Resources).
These silly, outdated laws are technically enforceable, but no judge
in his right mind would uphold them. As humans, we rely on our
common sense to determine if a rule is appropriate. Computers have
no common sense -- they just do what they are told no matter how
silly it is. With a firewall, it is critical to keep the rulebase as
efficient as possible and to periodically review the rules to make
sure they are still relevant. It is better, of course, to write good
rules in the first place.
Anyone with a computer science background is familiar with the flowchart model that should be written before the actual program. It helps to define the flow of logic so that an efficient program can be written. While it's tempting to avoid this step, it could help prevent problems later. It doesn't matter what level of Brand X certification you have; if you don't fully understand the logic of what you are doing, you will probably write sloppy rulebases. Don't expect the requesting organization to know how to word its request. Often, a request could be worded as, "We need all our people to get to vendor X to use the SOME_APP service." Not very helpful, is it? If this is how you get your requests, you will be spending a lot of time on the phone with follow-up calls chasing down information. It is easier to have a form that specifies the required information, such as:
Name of Requestor: John Doe
Department: XYZ
Reason for request: The application is required to provide a database
    lookup service from Vendor X to users in group A.
Name of application: SOME_APP
Connection initiation: (I) Internal (E) External (B) Both: I
    (who opens the connection)
IP Address of Server(s): 183.17.53.42/32 (the destination side)
IP Address of Client(s): 192.168.10.0/24 (the source side)
Type of service and port: TCP/5678
Users who will use this service: All
Type of authentication (Internal services only): (P) Password (T) Token (N) None: N
Service Access Time Period: Any
Originating Dept. Approval: manager from XYZ
Firewall Dept. Approval: Firewall group manager
Even with all this information, don't hesitate to question the request. If you need to provide a connection to an external vendor for a service, it is up to the vendor to require and support authentication. Unless there is a good reason for it, the source of the connection could be defined to be anywhere in your address space. If you write very tight rules on a per-workstation basis, you might find yourself constantly updating rulebases when people move around.
I've seen some rules written badly because the administrator was confused about how the TCP/IP protocol works. Administrators could mistakenly think that they have to write a rule for both directions in order to provide a return route.
For example (using a Checkpoint format), the above requirement may be written like this:
Source      Destination  Service   Action  Track  Install  Time
My_net      Their_net    SOME_APP  accept  short  My_fw1   Any
Their_net   My_net       SOME_APP  accept  short  My_fw1   Any
If the application does not originate from the external network, the second rule is useless at best. Some admins may not realize that a stateful firewall keeps track of established connections and automatically passes the return traffic to the originator, so it is unnecessary to write a rule for it. In fact, the second rule would allow the other side to open connections to the client side on the SOME_APP port. If you are lucky, you might not be running any process that is listening on that port. If you do have such a process, you have opened a hole in your infrastructure. Understand your rules before you create them.
Definitions
The next step in writing a program is to define the variables.
Each administrator has a different style that becomes her own personal standard. The
problem arises when you have multiple administrators maintaining
multiple firewalls. It is very likely that the same variable names
will be chosen with different definitions. Aside from the
administrative headache of having to look them up all the time,
there is an even bigger headache if you have to move an application
to another firewall.
It will look like the network objects are defined, but they may not have the values that the application requires. Changing the values will break another application. For a very large installation, it helps to establish a standard.
While I personally hate long variable names, I must grudgingly admit
that more descriptive names make maintenance easier. For host
objects, you may want to incorporate the IP address into the name,
as in h1.2.3.4_VendorX. For service names, you might want to choose
something like t5678.SOME_APP. Just make sure everyone
follows the same standard. Also, don't just start renaming objects
to follow a new standard without planning it out!
Now we can take the form and fill in variable names for the rule:
Remote host IP: 1.2.3.4
Remote host name: h1.2.3.4_VendorX
Service type and port: tcp 5678
Service name: t5678.SOME_APP
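A small pair of helper functions can keep every administrator generating names the same way. This is a sketch of the convention described above; mk_host_name and mk_svc_name are hypothetical helpers, not part of any firewall product:

```shell
# Sketch: enforce the naming standard discussed above
# (h<IP>_<Vendor> for host objects, t<port>.<APP> for TCP services).
# Both function names are illustrative, not vendor commands.
mk_host_name() {    # usage: mk_host_name <ip> <vendor>
    printf 'h%s_%s\n' "$1" "$2"
}
mk_svc_name() {     # usage: mk_svc_name <proto-letter> <port> <app>
    printf '%s%s.%s\n' "$1" "$2" "$3"
}

mk_host_name 1.2.3.4 VendorX    # h1.2.3.4_VendorX
mk_svc_name t 5678 SOME_APP     # t5678.SOME_APP
```

Generating names from a script rather than typing them by hand removes the temptation to improvise a personal variation of the standard.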
The TIS Gauntlet package can use the same information to generate
the new rules. You can use the GUI interface to define the new
services (e.g., simple plug-gw), set down the rules for which
addresses can use it, which addresses they can go to, what port to
use, the users who are allowed to use it, and even a time of day for
the use of the service. And for those of you who are feeling macho
enough to set up rules without the GUI, you can always edit
netperm-table. I personally wouldn't recommend it except when you
are testing rule changes to make sure you
have the correct rules. Just make sure you go back to the GUI and
update the rules there. The following is an example from a TIS
Gauntlet netperm-table with some comments:
test4-gw: state on                   # The proxy is to be started
test4-gw: proxy-type plug-gw         # The proxy type
test4-gw: proxy-exec ./plug-gw       # The program to run
test4-gw: bind-port 3333             # Port to listen on
test4-gw: bind-address 193.47.22.5   # Address to bind to (0.0.0.0 is all addresses)
test4-gw: timeout 21600              # Time out after x seconds of use
# The next entries are addresses permitted to use this proxy
test4-gw: permit-hosts 127.0.0.1 -policy test4-gw-Trusted
test4-gw: permit-hosts 172.31.61.* -policy test4-gw-Trusted
test4-gw: permit-hosts 172.31.64.* -policy test4-gw-Trusted
test4-gw: permit-hosts 172.31.65.* -policy test4-gw-Trusted
test4-gw: permit-hosts 172.31.66.* -policy test4-gw-Trusted
test4-gw: permit-hosts 172.31.67.* -policy test4-gw-Trusted
test4-gw: permit-hosts 172.31.68.* -policy test4-gw-Trusted
test4-gw: permit-hosts 172.31.69.* -policy test4-gw-Trusted
test4-gw: permit-hosts 192.168.11.* -policy test4-gw-Trusted
test4-gw: permit-hosts 192.168.12.* -policy test4-gw-Trusted
test4-gw: permit-hosts 193.47.23.* -policy test4-gw-Trusted
test4-gw: permit-hosts 172.31.63.* -policy test4-gw-Trusted
# The policy applied to the previous addresses
policy-test4-gw-Trusted: permit-proxy test4-gw       # They are permitted to use the proxy
policy-test4-gw-Trusted: description test system
policy-test4-gw-Trusted: destport 3335               # The port we are connecting to
policy-test4-gw-Trusted: desthost 65.71.93.104       # The address we are connecting to
policy-test4-gw-Trusted: privport off                # We are not using a port < 1023
policy-test4-gw-Trusted: force_source_address off    # Use the address of the outbound adapter, not the originating client's source address
policy-test4-gw-Trusted: authserver 127.0.0.1 7777   # Address of the authentication server
policy-test4-gw-Trusted: permit-password change      # Users of this proxy can change their password
policy-test4-gw-Trusted: permit-destination *        # Permitted to go anywhere
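Because netperm-table is plain text, it is also easy to audit outside the GUI. A hedged sketch that lists the networks a proxy trusts by extracting its permit-hosts entries (the file path and sample lines here are illustrative):

```shell
# Sketch: audit which source networks a proxy trusts by pulling the
# permit-hosts entries out of a netperm-table. The sample file below
# stands in for the real table on a Gauntlet firewall.
table=/tmp/netperm-table.sample
cat > "$table" <<'EOF'
test4-gw: permit-hosts 127.0.0.1 -policy test4-gw-Trusted
test4-gw: permit-hosts 172.31.61.* -policy test4-gw-Trusted
test4-gw: permit-hosts 192.168.11.* -policy test4-gw-Trusted
EOF

# Field 3 of each permit-hosts line is the permitted address pattern
awk '/permit-hosts/ {print $3}' "$table" | sort
```

Running a listing like this periodically makes it obvious when a "temporary" trusted network has quietly become permanent.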
User management
If your site requires user authentication, you will need to
understand how your firewall software maintains user information. If
you rely only on the GUI, your hand will get a lot of exercise when
you have to point and click to get specific information for each
user. It's pretty straightforward if the firewall software bases
user information on the Unix password file, since that is a standard
with which most Unix administrators are familiar. If the firewall software
uses a vendor-specific standard, you will have to take the time to
learn what it is. While the documentation should help with this, it often does not. For example, the older Checkpoint releases stored
the user database in a proprietary format. This could be exported
using the command:
/etc/fw/bin/fw dbexport -f /tmp/users
If you examine the output file, /tmp/users, you will see a header
line of 18 field descriptions separated by semicolons, followed by
the user data. For the purposes of this example, the header line is
listed below, followed by a fictitious user record:
name;groups;destinations;sources;auth_method;fromhour;tohour;expiration_date;foreground;days;internal_password;SKEY_seed;SKEY_number;SKEY_passwd;SKEY_gateway;comments;radius_server;userc;
xyzuser;{some_app_grp,other_app_grp};{Any};{Any};InternalPassword;00:00;23:59;31-Dec-2010;Green;{M,T,W,Th,F,S,Sn};hkjdshkddk;xyzuser;100;;;{DES,DES,MD5};
If you are able to get an ASCII format of the user database, you can use standard Unix commands to get information about all the users. For example, if you want to verify the expiration date for all the users with the above example, you can run:
cut -f8 -d";" /tmp/users > /tmp/exp.users
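Taking this one step further, a short sketch that pairs each user name (field 1) with its expiration date (field 8), using the fictitious record shown above as sample data:

```shell
# Sketch: report name and expiration date for every user in the
# exported database. /tmp/users.sample is a stand-in for the real
# dbexport output; the record is the fictitious one from the text.
users_file=/tmp/users.sample
cat > "$users_file" <<'EOF'
xyzuser;{some_app_grp,other_app_grp};{Any};{Any};InternalPassword;00:00;23:59;31-Dec-2010;Green;{M,T,W,Th,F,S,Sn};hkjdshkddk;xyzuser;100;;;{DES,DES,MD5};
EOF

# Field 1 is the user name, field 8 the expiration date
cut -f1,8 -d';' "$users_file"    # xyzuser;31-Dec-2010
```

The same pattern works for any field in the export, which is exactly why an ASCII dump of the user database is worth having.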
Virtual private networks
A typical requirement for a virtual private network (VPN) is to
provide administrators the ability to access an internal network
via the Internet. Remote dial servers are too slow and
inefficient. Most administrators have Internet access at home
anyway, and at much faster speeds than the 28.8K modems a typical
remote dial server provides. In addition, maintaining a toll-free number
for a modem pool can get expensive. A VPN can take advantage of the
existing infrastructure of the Internet in a secure manner. Typical
VPNs provide for secure authentication via smart card, triple DES
encryption for security, and the flexibility of providing all the
services required by the administrators.
Many firewall vendors offer VPN capability as part of their firewall product. I prefer to use a third-party VPN that is independent of the firewall solution. I do not want to be required to upgrade my firewall software just to keep my VPN servers in sync! I also like to have the flexibility of being able to switch firewall products without affecting my VPN connections.
An important point to remember with VPNs is that they only protect the network pipe -- not the endpoints.
An implementation example
My partner and I set up an intranet firewall for an organization
that required market data feeds between divisions. We chose a simple
proxy-based firewall (the TIS Gauntlet, in this case) because there
were many different applications that needed to pass through. The
difficulty was with the market data lines. Each market data vendor
had different requirements. One vendor provided a desktop client that
connected to its network via a SOCKS 5 connection. We installed a
SOCKS 5 server on the proxy-based firewall and channeled all the
connections through to that server. Two other market data vendors
had users log in to their networks and push an X Window back to the
desktop. They seemed surprised that we had a problem with permitting
unrestricted X into our network. There is a huge security exposure
with X Windows, but the market data vendor either didn't know or
didn't care. Its business was fast data feeds, not security.
To secure this, we deployed a VPN server to channel the users' connections through the firewall. The VPN we used (VTCP/Secure from InfoExpress) had the ability to assign a remote IP address to users and let services such as X Windows listen on that address. To use it, the user would simply log in to the market data vendor's system, set the display to the remote IP address, and initiate the X client. The VPN server captured the X session and tunneled it back through the firewall to the client desktop.
This provided a secure method of attaching these X sessions without compromising the entire network security infrastructure. There was some impact on performance, but considering the high risk of X services, it was acceptable. The vendor also provided printing capabilities from the application. However, the application ran on the vendor's server, which could not see the users' printers. The firewall software provided a line printer gateway, which allowed printers to be defined on the firewall and mapped to real print servers within the company. Other market data vendors may have different mechanisms for passing data. It is important to have a firewall that is flexible enough to let you add nonvendor services to it.
In addition to the market data firewall, we set up another firewall to handle Internet applications (Web, FTP, Telnet, e-mail, RealAudio, etc). We replaced some of the services provided by the vendor with other solutions that fit our needs better. For example, we run virus scanning on all desktops and servers, and felt that it was unnecessary to run an HTTP proxy that scanned for content. We were simply not interested in this feature. What we really wanted was a proxy server that would cache common pages and cut down on bandwidth utilization.
To accomplish this, we disabled the HTTP proxy that the vendor supplied and installed the CERN HTTP server, which provides both proxying and caching. The CERN proxy can chain proxy-to-proxy and can also bypass the proxy for certain internal sites. The one thing it did not provide was a mechanism to pass SSL traffic on to the next proxy. That problem was easily solved, though: the CERN proxy supports SOCKS 4 connections for use behind a SOCKS firewall. The firewall vendor did not provide SOCKS, so we purchased a commercial version from NEC and installed it on the vendor firewall.
This gave us full proxy-to-proxy chaining. Because we had the source code to the CERN proxy, it was simple enough to add a feature that forced it to listen only on the inside firewall address, rather than on all addresses. This provided a higher level of security by not advertising unnecessary services to the outside.
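That kind of exposure is easy to check for. A sketch that flags listeners bound to all addresses in netstat-style output (the sample lines below are made up for illustration; on a live firewall you would pipe netstat -an in instead):

```shell
# Sketch: flag services listening on all addresses (shown as "*" or
# 0.0.0.0 in netstat-style output). The sample below is illustrative
# only; pipe real `netstat -an` output through the same awk.
sample_netstat='tcp 0 0 193.47.22.5.3333  *.* LISTEN
tcp 0 0 *.80              *.* LISTEN
tcp 0 0 0.0.0.0:8080      0.0.0.0:* LISTEN'

# Field 4 is the local address; print it when it is a wildcard bind
echo "$sample_netstat" | awk '/LISTEN/ && ($4 ~ /^(\*|0\.0\.0\.0)/) {print $4}'
```

Anything this prints is a service advertised on every interface, including the outside one, and deserves the same scrutiny we gave the CERN proxy.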
Final thoughts
This series of articles started on the premise that the firewall architecture
should adapt to meet the requirements of a site -- not vice versa. I realize
that some of the issues and examples could easily be expanded further, and I
would be happy to do so upon request. At this point, I just wanted to demonstrate that a firewall architecture can and should be adaptable for a site.
Many thanks to Sue Chao, Kapil Bhasin, and Jonathan Klein for their contributions to this article.
Errata
Disclaimer:
The information and software in this article are provided as-is
and should be used with caution. Each environment is unique and
readers are cautioned to investigate, with their companies, the
feasibility of using the information and software in the article. No
warranties, implied or actual, are granted for any use of the
information and software in this article, and neither author nor
publisher is responsible for any damages, either consequential or
incidental, with respect to the use of the information and software
contained herein.
A reader called my attention to a couple of errors in last month's
column regarding SSH. He points out, quite correctly, that the
present version of ssh (SSH 2.0.13) does indeed support recursive
copying as well as X11 forwarding. The version I was using was SSH
2.0.10 (which I should have specified) and did not support recursive
copying, though it did support X11 forwarding (my mistake). There
were other reasons that we needed a wrapper that were too tedious to
mention in the article, but I would like to clear up my
inaccuracies.
About the author
Carole Fennelly is a partner in Wizard's Keys Corporation, a company specializing in computer security consulting. She has been a Unix system administrator for more than 15 years on various platforms and has particularly focused on sendmail configurations of late. Carole provides security consultation to several financial institutions in the New York City area.
Resources and Related Links