Monday, July 16, 2007

Packet-Filtering and Basic Security Measures

Preliminary Concepts Underlying Packet-Filtering Firewalls

A small site may have Internet access through a T1 line, a cable modem, DSL, ISDN, a PPP connection to a phone-line dial-up account, or wireless. The computer connected directly to the Internet is a point of focus for security issues. Whether you have one computer or a local area network (LAN) of linked computers, the initial focus for a small site will be on the machine with the direct Internet connection. This machine will be the firewall machine.

The term firewall has various meanings depending on its implementation and purpose. At this point, firewall means the Internet-connected machine. This is where your primary security policies for Internet access will be implemented. The firewall machine's external network interface card is the connection point, or gateway, to the Internet. The purpose of a firewall is to protect what's on your side of this gateway from what's on the other side.

A simple firewall setup is sometimes called a bastion firewall because it's the main line of defense against attack from the outside. Many of your security measures are mounted from this one defender of your realm. Consequently, everything possible is done to protect this system.

Behind this line of defense is your single computer or your group of computers. The purpose of the firewall machine might simply be to serve as the connection point to the Internet for other machines on your LAN. You might be running local, private services behind this firewall, such as a shared printer or shared file systems. Or you might want all of your computers to have access to the Internet. One of your machines might host your private financial records. You might want to have Internet access from this machine, but you don't want anyone getting in. At some point, you might want to offer your own services to the Internet. One of the machines might be hosting your own website for the Internet. Another might function as your mail server or gateway. Your setup and goals will determine your security policies.

The firewall's purpose is to enforce the security policies you define. These policies reflect the decisions you've made about which Internet services you want to be accessible to your computers, which services you want to offer the world from your computers, which services you want to offer to specific remote users or sites, and which services and programs you want to run locally for your own private use. Security policies are all about access control and authenticated use of private or protected services, programs, and files on your computers.

Home and small-business systems don't face all the security issues of a larger corporate site, but the basic ideas and steps are the same. There just aren't as many factors to consider, and security policies often are less stringent than those of a corporate site. The emphasis is on protecting your site from unwelcome access from the Internet. A packet-filtering firewall is one common approach to, and one piece of, network security and controlling access to and from the outside.

Of course, having a firewall doesn't mean you are fully protected. Security is a process, not a piece of hardware. For example, even with a firewall in place, it's possible to download spyware or adware or to click on a maliciously crafted email, thereby opening up the computer, and thus the network, to attack. It's just as important to have measures in place to mitigate successful attacks as it is to spend resources on a firewall. Using best practices inside your network will help lessen the chance of a successful exploit and give your network resiliency.

Something to keep in mind is that the Internet paradigm is based on the premise of end-to-end transparency. The networks between the two communicating machines are intended to be invisible. In fact, if a network device somewhere along the path fails, the idea is that traffic between the two endpoint machines will be silently rerouted.

Ideally, firewalls should be transparent. Nevertheless, they break the Internet paradigm by introducing a single point of failure within the networks between the two endpoint machines. Additionally, not all network applications use communication protocols that are easily passed through a simple packet-filtering firewall. It isn't possible to pass certain traffic through a firewall without additional application support or more sophisticated firewall technology.

Further complicating the issue has been the introduction of Network Address Translation (NAT, or "masquerading" in Linux parlance). NAT enables one computer to act on behalf of many other computers by translating their requests and forwarding them on to their destination. The use of NAT along with RFC 1918 private IP addresses has effectively prevented a looming shortage of IPv4 addresses. The combination of NAT and RFC 1918 address space makes the transmission of some types of network traffic difficult, impossible, complex, or expensive.
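As a rough illustration of Linux masquerading (the interface name eth0 and the internal network 192.168.1.0/24 are assumptions, not part of the original text), a small LAN can be hidden behind a single public address with rules along these lines:

iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward

Hosts on the 192.168.1.0/24 network then appear to the outside world as the firewall's external address, which is exactly why some protocols that embed addresses in their payload have trouble crossing NAT.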

NOTE

Many router devices, especially those for DSL, cable modems, and wireless, are sold as firewalls but are nothing more than NAT-enabled routers. They don't perform many of the functions of a true firewall, but they do separate internal from external networks. Be wary when purchasing a router that claims to be a firewall but provides only NAT. Although some of these products have good features, the more advanced configurations are sometimes not possible.

A final complication has been the proliferation of multimedia and peer-to-peer (P2P) protocols used in both real-time communication software and popular networked games. These protocols are antithetical to today's firewall technology. Today, specific software solutions must be built and deployed for each application protocol. Firewall architectures that handle these protocols easily and economically are still being worked out in the standards committees' working groups.

It's important to keep in mind that the combination of firewalling, DHCP, and NAT introduces complexities that cause sites to have to compromise system security to some extent in order to use the network services that the users want. Small businesses often have to deploy multiple LANs and more complex network configurations to meet the varying security needs of the individual local hosts.

Log files and other forms of monitoring (Part 2)

Log monitoring

logcheck

logcheck will go through the messages file (and others) on a regular basis (usually invoked via crontab) and email out a report of any suspicious activity. It is easily configurable, with several 'classes' of items: active penetration attempts, which it screams about immediately; bad activity; and activity to be ignored (for example, DNS server statistics or SSH rekeying). Logcheck is available from: http://www.psionic.com/abacus/logcheck/.
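For example, a crontab entry to run it hourly might look something like the following (the install path is an assumption; adjust it to wherever your copy of the logcheck script actually lives):

0 * * * * /bin/sh /usr/local/etc/logcheck.sh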

colorlogs

colorlogs will color-code log lines, allowing you to easily spot bad activity. It is of somewhat questionable value, however, as I know very few people who stare at log files on an ongoing basis. You can get it at: http://www.resentment.org/projects/colorlogs/ .

WOTS

WOTS collects log files from multiple sources and will generate reports or take action based on what you tell it to do. WOTS looks for regular expressions you define and then executes the commands you list (mail a report, sound an alert, etc.). WOTS requires you have perl installed and is available from: http://www.vcpc.univie.ac.at/~tc/tools/ .

swatch

swatch is very similar to WOTS, and its log file configuration is much the same. You can download swatch from: ftp://ftp.stanford.edu/general/security-tools/swatch/

Kernel logging

auditd

auditd allows you to use the kernel logging facilities (a very powerful tool). You can log mail messages, system events and the normal items that syslog would cover, but in addition you can cover events such as specific users opening files, the execution of programs, use of setuid programs, and so on. If you need a solid audit trail then this is the tool for you; you can get it at: ftp://ftp.hert.org/pub/linux/auditd/ .

Shell logging

bash

I will also cover bash, since it is the default shell in most Linux installations and thus its logging facilities are the ones generally used. bash has a large number of variables you can configure at or during run time that modify how it behaves, everything from the command prompt style to how many lines to keep in the history file.

HISTFILE

name of the history file, by default it is ~username/.bash_history

HISTFILESIZE

maximum number of commands to keep in the file, it rotates them as needed.

HISTSIZE

the number of commands to remember (i.e. when you use the up arrow key).

The variables are typically set in /etc/profile, which configures bash globally for all users; the values can, however, be overridden by users in their ~username/.bash_profile file, and/or by manually using the export command to set variables such as export EDITOR=emacs. This is one of the reasons user directories should not be world readable: the .bash_history file can contain a lot of valuable information to a hostile party. You can also make the file itself non-world-readable, set your .bash_profile not to log, make the file non-writeable (thus denying bash the ability to write to it), or link it to /dev/null (this is almost always a sure sign of suspicious user activity, or a paranoid user).

For the root account I would highly recommend setting HISTFILESIZE and HISTSIZE to a low value such as 10. Unfortunately you cannot really lock down normal users' history files; you can set them so the user cannot delete them and so on, but unless you deny the user the export command they will be able to get around having all their commands logged if they are competent. Ultimately, letting users have interactive shell accounts on the server is a bad idea and should be as heavily restricted as possible.
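As a rough sketch (the values, username and paths are only illustrative), you could add something like the following to root's ~/.bash_profile or to /etc/profile, and tighten the permissions on an existing history file:

HISTFILESIZE=10
HISTSIZE=10
export HISTFILESIZE HISTSIZE

chmod 600 /home/username/.bash_history (readable only by the owner)
chattr +a /home/username/.bash_history (append-only on ext2, so it cannot easily be truncated)

Keep in mind that a determined user can simply unset or re-export these variables, so treat this as a speed bump rather than a real control.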

Wednesday, June 27, 2007

Log files and other forms of monitoring (Part 1)

One integral part of any UNIX system is its logging facilities. The majority of logging in Linux is provided by two main programs, sysklogd and klogd, the first providing logging services to programs and applications, the second providing logging capability to the Linux kernel. Klogd actually sends most messages to the syslogd facility but will on occasion pop up messages at the console (i.e. kernel panics). Sysklogd actually handles the task of processing most messages and sending them to the appropriate file or device; this is configured from within /etc/syslog.conf. By default most logging to files takes place in /var/log/, and generally speaking programs that handle their own logging (such as Apache) log to /var/log/progname/; this centralizes the log files and makes it easier to place them on a separate partition (some attacks can fill your logs quite quickly, and a full / partition is no fun).

Additionally there are programs that handle their own internal logging, one of the more interesting being the bash command shell. By default bash keeps a history file of commands executed in ~username/.bash_history; this file can make for extremely interesting reading, as oftentimes admins will accidentally type their passwords in at the command line. Apache handles all of its logging internally, configurable from httpd.conf and extremely flexible with the release of Apache 1.3.6 (it supports conditional logging). Sendmail handles its logging requirements via syslogd but also has the option (via the command-line -X switch) of logging all SMTP transactions straight to a file. This is highly inadvisable as the file will grow enormous in a short span of time, but it is useful for debugging. See the sections in network security on Apache and Sendmail for more information.

sysklogd / klogd

In a nutshell klogd handles kernel messages; depending on your setup this can range from almost none to a great deal (if, for example, you turn on process accounting). It then passes most messages to syslogd for actual handling, i.e. placement in a logfile. The man pages for sysklogd, klogd and syslog.conf are pretty good, with clear examples. One exceedingly powerful and often overlooked ability of syslog is to log messages to a remote host running syslog. Since you can define multiple locations for syslog messages (i.e. send all kern messages to the /var/log/messages file, and to the console, and to a remote host or multiple remote hosts), this allows you to centralize logging to a single host and easily check log files for security violations and other strangeness. There are several problems with syslogd and klogd, however, the primary one being the ease with which an attacker who has gained root access can delete or modify log files; there is no authentication built into the standard logging facilities.

The standard log files that are usually defined in syslog.conf are:

/var/log/messages
/var/log/secure
/var/log/maillog
/var/log/spooler


The first one (messages) typically gets the majority of information: user logins, TCP_WRAPPERS dumps information here, IP firewall packet logging typically dumps information here, and so on. The second typically records entries for events like users changing their UID/GID (via su, sudo, etc.), failed attempts when passwords are required, and so on. The maillog file typically holds entries for every POP/IMAP connection (user login and logout) and the header of each piece of email that goes in or out of the system (from whom, to where, msgid, status, and so on). The spooler file is not often used anymore as the number of people running Usenet or UUCP has plummeted; UUCP has been basically replaced with FTP and email, and most Usenet servers are typically extremely powerful machines needed to handle a full, or even partial, newsfeed, meaning there aren't many of them (typically one per ISP, or more depending on size). Most home users and small/medium-sized businesses will not (and should not, in my opinion) run a Usenet server; the amount of bandwidth and machine power required is phenomenal, let alone the security risks.

You can also define additional log files, for example you could add:

kern.* /var/log/kernel-log

And/or you can log to a separate log host:

*.emerg @syslog-host
mail.* @mail-log-host

The first line would result in all kernel messages being logged to /var/log/kernel-log; this is useful on headless servers since by default kernel messages go to /dev/console (i.e. someone logged in at the machine). In the second case all emergency messages would be logged to the host "syslog-host", and all the mail logs would be sent to the "mail-log-host" server, allowing you to easily maintain centralized log files of various services.

secure-syslog

The major problem with syslog, however, is that tampering with log files is trivial. There is a secure version of syslogd available at http://www.core-sdi.com/ssyslog/ (these guys generally make good tools and have a good reputation; in any case it is open source software for those of you who are truly paranoid). This allows you to cryptographically sign logs and otherwise ensure they haven't been tampered with; ultimately, however, an attacker can still delete the log files, so it is a good idea to send them to another host, especially in the case of a firewall, to prevent the hard drive from being filled up.

next generation syslog

Another alternative is "syslog-ng" (Next Generation Syslog), which seems much more customizable than either syslog or secure syslog. It supports digital signatures to prevent log tampering, and can filter based on the content of the message, not just the facility it comes from or its priority (something that is very useful for cutting down on volume). Syslog-ng is available at: http://www.balabit.hu/products/syslog-ng.html .

System Files

/etc/passwd

The password file is arguably the most critical system file in Linux (and most other unices). It contains the mappings of username, user ID and the primary group ID that person belongs to. It may also contain the actual password, however it is more likely (and much more secure) to use shadow passwords and keep the passwords in /etc/shadow. This file MUST be world readable, otherwise commands even as simple as ls will fail to work properly. The GECOS field can contain such data as the real name, phone number and the like for the user; the home directory is the default directory the user gets placed in if they log in interactively; and the login shell must be an interactive shell (such as bash, or a menu program) and listed in /etc/shells for the user to log in. The format is:

username:password:UID:GID:GECOS_field:home_directory:login_shell
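For example, a typical entry (the username, IDs and paths here are purely illustrative) might look like:

seifried:x:500:500:Kurt Seifried:/home/seifried:/bin/bash

The "x" in the password field indicates the real password is stored in /etc/shadow.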

/etc/shadow

The shadow file holds the username and password pairs, as well as account information such as the expiry date and other special fields. This file should be protected at all costs.

/etc/group

The group file contains all the group membership information, and optional items such as the group password (typically stored in gshadow on current systems); this file too must be world readable for the system to behave correctly. The format is:

groupname:password:GID:member,member,member

A group may contain no members (i.e. it is unused), a single member or multiple members, and the password is optional.
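An illustrative entry (the names and GID are made up) with a shadowed group password and three members:

accounting:x:510:bob,mary,jane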

/etc/gshadow

Similar to the password shadow file, this file contains each group's password and member list.

/etc/login.defs

This file (/etc/login.defs) allows you to define some useful default values for various programs such as useradd, as well as password expiry settings. It tends to vary slightly across distributions and even versions, but is typically well commented and tends to contain sane default values.
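A few representative entries (the exact names and defaults vary between distributions, so treat the values below purely as an illustration):

PASS_MAX_DAYS 90
PASS_MIN_DAYS 1
PASS_WARN_AGE 7
UID_MIN 500

The first three control password aging (maximum age, minimum days between changes, and days of warning before expiry); UID_MIN is the lowest UID that useradd will hand out.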

/etc/shells

The shells file contains a list of valid shells, if a user’s default shell is not listed here they may not log in interactively. See the section on Telnetd for more information.

/etc/securetty

This file contains a list of ttys that root can log in from. Console ttys are usually /dev/tty1 through /dev/tty6. Serial ports (if you want to log in as root over a modem, say) are typically /dev/ttyS0 and up. If you want to allow root to log in via the network (a very bad idea; use sudo) then add /dev/ttyp1 and up (if 30 users are logged in and root tries to log in, root will be coming from /dev/ttyp31). Generally you should only allow root to log in from /dev/tty1, and it is advisable to disable the root account altogether.
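For example, a restrictive /etc/securetty could contain nothing but the single line below (note that entries are listed without the /dev/ prefix):

tty1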

Wednesday, June 6, 2007

Administrative tools/ Remote

Webmin

Webmin is (currently) a non-commercial web-based administrative tool. It's a set of Perl scripts with a self-contained www server that you access using a www browser; it has modules for most system administration functions, although some are a bit temperamental. One of my favourite features is the fact that it holds its own usernames and passwords for access to webmin, and you can customize what each user gets access to (i.e. user1 can administer users, user2 can reboot the server, and user3 can fiddle with the apache settings). Webmin is available at: http://www.webmin.com/.

Linuxconf

Linuxconf is a general purpose Linux administration tool that is usable from the command line, from within X, or via its built-in www server. It is my preferred tool for automated system administration (I primarily use it for doing strange network configurations), as it is relatively light from the command line (it is actually split up into several modules). From within X it provides an overall view of everything that can be configured (PPP, users, disks, etc.). To use it via a www browser you must first run Linuxconf on the machine and add the host(s) or network(s) you want to allow to connect (Conf > Misc > Linuxconf network access), then save changes and quit. When you connect to the machine (by default Linuxconf runs on port 98) you must enter a username and password; it only accepts root as the account, and Linuxconf doesn't support any encryption, so I would have to recommend very strongly against using this feature across public networks. Linuxconf ships with RedHat Linux and is available at: http://www.solucorp.qc.ca/linuxconf/. Linuxconf also doesn't seem to ship with any man pages/etc.; the help is contained internally, which is slightly irritating.

COAS

The COAS project (Caldera Open Administration System) is a very ambitious project to provide an open framework for administering systems, from the command line (with a semi-graphical interface), from within X (using the Qt widget set), or via the web. It abstracts the actual configuration data by providing a middle layer, thus making it suitable for use on disparate Linux platforms. Version 1.0 was just released, so it looks like Caldera is finally pushing ahead with it. The COAS site is at: http://www.coas.org/ .

Administrative tools /Local

YaST

YaST (Yet Another Setup Tool) is a rather nice command-line graphical interface (very similar to scoadmin) that provides an easy interface to most administrative tasks. It does not, however, have any provisions for giving users limited access, so it is really only useful for cutting down on errors and allowing new users to administer their systems. Another problem is that, unlike Linuxconf, it is not network aware, meaning you must log into each system you want to manipulate.

sudo

Sudo gives a user setuid access to a program (or programs), and you can specify which host(s) they are allowed to log in from (or not) and have sudo access on (thus if someone breaks into an account, but you have it locked down, damage is minimized). You can specify what user a command will run as, giving you a relatively fine degree of control. If granting users access, be sure to specify the hosts they are allowed to log in from and execute sudo on, as well as give the full pathnames to binaries; it can save you significant grief in the long run (i.e. if I give a user setuid access to "adduser", there is nothing to stop them editing their path statement and copying "bash" into /tmp). This tool is very similar to super but with slightly less fine control. Sudo is available for most distributions as a core package or a contributed package, and at http://www.courtesan.com/sudo/ just in case your distribution doesn't ship with it. Sudo allows you to define groups of hosts, groups of commands, and groups of users, making long-term administration simpler. Several /etc/sudoers examples:
Give the user ‘seifried’ full access
seifried ALL=(ALL) ALL

Create a group of users, a group of hosts, and allow them to shut down the server as root

Host_Alias WORKSTATIONS=localhost, station1, station2
User_Alias SHUTDOWNUSERS=bob, mary, jane
Cmnd_Alias REBOOT=/sbin/halt, /sbin/reboot, /bin/sync
Runas_Alias REBOOTUSER=admin
SHUTDOWNUSERS WORKSTATIONS=(REBOOTUSER) REBOOT
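With the above in place, bob (for example) could then reboot station1, from one of the listed workstations, with something along the lines of:

sudo -u admin /sbin/reboot

Note that the commands in the Cmnd_Alias are given with full paths, as recommended above; sudo will not accept bare command names.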

Super

Super is one of the very few tools that can actually be used to give certain users (and groups) varied levels of access to system administration. In addition you can specify times and allow access to scripts; giving setuid access to even ordinary commands can have unexpected consequences (any editor, any file-manipulation tool like chown or chmod, even a tool like lp, could compromise parts of the system). Debian ships with super, and there are rpm's available in the contrib directory (the buildhost is listed as "localhost"; you might want to find the source and compile it yourself). This is a very powerful tool (it puts sudo to shame), but it requires a significant amount of effort to implement properly; I think it is worth the effort though. The head-end distribution site for super is at: ftp://ftp.ucolick.org/pub/users/will/ .

Friday, May 25, 2007

Administrative tools/ Access

Telnet

Telnet is by far the oldest and best-known remote access tool; virtually every Unix ships with it, and even systems such as NT support it. Telnet is really only useful if you can administer the system from a command prompt (something NT isn't so great at), which makes it perfect for Unix. Telnet is incredibly insecure: passwords, usernames and session data fly around as plain text, and it is a favourite target for sniffers. Telnet comes with all Linux distributions. You should never ever use stock telnet to remotely administer a system.

SSL Telnet

SSL Telnet is telnet with the addition of SSL encryption, which makes it far more secure. Using X.509 certificates (also referred to as personal certificates) you can easily administer remote systems. Unlike systems such as SSH, SSL Telnet is completely GNU and free for all use. You can get the SSL Telnet server and client from:
ftp://ftp.replay.com/.

SSH

SSH was originally free but is now under a commercial license; it does, however, have many features that make it worthwhile. It supports several forms of authentication (password, rhosts-based, RSA keys), allows you to redirect ports, and lets you easily configure which users are allowed to log in using it. SSH is available from: ftp://ftp.replay.com/. If you are going to use it commercially, or want the latest version, you should head over to:
http://www.ssh.fi/.

LSH

LSH is a free implementation of the SSH protocol; LSH is GNU licensed and is starting to look like the alternative (commercially speaking) to SSH (which is no longer free). You can download it from:
http://www.net.lut.ac.uk/psst/, please note it is under development.

REXEC

REXEC is one of the older remote UNIX utilities; it allows you to execute commands on a remote system, however it is seriously flawed in that it has no real security model. Security is achieved via the use of "rhosts" files, which specify which hosts/etc. may run commands; this, however, is prone to spoofing and other forms of exploitation. You should never ever use stock REXEC to remotely administer a system.

Slush

Slush is based on OpenSSL and currently supports X.509 certificates, which for a large organization is a much better (and saner) bet than trying to remember several dozen passwords on various servers. Slush is GPL, but not finished yet (it implements most of the required functionality to be useful, but has limits). On the other hand it is based completely on open source software, making the possibility of backdoors/etc. remote. Ultimately it could replace SSH with something much nicer. You can get it from: http://violet.ibs.com.au/slush/.

NSH

NSH is a commercial product with all the bells and whistles (and I do mean all). It's got built-in support for encryption, so it's relatively safe to use (I cannot really verify this as it isn't open source). Ease of use is high: you cd //computername and that 'logs' you into that computer, and you can then easily copy/modify files, run ps and get the process listing for that computer, and so on. NSH also has a Perl module available, making scripting of commands pretty simple, and it is ideal for administering many similar systems (such as workstations). In addition to this NSH is available on multiple platforms (Linux, BSD, Irix, etc.). NSH is available from: http://www.networkshell.com/, and 30-day evaluation versions are easily downloaded.

Fsh

Fsh stands for "Fast remote command execution" and is similar in concept to rsh/rcp. It avoids the expense of constantly creating encrypted sessions by bringing up an encrypted tunnel using ssh or lsh and running all the commands over it. You can get it from: http://www.lysator.liu.se/fsh/.

secsh

secsh (Secure Shell) provides another layer of login security: once you have logged in via ssh or SSL telnet you are prompted for another password; if you get it wrong, secsh kills off the login attempt. You can get secsh at: http://www.leenux.com/scripts/.

The Linux kernel

Linux (GNU/Linux according to Stallman, if you're referring to a complete Linux distribution) is actually just the kernel of the operating system. The kernel is the core of the system: it handles access to the hard drive, security mechanisms, networking and pretty much everything else. It had better be secure or you are screwed.

On top of problems like the Pentium F00F bug and inherent problems with the TCP/IP protocol, the Linux kernel has its work cut out for it. Kernel versions are labeled as X.Y.Z: Z is the minor revision number, Y defines whether the kernel is a test (odd number) or production (even number) series, and X defines the major revision (we have had 0, 1 and 2 so far). I would highly recommend running kernel 2.2.x; as of May 1999 this is 2.2.9. The 2.2.x series of kernels has major improvements over the 2.0.x series. Using the 2.2.x kernels also gives you access to newer features such as ipchains (instead of ipfwadm) and other advanced security features.

Upgrading and Compiling the Kernel

Upgrading the kernel consists of getting a new kernel and modules, editing /etc/lilo.conf, and rerunning lilo to write a new MBR. The kernel will typically be placed in /boot, and the modules in /lib/modules/kernel.version.number/.

Getting a new kernel and modules can be accomplished in two ways: by downloading the appropriate kernel package and installing it, or by downloading the source code from ftp://ftp.kernel.org/ (please use a mirror site) and compiling it.

Compiling a kernel is straightforward:

cd /usr/src

There should be a symlink called "linux" pointing to the directory containing the current kernel source; remove it if there is one (if there isn't, no problem). You might want to 'mv' the linux directory to /usr/src/linux-kernel.version.number and create a link pointing /usr/src/linux at it.
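A minimal sketch of that shuffle, assuming the old tree was 2.2.5 and the new kernel is 2.2.9 (the version numbers and tarball name are only examples):

cd /usr/src
mv linux linux-2.2.5 (preserve the old tree under its version number)
tar xzf linux-2.2.9.tar.gz (unpacks into /usr/src/linux)
mv linux linux-2.2.9
ln -s linux-2.2.9 linux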

Unpack the source code using tar and gzip as appropriate so that you now have a /usr/src/linux with about 50 megabytes of source code in it. The next step is to create the kernel configuration file (/usr/src/linux/.config); this can be achieved using "make config", "make menuconfig" or "make xconfig". My preferred method is "make menuconfig" (for this you will need the ncurses and ncurses devel libraries). This is arguably the hardest step: there are hundreds of options, which can be categorized into two main areas, hardware support and service support. For hardware support, make a list of the hardware this kernel will be running on (i.e. P166, Adaptec 2940 SCSI controller, NE2000 Ethernet card, etc.) and turn on the appropriate options. For service support you will need to figure out which filesystems (fat, ext2, minix, etc.) you plan to use, and the same for networking (firewalling, etc.).

Once you have configured the kernel you need to compile it. The following commands make the dependencies (ensuring that libraries and so forth get built in the right order), clean out any information from previous compiles, build the kernel, build the modules, and install the modules.

make dep (makes dependencies)

make clean (cleans out previous cruft)

make bzImage (make zImage pukes if the kernel is too big, and 2.2.x kernels tend to be pretty big)

make modules (creates all the modules you specified)

make modules_install (installs the modules to /lib/modules/kernel.version.number/)

You then need to copy /usr/src/linux/arch/i386/boot/bzImage (or zImage) to /boot/vmlinuz-kernel.version.number. Then edit /etc/lilo.conf, adding a new entry for the new kernel; setting it as the default image is the safest way (using the default=X command, otherwise it will boot the first kernel listed). If it fails you can reboot and go back to the previous working kernel. Run lilo, and reboot.
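A condensed example of those steps (the 2.2.9 version number and the "linuxnew" label are placeholders; adjust the root= device to match your system):

cp /usr/src/linux/arch/i386/boot/bzImage /boot/vmlinuz-2.2.9

Add a stanza such as the following to /etc/lilo.conf (and optionally point default= at the new label):

image=/boot/vmlinuz-2.2.9
label=linuxnew
root=/dev/hda1
read-only

Then run /sbin/lilo and reboot. Keep the old kernel's stanza in lilo.conf so you can fall back to it if the new kernel fails.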

Kernel Versions

Currently we are in a stable kernel release series, 2.2.x. I would highly recommend running the latest stable kernel (2.2.9 as of May 1999), as there are several nasty security problems (network attacks and denial-of-service attacks) that affect all kernels up to 2.0.35 (2.0.36 is patched) and the later 2.1.x test kernels up to 2.2.3. Upgrading from the 2.0.x series of stable kernels to the 2.2.x series is relatively painless if you are careful and follow instructions (there are some minor issues, but for most users it will go smoothly). Several software packages must be updated: libraries, ppp, modutils and others (they are covered in the kernel docs / rpm dependencies / etc.). Additionally, keep the old working kernel and add an entry for it in lilo.conf as "linuxold" or something similar, and you will be able to easily recover in the event 2.2.x doesn't work out as expected. Don't expect the 2.2.x series to be bug free; 2.2.9 will be found to contain flaws and will be obsoleted, like every piece of software in the world.

Physical / Boot security

Physical Access

This area is covered in depth in the "Practical Unix and Internet Security" book, but I'll give a brief overview of the basics. Someone turns your main accounting server off, turns it back on, boots it from a specially made floppy disk and transfers payroll.db to a foreign ftp site. Unless your accounting server is locked up what is to prevent a malicious user (or the cleaning staff of your building, the delivery guy, etc.) from doing just that? I have heard horror stories of cleaning staff unplugging servers so that they could plug their cleaning equipment in. I have seen people accidentally knock the little reset switch on power bars and reboot their servers (not that I have ever done that). It just makes sense to lock your servers up in a secure room (or even a closet). It is also a very good idea to put the servers on a raised surface to prevent damage in the event of flooding (be it a hole in the roof or a super gulp slurpee).

The Computer BIOS

The computer's BIOS is one of the most low-level components: it controls how the computer boots and a variety of other things. Older BIOSes are infamous for having universal passwords, so make sure your BIOS is recent and does not contain such a backdoor. The BIOS can be used to lock the boot sequence of a computer to C: only, i.e. the first hard drive; this is a very good idea. You should also use the BIOS to disable the floppy drive (typically a server will not need to use it), which can prevent users from copying data off of the machine onto floppy disks.

You may also wish to disable the serial ports in users' machines so that they cannot attach modems; most modern computers use PS/2 keyboards and mice, so there is very little reason for a serial port in any case (plus they eat up IRQs). The same goes for the parallel port: allowing users to print in a fashion that bypasses your network, or giving them the chance to attach an external CD-ROM burner or hard drive, can decrease security greatly. As you can see, this is an extension of the policy of least privilege and can decrease risks considerably, as well as making network maintenance easier (fewer IRQ conflicts, etc.).

LILO

Once the computer has decided to boot from C:, LILO (or whichever bootloader you use) takes over. Most bootloaders allow for some flexibility in how you boot the system, LILO especially so, but this is a double-edged sword. You can pass LILO arguments at boot time, the most damaging (from a security point of view) being "imagename single", which boots Linux into single-user mode and, by default in most distributions, dumps you to a root prompt in a command shell with no prompting for passwords or other pesky security mechanisms.
Several techniques exist to minimize this risk.

delay=X
this controls how long (in tenths of a second) LILO waits for user input before booting the default selection. One of the requirements of C2 security is that this interval be set to 0 (obviously a dual-boot machine blows most security out of the water). It is a good idea to set this to 0 unless the system dual boots something else.

prompt
forces the user to enter something; LILO will not boot the system automatically. This could be useful on servers as a way of disabling reboots without a human attendant, but typically if the attacker has the ability to reboot the system they could rewrite the MBR with new boot options. If you add a timeout option, however, the system will continue booting after the timeout is reached.

restricted
requires a password to be used if boot time options (such as "linux single") are passed to the boot loader. Make sure you use this one on each image (otherwise the server will need a password to boot, which is fine if you’re never planning to remotely reboot it).

password=XXXXX
requires the user to input a password; used in conjunction with restricted. Also make sure lilo.conf is not world readable, or any user will be able to read the password.

Here is an example lilo.conf from one of my servers (the password has, of course, been changed).

boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=100
default=linux
image=/boot/vmlinuz-2.2.5
label=linux
root=/dev/hda1
read-only
restricted
password=some_password

This boots the system using the /boot/vmlinuz-2.2.5 kernel, with the boot loader installed on the MBR of the first IDE hard drive. The restricted and password keywords are set within the image section, so the system can boot "linux" unattended with no problem, but it will ask for a password if you pass extra options such as "linux single". If you want to go into "linux single" you have 10 seconds to type it in, at which point you would be prompted for the password ("some_password"). Combine this with a BIOS that is password protected and set to boot only from C:, and you have a pretty secure system.

General concepts, servers versus workstations

There are many issues that affect the actual security setup of a computer. How secure does it need to be? Is the machine networked? Will there be interactive user accounts (telnet/ssh)? Will users be using it as a workstation or is it a server? The last one has a big impact since "workstations" and "servers" have traditionally been very different beasts, although the line is blurring with the introduction of very powerful and cheap PCs, as well as operating systems that take advantage of them. The main difference in today's world between computers is usually not the hardware, or even the OS (Linux is Linux, NT Server and NT Workstation are close family, etc.), it is in what software packages are loaded (apache, X, etc.) and how users access the machine (interactively, at the console, and so forth). Some general rules that will save you a lot of grief in the long run:
  1. Keep users off of the servers. That is to say: do not give them interactive login shells, unless you absolutely must.
  2. Lock down the workstations, assume users will try to 'fix' things (heck, they might even be hostile, temp workers/etc).
  3. Use encryption wherever possible to keep plain text passwords, credit card numbers and other sensitive information from lying around.
  4. Regularly scan the network for open ports/installed software/etc. that shouldn't be there, and compare the results against previous runs (see the example after this list).
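For the network scanning in item 4, a quick sketch using a tool such as nmap (assuming it is installed; the address range and port list are examples only) might be:

nmap -sT -p 1-65535 192.168.1.0/24

Save the output to a file and diff it against the previous run to spot new ports or hosts.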

Remember: security is not a solution, it is a way of life.

Generally speaking, workstations are used by people who don't really care about the underlying technology; they just want to get their work done and retrieve their email in a timely fashion. There are, however, many users who will have the ability to modify their workstation, for better or worse (install packet sniffers, warez ftp sites, www servers, irc bots, etc.). To add to this, most users have physical access to their workstations, meaning you really have to lock them down if you want to do it right.

  1. Use BIOS passwords to lock users out of the BIOS (they should never be in here, also remember that older BIOS's have universal passwords.)
  2. Set the machine to boot from the appropriate harddrive only.
  3. Password the LILO prompt.
  4. Do not give the user root access, use sudo to tailor access to privileged commands as needed.
  5. Use firewalling so even if they do set up services they won't be accessible to the world.
  6. Regularly scan the process table, open ports, installed software, and so on for change.
  7. Have a written security policy that users can understand, and enforce it.
  8. Remove all sharp objects (compilers, etc.) from a system unless they are needed.

Remember: security in depth.

Properly set up, a Linux workstation is almost user-proof (nothing is 100% secure), and generally a lot more stable than a comparable Wintel machine. With the added joy of remote administration (SSH/Telnet/NSH) you can keep your users happy and productive.

Servers are a different ball of wax altogether, and generally more important than workstations (if one workstation dies, one user is affected; if the email/www/ftp/etc. server dies your boss phones up in a bad mood). Unless there is a strong need, keep the number of users with interactive shells (bash, pine, lynx-based, whatever) to a bare minimum. Segment services up (have a mail server, a www server, and so on) to minimize single points of failure. Generally speaking, a properly set up server will run and not need much maintenance (I have one email server at a client location that has been in use for 2 years with about 10 hours of maintenance in total). Any upgrades should be planned carefully and executed on a test system first. Some important points to remember with servers:

  1. Restrict physical access to servers.
  2. Apply the policy of least privilege; users can break fewer things this way.
  3. MAKE BACKUPS!
  4. Regularly check the servers for changes (ports, software, etc), automated tools are great for this.
  5. Software changes should be carefully planned/tested as they can have adverse effects (like kernel 2.2.x no longer using ipfwadm; wouldn't that be embarrassing if you forgot to install ipchains).

Minimization of privileges means giving users (and administrators, for that matter) the minimum amount of access required to do their job. Giving a user "root" access to their workstation would make sense if all users were Linux-savvy and trustworthy, but they generally aren't (on both counts). And even if they were, it would be a bad idea, as chances are they would install some software that is broken or insecure. If all a user needs to do is shut down or reboot the workstation, then that is the amount of access they should be granted. You certainly wouldn't leave accounting files on a server with world-readable permissions so that the accountants can view them; this concept extends across the network as a whole. Limiting access will also limit damage in the event of an account penetration (have you ever read the post-it notes people put on their monitors?).

Safe installation of Linux

A proper installation of Linux is the first step to a stable, secure system. There are various tips and tricks to make the install go easier, as well as some issues that are best handled during the install (such as disk layout).

Choosing your install media

This is the #1 issue that will affect speed of install and, to a large degree, safety. My personal favorite is ftp installs, since popping a network card into a machine temporarily (assuming it doesn't have one already) is quick and painless, and going at 1+ megabyte/sec makes for quick package installs. Installing from CD-ROM is generally the easiest, as they are bootable: Linux finds the CD and off you go, no pointing to directories or worrying about filename case (as with an HD install). This is also original Linux media, so you can be relatively sure it is safe (assuming it came from a reputable source); if you are paranoid, however, feel free to check the signatures on the files.


  • FTP - quick, requires network card, and an ftp server (Windows box running something like warftpd will work as well).
  • HTTP – also fast, and somewhat safer than running a public FTP server for installs
  • Samba - quick, good way if you have a windows machine (share the cdrom out).
  • NFS - not as quick, but since nfs is usually implemented in most existing UNIX networks (and NT now has an NFS server from MS for free) it's mostly painless. NFS is the only network install supported by RedHat’s kickstart.
  • CDROM - if you have a fast cdrom drive, your best bet, pop the cd and boot disk in, hit enter a few times and you are done. Most Linux CDROM’s are now bootable.
  • HardDrive - generally the most painful; windows kacks up filenames/etc., but installing from an ext2 partition is usually painless (a catch-22 for new users however)

So you've got a fresh install of Linux (RedHat, Debian, whatever; please, please, DO NOT install really old versions and try to upgrade them, it's a nightmare), but chances are there is a lot of extra software installed, and packages you might want to upgrade, or things you had better upgrade if you don't want the system compromised in the first 15 seconds of uptime (in the case of BIND/Sendmail/etc.). Keeping a local copy of the updates directory for your distribution is a good idea (there is a list of errata for distributions at the end of this document), and making it available via nfs/ftp or burning it to CD is generally the quickest way to make it available. There are also other items you might want to upgrade; for instance I use a chroot'ed, non-root version of Bind 8.1.2, available on the contrib server (ftp://contrib.redhat.com/), instead of the stock, non-chrooted, run-as-root Bind 8.1.2 that ships with RedHat Linux. You will also want to remove any software you are not using, and/or replace it with more secure versions (such as replacing rsh with ssh).

How to determine what to secure and how to secure it

Are you protecting data (proprietary, confidential or otherwise)? Are you trying to keep certain services up (your mail server, www server, etc.)? Do you simply want to protect the physical hardware from damage? What are you protecting it against? Malicious damage (8 Sun Enterprise 10000's), deletion (survey data, your mom's recipe collection), changes (a hospital with medical records, a bank), exposure (confidential internal communications concerning the lawsuit, plans to sell cocaine to unwed mothers), and so on. What are the chances of a "bad" event happening? Network probes (happens to me daily), physical intrusion (hasn't happened to me yet), social engineering ("Hi, this is Bob from IT, I need your password so we can reset it...").

You need to list out the resources (servers, services, data and other components) that contain data, provide services, make up your company infrastructure, and so on. The following is a short list:
  • Physical server machines
  • Mail server and services
  • DNS server and services
  • WWW server and services
  • File server and services
  • Internal company data such as accounting records and HR data
  • Your network infrastructure (cabling, hubs, switches, routers, etc.)
  • Your phone system (PBX, voicemail, etc.)
You then need to figure out what you want to protect it against:
  • Physical damage (smoke, water, food, etc.)
  • Deletion / modification of data (accounting records, defacement of your www site, etc.)
  • Exposure of data (accounting data, etc.)
  • Continuance of services (keep the email/www/file server up and running)
  • Prevent others from using your services illegally/improperly (email spamming, etc.)
Finally what is the likelihood of an event occurring?
  • Network scans – daily is a safe bet
  • Social engineering – varies, usually the most vulnerable people tend to be the ones targeted
  • Physical intrusion – depends, typically rare, but a hostile employee with a pair of wire cutters could do a lot of damage in a telecom closet
  • Employees selling your data to competitors – it happens
  • Competitor hiring skilled people to actively penetrate your network – no-one ever talks about this one but it also happens
Once you have come up with a list of your resources and what needs to be done, you can start implementing security. Some techniques (physical security for servers, etc.) pretty much go without saying; in this industry there is a baseline of security typically implemented (passwording accounts, etc.). The vast majority of security problems are human-generated, and most problems I have seen are due to a lack of education/communication between people. There is no technical 'silver bullet'; even the best software needs to be installed, configured and maintained by people.

Now for the stick. A short list of possible results from a security incident:
  • Loss of data
  • Direct loss of revenue (www sales, file server is down, etc)
  • Indirect loss of revenue (email support goes, customers vow never to buy from you again)
  • Cost of staff time to respond
  • Lost productivity of IT staff and workers dependent on IT infrastructure
  • Legal Liability (medical records, account records of clients, etc.)
  • Loss of customer confidence
  • Media coverage of the event