Friday, May 25, 2007

Administrative tools/ Access

Telnet

Telnet is by far the oldest and best-known remote access tool; virtually every Unix ships with it, and even systems such as NT support it. Telnet is really only useful if you can administer the system from a command prompt (something NT isn’t so great at), which makes it perfect for Unix. Telnet is incredibly insecure: passwords, usernames and session data all fly around as plain text, and it is a favourite target for sniffers. Telnet comes with all Linux distributions. You should never use stock telnet to remotely administer a system.

SSL Telnet

SSL Telnet is telnet with the addition of SSL encryption, which makes it far more secure. Using X.509 certificates (also referred to as personal certificates) you can easily administer remote systems. Unlike systems such as SSH, SSL Telnet is completely GNU and free for all use. You can get the SSL Telnet server and client from:
ftp://ftp.replay.com/.

SSH

SSH was originally free but is now under a commercial license; it does, however, have many features that make it worthwhile. It supports several forms of authentication (password, rhosts-based, RSA keys), allows you to redirect ports, and lets you easily configure which users are allowed to log in using it. SSH is available from: ftp://ftp.replay.com/. If you are going to use it commercially, or want the latest version, you should head over to:
http://www.ssh.fi/.
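Port redirection is one of the handier features; as a quick illustration (the host names here are made up, but the -L syntax is standard ssh), the following forwards connections to local port 8110 over the encrypted channel to port 110 on the mail host:

ssh -L 8110:mailhost:110 user@mailhost (then point your POP client at localhost port 8110)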

LSH

LSH is a free, GNU-licensed implementation of the SSH protocol, and it is starting to look like the (commercially speaking) alternative to SSH, which is no longer free. You can download it from:
http://www.net.lut.ac.uk/psst/; please note it is still under development.

REXEC

REXEC is one of the older remote UNIX utilities; it allows you to execute commands on a remote system, but it is seriously flawed in that it has no real security model. Security is achieved via the use of "rhosts" files, which specify which hosts and users may run commands; this, however, is prone to spoofing and other forms of exploitation. You should never use stock REXEC to remotely administer a system.

Slush

Slush is based on OpenSSL and currently supports X.509 certificates, which for a large organization is a much better (and saner) bet than trying to remember several dozen passwords on various servers. Slush is GPL, but not finished yet (it implements most of the functionality required to be useful, but has limits). On the other hand it is based completely on open source software, making the possibility of backdoors and the like remote. Ultimately it could replace SSH with something much nicer. You can get it from: http://violet.ibs.com.au/slush/.

NSH

NSH is a commercial product with all the bells and whistles (and I do mean all). It has built-in support for encryption, so it is relatively safe to use (I cannot really verify this as it isn’t open source). Ease of use is high: you cd //computername and that ‘logs’ you into that computer; you can then easily copy/modify files, run ps and get the process listing for that computer, and so on. NSH also has a Perl module available, making scripting of commands pretty simple, and it is ideal for administering many similar systems (such as workstations). In addition, NSH is available on multiple platforms (Linux, BSD, Irix, etc.). NSH is available from: http://www.networkshell.com/, and 30-day evaluation versions are easily downloaded.

Fsh

Fsh stands for "Fast remote command execution" and is similar in concept to rsh/rcp. It avoids the expense of constantly creating encrypted sessions by bringing up an encrypted tunnel using ssh or lsh and running all the commands over it. You can get it from: http://www.lysator.liu.se/fsh/.

secsh

secsh (Secure Shell) provides another layer of login security: once you have logged in via ssh or SSL telnet you are prompted for another password, and if you get it wrong secsh kills off the login attempt. You can get secsh at: http://www.leenux.com/scripts/.

The Linux kernel

Linux (GNU/Linux according to Stallman, if you’re referring to a complete Linux distribution) is actually just the kernel of the operating system. The kernel is the core of the system; it handles access to the hard drives, the security mechanisms, networking, and pretty much everything else. It had better be secure or you are screwed.

Between problems like the Pentium F00F bug and inherent weaknesses in the TCP/IP protocol, the Linux kernel has its work cut out for it. Kernel versions are labeled X.Y.Z: Z is the minor revision number, Y defines whether the kernel is a test (odd number) or production (even number) series, and X defines the major revision (we have had 0, 1 and 2 so far). I would highly recommend running a 2.2.x kernel, which as of May 1999 means 2.2.9. The 2.2.x series has major improvements over the 2.0.x series. Using the 2.2.x kernels also gives you access to newer features such as ipchains (instead of ipfwadm) and other advanced security features.
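If you are not sure which kernel a machine is currently running, uname will tell you (a standard command, shown here simply as a quick check):

uname -r (prints the running kernel version, e.g. 2.2.9)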

Upgrading and Compiling the Kernel

Upgrading the kernel consists of getting a new kernel and modules, editing /etc/lilo.conf, and rerunning lilo to write a new MBR. The kernel will typically be placed into /boot, and the modules in /lib/modules/kernel.version.number/.

Getting a new kernel and modules can be accomplished in two ways: by downloading the appropriate kernel package and installing it, or by downloading the source code from ftp://ftp.kernel.org/ (please use a mirror site) and compiling it.

Compiling a kernel is straightforward:

cd /usr/src

There should be a symlink called "linux" pointing to the directory containing the current kernel source; remove it if there is one (if there isn’t, no problem). If "linux" is instead a real directory, you might want to ‘mv’ it to /usr/src/linux-kernel.version.number and create a link pointing /usr/src/linux at it.

Unpack the source code using tar and gzip as appropriate so that you now have a /usr/src/linux with about 50 megabytes of source code in it. The next step is to create the kernel configuration (/usr/src/linux/.config); this can be done with "make config", "make menuconfig" or "make xconfig". My preferred method is "make menuconfig" (for this you will need the ncurses and ncurses-devel libraries). This is arguably the hardest step: there are hundreds of options, which can be categorized into two main areas, hardware support and service support. For hardware support, make a list of the hardware this kernel will be running on (i.e. P166, Adaptec 2940 SCSI controller, NE2000 ethernet card, etc.) and turn on the appropriate options. For service support you will need to figure out which filesystems (fat, ext2, minix, etc.) you plan to use, and the same for networking (firewalling, etc.).
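As a rough sketch, assuming /usr/src/linux is currently a plain directory holding the old 2.2.5 source (if it is just a symlink, rm it instead) and the new 2.2.9 tarball has been downloaded to /usr/src, the whole sequence might look like:

cd /usr/src
mv linux linux-2.2.5 (move the old tree out of the way)
tar xzf linux-2.2.9.tar.gz (unpacks into /usr/src/linux)
mv linux linux-2.2.9
ln -s linux-2.2.9 linux
cd /usr/src/linux
make menuconfig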

Once you have configured the kernel you need to compile it. The following commands build the dependencies (ensuring that libraries and so forth get built in the right order), clean out any leftovers from previous compiles, build the kernel, build the modules, and install the modules.

make dep (makes dependencies)

make clean (cleans out previous cruft)

make bzImage (make zImage pukes if the kernel is too big, and 2.2.x kernels tend to be pretty big)

make modules (creates all the modules you specified)

make modules_install (installs the modules to /lib/modules/kernel.version.number/)

You then need to copy /usr/src/linux/arch/i386/boot/bzImage (or zImage) to /boot/vmlinuz-kernel.version.number. Then edit /etc/lilo.conf, adding an entry for the new kernel; setting it as the default image (using the default=X option, otherwise LILO boots the first kernel listed) while keeping the old entry is the safest way, since if the new kernel fails you can reboot and go back to the previous working kernel. Run lilo, and reboot.
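For example, after building 2.2.9 (this is only a sketch; the root= device is taken from the lilo.conf example later in this document and will likely differ on your system):

cp /usr/src/linux/arch/i386/boot/bzImage /boot/vmlinuz-2.2.9
cp /usr/src/linux/System.map /boot/System.map-2.2.9 (optional, keeps tools like klogd happy)

Then set default=linux-2.2.9 near the top of /etc/lilo.conf (it is a global option), keep the old image entry as a fallback, and add:

image=/boot/vmlinuz-2.2.9
label=linux-2.2.9
root=/dev/hda1
read-only

Finally run lilo (check its output for errors) and reboot.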

Kernel Versions

Currently we are in a stable kernel release series, 2.2.x. I would highly recommend running the latest stable kernel (2.2.9 as of May 1999), as there are several nasty security problems (network attacks and denial of service attacks) that affect all kernels up to 2.0.35 (2.0.36 is patched) and the 2.1.x test kernels up to 2.2.3. Upgrading from the 2.0.x series of stable kernels to the 2.2.x series is relatively painless if you are careful and follow instructions (there are some minor issues but for most users it will go smoothly). Several software packages must be updated: libraries, ppp, modutils and others (they are covered in the kernel docs / rpm dependencies / etc.). Additionally, keep the old working kernel and add an entry for it in lilo.conf as "linuxold" or something similar, and you will be able to recover easily in the event 2.2.x doesn't work out as expected. Don't expect the 2.2.x series to be bug free; 2.2.9 will be found to contain flaws and will be obsoleted, like every other piece of software in the world.

Physical / Boot security

Physical Access

This area is covered in depth in the "Practical Unix and Internet Security" book, but I'll give a brief overview of the basics. Someone turns your main accounting server off, turns it back on, boots it from a specially made floppy disk and transfers payroll.db to a foreign ftp site. Unless your accounting server is locked up what is to prevent a malicious user (or the cleaning staff of your building, the delivery guy, etc.) from doing just that? I have heard horror stories of cleaning staff unplugging servers so that they could plug their cleaning equipment in. I have seen people accidentally knock the little reset switch on power bars and reboot their servers (not that I have ever done that). It just makes sense to lock your servers up in a secure room (or even a closet). It is also a very good idea to put the servers on a raised surface to prevent damage in the event of flooding (be it a hole in the roof or a super gulp slurpee).

The Computer BIOS

The computer's BIOS is one of the lowest-level components; it controls how the computer boots and a variety of other things. Older BIOSes are infamous for having universal passwords, so make sure your BIOS is recent and does not contain such a backdoor. The BIOS can be used to lock the boot sequence of a computer to C: only, i.e. the first hard drive; this is a very good idea. You should also use the BIOS to disable the floppy drive (typically a server will not need to use it), which prevents users from copying data off of the machine onto floppy disks.

You may also wish to disable the serial ports in users' machines so that they cannot attach modems; most modern computers use PS/2 keyboards and mice, so there is very little reason for a serial port in any case (plus they eat up IRQs). The same goes for the parallel port: allowing users to print in a fashion that bypasses your network, or giving them the chance to attach an external CD-ROM burner or hard drive, can decrease security greatly. As you can see, this is an extension of the policy of least privilege and can decrease risks considerably, as well as making network maintenance easier (fewer IRQ conflicts, etc.).

LILO

Once the computer has decided to boot from C:, LILO (or whichever bootloader you use) takes over. Most bootloaders allow for some flexibility in how you boot the system, LILO especially so, but this is a two-edged sword. You can pass LILO arguments at boot time; the most damaging (from a security point of view) is "imagename single", which boots Linux into single-user mode and, by default in most distributions, dumps you at a root prompt in a command shell with no prompting for passwords or other pesky security mechanisms.
Several techniques exist to minimize this risk.

delay=X
This controls how long (in tenths of seconds) LILO waits for user input before booting the default selection. One of the requirements of C2 security is that this interval be set to 0 (obviously a dual-boot machine blows most security out of the water). It is a good idea to set this to 0 unless the system dual boots something else.

prompt
Forces the user to enter something; LILO will not boot the system automatically. This could be useful on servers as a way of disabling reboots without a human attendant, but typically if the hacker has the ability to reboot the system they could rewrite the MBR with new boot options anyway.
If you add a timeout option, however, the system will continue booting after the timeout is reached.

restricted
Requires a password if boot-time options (such as "linux single") are passed to the boot loader. Make sure you use this on each image that is password protected (otherwise the server will need a password just to boot, which is only fine if you never plan to reboot it remotely or unattended).

password=XXXXX
Requires the user to input a password; used in conjunction with restricted. Also make sure lilo.conf is no longer world readable, or any user will be able to read the password.
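For example (one simple way to do this):

chmod 600 /etc/lilo.conf (only root can read and write the file)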

Here is an example of lilo.conf from one of my servers (the password has, of course, been changed).

boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=100
default=linux
image=/boot/vmlinuz-2.2.5
label=linux
root=/dev/hda1
read-only
restricted
password=some_password

This boots the system using the /boot/vmlinuz-2.2.5 kernel, with LILO installed on the MBR of the first IDE hard drive (boot=/dev/hda). The prompt keyword would normally stop unattended rebooting, but the timeout lets the default "linux" image boot on its own after 10 seconds; and because restricted and password are set in the image, LILO boots "linux" with no problem and only asks for a password if you pass options such as "linux single". So if you want to go into "linux single" you have 10 seconds to type it in, at which point you would be prompted for the password ("some_password"). Combine this with a BIOS set to boot only from C: and password protected, and you have a pretty secure system.

General concepts, servers versus workstations

There are many issues that affect the actual security setup of a computer. How secure does it need to be? Is the machine networked? Will there be interactive user accounts (telnet/ssh)? Will users be using it as a workstation or is it a server? The last one has a big impact, since "workstations" and "servers" have traditionally been very different beasts, although the line is blurring with the introduction of very powerful and cheap PCs, as well as operating systems that take advantage of them. The main difference in today's world between computers is usually not the hardware, or even the OS (Linux is Linux, NT Server and NT Workstation are close family, etc.); it is in what software packages are loaded (apache, X, etc.) and how users access the machine (interactively, at the console, and so forth). Some general rules that will save you a lot of grief in the long run:
  1. Keep users off the servers. That is to say: do not give them interactive login shells unless you absolutely must.
  2. Lock down the workstations; assume users will try to 'fix' things (heck, they might even be hostile: temp workers, etc.).
  3. Use encryption wherever possible to keep plain text passwords, credit card numbers and other sensitive information from lying around.
  4. Regularly scan the network for open ports, installed software, and anything else that shouldn't be there, and compare the results against previous scans.

Remember: security is not a solution, it is a way of life.

Generally speaking, workstations are used by people who don't really care about the underlying technology; they just want to get their work done and retrieve their email in a timely fashion. There are however many users that will have the ability to modify their workstation, for better or worse (install packet sniffers, warez ftp sites, www servers, irc bots, etc.). On top of this, most users have physical access to their workstations, meaning you really have to lock them down if you want to do it right.

  1. Use BIOS passwords to lock users out of the BIOS (they should never need to be in there; also remember that older BIOSes have universal passwords).
  2. Set the machine to boot from the appropriate harddrive only.
  3. Password the LILO prompt.
  4. Do not give the user root access; use sudo to tailor access to privileged commands as needed (see the example after this list).
  5. Use firewalling, so even if they do set up services they won’t be accessible to the world.
  6. Regularly scan the process table, open ports, installed software, and so on for changes.
  7. Have a written security policy that users can understand, and enforce it.
  8. Remove all sharp objects (compilers, etc.) from the system unless they are needed.
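As a minimal sketch of point 4, an /etc/sudoers entry (edit it with visudo) that lets a hypothetical user "jdoe" shut down or reboot a workstation named "wks1", and nothing else, might look like this (the user name, host name and exact paths are made up for illustration):

jdoe wks1 = /sbin/shutdown, /sbin/reboot

The user would then run, for example, sudo /sbin/shutdown -r now, entering their own password rather than root's.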

Remember: security in depth.

Properly set up, a Linux workstation is almost user-proof (nothing is 100% secure), and generally a lot more stable than a comparable Wintel machine. With the added joy of remote administration (SSH/Telnet/NSH), you can keep your users happy and productive.

Servers are a different ball of wax altogether, and generally more important than workstations (if one workstation dies, one user is affected; if the email/www/ftp/etc. server dies, your boss phones up in a bad mood). Unless there is a strong need, keep the number of users with interactive shells (bash, pine, lynx based, whatever) to a bare minimum. Segment services (have a mail server, a www server, and so on) to minimize single points of failure. Generally speaking, a properly set up server will run and not need much maintenance (I have one email server at a client location that has been in use for 2 years with about 10 hours of maintenance in total). Any upgrades should be planned carefully and executed on a test system first. Some important points to remember with servers:

  1. Restrict physical access to servers.
  2. Apply the policy of least privilege; users can break fewer things this way.
  3. MAKE BACKUPS!
  4. Regularly check the servers for changes (ports, software, etc.); automated tools are great for this (see the sketch after this list).
  5. Software changes should be carefully planned/tested, as they can have adverse effects (kernel 2.2.x no longer uses ipfwadm, for example; wouldn't it be embarrassing if you forgot to install ipchains?).
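A crude sketch of point 4: snapshot the listening ports and installed packages, then diff against the previous snapshot (netstat and rpm are standard tools; the file names are arbitrary, and you would normally run this from cron and rotate the files):

netstat -an | grep LISTEN > /root/ports.today
rpm -qa | sort > /root/packages.today
diff /root/ports.yesterday /root/ports.today (any output means something changed)
diff /root/packages.yesterday /root/packages.today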

Minimization of privileges means giving users (and administrators, for that matter) the minimum amount of access required to do their job. Giving a user "root" access to their workstation would make sense if all users were Linux savvy and trustworthy, but they generally aren't (on both counts). And even if they were, it would be a bad idea, as chances are they would install some piece of software that is broken or insecure. If all a user needs is the ability to shutdown/reboot the workstation, then that is the amount of access they should be granted. You certainly wouldn't leave accounting files on a server with world readable permissions just so that the accountants can view them; this concept extends across the network as a whole. Limiting access will also limit damage in the event of an account penetration (have you ever read the post-it notes people put on their monitors?).
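For example, rather than making an accounting file world readable, you could give an "accounting" group read access and lock everyone else out (the group name and path here are hypothetical):

chgrp accounting /data/payroll.db
chmod 640 /data/payroll.db (owner read/write, group read, everyone else nothing)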

Safe installation of Linux

A proper installation of Linux is the first step to a stable, secure system. There are various tips and tricks to make the install go easier, as well as some issues that are best handled during the install (such as disk layout).

Choosing your install media

This is the #1 issue that will affect the speed of the install and, to a large degree, its safety. My personal favorite is FTP installs, since popping a network card into a machine temporarily (assuming it doesn't have one already) is quick and painless, and going at 1+ megabyte/sec makes for quick package installs. Installing from CD-ROM is generally the easiest: most are bootable, Linux finds the CD and off you go, no pointing to directories or worrying about filename case (as with an HD install). It is also original Linux media, so you can be relatively sure it is safe (assuming it came from a reputable source); if you are paranoid, however, feel free to check the signatures on the files.


  • FTP - quick; requires a network card and an ftp server (a Windows box running something like warftpd will work as well).
  • HTTP - also fast, and somewhat safer than running a public FTP server for installs.
  • Samba - quick; a good way if you have a Windows machine (share the CD-ROM out).
  • NFS - not as quick, but since NFS is implemented in most existing UNIX networks (and NT now has a free NFS server from MS) it's mostly painless. NFS is the only network install supported by RedHat’s kickstart.
  • CDROM - if you have a fast CD-ROM drive, your best bet; pop the CD and boot disk in, hit enter a few times and you are done. Most Linux CD-ROMs are now bootable.
  • HardDrive - generally the most painful; Windows mangles filenames and the like, though installing from an ext2 partition is usually painless (a catch-22 for new users, however).

So you've got a fresh install of Linux (RedHat, Debian, whatever; please, please, DO NOT install really old versions and try to upgrade them, it's a nightmare), but chances are there is a lot of extra software installed, and packages you might want to upgrade, or things you had better upgrade if you don't want the system compromised in the first 15 seconds of uptime (in the case of BIND/Sendmail/etc.). Keeping a local copy of the updates directory for your distributions is a good idea (there is a list of errata for distributions at the end of this document), and making it available via nfs/ftp or burning it to CD is generally the quickest way to make it available. There are also other items you might want to upgrade; for instance I use a chroot'ed, non-root version of Bind 8.1.2, available on the contrib server (ftp://contrib.redhat.com/), instead of the stock, non-chrooted, run-as-root Bind 8.1.2 that ships with RedHat Linux. You will also want to remove any software you are not using, and/or replace it with more secure versions (such as replacing rsh with ssh).
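On an RPM-based system, a quick way to see what is installed and prune it (the package name here is just an example):

rpm -qa | less (lists every installed package)
rpm -e rsh (removes a package you don't need; rpm will warn you about anything that depends on it)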

How to determine what to secure and how to secure it

Are you protecting data (proprietary, confidential or otherwise)? Are you trying to keep certain services up (your mail server, www server, etc.)? Do you simply want to protect the physical hardware from damage? And what are you protecting it against? Malicious damage (8 Sun Enterprise 10000's), deletion (survey data, your mom's recipe collection), changes (a hospital with medical records, a bank), exposure (confidential internal communications concerning the lawsuit, plans to sell cocaine to unwed mothers), and so on. What are the chances of a "bad" event happening? Network probes (happens to me daily), physical intrusion (hasn’t happened to me yet), social engineering ("Hi, this is Bob from IT, I need your password so we can reset it…").

You need to list out the resources (servers, services, data and other components) that contain data, provide services, make up your company infrastructure, and so on. The following is a short list:
  • Physical server machines
  • Mail server and services
  • DNS server and services
  • WWW server and services
  • File server and services
  • Internal company data such as accounting records and HR data
  • Your network infrastructure (cabling, hubs, switches, routers, etc.)
  • Your phone system (PBX, voicemail, etc.)
You then need to figure out what you want to protect it against:
  • Physical damage (smoke, water, food, etc.)
  • Deletion / modification of data (accounting records, defacement of your www site, etc.)
  • Exposure of data (accounting data, etc.)
  • Continuance of services (keep the email/www/file server up and running)
  • Prevent others from using your services illegally/improperly (email spamming, etc.)
Finally what is the likelihood of an event occurring?
  • Network scans – daily is a safe bet
  • Social engineering – varies, usually the most vulnerable people tend to be the ones targeted
  • Physical intrusion – depends, typically rare, but a hostile employee with a pair of wire cutters could do a lot of damage in a telecom closet
  • Employees selling your data to competitors – it happens
  • Competitor hiring skilled people to actively penetrate your network – no-one ever talks about this one but it also happens
Once you have come up with a list of your resources and what needs to be done, you can start implementing security. Some techniques (physical security for servers, etc.) pretty much go without saying; in this industry there is a baseline of security that is typically implemented (passwording accounts, etc.). The vast majority of security problems are human generated, and most of the problems I have seen are due to a lack of education/communication between people. There is no technical ‘silver bullet’; even the best software needs to be installed, configured and maintained by people.

Now for the stick. A short list of possible results from a security incident:
  • Loss of data
  • Direct loss of revenue (www sales, the file server is down, etc.)
  • Indirect loss of revenue (email support goes down, customers vow never to buy from you again)
  • Cost of staff time to respond
  • Lost productivity of IT staff and of workers dependent on the IT infrastructure
  • Legal Liability (medical records, account records of clients, etc.)
  • Loss of customer confidence
  • Media coverage of the event