Thursday, July 10, 2008

Using Mkfifo For Monitoring And Enhanced User Activity Logging

Hey there,

For today's post, if you're not yet comfortable with creating new file descriptors and working with shell redirection, check out our older posts on those subjects (cleverly anchor-linked into this very paragraph). They explain the more basic stuff regarding those topics better than this post will and, if you want them, they'll be there for you to read for as long as I can remain in the hosting company's good graces.

One basic component of any decent security setup should always be logging. At the very least, if you're running a server that needs to be on, and servicing requests, 5 days a week (or 24x7, etc), you're probably logging sar output or iostat, vmstat, etc. You're collecting some sort of metrics so that you'll be able to troubleshoot any problems that arise when your needs outgrow your capacity, or somebody just makes a mistake and runs an infinite recursive fork-and-exec loop as root.
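Just to make that concrete (and this is only a sketch: the log locations and intervals are made up, the */10 syntax assumes a Linux-style cron, and your sysstat install may already schedule sar collection for you), basic metrics collection often amounts to nothing more than a couple of crontab entries:

# hypothetical root crontab entries - adjust paths and frequency to taste
*/10 * * * * /usr/bin/vmstat 1 5 >> /var/log/metrics/vmstat.log 2>&1
*/10 * * * * /usr/bin/iostat -x 1 5 >> /var/log/metrics/iostat.log 2>&1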

Another thing that a lot of companies like to do is keep tabs on their employees, especially as it pertains to their activities on production Unix and Linux servers. This may or may not be for a bad reason. Some companies (depending on their nature) are required to monitor user activity to maintain standards compliance, while others may just do it so that, if user XYZ makes a mistake and crashes the machine, they can figure out who did it and make sure it doesn't happen again (hopefully by explaining why what happened happened and considering it a lesson learned; it depends heavily on the workplace and the situation, of course).

One way to keep an eye on your users is to run kernel process accounting (pacct, which is available for both Linux and Unix) and kernel auditing. Both of these probably provide the most complete picture in the event of a catastrophe, but they come at a cost, because they can exact a heavy toll on your systems even when everything is fine. They can also make it so that things aren't fine any more, or expedite the non-fineness (???) of a situation if anything "bad" happens. Imagine how much harder your kernel and/or memory gets interrogated when there's actually a problem with it. Infinite recursion comes to mind again. The ghosts of those Fibonacci numbers just never go away ;)
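If you do decide to go the process accounting route, here's roughly what switching it on looks like on a Linux box with the acct/psacct package installed (treat the pacct file location as an assumption on my part, since it varies from distribution to distribution):

host # touch /var/account/pacct
host # accton /var/account/pacct
host # lastcomm someuser

lastcomm (and the sa summarizer that ships alongside it) will then report on the commands your users have been running, which is exactly the kind of thoroughness that makes it so heavy.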

In my practice, unless it's specifically requested, I prefer to log via log file. I know it seems primitive, and in a sense it is, but it's very "inexpensive" in terms of performance cost. One tool you can use to help with system auditing is called "mkfifo." It, of itself, doesn't really do anything for you, but it can be a great facilitator (especially if you use your imagination and have the latitude to experiment).

mkfifo is a program that creates a simple named pipe. You can equate it to the "|" that you use in your everyday command lines, like:

host # cat FILE|grep word

although it works slightly differently. When a named pipe is created via mkfifo (or however else you care to do it), a pipe "file" gets created that remains in place until it is removed (or, in some cases, until your machine reboots, if you forget to remove it). You can create your own named pipe with mkfifo simply, as it takes very few arguments, like so:

host # mkfifo -m 777 /tmp/corncob
host # ls -l /tmp/corncob
prwxrwxrwx 1 user group 0 Jul 9 15:11 /tmp/corncob



That's all it takes to create the named pipe /tmp/corncob. The -m flag, which is used to set the permissions, is not necessary. If you don't include it, a new named pipe gets permissions of 666, minus whatever bits your umask takes away. As another side note, you can also pass the -m flag symbolic permissions, rather than octal, like:

host # mkfifo -m a=rwx /tmp/corncob

to create the exact same thing. You can delete the named pipe just like you delete a file. rm, and it's gone.
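For instance (assuming a umask of 022 here, and using a second, purely hypothetical, pipe so we don't disturb our friend corncob), the default permissions and the cleanup look like this:

host # umask
0022
host # mkfifo /tmp/corncob2
host # ls -l /tmp/corncob2
prw-r--r-- 1 user group 0 Jul 9 15:14 /tmp/corncob2
host # rm /tmp/corncob2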

One thing you should note about named pipes is that they generally (so far as I've seen) only pass one stream of input/output through themselves at a time. That is to say, if you have one process sending input to the named pipe and two processes reading from it, only one of the reading processes will receive output. It should also be noted that, in such a situation, once the process that was receiving output exits, the other process will begin receiving output from the named pipe (if it's still attempting to read from it). Was that a really long sentence or am I just typing fast? ;)

Here's an example of what I mean:

host-term1 # while :;do echo a b c d e >/tmp/corncob;sleep 15;done

host-term2 # tail -f /tmp/corncob
a b c d e
a b c d e
a b c d e
a b c d e
a b c d e
a b c d e
a b c d e
a b c d e

host-term3 # tail -f /tmp/corncob
(no output here yet; the tail in term2 is soaking it all up)

host-term2 # ^C

(and now, back in term3, the tail -f that was sitting there quietly starts receiving the output)
a b c d e
a b c d e
a b c d e
a b c d e


Now, if you combine this freely available named pipe with any number of output capturing mechanisms, you've got yourself a logger (you should probably secure it with permissions so that only the users you trust are able to modify or remove it, while everyone you want to monitor is able to write to it). Of course, you'll need to take care and work out how you want to parse the output coming from the named pipe because, even though it can only be read from by one process at a time, it can be written to by many and (if you have 15 people logged in, all with duplicate STDIN and STDOUT getting pushed through the same named pipe) that could get confusing for you very quickly.
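As a bare-bones sketch of the capturing end (the log file name here is invented, and in real life you'd kick this off from an init or rc script, as a user who can write to your log area, rather than from a stray terminal), something as simple as this will drain the pipe into a regular log file:

host # while :;do cat /tmp/corncob >> /var/log/corncob_session.log;done &

cat blocks on the pipe until a writer shows up, copies whatever comes through, and the loop simply reopens the pipe once the writer goes away.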

My first recommendation would be, of course, to initiate an individual named pipe per user login process. This is so lightweight that generating 60 named pipes isn't going to cost you much more in overhead than generating six.
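Setting those up is a one-liner, too. In this sketch, the /tmp/pipe_username naming and the 622 mode are just my own conventions, and pulling the user list from who only covers whoever happens to be logged in at the moment (you could grab the list from /etc/passwd just as easily):

host # for user in $(who | awk '{print $1}' | sort -u); do mkfifo -m 622 /tmp/pipe_${user}; done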

A simple thing to do is to use script to generate output and redirect it to the named pipe, like this:

host # ksh -ic "script /tmp/corncob"

using the name of the FIFO as the name of the output file for script. This is generally a bit clunky (script dumps its output in chunks and sometimes loses bits), but it does work relatively well. All you need to do is modify that slightly and put it in your users' .profile files, and they'll be writing their entire sessions to /tmp/corncob (or, like I mentioned, each to a different named pipe file: /tmp/username, perhaps?).
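To give you a rough idea of what that modification might look like (and it is only a rough idea: the per-user pipe name follows my made-up convention from above, and something has to be reading the pipe or script will just sit there blocked), a .profile addition along these lines would do it:

# rough .profile sketch; /tmp/pipe_$LOGNAME is assumed to exist and to have
# a reader on the other end, or script will block right here
if [ -p /tmp/pipe_${LOGNAME} ] && [ -z "${UNDER_SCRIPT}" ]; then
    UNDER_SCRIPT=yes; export UNDER_SCRIPT
    script /tmp/pipe_${LOGNAME}
    exit
fi

The UNDER_SCRIPT guard keeps the shell that script spawns from trying to wrap itself in script all over again, and the exit ends the login session once the recorded shell is done, so nobody winds up typing away in the unrecorded parent shell.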

I think you can begin to see the possibilities here (and probably some of the pitfalls - for instance, you can't use "tee", since it won't allow interactivity), but we'll continue on this topic tomorrow and look at some specific ways in which you can use named pipes, in conjunction with STDOUT and STDERR redirection, to do some serious security logging without serious stress :)

Cheers,

Mike