Monitoring files and directories with Monitis

As a sysadmin, you have learned — typically, from painful experience — that things can go horribly wrong when files or directories on your systems grow beyond your expectations. This is an area where monitoring can help. In this article, we’ll take a look at creating file and directory monitors, and using Monitis to track these monitors and alert you when the unexpected happens.

Getting started

In previous articles, Custom Monitors in Monitis with Python and Start Exploring Monitis API with cURL, we’ve discussed setting up custom monitors. If you haven’t seen these, take a look now, and make sure that you’ve got your Monitis API Key and Secret Key; all of the examples below assume that you have those available. All of the code used in this article is hosted in the Monitis Exchange GitHub repository. If you’d like to follow along, clone a copy of that repository now.

 $ git clone https://github.com/monitisexchange/Python-Monitis-Scripts.git
 Cloning into Python-Monitis-Scripts...
 remote: Counting objects: 12, done.
 remote: Compressing objects: 100% (9/9), done.
 remote: Total 12 (delta 5), reused 10 (delta 3)
 Unpacking objects: 100% (12/12), done.

Now that you’ve got the code in front of you, let’s get started.

File counts and total size

The most important feature of the scripts used in this example is getting two key pieces of information for a directory: the number of files in the directory, and their combined total size. For convenience, there are also options to scan directories recursively to an arbitrary depth, and to monitor single files instead of directories.

The logic for scanning the files is relatively simple, and is illustrated by the following Python code, found within fileStats() in monitis_filemonitor.py.

 # Excerpt from fileStats(); m holds the stat mode of dirName, and
 # totalSize and totalCount accumulate the results.
 totalCount += 1
 if S_ISDIR(m) and depth > 0:
     for f in os.listdir(dirName):
         size, count = fileStats(os.path.join(dirName, f), depth - 1)
         totalSize += size
         totalCount += count
 elif S_ISREG(m):
     totalSize += os.stat(dirName)[ST_SIZE]

The call to fileStats includes an initial path and an initial depth. If that path identifies a directory, then it is traversed recursively in a depth-first search (up to the maximum depth), accumulating a count of the files encountered and a sum of their sizes along the way. Symlinks are ignored, so there won’t be any loops when traversing the directory structure. If the path passed in identifies a file, then the depth is ignored, and the file’s size (and a count of 1) are returned to the caller.
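
For example, a call inside the script would look something like this (illustrative only; fileStats() is defined in monitis_filemonitor.py, so this isn’t a standalone snippet):

 # Illustrative call: scan /var/log up to 4 levels deep.
 # fileStats() returns the accumulated size in bytes and the entry count.
 size, count = fileStats('/var/log', 4)
 print("%d bytes in %d files" % (size, count))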

Create the custom monitors

Before any monitoring can happen, you need to create the custom monitors, using the Monitis API. To do this, pick a couple of locations in your file system to monitor, and then use monitis_create_monitor.py like this:

 $ export APIKEY=
 $ export SECRETKEY=
 $ ./monitis_create_monitor.py -a $APIKEY -s $SECRETKEY -r "size:Size:bytes:2;count:Count:files:2" -m fileMonitor -n "temp dir"
 {"status":"ok","data":864}
 $ ./monitis_create_monitor.py -a $APIKEY -s $SECRETKEY -r "size:Size:bytes:2;count:Count:files:2" -m fileMonitor -n "log dir"
 {"status":"ok","data":865}

Keep track of the monitor IDs returned in the “data” field of each response; we’ll use them later to set up our configuration file. Note that while we’ve created monitors for a couple of directories (a temp directory and a log directory), we haven’t actually specified the paths yet. We’ll do that in the next step.
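
If you’re scripting the setup, one way to capture those IDs is to parse the JSON responses. Here’s a minimal sketch, assuming the {"status":"ok","data":<id>} output shown above and the same monitis_create_monitor.py invocation we just used:

 import json
 import os
 import subprocess

 # Sketch: run the create-monitor script and capture the monitor ID it returns.
 # Assumes the {"status":"ok","data":<id>} response format shown above.
 out = subprocess.check_output(
     ["./monitis_create_monitor.py",
      "-a", os.environ["APIKEY"], "-s", os.environ["SECRETKEY"],
      "-r", "size:Size:bytes:2;count:Count:files:2",
      "-m", "fileMonitor", "-n", "temp dir"])
 monitor_id = json.loads(out)["data"]
 print(monitor_id)  # e.g. 864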

Set up the configuration file

In this step, we’ll create the config file that will be used by our monitoring script. The configuration file contains the monitor ID, file system path, and recursion depth for each location in the filesystem that we want to monitor. It’s a simple CSV file, so it’s easy to create by hand. Here’s an example, using the two monitor IDs we created in the previous step. Create a file called “monitis_filemonitor.conf” with the following two lines:

864,"/tmp",2
865,"/var/log",4

That configuration says that the monitoring script should monitor two directories. The first directory is /tmp, and it should be scanned recursively with a maximum depth of 2, with the resulting counts and sizes reported to monitor ID 864. The second directory is /var/log, and it should be scanned recursively to a max depth of 4, with the results reported to monitor ID 865.
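
For reference, the format is simple enough that Python’s csv module reads it directly. This is just a sketch of how such a file can be parsed; the actual parsing in monitis_filemonitor.py may differ.

 import csv

 # Sketch: read the monitor ID, path, and recursion depth from each config line.
 with open('monitis_filemonitor.conf') as conf:
     for monitor_id, path, depth in csv.reader(conf):
         print(monitor_id, path, int(depth))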

Running the monitor

Running the monitor is simply a matter of passing in your API and secret keys, along with the path to the configuration file we just created. Note that depending on the permissions on the filesystems you’re monitoring, you may need to run the monitoring script with sudo, or directly as root.

 $ sudo ./monitis_filemonitor.py -a $APIKEY -s $SECRETKEY -c monitis_filemonitor.conf
 {"status":"ok"}
 {"status":"ok"}

The {"status":"ok"} lines indicate that two sets of results were submitted successfully, just as we expected. Now, we should be able to see these results in the Monitis web interface. To make it more interesting, let’s look at some data over time. The monitor can be run from cron, or — as I did for this example — in a while loop on the command line. The command above can be run in a loop, like this:

 $ while true
 > do
 > sleep 30
 > sudo ./monitis_filemonitor.py -a $APIKEY -s $SECRETKEY -c monitis_filemonitor.conf
 > done

Every 30 seconds, the monitor script will run, and send the current size and count for the custom monitors configured in monitis_filemonitor.conf. While it’s running, we’ll run another loop in another shell to make some changes to the /tmp directory.

 $ while true
 > do
 > sleep 5
 > echo "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" > `mktemp /tmp/test.XXXXXXX`
 > done

This creates an additional file in /tmp every 5 seconds. Don’t forget to clean them up when you’re done. We should be able to see a graph of the data in the Monitis web interface. Using the Monitors menu, add the “temp dir” custom monitor to a window, and switch to the graph view.

[Screenshot: Monitis Files & Directories Monitor]

There it is. A graph of our file system, changing over time. With this done, let’s go one step further and add a notification when the size of a directory gets too large.

Set up notifications

With the data being collected on a regular basis, getting alerts when a metric goes out of the expected range is easy. In this example, we’ll have Monitis send email to a Gmail account. Just create a contact group, then add a notification rule on the custom monitor that sends to that group, with the parameter and threshold you want.

[Screenshot: Monitis Notification Rules]

Save the notification configuration, and you’re done. Now, when the value passes the threshold, email will be sent to the contact group. For our example, the email would look like this:

[Screenshot: Monitis Alert E-mail]

We used a simple email address for this example, but the messages are brief enough that it would be easy to send them to your mobile carrier’s email-to-SMS gateway instead. That way, you’ll get a text message on your phone when anything you’re monitoring goes wrong.

Try it out

If you haven’t already downloaded the example scripts, now is a great time to do so. With the Monitis API and custom monitors, keeping track of almost any aspect of your systems becomes easy. Get the code, modify the configuration file for your own file system, and try it out.
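
As a quick recap, and assuming the scripts sit at the top of the cloned repository, the whole setup (substitute your own keys, monitor ID, and path) boils down to a handful of commands:

 $ git clone https://github.com/monitisexchange/Python-Monitis-Scripts.git
 $ cd Python-Monitis-Scripts
 $ ./monitis_create_monitor.py -a $APIKEY -s $SECRETKEY \
     -r "size:Size:bytes:2;count:Count:files:2" -m fileMonitor -n "my dir"
 $ echo '<monitor id>,"/path/to/dir",2' > monitis_filemonitor.conf
 $ sudo ./monitis_filemonitor.py -a $APIKEY -s $SECRETKEY -c monitis_filemonitor.conf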
