Squeezing out the memory

I have a couple of rented VPSes: one runs a Minecraft server, and the other is a very basic 128MB RAM box at $10 a year, which I wanted to run Nagios on to monitor a few web pages and game servers. Previously I was running Nagios on the same server as Minecraft, but I’d rather split the two so that one doesn’t suck up resources the other might need.

I hadn’t had the time to get Nagios migrated over, so I decided to make a start on it the other night. I installed CentOS, set up my users, got my shared key on there, and logged in to start getting packages installed and up to date. The problem I ran into, however, was that YUM/RPM was rather unhappy running with so little memory. Even if I tried to update just a few packages at a time, it still couldn’t allocate the memory it wanted:

Running rpm_check_debug
Running Transaction Test
memory alloc (8 bytes) returned NULL.
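
For what it’s worth, those attempts were nothing exotic, just plain yum pointed at a shortlist of packages, something like the below (the package names here are only examples):

yum -y update glibc bash coreutils
yum -y update openssl openssh

Even transactions that small tripped the allocator.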

I initially contacted the provider of the VPS to ask them to loan me some burst RAM for a few days so that I could get things up and running, but after a few days they just closed my support ticket without an answer (other than the automated system informing me that the issue had been escalated to, what I can only assume was, a black hole somewhere off the side of the galaxy). I was not too impressed by this, but at $10 a year I can hardly complain too much.

Another option I thought I could try was to create a swap file, but unfortunately this was not to be either, as my provider’s OpenVZ installation is configured to not allow swap inside the container.
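
For anyone tempted down the same route, the standard swap-file recipe is below; in this container it falls at the final hurdle, since the host rejects swap in the guest:

# Carve out a 256MB swap file (the size here is arbitrary)
dd if=/dev/zero of=/swapfile bs=1M count=256
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile   # this is the step that fails under the no-swap OpenVZ setup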

So I set out to get things up and running by installing the RPMs separately. I thought this was going to take ages: pulling the package list together, wget-ing everything from a mirror, then installing it all one by one. But first I decided to check whether YUM kept the packages it had downloaded in preparation for the installation (the step at which it was conking out), and lo and behold, there they were in /var/cache/yum, ready to be installed. I decided to do a full yum clean, repair the RPM database, and then run yum update again, to make sure I was getting the latest versions and that there was no residue from my various attempts, before starting on the packages themselves.
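
The clean-up and re-download boiled down to something like this (sketched after the fact, so take it as the general shape rather than a transcript):

yum clean all      # clear out metadata and residue from the failed runs
rpm --rebuilddb    # repair the RPM database after the killed transactions
yum -y update      # re-downloads everything into /var/cache/yum before
                   # conking out at the transaction test
ls /var/cache/yum/*/packages/   # the downloaded RPMs, ready to install by hand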

First off, the base packages. I made a bold attempt at setting them all to install in sequence, but this failed, again, with the memory allocation error, leading me to believe I’d never get round this, as YUM wasn’t the only thing struggling with the memory requirement. It was time to check what I could shut down to free up some memory.

# [07:26:35] 🙂 Kass VPS - Nagios Server - ***ROOT***
# dunalertin:~
pstree -nla
init 4040
 ├─udevd -d
 ├─syslogd -m 0
 ├─sshd
 │   └─sshd
 │       └─sshd
 │           └─bash
 │               └─su
 │                   └─bash
 │                       └─pstree -nla
 ├─xinetd -stayalive -pidfile /var/run/xinetd.pid
 ├─sendmail
 ├─sendmail
 ├─httpd
 │ ├─httpd
 │ ├─httpd
 │ └─httpd
 ├─crond
 ├─saslauthd -m /var/run/saslauthd -a pam -n 2
 └─saslauthd -m /var/run/saslauthd -a pam -n 2

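First off, Apache was running, so that was taken down, then xinetd, crond and sendmail. The shutdowns themselves were just the usual init-script calls, roughly:

service httpd stop
service xinetd stop
service crond stop
service sendmail stop
free -m    # check how much memory actually came back
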
With those down, I tried again, but found it still wasn’t enough. RPM hung, and I had to log in via another session to kill the process. That was what triggered my next thought. I checked top again and realised that most of the remaining memory was taken up by bash and sshd. I couldn’t terminate either outright, since I was logged into the box remotely; doing so would probably drop my connection and mean starting all over again (it’s possible that killing off sshd would leave my current session alive, but I wasn’t up to risking it right then). Here’s where the squeeze came.

My usual method for logging into a box is to log in as a normal user, then su up if I need root permissions. As the tree above shows, this spawns multiple processes before you actually get to run any commands, each taking up between 500k and 4M. So, having stopped the various services, I logged out and, against my usual modus operandi, logged directly back in as root.

# [07:29:09] 🙂 Kass VPS - Nagios Server - ***ROOT***
# dunalertin:~ 
pstree -nla
init 4040
 ├─udevd -d
 ├─sshd
 │ └─sshd 
 │     └─bash
 │         └─pstree -nla
 ├─saslauthd -m /var/run/saslauthd -a pam -n 2
 └─saslauthd -m /var/run/saslauthd -a pam -n 2

This squeezed out enough memory to install the base RPMs, the updates, and the couple of extra packages I needed for building Nagios. After that came the long task of pulling the config files over from the other VPS and adjusting them to the new flavour (some of the paths it wanted to use were different, a few files were missing here and there, and some old services needed removing). For the most part, job done, and monitoring is back on track for the servers I want alerting on. There are a couple of bits and bobs left to touch up, but the main functionality is now there.
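
For the record, the installs themselves were plain rpm runs against the yum cache, along the lines of the below; the directory and package names are illustrative, and it pays to feed rpm small, dependency-ordered batches:

cd /var/cache/yum/base/packages   # one packages/ directory per repo
rpm -Uvh setup-*.rpm              # a package or two at a time keeps each
rpm -Uvh filesystem-*.rpm         # transaction small enough to fit in RAM
rpm -Uvh basesystem-*.rpm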

Now I need to sort out the Minecraft server. Again.
