Ferris, how did this car get 250 miles on it while sitting in the garage?

The box I use to host this blog, plus my own instances of Confluence and Jira, is a humble-but-dedicated Linux server running CentOS 5, hosted by Serverbeach. Yesterday, I got an abuse report that a number of other boxes had been receiving automated password scans — originating from my server’s IP address. Uh-oh! Had someone compromised my box?

I opened up an SSH client and logged in to my server as each of the named users. The bash welcome message showed just what I’d expect — last login at some reasonable time, from an IP known to me. UNTIL I logged in as the “nagios” user and discovered that the last login was on December 22, from “ac9ed6e3.ipt.aol.com”. UH-OH. I’ve been PwN3d.

It looks like someone guessed the password for the “nagios” user I created when setting up a server monitor. It probably didn’t occur to me at the time that I was creating a public-facing login, so I used something easy to guess. Crap! What’d they do while they were in there?

I pulled the .bash_history file for that user, which you can see here in its entirety, if you’re interested. Unless the user edited the .bash_history file as a red herring, it looks like they downloaded a password-scanner utility to /tmp/.k (a dot-prefixed directory, so it’s hidden unless you use ls -a), then fired it up to scan the first two octets of my IP range. And then came back periodically to check results using “screen”.
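
As an aside, dot-prefixed names like /tmp/.k hide from a plain ls, which is exactly why attackers favor them; either of these turns them up:

```shell
# A plain `ls /tmp` skips dot-prefixed entries; -a shows everything
ls -a /tmp
# Or hunt specifically for hidden directories at the top of /tmp
find /tmp -maxdepth 1 -type d -name '.*'
```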

Here’s what “ps -u nagios” showed:

USER       PID %CPU %MEM   VSZ  RSS TTY      STAT START   TIME COMMAND
nagios   30769  0.0  0.0  2588  736 ?        S    Oct17   0:10 ntpd
nagios   23380  0.0  0.1  3668 1276 ?        S    Nov03   0:59 ntpd
nagios   16926  0.0  0.1  6100 1036 ?        Ss   Dec22   0:00 SCREEN
nagios   16927  0.0  0.1  5404 1396 pts/3    Ss+  Dec22   0:00 /bin/bash

The first two processes, 30769 and 23380, are, I think, Nagios doing its regular thing. But the other two processes were spawned by the uninvited user — a “SCREEN” session, and a login shell.

I quickly changed the password for the “nagios” user, then killed all the “nagios” user’s processes and deleted everything in /tmp/.k. I ran “sudo rpm -Va” to see if any of my packages had been, you know, sneakily altered, but my expertise runs out there.
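
In command form, the cleanup amounted to something like this (run as root; a sketch of the steps described above, not a literal transcript):

```shell
passwd nagios            # prompts for a new, stronger password
pkill -KILL -u nagios    # terminate every process owned by "nagios"
rm -rf /tmp/.k           # wipe the attacker's hidden toolkit
rpm -Va                  # verify installed files against the RPM database
```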

As a professional, especially as one who depends on others to execute Big Chair Sysadmin tasks, I wouldn’t put a client’s SSH front door out there in the open, where anyone can come knocking. I always request a firewall in front, one that allows SSH logins only from a particular IP (or, even better, only from a private network). Get a VPN connection into the hosting provider’s network, and it’s reasonably secure and still portable. So this is pretty much a case of the cobbler’s children going shoeless – oy!

Serverbeach doesn’t offer a firewall solution, so I’m going to lock down SSH on the box myself. Anyone care to offer an opinion? Do you prefer IP restriction (not all that portable; I’m often on various wireless connections), certificate restriction (spiffier, but more confusing), or some stealthier method like changing the default port?
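
To make the options concrete, here’s the sort of /etc/ssh/sshd_config stanza each choice boils down to — the port number and user name below are placeholders, not my actual setup:

```
# /etc/ssh/sshd_config (CentOS 5) — pick your poison:
Port 2222                     # stealth: move sshd off the default port 22
PasswordAuthentication no     # certificate/key restriction: keys only
AllowUsers alice@203.0.113.5  # IP restriction: this user, from this address only
```

Whichever you choose, restart sshd afterward (“service sshd restart” on CentOS 5) and keep your current session open while you test a fresh login, so a typo can’t lock you out.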

4 thoughts on “Ferris, how did this car get 250 miles on it while sitting in the garage?”

  1. I added directives to /etc/ssh/sshd_config to explicitly allow login _only_ for the two identities that correspond to human users (me, and the login Blogger uses to publish Kate’s blog), and to explicitly deny login to the handful of other IDs.
    For the hell of it, I also set up a public key on the remote server so I can log in without a password, but unless and until I turn “login via password” _off_, that’s obviously not going to improve things any.
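
    For readers following along, those directives look something like this (the user names here are placeholders):

    ```
    # /etc/ssh/sshd_config
    AllowUsers me kate
    DenyUsers nagios apache mysql
    ```

    Strictly speaking, once AllowUsers appears, every account not listed is refused anyway, so the DenyUsers line is belt-and-braces.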

  2. Al says:

    Bad luck; something similar happened to me when I first got my server, again due to a simple password. Luckily the guy who got in seemed to get bored easily and didn’t do anything serious.
    After that I tightened things up, and now I’m a big fan of changing the default SSH port. I was fed up with getting logwatch reports full of failed logins, and changing the port worked a treat for cutting down the junk in the logs.
    Just remember to open the new port in any firewall (I’m running a simple iptables firewall) before you restart the SSH daemon!

  3. Al, I think you’re right. I’ll seal up the front door and put it somewhere less in the line of traffic. I like that solution because it’s always portable (assuming I can remember to use the new port, that is!).
