The Guru College
Auto-blocking abusive hosts with iptables (Part II)
Last time we looked at auto-blocking hosts, it was a theory session. Today, it’s code, configuration and MySQL databases to make blocking SSH scans real. I know you can do rate-limiting within iptables to slow SSH scanners down, but I take a slightly more hard-line approach – if you scan my system, I drop all traffic you send my way for two days. If you scan again within a month of your last block, you get a 30-day ban. Even better – it’s not just a block against SSH traffic, it’s a block against all traffic, to all systems that I manage.
To start with, we’re going to need a pair of perl scripts, a MySQL database, and one or more systems running the iptables or ipfw firewalls. If you’re following along with the HA LAMP stack examples, you already have most of what you need to get started. I’m going to put all the code into a git repository at github.com as I develop the system. The repo is a bit of a mess, and I’m trying to keep it clean, but if you have pointers for a better way to do something, I’m all ears. Also, keep in mind that this is a free-time project for me, which I can only really work on after hours (and after my toddler is asleep).
First, the MySQL database. The first file you need to use is sql/bad_traffic.sql. This has the database table structures you need. Please note that it doesn’t do any GRANT statements, so you’ll need to adjust your permissions once it’s installed. It also doesn’t try to do a CREATE DATABASE bad_traffic; you’ll need that set up in advance as well. When you load it up (mysql -u username -p bad_traffic < bad_traffic.sql), it will create two tables: whitelist_hosts and blocked_hosts. The whitelist is easy: any address, or fragment of an address, will be ignored when looking for bad traffic to add blocks for in the database. It’s NOT used when applying blocks, as there are times when you will want to manually block a host that is otherwise whitelisted. (Let’s say you add “10.” as a whitelist entry, which puts anything in the private 10.0.0.0/8 space on the whitelist. You then realize that a rogue machine on your wireless network is causing problems, and you want to shut it down.) The blocked_hosts table contains a lot more detail: the IP address of the host being blocked, the day and time it was blocked, the day and time the block will expire, the reason the host was blocked, and so on. This is all used when applying blocks, and for determining how long a new block will last based on past traffic.
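To give you a feel for the layout before you grab the repo, here’s a minimal sketch of what those two tables might look like – the column names here are illustrative guesses, and the real definitions live in sql/bad_traffic.sql:

```
-- Illustrative only: the authoritative schema is in sql/bad_traffic.sql.
CREATE TABLE whitelist_hosts (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    address VARCHAR(45) NOT NULL,           -- full IP or leading fragment, e.g. '10.'
    comment VARCHAR(255) DEFAULT NULL
);

CREATE TABLE blocked_hosts (
    id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    ip_address VARCHAR(45) NOT NULL,
    blocked_at DATETIME NOT NULL,           -- when the block was entered
    expires_at DATETIME NOT NULL,           -- when the block should be lifted
    reason     VARCHAR(255) NOT NULL,       -- e.g. 'ssh scan: 14 unknown users'
    INDEX (ip_address),
    INDEX (expires_at)
);
```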
The bin/ssh-scanner script looks through /var/log/secure and /var/log/secure.log, looking for failed password attempts for known and unknown users. If it finds SSH attempts against more than 10 user accounts (a frequent pattern of scanning), or if the root account is attempted more than 10 times, the IP address is considered “hostile”, and a block entry is submitted. If this is the first entry for this IP address in the last 30 days, it’s a 2 day block. If there has been at least one block in the past 30 days, the new block is entered for 30 days. If there are two or more blocks in the last 30 days, the ban is increased to 180 days.
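Stripped of the database plumbing, the logic boils down to something like the sketch below. The regex and the two helper subs are stand-ins, not the real code from the repo:

```
#!/usr/bin/perl
# Rough sketch of the ssh-scanner logic -- illustrative, not the real bin/ssh-scanner.
use strict;
use warnings;

my (%users_seen, %root_attempts);

for my $log ('/var/log/secure', '/var/log/secure.log') {
    next unless -r $log;
    open my $fh, '<', $log or next;
    while (my $line = <$fh>) {
        # e.g. "Failed password for invalid user bob from 192.0.2.10 port 4242 ssh2"
        if ($line =~ /Failed password for (?:invalid user )?(\S+) from (\S+)/) {
            my ($user, $ip) = ($1, $2);
            $users_seen{$ip}{$user}++;
            $root_attempts{$ip}++ if $user eq 'root';
        }
    }
    close $fh;
}

for my $ip (keys %users_seen) {
    my $distinct_users = scalar keys %{ $users_seen{$ip} };
    my $root_tries     = $root_attempts{$ip} || 0;
    next unless $distinct_users > 10 || $root_tries > 10;

    # Escalation from the post: no blocks in the last 30 days -> 2 days,
    # one prior block -> 30 days, two or more -> 180 days.
    my $prior = prior_blocks_last_30_days($ip);
    my $days  = $prior == 0 ? 2 : $prior == 1 ? 30 : 180;
    insert_block($ip, $days, "ssh scan: $distinct_users users, $root_tries root attempts");
}

# Stand-ins for the DBI calls in the real script:
sub prior_blocks_last_30_days { return 0 }    # would SELECT COUNT(*) FROM blocked_hosts ...
sub insert_block { my ($ip, $days, $why) = @_; print "block $ip for $days days: $why\n"; }
```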
The bin/blocker script does the actual blocking. It generates a hash of the current blocks in the INPUT-AUTO iptables chain (or the ipfw rules), and compares them against active blocks in the database. Anything present in the database that is not in the firewall is added, and anything that has expired in the database but is still in the firewall is removed.
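Conceptually it’s just a diff between what the firewall knows and what the database says should currently be blocked. A rough sketch of the iptables side, with a placeholder where the real script talks to MySQL:

```
#!/usr/bin/perl
# Sketch of the blocker idea on the iptables side -- illustrative, not the real bin/blocker.
use strict;
use warnings;

# What is currently blocked in the INPUT-AUTO chain?
my %in_firewall;
for my $line (`iptables -n -L INPUT-AUTO 2>/dev/null`) {
    $in_firewall{$1} = 1 if $line =~ /^DROP\s+\S+\s+--\s+(\S+)/;
}

# What should be blocked right now, according to the database? (placeholder)
my %in_database = map { $_ => 1 } active_blocks_from_db();

# Add anything the database knows about that the firewall doesn't...
for my $ip (grep { !$in_firewall{$_} } keys %in_database) {
    system('iptables', '-A', 'INPUT-AUTO', '-s', $ip, '-j', 'DROP');
}

# ...and pull out anything whose block has expired.
for my $ip (grep { !$in_database{$_} } keys %in_firewall) {
    system('iptables', '-D', 'INPUT-AUTO', '-s', $ip, '-j', 'DROP');
}

# Stand-in for the real DBI query:
sub active_blocks_from_db { return () }    # would SELECT ip_address FROM blocked_hosts WHERE expires_at > NOW()
```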
The final piece of this puzzle is the config file for the system, for now stored in “/etc/cthulhu-manip.cfg”. Usernames, passwords and table names all go here. It’s safely out of the way of the repo, so I’m not going to accidentally commit my throwaway password, and you won’t have to worry about having yours overwritten when I update the sample config file with more/better examples. There’s an installer script in bin/installer that tries to set the file up for you with sane defaults, and can add future fields without damaging your settings. Finally, bin/chronos sets up a crontab file with jobs to run the blocker and scanner at 5 minute intervals.
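For reference, the crontab that bin/chronos drops into place would look roughly like this – the paths are examples, not necessarily what the installer will choose:

```
# Run the scanner and the blocker every 5 minutes (illustrative paths).
*/5 * * * * root /usr/local/cthulhu-manip/bin/ssh-scanner
*/5 * * * * root /usr/local/cthulhu-manip/bin/blocker
```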
Anyway, feel free to model your solution as you like, but this is a handy way to give teeth (and memory) to your snort alerts, among other things. It shouldn’t be too hard to adapt the database manipulator to talk to ipfw on Mac OS X (as long as you have the MySQL DBD bits in the right place). It should also be easy to extend to other types of traffic blocking – simply write a filter for the proper log file looking for malicious activity, and log hits to the database. (I’m working on an HTTP log analyzer at the moment.) Once the block record is in the database, the rest of the system will handle black-hole maintenance of “-j DROP” rules on the rest of the hosts that subscribe to this system. If you develop good ones, I’m happy to take patches, credit you, and keep this going.
DBD::MySQL and Snow Leopard
Getting cthulhu-manip running on Mac OS X means getting the DBD::mysql perl modules installed. Which means, sadly, installing MySQL. The good news is it’s actually easy. Just go to dev.mysql.com and download the Mac OS X Intel 64-bit (x86_64) version, and untar the file as /usr/local/mysql:
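Roughly like so, assuming the tarball landed in ~/Downloads – the exact file name depends on the MySQL version you grabbed:

```
# Example only -- adjust the tarball name to whatever version you downloaded.
cd /usr/local
sudo tar xzf ~/Downloads/mysql-VERSION-osx10.6-x86_64.tar.gz
sudo ln -s mysql-VERSION-osx10.6-x86_64 mysql
```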
Then, go build DBD::mysql with CPAN:
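Something like this should do it; if the build can’t find MySQL on its own, point it at the mysql_config that came with the tarball first:

```
# Make sure the build can find the mysql_config from the tarball you just installed
export PATH=/usr/local/mysql/bin:$PATH
sudo cpan DBD::mysql
```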
It will print a bunch of crap on the screen as it installs, and then you’re all done, and you can use DBD::mysql, which is needed for my auto-blocking scripts to work.
What A Week
I’m more than a little behind, and I know it. In my defense, it’s been a crazy week for me. I’ll not go into the personal stuff – that doesn’t belong here, at least not now. The computer stuff is more than enough.
When applications start crashing on you with an EXEC_BAD_ACCESS (SIGBUS), you start keeping an eye on /var/log/system.log while you get ready for work. When you come back, it’s locked solid. Can’t get to the terminal window to see what’s happened. Damnit. It’s time to do those memory tests, right?
Wrong. Well, it’s probably not a bad idea, but it wasn’t the problem that I’ve been having since last Saturday. My boot drive was slowly failing – not in a way that SMART could detect, not in a way that showed up in logs. Just in that silent killer way. So, I started pulling DIMMs, thinking that was my issue. In the middle of the swaps, the computer failed to boot. Grey screen. No Apple logo. Nothing. Boot from Snow Leopard DVD. Fails. Boot from Leopard DVD – well, it works, but the installer crashes as soon as the GUI loads. Damnit damnit damnit.
I finally got the system to boot from the original install DVDs that came with it – 10.4.7, I think – and realized that one of the internal hard drives was toast. And that I couldn’t boot off my burned Mac OS X DVDs, as my DVD drives are showing their age and aren’t reading burned dual-layer discs properly. Hunting around, I finally find a 10.5 DVD that came from Apple – not on DVD+R/DL media. It boots up just fine, and I load an old hard drive into the system to restore my Time Machine backups to.
And it works, but kernel panics as soon as I try to boot from the restored disk. Booting back into the installer, I realize this is a disk from my PowerMac G5, and the partition table is the old, wrong kind. Reformat as GUID, re-run the restore. Reboot. Kernel panic. Boot in verbose mode off the new disk, and see an error about being unable to exec /sbin/launchd. Not an insignificant process (think init or upstart on a RHEL box). Do some digging, realize that you can’t use a 10.5 DVD to restore a 10.6 Time Machine backup, as the HFS+ mechanics have changed, and you’ll get a lot of errors like this. And realize you can’t find your original Snow Leopard DVD. Just the burned copy that your old SuperDrives refuse to read.
So, another day passes, you borrow an install disk from a friend’s Mac mini, and boy, howdy… you’re back in business. Restore works, everything comes up just as you’d left it the night that it crashed the first time.
Of course, you’ve got nothing done at home in your free time. The pictures to sort, process and edit are languishing, and your blog is sadly neglected. The good news is I was able to develop some code on the side here and there, working towards a distributable, database-backed iptables manipulator, which will see the light of day sooner rather than later, thanks to this delay. So, I’m going to post this now, not even spell checking it, so I can get back to the real meat these days, and hopefully finish up the iptables posts before my 2 year old is accepted to college.
High Availability LAMP Stacks with Keepalived (Part 3)
Assuming part I and part II worked for you, and you have tested that keepalived does indeed fail over and route traffic, you need to deal with the services on those hosts. In the case of a LAMP environment, you are looking primarily at MySQL and Apache – and you have to address config files and data replication for each.
MySQL is fairly straightforward, and multi-master replication is covered well elsewhere. Personally, I used this guide to cover the setup. Glossing over most of the details, the process is this (a rough sketch of the relevant my.cnf settings follows the list):
- set the unique server IDs for each database instance
- set the auto-increment offset and increment for each instance (so multi-master won’t step on itself)
- set each to replicate from the other
- stop external connections to the databases
- make sure the same data is loaded into each
- enable replication
- test
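As a rough illustration of the first few steps, the my.cnf stanzas on a two-node setup end up looking something like this – the IDs and offsets are examples, not prescriptions:

```
# Node 1 (/etc/my.cnf) -- example values only
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2    # two masters, so step by 2
auto_increment_offset    = 1    # node 1 takes the odd IDs

# Node 2 uses server-id = 2 and auto_increment_offset = 2, then each node
# points at the other with CHANGE MASTER TO ... and runs START SLAVE.
```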
Setting up Apache is slightly more complex, assuming you don’t have expensive shared clustered storage, as your config files (and website content) need to mirror one another. This is best handled in some kind of source code management system, like SVN or Git. Git is a better choice for this, as it’s fully distributed – each copy of the repository is a full copy, and can stand alone in case of total failure of everything else in the world other than the server it is on. Sadly, I can’t share the scripts we use at work (nasty IP lawyers), so until I develop one in my free time to post here, this post is going to be short on content. In essence, what you do is make each server a git repo that can serve content over HTTPS or SSH, script up the connection between the two (to push/pull data between your web server nodes), and then check that copy out to your preferred development platform.
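Until I have a proper script to share, the shape of it is roughly this – hostnames and paths are made up for the example:

```
# On each web node: make the document root a git repo (example paths/hosts).
cd /var/www/html
git init
git add -A && git commit -m "initial content"

# Point each node at its peer over SSH and pull on a schedule
# (a cron job or a small deploy script works fine):
git remote add peer ssh://web02.example.com/var/www/html
git pull peer master

# Developers clone from either node, commit locally, and the nodes
# pull their changes in the same way.
git clone ssh://web01.example.com/var/www/html site-content
```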
If you want to get really fancy (which you should), set up a second VIP in keepalived so you have a place to test development changes before pushing the button and going live. As most LAMP stacks draw their content from the database backend anyway, these changes won’t happen too often, but it’s always good to be able to test on a real server before blowing up production. (To be extra super fancy, you’ll run two sets of Apache daemons – with separate sets of config files – so when you add the new version of PHP, you don’t destroy the production environment.)
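The second VIP is just another vrrp_instance block in keepalived.conf alongside the production one; something like this, with made-up addresses, router ID and interface name:

```
# Staging VIP -- example values, add alongside your existing vrrp_instance
vrrp_instance VI_STAGING {
    state BACKUP
    interface eth0
    virtual_router_id 52        # must differ from the production instance
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass staging_pw
    }
    virtual_ipaddress {
        192.168.1.201
    }
}
```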
In the next and final installment of this series, we’ll look at harvesting the Apache logs, looking for attackers, and adding them to the block lists automatically.
Easy Access To MobileMe Enabled Workstations
This may be old hat for some of you, but if you have a MobileMe account set up on your Mac, you have access to said Mac from pretty much anywhere on the internet you like. This comes in the form of an IPv6 DNS entry that gets created in the mac.com domain. Its form is pretty simple:
workstation.username.members.mac.com
So, if you call your workstation mycoolmac and your MobileMe username is lookatme, the DNS entry for your system would be:
mycoolmac.lookatme.members.mac.com
If you have “Remote Login” enabled in Sharing, you are good to go. From another machine (that supports IPv6 SSH), just connect:
ssh -6 mycoolmac.lookatme.members.mac.com
It will ask you for the password on your remote Mac, and you’ll be logged right in. Sadly, this doesn’t work from my iPhone’s SSH client… yet.
The Shuttle Is Dead
It was long-lived, but my Shuttle PC is dead. A second and then a third hard drive “failed” while I was reinstalling my backup server. The machine stopped rebooting – it now has to be powered off and then back on to boot at all. The OpenSolaris and OpenIndiana installers started crashing while booting, or while loading the installer program. And yes, everything is well seated and connected.
The Shuttle is dead, or at least unreliable enough to make it useless to me. The next question is “how do I backup my file server now?” Time and money, time and money.
Another Week
Another week has gone by with little to no posting. The excuse this time is that my backup server blew out its system drive, and I’ve been spending my spare time trying to recover it. Once I figured out the drive was gone, I started loading up a new one – only to find I had forgotten to load the ethernet drivers (this is my Shuttle, after all), and had to refer to my own ancient blog posts to get my (even more) ancient system back onto the network. Just so I can do backups. To a stripe of drives that is almost full anyway…