The Guru College
Awesome
I couldn’t resist screen capping this part of an email and sharing it here:
Auto-blocking abusive hosts with iptables
We’ll get back to HA LAMP soon, I promise. However, it’s time to stop and make sure you are running a logging host-based firewall on your servers. If you’re not, and I’m not kidding on this, you need to start right now. Look through the output of netstat -a | grep LISTEN. My laptop has fewer than half a dozen ports open, and that’s after I’ve shut off most services. Here are the two most important (and most dangerous) ones:
Active Internet connections (including servers)
Proto  Recv-Q Send-Q  Local Address  Foreign Address  (state)
...snip...
tcp4        0      0  *.ssh          *.*              LISTEN
tcp6        0      0  *.ssh          *.*              LISTEN
This represents allowing anyone (*.*) to establish an SSH connection (*.ssh) over IPv4 or IPv6 to my laptop. Unlike most users, I actually look through my system logs on a regular basis, checking for strange things like failed login attempts from Asia. I also pick good passwords, change them frequently, and never re-use them. I seriously don’t think even a rainbow-table attack would do me in.
Your server probably has the same ports open to the world. How often do you check the system logs looking for unauthorized logins? Even more worrying: is *.3306 in your netstat output? While MySQL has a robust internal ACL system to allow specific users access from specific hosts, why let people even know you are running a MySQL server on the host at all? This is where a host-based firewall comes in handy. The fact that you can automatically block abusive hosts with a little scripting magic is just a bonus.
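Checking the logs for failed logins is easy to script. A quick sketch, assuming the standard OpenSSH “Failed password” syslog lines (the log path varies by distro; it’s /var/log/secure on RHEL):

```shell
# Count failed SSH login attempts per source IP. In OpenSSH's
# "Failed password ... from <ip> port <n> ssh2" lines, the IP is
# always the fourth-from-last field, even for "invalid user" entries.
failed_logins() {
  grep 'Failed password' "$1" | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn
}
```

Run it as failed_logins /var/log/secure; anything with hundreds of hits is a candidate for blocking.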
First things first: I’m talking about iptables today, and I’m focusing on the systems that use it, which is mostly Linux-based OSes. I’m sure you can do similar things with ipf on Solaris or ipfw on the BSDs, but you’ll have to interpret my examples as you follow along. Also, I’m going to be focusing on RHEL5/RHEL6, as that’s what I use for servers these days.
Check to see if the firewall is turned on: /sbin/service iptables status, and then see if it’s set to start at boot: /sbin/chkconfig --list iptables. If it’s not on, and not set to start, you’ve got some work to do. The base ruleset lives in /etc/sysconfig/iptables, and this is where iptables reads and writes its configuration. (Yes, with the right command, /sbin/service iptables save, changes you make to the running configuration are written out so they survive host reboots.) When iptables is running, to see the current ruleset and the hit counters against each rule, use: iptables -L -n -v.

The layout of iptables consists of “chains” of rules, with each rule processed in order. The default chains on a RHEL6 install are INPUT, OUTPUT and FORWARD. These should make sense just by their names, but if it’s not clear: the INPUT chain is processed for all inbound packets, the OUTPUT chain for all outbound traffic, and the FORWARD chain for all traffic being bridged or routed elsewhere (uncommon, unless you’re using keepalived as a router, or running KVM or Xen). These base chains are where we manage things like SSH and MySQL traffic, and lock them down to networks you control.
To add a rule to the INPUT chain that allows MySQL connections from the private IP address range 10.32.1.0/24:

sudo /sbin/iptables -A INPUT -m state --state NEW \
    -m tcp -p tcp -s 10.32.1.0/24 --dport 3306 -j ACCEPT
This appends a rule to the end of the INPUT chain. If you already have a default deny rule at the end of the chain, you’ll need to insert your MySQL rule above the default deny rule instead. (You can see which position each rule occupies with iptables -L INPUT -n --line-numbers.) If the default deny rule is in position 12, you’ll want to insert the new rule into position 12, which bumps the deny rule down to position 13:

sudo /sbin/iptables -I INPUT 12 -m state --state NEW \
    -m tcp -p tcp -s 10.32.1.0/24 --dport 3306 -j ACCEPT
You can also create new chains of your own: sudo /sbin/iptables -N INPUT-AUTO
and then add rules in the existing chains to jump to them:
sudo /sbin/iptables -I INPUT 1 -j INPUT-AUTO

This will insert the jump rule into position 1, and all traffic will now flow through the INPUT-AUTO chain. However, INPUT-AUTO is empty, so traffic just jumps there and jumps right back out. To add a rule to INPUT-AUTO for testing, use:
sudo /sbin/iptables -I INPUT-AUTO -j LOG
This will make any traffic that comes into your system get logged to /var/log/messages. It will generate a lot of log entries. If you used “-j DROP” instead, the traffic would be left on the cutting room floor and not logged, but you’d also lose your SSH connection to the box and have to console in. Don’t say I didn’t warn you. Always test new rules with “-j LOG” before you use “-j DROP”, at least until you are comfortable with what you are doing.
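The kernel writes those LOG entries with IN=, OUT=, and SRC= fields. A small sketch for pulling the source addresses back out, assuming the default RHEL syslog destination of /var/log/messages:

```shell
# Extract the unique source IPs from iptables LOG lines. The kernel tags
# each logged packet with a SRC=<address> field.
log_sources() {
  grep -o 'SRC=[0-9.]*' "$1" | cut -d= -f2 | sort -u
}
```

Point it at /var/log/messages to see who is actually hitting the chain; these addresses are exactly the raw material an auto-blocking script needs.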
This is all well and good, but how are we going to automatically block abusive hosts, you ask? Sadly, we’ll have to leave that for the next installment.
Posts and free time
I’ve not been posting the past few days, and I apologize for that. Life has taken another unexpectedly busy turn for me, and while I’m working on more articles on LAMP and iptables, it’s going to be a few days before they make it to the main page. Bear with me, this won’t last too long. I hope.
High Availability LAMP Stacks with Keepalived (Part 2)
Now that you have keepalived up, and you are able to fail the IP addresses over between the hosts, you will need to address another crucial question: under what conditions should keepalived fail the IPs to the second host? For us it was pretty simple: if apache and mysql are both running, the node is healthy. If you have a shell script you’ve written that can determine node health, that’s great too. Just pay attention to the exit code you set, and you should be good to go.
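A minimal health-check sketch built on killall -0 (httpd and mysqld are the RHEL process names; a non-zero exit code is what tells keepalived the node is sick):

```shell
# Return 0 only if every named daemon has at least one process running.
# killall -0 sends no signal at all; it just tests for the process's existence.
node_healthy() {
  for proc in "$@"; do
    killall -0 "$proc" 2>/dev/null || return 1
  done
}
```

node_healthy httpd mysqld exits 0 when both services are up, which keepalived reads as “leave the priority alone.”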
The setup is done with the vrrp_script directive, where you tell keepalived what scripts to run and what effect their outcome has on the priority level of each VIP. Two things to note: you need at least keepalived version 1.1.13 for vrrp_script to work, and you need to list the scripts before the vrrp_instance they are called in. Again, if you use my example password and the IP address listed here, you deserve any punishment you get.
Here is my example configuration file, after the additions of the scripts:
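A minimal sketch of such a configuration, using the values discussed in this series (MASTER priority 101, BACKUP 75, a weight of 50 per check); the interface name, VIP, and password are illustrative placeholders you must change:

```
vrrp_script chk_httpd {
    script "killall -0 httpd"    # cheap existence test, no signal sent
    interval 2                   # seconds between checks
    weight -50                   # priority points removed while failing
}

vrrp_script chk_mysqld {
    script "killall -0 mysqld"
    interval 2
    weight -50
}

vrrp_instance VI_1 {
    state MASTER                 # BACKUP on the second host
    interface eth0               # placeholder: your shared-VLAN interface
    virtual_router_id 51
    priority 101                 # 75 on the second host
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass REPLACE-ME     # generate your own with uuidgen
    }
    virtual_ipaddress {
        10.32.1.100              # placeholder VIP
    }
    track_script {
        chk_httpd
        chk_mysqld
    }
}
```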
The script directive should be pretty obvious: it’s the command you are going to execute to get a return status. (I’m using killall -0, as it’s an exceptionally cheap way to determine if a process with a given name exists at least once in the process table.) The interval is the time, in seconds, between checks, and the weight is the number of priority points removed when the check fails. So, as long as your BACKUP config is set to a number higher than 51, when either mysql or httpd isn’t running, the VIP should fail over.
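The arithmetic is worth spelling out. With a MASTER priority of 101, a weight of 50 per check, and a BACKUP priority of 75 (the values used in this series), one failed check is enough to drop the master below the backup:

```shell
# Effective VRRP priority: base priority minus (failed checks x weight).
effective_priority() {
  echo $(( $1 - $2 * $3 ))   # $1=base, $2=failed checks, $3=weight per check
}
```

effective_priority 101 1 50 gives 51, which loses the election against the backup’s 75, so the VIP moves.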
Cheap Wireless TTL Flash Options
If you want to shoot wireless TTL (Nikon = CLS, Canon = AWS) and you don’t want to drop $300+ on each flash you buy, you might think you’re stuck. However, as I’m learning, there are some options. The Sigma EF-610 DG Super, the Metz mecablitz 50 AF-1, and the Nissin Di622 Mark II all support wireless TTL with Nikon gear (and there are equivalent models for Canon shooters). All of this clouds the issue of how you use your off-camera flash guns. If you are shooting primarily manual (à la David Hobby), save your money and go with the LumoPro 160, or go really cheap with the Yongnuo 460 II, the $45 bargain flash. However, if you are a CLS/AWS shooter, it may be worth it to check out some of the third-party options. The Sigma ones catch my eye especially, as I’ve used a number of their lenses with great results in the past.
Not that I’m in a position at the moment to buy a bunch of different flashes and test them out. In another month or two, I’m likely to pick up a pair of something, but I’m not sure what yet.
High Availability LAMP Stacks with Keepalived (Part 1)
Setting up a modern, highly available system without external dependencies is a tricky business. It’s even trickier to think through when doing it on a shoestring budget. However, it’s critical for many operations. My employer, for example, keeps a set of tools available to administrators on a well-secured web server. This system is a front-end to a complex Nagios deployment, user management tools, and system maintenance tools. We can’t afford for it to go down when other systems are offline, so its design principles are much like those of a monitoring system: everything possible is internal to the system. Luckily, it’s not as hard as it sounds, and it’s free.
If you use CentOS or RHEL, you can follow this guide pretty much directly. Other flavors will have to adapt as needed to reflect local packaging standards, service administration and filesystem layouts. This guide is going to focus mostly on setting up keepalived to act as a failover mechanism, while also looking at replicated MySQL and failure detection for apache. Also, keep in mind this is about High Availability and failover, not load balancing.
First things first: you are going to want to have two servers that are pretty much identical. It helps in the long run for sizing and replacements. They are going to need to be somewhat beefy, as they are going to host the web servers and the database servers that administrators are going to hammer when there’s a system outage. There is nothing quite like melting the face off the server that the admins are using to put everything else back together. Also, put the servers in different data centers/different buildings, so when power goes out to one, the other is still up. The tricky bit is the servers are also going to need to be in the same Layer 2 broadcast domain, while being geographically separated. This usually means being on the same VLAN, and also usually means spanning those VLANs. Networking groups don’t usually like this. The alternative is MPLS and VPLS, which is expensive – at least if you use Cisco gear – and well beyond my knowledge. I understand it’s possible, but I digress.
Once they are on the same segment, it’s time to set up keepalived. Keepalived is usually used as a load balancer, where a pair of hosts use VRRP to pass the “router” address back and forth between themselves for redundancy. We’re going to use VRRP to do most of our magic today. Good news: on most modern RedHat-based distributions, installing keepalived is as simple as:
sudo yum install keepalived
It’s not part of the Advanced Platform Server Magick that RedHat sells at a premium price. It’s just basic, vanilla, plain-Jane keepalived. Your config file is equally simple, with two minor differences between the two hosts that will be sharing the service. On the first, set `state` to `MASTER` and `priority` to `101`, and on the second set them to `BACKUP` and `75`. Also, do yourself a favor and generate the passwords with `uuidgen`. I’ve added a password generated with uuidgen, but don’t use the one I’m pasting in here. Really. Also, give yourself a real IP address, not 1.2.3.4. (Honestly, if you use a password copied off a blog post and the IP address 1.2.3.4… well, you deserve what you get).
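(The original config sample didn’t survive the formatting, so here’s a minimal sketch of `/etc/keepalived/keepalived.conf` along the lines described above – the interface name, router ID, password, and address are all placeholders you must change:)

```
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the second host
    interface eth0          # whichever interface faces your VLAN
    virtual_router_id 51    # must match on both hosts
    priority 101            # 75 on the second host
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass REPLACE-ME    # generate your own with uuidgen
    }
    virtual_ipaddress {
        192.0.2.50          # the shared service IP, not 1.2.3.4
    }
}
```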
Once that’s done, and the config files are in place on both hosts, fire up keepalived on the master. There will be some messages in /var/log/messages about keepalived coming online, setting up interfaces and entering MASTER state. You should now be able to ping the virtual IP address you specified above from the outside, and SSH into it. Bringing the second instance of keepalived online should start a short conversation in the logs – but the MASTER should remain master.
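(On RHEL5/RHEL6 that amounts to something like the following, with chkconfig making sure it survives a reboot:)

```
/sbin/service keepalived start
/sbin/chkconfig keepalived on
```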
If there’s no conversation in the logs, check that `virtual_router_id` and `auth_pass` match in the two config files. If they match, and there’s still no conversation, you may have to poke iptables, as you need to explicitly allow VRRP traffic through the firewall. In `/etc/sysconfig/iptables`, add the following:
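(The rule itself was lost in the formatting; VRRP is IP protocol 112, so a rule along these lines works – `RH-Firewall-1-INPUT` is the stock RHEL chain name, so adjust it to whichever chain your rules actually live in, and restart the iptables service afterward:)

```
# allow VRRP (IP protocol 112) between the keepalived peers
-A RH-Firewall-1-INPUT -p 112 -j ACCEPT
```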
````Setting up a modern, highly available system without external dependancies is a tricky business. It's even tricker to think through when doing it on a shoestring budget. However, it's critical for many operations. My employer, for example, keeps a set of tools available to administrators on a well-secured web server. This system is a front-end to complex Nagios deployment, user management tools, system maintenance tools. We can't afford for it to go down when other systems are offline, so it's design principals are much like that of a monitoring system: everything possible is internal to the system. Luckily, it's not as hard as it sounds, and it's free.
If you use CentOS or RHEL, you can follow this guide pretty much directly. Other flavors will have to adapt as needed to reflect local packaging standards, service administration and filesystem layouts. This guide is going to focus mostly on setting up keepalived to act as a failover mechanism, while also looking at replicated MySQL and failure detection for apache. Also, keep in mind this is about High Availability and failover, _not_ load balancing.
First things first: you are going to want to have two servers that are pretty much identical. It helps in the long run for sizing and replacements. They are going to need to be somewhat beefy, as they are going to host the web servers and the database servers that administrators are going to hammer when there's a system outage. There is nothing quite like melting the face off the server that the admins are using to put everything else back together. Also, put the servers in different data centers/different buildings, so when power goes out to one, the other is still up. The tricky bit is the servers are also going to need to be in the same Layer 2 broadcast domain, while being geographically separated. This usually means being on the same VLAN, and _also_ usually means spanning those VLANs. Networking groups don't usually like this. The alternative is MPLS and VPLS, which is expensive – at least if you use Cisco gear – and well beyond my knowledge. I understand it's possible, but I digress.
Once they are on the same segment, it's time to setup [keepalived][1]. Keepalived is usually used to act as a load balancer, and uses VRRP to pass the “router” address back and forth between themselves for redundancy. We're going to use VRRP to do most of our magic today. Good news: on most modern RedHat-based distributions, installing keepalived is as simple as:
`Setting up a modern, highly available system without external dependancies is a tricky business. It's even tricker to think through when doing it on a shoestring budget. However, it's critical for many operations. My employer, for example, keeps a set of tools available to administrators on a well-secured web server. This system is a front-end to complex Nagios deployment, user management tools, system maintenance tools. We can't afford for it to go down when other systems are offline, so it's design principals are much like that of a monitoring system: everything possible is internal to the system. Luckily, it's not as hard as it sounds, and it's free.
If you use CentOS or RHEL, you can follow this guide pretty much directly. Other flavors will have to adapt as needed to reflect local packaging standards, service administration and filesystem layouts. This guide is going to focus mostly on setting up keepalived to act as a failover mechanism, while also looking at replicated MySQL and failure detection for apache. Also, keep in mind this is about High Availability and failover, _not_ load balancing.
First things first: you are going to want to have two servers that are pretty much identical. It helps in the long run for sizing and replacements. They are going to need to be somewhat beefy, as they are going to host the web servers and the database servers that administrators are going to hammer when there's a system outage. There is nothing quite like melting the face off the server that the admins are using to put everything else back together. Also, put the servers in different data centers/different buildings, so when power goes out to one, the other is still up. The tricky bit is the servers are also going to need to be in the same Layer 2 broadcast domain, while being geographically separated. This usually means being on the same VLAN, and _also_ usually means spanning those VLANs. Networking groups don't usually like this. The alternative is MPLS and VPLS, which is expensive – at least if you use Cisco gear – and well beyond my knowledge. I understand it's possible, but I digress.
Once they are on the same segment, it's time to setup [keepalived][1]. Keepalived is usually used to act as a load balancer, and uses VRRP to pass the “router” address back and forth between themselves for redundancy. We're going to use VRRP to do most of our magic today. Good news: on most modern RedHat-based distributions, installing keepalived is as simple as:
`
It's not part of the Advanced Platform Server Magick that RedHat sells at a premium price. It's just basic, vanilla, plain-Jane keepalived. Your config file is equally as simple, with two minor differences between the two hosts that will be sharing the service. On the first, set `state` to `MASTER` and `priority` to `101`, and on the second set them to `BACKUP` and `75`. Also, do yourself a favor and generate the passwords with `uuidgen`. I've added a password generated with uuidgen, but don't use the one I'm pasting in here. Really. Also, give yourself a real IP address, not 1.2.3.4. (Honestly, if you use a password copied off a blog post and the IP address 1.2.3.4… well, you deserve what you get).
``Setting up a modern, highly available system without external dependancies is a tricky business. It's even tricker to think through when doing it on a shoestring budget. However, it's critical for many operations. My employer, for example, keeps a set of tools available to administrators on a well-secured web server. This system is a front-end to complex Nagios deployment, user management tools, system maintenance tools. We can't afford for it to go down when other systems are offline, so it's design principals are much like that of a monitoring system: everything possible is internal to the system. Luckily, it's not as hard as it sounds, and it's free.
If you use CentOS or RHEL, you can follow this guide pretty much directly. Other flavors will have to adapt as needed to reflect local packaging standards, service administration and filesystem layouts. This guide is going to focus mostly on setting up keepalived to act as a failover mechanism, while also looking at replicated MySQL and failure detection for apache. Also, keep in mind this is about High Availability and failover, _not_ load balancing.
First things first: you are going to want to have two servers that are pretty much identical. It helps in the long run for sizing and replacements. They are going to need to be somewhat beefy, as they are going to host the web servers and the database servers that administrators are going to hammer when there's a system outage. There is nothing quite like melting the face off the server that the admins are using to put everything else back together. Also, put the servers in different data centers/different buildings, so when power goes out to one, the other is still up. The tricky bit is the servers are also going to need to be in the same Layer 2 broadcast domain, while being geographically separated. This usually means being on the same VLAN, and _also_ usually means spanning those VLANs. Networking groups don't usually like this. The alternative is MPLS and VPLS, which is expensive – at least if you use Cisco gear – and well beyond my knowledge. I understand it's possible, but I digress.
Once they are on the same segment, it's time to setup [keepalived][1]. Keepalived is usually used to act as a load balancer, and uses VRRP to pass the “router” address back and forth between themselves for redundancy. We're going to use VRRP to do most of our magic today. Good news: on most modern RedHat-based distributions, installing keepalived is as simple as:
`Setting up a modern, highly available system without external dependancies is a tricky business. It's even tricker to think through when doing it on a shoestring budget. However, it's critical for many operations. My employer, for example, keeps a set of tools available to administrators on a well-secured web server. This system is a front-end to complex Nagios deployment, user management tools, system maintenance tools. We can't afford for it to go down when other systems are offline, so it's design principals are much like that of a monitoring system: everything possible is internal to the system. Luckily, it's not as hard as it sounds, and it's free.
If you use CentOS or RHEL, you can follow this guide pretty much directly. Other flavors will have to adapt as needed to reflect local packaging standards, service administration and filesystem layouts. This guide is going to focus mostly on setting up keepalived to act as a failover mechanism, while also looking at replicated MySQL and failure detection for apache. Also, keep in mind this is about High Availability and failover, _not_ load balancing.
First things first: you are going to want to have two servers that are pretty much identical. It helps in the long run for sizing and replacements. They are going to need to be somewhat beefy, as they are going to host the web servers and the database servers that administrators are going to hammer when there's a system outage. There is nothing quite like melting the face off the server that the admins are using to put everything else back together. Also, put the servers in different data centers/different buildings, so when power goes out to one, the other is still up. The tricky bit is the servers are also going to need to be in the same Layer 2 broadcast domain, while being geographically separated. This usually means being on the same VLAN, and _also_ usually means spanning those VLANs. Networking groups don't usually like this. The alternative is MPLS and VPLS, which is expensive – at least if you use Cisco gear – and well beyond my knowledge. I understand it's possible, but I digress.
Once they are on the same segment, it's time to setup [keepalived][1]. Keepalived is usually used to act as a load balancer, and uses VRRP to pass the “router” address back and forth between themselves for redundancy. We're going to use VRRP to do most of our magic today. Good news: on most modern RedHat-based distributions, installing keepalived is as simple as:
`
It's not part of the Advanced Platform Server Magick that RedHat sells at a premium price. It's just basic, vanilla, plain-Jane keepalived. Your config file is equally as simple, with two minor differences between the two hosts that will be sharing the service. On the first, set `state` to `MASTER` and `priority` to `101`, and on the second set them to `BACKUP` and `75`. Also, do yourself a favor and generate the passwords with `uuidgen`. I've added a password generated with uuidgen, but don't use the one I'm pasting in here. Really. Also, give yourself a real IP address, not 1.2.3.4. (Honestly, if you use a password copied off a blog post and the IP address 1.2.3.4… well, you deserve what you get).
``
```Setting up a modern, highly available system without external dependancies is a tricky business. It's even tricker to think through when doing it on a shoestring budget. However, it's critical for many operations. My employer, for example, keeps a set of tools available to administrators on a well-secured web server. This system is a front-end to complex Nagios deployment, user management tools, system maintenance tools. We can't afford for it to go down when other systems are offline, so it's design principals are much like that of a monitoring system: everything possible is internal to the system. Luckily, it's not as hard as it sounds, and it's free.
If you use CentOS or RHEL, you can follow this guide pretty much directly. Other flavors will have to adapt as needed to reflect local packaging standards, service administration and filesystem layouts. This guide is going to focus mostly on setting up keepalived to act as a failover mechanism, while also looking at replicated MySQL and failure detection for apache. Also, keep in mind this is about High Availability and failover, _not_ load balancing.
First things first: you are going to want to have two servers that are pretty much identical. It helps in the long run for sizing and replacements. They are going to need to be somewhat beefy, as they are going to host the web servers and the database servers that administrators are going to hammer when there's a system outage. There is nothing quite like melting the face off the server that the admins are using to put everything else back together. Also, put the servers in different data centers/different buildings, so when power goes out to one, the other is still up. The tricky bit is the servers are also going to need to be in the same Layer 2 broadcast domain, while being geographically separated. This usually means being on the same VLAN, and _also_ usually means spanning those VLANs. Networking groups don't usually like this. The alternative is MPLS and VPLS, which is expensive – at least if you use Cisco gear – and well beyond my knowledge. I understand it's possible, but I digress.
Once they are on the same segment, it's time to setup [keepalived][1]. Keepalived is usually used to act as a load balancer, and uses VRRP to pass the “router” address back and forth between themselves for redundancy. We're going to use VRRP to do most of our magic today. Good news: on most modern RedHat-based distributions, installing keepalived is as simple as:
It's not part of the Advanced Platform Server Magick that RedHat sells at a premium price. It's just basic, vanilla, plain-Jane keepalived. Your config file is equally simple, with two minor differences between the two hosts that will be sharing the service. On the first, set `state` to `MASTER` and `priority` to `101`, and on the second set them to `BACKUP` and `75`. Also, do yourself a favor and generate the password with `uuidgen`. I've added a password generated with uuidgen, but don't use the one I'm pasting in here. Really. Also, give yourself a real IP address, not 1.2.3.4. (Honestly, if you use a password copied off a blog post and the IP address 1.2.3.4… well, you deserve what you get.)
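A minimal sketch of `/etc/keepalived/keepalived.conf` for the MASTER host; the interface name, router ID, and addresses below are placeholders you'll need to adjust for your environment:

```
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the second host
    interface eth0            # adjust to your NIC
    virtual_router_id 51      # must match on both hosts
    priority 101              # 75 on the BACKUP
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass PASTE-UUIDGEN-OUTPUT-HERE   # run uuidgen, paste the result here
    }
    virtual_ipaddress {
        1.2.3.4               # your real VIP, not this one
    }
}
```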
Once that’s done, and the config files are in place on both hosts, fire up keepalived on the master. There will be some messages in /var/log/messages talking about keepalived coming online, setting up loopbacks and entering MASTER state. You should now be able to ping the machine externally, and SSH into the virtual IP address you specified above. Bringing the second instance of keepalived online should start a short conversation in the logs – but the MASTER should remain master.
If there’s no conversation in the logs, check the virtual_router_id and auth_pass in the config files. If they match and there’s still no conversation, you may have to poke iptables, as you need to explicitly allow VRRP traffic through the firewall. In /etc/sysconfig/iptables, add the following:
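VRRP is IP protocol 112, so a rule along these lines should do it – the `RH-Firewall-1-INPUT` chain name is the stock RHEL default, so adjust it to match your own ruleset:

```
-A RH-Firewall-1-INPUT -p vrrp -j ACCEPT
```

Then run `service iptables restart` on both hosts to load the new rule.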
Once the hosts talk to each other, test failover. Turn off keepalived on the first host. It will take two or three seconds to fail over to the second host. At that point, you’ll be able to ping, but your SSH client will scream at you about a host key mismatch. This is expected, and exactly what you want to see. Success! The VIP has moved. I’ll remind you again, this is HA, not load balancing. When you bring keepalived back online, the IP will move back to the host you have set as MASTER in the config files.
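The whole test, sketched as commands (1.2.3.4 again stands in for your real VIP):

```shell
# On the current MASTER:
service keepalived stop     # the VIP should move to the BACKUP within a few seconds

# From a third machine:
ping -c 3 1.2.3.4           # still answers, now served by the second host
ssh root@1.2.3.4            # expect a host key mismatch warning: the VIP has moved

# Back on the MASTER:
service keepalived start    # the VIP fails back to this host
```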
At this point, you’ve done half of the hard work. As a breather, fire up apache on both boxes and fail keepalived back and forth between the hosts, and you’ll be able to see traffic going to one or the other. It’s pretty slick. In the next installment, I’ll talk about MySQL replication between the hosts.
The Flash Bus (Part II)
I’m finally getting down to writing up the Flash Bus seminar that happened a week ago. In short, it was amazing. Between them, David Hobby and Joe McNally have over 50 years of photography experience, and both are masters of their respective crafts. David spent time out of mind as a newspaper photographer, and when he saw the way the industry was going, he packed up shop and found a new way to use his skills. He also started writing Strobist, a guide to off-camera flash. He believes heavily in using the manual modes, and in building up the light in a photograph in carefully controlled and measured layers. Joe, on the other hand, spent the last three decades as a freelance photographer for National Geographic magazine. He strongly believes in automatic settings via the camera’s Through The Lens (TTL) flash metering, which allows him to play fast and loose with the flash settings and wing it a little more often.
I can’t say I like or dislike either approach. The fully manual approach makes you think out every light and every level in a way that builds the technical discipline of off-camera lighting. It’s also a lot cheaper, as you can get $179 LumoPro 160s with optical slave triggers, instead of $400 Nikon speedlights, and get the same results. It does mean more walking back and forth between the light stand and the shooting position, but there’s nothing wrong with exercise. It also means that you learn to trust your judgement faster, and to build in layers. The TTL approach makes heavy use of Nikon’s Creative Lighting System (or the PocketWizard TTL transceivers), which allows control of the remote flashes from the camera. It lets you set the ratios between the various lights and work much more quickly. (As an aside: if you have the expensive Nikon CLS-enabled flashes, you can still set them in manual mode. This is what I’m doing, as I’m lucky enough to have two CLS-enabled Nikon speedlights.) For now, I think I’m going to stay with the manual methods, à la David Hobby, as I really need to put in the hours learning the technical aspects of the lighting process.
I think the most important technical thing I learned in the seminar is that when you use manually set speedlights, your aperture controls the power of the speedlights, and your shutter controls the power of the ambient light. This is because the flash fires for only about 1/1,000th of a second, far shorter than the shutter stays open, so the shutter speed has almost no effect on the flash exposure. So you walk the background up and down with the shutter control dial and the relative power of your lights with the aperture control dial. You can then use the ISO speed as a global exposure compensation system. It’s pretty amazing what you can do once you realize this (like overpower the sun in the middle of the day).
There are two other things that I learned at the seminar, beyond the raw volume of lighting information. The first is to register your photographs with the government, so you have the ability to sue if people use them without permission. You can send in DVDs of images to register them en masse. The US Copyright Office maintains a lot of helpful information on their website.
The second thing is that shooting with fewer lights means more complications and less control. This is counterintuitive to many, as you would think that more lights to set up means more settings to change and more things to balance. This isn’t true, for two reasons. First, light is additive: as you layer more light, it only adds to the light on the scene. As long as you are going manual, adding new lights will never create new dark places in the photo you make. Second, there are four generally established types of light in a photograph: Ambient, Fill, Key, and Accent. You can create all of them with speedlights, but if you only have one light, it has to do double duty (as a Fill and an Accent, via a bounce card, for example). This makes it harder on the photographer to figure out how to balance the needed elements in the scene. I’ve only got the two lights, and while I want more, I think I’ll be OK for now. Though I’ve already run into situations over this last weekend where I wished I had two more lights for honest reasons, not just gear envy.
Anyway, this is getting to be a long post, and I’ve got dinner to cook. See you guys next time, where I actually try to get a human subject in front of my camera.