The Guru College
Home Network – DNS Server
I am now running a pair of DNS servers in my home network. Like the NTP service I’m running, these are active on both of my OpenSolaris nodes. The processor I ran to generate my db.* and named.* files is h2n, which was originally written by Cricket Liu for O’Reilly’s DNS and BIND. In short, it converts your (correctly formatted) /etc/hosts file into almost everything you need to run a BIND nameserver.
Most people use BIND8 or BIND9 – it’s huge, and is constantly being updated for security-related problems. I had thought about using something else, like djbdns, written from the ground up with security and simplicity in mind by Daniel Bernstein. He’s so serious about security that if you find a security-related bug in his nameserver software, he’ll mail you a check for $1000 USD. While that is tempting, part of this exercise is to get familiar with the tools I will be using at work and in future projects, and at the moment that is BIND.
You will need one thing from an external source, other than the h2n scripts – the root hints file, which you will save and rename to /var/named/db.cache. This gives your DNS server the ability to perform external lookups.
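If it helps, InterNIC publishes the root hints file as named.root; grabbing it looks roughly like this (a sketch – confirm the current URL, and use curl or ftp if your system lacks wget):

```shell
# Fetch the root hints file and install it where named will look for it.
# named.root is the name InterNIC publishes it under.
wget -O /var/named/db.cache http://www.internic.net/domain/named.root
```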
As mentioned earlier, you will need to update your /etc/hosts file to the ‘correct’ format:
ip_address fqdn aliases #comments
For me, this starts as follows:
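My actual listing is longer, but a hypothetical hosts file in this format (example names and RFC 1918 addresses, not my real hosts) looks like this:

```shell
# Hypothetical /etc/hosts in the format h2n expects:
# ip_address fqdn aliases #comments
cat > /tmp/hosts.example <<'EOF'
192.168.1.10  athena.home.example.com athena    # primary OpenSolaris node
192.168.1.11  hermes.home.example.com hermes    # secondary OpenSolaris node
192.168.1.20  printer.home.example.com printer  #[no mx]
192.168.1.30  laptop.home.example.com laptop    #[no mx]
EOF
```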
I’ve cut off 10 or 15 lines from the end of this hosts file. As I do intend to use this hosts file as a fallback, I’ve kept the short hostnames, even though they aren’t needed for h2n. If you have a short hosts file, like mine, this will take you a few minutes. If you have been maintaining hosts files for years, this could be a long process – though it’s still much better than manually writing out the forward and reverse lookup zones by hand. (To be thoroughly pedantic, you should write out the files by hand if you’ve never done it before. It will make you understand BIND a lot better.)
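For reference, the h2n run itself is a one-liner – something like the following, where -d names your DNS domain and -n the network to build reverse zones for (home.example.com and 192.168.1 are placeholders; check the h2n documentation for the full option list):

```shell
# Generate the db.* and named.* files from /etc/hosts.
# -d: the DNS domain; -n: the network for the reverse (in-addr.arpa) zone.
h2n -d home.example.com -n 192.168.1
```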
The comment ‘#[no mx]’ above does what it looks like it should do – it tells h2n not to assign this entry an MX record. I plan to run my own internal mail system for notifications and system alerts, and everything should wind up on the OpenSolaris boxes – not on the printer, my wife’s laptop, etc. If you don’t know what a Mail eXchange (MX) record is, please Google for it. For that matter, if you don’t know the difference between an A record, a CNAME and an ALIAS, also do some more reading. I’m not going to explain those here.
Kidney Stone Update
As best I can tell, the stone is passed. It’s been over 36 hours since I’ve felt any discomfort from it, and more than 48 hours since I’ve taken medicine for it. I never noticed it passing, but it’s not here anymore. Which is fine with me – and it will be fine with me if I never have another again.
Systems Administration
King’s Rule of System Administration: You either do your job well or constantly.
The difference between a server owner and a system administrator is not small. Both groups perform the same kinds of actions on the same kinds of systems – the difference is the way they go about their jobs.
System administrators know what impact their changes and settings will have on the server’s OS, the applications that run on that OS, and the quality of the services delivered to the end user. They check with other sysadmins to make sure they haven’t missed significant details. They monitor the performance of their applications, they check through log files looking for problems, and try to be proactive in general about security. They are, almost always, subscribed to security mailing lists about products they support, and are diligent about testing and applying relevant patches. They also do comprehensive backups, and test their backups by restoring files to make sure they work before they are needed. They are able to do capacity planning. Most importantly, they are able to learn new things and expand their understanding of systems to new technologies.
Server owners, on the other hand, don’t know why they are doing what they do. Not in the ‘why am I here’ sense, but more in the ‘what does this setting really mean’ sense. They apply the application patch ‘because it’s the newest version’, not because they read the patch notes to find out what changed. They do backups only when the system is available, and it’s easy. They almost never check with other administrators, and often test their changes on production data. When they have to debug something, they look up the problem (usually via Google) and apply changes without understanding what they are doing to make it work. These are also the ones who most often tell people “it’s always been done this way”, and therefore are almost totally unable to write documentation of their systems. It is rare that they look into log files until something has gone catastrophically wrong, and they are very defensive when constructive criticism is given.
Nobody is one kind or the other, and as I’ve grown in my professional life over the last 10 years, my metric has changed. I’m sure it will keep changing. The important part is that I want to be a better sysadmin, much like I want to be a good photographer and a good parent. I know far too many people who don’t really care about doing their job right – they just come to work for the paycheck. There’s no growth when it’s just about the paycheck.
The Problem with Paywalls
I’ve ruminated recently about Rupert Murdoch and the paywall he intends to erect around his sites sometime in the near future. Recently, daringfireball.net linked to a WSJ article, and I found myself wondering how much longer those links would keep coming, and how much I would miss them. It occurred to me that there isn’t enough quality distinction between the Wall Street Journal and the New York Times (and the rest of the news sites I read) for it to matter to me.
The only publications I would even consider paying to read are ArsTechnica and HardOCP. Not for the news articles, mind you, but for the in-depth reporting and reviews. I hadn’t bothered to read any of the Snow Leopard reporting anywhere on the internet before Ars published John Siracusa’s 23-page article. There’s no point – I know John is going to get it right, cover it properly, and do his research before talking about the details of the OS. Pretty much everyone else is going to be racing to get theirs out first. For investigative technology reporting – be it an operating system review, a wiretapping scandal, or coverage of the App Store review process – my primary source is Ars. When it comes to hardware reviews for motherboards, power supplies, or honest numbers for video cards, I go to HardOCP.
For everything else, I don’t care as much about who has it first, or even right, as long as it has an RSS feed I can consume in Google Reader. I read the headlines of everything, and short summaries of the articles. If I’m really interested, I’ll actually pop the article open and see what’s going on. But I make a point to try to read the same story (or the headlines, or the intro paragraph) from as many different newspapers as possible. This way some of the bias is removed as people try to think of different ways to spin things. I assume the truth is somewhere in the middle, and keep on going.
So, sure, Mr. Murdoch. Put up your paywall. I wonder if anyone will even bother to come look.
Delete Key Shenanigans
Whatever idiot decided that the delete key, which is used everywhere else to destroy data, should function as the “back” button for web browsers should be shot. I wrote about this before. At the time, however, the problem only manifested itself with Firefox. As far as I could tell, Safari (my browser of choice) wasn’t afflicted with this.
Safari 4.0 is broken in this respect, and I can’t find any way to turn it off. Help?
Really, don’t
Really don’t make changes to your site’s CSS code when you’re really tired and about to go to sleep. All you do is what I did last night – break things so badly you have to go to backups to get it back to where it was. After much wailing and gnashing of teeth, of course.
GPS Data and Photoblogs
As a quick followup to my free iPhone GPS geotagging post, I’ve incorporated the GPS data into my photoblog. The first GPS-enabled photo goes up Tuesday morning. Installation takes all of 10 minutes. Download the ‘easyMap’ plugin for pixelpost, slap it into your addons directory and enable it. I’ve modified the files slightly so that if there isn’t GPS data found, it displays nothing instead of warning that ‘no geodata was found’ or ‘location no set’.
Also, if you’re using the nightly builds of WebKit as your primary browser, the map overlay won’t load properly. I’m not sure why, but reverting to ‘just’ Safari 4.0.2 made it all work again.