The Guru College

Drive Throughput Improvement

I’ve gotta hand it to the OpenSolaris guys – they’ve done a lot of good work. When I was running an (admittedly old) build of Solaris Nevada, my network throughput would top out around 22MB/s – which is pretty respectable for a RAIDZ1 pool on consumer-level SATA disks. I was moving some VM disks around, and noticed this:

root@vault:~# zpool iostat tank 30
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1.95T  1.46T     36      0  3.68M  17.8K
tank        1.95T  1.46T      0    324      0  39.5M
tank        1.96T  1.45T      0    511      0  62.9M
tank        1.96T  1.45T      0    501      0  61.7M
tank        1.96T  1.45T      0    485      0  59.6M
tank        1.96T  1.45T      0    482      0  59.3M
tank        1.96T  1.45T      0    498      0  61.4M
tank        1.96T  1.45T      0    509      0  62.6M
tank        1.96T  1.44T      0    495      0  61.0M

Wow. Just wow.

The Perfect Lens

Let me say first that there is no one, perfect lens. I know that and I have accepted that. The perfect lens in a given situation differs wildly depending on who is taking the photograph, among other things. How is the photographer feeling? How is the subject feeling? How comfortable is the photographer with the gear they have? I’m certain if you handed me the same location and equipment used to take this picture, I wouldn’t have been able to do it. I don’t know the workings of the cameras that were used, the lenses, and I don’t have a feel for action or sports photography. All of that said, I do have a lens that I have treated as if it is perfect for quite some time. Sadly, the honeymoon is drawing to a close.

The lens was a Christmas gift from my parents in the winter of 2005. An AF Nikkor 50mm f/1.4D. I was about to move home from the UAE (for a very short period, it turned out) and they had come to visit me. I remember all of this very distinctly, as I took some excellent pictures of them on that trip. However, I have now used and loved the lens for 5 years. And I must say, while its depth of field is amazing, and at f/2.8 it’s incredibly sharp, and it lets me take so many photos in low light that I never could have attempted before – I find it’s slow and loud to focus, has crazy lens flare if I don’t shield it well, and doesn’t offer a lot of versatility in terms of framing. People everywhere talk about the nifty-fifty, referring to 50mm lenses on 35mm SLRs – but on my DX body this is, in effect, a 75mm lens. It’s an excellent portrait lens. It’s indispensable when shooting in available darkness. But it’s not a great lens for taking pictures of a moving baby – it just can’t keep up with focus, especially at f/1.4, where the depth of field can be measured in finger-widths.

This leaves me wondering: should I try to replace the lens with its newer brother, the $430 AF-S Nikkor 50mm f/1.4G? It’s pretty much the same lens, but with a fast, quiet focusing system. Or should I be looking at something more like the Pioneer Woman’s current favorite, an AF 35mm f/2.0? It would give me the larger framing area, and still be mighty quick. It’s also nearly $100 cheaper. But it’s a full stop slower, and I’m often hanging out on the edge of light, with my ISO cranked all the way up to 1600, trying as hard as I can not to breathe so the 1/40th of a second exposure comes out somewhat sharp. Another part of me says that I should replace the camera body, not the lens – move to a full frame sensor, which fixes the framing problem and gives me ISO 6400 as a regularly usable setting, with 12,800 available in 1/3-stop increments as needed. Of course, a full frame camera costs serious money, as do a wide assortment of fast primes, or an f/2.8 professional zoom.

While I like to daydream about what a new lens would do for my photography, I do recognize that the old adage about cameras applies to lenses as well – the best lens is the one you have with you.

A View From The Ivory Tower: What’s Wrong With NAT?

I have long espoused a theory about network design that gets me laughed at, especially in smaller organizations. This theory is that Network Address Translation, or NAT, is a fundamentally broken technology and it makes everyone on the Internet suffer. A lot of people think I’m crazy – NAT is the tool that allows a house with a cable modem to have more than one computer online at a time. In short, NAT takes the single, public IP address given to you by your ISP, and hides any number of computers behind it. It assigns IP addresses from one of the three ‘private’ address ranges (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) to each computer, and everything works as expected.
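
For the curious, the mechanics are simple. Here’s a minimal sketch of NAT on a Linux home gateway using iptables – the interface names are assumptions, so substitute your own:

# Assumes eth0 faces the ISP (the single public IP) and eth1 faces the home LAN (hypothetical names).
# Let the box forward packets between the two interfaces.
sysctl -w net.ipv4.ip_forward=1
# Rewrite the source address of everything leaving eth0 to the public IP.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# The LAN side then gets private 192.168.x.x addresses from whatever DHCP server you run.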

The problem is that this breaks one of the original visions of the Internet – where everyone is an equal consumer and producer. Wasn’t the Internet supposed to set us free? Decentralize communication, so the flow of information was between peers, and not coming from The One Blessed Source? NAT makes that impossible, as it is a single-direction filter. A computer behind a NAT isn’t accessible from the rest of the Internet at large. It makes huge chunks of computers act only as consumers of media. This, coupled with the fact that most Internet connections are optimized for downloads and severely limited for uploads, makes publishing content on the Internet even more costly to the end user. It limits connections and freedoms, and slowly moves the whole Internet back into the control of the people at the top. The whole point of the Internet was the lack of “someone” at the top – it was designed for communications after nuclear attacks, after all – but we’re moving steadily in that direction.

To compound matters, many companies use NAT for ‘security’. The idea is that most users aren’t aware of how to safely run servers, so if they are hidden behind a NAT, they simply can’t. (The people who say NAT makes things secure in other ways are selling something – most likely network security equipment.) The job of network security really should be left to things like perimeter firewalls, intrusion detection/prevention devices, and hardened servers and hosts. This includes running anti-virus software on platforms that need it, and possibly on those that don’t, as well as turning off sharing services that aren’t being used, and enabling host-based firewalls. That is a real security policy. NAT isn’t.

The real problem, the one NAT is actually solving these days, is the shortage of IPv4 addresses. When TCP/IP was released on the world in 1980, there were thousands of hosts on the Internet, so the address space of just over 4 billion IP addresses seemed more than large enough. Hundreds of millions of addresses were set aside for private networks and special research projects. The network reserved for each computer to talk to itself, called the loopback, is usually referred to by its first usable address: 127.0.0.1. A lot of system administrators don’t know that the rest of that network – nearly 16 million addresses – is also set aside, never to be used for any kind of real traffic.
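
It’s easy to check from a shell prompt – on Linux (and, as far as I can tell, Solaris) any address in 127.0.0.0/8 answers, not just 127.0.0.1; some other systems only wire up the first address:

# Both of these loop back to the local machine.
ping 127.0.0.1
ping 127.200.10.5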

Now, 30 years after IPv4 was adopted, the Internet is coming close to running out of addresses. Current estimates suggest the address space will be fully assigned by the middle of 2011. It doesn’t mean the Internet is going to stop working, just that it will be harder and harder to get IPv4 addresses that are routable from the rest of the world. As more and more houses get always-on Internet connections, and more businesses wire themselves up, it will be harder to get access to the public Internet. A lot of people address this shortage with NAT, but the real solution is IPv6 addressing, which has been in the works for over 12 years. It provides 3.4×10^38 addresses – which (at this point) should hold us for quite some time.
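
If you want to see that number written out, it’s just 2^128, which bc is happy to compute:

# 2 to the 128th power – roughly 3.4 x 10^38 addresses.
echo '2^128' | bc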

Paywall Data

Everyone keeps talking about what happens when the paywalls start going up for the news sites – and we finally have some data. After three months, Newsday has a total of 35 paying subscribers. 35.

Preparing to Upgrade to OpenSolaris

I’m finally doing it – working out all the bits needed to upgrade my remaining Solaris Nevada box to OpenSolaris. It’s doing a lot more than just being a fileserver, and the upgrade will require wiping out the boot drive (I don’t have a handy spare drive laying around), so I had to go in and make sure I knew exactly what the impact of formatting the drive and reinstalling would be.

Squid Proxy Server

OpenSolaris 2009.06 has squid built in, registered with SMF. It’s just a matter of preserving my config files and making sure all the cache and log repositories aren’t on the root pool. Once I reinstall, ten minutes later I’ll have my squid server back – and updates will be managed by Sun, not the bozos at blastwave.
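
The moving parts are small. Something like the following should bring it back after the reinstall – the dataset names and paths here are my own choices, and it’s worth checking the exact SMF service name with svcs before enabling anything:

# Keep the cache and logs on the data pool, not on rpool (dataset names are examples).
zfs create tank/squid-cache
zfs create tank/squid-logs
# Point the preserved squid.conf at them, e.g.:
#   cache_dir ufs /tank/squid-cache 4096 16 256
#   access_log /tank/squid-logs/access.log squid
# Drop the saved config back in place and turn the service on.
cp /backup/etc/squid/squid.conf /etc/squid/squid.conf
svcs -a | grep -i squid        # find the exact FMRI first
svcadm enable squid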

DynDNS ddclient

I’ve got a copy of the script and the config file I’m using. It’s a matter of rewriting an SMF manifest to import, and making sure the pieces are in place. I should probably deploy this on other platforms as well – DynDNS isn’t going to penalize me for running multiple copies.
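
The SMF side is mostly boilerplate once the manifest is written. A sketch of the import, assuming the manifest lands in /var/svc/manifest/site and the service is simply named ddclient (both of those are my own choices):

# Validate the hand-written manifest, import it, and turn the service on.
svccfg validate /var/svc/manifest/site/ddclient.xml
svccfg import /var/svc/manifest/site/ddclient.xml
svcadm enable ddclient
# Make sure it actually came up.
svcs -l ddclient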

DNS Server

Another simple service – it takes a hosts file and creates zone files and so on. Porting it is just a matter of grabbing a copy of the script and the hosts file and running along my merry way, adding a cron job as I go.
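
The cron entry is the only piece that has to be recreated by hand. It looks something like this – the script name and paths are placeholders for my own, and the reload command assumes BIND:

# Rebuild the zone files from the hosts file every night at 2:15am, then reload the nameserver.
15 2 * * * /usr/local/bin/build-zones.sh /etc/hosts && /usr/sbin/rndc reload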

Nagios Server

If you look into the contrib repository for OpenSolaris, you can find packages for nagios, nagios-plugins, and nrpe. Perfect. Well, not perfect, as it’s Nagios 3.0.3, but it’s at least somewhat modern, and it means I don’t have to mess with blastwave again. As before: save config files, move things over.
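
For the record, pulling them in looks roughly like this – the publisher URL is the contrib repository as I remember it, and the package names are the ones the repo lists, so double-check both with pkg search:

# Add the contrib repository as a publisher, then install the monitoring pieces.
pkg set-publisher -O http://pkg.opensolaris.org/contrib/ contrib
pkg install nagios nagios-plugins nrpe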

Firefly/mt-daapd Server

I’ve decided not to move this over. We simply don’t use it as much as I thought we would, and it’s not worth the trouble to move. If, at some point, we need it, I’ll dig up the instructions to get it running and blog it here.

UPDATE: I went ahead and did the upgrade (after saving the rpool to another ZFS pool). Two hours of reinstalling later, and we’re all good. I’ve got a couple of things left to do, but we’re back in business.
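
For anyone wondering how the rpool got saved, it was nothing fancy – a recursive snapshot streamed onto the data pool. A minimal sketch, with the snapshot name and destination path being my own choices:

# Snapshot every dataset in the root pool, then dump the whole stream to a file on the data pool.
zfs snapshot -r rpool@pre-upgrade
zfs send -R rpool@pre-upgrade > /tank/backups/rpool-pre-upgrade.zfs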

Techies vs Normals

I read a post this morning in Google Reader comparing technical people with everyone else. I’ve never been a fan of the put-downs aimed at non-technical people – part of the reason I’m paid to go to work during the day is that I have a skill set most people don’t have. If everyone else were like me, I’d have to find a new job. Anyway, the author of the post had a really nice way of describing the non-techs out there – “normals”. If anything, it’s a slam against the technical people, and it describes the breakdown pretty well.

Where to Invest: Lenses or Bodies

For a long time the accepted wisdom was to put money into camera lenses and not camera bodies. This makes a certain amount of sense – Nikon lenses from 1960 can be used, with almost no loss of functionality, on most of Nikon’s current lineup of DSLRs. Non-autofocus lenses don’t magically become autofocus, but they work. The investment is protected. (And before you complain about the D40/D40x/D60 cameras and their limited support for old lenses – keep in mind that most people who have a significant investment in glass from the last 40 years aren’t worried about the cost difference between a D60 and a D90.)

Part of that wisdom, however, relied on the fact that the mechanics of the cameras didn’t change all that often. Apart from integrated light meters and improved autofocusing systems, the fastest way to change the personality of a film camera was to load new film. A photographer might switch from Kodak Tri-X to Fuji Velvia, depending on whether they wanted the camera to perform well in low light or to capture every color highlight of a sunset.

The DSLR revolution, however, is turning common wisdom on its head, and it’s financially to the benefit of the camera makers. As the sensor in a DSLR can’t be swapped out to optimize it for a certain kind of shot, a lot of research is going into making a single sensor that can do everything. At first, it was laughable. My D70 was almost unusable at ISO 800. My D200 is acceptable up to ISO 1200 or so. The new FX cameras run all the way to ISO 6400 without breaking a sweat, and can be pushed to ISO 102,400 if needed.

If you think about it, a 50mm f/1.4 @ ISO 1600 (the limit of push-processed Tri-X film, and coincidentally of my D200) has the same light-gathering ability as an f/2.8 @ ISO 6400 – which means much cheaper lenses can be handheld for indoor work. If you go up to the limits of the sensor in the D3S, you get 4 more full stops. Granted, at this insane ISO you are going to lose a lot of edge detail and color clarity to sensor noise, but look back at what we are comparing with – Tri-X push-processed to ISO 1600 – which is 6 stops lower.
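
The stop math checks out, if you want to run it yourself – each full stop multiplies the f-number by the square root of 2 and doubles the ISO:

# Stops of light lost going from f/1.4 to f/2.8 (comes out to 2).
echo 'l(2.8/1.4) / l(sqrt(2))' | bc -l
# Stops of sensitivity gained going from ISO 1600 to ISO 6400 (also 2).
echo 'l(6400/1600) / l(2)' | bc -l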

Even if you aren’t looking at the über-high end, remember that a $900 camera body can now capture this image, indoors and handheld @ ISO 3200. It’s also got a roll of film well over 1000 shots long, and there’s no worry about changing film types, or forgetting you had ASA 100 loaded when you walked indoors. In my book, the smart money is in camera bodies these days – well, a handful of fixed-focal-length primes, and camera bodies. Replace the camera body every 3 or 4 years, and slowly build that collection of lenses. If you’re pressed, go with the body.
