The Guru College

Replacing Photo Stream with Lightroom and Dropbox

As part of my switch away from Aperture, I’m losing one of the best features of iCloud: Photo Stream sync. When using iPhoto or Aperture, you can have all the photos taken with all iOS devices automatically backed up to your computer’s hard drive, seamlessly, and in the background. If you aren’t using either of these apps, you have to do it all by hand.

What I’ve done now is set Lightroom up to “watch” the Camera Uploads folder in Dropbox. I’ve also downloaded the Dropbox app to my phone and turned on automatic photo sync, and set up my Macs to automatically sync SD cards and whatnot over to the Camera Uploads folder. As long as Lightroom is running, the contents of the folder get pulled into Lightroom – copied over to the fileserver and removed from Dropbox. So I can always see what’s left to import, and I can restore images from Dropbox if needed.
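For the SD card half of that, something along these lines works as a feeder for the watched folder. Treat it as a sketch: the card mount point, file extensions, and folder paths here are assumptions, not my exact setup.

#!/usr/bin/env python
# Sketch: copy new images from a mounted SD card into Dropbox's Camera Uploads
# folder so Lightroom's watched-folder import picks them up.
import os
import shutil

SD_CARD = "/Volumes/NO NAME/DCIM"   # assumed card mount point
CAMERA_UPLOADS = os.path.expanduser("~/Dropbox/Camera Uploads")
EXTENSIONS = (".jpg", ".jpeg", ".cr2", ".nef", ".mov")

def sync_card():
    for root, dirs, files in os.walk(SD_CARD):
        for name in files:
            if not name.lower().endswith(EXTENSIONS):
                continue
            dest = os.path.join(CAMERA_UPLOADS, name)
            if os.path.exists(dest):
                continue   # already copied on an earlier run
            shutil.copy2(os.path.join(root, name), dest)
            print("copied %s" % name)

if __name__ == "__main__":
    sync_card()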

This setup also lets me sync images from multiple Macs or even Linux boxes that I have, which means I don’t always have to go to the Shed Office to start the photo import.

The failings of this setup:

  1. Dropbox doesn’t handle shared Photo Streams.
  2. You can’t publish to a Photo Stream (shared or not) from Lightroom.
  3. The Dropbox iOS app needs manual intervention.

To be fair to Dropbox and Lightroom – as far as I know, there are no APIs for 3rd-party apps to receive the contents of a Photo Stream outside of Apple’s own apps. This means that unless an iCloud API is released, this will never happen. The other annoyance is that the Dropbox app can only upload photos when it’s active, and the background process rules in iOS limit this to 5 minutes. So, if you are uploading a lot of images, you’ll need to either keep the Dropbox app open (and the phone awake) or re-wake the app every 5 minutes.

However, it all works, and with the exception of the above caveats, it’s pretty smooth. Smooth enough for me to publish here.

Aperture, I Give Up

It’s been a few weeks since I’ve been able to post an image to the photo blog with any kind of regularity. Between speed issues, OpenCL issues, and application lockups, it’s been incredibly painful to get anything done in terms of photo editing. About a week ago, I decided to rebuild the entire preview and thumbnail cache from the library, thinking in part that slow file access was getting me. After a solid week of rebuilding time, my Aperture library is down from 140GB to 70GB or so, but if anything it has been less stable since I did this.

Just trying to import the pictures from this weekend has taken me almost 45 minutes – only to not be able to do anything with them. Any time I try to access the project containing the current imports, Aperture locks up solid. I find myself once again running Aperture’s built-in library repair functions.

I’m sick and tired of this crap. Thankfully, I was smart enough to store the 170,000 image files outside of Aperture’s library, so I’ll be able to get to all the photos even if Aperture never returns to its former working state. Once (if?) I get the Aperture library sorted out, I’m going to back it up and never touch it again, unless I’m trying to get an old edit out of the system. (One of the downsides of both Aperture and Lightroom – edits are non-destructive but proprietary, so they don’t move between products.)

Lightroom, here I come!

Setting up Carbon and Whisper

As a note: the author of Graphite decided to put everything into /opt/graphite, which is sensible and unlikely to get overwritten by anyone else. You must figure out your storage stack before you go much further. Whisper and carbon have pretty high I/O needs, mostly for IOPS, and the graphite webapp is read-heavy. There are ways to speed things up and make them more efficient, but you really want a dedicated disk or LUN for /opt/graphite. This way, you don’t overwhelm the rest of your system when you bulk-load data into Graphite, or when you add 200 new hosts with service checks and suddenly need to create thousands of small files as fast as you can while still keeping up with your usual read/write workload.

Back to the install process.

The pre-requisite is pycairo. When you use the bdist_rpm magic from the last post on Nagios and Graphite to get reliable RPM installers for the whisper and graphite packages, make sure to have them require python24 (or greater) and pycairo. Alternatively, you can do it as a one-off and hate yourself for it in a few months, when you look at your notes, which aren’t explicit enough, and can’t remember what it was you needed to install. Your choice.
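If you take the RPM route, the dependency part is just a couple of lines in each package’s setup.cfg. This is a sketch – the exact names depend on what your distro calls its python and pycairo packages:

[bdist_rpm]
requires = python24, pycairo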

Once everything is installed, head into /opt/graphite/conf, where you will find a bunch of configuration files. Right now, we are worrying about carbon.conf, relay-rules.conf, and storage-schemas.conf. These files determine how and where data is written and how long it’s retained, and they set you up for multi-server storage of data sets. Remember, folks: scale is good for you, and it makes you sleep better at night.

We are going to run carbon-cache and carbon-relay. carbon-cache is responsible for writing the incoming data to the files on disk, and carbon-relay is what you use to relay data to multiple carbon-cache instances (on the same host or on other hosts). While you can have carbon-cache listen directly to incoming traffic, being able to multiplex the data to multiple servers requires carbon-relay, so we may as well just set that up now.

First, set up data retention policies in storage-schemas.conf:

[everything_1min_13months]
priority = 1
pattern = .*
retentions = 60:43200,900:105120

This tells carbon to create whisper files with 60 second precision for 30 days (43200 60-second intervals) and with 15 minute precision for three years (105120 900-second intervals).
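If you want to sanity-check that arithmetic, it’s just seconds-per-point times number of points. A quick sketch in Python:

# Sanity check of the retention math above.
retentions = [(60, 43200), (900, 105120)]
for seconds_per_point, points in retentions:
    days = seconds_per_point * points / 86400.0
    print("%3ds precision x %6d points = %.0f days" % (seconds_per_point, points, days))
# prints 30 days for the first archive, 1095 days (about three years) for the second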

Next, set up relay-rules.conf:

# You must have exactly one section with 'default = true'
[default]
default = true
servers = 127.0.0.1
destinations = 127.0.0.1:2014

All the traffic is going to localhost. It’s trivial to add more servers and destinations to this file as you add more nodes. Keep it simple at first, while you learn and debug the system; it will serve you well over the long haul.
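For illustration only (the rule name, pattern, and addresses here are made up), an extra rule that ships a subset of metrics to a second node looks like this, with the default section still catching everything else:

[webservers]
pattern = ^servers\.web.*
servers = 10.0.0.2
destinations = 10.0.0.2:2014

[default]
default = true
servers = 127.0.0.1
destinations = 127.0.0.1:2014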

The final file to edit is carbon.conf, and it’s a bit of a bear. The graphite webapp reads this file, as do the carbon daemons, and there are just about a million options to choose from. The good news is that the only changes you have to make are to reverse the ports for the carbon-cache and carbon-relay daemons, so that generic clients can keep a stock configuration.
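The swap I mean looks roughly like this. Treat it as a sketch: the option names come from the stock carbon.conf example and vary a bit between Graphite releases, but the idea is that the relay answers on the standard client-facing ports while the cache takes the relay’s usual ports, which is also why relay-rules.conf above points at 127.0.0.1:2014.

[cache]
# normally 2003; the cache moves out of the way so clients can stay stock
LINE_RECEIVER_PORT = 2013
# this is the port relay-rules.conf points at
PICKLE_RECEIVER_PORT = 2014
CACHE_QUERY_PORT = 7002

[relay]
# the relay takes the stock client-facing ports
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_PORT = 2004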

Once the config files are set up, launch the carbon-cache and carbon-relay daemons by issuing:

/opt/graphite/bin/carbon-cache.py start
/opt/graphite/bin/carbon-relay.py start

Log files are in /opt/graphite/storage/log, which should be enough to get started with debugging and testing the carbon client applications. There is an example client application in /opt/graphite/bin called carbon-client.py, which can be used to test the setup.
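If you’d rather poke at it by hand, the plaintext protocol carbon speaks is just “metric value timestamp” over TCP, one line per data point. Here’s a minimal sketch; it assumes the relay is answering on port 2003 after the port swap above, and the metric name is made up:

#!/usr/bin/env python
# Minimal sketch: push one test value at carbon over the plaintext protocol.
# Assumes the relay listens on localhost:2003; the metric name is made up.
import socket
import time

CARBON_HOST = "127.0.0.1"
CARBON_PORT = 2003   # the relay's line receiver after reversing the ports

def send_metric(path, value, timestamp=None):
    if timestamp is None:
        timestamp = int(time.time())
    line = "%s %s %d\n" % (path, value, timestamp)
    sock = socket.create_connection((CARBON_HOST, CARBON_PORT), timeout=5)
    sock.sendall(line.encode("ascii"))
    sock.close()

if __name__ == "__main__":
    send_metric("test.gurucollege.heartbeat", 1)

Watch the logs (and the whisper files under /opt/graphite/storage/whisper) to confirm the data point landed.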

Comments

I had inadvertently enabled the setting to disable comments on any post older than 14 days. This has been fixed. I do want people to be able to get in touch with me about articles I’ve written, and with my posting frequency, it would be better to always allow comments and deal with the occasional spam-bot. So, comments are on. I’m going to poke around a little, and if I find posts with comments turned off, I’ll just get out the SQL editor and go to town.

Random Password Update

A friend of mine in the IT Security profession gently suggested that I not use uuidgen to generate random passwords, especially now that I’ve posted to this blog that I intend to do that. He pointed out that uuidgen produces unique strings, not random ones. He went on to suggest that it’s much better to use /dev/random as your source of entropy than /dev/urandom.

All duly noted, and applied. I’m once again changing all my passwords to be 40+ characters of random line noise.
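For the curious, the generator now looks roughly like this. It’s a sketch rather than my exact script: it reads raw bytes from /dev/random and maps them onto printable line noise, discarding bytes that would skew the distribution.

#!/usr/bin/env python
# Sketch of a /dev/random-backed password generator (not my exact script).
import string

CHARS = string.ascii_letters + string.digits + string.punctuation  # 94 characters

def random_password(length=40):
    password = []
    with open("/dev/random", "rb") as rng:
        while len(password) < length:
            byte = ord(rng.read(1))
            # reject bytes that would introduce modulo bias (188-255)
            if byte < len(CHARS) * (256 // len(CHARS)):
                password.append(CHARS[byte % len(CHARS)])
    return "".join(password)

if __name__ == "__main__":
    print(random_password(40))

Fair warning: /dev/random can block until the kernel has gathered enough entropy, so generation may pause for a moment.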

Google Apps Dashboard Nagios Check

The download link for the check_google_apps.pl Nagios check was failing due to a bad RewriteRule. I’ve changed the file location to http://gurucollege.net/uploads/check_google_apps.txt.

Making Hot Sauce

Hot Sauce

  • 2 lbs of peppers
  • white vinegar
  • 1/4 cup sugar (or salt)
  • chopped, peeled garlic

WARNING: If you are using truly hot peppers (habaneros, scotch bonnets, or anything of that ilk), wear gloves for most of this process, and be careful not to do stupid things like rub your eyes with your gloved hands. There’s nothing quite like putting hot pepper juice directly into your eyes. With that warning out of the way:

First, wash the peppers and cut the stems off. Make sure to cut an entry into the body of each pepper so the expanding gas inside doesn’t cause them to explode. Blacken the peppers on the grill, or put them on a cookie sheet or in a lasagna pan and broil them in the oven. Turn them every few minutes to make sure they cook evenly.

Once the peppers are cooked, cut them up into 1 or 2 inch chunks and put them in a saucepan, along with the white vinegar and sugar (or salt). The vinegar shouldn’t quite cover the tops of the peppers. Turn the stove on low heat and let the vinegar and peppers start to simmer together.

Now, you can add the garlic to the mix. I used peeled, chopped garlic, and a lot of it. I have experimented with adding tomatoes, onions and even frozen pineapple, so have fun with this. This is where a lot of the secondary flavor of the hot sauce is going to come from.

Let the whole mix simmer on the stove for at least 30 minutes. Ideally you’d let it cook for a couple of hours, but I’ve gotten acceptable results at anything over 30 minutes. Once it’s cooked, throw it into a blender and mix it down to the desired consistency. Depending on how much vinegar you used, the sauce will be thinner or thicker. I like it to be more like salsa than Texas Pete, so I try to use as little vinegar as possible.

Once it’s blended, put it into jars and into the fridge. It will keep for a month or so in the fridge, and is good on just about anything.
