Dropbox as a Content Repository

In my previous post about the technical changes I made when rebooting the blog, I mentioned that the one aspect of the new setup I considered novel enough to merit a separate post was using Dropbox as a universal content repository, letting me administer the site from anywhere with Dropbox access. This is that post.

First, some background. When I initially began to scope out switching The Angry Drunk to a static blogging engine, one of my hard criteria was the ability to author content and manage posting from my home Mac, my work PC, my iPad, and even my iPhone. The nice thing about most static engines is that they accommodate that requirement by avoiding control panels, web apps, and databases, and simply working from a directory of text files.

My initial thought was to use some sort of file-transfer software, such as that built into Panic’s Diet Coda, to move the files, and the SSH abilities of that app or Panic’s Prompt to handle the command-line work. That system would have worked, but it would have been annoyingly clumsy.

Fortunately, while reading the many useful articles that Gabe Weatherhead has written about his transition to Pelican, I came across a link to a post detailing how to set up Dropbox on a remote host. I’ll forgo going through the Dropbox installation and setup, as the post does a better job of that than I could. Of course, if anyone has any specific questions, feel free to hit me up.

Once I had installed Dropbox I ran into my first issue. During the setup I naturally attached the server to my normal Dropbox account. About 30 seconds after hitting enter I realized what a mistake that was. The last thing I want is my entire Dropbox directory mirrored on the paltry two gigabytes I’m paying my hosting provider for. I quickly canceled the transfer, deleted the Dropbox directory, and had myself a think.

While it is possible to control which directories are synced to a particular host via Dropbox, configuring that on the command line was more effort than I was willing to put into this. What I ended up doing was signing up for a new Dropbox account that is used solely for this site. I then deleted all the crap that Dropbox adds by default and created a single pelican directory there. Finally, I shared that directory with my personal account, so it now shows up every place I have Dropbox.
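For what it’s worth, the command-line route I decided against looks roughly like this, using the dropbox.py helper script that a headless install provides. This is a sketch, not my setup; the directory names are placeholders:

```shell
# Selective sync from the command line via the dropbox.py helper script.
# Directory names below are placeholders for whatever you don't want mirrored.
dropbox.py exclude add ~/Dropbox/Photos ~/Dropbox/Documents

# List what is currently excluded from syncing on this host
dropbox.py exclude list
```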

Inside the Dropbox-hosted pelican directory I placed all of Pelican’s support files as well as the content directory that Pelican looks for. The structure looks like this:

pelican
|--configuration
|   |--pelicanconf.py
|   |--pelicanconf-pub.py
|   |--publish.sh
|--content
|   |--annex
|   |--blurbs
|   |--extra
|   |--images
|   |--links
|   |--pages
|   |--post
|   |--tools
|--logs
|   |--pelican.log
|--themes

The configuration directory contains Pelican’s configuration file(s) and a small shell script that manages rebuilding the site. The content directory contains the raw Markdown files that build the content.

The content is segregated into directories based on the post type (posts, pages, blurbs, annex posts, and linked-list items). The images directory contains image and video files. The extra directory contains web-server support files such as .htaccess and robots.txt. The tools directory contains my mint and Fever° installations.

The logs directory contains a logfile that I’ll explain in a bit.

Lastly, the themes directory contains the templates and CSS that make up the site’s custom theme.

Because this is all contained in my Dropbox directory, and because everything in here other than media assets is a plain-text file, I can edit these files, and thus the site, anywhere that I have a text editor and Dropbox.

The final bit of this whole system, and the part that I’m somewhat proud of, is the publish.sh file and the two Pelican configuration files. There are two configuration files because one has content and output paths that make sense on my local iMac, while the other has paths that make sense on the web server. I have a publish.sh script on my iMac that calls the local config file, while the server-side script calls the remote config file. This way I can build the site on my local machine for testing, or on the remote host for publishing — using the same content and theme files — by choosing which script to run.
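As a sketch of how the two configuration files might differ — every path below is an illustrative assumption rather than my actual configuration, except the server-side content path, which is taken from publish.sh:

```python
# pelicanconf.py — local build on the iMac (paths are hypothetical)
PATH = '/Users/me/Dropbox/pelican/content'        # content synced via Dropbox
OUTPUT_PATH = '/Users/me/Sites/angrydrunk-test'   # local output, for testing

# pelicanconf-pub.py — server build (content path matches publish.sh;
# the output path here is a guess)
PATH = '/home/dlines13/pelican/content'
OUTPUT_PATH = '/home/dlines13/public_html'
```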

For reference, here is what publish.sh looks like:

    #!/bin/sh
    echo "begin sitebuild - publish" >> /home/dlines13/pelican/logs/pelican.log
    date >> /home/dlines13/pelican/logs/pelican.log
    /usr/local/bin/pelican /home/dlines13/pelican/content -s /home/dlines13/pelican/configuration/pelicanconf-pub.py >> /home/dlines13/pelican/logs/pelican.log 2>&1
    date >> /home/dlines13/pelican/logs/pelican.log
    echo "end sitebuild - publish" >> /home/dlines13/pelican/logs/pelican.log

Going through the script line by line: it writes a header to pelican.log, writes the date and time, runs Pelican with the “publish” configuration file (sending any messages to the log), writes the date again, and finally writes a footer.
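The one non-obvious bit is the `2>&1` on the Pelican line, which sends error output to the same log as normal output. A minimal demonstration of that append-and-redirect pattern, using a throwaway log file and a deliberately failing command:

```shell
# Demonstrates the ">> file 2>&1" pattern publish.sh relies on:
# both stdout and stderr end up appended to the same log file.
LOG=$(mktemp)
echo "begin sitebuild - publish" >> "$LOG"
ls /no/such/path >> "$LOG" 2>&1 || true   # the error message lands in the log
echo "end sitebuild - publish" >> "$LOG"
cat "$LOG"
```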

In addition to my manually running publish.sh, my web server also has a cron job that runs the script every 30 minutes.
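The cron side is a single crontab entry. A sketch — the path to publish.sh is my assumption based on the directory layout above, and yours would differ:

```shell
# Hypothetical crontab entry (added via `crontab -e`); runs the build
# script every 30 minutes. The publish.sh path is an assumption.
*/30 * * * * /bin/sh /home/dlines13/pelican/configuration/publish.sh
```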

So my workflow goes like this:

  1. Write a post in whatever text editor is at hand.
  2. Save it to the appropriate folder in Dropbox.
  3. SSH into my host and run publish.sh, or simply wait (up to 30 minutes) for the cron job to run it for me.

There you go.