
mdadm's rules of order

My GIS-using client recently had a power failure at their office, and their GIS fileserver failed to fully boot once power was restored. This system had a 5-disk RAID5, and according to /proc/mdstat, all five drives were marked as spares and the md was not active.

So I tried to reassemble, but mdadm reported that it had assembled 3 drives and 1 spare, which was not enough to start the array. Naturally I had already checked SMART to see if anything was reporting as failed there, and nothing was. I know drive manufacturers can fudge their SMART results, but I still figure it's a pretty decent indicator.
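For reference, the usual sequence in this situation looks something like the following (device names here are hypothetical stand-ins, not the client's actual layout):

    # Stop the inactive array so its members can be reassembled
    mdadm --stop /dev/md0
    # Inspect each member's superblock: the event counts and device
    # roles tell you which drives still agree about the array
    mdadm --examine /dev/sd[bcdef]1
    # If the event counts are close, forcing assembly will often bring
    # the array back up, at the risk of minor inconsistency
    mdadm --assemble --force /dev/md0 /dev/sd[bcdef]1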

Version Control for System Administrators

It's not uncommon for me to set up a local git repo for a project I'm working on, and I understand the benefits of using version control for software projects. But for some reason, working a central VCS into my day-to-day workflow with multiple servers has eluded me.

I've had a http://projectlocker.com account, and I just signed up with http://beanstalkapp.com. My needs aren't great: I just want to keep the occasional script and/or configuration file in a central location. You know, the "cloud". You could create a new repo for each parent directory you want in version control, but free accounts on these VCS hosts limit how many repos you get.

I don't want to have to download the whole tree of configs and scripts every time I want something. But all the VCS tutorials would have you do just that, because that's what makes sense for a software project.
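One way around that, sketched here with a hypothetical repo URL and paths, is git's sparse checkout (git 1.7 and later), which only materializes the subtrees you ask for:

    # Clone without populating the working tree
    git clone --no-checkout git@example.beanstalkapp.com:/configs.git
    cd configs
    # Enable sparse checkout and list only the paths you want
    git config core.sparseCheckout true
    echo "webserver1/etc/apache2/" > .git/info/sparse-checkout
    git checkout master

That way one repo can hold everything, you pull down only what you need, and the free-account repo limits matter a lot less.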

Secure Google searches from Chrome

Google has an SSL encrypted version of their search engine in beta, and it's nearly as speedy as the unencrypted standard search. Even for inane things like my Google searches, I'd rather it be secure, so I changed my default search in Google Chrome to use the https version.

On my Mac, I choose Preferences from the Chrome menu, and in the Basics tab, click the Manage button next to Default Search. Click the + button at the bottom of the window that opens to add a new search engine. Name and Keyword can be anything; I used Google Secure for both.

Then for the URL, enter (all one line, of course):

https://www.google.com/search?{google:RLZ}{google:acceptedSuggestion}{google:originalQueryForSuggestion}sourceid=chrome&ie={inputEncoding}&q=%s

Click OK, click Make Default, and you're done. Now when you type something in the URL/search bar, your search will be encrypted.

How I create LVM volumes

This is one of those things I'm mostly posting for my own reference, as I often forget some of the steps involved, e.g. the UUID stuff.

  • Create partitions on the disks you want to use. You can use parted or cfdisk; I've always used fdisk, but its man page recommends parted or cfdisk instead, so I suppose that's what I'll be using from now on.
  • Initialize the partitions (physical volumes) with pvcreate:
    pvcreate /dev/sdb1 /dev/sdc1
  • Create a new volume group if you don't already have one you want to use. Here we'll name it simply vg0, with a physical extent (PE) size of 16M, and add the two disks we initialized in the previous step:
    vgcreate -s 16M vg0 /dev/sdb1 /dev/sdc1
  • We want to use all the free space these disks afford, so we'll look up the Total PE number with vgdisplay and then create a logical volume called "backup":
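    lvcreate -l <TotalPE> -n backup vg0

    (<TotalPE> is a placeholder for whatever Total PE count vgdisplay reports on your volume group; newer LVM2 versions also accept -l 100%FREE if you'd rather skip the lookup.)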

Simple speed comparison between cp, mv, and rsync

One of my clients works with huge GIS data files, and they want to expand their storage space. They currently have 4x500GB drives in RAID 5 for about 1.5TB of space, and they want to replace it with 5x1TB drives in RAID 5 for around 4TB of space. As a separate project I replaced all the drives in their Buffalo TeraStation Pro II NAS with 1.5TB drives to give them about 5.5TB of formatted space for backups of their GIS data.
 
With the latest firmware for the Buffalo NAS I got NFS support, which works well because their GIS server is Linux. I created an NFS mount on the GIS server and ran rsnapshot to do an initial backup. I used /usr/bin/time just to see how long it would take to rsync that ~1TB of data, and it took 38.5 hours!
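For the measurements themselves, something along these lines does the job (paths are placeholders; note /usr/bin/time rather than the shell builtin, so the -v resource stats are available):

    # GNU time's -v flag reports wall clock, CPU, and I/O statistics
    /usr/bin/time -v cp -a /gisdata /mnt/nas/cp-test
    /usr/bin/time -v rsync -a /gisdata/ /mnt/nas/rsync-test/
    # mv across filesystems is really a copy followed by a delete,
    # so it's fair game for the same comparison
    /usr/bin/time -v mv /gisdata-copy /mnt/nas/mv-test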
 

Get VIM syntax highlight colors when editing Drupal .module files

I'm doing more with Drupal lately, and working through the book "Pro Drupal Development". I do my editing exclusively in vim, so it's kind of annoying that vim detects files with the .php extension, but not others like .module, .inc, and .install that have PHP code in them. This is easily fixed though, at least on my Ubuntu 8.04 machine (YMMV).

  1. Create a ~/.vim directory if you don't already have one.
  2. Copy /usr/share/vim/vim71/filetype.vim to your ~/.vim directory.
  3. Find the line like this: "au BufNewFile,BufRead *.php,*.php\d setf php" and add *.module,*.inc,*.install to the list of *.php extensions. Your resulting line should look something like this: au BufNewFile,BufRead *.php,*.php\d,*.module,*.inc,*.install setf php

Next time you open one of these files, you should have regular syntax highlighting.
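An alternative, if you'd rather not carry a copy of filetype.vim across vim upgrades, is a single autocmd in your ~/.vimrc; it should have the same effect:

    " Treat Drupal's extra extensions as PHP
    au BufNewFile,BufRead *.module,*.inc,*.install setf php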

Not enough entropy with /dev/random

A server I'm configuring needs to be able to generate GPG keys within the time frame of a web request, using the software I've been given. GPG will only use /dev/random, which gets its entropy from events that generate interrupt requests. After spending a while trying to think up ways to create random interrupt requests, I finally settled on the idea of replacing /dev/random with /dev/urandom. The problem is that this only works until you reboot. On my own, I tried to figure out how to replace /dev/random with /dev/urandom using udev, without having to resort to an init script that did it after udev was done doing its thing.
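A sketch of the udev approach, assuming a udev version that still honors NAME= for renaming device nodes (the rule file name and contents are illustrative, not tested):

    # /etc/udev/rules.d/40-dev-random.rules (hypothetical file name)
    # Move the real random device aside...
    KERNEL=="random", NAME="random-real"
    # ...and point /dev/random at urandom instead
    KERNEL=="urandom", NAME="urandom", SYMLINK+="random"

Since udev recreates the device nodes at every boot, a rule like this survives reboots, which is exactly what a one-off mknod or symlink swap doesn't do.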

Using the new chroot jail SFTP server in OpenSSH 5.1

It seems like a fairly common occurrence that I'm asked to set up a secure FTP server for one reason or another. Typically I use vsftpd and lock it down pretty well. As a plain-jane FTP server, vsftpd is pretty good, but the FTP protocol itself doesn't have the encrypted transfer capabilities that I'd like to see. OpenSSH 5.1 has a new "internal-sftp" server which allows you to deny shell access to SFTP users as well as put them in a kind of chroot jail. It's exactly what I need, and pretty easy to set up, though it's not without its gotchas. I'm running it on a Debian 4.0 (Etch) machine, but it'll compile on many UNIX-y platforms.
 
My needs are simple, so I built it with few options:
$ ./configure --exec-prefix=/usr --sysconfdir=/etc/ssh --with-pam
$ make
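After make install, the sshd_config side is the standard internal-sftp recipe; the group name below is an assumption, and the chroot ownership requirement is one of the gotchas:

    # in /etc/ssh/sshd_config
    Subsystem sftp internal-sftp

    # Members of this (hypothetical) group get chrooted with no shell
    Match Group sftponly
        # The chroot target must be owned by root and not group- or
        # world-writable, or sshd will refuse the login
        ChrootDirectory /home/%u
        ForceCommand internal-sftp
        AllowTcpForwarding no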
 

Automatic Fantastico upgrades are not my friend

This site has been broken for a little while because a few weeks ago I thought it would be a good idea to upgrade to Drupal 6. My hosting provider uses Fantastico to make package installation easy, and since I installed Drupal through it, I figured I'd be fine upgrading through it too. If you upgrade manually rather than through Fantastico, it thinks you're still on the old version. I'm sure there's a way to tell it otherwise, but I haven't spent the time to figure it out.
 

Turning your DD-WRT device into a real AP

Unless you're a home user, you probably expect your wireless access points to simply be media converters: Ethernet to 802.11 and vice versa. But consumer-grade access points usually route as well. That would be fine as long as you could turn the function off easily, but this isn't always the case (the only exception I can think of is the Apple AirPort line, and even then it's not real clear).