“A nearly impenetrable thicket of geekitude…”

Software

About software, as opposed to hardware.

Quarterly Review

Posted on October 1, 2020 at 14:36

It’s the first day of October, and an OmniFocus task tells me it’s time to review what happened last quarter so that I can set goals for the next three months.

My mind is blank. I can’t think of a single thing I’ve achieved in the last three months. I mean, I remember doing some things; they are just unmoored in time. Did that thing happen in June, or was it February?

I’ve always had a pretty bad memory for events, and I think my brain is working as well as it ever has in most ways. I’m more inclined to the theory that I’m just experiencing the passage of time differently now that every day is the same as the last. The usual milestones — birthdays, anniversaries, conferences — just don’t exist any more and I can’t navigate without them.

I have often kept workbooks for long periods when it made sense, with 2003–2013 being the most recent block. I fell out of the habit, I think, when the nature of the work I was doing changed. These days, I tend to write copious working notes in applications like Bear but there’s no time component to those.

Coincidentally, 2013 seems to have been the year when I first tried journaling as a way of sorting out my thoughts at the end of the day. That seems to have been too vague a goal, and it didn’t stick. It did give me my first introduction to the Day One journaling application, however, and I’ve played with it a couple of times since.

How do I know this? Well, it’s all in Day One, of course. I still own the application, and seem to have been grandfathered into a lifetime legacy “Plus” status which gives me the basics of the application and (most importantly) sync across all my devices for free.

My new plan, then, is to start journaling again from today, but this time with the very specific goal of at least recording what I’ve been working on. In three months’ time, we’ll see whether a review of the quarter is feasible again, even if the temporal structure of my life hasn’t returned more naturally.

OmniFocus Automation

Posted on October 13, 2019 at 20:16

I’ve been using The Omni Group’s OmniFocus personal task manager for a decade or so now. Every few years I get the sense that I’ve strayed too far from principles and rebuild my system from scratch; 2019 has been one of those years for me. As a result I’ve been spending a lot of time with Tim Stringer’s Learn OmniFocus site, and in particular looking at how other people tailor this very flexible software to their particular needs.

A recent Learn OmniFocus workflows interview with Jason Atwood reminded me that one of the coolest features of the Mac version of OmniFocus is the ability to select a subset of the folders and projects in your database and focus in on them: if you do this, you can apply various perspectives to just those selected areas as if the others didn’t exist. I used to do this, but over time I ended up with too many folders for it to be practical. It’s an error-prone pain to select several items and then invoke the focus action. I could reorganise my database so that I only have a couple of top-level folders, but I don’t have a perfectly hierarchical life. I want to be able to slice things up in several different ways.

Pain leads to automation, and one recent evolution of OmniFocus is towards a cross-platform automation system based on JavaScript. With a bit of help from Sal in the Omni Group Slack’s #automation channel, it turned out to be pretty easy to make plugins to select arbitrary groups of projects and folders and focus them. I’ve pushed a repository to GitHub if you’re interested in the code.

Here’s what my Automation menu looks like in OmniFocus now:

Automation Menu

So, it’s pretty easy to focus in on specific clients, or on “Work” in general, or on “Home” (defined as “not Work”).

I could bind each of those actions to a different shortcut key using the system keyboard preferences (as you can see I’ve done with the “Promote/demote” action). I have a problem remembering lots of bindings, though, so I turned to Keyboard Maestro. This is another piece of Mac software I’ve been using to some extent for ages, but to which I’ve been reintroducing myself this year through David Sparks’s “field guide”. Amongst other things, David taught me about conflict palettes.

Here’s my OmniFocus group in Keyboard Maestro:

Keyboard Maestro "OmniFocus" Group

You can see that I have a number of macros, each bound to the same Control+F shortcut key (the group is also set to be active only when OmniFocus is the current application). Each of those macros simply invokes the appropriate menu item in OmniFocus. This is what pops up if you use a conflicted binding like Control+F:

Conflict Palette

Pressing one of the highlighted letters (‘D’, ‘H’, or ‘W’ in this example) after Control+F dismisses the conflict palette and invokes the identified macro, which in turn invokes the relevant plugin.

I find using a couple of keystrokes like this really smooth, by comparison with manually selecting multiple items with the mouse or even selecting a menu item. I often find that moving complexity out of the user interface into automation means I’ll actually use something, where it was previously too much hassle to bother with. I feel more focused already.

zsh: Not Yet

Posted on October 9, 2019 at 15:43

In macOS 10.15 Catalina, Apple have decided to change the default shell from an antediluvian version of bash to a very recent version of zsh.

One reason for this appears to be bash’s switch to GPLv3 a while back. Whatever the reason, though, they are flagging this change up pretty heavily every time you open a shell:

The default interactive shell is now zsh.
To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.

Switching to a new shell will be something of a major project, so although I will probably do that at some point I was delighted to find that you can disable the warning by adding the following to your .bash_profile:

export BASH_SILENCE_DEPRECATION_WARNING=1

Nanoc Filters as Markdown Extensions

Posted on April 27, 2018 at 08:34

In one of the static web site projects I have been working on, the main text is composed using Markdown but a number of common constructs are used which Markdown can’t easily express. That’s not Markdown’s fault in any sense; I’m using it well outside its originally intended scope.

Here’s how I made things a bit simpler by using a Nanoc filter as a pseudo-extension for Markdown.
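Nanoc filters themselves are written in Ruby, but the core idea is just a text-to-text pass that rewrites a custom construct into plain Markdown (or HTML) before the Markdown filter runs. Here’s a minimal sketch of that idea, in Python for illustration only; the `@@note(...)` construct is a made-up example, not one of the constructs from my site:

```python
import re

def expand_notes(markdown_text):
    """Rewrite a hypothetical @@note(...) construct into a Markdown blockquote.

    A real Nanoc filter would do the same transformation in Ruby, inside
    a Nanoc::Filter subclass, registered to run before the Markdown filter.
    """
    return re.sub(
        r"@@note\((.*?)\)",
        lambda m: "> **Note:** " + m.group(1),
        markdown_text,
    )

print(expand_notes("Some text.\n\n@@note(Remember the caveats.)\n"))
```

The pay-off is that the source files stay readable, and the Markdown processor only ever sees constructs it understands.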

Overlays with rsync

Posted on February 11, 2018 at 18:03

I’ve been using rsync to build my site as a combination of a base layer held in git plus an overlay generated using Nanoc. Here’s how.

Cleaner URLs

Posted on January 19, 2018 at 09:47

One thing I’ve wanted to do for a long time is move this site further towards the use of clean URLs. I am currently migrating to a static-site generator and that seemed like the ideal time. Here are a couple of tricks I’ve used to get clean URLs for my older content without breaking bookmarks.

Ant fixcrlf and UTF-8 on Windows

Posted on June 3, 2014 at 14:39

I’ve been working on a large XML processing system in which a sequence of steps implemented in Java and other technologies is orchestrated using Apache Ant. It has to run on Mac OS, Linux and Windows. It has been pretty stable for some time, but I recently set up a new Windows system and started seeing errors like this:

Exception in thread "main" org.xml.sax.SAXParseException:
    Invalid byte 3 of 3-byte UTF-8 sequence.
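A likely cause of errors like this with fixcrlf is encoding: the task reads files in the platform default encoding unless told otherwise, and on Windows that default is typically windows-1252, which mangles multi-byte UTF-8 sequences. As a hedged sketch (the paths here are illustrative, not from my build), the fix is to set the task’s encoding attribute explicitly:

```xml
<!-- Illustrative fragment: tell fixcrlf the files are UTF-8 rather than
     letting it fall back to the platform default encoding. -->
<fixcrlf srcdir="src/xml" includes="**/*.xml" eol="lf" encoding="UTF-8"/>
```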

Feedly

Posted on June 17, 2013 at 09:01

There are only a couple of weeks left until Google Reader shuts down. Like many other people (the “loyal but declining” following the product had certainly numbered in the millions) I’ve been looking at alternatives for a while now. I’ve finally settled on feedly.

OmniFocus 1.0

Posted on January 9, 2008 at 10:55

After a long public beta program, OmniFocus, The Omni Group’s “professional-grade personal task management” application for the Mac, has finally reached its 1.0 milestone. If you’re already both a Mac cultist and a Getting Things Done convert, you probably already know this because you’re one of the 13,590 people who pre-ordered it.

GTD and OmniFocus won’t magically rescue you from being disorganised (they certainly haven’t entirely done that for me) but I’ve found that some of the GTD principles that OmniFocus allows you to implement really do lead to some level of stress reduction:

  • Get everything that’s on your mind out of your head and into a trusted system.

  • Plan in terms of small, concrete, actionable steps.

  • Concentrate on the next available action for your current context.

You probably can’t plan multi-person mega-projects this way, but that’s not what this product is for. If you’re trying to hold together a lot of smaller projects, it can be pretty much ideal. There’s a 14-day trial available.

Thunderbird 2

Posted on April 22, 2007 at 15:48

The big green one was always my favourite, you never knew what was in the pod.

In this case, though, I come to sing the praises not of a hypersonic airborne truck but of the second major version of the Thunderbird e-mail client. For me, it has two new must-have features that fit in really well with the way I work:

  • You can have replies filed in the same folder as the original message.

  • There’s a folder view that just shows folders with new messages.

I have a lot of message filters active that file incoming mail into per-topic folders. Knowing which I need to look at, and not having to re-file outgoing replies, will save me a lot of time.

The one “gotcha” so far has been that some of the key bindings have been changed, at least on the Mac. In particular Shift-Pretzel-M no longer creates a new message, but instead moves the currently selected message to the last folder you moved something to… hilarity ensues. After some initial cursing along the lines of “where on earth did that message disappear to”, obviously.

Fusion Beta

Posted on December 22, 2006 at 22:11

I’ve been getting more and more dependent on virtual machine technology over the last couple of years. Although I use Microsoft’s Virtual PC for Mac and the Parallels Desktop product from time to time, most of my virtual machines live on VMware Server under Linux or VMware Workstation running on my remaining Windows 2000 desktop machine. Today’s announcement of a public beta for VMware’s Fusion product for Intel-based Macs therefore came as a pleasant festive surprise. Of course, I downloaded it right away.

My first impression is that it seems to work just fine. The current beta version runs with a lot of debug code active, and the first thing you see is a warning that you’re not going to get a lot of performance out of it. The Beta EULA prevents me from commenting further on that front, and in any case both VMware Server and Parallels Desktop were less than stellar in their beta phases so it isn’t significant information.

Problems? So far, really very few. I brought up a Fedora Core 6 client from scratch in about half an hour, and downloaded an Openfiler virtual appliance and had it up and running in seconds. For some reason, I couldn’t connect to the Openfiler appliance’s administrative interface from the host machine, but it appeared to be working fine from another machine. [Update: known issue in this build, see comments below.]

The current beta is missing any kind of snapshot facility, which is a pity as that’s one of the things that marks out VMware Workstation as such an excellent development tool. The other thing it’s missing is some of the GUI for doing things like adding more virtual hard disks to a machine. However, I found that if I stored the virtual machine on a removable FAT32 drive, I could swap it over to the Windows machine running Workstation, make changes there, then swap it back to the Mac again. Neat!

Summary: very good for an initial public beta. If the final functionality comes up to Workstation’s level, particularly in the area of snapshots, I’m a customer.

RUSE MET LORD CURT REEL ION

Posted on November 21, 2006 at 22:38

I learned the difference between haphazard and random a long time ago, on a university statistics course. Since then, I’ve been wary of inventing passwords by just “thinking random” or using an obfuscation algorithm on something memorable (“replace Es by 3s, replace Ls by 7s”, or whatever). The concern is that there is really no way to know how much entropy there is in such a token (in the information theoretic sense), and it is probably less than you might think. People tend to guess high when asked how much entropy there is in something; most are surprised to hear that English text is down around one bit per letter, depending on the context.

If you know how much information entropy there is in your password, you have a good idea of how much work it would take for an attacker to guess it by brute force: N bits of entropy means they have to try 2^N possibilities. One way to generate a password of known entropy, which I’ve used for several years, is to take a fixed amount of real randomness and express it in hexadecimal. For example, I might say this to get a password with 32 bits (4 bytes) of entropy:

$ dd if=/dev/random bs=1 count=4 | od -t x1
...
0000000    14  37  a8  37
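For what it’s worth, Python’s standard library can do the same job; this sketch using the secrets module is an equivalent of the pipeline above, not what I originally used:

```python
import secrets

# Four bytes of OS-provided randomness, printed as hex: a password with
# 32 bits of entropy, just like the dd | od pipeline.
password = secrets.token_hex(4)
print(password)  # yours will differ, of course
```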

A password like 1437a837 is probably at the edge of memorability for most people, but I know that it has 32 bits worth of strength to it. So, what is one to do if there is a need for a stronger password, say one containing 64 bits of entropy? Certainly d4850aca371ce23c isn’t the answer for most of us.

When I was faced with a need to generate a higher entropy — but memorable — password recently, I remembered a technique used by some of the one-time password systems and described in RFC 2289. This uses a dictionary of 2048 (2^11) short English words to represent fragments of a 64-bit random number; six such words suffice to represent the whole 64-bit string with two bits left over for a checksum. In this scheme, our unmemorable d4850aca371ce23c becomes:

RUSE MET LORD CURT REEL ION

I couldn’t find any code that allowed me to go from the hexadecimal representation of a random bit string to something based on RFC 2289, so I wrote one myself. You can download SixWord.java if you’d like to see what I ended up with or need something like this yourself.

The code is dominated by an array holding the RFC 2289 dictionary of 2048 short words, and another array holding the 27 test vectors given in the RFC. When run, the program runs the test vectors then prompts for a hex string. You can use spaces in the input if you’re pasting something you got out of od, for example. The result should be a six word phrase you might have a chance of remembering. But if you put 64 bits worth of randomness in, you know that phrase will still have the same strength as a password as the hex gibberish did.
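The arithmetic behind the encoding is simple enough to sketch. This Python fragment is an illustration of the RFC 2289 scheme, not a port of SixWord.java: it computes the two-bit checksum and the six 11-bit indices, and mapping each index through the RFC’s 2048-word dictionary would then give the final phrase.

```python
def six_word_indices(hex_key):
    """Turn a 64-bit hex key into six 11-bit dictionary indices (RFC 2289).

    The checksum is the low two bits of the sum of all the 2-bit groups
    in the key; appending it gives 66 bits, which are read off as six
    11-bit indices, most significant group first.
    """
    n = int(hex_key.replace(" ", ""), 16)
    checksum = sum((n >> shift) & 0b11 for shift in range(0, 64, 2)) & 0b11
    bits66 = (n << 2) | checksum
    return [(bits66 >> (11 * (5 - i))) & 0x7FF for i in range(6)]

indices = six_word_indices("d4850aca371ce23c")
print(indices)  # six numbers, each in the range 0..2047
```

Since each index carries 11 bits, the six words together carry the full 64 bits of key material plus the 2-bit checksum, which is why the phrase is exactly as strong as the hex gibberish.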

Hiring the Top 1%

Posted on February 9, 2005 at 16:23

Hiring good people is hard. If you advertise, you face wading through dozens or hundreds of CVs trying to figure out who the best people are. If you filter CVs, then filter again through interviews, you’re probably inclined to think that you’re being terribly selective, and that your final hires are among an elite. People often say “we hire the top 1%”.

There are several fallacies there: most obviously, it is hard to pick the people to interview on the basis of a CV, and interviewing people is a pretty hit-and-miss affair. More crucially, and less well recognised, is that you only get to pick from the people who apply, not from the whole population. Joel Spolsky’s recent article does a great job of explaining this in a really clear way.

One of Joel’s conclusions, which matches at least one case in my own experience, is that working with summer students (US: interns) is a good idea. This is not for the usually stated reasons (they work for peanuts! they know all the latest research! they are really gullible about working conditions!) but simply because if someone turns out to be really good, a summer work placement might be almost the last chance anyone has to hire them before they fall off the hiring radar for good.

The Daily WTF

Posted on November 23, 2004 at 15:41

Subtitled Curious Perversions In Information Technology, The Daily WTF is a collection of found software artifacts that will make most experienced software people look once, do a double-take, then yell “WTF?”. Hence the name.

I’m not sure whether to file this one under “Humour” or “Really, really, scary.”

[Thanks to Rod]

Truths

Posted on September 21, 2004 at 13:06

A friend just sent me a link to RFC 1925, The Twelve Networking Truths. Although it is one of the humorous April 1st RFCs (in this case by Ross Callon of the Internet Order of Old Farts), it is also full of genuine truths.

Although I don’t follow the really popular bloggers to any great extent, I had come across Mark Pilgrim because of his columns at xml.com, and I knew he had a blog that was fairly popular. What I didn’t know until a couple of days ago is that Dive Into Mark contains a number of things that are funny, but also full of genuine truths.

If you’re drinking coffee just now, I suggest you put it down somewhere safe. Then read Mark’s essay on “why specs matter”, which he starts off with the statement that most developers are morons, and the rest are assholes. True, true. If you’ve had a really bad day in the standards mines, find relief for your grief in the short but pointed “Unicode Normalization Form C”. Funny; but also so, so true.

[2012-02-24 Removed direct links to Mark Pilgrim’s blog, due to his disappearance from the internet.]

The Javafication of PHP

Posted on August 2, 2003 at 17:35

I do a fair amount of programming in PHP, but I’ve never been an uncritical fan of the language. My initial impression of it was that PHP must be the secret love child of Kernighan & Ritchie era C and Perl 4, combining as it does a pre-C++ model of object oriented programming with dynamic typing, a general attitude of “if you write it, I’ll find a way to make it mean something” and a library that only the kindest could regard as other than rambling and incoherent.

Peter Deutsch’s Eight Fallacies

Posted on June 12, 2003 at 10:43

Old hands in any profession are always remarking that the new boys keep making the same old mistakes, and software development is no exception. Peter Deutsch’s The Eight Fallacies of Distributed Computing is a nice codification of some of our classics.

The majority of Deutsch’s items have more applicability than pure “distributed computing”, of course. Web applications, particularly the multi-tier “enterprise” variety, absolutely fit into this category, but anything on an explicitly parallel computer does, too. Finally, as main memory drifts further and further from your CPU and hardware vendors move towards multi-core chips, pretty much everything looks like ending up “distributed” enough to make the Eight Fallacies relevant.

Read the Eight Fallacies, then pin a copy up above your desk. You’ll be needing it later.

[2013-02-24 Moved link from James Gosling’s web site, which no longer exists, to Wikipedia. You can also view the original page using the Internet Archive’s Wayback Machine.]
