-
Megalithic Portal
The Megalithic Portal is an incredible site for exploring ancient stone structures. I’m very into standing stones (menhirs) in particular. Something about their permanence has the hypnotic quality of fire to me. On any given weekend, May → September, odds are that I’m standing out on a moor staring intently at a many-thousand-year-old Rock.
Anyway I digress. The Megalithic Portal is a fantastic resource for Rock Enjoyers, but their website has an extremely clunky raster map interface that only loads a subset of Rocks at once. So if you’re looking at a Rock in one part of the country but want to see a Rock in a different part of the country, you have to jump through Eleven Hoops to get there. I don’t want to jump through hoops though, I just want to Look at Rocks.
So I set out to make a map of my own.
The key discovery was that for each site on the Megalithic website, there’s a corresponding GeoRSS feed of nearby sites, with metadata for each nearby site including site name, latitude and longitude:
<item>
  <title>Langleydale Common 8</title>
  <link>https://www.megalithic.co.uk/article.php?sid=50089</link>
  <description>Rock Art, (No Pic)</description>
  <geo:lat>54.614212686267</geo:lat>
  <geo:long>-1.9452625039514</geo:long>
  <media:thumbnail url="images/mapic/tg29.gif" height="20" width="20" />
</item>
The URL from this feed is parameterised based on the site’s latitude and longitude, so from any given Rock feed, I can iterate over each of the items and generate URLs for each of the related Rocks—meaning that I can hop from one feed to the next, saving each new item to an SQLite database as I go.
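Sketched in Python, the feed-hopping might look something like this. The feed URL template and the table schema here are my assumptions, not the real endpoint; in practice the parameterised URLs come out of the feed itself:

```python
# Hypothetical sketch of the crawler: hop from one site's GeoRSS feed to
# the feeds of its neighbours, saving each new item to SQLite as we go.
import sqlite3
import urllib.request
import xml.etree.ElementTree as ET

# W3C geo vocabulary namespace used by the geo:lat / geo:long elements
NS = {"geo": "http://www.w3.org/2003/01/geo/wgs84_pos#"}

# Assumed URL template -- the real parameterisation lives in the feed links
FEED_URL = "https://www.megalithic.co.uk/nearby_feed.php?lat={lat}&lon={lon}"

def parse_items(xml_text):
    """Yield (title, link, lat, lon) for each <item> in a GeoRSS feed."""
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        yield (
            item.findtext("title"),
            item.findtext("link"),
            float(item.findtext("geo:lat", namespaces=NS)),
            float(item.findtext("geo:long", namespaces=NS)),
        )

def crawl(start_lat, start_lon, db_path="sites.db", limit=1000):
    """Breadth-first hop across feeds, one feed per discovered site."""
    db = sqlite3.connect(db_path)
    db.execute("""CREATE TABLE IF NOT EXISTS sites
                  (link TEXT PRIMARY KEY, title TEXT, lat REAL, lon REAL)""")
    frontier = [(start_lat, start_lon)]
    seen = set()
    while frontier and len(seen) < limit:
        lat, lon = frontier.pop()
        xml_text = urllib.request.urlopen(FEED_URL.format(lat=lat, lon=lon)).read()
        for title, link, site_lat, site_lon in parse_items(xml_text):
            if link not in seen:
                seen.add(link)
                db.execute("INSERT OR IGNORE INTO sites VALUES (?, ?, ?, ?)",
                           (link, title, site_lat, site_lon))
                frontier.append((site_lat, site_lon))  # hop to this Rock's feed
        db.commit()
```

`INSERT OR IGNORE` plus the primary key on `link` means re-crawling the same Rock from two different feeds is harmless.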
A year ago I’d have said that sounded like a lot of work.
But we’re not living a year ago, so I got Claude 3.7 Sonnet (via Aider) to do it for me. It only took a little bit of prodding.
Okay, so now I have an SQLite database with a thousand or so barrows and tumuli and dikes and embankments and (yes) Rocks around where I live. But the whole point of this is to view all of the sites at once, without having to navigate the Megalithic Portal’s esoteric RSS-based raster maps and leap through their Eleven Hoops.
Back to Claude, to set up a simple HTTP server which reads all of the sites out of the database and plots them on a simple Leaflet map. It gets it in one. Boom roasted, I watch my own job evaporate like Thanos in front of me. What will my family think when they realise that I have been replaced by a robot named after a Beanie Baby.
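Something in the shape of this stdlib sketch, presumably; the endpoint path and the table and column names are my guesses, not the code Claude actually produced:

```python
# Illustrative sketch of the map server: read every site out of SQLite and
# serve them as JSON for a Leaflet page to plot. Names are assumptions.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

def load_sites(db_path="sites.db"):
    """All saved sites as a list of dicts, ready to serialise."""
    db = sqlite3.connect(db_path)
    rows = db.execute("SELECT title, lat, lon FROM sites").fetchall()
    return [{"title": t, "lat": lat, "lon": lon} for t, lat, lon in rows]

class SiteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/sites.json":
            body = json.dumps(load_sites()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

def run(port=8000):
    HTTPServer(("localhost", port), SiteHandler).serve_forever()
```

The Leaflet page then just fetches /sites.json and drops a marker per row onto the base layer.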
In an attempt to make myself feel even remotely relevant I go sign up for an OS Maps dev account so that I can use the Ordnance Survey maps as the base layer; this takes me like 20 minutes. I try not to think of how few milliseconds it would have taken Claude.
The whole thing costs £0.54.
-
Aider
Aider is an LLM tool that runs in your terminal, selectively reads your codebase, and gives you a little prompt to generate code. Lots of tools purport to act like a junior dev that you can send off to tackle the backlog, but this one kind of seems like it actually is. Operating on the terminal means that you don’t have to learn a new extension or a whole new IDE. Bringing your own models means that you don’t have to sign up for some new subscription.
And it works surprisingly well. I wouldn’t let it build out whole new features (yet) but like Harper Reed I find that with a bit of nudging I can basically get it to do exactly what I want, and only have to debug the results a little bit. I’m well pleased.
-
More cache busting
I wrote a little bit, a while ago, about my cache busting strategy for this website. The basic idea is to generate a unique identifier at the time of deploy, and then use that unique identifier as a query param for requests to static assets. That way, if I haven't deployed in a while, and on the off chance that I have a repeat visitor to this website, that visitor gets the cached version of my (admittedly minimal) CSS and JavaScript. When I re-deploy the website, that visitor's browser will determine that something has changed and reload the files.
Previously, when this website was redeployed using GitHub Actions, I used the ID of the action's run as the unique identifier—on the basis of that identifier being incremented on every deploy.
Now that I deploy manually, I no longer have that identifier. So I use a random hexadecimal string instead:
VERSION=$(openssl rand -hex 4)
grep -rl '{|{VERSION}|}' ./templates | xargs sed -i "s/{|{VERSION}|}/$VERSION/g"
This finds all files in my templates directory that have the {|{VERSION}|} string in them (just a distinctive string that I likely won't be using anywhere else), and then uses sed to replace that string, in-place, with my random identifier.

I suppose I could be doing something clever like hashing each of the files, so the ID doesn't change unless the file does—but this works, and works well for a website of this scale.
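For what it's worth, the hash-based version wouldn't be much code. A sketch in Python, under the assumption of one shared version string stamped into the templates (file names and the placeholder are illustrative):

```python
# Sketch: derive the cache-busting identifier from the assets themselves,
# so the ?v= param only changes when a file actually changes.
import hashlib
from pathlib import Path

def asset_version(paths, length=8):
    """One short digest over all the assets, order-independent."""
    digest = hashlib.sha256()
    for path in sorted(paths):
        digest.update(Path(path).read_bytes())
    return digest.hexdigest()[:length]

def stamp_templates(template_dir, version, placeholder="{|{VERSION}|}"):
    """Swap the placeholder for the version string, like the sed one-liner."""
    for template in Path(template_dir).rglob("*.html"):
        text = template.read_text()
        if placeholder in text:
            template.write_text(text.replace(placeholder, version))
```

Redeploys that don't touch the CSS or JavaScript would then leave visitors' caches warm.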
-
No build step
Jim Nielsen on the cost of avoiding annoyance, which is itself a commentary on why HTMX (the framework for JS-in-HTML) doesn't have a build step.
A resounding yes! here: if you're using boring technology for your backend and progressively enhancing your frontend, modern browsers have made it extremely easy to skip out on the build step, both for CSS and JavaScript. This website has no build step, and it's got light/dark mode, color/bw mode, a dynamic component for pulling my recent tracks from Last.fm, and a search k-bar.
Heck, my brother's website doesn't even import stylesheets, it just uses Twig templates to dump a template called style.css into a <style> tag.

Next time you're spinning up a (let's be serious) side project, reconsider whether you even need a build step.
-
Search
Somehow, over the past ~3 years, my life on the computer has become increasingly command-palette-ised. It started with Sublime Text, back when I was first casting out on my wild software development experiment, and from there soon expanded to Alfred and VS Code, and in later years on to Raycast and any other site or application that will give me a handy autocomplete context menu triggered by a keyboard shortcut. Usually on the web this is Ctrl/Cmd+K. One of my greatest computer regrets is that I don’t have the expertise to build an extension for command-palette-ising Mail.app.

My website, however: a different story. I’m the slinger of code round here, and now I’ve got a handy Ctrl/Cmd+K shortcut for searching the site. I was mostly motivated by the series of poor experiments in listing archived content that I’ve iterated through over the past few months. I haven’t yet found a really good way to return to historical posts, but leveraging CraftCMS’s search has been the least-bad solution so far.
The core of the search is actually pretty simple, testament to the practical minds behind CraftCMS:
Entry::find()
    ->section("not projects")
    ->search('"' . $query . '"')
    ->all()
This code is exposed by an API endpoint that I hit with plain old JavaScript.
Meanwhile, on the frontend, the search field is a <dialog> element. The <dialog> element probably isn’t quite ready for production yet; Safari, in particular, took their sweet time implementing it, so users on Safari 15.3 and lower (which, admittedly, is a minuscule proportion of the web-surfing public) won’t get search. This isn’t a huge concern for me, since my website’s core audience is me, and I’m on Safari 16.3. The upside is that I don’t have to think about trapping focus and keybindings; the downside is that the page in the background is still scrollable.

The blob of JSON that the backend returns is then parsed into a <template> element and each result appended to the <dialog>. I do a little manual keybinding to allow users to select items from the list using a keyboard alone. I could probably take a lesson or two from popular accessible autocomplete libraries, but it works for me so far.

If you’re using Craft
Remember to refresh your search index; if you make a field (like post body) searchable after posts have already been written, those posts won’t automatically become searchable. You have to bulk-resave them to add them to the search index:
php craft resave/entries --update-search-index
Next up
I’m in the evaluation phase at the minute, trying to find the sharp edges of search on my site (and trying to produce more CoNtEnT to feed to it), but a couple of ideas for where to go next with search:
- Testing with VoiceOver is probably the next priority. I think I’ve checked the basic boxes on the accessibility front but I need to do some actual usability tests.
- At the minute, the search is just full-text. Searching by tag might be helpful, but I haven’t yet run into a query where tags would have surfaced something that full-text didn’t.
- Maybe Google-style keywords could be a fun addition?