-
Picking on third-party deps
A colleague of mine from a couple of years ago is looking for work, and is blogging about the process of interviewing and about lessons learned from technical interview feedback. (In my opinion this level of critical self-evaluation would qualify him for at least an interview, were I in charge of looking for new software developers.)
He recently received this feedback on an exercise in building a POST handler for an API:
We liked that payloads were validated, but found the validation to be quite manual. It might have been nice to have used a 3rd party library, which can provide better error messages, as well as good TypeScript integration to couple to interfaces
I disagree with this feedback. My first instinct is never to try and install my way out of a design problem, because what I'm really doing is installing my way into three types of technical debt:
- Dependencies incur a network cost. Yup (270KB), Joi (558KB), and Zod (3.7MB (???)) all ship more data over the wire. It's bad enough if you're downloading and installing these on a single server on every CI run; god help you if you're shipping them to the client. Or god help your users, I don't know.
- Dependencies incur a sustainability cost. As of writing, Zod has 740 versions. If you are installing Zod, I hope you are also setting aside a couple of hours each sprint to keep it up-to-date and compatible with all the other crud in your `node_modules`.
- Dependencies incur an onboarding cost. I wonder whether this organisation, which is obviously hiring, has budgeted for the extra time to onboard not only to the org's codebase, but to the basket of third-party libraries that they encourage their software developers to install.
I'm picking on Zod, but I suspect that Zod's own developer agrees with me: Zod relies on precisely zero third-party (runtime) libraries.
Here's an alternative: just write your own request validator. Yes, it's a dependency — but this makes it your dependency. Want to upgrade it? Need to handle new formats for error messages? Tweak it & commit. Your validator will be bespoke to your use-case, and your new devs won't have to trawl through `node_modules` if they want to read the source. And if your source isn't 122K LoC they're more likely to, uh, do that.

Hell, you don't even have to write your own: you can get Claude to do it for you. Here's a prompt, on the house:
Please write up an extremely lightweight validation library for HTTP requests in <my-web-framework>. Ideally a single class/function that I can just pass the request to, and a config object indicating what values are required or acceptable
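For a sense of scale, here's roughly what a hand-rolled validator along those lines might look like. The names and config shape below are my own sketch, not output from that prompt:

```typescript
// A hand-rolled request body validator: illustrative sketch, not a library.
// Describe each field with a rule; get back a list of error messages.

type FieldRule = {
  required?: boolean;
  type?: "string" | "number" | "boolean";
  validate?: (value: unknown) => string | null; // custom check: message or null
};

type Schema = Record<string, FieldRule>;

function validateBody(body: Record<string, unknown>, schema: Schema): string[] {
  const errors: string[] = [];
  for (const [field, rule] of Object.entries(schema)) {
    const value = body[field];
    if (value === undefined || value === null) {
      if (rule.required) errors.push(`${field} is required`);
      continue;
    }
    if (rule.type && typeof value !== rule.type) {
      errors.push(`${field} must be a ${rule.type}`);
      continue;
    }
    const message = rule.validate?.(value);
    if (message) errors.push(`${field}: ${message}`);
  }
  return errors;
}

// Usage:
const errors = validateBody(
  { name: "Ada", age: "forty" },
  {
    name: { required: true, type: "string" },
    age: { required: true, type: "number" },
  },
);
// errors => ["age must be a number"]
```

Thirty-odd lines, nothing in `node_modules`, and the error-message format is yours to change whenever you like.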
I'm willing to accept that there are valid use cases for Zod or Joi or Yup (although maybe not quite 39 million every week). But before you hit `npm i` or `pip install` or `composer install` or whatever, ask yourself: what do we need, and can we do it ourselves?

Further reading
- Boring Technology Club - how many times do I have to link to this talk
- Spoon theory - when you run `npm i` you lose a spoon
-
Megalithic Portal
The Megalithic Portal is an incredible site for exploring ancient stone structures. I’m very into standing stones (menhirs) in particular. Something about their permanence has the hypnotic quality of fire to me. On any given weekend May → September, odds are that I’m standing out on a moor staring intently at a many-thousand-year-old Rock.
Anyway I digress. The Megalithic Portal is a fantastic resource for Rock Enjoyers, but their website has an extremely clunky raster map interface that only loads a subset of Rocks at once. So if you’re looking at a Rock in one part of the country but want to see a Rock in a different part of the country, you have to jump through Eleven Hoops to get there. I don’t want to jump through hoops though, I just want to Look at Rocks.
So I set out to make a map of my own.
The key discovery was that for each site on the Megalithic website, there’s a corresponding GeoRSS feed of nearby sites, with metadata for each nearby site including site name, latitude and longitude:
```xml
<item>
  <title>Langleydale Common 8</title>
  <link>https://www.megalithic.co.uk/article.php?sid=50089</link>
  <description>Rock Art, (No Pic)</description>
  <geo:lat>54.614212686267</geo:lat>
  <geo:long>-1.9452625039514</geo:long>
  <media:thumbnail url="images/mapic/tg29.gif" height="20" width="20" />
</item>
```
The URL from this feed is parameterised based on the site’s latitude and longitude, so from any given Rock feed, I can iterate over each of the items and generate URLs for each of the related Rocks—meaning that I can hop from one feed to the next, saving each new item to an SQLite database as I go.
A year ago I’d have said that sounded like a lot of work.
But we’re not living a year ago, so I got Claude Sonnet 3.7 (via Aider) to do it for me. It only took a little bit of prodding.
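I don’t have Claude’s output to hand, but the crawl loop is simple enough to sketch. Assuming a feed URL parameterised by latitude and longitude (the URL pattern below is my guess, not the Portal’s documented interface), it looks roughly like this:

```typescript
// Rough sketch of the feed-hopping crawler. The feed URL pattern below is a
// placeholder guess; the real Megalithic Portal feed URL will differ.
import Database from "better-sqlite3";

const db = new Database("sites.db");
db.exec(`CREATE TABLE IF NOT EXISTS sites (
  link TEXT PRIMARY KEY, title TEXT, lat REAL, long REAL
)`);
const insert = db.prepare(
  "INSERT OR IGNORE INTO sites (link, title, lat, long) VALUES (?, ?, ?, ?)"
);

// Hypothetical: a GeoRSS feed of sites near a given point.
const feedUrl = (lat: number, long: number) =>
  `https://www.megalithic.co.uk/georss.php?lat=${lat}&long=${long}`;

async function crawl(lat: number, long: number, hops = 0): Promise<void> {
  if (hops > 50) return; // don't wander off across the whole country
  const xml = await (await fetch(feedUrl(lat, long))).text();
  // Crude extraction; a real version would use a proper XML parser.
  for (const item of xml.match(/<item>[\s\S]*?<\/item>/g) ?? []) {
    const grab = (tag: string) =>
      item.match(new RegExp(`<${tag}>(.*?)</${tag}>`))?.[1];
    const link = grab("link");
    const siteLat = Number(grab("geo:lat"));
    const siteLong = Number(grab("geo:long"));
    if (!link || Number.isNaN(siteLat) || Number.isNaN(siteLong)) continue;
    // A fresh row means a feed we haven't visited yet: hop to it.
    if (insert.run(link, grab("title") ?? "", siteLat, siteLong).changes > 0) {
      await crawl(siteLat, siteLong, hops + 1);
    }
  }
}

crawl(54.614212686267, -1.9452625039514); // start from a known Rock
```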
Okay, so now I have an SQLite database with a thousand or so barrows and tumuli and dikes and embankments and (yes) Rocks around where I live. But the whole point of this is to view all of the sites at once, without having to navigate the Megalithic Portal’s esoteric RSS-based raster maps and leap through their Eleven Hoops.
Back to Claude, to set up a simple HTTP server which reads all of the sites out of the database and plots them on a simple Leaflet map. It gets it in one. Boom roasted, I watch my own job evaporate like Thanos in front of me. What will my family think when they realise that I have been replaced by a robot named after a Beanie Baby.
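For flavour, the shape of that server (my sketch, not Claude’s actual output) is about twenty lines: one endpoint that turns the SQLite rows into GeoJSON.

```typescript
// Minimal map server sketch: serve the saved sites as GeoJSON.
import http from "node:http";
import Database from "better-sqlite3";

const db = new Database("sites.db");

http.createServer((req, res) => {
  if (req.url === "/sites.json") {
    const rows = db.prepare("SELECT title, link, lat, long FROM sites").all();
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify({
      type: "FeatureCollection",
      features: (rows as any[]).map((r) => ({
        type: "Feature",
        geometry: { type: "Point", coordinates: [r.long, r.lat] }, // GeoJSON is [lng, lat]
        properties: { title: r.title, link: r.link },
      })),
    }));
  } else {
    res.statusCode = 404;
    res.end(); // the real thing also serves the Leaflet page itself
  }
}).listen(8080);
```

On the front end, Leaflet’s `L.geoJSON(data).addTo(map)` puts every site on the map in one call.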
In an attempt to make myself feel even remotely relevant I go sign up for an OS Maps dev account so that I can use the Ordnance Survey maps as the base layer; this takes me like 20 minutes. I try not to think of how few milliseconds it would have taken Claude.
The whole thing costs £0.54.
-
Aider
Aider is an LLM tool that runs in your terminal, selectively reads your codebase, and gives you a little prompt to generate code. Lots of tools purport to act like a junior dev that you can send off to tackle the backlog, but this one kind of seems like it actually is. Operating on the terminal means that you don’t have to learn a new extension or a whole new IDE. Bringing your own models means that you don’t have to sign up for some new subscription.
And it works surprisingly well. I wouldn’t let it build out whole new features (yet) but like Harper Reed I find that with a bit of nudging I can basically get it to do exactly what I want, and only have to debug the results a little bit. I’m well pleased.
-
More cache busting
I wrote a little bit, a while ago, about my cache busting strategy for this website. The basic idea is to generate a unique identifier at deploy time, and then use that identifier as a query param for requests to static assets. That way, if I haven't deployed in a while, and on the off chance that I have a repeat visitor to this website, that visitor gets the cached version of my (admittedly minimal) CSS and JavaScript. When I re-deploy the website, the query param changes, so that visitor's browser sees a new URL and re-downloads the files.
Previously, when this website was redeployed using GitHub Actions, I used the ID of the action's run as the unique identifier—on the basis of that identifier being incremented on every deploy.
Now that I deploy manually, I no longer have that identifier. So I use a random hexadecimal string instead:
```bash
VERSION=$(openssl rand -hex 4)
grep -rl '{|{VERSION}|}' ./templates | xargs sed -i "s/{|{VERSION}|}/$VERSION/g"
```
This finds all files in my `templates` directory that contain the `{|{VERSION}|}` string (just a distinctive string that I likely won't be using anywhere else), and then uses `sed` to replace that string, in place, with my random identifier.

I suppose I could be doing something clever like hashing each of the files, so the ID doesn't change unless the file does—but this works, and works well for a website of this scale.
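For the record, the hashing version is also only a few lines. A sketch in Node (an assumption on my part, not what this site does; the paths are hypothetical):

```typescript
// Derive the version from the file's contents, so the query param (and
// therefore the browser cache) only changes when the file itself does.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

function assetVersion(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex").slice(0, 8);
}

// e.g. render href="/css/style.css?v=..." using this at deploy time
console.log(assetVersion("static/css/style.css")); // hypothetical path
```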
-
No build step
Jim Nielsen on the cost of avoiding annoyance, which is itself a commentary on why HTMX (the framework for JS-in-HTML) doesn't have a build step.
A resounding yes! here: if you're using boring technology for your backend and progressively enhancing your frontend, modern browsers have made it extremely easy to skip out on the build step, both for CSS and JavaScript. This website has no build step, and it's got light/dark mode, color/bw mode, a dynamic component for pulling my recent tracks from Last.fm, and a search k-bar.
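None of that needs tooling any more. As an illustration (not this site's actual code), native ES modules plus CSS custom properties cover most of it:

```typescript
// theme.js, loaded with <script type="module" src="/js/theme.js"></script>.
// No bundler: browsers execute ES modules as served. The stylesheet keys off
// the data attribute, e.g. [data-theme="dark"] { --bg: #111; --fg: #eee; }

const prefersDark = window.matchMedia("(prefers-color-scheme: dark)").matches;
document.documentElement.dataset.theme = prefersDark ? "dark" : "light";

// Hypothetical toggle button with id="theme-toggle".
document.querySelector("#theme-toggle")?.addEventListener("click", () => {
  const next =
    document.documentElement.dataset.theme === "dark" ? "light" : "dark";
  document.documentElement.dataset.theme = next;
});
```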
Heck, my brother’s website doesn’t even import stylesheets, it just uses Twig templates to dump a template called style.css into a `<style>` tag.

Next time you’re spinning up a (let’s be serious) side project, reconsider whether you even need a build step.