Learning about learning computers

Here I go talking about AI again. I have no hot takes for you. I have no particularly strong opinions beyond a nagging bad feeling about this. I’m not a luddite; I’m not an AI optimist; I don’t even particularly understand what’s going on behind the stony friendliness of large language models.

Although on that last count, I think I’d like to try (to understand, that is). I’ve got a relative fluency with computers, even if I don’t have any practical training. I know Python. I’d like to hope I’m not starting from zero.

To that end, I’ve been watching some of Andrej Karpathy’s videos on building neural networks and language models from the ground up. I’ve been reading the PyTorch docs. I’ve figured out how to use Jupyter notebooks (as with most things, there’s a VS Code plugin for that). Some 15 years after I took my last math class—precalculus at DPHS in Orlando, truly several lifetimes ago at this point—I finally learned what a derivative is.
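If you’re wondering what a derivative has to do with any of this: neural networks learn by nudging their weights in whatever direction reduces their error, and derivatives are what tell you which direction that is. Here’s a toy sketch (my own, not Karpathy’s code) of PyTorch doing the calculus for you:

```python
import torch

# A toy example, mine rather than anything from the videos:
# autograd computing the derivative of y = x**2 at x = 3.
# By hand, dy/dx = 2x, so we expect 6.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2
y.backward()   # walks the computation backward, filling in x.grad

print(x.grad)  # tensor(6.)
```

The fact that this same trick scales from one squared number to millions of weights is, as far as I can tell, most of the magic.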

There’s about 30% that just goes over my head, no matter how many times I rewatch or reread—and it tends to be the most crucial 30%. But it’s a familiar feeling: exactly the way I felt when I was first learning how to program computers. I think a 70/30 split is just about the right ratio of getting-it to not-getting-it for optimal learning: enough to grasp the basic gist, enough to keep morale up, enough to come back to in six months and marvel at how much more you know than you did then.

Anyway, see you in October.
