Last blog post for the year! Merry Christmas, happy New Year etc. Here are some reflective end of year thoughts, some jolly, some less so:
1.
It seems to me that this year, especially the latter half of it, has been the year of agency. On the one hand it has definitely allowed me to do more in the whole 'You Can Just Do Things' spirit: not necessarily crazy out-there stuff I would never have imagined myself doing, but things I could picture myself doing at some unspecified future point that never seemed to solidify into any real action. For instance: actually writing my thoughts down and starting a blog, reading physical books in Japanese, and finally getting through my thousands of unread emails to a clean, spam-free inbox. Not to harp on about Beeminder any more than I already have on here, but it helps a lot and neatly avoids having to rely on the slippery and unreliable concept of 'willpower'.
Of course there's an uncomfortable subtext to all this. Cate Hall explains it well:
However, I suspect that some part of what is driving this interest is a concern that people have that they don't really know what their future looks like. They desire to control or lay claim to their future in a way they hope agency will provide.
The idea that intelligence is not what matters — because intelligence is becoming cheap — is growing. So there has to be something else that we can rely on, as humans, to supply a sense of control or meaning to life. Part of the enthusiasm about agency emerges from that perspective.
And speaking of that uncomfortable subtext...
2.
I have been seeing (and experiencing first-hand!) AI coding models get steadily better over this year. The latest generation of models in particular (Opus 4.5, Gemini 3, GPT-5.2) seems noticeably more capable than the previous one (and even those were already very powerful tools). Karpathy has a good summary of the year's technical developments, but it does appear that we are well on track towards LLMs being able to crush any verifiable task, which seems to include a large chunk of advanced programming and maths. Of course the intelligence of LLMs is strange and spiky, and there are plenty of skills I think LLMs are merely mediocre at and that have only improved marginally over the last year (like creative writing). Even the most advanced LLMs can still do remarkably inept things when left to their own devices in areas outside of their reinforcement-learnt training. Despite all this, a world of 'only' superhuman AI coders and scientists seems like a very uncertain and unpredictable place (especially since I work in software!), and that may well be the least strange potential future that AI brings.
3.
On a more mundane note, I've been playing around with various agentic coding tools, both at work and outside of it.¹ My impression is that the surrounding tooling matters quite a bit on top of just having the fanciest frontier model. If you want a personal recommendation, Roo Code is the best one I've tried (though at the cost of using 10x as many credits as Copilot).
4.
None of the above is particularly relevant to the holidays, so here's a Christmas Furret to make up for it: https://www.pixiv.net/en/artworks/139105655
¹ Though I still brain code every once in a while.