As we head toward the end of 2025, I’ve got a final round up of articles I recently found valuable and interesting.
Only one of them focuses in any way on frontend development. That's probably as good an encapsulation of this year, from my perspective, and I imagine from many others', as you could have.
This newsletter, in one form or another, has been around for at least 15 years, if not more. Originally, it was curated by Maxine Sherrin, co-founder of Web Directions, and when I took over running the conferences full-time, I also took over the newsletter. I can remember my very first one. I wrote it in Whistler, Canada, right about this time in 2014. It seems like only yesterday, but in many ways it's a long time ago as well. I was speaking at the Smashing conference held in Whistler that year, and many of the articles I posted were from speakers at that conference.
For the better part of all those years, the articles we gathered together here focused on web development, web design, and related areas of practice. That's what our conferences overwhelmingly focused on as well, and it's something that, professionally, has been the centre of my life since at least the mid-1990s.
But this year, as I’ve noted elsewhere, that’s undergone a significant transformation. And that transformation is reflected in the subjects that we cover in the articles that are collected in this week’s newsletter.
Only one of the articles this week is focused on web development, and it actually touches a little on my growing disenchantment with that area of practice, which I wrote about recently.
Meanwhile, there's little doubt that the practice of software engineering, if nothing else, is being transformed by the increasing capability of large language models when it comes to software development, and, very importantly, by the patterns and practices emerging around their use. If that's not something you've been consciously trying to keep up with over the last year or two, then it's something I would counsel you to reconsider. Hopefully, these articles, and those we've collected over the last year or so in our weekly reading, will be helpful in your endeavours there.
And if you want a daily feed of the things that I’ve been finding interesting rather than waiting for a big dump in an email once a week, then over at Conffab, you can subscribe via RSS to “Elsewhere“, where these clips are originally posted.
Thanks for being a reader and a supporter of Web Directions. I sincerely wish you the best for the season and for 2026. We’ve got a lot planned, and I hope it will be very valuable professionally for you, as we continue through a period of I think very significant transformation in our industry and beyond.
john
Is It a Bubble?
Before diving into the subject at hand—and having read a great deal about it in preparation—I want to start with a point of clarification. Everyone asks, “Is there a bubble in AI?” I think there’s ambiguity even in the question. I’ve concluded there are two different but interrelated bubble possibilities to think about: one in the behavior of companies within the industry, and the other in how investors are behaving with regard to the industry. I have absolutely no ability to judge whether the AI companies’ aggressive behavior is justified, so I’ll try to stick primarily to the question of whether there’s a bubble around AI in the financial world.
Source: Is It a Bubble?
Not infrequently, the question of whether or not we're in a bubble comes up in my conversations with people. Is there an AI bubble? Will we have an AI bubble? It's probably something we should think about. Even if there's little, if anything, that we as individuals can do about it, perhaps we can make different decisions about how and where we invest, or about what might happen if we saw a significant downturn of the kind we had in the early 2000s or after the global financial crisis.
In future, I think I'll just point people to this. It's a very solid read: not only a thoughtful thesis, but one that draws on quite a range of historical experience.
A ChatGPT prompt equals about 5.1 seconds of Netflix
In June 2025 Sam Altman claimed about ChatGPT that “the average query uses about 0.34 watt-hours”. In March 2020 George Kamiya of the International Energy Agency estimated that “streaming a Netflix video in 2019 typically consumed 0.12-0.24kWh of electricity per hour”—that’s 240 watt-hours per Netflix hour at the higher end.
Source: A ChatGPT prompt equals about 5.1 seconds of Netflix
A widely quoted study recently claimed that 95% of all AI implementations had no ROI. I haven't really read the study, and I don't think many of the people who quoted it have read it either.
We also see numbers bandied about regarding the amount of water used by large language models, at times for single queries, and similarly the amount of energy required for a single query. And then yesterday I saw, on a toilet, that the average flush from that toilet used 3.4 litres of water.
It’s good to see things like this from Simon Willison where he tries to provide some broader context for the energy and the environmental impact of large language models. It would be even better to see more solid figures from OpenAI, Google, and the other hyperscalers, but at least it’s a start.
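Simon's headline figure is simple arithmetic on the two quoted numbers, which a quick sketch makes easy to verify:

```python
# Back-of-envelope check of the headline comparison, using the
# figures quoted above: ~0.34 Wh per ChatGPT query (Altman) and
# up to ~240 Wh per hour of Netflix streaming (IEA, higher end).
chatgpt_wh_per_query = 0.34
netflix_wh_per_hour = 240

netflix_seconds = chatgpt_wh_per_query / netflix_wh_per_hour * 3600
print(f"One prompt is roughly {netflix_seconds:.1f} seconds of Netflix")
# prints: One prompt is roughly 5.1 seconds of Netflix
```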
What happens when the coding becomes the least interesting part of the work
That judgment is the job of a senior engineer. As far as I can tell, nobody is replacing that job with a coding agent anytime soon. Or if they are, they’re not talking about it publicly. I think it’s the former, and one of the main reasons is that there is probably not that much spelled-out practical knowledge about how to do the job of a senior engineer in frontier models’ base knowledge, the stuff that they get from their primary training by ingesting the whole internet.
Thoughts from an experienced software engineer on working with large language models. It's an irony that, traditionally, the more experienced we become as software engineers, the less software we've actually written.
This is a trend that is perhaps changing as large language models become increasingly capable of generating code.
How I wrote JustHTML using coding agents
Writing a full HTML5 parser is not a short one-shot problem. I have been working on this project for a couple of months on off-hours. Tooling: I used plain VS Code with Github Copilot in Agent mode. I enabled automatic approval of all commands, and then added a blacklist of commands that I always wanted to approve manually. I wrote an agent instruction that told it to keep working, and don’t stop to ask questions. Worked well! Here is the 17-step process it took to get here:
Source: How I wrote JustHTML using coding agents – Friendly Bit
A few weeks back, Simon Willison coined the term "vibe engineering" to draw a distinction between using large language models to generate code that we simply run as-is, and using large language models as part of the software engineering process. The piece linked here is an excellent example of vibe engineering.
Emil Stenström has written an HTML parser, which, if you know anything about HTML, is much more complex than it might initially appear. Here, Emil details his approach to working with large language models to produce a very complex piece of software. Emil is a software engineer, but he observes that:
Yes. JustHTML is about 3,000 lines of Python with 8,500+ tests passing. I couldn’t have written it this quickly without the agent.
But “quickly” doesn’t mean “without thinking.” I spent a lot of time reviewing code, making design decisions, and steering the agent in the right direction. The agent did the typing; I did the thinking.
That’s probably the right division of labor.
The Bet On Juniors Just Got Better
Junior developer—obsolete accessory or valuable investment? How does the genie change the analysis?
Folks are taking knee-jerk action around the advent of AI—slowing hiring, firing all the juniors, cancelling internship programs. Instead, let’s think about this a second.
The standard model says junior developers are expensive. You pay senior salaries for negative productivity while they learn. They ask questions. They break things. They need code review. In an augmented development world, the difference between juniors & seniors is just too large & the cost of the juniors just too high. Wrongo. That’s backwards. Here’s why.
Source: The Bet On Juniors Just Got Better – by Kent Beck
Kent Beck is renowned in the world of software engineering as the originator of XP (Extreme Programming), and is well known and highly regarded for his pioneering work on design patterns and test-driven development.
Here he addresses an issue that has concerned many people: what impact will AI have on junior developers? Will they simply cease to exist? And if we have no new junior developers, where will senior developers come from? Kent has a different take, and I think it's well worth considering.
What I learned building an opinionated and minimal coding agent
I’ve also built a bunch of agents over the years, of various complexity. For example, Sitegeist, my little browser-use agent, is essentially a coding agent that lives inside the browser. In all that work, I learned that context engineering is paramount. Exactly controlling what goes into the model’s context yields better outputs, especially when it’s writing code. Existing harnesses make this extremely hard or impossible by injecting stuff behind your back that isn’t even surfaced in the UI.
Source: What I learned building an opinionated and minimal coding agent
Mario Zechner built his own minimal coding agent. Think of a lightweight version of Claude Code or OpenAI’s Codex. You can follow along here.
If You’re Going to Vibe Code, Why Not Do It in C?
So my question is this: Why vibe code with a language that has human convenience and ergonomics in view? Or to put that another way: Wouldn’t a language designed for vibe coding naturally dispense with much of what is convenient and ergonomic for humans in favor of what is convenient and ergonomic for machines? Why not have it just write C? Or hell, why not x86 assembly?
Source: If You’re Going to Vibe Code, Why Not Do It in C?
It may seem like a facetious or ironic question, but why stop at vibe coding? If we're going to develop software with large language models, why not use C? Or, more to the point, why use any particular language at all? Here, Stephen Ramsey observes that programming languages are designed for human convenience, that is, developer convenience. But if a large language model is generating the code, why generate it in a language that is essentially an intermediary that humans are rarely, if ever, actually going to read?
This is a question Bret Taylor asked in a podcast we linked to a few months back, and it's one that really interests me. Just the other day, in another piece we linked to, Geoff Huntley talked about working with, rather than against, the grain of large language models. I think this fits into that way of thinking. If we're going to increasingly rely on large language models to do tasks for us, even if we restrict our focus to programming, it would seem to make sense to find what they're best at, rather than, as Geoff Huntley observes, trying to get them to conform to approaches humans have developed for our own convenience.
Is your tech stack AI ready?
We’re at the same inflection point we saw with mobile and cloud, except AI is more sensitive to context quality. Loose contracts, missing examples, and ambiguous guardrails don’t just cause bugs. They cause agents to confidently explore the negative space of your system.
The companies that win this transition will be the ones that treat their specs as executable truth, ship golden paths that agents can copy verbatim, and prove zero-trust by default at every tool boundary.
Your tech stack doesn’t need to be rebuilt for AI. But your documentation, contracts, and boundaries? Those need to level up.
Source: Is your tech stack AI ready? | Appear Blog
A speaker at our recent Engineering AI conference, Jakub Reidl looks at some of the key areas of your tech stack that need to be ready for AI.
Useful patterns for building HTML tools
I’ve started using the term HTML tools to refer to HTML applications that I’ve been building which combine HTML, JavaScript, and CSS in a single file and use them to provide useful functionality. I have built over 150 of these in the past year, almost all of them written by LLMs. This article presents a collection of useful patterns I’ve discovered along the way.
Source: Useful patterns for building HTML tools
One incredibly valuable use case for code generation, and a good way to explore, experiment with, and develop intuitions and capabilities around these tools, is building little utility tools for your own use, as Simon Willison has been doing for several years.
I too have been doing this. I've taken spreadsheets, Bash scripts, and little pieces of JavaScript that I'd cobbled together over the years to help with the production of our sites and content, and even printing for our conferences, and built special-purpose mini web applications that solve the same problems much more efficiently and enjoyably.
So I highly recommend you try this for yourself if you're not already doing it. Here Simon lists a whole bunch of patterns he has gleaned from his extensive development of such tools.
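To make the pattern concrete, here's a minimal illustrative sketch of my own (not one of Simon's actual tools): a single HTML file combining markup, CSS, and JavaScript that does one small job, counting words, entirely in the browser.

```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Word counter</title>
<style>
  body { font-family: sans-serif; max-width: 40em; margin: 2em auto; }
  textarea { width: 100%; height: 8em; }
</style>
</head>
<body>
<h1>Word counter</h1>
<textarea id="input" placeholder="Paste some text here"></textarea>
<p id="result">0 words</p>
<script>
  // Recount on every keystroke: split on whitespace, drop empty strings.
  const input = document.getElementById('input');
  const result = document.getElementById('result');
  input.addEventListener('input', () => {
    const words = input.value.trim().split(/\s+/).filter(Boolean);
    result.textContent = `${words.length} words`;
  });
</script>
</body>
</html>
```

Save it as a single `.html` file, open it in any browser, and it just works: no build step, no server, no dependencies, which is much of the appeal of the pattern.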
The /llms.txt file
We propose adding a /llms.txt markdown file to websites to provide LLM-friendly content. This file offers brief background information, guidance, and links to detailed markdown files. llms.txt markdown is human and LLM readable, but is also in a precise format allowing fixed processing methods (i.e. classical programming techniques such as parsers and regex). We furthermore propose that pages on websites that have information that might be useful for LLMs to read provide a clean markdown version of those pages at the same URL as the original page, but with .md appended. (URLs without file names should append index.html.md instead.)
Source: The /llms.txt file – llms-txt
llms.txt is one of a number of proposals for how best to expose the content of a web page, site, or app to large language models.
It's a proposal initially from Jeremy Howard, well-known in the Python and AI communities as the founding researcher of fast.ai (and, much earlier, a founder of FastMail).
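To make the format concrete, a minimal /llms.txt might look like the sketch below. The project name and URLs are hypothetical; the structure, an H1 title, a blockquote summary, then H2 sections of annotated links, follows the proposal.

```markdown
# Example Project

> A short summary of what the project is and who it is for.

Any key background an LLM should have before reading further.

## Docs

- [Quick start](https://example.com/quickstart.md): how to install and run
- [API reference](https://example.com/api.md): full function reference

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

Note that each link points to a clean markdown version of a page, per the proposal's companion convention of appending `.md` to page URLs.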
AI companies want a new internet—and they think they’ve found the key
Over the past 18 months, the largest AI companies in the world have quietly settled on an approach to building the next generation of apps and services—an approach that would allow AI agents from any company to easily access information and tools across the internet in a standardized way. It’s a key step toward building a usable ecosystem of AI agents that might actually pay off some of the enormous investments these companies have made, and it all starts with three letters: MCP.
Source: AI companies want a new internet—and they think they’ve found the key | The Verge
In 12 months or so, MCP has gone from an internal project at Anthropic, the makers of Claude, to being extremely widely used, and it has now found a home at the Linux Foundation alongside other related technologies such as Goose.
This Verge story will give you an overview of the set of technologies and what’s happening next.
UX Is Your Moat (And You’re Ignoring It)
If you’re building an AI product, your interface isn’t a nice-to-have. It’s your primary competitive advantage.
Here’s what that means in practice:
Make the first five minutes seamless. Users decide whether they’re staying or leaving almost immediately. If they have to think about where to click, you’ve already lost. Netflix auto-plays. TikTok starts scrolling. What does your product do the moment someone opens it?
Source: UX Is Your Moat (And You’re Ignoring It) – Eleganthack
Technologists often default to the idea that the best technology always wins. Over the years, we've seen endless debates about the technical specifications of a product and why they make that product better. But what we should have learned by now is that technology is only one part of why something becomes successful, category defining, dominant.
Here, Christina Wodtke brings her many years of experience to the question of what will make AI products successful, with lessons not just for the biggest technology companies, but any company, whether they use AI or not.
How to Run a 90-Minute AI Design Sprint (with prompts)
Most teams still run ideation sessions with a whiteboard, a problem statement, and a flurry of post-its. To be honest, I’ve always loved a good Design sprint, especially in person and I hope those don’t go away for anyone because they’re an awesome way to learn and connect together.
But with AI, the way we generate, evaluate, and shape ideas has fundamentally shifted. You can collapse days of thinking into a focused 90-minute sprint if you know how to structure it well.
This is the format designed to move fast without losing the depth. It blends design thinking, systems thinking, and agent-era AI capabilities into a repeatable flow you can run any time your team needs clarity.
Here’s the 90-minute AI Design Sprint, step by step with prompts you can copy, paste, and use today.
Source: How to Run a 90-Minute AI Design Sprint (with prompts)
As we've recently observed elsewhere, while a lot of the focus on generative AI and LLMs is on customer-facing features or generated content (be that text, images, or video), there is one place where large language models can have a really valuable impact: on processes. Here M.C. Dean reimagines the design sprint, a staple of the design process, using large language models, with some of the prompts she uses.
AI and variables: Building more accessible design systems faster
When people talk about AI in design, they often picture flashy visuals or generative art. But my own lightbulb moment happened at a less glamorous place: in an effort to solve this accessibility challenge under pressure.
At UX Scotland earlier this year, I shared how AI helped me transform a messy, time-consuming process into something lean, structured, and scalable. Instead of spending weeks tweaking palettes and testing contrast, I had an accessible design system up and running in just a few days. In this article, I’ll explain how I did it and why it matters.
Source: AI and variables: Building more accessible design systems faster – zeroheight
When it comes to AI, we overindex on output and user-facing features, and I think we're somewhat asleep on workflow and process. These can be made more efficient using large language models.
Here’s a great case study from Tobi Olowu on how he and his team used LLMs to help streamline the process of improving the accessibility of an existing design system.
Migrating Dillo from GitHub
However, it has several problems that make it less suitable to develop Dillo anymore. The most annoying problem is that the frontend barely works without JavaScript, so we cannot open issues, pull requests, source code or CI logs in Dillo itself, despite them being mostly plain HTML, which I don’t think is acceptable. In the past, it used to gracefully degrade without enforcing JavaScript, but now it doesn’t. Additionally, the page is very resource hungry, which I don’t think is needed to render mostly static text.
Source: Migrating Dillo from GitHub
GitHub has been undertaking a long process of re-implementing its frontend using React. This is not the only story I've read suggesting that turns out, perhaps, not to have been the best decision. Many people have observed that with large repos it becomes unworkably slow, even on state-of-the-art MacBook Pros.
This was eminently predictable, and is one of the many reasons why I've found myself, of late, pessimistic about the future of frontend as a vibrant, dynamic ecosystem.