Posted on Dec 13, 2025
Instead of writing my tech comms predictions for next year like I did in 2024, I’ve written a fictionalized account of my day as a technical writer in 2030. It’ll be interesting to see whether we get there or not. Take it as a window into a possible future, one where AI usage is safer, more regulated, and better integrated with our workflows (as it should be).
My working day starts at 8:30am, after I’ve dropped my kids at school, rushed home, and prepared some coffee surrogate (nobody can afford real coffee anymore). I open the laptop and Chuck is there – it’s always there, like a trusty butler, ready to summarize what’s been going on in pull requests, Slack threads, Jira tickets, and a plethora of other information systems nobody can quite tame. Its summary connects the dots between my current priorities and what’s happening in the teams I’m attached to, helping me decide what to work on next. Trying to be helpful, it offers to deal with some of the mentions I’ve got by opening pull requests; I let it do so with a small docs bug. The rest I’ll want to deal with myself. It asks me how I feel and gently reminds me that I’ve still got some PTO available. Chuck’s such a class act.
I’m in a team with several other technical writers, but for the most part I work with Chuck, which is what we call our in-house AI agent. Chuck is a vast local language model capable of running on the M10 Silicon processor that powers my laptop. It’s a state-of-the-art multimodal LLM whose pedigree I can trace back to the last iterations of Claude Omni 7.5, before Anthropic went bankrupt and got acquired by Apple. Like most corporate models, Chuck is ISO 42001, Turing, and EUAI certified, which means it’s audited every year for security, governance, and the legality of its training materials. Chuck is fine-tuned into several variants depending on the goal; the one I use is chuck-256b-writer. We run it in CI pipelines and locally in IDEs or CLI clients. We can also invite it to meetings as an artificial participant. I sometimes ask my own Chuck to attend calls on my behalf as Chuck-Fabri.
The thing I like the most about Chuck is that I can configure its specializations by turning modules on or off through the Silicon Brain app. When I want it to play the developer, I add several coding modules; when I want it to help me author docs, I turn on the style guide and grammarian modules, and so on. I can also ask Chuck to spawn copies of itself to roleplay users and readers based on support ticket and sales call interactions. When I do that, Chuck politely asks me to call it by other names so as not to break character, something I duly comply with. Most system tools and APIs are already compatible with the agentic environment I use, so Chuck knows how to perform most operations on its own. An important detail: to summon Chuck, I need to first plug a physical key into the laptop. The key comes with a red button to immediately stop Chuck in case it starts operating bizarrely. I’ve never had to use it.
It’s 11am already. I’ve been working with Chuck to write a docs set for a new feature, telling it how I want the docs to fit into the existing architecture and instructing it to tweak and edit. It almost always gets 80% of the work done, though I often have to intervene to rearrange, cut, or otherwise rewrite sections. This hasn’t changed since the first days of GPT and it’ll never improve, because LLMs are not intelligent. They’re the most useful word automation tools at my disposal, though, which I keep in check through deterministic tools and linters. Chuck is able to create diagrams, take screenshots of the product through an internal tool, and test the instructions and code snippets itself. When I feel unsure about its output, I ask it to verify what it’s just written through semantic internal search, or by calling its cloud cousin, Chad, which is able to provide answers from federated internal sources. Everything we do together, Chuck documents internally and remembers in its permanent context.
Even though I’m using a local, non-monetized, and fully audited model that consumes the equivalent of a lightbulb’s worth of power, I still can’t shake the feeling of being a reverse centaur at times. It helps that Chuck comes with several built-in safeguards meant to prevent me from overworking or spending too much time without interacting with other human beings. At 1pm, which is lunch time in Spain, Chuck reminds me to take a break. It refuses to continue if it detects stress in my text, vocal, or computer usage patterns. While my interactions with Chuck on the laptop are private and encrypted, it’s allowed to inform my manager or call my designated emergency contact in case of distress. I let Chuck access my vitals on the smartwatch and schedule calls with me on a regular basis to see how I’m doing. Since I work alone at home, this makes me feel somewhat safer.
I didn’t tell you, but my current job title is Augmented Writer. My mission is to ensure that the words humans and machines use to interact with our products reduce confusion and error while maximizing effectiveness and user satisfaction. I’m augmented because I do this in concert with Chuck, which expands my existing skills in numerous ways. Without my brain, though, Chuck couldn’t do my job, because it doesn’t really care and, more importantly, because it’s not allowed to. One of the conditions imposed by the current legislation is that AI cannot operate in fully autonomous mode without human supervision. Our docs and UIs, in fact, bear a certificate of human authorship that discloses the amount of AI intervention. By law, all AI-generated artifacts must produce fingerprinting patterns that can’t be tampered with, which is trickier with text; but since we must keep full audit logs of LLM usage, authorship can be established upon request by any competent authority, including the Turing police.
In the end, my role is more that of an orchestrator than of an author, and I’m fine with that. Software engineering, the field I serve, is an exercise in consensual imagination whose goal is to find repeatable ways of processing reality into manageable chunks of data. Reality is unmistakably raw and imperfect, a stream of floating points and broken strings running through distributed systems: it can’t be tamed through clever algorithms, but it can be reduced to abstractions and data structures and binary blobs. Each of those entities has a name; they all relate to each other through words. It’s part of my job to understand those words and intervene when they don’t bring clarity. It’s also my job to explain how those words handle their parent reality. The docs I orchestrate with Chuck’s help are the artifacts that chronicle and explain the motions of data as it enters a machine and exits in shapes and configurations that are helpful to users.
It’s 5pm and I’m bidding Chuck farewell. During the night, it will work on some optional docs polish and politely present its work to me in the morning. As I log off and extract the hardware key from the laptop, I think that without the words Chuck and I produced, the machine would be opaque to its operators, a smooth wall without doors or handles. Product truth is at my disposal to weave into a fabric of meaning and possibility, into spells that unlock abilities in autonomous agents, be they organic or artificial. I am an enabler of thought and action. Getting here wasn’t easy, but I feel better knowing that I can continue defending the importance of words with the help of the most clever thesaurus ever created.