Over the past few months, I have been struggling with the Hyprland screen‑share dialog (hyprland-share-picker via xdg-desktop-portal-hyprland).
Under the hood, Chromium / OBS talk to xdg-desktop-portal, which hands off to xdg-desktop-portal-hyprland. That launches hyprland-share-picker (a Qt dialog) which uses the hyprland-toplevel-export-v1 protocol to offer windows and screens.
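For orientation, here is a rough sketch of that chain (simplified; I am assuming the apps go through the standard org.freedesktop.portal.ScreenCast D-Bus interface):

```
# Who calls whom when you hit "share screen"
#   Chromium / OBS
#     -> xdg-desktop-portal            (D-Bus, org.freedesktop.portal.ScreenCast)
#       -> xdg-desktop-portal-hyprland (the Hyprland backend)
#         -> hyprland-share-picker     (Qt dialog; lists windows and screens,
#                                       backed by hyprland-toplevel-export-v1)
```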
When I am on a call with people and want to quickly share a screen with them, I am confronted with a rather confusing UI:
- By default it selects the “Screen” tab, which I almost never want
- On the “Window” tab I get a non-visual list of window names that are hard to decipher
- There is this whole “restore token” thing that is very confusing. I don’t need this checkbox. I accept the risk of always restoring.
The “old way” of dealing with this kind of pain was:
- Open an issue on GitHub (which was done back in 2022)
- Discuss the issue
- Wait for some brave soul familiar with the Qt toolkit and the various Wayland protocols involved, including the somewhat experimental hyprland-toplevel-export-v1, to take it on herself to implement it. Given the complexity of the feature, we would be looking at a week of engineering.
So what ends up happening is that we have a bottleneck. Vaxry only has so much time, and Hyprland rests mostly on one person’s shoulders, so little niggles like “my favorite bug” tend to take a back seat for years.
However, there is an interesting wind of change as of November 2025.
The release of ultra-competent language models such as Gemini 3 Pro, Codex 5.1 Max, and the established Sonnet 4.5 means that when we hit “our favorite bug” we can go ahead and “work something out.”
In particular, given the knowledge I had about the problem, the source code of grim hyprland, the protocols involved, and the general structure of a solution, I was able to vibe engineer a solution in an hour or so.
I made this new version of the picker using cursor-agent with Gemini 3 Pro / Sonnet. I tend to use multiple models and coding agents to attempt different approaches, since each excels at different aspects.
How I built it
The first thing I vibe coded was a `--test` parameter (cursor agent, Sonnet 4.5 thinking). Prior to it, to launch the picker you needed to configure rather complex ENV vars. `hyprland-share-picker` gets the list of windows and options from ENV vars. This makes it particularly tricky to test changes because you need a large amount of setup. With this in hand, I was set to iterate quickly.

I then pointed cursor-agent (with Gemini 3 Pro) at the source for grim hyprland and worked through designing the interface over a few turns.
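To give a sense of what the `--test` flag buys in practice, the before/after looks roughly like this from a terminal (the invocation is illustrative, and I am leaving out the actual variable names the portal sets):

```
# Before: the portal hands the picker its window list and options via
# environment variables, so launching it by hand means recreating all of
# that setup first (variable names omitted here).
#
# After: the new flag injects a fake window list, so the dialog can be
# launched and iterated on directly:
./hyprland-share-picker --test
```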
I hit a reasonable number of minor blockers: the screenshot was fuzzy, so I let it read my Hyprland config and come up with strategies for handling my 1.6 zoom in Wayland. Perhaps the largest blocker was a segfault on close that the LLMs introduced. I debugged it with the agent’s help; I don’t know all of the gdb parameters, so it walked me through them, I fed the output back into the agent, and the segfault was resolved. A few more turns to add icons and remove the restore token stuff, and I was set.
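For the curious, the gdb loop the agent walked me through was roughly the following (a reconstruction; the path and exact sequence are illustrative):

```
# Run the picker under gdb and reproduce the crash on close
gdb --args ./hyprland-share-picker --test
(gdb) run    # interact with the dialog, then close it to trigger the segfault
(gdb) bt     # grab the backtrace pointing at the offending frame
# Paste the backtrace into the agent, apply its suggested fix, rebuild, repeat.
```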
This new version works perfectly for me. It behaves just as I always wanted it to behave and is the picker I am using now.
That said, a new problem is emerging. As I explained in my previous blog post, the journey from vibe engineered code to a PR I feel comfortable putting my name on can be vast.
I am not a C++ expert, I know very little about Qt, and all the Wayland experimental protocols are a bit alien to me. It would be disingenuous to say it makes sense now for Vaxry to spend hours reviewing my machine-generated code for security issues and more.
I landed on a new paradigm which I feel may become more of a norm over the upcoming years.
Software for one
For many years, software has had an aspect of personalization to it. You could adapt it to your needs in your user preferences section. One person’s default can be another person’s kryptonite when it comes to software settings. A great exploration of this concept is the malleable software article from Ink & Switch.
Now we can take this one step further.
Given the new tools, I can create custom builds of software just for me. I can vibe code myself into a corner and not be able to feasibly contribute my personalized software back to the ecosystem, which is a new happy, sad, and dangerous reality.
I am empowered, I can scratch my own itch, I can reason about risks, but fundamentally, sometimes I am building software for one.
The tools are accidentally driving an “anti open-source” practice. I don’t want to force code reviews on this code; I prefer to keep it in a tucked away fork.
I anticipate that as the competence of the models and tools increases, more and more snowflake software is going to emerge. We are already today in a world where people familiar with the coding agents can construct “personal forks” and special-case software for a single person’s use case.
This both scares me, given the security implications and anti-open-source aspects, and delights me, because I am no longer blocked.
Reducing risk of software for one
Though somewhat counterintuitive, the easier you make it to correctly hack on your software, the lower the risk of “software for one” forks emerging from your code.
Having proper engineering guidelines, a great test suite with trivial-to-run test runners, and a nice linter makes a big difference. With those in place, agents avoid generating completely “out there” code because they have a feedback loop they can test against.
This means less highly obscure code gets generated, which increases the likelihood of “vibe coded” code transitioning to “blessed by human” code.
At Discourse, we recently built the trifecta of command-line lint/test/spec commands, which we now feed into the agent config:
```
# Ruby tests
bin/rspec [spec/path/file_spec.rb[:123]]

# JavaScript tests - bin/qunit
bin/qunit --help                  # detailed help
bin/qunit path/to/test-file.js    # Run all tests in file
bin/qunit path/to/tests/directory # Run all tests in directory

# Linting
bin/lint path/to/file path/to/another/file
bin/lint --fix path/to/file path/to/another/file
bin/lint --fix --recent           # Lint all recently changed files
```
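How those commands get surfaced varies by tool; as a sketch, an agent instructions file can simply spell them out (the file name and wording below are illustrative, not our actual config):

```
# AGENTS.md (illustrative sketch)
#
# Before declaring a change done, run the relevant checks:
#   bin/rspec spec/path/file_spec.rb   # Ruby tests for the code you touched
#   bin/qunit path/to/test-file.js     # JavaScript tests
#   bin/lint --fix --recent            # Lint recently changed files
```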
The end result is that all the agent-built code I am observing is significantly more robust, even when using models that may have failed in the past.
If a model spends 15 minutes figuring out how to run a test in a highly creative way, it ends up with a poisoned context and tends to produce far fewer useful results.
Where are we headed?
Back in 2023, Geoffrey Litt wrote:
> I think it’s likely that soon all computer users will have the ability to develop small software tools from scratch, and to describe modifications they’d like made to software they’re already using.
I do not think we are quite there; you still need to be a programmer to wield these tools effectively. However, the heart of the insight is correct.
Software is becoming more malleable; software for one is becoming a new trend.
I am delighted and terrified simultaneously.