Updated on January 6, 2026 in #linux

I’m using the proprietary NVIDIA drivers and applied everything the Arch wiki suggested, I went down every rabbit hole and I’m stuck.
I’m so close to native Linux on the desktop, but I ran into 3 issues that make using the system a struggle. Along the way I think I discovered a real bug with the NVIDIA drivers which I reported on GitHub (more on that soon).
This post is an ~8,500-word adventure through things I encountered over a few days.
In 2019 I tried to switch to native Linux with this same hardware and ran into severe audio issues so I crawled back to Windows. Thankfully between Pipewire and Linux kernel updates my same audio devices are working fantastically well today in Linux, 100% solved!
At the end of 2025, after using Windows for ~25 years, I decided I’m ready to leave Windows. Thankfully I’ve been using Linux on servers for almost as long and work a lot with WSL 2 on Windows so I feel at home. This isn’t a whimsical decision: I have videos from 2016 setting up Linux-based environments on Windows and I’ve used WSL (and later WSL 2) since it came out. Native Linux has always been the goal but I was always blocked by something.
This time around my NVIDIA GeForce GTX 750 Ti GPU with Wayland is giving me endless trouble. It has 2 GB of memory and it’s 1 component of the computer I built in 2014.
# TL;DR
This post goes into full detail but if you’re skimming, here are the 3 problems I encountered on Linux. The same hardware runs perfectly fine on Windows:
1. GPU Memory Issues
journalctl uncovered this error repeatedly whenever my GPU’s memory got close to maxed out from opening only a few desktop apps (not games):
Dec 28 12:43:15 kaizen kernel: [drm:nv_drm_gem_alloc_nvkms_memory_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to allocate NVKMS memory for GEM object
Whenever the system’s GPU usage gets near 2 GB (my card’s maximum), everything stops being stable and I have to reboot or kill my Wayland compositor (niri). The NVIDIA 580 drivers never seem to fall back to system memory when the GPU’s memory is nearly full. It’s hard to say for sure but that’s what it feels like.
I verified memory usage by running nvidia-smi which comes with the NVIDIA drivers. This is something that can be reproduced 100% of the time on my machine.
I tried KDE Plasma (Wayland) instead of niri and the same problem happens.
KDE Plasma (X11) works fine. When the GPU’s memory gets under pressure it allocates system memory for these resources and nothing crashes or becomes unstable. The system is fully usable in this state, just like it was on Windows with the same hardware. However, everything feels less smooth than Wayland in general (including games).
I’ve recorded a video showing how Wayland and X11 react differently; I’ve linked to the relevant timestamps below.
I also opened an issue on GitHub under NVIDIA’s Wayland repo.
2. Anything Using the GPU Feels Jittery
This happens in games but also typing into Ghostty (terminal) which is hardware accelerated. It’s less smooth than it was on Windows with the same hardware.
This is speculation but it could be related to an existing bug in the NVIDIA drivers where any GPU memory allocation causes a system-wide lock. There’s an open issue for it; NVIDIA said it’s a difficult problem to fix and gave no timeline for a fix.
3. Keyboard Input Delay in Games
niri consistently adds ~150-200ms of keyboard input latency on every key press, whereas KDE Plasma on both Wayland and X11 has no noticeable keyboard input latency.
I don’t know if this is a driver bug or compositor bug / limitation but given it only happens with niri and not other Wayland compositors, it’s probably niri. Who knows though.
The rest of this post goes into detail on how I arrived at these conclusions.
# Hardware
My machine is from 2014 and I built it from parts:
It has an i5-4460 3.4 GHz CPU, 16 GB of memory, a GeForce GTX 750 Ti (2 GB), an ASRock H97M motherboard, a 256 GB SSD, a 1 TB HDD and a Scarlett 2i2 3rd gen USB audio interface.
$ lspci
00:00.0 Host bridge: Intel Corporation 4th Gen Core Processor DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
00:14.0 USB controller: Intel Corporation 9 Series Chipset Family USB xHCI Controller
00:16.0 Communication controller: Intel Corporation 9 Series Chipset Family ME Interface #1
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection (2) I218-V
00:1a.0 USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #2
00:1b.0 Audio device: Intel Corporation 9 Series Chipset Family HD Audio Controller
00:1c.0 PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 1 (rev d0)
00:1c.3 PCI bridge: Intel Corporation 82801 PCI Bridge (rev d0)
00:1d.0 USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #1
00:1f.0 ISA bridge: Intel Corporation H97 Chipset LPC Controller
00:1f.2 SATA controller: Intel Corporation 9 Series Chipset Family SATA Controller [AHCI Mode]
00:1f.3 SMBus: Intel Corporation 9 Series Chipset Family SMBus Controller
01:00.0 VGA compatible controller: NVIDIA Corporation GM107 [GeForce GTX 750 Ti] (rev a2)
01:00.1 Audio device: NVIDIA Corporation GM107 High Definition Audio Controller [GeForce 940MX] (rev a1)
03:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03)
I have a 32" 4K monitor at 60 Hz over DP and a 25" 1440p monitor at 60 Hz over HDMI. Both are connected to the GPU. I run them both at 100% 1:1 native scaling.
It is my main development / media creation / light gaming desktop machine.
# Windows
First, let’s talk about my Windows experience because I think it sets the stage for expectations. I used the same hardware here on Windows and it was a flawless experience from a stability and performance standpoint.
The box has 10+ years of being rock solid on Windows. It doesn’t crash and it runs everything I throw at it with enough performance that I feel happy using the machine.
I had Windows 7 initially and eventually Windows 10 Pro for the majority of time.
A typical workload would be:
- Handful of browser (Firefox) windows / tabs
- Heavy terminal based workflow with tmux and Neovim (multiple plugins, LSPs, etc.)
- Typical web apps running in Docker (Docker Desktop) using Flask and Rails mostly
  - Web server, background worker, database, cache, etc.
- Watching videos on YouTube (typically streaming music in the background)
- Running OBS for video recording
  - 1080p 30 FPS videos with x264 software encoding where my CPU is at ~25% when recording; it writes to the HDD in soft real-time as mkv
  - I’ve recorded 1,000 videos like this without a single issue
- Various “things” (file explorers, virtual desktops, clipboard managers, launchers, etc.)
- I recorded a video with my latest Windows set up before I wiped Windows
All of these apps can run in parallel and I feel no lag in anything. Neovim is super snappy in the Microsoft Terminal and nothing is jittery. A decent CPU, enough RAM and an SSD go a really long way when the OS and drivers all work in harmony.
I feel slight CPU slowdowns when doing all of the above while also running heavy filters in GIMP for image editing, but it’s rare that I’d be doing all of that together. 99% of the time I’m not recording videos when using GIMP.
But I am recording a lot of videos doing “development stuff”.
The above is what I’ve done for multiple years without issues. I even sometimes run a Linux VM in VirtualBox with 3 GB of memory so I can test infrastructure changes locally. Again, no problem at all, even with OBS recording a video!
When the VM is running with OBS along with a typical set of open apps my system is usually at 80% memory usage and I can absolutely feel this is about the max of my system. It’s not slow but you can just feel when a system is like “Yo, relax a little bit? Thanks!”.
For folks who might be skeptical, feel free to go back and watch 300+ of my videos if you don’t believe me. You won’t see big lag spikes. I’ve recorded a bunch of videos doing real things while recording with OBS on this machine.
Ironically, the latest Windows set up video from last week was the first time I noticed slowdowns while recording, because I was doing image editing in GIMP at the same time. My CPU hit 75%.
## Gaming
I’ve played a ton of different games on this box at 1080p 50-60 FPS or, depending on the game, a slightly lower resolution. Back in the day it was League of Legends and Path of Exile. I put hundreds of hours into these games without any issues.
Risk of Rain 2 also ran well enough, even during crazy spawns.
Silksong (a modern game) gets a rock solid 60 FPS at 2560x1440 and there’s no input delay that I can feel. I play with a keyboard and mouse. It’s super smooth and a very good experience. Hollow Knight was also silky smooth. Shout out to Team Cherry for making amazing games that run on older hardware and even Linux.
I’m not playing the latest and greatest AAA games but for older games and some newer but less demanding games it’s fine. I haven’t bought a single game where I had to return it or stop playing because of performance but I also do min spec research before getting them.
With that said, even when playing games I can alt-tab out and open browsers no problem. I also sometimes play YouTube videos on my 2nd monitor if a game is kicking my ass and I need strategic help as a last resort.
## Why Haven’t I Upgraded?
I never had a reason because everything ran great on Windows. I never paid attention to GPU memory or anything like that. I used my computer naturally and it all “just worked”.
# Linux
I chose to use Arch Linux for many reasons. At the time of this post it’s using Linux kernel 6.18.2-arch2-1. I used i3 ~10 years ago on a Chromebook I modified to run Linux so I’m familiar with tiling window managers.
I knew that I didn’t want to use a traditional dynamic tiler, but I remember reading about niri in a blog post in mid-2025 and thinking “yep, this is what I want”.
niri is a Wayland compositor and it’s a scroller that can also do tiling, tabs, floating windows and more. It’s so good that I cannot go back to anything else. The creator deserves all the success in the world and more.
## NVIDIA Drivers
I followed the official Arch news post combined with the Arch NVIDIA wiki. Technically I did all of this 1 day after this NVIDIA driver announcement so I never had the old drivers.
I was able to find my GPU’s family name by running lspci | grep VGA:
01:00.0 VGA compatible controller: NVIDIA Corporation GM107 [GeForce GTX 750 Ti] (rev a2)
If you head to this page and search for “GM107” you will find it’s a part of Maxwell and the 750 Ti is listed alongside it. This verifies the 580 series is the correct one to get.
I installed them like this:
# Required for DKMS drivers. The drivers will install successfully without this but they won't work.
# Adjust this to use different headers if you use a different kernel, I am using the stock kernel.
sudo pacman -Syu linux-headers
# Install the correct driver series based on the GPU model.
yay -Syu nvidia-580xx-dkms nvidia-580xx-utils
Then I applied this modprobe adjustment to get Wayland compositors to function properly as per the Arch wiki’s suggestion. A lot of folks from many different sources suggested the same thing:
echo "options nvidia-drm modeset=1" | sudo tee /etc/modprobe.d/nvidia.conf
At this point, I rebooted and was able to launch niri successfully, victory!
No further tweaks or adjustments have been made.
# niri (Wayland)
After a few hours of using niri I hit instability.
All of these things happened semi-randomly:
- Ghostty instantly shut down with a core dump error written to journalctl
- Firefox would render a black window instead of a site
- The mpv video player would render a blank blue window instead of a video
- OBS (video recording tool) would fail to capture my desktop in the preview or record
- niri would draw windows that no longer exist, and they can be selected but not killed
Not everything happened at once, but when any of these things happened it typically resulted in niri becoming unpredictable in the sense that these “ghost” windows would appear, the mouse cursor would sometimes lock up, and other side effects followed.
It would linger until I rebooted or at least logged out and restarted niri.
That led to looking at the logs with journalctl.
I saw this around the time I noticed something unexpected happening:
Dec 28 12:43:15 kaizen kernel: [drm:nv_drm_gem_alloc_nvkms_memory_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to allocate NVKMS memory for GEM object
When it happened while recording my screen with OBS I saw these right after:
12:43:15 kaizen uwsm_niri-session[873]: 2025-12-28T17:43:15.123951Z WARN niri::pw_utils: error allocating dmabuf: error creating GBM buffer object
12:43:15 kaizen uwsm_niri-session[873]: Caused by:
12:43:15 kaizen uwsm_niri-session[873]: Invalid argument (os error 22) stream_id=10
12:43:15 kaizen kernel: [drm:nv_drm_gem_alloc_nvkms_memory_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to allocate NVKMS memory for GEM object
12:43:15 kaizen pipewire[1077]: invalid memory type 8
12:43:15 kaizen pipewire[1077]: pw.core: 0x55a64d9cd4b0: error -22 for resource 2: port_use_buffers(0:0:0) error: Invalid argument
12:43:15 kaizen pipewire[1077]: mod.client-node: 0x55a64dc63cb0: error seq:268 -22 (port_use_buffers(0:0:0) error: Invalid argument)
12:43:15 kaizen pipewire[1077]: pw.link: (104.0.0 -> 106.0.0) allocating -> error (Buffer allocation failed) (paused-paused)
12:43:15 kaizen pipewire[1077]: invalid memory type 8
12:43:15 kaizen pipewire[1077]: pw.core: 0x55a64d9cd4b0: error -22 for resource 2: port_use_buffers(0:0:0) error: Invalid argument
12:43:15 kaizen pipewire[1077]: mod.client-node: 0x55a64dc63cb0: error seq:268 -22 (port_use_buffers(0:0:0) error: Invalid argument)
12:43:15 kaizen pipewire[1077]: pw.link: (104.0.0 -> 106.0.0) allocating -> error (Buffer allocation failed) (paused-paused)
12:43:15 kaizen uwsm_niri-session[873]: 2025-12-28T17:43:15.125499Z DEBUG niri::niri: StopCast session_id=10
12:43:15 kaizen uwsm_niri-session[873]: 2025-12-28T17:43:15.125585Z DEBUG niri::pw_utils: pw stream: state changed: Paused -> Unconnected stream_id=10
12:43:15 kaizen uwsm_niri-session[873]: 2025-12-28T17:43:15.125964Z DEBUG niri::dbus::mutter_screen_cast: stop
12:43:15 kaizen uwsm_niri-session[873]: 2025-12-28T17:43:15.126247Z WARN niri::pw_utils: pw error id=2 seq=277 res=-32 Buffer allocation failed
12:43:15 kaizen uwsm_niri-session[873]: 2025-12-28T17:43:15.126568Z DEBUG niri::niri: StopCast session_id=10
12:43:15 kaizen xdg-desktop-portal-gnome[1042]: Failed to close GNOME screen cast session: GDBus.Error:org.freedesktop.DBus.Error.UnknownObject: Unknown object '/org/gnome/Mutter/ScreenCast/Session/u10'
12:43:15 kaizen wireplumber[1078]: wp-event-dispatcher: <WpAsyncEventHook:0x563aa32573f0> failed: <WpSiStandardLink:0x563aa3619cc0> link failed: 1 of 1 PipeWire links failed to activate
- pipewire is being used to capture my screen through OBS
- xdg-desktop-portal-gnome is used to record the screen through niri
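If you want to watch for these errors in real time on your own machine, following the kernel log is enough. A minimal sketch (adjust the match pattern to taste):
# Follow kernel messages and surface NVKMS allocation failures as they happen.
sudo journalctl -kf | grep -i "failed to allocate nvkms memory"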
When Ghostty crashed, I saw this core dump:
14:34:06 kaizen systemd[1]: Started Process Core Dump (PID 135381/UID 0).
14:34:06 kaizen systemd-coredump[135382]: [🡕] Process 135367 (ghostty) of user 1000 dumped core.
Module /usr/bin/ghostty without build-id.
Stack trace of thread 135367:
#0 0x00007f63fc49890c n/a (libc.so.6 + 0x9890c)
#1 0x00007f63fc43e3a0 raise (libc.so.6 + 0x3e3a0)
#2 0x00007f63fc42557a abort (libc.so.6 + 0x2557a)
#3 0x00007f63fc4254e3 n/a (libc.so.6 + 0x254e3)
#4 0x00007f63fc12ec8c n/a (libepoxy.so.0 + 0xabc8c)
#5 0x00007f63fc0de28a n/a (libepoxy.so.0 + 0x5b28a)
#6 0x00007f63fd7e4110 n/a (libgtk-4.so.1 + 0x7e4110)
#7 0x00007f63fd7d70f0 n/a (libgtk-4.so.1 + 0x7d70f0)
#8 0x00007f63fd896742 n/a (libgtk-4.so.1 + 0x896742)
#9 0x00007f63fd82ea76 gsk_renderer_render (libgtk-4.so.1 + 0x82ea76)
#10 0x00007f63fd5a2a21 n/a (libgtk-4.so.1 + 0x5a2a21)
#11 0x00007f63fd5a57b9 n/a (libgtk-4.so.1 + 0x5a57b9)
#12 0x00007f63fd75c7c2 n/a (libgtk-4.so.1 + 0x75c7c2)
#13 0x00007f63fdadec77 n/a (libgobject-2.0.so.0 + 0x32c77)
#14 0x00007f63fdaded89 g_signal_emit_valist (libgobject-2.0.so.0 + 0x32d89)
#15 0x00007f63fdadee44 g_signal_emit (libgobject-2.0.so.0 + 0x32e44)
#16 0x00007f63fd803818 n/a (libgtk-4.so.1 + 0x803818)
#17 0x00007f63fdadec77 n/a (libgobject-2.0.so.0 + 0x32c77)
#18 0x00007f63fdaded89 g_signal_emit_valist (libgobject-2.0.so.0 + 0x32d89)
#19 0x00007f63fdadee44 g_signal_emit (libgobject-2.0.so.0 + 0x32e44)
#20 0x00007f63fd7dc860 n/a (libgtk-4.so.1 + 0x7dc860)
#21 0x00007f63fcd36e91 n/a (libglib-2.0.so.0 + 0x60e91)
#22 0x00007f63fcd34f8d n/a (libglib-2.0.so.0 + 0x5ef8d)
#23 0x00007f63fcd36657 n/a (libglib-2.0.so.0 + 0x60657)
#24 0x00007f63fcd36865 g_main_context_iteration (libglib-2.0.so.0 + 0x60865)
#25 0x000055690d17031f n/a (/usr/bin/ghostty + 0x134f31f)
ELF object binary architecture: AMD x86-64
## Putting everything together
All of these things together helped me understand that it’s linked to GPU usage because Firefox and Ghostty are both hardware accelerated. If I opened 20 Thunar windows (not hardware accelerated), it was fine.
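If you want to reproduce that comparison, a crude loop is all it takes. This is just a sketch, swap in whatever apps you have installed (you can watch the effect with the monitoring described next):
# Open 20 Thunar windows (software rendered); GPU memory barely moves.
for i in $(seq 20); do thunar & done

# Open a few hardware accelerated terminals instead and GPU memory climbs.
for i in $(seq 5); do ghostty & done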
Armed with this knowledge I started searching for how to monitor my GPU’s memory usage and came across nvidia-smi, which comes with the NVIDIA drivers.
## Monitoring GPU Usage
nvidia-smi lists all processes using your GPU. Near the middle of the output below you can see my GPU has 2 GB total, and as of right now about half is used.
Weirdly, if you add up the per-process numbers in the bottom right column they don’t match the total. I’ve noticed it being off by a lot. Using btop to monitor your GPU shows different numbers too.
This did however give me more information than I had previously:
$ nvidia-smi
Thu Jan 1 11:58:15 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.119.02 Driver Version: 580.119.02 CUDA Version: 13.0 |
+-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce GTX 750 Ti Off | 00000000:01:00.0 On | N/A |
| 43% 48C P0 2W / 38W | 923MiB / 2048MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 847 G niri 167MiB |
| 0 N/A N/A 907 C+G walker 129MiB |
| 0 N/A N/A 1233 G Xwayland 3MiB |
| 0 N/A N/A 9578 G ghostty 118MiB |
| 0 N/A N/A 102687 G /usr/lib/firefox/firefox 366MiB |
+-----------------------------------------------------------------------------------------+
- niri is a Wayland compositor
- walker is an app launcher which runs in the background
- Xwayland allows Wayland to run X11 apps
- ghostty is a terminal, at the moment I have 1 terminal open
- firefox is a browser, at the moment I have 2 windows open with 3 tabs
What you see above is after I applied a niri specific tweak which we’ll go over in a sec. When I first ran this, niri was using 1 GB of GPU memory just by itself.
Things are starting to add up now. It’s really easy for Firefox to hit 400-500 MB if you’re watching a video and have a few tabs open. Combine that with niri using 1 GB and a terminal or 2 open and you’re already near 100% of 2 GB. Open up OBS and boom, you’ve hit max usage or come dangerously close.
I ended up recording a video with my phone to show this happening. Here’s the timestamp.
## Getting niri from 1 GB to ~75 MB GPU Usage
In niri’s docs I found https://github.com/YaLTeR/niri/wiki/Nvidia; I had missed it in the Arch NVIDIA wiki, which also covers it.
Basically you set GLVidHeapReuseRatio to 0 in an NVIDIA application profile for the niri process. When I did that and rebooted, niri started up using 75 MB of GPU memory instead of 1 GB.
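For reference, here’s roughly what that looks like as an application profile. This is a sketch based on the niri wiki and NVIDIA’s application profile docs, so double check the exact contents against the wiki; the file name here is arbitrary:
# Any .json file in this directory is read by the NVIDIA driver.
sudo mkdir -p /etc/nvidia/nvidia-application-profiles-rc.d
sudo tee /etc/nvidia/nvidia-application-profiles-rc.d/50-niri.json > /dev/null << 'EOF'
{
  "rules": [
    { "pattern": { "feature": "procname", "matches": "niri" },
      "profile": "Limit Free Buffer Pool" }
  ],
  "profiles": [
    { "name": "Limit Free Buffer Pool",
      "settings": [ { "key": "GLVidHeapReuseRatio", "value": 0 } ] }
  ]
}
EOF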
After doing this, things are much improved but I still run into big problems.
If I have OBS open and recording with a terminal and a couple of browser tabs, I still easily hit 1.5 GB which puts me in a danger zone.
## niri’s Memory Slowly Climbs (Leaking)
niri’s usage isn’t static. Even if you only have non-hardware accelerated windows open it will use more memory; that’s normal, since it needs to create a window tile and do whatever else it takes to manipulate and track that window.
The abnormal (IMO) part is that if you perform an action such as resizing or moving a window, the memory usage goes up but never drops back down, even after you close the window that was modified.
After using my system naturally for about 3-4 hours, niri will be using ~300 MB and there doesn’t seem to be a limit. Eventually I hit instability and need to reboot every few hours. The longest I’ve gone is about 9 hours of normal use, at which point niri was at ~600 MB.
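To put numbers on that climb, a tiny logging loop works. A sketch (the log file name is arbitrary):
# Record niri's nvidia-smi process row once a minute with a timestamp.
while true; do
  echo "$(date '+%F %T') $(nvidia-smi | grep ' niri ')"
  sleep 60
done >> niri-gpu-mem.log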
I also noticed that if Wayland apps use shared memory, that memory gets allocated into niri’s GPU memory and niri doesn’t reclaim it after those apps are closed. This is apparent when using mpv to open images with --vo=wlshm: opening a couple of large images will cause niri’s memory usage to balloon and the system eventually becomes unstable after maxing out.
The mpv developers suggested it’s niri’s responsibility to clear this memory. One of their developers confirmed it doesn’t happen with wlroots based compositors.
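The reproduction is simple if you want to try it yourself (the image file name is a placeholder, any large image works):
# Render an image through Wayland shared memory, close mpv, repeat, and
# watch niri's row in nvidia-smi go up without ever coming back down.
mpv --vo=wlshm some-large-image.png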
I’ve opened a discussion about this on https://github.com/YaLTeR/niri/discussions/3146.
Right before publishing this article I found additional information that I’m editing in. Originally I only speculated this was a memory leak.
Turns out there is a memory leak in smithay, the compositor library niri is built on, that was reported 6+ months ago: https://github.com/Smithay/smithay/issues/1562. COSMIC uses smithay too, and folks reported similar things a year ago at https://github.com/pop-os/cosmic-comp/issues/1179.
I hope this is temporary. I’ve been providing as much information as I can to a smithay contributor. The good news is it’s reproducible and it’s happening for folks with NVIDIA and AMD GPUs, both old and new.
It’s kind of scary though: I found this within a few days of daily driving my system, yet the issue has been open for a year. It made me think though, this is a positive scenario not a negative one.
This is Linux and open source at its core. We’re all on the same team. We have the power to have things change quickly and can directly tap into the developers of these projects. I’m honored to be able to assist in a fix and hopefully end up with a better system in the end AND it helps others too. We all win.
This is an environment and ecosystem I want to be involved in.
By the way, I’m pretty sure this went unnoticed for so long because most people have a lot more GPU memory and probably shut their computer off every night, so the leak gets reset daily. I keep my machine on 24/7 and have limited memory so it was easy to notice.
## NVIDIA Developer Forums
Let’s switch focus from niri’s memory leak back to GPU memory pressure.
Once I had more of an idea of what I was dealing with, I found a thread on NVIDIA’s forums where hundreds of people have been posting similar problems for years.
NVIDIA didn’t address the thread until a few months ago, saying there are a lot of incorrect assumptions posted but that there are likely real bugs. Nothing posted there resolves it though.
I posted a reply there back when I had less information than I do now.
Shout out to hurikhan77 and martyn.hare for keeping up and posting all sorts of discussions and tips around this and similar issues. As far as I can tell they are regular Linux users who want to see good outcomes.
If you scroll down even further in that thread you’ll see hurikhan77’s replies investigating where the problem could be in NVIDIA’s drivers and how to hack in a fix (with side effects); a few replies later they break down the problem and explain why a real fix would be difficult due to deeply rooted architectural issues with NVIDIA’s drivers and other components in the ecosystem.
## Ghostty Is So Slow
Another thing I noticed is that when using Ghostty with a few full height Neovim split buffers open on my 4k monitor, it’s very jittery and slow, with constant micro-stutters. It also slows down when scrolling with the mouse. It’s struggling very hard to redraw the window.
It also sometimes stops redrawing altogether while I’m holding down a key and then releases all of the characters in 1 human-perceivable frame after I let go. It’s like something is getting buffered and held up.
It’s a big interruption and disruptive to my usual coding workflow. I like having a few Vim buffers open so I can see more things at once; it’s less context switching.
If I turn off syntax highlighting with Neovim then it improves but that’s not sustainable.
I opened a discussion on the Ghostty board with as much info as I could, such as the fact that this only happens when adding or removing characters on the screen. If I move the cursor around with the arrow keys (oops wait, I mean HJKL) there is no jittering.
The Microsoft Terminal never had this problem on Windows with the same version of Neovim, the same plugins and my exact config on Arch within WSL 2. The Microsoft Terminal has hardware acceleration too, it is flawless.
P.S., I also tried running Ghostty in software rendering mode with GSK_RENDERER=cairo ghostty but it’s extremely jittery and slow to render text, even at a shell prompt in a tiny window. With the split buffer test it was very slow.
P.P.S., I wonder if this is related to a GitHub issue I reference in the gaming section (coming up next) where GPU allocations perform system-wide blocks. That could certainly explain a jittery feeling. I’m assuming syntax highlighted text results in more GPU memory calls?
## Gaming
I tried running Silksong.
On Windows I got a solid 60 FPS and it was buttery smooth. I played it for many multi-hour sessions without a hitch. It was awesome and I got to the end of act 1.
On Linux, at least it started up. It runs, but it’s quite a bit choppier even though it’s reporting 60 FPS. It’s hard to describe: there are a LOT of micro-stutters, by which I mean the game gets paused for ~5ms and unpaused frequently enough that it’s problematic. It feels like frames are out of sync with something, and it’s most noticeable when I jump.
After digging around I saw an open GitHub issue where allocating GPU memory is a system-wide blocking operation, which is pretty wild to see. NVIDIA did say they found the cause but that it won’t be easy to fix. It’s been 6 months since they last replied. This could explain the continuous micro-pauses I noticed only on Linux with the same game.
I tried with and without v-sync but it made no difference. I wish this was the only problem; unfortunately the game isn’t playable for other reasons described below. I only let it run for a few minutes so I can’t speak to its stability.
There’s a continuous and constant ~150-200ms of keyboard input latency on every key press. Mouse input is fine; it’s only the keyboard. It’s a wired USB keyboard that had no problems on Windows.
Since I use a keyboard and mouse to play, that makes the game unplayable considering it’s literally all about precise, controlled movements.
The same keyboard works fine in niri (outside of games) and Windows (everywhere). I tried 2 other keyboards just to test and it got the same results. Maybe it’s another Wayland / niri issue.
I tried this niri config option which made no difference with keyboard latency, I figured it was worth trying based on some research:
debug {
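// disable direct scanout (always composite windows instead)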
disable-direct-scanout
}
Then I tried gamescope but it doesn’t run. Upon launching a game with it, it simply core dumps with a ton of output; an interesting line is:
Jan 02 06:58:24 kaizen uwsm_niri.desktop[899]: 2026-01-02T11:58:24.749669Z DEBUG niri::backend::tty: error importing dmabuf: Error::DeviceMissing
I posted both issues (keyboard delay and gamescope) in the niri discussion board but haven’t gotten a response.
Then I went on a 4-hour adventure trying everything I could think of: all sorts of NVIDIA environment variables set as Steam launch options, based on Googling and Gemini’s responses. Most of them resulted in the game running with either a black screen or a frozen first frame. In other words, unusable.
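To give an idea of the format, Steam launch options look like this, where %command% gets replaced with the game’s real command line. The variable below is just one illustration of the kind of thing I was setting, not a recommendation (none of them helped here):
# Example Steam launch option (set per game under Properties -> General).
__GL_SYNC_TO_VBLANK=0 %command%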
I also tried Proton, both the latest stable and experimental versions. The game launched but there was no difference with the keyboard lag; it was still very present.
# KDE Plasma
Look, I want to use niri and I’m not going back to Windows 10 unless it’s an absolute necessity. Trying KDE Plasma felt like a reasonable next step to see if these problems are niri or Wayland specific.
First I went on a 30-minute adventure figuring out how to install KDE Plasma in the most minimal way possible so I could test it in both Wayland and X11.
I landed on this:
sudo pacman -Syu plasma-desktop kscreen plasma-x11-session xorg-server xorg-xinit
- plasma-desktop is the bare necessities to have a working shell / GUI support
- kscreen lets you tweak your display’s properties (it was greyed out without this)
- plasma-x11-session lets you run Plasma in X11
- xorg-server is X’s server which needs to be running for Plasma to use it
- xorg-xinit provides a way to start X with a bit of automated set up
I created ~/.xinitrc and put this in:
export DESKTOP_SESSION=plasma
exec startplasma-x11
From my tty1 all I have to do is run startplasma-wayland or startx depending on which one I want to try.
Firefox, Ghostty and everything else is still installed. That’s one thing I really like about Linux. Loading up a different compositor or “window manager” is a matter of logging out and running a different binary.
## Testing desktop GPU pressure (Wayland)
I opened up Ghostty to start monitoring my GPU memory with watch nvidia-smi.
I noticed plasmashell uses quite a lot more GPU memory than niri:
$ nvidia-smi
Fri Jan 2 17:10:26 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.119.02 Driver Version: 580.119.02 CUDA Version: 13.0 |
+-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce GTX 750 Ti Off | 00000000:01:00.0 On | N/A |
| 42% 48C P0 2W / 38W | 550MiB / 2048MiB | 1% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 976 G /usr/bin/kwin_wayland 10MiB |
| 0 N/A N/A 1070 G /usr/bin/Xwayland 2MiB |
| 0 N/A N/A 1106 G /usr/bin/kded6 1MiB |
| 0 N/A N/A 1107 G /usr/bin/ksmserver 1MiB |
| 0 N/A N/A 1119 G /usr/bin/plasmashell 352MiB |
| 0 N/A N/A 1144 G /usr/bin/kaccess 1MiB |
| 0 N/A N/A 1145 G ...it-kde-authentication-agent-1 1MiB |
| 0 N/A N/A 1283 G /usr/lib/xdg-desktop-portal-kde 1MiB |
| 0 N/A N/A 1292 G /usr/bin/ksecretd 1MiB |
| 0 N/A N/A 1341 G /usr/lib/baloorunner 1MiB |
| 0 N/A N/A 1348 G /usr/bin/ghostty 35MiB |
+-----------------------------------------------------------------------------------------+
Unfortunately, adding it as an NVIDIA application profile like I did for niri didn’t reduce it. I believe that’s because the NVIDIA drivers have hard-coded values that auto-apply it to certain processes and KDE Plasma is already in there, whereas niri was not. It’s just heavier.
Ok whatever, let’s see what happens. I started to open Firefox windows and Ghostty terminals and yep, the same problem. Ghostty core dumps and Firefox renders blank screens.
Around ~6-8 Firefox windows and half a dozen Ghostty windows is enough to max it out, and Ghostty starts crashing on its own.
However! Unlike niri, KDE Plasma’s compositor (kwin_wayland) barely leaks. I used it naturally for a few hours and while it wasn’t rock solid, it only leaked around 10 MB after 4 hours; niri would have leaked 300-400 MB by then.
I also did the mpv test from before, and all of the memory was reclaimed after closing the image. I believe this further verifies that the reported memory leak in smithay is real: same system, same drivers, leaks in niri but not in KDE Plasma.
I did have the compositor lock up a few times when spam opening Ghostty and Firefox to test GPU pressure. I had to forcefully reboot my machine to recover.
## Ghostty (Wayland)
Still roughly the same problem as niri, but I would classify it as less severe. It feels like it does the “buffering thing” less often, where it holds character output hostage until I let go of a key.
It’s still nowhere near as smooth as the Microsoft Terminal on Windows.
It makes me wonder if folks on Linux just haven’t experienced what true low latency smoothness feels like. Try dropping out of your desktop environment into a shell with CTRL + ALT + F2, log in and start typing. That’s basically what the Microsoft Terminal feels like inside of WSL 2 (even in Neovim). It’s liquid smooth with no jittering.
## Gaming (Wayland)
Holy shit, the keyboard input delay is gone. Silksong doesn’t run as smoothly as on Windows but it’s actually playable. I don’t know how stable it is since I didn’t play a long session, but it’s playable. No gamescope or Proton either; it just straight up worked.
That’s interesting because both niri and KDE Plasma (Wayland) use XWayland to run X11 apps. Why would there be keyboard input delay with only niri?
When I spent hours researching Linux gaming earlier I encountered terms like triple buffering; niri can’t turn that off but other Wayland compositors can (including Hyprland). It could be a source of keyboard input delay.
Then I tried gamescope and it still crashed like it did with niri.
I have a feeling I’m going to run into issues because with just Silksong, 1 Firefox window and 1 terminal open my GPU memory was at 1700 / 2048 MiB. I’m bound to eventually have the game crash while playing. All I did was jump around the first zone on a different save profile, so there was not a whole lot going on.
## GPU Pressure (X11)
Right away things felt less smooth. Just moving windows around has a roughness to it.
I wasn’t optimistic but…
Without hesitation the unhinged degenerate inside of me opened a YouTube video to play and then I opened 68 Firefox windows and 25 Ghostty terminals. Then I loaded up OBS and recorded a 1080p video of my 4k screen. The video came out at 30 FPS like it should have with no dropped frames.
It all worked. The system was fully usable at about 5 GB / 16 GB of system memory used and around 1.5 GB / 2 GB of GPU memory. I could type into Neovim within Ghostty without much delay as well. The root issue of Ghostty being slow and jittery was still there, but I could actually use my machine.
That’s surprising! Something about X11 lets the NVIDIA drivers seamlessly back GPU allocations with system memory, and it self-regulates GPU memory so it never maxes out.
I ended up recording a video with my phone to show how well it handles GPU load. Here’s the timestamp.
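If you want to approximate that stress test (scaled down a bit), a throwaway loop gets you most of the way. A sketch:
# Spawn a pile of hardware accelerated windows while watching nvidia-smi.
# On X11 this stayed usable; on Wayland it's what triggered the crashes.
for i in $(seq 10); do firefox --new-window & ghostty & done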
## Gaming (X11)
Like Wayland (with KDE Plasma) it worked without keyboard input delay, but it didn’t play as well. I could see visual screen tearing, and v-sync being on or off in the game made no difference. Something just felt off with the performance; it felt jagged.
Funny enough, gamescope did run, but it completely tanked my frame rate. It was a complete slideshow at 15 FPS instead of the 60 FPS I got without it.
I don’t know what to make of this other than I don’t think I’d want to play games in this environment with or without gamescope, but maybe there are other things to tweak?
# KDE Plasma vs niri
This is the real dilemma.
Both have issues with GPU memory pressure on Wayland, but niri uses less memory on a fresh boot. On the other hand, KDE Plasma does a much better job of not leaking memory (at least until the smithay memory leak affecting niri is resolved).
Realistically, KDE Plasma X11 is the only choice for being able to use my machine without worrying about GPU memory every few hours with the current drivers and versions we have of everything today.
For gaming, there’s no contest: I’d have to use KDE Plasma. If I use niri it would be like “dual booting” into KDE Plasma just for games instead of Windows, since it’s annoying to log out and log back in (even with tmux-resurrect).
Hopefully the niri developers can figure out what KDE Plasma is doing to allow for no keyboard input delay when playing games. I’m happy to help if they acknowledge the forum post I created.
There’s also maybe some room for improvement in smoothness, since KDE Plasma with Wayland felt less delayed in Ghostty and moving windows around generally felt smoother.
If these things could be accomplished then both would be on equal ground from a usability and stability standpoint, at least for the things they can control.
# Root Cause?
For the keyboard input latency with games? I have no idea.
For the GPU memory leaks? We know it’s a smithay issue, but it’s surprising that the compositor has to manage this internally. I don’t know X11 or Wayland’s history, but I’m surprised Wayland itself doesn’t handle the low-level resource management for windows.
For the GPU max memory pressure? I’m not a GPU driver developer. I can only go by the evidence I see when using my computer as an end user.
In this post we discovered with the same hardware:
- Everything is super stable and smooth on Windows no matter what I throw at it
- Multiple Wayland compositors will not let processes use system memory when GPU memory fills up, resulting in apps crashing in unpredictable ways after opening a few windows on a lower end GPU with limited GPU memory
- Separate to that, some compositors leak GPU memory that doesn’t get reclaimed
- KDE Plasma with X11 acts more like Windows, where system memory can be leveraged in a transparent way, but is much less smooth than Wayland
Personally I don’t understand how this can be the full picture on Linux, because then everyone would be crashing all the time. Modern cards have much more memory, but it would still happen eventually. Are people just not noticing because they have 8-16 GB of GPU memory and turn their computers off every night?
I’ve kept my machine powered on 24/7 for 11 years. It only reboots for Windows patches. A month or so of uptime is normal, and now with Windows 10 being end of life I could easily have multi-month uptime. My record was 278 days of uptime on Windows 7.
## NVIDIA 580 Series Is End of Life in August 2026
NVIDIA stated they will stop addressing non-security bugs in August 2026, so that leaves us with a potentially busted driver unless they are generous enough to backport a fix after the end-of-life deadline.
I think the first step is helping NVIDIA become aware of the problem. I’ve gone ahead and posted a bug report in NVIDIA’s Wayland repo. Hopefully it gets addressed before the drivers go end of life.
By the way, I tried the open source drivers and it was a catastrophe. They wouldn’t let me use my 4k monitor and hard locked my machine a few times within 30 minutes.
## Known Workarounds
Not a whole lot. They just delay the inevitable.
I can disable hardware acceleration in Firefox to save a few hundred megs, but then the user experience is very poor. Everything is super slow to render and even a 30 FPS 1080p video is a slideshow with the audio out of sync with the video.
You can disable hardware acceleration in Walker by running it with GSK_RENDERER=cairo walker --gapplication-service, which wins back about 150 MB. It doesn’t have too many fancy animations (in a good way) so the lack of hardware acceleration is fine, even with image previews, launching apps with icons or picking emojis.
You can confirm it’s not using the GPU by checking that it no longer appears in the nvidia-smi process list.
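A quick check:
# If walker is software rendered, this prints nothing.
nvidia-smi | grep walker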
You can go all-in with KDE Plasma X11, but its gaming performance was worse than KDE Plasma Wayland, so I still can’t stick with 1 set up. Plus, Plasma 6.8 is moving toward being Wayland only. Losing niri is a huge hit too.
# Reflecting on Linux vs Windows
After a week-long battle I’m left with a machine I can barely use, and I can’t play any games without ditching niri. The gaming experience is substantially worse too.
It makes me think how much easier it is on Windows. You boot up and usually get a great experience out of the box and it continues to work. It’s dependable. This is my personal assessment after using Windows for 25 years, building a few systems for myself, helping friends build machines and having a circle of friends who never had problems that came close to what I’ve experienced here with Linux.
Remember, if you hear bad news about a Windows update where 117 people report something on a forum, there’s ~1.4 billion (with a B) devices using Windows 10 / 11 and it’s safe to say hundreds of millions of people have no problem at all.
I haven’t had real Windows problems since the early XP days before SP1 (Service Pack 1) in ~2001-2002, and that’s while using my computer professionally for over 2 decades.
Also, please don’t misinterpret my praise for Windows as liking it. I don’t like Windows and clearly want to switch away from it, otherwise I would have wiped this machine and put Windows 10 back for the rest of this machine’s life.
I will not use Windows 11 no matter what, even with new hardware, and I don’t want to continue using an end-of-life OS (Windows 10), so that leaves Linux as the only long-term option. That’s why I’m so persistent here.
I don’t mind putting in the legwork up front, because usually once things reach a working state they’ll be great, but in this case I can’t get to a fully working state.
Speaking of working states: without the GPU memory issues, using my machine on Linux feels like a hardware upgrade vs Windows. Disk I/O is faster and it’s like the CPU processes things quicker. For example, websites without a doubt load faster on Linux using the same version of Firefox. It’s impressive to be honest.
## Deep Rabbit Holes
I really feel for anyone who is not a developer or this determined. I don’t know how they could deal with so many issues, and I guess it now makes sense why the Linux desktop adoption rate is where it’s at. Even if I got everything working after a week of battles, there’s no way I would feel good recommending this to a non-developer gaming friend because they might hit their own unique set of problems.
Funny enough the only class of people I would recommend Linux to are the polar opposites:
- Old folks who do nothing except use a web browser. Windows always tricks them into problematic upgrades or buying their offerings. It actively harasses them. Throwing some Ubuntu or Debian distro on their machine makes all of these