Adventures on the AI Coding side of things
TL;DR: We spend most of our time dreaming about what is possible and miss the real world. AI can make us go crazy very easily, and if we don't become aware of it, maybe this is the way AI will dominate and destroy us?
With the rising hype around AI vibe programming, and what I call AI-Assisted Programming, I had no option left but to hop onto this wagon before it was too late. Boy, what a ride! :D
I grew up in the eighties, and my first computer had a Z80 microprocessor and 1 Kbyte of RAM. I read all the sci-fi books the local library would let me read (I was 13) and dreamed about a (far-off) year 2000 in the next century, full of spaceships and exciting new technologies!!! I didn't even want to learn to drive a car or get a driver's license, because cars were too boring, and since we were going to have spaceships in the future, I decided to just wait for the spaceships and forget about cars. Biggest scam ever! It is 2025 and cars still look suspiciously like the ones back in the 80s. Damn. So I guess I must keep waiting… (No driver's license yet! Better hurry with those spaceships, eh?)
Not everything was a disappointment! Apart from computers, I believe there have been four times since I was born that I experienced unbelievable, revolutionary technology (for someone born in the mid-60s):
1 — Internet
Even the possibility of connecting to distant BBSs, using modems over telephone landlines, to share and download software, and to talk to people when you didn't even know where they were from (by posting messages on message boards), was fascinating enough; when the internet became available, this fascination grew tenfold. We could talk to people in real time on IRC and had access to a whole world of information about everything, especially computers. And you could post your own stuff. Wild.
2 — Second Life
The second revolutionary technology I experienced was Second Life: a virtual 3D world where people, through user-defined avatars, could interact with other humans in 3D environments, in real time, with sound and even voice, so you could talk to each other with your own voices. You could go to a nightclub on a Friday with some friends, and the DJ was also a real person who took music requests. There were art events, educational events promoted by universities, and lots of people singing, playing live music, and having a good time. Boy, you could fly, live in the sky in skyboxes, have a house somewhere, and instantly teleport anywhere in the virtual world. Really sci-fi stuff made real.
3 — 3D Printing
The first time I designed a virtual 3D geometry in a 3D editor/CAD program and watched it materialise in front of my eyes, the exact shape I had drawn, which had never existed before except in my mind, now taking form out of PLA plastic in a 3D printer, was kind of magical. It blew my mind with the infinite possibilities I saw unfolding in front of my eyes. :D
4 — AI Code Development, or what I call AI-Assisted Programming (AIAP)
And this is one of the most amazing technological advances I have experienced in my entire life.
It is like a dream. You have a coding assistant that will happily create all the code you ask it to create.
A dream and a nightmare. It will also happily hallucinate when it does not know about something: it just makes up whatever best fits its bible of probabilities. You have to learn how to steer this assistant into more successful waters. That is where context management and prompt engineering come into play. You feed it a nice summary of the terms in which you want it to act (context) and ask it nicely (prompt engineering). You may get better results, almost perfect ones, but always be suspicious; you never know what tricks it has up its sleeve. Just don't go and ask for an application that would create the world's next-generation AGI agent.
You need to constrain the semantic space it can move in. Start by asking about technologies that can be used to accomplish something: viable technological paths, possibilities, pros and cons, etc. When you have a clear view of what is available, how it is used, and what the trade-offs are, you can decide which technology stack to use for the task. Once you know what you want, you ask it to generate a specification file where everything that was decided is included as guidelines for building what you want. With this specification in hand, it is just a matter of feeding it in as context and asking, for example, to "start implementing the system defined in Phase 2 of the included specification, please :)" and it will happily start creating the code needed, very close to the code you want. You always have to check the code you get.

This is the way I found works for me. I create the specification file, then generate the code needed sequentially and save everything I get. Later I review and integrate this generated code into my project. I won't let AI touch my code. I don't trust it. I don't want it near my code. So I ask for specific code, save it, and later review and integrate or discard it. If I want some code changed, I ask it to generate new code, this time paying attention to some new fact, in order to get more correct code. Or, if something structural changes, I manually edit the specification file and update it to reflect the changes, if possible.
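To make the idea concrete, here is the kind of specification file I mean. This excerpt is invented purely for illustration; it is not one of my actual files:

```
Project: MIDI patch editor for a guitar synth (illustrative example)
Stack: C++17, Qt Widgets, JACK MIDI (chosen after the pros/cons round)

Phase 1: send/receive SysEx over the unit's MIDI ports
Phase 2: patch list view with load, rename and save
Phase 3: patch parameter editor with undo

Guidelines: no dependencies beyond Qt and JACK; every piece of
generated code is reviewed before it goes near the main tree.
```

The point is that every later prompt can just reference a phase of this file instead of re-explaining the whole project.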
Now I will describe what my last development cycle was like using AI-Assisted Programming — the name I like to use, because that is indeed what it does — and the way I used it: I don't let it touch my code. I give it context and I get back code. Lots of code. But that's another story, which we will develop later.

So it goes like this: I am also a musician, and I recently got a second-hand MIDI guitar synthesiser system. It has a USB connection and MIDI connections that let me control the unit, record MIDI, etc. And then my developer mind woke up: Wait! It has USB and MIDI?! And the default MIDI editor is old and obsolete and does almost nothing you need (vintage protocol), so… I can create a nice editor that does what I want, and use AI to develop it fast! Yes! Let's get to work.

First I needed to find AI providers that would let me code. I had never used OpenAI. I tried some small ones like Phind and got some promising results, but it kept creating new files to fix things, introducing new errors… OK, so I used it to create little snippets of code, and to research and gather information about whatever I needed. It was cool because it would do web search, something the others were not doing at the time, and I got back accurate, up-to-date tutorials and programming documentation. I started creating little CLI MIDI utility apps and then tried to create a UI app. As I was on Debian Linux running KDE, I gave Qt a try and ported those little utilities to Qt, giving them a UI for easier manipulation.
Then I got this idea: Hmm… I could create a lot of MIDI generators with this! My mind went into warp mode:
for every crazy idea I had about a generative system, the AI would generate a full app's worth of code. By the time I became aware of it, I had around 130 directories with code for generative apps; of those, about 8% were working, and the rest had several code versions that had to be checked, verified, merged… Problem: each of those apps uses the JACK sound system, and each had slightly different code for accessing the JACK audio and MIDI system. OK, easy: let's create a JACK lib to unify audio interface access for all the little apps. Now I had a good, working lib for accessing JACK audio, and a need to update all the apps to use it. I manually updated 3 or 4, and as I did, I could see the pattern of the upgrade; and there were around 130 apps, only 8% of them working… Wait! I need an update system!
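The core of such a unified JACK lib is small. Here is a simplified sketch of the idea, using the standard JACK C API; the class name and layout are illustrative, not the real lib:

```cpp
// Illustrative sketch of a unified JACK access class.
// Each app subclasses it and overrides process() with its own logic.
#include <jack/jack.h>
#include <jack/midiport.h>
#include <stdexcept>
#include <string>

class JackConnection {
public:
    explicit JackConnection(const std::string& name) {
        client_ = jack_client_open(name.c_str(), JackNullOption, nullptr);
        if (!client_) throw std::runtime_error("cannot connect to JACK");
        midi_in_  = jack_port_register(client_, "midi_in",
                                       JACK_DEFAULT_MIDI_TYPE, JackPortIsInput, 0);
        midi_out_ = jack_port_register(client_, "midi_out",
                                       JACK_DEFAULT_MIDI_TYPE, JackPortIsOutput, 0);
        jack_set_process_callback(client_, &JackConnection::thunk, this);
    }
    virtual ~JackConnection() { if (client_) jack_client_close(client_); }
    void activate() { jack_activate(client_); }

protected:
    // Real-time callback: each app puts its MIDI/audio logic here.
    virtual int process(jack_nframes_t nframes) = 0;
    jack_port_t* midi_in_  = nullptr;
    jack_port_t* midi_out_ = nullptr;

private:
    static int thunk(jack_nframes_t n, void* self) {
        return static_cast<JackConnection*>(self)->process(n);
    }
    jack_client_t* client_ = nullptr;
};
```

With all ~130 apps deriving from one class like this, the slightly different copies of the JACK plumbing collapse into a single place.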
Fun fact: at some point I made a generative MIDI app that took a note sequence and generated random, chaotic music by manipulating the original input. It was meant for short note sequences, so if I gave it a 6-note MIDI sequence, the app would generate about 3–4 minutes of random music. But one time, while testing the app's output, I mistyped the input MIDI filename and gave it a complete song as input: a MIDI file with hundreds or more of MIDI notes… The result? A 1.6-megabyte MIDI file with around 263,000 MIDI notes, which I have been unable to open in any MIDI app I have tried. :)
At this point I switched to Anthropic's AI and was getting better code, but it was still buggy. I was in the middle of implementing the update system, realising it would take me some time to put together but would save me a lot of time refactoring all the apps' code, and… Wait! (This was the genius idea!) I have a lot of apps, and all of them will share the same input/output audio system… hmm, I could create a system that would use them all as… Wait! A MIDI node system! A system that would reuse all the MIDI apps' logic as plug-ins, use a single JACK class to manage all input/output, let me route plug-ins' inputs and outputs from one to another, and create new systems by composing the plug-ins! So: create a new project and start thinking about the node application.

Then I heard about Claude and was in awe at the code quality; after learning how to give it proper context, the generated code almost always compiled and ran with minimal editing, like adding a missing #define or updating some deprecated code. I have never stopped using Claude since. I kept porting to the new plug-in node architecture and managed to get the application about 40% working, but meanwhile the plug-in system was causing problems (because the plug-in specification kept changing wildly), so I gave up on plug-ins and went with nodes as classes, compiled into the application. Later I will have to think about a way for users to develop new nodes without plug-ins… adding new node classes to the source and compiling the full app is not for everyone.
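The nodes-as-classes idea boils down to something like this minimal sketch (names and details invented here for illustration; the real nodes are much richer):

```cpp
// Minimal sketch of nodes-as-compiled-classes for a MIDI node graph.
#include <algorithm>
#include <cstdint>
#include <vector>

struct MidiEvent { uint32_t time; uint8_t status, data1, data2; };

class MidiNode {
public:
    virtual ~MidiNode() = default;
    // Transform incoming events, then push the result downstream.
    void process(const std::vector<MidiEvent>& in) {
        std::vector<MidiEvent> out = transform(in);
        for (MidiNode* next : outputs_) next->process(out);
    }
    void connectTo(MidiNode* next) { outputs_.push_back(next); }
protected:
    virtual std::vector<MidiEvent> transform(const std::vector<MidiEvent>& in) = 0;
private:
    std::vector<MidiNode*> outputs_;
};

// Example node: transpose note-on/note-off events up an octave.
class Transpose : public MidiNode {
protected:
    std::vector<MidiEvent> transform(const std::vector<MidiEvent>& in) override {
        std::vector<MidiEvent> out = in;
        for (auto& e : out)
            if ((e.status & 0xF0) == 0x90 || (e.status & 0xF0) == 0x80)
                e.data1 = static_cast<uint8_t>(std::min(e.data1 + 12, 127));
        return out;
    }
};
```

Composing a new system is then just a chain of connectTo() calls between node instances.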
I was not happy with the default look of the Qt UI of the MIDI node system, so I developed UI themes and a theme editor, but it turned out that wasn't what I had in mind; I was thinking of a retro look, a kind of green CRT vibe (like one of the themes). So I decided to develop a main retro LED screen with square pixels, monochromatic, in green/amber CRT colours, with LED pixel glow and even screen flickering to give it a slightly realistic touch. It has a variable pixel size and pixel gap, so it adapts to all sizes. Great! It looks cool! But how can we draw on it? I developed a simple graphics lib for the LedDisplay. I could now draw pixels, lines, and rectangles/squares. But I also needed some text capabilities. No problem: I developed a pixel font system for the LedDisplay, happily hand-coding the binary font patterns in text from some pictures of an old Spectrum pixel font. It looked nice! I had simple text! And then circles, filled rectangles and circles, curves… what about gradient fills? Of course: gradient lines and curves, gradient-filled rectangles and circles, oh, add patterns to that too :D. Patterns… Wait! What we need is a bitmap class, yesss! It can work as images and textures! Now I could use bitmaps in the LedDisplay, so I added tools to transform them, apply effects, apply 2D filters, ah, a 2D layer system for bitmap compositing… hmm, what about retro gaming? It would look great on this display: so an animation system, a sprite system, a particle system, a scene/view system; audio and MIDI we already have :) I ended up making a Raiders and Defender clone for the LedDisplay…

Wait! I am making a graphics system!! And I can apply it to other graphics endpoints, so I need a new drawing abstraction: a canvas class, where I can draw, and which renders itself using the configured application graphics setup. This canvas inherits the graphics drawing library and connects to a graphics manager that can draw on X11, SDL2, OpenGL, directly to the Linux framebuffer, and of course our LedDisplay. Bitmaps also gained a canvas, so we can draw on a bitmap using the unified graphics system. Neat. At this point I was generating a lot of code and storing it for later review and integration.

And then I became aware that I might have something I could commercialize. A MIDI node system for generative purposes could have a niche market somewhere. But there was a problem: Qt. It is free for open source development, but for commercial development you have to pay for a license, and basically you are paying to get locked into Qt technology. So I (again) had a Wait! moment: as I have a good understanding of object frameworks, I decided to create my own object framework!!! Yeahhh! So I created a top object to deal with parents and children, then an event controller class to manage events, and from this a UI base object. From then on it is "only" a matter of creating descendant objects for everything: an App object that creates the basic setup for an entire application, with two descendant classes, a CLI App and a Window App; and from the UI base object, Labels, Buttons, Sliders, and all the necessary UI objects, all drawing on the canvas already developed, integrating all the MIDI and audio code developed before into the framework.
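In skeleton form, the canvas-over-backends idea looks something like this; a minimal sketch with illustrative names, far simpler than the real thing:

```cpp
// Minimal sketch of a backend-agnostic canvas abstraction.
#include <cstdint>

struct Color { uint8_t r, g, b; };

// One backend per endpoint: X11, SDL2, OpenGL, framebuffer, LedDisplay.
class GraphicsBackend {
public:
    virtual ~GraphicsBackend() = default;
    virtual void putPixel(int x, int y, Color c) = 0;
    virtual void present() = 0;  // flush the finished frame to the device
};

// The canvas owns the drawing algorithms and never touches a device directly.
class Canvas {
public:
    explicit Canvas(GraphicsBackend& b) : backend_(b) {}
    void drawPixel(int x, int y, Color c) { backend_.putPixel(x, y, c); }
    void drawRect(int x, int y, int w, int h, Color c) {
        for (int i = 0; i < w; ++i) { drawPixel(x + i, y, c); drawPixel(x + i, y + h - 1, c); }
        for (int j = 0; j < h; ++j) { drawPixel(x, y + j, c); drawPixel(x + w - 1, y + j, c); }
    }
    void present() { backend_.present(); }
private:
    GraphicsBackend& backend_;
};
```

Each endpoint only has to implement the tiny backend interface; every drawing algorithm lives once, in the canvas.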
But a lot of the code I need is already developed, buried in an ever-growing pile of code files and directories created in the moment, trying to give the filesystem some organisation but ending up with names like code, code1, code2, new_code, new_code2, last_code3, last, LAST, THIS. At one point I counted over 5000 text files with code… some holding several versions of the same code as it was rewritten… boy, how the hell am I going to handle all of t… WAIT! Just create a source code documentation system that would crawl all those directories, open the files, read the code, and build a database with all the code indexed; then I could search for similar code I might need and reuse it. Yes, that could work. So I started creating another new project… And maybe we could create a client/server architecture, so the server would manage the database, some apps could feed the server with data, and client apps could connect to the server and search for code and… and… and…
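The crawling-and-indexing core of such a system fits on a page. A minimal sketch of the idea, with no database and no server, just an in-memory index:

```cpp
// Minimal sketch of a code indexer: crawl a tree, map identifiers to files.
#include <filesystem>
#include <fstream>
#include <iostream>
#include <map>
#include <regex>
#include <set>
#include <string>

namespace fs = std::filesystem;

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: indexer <root_dir>\n"; return 1; }
    std::map<std::string, std::set<std::string>> index;  // identifier -> files
    std::regex ident(R"([A-Za-z_][A-Za-z0-9_]{3,})");    // crude tokenizer
    for (const auto& entry : fs::recursive_directory_iterator(argv[1])) {
        if (!entry.is_regular_file()) continue;
        auto ext = entry.path().extension();
        if (ext != ".cpp" && ext != ".h" && ext != ".hpp") continue;
        std::ifstream file(entry.path());
        std::string line;
        while (std::getline(file, line))
            for (std::sregex_iterator it(line.begin(), line.end(), ident), end;
                 it != end; ++it)
                index[it->str()].insert(entry.path().string());
    }
    // Query: which files mention a given identifier?
    for (const auto& f : index["jack_client_open"]) std::cout << f << "\n";
    return 0;
}
```

The database and client/server layers would grow from there, but the crawl-tokenize-index loop is the heart of it.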
WAIT!
Then I realised that all I had wanted was to make an editor app to help me manage the guitar synth patches and make it easy to play. I had actually spent about a month and a half on all this crazy development carousel and hadn't had time to play a single musical note!
…so I just shut down the computer, switched on my guitar synth and the amplifier, played my guitar for an hour, and then went to sleep.
Wait! Wait? No, do NOT wait! Don’t waste time. Just go and DO it.
Peace
© XCF Seetan 2025
Note(1) — I have noticed that all the "Wait!" sequences look like a Digital Attention Deficit Disorder (DADD) or an AI Attention Deficit Disorder (AIADD) (or, most probably, they reflect my own A.D.D.): I keep changing targets whenever I get a new idea, postponing the previous one, and in the end I spend all of my available time on this kind of activity and realise no real work was done; a lot of data was generated, valuable, but a big mess of knowledge. (B.M.O.K. — Big Mess Of Knowledge)
Note(2) — I have also found that the G.O.A.T. note-taking application is the analog pen-and-paper notebook. Nothing digital beats it when you need an urgent way to capture a newly emerging, fantastic idea before you forget it. Low price, instantly available, ultra-long-lasting high-quality ink cartridge, universal UI, rewarding UX, ultra-high-definition user-defined graphics engine, highly flexible and adaptive user-defined layout engine, high-capacity storage and long-term archival capabilities. All in a little package no bigger than a little notebook ;) Amazing!