Check out this interesting video - Infinite Software Crisis
Infinite Software Crisis is a very interesting title for this video. I like that they acknowledged that the use of AI in writing software is here to stay, and that there's really no alternative to that.
I thought it was interesting that they asked, "Will the software engineers of tomorrow be able to understand the codebase?" I don't know whether that's a good thing or a bad thing, but as long as the right models are being trained, there's a high likelihood that AI can effectively write fairly safe software. It might not be the most elegant or the fastest code, but so far, for most of the scientific and analytic tasks that are common at the level of a master's or PhD student, what I've seen is that it's actually able to write the code. And of course, we can cross-check the analysis in Python using numerical libraries to ensure the results are correct, as in the sketch below.
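To give a concrete sketch of what I mean by cross-checking with numerical libraries: suppose the agent wrote a least-squares routine (fit_slope here is a hypothetical stand-in, not anything from the video); we can verify it independently against NumPy's own fitting function on synthetic data where we know the answer.

    import numpy as np

    def fit_slope(x, y):
        # Hypothetical stand-in for an AI-written least-squares slope estimate.
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        xm, ym = x.mean(), y.mean()
        return ((x - xm) * (y - ym)).sum() / ((x - xm) ** 2).sum()

    # Independent check against NumPy's own least-squares fit.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 50)
    y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

    # The degree-1 polyfit slope should agree with our estimate.
    assert np.isclose(fit_slope(x, y), np.polyfit(x, y, 1)[0])

The point isn't this particular formula; it's that an independent, well-tested library gives us a second opinion on whatever the agent produced.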
Of course, if the code is being written to control a system such as avionics or a collision-avoidance system in an automobile, the bar is much higher: the code has to run faster and more efficiently. Python and other interpreted languages may not be suitable there; for the fast stuff, I suspect we'll still be using C++ for the near future, and these models don't seem to be as good at writing C++ code, at present anyhow.
However, for everything else, I'm not sure there's anything wrong with working at this high level of analysis and code writing.
Most recently, what I really liked is that code I generated using Claude Code listed Claude as an author on my GitHub. The disclosure was open and transparent, and I think that's fantastic. The most important thing for future software engineers who use Claude or other agents to help them write code is to actually cite the fact that they used an agent. We should remove any shame or worry about whether using an agent is a bad idea. I think we need to foster it, but we do need to disclose it.
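For what it's worth, that disclosure showed up as a Co-authored-by trailer in the commit message, which GitHub renders as a co-author on the commit. The commit subject and email address below are illustrative, not taken from my actual repository:

    Add survival analysis script

    Co-authored-by: Claude <noreply@anthropic.com>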
For mission-critical tasks, real-time systems, and anything where there's a real risk of harm, human review and double-checking of the code, or even having another AI double-check the code, is very important. Certainly the future will involve different AIs checking each other, some form of adversarial analysis of the code. For sure, this is the path we're heading down.