
Complexity, Bugs and LLMs

Software Development
LLMs

June 14, 2025

Unnecessary libraries create unnecessary bugs, and LLMs are making it worse.


“Everything should be made as simple as possible, but not simpler.” – Albert Einstein

Last weekend, I was working on a personal project. I wanted to build a way to parse the markdown files in my Obsidian vault and extract all the todos. The problem, of course, is that my vault is synced across devices using a peer-to-peer protocol via an open-source tool called Syncthing. This means my parsing logic needs to track file changes irrespective of which device they were made on. The moment a change syncs to my laptop, where this system will run, it needs to be tracked.
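For context, an Obsidian todo is just a markdown checkbox, so the extraction half is a small amount of regex work. A minimal sketch, assuming the standard checkbox syntax (the function name and pattern are my own illustration, not the project's actual code):

```python
import re
from pathlib import Path

# Obsidian renders "- [ ] task" as an open todo and "- [x] task" as a done one.
TODO_PATTERN = re.compile(r"^\s*[-*]\s+\[([ x])\]\s+(.+)$", re.MULTILINE)

def extract_todos(md_file: Path) -> list[tuple[bool, str]]:
    """Return (done, text) pairs for every checkbox item in a markdown file."""
    text = md_file.read_text(encoding="utf-8")
    return [(mark == "x", task) for mark, task in TODO_PATTERN.findall(text)]
```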

So, as usual, I went on ChatGPT and typed what I wanted, and it pointed me to watchdog, a Python library that tracks file system changes, along with some boilerplate code to go with it. The problem was that my vault had non-uniformly nested folders, so it was a bit complicated to track. Soon I was designing a three-layer system and my line count had climbed past 100. Then I got frustrated and decided to just think from first principles about what I needed, rather than letting the LLM do the thinking. Within half an hour I had removed the library and reduced the code to around 40 to 50 lines. I used ChatGPT to translate my pseudo-Python into actual Python to speed things up, but the logic was my own: a raw Python system, with no long-running watcher, that tracks and indexes files and uses hashing for constant-time lookups.
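The shape of the final design was roughly: rescan the vault on each run, hash every file's contents, and diff against the previous snapshot, with a dict keyed by path giving the constant-time lookups. Here is a minimal sketch under my own assumptions (the index file location, vault path, and function names are illustrative; the actual script is not published):

```python
import hashlib
import json
from pathlib import Path

INDEX_FILE = Path("vault_index.json")  # assumed location for the saved index

def hash_file(path: Path) -> str:
    """Content hash, so two snapshots can be compared without re-reading files."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan_vault(vault: Path) -> dict[str, str]:
    """Map relative path -> content hash for every markdown file in the vault."""
    return {str(p.relative_to(vault)): hash_file(p) for p in vault.rglob("*.md")}

def diff_index(old: dict[str, str], new: dict[str, str]) -> dict[str, list[str]]:
    """Diff two snapshots; dict membership checks keep each lookup constant time."""
    return {
        "added": [p for p in new if p not in old],
        "removed": [p for p in old if p not in new],
        "modified": [p for p in new if p in old and new[p] != old[p]],
    }

if __name__ == "__main__":
    vault = Path.home() / "vault"  # assumed vault location
    previous = json.loads(INDEX_FILE.read_text()) if INDEX_FILE.exists() else {}
    current = scan_vault(vault)
    print(diff_index(previous, current))
    INDEX_FILE.write_text(json.dumps(current, indent=2))  # persist for the next run
```

Because Syncthing lands remote edits on disk like any other write, a rescan-and-diff approach catches changes from every device without a process watching for events, which is what made watchdog unnecessary here.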

Now, for larger projects there is probably a need for something like watchdog, but for a markdown file tracking and parsing system, raw Python was enough.

The lesson is that pulling in more libraries instead of starting from scratch is what adds complexity, and complexity adds bugs. Even if I had searched on Google I would probably have fallen into the same hole, but I think ChatGPT and LLMs in general are making it worse. LLMs are, as the name suggests, good at language tasks, not at thinking tasks. You cannot outsource your thinking to LLMs. I have recently and repeatedly had this feeling of wasting more time and getting more frustrated while coding with LLMs, especially when I outsource my thinking or even let the LLM influence my thinking. They are good for translation and search, but not for thinking or decision making. In this same project I had ChatGPT come up with edge cases, and it pointed me in a direction where I found a failing edge case. That is an example of search.

This whole experience, and my recent LLM-free coding sessions (which I am really enjoying), reminded me of an old cybersecurity conference in Moscow where a game dev spoke about the decline of technology and how software has gotten buggier. I think this constant jumping to new tools and new libraries, or even a library-first approach, is one of the main reasons. We don't need a library for everything, and it's probably best to code things from scratch once in a while. LLMs are going to make this worse. On top of that, the push to automate software engineers with LLM agents is going to create some garbage software, as well as cause existing software to decay into unusability.

Given hype-driven Silicon Valley economics, I think it will produce bad software and make existing software worse, and in the likely case of LLMs not leading to AGI or whatever, it's just going to increase the demand for software engineers after the bust.

Meanwhile, let's try to make software better by actually thinking through our code and keeping it no more complicated (and with no more libraries) than it needs to be. Thanks for reading till the end. Please have a look around at my other writings.