Andrey Listopadov

Am I becoming obsolete or just older?

@random-thoughts @rant ~12 minutes read

Let’s set the tone for this post right away; I just want to get this out of my system: I despise the rise of AI in tech. Coding assistants, chatbots, vibe coding, agents. Oh, the vibe coding… WTF?

Figure 1: This will be an “old man yells at cloud” type of post.

I don’t like AI in general, although I know that neural networks can be used for good. Before you accuse me of being completely anti-AI, let me explain where I see this technology being applicable.

One such example is modeling. I play guitar, and it’s a pretty expensive hobby. A good guitar itself can cost hundreds or even thousands of dollars, but then there’s the rest of the gear. Tube amplifiers and pedals are expensive, and not everyone has the money for several amplifiers to achieve a versatile sound, plus a bunch of effect pedals to go with them.

The industry noticed that, and a lot of effort was put into modeling these circuits in software. Guitar pedals and amplifiers are basic electronics, for the most part - cascading transistors, capacitors, operational amplifiers, etc. It’s possible to model them accurately in software, and I did so myself, implementing Maxon’s OD808 overdrive pedal in C++ as my diploma project for graduation. The diploma itself was about precise software modeling of analog systems, and the guitar pedal was a practical example I could show. You should have seen the faces of the diploma committee when I pulled out a guitar for the project demonstration. Better yet, you should have seen MY face when the head of the department took my guitar and started blasting Iron Maiden riffs.

However, modeling these circuits accurately while running in real time is hard, especially when we’re talking about emulating tubes or tape machines. People still did it, but the results were not as close to the real thing as one might like, and sound engineers are picky about details. But then another way of modeling appeared: feed a raw signal through your equipment of choice, be it a tube amp or some weird pedal circuit, capture the result, and train a neural network so it can reproduce it flawlessly. This is the technology behind Neural Amp Modeler and other such projects.

This is where neural network technology shines, in my opinion - you get accurate results, even though accurately emulating an entire tube circuit in real time is still a hard task for programmers. It sure can be done manually, and there are successful examples where individual components of the circuit are programmed as separate modules and the whole circuit is then constructed from them. That’s the approach I took with my diploma project as well, but it takes much more time than just training a model on a pair of .WAV files. And the neural network method comes with another real benefit: it allows you to capture your sound exactly as it is for later use. For example, if you’re recording at a studio and want to patch things up or re-record certain parts later, you can capture your studio sound and reproduce it afterwards.
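To make the idea concrete, here is a minimal sketch of what such a capture pipeline boils down to. This is not Neural Amp Modeler’s actual architecture - just the general principle, with random noise standing in for the two .WAV files and a synthetic tanh curve standing in for the real gear:

```python
# Sketch of "capture" modeling: learn a mapping from a dry guitar signal
# to the same signal after it went through an amp or a pedal.
import torch
import torch.nn as nn

# Stand-ins for the two .WAV files: in practice you'd load the dry track
# and the reamped recording with something like soundfile.read().
dry = torch.randn(1, 1, 48000)      # (batch, channels, samples): 1 s at 48 kHz
processed = torch.tanh(3.0 * dry)   # fake "tube" saturation as the target

# A tiny 1-D convolutional net; real capture models use dilated conv
# stacks (WaveNet-style) or small recurrent nets to model circuit memory.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64, padding=63),
    nn.Tanh(),
    nn.Conv1d(16, 1, kernel_size=1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    prediction = model(dry)[..., : dry.shape[-1]]  # trim back to input length
    loss = loss_fn(prediction, processed)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, any dry signal pushed through `model` should sound
# (approximately) like it went through the captured gear.
```

That’s the whole trick: two audio files in, one model of your gear out.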

There are other fields where neural networks can be helpful; I won’t go listing all such cases. I’m only going on this tangent so that you understand that I’m not against neural networks in general. However.

When GPT-3 was first announced, people started going crazy. There had been no real breakthroughs for quite a few years: GPT-2 wasn’t really good, and image and video generation models weren’t good either. Then suddenly GPT-3 talks well enough to be confused with a real person, images generated by other models start looking right rather than like a mush of pixels, and people notice that GPT-3 can generate code in most programming languages and make attempts at problem-solving.

Fast forward a few months, and we have the first coding assistants and IDEs with a primary focus on AI for development and refactoring. The internet is becoming worse by the day - AI-generated articles pollute search results, and you don’t know if the thing is real or not when looking at “drawings”. AI-generated music pollutes streaming services, often sounding an awful lot like some other artist without acknowledging that the model was trained on their music. Some streaming services even announced that you can now remix music with AI if you have a paid plan. People accuse one another of using AI to pretend that they know how to do what they do. The worst.

Figure 2: A comment to the Generative AI is a Parasitic Cancer video by Freya Holmér

Then there’s the new hot thing - vibe coding. I live under a rock, I know, because I don’t follow the news; tech hype reaches me mostly via gossip in real life. So when I first heard the term vibe coding, I naturally assumed it was another one of those “Lofi girl”-type work setups where you have a comfy environment for coding with good vibes.

Figure 3: this kind of stuff, just imagine she’s programming. I guess I could generate an image where she’s programming for real, but then this post would be hypocrisy.

This is pretty much what I do when I have free time to work on my personal projects. I prepare myself a nice cup of coffee with steamed milk and peanut butter1, sit down at my desk, put on headphones, and enjoy my coding time in the late evening. Vibes, right? Nope! Apparently, vibe coding is when you explain to your AI-based editor what the heck you want it to do, except now you’re much more into debugging the code written for you. WTF is “vibe” about that? I wouldn’t mind it being called AI coding, but vibe?

I get it, AI is the hot new thing, and people want to use new shiny things because it triggers a dopamine hit. But toying with something for fun, or being passionate about it, should still remain within reason. It’s fine when you do this for fun, or sparingly, but when everything you do is now done by AI, it goes a bit too far in my book.

I see AI-generated images in presentations and I cringe.

I see how people brag about using AI to do “this and that”, even though the task at hand was easy, and I cringe.

I see how people misuse AI for harder tasks that can still be done perfectly fine without it, with a much lower error rate, and I cringe again.

I see posts about how vibe coding brings back the joy of programming, and I have only one question - what joy? If you did enjoy programming but felt down recently, I think AI is going to make you feel better in the short term, but burnout will hit you again soon enough once the novelty wears off. And if you didn’t enjoy programming in the first place, how is AI going to help you enjoy it? Because you don’t have to do it manually?

If you don’t enjoy the process, AI won’t make it more enjoyable. You will simply offload the unenjoyable part to AI, and then gaslight yourself.


Phew.

These are the kinds of thoughts I’ve had in my head lately. And when I’m annoyed at something for no apparent reason, I usually start reflecting on it in hopes of understanding the source of the problem. Worrying over nothing is the worst thing you can do for your mental health. Like, if I don’t use AI, I can choose not to care about others who do, and continue to live my life like I did before. That’s what I did previously, back when blockchain and NFTs were the hot new topics in town.

However, AI-assisted coding practically hurts me even without me participating in it. For one, I chose to ignore AI assistants - heck, I don’t even use non-AI assistance such as LSP that much. So when I talk to people who use AI, I get weird looks, like: “Man, you’re gonna be obsolete in 5 years if you don’t learn how to use AI, it makes you so much more efficient”. Am I?

I don’t think efficiency is only about speed - it doesn’t really matter how fast you can produce code. Programming is mostly thinking a lot and then writing a little. Speed sure does matter to a business, because today’s main tactic is time-to-market (TTM) optimization, but that’s another topic.

I think relying on AI for coding is going to hurt us in the long run. Count me old-school, but I believe that a lot of people in the programming world today don’t know how to code. You’d be surprised how little some people know these days. Ask them the difference between an array and a vector, and they won’t be able to tell you. Don’t get me started on advanced topics like asynchronous/parallel programming, or even more basic features like closures.
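For reference, this is the kind of “basic feature” I mean - a closure is just a function that captures variables from its enclosing scope. A trivial Python sketch (make_counter is a made-up name for illustration):

```python
# A closure: `increment` keeps a live reference to `count` from the
# enclosing scope, even after make_counter() has returned.
def make_counter():
    count = 0

    def increment():
        nonlocal count  # rebind the captured variable, not a global
        count += 1
        return count

    return increment

counter = make_counter()
print(counter(), counter(), counter())  # => 1 2 3
```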

I’ve seen a lot of people who think they know what they’re doing, but when asked about certain internals of the language/framework/library they use - internals that mattered to the problem at hand - they stumbled and couldn’t answer. And I’m no better, I was there too - I wrote bare-metal C code during the first two years at my first-ever job and didn’t understand anything. Only after I delved deeply into the language and the SoC stuff did I start understanding the what and the why. It improved my results at work and tremendously helped me beyond just C later. So I believe that people can learn, but will they, if everything is done by AI?

I can confidently say that I know Lua deeply enough to understand how its bytecode VM works. I can also confidently say that I know how most of Clojure works internally. But I still can’t confidently say that I’m a great programmer. A decent one, maybe, but there’s still a lot of stuff behind the scenes that I don’t know. This is why I constantly delve deeper into the technology I’m using, trying to understand its inner workings.

And now we have AI assistants practically writing programs for these people, who can’t reason about code written by hand, let alone generated code. I’ve seen people apply for a job and fail miserably at the simplest questions in the coding interview. They can output a solution to a tricky Leetcode task, but can’t explain it. Some are even proud that they can make AI assistants solve the problem without even trying to understand it themselves. I myself can’t solve these Leetcode problems, but I can still write robust code for the real-world task at hand.

Recently I encountered a message in a group chat that pretty much triggered me to write this post, with the following contents:

There are orders of magnitude fewer code examples for ClojureDart, which makes it very difficult to train LLMs. Examples from regular Clojure will probably be pulled up, but the libraries won’t work - you’ll need to port them first.

For a beginner, this is a very large and painful entry threshold, especially if there is no desire to write the code yourself and delve deeply into the language.

Oh, how did people even learn to code before LLMs existed? I imagine it was so hard and painful! Imagine - learning the language by writing code and understanding what you’re doing, am I right? This is so wrong on so many levels.

First of all, if there is no desire to write the code yourself, why the heck do you want to be a programmer? This code is going to run on a machine, so it should be well understood; otherwise it’s a time bomb waiting to explode. And with LLMs and vibe coding, the whole point is that you don’t look into the code that much. Or at all.

And I’m not even talking about using LLMs - even without them hallucinating solutions, we can’t write reliable software today that works properly all of the time. I get frequent crashes when doing mundane tasks - if I took a screenshot every time a program hangs or crashes and loses my work, my screenshot folder would fill up my hard drive. Most programs today can’t achieve even “four nines” of uptime, let alone five or six. Your Windows laptop will suddenly reboot to install updates, so even if your application can theoretically achieve “five nines”, the platform it runs on can’t. I once lost work because Windows was updating the .NET Framework - why the heck does that require a reboot?
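For a sense of scale, the “nines” translate into yearly downtime budgets like this (back-of-the-envelope arithmetic, nothing more):

```python
# Allowed downtime per year for each "nines" level of availability.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for nines in range(3, 7):
    unavailability = 10 ** -nines                 # e.g. four nines -> 0.0001
    downtime = MINUTES_PER_YEAR * unavailability
    print(f"{nines} nines: ~{downtime:.2f} minutes of downtime per year")
```

Four nines is roughly 53 minutes of downtime per year; a single forced Windows-update reboot can eat a good chunk of that budget.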

Today’s software is in a miserable state. We have a lot of over-engineered stuff at the base of our stack. Legacy code creeps in, making new code harder to write. Bugs are today’s norm - I’m still amazed that everything is holding up instead of falling apart.

If engineers were as bad as programmers today, we would all be dead, lying flattened under collapsed buildings. Oh, wait…2


So, am I becoming obsolete, or just getting older and grumpier? I guess it’s both.

I always wondered when I would become one of those old people who don’t understand modern technology. I was born in the 90s and saw firsthand how technology became more and more advanced - I had access to a computer from the age of three. My folks don’t understand modern social networks that well, let alone all this AI stuff. Now I’ve become a part of the club, I guess - I also don’t understand modern social networks and all this AI stuff. Heck, my dad is probably more advanced in tech than me - he’s been doing this smart-home thing for a while now, and I don’t understand any of it.

Programming is both a job and a hobby of mine. I like writing code to solve problems - often ones I impose on myself for the fun of it. I like to think about solutions, and if I can’t figure something out myself, I search online like in the good old days, learning in the process.

You can argue that an AI chatbot is a glorified search interface, except it’s not exactly that, in my opinion. Today’s AI chatbots don’t just present you with existing information - they reconstruct it. LLMs are more or less a decompression algorithm, except an imprecise one in most cases. They can hallucinate a solution and still sound as confident as if it were true. Keep in mind that an AI chatbot doesn’t understand what it’s outputting, for now at least. People do that too, unfortunately, but at least they don’t sound as confident when they clearly don’t know what they’re talking about (yeah, sure).

Sometimes, beginner programmers who only know how to code with LLMs face the harsh reality that LLMs can’t solve every problem. For now, at least - maybe in the upcoming years large language models will become sophisticated enough. I don’t know. And if such a beginner realizes that they still need to learn things, the internet is already full of articles generated with AI by people who don’t understand the topic, making it much harder to learn properly. I’m facing this problem too - every time I want to learn something new, I figure out halfway through an article that it’s AI slop.

Maybe one day I will have to use AI because my job will demand it, and I will change my opinion on this whole thing. Or maybe I’ll change jobs instead, depending on the circumstances. Blacksmithing looks interesting.

Figure 4: A poker I made

Or maybe woodworking? Or pottery? Unless AI is doing these things too by then.

Of course not.


  1. BTW, amazing recipe: 12 grams of coffee beans, ground for a double-shot espresso, about 35-40 grams of salted peanut butter, and steamed milk. Nice texture and taste. ↩︎

  2. AI is apparently used in engineering too now. ↩︎