What if AI eventually makes programmers smarter, not dumber?
I don't miss learning the Iliad by heart anyway
Summary
What's left when you remove typing?
Problem-solving, actually.
I don't think we can hope for no-code to happen any time soon, but it sure feels like LLMs are lifting some of the burden of writing code, freeing our potential to push a bigger Sisyphus boulder.
Programming is typing. Until it's not.
There have been numerous attempts at replacing code editors with something else, and very few were successful to the point of becoming an industry standard. Off the top of my head, I can think of a few, like Excel or Unreal Engine's node editor, but not that many. Sure, you can always argue we have RAD or schema editors, but not only are they nowhere near as popular, they usually end up being complemented, or replaced, by regular code editing once a project grows.
So when you are a developer, you are going to code. Which, despite the excellent tooling we have nowadays with snippets, completion, generation, templates and refactoring shortcuts, means a lot of typing.
When I was in school, one of my teachers told me that learning to type fast would be an asset because, in the end, programming is a lot of typing.
There is truth in that.
I type a lot.
And since being in the flow is paramount to productivity, which in turn depends on keeping interruptions to a minimum, a fluid highway of information from your brain to your files matters a lot. The less you have to think about your input, the better. And if you think fast, and have a lot to get out of your head, you really don't want to have sausage stutter. Ditto for having to find information. Sure, remembering everything by heart is useless, but not having to stop and fetch the proper syntax or API for your next step has value in itself.
But of course, first there are diminishing returns. I have friends with humongous setups of Dvorak split mechanical ergonomic keyboards, optimized tiling desktops and vim configs. They are often less productive, because of all the context they have to hold in their head, the time they spend catering to their setup, and the careful attention they need to give to every interaction to perform operations the way they deem correct. Past some point, getting better at using your fingers will not serve you any more.
And second, as @tsoding reminded us, when you are experienced enough with programming, typing is not going to be your bottleneck. As you build more and more complex projects, on the foundation of a strong professional life, the time you spend working shifts from writing code to solving the problem itself. The code becomes merely a way to formalize the solution to said problem.
Now don't get me wrong, I still spend a lot of time producing code, but whether I type it 30% faster or not is unlikely to affect whether I reach my deadlines.
And yet, AI shows otherwise
Because @tsoding's quote is a reply to another one, where a young programmer says he only writes 20% of his code. I can only concur: I have been using LLMs more and more over the last two years, and consequently the amount of code I write myself keeps decreasing.
And it turns out this is saying something about typing, because, unlike the youngster, I know perfectly well the code I need to write. In fact, in a clean project, AI sometimes writes exactly the code I would have written myself because it just reuses my style and naming from elsewhere.
This is not a 30% improvement in typing. This is a 30,000% improvement in speed. Granted, it goes down as the project matures and the problems become more specific and obscure, since AI is better at scaffolding and at things that have already been solved many times. But still. When it works, it does show that coding is still a hell of a lot of typing, and when you remove all that typing, well, you move faster.
Until it slows you down again because it types crap, but that's more a reflection of how you use the tool. Most devs are not writing the next Figma or Cosmopolitan after all; they are automating known processes, making CRUD apps, or doing data analysis.
In short, the classic stuff is bottlenecked by knowing what works and by the boilerplate needed to do it. Something ChatGPT is quite good at. And when you reach the complicated part of your work, you should fall back to regular tooling; Copilot is not made for that yet. The right tool for the right job, and all that.
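To make "the classic stuff" concrete, here is a minimal sketch of the kind of CRUD boilerplate an LLM scaffolds well. It's a hypothetical example, not taken from the article or any real project: the Flask app, the in-memory todos store and the /todos routes are purely illustrative.

```python
# Hypothetical CRUD scaffold: the kind of boilerplate an LLM produces in seconds.
from flask import Flask, jsonify, request

app = Flask(__name__)
todos = {}      # in-memory store, just for the sketch
next_id = 1

@app.post("/todos")
def create_todo():
    global next_id
    todo = {"id": next_id, "title": request.json["title"], "done": False}
    todos[next_id] = todo
    next_id += 1
    return jsonify(todo), 201

@app.get("/todos/<int:todo_id>")
def read_todo(todo_id):
    todo = todos.get(todo_id)
    return (jsonify(todo), 200) if todo else ("Not found", 404)
```

Nothing here is hard; it's code that has been written a million times, which is exactly where the models shine.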
What's left when you remove typing?
Problem-solving, actually.
I understand the disarray old timers must feel when they see a noob copy/pasting plates of spaghetti from Claude into their repository, I do.
Using code you don't understand is not a new problem, after all. Stack Overflow made that pretty clear 10 years ago. Plus, I'm nobody to throw stones here, I've done it plenty of times.
I'm not going to discuss the cons of all that, because I assume you all understand how it can be problematic.
Instead, I'm going to suggest that there are more pros to magic completion than you might think.
Indeed, knowing the right API, the whole pyramid of conventions we created, and typing fast are not virtues. They were our ways of adapting to the constraints of our field. We needed to do it because that's how things were. But we are not better people for it, no more than NASA engineers using slide rules were better than SpaceX ones using computers today.
Once you remove the need to process all that with your bio-CPU, you can dedicate it to actually understanding, defining, and expressing the problem itself.
Typing, as it happens, is just a layer of indirection. And you know how that goes.
So yes, I know this means we will see abuse, laziness, corners being cut, and a drop in the number of people who know how the turtles below stand. I have no illusions about this, it's the nature of things. I, myself, know much less about lower levels like compilation, memory management, or handwritten assembly because I started my career with dynamic languages. C programmers looked at us in disbelief. I remember a colleague being laughed at because he used VB and the .dll meant a 3-megabyte dependency for the executable. What a wasteful scoundrel! They would have had a heart attack with Electron.
Yet AI is not only making the plumbing aspect of programming more productive, it is also moving the point where a programmer has to apply their leverage. Fewer resources are spent on hieroglyphs, more are available for logic.
I'm fully expecting that it will:
Lower the barrier to entry for building things. It already has, in fact, opening our field to new kinds of smarts.
Create a new breed of devs who will think from a higher point of view, solving different problems, or problems differently, ones we are not good at solving today because we are lost in the minutiae.
Reorient parts of the geeky discussions about what we build, applying the collective intelligence to what is broken IRL instead of accumulating overhead on the layer of code-related stuff.
Crystallize what we collectively and implicitly know to be good-enough solutions to common problems, and distribute them so that anyone can use them, even if they don't know they exist, or the specific form for their instance hasn't been written yet, as a lib or even a doc. The long tail of stuff seniors know just because they saw it once, but shared nowhere, has been a huge loss to our communities.
I don't think we can hope for no-code to happen any time soon, but it sure feels like LLMs are lifting some of the burden of writing code, freeing our potential to push a bigger Sisyphus boulder.
Now what happens when you do something again and again? You get better at it, smarter about it. And what AI promises, is that people will practice more problem-solving and less code wrestling.
Maybe AI will not make each individual programmer smarter. But I'm betting it's going to make the collectivity of developers, as a whole, actually smarter in the long run.
Not tomorrow, though. Be patient.
I'm putting some water in my wine and starting to use LLMs more and more to help me when I have questions, but I'm still sceptical to be honest.
As the saying goes, 'practice makes perfect'. I find it hard to believe that the new generation of developers, addicted to LLMs, will still have the ability to think like those of us who weren't born into this world. They'll be too used to copying and pasting what the LLM says for that. On the other hand, if LLMs become more and more reliable, maybe there won't be any need to really think? Maybe that's the inevitable evolution?
In short, I'm both happy and unhappy with the evolution of LLMs. We'll see what the future brings. :D
Problem is the models *aren't* getting better and the AI companies are still heavily subsidizing their use. They're wildly expensive to run and the only way to improve a generative AI model is to feed it more training data. They already don't have enough without stealing from people. Thus its growth and use don't scale.
In a vacuum the tech is fine for generating boilerplate but I don't think anyone is going to be willing to pay the true cost for an "AI" chatbot that's only sometimes right and costs more than the rest of your productivity subscriptions combined.