Discussion about this post

Kevin Tewouda

I'm softening my position and starting to use LLMs more and more to help me when I have questions, but to be honest I'm still sceptical.

As the saying goes, ‘practice makes perfect’. I find it hard to believe that a new generation of developers raised on LLMs will still be able to think like those of us who weren't born into this world; they'll be too used to copying and pasting whatever the LLM says. On the other hand, if LLMs become more and more reliable, maybe there won't be any need to really think? Maybe that's the inevitable evolution?

In short, I'm both happy and unhappy with the evolution of LLMs. We'll see what the future brings. :D

Bobby

The problem is that the models *aren't* getting better, and the AI companies are still heavily subsidizing their use. They're wildly expensive to run, and the only way to improve a generative AI model is to feed it more training data, which they already don't have enough of without stealing from people. Thus their growth and use don't scale.

In a vacuum the tech is fine for generating boilerplate, but I don't think anyone is going to be willing to pay the true cost of an "AI" chatbot that's only sometimes right and costs more than the rest of your productivity subscriptions combined.

