Not at all. LLMs are very useful. Yes, they can write slop, but they can also automate a lot of arbitrary work away. The AI hate has become a bit of a cult, refusing to see the things LLMs are useful for and focusing only on their faults.
The dev even took the effort to mark the LLM-made changes, but that only resulted in arbitrary whining.
If this were a case of the LLM being used to write unit tests, document code, or even do more complicated verification or refactoring, I'm reasonably sure people would gripe a bit but resign themselves to moving on. That would clearly be "LLM as tooling," the developer would state as much in a heartbeat, and I'm sure it would eventually lead to greater usage of these tools.
Instead, what we have is LLM-contributed code, with all the baggage that entails, and then an over-the-top response.
In my, admittedly amateur, assessment: eventually an open source project is going to be shut down over use of copyrighted code. It would be nice if that were a tragic day for that one project, not a chain reaction that breaks a sizable portion of the open source catalogue.
Plus, as a side note, there are signs that heavy reliance on LLMs degrades the cognitive capabilities of the user. I would love to hold on to my belief that the developers who make the software I enjoy are better than me.
There must be a /s missing there. 😄