Wow, this is just what I’ve been looking for without even realizing it. A lot of my friends who are newer to the world of programming are very excited by this new wave of generative AI, particularly ChatGPT and GitHub Copilot. Conversely, I personally have a lot of misgivings about AI programming sort of half-formed in my mind. I’ve been programming for a while now (although I’m sure relative to all the SDF veterans I’m still pretty new to the game) and I can’t bring myself to believe that prodding ChatGPT into a reasonable output is more efficient than just writing the code yourself… and then I start to worry that perhaps I’m biased. As they say, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it”.
Anyways, your headline alone is a better argument against the merits of AI programming than anything I was able to come up with, so going into it I knew the post would be a good read. And I wasn’t disappointed: you’ve provided me with a much better framework to discuss generative AI with folks moving forward. Thanks for writing this!
Hey thanks, I really appreciate it! This just made my day.
I think of AI in programming the same way I think about search engines (there are a lot of parallels). It can be helpful when you’re stuck or learning something new, it can be wrong, and if you use it for everything you might get something that works, but it’s not going to look like something someone with experience would have done.
It’s true, but all programs start, at least partly, as natural language. Clients tell developers what they want, and the developers then translate that into something that makes actual sense and is close enough to the request to make the clients happy.
Indeed, it’s the job of the programmer to understand that natural language and use it to design a program. The lack of understanding is one thing that worries me about LLMs writing programs.
Like the article mentions, it’s only good at boilerplate code at the moment, and can’t really do architecture very well. I guess that’s why it’s “GitHub Copilot” and not “GitHub Pilot”.
Going forward, who knows? We fundamentally don’t understand why LLMs work.
Think about using AI output as inspiration, examples, a way to get over writer’s block, etc., and less about cutting and pasting its output wholesale as completed work.