You should see 52% of the first version of my code.
It doesn’t have to be right to be useful.
Yeah, but the non-tech-savvy business leaders see that they can generate code with AI, think ‘why do I need a developer if I have this AI?’, and have no idea whether the code it produces is right or not. This stat should be shared broadly so leaders don’t overestimate the capability and fire people they will desperately need.
I say let it happen. If someone is dumb enough to fire all their workers… they deserve what happens next.
Well, the firing’s happening, so I guess let’s hope you’re right about the other part.
It won’t happen like that. Leadership will just under-hire and expect all their developers to be way more efficient. Work will be really stressful, with tighter deadlines and people questioning why you couldn’t meet them.
And anyone who willingly stays in a job that does that is an idiot. It’s like the people who stayed at Twitter after Elon bought it.
It’s a bit hard to generalize. Many of those people were on work visas and couldn’t jump ship easily. Others knew that the job market at the time was terrible and that they would be unemployed for several months or more before finding something at much lower pay, and they knew they couldn’t afford that.
You’re right that people shouldn’t work somewhere that abuses their workers. But it’s also sometimes not as easy as saying “just go somewhere else.”
For software engineers it really is, though. I was looking for a job at that same time and had several offers that I turned down because I suspected bad working conditions; I settled on one with lower pay but actually good work-life balance.
We have very different experiences. My friends also had a very different experience from yours. We were all pretty desperate for jobs, and we weren’t even dealing with visas. Thankfully I landed one at a healthy place, but at the time I would have taken anything and jumped ship when the market got better.
Yeah, management are all for this. The first few years here are going to be rough, with them immediately hitting the “fire the engineers, we have AI now” button. They won’t realize their fuckup until they’ve been promoted away from it.
Mentioned it before but:
LLMs program at the level of a junior engineer or an intern. You already need code review and more senior engineers to fix that shit for them.
What LLMs do is shift that down a level. Now that junior engineer has an intern they are trying to work with. Or… companies realize they don’t benefit from training up those newbie (or stupid) engineers when they’re likely to leave in a year or two anyway.
Programming jobs will be safe for a while. Companies have been trying to eliminate those positions since at least the 90s, because coders are expensive and often lack social skills.
But I do think the clock is ticking. We will see more and more sophisticated AI tools that are relatively idiot-proof and can do things like modify Salesforce or create complex new Tableau reports with a few mouse clicks. Jobs will be chiseled away, like those of our unfortunate friends in graphic design.
You, along with most people, are still looking at automation wrong. It has never been about removing people entirely, even with AI; it’s about doing the same work at lower cost.
If you can eliminate one programmer from your four-person team by giving the other three AI to produce the same amount of work, congrats, you’ve just automated one programming job.
Programming jobs aren’t going anywhere, but either the amount of code produced is about to skyrocket, or the number of employed programmers is going to drop (or most likely both of those things).
I wonder if this will also have a reverse tail-end effect.
Company uses AI (with devs) to produce a large amount of code -> code is in prod for a few years with incremental changes -> dev roles rotate or get further reduced over time -> company now needs to modernize and change a very large legacy codebase that nobody really understands well enough to even feed it into the AI -> now hiring more devs than before to figure out how to manage a legacy codebase 5-10x the size of what the team could realistically handle.
Writing greenfield code is relatively easy; maintaining it over years, keeping it up to date and well understood while twisting it to fit all-new requirements: now that’s hard.
AI will help with that too; it’s going to be able to process entire codebases at a time pretty shortly.
Given the visual capabilities now emerging, it can likely also do human-equivalent testing.
One of the biggest AI tricks we haven’t seen much of yet in mainstream use is this kind of automated double-checking, where it generates an answer and then validates that the answer is correct before actually giving it to a human. Especially with codebases, there really isn’t anything stopping it from generating an answer, compiling it, hitting an error, regenerating, and repeating until the code passes all the unit tests or even a visual inspection.
The big limit on this right now is sheer processing cost and the models’ context lengths. However, those costs are dropping faster than for any new tech we’ve seen, and it will likely be trivial in just a few years.
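That loop is simple enough to sketch. Here’s a minimal Python version of the idea; `generate_code()` is a hypothetical stand-in for whatever model API you’d call, and it assumes a pytest suite that imports the candidate from `solution.py`. An illustration only, not anyone’s production setup:

```python
import subprocess
from pathlib import Path

def generate_code(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM API you're using."""
    raise NotImplementedError("plug your model call in here")

def self_correcting_generate(task: str, max_attempts: int = 5) -> str | None:
    """Generate code, run the tests, feed failures back, repeat."""
    feedback = ""
    for _ in range(max_attempts):
        code = generate_code(task + feedback)
        # Assumes the test suite imports the candidate from solution.py.
        Path("solution.py").write_text(code)
        result = subprocess.run(
            ["python", "-m", "pytest", "tests/", "--tb=short"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return code  # every test passed; only now does a human see it
        # Otherwise regenerate with the failure output appended to the prompt.
        feedback = "\n\nYour last attempt failed:\n" + result.stdout + result.stderr
    return None  # out of attempts; escalate to a human
```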
Right on. AI feels like a looming paradigm shift in our field that we can either scoff at for its flaws or start learning how to exploit for our benefit. As long as it ends up boosting productivity it’s probably something we’re going to have to learn to work with for job security.
It’s already boosting productivity in many roles. That’s just going to accelerate as the models get better, the processing gets cheaper, and (as you said) people learn to use it better.
There are some areas I’m hoping get addressed by the coming skyrocket in programmer productivity:
- Phone apps that aren’t utter garbage anymore. I’m not holding my breath on this one.
- Online grocery websites that aren’t shit-full of timing errors. If I get this, I’ll also wish for $1 million and buy a lottery ticket.
- Municipalities and their allies (townships, city services, various local unions) getting barely passable specialized software support that actually fits their size, location, and maybe even culture.
I think that last one stands to be strongly enabled by AI code-assist tools. It might not be the sexiest or highest-paying job, but it’ll be work that matters, and it largely isn’t even being done today.
And they’ll find out very soon that they need devs when they actually try to test something and nothing works.
Yeah, ’cause my favorite thing to do when programming is debugging someone else’s broken code.
I think where it shines is in helping you write code you’ve never written before. I had never touched Swift before, and I made a fully functional iOS app in a week. Also, even with stuff I have done before, I can say “write me a function that does X” and it will, and it usually works.
Like just yesterday, I asked it to write me a function that would generate and serve up an .ics file based on a selected date: extrapolate the date of a recurring monthly meeting from the day of the week picked and its position within the month (1st week, 2nd week, etc.), then make the .ics file reflect all that. I could have written that code myself by hand, but it would probably have taken me an hour or two. It did it in about five seconds, and it worked perfectly.
Yeah, you have to know what you’re doing in general and there’s a lot of babysitting involved, but anyone who thinks it’s just useless is plain wrong. It’s fucking amazing.
Edit: lol, the article is referring to a study that used GPT-3.5, which is all but useless for coding. GPT-4 has been out for a year, blowing everybody’s minds. Clickbait trash.
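For anyone curious what that kind of function looks like, here’s a rough Python sketch of the same idea (not the commenter’s actual code; the names and structure are invented). The trick is computing the weekday’s position within the month and expressing the recurrence as an RFC 5545 RRULE:

```python
from datetime import datetime, timedelta

# iCalendar two-letter day codes, indexed by Python's weekday() (Monday = 0).
ICAL_DAYS = ["MO", "TU", "WE", "TH", "FR", "SA", "SU"]

def monthly_meeting_ics(start: datetime, duration_hours: int = 1) -> str:
    """Build an .ics body for a meeting recurring monthly on the same
    weekday-of-month slot as `start` (e.g. the 2nd Tuesday)."""
    week_position = (start.day - 1) // 7 + 1                # 1st, 2nd, 3rd... week
    byday = f"{week_position}{ICAL_DAYS[start.weekday()]}"  # e.g. "2TU"
    fmt = "%Y%m%dT%H%M%S"
    end = start + timedelta(hours=duration_hours)
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//sketch//monthly-meeting//EN",
        "BEGIN:VEVENT",
        f"UID:{start.strftime(fmt)}@example.com",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"RRULE:FREQ=MONTHLY;BYDAY={byday}",
        "SUMMARY:Recurring monthly meeting",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

# e.g. a meeting on the 2nd Tuesday of each month:
print(monthly_meeting_ics(datetime(2024, 3, 12, 10, 0)))
```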
3.5 is still reasonably useful for the same reasons you described, imo… just less so.
Yeah I’ve already got enough legacy code to deal with, I don’t need more of it faster.
To be fair, I’m starting to fear that all the fun bits of human jobs are the ones that are easiest to automate.
I dread the day I’m stuck playing project manager to a bunch of chat bots.
Get it to debug itself then.
Generally you want the reference material used to improve that first version to be correct, though. Otherwise it’s just swapping one problem for another.
I wouldn’t use a textbook that was 52% incorrect; the same should apply to a chatbot.
Bad take. Is the first version of your code the one that you deliver or push upstream?
LLMs can give great starting points. I use multiple LLMs, each for different reasons: usually to clean up something I wrote (too lazy or too busy/stressed to do it manually), to find a problem with the logic, or even to brainstorm ideas.
I rarely ever use them to generate blocks of code, like asking for “a method that takes X inputs, does Y operations, and returns Z value.” I find those kinds of results are often vastly wrong or just done in a way that doesn’t fit with the other things I’m doing.
Impressed that some folks think LLMs are useless. Not sure if their lives/workflows/brains are that different from ours, or if they just haven’t given it the old college try.
I almost always have to use my head before a language model’s output is useful for a given purpose. The tool almost always saves me time, improves the end result, or both. Usually both, I would say.
It’s a very dangerous technology that is known to output utter garbage and make enormous mistakes. Still, it routinely blows my mind.