I think my employer saw the Shopify CEO’s mandatory AI memo and got a little overexcited.
As a web developer I’ve tried Copilot and disliked it immensely. It didn’t save me time, because my syntax memory and minimal-keystroke workflow are pretty decent after 20 years of huckin’ HTML and CSS in various frameworks.
I feel like if I could cite studies or interviews from companies that FAFO’d, I’d have a better chance of arguing my point. Does anybody have any in their back pocket they can spare?
Yes, I am very aware of the irony that I could try to ask an AI, but avoiding it is kind of the point in this case, isn’t it?
Alternatively, maybe you should try giving different AI systems your standard interview questions and see how they do? I doubt any of them would pass unless the questions are simple. And even if one does as well as a human, using it requires a second human to supervise it, so it still burns the same engineering hours as hiring a new employee.
Thing is, it is useful for juniors, except when they fuck up, or when leaning on it means they never actually learn the thing and end up with no skill of their own. Then there’s the security issue, the code maintenance issue… You can halve the lifespan of the codebase because it will bloat up fast, and no model follows best practices yet. If you make web apps, they will eventually be hackable in several unknown ways where nobody can even find the issue, because nobody wrote the code (the sketch below shows the flavor of bug that slips through). It’s up to a security expert to sift through kilograms of generated, not-quite-human code to find the exploit, while a hacker (or even just a normal user) can find a new opening in a fraction of that time. But I assume you’d want to audit all code commits anyway. So that’s your new profession, and when the output isn’t even in the correct ballpark, you have to prod it along like a cowboy. It’s demeaning even haha
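To make that concrete, here’s a minimal, hypothetical sketch (Python’s stdlib sqlite3, made-up table and data, not from any real audit) of the classic injection bug that’s easy to skim past in a pile of generated code:

```python
import sqlite3

# Hypothetical in-memory database, just for the demo.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, secret TEXT)")
con.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_unsafe(name: str):
    # User input interpolated straight into the SQL string:
    # the kind of line that looks fine at a glance in a big diff.
    return con.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Parameterized query: the driver handles escaping.
    return con.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # leaks every row in the table
print(lookup_safe(payload))    # returns []
```

The reviewer has to catch every line like the unsafe one across the whole codebase; the attacker only has to find one.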
it is useful for juniors
[citation needed]
I would argue that for juniors in particular AI is dangerous because they lack the mental tools necessary to spot the hallucinations and thus bad information and bad work will be amplified, not ameliorated.
But of course people who are actually competent at their jobs don’t need the “help” that AI offers.
It’s one of those conundrums: dangerous for half, useless for the other half. LET’S PUMP IN BILLIONS!
Well, it just is. It’s like having your own tutor that you suspect suffers from mythomania. I’m not talking about letting it do the job or auto-completing; the student has to be smart about it. The citation is real life: use it once and you know it can be helpful for someone looking to learn.
I have tried using hallucinating digital parrots (note the plural) for months (note the plural).
They are dangerous, not useful. If you find them useful, you’re missing something, and THAT’S where the danger lies.
Sorry, I don’t have anything offhand. I know there have been some reports from research groups about how it’s measurably increasing security issues in codebases, but I don’t have any links.
You might also want to ask in the pinned weekly thread on [email protected] as they seem to be more active than this comm. It’s about making fun of all the “techbro takes” online, but I’ve seen people ask for similar help in the weekly thread and get assistance before.
Part of the problem is that most companies aren’t publicly sharing failure stories for anything, let alone failures of the latest hype thing.
You might be able to find some stories on the more “greybeard”-oriented tech news sites, like The Register, as well.