

This tracks with my experience: I spent far more time double-checking Copilot's output than trusting it. It also autocompleted far too much, far too often, though that may be more of a UI/UX issue than a functional one.
However, by far the most egregious thing was that it made subtle but crucial errors that took me hours to fix, which made me lose faith in it entirely.
For example, I had a CMake project & the AI autocompleted “target_link_directories” instead of “target_link_libraries”. Having stared at CMake all day & never used the *_directories keyword before, I couldn’t figure out why I was getting configuration errors. I wasted orders of magnitude more time hunting down something that trivial than I would have spent writing the “boilerplate” myself.
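For anyone who hasn’t hit this one: here’s a minimal sketch of the mix-up (the target and library names are made up, not from my actual project). The two calls read almost identically but do completely different things.

```cmake
cmake_minimum_required(VERSION 3.13)
project(app LANGUAGES CXX)

add_executable(app main.cpp)

# What the autocomplete produced: this only adds a linker *search directory*;
# it does not link anything, so the library quietly never gets linked.
# target_link_directories(app PRIVATE mylib)

# What was intended: actually link the library to the target.
target_link_libraries(app PRIVATE mylib)
```

A one-word difference, easy to skim past in a review, and exactly the kind of thing I would have typed correctly out of habit if I’d written the line myself.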
Looks like I am not alone:
Furthermore, the reliability of AI suggestions was inconsistent; developers accepted less than 44 percent of the code it generated, spending significant time reviewing and correcting these outputs.
When I did find it & fix it, something interesting happened: maybe because AI sits so damn low in the uncanny valley, I got angry at it. If the same thing had been done by any other dev, we’d have laughed about it. Perhaps because I’d trust another dev (optimistically? Naïvely?) to improve & learn, I’d be gentler on them. A tool built on stolen knowledge by a trillion-dollar corp to create an uncaring stats machine didn’t get much love from me.
Something creepy about this, though.