cross-posted from: https://programming.dev/post/8121843
~n (@[email protected]) writes:
This is fine…
“We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group.”
[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)
I’m not even sure how to use AI to help me write code.
Also, one really good practice from the pre-Copilot era still holds, one that many new Copilot users (my past self included) might forget: don’t write a single line of code without knowing its purpose. Another thing: while it can save a lot of time on boilerplate, whenever it uses your current buffer’s contents to generate several lines of very similar code, stop and think about whether it wouldn’t be wiser to extract the repetitive code into a method. Because while the output is usually algorithmically correct, good design still remains largely up to humans.
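To illustrate that extract-a-method point, here’s a minimal sketch (the data and names are made up, not from any real assistant session):

```python
# What an assistant tends to generate from your buffer: near-identical lines.
raw = {"name": " Alice ", "email": "A@EXAMPLE.COM", "city": " Oslo "}

user_name = raw.get("name", "").strip().lower()
user_email = raw.get("email", "").strip().lower()
user_city = raw.get("city", "").strip().lower()

# The refactored version: one small helper instead of repeated lines.
def clean_field(record: dict, key: str) -> str:
    """Fetch a field and normalize it to stripped, lowercase text."""
    return record.get(key, "").strip().lower()

user_name = clean_field(raw, "name")
user_email = clean_field(raw, "email")
user_city = clean_field(raw, "city")
```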
There are lots of services to facilitate it. Copilot is one of them.
There’s a very naive but working approach: ask it how :D
Or pretend it’s a colleague, and discuss the next steps with it.
You can go further and ask it to write a specific snippet for a defined context. But as others have already said, the results aren’t always satisfactory. Having a conversation about the topic, on the other hand, is pretty harmless.
Copilot and Tabnine are the two major ones.
They’re awesome for some things (especially error handling). But no… AI will not take over the world anytime soon.
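For example, this is the sort of boilerplate error handling an assistant autocompletes well (a hypothetical sketch, not code from the thread):

```python
import json

def load_config(path: str) -> dict:
    """Read a JSON config file; the try/except scaffolding is exactly
    the kind of thing an assistant fills in reliably."""
    try:
        with open(path, encoding="utf-8") as f:
            return json.load(f)
    except FileNotFoundError:
        print(f"Config file not found: {path}")
    except json.JSONDecodeError as err:
        print(f"Invalid JSON in {path}: {err}")
    return {}
```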