- cross-posted to:
- [email protected]
YouTuber Internet of Bugs examines the latest demo from Cognition that showcases their “first AI software engineer” allegedly solving UpWork programming tasks.
One way it can be useful is as a more verbal variant of rubber duck debugging. You have to state the issue you're facing, including the context and edge cases, and in doing so the problem often becomes clearer to you as well.
Unlike a rubber duck, though, it can then actually suggest some approaches, which you can dismiss or investigate further.
This is how I use LLMs right now, and there have been a few times it's been genuinely helpful. Mind you, most of the times it has helped, it's because it hallucinated some nonsense that nudged me in the right direction, but that's still at least a little better than the duck.
That was my experience with GPT-3.5 as well, but the hit ratio is a lot better with GPT-4 and with other models like Mixtral and its derivatives.