They’re still useful. For one, this study appears to focus solely on “distorted text field” captchas, which I’m pretty sure have been solvable by bots for years now. For another, captchas can still provide useful signals for deciding whether a user is a bot, depending on how much telemetry is available. Even with plain text distortions, you can factor in whether cursor movement and typing cadence appear human. The article mentions that bots can solve captchas in under a second, which sounds scary but is something humans could never do, so that can be used as another filter.
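Not anyone’s actual pipeline, but a minimal sketch of what that timing-and-cadence filter could look like; the function name, thresholds, and telemetry shape are all assumptions for illustration:

```python
from statistics import mean, pstdev

def looks_human(solve_time_s: float, key_times: list[float]) -> bool:
    """Rough telemetry check: total solve time plus typing rhythm."""
    # Sub-second solves are faster than human reaction time.
    if solve_time_s < 1.0:
        return False
    # Need a few keystrokes to judge rhythm at all.
    if len(key_times) < 3:
        return False
    # Humans type with irregular rhythm; scripted input tends to
    # paste the answer or fire keys at near-uniform intervals.
    gaps = [b - a for a, b in zip(key_times, key_times[1:])]
    return pstdev(gaps) > 0.05 * mean(gaps)

# A human-looking solve vs. an instant scripted one:
print(looks_human(4.2, [0.0, 0.31, 0.58, 0.95, 1.40]))   # True
print(looks_human(0.3, [0.0, 0.01, 0.02, 0.03, 0.04]))   # False
```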
Plus, just because some bots CAN solve them doesn’t mean ALL bots can. It’s another layer of work for anyone trying to create bot accounts.
Yeah, captchas haven’t been about “can a computer do it” for years; for a long time now the question has been “how humanlike is the entity, based on how it performs the captcha.”
And just like spam filtering, basic filters still do the majority of the work. Even if there exist machines that can crack your captchas, you don’t just get rid of the captchas, because there are still a ton of machines they’re stopping. Captcha cracking is a game of cat and mouse, and just like in a game of cat and mouse, you don’t get rid of the cat because you saw a mouse. That’s a quick way to get overrun with all the mice the cat was catching.
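In code terms, that layering might look something like this hypothetical sketch, where each cheap, imperfect check only has to catch the bots that slipped past the previous one; the layer logic and request fields here are invented for illustration:

```python
# Each layer is a cheap, imperfect check; together they stop most bots.
LAYERS = [
    ("rate limit", lambda req: req["requests_per_min"] <= 30),
    ("captcha answer", lambda req: req["captcha_passed"]),
    ("solve timing", lambda req: req["captcha_solve_s"] >= 1.0),
]

def allow(req: dict) -> tuple[bool, str]:
    for name, check in LAYERS:
        if not check(req):
            return False, f"blocked by {name}"
    return True, "passed all layers"

# A bot that solves the captcha but does so instantly still gets caught:
print(allow({"requests_per_min": 5, "captcha_passed": True, "captcha_solve_s": 0.4}))
```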
If websites blocked answers that come in too fast, that alone would stop a lot of bots. If the solving time is faster than human reaction time, it’s fairly certain to be a bot.
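A server-side version of that check is simple to sketch; the names and the one-second floor are assumptions, not any real captcha service’s API:

```python
import time

MIN_HUMAN_SOLVE_S = 1.0  # assumed floor: reading + reaction time

_issued: dict[str, float] = {}  # challenge_id -> time it was shown

def issue_challenge(challenge_id: str) -> None:
    _issued[challenge_id] = time.monotonic()

def accept_answer(challenge_id: str) -> bool:
    shown = _issued.pop(challenge_id, None)
    if shown is None:
        return False  # unknown or already-used challenge
    # Reject answers that arrive faster than any human could react.
    return time.monotonic() - shown >= MIN_HUMAN_SOLVE_S
```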
I’ve never liked captchas, for privacy reasons. However, bots are a big problem in cyberspace.
You should use the Privacy Pass extension.