- cross-posted to:
- [email protected]
They are not dumb, they just don’t give a shit. Trust me, if following those policies somehow made them or their friends money, they’d be fucking climate change pioneers.
Virologist just vibing in the background.
So-called “AI safety” researchers are nothing but hypemen for AI companies.
This guy is great, he’s been making AI safety videos since long before the AI boom: https://youtube.com/@robertmilesai
I know of him and I enjoy his videos.
This post is especially ironic, since AI and its “safety researchers” make climate change worse by ridiculously increasing energy demands.
It is so funny to me that you equate “AI Safety” with “fear mongering for a god AI”.
- They are hype men
- Then you highlight why AI Safety is important by linking a blog post about the dangers of poorly thought-out AI systems
- While calling it fear mongering for a god AI.
What are they now?
If you read AI Safety trolley problems and think they are warning you about an AI god, you misunderstood the purpose of the discussion.
> Then you highlight why AI Safety is important by linking a blog post about the dangers of poorly thought-out AI systems
Have you read the article? It clearly states the difference between AI safety and AI ethics, and argues why the former are quacks and the latter is ignored.
> If you read AI Safety trolley problems and think they are warning you about an AI god, you misunderstood the purpose of the discussion.
Have you encountered what Sam Altman or Eliezer Yudkowsky claim about AI safety? It’s literally “AI might make humanity go extinct” shit.
The fear mongering Sam Altman is doing is a sales tactic. That’s the hypeman part.
Sam Altman? You went for Sam Altman as an “AI safety researcher who is really a hype man”? Is there a worse example? The CEO of the most well-known AI company tries to undermine actual AI safety research and be perceived as “safety aware” for good PR. Shocking…
Eliezer Yudkowsky seems to claim that misalignment is a continuous problem in continuously learning AI models, and that misalignment could turn into a huge issue without checks and balances and continuous improvement of them. And he makes the case that developers have to be aware of and counteract the unintended harm that their AI can cause (you know, the thing that you called AI ethics).
The alignment problem is already the wrong narrative, as it implies agency where there is none. All that talk about the “alignment problem” draws focus away from AI ethics (not a term I made up).
Read the article.
I didn’t say you made it up.
And the alignment problem doesn’t imply agency.
Complaining that it draws focus away from AI ethics is a fundamentally misguided view. That is like saying workers’ rights draw focus away from human rights. You can have both, you should have both, and there is some overlap.
You can figure out how to avoid the situation and fight back.
wats dis from? it looks interesting
The Ballad of Buster Scruggs, a Coen Brothers comedy-western anthology. Couldn’t recommend it enough.
Pan shot!