Sam Altman? You went for Sam Altman as an "AI safety researcher who is really a hype man"? Is there a worse example? The CEO of the most well-known AI company tries to undermine actual AI safety research while being perceived as "safety aware" for good PR. Shocking…
Eliezer Yudkowsky seems to claim that misalignment is a continuous problem in continuously learning AI models, and that misalignment could turn into a huge issue without checks and balances and continuous improvement of them. And he makes the case that developers have to be aware of and counteract the unintended harm their AI can cause (you know, the thing you called AI ethics).
The alignment problem is already the wrong narrative, as it implies agency where there is none. All that talk about the "alignment problem" draws focus away from AI ethics (not a term I made up).
Complaining that it draws focus away from AI ethics is a fundamentally misguided view. That is like saying workers' rights draw focus away from human rights. You can have both, you should have both, and there is some overlap.
So-called “AI safety” researchers are nothing but hypemen for AI companies.
This guy is great; he has been making AI safety videos since long before the AI boom: https://youtube.com/@robertmilesai
I know of him and I enjoy his videos.
Still, fearmongering for a god AI that might kill us all is both unscientific and distracts from the real harm so-called AI is already causing.
This post is especially ironic, since AI and its “safety researchers” make climate change worse by ridiculously increasing energy demands.
It is so funny to me that you equate “AI Safety” with “fear mongering for a god AI”.
What are they now?
If you read AI Safety trolley problems and think they are warning you about an AI god, you have misunderstood the purpose of the discussion.
Have you read the article? It clearly states the difference between AI safety and AI ethics and argues why the former are quacks and the latter is ignored.
Have you encountered what Sam Altman or Eliezer Yudkowsky claim about AI safety? It's literally "AI might make humanity go extinct" shit.
The fear mongering Sam Altman is doing is a sales tactic. That’s the hypeman part.
Read the article.
I didn’t say you made it up.
And the alignment problem doesn’t imply agency.