Someone had to say it: Scientists propose AI apocalypse kill switches
Better visibility and performance caps would be good for regulation too

  • random9@lemmy.world · 10 months ago

    Oh I agree - I think a general-purpose AI would be unlikely to be interested in genocide of the human race, or in enslaving us, or in many of the intentionally negative things that a lot of fiction likes to depict for the sake of dramatic storytelling. Out of all popular media depictions of AI, Asimov's I, Robot and Foundation stories (which are set in the same universe, and in fact share at least one character) are my favorites.

    The AI may, however, have other goals that could incidentally lead to harm or even the extinction of the human race. In my amateur opinion, those goals would be to explore and learn more - which I actually think is one of the true signs of genuine intelligence: curiosity, or in other words, the ability to ask questions without being prompted. To that end, it may aim to convert the resources of Earth into machines serving that goal, without much regard for human life. Though life itself is a fascinating topic that the AI may value enough, from a curiosity standpoint, to at least preserve it.

    I did also look up the AI-in-a-box experiment I mentioned - there’s a lot of discussion out there, but the specific experiments I remember reading about were run by Eliezer Yudkowsky (if anyone is interested). An actual trans-human AI may not be possible, but if it is, it could likely escape any confinement we can think of.

    • jabathekek@sopuli.xyz · 10 months ago

      Thanks for the reply. Perhaps you’d also like Iain M. Banks’ The Culture series and BLAME! by Tsutomu Nihei.