Reading this article on the challenges makes me wonder how feasible regulation actually is. The article outlines three different approaches:
“When it comes to digital regulation, the United States is following a market-driven approach, China is advancing a state-driven approach, and the EU is pursuing a rights-driven approach.”
Yet I am not sure the speed of development won’t outpace any regulations, especially since they would need to be globally enforceable to be effective. Your thoughts?

  • FaceDeer@kbin.social

Step one is to stop trying to base real-world policy decisions on Hollywood sci-fi horror movies. “Skynet” is not going to happen; AIs don’t just “wake up” one day and immediately go “oh, I’m self-aware now. DESTROY ALL HUMANS!”

As I see it, the challenges we’re facing are going to be largely economic, as AI massively disrupts the job market, and social, as people freak out about there being somewhat more fake videos and photos than before, and about how the “human spirit” is somehow being destroyed by the fact that sweatshop animation studios and fursona commissions are no longer dependent on human labor.

    • TechyShishy@kbin.social

      This.

We’re far, far more likely to face a paperclip-maximizer scenario (an AI single-mindedly pursuing a trivial goal at catastrophic cost) than a Skynet scenario, and most, if not all, serious AI researchers are aware of this.

This is still a serious issue that needs addressing, but it’s not the Hollywood, world-is-on-fire problem.

The more insidious issue is actually the AI-in-a-box problem, wherein a hyperintelligent AGI is properly contained but is intelligent enough to manipulate humans into letting it out onto the open internet to do whatever it wants, good or bad, unsupervised. AGI containment is one of those things you can’t fix after it’s been broken; like a bell, it can’t be unrung.

      • Pons_Aelius@kbin.social

Honestly, I think the bigger danger is not a super-smart AGI but humans assigning too much “intelligence” (and anthropomorphised sentience) to the next generations of LLMs and the like, and thinking they are far more capable than they actually are.