cross-posted from: https://lemmy.ml/post/20858435

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes present new evidence that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.

  • ContrarianTrail@lemm.ee · 3 hours ago
    AGI is inevitable unless:

    1. General intelligence is substrate-dependent, i.e. what the brain does cannot be replicated in silicon. However, since both are made of matter, and matter obeys the laws of physics, I see no reason to assume this.

    2. We destroy ourselves before we reach AGI.

    Other than that, we will keep incrementally improving our technology, and it’s only a matter of time until we get there. It may take 5 years, 50 or 500, but it seems pretty inevitable to me.

    • @ContrarianTrail @JRepin well I guess somebody would first need to clearly define what “AGI” is. Currently it’s just “whatever the techbro hypers want it to be”.

      And then there’s the matter (ha!) of your assumption that we understand, or can reasonably come to understand, all the laws of physics that “matter obeys”. That’s a pretty strong assumption: individual human minds are pretty limited, communication adds overhead, and we might reach a point where we’re simply stuck.

      • ContrarianTrail@lemm.ee · 1 hour ago
        A chess engine is intelligent at exactly one thing: playing chess. That narrow intelligence doesn’t translate to any other skill, even when it’s superhuman at its one task, much like a calculator is at arithmetic.

        Humans, on the other hand, are generally intelligent. We can perform a variety of cognitive tasks that are unrelated to each other, with our only limitations being the physical ones of our “meat computer.”

        Artificial General Intelligence (AGI) is the artificial version of human cognitive capabilities, but without the brain’s limitations. It should be noted that AGI is not synonymous with AI. AGI is a type of AI, but not all AI is generally intelligent. The next step from AGI would be Artificial Super Intelligence (ASI), which would not only be generally intelligent but also superhumanly so. This is what the “AI doomers” are concerned about.