“Experts agree these AI systems are likely to be developed in the coming decades, with many of them believing they will arrive imminently,” the IDAIS statement continues. “Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity.”

  • bjorney
    2 months ago

    in the coming decades

    Given that in the past 15 years we went from “solving regression problems a little bit better than linear models some of the time” to what we have now, it’s not unfounded to think 15 years from now people could be giving LLMs access to code execution environments

      • bjorney
        2 months ago

        That’s not really machine learning, though. If you wanted to go way back, AI research dates to implementations of Hebbian learning in computer science in the 1950s as a way of emulating human neurons. I was merely pointing out that AI was a computer science “dead end” until restricted Boltzmann machines were revisited by Hinton et al. around 2006, and that 99% of the growth in the field has happened since the early 2010s, when we reached a turning point where deep learning models could actually outperform classical statistical models like regression and random forests.
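        For context, the Hebbian learning mentioned above (“neurons that fire together wire together”) boils down to a very simple weight update. Here’s a minimal sketch of one common form of the rule; the function name, learning rate, and toy input are illustrative, not from any particular 1950s implementation:

        ```python
        import numpy as np

        def hebbian_update(w, x, eta=0.1):
            """One Hebbian step: compute activation y = w . x,
            then strengthen weights on co-active inputs: w += eta * y * x."""
            y = w @ x                # post-synaptic activation
            return w + eta * y * x  # connections grow where input and output are both active

        # Toy example: weights start small and uniform; input is active on units 0 and 2.
        w = np.array([0.1, 0.1, 0.1])
        x = np.array([1.0, 0.0, 1.0])
        for _ in range(5):
            w = hebbian_update(w, x)
        # Weights on the active inputs (0 and 2) grow; the inactive one (1) stays put.
        ```

        Note the classic limitation: with plain Hebbian updates the weights on active inputs grow without bound, which is part of why this line of work stalled until later variants (and eventually backprop-trained networks) took over.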