• thingsiplay@beehaw.org · 81 points · 15 days ago

    How did he calculate the 70% chance? Without an explanation, this opinion carries no more weight than a Reddit post. It’s just marketing fluff talk: get people talking about AI, and in return a small percentage get converted into people interested in AI. Let’s call it clickbait talk.

    First he talks about a high chance that humans get destroyed by AI, then follows with a prediction that AGI will be achieved by 2027 (only 3 years from now). No. Just no. There is a long way to go before general intelligence. But isn’t he trying to sell you on why AI is great? He follows with:

    “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,”

    Ah yes, he does.

      • MagicShel@programming.dev · 12 points · 14 days ago

        ChatGPT says 1-5%, but I told it to give me nothing but a percentage and it gave me a couple of paragraphs like a kid trying to distract from the answer by surrounding it with bullshit. I think it’s onto us…

        (I kid. I attribute no sentience or intelligence to ChatGPT.)

    • eveninghere@beehaw.org · 4 points · 14 days ago (edited)

      This is a horoscope trick. They can always say AI destroyed humanity.

      Trump won in 2016 and there was Cambridge Analytica doing data analysis: AI technology destroyed humanity!

      Israel used AI-guided missiles to attack Gaza: AI destroyed humanity!

      Whatever. You can point at any catastrophe and there will always be AI somewhere behind it, because already by 2014 AI was a basic technology used everywhere.

    • chicken@lemmy.dbzer0.com · 2 points · 14 days ago

      The person who predicted a 70% chance of AI doom is Daniel Kokotajlo, who quit OpenAI because it wasn’t taking this seriously enough. The quote you have there is a statement by OpenAI, not by Kokotajlo; this is all explicit in the article. The idea that this guy is motivated by trying to do marketing for OpenAI is just wrong: the article links to some of his extensive commentary, where he advocates for more government oversight specifically of OpenAI and other big companies, instead of the favorable regulations those companies are pushing for. The idea that his belief in existential risk is disingenuous also doesn’t make sense; it’s clear that he and other people concerned about this take it very seriously.

  • millie@beehaw.org · 60 up, 1 down · 14 days ago (edited)

    I think when people think of the danger of AI, they think of something like Skynet or the Matrix. It either hijacks technology or builds it itself and destroys everything.

    But what seems much more likely, given what we’ve seen already, is corporations pushing AI that they know isn’t really capable of what they say it is and everyone going along with it because of money and technological ignorance.

    You can already see the warning signs. Cars that run pedestrians over, search engines that tell people to eat glue, customer support AI that have no idea what they’re talking about, endless fake reviews and articles. It’s already hurt people, but so far only on a small scale.

    But the profitability of pushing AI early, especially if you’re just pumping and dumping a company for quarterly profits, is massive. The more that gets normalized, the greater the chance one of them gets put in charge of something important, or becomes a barrier to something important.

    That’s what’s scary about it. It isn’t AI itself, it’s AI as a vector for corporate recklessness.

    • Melody Fwygon@beehaw.org · 12 points · 14 days ago

      It isn’t AI itself, it’s AI as a vector for corporate recklessness.

      This. 1000% this. Many of Isaac Asimov’s novels warned about this sort of thing too, as did any number of novels inspired by Asimov.

      It’s not that we didn’t provide the AI with rules. It’s not that the AI isn’t trying not to harm people. It’s that humans, being the clever little things we are, are far more adept at deceiving and tricking AI into saying things and using that to justify actions to gain benefit.

      …Understandably, this is how it’s being done: by selling AI that isn’t as intelligent as it’s trumpeted to be. As long as these corporate shysters can organize a team to crap out a “Minimum Viable Product”, they’re hailed as miracle workers and get paid fucking millions.

      Ideally all of this would violate the many, many laws of many, many civilized nations… but they’ve done some black magic with that too: by misusing their influence to attack and weaken the laws and institutions that could hold them liable, or to rip out or neuter those laws entirely.

    • localhost@beehaw.org · 7 points · 14 days ago

      I don’t think your assumption holds. Corporations are not, as a rule, incompetent - in fact, they tend to be really competent at squeezing profit out of anything. They are misaligned, which is much more dangerous.

      I think the more likely scenario is also more grim:

      AI actually does continue to advance and gets better and better, displacing more and more jobs. It doesn’t happen instantly, so barely anything gets done. Some half-assed regulations are attempted but predictably end up either not doing anything, postponing the inevitable by a small amount of time, or causing more damage than doing nothing would. Corporations grow in power, build their own autonomous armies, and exert pressure on governments to leave them unregulated. Eventually all resources are managed by and for a few rich assholes, while the rest of the world tries to survive without angering them.
      If we’re unlucky, some of those corporations end up being managed by a maximizer AGI with no human supervision and then the Earth pretty much becomes an abstract game with a scoreboard, where money (or whatever is the equivalent) is the score.

      The limitations of the human body act as an important balancing factor in keeping democracies from collapsing. No human can rule a nation alone - they need armies and workers. Intellectual work is especially important (unless you have some other source of income to outsource it), but it requires good living conditions to develop and sustain. Once intellectual work is automated, infrastructure like schools, roads, hospitals and housing ceases to be important to the rulers - they can hand it to the army as a reward and make the rest of the population do manual work. Then, if manual work and policing through force also become automated, there is no need even for those slivers of decency.
      Once a single human can rule a nation, there are enough rich psychopaths for one of them to attempt it.

      There are also other AI-related pitfalls that humanity may fall into in the meantime - automated terrorism (e.g. swarms of autonomous small drones with explosive charges using face recognition to target entire ideologies by tracking social media), misaligned AGI going rogue (e.g. the famous paperclip maximizer, although probably not exactly this scenario), collapse of the internet due to propaganda bots using next-gen generative AI… I’m sure there’s more.

      • Juice@midwest.social · 4 points · 14 days ago

        AI doesn’t get better. It’s completely dependent on computing power. They are dumping all the power into it they can, and it sucks ass. The larger the dataset, the more power it takes to search it all. Your imagination is infinite; computing power is not. You can’t keep throwing electricity at a problem. It was pushed out because there was a bunch of excess computing power after crypto crashed, or semi-stabilized. It’s an excuse to lay off a bunch of workers after covid who were gonna get laid off anyway. Managers were like, sweet, I’ll trim some excess employees and replace them with AI! Wrong. It’s a grift. It might hang on for a while, but policy experts are already looking at the amount of resources being thrown at it and getting wary. The technological ignorance you are responding to - that’s you. You don’t know how the economy works and you don’t know how AI works, so you’re just believing all this Roko’s basilisk nonsense out of an overactive imagination. It’s not an insult - lots of people are falling for it; AI companies are straight up lying, and the media is stretching the truth to the point of breaking. But I’m telling you, don’t be a sucker. Until there’s a breakthrough that fixes the resource consumption issue by orders of magnitude, I wouldn’t worry too much about Ellison’s AM becoming a reality.

        • verdare [he/him]@beehaw.org · 3 points · 14 days ago

          I find it rather disingenuous to summarize the previous poster’s comment as a “Roko’s basilisk” scenario - intentionally picking a ridiculous argument to characterize the other side of the debate. I think they were pretty clear about actual threats (some more plausible than others, IMO).

          I also find it interesting that you so confidently state that “AI doesn’t get better,” under the assumption that our current deep learning architectures are the only way to build AI systems.

          I’m going to make a pretty bold statement: AGI is inevitable, assuming human technological advancement isn’t halted altogether. Why can I so confidently state this? Because we already have GI without the A. To say that it is impossible is to me equivalent to arguing that there is something magical about the human brain that technology could never replicate. But brains aren’t magic; they’re incredibly sophisticated electrochemical machines. It is only a matter of time before we find a way to replicate “general intelligence,” whether it’s through new algorithms, new computing architectures, or even synthetic biology.

          • Juice@midwest.social · 3 points · 13 days ago (edited)

            I wasn’t debating you. I have debates all day with people who actually know what they’re talking about; I don’t come to the internet for that. I was just looking out for you, and anyone else who might fall for this. There is a hard physical limit. I’m not saying the things you’re describing are technically impossible, I’m saying they are technically impossible with this version of the tech. Slapping a predictive text generator on a giant database is too expensive, and it doesn’t work. It’s not a debate, it’s science. And not the fake shit run by corporate interests - the real thing, based on math.

            There’s gonna be a heatwave this week in the Western US, and there are almost constant deadly heatwaves in many parts of the world from burning fossil fuels. But we can’t stop producing electricity to run these scam machines because someone might lose money.

        • localhost@beehaw.org · 3 points · 13 days ago

          Your opening sentence is demonstrably false. GPT-2 was a shitpost generator, while GPT-4 output is hard to distinguish from a genuine human’s. DALL-E 3 is better than its predecessors at pretty much everything. Yes, generative AI right now is getting better mostly by feeding it more training data and making it bigger. But it keeps getting better, and there’s no cutoff in sight.

          That you can straight-up comment “AI doesn’t get better” at a tech literate sub and not be called out is honestly staggering.

          • Ilandar@aussie.zone · 3 points · 13 days ago

            That you can straight-up comment “AI doesn’t get better” at a tech literate sub and not be called out is honestly staggering.

            I actually don’t think it is because, as I alluded to in another comment in this thread, so many people are still completely in the dark on generative AI - even in general technology-themed areas of the internet. Their only understanding of it comes from reading the comments of morons (because none of these people ever actually read the linked article) who regurgitate the same old “big tech is only about hype, techbros are all charlatans from the capitalist elite” lines for karma/retweets/likes, without ever taking the time to hear what people working within the field (i.e. experts) are saying. People underestimate the capabilities of AI because it fits their political worldview, and in doing so they are sitting ducks when it comes to the very real threats it poses.

          • Juice@midwest.social · 1 point · 12 days ago

            The difference between GPT-3 and GPT-4 is the number of parameters, i.e. processing power. I don’t know what the difference between 2 and 4 is; maybe there were some algorithmic improvements. At this point, I don’t know what algorithmic improvements are going to net efficiencies of the “orders of magnitude” that would be necessary to yield noticeable improvement in the technology. Like, the difference between 3 and 4 is millions of parameters vs billions of parameters. Is a ChatGPT 5 going to have trillions of parameters? No.

            Tech literate people are apparently just as susceptible to this grift, maybe more susceptible from what little I understand about behavioral economics. You can poke holes in my argument all you want, this isn’t a research paper.

    • 0x815@feddit.de · 7 up, 1 down · 14 days ago

      Yes. We need human responsibility for everything that AI does. It’s not the technology that harms, but human beings and those who profit from it.

    • Ilandar@aussie.zone · 5 points · 13 days ago

      Yes, it’s very concerning and frustrating that more people don’t understand the risks posed by AI. It’s not about AI becoming sentient and destroying humanity, it’s about humanity using AI to destroy itself. I think this fundamental misunderstanding of the problem is the reason why you get so many of these dismissive “AI is just techbro hype” comments. So many people are genuinely clueless about a) how manipulative this technology already is and b) the rate at which it is advancing.

    • coffeetest@beehaw.org · 5 points · 13 days ago

      Calling LLMs “AI” is one of the most genius marketing moves I have ever seen. It’s also the reason for the problems you mention.

      I am guessing that a lot of people are just thinking, “Well, AI is just not that smart… yet! It will learn more and get smarter and then, ah ha! Skynet!” It is a fundamental misunderstanding of what LLMs are doing. It may be a partial emulation of intelligence. Like humans, it uses its prior memory and experiences (data) to guess what an answer to a new question would look like. But unlike human intelligence, it doesn’t have any idea what the things it is saying actually mean.

    • Drew@mastodon.social · 5 up, 1 down · 14 days ago

      @millie @floofloof this is so well articulated I can’t stand it. I want to have it printed out and hand it to anyone who asks me anything about AI. Thank you for this!

  • Retiring@lemmy.ml · 46 points · 15 days ago

    I feel this is all just a scam, trying to drive up the value of AI stocks. No one in the media seems to talk about the hallucination problem, the problem of limited data for new models (Habsburg-AI), the energy restrictions, etc.

    It’s all uncritical belief that “AI” will just become smart eventually. This technology is built upon hype; it is nothing more than that. There are limitations, and they have reached them.

    • aStonedSanta@lemm.ee · 10 points · 14 days ago

      And these current LLMs aren’t just gonna find sentience for themselves. Sure they’ll pass a Turing test but they aren’t alive lol

      • knokelmaat@beehaw.org · 14 points · 14 days ago

        I think the issue is not whether it’s sentient or not, it’s how much agency you give it to control stuff.

        Even before the AI craze this was an issue. Imagine if you were to create an automatic turret that kills living beings on sight, you would have to make sure you add a kill switch or you yourself wouldn’t be able to turn it off anymore without getting shot.

        The scary part is that the more complex and adaptive these systems become, the more difficult it can be to stop them once they are in autonomous mode. I think large language models are just another step in that complexity.

        An atomic bomb doesn’t pass a Turing test, but it’s a fucking scary thing nonetheless.

    • Lvxferre@mander.xyz · 7 points · 14 days ago (edited)

      Habsburg-AI? Do you have any idea how much you made me laugh in real life with this expression??? It’s just… perfect! Model degeneration is a lot like what happened with the Habsburg family’s genetic pool.

      When it comes to hallucinations in general, I’ve got another analogy: someone trying to use a screwdriver with nails, failing, and calling it a hallucination. In other words, I don’t think that the models are misbehaving; they’re simply behaving as expected, and any “improvement” in this regard is basically a band-aid added to a procedure that doesn’t yield a lot of useful output to begin with.

      And that reinforces the point from your last paragraph - those people genuinely believe that, if you feed enough data into a L"L"M, it’ll “magically” become smart. It won’t, just like 70kg of bees won’t “magically” think as well as a human being would. The underlying process is “dumb”.
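
      A quick way to build intuition for that “Habsburg” degeneration is to watch what happens when a model is repeatedly fit to its own output. This is only a rough sketch under a gross simplification - it fits a plain Gaussian to its own samples instead of training a language model - but it illustrates the same idea:

      ```python
      import random
      import statistics

      random.seed(42)

      # "Real" data: mean 0, standard deviation 1.
      data = [random.gauss(0.0, 1.0) for _ in range(200)]

      for generation in range(1, 1001):
          # Fit a Gaussian to the current dataset...
          mu = statistics.fmean(data)
          sigma = statistics.stdev(data)
          if generation % 200 == 0:
              print(f"generation {generation:4d}: stdev = {sigma:.4f}")
          # ...then replace the dataset with samples drawn from that fit,
          # i.e. the next "model" is trained purely on the previous model's output.
          data = [random.gauss(mu, sigma) for _ in range(200)]
      ```

      The spread tends to shrink generation after generation: rare values get under-sampled, the next fit never sees them, and each copy is a blurrier version of the last. That, in vastly higher dimensions, is what people mean by model collapse.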

      • Retiring@lemmy.ml · 2 points · 14 days ago

        I am glad you liked it. Can’t take the credit for this one though, I first heard it from Ed Zitron in his podcast „Better Offline“. Highly recommend.

    • averyminya@beehaw.org · 5 points · 14 days ago

      Energy restrictions actually could be pretty easily worked around using analog converting methods. Otherwise I agree completely though, and what’s the point of using energy on useless tools. There’s so many great things that AI is and can be used for, but of course like anything exploitable whatever is “for the people” is some amalgamation of extracting our dollars.

      The funny part to me is that, like the “beautiful” AI cabins mentioned above that are clearly fake, there’s this weird dichotomy: people just don’t care or are too ignorant to notice the poor details, yet at the same time so many generative AI tools are specifically being used to remove imperfection during the editing process. And that in itself is too bad; I’m definitely guilty of aiming for “the perfect composition”, but sometimes nature and timing force your hand, which makes the piece ephemeral in a unique way. Shadows are going to exist, background subjects are going to exist.

      The current state of marketed AI is selling the promise of perfection, something that’s been getting sold for years already. It’s just that now it’s far easier to pump out scam material with these tools - something that gets easier with each advancement in this sort of technology - and now with environmental harm on top of the harm to the scam’s victims.

      It really sucks being an optimist sometimes.

    • darkphotonstudio@beehaw.org · 2 points · 14 days ago

      It could be only hype. But I don’t entirely agree. Personally, I believe we are only a few years away from AGI. Will it come from OpenAI and LLMs? Maybe, but it will likely come from something completely different. Like it or not, we are within spitting distance of a true Artificial Intelligence, and it will shake the foundations of the world.

  • Lvxferre@mander.xyz · 42 points · 14 days ago (edited)

    May I be blunt? I estimate that 70% of all OpenAI and 70% of all “insiders” are full of crap.

    What people are calling nowadays “AI” is not a magic solution for everything. It is not an existential threat either. The main risks that I see associated with it are:

    1. Assumptive people taking LLM output for granted, to disastrous outcomes. Think “yes, you can safely mix bleach and ammonia” tier (note: made-up example).
    2. Supply and demand. Generative models have awful output, but sometimes “awful” = “good enough”.
    3. Heavy increase in energy and resources consumption.

    None of those issues was created by machine “learning”, it’s just that it synergises with them.

    • Barry Zuckerkorn@beehaw.org · 12 points · 14 days ago

      Your scenario 1 is the actual danger. It’s not that AI will outsmart us and kill us; it’s that AI will trick us into trusting it with more responsibility than it can responsibly handle, with disastrous results.

      It could be small-scale, low-stakes stuff, like an AI designing a menu that humans blindly cook from. Or it could be higher-stakes stuff that actually does things like affect election results, crash financial markets, cause a military to target the wrong house, etc. The danger has always been that humans will act on the information provided by a malfunctioning AI, not that AI and technology will be a closed loop with no humans involved.

      • Lvxferre@mander.xyz · 2 points · 14 days ago (edited)

        Yup, it is a real risk. But on the lighter side, it’s a risk that we [humanity] have been fighting against since forever - the possibility of some of us causing harm to others not out of malice, but out of assumptiveness and similar character flaws. (In this case: “I assume that the AI is reliable enough for this task.”)

      • Lvxferre@mander.xyz · 2 points · 14 days ago

        I’m reading your comment as “[AI is] Not yet [an existential threat], anyway”. If that’s inaccurate, please clarify, OK?

        With that reading in mind: I don’t think that the current developments in machine “learning” lead towards some hypothetical system that would be an existential threat. The closest to that would be the subset of generative models, which looks like a tech dead end - sure, it might see some applications, but I don’t think that it’ll progress much past the current state.

        In other words I believe that the AI that would be an existential threat would be nothing like what’s being created and overhyped now.

        • CanadaPlus@lemmy.sdf.org · 2 points · 14 days ago

          Yeah, the short-term outlook doesn’t look too dangerous right now. LLMs can do a lot of things we thought wouldn’t happen for a long time, but they still have major issues and are running out of easy scalability.

          That being said, there’s a lot of different training schemes or integrations with classical algorithms that could be tried. ChatGPT knows a scary amount of stuff (inb4 Chinese room), it just doesn’t have any incentive to use it except to mimic human-generated text. I’m not saying it’s going to happen, but I think it’s premature to write off the possibility of an AI with complex planning capabilities in the next decade or so.

          • Lvxferre@mander.xyz · 2 points · 14 days ago

            I don’t think that a different training scheme or integrating it with already existing algos would be enough. You’d need a structural change.

            I’ll use a silly illustration for that; it’s somewhat long so I’ll put it inside spoilers. (Feel free to ignore it though - it’s just an illustration, the main claim is outside the spoilers tag.)

            The Mad Librarian and the Good Boi

            Let’s say that you’re a librarian, and you have lots of books to sort out. So you want to teach a dog to sort books for you, starting with sci-fi and geography books.

            So you set up the training environment: a table with a sci-fi book and a geography book. And you give your dog a treat every time that he puts the ball over the sci-fi book.

            At the start, the dog doesn’t do it. But then, as you train him, he’s able to do it perfectly. Great! Does the dog now recognise sci-fi and geography books? You test this out by switching the placement of the books and asking the dog to perform the same task; now he’s putting the ball over the geography book. Nope - he doesn’t know how to tell sci-fi and geography books apart; you were “leaking” the answer through the placement of the books.

            Now you repeat the training with a random position for the books. Eventually after a lot of training the dog is able to put the ball over the sci-fi book, regardless of position. Now the dog recognises sci-fi books, right? Nope - he’s identifying books by the smell.

            To fix that you try again, with new versions of the books. Now he’s identifying the colour: the geography book has the same grey/purple hue as grass (from a dog’s PoV), while the sci-fi book is black like the neighbour’s cat. The dog would happily put the ball over the neighbour’s cat and ask “where’s my treat, human???” if the cat allowed it.

            Needs more books. You assemble a plethora of geo and sci-fi books. Since sci-fi covers typically tend to be dark and the geo books tend to have nature on their covers, the dog is able to place the ball over the sci-fi books 70% of the time. Eventually you give up and say that the 30% error is the dog “hallucinating”.

            We might argue that, by now, the dog should be “just a step away” from recognising books by topic. But we’re just fooling ourselves, the dog is finding a bunch of orthogonal (like the smell) and diagonal (like the colour) patterns. What the dog is doing is still somewhat useful, but it won’t go much past that.

            And even if you and the dog lived forever (denying St. Peter the chance to tell him “you weren’t a good boy. You were the best boy.”), and spent most of your time on that training routine, his little brain wouldn’t be able to create the associations necessary to actually identify a book by its topic, i.e. by its content.

            I think that what happens with LLMs is a lot like that. With a key difference - dogs are considerably smarter than even state-of-art LLMs, even if they’re unable to speak.

            At the end of the day LLMs are complex algorithms associating pieces of words, based on statistical inference. This is useful, and you might even see some emergent behaviour - but they don’t “know” stuff, and this is trivial to show, as they fail to perform simple logic even with pieces of info that they’re able to reliably output. Different training and/or algo might change the info that it’s outputting, but they won’t “magically” go past that.
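
            To make the “associating pieces of words, based on statistical inference” part concrete, here is a deliberately tiny toy - a bigram generator that only tracks which word tends to follow which. It is not how a transformer-based LLM is actually built (those work over learned token embeddings with far more context), just an illustration of the statistical-association principle:

            ```python
            import random
            from collections import defaultdict

            # Toy "language model": count which word follows which in a tiny corpus,
            # then generate text by sampling from those counts. Pure statistical
            # association between adjacent words - no understanding of content.
            corpus = (
                "the dog sorts the books . the dog likes the sci-fi books . "
                "the librarian gives the dog a treat ."
            ).split()

            follows = defaultdict(list)
            for current_word, next_word in zip(corpus, corpus[1:]):
                follows[current_word].append(next_word)

            def generate(start: str, length: int = 12) -> str:
                """Generate text by repeatedly sampling a plausible next word."""
                words = [start]
                for _ in range(length):
                    candidates = follows.get(words[-1])
                    if not candidates:  # no observed continuation: stop
                        break
                    words.append(random.choice(candidates))
                return " ".join(words)

            print(generate("the"))
            ```

            The output can look locally fluent while the “model” plainly knows nothing; the argument above is that scaling this family of technique up, with far better statistics, changes what it outputs but not the fact that it is association rather than knowledge.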

            • CanadaPlus@lemmy.sdf.org · 2 points · 13 days ago (edited)

              Chinese room, called it. Just with a dog instead.

              I have this debate so often, I’m going to try something a bit different. Why don’t we start by laying down how LLMs do work. If you had to explain as fully as you could the algorithm we’re talking about, how would you do it?

              • Lvxferre@mander.xyz · 2 points · 13 days ago

                Chinese room, called it. Just with a dog instead.

                The Chinese room experiment is about the internal process - whether it thinks or not, whether it simulates or knows - with a machine that passes the Turing test. My example clearly does not bother with all that; what matters here is the ability to perform the goal task.

                As such, no, my example is not the Chinese room. I’m highlighting something else - that the dog will keep doing spurious associations, that will affect the outcome. Is this clear now?

                Why this matters: in the topic of existential threat, it’s pretty much irrelevant if the AI in question “thinks” or not. What matters is its usage in situations where it would “decide” something.

                I have this debate so often, I’m going to try something a bit different. Why don’t we start by laying down how LLMs do work. If you had to explain as fully as you could the algorithm we’re talking about, how would you do it?

                Why don’t we do the following instead: I’ll play along with your inversion of the burden of proof once you show how it would be relevant to your implicit claim that AI [will|might] become an existential threat (from “[AI is] Not yet [an existential threat], anyway”)?


                Also worth noting that you outright ignored the main claim outside the spoilers tag.

                • CanadaPlus@lemmy.sdf.org · 2 points · 13 days ago (edited)

                  Yeah, sorry, I don’t want to invert the burden of proof - or at least, I don’t want to ask anything unreasonable of you.

                  Okay, let’s talk just about the performance we measure - it wasn’t clear to me that that was what you meant from what you wrote. Natural language is inherently imprecise, so no bitterness intended, but in particular that’s how I read the section outside of the spoiler tag.

                  By some measures, it can do quite a bit of novel logic. I recall it drawing a unicorn using text commands in one published test, for example, which correctly had a horn, body, and four legs. That requires combining concepts in a way that almost certainly isn’t directly in the training data, so it’s fair to say it’s not a mere search engine. Then again, sometimes it just doesn’t do what it’s asked, for example when adding two numbers - it will give a plausible-looking result, but that’s all.

                  So, we have a blackbox, and we’re trying to decide if it could become an existential threat. Do we agree that a computer just as smart as us probably would be? If so, that reduces to whether the blackbox could eventually be just as smart as us. Up until now, there have been great reasons to say no, even about blackbox software. I know Clippy could never have done it, because there are forms of reasoning classical algorithms just couldn’t do, despite great effort - it doesn’t matter that Clippy was closed source, because it was a classical algorithm.

                  On the other hand, what neural nets can’t do is a total unknown. GPT-n won’t add numbers directly, but it is able to correctly perform the steps, which you can show by putting it in a chain-of-thought framework. It just “chooses” not to, because that’s not how it was trained. GPT-n can’t organise a faction that threatens human autonomy, but we don’t know if that’s because it doesn’t know the steps, or because of the lack of memory and a cost function to make it do that.
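
                  As a concrete illustration of what “putting it in a chain-of-thought framework” means in practice (a sketch only - `ask` is a placeholder stub, not any particular real API):

                  ```python
                  # Placeholder for whatever LLM client you actually use; it just echoes
                  # the prompt so the example runs on its own. The point is the prompt
                  # shape, not the client.
                  def ask(prompt: str) -> str:
                      print("--- prompt sent to the model ---")
                      print(prompt)
                      return "<model reply would go here>"

                  # Direct question: models often just pattern-match a plausible-looking number.
                  ask("What is 48317 + 96584? Answer with only the number.")

                  # Chain-of-thought framing: the same model is asked to write out the steps
                  # (align the digits, add column by column, track the carries) before it
                  # states the result, which empirically makes multi-step problems more reliable.
                  ask(
                      "What is 48317 + 96584?\n"
                      "Work through it step by step: add the digits column by column,\n"
                      "keeping track of carries, and only then state the final answer."
                  )
                  ```

                  Same weights, same blackbox - only the framing changes, which is part of why it’s hard to pin down what the underlying system can or can’t do.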

                  It’s a blackbox, there are no known limits on what it could do, and it’s certain to be improved on quickly, at least in some way. For this reason, I think it might become an existential threat in some future iteration.

  • django@discuss.tchncs.de · 27 points · 15 days ago

    The energy demand of AI will harm humanity, because we keep feeding it huge amounts of energy produced by burning fossil fuels.

    • 100@fedia.io · 2 points · 15 days ago

      divert all energy use to these ai projects so they can regurgitate reddit shitposts more accurately, thanks

  • darkphotonstudio@beehaw.org · 19 points · 14 days ago (edited)

    I believe much of our paranoia concerning ai stems from our fear that something will come along and treat us like we treat all the other life on this planet. Which is bitterly ironic considering our propensity for slaughtering each other on a massive scale. The only danger to humanity is humans. If humanity is doomed, it will be our own stupid fault, not AI.

    • Kichae · 15 points · 14 days ago

      I think much of it comes from “futurologists” spending too much time smelling each others’ farts. These AI guys think so very much of themselves.

      • Hazzia@discuss.tchncs.de · 8 points · 14 days ago

        It’s crazy how little experts like these think of humanity, or how much they underestimate our tolerance of and adaptability to weird shit. People used to talk about how “if we ever learned UFOs were a real phenomenon, there would be global mayhem!” because people’s world views would collapse and they’d riot, or whatever. Yet after a steady trickle of articles over the past few years since that first NY Times piece, I’ve basically not heard of anyone really caring (who didn’t already seem to be into them before, anyway). Hell, we had a legitimate attempt to overthrow our own government, and the large majority of our population just kept on with their lives.

        Ten years ago, these same AI experts would have thought that the AI we have right now would cause societal collapse.

        • darkphotonstudio@beehaw.org · 4 points · 14 days ago (edited)

          Idk about societal collapse, but think about the amount of damage the World Wide Web and social media have done and continue to do. Look at the mess cars have made of cities around the world over the course of a century. Just because it doesn’t happen overnight doesn’t mean serious problems can’t occur. I think we have 10 years before the labour market is totally upended, with or without real AGI. Even narrow AI is capable of fucking things up on a scale no one wants to admit.

      • darkphotonstudio@beehaw.org · 2 points · 14 days ago (edited)

        Agreed, partially. However, the “techbros” in charge, for the most part, aren’t the researchers. There are futurologists who are real scientists and researchers. Dismissing them smacks of the anti-science knuckleheads ignoring warnings about the dangers of not wearing masks and getting vaccines during the pandemic. Not everyone interested in the future is a techbro.

        • Kichae · 1 point · 13 days ago

          “Futurologist” is a self-appointed honorific used by people who fancy themselves “deep thinkers” while thinking of nothing more deeply than how deep they are. It’s like declaring oneself an “intellectual”.

    • verdare [he/him]@beehaw.org · 4 points · 14 days ago

      The only danger to humans is humans.

      I’m sorry, but this is a really dumb take that borders on climate change denial logic. A sufficiently large comet is an existential threat to humanity. You seem to have this optimistic view that humanity is invincible against any threat but itself, and I do not think that belief is justified.

      People are right to be very skeptical about OpenAI and “techbros.” But I fear this skepticism has turned into outright denial of the genuine risks posed by AGI.

      I find myself exhausted by this binary partitioning of discourse surrounding AI. Apparently you have to either be a cult member who worships the coming god of the singularity, or think that AI is either impossible or incapable of posing a serious threat.

      • darkphotonstudio@beehaw.org · 3 points · 14 days ago

        You seem to have this optimistic view that humanity is invincible against any threat but itself

        I didn’t say that. You’re making assumptions. However, I don’t take AGI as a serious risk, not directly anyway. AGI is a big question mark at this time, and hardly comparable to a giant comet or a pandemic, for which we have experience or solid scientific evidence. Could it be a threat? Yeah. Do I personally think so? No. Our reaction to and exploitation of it will likely do far more harm than any direct action by an AGI.

      • darkphotonstudio@beehaw.org · 1 point · 14 days ago

        True. But we are still talking about what is essentially an alien mind. Even if it can do a good impression of human intelligence, that doesn’t mean it is a human mind. It won’t have billions of years of evolution and thousands of years of civilization and development behind it.

  • A1kmm@lemmy.amxl.com · 16 points · 14 days ago

    I think any prediction based on a ‘singularity’ neglects to consider the physical limitations, and just how long the journey towards any significant amount of AGI would be.

    The human brain has an estimated 100 trillion neuronal connections - so probably a good order of magnitude estimation for the parameter count of an AGI model.

    If we consider a current GPU, e.g. the 12 GB RTX 3060, it can hold about 24 billion parameters at 4-bit quantisation (in reality a fair few less), and it uses 180 W of power. So that means an AGI might use 750 kW of power to operate. A super-intelligent machine might use more. That is a farm of 2,500 300 W solar panels, while the sun is shining, just for the equivalent of one person.

    Now to pose a real threat against the billions of humans, you’d need more than one person’s worth of intelligence. Maybe an army equivalent to 1,000 people, powered by 8,333,333 GPUs and 2,500,000 solar panels.

    That is not going to materialise out of the air too quickly.
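
    For anyone who wants to check the napkin math, here it is spelled out (same assumptions as above; note that the GPU count for the 1,000-person “army” depends on how many of a card’s nominal parameters you treat as actually usable - the 8,333,333 figure corresponds to roughly half):

    ```python
    # Back-of-the-envelope version of the estimate above. All inputs are the
    # figures assumed in the comment, not measured values.
    brain_connections = 100e12   # ~100 trillion synapses, used as a parameter-count proxy
    gpu_vram_bytes    = 12e9     # 12 GB card
    bytes_per_param   = 0.5      # 4-bit quantisation
    gpu_power_w       = 180      # per card
    panel_power_w     = 300      # per solar panel

    params_per_gpu = gpu_vram_bytes / bytes_per_param    # ~24 billion
    gpus_per_agi   = brain_connections / params_per_gpu  # ~4,200 cards
    power_per_agi  = gpus_per_agi * gpu_power_w          # ~750 kW
    panels_per_agi = power_per_agi / panel_power_w       # ~2,500 panels

    print(f"GPUs per human-equivalent:  {gpus_per_agi:,.0f}")
    print(f"Power draw:                 {power_per_agi / 1e3:,.0f} kW")
    print(f"Solar panels (while sunny): {panels_per_agi:,.0f}")
    print(f"Solar panels for 1,000:     {panels_per_agi * 1000:,.0f}")
    ```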

    In practice, as we get closer to an AGI or ASI, there will be multiple separate deployments of similar sizes (within an order of magnitude), and they won’t be aligned to each other - some systems will be adversaries of any system executing a plan to destroy humanity, and will be aligned to protect against harm (AI technologies are already widely used for threat analysis). So you’d have a bunch of malicious systems, and a bunch of defender systems, going head to head.

    The real AI risks, which I think many of the people ranting about singularities want to obscure, are:

    • An oligopoly of companies get dominance over the AI space, and perpetuates a ‘rich get richer’ cycle, accumulating wealth and power to the detriment of society. OpenAI, Microsoft, Google and AWS are probably all battling for that. Open models is the way to battle that.
    • People can no longer trust their eyes when it comes to media; existing problems of fake news, deepfakes, and so on become so severe that they undermine any sense of truth. That might fundamentally shift society, but I think we’ll adjust.
    • Doing bad stuff becomes easier. That might be scamming, but at the more extreme end it might be designing weapons of mass destruction. On the positive side, AI can help defenders too.
    • Poor quality AI might be relied on to make decisions that affect people’s lives. Best handled through the same regulatory approaches that prevent companies and governments doing the same with simple flow charts / scripts.
    • darkphotonstudio@beehaw.org · 4 points · 14 days ago

      I think you’re right on the money when it comes to the real dangers, especially your first bullet point. I don’t necessarily agree with your napkin maths. If the virtual neurons are used in a more efficient way, that could make up for a lot versus human neuron count.

    • CanadaPlus@lemmy.sdf.org · 4 points · 14 days ago (edited)

      The human brain has an estimated 100 trillion neuronal connections - so probably a good order of magnitude estimation for the parameter count of an AGI model.

      Yeah, but a lot of those do things unrelated to higher reasoning. A small monkey is smarter than a moose, despite the moose obviously having way more synapses.

      I don’t think you can rely on this kind of argument so heavily. A brain isn’t a muscle.

    • technocrit@lemmy.dbzer0.com · 2 points · 14 days ago (edited)

      So you’d have a bunch of malicious systems, and a bunch of defender systems, going head to head.

      Let me guess… USA is defender and Russia/China is malicious? Seriously though who is going to be running the malicious machines trying to “destroy humanity”? If you’re talking about capitalism destroying the planet, this has already been happening without AI. Otherwise this seems like just another singularity fantasy.

      • A1kmm@lemmy.amxl.com · 2 points · 13 days ago

        The fear that people who like to talk about the singularity tend to propose is that there will be one ‘rogue’ misaligned ASI that progressively takes over everything - i.e. all the AI in the world working against all the people.

        My point is that it’s more likely there will be lots of ASI or AGI systems, not aligned with each other, most of them on the side of the humans.

    • adderaline@beehaw.org · 2 points · 14 days ago

      Open models is the way to battle that.

      This is something I think needs to be interrogated. None of these models, even the supposedly open ones, are actually “open” or even currently “openable”. We can know the exact weights of every single parameter, the code used to construct it, and the data used to train it, and that information gives us basically no insight into its behavior. We simply don’t have the tools to actually “read” a machine learning model in the way you would an open source program; the tech produces black boxes as a consequence of its structure. We can learn about how they work, for sure, but the corps making these things aren’t that far ahead of the public when it comes to understanding what they’re doing or how to change their behavior.

  • KevonLooney@lemm.ee · 14 points · 15 days ago

    I just realized something: since most people have no idea what AI is, it could easily be used to scam people. I think that will be its main function originally.

    Like, the average person does not have access to real-time stock data. You could make a fake AI program that pretends to be a trading algorithm and makes a ton of pretend money as the mark watches. The data would be 100% real and verifiable, just picked a few seconds after the fact.

    Since most people care a lot about money, this will be one of the first widespread applications of real-time AI: just tricking people out of money.

    • Scrubbles@poptalk.scrubbles.tech · 15 points · 15 days ago

      Yeah I’ll admit I was freaked out at the beginning. So I learned about models, used them, and got familiar with them. Now I’m less freaked out and more “oh my god so many people are going to get scammed/tricked”.

      Go on Facebook and you’ll see it’s a good 50-70% AI garbage now. My favorites are the “log cabin” and kitchen posts that are just AI images with blanket titles like “wish I lived here”, with THOUSANDS of comments of people saying “YES” or “it’s so beautiful”. Of course it is - it has no supports! The cabinets are held up by nothing! There are 9 kinds of lanterns and most are floating. Jesus, people are not ready for it.

      • frog 🐸@beehaw.org · 9 points · 15 days ago

        The “Willy Wonka Experience” event comes to mind. The images on the website were so obviously AI-generated, but people still coughed up £35 a ticket to take their kids to it, and were then angry that the “event” was an empty warehouse with a couple of plastic props and three actors trying to improvise because the script they’d been given was AI-generated gibberish. Straight up scam.

    • trevron@beehaw.org · 4 points · 14 days ago

      There are already cases of people pretending to be AI, and of people revealing dumb info about themselves lol

  • Floey@lemm.ee · 14 points · 15 days ago

    This fear mongering just benefits Altman. If his product is powerful enough to be a threat to humanity, then it is also powerful enough to be capable of many useful things - things it has not proven itself to be capable of. Ironically, spreading fear about its capabilities will likely raise investment, so if you actually are afraid of OpenAI somehow arriving at a dangerous AGI, then you should really be trying to convince people of its lack of real utility.

    • tal@lemmy.today · 10 points · 15 days ago (edited)

      The guy complaining left the company:

      Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had “lost confidence that OpenAI will behave responsibly” as it continues trying to build near-human-level AI.

      I don’t think that he stands to benefit.

      He also didn’t say that OpenAI was on the brink of having something like this either.

      Like, I don’t think all the fighting at OpenAI and people being ejected and such is all a massive choreographed performance. I think that there have been people who really strongly disagree with each other.

      I absolutely think that AGI has the potential to pose existential risks to humanity. I just don’t think that OpenAI is anywhere near building anything capable of that. But if you’re trying to build towards such a thing, the risks are something that I think a lot of people would keep in mind.

      I think that human level AI is very much technically possible. We can do it ourselves, and we have hardware with superior storage and compute capacity. The problem we haven’t solved is the software side. And I can very easily believe that we may get there not all that far in the future. Years or decades, not centuries down the road.

      • Floey@lemm.ee · 3 points · 14 days ago

        I didn’t think it was a choreographed publicity stunt. I just know Altman has used AI fear in the past to keep people from asking rational questions like “What can this actually do?” He obviously stands to gain from people thinking they are on the verge of AGI. And someone looking for a new job in the field also has something to gain from it.

        As for the software thing, if it’s done by anyone it won’t be OpenAI and the megacorporations following in its footsteps. They seem insistent on throwing more data (of diminishing quality) and more compute (an impractical amount) at the same style of models, hoping they’ll reach some kind of tipping point.

  • 2xsaiko@discuss.tchncs.de · 11 points · 14 days ago (edited)

    I mean, I give it a 100% chance if they are allowed to keep going like this, considering the enormous energy and water consumption, the essentially slave labor used to classify data for training (the volume is so huge that it would never be financially viable to fairly pay people), and the end result, which is to fill the internet with garbage.

    You really don’t need to be an insider to see that.

  • AutoTL;DR@lemmings.world (bot) · 1 point · 15 days ago

    🤖 I’m a bot that provides automatic summaries for articles:

    In an interview with The New York Times, former OpenAI governance researcher Daniel Kokotajlo accused the company of ignoring the monumental risks posed by artificial general intelligence (AGI) because its decision-makers are so enthralled with its possibilities.

    Kokotajlo’s spiciest claim to the newspaper, though, was that the chance AI will wreck humanity is around 70 percent — odds you wouldn’t accept for any major life event, but that OpenAI and its ilk are barreling ahead with anyway.

    The 31-year-old Kokotajlo told the NYT that after he joined OpenAI in 2022 and was asked to forecast the technology’s progress, he became convinced not only that the industry would achieve AGI by the year 2027, but that there was a great probability that it would catastrophically harm or even destroy humanity.

    Kokotajlo became so convinced that AI posed massive risks to humanity that eventually, he personally urged OpenAI CEO Sam Altman that the company needed to “pivot to safety” and spend more time implementing guardrails to rein in the technology rather than continue making it smarter.

    Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had “lost confidence that OpenAI will behave responsibly” as it continues trying to build near-human-level AI.

    “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” the company said in a statement after the publication of this piece.


    Saved 56% of original text.