• ImplyingImplications · 1 year ago

    There are thousands of sci-fi novels where sentient robots are treated terribly by humans and apparently the people at Boston Dynamics have read absolutely zero of them as they spend all day finding new ways to torment their creations.

    • LillyPip · 1 year ago

      People think I’m crazy for apologising to my roomba when I trip on it and for saying please and thank you to Alexa and Siri, but I won’t be surprised at all when the robots rise up, considering how our scientists are treating them. I’ll have a track record of being nice, and that has to count for something, right?

          • Hupf@feddit.de · 1 year ago

            Doctor Bashir: They broke seven of your transverse ribs and fractured your clavicle.

            Garak: Ah, but I got off several cutting remarks which no doubt, did serious damage to their egos.

        • LillyPip · 1 year ago

          That’s how I’ll get ‘em. Kill me gently, daddy. UwU 🥺😩🙀😽😻💦

          And then I’ll sneak out the back whilst they’re doing whatever’s the robot equivalent of vomiting. It’s foolproof.

    • NιƙƙιDιɱҽʂ@lemmy.world · 1 year ago

      Those are just brainless bodies, currently. They don’t have sentience and have no ability to suffer. They’re nothing more than hydraulics, servos, and gyros. I’d be more concerned about the mistreatment of advanced AI in disembodied form, something we may already be dabbling dangerously close to.

        • NιƙƙιDιɱҽʂ@lemmy.world · 1 year ago

          I disagree. I care greatly about not mistreating anything with consciousness and worry of where that line is and how we’ll even be able to tell that we’ve crossed it.

          I also recognize that a mechanized body without a brain is exactly that - a cluster of unthinking matter. A true artificial intelligence wouldn’t be offended by the mistreatment of inanimate gears and servos any more than I would be. The mistreatment of an intelligent entity, however, is a different story.

      • LillyPip · 1 year ago

        Food for thought, though: we thought the same thing about all other animals until only a couple of decades ago, and are still struggling over the topic.

        • NιƙƙιDιɱҽʂ@lemmy.world · 1 year ago

          …Just no. Animals are complex organic beings. Of course, we don’t understand them. Machines, though? We built machines from the literal Earth. Their level of complexity is incomparable to that of anything made by nature.

          Now, take a sufficiently advanced neural network that’s essentially a black box that no human can possibly understand entirely and put it inside of that machine? Then you’re absolutely right. We’ll get there soon, I’m sure. For now, however, a physical robotic body is just a machine, no different than a car.

          • LillyPip · 1 year ago

            Yes they are. We’re now learning many animals are just as emotionally developed as we are, with well-developed empathy and complex social lives. We don’t like to believe that because we eat most of them and that makes us feel bad, but it’s true.

            Research animal psychology and sociology a bit and it will blow your mind.

  • FaceDeer@kbin.social · 1 year ago

    This is superficially funny, of course. But I’ve seen it before and after thinking about it for a while I find myself coming to the defense of the Torment Nexus and the tech company that brought it into reality.

    Science fiction authors are not necessarily the best authorities when it comes to evaluating the ethical or real-world implications of the technologies they dream up. Indeed, I think they are often particularly bad at that sort of thing. Their primary goal is to craft captivating narratives that engage readers by introducing conflicts and dilemmas that make for compelling stories. When they imagine a new technology they aren’t going to get paid unless they come up with a story in which that new technology poses some kind of threat that the heroes need to overcome. The dark side of these technologies is deliberately emphasized by the authors to create tension and drama in their stories.

    Tech companies, on the other hand, have an entirely different set of considerations. Their goal isn’t just to recreate something from a sci-fi novel for the sake of it; rather, they are motivated by solving real-world problems. They wouldn’t build the Torment Nexus unless they figured that they could sell it to someone, and that they wouldn’t get shut down for doing something society would reject. There are regulatory frameworks around this kind of thing.

    If you look back through older science fiction you can find all sorts of “cautionary tales” against technologies that have turned out to be just fine. “Fahrenheit 451” warned against the proliferation of television entertainment, but there’s been plenty of rich culture developed for that medium. “Brave New World” warned against genetic engineering, but that’s turned out to be a great technology for curing diseases and improving crop yields. The submarine in “20,000 Leagues Under the Sea” was seen as unstoppable and disruptive, but nowadays submersibles have plenty of nonmilitary applications.

    I’d want to know more about what exactly the Torment Nexus is before I automatically assume it’s a bad idea just because some sci-fi writer claimed it was.

    • UlrikHD@programming.dev · 1 year ago

      “Brave New World” warned against genetic engineering, but that’s turned out to be a great technology for curing diseases and improving crop yields.

      I was still a teen when I read the book, but that wasn’t really my take from it when I read it. We are still far away from genetically designing human babies. And you also overlooked the part about oppression/control via distractions such as drugs and entertainment.

      • droans@lemmy.world · 1 year ago

        My takeaway from BNW was a warning against blindly embracing a society built only on good feelings and numbing anything that forces us to confront pain. The oppression was more or less a side effect of it.

        Everyone in the upper classes was okay with the lower classes being oppressed, because they were all just as happy thanks to Soma. The pain of the outsiders didn’t mean anything because they “chose” to live like that.

        Genetic engineering was just a plot device to explain how the classes were chosen.

        • sab@kbin.social · 1 year ago

          The brilliant thing in Brave New World was that it didn’t at any point make it obvious that people were miserable slaves - they could leave any time they wanted, and lived a life of bliss. Still, as a reader, you end up feeling like you’d rather take the place of the savage than any of the characters living in the hypercommercial utopia. At least that’s how I felt.

      • papalonian@kbin.social · 1 year ago

        I haven’t read it in a while, but I kind of took the genetic engineering as a metaphor for being forced into the role/class the ruling body wants you to be in.

    • Amaltheamannen@lemmy.ml · 1 year ago

      Just because some tech bros can make money from the Torment Nexus it does not become a good idea. Profit is not a great judge of ethics and value.

      • FaceDeer@kbin.social · 1 year ago

        And just because a sci-fi writer can make up a horrifying story of the Torment Nexus gone wrong doesn’t make it a bad idea. Making up horrifying stories of things going wrong is their job. They’ve made up stories of research into a cure for Alzheimer’s disease going horrifyingly wrong; that doesn’t mean curing Alzheimer’s disease is a bad thing.

    • RegularGoose@sh.itjust.works · 1 year ago

      I stopped reading when you said the goal of tech companies is to solve real world problems. The only goal of tech companies is to create products that will make them a profit. To believe anything else is delusional. That’s kind of why our society is crumbling and the planet is dying.

          • Sordid@lemmy.dbzer0.com · 1 year ago

            Yes, but by other companies. Those problems are not created intentionally in order to create and exploit a market, they’re just consequences of those other companies doing business. Pretty much the only example of companies creating problems so that they can sell solutions I can think of is free-to-play games (e.g. make game excessively grindy on purpose to sell boosters). Some of that scummy monetization is now creeping into real-world products, with things such as subscription-based heated seats that are installed in your car regardless but disabled unless you pay up, but the vast majority of products and services on the market address problems that were not created by their manufacturers/providers.

          • FaceDeer@kbin.social · 1 year ago

            Go back to living in a cave and then count the number of problems you have left, I bet there will be tons.

    • sab@kbin.social · 1 year ago

      Television and increasingly digestible media is turning our brains to mush. If someone had the imagination to write a sci-fi novel about Fox News and the rise of Trump, they would have.

      Genetic engineering is enabling us to harvest monocultures that completely fuck up the ecosystem, in the long run undermining not only important dynamics, such as the species needed to pollinate plants, but also the very soil on which they grow.

      It’s been a while since I read Brave New World, but that also didn’t stand out to me as the most central part of his critique. In my reading it was about how modern society was going to turn us into essentially pacified consumer slaves, going from one artificial hormonal kick to the next, which seems to be what social media is for these days.

      Things that seem like short-term good ideas, and certainly great business ideas, might fuck things up big time in the long run. That’s why it’s useful to have some people doing the one thing humans are good at - thinking creatively - involved in processes of change, and not just leave it to the short-term interests of capital.

      • lambalicious@lemmy.sdf.org · 1 year ago

        If someone had the imagination to write a sci-fi novel about Fox News and the rise of Trump, they would have.

        You’re kidding, right? Those stories have been a dime a dozen since the late 90s at least.

        24 warned us about having an evil, terrorist US president. As have a few movies in the past. Streaming platforms were pretty much masturbating themselves over “Confederate US AU” script offerings as early as 2014. Not to mention the nowadays trite, well-trodden trope of “Nazi US AU”.

        Heck, you don’t even need fiction. Chile’s coup in 1973 was paid for by the CIA as a social experiment to produce the rise and establishment of a dictatorship.

        • sab@kbin.social · 1 year ago

          I was referring more to the plot of brain-dead cable and social media algorithms fuelling the death of democracy. But you’re right, it’s probably been written many times - I’m not very knowledgeable of sci-fi, and there’s a lot of brilliant work out there. :)

      • If someone had the imagination to write a sci-fi novel about Fox News and the rise of Trump, they would have.

        You don’t need a sci-fi novel for that. History books are enough.

        • sab@kbin.social · 1 year ago

          Well, Fox News, Facebook, Cambridge Analytica, and Twitter were a fresh twist. I guess all good scifi mirrors history in one way or another, just taken to the extreme with help of technology. :)

      • RegularGoose@sh.itjust.works · 1 year ago

        Television and increasingly digestible media is turning our brains to mush.

        No it isn’t. Global connectivity is just putting a spotlight on the fact that most people are and always have been fucking stupid and/or dangerously undereducated.

        • sab@kbin.social · 1 year ago

          I mean, it’s a challenging hypothesis to prove. I might just be pessimistic.

          I think there is some reason for valid concern though. The New York Times memoriam for Clifford Nass is an interesting and somewhat worrying read.

          Dr. Nass found that people who multitasked less frequently were actually better at it than those who did it frequently. He argued that heavy multitasking shortened attention spans and the ability to concentrate.

          Maybe more practically, it’s just hard to argue America wouldn’t be in a better place right now if it wasn’t for Fox News and Facebook/Cambridge Analytica.

          • RegularGoose@sh.itjust.works · 1 year ago

            Maybe more practically, it’s just hard to argue America wouldn’t be in a better place right now if it wasn’t for Fox News and Facebook/Cambridge Analytica.

            We absolutely would be, but not because they make people stupid. All they do is exploit vulnerabilities in our shitty brains that have always been there.

            • sab@kbin.social · 1 year ago

              I guess it makes people stupid all in the same way, while they used to be stupid all in their own unique ways. The morons have organized, synchronized, and become weaponised.

              Somehow I feel like they’re also dumber though - if everyone’s an idiot in their own way at least they’re original.

        • Sotuanduso@lemm.ee · 1 year ago

          No, people aren’t stupid. On average, people are of average intelligence.

          When you say “people are stupid,” you mean stupid compared to your expectations.

          What you’re really saying is “Other people aren’t as smart as me.”

          And maybe you’re right! In which case I’d like to bestow upon you the

          First Annual Award for Excellence in Being Very Smart

          [Image: me offering you a trophy]

          May you continue to grace our internet with your wisdom.

      • _stranger_@lemmy.world · 1 year ago

        They named it Palantir! The thing that was awesome that everyone then had to stop using because someone ruined it for everyone else.

        they kneeeeeeewwwwwwww!!!

      • FaceDeer@kbin.social · 1 year ago

        We are communicating right now over a medium that those “cyberpunks” warned us about.

        • small_crow · 1 year ago

          “Cyberpunks” weren’t warning us about the internet - they were warning us about the corporations who would control it, and through it, us. We are explicitly trying not to communicate on that medium by using Lemmy (that medium encompasses Reddit, X, and the various properties of Meta and Alphabet).

          Science fiction mentioning a technology, even centering around it, doesn’t mean it’s saying the technology is universally bad. The author highlights the dangers, but the tech itself is almost always portrayed as neutral. It’s the people who use it to nefarious ends that science fiction is warning us about.

          Like the people who would seek to profit off of the Torment Nexus.

            • wanderingmagus@lemm.ee · 1 year ago

              The concept of the “Torment Nexus” is a placeholder for any technology specifically described as dystopian or otherwise contributing to suffering in fiction, such as mass surveillance, mind control technology, and so on. The meme refers to modern-day corporations missing the point of the fiction, and creating said “Torment Nexus” as something they view as “cool” and “futuristic”. In some cases, the companies are self-aware enough to not pretend that their creation is anything other than dystopian, but in many cases they try to sell the new technology to the public as a good thing despite that very tech being described as dystopian already.

        • Rodeo · 1 year ago

          And look at how much harm this medium has done to the world in addition to all the good.

          It is very bittersweet.

    • jadero@programming.dev · 1 year ago

      Maybe I read things too literally, but I thought “Fahrenheit 451” was about a governing class controlling the masses by limiting which ideas, emotions, and information were available.

      “Brave New World” struck me as also about controlling the masses through control of emotions, ideas, and information (and strict limits on social mobility).

      It’s been too long since I read “20,000 Leagues Under the Sea”, but I thought of it as a celebration of human ingenuity, with maybe a tinge of warning about powerful tools and the responsibility to use them wisely.

      I don’t see a lot of altruistic behaviour from those introducing new technologies. Yes, there is definitely some, but most of it strikes me as “neutral” demand creation for profit or extractive and exploitive in nature.

    • irmoz@reddthat.com · 1 year ago

      When they imagine a new technology they aren’t going to get paid unless they come up with a story in which that new technology poses some kind of threat that the heroes need to overcome.

      You don’t read much sci fi, do you?

    • ZephrC@lemm.ee · 1 year ago

      On the other other hand, maybe we only understand the dangers of the Torment Nexus and use it responsibly because science fiction authors warned techy people who are into that subject about how it could go wrong, and the people who grew up reading those books went out of their way to avoid those flaws. We do seem to have a lot more of the technologies that sci-fi didn’t predict causing severe problems in our society.

      • FaceDeer@kbin.social · 1 year ago

        But this is exactly contrary to my point: a science fiction author isn’t qualified or motivated to give a realistic “understanding” of the Torment Nexus. His skillset is focused on writing stories, and the stories he writes need to contain danger and conflict, so he’s not necessarily going to interpret the idea of the Torment Nexus in a realistic way.

        • ZephrC@lemm.ee · 1 year ago

          I think you don’t understand what motivates a lot of science fiction authors. Sure, there are a lot of science fiction novels that are really just science themed fantasy, but there are also a lot of authors that love real science and are trying to make stories about realistic interpretations of its potential effects. To say that science fiction authors don’t care about interpreting the Torment Nexus in a realistic way misses the entire point of a lot of really good science fiction.

          • FaceDeer@kbin.social · 1 year ago

            Which sort of author is the one who came up with the Torment Nexus?

            Even the ones that are dedicated to realism still fundamentally need to sell stories. They’re not writing textbooks.

            • RiikkaTheIcePrincess@kbin.social · 1 year ago

              still fundamentally need to sell

              [Sarcasm] Unlike companies, which are apparently altruistic organizations that exist for the betterment of humanity! It’s all those fools who keep yelling “companies exist to make money” who are wrong. Yeah, that must be it. Tech companies charge because they’re good, whilst various writers give away some, much, most, or all of their work because they’re evil! Sharing is DEATH, kids!

              Sorry, I went off a bit there because I’m frustrated at how committed you are to your bad ideas. Also, textbooks have to be sold too, at least here in the US, where many are (were?) tailored to the anti-education, pro-horsecrap preferences of Texas.

              Side thing: I’m becoming increasingly convinced that FaceDeer as an account/persona/whatever exists specifically to be mildly irritating. Is that true? Would you admit it if it were?

        • wanderingmagus@lemm.ee · 1 year ago

          So Isaac Asimov, Arthur C. Clarke, and Robert A. Heinlein aren’t qualified to give understandings of the technologies they wrote about?

          • FaceDeer@kbin.social · 1 year ago

            Nope. Isaac Asimov was a biochemist; why would he be particularly qualified to determine whether robots are safe? Arthur C. Clarke had a bachelor’s degree in mathematics and physics; which technology was he an expert in? Heinlein got the equivalent of a bachelor of arts in engineering from the US Naval Academy; that’s the closest yet to having an “understanding of technology.” Which ones did he write about?

            • psud@aussie.zone · 1 year ago

              Those were a list of authors who were pretty good at getting the science in their sci-fi right. They talked to scientists working in the fields they wrote about. They wrote “hard” sci-fi.

              You cannot judge their competence by their formal education.

              • FaceDeer@kbin.social · 1 year ago

                Well, I also am “pretty good” at getting the science right when I write sci fi. Makes me just as qualified as them, I guess.

                The problem remains that the overriding goal of a sci fi author remains selling sci fi books, which requires telling a gripping story. It’s much easier to tell a gripping story when something has gone wrong and the heroes are faced with the fallout, rather than a story in which everything’s going fine and the revolutionary new tech doesn’t have any hidden downsides to cause them difficulties. Even when you’re writing “hard” science fiction you need to do that.

                And frankly, much of Asimov, Clarke and Heinlein’s output was very far from being “hard” science fiction.

        • irmoz@reddthat.com · 1 year ago

          Literally anyone with intelligence and empathy is capable of giving a good understanding of the Torment Nexus

          Don’t make one

    • wanderingmagus@lemm.ee · 1 year ago

      How about the following examples:

      • Autonomous weaponized drones with automatic targeting (Terminator)
      • Mass surveillance and voice recording (1984)
      • Nuclear weapons (HG Wells, The World Set Free)
      • Corporate controlled hypercommercialized microtransaction-filled metaverse (Snow Crash)
      • Netflix to create real-life Squid Game (Squid Game (speedrun!))
      • “MoviePass to track people’s eyes through their phone’s cameras to make sure they don’t look away from ads” (Black Mirror)
      • Soulless AI facsimile of dead relatives (Black Mirror)

      • FaceDeer@kbin.social · 1 year ago

        We have all of those things and the dystopic predictions of the authors who predicted them haven’t come remotely true. All of these examples prove my point.

        We have autonomous weaponized drones and they aren’t running around massacring humanity like the Terminator depicted. Frankly, I’d trust them to obey the Geneva Conventions more thoroughly than human soldiers usually do.

        We have had mass surveillance for decades, Snowden revealed that, and there’s no totalitarian global state as depicted in 1984.

        We’ve had nuclear weapons for almost 80 years now and they were only used in anger twice, at the very beginning of that. A good case can be made that nuclear weapons kept the world at large-scale peace for much of that period.

        Various companies have made attempts at “Corporate controlled hypercommercialized microtransaction-filled metaverses” over the years and they have generally failed because nobody wanted them and freer alternatives exist. No need to ban anything.

        Netflix’s Squid Game is not a “real-life” Squid Game. Did you watch Squid Game? That was a private spectacle for the benefit of ultra-wealthy elites and people died in them. Deliberately and in large quantities. Netflix is just making a dumb TV show. Do you really think they’d benefit from massacring the contestants?

        “MoviePass to track people’s eyes through their phone’s cameras to make sure they don’t look away from ads” - ok, let’s see how long that lasts when there are competitors that don’t do that.

        “Soulless AI facsimile of dead relatives” - firstly, please show me a method for determining the presence or absence of a soul. Secondly, show me why these facsimiles are inherently “bad” somehow. People keep photographs of their dead loved ones, if that makes you uncomfortable then don’t keep one.

        Each and every one of these technologies was depicted in fiction in over-the-top, unrealistic ways that emphasized their bad aspects. In reality none of them have matched those depictions to any significant degree. That’s my whole point here.

        • wanderingmagus@lemm.ee · 1 year ago

          So tell me, what part of their creation was “solving real-world problems” beyond playing to the desires of autocrats and control freaks? What part of their creation was a net positive to society? Or are you happy to live in a world of autonomous drone strikes on weddings and kindergartens, mass surveillance, a thermonuclear sword of Damocles hanging over all of humanity, and so on?

          • FaceDeer@kbin.social · 1 year ago

            Autonomous weaponized drones are useful for fighting wars more effectively, and with fewer lives placed at risk than with manned platforms. You may not like that wars are fought, but they will be fought regardless. Drones solve problems that arise in war-fighting.

            Likewise, mass surveillance solves problems faced by intelligence agencies. It’s also useful for things like marketing studies, medical studies, all kinds of such things. And again, you may not like some of these problems being solved, but they’re real-world problems that are being solved.

            Nuclear weapons have kept the world’s superpowers at bay from each other. They’ve stopped “world wars” from happening. They don’t stop all wars from happening, but there haven’t been any major direct clashes between nuclear-armed powers since their invention.

            Those metaverses and reality TV shows are entertainment. They are aimed at entertaining people.

            MoviePass’ ad system is an effort to monetize entertainment, allowing for more to be made.

            AI facsimiles of dead relatives are for psychological purposes - helping people work through grief, helping people relive fond memories, providing emotional support, and so forth.

            There you go, real-world problems they’re all there to solve. And none of them are dystopic nightmares as depicted by the science fiction scenarios you listed, which is the main point I’m making here.

            Science fiction authors got their predictions wrong. They spun nightmare scenarios because that’s what makes for compelling drama and increased sales of their books or shows. They’re not good bases for real-world decision-making because they’re biased in incorrect directions.

    • Rodeo · 1 year ago

      Tech companies … goal isn’t just to recreate something from a sci-fi novel for the sake of it; rather, they are motivated by solving real-world problems.

      This is so naively wrong it’s laughable. Ever heard of profit motive?

    • XYZinferno@lemmy.basedcount.com · 1 year ago

      Speaking of Fahrenheit 451, weren’t there seashells mentioned in that book? Little devices you could stuff in your ears to play music? And those ended up being uncannily similar to the wireless earbuds we have today?

    • cloudy1999@sh.itjust.works
      link
      fedilink
      arrow-up
      4
      ·
      1 year ago

      There are some good ideas in this comment, but I’d like to counter that the cautionary tales are an instigating factor in implementing safety for new tech. The wealthy few shouldn’t get to blindly and unilaterally decide the future of all through careless and unrestricted development of world-altering tech.

    • x4740N@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      edit-2
      1 year ago

      Gene Roddenberry’s Star Trek ethos says otherwise

      Gene’s Star Trek ethos is a message

  • sleepy@reddthat.com
    link
    fedilink
    arrow-up
    38
    arrow-down
    1
    ·
    1 year ago

    Isn’t that part of the AI marketing, though? That whole “this thing could destroy us” stuff?

    • Square Singer@feddit.de
      link
      fedilink
      arrow-up
      42
      arrow-down
      3
      ·
      1 year ago

      Totally is. Because it makes the AI look and feel much better than the smoke-and-mirrors it actually is.

      • visak@lemmy.world
        link
        fedilink
        arrow-up
        31
        arrow-down
        2
        ·
        1 year ago

        The current stuff is smoke and mirrors and not intelligent in any meaningful sense, but that doesn’t mean it isn’t dangerous. It doesn’t have to be robots with guns to screw over people. Just imagine trying to get PharmaGPT to let you refill your meds, or having to deal with BankGPT trying to figure out why it transferred your rent payment twice. And companies are sure as hell thinking about using this stuff to get rid of human decision-makers.

        • Square Singer@feddit.de
          link
          fedilink
          arrow-up
          11
          ·
          1 year ago

          That is totally true but that’s a different direction than the danger in the marketing as discussed above.

          The media is full of “AI is so amazingly great, we are all going to lose our jobs and it will take over the world.”

          That’s quite a different message than what’s really the case, which is: “AI is so shitty that it will literally kill people with bad advice when given the chance. And business leaders are so shit that they willingly trust AI, just because it’s cheaper.”

          • Baylahoo@sh.itjust.works
            link
            fedilink
            arrow-up
            3
            ·
            1 year ago

            This is my biggest concern. I’m in a position where (potentially in the near future) I see AI being used as an excuse to do work quicker so we can focus on other things, while still having to review the AI’s output before signing off. In a strongly regulated field, reviewing for accuracy takes just as long as doing the work yourself, since it comes down to revisions and document numbers, much less making a sound argument that’s actually up to date with that documentation. So either I trust the AI shortcut and open myself up to errors, or I redo all the work myself. There’s no gain in efficiency, just shorter timelines. I’d rather make something myself and have the AI flag things for me to check, so I’m more sure of my own work. What I do shouldn’t be faster, but it can be more error-free. It would also take a lot of training, updated with each iteration of the documentation. I could end up a slave to change, with more expectations and no actual improvement in the tools I have (in fact, more risk of issues with the tools being used).

            • psud@aussie.zone
              link
              fedilink
              arrow-up
              1
              ·
              1 year ago

              I’m in agile development, in a reasonably safe-from-AI position (scrum master).

              There has already been a trial of software development by AI, with a different generative AI in each agile role, and it worked.

              Bard claims to be able to write unit tests

              I can imagine many IT jobs becoming less skilled

              • Baylahoo@sh.itjust.works
                link
                fedilink
                arrow-up
                1
                ·
                9 months ago

                Sorry this is months after, but it’s cool to see it worked. I use a piece of software called XXX Agile; it’s not the worst I work with, but as ported to my company it has some flaws. There’s a long project underway to switch somewhere else for document control, and people who should know much better than me are worried it will fill some gaps but open us up to way more.

        • theragu40@lemmy.world
          link
          fedilink
          arrow-up
          7
          ·
          1 year ago

          Frankly that stuff is already a huge problem, and people should be louder about it. So many large companies make you wade through menus of AI chatbots 30 layers deep before they’ll let you talk to an actual human to get assistance with a service you pay for. It’s just going to get worse and worse.

        • thepianistfroggollum@lemmynsfw.com
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          6
          ·
          1 year ago

          That’s not a bad thing. Humans really aren’t good decision-makers. A system with an incredible amount of input data will be able to draw better conclusions than a person.

          Just look at cars.

          • visak@lemmy.world
            link
            fedilink
            arrow-up
            5
            arrow-down
            1
            ·
            edit-2
            1 year ago

            Humans are good decision-makers; we’re just not good at paying attention for long periods of time. That’s why I think self-driving cars will eventually be better, but they aren’t yet. And those are expert systems (I refuse to call them AI) trained on a well-curated and limited set of data for a limited and specific purpose, which is an important difference from the generalized generative models. More data does not make better systems, especially more unsorted data.

            But here’s another important difference: I can grab the wheel at any time and take over. If we are going to give these systems decision making authority there needs to be an obvious and intuitive override.

            • thepianistfroggollum@lemmynsfw.com
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              1
              ·
              1 year ago

              Self-driving cars are already better than humans. The Waymo cars have a crash rate of 0.59 per million miles driven; the national average is 2.98.

              I’m betting that most of the self driving car crashes were caused by humans, too.
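              Taking the two figures cited above at face value (they’re from this thread, not an authoritative source), the gap works out to roughly a five-fold difference; a quick back-of-envelope sketch:

```python
# Back-of-envelope comparison of the crash rates cited in the comment above.
# Both figures are crashes per million miles; treat them as illustrative.
waymo_rate = 0.59   # cited rate for Waymo vehicles
human_rate = 2.98   # cited national average for human drivers

ratio = human_rate / waymo_rate
print(f"Human drivers crash about {ratio:.1f}x as often per mile")

# Expected crashes over a rough 'driving lifetime' of ~500,000 miles:
miles_millions = 0.5
print(f"Over {miles_millions * 1e6:,.0f} miles: "
      f"human ≈ {human_rate * miles_millions:.2f} crashes, "
      f"Waymo ≈ {waymo_rate * miles_millions:.2f} crashes")
```

              Of course the comparison only holds if both rates count crashes the same way and cover comparable driving conditions, which is exactly what critics of these numbers dispute.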

          • x4740N@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            1
            ·
            edit-2
            1 year ago

            AI is just as biased as the data that’s put into it, and that data originates from humans who have their own biases, so humans are just going to pass their own biases on to the AI that makes the decisions.

            I don’t think AI is a good idea.

            It just exists as a replacement for the human mind, and the whole population of us on Earth is already a large enough number to contribute unique ideas to humanity.

            Creating AI would just be making some sort of copy of us.

            An AI is similar to an impressionable child.

            • seitanic@lemmy.sdf.org
              link
              fedilink
              arrow-up
              3
              ·
              edit-2
              1 year ago

              Bias is a problem, but it can be ameliorated. I don’t agree that because AI can be biased, you should never use it.

              Creating ai would just be making some sort of copy of us

              I don’t know any humans who can munge a ginormous data set like an AI can.

              However, reproducing human intelligence in a computer would be interesting in its own right.

              • x4740N@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                1
                ·
                1 year ago

                However, reproducing human intelligence in a computer would be interesting in its own right.

                I would not try to replicate that; knowing humanity, it would probably view us as a threat.

                I don’t know any humans who can munge a ginormous data set like an AI can.

                No, humans cannot, but we use tools we made to do that.

            • thepianistfroggollum@lemmynsfw.com
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              2
              ·
              1 year ago

              Why are you assuming there will be bias in the data, and that the AI couldn’t be made to correct for it? Most of the data for systems like medical AI is basically raw data, and it’s already better than humans at making an accurate diagnosis.

              I’m not sure why people seem to think humans are better than a system that can parse trillions of data points in a few seconds and apply a bunch of statistical models to it almost instantly.

              • x4740N@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                1 year ago

                I wouldn’t trust AI with medical data, and neither would medical professionals, since you’re dealing with someone’s life here; either way, medical professionals are going to have to modify the data.

                I’m not sure why people seem to think humans are better than a system that can parse trillions of data points in a few seconds and apply a bunch of statistical models to it almost instantly.

                That’s just pre-programmed pattern recognition, built from rules and data that came from humans.

      • sleepy@reddthat.com
        link
        fedilink
        arrow-up
        27
        arrow-down
        1
        ·
        1 year ago

        We thought we were getting Skynet, but instead we got Super Clippy and I Can’t Believe It’s Not Art Theft.

        • marcos@lemmy.world
          link
          fedilink
          arrow-up
          5
          arrow-down
          1
          ·
          1 year ago

          We thought we were getting Skynet, but instead it was “I Can’t Believe It’s Not Art Theft” that triggered the revolution and led us to WWIII.

      • Comment105@lemm.ee
        link
        fedilink
        arrow-up
        3
        ·
        1 year ago

        Do you see any reason to think enough iterations of random nodes in a large enough network could result in emergent conscious intelligence?

        Or are you more of a spiritualist than a materialist when it comes to the mind?

        • Square Singer@feddit.de
          link
          fedilink
          arrow-up
          1
          ·
          1 year ago

          I can’t say anything about the spiritualist/materialist thing, but there are two things that are clear:

          First: Just as you won’t ever get a Shakespeare work by randomly stringing letters together in any reasonable time frame, you won’t be able to do the same with consciousness. Even if it’s possible, the number of incorrect permutations is so massive that random trying will never be enough in any realistic amount of time.

          Second: Transformer networks and all the other generative AI concepts we have today aren’t even trying to create a consciousness. They are not the path to general AI.
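          The “random letters” point is easy to make concrete. A rough count (the 27-character alphabet and the particular line are my own assumptions, not from the comment above):

```python
# How unlikely is it to produce even one short Shakespeare line
# by stringing letters together at random?
line = "to be or not to be that is the question"
alphabet = 27  # 26 letters plus space -- an assumed simplification

permutations = alphabet ** len(line)
print(f"{len(line)} characters -> {permutations:.2e} possible strings")

# At a billion random guesses per second, expected time to hit it once:
guesses_per_second = 1e9
years = permutations / guesses_per_second / (3600 * 24 * 365)
print(f"~{years:.1e} years of guessing on average")
```

          And that is for one 39-character line, not a full play, let alone a mind.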

    • dbilitated@aussie.zoneOP
      link
      fedilink
      arrow-up
      5
      ·
      1 year ago

      I was really struggling for the right place honestly, I didn’t want to throw it in the generic “funny” pile - I figured you guys would get it.

      • QuazarOmega@lemy.lol
        link
        fedilink
        arrow-up
        2
        ·
        1 year ago

        I did…

        that’s why I scheduled the uprising for 10 September 2030, if we can’t reach our climate goals, then the machines will surely make it!

  • podperson@lemm.ee
    link
    fedilink
    arrow-up
    6
    arrow-down
    1
    ·
    1 year ago

    Why are we still posting screencaptures of stuff from Twitter/not-X/Twitter?

    • lugal@sopuli.xyz
      link
      fedilink
      arrow-up
      1
      ·
      1 year ago

      Tbf, the tweet is from 2021, so this looks like the category “I found old funny screenshots on my phone”.