• PacMan@sh.itjust.works · +1 · 2 hours ago

    Most CEO jobs and a majority of upper management, but those will be the last jobs to be automated.

  • hanke@feddit.nu · +76/−3 · 2 days ago
    1. You can’t have unbiased AI without unbiased training data.
    2. You can’t have unbiased training data without unbiased humans.
    3. Unbiased humans don’t exist.
    • Delphia@lemmy.world · +3 · 22 hours ago

      The best use case I can think of for “A.I.” is an absolute PRIVACY NIGHTMARE (so set that aside for a moment), but I think it’s the absolute best example.

      Traffic and traffic lights. Imagine every set of lights had cameras to track licence plates and cross-reference home addresses and travel times for regular trips, for literally every vehicle on the road. Add variable speed limit signs on major roads, and an unbiased “A.I.” whose one goal is to make everyone’s regular trips take as short a time as possible by controlling everything.

      If you can make 1,000,000 cars make their trips 5% more efficiently, that’s like 50,000 cars’ worth of emissions. Not to mention real-world time savings for people.
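      A quick back-of-the-envelope sketch of that arithmetic (the numbers are purely illustrative, taken from the claim above):

```python
# Back-of-the-envelope check: a 5% efficiency gain across
# 1,000,000 cars is roughly equivalent to taking 50,000 cars
# off the road entirely.
fleet_size = 1_000_000
efficiency_gain = 0.05  # each trip 5% more efficient

equivalent_cars_removed = int(fleet_size * efficiency_gain)
print(equivalent_cars_removed)  # 50000
```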

    • Zacryon@feddit.org · +12 · 2 days ago

      If you want AI agents that benefit humanity, you need biased training data and/or a bias-inducing training process, e.g. an objective like “improve humanity in an ethical manner” (don’t pin me down on that, it’s just a simple example).

      For example, even choosing a real environment over a tailored simulated one is already a bias in the training data; since you want to deploy the AI agent in a real setting, that’s a bias you want. Bias can be beneficial. The same goes for ethical reasoning: an AI agent won’t know what ethics are, or which are commonly preferred, unless you introduce such a bias.

    • Jerkface (any/all) · +10/−5 · edited · 2 days ago

      Show your work. Point 1 especially seems suspect, since many AIs are not trained on content like you’re imagining, but rather train themselves through experimentation and adversarial networks.

        • Jerkface (any/all) · +3/−4 · edited · 2 days ago

          Yes, and? If you write a bad fitness function, you get an AI that doesn’t do what you want. You’re just saying that human-written software can have bugs.
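          A toy sketch of that failure mode (everything here is hypothetical, just to illustrate a mis-specified fitness function):

```python
# Hypothetical example: a fitness function meant to reward short
# trips, but written so longer trips score higher. The optimizer
# faithfully maximizes the buggy objective and picks the worst
# route -- it does what we asked, not what we meant.
routes = {"highway": 12, "backstreets": 25, "scenic_detour": 40}  # minutes

def buggy_fitness(minutes: int) -> int:
    return minutes  # bug: should be -minutes (shorter = fitter)

best = max(routes, key=lambda r: buggy_fitness(routes[r]))
print(best)  # scenic_detour
```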

          • xthexder@l.sw0.com · +2 · 1 day ago

            You’re just saying, human-written software can have bugs.

            That’s pretty much exactly the point they’re making. Humans create the training data. Humans aren’t perfect, and therefore the AI training data cannot be perfect. The AI will always make mistakes and have biases as long as it’s being trained on human data.
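            A minimal sketch of that “bias in, bias out” point (the data and labels are made up):

```python
# A trivial "model" that predicts the majority label seen in
# training inherits whatever skew the human-collected data had.
from collections import Counter

# Made-up history: humans approved 9 of 10 past cases.
training_labels = ["approve"] * 9 + ["deny"]

def majority_model(labels):
    # Return the most frequent training label.
    return Counter(labels).most_common(1)[0][0]

print(majority_model(training_labels))  # approve, regardless of the new case
```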

  • Skullgrid@lemmy.world · +24 · 2 days ago

    Every sci fi work : oh no, the technology is bad

    Reality : the assholes using the tool are making it do bad things

    • eluvinar@szmer.info · +1 · 55 minutes ago

      There are always assholes, and they are always making it do bad things, so the distinction isn’t even there. If you don’t plan for assholes using the tool to try and do bad things, you’re making bad technology.

  • ssillyssadass@lemmy.world · +14 · 2 days ago

    I do think that the best government would be one run by AI.

    I do not think the AIs we currently have could run a government, though.

    • twice_hatch@midwest.social · +2 · 1 day ago

      It wouldn’t have the mandate of the people, so it wouldn’t last very long. I think sortition or a parliament could work, as long as it’s democratic. It’s still a huge leap from how the US does things.

  • amzd@lemmy.world · +10 · 1 day ago

    It’s weird to hold the belief that AI won’t oppress us while showing it that it’s fine to oppress animals as long as you’re smarter

  • OpenStars@piefed.social · +4 · 1 day ago

    That’s just what they want us to think! /s 😜

    Wait a minute… oh no no no no no no, that is what they want to sell us to think! (as they game the system and control the AI, no /s, no cap!)


  • Fuzzy_Dunlop@lemm.ee · +8/−5 · 2 days ago

    Absolutely.

    Every time I hear someone question the safety of self-driving cars, I know they’ve never been to Philadelphia or NJ.

    • Natanox@discuss.tchncs.de · +13 · 2 days ago

      I mean, the US really isn’t a good example for road safety. Even Germany has better drivers, and we like to drive 140-200 km/h. It’s a matter of good education, standards and regulations (as always).

      In the end, self-driving public transport is primarily the way the future of mobility should go, imho. Self-driving cars… as long as there is always a steering wheel for unexpected circumstances, or to move around backyards and such, it’ll probably be fine. Just don’t throw technical solutions at cultural problems and expect them to be fixed.

      • Zacryon@feddit.org · +5 · 2 days ago

        I mean, the US really isn’t a good example for road safety. Even Germany has better drivers, and we like to drive 140-200 km/h. It’s a matter of good education, standards and regulations (as always).

        I didn’t want to believe it either, but it seems to be factually correct, as per this wonderful Wikipedia list.

        • boonhet@lemm.ee · +5 · 2 days ago

          They’re so well regulated that they can safely drive on roads with no speed limit, whereas the US, for example, has pretty low limits and multiple times the fatal crashes (proportionally to population).

          • Natanox@discuss.tchncs.de · +1 · 1 day ago

            This. Of course it would be even better with limits on the Autobahn, and in fact a majority of people are in favour of such a change (especially if the limit is 130-140). Our governments are in the pocket of the car industry though; politicians act as if our whole freedom is endangered whenever it comes up (now where do we know that from? 🙃). Things can always be better, but A.I. definitely doesn’t improve an absolutely shitty mobility system like the one the US has (which is basically nothing but cars). If anything it will make things even more… off the rails. 😏

            • boonhet@lemm.ee · +1 · 1 day ago

              Yeah but then what’s the point of visiting Germany as a tourist slash petrolhead?

              Jokes aside, I’m of the opinion that existing freedoms are generally best left alone. Besides, Germany has a lower fatality rate than Estonia, and we have much lower speed limits: 120 on newly built separated highways in the summer (these might actually allow 120 in good conditions in winter too; they have digital signage), 110 on old separated highways until October or so, when they go and collect all the 110 signs and replace them with 100… and up to 90 everywhere else.

              There’s a good chance the limitless autobahn is actually part of what makes German numbers so good. It just requires stricter training and policing, stricter TÜV and for people to always check their mirrors before switching lanes. And just good lane discipline in general. You don’t get that in a lot of Europe. People switch lanes whenever because they’re going 10 over the speed limit and can’t possibly imagine someone else is going faster than them, potentially very close behind, in the other lane.

              PS, traffic fun fact: did you know that in Latvia, a two-lane undivided highway has up to four active lanes? There are the law-abiding-citizen lanes (known as shoulders in the west) and the BMW/Audi lanes in the middle, marked by the white lines.

              • eluvinar@szmer.info · +1 · 49 minutes ago

                There’s a good chance the limitless autobahn is actually part of what makes German numbers so good

                There’s a chance, but I don’t think you argued why it would be a good chance.

                It just requires stricter training and policing, stricter TÜV and for people to always check their mirrors before switching lanes.

                Changing lanes and overtaking are always among the riskiest moments. It’s always going to be much, much safer if everybody drives at the same speed than if you have to dodge people going 250 km/h for lulz. Even with stricter training and policing, you can still improve safety by introducing speed limits.

              • Natanox@discuss.tchncs.de · +1 · edited · 13 hours ago

                and for people to always check their mirrors before switching lanes.

                Oh, I wish. I don’t think your expectation of adapted behaviour holds at a societal level, given how many deaths could’ve been prevented by a speed limit… people drastically overestimate their abilities and underestimate speed and force of impact all the time. If traffic is slow right now, or someone missed their exit, people will still drive like maniacs. Not to mention there are other good reasons for a speed limit, environmental and economic: more speed means more CO₂ and more costs, individually and for society, plus less sane car purchases (with ICE cars you don’t immediately feel how much more you’re paying in money and convenience/time, but an EV will tell you immediately).

                I don’t think strict TÜV, training etc. are connected to the lack of a speed limit either. It’s more of a cultural thing in society, and of course down to politics and how well-off people are.

                I get your opinion about preserving existing freedoms. It’s always a balance; however, in this case I think the personal freedom to go fast is in no proportion to other people’s right to safe travel, and future generations’ right to well-being.

                • boonhet@lemm.ee · +1 · 13 hours ago

                  The emissions part I’ll have to agree on, but safety? Germany is literally among the safest nations to drive in. There’s not much lower you can go.

                  As ICE vehicles get phased out, people will naturally start driving fast less often. EVs force you to stop for much longer when you run out of charge. Driving 2x as fast means making roughly 4x as many stops (aerodynamic drag grows with the square of speed, so energy per km roughly quadruples), and the stops aren’t 3 minutes with an EV.

    • 9point6@lemmy.world · +4 · 2 days ago

      I mean TBF, they don’t trust the average person in New Jersey to handle a petrol pump—so much so that it’s legally prohibited.

      I’m not at all surprised that they shouldn’t be trusted with the vehicle itself, given that

  • DavidGarcia@feddit.nl · +5/−7 · 2 days ago

    AI judges make a lot of sense; that way everyone is treated equally, because every judge thinks literally the same way. No corrupt judges, no change in political bias between judges, no more lenient or strict judges who arbitrarily decide your fate. How you decide which AI model is your judge is a whole new can of worms, but it definitely has lots of upsides.

    • Fifrok@discuss.tchncs.de · +1 · 1 hour ago

      I mean this with the biggest offence possible: AI judges make no sense, at least with the current way of doing AI (LLMs). It’s been known for years that they amplify any bias in their training data. You are black? Higher chance of going to prison, and a longer sentence. Getting divorced and male? Your ass is NOT getting custody. Hell, even without that, the LLM might just hallucinate some crime that isn’t in the data for a case and give you a lifetime prison sentence. And if you somehow manage to avoid all that, what’s stopping somebody from just shadow-prompting it and getting the judgement they want? It would also be an easy target for corruption: the government wants their political rivals gone? Tweak the model so it’s just that bit harsher, or just a bit more aligned with some other interpretation of the law.

      Who would even choose the training data? The judges? Why would they? It would be better for them to sabotage it and keep their jobs. Some government agency, then? You don’t want to do that, or you’re going to find out why the separation of powers exists.

      Bad idea.

    • qaz@lemmy.world · +11 · 2 days ago

      Perhaps when we have real AGI, but I wouldn’t want an LLM to decide someone’s fate.

      • nickwitha_k (he/him)@lemmy.sdf.org · +4 · 1 day ago

        You have been found guilty of jaywalking. I hereby sentence you to 90 days of community service as unicorn titty sprinkles from Valhalla. May Chester have mercy on your handkerchief.

        • JudgeGPT, probably
        • qaz@lemmy.world · +4 · edited · 1 day ago

          You could get away with murder if your lawyer talked the charges out of its context token limit.

    • biggerbogboy@sh.itjust.works · +2 · 1 day ago

      And how would this be done? A proper legal system needs impartiality, yet an AI still varies as much as, or more than, a human judge. Not to mention the way it’s trained, the training data itself, whether it gets updated or not, how much it thinks, how it instructs juries and parties, etc.

      If, in theory, we had a perfect AI judge model, how should it be hosted? Self-hosted? That would be pretty expensive if it needs to keep up. It would have to be re-trained to recognise new legislation, or to understand removals or amendments of laws. And what about its security? If it needs to be swapped out often, it would need internet access to update itself, but that creates a risk of cyber attacks; so maybe it’s done through an intranet instead?

      This would require a lot of funding, infrastructural change, and tons of maintenance even in the best-case scenario where the model is perfect and already developed. There would need to be millions, if not billions, in funding to produce anything remotely of quality.

      All I see are downsides.