Disclaimer: I am asking this for a college class. I need to find an example of AI being used unethically. I figured this would be one of the best places to ask. Maybe this could also serve as a good post to collect examples.

So what have you got?

  • quickhatch@lemm.ee · 31 points · 2 days ago

    I’m a university prof in a medical science field. We hired a new tenure-line prof to teach introductory musculoskeletal anatomy, to prepare our students for the more rigorous, full-systems anatomy that’s taught by a different professor. We learned (too late, after a year) that they used AI to generate their lecture slides and never questioned or evaluated the content. An entire cohort of students failed the subsequent anatomy course as a result.

    But in my mind, what’s worse is that the administration did nothing to correct the prof, and continues to push a pro-AI narrative so that we invest less time and fewer resources in teaching.

  • Flamangoman@leminal.space · 29 points · 2 days ago

    Not exactly AI being used so much as developed, but Meta’s torrenting 80 TB of books and not seeding is egregious.

    • haverholm@kbin.earth · 5 points · 2 days ago

      The fact that so much training data is scraped without consent makes a lot of the popular LLMs unethical already in their development, yeah. And that in turn makes using the models unethical.

      • El Barto@lemmy.world · 1 point · edited 1 day ago

        Using the models unethical… or fair game?

        Edit: but I share the sentiment. I avoid using AI like the plague, though mainly because of the environmental impact.

    • Greg Clarke · +5/−1 · 2 days ago

      > Given the environmental costs, the social costs, and the fraud it entails, using it at all is pretty much unethical.

      There are loads of examples of AI being used in socially positive ways. AI doesn’t just mean ChatGPT.

    • ArcRay@lemmy.dbzer0.com (OP) · +6/−6 · 2 days ago

      Excellent point. I think there are some legitimate uses of AI, especially in image processing for scientific applications.

      But for the most part, almost every common use is unethical, whether it’s the energy demands (and their contribution to climate change), the theft of intellectual property, or the spread of misinformation, among so much else. Overall, it’s a huge net negative on society.

      I remember hearing about the lawyer one. IIRC, ChatGPT was citing laws that didn’t even exist. How do you not check what it wrote? You wouldn’t blindly accept the predictive text from your phone’s keyboard and autocorrect, so why would you blindly trust a fancier autocorrect?

      • Greg Clarke · +4/−1 · 2 days ago

        > But for the most part, almost every common use is unethical.

        The most common uses of AI are not in the headlines. Your email spam filter is AI.
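
        To illustrate the point: a classic spam filter is a trained classifier, which is machine learning by any definition. A toy naive Bayes sketch on made-up data (illustrative only; real filters use far richer features and training sets):

        ```python
        # Toy naive Bayes spam scorer: the classic "AI" behind spam filters.
        # Made-up training data; illustrative only, not a production filter.
        import math
        from collections import Counter

        spam_docs = ["win money now", "free money offer"]
        ham_docs = ["meeting at noon", "lunch at noon tomorrow"]

        def word_counts(docs):
            return Counter(w for d in docs for w in d.split())

        spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)

        def spam_score(text):
            # Sum of log-likelihood ratios with add-one smoothing;
            # > 0 leans spam, < 0 leans ham.
            score = 0.0
            for w in text.split():
                p_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + len(spam_counts) + 1)
                p_ham = (ham_counts[w] + 1) / (sum(ham_counts.values()) + len(ham_counts) + 1)
                score += math.log(p_spam / p_ham)
            return score

        print(spam_score("free money"))    # positive: leans spam
        print(spam_score("noon meeting"))  # negative: leans ham
        ```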

          • Greg Clarke · 1 point · 2 days ago

            You should be accurate with your language if you’re going to claim a whole industry is unethical. It’s also important to make a distinction between the technology and its implementation: LLMs can be trained and used in ethical ways.

            • hendrik@palaver.p3x.de · 2 points · edited 2 days ago

              I’m not really sure I want to agree here. We’re currently in the middle of a hype wave around LLMs, so that’s what most people mean when they talk about “AI”. Of course that’s wrong. I tend to use the term “machine learning” when I don’t want to confuse people with a tainted term.

              And I must say, most (not all) machine learning is done in a problematic way. Tesla cars have been banned from company parking lots, your Alexa saves your private conversations in the cloud, and the algorithms that power the web weigh on society and spy on me. The successful companies are built on copyright theft or their users’ personal data. None of that is really transparent to anyone, and oftentimes it’s opt-out, if we get a choice at all. But of course there are legitimate interests. I believe a dishwasher or a spam filter would be trained ethically, and probably also the image detection for medical applications.

              • Greg Clarke · 2 points · 2 days ago

                I 100% agree that big tech is using AI in very unethical ways. And this isn’t even new, the chairman of the U.N. Independent International Fact-Finding Mission on Myanmar stated that Facebook played a “determining role” in the Rohingya genocide. And then recently Zuck actually rolled back the programs that were meant to prevent this in the future.

                • hendrik@palaver.p3x.de · 2 points · edited 1 day ago

                  I think quite a few of our current societal issues (in western societies as well) come from algorithms and filter bubbles. I think that’s the main contributing factor to why people can’t talk to each other anymore and everyone gets radicalized into the extremes. In the broader picture, the surrounding attention economy fuels populists and does away with any factual view of the world.

                  It’s not AI’s fault, but it’s machine learning that powers these platforms and decides who gets attention and who gets confined to which filter bubble. I think that’s super unhealthy for us. But sure, it’s more the prevailing internet business model to blame here, not directly the software that powers it. I have to look up what happened in Rohingya… We get a few other issues with social media as well, which aren’t directly linked to algorithms.

                  We’ll see how the LLMs fit into that. I’m not sure how they’re going to change the world, but everyone seems to agree this is very disruptive technology.

  • humanspiral · +3/−1 · edited 1 day ago

    Once you are so quick to offer it for military purposes, putting profit maximization over any ethical concerns, it becomes not just warmongering evil maximization and battlefield domination that encourages more warmongering evil; you also need to maximize AI’s disinformation of the public, the way media is used now, to support that warmongering evil maximization.

    Any humanist principles, or ethics that promote humanism, cannot coexist with warmongering maximalism, and profit prefers the latter. Learning that your views might not align with warmongering maximalism may be used for voter suppression, extending to murder by exploding electronics. AI/LLM identification of insufficient loyalty to warmongering and genocide is a key tool in ensuring agenda maximalism.

    • phanto · 7 points · 2 days ago

      I’m a month away from my IT diploma. Even the teachers are feeding us AI slop at this point.

      They gave up trying to get the students to stop at the end of first year. Protip: don’t hire a new IT grad; they don’t know anything ChatGPT doesn’t know.

      • Admiral Patrick@dubvee.org · 8 points · edited 2 days ago

        I interviewed a candidate recently, and they basically lost all consideration when I asked them a basic sysadmin question and they replied, “That’s kind of one of those basic commands I just ask ChatGPT.”

        The basic sysadmin question was: “Name one way on a Linux server to check the free disk space”.

        Sadly, I had to continue the interview, but I didn’t even bother writing down any of the candidate’s responses after that. The equivalent would have been asking them “what’s 2+2?” and watching them break out a calculator. Instant fail.
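
        For what it’s worth, the expected answer is a one-liner like `df -h`. Even from a script it’s trivial; here’s a minimal sketch of the same check in Python, using only the standard library:

        ```python
        # Minimal sketch: report free disk space for a path.
        # shutil.disk_usage returns (total, used, free) in bytes.
        import shutil

        total, used, free = shutil.disk_usage("/")
        print(f"free: {free / 2**30:.1f} GiB of {total / 2**30:.1f} GiB")
        ```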

  • carl_dungeon@lemmy.world · 10 points · 2 days ago

    Mass consumption of copyrighted works for training, while still considering individuals who do the same thing to be criminals.

    • ArcRay@lemmy.dbzer0.com (OP) · 6 points · 2 days ago

      It felt like the right way to approach the topic. AI has become so pervasive, I’m not even sure I could search for it without simultaneously using AI.