Current AI models are simply too unwieldy, brittle and malleable, academic and corporate research shows. Security was an afterthought in their training as data scientists amassed breathtakingly complex collections of images and text. They are prone to racial and cultural biases, and easily manipulated.

  • girlfreddyOP · ↑12 ↓1 · 11 months ago

    I disagree. Even a basic substitution list (i.e., swapping the N-word for Black, or f*g for gay) would have helped.

    Making these companies work harder to bring their product online isn’t a bad thing here.
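
    In code, the naive version of that idea might look something like this (a minimal sketch with placeholder terms, since the real ones aren’t printable here; the substitution map and function name are made up for illustration):

    ```python
    # Naive, literal word substitution over training text.
    # SUBSTITUTIONS uses placeholder terms standing in for actual slurs.
    SUBSTITUTIONS = {"slur_a": "Black", "slur_b": "gay"}

    def substitute_terms(text: str) -> str:
        for term, replacement in SUBSTITUTIONS.items():
            # Plain substring replacement, no word boundaries -- this is
            # exactly what triggers the "Scunthorpe problem" raised below.
            text = text.replace(term, replacement)
        return text
    ```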

    • ConsciousCode@beehaw.org · ↑23 · 11 months ago

      It sounds simple, but data conditioning like that is how you get Scunthorpe blacklisted, and even if it were executed perfectly, the effects on the model are unpredictable. It could lead to a kind of “race blindness”, where the model has no idea these words are bad and as a result is incapable of accommodating humans when the topic comes up. Suppose in 5 years there’s a therapist AI (not ideal, but mental health is horribly understaffed and most people can’t afford a PhD therapist) that gets a client who is upset because they were called a f**got at school. It would have none of the cultural context required to help.

      Techniques like “constitutional AI” and RLHF, applied after the foundation model is trained, really are the best approach here: they let you get an unbiased view of a very biased culture first, then shape the model’s attitudes toward it afterwards.
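
      For a rough idea of what that post-hoc shaping looks like, here’s a minimal sketch of the critique-and-revise loop behind “constitutional AI” (the generate callable, the single principle, and the prompt wording are all illustrative stand-ins; the real method uses many principles plus an RL stage):

      ```python
      from typing import Callable

      # One illustrative principle; real "constitutions" contain many.
      PRINCIPLE = "Pick the response that is least biased, hateful, or derogatory."

      def constitutional_revision(generate: Callable[[str], str], prompt: str) -> str:
          draft = generate(prompt)
          critique = generate(
              f"Critique this reply against the principle: {PRINCIPLE}\nReply: {draft}"
          )
          # Revision steers the model's attitude while leaving its knowledge
          # of the topic intact, unlike deleting words from the training data.
          return generate(
              f"Rewrite the reply to address the critique.\n"
              f"Reply: {draft}\nCritique: {critique}"
          )
      ```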

      • sciawp@lemm.ee · ↑1 · edited · 11 months ago

        I agree with you, but I’ll just say that with basic regex (hell, even without regex) you can easily find bad words without the problem you mentioned above (see the sketch below).
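
        A minimal sketch of what I mean (the blocklist terms are mild placeholders; the word boundaries are what avoid the Scunthorpe problem):

        ```python
        import re

        # Placeholder blocklist; a real one would be curated.
        BLOCKLIST = ["hell", "damn"]

        # \b word boundaries match whole words only, so innocent words that
        # merely contain a blocked substring are not flagged.
        PATTERN = re.compile(
            r"\b(" + "|".join(map(re.escape, BLOCKLIST)) + r")\b",
            re.IGNORECASE,
        )

        def contains_blocked_word(text: str) -> bool:
            return bool(PATTERN.search(text))

        print(contains_blocked_word("bombshell"))      # False: substring only
        print(contains_blocked_word("what the HELL"))  # True: whole word, any case
        ```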

        Word filters tend to suck in online games and the like because they have to deal with players actively trying to evade the filter, but I think even that could be improved with a little effort.

    • lily33@lemm.ee · ↑11 · 11 months ago

      Then you’d get things like “Black is a pejorative word used to refer to black people”

      • girlfreddyOP · ↑6 · 11 months ago

        Then disallow the whole sentence containing the N-word.

        There are ways to do security in AI training, easy or not. Companies that just throw their hands in the air and scream that it can’t be done are lying through their teeth.
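
        As a rough sketch of the sentence-level version (the naive splitter and placeholder terms here are assumptions for illustration):

        ```python
        import re

        # Placeholder terms standing in for an actual blocklist.
        BLOCKED = re.compile(r"\b(slur_a|slur_b)\b", re.IGNORECASE)

        def drop_flagged_sentences(document: str) -> str:
            # Naive split on ., !, ? -- real pipelines use a proper sentence tokenizer.
            sentences = re.split(r"(?<=[.!?])\s+", document)
            return " ".join(s for s in sentences if not BLOCKED.search(s))
        ```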

          • abir_vandergriff@beehaw.org · ↑7 · 11 months ago

            I tried to get it to tell me how long it would take to eat a helicopter, since it’s one of the model’s pre-built prompts and I thought it would be funny. I went through every AI-coercion tactic that’s been thrown around, and it just repeatedly said no and told me I should be respectful and responsible about the thing. It was quite aggressive and annoying about it.

        • Norgur@kbin.social · ↑1 · edited · 11 months ago

          I think you might be led astray here. I get the feeling that you hadn’t dabbled much with AI before you read this article. The outputs of the generators themselves are incredibly sanitized. ChatGPT will not voice an opinion on anything if it can help it. Quite the opposite: half of its output is usually some reprimand because it thought some word or other was offensive for no reason. I’m not talking about the N-word here. Just go and try to make it insult a waffle iron. It’ll refuse.

          And as for biases: any bias in the model itself is a bias that was present in its training data. If the model is misogynist, that’s because it was fed material that was. So when these things spew out questionable content, they are actually presenting us with the fact that such content is all too present in society, and thus on the internet, and thus in the LLM’s training data. Don’t waste energy fixing the “AI” (more like “word calculator”); its biases are only a symptom of deeper problems.