Comedian and author Sarah Silverman, along with authors Christopher Golden and Richard Kadrey, is suing OpenAI and Meta in separate US District Court suits over dual claims of copyright infringement.

  • Madison_rogue@kbin.socialOP · 23 points · 1 year ago

    ThePile, which was assembled by a company called EleutherAI. ThePile, the complaint points out, was described in an EleutherAI paper as being put together from “a copy of the contents of the Bibliotik private tracker.” Bibliotik and the other “shadow libraries” listed, says the lawsuit, are “flagrantly illegal.”

    I think this is where the crux of the case lies, since the article mentions these books are only available illegally through torrents.

    • Itty53@kbin.social · 6 points · 1 year ago

      This is starting to touch on the root of why they keep calling this “AI”, “training”, etc. They aren’t doing this strictly for marketing; they’re attempting to skew public opinion. These companies know intimately how to do that.

      They’re going to argue that if torrents are legal for educational purposes (i.e. the loophole that all trackers use), and they’re just “training” an “AI”, then they’re just engaging in education. And an ignorant public might buy it.

      These kinds of cases will be viewed as landmark cases in the future, and honestly I don’t have high hopes. The history of these companies is engineer first, excuse the lack of ethics later. Or the philosophy of “it’s easier to apologize than to ask”.

      • dandan@kbin.social · 30 points · 1 year ago

        It’s the de facto term for how we fit a statistical model to data, unrelated to any copyright concepts. I’m pretty sure we called it “training” back in 1997 when I was doing neural networks at uni, and it had probably been in use well before then too.
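
        To make that concrete, here’s a toy sketch of what “training” means in this sense (purely illustrative Python I’m making up here, not anyone’s actual pipeline): you fit a model to data by repeatedly nudging its parameters to shrink the error.

        ```python
        # Toy example: "training" = fitting a statistical model to data.
        # Here we fit y = w*x + b to noisy points by gradient descent on squared error.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, size=100)
        y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)  # noisy line: slope 3, intercept 0.5

        w, b = 0.0, 0.0   # parameters start from scratch
        lr = 0.1          # learning rate

        for _ in range(500):              # the "training" loop
            err = (w * x + b) - y
            w -= lr * np.mean(err * x)    # step down the error gradient w.r.t. w
            b -= lr * np.mean(err)        # step down the error gradient w.r.t. b

        print(w, b)  # ends up close to 3.0 and 0.5
        ```

        Scale that idea up to billions of parameters and you have the gist of what the big models do; the word itself says nothing about where the data came from.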

        Neural nets are based on the concept of Hebbian learning (from the 1940s), because they are trying to mimic how a biological neural network learns.
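
        The Hebbian idea is usually summarized as “cells that fire together wire together”: a connection gets stronger in proportion to how often the units on both ends are active at the same time. A rough sketch (again, just my own toy illustration, with made-up patterns and learning rate):

        ```python
        # Toy Hebbian associator: connections between co-active units are strengthened.
        import numpy as np

        w = np.zeros((2, 4))  # weights from 4 input units to 2 output units
        eta = 0.1             # learning rate

        # Two paired activity patterns to associate.
        patterns = [
            (np.array([1.0, 1.0, 0.0, 0.0]), np.array([1.0, 0.0])),  # pattern A with output 0
            (np.array([0.0, 0.0, 1.0, 1.0]), np.array([0.0, 1.0])),  # pattern B with output 1
        ]

        for _ in range(20):
            for x, y in patterns:
                w += eta * np.outer(y, x)  # Hebb's rule: strengthen weights where pre and post fire together

        # Each pattern now mostly drives the output it co-occurred with.
        for x, _ in patterns:
            print(w @ x)
        ```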

        This concept of training/learning has persisted because it’s a good analogy of what we are trying to do with these statistical models, even if they aren’t strictly neural networks.

        • Saganastic@kbin.social · 7 points · 1 year ago

          This concept of training/learning has persisted because it’s a good analogy of what we are trying to do with these statistical models, even if they aren’t strictly neural networks.

          LLMs are indeed neural networks.

        • Madison_rogue@kbin.socialOP · 1 point · 1 year ago

          TBH I’m not really familiar with how AI has developed over the years. Wikipedia says that ChatGPT is proprietary, which leads me to believe it hasn’t been developed with research grants or government involvement. Is that the case? Can a company legally develop an AI by obtaining its learning material through illegal means, which it sounds as if OpenAI and Meta did through Bibliotik?

          I can’t see how this doesn’t have some legal ramification, but IANAL.

          • Rabbithole@kbin.social · 6 points · 1 year ago

            OpenAI is called that for a reason. They absolutely were a non-profit research org initially, so would have been eligible for research grants, etc. They would probably have gotten a pass on using the torrents too, for the same reason.

            They moved to a private, for-profit model later, after they’d built their AIs and wanted to start selling them as a service. How the hell all of that plays out for the company they are now is anyone’s guess.