Two authors sued OpenAI, accusing the company of violating copyright law. They say OpenAI used their work to train ChatGPT without their consent.

  • jecxjo@midwest.social
    1 year ago

    Hmm that is an interesting take.

    The movie summary question is interesting. Most people, I suspect, have never asked ChatGPT for its own personal views on the subject matter. And asking for a movie plot summary doesn’t inherently require the one giving it to have experienced the movie. If it did, pretty much every paper written in a history class would fall under this category: no high schooler today went to war, but they can write about it because they are synthesizing others’ writings on the topic. Granted, we know this to be the case, and students are required to cite their sources even when not directly quoting them…would that resolve the first problem?

    If we specifically asked ChatGPT “Can you give me your personal critique of the movie The Matrix?” and it returned something along the lines of “Well, I cannot view movies and only generate responses based on the writings of others who have seen it,” would that make the usage clearer? And if having your own critical analysis is the requirement, there were a handful of kids from my high school who failed at that task too, and did so regularly.

    I like your college example, as that is getting closer to a definition, but I think we need to find a very explicit way of describing what is happening. I agree current AI can’t do any of this, so we are very much talking about future tech.

    With the idea of extending material, do we have a good enough understanding of how humans do it? I think it’s interesting when we look at computer neural networks. One of the first ones we build in a programming class is an AI that can read single-digit, handwritten numbers. What eventually happens is that the system generates a huge, unreadable equation to convert bits of an image into a statistically likely answer. When you dissect it you’d think, “Oh, to see the number 9 the equation must look for a round top and a straight part on the right side below it.” And that assumption would be wrong. Instead we find it’s dozens of specific areas of the image that you and I wouldn’t necessarily associate with a “9”.
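    To make that concrete, here’s a toy sketch (hypothetical, pure Python, not any real classroom assignment) of the simplest possible version: a single-layer perceptron that learns to tell a 3×5 bitmap “0” from a bitmap “1”. Everything it “knows” ends up as a flat list of per-pixel weights; nothing in those numbers says “round top” or “straight stroke”.

    ```python
    # Two tiny 3x5 bitmaps, flattened row by row.
    ZERO = [1,1,1,
            1,0,1,
            1,0,1,
            1,0,1,
            1,1,1]

    ONE  = [0,1,0,
            0,1,0,
            0,1,0,
            0,1,0,
            0,1,0]

    def train(samples, labels, epochs=50, lr=0.1):
        """Classic perceptron rule: nudge each pixel's weight by the error."""
        w = [0.0] * len(samples[0])
        b = 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = y - pred
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    def predict(w, b, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    w, b = train([ZERO, ONE], [0, 1])
    print(predict(w, b, ZERO))  # 0
    print(predict(w, b, ONE))   # 1
    # The "meaning" lives entirely in w: individual pixel weights that
    # statistically separate the classes, not human-style features.
    ```

    Even at this scale, inspecting `w` shows isolated pixels doing the work, which is the unreadable-equation effect described above, just miniaturized.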

    But then, if we start to think about our own brains, do we actually process reading the way we think we do? Maybe for individual characters. But we know that when we read words we focus specifically on the first and last characters, the length of the word, and any variation in the height of the text. We can literally scramble the letters in the middle and still read the text.
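    You can generate that kind of scrambled-but-readable text yourself. A tiny sketch (deterministic for simplicity: it just reverses each word’s interior rather than shuffling randomly):

    ```python
    def scramble(word):
        """Keep the first and last letter, reverse everything in between."""
        if len(word) <= 3:
            return word
        return word[0] + word[-2:0:-1] + word[-1]

    sentence = "reading scrambled words is surprisingly easy"
    print(" ".join(scramble(w) for w in sentence.split()))
    ```

    The output is still legible at a glance, which is the point: word shape and the anchor letters carry most of the signal.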

    The reason I bring this up is that we often focus on how humans can transform data using past history, but we often fail to explain how this works. When you ask ChatGPT about a vaguer concept, it does pull from others’ works, but it also builds a statistical analysis of human speech: it literally figures out the most likely next word in the given sentence. The way this calculation occurs is directly related to the material provided, the order in which it was provided, the weights programmed into it to make decisions, etc. I’d ask how this is fundamentally different from what humans do.
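    The crudest possible version of “figure out the most likely next word” is a bigram count over a corpus. Real models learn weights over vast amounts of text instead of counting pairs, but this toy sketch (my own illustration, with a made-up corpus) shows the same statistical principle:

    ```python
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count, for each word, what follows it and how often.
    nxt = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        nxt[a][b] += 1

    def most_likely_next(word):
        """Return the statistically most frequent successor of `word`."""
        return nxt[word].most_common(1)[0][0]

    print(most_likely_next("the"))  # cat -- follows "the" twice vs. once each for mat/fish
    ```

    The prediction depends entirely on what text went in and in what order, which is exactly the dependence on material and weighting described above.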

    I’m a big fan of students learning a huge portion of the same literature in high school. It creates a common dialog we can all use to understand concepts. I, in my 40s, have often referenced a character, event, statement, or theme from classic literature and have noticed that often only those older than me get it. In just a few words I’ve conveyed a huge amount of information, and that only works when the other side of the conversation gets the reference. I’m wondering: if at some point AI is able to do this type of analysis, would it be considered transformative?