• Sir_Kevin@lemmy.dbzer0.com · 1 year ago

    I think when people say it’s only predicting the next word, that’s a bit of an oversimplification meant to convey that the AI is not actually intelligent. It’s more or less stringing words together in a way that seems plausible.

  • kakes@sh.itjust.works · 1 year ago

    They’re very good at predicting the next word, so their choice of “a” or “an” is likely to make sense in context. But you can absolutely ask a GPT to continue a sentence that appears to use the wrong word.

    For instance, I just tried giving a GPT this to start with:

    My favorite fruit grows on trees, is red, and can be made into pies. It is a

    And the GPT finished it with:

    delicious and versatile fruit called apples!

    So as you can see, language is malleable enough to make sense of most inputs. Though occasionally, a GPT will get caught up in a nonsensical phrase due to this behavior.
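
    To reproduce this yourself, here’s a minimal sketch using Hugging Face’s transformers library with GPT-2 (my own choice of model and settings for illustration; the comment above doesn’t say what was actually used):

        # Continue a prompt with a small local model. GPT-2 and the
        # sampling settings are assumptions for illustration only.
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")

        prompt = ("My favorite fruit grows on trees, is red, "
                  "and can be made into pies. It is a")
        result = generator(prompt, max_new_tokens=15, do_sample=True)

        print(result[0]["generated_text"])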

  • GarrettBird@lemmy.world · 1 year ago

    To be overly simple about it, the LLM uses statistics and a bit of controlled RNG to pick its words. Words in the LLM have links to each other with statistical probabilities attached. If you took the sentences “I fed a peanut to an elephant” and “I fed a peanut to a elephant” and asked 100 people which is more correct, some percentage would favor one over the other. Now, an LLM isn’t choosing with weighted coin flips, but rather picking the most likely next word (most of the time). So if those 100 people chose “an elephant” over “a elephant” 65% of the time in the training data, the LLM will be inclined to use “an elephant.” However, it’s important to know that the words around “an elephant” will also bias its choice of ‘an’ for the word ‘elephant’.

    Really, it’s largely based on the training data and the contexts in which ‘a’ and ‘an’ are used. Or in other words, the LLM knows because people figured it out for it. People did all the thinking; LLMs just run statistics on our bottled phrases to know when to use which. Of course, because the data comes from people, the LLM will sometimes get it wrong, roughly as often as the people in its training data did.
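
    A toy sketch of that selection step (the words and probabilities below are invented for illustration; a real LLM scores its entire vocabulary at every step):

        # Toy next-word selection: greedy pick vs. weighted sampling.
        # The candidates and probabilities are made up for illustration.
        import random

        context = "I fed a peanut to"
        candidates = {"an": 0.65, "a": 0.30, "the": 0.05}

        # "Most of the time" behavior: take the single most likely word.
        greedy = max(candidates, key=candidates.get)

        # The "controlled RNG" behavior: sample in proportion to probability.
        sampled = random.choices(list(candidates),
                                 weights=list(candidates.values()))[0]

        print(context, greedy)   # almost always "an"
        print(context, sampled)  # usually "an", occasionally "a" or "the"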

      • GarrettBird@lemmy.world · 1 year ago

        Well, my example of the word ‘elephant’ has the same property as ‘herb’ where the use of ‘a’ or ‘an’ can depend on who you ask. I chose my example trying to anticipate this exact question, and I believe I gave you an answer.

        Let me put it this way: it depends… It depends on the data the LLM (ChatGPT, for example) has been given to train its output. If we have an LLM dataset that uses only text from people in the United Kingdom, the data will favor “a herb,” as the ‘h’ is pronounced, whereas data from the United States will favor the other way, as the ‘h’ is usually silent when spoken aloud.

        As a fairly general rule, people use the article “an” before a vowel sound (like a silent “h”) and “a” before a consonant sound (like a pronounced, or aspirated, “h”). Usually the data gathered is from multiple English-speaking countries, so both “an herb” and “a herb” will exist in the training data, and from there the LLM will favor picking the one that shows up more often (as the data will be biased).

        Just for fun, I asked the LLM running on my local machine. Prompt: “Fill in the blank: ‘It is _ herb’” Response: “It is an herb.”
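
        That frequency argument is easy to picture with a toy count over a corpus (the mini-corpus here is invented for illustration):

            # The model "knows" which article to use only because one form
            # shows up more often in its training text. Made-up corpus:
            corpus = [
                "basil is an herb used in many dishes",   # American-style usage
                "thyme is an herb that dries well",       # American-style usage
                "rosemary is a herb with woody stems",    # British-style usage
            ]

            counts = {"an herb": 0, "a herb": 0}
            for sentence in corpus:
                words = sentence.split()
                for first, second in zip(words, words[1:]):
                    phrase = f"{first} {second}"
                    if phrase in counts:
                        counts[phrase] += 1

            print(counts)                       # {'an herb': 2, 'a herb': 1}
            print(max(counts, key=counts.get))  # the form the model will favor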

  • howrar · 1 year ago

    If it generates “I ate” and the next word can be “a” or “an”, then it will just generate one or the other based on how often each appears after “I ate”. It hasn’t decided by this point what it has eaten. After it has generated the next token, for example “I ate an”, its next token is now limited to food items that fit the grammatical structure of the sentence so far. Now it can decide: did I eat an apple? An orange? An awesome steak? Etc.
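
    A sketch of that two-step process (the word lists and probabilities are made up for illustration):

        # Toy two-step generation: the article is picked first, purely by
        # frequency, and only then does it constrain the food word.
        import random

        # Step 1: after "I ate", pick an article by how often each follows it.
        article = random.choices(["a", "an"], weights=[0.5, 0.5])[0]

        # Step 2: the chosen article limits which continuations fit.
        foods = {
            "a":  {"sandwich": 0.6, "burger": 0.3, "peach": 0.1},
            "an": {"apple": 0.5, "orange": 0.3, "awesome steak": 0.2},
        }
        options = foods[article]
        food = random.choices(list(options), weights=list(options.values()))[0]

        print(f"I ate {article} {food}")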

  • felixwhynot@lemmy.world · 1 year ago

    I thought they’re not so much choosing words as chunks of words (tokens). So it would be “a” but then “-n other” or whatever. Maybe?
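
    One way to check is to run text through a tokenizer yourself, e.g. the GPT-2 encoding from OpenAI’s tiktoken library (one tokenizer among many; other models split the same text differently):

        # Show how a GPT-style tokenizer chunks text into subword tokens.
        import tiktoken

        enc = tiktoken.get_encoding("gpt2")

        for text in ["an elephant", "another"]:
            token_ids = enc.encode(text)
            pieces = [enc.decode([t]) for t in token_ids]
            print(f"{text!r} -> {pieces}")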

  • naevaTheRat@lemmy.dbzer0.com · 1 year ago

    GPT creates plausible-looking sentences; it has no concept of truth or anything like that. Since, if you have an “an”, it’s overwhelmingly likely that the next word will begin with a vowel, it will choose one that plausibly fits the corpus of text that came before. Likewise for an “a”.

    There is no compromise in ability. It doesn’t have anything to “say” or whatever. What it produces is more like nonsense poetry than speech.