• IninewCrow · 1 year ago

    It’s getting weird out there.

    I deal with a few bureaucrats and office workers. Up until about a year ago, their emails were pretty simple and they sounded a lot like someone just tapped them out while on the toilet.

    Now they sound robotic and machine-like: very polite, to the point, concise, and very professional. A year ago these people would just ask a vague question and not really know what to say.

    Now they’ve automatically become professional writers sending me a polite note.

    It’s good … but it just makes me wonder where all this is going.

    It’s putting lipstick on a pig … no matter how much you dress it up, it’s still a pig that likes to eat garbage and cover itself in mud.

    • AllonzeeLV@lemmy.world · 1 year ago

      Now that humanity has found yet another way to pass the buck, it’ll be interesting to see the diminishing returns of LLMs as they begin to feed more and more on derivative content made by other LLMs.

      • jacksilver@lemmy.world · 1 year ago

        It’s interesting, because people say they can only get better, but I’m not sure that’s true. What happens when most new text data is being generated by LLMs, or we accidentally start labeling images created through diffusion as real? It seems like there is a potential for these models to implode.

      • livus@kbin.social · 1 year ago

        Even before the LLMs, back when I was on Reddit, I would sometimes see conversations that were just 3 or 4 bots replying to each other with scraped content (usually in the personal advice subs) and getting upvotes.

        I only noticed because I used to hunt bots as a hobby.

  • pinkdrunkenelephants@lemmy.world · 1 year ago

    It’s not education anymore if people are doing that.

    They are turning education into the pointless rigmarole they accuse it of being, because they don’t get that education is more important than feeding oneself. Survival is easy; animals do that. Education is about humanizing you and connecting you with the universe you live in. It’s about something higher and better than mere survival. It’s about actually living.

    But tell that to the troglodytes using ChatGPT to think for them, who truly only care about themselves.

    • Richard@lemmy.world · 1 year ago

      While I agree with this sentiment, it is important to note that many tasks in school do not enrich the student as a person or build their capabilities, but are mundane and/or repetitive failures of a badly designed curriculum. I can absolutely understand why students would want to automate such exercises.

      • pinkdrunkenelephants@lemmy.world · 1 year ago

        No, they do, or I should say did. Art and music are vital for motor skills and brain development. I was not allowed to take music classes, and I live with the sadness of not being able to play music despite wanting to, knowing that because of my age it’s largely too late.

        A child’s brain is too malleable for you to justify not exposing them to a variety of skill sets, even ones they don’t like as a kid. Adults are supposed to know that letting kids specialize (more accurately, do the bare minimum so they can get back to playing video games) is a bad idea.

        I don’t care if you think learning is mundane. Even if it were, you’d have to do it anyway. Life isn’t always sunshine and roses.

  • Ilovethebomb@lemm.ee · 1 year ago

    ChatGPT, and the many other similar systems, are unable to conceive of anything new or original; they merely imitate what has already come before.

    Students who use it to write essays are shooting themselves in the foot, because, chances are, they can’t think for themselves either.

    • balderdash@lemmy.zip (OP) · 1 year ago

      I think of ChatGPT like a sometimes-inaccurate calculator. There may be some legitimate uses for the technology, but it’s still nice to know how to multiply numbers without it.

    • Restaldt@lemmy.world · 1 year ago

      ChatGPT, and the many other similar systems, are unable to conceive of anything new or original; they merely imitate what has already come before.

      This. God do I hate that LLMs are called generative AI.

      • lad@programming.dev · 1 year ago

        They are generative in the sense of generating output, nothing more. The “intelligence” part of AI was a fluke of naming; it should’ve been called something else until it reaches real intelligence (what is now dubbed AGI).

    • Hexarei@programming.dev · 1 year ago

      Except they can absolutely come up with new things; their responses aren’t just cut-and-pasted bits of previous text snippets. They are generated based on a neural network’s idea of what the most likely next token is, and tokens are often fragments of words. There’s a reason you can have it do arbitrary things with text: it’s doing something slightly deeper than just imitation.
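
      Purely as an illustration of that next-token idea, here is a toy Python sketch of the generation loop. It is not any real model or API; fake_model and its made-up probabilities are stand-ins for a neural network that would score a large vocabulary of sub-word tokens.

      import random

      def fake_model(context: str) -> dict[str, float]:
          # Stand-in for a neural network: returns made-up probabilities
          # for a handful of candidate next tokens (often word fragments).
          return {" imit": 0.2, "ation": 0.1, " gener": 0.4, "ated": 0.3}

      def generate(prompt: str, steps: int = 5) -> str:
          text = prompt
          for _ in range(steps):
              probs = fake_model(text)
              tokens, weights = zip(*probs.items())
              # Sample the next token according to the model's probabilities,
              # then append it and repeat with the longer context.
              text += random.choices(tokens, weights=weights, k=1)[0]
          return text

      print(generate("The reply is"))

      A real network computes those probabilities from the entire context over a vocabulary of tens of thousands of sub-word tokens, which is why the output can recombine patterns rather than paste back stored snippets.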

    • Diplomjodler@feddit.de · 1 year ago

      Most people can’t do that, and generative AI, no matter how limited, is still better than the average schlub.

    • Flying Squid@lemmy.world · 1 year ago

      ChatGPT, and the many other similar systems, are unable to conceive of anything new or original; they merely imitate what has already come before.

      So it does what grade school teachers expect of their students?

    • EmoBean@lemmy.world · 1 year ago

      Or you could just learn how to use the tool to do better work instead of bitching about progress. Hur dur, calculators can’t do math, they need unique input. No fucking shit, Sherlock. lern2technology, you fucking boomer.

      • pinkdrunkenelephants@lemmy.world · 1 year ago

        Or you could grow the fuck up and actually be an adult instead of justifying using corporate-controlled software to do your thinking and living for you.

        Learn to read and write. Learn to think for yourself. Accept that effort, humility, and a willingness to learn and achieve are expected of you, and try to meet those expectations instead of getting angry that anything, literally anything, is asked of you in order to engage with other people.

        Live life.

      • pancakes@sh.itjust.works · 1 year ago

        Was this comment generated by AI?

        I’m sure the prompt was something like “write a response to this comment while sounding like I enjoy eating crayons in my free time.”

      • Ilovethebomb@lemm.ee · 1 year ago

        My experience with AI, and with the absolute fucking dorks who talk endlessly about it, is that neither is capable of original thought.

        It’s not the technology I don’t like, it’s the users.

        Also, why so angery?

  • stevedidWHAT@lemmy.world · 1 year ago

    Humans: create a tool to help them do things better than they used to, just like their grandparents did, and theirs before them.

    Also humans: how could we do this?!?!?

    • tigeruppercut@lemmy.zip · 1 year ago

      One main problem is all the wrong answers it generates. Imagine if, when we invented the calculator, it had fucked up the answer 30% of the time.

      • stevedidWHAT@lemmy.world · 1 year ago

        Agreed, and all of that can be ironed out as the tech progresses. I can’t imagine Edison or Tesla were successful in every aspect of their inventions either.

        The ability to automate effective learning and training for diverse tasks is arguably the crux of human progression. We’re making more and more “meta” things and I think it’s really promising for our future as a species.

  • nifty@lemmy.world · 1 year ago

    I recently had ChatGPT write something for me, but I didn’t like how the result didn’t capture my writing voice, so I just wrote it myself. I think ChatGPT is good for summarizing and finding solutions to simple things, but it’s pretty much useless once a task requires more than a certain skill level. I would really like to see a study where they take someone who doesn’t have an MBA and pair them with ChatGPT to produce a report vs. an MBA from Harvard etc. Same for consulting work at Bain etc.

    • Knusper@feddit.de · 1 year ago

      I also just feel like I’m not writing words for the fun of it. They’re chosen to convey information in a very intentional way to a given target group. Like, just now in that previous sentence, I changed “in a certain way” to “in a very intentional way”, because that’s more precisely what I wanted to say. I try to convey lots of nuances in relatively few words.

      That’s my #1 criticism of LLMs: they just blather on and on. And ultimately, precise nuance requires understanding the topic, the context, and the target group, and describing all of that to an LLM would take longer than writing the actual text yourself.

    • EnoBlk@lemmy.world · 1 year ago

      Maybe next time feed it a bunch of your writing and ask it to mimic your writing style.
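
      For example, here’s a rough sketch of that approach using the OpenAI Python client; the model name, sample texts, and the final task are placeholders rather than a recommendation of any particular setup.

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      # Placeholder samples; in practice, paste in a few paragraphs of your own writing.
      samples = [
          "Sample paragraph one, written in my usual voice...",
          "Sample paragraph two, with my typical phrasing and tone...",
      ]

      prompt = (
          "Here are some examples of my writing:\n\n"
          + "\n\n---\n\n".join(samples)
          + "\n\nUsing the same voice and style, write a short project update email."
      )

      response = client.chat.completions.create(
          model="gpt-4o",  # placeholder; any capable chat model works
          messages=[{"role": "user", "content": prompt}],
      )
      print(response.choices[0].message.content)

      The more representative the samples are of your actual voice, the closer the mimicry tends to get.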