James Cameron on AI: “I warned you guys in 1984 and you didn’t listen”

  • Meowoem@sh.itjust.works · 1 year ago

    Not if most of their staff were pretty shitty parrots and the job is essentially just parroting…

    • Dr. Dabbles@lemmy.world · 1 year ago

      At first blush, this is one of those things that most people assume is true. But one of the problems here is that a human can comprehend what is being asked in, say, a support ticket. So while an LLM might find a useful prompt and then spit out a reply that may or may not be correct, a human can actually deeply understand what’s being asked, then select an auto-reply from a drop-down menu.

      Making things worse for the LLM side of things, that person doesn’t consume absolutely insane amounts of power to be trained to reply. Neither do most of the traditional “chatbot” systems that have been around for 20 years or so. Which raises the question: why use an LLM that is as likely to get something wrong as it is to get it right, when existing systems have been honed over decades to get it right almost all of the time?
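
      For contrast, here’s a minimal Python sketch of the kind of deterministic, keyword-matching responder those older systems are built on. Every rule and reply below is invented for illustration; real systems just carry far bigger rule sets.

      ```python
      # Hypothetical rule-based auto-responder: deterministic keyword matching
      # against canned replies. All rules and replies are made up for this sketch.
      CANNED_REPLIES = {
          "password": "Use the 'Forgot password' link on the login page to reset it.",
          "refund": "Refunds take 5 business days. Please reply with your order number.",
          "crash": "Please attach the log from Settings > Diagnostics so we can investigate.",
      }

      def auto_reply(ticket_text: str) -> str:
          """Return the canned reply whose keyword appears in the ticket, if any."""
          lowered = ticket_text.lower()
          for keyword, reply in CANNED_REPLIES.items():
              if keyword in lowered:
                  return reply  # deterministic: the same ticket always gets the same answer
          return "Escalating to a human agent."  # no rule matched

      print(auto_reply("My app keeps crashing after the update"))
      ```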

      If the work being undertaken is translating text from one language to another, LLMs do an incredible job. Because guessing the next word based on hundreds of millions of samples turns out to be a uniquely good way to arrive at a translation. And that’s good enough almost all of the time. But asking it to write marketing copy for your newest Widget from WidgetCo? That’s going to take extremely skilled prompt writers, and equally skilled reviewers. So in that case the only thing you’re really saving is the wall-clock time it takes a human to type something. Not really a dramatic savings, TBH.
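
      To make the “guessing the next word” point concrete, here’s a toy sketch: a bigram model built from a tiny made-up corpus that greedily picks the most frequent follower at each step. Real LLMs use neural networks trained on vastly more data, but the decoding loop is the same idea.

      ```python
      # Toy next-word predictor: count which word follows which in a sample
      # corpus, then greedily extend a seed word. Corpus and seed are invented.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat and the cat slept on the mat".split()

      followers = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          followers[prev][nxt] += 1  # e.g. followers["the"] counts "cat" and "mat"

      def generate(seed: str, length: int = 6) -> str:
          words = [seed]
          for _ in range(length):
              options = followers.get(words[-1])
              if not options:
                  break  # no observed continuation for this word
              words.append(options.most_common(1)[0][0])  # most frequent next word
          return " ".join(words)

      print(generate("the"))  # extends the seed, one guessed word at a time
      ```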