• bionicjoey
    10 months ago

    Hopefully this sets a precedent for other companies thinking of replacing their employees with language models

    • Szymon
      10 months ago

      It’s setting the precedent that I’m trying out every chat bot owned by a company to get free shit now.

      Companies are about to find out just how expensive it is to remove front line labour.

      • psvrh
        10 months ago

        Companies are about to find out just how expensive it is to remove front line labour.

        They don’t care. The executives who made this decision already got their bonuses for it. If they have to reverse course and rehire, they’ll simply try this again in a few years.

        Corporate and stock-market incentive structures are…perverse. They incentivize very short horizons, usually a quarter or at most a year. We’d have a much less sick society if those in charge weren’t allowed to realize gains for at least five to ten years.

        And yes, I’m aware that communism worked on the idea of five- and ten-year plans and had problems dealing with short-term supply-chain issues. Market-based solutions work great for things like warehousing, logistics, or distribution because the feedback is immediate and the costs aren’t externalized. Where the costs are external and long-term but the profit is realized in the short term, market solutions fail.

      • buffaloseven@fedia.io
        10 months ago

        I think that with people constantly figuring out how to game the GPT chat bots, if we see a few more rulings like this where companies are liable for the chat bot’s responses, we’ll see a shift back towards “dumb bots” where there’s explicit control over the responses. If people realize you can get free stuff just by manipulating a chat bot, and a company is liable for what the chat bot says…I just don’t think it’s tenable for them.

        • Szymon
          10 months ago

          They wouldn’t have done it without crunching some numbers, but if they didn’t consider the system fallible, that’s on them for not thinking it through. They’ll keep developing it to get a better product, but that costs money, and ideally the cost will end up higher than simply paying people to do the work.