cross-posted from: https://ponder.cat/post/1675150

I refuse to sit here and pretend that any of this matters. OpenAI and Anthropic are not innovators, and are antithetical to the spirit of Silicon Valley. They are management consultants dressed as founders, cynical con artists raising money for products that will never exist while peddling software that destroys our planet and diverts attention and capital away from things that might solve real problems.

I’m tired of the delusion. I’m tired of being forced to take these men seriously. I’m tired of being told by the media and investors that these men are building the future when the only things they build are mediocre and expensive. There is no joy here, no mystery, no magic, no problems solved, no lives saved, and very few lives changed other than new people added to Forbes’ Midas list.

None of this is powerful, or impressive, other than in how big a con it’s become. Look at the products and the actual outputs and tell me — does any of this actually feel like the future? Isn’t it kind of weird that the big, scary threats they’ve made about how AI will take our jobs never seem to translate to an actual product? Isn’t it strange that despite all of their money and power they’re yet to make anything truly useful?

My heart darkens, albeit briefly, when I think of how cynical all of this is. Corporations building products that don’t really do much, sold on the idea that one day they might, peddled by reporters who want to believe their narratives — and in some cases actively champion them. The damage will be tens of thousands of people fired, long-term environmental and infrastructural chaos, and a profound depression in Silicon Valley that I believe will dwarf the dot-com bust.

And when this all falls apart — and I believe it will — there will be a very public reckoning for the tech industry.

  • BlameThePeacock · 4 days ago

    Plenty of examples out there.

    I use it to write Excel and PowerFx formulas, summarize my client notes when creating statements of work, and create documentation. It saves me multiple hours per week, and some weeks even more than that.

      • BlameThePeacock · 3 days ago

        Well last time I did this, I won the contract and got paid.

        I’ve used it for every contract I’ve had for the last year at least.

          • BlameThePeacock · 1 day ago

            I didn’t claim I won because of GenAI; I won the contract while saving time by using GenAI.

            That’s beneficial for me.

      • BlameThePeacock · 3 days ago

        It’s better than the documentation I write myself, and it’s not like I don’t read over it and edit it before handing it to a client.

    • sorter_plainview@lemmy.today · 4 days ago

      How do you prevent hallucinations and ensure the factual accuracy of the output? I have seen plenty of examples where LLMs screw this up, and I haven’t seen a foolproof implementation yet. Is there anything new that I’m unaware of?

      • PhilipTheBucket@ponder.cat · 3 days ago

        Code that’s fucked up making its way into the repo was already a known problem, though.

        LLMs can save a lot of time depending on what coding task you’re doing. They are not a substitute for thinking and understanding, and they quickly reach a point of diminishing returns where, once they’ve gotten you from A to B, you might as well get yourself to point C. But it’s not like bad code never existed before they came along.

      • BlameThePeacock · 3 days ago

        It’s not about being foolproof. I know how to do this stuff myself; this just saves me time. I can look at it and know whether it should work, then I test it, just like I would if I had written it myself.

    • Luffy@lemmy.ml · (edited) · 3 days ago

      But you don’t need a general LLM for that. You can just use one that’s far smaller and far less energy-consuming than something like ChatGPT.

      • BlameThePeacock · 3 days ago

        I don’t use ChatGPT for this; I’m running a 14B-parameter model locally.