• maegul (he/they)@lemmy.ml · 3 months ago

    > I really think it’s mostly about getting a big enough data set to effectively train an LLM.

    I mean, yes, of course. But I don’t think it’s just about that, because the business model of building and selling LLM services is to supplant the very data those models were trained on and the services that created that data. What other business model could there be?

    In the case of Google’s AI alongside its search engine, and even ChatGPT itself, this is clearly one of the use cases that has emerged and is actually working relatively well: replacing the internet search engine and giving users “answers” directly.

    Users like it because it feels more comfortable, natural and useful, and probably quicker too. And in some cases it is actually better. But it’s important to appreciate how we got here … by the internet becoming shittier, and search engines becoming shittier, all in the pursuit of ad revenue and the corresponding tolerance of SEO slop.

    IMO, to ignore the “carnivorous” dynamics here, which I think clearly go beyond ordinary capitalism and innovation, is to miss the forest for the trees. Somewhat sadly, this tech era (roughly Windows 95 to now) has taught people that the latest new thing must be a good idea and that we should all get on board before it’s too late.

    • masterspace · 3 months ago

      > Users like it because it feels more comfortable, natural and useful, and probably quicker too. And in some cases it is actually better. But it’s important to appreciate how we got here … by the internet becoming shittier, and search engines becoming shittier, all in the pursuit of ad revenue and the corresponding tolerance of SEO slop.

      No, it legitimately is better. Do you know what Google could never do, but Copilot Search and Gemini Search can? Synthesize one answer from multiple different sources.

      Sometimes the answer to your question is inherently not on a single page: it’s split across the old framework docs, the new framework docs, and Stack Overflow questions, and the best a traditional search engine can ever do is get some of the right pieces in front of you some of the time. An LLM will give you a plain-language answer immediately, and it lets you ask follow-up questions and request modifications to your original example.
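      That “one answer from many sources” step is basically retrieval plus a synthesis prompt. Here’s a minimal sketch of the idea in Python (assuming the openai client library and an API key in the environment; the sources, question, and model name are made-up placeholders, not what Copilot or Gemini actually run):

      ```python
      # Minimal sketch: synthesize one answer from several retrieved sources,
      # then continue the conversation with a follow-up question.
      # Assumes the openai Python client (>= 1.0) and OPENAI_API_KEY set;
      # the sources, question, and model name are placeholders.
      from openai import OpenAI

      client = OpenAI()

      # Pretend a search step already fetched these snippets.
      sources = [
          ("old framework docs", "In v1, routing is configured in routes.config ..."),
          ("new framework docs", "Since v2, routes are declared in app.routes.ts ..."),
          ("stack overflow thread", "If you are migrating from v1, note that ..."),
      ]
      context = "\n\n".join(f"[{name}]\n{text}" for name, text in sources)

      messages = [
          {"role": "system",
           "content": "Answer using only the provided sources, and name the source for each claim."},
          {"role": "user",
           "content": f"Sources:\n{context}\n\nQuestion: How do I set up routing after upgrading?"},
      ]

      reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
      print(reply.choices[0].message.content)

      # Follow-up questions just extend the same message list.
      messages.append({"role": "assistant", "content": reply.choices[0].message.content})
      messages.append({"role": "user", "content": "Show that as a minimal code example."})
      ```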

      Yes, Google has gotten shitty, but it would never have been able to do the above without an LLM under the hood.

      • maegul (he/they)@lemmy.ml · 3 months ago

        Sure, but IME it is still very far from doing what good, well-written, informed human content can do, especially on forums and the like, where you can have good conversations with informed people about your problem.

        IMO, whatever LLMs are doing that older systems can’t isn’t greater than what was lost to SEO, ad-driven slop and shitty search.

        Moreover, the business interest of LLM companies is clearly in dominating and controlling the space (as that’s just capitalism and the “smart” thing to do), which means the older, human-driven system of information sharing and problem solving risks being severely diminished or destroyed … when we could just as well enjoy some hybridised system. But because profit is the focus, and the means of making that profit are problematic, we’re in rough waters that I don’t think can be trusted to produce a net positive (and that haven’t been trustworthy for decades now).