Because they don’t really search or index quality content (that’s expensive and hard to do) and their search implementation is genuinely bad, they don’t deliver any real improvement.

The process is like this:

  1. Take the user query and generate 1-3 search queries. For this step they use fast, cheap, but fairly dumb models; as a result, the generated queries are sometimes very poor, and unlike a pro they don’t really know how to use search engines (filtering, ranking, focusing…).
  2. Combine the search results (the mix contains AI-generated slop summary pages, YouTube videos, maybe forums, maybe Wikipedia…).
  3. Use RAG with an LLM to find the answer. LLMs always try to find an answer quickly, so instead of reasoning through a long article they grab the slop page that offers a direct answer.
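The three steps can be sketched as a toy pipeline. Every function name here is a hypothetical placeholder standing in for a model or search call, not any vendor’s real API:

```python
# Toy sketch of the 3-step AI-search pipeline described above.
# Every name here is a hypothetical placeholder, not a real vendor API.

def expand_query(user_query: str) -> list[str]:
    # Step 1: a small, cheap model turns the user query into 1-3 search
    # queries. Sketched here as trivial string variants.
    return [user_query, f"{user_query} explained", f"what is {user_query}"]

def web_search(query: str) -> list[dict]:
    # Step 2: each query hits the search index; the combined results are a
    # mixed bag (AI-generated summary pages, videos, forums, wikis...).
    slug = query.replace(" ", "-")
    return [{"url": f"https://example.com/{slug}",
             "text": f"Some page text about {query}."}]

def rag_answer(docs: list[dict]) -> str:
    # Step 3: an LLM answers over the retrieved documents (RAG). It tends
    # to grab the first page with a direct-looking answer instead of
    # reasoning through long articles -- sketched as "take the first doc".
    best = docs[0]
    return f"{best['text']} [source: {best['url']}]"

def ai_search(user_query: str) -> str:
    queries = expand_query(user_query)
    docs = [doc for q in queries for doc in web_search(q)]
    return rag_answer(docs)
```

The failure mode falls out of the structure: whatever the index ranks first becomes the “source”, regardless of quality.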

As you can see, there are many, many problems in this implementation:

  • The biggest problem is citation: they cite confidently, but the citations are often wrong.
  • They use low-quality data, like auto YouTube subtitles, improperly extracted tables and elements, content-farm sites, copycat sites, corporate blogs…
  • Their search results are low quality.
  • For the most important part (breaking down the user request) they use cheap, stupid models.
  • They stuff all the data into a single context instead of making parallel requests (which would be very expensive).
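To illustrate the last point: instead of pushing every retrieved page into one context window, each page could be summarized in its own parallel request and only the short summaries merged. A minimal sketch, where the `summarize_page` call stands in for one hypothetical LLM request per page:

```python
from concurrent.futures import ThreadPoolExecutor

def summarize_page(page_text: str) -> str:
    # Stand-in for one LLM call per page; in a real system each of these
    # would be an independent (and separately billed) model request.
    return page_text[:40]

def summarize_all(pages: list[str]) -> str:
    # Fan out: one request per page, run concurrently, then merge the short
    # summaries instead of pushing every full page into a single context.
    with ThreadPoolExecutor() as pool:
        summaries = list(pool.map(summarize_page, pages))
    return "\n".join(summaries)
```

This is the expensive option the vendors avoid: N pages means N model calls instead of one.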

It’s still strange to me: we always say “they have all the data, all the money, all the hardware…”, yet they still can’t build a better AI search than random FOSS developers.

  • panda_abyss · 23 days ago

    I agree with almost everything you said, however, Kagi lets you choose the model that runs your search.

    They’re always pulling from the same search index, and you’re right: the citation is just the model guessing how much it used some source. Nothing actually quantifies that.

    • pkjqpg1h@lemmy.zip (OP) · 23 days ago

      Well, I used Kagi (for 2 months), but their AI implementation is not transparent, and it suffers from the same problems, though their index is better (combining Google, Bing, Yandex, Brave…) and you can use better LLMs.

      Kagi Assistant’s system prompt (impossible to override), its context management and limits, and its file handling (as far as I can tell, files aren’t even put into the context) are all hidden. Their system prompt (which I guess instructs the model to “be short”) causes too many problems with hard prompts (especially coding and document editing): https://kagifeedback.org/d/9793-transparency-issues

      I switched to OpenRouter (I just pay for my tokens).

  • Zerush@lemmy.ml · 23 days ago

    I agree with it, despite having used an AI search, Andisearch, for almost 4 years; it’s the only one I’ve tested with pretty good accuracy (~90%). Anyway, you always need to cross-check information on the web, whether it comes from an AI or not; there’s a lot of BS out there.

    On this topic, Andi said:

    Based on recent examples from 2025-2026, AI search engines frequently provide confident but incorrect answers, demonstrating several key problems:

    1. Inconsistent results - Siri with Apple Intelligence gives different wrong answers to the same question when asked multiple times[1].

    2. False confidence - AI provides detailed but completely incorrect information, like Google AI claiming a South Dakota team won North Dakota’s championship[1:1].

    3. Regression in quality - Traditional search results often work better than AI versions. As John Gruber notes, “old Siri… at least recognizes that Siri itself doesn’t know the answer and provides a genuinely helpful response”[1:2].

    4. Poor accuracy even on popular topics - Siri achieved only a 34% accuracy rate when asked about Super Bowl winners, with one stretch of 15 wrong answers in a row[1:3].

    The core issue appears to be that AI search engines prioritize providing definitive-sounding answers over accuracy, making them less reliable than traditional search results that simply link to authoritative sources.


    1. Daring Fireball – Siri Is Super Dumb and Getting Dumber

  • moakley@lemmy.world · 23 days ago

    What are you guys searching for? I find that Google’s AI search results are the only improvement that Google search has made in at least 20 years. Because other than that it’s been a slow and steady decline, even when I take the time to make a really specific query.

    I also find this is the one circumstance where AI has actually made something better, possibly because Google search had just gotten that bad.

    • lividweasel@lemmy.world · 23 days ago

      Google search has gotten so much worse in the last couple of years. It used to be that I would be able to go through a few pages of results and be able to find what I wanted. Now, after about the first page they become totally unrelated to the search or are just auto-generated garbage. If what you want isn’t on either Wikipedia or Reddit, there’s a good chance you won’t find it.

      As for the AI summary, it’s total garbage masquerading as valid information. Three times in the past week, I ran into cases where it stated something and confidently linked to sources, but the sources actually proved that it had misunderstood and was totally wrong.

      I lost all confidence when I used it to convert a binary number like 10010110 to decimal (a task it had been able to do for years), only for it to correctly list the steps of the process and then come out with a result of “2”. You don’t even need to know anything about how binary works to realize that’s completely wrong.
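For the record, the conversion the AI botched takes one line of Python, and the right answer is 150:

```python
# 10010110 in binary = 128 + 16 + 4 + 2 = 150 in decimal (not "2").
value = int("10010110", 2)
assert value == 128 + 16 + 4 + 2 == 150
```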

      Some people may think that the days of SEO were bad because sites would put their thumb on the scale to get their site prioritized, but at least the sites were there and contained valid information. Now, they just aren’t there, and we have to try to ignore the incorrect information being pushed at us at the top of the results in a bastardized form of “Are you feeling lucky?”

    • pkjqpg1h@lemmy.zip (OP) · 23 days ago

      Actually, if you narrow your search and use different search engines with good practices, it’s usually much faster for me than AI.

      • moakley@lemmy.world · 23 days ago

        That’s impossible, because most of my searches are literally as fast as me typing the query, and then I get the answer.

        That’s why I’m asking what you guys are searching for, because this has been a dramatic improvement for me.

          • Zerush@lemmy.ml · edited · 22 days ago

            Andi said:

            The seahorse emoji phenomenon reveals a curious case of mass misremembrance - there has never been a seahorse emoji, yet both humans and AI language models firmly believe it exists[1][2].

            When asked about the seahorse emoji, large language models respond with complete confidence that it exists, then spiral into confusion when trying to display it, often outputting random fish or horse emojis instead[2:1]. This behavior stems from the models building an internal “seahorse + emoji” concept that crashes against reality when no matching token exists in their vocabulary[1:1].

            The technical explanation involves the models’ logit lens - as they process the request through their layers, they construct a conceptual blend of “seahorse” and “emoji” that seems perfectly valid until the final output stage, where they’re forced to select the closest available match[1:2].

            Many humans share this false memory, with Reddit threads and social media posts filled with people convinced they’ve seen a seahorse emoji before[1:3]. While a seahorse emoji was proposed to the Unicode Consortium in 2018, it was rejected and has never actually existed[2:2].

            Adding that it doesn’t exist in the official Unicode emoji set, but it does in unofficial packs, e.g. here
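The claim is easy to verify programmatically: Python’s `unicodedata` module knows the official Unicode character names, and (at least in the Unicode data shipped with current CPython) no SEAHORSE is among them:

```python
import unicodedata

# Real animal emoji resolve by their official Unicode names...
assert unicodedata.lookup("HORSE") == "\U0001F40E"          # horse emoji
assert unicodedata.lookup("TROPICAL FISH") == "\U0001F420"  # fish emoji

# ...but "SEAHORSE" was never assigned, so the lookup raises KeyError.
try:
    unicodedata.lookup("SEAHORSE")
    seahorse_exists = True
except KeyError:
    seahorse_exists = False
```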


            1. Why do LLMs freak out over the seahorse emoji? – Theia Vogel

            2. Emojipedia – Is There a Seahorse Emoji?

        • pkjqpg1h@lemmy.zip (OP) · 23 days ago

          Maybe for simple queries, but if your task is something like these, currently there is no AI that can beat a human/me:

          • “finding the most popular communities on Lemmy”

          • “the 5 latest LLM models”

          • “Trump’s last 5 lies”

          • “any file finding”

          • “image finding”

          • “any tool or website suggestion”

          • “finding the source of something”

          • “finding GitHub issues related to something”

          • “finding all news about something”

          • “finding a broken webpage”

          • “finding original content”

          • “finding illegal content :D”

          Even when they “can”, they do just a good-enough job, and that’s not enough for me.

          • moakley@lemmy.world · 23 days ago

            maybe for simple queries

            Yeah. I’m referring to simple queries. That’s the vast majority of my queries.

            • its_kim_love@lemmy.blahaj.zone · 22 days ago

              Were we supposed to read your mind for that one? You literally said someone else’s experience was impossible before walking it back to “of course I just meant simple queries.”

              • moakley@lemmy.world · 22 days ago

                Maybe reread the conversation, because you seem to be assuming a tone on my part that isn’t there.

        • its_kim_love@lemmy.blahaj.zone · 23 days ago

          I find that searches for anything older than 10 years that isn’t media or pop culture just come up empty. I can’t find a way to exclude terms at all. I can’t find a reliable way to add terms without wildly changing the results instead of digging into the ones I have to find what I’m actually looking for.