• The Rabbit R1 AI box is actually an Android app in a limited $200 box, running on AOSP without Google Play.
  • Rabbit Inc. is unhappy that details of its tech stack are public and is threatening action against unauthorized emulators.
  • AOSP is a logical choice for mobile hardware, as it provides essential functionality without the need for Google Play.
  • MxM111@kbin.social

    The most convincing answer is the correct one. The correlation between AI answers and correct answers is fairly high; numerous tests show that. The models have also improved significantly (especially the paid versions) since their introduction just two years ago.
    Of course that doesn’t mean they can be trusted as much as Wikipedia, but they’re probably a better source than Facebook.

    • De_Narm@lemmy.world

      “Fairly high” is still useless (and doesn’t actually quantify anything; depending on context, both 1% and 99% could be ‘fairly high’). As long as these models just hallucinate things, I need to double-check. Which is what I would have done without one of these things anyway.

      • AIhasUse@lemmy.world

        Hallucinations are largely dealt with if you use agents. It won’t be long until it gets packaged well enough that anyone can just use it. For now, it takes a little bit of effort to get a decent setup.

      • TrickDacy@lemmy.world

        1% correct is never “fairly high” wtf

        Also if you want a computer that you don’t have to double check, you literally are expecting software to embody the concept of God. This is fucking stupid.

        • De_Narm@lemmy.world

          1% correct is never “fairly high” wtf

          It’s all about context. If you asked a bunch of 4-year-olds questions about trigonometry, 1% of the answers being correct would be fairly high. ‘Fairly high’ basically just means ‘as high as expected’ or ‘higher than expected’.

          Also if you want a computer that you don’t have to double check, you literally are expecting software to embody the concept of God. This is fucking stupid.

          Hence, it is useless. If I cannot expect it to be more or less always correct, I can skip using it and just look stuff up myself.

          • TrickDacy@lemmy.world

            Obviously the only contexts that would apply here are ones where you expect a correct answer. Why would we evaluate software that claims to be helpful against 4-year-olds asked to do calculus? I have to question your ability to reason for insinuating this.

            So confirmed: God or nothing. Why don’t you go back to quills? Computers cannot read your mind and write this message automatically, hence they are useless.

            • De_Narm@lemmy.world

              Obviously the only contexts that would apply here are ones where you expect a correct answer.

              That’s the whole point: I don’t expect correct answers. Neither from a 4-year-old nor from a probabilistic language model.

              • TrickDacy@lemmy.world

                And you don’t expect a correct answer because it isn’t correct 100% of the time. Some lemmings are basically just clones of Sheldon Cooper.

                • De_Narm@lemmy.world

                  I don’t expect a correct answer because I used these models quite a lot last year. At least half the answers were hallucinated. And it’s still a common complaint about this product if you look at actual reviews (e.g., pretty sure Marques Brownlee mentions it).

                • FlorianSimon@sh.itjust.works

                  Something seems to be flying over your head: quality is not optional, and it’s good engineering practice to seek reliable methods of doing our work. As a mature software person, you look for tools that leave less room for failure and as little as possible for humans to fuck up, because you know humans aren’t reliable, despite being unavoidable. That’s the logic behind automated testing, Rust’s borrow checker, static typing…
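
                  A minimal sketch of that principle, with a hypothetical function and pytest-style checks (the names and numbers are made up for illustration): the machine re-verifies the behavior on every run, instead of a human having to remember to.

                  ```python
                  # test_pricing.py -- run with: pytest test_pricing.py
                  import pytest

                  def apply_discount(price: float, percent: float) -> float:
                      """Return price reduced by percent (0-100)."""
                      if not 0 <= percent <= 100:
                          raise ValueError("percent must be between 0 and 100")
                      return price * (1 - percent / 100)

                  def test_apply_discount():
                      # Checked mechanically on every run; no reviewer vigilance needed.
                      assert apply_discount(200.0, 25) == 150.0

                  def test_rejects_bad_percent():
                      with pytest.raises(ValueError):
                          apply_discount(100.0, 150)
                  ```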

                  If you’ve done code review, you know it’s not very efficient at catching bugs, because you don’t pay as much attention to detail when you’re not the one writing the code. With LLMs, you have to review the code to meet quality standards, because of the hallucinations, just like you have to test your work before committing it.

                  I understand the actual software engineers who care about delivering working code and would rather write it themselves in order to be more confident in the quality of the output.

                  • TrickDacy@lemmy.world

                    Like most people, I have no interest in engaging in conversation with someone who gives me zero reason to.

                    Not that it’s any of your business, but quality matters to me more than anything else, which is why I like tools that help me deliver it.

        • SpaceNoodle@lemmy.world

          Perhaps the problem is that I never bothered to ask anything trivial enough, but you’d think that two rhyming words starting with “L” would be simple.

          • CaptDust@sh.itjust.works

            “AI” is a really dumb term for what we’re all using currently. General LLMs are not intelligent: they assign probabilities to tokens (words) based on the tokens that came before, to guess the next most likely word or phrase, really really fast. Informed guesses, sure, but there aren’t enough parameters to consider all the factors required to identify a rhyme.
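
            A toy sketch of that next-word idea (a bigram frequency table over a made-up corpus; real LLMs use neural networks over long contexts, not a lookup table, but the guess-the-next-token loop is the same):

            ```python
            import random
            from collections import Counter, defaultdict

            # Count which word follows which in a tiny corpus.
            corpus = "the cat sat on the mat the cat ate".split()
            following = defaultdict(Counter)
            for prev, nxt in zip(corpus, corpus[1:]):
                following[prev][nxt] += 1

            def next_word(prev: str) -> str:
                # Sample the next word in proportion to how often it followed `prev`.
                counts = following[prev]
                return random.choices(list(counts), weights=list(counts.values()))[0]

            print(next_word("the"))  # "cat" (2/3 of the time) or "mat" (1/3)
            ```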

            That said, honestly I’m struggling to come up with 2 rhyming L words? Lol even rhymebrain is failing me. I’m curious what you went with.

          • MxM111@kbin.social

            Ok, so by “asking” you mean that you found questions that someone else had already identified as being answered wrongly by an LLM, and asked those yourself.

      • magic_lobster_party@kbin.run

        I’ve asked GPT-4 to write specific Python programs, and more often than not it does a good job. And if the program is incorrect, I can tell it about the error and it will often manage to fix it for me.
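
        That tell-it-the-error loop is easy to script. A minimal sketch using the openai Python client (the model name, prompt, and three-round limit are illustrative, and exec of model output is for demonstration only; sandbox untrusted code in practice):

        ```python
        from openai import OpenAI  # pip install openai

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def ask(messages: list) -> str:
            resp = client.chat.completions.create(model="gpt-4", messages=messages)
            return resp.choices[0].message.content

        messages = [{"role": "user", "content":
                     "Write a Python script that prints the first 10 primes. Code only, no markdown."}]
        for _ in range(3):  # a few repair rounds
            code = ask(messages)
            try:
                exec(code)  # demonstration only
                break
            except Exception as err:
                # Feed the error back, as described above.
                messages += [{"role": "assistant", "content": code},
                             {"role": "user", "content": f"That raised {err!r}. Please fix it."}]
        ```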

        • FlorianSimon@sh.itjust.works

          You have every right not to, but the “useless” word comes out a lot when talking about LLMs and code, and we’re not all arguing in bad faith. The reliability problem is still a strong factor in why people don’t use this more, and, even if you buy into the hype, it’s probably a good idea to temper your expectations and try to walk a mile in the other person’s shoes. You might get to use LLMs and learn a thing or two.

          • TrickDacy@lemmy.world

            I only “believe the hype” because a good developer friend of mine suggested I try Copilot, so I did, and I was impressed. It’s an amazing technical achievement that helps me get my job done. It’s useful every single day I use it. Does it do my job for me? No, of fucking course not; I’m not a moron who expected that to begin with. It speeds up small portions of tasks, and if I don’t understand or agree with its solution, it’s insanely easy not to use it.

            People being mad online about something new is all this is. There are valid concerns about this kind of tech, but I rarely see them. Ignorance on the topic prevails. Anyone calling AI “useless” in a blanket statement is necessarily ignorant and doesn’t really deserve my time, except to catch a quick insult for being the ignorant fool they have revealed themselves to be.

            • FlorianSimon@sh.itjust.works

              I’m glad that you’re finding this useful. When I say it’s useless, I speak for myself only.

              I’m not afraid to try it out, and I actually did. While I was impressed by the quality of the English it spits out, I was disappointed with the actual substance of the answers, which makes it completely unusable for me in my day-to-day life. I keep trying it every now and then, but it’s not a service I would pay for in its current state.

              Thing is, I’m not the only one. This is the opinion of the majority of people I work with, senior or junior. I’m willing to give it some time to mature, but I’m unconvinced at the moment.

              • TrickDacy@lemmy.world

                You would need to be pulling some trickery on Microsoft to get access to Copilot for more than a single 30-day trial, so I’m skeptical you’ve actually used it. It sounds like you’re using other products, which may be much worse. It also sounds like you work in a conservative shop. Good luck with that.

                • FlorianSimon@sh.itjust.works

                  I have not tried Copilot, no. I’m not giving any tool money, personal info, and access to my code when it can’t reliably answer a question like “does removing from a std::vector invalidate iterators?” (not a prompt I tried on LLMs, but close enough).

                  That shit’s just dangerous, for obvious reasons. Especially when you consider the catastrophic impact these kinds of errors can have.

                  There needs to be a fundamental shift to something that detects and fixes the garbage, which just isn’t there ATM.

                  • TrickDacy@lemmy.world

                    Yeah, you just illustrated that you have no idea what Copilot is like. But you were convinced you were an expert on it. Lol

    • k_rol

      I think Meta hates your answer