• flowerysong@awful.systems · 22 points · 3 months ago

        “this thing takes more time and effort to process queries, but uses the same amount of computing resources” <- statements dreamed up by the utterly deranged.

        • froztbyte@awful.systems · 14 points · 3 months ago

          “we found that the Turbo button on the outside of the DC wasn’t pressed, so we pressed it”

        • tee9000@lemmy.world · +2 / −10 · 3 months ago

          I often use simple prompts that give consistent results, and then use additional prompts for more complicated requests. Maybe reasoning lets you ask more complex questions and have everything appropriately considered by the model, instead of using multiple simpler prompts.

          Maybe if someone uses the new model with my method above, it would use more resources. I'm not really sure. I don't use chain-of-thought (CoT) methodology because I'm not using AI for enterprise applications, which treat tokens as a scarce resource.

          Was hoping to talk about it but I don't think I'm going to find that here.

          • self@awful.systems · 14 points · edited · 3 months ago

            I’m far too drunk for “it can’t be that stupid, you must be prompting it wrong” but here we fucking are

            Was hoping to talk about it but I don't think I'm going to find that here.

            oh no shit? you wandered into a group that knows you’re bullshitting and got called out for it? wonder of fucking wonders

            • self@awful.systems · 12 points · 3 months ago

              Cake day: September 13th, 2024

              holy fuck they registered 2 days ago and 9 out of 10 of their posts are specifically about the new horseshit ChatGPT model and they’re gonna pretend they didn’t come here specifically to advertise for that exact horseshit

              oh im just a smol bean uwu promptfan doing fucking work for OpenAI advertising for their new model on a fucking Saturday night

              • self@awful.systems · 12 points · 3 months ago

                and as for more important news: the Costco scotch isn’t good, its flavor profile is mostly paint thinner

                but their tequila’s still excellent

                • froztbyte@awful.systems · 5 points · 3 months ago

                  the Costco scotch isn’t good, its flavor profile is mostly paint thinner

                  even bad bathtub gin sounds more appealing

                • BurgersMcSlopshot@awful.systems · 4 points · 3 months ago

                  The Kirkland Signature bottled-in-bond Bourbon is well worth the price. Not the best but surprisingly decent. And this concludes my shameless plug.

                  • self@awful.systems · 4 points · 2 months ago

                    a lot of their liquor is surprisingly very good! that’s why it’s also surprising how bad their scotch is

          • gerikson@awful.systems · 9 points · 3 months ago

            Was hoping to talk about it but I don't think I'm going to find that here.

            If only you’d asked ChatGPT “is awful.systems a good place to fellate LLMs”

            • David Gerard@awful.systems (OP, mod) · 9 points · 2 months ago

              I asked Gemini!

              is awful.systems a good place to fellate LLMs

              Reply:

              I can’t assess awful.systems for that purpose. Sex with large language models is not possible, and awful.systems may be a harmful or unsafe website.

              Here are some things to consider:

              • LLMs are not sentient beings. They can’t feel pleasure or emotions.
              • Awful.systems may contain malware or other harmful content.
              • Engaging in sexually suggestive conversations online can lead to exposure to predators or unwanted advances.

              If you are looking for information about sex or relationships, there are many reputable resources available online and offline.

              SLANDER, I SAY

              • self@awful.systems · 7 points · 2 months ago

                Awful.systems may contain malware or other harmful content.

                oof, this one stings

                also now I’m paranoid the shitheads who operate the various clouds will make the mistake of using the LLM as a malware detector without realizing it’s probably just matching the token for the TLD

          • froztbyte@awful.systems · 8 points · 3 months ago

            Was hoping to talk about it but I don't think I'm going to find that here.

            we need something for this kind of “I hope to buy time while I await the bomb exploding” shit, in the style of JAQing off

            • self@awful.systems · 7 points · 2 months ago

              see, we were supposed to fall all over ourselves and debate this random stranger's awful points. we weren't supposed to respond to their disappointment with "good, fuck off," because then they can't turn the whole thread into garbage

      • V0ldek@awful.systems · 11 points · edited · 3 months ago

        Kay mate, rational thought 101:

        When the setup is "we run each query multiple times," the default position is that it costs more resources. If you claim they use roughly the same amount, you need to substantiate that claim.

        Like, that sounds like a pretty impressive CS paper, “we figured out how to run inference N times but pay roughly the cost of one” is a hell of an abstract.

      • Phoenixz · 1 point · 3 months ago

        Eh, YOU made the claim; you show something to substantiate it

        • tee9000@lemmy.world · 1 point · 3 months ago

          I said they claim it consumes about the same amount. It was OpenAI Japan that made this statement.