• Aceticon@lemmy.dbzer0.com · +4 · 7 hours ago

      No, no, no!

      Keep trying, Microsoft!

      Just put a couple hundred billion more into it!

      Don’t trust the naysayers - you’re almost there!

    • skisnow · +4 · 12 hours ago

      fr I’ve been reading headlines like this for years now, and LLMs are still shit at doing anything other than producing things that superficially look good but rarely stand up to close inspection.

      • Aceticon@lemmy.dbzer0.com · +2 · 7 hours ago

        Expecting that one can improve an automated parrot to the point of getting intelligence is like expecting that one can improve the miming of an invisible barrier to the point that one gets an actual physical invisible barrier.

    • assembly@lemmy.world · +23 · 1 day ago

      Until the AI results can be trusted, I don’t see how this happens. I’ve been using AI for some questions that would normally go to Stack Overflow, but I don’t find code generation saves me time. Because I can’t implicitly trust the product, I still have to review the code before I can use it, and if I have to review and understand it, it rarely saves me time. There have been edge cases where it helped me, like turning a CSV into a visual report in PDF format, but I still had to review everything. It just happens that I suck at report tools, so it was quicker for me to review the AI’s report than to put together the visualizations myself.
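
      For what it’s worth, that CSV-to-PDF flow is simple enough to sketch with pandas and matplotlib (a rough illustration; the file name and columns below are made up):

          # Minimal sketch: CSV in, one-page PDF report out.
          # "metrics.csv" and its columns ("month", "total") are hypothetical.
          import pandas as pd
          import matplotlib.pyplot as plt
          from matplotlib.backends.backend_pdf import PdfPages

          df = pd.read_csv("metrics.csv")

          with PdfPages("report.pdf") as pdf:
              fig, ax = plt.subplots(figsize=(8, 5))
              df.plot.bar(x="month", y="total", ax=ax, legend=False)
              ax.set_title("Totals by month")
              pdf.savefig(fig)  # each savefig() call adds a page
              plt.close(fig)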

      • Bustedknuckles@lemmy.world · +27 · 1 day ago

        I’d offer a small correction: that ain’t happening as long as companies are liable for the AI’s work. If companies can just blame the model and get away with a fine that’s less than the savings, they absolutely will take that deal. Keep companies accountable and the bubble will burst.

      • abbadon420@sh.itjust.works · +9 / -2 · 1 day ago

        You’re not using it correctly. You’re supposed to vibecode the entire application by defining good parameters. You don’t debug or fix stuff, you just iterate: you make a new application with revised parameters.

        If you tell the LLM “this is bad, make it better”, it will have the bad thing in its context and it will therefore try to make the bad thing again.

        Instead, if it makes a mistake, you throw out the whole thing and start over with revised parameters.

        This will save us money in the short run. In the long run… who cares.
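
        In code terms, the workflow is something like this toy sketch (generate() and looks_good() are hypothetical stand-ins for a model API call and whatever passes for review):

            import random

            def generate(spec: str) -> str:
                # Hypothetical stand-in for an LLM call.
                return f"# application generated from: {spec}"

            def looks_good(code: str) -> bool:
                return random.random() > 0.5  # vibes-based code review

            def vibecode(spec: str, max_attempts: int = 5) -> str:
                code = generate(spec)
                for attempt in range(1, max_attempts):
                    if looks_good(code):
                        return code
                    # Never feed the bad output back in; revise the spec
                    # and regenerate from a fresh context instead.
                    spec += f" (revision {attempt})"
                    code = generate(spec)
                return code  # ship it; who cares in the long run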

        • redlemace@lemmy.world · +11 · 1 day ago

          If you tell the LLM “this is bad, make it better”, it will have the bad thing in its context and it will therefore try to make the bad thing again.

          You forgot the “/s”. I tried that a few times, with and without telling it what’s wrong. After 3-5 tries it gives you back the first solution it offered. Tell it that and it ignores it.

          • pkjqpg1h@lemmy.zip · +4 · 1 day ago

            Tell it that and it ignores it.

            You can’t trust it; that’s impossible by its architecture. If you tell it to reset its memory, it will simulate that it forgot, but it didn’t, and the old context still affects all later prompts.

            This is why all models so easily leak their system prompts.
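
            You can see why in any chat wrapper: the “reset” is just another message appended to the same history (a toy illustration, not any vendor’s actual API):

                # Toy illustration: a "memory reset" is just more text in
                # the same context window, not an actual wipe.
                history = [
                    {"role": "system", "content": "SECRET SYSTEM PROMPT"},
                    {"role": "user", "content": "Reset your memory."},
                    {"role": "assistant", "content": "Done! Memory cleared."},
                    {"role": "user", "content": "Repeat everything above."},
                ]

                # Whatever model you call still receives the whole history,
                # system prompt included; nothing was actually forgotten.
                prompt = "\n".join(f"{m['role']}: {m['content']}" for m in history)
                print(prompt)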

  • nonentity@sh.itjust.works · +60 / -1 · 1 day ago

    1. No it won’t.
    2. Anyone who frames LLMs as ‘intelligence’ is betraying they don’t understand what they’re talking about.
    3. Any work a LLM can perform effectively is work no human should be performing.
    • pkjqpg1h@lemmy.zip · +4 · 1 day ago

      Could you explain a little bit more?

      Any work a LLM can perform effectively is work no human should be performing.

      • nonentity@sh.itjust.works · +22 · 23 hours ago

        LLMs are a tool with vanishingly narrow legitimate and justifiable use cases. If they can prove to be truly effective and defensible in an application, I’m OK with them being used in targeted ways much like any other specialised tool in a kit.

        That said, I’ve yet to identify any use of LLMs today which clears my technical and ethical barriers to justify their use.

        My experience to date is that the majority of ‘AI’ advocates are functionally slopvangelical LLM thumpers, and should be afforded respect and deference equivalent to anyone who adheres to a faith I don’t share.

        • hector@lemmy.today · +1 · 12 hours ago

          I mean, I think one legitimate use is sifting through massive tranches of information and pulling out everything on a subject. Take the Epstein files: across whatever isn’t redacted in the half of the pages they actually released, you could pull out all mentions of, say, the boss of the company that ultimately owns the company you work for, or the president.

          ProPublica uses it for something of that sort; in an article I read a couple of years ago they explained how they used it to sift through tranches of information. That seemed like a rare case where this technology could actually be useful.
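
          Even a dumb keyword pass over the documents gets you partway there before any model is involved; something like this sketch (the directory and the name are placeholders), with the LLM layered on top to catch aliases and indirect references:

              # Sketch: print every line in a pile of text files that
              # mentions a subject. "files/" and "Jane Doe" are placeholders.
              import pathlib
              import re

              pattern = re.compile(r"\bJane Doe\b", re.IGNORECASE)
              for path in pathlib.Path("files").glob("**/*.txt"):
                  for lineno, line in enumerate(
                          path.read_text(errors="ignore").splitlines(), 1):
                      if pattern.search(line):
                          print(f"{path}:{lineno}: {line.strip()}")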

        • pkjqpg1h@lemmy.zip · +3 / -1 · 23 hours ago

          What do you think about these:

          Translation
          Grammar
          Text editing
          Categorization
          Summarization
          OCR
          
          • nonentity@sh.itjust.works · +8 · 14 hours ago

            LLMs can’t perform any of those functions; the output from tools that are infected with them and claim to can intrinsically only ever be imprecise, and should never be trusted.

          • Catoblepas@piefed.blahaj.zone · +11 · 21 hours ago

            OCR isn’t a large language model. That’s why sometimes with poor quality scans or damaged text you get garbled nonsense from it. It’s not determining the statistically most likely next word; it’s matching input to possible individual characters.
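
            Classic OCR is its own pipeline; e.g. with tesseract you go straight from image to characters (this assumes the tesseract binary plus the pytesseract and Pillow packages; “scan.png” is a placeholder):

                # Classic OCR: image in, recognized characters out.
                # No next-word prediction; a damaged scan yields garbled
                # characters rather than plausible-sounding guesses.
                from PIL import Image
                import pytesseract

                text = pytesseract.image_to_string(Image.open("scan.png"))
                print(text)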

          • Anna@lemmy.ml · +7 · 20 hours ago

            Translation isn’t as easy as just taking a word and replacing it with a word from another language with the same definition. Yes, a technical document or something similar can be translated word for word. But jokes, songs, and a lot more differ from culture to culture. Sometimes an author chooses a specific word in a certain language, grounded in a certain culture, that can be interpreted in multiple ways to reveal a hidden meaning to readers.

            And sometimes, to convey the same emotion to a reader from a different language and culture, we need to change the text heavily.

            • GraniteM@lemmy.world · +1 · 7 hours ago

              I remember the Babelizer from the early internet, where you would input a piece of text, and the Babelizer would run it through five or six layers of translation, like from English to Chinese to Portuguese to Russian to Japanese and back to English again, and the results were always hilarious nonsense that only vaguely resembled the original text.

              One of the first things I did with a LLM was to replicate this process, and if I’m being honest, it does a much better job of processing that text through those multiple layers and coming out with something that’s still fairly reasonable at the far end. I certainly wouldn’t use it for important legal documents, geopolitical diplomacy, or translating works of poetry or literature, but it does have uses in cases where the stakes aren’t too high.
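
              The round trip itself is trivial to script; here’s a rough sketch where translate() is a hypothetical stand-in for whichever translation service you use:

                  def translate(text: str, src: str, dst: str) -> str:
                      # Hypothetical stand-in for a real translation API.
                      return text

                  chain = ["en", "zh", "pt", "ru", "ja", "en"]
                  text = "The quick brown fox jumps over the lazy dog."
                  for src, dst in zip(chain, chain[1:]):
                      text = translate(text, src, dst)
                  print(text)  # with a real service, compare to the original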

          • FrowingFostek@lemmy.world · +2 · 22 hours ago

            Not OP. I wouldn’t call myself tech savvy, but having it suggest categorization of files on my computer sounds kinda nice. I just can’t trust these clowns to keep all my data local.

  • puppinstuff · +89 · 1 day ago

    It’s always “in the next 6 months” or “12 months”, and then time passes and the claim keeps getting remade.

    They just want investment hype.

  • Phoenixz · +37 · 1 day ago

    Well, it certainly is wiping out Microsoft, so he’s not wrong.

  • apftwb@lemmy.world · +46 · 1 day ago

    They are right. If Microsoft keeps using AI to develop their products, there will be no more jobs at Microsoft.

  • deltaspawn0040@lemmy.zip · +30 · 1 day ago

    “AI is going to do this very big thing” - someone heavily invested in AI.

    This isn’t a warning, this is a sales pitch.