• fishos@lemmy.world · 1 year ago

      Except, to my understanding, it wasn’t an LLM. It was a protein-mapping model or something similar. And what they did was, instead of telling it “run iterations and select the things that are beneficial based on XYZ”, they said “run iterations and select based on non-beneficial XYZ”.

      They ran a protein-coding type model and told it to prioritize HARMFUL results over good ones, so it produced results that would cause harm.

      Now, yes, those still need to be verified. But it wasn’t just “making things up”. It was using real data to iterate faster than a human could. Very similar to the Folding@home program.
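
      In code terms, the “select based on non-beneficial XYZ” change is basically just inverting the scoring criterion in the selection loop. Here is a minimal sketch of that idea; generate_candidates and predict_toxicity are hypothetical placeholders standing in for the real generative model and toxicity predictor, not the actual system’s API:

      ```python
      import random

      def generate_candidates(n):
          # Placeholder: the real model would propose candidate molecules.
          return [f"molecule_{i}" for i in range(n)]

      def predict_toxicity(molecule):
          # Placeholder: assumed convention is higher score = more toxic.
          return random.random()

      def select_top(candidates, score_fn, k=10, maximize=False):
          # A normal drug-discovery run keeps the least toxic candidates
          # (maximize=False). Flipping one flag makes the very same loop
          # rank the most harmful candidates first instead.
          return sorted(candidates, key=score_fn, reverse=maximize)[:k]

      candidates = generate_candidates(1000)
      least_toxic = select_top(candidates, predict_toxicity, maximize=False)
      most_toxic = select_top(candidates, predict_toxicity, maximize=True)
      ```

      The point is that nothing about the model itself changes between the two calls; only the success condition does.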

        • fishos@lemmy.world · 1 year ago

          No problem. I’m totally on board with the “LLMs aren’t the AI singularity” page. This one is actually kinda scary to me because it shows how easily you can take a model/simulation and, instead of asking “how can you improve this?”, ask “how can I make this worse?”. The same tool used for good can easily be used for bad when you change the “success conditions”. Now, it’s not the tech’s fault, of course. It’s a tool, and what matters is how it’s used. But it shows how easily a tool like this can be used in the wrong ways with very little “malicious” action necessary.

          • Square Singer@feddit.de · 1 year ago

            The thing is, if you run these tools to find, e.g., cures for a disease, they will also spit out 40k possible matches, and of those only a handful will actually work and become real medicine.

            I guess harming might be a little easier than healing, but claiming that the output is actually 40k working neurotoxins is clickbaity, misleading and incorrect.