• Square Singer@feddit.de · +58/-1 · 1 year ago

    News stories like that are nice and all, but this is what the current state of AI is:

    The news story just said how many suspected neurotoxins it suggested, not how many of them actually are neurotoxins.

    It probably printed 40k random chemical formulae.

    • Steeve · +14/-1 · edited · 11 months ago

      deleted by creator

      • fishos@lemmy.world · +11/-1 · 1 year ago

        Except, to my understanding, it wasn’t an LLM. It was a protein-mapping model or something similar. And instead of telling it “run iterations and select the things that are beneficial based on XYZ”, they said “run iterations and select based on non-beneficial XYZ”.

        They ran a protein-coding-type model and told it to prioritize HARMFUL results over good ones, so it produced results that would cause harm.

        Now, yes, those results still need to be verified. But it wasn’t just “making things up”. It was using real data to iterate faster than a human could. Very similar to the Folding@home project.
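The objective flip described above can be sketched in a few lines. This is a toy illustration only: the hill-climbing loop and the numeric "toxicity" score are invented stand-ins, not the actual model or data.

```python
import random

def search(score, n_iters=1000, seed=0):
    """Hill-climb over a toy 1-D 'compound' space, keeping whatever
    candidate scores highest under the supplied objective."""
    rng = random.Random(seed)
    best = rng.random()                     # stand-in for a candidate compound
    best_score = score(best)
    for _ in range(n_iters):
        candidate = min(1.0, max(0.0, best + rng.uniform(-0.1, 0.1)))
        if score(candidate) > best_score:
            best, best_score = candidate, score(candidate)
    return best

toxicity = lambda x: x                      # toy predictor: higher x = more toxic

safe = search(lambda x: -toxicity(x))       # optimize FOR low toxicity
harmful = search(toxicity)                  # same loop, objective sign flipped
```

The point is that `search` is identical in both calls; only the success condition changes, which is why the same tool serves both purposes.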

          • fishos@lemmy.world · +3/-1 · edited · 1 year ago

            No problem. I’m totally on board with the “LLMs aren’t the AI singularity” page. This one is actually kinda scary to me, because it shows how easily you can take a model/simulation and, instead of asking “how can you improve this?”, ask “how can I make this worse?”. The same tool used for good can easily be used for bad when you change the “success conditions”. It’s not the tech’s fault, of course; it’s a tool, and what matters is how it’s used. But it shows how easily a tool like this can be turned to the wrong ends with very little “malicious” effort.

            • Square Singer@feddit.de · +2 · 1 year ago

              The thing is, if you run these tools to find e.g. cures for a disease, they will also spit out 40k possible matches, and of those only a handful will actually work and become real medicine.

              I guess harming might be a little easier than healing, but claiming that the output is actually 40k working neurotoxins is clickbaity, misleading and incorrect.
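A back-of-the-envelope sketch of that funnel. The pass rates below are invented for illustration only; they are not from the article or any real pipeline.

```python
# Toy discovery funnel: 40k raw model suggestions whittled down by
# successive verification stages. Pass rates are made up for illustration.
candidates = 40_000
stages = [("in-silico screen", 0.05),
          ("lab assay", 0.02),
          ("animal/clinical validation", 0.10)]
for name, pass_rate in stages:
    candidates = int(candidates * pass_rate)
    print(f"{name}: {candidates} remain")
```

Under these assumed rates, 40,000 suggestions shrink to a handful of real hits, which is the commenter's point about raw output counts versus working compounds.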

  • WoahWoah@lemmy.world · +16 · 1 year ago

    So it generated known chemical weapons, as well as previously unknown compositions that to all appearances would be effective chemical weapons. They didn’t actually test them, for obvious reasons, but their animal toxicology models made pretty clear that these would be effective toxic compositions that could easily be weaponized, and it did all of that in under six hours.

    • ramble81@lemmy.world · +4 · 1 year ago

      Ever seen the Animatrix? It shows how the machines rose up to enslave humans. They used nuclear weapons against the humans because the radiation hurt humans but not machines, even though an EMP would have. If anything, I think our AI overlord would start with a chemical weapon, since that wouldn’t hurt it at all and there’d be no chance of getting caught in the blast or the EMP wave.

  • WoahWoah@lemmy.world · +9 · 1 year ago

    An article on the subject.

    FTA: "In responding to the invitation, Sean Ekins, Collaborations’ chief executive, began to brainstorm with Fabio Urbina, a senior scientist at the company. It did not take long for them to come up with an idea: What if, instead of using animal toxicology data to avoid dangerous side effects for a drug, Collaborations put its AI-based MegaSyn software to work generating a compendium of toxic molecules that were similar to VX, a notorious nerve agent?

    The team ran MegaSyn overnight and came up with 40,000 substances, including not only VX but other known chemical weapons, as well as many completely new potentially toxic substances. All it took was a bit of programming, open-source data, a 2015 Mac computer and less than six hours of machine time. “It just felt a little surreal,” Urbina says, remarking on how the software’s output was similar to the company’s commercial drug-development process. “It wasn’t any different from something we had done before—use these generative models to generate hopeful new drugs.”"
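Going by the article's description, the "flip" amounts to inverting a selection criterion over the same generate-and-score pipeline. A minimal sketch, assuming a numeric toxicity predictor and a 0.5 cutoff; the function names and threshold are hypothetical, and nothing here is MegaSyn's actual API.

```python
def select(candidates, predicted_toxicity, avoid_toxic=True):
    """Filter candidates by predicted toxicity. The only difference between
    drug-discovery mode and the inverted run is the direction of comparison."""
    scored = [(predicted_toxicity(c), c) for c in candidates]
    if avoid_toxic:
        return [c for tox, c in scored if tox < 0.5]    # normal drug search
    return [c for tox, c in scored if tox >= 0.5]       # inverted objective

# Toy demo: candidates are plain numbers, toxicity is the identity function.
mols = [0.1, 0.9, 0.4, 0.7]
print(select(mols, lambda m: m))                        # low-toxicity picks
print(select(mols, lambda m: m, avoid_toxic=False))     # high-toxicity picks
```

The same scoring model serves both runs, which matches Urbina's remark that the process "wasn't any different" from their normal drug-development workflow.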

  • Arache Louver@lemmy.blahaj.zone · +2/-19 · 1 year ago

    We don’t need AI suggestions; Bidet already sent some of the nuclear arsenal to “president” Zelenzki. The crazy ruler class is enough.