• Clinicallydepressedpoochie@lemmy.world
    2 months ago

    Ok, been thinking about this and maybe someone can enlighten me. Couldn’t LLMs be used for code breaking and encryption cracking? My thought is that language has a cadence, so even if you scramble it to hell, shouldn’t that cadence still be present in the encrypted output? Couldn’t you feed an LLM a bunch of that encrypted output and train it to look for conversational patterns, spitting out likely dialogues?

    • projectmoon@lemm.ee
      2 months ago

      That would probably be a task for regular machine learning. Plus proper encryption shouldn’t have a discernible pattern in the encrypted bytes. Just blobs of garbage.
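
      If you want to see the “blobs of garbage” part for yourself, here’s a rough Python sketch (it assumes the third-party cryptography package; the sample text and sizes are just made up for illustration). It measures the byte-level Shannon entropy of some repetitive English text and of its AES-GCM ciphertext - the plaintext has obvious structure, while the ciphertext comes out essentially indistinguishable from random bytes:

          import math
          import os
          from collections import Counter

          from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

          def entropy_bits_per_byte(data: bytes) -> float:
              # Shannon entropy of the byte distribution; 8.0 means "looks uniformly random".
              counts = Counter(data)
              n = len(data)
              return -sum((c / n) * math.log2(c / n) for c in counts.values())

          # Repetitive English text: a limited byte alphabet with lots of structure.
          plaintext = b"the quick brown fox jumps over the lazy dog. " * 200

          key = AESGCM.generate_key(bit_length=128)
          nonce = os.urandom(12)
          ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

          print("plaintext :", round(entropy_bits_per_byte(plaintext), 2), "bits/byte")   # roughly 4
          print("ciphertext:", round(entropy_bits_per_byte(ciphertext), 2), "bits/byte")  # close to 8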

    • nova_ad_vitum
      2 months ago

      Could there be patterns in ciphertext? Sure. But modern cryptography is designed precisely to prevent patterns like the one you’re describing. Modern cryptographic algorithms that are considered good all have the avalanche effect baked in as a basic design requirement:

      https://en.m.wikipedia.org/wiki/Avalanche_effect

      Basically, with the same encryption key, changing even one character of the input text produces a completely different ciphertext. That doesn’t mean there couldn’t possibly be patterns like the one you described, but it makes them very unlikely.
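
      If you want to see that property directly, here’s a tiny Python sketch using the standard library’s hashlib (SHA-256 is designed with the same avalanche requirement; the two inputs are just made-up examples). Changing a single word flips roughly half of the 256 output bits:

          import hashlib

          def differing_bits(a: bytes, b: bytes) -> int:
              # Count how many bits differ between two equal-length byte strings.
              return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

          h1 = hashlib.sha256(b"attack at dawn").digest()
          h2 = hashlib.sha256(b"attack at dusk").digest()  # one-word change in the input

          print(differing_bits(h1, h2), "of", len(h1) * 8, "bits changed")
          # Expect a value near 128, i.e. about half of the output bits flip.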

      More to your point: given the number of people playing with LLMs these days, I doubt LLMs have any special ability to find whatever minute, intentionally obfuscated patterns may exist - we would have heard about it by now. Or… maybe we just don’t know about it. But I think the odds are really low.

    • Sharkwellington@lemmy.one
      2 months ago

      This is a good question and your curiosity is appreciated.

      A password that has been properly hashed (the kind of scrambling described in that avalanche effect Wikipedia entry, applied to passwords before they’re stored) can take trillions of years to crack by brute force, and each additional character pushes that number up exponentially. Unless the AI can bring it down to less than 90 days - a fairly standard password-change frequency in corporate environments - or, heck, just under 100 years so it can be done within the hacker’s lifetime, it doesn’t really matter how much faster the cracking gets.
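
      For a rough sense of those numbers, here’s a quick back-of-the-envelope Python sketch (the 94-character set and the trillion-guesses-per-second rate are assumptions for illustration, not measurements):

          # Back-of-the-envelope brute-force math: time to try every combination.
          CHARSET = 94                # printable ASCII characters (assumed)
          GUESSES_PER_SECOND = 1e12   # an extremely well-equipped attacker (assumed)
          SECONDS_PER_YEAR = 31_557_600

          for length in (10, 12, 14, 16):
              combinations = CHARSET ** length
              years = combinations / GUESSES_PER_SECOND / SECONDS_PER_YEAR
              print(f"{length} characters: ~{years:,.0f} years")

      With these assumed numbers, 16 characters already lands past a trillion years, and proper password hashing (bcrypt, Argon2, etc.) slows the guess rate down by several more orders of magnitude.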

      The easier method (already happening, in fact) is to use an LLM to scan a person’s social media and then reach out to their relatives pretending to be that person, asking for bail money, logins, etc. If the data is sufficiently locked down, the weakest link will be the human who knows how to get to it.