With the rapid advances we’re currently seeing in generative AI, we’re also seeing a lot of concern about large-scale misinformation. Any individual with sufficient technical knowledge can now flood a forum with organic-looking voices and generate photos to back them up. Has anyone given some thought to how we can combat this? If so, how do you think the solution should/could look? How do you personally decide whether you’re looking at a trustworthy source of information? Do you think your approach works, or are there still problems with it?

  • Showroom7561
    1 year ago

    This is an unfortunate future. Unless something is done fast, the majority of content on the internet will simply be generated content with bots interacting with other bots.

    Unless we only allow users who verify their identity to participate on certain websites, I can’t see how else you could solve this problem.

    Even then, some bad actors with a verified identity could be generating content using AI and posting it as their own.

    I’m not even sure how anyone will be able to trust or believe any photo, video, or written idea online in the next 5 to 10 years.

    • howrarOP
      1 year ago

      I think the idea with a verified identity is that each person only gets one. If that is the case and you find misinformation from them, it’s easy to block the one account. It’s not so easy to block if there are thousands of accounts made by the same person.

      I don’t know how you would be able to enforce a one-ID-per-person limit, though. Government identification requires trust in the government and/or in the entity verifying your identity, and it requires that your government provides useful identification in the first place. Phone numbers don’t work because a single person can acquire multiple numbers, many people have none, and numbers get transferred to different people.

    • mmin@lemmy.one
      1 year ago

      I recently heard this view from someone who has worked with social media quite a lot: in the future we should basically only have verified online identities, and all unverified identities will be assumed to be bots. This is kind of sad to me, as anonymous online interaction still seems valuable. In any case, it does seem like a race: we try to make up tests to prove we’re human, and others try to train AIs to pass those tests.

  • xurxia@lemmy.ml
    1 year ago

    I think the basic solution is education: we need to educate our children in critical thinking. Generative AI is just one more source of misinformation, like “pseudoscience” disguised as real science (fake papers, manipulated data, …). It is not good that teenagers believe something is true only because it is on the internet (blogs, YouTube, etc.).

    • howrarOP
      1 year ago

      Definitely, critical thinking is something we’re sorely lacking as a society. I don’t think this is purely an education problem, though. Thinking takes a lot of time and energy, both of which are scarce when you’re spending them on just trying to survive.

      However, critical thinking would only help for things like scientific claims. If someone tells you “Bob from two states over ate a burger for lunch on June 19th”, no amount of critical thinking can help you figure out whether what you read is true or not. It’s a silly example, but I think you can imagine a more serious lie that’s equally impervious to critical thinking.

  • themizarkshow@lemm.ee
    1 year ago

    I think the way that Bing has very clearly left footnotes for its sources and uses metadata on generated images is probably the best way forward at the moment. The tools creating the problems should be helping combat this first and foremost, especially since other means will take time (e.g. legislation or better national ID systems).

    Where those solutions fall short is where other means can fill the gaps.

    • howrarOP
      1 year ago

      the way that Bing has very clearly left footnotes for its sources […] is probably the best way forward at the moment

      This brings us to the issue of being reliant on one entity (Bing) to decide whether the source is reliable. How do we know if this entity can be trusted, and how can we know if that ever changes? Assuming we can trust them, this just passes the problem onto someone else. How would this entity decide whether sources are reliable or not before feeding them to us?

      Can you elaborate a bit on what you mean by metadata on generated images? What kind of metadata and what can you do with them?

      • themizarkshow@lemm.ee
        1 year ago

        While you have a point, I’d say that Bing’s footnotes are much better than Bard just providing a definitive answer. With footnotes, you can follow Bing’s logic from the sources provided and make your own call on whether it was right/wrong, whereas Bard is a black box that Google hopes we implicitly trust.

        As for the metadata… it’s at least giving credit to the generator/source in the same way a camera will embed details about the hardware/software that was used to capture the image. It’s not immediately useful for end users, but if we align on a standard ASAP, then those documents can be flagged as Gen-AI output across different ecosystems and be tagged or filtered based on personal/platform preferences. Like the search answers, it’s not the be-all and end-all solution for fighting misinformation, but it gives the user a chance to make their own decision instead of trusting anything placed in front of them. And it gives platforms a way to scan for Gen-AI content and deal with it as they see fit.
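
        To make that a bit more concrete, here’s a rough sketch of what writing and reading such a marker could look like using the Pillow library. The “ai_generated” and “generator” fields are made up for illustration; an actual standard like C2PA defines its own, cryptographically signed fields.

            # Rough illustration only: embed and check a made-up "ai_generated"
            # marker in PNG text metadata using Pillow. Real standards (e.g. C2PA)
            # define their own signed fields; these tag names are invented.
            from PIL import Image
            from PIL.PngImagePlugin import PngInfo

            def tag_as_generated(src_path: str, dst_path: str, tool_name: str) -> None:
                """Re-save the image with text chunks marking it as AI-generated."""
                img = Image.open(src_path)
                meta = PngInfo()
                meta.add_text("ai_generated", "true")  # hypothetical marker
                meta.add_text("generator", tool_name)  # which tool produced the image
                img.save(dst_path, pnginfo=meta)

            def looks_generated(path: str) -> bool:
                """True if the image still carries the hypothetical marker."""
                info = Image.open(path).info  # PNG text chunks show up in .info
                return info.get("ai_generated") == "true"

            tag_as_generated("photo.png", "photo_tagged.png", "some-image-generator")
            print(looks_generated("photo_tagged.png"))  # True, until something re-encodes the file

        Plain text chunks like these are trivially stripped or forged the moment an image is re-saved or screenshotted, which is why a real standard needs signing on top of this, but the basic read/write flow would look roughly the same.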

  • modulus@lemmy.ml
    1 year ago

    I’m having trouble understanding why disinformation produced by an AI is more of a problem than disinformation produced by a person. Sure, in theory it can be made to scale a lot more, though I would point out that AI is not, at the moment, light on resources either. But it’s unclear to me to what extent that makes a difference.

    • howrarOP
      1 year ago

      I don’t believe the content itself will be any more of an issue than human-generated misinformation. The main issue I see is that a single person can now achieve this on a large scale without ever leaving their mom’s basement, and at a much lower cost. It’s the concentration of power that I find concerning.