A manipulated video that mimics the voice of Vice President Kamala Harris saying things she did not say is raising concerns about the power of artificial intelligence to mislead, with Election Day about three months away.

The video gained attention after tech billionaire Elon Musk shared it on his social media platform X on Friday evening without explicitly noting it was originally released as parody.

The video uses many of the same visuals as a real ad that Harris, the likely Democratic presidential nominee, released last week launching her campaign. But the video swaps out the voice-over audio for another voice that convincingly impersonates Harris.

  • SpaceCowboy · 1 month ago

    It also feels like there’s something subliminal about it. You’re hearing her voice saying she’s a deep-state puppet, then it cuts to her actually making a mistake in a speech (using old-school editing techniques there too), then back to the deepfake voice, then back to actual video of her.

    Sure, when you see it you know which is her and which is the deepfake. But later, if you see some of those actual clips again, you might recall seeing them somewhere before and then vaguely recall some of the things the deepfake voice said along with them.

    It’s very insidious, really. Memory is a weird thing, and I don’t know whether the effect this sort of thing could have on people has been studied. But it seems plausible that you could create false memories of someone saying something they didn’t say by intercutting things they did say with a deepfake of things they didn’t.

    • thanks_shakey_snake · 1 month ago

      Yeah, I think that’s exactly right. I don’t think it’s a sophisticated, deliberate psyop or anything like that, but the effect you describe certainly exists.

      Most people are only partially paying attention to most of the information they consume, even the smart, thoughtful ones. Combined with the lossy storage of human memory, that makes it easy to cache the wrong conclusions when exposed to stuff like this.