cross-posted from: https://sh.itjust.works/post/18066953

On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.

  • floofloofOP · 7 months ago

    For now you can tell. Next year you may not be able to.

    • AwkwardLookMonkeyPuppet@lemmy.world · 7 months ago

      Every month for the last year, the field has made more progress in AI than researchers expected to see over the next couple of years combined, and the rate of progress is accelerating. It’s coming much sooner than anyone thinks.