Virginia Congresswoman Jennifer Wexton used an artificial intelligence (AI) programme to address the House on Thursday. A year ago, the lawmaker was diagnosed with progressive supranuclear palsy, which makes it difficult for her to speak.

The AI programme allowed Wexton to make a clone of her speaking voice using old recordings of appearances and speeches she made in Congress. Wexton appears to be the first person to speak on the House floor with a voice recreated by AI.

  • ulkesh@beehaw.org · 22 points · 4 months ago

    This is a valid problem to solve with AI. I sure wish the CEOs of all the moron companies jumping on the AI buzzword bandwagon would take note that AI should be used to solve real problems, not just to hitch a ride on the hype train and hope your stock goes up.

    • millie@beehaw.org · 4 points · 4 months ago

      That would require executives to be capable of generating actual value rather than burning it for short-term profits.

  • floofloof · 4 points · 4 months ago

    Is it following her real voice in real time, or is the script prepared in advance?

    • tardigrada@beehaw.org (OP) · 7 points · 4 months ago

      That’s a good question that none of the articles I found on the web answers. But Ms. Wexton also spoke to Time magazine using the device, and the magazine says:

      During the interview at her dining room table in Leesburg, Virginia, the congresswoman typed out her thoughts, used a stylus to move the text around, hit play and then the AI program put that text into Wexton’s voice. It’s a lengthy process, so the AP provided Wexton with a few questions ahead of the interview to give the congresswoman time to type her answers.

      Source: A Neurological Disorder Stole Her Voice. Jennifer Wexton Took It Back With AI on the House Floor

      [Edit typo.]
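
      As a rough illustration of that type-then-play workflow (not Wexton's actual software), here is a minimal sketch using the open-source Coqui TTS library and its XTTS voice-cloning model; the text and file paths are made up for the example.

      ```python
      # Minimal voice-cloning TTS sketch with Coqui TTS (XTTS v2).
      # Illustration only; this is not the tool Wexton used, and the paths/text are hypothetical.
      from TTS.api import TTS

      # Load a multilingual voice-cloning model (downloads weights on first run).
      tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

      typed_text = "Thank you, Mr. Speaker."          # text the speaker types out
      reference_clip = "old_floor_speech_sample.wav"  # short clip of the speaker's real voice

      # "Hit play": synthesize the typed text in the cloned voice and write it to a file.
      tts.tts_to_file(
          text=typed_text,
          speaker_wav=reference_clip,
          language="en",
          file_path="spoken_remarks.wav",
      )
      ```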

  • Visikde@beehaw.org · 2 points · 4 months ago

    At least Wexton supplied some of the data to make it all work.
    I wonder where the data to develop the program came from.
    Can AI be developed ethically, or do the datasets have to be so large that the job requires pilfered data?

    • averyminya@beehaw.org · 2 points · 4 months ago

      TTS voice models have been around a while now and don’t require much more than a 5-second sample of voice data. Tortoise TTS, among many, many others, for example.
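
      For anyone curious what that looks like in code, below is a minimal sketch following the usage pattern documented in the tortoise-tts project; the voice name "myvoice" and the output path are placeholders, and the exact API may differ between versions.

      ```python
      # Sketch of voice cloning with Tortoise TTS, based on the project's documented usage.
      # Assumes a few short WAV clips of the target voice in tortoise/voices/myvoice/ (hypothetical).
      import torchaudio
      from tortoise.api import TextToSpeech
      from tortoise.utils.audio import load_voice

      tts = TextToSpeech()

      # Load the reference clips (and any cached conditioning latents) for the custom voice.
      voice_samples, conditioning_latents = load_voice("myvoice")

      # Render typed text in the cloned voice.
      speech = tts.tts_with_preset(
          "Thank you, Mr. Speaker.",
          voice_samples=voice_samples,
          conditioning_latents=conditioning_latents,
          preset="fast",
      )

      # Tortoise outputs 24 kHz audio tensors; save the result to disk.
      torchaudio.save("generated.wav", speech.squeeze(0).cpu(), 24000)
      ```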