Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.

Amazon conducted the test earlier this year for Australia’s corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.

The trial began with an assessment of several generative AI models, one of which was then selected to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open-source Llama2-70B, was prompted to summarise the submissions with a focus on mentions of ASIC, recommendations and references to more regulation, and to include page references and context.

Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.

These reviewers overwhelmingly found that the human summaries beat their AI competitors on every criterion and on every submission, scoring 81% on an internal rubric compared with the machine’s 47%.

    • pingveno@lemmy.world · 2 months ago

      I’m doing a series of conversations/interviews with my parents’ generation to keep a voice record of their stories. As part of that, I’m doing transcripts that start with the transcript feature of Google’s Recorder. It can do some nifty things like assign speakers to individual voices. I have to clean up the transcripts some, but it’s far less laborious than dealing with a 15-20 minute conversation. I can fix up a transcript in maybe 5 minutes.

    • Architeuthis@awful.systems · 2 months ago

      but it can make a human way more efficient, and make 1 human able to do the work of 3-5 humans.

      Not if you have to proof-read everything to spot the entirely convincing-looking but completely inaccurate parts, which is the problem the article cites.

        • WiseThat · 2 months ago

          If the error is hidden well, yes. Close-reading a text and cross-referencing everything it says takes MUCH longer than writing a piece you know is accurate to begin with.

      • soul@lemmy.world · 2 months ago

        For summarization, getting the data correct is crucial, because the manual typing itself is not a large chore. AI tends to shine more when the production itself is the labour, such as drafting a 10-page document. At that point, the balance tips the other way: proofing and correcting is much easier and less time-consuming than the writing itself. That’s where AI delivers gains in workflows. It has other fantastic uses as well, like being another voice for brainstorming ideas. If done well, you’re not taking the AI’s idea so much as using it to spur more creative thinking on your end.