Hey everyone, this is Olga again, the product manager for the summary feature. Thank you all for engaging so deeply with this discussion and sharing your thoughts so far.

Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March. As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia and sustain our movement into the future. As a result, we need to figure out how we can experiment in ways that are safe and appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make space for folks to engage further. With that in mind, we’d like to take a step back so we have more time to talk through things properly. We’re still in the very early stages of thinking about a feature like this, so this is actually a really good time for us to discuss it here.

A few important things to start with:

  1. Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such.
  2. We do not have any plans to bring a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea and for any future idea around AI-summarized or AI-adapted content (see the sketch after this list).
  3. With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.
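
To make point 2 concrete, here is one minimal sketch of an editor-gated summary modeled as a state machine. Every name in it is hypothetical; this is an illustration of the idea under stated assumptions, not an actual Foundation design. A generated summary starts hidden and becomes visible to readers only after an editor explicitly approves it.

    from enum import Enum, auto
    from typing import Optional

    class SummaryState(Enum):
        PENDING = auto()    # generated by the model, not visible to readers
        APPROVED = auto()   # an editor signed off; may be shown
        REJECTED = auto()   # an editor declined; never shown

    class ModeratedSummary:
        """An AI-generated summary gated behind explicit editor review."""

        def __init__(self, article: str, text: str) -> None:
            self.article = article
            self.text = text
            self.state = SummaryState.PENDING
            self.reviewer: Optional[str] = None

        def approve(self, editor: str) -> None:
            self.state = SummaryState.APPROVED
            self.reviewer = editor

        def reject(self, editor: str) -> None:
            self.state = SummaryState.REJECTED
            self.reviewer = editor

        def visible_to_readers(self) -> bool:
            # Readers only ever see text an editor has approved.
            return self.state is SummaryState.APPROVED

    # Usage: a new summary starts hidden and stays hidden until approved.
    draft = ModeratedSummary("Dopamine", "Dopamine is a neurotransmitter that ...")
    assert not draft.visible_to_readers()
    draft.approve(editor="ExampleEditor")
    assert draft.visible_to_readers()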

We’ve also started putting together some context around the main points brought up in the conversation so far, and will follow up with that in separate messages so we can discuss further.

  • Coolbeanschilly · 2 days ago

    How about not putting AI into something that should be entirely human-controlled?

    • dan1101@lemm.ee · 2 days ago

      Yeah, as more organizations implement LLMs, Wikipedia has the opportunity to become more reliable and authoritative. Don’t mess that opportunity up with “AI.”

    • espentan@lemmy.world · 2 days ago

      These days, most companies that work with web-based products are under pressure from upper management to “use AI”, as there’s a fear of missing out if they don’t. Now, management doesn’t necessarily have any idea what they should use it for, so they leave that to product managers and such. They don’t have any idea either, so they look at what features others have built and find a way to adapt one or more of those to fit their own products.

      A slap on the back, job well done, clueless upper management happy, even though money and time have been spent and revenue remains the same.

      • jjjalljs@ttrpg.network · 2 days ago

        I’ve already posted this a few times, but Ed Zitron wrote a long article about what he calls “Business Idiots”: basically, people in decision-making positions who are out of touch with their users and their products. They make bad decisions, and that’s a big factor in why everything kind of sucks now.

        https://www.wheresyoured.at/the-era-of-the-business-idiot/ (it’s long)

        I think a lot of us have this illusion that higher-ranking people are smarter, more visionary, or whatever. But I think no. I think a lot of people are just kind of stupid, surrounded by other stupid people, cushioned from real, personal consequences. On top of that, for many enterprises, the incentives don’t line up with the users’ interests. At least Wikipedia isn’t profit-driven, but you can probably think of some things you’ve used that got more annoying with updates. Like Google putting more ads up top, or any website that does a redesign that yields more ad space and worse navigation.

    • ChicoSuave@lemmy.world · 2 days ago

      The sad truth is that AI empowers malicious actors to create a bigger impact on workloads and standards than humans alone can keep up with. An AI running triage on article changes, flagging or reporting the ones that need more input, would be ideal. But threat mitigation and integrity preservation don’t really seem to be high on their list of priorities.
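
      ChicoSuave’s triage idea is straightforward to sketch. Below is a minimal, hypothetical illustration in Python: edits get a risk score, and anything above a cutoff is queued for a human instead of being acted on automatically. The signals, threshold, and names are all invented for this example; real Wikipedia edit scoring is done by trained models (e.g. the ORES service), not hand-written rules like these.

      from dataclasses import dataclass

      @dataclass
      class Edit:
          page: str
          author: str
          added_text: str

      def risk_score(edit: Edit) -> float:
          """Toy stand-in for a real classifier or model call.

          Scores a few crude spam/vandalism signals; a production system
          would use far richer features (editor history, reverts, page
          protection level) and a trained model.
          """
          score = 0.0
          if "http" in edit.added_text.lower():
              score += 0.3  # external links are a common spam vector
          letters = [c for c in edit.added_text if c.isalpha()]
          if letters and sum(c.isupper() for c in letters) / len(letters) > 0.5:
              score += 0.3  # mostly-caps text is a weak vandalism signal
          if len(edit.added_text) > 5000:
              score += 0.2  # very large pastes deserve a closer look
          return min(score, 1.0)

      REVIEW_THRESHOLD = 0.5  # invented cutoff; tuning it is the hard part

      def triage(edits: list[Edit]) -> tuple[list[Edit], list[Edit]]:
          """Split edits into needs-human-review and auto-accept.

          Nothing is auto-reverted: the tool only routes attention, and a
          human makes every final call.
          """
          flagged, passed = [], []
          for edit in edits:
              (flagged if risk_score(edit) >= REVIEW_THRESHOLD else passed).append(edit)
          return flagged, passed

      sample = [
          Edit("Photosynthesis", "anon", "BUY CHEAP PILLS NOW CLICK https://spam.example"),
          Edit("Photosynthesis", "editor42", "Clarified the light-dependent reactions."),
      ]
      flagged, passed = triage(sample)
      print(f"{len(flagged)} flagged for review, {len(passed)} passed")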