Hi all!

As many of you have noticed, a number of Lemmy.World communities have introduced the MBFC (Media Bias/Fact Check) bot. It was introduced because modding can be pretty tough work at times and we are all just volunteers with regular lives. It has been helpful and we would like to keep it around in one form or another.

The mod team wants to give the community a chance to voice their thoughts on some potential changes to the MBFC bot. We have heard concerns that tend to fall into a few buckets. The most common is that the bot’s comment is too long. To address this, we’ve implemented a spoiler tag so that users need to click to see more information. We’ve also cut the wording about donations that people argued made the bot feel like an ad.
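For those curious how the collapsing works, here is a minimal sketch assuming Lemmy’s `::: spoiler` markdown syntax; it is an illustration, not the bot’s actual code:

```python
# Minimal sketch of the collapsing change, assuming Lemmy's "::: spoiler"
# markdown; illustrative only, not the bot's real implementation.
def wrap_in_spoiler(report: str, title: str = "Media Bias Fact Check details") -> str:
    """Collapse the bot's report behind a click-to-expand spoiler."""
    return f"::: spoiler {title}\n{report}\n:::"

print(wrap_in_spoiler("Bias: ...\nFactual reporting: ...\nCredibility: ..."))
```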

Another common concern is with MBFC’s definitions of “left” and “right,” which tend to be shaped by the American Overton window. Similarly, some have said that MBFC’s process for rating reliability and credibility feels opaque and/or subjective. To address this, we have discussed creating our own open-source system for scoring news sources. We would essentially start with third-party ratings, including MBFC’s, and produce an aggregate rating. We could also open a path for users to vote, so that any rating would reflect our instance’s opinion of a source. We would love to hear your thoughts on this, as well as suggestions for sources that rate news outlets’ bias, reliability, and/or credibility. Feel free to use this thread to share other constructive criticism about the bot too.
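To make the aggregation idea a bit more concrete, here is a rough sketch of what it could look like; the rater names, scales, and weights below are invented for illustration and nothing is decided:

```python
# Rough sketch of an aggregate rating; rater names, scales, and weights are
# placeholders and would need to be chosen by the community.
from statistics import mean

def aggregate_rating(third_party: dict[str, float], community_votes: list[float]) -> float:
    """Blend third-party reliability scores (0-10) with instance votes (0-10)."""
    external = mean(third_party.values())
    if not community_votes:
        return external
    # Equal weight between outside raters and our own users, as a starting point.
    return mean([external, mean(community_votes)])

# Example with made-up numbers from three hypothetical raters.
print(aggregate_rating({"mbfc": 7.0, "rater_b": 8.5, "rater_c": 6.5}, [9.0, 7.5, 8.0]))
```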

  • MindTraveller · 4 months ago · +25 / -6

    There’s no such thing as an objective left or right. It’s a relative scale. You shouldn’t have a bot calling things left or right at all.

    Also don’t push Ground News. They already get plenty of press from their astroturfing.

    • Randomgal · 4 months ago · +18 / -4

      This. The bot is effectively just propaganda for the author’s biases.

      • jeffw@lemmy.world (OP, mod) · 4 months ago · +3 / -13

        Do you think aggregating ratings from multiple fact-checkers would reduce that bias?

        • MindTraveller · 4 months ago · +9 / -1

          Keep in mind that if you base your judgements of left bias and right bias on the American Overton window, that window has been highly influenced by fascism over the last 10 years, and now your judgement is based on the normalisation of fascism, which your bot is implicitly accepting. That’s bad. If you’re going to characterise sources as left or right in any form, you need to pick a point that you personally define as center. And now your judgements are all going to implicitly push people towards that point. You could say that Karl Marx is the center of the political spectrum, or you could say Mussolini is. Both of those statements are equally valid, and they are as valid as what you are doing now. If you don’t want to push any set of biases, you need to stop calling sources left and right altogether.

        • OhNoMoreLemmy@lemmy.ml · edited · 4 months ago · +6

          No. The problem with your current bot isn’t that the website authors have a particular axe to grind, it’s that they’re just in a rush and a bit lazy.

          This means they tend to say that news sites which acknowledge and correct their own mistakes have credibility problems, because it’s right there - the sites themselves acknowledged issues - even though these are often the most credible sites, because they fix errors and care about being right.

          Similarly, the whole left-right thing is half-assed and completely useless for anyone who doesn’t live in the US, while anyone who does live in the US probably already has an opinion about these US news sources.

          Because these are lazy errors, lots of people will make similar mistakes, and aggregating ratings will amplify them while letting you pretend to be objective without fixing anything.
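          To make that concrete, a toy example with made-up numbers: if every rater applies the same lazy two-point penalty to outlets that publish corrections, averaging them just reproduces the penalty:

          ```python
          from statistics import mean

          fair_score = 9.0                                            # made-up "fair" reliability score
          raters = [fair_score - 2, fair_score - 2, fair_score - 2]   # same lazy penalty from every rater
          print(mean(raters))                                         # 7.0 - the shared error survives averaging
          ```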

        • Randomgal · 4 months ago · +3

          Hm… At some point a human will have to say “Yes, this response is correct.” to whatever the machine outputs. The output then takes on the bias of that human. (This is unavoidable, I’m just pointing it out.) If this is really not an exercise in ideological propaganda, a solution could be for the bot to provide arguments rather than conclusions. Instead of telling me a source is “Left” or “Biased”, it could say: “I found this commentary/article/website/video discussing this source’s political leaning (or quality): Link 1 Link 2 Link 3”

          Here you reduce bias by presenting information instead of conclusions, and then letting readers come to their own conclusions based on that information. This is not only better for education, but also helps readers develop their critical thinking.

          Instead of… You know, being told what to think about what by a bot.
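          A rough sketch of what such a comment could look like; the outlet name and links below are placeholders, obviously:

          ```python
          def build_comment(outlet: str, links: list[str]) -> str:
              """Compose a bot comment that offers reading material instead of a verdict."""
              lines = [f"I found these discussions of {outlet}'s political leaning and track record:"]
              lines += [f"- {url}" for url in links]
              return "\n".join(lines)

          print(build_comment("Example Daily", ["https://example.org/analysis-1",
                                                "https://example.org/analysis-2"]))
          ```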

    • jeffw@lemmy.world (OP, mod) · 4 months ago · +4 / -11

      Honestly, the first time I heard of Ground News was in a discussion about integrating it with the bot. Do you have any thoughts on alternatives, or would you prefer that bit just be removed from the bot’s comment?

      • MindTraveller · 4 months ago · +4

        Someone else in this thread said to link to media literacy resources and I agree with them.