Ever since things came back up after the Jan 5th outage, I’ve been encountering regular timeouts. I will scroll past a couple dozen posts and then it will stop, as if there are no more. Within a few seconds, the application produces a timeout error.

I use Boost for Lemmy as my client, but I’m not convinced the issue is the app, since switching my instance lets me continue scrolling without a problem. And to be clear, prior to January 5th, I had never experienced a timeout in the app.

I’m curious whether I’m the only one experiencing timeouts on Lemmy.ca, and whether the admins are aware of any issue.

  • ⓝⓞ🅞🅝🅔OP · 1 day ago

    Still experiencing this on other apps as well. Attached is a screenshot from Voyager. I still need to figure out a way to get some logs.

    Any word on others experiencing this?

    • ShadowMA · edited · 21 hours ago

      Can you please DM me your public IP address? Also, if you can, give me the specific timestamps (down to the second) when you got the error message, so I can match them up against the logs.

    • OtterMA · 22 hours ago

      So far I haven’t seen anything outside this thread, and I agree that it seems like an issue specific to certain instances.

      I tried to summarize the details below and plan to look for more info. It could be related to something that changed in the Lemmy backend between versions 0.19.3 and 0.19.5, based on which instances are affected so far.

      Some things you can test if you have a chance:

      • See if the issue happens on a few other instances; it’s up to you which ones, but it might help to try some with different backend versions. If it happens with every instance except lemmy.world and lemmy.sdf.org, for example, that might confirm it. This page lists what version each instance is running: https://fedidb.org/software/lemmy
      • Does it happen with the mobile web browser?
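      To take the apps out of the equation entirely, you could also probe the HTTP API directly and see whether the feed stalls at a particular page. A rough sketch (endpoint and parameter names are my understanding of the 0.19 `post/list` API; treat them as assumptions and adjust as needed):

```python
import json
import urllib.parse
import urllib.request

def post_list_url(instance, cursor=None, limit=20):
    """Build a /api/v3/post/list URL for the given instance."""
    params = {"sort": "Active", "limit": str(limit), "type_": "All"}
    if cursor:
        params["page_cursor"] = cursor
    return f"https://{instance}/api/v3/post/list?" + urllib.parse.urlencode(params)

def probe(instance, pages=5):
    """Page through the feed until a request fails or pages run out."""
    cursor = None
    for page in range(1, pages + 1):
        try:
            with urllib.request.urlopen(post_list_url(instance, cursor), timeout=10) as resp:
                data = json.load(resp)
        except Exception as exc:  # timeout or HTTP error
            print(f"{instance}: failed on page {page}: {exc}")
            return
        print(f"{instance}: page {page}: {len(data.get('posts', []))} posts")
        cursor = data.get("next_page")
        if not cursor:
            break
```

      Running `probe("lemmy.ca")` and `probe("lemmy.world")` side by side should show whether one feed consistently dies at the same page, independent of any app.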

      The summary


      • Details:

        • Only happening to a few users, who are still able to access other instances just fine
        • For lemmy.ca, it started around Jan 5th after a hardware-related outage
        • “I tried turning off my WiFi and just using data and it seemed to help, which is even weirder.”
        • Clearing the app’s cache did not help
      • Instances Affected: lemmy.ca (BE: 0.19.7), sh.itjust.works (BE: 0.19.5)

      • Instances not affected: lemmy.world (UI: 0.19.3 BE: 0.19.3-7-g527ab90b7)

      • Clients: mobile apps (Boost, Sync, Voyager)

      • Issue:

        • Regular timeouts, after scrolling past a ‘couple dozen’ posts it will not load any more, followed by a timeout error message ([email protected] for lemmy.ca)
        • Also unable to access comments ([email protected] for sh.itjust.works)
      • Images:

        • Boost:
        • Voyager:
      • Other issue, but still could be related:

        • Comment copied multiple times (lemmyng for lemmy.ca)
      • ⓝⓞ🅞🅝🅔OP · edited · 15 hours ago

        Alright, @[email protected]. Here’s some more information.

        General Findings

        • Problem occurs on the WebUI as well while logged into Lemmy.ca.
        • Problem occurs regardless of WebUI front-end used.
        • Problem occurs regardless of which Android app is used.
          • Exception: Eternity (Nightly). I don’t know if they poll for information differently or what, but I can endlessly scroll without issue at the moment.
        • Error outputs are unfortunately inconsistent between FEs. (The standard mobile UI shows no error and simply renders empty.)
        • Problem occurs both with and without VPNs employed.

        Findings by Server

        • lemmy.ca: BE: 0.19.7
          • Problem instance and my primary.
        • lemmy.world: BE: 0.19.3-7-g527ab90b7
          • No issues observed. Logged in with Boost & Voyager using alternate account.
        • beehaw.org: BE: 0.18.4
          • No issues observed. Logged in with Boost using alternate account.
        • lemm.ee: BE: 0.19.8
          • No issues observed. Logged in Anonymously via Voyager Android App.
        • feddit.uk: BE: 0.19.7
          • Will try if/when they approve the account request.
          • Really want to try this one, since they are using the same BE as lemmy.ca.
        • lemmy.sdf.org and another instance with BE: 0.19.3
          • Will try if/when they approve my alt accounts. If someone knows an app where I can browse sdf anonymously, then I can do it now.

        Errors by Application

        Note the differences in how the error is presented. Sending timestamps and IP to @[email protected] via DM for the following:

        1. Boost (Android)

        2. Voyager (Android)

        3. Photon as Android Firefox PWA via lemmy.ca

        4. Lemmy WebUI via lemmy.ca: simply renders empty after the timeout (this was page 2)

        • ShadowMA · 15 hours ago

          I suspect something might be funky with your account, rather than the network or the apps. The timeout message is a lie; you’re really getting 400/499 errors that seem to be related to this Lemmy error message:

          2025-01-20T17:30:03.366014Z  WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: NotAModOrAdmin: NotAModOrAdmin
             0: lemmy_api_common::utils::check_community_mod_of_any_or_admin_action
                       at crates/api_common/src/utils.rs:102
             1: lemmy_api::local_user::report_count::report_count
          

          and:

          2025-01-20T20:39:43.202677Z  WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: CouldntFindPerson: Record not found
             0: lemmy_apub::fetcher::resolve_actor_identifier
                       at crates/apub/src/fetcher/mod.rs:22
             1: lemmy_apub::api::read_person::read_person
          

          I wonder if it could be a bug related to the unicode name you’ve got going on. Did you set that recently, or has it been like that since day 1?

          I just restarted all of our Docker containers; see if that helps? If not, try setting a normal name and confirm whether that changes things.