Ever since things came back up after the Jan 5th outage, I’ve started to encounter regular timeouts. I’ll scroll past a couple dozen posts and then it stops, as if there are no more. Within a few seconds, the application produces a timeout error.
I use Boost for Lemmy as my client, but I’m not convinced the issue is the app, since switching my instance lets me continue scrolling without a problem. And to be clear, prior to January 5th I had never experienced a timeout in the app.
I’m curious if I’m the only one experiencing timeouts on Lemmy.ca. If so, then I’m curious if the admins are aware of any issue.
So far I haven’t seen anything outside this thread, and I agree that it seems like an issue specific to certain instances.
I tried to summarize the details below and plan to look for more info. Based on which instances are affected so far, it could be related to something that changed in the Lemmy backend between versions 0.19.3 and 0.19.5.
Some things you can test if you have a chance (there’s also a small direct-API sketch after the details below):
The summary
Details:
Instances affected: lemmy.ca (BE: 0.19.7), sh.itjust.works (BE: 0.19.5)
Instances not affected: lemmy.world (UI: 0.19.3, BE: 0.19.3-7-g527ab90b7)
Clients: mobile apps (Boost, Sync, Voyager)
Issue:
Images:
Other issue, but still could be related:
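If anyone wants to reproduce this outside a mobile app, below is a minimal sketch that pages the feed straight against the HTTP API. The endpoint and query parameters are the standard Lemmy ones as far as I know; the instance URL, page range, and JWT placeholder are just examples, and newer backends may prefer page_cursor over page.

```rust
// Minimal reproduction sketch: page the post list directly against the API
// and print whatever comes back, so a 400/499 isn't hidden behind an app's
// "timeout" message. Instance URL, page range, and <jwt> are placeholders.
//
// Cargo.toml: reqwest = { version = "0.12", features = ["json"] }
//             tokio   = { version = "1", features = ["full"] }
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::Client::builder()
        .timeout(Duration::from_secs(30))
        .build()?;

    // Walk a few pages the way a mobile client would while scrolling.
    for page in 1..=5u32 {
        let page = page.to_string();
        let resp = client
            .get("https://lemmy.ca/api/v3/post/list")
            .query(&[
                ("type_", "All"),
                ("sort", "Active"),
                ("limit", "20"),
                ("page", page.as_str()),
            ])
            // Authenticate as the affected account (0.19+ accepts a JWT here);
            // an unauthenticated run makes a useful comparison.
            .header("Authorization", "Bearer <jwt>")
            .send()
            .await?;

        let status = resp.status();
        let body = resp.text().await?;
        println!("page {page}: HTTP {status}\n{body}\n");
    }
    Ok(())
}
```

Running it logged in vs. logged out, against lemmy.ca and lemmy.world, should show pretty quickly whether this is account-specific or instance-wide.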
Alright, @[email protected]. Here’s some more information.
General Findings
Findings by Server
Errors by Application
Note the differences in how the error is presented. Sending timestamps and IP to @[email protected] via DM for the following:
Simply renders empty after the timeout; this was page 2.
I suspect something might be funky with your account rather than the network or the apps. The timeout message is a lie; you’re really getting 400/499 errors that seem to be related to this Lemmy error message:
2025-01-20T17:30:03.366014Z WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: NotAModOrAdmin: NotAModOrAdmin
   0: lemmy_api_common::utils::check_community_mod_of_any_or_admin_action
        at crates/api_common/src/utils.rs:102
   1: lemmy_api::local_user::report_count::report_count
and:
2025-01-20T20:39:43.202677Z WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: CouldntFindPerson: Record not found
   0: lemmy_apub::fetcher::resolve_actor_identifier
        at crates/apub/src/fetcher/mod.rs:22
   1: lemmy_apub::api::read_person::read_person
I wonder if it could be a bug related to the unicode name you’ve got going on. Did you set that recently, or has it been like that since day 1?
I just restarted all of our Docker containers; see if that helps. If not, try setting a normal name and confirm whether that changes things.
Well, I’ve checked out the source. (Rust is very foreign to me, haha.) Still, I can see the offending code now. I don’t really have the means to dig further, but I did learn the following:
Re: First error
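The first trace points at check_community_mod_of_any_or_admin_action in crates/api_common/src/utils.rs, which the report_count handler calls before answering. Roughly, the guard looks like the sketch below; the function and error names come straight from the trace, but the body and the DB helper are my paraphrase, not the literal source.

```rust
// Paraphrase of the guard named in the first trace. The function and error
// names are from the trace; the body and the CommunityView helper are guesses.
pub async fn check_community_mod_of_any_or_admin_action(
    local_user_view: &LocalUserView,
    pool: &mut DbPool<'_>,
) -> LemmyResult<()> {
    let person_id = local_user_view.person.id;

    // One DB query: is this person a moderator of *any* community, or an
    // admin? If that comes back false, the request is rejected outright.
    let is_mod_or_admin =
        CommunityView::is_mod_of_any_or_admin(pool, person_id).await?;

    if !is_mod_or_admin {
        return Err(LemmyErrorType::NotAModOrAdmin.into());
    }
    Ok(())
}
```

On its own that just means my account isn’t a mod or admin of anything when the app asks for a report count, which reads more like log noise than a cause.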
Re: Second error
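The second trace dies inside resolve_actor_identifier in crates/apub/src/fetcher/mod.rs while serving read_person, i.e. while turning a name@instance string into a person row, and “Record not found” means the lookup came back empty. A rough paraphrase of that path, where the helper name is a stand-in rather than the real one:

```rust
// Paraphrase of the lookup named in the second trace. The real function is
// generic over actor types; read_from_name_and_domain is a stand-in for
// whatever DB helper the actual code uses.
pub async fn resolve_person_identifier(
    identifier: &str, // e.g. "someuser@lemmy.ca"
    context: &LemmyContext,
) -> LemmyResult<Person> {
    // Split "name@domain"; a bare name falls back to the local hostname.
    let (name, domain) = identifier
        .split_once('@')
        .unwrap_or((identifier, context.settings().hostname.as_str()));

    // Local DB lookup. If the row is missing (or only half-written), this is
    // where "CouldntFindPerson: Record not found" surfaces from.
    Person::read_from_name_and_domain(&mut context.pool(), name, domain)
        .await?
        .ok_or_else(|| LemmyErrorType::CouldntFindPerson.into())
}
```

Which would line up with something in the DB being off for my account (or a person record it points at).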
So the cause? Who knows. Ugh.
Even though this started for me after the defective power supply was replaced… if that’s ALL they changed (with no server settings adjusted either), then we shouldn’t be seeing this at all. I still wonder if something happened while writing to the DB when the power cut; perhaps something related to my account is unhappy in the DB. Who knows.
You’ll all be changing hardware providers in the near future, and the next version of the BE, which you’ll eventually update to, also adjusts the affected code a bit. And so… maybe it will resolve itself?
One can hope.