Happy Canada Day everyone.

Regarding the outage that happened last night: we rebooted the Lemmy services, but we're still trying to determine the root cause. The logs point to an out-of-memory issue, though that doesn't match what we see in our monitoring console.

In the meantime, we will monitor the service more closely until we are confident the issue is resolved, and we will improve our tools to detect such a problem faster.

EDIT: It also happened during the night of July 2nd; still trying to find the root cause…

Apologies for the extended downtime.

  • ShadowMA · 3 days ago

    Something got into a weird state, and restarting either the backend or the frontend didn't help. Taking the entire stack down and then bringing it back up resolved it.

    It’s weird, since it crashed at 1am and at 3am we gradually restarted all backends and frontends, so that automatic restart should have fixed it too. All the containers reported healthy, but nginx wasn’t reporting any available frontends.

    I suspect some sort of weird Lemmy bug, but for now we’ll just have to improve monitoring and try to debug this further if it happens again.
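    Since the container health checks passed while nginx had no available frontends, one way to catch this class of failure faster is to also probe the site from the outside, through nginx itself. A minimal sketch of such a probe, in Python; the URL, function names, and status-code buckets are all hypothetical placeholders, not the instance’s actual monitoring setup:

    ```python
    # External health probe sketch: internal container health checks missed
    # this failure, so also check what nginx is actually serving to users.
    # All names and thresholds here are hypothetical, for illustration only.
    import urllib.request
    import urllib.error

    def classify_response(status: int) -> str:
        """Map an HTTP status seen at the front door to a coarse health state."""
        if 200 <= status < 400:
            return "healthy"
        if status in (502, 503, 504):
            # nginx is up but reports no available upstream frontends
            return "no-upstream"
        return "degraded"

    def probe(url: str, timeout: float = 5.0) -> str:
        """Probe the public endpoint; treat connection failures as 'down'."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return classify_response(resp.status)
        except urllib.error.HTTPError as e:
            # nginx answered with an error page (e.g. 502 Bad Gateway)
            return classify_response(e.code)
        except (urllib.error.URLError, OSError):
            return "down"
    ```

    Run periodically (cron, systemd timer), a probe like this would have flagged the 502s from nginx even while every container still reported healthy.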