I have an annoying problem on my server and Google has been of no help. I have two drives mirrored for the OS through mdadm, and I recently replaced them with larger versions through the normal process of replacing one drive at a time, letting the new drive re-sync, and then growing the RAIDs in place. Everything is working as expected, with the exception of systemd… It is filling my logs with timeout messages while it tries to locate the two old drives that no longer exist. Mdadm itself is perfectly happy with the new storage space and has reported no issues, and since this is a server I can’t just blindly reboot it to get systemd to shut the hell up.
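
For anyone curious, the swap went roughly like this for each disk (md0 and sdb1 here are just stand-ins for my actual device names):

    # mark one half of the mirror as failed and pull it from the array
    mdadm /dev/md0 --fail /dev/sdb1
    mdadm /dev/md0 --remove /dev/sdb1
    # physically swap the disk, partition it, then add it back and let it re-sync
    mdadm /dev/md0 --add /dev/sdb1
    cat /proc/mdstat   # watch the re-sync progress
    # once both disks were replaced, grow the array into the new space
    mdadm --grow /dev/md0 --size=max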

So what’s the solution here? What can I do to make this error message go away? Thanks.

[Update] Thanks to everyone who made suggestions below. It looks like I finally found the solution in systemctl daemon-reload; there is also a lot of other great troubleshooting info in this thread. I’m still trying to learn the systemd stuff, so this has all been greatly appreciated!
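
For anyone landing here from a search, the fix boiled down to a single command:

    systemctl daemon-reload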

  • Shdwdrgn@mander.xyzOP

    Sounds interesting. Any chance you can tell me what it does? Google doesn’t even seem to have any hits on “olddisk.mount”, and I want to make sure this won’t break anything else, since it could be months before the system is intentionally rebooted again.

    Also of note: I don’t see anything with a name similar to olddisk.mount in the systemd folder. Is this command unique to a particular distro? For reference, I’m running Debian.

    • XTL@sopuli.xyz

      I think olddisk refers to the name of your device. Try systemctl status or just systemctl and see if it’s in the output. Or find the name in the journal.
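
      For example, something like this (the grep patterns are just placeholders for your old disk’s model or UUID):

          systemctl list-units --all | grep -i "disk"
          journalctl -b | grep -i "timed out"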

      • Shdwdrgn@mander.xyzOP

        Status reports “State: degraded” but doesn’t say WHAT is degraded, and it shows no other errors (and /proc/mdstat shows no errors either). Running systemctl by itself does show an error from logrotated, but that seems unrelated?

        I do see the drive errors again in journalctl but I don’t see anything helpful here… maybe you’ll see something? These errors get repeated for both of the old drives about every 30 minutes, and I believe the UUIDs are for the old drives since they don’t match any existing drive.

        Oct 11 07:10:40 Juno systemd[1]: Timed out waiting for device ST500LM021-1KJ152 5.

        Oct 11 07:10:40 Juno systemd[1]: Dependency failed for /dev/disk/by-uuid/286e26b0-603a-43b2-bc0f-30853998d5ab.

        Oct 11 07:10:40 Juno systemd[1]: dev-disk-by\x2duuid-286e26b0\x2d603a\x2d43b2\x2dbc0f\x2d30853998d5ab.swap: Job dev-disk-by\x2duuid-286e26b0\x2d603a\x2d43b2\x2dbc0f\x2d30853998d5ab.swap/start failed with result 'dependency'.

        Oct 11 07:10:40 Juno systemd[1]: dev-disk-by\x2duuid-286e26b0\x2d603a\x2d43b2\x2dbc0f\x2d30853998d5ab.device: Job dev-disk-by\x2duuid-286e26b0\x2d603a\x2d43b2\x2dbc0f\x2d30853998d5ab.device/start failed with result 'timeout'.

        Oct 11 07:10:40 Juno systemd[1]: dev-disk-by\x2duuid-96b0277b\x2dcf9d\x2d4360\x2dbf90\x2d691166cff52b.device: Job dev-disk-by\x2duuid-96b0277b\x2dcf9d\x2d4360\x2dbf90\x2d691166cff52b.device/start timed out.
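
        (In case the \x2d bits look like corruption: that’s just systemd escaping the hyphens in a device path when it converts it into a unit name. You can reproduce it with systemd-escape, e.g.:

            systemd-escape -p /dev/disk/by-uuid/286e26b0-603a-43b2-bc0f-30853998d5ab

        which prints the dev-disk-by\x2duuid-… form shown above.)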

      • caseyweederman

        Right. Maybe systemctl list-automounts to find the name? I’ve never had exactly this problem, though.

        It looks like list-automounts is relatively new, so try systemctl status --full --all -t mount to list all the mount units and look for your old disks in the output. -t automount might also work, but mine is empty, which makes me think this might not be related to the automount unit type. Hopefully this will point us in the right direction, though.
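
        Also, since status reported “State: degraded”, systemctl --failed should list exactly which units tripped that state:

            systemctl --failed

        If the old disks’ swap or device units show up in that list, those are the stale units to chase down.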

        • Shdwdrgn@mander.xyzOP

          Ah cool… the ‘full’ command actually advised running systemctl daemon-reload, which appears to have cleared the listed errors. Based on the previous errors in the log, it will likely be another 20 minutes or so before a new error would be generated, so I’m waiting to see what happens now.
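
          From what I’ve read, that makes sense: daemon-reload re-runs the systemd generators, including the one that builds mount and swap units from /etc/fstab, so stale units pointing at the old UUIDs get rebuilt. In the meantime I’m watching the journal to see if the timeouts come back:

              journalctl -f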

        • Shdwdrgn@mander.xyzOP

          That appears to be a success! Thanks for the pointers; I’m still trying to figure out the systemd stuff since I rarely have to touch it.

            • Shdwdrgn@mander.xyzOP

              Still no new errors in the logs. It wasn’t hurting anything, it was just annoying, and I didn’t want to reboot a server just because of a logging issue! 😆