This is a relief to find! I just looked at htop and panicked over the high amount of “used” memory.

  • @[email protected]
    4
    2 years ago

    This doesn’t explain what a disk cache (afaik often referred to as a page cache) is, so here goes: when any modern OS reads a file from disk, the file (or the parts of it that are actually read) gets copied to RAM. These “pages” only get thrown out (“dropped”) when the RAM is needed for something else, usually starting with the page that hasn’t been accessed for the longest time.

    You can see the effect of this by opening a program (or large file), closing it, and opening it again. The second time it will start faster because nothing needs to get read from disk.

    Since the pages in RAM are identical to the sectors on disk (unless the file has been modified in RAM by writing to it), they can be dropped immediately and the RAM can be used for something else if needed. The downside being obviously that the dropped file needs to be read again from disk when it is needed the next time.
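    On Linux you can see this cache directly: the kernel reports it in /proc/meminfo (the “Cached:” field, documented in proc(5)), which is also where htop gets its numbers. A minimal sketch, assuming a Linux system:

    ```c
    /* Print how much RAM the kernel currently uses for the page cache.
     * Linux-specific: parses /proc/meminfo (see proc(5)). */
    #include <stdio.h>
    #include <string.h>

    /* Return the value (in kB) of a /proc/meminfo field, or -1 on failure. */
    long meminfo_kb(const char *field) {
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) return -1;
        char line[256];
        long kb = -1;
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, field, strlen(field)) == 0) {
                sscanf(line + strlen(field), " %ld", &kb);
                break;
            }
        }
        fclose(f);
        return kb;
    }

    int main(void) {
        printf("Cached:  %8ld kB\n", meminfo_kb("Cached:"));
        printf("MemFree: %8ld kB\n", meminfo_kb("MemFree:"));
        return 0;
    }
    ```

    Run it, open a few large files, run it again: “Cached” grows while “MemFree” shrinks, even though no program is holding that memory.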

    • AmiceseOP
      1
      edit-2
      2 years ago

      How can I adjust my programs to utilize disk caches?

      • @[email protected]
        1
        2 years ago

        As I said, every file read from disk, be it an executable, an image, or whatever else, gets cached in RAM automatically and always.

        Having said that, if you read a file using read(2) (or any API that uses read() internally, which is most), you end up with two copies of the file in RAM: the version the OS put in the disk cache, and the copy you created in your process’s memory. You can avoid this second copy by using mmap(2). In that case the copy of the file in the disk cache gets mapped into your process’s memory, so the same RAM is shared between your copy and the disk cache.
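        A minimal sketch of the mmap(2) approach, assuming Linux/POSIX; the demo file name and its contents are made up for illustration:

        ```c
        /* Sketch: mapping a file with mmap(2) so the process reads the
         * kernel's page-cache pages directly instead of copying them
         * into its own buffer with read(2). */
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* Sum all bytes of a file through a read-only mapping; -1 on error. */
        long sum_bytes_mmap(const char *path) {
            int fd = open(path, O_RDONLY);
            if (fd < 0) return -1;

            struct stat st;
            if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return -1; }

            /* The mapping is backed by the same pages the disk cache holds. */
            unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            close(fd);  /* the mapping stays valid after close */
            if (p == MAP_FAILED) return -1;

            long sum = 0;
            for (off_t i = 0; i < st.st_size; i++)
                sum += p[i];   /* each touched page is faulted in on demand */

            munmap(p, st.st_size);
            return sum;
        }

        int main(void) {
            /* Hypothetical demo file: four known bytes, then map and sum them. */
            const char *path = "/tmp/mmap_demo.bin";
            FILE *f = fopen(path, "wb");
            if (!f) return 1;
            fputc(1, f); fputc(2, f); fputc(3, f); fputc(4, f);
            fclose(f);

            printf("sum: %ld\n", sum_bytes_mmap(path));  /* prints "sum: 10" */
            unlink(path);
            return 0;
        }
        ```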

        You can also give hints to the disk cache subsystem in the kernel using fadvise(2). Don’t, though, unless you know what you’re doing.
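        For what it’s worth, the portable spelling of that call is posix_fadvise(2). A hedged sketch of the two most common hints; which file you’d apply them to is up to you, /bin/sh below is just a stand-in:

        ```c
        /* Sketch: advising the kernel about an access pattern with
         * posix_fadvise(2). Returns 0 on success, an error code otherwise. */
        #define _POSIX_C_SOURCE 200112L
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int advise_file(const char *path) {
            int fd = open(path, O_RDONLY);
            if (fd < 0) return -1;

            /* "We'll read this front to back": the kernel may read ahead
             * more aggressively and drop pages behind us sooner. */
            int rc = posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

            /* After a one-off bulk read you could also drop the cached
             * pages so they don't evict more useful data:
             *   posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED); */

            close(fd);
            return rc;
        }

        int main(void) {
            printf("advise rc = %d\n", advise_file("/bin/sh"));
            return 0;
        }
        ```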

  • Ephera
    4
    2 years ago

    This is also the reason “unused RAM is wasted RAM” makes little sense in an application context. OS designers realized that wisdom a long time ago, so they already made sure to utilize that unused RAM via disk caching.

    Now, if Chrome or Chrome VSCode or Chrome Discord or Chrome MS Teams requests tons of RAM, it most likely gets this used-but-available RAM, which your OS was using for disk caching.

    In the case of Chrome itself, this will make Chrome faster at the expense of your other applications’ performance.
    In the case of non-browser applications based on Chrome, your system’s performance is sacrificed, so that Microsoft can rake in its profit without actually investing money into proper application development. 🙂

  • @[email protected]
    3
    2 years ago

    Personal experience: on desktop I always disable swap. On a server it makes sense, because who fucking cares if your email takes 10 ms or 1 s to send, but in a graphical context there are so many memory accesses that the tiniest bit swapped to disk makes the whole thing sluggishly slow.

    Of course that’s less of a problem if you’re using an SSD, but then you may be putting unnecessary strain on your SSD, whose durability is bound by writes, not reads.

    So disabling swap + enabling systemd-oomd/earlyoom (to kill the most gluttonous process when really needed) is a good combination for me.

    • @[email protected]
      2
      edit-2
      2 years ago

      I wish I could just buy more RAM every time I hit a memory constraint.

      EDIT: There’s a more general performance reason for using swap at the default settings (it doesn’t cover every case, but is fine for lots of situations). At the defaults, the kernel starts actively swapping at roughly 40% memory used. This is because the system actively benefits from the fs cache mentioned in the article, and performance suffers in low-memory conditions when the fs cache has no free RAM to work with: you end up waiting on I/O (a big performance hit even with fast storage) instead of getting files from the cache. So as RAM use increases, swapping out some of the less-needed program memory keeps more room available for the disk cache. The default swappiness value might not be optimal for your computer or usage patterns, and you might need to experiment to find a better one, but overall some amount of swapping is probably a good idea.
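      The knob in question is vm.swappiness. You can check your current value straight from /proc; a minimal sketch, assuming Linux (where the stock default is 60):

      ```c
      /* Read the current vm.swappiness value. Linux-specific; the stock
       * default is 60, and recent kernels accept values from 0 to 200. */
      #include <stdio.h>

      int read_swappiness(void) {
          FILE *f = fopen("/proc/sys/vm/swappiness", "r");
          if (!f) return -1;
          int v = -1;
          if (fscanf(f, "%d", &v) != 1) v = -1;
          fclose(f);
          return v;
      }

      int main(void) {
          printf("vm.swappiness = %d\n", read_swappiness());
          return 0;
      }
      ```

      To experiment, something like sysctl vm.swappiness=10 changes it until reboot, and a drop-in file under /etc/sysctl.d/ makes it persistent; the 10 is purely an example value, not a recommendation.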

      • @[email protected]
        2
        2 years ago

        I’ve heard that argument and I understand the technical reasoning, but in real-world experience I’ve found that disabling swap was the best “swappiness” for me. Maybe I’m doing something wrong, but “help, my computer freezes sometimes” is a common problem in my circles, and “disable swap” has been the best recommendation I’ve found so far.

        • @[email protected]
          1
          2 years ago

          I find that degraded performance is pretty much always preferable to playing Russian roulette with system processes.

  • @[email protected]
    2
    2 years ago

    Didn’t Fedora introduce something that prevents programs from eating all the RAM and freezing the whole system?

    Are any other distros working on stealing (er, acquiring) this functionality?

    • @[email protected]
      3
      2 years ago

      They just implemented a systemd feature, systemd-oomd. To be honest, it can cause issues in some edge cases, but it works pretty well on any distro (that uses systemd).
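      For anyone who wants to tune it around those edge cases: systemd-oomd reads /etc/systemd/oomd.conf (see oomd.conf(5)). A sketch; the values shown are illustrative, not recommendations:

      ```ini
      # /etc/systemd/oomd.conf -- illustrative values, not recommendations
      [OOM]
      # Start acting once this much of swap is in use
      SwapUsedLimit=90%
      # ...or when memory pressure on monitored units stays above this level
      DefaultMemoryPressureLimit=60%
      # ...for at least this long
      DefaultMemoryPressureDurationSec=20s
      ```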

      • @[email protected]
        3
        2 years ago

        There’s also earlyoom, which is systemd-independent. If you have a source on the edge cases produced by such software, I’m curious.