Over just 14 days our physical disk usage has increased from 52% to 59%. That’s approximately 1.75 GB of disk space being gobbled up for unknown reasons.

At that rate, we’d be out of physical server space in 2-3 months: 7% every two weeks against the roughly 41% still free works out to about 12 weeks. Of course, one solution would be to double our server disk size, but that would double our monthly operating cost.

Of the ‘pictrs’ folders, the one named ‘001’ is 132 MB and the one named ‘002’ is 2.2 GB. At first glance, this doesn’t look like an image problem.

So, we are stumped and don’t know what to do.
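A minimal sketch of the kind of commands that narrow down where the space is going (standard tools, nothing Lemmy-specific; paths may differ per setup):

# biggest directories on the root filesystem (-x stays on this filesystem)
sudo du -x --max-depth=3 / 2>/dev/null | sort -rh | head -20

# how much of that Docker is holding: images, containers, local volumes, build cache
docker system df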

  • suspended@lemmy.mlOP · 2 years ago

    Found the largest file on our server and have no clue what it is or why it is so fucking huge!
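    A rough sketch of how to surface the biggest files and tell whether they belong to Docker (the 500M threshold is arbitrary):

    # list files over 500 MB on the root filesystem
    sudo find / -xdev -type f -size +500M -exec ls -lh {} + 2>/dev/null

    # a path like /var/lib/docker/containers/<id>/<id>-json.log means it's a
    # container log from Docker's default json-file logging driver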

        • smorksA · 2 years ago

          it’s just a change to the docker-compose.yml file. so depending on how your instance is set up (using ansible or docker-compose), you could just make the change yourself.
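          (a sketch, assuming a plain docker-compose setup: logging options only take effect when a container is recreated, so after editing the file you'd run something like this)

          cd /path/to/lemmy     # placeholder path; wherever your docker-compose.yml lives
          docker-compose up -d  # recreates any container whose config changed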

          • suspended@lemmy.mlOP · 2 years ago

            I tried changing the docker-compose.yml file and it didn’t work. It just threw some vague error.

                • smorksA · 2 years ago

                   from the error message, it looks like you may not have the correct amount of whitespace (yaml is picky about indentation). here’s a snippet of mine:

                  services:
                    lemmy:
                      image: dessalines/lemmy:0.16.6
                      ports:
                        - "127.0.0.1:8536:8536"
                        - "127.0.0.1:6669:6669"
                      restart: always
                      environment:
                        - RUST_LOG="warn,lemmy_server=info,lemmy_api=info,lemmy_api_common=info,lemmy_api_crud=info,lemmy_apub=info,lemmy_db_schema=info,lemmy_db_views=info,lemmy_db_views_actor=info,lemmy_db_views_moderator=info,lemmy_routes=info,lemmy_utils=info,lemmy_websocket=info"
                      volumes:
                        - ./lemmy.hjson:/config/config.hjson
                      depends_on:
                        - postgres
                        - pictrs
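                      # cap this service's logs at 5 rotated files of 20 MB each (~100 MB max)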
                      logging:
                        options:
                          max-size: "20m"
                          max-file: "5"
                  

                  i also added the 4 logging lines for each service listed in my docker-compose.yml file. hope this helps!
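                   to double-check the limits took effect after recreating the containers, something like this should show them (container name is a guess; check docker ps for yours):

                   docker inspect --format '{{json .HostConfig.LogConfig}}' lemmy_lemmy_1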

                • Dessalines@lemmy.mlM · edited · 2 years ago

                  I have no idea where you are getting that from, but it doesn’t match lemmy-ansible, or the PR I linked.

                   Also, please do not screenshot text; copy-paste the entire file instead so I can see what’s wrong with it.

                  • suspended@lemmy.mlOP · 2 years ago

                    I looked more closely at the lemmy-ansible code and now our docker-compose.yml is functioning properly. Thanks for the heads up!

    • smorksA · 2 years ago

      i believe that’s just a regular docker log file. i don’t think docker shrinks its log files by default, so it’s probably everything logged since you started your instance.

      i’m just guessing though.
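      one way to check that guess is to map docker's log files back to container names (a sketch):

      # print each container's name next to its on-disk log path
      docker inspect --format '{{.Name}} {{.LogPath}}' $(docker ps -aq)

      # a runaway log can be emptied in place without restarting the container
      # (<id> is a placeholder for the container id from the path above)
      sudo truncate -s 0 /var/lib/docker/containers/<id>/<id>-json.log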

      • d1tt0@lemmy.ml · 2 years ago

        I also believe that by default docker does not shrink log files.

        In the past I’ve used log-opt max-size=10m or something similar to have Docker keep the logs at 10 MB.
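        If you’d rather set that once for every container instead of per service, a sketch of the daemon-wide default in /etc/docker/daemon.json (merge with any existing file; it only applies to containers created after the daemon restarts):

        {
          "log-driver": "json-file",
          "log-opts": { "max-size": "10m", "max-file": "3" }
        }

        # then restart the daemon so new containers pick up the default
        sudo systemctl restart docker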