• 7 Posts
• 23 Comments
• Cake day: June 23rd, 2023

  • This may be a long shot, but it’s what I do, so it might be an option: set up a crypto gateway like CipherMail, which will automatically decrypt inbound email and sign/encrypt outbound. The result is that your Thunderbird will never see an encrypted email; decryption is handled transparently before it hits your inbox. Obviously, if you don’t trust your email provider, this is not an option.

    This isn’t simple and hence not for everyone, and it comes with dependencies on your email provider, but it has worked flawlessly for me ever since I set it up. I run my own email server, so adding CipherMail wasn’t a big deal.
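
    For anyone trying to picture it, the resulting mail flow is roughly this (my sketch, not an official CipherMail diagram):

        internet <-> CipherMail gateway <-> mail server <-> Thunderbird
                     (signs/encrypts outbound, decrypts inbound)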


  • Thomas@lemmy.zell-mbc.com to Selfhosted@lemmy.world: *Permanently Deleted*

    You would expose the port to your host, which makes the db accessible to anything running on the host, Docker or native. Something like:

        ports:
          - 5432:5432
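
    A side note: if you only need the db reachable from the host itself, you can bind the published port to loopback so it isn’t exposed on all interfaces:

        ports:
          - 127.0.0.1:5432:5432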

    But I would recommend running a dedicated db for each service; at least that’s what I do (see the sketch after this list).

    • Simpler setup and therefore less error-prone
    • More secure because the dbs don’t need to be exposed
    • Easier to manage because I can independently upgrade, back up, and move them
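
    A minimal sketch of what a dedicated, unexposed db per service can look like (image tags and the password are placeholders, and service-specific environment variables are omitted):

        services:
          nextcloud:
            image: nextcloud          # pin a tag in practice
            depends_on:
              - db
          db:
            image: postgres:15
            environment:
              POSTGRES_PASSWORD: change-me   # placeholder
            volumes:
              - ./data:/var/lib/postgresql/data
            # no "ports:" entry: the db is reachable over the compose
            # network by the app container, but not from the host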

    Isn’t the point of containers that you keep things which depend on each other together, eliminating external dependencies? A single shared db would be an unnecessary dependency in my view. What if one service requires a new version of MySQL and another one doesn’t support the new version yet?
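
    That conflict simply disappears when each stack pins its own db image, e.g. (tags are just examples):

        # /opt/docker/service-a/docker-compose.yml
        services:
          db:
            image: mysql:8.0

        # /opt/docker/service-b/docker-compose.yml
        services:
          db:
            image: mysql:5.7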

    I also run all my databases via a bind mount:

        volumes:
          - ./data:/etc/postgres/data…

    and each service in its own directory, e.g. /opt/docker/nextcloud.

    That way I have everything that makes up a service contained in one folder. Easy to back up/restore, easy to move, and, not least, clean.
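
    For illustration, such a per-service folder might look like this (file names besides docker-compose.yml are examples):

        /opt/docker/nextcloud/
        ├── docker-compose.yml
        └── data/          # bind-mounted database files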

  • :-)

    But seriously, I was wondering about the requirement to shut down the VMs and couldn’t come up with a solid reason. I mean, even if QEMU/KVM/the kernel get replaced during a version upgrade or a more routine update, all of these kick in only after the reboot. And how is me shutting down VMs manually different from the OS shutting them down during a reboot?

    I know I am speculating and may not have the full picture; this is probably a question for the Proxmox team. There may be some corner case where this is indeed important.

    By the way, Mexican or US black Strat? :-)

  • You didn’t say what you are using for your scheduled backups. If it’s something like BorgBackup, you get a similar level of functionality, albeit CLI instead of a nice UI.

    I have been using Borg for years and recently also installed PBS. What I like about PBS is that the UI is similar to PVE and that it integrates the backup process nicely into the UI, which makes handling easier and, I guess, less error-prone when it comes to restores. From where I stand right now, I will likely keep PBS for things which run on PVE and Borg for the rest of the world.
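
    For comparison, the core Borg workflow is only a handful of commands; a minimal sketch (repository path and source directory are placeholders):

        # create an encrypted repository once
        borg init --encryption=repokey /backup/borg-repo
        # take a deduplicated snapshot of the docker directory
        borg create /backup/borg-repo::docker-{now} /opt/docker
        # list archives, restore one into the current directory
        borg list /backup/borg-repo
        borg extract /backup/borg-repo::<archive-name>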







  • I set mine up with 0.17.4 and got it working, but like you wrote, the instructions have changed and all the detail is gone, so I can only guess. Are there any volume statements in the new (v0.18) docker-compose.yml for the proxy container which point to a local file? Mine has this:

        volumes:
          - ./nginx.conf:/etc/nginx/nginx.conf:ro

    and I had to set up the local nginx.conf file.

    Edit: I just had a similar conversation in the Lemmy Matrix channel; it looks like the official documentation for some reason no longer specifies the content of the nginx_internal.conf file. After adding the v0.17 nginx content to this file, that instance was working.