• 0 Posts
  • 29 Comments
Joined 9 months ago
Cake day: October 12th, 2023


  • I am running BIND9 to achieve this very thing.

    You can set up different “views” in BIND. Different zonefiles are served to different clients based on the IP address.

    I have an external view that allows AXFR transfers to my public slave DNS provider, and an internal view for clients accessible over my VPN. I use DNS-01 challenges to issue valid Let’s Encrypt certificates to both LAN-facing and public-facing services.
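
    A minimal sketch of what such a views setup can look like in named.conf — the addresses, zone names, and file paths below are placeholders, not my actual config:

    ```
    // named.conf (sketch; addresses and zone names are made up)
    acl "vpn_clients" { 10.8.0.0/24; };

    view "internal" {
        match-clients { vpn_clients; localhost; };
        zone "example.com" {
            type master;
            file "zones/example.com.internal";
        };
    };

    view "external" {
        match-clients { any; };
        zone "example.com" {
            type master;
            file "zones/example.com.external";
            // let the public slave provider pull the zone
            allow-transfer { 203.0.113.10; };
            also-notify { 203.0.113.10; };
        };
    };
    ```

    Note that once you use views, every zone must live inside a view; BIND matches clients against views top to bottom, so put the most specific view first.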

    My DNS server runs on my VPN coordination server; if it weren't there, I'd run it on my router.

    I do not use dnsmasq, so I am not sure whether it supports split-view DNS. If it does not, you can try CoreDNS as a lightweight alternative.





  • So to answer your last question first: I dual-boot Arch and Windows, and I can mount the physical Arch disk inside a WSL VM and then chroot into it to run or fix things when I CBA to reboot properly. I haven't tried booting a WSL instance off of the physical Arch disk, but I don't imagine it would work. Firstly, WSL uses a modified Linux kernel, which won't be accessible without tinkering with the physical install. Secondly, the physical install is obviously configured for physical ACPI and network use, which will break if I boot into it from WSL. After all, WSL is not a proper VM.
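
    A rough sketch of what that mount-and-chroot trick looks like — the drive number, partition number, and mount path here are examples; check yours before running anything:

    ```
    # From an elevated Windows prompt: attach the physical disk to WSL2.
    wsl --mount \\.\PHYSICALDRIVE1 --partition 2

    # Inside the WSL distro, the partition appears under /mnt/wsl.
    # Bind-mount the virtual filesystems so tools inside the chroot work:
    sudo mount --bind /dev  /mnt/wsl/PHYSICALDRIVE1p2/dev
    sudo mount --bind /proc /mnt/wsl/PHYSICALDRIVE1p2/proc
    sudo mount --bind /sys  /mnt/wsl/PHYSICALDRIVE1p2/sys
    sudo chroot /mnt/wsl/PHYSICALDRIVE1p2 /bin/bash
    ```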

    To answer the first question as to services: notes, kanban boards, network monitoring tools (connected to a VPN / management LAN), databases, more databases, even MOAR databases, database managers, web scrapers, etc.

    The very first thing I used WSL for (a long time ago) was to run ffmpeg. I just could not be bothered building it for Windows myself.


  • So on my workstation / daily driver box:

    • I have Docker using the WSL2 backend. I use this instance of Docker to test deployments of software before I push it to my remote servers, to perform local development tasks, and to host some services that I only ever use when my PC is on (so services that require trust and don’t require 24x7 uptime).
    • I have about eight Linux distros in WSL2.
    • The main distro is Ubuntu 22.04 for legacy reasons. I use it to host an nginx server on my machine (as a reverse proxy to the Docker services running there) and to run a bunch of Linux apps, including GUI ones, without rebooting into my Arch install.
    • I have two instances of Arch Linux. One is ‘clean’ and is only used to mount my physical Arch disk if I want to do something quick without rebooting into Arch; the other one I actively tinker with.
    • Other distros are just there for me to play with.
    • I use Hyper-V (since it is required for WSL) to orchestrate Windows virtual machines. Yes, I do use Windows VMs on a Windows host. Why? Software testing, running dodgy software in an isolated environment, running spyware I mean Facebook, and similar.
    • Prior to Hyper-V, I used VirtualBox. I switched to Hyper-V when I started using WSL. For a time, Hyper-V was incompatible with any other hypervisor on the same host, so I dropped VirtualBox. That seems to have been fixed now, and I reinstalled VirtualBox to orchestrate Oracle Cloud VMs as well.
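
    The nginx-reverse-proxy-in-WSL arrangement can be as simple as a server block like this — the hostname, cert paths, and upstream port are placeholders:

    ```
    # /etc/nginx/conf.d/notes.conf (sketch; names and ports are made up)
    server {
        listen 443 ssl;
        server_name notes.lan.example.com;

        ssl_certificate     /etc/nginx/certs/notes.lan.example.com/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/notes.lan.example.com/privkey.pem;

        location / {
            # Docker container published on a loopback address
            proxy_pass http://127.0.0.1:8081;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```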

  • Thank you! What would such a competitive amount be? Two per region, covering east and west? Or something more distributed, such as one within a 1,000 km radius?

    I certainly don’t need anything as robust as one POP per 1,000 km. I currently use ClouDNS as my main slave DNS provider; ClouDNS gives me POPs in the capital city of every economically relevant country.

    I don’t necessarily need something that robust for a backup slave provider. Something like two POPs per continent would be more than enough: say, South Africa, North Africa, Sydney, Singapore, one or two in Europe, one in JP/KR, two in the USA, and one in South America.

    That should give decent-enough coverage.


  • I do, indeed, use slave DNS servers; in fact, I’m currently in the market for a second independent provider.

    What features am I looking for? Honestly, a competitive number of POPs and the ability to accept inbound AXFR transfers. I don’t need much more than that.

    Oh, and pricing: I’m looking for something on the level of AWS or cheaper. I’ve tried approaching some other players in the field, like NS1 and Hurricane Electric’s commercial service, and those are quoting me $350+/month for <100 zones and <10M req/month. No thank you.




  • I’m not sure if your google-fu is glitching as much as your imagination. You have three clean options:

    • docker container ls (shows container name and ports)

    • netstat -ban (shows all ports in use on the system + the binary running the service)

    • Just write documentation for yourself when you bring up a new service. It doesn’t have to be anything fancy; a simple Markdown or YAML file will do. I use YAML in case I ever want to use it programmatically.

    netstat -an is your friend.

    Documentation is your second best friend.
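
    For the documentation option, a hand-maintained inventory file can look something like this — the structure and all names below are just a suggestion, not a standard:

    ```
    # services.yml (sketch; hosts, ports, and names are placeholders)
    - name: gitea
      host: docker-host-1
      bind: 127.0.0.5
      port: 3000
      notes: reverse-proxied via nginx
    - name: postgres-main
      host: docker-host-1
      bind: 127.0.0.5
      port: 5432
    ```

    Keeping it as YAML means you can later loop over it in a script to, say, cross-check declared ports against what is actually listening.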






  • Never used it. I don’t trust random GitHub repos with only 3 stars, and I don’t feel comfortable using turnkey solutions or “configuration scripts”. I am a firm believer in the maxim that configuration is a deeply personal thing, so I would not use someone else’s configuration scripts: they are configured as their author wants, not as I want.

    Running Docker Desktop on Windows is not exactly hard. And once you have docker desktop running, it is not exactly hard to run whatever other software / media server you might like.

    Windows is my primary workstation OS because I am legally blind and Windows has the best on-screen magnifier on the market. No other product, whether commercial or free, whether standalone or baked into the WM, comes even remotely close. So I use Windows. But within Windows, I leverage both WSL and Docker to run Linux tools properly. All of my remote servers are Linux. My home server is Linux. More than half of my virtual machines are Linux.



  • I’m a smidge confused on what you are trying to achieve and how you think it will work.

    As I understand it, you want to connect “embedded” devices, whose software you do not control, to a VPN network?

    VPNs do need some kind of client (otherwise, how would the network stack know to use the VPN protocol?), so how do you envisage this working without an app?

    What is your desired topology like? Do you just want your smart TV/etc to connect to a remote media library over a VPN? If that’s the case, then you are overthinking it with approvals etc.

    You can achieve most of what you want with router configuration. Just define routes saying “Traffic from IP address 10.20.30.40 (TV) should go to 10.20.30.30 (gateway)” and then have the “gateway” handle the tunnel.
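
    On a Linux-based router that would be source-based policy routing; a sketch using iproute2, with the addresses from the example above and an arbitrary routing-table number:

    ```
    # Route everything originating from the TV (10.20.30.40) through
    # the VPN gateway box (10.20.30.30) via a dedicated routing table.
    ip rule add from 10.20.30.40 lookup 100
    ip route add default via 10.20.30.30 table 100
    ```

    The gateway box then only has to run the actual tunnel (WireGuard, OpenVPN, whatever) and forward traffic into it.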

    You can also look at Tailscale’s subnet routing (it should work with a Headscale backend too).

    Good luck.


  • A few things, in no particular order:

    • Docker interferes with user-defined firewall rules on the host, and you need to expend a lot of effort to make your rules persist above Docker’s. In practice this means that, if you are running a public-facing VPS or dedicated server and bind services to 0.0.0.0, a firewall on the same machine won’t protect them and your services will be publicly accessible.
    • If you have access to a second firewall device, whether it is your router at home or your hosting provider’s firewall (Hetzner and OVH both like to provide firewall controls external to your server), this is not the biggest concern.
    • There is no reason to bind your containers to 0.0.0.0. You will usually access most of your containers from a certain IP address, so just bind them to that address. My preference is to bind to an address in the 127.0.0.0/8 subnet (yes, that entire subnet is loopback) and then use a reverse proxy. Alternatively, look into the ‘macvlan’ and ‘ipvlan’ Docker network drivers.
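
    In Compose terms, the loopback binding is one line in the port mapping — the image, service name, and ports here are placeholders:

    ```
    # docker-compose.yml (sketch): publish only on a loopback address,
    # then point the reverse proxy at 127.0.0.5:8080.
    services:
      myapp:
        image: nginx:alpine          # placeholder image
        ports:
          - "127.0.0.5:8080:80"      # instead of "8080:80" (= 0.0.0.0)
    ```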

    Good luck