To be fair, you have alternatives.
I’ve been self-hosting my blog for 21 years now, if you can believe it, and for much of that time it’s lived on a server in my house. I’ve hosted it on everything from a dusty old Pentium 200MHz with 16MB of RAM (that’s MB, not GB!) to a shared web host (Webfaction), to a proper VPS (Hetzner), to a Raspberry Pi Kubernetes cluster, which is where it is now.
The site is currently running Python/Django on a few Kubernetes pods on a few Raspberry Pi 4s, so the total power consumption is tiny, and since they’re fanless, it’s all very quiet in my office upstairs.
In terms of safety, there’s always a risk, since you’re opening a port to the world for someone to talk directly to software running in your home. You can mitigate that by (a) keeping your software up to date, and (b) if you’re maintaining the software yourself (like I am), keeping on top of any dependencies that may have known exploits. Like, don’t just stand up an instance of WordPress and forget about it. That shit’s going to get compromised :-)
The safest option is probably to use a static site generator like Hugo, since then your attack surface is limited to whatever you’re using to serve the static files (probably Nginx), whereas if you’re running a full-blown application that does publishing etc., that’s a lot of stuff that could have holes you don’t know about. You may also want to set up something like Cloudflare in front of your site to keep a DoS attack or the like from crippling your home internet, though that may be overkill.
But yeah, the bandwidth requirements of running a blog are negligible, and the experience of running your own stuff on your own hardware in your own house is pretty great. I recommend it :-)
Canadaland recently did an episode on this very subject and as a once-supporter of these sites, I found it eye-opening.
Oh boy are you going to love-to-hate this then. It’s best viewed on a proper computer, but you’ll get the gist on mobile too.
To be clear, I’m not throwing shade. That’s an impressive piece of software. It’s just, given the number of stories I’ve heard (and experienced) about Bash’s tricky syntax leading to Bad Things, I’m less comfortable with running this than I would be with something in a language with fewer pitfalls.
But if others take the chance and it sticks around a bit, I’ll come around ;-)
Thanks for the contribution! It’s a great idea, and with Google fucking about with blocking things like NewPipe, a project like this is a great answer to that.
That looks really impressive, but at nearly 1000 lines of Bash, I’m afraid I’m not comfortable running it on my machine. My Bash-foo isn’t strong enough to be sure that there isn’t a typo in there that could nuke my home folder.
But there’s nothing stopping you from loading realistic (or even real) data into a system like this. They’re entirely different concepts. Indeed, I’ve loaded gigabytes of production data into systems similar to what I’m proposing here (taking all necessary precautions of course). At one company, I even built a system that pulled production into a developer-friendly snapshot while simultaneously pseudo-anonymising that data so it could safely (for some value of ${safe}) be tinkered with in development.
In fact, adhering to a system like this makes such things easier, since you don’t have to make any concessions to “this is how we do it in development”. You just pull a snapshot from the environment you want to work with and load it into your Compose session.
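To make that concrete, here’s a rough sketch of what the anonymising step can look like. This isn’t the system I actually built; the table, the columns, the SNAPSHOT_DSN variable, and the use of Faker are all stand-ins for illustration:

```python
# Sketch only: table/column names, the SNAPSHOT_DSN variable, and Faker are
# assumptions for illustration, not the actual system described above.
import os

import psycopg2          # pip install psycopg2-binary
from faker import Faker  # pip install faker

fake = Faker()

# Point this at the snapshot database you've just restored locally,
# never at production itself.
conn = psycopg2.connect(os.environ.get("SNAPSHOT_DSN", "dbname=snapshot"))

with conn, conn.cursor() as cur:
    cur.execute("SELECT id FROM auth_user")
    for (user_id,) in cur.fetchall():
        # Swap anything personally identifying for plausible fake values,
        # keeping ids and relationships intact so the data still hangs together.
        cur.execute(
            "UPDATE auth_user SET first_name=%s, last_name=%s, email=%s WHERE id=%s",
            (fake.first_name(), fake.last_name(), f"user{user_id}@example.com", user_id),
        )

conn.close()
```

You run something like that against the freshly restored snapshot before handing it to developers: the ids and relationships survive, the PII doesn’t.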
It sounds like you’re confusing the application with the data. Nothing in this model requires the use of production data.
I feel like you must have read an entirely different post, which must be a failing in my writing.
I would never condone baking secrets into a compose file, which is why the values in compose.yaml aren’t secrets. The idea is that your compose file is used exclusively for testing and development, where the data isn’t real and the priority is easing development. When you deploy, you don’t use that compose file, because your environment is populated by whatever you use in production (typically Kubernetes these days).
You should not store your development database password in a .env file, because it’s not a secret. The AWS keys listed in the compose are meant to be exactly as they are there: XXX, because LocalStack doesn’t care what these values are, only that they exist.
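To make the LocalStack point concrete, here’s roughly what talking to it looks like from the application side. The service name, port, bucket, and region are assumptions about a typical compose setup; the credentials are deliberately junk:

```python
# Sketch only: the "localstack" hostname, bucket name, and region are
# assumptions about your compose.yaml; the credentials are intentionally fake.
import boto3  # pip install boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localstack:4566",  # the LocalStack container, not AWS
    aws_access_key_id="XXX",                # LocalStack only checks that these exist,
    aws_secret_access_key="XXX",            # not that they're valid
    region_name="eu-west-1",
)

s3.create_bucket(
    Bucket="my-dev-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
s3.put_object(Bucket="my-dev-bucket", Key="hello.txt", Body=b"Hello from dev")
```

In production the same code runs with real credentials and no endpoint_url override, which is exactly the point: nothing in the compose file ever needs to be secret.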
As for the CLI thing, again I think you’ve missed the point. The idea is to start from a position of “I’m building images” and therefore never have a “local app, (Django, sqlite)”, because SQLite should not be used unless that’s what’s used in production. There should be little to no difference between development and production, so scripting a bridge between these doesn’t make a lot of sense to me.
I don’t mean to be snarky, but I feel like you didn’t actually read the post 'cause pretty much everything you’ve suggested is the opposite of what I was trying to say.
There also shouldn’t be a pile of extra .json or .env files to manage. The litmus test here is: “How many steps does it take to get this project running?” If it’s more than one (docker compose up), it’s too many.
High praise! Just keep in mind that my blog is a mixed bag of topics. A little code, lots of politics, and some random stuff to boot.
It’s a tough one, but there are a few options.
For AWS, my favourite one is LocalStack, a Docker image that you can stand up like any other service and then tell it to emulate common AWS services: S3, Lambda, etc. They claim to support 80 different services, which is… nuts. They’ve got a strange licensing model though, which last time I used it meant that they support some of the more common services for free, but if you want more, you gotta pay… and they aren’t cheap. I don’t know if anything like this exists for Azure.
The next-best choice is to use a stand-in. Many cloud services are just managed+branded Free software projects. RDS is either PostgreSQL or MySQL, ElastiCache is just Redis, etc. For these, you can just stand up a copy of the actual service and since the APIs are identical, you should be fine. Where it gets tricky is when the cloud provider has messed with the API or added functionality that doesn’t exist elsewhere. SQS for example is kind of like RabbitMQ but not.
In those cases, it’s a question of how your application interacts with this service. If it’s by way of an external package (say Celery to SQS for example), then using RabbitMQ locally and SQS in production is probably fine because it’s Celery that’s managing the distinction and not you. They’ve done the work of testing compatibility, so theoretically you don’t have to.
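As a sketch of what I mean (the variable name, hosts, and task below are made-up assumptions, not anything prescribed), the whole dev/prod difference can collapse down to a broker URL:

```python
# Sketch only: CELERY_BROKER_URL, the RabbitMQ host, and the task below are
# assumptions about your setup. The compose file points this at RabbitMQ;
# production sets it to SQS instead, and the task code never changes.
import os

from celery import Celery  # pip install celery

broker_url = os.environ.get(
    "CELERY_BROKER_URL",
    "amqp://guest:guest@rabbitmq:5672//",  # the RabbitMQ service in compose.yaml
)
# In production this would be something like "sqs://", with the usual AWS
# credential chain handling auth.

app = Celery("myproject", broker=broker_url)

@app.task
def send_welcome_email(user_id):
    # The task neither knows nor cares whether it arrived via RabbitMQ or SQS.
    print(f"Sending welcome email to user {user_id}")
```

Your compose file sets that variable to the RabbitMQ service, production sets it to SQS, and Celery handles the rest.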
If however your application is the kind of thing that interacts with this service on a low level, opening a direct connection and speaking its protocol yourself, that’s probably not a good idea.
That leaves the third option, which isn’t great, but I’ve done it and it’s not so bad: use the cloud service in development. Normally this is done by having separate services spun up per user, or even with a role account. When your app writes to an S3 bucket locally, it’s actually writing to a real bucket called companyname-username-projectbucket. With tools like Terraform, the fiddly process of setting all this up can be drastically simplified; just make sure the developers are aware that their actions can incur real costs.
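For illustration, here’s a rough sketch of how the application side might resolve its per-developer bucket. The environment variables and naming convention are assumptions, not a recipe:

```python
# Sketch only: the env vars and the naming convention are assumptions.
# Terraform (or similar) would have created the per-developer bucket already;
# the app just needs to resolve the right name.
import getpass
import os


def project_bucket(company="companyname", project="projectbucket"):
    """Return this developer's bucket, e.g. companyname-alice-projectbucket."""
    user = os.environ.get("PROJECT_BUCKET_USER", getpass.getuser())
    return os.environ.get("PROJECT_BUCKET", f"{company}-{user}-{project}")


# e.g. in Django settings, if you're using django-storages:
AWS_STORAGE_BUCKET_NAME = project_bucket()
```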
If none of the above are suitable, then it’s probably time to stub out the service and then rely more heavily on a QA or staging environment that’s better reflective of production.
You got the eyes just right!
It really doesn’t. See my other comment here for more detail.
Let me tell you how primary lane travel works in civilised countries: drunks and the others you mentioned end up in a canal, stranded up on a median, or crashed into a bollard.
That’s because they do more there than just say “share the lane” and call it a day. They narrow the road to almost exactly the width of a typical car using unforgiving barriers like bollards, medians, and 5m deep canals. They restructure the roads so they aren’t straight throughways, but brick-paved, winding pathways through the city.
They turn roads into obstacle courses, calming traffic, because as we all know, drivers may not be worried about killing cyclists, but they’re horrified by the idea of scratching their paint.
They still have drunks of course, but they’re typically on bikes (since driving is so impractical), and they too often end up in a canal.
Here’s a decent example from Amsterdam where they effectively have 3 classes of road:
That last category is the majority over there, and a big reason why the city is so safe and quiet… unless it’s King’s Day or New Year’s Eve. Then these spaces are flooded with loud, drunk pedestrians or children shooting fireworks at random. On those days I recommend trips out of town ;-)
This could actually be good news. At the end of the day, bike lanes are car infrastructure. If you want a cycling city, what you need are narrow, slow, winding roads that’re car-hostile. If you can’t have bike lanes, then this could be the opportunity to restructure the roads so that cycling in the primary lane is the default option for everyone.
This is one of the most infuriating things about the left. Automation is fantastic! Why the hell should we rail against something that reduces the amount of work people have to do? Why oppose something that reduces risks we have to take in our daily lives?
There’s no dignity in human labour. We do it because our survival depends on it. The problem is that the automation of that labour is treated by capitalists as a net profit to the owning class.
We should not be fighting to “maintain employment” FFS. We should be fighting for a reasonable share of the fruits of our community. If your job is automated, you should get a share of the company profits for life and then happily leave for new and different work, not try to prevent the automation in the first place.
Yeah, that was the big strike against it for me too. I found that you can sort of perch it over a crossed leg and it’s serviceable that way, but yeah… no coding on the train with a Surface.
The Surface Pro keyboard is actually quite good, with the added bonus that it’s also easily detachable.
Five bucks says that this has nothing to do with general energy for the grid and everything to do with powering the fossil fuel extraction and processing industry in that region.