I redid how my services are structured. Instead of running each project on a separate VM, they're now all running on a dedicated Hetzner machine. This is what I call the nix multi-monolith machine (henceforth called NMMM, pronounced like tasting something delicious prefixed with an N). There are some really big advantages to the NMMM approach.
Why use a NMMM?
Setting up NMMM has a couple of advantages:
- the NMMM is cheap 1
- the NMMM is simple 2
- the NMMM doesn't need nixops while benefiting from nix 3
- the NMMM spins out new services fast 4
Obviously there are also some disadvantages to this approach, which are addressed in the discussion section. But for a typical startup situation, where money is tight, there is no product-market fit yet, and time to market is important, I think the NMMM is the best choice.
What is NMMM
The core of the Nix multi-monolith machine (NMMM) is a dedicated machine. A dedicated machine means no virtualization; in other words, it's bare metal.
The second major part of this configuration is nix. This means that every piece of software is described in nix files, just like the configuration of that software. More on that in the nix config section.
The third major part is multi-monoliths, meaning that you can have multiple, unrelated services running on the same machine. This is different from microservices. Microservices communicate with each other at some point to provide a consistent frontend. The link calls this "loosely coupled", or as I like to call it: they chat with each other. Monoliths, on the other hand, should not communicate with each other and are isolated. They have independent frontends and backends. In other words, no chatting between monoliths. They just stand there, silent and ominous. I feel this approach falls in line with the monolith-first approach, but rather than a single monolith I can deploy many completely unrelated projects.
Nix config
I built this by relying on the module system. The main entrypoint is an ordinary NixOS configuration file, and the other modules are shaped the same way, even though the entrypoint depends on them. So we've got a very versatile one trick pony!
Assuming / is the root of the project, the root configuration looks like this:
# /nix/hetzner/configuration.nix
{ config, pkgs, ... }:
{
imports =
[
./hardware-configuration.nix
../../videocut.org/nix/videocut.nix
../../massapp.org/nix/massapp.nix
../../raster.click/nixops/raster.nix
];
...
That has the same structure as the configuration.nix I use on my laptop. The difference from my laptop is that I'm pulling in a bunch of additional modules aside from the hardware config through the imports mechanism. imports tells NixOS to also include those other configuration files. However, before looking into those specific files, I need to explain how to run this entrypoint. I call this configuration from my /makefile:
deploy:
nix-shell nix/nixpkgs-shell.nix --run "make deploy_"
ROOT_DIR:=$(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
deploy_:
NIXOS_CONFIG="$(ROOT_DIR)/nix/hetzner/configuration.nix" nixos-rebuild switch --target-host root@videocut.org --show-trace
make deploy will deploy if called from the root of the project /. It runs make deploy_ from a shell that sets the NIX_PATH to a pinned nixpkgs. This uses nixos-rebuild switch, just like on my laptop. However, I specify the target host to be one of the domains hosted on the Hetzner machine. All domains lead to the Hetzner machine.
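The pinning shell itself isn't shown in this post. A minimal sketch of what nix/nixpkgs-shell.nix could look like, assuming a fetchTarball-based pin (the revision and hash are placeholders you'd fill in with your own):

```nix
# /nix/nixpkgs-shell.nix (hypothetical sketch; pin your own revision)
let
  # fetch a fixed nixpkgs revision so deploys are reproducible
  pinned = builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<revision>.tar.gz";
    sha256 = "<sha256-of-that-tarball>";
  };
  pkgs = import pinned { };
in
pkgs.mkShell {
  # make nixos-rebuild resolve <nixpkgs> against the pinned tree
  NIX_PATH = "nixpkgs=${pinned}";
  buildInputs = [ pkgs.gnumake ];
}
```

With this shape, the outer make deploy target enters the shell, and the inner deploy_ target runs against the pinned nixpkgs rather than whatever channel the host happens to have.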
But how does the machine decide which HTTP request goes to which service? After all, they all arrive on the same machine now, so something has to make that decision. Nginx can do that! This is a bit later in the entrypoint file:
# /nix/hetzner/configuration.nix
...
services.nginx = {
virtualHosts = pkgs.lib.foldr (x: prev: pkgs.lib.recursiveUpdate (import x) prev) {}
[../../videocut.org/nix/vhost.nix
../../massapp.org/nix/vhost.nix
../../raster.click/nixops/vhost.nix
../../blog/vhost.nix
  ];
};
...
We let the individual services decide how to configure the virtual hosts. Virtual hosts allow us to specify configurations per domain name. For example, the massapp.org host looks like this:
# /massapp.org/nix/vhost.nix
let
lib = import ./lib.nix;
sslBools = {
forceSSL = true;
enableACME = true; # E
};
base = locations:
{
inherit locations;
} // sslBools; # C
proxy = base {
"/".proxyPass = "http://127.0.0.1:${
toString lib.massapp_port # D
}/";
};
redirect = { globalRedirect = "${lib.massappDomain}"; } // sslBools;
in {
"www.${lib.massappDomain}" = redirect; # B
"${lib.massappDomain}" = proxy; # A
The main thing we're saying at A is that the massapp.org domain should point to the port defined at D. Furthermore, in B we're redirecting all www traffic to A, which strips www off www.massapp.org, resulting in massapp.org. For some reason people still yearn to type www. With this redirect we trash the people's pointless dreams and desires. Finally, in C we terminate SSL: the proxy turns the HTTPS connection back into a plain HTTP connection. Traffic at this point is internal, so SSL has served its purpose and we can safely strip it. In practice this means our application doesn't have to deal with certificates or SSL. This leverages NixOS's built-in Let's Encrypt support without even thinking about it in E 5.
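The lib.nix that the vhost imports isn't shown in this post. Judging from the names it uses, a plausible minimal version would be something like this (the port number is an invented example):

```nix
# /massapp.org/nix/lib.nix (hypothetical; only the values vhost.nix needs)
{
  # port the massapp backend listens on, referenced at D
  massapp_port = 3001;
  # apex domain, referenced at A and B
  massappDomain = "massapp.org";
}
```

Keeping the port and domain in one small file means the vhost and the systemd service can't drift apart: both sides import the same values.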
Since we just bound each incoming domain to a unique port, we have to bind a program to that port as well. This program is our main application code, the monolith. For example, let's look at the massapp module. Here I register a systemd service which runs the main massapp executable. This is another file shaped like configuration.nix (just like my laptop! The one trick pony):
# /massapp.org/nix/massapp.nix
{ config, pkgs, ... }:
let
  lib = import ./lib.nix; # provides massapp_port, shared with vhost.nix
  massapp = pkgs.callPackage ../webservice/default.nix { }; # A
in {
...
systemd.services.massapp = # D
{
description = "Massapp webservice";
serviceConfig = {
Type = "simple";
ExecStart = "${ massapp }/bin/massapp"; # B
};
wantedBy = [ "multi-user.target" ];
requires = [ "postgresql.service" ];
environment = {
PORT = "${toString lib.massapp_port}"; # C
...
};
};
}
Within A I load the main binary of the service. We tell systemd about that program at B. Finally, we make the program aware of the correct port in C by setting the environment variable PORT. Yesod has configuration for that built in by default, and massapp is a Yesod application. This would work for any other application; you can even pass CLI arguments like this just by modifying the ExecStart. At D we give this systemd unit the name massapp; by doing this, stdout is logged in journalctl and tagged with the unit name. nixos-rebuild will now also know, through exit codes, if a service failed. For example, if the service can't find the database, which we discuss how to set up in the database section.
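As noted, the same mechanism works for programs that take CLI flags instead of environment variables. A hypothetical variant of the service definition (the --port flag is illustrative; use whatever your binary actually accepts):

```nix
# hypothetical variant: pass the port as a CLI argument instead of $PORT
systemd.services.massapp = {
  description = "Massapp webservice";
  serviceConfig = {
    Type = "simple";
    # interpolate the shared port from lib.nix straight into the command line
    ExecStart = "${massapp}/bin/massapp --port ${toString lib.massapp_port}";
  };
  wantedBy = [ "multi-user.target" ];
  requires = [ "postgresql.service" ];
};
```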
Database integration
This is the database configuration, again in the entrypoint configuration.nix file:
# /nix/hetzner/configuration.nix
...
services.postgresql = {
enable = true;
# A
authentication = pkgs.lib.mkOverride 10 ''
local all all trust
host all all ::1/128 trust
host all all 0.0.0.0/0 md5
host all all ::/0 md5
'';
settings = { # B
log_connections = "yes";
log_statement = "all";
log_disconnections = "yes";
};
# C
initialScript = pkgs.writeText "backend-initScript" ''
CREATE USER massapp_prod WITH PASSWORD 'someinitialpassword';
CREATE DATABASE massapp_prod;
ALTER USER massapp_prod WITH SUPERUSER;
'';
...
}
The initial script (C) is only run when the database is started for the first time; after that you need to manage users by hand. However, I still add these users to the script to keep track of them in case I ever need to abandon this system. Like this, at least the users will exist on the new machine. I give every monolith's user superuser access in C. This isn't the best for security, but it's really convenient. Besides, if a hacker manages to get a shell on the system or a shell into the database, it's already too late for me anyway. I'd just scrap this system and start over elsewhere.
In B I set some additional logging options, which are just convenient when things break 6. With statement logging I can still see what happened in the database even if the application didn't emit enough information. Logging all statements slows down the database. However, my services are hardly taxed at the moment; they see some traffic, but nowhere near enough to justify disabling logging.
Localhost connections can authenticate without a password, as set up in A. Connections from other hosts have to present an md5 password before they get anywhere.
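Because localhost connections are trusted, the service only needs to know the host, user, and database name. A sketch of how the massapp service environment could be extended with those settings (the libpq-style PG* variable names are an assumption about how the application reads its connection settings):

```nix
# hypothetical: extend the massapp service environment with DB settings;
# 'trust' auth on localhost means no password needs to be configured here
systemd.services.massapp.environment = {
  PGHOST = "127.0.0.1";
  PGUSER = "massapp_prod";
  PGDATABASE = "massapp_prod";
};
```

This keeps the initial password from the initialScript out of the deployed configuration entirely; only remote connections would ever need it.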
Discussion
This setup goes against advice from sysadmins, who recommend you split everything up across VMs as much as possible. I reject their hypothesis. They argue for splitting services, but the tool used for splitting shouldn't be dogmatic and always result in choosing a VM. Using multiprocessing and tenanting for specific software packages is good enough. Myron Semack at least gives some arguments on why he chooses to use a VM. However, his reasons for choosing VMs revolve around Windows-related oddities. For example:
Think about what happens when you need to upgrade the OS, which typically means you make a new VM to replace the old one.
We can seamlessly upgrade NixOS pretty much always. I've had configuration files change in ways that caused errors when upgrading NixOS stable versions, but those errors need to be solved before deploying. The systems I run on NixOS have themselves always been stable. Or:
Think about what happens when there is a problem and the VM is down. (Bad Windows update, you fat fingered a network setting, etc)
How would I even fat finger a network setting? I have to modify a file, commit the file, and deploy to do this. At that point you can't even call it fat fingering. And even if you somehow screw up a network setting deliberately, you can reboot into an older generation and the problem is gone. The difference here is that this deployment is nix based, which allows me to manage the complexity of the monolith much more efficiently and reliably.
NMMM can also be used to impress potential clients by quickly creating new websites. What are the steps?
- get the new domain, for example newdomain.com,
- copy the massapp.org folder into newdomain.com,
- hook it into the main config /nix/hetzner/configuration.nix,
- set up the database,
- redo the branding,
- deploy.
That's it. That's two hours of work, maybe three if something goes wrong. Nothing is more impressive than having a functioning website the next day. Aside from startups, I guess consultancies should also look into this approach.
On several occasions I've mentioned that configuration.nix is just like on my laptop. Originally I even copy-pasted the Postgres configuration from my laptop to this machine. It just works. This is one of the big benefits you get out of nix; it's called the copy-paste monad.
If you're having doubts about nixops and want to replace it, you may run into secret management issues. Fortunately there is an alternative project for secret management called agenix. Although I'm a bit skeptical about age, it seems too new to trust. You should do your due diligence before using that in production.
Conclusion
I described the NMMM setup that's working really well for me. Furthermore, I think I gave compelling reasons for others to try this out. Perhaps I liberated some people from their virtual insanity. Let me know if I've inspired you to change your course of action, or if you have compelling reasons not to use this approach. How to structure services is a big decision, and I find it all fascinating.