kasad.com

kasad.com is an internet domain operated by me, Kian Kasad. It hosts a website, an email server, web apps, and many other services.

The kasad.com Server

The server for kasad.com is a VPS hosted by Vultr. It costs $5/month. It's a pretty low-spec VPS, but that's what keeps it cheap.

Specs/information

Property                 Value
OS                       GNU/Linux
Distribution             Debian 11 (bullseye)
Vultr VPS type           Regular Cloud Compute
Price (monthly)          US$5.00
CPU cores                1
Architecture             x86_64
RAM                      1 GB
Storage                  25 GB
IP address allocation    Static (via DHCP)
IPv4 address             140.82.7.10
IPv6 address             2001:19f0:5:46cc:5400:2ff:fed9:9eba
Bandwidth (monthly)      1 TB

Services

The kasad.com server runs:

Logging/Monitoring

Currently, the logging and especially log monitoring capabilities on the kasad.com server are lackluster. Logging settings have not been changed for most processes. Some log to syslog/systemd-journal while others write logs in /var/log/.

Network Traffic Accounting

Network traffic accounting is handled relatively well. The server uses vnStat 2.6 to aggregate network traffic for each interface.

A very simple web frontend (that I wrote) is available for vnStat.

Remote Access (SSH)

Remote access into the kasad.com server is done using SSH. The server runs OpenSSH 8.4.

Password authentication is disabled; public-key authentication is required to log in. Logging in as root is disabled, as is logging in as several mail-only accounts.
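
The corresponding sshd_config directives would look something like the sketch below. This is an illustration of the policy described above, not a copy of the server's actual configuration, and the DenyUsers account names are hypothetical placeholders:

```
# /etc/ssh/sshd_config (excerpt, illustrative)
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
# Hypothetical names standing in for the mail-only accounts:
DenyUsers mailonly1 mailonly2
```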

One possible security enhancement would be to enforce two-factor authentication when logging in via SSH. However, this is risky because losing the second factor means locking yourself out of the server.

Vultr Web Console

It's also possible to log in using a username/password from the Vultr website, which provides a web interface to the TTY.

DNS

kasad.com uses Cloudflare to provide DNS resolution. Most traffic to the kasad.com server is not proxied through Cloudflare's network. Traffic using Cloudflare's Zero Trust network (meaning all SWAG endpoints) is proxied through the Cloudflare edge network.

To-do: document specific important DNS records

DNS as proof of ownership

A temporary DNS record is used to prove ownership of the kasad.com domain when obtaining TLS certificates from Let's Encrypt. Certbot, the program used to request new certificates, can do this automatically using a Cloudflare API key that has the Zone > DNS > Edit permission for the kasad.com zone.
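
Assuming Certbot's dns-cloudflare plugin, the setup would look roughly like this: the API token is stored in a credentials file readable only by root (the path and token below are placeholders):

```ini
# /root/.secrets/cloudflare.ini (hypothetical path; chmod 600)
dns_cloudflare_api_token = [redacted]
```

Certbot would then be invoked with something like `certbot certonly --dns-cloudflare --dns-cloudflare-credentials /root/.secrets/cloudflare.ini -d kasad.com`, which adds the temporary TXT record, waits for propagation, and removes it afterwards.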

Dynamic DNS

It is also possible to use the Cloudflare API to programmatically add/update DNS records. This means it's possible to create a dynamic DNS client script which can be used to provide DDNS records under the kasad.com domain.
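
As a sketch, such a DDNS client could drive the Cloudflare v4 API with curl. Every value below (zone ID, record ID, token, hostname, IP) is a placeholder; the real values come from the Cloudflare dashboard:

```shell
# Hypothetical dynamic-DNS update script for a record under kasad.com.
ZONE_ID='placeholder-zone-id'
RECORD_ID='placeholder-record-id'
CF_API_TOKEN='placeholder-token'
NAME='home.kasad.com'    # hypothetical DDNS hostname
IP='203.0.113.7'         # in practice: IP=$(curl -fsS https://api.ipify.org)

# Build the JSON body for the record update.
payload=$(printf '{"type":"A","name":"%s","content":"%s","ttl":300,"proxied":false}' "$NAME" "$IP")
echo "$payload"

# Uncomment to perform the actual update (requires a token with the
# Zone > DNS > Edit permission for the kasad.com zone):
# curl -fsS -X PUT \
#     "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
#     -H "Authorization: Bearer $CF_API_TOKEN" \
#     -H 'Content-Type: application/json' \
#     --data "$payload"
```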

Web Apps

kasad.com features a collection of web apps. Most of them are not hosted on the kasad.com server but instead run as Docker containers on my laptop. See the Web Apps Overview page for a further description and a list of the currently available web apps.


Web App Overview

kasad.com features a collection of web apps. Most of the web apps don't actually run on the kasad.com server. Instead, they are hosted in Docker containers on my laptop and are made accessible using Cloudflare's Zero Trust platform.

Currently we host the following apps:

Each service* is reverse-proxied through the Secure Web Application Gateway (a.k.a. SWAG) container. The SWAG container connects to the Cloudflare Zero Trust platform using a Cloudflare Tunnel. Then Cloudflare Zero-Trust Applications are used to control access to the services.

*With the exception of CGit, Nextcloud and RoundCube, because these run directly on the kasad.com server.


Automatic Start-up with systemd

Since our Docker containers are web services, they need to be running all the time in order to be useful. To ensure this happens, it's useful to automatically start them when the host machine boots. We can do this easily using systemd on Linux.

TL;DR: if you don't want to learn how this works and you just want the solution, download the attached docker-compose@.service file and place it in the /etc/systemd/system directory. Then skip to Using our new service.

About systemd

Systemd is a service manager and init system. This means it is responsible for starting, stopping, and supervising services running on the machine. Systemd can automatically start services on boot and manage dependencies and ordering between units. This allows us to automatically bring our Docker Compose stacks up when the host machine boots.

Systemd also has a feature called template units. Template units have names that end with an @. They allow you to define a generic service that can have specific instances. We will use this to define a generic service for Docker Compose stacks. Then we'll create specific instances of this for each of our different stacks.

The docker-compose@ service

Creating the file

First, we have to create a new unit file. We can do that in two ways: (1) use the systemctl edit command or (2) place the file in the /etc/systemd/system directory. Option 1 handles file placement automatically, so that's what I'll use.

We'll run the following command to create a new service file. The name can be arbitrary, but it must end with @.service to make it a template service file. The --force flag is needed because we're creating a unit that doesn't exist yet.

# systemctl edit --full --force docker-compose@.service

Defining the service

Here are the contents of the docker-compose@.service file:

[Unit]
Description=Start Docker Compose stack at %I
Requires=docker.service
After=docker.service

[Service]
WorkingDirectory=%I

Type=oneshot
RemainAfterExit=true

ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
ExecReload=/usr/bin/docker compose up -d

[Install]
WantedBy=multi-user.target

Now let's walk through what it does.

The [Unit] section

The [Unit] section contains metadata and dependency information. The Description property is just a description for the unit. Don't worry about the %I placeholder yet. We'll get to that later.

The Requires property defines units that are required for the docker-compose@ service to start. Since we'll be using Docker, we need Docker to be running. The After directive tells systemd to wait until docker.service has started before starting docker-compose@.service. Without this, they would both start simultaneously.

The [Service] section

The [Service] section contains instructions for systemd to start, stop, and supervise our service. Systemd services can be started, stopped, and reloaded. Reloading means they'll reload their configuration.

Services can also be restarted, but this is implemented as just stopping and starting the service sequentially, so it isn't really a separate action as far as we're concerned.

The ExecStart, ExecStop, and ExecReload directives define commands to run for each action. We just call docker compose up/down for each one. We pass the -d flag when bringing the stack up as we don't need systemd to capture the log output since Docker already handles that. Since Docker Compose automatically reloads the docker-compose.yml file when docker compose up is run, the reload and start actions can be the same.

The WorkingDirectory option is the most important one here. It tells systemd to change to the given directory before starting the service. But %I isn't a directory. What's up with that?

The %I placeholder

Remember how I mentioned template units? When activating a template unit, you have to specify a value after the @ sign, like docker-compose@/srv/swag.service. Systemd will take the value after the @ sign, called the instance name, and fill it in wherever %I occurs in the unit file.

Systemd requires the instance name to be escaped. This can be done manually or by using the systemd-escape command. For simple paths like the ones I'm using, slashes (/) get replaced with hyphens (-), meaning we'd use docker-compose@-srv-swag.service.

So if we provide a directory path as the instance name when enabling our service, systemd will run the docker compose commands in that directory, because the value of WorkingDirectory is the %I placeholder. This means we can use the same service file for multiple Docker Compose stacks.

The [Install] section

Finally, we need to tell systemd when to trigger our service. This is done in the [Install] section. The WantedBy directive defines a target unit that our service will be a part of when it is enabled. In systemd, the multi-user target is the default target that the system will activate, so we use this.

Using our new service

This is the easiest part. Now all we need to do is enable our service. Enabling a service means telling systemd to start it when the target that wants it is started. So if we enable our new service, systemd will start it when the system boots, because the default target wants it.

But we can't just systemctl enable docker-compose@.service. We need to provide an instance name. Remember that the instance name is the escaped directory where our docker-compose.yml file is located. For example, let's enable the Secure Web Application Gateway stack, which is located in /srv/swag. First, we'll escape the path:

$ systemd-escape /srv/swag
-srv-swag

Then we'll use the escaped path as the instance name. Note that systemctl must be run as root.

# systemctl enable docker-compose@-srv-swag.service
Created symlink /etc/systemd/system/multi-user.target.wants/docker-compose@-srv-swag.service → /etc/systemd/system/docker-compose@.service.

Now our service is enabled and it will start next time we reboot the system. You can repeat this process for other Docker Compose stacks by using different directory paths.

Note: enabling a service does not start it. If you want to enable it and start it at once, you can use systemctl enable --now <service>.


Sending Emails from Web Apps

Some of the kasad.com web apps (currently Vikunja, Bitwarden, Nextcloud, and BookStack) have the capability to send emails. All of these apps are hosted on their own subdomains, so it makes the most sense if emails sent by those apps originate from their respective subdomains.

Expected behavior

  1. All kasad.com email users will be allowed to send mail from (and only from) <username>@kasad.com or <username>+<anything>@kasad.com.
  2. Some special users (the mail accounts for the web apps) will be allowed to send emails from <anything>@<subdomain>.kasad.com as well.

In the above, <username> represents the account's login username. <anything> represents any sequence of letters, numbers, +, ., -, or _. <subdomain> represents a subdomain (e.g. tasks for tasks.kasad.com).

Implementation

The kasad.com mail server runs Postfix for SMTP, so we will focus on configuring that. By default, if authenticated users are allowed to send mail, they can use any From address, including ones they don't own and even ones with a different domain from the server.

Note: Dovecot is used for local delivery and IMAP, but since none of the web apps need to receive mail, we can ignore this.

The smtpd_sender_login_maps option

We can restrict the allowed From addresses using the smtpd_sender_login_maps option in Postfix's main.cf file (/etc/postfix/main.cf). This option allows us to specify a lookup table of email addresses that map to login usernames.

We don't want to hard-code each user's allowed email addresses. Even if we did, it wouldn't handle the <username>+<anything>@kasad.com case or the <anything>@<subdomain>.kasad.com case. To handle these arbitrary addresses, we need to use regular expressions.

Postfix supports regular expression mapping tables. We can use one like this:

smtpd_sender_login_maps = pcre:/etc/postfix/login_maps

This means Postfix will look in /etc/postfix/login_maps to map From addresses to the usernames which are allowed to send mail from those addresses.
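
Note that smtpd_sender_login_maps by itself only defines the mapping; to actually reject mismatched senders, the reject_sender_login_mismatch restriction must also be in place. A sketch of the relevant main.cf settings (the full restriction list depends on the rest of the server's configuration):

```
smtpd_sender_login_maps = pcre:/etc/postfix/login_maps
smtpd_sender_restrictions =
    reject_sender_login_mismatch,
    permit
```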

Populating the login map table

The login map table maps addresses to usernames in that order. Since we're using regular expressions, the regular expression will match the address. For the username column, we can either hard-code usernames or use capture groups from the regular expression. This will make more sense once you see the example:

/^(\w+)(\+[a-z0-9_+.-]+)?@kasad\.com$/  $1

/^[a-z0-9_+.-]+@tasks\.kasad\.com$/     vikunja
/^[a-z0-9_+.-]+@cloud\.kasad\.com$/     nextcloud
/^[a-z0-9_+.-]+@auth\.kasad\.com$/      authelia
/^[a-z0-9_+.-]+@books\.kasad\.com$/     bookstack
/^[a-z0-9_+.-]+@bw\.kasad\.com$/        vaultwarden

Each regular expression must be surrounded by slashes. And since we want to match the entire address, we'll put a ^ at the beginning and a $ at the end of each pattern. Postfix automatically uses case-insensitive regular expressions. Adding the i flag here would actually make the patterns case-sensitive.

Explaining the first map entry

The first line contains two capture groups (patterns enclosed in parentheses). The first capture group captures one or more word characters (letters, digits, and underscores). This will be the username part of the address. The second capture group captures a plus (+) followed by one or more letter, number, _, +, ., or - characters. This is the extension part of the address. The second capture group is followed by a question mark (?), meaning the whole second group is optional. This means the pattern will match both <username>@kasad.com and <username>+<anything>@kasad.com while capturing the username in the first capture group. Finally, the pattern ends with @kasad\.com, which matches the literal text @kasad.com.

Next, we'll look at the second column. We know that this will evaluate to the account's username, since the second column contains the username that is allowed to send an email from the address matched in the first column. But $1 is not a username. Except it is! $1 means the contents of the first capture group that was matched. And the first capture group contains the account's username. So the first pattern will map both <username>@kasad.com and <username>+<anything>@kasad.com to <username>.
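
To sanity-check the first entry, the same capture-group logic can be reproduced with sed. This is a standalone demonstration of the pattern, not part of the Postfix setup:

```shell
# Emulate the first login-map entry: extract the username from
# <username>@kasad.com or <username>+<anything>@kasad.com.
extract_user() {
    printf '%s\n' "$1" |
        sed -nE 's/^([A-Za-z0-9_]+)(\+[a-z0-9_+.-]+)?@kasad\.com$/\1/p'
}

extract_user 'kian@kasad.com'        # prints "kian"
extract_user 'kian+lists@kasad.com'  # prints "kian"
extract_user 'kian@evil.com'         # prints nothing (no match)
```

On a host where Postfix is installed, `postmap -q 'kian+lists@kasad.com' pcre:/etc/postfix/login_maps` performs the real lookup against the table.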

This accomplishes Expected Behavior #1.

Explaining the rest of the map entries

The rest of the entries are almost exactly the same. They are a bit simpler, since we don't need capture groups. We know that each web app that sends mail has its own subdomain. Since these subdomains are manually assigned, we must hard-code the entries in the login map table.

Each of the remaining patterns begins with [a-z0-9_+.-]+, which will match one or more letter, number, _, +, ., or - character. It's then followed by @<sub>.kasad.com, where <sub> is replaced by a specific subdomain.

The second column then contains the username for the mail account for that subdomain's web app. For example, tasks.kasad.com is the domain for Vikunja, so the vikunja user is authorized for any so-called local part (what comes before the @ symbol) on that domain.


Permissions for Persistent Storage Volumes

Some of the Docker containers read/write data to persistent storage volumes. They read/write using the UID/GID of the entrypoint process in the container.

Expected behavior

To allow for better access and administration of the files in these volumes, containers should ideally:

  1. Create files/directories with the owning group set to servlets
  2. Use a umask of 007, which grants read/write permission to the owner and the servlets group while removing all access for other users.
  3. Set the setgid bit on directories (i.e. 2770 chmod(8) value)

Implementation

The UID and GID of the container actually do not matter as long as the umask(2) of the container's process can be set. LinuxServer.io containers allow setting the umask using the UMASK environment variable.

To ensure files are created with the servlets GID, the root directory of each volume should have the SetGID bit set. This will ensure that files and directories created within the volume inherit the GID of their parent directory.

Services can have their own users on the host, or they can run as root. The UID/GID of the container does not matter as long as the umask is set. If it is not possible to specify a umask, run the container using the servlets GID. If using a custom UID, add the user to the servlets group.
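
The effect of the umask and setgid bit can be demonstrated in a scratch directory. The servlets group is assumed to exist on the real host; this demo only shows the permission bits:

```shell
# Scratch directory standing in for a persistent-storage volume root.
dir=$(mktemp -d)

# Setgid + rwx for owner and group, nothing for others (mode 2770).
chmod 2770 "$dir"

# A process with umask 007 creates group-writable files (666 & ~007 = 660).
(umask 007 && touch "$dir/file")

stat -c '%a' "$dir"        # prints 2770
stat -c '%a' "$dir/file"   # prints 660
```

On the real volume, one would additionally run chgrp to set the owning group (e.g. `chgrp servlets <volume root>`) so that the setgid bit propagates the servlets GID to new files.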


Secure Web Application Gateway

The Secure Web Application Gateway (a.k.a. SWAG) is a reverse proxy service. It serves as the reverse proxy for all the web apps on kasad.com.

It runs as a Docker container using the lscr.io/linuxserver/swag:latest image.

Access

The SWAG container runs inside a NAT-protected LAN with a dynamic public IP. This presents some problems when hosting a public-facing webserver.

Instead of traditional port forwarding, we use Cloudflare's Zero Trust Platform. It allows us to connect the SWAG container to Cloudflare's network using a Tunnel. Cloudflare Tunnels are egress-only, meaning no incoming connections will be established, so no port forwarding or dynamic DNS solutions are required.

Configuration

Docker mods

The SWAG container uses so-called mods to add extra features. Our SWAG container uses:

TLS Certificates

The SWAG container uses Certbot to obtain signed TLS certificates from Let's Encrypt. To configure this, we define several environment variables for the container:

URL: kasad.com
SUBDOMAINS: swag,auth,tasks,books,bw
ONLY_SUBDOMAINS: true
VALIDATION: dns
DNSPLUGIN: cloudflare
PROPAGATION: 30
EMAIL: admin@kasad.com

These settings configure Certbot to add a temporary DNS record to kasad.com to verify ownership of the domain, then wait 30 seconds for propagation, then request a certificate valid only for the subdomains specified. The email address provided is optional and is used for certificate expiration notifications.

Cloudflare Tunnel parameters

The Cloudflare tunnel connection is configured using environment variables. There is also a configuration file which handles the routing rules for incoming traffic.

CF_ZONE_ID: [redacted]
CF_ACCOUNT_ID: [redacted]
CF_API_TOKEN: [redacted]
CF_TUNNEL_NAME: swag.kasad.com
CF_TUNNEL_PASSWORD: [redacted]
FILE__CF_TUNNEL_CONFIG: /config/tunnelconfig.yml

Most of the parameters are potentially sensitive API keys.

CF_API_TOKEN must contain an API token which has the Account > Cloudflare Tunnel > Edit permission and the Zone > DNS > Edit permission for the kasad.com zone.

The CF_ZONE_ID and CF_ACCOUNT_ID can be found on the Overview page for a zone in the Cloudflare dashboard.

CF_TUNNEL_NAME is the name for the Tunnel that will be created. CF_TUNNEL_PASSWORD is a string that can be made up or randomly generated. It should be at least 32 characters.

Ingress routing

A single tunnel is used for multiple subdomains, so cloudflared needs to know where to route traffic for each origin. This is done using a YAML configuration file following Cloudflare's specification.

The YAML contents can either (1) be specified directly as the value for the CF_TUNNEL_CONFIG environment variable, or (2) be placed in a file inside the container. The file path must then be specified in the FILE__CF_TUNNEL_CONFIG environment variable. We use the second option.

The contents of the config/tunnelconfig.yml are:

ingress:
  - hostname: swag.kasad.com
    service: https://swag.kasad.com
  - hostname: auth.kasad.com
    service: https://auth.kasad.com
  - hostname: tasks.kasad.com
    service: https://tasks.kasad.com
  - hostname: books.kasad.com
    service: https://books.kasad.com
  - hostname: bw.kasad.com
    service: https://bw.kasad.com
  - hostname: send.kasad.com
    service: https://send.kasad.com
  - service: http_status:404

Hostname routing

You'll notice in the config/tunnelconfig.yml file that the service field uses the same hostname as the hostname field. This is intentional.

Since SWAG's NGINX server uses the Host header of incoming requests to route traffic, that header must stay intact. This means cloudflared needs to be able to reach the NGINX instance using the DNS names of the subdomains.

This just means we need to define extra hostnames for the SWAG container which all point to localhost. This can easily be done in the Docker Compose file:

services:
  swag:
    # ...
    extra_hosts:
      - swag.kasad.com:127.0.0.1
      - auth.kasad.com:127.0.0.1
      - tasks.kasad.com:127.0.0.1
      - books.kasad.com:127.0.0.1
      - bw.kasad.com:127.0.0.1
      - send.kasad.com:127.0.0.1

LAN access

If you are on the same LAN as the SWAG container's host, SWAG can be accessed directly via the host's IP address, without going through Cloudflare's public network.

To disable this, remove the port forwarding definitions from the Compose file:

services:
  swag:
    # ...
    ports:
      - '80:80'
      - '443:443'

Docker network

Since the SWAG container needs network access to any services it is reverse-proxying, the upstream containers must be on the same (Docker) network as the SWAG container. This does not require any extra configuration in the SWAG stack, but it does require the following configuration in the Compose file for any upstream services:

services:
  # ...
  
  some_upstream_service:
    # ...
    networks:
      - default # The default network for this service's stack
      - swag # The SWAG stack's network
      
networks:
  swag:
    external: true # Tells Docker to look for an existing network instead of creating a new one
    name: swag_default # This is the name of the default network for the SWAG stack

Service configuration

The SWAG container runs NGINX as the reverse proxy webserver. Its configuration files are hosted in config/nginx/. Configuration for each service that is being reverse-proxied exists under config/nginx/proxy-confs/. See the README.md file in that directory for details.

SWAG comes with sample configs for many services. These samples are files in config/nginx/proxy-confs/ with the names <service>.<type>.conf.sample. <service> is the name of the service. <type> is either subdomain or subfolder, depending on how the service is reverse-proxied.

Adding new subdomains

Hosting a service on a new subdomain requires additional steps beyond just the NGINX config: a new TLS certificate must be obtained, a new DNS record must be added, and a routing rule must be defined.

Luckily, this is easy. Simply add the new subdomain to the SUBDOMAINS environment variable for the SWAG container and the extra_hosts list:

services:
  swag:
    # ...
    environment:
      SUBDOMAINS: swag,auth,...,newsub
      # ...
    extra_hosts:
      # ...
      - newsub.kasad.com:127.0.0.1

Then add a new entry in the config/tunnelconfig.yml file:

ingress:
  # ...
  - hostname: newsub.kasad.com
    service: https://newsub.kasad.com
  # ...

Finally, reload the SWAG stack:

# systemctl reload docker-compose@-srv-swag.service

Deployment

The Secure Web Application Gateway runs as just a single Docker container. Since we're running Heimdall as a dashboard/landing page, the SWAG and Heimdall containers are run in the same Compose stack.

Docker Compose service configuration for the SWAG container:

version: '3'

services:
  swag:
    image: lscr.io/linuxserver/swag:latest
    container_name: swag
    environment:
      PUID: 938 # swag
      PGID: 941 # servlets
      UMASK: 007
      URL: kasad.com
      SUBDOMAINS: swag,auth,tasks,books,bw,send
      ONLY_SUBDOMAINS: true
      VALIDATION: dns
      DNSPLUGIN: cloudflare
      PROPAGATION: 30
      EMAIL: admin@kasad.com
      DOCKER_MODS: linuxserver/mods:universal-cloudflared|linuxserver/mods:swag-dashboard|linuxserver/mods:swag-auto-reload
      CF_ZONE_ID: [redacted]
      CF_ACCOUNT_ID: [redacted]
      CF_API_TOKEN: [redacted]
      CF_TUNNEL_NAME: swag.kasad.com
      CF_TUNNEL_PASSWORD: [redacted]
      FILE__CF_TUNNEL_CONFIG: /config/tunnelconfig.yml
      TZ: America/Los_Angeles
    extra_hosts:
      - swag.kasad.com:127.0.0.1
      - auth.kasad.com:127.0.0.1
      - tasks.kasad.com:127.0.0.1
      - books.kasad.com:127.0.0.1
      - bw.kasad.com:127.0.0.1
      - send.kasad.com:127.0.0.1
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - ./config:/config
    restart: unless-stopped

SWAG Dashboard

We've enabled the linuxserver/mods:swag-dashboard mod for the SWAG container. This provides a dashboard page which displays information and metrics about the SWAG server.

The dashboard endpoint (/dashboard) is protected by a Cloudflare Access policy which allows only authenticated users who belong to the Administrators group.

NGINX configuration

SWAG only comes with a subdomain configuration file for the dashboard, but we want it hosted on swag.kasad.com/dashboard, so we'll need to create our own configuration file.

This file should be saved as config/nginx/proxy-confs/dashboard.subfolder.conf:

location /dashboard {
    alias /dashboard/www;
    index index.php;
    rewrite_log on;
    try_files $uri $uri/ /dashboard/index.php?$args;
}

location ~ ^/dashboard/(.*\.php)$ {
    alias /dashboard/www/$1;
    rewrite_log on;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include /etc/nginx/fastcgi_params;
    fastcgi_param DOCUMENT_ROOT /dashboard/www;
    add_header X-Document-Root "$document_root";
}

Heimdall - App Launcher

Description

Heimdall is an app launcher / dashboard. It serves as the landing page for swag.kasad.com.

Deployment details

Access

Heimdall is served by the SWAG reverse proxy. It is published as the root page on swag.kasad.com. It requires authentication and authorization by Cloudflare Access.

Docker Compose stack

The Heimdall container runs as part of the SWAG stack. Unlike Authelia, it will not be separated into its own stack because it is the SWAG frontend.

The Docker Compose service configuration for Heimdall is:

services:
  # ...
  heimdall:
    image: lscr.io/linuxserver/heimdall
    container_name: heimdall
    environment:
      - PUID=938 # swag
      - PGID=941 # servlets
      - TZ=America/Los_Angeles
    volumes:
      - ./heimdall_config:/config
    restart: unless-stopped

Configuration

We want to make Heimdall the root page for swag.kasad.com so it acts as a dashboard/landing page. This requires two changes in the Secure Web Application Gateway's NGINX configs:

Disabling the default SWAG root

To disable the default SWAG root, we need to comment out the root location block in the default site config. Since this is the root page, its configuration is located in config/nginx/site-confs/default rather than in config/nginx/proxy-confs/....

Find and comment out the root location block. It should look like this when done:

    #location / {
    #    # enable the next two lines for http auth
    #    #auth_basic "Restricted";
    #    #auth_basic_user_file /config/nginx/.htpasswd;

    #    # enable the next two lines for ldap auth
    #    #auth_request /auth;
    #    #error_page 401 =200 /ldaplogin;

    #    # enable for Authelia
    #    #include /config/nginx/authelia-location.conf;

    #    try_files $uri $uri/ /index.html /index.php?$args =404;
    #}

Now that this is disabled, any subfolder configuration file in config/nginx/proxy-confs with a location / { ... } block will be active.

Enabling the Heimdall page

Enabling Heimdall as the root page is easy, as the sample configuration shipped with the SWAG container already uses the root location. All we need to do is copy the sample file and remove the .sample suffix:

$ cp -v config/nginx/proxy-confs/heimdall.subfolder.conf{.sample,}

Authentik - Identity & SSO Provider

Authentik is "an open-source Identity Provider focused on flexibility and versatility." It acts as a user database and an authentication/authorization provider for Cloudflare Access and other web apps.

Deployment

Authentik requires (at least) 4 Docker containers:

Image                                Purpose
ghcr.io/goauthentik/server:2022.8.2  The main Authentik server
ghcr.io/goauthentik/server:2022.8.2  Authentik backend worker
postgres:12-alpine                   Database
redis:alpine                         Cache server

Authentik provides a tutorial for setting up Authentik using Docker Compose. It is highly recommended to carefully read the entire tutorial. Also read the Terminology page as it'll become required knowledge while configuring Authentik.

We deploy the Authentik stack using the following Docker Compose file and environment file.

docker-compose.yml

---
version: '3.4'

services:
  database:
    image: postgres:12-alpine
    container_name: authdb
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 5s
    volumes:
      - database:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${PG_PASS:?database password required}
      - POSTGRES_USER=${PG_USER:-authentik}
      - POSTGRES_DB=${PG_DB:-authentik}
    env_file:
      - .env
      
  redis:
    image: redis:alpine
    container_name: authredis
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep -Fq PONG"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 3s
      
  server:
    image: ghcr.io/goauthentik/server:2022.8.2
    container_name: authentik
    restart: unless-stopped
    command: server
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: database
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
      # AUTHENTIK_ERROR_REPORTING__ENABLED: "true"
      # WORKERS: 2
    env_file:
      - .env
    networks:
      - default
      - swag

  worker:
    image: ghcr.io/goauthentik/server:2022.8.2
    container_name: authentik-worker
    restart: unless-stopped
    command: worker
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: database
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
      # AUTHENTIK_ERROR_REPORTING__ENABLED: "true"
    env_file:
      - .env

volumes:
  database:
    driver: local
    
networks:
  swag:
    external: true
    name: swag_default

.env

PG_PASS=[redacted]
AUTHENTIK_EMAIL__HOST=mail.kasad.com
AUTHENTIK_EMAIL__PORT=465
AUTHENTIK_EMAIL__USE_SSL=true
AUTHENTIK_EMAIL__TIMEOUT=10
AUTHENTIK_EMAIL__FROM=Kasad Auth <no-reply@auth2.kasad.com>
AUTHENTIK_EMAIL__USERNAME=authentik
AUTHENTIK_EMAIL__PASSWORD=[redacted]
AUTHENTIK_SECRET_KEY=[redacted]
AUTHENTIK_DEFAULT_USER_CHANGE_USERNAME=false
AUTHENTIK_DISABLE_STARTUP_ANALYTICS=true

SWAG network

Since our Authentik instance is reverse-proxied behind the Secure Web Application Gateway, the SWAG container needs network access to the Authentik container. This has been done in the Compose stack above. See this explanation for details.

Configuration

The configuration required to get Authentik working according to my specifications is extensive. It's also still changing significantly. Because of this, I will leave the writing of the rest of this page for a later date.

To do: document Authentik configuration


Guacamole - Remote Access

Description

Apache Guacamole is a remote access gateway with a web frontend. It allows the user to connect to a device using SSH, VNC, or RDP using just a web browser.

Deployment details

Guacamole requires three separate Docker containers: (1) the backend server which handles the underlying connections, (2) the frontend server, which provides the web interface, and (3) a PostgreSQL database which stores the frontend's data.

These are the respective Docker container images:

  1. kiankasad/guacd:latest
  2. guacamole/guacamole:latest
  3. ghcr.io/kdkasad/guacdb:2022.09.06

Access

Guacamole is published behind the Secure Web Application Gateway at swag.kasad.com/guacamole. It is protected by Cloudflare Zero Trust, requiring authentication to access.

Custom database container

Guacamole will not automatically initialize a database the first time it is run. Instead, this has to be done manually using an initialization script when creating the database container. To make this easier, I've created a Docker image specifically for Guacamole which will automatically extract the latest initialization script and create the database the first time it's run.

The sources are available on GitHub and the image is published to the GitHub Container Registry as ghcr.io/kdkasad/guacdb.
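For context, the manual procedure that this image automates looks roughly like the following. This is a sketch based on the upstream-documented initdb.sh helper; the container name, user, and database name match the Compose stack below.

```shell
# Generate the PostgreSQL schema initialization script from the frontend image
docker run --rm guacamole/guacamole /opt/guacamole/bin/initdb.sh --postgres > initdb.sql

# Apply it once to the (empty) database before first use
docker exec -i guacdb psql -U guac -d guacamole_db < initdb.sql
```

The custom guacdb image performs the equivalent of these steps automatically on first startup.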

Custom guacd container

The upstream Guacamole project publishes nightly builds of the guacamole/guacd Docker image. However, the latest one (published 2022-09-06) broke support for Ed25519 SSH keys. Until the next release occurs, I've built and published my own guacd container using the latest sources from GitHub (commit 0361adc).

This container uses the latest upstream sources and is published on Docker Hub as kiankasad/guacd:latest.

Docker Compose stack

The Guacamole stack uses the following Docker Compose configuration:

version: '3'

services:

  guacdb:
    image: ghcr.io/kdkasad/guacdb:2022.09.06
    container_name: guacdb
    restart: unless-stopped
    environment:
      POSTGRES_DB: guacamole_db
      POSTGRES_USER: guac
      POSTGRES_PASSWORD: [redacted]
    volumes:
      - ./guacdb-data:/var/lib/postgresql/data

  guacd:
    image: kiankasad/guacd:latest
    container_name: guacd
    restart: unless-stopped

  guacamole:
    image: guacamole/guacamole:latest
    container_name: guacamole
    restart: unless-stopped
    environment:
      REMOTE_IP_VALVE_ENABLED: "true"
      GUACD_HOSTNAME: guacd
      POSTGRES_HOSTNAME: guacdb
      POSTGRES_DATABASE: guacamole_db
      POSTGRES_USER: guac
      POSTGRES_PASSWORD: [redacted]
      POSTGRESQL_AUTO_CREATE_ACCOUNTS: "true"
      OPENID_AUTHORIZATION_ENDPOINT: "https://auth2.kasad.com/application/o/authorize/"
      OPENID_JWKS_ENDPOINT: "https://auth2.kasad.com/application/o/guacamole/jwks/"
      OPENID_ISSUER: "https://auth2.kasad.com/"
      OPENID_CLIENT_ID: "################################"
      OPENID_REDIRECT_URI: "https://swag.kasad.com/guacamole/"
      OPENID_USERNAME_CLAIM_TYPE: "preferred_username"
      EXTENSION_PRIORITY: "openid"
    depends_on:
      - guacdb
      - guacd
    networks:
      - default
      - swag

networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: "172.18.0.0/16"
          gateway: "172.18.0.1"
  swag:
    name: swag_default
    external: true

Static network subnet

The reason for the specific network subnet is that one of the saved connections in Guacamole points to the machine hosting the Docker containers. The easiest way to address the Docker host from within a container is via the network gateway address, 172.18.0.1. This requires that the Guacamole stack's subnet always contain that address, i.e. at least 172.18.0.0/24. I've set it to 172.18.0.0/16 because Docker usually assigns 16-bit subnets.
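If pinning the subnet ever becomes inconvenient, a possible alternative (assuming Docker Engine 20.10 or newer) is the special host-gateway mapping, which lets a container address the host without a hard-coded gateway IP:

```yaml
services:
  guacd:
    extra_hosts:
      # host.docker.internal resolves to the Docker host's gateway address
      - "host.docker.internal:host-gateway"
```

Connections within Guacamole could then target host.docker.internal regardless of which subnet Docker assigns.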

SWAG reverse proxy

The web frontend for Guacamole is reverse-proxied behind the Secure Web Application Gateway (SWAG). This means the swag container needs network access to the guacamole container, so the guacamole container is added to the swag_default network in the Compose stack.

Configuration

The Guacamole frontend is where the majority of the configuration happens, as it also handles authentication and storage of user preferences/data.

Guacamole is configured using a guacamole.properties file. However, the Docker container allows for automatic generation of this configuration file using environment variables. So all of the configuration for Guacamole is done using environment variables in the Docker Compose file.
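As a rough model of that convention (simplified; legacy aliases such as POSTGRES_* mapping to postgresql-* properties are not covered), the generated property name is just the variable name lower-cased with underscores replaced by hyphens:

```python
def env_to_property(name: str) -> str:
    """Simplified model of how the Guacamole container maps an
    environment variable name to a guacamole.properties key:
    lowercase, underscores -> hyphens. (Legacy aliases like
    POSTGRES_* -> postgresql-* are not modeled here.)"""
    return name.lower().replace("_", "-")

assert env_to_property("OPENID_USERNAME_CLAIM_TYPE") == "openid-username-claim-type"
assert env_to_property("EXTENSION_PRIORITY") == "extension-priority"
```
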

Single Sign-On

Guacamole's frontend server utilizes extensions to provide authentication backends. We use the openid and postgresql authentication extensions. OpenID Connect interfaces with Authentik to provide user authentication. The PostgreSQL backend stores the Guacamole-specific data for each user, like saved connections.

Because we're using two extensions, the order in which they are enabled matters. The guacamole/guacamole Docker container automatically prioritizes the openid extension, which is what we want. This way users must sign in through Authentik, and the PostgreSQL database is only used for data storage, not authentication.

The extension-priority configuration option in the $GUACAMOLE_HOME/guacamole.properties file can be used to override the extension loading order. The EXTENSION_PRIORITY environment variable controls the same option when using the Docker container. However, this change is only in the upstream GitHub repository and hasn't made its way to the official Docker container yet. Despite this, I've defined it anyway (it doesn't hurt).

OpenID Connect parameters

We perform the necessary configuration using environment variables, which the Docker container will convert into configuration file entries.

The following environment variables are set for the guacamole container. Obviously, replace the URLs and client ID to match your setup.

      OPENID_AUTHORIZATION_ENDPOINT: "https://auth2.kasad.com/application/o/authorize/"
      OPENID_JWKS_ENDPOINT: "https://auth2.kasad.com/application/o/guacamole/jwks/"
      OPENID_ISSUER: "https://auth2.kasad.com/"
      OPENID_CLIENT_ID: "[redacted]"
      OPENID_REDIRECT_URI: "https://swag.kasad.com/guacamole/" # Trailing slash is important
      OPENID_USERNAME_CLAIM_TYPE: "preferred_username"

In Authentik, create a new Application and a new OpenID provider for Guacamole.

The JWKS endpoint

If the JWKS endpoint is proxied behind Cloudflare (as ours is), it must have Cloudflare's Browser Integrity Check disabled. This can be accomplished by adding a Page Rule in the kasad.com zone for auth.kasad.com/jwks.json.

If this is not done, Cloudflare will block Guacamole's requests to the JWKS endpoint. In the container's logs, you'll find error messages about 403 Forbidden responses when trying to access the JWKS URL.

Creating an admin user

When Guacamole's database is initialized, a user is created with the username and password set to guacadmin. This user has administrator permissions on the Guacamole instance, meaning they have full control over all aspects. However, when logging in using SSO, this user is not accessible because the normal username/password login is not available.

To get around this, you have two options: (1) temporarily disable the SSO authentication extension, or (2) manually modify the Guacamole database.

I will not document option 2, but if you decide to go with that, see the System Permissions section of the Modifying Data Manually documentation for Guacamole.

For option 1, you must first log in via SSO as the user you wish to turn into an administrator. This will create an entry for this user in the Guacamole database.

Next, disable the OpenID authentication extension by commenting out all the environment variables starting with OPENID_ in the Docker Compose file. Then re-create the stack using docker compose up -d.

Now log in to Guacamole as the guacadmin user. Then go to Settings > Users. Select the user you created in the first step and make them an administrator. While you're at it, change the password for the guacadmin user just in case.

Finally, revert the changes to the Docker Compose file and re-create the stack again. Now your SSO user should be an administrator.

Adding new users

When a user signs in to Guacamole using SSO for the first time, an entry will be created in Guacamole's database, but they will not be given any permissions. This means they cannot create connections on their own.

An administrator must grant them the necessary permissions once their account has been created.

Other Notes

Building from sources

I tried building the Guacamole frontend and backend containers myself from their sources. The backend container built fine, but the frontend container failed because of a missing libglib-2.0.so.0 library.

Unstable mobile keyboard input on Android

I've noticed that the software keyboard input mode doesn't work well on Android (at least in Brave Browser). Keystrokes are not sent through the connection until the backspace key is pressed. A partial workaround is to type what you wish to send, add an extra space, then press backspace; the proper text should then be sent.

Upstream development

Guacamole doesn't seem to be very popular. Unfortunately, this means that upstream development is pretty slow. Bugs (even relatively severe ones) are not fixed quickly.

SSH host keys

Guacamole doesn't accept the standard format for SSH host keys (i.e. the one you'd find in ~/.ssh/known_hosts). However, it will not tell you that. If you attempt to use an SSH host key, it will simply inform you that "An internal error occurred."

guacamole/guacd image versions

The Dockerfile for guacamole/guacd is written such that each build pulls the latest versions of the libraries it uses. A build bot automatically builds nightly versions of the guacamole/guacd image and uploads them to Docker Hub.

However, the latest tag still uses the latest tagged version of the guacd source code. So although it builds nightly, it does not use the latest Git revision. Instead, it uses the latest release (currently 1.4.0) and the newest libraries.

This means that the 1.4.0 and latest image tags are the same in terms of guacd's functionality. They differ only in what the underlying protocol libraries support.

VNC TLS failure

When connecting to a TigerVNC server using TLS transport security, a handshake error occurs. This only happens using the latest tag for the guacamole/guacd image. On tag 1.4.0, it works fine.

Sadly, using 1.4.0 is not a workaround for this because 1.4.0 lacks support for OpenSSH-style SSH keys. So we must choose between VNC with TLS and Ed25519 SSH keys. I choose the latter.

BookStack - Personal wiki

BookStack is a "simple, self-hosted, easy-to-use platform for organising and storing information." In other words, it's a personal wiki. The documentation for kasad.com (what you're reading now) is hosted using BookStack.

Deployment

BookStack requires two containers, the BookStack server and a database. The lscr.io/linuxserver/bookstack and mysql images are used for those.

Access

The BookStack web app is reverse-proxied behind the Secure Web Application Gateway (SWAG) container. It is not protected by Cloudflare Access policies in order to allow public read-only access to the content. A login is required to edit content.

Docker Compose stack

We deploy the BookStack instance using the following Docker Compose file:

version: "2"

services:

  bookstack:
    image: lscr.io/linuxserver/bookstack:22.09.20220908
    container_name: bookstack
    environment:
      - PUID=1003 # bookstack
      - PGID=941 # servlets
      - UMASK=007
    env_file: stack.env
    volumes:
      - /srv/bookstack/config:/config
    networks:
      - default
      - swag
    restart: unless-stopped
    depends_on:
      - bookstack_db
      
  bookstack_db:
    image: mysql
    container_name: bookstack_db
    environment:
      - PUID=1003 # bookstack
      - PGID=941 # servlets
      - UMASK=007
      - TZ=America/Los_Angeles
      - MYSQL_ROOT_PASSWORD=[redacted]
      - MYSQL_DATABASE=bookstackapp
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD=[redacted]
    volumes:
      - /srv/bookstack/db_data:/var/lib/mysql
    restart: unless-stopped
  
networks:
  swag:
    external: true
    name: swag_default

SWAG network

Since our BookStack instance is reverse-proxied by the Secure Web Application Gateway, the SWAG container needs network access to the BookStack container. This has been done in the Compose stack above. See this explanation for details.

Persistent file volumes

BookStack stores most of its data in the MySQL database. However, some of BookStack's configuration data is stored in files in the /config directory and file uploads are stored in the /config/www/uploads directory. So we mount a storage volume on /config within the bookstack container in order to persist this data between service restarts:

    volumes:
      - /srv/bookstack/config:/config

Configuration

The configuration for BookStack's server is done using environment variables. Settings that pertain to using BookStack, on the other hand, are controlled from a settings interface within the BookStack web frontend.

For example, configuring the SMTP server is done using environment variables whereas configuring user roles is done from the settings interface.

Environment configuration

All of BookStack's configuration can be done using environment variables. By default, BookStack is designed to read configuration by parsing /config/www/.env for environment variables. Variables defined in the container's environment are also applied. A value defined in the container environment will override a different value for the same variable in the file.

To improve container repeatability, we define these variables in a stack.env file adjacent to the docker-compose.yml file. This file is loaded into the container's environment by setting env_file: stack.env for the bookstack service in the Compose file.
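The precedence described above can be modeled simply: values parsed from the .env file are applied first, then overwritten by anything set in the container environment. A hypothetical sketch:

```python
def effective_config(dotenv_vars: dict, container_env: dict) -> dict:
    """Model of BookStack's config precedence: container environment
    variables override values parsed from /config/www/.env."""
    merged = dict(dotenv_vars)    # start with the .env file's values
    merged.update(container_env)  # container environment wins
    return merged

cfg = effective_config({"APP_URL": "http://localhost"},
                       {"APP_URL": "https://books.kasad.com/"})
assert cfg["APP_URL"] == "https://books.kasad.com/"
```
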

The contents of the stack.env file are listed below. The comments in the file explain the settings well, so they will not be explained again on this page. More settings are available, and are documented in the Configuration section of BookStack's documentation.

# Application URL
# This must be the root URL that you want to host BookStack on.
# All URLs in BookStack will be generated using this value
# to ensure URLs generated are consistent and secure.
# If you change this in the future you may need to run a command
# to update stored URLs in the database. Command example:
# php artisan bookstack:update-url https://old.example.com https://new.example.com
APP_URL=https://books.kasad.com/

# Database connection parameters
DB_HOST=bookstack_db
DB_DATABASE=bookstackapp
DB_USERNAME=bookstack
DB_PASSWORD=[redacted]

# Mail system to use
# Can be 'smtp' or 'sendmail'
MAIL_DRIVER=smtp

# Mail sender details
MAIL_FROM_NAME="Kasad BookStack"
MAIL_FROM=no-reply@books.kasad.com

# SMTP mail options
# These settings can be checked using the "Send a Test Email"
# feature found in the "Settings > Maintenance" area of the system.
MAIL_HOST=mail.kasad.com
MAIL_PORT=587
MAIL_ENCRYPTION=tls
MAIL_USERNAME=bookstack
MAIL_PASSWORD=[redacted]

# OpenID Connect authentication
AUTH_METHOD=oidc

# Control if BookStack automatically initiates login via your OIDC system
# if it's the only authentication method. Prevents the need for the
# user to click the "Login with x" button on the login page.
# Setting this to true enables auto-initiation.
AUTH_AUTO_INITIATE=true

# Set the display name to be shown on the login button.
# (Login with <name>)
OIDC_NAME="Kasad Auth Portal"

# Name of the claims(s) to use for the user's display name.
# Can have multiple attributes listed, separated with a '|' in which
# case those values will be joined with a space.
# Example: OIDC_DISPLAY_NAME_CLAIMS=given_name|family_name
OIDC_DISPLAY_NAME_CLAIMS=name

# OpenID Connect server parameters
OIDC_CLIENT_ID=[redacted]
OIDC_CLIENT_SECRET=[redacted]
OIDC_ISSUER=https://auth2.kasad.com/application/o/bookstack/
OIDC_ISSUER_DISCOVER=true

# Within BookStack there are a few different options for storing files:
#    local (Default) - Files are stored on the server running BookStack. Images are publicly accessible, served by your webserver, but attachments are secured behind BookStack's authentication.
#    local_secure - Same as local option but images are served by BookStack, enabling authentication on image requests. Provides higher security but is more resource-intensive and could induce performance issues.
#    s3 - Store files externally on Amazon S3. Images are made publicly accessible on upload.
STORAGE_TYPE=local_secure

# Only send cookies over a HTTPS connection.
# Ensure you have BookStack served over HTTPS before enabling.
# Defaults to 'false'
SESSION_SECURE_COOKIE=true

# Store user session data in the database instead of in files.
# This will hopefully persist user sessions across service restarts.
SESSION_DRIVER=database

Single sign-on

BookStack supports the OpenID Connect standard for single sign-on, meaning we can use Authentik to sign in to BookStack. See the BookStack documentation on OIDC for information on configuring SSO.

Authentik OpenID provider settings

Within Authentik, we need to create an Application and an OpenID Provider for BookStack.

All other settings should be fine if left with their default values.

Configuration within BookStack

Some settings are controlled from BookStack's user interface when logged in as a user with administrator permissions. These settings don't pertain to the BookStack server's configuration, but instead to user-facing options, e.g. custom theming and allowing public access without authentication.

Users and roles

In BookStack, users have Roles which define what permissions they are given. Many roles can be created. Roles are not nested, though, so they don't inherit permissions from each other.

Roles are configured within the settings interface in BookStack. Default permissions for each role on a specific book/chapter/page can be overridden by setting custom permissions on said book/chapter/page. See Book/page permissions below for details.

OpenID group sync

Roles can be automatically assigned according to groups which a user belongs to in the SSO provider. See the Group Sync section of BookStack's OpenID documentation for details.

Configuring this requires defining two environment variables; see the Group Sync documentation linked above for their names and values.

Using BookStack

Book/page permissions

Books and pages can have custom permissions. Since the instance is publicly accessible, unauthenticated users can access the content according to the permissions assigned to the Public role. Create/modify/delete permissions should not be granted to the Public role, otherwise unauthenticated users will be able to modify content.

By default, books/pages will not be publicly visible. To make a book/page publicly visible, add custom permissions for that page and select the View permission for the Public role.

Pages and chapters will inherit permissions from the book they belong to unless they are given custom permissions. More specific permissions (e.g. page level) override less specific (e.g. book level) ones.

Vikunja - Task Manager & To-Do List

Description

Vikunja is a task manager / to-do list application. It features organization, scheduling, and (my favorite part) Kanban boards.

Access

Vikunja is reverse-proxied by the Secure Web Application Gateway. It is published on tasks.kasad.com.

Cloudflare Access authentication/authorization is required to access the Vikunja instance.

Deployment details

Docker container stack

Vikunja consists of two containers: (1) the backend/API and (2) the web interface frontend. The images for those containers are

  1. vikunja/api:latest
  2. vikunja/frontend:latest

The Docker Compose file for the stack follows:

version: '3'

services:
 api:
   image: vikunja/api
   container_name: vikunja-api
   environment:
     PUID: 938 # swag
     PGID: 941 # servlets
     VIKUNJA_DATABASE_TYPE: sqlite
     VIKUNJA_DATABASE_PATH: ./data/vikunja.db
     VIKUNJA_FILES_BASEPATH: ./data/files
     VIKUNJA_SERVICE_JWTSECRET: [redacted]
     VIKUNJA_SERVICE_FRONTENDURL: https://tasks.kasad.com/
     VIKUNJA_SERVICE_ENABLEREGISTRATION: "false"
     VIKUNJA_SERVICE_TIMEZONE: America/Los_Angeles
     TZ: America/Los_Angeles
     VIKUNJA_CORS_ENABLE: "true"
     VIKUNJA_CORS_ORIGINS: https://tasks.kasad.com
     VIKUNJA_MAILER_ENABLED: "true"
     VIKUNJA_MAILER_HOST: mail.kasad.com
     VIKUNJA_MAILER_PORT: 587
     VIKUNJA_MAILER_FORCESSL: "false"
     VIKUNJA_MAILER_AUTHTYPE: plain
     VIKUNJA_MAILER_USERNAME: vikunja
     VIKUNJA_MAILER_PASSWORD: [redacted]
     VIKUNJA_MAILER_FROMEMAIL: noreply@tasks.kasad.com
   volumes:
     - /srv/vikunja/config.yml:/app/vikunja/config.yml:ro
     - /srv/vikunja/data:/app/vikunja/data
   networks:
     - default
     - swag
   restart: unless-stopped
   
 frontend:
   image: vikunja/frontend
   container_name: vikunja-frontend
   environment:
     VIKUNJA_API_URL: https://tasks.kasad.com/api/v1/
   networks:
     - default
     - swag
   restart: unless-stopped
   
networks:
 swag:
   name: swag_default
   external: true

Configuration

The Vikunja backend is typically configured using a YAML configuration file. However, with the Docker container, it's easier to use environment variables. The environment variables' names are just the flattened representation of the configuration file's keys, prefixed with VIKUNJA_.

For example, the following environment file and YAML file represent the same configuration:

VIKUNJA_MAILER_ENABLED=true
VIKUNJA_MAILER_HOST=mail.kasad.com
VIKUNJA_MAILER_PORT=587
VIKUNJA_MAILER_FORCESSL=false

mailer:
  enabled: true
  host: mail.kasad.com
  port: 587
  forcessl: false
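The flattening rule can be sketched in Python (a rough model of the naming convention, not Vikunja's actual code):

```python
def to_env(config: dict, prefix: str = "VIKUNJA") -> dict:
    """Flatten a nested config mapping into VIKUNJA_* environment
    variable names by joining the key path with underscores."""
    out = {}
    for key, value in config.items():
        name = f"{prefix}_{key.upper()}"
        if isinstance(value, dict):
            out.update(to_env(value, name))  # recurse into nested sections
        else:
            out[name] = value
    return out

config = {"mailer": {"enabled": True, "host": "mail.kasad.com", "port": 587}}
env = to_env(config)
assert env["VIKUNJA_MAILER_HOST"] == "mail.kasad.com"
```
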

Database

Vikunja supports multiple database backends. As this is a relatively small instance, a SQLite3 database is used (VIKUNJA_DATABASE_TYPE: sqlite). The database path is set to ./data/vikunja.db so that the database will be located inside the data volume. (This is the VIKUNJA_DATABASE_PATH option).

File storage location

Vikunja supports file attachments. The location where uploaded files are stored is set to ./data/files so it is inside the data volume and doesn't require a separate persistent volume. (This is the VIKUNJA_FILES_BASEPATH option).

OpenID Connect parameters

All configuration can be done via environment variables except for OIDC configuration. Instead, that must be done within a YAML configuration file. In the Docker container, this file is located at /app/vikunja/config.yml.

Here is the configuration for Vikunja to use Authentik as the auth provider:

auth:
  local:
    enabled: false

  openid:
    enabled: true
    redirecturl: https://tasks.kasad.com/auth/openid/
    providers:
      - name: Kasad Auth
        authurl: https://auth2.kasad.com/application/o/vikunja/
        clientid: [redacted]
        clientsecret: [redacted]

Authentik provider configuration

In Authentik, create a new OpenID Connect provider. Add the two following redirect URLs:

https://tasks.kasad.com/auth/openid/
https://tasks.kasad.com/auth/openid/kasadauth

Under Advanced protocol settings, ensure Issuer mode is set to "Each provider has a different issuer, based on the application slug". Vikunja expects the issuer to be the same as the authurl parameter in the configuration file, which is what this setting enables.
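To double-check the issuer, you can query Authentik's OIDC discovery endpoint directly (the URL below assumes the application slug is vikunja, matching the provider configuration above):

```shell
# Print the issuer Authentik advertises for the vikunja application
curl -s https://auth2.kasad.com/application/o/vikunja/.well-known/openid-configuration \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["issuer"])'
```

With the issuer mode set as described, this should print the same URL as the authurl parameter in config.yml.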

Sending Email

Vikunja can send email reminders. In order to do this, it needs to connect to an SMTP server. A mail-enabled account has been created for Vikunja at mail.kasad.com with the username vikunja. It is authorized to send emails from any user at the tasks.kasad.com domain. See the Sending Emails from Web Apps page for details.

Paperless - Document Management

Description

Paperless-NGX is a document management system that transforms your physical documents into a searchable online archive so you can keep, well, less paper.

Deployment details

Access

Paperless-NGX is published by the Secure Web Application Gateway at swag.kasad.com/paperless-ngx. It is protected by Cloudflare Zero Trust and requires authentication/authorization to access.

Docker Compose stack

Paperless-NGX only requires one Docker container, lscr.io/linuxserver/paperless-ngx:latest (one of the LinuxServer.io images).

Docker Compose file:

version: "2.1"

services:
  paperless-ngx:
    image: lscr.io/linuxserver/paperless-ngx:latest
    container_name: paperless-ngx
    environment:
      - PUID=942 # paperless
      - PGID=941 # servlets
      - TZ=America/Los_Angeles
      - UMASK=002
      - PAPERLESS_FORCE_SCRIPT_NAME=/paperless-ngx
      - PAPERLESS_STATIC_URL=/paperless-ngx/static/
      - PAPERLESS_CORS_ALLOWED_HOSTS=https://swag.kasad.com
      - PAPERLESS_ALLOWED_HOSTS=localhost,swag.kasad.com
      - PAPERLESS_CSRF_TRUSTED_ORIGINS=https://swag.kasad.com
    volumes:
      - ./config:/config
      - ./data:/data
    restart: unless-stopped
    networks:
      - default
      - swag

networks:
  swag:
    name: swag_default
    external: true

Configuration

Paperless-NGX does not require much configuration. The only change I had to make was forcing it to be hosted on a non-root path. This was done by setting the PAPERLESS_FORCE_SCRIPT_NAME and PAPERLESS_STATIC_URL environment variables for the container.

This change is required because the Paperless-NGX instance is published at the path /paperless-ngx/ through the SWAG.
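For reference, the SWAG proxy configuration for this path looks roughly like the following. This is a sketch based on SWAG's standard subfolder samples; the file name and the container's internal port 8000 are assumptions.

```nginx
# config/nginx/proxy-confs/paperless-ngx.subfolder.conf (assumed path)
location /paperless-ngx {
    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;  # Docker's embedded DNS server

    set $upstream_app paperless-ngx;
    set $upstream_port 8000;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
}
```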

Problems

User management

Unlike every other web app running on kasad.com, Paperless-NGX does not support separate users. It does require a username and password to log in, but all "users" have access to all documents. This means only one person can use the Paperless instance (or multiple people could, but not for sensitive/private documents).

No SSO support

Because it doesn't support user management, Paperless-NGX also doesn't support SSO implementations like OpenID Connect, meaning it is not possible to sign in using Authentik.

Alternatives

Because of the user management problem, I will switch to Mayan EDMS in the future. However, Mayan EDMS requires a more complicated software stack, so it might be difficult to set up.

Portainer - Container Manager

Portainer is a web interface for managing Docker (and Kubernetes) containers.

Access

Portainer is reverse-proxied by the Secure Web Application Gateway. It is published on swag.kasad.com/portainer.

The Portainer instance is protected by Cloudflare Access and requires two-factor authentication to access.

Deployment

Portainer runs as a single Docker container using the portainer/portainer-ce:latest image. Here is the Docker Compose service entry for it:

services:
  portainer:
    container_name: portainer
    image: portainer/portainer-ce:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /srv/portainer/data:/data
    networks:
      - default
      - swag

networks:
  swag:
    name: swag_default
    external: true

Docker socket volume

Since Portainer needs to be able to read and modify the state of the Docker engine, it requires access to the Docker socket file. This is mounted just like any other volume:

    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

Persistent data

Portainer stores some persistent data. In the Docker container, this is located at /data, so we mount a Docker volume there:

    volumes:
      - /srv/portainer/data:/data

Configuration

All configuration is done within Portainer itself. It has a complete settings interface.

Single Sign On with Authentik

Portainer has the ability to interface with an OpenID Connect provider like Authentik.

Configuring SSO in Portainer

The relevant settings are configured in the Authentication section of the Settings interface. Select the OAuth card for the Authentication method option. See the following table for the rest of the settings:

Option Value Description
Use SSO Enabled Enable single sign-on
Hide internal authentication prompt Enabled Only use SSO for logins, not Portainer's built-in system
Automatic user provisioning Enabled Automatically create new users when they first log in. This is safe if (1) you trust all users who can log in to Portainer using SSO, or (2) you don't give new users write access to resources in Portainer.
Automatic team membership Enabled Assign users to Teams based on the groups they belong to within Authentik
Claim name groups The claim in which Authentik places the list of groups
Statically assigned teams Whatever you want Use this field to map from Authentik groups to Portainer teams. You must create the team in Portainer before you can select it here.
Default team No team (or a read-only team) Don't assign normal users who aren't part of a privileged group to a team
Assign admin rights to groups Enabled Automatically assign administrator permissions to certain users
Admin mapping claim value regex Administrators Give members of the Administrators group admin permissions in Portainer

In the Provider section, select the Custom card. See the following table for the settings' values:

Option Value Description
Client ID Client ID created by Authentik Sets the OpenID client ID
Client Secret Client Secret created by Authentik Sets the OpenID client secret for authenticating with Authentik
Authorization URL https://auth2.kasad.com/application/o/authorize/ OpenID authorization endpoint
Access token URL https://auth2.kasad.com/application/o/token/ OpenID token endpoint
Resource URL https://auth2.kasad.com/application/o/userinfo/ OpenID user information resource endpoint
Redirect URL https://swag.kasad.com/portainer/ The URL Authentik will redirect to after successful authentication
Logout URL https://auth2.kasad.com/if/session-end/portainer/ The URL Portainer will redirect to when logging out
User identifier sub or preferred_username Defines what field Portainer will use to map Portainer users to Authentik users. If usernames are controlled and static, it is safe to use preferred_username. Otherwise sub should be used, which is essentially a per-user UUID.
Scopes openid profile email The OpenID user info scopes that Portainer will request from Authentik

Configuring SSO in Authentik

On the Authentik side, an Application and an OpenID Provider must be created. For the Redirect URI option, use the base URL of the Portainer instance, in our case https://swag.kasad.com/portainer/. All other options should be fine when left with their default values.

Using Portainer

Stack environment variables

When defining environment variables for a Docker Compose stack in Portainer, the variables will be accessible within the Docker Compose file as if they were defined in a .env file when using the docker compose command.

The variables will also be written to a stack.env file, so they can be loaded into a container's environment by adding the following line to the service entry for that container:

    env_file: stack.env

Docker Compose build contexts

When building a container in a Docker Compose stack, it is not possible to use a path on the host as the build context (i.e. where the Dockerfile is located). This is different from defining volumes, which use paths on the host, not inside the Portainer container.

To get around this, either mount the build context path in the Portainer container, or use a non-file context like a Git repository.
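For example, pointing the build at a Git repository sidesteps the host-path limitation entirely (the repository URL and service name below are hypothetical):

```yaml
services:
  myapp:
    # Compose clones the repository and builds from the main branch
    build: https://github.com/example/myapp.git#main
```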

Send - Temporary File Sharing

Send is a simple, private file sharing web service. It provides an interface for users to upload files temporarily for other users to download.

Access

Send is reverse-proxied by the Secure Web Application Gateway. It is published on send.kasad.com. The Send endpoint is not protected by Cloudflare Access policies in order to make it easier to share files to non-authenticated users.

Deployment

Send runs as a single Docker container using the registry.gitlab.com/timvisee/send:latest image. We deploy it as a Docker Compose stack using Portainer for easy configuration:

version: '3'

services:
  send:
    image: registry.gitlab.com/timvisee/send:latest
    container_name: send
    restart: unless-stopped
    environment:
      BASE_URL: https://send.kasad.com
      DETECT_BASE_URL: "true"
      MAX_FILE_SIZE: 1073741824 # 1 GB
      MAX_EXPIRE_SECONDS: 86400 # 24 hrs
      EXPIRE_TIMES_SECONDS: 120,600,1800,3600,21600,86400 # 2 min, 10 min, 30 min, 1 hr, 6 hrs, 24 hrs
    volumes:
      - /srv/send/uploads:/uploads
    networks:
      - default
      - swag
      
networks:
  swag:
    external: true
    name: swag_default

SWAG network

The Send container is reverse-proxied behind the Secure Web Application Gateway, so the SWAG container needs network access to the Send container. This has been done in the Compose stack above. See this explanation for details.

Persistent uploads storage

While it is not required, we want to ensure that if the Send container crashes or is restarted, the uploaded files will remain intact. To accomplish this, we mount a persistent storage volume on the /uploads path inside the container:

    volumes:
      - /srv/send/uploads:/uploads

Configuration

Configuration for the Send container is done using environment variables. For ease of configuration, we define these in the container's environment section within the Docker Compose file.

Base URL

We configure the base URL which is used to generate sharing links. We also enable auto-detection, but this only takes effect when the base URL is not defined.

BASE_URL: https://send.kasad.com
DETECT_BASE_URL: true

File upload limits

We don't want users abusing the service, so we impose a maximum file size of 1 GB:

MAX_FILE_SIZE: 1073741824 # 1 GB
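As a quick sanity check, the byte and second values used for these limits can be verified with shell arithmetic (assuming 1 GB here means 1 GiB):

```shell
echo $((1024 * 1024 * 1024))  # bytes in 1 GiB -> 1073741824
echo $((24 * 60 * 60))        # seconds in 24 hours -> 86400
```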

The maximum HTTP request size is also set to 1 GB in the NGINX config for our site within the SWAG container:

# config/nginx/proxy-confs/send.subdomain.conf
server {
  # ...
  
  client_max_body_size 1G;
  
  # ...
}

File retention period

We want to allow the user to specify a file retention time. However, we don't want to allow retention past 24 hours in order to keep disk usage low.

MAX_EXPIRE_SECONDS: 86400 # 24 hrs
EXPIRE_TIMES_SECONDS: 120,600,1800,3600,21600,86400 # 2 min, 10 min, 30 min, 1 hr, 6 hrs, 24 hrs

Web Apps

Bitwarden - Password Manager

Bitwarden is a password manager application. It has a public instance that can be used for free with limited features or with all features for a fee. I choose to self-host an instance with all the features for free.

Access

The Bitwarden instance is reverse-proxied by the Secure Web Application Gateway. It is published on bw.kasad.com.

Since Bitwarden provides its own secure login and two-factor authentication, it is not protected behind Cloudflare Access policies. However, the admin dashboard endpoint (/admin) is protected by an Access policy which is restricted to the Administrator user group.

Deployment

We are actually not running the official Bitwarden server. Instead, we run a fork called Vaultwarden because it is much lighter.

Vaultwarden runs as a single Docker container using the vaultwarden/server:latest image. We deploy it in a Docker Compose stack for ease of configuration:

version: '3'

services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: bitwarden
    restart: always
    environment:
      WEBSOCKET_ENABLED: "true"  # Enable WebSocket notifications
      TZ: America/Los_Angeles
      DOMAIN: https://bw.kasad.com
      ADMIN_TOKEN: [redacted]
    volumes:
      - /srv/bitwarden/data:/data
    networks:
      - default
      - swag

networks:
  swag:
    external: true
    name: swag_default

SWAG network

Since our Bitwarden instance is reverse-proxied behind the Secure Web Application Gateway, the SWAG container needs network access to the Bitwarden container. This has been done in the Compose stack above. See this explanation for details.

Persistent data storage

Bitwarden needs to store data, as that's the entire purpose of the application. To ensure that all data persists between service restarts, we add a storage volume to the container mounted at /data inside the container:

    volumes:
      - /srv/bitwarden/data:/data

This is actually not necessary, as the vaultwarden/server image will mount a volume on /data automatically. We specify it explicitly, though, for transparency and to keep our data in the /srv directory on the host.

Configuration

Most of Bitwarden's configuration is done using its built-in admin dashboard. This is published on /admin. There are still a few settings that must be configured for the container before the initial startup.

Environment variable settings

The three settings that need to be configured using environment variables are (1) enabling WebSockets, (2) setting the base domain, and (3) setting the initial admin dashboard password.

Enabling WebSockets

To provide notifications to users, Bitwarden requires WebSockets. Simply set the relevant environment variable in the Compose file:

      WEBSOCKET_ENABLED: "true"

Base domain

We must set the base domain in order for Bitwarden to properly generate URLs:

      DOMAIN: https://bw.kasad.com

Initial admin password

The admin dashboard on /admin requires a password to access. To set the initial password, specify it in an environment variable:

      ADMIN_TOKEN: [redacted]

Once logged in to the admin dashboard, the password can be changed. This only sets the initial password.
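One way to generate a strong initial token is with openssl(1). The 48-byte length here is an arbitrary choice, not a Vaultwarden requirement:

```shell
# Print 48 random bytes, base64-encoded (64 characters)
openssl rand -base64 48
```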

Dashboard settings

Many of the settings within the admin dashboard need to be configured. Significant settings for each section in the dashboard are listed below.

Hover over the name of a property in the admin dashboard to see a more detailed description.

General settings

Allow new signups: false
We don't want to allow new users to sign up, since our Bitwarden instance is publicly accessible.

Require email verification on signups: true
We want to ensure that all users set a valid email address which they have access to.

Allow invitations: true
This will allow administrators to create new users in the Bitwarden instance. Since self-registration is disabled, this is the only way to add new users without manually editing the database.

Invitation organization name: Kasad Family Bitwarden
Sets the name of the Bitwarden instance in invitations.

Advanced settings

Client IP header: X-Real-IP
This tells Bitwarden which HTTP header contains the client's IP address. Since we have the SWAG reverse proxy in front of Bitwarden, this will be the X-Real-IP header.

Icon blacklist non-global IPs: true
Disables fetching icons from internal/private IP addresses. This prevents malicious users from sending requests to internal IPs.

Bypass admin page security: false
I have this set to false just in case, but as long as the admin dashboard is protected by proper Cloudflare Access policies, it should be safe to enable this.

Yubikey settings

Enabled: true
Enable support for two-factor authentication using Yubikeys.

Note: you can still use Yubikeys for 2FA if this is disabled, but you must use it as a WebAuthn device in that case. This option simply provides support for the Yubikey verification API.

Client ID and Secret Key
These two properties deal with your API key for the Yubikey API. For the default verification service run by Yubico, go to upgrade.yubico.com/getapikey to get an API key. If using an internal verification server, use the proper API key for it.

Server: https://api.yubico.com/wsapi/2.0/verify
The API endpoint for the verification server. The value provided here is for the default verification service run by Yubico.

Global Duo settings

Enabled: false
I do not use Duo, so I have no use for this to be enabled.

SMTP email settings

In order for Bitwarden to send email invitations, verification emails, password reset emails, and 2FA emails, a valid SMTP configuration is required. We use the kasad.com email server for sending mail. See Sending Emails from Web Apps for a detailed explanation.

Host: mail.kasad.com
Specify the SMTP server to use.

Port: 465
Secure SMTP: force_tls
Use SMTP with implicit TLS on port 465. An alternative is using SMTP with STARTTLS on by setting the Secure SMTP setting to starttls and the Port to 587. Implicit TLS is better though, so we use that.

From Address: no-reply@bw.kasad.com
From Name: Bitwarden
Set the From address that Bitwarden will use when sending emails. See Sending Emails from Web Apps for details on configuring this.

Username: vaultwarden
Password: [redacted]
Specify the username and password to use to log in to the SMTP server. This user must have a mail-enabled account on the kasad.com mail server.

Accept Invalid Certs: false
Accept Invalid Hostnames: false
When both of these are false, Bitwarden will verify the validity of the mail server's TLS certificates.

Email 2FA settings

Enabled: true
I sometimes need two-factor authentication via email, so I enable this option.

Adding users

To add a new user to the Bitwarden instance, go to the Users tab and use the form at the bottom of the page to invite them by email. The address you enter will receive an email with a link to the Bitwarden instance where they can finish setting up their account.

Backing up

Bitwarden provides an easy way to back up its critical data. Just go to the admin dashboard and use the Backup Database option at the bottom of the page to export the SQLite3 database containing Bitwarden's data.
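A scripted alternative to the dashboard button is sqlite3's online ".backup" command, which produces a consistent copy even while Vaultwarden is running. This is a sketch: the db.sqlite3 filename follows from the /srv/bitwarden/data volume mount above, and the destination path is an assumption:

```shell
# Copy the live database to a dated backup file using SQLite's .backup command
DB=/srv/bitwarden/data/db.sqlite3
DEST=/srv/backups/bitwarden-$(date +%F).sqlite3
sqlite3 "$DB" ".backup '$DEST'"
```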

Web Apps

Jellyfin - Media Streaming

Jellyfin is a media streaming hub. It allows you to easily stream audio and video files to a web browser.

Access

Our Jellyfin instance is reverse-proxied by the Secure Web Application Gateway. It is published on swag.kasad.com/jellyfin. The endpoint is protected by Cloudflare Access policies requiring authentication/authorization.

Deployment

Jellyfin runs as a single Docker container using the lscr.io/linuxserver/jellyfin:latest image. It is one of the LinuxServer.io images with available mods.

We run Jellyfin using a Docker Compose file for easy configuration:

version: "3"

services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    container_name: jellyfin
    ports:
      - 7359:7359/udp
      - 1900:1900/udp
    volumes:
      - /media:/media
      - jellyfin_config:/config
    tmpfs:
      - /config/transcodes
      - /config/cache
      - /config/data/transcodes
    devices:
      # VA-API devices
      - /dev/dri/card0:/dev/dri/card0
      - /dev/dri/renderD128:/dev/dri/renderD128
    restart: unless-stopped
    environment:
      - PUID=938 # swag
      - PGID=941 # servlets
      - UMASK=002
      - TZ=America/Los_Angeles
      - JELLYFIN_PublishedServerUrl=https://swag.kasad.com/jellyfin
      - DOCKER_MODS=ghcr.io/kdkasad/docker-mods:jellyfin-jellyscrub
    networks:
      - default
      - swag
      
volumes:
  jellyfin_config: {}
  
networks:
  swag:
    name: swag_default
    external: true

SWAG network

The Jellyfin container is reverse-proxied behind the Secure Web Application Gateway, so the SWAG container needs network access to the Jellyfin container. This has been done in the Compose stack above. See this explanation for details.

Persistent data storage

The lscr.io/linuxserver/jellyfin image stores all of its data in /config by default. To make this persistent, we mount a volume on /config:

    volumes:
      - jellyfin_config:/config

Since this is a named volume, we also need to declare it at the end of the Compose file:

volumes:
  jellyfin_config: {}

Media volume

Jellyfin needs to be able to access the media it is going to stream. The media volume can be mounted anywhere within Jellyfin as long as it is configured accordingly in the UI. We mount the media volume on /media to keep things simple.

Jellyfin needs at least read access to all media. If metadata is being stored alongside media, Jellyfin needs write access as well.

We run the Jellyfin container with a umask of 002. This means newly created files will have user/group read-write permissions and global read permissions. We also set the group ownership of /media to the servlets group. We set the SetGID flag to ensure new files/directories inherit the parent's group ownership.
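The permission arithmetic can be checked directly; the one-time host setup commands are included as comments (they assume root and the servlets group described above):

```shell
# umask 002: files are created as 666 & ~002, directories as 777 & ~002
printf 'file mode: %o\n' $((0666 & ~0002))  # -> 664 (rw-rw-r--)
printf 'dir mode: %o\n' $((0777 & ~0002))   # -> 775 (rwxrwxr-x)

# One-time setup on the host:
#   chgrp -R servlets /media   # group ownership for the media tree
#   chmod g+s /media           # SetGID: new entries inherit the group
```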

In-memory caches

Some of Jellyfin's data does not need to be persistent. In order to improve performance and reduce unnecessary writes to disk, we mount temporary (in-memory) filesystems on the paths listed in the tmpfs section of the Compose file:

/config/transcodes
/config/cache
/config/data/transcodes

If the system running the Jellyfin container does not have sufficient RAM, this will likely cause Jellyfin to fail (or at least cause heavy swapping) while transcoding large videos. The transcode directory should be able to fit the media being streamed, so it should be at least a few gigabytes.

VA-API devices

The host system (my laptop) that runs the Jellyfin container has an Intel iGPU which supports VA-API for hardware video decoding. To utilize this in Jellyfin, the VA-API device nodes must be accessible inside the container. This is done in the devices section of the Compose service:

    devices:
      - /dev/dri/card0:/dev/dri/card0
      - /dev/dri/renderD128:/dev/dri/renderD128

It should also be possible to just mount the /dev/dri directory as a volume, but I have not tried that.

Once the devices are mounted in the container, follow the Hardware-accelerated video decoding configuration instructions.

DLNA/UPnP

Jellyfin has built-in DLNA and UPnP support for streaming to wireless displays like TVs. To enable this, we forward the following ports for the container.

    ports:
      - 7359:7359/udp
      - 1900:1900/udp

Then select Enable 'Play To' DLNA feature in the DLNA settings tab.

Despite the port forwards, it does not seem to work as well as when the Jellyfin container is run in host network mode.

Configuration

Most of Jellyfin's configuration is done from its built-in settings menu. The only options that are not configured from there are the PUID, PGID, umask, timezone, and the published server URL. These are all defined in the environment section of the jellyfin service in the Compose file.

File paths

Our Jellyfin instance is configured to use the following file paths. Some of these paths are default and some are manually configured in the Settings UI. However all of them are important as they have a specific filesystem mounted on them.

Path Filesystem Type Purpose Where to configure
/config Named volume Main data directory just don't change it
/config/data/transcodes tmpfs(5) Contains transcoded videos while they're being streamed Playback tab
/config/cache tmpfs(5) Temporary cached data General tab
/media Host volume (/media) Media library Each library in the Libraries tab

All directories with volume mounts are explained in the Deployment section. This table is just meant to list them all in one place.

Storing metadata alongside media

I prefer having metadata on the same volume as the media, as most of the data is related. To accomplish this, set General > Metadata Path to /media/metadata. Make sure the directory you choose exists.

Better theme

Jellyfin supports adding custom CSS to style the web interface. We load the Ultrachromic theme, which is an overhaul of the default Jellyfin UI.

General > Custom CSS Code:

@import url('https://cdn.jsdelivr.net/gh/CTalvio/Ultrachromic/presets/monochromic_preset.css');

Hardware-accelerated video decoding

Jellyfin has many settings for hardware-accelerated video decoding (a.k.a. HWDec) in the Playback tab of the Settings UI. Many APIs for this are supported, but I've only used VA-API, so that's the only one documented here.

VA-API (Intel GPUs)

If VA-API devices have been mounted, set the following options:

Option Value Description
Hardware acceleration Video Acceleration API (VAAPI) Enable use of VA-API for HWDec
Enable hardware decoding for all codecs selected Enable hardware decoding for all codecs*
Enable hardware encoding enabled Enable hardware-accelerated video encoding*
Enable Intel Low-Power H.264/HEVC hardware encoder enabled Enable low-power encoders*
Enable encoding in HEVC format enabled Enable HEVC (H.265) encoder*

*Make sure your GPU supports the codecs you've enabled. Use the following command to check which codecs your GPU supports:

$ vainfo | sed 's/VAProfile//; s/VAEntrypointVLD/decode/; s/VAEntrypointEncSlice\(LP\)\?/encode \1/'

encode LP means low-power encoding is supported.

Single sign-on

We want to allow users to log in through the Authentik SSO portal. To do this, we must configure Jellyfin and Authentik properly.

Configuring SSO in Jellyfin

To enable SSO in Jellyfin, you must install the jellyfin-sso plugin. To do this,

  1. Navigate to the Repositories page within the Plugins tab in Jellyfin.
  2. Add a new repository with the URL https://raw.githubusercontent.com/9p4/jellyfin-plugin-sso/manifest-release/manifest.json. Name it whatever you want.
  3. Go to the Catalog page and install the SSO Authentication plugin.
  4. Once it's installed, go to the My Plugins page and click the SSO-Auth plugin. Then enter the settings from the table below.
Setting Value Description
Name of OID Provider Authentik Name of the SSO provider (arbitrary)
OID Endpoint https://auth2.kasad.com URL of the OpenID Connect provider
OpenID Client ID (redacted) OIDC client ID for Jellyfin
OID Secret (redacted) OIDC client secret for Jellyfin
Enabled enabled Whether this SSO provider is enabled
Enable Authorization by Plugin enabled Let the SSO plugin assign permissions to new users
Enable all folders disabled Don't allow all users to access all libraries
Enabled folders Music and YouTube videos Allow all users to access these libraries
Roles empty Don't check for a specific group to authenticate with Jellyfin
Admin Roles Administrators Users in the Administrators group are administrators
Enable role-based folder access enabled Allow access to libraries based on the user's groups
Folder Role Mapping Role: Pirates; all libraries selected Allow users in the Pirates group to access all media libraries
Role Claim groups The OpenID claim to get the user's roles from
Request additional scopes empty Authentik exports all required information in the default scopes
Set default provider Jellyfin.Server.Implementations.Users.DefaultAuthenticationProvider Setting to this value prevents users from having to authenticate twice

Configuring SSO in Authentik

We also need to configure Authentik to allow Jellyfin to authenticate against it. To do this, first create a new OpenID Connect provider in Authentik's admin interface.

Set the following settings (note that some may be under the Advanced protocol settings dropdown):

Parameter Value Description
Redirect URI https://swag.kasad.com/jellyfin/sso/OID/r/Authentik Sets the URL that Authentik will redirect to after successful authentication.
Signing Key authentik Self-signed Certificate (RSA) Use the default Authentik key for signing tokens
Issuer mode Each provider has a different issuer, based on the application slug Use the same issuer URL as the OID configuration endpoint

The last path component of the redirect URL must match the Name of OID Provider option configured in Jellyfin.

Jellyscrub (smooth video scrubbing previews)

By default, Jellyfin can only show the chapter thumbnail when hovering over the video scrub bar. The Jellyscrub plugin provides smooth scrub previews.

To install it,

  1. Navigate to the Repositories page within the Plugins tab in Jellyfin.
  2. Add a new repository with the URL https://raw.githubusercontent.com/nicknsy/jellyscrub/main/manifest.json. Name it whatever you want.
  3. Go to the Catalog page and install the Jellyscrub plugin.

The plugin also requires a modification to Jellyfin's default index.html file. In order for this modification to be made automatically (and persist across container restarts), I've created a Docker mod to handle this. To enable it, add ghcr.io/kdkasad/docker-mods:jellyfin-jellyscrub to the DOCKER_MODS environment variable for the jellyfin container.

Using Jellyfin

The following sections contain information pertaining to using Jellyfin once it's set up.

Logging in with SSO

There's one problem with SSO support in Jellyfin. Since it's provided via a plugin and not by Jellyfin itself, SSO isn't enabled on the login page. To log in with SSO, a user must visit this URL: https://swag.kasad.com/jellyfin/sso/OID/p/Authentik.

Again, the last path component of the redirect URL must match the Name of OID Provider option configured in Jellyfin.

Services

Services running on kasad.com that are not web apps. Some other services exist that haven't been documented here yet.

Services

VPN

Overview

kasad.com features a secure, private WireGuard tunnel. The kasad.com server will route requests on the tunnel out to the internet, making the tunnel usable as a VPN.

Structure

Tunnel Service

The WireGuard tunnel interface is wgvpn0. wg-quick(8) is used to create the WireGuard tunnel and load its configuration. It is automatically activated by the wg-quick@wgvpn0.service systemd service.

WireGuard interface

The WireGuard interface is named wgvpn0, although the interface name doesn't matter as long as it's configured properly on the server. The interface has the following configuration:

[Interface]
Address = 10.5.19.1/24
SaveConfig = false
ListenPort = 51194
PrivateKey = #############################################

## PEER LIST ##
# ...

Note: the keys in the snippet above have been redacted.

IP Addressing

As can be seen in the configuration snippet, the WireGuard tunnel uses the IPv4 address subnet 10.5.19.0/24. The tunnel does not use IPv6. In theory it could, but I have not bothered to set that up.

WireGuard listener

WireGuard listens on UDP port 51194. Outgoing UDP traffic is blocked by some network firewalls (notably my school's), so the kasad.com server has a firewall rule in place to redirect incoming traffic on UDP port 123 to 51194. Since UDP port 123 is used for NTP, it usually is not blocked for outgoing traffic.

nft(8) rule to implement this:

table inet nat {
	chain prerouting {
        udp dport 123 redirect to :51194
    }
}

Peers

Each device that connects to the interface has its own keypair and IP address(es).

IP Address quirk

The IP address for each peer can be specified in CIDR notation, but the range for each peer must be unique. This means you cannot assign the same subnet for all peers and allow them to choose their own addresses. You must give a single unique address to each peer or give each one a range of IP addresses that does not overlap with any other peer's range.
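A sketch of what non-overlapping assignments look like in the server configuration (keys and the second peer's range are hypothetical):

```text
[Peer]
# Peer A: a single address
PublicKey = <peer A public key>
AllowedIPs = 10.5.19.2/32

[Peer]
# Peer B: a 16-address block that does not overlap peer A's address
PublicKey = <peer B public key>
AllowedIPs = 10.5.19.16/28
```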

Peer Configuration

Each peer must be specified in the WireGuard interface's configuration. Unknown peers will not be allowed to connect.

[Peer]
PublicKey = #############################################
AllowedIPs = 10.5.19.2/32

Peer List

Device IP Address(es) Public key
Kian's laptop 10.5.19.2 laT7XasKzIdTC5gy9jSS0PaKvdjEHEU3pQ/j2BYAujs=
Kian's old phone 10.5.19.3 tjNWCci8SQwxBcHPxIXjZkOX+K214d4WNsFV6MVuA1M=
Kian's (current) phone 10.5.19.4 Lvfk37+yv13mLGETIrBnbQD2Qw474Bpfr2KxZUMpn1Q=
Home office desktop 10.5.19.6 ALN7gfvfeswE2hwSfbRyWqsmZVPvsAa8TUaurHC9eG0=

Usage as VPN

While a VPN doesn't technically have to route traffic out to the internet, that's what it's usually used for. The kasad.com WireGuard interface is configured to forward traffic to the outside world. To utilize this, simply set 10.5.19.1 as the gateway address in your network configuration when connected to the WireGuard tunnel.
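A minimal client-side configuration for using the tunnel as a full VPN might look like this (the keys are placeholders, and the DNS line is an assumption based on the server's DNS interception setup):

```text
[Interface]
PrivateKey = <client private key>
Address = 10.5.19.2/24
DNS = 10.5.19.1            # assumption: use the server for DNS

[Peer]
PublicKey = <server public key>
Endpoint = kasad.com:123   # or kasad.com:51194 where outgoing UDP isn't filtered
AllowedIPs = 0.0.0.0/0     # route all IPv4 traffic through the tunnel
```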

VPN routing configuration on server

Two things must be enabled for VPN routing to work:

  1. IP forwarding must be enabled in the Linux kernel.
  2. Firewall rules must be in place to (a) allow forwarding and (b) perform SNAT/masquerading.

Kernel options

The file /etc/sysctl.d/20-ip-forward.conf enables IP forwarding in the kernel:

# Allow IPv4 forwarding
net.ipv4.ip_forward=1

# Allow "martian" packets on 'wgvpn0' interface.
# These are packets coming from an external interface but heading to a
# localhost destination. This is required for intercepting VPN DNS requests.
net.ipv4.conf.wgvpn0.route_localnet=1

Firewall rules

The portions of the nft(8) rule file that apply to the VPN routing are displayed below. Omitted lines are marked with an ellipsis comment (# ...).

table inet filter {
    chain input {
        # ...
        iifname "wgvpn*" ip saddr 10.5.19.0/24 goto vpninput
    }
    chain vpninput {
        accept
    }
    chain forward {
        # ...
        iifname wgvpn0 oifname ens3 accept
        oifname wgvpn0 ct state established,related accept
    }
}

table inet nat {
    chain postrouting {
        # ...
        ip saddr 10.5.19.0/24 oifname ens3 masquerade
    }
}

Services

Syncthing

The kasad.com server runs a Syncthing relay server and discovery server. These services enable completely private file syncing by removing reliance on Syncthing's public servers for discovery and relaying.

Purpose

Each server provides a method of communication for devices running Syncthing on separate networks.

Relay server

When two Syncthing instances (i.e. two separate devices) cannot establish a direct connection, they must use relaying. Both clients will connect to a publicly-accessible relay server and use it to transfer data between the two clients.

Discovery server

When Syncthing instances are on the same LAN, they can discover each other using broadcast or multicast network traffic. When they are on separate networks, they require a discovery server. Each client connects to the discovery server and informs it of its IP address. The discovery server can then inform the clients of each other's location.

Running the services

Syncthing's documentation includes tutorials for running your own relay server or discovery server. Regardless, I'll explain the process here.

Install the daemons

Both servers are distributed as Debian packages, which is what I'll use since the kasad.com server runs Debian.

Just install the syncthing-discosrv and syncthing-relaysrv packages:

# nala install syncthing-{relay,disco}srv

Configuring the discovery server

There is no configuration file for syncthing-discosrv. It only accepts command-line arguments. We can provide those using the STDISCOSRV_OPTS environment variable in the /etc/default/syncthing-discosrv file:

STDISCOSRV_OPTS="-http"

We'll only enable the -http option, as we'll use our NGINX server as a reverse proxy for HTTPS.

Setting up the reverse proxy

Since we only want to allow secure connections, we need an HTTPS reverse proxy in front of the discovery server. We can easily do this with NGINX using the following server configuration:

server {
	listen 80;
	listen [::]:80;
	server_name stdisco.kasad.com;

	return 301 https://$host$request_uri;
	
	include /etc/nginx/conf.d/common;
	include /etc/nginx/conf.d/common-kasad.com;
}

server {
	listen 443 ssl http2;
	listen [::]:443 ssl http2;
	server_name stdisco.kasad.com;

	try_files $uri =503;

	ssl_verify_client optional_no_ca;

	location / {
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Client-Port $remote_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header X-SSL-Cert $ssl_client_cert;
		proxy_pass http://localhost:8443/;
	}

	include /etc/nginx/conf.d/common;
	include /etc/nginx/conf.d/common-kasad.com;
	include /etc/nginx/conf.d/common-kasad.com-ssl;
}

This file gets placed in /etc/nginx/sites-enabled. This will make the discovery server available at stdisco.kasad.com.

A better way to handle NGINX site configurations is to place all files in /etc/nginx/sites-available, then place a symbolic link in /etc/nginx/sites-enabled that points to ../sites-available/<site>.
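The sites-available/sites-enabled layout can be sketched as follows (demonstrated in a scratch directory rather than /etc/nginx); the relative link target keeps the symlink valid regardless of where the tree is mounted:

```shell
# Demonstrate the layout in a scratch directory instead of /etc/nginx
mkdir -p /tmp/nginx-demo/sites-available /tmp/nginx-demo/sites-enabled
touch /tmp/nginx-demo/sites-available/stdisco.kasad.com
ln -sf ../sites-available/stdisco.kasad.com /tmp/nginx-demo/sites-enabled/stdisco.kasad.com
readlink /tmp/nginx-demo/sites-enabled/stdisco.kasad.com
```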

Configuring the relay server

The relay server uses its own application protocol, so it's a little easier to configure. This time, we'll define two variables in /etc/default/syncthing-relaysrv:

NAT=false
STRELAYSRV_OPTS=-pools="" -provided-by="Kian Kasad - https://kasad.com" -ext-address=kasad.com:22067

Check the systemd.exec(5) manual page for details on the environment file syntax, as it's a little different from standard shell syntax.

The first line in our file disables Network Address Translation, as the kasad.com server already has a public IP address.

The second line sets the command-line options:

-pools="" registers the relay with no public pools, keeping it private.
-provided-by sets the operator string shown to clients.
-ext-address=kasad.com:22067 sets the external address the relay advertises to clients.

Firewall rules

The relay server listens on TCP port 22067 by default. We need to add a firewall rule to allow incoming traffic on this port. Since the kasad.com server runs nftables, we'll add a rule to our /etc/nftables.conf file:

table inet filter {
    chain input {
    	tcp dport 22067 accept comment "Syncthing relay server"
    }
}

Enable the services

Both packages provide Systemd services, so we can just enable and start those:

# systemctl enable --now syncthing-{relay,disco}srv.service

Client configuration

To use these servers from a Syncthing client, we need to change a few settings.

Using the discovery server

This is the easier of the two services to configure. Just open Syncthing's settings menu and navigate to the Connections tab. Then set https://stdisco.kasad.com/ as the value for the Global Discovery Servers option.

Using the relay server

The relay server takes a little more configuration for the client to use. You must first find the URI to use to connect to the relay server. This is printed when it starts, so we can look in systemd's logs:

# journalctl -eu syncthing-relaysrv.service

Find the line that contains URI: followed by a string. Copy that value, as that's the URI you need to use.
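The URI has roughly this shape (the id value, which identifies the relay's keypair, is a placeholder here):

```text
relay://kasad.com:22067/?id=<relay-device-id>
```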

Then open Syncthing's settings page and go to the Connections tab. Add a comma after the current value of the Sync Protocol Listen Addresses option and paste your relay server's URI in.

If the Sync Protocol Listen Addresses option has the default value, replace that with tcp://:22000, quic://:22000. This is because the default value will also use Syncthing's public relay servers, which we don't want.


Services

SSH

The kasad.com server runs an SSH server to provide remote access. It's the standard OpenSSH server provided with Debian, but it has some changes in configuration.

Host key generation

Since the kasad.com server uses the default Debian image template from Vultr, it's a good idea to regenerate the SSH host keys. This just requires deleting the current ones and regenerating them using ssh-keygen(1):

# rm /etc/ssh/ssh_host_*_key*
# ssh-keygen -A

Configuration

This is the content of the /etc/ssh/sshd_config file for the kasad.com server:

#	$OpenBSD: sshd_config,v 1.103 2018/04/09 20:41:22 tj Exp $

# This is the sshd server system-wide configuration file.  See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin

# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented.  Uncommented options override the
# default value.

Port 22
AddressFamily any
ListenAddress 0.0.0.0
ListenAddress ::

HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key

# Ciphers and keying
RekeyLimit default none

# Logging
SyslogFacility AUTH
LogLevel INFO

# Authentication:

#LoginGraceTime 2m
PermitRootLogin no
StrictModes no
#MaxAuthTries 6
#MaxSessions 10

PubkeyAuthentication yes
AuthorizedKeysFile	.ssh/authorized_keys
#AuthorizedPrincipalsFile none
#AuthorizedKeysCommand none
#AuthorizedKeysCommandUser nobody

# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes

# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication no
PermitEmptyPasswords no

# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no

# Kerberos options
KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no

# GSSAPI options
GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no

# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication.  Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes

#AllowAgentForwarding yes
AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
PrintMotd no
#PrintLastLog yes
#TCPKeepAlive yes
PermitUserEnvironment LANG,LC_*,TZ
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#UseDNS no
#PidFile /var/run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none

# no default banner path
#Banner none

# Allow client to pass locale environment variables
AcceptEnv LANG LC_* TZ

# override default of no subsystems
Subsystem	sftp	/usr/lib/openssh/sftp-server

# Example of overriding settings on a per-user basis
#Match User anoncvs
#	X11Forwarding no
#	AllowTcpForwarding no
#	PermitTTY no
#	ForceCommand cvs server

Match Address 10.5.19.0/24
	PasswordAuthentication yes
	PermitEmptyPasswords no

Most of the configuration is default. The significant changes are listed below.

Disabling password authentication

Normally, users can log in via SSH by simply providing their username and password. This is disabled on kasad.com, as it allows for brute-force attacks.

PasswordAuthentication no

Exception for VPN connections

Users must be authenticated to connect to the kasad.com VPN. This means we won't be subject to brute-force attacks originating from the VPN, so it's safe to allow password logins for users connecting from it.

We do this by enabling password authentication only if the address of the user matches the VPN subnet.

Match Address 10.5.19.0/24
    PasswordAuthentication yes
    PermitEmptyPasswords no
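
The effect of the Match Address block can be sanity-checked offline. A small sketch using Python's ipaddress module (the client addresses here are just illustrations):

```python
import ipaddress

VPN_SUBNET = ipaddress.ip_network("10.5.19.0/24")

def password_auth_allowed(client_addr: str) -> bool:
    """Mirror the sshd Match Address rule: password logins are only
    enabled for clients whose address falls inside the VPN subnet."""
    return ipaddress.ip_address(client_addr) in VPN_SUBNET

# A VPN client may use password authentication...
assert password_auth_allowed("10.5.19.42")
# ...but a client on the public Internet may not.
assert not password_auth_allowed("140.82.7.10")
```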

Disable root login

By default, one can log in as the root user using a public key. We disable this, making it impossible to log in as root. Instead, log in as a normal user and use sudo(8).

PermitRootLogin no

Allow locale environment variables

We want to allow users to define their preferred locale and timezone, so we allow users to set the relevant environment variables when connecting.

PermitUserEnvironment LANG,LC_*,TZ
AcceptEnv LANG LC_* TZ

Services

Email

The most important service on kasad.com is the mail server. It is actually made up of multiple services that handle different aspects of the email process.

These parts are listed in the following table. Each service name links to its respective page.

Service Protocol Description
Postfix SMTP Handles sending mail from kasad.com and receiving mail to kasad.com from other servers
Dovecot IMAP Handles storage of emails on the kasad.com server and provides access to those emails to email clients
dkimpy-milter DKIM Signs outgoing emails so the recipient's email server can verify their legitimacy. Can also verify the signatures on incoming mail.

In addition to these services, some DNS records are required to ensure functionality of the email server.

Domain names

The mail server for the kasad.com domain is mail.kasad.com. However, both of these domains point to the same server. The hostname of the server, as far as Postfix is concerned, is mail.kasad.com, as that's what clients will expect to be connected to. However, we still want to send/receive mail to/from <user>@kasad.com.

We'll refer to kasad.com as our domain and mail.kasad.com as our mail server's hostname.

DNS Records

Mail servers use multiple DNS records to provide information about their functionality:

Name DNS record type Purpose
MX MX Defines the address of the mail server for its domain. Used by other MTAs to find the mail server for mail addressed to kasad.com.
A A Specifies the IPv4 address for the given hostname.
AAAA AAAA Same as A, but for IPv6.
SPF TXT Defines a list of rules about what IP addresses are allowed to send mail for the record's domain. Used to prevent forged sender addresses.
DKIM TXT Lists the public key used for DKIM signing of outgoing mail.
DMARC TXT Sets rules for recipient MTAs to follow regarding DKIM/SPF policy failures. Also lists instructions for recipient MTAs to follow to notify of DKIM/SPF failures.

A/AAAA records

These are the simplest type of record. They simply define the IP address that a given hostname corresponds to.

We define an A and an AAAA record for kasad.com that point to our server. Instead of defining a second A/AAAA record for mail.kasad.com, we use a CNAME record that points to kasad.com.

MX records

The MX record specifies the mail server for a given domain. Other MTAs use this when trying to send mail to our domain. Mail addressed to anything ending in @kasad.com will be sent to the mail server listed in the MX record for kasad.com.

We define an MX record for kasad.com with the value mail.kasad.com. This makes mail.kasad.com the mail server for the kasad.com domain.

SPF records

Sender Policy Framework (SPF) is a simple mechanism to help prevent sender address forgery. It accomplishes this by specifying a list of IP addresses that are allowed to send mail for the given domain. SPF, on its own, only checks the envelope from address. This means the From: header can still be spoofed. Enable DMARC to protect against this.

SPF records are just TXT records with specific content. The SPF rules can be customized to your liking. The rules listed below are the ones I've decided to use.

We define a TXT record for kasad.com with the following content:

v=spf1 +mx +a:kasad.com +a:mail.kasad.com -all

Both +a: rules are likely redundant, as the MX record already points to our mail server and is allowed in a previous rule. That being said, better safe than sorry.
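
As a sanity check, the record can be tokenized into qualifier/mechanism pairs. This is only a parsing sketch, not a full RFC 7208 evaluator:

```python
record = "v=spf1 +mx +a:kasad.com +a:mail.kasad.com -all"

def parse_spf(txt: str):
    """Split an SPF TXT record into (qualifier, mechanism) pairs.
    An omitted qualifier defaults to '+' (pass)."""
    version, *terms = txt.split()
    assert version == "v=spf1"
    parsed = []
    for term in terms:
        qualifier = term[0] if term[0] in "+-~?" else "+"
        mechanism = term.lstrip("+-~?")
        parsed.append((qualifier, mechanism))
    return parsed

parsed = parse_spf(record)
# The final term rejects ("-") everything not matched earlier.
assert parsed[-1] == ("-", "all")
assert ("+", "mx") in parsed
```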

DKIM records

The DKIM record holds the public key that corresponds to our DKIM signing key. MTAs that receive mail from us will retrieve our public key from the DKIM DNS record and use it to verify the mail's DKIM signature. It is important that this record is kept up to date with our DKIM configuration, otherwise recipient MTAs may mark our messages as being forged or as spam.

We generated the value for this record when setting up our DKIM service. See here for details. The DNS record content will be saved in the <keyname>.dns file.

DKIM records are just TXT records. However, the record name must be <selector>._domainkey.<domain>, where <selector> is the selector we chose in our DKIM configuration and <domain> is our domain.
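
The record name follows a fixed pattern, shown here as a trivial helper (illustrative only):

```python
def dkim_record_name(selector: str, domain: str) -> str:
    """Build the DNS name a verifier queries for a DKIM public key."""
    return f"{selector}._domainkey.{domain}"

# Our selector is "mail" and our domain is kasad.com:
assert dkim_record_name("mail", "kasad.com") == "mail._domainkey.kasad.com"
```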

We define a TXT record for mail._domainkey.kasad.com with the following content:

v=DKIM1; h=sha256; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxOlorHTT/rsI5WWobgA0/+XRWAav1F5As1YoUVEUknIPbIJDuMIbEbV468XdHsp63PvwF2uz9A3iEefaGIMOpcJrgIcb3X5el0/x89kxK/zDDruiAzpcLwdy6urEmQhdRfoi1stdOhDlo8dNQj5vRORceJ2v5fUJ3VUV9eWd7cGOjhladUWedYgdIdiYqsbR6CeYIhpKK1v414UmtB1sKcxHgxbROm+yjM6iJaSQbF9iLUlBEHOBfRc1vVuw0N+LQpRDNaaHSom0SusrMnXnjb33ANNCFMITwL9fZm9mR+sR+m+2QGvhLyODJMsxRCBKSSZhrjP2Csa80ZnOtNX91QIDAQAB

There must not be any whitespace in the value of the p= key. Do not insert spaces or line breaks.

DMARC records

DMARC records are used to tell other MTAs how to handle SPF and DKIM failures, as well as how to report these failures (and non-failures, if wanted). DMARC records can be a little complex. The DMARC record we use is explained below, but the alternative options are not. To learn more, see learndmarc.com for an interactive DMARC demonstration. To make it easier to create your own DMARC record, see Scott Kitterman's DMARC Record Assistant.

Like SPF and DKIM, DMARC records are also implemented using DNS TXT records. The DMARC TXT record must be made for _dmarc.<domain>, where <domain> is our domain.

We define a TXT record for _dmarc.kasad.com with the following content:

v=DMARC1; p=none; rua=mailto:dmarc+aggregate@kasad.com; ruf=mailto:dmarc+failures@kasad.com; fo=1; rf=afrf; sp=none
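
A DMARC record is a semicolon-separated list of tag=value pairs; a small parsing sketch:

```python
record = ("v=DMARC1; p=none; rua=mailto:dmarc+aggregate@kasad.com; "
          "ruf=mailto:dmarc+failures@kasad.com; fo=1; rf=afrf; sp=none")

def parse_dmarc(txt: str) -> dict:
    """Split a DMARC record into its tag=value pairs."""
    tags = {}
    for pair in txt.split(";"):
        if pair.strip():
            key, _, value = pair.strip().partition("=")
            tags[key] = value
    return tags

tags = parse_dmarc(record)
# p=none means failures are only reported; mail is still delivered.
assert tags["p"] == "none"
assert tags["rua"] == "mailto:dmarc+aggregate@kasad.com"
```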

Email Server

Documentation for the kasad.com email server. See the Email page in the Services chapter for an overview.

Email Server

Postfix

Postfix is the mail transfer agent (MTA) for kasad.com. It handles receiving emails, sending emails, access control, aliases, and more.

The only things Postfix does not handle are the storing of received emails and providing users access to stored emails. Both of those features are handled by Dovecot.

Architecture

The way Postfix works can be a little confusing. This section aims to provide just enough explanation of Postfix to make it possible to understand how we've set it up.

SMTP daemons

Postfix is an SMTP server. However in our setup, it actually serves three SMTP daemons:

Port Type Authentication enabled? Purpose
25 Plain SMTP Handles connections from other MTAs. Mainly used to receive emails sent by users of other mail servers.
587 Submission ✓ (required) Handles connections from Mail User Agents (MUAs). Mainly used by users of the mail server to submit emails for Postfix to send to another MTA
465 Submission (over TLS) ✓ (required) Same as the previous submission daemon. The only difference is this one uses an implicit TLS wrapper rather than starting with plaintext and using the STARTTLS command.

Since the two submission daemons perform the same task, we may refer to both of them as one submission daemon.

How Postfix handles mail

All mail that Postfix processes is sent to it via SMTP. That goes for both mail being sent to us from other MTAs as well as mail submitted by our users to be sent out to other MTAs.

So how do we know what's being received and what's to be sent?
The short answer is: we don't. Instead, Postfix processes all mail the same way. It's up to us to configure Postfix to permit/deny certain actions based on how we want mail to be processed.

Postfix can be used as a final destination server (like we're doing). It can also be used as a relay server, a backup mail server, a send-only server, a receive-only server, and probably much more. In our case, we will use Postfix as a final destination server for the kasad.com domain. We will also use it as a relay server for all other domains, but with restricted access.

Relaying mail

Relaying typically means receiving and re-sending mail to another destination. However, Postfix (confusingly) considers receiving mail a form of relaying. In a sense, it is being relayed from us to us.

Based on this, we want to implement the following rules:

  1. Allow other MTAs to relay mail to any user at the kasad.com domain without needing authentication
  2. Prevent other MTAs from relaying mail to recipients at other domains/MTAs
  3. Allow authenticated clients to relay mail to any domain as long as the From address is valid for their username
  4. Prevent even authenticated clients from relaying mail with a From address that our server doesn't "own"

If rule 1 was not in place, we wouldn't be able to receive mail from other servers because all incoming mail from unauthenticated clients (e.g. other MTAs) would be rejected.

If rule 2 was not in place, we would be allowing anyone on the Internet to send mail from our server to other servers. This is not only a bad idea for security reasons, but would likely also get us put on a banlist as spammers could use our server to send spam emails.

If rule 3 was not in place, Postfix would only be able to receive mail because it would be impossible to relay mail to other MTAs.

If rule 4 was not in place, any authenticated user would be allowed to send mail from our server using any From address. This would not only allow users to impersonate other users, but would also allow sending mail from addresses our server does not "own" which could also get us put on a banlist.

Understanding this concept of allowing different clients to relay mail to/from different places is necessary to understanding Postfix's configuration. While you could probably copy the provided configuration file and swap in your own domain, it is good to understand how the configuration works.
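
The four rules above can be sketched as one ordered decision function. This is an illustrative model of the policy only; the domain set and function are assumptions, not Postfix code:

```python
# Domains our server is the final destination for (cf. $mydestination)
LOCAL_DOMAINS = {"kasad.com", "mail.kasad.com", "localhost"}

def relay_decision(authenticated: bool, sender_domain: str,
                   recipient_domain: str) -> str:
    """Model of the four relay rules, evaluated in order."""
    if not authenticated:
        # Rules 1 and 2: unauthenticated clients may only deliver
        # mail *to* us, never through us.
        return "accept" if recipient_domain in LOCAL_DOMAINS else "reject"
    # Rule 4: even authenticated clients may only use sender
    # addresses at domains our server "owns".
    if sender_domain not in LOCAL_DOMAINS:
        return "reject"
    # Rule 3: authenticated clients may relay anywhere.
    return "accept"

assert relay_decision(False, "example.org", "kasad.com") == "accept"   # rule 1
assert relay_decision(False, "example.org", "example.net") == "reject" # rule 2
assert relay_decision(True, "kasad.com", "example.net") == "accept"    # rule 3
assert relay_decision(True, "example.org", "example.net") == "reject"  # rule 4
```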

Installing Postfix

Install the mailutils, postfix, and postfix-pcre packages:

# apt install mailutils postfix postfix-pcre

When presented with the post-installation configuration menu, select the Internet site option. Then set the System mail name to your domain (kasad.com for us).

Enabling the service

The postfix package provides two systemd services: postfix.service and postfix@.service. The latter is used for multi-instance setups, where multiple Postfix instances run on the same server. It requires an instance name, e.g. postfix@instance1.service. The postfix.service unit is sort of a metaservice which does nothing on its own. Instead, all of the instance units are a "part of" the postfix.service unit, which makes the postfix.service unit control all of the instances.

In our case, we don't want multiple instances. We only want one main instance. For this, we use the - instance name:

# systemctl enable postfix.service postfix@-.service

Configuring Postfix

Within Postfix's configuration directory (/etc/postfix), there are two configuration files which Postfix will read:

  1. main.cf - Postfix configuration parameters. The main configuration file where settings are specified.
  2. master.cf - Postfix daemon processes. Configures which daemons Postfix will run.

Postfix is not a single program or daemon. Rather, it is a collection of multiple daemons which each provide a unique utility. They communicate and pass data between each other to provide a full Postfix setup.

Postfix can be used in many ways for many purposes. We want to use it as a final destination mail server for the kasad.com domain, while also using it as a relay server for sending mail (see above for details). We also want the mail accounts to map to UNIX user accounts, e.g. an email received for jett@kasad.com will be placed in a mail directory within the user jett's home directory. This is called local delivery.

Postfix supports other modes, like virtual delivery which we will not use.

main.cf

The following is the content of the main.cf configuration file. There are comments in the file to explain what all of the settings do. Some of the settings are also explained in further detail below.

# Listen on all interfaces
inet_interfaces = all

# Listen on both IPv4 and IPv6
inet_protocols = all

# Set the domain of the server
mydomain = kasad.com

# Set mail server hostname
myhostname = mail.kasad.com

# Domain to add to outgoing mail
myorigin = $mydomain

# Domains to use local(8) delivery for
mydestination = $myhostname, $mydomain, localhost

# Trust mail coming from the local host
mynetworks_style = host

# Don't use the local biff service for mail notifications
biff = no

# Generate "delayed mail" warnings
delay_warning_time = 2h
confirm_delay_cleared = yes

# See http://www.postfix.org/COMPATIBILITY_README.html -- default to 2 on
# fresh installs.
compatibility_level = 2

# TLS certificate/key
smtpd_tls_cert_file = /etc/letsencrypt/live/$mydomain/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/$mydomain/privkey.pem

# Only use TLSv1.2 or greater
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1

# Set allowed TLS ciphers
smtpd_tls_mandatory_ciphers = high
smtpd_tls_mandatory_exclude_ciphers = aNULL

# Only permit SASL authentication over TLS connections
smtpd_tls_auth_only = yes

# Enable TLS for SMTP, but do not enforce it
# To enforce TLS for the SMTP server, add an entry for a specific port in
# master.cf
smtpd_tls_security_level = may
smtp_tls_security_level = may

# Log a summary of TLS handshakes
smtpd_tls_loglevel = 1
smtp_tls_loglevel = 1

# Include TLS information in 'Received' header of received mail
smtpd_tls_received_header = yes

# Enable SASL authentication
smtpd_sasl_auth_enable = yes

# Use the Dovecot plugin for SASL auth
smtpd_sasl_type = dovecot

# Implementation-specific information passed to the SASL auth plugin
smtpd_sasl_path = private/auth

# Don't allow anonymous logins, and only allow plaintext logins over TLS
smtpd_sasl_security_options = noanonymous, noplaintext
smtpd_sasl_tls_security_options = noanonymous

# Set restrictions for client connections to SMTPd on submission ports
# The $smtpd_client_restrictions option is set to this for submission daemons
# in master.cf
#   - permit_mynetworks: allow connections originating from $mynetworks
#   - permit_sasl_authenticated: allow connections from authenticated clients
#   - reject: reject everything else
mua_client_restrictions = permit_mynetworks,
                          permit_sasl_authenticated,
                          reject

# Set restrictions for relaying mail
#   - permit_mynetworks: allow mail originating from $mynetworks
#   - permit_sasl_authenticated: allow mail from authenticated clients
#   - reject_unauth_destination: reject mail addressed to an unauthorized destination
smtpd_relay_restrictions = permit_mynetworks,
                           permit_sasl_authenticated,
                           reject_unauth_destination

# Set restrictions for sending mail
#   - permit_mynetworks: allow mail originating from $mynetworks
#   - permit_sasl_authenticated: allow mail from authenticated clients
#   - reject_sender_login_mismatch: reject mail with a From address that
#     differs from the authenticated user's username
smtpd_sender_restrictions = permit_mynetworks,
                            reject_sender_login_mismatch,
                            permit_sasl_authenticated,
                            permit
smtpd_sender_login_maps = pcre:/etc/postfix/login_maps

# Aliases for local(8) delivery
alias_maps = hash:/etc/aliases

# Files for which databases will be generated when using the 'newaliases' command
alias_database = hash:/etc/aliases

# Aliases for virtual delivery (processed before local aliases)
virtual_alias_maps = hash:/etc/postfix/virtual

# No limit to mailbox size
mailbox_size_limit = 0

# Set the delimiter(s) between the user and extension
# E.g. mail addressed to 'user+extra' will go to the user 'user'
recipient_delimiter = +

# Set mailbox location for local(8) delivery
# (relative to recipient user's home directory)
home_mailbox = .mail/Inbox/

# Enable DKIM filter for mail
# Exact behavior depends on the dkimpy-milter configuration
smtpd_milters = inet:localhost:12301
non_smtpd_milters = inet:localhost:12301
milter_protocol = 6
milter_default_action = accept

# Command to use for delivering local mail
mailbox_command = /usr/lib/dovecot/deliver

# Set message size limit
#   52428800 = 50 MiB
message_size_limit = 52428800

# Notify the postmaster of certain errors
#   - bounce: bounced (rejected) mail
#   - delay: delayed mail
#   - resource: mail not delivered due to resource problems
#   - software: mail not delivered due to software problems
notify_classes = bounce, delay, resource, software

# Filter for DSN emails
local_delivery_status_filter = pcre:/etc/postfix/local_dsn_filter

# Limit concurrent deliveries to local mailboxes
local_destination_concurrency_limit = 2

# vim: ft=pfmain ts=8 sw=8 et

Network interfaces

We want Postfix to listen on all network interfaces and all protocols so it is reachable on the Internet:

inet_interfaces = all
inet_protocols = all

Hostnames and domains

As explained in the Domain names section of the Email overview page, Postfix will operate as if its hostname is mail.kasad.com, as this is what we'll use for our MX DNS record. However, the domain name that we want to use for sending/receiving mail is kasad.com.

myhostname = mail.kasad.com
mydomain = kasad.com

The myorigin parameter controls the domain that is appended to unqualified addresses in locally-submitted mail. This is not really necessary when using Postfix as an Internet email server. It only affects mail addressed to a user without a domain, e.g. mail to root will go to root@$myorigin.

myorigin = $mydomain

The mydestination parameter sets the list of domains that will use the local delivery method. We want to deliver all mail locally, as the mail accounts are all UNIX user accounts, so we specify the mail server's domain using the $mydomain variable, as well as the hostname and localhost.

mydestination = $mydomain, $myhostname, localhost

Trusted networks

Postfix keeps a list of trusted hosts, which are hosts that are allowed to relay mail through the Postfix server (i.e. send mail from our server to others). We don't want to allow any hosts to do this other than our own:

mynetworks_style = host

Delayed mail warnings

We want to notify users when their outgoing mail is delayed longer than 2 hours. We also want to send them a notification when the queue is finally processed and the delay is over.

delay_warning_time = 2h
confirm_delay_cleared = yes

TLS settings

We want to enable TLS for Postfix so its connections are encrypted, as we don't want mail being transferred in plaintext. This requires setting multiple configuration parameters, which are listed below.

While you may want to require TLS for all incoming connections, this is a bad idea. Many (especially older) email servers do not support TLS and will fail to deliver. According to RFC 2487, publicly-referenced email servers must not require connections to use TLS.

TLS certificate files

We use Certbot to obtain TLS certificates, so we need to point Postfix to that directory for the TLS certificate and key:

smtpd_tls_cert_file = /etc/letsencrypt/live/$mydomain/fullchain.pem
smtpd_tls_key_file  = /etc/letsencrypt/live/$mydomain/privkey.pem

TLS protocol versions

For connections to Postfix where TLS is mandatory, we require TLS v1.2 or higher:

smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1

For Postfix version 3.6 and greater, you can set smtpd_tls_mandatory_protocols = >=TLSv1.2 which has the same effect.

To enable this restriction for all TLS connections, not just ones where TLS is mandatory, set the smtpd_tls_protocols option as well.

Require TLS when providing credentials

We don't want to allow users to submit their username/password over a plaintext connection. Instead, we tell Postfix to only announce and enable SASL login capability over TLS connections:

smtpd_tls_auth_only = yes

Note that this is different from requiring TLS for all incoming connections. Other mail servers may still connect without TLS to deliver mail; TLS is only required when authenticating and providing credentials.

Enable TLS for SMTP connections

We want to enable, but not enforce, TLS for both incoming and outgoing SMTP connections. Options with the smtp_ prefix control the Postfix SMTP client (i.e. for outgoing connections). Options with the smtpd_ prefix control the Postfix SMTP server (i.e. for incoming connections).

smtp_tls_security_level = may
smtpd_tls_security_level = may

TLS connection logging

We want to log a summary of each TLS handshake. This will allow us to see which connections have used TLS in the mail server logs.

smtp_tls_loglevel = 1
smtpd_tls_loglevel = 1

Include TLS information in Received headers

Postfix will generate a Received header for incoming mail with information about the server it was received from and the one it was received by. We want to include whether or not TLS was used in the information that gets stored in this header:

smtpd_tls_received_header = yes

SASL authentication

We want to enable SASL authentication in Postfix so users can log in and send emails. Postfix does not provide built-in SASL authentication. Instead, it uses an external provider. We'll use Dovecot as our SASL provider, since we're already using it as our IMAP server.

See the Postfix SASL connector section of the Dovecot page for information on how to configure Dovecot to provide SASL for Postfix.

In the Postfix configuration, the following options are needed:

smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth

The following two settings ensure that anonymous logins are prohibited and that plaintext passwords are only sent over TLS connections. This is probably unnecessary, as we've restricted SASL to only be allowed over TLS connections in the first place.

smtpd_sasl_security_options = noanonymous, noplaintext
smtpd_sasl_tls_security_options = noanonymous

MUA client connection rules

We only want to allow authenticated users to connect to the submission daemon. This is controlled using the $smtpd_client_restrictions parameter. However, we only want to apply this to the submission daemons and not the main SMTP daemon.

To do this, we will set a custom parameter, $mua_client_restrictions, in main.cf:

mua_client_restrictions = permit_mynetworks,
                          permit_sasl_authenticated,
                          reject

Then in master.cf, we must add the following option line to both submission daemon entries to assign this custom value to the $smtpd_client_restrictions parameter:

-o smtpd_client_restrictions=$mua_client_restrictions

The resulting entries in master.cf should look as follows:

submission inet n       -       y       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_tls_auth_only=yes
  -o smtpd_client_restrictions=$mua_client_restrictions
submissions inet  n       -       y       -       -       smtpd
  -o syslog_name=postfix/submissions
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=$mua_client_restrictions

Relay/recipient rules

This is possibly one of the most important configuration sections. The options in this section control what emails Postfix will permit the relaying or receiving of.

The smtpd_relay_restrictions option has a slightly misleading name. This parameter configures behavior not only for relaying, but also for sending and receiving. It is triggered when Postfix receives the RCPT TO SMTP command (i.e. whenever a mail is being sent, be that from or to a user on the local server). In other words, it applies rules to the recipient of an email.

smtpd_relay_restrictions = permit_mynetworks,
                           permit_sasl_authenticated,
                           reject_unauth_destination

The restrictions in the option above do the following (in order):

  1. Allow all mail originating from the local machine on which Postfix is running
  2. Allow mail originating from authenticated clients
  3. Reject mail addressed to an unauthorized destination (i.e. one not listed in $mydestination, $virtual_alias_domains, or $virtual_mailbox_domains)

This setting will allow mail from unauthenticated users as long as it is going to a domain listed in one of the three configuration options above. This allows mail servers on the Internet to send emails to users of the kasad.com mail server without authenticating. However, they will not be allowed to relay mail to other mail servers/domains, because we reject unauthorized destinations.

Next, we set the sender restrictions. This list of restrictions is triggered when Postfix receives the MAIL FROM SMTP command. In other words, this allows us to apply rules based on the sender of the email, whereas $smtpd_relay_restrictions applies to the recipient.

smtpd_sender_restrictions = permit_mynetworks,
                            reject_sender_login_mismatch,
                            permit_sasl_authenticated,
                            permit

The restrictions in this option do the following (in order):

  1. Allow mail originating from the local machine on which Postfix is running
  2. Reject mail which originates from an email address that does not match the authenticated user (see below)
  3. Allow mail from authenticated users
  4. Allow everything else that has not been rejected already

Since these rules are evaluated in order, a rejection will prevent a subsequent permit statement from allowing an email. For example, if a login mismatch occurs, the mail will be rejected even if the user is authenticated, because reject_sender_login_mismatch comes before permit_sasl_authenticated. The reverse holds as well: a permit statement that comes first will allow mail that a later rule would have rejected.
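
This first-match behavior can be made concrete with a small sketch. The rule functions here are hypothetical stand-ins, not Postfix internals:

```python
def evaluate(restrictions, client):
    """First-match-wins, the way Postfix walks a restriction list.
    'dunno' means a rule has no opinion and the next rule is tried."""
    for rule in restrictions:
        verdict = rule(client)
        if verdict != "dunno":
            return verdict
    return "dunno"

def reject_mismatch(client):
    return "reject" if client["login_mismatch"] else "dunno"

def permit_authed(client):
    return "permit" if client["authenticated"] else "dunno"

# An authenticated user trying to use someone else's From address:
spoofer = {"authenticated": True, "login_mismatch": True}

# Our order catches the mismatch before the permit rule fires:
assert evaluate([reject_mismatch, permit_authed], spoofer) == "reject"
# Swapping the order would let the spoofed mail through:
assert evaluate([permit_authed, reject_mismatch], spoofer) == "permit"
```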

Preventing user impersonation

The permit_sasl_authenticated restriction in the $smtpd_sender_restrictions list will allow authenticated users to send emails. However, it will not place any restrictions on what From address they can use. This is bad, because the user jett could send an email from anh@kasad.com even though jett is not logged in as anh.

To fix this, we must do two things. The first was already done when configuring $smtpd_sender_restrictions: the reject_sender_login_mismatch restriction must be specified, and it must be placed before the permit_sasl_authenticated restriction. The second thing we must do is configure the sender login map.

The $smtpd_sender_login_maps option specifies a list of tables which are used to map from sender email addresses to SASL login names that are allowed to use those addresses. Since we want to allow all users to send emails from <username>@kasad.com, it makes sense to use a regular expression (PCRE) lookup table:

smtpd_sender_login_maps = pcre:/etc/postfix/login_maps

We must then populate the file /etc/postfix/login_maps with our mappings:

/^(\w+)(\+[a-z0-9_+.-]+)?@kasad\.com$/  $1

This is a regular expression that will match <username>@kasad.com and <username>+<extension>@kasad.com and map both to <username>. Effectively, this allows all users to send emails from an address which consists of their username, an optional extension, and the domain part @kasad.com. See this breakdown for more details on how this works.
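
Because PCRE and Python's re syntax agree for this pattern, the mapping can be exercised directly (the owner_of helper is hypothetical, for illustration):

```python
import re

# The same pattern as in /etc/postfix/login_maps
LOGIN_MAP = re.compile(r"^(\w+)(\+[a-z0-9_+.-]+)?@kasad\.com$")

def owner_of(sender: str):
    """Return the login name allowed to use this sender address,
    or None if no login may claim it."""
    m = LOGIN_MAP.match(sender)
    return m.group(1) if m else None

assert owner_of("jett@kasad.com") == "jett"
assert owner_of("jett+lists@kasad.com") == "jett"
# Other domains don't match this table at all:
assert owner_of("jett@example.org") is None
```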

It is probably possible to not hard-code the domain here and to rely on the fact that Postfix will perform a lookup with just the user part if the entire email is not matched. However, this method works and so I have no incentive to change it.

Also see the Sending Emails from Web Apps page for information on adding users which are allowed to send emails from any user part at a specific subdomain (e.g. <anything>@books.kasad.com).

Address aliases

Postfix supports defining aliases which map addresses to other users/addresses for incoming mail. This means we can define an alias to forward email to www@kasad.com and postmaster@kasad.com to an actual user without having to set up accounts for those two email addresses. There are two types of supported aliases: local aliases and virtual aliases.

Local aliases

Local aliases are processed during local delivery and virtual ones when mail is first queued. Local delivery only occurs for domains in the $mydestination configuration parameter. They are looked up using only the user part. See aliases(5) for details. To enable local aliases, we set the following options:

alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases

The first line tells Postfix where to look for aliases and the second tells it which file to generate when the newaliases command is run.

Virtual aliases

Virtual aliases are processed when mail is queued, meaning they get processed for all incoming emails, not just ones destined for $mydestination. Because of this, the virtual alias map can be used to host "virtual domains," i.e. domains that are either completely different from $mydomain or ones that are related but not part of $mydestination.

To enable virtual aliases, set the following option:

virtual_alias_maps = hash:/etc/postfix/virtual

Examples

The virtual alias table is where address rewriting should be done for subdomains that are not listed in $mydestination. For example, the following entry will redirect all emails addressed to intake@explorer.kasad.com to the explorer-intake local user:

intake@explorer.kasad.com	explorer-intake

Or the following would redirect all emails addressed to any user at the @explorer.kasad.com domain to the explorer-intake local user:

@explorer.kasad.com	explorer-intake
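
The lookup order for these two entries can be sketched as follows. This is a simplified model; real virtual(5) lookups can also try the bare user part between these steps:

```python
# Sketch of the two example entries from /etc/postfix/virtual
virtual = {
    "intake@explorer.kasad.com": "explorer-intake",
    "@explorer.kasad.com": "explorer-intake",
}

def virtual_lookup(address: str):
    """Try the full address first, then fall back to the
    @domain catch-all entry."""
    if address in virtual:
        return virtual[address]
    _, _, domain = address.partition("@")
    return virtual.get("@" + domain)

assert virtual_lookup("intake@explorer.kasad.com") == "explorer-intake"
# The catch-all entry handles any other user at that subdomain:
assert virtual_lookup("anything@explorer.kasad.com") == "explorer-intake"
```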

Regenerating the databases

When the /etc/aliases file is changed, the indexed database needs to be re-generated:

# newaliases

When the /etc/postfix/virtual file is changed, it also needs to be re-generated:

# postmap hash:/etc/postfix/virtual

User address extensions

Email addresses can be sectioned into two parts: the user part (everything before the @) and the domain part (everything after the @). The user part can optionally contain an extension. This extension is purely cosmetic. It does not change how the mail is delivered. To enable extensions, set the following option:

recipient_delimiter = +

The plus (+) character is the standard separator, so that's what we use. With this setting, any mail to user+anything@kasad.com is treated as if it were sent to user@kasad.com.
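The stripping behavior can be sketched in a few lines. This is an illustration of the rule described above, not Postfix's own code:

```python
# Sketch of how a recipient address extension is stripped before the
# alias/mailbox lookup, assuming recipient_delimiter = "+".

RECIPIENT_DELIMITER = "+"

def canonical(address):
    user, _, domain = address.partition("@")
    # Drop everything from the first delimiter onward in the user part.
    base = user.split(RECIPIENT_DELIMITER, 1)[0]
    return base + "@" + domain

print(canonical("user+anything@kasad.com"))  # user@kasad.com
print(canonical("user@kasad.com"))           # user@kasad.com (unchanged)
```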

Local delivery location

We want Postfix to store received emails in the .mail/Inbox directory within a user's home directory:

home_mailbox = .mail/Inbox/

This is ignored when $mailbox_command is set. We only specify it as a fallback/backup.

DKIM mail filter

We need mail to pass through the DKIM mail filter (or milter). See the DKIM Milter page for information on setting up the milter. Set the following options to have Postfix pass all mail (both incoming and outgoing) through the DKIM milter:

smtpd_milters = inet:localhost:12301
non_smtpd_milters = inet:localhost:12301
milter_protocol = 6
milter_default_action = accept

The socket specified for the [non_]smtpd_milters options must match what is defined in the dkimpy-milter configuration file.

Using Dovecot for local delivery

We want to hand mail off to Dovecot for delivery rather than having Postfix deliver the mail itself. To do this, specify the following option:

mailbox_command = /usr/lib/dovecot/deliver

Message size limit

We want to enforce a maximum message size of 50 MiB. This will hopefully prevent DoS attacks and mailbox flooding.

# 50 MiB
message_size_limit = 52428800

Note that main.cf does not support trailing comments, so the comment must go on its own line; otherwise it would become part of the parameter value.
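The value is simply 50 × 1024 × 1024 bytes; a quick check:

```python
# message_size_limit expects bytes; 50 MiB expressed in bytes:
limit = 50 * 1024 * 1024
print(limit)  # 52428800
```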

Postmaster notifications

Postfix has the ability to send the postmaster a notification email when a failure occurs. We want to notify the postmaster of delayed messages, bounced messages, and failures due to software errors or limited resources:

notify_classes = bounce, delay, resource, software

Delivery status notification filter

Clients can request delivery status notifications for both failed and successfully sent emails. We don't want to reveal the internal commands or files being used, so we set a filter to replace that portion of the body in notification emails:

local_delivery_status_filter = pcre:/etc/postfix/local_dsn_filter

The contents of the /etc/postfix/local_dsn_filter file are:

/^: delivered to file.+/    mail queued for delivery
/^: delivered to command.+/ mail queued for delivery
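As a rough illustration of what the filter does, here is the same substitution applied with Python's re module (whose syntax matches these simple PCRE patterns). The sample input line is made up for demonstration; the exact DSN text Postfix generates may differ:

```python
import re

# The two rules from /etc/postfix/local_dsn_filter, applied in order.
rules = [
    (r"^: delivered to file.+", "mail queued for delivery"),
    (r"^: delivered to command.+", "mail queued for delivery"),
]

def filter_dsn(line):
    for pattern, replacement in rules:
        line, count = re.subn(pattern, replacement, line)
        if count:
            break  # stop at the first matching rule
    return line

# Hypothetical DSN text of the kind Postfix generates for local delivery:
print(filter_dsn(": delivered to command: /usr/lib/dovecot/deliver"))
# mail queued for delivery
```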

master.cf

Postfix consists of a number of different daemons. Each daemon has its own purpose and exists independently from the others. All of Postfix's daemons are controlled by the master(8) daemon.

The master(8) daemon reads master.cf for a list of daemons for it to run. The contents of our master.cf file are listed below.

# Postfix master process configuration file.  For details on the format
# of the file, see the master(5) manual page (command: "man 5 master" or
# on-line: http://www.postfix.org/master.5.html).
#
# Do not forget to execute "postfix reload" after editing this file.
#
# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
#               (yes)   (yes)   (no)    (never) (100)
# ==========================================================================

pickup    unix  n       -       y       60      1       pickup
cleanup   unix  n       -       y       -       0       cleanup
qmgr      unix  n       -       n       300     1       qmgr
tlsmgr    unix  -       -       y       1000?   1       tlsmgr
rewrite   unix  -       -       y       -       -       trivial-rewrite
bounce    unix  -       -       y       -       0       bounce
defer     unix  -       -       y       -       0       bounce
trace     unix  -       -       y       -       0       bounce
verify    unix  -       -       y       -       1       verify
flush     unix  n       -       y       1000?   0       flush
proxymap  unix  -       -       n       -       -       proxymap
proxywrite unix -       -       n       -       1       proxymap
smtp      unix  -       -       y       -       -       smtp
relay     unix  -       -       y       -       -       smtp
  -o syslog_name=postfix/$service_name
showq     unix  n       -       y       -       -       showq
error     unix  -       -       y       -       -       error
retry     unix  -       -       y       -       -       error
discard   unix  -       -       y       -       -       discard
local     unix  -       n       n       -       -       local
virtual   unix  -       n       n       -       -       virtual
lmtp      unix  -       -       y       -       -       lmtp
anvil     unix  -       -       y       -       1       anvil
scache    unix  -       -       y       -       1       scache
postlog   unix-dgram n  -       n       -       1       postlogd

smtp      inet  n       -       y       -       -       smtpd
  -o smtpd_sasl_auth_enable=no
  -o content_filter=spamassassin
submission inet n       -       y       -       -       smtpd
  -o syslog_name=postfix/$service_name
  -o smtpd_tls_security_level=encrypt
  -o smtpd_client_restrictions=$mua_client_restrictions
submissions inet n      -       y       -       -       smtpd
  -o syslog_name=postfix/$service_name
  -o smtpd_tls_wrappermode=yes
  -o smtpd_client_restrictions=$mua_client_restrictions

spamassassin unix -     n       n       -       -       pipe
  user=debian-spamd argv=/usr/bin/spamc -e /usr/sbin/sendmail -G -oi -f ${sender} ${recipient}

Note: Lines beginning with whitespace are treated as a continuation of the previous line.

The entries in the file above are separated into three "blocks":

Block #1 - Internal Postfix daemons

The first block enables the daemons required for Postfix to run successfully with our current configuration. These are mostly internal daemons or daemons to interface with utilities like postqueue(1).

Block #2 - SMTP daemons

The second block defines our three SMTP daemons (explained above). The first column, the service name, defines the port each daemon will listen on.* The port can be specified as a number or as a services(5) name. We also use the -o flag to override main.cf options for these specific daemons.

* This applies only for services of the inet type. For the unix type, the service name defines the name of the service's socket file.
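Each master.cf entry is just whitespace-separated columns; splitting one into named fields can be sketched like this. The field names follow the header comment in the file above:

```python
# Split a master.cf entry into its columns, named after the header
# comment in the file ("service type private unpriv chroot wakeup
# maxproc command + args"). Illustrative only.
COLUMNS = ["service", "type", "private", "unpriv",
           "chroot", "wakeup", "maxproc"]

def parse_entry(line):
    fields = line.split()
    entry = dict(zip(COLUMNS, fields[:7]))
    entry["command"] = " ".join(fields[7:])  # command + args
    return entry

entry = parse_entry("submission inet n       -       y       -       -       smtpd")
print(entry["type"])     # inet
print(entry["command"])  # smtpd
```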

Public SMTP daemon

The first SMTP server is the public daemon, which listens on port 25 (the smtp service) and accepts mail from other servers. SASL authentication is disabled for it, and all mail it receives is passed through the SpamAssassin content filter.

Submission daemon

The second SMTP server is the submission daemon, which listens on port 587 (the submission service) and accepts mail from our own users. It requires STARTTLS encryption (smtpd_tls_security_level=encrypt) and applies the client restrictions defined by $mua_client_restrictions.

Submission over TLS daemon

The third SMTP server is another submission daemon, listening on port 465 (the submissions service), which uses implicit TLS. This means clients must establish a TLS session as soon as they connect. This is different from the other submission daemon, to which clients connect over plaintext TCP and upgrade to TLS later via STARTTLS.

Block #3 - SpamAssassin content filter

The third block defines the spamassassin content filter service referenced by the public SMTP daemon. It is a pipe(8) service which runs as the debian-spamd user and feeds each message through spamc(1); the -e option tells spamc to hand the scanned (and possibly tagged) message back to Postfix via sendmail(1).

Email Server

Dovecot

Dovecot handles storage of mail as well as providing IMAP access to emails on the kasad.com mail server.

Installation

Dovecot is broken into several Debian packages. We need dovecot-imapd and dovecot-sieve, which both require dovecot-core:

# apt install dovecot-{core,imapd,sieve}

Enabling the service

The dovecot-core package comes with a systemd service for Dovecot which we will enable:

# systemctl enable dovecot.service

Note: Enabling dovecot.socket is not enough, as that will not trigger when Postfix tries to use Dovecot for SASL, leading to broken authentication in Postfix until the IMAP server is accessed.

Configuration

Typically, Dovecot is configured using multiple drop-in configuration files in /etc/dovecot/conf.d. However, for smaller setups like ours, it's much easier to put all of the configuration in /etc/dovecot/dovecot.conf and ignore the drop-in files.

Below are the contents of dovecot.conf. The comments in the file explain briefly what each section does, but they are documented in more detail in the following sections.

# /etc/dovecot/dovecot.conf

# Note that in the dovecot conf, you can use:
# %u for username
# %n for the name in name@domain.tld
# %d for the domain
# %h the user's home directory

# SSL should be set to required.
ssl = required
ssl_cert = </etc/letsencrypt/live/kasad.com/fullchain.pem
ssl_key = </etc/letsencrypt/live/kasad.com/privkey.pem

# Plaintext login. This is safe and easy thanks to SSL.
auth_mechanisms = plain login

protocols = $protocols imap sieve

# Search for valid users in /etc/passwd
userdb {
	driver = passwd
}
# Fallback: use plain old PAM to find user passwords
passdb {
	driver = pam
}

# Our mail for each user will be in ~/.mail, and the inbox will be ~/.mail/Inbox
# The LAYOUT option is also important because otherwise, the boxes will be `.Sent` instead of `Sent`.
mail_location = maildir:~/.mail:INBOX=~/.mail/Inbox:LAYOUT=fs
namespace inbox {
	inbox = yes

	mailbox Drafts {
		special_use = \Drafts
		auto = subscribe
	}

	mailbox Junk {
		special_use = \Junk
		auto = subscribe
		autoexpunge = 30d
	}

	mailbox Spam {
		special_use = \Junk
		auto = no
		autoexpunge = 30d
	}

	mailbox Sent {
		special_use = \Sent
		auto = subscribe
	}

	mailbox Trash {
		special_use = \Trash
		auto = subscribe
	}

	mailbox Archive {
		special_use = \Archive
	}

	mailbox Archives {
		special_use = \Archive
	}

}

# Here we let Postfix use Dovecot's authentication system.
service auth {
	unix_listener /var/spool/postfix/private/auth {
		mode = 0660
		user = postfix
		group = postfix
	}
}

mail_plugins = $mail_plugins zlib

plugin {
	zlib_save = zstd
}

protocol lda {
	mail_plugins = $mail_plugins sieve
}

protocol lmtp {
	mail_plugins = $mail_plugins sieve
}

plugin {
	sieve = file:~/.sieve;active=~/.dovecot.sieve
	sieve_default = /etc/dovecot/sieve/default.sieve
	sieve_global = file:/var/lib/dovecot/sieve
	recipient_delimiter = +
}

TLS settings

We want to enable and enforce TLS for connections. Unlike Postfix, Dovecot's ports are only accessed by our own users, so we don't need to worry about compatibility issues with other mail servers.

ssl = required
ssl_cert = </etc/letsencrypt/live/kasad.com/fullchain.pem
ssl_key = </etc/letsencrypt/live/kasad.com/privkey.pem

SASL authentication mechanisms

SASL supports multiple methods, called mechanisms, for the user to transmit their password to the server. Since all connections to Dovecot will use TLS, we can enable plaintext mechanisms. They also happen to be the only ones supported by the PAM authentication backend.

auth_mechanisms = plain login
disable_plaintext_auth = yes

The second option above disables plaintext mechanisms for unencrypted connections. This doesn't matter since we require TLS, but it's a nice security fallback if the TLS settings are accidentally changed.

Protocols

We need to tell Dovecot which protocols we want to enable. In our case, this is IMAP and Sieve. Sieve is a scripting language which allows users to create rules to automatically filter or categorize incoming mail. Referencing $protocols appends to the existing protocol list rather than replacing it:

protocols = $protocols imap sieve

User & password databases

We want to use standard UNIX users as the user database for our mail system. We can easily do this in Dovecot. Authentication is broken down into two databases, a user database and a password database. The password database handles authentication of the user when they log in. The user database provides information on the user's UID, GID, and home directory after they've authenticated.

PAM password database

We want to use PAM as the password database. It is possible to use the /etc/shadow file, but PAM provides more customization options.

passdb {
	driver = pam
}

We also want to restrict access to users in the mail group, as we don't want all UNIX users to be able to receive/send mail. We can accomplish this by adding the following line at the end of the auth stack in the /etc/pam.d/dovecot file:

auth	required	pam_succeed_if.so	user ingroup mail

This will cause authentication attempts for users not in the mail group to fail, the same way as if they provided an invalid password.

Passwd file user database

After a user has authenticated, we can look up their UID, GID, and home directory in the /etc/passwd file:

userdb {
	driver = passwd
}

Mail storage location

We need to tell Dovecot where we want it to store mail. We'll use the Maildir format for storing emails, as it's the easiest to work with:

mail_location = maildir:~/.mail:INBOX=~/.mail/Inbox:LAYOUT=fs
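The effect of this setting can be sketched as a mapping from IMAP mailbox names to on-disk paths. With LAYOUT=fs, folder names map directly to directories with no leading dot (so Sent lives at ~/.mail/Sent rather than the Maildir++ default ~/.mail/.Sent), and the INBOX= override places the inbox at ~/.mail/Inbox. The paths below are illustrative:

```python
# Illustrative mapping from IMAP mailbox name to filesystem path for
# mail_location = maildir:~/.mail:INBOX=~/.mail/Inbox:LAYOUT=fs
MAIL_ROOT = "~/.mail"

def mailbox_path(name):
    if name == "INBOX":
        return MAIL_ROOT + "/Inbox"  # INBOX= override
    # LAYOUT=fs: "/"-separated names map directly to directories,
    # with no leading-dot prefix (unlike the default Maildir++ layout).
    return MAIL_ROOT + "/" + name

print(mailbox_path("INBOX"))         # ~/.mail/Inbox
print(mailbox_path("Sent"))          # ~/.mail/Sent
print(mailbox_path("Archive/2022"))  # ~/.mail/Archive/2022
```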

Inbox namespace

Dovecot supports IMAP namespaces. So far, we only define one namespace, which is the default inbox namespace. This namespace also contains default mailbox definitions.

namespace inbox {
	inbox = yes

	mailbox Drafts {
		special_use = \Drafts
		auto = subscribe
	}

	mailbox Junk {
		special_use = \Junk
		auto = subscribe
		autoexpunge = 30d
	}

	mailbox Sent {
		special_use = \Sent
		auto = subscribe
	}

	mailbox Trash {
		special_use = \Trash
		auto = subscribe
	}

	mailbox Archives {
		special_use = \Archive
	}
}

The inbox = yes setting for the namespace tells Dovecot that this is the namespace which contains the inbox folder where new mail should be delivered to.

The mailbox definitions tell Dovecot about some default mailboxes. Users can create additional mailboxes on top of these. Each mailbox has some options which are explained below.

The auto option controls whether Dovecot will automatically create the mailbox folder. The no value means it will not. The create value means it will automatically create the mailbox, and the subscribe value means it will both create the mailbox and subscribe the user to it.

The special_use option assigns an IMAP special use flag to the mailbox. This essentially tells IMAP clients which mailboxes (folders) are used for special purposes, like storing drafts or spam email.

The autoexpunge option will automatically expunge, or permanently delete, messages in the specified mailbox when they are older than the time given by the value. For instance, the Junk folder is set to automatically delete emails inside it that are older than one month.

Postfix SASL connector

We have configured Postfix to use Dovecot as its SASL provider. For this to work, we need to configure Dovecot to provide a SASL listener socket to Postfix:

service auth {
	unix_listener /var/spool/postfix/private/auth {
		mode  = 0660
		user  = postfix
		group = postfix
	}
}

The path for the socket must be /var/spool/postfix/$smtpd_sasl_path as specified in Postfix's main.cf configuration file.

Compressed mail storage

We want to enable compressed mail storage, as it'll help us save some disk space. To do this, we use the zlib plugin:

mail_plugins = $mail_plugins zlib

We can then configure the zlib plugin:

plugin {
	zlib_save = zstd
}

This option configures Dovecot to compress new emails with the Zstandard compression algorithm.

Sieve filtering

We want to enable users to create filters in the form of Sieve scripts. To do this, we will enable the sieve plugin for the LDA and LMTP protocols in Dovecot:

protocol lda {
	mail_plugins = $mail_plugins sieve
}

protocol lmtp {
	mail_plugins = $mail_plugins sieve
}

Make sure to place these two blocks after setting the mail_plugins option globally. If you modify mail_plugins later in the file, the protocol block will not be updated.

We also need to configure the Sieve plugin:

plugin {
	sieve = file:~/.sieve;active=~/.dovecot.sieve
	sieve_default = /etc/dovecot/sieve/default.sieve
	sieve_global = file:/var/lib/dovecot/sieve
	recipient_delimiter = +
}

The first option, sieve, defines where Sieve scripts will be stored. We set it to store Sieve scripts in ~/.sieve. This is also where scripts included by other scripts will be loaded from. Only one Sieve script can be active at a time; ~/.dovecot.sieve is a symlink pointing to the active script.

The second option, sieve_default, specifies the default Sieve script to use if the user has not provided their own.

The third option, sieve_global, specifies the directory where global scripts will be stored. This also controls where scripts that are included in other global Sieve scripts will be loaded from.

Finally, we set the recipient_delimiter character to match what we've configured in Postfix, which is the + plus character.

Default Sieve script

We've defined the default Sieve script location as /etc/dovecot/sieve/default.sieve. The contents of this file are:

require ["fileinto", "mailbox"];
if header :contains "X-Spam-Flag" "YES"
{
	fileinto "Junk";
}

This script will file emails that have been flagged as spam by SpamAssassin into the Junk folder.

DKIM Milter

In order for high-profile email providers like Gmail and Outlook to accept emails we send, we need to sign our outgoing emails using DKIM. We use dkimpy-milter to do this.

dkimpy-milter is a milter, or mail filter. Postfix will pass incoming and outgoing emails through this milter, which signs outgoing messages and verifies the DKIM signatures of incoming ones.

Installation

To install dkimpy-milter, install its Debian package:

# apt install dkimpy-milter

Enabling the service

Like Postfix and Dovecot, dkimpy-milter provides a systemd service which we will enable so the milter is automatically started:

# systemctl enable dkimpy-milter.service

Generating keys

Before configuring dkimpy-milter, we must generate keys which will be used to sign outgoing mail.

First, create and move into the /etc/postfix/dkim directory:

# install -d -m 0750 /etc/postfix/dkim
# cd /etc/postfix/dkim

According to dkimpy-milter's documentation,

Signing keys should be protected (owned by root:root with permissions 600 in a directory that is not world readable).

Using dknewkey(1)

An easy way to generate a key is to use the dknewkey(1) utility that comes with dkimpy-milter:

# dknewkey <keyname>

<keyname> can be anything you want, but it must be the same value used throughout the rest of the configuration. For simplicity, we use the name mail.

dknewkey(1) will generate a 2048-bit RSA key. If you want a larger key, you must generate it manually using OpenSSL.

Using OpenSSL

We can manually generate a private RSA key using OpenSSL:

# openssl genrsa -out <keyname>.key <keysize>
# chmod 600 <keyname>.key

<keysize> should be 2048 or 4096. This defines the length (in bits) of the newly generated key.

After generating the private key, we must extract the public key and create the DNS record text:

# printf 'v=DKIM1; k=rsa; h=sha256; p=%s' "$(openssl rsa -pubout -outform DER -in <keyname>.key | openssl base64 -A)" > <keyname>.dns

This will extract the public key, encode it in Base64, and print it with the proper formatting for a DKIM DNS record. The output will be written to <keyname>.dns.

Configuration

dkimpy-milter is configured by placing settings in the /etc/dkimpy-milter/dkimpy-milter.conf file. The contents are listed below. All of the options are commented and are fairly easy to understand, so they won't be explained further.

# Log to syslog
Syslog  yes

# Sign mail for kasad.com and subdomains
Domain      kasad.com
SubDomains  yes

# Specify the keyfile and selector to use
KeyFile   /etc/postfix/dkim/mail.key
Selector  mail

# Use this server's hostname as the auth server ID when
# generating the Authentication-Results header
AuthservID  HOSTNAME

# Listen on localhost on port 12301
# This is the socket Postfix will use to communicate with dkimpy-milter
Socket  inet:12301@localhost

Selector names

DKIM uses a selector, which is an arbitrary value that identifies a certain key. The key name and the selector need not match. The selector can be whatever you wish, as long as the same value is used in the DKIM DNS record.

DKIM DNS record

Mail servers that we send emails to will validate our DKIM signature by retrieving our public key from a DNS TXT record. Define a TXT record for the <selector>._domainkey.<domain> domain, where <selector> is the selector defined in the configuration file (mail in our case) and <domain> is your mail server's domain (kasad.com in our case). The content of this record should be the content of the <keyname>.dns file we generated earlier.

If the key used to sign mail is changed, the DNS record must be updated with the new public key. If the selector is changed, but the key remains the same, the record's content can remain and only the first domain component must be changed.

As an example record, dig TXT mail._domainkey.kasad.com outputs the following:

mail._domainkey.kasad.com. 214	IN	TXT	"v=DKIM1; h=sha256; k=rsa;  p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxOlorHTT/rsI5WWobgA0/+XRWAav1F5As1YoUVEUknIPbIJDuMIbEbV468XdHsp63PvwF2uz9A3iEefaGIMOpcJrgIcb3X5el0/x89kxK/zDDruiAzpcLwdy6urEmQhdRfoi1stdOhDlo8dNQj5vRORceJ2v5fUJ3VUV9eWd7cGOjhladUWedY" "gdIdiYqsbR6CeYIhpKK1v414 UmtB1sKcxHgxbROm+yjM6iJaSQbF9iLUlBEHOBfRc1vVuw0N+LQpRDNaaHSom0SusrMnXnjb33ANNCFMITwL9fZm9mR+sR+m+2QGvhLyODJMsxRCBKSSZhrjP2Csa80ZnOtNX91QIDAQAB"

Note that the public key is too long to fit in one TXT record, so it is broken into two. I do not know the specifics on how DNS or DKIM clients handle split TXT records. However, this is how most DKIM records look and I have not experienced any compatibility issues so far.
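The splitting follows from DNS itself: a TXT record's value is made up of character-strings of at most 255 bytes each, and DKIM verifiers concatenate them before parsing the record (this is noted in RFC 6376). The chunking can be sketched as follows, using a placeholder key value rather than the real one:

```python
# A DNS TXT record holds one or more character-strings of <= 255 bytes;
# DKIM verifiers join the pieces back together before parsing the record.
def split_txt(value, limit=255):
    return [value[i:i + limit] for i in range(0, len(value), limit)]

# Placeholder record value (a real p= blob for a 2048-bit RSA key is
# around 390 characters of Base64).
record = "v=DKIM1; h=sha256; k=rsa; p=" + "A" * 400

chunks = split_txt(record)
print(len(chunks))                         # 2
print(all(len(c) <= 255 for c in chunks))  # True
print("".join(chunks) == record)           # True
```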

Old Web Apps

Web Apps that are not being used anymore. Some are discontinued and others have been superseded by other web apps.

[Superseded] Authelia - Authentication & SSO

Supersedure notice

Authelia has been replaced by Authentik for use in the kasad.com web apps. Authentik provides more customization, as well as a web-based user interface for managing users, which was my main gripe when using Authelia.

Description

Authelia is an open-source authentication and authorization server and portal. It is used in the SWAG stack as an authentication agent and an SSO portal.

Service info

The Authelia container uses the ghcr.io/authelia/authelia Docker image, version 4.36.2.

Configuration

To-do: document Authelia's configuration.

Authelia is configured to use a YAML file to store users, since there are too few users to justify switching to an SQL database.

Access

Authelia is published at auth.kasad.com.

Because Authelia is used as the authentication backend for Cloudflare Access, it bypasses Access auth. Otherwise an infinite loop would occur, where Cloudflare tries to access Authelia as the auth backend and Authelia tries to redirect back to Cloudflare for pre-auth.

Deployment

Authelia runs in a single container. It's currently part of the SWAG stack. It can (and probably should) be separated into its own stack. The Docker Compose service configuration for it is:

services:
  # ...
  authelia:
    image: ghcr.io/authelia/authelia:4.36.4
    container_name: authelia
    user: '938:941' # swag:servlets
    environment:
      - TZ=America/Los_Angeles
    volumes:
      - /srv/swag/authelia_config:/config
    restart: unless-stopped

Usage

Authelia (auth.kasad.com) is used as an authentication backend for Cloudflare Zero Trust. It is also used as the authentication provider for the following web apps using the OpenID Connect specification:

Currently, it does not appear to be possible to use Authelia for Paperless-NGX or Bitwarden.