Abhay Rana aka Nemo

Learnings from building my own home server

Learnings

I forgot to do this on the last blog post, so here is the list:

  1. Arch Linux has an official package for Intel microcode updates (intel-ucode).
  2. wireguard is almost there. I’m running openvpn for now, waiting for the stable release.
  3. While traefik is great, I’m concerned about its security model for connecting to Docker (it accesses the Docker unix socket via a bind-mounted volume, which effectively gives it root access on the host). Scary stuff.
  4. Docker labels are a great signalling mechanism. Update: after seeing multiple bugs in how traefik uses Docker labels, I’d say they have limited use-cases, but work great within those. Don’t try to over-architect them to carry all your metadata.
  5. Terraform still needs a lot of work on its Docker provider. A lot of updates destroy and recreate containers when they could be applied in place.
  6. I can’t proxy gitea’s SSH authentication easily, since traefik doesn’t support TCP proxying yet.
  7. The docker_volume resource in terraform is useless, since it doesn’t give you any control over the volume location on the host. (This might be a docker limitation.)
  8. The upload block inside a docker_container resource is a great idea. It lets you push configuration files straight into a container. For example, this is how I push configuration into the traefik container:
    upload {
      content = "${file("${path.module}/conf/traefik.toml")}"
      file    = "/etc/traefik/traefik.toml"
    }
    

Advice

This section is if you’re venturing into a docker-heavy terraform setup:

  1. Use traefik. It will save you a lot of pain with proxying requests.
  2. Repeat the ports section for every IP you want to listen on; CIDRs don’t work. (There’s a sketch of this right after this list.)
  3. If you want to run the container on boot, you want the following:
     restart = "unless-stopped"
     destroy_grace_seconds = 10
     must_run = true
    
  4. If you have a single docker_registry_image resource in your state, you can’t run terraform without internet access.
  5. Breaking your docker module into images.tf, volumes.tf, and data.tf (for registry_images) works quite well.
  6. Memory limits on Docker containers can end up too constrained. Keep an eye on the logs to see if anything is getting killed.
  7. Set up Docker TLS auth first. I tried proxying Docker over Apache with basic auth, but it didn’t work out well.
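
For example, here is roughly what advice #2 looks like inside a docker_container resource. This is a sketch; the port and the two IPs below are placeholders for your own interface addresses:

ports {
  internal = 80
  external = 80
  ip       = "192.168.1.10"
}

ports {
  internal = 80
  external = 80
  ip       = "10.0.0.5"
}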

MongoDB with forceful server restarts

Since my server gets a forceful restart every few days due to power-cuts (I’m still working on a backup power supply), I faced some issues with MongoDB being unable to recover cleanly. The lockfile would indicate an unclean shutdown, and recovery would require manual repairs, which sometimes failed.

As a hacky fix, since most of the errors came from the MongoDB WiredTiger engine itself, I hypothesized that switching to a more robust engine might save me from these manual repairs. I switched to MongoRocks, and while that has stopped the repair issues, the wiki still doesn’t like it, and I’m facing this issue: https://github.com/Requarks/wiki/issues/313

However, I don’t have to repair the DB manually, which is a win.
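
For reference, the engine switch itself is just a mongod startup flag, provided you are on a build that ships MongoRocks (Percona’s MongoDB builds do). In a terraform + docker setup it could look roughly like this sketch; the container name and image reference are illustrative, not my actual config:

resource "docker_container" "mongo" {
  name = "mongo"

  # Any image that ships a MongoRocks-enabled mongod will do here.
  image = "${docker_image.mongo.latest}"

  # Select the storage engine at startup instead of the WiredTiger default.
  command = ["mongod", "--storageEngine", "rocksdb"]
}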

SSHD on specific Interface

My proxy server has the following public interface:

eth0 139.59.22.234

along with an associated DigitalOcean Anchor IP for static-IP use-cases (10.47.0.5, which doesn’t show up in ifconfig).

I wanted to run the following setup:

  • eth0:22 -> sshd
  • Anchor-IP:22 -> simpleproxy -> gitea:ssh

where gitea is the git server hosting git.captnemo.in. This way:

  • I could SSH to the proxy server over 22
  • And directly SSH to the Gitea server over 22 using a different IP address.

Unfortunately, sshd only lets you listen on a specific address (ListenAddress), not on a specific interface, and since the eth0 IP is non-static I can’t rely on pinning the address.

As a result, I’ve resorted to just using 2 separate ports:

  • 22 -> simpleproxy -> gitea:ssh
  • 222 -> sshd

There are some hacky ways around this, like creating a new service that starts sshd only after network connectivity is up (so it can bind the eth0 address), but I found the two-port setup much more stable.
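
Concretely, the two-port setup is just two small pieces of configuration; a sketch with a placeholder address for the gitea container:

# /etc/ssh/sshd_config -- move the real sshd off port 22
Port 222

# listen on 22 and forward everything to gitea's SSH port
# (172.17.0.5 is a placeholder for the gitea container address)
simpleproxy -L 22 -R 172.17.0.5:22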

Wiki.js public pages

I’m running a wiki.js setup at https://wiki.bb8.fun. A specific requirement I had was public pages, so that I could give people links to specific resources that could be browsed without a login.

However, I wanted the default to be authenticated, and only certain pages to be public. The config for this was surprisingly simple:

YAML config

You need to ensure that defaultReadAccess is false:

auth:
  defaultReadAccess: false
  local:
    enabled: true

Guest Access

The guest user is given read-only access to paths starting with /public; this is configured under the guest user’s access rights in the admin panel.

Now any pages created under the /public directory are browseable by anyone.

Here is a sample page: https://wiki.bb8.fun/public/nebula

Docker CA Cert Authentication

I wrote a script that goes with the Docker TLS guide to help you set up TLS authentication.
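
The end state on the daemon side looks something like the following /etc/docker/daemon.json; the certificate paths are whatever you generated, so treat this as a sketch rather than the script’s exact output:

{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem"
}

Note that on systemd distros the stock docker.service typically already passes -H to dockerd, which conflicts with the hosts key above, so one of the two has to be dropped.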

OpenVPN default gateway client side configuration

I’m running an OpenVPN server on my proxy server. However, I don’t always want to use the VPN as my default route, only when I’m on an untrusted network. I still want to be able to connect to the VPN and use it to reach other clients.

The solution is two-fold:

Server Side

Make sure you do not have the following in your OpenVPN server.conf:

push "redirect-gateway def1 bypass-dhcp"

Client Side

I created 2 copies of the VPN client configuration. Both of them are identical, except for this one line:

redirect-gateway def1

If I connect using the copy that contains this line, all my traffic is forwarded over the VPN. If you’re using Arch Linux, this is as simple as creating 2 config files:

  • /etc/openvpn/client/one.conf
  • /etc/openvpn/client/two.conf

And running systemctl start openvpn-client@one. I’ve enabled the non-default-route VPN service, so it automatically connects on boot.
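
To make the naming concrete (the names one/two here are hypothetical), I keep the split-tunnel copy enabled and start the full-tunnel copy by hand when needed:

# split-tunnel config (no redirect-gateway line), connects on boot
systemctl enable --now openvpn-client@one

# full-tunnel config (has "redirect-gateway def1"), started manually on untrusted networks
systemctl start openvpn-client@two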


If you’re interested in my self-hosting setup, I’m using Terraform + Docker, the code is hosted on the same server, and I’ve been writing about my experience and learnings:

  1. Part 1, Hardware
  2. Part 2, Terraform/Docker
  3. Part 3, Learnings
  4. Part 4, Migrating from Google (and more)
  5. Part 5, Home Server Networking

If you have any comments, reach out to me

Running terraform and docker on my home server

The last time I’d posted about my Home Server build in September, I’d just gotten it working. Since then, I’ve made a lot of progress. It is now running almost 10 services, up from just Kodi back then. Now it has a working copy of:

Kodi
I was running kodi-standalone-service, set to run on boot as per the Arch Linux Wiki, but switched to launching it from a simple openbox autostart.
Steam
The current setup uses Steam as the application launcher. This lets me ensure that the Steam Controller works across all applications.
Openbox
Instead of running Kodi on xinit, I’m now running openbox with autologin against a non-privileged user.
PulseAudio
I tried fighting it, but it was slightly easier to configure compared to dmix. Might move to dmix if I get time.
btrfs
I now have the following disks:
  1. 128GB root volume. (Samsung EVO-850)
  2. 1TB volume for data backups
  3. 3TB RAID0 configuration across 2 disks. There are some btrfs subvolumes in the 3TB RAID setup, including one specifically for Docker volumes. The Docker storage-driver guide recommends giving Docker a dedicated btrfs block device and letting it manage subvolumes on it, which I didn’t like, so I’m running Docker volumes in normal mode on a btrfs disk. I don’t have enough writes to care much yet, but might explore this further.
Docker
This has been an interesting experiment. Kodi is still installed natively, but I’ve been trying to run almost everything else as a Docker container. I’ve managed to do the configuration entirely via terraform, which has been a great learning experience (there’s a minimal sketch of what a container definition looks like after this list). I’ve found terraform much saner as a configuration system compared to something like ansible, which gets quite crazy. (We have a much crazier terraform config at work, though.)
Terraform
I have a private repository on GitLab called nebula which I use as the source of truth for the configuration. It doesn’t hold everything yet, just the following:
  1. Docker Configuration (not the docker service, just the container/volumes)
  2. CloudFlare - I’m using bb8.fun as the root domain, which is entirely managed using the CloudFlare terraform provider.
  3. MySQL - Running a MariaDB container, which has been configured by hand till this PR gets merged.
Gitea
Running as a docker container, provisioned using terraform. Plan to proxy this using git.captnemo.in.
Emby
Docker Container. Nothing special. Plan to set this up as the Kodi backend.
Couchpotato
Experimental setup for now. Inside a docker container.
Flexget
I wish I knew how to configure this. Also inside docker.
traefik
Running as a simple reverse proxy for most of the above services
elibsrv
A simple OPDS server, which I use with my Kindle. If you don’t know what OPDS is, you should check it out. Running on a simple Apache setup on the Arch Linux box for now. WIP for dockerization.
ubooquity
Simple ebook server. Proxied over the internet. Has an online ebook reader, which is pretty cool.
MariaDB
I set this up planning to shift Kodi’s data to it, but now that I have Emby set up, I’m not so sure. Still, keeping it running for now.
Transmission
Hooked up to couchpotato, flexget, and sickrage so it can do things.
Sickrage
Liking this more than flexget so far; much easier to configure and use.
AirSonic
This is the latest fork of libresonic, which was itself forked off subsonic. My attempt at getting off Google Play Music.
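
To give a flavour of what “provisioned using terraform” means above, here is a minimal sketch of a container definition with the Docker provider. The image, port, and restart policy here are illustrative rather than my exact config:

resource "docker_image" "gitea" {
  name = "gitea/gitea:latest"
}

resource "docker_container" "gitea" {
  name     = "gitea"
  image    = "${docker_image.gitea.latest}"
  restart  = "unless-stopped"
  must_run = true

  # Gitea's web UI, proxied by traefik
  ports {
    internal = 3000
    external = 3000
  }
}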

Learnings

Moved these to a separate blog post.

TODO

A few things off my TODO list:

  1. Create a Docker image for elibsrv that comes with both ebook-convert and kindlegen pre-installed
  2. Do the same for ubooquity as well (Using the linuxserver/ubooquity docker image)


Home Server Build

I’d been planning to run my own home server for a while, and this culminated in a mini-ITX build recently. The exact part list is up at https://in.pcpartpicker.com/list/krc8Gf.

In no particular order, here were the constraints:

  • The case should be small (I preferred the Elite 110, but it was unavailable everywhere).
  • Dual LAN, if possible (decided against it in the end). The plan was to run the entire home network off this box directly, by plugging the ISP line into the server.
  • Recent i3/i5 for amd64 builds.
  • Enough SATA bays in the cabinet for storage

The plans for the server:

  1. Scheduled backups from other sources (Android/Laptop)
  2. Run Kodi (or perhaps switch to Emby)
  3. Run torrents. Transmission-daemon works; preferably something pluggable that works with RSS.
  4. Do amd64 builds. See https://github.com/captn3m0/ideas#arch-linux-package-build-system
  5. Host a webserver. This is primarily for serving resources off the internet
    • Host some other minor web-services
    • A simple wiki
    • Caldav server
    • Other personal projects
  6. Sync Server setup. Mainly for the Kindle and the phone.
  7. Calibre-server, koreader sync server for the Kindle
    • Now looking at libreread as well
  8. Tiny k8s cluster for running other webapps
  9. Run a graylog server to collect log data from my other systems (using Papertrail for now, which has a 200MB limit)

No plans to move mail hosting. That will stay at migadu.com for now.

I had a lot of spare HDDs that I was going to re-use for this build:

  1. WD MyBook 3TB (external, shelled).
  2. Seagate Expansion: 1TB
  3. Seagate Expansion 3TB (external, shelled)
  4. Samsung EVO 128GB SSD

The 2x3TB disks are set up with RAID1 over btrfs. Important data is snapshotted to the other 1TB disk using btrfs snapshots and subvolumes, giving me ~4TB of storage in total.
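
The snapshot-and-copy part is the usual btrfs send/receive flow; a minimal sketch, with illustrative mount points:

# take a read-only snapshot of the data subvolume
btrfs subvolume snapshot -r /mnt/data /mnt/data/.snapshots/2017-09-01

# replicate it onto the 1TB backup disk
btrfs send /mnt/data/.snapshots/2017-09-01 | btrfs receive /mnt/backup/snapshots/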

Software

Currently running kodi-standalone-service on boot. I have to decide on an easy-to-use container orchestration platform. The choices as of now are:

  1. Rancher
  2. Docker Swarm
  3. Shipyard
  4. Terraform
  5. Portainer

Most of these are tuned for multi-host setups, and bring in a lot of complexity as a result. Looking at Portainer, which seems well suited to a single-host setup.

Other services I’m currently running:

  1. elibsrv. Running a patched build with support for ebook-convert
  2. ubooquity for online reading of comics

