Abhay Rana aka Nemo

Home Server Networking

Next in the Home Server series, this post documents how I got the networking setup to serve content publicly from my under-the-tv server.

Colorful block diagram for the networking setup


My home server runs on a mix of Docker/Traefik orchestrated via Terraform. The source code is at https://git.captnemo.in/nemo/nebula (self-hosted, dogfooding FTW!) if you wanna take a look.

The ISP is ACT Bangalore1. They offer decent bandwidth and I’ve been a customer for a long time.

Public Static IP

In order to host content, you need a stable public IP. Unfortunately, ACT puts all of its customers in Bangalore behind a NAT 2. As a result, I decided to get a Floating IP from Digital Ocean 3.

The Static IP is attached to a cheap Digital Ocean Droplet ($10/mo). If you resolve bb8.fun, this is the IP you will get:

Name:   bb8.fun

The droplet has a public static IP of its own as well. I picked a Floating IP because DO provides them for free, and it lets me switch between droplets later without worrying about DNS changes.

Floating IP

On the Digital Ocean infrastructure side, this IP is not directly attached to an interface on your droplet. Instead, DO uses something called “Anchor IP”:

Network traffic between a Floating IP and a Droplet flows through an anchor IP, which is an IP address aliased to the Droplet’s public network interface (eth0). You should bind any public services that you want to make highly available through a Floating IP to the anchor IP.

So, now my Droplet has 2 different IPs that I can use:

  1. Droplet Public IP, assigned directly to the eth0 interface.
  2. Droplet Anchor IP, set up as an alias on the eth0 interface.

This doubles the number of IPs I can listen on. I could, for example, run two different webservers, one on each IP.


OpenVPN

To punch through the NAT between the Droplet and the Home Server, I run an OpenVPN server on the Droplet and openvpn-client on the home server.4

The Digital Ocean Guide is a great resource if you ever have to do this. 2 specific IPs on the OpenVPN network are marked as static:

  1. Droplet:
  2. Home Server:
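With OpenVPN, static client addresses are typically pinned via `client-config-dir`. A sketch with placeholder addresses (the actual addresses are omitted above):

```
# server.conf on the droplet: enable per-client config files
client-config-dir /etc/openvpn/ccd

# /etc/openvpn/ccd/homeserver (file named after the client certificate CN)
# hands this client a fixed VPN address
ifconfig-push 10.8.0.2 255.255.255.0
```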

Home Server - Networking

  • The server has a private static IP assigned to its eth0 interface
  • It also has a private static IP assigned to its tun0 interface

There are primarily 3 kinds of services that I like to run:

  1. Accessible only from within the home network (Time Machine backups, for eg) (Internal). These I publish on the eth0 interface.
  2. Accessible only from the public internet (Wiki) (Strictly Public). These I publish on the tun0 interface and proxy via the droplet.
  3. Accessible from both places (Emby, AirSonic) (Public). These I publish on both the tun0 and eth0 interfaces on the home server.

Docker Networking Basics

Docker runs its own internal network for services, and lets you “publish” these services by forwarding traffic from a given interface to them.

In plain docker-cli, this would be:

docker run --publish 443:443 --publish 80:80 nginx (forward traffic on ports 443 and 80 on all interfaces to the container)

Since I use Terraform, it looks like the following for Traefik:

# Admin Backend
ports {
  internal = 1111
  external = 1111
  ip       = "${var.ips["eth0"]}"
}

ports {
  internal = 1111
  external = 1111
  ip       = "${var.ips["tun0"]}"
}

# Local Web Server
ports {
  internal = 80
  external = 80
  ip       = "${var.ips["eth0"]}"
}

# Local Web Server (HTTPS)
ports {
  internal = 443
  external = 443
  ip       = "${var.ips["eth0"]}"
}

# Proxied via sydney.captnemo.in
ports {
  internal = 443
  external = 443
  ip       = "${var.ips["tun0"]}"
}

ports {
  internal = 80
  external = 80
  ip       = "${var.ips["tun0"]}"
}
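The var.ips map referenced in these blocks isn’t shown in the post; it presumably looks something like this (placeholder addresses, Terraform 0.11-era syntax to match the interpolation above):

```hcl
# Hypothetical definition of the ips map — both addresses are placeholders
variable "ips" {
  type = "map"

  default = {
    eth0 = "192.168.1.10" # home-network interface
    tun0 = "10.8.0.2"     # OpenVPN interface
  }
}
```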

There are 3 “services” exposed by Traefik on 3 ports:

  1. Traefik Admin Interface (Port 1111): Useful for debugging. I leave this in read-only mode with no authentication. This is an Internal service.
  2. HTTP (Port 80): Redirects users to the next entrypoint (HTTPS). This is a Public service.
  3. HTTPS (Port 443): This is where most of the traffic flows. This is a Public service.

For all 3 of the above, Docker forwards traffic from both the OpenVPN network and the home network. OpenVPN lets me access these services from my laptop when I’m not at home, which is helpful for debugging issues. However, the Admin Interface is not proxied onwards to the internet, which keeps it internal.
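The three entrypoints described above would look roughly like this in Traefik 1.x TOML (a sketch, not the author’s actual config):

```toml
defaultEntryPoints = ["http", "https"]

[entryPoints]
  # Admin backend on its own port
  [entryPoints.traefik]
  address = ":1111"

  # Plain HTTP, redirected to HTTPS
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"

  # TLS-terminating entrypoint where most traffic flows
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

# Serve the dashboard/API on the admin entrypoint
[api]
entryPoint = "traefik"
dashboard  = true
```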

Internet Access

The “bridge” between the Floating IP and the OpenVPN IP (both on the Digital Ocean droplet) is simpleproxy. It is a barely-maintained, 200-line TCP proxy, which I picked for its ease of use. I specifically looked for a TCP (rather than HTTP) proxy because:

  1. I did not want to terminate SSL on Digital Ocean, since Traefik was already doing LetsEncrypt cert management for me
  2. I also wanted to proxy non-web services (more below).
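In essence, simpleproxy accepts a TCP connection and shovels bytes to an upstream in both directions. A minimal illustrative sketch of that idea in Python (not the actual tool):

```python
# Minimal TCP proxy sketch: listen locally, forward bytes to an upstream
# and back. Illustrative only -- no error handling or connection limits.
import socket
import threading


def pipe(src, dst):
    """Copy bytes from src to dst until EOF, then half-close dst."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass


def proxy(listen_addr, upstream_addr):
    """Start a TCP proxy; returns the listening socket (port via getsockname)."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(listen_addr)
    server.listen(5)

    def accept_loop():
        while True:
            client, _ = server.accept()
            upstream = socket.create_connection(upstream_addr)
            # One thread per direction, copying until either side closes
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return server
```

Real deployments need timeouts and cleanup, which is roughly the 200 lines simpleproxy spends.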

The simpleproxy configuration consists of a few systemd units:

# Forward Anchor IP 80 -> Home Server VPN 80
[Unit]
Description=Simple Proxy

[Service]
ExecStart=/usr/bin/simpleproxy -L -R
I run 3 of these: 2 for HTTP/HTTPS, and another one for SSH.
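A complete unit would look something like the following, where -L takes the local listen address (the anchor IP) and -R the remote one (the home server’s VPN IP). The addresses here are placeholders, not the real ones:

```ini
# /etc/systemd/system/simpleproxy-http.service -- placeholder addresses
[Unit]
Description=Simple Proxy (HTTP)
After=network-online.target

[Service]
ExecStart=/usr/bin/simpleproxy -L 10.19.0.5:80 -R 10.8.0.2:80
Restart=always

[Install]
WantedBy=multi-user.target
```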

While I use simpleproxy for its stability and simplicity, you could achieve the same result with iptables.
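For reference, the same HTTP forward expressed as nat-table rules in iptables-restore format (placeholder addresses again); this additionally requires net.ipv4.ip_forward=1 on the droplet:

```
*nat
# Rewrite traffic hitting the anchor IP on port 80 to the home server's VPN IP
-A PREROUTING  -d 10.19.0.5/32 -p tcp --dport 80 -j DNAT --to-destination 10.8.0.2:80
# Masquerade so replies flow back via the droplet
-A POSTROUTING -d 10.8.0.2/32  -p tcp --dport 80 -j MASQUERADE
COMMIT
```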

SSH Tunnelling

When I’m on the go, there are 3 different SSH services I might need:

  1. Digital Ocean Droplet
  2. Home Server
  3. Git (gitea runs its own internal git server)

My initial plan was:

  1. Forward Port 22 Floating IP Traffic to Gitea.
  2. Use the eth0 interface on the droplet to run the droplet sshd service.
  3. Keep the Home Server SSH forwarded to OpenVPN, so I can access it over the VPN network.

Unfortunately, that didn’t work out well, because sshd doesn’t support listening on an interface, only on specific IP addresses. I could have bound it to the public Droplet IP, but I didn’t like the idea.

The current setup instead involves:

  1. Running the droplet sshd on a separate port entirely (2222).
  2. The simpleproxy service forwarding port 22 traffic to port 2222 on the OpenVPN IP, which Docker then publishes to the gitea container’s port 22.
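Client-side, the three SSH destinations can be captured in ~/.ssh/config. A sketch, where the Host aliases and the VPN address are hypothetical:

```
# ~/.ssh/config
Host droplet
    HostName sydney.captnemo.in
    Port 2222                # host sshd, moved off port 22

Host git.captnemo.in
    Port 22                  # lands on the gitea container via simpleproxy

Host homeserver
    HostName 10.8.0.2        # placeholder OpenVPN address, reachable over the VPN
    Port 22
```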

The complete traefik configuration is also available if you wanna look at the entrypoints in detail.


Traefik Public Access

You might have noticed that because traefik listens on both eth0 and tun0, there is no guarantee of a “strictly internal” service via Traefik. Traefik just uses the Host header in the request (or SNI) to determine the container to which it needs to forward the request. I use *.in.bb8.fun for internally accessible services, and *.bb8.fun for public ones. But if someone decides to spoof the Host header, they can access an Internal service.

Since I’m aware of the risk, I do not publish anything via traefik that I’m not comfortable putting on the internet. Only a couple of services are marked as “internal-also”, and are published on both. Services like Prometheus are not published via Traefik.

2 Servers

Running and managing 2 servers takes a bit more effort, and has more moving parts. But I use the droplet for other tasks as well (running my DNSCrypt Server, for eg).

Original IP Address

Since simpleproxy does not support the PROXY protocol, neither Traefik nor the Gitea/SSH servers see the original client IP address. I plan to fix that by switching to HAProxy in TCP mode.

If you’re interested in my self-hosting setup, I’m using Terraform + Docker, the code is hosted on the same server, and I’ve been writing about my experience and learnings:

  1. Part 1, Hardware
  2. Part 2, Terraform/Docker
  3. Part 3, Learnings
  4. Part 4, Migrating from Google (and more)
  5. Part 5, Home Server Networking
  6. Part 6, btrfs RAID device replacement

If you have any comments, reach out to me
  1. If you get lucky with their customer support, you can get one: some folks I know have a static public IP on their home setup. In my case, they asked me to upgrade to a corporate plan. 

  2. I once scanned their entire network using masscan. It was fun: https://medium.com/@captn3m0/i-scanned-all-of-act-bangalore-customers-and-the-results-arent-surprising-fecf9d7fe775 

  3. AWS calls its “permanent” IP addresses “Elastic” and Digital Ocean calls them “Floating”. We really need better names in this industry. 

  4. Migrating to Wireguard is on my list, but I haven’t found any good documentation on running a hub-spoke network so far. 

A records on top level domains

Re-ran the same scan as http://blog.towo.eu/a-records-on-top-level-domains/

Scan run from AS9498.



Migrating from Google (and more)

As part of working on my home-server setup, I wanted to move off a few online services to ones that I manage myself. This is a list of the services I used and what I’ve migrated to.

Why: I got frustrated with Google Play Music a few times. Synced songs would not show up across all clients immediately (I had to refresh, uninstall, reinstall), and I hated the client limits it imposed. I decided to try microG on my phone at the same time, and it slowly came together.


I’ve been using email on my own domain (captnemo.in) for quite some time, but it was managed by an Outlook+Google combination that I didn’t like very much.

I switched to Migadu sometime last year, and have been quite happy with the service. Their Privacy Policy, and Benefits section on the website is a pleasure to read.

Why: Email is the central point of your online digital identity. Use your own domain, at the very least; that way, you’re at least protected if Google decides to suspend your account. Self-hosting email is a big responsibility that requires critical uptime, and I didn’t want to manage that, so I went with Migadu.

Why Migadu: You should read their HN thread.

Caveats: They don’t yet offer 2FA, but hopefully that will be fixed soon. Their spam filters aren’t the best either. Migadu even has a Drawbacks section on their website that you should read before signing up.

Alternatives: RiseUp, FastMail.

Google Play Music

I quite liked Google Play Music. While their subscription offering is horrible in India, I was a happy user of their “bring-your-own-music” plan. In fact, the most used Google service on my phone happened to be Google Play Music! I switched to a nice Subsonic fork called AirSonic, which gives me the ability to:

  • Listen on as many devices as I want (Google has some limits)
  • Listen using multiple clients at the same time
  • Stream at whatever bandwidth I pick (I stream at 64kbps over 2G networks!)

I’m currently using Clementine on the desktop (which, unfortunately, doesn’t cache music) and UltraSonic on the phone. Airsonic even supports bookmarks, so listening to audiobooks becomes much simpler.

Why: I didn’t like Google Play Music limits, plus I wanted to try the “phone-without-google” experiment.

Why AirSonic: Subsonic is now closed source, and the Libresonic developers forked off to AirSonic, which is under active development. It is supported across all devices that I use, while Ampache has spotty Android support.

Google Keep

I switched across to WorkFlowy, which has both an Android and a Linux app (both based on webviews). I’ve used it for years, and it is a great tool. Moreover, I’m also using DAVDroid to sync with the Open Tasks app on my phone. Both of these work well enough offline.

Why: I didn’t use Keep much, and WorkFlowy is a far better tool anyway.

Why WorkFlowy: It is the best note-taking/list tool I’ve used.


I switched over to the microG fork of LineageOS, which offers a reverse-engineered implementation of the Google Play Services modules. It includes:

microG Core

Which talks to Google for sync and account purposes.

Why: Saves me a lot of battery. I can uninstall this, unlike Google Play Services.

Cons: Not all Google services are supported very well. Push notifications have some issues on my phone. See the Wiki for Implementation Status.


Instead of the Google Location Provider, I use Mozilla Location Services, along with Mozilla Stumbler to help improve their data.

Why: Google doesn’t need to know where I am.

Caveats: GALP (Google Assisted Location Provider) gets GPS locks much faster in comparison. However, I’ve found Mozilla Location Services coverage in Bangalore to be pretty good.


Still looking for decent alternatives.


microG comes with a Google Maps shim that talks to OpenStreetMap. The maps feature in Uber worked fine with that shim; however, booking cabs was not always possible. I switched over to m.uber.com, which worked quite well for some time.

Uber doesn’t really spend resources on their mobile site, though, and it would occasionally stop working, especially with regards to payments. I’ve switched over to the Ola mobile website, which works pretty well. I keep the OlaMoney app alongside for recharging the OlaMoney wallet.

The Uber->Ola switch was also partially motivated by how badly run Uber is.


Most implementations support CalDAV/CardDAV for calendar/contacts sync. I’m using DAVDroid for syncing to a self-hosted Radicale server.

Why: I’ve always had contacts synced to Google, so it was my single source of truth for contacts. But since I’m on a different email provider now, it makes sense to move those contacts off as well. Radicale also lets me manage multiple addressbooks very easily.

Why Radicale: I looked around at alternatives, and 2 looked promising: Sabre.io and Radicale. Sabre is no longer under development, so I picked Radicale, which also happens to have a regularly updated Docker image.

Google Play Store

I switched to FDroid - it has some apps that Google doesn’t like, and more. Moreover, you can use YALP Store to download applications from the Play Store. As an alternative, you can even run an FDroid repository for the apps you use from the Play Store. See this excellent guide on the various options.

Why: Play Store is tightly linked to Google Play Services, and doesn’t play nice with microG.

Why FDroid: FDroid has publicly verifiable builds, and tons of open-source applications.

Why Yalp: Was easy enough to setup.

If you’re looking to migrate to MicroG, I’d recommend going through the entire NO Gapps Setup Guide by shadow53 before proceeding.


I’ve switched to pass, along with a sync to keybase.

Why: LastPass has had multiple breaches and a plethora of security issues (including 2 RCE vulnerabilities). Their fingerprint authentication on Android could be bypassed until recently. I just can’t trust them any more.

Why pass: It is built on strong crypto primitives, is open-source, and has good integration with both i3 and Firefox. There is also a LastPass migration script that I used.

Caveats: Website names are used as filenames in pass, so even though passwords are encrypted, you don’t want to push the store to a public Git server (since that would expose the list of services you use). I’m using my own git server, along with keybase git (which keeps it end-to-end encrypted, including branch names). You also need to be careful about your GPG keys, instead of a single master password.


As a bonus, I set up a Gitea server hosted at git.captnemo.in. Gitea is a fork of gogs, and is a single-binary Go application that you can run easily.

Just running it for fun, since I’m pretty happy with my GitHub setup. However, I might move some of my sensitive repos (such as this) to my own host.

Why Gitea: The other alternatives were gogs, and GitLab. There have been concerns about gogs development model, and GitLab was just too overpowered/heavy for my use case. (I’m using the home server for gaming as well, so it matters)
