TL;DR

If you just want to learn how to configure your Starlink setup to host externally reachable services, scroll past the background.

Background

I moved out to the sticks a couple years ago and was stuck with DSL. 6Mbps down and 1Mbps up if I was lucky. I was fortunate enough to get on the Starlink bandwagon within the first year of my move though.

For a while I kept my DSL as a backup and did failover load balancing with pfSense in the event Starlink went down, but after seeing how well it performed, I eventually canceled my DSL service.

With my configuration at the time, I was somehow skirting around the CGNAT (carrier grade network address translation) and was able to port forward and use ddclient to sync up some subdomains for hosting some services that I use for remote access and the like. All personal use, not hosting public services or anything crazy. Sometimes I'm out and about and I just need to get to some files stored on my NAS.

Eventually I got caught up in all the CGNAT stuff. I'm not sure exactly how or when, because I hadn't used any of my services in quite some time. I suspect it was during a multi-day power outage when I was running on a generator: I shut down my rack and hooked up the Starlink router again (I have the round dish hardware), and it probably got some OTA updates pushed to it that then pushed down to dishy. Again, not really sure, but that's my best guess.

In getting ready for a long trip I decided I should test my services again and blammo, nothing worked (except for some of my P2P IoT devices).

I started doing some reading, and that's when I first learned about their use of CGNAT and got some high-level info on how to create a workaround.

What You Need

There are some things you're going to need for this to be successful.

General

  1. A VPN provider
  2. A VPN provider who allows you to do static port forwarding
  3. (Nice to have) A VPN provider who provides a shared static IP
  4. The ability to fund said services (This is not free unfortunately)
  5. A spare machine or the ability to host some VMs (assuming you don't want to route all of your internet traffic through the VPN. There are pros and cons and plenty of other config considerations, none of which are covered here. Also, if you're using a VPN service to try to remain anonymous, a shared static IP starts to diminish that effect.)

VM Specs

  • 2 vCPUs
  • 1 GB RAM
  • 40 GB Disk (You can probably get away with less)
  • Ubuntu Server 20.04 LTS (Yes I know 22.04 is out, and you can certainly adapt this to whatever distro you prefer, none of that is covered here though)
  • openvpn client
  • nginx
  • (ddclient if you don't have a shared static IP; a sample config is sketched just below)
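
If you go the ddclient route instead of a shared static IP, a config along these lines keeps a DNS record pointed at whatever IP the VPN exit hands you. This is a minimal sketch assuming a DNS provider that speaks the dyndns2 protocol; the server, login, password, and hostname are placeholders you'd swap for your own.

    # /etc/ddclient.conf (placeholder values throughout)
    daemon=300                          # re-check every 5 minutes
    use=web, web=checkip.dyndns.org/    # discover the current public (VPN) IP
    protocol=dyndns2                    # common update protocol; adjust for your provider
    server=members.dyndns.org           # your provider's update endpoint
    login=your-username
    password='your-password'
    nas.example.com                     # the hostname to keep updated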

The Build

In my case, I use Windscribe as my VPN provider. Love 'em, hate 'em, whatever, their product does what I need it to do. Some of these steps will be specific to them. From here on out, this assumes you've done the base install of Ubuntu Server 20.04 LTS with SSH running, and that you have a Windscribe subscription with the shared static IP upgrade paid for. (See Windscribe support topics for help with this.)

VPN Stuff

  1. Install OpenVPN
    sudo apt install openvpn -y

  2. Get your config file from https://windscribe.com/getconfig/openvpn

  3. Load the config onto your VM
    sftp username@x.x.x.x
    put Windscribe-StaticIP.ovpn

  4. Create OpenVPN service
    sudo cp ~/Windscribe-StaticIP.ovpn /etc/openvpn/client.conf
    sudo nano /etc/openvpn/auth.txt

  5. Enter your Windscribe-provided credentials, username on the first line and password on the second (not your user account; these are the credentials specific to the VPN client. See Windscribe support for where to find them.)

  6. Update client.conf to use auth.txt for creds when connecting
    sudo nano /etc/openvpn/client.conf
    Insert a new line right after verb 2 (before key-direction 1); there's an excerpt showing the placement after this list:
    auth-user-pass auth.txt

  7. Enable and start the OpenVPN client service
    sudo systemctl enable openvpn@client.service
    sudo systemctl start openvpn@client.service

  8. Validate that your public IP now matches your VPN's static IP (some quick checks follow this list if it doesn't)
    curl http://checkip.dyndns.org/
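
For reference, the tail of client.conf should look something like this once step 6 is done. The surrounding directives come from whatever Windscribe generated for you, so yours may differ; the only line being added is the auth-user-pass one, and the relative auth.txt path works because openvpn@client.service runs out of /etc/openvpn.

    # tail of /etc/openvpn/client.conf (your generated directives may differ)
    verb 2
    auth-user-pass auth.txt
    key-direction 1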
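
If the curl check doesn't come back with the static IP, these generic checks usually point at the problem (the tunnel interface is typically tun0, but the name can vary):

    # is the client service up, and what do the last log lines say?
    systemctl status openvpn@client.service
    journalctl -u openvpn@client.service -n 50 --no-pager

    # did the tunnel interface come up and get an address?
    ip addr show tun0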

Nginx Stuff

If you're not familiar, Nginx is a step up from Apache HTTPD. It's a load balancer, web server, and reverse proxy all in one. For the purposes of this guide, we're going to use its load-balancing functionality (the stream module) to route traffic to our backend devices. A reverse proxy with a twist, especially since my services aren't HA. Otherwise, do what you will with it. Once the VPN is up, the world's your oyster.

  1. Install Nginx
    sudo apt install nginx -y
  2. Dump the stock Nginx config and get ready to do something new and exciting.
    sudo mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.orig
    sudo nano /etc/nginx/nginx.conf
  3. Here's where it gets fun. Whatever services you have on the back end, we can now send traffic to them. I'm going to give you a sample with two different services and let you figure out the rest. (Note: if your backends are HA, just add extra server lines inside the upstream blocks)
user www-data;
worker_processes auto;
worker_rlimit_nofile 8192;
pid /run/nginx.pid;

# The stream module does raw TCP proxying, which lets us pass HTTPS (or any
# other TCP service) straight through without terminating TLS on this VM.
load_module /usr/lib/nginx/modules/ngx_stream_module.so;

events {
    worker_connections 4096;
}

stream {
    # First backend: x.x.x.x is the LAN IP of the box running the service on 9443
    upstream someServiceName {
        server x.x.x.x:9443;
    }

    # Anything arriving on this VM's port 443 gets handed to that backend
    server {
        listen 443;
        proxy_pass someServiceName;
    }

    # Second backend: a plain HTTP service on port 3000
    upstream someOtherServiceName {
        server x.x.x.x:3000;
    }

    server {
        listen 80;
        proxy_pass someOtherServiceName;
    }
}
  4. Validate your config
    sudo nginx -t
  5. Restart nginx
    sudo systemctl restart nginx
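
Once nginx is back up, it's worth confirming the whole chain from somewhere outside your network (a phone hotspot works). The commands below are only a sketch: swap in your static IP or hostname (and the external ports you actually forwarded, if they're not 80/443), note that -k skips certificate validation (fine for a reachability test, not for real use), and this assumes whatever port forwarding your VPN provider requires is already set up on their side.

    # on the VM: confirm nginx is listening on 80 and 443
    sudo ss -tlnp | grep nginx

    # from outside your network: hit the static IP (or your DNS name)
    curl -I http://x.x.x.x/
    curl -kI https://x.x.x.x/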

Wrapping it up

I left out a lot of the Windscribe-specific details; again, use their online support to figure those out if you go with them. Everything I needed was in their KB, and I didn't have to call anyone.

Also, don't be a jerk. Don't try to locally host a bunch of public websites/services on Starlink. It's not what it was built for. This is just a nice little workaround for those of us who occasionally need to get to some stuff inside our own network and don't want to pay $500/mo.