Compare commits

..

3 Commits

5 changed files with 13 additions and 97 deletions

@@ -3,14 +3,17 @@
 **DISCLAIMER**: this is still a *huge* work in progress.
 ### Goal
-This repository aims to have a *small stack* of self hosted programs that are accessible through a single endpoint, the reverse proxy (Nginx) in the `rp` folder, that exposes whatever pieces of the stack you decide to have accessible from the outside, with or without using a domain.
+This repository aims to have a *small stack* of self-hosted programs that are accessible through a single endpoint, the reverse proxy (Caddy) in the `caddy-docker-proxy` folder, which exposes whatever pieces of the stack you decide to make reachable from the outside, with or without a domain.
 ### Why not Docker Swarm? Or k8s?
-That's something else in the plans, but this was more an attempt to answer the question: "what if I have a single machine but I want some modularity, without having to think too much when I want to add something?". I could still use Swarm or k8s on a single machine, but I find this solution a bit more suitable. This is also the reason I choose the Jwilder Nginx docker image over Traefik, as I didn't need service discovery on other nodes.
+As clearly explained on the [Docker Swarm Rocks](https://dockerswarm.rocks/swarm-or-kubernetes/) website, Docker Swarm Mode feels a bit left behind. I tried to build my stack on top of it myself, but it didn't feel as refined as plain Docker. I'm still keeping an eye on the [Awesome Swarm](https://github.com/BretFisher/awesome-swarm) repository in case some interesting new tools come up.
+Kubernetes simply has too much overhead for a small home lab like mine. I'm using a couple of air-gapped ARM64 boards, some mini PCs and a small cloud VPS to cover my needs, and at the time of writing k8s would add too much complexity to the stack.
+The only thing that would make me change my mind is a need for autoscaling, but I'm still far from that situation.
 ### How do I use this?
-~~Nice question.~~ The `rp` folder is the first piece of the puzzle. It creates the proxy, the letsencrypt companion and the `rp_reverse-proxy` network that containers exposed to the internet will have to access. Every service in the Compose files tries to have the least amount of networks necessary to operate. As you can see, this is a borderline situation where some people may prefer having service discovery with either Swarm or Kubernetes, but in my experience this is still not enough to call for that.
+~~Nice question.~~ The `caddy-docker-proxy` container should be started first, after running `$ docker network create caddy` to ensure the external network exists on the system. The `Caddyfile` included and mounted at `/etc/caddy/caddyfile` is used in this case to give access to the air-gapped containers running on different machines on the same network.
 ### Conclusion (for now):
-Although I still don't know if this approach has some major flaw(s), it has been reliable for many projects that I will add to this repository. Maybe someone else can find it useful for their projects, and if so I'm happy for you. I'll make sure to link as many references I followed as I can inside the individual Compose files.
+This approach has worked without any major issue over the last years, and it has been reliable for many projects that I will add to this repository. Maybe someone else can find it useful for their projects, and if so I'm happy for you. I'll make sure to link as many of the references I followed as I can inside the individual Compose files.
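As a sketch of what the mounted `Caddyfile` mentioned above might contain, a single site block is enough to forward a hostname to a service on one of the air-gapped machines; the hostname and LAN address below are invented placeholders, not values from this repository:

```Caddyfile
# Hypothetical site block: Caddy terminates TLS for this hostname
# and forwards traffic to a service on another machine on the LAN.
service.domain.tld {
    reverse_proxy 192.168.1.10:8080
}
```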

@@ -4,7 +4,7 @@ x-logging:
   driver: syslog
   options:
     tag: "container_name/{{.Name}}"
-    labels: "q920"
+    labels: "LABEL"
     syslog-facility: local7
 x-opt-values:

@@ -50,7 +50,7 @@ services:
     networks:
       - netdata
       - caddy
-      - npg
+      - ngp
     logging: *default-logging
   prometheus:
@@ -73,7 +73,7 @@ services:
       # - '--web.enable-admin-api'
       # - '--web.enable-remote-write-receiver'
     networks:
-      - npg
+      - ngp
       - caddy
     logging: *default-logging
@@ -90,7 +90,7 @@ services:
       caddy.log:
       caddy.reverse_proxy: "{{upstreams 3000}}"
     networks:
-      - npg
+      - ngp
       - caddy
     logging: *default-logging
@@ -120,6 +120,6 @@ volumes:
 networks:
   netdata:
-  npg:
+  ngp:
   caddy:
     external: true
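The `caddy.*` labels in the hunk above are how caddy-docker-proxy publishes a container: the proxy watches the Docker socket and generates its configuration from labels on services that share the external `caddy` network. A minimal sketch for a hypothetical service (the service name and domain are invented):

```yaml
services:
  grafana:
    image: grafana/grafana:latest
    labels:
      # Site address Caddy will serve for this container.
      caddy: grafana.domain.tld
      # Proxy to this container's port 3000 on the shared network.
      caddy.reverse_proxy: "{{upstreams 3000}}"
    networks:
      - caddy  # must join the external network the proxy watches

networks:
  caddy:
    external: true
```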

@@ -1,9 +0,0 @@
-COMPOSE_PROJECT_NAME=vaultwarden
-TOKEN="YOUR_TOKEN_HERE"
-EXPOSED_PORT=8080
-NDOMAIN="warden.domain.tld"
-VDOMAIN="https://warden.domain.tld"
-SFROM="mail@domain.tld"
-SFROMNAME="Your Name"
-SUSER="" # User login for Protonmail Bridge
-SPASS="" # Password of the user (retrieved from inside the Protonmail Bridge container)

@@ -1,78 +0,0 @@
-x-logging:
-  &default-logging
-  driver: syslog
-  options:
-    # This requires two files in /etc/rsyslog.d
-    # https://www.loggly.com/use-cases/docker-syslog-logging-and-troubleshooting/
-    tag: "container_name/{{.Name}}"
-    labels: "${hostname}"
-    syslog-facility: # cron, local7, etc.
-# Can be removed if not needed
-x-opt-values:
-  &volume-opt
-  driver_opts: &options
-    type: "nfs"
-    o: "addr=${IP},rw"
-services:
-  vaultwarden:
-    image: vaultwarden/server:latest
-    container_name: vaultwarden
-    restart: always
-    environment:
-      ADMIN_TOKEN: ${TOKEN} # Set token if you want the admin page available
-      ROCKET_PORT: ${EXPOSED_PORT}
-      VIRTUAL_PORT: ${EXPOSED_PORT} # Used by nginx-proxy
-      VIRTUAL_HOST: ${NDOMAIN} # Used by nginx-proxy
-      LETSENCRYPT_HOST: ${NDOMAIN}
-      DOMAIN: ${VDOMAIN} # Used by vaultwarden to set certain links
-      WEBSOCKET_ENABLED: "true"
-      SIGNUPS_ALLOWED: "false" # Change to true if it's the first time running
-      # Optional environment, but useful if you want some functions
-      SMTP_HOST: "${protonmail-container-name}"
-      SMTP_FROM: ${SFROM}
-      SMTP_FROM_NAME: ${SFROMNAME}
-      SMTP_PORT: "25" # Default SMTP port for Protonmail Bridge
-      SMTP_USERNAME: ${SUSER}
-      SMTP_PASSWORD: ${SPASS}
-      SMTP_ACCEPT_INVALID_CERTS: "true" # Necessary when using Protonmail Bridge
-    volumes:
-      - vw-data:/data
-    networks:
-      - reverse-proxy
-      - vaultwarden
-      - protonmail
-  vaultwarden-backup:
-    image: bruceforce/vaultwarden-backup
-    container_name: vaultwarden-backup
-    restart: always
-    environment:
-      TIMESTAMP: "true"
-      UID: ${UID}
-      GID: ${GID}
-      BACKUP_DIR: ${BACKUP_DIR}
-      DELETE_AFTER: "30"
-      CRON_TIME: "0 2 * * *"
-    volumes:
-      - vw-data:/data
-      - backup:/backup
-volumes:
-  vw-data:
-  # This stores the backup on a (possibly) remote server
-  backup:
-    <<: *volume-opt
-    driver_opts:
-      <<: *options
-      device: ":/mnt/path/vaultwarden/backup"
-networks:
-  reverse-proxy:
-    name: rp_reverse-proxy
-    external: true
-  vaultwarden:
-  protonmail:
-    name: pmb_protonmail
-    external: true
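For clarity, once the YAML anchors (`*volume-opt`, `*options`) and merge keys in the deleted Compose file are resolved, its `backup` volume is equivalent to the following plain definition (the `${IP}` and mount-path placeholders are the ones from the file itself):

```yaml
volumes:
  backup:
    driver_opts:
      # Merged from the &options anchor, plus the per-volume device path.
      type: "nfs"
      o: "addr=${IP},rw"
      device: ":/mnt/path/vaultwarden/backup"
```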