Compare commits
No commits in common. "ce58685691275d4a54b69f68b173ddbb1f2a4d7d" and "cfc9ddec9e82ce55d88b510a81523c47d6a8f223" have entirely different histories.
ce58685691...cfc9ddec9e
13 README.md
@@ -3,17 +3,14 @@

**DISCLAIMER**: this is still a *huge* work in progress.

### Goal

This repository aims to have a *small stack* of self-hosted programs that are accessible through a single endpoint, the reverse proxy (Caddy) in the `caddy-docker-proxy` folder, which exposes whichever pieces of the stack you decide to make reachable from the outside, with or without a domain.

This repository aims to have a *small stack* of self-hosted programs that are accessible through a single endpoint, the reverse proxy (Nginx) in the `rp` folder, which exposes whichever pieces of the stack you decide to make reachable from the outside, with or without a domain.
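For the Caddy variant, exposing a service is just a matter of joining the shared `caddy` network and adding a couple of labels, as in the other Compose file touched by this diff (the one with the netdata and prometheus services). A minimal sketch; the service name, domain and port are illustrative, not taken from this repository:

```yaml
services:
  grafana:
    image: grafana/grafana
    labels:
      caddy: dashboard.domain.tld                 # hostname the reverse proxy should answer for (illustrative)
      caddy.reverse_proxy: "{{upstreams 3000}}"   # forward to port 3000 of this container
    networks:
      - caddy          # shared network owned by the reverse proxy stack

networks:
  caddy:
    external: true     # created by/for the reverse proxy, see below
```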
### Why not Docker Swarm? Or k8s?

As clearly explained on the [Docker Swarm Rocks](https://dockerswarm.rocks/swarm-or-kubernetes/) website, Docker Swarm Mode feels a bit left behind. I tried to build my stack on top of it myself, but it didn't feel as refined as plain Docker. I'm still keeping an eye on the [Awesome Swarm](https://github.com/BretFisher/awesome-swarm) repository in case some new interesting tools come up.

Kubernetes simply has too much overhead for a small home lab like mine. I'm using a couple of air-gapped ARM64 boards, some mini PCs and a small cloud VPS to cover my needs, and at the time of writing k8s would add too much complexity to the stack.

The only thing that would make me change my mind would be a need for autoscaling, but I'm still far from that situation.

That's something else in the plans, but this was more an attempt to answer the question: "what if I have a single machine but I want some modularity, without having to think too much when I want to add something?". I could still use Swarm or k8s on a single machine, but I find this solution a bit more suitable. This is also the reason I chose the Jwilder Nginx Docker image over Traefik: I didn't need service discovery on other nodes.

### How do I use this?

The `caddy-docker-proxy` container is the first one that should be started, after running `$ docker network create caddy` to ensure the external network exists on the system. The `Caddyfile` included in the folder and mounted at `/etc/caddy/caddyfile` is used here to give access to the air-gapped containers running on different machines on the same network.
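The repository's actual `caddy-docker-proxy` Compose file is not part of this diff, so the following is only a rough sketch of what that stack tends to look like; the image tag, the `CADDY_INGRESS_NETWORKS` variable and the volume name are assumptions based on the upstream caddy-docker-proxy documentation, not on this repository:

```yaml
services:
  caddy:
    # Assumption: the commonly used caddy-docker-proxy image
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    container_name: caddy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    environment:
      # Only containers attached to this network are considered as upstreams (assumption)
      CADDY_INGRESS_NETWORKS: caddy
    volumes:
      # Needed so the proxy can read the labels of running containers
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Static entries for the air-gapped machines, as described above
      - ./Caddyfile:/etc/caddy/caddyfile
      # Certificates and other Caddy state
      - caddy-data:/data
    networks:
      - caddy

volumes:
  caddy-data:

networks:
  caddy:
    external: true   # created beforehand with `docker network create caddy`
```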
As you can see, this is a borderline situation where some people may prefer having service discovery with either Swarm or Kubernetes, but in my experience it is still not enough to justify that.

~~Nice question.~~

The `rp` folder is the first piece of the puzzle. It creates the proxy, the Let's Encrypt companion and the `rp_reverse-proxy` network that containers exposed to the internet will have to join. Every service in the Compose files tries to use the smallest set of networks necessary to operate.
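The `rp` Compose file itself is not shown in this diff, but judging from the `VIRTUAL_HOST`/`LETSENCRYPT_HOST` variables and the external `rp_reverse-proxy` network used by the new Vaultwarden stack below, it is the classic nginx-proxy plus ACME companion pair. A rough sketch, with image tags and volume names being assumptions rather than the repository's actual choices:

```yaml
services:
  reverse-proxy:
    image: nginxproxy/nginx-proxy        # the Jwilder nginx-proxy image, now under the nginxproxy org
    container_name: reverse-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # nginx-proxy watches container start/stop events
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
    networks:
      - reverse-proxy

  letsencrypt:
    image: nginxproxy/acme-companion     # issues and renews certificates for LETSENCRYPT_HOST containers
    container_name: letsencrypt
    restart: always
    environment:
      NGINX_PROXY_CONTAINER: reverse-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - acme:/etc/acme.sh
    networks:
      - reverse-proxy

volumes:
  certs:
  vhost:
  html:
  acme:

networks:
  reverse-proxy:   # becomes rp_reverse-proxy when the project is started from the rp folder
```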
### Conclusion (for now)

This approach has worked without any major issues over the last few years, and it has been reliable for many projects that I will add to this repository. Maybe someone else can find it useful for their own projects, and if so I'm happy for you. I'll make sure to link as many of the references I followed as I can inside the individual Compose files.

Although I still don't know if this approach has some major flaw(s), it has been reliable for many projects that I will add to this repository. Maybe someone else can find it useful for their own projects, and if so I'm happy for you. I'll make sure to link as many of the references I followed as I can inside the individual Compose files.
@@ -4,7 +4,7 @@ x-logging:
  driver: syslog
  options:
    tag: "container_name/{{.Name}}"
    labels: "LABEL"
    labels: "q920"
    syslog-facility: local7

x-opt-values:
@@ -50,7 +50,7 @@ services:
    networks:
      - netdata
      - caddy
      - ngp
      - npg
    logging: *default-logging

  prometheus:
@@ -73,7 +73,7 @@ services:
      # - '--web.enable-admin-api'
      # - '--web.enable-remote-write-receiver'
    networks:
      - ngp
      - npg
      - caddy
    logging: *default-logging
@@ -90,7 +90,7 @@ services:
      caddy.log:
      caddy.reverse_proxy: "{{upstreams 3000}}"
    networks:
      - ngp
      - npg
      - caddy
    logging: *default-logging
@@ -120,6 +120,6 @@ volumes:

networks:
  netdata:
  ngp:
  npg:
  caddy:
    external: true
9 vaultwarden/.env.template Normal file
@@ -0,0 +1,9 @@
COMPOSE_PROJECT_NAME=vaultwarden
TOKEN="YOUR_TOKEN_HERE"
EXPOSED_PORT=8080
NDOMAIN="warden.domain.tld"
VDOMAIN="https://warden.domain.tld"
SFROM="mail@domain.tld"
SFROMNAME="Your Name"
SUSER="" # User login for Protonmail Bridge
SPASS="" # Password of the user (retrieved from inside the Protonmail Bridge container)
78 vaultwarden/docker-compose.yml Normal file
@@ -0,0 +1,78 @@
x-logging:
  &default-logging
  driver: syslog
  options:
    # This requires two files in /etc/rsyslog.d
    # https://www.loggly.com/use-cases/docker-syslog-logging-and-troubleshooting/
    tag: "container_name/{{.Name}}"
    labels: "${hostname}"
    syslog-facility: # cron, local7, etc.

# Can be removed if not needed
x-opt-values:
  &volume-opt
  driver_opts: &options
    type: "nfs"
    o: "addr=${IP},rw"

services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    environment:
      ADMIN_TOKEN: ${TOKEN} # Set token if you want the admin page available
      ROCKET_PORT: ${EXPOSED_PORT}
      VIRTUAL_PORT: ${EXPOSED_PORT} # Used by nginx-proxy
      VIRTUAL_HOST: ${NDOMAIN} # Used by nginx-proxy
      LETSENCRYPT_HOST: ${NDOMAIN}
      DOMAIN: ${VDOMAIN} # Used by vaultwarden to set certain links
      WEBSOCKET_ENABLED: "true"
      SIGNUPS_ALLOWED: "false" # Change to true if it's the first time running
      # Optional environment, but useful if you want some functions
      SMTP_HOST: "${protonmail-container-name}"
      SMTP_FROM: ${SFROM}
      SMTP_FROM_NAME: ${SFROMNAME}
      SMTP_PORT: "25" # Default SMTP port for Protonmail Bridge
      SMTP_USERNAME: ${SUSER}
      SMTP_PASSWORD: ${SPASS}
      SMTP_ACCEPT_INVALID_CERTS: "true" # Necessary when using Protonmail Bridge
    volumes:
      - vw-data:/data
    networks:
      - reverse-proxy
      - vaultwarden
      - protonmail

  vaultwarden-backup:
    image: bruceforce/vaultwarden-backup
    container_name: vaultwarden-backup
    restart: always
    environment:
      TIMESTAMP: "true"
      UID: ${UID}
      GID: ${GID}
      BACKUP_DIR: ${BACKUP_DIR}
      DELETE_AFTER: "30"
      CRON_TIME: "0 2 * * *"
    volumes:
      - vw-data:/data
      - backup:/backup

volumes:
  vw-data:
  # This stores the backup on a (possibly) remote server
  backup:
    <<: *volume-opt
    driver_opts:
      <<: *options
      device: ":/mnt/path/vaultwarden/backup"

networks:
  reverse-proxy:
    name: rp_reverse-proxy
    external: true
  vaultwarden:
  protonmail:
    name: pmb_protonmail
    external: true