initial upload
3  deployments/vm-docker-apps-301.stabify.de.yml  Normal file
@@ -0,0 +1,3 @@
apps:
  - vault
1  infrastructure/.gitignore  vendored  Normal file
@@ -0,0 +1 @@
acme/
162  infrastructure/README.md  Normal file
@@ -0,0 +1,162 @@
# Infrastructure Deployment Guide

This document walks you through the **complete** deployment process ("from scratch") of the Stabify infrastructure. It covers both the initial setup (bootstrapping) and the automated day-to-day operation (GitOps).

## Overview: How Everything Fits Together

We use a **GitOps** model: the code in this repository is the source of truth.
* **Terraform** creates the hardware (VMs).
* **Ansible** configures the software (Docker, apps).
* **Vault** stores all secrets (passwords, tokens).

### The Secret Flow (Who Gets Secrets from Where?)

| Phase | Who needs secrets? | Where do they come from? | Authentication |
| :--- | :--- | :--- | :--- |
| **1. Terraform** | Your PC | Vault (remote via HTTPS) | Your `VAULT_TOKEN` env var |
| **2. Ansible (push)** | Your PC | Vault (remote via HTTPS) | Your `VAULT_TOKEN` env var |
| **3. Ansible (pull)** | The VM itself | Vault (internal via HTTPS) | Token on the VM (`/root/.vault-token`) |
---

## Prerequisites

* Access to the Proxmox API and the OPNsense API.
* Installed locally: `terraform`, `ansible`, `sshpass`.
* The SSH key for Ansible is available (e.g. `~/.ssh/id_ed25519.pub`).

---

## Phase 1: Bootstrap (Solving the Chicken-and-Egg Problem)

We want to use Vault, but Vault itself runs on a VM that we still have to create. Therefore Terraform and Ansible must run once in "dumb" mode (without Vault).

1. **Create a `bootstrap.tfvars`** (do **NOT** commit this file!):

    ```hcl
    # terraform/bootstrap.tfvars
    use_vault = false

    # Proxmox credentials
    proxmox_api_url          = "https://10.100.0.2:8006/api2/json"
    proxmox_api_token_id     = "root@pam!terraform"
    proxmox_api_token_secret = "your-proxmox-token"

    # OPNsense credentials
    opnsense_uri        = "https://10.100.0.1:4443"
    opnsense_api_key    = "your-opnsense-key"
    opnsense_api_secret = "your-opnsense-secret"

    # VM user config
    ci_user        = "ansible"
    ci_password    = "InitialPassword123!" # Replaced by Vault later
    ssh_public_key = "ssh-ed25519 AAAA..."
    ```

2. **Initialize and apply Terraform**:
    This creates the VMs. Since Vault does not exist yet, we use the local credentials.

    ```bash
    cd terraform
    export VAULT_ADDR="http://127.0.0.1:8200" # Dummy value for bootstrap (ignored)
    terraform init
    terraform apply -var-file="bootstrap.tfvars"
    ```

✅ **Result:** All VMs (including `vm-docker-apps-301`) are created and running.
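An optional sanity check after the apply (SSH key path and user taken from `infrastructure/ansible/inventory.ini`):

```bash
# List the resources Terraform now manages
terraform state list

# Verify that cloud-init finished and the ansible user can log in
ssh -i ~/.ssh/id_ed25519_ansible_prod ansible@10.100.30.11 hostname
```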
---

## Phase 2: Vault Deployment & Setup

Now we use Ansible in **push mode** to install Vault on the target server.

1. **Ansible prerequisites**:

    ```bash
    cd ../infrastructure/ansible
    ansible-galaxy install -r requirements.yml
    ```

2. **Deploy Vault**:
    Since Vault is not running yet, Ansible will emit warnings for secrets (permission denied), but it will still perform the deployment and start the container.

    ```bash
    # Deploy to all hosts in the inventory
    ansible-playbook -i inventory.ini deploy.yml
    ```

3. **Initialize Vault (manual)**:
    The Vault container is now running. We need to fetch the generated keys.

    * Fetch the **root token** from the server:
      ```bash
      ssh -i ~/.ssh/id_ed25519_ansible_prod ansible@10.100.30.11 "sudo cat /opt/vault/file/init_keys.json"
      ```
      *(Note: the file `init_keys.json` also contains the unseal keys. Store them somewhere safe!)*

    * Copy the root token (`root_token` from the JSON).
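    If `jq` is installed locally, you can extract just the token instead of copying it out of the raw JSON (same approach as in `infrastructure/apps/vault/README_VAULT.md`):

    ```bash
    ssh -i ~/.ssh/id_ed25519_ansible_prod ansible@10.100.30.11 \
      "sudo cat /opt/vault/file/init_keys.json" | jq -r .root_token
    ```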
4. **Populate Vault (automatic)**:
    Run the helper script to import the secrets from `bootstrap.tfvars` into Vault automatically.

    ```bash
    cd ../.. # Back to the repo root
    ./setup_vault_secrets.sh
    ```
    * You will be asked for the root token.
    * The script imports the secrets.
    * It asks whether `bootstrap.tfvars` should be deleted (yes).
    * It asks whether the **root token** should be removed from the file on the server (yes, recommended for security).

---

## Phase 3: Production Mode & GitOps

From now on the infrastructure is self-contained. Terraform and Ansible fetch all credentials securely from Vault.

### Terraform (Manually, When Needed)
Changes to the hardware (new VMs, CPU/RAM) are still made from your PC.

```bash
cd terraform
export VAULT_ADDR='https://10.100.30.11:8200'
export VAULT_TOKEN='<your-root-token>'
export VAULT_CACERT=../vault-ca.crt

terraform plan # Should show "No changes"
```

### GitOps Workflow (Automatic)

The servers update their apps on their own (**pull principle**).

1. **Activation (one-time):**
    The playbook `deploy.yml` (from Phase 2) has already installed a systemd timer (`gitops-sync.timer`) on all nodes.
    It runs `ansible-pull` every 5 minutes (see the sketch after this list for how to inspect or trigger it manually).

2. **Workflow:**
    * You change the code (e.g. a new app in `apps/` or a change in `deployments/`).
    * You push the **entire code** to your Git repo.
    * Within 5 minutes the servers pull the new state.
    * They run the local playbook `infrastructure/ansible/pull_deploy.yml`.
    * This playbook:
        * installs new apps,
        * updates existing apps,
        * **removes apps** that were dropped from the deployment list (pruning).

3. **Prerequisite:**
    The variable `git_repo_url` in `infrastructure/ansible/deploy.yml` must be set correctly.
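Because the sync is an ordinary systemd unit, the usual tooling applies. A minimal sketch for inspecting and manually triggering a sync on one of the VMs (unit names as installed by the `common` role):

```bash
# When will the next sync run?
systemctl list-timers gitops-sync.timer

# Trigger a sync immediately instead of waiting up to 5 minutes
sudo systemctl start gitops-sync.service

# Follow the ansible-pull output of the current/last run
journalctl -u gitops-sync.service -f
```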
---

## Troubleshooting

* **"Permission denied" errors from Ansible (Phase 2):**
  Normal on the first run, because Vault is still empty. After `setup_vault_secrets.sh` and another `ansible-playbook` run they disappear.

* **Apps are not being removed:**
  The pruning logic only runs in **pull mode** (i.e. via the timer on the server), not when you run `ansible-playbook` manually from your PC.

* **Terraform keeps asking for variables:**
  Check that `VAULT_ADDR` and `VAULT_TOKEN` are set and that the secrets exist in Vault under the expected paths (`secret/infrastructure/...`).
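To confirm the paths, you can read the secrets back directly (a quick check, assuming the KV v2 mount at `secret/` created by the setup script):

```bash
# With VAULT_ADDR, VAULT_TOKEN and VAULT_CACERT exported as in Phase 3:
vault kv get secret/infrastructure/proxmox
vault kv get secret/infrastructure/opnsense
vault kv get secret/infrastructure/vm-credentials
```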
55  infrastructure/ansible/deploy.yml  Normal file
@@ -0,0 +1,55 @@
---
- name: Service-Centric GitOps Execution (Push Mode)
  hosts: all
  gather_facts: true
  become: true # Docker usually needs root/sudo

  vars:
    # Base paths (local on the management controller)
    repo_root: "{{ playbook_dir }}/.."
    apps_catalog_path: "{{ repo_root }}/apps"
    deployments_path: "{{ repo_root }}/deployments"
    base_deploy_path: "/opt"
    git_repo_url: "https://gitea.example.com/stabify/infra.git" # TODO: adjust!

    # We look up the definition file based on the FQDN
    host_def_file_fqdn: "{{ deployments_path }}/{{ inventory_hostname }}.yml"

  roles:
    # Make sure every host has Docker & co.
    - common

  tasks:
    # --- 1. Identification (check locally what this host should get) ---
    - name: "Look up deployment definition for {{ inventory_hostname }}"
      stat:
        path: "{{ host_def_file_fqdn }}"
      delegate_to: localhost
      register: def_fqdn

    - name: "Warn if unconfigured"
      debug:
        msg: "Host {{ inventory_hostname }} has no configuration in {{ deployments_path }}. Skipping."
      when: not def_fqdn.stat.exists

    - name: "End host play if unconfigured"
      meta: end_host
      when: not def_fqdn.stat.exists

    # --- 2. Load configuration (locally) ---
    - name: "Load host configuration"
      include_vars:
        file: "{{ host_def_file_fqdn }}"
        name: host_config
      delegate_to: localhost

    - name: "Show plan"
      debug:
        msg: "Deploying to {{ inventory_hostname }}: {{ host_config.apps }}"

    # --- 3. Execution (remote on the VMs) ---
    - name: "Deploy apps loop"
      include_tasks: deploy_logic_push.yml
      loop: "{{ host_config.apps }}"
      loop_control:
        loop_var: app_name
67  infrastructure/ansible/deploy_logic_pull.yml  Normal file
@@ -0,0 +1,67 @@
---
# PULL logic (runs locally on the server)

# 1. Validation
- name: "Check app in catalog"
  stat:
    path: "{{ apps_catalog_path }}/{{ app_name }}"
  register: catalog_entry

- name: "Skip if missing"
  debug:
    msg: "App {{ app_name }} not found."
  when: not catalog_entry.stat.exists

# 2. Setup
- name: "Set paths"
  set_fact:
    source_dir: "{{ apps_catalog_path }}/{{ app_name }}"
    target_dir: "{{ base_deploy_path }}/{{ app_name }}"
  when: catalog_entry.stat.exists

- name: "Create target directory"
  file:
    path: "{{ target_dir }}"
    state: directory
    mode: '0755'
  when: catalog_entry.stat.exists

# 3. Secrets (Vault)
# In pull mode we need a token. We read it from /root/.vault-token or the environment.
- name: "Load secrets (locally)"
  set_fact:
    app_secrets: "{{ lookup('community.hashi_vault.vault_kv2_get', 'apps/' + app_name, engine_mount_point='secret', url=vault_addr, token_path='/root/.vault-token') | default({}) }}"
  ignore_errors: true
  when: catalog_entry.stat.exists

- name: "Create .env"
  copy:
    dest: "{{ target_dir }}/.env"
    content: |
      {% for key, value in app_secrets.items() %}
      {{ key }}={{ value }}
      {% endfor %}
    mode: '0600'
  when: catalog_entry.stat.exists and app_secrets is defined and app_secrets | length > 0

# 4. Sync files (local copy)
- name: "Sync Files"
  copy:
    src: "{{ source_dir }}/"
    dest: "{{ target_dir }}/"
    mode: '0644'
    directory_mode: '0755'
  when: catalog_entry.stat.exists

# 5. Docker Compose
- name: "Docker Compose Up"
  community.docker.docker_compose_v2:
    project_src: "{{ target_dir }}"
    state: present
    pull: missing
    build: always
    remove_orphans: true
  environment:
    PATH: "/usr/bin:/usr/local/bin:/snap/bin:{{ ansible_env.PATH }}"
  when: catalog_entry.stat.exists
75  infrastructure/ansible/deploy_logic_push.yml  Normal file
@@ -0,0 +1,75 @@
---
# Push logic: we copy from localhost -> remote host

# 1. Validation (local)
- name: "Check whether app '{{ app_name }}' exists in the catalog (local)"
  stat:
    path: "{{ apps_catalog_path }}/{{ app_name }}"
  delegate_to: localhost
  register: catalog_entry

- name: "Fail: app missing from catalog"
  fail:
    msg: "App '{{ app_name }}' not found in {{ apps_catalog_path }}"
  when: not catalog_entry.stat.exists

# 2. Set up paths (remote)
- name: "Set target path"
  set_fact:
    source_dir: "{{ apps_catalog_path }}/{{ app_name }}"
    target_dir: "{{ base_deploy_path }}/{{ app_name }}"

- name: "Create target directory on remote"
  file:
    path: "{{ target_dir }}"
    state: directory
    mode: '0755'

# 3. Secrets from Vault (local lookup, remote copy)
- name: "Load secrets from Vault (local lookup)"
  set_fact:
    app_secrets: "{{ lookup('community.hashi_vault.vault_kv2_get', 'apps/' + app_name, engine_mount_point='secret', url=lookup('env', 'VAULT_ADDR') | default('https://10.100.30.11:8200'), token=lookup('env', 'VAULT_TOKEN')) | default({}) }}"
  delegate_to: localhost
  ignore_errors: true

- name: "Set app_secrets default when empty"
  set_fact:
    app_secrets: {}
  when: app_secrets is undefined

- name: "Create .env file on remote"
  copy:
    dest: "{{ target_dir }}/.env"
    content: |
      {% for key, value in app_secrets.items() %}
      {{ key }}={{ value }}
      {% endfor %}
    mode: '0600'
  when: app_secrets | length > 0

# 4. Sync files (local -> remote)
# Note: the 'copy' module does not support 'exclude'. For excludes we would need 'synchronize' (rsync),
# or we copy everything and ignore .env conflicts (copy overwrites anyway).
- name: "Synchronize app files (push)"
  copy:
    src: "{{ source_dir }}/"
    dest: "{{ target_dir }}/"
    mode: '0644'
    directory_mode: '0755'
  # An .env present in the source would be overwritten if it exists
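# Hypothetical alternative (not part of this play): if excludes ever become necessary,
# ansible.posix.synchronize could replace the copy task above — a sketch only, assuming
# the ansible.posix collection is installed and rsync is available on both ends:
#
# - name: "Synchronize app files with excludes (push)"
#   ansible.posix.synchronize:
#     src: "{{ source_dir }}/"
#     dest: "{{ target_dir }}/"
#     rsync_opts:
#       - "--exclude=.env"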

# 5. Docker Compose deployment (remote)
- name: "Deploy {{ app_name }} with Docker Compose"
  community.docker.docker_compose_v2:
    project_src: "{{ target_dir }}"
    state: present
    pull: missing
    build: always
    remove_orphans: true
  environment:
    PATH: "/usr/bin:/usr/local/bin:/snap/bin:{{ ansible_env.PATH }}"
  register: compose_result
14  infrastructure/ansible/inventory.ini  Normal file
@@ -0,0 +1,14 @@
[docker_hosts]
vm-docker-apps-301.stabify.de ansible_host=10.100.30.11
vm-docker-traefik-302.stabify.de ansible_host=10.100.30.12
# vm-docker-mailcow-300.stabify.de ansible_host=10.100.30.10

[k3s_hosts]
# vm-k3s-master-400.stabify.de ansible_host=10.100.40.10
# ...

[all:vars]
ansible_user=ansible
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
ansible_ssh_private_key_file=~/.ssh/id_ed25519_ansible_prod
3  infrastructure/ansible/inventory_local.ini  Normal file
@@ -0,0 +1,3 @@
[local]
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3
22  infrastructure/ansible/prune_logic.yml  Normal file
@@ -0,0 +1,22 @@
---
# Pruning logic: removes apps that are no longer wanted

- name: "Check for docker-compose.yml in {{ base_deploy_path }}/{{ app_name_to_remove }}"
  stat:
    path: "{{ base_deploy_path }}/{{ app_name_to_remove }}/docker-compose.yml"
  register: compose_file

- name: "Stop and remove containers for {{ app_name_to_remove }}"
  community.docker.docker_compose_v2:
    project_src: "{{ base_deploy_path }}/{{ app_name_to_remove }}"
    state: absent
    remove_orphans: true
  environment:
    PATH: "/usr/bin:/usr/local/bin:/snap/bin:{{ ansible_env.PATH }}"
  when: compose_file.stat.exists

- name: "Delete app directory {{ base_deploy_path }}/{{ app_name_to_remove }}"
  file:
    path: "{{ base_deploy_path }}/{{ app_name_to_remove }}"
    state: absent
80  infrastructure/ansible/pull_deploy.yml  Normal file
@@ -0,0 +1,80 @@
---
- name: "GitOps Execution (Local Pull Mode)"
  hosts: localhost
  connection: local
  gather_facts: true
  become: true

  vars:
    # Paths are now local on the server
    repo_root: "{{ playbook_dir }}/.."
    apps_catalog_path: "{{ repo_root }}/apps"
    deployments_path: "{{ repo_root }}/deployments"
    base_deploy_path: "/opt"

    # Vault address for local access
    vault_addr: "https://10.100.30.11:8200"

  tasks:
    # 1. Identification
    - name: "Determine hostname (for config lookup)"
      set_fact:
        # ansible-pull runs locally, so we use ansible_fqdn or the hostname
        target_hostname: "{{ ansible_fqdn }}"

    - name: "Look up deployment definition"
      stat:
        path: "{{ deployments_path }}/{{ target_hostname }}.yml"
      register: def_file

    - name: "Abort if there is no config"
      fail:
        msg: "No deployment config found for {{ target_hostname }}."
      when: not def_file.stat.exists

    # 2. Load config (desired state)
    - name: "Load host configuration"
      include_vars:
        file: "{{ deployments_path }}/{{ target_hostname }}.yml"
        name: host_config

    - name: "Define desired apps"
      set_fact:
        wanted_apps: "{{ host_config.apps }}"

    # 3. Determine actual state
    - name: "Find installed apps in {{ base_deploy_path }}"
      find:
        paths: "{{ base_deploy_path }}"
        file_type: directory
        recurse: false
      register: installed_dirs

    - name: "Filter non-app directories (e.g. vault)"
      set_fact:
        # We assume everything in /opt is an app, apart from explicit exceptions.
        # Here we only collect directories that could contain Docker Compose files.
        installed_apps: "{{ installed_dirs.files | map(attribute='path') | map('basename') | list }}"

    # 4. Cleanup (pruning)
    - name: "Determine apps to remove"
      set_fact:
        # Apps that are installed but not listed in wanted_apps.
        # NOTE: 'vault' might need protection if it were run manually;
        # since we also manage Vault via GitOps (it is in the list), this is fine.
        apps_to_remove: "{{ installed_apps | difference(wanted_apps) }}"

    - name: "Pruning loop"
      include_tasks: prune_logic.yml
      loop: "{{ apps_to_remove }}"
      loop_control:
        loop_var: app_name_to_remove
      # Just in case: never delete anything called 'vault', even if the config is broken
      when: app_name_to_remove != 'vault'

    # 5. Deploy apps (update/install)
    - name: "Deploy apps loop"
      include_tasks: deploy_logic_pull.yml
      loop: "{{ wanted_apps }}"
      loop_control:
        loop_var: app_name
7  infrastructure/ansible/requirements.yml  Normal file
@@ -0,0 +1,7 @@
---
collections:
  - name: community.docker
    version: 3.10.0
  - name: community.hashi_vault
    version: 6.0.0
5  infrastructure/ansible/roles/common/handlers/main.yml  Normal file
@@ -0,0 +1,5 @@
---
- name: Reload Systemd
  systemd:
    daemon_reload: true
20  infrastructure/ansible/roles/common/tasks/gitops.yml  Normal file
@@ -0,0 +1,20 @@
- name: "Deploy GitOps Service Unit"
  template:
    src: gitops-sync.service.j2
    dest: /etc/systemd/system/gitops-sync.service
    mode: '0644'
  notify: Reload Systemd

- name: "Deploy GitOps Timer Unit"
  template:
    src: gitops-sync.timer.j2
    dest: /etc/systemd/system/gitops-sync.timer
    mode: '0644'
  notify: Reload Systemd

- name: "Enable GitOps Timer"
  systemd:
    name: gitops-sync.timer
    state: started
    enabled: true
49  infrastructure/ansible/roles/common/tasks/main.yml  Normal file
@@ -0,0 +1,49 @@
---
- name: "Install base packages"
  apt:
    name:
      - curl
      - wget
      - git
      - htop
      - vim
      - net-tools
      - dnsutils
      - ca-certificates
      - gnupg
      - lsb-release
    state: present
    update_cache: true

- name: "Install Ansible & Git for GitOps (pull mode)"
  apt:
    name:
      - ansible
      - git
      - python3-hvac # For Vault
    state: present

- name: "Install Docker (convenience script)"
  # Using the official Docker install script is often more robust than individual packages.
  # Alternative: adding the repo manually (cleaner, but more code).
  shell: "curl -fsSL https://get.docker.com | sh"
  args:
    creates: /usr/bin/docker

- name: "Add user to the docker group"
  user:
    name: "{{ ansible_user }}"
    groups: docker
    append: true

# Ensure the Docker service is running
- name: "Start Docker service"
  service:
    name: docker
    state: started
    enabled: true

# GitOps setup
- import_tasks: gitops.yml
20  infrastructure/ansible/roles/common/templates/gitops-sync.service.j2  Normal file
@@ -0,0 +1,20 @@
[Unit]
Description=Ansible Pull GitOps Sync
Documentation=https://docs.ansible.com/ansible/latest/cli/ansible-pull.html
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
User=root
# We use ansible-pull to fetch the repo and run the local playbook
# -U: repo URL
# -d: checkout directory
# -i: inventory (localhost here)
# pull_deploy.yml: the playbook inside the repo
ExecStart=/usr/bin/ansible-pull -U {{ git_repo_url }} -d /opt/stabify-infra -i infrastructure/ansible/inventory_local.ini infrastructure/ansible/pull_deploy.yml
TimeoutStartSec=600

[Install]
WantedBy=multi-user.target
12  infrastructure/ansible/roles/common/templates/gitops-sync.timer.j2  Normal file
@@ -0,0 +1,12 @@
[Unit]
Description=Trigger Ansible Pull GitOps Sync every 5 minutes
After=network-online.target

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
RandomizedDelaySec=60

[Install]
WantedBy=timers.target
2  infrastructure/apps/traefik-edge/.gitignore  vendored  Normal file
@@ -0,0 +1,2 @@
.env
certs/
@@ -0,0 +1 @@
#Testcomment
@@ -0,0 +1,18 @@
http:
  routers:
    # Route for apps on VM 301
    to-apps-vm:
      rule: HostRegexp(`^[a-z0-9-]+\.apps\.stabify\.de$`)
      service: apps-vm-service
      entryPoints: [ websecure ]
      tls:
        certResolver: le
        domains:
          - main: "*.apps.stabify.de"

  services:
    apps-vm-service:
      loadBalancer:
        servers:
          - url: "http://vm-docker-apps-301.stabify.de:80"
        passHostHeader: true
42  infrastructure/apps/traefik-edge/config/traefik.yml  Normal file
@@ -0,0 +1,42 @@
api:
  dashboard: false

entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https

  websecure:
    address: ":443"
    http:
      tls:
        certResolver: le
        domains:
          - main: "stabify.de"
            sans:
              - "*.stabify.de"
              - "*.k3s.stabify.de"
              - "*.sys.stabify.de"
              - "*.apps.stabify.de"

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  file:
    directory: "/etc/traefik/dynamic"
    watch: true

certificatesResolvers:
  le:
    acme:
      email: acme@infrastructure.stabify.de
      storage: /certs/acme.json
      caServer: https://acme-v02.api.letsencrypt.org/directory
      dnsChallenge:
        provider: cloudflare
        delayBeforeCheck: 10
30  infrastructure/apps/traefik-edge/docker-compose.yml  Normal file
@@ -0,0 +1,30 @@
---
services:
  traefik:
    image: traefik:v3.6
    container_name: traefik-edge
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    environment:
      - TZ=Europe/Berlin
      - CF_ZONE_API_TOKEN=${CF_ZONE_API_TOKEN}
      - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
    command:
      # --- ENABLE DEBUGGING ---
      - "--log.level=DEBUG" # Sets the log level to DEBUG (troubleshooting)
      - "--accesslog=true"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./config/traefik.yml:/etc/traefik/traefik.yml:ro
      - ./config/dynamic:/etc/traefik/dynamic:ro
      - ./certs:/certs
    networks:
      - proxy
networks:
  proxy:
    name: proxy-edge
23  infrastructure/apps/traefik-sub/docker-compose.yml  Normal file
@@ -0,0 +1,23 @@
---
services:
  traefik:
    image: traefik:v3.6
    container_name: traefik-sub
    restart: unless-stopped
    environment:
      - TZ=Europe/Berlin
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy

networks:
  proxy:
    name: proxy-sub
    external: false
12  infrastructure/apps/vault/Dockerfile  Normal file
@@ -0,0 +1,12 @@
FROM hashicorp/vault:1.15

# Install dependencies for automation script
RUN apk add --no-cache openssl jq curl bash ca-certificates

# Copy entrypoint script
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh

# Use our script as entrypoint
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
81  infrastructure/apps/vault/README_VAULT.md  Normal file
@@ -0,0 +1,81 @@
# Vault Operations Manual (Automated)

This document describes how HashiCorp Vault is operated within the Stabify infrastructure.
Vault runs as a Docker container on the VM `vm-docker-apps-301.stabify.de` (IP: `10.100.30.11`).

## Automated Setup

This service uses a **custom entrypoint script** that automates the following steps:
1. **Certificates**: generates CA and server certificates on startup if they are missing.
2. **Initialization**: initializes Vault automatically on first start.
3. **Auto-unseal**: stores the keys locally (`file/init_keys.json`) and uses them to unseal Vault automatically at boot.

**⚠️ SECURITY WARNING:**
The unseal keys are stored in plain text under `/opt/vault/file/init_keys.json`.
This is for convenient "set-and-forget" operation in a homelab. In high-security environments this file should be deleted after the initial setup and the keys kept in a safe place (password manager).
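To confirm that the automatic initialization and unseal worked, you can query the server status from your PC; a minimal sketch (container name `vault-prod` from the compose file; `VAULT_SKIP_VERIFY` only because the certificate is self-signed):

```bash
ssh ansible@10.100.30.11 \
  "docker exec -e VAULT_ADDR=https://127.0.0.1:8200 -e VAULT_SKIP_VERIFY=true vault-prod vault status"
# Expect: Initialized = true, Sealed = false
```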

## Getting Started

### 1. Deployment (via Ansible)
Since the service is part of the `vm-docker-apps-301` deployment definition, it is started automatically as soon as the Ansible playbook runs.

### 2. Getting Access
After startup, the generated data is on the server.

1. **Fetch the CA certificate** (so your browser/client trusts it):
   ```bash
   scp ansible@10.100.30.11:/opt/vault/certs/ca.crt ./
   # Import ca.crt into your trust store / keychain
   ```

2. **Fetch the root token** (for admin access):
   ```bash
   ssh ansible@10.100.30.11 "cat /opt/vault/file/init_keys.json" | jq -r .root_token
   ```

3. **Log in**:
   ```bash
   export VAULT_ADDR='https://10.100.30.11:8200'
   export VAULT_CACERT=./ca.crt
   vault login <Root-Token>
   ```

## Creating Secrets (One-Time)

Enable the KV v2 engine and create the required secrets.

```bash
# Enable the engine
vault secrets enable -path=secret kv-v2

# 1. Proxmox credentials
vault kv put secret/infrastructure/proxmox \
  api_token_id="root@pam!terraform" \
  api_token_secret="your-secret-token"

# 2. OPNsense credentials
vault kv put secret/infrastructure/opnsense \
  api_key="your-api-key" \
  api_secret="your-api-secret"

# 3. VM user credentials
vault kv put secret/infrastructure/vm-credentials \
  ci_user="ansible" \
  ci_password="super-secure-password" \
  ssh_public_key="ssh-ed25519 AAAA..."
```

## Troubleshooting

**Check logs:**
```bash
ssh ansible@10.100.30.11 "docker logs vault-prod"
```

**Regenerate certificates:**
Simply delete the `certs` folder on the server and restart the container.
```bash
rm -rf /opt/vault/certs/*
docker compose restart vault
```
**Note:** afterwards you must distribute the new `ca.crt` to your clients again.
20  infrastructure/apps/vault/config/vault.hcl  Normal file
@@ -0,0 +1,20 @@
storage "raft" {
  path    = "/vault/file"
  node_id = "node1"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/vault/config/certs/vault.crt"
  tls_key_file  = "/vault/config/certs/vault.key"
  tls_disable   = 0
}

api_addr     = "https://10.100.30.11:8200"
cluster_addr = "https://10.100.30.11:8201"
ui           = true

# Production hardening
disable_mlock     = true
max_lease_ttl     = "768h"
default_lease_ttl = "168h"
25  infrastructure/apps/vault/docker-compose.yml  Normal file
@@ -0,0 +1,25 @@
services:
  vault:
    build: .
    image: stabify/vault-custom:latest
    container_name: vault-prod
    restart: unless-stopped
    ports:
      - "8200:8200"
    environment:
      VAULT_ADDR: 'https://127.0.0.1:8200'
      VAULT_API_ADDR: 'https://127.0.0.1:8200'
    volumes:
      - ./config:/vault/config
      - ./file:/vault/file
      - ./logs:/vault/logs
      # Mount certs directory.
      - ./certs:/vault/config/certs
    cap_add:
      - IPC_LOCK
    networks:
      - internal

networks:
  internal:
    name: vault-net
99  infrastructure/apps/vault/entrypoint.sh  Normal file
@@ -0,0 +1,99 @@
#!/bin/sh
set -e

# --- 1. Auto-Generate Certificates ---
CERTS_DIR="/vault/config/certs"
if [ ! -f "$CERTS_DIR/vault.crt" ] || [ ! -f "$CERTS_DIR/vault.key" ]; then
    echo "[ENTRYPOINT] Certificates missing. Generating self-signed certs..."
    mkdir -p "$CERTS_DIR"

    # Create CA
    openssl genrsa -out "$CERTS_DIR/ca.key" 4096
    openssl req -new -x509 -days 3650 -key "$CERTS_DIR/ca.key" -out "$CERTS_DIR/ca.crt" \
        -subj "/C=DE/ST=Berlin/L=Berlin/O=Stabify/OU=IT/CN=StabifyRootCA"

    # Create Server Key/CSR
    openssl genrsa -out "$CERTS_DIR/vault.key" 4096
    openssl req -new -key "$CERTS_DIR/vault.key" -out "$CERTS_DIR/vault.csr" \
        -subj "/C=DE/ST=Berlin/L=Berlin/O=Stabify/OU=IT/CN=vault.stabify.de"

    # Config for SANs
    cat > "$CERTS_DIR/v3.ext" << EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = vault.stabify.de
DNS.2 = vm-docker-apps-301.stabify.de
DNS.3 = localhost
IP.1 = 127.0.0.1
IP.2 = 10.100.30.11
EOF

    # Sign Cert
    openssl x509 -req -in "$CERTS_DIR/vault.csr" \
        -CA "$CERTS_DIR/ca.crt" -CAkey "$CERTS_DIR/ca.key" -CAcreateserial \
        -out "$CERTS_DIR/vault.crt" -days 3650 -sha256 -extfile "$CERTS_DIR/v3.ext"

    chmod 644 "$CERTS_DIR/vault.crt" "$CERTS_DIR/ca.crt"
    chmod 600 "$CERTS_DIR/vault.key" "$CERTS_DIR/ca.key"

    echo "[ENTRYPOINT] Certificates generated successfully."
fi

# Trust our own CA inside the container (for local curl/vault calls)
cp "$CERTS_DIR/ca.crt" /usr/local/share/ca-certificates/stabify-ca.crt
update-ca-certificates

# --- 2. Start Vault in Background ---
echo "[ENTRYPOINT] Starting Vault server..."
vault server -config=/vault/config/vault.hcl &
VAULT_PID=$!

# Wait for Vault to be ready (it will be sealed initially)
echo "[ENTRYPOINT] Waiting for Vault API..."
until nc -z 127.0.0.1 8200; do
    sleep 1
done
sleep 2

# --- 3. Auto-Init ---
export VAULT_ADDR='https://127.0.0.1:8200'
export VAULT_SKIP_VERIFY=true # We trust localhost

KEYS_FILE="/vault/file/init_keys.json"

if ! vault status | grep -q "Initialized.*true"; then
    echo "[ENTRYPOINT] Vault is not initialized. Initializing..."
    vault operator init -format=json > "$KEYS_FILE"
    chmod 600 "$KEYS_FILE"
    echo "[ENTRYPOINT] Vault initialized. Keys saved to $KEYS_FILE"
    echo "!!! WARNING: Unseal keys are stored in $KEYS_FILE. Secure this file or delete it after noting the keys !!!"
fi

# --- 4. Auto-Unseal ---
if [ -f "$KEYS_FILE" ]; then
    echo "[ENTRYPOINT] Found keys file. Attempting auto-unseal..."
    # Read first 3 keys and unseal
    KEY1=$(jq -r ".unseal_keys_b64[0]" "$KEYS_FILE")
    KEY2=$(jq -r ".unseal_keys_b64[1]" "$KEYS_FILE")
    KEY3=$(jq -r ".unseal_keys_b64[2]" "$KEYS_FILE")

    vault operator unseal "$KEY1" > /dev/null
    vault operator unseal "$KEY2" > /dev/null
    vault operator unseal "$KEY3" > /dev/null

    if vault status | grep -q "Sealed.*false"; then
        echo "[ENTRYPOINT] Vault successfully unsealed!"
    else
        echo "[ENTRYPOINT] Failed to unseal Vault."
    fi
else
    echo "[ENTRYPOINT] No keys file found. Manual unseal required."
fi

# --- 5. Wait for Vault Process ---
wait $VAULT_PID
8  infrastructure/apps/vault/vars.yml  Normal file
@@ -0,0 +1,8 @@
app_name: "vault"

vault_config:
  ui: true
  listener_address: "0.0.0.0:8200"
  api_addr: "https://10.100.30.11:8200"
  cluster_addr: "https://10.100.30.11:8201"
16  infrastructure/apps/whoami/docker-compose.yml  Normal file
@@ -0,0 +1,16 @@
---
services:
  whoami:
    image: traefik/whoami
    container_name: whoami
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.apps.stabify.de`)"
      - "traefik.http.routers.whoami.entrypoints=web"
    networks:
      - proxy-sub

networks:
  proxy-sub:
    external: true
@@ -0,0 +1,7 @@
apps:
  - vault
  - traefik-sub
  - whoami
# Simply add more apps from the catalog here:
# - nextcloud
# - monitoring
@@ -0,0 +1,2 @@
apps:
  - traefik-edge
119  setup_vault_secrets.sh  Executable file
@@ -0,0 +1,119 @@
#!/bin/bash
set -e

# Path to the bootstrap file
BOOTSTRAP_VARS="terraform/bootstrap.tfvars"
VAULT_CA_LOCAL="./vault-ca.crt"

# Check if bootstrap vars exist
if [ ! -f "$BOOTSTRAP_VARS" ]; then
    echo "Error: $BOOTSTRAP_VARS not found."
    echo "Make sure you are in the repo root and that the file exists."
    exit 1
fi

# Check for Vault CA
if [ ! -f "$VAULT_CA_LOCAL" ]; then
    echo "Fetching the CA certificate from the Vault server..."
    scp -i ~/.ssh/id_ed25519_ansible_prod ansible@10.100.30.11:/opt/vault/certs/ca.crt "$VAULT_CA_LOCAL"
fi

# Get Root Token from user
read -sp "Please enter the Vault root token (from init_keys.json): " VAULT_ROOT_TOKEN
echo ""

if [ -z "$VAULT_ROOT_TOKEN" ]; then
    echo "The token must not be empty."
    exit 1
fi

# Setup Vault Environment
export VAULT_ADDR='https://10.100.30.11:8200'
export VAULT_TOKEN="$VAULT_ROOT_TOKEN"
export VAULT_CACERT="$VAULT_CA_LOCAL"

echo "Checking Vault status..."
vault status > /dev/null

echo "Enabling KV v2 engine..."
vault secrets enable -path=secret kv-v2 || echo "Engine already exists (ignoring error)."

# Helper function to extract var from tfvars (simple grep/cut, assumes standard formatting)
get_var() {
    grep "^$1" "$BOOTSTRAP_VARS" | cut -d'=' -f2- | tr -d ' "' | sed 's/#.*//' | xargs
}

echo "Reading secrets from $BOOTSTRAP_VARS..."

PM_TOKEN_ID=$(get_var "proxmox_api_token_id")
PM_TOKEN_SECRET=$(get_var "proxmox_api_token_secret")
OPN_KEY=$(get_var "opnsense_api_key")
OPN_SECRET=$(get_var "opnsense_api_secret")
OPN_URI=$(get_var "opnsense_uri")
CI_USER=$(get_var "ci_user")
CI_PASS=$(get_var "ci_password")
SSH_KEY=$(get_var "ssh_public_key")

echo "Writing secrets to Vault..."

vault kv put secret/infrastructure/proxmox \
    api_token_id="$PM_TOKEN_ID" \
    api_token_secret="$PM_TOKEN_SECRET"

vault kv put secret/infrastructure/opnsense \
    api_key="$OPN_KEY" \
    api_secret="$OPN_SECRET" \
    uri="$OPN_URI"

vault kv put secret/infrastructure/vm-credentials \
    ci_user="$CI_USER" \
    ci_password="$CI_PASS" \
    ssh_public_key="$SSH_KEY"

echo "✅ All secrets imported successfully!"

# --- Cleanup & Switch to Production ---
echo ""
echo "----------------------------------------------------------------"
echo "PHASE 3: CLEANUP & PRODUCTION SWITCH"
echo "----------------------------------------------------------------"
echo "Vault is now populated. We can delete the local secrets"
echo "and switch Terraform to production mode."
echo ""
read -p "Do you want to delete '$BOOTSTRAP_VARS' now? (y/n) " -n 1 -r
echo ""
if [[ $REPLY =~ ^[Yy]$ ]]; then
    rm "$BOOTSTRAP_VARS"
    echo "🗑️ '$BOOTSTRAP_VARS' has been deleted."
    echo "ℹ️ Note: the Terraform variable 'use_vault' defaults to 'true',"
    echo "   so no further file changes are needed."
else
    echo "⚠️ The file was NOT deleted. Remember to remove it manually"
    echo "   before pushing the code to Git!"
fi

echo ""
echo "----------------------------------------------------------------"
echo "SECURITY: CLEANUP REMOTE KEYS"
echo "----------------------------------------------------------------"
echo "The file '/opt/vault/file/init_keys.json' lives on the Vault server."
echo "It contains the root token and the unseal keys in plain text."
echo "For maximum security this file should be deleted (warning: auto-unseal will then stop working!)"
echo "or at least the root token should be removed from it."
echo ""
read -p "Remove the root token from the remote file now (recommended)? (y/n) " -n 1 -r
echo ""
if [[ $REPLY =~ ^[Yy]$ ]]; then
    # We use jq to delete the root_token field and overwrite the file
    ssh -i ~/.ssh/id_ed25519_ansible_prod ansible@10.100.30.11 "sudo jq 'del(.root_token)' /opt/vault/file/init_keys.json | sudo tee /opt/vault/file/init_keys.json.safe > /dev/null && sudo mv /opt/vault/file/init_keys.json.safe /opt/vault/file/init_keys.json && sudo chmod 600 /opt/vault/file/init_keys.json"

    if [ $? -eq 0 ]; then
        echo "✅ Root token removed from the remote file."
        echo "   The unseal keys are kept so auto-unseal continues to work."
    else
        echo "❌ Error while cleaning up the remote file."
    fi
fi

echo ""
echo "🎉 Setup complete! You are now in production mode."
5  terraform/.gitignore  vendored  Normal file
@@ -0,0 +1,5 @@
# Terraform
.terraform/
.terraform.lock.hcl
terraform.tfstate*
terraform.tfvars
16  terraform/data.tf  Normal file
@@ -0,0 +1,16 @@
# Expects secrets at specific paths in Vault KV v2 (mount point 'secret/')

data "vault_generic_secret" "proxmox" {
  count = var.use_vault ? 1 : 0
  path  = "secret/infrastructure/proxmox"
}

data "vault_generic_secret" "opnsense" {
  count = var.use_vault ? 1 : 0
  path  = "secret/infrastructure/opnsense"
}

data "vault_generic_secret" "vm_creds" {
  count = var.use_vault ? 1 : 0
  path  = "secret/infrastructure/vm-credentials"
}
25  terraform/locals.tf  Normal file
@@ -0,0 +1,25 @@
locals {
  # SSH Public Key for Provisioning
  ssh_key = var.use_vault ? data.vault_generic_secret.vm_creds[0].data["ssh_public_key"] : var.ssh_public_key

  # CI Credentials
  ci_user     = var.use_vault ? data.vault_generic_secret.vm_creds[0].data["ci_user"] : var.ci_user
  ci_password = var.use_vault ? data.vault_generic_secret.vm_creds[0].data["ci_password"] : var.ci_password

  vms = {
    # VLAN 30: Docker
    "vm-docker-mailcow-300" = { id = 300, cores = 4, memory = 8192, vlan = 30, tags = "docker,mailcow", ip = "10.100.30.10", gw = "10.100.30.1" }
    "vm-docker-apps-301"    = { id = 301, cores = 2, memory = 4096, vlan = 30, tags = "docker,apps", ip = "10.100.30.11", gw = "10.100.30.1" }
    "vm-docker-traefik-302" = { id = 302, cores = 1, memory = 2048, vlan = 30, tags = "docker,ingress", ip = "10.100.30.12", gw = "10.100.30.1" }

    # VLAN 40: K3s
    "vm-k3s-master-400" = { id = 400, cores = 2, memory = 4096, vlan = 40, tags = "k3s,master", ip = "10.100.40.10", gw = "10.100.40.1" }
    "vm-k3s-worker-401" = { id = 401, cores = 2, memory = 4096, vlan = 40, tags = "k3s,worker", ip = "10.100.40.11", gw = "10.100.40.1" }
    "vm-k3s-worker-402" = { id = 402, cores = 2, memory = 4096, vlan = 40, tags = "k3s,worker", ip = "10.100.40.12", gw = "10.100.40.1" }
    "vm-k3s-worker-403" = { id = 403, cores = 2, memory = 4096, vlan = 40, tags = "k3s,worker", ip = "10.100.40.13", gw = "10.100.40.1" }

    # VLAN 90: Bastion
    "vm-bastion-900" = { id = 900, cores = 1, memory = 2048, vlan = 90, tags = "bastion", ip = "10.100.90.10", gw = "10.100.90.1" }
    "vm-bastion-901" = { id = 901, cores = 1, memory = 2048, vlan = 90, tags = "bastion", ip = "10.100.90.11", gw = "10.100.90.1" }
  }
}
79  terraform/main.tf  Normal file
@@ -0,0 +1,79 @@
resource "proxmox_vm_qemu" "vm_deployment" {
  for_each = local.vms

  target_node = var.pm_node

  name = "${each.key}.stabify.de"
  vmid = each.value.id

  description = "Managed by Terraform. VLAN: ${each.value.vlan} Role: ${each.value.tags} IP: ${each.value.ip}"
  clone       = var.template_name
  full_clone  = true
  agent       = 1

  start_at_node_boot     = true
  define_connection_info = false

  cpu {
    cores   = each.value.cores
    sockets = 1
  }

  memory  = each.value.memory
  balloon = 0
  scsihw  = "virtio-scsi-pci"
  boot    = "order=scsi0;net0"

  serial {
    id   = 0
    type = "socket"
  }

  disk {
    slot     = "scsi0"
    size     = "32G"
    type     = "disk"
    storage  = "local-lvm"
    iothread = true
  }

  disk {
    slot    = "ide2"
    type    = "cloudinit"
    storage = "local-lvm"
  }

  network {
    id     = 0
    model  = "virtio"
    bridge = "vmbr1"
    tag    = each.value.vlan
  }

  os_type = "cloud-init"

  searchdomain = "stabify.de"
  nameserver   = each.value.gw

  ciuser     = local.ci_user
  cipassword = local.ci_password
  sshkeys    = local.ssh_key

  ipconfig0 = "ip=${each.value.ip}/24,gw=${each.value.gw}"

  tags = each.value.tags

  lifecycle {
    ignore_changes = [network]
  }
}

resource "opnsense_unbound_host_override" "dns_entries" {
  for_each = local.vms

  enabled     = true
  hostname    = each.key
  domain      = "stabify.de"
  description = "Managed by Terraform: ${each.value.tags}"
  server      = each.value.ip
}
20  terraform/providers.tf  Normal file
@@ -0,0 +1,20 @@
provider "vault" {
  # Configuration via VAULT_ADDR and VAULT_TOKEN env vars
}

provider "proxmox" {
  pm_tls_insecure = true
  pm_api_url      = var.proxmox_api_url

  # Logic: if use_vault is true, read from the Vault data sources, otherwise use the local vars
  pm_api_token_id     = var.use_vault ? data.vault_generic_secret.proxmox[0].data["api_token_id"] : var.proxmox_api_token_id
  pm_api_token_secret = var.use_vault ? data.vault_generic_secret.proxmox[0].data["api_token_secret"] : var.proxmox_api_token_secret
}

provider "opnsense" {
  uri            = var.use_vault ? data.vault_generic_secret.opnsense[0].data["uri"] : var.opnsense_uri
  allow_insecure = true

  api_key    = var.use_vault ? data.vault_generic_secret.opnsense[0].data["api_key"] : var.opnsense_api_key
  api_secret = var.use_vault ? data.vault_generic_secret.opnsense[0].data["api_secret"] : var.opnsense_api_secret
}
67  terraform/variables.tf  Normal file
@@ -0,0 +1,67 @@
variable "use_vault" {
  type        = bool
  default     = true
  description = "Set to false to bypass Vault and use local variables (Bootstrap Mode)"
}

variable "proxmox_api_token_id" {
  type      = string
  sensitive = true
  default   = null
}

variable "proxmox_api_token_secret" {
  type      = string
  sensitive = true
  default   = null
}

variable "opnsense_api_key" {
  type      = string
  sensitive = true
  default   = null
}

variable "opnsense_api_secret" {
  type      = string
  sensitive = true
  default   = null
}

variable "ci_user" {
  type    = string
  default = null
}

variable "ci_password" {
  type      = string
  sensitive = true
  default   = null
}

variable "ssh_public_key" {
  type    = string
  default = null
}

variable "proxmox_api_url" {
  type    = string
  default = "https://10.100.0.2:8006/api2/json"
}

variable "pm_node" {
  type    = string
  default = "hzfsn-pve-01"
}

variable "template_name" {
  type        = string
  default     = "ubuntu-2404-ci"
  description = "Name of the cloud-init template on the node"
}

variable "opnsense_uri" {
  type        = string
  description = "URI to OPNsense API"
  default     = null
}
26  terraform/versions.tf  Normal file
@@ -0,0 +1,26 @@
terraform {
  required_version = ">= 1.5.0"

  # Enterprise: Remote State Management (Placeholder)
  # backend "s3" {
  #   bucket = "terraform-state"
  #   key    = "prod/infrastructure.tfstate"
  #   region = "eu-central-1"
  # }

  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "3.0.2-rc07" # Pinned as requested
    }
    opnsense = {
      source  = "browningluke/opnsense"
      version = "0.16.1"
    }
    vault = {
      source  = "hashicorp/vault"
      version = "~> 3.24.0"
    }
  }
}
33  vault-ca.crt  Normal file
@@ -0,0 +1,33 @@
-----BEGIN CERTIFICATE-----
MIIFrTCCA5WgAwIBAgIUMW5OEPxg8P8YijUOoJ2EDRMkkNswDQYJKoZIhvcNAQEL
BQAwZjELMAkGA1UEBhMCREUxDzANBgNVBAgMBkJlcmxpbjEPMA0GA1UEBwwGQmVy
bGluMRAwDgYDVQQKDAdTdGFiaWZ5MQswCQYDVQQLDAJJVDEWMBQGA1UEAwwNU3Rh
YmlmeVJvb3RDQTAeFw0yNjAxMDgxOTE3MTJaFw0zNjAxMDYxOTE3MTJaMGYxCzAJ
BgNVBAYTAkRFMQ8wDQYDVQQIDAZCZXJsaW4xDzANBgNVBAcMBkJlcmxpbjEQMA4G
A1UECgwHU3RhYmlmeTELMAkGA1UECwwCSVQxFjAUBgNVBAMMDVN0YWJpZnlSb290
Q0EwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDYIY89KCT5JkvuA2Bd
sRB5Dwk9xm9PWILekJZaopHqWTrAARW7gJU0SvDmWb8lwiiS27bXA/doAKVSmccM
N+FkQ31LF3cREbTO87NH3Ldosn2YLZXM2cf9181ORuLbLJR/fEiNbY+iL8MhnwQH
GUbery3XK1LsU5zbpdjCth0zKbWZ0Gbi8SmhHvZDUJy4BAUVKYFqH2BVfiAPAZf6
vBL0SQjaGc+9v6My6SurBQzAGyBtcaBoJ1tLR6S8PSEFDn6eQzPSZXaMJBN79wZM
WYenW1HZtKTGv8Xz3T9yzYoLuzE1VQejhPrURupfs0wcfGiIZ/iP421Klj3qg/YW
Vh2Wj4EHZLC4gV5/exUznmADEgvG6qUjV1eLkxyf0KIFzGYshxXVgrp3JCUtulMe
t52Op8yUxYgkHfCw5JpiYJ4j9dQ7pgApY89mr/tuFjlJw64oS9GKWh4l3X31m1Ss
NWESVP2zjqtE+89n8tqRBTc8HCIUnXzKy6PtbtLjYYHWWyi6UsXMW+Vq5jkGaiYZ
9NzVb3wJcOWyPQW5nLL4rWUu4E514Kx4+Rq4qsrqsucIDEbO72gWXp9X8qCUF+TB
QL4n7g+Bz6PNWOFNrSuOb5mSSethYTwVZ/4U6x23TyuchoVm22KsPHTLb22LfVGy
E4a9kc1AjcaZ0MK+wkNtv6PlvwIDAQABo1MwUTAdBgNVHQ4EFgQUET0uSUHGinGi
iM1X+s2kMksrcyAwHwYDVR0jBBgwFoAUET0uSUHGinGiiM1X+s2kMksrcyAwDwYD
VR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAgEAAI+GTF5myGhO/t1HppYg
JZfIFcSKQiR6LvWMMdE6IV74LPDq4B0nj4cSIsIdVuF3c3Sx6jyDa4tpaBYRVOuL
sLo0zogCqX0g5tnbDT7vGFd7mkYUlzF4yDFKEfsKZIYz4XqXd0lgfJtCyMoohSf2
YdO0PaAUg4NP2Buy0eE5QDF72ADvjm8HYltlc+9rZCN9lGz5IJnqfDs3mTrZrIRq
E8QELienGUhr5PatMBwkpJ1i1zFdlDRRmphehzHZ6ML3f6C1zfsNtJvtFwcOAJMe
jxozsW8sgBClwFfKfMmVU5RjXbmS0eWt37lKHLLZrwggIu/n5hGutDD83sqle/Am
mFwV3Ltc754FhY3vItVN2XeVTt402BdQL1R3Rl/+nqJ/dkZAifZuzfl9yWjjRYSh
xiAxgl3qqsRpQz5kM/klaFsFaot2ARv8TvB+hv5JWJwEGZuq7ca6nGOX2qVMOoXA
3HOTG0AzNWGYB9GcaGyBqw3iltyZHY5cizXumucELxEb+2mB7NXTBsvWZzzyUvuE
Vd8mkYB5oe6reF1XI31EnaSfnZrqnE4FtQSbZH2nIwSMq+q67p4XhKSprry6sk8P
HgUGgxp1JRYpRMr6aI4Pb1WumjdiXJpgk2F6mo/nPN1QVhkIvlIA2LzC57t7r3mz
EEUWC8tQVPJ1frfcPDKjuwI=
-----END CERTIFICATE-----