Initial commit — Stupid Simple Network Inventory

Web application for manual network inventory, built with FastAPI, Vue 3, and Docker.
Includes JWT authentication, ICMP discovery, and a CSS-card topology view.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-17 09:19:19 +02:00
commit 88cf6458d0
58 changed files with 10365 additions and 0 deletions
# Architecture
## Overview
Stupid Simple Network Inventory is a self-hosted web application for manual network inventory and logical topology visualisation. There is no auto-discovery of topology — only ICMP reachability scanning. All data is entered manually.
## Request Flow
```
Browser
│ HTTP :8080
┌─────────────────────────────────────┐
│ Nginx (frontend container) │
│ serves /usr/share/nginx/html │
│ │
│ location /api/ → proxy_pass │
│ http://backend:8000/api/ │
└──────────────────┬──────────────────┘
│ HTTP :8000 (Docker internal network)
┌─────────────────────────────────────┐
│ FastAPI (backend container) │
│ uvicorn 0.0.0.0:8000 │
│ │
│ /app/data/ (bind mount) │
│ topology.db ← SQLite │
│ secret_key.txt ← JWT secret │
└─────────────────────────────────────┘
```
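The proxy split in the diagram corresponds to a small Nginx server block. A minimal sketch of what `frontend/nginx.conf` likely contains (the SPA fallback and exact paths are assumptions, not taken from the repository):

```nginx
server {
    listen 8080;                          # nginx-unprivileged default port
    root /usr/share/nginx/html;           # Vite build output copied here

    location / {
        try_files $uri $uri/ /index.html; # SPA fallback (assumed)
    }

    location /api/ {
        # `backend` resolves via Docker's embedded DNS on the internal network
        proxy_pass http://backend:8000/api/;
    }
}
```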
## Docker Compose
Two services defined in `docker-compose.yml`:
| Resource | Type | Notes |
|----------|------|-------|
| `backend` | service | Python 3.11-slim, runs as `DOCKER_UID:DOCKER_GID` (host user), `cap_drop: ALL`, `cap_add: NET_RAW` |
| `frontend` | service | Multi-stage (Vite → `nginxinc/nginx-unprivileged`), UID 101, `cap_drop: ALL`, `no-new-privileges`, exposes `:8080` |
`./db_data:/app/data` is a bind mount. The backend process runs with the same UID/GID as the host user owning `./db_data/` (set via `DOCKER_UID`/`DOCKER_GID` in `.env`). Pre-create the directory before the first run: `mkdir -p db_data`.
Both containers share an internal Docker bridge network (`internal`). The browser never communicates directly with the backend — all API traffic goes through Nginx.
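Put together, the table above suggests a `docker-compose.yml` along these lines (a sketch under the stated constraints; key names and build contexts are assumptions):

```yaml
services:
  backend:
    build: ./backend
    user: "${DOCKER_UID}:${DOCKER_GID}"   # same UID/GID as the host owner of ./db_data
    cap_drop: [ALL]
    cap_add: [NET_RAW]                    # raw sockets for ICMP ping
    volumes:
      - ./db_data:/app/data               # bind mount: topology.db + secret_key.txt
    networks: [internal]

  frontend:
    build: ./frontend
    cap_drop: [ALL]
    security_opt:
      - no-new-privileges:true
    ports:
      - "8080:8080"                       # only externally exposed port
    networks: [internal]

networks:
  internal:
```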
## Build Pipeline
**Backend** (`backend/Dockerfile`):
1. `python:3.11-slim` base
2. Install `iputils-ping` (required for ICMP subprocess calls)
3. `pip install -r requirements.txt`
4. Start with `uvicorn main:app --host 0.0.0.0 --port 8000`
**Frontend** (`frontend/Dockerfile`):
1. Stage 1: `node:20-alpine` — runs `vite build`, outputs to `/app/dist`
2. Stage 2: `nginxinc/nginx-unprivileged` — copies `/app/dist` to `/usr/share/nginx/html`, copies `nginx.conf`
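As a two-stage sketch (stage names, npm commands, and the config destination are assumptions):

```dockerfile
# frontend/Dockerfile (sketch)
FROM node:20-alpine AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build          # vite build -> /app/dist

FROM nginxinc/nginx-unprivileged:alpine
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
```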
## Startup Sequence (backend)
`main.py` runs synchronously at import time before the ASGI app is created:
1. `_migrate_vlan_nullable()` — idempotent DDL fix for vlans table
2. `_migrate_device_virt_type()` — adds column if missing
3. `_migrate_device_url()` — adds column if missing
4. `_migrate_users()` — creates users table + seeds admin account
5. `Base.metadata.create_all(bind=engine)` — creates any missing tables
6. FastAPI app instance created, routers registered
This ordering ensures migrations never run against an uninitialised schema.
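The "adds column if missing" migrations in steps 2–3 follow the usual SQLite idempotent-DDL pattern: inspect `PRAGMA table_info` and only then `ALTER TABLE`. A sketch of that pattern (the helper name is hypothetical; the real `_migrate_device_url()` internals are not shown in this document):

```python
import sqlite3


def add_column_if_missing(conn: sqlite3.Connection, table: str,
                          column: str, ddl_type: str) -> None:
    """Idempotent DDL: ALTER TABLE only when the column is absent.

    PRAGMA table_info rows are (cid, name, type, notnull, dflt, pk);
    index 1 is the column name.
    """
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column not in existing:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl_type}")
        conn.commit()
```

Because the check-then-alter is a no-op on the second run, these migrations can safely execute at every startup.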
## Authentication Architecture
```
LoginPage.vue
│ POST /api/auth/login (form-urlencoded)
│ ← { access_token, token_type, username, must_change_password }
auth.js (setAuth)
stores token + username + mustChangePassword in localStorage
─────────────────────────────────────────────────────────────
isAuthenticated (computed ref) ← App.vue reads this
mustChangePassword (computed ref) ← App.vue shows AccountModal :forced when true
getToken() ← api.js reads this per request
App.vue template guard:
!isAuthenticated → <LoginPage>
mustChangePassword → <AccountModal :forced="true"> (blocks the app)
else → full application
api.js (axios interceptor)
every request → Authorization: Bearer <token>
every 401 response (if token existed) → clearAuth() + reload
```
The JWT payload is `{ sub: username, ver: token_version, exp: <24h> }`, signed with HS256.
The secret is loaded from `data/secret_key.txt` (auto-generated on first run, permissions 0600) or `SECRET_KEY` env var.
Password change invalidates previous tokens immediately via `token_version` bump.
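The `ver` claim makes token invalidation stateless on the token side: verification compares the claim against the user's current `token_version`, so bumping the stored version rejects every previously issued token. A self-contained HS256 sketch of this scheme using only the standard library (the real backend presumably uses a JWT library; function names here are illustrative):

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_token(username: str, token_version: int, secret: bytes) -> str:
    """Build a JWT with the documented payload: sub, ver, exp (+24h)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({
        "sub": username,
        "ver": token_version,
        "exp": int(time.time()) + 24 * 3600,
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def verify_token(token: str, secret: bytes, current_version: int):
    """Return the username, or None if the signature, expiry,
    or token_version check fails."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig_b64):
        return None
    pad = "=" * (-len(payload_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload_b64 + pad))
    if claims["exp"] < time.time() or claims["ver"] != current_version:
        return None  # expired, or invalidated by a password change
    return claims["sub"]
```

After a password change the server increments the stored version, so `verify_token(..., current_version + 1)` rejects older tokens without any server-side token list.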
## Persistence
All state is in SQLite at `./db_data/topology.db`. There is no caching layer, no background jobs, no message queue. The only writable files at runtime are `topology.db` and `secret_key.txt`, both in the bind-mounted `./db_data/` directory on the host.