Conversation
Add user 1001:1001 in backend and frontend. Modify all docker-compose files to leverage this change and harden the config.
> Note: Reviews paused
> It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough

Add a non-root app user to images; switch Django secret-key creation to `install -m 600`.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Host as Host FS / Script
    participant Docker as Docker Compose
    participant Backend as Backend Container
    participant Huey as Huey Container
    participant Frontend as Frontend Container
    Host->>Host: ensure ./db exists\nget_owner_linux ./db
    alt owner != 1001:1001 (Linux)
        Host->>Host: sudo chown -R 1001:1001 ./db
    end
    Host->>Docker: docker compose up (foreground) using DOCKER_COMPOSE_FILE
    Docker->>Backend: start (user 1001), run healthcheck
    Backend-->>Docker: healthy
    Docker->>Huey: start (read_only, cap_drop, tmpfs, user 1001)
    Docker->>Frontend: start (user 1001)
    Host->>Backend: run createsuperuser (after readiness)
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 1 failed (1 warning)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
enterprise/config/templates/docker-compose-sqlite-caddy.yml.j2 (1)

50-71: ⚠️ Potential issue | 🟠 Major

Align backend/huey UID for the shared SQLite volume to avoid permission failures.

`huey` now runs as `1001:1001` while `backend` still runs as root, and both share `./db`. If the backend creates SQLite files as root, huey may fail to read/write them. Consider running `backend` with the same UID/GID or ensuring `./db` and its files are owned/writable by `1001` (e.g., a documented `chown -R 1001:1001 ./db` pre-step or a startup permission fixup).

config/templates/docker-compose-sqlite-traefik.yml.j2 (1)
52-73: ⚠️ Potential issue | 🟠 Major

Potential SQLite permission conflict with non-root huey.

`huey` now runs as `1001:1001` while `backend` remains root, and both share `./db`. If SQLite files are created as root, `huey` can fail to access them. Please align the backend UID/GID or ensure the DB files are owned/writable by `1001` before startup.
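The pre-step suggested above could be sketched as a small host-side check (a hypothetical helper, assuming a Linux host with GNU `stat`; on macOS/BSD the equivalent invocation is `stat -f '%u:%g'`):

```shell
#!/usr/bin/env sh
# Sketch: verify the shared db directory is owned by the UID:GID the
# containers run as, and print the fixup command instead of running sudo.
check_db_owner() {
    dir="$1"
    expected="$2"
    owner="$(stat -c '%u:%g' "$dir")"   # GNU stat; use `stat -f` on BSD/macOS
    if [ "$owner" = "$expected" ]; then
        echo "ok"
    else
        echo "run: sudo chown -R $expected $dir"
    fi
}

mkdir -p /tmp/demo_db
check_db_owner /tmp/demo_db "$(id -u):$(id -g)"   # prints "ok"
```

In the compose setup the expected owner would be `1001:1001`, matching the `user:` directive on the huey service.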
🤖 Fix all issues with AI agents
In `@config/templates/docker-compose-postgresql-caddy.yml.j2`:
- Around line 100-102: The backend and huey services both mount ./db:/code/db
but run as different users, causing permission errors; make them consistent by
adding user: "1001:1001" to the backend service (so both backend and huey run as
UID/GID 1001) or alternatively ensure ./db is pre-created with ownership
1001:1001 before containers start; also scan the other PostgreSQL templates
(postgresql-bunkerweb, postgresql-traefik) and make the same consistency change
where backend has user but huey lacks it so both services using the shared ./db
volume use the same user.
In `@config/templates/docker-compose-sqlite-bunkerweb.yml.j2`:
- Around line 61-70: The compose fragment mounts ./db (volumes: - ./db:/code/db)
but sets the container user to "1001:1001" only for the Huey service, leading to
permission mismatches when the backend (which runs as root) creates the SQLite
DB; fix by making ownership consistent: either set the backend service's user
key to "1001:1001" as well, or add an init step (init container or entrypoint
script referenced from the backend service) that chowns /code/db to 1001:1001
before the backend starts; alternatively add documentation to ensure ./db is
pre-created with owner 1001:1001 — adjust the compose "user" field and/or add
the init/chown logic so both backend and Huey can read/write the SQLite file.
In `@docker-compose-build.yml`:
- Around line 12-25: The docker-compose bind-mount ./db:/code/db combined with
user: "1001:1001" will cause permission errors unless the host ./db is owned by
UID 1001; either document the required host setup (e.g., instruct to mkdir -p db
&& sudo chown 1001:1001 db) or add an automatic init step that ensures ownership
before the main process runs (for example run a startup/entrypoint action that
checks and chowns /code/db to 1001:1001 or add a lightweight init container to
perform the chown), or switch to a volume driver that handles UID mapping—update
the docker-compose file and README accordingly to include the chosen remediation
and mention the user: "1001:1001" and volumes: ./db:/code/db symbols.
In `@enterprise/config/templates/docker-compose-postgresql-caddy.yml.j2`:
- Around line 97-106: The shared bind-mount ./db:/code/db creates permission
conflicts between services (Backend and huey) running as different users; update
the docker-compose template to avoid using the same host bind for both
services—either convert ./db to a named volume used by both (so Docker manages
ownership) or give each service its own volume mount and/or ensure a consistent
container user (the user: "1001:1001" setting) and an init step that chowns the
volume; locate references to the ./db:/code/db volume in the template and
replace them with a named volume or per-service volumes and add the consistent
ownership/init approach so both Backend and huey can read/write without
permission errors.
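The named-volume alternative could look roughly like this (a hypothetical fragment; the volume name `db_data` is illustrative, not from the PR):

```yaml
# Sketch: let Docker manage ownership by replacing the ./db bind mount
# with a named volume shared by both services.
services:
  backend:
    volumes:
      - db_data:/code/db
  huey:
    user: "1001:1001"
    volumes:
      - db_data:/code/db

volumes:
  db_data:
```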
In `@enterprise/config/templates/docker-compose-postgresql-traefik.yml.j2`:
- Around line 105-114: Shared volume ./db:/code/db is mounted with user:
"1001:1001" for the huey container but the backend container runs as root,
causing permission mismatches; update the docker-compose template so both
services use the same UID/GID or ensure the volume is owned by 1001:1001 at
startup. Concretely, either set the backend service to run as user 1001 (match
the huey user: "1001:1001") or add an init step in the container entrypoint to
chown /code/db to 1001:1001 before starting the process; ensure the change
references the volumes entry "./db:/code/db" and the user field user:
"1001:1001" so both containers can read/write the shared DB directory.
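One way to realize the init/chown step is a short-lived root service that runs before the hardened services start (a hypothetical sketch; the service name `db-init` and the `busybox` image are illustrative):

```yaml
# Sketch: one-shot init container that fixes ownership of the shared volume,
# then lets the non-root services start.
services:
  db-init:
    image: busybox
    user: "0:0"                       # root, only for the one-time chown
    command: chown -R 1001:1001 /code/db
    volumes:
      - ./db:/code/db
  huey:
    user: "1001:1001"
    volumes:
      - ./db:/code/db
    depends_on:
      db-init:
        condition: service_completed_successfully
```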
In `@README.md`:
- Around line 600-601: Update the README note text that currently reads "udpate"
to the correct spelling "update" in the docker-compose template paragraph (the
block starting "[!NOTE] The docker-compose template files..."). Locate the
sentence mentioning "If you use an old docker-compose.yml file, it is
recommended to udpate it." and change "udpate" to "update" so the sentence reads
"...it is recommended to update it."
🧹 Nitpick comments (2)
backend/startup.sh (1)
8-9: Verify `install` with process substitution works in the target environment.

Using `install -m 600 <(cat /proc/sys/kernel/random/uuid)` is an unusual pattern. While it should work in bash, consider:

- The echo on line 9 runs regardless of whether `install` succeeds
- Process substitution behavior may vary in restricted environments

A more robust alternative:

♻️ Suggested improvement with error handling

```diff
- install -m 600 <(cat /proc/sys/kernel/random/uuid) db/django_secret_key
- echo "generating initial Django secret key"
+ umask 077 && cat /proc/sys/kernel/random/uuid > db/django_secret_key
+ echo "generated initial Django secret key"
```

This uses `umask` to ensure 600 permissions while being more portable and keeping the original simple approach.

config/templates/docker-compose-postgresql-bunkerweb.yml.j2 (1)

37-40: Consider hardening backend here for parity with other compose files.

Other compose variants in this PR set `read_only`, `cap_drop`, `security_opt`, `tmpfs`, and `user` for backend. If the omission here isn't intentional, consider aligning this template to avoid a weaker default.
```yaml
    read_only: true
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
    tmpfs:
      - /tmp:rw,noexec,nosuid,nodev
    volumes:
      - ./db:/code/db
    user: "1001:1001"
```
Same shared volume permission concern.

Backend and huey share `./db:/code/db` with different user contexts. See previous comments for detailed analysis and solutions.
🤖 Prompt for AI Agents
In `@enterprise/config/templates/docker-compose-postgresql-caddy.yml.j2` around
lines 97 - 106, The shared bind-mount ./db:/code/db creates permission conflicts
between services (Backend and huey) running as different users; update the
docker-compose template to avoid using the same host bind for both
services—either convert ./db to a named volume used by both (so Docker manages
ownership) or give each service its own volume mount and/or ensure a consistent
container user (the user: "1001:1001" setting) and an init step that chowns the
volume; locate references to the ./db:/code/db volume in the template and
replace them with a named volume or per-service volumes and add the consistent
ownership/init approach so both Backend and huey can read/write without
permission errors.
melinoix left a comment:
A next step in hardening the Docker setup would be to add a seccomp profile to the containers to block certain dangerous commands (permitting extractions, etc.).

This PR looks good to me.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docker-compose-build.sh (1)
39-50: ⚠️ Potential issue | 🔴 Critical

`docker compose up` (foreground) makes lines 41–50 unreachable.

Without `-d`, `docker compose up` blocks until all containers exit. The superuser creation prompt, sleep, and "ready" messages on lines 41–50 will never execute while the services are running. This is a functional regression; users will never be guided through initial setup. Either restore `-d` or move the post-startup steps into a separate flow (e.g., a second terminal instruction, or run them before `up`).

Quick fix: restore detached mode

```diff
- docker compose -f "${DOCKER_COMPOSE_FILE}" up
+ docker compose -f "${DOCKER_COMPOSE_FILE}" up -d
```
🤖 Fix all issues with AI agents
In `@docker-compose-build.sh`:
- Around line 30-36: The script uses Linux-only "stat -c" to compute DB_OWNER
and unconditionally runs "sudo chown" which can hard-fail under set -e; update
the DB_OWNER detection to try Linux syntax first and fall back to BSD/macOS
syntax (e.g., try "stat -c '%u:%g' ./db" and if that fails use "stat -f '%u:%g'
./db") to make DB_OWNER portable, and change the ownership step to attempt chown
without sudo, then try with sudo only if available, otherwise print a clear
warning (do not let chown failure abort the script under set -e); reference the
DB_OWNER variable and the chown invocation so the logic wraps those commands.
In `@docker-compose.sh`:
- Around line 12-18: The portability bug is that DB_OWNER="$(stat -c '%u:%g'
./db)" uses GNU stat and fails on macOS; update the logic around DB_OWNER and
the conditional chown so it works cross-platform by detecting which stat is
available (use GNU stat -c when present, fall back to BSD/macOS stat -f "%u:%g"
otherwise) or simplify by removing the check and performing an unconditional
sudo chown -R 1001:1001 ./db; change references to DB_OWNER, the stat
invocation, and the chown conditional accordingly.
```shell
mkdir -p ./db
DB_OWNER="$(stat -c '%u:%g' ./db)"

if [ "$DB_OWNER" != "1001:1001" ]; then
    echo "Fixing ownership of ./db (was $DB_OWNER, expected 1001:1001)"
    sudo chown -R 1001:1001 ./db
fi
```
`stat -c` is Linux-only; breaks on macOS (BSD stat).

macOS users running Docker Desktop will hit an error here because BSD `stat` doesn't support `-c`. Consider a portable alternative:

Proposed portable fix

```diff
- DB_OWNER="$(stat -c '%u:%g' ./db)"
+ if stat -c '%u:%g' ./db >/dev/null 2>&1; then
+   DB_OWNER="$(stat -c '%u:%g' ./db)"
+ else
+   # macOS / BSD stat
+   DB_OWNER="$(stat -f '%u:%g' ./db)"
+ fi
```

Also, `sudo chown` will fail if `sudo` is not installed or the user lacks privileges. Consider warning the user instead of hard-failing (given `set -e`).
🤖 Prompt for AI Agents
In `@docker-compose-build.sh` around lines 30 - 36, The script uses Linux-only
"stat -c" to compute DB_OWNER and unconditionally runs "sudo chown" which can
hard-fail under set -e; update the DB_OWNER detection to try Linux syntax first
and fall back to BSD/macOS syntax (e.g., try "stat -c '%u:%g' ./db" and if that
fails use "stat -f '%u:%g' ./db") to make DB_OWNER portable, and change the
ownership step to attempt chown without sudo, then try with sudo only if
available, otherwise print a clear warning (do not let chown failure abort the
script under set -e); reference the DB_OWNER variable and the chown invocation
so the logic wraps those commands.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
docker-compose-build.sh (2)
52-59: ⚠️ Potential issue | 🔴 Critical

Bug: `docker compose up` blocks; post-start commands are unreachable.

`docker compose up` (without `-d`) runs in the foreground and blocks the shell at line 52 until the user sends Ctrl+C or the containers exit. Lines 54–59 (`sleep 50`, `createsuperuser`) will only execute after compose stops, at which point the backend container is down and `docker compose exec` will fail. Either run in detached mode so the script can continue, or restructure the initialization:

Option 1: Use detached mode

```diff
  echo "Starting services..."
- docker compose -f "${DOCKER_COMPOSE_FILE}" up
+ docker compose -f "${DOCKER_COMPOSE_FILE}" up -d

  # Simple wait for database migrations
  echo "Giving some time for the database to be ready, please wait ..."
  sleep 50

  echo "Initialize your superuser account..."
  docker compose exec backend poetry run python manage.py createsuperuser
```

Option 2: Use a health-check wait with foreground compose in background

```diff
  echo "Starting services..."
- docker compose -f "${DOCKER_COMPOSE_FILE}" up
+ docker compose -f "${DOCKER_COMPOSE_FILE}" up &
+ COMPOSE_PID=$!

  # Simple wait for database migrations
  echo "Giving some time for the database to be ready, please wait ..."
  sleep 50

  echo "Initialize your superuser account..."
- docker compose exec backend poetry run python manage.py createsuperuser
+ docker compose -f "${DOCKER_COMPOSE_FILE}" exec backend poetry run python manage.py createsuperuser

  echo "🚀 CISO Assistant is ready!"
  echo "Connect to CISO Assistant on https://localhost:8443"
  echo "For successive runs, you can now use 'docker compose up'."
+
+ # Re-attach to compose logs
+ wait "$COMPOSE_PID"
  fi
```

33-35: ⚠️ Potential issue | 🟡 Minor

User guidance omits the `-f` flag for the custom compose file.

Both messages (lines 35 and 63) tell the user to run `docker compose up`, but the script uses `-f docker-compose-build.yml`. Without the `-f` flag, Docker Compose will look for the default `docker-compose.yml`, which may be a different configuration.

Suggested fix

```diff
- echo "For successive runs, you can now use 'docker compose up'."
+ echo "For successive runs, you can now use 'docker compose -f ${DOCKER_COMPOSE_FILE} up'."
```

Apply at both line 35 and line 63.
Also applies to: 62-63
🧹 Nitpick comments (1)
docker-compose-build.sh (1)
54-56: Hardcoded `sleep 50` is fragile for database readiness.

Consider polling for readiness instead, e.g., retry a health-check or `docker compose exec backend python manage.py check --database default` in a loop with a timeout. A fixed 50-second sleep may be too short on constrained CI runners and wastefully long on fast machines.
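The polling idea could be sketched as a small retry helper (hypothetical; the actual readiness command would be the `manage.py check` invocation above or a health endpoint):

```shell
#!/usr/bin/env sh
# Sketch: poll a readiness command with a bounded number of retries
# instead of a fixed `sleep 50`.
MAX_TRIES="${MAX_TRIES:-30}"
INTERVAL="${INTERVAL:-2}"

wait_ready() {
    tries=0
    until "$@" >/dev/null 2>&1; do
        tries=$((tries + 1))
        if [ "$tries" -ge "$MAX_TRIES" ]; then
            echo "timed out waiting for: $*" >&2
            return 1
        fi
        sleep "$INTERVAL"
    done
    echo "ready"
}

# Example usage in the script (hypothetical):
# wait_ready docker compose exec -T backend poetry run python manage.py check --database default
```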
this is counterproductive, just ignore it.
@coderabbitai full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
enterprise/config/templates/docker-compose-sqlite-caddy.yml.j2 (1)

2-33: ⚠️ Potential issue | 🟡 Minor

Backend service lacks hardening directives present in huey service.

The backend service in the Enterprise SQLite Caddy template (lines 2-33) does not include `read_only`, `cap_drop`, `security_opt`, `tmpfs`, or `user: "1001:1001"`, while the huey service in the same file (lines 62-71) has all of these hardening directives. Both services use the same Docker image (ghcr.io/intuitem/ciso-assistant-enterprise-backend:latest).

If this difference is intentional (e.g., backend requires elevated privileges to run migrations), document the reason. If not, apply the same hardening directives to the backend service for consistency.
enterprise/config/templates/docker-compose-postgresql-traefik.yml.j2 (1)

99-100: ⚠️ Potential issue | 🟡 Minor

Duplicate `EMAIL_USE_TLS`: the unconditional line will always set it, even when undefined.

Line 99 conditionally sets `EMAIL_USE_TLS` when `email.use_tls` is truthy. Line 100 unconditionally sets it again. When `use_tls` is falsy, line 100 will still inject `EMAIL_USE_TLS=` (empty or False). When truthy, it's a harmless duplicate. This is a pre-existing issue but worth fixing while touching this block.

🐛 Proposed fix

```diff
  {% if email.use_tls %}- EMAIL_USE_TLS={{ email.use_tls }}{% endif %}
- - EMAIL_USE_TLS={{ email.use_tls }}
  - EMAIL_HOST_USER={{ email.user }}
```

docker-compose-build.sh (1)
52-63: ⚠️ Potential issue | 🟠 Major

`docker compose up` without `-d` blocks the shell; post-startup commands are unreachable.

Line 53 removed the `-d` flag, so `docker compose up` runs in the foreground and never returns while services are alive. The `sleep 50`, `createsuperuser`, and readiness messages on lines 55–63 will only execute after the user terminates compose (Ctrl+C), at which point the containers are stopped and `docker compose exec` will fail.

Compare with `docker-compose.sh`, which correctly uses `up -d` plus a health-check polling loop.

🐛 Proposed fix: match the pattern from docker-compose.sh

```diff
  echo "Starting services..."
- docker compose -f "${DOCKER_COMPOSE_FILE}" up
+ docker compose -f "${DOCKER_COMPOSE_FILE}" up -d

- echo "Giving some time for the database to be ready, please wait ..."
- sleep 50
+ echo "Waiting for backend to be ready..."
+ until docker compose -f "${DOCKER_COMPOSE_FILE}" exec -T backend curl -f http://localhost:8000/api/health/ >/dev/null 2>&1; do
+   echo "Backend is not ready - waiting 10s..."
+   sleep 10
+ done

- echo "Initialize your superuser account..."
+ echo "Backend is ready! Initialize your superuser account..."
  docker compose exec backend poetry run python manage.py createsuperuser

  echo "🚀 CISO Assistant is ready!"
```

config/docker-compose-barebone.yml (1)
69-95: ⚠️ Potential issue | 🟡 Minor

Frontend service is missing a `restart` policy.

Backend (line 6) and huey (line 42) both specify `restart: always`, but the frontend service has no `restart` directive. If this is intentional for the barebone config, ignore; otherwise, consider adding it for consistency and resilience.

Proposed fix

```diff
  frontend:
    container_name: frontend
    environment:
      ...
    image: ghcr.io/intuitem/ciso-assistant-community/frontend:latest
    pull_policy: always
+   restart: always
    read_only: true
```

config/templates/docker-compose-postgresql-traefik.yml.j2 (2)
116-132: ⚠️ Potential issue | 🟠 Major

Frontend is missing `read_only`, `cap_drop`, `security_opt`, and `tmpfs` directives.

The `user: "1001:1001"` and HOME/XDG env vars were added, but the rest of the hardening block (`read_only`, `cap_drop`, `security_opt`, `tmpfs`) is missing, unlike every other compose file in this PR. Running as non-root is good, but without `read_only` and `cap_drop: ALL` the container's attack surface remains wider than intended.

Proposed fix: add the missing hardening directives to frontend

```diff
      - XDG_DATA_HOME=/tmp/.local/share
    image: ghcr.io/intuitem/ciso-assistant-community/frontend:latest
    pull_policy: always
+   read_only: true
+   cap_drop:
+     - ALL
+   security_opt:
+     - no-new-privileges:true
+   tmpfs:
+     - /tmp:rw,noexec,nosuid,nodev
    depends_on:
      backend:
        condition: service_healthy
    user: "1001:1001"
```
22-68: ⚠️ Potential issue | 🟠 Major

Apply comprehensive hardening directives to backend service across all Docker Compose templates.

The backend service lacks the security hardening applied to huey in all six templates (postgresql-bunkerweb, postgresql-caddy, postgresql-traefik, sqlite-bunkerweb, sqlite-caddy, sqlite-traefik). The huey service consistently includes `read_only: true`, `cap_drop: [ALL]`, `security_opt: [no-new-privileges:true]`, and `user: "1001:1001"`, but backend is missing these directives entirely. Frontend also lacks full hardening in most templates except sqlite-caddy. Align backend (and frontend) security configuration with huey to ensure consistent container hardening across all deployment scenarios.
🤖 Fix all issues with AI agents
In `@backend/startup.sh`:
- Line 8: Fix the mixed indentation on the install command: replace the tab
characters at the start of the line containing "install -m 600 <(cat
/proc/sys/kernel/random/uuid) db/django_secret_key" with the same number of
spaces used elsewhere in the script so indentation is consistent; locate that
exact command string in the startup.sh and normalize it to spaces-only
indentation.
In `@config/templates/docker-compose-postgresql-traefik.yml.j2`:
- Around line 35-38: The backend service in the
docker-compose-postgresql-traefik.yml.j2 template is missing the runtime
hardening directives; update the backend service (same service block that sets
HOME/XDG_* env vars) to include read_only: true, cap_drop: ['ALL'],
security_opt: ['no-new-privileges:true', 'seccomp:unconfined'] (or match your
other compose files), a tmpfs mount entry (tmpfs: /tmp or matching pattern used
elsewhere), and set user to a non-root UID/GID consistent with other templates;
ensure these directives mirror the backend hardening block used in
docker-compose.yml and docker-compose-barebone.yml so the container does not run
as root and has filesystem/capability restrictions.
In `@config/templates/docker-compose-sqlite-caddy.yml.j2`:
- Around line 12-15: The backend service in the docker-compose template is
missing the runtime hardening applied to huey and frontend (read_only, cap_drop,
security_opt, tmpfs, and user: "1001:1001"); either add those same directives to
the backend service block so it matches the hardening applied to huey and
frontend (add read_only: true, cap_drop: [ALL], security_opt:
["no-new-privileges"], tmpfs: ["/tmp:rw,noexec,nosuid,size=65536k"] and user:
"1001:1001") or, if the backend genuinely requires writable files at startup
(e.g., for migrations), add an inline comment in the backend service explaining
the exception and why each hardening item is intentionally omitted; update any
environment entries (HOME/XDG_*) unchanged unless the writable requirement
mandates different values.
In `@docker-compose.yml`:
- Line 106: The Caddy volume mount was changed from "./caddy_data:/data" to
"./db/caddy:/data/caddy", which can leave existing deployments with TLS state in
the old "./caddy_data" folder; update the docker-compose.yml change notes and
migration docs to mention this move and instruct users to either copy the
contents of "./caddy_data" into the new "./db/caddy" path or allow Caddy to
regenerate certificates, and explicitly document that the docker-compose service
volume (the "./db/caddy" mount) now stores Caddy state to avoid surprise TLS
regeneration during upgrades.
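A migration snippet along the lines described above might look like this (a hypothetical sketch; it assumes the old `./caddy_data` directory layout and should be verified before deleting anything):

```shell
#!/usr/bin/env sh
# Sketch: carry existing Caddy TLS state over to the new ./db/caddy path,
# or let Caddy regenerate certificates if there is nothing to migrate.
if [ -d ./caddy_data ]; then
    mkdir -p ./db/caddy
    cp -a ./caddy_data/. ./db/caddy/
    echo "migrated Caddy state to ./db/caddy (remove ./caddy_data after verifying)"
else
    echo "no ./caddy_data found; Caddy will regenerate certificates"
fi
```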
🧹 Nitpick comments (10)
enterprise/config/templates/docker-compose-sqlite-traefik.yml.j2 (1)
83-106: Frontend is missing the runtime hardening applied to huey.

Huey gets `read_only`, `cap_drop: ALL`, `security_opt`, and `tmpfs`, but the frontend service only gets `user: "1001:1001"`. Consider applying the same hardening to frontend for consistency, as it's also an externally-facing service.

🔒 Proposed hardening for frontend

```diff
  user: "1001:1001"
+ read_only: true
+ cap_drop:
+   - ALL
+ security_opt:
+   - no-new-privileges:true
+ tmpfs:
+   - /tmp:rw,noexec,nosuid,nodev
  labels:
```

config/templates/docker-compose-postgresql-caddy.yml.j2 (1)
110-126: Frontend missing runtime hardening (consistent gap across all templates).

Same as other templates: frontend gets `user: "1001:1001"` but not `read_only`, `cap_drop`, `security_opt`, or `tmpfs`. Consider applying the same hardening as huey for defense-in-depth on this externally-facing service.

🔒 Proposed hardening for frontend

```diff
  user: "1001:1001"
+ read_only: true
+ cap_drop:
+   - ALL
+ security_opt:
+   - no-new-privileges:true
+ tmpfs:
+   - /tmp:rw,noexec,nosuid,nodev
  caddy:
```

README.md (1)
604-610: Non-root transition documentation is clear and helpful.

The step-by-step migration instructions are straightforward. Consider adding a note that macOS/Docker Desktop users can skip the `chown` step, consistent with the behavior in the shell scripts.

📝 Suggested addition

```diff
 - sudo chown -R 1001:1001 db
+> Note: On macOS with Docker Desktop, the `chown` step can be skipped as Docker Desktop handles ownership mapping.
 - docker compose up -d
```

enterprise/config/templates/docker-compose-postgresql-traefik.yml.j2 (1)
128-137: Frontend is missing `read_only`, `cap_drop`, `security_opt`, and `tmpfs`; inconsistent with the community sqlite-caddy template.

In `config/templates/docker-compose-sqlite-caddy.yml.j2`, the frontend gets the full hardening block (read_only, cap_drop, security_opt, tmpfs) plus user. Here, only `user` is set. Consider aligning for consistent security posture across all templates.

♻️ Suggested addition

```diff
      - XDG_DATA_HOME=/tmp/.local/share
    image: ghcr.io/intuitem/ciso-assistant-enterprise-frontend:latest
    pull_policy: always
    depends_on:
      backend:
        condition: service_healthy
+   read_only: true
+   cap_drop:
+     - ALL
+   security_opt:
+     - no-new-privileges:true
+   tmpfs:
+     - /tmp:rw,noexec,nosuid,nodev
    user: "1001:1001"
```

config/templates/docker-compose-postgresql-bunkerweb.yml.j2 (1)
122-131: Frontend missing full hardening; same inconsistency as other templates.

Same gap as the enterprise templates: `user: "1001:1001"` is set but `read_only`, `cap_drop`, `security_opt`, and `tmpfs` are absent. The sqlite-caddy template has the full set. Consider aligning.

docker-compose-build.yml (2)
12-25: Backend receives full hardening here: good, but unique among all compose files.

This is the only compose file where the backend gets `read_only`, `cap_drop`, `security_opt`, `tmpfs`, and `user: "1001:1001"`. All Jinja2 templates omit these for the backend. If backend can run hardened in the build variant, it should likely be hardened in the templates too (or vice versa: document the discrepancy).
93-93: Caddy data volume moved under `./db/caddy`.

Note that `./db` is recursively chown'd to `1001:1001` in the build script, but the Caddy container runs as root by default, so this shouldn't cause permission problems. Just be aware that Caddy's TLS certificates and state will now live inside the `./db` tree, which could be surprising when users back up or inspect `./db`.

enterprise/config/templates/docker-compose-postgresql-caddy.yml.j2 (1)
120-129: Same frontend hardening gap as other templates: `user` is set but `read_only`/`cap_drop`/`security_opt`/`tmpfs` are missing.

Consistent with other enterprise templates but inconsistent with the community sqlite-caddy template. See earlier comment for the suggested fix pattern.
docker-compose.sh (1)
5-31: Ownership handling with GNU stat detection is well-implemented.

The `is_linux_gnu_stat`/`get_owner_linux` pattern properly addresses the macOS portability concern from the previous review. The skip message for non-Linux is clear.

One minor concern: `sudo chown` (line 27) will hard-fail the script under `set -euo pipefail` if `sudo` is unavailable or the user lacks privileges. Consider wrapping it or printing a helpful message.

💡 Optional resilience improvement

```diff
- sudo chown -R "$EXPECTED_OWNER" ./db
+ if ! sudo chown -R "$EXPECTED_OWNER" ./db; then
+   echo "⚠️ Could not fix ownership of ./db. Please run: sudo chown -R $EXPECTED_OWNER ./db"
+   exit 1
+ fi
```

docker-compose.yml (1)
94-112: Consider applying hardening to the Caddy service as well.

Backend, huey, and frontend all receive `read_only`, `cap_drop: ALL`, `security_opt: no-new-privileges:true`, and `tmpfs`, but Caddy does not. While Caddy needs to write to `/data/caddy` (already mounted as a volume) and may need `NET_BIND_SERVICE` if binding to privileged ports, you could still apply a subset of hardening:

Suggested partial hardening for Caddy

```diff
  caddy:
    container_name: caddy
    image: caddy:2.10.0
    environment:
      - CISO_ASSISTANT_URL=https://localhost:8443
    depends_on:
      backend:
        condition: service_healthy
    restart: unless-stopped
+   cap_drop:
+     - ALL
+   cap_add:
+     - NET_BIND_SERVICE
+   security_opt:
+     - no-new-privileges:true
+   read_only: true
+   tmpfs:
+     - /tmp:rw,noexec,nosuid,nodev
+     - /config:rw,noexec,nosuid,nodev
    ports:
      - 8443:8443
    volumes:
      - ./db/caddy:/data/caddy
```
backend/startup.sh
Outdated
```diff
 if [ ! -n "$DJANGO_SECRET_KEY" ]; then
   if [ ! -f db/django_secret_key ]; then
-    cat /proc/sys/kernel/random/uuid >db/django_secret_key
+    install -m 600 <(cat /proc/sys/kernel/random/uuid) db/django_secret_key
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's check the actual file to see indentation and context
cat -n backend/startup.sh | head -20
```

Repository: intuitem/ciso-assistant-community
Length of output: 986
🏁 Script executed:
```shell
# Check git history for this specific change
git log --oneline -10 -- backend/startup.sh
```

Repository: intuitem/ciso-assistant-community
Length of output: 128
🏁 Script executed:
```shell
# Look for any related fallback mechanisms or error handling
rg -A 5 -B 5 'django_secret_key' backend/
```

Repository: intuitem/ciso-assistant-community
Length of output: 907
🏁 Script executed:
```shell
# Check if /proc/sys/kernel/random/uuid is used elsewhere
rg '/proc/sys/kernel/random/uuid'
```

Repository: intuitem/ciso-assistant-community
Length of output: 169
🏁 Script executed:
```shell
# Examine the docker-compose files to understand the security context
fd 'docker-compose.*\.ya?ml' | head -5 | xargs cat | head -100
```

Repository: intuitem/ciso-assistant-community
Length of output: 2676
Fix indentation: line 8 uses tabs while the rest of the script uses spaces.
Line 8 mixes indentation styles (tabs) with the rest of the file (spaces), creating a formatting inconsistency.
The concern about /proc/sys/kernel/random/uuid availability in hardened containers is unlikely to be an issue. Despite cap_drop: ALL and read_only: true, /proc/sys/kernel/random/uuid remains accessible because procfs is a separate kernel-provided mount point not affected by filesystem read-only settings or capability restrictions. Additionally, this is a development-only fallback—production deployments should set DJANGO_SECRET_KEY as an environment variable (which is already checked first in the script).
🤖 Prompt for AI Agents
In `@backend/startup.sh` at line 8, Fix the mixed indentation on the install
command: replace the tab characters at the start of the line containing "install
-m 600 <(cat /proc/sys/kernel/random/uuid) db/django_secret_key" with the same
number of spaces used elsewhere in the script so indentation is consistent;
locate that exact command string in the startup.sh and normalize it to
spaces-only indentation.
```yaml
      - HOME=/tmp
      - XDG_CACHE_HOME=/tmp/.cache
      - XDG_CONFIG_HOME=/tmp/.config
      - XDG_DATA_HOME=/tmp/.local/share
```
Backend is missing read_only, cap_drop, security_opt, tmpfs, and user directives.
The environment variables were added, but the runtime hardening block that every other compose file applies to backend is absent here. Compare with docker-compose.yml (lines 16–26) and docker-compose-barebone.yml (lines 16–25) where backend gets the full set. This template only adds the HOME/XDG env vars but omits the security constraints and non-root user, leaving the backend container running as root without filesystem or capability restrictions.
Proposed fix — add the missing hardening block to backend
- XDG_DATA_HOME=/tmp/.local/share
- POSTGRES_NAME={{ postgres.name }}
...
+ read_only: true
+ cap_drop:
+ - ALL
+ security_opt:
+ - no-new-privileges:true
+ tmpfs:
+ - /tmp:rw,noexec,nosuid,nodev
volumes:
- ./db:/code/db
+ user: "1001:1001"
  healthcheck:

🤖 Prompt for AI Agents
In `@config/templates/docker-compose-postgresql-traefik.yml.j2` around lines 35 -
38, The backend service in the docker-compose-postgresql-traefik.yml.j2 template
is missing the runtime hardening directives; update the backend service (same
service block that sets HOME/XDG_* env vars) to include read_only: true,
cap_drop: ['ALL'], security_opt: ['no-new-privileges:true'] (matching the
other compose files; avoid seccomp:unconfined, which disables the default
seccomp profile and weakens hardening), a tmpfs mount entry
(tmpfs: /tmp or matching pattern used elsewhere), and set user to a non-root
UID/GID consistent with other templates; ensure these directives mirror the
backend hardening block used in docker-compose.yml and
docker-compose-barebone.yml so the container does not run as root and has
filesystem/capability restrictions.
- HOME=/tmp
- XDG_CACHE_HOME=/tmp/.cache
- XDG_CONFIG_HOME=/tmp/.config
- XDG_DATA_HOME=/tmp/.local/share
🧩 Analysis chain
🏁 Script executed:
# First, verify the file exists and check its overall structure
fd -e yml -e j2 'docker-compose-sqlite-caddy' && echo "File found"

Repository: intuitem/ciso-assistant-community
Length of output: 202
🏁 Script executed:
# Check backend, huey, and frontend services for hardening markers
file_path=$(fd -e yml -e j2 'docker-compose-sqlite-caddy')
if [ -n "$file_path" ]; then
echo "=== File structure and service locations ==="
cat -n "$file_path" | head -100
fi

Repository: intuitem/ciso-assistant-community
Length of output: 293
🏁 Script executed:
# Check for hardening markers in each service section
file_path=$(fd -e yml -e j2 'docker-compose-sqlite-caddy')
if [ -n "$file_path" ]; then
echo "=== Backend hardening check ==="
awk '/^ backend:/,/^ [a-z_]/ {print NR": "$0}' "$file_path" | head -30
echo ""
echo "=== Huey hardening check ==="
awk '/^ huey:/,/^ [a-z_]/ {print NR": "$0}' "$file_path" | head -50
echo ""
echo "=== Frontend hardening check ==="
awk '/^ frontend:/,/^ [a-z_]/ {print NR": "$0}' "$file_path" | head -30
fi

Repository: intuitem/ciso-assistant-community
Length of output: 744
🏁 Script executed:
# Also check docker-compose-build.yml for backend hardening
echo "=== Checking docker-compose-build.yml backend hardening ==="
if fd -e yml 'docker-compose-build' > /dev/null; then
build_file=$(fd -e yml 'docker-compose-build')
awk '/^ backend:/,/^ [a-z_]/ {print NR": "$0}' "$build_file" | grep -E '(read_only|cap_drop|user:|security_opt|tmpfs)' || echo "No hardening markers found in docker-compose-build.yml backend"
fi

Repository: intuitem/ciso-assistant-community
Length of output: 347
🏁 Script executed:
# Read the main config file with proper newline handling
file_path=$(fd -e yml -e j2 'docker-compose-sqlite-caddy' | head -1)
echo "Reading: $file_path"
wc -l "$file_path"

Repository: intuitem/ciso-assistant-community
Length of output: 193
🏁 Script executed:
# Read the full file to see structure
file_path=$(fd -e yml -e j2 'docker-compose-sqlite-caddy' | head -1)
cat -n "$file_path"

Repository: intuitem/ciso-assistant-community
Length of output: 6449
🏁 Script executed:
# Find and read docker-compose-build.yml
build_file=$(fd -e yml 'docker-compose-build' | head -1)
if [ -n "$build_file" ]; then
echo "Found: $build_file"
wc -l "$build_file"
echo "=== Backend service section ==="
awk '/^ backend:/,/^ [a-z_]/ {print NR": "$0}' "$build_file"
fi

Repository: intuitem/ciso-assistant-community
Length of output: 182
🏁 Script executed:
# Read the entire docker-compose-build.yml file
build_file=$(fd -e yml 'docker-compose-build' | head -1)
cat -n "$build_file"

Repository: intuitem/ciso-assistant-community
Length of output: 3147
Backend service is missing runtime hardening that's applied to huey and frontend.
The backend gets the HOME/XDG env vars but lacks read_only, cap_drop, security_opt, tmpfs, and user: "1001:1001" — all of which are applied to huey (lines 58–67) and frontend (lines 87–97) in this same file. In contrast, docker-compose-build.yml applies full hardening to the backend.
If this is intentional (e.g., backend needs write access for migrations at startup), document it inline with a comment. Otherwise, align the backend hardening with the other services.
🤖 Prompt for AI Agents
In `@config/templates/docker-compose-sqlite-caddy.yml.j2` around lines 12 - 15,
The backend service in the docker-compose template is missing the runtime
hardening applied to huey and frontend (read_only, cap_drop, security_opt,
tmpfs, and user: "1001:1001"); either add those same directives to the backend
service block so it matches the hardening applied to huey and frontend (add
read_only: true, cap_drop: [ALL], security_opt: ["no-new-privileges"], tmpfs:
["/tmp:rw,noexec,nosuid,size=65536k"] and user: "1001:1001") or, if the backend
genuinely requires writable files at startup (e.g., for migrations), add an
inline comment in the backend service explaining the exception and why each
hardening item is intentionally omitted; leave the environment entries
(HOME/XDG_*) unchanged unless the writable requirement mandates different
values.
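Taken together, the backend block suggested by this comment would look roughly like the following sketch; the directive values are assumed from the huey and frontend services in the same file, and unrelated keys are elided:

```yaml
backend:
  # ... image, environment, etc. unchanged ...
  read_only: true
  cap_drop:
    - ALL
  security_opt:
    - no-new-privileges:true
  tmpfs:
    - /tmp:rw,noexec,nosuid,nodev
  user: "1001:1001"
  volumes:
    - ./db:/code/db
```

If the backend needs a writable path beyond `/tmp` and the mounted `./db` volume (for example during migrations), add a dedicated tmpfs or volume for it rather than dropping `read_only`.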
- 8443:8443
volumes:
  - ./caddy_data:/data
  - ./db/caddy:/data/caddy
Caddy volume path change may break existing deployments.
The volume mount changed from ./caddy_data:/data to ./db/caddy:/data/caddy. This consolidates storage under ./db (which aligns with the non-root ownership strategy), but existing users upgrading will have Caddy's TLS state in the old ./caddy_data directory. Caddy can regenerate internal TLS certificates automatically, so this isn't critical — but it's worth calling out in release/migration notes so users aren't surprised by a stale ./caddy_data directory or a TLS regeneration on upgrade.
🤖 Prompt for AI Agents
In `@docker-compose.yml` at line 106, The Caddy volume mount was changed from
"./caddy_data:/data" to "./db/caddy:/data/caddy", which can leave existing
deployments with TLS state in the old "./caddy_data" folder; update the
docker-compose.yml change notes and migration docs to mention this move and
instruct users to either copy the contents of "./caddy_data" into the new
"./db/caddy" path or allow Caddy to regenerate certificates, and explicitly
document that the docker-compose service volume (the "./db/caddy" mount) now
stores Caddy state to avoid surprise TLS regeneration during upgrades.
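For existing deployments, a hypothetical one-time migration step could look like the sketch below. The `./caddy_data` and `./db/caddy` paths come from the compose change; the initial setup lines only simulate an existing deployment for demonstration:

```shell
#!/bin/sh
# Simulate an existing deployment for demonstration purposes only.
mkdir -p ./caddy_data
echo "tls-state" > ./caddy_data/ca.crt

# One-time migration: copy Caddy's TLS state under ./db so it survives
# the ./caddy_data -> ./db/caddy volume change.
if [ -d ./caddy_data ] && [ ! -d ./db/caddy ]; then
  mkdir -p ./db
  cp -a ./caddy_data ./db/caddy
  # On Linux hosts, match the non-root container user if needed:
  # sudo chown -R 1001:1001 ./db/caddy
fi
```

Skipping this step is also safe: Caddy regenerates its internal CA and certificates on first start, at the cost of clients re-trusting the new internal CA.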
…t-community into harden-containers
Add user 1001:1001 in backend and frontend
Modify all docker-compose files to leverage this change and harden the config.
Summary by CodeRabbit
New Features
Improvements
Documentation
Chores