feat(epic): expand vuln management capabilities #3893
Conversation

Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Added a new sec_intel app (models, feeds, tasks, views, serializers, migrations, UI), extended Vulnerability with dates/SLA and relations to advisories/CWEs, added settings and sync/enrich flows, introduced EBIOS RM sub-context parsing, workshop-specific SSE workflows, and frontend integrations for forms, tables, routes, and actions.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ChatWidget as Chat Widget
    participant Router as Workflow Router
    participant Workshop as Workshop Workflow
    participant Backend as Backend API
    Client->>ChatWidget: send message + page URL
    ChatWidget->>Router: ParsedContext (includes sub_context)
    Router->>Router: detect model_key == "ebios_rm_study" and sub_context
    Router->>Router: match sub_context prefix -> workshop-N
    Router->>Router: scan user message for workshop keywords
    alt workshop matched and keyword found
        Router->>Workshop: route to workshop workflow (SSE)
        Workshop->>Backend: generate/create artifacts (via helpers)
        Backend-->>Workshop: stream SSE tokens / result
        Workshop-->>ChatWidget: SSE stream to client
    else no match or no keyword
        Router->>Backend: fallback routing (context/global workflows)
    end
    ChatWidget-->>Client: render streamed assistant output
```
```mermaid
sequenceDiagram
    participant Scheduler as Periodic Scheduler
    participant Task as Huey Task
    participant Feed as Feed Class
    participant External as External Source
    participant DB as Database
    Scheduler->>Task: trigger cron task
    Task->>Feed: instantiate (checks feature flag)
    Feed->>External: fetch data (HTTP or local file)
    External-->>Feed: return raw payload
    Feed->>Feed: parse & normalize records
    Feed->>DB: bulk create/update SecurityAdvisory/CWE rows
    DB-->>Feed: return created/updated counts
    Feed-->>Task: return result
    Task->>Scheduler: log completion or error
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 19
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/global_settings/views.py (1)
7-7: ⚠️ Potential issue | 🟡 Minor: Remove duplicate import.
`SerializerFactory` is imported twice from `core.serializers` (lines 7 and 11).

🧹 Suggested fix

```diff
 from core.serializers import SerializerFactory
 from iam.models import Folder, Permission, RoleAssignment, User
 from iam.sso.models import SSOSettings
 from integrations.models import IntegrationProvider
-from core.serializers import SerializerFactory
 from django.conf import settings
```

Also applies to: 11-11
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/global_settings/views.py` at line 7, Remove the duplicate import of SerializerFactory from core.serializers by keeping a single import statement for SerializerFactory and deleting the redundant one; locate the two identical import lines referencing SerializerFactory in global_settings/views.py and ensure the remaining imports are properly ordered/organized (follow project import ordering or isort if used).
🧹 Nitpick comments (18)
backend/sec_intel/models.py (1)
29-52: Add explicit range validators to score fields.
Consider constraining CVSS/EPSS values at the model layer to prevent invalid persisted values from external feed drift.

Suggested hardening

```diff
+from django.core.validators import MaxValueValidator, MinValueValidator
@@
 cvss_base_score = models.DecimalField(
     max_digits=4,
     decimal_places=1,
     null=True,
     blank=True,
+    validators=[MinValueValidator(0), MaxValueValidator(10)],
     verbose_name=_("CVSS base score"),
 )
@@
 epss_score = models.DecimalField(
     max_digits=5,
     decimal_places=4,
     null=True,
     blank=True,
+    validators=[MinValueValidator(0), MaxValueValidator(1)],
     verbose_name=_("EPSS score"),
 )
@@
 epss_percentile = models.DecimalField(
     max_digits=5,
     decimal_places=4,
     null=True,
     blank=True,
+    validators=[MinValueValidator(0), MaxValueValidator(1)],
     verbose_name=_("EPSS percentile"),
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/sec_intel/models.py` around lines 29 - 52, Add explicit MinValueValidator/MaxValueValidator constraints to the numeric score fields to prevent invalid persisted values: for cvss_base_score (field name cvss_base_score) add validators enforcing 0.0 <= value <= 10.0; for epss_score (epss_score) add validators enforcing 0.0 <= value <= 1.0; for epss_percentile (epss_percentile) add validators enforcing 0.0 <= value <= 1.0 (the field's max_digits=5 with decimal_places=4 cannot store values up to 100, and the feed reports percentiles on a 0-1 scale). Import django.core.validators.MinValueValidator and MaxValueValidator at the top of backend/sec_intel/models.py, attach validators=[MinValueValidator(...), MaxValueValidator(...)] to each DecimalField, and run makemigrations/migrate to persist the model changes.

frontend/src/routes/(app)/(internal)/cves/sync-kev/+server.ts (1)
6-9: Remove redundant manual `Content-Type` header in server POST proxy.
The SvelteKit `handleFetch` hook automatically injects `Content-Type: application/json` for all `BASE_API_URL` requests, making the manual header assignment unnecessary.

♻️ Suggested simplification

```diff
 export const POST: RequestHandler = async ({ fetch }) => {
 	const res = await fetch(`${BASE_API_URL}/cves/sync-kev/`, {
-		method: 'POST',
-		headers: { 'Content-Type': 'application/json' }
+		method: 'POST'
 	});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/`(app)/(internal)/cves/sync-kev/+server.ts around lines 6 - 9, The POST proxy is redundantly setting 'Content-Type': 'application/json' on the fetch to BASE_API_URL even though SvelteKit's handleFetch injects that already; update the fetch call (the one using BASE_API_URL in +server.ts) by removing the headers object or at least the 'Content-Type' entry so the request relies on the automatic header injection, keeping method: 'POST' and any body intact (refer to the fetch invocation using BASE_API_URL).

frontend/src/routes/(app)/(internal)/cwes/batch-action/+server.ts (1)
9-13: Redundant `Content-Type` header.
The `handleFetch` hook in `hooks.server.ts` automatically adds `Content-Type: application/json` to all fetch requests targeting `BASE_API_URL`, making this header unnecessary.

♻️ Optional cleanup

```diff
 const res = await fetch(endpoint, {
 	method: 'POST',
-	headers: { 'Content-Type': 'application/json' },
 	body: JSON.stringify(body)
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/`(app)/(internal)/cwes/batch-action/+server.ts around lines 9 - 13, Remove the redundant explicit 'Content-Type' header from the POST fetch in +server.ts because the handleFetch hook in hooks.server.ts already injects 'Content-Type: application/json' for requests to BASE_API_URL; update the fetch call that creates 'res' (the fetch to 'endpoint' with 'body') to omit the headers object so the hook provides the header automatically.

frontend/src/routes/(app)/(internal)/cwes/autocomplete/+server.ts (1)
5-9: Consider adding error handling for non-JSON responses.
Same concern as the CVE autocomplete endpoint: if the backend returns a non-JSON response, `res.json()` will throw. Consider aligning with the error handling pattern used in the batch-action endpoints.

♻️ Suggested improvement

```diff
 export const GET: RequestHandler = async ({ fetch, url }) => {
 	const endpoint = `${BASE_API_URL}/cwes/autocomplete/${url.search}`;
 	const res = await fetch(endpoint);
+	if (!res.ok) {
+		return json({ error: 'Failed to fetch autocomplete data' }, { status: res.status });
+	}
 	const data = await res.json();
 	return json(data, { status: res.status });
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/`(app)/(internal)/cwes/autocomplete/+server.ts around lines 5 - 9, The GET request handler (export const GET) currently calls res.json() directly which will throw on non-JSON responses; update the GET handler to catch JSON parse errors by attempting res.json() inside try/catch (or checking content-type) and if parsing fails call await res.text() and return a sensible error payload (e.g., json({ error: text || 'Non-JSON response' }, { status: res.status })) while preserving the original response status; ensure you reference the existing variables/endpoints (endpoint, fetch, res) and return the response using json(...) as before.

frontend/src/routes/(app)/(internal)/[model=urlmodel]/+page.svelte (2)
186-198: Add fallback for toast message.
If the backend response JSON lacks both `detail` and `error` fields, the toast will display `undefined`. Consider adding a fallback message.

♻️ Suggested improvement

```diff
 onclick={async () => {
 	try {
 		const res = await fetch('/cves/sync-kev', { method: 'POST' });
 		const result = await res.json();
 		toastStore.trigger({
-			message: result.detail || result.error,
+			message: result.detail || result.error || m.syncComplete(),
 			preset: res.ok ? 'success' : 'error'
 		});
 		if (res.ok) invalidateAll();
 	} catch {
 		toastStore.trigger({ message: m.syncKevFailed(), preset: 'error' });
 	}
 }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/`(app)/(internal)/[model=urlmodel]/+page.svelte around lines 186 - 198, The toast message can be undefined because result.detail and result.error might both be missing; update the onclick handler's success/error toast logic (the async callback that calls fetch('/cves/sync-kev')) to provide a fallback message when result.detail and result.error are falsy — for example use a default string like m.syncKevFailed() or "Operation completed" (choose appropriate one) when calling toastStore.trigger, and keep the existing preset selection and invalidateAll() behavior intact.
207-222: Same fallback concern for CWE sync toast message.
Apply the same fallback pattern here to avoid displaying `undefined` if the response lacks expected fields.

♻️ Suggested improvement

```diff
 onclick={async () => {
 	try {
 		const res = await fetch('/cwes/sync-catalog', { method: 'POST' });
 		const result = await res.json();
 		toastStore.trigger({
-			message: result.detail || result.error,
+			message: result.detail || result.error || m.syncComplete(),
 			preset: res.ok ? 'success' : 'error'
 		});
 		if (res.ok) invalidateAll();
 	} catch {
 		toastStore.trigger({ message: m.syncCweCatalogFailed(), preset: 'error' });
 	}
 }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/`(app)/(internal)/[model=urlmodel]/+page.svelte around lines 207 - 222, The onclick handler for the CWE sync button uses toastStore.trigger with message: result.detail || result.error which can be undefined; update the onclick async handler (the function that calls fetch('/cwes/sync-catalog'), processes result, calls toastStore.trigger and invalidateAll) to compute a safe message like result.detail ?? result.error ?? m.syncCweCatalogFailed() (or include res.statusText as an additional fallback) before calling toastStore.trigger so the toast never shows undefined; keep the existing success/error preset logic and retain the existing catch block behavior.

frontend/src/routes/(app)/(internal)/cves/autocomplete/+server.ts (1)
5-9: Consider adding error handling for non-JSON responses.
If the backend returns a non-JSON response (e.g., an HTML error page on 500), `res.json()` will throw an unhandled exception. The batch-action endpoints in this PR handle errors explicitly.

♻️ Suggested improvement

```diff
 export const GET: RequestHandler = async ({ fetch, url }) => {
 	const endpoint = `${BASE_API_URL}/cves/autocomplete/${url.search}`;
 	const res = await fetch(endpoint);
+	if (!res.ok) {
+		return json({ error: 'Failed to fetch autocomplete data' }, { status: res.status });
+	}
 	const data = await res.json();
 	return json(data, { status: res.status });
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/`(app)/(internal)/cves/autocomplete/+server.ts around lines 5 - 9, The GET RequestHandler currently calls res.json() directly which will throw if the backend returns non-JSON (e.g., HTML error pages); update the GET handler to check res.ok and attempt safe JSON parsing: if res.ok parse json and return json(data, { status: res.status }), otherwise read text (or try/catch around res.json()) and return a structured error response with appropriate status. Specifically modify the GET function that builds endpoint with BASE_API_URL to handle non-JSON responses and caught parsing errors before returning so the server doesn’t crash on 500/HTML responses.

frontend/src/lib/components/Forms/ModelForm/SecIntelFeedsSettingForm.svelte (1)
8-14: Unused `model` prop declared in interface.
The `Props` interface declares `model: ModelInfo` but it's not destructured on line 13 and isn't used anywhere in the component. Either remove it from the interface or destructure it if needed for consistency with other form components.

♻️ Option A: Remove unused prop from interface

```diff
 interface Props {
 	form: SuperForm<any>;
-	model: ModelInfo;
 }

 let { form }: Props = $props();
```

♻️ Option B: Destructure for consistency (if other forms use it)

```diff
-let { form }: Props = $props();
+let { form, model }: Props = $props();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/lib/components/Forms/ModelForm/SecIntelFeedsSettingForm.svelte` around lines 8 - 14, The Props interface declares an unused model: ModelInfo and the component only destructures form via "let { form }: Props = $props();", so either remove model from the Props interface (delete "model: ModelInfo") or include it in the destructuring (e.g., "let { form, model }: Props = $props();") to match other form components; update any imports or types if removing model so there are no leftover references to ModelInfo and run the build to confirm no unused-type warnings.

backend/sec_intel/management/commands/sync_epss.py (1)
17-30: Consider logging the full exception traceback for debugging.
The error handling is functional, but the generic exception handler on lines 29-30 only prints the error message. For production debugging, consider logging the full traceback or using `--verbosity` to control detail level.

♻️ Optional: Add traceback for verbose mode

```diff
+import traceback
+
 class Command(BaseCommand):
     help = "Sync EPSS feed data into existing CVEs"

     def add_arguments(self, parser):
         parser.add_argument(
             "--file",
             type=str,
             default=None,
             help="Path to local EPSS CSV/gzip file (for air-gapped environments)",
         )

     def handle(self, *args, **options):
         from sec_intel.feeds import EPSSFeed

         file_path = Path(options["file"]) if options["file"] else None
         try:
             feed = EPSSFeed(file_path=file_path)
             count = feed.sync()
             self.stdout.write(
                 self.style.SUCCESS(f"EPSS sync complete: {count} CVEs updated")
             )
         except FileNotFoundError:
             self.stderr.write(self.style.ERROR(f"File not found: {file_path}"))
         except Exception as e:
             self.stderr.write(self.style.ERROR(f"EPSS sync failed: {e}"))
+            if options.get("verbosity", 1) >= 2:
+                self.stderr.write(traceback.format_exc())
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/sec_intel/management/commands/sync_epss.py` around lines 17 - 30, The broad-except in the handle method of sync_epss.py (inside the Command.handle that constructs EPSSFeed and calls feed.sync) only prints the error message; change the exception handler to emit the full traceback when verbosity is high or always log it to the configured logger: capture the exception and either call logging.exception("EPSS sync failed") or format the traceback via traceback.format_exc() and include that text in the self.stderr.write/self.stdout.write output; use the provided verbosity option (options.get("verbosity")) or Django command logging to gate verbose trace output so you still keep concise messages at low verbosity.

frontend/src/routes/(app)/(internal)/cves/[id=uuid]/+page.server.ts (3)
17-25: Consider validating the form before the API call and using the validated ID.
The form is validated but `form.valid` is never checked, and `event.params.id` is used instead of `form.data.id`. While `event.params.id` is already constrained by the `[id=uuid]` route matcher, for consistency:

♻️ Suggested improvement

```diff
 const schema = z.object({ id: z.string().uuid() });
 const form = await superValidate(formData, zod(schema));

+if (!form.valid) {
+	return fail(400, { form });
+}
+
 const response = await event.fetch(`${BASE_API_URL}/cves/${event.params.id}/enrich/`, {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/`(app)/(internal)/cves/[id=uuid]/+page.server.ts around lines 17 - 25, The form is validated via schema and superValidate (schema, form) but form.valid is not checked and event.params.id is used directly; before calling event.fetch use form.valid to ensure validation passed and use the validated value form.data.id in the API URL (instead of event.params.id), and when form.valid is false return the form with validation errors (or short-circuit) so the POST to enrich only runs with a validated UUID.
27-46: Add error handling for non-JSON error responses.
If the backend returns a non-JSON error response (e.g., 500 with HTML), `response.json()` will throw. Consider wrapping this in a try-catch or checking the content type.

♻️ Suggested defensive handling

```diff
-const responseData = await response.json();
+let responseData;
+try {
+	responseData = await response.json();
+} catch {
+	responseData = {};
+}

 if (response.ok) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/`(app)/(internal)/cves/[id=uuid]/+page.server.ts around lines 27 - 46, response.json() can throw if the server returns non-JSON (e.g., HTML 500); update the handler around the response parsing in +page.server.ts to defensively parse JSON: check response.headers.get('content-type') for "application/json" before calling response.json(), or wrap response.json() in a try/catch and fall back to a safe error message; then use that safe responseData (or fallback object with an error string) when calling setFlash (references: response, response.json(), responseData, setFlash, and m.enrichmentFailed()) so non-JSON errors produce a friendly flash instead of blowing up.
13-15: Redundant null check for `formData`.
`event.request.formData()` returns a `FormData` object or throws an error; it won't return a falsy value. This check is dead code.

♻️ Suggested fix

```diff
 enrich: async (event) => {
 	const formData = await event.request.formData();
-
-	if (!formData) {
-		return fail(400, { form: null });
-	}
-
 	const schema = z.object({ id: z.string().uuid() });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/`(app)/(internal)/cves/[id=uuid]/+page.server.ts around lines 13 - 15, The check "if (!formData) { return fail(400, { form: null }); }" is redundant because event.request.formData() either returns a FormData or throws; remove that dead check. Instead, if you need to handle malformed requests, wrap the await event.request.formData() call in a try/catch and return fail(400, { form: null }) from the catch; otherwise simply proceed using the formData variable. Ensure you update any code paths that assumed the null branch to use the real FormData object or the new catch-based error handling.

backend/global_settings/views.py (1)
267-314: Consider extracting a base class for settings viewsets.
`VulnerabilitySlaViewSet` and `SecIntelFeedsViewSet` are nearly identical, differing only in `serializer_class`, queryset filter, and `get_or_create` name. A base class could reduce duplication:

♻️ Optional refactor suggestion

```python
class SingletonSettingsViewSet(viewsets.ModelViewSet):
    """Base viewset for singleton GlobalSettings entries."""

    model = GlobalSettings
    settings_name: str = None  # Override in subclass

    def retrieve(self, request, *args, **kwargs):
        instance = self.get_object()
        serializer = self.get_serializer(instance)
        return Response(serializer.data)

    def update(self, request, *args, **kwargs):
        instance = self.get_object()
        serializer = self.get_serializer(instance, data=request.data)
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data)

    def get_object(self):
        obj, _ = self.model.objects.get_or_create(name=self.settings_name)
        obj.is_published = True
        obj.save(update_fields=["is_published"])
        self.check_object_permissions(self.request, obj)
        return obj


class VulnerabilitySlaViewSet(SingletonSettingsViewSet):
    settings_name = "vulnerability-sla"
    serializer_class = VulnerabilitySlaSerializer
    queryset = GlobalSettings.objects.filter(name="vulnerability-sla")


class SecIntelFeedsViewSet(SingletonSettingsViewSet):
    settings_name = "sec-intel-feeds"
    serializer_class = SecIntelFeedsSerializer
    queryset = GlobalSettings.objects.filter(name="sec-intel-feeds")
```

This also applies to `FeatureFlagsViewSet` and potentially `GeneralSettingsViewSet`. Based on learnings, deferring this refactor to a follow-up is acceptable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/global_settings/views.py` around lines 267 - 314, Extract a base class to remove duplication: create SingletonSettingsViewSet (subclassing viewsets.ModelViewSet) that implements retrieve, update, and get_object using a class attribute settings_name (used in get_or_create and in filtering), and have VulnerabilitySlaViewSet and SecIntelFeedsViewSet subclass it, override settings_name ("vulnerability-sla"/"sec-intel-feeds"), set their serializer_class and queryset = GlobalSettings.objects.filter(name=settings_name); ensure get_object sets is_published and calls check_object_permissions as in the originals.

backend/sec_intel/management/commands/sync_cwe.py (1)
29-32: Consider more specific exception handling.
The `FileNotFoundError` handler at line 30 prints `file_path`, which could be `None` if the error originates from within `CWEFeed` (e.g., when downloading fails and a temp file isn't found). The generic `Exception` catch is broad and could mask unexpected issues.

This is a minor concern since management commands are typically run interactively where output is visible, but for consistency with production-grade error handling, consider:

♻️ Suggested improvement

```diff
 except FileNotFoundError:
-    self.stderr.write(self.style.ERROR(f"File not found: {file_path}"))
+    self.stderr.write(
+        self.style.ERROR(f"File not found: {file_path or 'remote resource'}")
+    )
 except Exception as e:
     self.stderr.write(self.style.ERROR(f"CWE sync failed: {e}"))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/sec_intel/management/commands/sync_cwe.py` around lines 29 - 32, The FileNotFoundError handler in the management command's run logic may receive file_path == None (e.g., failures inside CWEFeed) and the blanket except Exception masks other issues; update the except blocks around CWEFeed usage/export to (1) check and include the actual file_path or context (e.g., temp file path or CWEFeed.download result) when reporting FileNotFoundError and handle None safely, (2) catch and handle more specific exceptions that CWEFeed can raise (e.g., requests exceptions, OSError) instead of a bare Exception, and (3) re-raise or log unexpected exceptions rather than silently swallowing them so callers can see tracebacks; locate these changes around the code referencing CWEFeed and the variables file_path and the command's exception handlers.

frontend/src/routes/(app)/(internal)/settings/+page.server.ts (1)
97-102: Add error handling for settings fetch.
The fetch calls for `vulnerability-sla` and `sec-intel-feeds` don't handle non-OK responses. If the endpoints fail, `.json()` may throw or return unexpected data.

♻️ Suggested defensive handling

```diff
-const vulnerabilitySlaSettings = await fetch(`${BASE_API_URL}/settings/vulnerability-sla/`).then(
-	(res) => res.json()
-);
-const secIntelFeedsSettings = await fetch(`${BASE_API_URL}/settings/sec-intel-feeds/`).then(
-	(res) => res.json()
-);
+const vulnerabilitySlaSettings = await fetch(`${BASE_API_URL}/settings/vulnerability-sla/`).then(
+	(res) => (res.ok ? res.json() : {})
+);
+const secIntelFeedsSettings = await fetch(`${BASE_API_URL}/settings/sec-intel-feeds/`).then(
+	(res) => (res.ok ? res.json() : {})
+);
```

Note: The existing SSO, general, and feature-flag fetches have the same pattern, so this is consistent with current code. Consider addressing all in a follow-up.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/`(app)/(internal)/settings/+page.server.ts around lines 97 - 102, The two fetches for vulnerabilitySlaSettings and secIntelFeedsSettings use res.json() without checking the HTTP response; update both calls (the expressions that call fetch(`${BASE_API_URL}/settings/vulnerability-sla/`) and fetch(`${BASE_API_URL}/settings/sec-intel-feeds/`)) to verify res.ok before calling res.json(), throw or return a safe default when not ok, and wrap the await in a try/catch so errors from network/JSON parsing are handled and won’t crash the page; ensure the error path logs or returns a sensible fallback object consistent with the other settings fetches.

backend/sec_intel/feeds.py (1)
280-284: Inconsistent empty value check between here and `views.py`.
In `views.py:86`, the check is `if current in (None, "", 0, [])`, which includes the empty list, but here it's `if current in (None, "", 0)`. For consistency and to handle list fields like `references`, consider aligning the checks.

Proposed fix

```diff
 for k, v in fields.items():
     current = getattr(cve_instance, k, None)
-    if current in (None, "", 0):
+    if current in (None, "", 0, []):
         setattr(cve_instance, k, v)
         update_fields.append(k)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/sec_intel/feeds.py` around lines 280 - 284, The empty-value check in the loop that updates cve_instance fields is missing list emptiness and should match views.py; update the conditional in the block iterating over fields.items() (where current = getattr(cve_instance, k, None) and update_fields.append(k) / setattr(cve_instance, k, v) are used) to include an empty list as an empty value (i.e., check current in (None, "", 0, []) so list fields like references get populated consistently).

backend/global_settings/serializers.py (1)
428-430: Consider adding minimum value validation for `network_timeout`.
A timeout of 0 or a negative value would cause `httpx` requests to fail. Consider using `min_value=1` to ensure a valid timeout.

Proposed fix

```diff
 network_timeout = serializers.IntegerField(
-    source="value.network_timeout", required=False, default=30
+    source="value.network_timeout", required=False, default=30, min_value=1
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/global_settings/serializers.py` around lines 428 - 430, The network_timeout IntegerField (network_timeout = serializers.IntegerField(...)) currently allows 0 or negative values which break httpx; update this field to enforce a minimum of 1 by adding min_value=1 to the serializers.IntegerField arguments (keep source="value.network_timeout", required=False and default=30 intact) so invalid timeouts are rejected by validation before use.

backend/sec_intel/views.py (1)
75-77: Consider adding logging when the NVD fetch fails.
Unlike `sync_kev` and `sync_catalog`, this path doesn't log the failure, which could make debugging production issues difficult.

Proposed fix

```diff
 raw = NVDFeed.fetch_cve(cve_id)
 if raw is None:
+    logger.warning("NVD enrichment failed", cve_id=cve_id)
     return Response({"error": "Failed to fetch data from NVD"}, status=502)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/sec_intel/views.py` around lines 75 - 77, The code path calling NVDFeed.fetch_cve(cve_id) returns a 502 Response on failure but doesn't log anything; update the failure branch in the view to log the error (including the cve_id and any returned/error details) before returning the Response. Use the module's logger (e.g., logger.error or the existing logger variable) and include context like the CVE ID and a clear message so failures from NVDFeed.fetch_cve are recorded for debugging.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/chat/workflows/ebios_rm_helpers.py`:
- Around line 17-43: The LLM-generated creation paths (_create_assets and
_create_rotos) only check Python types and directly call objects.create(),
allowing invalid enum values (e.g., Asset.type) and out-of-range integers
(RoTo.motivation/resources/activity) to be persisted; update these functions to
validate choice and range constraints before saving by either calling the
model's full_clean() and catching django.core.exceptions.ValidationError
(skip/log invalid rows) or performing explicit allowlist checks for Asset.type
and min/max checks for RoTo.motivation/resources/activity and
feared_event_names/asset_names membership, then only call objects.create() for
valid entries and log or discard invalid ones.
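The guardrail described above can be sketched as a plain filtering pass. This is a minimal illustration, not the project's code: the allowed asset types, the 1-4 scale, and the function names are all assumptions standing in for the real Asset.type choices and RoTo motivation/resources/activity constraints (in the actual fix, Django's full_clean() would perform this per model instance).

```python
# Hypothetical sketch: validate LLM-proposed rows before persisting them.
# ALLOWED_ASSET_TYPES and ROTO_RANGE are placeholder values, not the
# project's real enum choices or scales.
ALLOWED_ASSET_TYPES = {"PR", "SP"}   # assumed enum keys for Asset.type
ROTO_RANGE = range(1, 5)             # assumed 1-4 scale for motivation/resources/activity


def filter_valid_assets(rows):
    """Keep only rows whose type is in the allowlist; collect the rest for logging."""
    valid, rejected = [], []
    for row in rows:
        if isinstance(row.get("name"), str) and row.get("type") in ALLOWED_ASSET_TYPES:
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected


def filter_valid_rotos(rows):
    """Keep only rows whose numeric scales fall inside the expected range."""
    valid, rejected = [], []
    for row in rows:
        scales = (row.get("motivation"), row.get("resources"), row.get("activity"))
        if all(isinstance(s, int) and s in ROTO_RANGE for s in scales):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected
```

Only the `valid` lists would then reach `objects.create()`; the `rejected` lists are logged or discarded instead of raising mid-batch.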
In `@backend/core/migrations/0157_vulnerability_cves_vulnerability_cwes_and_more.py`:
- Around line 33-49: The migration only adds nullable fields detected_at,
due_date, and eta on the Vulnerability model but does not backfill existing
rows, leaving current records with NULL SLA dates and disabled behavior; add a
data migration (migrations.RunPython) in the same migration file that locates
the Vulnerability model and sets sensible defaults for existing records (e.g.,
set detected_at from an existing timestamp field like created_at/first_seen if
available, compute due_date/eta using your SLA policy or a conservative default,
and update any derived state column) or document an external backfill step;
implement the backfill as a reversible RunPython function (e.g.,
backfill_vulnerability_sla and reverse that clears the fields) targeting
model_name "vulnerability" and the new fields detected_at, due_date, eta so
existing findings immediately participate in the SLA logic.
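The backfill computation itself can be kept as a small pure function called from the RunPython forward step. This is a sketch under assumptions: the severity labels, day counts, and function name below are placeholders, not the project's actual SLA policy or migration code.

```python
from datetime import date, timedelta

# Placeholder SLA table; the real policy lives in GlobalSettings.
DEFAULT_SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}


def backfill_due_date(detected_at, severity, sla_days=DEFAULT_SLA_DAYS):
    """Derive the due_date a RunPython backfill could write for a legacy
    Vulnerability row, given an existing timestamp (e.g. created_at)."""
    days = sla_days.get(severity)
    if detected_at is None or days is None:
        return None  # leave the row untouched rather than guess
    return detected_at + timedelta(days=days)
```

The reverse function of the RunPython pair would simply clear `detected_at`, `due_date`, and `eta` again, keeping the migration reversible.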
In `@backend/core/models.py`:
- Around line 5474-5503: The current save/_apply_sla_policy flow only sets
due_date when it's empty, so changes to severity or detected_at (or SLA inputs)
won't update an SLA-derived deadline; update the logic in save and
_apply_sla_policy so you compute the SLA deadline deterministically (e.g., add a
new helper like _compute_sla_deadline or make _apply_sla_policy return the
computed date) using self.get_severity_display() and self.detected_at, then in
save compare the existing self.due_date to the newly computed SLA date and
overwrite self.due_date when it is empty OR when it equals the previously
computed SLA date OR when severity/detected_at changed; keep manual overrides
(non-SLA dates) intact by not overwriting when the existing due_date differs
from the previously computed SLA value.
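The recompute-and-compare flow described above can be sketched with two pure helpers. `compute_sla_deadline` and `next_due_date` are hypothetical names for illustration, and the policy dict is a placeholder; the real code would hang this off the model's save path.

```python
from datetime import timedelta


def compute_sla_deadline(detected_at, severity, policy):
    """Deterministically derive the SLA deadline from its inputs."""
    days = policy.get(severity)
    if detected_at is None or days is None:
        return None
    return detected_at + timedelta(days=days)


def next_due_date(current_due, old_sla, new_sla):
    """Overwrite only when the stored date is empty or still equals the
    previously computed SLA value; a manual override survives."""
    if current_due is None or current_due == old_sla:
        return new_sla
    return current_due
```

Because the deadline is a pure function of severity, detected_at, and the policy, comparing the stored date against the previously computed value is enough to tell an SLA-derived date from a manual one.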
- Around line 5452-5466: The Vulnerability model's new ManyToManyFields cves and
cwes aren't being audited; update the auditlog registration for the
Vulnerability model (the auditlog.register(...) call later in this file) to
include these relations by adding m2m_fields=['cves', 'cwes'] so that
link/unlink events for CVE and CWE are recorded; if there is an existing
register call for Vulnerability, augment it rather than creating a duplicate
registration.
- Around line 5471-5485: The save method (Vulnerability.save) mutates derived
fields (detected_at and due_date via _apply_sla_policy) but forwards
kwargs["update_fields"] unchanged, so partial updates like
save(update_fields={"severity"}) will omit those recalculated fields; fix by
detecting an update_fields kwarg and either expanding it to include
"detected_at" and "due_date" (and any other fields set by _apply_sla_policy) or
removing update_fields so the full object is persisted, then call
super().save(*args, **kwargs); ensure this logic references the save method, the
_apply_sla_policy helper, and the detected_at/due_date field names so
recalculated values are always persisted.
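A minimal sketch of the `update_fields` expansion, with the derived field names taken from the comment above (the helper name `widen_update_fields` is hypothetical; in the real model this logic would live inline in `save()` before calling `super().save()`):

```python
# Fields recalculated by _apply_sla_policy that must always be persisted.
DERIVED_FIELDS = {"detected_at", "due_date"}

def widen_update_fields(kwargs: dict) -> dict:
    """Return save() kwargs with derived fields merged into update_fields, if set."""
    update_fields = kwargs.get("update_fields")
    if update_fields is not None:
        kwargs = {**kwargs, "update_fields": set(update_fields) | DERIVED_FIELDS}
    return kwargs
```

With this in place, `save(update_fields={"severity"})` persists the recalculated `detected_at`/`due_date` as well, while full saves (no `update_fields`) pass through unchanged.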
In `@backend/core/serializers.py`:
- Around line 312-325: The SLA policy is being stored on the serializer class
via _sla_policy_cache in _get_sla_policy, causing process-wide stale values;
change the cache to an instance-level attribute (e.g., self._sla_policy_cache)
or use a request-scoped cache/invalidate mechanism so updates to
GlobalSettings(name="vulnerability-sla") are immediately visible; update
_get_sla_policy to set/read self._sla_policy_cache instead of
self.__class__._sla_policy_cache (or plug into your existing cache invalidation
interface) and keep the same fallback behavior for GlobalSettings.DoesNotExist.
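The instance-level cache can be illustrated outside Django like this; the class and the injected loader are stand-ins for the real serializer and the `GlobalSettings` lookup, and `LookupError` stands in for `GlobalSettings.DoesNotExist`:

```python
# Hypothetical fallback used when no settings row exists.
DEFAULT_SLA_POLICY = {"critical": 15, "high": 30}

class VulnerabilitySerializerSketch:
    def __init__(self, load_policy):
        self._load_policy = load_policy
        self._sla_policy_cache = None  # per-instance, not a class attribute

    def get_sla_policy(self):
        """Load the policy once per serializer instance, so a fresh request
        (fresh serializer) always sees current GlobalSettings values."""
        if self._sla_policy_cache is None:
            try:
                self._sla_policy_cache = self._load_policy()
            except LookupError:
                self._sla_policy_cache = DEFAULT_SLA_POLICY
        return self._sla_policy_cache
```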
- Around line 344-347: The SLA lookup is using the localized label from
obj.get_severity_display(), which can fail across locales; change the lookup to
use a stable, non-localized key (e.g., use obj.severity or another enum key)
when querying sla_policy (from _get_sla_policy()), e.g. severity_key =
obj.severity and sla_days = sla_policy.get(severity_key); keep
obj.get_severity_display() only for presentation/labels, not for the policy
lookup.
In `@backend/core/views.py`:
- Around line 3135-3147: The autocomplete view currently serializes
VulnerabilityReadSerializer(objects, many=True) directly, bypassing
related-field IAM masking; update the autocomplete method to compute the
related-field map via self._get_fieldsrelated_map(request) and then call
self._filter_related_fields(objects, fieldsrelated_map) (or equivalent) to
produce a filtered queryset/list before passing it into
VulnerabilityReadSerializer and into get_paginated_response/Response so
FieldsRelatedField payloads are masked consistently with other list/autocomplete
paths.
In `@backend/sec_intel/management/commands/enrich_cve.py`:
- Around line 90-92: The current queryset assigned to cves only selects CVEs
with published_date null and thus skips records that have published_date set but
are missing enrichment fields; update the filter in enrich_cve.py (the
assignment to cves) to include CVEs whose ref_id startswith "CVE-" AND
(published_date is null OR any of the enrichment fields are null). Implement
this using Django Q objects, e.g. replace the single published_date__isnull=True
predicate with a combined filter like Q(ref_id__startswith="CVE-") &
(Q(published_date__isnull=True) | Q(cvss__isnull=True) | Q(epss__isnull=True) |
Q(kev__isnull=True)) so partially enriched CVEs are processed in --all mode.
In `@backend/sec_intel/migrations/0001_initial.py`:
- Line 1: Update the generated migration header to match the pinned Django patch
by regenerating the migration file for sec_intel.migrations.0001_initial.py
using the project environment with Django 6.0.4 (e.g., activate the
virtualenv/container where backend/pyproject.toml is installed and run python
manage.py makemigrations for the sec_intel app) and commit the new migration
file so the generated header and metadata reflect Django 6.0.4 instead of 6.0.3.
In `@backend/sec_intel/models.py`:
- Around line 62-63: The model uses "ref_id" as the sync identity but lacks a DB
uniqueness constraint, which allows concurrent feed syncs to create duplicates;
update the model that defines ref_id (and any model referencing fields_to_check)
to enforce uniqueness by adding unique=True to the ref_id field declaration
and/or adding a Meta-level UniqueConstraint (e.g.,
UniqueConstraint(fields=["ref_id"], name="unique_ref_id")) so the DB enforces
uniqueness for the sync key; after changing the model, create and run the
corresponding migration to apply the constraint.
In `@backend/sec_intel/serializers.py`:
- Around line 45-48: get_references() assumes every item in obj.references is a
dict and will fail with AttributeError for malformed entries; update the list
comprehension to handle non-dict items by checking isinstance(ref, dict) and
either skip non-dict entries or produce a safe fallback (e.g., {"str": str(ref),
"source": ""}) so serialization never accesses .get on non-dicts; refer to the
get_references function and the obj.references iterable when making this change.
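A runnable version of the guarded comprehension, assuming the `{"url": ..., "source": ...}` entry shape described above (function name is illustrative, not the serializer's exact method):

```python
def serialize_references(references):
    """Serialize reference entries, silently skipping non-dict items so a stray
    string or null in the JSON field cannot raise AttributeError."""
    return [
        {"str": ref.get("url", ""), "source": ref.get("source", "")}
        for ref in references
        if isinstance(ref, dict)
    ]
```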
- Around line 17-24: The serializer create() currently calls
NVDFeed.enrich_cve(instance) synchronously (which calls fetch_cve and blocks);
change this so create() only enqueues an asynchronous background job (e.g.,
Celery/RQ/async task) to run NVDFeed.enrich_cve using a serializable identifier
(pass instance.id or instance.ref_id) instead of the model instance, and remove
the blocking call from create(); in the new task worker call
NVDFeed.fetch_cve/enrich_cve, check and handle its return value (log failures
and update the instance record appropriately) and surface/log exceptions instead
of silently swallowing them.
In `@frontend/messages/en.json`:
- Around line 2370-2373: The en.json file contains duplicated locale keys
"resolved" and "onTrack" used in different contexts (health-tracking vs SLA);
remove the duplicate declarations and introduce context-specific keys (e.g.,
"resolved_sla" / "resolved_health" and "onTrack_sla" / "onTrack_health")
updating every reference in the codebase to use the new keys (search for usages
of "resolved" and "onTrack" and replace with the appropriate context-specific
key), and remove the old ambiguous entries so the file no longer redefines the
same keys and translators can handle each meaning separately.
In `@frontend/src/lib/utils/crud.ts`:
- Around line 566-572: The entries { field: 'detected_at', type: 'date' }, {
field: 'eta', type: 'date' }, and { field: 'due_date', type: 'date' } are
incorrectly placed in the foreignKeyFields array; update the configuration by
removing these three date field objects from foreignKeyFields and adding them to
the detailViewFields array (preserving type: 'date'), and ensure
foreignKeyFields only contains relationship entries like { field:
'security_exceptions', urlModel: 'security-exceptions' } and similar.
In `@frontend/src/lib/utils/schemas.ts`:
- Line 179: published_date and kev_date_added currently accept arbitrary strings
which can allow invalid dates; update their Zod validators to enforce ISO
date/datetime format (e.g., replace z.string().optional().nullable() with a
date-aware validator such as z.string().datetime().optional().nullable() or
z.string().refine(...ISO date check...) to match the pattern used for fields
like eta/due_date in this file) so form fields using type="date" only produce
valid, comparable dates; update the schema entries for the published_date and
kev_date_added fields accordingly.
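As a runnable illustration of the stricter validation, here is a plain predicate that a `z.string().refine(...)` validator could delegate to (the function is a sketch, not the project's schema code):

```typescript
// Accept only real YYYY-MM-DD calendar dates, matching what <input type="date"> emits.
function isIsoDate(value: string): boolean {
  if (!/^\d{4}-\d{2}-\d{2}$/.test(value)) return false;
  const parsed = new Date(`${value}T00:00:00Z`);
  // Round-trip check rejects out-of-range days such as 2024-02-30.
  return !Number.isNaN(parsed.getTime()) && parsed.toISOString().startsWith(value);
}
```

A schema entry like `z.string().refine(isIsoDate).optional().nullable()` would then keep `published_date` and `kev_date_added` comparable with the other date fields.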
- Around line 841-843: vulnerabilitySchema currently defines detected_at, eta,
and due_date as z.string().optional().nullable(), which permits invalid date
strings; update these three fields to use the same date validation pattern used
elsewhere (e.g., BusinessImpactAnalysisSchema) — replace the z.string() entries
with the project's canonical date validator (e.g., z.preprocess/transform to
Date with z.date().optional().nullable() or the shared date schema helper) so
invalid dates are rejected and valid ISO dates are parsed consistently by
vulnerabilitySchema.
- Around line 180-183: The schema uses z.coerce.number() for fields like
cvss_base_score, epss_score, and epss_percentile which coerces empty strings to
0; change the schema to preprocess empty-string inputs into null before
coercion—use z.preprocess to map "" to null (or leave non-empty values
untouched) and then apply z.number().nullable().optional() (or z.coerce.number()
after preprocess if coercion is needed) for cvss_base_score, epss_score,
epss_percentile (and any other occurrences noted around lines 614-619) so empty
inputs become null at the schema level rather than 0.
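The preprocessing step amounts to mapping empty strings to null before any numeric coercion; a minimal sketch (function names are hypothetical):

```typescript
// What a z.preprocess(emptyToNull, ...) step would do before coercion.
function emptyToNull(value: unknown): unknown {
  return typeof value === 'string' && value.trim() === '' ? null : value;
}

// Equivalent of z.preprocess(emptyToNull, z.coerce.number().nullable().optional()):
// cleared form fields yield null instead of 0.
function coerceScore(value: unknown): number | null {
  const pre = emptyToNull(value);
  return pre === null || pre === undefined ? null : Number(pre);
}
```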
In `@frontend/src/routes/(app)/(internal)/cwes/sync-catalog/+server.ts`:
- Around line 11-12: The current code calls res.json() unconditionally (see the
res variable and the return json(data, { status: res.status }) line) which will
throw for 204 No Content or non-JSON bodies; wrap the parsing in a
guard/try-catch: first check if res.status === 204 or
res.headers.get('content-length') === '0' or the content-type header does not
include 'application/json', and if so read and forward the raw text via
res.text() (or return an empty body with the upstream status) instead of calling
res.json(); otherwise call res.json() inside try/catch and on parse failure fall
back to res.text() and return that with the original res.status so the client
receives the upstream status/body intact.
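The guarded parsing can be sketched as a small helper (`forwardUpstream` is a hypothetical name; the real handler would return `json(...)` or a text response with the same status):

```typescript
// Forward an upstream response without assuming its body is JSON.
async function forwardUpstream(res: Response): Promise<{ body: unknown; status: number }> {
  const contentType = res.headers.get('content-type') ?? '';
  // 204 and non-JSON bodies are read as raw text instead of calling res.json().
  if (res.status === 204 || !contentType.includes('application/json')) {
    return { body: await res.text(), status: res.status };
  }
  try {
    return { body: await res.json(), status: res.status };
  } catch {
    // Malformed JSON despite the header: forward an empty body, keep the status.
    return { body: null, status: res.status };
  }
}
```

Note the body can only be consumed once, so the fallback deliberately does not call `res.text()` after a failed `res.json()`.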
---
Outside diff comments:
In `@backend/global_settings/views.py`:
- Line 7: Remove the duplicate import of SerializerFactory from core.serializers
by keeping a single import statement for SerializerFactory and deleting the
redundant one; locate the two identical import lines referencing
SerializerFactory in global_settings/views.py and ensure the remaining imports
are properly ordered/organized (follow project import ordering or isort if
used).
---
Nitpick comments:
In `@backend/global_settings/serializers.py`:
- Around line 428-430: The network_timeout IntegerField (network_timeout =
serializers.IntegerField(...)) currently allows 0 or negative values which break
httpx; update this field to enforce a minimum of 1 by adding min_value=1 to the
serializers.IntegerField arguments (keep source="value.network_timeout",
required=False and default=30 intact) so invalid timeouts are rejected by
validation before use.
In `@backend/global_settings/views.py`:
- Around line 267-314: Extract a base class to remove duplication: create
SingletonSettingsViewSet (subclassing viewsets.ModelViewSet) that implements
retrieve, update, and get_object using a class attribute settings_name (used in
get_or_create and in filtering), and have VulnerabilitySlaViewSet and
SecIntelFeedsViewSet subclass it, override settings_name
("vulnerability-sla"/"sec-intel-feeds"), set their serializer_class and queryset
= GlobalSettings.objects.filter(name=settings_name); ensure get_object sets
is_published and calls check_object_permissions as in the originals.
In `@backend/sec_intel/feeds.py`:
- Around line 280-284: The empty-value check in the loop that updates
cve_instance fields is missing list emptiness and should match views.py; update
the conditional in the block iterating over fields.items() (where current =
getattr(cve_instance, k, None) and update_fields.append(k) /
setattr(cve_instance, k, v) are used) to include an empty list as an empty value
(i.e., check current in (None, "", 0, []) so list fields like references get
populated consistently).
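The widened empty check can be expressed as a small predicate (a sketch; the tuple of empty values follows the comment above):

```python
# Values treated as "empty" when deciding whether to overwrite an existing field.
EMPTY_VALUES = (None, "", 0, [])

def should_update(current) -> bool:
    """True when the current field value is empty, including an empty list,
    so list fields like references get populated consistently."""
    return current in EMPTY_VALUES
```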
In `@backend/sec_intel/management/commands/sync_cwe.py`:
- Around line 29-32: The FileNotFoundError handler in the management command's
run logic may receive file_path == None (e.g., failures inside CWEFeed) and the
blanket except Exception masks other issues; update the except blocks around
CWEFeed usage/export to (1) check and include the actual file_path or context
(e.g., temp file path or CWEFeed.download result) when reporting
FileNotFoundError and handle None safely, (2) catch and handle more specific
exceptions that CWEFeed can raise (e.g., requests exceptions, OSError) instead
of a bare Exception, and (3) re-raise or log unexpected exceptions rather than
silently swallowing them so callers can see tracebacks — locate these changes
around the code referencing CWEFeed and the variables file_path and the
command's exception handlers.
In `@backend/sec_intel/management/commands/sync_epss.py`:
- Around line 17-30: The broad-except in the handle method of sync_epss.py
(inside the Command.handle that constructs EPSSFeed and calls feed.sync) only
prints e.message; change the exception handler to emit the full traceback when
verbosity is high or always log it to the configured logger: capture the
exception and either call logging.exception("EPSS sync failed") or format the
traceback via traceback.format_exc() and include that text in the
self.stderr.write/self.stdout.write output; use the provided verbosity option
(options.get("verbosity")) or Django command logging to gate verbose trace
output so you still keep concise messages at low verbosity.
In `@backend/sec_intel/models.py`:
- Around line 29-52: Add explicit MinValueValidator/MaxValueValidator
constraints to the numeric score fields to prevent invalid persisted values: for
cvss_base_score (field name cvss_base_score) add validators enforcing 0.0 <=
value <= 10.0; for epss_score (epss_score) add validators enforcing 0.0 <= value
<= 1.0; for epss_percentile (epss_percentile) add validators enforcing 0.0 <=
value <= 100.0. Import django.core.validators.MinValueValidator and
MaxValueValidator at the top of backend/sec_intel/models.py, attach
validators=[MinValueValidator(...), MaxValueValidator(...)] to each
DecimalField, and run makemigrations/migrate to persist the model changes.
In `@backend/sec_intel/views.py`:
- Around line 75-77: The code path calling NVDFeed.fetch_cve(cve_id) returns a
502 Response on failure but doesn't log anything; update the failure branch in
the view to log the error (including the cve_id and any returned/error details)
before returning the Response. Use the module's logger (e.g., logger.error or
the existing logger variable) and include context like the CVE ID and a clear
message so failures from NVDFeed.fetch_cve are recorded for debugging.
In `@frontend/src/lib/components/Forms/ModelForm/SecIntelFeedsSettingForm.svelte`:
- Around line 8-14: The Props interface declares an unused model: ModelInfo and
the component only destructures form via "let { form }: Props = $props();", so
either remove model from the Props interface (delete "model: ModelInfo") or
include it in the destructuring (e.g., "let { form, model }: Props = $props();")
to match other form components; update any imports or types if removing model so
there are no leftover references to ModelInfo and run the build to confirm no
unused-type warnings.
In `@frontend/src/routes/(app)/(internal)/[model=urlmodel]/+page.svelte`:
- Around line 186-198: The toast message can be undefined because result.detail
and result.error might both be missing; update the onclick handler's
success/error toast logic (the async callback that calls
fetch('/cves/sync-kev')) to provide a fallback message when result.detail and
result.error are falsy — for example use a default string like m.syncKevFailed()
or "Operation completed" (choose appropriate one) when calling
toastStore.trigger, and keep the existing preset selection and invalidateAll()
behavior intact.
- Around line 207-222: The onclick handler for the CWE sync button uses
toastStore.trigger with message: result.detail || result.error which can be
undefined; update the onclick async handler (the function that calls
fetch('/cwes/sync-catalog'), processes result, calls toastStore.trigger and
invalidateAll) to compute a safe message like result.detail ?? result.error ??
m.syncCweCatalogFailed() (or include res.statusText as an additional fallback)
before calling toastStore.trigger so the toast never shows undefined; keep the
existing success/error preset logic and retain the existing catch block
behavior.
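Both toast fixes above reduce to the same nullish-coalescing chain; a minimal sketch (the `result` shape and the fallback string are assumptions, since the real code would use an i18n helper like `m.syncKevFailed()`):

```typescript
// Never pass undefined to toastStore.trigger: fall back through the known fields.
function toastMessage(result: { detail?: string; error?: string }, fallback: string): string {
  return result.detail ?? result.error ?? fallback;
}
```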
In `@frontend/src/routes/(app)/(internal)/cves/[id=uuid]/+page.server.ts`:
- Around line 17-25: The form is validated via schema and superValidate (schema,
form) but form.valid is not checked and event.params.id is used directly; before
calling event.fetch use form.valid to ensure validation passed and use the
validated value form.data.id in the API URL (instead of event.params.id), and
when form.valid is false return the form with validation errors (or
short-circuit) so the POST to enrich only runs with a validated UUID.
- Around line 27-46: response.json() can throw if the server returns non-JSON
(e.g., HTML 500); update the handler around the response parsing in
+page.server.ts to defensively parse JSON: check
response.headers.get('content-type') for "application/json" before calling
response.json(), or wrap response.json() in a try/catch and fall back to a safe
error message; then use that safe responseData (or fallback object with an error
string) when calling setFlash (references: response, response.json(),
responseData, setFlash, and m.enrichmentFailed()) so non-JSON errors produce a
friendly flash instead of blowing up.
- Around line 13-15: The check "if (!formData) { return fail(400, { form: null
}); }" is redundant because event.request.formData() either returns a FormData
or throws; remove that dead check. Instead, if you need to handle malformed
requests, wrap the await event.request.formData() call in a try/catch and return
fail(400, { form: null }) from the catch; otherwise simply proceed using the
formData variable. Ensure you update any code paths that assumed the null branch
to use the real FormData object or the new catch-based error handling.
In `@frontend/src/routes/(app)/(internal)/cves/autocomplete/+server.ts`:
- Around line 5-9: The GET RequestHandler currently calls res.json() directly
which will throw if the backend returns non-JSON (e.g., HTML error pages);
update the GET handler to check res.ok and attempt safe JSON parsing: if res.ok
parse json and return json(data, { status: res.status }), otherwise read text
(or try/catch around res.json()) and return a structured error response with
appropriate status. Specifically modify the GET function that builds endpoint
with BASE_API_URL to handle non-JSON responses and caught parsing errors before
returning so the server doesn’t crash on 500/HTML responses.
In `@frontend/src/routes/(app)/(internal)/cves/sync-kev/+server.ts`:
- Around line 6-9: The POST proxy is redundantly setting 'Content-Type':
'application/json' on the fetch to BASE_API_URL even though SvelteKit's
handleFetch injects that already; update the fetch call (the one using
BASE_API_URL in +server.ts) by removing the headers object or at least the
'Content-Type' entry so the request relies on the automatic header injection,
keeping method: 'POST' and any body intact (refer to the fetch invocation using
BASE_API_URL).
In `@frontend/src/routes/(app)/(internal)/cwes/autocomplete/+server.ts`:
- Around line 5-9: The GET request handler (export const GET) currently calls
res.json() directly which will throw on non-JSON responses; update the GET
handler to catch JSON parse errors by attempting res.json() inside try/catch (or
checking content-type) and if parsing fails call await res.text() and return a
sensible error payload (e.g., json({ error: text || 'Non-JSON response' }, {
status: res.status })) while preserving the original response status; ensure you
reference the existing variables/endpoints (endpoint, fetch, res) and return the
response using json(...) as before.
In `@frontend/src/routes/(app)/(internal)/cwes/batch-action/+server.ts`:
- Around line 9-13: Remove the redundant explicit 'Content-Type' header from the
POST fetch in +server.ts because the handleFetch hook in hooks.server.ts already
injects 'Content-Type: application/json' for requests to BASE_API_URL; update
the fetch call that creates 'res' (the fetch to 'endpoint' with 'body') to omit
the headers object so the hook provides the header automatically.
In `@frontend/src/routes/(app)/(internal)/settings/+page.server.ts`:
- Around line 97-102: The two fetches for vulnerabilitySlaSettings and
secIntelFeedsSettings use res.json() without checking the HTTP response; update
both calls (the expressions that call
fetch(`${BASE_API_URL}/settings/vulnerability-sla/`) and
fetch(`${BASE_API_URL}/settings/sec-intel-feeds/`)) to verify res.ok before
calling res.json(), throw or return a safe default when not ok, and wrap the
await in a try/catch so errors from network/JSON parsing are handled and won’t
crash the page; ensure the error path logs or returns a sensible fallback object
consistent with the other settings fetches.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 72b71cbf-c4d8-4b57-abcf-8bbc09275a1e
📒 Files selected for processing (69)
- backend/chat/page_context.py
- backend/chat/views.py
- backend/chat/workflows/ebios_rm_assist.py
- backend/chat/workflows/ebios_rm_helpers.py
- backend/chat/workflows/ebios_rm_workshops.py
- backend/chat/workflows/registry.py
- backend/ciso_assistant/settings.py
- backend/core/management/commands/populate_vulnerabilities.py
- backend/core/migrations/0157_vulnerability_cves_vulnerability_cwes_and_more.py
- backend/core/models.py
- backend/core/serializers.py
- backend/core/startup.py
- backend/core/urls.py
- backend/core/views.py
- backend/global_settings/migrations/0004_alter_globalsettings_name.py
- backend/global_settings/models.py
- backend/global_settings/serializers.py
- backend/global_settings/urls.py
- backend/global_settings/views.py
- backend/sec_intel/__init__.py
- backend/sec_intel/apps.py
- backend/sec_intel/feeds.py
- backend/sec_intel/management/__init__.py
- backend/sec_intel/management/commands/__init__.py
- backend/sec_intel/management/commands/enrich_cve.py
- backend/sec_intel/management/commands/sync_cwe.py
- backend/sec_intel/management/commands/sync_epss.py
- backend/sec_intel/management/commands/sync_kev.py
- backend/sec_intel/migrations/0001_initial.py
- backend/sec_intel/migrations/__init__.py
- backend/sec_intel/models.py
- backend/sec_intel/serializers.py
- backend/sec_intel/tasks.py
- backend/sec_intel/views.py
- enterprise/backend/enterprise_core/settings.py
- frontend/messages/en.json
- frontend/src/lib/components/Chart/StackedBarsNormalized.svelte
- frontend/src/lib/components/ChatWidget/chatStore.svelte.ts
- frontend/src/lib/components/Forms/ModelForm.svelte
- frontend/src/lib/components/Forms/ModelForm/CVEForm.svelte
- frontend/src/lib/components/Forms/ModelForm/CWEForm.svelte
- frontend/src/lib/components/Forms/ModelForm/FindingForm.svelte
- frontend/src/lib/components/Forms/ModelForm/QuantitativeRiskScenarioForm.svelte
- frontend/src/lib/components/Forms/ModelForm/SecIntelFeedsSettingForm.svelte
- frontend/src/lib/components/Forms/ModelForm/VulnerabilitiesForm.svelte
- frontend/src/lib/components/Forms/ModelForm/VulnerabilitySlaSettingForm.svelte
- frontend/src/lib/components/Settings/SecIntelFeedsSettings.svelte
- frontend/src/lib/components/Settings/VulnerabilitySlaSettings.svelte
- frontend/src/lib/components/SideBar/navData.ts
- frontend/src/lib/utils/boolean-display.ts
- frontend/src/lib/utils/crud.ts
- frontend/src/lib/utils/schemas.ts
- frontend/src/lib/utils/table.ts
- frontend/src/lib/utils/types.ts
- frontend/src/routes/(app)/(internal)/[model=urlmodel]/+page.svelte
- frontend/src/routes/(app)/(internal)/cves/[id=uuid]/+layout.server.ts
- frontend/src/routes/(app)/(internal)/cves/[id=uuid]/+page.server.ts
- frontend/src/routes/(app)/(internal)/cves/[id=uuid]/+page.svelte
- frontend/src/routes/(app)/(internal)/cves/[id=uuid]/+server.ts
- frontend/src/routes/(app)/(internal)/cves/autocomplete/+server.ts
- frontend/src/routes/(app)/(internal)/cves/batch-action/+server.ts
- frontend/src/routes/(app)/(internal)/cves/sync-kev/+server.ts
- frontend/src/routes/(app)/(internal)/cwes/autocomplete/+server.ts
- frontend/src/routes/(app)/(internal)/cwes/batch-action/+server.ts
- frontend/src/routes/(app)/(internal)/cwes/sync-catalog/+server.ts
- frontend/src/routes/(app)/(internal)/risk-scenarios/[id=uuid]/edit/+page.svelte
- frontend/src/routes/(app)/(internal)/settings/+page.server.ts
- frontend/src/routes/(app)/(internal)/settings/+page.svelte
- frontend/src/routes/(app)/(internal)/vulnerabilities/autocomplete/+server.ts
backend/core/migrations/0159_vulnerability_cwes_vulnerability_detected_at_and_more.py
Actionable comments posted: 4
🧹 Nitpick comments (2)
backend/core/migrations/0157_vulnerability_cves_vulnerability_cwes_and_more.py (1)
6-11: Consider using `bulk_update` for better migration performance.

The current backfill iterates and calls `save()` individually, which can be slow for large datasets. Using `bulk_update` with batching would significantly improve performance.

⚡ Suggested optimization
```diff
 def backfill_detected_at(apps, schema_editor):
     """Set detected_at to created_at date for existing vulnerabilities."""
     Vulnerability = apps.get_model("core", "Vulnerability")
-    for vuln in Vulnerability.objects.filter(detected_at__isnull=True).iterator():
-        vuln.detected_at = vuln.created_at.date() if vuln.created_at else None
-        vuln.save(update_fields=["detected_at"])
+    batch_size = 1000
+    vulns_to_update = []
+    for vuln in Vulnerability.objects.filter(detected_at__isnull=True).iterator():
+        if vuln.created_at:
+            vuln.detected_at = vuln.created_at.date()
+            vulns_to_update.append(vuln)
+        if len(vulns_to_update) >= batch_size:
+            Vulnerability.objects.bulk_update(vulns_to_update, ["detected_at"])
+            vulns_to_update = []
+    if vulns_to_update:
+        Vulnerability.objects.bulk_update(vulns_to_update, ["detected_at"])
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/core/migrations/0157_vulnerability_cves_vulnerability_cwes_and_more.py` around lines 6-11: the migration's backfill_detected_at function currently saves each Vulnerability individually, which is slow. Instead, query Vulnerability objects with detected_at null, collect changed instances into a list (setting vuln.detected_at = vuln.created_at.date() if vuln.created_at else None), and call Vulnerability.objects.bulk_update(instances, ["detected_at"]) in chunks (e.g., 500-1000) until done. Keep the apps.get_model("core", "Vulnerability") lookup and batch the updates to avoid large memory spikes.

frontend/src/routes/(app)/(internal)/[model=urlmodel]/+page.svelte (1)
214-258: Consider adding a loading state to prevent duplicate sync requests.

The sync buttons for CVEs and CWEs trigger potentially long-running backend operations without disabling the button during the request. Users might click multiple times, causing concurrent sync operations.
Consider tracking a loading state and disabling the button during the async operation:
💡 Suggested pattern
```diff
+let syncingCves = $state(false);
+let syncingCwes = $state(false);

 {#if URLModel === 'cves'}
   <button
     class="inline-block p-3 btn-mini-tertiary w-12 focus:relative"
     title={m.syncKev()}
     aria-label={m.syncKev()}
     data-testid="sync-kev-button"
+    disabled={syncingCves}
     onclick={async () => {
+      syncingCves = true;
       try {
         const res = await fetch('/cves/sync-kev', { method: 'POST' });
         // ...
       } catch {
         // ...
+      } finally {
+        syncingCves = false;
       }
     }}
   >
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/(app)/(internal)/[model=urlmodel]/+page.svelte` around lines 214-258: add a loading flag to prevent duplicate sync requests. Introduce a boolean (e.g., isSyncingKev and isSyncingCwe, or a single isSyncing keyed by URLModel) and use it in the onclick handlers for the data-testid="sync-kev-button" and data-testid="sync-cwe-button" buttons to early-return if already true; set it true before calling fetch('/cves/sync-kev') or fetch('/cwes/sync-catalog') and reset it in a finally block so the button is re-enabled. Also bind the flag to the button's disabled attribute and reflect it in any class/Tailwind state change to show a loading UI while invalidateAll and toastStore.trigger run.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/global_settings/serializers.py`:
- Around line 434-436: The serializer's network_timeout IntegerField currently
allows any integer; add validation to ensure values are within the
frontend-expected range (5–120) to prevent invalid API inputs. Fix by updating
the network_timeout field (or adding a validate_network_timeout(self, value)
method) on the same serializer to enforce min_value=5 and max_value=120 (or
raise serializers.ValidationError for out-of-range values) and ensure the field
remains required=False with default=30 so behavior doesn't change for missing
values.
- Around line 359-372: The serializer currently supplies concrete defaults for
fields critical/high/medium/low/info (IntegerField definitions in the
serializer) which causes cleared (null/removed) SLA thresholds to reappear on
read; remove the hardcoded default=... from those IntegerField declarations (or
set default=None) so that when the backend deletes a severity key (the removal
logic around the code that touches lines 405-409) the serializer will return
null/missing instead of the numeric default; ensure allow_null=True remains and
confirm to_representation/update logic does not re-insert defaults.
In `@backend/sec_intel/views.py`:
- Around line 39-42: The bug is that _get_accessible_ids_map is being called
with set(field_models.values()) instead of the full mapping; update both call
sites where field_models is used (the block that calls
_get_fieldsrelated_map(serializer) and the similar block near lines 125-128) to
pass field_models directly into _get_accessible_ids_map so it receives the
expected {field_name: model_class} mapping; verify _filter_related_fields is
still called with (data, field_models, allowed_ids) unchanged.
In `@frontend/src/lib/utils/schemas.ts`:
- Around line 613-619: VulnerabilitySlaSchema currently allows any string for
sla_anchor; update the sla_anchor validator to constrain it to the backend
choices (detected_at or published_date) so frontend validation matches backend
ChoiceField. Locate VulnerabilitySlaSchema and replace the sla_anchor
z.string().optional() with an enum-based validator that only permits
"detected_at" and "published_date" (preserving optional behavior).
---
Nitpick comments:
In `@backend/core/migrations/0157_vulnerability_cves_vulnerability_cwes_and_more.py`:
- Around line 6-11: The migration's backfill_detected_at function currently
saves each Vulnerability individually which is slow; instead, query
Vulnerability objects with detected_at null, iterate and collect changed
instances into a list, set vuln.detected_at = vuln.created_at.date() if
vuln.created_at else None, and perform bulk_update on the batch (using
Vulnerability.objects.bulk_update(instances, ["detected_at"])) in chunks (e.g.,
500-1000) until done; keep using the apps.get_model("core", "Vulnerability")
lookup and ensure batching to avoid large memory spikes.
In `@frontend/src/routes/(app)/(internal)/[model=urlmodel]/+page.svelte`:
- Around line 214-258: Add a loading flag to prevent duplicate sync requests:
introduce a boolean (e.g., isSyncingKev and isSyncingCwe or a single isSyncing
keyed by URLModel) and use it in the onclick handlers for the
data-testid="sync-kev-button" and data-testid="sync-cwe-button" buttons to
early-return if already true, set it true before calling fetch('/cves/sync-kev')
or fetch('/cwes/sync-catalog'), and reset it in a finally block so the button is
re-enabled; also bind the flag to the button disabled attribute and include it
in any class/tailwind state change to show a loading UI while invalidateAll and
toastStore.trigger run.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fcf48ab4-a8b8-4ac3-adfc-3aeb6734a1e5
📒 Files selected for processing (16)
- backend/ciso_assistant/settings.py
- backend/core/migrations/0157_vulnerability_cves_vulnerability_cwes_and_more.py
- backend/core/models.py
- backend/core/serializers.py
- backend/core/views.py
- backend/global_settings/serializers.py
- backend/sec_intel/management/commands/enrich_cve.py
- backend/sec_intel/views.py
- enterprise/backend/enterprise_core/settings.py
- frontend/messages/en.json
- frontend/messages/fr.json
- frontend/src/lib/components/Forms/ModelForm/VulnerabilitySlaSettingForm.svelte
- frontend/src/lib/utils/schemas.ts
- frontend/src/lib/utils/table.ts
- frontend/src/routes/(app)/(internal)/[model=urlmodel]/+page.svelte
- frontend/src/routes/(app)/(internal)/vulnerabilities/refresh-due-dates/+server.ts
✅ Files skipped from review due to trivial changes (1)
- frontend/messages/fr.json
🚧 Files skipped from review as they are similar to previous changes (7)
- enterprise/backend/enterprise_core/settings.py
- backend/ciso_assistant/settings.py
- frontend/messages/en.json
- frontend/src/lib/components/Forms/ModelForm/VulnerabilitySlaSettingForm.svelte
- backend/sec_intel/management/commands/enrich_cve.py
- backend/core/models.py
- backend/core/views.py
Actionable comments posted: 10
♻️ Duplicate comments (4)
backend/sec_intel/serializers.py (2)
46-49: ⚠️ Potential issue | 🟡 Minor
Guard references and aliases against non-dict JSON items.
Both comprehensions call .get() on every entry. If either JSON field contains a stray string or null, serialization raises AttributeError and breaks the advisory response.
💡 Suggested hardening
  return [
      {"str": ref.get("url", ""), "source": ref.get("source", "")}
      for ref in obj.references
+     if isinstance(ref, dict)
  ]
@@
  return [
      {"str": f"{alias.get('source', '')}: {alias.get('id', '')}"}
      for alias in obj.aliases
+     if isinstance(alias, dict)
  ]
Also applies to: 54-57
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/sec_intel/serializers.py` around lines 46 - 49, The comprehensions that build reference and alias lists call .get() on each entry (iterating obj.references and obj.aliases) and will raise AttributeError if items are non-dict (e.g., strings or null); update both comprehensions to skip non-dict entries by filtering with isinstance(item, dict) (or equivalent truthy check) before calling .get(), and ensure you still default to empty strings for missing keys so serialization never fails when stray JSON items appear.
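The guarded comprehension boils down to a simple pattern. A minimal, standalone sketch (the helper names are illustrative, not from the codebase):

```python
def safe_references(references):
    """Build reference entries, skipping stray non-dict items (strings, None)
    so serialization cannot raise AttributeError."""
    return [
        {"str": ref.get("url", ""), "source": ref.get("source", "")}
        for ref in references
        if isinstance(ref, dict)
    ]


def safe_aliases(aliases):
    """Same guard for alias entries; missing keys default to empty strings."""
    return [
        {"str": f"{alias.get('source', '')}: {alias.get('id', '')}"}
        for alias in aliases
        if isinstance(alias, dict)
    ]
```

The `isinstance` filter runs before any `.get()` call, so malformed feed JSON degrades to a shorter list instead of a 500 response.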
17-24: ⚠️ Potential issue | 🟠 Major
Keep NVD enrichment out of create().
NVDFeed.enrich_cve() performs external I/O and a second save. Running it inline makes advisory creation latency depend on NVD availability and turns enrichment failures into a best-effort side effect on the write path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/sec_intel/serializers.py` around lines 17 - 24, The create() method in the serializer currently calls NVDFeed.enrich_cve(instance) (which does external I/O and a second save) making writes blocking; remove that call from serializers.create and instead enqueue an asynchronous background task or emit a post-save signal that passes the instance PK to a worker which then imports sec_intel.feeds.NVDFeed and calls NVDFeed.enrich_cve(instance_id) (or loads the instance in the worker) so enrichment runs outside the request/transaction and failures are best-effort.
frontend/src/lib/utils/crud.ts (1)
571-573: ⚠️ Potential issue | 🟠 Major
Move these date fields out of foreignKeyFields.
ForeignKeyField is the relationship config and does not define a type property. Keeping detected_at, eta, and due_date here makes the vulnerability form treat them like related-object fields instead of dates.
💡 Suggested fix
 vulnerabilities: {
   name: 'vulnerability',
   localName: 'vulnerability',
   localNamePlural: 'vulnerabilities',
   verboseName: 'Vulnerability',
   verboseNamePlural: 'Vulnerabilities',
+  detailViewFields: [
+    { field: 'detected_at', type: 'date' },
+    { field: 'eta', type: 'date' },
+    { field: 'due_date', type: 'date' }
+  ],
   foreignKeyFields: [
     { field: 'folder', urlModel: 'folders', urlParams: 'content_type=DO&content_type=GL' },
     { field: 'assets', urlModel: 'assets' },
     { field: 'applied_controls', urlModel: 'applied-controls' },
     { field: 'filtering_labels', urlModel: 'filtering-labels' },
     { field: 'security_exceptions', urlModel: 'security-exceptions' },
     { field: 'security_advisories', urlModel: 'security-advisories' },
     { field: 'cwes', urlModel: 'cwes' },
-    { field: 'detected_at', type: 'date' },
-    { field: 'eta', type: 'date' },
-    { field: 'due_date', type: 'date' }
   ],
Run this to verify the mismatch between the config arrays and their interfaces:
#!/bin/bash
set -euo pipefail
sed -n '99,105p' frontend/src/lib/utils/crud.ts
sed -n '132,137p' frontend/src/lib/utils/crud.ts
sed -n '563,581p' frontend/src/lib/utils/crud.ts
Expected result: ForeignKeyField has no type, Field does, and these three date entries are currently defined under foreignKeyFields.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/lib/utils/crud.ts` around lines 571 - 573, The three entries { field: 'detected_at', type: 'date' }, { field: 'eta', type: 'date' }, and { field: 'due_date', type: 'date' } are incorrectly placed in the foreignKeyFields array (ForeignKeyField doesn't support a type); move those three objects out of foreignKeyFields into the appropriate fields array (the Field-typed array used for scalar/date fields) so they are defined with type:'date' under the Field definition, and leave only proper ForeignKeyField shapes in foreignKeyFields.
backend/sec_intel/models.py (1)
79-83: ⚠️ Potential issue | 🟠 Major
Enforce DB uniqueness on ref_id.
Both models use ref_id as the sync identity, and the feed code relies on bulk_create(..., ignore_conflicts=True) for deduplication. Without a model-level unique constraint, concurrent or overlapping imports can still create duplicates and break deterministic linking.
Also applies to: 104-108
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/sec_intel/models.py` around lines 79 - 83, Add a DB-level uniqueness constraint for ref_id on both models: update each model that defines fields_to_check = ["ref_id"] by adding either unique=True on the ref_id field or (preferably) a Meta UniqueConstraint (e.g., UniqueConstraint(fields=["ref_id"], name="unique_ref_id_<model>")) inside the model's Meta class so the database enforces uniqueness during concurrent bulk_create(..., ignore_conflicts=True) runs; remember to create the corresponding migration.
🧹 Nitpick comments (6)
frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+page.server.ts (1)
17-28: Schema validation result is unused.
The form is validated against a UUID schema, but form.valid is never checked before making the backend request. Additionally, the backend call uses event.params.id from the URL rather than the validated form data, making the validation purely decorative.
If the intent is to validate that the form contains a matching ID, consider checking form.valid and using the validated data. Otherwise, the validation can be removed to simplify the code.
♻️ Option 1: Use the validation result
  const schema = z.object({ id: z.string().uuid() });
  const form = await superValidate(formData, zod(schema));
+
+ if (!form.valid) {
+   return fail(400, { form });
+ }

  const response = await event.fetch(
-   `${BASE_API_URL}/security-advisories/${event.params.id}/enrich/`,
+   `${BASE_API_URL}/security-advisories/${form.data.id}/enrich/`,
♻️ Option 2: Remove unused validation
- const schema = z.object({ id: z.string().uuid() });
- const form = await superValidate(formData, zod(schema));
-
  const response = await event.fetch(
    `${BASE_API_URL}/security-advisories/${event.params.id}/enrich/`,
Note: You'd also need to adjust the return value since form would be undefined.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+page.server.ts` around lines 17 - 28, The form validation result (schema, form, superValidate) is unused and the backend call uses event.params.id instead of validated data; update the handler to check form.valid and, if valid, use the validated id (form.data.id) when calling event.fetch to `${BASE_API_URL}/security-advisories/${form.data.id}/enrich/`, and if not valid return the form (or an appropriate error) without making the fetch; this ensures superValidate's output is actually enforced and prevents sending requests with unvalidated URL params.
frontend/src/lib/components/Forms/ModelForm/SecurityAdvisoryForm.svelte (2)
64-64: Missing cache bindings for consistency.
The cvss_base_score field is missing cacheLock and bind:cachedValue props that other fields have. This could cause inconsistent caching behavior if form data caching is enabled.
♻️ Add cache bindings for consistency
-<NumberField {form} field="cvss_base_score" label={m.cvssBaseScore()} min={0} max={10} step={0.1} />
+<NumberField
+  {form}
+  field="cvss_base_score"
+  label={m.cvssBaseScore()}
+  min={0}
+  max={10}
+  step={0.1}
+  cacheLock={cacheLocks['cvss_base_score']}
+  bind:cachedValue={formDataCache['cvss_base_score']}
+/>
Verify each finding against the current code and only fix it if needed. In `@frontend/src/lib/components/Forms/ModelForm/SecurityAdvisoryForm.svelte` at line 64, The NumberField for cvss_base_score in SecurityAdvisoryForm.svelte is missing the caching props used elsewhere; update the NumberField component call for field="cvss_base_score" to include the same cache bindings as other fields (pass cacheLock and add bind:cachedValue) so it participates in the form caching mechanism consistently with the other inputs.
86-97: Missing cache bindings on filtering_labels.
Similar to cvss_base_score, the filtering_labels field is missing cache bindings for consistency with other fields in this form.
♻️ Add cache bindings
 <AutocompleteSelect
   multiple
   {form}
   createFromSelection={true}
   optionsEndpoint="filtering-labels"
   optionsLabelField="label"
   field="filtering_labels"
   translateOptions={false}
   helpText={m.labelsHelpText()}
   label={m.labels()}
   allowUserOptions="append"
+  cacheLock={cacheLocks['filtering_labels']}
+  bind:cachedValue={formDataCache['filtering_labels']}
 />
Verify each finding against the current code and only fix it if needed. In `@frontend/src/lib/components/Forms/ModelForm/SecurityAdvisoryForm.svelte` around lines 86 - 97, The AutocompleteSelect for the "filtering_labels" field is missing the same cache bindings used elsewhere (e.g., cvss_base_score) which causes inconsistent behavior; update the <AutocompleteSelect ... field="filtering_labels" ... /> instance to include the same cache-related bindings/props used by cvss_base_score (the binding names used in this form for caching), so that filtering_labels reads/writes the cached value the same way as other fields.
frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+page.svelte (1)
29-31: Consider logging the caught error for debugging.
The empty catch block swallows the error, making debugging harder if network issues occur.
♻️ Add error logging
- } catch {
+ } catch (err) {
+   console.error('Enrichment failed:', err);
    toastStore.trigger({ message: m.enrichmentFailed(), preset: 'error' });
  }
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+page.svelte` around lines 29 - 31, The catch block that triggers toastStore.trigger({ message: m.enrichmentFailed(), preset: 'error' }) swallows the error; change it to catch the error (e) and log it before showing the toast—e.g., call console.error or the app logger with a clear message including the error and context (e.g., "Failed to enrich security advisory" plus e) so failures in the async function around that catch are visible while preserving the existing toast behavior.
frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/enrich/+server.ts (1)
5-12: Same pattern as sync-kev - consider consistency.
This endpoint shares the same pattern as sync-kev/+server.ts. The same suggestions apply:
- The Content-Type header is redundant (auto-added by the handleFetch hook).
- Consider handling non-JSON error responses gracefully.
Based on learnings: handleFetch hook automatically adds Content-Type for BASE_API_URL requests.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/enrich/+server.ts` around lines 5 - 12, The POST RequestHandler uses an explicit 'Content-Type' header and assumes res.json() will always succeed; remove the redundant headers object from the fetch call (the call inside POST) since handleFetch already sets Content-Type for BASE_API_URL requests, and make the response parsing robust by checking the response content-type (or attempting json parse inside a try/catch) before calling res.json() — if the body is not JSON, read it as text and return a JSON payload like { error: <text> } (preserving res.status) so non-JSON error responses from the fetch to /security-advisories/${params.id}/enrich/ are handled gracefully.
frontend/src/routes/(app)/(internal)/security-advisories/sync-kev/+server.ts (1)
5-12: Consider handling non-JSON error responses.
If the backend returns a non-JSON response (e.g., an HTML error page on 500), res.json() will throw an unhandled exception. Consider wrapping in try/catch or checking the Content-Type header before parsing.
Also, per learnings, the Content-Type: application/json header is automatically added by the handleFetch hook in hooks.server.ts for requests to BASE_API_URL, so line 8 is redundant.
♻️ Suggested improvement with error handling
 export const POST: RequestHandler = async ({ fetch }) => {
   const res = await fetch(`${BASE_API_URL}/security-advisories/sync-kev/`, {
-    method: 'POST',
-    headers: { 'Content-Type': 'application/json' }
+    method: 'POST'
   });
-  const data = await res.json();
+  const contentType = res.headers.get('content-type') || '';
+  const data = contentType.includes('application/json')
+    ? await res.json()
+    : { error: 'Unexpected response format' };
   return json(data, { status: res.status });
 };
Based on learnings: "the SvelteKit application uses a handleFetch hook... that automatically adds Content-Type: application/json... Manual header addition in individual server routes is unnecessary for backend API calls."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/routes/(app)/(internal)/security-advisories/sync-kev/+server.ts` around lines 5 - 12, The POST handler calls res.json() unguarded and also redundantly sets Content-Type; update the POST RequestHandler to first check res.headers.get('content-type') (or wrap res.json() in try/catch) and gracefully handle non-JSON responses (e.g., read text for error bodies and return an appropriate json/text response with res.status), and remove the manual headers: { 'Content-Type': 'application/json' } since handleFetch in hooks.server.ts already adds it for BASE_API_URL requests; change code paths around POST, fetch, BASE_API_URL, and res.json() accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/core/views.py`:
- Around line 3203-3206: The code assumes sla_policy.get(severity_label) is
always numeric and calls int(days) which can raise ValueError/TypeError for
malformed values; wrap the conversion in a try/except (catch ValueError and
TypeError), log or record the malformed value for debugging, and skip that SLA
entry (i.e., continue) instead of letting the exception propagate; update the
block around sla_policy, severity_label, days and delta = timedelta(...) to use
a safely converted days_int before constructing timedelta.
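The defensive conversion described above can be sketched as a small helper. Names (`sla_delta`, the policy-dict shape) are illustrative, not taken from views.py:

```python
import logging
from datetime import timedelta

logger = logging.getLogger(__name__)


def sla_delta(sla_policy, severity_label):
    """Convert a policy value to a timedelta, skipping malformed entries.

    Returns None (caller should `continue`) instead of letting a bad
    settings value raise ValueError/TypeError in the request path.
    """
    days = sla_policy.get(severity_label)
    try:
        days_int = int(days)
    except (TypeError, ValueError):
        logger.warning("Malformed SLA value %r for severity %r", days, severity_label)
        return None
    return timedelta(days=days_int)
```

Catching TypeError as well as ValueError covers both a missing key (`None`) and a garbled string value in one branch.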
In `@backend/sec_intel/feeds.py`:
- Around line 170-181: The code currently calls Decimal(row["epss"]) and
Decimal(row["percentile"]) inside the results append and only catches KeyError
and ValueError, which misses decimal.InvalidOperation for blank/garbled values
and treats 0/0.0 as falsy; change the parsing to explicitly fetch values via
row.get("epss") and row.get("percentile"), check for empty string or None (not
falsiness) and only attempt Decimal() when non-empty, import and catch
decimal.InvalidOperation (in addition to KeyError) to skip malformed numeric
rows without aborting, and ensure numeric zeros (0, "0", "0.0") are accepted and
preserved; apply the same pattern to the other occurrences referenced (lines
~374-390 and ~417-425), and keep the logger.warning call
(logger.warning("Skipping malformed EPSS entry", cve=cve_id, error=str(e))) when
skipping.
In `@backend/sec_intel/management/commands/enrich_advisory.py`:
- Around line 50-63: When a local payload is provided via file_path the parsed
CVE may not match the requested cve_id, so after calling NVDFeed.parse_cve(raw)
validate that the parsed record's identifier (e.g. fields.get("cve_id") or
fields.get("id") depending on parse_cve output) equals cve.ref_id (or the local
cve_id variable); if it does not match, write an error via
self.stderr.write(self.style.ERROR(...)) and return without applying the
metadata. Update the block around NVDFeed.parse_cve, fields and the existing
error-return flow to perform this guard and keep the existing messages (use
cve.ref_id/cve_id and file_path in the error message for clarity).
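The identity guard above can be isolated as a tiny predicate. This is a sketch with illustrative names; the exact key that `parse_cve` emits (`cve_id` vs `id`) is an assumption the prompt itself leaves open:

```python
def payload_matches_cve(fields, expected_ref_id):
    """Return True only when a locally supplied payload actually describes
    the CVE the operator asked to enrich."""
    parsed_id = fields.get("cve_id") or fields.get("id")
    return parsed_id == expected_ref_id
```

The command would call this right after `NVDFeed.parse_cve(raw)` and bail out with a stderr error when it returns False, so a mismatched local file can never overwrite the wrong advisory.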
In `@backend/sec_intel/tasks.py`:
- Around line 14-18: The handlers are currently swallowing exceptions (e.g., the
KEVFeed().sync() try/except) so failures appear as normal; change each except
Exception block to capture the exception (except Exception as e), log it (keep
exc_info=True) and then re-raise the exception (raise) so callers see the
failure; apply this pattern for KEVFeed.sync(), EUVDFeed.sync(), CWEFeed.sync(),
EPSSFeed.sync() and the corresponding periodic/on-demand task handlers in this
file.
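The log-then-re-raise pattern the prompt asks for is simple to express generically. A hedged sketch (the `run_sync` wrapper and feed object are illustrative, not the actual Huey task bodies):

```python
import logging

logger = logging.getLogger(__name__)


def run_sync(feed):
    """Run a feed sync; log failures for operators, then re-raise so the
    task runner records the task as failed instead of silently succeeding."""
    try:
        feed.sync()
    except Exception:
        logger.error("Feed sync failed", exc_info=True)
        raise
```

The key point is the bare `raise`: it preserves the original traceback while still emitting the structured log line, so both humans and the queue see the same failure.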
In `@backend/sec_intel/views.py`:
- Around line 52-90: The POST actions sync_kev and sync_euvd should enqueue the
Huey background task rather than executing the full feed inline: replace the
current try block that calls KEVFeed().sync() / EUVDFeed().sync() and waits for
a result with a call that simply invokes the task-decorated method
(KEVFeed().sync() and EUVDFeed().sync() — tasks are enqueued automatically when
called) and immediately return a 202 Response like {"detail": "KEV sync
enqueued"} or {"detail": "EUVD sync enqueued"}; keep an except Exception to
logger.warning(exc_info=True) and return a 502 on enqueue failure; apply the
same change to the other similar endpoints referenced (the ones around the other
feed actions).
In `@enterprise/frontend/src/lib/components/SideBar/navData.ts`:
- Around line 163-172: The sidebar nav entries named 'securityAdvisories' (href
'/security-advisories') and 'cwes' (href '/cwes') in navData.ts point to routes
that don't exist in the enterprise app; either remove those two objects from the
navData export or implement equivalent enterprise route handlers that serve
'/security-advisories' and '/cwes' under the enterprise (app)/(internal) route
tree so clicks don't 404—if you implement routes, create the necessary page +
server/loader logic and wire any required data/permissions to match the
open-source behavior, otherwise delete the entries from navData.ts to remove the
broken menu items.
In `@frontend/messages/en.json`:
- Line 2372: The help text for SLA anchoring is misleading: update the
"detectedAtHelpText" value to "When the vulnerability was discovered. Defaults
to today. Used as the SLA start date when the SLA anchor is set to detection
date." and update "slaAnchorHelpText" to "Choose when the SLA clock starts.
Detection date: when the vulnerability was discovered by your organization.
Publication date: when the linked security advisory was publicly disclosed and
has a published date." Apply the same updated wording to the other occurrences
noted (the entries around lines 3669-3670) so both keys ("detectedAtHelpText"
and "slaAnchorHelpText") consistently reflect the configurable SLA anchor
behavior.
In `@frontend/src/lib/components/DetailView/DetailView.svelte`:
- Around line 633-639: Replace the raw <a> element used for external advisory
URLs with the shared Anchor component so app-level outbound link behavior is
preserved: import Anchor (if not already) into DetailView.svelte and change the
block that renders when val.str starts with "http://" or "https://" to render
<Anchor href={val.str} target="_blank" rel="noopener noreferrer"
class="anchor">{val.str}</Anchor> instead of the plain <a>; keep the same
props/text and the conditional that checks val.str to locate the exact spot.
In `@frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+page.server.ts`:
- Around line 13-15: The check for "formData" is dead because
event.request.formData() always returns a FormData object; remove the if-block
that checks "if (!formData) { return fail(400, { form: null }); }" in the
handler (the variable "formData" and the failing call to fail(400, { form: null
}) are the targets), and instead validate expected form fields from the returned
FormData (e.g., check specific entries like formData.get('fieldName') and call
fail(...) only when those required fields are missing or invalid).
In `@frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+server.ts`:
- Around line 16-20: The proxy currently constructs the Response with new
Response(JSON.stringify(data), { headers: { 'Content-Type': 'application/json' }
}) which defaults to status 200 and can mask the upstream status; update that
Response call to include the upstream response status (e.g. use
upstreamResponse.status or response.status) by adding status:
upstreamResponse.status to the init object so the proxied GET preserves the
original status code.
---
Duplicate comments:
In `@backend/sec_intel/models.py`:
- Around line 79-83: Add a DB-level uniqueness constraint for ref_id on both
models: update each model that defines fields_to_check = ["ref_id"] by adding
either unique=True on the ref_id field or (preferably) a Meta UniqueConstraint
(e.g., UniqueConstraint(fields=["ref_id"], name="unique_ref_id_<model>")) inside
the model's Meta class so the database enforces uniqueness during concurrent
bulk_create(..., ignore_conflicts=True) runs; remember to create the
corresponding migration.
In `@backend/sec_intel/serializers.py`:
- Around line 46-49: The comprehensions that build reference and alias lists
call .get() on each entry (iterating obj.references and obj.aliases) and will
raise AttributeError if items are non-dict (e.g., strings or null); update both
comprehensions to skip non-dict entries by filtering with isinstance(item, dict)
(or equivalent truthy check) before calling .get(), and ensure you still default
to empty strings for missing keys so serialization never fails when stray JSON
items appear.
- Around line 17-24: The create() method in the serializer currently calls
NVDFeed.enrich_cve(instance) (which does external I/O and a second save) making
writes blocking; remove that call from serializers.create and instead enqueue an
asynchronous background task or emit a post-save signal that passes the instance
PK to a worker which then imports sec_intel.feeds.NVDFeed and calls
NVDFeed.enrich_cve(instance_id) (or loads the instance in the worker) so
enrichment runs outside the request/transaction and failures are best-effort.
In `@frontend/src/lib/utils/crud.ts`:
- Around line 571-573: The three entries { field: 'detected_at', type: 'date' },
{ field: 'eta', type: 'date' }, and { field: 'due_date', type: 'date' } are
incorrectly placed in the foreignKeyFields array (ForeignKeyField doesn't
support a type); move those three objects out of foreignKeyFields into the
appropriate fields array (the Field-typed array used for scalar/date fields) so
they are defined with type:'date' under the Field definition, and leave only
proper ForeignKeyField shapes in foreignKeyFields.
---
Nitpick comments:
In `@frontend/src/lib/components/Forms/ModelForm/SecurityAdvisoryForm.svelte`:
- Line 64: The NumberField for cvss_base_score in SecurityAdvisoryForm.svelte is
missing the caching props used elsewhere; update the NumberField component call
for field="cvss_base_score" to include the same cache bindings as other fields
(pass cacheLock and add bind:cachedValue) so it participates in the form caching
mechanism consistently with the other inputs.
- Around line 86-97: The AutocompleteSelect for the "filtering_labels" field is
missing the same cache bindings used elsewhere (e.g., cvss_base_score) which
causes inconsistent behavior; update the <AutocompleteSelect ...
field="filtering_labels" ... /> instance to include the same cache-related
bindings/props used by cvss_base_score (the binding names used in this form for
caching), so that filtering_labels reads/writes the cached value the same way as
other fields.
In `@frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+page.server.ts`:
- Around line 17-28: The form validation result (schema, form, superValidate) is
unused and the backend call uses event.params.id instead of validated data;
update the handler to check form.valid and, if valid, use the validated id
(form.data.id) when calling event.fetch to
`${BASE_API_URL}/security-advisories/${form.data.id}/enrich/`, and if not valid
return the form (or an appropriate error) without making the fetch; this ensures
superValidate's output is actually enforced and prevents sending requests with
unvalidated URL params.
In `@frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+page.svelte`:
- Around line 29-31: The catch block that triggers toastStore.trigger({ message:
m.enrichmentFailed(), preset: 'error' }) swallows the error; change it to catch
the error (e) and log it before showing the toast—e.g., call console.error or
the app logger with a clear message including the error and context (e.g.,
"Failed to enrich security advisory" plus e) so failures in the async function
around that catch are visible while preserving the existing toast behavior.
In `@frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/enrich/+server.ts`:
- Around line 5-12: The POST RequestHandler uses an explicit 'Content-Type'
header and assumes res.json() will always succeed; remove the redundant headers
object from the fetch call (the call inside POST) since handleFetch already sets
Content-Type for BASE_API_URL requests, and make the response parsing robust by
checking the response content-type (or attempting json parse inside a try/catch)
before calling res.json() — if the body is not JSON, read it as text and return
a JSON payload like { error: <text> } (preserving res.status) so non-JSON error
responses from the fetch to /security-advisories/${params.id}/enrich/ are
handled gracefully.
In `@frontend/src/routes/(app)/(internal)/security-advisories/sync-kev/+server.ts`:
- Around line 5-12: The POST handler calls res.json() unguarded and also
redundantly sets Content-Type; update the POST RequestHandler to first check
res.headers.get('content-type') (or wrap res.json() in try/catch) and gracefully
handle non-JSON responses (e.g., read text for error bodies and return an
appropriate json/text response with res.status), and remove the manual headers:
{ 'Content-Type': 'application/json' } since handleFetch in hooks.server.ts
already adds it for BASE_API_URL requests; change code paths around POST, fetch,
BASE_API_URL, and res.json() accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1423a589-032c-46c4-babb-20fb60c872eb
📒 Files selected for processing (37)
- backend/core/migrations/0157_vulnerability_cwes_vulnerability_detected_at_and_more.py
- backend/core/models.py
- backend/core/serializers.py
- backend/core/startup.py
- backend/core/urls.py
- backend/core/views.py
- backend/global_settings/migrations/0004_alter_globalsettings_name.py
- backend/sec_intel/feeds.py
- backend/sec_intel/management/commands/enrich_advisory.py
- backend/sec_intel/migrations/0001_initial.py
- backend/sec_intel/models.py
- backend/sec_intel/serializers.py
- backend/sec_intel/tasks.py
- backend/sec_intel/views.py
- enterprise/frontend/src/lib/components/SideBar/navData.ts
- frontend/messages/en.json
- frontend/src/lib/components/DetailView/DetailView.svelte
- frontend/src/lib/components/Forms/ModelForm.svelte
- frontend/src/lib/components/Forms/ModelForm/SecurityAdvisoryForm.svelte
- frontend/src/lib/components/Forms/ModelForm/VulnerabilitiesForm.svelte
- frontend/src/lib/components/SideBar/navData.ts
- frontend/src/lib/utils/boolean-display.ts
- frontend/src/lib/utils/crud.ts
- frontend/src/lib/utils/schemas.ts
- frontend/src/lib/utils/table.ts
- frontend/src/lib/utils/types.ts
- frontend/src/routes/(app)/(internal)/[model=urlmodel]/+page.svelte
- frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+layout.server.ts
- frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+page.server.ts
- frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+page.svelte
- frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+server.ts
- frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/enrich/+server.ts
- frontend/src/routes/(app)/(internal)/security-advisories/autocomplete/+server.ts
- frontend/src/routes/(app)/(internal)/security-advisories/batch-action/+server.ts
- frontend/src/routes/(app)/(internal)/security-advisories/source/+server.ts
- frontend/src/routes/(app)/(internal)/security-advisories/sync-euvd/+server.ts
- frontend/src/routes/(app)/(internal)/security-advisories/sync-kev/+server.ts
✅ Files skipped from review due to trivial changes (4)
- frontend/src/lib/utils/boolean-display.ts
- frontend/src/lib/components/SideBar/navData.ts
- backend/global_settings/migrations/0004_alter_globalsettings_name.py
- frontend/src/lib/components/Forms/ModelForm/VulnerabilitiesForm.svelte
🚧 Files skipped from review as they are similar to previous changes (8)
- backend/core/urls.py
- frontend/src/lib/utils/types.ts
- frontend/src/routes/(app)/(internal)/[model=urlmodel]/+page.svelte
- backend/core/startup.py
- backend/core/serializers.py
- frontend/src/lib/utils/schemas.ts
- frontend/src/lib/components/Forms/ModelForm.svelte
- backend/core/models.py
if (!formData) {
  return fail(400, { form: null });
}
Dead code: formData check is always truthy.
event.request.formData() returns a FormData object, never null or undefined. This check will never trigger fail(400).
🔧 Remove dead code
- if (!formData) {
- return fail(400, { form: null });
- }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (!formData) {
  return fail(400, { form: null });
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+page.server.ts`
around lines 13 - 15, The check for "formData" is dead because
event.request.formData() always returns a FormData object; remove the if-block
that checks "if (!formData) { return fail(400, { form: null }); }" in the
handler (the variable "formData" and the failing call to fail(400, { form: null
}) are the targets), and instead validate expected form fields from the returned
FormData (e.g., check specific entries like formData.get('fieldName') and call
fail(...) only when those required fields are missing or invalid).
frontend/src/routes/(app)/(internal)/security-advisories/[id=uuid]/+server.ts
Actionable comments posted: 1
♻️ Duplicate comments (1)
backend/sec_intel/feeds.py (1)
170-181: ⚠️ Potential issue | 🟠 Major — Handle external numeric fields with explicit emptiness checks.

`Decimal()` will still raise `InvalidOperation` on blank/garbled feed values here, and Line 301, Line 389, and Line 423 still treat `0`/`0.0` as "missing". That leaves the sync path vulnerable to aborting on one bad row and to silently dropping legitimate zero-valued CVSS/EPSS data. Please switch these branches to explicit `None`/empty-string checks and catch `InvalidOperation` anywhere we parse upstream numeric fields.

Also applies to: 299-303, 374-390, 417-425
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/sec_intel/feeds.py` around lines 170 - 181, The Decimal(...) conversion of external numeric fields can raise decimal.InvalidOperation on blank or garbled values and the current code treats '0'/'0.0' as missing; update the parsing where you build the dict (the results.append block using Decimal(row["epss"]) and Decimal(row["percentile"])) to first check for None/empty-string (after stripping) and only treat empty values as None, and for non-empty values attempt Decimal conversion inside a try/except that catches decimal.InvalidOperation as well as ValueError/KeyError; on InvalidOperation set the field to None (or keep it None) and call logger.warning with the error and cve id (same pattern used currently), and apply the same change to the other parsing sites that use Decimal(): the blocks handling EPSS/CVSS/percentile at the other occurrences you noted (the similar Decimal conversions at the other feed parsing functions).
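The guarded parsing the prompt asks for can be sketched with a small helper. This is a minimal illustration, assuming CSV rows with `epss` and `percentile` columns as named in the review comment; `parse_decimal` is a hypothetical name, not the PR's real code:

```python
from decimal import Decimal, InvalidOperation

def parse_decimal(raw):
    """Return a Decimal for non-empty values, None otherwise.

    Treats None and blank/whitespace strings as missing, but keeps
    legitimate zero values such as "0" or "0.0".
    """
    if raw is None:
        return None
    text = str(raw).strip()
    if not text:
        return None
    try:
        return Decimal(text)
    except (InvalidOperation, ValueError):
        # Garbled feed value: drop this field, not the whole row.
        return None

# Example on a feed row: zero is preserved, blank becomes None.
row = {"cve": "CVE-2024-0001", "epss": "0.0", "percentile": ""}
epss = parse_decimal(row["epss"])
percentile = parse_decimal(row["percentile"])
```

A caller would log a warning with the CVE id when a non-empty value fails to parse, matching the existing logging pattern in the feed code.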
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/sec_intel/feeds.py`:
- Around line 188-203: The current sync builds epss_map.keys() and uses
SecurityAdvisory.objects.filter(ref_id__in=epss_map.keys()) which can exceed
Postgres bind-parameter limits for large feeds; split the keys into chunks
(e.g., 1000) using itertools.islice (import from itertools) and run the
filter/update per chunk: for each chunk, query
SecurityAdvisory.objects.filter(ref_id__in=chunk), update matching objects'
epss_score/epss_percentile and append to to_update (or call bulk_update
per-chunk), then repeat until done; apply the exact same batching pattern to
KEVFeed.sync() where kev_map.keys() is used.
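The per-chunk batching pattern described above can be sketched with a generic helper. `chunked` is a hypothetical name, and the ORM query is omitted so the sketch stays self-contained; each yielded chunk would feed one `ref_id__in` filter:

```python
from itertools import islice

def chunked(iterable, size=1000):
    """Yield successive lists of at most `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

# 2500 keys become three bounded batches (1000, 1000, 500) instead of
# a single query with 2500 bind parameters.
keys = [f"CVE-2024-{i:04d}" for i in range(2500)]
batches = list(chunked(keys, 1000))
```

The same helper would cover both `epss_map.keys()` and `kev_map.keys()` in `KEVFeed.sync()`.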
---
Duplicate comments:
In `@backend/sec_intel/feeds.py`:
- Around line 170-181: The Decimal(...) conversion of external numeric fields
can raise decimal.InvalidOperation on blank or garbled values and the current
code treats '0'/'0.0' as missing; update the parsing where you build the dict
(the results.append block using Decimal(row["epss"]) and
Decimal(row["percentile"])) to first check for None/empty-string (after
stripping) and only treat empty values as None, and for non-empty values attempt
Decimal conversion inside a try/except that catches decimal.InvalidOperation as
well as ValueError/KeyError; on InvalidOperation set the field to None (or keep
it None) and call logger.warning with the error and cve id (same pattern used
currently), and apply the same change to the other parsing sites that use
Decimal(): the blocks handling EPSS/CVSS/percentile at the other occurrences you
noted (the similar Decimal conversions at the other feed parsing functions).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b7320bc1-51b4-4435-bb8d-68f4e4463942
📒 Files selected for processing (1)
backend/sec_intel/feeds.py
Actionable comments posted: 1
♻️ Duplicate comments (4)
frontend/messages/en.json (1)
2372-2372: ⚠️ Potential issue | 🟡 Minor — SLA help text still implies detection/CVE-only behavior.
Line 2372 says SLA is calculated from detection date, and Line 3678 limits publication anchoring to CVE. With configurable anchor and generic security advisories, this is now misleading.
✏️ Suggested wording update

```diff
- "detectedAtHelpText": "When the vulnerability was discovered. Defaults to today. The SLA deadline is calculated from this date.",
+ "detectedAtHelpText": "When the vulnerability was discovered. Defaults to today. Used as the SLA start date when the SLA anchor is set to detection date.",
- "slaAnchorHelpText": "Choose when the SLA clock starts. Detection date: when the vulnerability was discovered by your organization. Publication date: when the CVE was publicly disclosed (requires linked CVE with published date).",
+ "slaAnchorHelpText": "Choose when the SLA clock starts. Detection date: when the vulnerability was discovered by your organization. Publication date: when the linked security advisory was publicly disclosed and has a published date.",
```

Also applies to: 3678-3678
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/messages/en.json` at line 2372, the help text key "detectedAtHelpText" still says the SLA is calculated from the detection date and implies CVE-only behavior; update its string to mention the configurable anchor/publication date and that SLA calculation uses the chosen anchor (not necessarily detection/CVE). Also find and update the other English message in en.json that references CVE-only publication anchoring (search for the string containing "CVE") to use neutral wording like "publication/anchor" or "chosen anchor (e.g., detection date or publication date)" so both messages accurately reflect configurable anchors.

backend/core/models.py (2)
5513-5526: ⚠️ Potential issue | 🟠 Major — Recompute `due_date` when the SLA anchor inputs change.

This save path only treats `severity` as an SLA-driving change. If `detected_at` is corrected later, or `published_date` changes while the policy anchor is `"published_date"`, `due_date` stays stale until something else refreshes it. If manual due-date overrides are intentional, this needs an explicit override mechanism; otherwise `detected_at` and `published_date` need to participate in the change detection too.

Also applies to: 5545-5550
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/core/models.py` around lines 5513 - 5526, The save path only treats severity changes as triggering SLA recomputation; modify the logic in the Vulnerability save method (the block that computes is_new, severity_changed and calls _apply_sla_policy) to also detect changes to SLA anchor fields (at minimum detected_at and published_date) and treat those as triggering _apply_sla_policy so due_date is recomputed; specifically, when not is_new, fetch the prior values for "severity", "detected_at", and "published_date" (similar to the existing Vulnerability.objects.filter(pk=self.pk).values(...) call), set a flag (e.g. sla_anchor_changed) if any differ from the current instance, and use that flag alongside severity_changed/is_new to call _apply_sla_policy and append "due_date" to update_fields when appropriate; if manual due-date overrides are part of the model, ensure the code preserves an explicit override field (or skips recompute when an override flag is set).
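The widened change detection described above can be sketched independently of the ORM. A minimal illustration, where `SLA_FIELDS`, `sla_inputs_changed`, and the dict-based stand-ins are all hypothetical names rather than the PR's real code:

```python
# SLA-driving fields, per the review comment: severity plus both anchors.
SLA_FIELDS = ("severity", "detected_at", "published_date")

def sla_inputs_changed(old_values, new_values):
    """True when any SLA-driving field differs from its stored value."""
    return any(old_values.get(f) != new_values.get(f) for f in SLA_FIELDS)

old = {"severity": "high", "detected_at": "2024-01-01", "published_date": None}
new = {"severity": "high", "detected_at": "2024-01-05", "published_date": None}
# detected_at was corrected, so due_date must be recomputed.
needs_recompute = sla_inputs_changed(old, new)
```

In the real `save()` this flag would sit alongside `is_new` and `severity_changed` to decide whether `_apply_sla_policy` runs.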
5511-5527: ⚠️ Potential issue | 🟠 Major — Normalize `update_fields` before mutating it.

`update_fields` can be any iterable (list, set, tuple, etc.). Calling `.append()` on it will crash when callers pass non-list iterables like `save(update_fields={"severity"})` or a tuple, causing an `AttributeError` before the SLA fields are persisted.

Suggested fix

```diff
 def save(self, *args, **kwargs):
     from datetime import date

-    update_fields = kwargs.get("update_fields")
+    update_fields = (
+        set(kwargs["update_fields"]) if kwargs.get("update_fields") is not None else None
+    )
     is_new = self._state.adding
     severity_changed = False
     if not is_new:
         old = Vulnerability.objects.filter(pk=self.pk).values("severity").first()
@@
     if is_new and not self.detected_at:
         self.detected_at = date.today()
-        if update_fields is not None and "detected_at" not in update_fields:
-            update_fields.append("detected_at")
+        if update_fields is not None:
+            update_fields.add("detected_at")
     if is_new or severity_changed:
         self._apply_sla_policy()
-        if update_fields is not None and "due_date" not in update_fields:
-            update_fields.append("due_date")
+        if update_fields is not None:
+            update_fields.add("due_date")
+    if update_fields is not None:
+        kwargs["update_fields"] = update_fields
     super().save(*args, **kwargs)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/core/models.py` around lines 5511 - 5527, The code mutates update_fields (calls .append()) but callers may pass non-list iterables (set/tuple), causing AttributeError; normalize it to a mutable list before mutating: if update_fields is not None and not isinstance(update_fields, list): convert it to update_fields = list(update_fields) and assign it back into kwargs (kwargs["update_fields"] = update_fields) so subsequent .append() calls (used around self.detected_at and after self._apply_sla_policy()) are safe; update references to use this normalized variable.backend/core/serializers.py (1)
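The normalization idea can be demonstrated outside Django with a stand-in model. `FakeModel` and its trimmed `save()` are illustrative only, not the real `Vulnerability` implementation from the PR:

```python
class FakeModel:
    """Stand-in model: shows only the update_fields normalization."""

    def save(self, **kwargs):
        update_fields = kwargs.get("update_fields")
        if update_fields is not None:
            # Copy any iterable (list, set, tuple) into a set, so adding
            # the SLA fields never raises AttributeError the way
            # .append() would on a non-list.
            update_fields = set(update_fields)
            update_fields.add("detected_at")
            update_fields.add("due_date")
            kwargs["update_fields"] = update_fields
        # Return what would be forwarded to super().save() for inspection.
        return kwargs.get("update_fields")

model = FakeModel()
# Callers may pass a tuple or a set; both are accepted after normalization.
fields = model.save(update_fields=("severity",))
```

Using a set also makes the "not already present" membership checks unnecessary, since `add()` is idempotent.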
345-347: ⚠️ Potential issue | 🟠 Major — Use a stable severity key for SLA lookup.

Line 345 uses `obj.get_severity_display()`, which is localized. If locale changes, `sla_policy.get(...)` at Line 346 can miss and incorrectly fall back to `"on track"`. Use `obj.severity` for lookup and keep display translation only for UI output.

Suggested fix

```diff
- severity_label = obj.get_severity_display()
- sla_days = sla_policy.get(severity_label)
+ severity_key = obj.severity
+ sla_days = sla_policy.get(severity_key)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/core/serializers.py` around lines 345 - 347, The SLA lookup currently uses the localized label from obj.get_severity_display(), which can change with locale and cause sla_policy.get(...) to miss; change the lookup to use the stable internal key obj.severity (e.g., severity_key = obj.severity) and query sla_policy with that key, reserving obj.get_severity_display() only for UI/display rendering; update the variables around severity_label / sla_days to reflect the new key name and ensure downstream logic that expects the display label still calls get_severity_display() where needed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/core/serializers.py`:
- Around line 347-350: The SLA handling assumes sla_days is numeric; coerce and
validate it before doing arithmetic in the serializer block (where sla_days,
remaining, obj.due_date and today are used) — convert sla_days to an int/float
inside a try/except (or use a safe numeric parse), ensure it's > 0, and only
perform the remaining < sla_days * 0.5 check when coercion succeeds; if coercion
fails or sla_days is invalid, skip the caution branch (or default to a safe
behavior) to avoid TypeError/ValueError at serialization time.
---
Duplicate comments:
In `@backend/core/models.py`:
- Around line 5513-5526: The save path only treats severity changes as
triggering SLA recomputation; modify the logic in the Vulnerability save method
(the block that computes is_new, severity_changed and calls _apply_sla_policy)
to also detect changes to SLA anchor fields (at minimum detected_at and
published_date) and treat those as triggering _apply_sla_policy so due_date is
recomputed; specifically, when not is_new, fetch the prior values for
"severity", "detected_at", and "published_date" (similar to the existing
Vulnerability.objects.filter(pk=self.pk).values(...) call), set a flag (e.g.
sla_anchor_changed) if any differ from the current instance, and use that flag
alongside severity_changed/is_new to call _apply_sla_policy and append
"due_date" to update_fields when appropriate; if manual due-date overrides are
part of the model, ensure the code preserves an explicit override field (or
skips recompute when an override flag is set).
- Around line 5511-5527: The code mutates update_fields (calls .append()) but
callers may pass non-list iterables (set/tuple), causing AttributeError;
normalize it to a mutable list before mutating: if update_fields is not None and
not isinstance(update_fields, list): convert it to update_fields =
list(update_fields) and assign it back into kwargs (kwargs["update_fields"] =
update_fields) so subsequent .append() calls (used around self.detected_at and
after self._apply_sla_policy()) are safe; update references to use this
normalized variable.
In `@backend/core/serializers.py`:
- Around line 345-347: The SLA lookup currently uses the localized label from
obj.get_severity_display(), which can change with locale and cause
sla_policy.get(...) to miss; change the lookup to use the stable internal key
obj.severity (e.g., severity_key = obj.severity) and query sla_policy with that
key, reserving obj.get_severity_display() only for UI/display rendering; update
the variables around severity_label / sla_days to reflect the new key name and
ensure downstream logic that expects the display label still calls
get_severity_display() where needed.
In `@frontend/messages/en.json`:
- Line 2372: The help text key "detectedAtHelpText" still says the SLA is
calculated from the detection date and implies CVE-only behavior; update its
string to mention the configurable anchor/publication date and that SLA
calculation uses the chosen anchor (not necessarily detection/CVE). Also find
and update the other English message in en.json that references CVE-only
publication anchoring (search for the string containing "CVE") to use neutral
wording like "publication/anchor" or "chosen anchor (e.g., detection date or
publication date)" so both messages accurately reflect configurable anchors.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d9496854-1eb0-4d70-8f9b-f4e5867690f1
📒 Files selected for processing (7)
- backend/core/models.py
- backend/core/serializers.py
- backend/core/views.py
- enterprise/frontend/src/lib/components/SideBar/navData.ts
- frontend/messages/en.json
- frontend/messages/fr.json
- frontend/src/lib/components/SideBar/navData.ts
✅ Files skipped from review due to trivial changes (3)
- frontend/messages/fr.json
- enterprise/frontend/src/lib/components/SideBar/navData.ts
- frontend/src/lib/components/SideBar/navData.ts
Force-pushed from e55dd12 to a11fbcf (Compare)
EE was not aligned for settings
Shorten tab names to prevent layout overflow
fix test
The default policy is based on the detection date, since it will always be available; anchoring on the publication date implies that some intel about the vulnerability exists (e.g. a CVE), which is not always the case.
Summary by CodeRabbit