Summary
handleBatchStatus fetches each execution's status with a separate storage call in a loop, producing N+1 round-trips for every batch status poll.
Context
At execute.go:442, the handler iterates for _, id := range request.ExecutionIDs and calls the storage layer once per ID. A UI dashboard polling 50–100 executions generates 50–100 serial storage calls per request. Under the default SQLite local mode this serialises all those calls through a single connection. Under PostgreSQL, connection pool slots are consumed proportionally. As workflow counts grow, batch status polls become the dominant source of storage load and the primary cause of dashboard latency.
Scope
In Scope
- Add a GetExecutionsByIDs(ctx context.Context, ids []string) ([]Execution, error) method to the storage interface and all backends (SQLite, PostgreSQL).
- Implement the SQLite version using WHERE id IN (?, ?, ...) and the PostgreSQL version using ANY($1::text[]).
- Update handleBatchStatus to call the new batch method.
- Optionally add a short-TTL in-memory LRU cache in front of the batch fetch for common dashboard poll patterns.
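The interface addition and the SQLite variant could look roughly like this. The Execution fields, table name, and column names are assumptions for this sketch; the real definitions live in storage.go:

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"strings"
)

type Execution struct {
	ID     string
	Status string
}

// ExecutionStorage gains the batch method proposed above.
type ExecutionStorage interface {
	GetExecutionsByIDs(ctx context.Context, ids []string) ([]Execution, error)
}

// buildSQLiteBatchQuery builds the WHERE id IN (?, ?, ...) form,
// one placeholder per ID, with the IDs as bind arguments.
func buildSQLiteBatchQuery(ids []string) (string, []any) {
	placeholders := make([]string, len(ids))
	args := make([]any, len(ids))
	for i, id := range ids {
		placeholders[i] = "?"
		args[i] = id
	}
	return "SELECT id, status FROM executions WHERE id IN (" +
		strings.Join(placeholders, ", ") + ")", args
}

// sqliteStore sketches the backend; db wiring is elided.
type sqliteStore struct {
	db *sql.DB
}

func (s *sqliteStore) GetExecutionsByIDs(ctx context.Context, ids []string) ([]Execution, error) {
	query, args := buildSQLiteBatchQuery(ids)
	rows, err := s.db.QueryContext(ctx, query, args...)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var out []Execution
	for rows.Next() {
		var e Execution
		if err := rows.Scan(&e.ID, &e.Status); err != nil {
			return nil, err
		}
		out = append(out, e)
	}
	return out, rows.Err()
}

func main() {
	q, args := buildSQLiteBatchQuery([]string{"a", "b"})
	fmt.Println(q, len(args))
}
```

The PostgreSQL variant would instead bind the whole slice once, e.g. QueryContext(ctx, "SELECT id, status FROM executions WHERE id = ANY($1::text[])", pq.Array(ids)), so the query text is constant regardless of batch size.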
Out of Scope
- Caching execution state beyond the immediate request — a proper cache layer is a separate initiative.
- Changing the batch status API contract (request/response schema).
- Optimising single-execution fetches.
Files
control-plane/internal/storage/storage.go (interface file) — add GetExecutionsByIDs to the ExecutionStorage interface
control-plane/internal/storage/local/execution.go — SQLite implementation using IN clause
control-plane/internal/storage/postgres/execution.go — PostgreSQL implementation using ANY
control-plane/internal/handlers/execute.go:442 — replace the loop with a single GetExecutionsByIDs call
control-plane/internal/storage/local/execution_test.go and postgres/execution_test.go — add batch-fetch tests
Acceptance Criteria
- handleBatchStatus makes exactly one storage call regardless of how many execution IDs are in the request.
- Tests cover GetExecutionsByIDs in both backends and pass (go test ./control-plane/...).
- Lint passes (make lint).
Notes for Contributors
Severity: HIGH
Use sqlx.In (SQLite) or a pq.Array / pgx array bind for PostgreSQL to avoid building query strings manually. Cap the maximum batch size (e.g. 500 IDs) and return an error for oversized requests to prevent accidental full-table scans via large IN lists.