
🔧 AI assist guidelines setup #634

Open
C-Valen wants to merge 5 commits into main from config/ai-rules

Conversation


@C-Valen C-Valen commented Sep 19, 2025

Summary:

Establishes a shared Cursor agentic development config for the osim team, covering rules, AI agents, and public guidelines for all AI tools.

Changes:

Considerations:

  • .cursor/ is Cursor-specific — no impact on devs not using Cursor
  • Agents follow an implement-first, validate-on-request pattern (no auto-lint/test triggering)

@C-Valen C-Valen requested a review from a team September 19, 2025 13:23
@C-Valen C-Valen self-assigned this Sep 19, 2025
@C-Valen C-Valen added the internal Skip Jira CI label Sep 19, 2025

coderabbitai Bot commented Apr 16, 2026

📝 Walkthrough

Summary by CodeRabbit

  • Documentation
    • Added comprehensive development guidelines and onboarding documentation to support consistent code quality and feature development practices.
    • Added Cursor AI agent configurations and rules for automated code reviews and feature scaffolding workflows.

Walkthrough

The PR adds comprehensive AI development guidelines and Cursor configuration files for the OSIM Vue 3 frontend project. It establishes onboarding documentation, Cursor agent specifications, Git/coding standards, and general AI assistant guidelines for developer workflows and code generation.

Changes

Cohort / File(s) Summary
Cursor Configuration
.cursor/README.md, .cursor/ONBOARDING.md
Cursor configuration overview and OSIM ecosystem setup guide covering dependencies, authentication modes, dev server startup, development conventions, and repository structure.
Cursor Agent Specifications
.cursor/agents/code-reviewer.md, .cursor/agents/feature-scaffolder.md, .cursor/agents/pr-description.md
Agent role definitions for code review, feature scaffolding, and PR generation with structured invocation steps, review checklists, and output formatting requirements.
Cursor Rules
.cursor/rules/git-workflow.mdc, .cursor/rules/vue-typescript.mdc
Coding standards and workflow rules covering branch naming conventions, commit message formatting, Vue 3 composition patterns, TypeScript safety, and security practices.
AI Assistant Guidelines
AI_GUIDELINES.md, CLAUDE.md
Frontend tech stack and AI assistant operating guidelines, including code organization conventions, security handling, and automation restrictions.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~5 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Description check: ❓ Inconclusive. The PR description provides a clear summary and a detailed breakdown of changes, but deviates from the required template structure (missing checklist, Jira reference, and OSIDB-ID header format). Resolution: align with the repository's template by adding the checklist section, the Jira ticket reference format, and the OSIDB-ID header to standardize PR documentation.

✅ Passed checks (2 passed)

  • Title check: ✅ Passed. The title '🔧 AI assist guidelines setup' accurately describes the main objective of the PR: establishing AI assistance guidelines and Cursor configuration for the osim team.
  • Docstring Coverage: ✅ Passed. No functions were found in the changed files, so the docstring coverage check was skipped.


@C-Valen C-Valen marked this pull request as ready for review April 16, 2026 14:29

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (5)
.cursor/agents/feature-scaffolder.md (1)

37-37: Consider clarifying that the composable pattern is structural, not prescriptive.

Line 37 describes composables as returning { data, isLoading, error, actions }, but actual codebase composables (e.g., useFetchFlaw.ts) use more descriptive, context-specific names. While the agent instructions (lines 11-16) correctly tell it to read and match existing patterns, the generic pattern here might be misread as a naming prescription.

💡 Suggested clarification
-- Composables return `{ data, isLoading, error, actions }` pattern
+- Composables return state + actions (e.g., `{ flaw, isFetching, fetchFlaw }`) — match existing naming
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.cursor/agents/feature-scaffolder.md at line 37, The doc currently states
composables return a `{ data, isLoading, error, actions }` pattern which can be
misread as a naming prescription; update the wording in
.cursor/agents/feature-scaffolder.md to clarify this is a structural example
only (not a required naming convention) and reference that existing composables
like useFetchFlaw.ts may use more descriptive, context-specific names—adjust the
sentence to say "structure/example" and add a short note telling agents to
follow the codebase's existing names when scaffolding.
.cursor/agents/code-reviewer.md (4)

31-34: Consider softening the function length guideline.

The 40-line limit for functions is a reasonable heuristic for encouraging focused, maintainable code. However, some functions legitimately need to be longer (e.g., complex business logic with necessary steps). Consider phrasing this as a guideline rather than a hard rule to allow for justified exceptions.

📝 Optional: Soften the function length guideline
 **Code quality**
-- Functions are focused and <40 lines
+- Functions are focused and typically <40 lines (flag longer functions for possible refactoring)
 - Error states handled (not swallowed)
 - Loading states reflected in UI
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.cursor/agents/code-reviewer.md around lines 31 - 34, Update the hard rule
"Functions are focused and <40 lines" to a softer guideline (e.g., "Prefer
functions <40 lines; allow justified exceptions for complex business logic with
clear rationale") so reviewers can accept longer functions when warranted;
locate and modify the guideline text "Functions are focused and <40 lines" in
the code-review checklist to reflect the new phrasing and add a short note about
documenting justification for exceptions.

8-12: Consider documenting the rationale for skipping automated checks.

The instruction to skip yarn type-check, yarn lint, and tests unless explicitly requested is clear and aligns with the "implement-first, validate-on-request" pattern mentioned in the PR. However, this trade-off (faster reviews vs. potential missed issues from static analysis) might not be immediately obvious to new team members.

📝 Optional: Add a brief rationale comment
 ## When invoked
 
 1. Run `git diff HEAD` to see recent changes
 2. Focus review on modified `.vue`, `.ts` files
-3. **Do NOT run** `yarn type-check`, `yarn lint`, or tests — only run these when the user explicitly asks
+3. **Do NOT run** `yarn type-check`, `yarn lint`, or tests — only run these when the user explicitly asks
+   - Rationale: Focus on architectural/logical review; developer runs validation tools separately
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.cursor/agents/code-reviewer.md around lines 8 - 12, Add a brief rationale
next to the "Do NOT run `yarn type-check`, `yarn lint`, or tests" guidance under
the "When invoked" section: explain that skipping automated checks speeds up
iterative/implement-first reviews and prevents CI noise, but note the trade-off
that static/type/lint issues may be missed and recommend requesting those checks
when ready; update the adjacent bullet or add a one-sentence parenthetical so
future reviewers see why this rule exists.

16-19: Consider clarifying fallback for OpenAPI client edge cases.

The requirement to use the generated OpenAPI client from @/generated/ is excellent practice and aligns with the codebase patterns. However, the agent doesn't provide guidance for edge cases (e.g., new endpoints not yet in the OpenAPI spec, or issues with code generation).

📝 Optional: Add guidance for OpenAPI client edge cases
 **TypeScript safety**
 - No `any`; proper use of `unknown` + type guards
 - Zod schemas used for API response validation (`ZodFlaw*` types)
-- Generated OpenAPI client used — no hand-rolled fetch calls
+- Generated OpenAPI client used from `@/generated/` — no hand-rolled fetch calls
+  - If endpoint missing from generated client, update OpenAPI spec first
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.cursor/agents/code-reviewer.md around lines 16 - 19, Clarify fallback
guidance for using the generated OpenAPI client from "@/generated/" by adding
concrete steps: when an endpoint is missing or generation fails, first update
the OpenAPI spec and re-run the generator; if that’s not immediately possible,
implement a small typed fallback using fetch + Zod validation (reusing existing
Zod schemas like ZodFlaw* or creating a temporary Zod schema) and wrap it in the
same call-signature as the generated client so callers (and tests) remain
unchanged; ensure the fallback logs a clear warning/error and includes a TODO
comment referencing regeneration, and add tests for both the generated client
path and the fallback path so behavior is covered until the generator is fixed.
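The fallback the prompt describes could be sketched as below. All names here are hypothetical (`Flaw`, `osidbFetchFlaw`, the endpoint path), and a plain hand-rolled type guard stands in for a Zod schema so the sketch is self-contained; the real codebase would use its `ZodFlaw*` schemas and the generated client's actual signature.

```typescript
// Hypothetical sketch: a typed fallback for an endpoint missing from the
// generated OpenAPI client, validating the payload before returning it.

interface Flaw { uuid: string; title: string; }

// Stand-in for a Zod schema: a runtime check narrowing unknown to Flaw.
function isFlaw(value: unknown): value is Flaw {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.uuid === "string" && typeof v.title === "string";
}

// TODO: remove once the OpenAPI spec covers this endpoint and the
// client is regenerated; callers keep the same call signature.
async function osidbFetchFlaw(
  id: string,
  fetchImpl: typeof fetch = fetch, // injectable for testing
): Promise<Flaw> {
  console.warn("osidbFetchFlaw: hand-rolled fallback in use; regenerate the OpenAPI client");
  const response = await fetchImpl(`/osidb/api/v1/flaws/${id}`);
  const body: unknown = await response.json();
  if (!isFlaw(body)) {
    throw new Error("Unexpected flaw payload shape");
  }
  return body;
}
```

Keeping the fallback behind the same signature as the generated client means callers and tests are untouched when the generated path replaces it.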

36-43: Consider adding guidance for empty diff or no-issues case.

The output format is well-structured with clear severity levels and requirements for file references. However, it doesn't specify what the agent should output when the diff is empty or when no issues are found. Adding this guidance would make the agent's behavior more predictable.

📝 Optional: Add guidance for no-issues scenario
 ## Output format
 
 Organize feedback as:
 - 🔴 **Critical** — must fix before merge
 - 🟡 **Suggestion** — should address
 - 🟢 **Minor** — optional improvement
 
 Include specific file + line reference and a concrete fix for each 🔴 item.
+
+If no issues found, respond with: "✅ No issues found in the reviewed changes."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.cursor/agents/code-reviewer.md around lines 36 - 43, The "## Output format"
section lacks guidance for an empty diff or no-issues result; update the "##
Output format" block (the header and its bullet list) to include a clear rule
for no-issues/empty-diff cases—for example add a bullet or subheading that
instructs the agent to output a single line like "🟢 No issues found" (or
similar) and to still return the standard structure (severity headers) with a
short confirmation message and no file refs; ensure the new text is concise and
consistent with the existing severity labels so the agent's behavior is
deterministic when no findings are present.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro Plus

Run ID: ec292286-f2ad-41c6-80e7-aa3e70a925ab

📥 Commits

Reviewing files that changed from the base of the PR and between 3450abc and e13d706.

📒 Files selected for processing (9)
  • .cursor/ONBOARDING.md
  • .cursor/README.md
  • .cursor/agents/code-reviewer.md
  • .cursor/agents/feature-scaffolder.md
  • .cursor/agents/pr-description.md
  • .cursor/rules/git-workflow.mdc
  • .cursor/rules/vue-typescript.mdc
  • AI_GUIDELINES.md
  • CLAUDE.md

@roduran-dev

It looks like a great idea.

I'd like it for all our projects 😃

@roduran-dev

Using Claude to review this, it recommends these improvements.

Weaknesses / Areas to Improve

🟡 Duplication between files

  • AI_GUIDELINES.md and .cursor/rules/vue-typescript.mdc overlap heavily
  • Consider: Make AI_GUIDELINES the source of truth; rules files should reference it, not repeat it

🟡 Agent boundaries could be clearer

  • feature-scaffolder says "reads 2-3 similar files" but doesn't specify how to pick them
  • pr-description "skim — don't read everything" is vague
  • Consider: Add concrete heuristics ("pick most-recently-modified component in same dir")

🟡 Missing validation

  • No agent for "run vibe-check and interpret results"
  • Users must manually trigger and parse linter/type errors
  • Consider: Add a validation-runner agent that runs checks + formats output

🟡 Onboarding assumptions

  • ONBOARDING.md assumes familiarity with Vue 3, Pinia, Zod
  • For true onboarding, add "if you're new to X, read Y first" pointers

