This document describes the end-to-end architecture of Renderify, covering the pipeline stages, package responsibilities, data flow, and key design decisions.
For Mermaid diagrams of the same architecture, see docs/architecture-visual.md.
```
    User Prompt
         │
         ▼
┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│ LLM Interpreter  │────▶│ Code Generator   │────▶│   RuntimePlan    │
│ (OpenAI/Claude/  │     │ (JSON or TSX     │     │      (IR)        │
│  Gemini)         │     │  extraction)     │     │                  │
└──────────────────┘     └──────────────────┘     └────────┬─────────┘
                                                           │
                                                           ▼
                                                  ┌──────────────────┐
                                                  │ Security Policy  │
                                                  │     Checker      │
                                                  └────────┬─────────┘
                                                           │
                                                           ▼
                                                 ┌───────────────────┐
                                                 │ Runtime Manager   │
                                                 │ (Execute plan,    │
                                                 │  resolve modules, │
                                                 │  transpile)       │
                                                 └─────────┬─────────┘
                                                           │
                                                           ▼
                                                 ┌───────────────────┐
                                                 │ UI Renderer       │
                                                 │ (HTML generation, │
                                                 │  DOM reconcile)   │
                                                 └───────────────────┘
                                                           │
                                                           ▼
                                                      Rendered UI
```
```
renderify
├── @renderify/core
│   ├── @renderify/ir
│   ├── @renderify/security ── @renderify/ir
│   └── @renderify/runtime
│       ├── @renderify/ir
│       └── @renderify/security
├── @renderify/llm
│   ├── @renderify/core
│   └── @renderify/ir
└── (optional) @renderify/cli
    ├── @renderify/core
    ├── @renderify/llm
    └── @renderify/runtime
```
The `LLMInterpreter` interface abstracts over LLM providers. Each provider (OpenAI, Anthropic, Google) implements:
- `generateResponse()` — single-shot text generation
- `generateResponseStream()` — streaming token-by-token generation via SSE
- `generateStructuredResponse()` — JSON schema-constrained generation for RuntimePlan output
The pipeline first attempts structured output (requesting a RuntimePlan JSON directly from the LLM). If the structured response is invalid, it falls back to free-form text generation.
The `DefaultCodeGenerator` converts LLM output into a `RuntimePlan`. It attempts multiple parse strategies in order:
- Direct RuntimePlan JSON — the entire LLM output is valid JSON conforming to the RuntimePlan schema
- RuntimeNode JSON — a JSON object representing a single node, wrapped into a plan
- Fenced code block extraction — `tsx`, `jsx`, `ts`, or `js` code blocks are extracted and placed into `plan.source`
- Text fallback — the raw text is wrapped as a text node
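This parse cascade can be sketched as follows. The function shape and plan fields here are illustrative, not the actual `@renderify/core` API:

```typescript
// Illustrative sketch of the codegen parse cascade (hypothetical names).
interface RuntimePlan {
  specVersion: string;
  root?: unknown;
  source?: { lang: string; code: string };
}

function parseLLMOutput(text: string): RuntimePlan {
  // 1. Direct RuntimePlan JSON: the whole output conforms to the schema.
  try {
    const obj = JSON.parse(text);
    if (obj && typeof obj === "object" && obj.specVersion && obj.root) return obj;
    // 2. Single RuntimeNode JSON: wrap the node into a plan.
    if (obj && typeof obj === "object" && obj.type) {
      return { specVersion: "runtime-plan/v1", root: obj };
    }
  } catch {
    /* not JSON — fall through to text strategies */
  }
  // 3. Fenced code block extraction (tsx/jsx/ts/js).
  const fence = text.match(/`{3}(tsx|jsx|ts|js)\n([\s\S]*?)`{3}/);
  if (fence) {
    return { specVersion: "runtime-plan/v1", source: { lang: fence[1], code: fence[2] } };
  }
  // 4. Text fallback: wrap the raw text as a text node.
  return { specVersion: "runtime-plan/v1", root: { type: "text", value: text } };
}
```

Each strategy only fires if every earlier one failed, so well-formed structured output never pays the cost of regex extraction.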
For streaming scenarios, an incremental code generation session (`createIncrementalSession`) processes LLM deltas in real time, using FNV-1a 64-bit hashing for efficient change detection.
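FNV-1a 64-bit is compact enough to sketch in full; the change-detection helper below illustrates the technique but is not the actual session internals:

```typescript
// FNV-1a 64-bit hash over the UTF-8 bytes of a string, using BigInt
// so multiplication wraps correctly at 64 bits.
const FNV_OFFSET = 0xcbf29ce484222325n;
const FNV_PRIME = 0x100000001b3n;
const MASK64 = (1n << 64n) - 1n;

function fnv1a64(input: string): bigint {
  let hash = FNV_OFFSET;
  for (const byte of new TextEncoder().encode(input)) {
    hash ^= BigInt(byte);
    hash = (hash * FNV_PRIME) & MASK64; // keep within 64 bits
  }
  return hash;
}

// Change detection: only re-parse the accumulated text when its hash moves.
let lastHash = 0n;
function shouldReparse(accumulated: string): boolean {
  const h = fnv1a64(accumulated);
  if (h === lastHash) return false;
  lastHash = h;
  return true;
}
```

Hashing the accumulated text is much cheaper than re-parsing it on every delta, which is why the session only parses when the hash changes.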
The `DefaultSecurityChecker` validates every `RuntimePlan` before execution. Checks include:
- Blocked HTML tags (`script`, `iframe`, `object`, `embed`, etc.)
- Module specifier allowlists (only permitted CDN hosts and prefixes)
- Tree depth and node count limits
- Execution budget validation (maxImports, maxExecutionMs, maxComponentInvocations)
- State model safety (prototype pollution protection in paths)
- Runtime source analysis (banned patterns like `eval()`, `fetch()`, `document.cookie`)
- Module manifest coverage (bare specifiers must have manifest entries in strict mode)
- Spec version compatibility
Three built-in profiles (strict, balanced, relaxed) provide sensible defaults. Custom policy overrides are supported.
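A toy version of two of these checks (blocked tags plus tree depth/node-count limits) shows the shape of a profile; the names mirror the description above rather than the real `DefaultSecurityChecker` API:

```typescript
// Hypothetical policy profile covering a subset of the documented checks.
interface PolicyProfile {
  blockedTags: Set<string>;
  maxDepth: number;
  maxNodes: number;
}

const strict: PolicyProfile = {
  blockedTags: new Set(["script", "iframe", "object", "embed"]),
  maxDepth: 32,
  maxNodes: 2000,
};

interface Node { type: string; tag?: string; children?: Node[] }

// Walks the node tree, collecting human-readable violations.
function checkNode(node: Node, policy: PolicyProfile, depth = 1, seen = { count: 0 }): string[] {
  const violations: string[] = [];
  seen.count++;
  if (depth > policy.maxDepth) violations.push(`depth ${depth} exceeds maxDepth`);
  if (seen.count > policy.maxNodes) violations.push("node count exceeds maxNodes");
  if (node.tag && policy.blockedTags.has(node.tag)) violations.push(`blocked tag: ${node.tag}`);
  for (const child of node.children ?? []) {
    violations.push(...checkNode(child, policy, depth + 1, seen));
  }
  return violations;
}
```

A custom override would simply supply a `PolicyProfile`-like object with different limits or tag sets.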
The `DefaultRuntimeManager` is the core execution engine. It handles:
- Node resolution — recursively resolves `element`, `text`, and `component` nodes
- Module loading — resolves bare npm specifiers via `JspmModuleLoader` to JSPM CDN URLs
- Source transpilation — TypeScript/JSX transpiled via `@babel/standalone` through `BabelRuntimeSourceTranspiler`
- Import rewriting — bare specifiers in source code are rewritten to resolved CDN URLs
- Execution budget tracking — import counts, component invocations, and wall-clock time are tracked and enforced
- State management — per-plan state snapshots with action-based transitions
- Dependency preflight — probes all required modules before execution, with retry/timeout/CDN fallback
- Sandbox execution — optional Web Worker or iframe isolation for untrusted source code
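Budget enforcement can be sketched as a small tracker; the class and method names below are hypothetical, chosen to match the budget fields named above:

```typescript
// Hypothetical tracker for the documented budget fields.
interface ExecutionBudget {
  maxImports: number;
  maxComponentInvocations: number;
  maxExecutionMs: number;
}

class BudgetTracker {
  private imports = 0;
  private invocations = 0;
  private readonly start = Date.now();

  constructor(private budget: ExecutionBudget) {}

  trackImport(): void {
    if (++this.imports > this.budget.maxImports) throw new Error("import budget exceeded");
  }
  trackInvocation(): void {
    if (++this.invocations > this.budget.maxComponentInvocations)
      throw new Error("component invocation budget exceeded");
  }
  checkTime(): void {
    // Wall-clock enforcement: call at stage boundaries during execution.
    if (Date.now() - this.start > this.budget.maxExecutionMs)
      throw new Error("execution time budget exceeded");
  }
}
```

Throwing on the first violation gives fail-closed behavior: a plan that exceeds its budget never reaches the renderer.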
The `DefaultUIRenderer` converts execution results to HTML:
- RuntimeNode tree → HTML string — with XSS sanitization and safe attribute handling
- Preact vnode rendering — when source modules produce Preact components, uses Preact's reconciliation
- DOM reconciliation — efficient diffing with keyed element matching for interactive updates
- Event delegation — runtime events are converted to `data-renderify-event-*` attributes with delegated listeners
- Security sanitization — blocks dangerous tags, strips `javascript:` URLs, validates inline styles
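Two representative sanitization helpers, sketched under the assumption that escaping and URL filtering work roughly as described (these are not the actual `DefaultUIRenderer` internals):

```typescript
// Escape text content before interpolating it into an HTML string.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Reject scriptable URL schemes, tolerating case and whitespace tricks
// like "JaVaScRiPt :alert(1)". Returns null when the URL must be dropped.
function sanitizeHref(url: string): string | null {
  const normalized = url.trim().toLowerCase().replace(/[\s\u0000]/g, "");
  if (normalized.startsWith("javascript:")) return null;
  if (normalized.startsWith("data:text/html")) return null;
  return url;
}
```

Normalizing before the prefix check matters: browsers ignore embedded whitespace in schemes, so a naive `startsWith("javascript:")` on the raw string is bypassable.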
Renderify supports two fundamentally different input formats:
```json
{
  "specVersion": "runtime-plan/v1",
  "id": "dashboard-v1",
  "version": 1,
  "root": { "type": "element", "tag": "div", "children": [...] },
  "capabilities": { "domWrite": true },
  "state": { "initial": { "count": 0 } },
  "imports": ["recharts"]
}
```

The LLM generates a JSON object conforming to the RuntimePlan schema. This path provides maximum control and deterministic behavior.
```tsx
import { useState } from "preact/hooks";
import { LineChart, Line } from "recharts";

export default function Dashboard() {
  const [metric, setMetric] = useState("revenue");
  return <div>...</div>;
}
```
The LLM generates fenced code blocks. The codegen stage extracts the source code and wraps it in a RuntimePlan with `plan.source`. The runtime then transpiles and executes the source module.
The streaming pipeline (`renderPromptStream`) provides progressive UI updates:
```
LLM tokens ──▶ llm-delta chunks ──▶ preview renders ──▶ final render
```

- `llm-delta` — each token from the LLM is emitted as a chunk
- `preview` — at configurable intervals, the accumulated text is parsed and rendered as a preview
- `final` — after LLM completion, the full pipeline executes and emits the final result
- `error` — if any stage fails, an error chunk is emitted before the exception propagates
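Consuming these chunk kinds might look like the following; the chunk shape is an assumption for illustration, not the actual stream type:

```typescript
// Assumed chunk shape mirroring the four documented kinds.
type StreamChunk =
  | { kind: "llm-delta"; text: string }
  | { kind: "preview"; html: string }
  | { kind: "final"; html: string }
  | { kind: "error"; message: string };

async function consume(stream: AsyncIterable<StreamChunk>): Promise<string> {
  let accumulated = "";
  let html = "";
  for await (const chunk of stream) {
    switch (chunk.kind) {
      case "llm-delta": accumulated += chunk.text; break; // raw tokens
      case "preview":   html = chunk.html; break;         // provisional render
      case "final":     return chunk.html;                // authoritative result
      case "error":     throw new Error(chunk.message);   // propagate failure
    }
  }
  return html;
}
```

A UI would typically paint each `preview` as it arrives and swap in the `final` HTML when the pipeline completes.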
The `CustomizationEngine` provides 10 hook points that form an interception chain:
```
beforeLLM ─▶ [LLM] ─▶ afterLLM
  ─▶ beforeCodeGen ─▶ [CodeGen] ─▶ afterCodeGen
  ─▶ beforePolicyCheck ─▶ [Security] ─▶ afterPolicyCheck
  ─▶ beforeRuntime ─▶ [Runtime] ─▶ afterRuntime
  ─▶ beforeRender ─▶ [UI] ─▶ afterRender
```
Each hook receives the current payload and can transform it before passing to the next stage. Multiple plugins are executed in registration order.
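The chain semantics (a payload threaded through hooks in registration order) can be sketched as a fold; the `HookChain` class is hypothetical:

```typescript
// A hook transforms a payload and hands the result to the next hook.
type Hook<T> = (payload: T) => T;

class HookChain<T> {
  private hooks: Hook<T>[] = [];

  register(hook: Hook<T>): void {
    this.hooks.push(hook);
  }

  // Thread the payload through every registered hook, in order.
  run(payload: T): T {
    return this.hooks.reduce((acc, hook) => hook(acc), payload);
  }
}
```

With two plugins each registering a `beforeLLM` hook, the second plugin sees the first plugin's transformed prompt, matching the registration-order guarantee above.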
Module resolution follows a tiered approach:
- Module manifest lookup — if the plan includes a `moduleManifest`, bare specifiers are resolved to their `resolvedUrl`
- Compatibility aliases — built-in aliases map `react`/`react-dom` to `preact/compat`, and include pinned versions for `recharts`
- JSPM CDN resolution — bare specifiers are resolved to `https://ga.jspm.io/npm:{specifier}`
- Fallback CDNs — on failure, tries configured fallback bases (default: `esm.sh`)
- Asset proxying — CSS imports become style-injection proxy modules; JSON imports become ESM default exports
Node.js builtins and unsupported schemes (`file://`, `jsr:`) are rejected deterministically.
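A simplified resolver covering the first three tiers (manifest, aliases, JSPM); the function shape is hypothetical, and real resolution additionally handles version pinning and the fallback/proxy tiers:

```typescript
// Hypothetical tiered resolver for bare module specifiers.
interface ResolveContext {
  manifest?: Record<string, { resolvedUrl: string }>;
}

// Built-in compatibility aliases described in the document.
const COMPAT_ALIASES: Record<string, string> = {
  react: "preact/compat",
  "react-dom": "preact/compat",
};

function resolveSpecifier(spec: string, ctx: ResolveContext = {}): string {
  // Tier 1: module manifest lookup wins outright.
  const entry = ctx.manifest?.[spec];
  if (entry) return entry.resolvedUrl;
  // Tier 2: compatibility aliases (react/react-dom → preact/compat).
  const aliased = COMPAT_ALIASES[spec] ?? spec;
  // Tier 3: JSPM CDN resolution for bare specifiers.
  return `https://ga.jspm.io/npm:${aliased}`;
}
```

Putting the manifest first is what makes strict mode enforceable: a plan can pin exactly which URLs its bare specifiers map to before any CDN heuristics apply.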
When a RuntimePlan includes a source module, the execution flow is:
```
source.code
  ──▶ Babel transpile (TSX → JS)
  ──▶ es-module-lexer (extract imports)
  ──▶ Rewrite bare imports to CDN URLs
  ──▶ Create blob: URL for module
  ──▶ Dynamic import()
  ──▶ Extract default export
  ──▶ Render as Preact component (or RuntimeNode tree)
```
Optional sandbox modes (`sandbox-worker`, `sandbox-iframe`, `sandbox-shadowrealm`) isolate execution in a separate context with configurable timeouts and fail-closed behavior.
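The URL-and-`import()` step at the heart of this flow can be demonstrated with a `data:` URL, which behaves the same for dynamic `import()` and also works outside a browser; the pipeline itself creates a `blob:` URL, and the helper names here are illustrative:

```typescript
// Turn already-transpiled, already-rewritten JS source into an
// importable ES module URL. (Renderify uses a blob: URL in the browser;
// a data: URL is used here so the sketch runs anywhere.)
function toModuleUrl(jsCode: string): string {
  return "data:text/javascript," + encodeURIComponent(jsCode);
}

// Dynamically import the module and extract its default export,
// matching the last two steps of the flow above.
async function executeSource(jsCode: string): Promise<unknown> {
  const mod = await import(toModuleUrl(jsCode));
  return mod.default;
}
```

Because the URL encodes the module body itself, no server round-trip is needed: the engine parses and evaluates the module directly from the URL.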
JSPM provides browser-native ESM modules from npm packages without a build step. This eliminates the need for a backend compiler while giving access to the npm ecosystem. The tiered compatibility contract (guaranteed aliases + best-effort resolution) provides predictable behavior.
Preact is ~3KB (vs React's ~45KB), loads faster from CDN, and provides full React API compatibility via preact/compat. The compatibility bridge maps all React/ReactDOM imports to Preact equivalents transparently.
@babel/standalone runs entirely in the browser, supporting TypeScript and JSX without any backend. It's loaded on demand only when the plan includes source modules.
Every RuntimePlan passes through security checks before any code runs. This is critical because LLM output is fundamentally untrusted — the model could generate <script> tags, eval() calls, or unsafe network requests. The policy framework provides defense-in-depth at multiple levels.
Structured RuntimePlan JSON gives precise control for production systems. TSX/JSX code blocks are more natural for LLMs and enable richer interactivity. Supporting both paths maximizes flexibility across different LLM capabilities and use cases.