Lean Beam is alpha code and still mostly a personal experiment.
The repository is public for collaboration and reuse, but it is not yet a polished or stable general-purpose product. The main goal is still a small, type-safe, isolated execution surface for Lean, with a thin local Beam daemon around it for low-cost experimentation.
- standalone Lean plugin for `$/lean/runAt`
- internal proof-first, command-fallback basis selection
- typed response payload with messages, traces, optional proof state, and optional follow-up handle
- optional follow-up execution through `$/lean/runWith` and `$/lean/releaseHandle`
- local Beam daemon/client pair for Lean and Rocq workflows
- alpha Lean wrapper commands for follow-up handle continuation and release
- installed `lean-beam-search` helper for shorter shell branching/playout workflows
- explicit Lean `lean-beam sync` Beam-daemon barrier with diagnostics wait and compact `fileProgress` reporting
- `lean-beam open-files` Beam-daemon introspection for tracked documents, including `saved`/`notSaved`, direct Lean deps when available, whether the current synced version has been checkpointed with `lean-beam save`, and the Lean save preflight fields `saveEligible`/`saveReason`/`saveModule`; already-known tracked files are checked incrementally against the on-disk text and carry the last observed compact `fileProgress`
- compact Lean Beam-daemon `fileProgress` reporting on other slow Lean wrapper calls when matching `$/lean/fileProgress` notifications were observed while the request was pending
- repo-local regression coverage around isolation, stale state, cancellation, and handle invalidation
The base request is intentionally small:
- one document
- one position
- one Lean text payload
- no required command/tactic mode flag
Request-level failures stay at the transport layer. Semantic Lean outcomes stay in the normal typed response payload.
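To make the shape concrete, here is a minimal sketch of a `$/lean/runAt` request and its typed response. The exact field names and structure are illustrative assumptions, not the frozen wire format:

```json
{
  "method": "$/lean/runAt",
  "params": {
    "textDocument": { "uri": "file:///workspace/MyPkg/Foo.lean" },
    "position": { "line": 41, "character": 2 },
    "text": "simp [foo_def]"
  }
}
```

A successful response would then carry the typed payload described above, roughly:

```json
{
  "messages": [
    { "severity": "info", "text": "goals accomplished" }
  ],
  "traces": [],
  "proofState": { "goals": [] },
  "handle": "h-20f3"
}
```

Transport-level failures (malformed request, unknown document) would surface as request errors, not as entries in `messages`.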
Follow-up handles exist, but they should be treated as alpha support APIs rather than as a frozen long-term contract. Current handle behavior is:
- opaque
- document-bound
- invalidated by same-document edits
- invalidated by document close
- invalidated by worker restart or Beam daemon restart
- exact continuation requires an explicit handle path; separate `lean-beam run-at` calls do not chain through hidden state
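Under those rules, an explicit continuation could look roughly like the following; the handle value and field names are hypothetical:

```json
{
  "method": "$/lean/runWith",
  "params": {
    "handle": "h-20f3",
    "text": "exact rfl"
  }
}
```

When the branch is abandoned, the same handle would be passed to `$/lean/releaseHandle`. After any edit to the bound document, both calls would fail with an invalid-handle error rather than silently re-running.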
The local Beam daemon convenience layer is also still alpha. In particular, `lean-beam sync` is now the supported on-disk edit barrier for Lean files: it waits for diagnostics for the synced version, streams fresh diagnostics to clients such as the CLI without replaying them in the final JSON, and returns a compact `fileProgress` summary rather than exposing the full underlying LSP notification stream. By default `lean-beam sync`, `lean-beam save`, and `lean-beam close-save` stream only errors for the current request; `+full` widens that stream to warnings, info, and hints. The Beam daemon now also forwards compact `fileProgress` updates live to streaming clients. For programmatic local consumers, the preferred machine-readable surface is the JSON stream exposed by `beam-client request-stream`; the wrapper stderr format should be treated as human-facing.
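As an illustrative sketch, a `beam-client request-stream` consumer might observe a newline-delimited event sequence like the following during a `lean-beam sync`; the event shapes here are assumptions, not the actual schema:

```json
{ "kind": "fileProgress", "processed": 120, "total": 300, "done": false }
{ "kind": "diagnostic", "severity": "error", "message": "unknown identifier 'fo'" }
{ "kind": "result", "ok": true, "fileProgress": { "done": true } }
```

The final JSON result would then omit the already-streamed diagnostics, matching the no-replay behavior described above.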
Other slow Lean Beam daemon calls may attach a compact top-level `fileProgress` summary when they had to wait on the same Lean elaboration progress. For non-barrier calls this summary may be partial, because the request can return before the whole file reaches `done = true`. This should be read as a Lean-side wrapper contract. The wrapper now also exposes alpha Lean handle commands for continuation, linear playout, and release; these are useful for search-style workflows but are still more fragile than the base one-shot request. Rocq support remains narrower and does not currently expose an equivalent public sync command in the wrapper.
If Lean cannot reach a completed diagnostics barrier for the synced version, for example because an imported target is stale and the rebuild failure kills the worker session, `lean-beam sync` now fails rather than reporting a partial success. `lean-beam save` and `lean-beam close-save` refuse to proceed past that incomplete barrier. Lean sync failures may also attach a cheap direct-import recovery hint in `error.data`, based on broker-tracked saved dependency boundaries, to suggest save / refresh / `lake build` next steps without running a full workspace dependency scan.
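An illustrative failure payload under that contract might look like this; the field names inside `error.data` are assumptions for illustration:

```json
{
  "error": {
    "code": "syncBarrierIncomplete",
    "message": "diagnostics barrier did not complete for the synced version",
    "data": {
      "staleDirectImports": ["MyPkg/Defs.lean"],
      "recoveryHint": "save or refresh MyPkg/Defs.lean, or run lake build, then retry lean-beam sync"
    }
  }
}
```

Because the hint is derived only from broker-tracked saved dependency boundaries, it is cheap to compute but may miss transitively stale imports.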
`lean-beam sync`, `lean-beam save`, and `lean-beam close-save` should be read as a progression rather than as unrelated commands: `lean-beam sync` establishes the synced, diagnostics-complete saved file snapshot, `lean-beam save` checkpoints that snapshot for one module, and `lean-beam close-save` performs the same checkpoint and then closes the tracked file. This remains a narrower contract than a full batch rebuild: the save path reports the saved version and `sourceHash`. For an unchanged file, `lake build Foo.lean` should replay that saved module, and Lake should be able to reuse it when rebuilding importers. If the file changes during the save, the resulting checkpoint remains coherent for the older snapshot, and a later `lake build` should rebuild it as stale.
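A save result under this contract might report something like the following; the field names are illustrative assumptions, apart from `sourceHash`, which the text above names:

```json
{
  "module": "MyPkg.Foo",
  "savedVersion": 17,
  "sourceHash": "sha256:3b1f…",
  "saved": true
}
```

For an unchanged file, a subsequent `lake build MyPkg.Foo` should replay this checkpoint instead of re-elaborating the module.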
If a speculative probe looks right and should become real source, the current contract is still: make the real edit in the file, save it, then run `lean-beam sync`. The intended future direction is to make that handoff cheap by reusing the speculative execution rather than replaying it from scratch.
`lean-beam save` is module-oriented, not file-oriented. `lean-beam sync` can operate on an arbitrary file the daemon can open, but `lean-beam save` requires a file that Lake resolves to a module in the current workspace package graph. Standalone `.lean` files outside that graph are not valid save targets.
- Lean plugin loading currently depends on `-Dexperimental.module=true`.
- Lean plugin loading is toolchain-keyed, not toolchain-agnostic.
- Supported Lean toolchains are listed in `supported-lean-toolchains`.
- The supported fast path is the Lean toolchain pinned by this repository's `lean-toolchain`, because the plugin uses internal Lean APIs.
- The install script prebuilds an installed-skill bundle cache for that pinned toolchain by default.
- The install script also accepts `--toolchain <toolchain>` for explicit supported bundles and `--all-supported` for the full validated allowlist.
- Runtime requests first try that installed-skill bundle cache, then fall back to a project-local runtime bundle under `.beam/bundles/` for supported toolchains.
- Unsupported Lean toolchains fail early instead of attempting an opportunistic build. `lean-beam supported-toolchains` lists the validated toolchains, and `lean-beam doctor` reports support state, bundle source, and bundle key inputs.
- Bundle rebuild keys intentionally exclude the full `.lake/packages` checkout tree and instead use the runtime source tree plus `lean-toolchain`, `lake-manifest.json`, and `supported-lean-toolchains`.
- The first use of a supported but not-yet-prebuilt toolchain must still build a matching local fallback bundle.
- On a cold machine, that local fallback build may need network access to fetch dependencies.
- In sandboxed agent environments, Beam daemon startup itself may require elevated permissions even when the installed bundle and project-local `.beam` paths resolve correctly.
- A startup failure that reports `operation not permitted` through `.beam/beam-daemon-startup.log` is usually an environment restriction, not a bundle-resolution mismatch.
- Cancellation is cooperative; prompt stopping depends on interruption of inner elaboration polling.
- The Beam daemon is single-root and keeps a conservative single active session per backend.
- Zero-build `lean-beam save` helps checkpoint one module, but it is not a whole-workspace freshness solution.
- If you edit a dependency of the target file, downstream speculative results should be treated as stale until rebuild or checkpoint.
- Lean does not yet expose a better plugin-facing restart-required / stale-dependency hook here, so this limitation is currently explicit and user-visible.
- Agent-skill distribution currently relies on a local checkout and a local install script; it is not yet published through a registry or marketplace flow.
- Rocq support is currently limited to goal inspection through `coq-lsp`; it is not yet a full stateful execution layer.
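As a sketch of the introspection surface mentioned above, a `lean-beam doctor` report for a supported toolchain might look roughly like this; the command name and the bundle key inputs come from the notes above, while the field names and values are assumptions:

```json
{
  "toolchain": "leanprover/lean4:<pinned>",
  "supported": true,
  "bundleSource": "installed-skill-cache",
  "bundleKeyInputs": ["lean-toolchain", "lake-manifest.json", "supported-lean-toolchains"]
}
```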
Near-term work is mostly about hardening and simplifying:
- keep the base `runAt` request small
- preserve strict per-request isolation
- reduce packaging and workspace rough edges
- publish a smoother distribution path, likely GitHub-backed install for Codex and plugin marketplace packaging for Claude
- improve stale-dependency handling
- replace broker-side diagnostics/`fileProgress` barrier inference with a stronger backend-facing readiness primitive, so `lean-beam sync`/`lean-beam save` can trust one authoritative completion signal instead of reconstructing barrier completeness from multiple LSP channels
- keep Beam-daemon-side conveniences useful without turning them into a large public surface too early
- add a short comparison against Pantograph in the docs, to clarify where `runAt` fits among nearby Lean tooling
Current release priorities:
- documentation polish for release readiness
- AI/human harness polish for maintainer workflows
- stability fixes only where they materially improve release confidence
Near-term TODO:
- finish the human-facing docs split so README stays human-only and maintainer or agent workflow detail stays in contributor, development, and skill docs
- decide whether the new README still needs a short architecture note, or whether `docs/STATUS.md` plus `docs/DEVELOPMENT.md` are enough
- tighten the AI-first harness story so the preferred maintainer entrypoints are obvious for both humans and AI agents
- investigate and fix the intermittent `handleProofBranchDsl` CI failure if it reappears
- surface `syncBarrierIncomplete` recovery hints more clearly in the human-facing CLI path, not just in `error.data`
- continue validating every supported Lean toolchain in CI before expanding the allowlist further
- replace the broker's remaining stopgap dependency and readiness logic with stronger Lake or backend-facing primitives when Lean exposes them