Conversation
Merge reth-console into bera-reth as `bera-reth console`. Provides an interactive REPL and one-shot --exec mode for RPC calls over IPC, HTTP, or WebSocket. Pre-registers beraAdmin aliases when PoG is available. Bypasses reth tracing/Prometheus init for clean REPL startup. BERA-233
📝 Walkthrough

Adds a full interactive JSON-RPC console: new CLI subcommand, endpoint resolution, multi-transport RPC client (HTTP/WS/IPC), input parsing and query evaluator, REPL with completion/history, formatted/raw output and exec mode, plus tests and Cargo dependency updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant REPL as "REPL (repl.rs)"
    participant Parser as "Parser (command.rs)"
    participant Engine as "Engine (engine.rs)"
    participant Rpc as "RpcClient (rpc.rs)"
    participant Output as "Output (output.rs)"
    loop REPL loop
        User->>REPL: input line
        REPL->>Parser: parse_input(line)
        Parser-->>REPL: InputCommand
        REPL->>Engine: evaluate_line(cmd, rpc, last_result)
        alt RPC invocation
            Engine->>Rpc: request_value(method, params)
            Rpc-->>Engine: result Value
            Engine->>Engine: update last_result
            Engine->>Engine: apply_query if query tail
        else Query only
            Engine->>Engine: apply_query(expr, last_result)
        end
        Engine-->>REPL: EvalOutcome
        alt Value
            REPL->>Output: print_value_for_chain_raw(value, chain_id, raw)
            Output-->>User: formatted/raw output
        else Help/Noop/Exit
            REPL-->>User: help/continue/exit
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 failed (1 warning)
Some examples of what you can do with it:
Pull request overview
Adds an operator-facing bera-reth console subcommand that can connect to a node over IPC/HTTP/WS to run an interactive REPL or one-shot RPC execution, with convenience completions, aliases, query helpers, and formatted output/annotations.
Changes:
- Extends the top-level CLI with a Berachain-specific `console` subcommand and routes execution appropriately.
- Implements console runtime: endpoint resolution, RPC client (HTTP/WS/IPC), REPL + completion, one-shot exec mode, query mini-language, and output formatting/annotations.
- Updates dependencies to support REPL, IPC validation, and JSON-RPC client/server test utilities.
Reviewed changes
Copilot reviewed 16 out of 17 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/main.rs | Parses extended CLI and dispatches to either node startup or the new console command. |
| src/lib.rs | Exposes new berachain_cli and console modules. |
| src/berachain_cli.rs | Declares Berachain extended subcommands and wires them into reth's ExtendedCommand. |
| src/console/mod.rs | Introduces the console module structure and exports ConsoleCommand. |
| src/console/cli.rs | Defines bera-reth console flags/args and entrypoint. |
| src/console/run.rs | Orchestrates connect-time discovery, alias setup, and REPL vs --exec execution. |
| src/console/endpoint.rs | Resolves IPC/HTTP/WS endpoints and selects the transport. |
| src/console/rpc.rs | Implements an RPC client abstraction (HTTP/WS via jsonrpsee, IPC via a lightweight client) plus tests. |
| src/console/repl.rs | Implements the interactive REPL loop, history, completion helper, and startup snapshot display. |
| src/console/exec.rs | Implements one-shot execution with raw JSON output. |
| src/console/command.rs | Parses user input into RPC calls, aliases, and query expressions. |
| src/console/engine.rs | Evaluates parsed commands, dispatches RPC calls, and applies queries. |
| src/console/query.rs | Implements the query mini-language (.count, .[N], .map(...), etc.) with tests. |
| src/console/output.rs | Adds formatting for common operator responses and interpreted-value annotations (hex/wei). |
| src/console/rpc_completion.rs | Adds static per-namespace method suffix tables for tab completion + tests. |
| Cargo.toml | Adds dependencies needed for the console (rustyline/dirs/libc, tokio features, jsonrpsee client features, tempfile). |
| Cargo.lock | Locks new dependency graph for the added console functionality. |
Actionable comments posted: 6
🧹 Nitpick comments (8)
src/console/endpoint.rs (2)
37-48: Scheme detection is case-sensitive.
`HTTP://host` would fall through `starts_with("http://")` and hit the `contains("://")` branch, erroring with "unsupported endpoint scheme". URL schemes are case-insensitive per RFC 3986. Minor since users almost always type lowercase; fix if you want to be strict.

Proposed fix

```diff
 fn detect_transport(endpoint: &str) -> Result<Transport> {
-    if endpoint.starts_with("http://") || endpoint.starts_with("https://") {
+    let lower = endpoint.to_ascii_lowercase();
+    if lower.starts_with("http://") || lower.starts_with("https://") {
         return Ok(Transport::Http);
     }
-    if endpoint.starts_with("ws://") || endpoint.starts_with("wss://") {
+    if lower.starts_with("ws://") || lower.starts_with("wss://") {
         return Ok(Transport::Ws);
     }
-    if endpoint.contains("://") {
+    if lower.contains("://") {
         bail!("unsupported endpoint scheme in {endpoint:?}");
     }
     Ok(Transport::Ipc)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/console/endpoint.rs` around lines 37 - 48, The detect_transport function does case-sensitive scheme checks so uppercase schemes like "HTTP://host" are misdetected; fix by performing scheme checks against a lowercase version of the input (e.g., let lower = endpoint.to_lowercase()) and use lower.starts_with("http://"), lower.starts_with("https://"), lower.starts_with("ws://"), lower.starts_with("wss://") while preserving the original endpoint string in the bail! message and returned errors; this ensures scheme detection is RFC 3986 compliant without changing the error output.
54-77: Test `defaults_to_ipc_path` is environment-dependent.

It relies on `dirs::home_dir()`/`dirs::data_dir()` returning `Some`. In CI sandboxes where `$HOME` (or `$XDG_DATA_HOME`/`%APPDATA%`) is unset, `data_dir()` returns `None` and the fallback `PathBuf::from(".")` still satisfies the assertions, so the test should pass. Still, consider adding a test that explicitly asserts the scheme-detection behavior for a Windows-style path like `\\.\pipe\reth.ipc` or a relative path with a colon, to guard against regressions in `contains("://")`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/console/endpoint.rs` around lines 54 - 77, The current test defaults_to_ipc_path relies on environment dirs and can be flaky; instead add explicit unit tests for scheme-detection using concrete IPC-like inputs: add a test (e.g., windows_pipe_is_ipc) that calls resolve_endpoint(Some(r"\\.\pipe\reth.ipc")) and asserts got.transport == Transport::Ipc and got.raw contains the pipe path, and another test (e.g., relative_path_with_colon_is_ipc) that calls resolve_endpoint(Some("relative:with:colon/reth.ipc")) (or another representative relative path with a colon) and asserts Transport::Ipc; keep the existing defaults_to_ipc_path if desired but ensure new tests cover Windows-style and colon-containing paths to prevent regressions in resolve_endpoint's scheme detection.

src/console/rpc_completion.rs (1)
357-364: Small nit: lookup is O(n) per call.

`method_suffixes` linearly scans `RPC_NAMESPACE_TABLE` on every call; `dot_completions_for_namespace` calls it once, so the REPL hit is minor. If the completer ever calls it per keystroke across many namespaces, consider a `phf`/`once_cell::Lazy<HashMap>` lookup. Fine to defer.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/console/rpc_completion.rs` around lines 357 - 364, method_suffixes currently does a linear scan over RPC_NAMESPACE_TABLE on every call causing O(n) lookups; change it to a constant-time lookup by building a static map keyed by namespace (e.g., using phf or once_cell::Lazy<HashMap<&'static str, &'static [&'static str]>>) populated from RPC_NAMESPACE_TABLE, then update method_suffixes to query that map and return the slice (preserve return type &[&str]); update any references such as dot_completions_for_namespace to use the same function (no API change) and ensure the static map has 'static lifetimes for the returned slices.

src/console/query.rs (1)
54-65: `parse_map` finds the first `)`, breaking nested/complex selectors.

`expr.find(')')` takes the first closing paren, so an inner selector that itself contains `)` (e.g. a future nested `.map(.map(.a))`) will be split mid-expression. Today no selector syntax produces `)`, but this is a trap waiting for the first such extension. Tracking paren depth avoids the pitfall:

♻️ Proposed paren-depth scan

```diff
-    if !expr.starts_with(".map(") {
-        return Ok(None);
-    }
-    let close = expr.find(')').ok_or_else(|| eyre!("invalid .map(...) expression"))?;
-    let inner = &expr[5..close];
+    if !expr.starts_with(".map(") {
+        return Ok(None);
+    }
+    let mut depth = 1usize;
+    let mut close = None;
+    for (i, b) in expr.bytes().enumerate().skip(5) {
+        match b {
+            b'(' => depth += 1,
+            b')' => {
+                depth -= 1;
+                if depth == 0 {
+                    close = Some(i);
+                    break;
+                }
+            }
+            _ => {}
+        }
+    }
+    let close = close.ok_or_else(|| eyre!("invalid .map(...) expression"))?;
+    let inner = &expr[5..close];
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/console/query.rs` around lines 54 - 65, parse_map currently uses expr.find(')') which stops at the first ')' and breaks on nested/complex selectors; change parse_map to scan from the opening '(' at index 4/5 and track parenthesis depth to find the matching closing ')' (increment on '(' and decrement on ')') so the computed close index is the matching partner, then compute inner = &expr[5..close] and rest = &expr[close+1..] as before; update error handling to still bail if no matching ')' is found and keep the function signature and return type unchanged.

src/console/rpc.rs (2)
162-175: One fresh Unix connection per request; consider reusing the stream.

Each `IpcClientLite::request` opens a new `UnixStream`, writes one JSON-RPC frame, and drops it. For interactive one-shots this is fine, but REPL flows that issue several calls in quick succession (e.g. `admin.removeAllPeers` iterating peers) pay connect overhead each time and drop correlation with the server. Holding a connection behind a `tokio::sync::Mutex<BufReader<UnixStream>>` (and falling back to reconnect on error) would reduce latency and match how the `jsonrpsee` HTTP/WS variants behave.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/console/rpc.rs` around lines 162 - 175, IpcClientLite::request currently opens a new UnixStream for every call (using self.path), which adds connect overhead and loses session correlation; change the implementation to hold a persistent connection (e.g., store a tokio::sync::Mutex<Option<BufReader<UnixStream>>> or similar inside the IpcClientLite struct) and have request lock that mutex, reuse the BufReader<UnixStream> for write/read, and on any I/O error drop and reconnect to self.path before retrying; ensure request still uses next_id for unique ids and preserves the JSON-RPC framing and newline behavior when reusing the stream.
59-63: `supported_modules` loses error context by applying `?` to `serde_json::from_value`.

If the server returns `rpc_modules` with a non-map shape (e.g. an array, as exercised by the `supported_modules_errors_on_invalid_shape` test), the resulting error bubbles up as a raw `serde_json` error with no context about which RPC produced it. Consider attaching context:

```diff
-        let value = self.request_value("rpc_modules", None).await?;
-        let map = serde_json::from_value(value)?;
-        Ok(map)
+        let value = self.request_value("rpc_modules", None).await?;
+        serde_json::from_value(value)
+            .map_err(|e| eyre!("rpc_modules returned unexpected shape: {e}"))
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/console/rpc.rs` around lines 59 - 63, supported_modules currently calls serde_json::from_value(value)? which will surface a raw serde error with no context; change that call in supported_modules to attach contextual information (e.g. with .with_context or map_err) so the error indicates it occurred while parsing the "rpc_modules" value returned by request_value("rpc_modules"); reference the supported_modules function and the request_value and serde_json::from_value calls when making the change so failures clearly state they came from parsing the rpc_modules RPC response.

src/console/output.rs (2)
140-155: Inconsistent empty-list handling across table formatters.
`try_format_detailed_peers` returns `None` for an empty peers array, which makes the caller fall back to printing raw JSON (`[]`). In contrast, `try_format_peer_scores` and `try_format_banned_subnets` return a friendly `-- no ... --` string for the same case. Consider aligning behavior for consistency.

♻️ Proposed alignment

```diff
-    let peers = value.as_array()?;
-    if peers.is_empty() {
-        return None;
-    }
-
-    let first = peers.first()?;
+    let peers = value.as_array()?;
+    if peers.is_empty() {
+        return Some("-- no peers connected --".to_string());
+    }
+    let first = peers.first()?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/console/output.rs` around lines 140 - 155, try_format_detailed_peers currently returns None for an empty peers array (causing raw JSON "[]" to be printed), which is inconsistent with try_format_peer_scores and try_format_banned_subnets; change the early empty-array branch in try_format_detailed_peers to return Some(String) with the same friendly placeholder (e.g. "-- no peers --") instead of None so the table formatter shows the consistent human-readable message while keeping the function signature Option<String>.
80-100: Possible noisy hex annotations for short non-numeric identifiers.

Any string starting with `0x` and ≤ 32 hex chars gets a `0x... -> <dec>` annotation, including short peer ids, short signatures, or any opaque 8-/16-byte hex blob. For the common `eth_blockNumber`/`eth_gasPrice` case this is great, but for responses with many such fields the interpretation section can get long. Consider either:

- gating `small_hex_to_dec` annotations to fields whose key hints at a number (e.g. `gas*`, `nonce`, `number`, `*Block*`), or
- deduplicating annotations per `(hex → decimal)` pair.

Non-blocking; only raise as the operator UX matures.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/console/output.rs` around lines 80 - 100, The small-hex-to-dec annotations are noisy; deduplicate them by tracking which hex strings you've already annotated: in the function that walks values (the scope using path, out, small_hex_to_dec, decimal_like_wei, looks_like_wei, format_eth) add a HashSet<String> (e.g., seen_small_hex) and before pushing the "{path}: {s} -> {dec}" entry check and insert into the set so each hex→decimal pair is emitted only once; optionally combine this with a simple key-filter (match path contains "gas", "nonce", "number", "block") if you want to further restrict which keys get annotated.
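The deduplication idea from this comment can be sketched with a `HashSet`. This is an illustrative, std-only mock of the annotation walk, not the real `output.rs` internals; only the `0x` prefix, the 32-char bound, and the `path: hex -> dec` shape come from the review text:

```rust
use std::collections::HashSet;

// Walk (path, value) pairs and emit each "hex -> decimal" annotation at
// most once, no matter how many fields repeat the same hex string.
fn annotate_small_hex(fields: &[(&str, &str)]) -> Vec<String> {
    let mut seen: HashSet<String> = HashSet::new();
    let mut out = Vec::new();
    for (path, s) in fields {
        if let Some(hex) = s.strip_prefix("0x") {
            // Only small hex blobs (<= 32 hex chars) are interpreted as numbers.
            if hex.len() <= 32 {
                if let Ok(dec) = u128::from_str_radix(hex, 16) {
                    // insert() returns false for values we already annotated.
                    if seen.insert((*s).to_string()) {
                        out.push(format!("{path}: {s} -> {dec}"));
                    }
                }
            }
        }
    }
    out
}
```

A repeated `0x5208` (21000 gas) is annotated only on its first occurrence.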
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/console/engine.rs`:
- Around line 136-143: The test rpc_with_query_stores_raw_result_in_last
currently just clones a Value and never calls evaluate_line so it doesn't verify
the RpcWithQuery branch's assignment of last_rpc_result; update the test to
drive the code through evaluate_line (or refactor the store-vs-return behavior
into a pure helper and test that directly) and use a fake/mocked RpcClient that
returns the peers JSON to ensure evaluate_line's RpcWithQuery path sets
last_rpc_result; look for the rpc_with_query_stores_raw_result_in_last test, the
evaluate_line function, the RpcWithQuery branch, and the last_rpc_result field
to implement the change.
In `@src/console/output.rs`:
- Around line 17-22: The match in native_symbol_for_chain_id uses magic literals
80_069 and 80_094; extract these into a documented constant (e.g.,
BERA_CHAIN_IDS or BERA_CHAIN_ID_SET) and reference that constant from
native_symbol_for_chain_id and from src/console/repl.rs::chain_emoji so the
chain-id list is single source of truth. Likewise extract the large numeric
literals used by looks_like_wei into named, documented constants (e.g.,
WEI_LOWER_BOUND, WEI_UPPER_BOUND or WEI_HEURISTIC_BOUNDS) so the heuristic is
self-describing. Update usages to refer to the new constants and add brief doc
comments for each constant.
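As a concrete illustration of the constant-extraction this prompt asks for, a sketch might look like the following. Only the chain ids 80069/80094 are taken from the review; the constant names, the mainnet/testnet labels, and the specific wei bounds are assumptions for illustration:

```rust
/// Chain ids treated as Berachain networks (single source of truth for
/// the symbol and emoji helpers). Assumed: 80094 mainnet, 80069 testnet.
const BERA_CHAIN_IDS: [u64; 2] = [80_069, 80_094];

/// Illustrative heuristic bounds for "this number looks like a wei amount".
const WEI_LOWER_BOUND: u128 = 100_000_000_000_000; // 1e14
const WEI_UPPER_BOUND: u128 = 10_000_000_000_000_000_000_000_000_000; // 1e28

fn native_symbol_for_chain_id(chain_id: u64) -> &'static str {
    if BERA_CHAIN_IDS.contains(&chain_id) { "BERA" } else { "ETH" }
}

fn looks_like_wei(v: u128) -> bool {
    (WEI_LOWER_BOUND..=WEI_UPPER_BOUND).contains(&v)
}
```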
In `@src/console/repl.rs`:
- Around line 244-261: In complete, the current start calculation uses a byte
index (rfind on a &str) which can land mid-UTF-8 codepoint for multi-byte
whitespace; change the logic to find the previous whitespace using char_indices
so you can compute a valid byte boundary: use
up_to_cursor.char_indices().rev().find(|(_, ch)| ch.is_whitespace()).map(|(i,
ch)| i + ch.len_utf8()).unwrap_or(0) to set start, then slice needle =
&up_to_cursor[start..] as before (references: function complete, variables
up_to_cursor, start, needle).
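The `char_indices` boundary logic described in this prompt, as a runnable sketch (the helper name is illustrative; the real code lives inside `complete`):

```rust
// Byte offset just past the last whitespace char before the cursor.
// char_indices yields char starting offsets, so i + ch.len_utf8() is
// always a valid UTF-8 boundary, even for multi-byte whitespace.
fn word_start(up_to_cursor: &str) -> usize {
    up_to_cursor
        .char_indices()
        .rev()
        .find(|(_, ch)| ch.is_whitespace())
        .map(|(i, ch)| i + ch.len_utf8())
        .unwrap_or(0)
}
```

Slicing `&up_to_cursor[word_start(up_to_cursor)..]` then cannot panic mid-codepoint.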
In `@src/console/rpc_completion.rs`:
- Around line 315-332: The HARDHAT_METHOD_SUFFIXES array contains an entry
"hardhat_dropTransaction" that produces a duplicated namespace
("hardhat.hardhat_dropTransaction"); update the array (HARDHAT_METHOD_SUFFIXES)
to remove the "hardhat_" prefix from that entry (use "dropTransaction") so it
matches the other suffixes and avoids producing
"hardhat.hardhat_dropTransaction", and scan the array for any other entries with
an unnecessary "hardhat_" prefix and normalize them similarly.
- Around line 277-312: The ANVIL_METHOD_SUFFIXES array contains one entry with
the namespace already included ("anvil_dropTransaction"), which causes
dot_completions_for_namespace("anvil") to produce "anvil.anvil_dropTransaction";
change that entry in ANVIL_METHOD_SUFFIXES to the bare method name
"dropTransaction" (and scan ANVIL_METHOD_SUFFIXES for any other items that
mistakenly include the "anvil" prefix) so completions generated by
dot_completions_for_namespace("anvil") become "anvil.dropTransaction".
In `@src/console/rpc.rs`:
- Around line 162-190: The request method currently awaits UnixStream::connect,
stream.write_all and reader.read_line without timeouts; wrap those awaits in
tokio::time::timeout (e.g., Duration::from_secs(30)) inside async fn request to
avoid hanging the REPL — specifically, wrap the connect call that constructs
stream (used with self.path), both write_all calls on the stream, and the
reader.read_line call; handle timeout errors by mapping the timeout::error to a
clear eyre! error (e.g., "IPC operation timed out") so request returns a proper
Err instead of hanging. Ensure you reference the request method, self.path, and
the local reader/stream variables when adding the timeouts.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 028226a4-667c-4f01-a8f9-fe3d908e55f8
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (16)
Cargo.toml, src/berachain_cli.rs, src/console/cli.rs, src/console/command.rs, src/console/endpoint.rs, src/console/engine.rs, src/console/exec.rs, src/console/mod.rs, src/console/output.rs, src/console/query.rs, src/console/repl.rs, src/console/rpc.rs, src/console/rpc_completion.rs, src/console/run.rs, src/lib.rs, src/main.rs
- Remove undocumented removeAllPeers alias (destructive, not in public contract)
- Drop anvil/hardhat completion tables (not relevant to bera-reth operators)
- Handle string-encoded hex/decimal in node status formatter (chainId, headNumber, peer counts)
- Deduplicate hex_or_decimal_to_u64 into output module
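The deduplicated `hex_or_decimal_to_u64` helper mentioned above could look roughly like this; a hedged sketch, since the real signature in the output module may differ:

```rust
// Accept both "0x..."-prefixed hex strings and plain decimal strings,
// as produced by different RPC fields (chainId, headNumber, peer counts).
fn hex_or_decimal_to_u64(s: &str) -> Option<u64> {
    if let Some(hex) = s.strip_prefix("0x").or_else(|| s.strip_prefix("0X")) {
        u64::from_str_radix(hex, 16).ok()
    } else {
        s.parse::<u64>().ok()
    }
}
```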
Caps wei annotation at 10^28 (10B tokens). Wraps IPC requests in a 30-second timeout so a stalled daemon cannot freeze the REPL. Uses char_indices for UTF-8-safe completion word boundaries. Renames misleading test.
Compare schemes against the ASCII-lowercased endpoint so HTTP:// and http:// both resolve to HTTP while preserving the original string for errors and ResolvedEndpoint::raw. Adds a regression test for an uppercase scheme.
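A runnable sketch of that change, with simplified stand-in types (the real `endpoint.rs` returns `eyre::Result` and uses `bail!`):

```rust
#[derive(Debug, PartialEq)]
enum Transport {
    Http,
    Ws,
    Ipc,
}

fn detect_transport(endpoint: &str) -> Result<Transport, String> {
    // Compare against a lowercased copy so "HTTP://" matches too...
    let lower = endpoint.to_ascii_lowercase();
    if lower.starts_with("http://") || lower.starts_with("https://") {
        return Ok(Transport::Http);
    }
    if lower.starts_with("ws://") || lower.starts_with("wss://") {
        return Ok(Transport::Ws);
    }
    if lower.contains("://") {
        // ...while keeping the original string in the error message.
        return Err(format!("unsupported endpoint scheme in {endpoint:?}"));
    }
    // Anything without a scheme is treated as an IPC path.
    Ok(Transport::Ipc)
}
```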
Assert Windows-style named pipes and colon-containing relative paths stay Transport::Ipc so scheme detection does not regress.
Track parenthesis depth from the opening (.map( so the closing ) matches the map call, not the first ) in the expression.
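The depth-tracking scan can be sketched as a standalone function (an illustration of the technique, not the exact `parse_map` code, which returns `eyre::Result`):

```rust
// Split ".map(<inner>)<tail>" into (inner, tail), matching the ')' that
// closes the ".map(" call rather than the first ')' encountered.
// Assumes ASCII selector syntax, as in the console's query mini-language.
fn map_inner(expr: &str) -> Option<(&str, &str)> {
    let rest = expr.strip_prefix(".map(")?;
    let mut depth = 1usize;
    for (i, b) in rest.bytes().enumerate() {
        match b {
            b'(' => depth += 1,
            b')' => {
                depth -= 1;
                if depth == 0 {
                    return Some((&rest[..i], &rest[i + 1..]));
                }
            }
            _ => {}
        }
    }
    None // unbalanced: no matching ')'
}
```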
Wrap serde_json::from_value errors with context when decoding the rpc_modules response.
Return a short placeholder when the peer list is empty so output matches other table formatters instead of raw [].
Pull request overview
Copilot reviewed 16 out of 17 changed files in this pull request and generated 2 comments.
Actionable comments posted: 4
🧹 Nitpick comments (1)
src/console/engine.rs (1)
89-100: Tests still don't cover the `RpcWithQuery` / alias branches of `evaluate_line`.

The added unit tests exercise `normalize_rpc_method`, `resolve_alias_method`, `apply_query_to_last_rpc`, and `apply_query` directly, but `evaluate_line` itself (including the `last_rpc_result` side effect on the `Rpc`, `RpcWithQuery`, and `Alias` branches) is still untested. A regression that drops the `last_rpc_result = Some(...)` assignment in any of those branches would not be caught. Consider injecting a lightweight trait-based RPC or a test double so `evaluate_line` can be driven end-to-end.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/console/engine.rs` around lines 89 - 100, Tests don't exercise evaluate_line's Rpc/RpcWithQuery/Alias branches and their side-effect of setting last_rpc_result, so add an end-to-end unit test using a lightweight test double implementing the RPC trait to drive evaluate_line paths: implement a fake RPC that returns a predictable serde_json::Value, inject it into the evaluator or context used by evaluate_line, then call evaluate_line with inputs that exercise Rpc, RpcWithQuery and Alias variants and assert both the returned value and that evaluator.last_rpc_result is Some(expected_value). Ensure the test targets evaluate_line, references the RpcWithQuery/Rpc/Alias branches, and cleans up/reset last_rpc_result between sub-tests to avoid inter-test interference.
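The test-double idea can be sketched as below. Every name here is illustrative and the real `evaluate_line` signature differs (it is fallible and returns an `EvalOutcome`); the point is only the inject-a-fake-RPC pattern that lets the `last_rpc_result` side effect be asserted:

```rust
// Minimal trait so the engine can take either a real client or a fake.
trait Rpc {
    fn request_value(&self, method: &str) -> String;
}

// Test double returning a canned peers payload.
struct FakeRpc;
impl Rpc for FakeRpc {
    fn request_value(&self, _method: &str) -> String {
        "[\"peer1\",\"peer2\"]".to_string()
    }
}

struct Engine {
    last_rpc_result: Option<String>,
}

impl Engine {
    fn evaluate_line(&mut self, rpc: &dyn Rpc, method: &str) -> String {
        let raw = rpc.request_value(method);
        // The regression the review worries about: this assignment must
        // happen on every RPC-bearing branch, and a test can now catch it.
        self.last_rpc_result = Some(raw.clone());
        raw
    }
}
```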
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/console/engine.rs`:
- Around line 44-49: The parser treats inputs with arguments as implicit RPCs
and skips alias resolution, so commands like "ban \"0xpeer\"" never reach
InputCommand::Alias; fix by ensuring alias lookup runs before or inside the
implicit-RPC check: in parse_input (or looks_like_implicit_rpc) consult the
aliases map (the same map defined in run.rs) for the first token and return
InputCommand::Alias (preserving args) when a match exists, or alternatively, in
the RPC handling path call resolve_alias_method/resolve_alias_method(aliases,
&method) before normalize_rpc_method to remap alias names to their target RPC;
update places referencing looks_like_implicit_rpc, InputCommand::Alias,
resolve_alias_method, and normalize_rpc_method accordingly so aliased commands
with arguments are resolved instead of sent raw.
In `@src/console/output.rs`:
- Around line 196-221: The code currently slices strings by bytes in the peer
formatting logic (the direction extraction using &s[..s.len().min(3)] and
client_short construction using &client[..8] and &client[client.len() - 3..])
which will panic on non-ASCII input; add a safe truncation helper (e.g.,
truncate_chars(s: &str, max_chars: usize) -> &str using char_indices) and use it
in the peer formatting: replace the direction slice with
truncate_chars(direction_str, 3) and build client_short by truncating the front
and back with truncate_chars or by taking char-safe prefixes/suffixes; apply the
same fix to the other functions named try_format_peer_scores and
try_format_banned_subnets where reason (and any similar slices) is truncated to
ensure all string slicing is done on character boundaries rather than raw byte
indices.
In `@src/console/repl.rs`:
- Around line 82-104: The beraAdmin snapshot is using wrong JSON keys: in the
bera_admin_status handling replace status.get("client") used for client_version
with status.get("clientVersion") and replace status.get("head") used for
head_number with status.get("headNumber"); update the variables client_version
and head_number accordingly so they mirror the extraction logic in
try_format_node_status, and optionally consider replacing the hard-coded "🐻⭐"
with chain_emoji(chain_id) (or same gating with BERACHAIN_CHAIN_IDS) for
consistency with the non-beraAdmin branch.
In `@src/console/rpc.rs`:
- Around line 192-196: The current code treats any presence of "error"
(including Value::Null) as a failure when parsing the JSON-RPC response; change
the guard so you only return Err when "error" exists and is not null. In
practice update the check around resp.get("error") (the block that now does `if
let Some(err) = resp.get("error") { return Err(...) }`) to first test that the
value is not Value::Null (e.g., match or use .and_then / .filter to ensure
err.is_null() is false) and only then produce the eyre! with the error value;
leave the final resp.get("result").cloned().ok_or_else(...) unchanged. Ensure
you reference the same resp variable and the existing error/result handling
logic.
---
Nitpick comments:
In `@src/console/engine.rs`:
- Around line 89-100: Tests don't exercise evaluate_line's
Rpc/RpcWithQuery/Alias branches and their side-effect of setting
last_rpc_result, so add an end-to-end unit test using a lightweight test double
implementing the RPC trait to drive evaluate_line paths: implement a fake RPC
that returns a predictable serde_json::Value, inject it into the evaluator or
context used by evaluate_line, then call evaluate_line with inputs that exercise
Rpc, RpcWithQuery and Alias variants and assert both the returned value and that
evaluator.last_rpc_result is Some(expected_value). Ensure the test targets
evaluate_line, references the RpcWithQuery/Rpc/Alias branches, and cleans
up/reset last_rpc_result between sub-tests to avoid inter-test interference.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0968a8ed-ea92-4565-bae8-50ff15813291
📒 Files selected for processing (7)
- src/console/endpoint.rs
- src/console/engine.rs
- src/console/output.rs
- src/console/query.rs
- src/console/repl.rs
- src/console/rpc.rs
- src/console/rpc_completion.rs
✅ Files skipped from review due to trivial changes (1)
- src/console/rpc_completion.rs
Drop default and beraAdmin shortcut mappings (peers/status/ban/penalize). REPL completion still uses rpc_modules namespaces. Single-token input remains MethodToken with dot-to-underscore normalization (e.g. eth.blockNumber). Help text updated accordingly.
Reintroduce removeAllPeers / admin.removeAllPeers as explicit tokens: list admin_peers, remove each enode via admin_removePeer, summarize counts. Document in help; stderr logs per-peer failures.
Align output.rs with feat/console: chars_prefix and ellipsis_char_ends for direction, client, peer id, genesis/head hash, and ban reason strings.
Use Reedline with FileBackedHistory, ColumnarMenu tab completion, and a custom bera> prompt. Map Ctrl+C to continue and Ctrl+D to exit like the previous readline behavior.
- Apply nightly rustfmt and dprint for TOML
- Fix rpc_completion module docs (no broken intra-doc links)
- Expand cargo-machete ignored deps for macro/test/build usage
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/console/rpc.rs (1)
192-194: ⚠️ Potential issue | 🟡 Minor

`error: null` still treated as failure — duplicate of earlier review.

`resp.get("error")` returns `Some(Value::Null)` when a server sends `"error": null` alongside a valid `result`, so this branch will incorrectly surface `rpc error: null`. The earlier review flagged the same issue and it does not appear to have been addressed in this revision.

🛠️ Proposed guard

```diff
-        if let Some(err) = resp.get("error") {
-            return Err(eyre!("rpc error: {}", err));
-        }
+        if let Some(err) = resp.get("error").filter(|v| !v.is_null()) {
+            return Err(eyre!("rpc error: {err}"));
+        }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/console/rpc.rs` around lines 192 - 194, The current check treats any presence of "error" (including Value::Null) as a failure; update the branch that inspects resp.get("error") so it only returns Err when the value exists and is not null (i.e., explicitly match and reject non-Null error values), leaving responses with "error": null to proceed and be handled by the normal result path; adjust the code around resp.get("error") in src/console/rpc.rs to perform this non-null check (match or if let Some(v) if !v.is_null()) before constructing the eyre!("rpc error: {}", err).
🧹 Nitpick comments (3)
src/console/rpc.rs (1)
337-345: Inline `// what` comments in tests conflict with coding guidelines.

The `// First request: …` / `// Second request: …` comments describe what the following lines do rather than why. As per coding guidelines, `src/**/*.rs`: "No comments in code unless explicitly requested" and "Write succinct documentation focusing on 'why' rather than 'what'".

♻️ Proposed cleanup

```diff
-    let server_task = tokio::spawn(async move {
-        // First request: reply with an empty line.
-        let (stream1, _) = listener.accept().await.expect("accept first");
+    let server_task = tokio::spawn(async move {
+        let (stream1, _) = listener.accept().await.expect("accept first");
@@
-        // Second request: reply with JSON-RPC error.
-        let (stream2, _) = listener.accept().await.expect("accept second");
+        let (stream2, _) = listener.accept().await.expect("accept second");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/console/rpc.rs` around lines 337 - 345, Remove or replace the inline "First request: …" and "Second request: …" comments that describe what the code does; instead either delete them or add a concise why-focused comment near the test block that explains the intent of the simulated socket interactions (e.g., to simulate an empty-line response then a JSON-RPC error) so readers understand the test purpose. Locate the test code around listener.accept().await with variables stream1, r1, req1, s1 and the subsequent read_line/write_all calls and update the comments accordingly.

src/console/command.rs (1)
55-67: Inline comments restate *what* the code already says — drop or convert to why-focused notes.

Lines 56, 63, 96–99, 141, 145, 150, and 156 largely narrate the adjacent branch condition ("If wrapped as a JSON array, use it directly…", "Dot-style method: consume two segments…"). Either the code already expresses this via the predicate, or a brief doc comment on the function summarizing the dispatch rules would serve maintainers better than per-branch labels.

Suggested direction:

- Remove the per-branch `// what` lines.
- Add a single `///`-level note on `split_rpc_query_tail` and `parse_params` explaining why each supports its shapes (e.g., "underscore-style tokens have only one non-query dot, so the first `.`/`[` starts the query; dot-style tokens preserve two segments as the method").

As per coding guidelines: `src/**/*.rs` — "No comments in code unless explicitly requested" and "Write succinct documentation focusing on 'why' rather than 'what'".

Also applies to: 95-103, 140-157
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/console/command.rs` around lines 55 - 67, The review notes that several inline comments in split_rpc_query_tail and parse_params merely restate what the code does (e.g., "Dot-style method: consume two segments…" and similar branch labels); remove those "what" comments and replace them with a single /// doc comment on each function explaining the rationale ("why") for supporting underscore-style vs dot-style tokens and the expected shapes of the query tail (e.g., why underscore-style treats the first '.' or '[' as the query start and why dot-style preserves two segments). Update split_rpc_query_tail and parse_params to keep only concise, why-focused documentation and remove per-branch narrating comments inside the function bodies.

Cargo.toml (1)

40-97: Pin the `camembera/reth` git dependencies to a `rev` for reproducibility.

All ~40 `reth-*` dependencies (and the three dev-dependency equivalents at lines 95–97) point at `branch = "pog/provenance-callback"` with no `rev`. Any new commit on that branch will silently change resolved sources between builds, defeating `Cargo.lock`'s immutability guarantee for git deps and making bisects/rollbacks harder.

Consider pinning via a shared workspace variable or pinning each entry to a specific `rev`:

```toml
reth = { git = "https://github.com/camembera/reth", branch = "pog/provenance-callback", rev = "<sha>" }
```

Alternatively, tag the fork and depend on `tag = "..."`. This is also easier on `cargo-machete`/release auditing given how many crates are involved.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Cargo.toml` around lines 40 - 97, The git dependencies for the many reth crates (e.g., reth, reth-basic-payload-builder, reth-cli, reth-evm, reth-node-core and the dev deps like reth-e2e-test-utils/reth-rpc-builder/reth-trie-common) are pinned to a branch only; pin each to an immutable revision by adding rev = "<commit-sha>" (or use tag = "<tag>") to each dependency entry or define a single workspace variable with the chosen rev and reference it across the reth-* entries so all git refs are fixed and reproducible.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/console/engine.rs`:
- Around line 67-83: The loop currently uses ok_or_else(?), which aborts on the
first missing or invalid enode; instead, change the enode extraction in the
for-peer loop (where peer.get("enode").and_then(Value::as_str) is used) to a
conditional that treats a missing/invalid enode as a counted failure: if the
enode string is present proceed with rpc.request_value("admin_removePeer",
Some(serde_json::json!([enode]))).await and update removed/failed as before,
otherwise log an error (e.g., eprintln!("admin.removePeer: missing enode for
peer: {peer}") or similar), increment failed, and continue to the next peer so
the loop does not abort and accounting remains correct.
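The count-and-continue loop described above can be sketched as follows. The peer record and removal callback are simplified stand-ins (the real code reads `enode` out of `serde_json` values and awaits `admin_removePeer` over RPC):

```rust
// Hypothetical peer record; the real code extracts `enode` from JSON.
struct Peer {
    enode: Option<String>,
}

// Treat a missing enode or a failed removal as a counted failure
// instead of aborting the whole sweep on the first error.
fn remove_all_peers(peers: &[Peer], mut remove: impl FnMut(&str) -> bool) -> (usize, usize) {
    let (mut removed, mut failed) = (0usize, 0usize);
    for peer in peers {
        match peer.enode.as_deref() {
            Some(enode) => {
                if remove(enode) {
                    removed += 1;
                } else {
                    eprintln!("admin.removePeer failed for {enode}");
                    failed += 1;
                }
            }
            None => {
                eprintln!("admin.removePeer: missing enode for peer");
                failed += 1;
            }
        }
    }
    (removed, failed)
}
```

The returned `(removed, failed)` pair feeds the summary line, and every peer is attempted regardless of earlier failures.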
---
Duplicate comments:
In `@src/console/rpc.rs`:
- Around line 192-194: The current check treats any presence of "error"
(including Value::Null) as a failure; update the branch that inspects
resp.get("error") so it only returns Err when the value exists and is not null
(i.e., explicitly match and reject non-Null error values), leaving responses
with "error": null to proceed and be handled by the normal result path; adjust
the code around resp.get("error") in src/console/rpc.rs to perform this non-null
check (match or if let Some(v) if !v.is_null()) before constructing the
eyre!("rpc error: {}", err).
---
Nitpick comments:
In `@Cargo.toml`:
- Around line 40-97: The git dependencies for the many reth crates (e.g., reth,
reth-basic-payload-builder, reth-cli, reth-evm, reth-node-core and the dev deps
like reth-e2e-test-utils/reth-rpc-builder/reth-trie-common) are pinned to a
branch only; pin each to an immutable revision by adding rev = "<commit-sha>"
(or use tag = "<tag>") to each dependency entry or define a single workspace
variable with the chosen rev and reference it across the reth-* entries so all
git refs are fixed and reproducible.
In `@src/console/command.rs`:
- Around line 55-67: The review notes that several inline comments in
split_rpc_query_tail and parse_params merely restate what the code does (e.g.,
"Dot-style method: consume two segments…" and similar branch labels); remove
those "what" comments and replace them with a single /// doc comment on each
function explaining the rationale ("why") for supporting underscore-style vs
dot-style tokens and the expected shapes of the query tail (e.g., why
underscore-style treats the first '.' or '[' as the query start and why
dot-style preserves two segments). Update split_rpc_query_tail and parse_params
to keep only concise, why-focused documentation and remove per-branch narrating
comments inside the function bodies.
In `@src/console/rpc.rs`:
- Around line 337-345: Remove or replace the inline "First request: …" and
"Second request: …" comments that describe what the code does; instead either
delete them or add a concise why-focused comment near the test block that
explains the intent of the simulated socket interactions (e.g., to simulate an
empty-line response then a JSON-RPC error) so readers understand the test
purpose. Locate the test code around listener.accept().await with variables
stream1, r1, req1, s1 and the subsequent read_line/write_all calls and update
the comments accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4e64a266-a250-4a35-a12b-886816109761
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (9)
- Cargo.toml
- src/console/command.rs
- src/console/engine.rs
- src/console/exec.rs
- src/console/output.rs
- src/console/repl.rs
- src/console/rpc.rs
- src/console/rpc_completion.rs
- src/console/run.rs
🚧 Files skipped from review as they are similar to previous changes (3)
- src/console/exec.rs
- src/console/rpc_completion.rs
- src/console/output.rs
Require rand >=0.8.6 and update lockfile (0.9.x -> 0.9.3) per advisory.
Summary
Adds
bera-reth console, an interactive REPL and one-shot RPC tool for operators.bera-reth console [endpoint]— connect over IPC, HTTP, or WebSocket. Auto-discovers IPC socket from platform default datadir when no endpoint given.bera-reth console --exec "<cmd>"— run a single command, print raw JSON, exit.bera-reth console --raw— suppress table formatting and value annotations in REPL mode.peers,status,ban,penalize) pre-registered and conditionally shown when beraAdmin namespace is available at connect time..count,.first,.last,.[N],.[N].field,.map(.field).Validation
Verified against a live mainnet node:
- `mainnet-archive-el` + `mainnet-archive-cl` active
- `--exec 'eth.blockNumber'` → `"0x12c508a"`, exit 0
- `--exec 'web3.clientVersion'` → `"bera-reth/v1.3.3-894d6aa/x86_64-unknown-linux-gnu"`
- `net=80094 🐻 | block=19681529 | peers=12`
- `.count` on a scalar errors correctly; `admin_peers` → `.count` returns `12`
- `--raw` mode

Test plan
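The result-query suffixes exercised above (`.count`, `.first`, `.[N]`, …) can be sketched over a simplified value model. This stand-in operates on a slice of strings rather than `serde_json::Value`, and `apply_query` is a hypothetical name, not the codebase's evaluator:

```rust
// Simplified query evaluator over an array-like result.
// Returns None for unknown queries or out-of-range indices.
fn apply_query(values: &[String], query: &str) -> Option<String> {
    match query {
        ".count" => Some(values.len().to_string()),
        ".first" => values.first().cloned(),
        ".last" => values.last().cloned(),
        // `.[N]` form: parse the index between ".[" and "]".
        q if q.starts_with(".[") && q.ends_with(']') => {
            let idx: usize = q[2..q.len() - 1].parse().ok()?;
            values.get(idx).cloned()
        }
        _ => None,
    }
}
```

The real evaluator additionally handles `.[N].field` and `.map(.field)` by descending into object fields, and rejects `.count` on scalars with an error rather than `None`.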
Summary by CodeRabbit