refactor(staker): optimize reward calculation in warmup loop #1268

Open

aronpark1007 wants to merge 3 commits into main from refactor/staker-reward-optimization

Conversation


@aronpark1007 aronpark1007 commented Apr 22, 2026

Summary

  • Remove redundant zero initialization in RewardState: make([]int64, n) already zero-initializes the slice
  • Cache CurrentTick result per warmup iteration instead of querying the same value twice
  • Cache raw reward at each warmup boundary and reuse as the next iteration's start basis, reducing CalculateRawRewardForPosition calls per warmup
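The zero-initialization point in the first bullet can be illustrated with a minimal Go sketch (the `newRewards` helper is illustrative, not the actual RewardState constructor): `make([]int64, n)` returns a slice whose elements are already the zero value, so a follow-up zeroing loop does nothing.

```go
package main

import "fmt"

// newRewards sketches the removed redundancy: make already
// zero-fills the slice, so the old explicit loop was a no-op.
func newRewards(n int) []int64 {
	rewards := make([]int64, n) // each element is already 0
	// Removed pattern:
	//   for i := range rewards { rewards[i] = 0 } // redundant
	return rewards
}

func main() {
	fmt.Println(newRewards(4)) // prints [0 0 0 0]
}
```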

Test Plan

No new tests required. Existing tests cover all changes:

  • remove redundant zero initialization in RewardState:
    TestRewardStateOf — confirms slice length is correct after construction

  • cache CurrentTick per warmup iteration:
    TestCanonicalWarmup_1 — verifies per-tier reward accuracy one block at a time

  • cache raw reward at warmup boundary to reduce ReverseIterate calls:
    TestCanonicalWarmup_2 — AssertEmulatedRewardOf is called once after 40 blocks, exercising the startRaw carry-forward across all 4 warmup boundaries in a single rewardPerWarmup call

  • TestHistoricalTickDensity_DoesNotChangeRewardCalculation — covers tick crossing combined with the warmup accumulator

Summary by CodeRabbit

  • Refactor
    • Optimized reward calculation logic in staking pools to improve performance through streamlined computation methods.

aronpark1007 and others added 3 commits April 22, 2026 14:58
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

coderabbitai Bot commented Apr 22, 2026

Walkthrough

Modified reward calculation logic in reward_calculation_pool.gno: removed explicit zero-initialization loops and replaced per-segment reward computation with incremental computation that maintains state across warmup segments, performing subtraction at the raw-reward level.

Changes

Cohort: Reward Calculation Optimization
File(s): contract/r/gnoswap/staker/v1/reward_calculation_pool.gno
Summary: Refactored the rewardPerWarmup method to compute rewards incrementally across warmup segments rather than per-segment; removed explicit initialization loops; moved the subtraction operation to the raw-reward level, before liquidity scaling.
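The incremental computation described above can be sketched as follows. This is an illustrative Go sketch under assumed names: rewardPerWarmup's real signature, the bounds slice, and the rawRewardAt callback are stand-ins for the staker's actual API. Each segment's reward is the raw-reward delta across its boundaries, so querying the raw reward once per boundary and reusing the end value as the next segment's start basis roughly halves the boundary queries, with the subtraction done at the raw-reward level before any liquidity scaling.

```go
package main

import "fmt"

// rewardPerWarmup computes one reward per warmup segment as the
// difference of raw rewards at the segment's boundaries, carrying
// each boundary's value forward instead of re-querying it.
func rewardPerWarmup(bounds []int64, rawRewardAt func(int64) int64) []int64 {
	rewards := make([]int64, 0, len(bounds)-1)
	startRaw := rawRewardAt(bounds[0]) // queried once, then carried forward
	for _, end := range bounds[1:] {
		endRaw := rawRewardAt(end)
		rewards = append(rewards, endRaw-startRaw) // delta at raw level
		startRaw = endRaw // next segment starts where this one ended
	}
	return rewards
}

func main() {
	// With a linear raw-reward curve, each segment's reward is
	// proportional to its width: segments [0,10] and [10,30].
	fmt.Println(rewardPerWarmup([]int64{0, 10, 30}, func(t int64) int64 { return 2 * t }))
	// prints [20 40]
}
```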

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: 5 passed

  • Title check (✅ Passed): The title directly describes the main optimization: reducing computation in the warmup loop's reward calculation logic through caching and elimination of redundant operations.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage; skipping the check.
  • Linked Issues check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.


