fix(staker): carry forward pool tier dust #1269
Conversation
Walkthrough

Adds persistent per-tier "left amount" storage to the staker store and updates reward caching to be interval-aware across halving boundaries, carrying and updating per-tier dust (tierLeftAmount) during reward application.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    actor Caller
    participant Staker
    participant RewardCalc as RewardCalc
    participant Store
    Caller->>Staker: emissionCacheUpdateHook(currentTime)
    Staker->>RewardCalc: applyCacheToAllPools(pools, startTime=currentTime, currentTime, emission)
    loop per pool
        RewardCalc->>RewardCalc: cacheRewardForPool(pool, startTime, currentTime, ...)
        loop per halving interval
            RewardCalc->>RewardCalc: applyCacheToPool(intervalStart, intervalEnd, tierLeftAmount*)
            RewardCalc->>RewardCalc: calculatePoolRewardForInterval(...)
            RewardCalc->>RewardCalc: computeTierRewardsForInterval(tierLeftAmount*)
            Note over RewardCalc: modify tierLeftAmount* (carryover/dust)
        end
    end
    RewardCalc->>Store: SetPoolTierLeftAmount(updatedTierLeftAmount)
    Store-->>RewardCalc: ok
    RewardCalc-->>Staker: done
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
contract/r/gnoswap/staker/v1/reward_calculation_pool_tier_test.gno (1)

626-765: ⚠️ Potential issue | 🟡 Minor
Add a regression for repeated lazy pool caching with carryover.

TestCacheRewardForPool currently checks cache presence/final reward, but it does not call cacheRewardForPool twice across dust-producing intervals. That misses the single-pool path losing tier dust between calls. A focused case like count 3, emission 100, first cache [100,110), second cache [110,111) should assert the second reward includes the carried 10 dust and pays 36, not 33.
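The arithmetic behind that case can be sketched as follows. The function name and signature below are modeled on the diff's calculatePoolRewardForInterval but are an assumption, not the actual contract code:

```go
package main

import "fmt"

// poolRewardForInterval sketches the per-interval math described above.
// Names and signature are assumptions modeled on the diff, not the
// actual contract implementation.
func poolRewardForInterval(emission, tierRatio, tierCount, duration, carryover int64) (int64, int64) {
	tierReward := emission * tierRatio / 100
	tierRewardForInterval := tierReward*duration + carryover
	poolReward := tierRewardForInterval / (tierCount * duration)
	distributed := poolReward * tierCount * duration
	return poolReward, tierRewardForInterval - distributed
}

func main() {
	// count 3, emission 100, tier ratio 100%; first cache [100,110): duration 10.
	r1, dust := poolRewardForInterval(100, 100, 3, 10, 0)
	fmt.Println(r1, dust) // 33 10: each pool gets 33, 10 units of dust carry forward

	// second cache [110,111): duration 1; with the carried dust the pool gets 36.
	r2, dust2 := poolRewardForInterval(100, 100, 3, 1, dust)
	fmt.Println(r2, dust2) // 36 2
}
```

Without the carried dust, the second interval would pay 100/3 = 33, which is the regression the test should catch.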
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a6fe8524-9da8-43f3-b6a6-4fb542389542
⛔ Files ignored due to path filters (1)

- contract/r/scenario/staker/tier_dust_carry_forward_filetest.gno is excluded by !**/*filetest.gno
📒 Files selected for processing (10)

- contract/r/gnoswap/staker/store.gno
- contract/r/gnoswap/staker/store_test.gno
- contract/r/gnoswap/staker/types.gno
- contract/r/gnoswap/staker/v1/_mock_test.gno
- contract/r/gnoswap/staker/v1/init.gno
- contract/r/gnoswap/staker/v1/instance.gno
- contract/r/gnoswap/staker/v1/reward_calculation_pool_tier.gno
- contract/r/gnoswap/staker/v1/reward_calculation_pool_tier_test.gno
- contract/r/gnoswap/staker/v1/staker.gno
- contract/r/gnoswap/test/fuzz/_mock_test.gno
```diff
  localTierLeftAmount := self.tierLeftAmount

  if len(halvingTimestamps) == 0 {
      // No emission change within the range => use current emission.
-     self.applyCacheToPool(poolResolver, tierNum, currentTimestamp, self.currentEmission)
+     self.applyCacheToPool(poolResolver, tierNum, lastTimestamp, currentTimestamp, self.currentEmission, &localTierLeftAmount)
      return
  }

  // Apply caching at every halving boundary.
- currentEmission := int64(0)
+ currentEmission := self.currentEmission
  for i, hvTimestamp := range halvingTimestamps {
      currentEmission = halvingEmissions[i]
-     self.applyCacheToPool(poolResolver, tierNum, hvTimestamp, currentEmission)
+     self.applyCacheToPool(poolResolver, tierNum, lastTimestamp, hvTimestamp, currentEmission, &localTierLeftAmount)
      lastTimestamp = hvTimestamp
  }

  // Remaining range [lastTimestamp, currentTimestamp).
- self.applyCacheToPool(poolResolver, tierNum, currentTimestamp, currentEmission)
+ self.applyCacheToPool(poolResolver, tierNum, lastTimestamp, currentTimestamp, currentEmission, &localTierLeftAmount)
```
Persist or otherwise track carryover from the lazy single-pool path.
localTierLeftAmount is updated by applyCacheToPool, then discarded. Repeated cacheRewardForPool calls therefore restart from stale self.tierLeftAmount; for example, a [100,110) interval can compute carry 10, but [110,111) will not reuse it and pays 33 instead of 36.
Please thread this updated carryover into the same persistent cursor model used by applyCacheToAllPools, or maintain equivalent per-pool/per-tier carryover state for lazy caching.
junghoon-vans
left a comment
[review-agent] PR #1269 review — fix(staker): carry forward pool tier dust
Overall
The core fix is correct and well-motivated. Two pre-existing bugs (lastTimestamp not advanced in the halving loop; currentEmission initialized to 0 instead of self.currentEmission in cacheRewardForPool) are fixed as part of this work. The new calculatePoolRewardForInterval math is sound and the scenario filetest confirms the fix holds within a ≤2-unit tolerance between frequent and delayed collectors.
Findings are posted as inline comments where possible. One cross-cutting note is included here:
[note] Accumulated dust is preserved when a tier becomes empty and survives pool re-entry
computeTierRewardsForInterval skips a tier when tierCount == 0 (continues without touching tierLeftAmount[tier]). This means if all pools leave a tier, the orphaned dust persists in tierLeftAmount[tier]. When a new pool joins that tier later, it inherits the old dust as a one-time bonus on the first reward interval.
The magnitude is bounded (dust is always less than tierCount × intervalDuration emission units at the time of last computation), so this cannot cause a large accounting error. But it is semantically surprising: a freshly-joined pool receives a carry-over from a prior, unrelated pool set.
Recommendation: Reset tierLeftAmount[tier] to zero inside changeTier when self.counts[currentTier] reaches 0 after decrement.
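A minimal sketch of that reset, assuming changeTier decrements self.counts[currentTier]; the type, field, and method names here are hypothetical stand-ins taken from the comment above, not verified against the source:

```go
package main

import "fmt"

const allTierCount = 4 // assumed; the contract uses sr.AllTierCount

type rewardState struct {
	counts         [allTierCount]int64
	tierLeftAmount [allTierCount]int64
}

// leaveTier is a hypothetical stand-in for the decrement path of changeTier:
// when the last pool leaves a tier, the orphaned dust is cleared so a pool
// joining the tier later does not inherit it as a one-time bonus.
func (s *rewardState) leaveTier(tier int) {
	s.counts[tier]--
	if s.counts[tier] == 0 {
		s.tierLeftAmount[tier] = 0
	}
}

func main() {
	s := &rewardState{}
	s.counts[2] = 2
	s.tierLeftAmount[2] = 7 // leftover dust from a previous interval
	s.leaveTier(2)
	fmt.Println(s.tierLeftAmount[2]) // 7: dust kept while the tier still has a pool
	s.leaveTier(2)
	fmt.Println(s.tierLeftAmount[2]) // 0: cleared once the tier empties
}
```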
```go
// Determine halving boundaries since the pool's last cached reward timestamp.
halvingTimestamps, halvingEmissions := self.getHalvingBlocksInRange(lastTimestamp, currentTimestamp)
poolResolver := NewPoolResolver(pool)
localTierLeftAmount := self.tierLeftAmount
```
[review-agent] [note] cacheRewardForPool discards its updated dust — per-pool path diverges from global state
localTierLeftAmount is a snapshot of self.tierLeftAmount at the start of this call. applyCacheToPool updates *tierLeftAmount locally (via pointer), but after cacheRewardForPool returns, the updated carryover is never written back to self.tierLeftAmount.
This means every subsequent cacheRewardForPool call (from CollectReward) starts from the same stale dust level as the last full cacheReward. In contrast, applyCacheToAllPools does persist via self.tierLeftAmount = nextTierLeftAmount.
In practice each pool still receives the correct 1/N share (the formula divides by tierCount × duration), so there is no per-pool over-distribution. The scenario test bounds the aggregate divergence at ≤2 units. However, the behaviour is undocumented.
Recommendation: Add a comment here stating that localTierLeftAmount is intentionally not written back (optimization — full dust persistence only happens on cacheReward). If exact dust fidelity in the per-pool path becomes a requirement, the updated array must be persisted after the call.
```go
}

func calculatePoolRewardForInterval(emission int64, tierRatio int64, tierCount int64, intervalDuration int64, carryover int64) (int64, int64) {
	if emission < 0 || tierRatio < 0 || tierCount < 0 || intervalDuration < 0 || carryover < 0 {
```
[review-agent] [note] carryover < 0 guard is unreachable under normal operation
carryover enters this function as the output of safeSubInt64(tierRewardForInterval, distributedReward). Since distributedReward = poolReward × tierCount × intervalDuration and poolReward is integer-truncated from tierRewardForInterval / (tierCount × intervalDuration), we always have distributedReward ≤ tierRewardForInterval, guaranteeing carryover ≥ 0.
The guard is defensive and does no harm, but may mislead a future reader into treating negative carryover as a realistic scenario.
Recommendation: Add a comment clarifying that carryover is non-negative by construction (integer division remainder), or remove the guard if it is considered noise.
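The non-negativity argument is just the truncated-division remainder bound and can be checked mechanically (standalone illustration, not contract code):

```go
package main

import "fmt"

// carryFor returns the dust left after integer-distributing total across
// d equal slots: total - (total/d)*d, i.e. the truncated-division remainder.
// For any total >= 0 and d > 0 this is always in [0, d).
func carryFor(total, d int64) int64 {
	distributed := (total / d) * d // distributed never exceeds total
	return total - distributed
}

func main() {
	d := int64(3 * 7) // e.g. tierCount × intervalDuration
	for _, total := range []int64{0, 1, 109, 110, 999} {
		fmt.Println(total, carryFor(total, d)) // carry always in [0, 21)
	}
}
```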
```go
}

tierReward := safeMulDivInt64(emission, tierRatio, 100)
tierRewardForInterval := safeAddInt64(safeMulInt64(tierReward, intervalDuration), carryover)
```
[review-agent] [note] safeMulInt64(tierReward, intervalDuration) can panic if emission × interval grows too large
tierReward can be up to the full emission (100% ratio, single-pool tier). intervalDuration is the number of seconds elapsed since the last full cacheReward call. If the contract goes a long time without a tier operation, stake/unstake, or emission hook (all of which trigger cacheReward), intervalDuration can become very large.
safeMulInt64 panics on overflow, so the failure mode is a contract-level panic when the next CollectReward or tier operation is attempted — effectively halting reward operations for all users until the overflow condition is resolved.
With typical GNS emission values (≤10^12/sec) and daily intervals, the product is ~8.6×10^16 — well within int64 range. Risk only materialises at high emission values combined with week+ intervals.
Recommendation: Document the expected maximum safe interval duration given the protocol's peak emission rate. Consider periodic caching from the emission hook to prevent the gap from growing unbounded.
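The safe bound follows directly from the overflow condition: tierReward × intervalDuration must stay below MaxInt64, so at a 100% tier ratio the limit is MaxInt64 / emission. A quick sketch with the peak rate cited above (illustrative numbers only):

```go
package main

import (
	"fmt"
	"math"
)

// maxSafeInterval returns the longest interval (seconds) for which
// tierReward × intervalDuration still fits in int64 at a given per-second
// emission, assuming the worst case of a 100% tier ratio.
func maxSafeInterval(emission int64) int64 {
	return math.MaxInt64 / emission
}

func main() {
	secs := maxSafeInterval(1_000_000_000_000) // assumed peak: 10^12 units/sec
	fmt.Println(secs)         // 9223372 seconds of headroom
	fmt.Println(secs / 86400) // 106: roughly 106 days before safeMulInt64 would overflow
}
```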
```go
}

// PoolTierLeftAmount
func (s *MockStakerStore) HasPoolTierLeftAmountStoreKey() bool {
```
[review-agent] [note] HasPoolTierLeftAmountStoreKey uses value-equality instead of key-presence semantics
```go
func (s *MockStakerStore) HasPoolTierLeftAmountStoreKey() bool {
	return s.poolTierLeftAmount != [sr.AllTierCount]int64{}
}
```

The real stakerStore returns true once the key has been written — even if the stored value is all-zeros. This mock returns false for the zero-valued initial state, causing initStoreData's guard to re-execute SetPoolTierLeftAmount with the same zero value. The write is idempotent so no data corruption occurs, but the mock diverges from real-store semantics.
This is the same pattern used by the other Has* mock methods (HasPoolTierCountsStoreKey, etc.), so it's a systemic mock design choice rather than an isolated mistake.
Recommendation: Store an explicit initialized bool flag per key (or a map[string]bool of initialized keys) in the mock to match real-store key-presence semantics. At minimum, add a comment on each Has method explaining the zero-equals-absent assumption so future test authors are aware.
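One way to sketch that key-presence mock; the struct shape follows the snippet above, while the initialized map and constructor are hypothetical additions:

```go
package main

import "fmt"

const allTierCount = 4 // assumed tier count; the contract uses sr.AllTierCount

// MockStakerStore sketch with an explicit presence flag per key, so Has*
// reflects "key was written" rather than "value is non-zero".
type MockStakerStore struct {
	poolTierLeftAmount [allTierCount]int64
	initialized        map[string]bool // hypothetical presence tracking
}

func NewMockStakerStore() *MockStakerStore {
	return &MockStakerStore{initialized: make(map[string]bool)}
}

func (s *MockStakerStore) SetPoolTierLeftAmount(v [allTierCount]int64) {
	s.poolTierLeftAmount = v
	s.initialized["poolTierLeftAmount"] = true
}

func (s *MockStakerStore) HasPoolTierLeftAmountStoreKey() bool {
	return s.initialized["poolTierLeftAmount"]
}

func main() {
	s := NewMockStakerStore()
	fmt.Println(s.HasPoolTierLeftAmountStoreKey()) // false: nothing written yet
	s.SetPoolTierLeftAmount([allTierCount]int64{}) // writing zeros still marks presence
	fmt.Println(s.HasPoolTierLeftAmountStoreKey()) // true: key-presence, not value-equality
}
```

With this shape, initStoreData's guard would skip the redundant re-write even when the stored value is all-zeros.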
♻️ Duplicate comments (1)
contract/r/gnoswap/staker/v1/reward_calculation_pool_tier.gno (1)
263-276: ⚠️ Potential issue | 🟠 Major
Halving-emission bug still applies to the pre-halving interval.

Both loops still assign halvingEmissions[i] (the emission that activates at hvTimestamp) and then apply it to [lastTimestamp, hvTimestamp) — the interval before the halving. That interval should be priced at the previous emission rate (self.currentEmission / the currentEmission value prior to reassignment). This underpays the pre-halving window and overpays the first post-halving window by one interval, and none of the new carry-forward tests detect it because they only inspect the final cached reward.

The previous review's proposed fix still applies verbatim:

🐛 Proposed fix

In cacheReward (lines 263–271):

```diff
 for i, hvTimestamp := range halvingTimestamps {
-    emission := halvingEmissions[i]
     // caching: [lastTimestamp, hvTimestamp)
-    self.applyCacheToAllPools(pools, lastTimestamp, hvTimestamp, emission)
+    self.applyCacheToAllPools(pools, lastTimestamp, hvTimestamp, self.currentEmission)
     // halve emissions when halvingBlock is reached
-    self.currentEmission = emission
+    self.currentEmission = halvingEmissions[i]
     lastTimestamp = hvTimestamp
 }
```

In cacheRewardForPool (lines 329–333):

```diff
 for i, hvTimestamp := range halvingTimestamps {
-    currentEmission = halvingEmissions[i]
     self.applyCacheToPool(poolResolver, tierNum, lastTimestamp, hvTimestamp, currentEmission, &localTierLeftAmount)
+    currentEmission = halvingEmissions[i]
     lastTimestamp = hvTimestamp
 }
```

Please also add a test asserting the reward cached at an intra-range timestamp (e.g. the halving boundary) uses the pre-halving emission — the current "cache across halving boundaries" case only checks the last cache entry, so the bug is invisible to it.

Also applies to: 329-333
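The one-interval mispricing can be quantified with a small simulation of the loop shape discussed above (a standalone sketch under assumed numbers, not the contract code):

```go
package main

import "fmt"

// totalReward simulates the halving loop: the fixed variant prices each
// [last, hv) interval at the emission in effect before the halving at hv,
// then switches rates; the buggy variant switches first, so the new rate
// is applied to the pre-halving interval.
func totalReward(halvings, emissions []int64, start, end, current int64, buggy bool) int64 {
	total := int64(0)
	last := start
	for i, hv := range halvings {
		if buggy {
			current = emissions[i] // bug: new rate applied to [last, hv)
		}
		total += current * (hv - last)
		if !buggy {
			current = emissions[i] // fix: halve only after caching [last, hv)
		}
		last = hv
	}
	total += current * (end - last) // remaining range [last, end)
	return total
}

func main() {
	// Assumed scenario: emission 100/sec halving to 50 at t=110, range [100, 120).
	fmt.Println(totalReward([]int64{110}, []int64{50}, 100, 120, 100, false)) // 1500 = 100*10 + 50*10
	fmt.Println(totalReward([]int64{110}, []int64{50}, 100, 120, 100, true))  // 1000: pre-halving window underpaid
}
```

A test inspecting only the final cached total at the end of the full schedule would see the same grand total in some setups, which is why the assertion needs to target the per-interval cache entries.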
🧹 Nitpick comments (1)
contract/r/gnoswap/staker/v1/reward_calculation_pool_tier.gno (1)
364-372: Drop the pre-allocated map in applyCacheToAllPools.

tierRewards := make(map[uint64]int64, AllTierCount-1) is overwritten immediately in both branches of the if/else below, so the allocation is wasted. A plain var declaration is clearer:

♻️ Proposed nit

```diff
-counts := self.CurrentAllTierCounts()
-tierRewards := make(map[uint64]int64, AllTierCount-1)
-nextTierLeftAmount := self.tierLeftAmount
-if currentTimestamp <= startTimestamp {
-    tierRewards = self.computeTierRewards(emissionInThisInterval)
-} else {
-    tierRewards, nextTierLeftAmount = self.computeTierRewardsForInterval(startTimestamp, currentTimestamp, emissionInThisInterval, self.tierLeftAmount)
-}
+counts := self.CurrentAllTierCounts()
+var tierRewards map[uint64]int64
+nextTierLeftAmount := self.tierLeftAmount
+if currentTimestamp <= startTimestamp {
+    tierRewards = self.computeTierRewards(emissionInThisInterval)
+} else {
+    tierRewards, nextTierLeftAmount = self.computeTierRewardsForInterval(startTimestamp, currentTimestamp, emissionInThisInterval, self.tierLeftAmount)
+}
```
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a0eaaf3a-8e74-4f6f-9a70-b512826cc298
⛔ Files ignored due to path filters (8)

- contract/r/scenario/staker/more_01_single_position_for_each_warmup_tier_total_4_position_internal_only_filetest.gno is excluded by !**/*filetest.gno
- contract/r/scenario/staker/more_04_positions_with_different_liquidity_and_in_range_change_by_swap_filetest.gno is excluded by !**/*filetest.gno
- contract/r/scenario/staker/pool_tier_removal_readd_check_rewards_filetest.gno is excluded by !**/*filetest.gno
- contract/r/scenario/staker/staker_emission_cache_sync_issue_filetest.gno is excluded by !**/*filetest.gno
- contract/r/scenario/staker/staker_short_warmup_period_internal_04_remove_tier_filetest.gno is excluded by !**/*filetest.gno
- contract/r/scenario/staker/staker_short_warmup_period_internal_05_position_in_out_range_changed_by_swap_filetest.gno is excluded by !**/*filetest.gno
- contract/r/scenario/staker/tier_dust_carry_forward_filetest.gno is excluded by !**/*filetest.gno
- contract/r/scenario/staker/tracking_unclaimed_period_validation_filetest.gno is excluded by !**/*filetest.gno
📒 Files selected for processing (5)

- contract/r/gnoswap/staker/v1/_mock_test.gno
- contract/r/gnoswap/staker/v1/init_test.gno
- contract/r/gnoswap/staker/v1/reward_calculation_pool_tier.gno
- contract/r/gnoswap/staker/v1/reward_calculation_pool_tier_test.gno
- contract/r/gnoswap/test/fuzz/_mock_test.gno
🚧 Files skipped from review as they are similar to previous changes (1)
- contract/r/gnoswap/test/fuzz/_mock_test.gno



Summary
PoolTierLeftAmount to match local contract naming conventions

Validation
Summary by CodeRabbit
New Features
Bug Fixes