
fix(staker): carry forward pool tier dust #1269

Open
junghoon-vans wants to merge 5 commits into main from GSW-2625-tier-dust-carry-forward

Conversation


@junghoon-vans junghoon-vans commented Apr 22, 2026

Summary

  • carry forward tier-level internal reward dust instead of dropping it during pool-tier reward division
  • rename the persisted staker store surface to PoolTierLeftAmount to match local contract naming conventions
  • add focused staker reward tests proving left-amount accumulation and later distribution
  • add a staker scenario filetest for repeated short collections versus delayed collection in the same tier, plus fuzz mock support for the renamed store API
  • refresh affected staker scenario golden outputs to reflect the carry-forward behavior across the full scenario package

Validation

  • make test PKG=gno.land/r/gnoswap/staker
  • make test PKG=gno.land/r/gnoswap/staker/v1
  • make test PKG=gno.land/r/gnoswap/test/fuzz
  • make test PKG=gno.land/r/gnoswap/scenario/staker

Summary by CodeRabbit

  • New Features

    • Added per-tier left amount tracking for enhanced pool tier management.
  • Bug Fixes

    • Improved reward caching with interval-aware calculation and proper carryover handling across halving boundaries.
    • Fixed remainder distribution during pool tier transitions.


coderabbitai Bot commented Apr 22, 2026

Walkthrough

Adds persistent per-tier "left amount" storage to the staker store and updates reward caching to be interval-aware across halving boundaries, carrying and updating per-tier dust (tierLeftAmount) during reward application.

Changes

Cohort / File(s) Summary
Store Interface & Core Implementation
contract/r/gnoswap/staker/store.gno, contract/r/gnoswap/staker/types.gno, contract/r/gnoswap/staker/store_test.gno
Added StoreKeyPoolTierLeftAmount and three store methods (HasPoolTierLeftAmountStoreKey, GetPoolTierLeftAmount, SetPoolTierLeftAmount) plus tests verifying init/panic/round-trip behavior.
Mock Store Implementations
contract/r/gnoswap/staker/v1/_mock_test.gno, contract/r/gnoswap/test/fuzz/_mock_test.gno
Extended mocks with poolTierLeftAmount and hasPoolTierLeftAmount fields and matching accessor/mutator methods.
Store Integration & Initialization
contract/r/gnoswap/staker/v1/init.gno, contract/r/gnoswap/staker/v1/instance.gno
Initialize persisted tierLeftAmount when absent; getPoolTier reads tierLeftAmount; updatePoolTier persists tierLeftAmount.
Reward Calculation Logic
contract/r/gnoswap/staker/v1/reward_calculation_pool_tier.gno, contract/r/gnoswap/staker/v1/reward_calculation_pool_tier_test.gno
Added PoolTier.tierLeftAmount. Rewrote caching functions to be interval-aware and propagate/compute per-tier carryover across halving boundaries; added helpers calculatePoolRewardForInterval and computeTierRewardsForInterval; updated and added tests for interval behavior and carryover.
Emission Cache Integration
contract/r/gnoswap/staker/v1/staker.gno
Updated emissionCacheUpdateHook to pass explicit start and current timestamps into applyCacheToAllPools.

Sequence Diagram

sequenceDiagram
    actor Caller
    participant Staker
    participant RewardCalc as RewardCalc
    participant Store

    Caller->>Staker: emissionCacheUpdateHook(currentTime)
    Staker->>RewardCalc: applyCacheToAllPools(pools, startTime=currentTime, currentTime, emission)

    loop per pool
        RewardCalc->>RewardCalc: cacheRewardForPool(pool, startTime, currentTime, ...)
        loop per halving interval
            RewardCalc->>RewardCalc: applyCacheToPool(intervalStart, intervalEnd, tierLeftAmount*)
            RewardCalc->>RewardCalc: calculatePoolRewardForInterval(...)
            RewardCalc->>RewardCalc: computeTierRewardsForInterval(tierLeftAmount*)
            Note over RewardCalc: modify tierLeftAmount* (carryover/dust)
        end
    end

    RewardCalc->>Store: SetPoolTierLeftAmount(updatedTierLeftAmount)
    Store-->>RewardCalc: ok
    RewardCalc-->>Staker: done

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'fix(staker): carry forward pool tier dust' accurately describes the main change—implementing dust carry-forward logic for pool-tier reward calculations.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.


@junghoon-vans junghoon-vans self-assigned this Apr 22, 2026
@junghoon-vans junghoon-vans added enhancement New feature or request and removed enhancement New feature or request labels Apr 22, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
contract/r/gnoswap/staker/v1/reward_calculation_pool_tier_test.gno (1)

626-765: ⚠️ Potential issue | 🟡 Minor

Add a regression for repeated lazy pool caching with carryover.

TestCacheRewardForPool currently checks cache presence/final reward, but it does not call cacheRewardForPool twice across dust-producing intervals. That misses the single-pool path losing tier dust between calls.

A focused case like count 3, emission 100, first cache [100,110), second cache [110,111) should assert the second reward includes the carried 10 dust and pays 36, not 33.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a6fe8524-9da8-43f3-b6a6-4fb542389542

📥 Commits

Reviewing files that changed from the base of the PR and between 7931e1a and 143c630.

⛔ Files ignored due to path filters (1)
  • contract/r/scenario/staker/tier_dust_carry_forward_filetest.gno is excluded by !**/*filetest.gno
📒 Files selected for processing (10)
  • contract/r/gnoswap/staker/store.gno
  • contract/r/gnoswap/staker/store_test.gno
  • contract/r/gnoswap/staker/types.gno
  • contract/r/gnoswap/staker/v1/_mock_test.gno
  • contract/r/gnoswap/staker/v1/init.gno
  • contract/r/gnoswap/staker/v1/instance.gno
  • contract/r/gnoswap/staker/v1/reward_calculation_pool_tier.gno
  • contract/r/gnoswap/staker/v1/reward_calculation_pool_tier_test.gno
  • contract/r/gnoswap/staker/v1/staker.gno
  • contract/r/gnoswap/test/fuzz/_mock_test.gno

Comment thread contract/r/gnoswap/staker/v1/reward_calculation_pool_tier.gno
Comment on lines +311 to +328
+localTierLeftAmount := self.tierLeftAmount

 if len(halvingTimestamps) == 0 {
 	// No emission change within the range => use current emission.
-	self.applyCacheToPool(poolResolver, tierNum, currentTimestamp, self.currentEmission)
+	self.applyCacheToPool(poolResolver, tierNum, lastTimestamp, currentTimestamp, self.currentEmission, &localTierLeftAmount)
 	return
 }

 // Apply caching at every halving boundary.
-currentEmission := int64(0)
+currentEmission := self.currentEmission
 for i, hvTimestamp := range halvingTimestamps {
 	currentEmission = halvingEmissions[i]
-	self.applyCacheToPool(poolResolver, tierNum, hvTimestamp, currentEmission)
+	self.applyCacheToPool(poolResolver, tierNum, lastTimestamp, hvTimestamp, currentEmission, &localTierLeftAmount)
 	lastTimestamp = hvTimestamp
 }

 // Remaining range [lastTimestamp, currentTimestamp).
-self.applyCacheToPool(poolResolver, tierNum, currentTimestamp, currentEmission)
+self.applyCacheToPool(poolResolver, tierNum, lastTimestamp, currentTimestamp, currentEmission, &localTierLeftAmount)

⚠️ Potential issue | 🟠 Major

Persist or otherwise track carryover from the lazy single-pool path.

localTierLeftAmount is updated by applyCacheToPool, then discarded. Repeated cacheRewardForPool calls therefore restart from stale self.tierLeftAmount; for example, a [100,110) interval can compute carry 10, but [110,111) will not reuse it and pays 33 instead of 36.

Please thread this updated carryover into the same persistent cursor model used by applyCacheToAllPools, or maintain equivalent per-pool/per-tier carryover state for lazy caching.
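The dust arithmetic behind the 33-vs-36 example can be reproduced standalone. The following is a hypothetical Go sketch of the interval division this comment describes (the function name, flat parameter list, and the 100% tier ratio are illustrative, not the contract's actual API):

```go
package main

import "fmt"

// calcPoolRewardForInterval divides a tier's reward for an interval across
// (tierCount * intervalDuration) per-second pool slots, returning the
// truncated per-pool reward and the remainder to carry forward as dust.
func calcPoolRewardForInterval(emission, tierRatio, tierCount, intervalDuration, carryover int64) (poolReward, nextCarry int64) {
	tierReward := emission * tierRatio / 100
	total := tierReward*intervalDuration + carryover
	poolReward = total / (tierCount * intervalDuration)
	nextCarry = total - poolReward*tierCount*intervalDuration
	return
}

func main() {
	// Interval [100,110): 10s, 3 pools, emission 100/s, 100% tier ratio.
	r1, c1 := calcPoolRewardForInterval(100, 100, 3, 10, 0)
	fmt.Println(r1, c1) // 33 10
	// Interval [110,111): 1s; the carried 10 tops up the reward.
	r2, c2 := calcPoolRewardForInterval(100, 100, 3, 1, c1)
	fmt.Println(r2, c2) // 36 2
}
```

The first call truncates 1000/30 to 33 and leaves 10 behind; only by threading that 10 into the second call does each pool receive 36 rather than 33, which is the exact case a persisted carryover cursor must cover.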


@junghoon-vans junghoon-vans left a comment


[review-agent] PR #1269 review — fix(staker): carry forward pool tier dust

Overall

The core fix is correct and well-motivated. Two pre-existing bugs (lastTimestamp not advanced in the halving loop; currentEmission initialized to 0 instead of self.currentEmission in cacheRewardForPool) are fixed as part of this work. The new calculatePoolRewardForInterval math is sound and the scenario filetest confirms the fix holds within a ≤2-unit tolerance between frequent and delayed collectors.

Findings are posted as inline comments where possible. One cross-cutting note is included here:


[note] Accumulated dust is preserved when a tier becomes empty and survives pool re-entry

computeTierRewardsForInterval skips a tier when tierCount == 0 (continues without touching tierLeftAmount[tier]). This means if all pools leave a tier, the orphaned dust persists in tierLeftAmount[tier]. When a new pool joins that tier later, it inherits the old dust as a one-time bonus on the first reward interval.

The magnitude is bounded (dust is always less than tierCount × intervalDuration emission units at the time of last computation), so this cannot cause a large accounting error. But it is semantically surprising: a freshly-joined pool receives a carry-over from a prior, unrelated pool set.

Recommendation: Reset tierLeftAmount[tier] to zero inside changeTier when self.counts[currentTier] reaches 0 after decrement.
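The recommended reset can be sketched as follows. poolTier, counts, and changeTier here are simplified stand-ins for the contract's types, assuming only that the current tier's count is decremented before the dust check:

```go
package main

import "fmt"

// Simplified model of the review's recommendation: when the last pool
// leaves a tier, zero that tier's accumulated dust so a later joiner
// does not inherit a one-time bonus from an unrelated pool set.
type poolTier struct {
	counts         map[uint64]int64
	tierLeftAmount map[uint64]int64
}

func (p *poolTier) changeTier(currentTier, newTier uint64) {
	p.counts[currentTier]--
	if p.counts[currentTier] == 0 {
		p.tierLeftAmount[currentTier] = 0 // orphaned dust is discarded
	}
	p.counts[newTier]++
}

func main() {
	p := &poolTier{
		counts:         map[uint64]int64{1: 1, 2: 0},
		tierLeftAmount: map[uint64]int64{1: 7, 2: 0},
	}
	p.changeTier(1, 2) // last tier-1 pool moves to tier 2
	fmt.Println(p.tierLeftAmount[1], p.counts[2]) // 0 1
}
```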

// Determine halving boundaries since the pool's last cached reward timestamp.
halvingTimestamps, halvingEmissions := self.getHalvingBlocksInRange(lastTimestamp, currentTimestamp)
poolResolver := NewPoolResolver(pool)
localTierLeftAmount := self.tierLeftAmount

[review-agent] [note] cacheRewardForPool discards its updated dust — per-pool path diverges from global state

localTierLeftAmount is a snapshot of self.tierLeftAmount at the start of this call. applyCacheToPool updates *tierLeftAmount locally (via pointer), but after cacheRewardForPool returns, the updated carryover is never written back to self.tierLeftAmount.

This means every subsequent cacheRewardForPool call (from CollectReward) starts from the same stale dust level as the last full cacheReward. In contrast, applyCacheToAllPools does persist via self.tierLeftAmount = nextTierLeftAmount.

In practice each pool still receives the correct 1/N share (the formula divides by tierCount × duration), so there is no per-pool over-distribution. The scenario test bounds the aggregate divergence at ≤2 units. However, the behaviour is undocumented.

Recommendation: Add a comment here stating that localTierLeftAmount is intentionally not written back (optimization — full dust persistence only happens on cacheReward). If exact dust fidelity in the per-pool path becomes a requirement, the updated array must be persisted after the call.

}

func calculatePoolRewardForInterval(emission int64, tierRatio int64, tierCount int64, intervalDuration int64, carryover int64) (int64, int64) {
if emission < 0 || tierRatio < 0 || tierCount < 0 || intervalDuration < 0 || carryover < 0 {

[review-agent] [note] carryover < 0 guard is unreachable under normal operation

carryover enters this function as the output of safeSubInt64(tierRewardForInterval, distributedReward). Since distributedReward = poolReward × tierCount × intervalDuration and poolReward is integer-truncated from tierRewardForInterval / (tierCount × intervalDuration), we always have distributedReward ≤ tierRewardForInterval, guaranteeing carryover ≥ 0.

The guard is defensive and does no harm, but may mislead a future reader into treating negative carryover as a realistic scenario.

Recommendation: Add a comment clarifying that carryover is non-negative by construction (integer division remainder), or remove the guard if it is considered noise.

}

tierReward := safeMulDivInt64(emission, tierRatio, 100)
tierRewardForInterval := safeAddInt64(safeMulInt64(tierReward, intervalDuration), carryover)

[review-agent] [note] safeMulInt64(tierReward, intervalDuration) can panic if emission × interval grows too large

tierReward can be up to the full emission (100% ratio, single-pool tier). intervalDuration is the number of seconds elapsed since the last full cacheReward call. If the contract goes a long time without a tier operation, stake/unstake, or emission hook (all of which trigger cacheReward), intervalDuration can become very large.

safeMulInt64 panics on overflow, so the failure mode is a contract-level panic when the next CollectReward or tier operation is attempted — effectively halting reward operations for all users until the overflow condition is resolved.

With typical GNS emission values (≤10^12/sec) and daily intervals, the product is ~8.6×10^16 — well within int64 range. Risk only materialises at high emission values combined with week+ intervals.

Recommendation: Document the expected maximum safe interval duration given the protocol's peak emission rate. Consider periodic caching from the emission hook to prevent the gap from growing unbounded.
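As a rough illustration of the suggested documentation, the maximum safe interval before safeMulInt64 would overflow follows directly from math.MaxInt64. The 10^12-units/sec emission figure is this comment's assumed peak, not a protocol constant:

```go
package main

import (
	"fmt"
	"math"
)

// maxSafeInterval returns the largest interval (seconds) for which
// emissionPerSec * interval still fits in int64 (worst case: a 100%
// tier ratio, so tierReward equals the full emission).
func maxSafeInterval(emissionPerSec int64) int64 {
	return math.MaxInt64 / emissionPerSec
}

func main() {
	limit := maxSafeInterval(1_000_000_000_000) // assumed ~1e12 units/sec peak
	fmt.Println(limit, limit/86400)             // 9223372 106
}
```

At this assumed rate the product only overflows after roughly 106 days without a cacheReward trigger, consistent with the observation that risk materialises only with high emission values combined with week-plus gaps.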

}

// PoolTierLeftAmount
func (s *MockStakerStore) HasPoolTierLeftAmountStoreKey() bool {

[review-agent] [note] HasPoolTierLeftAmountStoreKey uses value-equality instead of key-presence semantics

func (s *MockStakerStore) HasPoolTierLeftAmountStoreKey() bool {
    return s.poolTierLeftAmount != [sr.AllTierCount]int64{}
}

The real stakerStore returns true once the key has been written — even if the stored value is all-zeros. This mock returns false for the zero-valued initial state, causing initStoreData's guard to re-execute SetPoolTierLeftAmount with the same zero value. The write is idempotent so no data corruption occurs, but the mock diverges from real-store semantics.

This is the same pattern used by the other Has* mock methods (HasPoolTierCountsStoreKey, etc.), so it's a systemic mock design choice rather than an isolated mistake.

Recommendation: Store an explicit initialized bool flag per key (or a map[string]bool of initialized keys) in the mock to match real-store key-presence semantics. At minimum, add a comment on each Has method explaining the zero-equals-absent assumption so future test authors are aware.
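A minimal sketch of the recommended presence-flag approach, assuming the array-valued mock field from the excerpt above (the array length is illustrative):

```go
package main

import "fmt"

const allTierCount = 4 // illustrative; mirrors sr.AllTierCount in the contract

// MockStakerStore tracks key presence with an explicit flag instead of
// comparing against the zero value, so an all-zeros write still reads
// back as "present" — matching real-store key-presence semantics.
type MockStakerStore struct {
	poolTierLeftAmount    [allTierCount]int64
	hasPoolTierLeftAmount bool // set on first write
}

func (s *MockStakerStore) SetPoolTierLeftAmount(v [allTierCount]int64) {
	s.poolTierLeftAmount = v
	s.hasPoolTierLeftAmount = true
}

func (s *MockStakerStore) HasPoolTierLeftAmountStoreKey() bool {
	return s.hasPoolTierLeftAmount
}

func main() {
	s := &MockStakerStore{}
	fmt.Println(s.HasPoolTierLeftAmountStoreKey()) // false: never written
	s.SetPoolTierLeftAmount([allTierCount]int64{}) // zero-valued write
	fmt.Println(s.HasPoolTierLeftAmountStoreKey()) // true: presence, not value equality
}
```

With value-equality semantics both calls would return false; the flag distinguishes "never written" from "written as zero".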


@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
contract/r/gnoswap/staker/v1/reward_calculation_pool_tier.gno (1)

263-276: ⚠️ Potential issue | 🟠 Major

Halving-emission bug still applies to the pre-halving interval.

Both loops still assign halvingEmissions[i] (the emission that activates at hvTimestamp) and then apply it to [lastTimestamp, hvTimestamp) — the interval before the halving. That interval should be priced at the previous emission rate (self.currentEmission / the currentEmission value prior to reassignment). This underpays the pre-halving window and overpays the first post-halving window by one interval, and none of the new carry-forward tests detect it because they only inspect the final cached reward.

The previous review’s proposed fix still applies verbatim:

🐛 Proposed fix

In cacheReward (lines 263–271):

 for i, hvTimestamp := range halvingTimestamps {
-	emission := halvingEmissions[i]
 	// caching: [lastTimestamp, hvTimestamp)
-	self.applyCacheToAllPools(pools, lastTimestamp, hvTimestamp, emission)
+	self.applyCacheToAllPools(pools, lastTimestamp, hvTimestamp, self.currentEmission)

 	// halve emissions when halvingBlock is reached
-	self.currentEmission = emission
+	self.currentEmission = halvingEmissions[i]
 	lastTimestamp = hvTimestamp
 }

In cacheRewardForPool (lines 329–333):

 for i, hvTimestamp := range halvingTimestamps {
-	currentEmission = halvingEmissions[i]
 	self.applyCacheToPool(poolResolver, tierNum, lastTimestamp, hvTimestamp, currentEmission, &localTierLeftAmount)
+	currentEmission = halvingEmissions[i]
 	lastTimestamp = hvTimestamp
 }

Please also add a test asserting the reward cached at an intra-range timestamp (e.g. the halving boundary) uses the pre-halving emission — the current cache across halving boundaries case only checks the last cache entry, so the bug is invisible to it.

Also applies to: 329-333

🧹 Nitpick comments (1)
contract/r/gnoswap/staker/v1/reward_calculation_pool_tier.gno (1)

364-372: Drop the pre-allocated map in applyCacheToAllPools.

tierRewards := make(map[uint64]int64, AllTierCount-1) is overwritten immediately in both branches of the if/else below, so the allocation is wasted. A plain var declaration is clearer:

♻️ Proposed nit
-	counts := self.CurrentAllTierCounts()
-	tierRewards := make(map[uint64]int64, AllTierCount-1)
-	nextTierLeftAmount := self.tierLeftAmount
-	if currentTimestamp <= startTimestamp {
-		tierRewards = self.computeTierRewards(emissionInThisInterval)
-	} else {
-		tierRewards, nextTierLeftAmount = self.computeTierRewardsForInterval(startTimestamp, currentTimestamp, emissionInThisInterval, self.tierLeftAmount)
-	}
+	counts := self.CurrentAllTierCounts()
+	var tierRewards map[uint64]int64
+	nextTierLeftAmount := self.tierLeftAmount
+	if currentTimestamp <= startTimestamp {
+		tierRewards = self.computeTierRewards(emissionInThisInterval)
+	} else {
+		tierRewards, nextTierLeftAmount = self.computeTierRewardsForInterval(startTimestamp, currentTimestamp, emissionInThisInterval, self.tierLeftAmount)
+	}

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a0eaaf3a-8e74-4f6f-9a70-b512826cc298

📥 Commits

Reviewing files that changed from the base of the PR and between 143c630 and 91f16b6.

⛔ Files ignored due to path filters (8)
  • contract/r/scenario/staker/more_01_single_position_for_each_warmup_tier_total_4_position_internal_only_filetest.gno is excluded by !**/*filetest.gno
  • contract/r/scenario/staker/more_04_positions_with_different_liquidity_and_in_range_change_by_swap_filetest.gno is excluded by !**/*filetest.gno
  • contract/r/scenario/staker/pool_tier_removal_readd_check_rewards_filetest.gno is excluded by !**/*filetest.gno
  • contract/r/scenario/staker/staker_emission_cache_sync_issue_filetest.gno is excluded by !**/*filetest.gno
  • contract/r/scenario/staker/staker_short_warmup_period_internal_04_remove_tier_filetest.gno is excluded by !**/*filetest.gno
  • contract/r/scenario/staker/staker_short_warmup_period_internal_05_position_in_out_range_changed_by_swap_filetest.gno is excluded by !**/*filetest.gno
  • contract/r/scenario/staker/tier_dust_carry_forward_filetest.gno is excluded by !**/*filetest.gno
  • contract/r/scenario/staker/tracking_unclaimed_period_validation_filetest.gno is excluded by !**/*filetest.gno
📒 Files selected for processing (5)
  • contract/r/gnoswap/staker/v1/_mock_test.gno
  • contract/r/gnoswap/staker/v1/init_test.gno
  • contract/r/gnoswap/staker/v1/reward_calculation_pool_tier.gno
  • contract/r/gnoswap/staker/v1/reward_calculation_pool_tier_test.gno
  • contract/r/gnoswap/test/fuzz/_mock_test.gno
🚧 Files skipped from review as they are similar to previous changes (1)
  • contract/r/gnoswap/test/fuzz/_mock_test.gno
