fix: use accumulator in lo.Reduce for disruption pod count (#2931)

nicknikolakakis wants to merge 1 commit into kubernetes-sigs:main
Conversation
The lo.Reduce callback discarded the accumulator, so podCount in Command.LogValues() only reported the last candidate's pod count instead of the sum across all candidates.
Summary

The `lo.Reduce` callback in `Command.LogValues()` discards the accumulator (`_ int`), so the `pod-count` field in disruption logs only reports the last candidate's pod count instead of the sum across all candidates.

Before:

```go
func(_ int, cd *Candidate, _ int) int { return len(cd.reschedulablePods) }
```

After:

```go
func(acc int, cd *Candidate, _ int) int { return acc + len(cd.reschedulablePods) }
```

Single-candidate disruptions were unaffected (the last candidate was the only one), but multi-node consolidation under-reported the count.
Fixes #2888
Test plan

- `go build ./pkg/controllers/disruption/...` — builds clean