frontend: nodes: Add resource allocation summary in Node Details #5048
itvi-1234 wants to merge 1 commit into kubernetes-sigs:main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: itvi-1234. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files.
Force-pushed from 2a7258b to 6de5be0
Hi @ashu8912, @vyncent-t, @sniok, let me know if any changes are needed.
Pull request overview
Adds a Resource Allocation section to the Node Details view to summarize total CPU/Memory requests and limits across pods scheduled to the node, including percentage-of-capacity indicators, and introduces new i18n glossary keys for the section labels.
Changes:
- Add `AllocatedResourcesSection` to Node Details and render it as a full-width extra section.
- Compute aggregate CPU/memory requests and limits from pods scheduled on the node, and display totals plus percentage of node capacity with status labels.
- Add new glossary translation keys for "Resource Allocation" and the four resource rows across supported locales.
Reviewed changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| frontend/src/components/node/Details.tsx | Adds the Node “Resource Allocation” section and calculation logic using pod lists and unit parsing. |
| frontend/src/i18n/locales/en/glossary.json | Adds English glossary strings for the new section/rows. |
| frontend/src/i18n/locales/de/glossary.json | Adds new glossary keys (empty values) for fallback. |
| frontend/src/i18n/locales/es/glossary.json | Adds new glossary keys (empty values) for fallback. |
| frontend/src/i18n/locales/fr/glossary.json | Adds new glossary keys (empty values) for fallback. |
| frontend/src/i18n/locales/hi/glossary.json | Adds new glossary keys (empty values) for fallback. |
| frontend/src/i18n/locales/it/glossary.json | Adds new glossary keys (empty values) for fallback. |
| frontend/src/i18n/locales/ja/glossary.json | Adds new glossary keys (empty values) for fallback. |
| frontend/src/i18n/locales/ko/glossary.json | Adds new glossary keys (empty values) for fallback. |
| frontend/src/i18n/locales/pt/glossary.json | Adds new glossary keys (empty values) for fallback. |
| frontend/src/i18n/locales/ta/glossary.json | Adds new glossary keys (empty values) for fallback. |
| frontend/src/i18n/locales/zh/glossary.json | Adds new glossary keys (empty values) for fallback. |
| frontend/src/i18n/locales/zh-tw/glossary.json | Adds new glossary keys (empty values) for fallback. |
```tsx
const { node } = props;
const { t } = useTranslation('glossary');

const [pods] = Pod.useList({
```
Pod.useList is called without a cluster option, so it will list pods from the currently selected cluster(s) rather than the node's actual cluster. In multi-cluster contexts this can produce incorrect allocation totals; pass cluster: node.cluster (or the same cluster used by the details view) to ensure the list is scoped correctly.
Suggested change:
```diff
  const [pods] = Pod.useList({
+   cluster: node?.cluster,
```
```tsx
const cpuCapacity = units.parseCpu(node?.status.capacity?.cpu || '0');
const memoryCapacity = units.parseRam(node?.status.capacity?.memory || '0');
```
The percentage calculations divide by cpuCapacity/memoryCapacity, but those are derived with a fallback of '0', so capacity can be 0 and the UI will render Infinity/NaN%. Guard against 0 capacity (e.g., show 0%/"N/A" and avoid division) before computing the percentages.
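One way to implement the suggested guard, sketched with a hypothetical helper name (not from the PR):

```typescript
// Hypothetical helper: format a usage value as a percentage of capacity,
// guarding against zero/missing capacity so the UI never renders
// "Infinity%" or "NaN%".
function percentOfCapacity(used: number, capacity: number): string {
  if (!capacity || capacity <= 0) {
    // Capacity was missing or parsed to 0: show a neutral placeholder
    // instead of dividing by zero.
    return 'N/A';
  }
  return `${((used / capacity) * 100).toFixed(1)}%`;
}
```

The same guard applies to both the CPU and memory rows, since both capacities fall back to `'0'` when the node status is unavailable.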
```tsx
pods?.forEach((pod: KubePod) => {
  pod.spec.containers.forEach((container: KubeContainer) => {
    cpuRequests += units.parseCpu(container.resources?.requests?.cpu || '0');
    cpuLimits += units.parseCpu(container.resources?.limits?.cpu || '0');
```
Allocated totals only sum pod.spec.containers and ignore initContainers (and any special init-container semantics), so pods with init containers will be undercounted. Consider incorporating pod.spec.initContainers (typically using the max of init-container requests/limits per pod) to better match Kubernetes scheduling/allocation behavior.
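The init-container rule Kubernetes uses (effective request is the max of the regular containers' sum and the largest single init container) could be sketched like this; the interfaces and the minimal `parseCpu` are stand-ins for the PR's `KubePod` types and `units.parseCpu`:

```typescript
// Simplified shapes mirroring the Kubernetes pod spec (assumptions, not the PR's types).
interface Resources { requests?: { cpu?: string } }
interface Container { resources?: Resources }
interface PodSpec { containers: Container[]; initContainers?: Container[] }

// Minimal CPU parser: "500m" -> 0.5 cores, "2" -> 2 cores
// (stand-in for units.parseCpu).
const parseCpu = (v: string): number =>
  v.endsWith('m') ? parseFloat(v) / 1000 : parseFloat(v);

// Effective CPU request per Kubernetes scheduling semantics:
// max(sum of regular containers, largest single init container),
// because init containers run sequentially before the main containers.
function effectiveCpuRequest(spec: PodSpec): number {
  const regular = spec.containers.reduce(
    (sum, c) => sum + parseCpu(c.resources?.requests?.cpu || '0'),
    0
  );
  const initMax = (spec.initContainers || []).reduce(
    (max, c) => Math.max(max, parseCpu(c.resources?.requests?.cpu || '0')),
    0
  );
  return Math.max(regular, initMax);
}
```

The same max rule applies to limits and to memory; sidecar (restartable init) containers are a further wrinkle the sketch ignores.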
```tsx
pods?.forEach((pod: KubePod) => {
  pod.spec.containers.forEach((container: KubeContainer) => {
    cpuRequests += units.parseCpu(container.resources?.requests?.cpu || '0');
    cpuLimits += units.parseCpu(container.resources?.limits?.cpu || '0');
    memoryRequests += units.parseRam(container.resources?.requests?.memory || '0');
    memoryLimits += units.parseRam(container.resources?.limits?.memory || '0');
  });
});
```
This sums resources for every pod returned by the fieldSelector, including pods in terminal phases (Succeeded/Failed) that can remain bound to the node, which can inflate allocation totals. Consider filtering out terminal pods (e.g., by pod.status.phase) before summing requests/limits.
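A minimal filter along these lines would exclude terminal pods before the aggregation loop (the interface and function name are illustrative, not from the PR):

```typescript
// Only the phase field is needed for this check (illustrative shape).
interface PodLike { status?: { phase?: string } }

// Pods in terminal phases can remain bound to a node but no longer
// consume their requested resources, so exclude them before summing.
const TERMINAL_PHASES = new Set(['Succeeded', 'Failed']);

function schedulablePods<T extends PodLike>(pods: T[]): T[] {
  return pods.filter(p => !TERMINAL_PHASES.has(p.status?.phase || ''));
}
```

This mirrors what `kubectl describe node` does when it reports "Allocated resources": it counts only non-terminated pods.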
```tsx
<ValueLabel>
  {units.unparseCpu(cpuRequests.toString()).value}
  {units.unparseCpu(cpuRequests.toString()).unit}
</ValueLabel>
```
Resource values are rendered by concatenating {value}{unit} without a separator (e.g., 0.5m), while other parts of the UI typically format these as value unit for readability/consistency (see e.g. Pod list renderers). Consider adding a space (or reusing a shared formatter) between the numeric value and unit.
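A tiny shared formatter would address this consistently; the function name is hypothetical, and the real fix could equally reuse whatever formatter the pod list renderers already use:

```typescript
// Hypothetical formatter: join a parsed value and unit with a single space,
// matching the "value unit" convention used elsewhere in the UI.
// An empty unit yields just the value, with no trailing space.
function formatResource(value: string | number, unit: string): string {
  return unit ? `${value} ${unit}` : `${value}`;
}
```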
Force-pushed from 6de5be0 to c736059
Force-pushed from c736059 to ca777c4
Hey @illume, I have implemented the suggested changes; let me know if anything further is required.
Hey, I have updated the implementation to address all the feedback from the previous comments. I force-pushed the changes, so some review comments may not yet be automatically marked as outdated by Copilot. Summary of improvements:
Summary
The Node details view currently only shows actual resource usage (metrics). It lacks a summary of the total resource requests and limits allocated to the pods running on that node.
This PR adds a new "Resource Allocation" section to the Node Details view, providing a clear summary of total CPU/Memory requests and limits, along with their percentage relative to the node's capacity.
Fixes Issue #5036
Changes
1. Added a new `AllocatedResourcesSection` component to `NodeDetails.tsx`.
2. Implemented logic to calculate the total CPU/memory requests and limits from all pods assigned to the node.
3. Integrated the new section as a full-width block in `extraSections` for consistent UI alignment with other details.
4. Included color-coded status labels for quick visual monitoring of allocation levels.
Steps to Test
1. Navigate to any Node's details page.
2. Locate the new "Resource Allocation" block below the usage charts.
3. Verify that the totals and percentages correctly reflect the assigned pod resources.
Screenshot (Added the new resource allocation section)
