frontend: nodes: Add resource allocation summary in Node Details#5048

Open
itvi-1234 wants to merge 1 commit into kubernetes-sigs:main from itvi-1234:feat/node-resource-allocation

Conversation


@itvi-1234 itvi-1234 commented Apr 4, 2026

Summary
The Node details view currently only shows actual resource usage (metrics). It lacks a summary of the total resource requests and limits allocated to the pods running on that node.

This PR adds a new "Resource Allocation" section to the Node Details view, providing a clear summary of total CPU/Memory requests and limits, along with their percentage relative to the node's capacity.

Fixes Issue #5036

Changes
1. Added a new AllocatedResourcesSection component to NodeDetails.tsx.
2. Implemented logic to calculate the total CPU/memory requests and limits from all pods assigned to the node.
3. Integrated the new section as a full-width block in extraSections for consistent UI alignment with other details.
4. Included color-coded status labels for quick visual monitoring of allocation levels.
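The aggregation in change 2 can be sketched roughly as follows. This is a simplified illustration, not the PR's actual code: `parseCpu` stands in for Headlamp's `units.parseCpu` helper, and the pod shapes are trimmed to just the fields used.

```typescript
// Simplified stand-ins for the Kube* types (only the fields we read).
interface ContainerResources {
  requests?: { cpu?: string };
  limits?: { cpu?: string };
}
interface PodLike {
  spec: { containers: { resources?: ContainerResources }[] };
}

// Simplified stand-in for units.parseCpu: "500m" -> 0.5 cores, "2" -> 2 cores.
function parseCpu(v: string): number {
  return v.endsWith('m') ? parseFloat(v) / 1000 : parseFloat(v);
}

// Sum CPU requests and limits across all containers of all pods on the node.
function aggregateCpu(pods: PodLike[]): { requests: number; limits: number } {
  let requests = 0;
  let limits = 0;
  for (const pod of pods) {
    for (const c of pod.spec.containers) {
      requests += parseCpu(c.resources?.requests?.cpu || '0');
      limits += parseCpu(c.resources?.limits?.cpu || '0');
    }
  }
  return { requests, limits };
}
```

The same shape applies to memory with a `parseRam`-style helper; the totals are then divided by the node's capacity to get the percentages shown in the section.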

Steps to Test
1. Navigate to any Node's details page.
2. Locate the new "Resource Allocation" block below the usage charts.
3. Verify that the totals and percentages correctly reflect the assigned pod resources.

Screenshot (new resource allocation section): [image]

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Apr 4, 2026
@k8s-ci-robot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: itvi-1234
Once this PR has been reviewed and has the lgtm label, please assign ashu8912 for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Apr 4, 2026
@itvi-1234 itvi-1234 force-pushed the feat/node-resource-allocation branch from 2a7258b to 6de5be0 Compare April 4, 2026 06:37
@itvi-1234

Hi @ashu8912, @vyncent-t, @sniok, let me know if any changes are needed.

Copilot AI left a comment


Pull request overview

Adds a Resource Allocation section to the Node Details view to summarize total CPU/Memory requests and limits across pods scheduled to the node, including percentage-of-capacity indicators, and introduces new i18n glossary keys for the section labels.

Changes:

  • Add AllocatedResourcesSection to Node Details and render it as a full-width extra section.
  • Compute aggregate CPU/Memory requests & limits from pods scheduled on the node and display totals + % of node capacity with status labels.
  • Add new glossary translation keys for “Resource Allocation” and the four resource rows across supported locales.

Reviewed changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 5 comments.

Summary per file:
frontend/src/components/node/Details.tsx Adds the Node “Resource Allocation” section and calculation logic using pod lists and unit parsing.
frontend/src/i18n/locales/en/glossary.json Adds English glossary strings for the new section/rows.
frontend/src/i18n/locales/de/glossary.json Adds new glossary keys (empty values) for fallback.
frontend/src/i18n/locales/es/glossary.json Adds new glossary keys (empty values) for fallback.
frontend/src/i18n/locales/fr/glossary.json Adds new glossary keys (empty values) for fallback.
frontend/src/i18n/locales/hi/glossary.json Adds new glossary keys (empty values) for fallback.
frontend/src/i18n/locales/it/glossary.json Adds new glossary keys (empty values) for fallback.
frontend/src/i18n/locales/ja/glossary.json Adds new glossary keys (empty values) for fallback.
frontend/src/i18n/locales/ko/glossary.json Adds new glossary keys (empty values) for fallback.
frontend/src/i18n/locales/pt/glossary.json Adds new glossary keys (empty values) for fallback.
frontend/src/i18n/locales/ta/glossary.json Adds new glossary keys (empty values) for fallback.
frontend/src/i18n/locales/zh/glossary.json Adds new glossary keys (empty values) for fallback.
frontend/src/i18n/locales/zh-tw/glossary.json Adds new glossary keys (empty values) for fallback.

const { node } = props;
const { t } = useTranslation('glossary');

const [pods] = Pod.useList({

Copilot AI Apr 4, 2026


Pod.useList is called without a cluster option, so it will list pods from the currently selected cluster(s) rather than the node's actual cluster. In multi-cluster contexts this can produce incorrect allocation totals; pass cluster: node.cluster (or the same cluster used by the details view) to ensure the list is scoped correctly.

Suggested change:
  - const [pods] = Pod.useList({
  + const [pods] = Pod.useList({
  +     cluster: node?.cluster,

Comment on lines +375 to +377
const cpuCapacity = units.parseCpu(node?.status.capacity?.cpu || '0');
const memoryCapacity = units.parseRam(node?.status.capacity?.memory || '0');


Copilot AI Apr 4, 2026


The percentage calculations divide by cpuCapacity/memoryCapacity, but those are derived with a fallback of '0', so capacity can be 0 and the UI will render Infinity/NaN%. Guard against 0 capacity (e.g., show 0%/"N/A" and avoid division) before computing the percentages.
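A minimal sketch of such a guard (hypothetical helper, not the PR's actual code): return a sentinel instead of dividing when capacity is zero or non-finite.

```typescript
// Render a usage percentage, guarding against zero or unparsable capacity
// (which would otherwise produce Infinity% or NaN% in the UI).
function percentOfCapacity(used: number, capacity: number): string {
  if (!capacity || !Number.isFinite(capacity)) {
    return 'N/A';
  }
  return `${((used / capacity) * 100).toFixed(1)}%`;
}
```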

Comment on lines +383 to +386
pods?.forEach((pod: KubePod) => {
pod.spec.containers.forEach((container: KubeContainer) => {
cpuRequests += units.parseCpu(container.resources?.requests?.cpu || '0');
cpuLimits += units.parseCpu(container.resources?.limits?.cpu || '0');

Copilot AI Apr 4, 2026


Allocated totals only sum pod.spec.containers and ignore initContainers (and any special init-container semantics), so pods with init containers will be undercounted. Consider incorporating pod.spec.initContainers (typically using the max of init-container requests/limits per pod) to better match Kubernetes scheduling/allocation behavior.
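The Kubernetes effective-request rule the comment refers to can be sketched as follows for a single resource. Values are assumed to be already parsed to numbers, and restartable (sidecar) init containers, which follow different accounting, are ignored for brevity.

```typescript
// Effective pod request for one resource, per the standard Kubernetes rule:
// max(sum of regular container requests, max of any single init container request).
// Init containers run sequentially, so only the largest one matters.
function effectivePodRequest(
  containerReqs: number[],
  initContainerReqs: number[]
): number {
  const regular = containerReqs.reduce((sum, r) => sum + r, 0);
  const initMax = initContainerReqs.reduce((max, r) => Math.max(max, r), 0);
  return Math.max(regular, initMax);
}
```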

Comment on lines +383 to +390
pods?.forEach((pod: KubePod) => {
pod.spec.containers.forEach((container: KubeContainer) => {
cpuRequests += units.parseCpu(container.resources?.requests?.cpu || '0');
cpuLimits += units.parseCpu(container.resources?.limits?.cpu || '0');
memoryRequests += units.parseRam(container.resources?.requests?.memory || '0');
memoryLimits += units.parseRam(container.resources?.limits?.memory || '0');
});
});

Copilot AI Apr 4, 2026


This sums resources for every pod returned by the fieldSelector, including pods in terminal phases (Succeeded/Failed) that can remain bound to the node, which can inflate allocation totals. Consider filtering out terminal pods (e.g., by pod.status.phase) before summing requests/limits.
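A sketch of the suggested filter (simplified pod shape, hypothetical helper name): terminal pods no longer hold their requested resources, so they are excluded before summing.

```typescript
// Only the pod.status.phase field is needed for this check.
interface PodStatusLike {
  status?: { phase?: string };
}

// Pods in a terminal phase (Succeeded/Failed) can remain bound to a node
// but no longer count toward its allocated resources.
function countsTowardAllocation(pod: PodStatusLike): boolean {
  const phase = pod.status?.phase;
  return phase !== 'Succeeded' && phase !== 'Failed';
}
```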

Comment on lines +400 to +403
<ValueLabel>
{units.unparseCpu(cpuRequests.toString()).value}
{units.unparseCpu(cpuRequests.toString()).unit}
</ValueLabel>

Copilot AI Apr 4, 2026


Resource values are rendered by concatenating {value}{unit} without a separator (e.g., 0.5m), while other parts of the UI typically format these as value unit for readability/consistency (see e.g. Pod list renderers). Consider adding a space (or reusing a shared formatter) between the numeric value and unit.
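A sketch of the suggested formatting, assuming the value/unit split that `units.unparseCpu` returns; the millicore threshold here is illustrative, not Headlamp's actual logic.

```typescript
// Format a CPU quantity for display with a space between value and unit,
// e.g. 0.5 cores -> "500 m". Unitless whole-core values render as-is.
function formatCpu(cores: number): string {
  const { value, unit } =
    cores < 1
      ? { value: Math.round(cores * 1000), unit: 'm' }
      : { value: cores, unit: '' };
  return unit ? `${value} ${unit}` : `${value}`;
}
```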

@itvi-1234 itvi-1234 force-pushed the feat/node-resource-allocation branch from 6de5be0 to c736059 Compare April 5, 2026 03:50
@itvi-1234 itvi-1234 force-pushed the feat/node-resource-allocation branch from c736059 to ca777c4 Compare April 5, 2026 03:59
@itvi-1234

Hey @illume, I have implemented the suggested changes; let me know if anything further is needed.

@itvi-1234

Hey, I have updated the implementation to address all the feedback in the previous comments. I have force-pushed the changes, so some review comments may not yet be automatically marked as 'outdated' by Copilot.

Summary of improvements:
1. Multi-cluster scoping: Updated Pod.useList to use the node's cluster context.
2. Init containers: Included init container resources in the allocation totals using the standard Kubernetes formula.
3. Terminal pods: Added a filter to exclude pods in the Succeeded or Failed phases.
4. Calculation safety: Added a guard against division by zero for node capacity.
5. Formatting: Standardized the display to use a space between values and units (`${value} ${unit}`) to match existing UI patterns.

@itvi-1234

Hi @sniok, @illume, could you please review this PR?

