feat(api): add resource topology endpoint for propagation chain visualization #493

SunsetB612 wants to merge 1 commit into karmada-io:main

Conversation

Signed-off-by: SunsetB612 <10235101575@stu.ecnu.edu.cn>
[APPROVALNOTIFIER] This PR is NOT APPROVED.
Code Review
This pull request introduces a resource topology feature to trace workload propagation from the Karmada control plane to member clusters. It includes a new API endpoint, shared informers with custom indexers for efficient resource lookups, and logic to construct a topology graph. Feedback focuses on improving performance by parallelizing synchronous API calls to member clusters, ensuring proper context propagation for request cancellation, and extending health check logic to support all workload types beyond Deployments.
```go
resp.Edges = append(resp.Edges, TopologyEdge{Source: rbNodeID, Target: workNodeID})

// Step 4: Get member cluster workload status
memberStatus := getMemberWorkloadStatus(ctx, clusterName, namespace, name, kind)
```
Performing synchronous API calls to member clusters within a loop can lead to significant performance degradation, especially when a resource is propagated to a large number of clusters. This could cause the API request to time out. Consider fetching member cluster statuses in parallel using goroutines or an errgroup.
```go
	return
}

result, err := topology.GetResourceTopology(k8sClient, namespace, name, kind)
```
The request context should be passed to GetResourceTopology to ensure that downstream operations (like network calls to member clusters) can be cancelled if the client disconnects or the request times out.
Suggested change:

```diff
-result, err := topology.GetResourceTopology(k8sClient, namespace, name, kind)
+result, err := topology.GetResourceTopology(c.Request.Context(), k8sClient, namespace, name, kind)
```
```go
	}
	return NodeStatusProgressing
default:
	return NodeStatusHealthy
```
There was a problem hiding this comment.
```go
func GetResourceTopology(
	k8sClient kubeclient.Interface,
	namespace, name, kind string) (*TopologyResponse, error) {
	return traceChain(context.TODO(), k8sClient, namespace, name, kind)
}
```
The GetResourceTopology function should accept a context.Context instead of using context.TODO(). This allows the function to respect the lifecycle of the incoming API request.
Suggested change:

```go
func GetResourceTopology(
	ctx context.Context,
	k8sClient kubeclient.Interface,
	namespace, name, kind string) (*TopologyResponse, error) {
	return traceChain(ctx, k8sClient, namespace, name, kind)
}
```
No description provided.