feat: add helm profiling preset#2126

Open
wehzzz wants to merge 6 commits into open-telemetry:main from wehzzz:feat-add-helm-profiling-preset

Conversation

@wehzzz

@wehzzz wehzzz commented Mar 26, 2026

What does this PR do?

Adds a new profiling preset to the opentelemetry-collector Helm chart. When presets.profiling.enabled: true, the chart automatically configures:

  • A profiling receiver and a profiles pipeline, with the debug exporter as a fallback if no exporter is configured
  • A tracefs host volume mount (/sys/kernel/tracing, read-only)
  • hostPID: true on the pod spec
  • Security context (runAsUser: 0, runAsGroup: 0, privileged: true)
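
For reference, enabling the preset from a user's values file might look like this (a sketch; only presets.profiling.enabled comes from this PR, and the daemonset mode is an assumption about a typical per-node profiling deployment):

```yaml
# values.yaml -- enable the new profiling preset (sketch)
mode: daemonset          # profilers usually run on every node
presets:
  profiling:
    enabled: true
```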

When combined with the kubernetesAttributes preset, the chart also:

  • Injects the k8sattributes processor into the profiles pipeline (same pattern as logs/metrics/traces)
  • Adds container.id to the pod_association configuration, since eBPF profilers identify workloads by container ID, not pod IP/UID.
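
A sketch of the resulting enrichment, using the k8sattributes processor's pod_association syntax (the exact association list the preset generates may differ):

```yaml
processors:
  k8sattributes:
    pod_association:
      # added for profiles: eBPF profilers report container IDs,
      # not pod IPs/UIDs
      - sources:
          - from: resource_attribute
            name: container.id
service:
  pipelines:
    profiles:
      processors: [k8sattributes]
```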

Motivation

Deploying an eBPF profiler (e.g. opentelemetry-collector-ebpf-profiler) on Kubernetes currently requires significant boilerplate. This was discussed in open-telemetry/opentelemetry-ebpf-profiler#1072 where the community expressed interest in a preset to simplify this.

Describe how you validated your changes

Two new CI values files:

  • ci/preset-profiling-values.yaml - profiling preset only
  • ci/preset-profiling-kubernetesattributes-values.yaml - profiling + kubernetesAttributes
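
The combined CI values file presumably contains little more than the two preset toggles (a sketch of assumed contents; mode is an assumption, since the chart requires one):

```yaml
# ci/preset-profiling-kubernetesattributes-values.yaml (assumed contents)
mode: daemonset
presets:
  profiling:
    enabled: true
  kubernetesAttributes:
    enabled: true
```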

Additional notes

  • The container.id pod_association injection is gated on $config.service.pipelines.profiles existence in applyKubernetesAttributesConfig, not on profiling.enabled. This means any profiles pipeline (preset or user-defined) benefits from the enrichment when kubernetesAttributes is enabled.

  • hostPID is applied to all three workload templates (DaemonSet, Deployment, StatefulSet) for consistency.
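
In Helm template terms, the gating described in the first note might be sketched as follows (variable and helper names are illustrative, not the chart's actual identifiers):

```yaml
{{- /* sketch: inside the kubernetesAttributes config helper */ -}}
{{- if $config.service.pipelines.profiles }}
  {{- /* inject k8sattributes into the profiles pipeline and append
         container.id to pod_association -- keyed on the pipeline's
         existence, not on presets.profiling.enabled, so user-defined
         profiles pipelines are enriched too */ -}}
{{- end }}
```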

@wehzzz wehzzz marked this pull request as ready for review March 26, 2026 13:59
@wehzzz wehzzz requested review from a team, TylerHelmuth, dmitryax and povilasv as code owners March 26, 2026 13:59
Member

@florianl florianl left a comment


Overall looks good to me, but I'm not a Helm expert. Just a minor question about the resource limits that are set.

resources:
  limits:
    cpu: 100m
    memory: 200M
Member


Was this memory limit tested in some way? Depending on the environment, my experiments usually require 400M to 500M.

Author


If I'm not mistaken, this limit is only used for the CI environment. It's not a default value injected by the kubernetesAttributes preset or the new profiling preset.

  {{- include "opentelemetry-collector.pod" ($podData | mustMergeOverwrite (deepCopy .)) | nindent 6 }}
  hostNetwork: {{ .Values.hostNetwork }}
- hostPID: {{ .Values.hostPID }}
+ hostPID: {{ or .Values.hostPID .Values.presets.profiling.enabled }}
Contributor


What do we expect if profiling.enabled: true and hostPID: ""?

Author


Since the eBPF profiler requires hostPID: true to function, it makes sense that enabling the profiling preset forces this value to true, regardless of whether hostPID is left empty ("") in the user's configuration.
