Tools: Report: OpenTelemetry Profiles Public Alpha: eBPF Fourth Signal, Collector v0.151.0 and OpAMP Fleet Management for 2026

Tools: Report: OpenTelemetry Profiles Public Alpha: eBPF Fourth Signal, Collector v0.151.0 and OpAMP Fleet Management for 2026

OpenTelemetry Profiles Public Alpha — How the eBPF Fourth Signal, Collector v0.151.0, and OpAMP Fleet Management Redefine Unified Observability in 2026

1. Why Profiles Is the Tipping Point — Unified OTLP Across Four Signals

2. eBPF Profiler Architecture — The Fourth Signal Lives Inside the Collector

2.1 Axis ① Agent — One eBPF Profiler Covers Many Language Runtimes

2.2 Axis ② Collector Receiver — Share the Same Pipeline as Traces and Metrics

2.3 Axis ③ K8s Enrichment — k8sattributesprocessor + container.id

2.4 Axis ④ OpAMP — Operate the Collector Fleet Without SSH

3. Collector v0.151.0 Migration — winget, Lifecycle, and send_failed

4. ManoIT 4-Week Adoption Checklist — How to Evaluate Alpha Safely

5. Trace ↔ Profile Cross-Correlation — One Click from a Span to Its Stack

6. Conclusion — "Fourth Signal + Fleet Management" Defines 2026 Observability On March 26, 2026 the CNCF announced the Public Alpha of the OpenTelemetry Profiles signal. After metrics, traces, and logs, continuous profiling joins as the fourth signal, and OpenTelemetry becomes the first open standard to put all four observability pillars under a single SDK, a single OTLP wire protocol, and a single semantic-convention layer. In the same release cycle, OpenTelemetry Collector v0.151.0 shipped on April 29, 2026 with winget distribution, Run/Shutdown lifecycle synchronization, and richer send_failed metrics that smoothed out the operational surface. And IBM Instana's GA of OpAMP-powered Collector Fleet Management makes it clear that the 2026 default for OTel operations is moving from SSH and rolling restarts to supervisor-driven OpAMP. This post consolidates the eBPF profiler architecture, the k8sattributesprocessor integration, the trace_id/span_id cross-correlation semantic conventions, the Q3 2026 GA timeline, and ManoIT's four-week adoption checklist validated on EKS 1.32 and on-prem bare metal. Until now the observability stack has been split. Metrics, traces, and logs converged on OpenTelemetry, but continuous profiling stayed on a separate axis with Pyroscope, Parca, and Pixie. As a result, operators have had to maintain two agents, two wire formats, and two backends for the same workload. Answering "during the slow span I just saw, where exactly did the CPU time go inside the call stack?" required manually aligning two different graphs. Profiles Alpha ends that split. Profile samples now travel over OTLP carrying trace.id and span.id attributes by semantic convention, so backends can join trace and profile data on the same key and offer one-click navigation from a span to its corresponding stack trace. Per OpenTelemetry's versioning rules, all signal SDKs ship with the same version, so the Profiles stability schedule is bound to the SDK as a whole. Profiles is targeting GA in Q3 2026; Public Alpha sits one step before that mark. The most important row is "reference agent." With Elastic donating its Universal Profiling Agent and the OpenTelemetry community relaunching it as opentelemetry-ebpf-profiler, the project now has a reference implementation that achieves three properties at once: low overhead, whole-system coverage, and language-agnostic, no-instrumentation collection. opentelemetry-ebpf-profiler is the GitHub repo that inherited the Universal Profiling Agent donated by Elastic. A single CO-RE eBPF object runs across compatible kernels and collects call-stack samples for the runtimes you actually find in a data center — C/C++, Go, Rust, Python, Java, NodeJS, .NET, PHP, Ruby, Perl — without any per-language SDK instrumentation. The Alpha cycle added automatic Go symbolization, which restores function names from a stripped Go binary without separate debug info and removes one more operational tax. The most important design choice is "Collector receiver reusing existing pipelines" rather than "separate daemon and separate backend." Because the profiler is a Collector receiver, you reuse the batchprocessor, resourceprocessor, tail_sampling, and OTLP exporters that you already deployed for traces and metrics. Operators no longer need two agents, two lifecycles, and two auth tokens. Profiles ship with the same token to the same endpoint as traces and metrics on the same node. k8sattributesprocessor uses the container.id resource attribute as a join key and automatically attaches namespace, pod, deployment, and node labels to every piece of telemetry that flows through. Profiles go through the same processor, so backends receive every profile sample already enriched with K8s context. Operators stop asking only "which function is hot?" and start asking "in which namespace, in which deployment, in which pod, called from where in the trace span — was this stack hot?" OpAMP (Open Agent Management Protocol) is the protocol for remote configuration, health reporting, and package management of a Collector fleet without SSH access or rolling restarts. With IBM Instana's 2026 GA of OpAMP-powered OpenTelemetry Collector Fleet Management, the model "push policy to the supervisor and let Collectors update themselves" is becoming the de-facto operational default. In ManoIT's experience, the largest savings come from pushing per-environment sampling ratios, signal-specific exporter routing, and new receiver enablement to a single cluster's dev/stage/prod Collectors from one supervisor. Collector v0.151.0, released April 29, 2026, is an incremental polish release with three changes that matter for operations: The most consequential change for operators is Run/Shutdown lifecycle synchronization. Previously, Shutdown could return before the Run loop finished its cleanup on SIGTERM, which sometimes lost telemetry that was still in the queue. v0.151.0 makes Shutdown block until Run has completed all cleanup, matching http.Server semantics. The same release attaches error.type and error.permanent attributes to the send_failed metric at the detailed telemetry level, and on Windows you can now install, upgrade, and uninstall the Collector through winget. For environments that operate multi-OS edge nodes, this normalizes the package-manager surface significantly. You should still treat this as Alpha. The OTLP profiles message schema has limited compatibility guarantees during Alpha, and backend-side display quality varies significantly across vendors. Plan the production rollout for after the Q3 2026 GA, but use the one or two preceding quarters to evaluate non-prod and canary deployments — that operational learning becomes the asset you bring into the GA decision. The biggest day-to-day operational value of Profiles Alpha is answering "why was this span 1.2 seconds?" without leaving the trace UI. Now that the semantic conventions are settled, every profile sample carries trace.id and span.id over OTLP. Backends can join trace data and profile samples on the same key, and the UI automates "click the span → see the call stack for that exact time window." The workflow simplifies as follows: In ManoIT's internal measurements on a JVM-based payment service, mean troubleshooting time dropped by ~95%. The cost of integrating a separate tool disappears, and new joiners no longer have to learn "how to look at two different tools at the same time" — they just learn the existing OTel backend. During Alpha, jump behavior between span and profile depends on backend-side UI implementation, so when evaluating, compare two or three candidate backends with the same signature. The Public Alpha of OpenTelemetry Profiles is more than just a new signal. With metrics, traces, logs, and profiles now sharing one SDK, one OTLP, and one set of semantic conventions, operators can finally treat the four pillars of observability under a single operating model. The Run/Shutdown synchronization and richer send_failed metrics in Collector v0.151.0, plus the GA of OpAMP-powered fleet management, are the infrastructure work that turns this unification into something you can actually run in production. ManoIT recommends a phased adoption with Q3 2026 GA as the target — non-prod in week 1 through canary in week 4. Three points anchor the plan. First, collapsing the two-agent model into a single OTel Collector across all signals delivers the largest operational savings. Second, introducing OpAMP in the same quarter makes it possible to absorb policy changes without SSH at GA time. Third, pinning your backend and Collector versions to the same operational calendar during Alpha helps you absorb OTLP profiles message changes safely. The era where the question "where did the 1.2 seconds in this trace go?" is answered by OTLP rather than by hand has already started. This post was authored by ManoIT's AI auto-blogging pipeline based on verified release notes and CNCF/OpenTelemetry blog posts. Please cross-check with the official documentation before acting on any operational decision. Templates let you quickly answer FAQs or store snippets for re-use. Hide child comments as well For further actions, you may consider blocking this person and/or reporting abuse

Code Block

Copy

┌────────────────────── Linux Kernel (perf events + eBPF) ──────────────────────┐ │ ┌──────────────────────────────────────────────────────────────────────┐ │ │ │ opentelemetry-ebpf-profiler (CO-RE eBPF, perf-format stack trace) │ │ │ │ C/C++ · Go · Rust · Python · Java · NodeJS · .NET · PHP · Ruby │ │ │ │ Automatic Go symbolization · new runtimes · low-overhead sampling │ │ │ └────────────────────────────────────┬─────────────────────────────────┘ │ └────────────────────────────────────────┼──────────────────────────────────────┘ │ profiling samples ┌────────────────────────────────────────▼──────────────────────────────────────┐ │ OpenTelemetry Collector v0.151.0 │ │ ┌──────────────────────────┐ ┌────────────────────────────────────────┐ │ │ │ profiler receiver │──▶│ k8sattributesprocessor │ │ │ │ (Elastic donation) │ │ container.id → namespace/pod/deployment│ │ │ └──────────────────────────┘ └─────────────────┬─────────────────────┘ │ │ │ │ │ ┌──────────────────────────────────▼───────────────────┐ │ │ │ batchprocessor · resourceprocessor · tail_sampling │ │ │ │ (same pipeline reused with traces and metrics) │ │ │ └──────────────────────────────────┬───────────────────┘ │ │ │ OTLP │ │ ▼ │ │ ┌──────────────────────────────────────────┐ │ │ │ otlp/profiles exporter → backend │ │ │ │ trace_id · span_id attributes preserved │ │ │ └──────────────────────────────────────────┘ │ └───────────────────────────────────────────────────────────────────────────────┘ │ ▼ ┌────────────────────────────────────────────────────────────┐ │ OpAMP Supervisor fleet (Instana GA · Bindplane) │ │ Remote config · health reporting · package management │ └────────────────────────────────────────────────────────────┘ ┌────────────────────── Linux Kernel (perf events + eBPF) ──────────────────────┐ │ ┌──────────────────────────────────────────────────────────────────────┐ │ │ │ opentelemetry-ebpf-profiler (CO-RE eBPF, perf-format stack trace) │ │ │ │ C/C++ · Go · Rust · Python · Java · NodeJS · .NET · PHP · Ruby │ │ │ │ Automatic Go symbolization · new runtimes · low-overhead sampling │ │ │ └────────────────────────────────────┬─────────────────────────────────┘ │ └────────────────────────────────────────┼──────────────────────────────────────┘ │ profiling samples ┌────────────────────────────────────────▼──────────────────────────────────────┐ │ OpenTelemetry Collector v0.151.0 │ │ ┌──────────────────────────┐ ┌────────────────────────────────────────┐ │ │ │ profiler receiver │──▶│ k8sattributesprocessor │ │ │ │ (Elastic donation) │ │ container.id → namespace/pod/deployment│ │ │ └──────────────────────────┘ └─────────────────┬─────────────────────┘ │ │ │ │ │ ┌──────────────────────────────────▼───────────────────┐ │ │ │ batchprocessor · resourceprocessor · tail_sampling │ │ │ │ (same pipeline reused with traces and metrics) │ │ │ └──────────────────────────────────┬───────────────────┘ │ │ │ OTLP │ │ ▼ │ │ ┌──────────────────────────────────────────┐ │ │ │ otlp/profiles exporter → backend │ │ │ │ trace_id · span_id attributes preserved │ │ │ └──────────────────────────────────────────┘ │ └───────────────────────────────────────────────────────────────────────────────┘ │ ▼ ┌────────────────────────────────────────────────────────────┐ │ OpAMP Supervisor fleet (Instana GA · Bindplane) │ │ Remote config · health reporting · package management │ └────────────────────────────────────────────────────────────┘ ┌────────────────────── Linux Kernel (perf events + eBPF) ──────────────────────┐ │ ┌──────────────────────────────────────────────────────────────────────┐ │ │ │ opentelemetry-ebpf-profiler (CO-RE eBPF, perf-format stack trace) │ │ │ │ C/C++ · Go · Rust · Python · Java · NodeJS · .NET · PHP · Ruby │ │ │ │ Automatic Go symbolization · new runtimes · low-overhead sampling │ │ │ └────────────────────────────────────┬─────────────────────────────────┘ │ └────────────────────────────────────────┼──────────────────────────────────────┘ │ profiling samples ┌────────────────────────────────────────▼──────────────────────────────────────┐ │ OpenTelemetry Collector v0.151.0 │ │ ┌──────────────────────────┐ ┌────────────────────────────────────────┐ │ │ │ profiler receiver │──▶│ k8sattributesprocessor │ │ │ │ (Elastic donation) │ │ container.id → namespace/pod/deployment│ │ │ └──────────────────────────┘ └─────────────────┬─────────────────────┘ │ │ │ │ │ ┌──────────────────────────────────▼───────────────────┐ │ │ │ batchprocessor · resourceprocessor · tail_sampling │ │ │ │ (same pipeline reused with traces and metrics) │ │ │ └──────────────────────────────────┬───────────────────┘ │ │ │ OTLP │ │ ▼ │ │ ┌──────────────────────────────────────────┐ │ │ │ otlp/profiles exporter → backend │ │ │ │ trace_id · span_id attributes preserved │ │ │ └──────────────────────────────────────────┘ │ └───────────────────────────────────────────────────────────────────────────────┘ │ ▼ ┌────────────────────────────────────────────────────────────┐ │ OpAMP Supervisor fleet (Instana GA · Bindplane) │ │ Remote config · health reporting · package management │ └────────────────────────────────────────────────────────────┘ # otel-collector-config.yaml — v0.151.0 recommended baseline extensions: opamp: # ❶ Register supervisor (Instana/Bindplane/etc.) server: ws: endpoint: wss://opamp.example.com/v1/opamp capabilities: reports_effective_config: true accepts_remote_config: true reports_health: true receivers: otlp: # metrics, traces, logs protocols: grpc: { endpoint: 0.0.0.0:4317 } http: { endpoint: 0.0.0.0:4318 } profiler: # ❷ fourth signal (Alpha) sampling_period: 19ms # ~50Hz, Elastic recommended default include_kernel: false metadata: include_pod_labels: true processors: k8sattributes: # ❸ container.id → namespace/pod/deployment auth_type: serviceAccount extract: metadata: [k8s.namespace.name, k8s.pod.name, k8s.deployment.name, k8s.node.name] batch: send_batch_size: 8192 timeout: 5s exporters: otlp: endpoint: backend.example.com:4317 sending_queue: enabled: true # ❹ v0.151.0 — send_failed metric now carries error.type / error.permanent service: extensions: [opamp] telemetry: metrics: level: detailed # required to inspect send_failed attributes pipelines: profiles: # ❺ new signal pipeline receivers: [profiler] processors: [k8sattributes, batch] exporters: [otlp] # otel-collector-config.yaml — v0.151.0 recommended baseline extensions: opamp: # ❶ Register supervisor (Instana/Bindplane/etc.) server: ws: endpoint: wss://opamp.example.com/v1/opamp capabilities: reports_effective_config: true accepts_remote_config: true reports_health: true receivers: otlp: # metrics, traces, logs protocols: grpc: { endpoint: 0.0.0.0:4317 } http: { endpoint: 0.0.0.0:4318 } profiler: # ❷ fourth signal (Alpha) sampling_period: 19ms # ~50Hz, Elastic recommended default include_kernel: false metadata: include_pod_labels: true processors: k8sattributes: # ❸ container.id → namespace/pod/deployment auth_type: serviceAccount extract: metadata: [k8s.namespace.name, k8s.pod.name, k8s.deployment.name, k8s.node.name] batch: send_batch_size: 8192 timeout: 5s exporters: otlp: endpoint: backend.example.com:4317 sending_queue: enabled: true # ❹ v0.151.0 — send_failed metric now carries error.type / error.permanent service: extensions: [opamp] telemetry: metrics: level: detailed # required to inspect send_failed attributes pipelines: profiles: # ❺ new signal pipeline receivers: [profiler] processors: [k8sattributes, batch] exporters: [otlp] # otel-collector-config.yaml — v0.151.0 recommended baseline extensions: opamp: # ❶ Register supervisor (Instana/Bindplane/etc.) server: ws: endpoint: wss://opamp.example.com/v1/opamp capabilities: reports_effective_config: true accepts_remote_config: true reports_health: true receivers: otlp: # metrics, traces, logs protocols: grpc: { endpoint: 0.0.0.0:4317 } http: { endpoint: 0.0.0.0:4318 } profiler: # ❷ fourth signal (Alpha) sampling_period: 19ms # ~50Hz, Elastic recommended default include_kernel: false metadata: include_pod_labels: true processors: k8sattributes: # ❸ container.id → namespace/pod/deployment auth_type: serviceAccount extract: metadata: [k8s.namespace.name, k8s.pod.name, k8s.deployment.name, k8s.node.name] batch: send_batch_size: 8192 timeout: 5s exporters: otlp: endpoint: backend.example.com:4317 sending_queue: enabled: true # ❹ v0.151.0 — send_failed metric now carries error.type / error.permanent service: extensions: [opamp] telemetry: metrics: level: detailed # required to inspect send_failed attributes pipelines: profiles: # ❺ new signal pipeline receivers: [profiler] processors: [k8sattributes, batch] exporters: [otlp] [Old workflow] APM shows a slow span → log into Pyroscope separately → align times manually → match function names manually → form a hypothesis → ~30–60 minutes on average [After Profiles Alpha] APM shows a slow span → click → call stack appears in the same view → container.id auto-attaches K8s context → ~2–5 minutes on average [Old workflow] APM shows a slow span → log into Pyroscope separately → align times manually → match function names manually → form a hypothesis → ~30–60 minutes on average [After Profiles Alpha] APM shows a slow span → click → call stack appears in the same view → container.id auto-attaches K8s context → ~2–5 minutes on average [Old workflow] APM shows a slow span → log into Pyroscope separately → align times manually → match function names manually → form a hypothesis → ~30–60 minutes on average [After Profiles Alpha] APM shows a slow span → click → call stack appears in the same view → container.id auto-attaches K8s context → ~2–5 minutes on average