# Pyroscope Configuration

## Deployment in Our Setup
Pyroscope runs as a single binary in the `monitoring` namespace with an Azure Blob Storage backend.
```yaml
# Key Helm values
pyroscope:
  replicas: 1
  resources:
    requests:
      cpu: 100m
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 2Gi
```
## Collection via Grafana Alloy
All profile data flows through Grafana Alloy, which runs as a DaemonSet on every node (required for eBPF access).
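With a Helm-based install of Alloy, the DaemonSet mode and the privileged access that eBPF needs can be sketched roughly as follows (the key names are assumptions about the `grafana/alloy` chart and may differ between chart versions):

```yaml
# Sketch of grafana/alloy Helm values; verify key names against your chart version
controller:
  type: daemonset      # one Alloy pod per node
alloy:
  securityContext:
    privileged: true   # eBPF profiling needs privileged access
    runAsUser: 0
```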
### eBPF Profiling (Cluster-Wide)
eBPF profiling captures CPU profiles from all processes on the node without any code changes:
```river
pyroscope.ebpf "instance" {
  forward_to     = [pyroscope.write.default.receiver]
  targets_only   = false
  default_target = {"service_name" = "unspecified"}
  demangle       = "none"
  sample_rate    = 97 // Hz; avoids lockstep sampling
}
```
Key settings:

- `sample_rate = 97`: intentionally not a round number, to avoid synchronization artifacts
- `targets_only = false`: profiles all processes, not just discovered targets
- Privileged mode required: the Alloy DaemonSet must run as privileged for eBPF access
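Why the odd rate matters: if the sampler fired at exactly 100 Hz and a workload also woke on a 10 ms timer, every sample would land at the same phase of the work loop, and the profile would over-represent whatever runs at that phase. A small, self-contained illustration of the effect (this is not Alloy's code; the 1 ms phase buckets are an arbitrary choice for the demo):

```go
package main

import "fmt"

// phases counts which 1 ms phase buckets of a periodic workload
// (workPeriodMs long) get hit by n samples taken at sampleHz.
func phases(sampleHz, workPeriodMs float64, n int) map[int]int {
	counts := map[int]int{}
	samplePeriod := 1000.0 / sampleHz // ms between samples
	for i := 0; i < n; i++ {
		t := float64(i) * samplePeriod
		phase := int(t) % int(workPeriodMs) // coarse 1 ms phase bucket
		counts[phase]++
	}
	return counts
}

func main() {
	// 100 Hz against a 10 ms loop: every sample hits the same phase.
	fmt.Println("100 Hz buckets hit:", len(phases(100, 10, 1000))) // 1
	// 97 Hz drifts through the loop and covers all phases.
	fmt.Println(" 97 Hz buckets hit:", len(phases(97, 10, 1000))) // 10
}
```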
### SDK-Based Scraping
For applications that expose pprof-compatible endpoints (Go, Python with Pyroscope SDK):
```river
pyroscope.scrape "cpu" {
  targets    = discovery.relabel.pyroscope_pods.output
  forward_to = [pyroscope.write.default.receiver]

  profiling_config {
    profile.process_cpu { enabled = true }
    profile.memory      { enabled = true }
    profile.mutex       { enabled = true }
    profile.block       { enabled = true }
    profile.goroutine   { enabled = true }
  }
}
```
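The `discovery.relabel.pyroscope_pods` component referenced above is not shown in this document; a sketch of what it might look like, keying on the pod annotations (component names and rules here are assumptions, not the exact setup):

```river
// Sketch: discover pods and keep only those that opt in via annotation.
discovery.kubernetes "pods" {
  role = "pod"
}

discovery.relabel "pyroscope_pods" {
  targets = discovery.kubernetes.pods.targets

  // Keep only pods annotated profiles.grafana.com/cpu_scrape: "true".
  rule {
    source_labels = ["__meta_kubernetes_pod_annotation_profiles_grafana_com_cpu_scrape"]
    action        = "keep"
    regex         = "true"
  }

  // Scrape "<pod IP>:<annotated port>".
  rule {
    source_labels = ["__meta_kubernetes_pod_ip", "__meta_kubernetes_pod_annotation_profiles_grafana_com_port"]
    separator     = ":"
    target_label  = "__address__"
  }
}
```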
Enable SDK scraping per pod via annotations:
```yaml
metadata:
  annotations:
    profiles.grafana.com/cpu_scrape: "true"
    profiles.grafana.com/port: "6060"
```
### Write Endpoint
```river
pyroscope.write "default" {
  endpoint {
    url = "http://pyroscope.monitoring.svc.cluster.local:4040"
  }
}
```
## Storage
Pyroscope supports multiple storage backends:
| Backend | Use Case |
|---|---|
| Local disk | Development, testing |
| Azure Blob Storage | Production (used in our setup) |
| Amazon S3 | Production |
| Google Cloud Storage | Production |
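For reference, the object-storage section of Pyroscope's configuration for the Azure backend looks roughly like this (the account values are placeholders, and the exact schema should be checked against the Pyroscope docs for your version):

```yaml
# Sketch of Pyroscope storage config for Azure Blob Storage
storage:
  backend: azure
  azure:
    account_name: <storage-account>
    account_key: <key-from-a-kubernetes-secret>
    container_name: pyroscope-data
```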
## Grafana Integration

Pyroscope is integrated into Grafana as a native datasource with the `grafana-pyroscope-app` plugin enabled.
### Correlation with Other Signals
The greatest value comes from correlating profiles with other telemetry:
```
📈 Metric: CPU spike → 95%
│
├── 🔍 Trace: GET /api/reports (span: 12.5s)
│     └── 🔬 Profile: json.Marshal() → 78% CPU
│
└── 🪵 Log: "Report generation completed" (12.5s)
```
- Span → Profile: Click a slow span in Tempo to see which functions cause the slowdown
- Metric → Profile: Navigate from a Grafana dashboard to a flame graph for the same time period
- Profile → Log: Flame graph points to a function → check logs for what happened inside
## Accessing Pyroscope
```shell
# Port-forward to localhost
kubectl port-forward -n monitoring svc/pyroscope 4040:4040

# Open in browser: http://localhost:4040
```
In the workshop environment, Pyroscope is accessible through Grafana’s Explore view using the Pyroscope datasource.