The OpenTelemetry Protocol (OTLP) is a core component of the OpenTelemetry (OTel) project, and it is quickly becoming the backbone of modern observability. OTLP provides a standardized way to transport logs, metrics, and traces between systems, replacing fragmented legacy transmission formats with a single, vendor-neutral protocol. It is built on efficient, widely adopted technologies like gRPC and Protocol Buffers (Protobuf) to optimize performance and reduce overhead, which makes it especially well-suited for modern cloud-native environments.
The OTel Collector is an open-source solution that enables teams to route OTLP data to any compatible destination. However, it doesn’t provide the full control, visibility, and flexibility needed to manage telemetry data at enterprise scale. With Edge Delta’s Telemetry Pipelines, teams can easily pair the power of OTLP with real-time data flow visibility, intelligent processing recommendations, and full routing control, which enables them to optimize their troubleshooting workflows and reduce downstream costs.
In this blog post, we’ll cover the basics of OTLP and demonstrate how to set up OTLP-based routing within an Edge Delta Telemetry Pipeline.
What Is OTLP, and How Does It Work?
OTLP is a general-purpose delivery protocol that defines how telemetry data is encoded, transported, and delivered between sources and destinations. It supports both gRPC and HTTP transports and uses Protocol Buffers for efficient binary serialization, which significantly reduces payload size, minimizing network overhead and improving transmission speed.
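In practice, these two transports listen on well-known default ports: 4317 for OTLP/gRPC and 4318 for OTLP/HTTP. A minimal OpenTelemetry Collector receiver configuration that accepts both looks like this:

```yaml
# Accept OTLP over both supported transports.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # OTLP/gRPC default port
      http:
        endpoint: 0.0.0.0:4318   # OTLP/HTTP default port
```

Most OTLP-compatible tools follow the same port conventions, so SDKs and agents can usually point at either endpoint without extra translation.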
OTLP also carries rich semantic context defined by the OpenTelemetry data model, including resource attributes, trace context, and instrumentation metadata. This enables seamless cross-signal correlation without complex configuration, allowing downstream systems to automatically link logs, metrics, and traces.
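To make this concrete, here is a simplified OTLP/JSON log record. The field names follow the OTLP data model (resource attributes, trace context on the record itself); the values shown are illustrative:

```json
{
  "resourceLogs": [{
    "resource": {
      "attributes": [
        { "key": "service.name", "value": { "stringValue": "ad-service" } },
        { "key": "deployment.environment", "value": { "stringValue": "demo" } }
      ]
    },
    "scopeLogs": [{
      "logRecords": [{
        "severityText": "ERROR",
        "body": { "stringValue": "GetAds method unavailable" },
        "traceId": "5b8efff798038103d269b633813fc60c",
        "spanId": "eee19b7ec3c1b174"
      }]
    }]
  }]
}
```

Because the `traceId` and `spanId` ride along with every log record, a downstream backend can join this log to its trace automatically, with no custom correlation logic.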
As OTLP adoption grows, many observability vendors have added OTLP-compliant ingestion endpoints to support standardized telemetry workflows. Edge Delta’s Telemetry Pipelines are designed with this flexibility in mind, allowing users to route OTLP data to any destination that supports the protocol.
Routing OTLP Data into New Relic with Edge Delta
In this scenario, we’re monitoring the OpenTelemetry Demo locally, using Grafana and Jaeger for visualization and analysis. However, we want to enhance our observability workflows with New Relic’s dashboards and trace analytics for a more robust, production-grade experience.
First, we’ll spin up an Edge Delta Telemetry Pipeline and configure it to receive OTLP data. While Edge Delta’s Telemetry Pipelines can fully replace the OTel Collector, we’ll keep the Collector running in the demo setup for simplicity and export data to Edge Delta from there:
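On the Collector side, exporting to Edge Delta is a standard OTLP exporter entry. A sketch of that configuration is below; the endpoint is a placeholder, so substitute the OTLP source address of your own Edge Delta deployment:

```yaml
# Forward all signals from the Collector to the Edge Delta pipeline.
exporters:
  otlp/edgedelta:
    endpoint: edge-delta-host:4317   # placeholder -- your Edge Delta OTLP source
    tls:
      insecure: true                 # local demo only; enable TLS in production

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/edgedelta]
    metrics:
      receivers: [otlp]
      exporters: [otlp/edgedelta]
    logs:
      receivers: [otlp]
      exporters: [otlp/edgedelta]
```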
Next, we’ll add an OTLP Destination node to our pipeline, which supports any OTLP-compatible endpoint across a wide range of downstream platforms. In this case, we’ll configure it to send data to New Relic’s OTLP endpoint. To complete the setup, we’ll include an api-key header for authentication and enable TLS (both of which New Relic requires for OTLP ingestion).
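Conceptually, the destination settings amount to the following. This is an illustrative sketch only; the exact node schema comes from Edge Delta’s pipeline builder, and the key names here are for explanation rather than copy-paste:

```yaml
# Hypothetical OTLP destination node settings (illustrative key names).
- name: new_relic_otlp
  type: otlp_output                     # hypothetical node type name
  endpoint: otlp.nr-data.net:4317       # New Relic's OTLP/gRPC endpoint
  tls:
    enabled: true                       # required by New Relic
  headers:
    api-key: ${NEW_RELIC_LICENSE_KEY}   # license key used for authentication
```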
After saving the configuration, telemetry data will immediately begin flowing into New Relic’s backend:
From here, we can explore traces more deeply, leveraging New Relic’s pre-built dashboards and troubleshooting tools to investigate system behavior. For example, we can see in the trace explorer view that a few of our “GET” requests from the “load-generator” service are experiencing some errors:
We can dive deeper by inspecting the associated log data to learn more about these failing requests. For this particular trace, the logs show that the “GetAds” method is unavailable, causing the ad service to fail its response and delay the full request lifecycle. From here, we can explore logs generated by the method to help us pinpoint the root cause and initiate remediation.
This correlation is only possible because we’re sending metadata with our logs and traces via OTLP — without it, linking log and trace data together would be far more difficult.
Enhance Downstream Analysis with Edge Delta’s Pre-Index Processing
In addition to flexible data routing, Edge Delta also allows us to pre-process our OTLP data before it’s indexed downstream.
For instance, if we’re struggling to control data ingestion in our New Relic instance, we can easily apply sampling to intelligently reduce log volumes without introducing blind spots. Additionally, we can strengthen downstream analysis by enriching trace spans with custom metadata tags, giving us more context during troubleshooting:
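For comparison, the same two ideas can be expressed as standard OpenTelemetry Collector processors: a probabilistic sampler to cut trace volume, and an attributes processor to tag spans with extra context. Edge Delta exposes equivalent controls through its pipeline builder:

```yaml
# Equivalent concepts as stock OTel Collector processors (for comparison).
processors:
  probabilistic_sampler:
    sampling_percentage: 25      # keep roughly 1 in 4 traces
  attributes/enrich:
    actions:
      - key: team
        value: checkout          # illustrative custom metadata tag
        action: insert           # add the tag only if not already present
```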
Live Capture lets us preview these transformations in real time by viewing deserialized data pre- and post-processing, which helps us immediately understand their impact on data quality and cost savings.
Conclusion
OTLP enables data to move freely between tools, eliminating the need for custom exporters or translation layers. By combining the flexibility of OTLP with the control of Edge Delta’s Telemetry Pipelines, teams can unify telemetry workflows, optimize data in-flight, and avoid vendor lock-in, all while improving performance, cost-efficiency, and operational visibility.
To learn more about how Edge Delta’s Telemetry Pipelines enable efficient routing via OTLP, explore our free Playground, or schedule a demo with an expert today.