
Observability Pipelines: Monitoring Tomorrow’s Applications at Scale

Sep 14, 2023 / 1:47
David Wynn, Principal Solutions Architect at Edge Delta, discusses a true bottleneck in monitoring applications and why DevOps teams have been looking into observability pipelines.


There are a number of reasons that DevOps teams have been looking into observability pipelines. Some of them are economic, some of them are performance related. However, the reason I don't hear discussed often, and the one that will become a true bottleneck at some point, is the speed of light.

This is a chart often cited by architects during cloud migrations. New cloud customers often forget that the speed of light limits transfer speeds, and that this has a meaningful impact on how much data can be moved across the wire at scale. Normally, the red, yellow, green tolerance band is much wider for cloud migrations, because that data can be batch uploaded or has historical significance. I've recolored that band here specifically for observability, where the half-life of data is much shorter and data immediacy is much more vital.

Simply put, this shows that there is a hard limit to how much data you can meaningfully transfer across the wire and still have it be useful for observability. Even if you assume all your other costs are entirely negligible over a fiber line, which uses light speed to transfer data, the maximum amount of information you can programmatically transfer in an acceptable timeframe is somewhere between 10 GB and 100 GB, period. That is the hard cap unless we can figure out a way around the speed of light.
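As a back-of-the-envelope sketch of the math behind that kind of chart: ideal transfer time is just size divided by bandwidth. The 1 Gbps link speed below is an illustrative assumption, not a figure from the talk, and the model deliberately ignores latency, protocol overhead, and congestion, so real-world times only get worse.

```python
def transfer_seconds(size_bytes: float, bandwidth_bps: float) -> float:
    """Ideal time to move size_bytes over a link of bandwidth_bps,
    ignoring latency, protocol overhead, and congestion."""
    return size_bytes * 8 / bandwidth_bps  # bytes -> bits, then divide by line rate

if __name__ == "__main__":
    # Assumed 1 Gbps line; sizes bracket the 10-100 GB range mentioned above.
    for size_gb in (10, 100, 1000):
        secs = transfer_seconds(size_gb * 1e9, 1e9)
        print(f"{size_gb:>5} GB over 1 Gbps: ~{secs / 60:.1f} min")
```

Even under these idealized assumptions, 100 GB takes on the order of a quarter hour to move over a 1 Gbps link, which is a long time if you need that telemetry for an incident happening right now.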

It's not a limit that many companies are facing today, but it's not hard to see a future where distributed processing for observability data is not only preferred but required to keep up with monitoring tomorrow's applications at scale.