When it comes to processing observability data, teams often still use a centralize-then-analyze approach. This means they are routing all or most of their observability data to a downstream destination before analyzing it. From the central platform, teams manually build monitors and customize their dashboards as they see fit.
As teams continue to shift toward microservices, the volume of data companies generate has become too large for traditional downstream platforms to handle. As a result, indexes become bogged down and platform performance degrades. Teams must either accept a slower, more costly approach to observability or simply drop some of their data – neither of which is acceptable by today’s standards.
These analytics are crucial for daily monitoring tasks, but teams that aren’t indexing all of their data lose visibility into the discarded datasets. Teams need a way to process their data further upstream to retain that visibility – regardless of whether their log data is indexed in its raw format.
How it Works
To help teams overcome this challenge, Edge Delta uses Processors. A Processor is regex-based monitoring logic that analyzes your log data as it’s created, automatically and at the agent level.
We help teams change where data is processed and how soon they can access the context they need. Since all data is processed and analyzed up-front, teams no longer have to make the tough decision of which data to index and which to drop. Moreover, Processors help teams extract monitoring KPIs from their log data and reduce noise.
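As a rough sketch, a regex-based Processor pulls structured fields out of each log line the moment the agent sees it. The log format and pattern below are hypothetical, illustrative stand-ins rather than Edge Delta’s actual Processor syntax:

```python
import re

# Hypothetical log format and regex -- illustrative only, not
# Edge Delta's actual Processor configuration.
LOG_PATTERN = re.compile(
    r"(?P<level>INFO|WARN|ERROR)\s+(?P<endpoint>/\S+)\s+(?P<latency_ms>\d+)ms"
)

def process_line(line):
    """Analyze a log line as it's created, at the agent level."""
    match = LOG_PATTERN.search(line)
    if match is None:
        return None  # line doesn't match this Processor's pattern
    fields = match.groupdict()
    fields["latency_ms"] = int(fields["latency_ms"])
    return fields

print(process_line("2024-01-01 ERROR /checkout 512ms"))
# -> {'level': 'ERROR', 'endpoint': '/checkout', 'latency_ms': 512}
```

Because the extraction happens at the agent, the structured fields are available up-front, before any data is shipped or indexed downstream.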
Edge Delta Processors
To achieve this, Edge Delta uses several different Processors, including Regex, Ratio, Top K, and Trace Processors. In this blog we’re going to focus on two buckets: Dimensional Processors and Cluster Processors.
Dimensional Processors Extract Metrics from Logs
Dimensional Processors extract dimensions from log data to create time series metrics. These metrics are visualized in our Metrics Explorer, where you can track each KPI over time.
Edge Delta also baselines these metrics to establish what’s ‘normal’ for your monitoring KPIs. When a metric lands outside the normal threshold, teams receive automated alerts paired with the associated data needed for troubleshooting. Now you know right away when something abnormal occurs, why it’s happening, and where it went wrong.
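A minimal sketch of the baselining idea, using a rolling-window standard-deviation check as a stand-in for Edge Delta’s actual algorithm (the window size and 3-sigma threshold are assumptions for illustration):

```python
from collections import deque
from statistics import mean, stdev

class MetricBaseline:
    """Illustrative baseline: flag values more than `sigma` standard
    deviations from a rolling window's mean. A simplified stand-in,
    not Edge Delta's actual baselining logic."""

    def __init__(self, window=60, sigma=3.0):
        self.values = deque(maxlen=window)
        self.sigma = sigma

    def observe(self, value):
        """Return True when the value lands outside the normal threshold."""
        anomalous = False
        if len(self.values) >= 2:
            mu, sd = mean(self.values), stdev(self.values)
            anomalous = sd > 0 and abs(value - mu) > self.sigma * sd
        self.values.append(value)
        return anomalous

baseline = MetricBaseline(window=10)
for v in [100, 102, 98, 101, 99, 103, 100]:
    baseline.observe(v)       # establish what's "normal"
print(baseline.observe(500))  # -> True: well outside the baseline
```

In practice the anomalous value would trigger an alert carrying the associated log context, rather than just a boolean.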
With how much data companies generate on a daily basis, taking a more proactive approach will prove essential to ensure applications are running smoothly. Additionally, by extracting these KPIs from your log data, you can populate metrics dashboards and gain real-time visibility into high-value datasets.
Cluster Processors Create Patterns in Your Logs
Cluster Processors work by finding common patterns in logs and decoupling them. This serves two purposes:
1. Monitoring and Troubleshooting
Cluster Processors help you reduce the noise in your log data by grouping common log lines together. This way, you can quickly understand each event instead of manually sifting through logs line by line. When an issue occurs, these Processors filter through your data and surface what’s relevant for solving the problem.
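One simplified way to picture this grouping: mask the variable tokens in each line so that similar log lines collapse into a single pattern. The masking rules below are illustrative assumptions, not Edge Delta’s clustering algorithm:

```python
import re
from collections import Counter

def to_pattern(line):
    """Mask variable tokens (hex ids, numbers) so similar log lines
    collapse into one cluster pattern. Illustrative only."""
    line = re.sub(r"0x[0-9a-f]+", "<HEX>", line)
    return re.sub(r"\d+", "<NUM>", line)

logs = [
    "user 4521 logged in from 10.0.0.12",
    "user 7738 logged in from 10.0.0.45",
    "disk usage at 91 percent",
]
clusters = Counter(to_pattern(l) for l in logs)
for pattern, count in clusters.most_common():
    print(count, pattern)
# 2 user <NUM> logged in from <NUM>.<NUM>.<NUM>.<NUM>
# 1 disk usage at <NUM> percent
```

Three raw lines become two patterns; at production volumes, millions of lines can collapse into a short list of distinct events.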
2. Observability Pipelines
Cluster Processors enable you to route summaries of your log data downstream instead of complete raw datasets. For example, say one of your data sources generates a significant volume of INFO logs that your team doesn’t query often. Instead of indexing that data in its raw format, you can send a log cluster. Applied at scale, this strategy can save significantly on your observability bill while preserving your team’s visibility into the log data.
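As a rough sketch of the savings, shipping one summary record per cluster in place of the raw lines might look like this (the record shape and field names are hypothetical):

```python
import json

# Hypothetical raw INFO logs that the team rarely queries.
raw_logs = [f"INFO request {i} served in {i % 7} ms" for i in range(10_000)]

# Instead of indexing every line, ship one cluster summary record.
# Field names here are made up for illustration.
summary = {
    "pattern": "INFO request <NUM> served in <NUM> ms",
    "count": len(raw_logs),
    "window": "PT1M",  # hypothetical aggregation window
}

raw_bytes = sum(len(l) for l in raw_logs)
summary_bytes = len(json.dumps(summary))
print(f"raw: {raw_bytes} B, summary: {summary_bytes} B")
```

The summary is a tiny fraction of the raw payload, yet it still tells the team which event occurred and how often.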
Cluster Processors work in tandem with other Processors. For example, the Top K Processor shows the frequency of log events, and the Ratio Processor shows the proportion of similar events to non-similar events. So, you can easily see how much volume each cluster is consuming.
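A toy illustration of how Top K and Ratio views fall out of cluster counts (the cluster ids and counts are made up, and this is a simplification of what those Processors do):

```python
from collections import Counter

# Made-up cluster ids, one per log event.
events = ["A"] * 50 + ["B"] * 30 + ["C"] * 15 + ["D"] * 5

counts = Counter(events)
total = sum(counts.values())

# Top K view: the most frequent clusters...
for cluster, count in counts.most_common(2):
    # ...and a Ratio-style view: each cluster's share of total volume.
    print(cluster, count, f"{count / total:.0%}")
# A 50 50%
# B 30 30%
```

Together, the two views show at a glance which clusters dominate your log volume.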
Take a Proactive Approach to Observability Data
Edge Delta’s approach to observability offers a proactive and efficient solution to handle the increasing volume of data. By using Processors that analyze data at the agent level, teams can extract valuable metrics and troubleshoot issues faster without sacrificing visibility or incurring extra costs.