Read on for an application of edge computing that is addressing one of the key concerns for security data lakes.
The dark side of fast alerting on limited volumes
The growing popularity of security data lakes can be traced back to the exploding volume of machine data. As shown in “Data Platforms Will Eat the SIEM”, traditional log analytics are breaking down in the face of relentless machine data growth.
Security teams that budgeted for last year's log volumes find that they can't come close to covering what their systems are now generating. Like a couple on a queen-size bed sharing a twin-size blanket, not everything can be covered and some things will be exposed.
In recent conversations with security organizations, I've heard stories of teams having to choose the one key source that they'll get to collect while the rest of their log data is left behind. Some log sources, such as network flow data, can be extremely valuable in the event of a breach but would hog the whole license, so they're not analyzed.
While security teams are adapting to this situation by shifting analytics from disk-based solutions like Splunk and Elasticsearch to cloud storage-based solutions like Snowflake, they have run into a trade-off when it comes to alert latency.
Volume vs. Latency Trade-off
Solutions designed for log analytics have long supported real-time alerting. Splunk, for example, supports real-time alerts that can trigger on data that has reached the central server’s port but not yet been indexed. This feature can be used for issues that require immediate action, typically in the form of automated workflows.
Security data lakes such as those built on the Snowflake platform support streaming data but were not designed for real-time analytics. Streaming to the data platform also tends to be batched for price-performance reasons. As a result, there is usually a delay of several minutes before the data is available, plus several more minutes before an alert query runs on its schedule.
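To make the latency math concrete, here is a minimal sketch of how batching and scheduled queries stack up. The five-minute figures are illustrative assumptions, not measured values for Snowflake or any other platform:

```python
# Illustrative worst-case alert latency for a batched data lake pipeline.
# An event can land just after a batch flush AND just after a scheduled
# alert query ran, so the two delays add up in the worst case.

BATCH_FLUSH_MINUTES = 5      # assumed time before streamed data is queryable
ALERT_SCHEDULE_MINUTES = 5   # assumed interval between alert query runs

def worst_case_latency(flush_minutes: int, schedule_minutes: int) -> int:
    """Worst-case minutes from event generation to alert evaluation."""
    return flush_minutes + schedule_minutes

print(worst_case_latency(BATCH_FLUSH_MINUTES, ALERT_SCHEDULE_MINUTES))  # 10
```

Even with generous assumptions, that is an order of magnitude slower than alerting on data in flight.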
Forced to choose between volume and latency, security teams usually prefer to alert a little later rather than being blind to an entire data source. This is the right choice from a security perspective but is also an opportunity for innovation.
Alerting at the source
Edge computing technology enables machine learning and alerting decisions to take place at the “edges” of the network. For a typical enterprise security organization, this means that analytics are applied at the servers and clusters before their events are sent out over the network.
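The idea above can be sketched as a host-local filter that evaluates each event before anything crosses the network. This is a simplified sketch, not any vendor's implementation, and the forwarding rules are hypothetical:

```python
# Minimal sketch of edge-side analytics: events are evaluated on the host
# that produced them, and only matching events ever leave the server.
# The rule set below is a hypothetical example.

FORWARD_RULES = (
    lambda e: e.get("severity", 0) >= 7,           # high-severity events
    lambda e: "failed login" in e.get("msg", ""),  # possible brute force
)

def should_forward(event: dict) -> bool:
    """Decide locally whether this event needs to be sent over the network."""
    return any(rule(event) for rule in FORWARD_RULES)

events = [
    {"severity": 3, "msg": "healthcheck ok"},
    {"severity": 9, "msg": "failed login from 203.0.113.7"},
]
forwarded = [e for e in events if should_forward(e)]
print(len(forwarded))  # 1
```

Because the decision happens where the event is generated, the alert path skips both the batching delay and the scheduled query interval.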
This is where innovation is happening in the form of anomaly detection with distributed technologies like federated learning. According to Hacker Noon,
In the traditional AI methods sensitive user data are sent to the servers where models are trained…
In contrast to the traditional AI methods, Federated Learning brings the models to the data source or client device for training and inferencing. The local copies of the model on the device eliminate network latencies and costs incurred due to continuously sharing data with the server.
Both real-time security alerts and operational monitoring can be enabled with this approach. By using “fast” edge computing technology to complement “deep” security data lake analytics, the volume vs. latency trade-off can be eliminated.
Edge computing in action
One of the leading vendors in the edge computing space is Edge Delta. I first spoke with the Edge Delta team a few weeks ago when they were helping a customer deal with a traditional log analytics solution that was on track to surpass $2M in annual spend. The security and DevOps budget for log analytics was capped, and the customer was facing growing blind spots in their log visibility.
To fix this situation, the customer rolled out Edge Delta agents to servers where logs and metrics data would be pre-processed using federated learning and rule logic. Only when an anomaly or threat was detected would the relevant server's logs be sent to the SIEM. This meant faster alerting and a significant drop in collected volumes.
Where data volume limits had caused blind spots, now all devices could be monitored.
What about the rest of the events, those not tied to anomaly detections? All event logs would be shipped in parallel to the customer’s security data lake. There they could be cost-effectively analyzed for compliance, threat hunting and incident response.
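The dual-path design described above can be sketched in a few lines. This is a conceptual sketch under my own assumptions, with plain lists standing in for the real SIEM and data lake destinations, and a placeholder anomaly score in place of the agent's actual detection logic:

```python
# Sketch of dual-path routing: every event goes to the data lake for
# compliance, threat hunting, and incident response, while only
# anomaly-related events also take the low-latency path to the SIEM.
# Sinks are plain lists; the anomaly check is a hypothetical placeholder.

siem: list[dict] = []
data_lake: list[dict] = []

def is_anomalous(event: dict) -> bool:
    # Placeholder for the agent's federated-learning / rule logic.
    return event.get("anomaly_score", 0.0) > 0.8

def route(event: dict) -> None:
    data_lake.append(event)   # full fidelity, cost-effective "deep" path
    if is_anomalous(event):
        siem.append(event)    # "fast" path for real-time alerting

for e in [{"anomaly_score": 0.1}, {"anomaly_score": 0.95}]:
    route(e)

print(len(data_lake), len(siem))  # 2 1
```

Nothing is dropped: the data lake keeps the complete record, while the SIEM sees only the small, high-value slice that warrants immediate action.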
Security data lake ecosystem is evolving rapidly
Innovative vendors like Edge Delta are stepping up to help security and DevOps teams realize the potential of data platforms like Snowflake. Machine learning has long seemed like an empty promise to many InfoSec practitioners, but we are now seeing results from AI concepts like federated learning applied to real problems: fast alerting and eliminating visibility gaps in security data lakes.
Originally published by Omer Singer on medium.com.