Treat Security Analysts like Analytics


You’ve finally done it. You studied security analytics for years, got smart on network and endpoint data, implemented amazing analytics, and even managed to toss some machine learning onto your stack. Your company conducts a penetration test, and your solution detects all of the campaigns! There are several false positives and duplicate incidents, but the number of alerts generated falls under a manageable threshold. You push your security analytics to production. You celebrate your success, but you know that security is never done. You keep improving your analytics based on emerging threat intelligence and better data from constant network monitoring and security validation.

The following month brings another penetration test. You feel confident after the last one, and rightfully so. Your improved analytics work on new attacks just as well as they did last month, and your confusion matrix looks fantastic! Your organization decides, however, to insert a small wrinkle into this month’s penetration test. They include the analysts in the test and turn it into a full-on red vs. blue exercise. Since you were sure to keep the number of alerts under the analyst-defined threshold, you are not worried. You await the results of the exercise.
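For concreteness, a “fantastic” confusion matrix means high precision and recall on the alerts themselves. A minimal sketch of those metrics, using hypothetical alert counts (these numbers are illustrative, not from the story):

```python
# Hypothetical confusion-matrix counts from a month of penetration testing.
tp, fp, fn, tn = 45, 5, 3, 947  # true positives, false positives, false negatives, true negatives

precision = tp / (tp + fp)  # fraction of alerts that were real attacks
recall = tp / (tp + fn)     # fraction of real attacks that produced an alert
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```

Metrics like these describe the analytics in isolation; as the story shows, they say nothing about whether an analyst can act on the alerts.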

The red vs. blue exercise was a massive failure for your SOC. How could this happen? The analysts completely failed to identify the attack traffic. They threw out most of the actual attack traffic as false positives, and they heavily investigated the alerts that really were false positives. They accurately identified only a single campaign, and they did not classify it as a true positive quickly enough to perform incident response. Of course, their failure is not your fault. Your analytics, after all, correctly identified all of the campaigns! You minimized duplicates and false positives, and you ensured that the number of alerts generated was manageable. You recommend to your CISO that the analysts get better training because clearly they are the problem. Right?

While this story is hypothetical, this type of behavior happens all the time. I often hear, “the tool is only as good as the analyst behind the console.” While I agree that security analysts are part of the equation, the engineers who develop content and analytics for a SOC are also responsible for analyst performance. If you generate alerts that are difficult to understand, then you cannot blame the analyst for misclassifying the incident.

This problem appears most acutely in machine learning for security. Many machine learning algorithms have very poor interpretability. In other words, the algorithm works great, but you cannot figure out what caused it to trigger an incident. When you investigate the incident and perform root cause analysis, you have very little idea of where to start.
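One mitigation is to ship an explanation alongside every alert, so the analyst sees which signals drove the score. A minimal sketch, assuming a simple linear anomaly score; the feature names and weights below are hypothetical, not from any particular product:

```python
# Hypothetical per-feature weights for a linear anomaly score.
WEIGHTS = {
    "bytes_out_zscore": 0.5,
    "rare_port": 0.3,
    "new_external_dest": 0.2,
}

def explain_alert(features):
    """Return the anomaly score plus features ranked by their contribution."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items() if name in WEIGHTS}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return score, ranked

score, why = explain_alert(
    {"bytes_out_zscore": 4.2, "rare_port": 1.0, "new_external_dest": 1.0}
)
# `why` lists features from most to least responsible for the alert,
# giving the analyst a starting point for root cause analysis.
```

For opaque models, post-hoc techniques such as permutation importance or SHAP values serve the same purpose: turning “the model fired” into “the model fired mostly because of these inputs.”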

When we work with analytics and algorithms, we tend to treat them better than we treat people. If an algorithm does not work, we figure out why. We change the data format, engineer new features, or identify new data sources. We adapt the data we feed the algorithm until it performs satisfactorily. We must take the same approach with our analysts. If analysts cannot properly classify and act on the data in the format we provide, we have to continuously improve the information we pass to them. Although our analysts are also IT professionals, they are still users! Security engineers are not exempt from user-centered design!

If you engineer SOC solutions and you find yourself blaming analysts for poor attack detection and incident response, instead consider how you can better help them characterize the attacks. Yes, your analysts need training and can always get better. This does not, however, exempt you from taking responsibility for poor content, regardless of how good your metrics are. If you treat your analysts as well as you treat your analytics, you will be off to a good start.

Quick side note. If you are a SOC manager, please do not treat your analysts like robots. They need time off, upward mobility, regular raises, lunch, continuing education, and personal attention, among other human needs. Please treat them like human beings.