This is a blog post I imported from another platform. I got some great feedback on it and thought it’d be good to share it here and keep things in one spot!
What is a rule, really?
Dracula refuses a call with a security vendor
For as long as I have been in the security industry, there has been a concerted effort to sort through massive troves of data with powerful and mysterious tools called “rules”. These rules allow us mere mortals to take a million-line logfile and separate each line into two buckets: interesting or not interesting, malicious or not malicious, vulnerable or not vulnerable. If you know what “bad” or “vulnerable” looks like, you can codify it and let the computer do the sorting for you.
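To make that concrete, here is a minimal sketch of what a “rule” boils down to. The regex (a crude path-traversal check) and the log filename are illustrative assumptions, not anything from a real product:

```python
import re

# A "rule" at its simplest: a named pattern that sorts log lines into
# two buckets. The regex (a crude path-traversal check) and the
# filename "access.log" are illustrative assumptions.
RULE = {
    "name": "path-traversal-attempt",
    "pattern": re.compile(r"\.\./|%2e%2e%2f", re.IGNORECASE),
}

interesting, not_interesting = [], []
with open("access.log") as logfile:
    for line in logfile:
        bucket = interesting if RULE["pattern"].search(line) else not_interesting
        bucket.append(line)

print(f"{len(interesting)} interesting lines, {len(not_interesting)} not interesting")
```

Everything that follows in this post, from WAF signatures to cloud detections, is some elaboration of this sort-into-buckets idea.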
I cut my teeth in security research, writing WAF rules for ModSecurity and hunting for interesting HTTP-based attacks on behalf of a customer base. I also launched the security detection and research teams at startups that are now public. At my current gig, I help my organization write detection content against hundreds of data sources, with terabytes of cloud-based control-plane and data-plane events flowing through our systems. Seeing how detection and research have evolved over my 10+ year career has been both rewarding and tiring.
The security elders at my previous companies would scoff at my WAF rules. They would tell me about a time when vulnerability scanner rules were the only thing that mattered. A team of researchers would feverishly comb through binaries with RE tools like something out of the Matrix. When they found a vulnerability, they would rush out a rule so their company would be the first to disclose it and have a detection before their competitors.
A security researcher from McAfee deploys a new rule to their vulnerability scanner (2003, colorized)
At the end of the day, this fell into the realm of "security research". Companies would scoop up new grads and old heads alike, put them on a security research team, and set them to work. They would then measure how many rules and detections the team could push into production each month. Hopefully, it was enough to claim that their products protected customers from more attacks and vulnerabilities than their competitors' did.
This go-to-market strategy can be effective, but it suffers diminishing returns. It raises the question: why is "more" better, and why is "lots more" lots better? In the age of vulnerability scanners, more rules meant more vulnerabilities detected. That translates to better coverage, which is a great sales statistic. The same pursuit of coverage crept into threat detection products, but threats are not vulnerabilities. Sure, you want coverage against an overwhelming number of threats, but will that actually protect you and your firm? Can you cover "all" threats forever? More than a competitor, more than a threat actor? Probably not.
This culture of more-is-better has caused burnout and pain for researchers at these companies. It doesn't matter if you wrote an exceptional rule that was relevant, contextual, and precise: in the game of quotas, it carried the same weight as a bad rule with bad results. When detection counts are up, the sales engine revs up and rushes to the pipeline to close more deals.
Detection rules are like stonks; they can only go up
Threat detection is dead. Long live threat detection!
The security research team in these times (maybe not as much now, but I have a recency bias) was treated like wizards. They were the identity of the company. They had research teams with cringe-inducing names, such as the IBM Hacker Ninjas or the McAfee Alpha Bro Exploiter Extraordinaires. The wizards would come down from their spire, preach their latest findings to the world, and present at Black Hat and DEF CON. Afterward, they would head back up the spire and close the door behind them. Their rules, research, and detections would be left for others to deal with. They had bigger things to worry about, like writing more rules to hit that damn quota.
In my opinion, this concept of "more is better" for detection rules is a sign that a company is either a) stuck in the past of vulnerability-research coverage or b) unsure of what it is doing, and doing as much as possible to hide that fact. I was part of this a few times in my career.
Now, I am not saying that you shouldn’t crank out rules for the sake of coverage. There are legitimate reasons to write, deploy, and maintain a vast ruleset. What I am saying is that we got into this mess because we think more coverage means more secure. This fallacy can lead internal teams, or in my case a product detection team, down rabbit holes that aren't fruitful in the long run. And the further I get into my career, the more I realize that I can’t solely blame sales or marketing people for this strategy. It's up to us, the researchers, to let them know which path is the more fruitful one and why.
When a company relies heavily on a research team to pump out content, it needs to ensure that the team has the right people supporting it. This lets the team focus on the nuances of security detection. Companies should provide access to project management resources and software engineering capabilities to scale rule-writing efforts and infrastructure, and they should measure the impact of rules using tried-and-tested methods from everyone’s favorite high school class: statistics.
I think the industry is starting to see that security detection and research, done for the sole purpose of writing threat detection rules, is evolving into a more advanced and exciting role: the Detection Engineer!
Detection Engineering is the new hotness, but it requires solid foundations in more than just security subject matter expertise
Detection Engineering, in my opinion, is the next level of security research. It's an evolution because companies have realized that it's more scalable to require security researchers to have skills in software engineering, project management, and statistics. If you want to scale your detection program, you need to hire a Detection Engineering team that can complement each other in the following areas:
1. Subject matter expertise in security
2. Software engineering
3. Statistics
That's it. That's all you need. Of course, this list can be picked apart, stretched, and folded under other areas like DevOps or Infrastructure. However, these three pillars can get you far without having to hire a ton of bodies.
You can't write detections for your network security product if you don't have network security experts. The same goes for endpoint, cloud, application, and host-based detections. It’s like having a bunch of data scientists build a machine-learning model to detect asthma in patients but forgetting to bring in a doctor who could show them how pneumonia patients would trigger false positives. You need subject matter experts. This has not changed in the industry, nor should it.
What has changed is that these experts need a solid basis in software engineering principles. Without lots of bodies or automation, you can't scale all those detections and deploy them in a modern environment, manage sprints (yes, this is software engineering :)), or write unit, integration, and regression tests. I can confidently say my boss would rather hear that I can scale the problem away with software than with hiring more people.
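As a taste of what that looks like in practice, here is a minimal sketch of unit and regression tests for a detection, using Python's built-in unittest. The rule and its payloads are hypothetical, not from any real product:

```python
import re
import unittest

# Hypothetical detection under test: flags log lines containing a
# path-traversal sequence. In a real pipeline this would live in its
# own module alongside the rule definition.
def detect_path_traversal(line: str) -> bool:
    return bool(re.search(r"\.\./|%2e%2e%2f", line, re.IGNORECASE))

class TestPathTraversalRule(unittest.TestCase):
    def test_fires_on_known_bad(self):
        # Regression case: the rule must keep catching the classic payload.
        self.assertTrue(detect_path_traversal("GET /../../etc/passwd HTTP/1.1"))

    def test_fires_on_encoded_variant(self):
        self.assertTrue(detect_path_traversal("GET /%2e%2e%2fetc/passwd HTTP/1.1"))

    def test_quiet_on_benign_traffic(self):
        # False-positive guard: ordinary requests must not fire.
        self.assertFalse(detect_path_traversal("GET /index.html HTTP/1.1"))

if __name__ == "__main__":
    unittest.main()
```

Tests like these are what let you change a rule six months later without silently breaking the coverage you already shipped.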
Lastly, and I think this is the next step in the evolution from security research to detection engineering: we all must improve the explainability, and thus the impact, of our rules, and statistics is how you do it. You can't reliably create, improve, deprecate, or justify your detections to your sales teams, internal leadership, or customers without a background in statistics. This does not mean you need a graduate degree. But if security engineers and researchers spent some time with concepts like sampling bias and error, confusion matrices, precision, and recall, they could better understand how rules perform under certain conditions and spot errors well before a rule hits production.
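To make those terms concrete, here is a toy confusion matrix for a single rule evaluated against a labeled sample of events. The counts are invented purely for illustration:

```python
# Toy confusion matrix for one rule over a labeled sample of events.
# The counts are made up for illustration.
true_positives = 40   # rule fired, event was actually malicious
false_positives = 10  # rule fired, event was benign (analyst noise)
false_negatives = 5   # rule stayed quiet, event was malicious (missed threat)

# Precision: when the rule fires, how often is it right?
precision = true_positives / (true_positives + false_positives)

# Recall: of all the real threats, how many did the rule catch?
recall = true_positives / (true_positives + false_negatives)

print(f"precision={precision:.0%} recall={recall:.0%}")
# precision=80% recall=89% -- a precise but slightly leaky rule
```

Two numbers like these tell leadership far more about a rule's worth than "we shipped 50 detections this quarter" ever will.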
The more you learn, the more you realize you don't know anything
Conclusion
I am excited to see these three pillars discussed more in the detection engineering and security realm. It shows how much we've matured as an industry. I wrote this post as a rant and a warning: do not do what I did. Do not fall victim to the "more is better" farce. I have a few more post ideas going into detail on what separates a good detection from a great detection (my team asks this question all the time), and on what a go-to-market strategy for security detection rules should be (it's covering the right things, not more things). But for now, my parting advice for aspiring researchers and engineers is this Einstein quote:
"If I had only one hour to save the world, I would spend fifty-five minutes defining the problem and only five minutes finding the solution."
Also, it turns out Einstein may not have said this, but the premise is still great. We write solutions (detections) in search of problems (threats) without focusing on the problem (the threat) first. Don't do what I did. Don't commit to a quota!