Welcome to this year’s last Detection Engineering Weekly!
52 issues of Detection Engineering Weekly over 52 weeks. This took a little bit over a year, with 5 weeks taken off for
Graduating grad school
Moving states
+= 1 kid
I wanted to take this issue to thank everyone who’s supported me along the way, especially those in my family who were patient with me as I spent time writing, editing and posting this newsletter. There’s no new news or gems this week; instead, I took Gems from the last 52 issues and posted the Top 11 most useful (to me). They are listed in descending order from the most recent issue.
“Why 11?” you ask. I couldn’t whittle it down any further :D
Programming note: I’ll be taking two weeks off to enjoy family and get back into the swing of things after my paternity leave ends. The next issue will be on January 10.
💎 Top 11 Gems 💎
https://www.detectionengineering.net/p/det-eng-weekly-47-my-gpt-is-hallucinating
Capacity Modeling: Enhancing Analyst Well-being & SOC Efficiency by Jon Hencinski
Whether you are an aspiring or experienced detection engineer who delivers alerts to a SOC, this post is for you! Hencinski is one of the best minds regarding building and scaling a security operations program. In this post, he reviews a Twitter poll that asked users if they use a capacity model in their SOC program. Since SOC analysts are the end customers of alerts, we, as detection engineers, must plug into their modeling and reduce the time it takes to work alerts via automation, enrichment, and, of course, accurate alerts!
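To make the capacity-modeling idea concrete, here's a back-of-the-envelope sketch in the spirit of Hencinski's post. All the numbers and parameter names are illustrative assumptions of mine, not taken from the post itself:

```python
# Toy SOC capacity model: how many analysts does a given alert volume demand?
# Parameters (minutes_per_shift, utilization) are hypothetical assumptions.

def required_analysts(alerts_per_day: float, minutes_per_alert: float,
                      minutes_per_shift: float = 480,
                      utilization: float = 0.7) -> float:
    """Analysts needed per day so triage load stays within sustainable hours.

    utilization < 1.0 reserves slack for training, tuning, and breaks --
    running analysts at 100% is how you burn a SOC out.
    """
    workload = alerts_per_day * minutes_per_alert        # total triage minutes
    capacity = minutes_per_shift * utilization           # usable minutes/analyst
    return workload / capacity

# 200 alerts/day at 15 minutes each:
print(round(required_analysts(200, 15), 1))  # 8.9
```

If your detection backlog would push this number past headcount, that's a signal to invest in tuning and automation before shipping more rules.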
https://www.detectionengineering.net/p/det-eng-weekly-46-whoami-considered
Dealing with Noisy Behavioral Analytics in Detection Engineering by Sean Hutchison
In this blog, CMU researcher Sean Hutchison offers prescriptive guidance on tuning alerts and introduces the concept of "benign positives". When balancing precision and recall, it's hard to place security alerts into just two buckets: true positives and false positives. Some alerts need more context and enrichment than others to make a decision, and you'd rather triage and investigate some false positive alerts if doing so minimizes overall cost.
By learning patterns of true positives, you can layer filtering logic that suppresses benign activity. Hutchison offers a table of context types you can use when creating these filters to reduce the overall cost of your tuning strategy.
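The layered-filtering idea can be sketched in a few lines. This is my minimal illustration of the pattern, not Hutchison's implementation, and every field name below is hypothetical:

```python
# Layering benign-positive filters on top of a detection's raw output.
# Each filter encodes one learned benign pattern; alerts matching ANY
# filter are suppressed before they reach an analyst's queue.

ALERTS = [
    {"rule": "whoami_exec", "user": "svc_backup", "parent": "backup_agent.exe"},
    {"rule": "whoami_exec", "user": "jdoe", "parent": "powershell.exe"},
]

BENIGN_FILTERS = [
    # Known-good: the backup service account runs whoami from its agent.
    lambda a: a["user"].startswith("svc_") and a["parent"] == "backup_agent.exe",
]

def triage_queue(alerts, filters):
    """Keep only alerts that match none of the benign-positive filters."""
    return [a for a in alerts if not any(f(a) for f in filters)]

print(triage_queue(ALERTS, BENIGN_FILTERS))
# only the interactive jdoe alert survives for triage
```

The key property is that the base detection stays broad (preserving recall) while each filter documents one context type that makes the behavior benign.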
https://www.detectionengineering.net/p/detection-engineering-weekly-40-my
Scaling Detection and Response Operations at Coinbase Pt.1 by James Dorgan
This week’s gem showcases how Coinbase transitioned many “industrial” detection and response solutions to a singular “artisanal” solution that fits Coinbase’s use cases. A previous gem by Phil Venables defines industrial vs. artisanal, so go check that out and come back and read Dorgan’s blog. When you reach a scale like a large cryptocurrency company with a bespoke tech stack, your security tools need to evolve with the tech stack.
Coinbase does this by unifying its detection and response toolset into a singular platform, where analysts and responders can leverage the body of knowledge of their detection engineers to display enrichments, history, and response actions in a consistent view. These “economies of scale” of detection and response risked imposing costs on Coinbase, so heavily investing in these standardized views helped keep the cost down so the business could grow. Think of it like the “long-run average cost curve” in Economics.
https://www.detectionengineering.net/p/detection-engineering-weekly-37-theres
The Detection Maturity Level (DML) Model by Ryan Stillions
This gem is one of the oldest posts I’ve ever shared: it’s close to TEN years old! But, like Bordeaux, it’s aged beautifully and is still useful to this very day. It’s pretty amazing to see someone thinking about concepts like coverage, backlogs and taxonomy well before ATT&CK and detection engineering became mainstream.
In this post, Stillions proposes an 8-level model that starts with nothing (literally) and moves its way up to detecting the goals of major criminal groups and nation-states. To me, it builds on ideas like the Pyramid of Pain, where you move from tactical detection to more strategic, systemic detection of an adversary.
https://www.detectionengineering.net/p/detection-engineering-weekly-34-another
From soup to nuts: Building a Detection-as-Code pipeline Part 1 and
From soup to nuts: Building a Detection-as-Code pipeline Part 2 by David French
This week's gem contains a ton of content, and I couldn't leave out this 1-2 punch by French. If you want a full-fledged detection-as-code pipeline tutorial, then look no further! French explores building a lab for writing detections and scaling them using Terraform (infrastructure-and-detections-as-code), Sumo Logic (SIEM), and Tines (SOAR/low to no-code automation).
Once you boot the pipeline up, Part 2 focuses heavily on CI/CD workflows to test your detections. I love that French's assumption here is that detection rules, by design, have false negatives and tend to drift. When you have 10 rules to focus on, you can do this manually. But what happens when you 10x or 100x your rules? Without automation, your people spend most of their time on manual review and curation. This lab helps solve that issue by regularly testing these integrations and rules and creating GitHub issues when drift is detected.
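The core of that CI drift check can be sketched simply: replay known-bad samples against each rule and flag any rule that no longer fires. This is my own toy illustration of the idea, not French's pipeline, and the rule and sample data are hypothetical:

```python
# False-negative drift check: every rule must still fire on its curated
# known-bad corpus. In CI, a non-empty result would open a GitHub issue.

RULES = {
    "enc_powershell": lambda e: "-enc" in e.get("cmdline", "").lower(),
}

KNOWN_BAD = {
    "enc_powershell": [{"cmdline": "powershell.exe -EnC aGVsbG8="}],
}

def detect_drift(rules, corpus):
    """Return names of rules that miss at least one known-bad sample."""
    drifted = []
    for name, rule in rules.items():
        if not all(rule(event) for event in corpus.get(name, [])):
            drifted.append(name)
    return drifted

print(detect_drift(RULES, KNOWN_BAD))  # [] -> no drift detected
```

Run on a schedule, this turns "my rule silently stopped matching after a log-schema change" from a post-incident discovery into a failing pipeline.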
https://www.detectionengineering.net/p/detection-engineering-weekly-33-its
Tracking Detection Drift by Gary Katz
This is a 3-peat for Gary, and his first gem!
False positive reduction (increasing precision) is usually the talk of the town when we talk about detection engineering. Time wasted is money going down the drain. But what about false negative reduction (increasing recall)? Well, adversaries care way more about evading your detections than how many alerts you try to tune away. I love the examples Katz gives here, and they explain some basic statistics that you can run to get a much better metric around false negatives.
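The simplest version of the statistics Katz is talking about is estimating recall from a labeled sample of known-malicious activity, e.g. purple-team replays. A minimal sketch (my framing, with illustrative numbers):

```python
# Estimating recall (1 - false-negative rate) from replayed attack variants.
# You can't count false negatives you never see, so you sample: replay N
# known-bad behaviors and count how many produced an alert.

def recall(detected: int, total_malicious: int) -> float:
    """Fraction of known-bad events the detection actually fired on."""
    return detected / total_malicious

# 47 of 50 replayed attack variants triggered an alert:
print(recall(47, 50))  # 0.94 -> roughly a 6% estimated false-negative rate
```

Tracking this number per rule over time is what turns "recall" from a hand-wave into a drift metric an adversary has to fight against.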
If you loved Katz's posts as much as I did, you would be thrilled to learn that he, Megan Roddie, and Jason Deyalsingh wrote arguably the first book on Detection Engineering. They sent me an early copy, and I've been nerding out on it for a few days now. Please go support our community and check out their book. I'd love to see more content applying the concepts in their book to real life.
https://www.detectionengineering.net/p/detection-engineering-weekly-26-i
On Detection: From Tactical to Functional by Jared Atkinson
I was excited to see Jared release this post and even more excited to dive deep into it. I linked his training for this in a previous issue. A “Tool Graph” implements function chaining, where you can model specific techniques, such as process injection, to document detection opportunities. A process injection could require 4-5 Windows API calls to succeed. But, due to the complicated nature of a modern operating system like Windows, an implicit chain of internal APIs is being called before a syscall is actually issued to the Windows kernel. You can “mix and match” a technique to avoid detections by mapping these function chains. By going down this rabbit hole, Jared showcases 900 different ways to achieve process injection, simply through mapping out the tool graph with function chaining.
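The combinatorics behind that 900 number are easy to illustrate. Here's a toy version of the "mix and match" idea using well-known Windows process-injection APIs; the specific steps and counts are my simplified example, not Atkinson's actual tool graph:

```python
# Toy "tool graph" combinatorics: if each step of a technique can be
# satisfied by several interchangeable functions, the number of distinct
# execution chains is the product of the per-step choices.
from math import prod

CHAIN = [
    ["OpenProcess", "NtOpenProcess"],                            # get a handle
    ["VirtualAllocEx", "NtAllocateVirtualMemory"],               # allocate memory
    ["WriteProcessMemory", "NtWriteVirtualMemory"],              # write payload
    ["CreateRemoteThread", "NtCreateThreadEx", "QueueUserAPC"],  # execute
]

print(prod(len(step) for step in CHAIN))  # 24 distinct call chains
```

A detection keyed on any single function name covers only a fraction of those chains, which is exactly why Jared pushes detection toward the shared, lower-level operations instead of the surface API.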
This post is a fantastic demonstration of challenging bias against standard models, especially kill-chain and MITRE ATT&CK. I think Jared and the SpecterOps team are some of the greatest minds in our industry and in the detection engineering “sub-profession.” I’d love to see this for Linux and other OSes!
https://www.detectionengineering.net/p/detection-engineering-weekly-20-call
Data Driven Detection Engineering by Julien Vehent
Detection Engineering is Software Engineering. A beautifully put observation by Vehent! This week’s gem goes into the evolution of threat detection from the “early days” into now. To scale threat detection efforts beyond human capacity, we need to use more software and data engineering techniques. I discuss this concept in my blog Table Stakes for Detection Engineering, but Vehent hones in on the data engineering specialty and its evolution. I love the comparison of “tripwire” vs. “behavioral” detections, as this relates to the Pyramid of Pain but with way more math and statistics. Awesome stuff!
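To make the tripwire-vs-behavioral contrast concrete, here's a small illustration of my own (not an example from Vehent's post): a tripwire matches an exact indicator, while a behavioral detection compares activity against a statistical baseline.

```python
# "Tripwire" detection: fires on an exact, brittle indicator.
# "Behavioral" detection: fires when activity deviates from a baseline.
from statistics import mean, stdev

def tripwire(event: dict, bad_hash: str = "deadbeefcafe") -> bool:
    """Exact indicator match (hash value here is a placeholder)."""
    return event.get("sha256") == bad_hash

def behavioral(logins_today: int, history: list[int]) -> bool:
    """Flag counts more than 3 standard deviations above the baseline."""
    return logins_today > mean(history) + 3 * stdev(history)

print(tripwire({"sha256": "deadbeefcafe"}))     # True: exact match
print(behavioral(40, [5, 7, 6, 8, 5, 6, 7]))    # True: ~6/day baseline, 40 today
```

The tripwire is cheap and precise but dies the moment the indicator changes; the behavioral check survives indicator rotation at the cost of the baseline-keeping and statistics Vehent describes.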
https://www.detectionengineering.net/p/detection-engineering-weekly-12-dont
How to write an actionable alert by Daniel Wyleczuk-Stern
I hope OG readers go through this post and see concepts from previous gems and posts everywhere! The four tenets presented by Wyleczuk-Stern can be immediately used in standing up a detection program. They can also serve as a style guide when you are trying to ship rules to prod: if your alerts are not immediately actionable, automatically enriched, well prioritized, and grouped/correlated, they should not be shipped to analysts.
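That style-guide framing lends itself to automation: the four tenets can become a pre-merge lint on rule metadata. A minimal sketch (the field names are my hypothetical mapping of the tenets, not from the post):

```python
# Pre-merge lint: a rule must declare how each of the four tenets is met
# before it ships. Field names are hypothetical stand-ins for the tenets:
# runbook -> actionable, enrichments -> enriched, priority -> prioritized,
# correlation_key -> grouped/correlated.

REQUIRED = ("runbook", "enrichments", "priority", "correlation_key")

def ship_blockers(rule: dict) -> list[str]:
    """Return the tenet fields a rule is missing; empty means OK to ship."""
    return [field for field in REQUIRED if not rule.get(field)]

rule = {
    "name": "suspicious_login",
    "runbook": "wiki/suspicious-login",
    "priority": "high",
}
print(ship_blockers(rule))  # ['enrichments', 'correlation_key'] -> blocked
```

Wired into CI, this makes "not actionable enough for analysts" a failed check instead of a judgment call at 2 a.m.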
https://www.detectionengineering.net/p/detection-engineering-weekly-8-alert
Introducing the Funnel of Fidelity by Jared Atkinson
The Funnel of Fidelity is a fantastic model for visualizing and describing how a detection engineering effort should be designed. It's naive to think you can create only highly accurate alerts; instead, think of alerting as a series of stages moving from less precise to more precise, with different personas handling inputs and outputs along the alert chain. My favorite quote by Atkinson here, under the Detection section:
The concept of detection tends to be very nuanced in many organizations. For this reason we must distinguish between micro detection (the process of writing logic to alert on a potentially malicious event) and macro detection (the process of taking a true positive event from alert all the way to remediation).
https://www.detectionengineering.net/p/issue1
Detection Development Lifecycle by Haider Dost
I’ve been following Haider’s Medium site for a while now, and I think he does a great job explaining the strategy behind Detection Engineering rather than individual tactics. I’ve referred a number of colleagues to this post. Datadog’s Detection Engineering team has something very similar to the SDLC mentioned in Haider’s post.