Integrating Detection Engineering with Automation

I often get asked, "When is the right time to automate?" In a previous post, we explored the Security Automation Development Lifecycle (SADLC) and how it integrates with Detection Engineering and Standard Operating Procedures (SOPs). Many people think security automation is solely about automating Incident Response (IR) processes. However, to get the best out of automation, it should be tied to your detection engineering, that is, applied directly at the source.

Why Integration Matters

To automate effectively, it's crucial to understand the expected inputs and outputs. Erik Bloch's SOC inputs and outputs diagram is invaluable for visualising this. Automation inputs are either human-generated events or machine-generated events. Detection Engineering should cover both, yet the vast majority of articles on Detection Engineering focus only on machine-based detections.

SOC inputs, outputs and outcomes by Erik Bloch

Connecting Automation to Incident Response

You might wonder how this ties back to Security Incident Response, or why SOAR vendors claim they can automate IR processes. The marketing around SOAR platforms has been somewhat misleading. SOAR was designed for case management automation—handling Security Incidents (Cases) and building automated workflows around them. This approach often led to high failure rates in SOAR implementations, creating a chaotic environment with scattered automations.

In contrast, HyperAutomation or Autonomous SOC platforms adopt a different approach. For a deeper dive, Francis and Josh's article (https://softwareanalyst.substack.com/p/the-future-of-soc-automation-platforms) is an excellent resource. Here, I'll focus on the technical aspects and the process to optimize your security automation program, regardless of the platform used. The issue is more about process than technology.

Process Integration: Detection Engineering and IR

Step one in integrating these processes involves recognizing that Detection Engineering covers both Human-Generated Events (HGE) and Machine-Based Detections (MBD). For a standard Detection Engineering process, I reference Alex Teixeira's insightful visual and article, which outlines the Detection Engineering stages:

  1. Concept

  2. Research

  3. Engineering

  4. Delivery

  5. Optimization

Automation should be seen as a step within Detection Engineering, closely tied to the IR Playbook (SOC operating procedure).

Mapping SADLC to IR and Detection Engineering

Here's an updated SADLC process that maps more closely to IR (SOP) and Detection Engineering:

The visual demonstrates how the SANS Incident Response stages are mapped to Detection Engineering, Automation, and Security Operations processes (IR Playbook building). In this view, all three processes—Detection Engineering, SOP, and Automation—occur primarily between the Preparation and Identification phases of the SANS Incident Response stages. The feedback loop completes at the Lessons Learned phase.

Preparation

  • Detection Use-Case Analysis: Begin by analyzing potential detection scenarios using real-time threat intelligence to inform rule creation. This step ensures data sufficiency and relevance, employing frameworks like MITRE ATT&CK to prioritize threats and map out the detection landscape effectively. This analysis also considers where detections will be deployed (e.g., SIEM, EDR) and ensures robust data sources to support these detections. Understanding specific malware, techniques, and threat actors is essential for crafting accurate and actionable detection rules. A sketch of how one such use case might be recorded follows this list.

  • Operating Procedure Backlog Creation: Simultaneously, plan for SOPs triggered by detection alerts. It's essential to assess whether new detections align with existing procedures or if new SOPs need to be developed to address emerging threats. This step ensures a clear and actionable response for every detection, minimizing confusion and response times during actual incidents.

  • Automation Feasibility Assessment: Evaluate existing automation for potential reuse and check integration capabilities with necessary tools. This assessment ensures readiness for SOP and automation development post-detection use-case analysis. If possible, run preliminary atomic testing to confirm initial assumptions. Just because something seems reusable doesn’t mean it necessarily applies to the new scenario. If something doesn’t work, note what it would need, so you can set it as a requirement in the Development phase.
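
To make the Preparation outputs concrete, here is a minimal sketch of how a detection use case, its SOP coverage, and its automation feasibility findings could be recorded in one backlog entry, written in Python purely for illustration. The field names and example values (the technique ID, data source, and gap notes) are assumptions, not a prescribed schema.

    from dataclasses import dataclass, field

    @dataclass
    class DetectionUseCase:
        """One backlog entry produced during Preparation (illustrative schema only)."""
        name: str
        mitre_technique: str                  # ATT&CK technique ID the detection maps to
        data_sources: list[str]               # log sources the detection depends on
        target_platform: str                  # where the rule will live (SIEM, EDR, ...)
        sop_reference: str | None = None      # existing SOP that covers it, if any
        reusable_automations: list[str] = field(default_factory=list)
        automation_gaps: list[str] = field(default_factory=list)  # requirements for Development

    # Hypothetical entry: the technique ID, data source, and gap notes are made up.
    use_case = DetectionUseCase(
        name="Suspicious OAuth consent grant",
        mitre_technique="T1528",
        data_sources=["azure_ad_audit_logs"],
        target_platform="SIEM",
        sop_reference=None,                   # flags that a new SOP must be drafted
        reusable_automations=["enrich_user_context"],
        automation_gaps=["no connector for revoking OAuth grants"],
    )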

Development

  • Detection Use-Case Development: Develop scalable detection rules based on the preparatory analysis, ensuring they are adaptable for future growth and complexity. Engage cross-functional teams for a comprehensive approach that considers business and compliance impacts. This step involves translating the theoretical detection scenarios into practical, actionable rules that can be deployed in the monitoring environment.

  • SOP Development: Draft SOPs aligned with detection rules, emphasizing collaboration across departments to ensure operational consistency and compliance. This phase should account for scalability and flexibility to adapt to evolving organizational needs. SOPs should be clear, concise, and actionable, providing step-by-step instructions for responding to different types of alerts.

  • Automation Scripting: Develop automation workflows that enhance and streamline SOP execution. Include orchestration strategies to ensure seamless integration and communication among various security tools, enhancing the coordinated response capability. Keep in mind the workflows that you want to reuse, ensuring they are built with modularity and flexibility in mind for future use cases. A minimal sketch of this modular approach follows this list.
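
As an illustration of the modularity point above, here is a minimal Python sketch of a workflow built from small, reusable steps. The alert shape and the helper functions (enrich_user_context, check_allowlist, notify_analyst) are hypothetical placeholders, not references to any particular SOAR product's API.

    from typing import Callable

    Step = Callable[[dict], dict]

    def enrich_user_context(alert: dict) -> dict:
        # Placeholder: look up the user in an identity provider and attach the result.
        alert.setdefault("enrichment", {})["user"] = {"department": "unknown"}
        return alert

    def check_allowlist(alert: dict) -> dict:
        # Placeholder: mark alerts from known-good sources so later steps can skip them.
        alert["allowlisted"] = alert.get("source_ip") in {"10.0.0.1"}
        return alert

    def notify_analyst(alert: dict) -> dict:
        # Placeholder: in a real workflow this would post to a ticketing or chat tool.
        print(f"Alert {alert['id']} ready for triage (allowlisted={alert['allowlisted']})")
        return alert

    def run_workflow(alert: dict, steps: list[Step]) -> dict:
        # Each step takes and returns the same alert shape, so steps stay reusable.
        for step in steps:
            alert = step(alert)
        return alert

    run_workflow({"id": "A-1", "source_ip": "203.0.113.7"},
                 [enrich_user_context, check_allowlist, notify_analyst])

Because every step consumes and returns the same alert structure, new detections can recombine existing steps instead of duplicating logic in one-off playbooks.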

Testing

  • Detection Use-Case Validation: Test detection rules against both simulated and real-world attack scenarios, using feedback to refine accuracy and reduce false positives. Engage in purple teaming to ensure comprehensive validation. This step ensures that detection rules are not only theoretically sound but also practically effective in real-world environments. A small example of expressing these checks as repeatable tests follows this list.

  • SOP Effectiveness Evaluation: Rigorously test SOPs to ensure comprehensive coverage and actionability. Use feedback from this testing phase to refine procedures, ensuring they are effective and efficient in managing alerts. This involves running through various scenarios and ensuring that each step of the SOP is actionable and leads to the desired outcome.

  • Automation Workflow Testing: Validate automation scripts, ensuring they process inputs and outputs accurately and integrate smoothly with detection and response processes. Test for scalability and flexibility to adapt to changing threat landscapes and organizational growth. This phase ensures that the automation workflows are robust, reliable, and ready for deployment in a live environment.
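
One practical way to make this validation repeatable is to express the expected behaviour as unit tests (pytest style) that run whenever a rule or workflow changes. The toy rule and event fields below are assumptions used purely for illustration; real validation would replay logs from simulated or purple-team activity against the deployed rule.

    def detects_suspicious_consent(event: dict) -> bool:
        # Toy stand-in for a deployed rule: flag OAuth consent grants to unverified apps.
        return (event.get("operation") == "Consent to application"
                and not event.get("app_verified", False))

    def test_fires_on_simulated_attack():
        # Event shaped like the output of an atomic or purple-team simulation.
        simulated = {"operation": "Consent to application", "app_verified": False}
        assert detects_suspicious_consent(simulated)

    def test_stays_quiet_on_benign_activity():
        # Known-good consent grants should not raise alerts (false-positive guard).
        benign = {"operation": "Consent to application", "app_verified": True}
        assert not detects_suspicious_consent(benign)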

Production

  • Automation Deployment: Launch the automation workflows first, ensuring they work in concert with the detection rules. Continuous monitoring and refinement are crucial to address potential issues such as API changes or data format modifications by third-party services; a sketch of one way to catch such drift follows this list.

  • Finalising SOPs: Adjust the IR playbook to reflect which parts are automated and which require manual intervention. This ensures clarity in the response process and maximizes the efficiency of automation.

  • Operationalising Detection Use-Cases: Finally, activate the detection rules, which will start triggering alerts and workflows that flow into the IR process. Continuous monitoring against KPIs like detection latency and false positive rates ensures these rules operate efficiently.
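
As a sketch of the kind of guard that catches third-party format changes early, the snippet below validates an incoming alert payload against the fields the automation expects. The field names and types are hypothetical; the point is that drift should fail loudly and notify the automation owner rather than silently break a live workflow.

    # Illustrative payload contract; adapt the fields to whatever your workflows consume.
    REQUIRED_FIELDS = {"id": str, "source_ip": str, "detection_name": str}

    def validate_payload(payload: dict) -> list[str]:
        """Return a list of problems so deployment monitoring can alert on them."""
        problems = []
        for field_name, expected_type in REQUIRED_FIELDS.items():
            if field_name not in payload:
                problems.append(f"missing field: {field_name}")
            elif not isinstance(payload[field_name], expected_type):
                problems.append(f"unexpected type for {field_name}")
        return problems

    incoming = {"id": "A-3", "source_ip": 203}   # simulated third-party format change
    issues = validate_payload(incoming)
    if issues:
        # In production this would page the automation owner rather than print.
        print("payload drift detected:", issues)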

Continuous Improvement/Metrics

  • Detection Engineering Metrics: Track true positive and false positive rates, alignment with frameworks like MITRE ATT&CK, and effectiveness against threat profiles.

  • Operating Procedures Metrics: Measure the efficiency of procedures, balancing manual and automated processes.

  • Automation and Orchestration Metrics: Demonstrate how many alerts are processed automatically, the speed of response, and the impact on analyst workload. A minimal example of computing these figures follows this list.

  • Threat Simulation Metrics: Measure detection effectiveness, iterations required for successful simulations, and overall tool robustness.
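
As a minimal illustration, the sketch below computes a few of these figures from a set of closed alerts. The record fields (disposition, automated, minutes_to_respond) are assumptions; in practice these numbers would come from your case management or SOAR reporting.

    # Hypothetical closed-alert records; values are invented for illustration.
    closed_alerts = [
        {"disposition": "true_positive",  "automated": True,  "minutes_to_respond": 4},
        {"disposition": "false_positive", "automated": True,  "minutes_to_respond": 2},
        {"disposition": "true_positive",  "automated": False, "minutes_to_respond": 35},
    ]

    total = len(closed_alerts)
    true_positives = sum(a["disposition"] == "true_positive" for a in closed_alerts)
    automated = sum(a["automated"] for a in closed_alerts)

    print(f"true positive rate: {true_positives / total:.0%}")   # detection quality
    print(f"automation rate:    {automated / total:.0%}")        # analyst workload impact
    print(f"mean response time: {sum(a['minutes_to_respond'] for a in closed_alerts) / total:.1f} min")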

Conclusion

Integrating Detection Engineering with Incident Response and Automation is crucial for building a robust security framework. By following a structured process and focusing on continuous improvement, you can optimise your security automation program, ensuring it is both effective and scalable.
