From Low-Code Automation to Detection as Code

Understanding the Diverging Trends

Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer or any other entities with which I am affiliated.

Something that has puzzled me lately is why Security Automation has shifted toward low-code/no-code while Detection Engineering has embraced Detection as Code (DaC). It seems like these domains should be moving in the same direction—making security processes more accessible and efficient—but in reality, they’ve taken opposite paths. While the terms might sound similar or feel like buzzwords, the diverging needs and goals in each space explain why these approaches have developed differently.

Let’s break down how these two trends evolved and why they make sense in their respective contexts.

Security Automation

Let’s begin by examining the evolution of security automation. Around 15–20 years ago, security automation typically started with basic scripts—usually written in Python—that were executed on servers to automate small tasks like data enrichment or limited response actions. While functional, these automations required a developer or security engineer to create, maintain, and run them.
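
To make that era concrete, here is a minimal sketch of the kind of enrichment script a security engineer might have run on a server. The reputation API endpoint, token, and response fields are hypothetical placeholders, not any specific product's API.

```python
"""Minimal, illustrative example of an early-style enrichment script.

The reputation API endpoint, token, and response fields are hypothetical
placeholders, not a specific vendor's API.
"""
import sys

import requests

API_URL = "https://intel.example.com/api/v1/ip"  # hypothetical endpoint
API_TOKEN = "REPLACE_ME"                         # in practice, loaded from a vault


def enrich_ip(ip_address: str) -> dict:
    """Look up reputation data for a single IP address."""
    response = requests.get(
        f"{API_URL}/{ip_address}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Typical usage: python enrich.py 203.0.113.42
    for ip in sys.argv[1:]:
        data = enrich_ip(ip)
        print(ip, data.get("reputation", "unknown"), data.get("last_seen", "n/a"))
```

Scripts like this worked, but every change meant editing a file on a server, which is exactly the maintenance burden the next generations of tooling tried to remove.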

The next major evolution was the introduction of SOAR (Security Orchestration, Automation, and Response) platforms. SOAR aimed to make automation more accessible to security analysts, bringing in the concept of drag-and-drop automation workflows. It was a promising leap forward, but organizations quickly found that significant custom coding was still needed for complex tasks.

As SOAR vendors worked to make automation more business-friendly, the trend shifted toward low-code/no-code platforms. These platforms promised that security teams could build automation without needing to write extensive code, appealing to the growing demand for user-friendly solutions that don’t require advanced programming skills.

From a business perspective, this trend makes sense. Selling the idea of "no programming required" is an easy win for decision-makers, especially in organizations that lack the resources to hire and retain full-time security engineers. It provides a faster path to implementation, making automation more accessible to a broader range of users. But here’s where things start to get tricky. No-code platforms may simplify initial setups, but in reality, even the most intuitive platform still requires some coding for complex use cases.

Additionally, by moving to a low-code/no-code model, many platforms dropped integration with CI/CD pipelines and repository-based workflows. The focus shifted from maintaining automation logic in code repositories to handling everything within the platform’s UI. While this simplifies operations for users, it also raises concerns about transparency, scalability, and control over the automation process.

In this case, low-code/no-code is perfectly suited to businesses looking for quick wins and simplicity in security automation, where the focus is less on engineering precision and more on operational ease.

However, even in today’s low-code/no-code platforms, there are limitations. Complex use cases often require a few lines of custom code, and features like CI/CD pipelines or code repository integration are often sidelined. In some cases, CI/CD is replaced with internal auditing features baked into the platform, but these often lack the robustness of a traditional software development lifecycle (SDLC).

To make low-code/no-code platforms truly effective, they still need to follow the basic principles of an SDLC, including the following (a rough sketch of how these gates could be enforced appears after the list):

  • Code change tracking: Users need a clear audit trail of what changes were made, when, and by whom, all visible within the platform's UI rather than buried in logs.

  • Versioning: Every change to an automation should be saved as a new version, with the ability to roll back to previous versions when necessary.

  • Approval process: Just like a pull request (PR) workflow, the person developing the automation should not push it directly to production. There should be an enforced review process (e.g., the four-eyes principle).

  • Testing: Automations should not be pushed into production unless they pass at least one test scenario. There should be built-in QA features for testing in the platform itself.
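
As a rough, platform-agnostic illustration of how these gates could be enforced, here is a minimal sketch. The class and field names are hypothetical, not any vendor's API; it only shows the shape of the checks (second-person review plus at least one passing test) that should sit in front of production.

```python
"""Hypothetical sketch of an SADLC-style promotion gate.

Not any particular platform's API; it only illustrates how review and
testing requirements could be enforced before a change goes live.
"""
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AutomationChange:
    name: str
    version: int
    author: str
    reviewer: Optional[str] = None
    test_results: List[bool] = field(default_factory=list)


def can_promote(change: AutomationChange) -> bool:
    """Return True only if the change passes the basic SDLC gates."""
    reviewed_by_someone_else = (
        change.reviewer is not None and change.reviewer != change.author
    )
    has_passing_test = bool(change.test_results) and all(change.test_results)
    return reviewed_by_someone_else and has_passing_test


# Example: a new version of a phishing-triage automation, reviewed by a
# second person and backed by one passing test scenario.
change = AutomationChange(
    name="phishing-triage",
    version=4,
    author="alice",
    reviewer="bob",
    test_results=[True],
)
print(can_promote(change))  # True
```

Whether this logic lives in a Git-based PR workflow or inside the platform's UI matters less than the fact that it is enforced consistently.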

For example, in my Security Automation Development Lifecycle (SADLC) model, I outline how a strong SDLC framework can be applied to security automation to ensure that even low-code/no-code platforms maintain engineering rigour.

This approach is crucial because even though automation is now more accessible, security teams should still adhere to structured workflows to minimise errors and enhance scalability.

Detection as Code (DaC): A New Domain Demands Engineering Precision

While Security Automation is going in the direction of low-code/no-code, Detection Engineering is heading toward a much more code-centric model. This is where Detection as Code (DaC) comes into play—a methodology where detection logic is managed like software code.

The rise of DaC can be traced to the increasing complexity of cybersecurity threats. Traditional signature-based or rule-based detection methods became inadequate in the face of modern, sophisticated attacks. Security teams needed detection systems that were not only highly customizable but also scalable and version-controlled—hence the move toward treating detection rules as code.

Unlike automation, where business users can benefit from low-code/no-code solutions, detection engineering demands a higher level of technical expertise. The complexity of detection logic—which includes precise definitions, tuning, and testing—requires the same rigor and version control used in software engineering. Anton Chuvakin highlighted the shift toward DaC as a necessary "engineering-first" approach, where detections are treated as structured, testable code.

This is where Sigma rules come into play. Sigma offers a standardised format for writing detection rules that can be converted into different SIEM query languages, making detection logic more reusable and shareable across platforms. The community-driven nature of Sigma has helped accelerate the adoption of DaC, as it allows security teams to collaborate on, refine, and standardise detection rules.
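
As an illustration, here is a simplified Sigma-style rule (an example I wrote for this post, not an official community rule) wrapped in the kind of small validation check a DaC pipeline might run before conversion. In a real pipeline, tooling such as sigma-cli or pySigma would then translate the rule into the target SIEM's query language.

```python
"""Simplified, illustrative Sigma-style rule plus a minimal sanity check.

The rule body is an example, not an official community rule; a real DaC
pipeline would also convert it (e.g., with sigma-cli / pySigma) into the
target SIEM's query language.
"""
import yaml  # PyYAML

SIGMA_RULE = r"""
title: Suspicious PowerShell Encoded Command
status: experimental
logsource:
  product: windows
  category: process_creation
detection:
  selection:
    Image|endswith: '\powershell.exe'
    CommandLine|contains: '-EncodedCommand'
  condition: selection
level: high
"""

REQUIRED_FIELDS = {"title", "logsource", "detection"}


def validate_rule(rule_text: str) -> dict:
    """Parse a Sigma rule and check that its core fields are present."""
    rule = yaml.safe_load(rule_text)
    missing = REQUIRED_FIELDS - rule.keys()
    if missing:
        raise ValueError(f"Sigma rule is missing required fields: {missing}")
    return rule


rule = validate_rule(SIGMA_RULE)
print(rule["title"], "->", rule["detection"]["condition"])
```

Because the rule is plain text, it can live in a Git repository, be reviewed like any other code change, and be checked automatically on every commit.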

With DaC, the focus is less on how easy it is for a non-developer to build detections and more on building scalable, automated processes that work across distributed environments. DaC emphasizes:

  • Version control: Detection rules are stored in repositories like Git, making it easy to track changes, roll back, and collaborate.

  • Automated testing: Continuous Integration/Continuous Delivery (CI/CD) pipelines allow security teams to test detection rules before pushing them live, ensuring they work as intended without generating excessive false positives (a toy example of such a test follows this list).

  • Reuse and sharing: Community projects like Sigma provide ready-to-use rules, but organizations can tailor these rules to their own environments.
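
For instance, an automated test for the hypothetical encoded-PowerShell rule above might replay one known-bad and one known-good event through the detection logic. The sketch below uses a toy in-process matcher purely to show the shape of such a test; a real pipeline would exercise the SIEM itself or a dedicated detection-testing harness.

```python
"""Toy CI test for a detection rule (illustrative only).

A real pipeline would run the rule against a SIEM or a detection-testing
harness; this in-process matcher only shows the shape of such a test.
"""


def encoded_powershell_detection(event: dict) -> bool:
    """Simplified stand-in for the detection logic under test."""
    image = event.get("Image", "").lower()
    command_line = event.get("CommandLine", "").lower()
    return image.endswith("\\powershell.exe") and "-encodedcommand" in command_line


def test_fires_on_malicious_event():
    event = {
        "Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
        "CommandLine": "powershell.exe -EncodedCommand SQBFAFgA...",
    }
    assert encoded_powershell_detection(event)


def test_ignores_benign_event():
    # Guards against an obvious false-positive source before deployment.
    event = {
        "Image": r"C:\Windows\System32\notepad.exe",
        "CommandLine": "notepad.exe report.txt",
    }
    assert not encoded_powershell_detection(event)


if __name__ == "__main__":
    # These functions can also be collected and run by pytest in CI.
    test_fires_on_malicious_event()
    test_ignores_benign_event()
    print("All detection tests passed.")
```
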

Vendors shouldn’t just be offering code-repo integration and out-of-the-box rules for detection. They need to bake detection engineering workflows into their platforms—including CI/CD pipelines, version control, and automated testing—allowing teams to manage detection like code. Detection engineering should be about process, not just pre-built rules.

The Future: Aligning Detection Engineering with Automation

As the cybersecurity landscape evolves, there's a strong potential for convergence between low-code/no-code automation and Detection as Code (DaC). In the future, we should see platforms that allow security analysts to focus more on detection engineering and incident response by automating routine tasks. This would free up analysts' time for more strategic, engineering-focused work, without requiring them to become full-time coders.

In a world where analysts are burdened with repetitive Tier-1 tasks, automation platforms that integrate DaC workflows can help remove those burdens while still maintaining a high level of technical rigor. Analysts would no longer need to manually write detection logic from scratch, but could instead focus on optimizing detection processes, tuning logic, and responding to threats more efficiently.

In my previous blog post on Integrating Detection Engineering and Automation, I discuss how blending detection engineering with incident response (IR) and automation can lead to better threat detection and faster responses. The key takeaway here is that while Detection as Code provides the precision needed for accurate threat detection, automation can simplify routine, time-consuming tasks, ensuring that incident response remains quick and effective.

For this convergence to happen effectively, platforms will need to focus on:

  • Embedding detection workflows within the automation platform, rather than merely offering pre-packaged detection rules. The focus must be on flexible, adaptive processes that can evolve alongside the threat landscape.

  • Integrated CI/CD pipelines for detection logic that allow changes to detection rules to be tested and deployed in an automated, reliable manner.

  • Comprehensive version control and rollback functionality, ensuring that any changes made to detection logic are tracked, reversible, and properly audited. This is crucial for maintaining system stability and security over time (a minimal sketch of this idea follows the list).

  • Automated testing, which will ensure that detection rules are accurate and effective before being pushed into production. Minimizing false positives while maintaining strong detection capabilities is a critical requirement for the future of detection engineering.
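
To illustrate the versioning and rollback point, here is a deliberately simplified, in-memory sketch. In practice the history would live in Git and deployment would be handled by the CI/CD pipeline, but the shape of the workflow is the same: every deployment is a new version, and a noisy rule can be rolled back to the last known-good one.

```python
"""Minimal sketch of versioned deployment with rollback for detection rules.

This in-memory registry is only illustrative; in practice the history would
live in Git and deployment would be handled by a CI/CD pipeline.
"""
from typing import Dict, List, Tuple


class RuleRegistry:
    def __init__(self) -> None:
        # Rule name -> list of (version, rule_body); the last entry is live.
        self._history: Dict[str, List[Tuple[int, str]]] = {}

    def deploy(self, name: str, rule_body: str) -> int:
        """Record a new version of the rule and make it the live one."""
        versions = self._history.setdefault(name, [])
        version = len(versions) + 1
        versions.append((version, rule_body))
        return version

    def live(self, name: str) -> Tuple[int, str]:
        """Return the currently deployed version of the rule."""
        return self._history[name][-1]

    def rollback(self, name: str) -> Tuple[int, str]:
        """Drop the latest version and fall back to the previous one."""
        versions = self._history[name]
        if len(versions) < 2:
            raise ValueError("No earlier version to roll back to.")
        versions.pop()
        return versions[-1]


registry = RuleRegistry()
registry.deploy("encoded-powershell", "detection logic v1")
registry.deploy("encoded-powershell", "detection logic v2 (too noisy)")
print(registry.live("encoded-powershell"))      # (2, 'detection logic v2 (too noisy)')
print(registry.rollback("encoded-powershell"))  # (1, 'detection logic v1')
```
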

What’s important to recognise is that low-code/no-code platforms cater to the growing demand for ease of use and accessibility in security operations. On the other hand, Detection as Code demands more engineering precision and technical rigor. As these two trends progress, the ideal solution is a hybrid model that combines the simplicity of low-code/no-code with the robust, scalable frameworks of DaC.

In the future of security operations, the goal should be to empower analysts to engage in more meaningful, engineering-focused tasks—optimizing detections, customizing workflows, and improving response times—without requiring deep coding knowledge. Platforms must blend user-friendly automation interfaces with the technical underpinnings of Detection as Code, allowing analysts to implement complex detection logic through intuitive UI elements, while the system handles the technical rigor behind the scenes.

This balanced approach will allow for greater scalability, faster threat response, and ultimately, a more secure organization. Analysts can leverage the best of both worlds, improving their efficiency and effectiveness in a rapidly evolving threat landscape.

For more insights on how to integrate detection engineering and automation into your incident response workflows, you can read my full blog post on Integrating Detection Engineering and Automation.

