DAST Controls — Frequently Asked Questions for Auditors and Compliance Officers

Dynamic Application Security Testing (DAST) is a security control used in CI/CD pipelines to test running applications for vulnerabilities. For auditors and compliance officers, DAST is frequently encountered during reviews of application security and software delivery governance — yet it remains one of the most misunderstood controls in regulated environments.

This FAQ addresses the most common questions auditors and compliance officers have about DAST, with a focus on what to verify, what evidence to expect, and how DAST fits into broader compliance and governance frameworks.


What is DAST and why does it matter for auditors?

DAST (Dynamic Application Security Testing) is a security control that tests running applications by interacting with them externally, simulating real-world attack scenarios. Unlike code-level analysis (SAST), DAST validates the runtime behavior of applications — including authentication flows, session handling, and exposed endpoints.

For auditors, DAST matters because it provides evidence that applications have been tested under realistic conditions before deployment to production. It is a detective and preventive control that, when properly enforced, demonstrates that an organization validates application security as part of its release process.


How is DAST different from SAST and SCA?

Auditors frequently encounter all three controls together. Here is how they differ:

  • SAST (Static Application Security Testing) analyzes source code to find vulnerabilities before the application runs. It is a preventive, code-level control.
  • SCA (Software Composition Analysis) identifies risks in third-party libraries and dependencies. It is a supply chain risk control.
  • DAST tests the running application externally, without access to source code. It is a runtime validation control.

In regulated environments, these three controls are complementary. Auditors should expect to see all three (or justified alternatives) as part of a mature application security program. The absence of any one of them should prompt questions about how that risk area is addressed.


When should DAST run in a CI/CD pipeline?

From an audit perspective, DAST should run at defined control points within the software delivery process — typically before production releases, after significant changes, or on a scheduled basis for critical applications.

Auditors should verify that DAST execution is tied to the release process, not performed as an isolated activity disconnected from deployments. The key question is: can the organization demonstrate that DAST was executed and its results reviewed before a specific release reached production?


Can DAST block releases in regulated environments?

Yes. When properly governed, DAST acts as a gated control in CI/CD pipelines. Releases can be blocked when findings exceed predefined severity thresholds, or when required risk acceptance has not been approved.

Auditors should verify:

  • Gating thresholds are defined in policy and enforced in the pipeline
  • Exceptions (releases that proceed despite findings) are documented with approvals
  • The gating mechanism cannot be easily bypassed by development teams

Organizations that allow releases to proceed with documented exceptions are not necessarily non-compliant — but those exceptions must be properly governed, approved, and traceable.
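The gating logic described above can be sketched as a minimal pipeline step. This is an illustrative sketch, not any specific scanner's API: the severity scale, the finding fields, and the `approved_exceptions` mechanism are all assumptions standing in for a real policy configuration.

```python
# Minimal sketch of a severity-threshold release gate (illustrative data model).
SEVERITY_ORDER = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_release(findings, threshold="high", approved_exceptions=()):
    """Block the release if any non-excepted finding meets the threshold."""
    blocking = [
        f["id"] for f in findings
        if SEVERITY_ORDER[f["severity"]] >= SEVERITY_ORDER[threshold]
        and f["id"] not in approved_exceptions
    ]
    return {"passed": not blocking, "blocking": blocking}

findings = [
    {"id": "F-101", "severity": "critical"},
    {"id": "F-102", "severity": "low"},
]

# Without an approved exception the critical finding blocks the release;
# with a documented, approved exception for F-101 the gate passes.
print(gate_release(findings))
print(gate_release(findings, approved_exceptions={"F-101"}))
```

The point auditors should take from such a mechanism: the threshold and the exception list are policy inputs, so both must be documented and change-controlled, not editable ad hoc by the team whose release is being gated.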


How should false positives be governed in DAST?

False positives (findings that do not represent actual vulnerabilities) are inherent to DAST. The governance concern is not whether false positives exist, but how they are managed.

Auditors should expect to see:

  • A defined process for reviewing and classifying findings as false positives
  • Role-based suppression approvals (not self-service by developers)
  • Documented justifications for each suppressed finding
  • Expiration dates on suppressions, with periodic review
  • An audit trail showing who approved each suppression and when

Red flag: Large numbers of suppressed findings without documented rationale, or suppressions that never expire.
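The governance expectations above can be expressed as a simple check over suppression records. This is a sketch under assumed field names (`justification`, `approver`, `expires`); real tooling will use its own schema, but the same two tests apply: every suppression is fully documented, and expired suppressions surface for re-review.

```python
# Sketch: flag suppressions that lack governance fields or have expired.
from datetime import date

REQUIRED_FIELDS = {"finding_id", "justification", "approver", "expires"}

def audit_suppressions(suppressions, today):
    """Return (finding_id, issue) pairs for ungoverned or expired suppressions."""
    issues = []
    for s in suppressions:
        missing = REQUIRED_FIELDS - s.keys()
        if missing:
            issues.append((s.get("finding_id", "?"),
                           "missing " + ", ".join(sorted(missing))))
        elif s["expires"] < today:
            issues.append((s["finding_id"], "expired, needs re-review"))
    return issues

suppressions = [
    {"finding_id": "F-7", "justification": "test endpoint, not reachable in prod",
     "approver": "appsec-lead", "expires": date(2026, 1, 31)},
    {"finding_id": "F-9", "approver": "dev-team"},  # red flag: no justification or expiry
]

print(audit_suppressions(suppressions, today=date(2025, 6, 1)))
```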


What do auditors typically expect from DAST controls?

Auditors evaluate DAST as a governance control, not as a penetration testing tool. The assessment focus is on:

  • Consistent execution — is DAST run reliably as part of the release process?
  • Documented gating logic — are severity thresholds defined and enforced?
  • Traceable suppression decisions — are exceptions governed and auditable?
  • Retained evidence — can scan results be retrieved for any given release?
  • Scope adequacy — does DAST cover the applications that matter most?

Auditors generally do not assess scanner brand, vulnerability counts, or scan depth in isolation. The focus is on the control’s governance, enforcement, and evidence quality.


Is DAST mandatory under DORA, NIS2, ISO 27001, or PCI DSS?

Most regulations and standards do not mandate DAST by name. Instead, they require organizations to demonstrate effective application security testing, risk management, and evidence retention.

DAST is commonly used as one component of a broader application security and CI/CD governance strategy. Its relevance varies by framework:

  • DORA: Requires ICT risk management and testing — DAST supports runtime validation evidence
  • NIS2: Requires risk management and secure development — DAST validates service exposure
  • ISO 27001: Requires demonstration of control effectiveness (Annex A, A.8.25–28) — DAST contributes evidence
  • PCI DSS: Explicitly requires web application security testing (Requirements 6.4, 11.3) — DAST is commonly used to meet this requirement

What should auditors look for in DAST reports?

When reviewing DAST reports, auditors should look beyond raw vulnerability counts. The key elements to assess are:

  • Scope of testing: Which applications and endpoints were tested? Were critical and externally facing applications included?
  • Severity classification: Are findings classified by severity, and do classifications follow a defined standard?
  • Gating outcome: Did the scan result in a pass or fail decision? If findings were present, was the release approved through a documented exception?
  • Trend data: Are recurring findings being addressed, or do the same issues appear across multiple releases?
  • Suppression details: Are suppressed findings documented with justifications and approvals?
  • Traceability: Can the report be linked to a specific release, environment, and date?
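The traceability element is the one most often missing in practice, and the easiest to test mechanically. A minimal sketch, assuming hypothetical metadata field names: a report is audit-usable only if it names the release, environment, pipeline run, and date it belongs to.

```python
# Sketch of a traceability check on a DAST report's metadata.
# Field names are illustrative assumptions, not a standard schema.
TRACE_FIELDS = ("release_id", "environment", "pipeline_run", "scan_date")

def is_traceable(report):
    """True only if the report can be linked to a specific release context."""
    return all(report.get(field) for field in TRACE_FIELDS)

report = {
    "release_id": "2024.11.3",
    "environment": "staging",
    "pipeline_run": "run-8841",
    "scan_date": "2024-11-02",
    "findings": [],
}

print(is_traceable(report))                       # True
print(is_traceable({"scan_date": "2024-11-02"}))  # False: cannot be linked to a release
```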

How do auditors verify DAST is properly enforced?

Verifying DAST enforcement requires looking beyond policy documents. Auditors should:

  • Request pipeline configurations — verify that DAST is defined as a required step that cannot be skipped
  • Sample recent releases — for a selection of recent production deployments, ask for the corresponding DAST results
  • Check for bypass evidence — look for releases that proceeded without DAST execution or with overridden results
  • Review exception records — if exceptions exist, verify they follow the documented approval process
  • Assess scope coverage — verify that DAST covers all applications within the defined scope, particularly externally facing and critical systems

Red flag: The organization has a DAST policy but cannot produce scan results for recent releases, or pipeline configurations show DAST as an optional step.
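The "sample recent releases" step above amounts to a cross-reference between deployments and recorded scan evidence. A sketch of that test, under the assumption that the organization can produce a list of release identifiers and an index of DAST results keyed by release:

```python
# Sketch: cross-reference sampled production releases against DAST evidence.
# The data shapes are illustrative assumptions about an evidence index.
def sample_release_evidence(releases, scan_index):
    """Return (release_id, gap) pairs where enforcement evidence is missing."""
    gaps = []
    for release_id in releases:
        scan = scan_index.get(release_id)
        if scan is None:
            gaps.append((release_id, "no DAST result on record"))
        elif scan["gate"] == "fail" and not scan.get("exception_approved"):
            gaps.append((release_id, "failed gate deployed without approved exception"))
    return gaps

releases = ["r-101", "r-102", "r-103"]
scan_index = {
    "r-101": {"gate": "pass"},
    "r-103": {"gate": "fail", "exception_approved": False},
}

print(sample_release_evidence(releases, scan_index))
```

Every gap this kind of sampling surfaces is either a missing scan (bypass evidence) or an ungoverned override, which maps directly onto the red flags above.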


What are common DAST audit findings?

In practice, the most frequent DAST-related audit findings include:

  • DAST is not enforced in the pipeline — it is available but optional, and teams can skip it without approval
  • No gating thresholds defined — DAST runs but all releases proceed regardless of results
  • Scan results are not retained — the organization cannot produce historical DAST evidence for past releases
  • Incomplete scope — DAST covers only some applications, leaving critical or externally facing systems untested
  • Ungoverned false positive suppression — findings are suppressed without documented justification or approval
  • No connection to the release process — DAST runs on a schedule but is not tied to actual deployments, so results cannot be linked to specific releases
  • Recurring unresolved findings — the same vulnerabilities appear across multiple releases without remediation or documented risk acceptance

How should organizations prepare their DAST controls for audit?

Organizations preparing for audit should ensure they can demonstrate:

  • DAST is integrated into the CI/CD pipeline at defined control points
  • Gating thresholds and approval workflows are documented and enforced
  • Scan results are retained and can be retrieved for any specific release
  • False positive suppressions are governed with documented approvals
  • DAST scope covers all applications within the compliance boundary
  • Exception handling follows documented procedures with proper authorization

What evidence should DAST produce for auditors?

DAST should produce evidence that is structured, retained, and traceable. Auditors should expect the following evidence artifacts from a properly governed DAST control:

  • Scan reports per release: Each production release should have a corresponding DAST report showing what was tested, what was found, and what the gating outcome was
  • Severity-classified findings: Findings should be categorized by severity using a defined standard (e.g., CVSS, internal severity matrix)
  • Gating decisions: A clear pass/fail record for each scan, with documentation of any overrides or exceptions
  • Suppression records: Documentation of suppressed findings including justification, approver, and expiration date
  • Scope documentation: Evidence of which applications and endpoints were included in testing
  • Trend data: Historical scan results showing whether findings are being addressed over time
  • Traceability metadata: Links between scan results and the specific release, environment, pipeline run, and date

If an organization cannot produce these artifacts for a given release, the DAST control may exist in policy but is not functioning as effective governance evidence.
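The artifact list above can be used as a per-release completeness checklist. A minimal sketch, with artifact keys that are illustrative assumptions rather than any standard evidence schema:

```python
# Sketch: check which expected DAST evidence artifacts a release is missing.
EXPECTED_ARTIFACTS = {
    "scan_report", "classified_findings", "gating_decision",
    "suppression_records", "scope", "trace_metadata",
}

def missing_artifacts(release_bundle):
    """Return the expected artifact types absent from a release's evidence."""
    return sorted(EXPECTED_ARTIFACTS - set(release_bundle))

bundle = {"scan_report": "...", "gating_decision": "pass", "scope": ["app-a"]}
print(missing_artifacts(bundle))
```

Applied across a sample of releases, a non-empty result for any release is exactly the situation the paragraph above describes: a control that exists in policy but does not produce effective governance evidence.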


What are common DAST red flags during audits?

During audit reviews, the following red flags indicate that DAST governance is weak, incomplete, or ineffective:

  • DAST exists in policy but not in practice: The organization has a DAST policy but cannot produce scan results for recent production releases
  • Optional pipeline step: DAST is configured in the CI/CD pipeline but can be skipped or bypassed without approval
  • No gating thresholds: DAST runs but all releases proceed regardless of findings — the control has no enforcement power
  • Disconnected from releases: DAST runs on a schedule (e.g., weekly) but is not tied to actual deployments, making results untraceable to specific releases
  • Mass suppressions without governance: Large numbers of findings are suppressed without documented justification, approver identity, or expiration dates
  • Incomplete scope: Critical or externally facing applications are excluded from DAST testing without documented risk acceptance
  • No evidence retention: Scan results are ephemeral and cannot be retrieved for past releases
  • Recurring unresolved findings: The same vulnerabilities appear across multiple releases with no remediation or escalation
  • Self-service suppression: Developers can suppress findings without independent review or approval

Any of these red flags warrants further investigation and should be documented as an audit finding or observation.


How does DAST map to DORA, NIS2, and ISO 27001 requirements?

DAST is not typically mandated by name in regulatory frameworks, but it directly supports key requirements across major regulations and standards:

DORA (Digital Operational Resilience Act)

DORA requires financial entities to maintain ICT risk management frameworks that include testing of ICT systems. DAST supports DORA compliance by:

  • Providing runtime validation evidence for deployed applications
  • Demonstrating that applications are tested under realistic conditions before production deployment
  • Supporting the requirement for continuous ICT risk identification and management

NIS2 (Network and Information Security Directive)

NIS2 requires essential and important entities to implement risk management measures including secure development practices and vulnerability handling. DAST supports NIS2 by:

  • Validating that externally facing services are tested for known vulnerability classes
  • Producing evidence of proactive vulnerability identification in deployed systems
  • Supporting incident prevention by identifying exploitable weaknesses before attackers do

ISO 27001 (Annex A, A.8.25–28)

ISO 27001 requires organizations to demonstrate secure development practices and control effectiveness. DAST supports ISO 27001 by:

  • Contributing to evidence of secure application development and testing (A.8.25–28)
  • Providing measurable control outputs that demonstrate the effectiveness of security testing processes
  • Supporting continuous improvement through trend analysis of runtime findings

Auditors should note that DAST alone does not satisfy any of these frameworks — it is one component of a broader application security control set that should also include SAST, SCA, and other complementary controls.


About the author

Senior DevSecOps & Security Architect with over 15 years of experience in secure software engineering, CI/CD security, and regulated enterprise environments.

Certified CSSLP and EC-Council Certified DevSecOps Engineer, with hands-on experience designing auditable, compliant CI/CD architectures in regulated contexts.

Learn more on the About page.