The Secure SDLC as a Control Framework
The Secure Software Development Lifecycle (Secure SDLC) is often presented as a development methodology — a sequence of practices that engineering teams follow to build more secure software. For auditors and compliance officers, however, it should be assessed as something more fundamental: a control framework.
Each phase of the SDLC represents a control point where specific security activities should occur, specific evidence should be generated, and specific outcomes should be verifiable. When an organisation claims to operate a Secure SDLC, auditors must look beyond process documentation and assess whether controls are actually operating at each phase.
This guide walks through each SDLC phase from the auditor’s perspective, specifying what should happen, what evidence to request, what good practice looks like, and what should raise concern.
PLAN Phase
What Should Happen
Security considerations are integrated into project planning before any development begins. This includes threat modelling to identify potential attack vectors, defining security requirements alongside functional requirements, and classifying the application according to the organisation’s risk classification framework.
Evidence to Request
- Threat model documentation (data flow diagrams, threat identification, mitigation decisions)
- Security requirements documented in the project backlog or requirements repository
- Application risk classification record with approval
- Records of security involvement in planning sessions
What Good Looks Like
- Threat models are created for all Tier 1 and Tier 2 applications and updated when architecture changes
- Security requirements are traceable — each threat identified in the model has a corresponding requirement or accepted risk
- Classification drives downstream control requirements (testing frequency, approval workflows)
- Security personnel participate in planning activities, evidenced by meeting records
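The traceability expectation above — every identified threat maps to a requirement or a formally accepted risk — can be checked mechanically. The sketch below is illustrative only: the dict shapes and field names (`id`, `threat_id`) are assumptions, since real threat-modelling tools each export their own formats.

```python
# Hypothetical traceability check: flag threats from the threat model that
# have neither a corresponding security requirement nor an accepted risk.
# Data shapes are illustrative, not from any specific tool.

def find_untraced_threats(threats, requirements, accepted_risks):
    """Return threat IDs with no matching requirement or accepted risk."""
    covered = {r["threat_id"] for r in requirements} | set(accepted_risks)
    return sorted(t["id"] for t in threats if t["id"] not in covered)

threats = [{"id": "T1"}, {"id": "T2"}, {"id": "T3"}]
requirements = [{"threat_id": "T1"}, {"threat_id": "T3"}]
accepted_risks = {"T2"}

# All three threats are covered, so nothing is flagged.
print(find_untraced_threats(threats, requirements, accepted_risks))  # []
```

An auditor would run this kind of reconciliation against exported threat-model and backlog data; any non-empty result is a candidate finding.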
What Bad Looks Like
- No threat models exist, or they were created once and never updated
- Security requirements are generic (“the application shall be secure”) rather than specific and testable
- Application classification is missing or was performed without security or compliance input
- Security is not involved until testing or deployment
CODE Phase
What Should Happen
Developers follow documented secure coding standards. Code changes are reviewed for security issues — either through peer review with security-aware reviewers or through automated Static Application Security Testing (SAST). Secrets scanning prevents credentials, API keys, and tokens from being committed to repositories.
Evidence to Request
- Secure coding standards document, approved and version-controlled
- Code review records showing security-related comments and resolutions
- SAST scan results integrated into the development workflow
- Secrets scanning configuration and alert records
- Training records for developers on secure coding practices
What Good Looks Like
- SAST runs automatically on every pull request or code commit, with results visible to developers
- Code review records show security findings being identified and addressed — not just functional review
- Secrets scanning blocks commits containing credentials, with a documented process for rotating any exposed secrets
- Developers receive annual (minimum) secure coding training relevant to their technology stack
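To make the secrets-scanning control concrete, here is a minimal sketch of the pattern-matching approach such tools use. The patterns shown are deliberately simplified examples; production scanners such as gitleaks or truffleHog ship far larger, tuned rule sets with entropy checks and allow-lists.

```python
import re

# Simplified example patterns — real scanners use much richer rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(
        r"(?i)(?:api[_-]?key|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
}

def scan_for_secrets(text):
    """Return (pattern_name, line_number) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

In a well-controlled environment this check runs as a pre-commit hook or pipeline step that blocks the commit, rather than merely alerting after the fact.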
What Bad Looks Like
- SAST is run manually or only before releases, allowing vulnerabilities to accumulate undetected between scans
- Code reviews exist but never include security-related feedback
- No secrets scanning is in place, or alerts are ignored
- Secure coding standards are outdated or not aligned with the technologies in use
BUILD Phase
What Should Happen
The build process includes Software Composition Analysis (SCA) to identify vulnerabilities in third-party and open-source dependencies. A Software Bill of Materials (SBOM) is generated for each build to maintain an inventory of all components. Build artifacts are signed to ensure integrity and prevent tampering.
Evidence to Request
- SCA scan results for recent builds, showing identified vulnerabilities and their disposition
- SBOM records for production releases
- Artifact signing configuration and verification records
- Policy for acceptable vulnerability thresholds in dependencies
What Good Looks Like
- SCA runs on every build with defined policies for blocking builds that contain critical or high-severity vulnerabilities
- SBOMs are generated automatically and stored alongside release records
- Artifact signing is enforced — unsigned artifacts cannot be deployed to production
- A process exists for emergency patching of critical dependency vulnerabilities
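The build-blocking policy described above can be sketched as a simple severity gate. The report shape below is an assumption for illustration — real SCA tools (OWASP Dependency-Check, Snyk, and others) each emit their own JSON schemas — but the control logic is the same: findings at or above the threshold block the build unless a documented waiver exists.

```python
# Hypothetical SCA build gate. Finding structure is illustrative.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_build(findings, block_at="high"):
    """Return (passed, blocking): blocking lists findings at or above the
    threshold that carry no documented waiver reference."""
    threshold = SEVERITY_RANK[block_at]
    blocking = [
        f for f in findings
        if SEVERITY_RANK[f["severity"]] >= threshold and not f.get("waiver")
    ]
    return (len(blocking) == 0, blocking)

findings = [
    {"id": "CVE-2024-0001", "severity": "critical"},              # blocks
    {"id": "CVE-2024-0002", "severity": "high", "waiver": "RISK-42"},  # waived
    {"id": "CVE-2024-0003", "severity": "low"},                   # below threshold
]
passed, blocking = gate_build(findings)
print(passed, [f["id"] for f in blocking])  # False ['CVE-2024-0001']
```

Note that the waiver field is what gives the auditor traceability: every bypass points back to a formal risk-acceptance record.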
What Bad Looks Like
- SCA is not integrated into the build process or runs only periodically
- No SBOM generation — the organisation cannot identify which components are in production
- No artifact signing — there is no way to verify that deployed artifacts match approved builds
- Known critical vulnerabilities in dependencies are present in production without documented acceptance
TEST Phase
What Should Happen
Dynamic Application Security Testing (DAST) is performed against running applications to identify vulnerabilities that static analysis cannot detect (authentication flaws, configuration issues, runtime injection vulnerabilities). Penetration testing by qualified personnel provides an adversarial perspective. Testing environments are isolated from production to prevent data leakage.
Evidence to Request
- DAST scan results and remediation records
- Penetration testing reports with scope, methodology, findings, and remediation status
- Evidence of environment isolation (network diagrams, access controls)
- Records showing testing frequency aligns with the application’s risk tier
What Good Looks Like
- DAST is automated and runs at the frequency specified by the application’s risk classification
- Penetration testing is conducted by qualified independent testers (internal or external) with defined scope
- Test environments do not contain production data, or production data is appropriately masked
- Findings from testing are tracked through to verified remediation
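The expectation that testing frequency follows risk classification can be verified with a simple date check. The tier intervals below are illustrative assumptions, not a standard; an auditor would substitute the organisation's own policy values.

```python
from datetime import date, timedelta

# Illustrative policy: maximum allowed age of the last DAST scan per tier.
TIER_MAX_AGE = {1: timedelta(days=30), 2: timedelta(days=90), 3: timedelta(days=180)}

def overdue_scans(apps, today):
    """Return names of apps whose last scan exceeds their tier's interval."""
    return [
        a["name"] for a in apps
        if today - a["last_scan"] > TIER_MAX_AGE[a["tier"]]
    ]

apps = [
    {"name": "payments", "tier": 1, "last_scan": date(2025, 1, 1)},
    {"name": "blog", "tier": 3, "last_scan": date(2025, 5, 1)},
]
# payments is a Tier 1 app last scanned 151 days ago — overdue.
print(overdue_scans(apps, date(2025, 6, 1)))  # ['payments']
```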
What Bad Looks Like
- DAST is not performed, or results are ignored
- Penetration testing is performed by the same team that built the application, with no independence
- Test environments contain unmasked production data
- Testing findings remain open indefinitely with no escalation
RELEASE Phase
What Should Happen
Releases are subject to approval workflows that verify all required security activities have been completed. Policy gates in the delivery pipeline enforce that security requirements are met before code can progress to production. Release decisions are documented as part of change management.
Evidence to Request
- Release approval records showing security sign-off where required
- Policy gate results from the delivery pipeline (pass/fail records with timestamps)
- Change management records linking releases to security testing outcomes
- Exception records for any releases that bypassed policy gates
What Good Looks Like
- Policy gates are automated and enforced — the pipeline prevents release if security criteria are not met
- Security sign-off is required for Tier 1 and Tier 2 applications, with documented approval
- Gate bypasses require formal exception approval and are tracked
- Release records are linked to specific scan results and test outcomes
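The gate-plus-exception pattern described above reduces to a small decision function. The check names and record fields here are hypothetical — each pipeline tool models this differently — but the logic mirrors what an enforced gate does: release only if every required check passed, or a formal, tracked exception exists.

```python
# Hypothetical release policy gate. Field names are illustrative.
REQUIRED_CHECKS = ("sast", "sca", "dast", "security_signoff")

def may_release(release_record):
    """Return (allowed, missing_checks) for a candidate release."""
    if release_record.get("exception_id"):
        # A bypass is permitted only via an approved, tracked exception —
        # the exception record itself becomes audit evidence.
        return (True, [])
    missing = [
        check for check in REQUIRED_CHECKS
        if release_record.get("checks", {}).get(check) != "pass"
    ]
    return (len(missing) == 0, missing)
```

An auditor would expect the exception path to appear rarely in the gate logs; routine use of it is the "emergency release procedures used routinely" red flag noted below.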
What Bad Looks Like
- No policy gates exist — security testing is advisory only
- Gates exist but are frequently bypassed without formal approval
- Release approvals reference security testing but do not link to specific results
- Emergency release procedures are used routinely, circumventing normal controls
DEPLOY Phase
What Should Happen
Deployments are logged with sufficient detail to establish what was deployed, when, by whom, and to which environment. Configuration is validated against security baselines. Production environments match the configurations that were tested.
Evidence to Request
- Deployment logs with timestamps, artifact identifiers, deployer identity, and target environment
- Configuration validation records (security baseline checks)
- Evidence of environment parity between testing and production
- Rollback records where deployments were reversed
What Good Looks Like
- Deployments are automated, logged, and auditable — manual deployments to production are prohibited or require exceptional approval
- Configuration drift detection identifies and alerts on deviations from security baselines
- Infrastructure-as-code or equivalent ensures environment parity
- Failed deployments and rollbacks are documented with root cause
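Auditing deployment logs for the "what, when, who, where" requirement is largely a completeness check over records. The field names below are assumptions for illustration; the substance is that every record must carry enough detail to reconstruct the deployment.

```python
# Hypothetical deployment-log audit: flag records missing any of the
# fields needed to establish what was deployed, when, by whom, and where.
REQUIRED_FIELDS = ("timestamp", "artifact_digest", "deployer", "environment")

def incomplete_records(deploy_log):
    """Return (record_index, missing_fields) for each failing record."""
    issues = []
    for i, record in enumerate(deploy_log):
        missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
        if missing:
            issues.append((i, missing))
    return issues
```

Checking the artifact digest rather than a free-text version label also supports the BUILD-phase signing control: the digest in the log can be verified against the signed artifact that was approved for release.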
What Bad Looks Like
- Manual deployments with no audit trail
- No configuration validation — security settings are assumed rather than verified
- Significant differences between test and production environments
- Deployment access is broadly granted without role-based restrictions
MONITOR Phase
What Should Happen
Production applications are monitored for security events, anomalous behaviour, and new vulnerabilities. Runtime protection mechanisms detect and respond to active threats. A vulnerability disclosure process allows external researchers to report issues responsibly.
Evidence to Request
- Security monitoring configuration and alert records
- Incident detection and response records related to applications
- Vulnerability disclosure policy (publicly accessible)
- Records of vulnerabilities reported through disclosure channels and their resolution
- Runtime security tool deployment records
What Good Looks Like
- Application-level security monitoring is active, with alerts routed to the security operations team
- Incident response procedures specifically address application-level incidents (not just infrastructure)
- A vulnerability disclosure policy is published, and reports are triaged and tracked
- Newly disclosed vulnerabilities in dependencies trigger reassessment via SCA
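The SBOM maintained at build time is what makes the last point workable: when an advisory is published, the organisation can query its component inventory rather than guess. The structures below are heavily simplified assumptions — real SBOM formats (CycloneDX, SPDX) carry far more detail — but they show the shape of the lookup.

```python
# Hypothetical advisory-to-SBOM matching. Structures are illustrative.

def affected_apps(advisory, sboms):
    """Return names of apps whose SBOM lists the advisory's package/version."""
    target = (advisory["package"], advisory["version"])
    return sorted(
        app for app, components in sboms.items()
        if target in {(c["name"], c["version"]) for c in components}
    )

sboms = {
    "app-a": [{"name": "libx", "version": "1.2"}],
    "app-b": [{"name": "libx", "version": "2.0"}],
}
advisory = {"package": "libx", "version": "1.2"}
print(affected_apps(advisory, sboms))  # ['app-a']
```

In practice version matching uses ranges rather than exact versions, but the audit question is the same: can the organisation answer "which production applications contain this component?" within a defined timeframe.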
What Bad Looks Like
- No application-layer monitoring — only infrastructure monitoring is in place
- No vulnerability disclosure process — external reports have no intake channel
- Security incidents involving applications are handled ad hoc, without documented procedures
- No process to respond to newly disclosed dependency vulnerabilities
Summary: Controls, Evidence, and Red Flags by Phase
| Phase | Key Control | Evidence Artifact | Where to Find It | Red Flag |
|---|---|---|---|---|
| PLAN | Threat modelling | Threat model document | Wiki, design repository, risk register | No threat models or models never updated |
| CODE | SAST / code review | Scan results, review records | CI/CD pipeline logs, code review tool | SAST not integrated or results ignored |
| BUILD | SCA / SBOM | Dependency scan results, SBOM files | Build system, artifact repository | No component inventory for production |
| TEST | DAST / penetration testing | Scan reports, pentest reports | Security testing tools, report archive | No dynamic testing or no independence |
| RELEASE | Policy gates / approval | Gate results, approval records | Pipeline logs, change management system | Gates bypassed without approval |
| DEPLOY | Deployment logging | Deployment logs, config checks | Deployment platform, monitoring tools | Manual deployments with no audit trail |
| MONITOR | Runtime monitoring | Alert logs, incident records | SIEM, monitoring platform, ticketing | No application-layer monitoring |
Common Audit Findings Across SDLC Phases
Based on typical audit outcomes in regulated organisations, the most frequent findings include:
- Inconsistent coverage: Security controls are applied to some applications but not others, with no documented rationale for the difference
- Evidence gaps: Controls are described in policy but evidence of execution is incomplete or missing — particularly for threat modelling and security code review
- Broken traceability: It is not possible to trace from a production release back to the specific security test results that cleared it
- Stale findings: Vulnerabilities identified in testing remain open for months or years without remediation, escalation, or formal risk acceptance
- Process without enforcement: The organisation has a Secure SDLC document but no automated gates or independent verification that it is followed
- Monitor phase neglected: Significant investment in pre-production security but no runtime monitoring or vulnerability management for production applications
Assessing SDLC Maturity
Auditors can use a maturity model to characterise how well the Secure SDLC is implemented. This helps frame findings and recommendations proportionately.
| Maturity Level | Characteristics | Audit Implications |
|---|---|---|
| Ad-Hoc (Level 1) | Security activities are performed inconsistently, depend on individual initiative, and are not documented in policy. No automation. | Fundamental control gaps. Findings will be numerous and significant. Recommend establishing baseline policy and governance. |
| Defined (Level 2) | Policies and procedures exist. Security activities are documented and assigned. Tooling is in place but may not be consistently enforced. | Controls exist but operating effectiveness may be weak. Focus on verifying consistent execution and evidence quality. |
| Managed (Level 3) | Security controls are consistently applied, automated where possible, and monitored through metrics. Governance is active. | Controls are operating effectively. Audit focus shifts to edge cases, exceptions, and continuous improvement. |
| Optimised (Level 4) | Continuous improvement based on metrics. Proactive threat identification. Security is fully integrated into development culture and tooling. Advanced automation. | High confidence in control environment. Audit focus on sustainability, adaptation to new threats, and governance of advanced capabilities. |
Further Reading
For related guidance, see:
- Glossary — Plain-language definitions of technical terms
- How Auditors Review CI/CD
- Continuous Compliance via CI/CD
- Core CI/CD Security Controls
New to CI/CD auditing? Start with our Auditor’s Guide.