Understanding Type I vs. Type II: Why the Distinction Matters
SOC 2 reports come in two forms, and the distinction is critical for organizations relying on CI/CD pipelines for software delivery.
Type I (Point-in-Time): Evaluates whether controls are suitably designed and implemented as of a specific date. A Type I report is essentially a snapshot — it confirms that the right controls exist on the day of examination.
Type II (Period-of-Time): Evaluates whether controls operated effectively over a defined period, typically six to twelve months. A Type II report demonstrates that controls are not just designed correctly but are consistently enforced throughout the examination period.
For CI/CD environments, this distinction is especially meaningful. It is relatively straightforward to configure pipeline controls — enforce code reviews, enable security scanning, require approvals. The far greater challenge is demonstrating that these controls operated without exception (or with properly documented exceptions) over an extended period.
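The "straightforward to configure" half of that claim can be sketched concretely. Below is a minimal illustration shaped like GitHub's branch protection API payload (the repository, check names, and reviewer count are hypothetical); turning these settings on is the easy part, while Type II asks whether they stayed on for the whole period.

```python
# Hypothetical branch protection settings, shaped like GitHub's REST API
# payload. Check names and thresholds are illustrative, not prescriptive.
protection = {
    "required_pull_request_reviews": {
        "required_approving_review_count": 2,  # peer review control
        "dismiss_stale_reviews": True,         # re-review after new commits
    },
    "required_status_checks": {
        "strict": True,                           # branch must be up to date
        "contexts": ["security-scan", "ci/test"]  # hypothetical check names
    },
    "enforce_admins": True,  # no admin bypass without a documented exception
}

def is_enforced(p: dict) -> bool:
    """Quick self-check that the core controls are switched on."""
    reviews = p.get("required_pull_request_reviews", {})
    checks = p.get("required_status_checks", {})
    return (
        reviews.get("required_approving_review_count", 0) >= 1
        and "security-scan" in checks.get("contexts", [])
        and p.get("enforce_admins", False)
    )

print(is_enforced(protection))  # True when all three controls are on
```

A point-in-time check like `is_enforced` is exactly what Type I confirms; Type II requires evidence that it would have returned the same answer on every day of the period.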
Why Type II Matters More for CI/CD Environments
Customers, prospects, and partners increasingly require Type II reports because they provide assurance that controls are sustainable, not just aspirational. For CI/CD specifically:
- Automation creates both opportunity and risk: Pipelines can enforce controls consistently, but misconfigurations or exceptions can create systemic gaps that persist undetected for months
- Volume of evidence is substantial: A busy development team may produce thousands of deployments during an audit period, providing a large population for auditor sampling
- Patterns matter: Type II examination reveals trends — increasing exception rates, degrading compliance, or inconsistent enforcement — that a point-in-time assessment would miss
- Sustained operation demonstrates maturity: Consistent control operation over six to twelve months signals organizational commitment and operational maturity to relying parties
What “Operating Effectiveness Over Time” Means for CI/CD Controls
Operating effectiveness requires demonstrating that each control performed as designed throughout the entire examination period. For CI/CD controls, this means:
- Every deployment during the period followed the approved change management process
- Access controls were consistently enforced without unauthorized exceptions
- Security scanning ran on every build with appropriate thresholds maintained
- Access reviews were conducted on schedule with findings remediated within defined timeframes
- Incident response procedures were followed when pipeline security events occurred
A single control that operates correctly 95% of the time is not operating effectively. Auditors will identify the 5% failure rate and may qualify their opinion or report exceptions.
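That 95% figure is exactly the kind of number an internal dry run should surface before the auditor does. As a toy illustration (the deployment records here are invented), computing the exception rate over the full population is a few lines:

```python
# Illustrative "operating effectiveness" check: the control exception rate
# over a deployment population. Records are fabricated for the example.
deployments = [
    {"id": 1, "approved": True},
    {"id": 2, "approved": True},
    {"id": 3, "approved": False},  # bypass: an exception auditors will find
    {"id": 4, "approved": True},
]

exceptions = [d for d in deployments if not d["approved"]]
rate = len(exceptions) / len(deployments)
print(f"exception rate: {rate:.1%}, exceptions: {[d['id'] for d in exceptions]}")
# exception rate: 25.0%, exceptions: [3]
```

Any nonzero rate needs either a documented emergency justification for each exception or a remediation story; silence on the 25% above is what leads to a qualified opinion.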
Type II Evidence Requirements by Control Area
| Control | Type II Evidence Requirement | Acceptable Evidence Format | Sampling Approach |
|---|---|---|---|
| Code review enforcement | All production changes received peer review throughout the period | System-generated merge request logs with reviewer attribution and timestamps | Statistical sampling from full population of merged requests |
| Deployment approval | All production deployments received authorized approval | Deployment approval records with approver identity, timestamp, and approval decision | Random sample across entire audit period, stratified by month |
| Security scanning | All builds underwent security scanning with defined pass/fail thresholds | Scan execution logs, threshold configuration history, gate enforcement records | Sample of builds plus verification that gate configuration was unchanged |
| Access reviews | Periodic reviews conducted on schedule with findings remediated | Access review completion records, remediation tickets, before/after access snapshots | All review cycles within the period examined |
| MFA enforcement | MFA required for all human access throughout the period | Authentication policy configuration logs, MFA enrollment reports, exception logs | Configuration audit at multiple points plus review of exception records |
| Incident response | Pipeline security events detected, evaluated, and resolved per procedure | Incident tickets, response timelines, post-incident reviews, escalation records | All incidents during the period |
| Segregation of duties | No individual both authored and approved/deployed the same change | Deployment records cross-referenced with authorship records | Statistical sample from deployment population |
| Secrets rotation | Credentials and secrets rotated per defined schedule | Rotation logs, credential age reports, automated rotation execution records | All secrets in inventory verified against rotation schedule |
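The "random sample across entire audit period, stratified by month" approach in the table can be rehearsed internally before the auditor applies it. A rough sketch, with a fabricated population and an arbitrary per-month sample size:

```python
import random
from collections import defaultdict
from datetime import date

# Fabricated deployment population spread across six months; in practice
# this would be exported from the pipeline platform's deployment records.
population = [
    {"id": i, "deployed": date(2024, (i % 6) + 1, (i % 27) + 1)}
    for i in range(300)
]

def stratified_sample(records, per_month=5, seed=42):
    """Draw a fixed-size random sample from each month's stratum."""
    by_month = defaultdict(list)
    for r in records:
        by_month[(r["deployed"].year, r["deployed"].month)].append(r)
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    sample = []
    for month in sorted(by_month):
        stratum = by_month[month]
        sample.extend(rng.sample(stratum, min(per_month, len(stratum))))
    return sample

picked = stratified_sample(population)
print(len(picked))  # 5 per month across 6 months -> 30
```

Stratifying by month matters because it prevents a quiet month (or a noisy one) from dominating the sample, which is also why a mid-period control relaxation is hard to hide.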
Pipeline Evidence That Demonstrates Sustained Operation
Consistent Enforcement Logs
The most compelling evidence for Type II is system-generated logs showing that controls fired on every relevant event throughout the period. Auditors look for:
- Complete, unbroken log sequences — no gaps in coverage
- Consistent control behavior — the same rules applied throughout
- Automated enforcement rather than manual compliance
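The "no gaps in coverage" criterion is checkable in advance. A toy gap scan over daily enforcement logs (dates fabricated for illustration):

```python
from datetime import date, timedelta

# Flag any day in the audit period with no log entries. Real evidence would
# be keyed by event, not day, but the gap-detection idea is the same.
period_start, period_end = date(2024, 1, 1), date(2024, 1, 10)
logged_days = {date(2024, 1, d) for d in (1, 2, 3, 4, 6, 7, 8, 9, 10)}  # Jan 5 missing

def coverage_gaps(start, end, seen):
    gaps, day = [], start
    while day <= end:
        if day not in seen:
            gaps.append(day)
        day += timedelta(days=1)
    return gaps

print(coverage_gaps(period_start, period_end, logged_days))
# [datetime.date(2024, 1, 5)]
```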
Access Review Cadence
Quarterly access reviews must be documented with:
- Date of review, reviewer identity, and scope of review
- Findings identified (inappropriate access, stale accounts, excessive permissions)
- Remediation actions taken with completion dates
- Sign-off from the responsible manager
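One way to keep those four elements auditable is to capture each review as a structured record and run a completeness check before filing it. A hypothetical schema mirroring the fields above (all names and dates invented):

```python
from dataclasses import dataclass, field

# Hypothetical access review record; field names are illustrative.
@dataclass
class AccessReview:
    review_date: str
    reviewer: str
    scope: str
    findings: list = field(default_factory=list)
    remediation_completed: list = field(default_factory=list)
    manager_signoff: str = ""

    def is_complete(self) -> bool:
        # Every finding needs a remediation entry, and sign-off must exist.
        return bool(self.manager_signoff) and \
            len(self.remediation_completed) >= len(self.findings)

q1 = AccessReview("2024-03-31", "j.doe", "prod deploy group",
                  findings=["stale account: svc-old"],
                  remediation_completed=["svc-old disabled 2024-04-05"],
                  manager_signoff="a.manager 2024-04-07")
print(q1.is_complete())  # True
```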
Change Approval Patterns
Auditors analyze approval patterns to verify:
- Approvals come from authorized individuals (not just anyone with repository access)
- Approvals occur before deployment (not retroactively)
- The approval population does not show concentration (one person approving everything)
- Emergency changes follow documented emergency procedures
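Three of these checks (authorized approver, approval-before-deploy, approver concentration) reduce to timestamp and identity comparisons over deployment records. A rough sketch on fabricated data:

```python
from collections import Counter
from datetime import datetime

# Fabricated deployment records; the authorized-approver set and all
# timestamps are invented for the example.
AUTHORIZED = {"alice", "bob"}
records = [
    {"author": "carol", "approver": "alice",
     "approved_at": datetime(2024, 5, 1, 9), "deployed_at": datetime(2024, 5, 1, 10)},
    {"author": "dave", "approver": "alice",
     "approved_at": datetime(2024, 5, 2, 12), "deployed_at": datetime(2024, 5, 2, 11)},  # retroactive
]

def review_flags(recs, authorized):
    flags = []
    for r in recs:
        if r["approver"] not in authorized:
            flags.append(("unauthorized approver", r))
        if r["approved_at"] > r["deployed_at"]:
            flags.append(("retroactive approval", r))
        if r["approver"] == r["author"]:
            flags.append(("segregation of duties", r))
    top, count = Counter(r["approver"] for r in recs).most_common(1)[0]
    return flags, (top, count / len(recs))

flags, (top_approver, share) = review_flags(records, AUTHORIZED)
print([f[0] for f in flags], top_approver, share)
# ['retroactive approval'] alice 1.0
```

Here the concentration check also fires in spirit: one person approved 100% of changes, which an auditor would probe even though each individual record looks valid.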
Security Scan Compliance Rates Over Time
Trend data showing scan compliance rates should demonstrate:
- Consistently high execution rates (target: 100% of eligible builds)
- Stable or improving vulnerability remediation timelines
- No periods where scanning was disabled or thresholds were lowered
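The trend itself is simple arithmetic once scan execution logs are grouped by month. An illustrative computation with invented counts:

```python
# Monthly scan-compliance trend: share of builds that actually ran the
# security scan. All numbers are invented for illustration.
monthly = {
    "2024-01": {"builds": 410, "scanned": 410},
    "2024-02": {"builds": 380, "scanned": 379},  # one miss to investigate
    "2024-03": {"builds": 450, "scanned": 450},
}

rates = {m: v["scanned"] / v["builds"] for m, v in monthly.items()}
shortfalls = [m for m, r in rates.items() if r < 1.0]
print(shortfalls)  # ['2024-02']
```

A single missed build out of 380 is still a finding to explain; the point of tracking the trend monthly is that a disabled-scanner week shows up as a visible dip rather than averaging away.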
Metrics That SOC 2 Auditors Evaluate
| Metric | What It Indicates | Red Flag Threshold |
|---|---|---|
| Approval bypass rate | Frequency of changes deployed without required approval | Any bypass without documented emergency justification |
| Mean time to remediate vulnerabilities | Responsiveness to identified security issues | Increasing trend or consistent SLA breaches |
| Policy exception trends | Whether exceptions are increasing, stable, or decreasing | Upward trend or exceptions becoming routine |
| Access review completion rate | Whether reviews are conducted on schedule | Missed reviews or reviews completed significantly late |
| Failed security gate override rate | How often failing security checks are overridden | High override rate or overrides without justification |
| Segregation of duties violation rate | Whether the same person authors and deploys changes | Any violation without documented emergency procedure |
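Taking one row of the table as an example, the failed security gate override rate pairs a ratio with a justification check. A sketch over fabricated gate events:

```python
# Failed-gate override rate plus the "overrides without justification"
# red flag from the table. Event records are fabricated.
gate_events = [
    {"build": "b1", "result": "fail", "overridden": True,  "ticket": "EMG-12"},
    {"build": "b2", "result": "fail", "overridden": False, "ticket": None},
    {"build": "b3", "result": "fail", "overridden": True,  "ticket": None},  # red flag
    {"build": "b4", "result": "pass", "overridden": False, "ticket": None},
]

failed = [e for e in gate_events if e["result"] == "fail"]
overrides = [e for e in failed if e["overridden"]]
unjustified = [e["build"] for e in overrides if not e["ticket"]]
override_rate = len(overrides) / len(failed)
print(f"{override_rate:.0%} overridden; unjustified: {unjustified}")
# 67% overridden; unjustified: ['b3']
```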
Evidence Integrity: System-Generated vs. Manual
SOC 2 auditors place significantly more weight on system-generated evidence than manually compiled records.
Evidence Hierarchy (Strongest to Weakest)
1. System-generated, immutable logs — produced automatically by the pipeline platform with tamper-evident controls
2. System-generated, standard logs — produced automatically but stored in mutable systems
3. Automated reports — compiled automatically from system data on a scheduled basis
4. Manually compiled reports — created by personnel from system data
5. Self-attestations — statements from personnel without supporting system evidence
Tamper Resistance: Evidence stored in write-once or append-only systems carries more weight. Consider log shipping to immutable storage, cryptographic signing of audit records, and retention policies that prevent premature deletion.
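One common way to make audit records tamper-evident is a hash chain: each entry commits to the previous entry's hash, so editing any record invalidates everything after it. A minimal sketch (real deployments would also sign the chain head and ship it to write-once storage):

```python
import hashlib
import json

# Minimal hash chain over audit records: each entry's hash covers the
# previous hash plus the record body, so any edit breaks verification.
def chain(records):
    prev, out = "0" * 64, []
    for rec in records:
        body = json.dumps(rec, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        out.append({"record": rec, "prev": prev, "hash": digest})
        prev = digest
    return out

def verify(entries):
    prev = "0" * 64
    for e in entries:
        body = json.dumps(e["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

log = chain([{"event": "deploy", "id": 1}, {"event": "deploy", "id": 2}])
print(verify(log))  # True
log[0]["record"]["id"] = 99  # simulate tampering with an early record
print(verify(log))  # False
```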
Retention: Evidence must be retained for at least the audit period plus a reasonable buffer. Establish retention policies that cover the full examination period and ensure evidence is accessible when auditors request it.
Common Type II Failures in CI/CD Environments
- Gap in evidence: Logging was reconfigured mid-period, creating a gap where no evidence exists
- Control relaxation: Security gates were temporarily disabled during a “crunch period” and evidence of this exists in configuration history
- Inconsistent enforcement: Some repositories or teams were exempt from controls without formal exception documentation
- Retroactive approvals: Deployments were approved after the fact, as evidenced by timestamp analysis
- Access review deficiencies: Reviews were conducted but findings were not remediated within the defined timeframe
- Missing emergency change documentation: Bypasses occurred but were not documented as emergency changes with proper justification
Preparing for the Audit Period: What to Start Collecting Now
If your Type II examination period has not yet started, use the preparation time to:
- Validate logging completeness: Confirm that every in-scope control produces system-generated evidence
- Test evidence retrieval: Ensure you can efficiently extract and present evidence for any control at any point in the period
- Establish baselines: Document current metric values so you can demonstrate improvement or stability during the period
- Formalize exception processes: Ensure every potential control bypass has a documented exception procedure with approval requirements
- Train teams: Ensure all personnel understand that the audit period requires consistent control operation — not just good intentions
- Conduct a dry run: Perform an internal assessment using the same sampling methodology auditors will use
Related Resources
For Auditors
- Glossary — Plain-language definitions of technical terms
- Continuous Compliance via CI/CD
- Audit Readiness Checklist
- Executive Audit Briefing
New to CI/CD auditing? Start with our Auditor’s Guide.