QSA Perspective: Assessing CI/CD Environments During PCI DSS Assessments
As Qualified Security Assessors (QSAs) encounter CI/CD pipelines with increasing frequency in PCI DSS assessments, the challenge is not whether these systems are in scope — but how to assess them effectively. Traditional assessment methodologies were designed for manual change management processes and static infrastructure. Modern software delivery pipelines require assessors to understand automated controls, evaluate system-generated evidence, and verify that technical enforcement mechanisms achieve the security objectives required by PCI DSS.
This article provides a structured approach for QSAs assessing CI/CD environments and for compliance officers preparing their organizations for such assessments.
Scoping CI/CD for PCI DSS: When Pipelines Are In Scope
A CI/CD pipeline is in scope for PCI DSS when it meets any of the following criteria:
- Deploys to the Cardholder Data Environment (CDE): Any pipeline that deploys code, configuration, or infrastructure to systems that process, store, or transmit cardholder data
- Handles cardholder data: Pipelines that process test data containing live PANs, or that manage encryption keys or tokenization systems
- Could affect the security of the CDE: Pipelines deploying to systems connected to the CDE, even if they do not directly handle cardholder data
- Manages security controls: Pipelines that deploy or configure firewalls, WAFs, IDS/IPS, or other security controls protecting the CDE
Key scoping principle: If a compromise of the pipeline could lead to unauthorized access to cardholder data, the pipeline is in scope.
Assessment Methodology for CI/CD Controls
An effective CI/CD assessment follows a structured approach combining documentation review, configuration examination, evidence sampling, and personnel interviews.
Phase 1: Documentation Review
Request and review: SDLC policy, change management procedures, pipeline architecture diagrams, RBAC documentation, and incident response procedures covering CI/CD.
Phase 2: Configuration Examination
Directly examine: branch protection rules, pipeline security gate configurations, RBAC settings, MFA enforcement policies, logging configurations, and environment segregation controls.
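Phase 2 examinations lend themselves to lightweight tooling. As a minimal sketch, assuming a branch protection export shaped loosely like what a Git platform API (such as GitHub's branch protection endpoint) might return, a QSA workpaper script could flag missing controls. The field names here are illustrative assumptions, not a real schema:

```python
# Sketch: evaluate a branch protection configuration against expected
# controls during Phase 2. The dict structure loosely mirrors a platform
# API response; field names are illustrative, not a real schema.

def check_branch_protection(cfg: dict) -> list[str]:
    """Return a list of findings (an empty list means the checks passed)."""
    findings = []
    if not cfg.get("require_pull_request_reviews"):
        findings.append("Direct pushes to protected branch are allowed")
    if cfg.get("required_approving_review_count", 0) < 1:
        findings.append("No independent review required before merge")
    if not cfg.get("require_status_checks"):
        findings.append("Security gates (status checks) not enforced")
    if cfg.get("allow_force_pushes"):
        findings.append("Force pushes permitted; history can be rewritten")
    if not cfg.get("enforce_admins"):
        findings.append("Administrators can bypass branch protection")
    return findings

# Example: a configuration with one gap
example = {
    "require_pull_request_reviews": True,
    "required_approving_review_count": 1,
    "require_status_checks": True,
    "allow_force_pushes": False,
    "enforce_admins": False,
}
print(check_branch_protection(example))
```

A script like this documents exactly which settings were examined, which supports the ROC evidence-reference requirement discussed later.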
Phase 3: Evidence Sampling
Select samples from the assessment period to verify that controls operated consistently. Sampling should cover the full period and include different types of changes (normal, emergency, high-risk).
Phase 4: Personnel Interviews
Interview development team leads, security engineers, and pipeline administrators to verify understanding and consistent application of controls.
Assessment Areas: What to Request, Test, and Evaluate
| Assessment Area | What to Request | What to Test | Pass/Fail Criteria |
|---|---|---|---|
| Secure Development Evidence | SDLC documentation, training records, secure coding standards | Verify training is current; confirm SDLC addresses CI/CD; check coding standards are enforced via pipeline | Pass: Documented SDLC, current training, automated enforcement. Fail: No SDLC documentation, outdated training, no pipeline enforcement |
| Vulnerability Management | Scan configurations, vulnerability reports, remediation records, SBOM artifacts | Verify scans run on every build; sample vulnerabilities for remediation timeliness; validate SBOM completeness | Pass: 100% scan coverage, remediation within SLA, current SBOM. Fail: Missed scans, SLA breaches, no SBOM |
| Change Control | Pipeline configuration, deployment logs, approval records, emergency change log | Sample deployments for approval; verify SoD enforcement; examine emergency changes for proper documentation | Pass: All changes approved before deployment, SoD enforced, emergency changes documented. Fail: Unapproved deployments, self-approvals, undocumented emergencies |
| Access Controls | RBAC configuration, access review records, service account inventory, MFA enrollment reports | Verify least privilege; confirm MFA enforcement; review access review completion and remediation | Pass: Least privilege enforced, MFA universal, reviews current with remediation. Fail: Excessive permissions, MFA gaps, missed reviews |
| Logging and Monitoring | Log configuration, retention settings, alert rules, sample log entries | Verify completeness of logging; confirm retention meets requirements; test alert functionality | Pass: All events logged, retention adequate, alerts functional. Fail: Logging gaps, insufficient retention, no alerting |
| Encryption | Encryption configuration for secrets, transit encryption settings, key management procedures | Verify secrets encrypted at rest; confirm TLS for all pipeline communications; review key management | Pass: All secrets encrypted, TLS enforced, key management documented. Fail: Plaintext secrets, unencrypted communications, no key management |
| Environment Segregation | Architecture diagrams, network configuration, credential separation evidence | Verify network isolation; confirm separate credentials per environment; check that test data controls are enforced | Pass: Network isolation verified, credentials separated, no live data in test. Fail: Shared networks, shared credentials, live PANs in test |
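Several tests in the table (for example, verifying that scans run on every build) reduce to cross-referencing two evidence exports. A minimal sketch, assuming hypothetical record shapes for the CI system's build export and the scanner's result export:

```python
# Sketch: cross-check build records against scan records to test the
# "scans run on every build" criterion. Record shapes are hypothetical;
# real evidence comes from CI and scanner exports, which vary by tool.

def unscanned_builds(builds: list[dict], scans: list[dict]) -> list[str]:
    """Return the IDs of builds that have no matching scan record."""
    scanned = {s["build_id"] for s in scans}
    return [b["id"] for b in builds if b["id"] not in scanned]

builds = [{"id": "b-101"}, {"id": "b-102"}, {"id": "b-103"}]
scans = [{"build_id": "b-101"}, {"build_id": "b-103"}]
print(unscanned_builds(builds, scans))  # b-102 has no scan record
```

Any non-empty result is a candidate exception: either a missed scan (a fail against the table's criteria) or an evidence-completeness problem to resolve with the client.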
Interview Questions for Development Teams
When interviewing development team personnel, QSAs should explore the following areas to verify that documented controls are understood and followed in practice:
Change Management
- Describe the process for deploying a change to production. What steps are required?
- What happens if you need to deploy an emergency fix outside of normal hours?
- Can you deploy to production without a code review? If so, under what circumstances?
- Who has the authority to approve deployments to the cardholder data environment?
Security Controls
- What security scans run as part of your pipeline? What happens when a scan finds a critical vulnerability?
- How do you manage secrets and credentials used by the pipeline?
- How are development, test, and production environments separated?
- Have you ever had to override a security gate? What was the process?
Access and Authentication
- How is access to pipeline systems requested and approved?
- When was your last access review? Were any changes made as a result?
- Is MFA required for all access to pipeline systems? Are there any exceptions?
Incident Response
- Describe what would happen if a security vulnerability were discovered in a deployed application.

- Have there been any security incidents involving the pipeline? How were they handled?
Evidence Sampling Strategy for CI/CD
Effective sampling for CI/CD assessments requires consideration of the high volume and automated nature of pipeline activities.
Sampling Guidelines
| Population Size (Changes in Period) | Recommended Sample Size | Sampling Method |
|---|---|---|
| 1-50 | All items | Complete examination |
| 51-250 | 25-30 items | Random selection across the full period |
| 251-1,000 | 30-40 items | Stratified random: equal distribution across months plus targeted selection of high-risk changes |
| 1,000+ | 40-60 items | Stratified random across months plus all emergency changes plus targeted high-risk selection |
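The stratified approach in the table can be sketched as follows. The record fields ("id", "month", "type") are illustrative assumptions, and the fixed random seed is used so the selection is reproducible and can be documented in the workpapers:

```python
# Sketch of the sampling guidance above: examine everything for small
# populations; otherwise include all emergency changes and spread random
# picks evenly across months. Record fields are illustrative assumptions.
import random

def select_sample(changes: list[dict], target_size: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)  # fixed seed: selection is reproducible for the ROC
    if len(changes) <= 50:
        return list(changes)  # small population: complete examination
    # Always include emergency changes in the sample
    sample = [c for c in changes if c["type"] == "emergency"]
    remaining = [c for c in changes if c["type"] != "emergency"]
    # Stratify the rest evenly across months in the assessment period
    by_month: dict[int, list[dict]] = {}
    for c in remaining:
        by_month.setdefault(c["month"], []).append(c)
    per_month = max(1, (target_size - len(sample)) // max(1, len(by_month)))
    for month_changes in by_month.values():
        sample.extend(rng.sample(month_changes, min(per_month, len(month_changes))))
    return sample

# Example: 120 changes over 12 months, 3 of them emergencies
changes = [{"id": i, "month": i % 12,
            "type": "emergency" if i < 3 else "normal"} for i in range(120)]
picked = select_sample(changes, target_size=30)
print(len(picked), sum(c["type"] == "emergency" for c in picked))  # prints: 27 3
```

Note that the emergency changes ride along in full regardless of the target size, matching the 1,000+ row of the table.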
What to Verify in Each Sample
- Code review was completed by a qualified reviewer who is not the author
- Approval was granted before deployment by an authorized individual
- Security scans executed and passed (or failures were addressed before deployment)
- Change documentation is complete (description, impact, testing evidence)
- Segregation of duties was maintained throughout the change lifecycle
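The per-sample checks above are mechanical enough to script against exported deployment records. A minimal sketch, assuming a hypothetical record shape with ISO 8601 timestamps; real CI/CD exports differ by platform:

```python
# Sketch: automated checks over one sampled deployment record. The fields
# (author, approver, approved_at, deployed_at, scan_status) are
# hypothetical assumptions about the shape of an evidence export.
from datetime import datetime

def verify_change(record: dict) -> list[str]:
    """Return exceptions found in a single sampled change record."""
    exceptions = []
    approved = datetime.fromisoformat(record["approved_at"])
    deployed = datetime.fromisoformat(record["deployed_at"])
    if approved >= deployed:
        exceptions.append("Approval granted after deployment (retroactive)")
    if record["author"] == record["approver"]:
        exceptions.append("Author approved their own change (SoD failure)")
    if record.get("scan_status") != "passed":
        exceptions.append("Security scan did not pass before deployment")
    return exceptions

# Example: a record exhibiting two failures
record = {
    "author": "dev-a",
    "approver": "dev-a",
    "approved_at": "2024-03-02T10:00:00",
    "deployed_at": "2024-03-01T09:00:00",
    "scan_status": "passed",
}
print(verify_change(record))
```

Scripted checks do not replace examining the underlying evidence, but they let the assessor apply the same tests uniformly to every sampled item.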
Red Flags That Indicate Non-Compliance
During assessment, the following findings should raise immediate concern:
- Approval timestamps after deployment timestamps: Indicates retroactive approval — changes deployed before authorization
- Same individual as author and approver: Segregation of duties failure
- Security scan results showing consistent overrides: Suggests controls exist but are routinely bypassed
- No emergency change documentation despite evidence of off-hours deployments: Indicates undocumented bypasses of normal change procedures
- Service accounts with administrative access to multiple environments: Violates least-privilege and environment segregation
- Gaps in logging during the assessment period: May indicate evidence tampering or configuration failures
- Access reviews showing no changes needed: May indicate reviews are perfunctory rather than substantive
- Pipeline configuration changes not subject to change management: The control environment itself is uncontrolled
- Developers with direct production access outside the pipeline: Indicates the pipeline can be bypassed entirely
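Some of these red flags can be screened for automatically. For the logging-gap indicator, a sketch that scans exported log timestamps for silent intervals; the 24-hour threshold is an illustrative assumption, not a PCI DSS-mandated value:

```python
# Sketch: flag gaps in pipeline audit logs during the assessment period.
# Input is a list of ISO 8601 timestamp strings; the 24-hour threshold
# is illustrative, not a PCI DSS value.
from datetime import datetime, timedelta

def find_log_gaps(timestamps: list[str],
                  max_gap: timedelta = timedelta(hours=24)) -> list[tuple]:
    """Return (gap_start, gap_end) pairs where logging went silent."""
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    return [(a, b) for a, b in zip(times, times[1:]) if b - a > max_gap]

logs = ["2024-06-01T08:00:00", "2024-06-01T17:30:00", "2024-06-04T09:00:00"]
for start, end in find_log_gaps(logs):
    print(f"No log entries between {start} and {end}")
```

A flagged gap is a starting point for inquiry, not a finding in itself: the assessor still needs to determine whether it reflects a configuration failure, a retention problem, or something worse.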
Compensating Controls and Customized Approach Considerations
Compensating Controls
When an organization cannot meet a PCI DSS requirement as stated, compensating controls may be acceptable if they:
- Meet the intent and rigor of the original requirement
- Provide a similar level of defense
- Are above and beyond other PCI DSS requirements
- Are commensurate with the additional risk caused by not meeting the original requirement
Example: If a legacy pipeline tool cannot enforce segregation of duties technically, compensating controls might include mandatory post-deployment review by an independent party, enhanced logging with real-time alerts on self-approved changes, and monthly audit of all deployment records.
Customized Approach (v4.0)
The customized approach allows organizations to meet the security objective through alternative means. For CI/CD environments, this provides flexibility but requires:
- A documented Targeted Risk Analysis for each customized control
- Clear articulation of how the alternative control meets the security objective
- Evidence that the customized control is at least as effective as the defined approach
- More rigorous testing by the QSA to validate the customized control
Report of Compliance (ROC) Documentation for CI/CD Controls
When documenting CI/CD controls in the ROC, QSAs should ensure:
- Scope definition: Clearly document which pipeline components are in scope and the rationale for scoping decisions
- Control descriptions: Describe how CI/CD controls satisfy each relevant requirement, including technical enforcement mechanisms
- Evidence references: Reference specific evidence examined, including system-generated logs, configuration screenshots, and sample records
- Testing procedures: Document the assessment methodology used, including sampling approach and sample sizes
- Findings and observations: Document any exceptions, compensating controls, or areas where the customized approach was used
- Interview summaries: Document key findings from personnel interviews, particularly where interview responses confirmed or contradicted documentary evidence
Related Resources
- Glossary — Plain-language definitions of technical terms
- How Auditors Review CI/CD
- Audit Readiness Checklist
- Audit Day Playbook
New to CI/CD auditing? Start with our Auditor’s Guide.