AppSec Governance Model — Roles, Responsibilities, and Oversight

Why AppSec Governance Is Distinct from General IT Security Governance

Many organisations treat application security as a subset of IT security governance — a line item in an information security policy, overseen by the same committee that manages network security and endpoint protection. This is a structural mistake that auditors should recognise immediately.

Application security governance has unique characteristics that demand dedicated oversight:

  • Applications are built, not bought (or are heavily customised): Unlike infrastructure components, custom applications introduce risks that are unique to the organisation. Generic IT controls do not address them.
  • The risk is embedded in the development process: Vulnerabilities are introduced during design, coding, and configuration — phases that traditional IT security governance rarely touches.
  • Multiple teams share responsibility: Development, operations, security, and business stakeholders all play a role. Without clear governance, accountability gaps emerge.
  • Speed of change: Modern development practices mean applications change weekly or daily. Governance models designed for quarterly infrastructure reviews cannot keep pace.
  • Regulatory expectations are increasing: DORA, NIS2, and sector-specific regulations now explicitly require controls over software development and third-party components, not just operational IT systems.

For auditors, the key question is: does the organisation have a governance structure specifically designed for application security, with defined roles, decision rights, and oversight mechanisms?

Key Roles in AppSec Governance

Effective application security governance requires clearly defined roles with documented responsibilities. The following roles are essential — though titles may vary across organisations.

Application Security Lead

The individual (or team lead) responsible for defining and maintaining the organisation’s application security programme. This role owns the AppSec strategy, tooling decisions, policy development, and programme metrics. In larger organisations, this may be a dedicated team; in smaller ones, it may sit within the broader security function.

Security Champions

Embedded representatives within development teams who act as the first point of contact for security questions. They do not replace security specialists but serve as a bridge between development and the AppSec function. Their effectiveness depends on formal training, allocated time, and recognition within the governance structure.

Development Team Leads

Responsible for ensuring their teams follow secure development practices, address vulnerabilities within agreed SLAs, and participate in required security activities (code reviews, threat modelling sessions). They are accountable for remediation within their teams.

CISO / Security Director

Provides executive sponsorship for the AppSec programme, ensures adequate resourcing, and reports on application security risk to the board or risk committee. The CISO is ultimately accountable for the organisation’s security posture, including application security.

Risk Owner

Typically a business line leader who owns the risk associated with a specific application or set of applications. The risk owner accepts residual risk, approves exceptions, and ensures that business decisions consider application security implications.

Compliance Officer

Ensures that the AppSec programme satisfies regulatory requirements, maps controls to regulatory obligations, and coordinates with external auditors. The compliance officer validates that evidence collection meets regulatory expectations.

RACI Matrix for AppSec Activities

A RACI matrix (Responsible, Accountable, Consulted, Informed) eliminates ambiguity about who does what. Auditors should request this as a core governance document.

Activity | AppSec Lead | Security Champions | Dev Team Leads | CISO | Risk Owner | Compliance Officer
Threat Modelling | A | R | R | I | C | I
Security Testing Configuration | R/A | C | I | I | I | I
Vulnerability Triage | A | R | R | I | I | I
Remediation Tracking | A | C | R | I | I | I
Exception Approval | C | I | I | A | R | C
Metrics Reporting | R | C | C | A | I | I
Tool Selection | R | C | C | A | I | C
Security Training | R/A | R | C | I | I | I

R = Responsible (does the work), A = Accountable (owns the outcome), C = Consulted, I = Informed
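The structural rules behind a RACI matrix (exactly one Accountable role per activity, at least one Responsible) can be checked mechanically. The following is an illustrative sketch only — the dictionary layout, role names, and `validate` helper are assumptions for the example, not part of any standard tooling:

```python
# Hypothetical sketch: checking a RACI matrix for the two most common
# consistency rules — exactly one Accountable, at least one Responsible.
ROLES = ["AppSec Lead", "Security Champions", "Dev Team Leads",
         "CISO", "Risk Owner", "Compliance Officer"]

RACI = {  # a subset of the matrix above, one assignment per role in order
    "Threat Modelling":     ["A", "R", "R", "I", "C", "I"],
    "Vulnerability Triage": ["A", "R", "R", "I", "I", "I"],
    "Exception Approval":   ["C", "I", "I", "A", "R", "C"],
}

def validate(raci: dict) -> list[str]:
    """Return a list of findings; an empty list means the matrix is consistent."""
    findings = []
    for activity, assignments in raci.items():
        accountable = [r for r, a in zip(ROLES, assignments) if "A" in a]
        if len(accountable) != 1:
            findings.append(f"{activity}: expected exactly one Accountable, "
                            f"got {accountable or 'none'}")
        if not any("R" in a for a in assignments):
            findings.append(f"{activity}: no Responsible role assigned")
    return findings

print(validate(RACI))  # → [] (the sample rows above are consistent)
```

An auditor would apply the same two rules when reviewing the matrix manually: an activity with zero or multiple Accountable roles is itself a governance finding.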

Governance Structure: Centralised, Embedded, or Hybrid

How the AppSec function is organised within the broader enterprise has significant implications for effectiveness, accountability, and auditability.

Centralised
  • Description: A dedicated AppSec team provides all security testing, reviews, and guidance; development teams request security services.
  • Strengths: Consistent standards; clear ownership; easier to audit; specialised expertise concentrated.
  • Weaknesses: Can become a bottleneck; may lack development context; perceived as external gatekeepers.
  • Best suited for: Heavily regulated organisations; smaller development portfolios; organisations early in AppSec maturity.

Embedded
  • Description: Security engineers are placed within each development team and report to development leadership.
  • Strengths: Deep integration with development; faster feedback; security aligned to product context.
  • Weaknesses: Inconsistent standards across teams; security may be deprioritised by development leadership; harder to maintain independence; difficult to audit consistently.
  • Best suited for: Large technology companies; organisations with mature security culture; product-centric organisations.

Hybrid
  • Description: A central AppSec team defines standards, selects tools, and provides oversight; Security Champions or embedded engineers handle day-to-day activities within development teams.
  • Strengths: Balances consistency with integration; central oversight supports auditability; scalable across large portfolios.
  • Weaknesses: Requires strong coordination; Security Champion effectiveness varies; dual reporting lines can create confusion.
  • Best suited for: Most regulated organisations; organisations with multiple development teams; entities subject to DORA, NIS2, or PCI DSS.

For regulated organisations, the hybrid model is generally most defensible from an audit perspective because it preserves central oversight (critical for consistent policy enforcement and evidence collection) while maintaining practical integration with development teams.

Policy Framework

AppSec governance requires a set of interconnected policies that establish expectations and provide the basis for audit assessment. At minimum, the following policies should exist:

Secure Development Policy

Defines mandatory secure coding practices, security testing requirements by application tier, code review expectations, and training obligations. This is the foundational policy for the entire AppSec programme.

Vulnerability Management Policy (Application-Specific)

Specifies how application vulnerabilities are identified, triaged, prioritised, assigned, remediated, and verified. Must include remediation SLAs by severity and application tier, escalation procedures for overdue vulnerabilities, and criteria for vulnerability suppression or acceptance.
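The SLA-by-severity-and-tier requirement lends itself to a simple lookup. The sketch below is illustrative only — the specific day counts, tiers, and function names are assumptions for the example; actual values belong in the organisation's policy:

```python
# Illustrative sketch: remediation deadlines keyed by (severity, tier).
# The SLA day counts here are example assumptions, not recommended values.
from datetime import date, timedelta

SLA_DAYS = {
    ("critical", 1): 7,  ("critical", 2): 14,
    ("high", 1): 30,     ("high", 2): 60,
    ("medium", 1): 90,   ("medium", 2): 120,
}

def remediation_deadline(found: date, severity: str, tier: int) -> date:
    """Deadline derived from the policy's SLA table."""
    return found + timedelta(days=SLA_DAYS[(severity, tier)])

def is_overdue(found: date, severity: str, tier: int, today: date) -> bool:
    """Overdue items feed the escalation procedure the policy requires."""
    return today > remediation_deadline(found, severity, tier)

print(is_overdue(date(2024, 1, 1), "critical", 1, date(2024, 1, 10)))  # → True
```

Encoding the SLA table in tooling (rather than prose alone) is what makes the dashboard and escalation mechanisms described later in this section auditable.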

Exception Handling Policy

Documents the process for requesting, approving, and tracking exceptions to security requirements. Must specify who can approve exceptions, time limits on exceptions, required compensating controls, and review cadence for open exceptions.
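The policy's time-limit requirement implies every exception record carries an expiry date that a review process can query. A minimal sketch, assuming a record shape with requirement, approver, compensating control, and expiry (all field names are illustrative):

```python
# Hedged sketch: exception records with mandatory expiry dates, per the
# policy's time-limit and review-cadence requirements. Field names are
# assumptions for the example.
from dataclasses import dataclass
from datetime import date

@dataclass
class SecurityException:
    requirement: str           # the control being excepted
    approver: str              # must match the policy's approval authority
    compensating_control: str  # required by the policy
    expires: date              # an exception without an expiry is a red flag

def due_for_review(exceptions: list[SecurityException],
                   today: date) -> list[SecurityException]:
    """Expired exceptions the review board must revisit or re-approve."""
    return [e for e in exceptions if e.expires <= today]

open_items = [
    SecurityException("TLS 1.2 minimum", "Risk Owner",
                      "network segmentation", date(2024, 6, 30)),
]
print([e.requirement for e in due_for_review(open_items, date(2024, 7, 1))])
# → ['TLS 1.2 minimum']
```

Making `expires` a required field enforces the "no open-ended exceptions" rule structurally rather than by convention.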

Third-Party Component Policy

Governs the use of open-source and commercial third-party components, including approved sources, licence compliance requirements, vulnerability monitoring obligations, and update/patching expectations.
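One enforceable slice of this policy is licence screening of the component inventory. A minimal sketch, assuming components are available as simple records (for instance, exported from an SBOM) and assuming an illustrative approved-licence list:

```python
# Minimal sketch: screening third-party components against an approved
# licence list. The component record shape and the approved list are
# assumptions for the example.
APPROVED_LICENCES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def licence_violations(components: list[dict]) -> list[str]:
    """Names of components whose licence is missing or not approved."""
    return [c["name"] for c in components
            if c.get("licence") not in APPROVED_LICENCES]

sbom = [
    {"name": "left-pad", "version": "1.3.0", "licence": "MIT"},
    {"name": "some-lib", "version": "0.1.0", "licence": "GPL-3.0-only"},
]
print(licence_violations(sbom))  # → ['some-lib']
```

The same pattern extends to the policy's other obligations, such as checking versions against a known-vulnerable list before build acceptance.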

Oversight Mechanisms

Governance is only effective if oversight mechanisms exist to monitor compliance, identify issues, and drive improvement.

AppSec Steering Committee

A regular (typically monthly or quarterly) meeting of senior stakeholders that reviews programme performance, approves policy changes, addresses strategic risks, and allocates resources. Membership should include the CISO, AppSec Lead, senior development leadership, and compliance representation.

Vulnerability Review Board

A regular (typically weekly or bi-weekly) operational meeting that reviews critical and high-severity vulnerabilities, tracks remediation progress, approves exceptions, and escalates overdue items. This is where day-to-day governance decisions are made and documented.

Metrics Dashboards

Automated reporting that provides visibility into programme health: vulnerability trends, testing coverage, remediation SLA compliance, exception counts, and policy gate results. Dashboards should be generated from tooling data, not manually compiled.
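The "generated from tooling data, not manually compiled" principle can be illustrated with a single metric. The sketch below computes remediation SLA compliance from raw vulnerability records; the record shape is an assumption for the example:

```python
# Illustrative sketch: deriving an SLA-compliance figure from raw tooling
# records rather than manual compilation. The record fields are assumed.
def sla_compliance(vulns: list[dict]) -> float:
    """Share of closed vulnerabilities remediated within their SLA."""
    closed = [v for v in vulns if v["status"] == "closed"]
    if not closed:
        return 1.0  # nothing closed yet; no SLA breaches to report
    met = sum(1 for v in closed if v["days_to_fix"] <= v["sla_days"])
    return met / len(closed)

records = [
    {"status": "closed", "days_to_fix": 5,    "sla_days": 7},
    {"status": "closed", "days_to_fix": 40,   "sla_days": 30},
    {"status": "open",   "days_to_fix": None, "sla_days": 14},
]
print(f"{sla_compliance(records):.0%}")  # → 50%
```

Because the figure is computed from source data, an auditor can re-derive it independently — which is precisely what a manually compiled dashboard does not allow.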

Executive Reporting

Periodic (quarterly or as required) reporting to the board, risk committee, or executive management that summarises application security risk posture, programme maturity progress, key incidents, and resource adequacy. This is often a regulatory expectation under DORA and NIS2.

What Auditors Should Verify

When assessing AppSec governance, auditors should examine the following:

  • Documented roles and responsibilities: A current RACI matrix or equivalent document exists and has been approved by senior management
  • Evidence of governance meetings: Minutes or records from steering committee meetings, vulnerability review boards, and any other governance forums — including attendance, decisions made, and action items tracked to completion
  • Escalation records: Evidence that issues are escalated when SLAs are breached or exceptions are requested, with documented outcomes
  • Policy review cadence: All AppSec policies have defined review cycles (typically annual), with evidence that reviews have been conducted and updates approved
  • Resource adequacy: Evidence that the AppSec function is adequately resourced relative to the application portfolio size and risk profile
  • Security Champion programme: If a Security Champion model is used, evidence of training, participation, and integration into governance processes
  • Reporting to senior management: Evidence that application security risk is reported to the board or risk committee at an appropriate frequency

Red Flags

The following findings indicate governance weaknesses that auditors should highlight:

  • No dedicated AppSec ownership: Application security responsibilities are vaguely distributed across IT security, development, and operations with no single point of accountability
  • Security testing owned entirely by development: When development teams are solely responsible for security testing without independent oversight, there is an inherent conflict of interest — testing may be deprioritised when delivery deadlines pressure the team
  • No formal exception process: Vulnerabilities are suppressed, accepted, or deferred without a documented approval process, risk assessment, or time limit
  • Governance meetings not held or poorly attended: Steering committee or review board meetings are frequently cancelled, or senior stakeholders do not attend
  • Policies exist but are not enforced: Policies are documented but there is no mechanism to verify compliance and no consequences for non-compliance
  • No metrics or reporting: The organisation cannot produce quantitative data on its application security posture
  • CISO has no visibility into AppSec: Application security is managed entirely within development without reporting to the security function
