Framework Deepdive

Explore in-depth guides to the industry’s leading security frameworks, broken down into practical requirements, controls, and implementation steps to help you build a compliant, audit-ready organization.

1. Overview

What the framework is

C5 is a cloud security attestation framework developed by the German Federal Office for Information Security (BSI). It defines a set of minimum baseline controls and audit criteria for cloud service providers (CSPs) and cloud customers. It enables transparency, security assurance, and comparability in cloud offerings.

Who it applies to

  • Cloud service providers (IaaS, PaaS, SaaS) delivering services to customers in Germany or the EU.
  • Cloud customers seeking assurance that providers meet baseline security expectations.
  • Auditors performing assessments aligned with BSI standards.

Expected outcome

  • An independent attestation report, typically under ISAE 3000 (Type 1 or Type 2).
  • Demonstration of adherence to BSI C5:2020 controls.

High-level goals and risk domains

  • Information security
  • Transparency
  • Data protection alignment
  • Operational security baseline
  • Governance and compliance

2. Scope & Applicability

Typical in-scope elements

  • All cloud platforms and infrastructure supporting the cloud service.
  • Management plane and core operational processes.
  • Data centers, hosting locations, and physical security.
  • Cloud service customer interfaces (portals, APIs).
  • Security governance, vendor management, incident response, and monitoring.

Out-of-scope elements

  • Customer-managed infrastructure under IaaS responsibility split.
  • Customer applications and data not controlled by the cloud service provider.
  • Non-production internal corporate systems not supporting cloud delivery.

Assumptions and dependencies

  • CSP has a defined service boundary and architectural documentation.
  • Security roles shared via RACI (provider vs. customer responsibilities).
  • Established ISMS (ISO 27001 recommended but not mandatory).
  • Documented operational processes exist and are consistently followed.

3. Core Principles or Pillars

C5 is built around the following principles:

  • Governance & Compliance: formal management oversight and policy framework.
  • Information Security Management: risk-based protections aligned with global standards.
  • Confidentiality, Integrity, Availability: classical security triad.
  • Transparency: service descriptions, incident handling, supply chain clarity.
  • Data Protection: privacy considerations mapped to EU GDPR.
  • Operational Security: monitoring, patching, vulnerability management, BC/DR.

4. Control Breakdown

C5 consists of 17 control domains, partly aligned with ISO 27001 and related BSI guidance.

Below is a summarised breakdown.

4.1 Organization of Information Security

Purpose: Establish governance and management responsibilities.

Minimum expectations: defined ISMS, roles, committees, internal governance structures.

Evidence: org charts, policies, meeting minutes.

Common gaps: unclear RACI, undocumented security responsibilities.

Example: Security steering committee with quarterly minutes.

Mapping: ISO 27001 A.5, SOC 2 CC1.

4.2 Security Policies

Purpose: guidance for acceptable behavior and security decisions.

Evidence: approved InfoSec policy, review logs.

Gap: outdated or unsigned policies.

Mapping: ISO 27001 A.5.1, SOC 2 CC1.

4.3 Human Resources Security

Purpose: ensure secure hiring, training, and offboarding.

Expectations: background checks, onboarding, annual training.

Evidence: training logs, HR checklists.

Mapping: ISO 27001 A.6, SOC 2 CC2.

4.4 Asset Management

Purpose: accurate understanding of cloud assets.

Expectations: asset inventory covering hardware, software, data.

Evidence: CMDB exports, inventory lists.

Gap: incomplete inventory of cloud components.

Mapping: ISO 27001 A.8.

4.5 Access Control

Purpose: ensure least privilege and safe access.

Expectations: RBAC, MFA, access reviews.

Evidence: access review reports, IAM configuration.

Mapping: ISO 27001 A.9, SOC 2 CC6.

4.6 Cryptography

Purpose: protect data in transit and at rest.

Expectations: documented crypto standards, key management.

Evidence: encryption configuration, KMS screenshots.

Mapping: ISO 27001 A.10.

4.7 Physical Security

Purpose: secure data centers and colocation sites.

Evidence: datacenter ISO 27001 certs, physical logs, visitor records.

Mapping: ISO 27001 A.11.

4.8 Operations Security

Purpose: protect daily operations including patching, change control.

Evidence: patch reports, change tickets, monitoring dashboards.

Gap: inconsistent patch cadence.

Mapping: ISO 27001 A.12.

4.9 Communications Security

Purpose: protect network infrastructure.

Expectations: segmentation, secure network devices, firewall management.

Evidence: network diagrams, firewall rulesets.

Mapping: ISO 27001 A.13.

4.10 Systems Acquisition, Development & Maintenance

Purpose: integrate security into SDLC.

Expectations: secure coding, testing, code reviews.

Evidence: SDLC workflow, vulnerability reports.

Mapping: ISO 27001 A.14, SOC 2 CC7.

4.11 Supplier Relationships

Purpose: manage third-party risk.

Expectations: vendor assessments, DPAs, SLAs.

Evidence: vendor risk reports, contracts.

Mapping: ISO 27001 A.15.

4.12 Incident Management

Purpose: ensure effective detection and escalation.

Expectations: defined IR plan, incident logging.

Evidence: incident tickets, postmortems.

Mapping: ISO 27001 A.16, SOC 2 CC7.

4.13 Business Continuity Management

Purpose: resilience and recoverability.

Expectations: DR plans, backups, tests.

Evidence: backup reports, DR test results.

Mapping: ISO 27001 A.17.

4.14 Compliance

Purpose: meet legal, regulatory, contractual obligations.

Evidence: GDPR mapping, legal register.

Mapping: ISO 27001 A.18.

4.15 Transparency Controls

Unique to C5: CSP must disclose service boundary, roles, incident expectations.

Evidence: transparency report.

Gap: incomplete customer-facing documentation.

4.16 Data Protection Controls

Complement GDPR expectations.

Evidence: DPIA records, data classification.

Mapping: ISO 27701.

4.17 Additional/Optional Controls (e.g., State-sponsored attacks)

Enhanced measures for high-risk clients.

Evidence: threat modelling, resilience assessments.

5. Minimum Requirements

Mandatory elements for compliance:

  • Security governance framework
  • Documented ISMS (ISO 27001-aligned recommended)
  • Asset inventory and configuration documentation
  • Encryption at rest and in transit
  • IAM with MFA and least privilege
  • Vulnerability management process with defined SLAs
  • Incident response plan with logged incidents
  • DR and backup strategy
  • Vendor risk management
  • C5 transparency report (mandatory unique requirement)

Recurring activities

  • Annual risk assessment
  • Quarterly access reviews
  • Annual policy reviews
  • Annual DR test
  • Regular vulnerability scanning (monthly minimum)

6. Technical Implementation Guidance

Infrastructure security

  • Ensure hardened baseline OS images.
  • Apply CIS benchmarks where possible.

Access control

  • Enforce MFA on all admin accounts.
  • Automate provisioning via SSO/SCIM.
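
To make the SSO/SCIM point concrete, here is a minimal sketch of an automated provisioning call using the standard SCIM 2.0 user schema (RFC 7643/7644). The endpoint URL, token, and attribute values are illustrative placeholders, not a specific vendor's API.

```python
# Minimal sketch: constructing and sending a SCIM 2.0 user-provisioning payload.
# The endpoint, token, and user details are placeholders; adapt them to your
# identity provider's SCIM implementation.
import json
import urllib.request

SCIM_USERS_ENDPOINT = "https://idp.example.com/scim/v2/Users"  # placeholder
API_TOKEN = "REPLACE_ME"  # issued by the identity provider

def build_scim_user(user_name, given, family, email):
    """Return a SCIM 2.0 core-schema user resource."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        "emails": [{"value": email, "primary": True}],
        "active": True,
    }

def provision(user):
    """POST the user resource to the SCIM endpoint (illustrative network call)."""
    req = urllib.request.Request(
        SCIM_USERS_ENDPOINT,
        data=json.dumps(user).encode("utf-8"),
        headers={
            "Content-Type": "application/scim+json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # wrap in try/except in practice
        print("Provisioned, status:", resp.status)

if __name__ == "__main__":
    # Print the payload only; provision(user) would send it to the IdP.
    print(json.dumps(build_scim_user("jdoe", "Jane", "Doe", "jane.doe@example.com"), indent=2))
```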

Logging & monitoring

  • Collect logs from network, system, and application layers.
  • Use SIEM with defined alerting thresholds.

Vulnerability management

  • Monthly scanning, with critical patching within 7–14 days.
  • Track remediation via ticketing system.

Backup & DR

  • Daily backups for customer data.
  • Define RTO/RPO values and test annually.

Encryption

  • TLS 1.2+ for transit.
  • AES-256 for rest.
  • Rotate keys per policy.
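
As a quick way to evidence the TLS expectation, the sketch below checks which protocol version a service endpoint actually negotiates, using only the Python standard library. The hostname is a placeholder; point it at your own service.

```python
# Minimal sketch: verify that an endpoint negotiates TLS 1.2 or higher.
import socket
import ssl

MIN_VERSIONS = {"TLSv1.2", "TLSv1.3"}

def negotiated_tls_version(host, port=443):
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

if __name__ == "__main__":
    host = "example.com"  # placeholder service endpoint
    version = negotiated_tls_version(host)
    status = "OK" if version in MIN_VERSIONS else "FAIL"
    print(f"{host}: negotiated {version} -> {status}")
```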

Configuration baselines

  • Infrastructure-as-code (IaC) recommended.
  • Enforce peer review on IaC commits.

Tooling suggestions

  • SIEM: Splunk, Elastic, Sentinel
  • Vulnerability: Nessus, Qualys
  • Access governance: Okta, Azure AD
  • Asset/CMDB: ServiceNow, Jira CMDB

7. Policy & Procedure Requirements

Typical documents required:

  • Information Security Policy
  • Access Control Policy
  • Cryptography Policy
  • Operations Security Policy
  • Incident Response Plan
  • Business Continuity Plan
  • Change Management Procedure
  • Vulnerability Management Procedure
  • Logging & Monitoring Procedure
  • Supplier Management Policy
  • Transparency Report
  • Data Protection & Privacy Policy
  • Secure Development Policy

Each document should define:

  • purpose
  • scope
  • roles and responsibilities
  • activities and workflows
  • review cadence
  • references to frameworks

8. Audit Evidence & Verification

What auditors request

  • Policies and procedures
  • Architecture diagrams
  • Asset inventory
  • IAM configuration screenshots
  • Logs/tracking reports
  • Incident records
  • DR test evidence
  • Vulnerability scan results
  • Access review outputs
  • Supplier assessments
  • Transparency report

Sampling

  • Typically 25–40 samples per control.
  • For Type 2: testing over 6–12 months.

Common remediation actions

  • Improve documentation maturity
  • Strengthen monitoring
  • Close gaps in vendor risk management
  • Add missing transparency details
  • Improve patch SLAs

9. Implementation Timeline Considerations

Typical duration

4–6 months for initial C5 readiness.

Key milestones

  1. Gap assessment
  2. ISMS and governance documentation
  3. Technical hardening and monitoring baseline
  4. Transparency report publication
  5. Internal readiness audit
  6. External assessment (Type 1 → Type 2)

Dependencies causing delays

  • Missing documentation
  • Weak asset inventory
  • Lack of vulnerability management maturity
  • Slow vendor onboarding and assessments

10. Ongoing Security BAU Requirements

  • Quarterly access reviews
  • Monthly vulnerability scanning
  • Annual policy and risk assessment review
  • Annual DR test
  • Annual incident simulation
  • Continuous monitoring
  • Ongoing vendor reviews
  • SIEM tuning and monthly log reviews
  • Training and awareness program

11. Maturity Levels

Minimum compliance

  • Policies exist
  • Processes follow baseline
  • Vulnerability scans and access reviews performed
  • Transparency report complete

Intermediate maturity

  • Automated IAM
  • SIEM with active alerting
  • Security baselines via IaC
  • Internal audit program

Advanced/automated

  • End-to-end DevSecOps integration
  • Continuous compliance monitoring
  • Automated alerts to ticketing
  • Threat intelligence integrated

12. FAQs

Is C5 a certification?

No. It is an attestation aligned with ISAE 3000.

Do we need ISO 27001 to achieve C5?

Not mandatory, but highly recommended.

Is C5 equivalent to SOC 2?

C5 has similarities but adds requirements around transparency, data protection, and German legal/regulatory context.

Can C5 be done remotely?

Most of the audit can be remote, but datacenter evidence may require site reports or visits.

Is C5 required for cloud providers entering the German market?

Not legally required, but increasingly a procurement expectation.

13. Summary

C5 provides a structured, transparent, and security-focused attestation for cloud service providers. Achieving compliance requires strong governance, robust technical controls, defined transparency documentation, and consistent operational execution. When implemented effectively, C5 improves customer trust, strengthens security maturity, and aligns cloud operations with recognized EU and German security expectations.

1. Overview – What is CMMC (NIST SP 800-171)?

Cybersecurity Maturity Model Certification (CMMC) 2.0 is the U.S. DoD’s framework to verify that defense contractors are properly protecting:

  • Federal Contract Information (FCI)
  • Controlled Unclassified Information (CUI)

CMMC 2.0 is tightly aligned with NIST SP 800-171 Rev.2 for protecting CUI in non-federal systems.

Who it applies to

  • All organizations doing business with the U.S. Department of Defense (prime and sub-contractors), especially where DFARS 252.204-7012, 7019, 7020, 7021 clauses appear in the contract.
  • Practically, anyone storing, processing, or transmitting FCI/CUI in support of DoD contracts.

Levels and certification outcomes

CMMC 2.0 has three levels:

  1. Level 1 – Foundational
    • 17 basic practices (FAR 52.204-21).
    • Focused on FCI.
    • Self-assessment plus annual affirmation in SPRS.
  2. Level 2 – Advanced
    • 110 practices = full set of NIST SP 800-171 Rev.2 requirements for CUI.
    • Mix of self-assessment and 3rd-party certified assessments depending on program sensitivity.
  3. Level 3 – Expert
    • Builds on the full NIST SP 800-171 set with a subset of enhanced requirements from NIST SP 800-172.
    • Focused on the most sensitive CUI programs.
    • Assessed by the government (DCMA DIBCAC) rather than a third-party assessor.

From 10 November 2025, CMMC requirements are being phased into contracts, starting with Level 1 and Level 2 self-assessments.

High-level goals and risk domains

CMMC / NIST 800-171 focus on:

  • Protection of CUI and FCI (mainly confidentiality)
  • Access control and identity assurance
  • Configuration, logging, monitoring, and incident response
  • Supply chain and subcontractor oversight
  • Governance and risk management, including SSP, POA&M, and ongoing assessments

2. Scope & Applicability

Typical in-scope elements

  • Systems & infrastructure
    • Servers, endpoints, cloud services, collaboration platforms where CUI/FCI is stored or processed
    • Identity providers (IdP), VPN, firewalls, MDM, EDR, backup systems that protect or provide access to CUI
  • Data
    • CUI per DoD / NARA CUI registry (e.g. export controlled, critical infrastructure, law enforcement, etc.)
    • FCI associated with DoD contracts
  • Processes & teams
    • Contract, engineering, operations, IT, security, HR (if they touch CUI), support, incident response
  • Subcontractors / vendors
    • Any third party that stores, processes, or transmits your CUI/FCI, or administers systems in-scope.

Common out-of-scope elements

You may declare systems out of scope if they:

  • Never store, process, or transmit CUI/FCI
  • Are technically and logically separated from CUI systems (e.g. separate identity, no shared admin, segmented networks)
  • Are purely marketing/public website properties, or HR systems only for non-CUI staff records

However, corporate governance and security management are generally in scope even if they don’t directly hold CUI (e.g. risk management, policy management).

Assumptions and dependencies

  • Contract includes DFARS 252.204-7012 and related clauses, driving NIST 800-171 implementation.
  • CUI has been identified and labeled (data classification exercise completed).
  • A clear CUI boundary / enclave is defined: where CUI lives, through which systems it flows.
  • Organizational leadership has committed to:
    • Funding required technical controls
    • Supporting policy enforcement and change management
    • Accepting and tracking risk treatment in a formal process

3. Core Principles / Pillars

CMMC (via NIST 800-171) is organized into 14 control families (summarized below) that collectively support:

  • Confidentiality & Integrity of CUI
  • Availability of critical systems
  • Governance & accountability
  • Secure configuration & continuous monitoring
  • Personnel security and vendor oversight

The 14 NIST 800-171 families:

  1. Access Control (AC)
  2. Awareness & Training (AT)
  3. Audit & Accountability (AU)
  4. Configuration Management (CM)
  5. Identification & Authentication (IA)
  6. Incident Response (IR)
  7. Maintenance (MA)
  8. Media Protection (MP)
  9. Physical Protection (PE)
  10. Personnel Security (PS)
  11. Risk Assessment (RA)
  12. Security Assessment (CA)
  13. System & Communications Protection (SC)
  14. System & Information Integrity (SI)

4. Control Breakdown (By NIST 800-171 Family)

Each family below is broken down as: Purpose → Minimum expectations → Evidence → Common gaps → Implementation example → Mapping.

(Assume CMMC Level 2 / NIST 800-171 full implementation.)

4.1 Access Control (AC)

  • Purpose: Ensure only authorized users/processes access CUI; enforce least privilege.
  • Minimum expectations
    • Role-based access; documented authorization process
    • MFA for all administrative, remote access, and ideally all CUI access
    • Session timeouts, remote access controls, secure configuration of VPN
  • Evidence
    • Access control policy; user access lists
    • IdP configuration screenshots (MFA, conditional access)
    • Joiner-mover-leaver records; ticket history for access changes
  • Common gaps
    • Shared admin accounts
    • Local accounts not governed by IdP
    • Excessive access to CUI file shares
  • Example
    • Azure AD / Okta + M365 CUI enclave; security groups tied to job roles; access requests via ticketing.
  • Mapping
    • ISO 27001: A.5, A.8, A.9
    • SOC 2: CC6.x (logical access), CC5.x (privileged access)

4.2 Awareness & Training (AT)

  • Purpose: Ensure users understand CUI, threats, and their responsibilities.
  • Minimum expectations
    • Annual security & CUI awareness training
    • Role-based training for admins, incident responders, developers
  • Evidence
    • Training policy & materials
    • LMS records / attendance logs; quizzes
  • Common gaps
    • No CUI-specific content
    • Contractors and temp staff not included
  • Mapping
    • ISO 27001: A.6.3
    • SOC 2: CC2.2 (training & awareness)

4.3 Audit & Accountability (AU)

  • Purpose: Log and review security-relevant events, especially around CUI.
  • Minimum expectations
    • Central logging (SIEM or log platform) for CUI systems
    • Defined log retention (often ≥ 12 months)
    • Regular review of critical events and alerts
  • Evidence
    • Logging configuration, SIEM dashboards
    • Sample logs; documented procedures for log review
  • Common gaps
    • No logs from endpoints or SaaS CUI systems
    • Logs kept but never reviewed
  • Mapping
    • ISO 27001: A.8.15, A.8.16
    • SOC 2: CC7.x (monitoring & detection)

4.4 Configuration Management (CM)

  • Purpose: Maintain secure, documented, and controlled configurations.
  • Minimum expectations
    • Baseline secure configurations (CIS, vendor benchmarks)
    • Formal change management process
    • Configuration control for servers, endpoints, network devices
  • Evidence
    • CM policy, baselines, change tickets
    • Config management / MDM screenshots
  • Common gaps
    • No documented baselines
    • Direct changes in production with no ticket
  • Mapping
    • ISO 27001: A.8.9, A.8.10, A.8.32
    • SOC 2: CC8.x (change management)

4.5 Identification & Authentication (IA)

  • Purpose: Verify user and device identities robustly before granting access.
  • Minimum expectations
    • Unique IDs; MFA; strong/passwordless auth
    • Secure credential management; no shared accounts
  • Evidence
    • IdP settings; password policy; MFA enforcement reports
  • Common gaps
    • Legacy systems without MFA
    • Hard-coded passwords, shared accounts
  • Mapping
    • ISO 27001: A.8.2, A.8.3
    • SOC 2: CC6.3, CC6.1

4.6 Incident Response (IR)

  • Purpose: Prepare for, detect, respond to, and report security incidents.
  • Minimum expectations
    • Documented IR plan, roles, and contact trees
    • Procedures for DFARS 7012 incident reporting (72-hour notification) where applicable
    • At least annual IR test / tabletop
  • Evidence
    • IR policy & playbooks
    • Incident logs, reports, lessons-learned
  • Common gaps
    • No linkage to DFARS reporting
    • No exercises or only informal handling
  • Mapping
    • ISO 27001: A.5.24–A.5.26
    • SOC 2: CC7.3, CC7.4

4.7 Maintenance (MA)

  • Purpose: Control system maintenance activities, including remote maintenance.
  • Minimum expectations
    • Approved maintenance tools and procedures
    • Supervised remote maintenance; logs of activities
  • Evidence
    • Maintenance records; secure remote access configurations
  • Common gaps
    • 3rd-party support with uncontrolled remote access
  • Mapping
    • ISO 27001: A.8.5, A.8.7
    • SOC 2: CC6.6, CC7.1

4.8 Media Protection (MP)

  • Purpose: Protect CUI on physical and digital media.
  • Minimum expectations
    • Encryption of removable media or blocking it in CUI enclave
    • Procedures for media sanitization and destruction
  • Evidence
    • DLP/endpoint policies, destruction certificates
  • Common gaps
    • Uncontrolled USB usage
    • No verifiable destruction process
  • Mapping
    • ISO 27001: A.8.10, A.8.11
    • SOC 2: CC6.7

4.9 Physical Protection (PE)

  • Purpose: Prevent unauthorized physical access to CUI systems.
  • Minimum expectations
    • Secured offices / server rooms
    • Visitor management, badges, lockable storage
  • Evidence
    • Floor plans, access control logs, visitor logs
  • Common gaps
    • Shared office spaces with weak controls
  • Mapping
    • ISO 27001: A.7.x
    • SOC 2: CC6.7

4.10 Personnel Security (PS)

  • Purpose: Ensure personnel handling CUI are trustworthy and controlled on entry/exit.
  • Minimum expectations
    • Background checks per contract
    • Formal onboarding/offboarding controls
  • Evidence
    • HR procedures; sample personnel files (with PII redacted)
  • Common gaps
    • Incomplete checks for contractors
  • Mapping
    • ISO 27001: A.6.x
    • SOC 2: CC1.2, CC2.1

4.11 Risk Assessment (RA)

  • Purpose: Identify and evaluate risks to CUI and CMMC scope.
  • Minimum expectations
    • Documented risk methodology and register
    • At least annual risk assessment
  • Evidence
    • Risk reports, treatment plans, management sign-off
  • Common gaps
    • One-time “gap assessment” mistaken for risk assessment
  • Mapping
    • ISO 27001: Clause 6.1, 8.2, 8.3
    • SOC 2: CC3.2, CC3.3

4.12 Security Assessment (CA)

  • Purpose: Periodic control effectiveness checks and remediation tracking.
  • Minimum expectations
    • Internal self-assessments against NIST 800-171
    • POA&M management and periodic updates
  • Evidence
    • Self-assessment worksheets, POA&M, status reports
  • Common gaps
    • POA&M exists but never updated; no prioritization
  • Mapping
    • ISO 27001: Internal audit & management review
    • SOC 2: CC4.x (monitoring activities)

4.13 System & Communications Protection (SC)

  • Purpose: Protect data in transit and at system boundaries.
  • Minimum expectations
    • Network segmentation for CUI enclave
    • FIPS-validated crypto for CUI where required by DFARS 7012
    • Secure protocols (TLS 1.2+), firewall rules, email security
  • Evidence
    • Network diagrams, firewall configs, TLS settings
  • Common gaps
    • Weak segmentation (flat networks), non-FIPS crypto modules
  • Mapping
    • ISO 27001: A.8.21, A.8.22, A.8.24
    • SOC 2: CC6.7, CC6.8

4.14 System & Information Integrity (SI)

  • Purpose: Detect and correct security issues and malware.
  • Minimum expectations
    • EDR/AV across all in-scope endpoints
    • Vulnerability management and patching program
    • Email and web filtering
  • Evidence
    • EDR dashboards, vulnerability scan reports, patch metrics
  • Common gaps
    • No scan coverage of cloud/SaaS, slow patch cycles
  • Mapping
    • ISO 27001: A.8.7, A.8.8, A.8.9
    • SOC 2: CC7.1, CC7.2

5. Minimum Non-Negotiable Requirements

To be realistically CMMC Level 2 / NIST 800-171 compliant, expect at least:

Mandatory documents

  • System Security Plan (SSP) – core description of environment, scope, and how each requirement is met
  • POA&M – with owners, timelines, and milestones for remaining gaps
  • Key policies & procedures:
    • Information Security, Access Control, Acceptable Use, Change/Config Mgmt
    • Incident Response, Business Continuity/DR, Backup & Recovery
    • Vendor Risk Management, Secure Development (if applicable)
    • Data Classification & Handling / CUI Handling

Mandatory processes

  • Risk assessment and treatment
  • Onboarding / offboarding controls
  • Formal incident handling & DFARS reporting where applicable
  • Change management for in-scope systems
  • Regular internal security assessments & POA&M updates

Technical controls

  • Strong IdP with MFA and RBAC for CUI systems
  • Endpoint security (EDR, disk encryption)
  • Network segmentation & secure remote access
  • Centralized logging and alerting for key systems
  • Regular vulnerability scanning and patching
  • Backups with periodic restore testing

Governance / recurrence

  • Annual: risk assessment, training, policy review, CMMC self-assessment (for self-attest scope)
  • Quarterly: access reviews, POA&M review, vulnerability mgmt review
  • At least annually: incident response test, backup restore test, internal audit / management review

6. Technical Implementation Guidance

Infrastructure security

  • Define a CUI enclave:
    • Preferably segregated network segment or dedicated cloud tenant; limited paths in/out.
  • Harden OS and services using CIS or vendor baselines.
  • Use infrastructure-as-code where possible to enforce consistent configuration.

Access control best practices

  • Central IdP (e.g., Entra ID/Okta) with:
    • Conditional access, MFA, device compliance requirements
    • Role-based groups mapped to job functions
  • Enforce least privilege and just-in-time elevation for admin accounts.
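
A lightweight way to operationalize least-privilege reviews is to script checks over an IdP export. The sketch below flags privileged accounts without MFA or with stale logins; the CSV file name and column names are assumptions about the export format, not a specific product schema.

```python
# Minimal sketch: flag privileged accounts for quarterly access review, using a
# CSV export from the IdP. Column names ("username", "is_admin", "mfa_enrolled",
# "last_login") are assumptions; adjust them to your export format.
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)

def review_accounts(path):
    findings = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["is_admin"].lower() != "true":
                continue
            if row["mfa_enrolled"].lower() != "true":
                findings.append((row["username"], "admin account without MFA"))
            last_login = datetime.fromisoformat(row["last_login"])
            if datetime.now() - last_login > STALE_AFTER:
                findings.append((row["username"], "no login in 90+ days"))
    return findings

if __name__ == "__main__":
    for user, issue in review_accounts("idp_export.csv"):
        print(f"{user}: {issue}")
```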

Logging & monitoring

  • Aggregate logs (cloud, IdP, endpoints, firewalls, critical SaaS) into a SIEM or log platform.
  • Define use cases and alerts aligned with CUI risk:
    • Unusual login locations
    • Failed admin logons
    • Mass file access / exfil indicators
  • Document triage playbooks and integrate with ticketing.
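
As an illustration of the failed-admin-logon use case above, the sketch below scans a JSON-lines auth log export for bursts of failures. The field names, threshold, and window are assumptions to be aligned with your SIEM schema and alerting policy.

```python
# Minimal sketch: detect repeated failed admin logons in a JSON-lines auth log.
# Field names ("timestamp", "user", "result", "is_admin"), the threshold, and
# the window are illustrative assumptions.
import json
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 5
WINDOW = timedelta(minutes=15)

def failed_admin_logons(log_path):
    events = defaultdict(list)
    with open(log_path) as f:
        for line in f:
            e = json.loads(line)
            if e.get("is_admin") and e.get("result") == "failure":
                events[e["user"]].append(datetime.fromisoformat(e["timestamp"]))

    alerts = []
    for user, times in events.items():
        times.sort()
        for i in range(len(times)):
            # Count failures inside a sliding window starting at times[i]
            in_window = [t for t in times[i:] if t - times[i] <= WINDOW]
            if len(in_window) >= THRESHOLD:
                alerts.append((user, times[i].isoformat(), len(in_window)))
                break
    return alerts

if __name__ == "__main__":
    for user, start, count in failed_admin_logons("auth_events.jsonl"):
        print(f"ALERT: {user} had {count} failed admin logons starting {start}")
```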

Vulnerability management workflow

  1. Asset inventory for in-scope systems (hardware, software, cloud).
  2. Scheduled authenticated scans (e.g., weekly/monthly depending on risk).
  3. Risk-based SLAs (e.g., critical = 14 days, high = 30 days).
  4. Track remediation in a ticketing system; report closure metrics.
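
One simple control that ties steps 1 and 2 together is a coverage reconciliation: every in-scope asset should appear in the latest authenticated scan. The sketch below illustrates the idea; the file names and the hostname column are assumptions.

```python
# Minimal sketch: reconcile the in-scope asset inventory (step 1) against the
# latest scan results (step 2) to find assets with no scan coverage.
import csv

def hostnames(path, column="hostname"):
    """Return the normalized set of hostnames listed in a CSV export."""
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

if __name__ == "__main__":
    inventory = hostnames("asset_inventory.csv")
    scanned = hostnames("latest_scan_results.csv")
    missing = sorted(inventory - scanned)
    print(f"{len(missing)} in-scope assets missing from the last scan:")
    for host in missing:
        print(" -", host)
```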

Backup and DR expectations

  • Back up all CUI systems and critical configs.
  • Apply 3-2-1 principle (3 copies, 2 media, 1 offsite/immutable).
  • Perform and document periodic restore tests.
  • Ensure backup storage is logically segregated and access-controlled.
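
The 3-2-1 and restore-testing expectations can be partially evidenced from the backup catalog itself. The sketch below flags systems whose latest backup is stale or that lack an offsite copy; the catalog columns and the 24-hour threshold are illustrative assumptions.

```python
# Minimal sketch: check a backup catalog export for recency and an offsite copy.
# Columns ("system", "completed_at", "location") are assumptions; adapt them to
# your backup tool's report format.
import csv
from collections import defaultdict
from datetime import datetime, timedelta

MAX_AGE = timedelta(hours=24)

def check_catalog(path):
    entries_by_system = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            entries_by_system[row["system"]].append(
                (datetime.fromisoformat(row["completed_at"]), row["location"])
            )
    now = datetime.now()
    for system, entries in entries_by_system.items():
        newest = max(ts for ts, _ in entries)
        offsite = any(loc == "offsite" for _, loc in entries)
        if now - newest > MAX_AGE:
            print(f"{system}: last backup {newest.isoformat()} is older than 24h")
        if not offsite:
            print(f"{system}: no offsite/immutable copy recorded")

if __name__ == "__main__":
    check_catalog("backup_catalog.csv")
```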

Encryption requirements

  • Encrypt CUI in transit (TLS 1.2+) and at rest where feasible.
  • Use FIPS-validated cryptographic modules when required by DFARS 252.204-7012 and related guidance.
  • Manage keys centrally with proper segregation of duties.

Configuration baselines

  • OS and application baselines defined per role (server, workstation, admin workstation).
  • Enforce via MDM, GPO or cloud configuration management.
  • Track deviations and approvals.
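
Deviation tracking can start as a simple comparison of collected settings against the documented baseline, as sketched below. The baseline keys and values are hypothetical examples; in practice they would come from your CIS or vendor benchmark and your MDM/endpoint export.

```python
# Minimal sketch: compare collected host settings against a documented baseline
# and report deviations for review/approval. Both dictionaries are illustrative.
BASELINE = {
    "password_min_length": 14,
    "screen_lock_minutes": 15,
    "disk_encryption": "enabled",
    "smbv1": "disabled",
}

def deviations(current, baseline=BASELINE):
    issues = []
    for setting, expected in baseline.items():
        actual = current.get(setting, "<missing>")
        if actual != expected:
            issues.append(f"{setting}: expected {expected!r}, found {actual!r}")
    return issues

if __name__ == "__main__":
    collected = {  # e.g. parsed from an endpoint-management export
        "password_min_length": 10,
        "screen_lock_minutes": 15,
        "disk_encryption": "enabled",
    }
    for issue in deviations(collected):
        print("DEVIATION:", issue)
```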

Tooling suggestions (neutral)

  • IdP / SSO: Enterprise-grade IdP with MFA and conditional access
  • EDR: Modern EDR with centralized management and response
  • SIEM / logging: Any scalable log/SIEM platform integrated with IdP, endpoints, and network
  • Vuln scanning: Network + cloud + container where relevant
  • GRC / tracking: Tool or structured spreadsheets for SSP, POA&M, risk register, assessments
  • Ticketing: ITSM tool tracking change, incidents, access, and remediation actions

7. Policy & Procedure Requirements

Typical document set for CMMC/NIST 800-171:

  1. Information Security Policy
    • Purpose: Overall security governance and objectives.
    • Key sections: Scope, roles, risk mgmt, high-level requirements, management commitment.
    • Ops link: Basis for all other policies; reviewed annually.
  2. Access Control Policy & Procedures
    • Roles, least privilege, MFA, remote access, admin access.
    • Drives JML, access reviews, and IdP configurations.
  3. Acceptable Use Policy
    • Rules for using company systems, CUI handling expectations.
  4. Data Classification & CUI Handling Policy
    • Defines CUI vs FCI vs internal/public, labeling, handling, retention, disposal.
  5. Cryptographic / Key Management Policy
    • Approved algorithms, FIPS requirements, key lifecycle management.
  6. Change & Configuration Management Policy
    • Change classification, approvals, testing, emergency changes, configuration baselines.
  7. Incident Response Plan
    • Roles, severity levels, communication, DFARS reporting timelines, evidence collection.
  8. Vulnerability & Patch Management Procedure
    • Scan frequency, SLAs, reporting, exception handling.
  9. Backup & Recovery Policy
    • Scope, frequency, retention, storage locations, restore tests.
  10. Business Continuity / DR Plan
    • RTO/RPO, continuity strategies, testing.
  11. Vendor / Third-Party Risk Management Policy
    • Due diligence, security clauses in contracts, monitoring, CUI flow-down to subs.
  12. Secure Development / SDLC Policy (if you build software)
    • Secure coding practices, code review, SAST/DAST, change control.
  13. Physical Security Policy
    • Facility security, visitor management, access badges, secure areas.
  14. HR / Personnel Security Procedures
    • Background checks, onboarding/offboarding tasks connected to access control.

Each should clearly define scope, roles & responsibilities, control requirements, records/evidence, and review frequency.

8. Audit Evidence & Verification

What auditors / assessors typically request

For CMMC Level 2 certification or higher-assurance self-assessment:

  • SSP and POA&M with current status
  • Policies and procedures
  • Network diagrams and data flow diagrams for CUI
  • Asset inventory, lists of in-scope systems and accounts
  • Samples of:
    • Tickets (changes, incidents, access requests)
    • Logs and alerts
    • Training records
    • Risk assessments and management reviews
    • Vendor assessments / contracts

Evidence formats

  • Documents (PDF/Word), spreadsheets
  • Screenshots of configurations
  • Exports from tools (SIEM, IdP, EDR, scanners)
  • Meeting minutes, sign-offs, decisions

Sampling expectations

  • Auditors will sample users, systems, and time periods:
    • E.g. 10 user accounts from HR, IT, engineering
    • A few months of logs
    • Recent incidents, changes, and access changes

Common remediation actions

  • Tightening MFA & access controls
  • Improving logging visibility and alert triage
  • Formalizing risk assessment and POA&M tracking
  • Closing high-risk vulnerabilities
  • Updating policies to reflect actual practice

9. Implementation Timeline Considerations

(Actual duration depends heavily on size and starting point.)

Typical phases

  1. Scoping & Discovery
    • Identify CUI/FCI, systems, vendors, contracts, and legal obligations.
  2. Gap Assessment vs NIST 800-171
    • Control-by-control review, initial SSP & POA&M draft.
  3. Remediation & Tooling
    • Deploy/enhance IdP, EDR, SIEM, segmentation, backups, etc.
  4. Documentation Build-out
    • Policies, procedures, detailed SSP, diagrams, evidence repositories.
  5. Operate & Stabilize
    • Run processes for a period; generate BAU evidence.
  6. Internal Assessment & Readiness Review
    • Validate controls, update POA&M.
  7. Formal CMMC Assessment / Self-Assessment Entry in SPRS (as applicable).

Key dependencies that delay progress

  • Unclear CUI boundary / scoping
  • Tooling selection and deployment
  • Vendor and subcontractor alignment
  • Lack of management decisions on risk acceptance vs remediation

10. Ongoing Security BAU Requirements

Recommended Business-as-Usual cadence:

  • Monthly
    • Vulnerability scans and patching cycles
    • Log and alert review; metrics reporting
  • Quarterly
    • User access reviews
    • POA&M review and update
    • Vendor risk check-in for critical suppliers
  • Semi-annual
    • Internal control testing / mini-audits
  • Annual
    • Full risk assessment
    • Security awareness & role-based training
    • IR exercise (tabletop or technical)
    • Backup restore test
    • Policy framework review & update
    • CMMC self-assessment and SPRS affirmation (where self-attestation applies)

11. Maturity Levels

You can think of your posture in three stages:

  1. Minimum Compliance
    • All 110 NIST 800-171 requirements implemented “on paper”
    • SSP and POA&M complete; basic tools in place
    • Controls may be manual and ad hoc; evidence exists but not always structured
  2. Intermediate Maturity
    • Controls are standardized and repeatable
    • Automated provisioning, MFA everywhere, central logging with defined use cases
    • Performance metrics tracked (patch SLAs, incident MTTR, training completion)
    • POA&M managed as a live artifact
  3. Advanced / Automated
    • Strong alignment with NIST 800-172-style “enhanced” protections (micro-segmentation, advanced analytics)
    • High automation in detection and response
    • Regular red-teaming / penetration testing
    • Integrated GRC platform for continuous compliance monitoring

12. FAQs

Q1: Is NIST 800-171 compliance the same as CMMC Level 2?

A: CMMC Level 2 practices are essentially the full NIST 800-171 Rev.2 set (110 requirements), but CMMC adds a formal assessment and governance layer (e.g., certification, POA&M rules, SPRS entries).

Q2: Do we need CMMC if we only have FCI, not CUI?

A: Yes, but typically at Level 1, focused on basic safeguarding of FCI (17 practices).

Q3: Which version of NIST 800-171 applies?

A: As of late 2025, CMMC 2.0 is aligned to NIST SP 800-171 Rev.2, even though Rev.3 is emerging; DoD guidance and ODP memos are managing the transition.

Q4: What if we are not 100% compliant yet?

A: You must:

  • Be as close as possible to full implementation,
  • Document gaps in a POA&M, and
  • Follow CMMC rules on which practices may be on a POA&M and within what timeframe remediation must occur.

Q5: How does this relate to ISO 27001 or SOC 2?

A: There is substantial overlap:

  • ISO 27001 = broader ISMS and risk management
  • SOC 2 = trust service criteria around security, availability, etc.
  • CMMC/NIST 800-171 are more prescriptive for CUI, particularly around DFARS, FIPS crypto, and DoD-specific requirements.

Q6: Do all subcontractors need to be CMMC certified?

A: If they handle your CUI/FCI or operate in your CUI enclave, they will need an appropriate CMMC level as required in the contract flow-down clauses.

Q7: Is cloud (e.g., Microsoft 365, AWS) acceptable for CUI?

A: Yes, many organizations use cloud for CUI, but you must:

  • Choose services with appropriate FedRAMP / DoD authorizations where required,
  • Configure them in line with NIST 800-171 controls,
  • Reflect them clearly in the SSP and POA&M.

Q8: How often do we reassess for CMMC?

A: Self-assessments and affirmations are generally annual, and 3rd-party certifications will have validity periods and surveillance expectations defined by DoD and the Cyber AB ecosystem.

13. Summary

CMMC 2.0, built on NIST SP 800-171 Rev.2, is the DoD’s mechanism to ensure the defense industrial base actually implements and proves strong cybersecurity for CUI and FCI. It combines:

  • A clear technical baseline (110 controls across 14 families),
  • Structured governance (SSP, POA&M, assessments, SPRS), and
  • A maturity model that scales from basic (Level 1) to advanced (Level 3).

For a successful implementation, organizations need to:

  • Accurately scope their CUI environment,
  • Close technical and process gaps against NIST 800-171,
  • Build and maintain robust documentation and evidence, and
  • Embed security into BAU operations with regular assessments, monitoring, and improvement.

1. Overview

The EU AI Act is the European Union’s comprehensive regulatory framework governing the development, deployment, and use of Artificial Intelligence systems. It classifies AI systems by risk and imposes obligations accordingly.

Who it applies to:

  • Any organization placing AI systems on the EU market,
  • Any provider or deployer located in the EU,
  • Non-EU organizations whose AI systems impact EU users.

Outcome:

There is no "certification" equivalent to ISO. Rather, organizations must ensure conformity, maintain technical documentation, complete risk assessments, apply governance controls, and affix CE markings for High-Risk systems.

High-level goals and risk domains

  • Reduce risks to health, safety, and fundamental rights
  • Ensure transparency and accountability in AI systems
  • Establish governance, oversight, and monitoring obligations
  • Promote trustworthy and ethical AI innovation

2. Scope & Applicability

In-scope

  • AI systems following the EU’s broad definition (machine-based systems generating outputs that influence environments/decisions)
  • High-Risk AI categories (e.g., biometric identification, critical infrastructure, healthcare, education, employment)
  • General Purpose AI (GPAI) and foundation models
  • Limited-risk systems requiring transparency
  • AI systems interacting with children or vulnerable groups
  • Data used for model training/validation/testing

Common out-of-scope

  • Non-automated decision systems
  • Basic algorithmic systems without machine learning/statistical inference
  • AI systems developed solely for military purposes
  • Pure R&D models not released to users

Key assumptions & dependencies

  • A risk classification assessment must be completed before determining obligations.
  • Providers and deployers must understand their role under the Act—obligations differ.
  • Technical documentation must be maintained throughout the system lifecycle.
  • The Act assumes organizations already have baseline governance, security, and data protection controls (e.g., GDPR alignment).

3. Core Principles or Pillars

The EU AI Act is built on a set of fundamental principles:

  1. Risk-Based Regulation: mandatory obligations increase with risk level.
  2. Safety & Fundamental Rights Protection: prevent harm related to discrimination, bias, privacy violations, and misuse.
  3. Transparency & Explainability: users must understand when they’re interacting with AI, and certain system decisions must be interpretable.
  4. Human Oversight: High-Risk systems require clear human-in-the-loop mechanisms.
  5. Robustness, Accuracy, Cybersecurity: systems must be technically resilient and secure.
  6. Data Governance & Quality: training data must be high-quality, representative, and free from discriminatory bias.
  7. Accountability & Traceability: documentation and auditability are essential throughout the development lifecycle.

4. Control Breakdown (by Risk Category)

A. Unacceptable Risk AI (Prohibited Practices)

Examples: social scoring, real-time biometric surveillance in public (with strict exceptions).

  • Purpose: Protect fundamental rights and prevent harmful AI use cases.
  • Minimum expectations: Do not develop, deploy, import, or distribute prohibited AI systems.
  • Evidence: Risk classification records; product review documentation.
  • Common gaps: Misclassifying borderline systems; missing legal counsel review.
  • Practical example: A city cannot deploy real-time biometric surveillance without meeting narrow legal exemptions.
  • Mapping: Aligns with GDPR’s fundamental rights protection expectations.

B. High-Risk AI Systems

High-Risk applies to two broad groups:

  1. Products under existing EU safety law (medical devices, vehicles, aviation, etc.)
  2. AI systems in sensitive areas (employment, law enforcement, education)

Key control domains:

1. Risk Management System

  • Purpose: Identify, analyze, and mitigate AI-related risks.
  • Minimum expectations: End-to-end lifecycle risk assessment; mitigation measures; periodic updates.
  • Evidence: Risk logs, threat models, DPIAs (if overlapping with GDPR).
  • Common gaps: Missing continuous risk monitoring.
  • Example: HR screening AI must document risks of algorithmic bias.

2. Data Governance & Data Quality

  • Purpose: Prevent bias, ensure representativeness and accuracy.
  • Minimum expectations: Document data sources, preprocessing, bias tests.
  • Evidence: Data sheets, quality reports, fairness testing results.
  • Common gaps: Lack of documentation for training datasets.
  • Mapping: Strong connection to ISO 42001 & GDPR.
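
To show what a basic fairness test might look like, the sketch below computes a demographic parity difference (the gap in positive-outcome rates between groups) over model decisions. The sample records and the 0.1 tolerance are illustrative; real bias testing should use multiple metrics and be recorded in the technical file.

```python
# Minimal sketch: demographic parity difference over model decisions grouped by
# a protected attribute. Sample data and the 0.1 tolerance are illustrative.
from collections import defaultdict

def demographic_parity_difference(records):
    """records: iterable of (group, positive_outcome: bool)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in records:
        totals[group] += 1
        if positive:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    sample = [("group_a", True), ("group_a", False), ("group_a", True),
              ("group_b", False), ("group_b", False), ("group_b", True)]
    gap, rates = demographic_parity_difference(sample)
    print("Selection rates:", rates)
    print("Parity difference:", round(gap, 3),
          "-> review" if gap > 0.1 else "-> within tolerance")
```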

3. Technical Documentation

  • Purpose: Allow verification of compliance.
  • Minimum expectations: Model architecture, training methodology, performance metrics, known limitations.
  • Evidence: System technical file; version-controlled documentation.
  • Common gaps: Not updating documentation after retraining.

4. Logging & Traceability

  • Purpose: Enable incident investigation and auditing.
  • Minimum expectations: Granular logs of system behavior; automated logging.
  • Evidence: Log exports; monitoring dashboards.
  • Mapping: Aligns with SOC 2 Security & ISO 27001 logging requirements.
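
A minimal traceability record per inference might look like the sketch below: timestamp, model identity and version, a hash of the input (to avoid storing raw personal data), the output, and any human reviewer. Field names and the log path are assumptions, not requirements from the Act.

```python
# Minimal sketch: append a structured audit record for each inference, capturing
# the fields a traceability review typically needs. The log path and field names
# are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "inference_audit.jsonl"  # illustrative path

def log_inference(model_id, model_version, raw_input, output, reviewer=None):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_sha256": hashlib.sha256(raw_input.encode("utf-8")).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,  # populated when oversight intervenes
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_inference("cv-screening", "2.3.1", "candidate profile text ...", "shortlist")
```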

5. Transparency & User Information

  • Purpose: Ensure users understand system capabilities/limitations.
  • Minimum expectations: User instructions; performance metrics; disclaimers.
  • Evidence: User manual, help documentation.
  • Common gaps: Missing explanation of confidence scores.

6. Human Oversight

  • Purpose: Prevent harmful automated decisions.
  • Minimum expectations: Define roles for reviewers; override mechanisms.
  • Evidence: SOPs; role-based access controls; audit trails.
  • Common gaps: Not testing human control workflows.

7. Accuracy, Robustness & Cybersecurity

  • Purpose: Minimize risk of errors, attacks, failures.
  • Minimum expectations: Adversarial testing, performance benchmarks.
  • Evidence: Test results, vulnerability assessments.
  • Mapping: Relevant to ISO 27001 and SOC 2.

C. Limited Risk AI

Systems requiring transparency (e.g., chatbots, AI-generated content).

  • Purpose: Ensure users know they’re interacting with AI.
  • Minimum expectations: Clear disclosure notices.
  • Evidence: UI screenshots; customer communication guidelines.
  • Common gaps: Missing disclosure in embedded AI experiences.
  • Example: Chatbot must state it is an AI system.

D. Minimal Risk AI

Examples: spam filters, game AI.

  • Purpose: No mandatory requirements.
  • Expectation: Follow general governance best practices.
  • Evidence: N/A.

E. General Purpose AI (GPAI) & Foundation Models

These obligations were added late in the legislative process and appear in the final text of the Act.

General Purpose AI

  • Documentation of training datasets
  • Cybersecurity & robustness expectations
  • API transparency and usage instructions

Foundation Models (including GenAI)

  • Systemic risk assessment
  • Incident reporting
  • Red teaming
  • Energy consumption transparency

5. Minimum Requirements

Mandatory documents

  • AI System Risk Classification Register
  • AI Risk Management Procedure
  • Data Governance & Quality Policy
  • Logging & Monitoring Procedure
  • Model Documentation (Technical File)
  • Human Oversight SOP
  • Incident Reporting Procedure
  • Post-market Monitoring Plan
  • User Instructions & Transparency Notices

Mandatory processes

  • Lifecycle risk assessments
  • Bias testing
  • Model performance monitoring
  • Data lineage documentation
  • Human oversight checks
  • Change management for AI models

Technical controls

  • Automated logging
  • Access control for model components
  • Adversarial robustness testing
  • Vulnerability scanning

Governance activities

  • Clear assignment of provider vs deployer responsibilities
  • Annual compliance review
  • Post-market monitoring

6. Technical Implementation Guidance

  • Infrastructure Security: Segmented environments, secure model hosting, IaC.
  • Access Control: RBAC for model training and inference pipelines.
  • Logging & Monitoring: Track inference requests, anomalies, model drift.
  • Vulnerability Management: Scan containers, code, dependencies; test adversarial robustness.
  • Backup & DR: Snapshot model weights, training datasets, configuration files.
  • Encryption: Encrypt datasets, model parameters, logs at rest & transit.
  • Configuration Baselines: Hardened OS images; benchmark adherence (CIS).
  • Tooling Suggestions: MLflow, Seldon, Kubeflow, EvidentlyAI, WhyLabs, Great Expectations.
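
To illustrate the drift-monitoring bullet above, here is a sketch of a population stability index (PSI) comparison between a training-time feature distribution and recent production data. The binning and the 0.2 alert threshold are common rules of thumb, not obligations under the Act.

```python
# Minimal sketch: population stability index (PSI) between a training-time
# feature distribution and recent production data, as one signal of drift.
import math

def psi(expected, actual, bins=10):
    lo, hi = min(expected), max(expected)
    span = (hi - lo) or 1.0

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = max(0, min(bins - 1, int((v - lo) / span * bins)))
            counts[idx] += 1
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

if __name__ == "__main__":
    train = [0.1 * i for i in range(100)]        # stand-in training distribution
    live = [0.1 * i + 2.0 for i in range(100)]   # shifted production data
    score = psi(train, live)
    print(f"PSI = {score:.3f}",
          "-> investigate drift" if score > 0.2 else "-> stable")
```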

7. Policy & Procedure Requirements

  • AI Governance Policy – defines roles, responsibilities, risk classification, oversight structure.
  • AI Risk Management Procedure – lifecycle assessment, mitigation.
  • Data Governance Policy – dataset requirements, bias prevention.
  • Model Development Procedure – documentation, training, validation.
  • Human Oversight Procedure – escalation channels, override processes.
  • Incident Response for AI – how AI-specific incidents are handled.
  • Change Management for AI Models – retraining, versioning, approval steps.
  • Post-Market Monitoring Procedure – continuous monitoring workflow.

Each must define scope, responsibilities, activities, approval, and evidence outputs.

8. Audit Evidence & Verification

Auditors typically request:

  • Risk classification outputs
  • Training data documentation
  • Bias testing reports
  • Model performance benchmarks
  • Logging configurations
  • Oversight review logs
  • Incident records
  • User-facing transparency notices

Evidence formats

  • Screenshots
  • System exports
  • PDF reports
  • Version-controlled documentation
  • Meeting notes and sign-offs

Sampling

  • Deployment instances
  • Training cycles
  • Logged inference events
  • Change requests

Common remediation actions

  • Improve dataset documentation
  • Add transparency features
  • Create post-market monitoring records
  • Enhance bias testing coverage

9. Implementation Timeline Considerations

Typical duration: 3–9 months, depending on system complexity.

Key milestones

  1. Risk classification
  2. Documentation build-out
  3. Technical control implementation
  4. Conformity assessment
  5. CE marking (if applicable)
  6. Post-market monitoring rollout

Dependencies that cause delays

  • Identifying data sources
  • Establishing bias testing pipelines
  • Determining provider vs deployer roles
  • Legal reviews

10. Ongoing Security BAU Requirements

  • Quarterly AI system risk reviews
  • Continuous drift and performance monitoring
  • Annual policy renewals
  • Annual AI governance review
  • Regular bias and accuracy testing
  • Incident response exercises
  • Vendor risk assessments for AI suppliers
  • Retraining documentation maintenance

11. Maturity Levels

Minimum Compliance

  • Basic documentation
  • Manual risk assessments
  • Simple logging
  • Reactive incident handling

Intermediate

  • Automated monitoring
  • Integrated data quality pipelines
  • Formal bias testing
  • Role-based oversight processes

Advanced

  • Continuous automated validation
  • Adversarial attack simulations
  • Full ML Ops governance
  • Enterprise-wide trustworthy AI framework

12. FAQs

Do small companies need to comply?

Yes—if you place AI systems on the EU market or impact EU users.

Does the Act apply to open-source models?

Partially; some exemptions exist unless commercialized or classified as high-risk.

Are we required to hire an AI ethics officer?

Not explicitly, but governance roles must be formally assigned.

Do all AI systems require CE marking?

Only High-Risk systems.

Is this replacing GDPR?

No. The AI Act complements GDPR; both may apply simultaneously.

13. Summary

The EU AI Act establishes a structured, risk-based framework to ensure AI systems deployed in the EU are safe, transparent, ethical, and well-governed. Compliance requires strong documentation, lifecycle risk management, robust technical controls, and ongoing monitoring. Organizations should begin by classifying their systems, identifying applicable obligations, building core governance documents, and implementing technical controls aligned with risk level. This ensures readiness for regulatory obligations and supports responsible AI operations.

1. Overview

What it is:

The Federal Risk and Authorization Management Program (FedRAMP) is a U.S. government-wide program that standardizes security assessment, authorization, and continuous monitoring for cloud services used by federal agencies.

Who it applies to:

  • Cloud Service Providers (CSPs) offering services to U.S. federal agencies.
  • Third-party assessment organizations (3PAOs) supporting audits.
  • Federal agencies sponsoring or consuming cloud services.

Outcome:

  • Authorization to Operate (ATO) granted by a specific federal agency, or
  • Provisional ATO (P-ATO) granted by the Joint Authorization Board (JAB).

High-level goals and risk domains:

  • Ensure consistent baseline cybersecurity for cloud environments.
  • Protect federal data through NIST SP 800-53 controls.
  • Improve visibility, continuous monitoring, and incident response readiness.
  • Assure security across confidentiality, integrity, availability, governance, and operational assurance.

2. Scope & Applicability

In-scope

  • Cloud infrastructure (IaaS, PaaS, SaaS) serving federal customers.
  • All systems storing, processing, or transmitting federal data.
  • Supporting services including logging, authentication, monitoring, vulnerability scanning, and ticketing.
  • Personnel with administrative access.
  • Physical facilities used for hosting or storage.

Out-of-scope

  • Internal corporate systems not supporting the cloud service.
  • Non-federal tenant environments unless shared architecture applies.
  • Consumer-facing or commercial-only environments.

Assumptions & Dependencies

  • CSP must classify the system impact level (FIPS 199: Low, Moderate, High).
  • The architecture must meet U.S. data residency requirements.
  • CSP must align with NIST SP 800-53 Rev 5 baseline and FedRAMP overlays.
  • A sponsoring agency or JAB must be secured for authorization.

3. Core Principles or Pillars

FedRAMP is built on NIST security principles:

  • Confidentiality: Strong access control, encryption, boundary protections.
  • Integrity: Logging, change management, configuration baselines.
  • Availability: Redundancy, resilience, backup and disaster recovery.
  • Governance: Formalized policies, risk management, documented controls.
  • Continuous Monitoring: Monthly reporting, patching, scanning, and POA&M updates.
  • Assessment & Authorization: Structured audit lifecycle via SSP → 3PAO assessment → ATO.

4. Control Breakdown

FedRAMP uses the NIST SP 800-53 Rev 5 control framework, with overlays for Low/Moderate/High impact levels.

Below are the major families with expectations.

AC – Access Control

Purpose: Protect access to systems and data through least privilege and verification.

Minimum expectations:

  • MFA for all privileged access.
  • RBAC with documented approvals.
  • Session timeouts, account lockouts, separation of duties.
  • Evidence: Access control policy, access review records, screenshots of MFA settings, IAM export.
  • Common gaps: Inconsistent revocation process; lack of automated logging for IAM events.
  • Example: Use a central IAM with enforced MFA and an automated provisioning workflow.
  • Mapping: ISO 27001 A.5, A.6; SOC 2 CC6.

AU – Audit and Accountability

Purpose: Ensure comprehensive logging and tamper-resistance.

Minimum expectations:

  • Centralized SIEM.
  • Immutable logs.
  • Log retention ≥ 1 year.
  • Evidence: SIEM screenshots, log configuration, retention settings.
  • Common gaps: Missing audit coverage for system-level events.
  • Mapping: ISO 27001 A.8; SOC 2 CC7.

CM – Configuration Management

Purpose: Maintain secure and consistent system configurations.

Expectations:

  • Baseline configurations (OS, DB, network).
  • Automated configuration scanning.
  • Change approval workflows.
  • Evidence: Baseline documents, scan results, CAB meeting notes.
  • Common gaps: Lack of a hardened baseline.
  • Mapping: ISO 27001 Annex A.8, A.12.

CP – Contingency Planning

Purpose: Ensure recovery capability for outages or disasters.

Expectations:

  • Tested backup strategy.
  • Annual disaster recovery testing.
  • Alternate processing site (as applicable).
  • Evidence: Test reports, backup logs.
  • Common gaps: Failure to test annually.

IR – Incident Response

Purpose: Structured response to security incidents.

Expectations:

  • 24/7 monitoring capability.
  • Defined escalation paths.
  • Annual tabletop exercises.
  • Evidence: IR plan, training logs, test results.
  • Common gaps: No defined federal reporting timelines.

RA – Risk Assessment

Purpose: Identify and treat risks.

Expectations:

  • Annual risk assessment.
  • NIST-based methodology.
  • Evidence: RA report, treatment plan.
  • Common gaps: No linkage between risks and POA&M.

SC – System and Communications Protection

Purpose: Protect data in transit and at rest.

Expectations:

  • TLS 1.2+.
  • FIPS 140-2 validated crypto modules.
  • Network segmentation.
  • Evidence: Configuration screenshots, certificates.
  • Common gaps: Non-FIPS compliant encryption.

SI – System & Information Integrity

Purpose: Vulnerability management and system integrity.

Expectations:

  • Monthly scans.
  • Patching within FedRAMP timelines.
  • Anti-malware tools.
  • Evidence: Scan results, patch reports, anti-virus logs.
  • Common gaps: Missing authenticated scans.

Additional Families

FedRAMP also includes controls from families such as IA, MP, MA, PE, PL, PM, PS, SA, CA.

Each follows similar structure: purpose → expectations → evidence.

5. Minimum Requirements

Mandatory documents

  • System Security Plan (SSP, the largest required document).
  • Customer Responsibility Matrix.
  • Incident Response Plan.
  • Configuration Management Plan.
  • Contingency Plan.
  • Continuous Monitoring Strategy.
  • Privacy Impact Assessment.
  • Change Management procedures.
  • Roles & responsibilities (RACI).
  • Training documentation.

Mandatory processes

  • Continuous Monitoring (ConMon).
  • Monthly vulnerability scanning.
  • POA&M management.
  • Annual risk assessment.
  • Annual penetration test.
  • Incident reporting to CISA (formerly US-CERT) and affected agencies.

Technical controls

  • MFA, FIPS encryption, SIEM, network segmentation, boundary firewalls.
  • File integrity monitoring.
  • Secure baseline enforcement.

Recurrence

  • Monthly: Scanning, inventory updates, POA&M updates, reporting.
  • Quarterly: Access reviews, patch reviews.
  • Annual: Pen test, IR test, DR test, training, risk assessment.

6. Technical Implementation Guidance

Infrastructure Security

  • Implement network isolation using VPCs/subnets.
  • Use WAF and IDS/IPS tools.
  • Enforce secure configuration baselines through Infrastructure as Code (IaC).

Access Control

  • Centralized IAM with MFA enforced for all access.
  • Use JIT privileged access when possible.

Logging & Monitoring

  • Integrate logs from servers, apps, DBs, firewalls, IAM, containers.
  • Configure log immutability in storage buckets.

Vulnerability Management

  • Automated authenticated scans.
  • Patch timelines based on severity:
    • High: 30 days
    • Moderate: 90 days
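
Tracking findings against those timelines is straightforward to script. The sketch below reports open findings that have exceeded the 30-day (high) or 90-day (moderate) remediation window; the CSV layout is an assumption about your scanner or POA&M export.

```python
# Minimal sketch: check open scan findings against the remediation timelines
# above (high = 30 days, moderate = 90 days). CSV columns ("finding_id",
# "severity", "detected_on") are illustrative assumptions.
import csv
from datetime import date, timedelta

SLA_DAYS = {"high": 30, "moderate": 90}

def overdue_findings(path, today=None):
    today = today or date.today()
    overdue = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            severity = row["severity"].lower()
            if severity not in SLA_DAYS:
                continue
            detected = date.fromisoformat(row["detected_on"])
            deadline = detected + timedelta(days=SLA_DAYS[severity])
            if today > deadline:
                overdue.append((row["finding_id"], severity, (today - deadline).days))
    return overdue

if __name__ == "__main__":
    for finding_id, severity, days_late in overdue_findings("open_findings.csv"):
        print(f"{finding_id} ({severity}) is {days_late} days past its remediation deadline")
```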

Backup & DR

  • Encrypt backups with FIPS-validated modules.
  • Test restore procedures annually.

Encryption

  • TLS 1.2+ for transit.
  • AES-256 or equivalent in FIPS mode for rest.

Configuration Baselines

  • CIS benchmarks mapped to NIST 800-53 where possible.

Tooling (neutral)

  • SIEM (ELK/OpenSearch/Splunk).
  • IAM platforms (Okta/Azure AD).
  • Scanning tools (Nessus/OpenVAS).
  • Workflow tools (Jira/ServiceNow).

7. Policy & Procedure Requirements

Typical required documents:

  • Access Control Policy
  • Audit Logging Policy
  • Configuration Management Policy
  • Incident Response Procedure
  • Information Security Policy
  • Business Continuity & Disaster Recovery Policy
  • Vulnerability Management Procedure
  • Change Management Procedure
  • Privacy Policy
  • Personnel Security Procedure
  • Risk Management Policy
  • Vendor Management Policy
  • Encryption & Key Management Policy

Each should include:

  • Purpose
  • Scope
  • Roles & responsibilities
  • Control activities
  • Record-keeping requirements
  • Revision schedule

8. Audit Evidence & Verification

What auditors request:

  • SSP (primary artifact).
  • Diagrams (data flow, network, boundary).
  • Screenshots showing control enforcement.
  • Logs, exports, configuration files.
  • Policies, procedures, training evidence.
  • Scan reports (monthly, authenticated).
  • POA&M with remediation details.

Sampling expectations:

  • Auditors may sample users, devices, changes, incidents, and tickets.
  • Expect multi-month sampling due to ConMon requirements.

Common remediation actions:

  • Tightening IAM controls.
  • Hardening baseline configurations.
  • Improving logging coverage.
  • Fixing overdue vulnerabilities.

9. Implementation Timeline Considerations

Typical duration

  • 6–18 months, depending on architecture maturity and readiness.

Key milestones

  1. Impact level determination (Low/Moderate/High).
  2. Readiness Assessment Report (RAR).
  3. SSP creation and evidence preparation.
  4. 3PAO security assessment.
  5. ATO/P-ATO decision.
  6. Continuous Monitoring onboarding.

Recommended sequence

Risk assessment → architecture hardening → documentation → readiness assessment → 3PAO audit → remediation → authorization.

Delay risks

  • Lack of FIPS-validated crypto.
  • Limited logging capability.
  • Weak IAM practices.
  • Missing sponsoring agency.

10. Ongoing Security BAU Requirements

  • Monthly scans and POA&M updates.
  • Real-time incident monitoring.
  • Quarterly access reviews.
  • Annual penetration test.
  • Annual IR/DR tests.
  • Annual SSP updates.
  • Training (annual).
  • Vendor reassessment.
  • Continuous vulnerability remediation.
  • Configuration drift checks.

11. Maturity Levels

Minimum Compliance

  • Baseline documentation completed.
  • Core controls implemented.
  • Scanning and monitoring operational.
  • SSP approved.

Intermediate

  • Automation for IAM, logging, patching.
  • Mature incident response coordination.
  • Consistent change management workflows.

Advanced/Automated

  • Continuous compliance monitoring with automated evidence collection.
  • Real-time risk scoring.
  • Infrastructure-as-Code–driven baselines.
  • Predictive analytics for threats.

12. FAQs

1. Do we need an agency sponsor?

Yes, unless pursuing a JAB P-ATO (more rigorous but agency-independent).

2. How long does an ATO last?

An authorization does not expire on a fixed schedule; it remains valid as long as the CSP stays in good standing with its continuous monitoring obligations.

3. Can commercial cloud regions be used?

Only if they meet FedRAMP data residency and boundary requirements.

4. Do we need FIPS 140-2 or 140-3?

FedRAMP requires cryptographic modules validated under NIST's CMVP. FIPS 140-2 certificates remain acceptable while validations transition to FIPS 140-3, so either can satisfy the requirement.

5. How often are audits conducted?

A full 3PAO assessment is performed annually; continuous monitoring deliverables (scans, POA&M updates, inventories) are submitted monthly.

13. Summary

FedRAMP establishes a rigorous, NIST-based security framework for cloud services supporting U.S. federal agencies. Achieving authorization requires robust technical controls, extensive documentation, and disciplined operational processes. Once authorized, CSPs must maintain strong continuous monitoring practices to remain compliant and retain ATO status. Proper preparation, governance, and automation significantly improve success rates and reduce maintenance overhead.

1. Overview

The General Data Protection Regulation (GDPR) is the European Union’s primary data protection law, regulating how organizations collect, process, store, and share personal data of individuals located in the EU/EEA.

Who it applies to:

Any organization, regardless of location, that processes personal data of individuals in the EU/EEA, for example by offering them goods or services or monitoring their behavior. This includes both controllers and processors.

Outcome:

GDPR is not a certifiable framework. Organizations achieve demonstrable compliance through documented governance, operational practices, and technical controls. Some sectors may pursue codes of conduct or certification mechanisms, but these are optional.

High-level goals and risk domains:

  • Lawfulness, fairness, transparency
  • Purpose limitation
  • Data minimization
  • Accuracy
  • Storage limitation
  • Integrity & confidentiality (security)
  • Accountability

2. Scope & Applicability

In-Scope

  • Personal data of EU/EEA residents
  • Processing activities involving identification or potential identification
  • IT systems that store or process personal data
  • HR systems, CRM systems, marketing systems, product/platform databases
  • Third-party processors handling EU data
  • Staff with access to personal data
  • Cross-border transfers

Out-of-Scope

  • Anonymous or anonymized data
  • Systems without personal data
  • Non-EU data subjects (unless mixed datasets)

Assumptions & Dependencies

  • Clear identification of controller vs. processor roles
  • Up-to-date data inventory and processing records
  • Legal counsel involvement for data transfer and contractual items
  • Operational ability to fulfill data subject rights

3. Core Principles / Pillars

GDPR is built around seven key principles:

  1. Lawfulness, Fairness, Transparency – Processing must be lawful, and individuals must know how their data is used.
  2. Purpose Limitation – Collect personal data only for legitimate, explicit purposes.
  3. Data Minimization – Gather only what is necessary.
  4. Accuracy – Keep data current and correct.
  5. Storage Limitation – Retain data only as long as needed.
  6. Integrity and Confidentiality – Apply appropriate security measures.
  7. Accountability – Organizations must prove compliance through records, policies, and controls.

4. Control Breakdown

Below is a breakdown of major GDPR Articles treated as control categories.

4.1 Article 30 – Records of Processing Activities (ROPA)

Purpose: Demonstrate clear understanding of processing operations.

Minimum: Maintain up-to-date records for all processing activities.

Evidence: ROPA table, inventories, system/data flow diagrams.

Common Gaps: Missing processors, outdated inventories, incomplete retention rules.

Example: Centralized ROPA maintained quarterly.

Mapping:

  • ISO 27701: A.7, A.12
  • ISO 27001: A.5.9, A.8.1
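
Because a ROPA is structured data, even a lightweight representation helps keep Article 30 records consistent and reviewable. A minimal sketch; the field names are illustrative, not a prescribed schema, and the validation only flags the common gaps noted above:

```python
from dataclasses import dataclass, field

# The six lawful bases under GDPR Article 6(1).
LAWFUL_BASES = {
    "consent", "contract", "legal_obligation",
    "vital_interests", "public_task", "legitimate_interests",
}

@dataclass
class RopaEntry:
    """One Article 30 record; field names are illustrative."""
    activity: str
    purpose: str
    data_categories: list[str]
    data_subjects: list[str]
    lawful_basis: str
    retention: str
    processors: list[str] = field(default_factory=list)
    international_transfers: bool = False

def validate(entry: RopaEntry) -> list[str]:
    """Return a list of gaps an internal review (or auditor) would flag."""
    gaps = []
    if entry.lawful_basis not in LAWFUL_BASES:
        gaps.append(f"{entry.activity}: unrecognized lawful basis '{entry.lawful_basis}'")
    if not entry.retention:
        gaps.append(f"{entry.activity}: retention period missing")
    if entry.international_transfers and not entry.processors:
        gaps.append(f"{entry.activity}: transfer flagged but no recipient/processor listed")
    return gaps

if __name__ == "__main__":
    crm = RopaEntry(
        activity="CRM marketing outreach",
        purpose="B2B prospect communication",
        data_categories=["name", "business email"],
        data_subjects=["prospects"],
        lawful_basis="legitimate_interests",
        retention="24 months after last contact",
    )
    print(validate(crm) or "No gaps found")
```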

4.2 Article 6 – Lawful Basis for Processing

Purpose: Ensure every processing activity has a lawful justification.

Minimum: Assign and document lawful basis per activity.

Evidence: Data mapping, privacy notice, consent records.

Common Gaps: Wrong basis (e.g., consent when contract applies).

Example: CRM marked as “legitimate interest” with documented LIA.

Mapping:

  • ISO 27701: A.7.2.2

4.3 Article 13/14 – Transparency & Privacy Notice

Purpose: Inform data subjects how data is used.

Minimum: Public privacy notice covering required elements.

Evidence: Published privacy policy, version history.

Common Gaps: Missing retention periods, vague purposes.

Example: Website privacy notice aligned with ROPA.

Mapping: ISO 27701 A.7.3

4.4 Articles 15–22 – Data Subject Rights

Purpose: Enable subjects to exercise rights (access, erasure, rectification, etc.).

Minimum: Documented process, response within one month.

Evidence: Ticket logs, DSR register, responses.

Common Gaps: Untracked deadlines, incomplete verification steps.

Example: Automated intake workflow in helpdesk tool.

Mapping: ISO 27701 A.7.4
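
Missed deadlines are the most common gap here, so a simple due-date tracker is worth having. A minimal sketch that approximates the one-month window as 30 days and the Article 12(3) extension as a further 60 days; the register fields are illustrative:

```python
from datetime import date, timedelta

RESPONSE_WINDOW = timedelta(days=30)   # "one month" approximated as 30 days in this sketch
EXTENSION_WINDOW = timedelta(days=60)  # Article 12(3) allows two further months for complex requests

def dsar_due(received: date, extended: bool = False) -> date:
    """Deadline for responding to a data subject request."""
    due = received + RESPONSE_WINDOW
    return due + EXTENSION_WINDOW if extended else due

def at_risk(register: list[dict], today: date, warn_days: int = 5) -> list[dict]:
    """Open requests due within `warn_days` or already overdue."""
    flagged = []
    for req in register:
        if req["closed"]:
            continue
        due = dsar_due(req["received"], req.get("extended", False))
        if (due - today).days <= warn_days:
            flagged.append({**req, "due": due})
    return flagged

if __name__ == "__main__":
    register = [  # illustrative DSR register entries
        {"id": "DSR-101", "type": "access",  "received": date(2024, 5, 6), "closed": False},
        {"id": "DSR-102", "type": "erasure", "received": date(2024, 5, 28), "closed": False},
    ]
    for req in at_risk(register, today=date(2024, 6, 3)):
        print(f"{req['id']} ({req['type']}) due {req['due']}")
```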

4.5 Article 25 – Privacy by Design & Default

Purpose: Embed privacy controls into systems.

Minimum: DPIAs, secure defaults, role-based access.

Evidence: DPIA records, system design documents.

Common Gaps: DPIAs missing from high-risk features.

Example: New features undergo privacy review in SDLC.

Mapping: ISO 27001: A.5.8, A.8.2

4.6 Article 28 – Processor Management

Purpose: Ensure processors are contractually and operationally compliant.

Minimum: DPAs, onboarding reviews, monitoring.

Evidence: Signed DPA, vendor risk assessments.

Common Gaps: Missing sub-processor disclosures.

Example: Vendor risk portal with annual checks.

Mapping: ISO 27001 A.5.19–A.5.21, ISO 27701 A.8.2

4.7 Article 32 – Security of Processing

Purpose: Protect personal data via technical and organizational measures.

Minimum: Encryption, access controls, logging, monitoring, DR.

Evidence: Policies, screenshots, system configs, reports.

Common Gaps: Weak access provisioning, lack of backup testing.

Example: MFA, SIEM logging, quarterly access reviews.

Mapping:

  • ISO 27001 entire Annex A
  • SOC 2 Security, Availability

4.8 Article 33/34 – Breach Notification

Purpose: Ensure timely reporting to supervisory authority and data subjects.

Minimum: 72-hour reporting process.

Evidence: IR plan, incident logs, tabletop exercises.

Common Gaps: Missing severity classification.

Example: IR playbook with escalation workflow.

Mapping: ISO 27001 A.5.24–A.5.25, SOC 2 CC7
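
The 72-hour clock runs from the moment the controller becomes aware of the breach, so it helps to compute it explicitly. A minimal sketch with illustrative timestamps:

```python
from datetime import datetime, timedelta, timezone

NOTIFY_WINDOW = timedelta(hours=72)  # Article 33: notify the supervisory authority within 72 hours of awareness

def notification_deadline(became_aware: datetime) -> datetime:
    """Latest time the supervisory authority should be notified."""
    return became_aware + NOTIFY_WINDOW

def hours_remaining(became_aware: datetime, now: datetime) -> float:
    """Hours left before the 72-hour window closes (negative if already exceeded)."""
    return (notification_deadline(became_aware) - now).total_seconds() / 3600

if __name__ == "__main__":
    aware = datetime(2024, 6, 1, 9, 0, tzinfo=timezone.utc)   # illustrative awareness time
    now = datetime(2024, 6, 2, 15, 30, tzinfo=timezone.utc)
    remaining = hours_remaining(aware, now)
    if remaining < 0:
        print(f"Window exceeded by {-remaining:.1f} hours; document the reasons for the delay (Art. 33(1)).")
    else:
        print(f"{remaining:.1f} hours remaining to notify the supervisory authority.")
```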

4.9 Article 35 – Data Protection Impact Assessments

Purpose: Assess high-risk processing before launch.

Minimum: DPIA criteria, documented outcomes, mitigation plan.

Evidence: DPIA reports.

Common Gaps: DPIAs missing for AI/monitoring tools.

Example: DPIA incorporated into product lifecycle.

Mapping: ISO 27701 A.7.2.5

4.10 Article 44+ – International Data Transfers

Purpose: Ensure legal basis for cross-border transfers.

Minimum: SCCs, TIAs, transfer inventories.

Evidence: DPA Annexes, transfer logs.

Common Gaps: Missing TIA, outdated SCCs.

Example: TIA template used in vendor onboarding.

Mapping: ISO 27701 A.8.3

5. Minimum Requirements

Mandatory Documents

  • Privacy Policy
  • Data Protection Policy
  • ROPA
  • DPA templates (for processor relationships)
  • DPIA procedure and templates
  • Incident Response Policy
  • Data Retention Policy
  • Consent Policy (where applicable)
  • International Data Transfer Procedure

Mandatory Processes

  • Data subject rights workflow
  • Breach notification workflow
  • Vendor onboarding & monitoring
  • DPIA process
  • Regular policy reviews

Technical Controls

  • Encryption (at rest & in transit)
  • MFA
  • Logging & monitoring
  • Backup & recovery
  • Access control & RBAC
  • Secure development

Recurrence

  • Annual training
  • Annual risk assessment
  • Quarterly access reviews
  • Annual vendor reviews
  • DPIAs as needed

6. Technical Implementation Guidance

Infrastructure Security:

  • Network segmentation
  • Secure configurations (CIS baselines)
  • Cloud workload hardening

Access Control Best Practices:

  • MFA everywhere
  • Just-in-time access
  • Role-based access, least privilege

Logging & Monitoring:

  • Centralized logs (SIEM)
  • Retention aligned to policy
  • Automated alerting on anomalies

Vulnerability Management:

  • Monthly scans
  • Annual penetration tests
  • Risk-based prioritization

Backup & DR:

  • Automated snapshots
  • Regular restoration tests
  • Geo-redundant storage

Encryption:

  • TLS 1.2/1.3
  • AES-256 at rest
  • Key rotation and KMS usage
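
As an illustration of the at-rest requirement, the sketch below performs AES-256-GCM encryption using the widely used `cryptography` package; in production the data key would be issued and rotated by a KMS rather than generated inline:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_record(key: bytes, plaintext: bytes, aad: bytes = b"") -> tuple[bytes, bytes]:
    """AES-256-GCM: returns (nonce, ciphertext); the nonce must be stored with the ciphertext."""
    nonce = os.urandom(12)  # 96-bit nonce, unique per encryption
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, aad)
    return nonce, ciphertext

def decrypt_record(key: bytes, nonce: bytes, ciphertext: bytes, aad: bytes = b"") -> bytes:
    return AESGCM(key).decrypt(nonce, ciphertext, aad)

if __name__ == "__main__":
    # Generated inline only to keep the sketch self-contained.
    key = AESGCM.generate_key(bit_length=256)
    nonce, ct = encrypt_record(key, b"email=jane@example.com", aad=b"customer-record")
    assert decrypt_record(key, nonce, ct, aad=b"customer-record") == b"email=jane@example.com"
    print("round-trip OK")
```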

Configuration Baselines:

  • Hardening for OS, DB, containers
  • Secrets management enforced

Tooling Suggestions:

Vendor-neutral categories:

  • SIEM: log management tools
  • IAM: identity & access platforms
  • DLP: data loss prevention
  • GRC: compliance automation
  • Vulnerability scanning: standard scanning tools

7. Policy & Procedure Requirements

  1. Data Protection Policy — governance expectations, roles, principles
  2. Privacy Policy — transparency for data subjects
  3. Data Retention Policy — retention periods, deletion workflow
  4. DPIA Procedure — triggers, risk scoring, mitigation
  5. Incident Response Plan — detection, response, reporting
  6. Access Control Policy — roles, reviews, provisioning
  7. Vendor Management Policy — onboarding, DPA, due diligence
  8. Information Security Policy — overarching technical controls
  9. Consent & Preferences Policy — consent collection and withdrawal

Each must include responsibilities, operational steps, evidence requirements.

8. Audit Evidence & Verification

Auditors typically request:

Evidence Types:

  • ROPA export
  • Privacy notice URL
  • Sample DSAR responses
  • DPIA reports
  • DPAs and SCCs
  • Screenshots of security controls
  • Logs showing access reviews
  • Incident logs or exercises

Sampling:

  • 5–10 DSAR cases
  • 5–10 vendors
  • 3–5 DPIAs
  • 2–3 incidents or test scenarios

Common Remediation:

  • Update privacy notice
  • Implement better logging
  • Add missing DPAs
  • Document lawful basis more clearly

9. Implementation Timeline Considerations

Typical duration: 8–16 weeks depending on maturity.

Key Milestones

  1. Data inventory & mapping
  2. Lawful basis assessment
  3. ROPA creation
  4. Policy drafting
  5. Technical gap remediation
  6. DSAR & breach workflows
  7. Vendor management setup
  8. DPIA rollout

Dependencies that cause delays

  • Missing system owners
  • Legal review time
  • Lack of data mapping clarity
  • Complex cross-border transfers

10. Ongoing Security BAU Requirements

  • Annual GDPR training
  • Quarterly access reviews
  • Annual risk assessment
  • Yearly policy updates
  • Regular DPIAs for new features
  • Annual vendor reviews
  • Incident response testing
  • DSAR turnaround monitoring
  • Data retention/deletion activities
  • Breach logs & reporting exercises

11. Maturity Levels

Minimum Compliance

  • Privacy notice available
  • ROPA documented
  • DPA signed with vendors
  • DSAR and breach process functional
  • Core policies in place

Intermediate

  • DPIAs integrated into SDLC
  • Automated access reviews
  • Data minimization embedded in design
  • Regular awareness program

Advanced

  • Full privacy engineering program
  • Automated DSAR workflows
  • Real-time monitoring for data flows
  • Continuous DPIA and risk scoring
  • Privacy-preserving technologies (PETs)

12. FAQs

Do we need a DPO?

Only if you are a public authority, or your core activities involve large-scale, regular and systematic monitoring of individuals or large-scale processing of special-category data.

Do we need consent for everything?

No. There are six lawful bases; consent is just one.

Does GDPR apply if we’re not in the EU?

Yes, if processing EU/EEA personal data.

Can we use US vendors?

Yes, provided an appropriate transfer mechanism is in place, typically SCCs supported by a Transfer Impact Assessment (TIA).

Is pseudonymized data in scope?

Yes. Only fully anonymized data is out-of-scope.

How long can we retain data?

Only as long as necessary for the original purpose.

13. Summary

GDPR requires a structured approach to privacy governance, security controls, data inventories, and transparency obligations. Demonstrating compliance relies on well-documented processes, records, and safeguards that can be shown to regulators if needed. With the right combination of policies, technical controls, vendor oversight, and privacy-by-design practices, organizations can achieve and maintain GDPR readiness.

1. Overview

What HIPAA Is

HIPAA is a U.S. federal law designed to protect the privacy, security, and integrity of Protected Health Information (PHI). It is composed of several rules, including the Privacy Rule, Security Rule, Breach Notification Rule, and Enforcement Rule.

Who It Applies To

HIPAA applies to:

  • Covered Entities (CEs) – healthcare providers, health plans, healthcare clearinghouses
  • Business Associates (BAs) – third parties who create, receive, maintain, or transmit PHI on behalf of Covered Entities
  • Subcontractors of BAs – downstream vendors handling PHI

Expected Certification/Attestation Outcome

HIPAA has no formal certification. Organizations typically demonstrate compliance via:

  • Independent third-party assessments
  • Internal audits
  • SOC 2 + HIPAA mapping
  • Client-requested HIPAA assurance reports

High-Level Goals and Risk Domains

  • Protect PHI confidentiality, integrity, and availability
  • Standardize use and disclosure of PHI
  • Ensure robust administrative, technical, and physical safeguards
  • Provide mechanisms for breach detection and reporting

2. Scope & Applicability

In-Scope Elements

  • Systems storing or transmitting PHI (EHR systems, cloud platforms, APIs, databases)
  • Business processes involving PHI (patient intake, billing, claims processing)
  • Teams accessing PHI (support, engineering, operations, clinicians)
  • Third-party vendors handling PHI
  • Physical spaces storing PHI (offices, data centers)

Out-of-Scope Elements

  • Systems and processes that never handle PHI
  • Sandboxes/lab environments without PHI
  • External public marketing systems
  • Personal devices unless authorized under BYOD policy

Assumptions & Dependencies

  • Accurate identification of PHI data flows
  • Executed Business Associate Agreements (BAAs) with all relevant parties
  • Centralized identity and access management
  • Logging infrastructure in place
  • Enterprise-grade cloud configurations (e.g., AWS/Azure HIPAA-eligible services)

3. Core Principles or Pillars

HIPAA aligns with the following pillars:

  1. Confidentiality – limit unauthorized use or disclosure of PHI.
  2. Integrity – prevent improper alteration or destruction of PHI.
  3. Availability – ensure PHI is accessible when needed.
  4. Accountability & Auditability – maintain traceability of actions involving PHI.
  5. Minimum Necessary – PHI access should be restricted to the minimum needed to perform a job.
  6. Patient Rights – control over PHI, including access and amendment requests.

4. Control Breakdown

The HIPAA Security Rule organizes its controls into Administrative, Physical, and Technical Safeguards.

Below is a structured breakdown.

4.1 Administrative Safeguards

Risk Analysis & Management

Purpose

Identify and mitigate risks to PHI.

Minimum Expectations

  • Annual risk assessment
  • Documented risk register
  • Risk treatment plan

Evidence Auditors Request

  • Risk assessment report
  • Updated risk register
  • Remediation tracking

Common Gaps

  • Outdated risk assessments
  • Lack of asset inventory

Practical Example

Perform a formal HIPAA-specific risk assessment, maintain remediation tickets in ClickUp/Jira.

Related Framework Mapping

  • ISO 27001: A.5, A.8
  • SOC 2: CC1, CC3

Workforce Security

Purpose

Ensure only authorized staff access PHI.

Minimum Expectations

  • Defined onboarding/offboarding
  • Role-based access control
  • Background checks

Evidence

  • Access logs
  • HR onboarding workflows

Common Gaps

  • Manual offboarding
  • Lack of periodic access reviews

Example

Quarterly access reviews for PHI systems.
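
At its core, an access review is a comparison between who has access and who is authorized. A minimal sketch with illustrative account names; real inputs would be an IAM export and the HR roster:

```python
# Illustrative inputs: who currently has access to a PHI system (from an IAM export)
# versus who is an active employee authorized for that role (from HR).
current_access = {"alice", "bob", "carol", "svc-reporting"}
active_authorized = {"alice", "carol", "dana"}
approved_service_accounts = {"svc-reporting"}

def access_review_findings(access: set[str], authorized: set[str], service_ok: set[str]) -> dict[str, set[str]]:
    """Flag accounts to revoke (not authorized) and authorized users missing access."""
    return {
        "revoke": access - authorized - service_ok,
        "missing": authorized - access,
    }

if __name__ == "__main__":
    findings = access_review_findings(current_access, active_authorized, approved_service_accounts)
    print("Revoke:", sorted(findings["revoke"]))        # e.g. offboarded or role-changed users
    print("Grant/verify:", sorted(findings["missing"]))
```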

Security Awareness & Training

Purpose

Educate staff on PHI handling and threats.

Minimum Expectations

  • Annual training
  • Phishing simulations

Evidence

  • Training completion logs

Contingency Planning

Purpose

Ensure PHI availability during disasters.

Minimum Expectations

  • Documented DR & business continuity plan
  • Backup strategy
  • Annual DR testing

Evidence

  • Backup reports
  • DR test results

4.2 Physical Safeguards

Facility Access Controls

Purpose

Prevent unauthorized physical access to PHI.

Minimum Expectations

  • Badge systems
  • Visitor management
  • Data center security

Evidence

  • Access logs
  • Vendor SOC 2 reports

Workstation & Device Security

Purpose

Protect PHI on devices.

Minimum Expectations

  • Screensaver lock
  • Encrypted devices
  • BYOD controls

Evidence

  • MDM screenshots
  • Asset inventory

4.3 Technical Safeguards

Access Controls

Purpose

Ensure only authorized users access PHI systems.

Minimum Expectations

  • MFA
  • Unique IDs
  • Role-based access

Evidence

  • IAM configurations

Audit Controls

Purpose

Enable monitoring of PHI access and usage.

Minimum Expectations

  • System logs
  • Access monitoring
  • Alerts for suspicious activity

Evidence

  • SIEM logs
  • Audit reports

Integrity Controls

Purpose

Ensure PHI is not modified improperly.

Minimum Expectations

  • Hashing, versioning
  • Change management processes

Evidence

  • Change logs
  • Source control history

Transmission Security

Purpose

Protect PHI in transit.

Minimum Expectations

  • TLS 1.2+
  • VPNs
  • Encrypted APIs

Evidence

  • Network configuration screenshots

5. Minimum Requirements

Mandatory Documentation

  • Risk Assessment
  • Security Policies
  • Contingency Plans
  • Incident Response Plan
  • Access Control Policy
  • BAAs with all vendors

Mandatory Technical Controls

  • Encryption at rest & in transit
  • MFA for all PHI systems
  • Centralized logging
  • Backups

Governance Activities

  • Annual training
  • Quarterly access reviews
  • Vendor risk assessments

6. Technical Implementation Guidance

Infrastructure Security

  • Use HIPAA-eligible cloud services (AWS, Azure, GCP).
  • Enforce network segmentation for PHI workloads.

Access Control

  • Enforce SSO + MFA.
  • Role-based access aligned to minimum necessary.

Logging & Monitoring

  • Centralized SIEM
  • Retention: 6 years recommended (aligned with HIPAA documentation retention)

Vulnerability Management

  • Monthly scanning
  • Annual penetration tests
  • Patch high-risk issues within 30 days

Backups & DR

  • Daily encrypted backups
  • Periodic restoration testing
  • Off-site geographic redundancy

Encryption

  • At rest: AES-256
  • In transit: TLS 1.2/1.3

Configuration Baselines

  • CIS benchmarks for servers and endpoints

Tooling Suggestions (Neutral)

  • SIEM: any enterprise SIEM
  • MDM: any enterprise MDM
  • IAM: any SSO provider
  • Vulnerability scanning: any industry-standard scanner

7. Policy & Procedure Requirements

Common Required Policies

  • HIPAA Privacy Policy
  • HIPAA Security Policy
  • Access Control Policy
  • Incident Response Policy
  • Breach Notification Procedures
  • Business Continuity & Disaster Recovery Policy
  • Device Security Policy
  • Vendor Risk Management Policy
  • Logging & Monitoring Policy

For each policy, include:

  • Purpose
  • Roles & responsibilities
  • Processes
  • Escalation paths
  • Review cycle (annual)

8. Audit Evidence & Verification

What Auditors Request

  • Policies & procedures
  • System access logs
  • MFA configurations
  • Risk assessment reports
  • BAA inventory
  • Backup logs
  • Training records
  • Incident logs

Evidence Formats

  • Screenshots
  • Log exports
  • PDF reports
  • Meeting minutes

Sampling Expectations

  • 10–25 items per category (access rights, incidents, changes)

Common Remediation Actions

  • Update BAAs
  • Improve access review processes
  • Implement missing encryption
  • Expand logging coverage

9. Implementation Timeline Considerations

Typical Duration

8–16 weeks for full implementation.

Key Milestones

  1. Data flow & PHI identification
  2. Risk assessment
  3. Policy creation
  4. Technical control implementation
  5. Workforce training
  6. Evidence collection

Dependencies that Delay Progress

  • Missing BAAs
  • Undefined PHI scope
  • Lack of centralized identity management

10. Ongoing Security BAU Requirements

  • Quarterly access reviews
  • Annual HIPAA training
  • Annual risk assessment
  • Annual policy reviews
  • Backup restoration testing
  • Vendor risk reviews
  • Incident response tabletop exercises
  • Continuous vulnerability scanning

11. Maturity Levels

Minimum Compliance

  • Formal policies
  • Encryption enabled
  • Basic logging
  • Annual risk assessment

Intermediate

  • SIEM implemented
  • Strong vendor risk management
  • Automated access provisioning

Advanced

  • Continuous monitoring
  • Automated configuration baselines
  • Full DR automation
  • Integrated GRC tooling

12. FAQs

Is HIPAA certification real?

No, HIPAA has no official certification.

Do you need a BAA with all vendors?

Yes, any vendor handling PHI must have a BAA.

Does HIPAA require encryption?

Technically “addressable,” but encryption is expected in modern environments.

How long should HIPAA logs be retained?

6 years is best practice.

Can HIPAA be combined with SOC 2?

Yes—SOC 2 + HIPAA mapping is common for SaaS providers.

13. Summary

HIPAA establishes strict expectations for protecting PHI through administrative, physical, and technical safeguards. Success requires clear data flow understanding, comprehensive risk management, strong access controls, encryption, and ongoing operational governance. Organizations that implement documented policies, enforce minimum necessary access, maintain logging and monitoring, and complete annual training and assessments will be well-positioned for HIPAA assurance reviews and client audits.

1. Overview

What it is

ISO/IEC 27001:2022 is an international standard for establishing, implementing, maintaining, and continually improving an Information Security Management System (ISMS). It defines how an organization systematically manages information security risk using policies, procedures, technical and organizational controls.

Who it applies to

Any organization (any size, sector, or geography) that:

  • Handles sensitive or valuable information (customer data, IP, financials, personal data, etc.)
  • Wants to demonstrate a mature, risk-based security posture
  • Needs a recognized, certifiable security standard for customers, regulators, or partners

Outcome

  • A formal certification by an accredited certification body, valid for 3 years with annual surveillance audits and a recertification audit at the end of the cycle.
  • Publicly usable certificate + Statement of Applicability (SoA) as the key outputs.

High-level goals and risk domains

  • Protect confidentiality, integrity, and availability (CIA) of information
  • Embed security into governance, operations, and technology
  • Manage risks via a formal risk assessment & treatment process
  • Cover domains like governance, HR, asset management, access control, cryptography, physical security, operations, supplier management, incident management, business continuity, and compliance.

2. Scope & Applicability

Typical in-scope items

  • Systems
    • Production infrastructure (cloud environments, on-premise servers)
    • Core business applications (SaaS product, ERP/CRM, ticketing tools)
    • Security tooling (SIEM, endpoint protection, vulnerability scanners)
    • Backup and DR systems
  • Processes
    • Software development lifecycle (SDLC / DevSecOps)
    • Change & release management
    • Access management (joiner/mover/leaver)
    • Incident management
    • Vendor and third-party risk management
    • Business continuity and disaster recovery
  • Teams
    • Leadership (ISMS sponsor, steering committee)
    • IT / Engineering / DevOps
    • HR & People Ops
    • Legal / Compliance / Privacy
    • Finance / Procurement
    • Customer-facing teams handling client data
  • Data
    • Customer data processed or stored by your services
    • Internal business data that supports service delivery
    • Personal data of employees or customers where relevant

Common out-of-scope elements

  • Legacy systems not used for service delivery and explicitly isolated
  • Non-production environments, if you choose and can justify/segregate (many include at least staging)
  • Business units that do not interact with in-scope information or systems
  • Personal devices (BYOD) if your policy is to prohibit or strictly sandbox them

(Note: Out-of-scope decisions must be carefully justified and documented; auditors challenge unjustified exclusions.)

Assumptions and dependencies

  • Executive management supports the ISMS and allocates resources
  • There is a defined information security role (CISO, ISO, or equivalent)
  • Some basic IT practices exist (identity management, backups, patching)
  • Legal/regulatory obligations are identified (e.g., GDPR, HIPAA) and considered in the risk assessment
  • Third-party services can provide sufficient evidence (e.g., SOC 2 reports, security addenda)

3. Core Principles / Pillars

ISO 27001 is built around:

  1. Confidentiality
    • Ensuring only authorized users have access to information.
    • Controls: access control, encryption, classification, NDAs.
  2. Integrity
    • Ensuring information is accurate, complete, and unaltered.
    • Controls: change management, input validation, backups, logging.
  3. Availability
    • Ensuring information and systems are accessible when needed.
    • Controls: redundancy, DR/BCP, capacity planning, monitoring.
  4. Risk-based Management
    • Systematic identification, evaluation, and treatment of security risks.
    • Controls chosen based on risk, not a checklist-only approach.
  5. Continuous Improvement (Plan–Do–Check–Act)
    • Plan: establish ISMS and risk treatment.
    • Do: implement controls.
    • Check: monitor, measure, internal audit.
    • Act: corrective actions, improvements.
  6. Governance & Accountability
    • Clear roles & responsibilities, management commitment, and evidence-based decision-making.

4. Control Breakdown (ISO 27001:2022 – Annex A Themes)

The 2022 version groups controls into 4 themes and 93 controls. Below is a high-level breakdown by theme with examples of major control areas.

4.1 Organizational Controls (Theme: A.5)

Examples: information security policies, roles & responsibilities, risk management, supplier management, incident management.

Control Area: Information Security Policies

  • Purpose: Define management direction and support for information security.
  • Minimum expectations
    • Approved information security policy (and related policies) by top management
    • Policy is communicated to all relevant stakeholders
    • Policy is reviewed at planned intervals (typically annually) or on major changes
  • Typical evidence
    • Approved policy document with version, date, approver
    • Communication proof (LMS records, email announcement, intranet posting)
    • Policy review log or change history
  • Common mistakes
    • Policy not aligned with actual practices (“paper-only ISMS”)
    • No documented review or approval
    • Policy not accessible to employees
  • Practical implementation example
    • Store all policies in a version-controlled repository (Confluence, Notion, Git)
    • Use an annual “policy review” task with assigned owners
    • Use LMS to distribute and track policy acceptance
  • Related mappings
    • SOC 2: CC1.2, CC1.3 (control environment)
    • NIST CSF: ID.GV-1, ID.GV-2

Control Area: Risk Management

  • Purpose: Establish a systematic information security risk assessment and treatment process.
  • Minimum expectations
    • Defined risk methodology (likelihood/impact scales, scoring model)
    • Formal risk register
    • Risk treatment plan linked to Annex A controls
    • Regular risk assessments (at least annually and on significant change)
  • Typical evidence
    • Risk Assessment Methodology document
    • Risk register (with owners, scores, status)
    • Risk treatment plan referencing Annex A controls
    • Meeting minutes of risk review / acceptance
  • Common mistakes
    • Treating risk assessment as a one-time pre-certification exercise
    • Lack of clear risk owners and due dates
    • No traceability from risks to controls (or vice versa)
  • Practical implementation example
    • Use a spreadsheet or GRC tool to track risks, likelihood, impact, scores
    • During quarterly security reviews, update risk status and treatments
    • Use color coding and dashboards to prioritize treatment actions
  • Related mappings
    • SOC 2: CC3.2, CC3.3
    • NIST CSF: ID.RA-1 to ID.RA-6
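
A minimal sketch of a likelihood × impact scoring model consistent with the expectations above; the 1–5 scales and rating thresholds are assumptions that should match your documented methodology:

```python
from dataclasses import dataclass

# Assumed thresholds on a 1-5 x 1-5 scale; use whatever your risk methodology defines.
THRESHOLDS = [(15, "high"), (8, "medium"), (0, "low")]

@dataclass
class Risk:
    title: str
    owner: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    treatment: str = "tbd"

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    @property
    def rating(self) -> str:
        return next(label for floor, label in THRESHOLDS if self.score >= floor)

if __name__ == "__main__":
    register = [
        Risk("Unpatched internet-facing service", "Head of Engineering", likelihood=4, impact=5),
        Risk("Laptop theft without disk encryption", "IT Manager", likelihood=2, impact=4),
    ]
    for r in sorted(register, key=lambda r: r.score, reverse=True):
        print(f"{r.rating.upper():6} {r.score:>2}  {r.title}  (owner: {r.owner})")
```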

Control Area: Supplier Relationships

  • Purpose: Ensure third parties do not introduce unacceptable information security risks.
  • Minimum expectations
    • Vendor risk management process (onboarding, due diligence, review)
    • Security and privacy clauses in contracts where relevant
    • Periodic vendor reviews (especially critical suppliers)
  • Typical evidence
    • Vendor inventory with classification (critical, high, medium)
    • Completed security questionnaires or SOC 2 / ISO 27001 reports of suppliers
    • Contract clauses covering security and data protection
    • Vendor review logs
  • Common mistakes
    • No formal process; reliance on ad-hoc checks
    • Not revisiting vendor risk over time
    • Critical vendors not identified
  • Practical implementation example
    • Maintain a vendor register in a spreadsheet or GRC
    • Require security documentation before onboarding new critical vendors
    • Schedule annual critical-vendor review tasks
  • Related mappings
    • SOC 2: CC9.2
    • NIST CSF: ID.BE-1, ID.SC-1 to ID.SC-5

4.2 People Controls (Theme: A.6)

Examples: screening, terms of employment, awareness training, disciplinary process, responsibilities after termination.

Control Area: Security Awareness & Training

  • Purpose: Ensure personnel understand security responsibilities and act accordingly.
  • Minimum expectations
    • Induction training for new hires
    • Recurring training (typically annual) covering key topics (phishing, password hygiene, reporting incidents)
    • Records of participation and completion
  • Typical evidence
    • Training policy or plan
    • LMS export showing completion rates
    • Training materials or slides
    • Phishing campaign reports (if used)
  • Common mistakes
    • One-time training; no recurring cycle
    • No tracking of non-compliant employees
    • Training content too generic/not aligned to your environment
  • Practical implementation example
    • Use an LMS or security awareness platform
    • Require completion within X days of hire and annually thereafter
    • Include a short quiz and performance dashboards
  • Related mappings
    • SOC 2: CC2.2
    • NIST CSF: PR.AT-1 to PR.AT-5

4.3 Physical Controls (Theme: A.7)

Examples: secure areas, physical access control, equipment security.

Control Area: Physical Security of Offices / Data Centers

  • Purpose: Prevent unauthorized physical access, damage, and interference to premises and information.
  • Minimum expectations
    • Controlled access (badges, locks, reception controls)
    • Visitor management process (sign-in, visitor badges, escort)
    • Clear desk/clear screen expectations in relevant areas
    • Environmental protection for critical locations (where applicable)
  • Typical evidence
    • Floor plans with controlled areas defined
    • Access control system screenshots or logs
    • Visitor logs (physical or electronic)
    • Physical security policy
  • Common mistakes
    • No logs or inconsistent visitor records
    • Employees tailgating / propping doors open
    • Unlocked cabinets storing sensitive media
  • Practical implementation example
    • Use building access logs from your provider
    • Maintain a simple visitor sign-in sheet if no electronic system
    • Put visual reminders and screen-lock timeout configs in place
  • Related mappings
    • SOC 2: CC6.4
    • NIST CSF: PR.AC-2, PR.AC-3

4.4 Technological Controls (Theme: A.8)

Examples: access control, cryptography, operations security, logging & monitoring, backup, network security, secure development, vulnerability management.

Control Area: Access Control & Identity Management

  • Purpose: Ensure users only have access necessary for their role (least privilege).
  • Minimum expectations
    • Unique user accounts; no shared generic accounts (or tightly controlled)
    • Role-based access control (RBAC) where feasible
    • Joiner–mover–leaver process
    • Multi-factor authentication (MFA) for critical systems
    • Periodic access reviews
  • Typical evidence
    • Access control policy
    • Screenshots/config exports from IAM solutions
    • HR + IT offboarding checklist
    • Access review records (sign-off)
  • Common mistakes
    • Dormant accounts not disabled
    • Admin rights not reviewed regularly
    • Inconsistent offboarding procedures
  • Practical implementation example
    • Use SSO as a central point (e.g., Okta, Azure AD, Google Workspace)
    • Automate account creation and removal via HR triggers where possible
    • Run quarterly reviews for high-privilege groups
  • Related mappings
    • SOC 2: CC6.1, CC6.2, CC6.3
    • NIST CSF: PR.AC-1 to PR.AC-7
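
Dormant accounts are one of the gaps listed above and are easy to spot from IdP sign-in data. A minimal sketch with an assumed 90-day threshold and illustrative accounts:

```python
from datetime import datetime, timedelta, timezone

DORMANCY_THRESHOLD = timedelta(days=90)  # assumed threshold; set per your access control policy

# Illustrative last-login data, e.g. exported from your IdP/SSO provider.
last_login = {
    "alice@example.com": datetime(2024, 5, 30, tzinfo=timezone.utc),
    "bob@example.com":   datetime(2024, 1, 12, tzinfo=timezone.utc),
    "deploy-bot":        None,  # never logged in interactively
}

def dormant_accounts(now: datetime) -> list[str]:
    """Accounts with no login within the dormancy threshold (candidates to disable)."""
    return [
        account for account, seen in last_login.items()
        if seen is None or now - seen > DORMANCY_THRESHOLD
    ]

if __name__ == "__main__":
    now = datetime(2024, 6, 1, tzinfo=timezone.utc)  # fixed for the example
    for account in dormant_accounts(now):
        print(f"Review/disable: {account}")
```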

Control Area: Logging & Monitoring

  • Purpose: Record events and detect anomalous or malicious activity.
  • Minimum expectations
    • Logging of security-relevant events: authentication, privilege changes, key configuration changes
    • Logs protected from tampering and retained for a defined period
    • Regular review of alerts from logging/monitoring tools
  • Typical evidence
    • Logging & monitoring standard
    • SIEM or log management screenshots
    • Sample alerts and associated investigation notes
    • Defined log retention settings
  • Common mistakes
    • Logs enabled but never reviewed
    • Over-reliance on default settings without tuning
    • No process to follow up on alerts
  • Practical implementation example
    • Centralize logs from main systems to a log platform
    • Define a basic alert triage runbook
    • Conduct monthly review of key alerts & improvements
  • Related mappings
    • SOC 2: CC7.2, CC7.3
    • NIST CSF: DE.AE-1 to DE.AE-5, DE.CM-1 to DE.CM-8

Control Area: Backup & Recovery

  • Purpose: Ensure information can be restored in case of loss, corruption, or disaster.
  • Minimum expectations
    • Documented backup strategy (what, how often, retention, storage location)
    • Periodic restore tests
    • Protection of backup media (encryption, access control)
  • Typical evidence
    • Backup policy/standard
    • Backup configuration screenshots
    • Restore test records
    • DR/BCP test reports
  • Common mistakes
    • Never testing restores
    • Backups in the same blast radius as production (same region, same credentials)
    • No clarity on RPO/RTO
  • Practical implementation example
    • Configure automated backups in your cloud provider
    • Quarterly test restore for at least one key system
    • Store restore test results in your ISMS repository
  • Related mappings
    • SOC 2: A1.2
    • NIST CSF: PR.IP-4, PR.IP-9, RC.RP-1

(You can apply a similar breakdown to other Annex A areas: secure development, cryptography, vulnerability management, change management, incident response, etc.)

5. Minimum Requirements

To be realistically certifiable, you typically need at least:

Mandatory documents

  • ISMS scope statement
  • Information security policy
  • Risk assessment methodology
  • Risk assessment and risk treatment plan
  • Statement of Applicability (SoA)
  • Information classification and handling rules
  • Asset inventory
  • Access control policy
  • Change management procedure
  • Incident management procedure
  • Business continuity / disaster recovery plans
  • Internal audit procedure
  • Corrective action / nonconformity management procedure
  • Documented roles and responsibilities for ISMS
  • Applicable legal, regulatory, and contractual requirements register

Mandatory processes

  • Risk management (assessment + treatment + acceptance)
  • Management review of the ISMS
  • Internal audit program and execution
  • Corrective actions and continual improvement
  • Document and record control
  • Supplier risk management

Technical controls (baseline)

  • Identity and access management with MFA on critical systems
  • Secure configuration baselines for key platforms
  • Regular vulnerability scanning and patching
  • Logging and monitoring for critical systems
  • Backup and restore processes
  • Endpoint protection and basic network controls (firewalls, segmentation where relevant)
  • Encryption of data at rest and in transit for sensitive data

Governance activities & recurrence

  • Annually
    • ISMS risk assessment and treatment update
    • Management review meeting
    • Internal ISMS audit
    • Policy review and updates
    • Business impact analysis / BC update
  • Quarterly (typical)
    • Access reviews for critical systems
    • Vulnerability scan and remediation review
    • Vendor risk review (critical vendors)
    • Security KPI/metrics review
  • Monthly/Continuous
    • Log and alert review
    • Incident handling and reporting
    • Change management board/review

6. Technical Implementation Guidance

This can be tailored by size/maturity, but baseline expectations:

Infrastructure Security

  • Harden OS and application configurations based on benchmarks (e.g., CIS)
  • Segregate environments (prod vs non-prod) and networks (public vs internal)
  • Use infrastructure-as-code for consistent deployments where possible
  • Apply least privilege on cloud IAM roles and security groups

Access Control Best Practices

  • Centralize identity through SSO
  • Enforce MFA for:
    • Cloud console
    • Source code repositories
    • Production systems
    • VPN / remote access
  • Apply RBAC; avoid direct access to databases where possible, favor application-layer access
  • Automate user lifecycle (provisioning/deprovisioning) triggered by HR events

Logging & Monitoring Standards

  • Collect logs for:
    • Authentication/authorization events
    • Admin and privileged actions
    • Configuration changes in cloud/infra
    • Security events from tools (AV, IDS/IPS, WAF)
  • Define:
    • Minimum log retention (often 12–24 months)
    • Time synchronization (e.g., NTP)
    • Regular alert review process and on-call / escalation paths

Vulnerability Management Workflow

  1. Discover: Regular scans (infrastructure, applications, dependencies).
  2. Assess: Triage by severity and business impact.
  3. Treat: Patch, configuration change, or mitigation.
  4. Track: Ticket-based tracking with SLA (e.g., Critical: 14 days, High: 30 days).
  5. Verify: Confirm remediation and close ticket.

Backup & DR Expectations

  • RPO/RTO defined for key services
  • Automated backups as per policy (e.g., daily incremental, weekly full)
  • Offsite/geo-redundant storage for critical data
  • At least annual DR exercise (tabletop or live failover test)
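
One way to keep RPO commitments honest is to compare each service's latest successful backup against its target. A minimal sketch with assumed RPO values and illustrative timestamps:

```python
from datetime import datetime, timedelta, timezone

# Assumed RPO targets per service and the timestamp of each service's latest successful backup.
rpo_targets = {
    "customer-db": timedelta(hours=4),
    "file-store":  timedelta(hours=24),
}
latest_backup = {
    "customer-db": datetime(2024, 6, 1, 6, 0, tzinfo=timezone.utc),
    "file-store":  datetime(2024, 5, 30, 2, 0, tzinfo=timezone.utc),
}

def rpo_breaches(now: datetime) -> list[str]:
    """Services whose most recent backup is older than the defined RPO."""
    breaches = []
    for service, rpo in rpo_targets.items():
        backup = latest_backup.get(service)
        if backup is None or now - backup > rpo:
            breaches.append(f"{service}: last backup {backup}, RPO {rpo}")
    return breaches

if __name__ == "__main__":
    now = datetime(2024, 6, 1, 9, 0, tzinfo=timezone.utc)  # fixed for the example
    for line in rpo_breaches(now):
        print("RPO at risk:", line)
```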

Encryption Requirements

  • Data in transit: TLS for external services; internal where feasible
  • Data at rest: Disk or database encryption for sensitive stores
  • Key management: Use managed KMS where possible, rotate keys according to policy, restrict KMS permissions

Configuration Baselines

  • Standard images for servers and endpoints
  • Baselines defined for:
    • OS (services, ports, audit settings)
    • Databases (encryption, logging, access)
    • Cloud accounts (default security configs, guardrails)
  • Regular configuration drift checks (e.g., config management tools, CSPM)

Tooling Suggestions (neutral)

  • Identity & Access: IdP / SSO provider, directory service
  • Endpoint security: EDR/AV
  • Logging & monitoring: Central log platform or SIEM
  • Vulnerability scanning: Network, host, container, and dependency scanners
  • Ticketing & workflow: ITSM or project management tools
  • GRC/ISMS tool (optional): To manage policies, risks, and evidence

7. Policy & Procedure Requirements

Commonly expected set (rename to fit your environment):

  • Information Security Policy
  • Acceptable Use Policy
  • Access Control Policy
  • Password / Authentication Policy
  • Change Management Policy / Procedure
  • Secure Development Policy / SDLC Procedure
  • Asset Management Policy
  • Information Classification & Handling Policy
  • Backup & Recovery Policy
  • Incident Management Policy & Procedure
  • Business Continuity / Disaster Recovery Policy
  • Vendor / Supplier Security Policy
  • Logging & Monitoring Standard
  • Cryptography / Encryption Policy
  • HR Security Policy (covering onboarding, ongoing, offboarding)
  • Clean Desk / Physical Security Policy
  • Risk Management Policy and Methodology
  • Internal Audit Procedure
  • Corrective & Preventive Action (CAPA) Procedure
  • Data Protection / Privacy Policy (especially if handling personal data)

For each, you typically include:

  • Purpose and scope
  • Roles and responsibilities
  • Principles/requirements
  • Procedures or references to procedures
  • Exceptions and non-compliance handling
  • Review and approval information

8. Audit Evidence & Verification

What auditors typically request

  • Core ISMS documents (scope, risk methodology, risk assessment, SoA)
  • Policies and procedures
  • Records:
    • Risk assessment outputs
    • Training completion
    • Access reviews
    • Incident logs
    • Change records
    • Internal audit report and corrective actions
    • Management review meeting minutes
    • Vendor assessments
    • Backup logs and restore test records

Evidence formats

  • PDFs/exports from tools (audit logs, reports)
  • Screenshots (with timestamps and scopes visible)
  • Configuration snippets or read-only views
  • Meeting minutes (with attendees, date, agenda)
  • Signed approvals or acknowledgments (can be digital)

Sampling expectations

  • Auditors often sample:
    • A subset of users for HR/onboarding/offboarding checks
    • A subset of changes from your change log
    • A subset of incidents, tickets, or support cases
    • A subset of assets from your inventory
  • Samples must trace through the process: input → handling → outcome.

Common remediation actions

  • Clarifying or updating policies (missing sections, outdated references)
  • Formalizing ad-hoc practices into procedures
  • Improving evidence capture (e.g., adding screenshots, logs)
  • Closing or better documenting risks and treatment decisions
  • Implementing missing reviews (access, vendor, DR tests)

9. Implementation Timeline Considerations

Typical duration (for a first-time ISMS)

  • Small/medium tech company starting from a basic baseline: 6–12 months to certification readiness.
  • Very mature organizations or those with existing SOC 2/NIST: potentially shorter.

Key milestones

  1. Project initiation & scope definition
  2. Risk assessment & treatment planning
  3. Policies & procedures development
  4. Technical control implementation / gap closure
  5. ISMS operation period (usually at least 3 months of “evidence accumulation”)
  6. Internal audit
  7. Management review
  8. Stage 1 audit (documentation readiness)
  9. Stage 2 audit (implementation & effectiveness)

Recommended sequencing

  • Start with: scope definition → context & interested parties → risk methodology
  • Then: risk assessment → risk treatment → SoA
  • Parallel: policy development + technical control implementation
  • After some operational period: internal audit → management review
  • Finally: schedule and undergo external audits (Stage 1 then Stage 2)

Dependencies that can delay progress

  • Slow decision-making on scope and risk appetite
  • Lack of executive sponsorship
  • Dependence on multiple third-party vendors for evidence
  • Lack of tooling for logging, ticketing, or training
  • Underestimation of time needed to implement technical controls

10. Ongoing Security BAU Requirements

To keep the ISMS alive between audits:

  • Quarterly
    • Access reviews (critical apps & infra)
    • Vulnerability reports and remediation review
    • Review of security metrics / KPIs
    • Critical vendor review (where needed)
  • Annual
    • Full ISMS risk assessment
    • Policy review and updates
    • Internal ISMS audit
    • Management review
    • Business continuity / DR test
    • Penetration test or significant security test (often annually or biannually)
    • Security awareness training cycle
  • Continuous / Monthly
    • Log and alert review
    • Incident response activities and post-incident reviews
    • Change management governance
    • Asset inventory maintenance
    • Onboarding/offboarding execution
    • Corrective action follow-up

11. Maturity Levels

Minimum Compliance State

  • ISMS is documented and implemented
  • All mandatory requirements are addressed
  • Controls operate, but mostly manual and reactive
  • Evidence collection is done, but may be effortful

Intermediate Maturity

  • Many processes partially automated (access management, logging, backups)
  • Regular metrics and dashboards for key control areas
  • Clear alignment with other frameworks (SOC 2, NIST, etc.)
  • Lessons learned from incidents feed into risk and controls

Advanced / Automated State

  • Strong integration with DevOps/DevSecOps (security-as-code)
  • Continuous control monitoring and near real-time metrics
  • Automated evidence collection and centralized GRC
  • Predictive or proactive security improvements informed by data
  • ISMS is tightly integrated into business strategy and decisions

12. FAQs

Q1: Do we need to implement all 93 Annex A controls?

No. You must evaluate all controls, but you may justify exclusions in the Statement of Applicability based on risk and scope. Exclusions must be reasonable and well-documented.

Q2: How much historical evidence do we need before certification?

Typically at least 3 months of ISMS operation with records (risk updates, training, logs, incidents, changes, etc.). More is better.

Q3: Is ISO 27001 only about IT?

No. It covers people, processes, and technology, including HR, legal, physical security, and supplier management.

Q4: We already have SOC 2; is ISO 27001 redundant?

There is overlap, but ISO 27001 focuses on a formal ISMS with risk-based control selection and certification via an accredited body. Many organizations align both to leverage similar controls and evidence.

Q5: Can we include only our SaaS product and not the whole company?

Yes, as long as the scope is clearly defined and justifiable. However, supporting teams and systems that affect the service will often be in scope.

Q6: Is a dedicated CISO required?

No. But information security responsibilities must be assigned to a competent person or team, with clear authority and management backing.

13. Summary

ISO 27001:2022 provides a structured, risk-based framework for managing information security. To succeed:

  • Define a realistic scope and understand your risks.
  • Implement and document key policies, processes, and controls across people, process, and technology.
  • Operate the ISMS consistently and gather evidence through BAU activities.
  • Use internal audit, management review, and metrics to drive continual improvement.

1. Overview

ISO 27701 is a Privacy Information Management System (PIMS) extension to ISO 27001 and ISO 27002. It specifies requirements and guidance for managing personal data (PII) and for demonstrating compliance with applicable privacy obligations.

Who it applies to:

Any organisation that processes personal data—both controllers and processors—across all industries.

Outcome:

Certification (as an extension of an ISO 27001 certification) demonstrating that privacy controls are implemented and maintained in line with international good practice. ISO 27701 supports compliance with GDPR, CCPA, and other privacy obligations, but it does not replace them.

High-level goals and risk domains:

  • Governance of personal data
  • Data subject rights
  • Lawful processing
  • Data lifecycle protection
  • Third-party oversight
  • Privacy by design and default
  • Transparency and accountability

2. Scope & Applicability

Typical in-scope elements:

  • Systems processing personal data (applications, databases, cloud workloads)
  • Privacy-relevant processes (collection, storage, access, deletion, transfer)
  • Teams handling personal data (Engineering, HR, Support, Marketing)
  • Vendors and subprocessors
  • Records of processing activities (RoPA)
  • Data subject rights workflows

Common out-of-scope elements:

  • Systems with no connection to personal data
  • Archived legacy systems marked for decommissioning
  • Experimental/test environments without personal data (unless production data is used)

Assumptions & dependencies:

  • ISO 27001 must be implemented; ISO 27701 cannot stand alone
  • A lawful basis for processing data exists
  • The organisation maintains an inventory of personal data and processing activities
  • Clear controller/processor role definitions

3. Core Principles / Pillars

ISO 27701 is built around privacy principles common to GDPR and global regulations:

  1. Lawfulness, Fairness, Transparency – Data is processed legally and communicated clearly.
  2. Purpose Limitation – Data is only used for defined, legitimate purposes.
  3. Data Minimisation – Only the minimum data required is collected and retained.
  4. Accuracy – Personal data is kept accurate and up to date.
  5. Storage Limitation – Data is retained only for the necessary duration.
  6. Integrity & Confidentiality – Data is safeguarded via technical and organisational controls.
  7. Accountability – The organisation demonstrates compliance through controls, documentation, and governance.

4. Control Breakdown

ISO 27701 extends ISO 27001 Annex A controls with additional privacy-specific requirements. Key categories:

A.1 — Privacy Governance

Purpose: Establish governance for managing privacy obligations.

Minimum expectations:

  • Privacy roles defined (DPO, privacy lead)
  • Policies covering PIMS management, privacy principles, DPIAs
  • Evidence: Organisation charts, policies, role descriptions
  • Common gaps: Missing role assignments; overlapping responsibilities
  • Example: Assign a DPO for EU data processing and publish responsibilities internally.

A.2 — Roles and Responsibilities for Controllers and Processors

Purpose: Define obligations based on processing role.

Minimum expectations:

  • Document whether the organisation is a controller, processor, or both
  • Maintain legal basis for processing
  • Evidence: RoPA, contracts, consent records
  • Common gaps: Unclear designation of processing role
  • Example mapping:
    • SOC 2 CC2.1 (Governance)
    • ISO 27001 A.5 (Policies)

A.3 — Data Processing Policies

Purpose: Ensure personal data processing meets regulatory and contractual requirements.

Minimum expectations:

  • Privacy policy
  • Data processing policy
  • Data transfer policy
  • Evidence: Published policies, audit logs
  • Common gaps: Policies not aligned to actual practices
  • Example: Policy outlining data retention periods by category.

A.4 — Data Subject Rights

Purpose: Ensure individuals can exercise their rights.

Minimum expectations:

  • DSAR intake, authentication, fulfilment workflows
  • Response tracking within required timeframes
  • Evidence: DSAR log, ticket exports, templates
  • Common gaps: No formal DSAR ownership
  • Mapping: GDPR Art. 12–23

A.5 — PII Minimisation & Purpose Limitation

Purpose: Reduce unnecessary data and ensure appropriate retention.

Minimum expectations:

  • Defined retention schedule
  • Data minimisation procedures
  • Evidence: Retention policy, deletion logs
  • Common gaps: Deletion not operationalised
  • Example: Automated data deletion workflow using lifecycle rules.
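
A minimal sketch of how a retention schedule could drive deletion candidates in such a workflow; the categories, retention periods, and record fields are assumptions for illustration:

```python
from datetime import date, timedelta

# Assumed retention schedule by data category (days after last activity).
RETENTION_DAYS = {
    "support_tickets": 365 * 2,
    "marketing_leads": 365,
    "applicant_cvs":   180,
}

def deletion_candidates(records: list[dict], today: date) -> list[dict]:
    """Records whose retention period (by category) has elapsed."""
    due = []
    for rec in records:
        keep_for = timedelta(days=RETENTION_DAYS[rec["category"]])
        if today > rec["last_activity"] + keep_for:
            due.append(rec)
    return due

if __name__ == "__main__":
    records = [  # illustrative inventory rows
        {"id": "lead-881", "category": "marketing_leads", "last_activity": date(2022, 11, 3)},
        {"id": "ticket-42", "category": "support_tickets", "last_activity": date(2023, 9, 10)},
    ]
    for rec in deletion_candidates(records, today=date(2024, 6, 1)):
        print(f"Delete or anonymize {rec['id']} ({rec['category']})")
```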

A.6 — Data Security (Extension of ISO 27001 Security Controls)

Purpose: Apply security controls to personal data.

Minimum expectations:

  • Access control
  • Encryption
  • Logging and monitoring
  • Secure SDLC
  • Evidence: Access reviews, system configurations, monitoring alerts
  • Mapping: ISO 27001 Annex A controls, SOC 2 CC6/CC7

A.7 — Third-Party Management

Purpose: Ensure subprocessors protect data appropriately.

Minimum expectations:

  • DPIAs or privacy reviews for vendors
  • DPAs in place
  • Regular vendor assessments
  • Evidence: Vendor risk assessments, DPA copies
  • Common gaps: Reviewing only security, not privacy requirements
  • Example: Annual reviews of cloud vendors’ privacy certifications.

A.8 — Privacy by Design & Default

Purpose: Integrate privacy into system design.

Minimum expectations:

  • DPIA for new systems
  • Privacy checks during SDLC
  • Evidence: SDLC templates, DPIA forms
  • Common gaps: Privacy not included in engineering workflows
  • Example: Engineering pull requests include a “privacy impact” checkbox.

5. Minimum Requirements

Mandatory documentation:

  • Privacy Information Management System Manual
  • RoPA
  • DPIA templates
  • Privacy policy (internal and external)
  • Data retention policy
  • DSAR procedure
  • Vendor privacy assessment procedure
  • Controller/processor role documentation

Mandatory processes:

  • DSAR management
  • DPIA completion
  • Data deletion and retention workflows
  • Vendor privacy risk reviews
  • Privacy risk assessment

Technical controls:

  • Encryption at rest and in transit
  • Access control aligned to least privilege
  • Audit logging
  • Data lifecycle automation

Governance:

  • Annual PIMS management review
  • Annual internal audit
  • Annual risk assessment

Recurrence:

  • Quarterly vendor reviews
  • Quarterly access reviews
  • Annual DPIA review and updates
  • Annual policy reviews

6. Technical Implementation Guidance

Infrastructure security:

  • Segregate systems processing personal data
  • Apply network restrictions and API rate limiting
  • Enforce MFA everywhere

Access control best practices:

  • Role-based access control
  • Documented approval and periodic access review
  • Offboarding automation

Logging & monitoring:

  • Capture access to personal data repositories
  • Monitor administrative actions
  • Maintain immutable logs

Vulnerability management:

  • Perform monthly scans
  • Execute annual penetration tests
  • Include privacy checks in findings analysis

Backup & DR:

  • Ensure backups do not violate retention schedules
  • Encrypt backup storage
  • Test recovery annually

Encryption:

  • Strong encryption protocols (TLS 1.2+, AES-256)
  • Key management procedures with rotation
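
To illustrate the rotation bullet, here is a small sketch that enables and verifies automatic annual rotation on a customer-managed AWS KMS key via boto3; the key identifier is a placeholder, and key-management services on other platforms expose comparable settings.

```python
import boto3


def ensure_key_rotation(key_id: str) -> bool:
    """Enable automatic rotation for a customer-managed KMS key and confirm the setting."""
    kms = boto3.client("kms")
    kms.enable_key_rotation(KeyId=key_id)
    status = kms.get_key_rotation_status(KeyId=key_id)
    return status["KeyRotationEnabled"]


if __name__ == "__main__":
    # Placeholder; use the key ID or ARN from your key inventory.
    print(ensure_key_rotation("replace-with-key-id-or-arn"))
```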

Configuration baselines:

  • CIS or vendor security benchmarks for cloud environments
  • Baselines documented and monitored

Tooling suggestions:

  • Data discovery/scanning tools
  • Automated DSAR management platforms
  • Ticketing tools for tracking privacy requests
  • Vendor risk management systems

7. Policy & Procedure Requirements

Common documents:

  1. Privacy Policy – Statement of principles and obligations.
  2. PIMS Manual – How privacy management integrates with ISMS.
  3. Data Processing Policy – Rules for how personal data is collected and used.
  4. Data Retention/Deletion Policy – Retention periods and deletion workflows.
  5. DPIA Procedure – When and how impact assessments are done.
  6. DSAR Procedure – Intake, verification, response workflow.
  7. Vendor Privacy Management Procedure – Assessments, onboarding, reviews.
  8. Incident Response with Privacy Addendum – How privacy incidents are handled.

Expected contents:

  • Purpose
  • Scope
  • Roles and responsibilities
  • Process steps
  • Recordkeeping requirements
  • Performance metrics or KPIs

8. Audit Evidence & Verification

Auditors typically request:

  • RoPA
  • DPIA samples
  • DSAR records
  • Policies and proof of communication
  • Vendor assessments and DPAs
  • Logs showing access to personal data
  • Deletion/retention evidence
  • Training records

Sampling expectations:

  • DSAR samples from last 12 months
  • Vendors selected based on highest privacy impact
  • DPIAs for new systems
  • User access reviews

Common remediation actions:

  • Formalising undocumented practices
  • Updating RoPA to reflect actual processing
  • Improving DSAR tracking and deadlines
  • Enhancing vendor management workflows

9. Implementation Timeline Considerations

Typical duration: 3–6 months, depending on ISO 27001 maturity.

Key milestones:

  1. Gap assessment
  2. Data mapping & RoPA
  3. Risk assessment
  4. DPIA completion
  5. Policy and procedure development
  6. Process implementation
  7. Internal audit
  8. Certification audit

Dependencies that cause delays:

  • Unclear data flows
  • Missing retention schedules
  • Undefined controller/processor roles
  • Lack of DSAR handling processes
  • Vendor DPA negotiation delays

10. Ongoing Security BAU Requirements

  • Quarterly access reviews
  • Quarterly vendor privacy reviews
  • Annual privacy training
  • Annual PIMS management review
  • Annual privacy risk assessment
  • Annual DPIA refresh
  • Continuous DSAR handling
  • Ongoing data minimisation efforts
  • Incident response exercises with privacy components

11. Maturity Levels

Minimum compliance:

  • Policies in place
  • Basic RoPA
  • DSAR workflow established
  • DPIAs completed for major systems

Intermediate maturity:

  • Automated data lifecycle management
  • Integrated privacy checks in SDLC
  • Vendor privacy scoring model

Advanced/automated maturity:

  • Tooling for DSAR automation
  • Continuous data discovery and classification
  • Real-time privacy monitoring dashboards
  • Automated deletion and retention enforcement

12. FAQs

Do we need ISO 27001 before ISO 27701?

Yes. ISO 27701 is an extension and cannot be certified independently.

Is certification mandatory?

No, but it greatly strengthens global privacy compliance programs.

Does ISO 27701 make us GDPR compliant?

It provides strong alignment but does not replace legal obligations.

Does marketing data fall under scope?

Yes, if it contains personal data.

Can cloud providers be certified?

Yes, both controllers and processors can certify.

13. Summary

ISO 27701 enhances an existing ISMS with comprehensive privacy requirements covering governance, processing principles, data subject rights, lifecycle management, and accountability. Implementing it requires robust documentation, operational processes, and technical controls that align with global regulatory expectations. Organisations that complete certification demonstrate a mature, structured approach to protecting personal data across its lifecycle.

1. Overview

What the framework is

ISO 42001 is the international standard for Artificial Intelligence Management Systems (AIMS). It provides a governance, operational, and risk-based structure for organizations that design, develop, deploy, or use AI systems. Its purpose is to ensure trustworthy, transparent, safe, and responsible AI.

Who it applies to

Suitable for any organization building or using AI, including:

  • AI product companies
  • Software organizations integrating AI
  • Enterprises operating high-risk AI systems
  • Service providers using AI for decision automation

Expected certification outcome

Organizations can achieve ISO 42001:2023 certification through a third-party accredited audit, similar to ISO 27001. The certification confirms the organization has established, implemented, maintained, and continually improved an AI management system.

High-level goals and risk domains

ISO 42001 focuses on ensuring AI governance across:

  • AI ethics & accountability
  • Transparency & explainability
  • Safety & robustness
  • Data governance & quality
  • Human oversight
  • Model lifecycle management
  • Security & privacy impacts
  • Bias, fairness, and discrimination
  • Regulatory alignment (e.g., EU AI Act)

2. Scope & Applicability

Typical in-scope elements

  • AI models (ML models, LLMs, classical ML, and rule-based systems)
  • Training datasets, labeling processes, synthetic data pipelines
  • Deployment pipelines, MLOps environments
  • AI-supported business processes
  • Data science, ML engineering, AI ethics teams
  • Cloud infrastructure used for AI model development and hosting
  • Vendors providing AI components (APIs, datasets, tooling)

Common out-of-scope elements

  • Non-AI software components
  • General IT infrastructure not used for AI workloads
  • Internal corporate systems without AI involvement
  • Manual-only processes with no AI decision support

Assumptions & dependencies

  • A risk assessment methodology exists or will be established
  • AI lifecycle is documented (design → develop → train → validate → deploy → monitor → retire)
  • Responsible AI policy and governance board are in place
  • Data governance processes (quality, lineage, access control) are functioning
  • Technical teams can provide operational evidence (logs, model cards, datasets, metrics)

3. Core Principles or Pillars

ISO 42001 reflects the following key principles:

  1. AI Governance – Oversight, accountability, board-level involvement.
  2. Risk-based Approach – Identify and manage AI risks proportionally.
  3. Transparency & Explainability – Users must understand how AI affects decisions.
  4. Fairness & Bias Mitigation – Continuous monitoring of discriminatory effects.
  5. Human Oversight – Humans remain in control and empowered to intervene.
  6. Safety & Robustness – Model performance, reliability, drift monitoring.
  7. Security & Privacy – Protection of training data, models, and outputs.
  8. Continual Improvement – Monitoring, audits, corrective actions.
  9. Lifecycle Management – Structured processes across model development and deployment.

4. Control Breakdown

ISO 42001 follows the Annex SL high-level structure used by ISO 27001 and other management system standards, organised into major clause groups.

Below is a breakdown of the key clauses relevant to AIMS.

4.1 Clause 4 – Context of the Organization

Purpose

Ensure AI activities, stakeholders, and risks are fully understood.

Minimum expectations

  • Define AI systems in scope
  • Identify interested parties (users, regulators, impacted groups)
  • Document external obligations (laws, AI Act, ethics guidelines)

Evidence

  • Scope statement
  • Stakeholder analysis
  • Documented AIMS boundaries

Common gaps

  • Missing inventory of AI systems
  • Weak stakeholder mapping

Example

Maintaining an AI System Register listing all models and their risk classifications.
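
A minimal sketch of such a register follows, using an illustrative risk taxonomy loosely modelled on EU AI Act risk classes; the record fields and enum values are assumptions, not terms mandated by ISO 42001.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskClass(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class AiSystemRecord:
    name: str
    owner: str
    purpose: str
    risk_class: RiskClass
    datasets: list = field(default_factory=list)
    human_oversight: str = "review-before-action"  # illustrative default


def high_risk_systems(register: list) -> list:
    """Filter the register for systems that need the deepest controls and impact assessments."""
    return [r for r in register if r.risk_class is RiskClass.HIGH]


if __name__ == "__main__":
    register = [
        AiSystemRecord("churn-model", "data-science", "retention scoring", RiskClass.LIMITED),
        AiSystemRecord("credit-scoring", "risk-team", "loan decisions", RiskClass.HIGH, ["loans-2023"]),
    ]
    for system in high_risk_systems(register):
        print(system.name, system.risk_class.value)
```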

Related frameworks: EU AI Act (risk classes), NIST AI RMF.

4.2 Clause 5 – Leadership

Purpose

Establish accountability and top-level support.

Minimum expectations

  • Assign AIMS owner and AI Governance Board
  • Approve Responsible AI policy
  • Provide resources

Evidence

  • Governance charter
  • Meeting minutes
  • Policy approval records

Common gaps

  • No formal AI accountability structure

Example

Quarterly AI governance committee reviewing risks, incidents, and KPIs.

4.3 Clause 6 – Planning

Purpose

Risk assessment and mitigation for AI.

Minimum expectations

  • AI risk assessment methodology
  • Objectives for responsible AI
  • Risk treatment plans

Evidence

  • AI risk register
  • Control implementation plans

Common gaps

  • No bias-specific risk assessment
  • Model drift risks unmanaged

Example

Assessing all new models for safety, fairness, privacy, and robustness.

4.4 Clause 7 – Support

Purpose

Ensure documented information, competence, and awareness.

Minimum expectations

  • Training for data scientists, developers, and business users
  • Document management process
  • Communication plan for AI risks

Evidence

  • Training logs
  • Version-controlled documentation

Common gaps

  • Missing AI ethics training

Example

Annual training on responsible AI and bias mitigation.

4.5 Clause 8 – Operation

Purpose

Establish operational controls across the AI lifecycle.

Minimum expectations

  • Data governance practices
  • Model development & validation procedures
  • Transparency and explainability controls
  • Monitoring for performance, safety, and fairness
  • Incident management for AI failures
  • Human oversight mechanisms

Evidence

  • Model cards
  • Dataset documentation
  • Evaluation metrics
  • Deployment logs
  • Test results

Common gaps

  • Missing model documentation
  • No continuous monitoring for drift

Example

MLOps pipelines automatically logging input distributions and performance metrics.
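
A minimal sketch of that idea: after each batch of predictions, summarise the input feature distribution and a headline performance metric as a JSON line the monitoring pipeline can ingest. The feature names, metric, and log path are assumptions for illustration.

```python
import json
import statistics
from datetime import datetime, timezone


def log_batch_stats(model_name: str, feature_values: dict, accuracy: float,
                    path: str = "model_monitoring.jsonl") -> None:
    """Append per-batch input distribution summaries and accuracy to a JSONL log."""
    record = {
        "model": model_name,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "accuracy": accuracy,
        "features": {
            name: {"mean": statistics.fmean(values), "stdev": statistics.pstdev(values)}
            for name, values in feature_values.items()
        },
    }
    with open(path, "a", encoding="utf-8") as handle:
        handle.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    log_batch_stats(
        "churn-model",
        {"tenure_months": [3, 14, 27, 8], "monthly_spend": [20.5, 64.0, 31.2, 12.9]},
        accuracy=0.91,
    )
```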

Mapping

  • NIST AI RMF (Govern, Map, Measure, Manage)
  • ISO 27001 Annex A (data & access controls)

4.6 Clause 9 – Performance Evaluation

Purpose

Measure effectiveness of the AIMS.

Minimum expectations

  • Internal audits
  • KPI monitoring
  • Management review

Evidence

  • Audit reports
  • Performance dashboards

Common gaps

  • Insufficient monitoring of fairness metrics

Example

Quarterly fairness and safety review for all AI systems.

4.7 Clause 10 – Improvement

Purpose

Corrective actions and continual improvement.

Minimum expectations

  • Incident analysis
  • Remediation tracking
  • Periodic updates to AIMS processes

Evidence

  • Corrective action logs
  • Improvement plans

Common gaps

  • Failure to track AI incidents

Example

Root cause analysis after a model produces harmful or biased outputs.

5. Minimum Requirements

Mandatory documents

  • AIMS Scope
  • Responsible AI Policy
  • AI Governance Charter
  • AI Risk Assessment Methodology
  • AI System Register
  • Data Management Procedure
  • Model Development Lifecycle Procedure
  • AI Incident Response Procedure
  • Monitoring & Human Oversight Procedure
  • Bias & Fairness Assessment Procedure
  • Supplier Management Procedure (AI vendors)

Mandatory processes

  • Model lifecycle (design → develop → validate → deploy → monitor → retire)
  • AI risk assessment
  • Data governance and quality management
  • Human oversight approvals
  • AI incident management
  • Continuous monitoring & metrics tracking

Technical controls

  • Access control for model pipelines
  • Dataset versioning
  • Model versioning
  • Secure training environment
  • Monitoring dashboards
  • Bias-testing workflows
  • Model documentation (model cards)

Recurrence

  • Annual risk assessment
  • Quarterly governance board meetings
  • Annual training
  • Quarterly monitoring reports
  • Annual internal audit

6. Technical Implementation Guidance

Infrastructure security

  • Segregate AI workloads into dedicated environments
  • Enable encryption for training data, checkpoints, outputs
  • Use role-based access to MLOps pipelines

Access control

  • Fine-grained access to datasets and models
  • Enforce MFA and least privilege

Logging & monitoring

  • Log datasets used for training
  • Track model inputs, outputs, and performance drift
  • Set alert thresholds for anomalies
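
Building on the bullets above, one common way to turn logged input distributions into an alert is the Population Stability Index (PSI); the sketch below compares a training-time baseline against live inputs and flags drift past a threshold. The 0.2 threshold and 10 bins are widely used rules of thumb, not ISO 42001 requirements.

```python
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (training) sample and a live sample of a single feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_counts, _ = np.histogram(expected, bins=edges)
    actual_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions, flooring at a small value to avoid log/zero issues.
    expected_pct = np.clip(expected_counts / expected_counts.sum(), 1e-6, None)
    actual_pct = np.clip(actual_counts / actual_counts.sum(), 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
    live = rng.normal(loc=0.4, scale=1.1, size=5_000)  # simulated shifted inputs
    psi = population_stability_index(baseline, live)
    if psi > 0.2:  # commonly used drift-alert threshold
        print(f"ALERT: input drift detected (PSI={psi:.3f})")
```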

Vulnerability management

  • Scan AI libraries and dependencies
  • Evaluate risks of open-source models and datasets

Backup & DR

  • Backup training data, models, feature stores
  • Maintain model rollback capability

Encryption

  • Encrypt datasets in transit and at rest
  • Use secure enclaves for high-risk training data

Configuration baselines

  • Document model hyperparameters
  • Standardize training pipelines

Tooling suggestions

  • Model documentation: model cards
  • Data versioning: DVC, LakeFS
  • MLOps: MLflow, Kubeflow, SageMaker, Vertex AI
  • Monitoring: EvidentlyAI, Arize AI
  • Risk management: internally defined templates

Keep tooling choices vendor-neutral to avoid lock-in.

7. Policy & Procedure Requirements

Each core document, its purpose, key sections, and operational activities:

  • Responsible AI Policy: Ethical principles & commitments. Key sections: fairness, transparency, oversight. Operational activities: annual review, training.
  • AI Governance Charter: Leadership structure. Key sections: roles, responsibilities, committees. Operational activities: governance meetings.
  • AI Risk Assessment Procedure: Manage AI risks. Key sections: method, criteria, scoring. Operational activities: assessment for each model.
  • Data Governance Procedure: Ensure data quality & compliance. Key sections: data sourcing, quality checks. Operational activities: dataset approval workflow.
  • Model Development Process: Control the lifecycle. Key sections: design, training, validation. Operational activities: review checkpoints.
  • Model Deployment & Monitoring Procedure: Safe rollout & operations. Key sections: monitoring metrics, alerts. Operational activities: dashboards, log review.
  • AI Incident Management Procedure: Address failures. Key sections: detection, triage, RCA. Operational activities: incident response.
  • Bias & Fairness Procedure: Control discrimination risks. Key sections: tests, thresholds, remediation. Operational activities: regular fairness evaluation.
  • Vendor/Supplier Procedure: Manage AI-related vendor risks. Key sections: evaluation, monitoring. Operational activities: annual vendor assessments.

8. Audit Evidence & Verification

Auditors typically request:

Evidence types

  • Policies & procedures
  • Risk assessments
  • Model documentation (architecture, metrics, testing)
  • Dataset lineage & quality reports
  • Logs of model performance, drift, bias tests
  • Incident reports
  • Training evidence
  • Governance meeting minutes

Sampling

  • Selection of AI systems and datasets
  • Sampling of model versions
  • Random review of monitoring reports

Common remediation actions

  • Improve documentation
  • Add monitoring for bias or drift
  • Formalize human oversight
  • Better access control on model pipelines

9. Implementation Timeline

Typical duration: 3–9 months depending on AI maturity.

Key milestones

  1. AI System Inventory
  2. Risk Assessment
  3. Governance Setup
  4. Policies & Procedures
  5. Lifecycle Documentation
  6. Technical Controls Deployment
  7. Monitoring Framework Setup
  8. Internal Audit
  9. Certification Audit

Dependencies that cause delay

  • Unclear AI ownership
  • Poor dataset governance
  • Incomplete documentation
  • Lack of monitoring tooling

10. Ongoing Security BAU Requirements

  • Quarterly governance board review
  • Quarterly monitoring & fairness reports
  • Quarterly access reviews
  • Annual risk assessment
  • Annual policy review
  • Annual internal audit
  • Annual awareness training
  • Continuous drift/bias monitoring
  • AI incident logging
  • Vendor assessments

11. Maturity Levels

Minimum compliance

  • Basic governance
  • Documented lifecycle
  • Risk assessment performed
  • Monitoring in place for critical metrics

Intermediate maturity

  • Automated monitoring
  • Standardized pipelines
  • Routine fairness testing
  • Strong data lineage

Advanced maturity

  • Fully automated MLOps
  • Real-time bias detection
  • Continuous retraining pipelines
  • Predictive risk analytics
  • Tight integration with regulatory frameworks (AI Act)

12. FAQs

Is ISO 42001 only for high-risk AI systems?

No, but the level of control must match the risk level.

Do we need full MLOps to comply?

Not mandatory, but strong monitoring and versioning are required.

Does ISO 42001 overlap with ISO 27001?

Yes. ISO 27001 covers security; ISO 42001 adds AI-specific governance.

Do we need human oversight for all AI systems?

Oversight is required but depth depends on risk.

Does it apply to LLMs?

Yes. Any AI/ML system is covered.

13. Summary

ISO 42001 provides a comprehensive structure for responsible AI governance across technical, ethical, and operational domains. Organizations achieving certification demonstrate control over AI risks, strong governance, transparent decision-making, and ongoing monitoring. Implementation requires clear processes, documented lifecycle management, robust data controls, and continuous oversight. With proper preparation, organizations can integrate these requirements into existing security and compliance programs efficiently.

1. Overview

What PCI DSS Is

The Payment Card Industry Data Security Standard (PCI DSS) is a global security standard designed to protect cardholder data and reduce payment fraud. It is maintained by the PCI Security Standards Council (PCI SSC).

Who It Applies To

Any entity that stores, processes, or transmits cardholder data—or can impact the security of cardholder data—must comply. This includes merchants, service providers, payment processors, and hosting providers.

Certification/Attestation Outcome

Depending on transaction volume and role in the ecosystem, organizations complete one of the following:

  • ROC (Report on Compliance) & AOC (Attestation of Compliance) issued by a QSA
  • Self-Assessment Questionnaire (SAQ) with AOC for eligible smaller merchants

In addition, quarterly external vulnerability scans by an Approved Scanning Vendor (ASV) are required where applicable.

High-Level Goals & Risk Domains

  • Build and maintain secure systems
  • Protect cardholder data
  • Implement strong access controls
  • Maintain vulnerability management
  • Monitor and test networks
  • Enforce information security policies

2. Scope & Applicability

In-Scope

  • Cardholder Data Environment (CDE): systems storing, processing, transmitting CHD
  • Connected-to systems: authentication servers, logging platforms, monitoring tools
  • People: administrators, developers, customer support, SOC analysts
  • Processes: payment flows, vendor management, incident response
  • Data: cardholder data, sensitive authentication data (SAD)

Out-of-Scope

  • Systems with effective segmentation from the CDE
  • Internal business systems without CHD interaction
  • External SaaS not touching CDE data and fully segmented

Assumptions & Dependencies

  • Network segmentation is implemented and validated
  • Asset inventory is complete and accurate
  • Roles and responsibilities for PCI DSS are assigned
  • Secure payment flow architecture is documented

3. Core Principles / Pillars

PCI DSS aligns with:

  • Confidentiality – protect cardholder data at rest and in transit
  • Integrity – maintain secure configuration and prevent unauthorized changes
  • Availability – ensure reliable and monitored systems
  • Authentication & Authorization – strong identity controls
  • Monitoring & Auditability – complete audit trails
  • Governance & Risk Management – policies, oversight, and periodic reviews

4. Control Breakdown (PCI DSS v4.0 Categories)

Below is a summary for each of the 12 PCI DSS requirements.

1. Install and Maintain Network Security Controls (NSCs)

Purpose: Protect CDE with firewalls/NSCs.

Minimum Expectations: Documented firewall rules, segmentation validation, deny-all baseline.

Evidence: Firewall configs, rule review logs, architecture diagrams.

Common Gaps: Allow-all rules, outdated diagrams.

Example: NSC rules allowing only HTTPS traffic into web tier and restricted outbound traffic.
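A minimal sketch of that rule pattern for an AWS security group using boto3, assuming security groups already start from a default-deny inbound posture; the group ID and CIDR are placeholders, and equivalent constructs exist for other firewalls and clouds.

```python
import boto3


def allow_https_only(security_group_id: str) -> None:
    """Permit inbound HTTPS (443) to the web tier; all other inbound traffic stays denied by default."""
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId=security_group_id,
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 443,
                "ToPort": 443,
                "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS into web tier"}],
            }
        ],
    )
    # Restricting outbound traffic would additionally require removing the default
    # allow-all egress rule and authorizing only the flows documented for the CDE.


if __name__ == "__main__":
    # Placeholder group ID for the CDE web tier.
    allow_https_only("sg-0123456789abcdef0")
```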

2. Apply Secure Configurations to All System Components

Purpose: Harden systems and remove insecure defaults.

Minimum Expectations: CIS baselines, removed default accounts, patch settings.

Evidence: Baseline configs, config scans, screenshots.

Common Gaps: Default passwords, open ports.

Example: Using CIS Level 1 baselines for Linux and Windows.

3. Protect Stored Account Data

Purpose: Avoid storing CHD unless required and protect retained data.

Minimum Expectations: Truncation, hashing, tokenization, encryption.

Evidence: Data flow diagrams, key-management documentation, storage scan results.

Common Gaps: Unapproved storage, weak key management.

Example: Tokenization service replacing the PAN with a non-sensitive surrogate token.
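A toy sketch of the token-vault idea follows: downstream systems only ever handle the surrogate token, while the PAN stays inside the vault boundary. This is illustration only; a real vault encrypts its storage, runs in a segmented environment, and is itself in PCI scope.

```python
import secrets


class TokenVault:
    """Toy in-memory token vault mapping random surrogate tokens to PANs (illustration only)."""

    def __init__(self) -> None:
        self._store = {}

    def tokenize(self, pan: str) -> str:
        token = "tok_" + secrets.token_urlsafe(16)  # random value carrying no PAN information
        self._store[token] = pan                     # a real vault encrypts this at rest
        return token

    def last_four(self, token: str) -> str:
        """Expose only the truncated PAN for receipts and support workflows."""
        return self._store[token][-4:]


if __name__ == "__main__":
    vault = TokenVault()
    token = vault.tokenize("4111111111111111")  # well-known test PAN
    print(token, "ends in", vault.last_four(token))
```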

4. Protect Cardholder Data in Transit

Purpose: Secure CHD across open networks.

Minimum Expectations: TLS 1.2+, secure cipher suites.

Evidence: TLS scans, configs.

Common Gaps: Legacy protocols enabled.

Example: Forced HTTPS on all payment endpoints.
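One way to spot-check that an endpoint will negotiate only modern TLS is to connect with a client floor of TLS 1.2 and record the negotiated version, as sketched below; the hostname is a placeholder for each payment endpoint in scope.

```python
import socket
import ssl


def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect with TLS 1.2 as the minimum client version and return what was negotiated."""
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"


if __name__ == "__main__":
    # Placeholder endpoint; run against every payment endpoint in scope.
    print(negotiated_tls_version("payments.example.com"))
```

A fuller check would also attempt connections with only legacy protocols enabled and confirm the server rejects them, which is what ASV scans and TLS scanners validate.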

5. Protect Systems and Networks from Malicious Software

Purpose: Detect and prevent malware infections.

Minimum Expectations: Next-gen AV/EDR, logging, controlled updates.

Evidence: EDR screenshots, deployment reports.

Common Gaps: Servers without AV.

Example: EDR agent installed on all CDE servers.

6. Develop and Maintain Secure Systems and Software

Purpose: Manage vulnerabilities and secure development.

Minimum Expectations: Patch SLAs, secure SDLC, code reviews, SAST/DAST.

Evidence: Jira tickets, scan reports.

Common Gaps: Missing patch timelines.

Example: Critical vulnerabilities patched within 30 days.

7. Restrict Access to System Components and Cardholder Data

Purpose: Enforce least privilege.

Minimum Expectations: RBAC, documented approvals.

Evidence: Access reviews, access requests.

Common Gaps: Shared accounts.

Example: CDE admin access requires documented approval and MFA.

8. Identify Users and Authenticate Access

Purpose: Ensure unique identification and strong auth.

Minimum Expectations: MFA for ALL access to CDE, password policy.

Evidence: MFA configs, IAM settings.

Common Gaps: Service accounts unmanaged.

Example: SSO + MFA for system administration.

9. Restrict Physical Access to Cardholder Data

Purpose: Prevent physical breaches.

Minimum Expectations: Badge access, cameras, visitor logs.

Evidence: Access logs, camera retention policies.

Common Gaps: Missing visitor badge tracking.

Example: Data center access controlled by biometric readers.

10. Log and Monitor All Access

Purpose: Ensure traceability.

Minimum Expectations: Centralized logging, SIEM alerts, retention policy.

Evidence: Log samples, SIEM reports.

Common Gaps: Missing logs from endpoints.

Example: CloudTrail + SIEM for all CDE components.

11. Test Security of Systems and Networks

Purpose: Identify weaknesses.

Minimum Expectations: Quarterly ASV scans, annual pen tests, segmentation tests.

Evidence: Scan results, remediation tickets.

Common Gaps: Missed retests.

Example: Annual external penetration test covering all payment flows.

12. Support Information Security with Governance

Purpose: Provide overarching governance and policy structure.

Minimum Expectations: Security policies, training, incident response, risk assessments.

Evidence: Policies, training logs.

Common Gaps: Outdated policies.

Example: Annual ISO-aligned security awareness training.

5. Minimum Requirements

Mandatory Documents

  • Information Security Policy
  • Data Retention and Disposal Policy
  • Access Control Policy
  • Incident Response Plan
  • Secure Development Policy
  • Change Management Procedure
  • Logging & Monitoring Policy
  • Network Diagram (CDE + segmentation diagram)

Mandatory Processes

  • Quarterly access reviews
  • Quarterly vulnerability scans (ASV + internal)
  • Annual risk assessment
  • Annual pen test
  • Segmentation testing
  • Patch management workflow
  • Incident response exercise

Technical Controls

  • MFA across CDE
  • EDR on all endpoints
  • TLS 1.2+ enforced
  • Encryption of CHD at rest
  • Hardened configurations
  • Centralized logging and SIEM

Governance Activities

  • Annual policy review
  • Training for all personnel
  • Vendor assessments

6. Technical Implementation Guidance

Infrastructure Security

  • Segmented network: separate CDE from corporate LAN
  • Use cloud-native security groups, NACLs, WAFs
  • Deny-all inbound default rule

Access Control

  • SSO + MFA
  • RBAC with least privilege
  • Access recertification every 90 days

Logging & Monitoring

  • Send logs to SIEM
  • Implement alerting on admin actions, failed logins, network anomalies
  • Retain logs 12 months (3 months immediately available)

Vulnerability Management

  • Weekly scans for internal CDE systems
  • Patch priority: critical within 30 days, high within 60

Backup & DR

  • Encrypted backups
  • Quarterly restore tests
  • Offsite replication

Encryption

  • AES-256 for data at rest
  • TLS 1.2/1.3 only
  • KMS/HSM preferred

Configuration Baselines

  • CIS Level 1 or Level 2 where applicable
  • Automated configuration checks

Tooling Suggestions (Neutral)

  • SIEM: Cloud-native tools, ELK, Splunk
  • Vulnerability scanning: OpenVAS, Nessus
  • IAM: Azure AD, Okta
  • EDR: CrowdStrike-type tools, Defender, SentinelOne
  • Ticketing: Jira, ServiceNow

7. Policy & Procedure Requirements

Core Policy Set

  • Information Security Policy
  • Access Control Policy
  • Vulnerability Management Policy
  • Secure SDLC Policy
  • Logging & Monitoring Policy
  • Incident Response Plan
  • Business Continuity & Disaster Recovery Plan
  • Change Management Procedure
  • Data Retention & Disposal Policy
  • Vendor Management Policy

Expected Structure

  • Purpose
  • Scope
  • Responsibilities
  • Control Requirements
  • Step-by-step Procedures
  • Review Frequency

Operational Activities

  • Following change control steps
  • Logging access approvals
  • Monitoring alerts and escalations
  • Conducting training and awareness

8. Audit Evidence & Verification

Common Auditor Requests

  • Network diagrams
  • CHD data flow diagrams
  • Firewall rule reviews
  • Log exports
  • Patch reports
  • Pen test reports
  • Ticket evidence for change management
  • Access review documentation
  • Incident response testing results

Evidence Formats

  • Screenshots
  • PDF reports
  • Raw logs
  • Ticketing exports
  • Meeting notes

Sampling

  • 3–25 samples per control
  • More for high-risk processes or large environments

Common Remediations

  • Tightening firewall rules
  • Updating SDLC documentation
  • Improving evidence retention
  • Implementing MFA across all CDE assets

9. Implementation Timeline Considerations

Typical Duration: 3–9 months

Key Milestones

  1. Scoping & data flow mapping
  2. Network segmentation validation
  3. Policy + procedure development
  4. Technical hardening
  5. Logging & monitoring setup
  6. Vulnerability management implementation
  7. Evidence gathering
  8. QSA readiness assessment
  9. ROC/AOC completion

Dependencies That Cause Delays

  • Poor scoping
  • Lack of network diagrams
  • Incomplete patching
  • Missing logs
  • Undocumented process flows

10. Ongoing Security BAU Requirements

  • Quarterly access reviews
  • Quarterly ASV scans
  • Quarterly internal vulnerability scans
  • Annual pen test
  • Annual training
  • Annual incident response test
  • Annual policy review
  • Monthly patch management
  • Quarterly segmentation tests
  • Vendor risk assessments
  • Asset inventory maintenance

11. Maturity Levels

Minimum Compliance

  • Policies in place
  • Basic segmentation
  • Required scans and testing
  • Manual evidence collection

Intermediate

  • Automated patching
  • Structured IAM and RBAC
  • SIEM monitoring with triage playbooks
  • Automated configuration management

Advanced

  • Zero-trust architecture
  • Automated compliance evidence collection
  • Continuous vulnerability detection
  • SOAR-assisted incident response
  • Encryption with HSM-backed key rotation

12. FAQs

Do we need PCI DSS if we only use Stripe/PayPal?

Yes, but the scope is greatly reduced; fully outsourced payment flows typically qualify for SAQ A.

Is storing CHD allowed?

Yes, but strongly discouraged. Must follow strict encryption and retention rules.

Do all systems worldwide need to comply?

Only systems in or affecting the CDE.

Do we need an annual pen test?

Yes, including segmentation testing.

Do we need MFA?

Yes—mandatory for all access to CDE and for all admin access.

13. Summary

PCI DSS provides a structured framework to secure cardholder data and reduce payment fraud. Achieving compliance requires accurate scoping, architectural clarity, strong technical controls, documented policies, and recurring BAU activities. Organizations that implement segmentation, automate evidence capture, and embed security into operations reach compliance faster and maintain it more efficiently.

1. Overview

What it is

SOC 1 (System and Organization Controls 1) is an audit framework developed by the AICPA focused on controls that are relevant to a service organization’s Internal Control over Financial Reporting (ICFR).

Who it applies to

Organizations that process, store, transmit, or impact customer financial reporting data—commonly payroll providers, payment processors, benefits administration services, loan/credit processors, investment platforms, and financial system hosting providers.

Outcome

A formal SOC 1 Type I (design evaluation at a point in time) or Type II (design and operating effectiveness over a period, usually 12 months) attestation report issued by an independent CPA firm.

High-level goals & risk domains

  • Reliability of financial transaction processing
  • Accuracy, completeness, and timeliness of financial data
  • Integrity of system logic impacting financial reporting
  • Access and change control supporting financial applications
  • Operational controls directly affecting financial reporting accuracy

2. Scope & Applicability

Common in-scope elements

  • Financial applications impacting customer financial statements
  • Transaction processing systems
  • Databases storing financial records
  • Interfaces, APIs, and data flows used in financial reporting
  • Change management processes for financial systems
  • Access provisioning for financial operations
  • Relevant infrastructure (cloud services, servers, networks)
  • IT general controls that support ICFR

Common out-of-scope elements

  • Systems with no financial reporting impact
  • HR systems unrelated to financial statements
  • Marketing, CRM, and sales tools
  • Physical office environments not impacting financial systems
  • Non-production environments (unless relevant for change control)

Assumptions & dependencies

  • Management asserts which systems impact ICFR
  • Boundaries of financial systems are clearly documented
  • Third-party service dependencies are identified
  • Roles and responsibilities for ICFR are defined
  • Risk assessment for financial reporting has been completed

3. Core Principles / Pillars

SOC 1 relies on IT general control pillars relevant to ICFR:

  • Control Environment – governance, roles, responsibilities
  • Risk Assessment – financial reporting related risks
  • Information & Communication – policies, reporting lines, documentation
  • Control Activities – access control, change management, operations
  • Monitoring – internal reviews, audits, system monitoring

These align with COSO principles as the foundational control framework.

4. Control Breakdown

SOC 1 does not prescribe specific controls. Instead, control categories are derived from COSO and industry ICFR practices. Below is a breakdown of typical control families.

4.1 Control Environment

Purpose

Establish tone at the top and governance for ICFR.

Minimum expectations

  • Documented ICFR roles
  • Management oversight
  • Policies approved and communicated

Evidence auditors look for

  • Org charts
  • Policy documents
  • Job descriptions
  • Governance meeting minutes

Common gaps

  • Policies not approved or outdated
  • Undefined ICFR responsibilities

Example implementation

  • Annual approval of financial-impacting policies by leadership.

4.2 Logical Access Controls

Purpose

Ensure only authorized personnel can access financial systems.

Minimum expectations

  • Unique user accounts
  • Role-based access
  • Periodic access reviews
  • Immediate removal of leavers
  • MFA for administrative access

Evidence auditors look for

  • Access lists
  • Access requests/approvals
  • Termination logs
  • Access review results

Common gaps

  • Lack of documented approvals
  • Incomplete offboarding

Implementation example

  • Quarterly access reviews of financial systems with logged approvals.
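
A minimal sketch of the review mechanics, assuming exports of the HR roster and the financial system's user list as plain Python collections; in practice these would come from the HRIS and the IAM platform, and the findings would be logged with approvals in the ticketing system.

```python
from datetime import date, timedelta


def access_review_findings(system_users: dict, active_employees: set,
                           today: date, max_review_age_days: int = 90) -> dict:
    """Flag accounts belonging to leavers and accounts not reviewed within the cycle."""
    stale_cutoff = today - timedelta(days=max_review_age_days)
    return {
        "leaver_accounts": [u for u in system_users if u not in active_employees],
        "stale_reviews": [u for u, last_review in system_users.items() if last_review < stale_cutoff],
    }


if __name__ == "__main__":
    findings = access_review_findings(
        system_users={"alice": date(2024, 5, 1), "bob": date(2023, 11, 2)},
        active_employees={"alice"},
        today=date(2024, 6, 1),
    )
    print(findings)  # bob appears as both a leaver and a stale review
```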

4.3 Change Management

Purpose

Ensure changes to financial systems are authorized, tested, reviewed, and documented.

Minimum expectations

  • Formal change request workflow
  • Testing evidence
  • Code reviews
  • Segregation of duties (SoD)
  • Production deployment approvals

Evidence auditors look for

  • Change tickets
  • Pull/merge request logs
  • Testing results
  • Deployment records

Common gaps

  • Missing approvals
  • Developers deploying changes without review

Implementation example

  • Git-based workflow requiring review by a senior engineer before merging into production.

4.4 System Operations

Purpose

Ensure financial systems operate reliably and exceptions are handled.

Minimum expectations

  • Monitoring of job failures
  • Ticketing for incidents
  • Backup operations
  • Capacity monitoring

Evidence auditors look for

  • Incident reports
  • Log exports
  • Backup logs
  • Monitoring dashboards

Common gaps

  • No documented incident postmortems
  • Inconsistent backup validation

Implementation example

  • Automated monitoring alerting operations teams for failed financial data processing jobs.

4.5 Data Integrity & Processing Controls

Purpose

Ensure financial data is complete, accurate, and processed correctly.

Minimum expectations

  • Input validation
  • Reconciliation procedures
  • Exception handling
  • Batch processing controls

Evidence auditors look for

  • Reconciliation logs
  • Data validation rules
  • Exception reports

Common gaps

  • No evidence of reconciliation
  • Manual processes undocumented

Implementation example

  • Daily reconciliation of financial transactions between system A and system B.
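
A minimal sketch of such a reconciliation between two hypothetical transaction exports keyed by transaction ID; a production job would read database or CSV exports, run on a schedule, and raise exceptions into the ticketing system so auditors can sample them.

```python
from decimal import Decimal


def reconcile(system_a: dict, system_b: dict) -> dict:
    """Compare transaction amounts keyed by transaction ID and report exceptions."""
    shared_ids = set(system_a) & set(system_b)
    return {
        "missing_in_b": sorted(set(system_a) - set(system_b)),
        "missing_in_a": sorted(set(system_b) - set(system_a)),
        "amount_mismatches": [
            (txn_id, system_a[txn_id], system_b[txn_id])
            for txn_id in sorted(shared_ids)
            if system_a[txn_id] != system_b[txn_id]
        ],
    }


if __name__ == "__main__":
    exceptions = reconcile(
        {"T1": Decimal("100.00"), "T2": Decimal("55.10")},
        {"T1": Decimal("100.00"), "T2": Decimal("55.15"), "T3": Decimal("9.99")},
    )
    print(exceptions)  # T2 amounts differ; T3 is missing from system A
```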

4.6 Vendor / Third-Party Management

Purpose

Ensure third parties supporting ICFR are controlled.

Minimum expectations

  • Vendor assessments
  • SOC reports review
  • Risk classification

Evidence auditors look for

  • SOC 1/SOC 2 reports
  • Vendor risk assessments
  • Remediation follow-ups

Common gaps

  • Not reviewing vendor SOC reports
  • No documented vendor inventory

Implementation example

  • Annual review of cloud provider SOC reports with documented findings.

5. Minimum Requirements

Mandatory documents

  • ICFR scope and system boundary documentation
  • Access control policy
  • Change management policy
  • Incident management policy
  • Backup & restore procedure
  • Vendor management policy
  • Monitoring and operations procedures

Mandatory processes

  • Access provisioning & deprovisioning
  • Formal change control workflow
  • ICFR risk assessment
  • Reconciliation of financial data
  • Incident logging and analysis

Technical controls

  • MFA
  • Logging & monitoring
  • Segregated environments
  • Regular backups

Recurrence requirements

  • Annual risk assessment
  • Quarterly access reviews
  • Annual policy reviews
  • Continuous monitoring
  • Annual vendor SOC report reviews

6. Technical Implementation Guidance

Infrastructure security

  • Hardened servers, secure cloud configurations
  • Patch management with defined SLAs

Access control

  • Role-based access for financial systems
  • SSO integrated with MFA

Logging & monitoring

  • Centralized logging for financial apps
  • Alerting for failed transactions

Vulnerability management

  • Monthly scanning
  • Risk-based remediation windows

Backups & DR

  • Daily backups
  • Quarterly restoration tests

Encryption

  • Data in transit via TLS 1.2+
  • Data at rest via provider-managed keys

Baseline configurations

  • CIS-based baselines for servers and cloud

Tooling suggestions (neutral)

  • SIEM
  • IAM platform
  • Ticketing/Change management system
  • Code repository & CI/CD pipeline
  • Cloud configuration scanning tools

7. Policy & Procedure Requirements

Common required documents and their purpose:

  1. Access Control Policy – governs access provisioning and reviews
  2. Change Management Policy – defines change workflows
  3. Incident Management Policy – defines handling of financial-impacting incidents
  4. Backup & Recovery Policy – ensures recoverability
  5. Vendor Management Policy – governs third-party risk
  6. Monitoring & Logging Policy – supports operational and financial control monitoring
  7. Risk Management Policy – covers ICFR risk assessment

Each must define:

  • Purpose
  • Scope
  • Roles & responsibilities
  • Procedures
  • Review cadence

8. Audit Evidence & Verification

Typical evidence requested

  • System exports (access lists, logs, tickets)
  • Screenshots with timestamps
  • Change tickets
  • Incident reports
  • Reconciliation documentation

Formats

  • PDF exports
  • Screenshots
  • CSV data extracts
  • Architecture diagrams

Sampling

  • Auditors select samples from the review period based on risk
  • Most common sampling: 25–40 items per control

Common remediation actions

  • Implementing missing approvals
  • Fixing access review findings
  • Improving evidence storage
  • Strengthening reconciliation documentation

9. Implementation Timeline Considerations

Typical duration

3–6 months to prepare for SOC 1 Type I

12 months of operational evidence for Type II

Key milestones

  1. Scoping & ICFR mapping
  2. Gap assessment
  3. Policy and process development
  4. Evidence readiness
  5. Type I audit
  6. Control operation period
  7. Type II audit

Dependencies

  • Identifying in-scope systems
  • Clear process ownership
  • Availability of transaction data

10. Ongoing BAU Requirements

  • Quarterly access reviews
  • Annual ICFR risk assessment
  • Annual policy reviews
  • Regular reconciliation checks
  • Continuous incident reporting
  • Annual vendor SOC report review
  • Continuous monitoring of financial systems
  • Change management adherence

11. Maturity Levels

Minimum compliance

  • Basic policies
  • Manual access controls
  • Manual reconciliations
  • Logging with limited monitoring

Intermediate maturity

  • Partial automation
  • Defined workflows with approvals
  • Centralized logging
  • Documented reconciliations

Advanced/automated

  • Fully automated provisioning via IAM
  • Continuous audit logging
  • Automated integration checks
  • Predictive monitoring

12. FAQs

Q: Is SOC 1 the same as SOC 2?

No—SOC 1 focuses on financial reporting; SOC 2 focuses on security, availability, processing integrity, confidentiality, privacy.

Q: Who needs a SOC 1?

Any service provider whose system impacts customer financial reporting.

Q: Can a company be “certified”?

No—SOC 1 results in an attestation report, not a certification.

Q: What is the difference between Type I and II?

Type I: design at a point in time

Type II: design + operating effectiveness over a period (usually 12 months)

13. Summary

SOC 1 attestation evaluates whether your controls adequately support Internal Control over Financial Reporting. Achieving compliance requires clear scope definition, strong IT general controls, documented financial data processing controls, and consistent operational evidence. With structured governance, disciplined change and access management, and continuous monitoring, an organization can achieve and sustain SOC 1 Type II readiness.

1. Overview

What SOC 2 Is:

SOC 2 (System and Organization Controls 2) is an attestation standard developed by the AICPA that evaluates an organization’s controls related to the Trust Services Criteria (TSC). It focuses on how service organizations safeguard customer data and ensure the effectiveness of related controls.

Who It Applies To:

Any service organization handling customer data, particularly SaaS companies, cloud services, data processors, and organizations providing outsourced services that impact customer security, availability, confidentiality, processing integrity, or privacy.

Outcome:

An independent auditor’s SOC 2 Type I or SOC 2 Type II attestation report:

  • Type I: Controls designed and implemented at a point in time.
  • Type II: Controls designed, implemented, and operating effectively over a period (typically 3–12 months).

High-Level Goals & Domains

SOC 2 is structured around the Trust Services Criteria (TSC):

  • Security (mandatory)
  • Availability
  • Confidentiality
  • Processing Integrity
  • Privacy

2. Scope & Applicability

Typical In-Scope Elements

  • Production infrastructure hosting customer data
  • SaaS platforms and APIs
  • Identity & access management systems
  • Security tooling (SIEM, endpoint protection, vulnerability scanners)
  • Operational processes: change management, incident response, risk management
  • Employees with access to production systems
  • Vendors supporting production or customer data processing

Common Out-of-Scope Elements

  • Internal corporate systems not connected to service delivery
  • Marketing websites (unless tied to customer authentication)
  • HR systems (unless processing customer data)
  • Personal devices outside corporate control

Assumptions & Dependencies

  • Cloud infrastructure is properly configured (AWS, GCP, Azure)
  • Adequate logging capability
  • Defined organizational structure with assigned roles
  • Policies and procedures are formally documented
  • Vendors provide necessary SOC, ISO, or security assurances

3. Core Principles (Trust Services Criteria)

 

  • Security (Common Criteria): Protect systems against unauthorized access and misuse
  • Availability: Ensure the system is operational and accessible per commitments
  • Confidentiality: Protect sensitive information from unauthorized disclosure
  • Processing Integrity: Ensure system processing is complete, accurate, and timely
  • Privacy: Manage personal data in line with privacy commitments

4. Control Breakdown

Below is a structured summary of the main SOC 2 Common Criteria (CC) categories and additional criteria.

CC1 — Control Environment

Purpose: Establish governance, structure, and top-level accountability.

Minimum Expectations:

  • Defined roles and responsibilities
  • Code of conduct
  • Background checks
  • Organizational oversight

Evidence: Org charts, HR files, signed code of conduct.

Common Gaps: No documented roles; no periodic HR checks.

Example: Annual leadership review of organizational responsibilities.

Mapping: Similar to ISO 27001 Clauses 5 & 7.

CC2 — Communication & Information

Purpose: Ensure relevant security information is communicated internally and externally.

Minimum Expectations:

  • Policies communicated to employees
  • Secure communication channels
  • Incident notifications defined

Evidence: Policy acknowledgment logs, training records.

Common Gaps: Policies not accessible to employees.

Example: Using an internal wiki with controlled access.

Mapping: ISO 27001 Clause 7 & Annex A.6.

CC3 — Risk Assessment

Purpose: Identify and assess risks to business objectives.

Minimum Expectations:

  • Formal risk assessment
  • Risk treatment plans
  • Annual or trigger-based updates

Evidence: Risk register, methodology, meeting minutes.

Common Gaps: Risk assessment not updated annually.

Example: Quarterly risk steering committee reviews.

Mapping: ISO 27001 Clause 6.

CC4 — Monitoring Activities

Purpose: Regular evaluation of controls.

Minimum Expectations:

  • Internal audits
  • Management reviews
  • Control testing

Evidence: Audit reports, review notes.

Common Gaps: No internal audit function.

Example: Quarterly internal control testing.

Mapping: ISO 27001 Clauses 9 & 10.

CC5 — Control Activities (Change Management & Logical Access)

Purpose: Apply technical and procedural safeguards.

Minimum Expectations:

  • Change management procedures
  • Access provisioning & de-provisioning
  • MFA and least privilege

Evidence: Ticket logs, access review reports.

Common Gaps: Incomplete de-provisioning.

Example: Automated JIT access workflows (see the sketch below).

Mapping: ISO 27001 Annex A.8, A.9, A.12.
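
A minimal sketch of the just-in-time access idea referenced in the example above: each grant carries an expiry, and a periodic job revokes anything past it. The grant store and the revoke step are placeholders for whatever IAM tooling is in use.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class AccessGrant:
    user: str
    role: str
    expires_at: datetime


def grant_jit_access(user: str, role: str, ttl_minutes: int = 60) -> AccessGrant:
    """Issue a time-boxed grant; the approval or ticket reference would be recorded alongside it."""
    return AccessGrant(user, role, datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes))


def expired_grants(grants: list, now: datetime) -> list:
    """Return grants due for revocation via the IAM platform's API (placeholder step)."""
    return [g for g in grants if g.expires_at <= now]


if __name__ == "__main__":
    grants = [grant_jit_access("alice", "prod-db-readonly", ttl_minutes=30)]
    later = datetime.now(timezone.utc) + timedelta(hours=2)
    for g in expired_grants(grants, later):
        print(f"revoke {g.role} from {g.user}")  # call the IAM revoke API here
```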

Additional TSC Categories

Availability (A1)

Purpose: Ensure uptime and system resilience.

Evidence: Uptime metrics, DR tests, backup logs.

Common Gaps: No DR testing.

Mapping: ISO 27001 Annex A.17.

Confidentiality (C1)

Purpose: Protect sensitive information.

Expectations:

  • Data classification
  • Encryption
  • Access control

Mapping: ISO 27001 Annex A.5, A.8.

Processing Integrity (PI1)

Purpose: Ensure correct and reliable processing.

Expectations:

  • Input validation
  • Monitoring of data processing

Mapping: No direct ISO 27001 equivalent.

Privacy (P1–P8)

Purpose: Manage personal data per privacy commitments.

Expectations:

  • Consent, notice, access rights
  • Retention & disposal

Mapping: ISO 27701, GDPR principles.

5. Minimum Requirements (Non-Negotiable)

Mandatory Documents

  • All policies (security, access, incident, DR, change mgmt, etc.)
  • Risk assessment & treatment plan
  • Asset inventory
  • Data flow diagrams
  • Vendor management documentation
  • Incident logs
  • DR plan & test results

Mandatory Processes

  • Background checks
  • Onboarding/offboarding
  • Quarterly access reviews
  • Change management
  • Patch management
  • Vulnerability scanning
  • Security awareness training

Technical Controls

  • MFA everywhere possible
  • Encryption in transit and at rest
  • Logging, monitoring, alerting
  • Endpoint protection
  • Backups & recovery testing

Recurrence

  • Annual risk assessment
  • Quarterly access reviews
  • Annual policy reviews
  • Annual incident response test
  • Annual DR test
  • Continuous vulnerability scanning

6. Technical Implementation Guidance

Infrastructure Security

  • Harden cloud resources using CIS Benchmarks
  • Apply security groups and least-privilege network rules

Access Control

  • Enforce SSO + MFA
  • Role-based access
  • Automated de-provisioning

Logging & Monitoring

  • Centralized log aggregation
  • Alerts for critical events
  • 90+ days online retention

Vulnerability Management

  • Weekly automated scans
  • SLA-based remediation
  • Patch cadence aligned with risk

Backup & DR

  • Immutable backups
  • Recovery time objectives defined
  • Annual DR failover test

Encryption

  • TLS 1.2+ for data in transit
  • KMS-managed keys for at-rest encryption

Configuration Baselines

  • CIS-aligned templates
  • Guardrails via IaC or policies

Tooling Suggestions (Neutral)

  • SIEM: Any cloud-native or third-party
  • Endpoint: Any EDR
  • IAM: Cloud-native IAM, SCIM provisioning
  • Vulnerability scanning: Cloud-native scanners or enterprise-grade tools

7. Policy & Procedure Requirements

Typical documents include:

  • Information Security Policy
  • Access Control Policy
  • Asset Management Policy
  • Logging & Monitoring Policy
  • Incident Response Plan
  • Disaster Recovery & Business Continuity Plan
  • Change Management Policy
  • Vendor Risk Management Policy
  • Encryption Policy
  • Data Retention & Disposal Policy
  • Privacy Policy (if P criteria included)

Each document should define:

  • Purpose
  • Scope
  • Roles & responsibilities
  • Processes & procedures
  • Controls and expectations
  • Review frequency

8. Audit Evidence & Verification

What Auditors Request

  • System descriptions
  • Architecture diagrams
  • Policy acknowledgements
  • Access review logs
  • Change management tickets
  • Incident records
  • Security tool exports
  • HR onboarding/offboarding evidence

Formats

  • Screenshots
  • Tickets
  • Logs
  • CSV exports
  • Meeting notes

Sampling

  • Typically 25–40 items per control area
  • Larger samples for access reviews and change management

Common Remediation Items

  • Missing approval workflows
  • Poor evidence timestamps
  • Unclear ownership
  • Inconsistent access provisioning

9. Implementation Timeline Considerations

Typical Duration

  • Type I: 4–12 weeks
  • Type II: Additional 3–12 months for audit window

Milestones

  1. Scoping
  2. Gap assessment
  3. Remediation
  4. Documentation
  5. Evidence collection
  6. Audit readiness
  7. Audit fieldwork

Dependencies

  • Policy completion
  • Hiring or access to security roles
  • Cloud baseline maturity
  • Vendor documentation availability

10. Ongoing Security BAU Requirements

  • Quarterly access reviews
  • Continuous vulnerability scans
  • Monthly patching
  • Annual security training
  • Annual risk assessment
  • Annual DR and IR tests
  • Vendor reviews
  • Asset inventory updates
  • Logging and alert monitoring

11. Maturity Levels

Minimum Compliance

  • Policies in place
  • Controls implemented
  • Basic tooling
  • Manual reviews

Intermediate

  • Automated provisioning
  • Mature monitoring
  • Documented procedures

Advanced

  • Full automation (IAM, CI/CD, monitoring)
  • Continuous compliance tooling
  • Predictive risk analytics

12. FAQs

Is SOC 2 mandatory?

No, but customers often require it.

How long is a SOC 2 report valid?

Typically 12 months.

Do we need all five Trust Services Criteria?

Only Security is mandatory; others are optional.

Is SOC 2 the same as ISO 27001?

No, but there is close control overlap.

Can we use a compliance automation tool?

Yes, but it does not replace preparation or audit evidence.

13. Summary

SOC 2 provides assurance that a service organization protects customer data and maintains effective operational controls. Achieving SOC 2 compliance requires well-defined governance, documented processes, technical safeguards, and consistent operational discipline. With proper scoping, systematic implementation, and continuous monitoring, organizations can successfully obtain and maintain SOC 2 Type I or Type II attestation.