Incident Response Plan¶
MazeVault Security Incident Detection, Response, and Recovery Procedures
Document ID: MV-LEG-007
Version: 1.0.0
Classification: Confidential
Owner: Chief Information Security Officer (CISO)
Last Updated: 2026-05-01
Review Cycle: Annual (and after each significant incident)
Approved By: CEO / Board of Directors
Regulatory Basis: Act No. 264/2025 Sb. §15, NIS2 Directive Art. 23, DORA Art. 17-19, ISO/IEC 27001:2022 A.5.24-A.5.28
1. Purpose and Scope¶
1.1 Purpose¶
This Incident Response Plan ("Plan") establishes the formal procedures for detecting, reporting, responding to, recovering from, and learning from security incidents affecting MazeVault systems, services, and customer data. The Plan ensures a coordinated, efficient, and legally compliant response to all security events, minimizing impact to operations and stakeholders.
1.2 Scope¶
This Plan applies to:
- All security incidents affecting the MazeVault platform, infrastructure, and services
- All customer data processed, stored, or transmitted by MazeVault systems
- All MazeVault personnel, contractors, and third-party service providers
- All environments: production (PRO), non-production (NPR), development, and staging
- All deployment models: cloud-hosted, on-premise gateway installations, and hybrid configurations
- All connected infrastructure: backend services, gateways, agents, databases, and supporting systems
1.3 Definitions¶
| Term | Definition |
|---|---|
| Security Event | Any observable occurrence in a system or network that may indicate a security policy violation |
| Security Incident | A confirmed security event that compromises the confidentiality, integrity, or availability of information assets |
| Data Breach | An incident resulting in unauthorized access to, or disclosure of, personal data or customer secrets |
| Incident Response Team (IRT) | The designated group responsible for managing incident response activities |
| Incident Commander (IC) | The individual with overall authority and responsibility during an incident |
2. Incident Classification¶
2.1 Severity Classification Matrix¶
| Severity | Description | Examples | Initial Response Time | Escalation |
|---|---|---|---|---|
| Critical (P1) | Imminent or confirmed compromise of customer data, complete service unavailability, or active exploitation | Data breach, Remote Code Execution (RCE), authentication bypass, customer secret/key exposure, ransomware deployment | Immediate (≤15 min) | CISO + CEO immediately |
| High (P2) | Significant security degradation, privilege escalation, or major availability impact without confirmed data loss | Privilege escalation, significant DoS impacting SLA, unauthorized admin access, gateway compromise without data exfiltration | ≤1 hour | CISO within 30 min |
| Medium (P3) | Limited information disclosure, contained denial of service, or degraded security controls | Limited information disclosure, localized DoS, failed exploitation attempts with partial success, misconfiguration exposing non-sensitive data | ≤4 hours | Security team lead |
| Low (P4) | Minor security concerns, reconnaissance activity, or opportunities for defensive improvement | Port scanning, minor information leakage (version disclosure), policy violations without impact, phishing attempts (no compromise) | ≤24 hours | Next business day |
2.2 Impact Classification¶
Each incident SHALL additionally be classified by the following impact dimensions:
| Dimension | High | Medium | Low |
|---|---|---|---|
| Confidentiality | Customer secrets, encryption keys, or personal data exposed to unauthorized parties | Internal system data or metadata exposed; no customer secrets compromised | Information of minimal sensitivity exposed (e.g., public configuration) |
| Integrity | Customer data modified, audit logs tampered with, or cryptographic operations compromised | System configuration altered; no direct customer data impact | Minor data inconsistencies with no operational impact |
| Availability | Complete service outage or >50% capacity degradation affecting production customers | Degraded performance or partial service disruption; workarounds available | Minimal impact; non-production environments or isolated components |
2.3 Regulatory Classification Trigger¶
An incident SHALL be classified as reportable to regulatory authorities when any of the following conditions are met:
- Confirmed unauthorized access to customer personal data (GDPR Art. 33)
- Significant impact on essential service provision (NIS2 Art. 23)
- Major ICT-related incident affecting financial entity customers (DORA Art. 19)
- Incident meeting criteria defined in Act No. 264/2025 Sb. §15
3. Incident Response Team (IRT)¶
3.1 Team Composition and Roles¶
| Role | Primary Responsibility | Deputy |
|---|---|---|
| Incident Commander (IC) — CISO | Overall incident coordination, regulatory communication decisions, escalation authority, final sign-off on containment/recovery actions | CTO |
| Technical Lead — CTO / Sr. Engineer | Technical investigation, containment execution, forensic evidence collection, system recovery | Senior Platform Engineer |
| Communications Lead | Internal communications, customer notifications, press coordination, status page updates | Head of Customer Success |
| Legal Counsel | Regulatory notification assessment, legal obligation compliance, evidence preservation guidance, liability assessment | External Legal Advisor |
| Customer Relations | Customer-facing communication, impact assessment per customer, support coordination | Account Management Lead |
| Operations Lead | Infrastructure actions, backup/restore operations, system monitoring during incident | DevOps Engineer |
3.2 Escalation Matrix¶
Level 1: Security Operations (on-call engineer)
↓ [P3/P4 confirmed or P2/P1 suspected]
Level 2: Security Team Lead + Technical Lead
↓ [P2 confirmed or P1 suspected]
Level 3: CISO (Incident Commander) + Full IRT activation
↓ [P1 confirmed or regulatory notification required]
Level 4: CEO + Board notification + External counsel
3.3 Contact and Availability¶
- The Incident Commander and Technical Lead SHALL be reachable 24/7/365 via designated emergency contact channels.
- All IRT members SHALL acknowledge activation within 15 minutes during business hours and 30 minutes outside business hours.
- A duty roster SHALL be maintained ensuring at least one qualified responder per role is available at all times.
3.4 Authority¶
The Incident Commander is authorized to:
- Shut down any system or service to contain an incident
- Revoke any access credentials or certificates
- Engage external forensic specialists or legal counsel
- Authorize emergency changes without standard change management approval
- Communicate with regulatory authorities on behalf of MazeVault
4. Detection Mechanisms¶
4.1 Automated Detection¶
4.1.1 AnomalyDetectionService¶
The AnomalyDetectionService provides real-time behavioral analysis:
- Access Pattern Tracking: Establishes baseline behavior for each user and device, detects statistical deviations from normal patterns
- Deviation Detection: Triggers alerts when access patterns exceed defined thresholds (frequency, timing, volume, geographic origin)
- Risk Scoring: Assigns dynamic risk scores to sessions based on behavioral anomalies
- Integration: Feeds into ZeroTrustService for enforcement decisions
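To make the deviation-detection mechanics concrete, the following is a minimal sketch of per-user baseline tracking and z-score deviation scoring. The types, window sizes, and thresholds are illustrative assumptions, not the actual AnomalyDetectionService implementation:

```typescript
// Hypothetical sketch: baseline tracking and z-score deviation detection.
// Names (AccessSample, AccessBaseline) and thresholds are illustrative.

interface AccessSample {
  userId: string;
  requestsPerHour: number;
}

class AccessBaseline {
  private samples = new Map<string, number[]>();

  record(sample: AccessSample): void {
    const history = this.samples.get(sample.userId) ?? [];
    history.push(sample.requestsPerHour);
    if (history.length > 168) history.shift(); // keep ~1 week of hourly samples
    this.samples.set(sample.userId, history);
  }

  /** Z-score of the latest observation against the user's own history. */
  zScore(userId: string, observed: number): number {
    const history = this.samples.get(userId) ?? [];
    if (history.length < 24) return 0; // not enough data to judge a deviation
    const mean = history.reduce((a, b) => a + b, 0) / history.length;
    const variance =
      history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
    const std = Math.sqrt(variance) || 1;
    return (observed - mean) / std;
  }
}

/** Map a deviation to a session risk score in [0, 100]. */
function riskScore(z: number): number {
  return Math.min(100, Math.max(0, Math.round(Math.abs(z) * 25)));
}
```

A score derived this way would feed into the ZeroTrustService enforcement decision described in the next section.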
4.1.2 ZeroTrustService¶
The ZeroTrustService enforces continuous verification:
- Untrusted Device Detection: Identifies devices not meeting security posture requirements; blocks access to critical resources
- Critical Resource Access Control: Prevents access from untrusted contexts to high-sensitivity operations (secret retrieval, key management, administrative functions)
- Session Termination: Automatically terminates sessions exhibiting suspicious characteristics
- Alert Generation: Creates security events for investigation when enforcement actions are taken
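A minimal sketch of how such an enforcement decision could be structured is shown below. The rule names, risk threshold, and criticality categories are assumptions for illustration; the actual ZeroTrustService policy is defined in platform configuration:

```typescript
// Hypothetical sketch of a zero-trust enforcement decision.

interface SessionContext {
  deviceTrusted: boolean; // device meets security posture requirements
  riskScore: number; // from anomaly detection, 0-100 (illustrative scale)
  resourceCriticality: "standard" | "critical"; // secrets/keys/admin = critical
}

type Decision = "allow" | "deny" | "terminate-session";

function enforce(ctx: SessionContext): Decision {
  // Untrusted devices never reach critical resources.
  if (!ctx.deviceTrusted && ctx.resourceCriticality === "critical") {
    return "deny";
  }
  // Sessions with high behavioral risk are terminated and flagged for review.
  if (ctx.riskScore >= 80) {
    return "terminate-session";
  }
  return "allow";
}
```

Each `deny` or `terminate-session` outcome would generate a security event for investigation, per the alert-generation behavior listed above.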
4.1.3 SIEM Detection Rules¶
Seventeen (17) detection rules are deployed covering the following threat categories:
| Rule Category | Detection Scenario | Alert Severity |
|---|---|---|
| Authentication | Brute force (>10 failed attempts in 5 min per account) | High |
| Authentication | Credential stuffing (distributed failed logins across accounts) | High |
| Authentication | Impossible travel (logins from geographically distant locations within implausible timeframes) | Critical |
| Authorization | Privilege escalation (role elevation outside normal workflow) | Critical |
| Data Exfiltration | Bulk export (>100 secrets accessed in 1 hour) | Critical |
| Data Exfiltration | Unusual download volume (>3x baseline) | High |
| Operational | Mass revocation (>50 certificates revoked in 1 hour) | High |
| Operational | Mass secret deletion (>20 secrets deleted in 30 min) | Critical |
| Infrastructure | Unauthorized configuration change | High |
| Infrastructure | Service account abuse (use outside designated service) | High |
| Network | Lateral movement indicators | Critical |
| Network | C2 communication patterns | Critical |
| Availability | DDoS pattern detection (traffic spike >10x normal) | High |
| Compliance | Audit log gap (>5 min without expected log entries) | High |
| Compliance | Tamper detection (hash chain validation failure) | Critical |
| API | API abuse (rate limit exceeded + unusual endpoints) | Medium |
| Cryptographic | Key usage anomaly (signing operations outside maintenance window) | High |
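As an illustration of the first rule in the table (brute force: >10 failed attempts in 5 minutes per account), the following sketch shows sliding-window detection logic. Production rules live in the SIEM; this is a simplified, hypothetical rendering of the same threshold:

```typescript
// Sketch of the brute-force detection rule: >10 failed logins in 5 minutes
// per account raises a High-severity alert. Data structures are illustrative.

const WINDOW_MS = 5 * 60 * 1000;
const THRESHOLD = 10;

const failedLogins = new Map<string, number[]>(); // account -> timestamps

/** Returns true when this event pushes the account over the threshold. */
function onFailedLogin(account: string, at: number = Date.now()): boolean {
  const recent = (failedLogins.get(account) ?? []).filter(
    (t) => at - t < WINDOW_MS
  );
  recent.push(at);
  failedLogins.set(account, recent);
  return recent.length > THRESHOLD; // raise High-severity alert
}
```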
4.1.4 GatewayHealthMonitor¶
- Heartbeat Monitoring: Timeout threshold of 2 minutes; gateways must report health within this interval
- Failure Threshold: 3 consecutive missed heartbeats trigger an alert and potential failover
- Health Check Interval: Every 30 seconds
- Alert Escalation: Missed heartbeat → warning; 3 consecutive failures → critical alert + IRT notification
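The thresholds above can be illustrated with a short sketch. Class and method names are hypothetical; only the numeric parameters (2-minute timeout, 30-second check interval, 3-failure threshold) come from this Plan:

```typescript
// Illustrative sketch of the GatewayHealthMonitor escalation logic.

const HEARTBEAT_TIMEOUT_MS = 2 * 60 * 1000;
const CHECK_INTERVAL_MS = 30 * 1000; // scheduler runs check() at this interval
const FAILURE_THRESHOLD = 3;

class GatewayHealthCheck {
  private lastSeen = new Map<string, number>();
  private misses = new Map<string, number>();

  recordHeartbeat(gatewayId: string): void {
    this.lastSeen.set(gatewayId, Date.now());
    this.misses.set(gatewayId, 0); // healthy report resets the failure count
  }

  /** Intended to run every CHECK_INTERVAL_MS; escalates per the thresholds. */
  check(now: number = Date.now()): void {
    for (const [gatewayId, seen] of this.lastSeen) {
      if (now - seen <= HEARTBEAT_TIMEOUT_MS) continue;
      const count = (this.misses.get(gatewayId) ?? 0) + 1;
      this.misses.set(gatewayId, count);
      if (count >= FAILURE_THRESHOLD) {
        console.error(`CRITICAL: gateway ${gatewayId} unresponsive; notify IRT`);
      } else {
        console.warn(`WARNING: gateway ${gatewayId} missed heartbeat ${count}`);
      }
    }
  }
}
```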
4.1.5 Prometheus Alerting¶
- Infrastructure metrics monitoring (CPU, memory, disk, network)
- Application-level metrics (request latency, error rates, queue depths)
- SLA threshold alerts (response time, availability percentage)
- Certificate expiration warnings (30, 14, 7, 3, 1 day)
- Database connection pool exhaustion
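The certificate-expiration warning tiers (30, 14, 7, 3, 1 days) can be expressed as a simple threshold function. In production these are Prometheus alert rules; this sketch only demonstrates the tier logic:

```typescript
// Illustrative helper: which certificate-expiry warning tier applies.

const WARNING_DAYS = [30, 14, 7, 3, 1];

function expiryWarningTier(expiresAt: Date, now: Date = new Date()): number | null {
  const daysLeft = (expiresAt.getTime() - now.getTime()) / 86_400_000;
  // Return the tightest tier the certificate has crossed, or null if none.
  const crossed = WARNING_DAYS.filter((d) => daysLeft <= d);
  return crossed.length > 0 ? Math.min(...crossed) : null;
}
```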
4.2 Manual Detection¶
| Source | Process |
|---|---|
| Customer Reports | Received via support channels; triaged by support team; escalated to security if incident indicators present |
| Employee Reports | Internal reporting channel (info@mazevault.com); any employee may report suspected incidents |
| Third-Party Notification | Vulnerability researchers, CERT teams, partner organizations, law enforcement; routed to CISO |
| Audit Findings | Internal or external audit identifies control failures or evidence of compromise |
| Threat Intelligence | Industry feeds, vendor advisories, NUKIB warnings indicating active threats to similar systems |
5. Response Phases¶
5.1 Phase 1: Detection & Triage¶
Objective: Confirm the incident, assess initial severity, and activate appropriate response resources.
Actions:
- Validate Alert: Confirm the security event is a genuine incident (eliminate false positives)
- Assign Incident ID: Generate unique identifier in format `INC-YYYY-MMDD-NNN` (e.g., `INC-2026-0501-001`)
- Classify Severity: Apply classification matrix (Section 2) to determine priority level
- Assess Scope: Determine affected systems, data, customers, and environments
- Activate IRT: Notify team members per escalation matrix (Section 3.2)
- Initiate Incident Record: Create formal incident record with all known details
- Preserve Initial Evidence: Ensure logging continues; initiate evidence preservation (Section 8)
- Determine Regulatory Obligations: Assess whether notification thresholds are met (Section 6)
Output: Incident record created, IRT activated, initial assessment documented.
Time Constraint: Triage SHALL be completed within 30 minutes of initial detection for P1/P2 incidents.
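A minimal sketch of incident-ID generation in the format defined above follows. The per-day sequence source is a hypothetical input; the real counter is allocated by the incident record system:

```typescript
// Sketch: generate an incident ID in the format INC-YYYY-MMDD-NNN.

function incidentId(seq: number, date: Date = new Date()): string {
  const yyyy = date.getUTCFullYear();
  const mm = String(date.getUTCMonth() + 1).padStart(2, "0");
  const dd = String(date.getUTCDate()).padStart(2, "0");
  const nnn = String(seq).padStart(3, "0");
  return `INC-${yyyy}-${mm}${dd}-${nnn}`;
}

// incidentId(1, new Date(Date.UTC(2026, 4, 1))) === "INC-2026-0501-001"
```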
5.2 Phase 2: Containment¶
Objective: Limit the scope and impact of the incident while preserving evidence.
5.2.1 Short-Term Containment (Immediate)¶
- Isolate Affected Systems: Network segmentation, disable compromised gateway connections
- Revoke Compromised Credentials: Invalidate tokens, API keys, sessions, certificates confirmed or suspected compromised
- Block Malicious Sources: IP blocking, geo-blocking, firewall rule updates
- Disable Compromised Accounts: Lock user accounts exhibiting unauthorized activity
- Enable Enhanced Monitoring: Increase logging verbosity, reduce alert thresholds on related systems
- Activate GatewayEnvironmentLock: If gateway compromise suspected, ensure fencing token prevents split-brain
5.2.2 Long-Term Containment¶
- Apply Emergency Patches: Deploy fixes for exploited vulnerabilities
- Reconfigure Affected Systems: Harden configurations, rotate all potentially exposed secrets
- Implement Additional Controls: Deploy new detection rules, tighten access policies
- Establish Clean Recovery Environment: Prepare verified-clean systems for restoration
- Rotate Encryption Keys: If key compromise suspected, execute emergency key rotation procedure
Decision Gate: Incident Commander must approve transition from containment to eradication, confirming active threat is neutralized.
5.3 Phase 3: Eradication¶
Objective: Remove the root cause and verify no persistence mechanisms remain.
Actions:
- Identify Root Cause: Complete forensic analysis to determine attack vector and method
- Remove Malicious Artifacts: Delete malware, backdoors, unauthorized accounts, persistence mechanisms
- Patch Vulnerabilities: Apply permanent fixes for all exploited vulnerabilities
- Verify Clean State: Scan all potentially affected systems; validate integrity of binaries and configurations
- Verify No Lateral Movement: Confirm attack did not spread to other systems or environments
- Update Detection Rules: Add signatures/patterns from this incident to detection capabilities
- Validate Audit Log Integrity: Verify chain-hashed audit logs show no tampering during incident
Output: Root cause analysis document, confirmation of clean system state, updated threat indicators.
5.4 Phase 4: Recovery¶
Objective: Restore affected systems to full operational status and confirm normal operations.
Actions:
- Restore from Verified Backups: If system integrity cannot be confirmed, restore from last known-good backup (per BCP/DR procedures MV-LEG-008)
- Verify Data Integrity: Compare restored data against backup checksums; validate database consistency
- Gradual Service Restoration: Bring systems back online incrementally, monitoring for anomalies at each stage
- Verify Security Controls: Confirm all security mechanisms (ZeroTrust, AnomalyDetection, SIEM rules) are active and functioning
- Monitor for Recurrence: Maintain enhanced monitoring for minimum 72 hours post-recovery
- Confirm SLA Restoration: Verify system performance meets defined SLA thresholds
- Customer Communication: Notify affected customers of service restoration and any actions they must take (e.g., credential rotation)
Decision Gate: Incident Commander authorizes transition to post-incident phase when all systems are confirmed operational and stable for ≥24 hours.
5.5 Phase 5: Post-Incident¶
Objective: Learn from the incident, improve processes, and fulfill remaining obligations.
Actions:
- Conduct Post-Incident Review (PIR): Blameless retrospective within 5 business days of incident closure
- Document Lessons Learned: What worked well, what could be improved, what was missing
- Update Policies and Procedures: Revise this Plan, runbooks, and other documents based on findings
- Implement Preventive Measures: Address root cause to prevent recurrence
- Submit Final Regulatory Reports: Complete all required final reports (Section 6)
- Update Risk Register: Reflect new risks or risk reassessments in the organizational risk register
- Brief Stakeholders: Provide executive summary to leadership and board as appropriate
- Close Incident Record: Formally close the incident with complete documentation
Output: Post-Incident Report, updated risk register, action items with owners and deadlines.
6. Regulatory Notification Requirements¶
6.1 Notification Timeline Matrix¶
| Authority / Recipient | Trigger | Initial Notification | Intermediate Report | Final Report | Method |
|---|---|---|---|---|---|
| NUKIB (per Act No. 264/2025 Sb. §15) | Significant security incident affecting essential service | Within 24 hours of incident detection | Within 72 hours | Within 1 month of incident closure | NUKIB electronic portal (ESVZ) |
| Customer Notification | Incident directly affecting customer data or service | Immediate for Critical (P1) affecting their data; within 24 hours for High (P2) | Ongoing updates as material information becomes available | Final summary upon incident closure | Email + in-platform notification + status page |
| GDPR — Data Protection Authority (ÚOOÚ) (Art. 33) | Personal data breach likely to result in risk to rights and freedoms | Within 72 hours of becoming aware | As additional information becomes available | Upon completion of investigation | DPA electronic reporting form |
| GDPR — Data Subjects (Art. 34) | Personal data breach likely to result in high risk to rights and freedoms | Without undue delay | N/A | N/A | Direct communication to affected individuals |
| DORA — Competent Authority (Art. 19) | Major ICT-related incident (financial sector customers) | Within 4 hours of incident classification | Within 72 hours | Within 1 month | Regulatory reporting channel per financial authority requirements |
| CNB (Czech National Bank) | Incident affecting regulated financial entity customers | Support customer's own reporting obligation (provide evidence and impact assessment) | Ongoing support | Provide final technical report for customer's use | Via customer; direct if requested |
| Law Enforcement | Evidence of criminal activity | As directed by Legal Counsel | As requested | As requested | Direct contact with relevant authority |
6.2 Critical Timelines¶
T+0 Incident detected
T+15 min P1 triage complete, IRT activated
T+4 hours DORA initial notification (if applicable) — MANDATORY
T+24 hours NUKIB initial notification — MANDATORY
T+72 hours GDPR DPA notification — MANDATORY
T+72 hours Intermediate reports (NUKIB, DORA)
T+1 month Final reports (NUKIB, DORA)
CRITICAL: Failure to meet the 24-hour NUKIB reporting requirement or the 4-hour DORA initial notification constitutes a regulatory violation and may result in administrative penalties. The Incident Commander SHALL ensure timely notification even if full incident details are not yet available.
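To reduce the risk of a missed deadline, the mandatory timelines above can be computed mechanically from the detection timestamp. Note the simplification: the legal triggers differ slightly (GDPR runs from "becoming aware", DORA from incident classification), so this sketch, which anchors everything to detection, is conservative and illustrative only:

```typescript
// Sketch: compute the Section 6.2 notification deadlines from detection time.

const HOUR_MS = 3_600_000;

interface Deadlines {
  doraInitial: Date;         // T+4h, if applicable — MANDATORY
  nukibInitial: Date;        // T+24h — MANDATORY
  gdprDpa: Date;             // T+72h — MANDATORY
  intermediateReports: Date; // T+72h (NUKIB, DORA)
}

function notificationDeadlines(detectedAt: Date): Deadlines {
  const at = (h: number) => new Date(detectedAt.getTime() + h * HOUR_MS);
  return {
    doraInitial: at(4),
    nukibInitial: at(24),
    gdprDpa: at(72),
    intermediateReports: at(72),
  };
}
```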
6.3 Notification Decision Authority¶
- NUKIB notification: Authorized by Incident Commander (CISO) or CEO
- GDPR DPA notification: Authorized by Data Protection Officer (DPO) in consultation with Legal Counsel
- DORA notification: Authorized by Incident Commander (CISO) with Legal Counsel confirmation
- Customer notification: Authorized by Incident Commander; content approved by Communications Lead and Legal Counsel
- Law enforcement contact: Authorized exclusively by Legal Counsel and CEO
7. Communication Templates¶
7.1 Internal Escalation Template¶
Subject: [SEVERITY] Security Incident - INC-YYYY-MMDD-NNN
INCIDENT SUMMARY
- Incident ID: [INC-YYYY-MMDD-NNN]
- Severity: [Critical / High / Medium / Low]
- Detected: [Timestamp UTC]
- Detection Method: [Automated/Manual - specifics]
- Affected Systems: [List]
- Affected Customers: [Count / "Under Investigation"]
- Current Status: [Detected / Investigating / Contained / Resolved]
INITIAL ASSESSMENT
- Description: [Brief description of the incident]
- Suspected Impact: [Confidentiality / Integrity / Availability]
- Estimated Scope: [Systems, data types, customer count]
IMMEDIATE ACTIONS TAKEN
- [Action 1]
- [Action 2]
REQUIRED ACTIONS
- [Action required from specific team/individual]
NEXT UPDATE: [Timestamp or "Within X hours"]
Incident Commander: [Name]
Contact: [Emergency channel]
7.2 Customer Notification Template¶
Subject: Security Notice - Action Required / For Your Information
Dear [Customer Name],
We are writing to inform you of a security incident that [may affect / has affected]
your MazeVault environment.
WHAT HAPPENED
[Factual description without excessive technical detail]
WHAT INFORMATION WAS INVOLVED
[Types of data potentially affected, NOT specific contents]
WHAT WE ARE DOING
[Actions taken and planned by MazeVault]
WHAT YOU SHOULD DO
[Specific recommended actions: rotate credentials, review access logs, etc.]
ADDITIONAL INFORMATION
[Resources, support contact, timeline for updates]
We take the security of your data extremely seriously and are committed to
full transparency as our investigation continues.
For questions: [Dedicated incident support contact]
Next update: [Date/time]
[Signature]
MazeVault Security Team
7.3 Regulatory Notification Template (NUKIB Portal Format)¶
NOTIFICATION OF SECURITY INCIDENT
1. REPORTING ENTITY
- Organization: MazeVault s.r.o.
- IČO: [Registration Number]
- Contact Person: [CISO Name]
- Contact: [Phone] / [Email]
2. INCIDENT IDENTIFICATION
- Internal Reference: [INC-YYYY-MMDD-NNN]
- Date/Time of Detection: [ISO 8601 timestamp]
- Date/Time of Occurrence (if known): [ISO 8601 timestamp]
- Reporting Deadline Compliance: [Yes - within 24h / Explanation if late]
3. INCIDENT DESCRIPTION
- Category: [Unauthorized access / Data breach / DoS / Malware / Other]
- Affected Service: MazeVault Secret Management Platform
- Description: [Factual summary]
4. IMPACT ASSESSMENT
- Affected Systems: [List]
- Affected Users/Customers: [Count]
- Data Types Affected: [Categories]
- Service Disruption: [Duration, scope]
- Cross-border Impact: [Yes/No - details]
5. ACTIONS TAKEN
- Containment: [Measures implemented]
- Mitigation: [Steps to reduce impact]
- Recovery: [Status and timeline]
6. ADDITIONAL INFORMATION
- Root Cause (if known): [Description]
- Indicators of Compromise: [If shareable]
- Assistance Requested: [If any]
Status: [Initial / Intermediate / Final]
7.4 Press Statement Template (If Required)¶
FOR IMMEDIATE RELEASE
MazeVault Addresses Security Incident
[City], [Date] — MazeVault acknowledges a security incident affecting
[general scope]. We detected [general description] on [date] and
immediately activated our incident response procedures.
[If customer data affected]:
We are directly notifying all affected customers and providing guidance
on recommended protective actions.
Our investigation is ongoing in coordination with relevant authorities.
The security and privacy of our customers' data remains our highest priority.
We will provide updates as our investigation progresses.
Media Contact: [Name, email, phone]
8. Evidence Preservation¶
8.1 Principles¶
All evidence collection and preservation SHALL adhere to the following principles:
- Integrity: Evidence must not be altered during collection or storage
- Chain of Custody: Complete documentation of who accessed evidence, when, and why
- Admissibility: Collection methods must support potential legal proceedings
- Completeness: All relevant evidence types must be collected
- Timeliness: Evidence must be collected before it is overwritten or expires
8.2 Evidence Types and Collection¶
| Evidence Type | Collection Method | Priority | Retention |
|---|---|---|---|
| Audit Logs | Chain-hashed, tamper-evident logs exported from AuditService | Critical | Minimum 3 years |
| System Logs | Centralized log aggregation export (structured JSON) | Critical | Minimum 3 years |
| Application Logs | Export from log management system | High | Minimum 3 years |
| Network Captures | PCAP from affected network segments (if available) | High | Duration of investigation + 1 year |
| Memory Dumps | Live memory capture from affected systems (if applicable and feasible) | Medium | Duration of investigation + 1 year |
| Disk Images | Forensic disk image (bit-for-bit copy) of affected systems | High (if compromise confirmed) | Duration of investigation + 3 years |
| Configuration Snapshots | System configuration at time of incident | High | Minimum 3 years |
| Authentication Logs | All auth events for affected accounts/systems | Critical | Minimum 3 years |
| SIEM Alerts/Correlations | Complete alert chain and correlation data | Critical | Minimum 3 years |
8.3 Chain-Hashed Audit Log Integrity¶
MazeVault's AuditService produces tamper-evident audit logs using cryptographic hash chaining:
- Each audit event includes a hash of the previous event, creating an immutable chain
- Any modification to historical log entries breaks the hash chain and is immediately detectable
- Hash chain validation is performed automatically and can be verified during forensic analysis
- These logs serve as authoritative evidence of system activity during an incident
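The validation mechanism can be sketched as follows. Field names are illustrative assumptions; the actual AuditService record schema is defined by the platform:

```typescript
// Sketch: hash-chain validation for tamper-evident audit logs.
import { createHash } from "node:crypto";

interface AuditEvent {
  payload: string;  // serialized event content (illustrative)
  prevHash: string; // hash of the previous event in the chain
  hash: string;     // sha256(prevHash + payload)
}

function eventHash(prevHash: string, payload: string): string {
  return createHash("sha256").update(prevHash + payload).digest("hex");
}

/** Returns the index of the first tampered event, or -1 if the chain holds. */
function validateChain(events: AuditEvent[], genesisHash: string): number {
  let prev = genesisHash;
  for (let i = 0; i < events.length; i++) {
    const e = events[i];
    if (e.prevHash !== prev || e.hash !== eventHash(prev, e.payload)) {
      return i; // chain broken here: any modification is detectable
    }
    prev = e.hash;
  }
  return -1;
}
```

Because each event commits to its predecessor, modifying or deleting any historical entry invalidates every subsequent hash, which is what makes the logs usable as authoritative forensic evidence.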
8.4 Chain of Custody Requirements¶
For each piece of evidence collected:
- Label: Unique evidence identifier linked to incident ID
- Record: Date/time of collection, collector identity, collection method
- Secure Storage: Evidence stored in access-controlled, integrity-protected storage
- Access Log: All access to evidence must be logged (who, when, purpose)
- Transfer Records: Any transfer between parties documented and signed
- Integrity Verification: Hash values computed at collection and verified at each access
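A hedged sketch of an evidence record meeting these requirements is shown below; the structure is illustrative, not a prescribed schema:

```typescript
// Hypothetical chain-of-custody record matching the requirements above.

interface CustodyEntry {
  accessedBy: string;
  at: string; // ISO 8601 timestamp
  purpose: string;
}

interface EvidenceRecord {
  evidenceId: string;  // unique identifier linked to the incident ID
  incidentId: string;
  collectedBy: string;
  collectedAt: string;
  method: string;      // collection method
  sha256: string;      // hash computed at collection
  custodyLog: CustodyEntry[]; // every access: who, when, purpose
}

/** Each access must both verify integrity and extend the custody log. */
function recordAccess(
  ev: EvidenceRecord, who: string, purpose: string, currentSha256: string
): void {
  if (currentSha256 !== ev.sha256) {
    throw new Error(`Integrity failure for ${ev.evidenceId}: hash mismatch`);
  }
  ev.custodyLog.push({ accessedBy: who, at: new Date().toISOString(), purpose });
}
```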
8.5 Evidence Retention¶
- Minimum retention period: 3 years from incident closure for all critical evidence
- Extended retention: If legal proceedings are initiated or anticipated, evidence SHALL be retained until proceedings conclude plus applicable appeal periods
- Regulatory requirements: Evidence supporting regulatory notifications SHALL be retained for the period specified by the relevant regulation (minimum 5 years for DORA)
- Disposal: Evidence disposal requires Incident Commander approval and SHALL be documented
9. Incident Notification Service¶
9.1 Technical Implementation¶
The IncidentService provides automated incident lifecycle management and notification:
9.1.1 Incident Creation and Notification¶
When a security incident is created in the system:
- `IncidentService` generates a unique incident identifier
- Automatic notifications are triggered to all active notification channels:
  - Email: Sent to IRT distribution list and configured escalation contacts
  - Slack: Posted to designated security incident channel
  - Microsoft Teams: Posted to security operations team channel
  - Jira: Incident ticket created with appropriate priority and assignment
  - Webhook: HTTP POST to configured external endpoints (SIEM, SOAR integration)
- Audit event `INCIDENT_DETECTED` is logged with full incident metadata
- Incident status is set to `open`
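The fan-out behavior can be sketched as follows. The channel interface is a hypothetical abstraction; actual integrations are configured through the administrative interface (Section 9.1.3):

```typescript
// Illustrative sketch of notification fan-out to all active channels.

interface Incident {
  id: string;
  severity: "P1" | "P2" | "P3" | "P4";
  summary: string;
}

interface NotificationChannel {
  name: string;
  enabled: boolean;
  send(incident: Incident): Promise<void>;
}

async function notifyAll(incident: Incident, channels: NotificationChannel[]) {
  const active = channels.filter((c) => c.enabled);
  const results = await Promise.allSettled(active.map((c) => c.send(incident)));
  // Delivery failures trigger fallback via alternative channels (Section 9.1.3).
  results.forEach((r, i) => {
    if (r.status === "rejected") {
      console.error(`Delivery failed on ${active[i].name}; invoking fallback`);
    }
  });
}
```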
9.1.2 Status Lifecycle¶
| Status | Description | Trigger |
|---|---|---|
| `open` | Incident detected, awaiting triage | Automatic on creation |
| `investigating` | IRT actively analyzing and responding | Manual: IC confirms triage complete |
| `contained` | Active threat neutralized, scope limited | Manual: IC confirms containment |
| `resolved` | Root cause removed, systems recovered | Manual: IC confirms recovery |
| `closed` | Post-incident complete, all reports submitted | Manual: IC final sign-off |
Each status transition generates notifications to all active channels and creates an audit event.
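Because every transition in the table is strictly forward and IC-gated, the lifecycle can be enforced with a simple transition map, sketched here under those assumptions:

```typescript
// Sketch: status-transition validation for the lifecycle table above.

type Status = "open" | "investigating" | "contained" | "resolved" | "closed";

const NEXT: Record<Status, Status[]> = {
  open: ["investigating"],
  investigating: ["contained"],
  contained: ["resolved"],
  resolved: ["closed"],
  closed: [],
};

function transition(current: Status, next: Status): Status {
  if (!NEXT[current].includes(next)) {
    throw new Error(`Invalid transition ${current} -> ${next}`);
  }
  // Each successful transition notifies all active channels and writes an
  // audit event (see Section 9.1.1).
  return next;
}
```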
9.1.3 Notification Channel Configuration¶
All notification channels are managed through the administrative interface. Each channel can be independently enabled/disabled. Channel health is monitored; delivery failures trigger fallback notifications via alternative channels.
10. Testing and Exercises¶
10.1 Exercise Program¶
| Exercise Type | Frequency | Scope | Participants |
|---|---|---|---|
| Tabletop Exercises | Semi-annual (every 6 months) | Scenario-based discussion of incident response procedures; decision-making practice | Full IRT + management |
| Simulation Exercises | Annual | Realistic simulated incident with hands-on response activities; end-to-end process validation | Full IRT + operations |
| DR Failover Tests | Quarterly | Technical disaster recovery procedure execution; gateway failover, database restore | Operations + Infrastructure |
| Communication Tests | Quarterly | Verify notification channels functional; test escalation contacts reachable | Communications Lead + IRT |
| Post-Real-Incident Review | After each P1/P2 incident | Comprehensive review of actual incident response; identify improvements | All involved personnel |
10.2 Exercise Requirements¶
- All exercises SHALL have defined objectives, scenarios, and success criteria
- Exercise results SHALL be documented, including identified gaps and improvement actions
- Improvement actions SHALL be tracked to completion with assigned owners and deadlines
- Exercise scenarios SHALL be updated annually to reflect current threat landscape
- At least one annual exercise SHALL include a regulatory notification simulation
10.3 Exercise Scenarios (Minimum)¶
- Customer data breach via exploited vulnerability
- Ransomware attack affecting production database
- Insider threat: privileged user exfiltrating secrets
- Supply chain compromise via compromised dependency
- Gateway compromise and lateral movement attempt
- Encryption key compromise requiring emergency rotation
11. Metrics¶
11.1 Key Performance Indicators¶
| Metric | Target | Measurement |
|---|---|---|
| Mean Time to Detect (MTTD) | <15 min (P1), <1 hour (P2) | Time from incident occurrence to detection |
| Mean Time to Respond (MTTR) | <30 min (P1), <2 hours (P2) | Time from detection to containment initiation |
| Mean Time to Recover | <4 hours (P1), <8 hours (P2) | Time from containment to service restoration |
| Regulatory Notification Compliance | 100% on-time | Percentage of notifications submitted within required timeframes |
| Incidents by Severity per Quarter | Trending downward | Count of incidents per severity level |
| False Positive Rate | <5% for P1/P2 alerts | Percentage of alerts that are false positives |
| Exercise Completion Rate | 100% per schedule | Percentage of planned exercises completed |
| Post-Incident Action Completion | 100% within 30 days | Percentage of PIR actions completed on time |
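As an illustration of how MTTD and MTTR would be measured from incident records, consider the sketch below. Field names are hypothetical; the real metrics pipeline reads from the incident record system:

```typescript
// Sketch: compute MTTD and MTTR (in minutes) from incident timestamps.

interface IncidentTimes {
  occurredAt: number;           // epoch ms (best available estimate)
  detectedAt: number;           // epoch ms
  containmentStartedAt: number; // epoch ms
}

const mean = (xs: number[]) =>
  xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0;

/** MTTD: occurrence -> detection. MTTR: detection -> containment initiation. */
function kpis(incidents: IncidentTimes[]) {
  return {
    mttdMinutes: mean(
      incidents.map((i) => (i.detectedAt - i.occurredAt) / 60_000)
    ),
    mttrMinutes: mean(
      incidents.map((i) => (i.containmentStartedAt - i.detectedAt) / 60_000)
    ),
  };
}
```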
11.2 Reporting¶
- Monthly: Security operations dashboard (incidents, alerts, trends) to CISO
- Quarterly: Incident summary report to CEO and Board
- Annual: Comprehensive incident response program review, including metrics trends, exercise results, and improvement plan
12. Integration with Other Processes¶
12.1 Vulnerability Management (MV-LEG-006)¶
- A vulnerability actively exploited in the wild or confirmed exploited against MazeVault systems SHALL be escalated to an incident
- Incident forensics may identify previously unknown vulnerabilities; these SHALL be entered into the vulnerability management process
- Critical vulnerability patches may be deployed under emergency change authority granted by Incident Commander
12.2 Change Management¶
- Failed changes that result in security degradation SHALL be treated as incidents
- Emergency changes during incident response SHALL be retrospectively documented and reviewed
- Post-incident remediation changes follow standard change management with incident reference
12.3 Business Continuity / Disaster Recovery (MV-LEG-008)¶
- A Critical (P1) incident causing extended service outage (>RTO threshold) SHALL trigger BCP activation
- The Incident Commander coordinates with BCP roles during combined incident/continuity events
- DR procedures (failover, restore) may be invoked as part of incident recovery (Phase 4)
- BCP activation does NOT supersede incident response; both processes run in parallel with coordinated leadership
12.4 Risk Management (MV-LEG-003)¶
- Each P1/P2 incident SHALL result in a risk register update
- Incident trends inform annual risk assessment updates
- New risks identified during incident response are immediately added to the risk register
12.5 Access Control (MV-LEG-005)¶
- Incidents involving unauthorized access trigger immediate access review for affected systems
- Privilege escalation incidents require full access audit of the compromised account chain
- Post-incident: access policies updated to prevent recurrence
13. Compliance Mapping¶
| Requirement | Section in This Document |
|---|---|
| Act No. 264/2025 Sb. §15 (Incident reporting to NUKIB) | Section 6.1 (24-hour notification), Section 7.3 |
| NIS2 Directive Art. 23 (Incident notification) | Section 6.1, Section 6.2 |
| DORA Art. 17 (ICT-related incident management) | Sections 2-5 (full incident lifecycle) |
| DORA Art. 18 (Classification of ICT-related incidents) | Section 2 (classification matrix) |
| DORA Art. 19 (Reporting of major ICT-related incidents) | Section 6.1 (4-hour initial notification), Section 6.2 |
| GDPR Art. 33 (Notification to supervisory authority) | Section 6.1 (72-hour DPA notification) |
| GDPR Art. 34 (Communication to data subject) | Section 6.1 (data subject notification) |
| ISO/IEC 27001:2022 A.5.24 (Information security incident management planning) | Sections 1-5 |
| ISO/IEC 27001:2022 A.5.25 (Assessment and decision on information security events) | Section 2 (classification), Section 5.1 (triage) |
| ISO/IEC 27001:2022 A.5.26 (Response to information security incidents) | Sections 5.2-5.4 |
| ISO/IEC 27001:2022 A.5.27 (Learning from information security incidents) | Section 5.5, Section 11 |
| ISO/IEC 27001:2022 A.5.28 (Collection of evidence) | Section 8 |
| SOC 2 CC7.3 (Incident response) | Sections 2-5, 8 |
| SOC 2 CC7.4 (Incident recovery) | Sections 5.4, 12.3 |
14. Related Documents¶
| Document ID | Title |
|---|---|
| MV-LEG-001 | Information Security Policy |
| MV-LEG-002 | Cryptography Policy |
| MV-LEG-003 | Risk Management Policy |
| MV-LEG-004 | Logging & Monitoring Policy |
| MV-LEG-005 | Access Control Policy |
| MV-LEG-006 | Vulnerability & Patch Management Policy |
| MV-LEG-008 | Business Continuity & Disaster Recovery Plan |
Document Control¶
| Version | Date | Author | Change Description |
|---|---|---|---|
| 1.0.0 | 2026-05-01 | CISO | Initial release |
END OF DOCUMENT