
Crisis Communications Framework

Every brand will face a crisis. The ones that survive are those that prepared before it arrived. This framework covers both preparation (before a crisis) and response (during one). It includes severity assessment, stakeholder mapping, holding statement templates, escalation protocols, and post-crisis review. The golden rule: speed, empathy, and facts, in that order.

When to use this framework

  • You want to prepare a crisis communications plan before something goes wrong
  • A crisis is unfolding and you need to organise your response
  • You're assessing a potential issue to decide if it could escalate
  • You need to brief leadership on crisis readiness and response protocols
  • You're conducting a post-crisis review to improve for next time


Worked Example

CrowdStrike

1. Severity Assessment

First step: classify the severity to determine the right response level.

What has happened or is happening? Stick to confirmed facts only.

A faulty software update pushed to 8.5 million Windows devices caused global blue-screen-of-death crashes, grounding flights, disrupting hospitals, and taking down banking systems. The issue is in our Falcon sensor content update, not a cyberattack.

How severe is this? Determines the response team and speed.

Level 4 (highest severity): full crisis team activates, with the CEO as public spokesperson.

Could this get worse? What would cause it to escalate? Assess honestly.

Extremely high. Affected systems include critical infrastructure (healthcare, aviation, emergency services). Government scrutiny is inevitable. Customers may face downstream liability. Class-action litigation is likely. Competitor messaging will be aggressive. Every hour of downtime erodes trust that took years to build.
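
The severity level is what converts an assessment into concrete obligations: who assembles, and how fast. Below is a minimal sketch of how that mapping might be encoded, assuming a four-level scale; the team compositions and deadlines are illustrative, not CrowdStrike's actual protocol.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResponseLevel:
    """Response requirements triggered by a severity classification."""
    name: str
    team: str                   # who must assemble
    first_statement_mins: int   # deadline for the holding statement
    update_interval_hours: int  # committed public update cadence

# Illustrative four-level scale; calibrate to your own organisation.
SEVERITY_SCALE = {
    1: ResponseLevel("Minor", "Comms on-call", 240, 24),
    2: ResponseLevel("Moderate", "Comms + Legal", 120, 12),
    3: ResponseLevel("Major", "Full crisis team", 60, 6),
    4: ResponseLevel("Critical", "Full crisis team + CEO", 30, 2),
}

def response_for(severity: int) -> ResponseLevel:
    if severity not in SEVERITY_SCALE:
        raise ValueError(f"Unknown severity level: {severity}")
    return SEVERITY_SCALE[severity]

# A Level 4 incident puts the CEO front and centre within 30 minutes.
print(response_for(4))
```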

2. Stakeholder Map

Who needs to know, and in what order? Get the sequence right.

Who inside the company needs to be briefed? CEO, legal, board, customer support, employees.

1. CEO George Kurtz — immediate briefing, will be the public face
2. CTO — leads technical root cause and fix
3. Legal — litigation exposure assessment within 1 hour
4. Board of Directors — emergency briefing within 2 hours
5. All-hands employee communication — within 3 hours
6. Customer Success — briefed before customers call, armed with talking points
7. Sales team — briefed on how to handle prospect and renewal conversations
(Both this sequence and the external contacts below feed the schedule sketch further down.)

Customers, media, regulators, partners, investors. Who do you proactively contact vs. wait for inbound?

Proactive (within 2 hours): All enterprise customers via direct email from the CEO.
Proactive (within 4 hours): Regulatory bodies (CISA, relevant government agencies).
Proactive (Day 1): Technology partners (Microsoft coordination is critical).
Reactive: Media (statement prepared, CEO available for interviews).
Monitor: Social media, Reddit r/sysadmin, Hacker News.
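
Together, the internal and external lists are really one notification schedule: each stakeholder gets a deadline relative to the moment the incident is declared, and proactive contacts are sequenced ahead of reactive ones. A minimal sketch under that assumption; the hour values mirror the lists above.

```python
from datetime import datetime, timedelta

# (stakeholder, mode, deadline in hours after incident declaration)
# Deadlines mirror the worked example; None = respond on inbound only.
NOTIFICATION_PLAN = [
    ("CEO",                  "proactive", 0.0),
    ("Legal",                "proactive", 1.0),
    ("Board of Directors",   "proactive", 2.0),
    ("Enterprise customers", "proactive", 2.0),
    ("All employees",        "proactive", 3.0),
    ("Regulators (CISA)",    "proactive", 4.0),
    ("Technology partners",  "proactive", 24.0),
    ("Media",                "reactive",  None),
]

def deadlines(incident_start: datetime):
    """Yield each stakeholder with an absolute deadline, proactive first."""
    for who, mode, hours in NOTIFICATION_PLAN:
        due = incident_start + timedelta(hours=hours) if hours is not None else None
        yield who, mode, due

for who, mode, due in deadlines(datetime.now()):
    print(f"{who:22s} {mode:9s} {due or 'on inbound'}")
```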

Who is in the room? Comms lead, legal, CEO, subject-matter expert, social media manager.

CEO (public spokesperson), CTO (technical lead), VP Comms (messaging and media), General Counsel (legal/regulatory), VP Customer Success (customer comms), VP Engineering (fix deployment), Social Media Director (monitoring and response).

3. Holding Statement & Key Messages

The first public statement. Must contain: acknowledgement, empathy, what you're doing, when you'll update. Keep it under 100 words.

We are aware of a widespread issue affecting Windows systems running Falcon sensor software. This is not a cyberattack — it is a defect in a content update we released. We have identified the issue, reverted the update, and are deploying a fix. We understand the severity and are working with every affected customer to restore operations. We will provide updates every 2 hours. We are deeply sorry for the disruption.
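
A holding statement's constraints (four required elements, under 100 words) are mechanical enough to lint before it goes out. A rough sketch; the keyword cues are illustrative heuristics and no substitute for a human read.

```python
import re

REQUIRED_ELEMENTS = {
    # element -> crude keyword cues (illustrative, not exhaustive)
    "acknowledgement": ["aware of", "we know", "identified"],
    "empathy":         ["sorry", "understand", "apologi"],
    "action":          ["working", "deploying", "fix", "reverted"],
    "next update":     ["update", "will provide"],
}

def lint_holding_statement(text: str) -> list[str]:
    """Return a list of problems; an empty list means the statement passes."""
    problems = []
    words = len(re.findall(r"\b\w+\b", text))
    if words > 100:
        problems.append(f"too long: {words} words (limit 100)")
    lowered = text.lower()
    for element, cues in REQUIRED_ELEMENTS.items():
        if not any(cue in lowered for cue in cues):
            problems.append(f"missing element: {element}")
    return problems

statement = (
    "We are aware of an outage affecting Falcon sensor customers. "
    "We are deeply sorry for the disruption. Our team is deploying a fix now. "
    "We will provide updates every 2 hours."
)
print(lint_holding_statement(statement) or "OK")
```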

What do you KNOW to be true? Never speculate. List only confirmed facts.

1. The issue is caused by a content update to the Falcon sensor, not a cyberattack
2. Only Windows systems are affected — Mac and Linux are not impacted
3. The faulty update has been reverted — no new systems will be affected
4. Fix requires manual intervention on affected machines (boot into Safe Mode)
5. Approximately 8.5 million devices are affected globally

What should no one in the company say publicly? Lines that will make things worse.

1. Never say 'this only affected a small number of systems' — 8.5M is not small
2. Never blame Microsoft or Windows
3. Never say 'this couldn't have been prevented' — it clearly could have
4. Never minimise the impact ('just a reboot' — some systems need a manual fix)
5. Never speculate on financial impact or liability
6. Never say 'we tested this thoroughly' until the root cause investigation explains what testing missed

4. Response Channels & Timeline

What goes out where? Social media, press statement, customer email, website banner, employee comms.

1. CrowdStrike blog: Technical details and fix instructions (updated continuously)
2. Direct email: CEO-signed message to all customers within 2 hours
3. Support portal: Real-time status page and fix documentation
4. Social (Twitter/X, LinkedIn): CEO video statement + regular updates
5. Press: Written statement immediately, CEO available for CNBC/Bloomberg within 4 hours
6. Employee Slack: All-hands message from CEO, dedicated #incident channel
7. Customer Success: 1:1 outreach to all enterprise accounts

How frequently will you provide updates? Set expectations and keep them.

First 24 hours: Update every 2 hours.
Days 2-3: Update every 6 hours.
Days 4-7: Daily update.
Week 2+: Update when there's material progress on the root cause review.
Final: Publish the full post-incident review within 30 days.
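
A cadence commitment is easy to break once the war room gets busy, which is a good reason to encode it. A minimal sketch of the schedule above, with hour boundaries taken straight from it.

```python
def update_interval_hours(hours_since_start: float) -> float | None:
    """Return the committed gap between public updates at a given point
    in the incident, per the cadence above. None means update only on
    material progress."""
    if hours_since_start < 24:    # first 24 hours
        return 2
    if hours_since_start < 72:    # days 2-3
        return 6
    if hours_since_start < 168:   # days 4-7
        return 24
    return None                   # week 2+: material progress only

assert update_interval_hours(5) == 2
assert update_interval_hours(48) == 6
assert update_interval_hours(100) == 24
assert update_interval_hours(200) is None
```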

Who is monitoring social? What keywords/hashtags? How are you routing inbound questions?

24/7 monitoring by social team + agency support. Keywords: CrowdStrike, Falcon, blue screen, BSOD, outage, #CrowdStrike. Route technical questions to support. Respond to factual errors within 15 minutes. Do NOT engage with trolls or speculation. Flag media inquiries to VP Comms immediately. Monitor Reddit r/sysadmin and Hacker News for technical sentiment.
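
The monitoring rules above amount to a routing table: match an inbound mention against the watch keywords, classify it, and hand it to the right queue with a response deadline. A toy sketch with illustrative rules; in practice this logic would sit behind a social-listening tool rather than raw string matching.

```python
KEYWORDS = ["crowdstrike", "falcon", "blue screen", "bsod", "outage", "#crowdstrike"]

def route(mention: str, author_type: str = "public") -> tuple[str, int | None]:
    """Return (queue, response_deadline_minutes) for an inbound mention.
    author_type is 'media', 'customer', or 'public'. Rules are illustrative."""
    text = mention.lower()
    if not any(k in text for k in KEYWORDS):
        return ("ignore", None)
    if author_type == "media":
        return ("vp-comms", 0)          # flag media inquiries immediately
    if "how do i" in text or "safe mode" in text or "fix" in text:
        return ("support", 60)          # technical questions go to support
    if "rumor" in text or "hack" in text or "cyberattack" in text:
        return ("comms-factcheck", 15)  # correct factual errors within 15 min
    return ("monitor", None)            # watch; do not engage speculation

print(route("Is the CrowdStrike BSOD a cyberattack?"))  # -> ('comms-factcheck', 15)
```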

5. Post-Crisis Review

After the crisis is resolved, learn from it.

What actually caused this? Go beyond the surface symptom.

Root cause: Content update was deployed without adequate staged rollout. The update contained a logic error that triggered a kernel panic on Windows systems. Testing environment did not catch the issue because it tested the update in isolation, not under production-like conditions with full system load.

What did the team do right? Speed of response, tone, coordination?

1. CEO was visible and accountable within hours
2. Technical transparency — published root cause details quickly
3. Did not try to minimise or deflect blame
4. Customer Success team reached all enterprise accounts within 24 hours

Where were the gaps? Slow escalation, unclear ownership, missing information?

1. Content updates must go through a staged rollout (canary → 1% → 10% → 100%) — sketched below
2. Need automated rollback capability — the manual Safe Mode fix was too slow
3. Testing must include production-like environments
4. Need pre-approved crisis holding statements for common scenarios
5. Customer communication templates should be ready to deploy in under 30 minutes
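
The first two gaps (staged rollout and automated rollback) are the most technical fixes on the list. A skeletal sketch of what canary-gated deployment can look like; the stage fractions follow the list above, while deploy_to, healthy, and rollback are hypothetical hooks into a real fleet-management system.

```python
import time

STAGES = [0.001, 0.01, 0.10, 1.0]   # canary -> 1% -> 10% -> 100%
SOAK_SECONDS = 3600                 # observe each stage before widening

def staged_rollout(update_id: str, deploy_to, healthy, rollback) -> bool:
    """Widen the rollout stage by stage; halt and roll back on any health
    regression. The three callables are hypothetical hooks into your fleet:
    deploy_to(update_id, fraction), healthy(fraction) -> bool,
    rollback(update_id)."""
    for fraction in STAGES:
        deploy_to(update_id, fraction)
        time.sleep(SOAK_SECONDS)      # soak period; watch crash telemetry
        if not healthy(fraction):
            rollback(update_id)       # automated, no Safe Mode visits
            return False
    return True
```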