AI Will Be in Your Next Crisis. The Only Question Is Who's Controlling It.

By Paul Walker
5 min read
AI Leadership · Crisis Management · Change Management · Communication Strategy

A few months ago, I was helping an organization develop a crisis preparedness plan.

The timing couldn't have been better --- or worse.

Just as we started, everything went sideways. Stakeholders revolted. Social media lit up. Elected officials got involved.

Our prep call turned into a live crisis meeting.

So I made a call of my own: I turned on an early version of CrisisCommand EDU.

It moved fast --- drafted holding statements, outlined next steps, surfaced scenarios.

But it also revealed something deeper: AI isn't plug-and-play. And a crisis isn't the time to figure that out.

Within minutes, we hit human bottlenecks: review fatigue, confusion over ownership, and hesitation about accuracy. Eventually, the team scaled back. Not because the threat had passed, but because the people had reached their limit.

We didn't fail. But I walked away with this: the people part of the process needs as much design as the tech.

We've Been Here Before

And it hit me: we've been here before.

At Andersen Consulting (now Accenture), we implemented massive enterprise systems, including ERP and CRM. Many underdelivered. Not because the software was bad, but because people didn't adopt it.

They went back to spreadsheets. Workarounds. Habits.

So we built change-management strategies: training, workflow redesign, clear expectations. And it worked.

Now, with AI, we're back at that same inflection point.

Why AI Can Go Sideways in a Crisis

1️⃣ Hallucination Risk: AI is confident but not always correct. Every word must be verified.

2️⃣ Human Bottlenecks: AI drafts in seconds; humans need time to review. Skip that step and the system jams.

Here's the twist: modern LLMs are smarter but slower. Some outputs even simulate delay ("Let me take a moment to work on that ..."). That's not thinking; that's UX theater. You're managing tempo, perception, and trust.

3️⃣ Tone Risk: Crisis is emotional. One robotic message can do more harm than silence.

4️⃣ Workflow Confusion: Who owns the message? Who approves it? If ownership is unclear, the whole response stalls (see the sketch after this list for one way to make that explicit).

5️⃣ Security & Privacy: Sensitive data plus open models equals hesitation.

6️⃣ Cultural Resistance: Some see AI as a threat; others use it secretly. Neither works at scale.
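If your team is wiring AI into its comms stack, risks 2 and 4 can be engineered rather than argued about. Here's a minimal sketch in Python, purely illustrative (the Draft class, role names, and methods are my assumptions, not CrisisCommand's implementation), of a gate that gives every AI draft a named owner and blocks publication until a named human signs off:

```python
# A hypothetical ownership-and-approval gate for AI-drafted statements.
# Illustrative only; not CrisisCommand's actual code.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Draft:
    text: str                          # AI-generated holding statement
    owner: str                         # who is accountable for the message
    approver: str                      # who must sign off before release
    approved_at: Optional[datetime] = None

    def approve(self, reviewer: str) -> None:
        # Only the named approver can release the draft. Anyone else
        # bypassing the gate is a workflow violation, not a shortcut.
        if reviewer != self.approver:
            raise PermissionError(f"{reviewer} is not the approver ({self.approver})")
        self.approved_at = datetime.now(timezone.utc)

    def publish(self) -> str:
        # Unapproved drafts never leave the system, no matter how fast
        # the AI produced them.
        if self.approved_at is None:
            raise RuntimeError("draft has not been approved by a human")
        return self.text


draft = Draft(
    text="We are aware of the situation and are gathering the facts.",
    owner="comms-director",
    approver="general-counsel",
)
draft.approve("general-counsel")  # the human step, made explicit
print(draft.publish())
```

The design choice is the point: the review step lives in the system itself, so the AI's speed never outruns human accountability.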

Don't Just Deploy AI. Onboard It.

Treat AI like a new team member, not a productivity hack. What works:

✅ Simulate first.
✅ Define where it helps.
✅ Give humans room to review.
✅ Clarify ownership.
✅ Build confidence before the next crisis.

One of our beta testers, who was rewriting his crisis plan, told me: "The AI made it better in minutes. But I hesitated to show it. I didn't want my team to think I was trying to scoreboard them."

That's the tension. AI may be ready. Your team might not be.

The Leadership Challenge

The challenge now is leadership: guiding humans and machines through the same storm, toward the same purpose.

When the next crisis hits, the "other side" might show up with a trained AI that thinks faster than your team ever will.

Do you have a plan to lead with AI?

Or are you hoping your team figures it out mid-crisis, with everything on the line?

At CrisisCommand, we've learned that integrating AI into crisis leadership isn't about replacing judgment; it's about engineering it. The best systems make human judgment stronger, faster, and more consistent under pressure.


Paul Walker

Founder

Veteran strategist with a career spanning PulsePoint Group, Accenture, Y&R/Burson-Marsteller, Cohn & Wolfe, and The University of Texas. Paul has built and led businesses across the U.S., Asia, and Europe — from startups to major universities to Global 1000 companies.

Ready to Get Crisis-Ready?

See how CrisisCommand can help your organization prepare for and manage crisis communications with AI-powered intelligence.