Securing the World's Biggest Machine: Critical Infrastructure, AI, and the Ethics of Innovation
What happens when decades of critical infrastructure experience meet today’s rapidly evolving AI landscape? In this episode, host Bob Slevin sits down with Ernie Hayden, award-winning author, former Navy nuclear officer, ethical hacker, and founder of 443 Consulting, for a deep dive into what it truly takes to secure modern, interconnected systems.
Drawing on a career spanning electric utility SCADA networks and chemical manufacturing environments, Ernie explains why adopting a holistic lens is essential for both cybersecurity and operational excellence. By understanding inputs, outputs, dependencies, and system-wide relationships, organizations can better anticipate risk and improve performance. He also shares why regulated industries like power and nuclear have developed hard-earned practices that the broader business world would benefit from adopting.
The conversation then turns to AI, where Ernie makes a critical distinction between AI as an analytical support tool and autonomous agents making real-time decisions. He outlines the risks of deploying agent-driven AI in high-stakes environments like power generation, where even a small mistake could have major consequences, while highlighting the immediate value of AI-assisted analysis. Throughout the discussion, he emphasizes the irreplaceable role of human judgment, especially in high-pressure situations where experience and intuition guide decisions no model can replicate.
The episode closes with a look into Ernie's work as a photojournalist and nature photographer in the Pacific Northwest, along with his perspective on why continuous learning and an "innovate or die" mindset have become essential in today's fast-moving world.
Takeaways:
→ Adopt a holistic, systems-level view of your infrastructure. Don't just look at individual components — understand how every system communicates with and depends on others, and ask what happens when any one link is severed.
→ Look to NERC CIP as a security framework, even if you're not a utility. These standards for cyber and physical security of the electric grid have proven themselves over time and can be adapted as a model for other regulated or high-risk industries.
→ Draw a "digital fence line" around your critical assets. Think of your security perimeter not just physically but digitally — monitor everything that comes in and goes out, and treat both with equal rigor.
→ Understand the difference between AI and AI agents before deploying either. AI that surfaces insights and recommendations for a human operator is a manageable first step; AI agents that take autonomous action in critical systems carry substantially higher risk and demand far greater scrutiny.
→ Document AI recommendations and require human approval before any action is taken. Especially in regulated or safety-critical environments, every AI-generated recommendation should be logged, explained, and acted on only with explicit human authorization.
→ Use AI as a powerful research accelerator, but verify outputs carefully. Tools like Perplexity are excellent for gathering information quickly, but hallucinations are a real risk — always validate AI-generated content before publishing or acting on it.
→ Don't wait for a crisis to start thinking about innovation. Whether you're running a refinery, a data center, or a telecom network, the organizations that are not actively incorporating AI and modern monitoring risk being overtaken by those that are.
Quote of the Show:
"You just don't look at a bridge or a factory or a refinery. You look at it as a series of systems, and all these systems pile up and work together. And then what you're trying to do is understand how do they communicate, how do they support each other — and better than that, what if I cut those links?"
Links:
→ LinkedIn: https://www.linkedin.com/in/enhayden/
→ Email: Enhayden1321@gmail.com
→ Photography Website: https://risingmoonnw.com/
→ Book Link: https://a.co/d/05Wa6Qk9