
This new guidance amounts to leading Western governments telling OT users (industrial businesses in manufacturing, energy, power, logistics, critical infrastructure, and the like), “Yes, you can use AI in OT, but only if you’re prepared for it to fail and you can recover quickly when it does.”
Summary of the announcement
The guidance, "Principles for the Secure Integration of Artificial Intelligence in Operational Technology," issued 3 Dec 2025 by CISA and nine partner agencies (in the U.S., U.K., EU, Canada, Australia and New Zealand), lays out four core principles for using AI in OT safely and securely:
- Understand AI risks
- Be selective about where you use AI in OT
- Build governance around it
- Embed oversight, safety and security into AI-enabled OT systems
Parsing the guidance
If we boil down the document to its essence, four key points emerge:
- Availability and recovery are front and center: The risk tables explicitly tie AI issues (model drift, lack of explainability, alarm noise, interoperability problems) to increased recovery time, reduced system availability and recovery challenges.
- AI makes OT more fragile and complex: The authors warn that AI adds complexity, new attack surfaces (including internet-exposed paths) and interoperability headaches, all of which can make troubleshooting and recovery harder when something goes wrong.
- Operators are told to plan for AI failures: The guidance calls for explicit failsafe mechanisms, the ability to bypass or replace AI and the integration of AI failure modes into existing functional safety and incident-response processes.
- Vendors are also on the hook: It urges operators to demand transparency from vendors (e.g., SBOMs, cloud dependencies, data-usage policies) and the ability to disable AI features or run without constant internet access.
The net message is that AI in OT is acceptable only if you can understand it, constrain it and recover from it quickly when it misbehaves.
What it means from a practical perspective
The governments behind the announcement are explicitly insisting that OT operators design for AI failure and rapid recovery. That is the business that Acronis has been in for over 20 years, helping OT users back up and rapidly restore failed systems regardless of the cause. An AI deployment that leads to an OT system failure is just another example of an incident we can help recover from quickly. From the perspective of Acronis:
- Prevention is never enough: Recovery is equally important. Even if you do everything right on the preventative side (secure development, vendor vetting, network segmentation, human-in-the-loop controls), the guidance still assumes that AI incidents are inevitable, and it therefore recommends folding AI into existing incident-response and functional-safety plans. This echoes a similar new emphasis on recovery in recent cybersecurity regulations (like NIS 2), cybersecurity standards (like NIST CSF 2.0) and the requirements businesses must meet to qualify for cyber insurance.
- AI-induced outages look like any other OT outage, except they can be harder to debug: When an AI-enabled HMI, historian or engineering workstation starts making bad recommendations, changing setpoints or crashing, the plant manager’s first problem is not subtle model behavior; it’s “My line is down and I need it back in a known-good state now.”
- AI-caused OT outages are simply one more problem that rapid OT system recovery solutions (like Acronis Cyber Protect Local) are designed to solve. It doesn’t matter whether the outage is caused by a ransomware attack, a clumsy AI deployment, bad training data, a misconfigured agent or failed patching. You can roll the compromised OT system back to a last-known-good image in minutes and quickly restore safe, deterministic behavior.
How Acronis helps OT users deal with new AI-borne risk
Here’s how Acronis can help industrial enterprises and critical infrastructure operators respond to each of the core principles in the guidance.
Understand AI and its risks
The guidance says that operators should understand AI model drift, lack of explainability, operator cognitive load and interoperability risks, all of which can increase downtime and complicate recovery.
Acronis doesn’t try to “fix” AI models. Rather, we assume they will fail and ensure you can revert AI-enabled SCADA servers, HMIs, engineering workstations and other PC-based OT systems to a known-good, pre-change image in minutes. We view “AI gone wrong” as just another recoverable failure mode, like a hardware failure, cyberattack or botched patch.
Consider AI use in the OT domain
The guidance says to be cautious about where you place AI in the Purdue model, recognize cloud and latency issues and demand vendor transparency about connectivity and data usage. Acronis makes it safer for OT users to add AI-enabled functions to PC-based OT assets like SCADA servers, HMIs, historians and MES servers. If an AI upgrade destabilizes a system, Acronis can roll back the entire machine (OS, application, AI runtime and configuration) to the last vetted state. Our ability to provide immutable, offline and logically isolated backup copies means that even if the AI is abused as an attack path, the recovery point remains intact.
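To make the “last vetted state” idea concrete, here is a minimal sketch of how a recovery workflow might choose a restore point: pick the most recent one that both predates the AI change and passed a test restore, preferring immutable copies. The RestorePoint fields and function names below are illustrative assumptions, not an Acronis data model or API.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical restore-point record; field names are assumptions for this sketch.
@dataclass
class RestorePoint:
    created: datetime
    verified: bool      # passed a periodic test restore
    immutable: bool     # held in immutable / logically isolated storage

def last_vetted_point(points: list[RestorePoint],
                      ai_change_at: datetime) -> Optional[RestorePoint]:
    """Most recent verified restore point that predates the AI-enabled change.
    Immutable copies break ties, since an attacker abusing the AI path
    cannot tamper with them."""
    candidates = [p for p in points if p.created < ai_change_at and p.verified]
    if not candidates:
        return None
    return max(candidates, key=lambda p: (p.created, p.immutable))

if __name__ == "__main__":
    points = [
        RestorePoint(datetime(2025, 11, 28), verified=True, immutable=True),
        RestorePoint(datetime(2025, 12, 1), verified=True, immutable=False),
        RestorePoint(datetime(2025, 12, 4), verified=False, immutable=False),
    ]
    pick = last_vetted_point(points, ai_change_at=datetime(2025, 12, 3))
    print("Roll back to:", pick.created.date() if pick else "no safe point found")
```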
Establish AI governance and assurance frameworks
The guidance says to integrate AI into existing security frameworks, define roles and responsibilities with AI vendors, system integrators and managed services, and test thoroughly before full deployment. Acronis backup and recovery can serve as a governance control: no AI-enabled change gets rolled into production OT without a recoverable, tested image and a documented rollback procedure. Acronis can also enable periodic test restores of critical OT systems as part of the assurance framework, demonstrating that “minutes-to-recovery” holds up in practice. In multiparty ecosystems (vendor plus integrator plus operator), recovery responsibilities can be clearly assigned: who owns the golden images, who executes recovery drills and who signs off on AI-related change windows.
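As an illustration, here is a minimal sketch of what such a pre-deployment gate might look like. All names (AIChangeRequest, approve_change, the policy constant) are hypothetical and not part of any Acronis product; a real implementation would query the backup platform and change-management system directly.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record of an AI-enabled OT change awaiting approval.
@dataclass
class AIChangeRequest:
    system: str                          # e.g., "HMI-03" or "SCADA-primary"
    image_verified_at: datetime | None   # last successful test restore of the golden image
    rollback_doc_url: str | None         # link to the documented rollback procedure
    owner: str | None                    # who signs off on the change window

MAX_IMAGE_AGE = timedelta(days=30)  # assumed policy: test restores must be recent

def approve_change(req: AIChangeRequest, now: datetime) -> list[str]:
    """Return a list of blocking issues; an empty list means the gate passes."""
    issues = []
    if req.image_verified_at is None:
        issues.append("no tested, recoverable image on record")
    elif now - req.image_verified_at > MAX_IMAGE_AGE:
        issues.append("last test restore is older than policy allows")
    if not req.rollback_doc_url:
        issues.append("no documented rollback procedure")
    if not req.owner:
        issues.append("no named owner for the change window")
    return issues

if __name__ == "__main__":
    req = AIChangeRequest(
        system="HMI-03",
        image_verified_at=datetime(2025, 11, 20),
        rollback_doc_url=None,
        owner="OT-ops",
    )
    blockers = approve_change(req, now=datetime(2025, 12, 5))
    print("BLOCKED: " + "; ".join(blockers) if blockers else "Approved: rollback path verified.")
```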
Embed oversight, safety and fail-safes
The guidance says to implement monitoring and fail-safe mechanisms that let AI systems “fail gracefully,” revert to traditional automation or manual control and incorporate AI failure states into incident response and safety processes. Acronis Cyber Protect Local is another such fail-safe. For example, if AI alarms run wild, Acronis can roll back the OT system to a known-good system image. If AI-driven configuration changes go sideways, Acronis can instantly roll back to the last safe configuration. If AI becomes a malware delivery path, Acronis can restore the affected OT system from clean, validated backups and isolate the compromised image.
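As a sketch of the “fail gracefully” pattern, the watchdog below tracks the alarm rate from an AI-enabled component and trips a rollback hook when it exceeds a threshold. The window, limit and trigger_rollback function are all assumptions for illustration; in a real deployment the hook would invoke the site’s recovery tooling and the thresholds would come from site safety engineering.

```python
from collections import deque
import time

# Assumed policy values; real limits would come from site safety engineering.
ALARM_WINDOW_SECONDS = 60
MAX_ALARMS_PER_WINDOW = 20

def trigger_rollback(system: str) -> None:
    """Hypothetical hook: in production this would invoke recovery tooling
    (e.g., restore the last known-good image), not just print."""
    print(f"[FAILSAFE] Alarm storm on {system}: reverting to known-good image "
          f"and handing control back to traditional automation.")

class AlarmWatchdog:
    """Counts alarms in a sliding time window and trips a fail-safe once."""
    def __init__(self, system: str):
        self.system = system
        self.alarm_times: deque[float] = deque()
        self.tripped = False

    def record_alarm(self, now: float | None = None) -> None:
        now = time.monotonic() if now is None else now
        self.alarm_times.append(now)
        # Drop alarms that have fallen out of the sliding window.
        while self.alarm_times and now - self.alarm_times[0] > ALARM_WINDOW_SECONDS:
            self.alarm_times.popleft()
        if not self.tripped and len(self.alarm_times) > MAX_ALARMS_PER_WINDOW:
            self.tripped = True
            trigger_rollback(self.system)

if __name__ == "__main__":
    wd = AlarmWatchdog("historian-01")
    # Simulate an AI component flooding operators with alarms: 25 in 25 "seconds".
    for i in range(25):
        wd.record_alarm(now=float(i))
```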
In short, the ability to recover systems in minutes is not only about business continuity. In safety-critical sectors (energy, chemicals, water, pharma), it’s also about minimizing the time a plant spends in an unstable or degraded state.
About Acronis
Founded in Singapore in 2003 and headquartered in Switzerland, Acronis has 15 offices worldwide and employees in 50+ countries. Acronis Cyber Protect Cloud is available in 26 languages in 150 countries and is used by over 21,000 service providers to protect over 750,000 businesses.


