The most dangerous OT insider events won’t trip alerts because nothing looks “malicious.”
In many incidents, the user is legitimate: operators, contractors, engineers.
Credentials are valid.
Commands are allowed.
The risk lies in small deviations with outsized safety and uptime consequences.
Instead of hunting for “bad actors,” define safe normal per role and process, then alert on drift.
What to baseline in OT (a code sketch follows the list):
– Time: out-of-shift changes, weekend maintenance that wasn’t scheduled
– Place: new asset touchpoints (a contractor suddenly interacting with SIS-adjacent systems)
– Sequence: unusual command chains (mode changes followed by setpoint edits, repeated downloads, rapid start/stop loops)
– Pace: bursts of commands, retry storms, “workarounds” that bypass standard steps
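
To make the four dimensions concrete, here is a minimal sketch in Python of how time, place, sequence, and pace checks might translate into drift alerts. Everything in it — the event fields, the contractor role, the thresholds, the drift_alerts helper — is a hypothetical assumption for illustration, not a reference to any specific OT monitoring product.

```python
# Illustrative sketch only: role names, fields, and thresholds are assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from collections import deque

@dataclass
class RoleBaseline:
    allowed_hours: range                 # expected shift window (local hours) -> Time
    allowed_assets: set                  # assets this role normally touches   -> Place
    max_cmds_per_minute: int             # pace ceiling before flagging a burst -> Pace
    risky_sequences: set = field(default_factory=set)  # command pairs to flag -> Sequence

# Hypothetical baseline for a contractor role.
CONTRACTOR = RoleBaseline(
    allowed_hours=range(7, 19),
    allowed_assets={"PLC-12", "HMI-3"},
    max_cmds_per_minute=20,
    risky_sequences={("mode_change", "setpoint_edit")},
)

def drift_alerts(events, baseline):
    """Yield (reason, event) for deviations from the role baseline.

    `events` is an iterable of dicts with keys: ts (datetime),
    asset (str), command (str).
    """
    recent = deque()   # timestamps within the last 60 s, for the pace check
    prev_cmd = None    # previous command, for the sequence check
    for e in events:
        if e["ts"].hour not in baseline.allowed_hours:
            yield ("out-of-shift activity", e)
        if e["asset"] not in baseline.allowed_assets:
            yield ("new asset touchpoint", e)
        if prev_cmd and (prev_cmd, e["command"]) in baseline.risky_sequences:
            yield ("risky command sequence", e)
        recent.append(e["ts"])
        while (e["ts"] - recent[0]).total_seconds() > 60:
            recent.popleft()
        if len(recent) > baseline.max_cmds_per_minute:
            yield ("command burst", e)
        prev_cmd = e["command"]

# Example: a contractor touching an SIS-adjacent asset off-shift.
events = [
    {"ts": datetime(2024, 5, 4, 22, 10), "asset": "SIS-GW-1", "command": "mode_change"},
    {"ts": datetime(2024, 5, 4, 22, 11), "asset": "SIS-GW-1", "command": "setpoint_edit"},
]
for reason, e in drift_alerts(events, CONTRACTOR):
    print(reason, e["asset"], e["command"])
```

In practice you would learn these baselines per user and per process unit from historian or network telemetry rather than hard-coding them; the point is that every check keys off expected behavior for the role, not a known-bad signature.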
What this enables:
– Detection of insider risk without relying on signatures
– Fewer false positives because “normal” is defined by your plant’s reality
– Earlier intervention before a deviation becomes a safety or downtime event
If your OT monitoring mostly looks for known indicators of compromise, you are missing the events that look like routine work.
Question for OT security and operations leaders: do you have role-based behavioral baselines, or are you still alerting on isolated events?