In OT, risk isn’t a buzzword: operationalize (threat × vulnerability × asset) into a weekly prioritization loop

Most OT programs fail because they rank vulnerabilities, not risk.

Flip it: start with your assets and credible threats, then decide which vulnerabilities actually matter enough to fix this week.

When every finding is “critical,” nothing gets done. The backlog becomes political, and engineering, IT, and operations debate severity instead of impact.

A simple, repeatable model breaks the stalemate:
Risk = Threat × Vulnerability × Asset

Turn that into a weekly loop:
1) Asset: Pick the top systems that keep product moving and people safe (not everything).
2) Threat: Agree on the few credible scenarios that could realistically hit those assets (not theoretical CVSS fear).
3) Vulnerability: Only then map weaknesses that enable those scenarios.
4) Score: Use a consistent 1–5 scale for each factor. Multiply. Rank (a quick scoring sketch follows this list).
5) Commit: Fix the top 5–10 items this week. Everything else waits.
6) Review: Capture what changed in the environment, threats, or compensating controls and rescore next week.
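
A minimal sketch of the scoring step, with illustrative findings and scores you would replace with your own weekly inputs:

```python
# Minimal sketch of the weekly scoring step.
# Asset, threat, and vulnerability scores (1-5) are illustrative placeholders;
# replace them with the values your team agrees on in the weekly review.

findings = [
    # (finding, asset_score, threat_score, vuln_score)
    ("Unpatched HMI reachable from IT network", 5, 4, 4),
    ("Default credentials on packaging-line PLC", 4, 3, 5),
    ("Outdated historian client on office VLAN", 2, 2, 3),
]

scored = [
    {"finding": name, "risk": asset * threat * vuln}
    for name, asset, threat, vuln in findings
]

# Rank by risk and commit to the top 5-10 items this week.
for item in sorted(scored, key=lambda x: x["risk"], reverse=True)[:10]:
    print(f'{item["risk"]:>3}  {item["finding"]}')
```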

Outcome: shared language across OT engineering, IT security, and operations, and a prioritized plan tied to real-world impact.

If your OT backlog feels permanent, stop asking “Which vulnerabilities are worst?”
Start asking “Which asset-threat paths are most likely and most damaging this week?”

ISA/IEC 62443 for SIS: stop treating Safety Systems as “off-limits” and start applying security levels like an engineering spec

Contrarian take: the safest SIS is the one you can still patch, monitor, and validate.

Too many SIS environments get a security pass because they’re “safety-critical.”
That logic is backwards.

If a cyber event can change logic, blind diagnostics, or disrupt comms, your safety case is now conditional on security you didn’t specify.

ISA/IEC 62443 gives a practical way out: define Security Levels at SIS boundaries and turn risk talk into engineering requirements.

What that looks like in practice (a small zone/SL sketch follows the list):
– Define SIS zones/conduits explicitly (SIS controller, engineering workstation, diagnostics, vendor remote access)
– Assign target SL based on credible threat capability, not comfort level
– Translate SL into design requirements: segmentation, authentication, hardening, logging, backup/restore, update strategy
– Make it testable: FAT/SAT cybersecurity checks, periodic validation, evidence for MOC and audits
– Assign ownership: who maintains accounts, patches, monitoring, and exception handling
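
A rough sketch of the idea of making SL targets checkable; zone names, SL values, and requirement lists here are illustrative placeholders, not entries from the 62443 tables:

```python
# Illustrative sketch only: zone names, target SLs, and requirement mappings
# are placeholders to show how SL targets become verifiable engineering
# requirements, not values from any standard's requirement catalog.

TARGET_SL = {
    "SIS_controllers": 3,
    "SIS_engineering_workstation": 3,
    "SIS_diagnostics": 2,
    "vendor_remote_access_conduit": 3,
}

# Example mapping from target SL to design requirements you would verify
# at FAT/SAT and during periodic validation.
REQUIREMENTS_BY_SL = {
    2: ["documented conduit rules", "unique accounts", "central logging"],
    3: ["documented conduit rules", "unique accounts", "central logging",
        "multi-factor auth on conduits", "hardened baseline image",
        "tested backup/restore", "defined update strategy"],
}

for zone, sl in TARGET_SL.items():
    print(f"{zone} (target SL {sl}):")
    for req in REQUIREMENTS_BY_SL[sl]:
        print(f"  [ ] {req}")
```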

Security levels aren’t bureaucracy. They’re how you prove the safety function still holds under cyber conditions.

If your SIS is “off-limits” to security engineering, it’s also off-limits to assurance.

How are you defining SIS security boundaries and target SLs today?

BeyondTrust RS/PRA command injection (CVE-2026-1731): why Zero Trust is necessary but not sufficient for remote support tools

Zero Trust won’t save you from a vulnerable admin tool by itself.

Ask one question:
If this box is compromised, what’s the maximum damage it can do in 10 minutes?

A command injection in a privileged remote support platform collapses the trust boundary. The “helpdesk tool” becomes:
– Immediate privileged code execution
– Credential access and session hijack potential
– Fast lateral movement across managed endpoints

Zero Trust helps only if it is translated into hard controls that shrink blast radius:
– Least privilege for the platform service accounts and integrations
– Network segmentation so the tool cannot reach everything by default
– Just-in-time access for technicians and elevated actions
– Isolation: dedicated jump hosts, separate admin planes, restricted egress
– Application allowlisting and controlled script execution
– Session recording and strong audit logs that cannot be tampered with
– Compensating monitoring: alert on unusual commands, new tool binaries, and rapid host-to-host pivots (a simple pivot check is sketched below)
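
As one illustration of that last control, a hedged sketch of a pivot check over remote-support session records; field names, sample data, and thresholds are assumptions, not any product's log schema:

```python
# Hedged sketch: flag a technician account that touches many distinct hosts
# inside a short window. Names, values, and thresholds are illustrative.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
MAX_HOSTS_PER_WINDOW = 3  # tune to your own environment

sessions = [
    # (timestamp, technician, target_host) - illustrative sample records
    (datetime(2025, 1, 10, 9, 0), "tech01", "hmi-12"),
    (datetime(2025, 1, 10, 9, 2), "tech01", "eng-ws-3"),
    (datetime(2025, 1, 10, 9, 4), "tech01", "historian-1"),
    (datetime(2025, 1, 10, 9, 6), "tech01", "dc-backup"),
]

by_tech = defaultdict(list)
for ts, tech, host in sorted(sessions):
    by_tech[tech].append((ts, host))

for tech, events in by_tech.items():
    for ts, _ in events:
        hosts = {h for t, h in events if ts <= t <= ts + WINDOW}
        if len(hosts) > MAX_HOSTS_PER_WINDOW:
            print(f"ALERT: {tech} touched {len(hosts)} hosts within {WINDOW} "
                  f"starting {ts}: {sorted(hosts)}")
            break
```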

Remote support is operationally critical. Treat it like a Tier 0 asset.
Design it so compromise is survivable, not catastrophic.

Poland’s energy-sector cyber incident: the overlooked OT/ICS gaps that still break most “enterprise” security programs

Contrarian take: If your OT security plan looks like your IT plan (patch faster, add more agents, buy another SIEM), you’re probably increasing risk.

Critical infrastructure incidents rarely fail because of exotic malware.
They fail because IT-first controls don’t translate to OT realities: uptime constraints, legacy protocols, safety interlocks, and always-on vendor access.

Where most “enterprise security” programs still break in OT/ICS:

1) Asset visibility that stops at the switch
If you can’t answer “what is this PLC/HMI, what talks to it, and what would break if it changes,” you’re operating blind.

2) Remote access governance built for convenience, not safety
Shared vendor accounts, always-on VPNs, no session recording, no time bounds, no approvals. This is the common entry point.

3) Segmentation designed for org charts, not process safety
Flat networks and dual-homed boxes turn a small intrusion into plant-wide impact. Segment by function and consequence, then control the conduits.

4) Monitoring that can’t see OT protocols
If telemetry is only Windows logs and SIEM alerts, you’ll miss the real story on Modbus, DNP3, IEC 60870-5-104, OPC, and proprietary vendor traffic.

5) Patch expectations that ignore outage windows and certification
In OT, “just patch” can equal downtime. Compensating controls and risk-based maintenance matter.

If you lead security or build products for critical infrastructure: start with asset inventory, remote access, and safety-driven segmentation. Reduce risk without disrupting operations.

What OT/ICS gap do you see most often in the field?

Just‑In‑Time, time‑bound access for OT: the fastest way to cut vendor and admin risk without slowing operations

The goal isn’t “least privilege” on paper. It’s least time.

If an account can exist forever, it will be used forever (and eventually abused).

In OT environments, persistent accounts and always‑on remote access are still common. They also show up repeatedly as root causes in incidents:
– Shared vendor logins that never expire
– Standing admin rights “just in case”
– Remote tunnels left open after maintenance
– Accounts that outlive the contract: the contract ends, the risk doesn’t

Just‑In‑Time (JIT) + time‑bound access changes the default:
Access is requested, approved, logged, and automatically revoked.
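
A minimal sketch of the time-bound piece, with illustrative names; in practice this logic lives in your PAM or remote-access broker, not a script:

```python
# Minimal sketch of time-bound access: every grant carries an expiry, and
# validity is checked at use time. All names and durations are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessGrant:
    account: str
    asset: str
    reason: str
    approved_by: str
    expires_at: datetime

    def is_valid(self, now: datetime) -> bool:
        return now < self.expires_at

def request_grant(account: str, asset: str, reason: str,
                  approved_by: str, hours: int = 4) -> AccessGrant:
    # Approval, logging, and revocation hooks would live here in practice.
    return AccessGrant(account, asset, reason, approved_by,
                       datetime.now(timezone.utc) + timedelta(hours=hours))

grant = request_grant("vendor-acme", "plc-line-2", "gearbox fault", "shift-lead")
now = datetime.now(timezone.utc)
print("access allowed" if grant.is_valid(now) else "access expired, auto-revoked")
```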

What you gain immediately:
– Smaller blast radius when credentials are exposed
– Clear audit trails for who accessed what, when, and why
– Faster offboarding for vendors and rotating staff
– Fewer exceptions that turn into permanent backdoors

The key is designing around OT realities:
– Support urgent break/fix with pre-approved workflows
– Time windows aligned to maintenance shifts
– Offline/limited-connectivity options where needed
– Access that’s scoped to assets and tasks, not “the whole site”

If you’re still managing vendor access with permanent accounts and manual cleanup, JIT is one of the highest-impact controls you can deploy without slowing operations.

Where are persistent accounts still hiding in your OT environment today?

Virtual patching isn’t a fix — it’s a risk-managed bridge (if you treat it like one)

Most teams use virtual patching as a comfort blanket.

In OT, you often cannot patch immediately. Uptime commitments, safety certification, vendor approvals, and fragile legacy stacks make “just patch it” unrealistic.

Virtual patching can be a smart short-term control, but only if you treat it like a bridge, not a destination.

What it actually reduces:
– Exploitability of known vulnerabilities by blocking specific traffic patterns or behaviors
– Exposure window while you coordinate downtime, testing, and vendor support

What it does not fix:
– The vulnerable code still exists
– Local exploitation paths, misconfigurations, and unknown variants may remain
– Asset and protocol blind spots can turn into false confidence

Disciplined virtual patching looks like this (a small tracking sketch follows the list):
1) Define an expiration date and an owner (when does “temporary” end?)
2) Specify coverage: which CVEs, which assets, which protocol functions, which zones
3) Validate with proof, not assumptions: replay exploit attempts, run vulnerability scans where safe, confirm logs/alerts, verify rule hit rates
4) Run a parallel plan to remove it: patch path, compensating controls, maintenance window, rollback plan
5) Reassess after changes: new firmware, new routes, new vendors, new threats
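
One way to make that discipline concrete is to track each virtual patch as an expiring record rather than a permanent rule. A minimal sketch with placeholder values:

```python
# Sketch: virtual patches tracked as expiring records with an owner,
# explicit coverage, and a removal plan. All field values are placeholders.
from datetime import date

virtual_patches = [
    {
        "rule_id": "vp-0042",
        "cve": "CVE-2024-XXXX",          # placeholder identifier
        "assets": ["hmi-12", "eng-ws-3"],
        "blocked_behavior": "HTTP admin interface from non-engineering VLANs",
        "owner": "ot-security",
        "expires": date(2025, 6, 30),
        "removal_plan": "vendor patch in Q2 maintenance window",
    },
]

today = date.today()
for vp in virtual_patches:
    if vp["expires"] <= today:
        print(f'OVERDUE: {vp["rule_id"]} ({vp["cve"]}) owned by {vp["owner"]} '
              f'- escalate or re-justify; plan: {vp["removal_plan"]}')
    else:
        days = (vp["expires"] - today).days
        print(f'{vp["rule_id"]}: {days} days until "temporary" ends')
```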

If you can’t explain what your virtual patch blocks, how you tested it, and when it will be removed, you have not reduced risk. You’ve created permanent temporary security.

How are you validating virtual patching effectiveness in your OT environment?

Stop chasing “bad actors” in OT: baseline behavior to catch “normal” actions at the wrong time, place, or sequence

The most dangerous OT insider events won’t trip alerts because nothing looks “malicious.”

In many incidents, the user is legitimate:
Operators, contractors, engineers.
Credentials are valid.
Commands are allowed.

The risk is in small deviations that create big safety and uptime impact.

Instead of hunting for “bad actors,” define safe normal per role and process, then alert on drift.

What to baseline in OT (a minimal drift check is sketched after this list):
– Time: out-of-shift changes, weekend maintenance that wasn’t scheduled
– Place: new asset touchpoints (a contractor suddenly interacting with SIS-adjacent systems)
– Sequence: unusual command chains (mode changes followed by setpoint edits, repeated downloads, rapid start/stop loops)
– Pace: bursts of commands, retry storms, “workarounds” that bypass standard steps
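
A minimal drift-check sketch, assuming illustrative roles, shift hours, and asset groups you would replace with your plant's reality:

```python
# Sketch: compare events against a per-role baseline of allowed hours and
# allowed asset groups. Roles, hours, and groups are illustrative placeholders.
ROLE_BASELINE = {
    "contractor": {"hours": range(7, 17), "assets": {"packaging", "utilities"}},
    "operator":   {"hours": range(0, 24), "assets": {"line-1", "line-2"}},
}

events = [
    # (role, account, hour_of_day, asset_group, action)
    ("contractor", "ctr-schmidt", 22, "utilities", "parameter_change"),
    ("contractor", "ctr-schmidt", 10, "sis-adjacent", "logic_download"),
]

for role, account, hour, asset_group, action in events:
    baseline = ROLE_BASELINE[role]
    reasons = []
    if hour not in baseline["hours"]:
        reasons.append(f"outside normal hours ({hour}:00)")
    if asset_group not in baseline["assets"]:
        reasons.append(f"new asset touchpoint ({asset_group})")
    if reasons:
        print(f"REVIEW: {account} {action} - " + "; ".join(reasons))
```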

What this enables:
– Detection of insider risk without relying on signatures
– Fewer false positives because “normal” is defined by your plant’s reality
– Earlier intervention before a deviation becomes a safety or downtime event

If your OT monitoring mostly looks for known indicators of compromise, you are missing the events that look like routine work.

Question for OT security and operations leaders: do you have role-based behavioral baselines, or are you still alerting on isolated events?

Maritime OT security isn’t “remote OT with worse Wi‑Fi” — it’s a moving, intermittently connected supply chain

Contrarian take: If your maritime OT strategy starts with patch cadence and endpoint agents, you’re already behind.

Ships, offshore platforms, and port equipment don’t behave like always-on plants.
They run with:
– Long offline windows between port calls and stable links
– Satellite bandwidth constraints and high latency
– Third-party vendor access across multiple owners and charterers
– Safety-critical systems where “just patch it” is not a plan

That combination creates invisible exposure: configuration drift, unverified vendor actions, and monitoring gaps that only surface after the vessel reconnects.

What to design for instead:
1) Disconnected-by-default controls
Local logging, local detection, local time sync, and store-and-forward telemetry (a small buffering sketch follows this list)
2) Vendor trust boundaries
Brokered access, least privilege by task, session recording, and break-glass workflows
3) Provable state while offline
Baselines, signed change packages, asset identity, and tamper-evident logs
4) Risk-based maintenance windows
Patch only when it’s safe, testable, and operationally feasible; compensate with segmentation and allowlisting
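
A small sketch of the store-and-forward idea from point 1, with a stubbed connectivity check and placeholder event fields:

```python
# Sketch of store-and-forward telemetry: queue events locally while the link
# is down, flush when connectivity returns. The uplink and connectivity
# functions are stubs; field names are illustrative.
import json
import time
from collections import deque

buffer = deque(maxlen=50_000)  # bounded local store so memory can't fill up

def link_is_up() -> bool:
    return False  # replace with a real connectivity check

def send_upstream(batch) -> None:
    print(f"sent {len(batch)} events upstream")

def record(event: dict) -> None:
    event["ts"] = time.time()
    buffer.append(json.dumps(event))   # persist to disk in a real system

def flush() -> None:
    if not link_is_up() or not buffer:
        return
    batch = [buffer.popleft() for _ in range(min(len(buffer), 500))]
    send_upstream(batch)

record({"asset": "ballast-pump-2", "alarm": "comms_lost"})
flush()  # no-op while offline; telemetry survives until the link returns
```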

Maritime OT security is less about perfect visibility and more about maintaining safety and assurance when connectivity disappears.

If you’re building a maritime OT program, start with: What must still be true when the vessel is offline?

Stop looking for “bad actors”—use behavioral baselines to catch insider risk in OT before it becomes downtime

Most insider detection fails because it hunts intent.
OT needs to hunt anomalies that predict impact.

In industrial environments, “insiders” are often trusted technicians, engineers, and contractors.
Their actions look legitimate until one small change turns into:
– an unsafe state
– a quality excursion
– unplanned downtime

That’s why the winning question isn’t “who is malicious?”
It’s: “What behavior would cause an unsafe state if repeated at scale?”

Behavioral baselines help you answer that without relying on malware signatures or perfect asset inventories.
You’re not trying to label a person.
You’re watching for deviations in:
– what changed
– when it changed
– from where it changed
– how often it changed
– which systems are being touched outside normal patterns

Examples of high-signal OT deviations (a small detection sketch follows the list):
– new engineering workstation talking to a controller it never touched before
– a contractor account executing the same write operation across multiple PLCs
– after-hours logic changes followed by disabled alarms or altered setpoints
– a burst of “normal” commands at an abnormal rate
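
A small sketch of two of those checks (first-seen source-to-controller pairs, and the same write repeated across controllers), with illustrative names and thresholds:

```python
# Sketch: two simple baseline checks over engineering traffic records.
# Known pairs, field names, and sample values are illustrative placeholders.
from collections import defaultdict

KNOWN_PAIRS = {("eng-ws-1", "plc-line-1"), ("eng-ws-1", "plc-line-2")}

writes = [
    # (source, destination, operation)
    ("eng-ws-7", "plc-line-1", "write_setpoint"),   # new source for this PLC
    ("ctr-vendor", "plc-line-1", "logic_download"),
    ("ctr-vendor", "plc-line-2", "logic_download"),
    ("ctr-vendor", "plc-line-3", "logic_download"), # same write, many PLCs
]

per_account_ops = defaultdict(set)
for src, dst, op in writes:
    if (src, dst) not in KNOWN_PAIRS:
        print(f"REVIEW: first-seen pair {src} -> {dst} ({op})")
    per_account_ops[(src, op)].add(dst)

for (src, op), targets in per_account_ops.items():
    if len(targets) >= 3:
        print(f"REVIEW: {src} repeated {op} across {len(targets)} controllers")
```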

Outcome: earlier detection, fewer escalations, and interventions before production feels it.

If you could baseline one behavior in your OT environment to reduce risk fast, what would it be?

Ships are the harshest edge-case for OT: how salt, satellite links, and vendor handoffs create remote-by-default attack paths

Standard

Contrarian take: if your OT security assumes stable connectivity and on-site admins, it’s not a security program, it’s a lab demo.

Maritime OT lives in the worst possible assumptions:
– Intermittent satellite links and high latency
– Tiny patch windows tied to port calls and class rules
– Vendors doing remote support through shifting handoffs (ship crew, management company, OEM, integrator)
– Physical exposure: shared spaces, swapped laptops, removable media, and “temporary” networks that become permanent

That combination creates remote-by-default attack paths:
A single weak credential, a poorly controlled remote session, or an untracked engineering workstation can outlive the voyage.

A sea-ready baseline looks different:
1) Design for comms failure: local logging, local detection, and store-and-forward telemetry
2) Treat remote access as a product: per-vendor isolation, just-in-time access, recorded sessions, and strong device identity
3) Patch like aviation: plan by voyage/port cycle, pre-stage updates, verify by checksum, and prove rollback (a checksum check is sketched below)
4) Control the engineering toolchain: signed configs, golden images, USB governance, and offline recovery media
5) Clarify accountability at handoff points: who owns credentials, approvals, and emergency access when the link drops
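
The “verify by checksum” step from point 3 can be as simple as the sketch below; the package path and expected hash are placeholders:

```python
# Sketch: verify a pre-staged update package against the checksum published
# by the vendor before any maintenance window. Path and hash are placeholders.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0" * 64  # replace with the vendor-published hash
PACKAGE = Path("staged_updates/controller_fw_2.4.1.bin")

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if PACKAGE.exists() and sha256_of(PACKAGE) == EXPECTED_SHA256:
    print("package verified - safe to schedule for the next port call")
else:
    print("verification failed or package missing - do not install")
```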

If you build for the ship, you’ll usually harden every remote industrial site.

What’s the biggest OT security failure mode you’ve seen offshore: connectivity, patching, third-party access, or physical exposure?