Endpoint Security · 2026-02-16 · 10 min read

Tuning fanotify to Crush the 15% MDE Bottleneck (and Stop RTP Storms for Good)

When real-time protection turns into an RTP storm, fanotify becomes the choke point. Here's how to fix it the right way.

Let's be honest.

You didn't deploy Microsoft Defender for Endpoint on your Linux servers to watch it sit at a steady 15% CPU while your app team asks why latency just spiked.

You deployed it for protection.

But when real-time protection (RTP) turns into a file-event hurricane (what we call an RTP storm), fanotify becomes the choke point. And if you don't tune it properly, it will become your bottleneck.

First: What's Actually Happening?

fanotify is the Linux kernel mechanism that allows security software to intercept file access. Microsoft Defender for Endpoint uses fanotify in permission mode.

That means every open(), every execve(), every relevant read() or metadata access can trigger a permission event. And in permission mode, the kernel blocks the operation until the MDE daemon responds: Allow or Deny.

That is powerful. That is invasive. And under high file churn, that is expensive.

The 15% CPU Problem Isn't Random

When you see sustained 12–20% CPU on mdatp, here's what's really happening under the hood:

  • VFS hook triggers
  • fanotify allocates event struct
  • Event copied to userspace
  • Context switch
  • Userspace evaluation
  • Context switch back
  • Kernel resumes syscall

Multiply that by Docker overlayfs writes, CI/CD artifact churn, log rotation, middleware checkpoint files, and chatty applications repeatedly calling stat(). That's your RTP storm. And the CPU cost is mostly context switching, queue management, scheduler pressure, and lock contention in fs/notify. Not bad coding. Just physics.

What Is an RTP Storm?

An RTP storm happens when:

  • High IOPS workload begins
  • fanotify permission events spike
  • Event queue fills rapidly
  • Userspace daemon works overtime
  • CPU usage stabilises at 15%+
  • Latency creeps into workloads

Common triggers: build pipelines (make -j, npm install, Maven builds), container hosts writing layers, logging pipelines, IBM MQ or database data paths, and backup agents walking full directory trees. The kernel is doing exactly what you told it to do: intercept everything. Now let's fix it intelligently.
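If you want to reproduce a storm in miniature before touching production, a throwaway churn generator gives you a before/after baseline. This is a hypothetical load script, not an MDE tool; the file count is arbitrary, and you watch mdatp's CPU in another terminal while it runs.

```shell
# churn N: create, stat, and delete N small files in a scratch directory,
# imitating build-artifact churn. Every iteration generates several
# candidate fanotify events (open, write, metadata access, unlink).
churn() {
    dir=$(mktemp -d)
    i=0
    while [ "$i" -lt "$1" ]; do
        f="$dir/artifact.$i.o"
        printf 'x' > "$f"         # open + write: permission-event candidates
        stat "$f" > /dev/null     # metadata access, like a chatty build tool
        rm "$f"
        i=$((i + 1))
    done
    rmdir "$dir"
    echo "churned $1 files"
}

churn 500   # watch mdatp CPU in another terminal while this runs
```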

Strategic Tuning: Reduce Event Volume at the Source

You do not reduce CPU by disabling real-time protection. You reduce CPU by reducing unnecessary interception.

The formula is simple: fewer permission events = fewer context switches = lower CPU.

1. Eliminate High-Churn, Low-Risk Paths

Start by identifying paths generating event floods. Typical offenders:

  • /var/lib/docker
  • /var/log
  • /var/mqm
  • Build artifact directories
  • Backup mounts
  • Temporary processing folders

Exclude them surgically:

mdatp exclusion folder add --path /var/lib/docker
mdatp exclusion folder add --path /var/log

What you're doing technically: you're preventing fanotify from generating permission events for those paths. No kernel → userspace event. No blocking decision. No context switch. No CPU cost. Real-world impact? 15% → 6–8% instantly on container hosts.
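Rather than typing exclusions one by one, a tiny helper can turn your offender list into reviewable commands first. The helper name and path list here are illustrative; only the mdatp exclusion syntax shown above is assumed.

```shell
# Print (not run) the exclusion commands for a list of high-churn paths,
# skipping any path that does not exist on this host.
gen_exclusions() {
    for p in "$@"; do
        if [ -d "$p" ]; then
            printf 'mdatp exclusion folder add --path %s\n' "$p"
        else
            printf '# skipped (missing): %s\n' "$p"
        fi
    done
}

gen_exclusions /var/lib/docker /var/log /var/mqm
# Review the output, then pipe it to sh once you're happy with it.
```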

2. Understand Mount-Level Marking

fanotify typically marks entire mounts. That means everything on that filesystem is monitored. If your Docker storage lives on /, you just told the kernel to intercept every container write on the system.

Instead:

  • Separate high-churn data to dedicated mounts
  • Exclude those mounts from scanning
  • Keep executable paths monitored

This is architecture-level tuning, and it works.
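A quick way to audit this is GNU stat's %m format, which prints the mount point backing a path. A minimal sketch, assuming Linux with GNU coreutils; the directory list is illustrative:

```shell
# Report the mount point backing a directory, so you can tell whether
# excluding it covers a dedicated filesystem or punches a hole in /.
mount_of() {
    stat -c '%m' "$1"
}

for d in /var/lib/docker /var/log; do
    [ -d "$d" ] || continue
    echo "$d is on mount: $(mount_of "$d")"
done
```

If a high-churn directory reports `/` as its mount, that's your cue to move it to a dedicated filesystem before excluding it.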

3. Prevent Duplicate Access Amplification

Some applications are pathological: repeated stat() calls, file existence checks in tight loops, recursive directory scans. Every one of those hits fanotify. Use:

strace -f -e trace=file -p <pid>

If an app is hammering metadata calls, you may need to fix the app โ€” not the AV. Because fanotify sees all of it.
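The raw strace stream is too noisy to eyeball, so aggregate it by path. A sketch assuming strace's default output format, where the pathname is the first double-quoted argument on each line:

```shell
# Summarise strace file events: count syscalls per path, busiest first.
# Usage: strace -f -e trace=file -p <pid> 2>&1 | top_paths
top_paths() {
    awk -F'"' 'NF >= 2 { count[$2]++ }
               END { for (p in count) print count[p], p }' |
        sort -rn | head -20
}
```

The top few lines of that output are usually your exclusion candidates, or your bug report to the app team.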

4. OverlayFS and Container Reality

OverlayFS multiplies event complexity: upper layer writes, lower layer reads, path reconstruction overhead. On Kubernetes or Docker nodes, this is where the 15% lives.

Mitigation strategy:

  • Exclude container storage paths
  • Monitor host binaries
  • Focus scanning where malware actually executes

Scanning ephemeral layer writes gives you CPU burn with almost no security gain.

Permission Mode: The True Cost Centre

Permission events (FAN_OPEN_PERM, FAN_ACCESS_PERM, FAN_OPEN_EXEC_PERM) are blocking. That means scheduler wakeups, kernel wait queues, and userspace decision delay. If your workload opens 30,000 files per second, that's 30,000 potential block points. Even if each one takes microseconds, it adds up fast.

The only way to reduce it? Reduce the number of times you ask the question.
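That arithmetic is worth doing explicitly. Assuming a 5 µs kernel-to-userspace round trip per permission event (an illustrative figure; measure your own), 30,000 opens per second consumes a noticeable slice of one core:

```shell
# Back-of-envelope: blocked time per second = events/sec * per-event cost.
events_per_sec=30000
cost_us=5                                  # assumed round trip, microseconds
busy_us=$((events_per_sec * cost_us))      # 150000 us of blocking per second
echo "CPU spent on permission events: $((busy_us / 10000))% of one core"
```

Halve the event rate through exclusions and the cost halves with it; no per-event cleverness gets you that.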

Measuring the Bottleneck Like a Pro

Don't just stare at top. Measure what matters.

Context switches

pidstat -w 1

Syscall latency

perf trace -p <pid>

fanotify pressure

perf top

If you see high activity in fsnotify, fanotify_handle_event, or scheduler functions, you're in event saturation.
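You can turn that pidstat stream into a simple alert. A sketch assuming pidstat -w's default layout, where cswch/s is the fourth column and the command name is last; check your sysstat version's headers before relying on it:

```shell
# Flag heavy context-switchers from `pidstat -w 1` output.
# Usage: pidstat -w 1 5 | flag_switchers 1000
flag_switchers() {
    threshold=$1
    awk -v t="$threshold" '$4 ~ /^[0-9.]+$/ && $4 + 0 > t + 0 {
        print $NF, "is switching", $4, "times/sec"
    }'
}
```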

Security That Performs

You don't need to choose between protection and performance. You need tuning. When properly optimised:

  • CPU drops from 15% → 3–6%
  • Syscall latency normalises
  • RTP storms disappear
  • No reduction in meaningful coverage

You keep executable monitoring, system binary protection, and user-space malware interception. You eliminate log file churn, container scratch layers, and middleware write amplification. That's intelligent security engineering.

The Bottom Line

fanotify is a gatekeeper in the Linux VFS path. If you monitor everything on a high-I/O server, you are inserting a checkpoint into every file open. Of course it costs CPU.

But when you architect mounts properly, exclude noisy paths, eliminate duplicate churn, and understand permission event mechanics, you don't just reduce CPU. You eliminate RTP storms. And suddenly that stubborn 15% disappears.


If you want to go deeper into kernel internals inside fs/notify, the fanotify locking model and queue behaviour, comparing fanotify to eBPF LSM hooks, or building a benchmarking harness, that's where we can help.

Talk to our team →