1 Jan 2026: I will be updating this post with my test results, corrections, and other info over the next few weeks. I have gotten the resources (🧡🍎) to test some of these thoughts in a sandbox with some other researchers! If you have an EDR you want to test, let me know! Updates will be in orange.
I’ve seen newer macOS malware written in languages like Nim, Go, and Rust, which can evade traditional detection. These languages produce large, mostly statically linked Mach-O executables with unusual layouts.
For example, NimDoor is a 2025 macOS backdoor written in Nim, an uncommon choice for macOS malware. Nim supports compile-time code execution, which complicates static analysis by embedding logic that only runs during compilation. The attackers used Nim-compiled Mach-O binaries as installer payloads to establish persistence. Like Go, Nim embeds its own runtime, resulting in large binaries with atypical section layouts. As Phil Stokes notes, Go binaries are often identifiable by their size and unique string handling. In Nim’s case, analysts had to build custom tooling to strip the compile-time scaffolding and isolate malicious logic.
Endpoint and EDR engineers often struggle with edge cases like these due to limited education on macOS internals, restricted access to low-level APIs, and the lack of visibility caused by EDRs that obscure or strip macOS telemetry.
In this post, I’ll explore how malware written in lesser-known languages evades EDRs, and why I think this keeps happening.
This is my third and final holiday post! See you next year!
Read:
- macOS NimDoor: DPRK Threat Actors Target Web3 and Crypto Platforms with Nim-Based Malware by Phil Stokes and Raffaele Sabato.
- NimDoor MacOS Malware by The Hivemind.
Overview: Why can’t EDRs just see the Mac APIs?
Even if a Nim or Go program calls the same macOS system calls, its binary signature and static behavior differ enough to evade simple heuristics.
For example, a Nim binary may encrypt or pack its configuration in a novel way, or use unusual Mach-O load commands. Standard signatures or rules tuned for C/C++ or popular scripting languages won’t match.
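To make “unusual load commands and section layouts” concrete, here’s a minimal sketch (mine, not from any EDR or the NimDoor analysis) of how you might enumerate a binary’s load commands, sections, and linked dylibs using the headers Apple ships in <mach-o/loader.h>. It assumes a thin 64-bit Mach-O and skips fat binaries and byte-swapping entirely:

```c
// machodump.c — list load commands, sections, and linked dylibs (sketch).
// Skips fat/universal files, 32-bit images, and byte-swapping entirely.
#include <fcntl.h>
#include <mach-o/loader.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc != 2) { fprintf(stderr, "usage: %s <mach-o>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) != 0) { perror("open/fstat"); return 1; }

    void *map = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }
    const uint8_t *base = map;

    const struct mach_header_64 *hdr = (const struct mach_header_64 *)base;
    if (hdr->magic != MH_MAGIC_64) { fprintf(stderr, "not a thin 64-bit Mach-O\n"); return 1; }

    const struct load_command *lc = (const struct load_command *)(base + sizeof(*hdr));
    for (uint32_t i = 0; i < hdr->ncmds; i++) {
        printf("LC 0x%x (%u bytes)\n", lc->cmd, lc->cmdsize);

        if (lc->cmd == LC_SEGMENT_64) {
            const struct segment_command_64 *seg = (const struct segment_command_64 *)lc;
            const struct section_64 *sec = (const struct section_64 *)(seg + 1);
            // Go binaries typically carry runtime sections like __gopclntab;
            // Nim and Rust layouts also differ from stock clang output.
            for (uint32_t j = 0; j < seg->nsects; j++, sec++)
                printf("  %.16s,%.16s  %llu bytes\n", sec->segname, sec->sectname,
                       (unsigned long long)sec->size);
        } else if (lc->cmd == LC_LOAD_DYLIB) {
            const struct dylib_command *dl = (const struct dylib_command *)lc;
            printf("  links %s\n", (const char *)lc + dl->dylib.name.offset);
        }
        lc = (const struct load_command *)((const uint8_t *)lc + lc->cmdsize);
    }
    return 0;
}
```

Run it against a hello-world built with clang and then one built with Go or Nim, and the difference in size, sections, and linked dylibs is obvious at a glance.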
Phil Stokes and Raffaele Sabato noted that NimDoor’s binary required new analysis techniques since Nim “was a more unusual [language] choice.” Similarly, the Sliver C2 payload (written in Go) has no built-in packing, but attackers added UPX on macOS, forcing defenders to handle even well-known frameworks masquerading behind packing tools.
If you want to hear my thoughts on packing, consider reading Why Packers are Rare and Sus on macOS.
macOS security APIs and EDR telemetry
macOS provides some APIs that security software can use, but third-party EDRs have limitations.
Apple’s Endpoint Security (ES) API lets applications subscribe to kernel events (process creation, file operations, network activity, etc.) and block actions at runtime.
This gives AV and EDR tools visibility.
“Endpoint Security is… available on macOS 10.15 or higher… [and] allows applications to look for malicious activity by providing visibility into certain parts of their system” – Stuart Ashenbrenner.
In other words, an EDR that fully uses ES can see relevant activity without special privileges.
Learn more: Endpoint Security In a macOS World by Stuart Ashenbrenner.
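As a concrete example of what “fully uses ES” means, here’s a minimal sketch of an Endpoint Security client that subscribes to process-exec notifications. It’s illustrative only: a real client needs the Apple-granted com.apple.developer.endpoint-security.client entitlement, Full Disk Access approval, root, and links against EndpointSecurity and libbsm.

```c
// es_watch_exec.c — minimal Endpoint Security notify client (sketch).
// Needs the com.apple.developer.endpoint-security.client entitlement,
// root, and: clang -fblocks es_watch_exec.c -lEndpointSecurity -lbsm
#include <EndpointSecurity/EndpointSecurity.h>
#include <bsm/libbsm.h>
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    es_client_t *client = NULL;

    // The handler block runs for every message the subsystem delivers to us.
    es_new_client_result_t res = es_new_client(&client,
        ^(es_client_t *c, const es_message_t *msg) {
            if (msg->event_type == ES_EVENT_TYPE_NOTIFY_EXEC) {
                const es_process_t *p = msg->event.exec.target;
                printf("exec: %s (pid %d)\n",
                       p->executable->path.data,
                       audit_token_to_pid(p->audit_token));
            }
        });
    if (res != ES_NEW_CLIENT_RESULT_SUCCESS) {
        fprintf(stderr, "es_new_client failed: %d (missing entitlement or TCC approval?)\n", res);
        return 1;
    }

    // NOTIFY events are informational only; no response is required.
    es_event_type_t events[] = { ES_EVENT_TYPE_NOTIFY_EXEC };
    es_subscribe(client, events, 1);
    dispatch_main();   // never returns; keep receiving events
}
```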
However, not all relevant telemetry is exposed.
For instance, Apple’s unified log system (ULS) is off-limits.
macOS requires a private entitlement to read system logs, so EDR agents typically cannot access them. Many older EDRs still use kernel extensions (now deprecated) or user-land API hooks. If a vendor has not fully ported to ES, it may miss some events.
Learn more: EDR Internals for macOS and Linux by Kyle Avery.
Note: “User mode hooking would require modification of the executable code in memory in libc/libSystem/etc in each process you want to hook. This is prevented by SIP; the same reason you can’t inject into a remote process” – Kyle Avery. This is something I need to look into more because I don’t know how older OSes are protected then, but I am 99.9% sure this is a me problem.
macOS has improved its protection APIs. Unfortunately, some solutions (especially older or less supported ones) still rely heavily on built-in features like Gatekeeper, XProtect, and basic signature scanning rather than leveraging the ES API.
macOS’s built-in features
Similarly, EDRs claim that macOS built-in features have huge blind spots that only their specific EDR solution can fix. While I question claims about needing a specific EDR, it’s true that Gatekeeper, XProtect, the Malware Removal Tool (MRT), and System Integrity Protection (SIP) have had significant blind spots.
Note: MRT is old, and that should have been mentioned in this post! Luckily, JC Alvarado caught this! XProtect Remediator is the replacement for MRT! Dr. Howard Oakley wrote a post on this on 11 Jan 2026: Last Week on My Mac: Does your laptop Mac get scanned for malware?
For example, Gatekeeper only assesses files downloaded through channels that set the quarantine attribute; malware fetched with curl or installed via Homebrew bypasses it. At one point, XProtect had only a few dozen signatures, according to SentinelOne (no longer true today, with at least ~300 as of mid-2024), and relied on Gatekeeper’s quarantine tags to trigger, so skilled malware could slip by easily at that time.
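The curl point is easy to check yourself: Gatekeeper assessment keys off the com.apple.quarantine extended attribute, which browsers apply and most command-line tools don’t. A quick sketch:

```c
// quarantine_check.c — does this file carry the com.apple.quarantine xattr?
// Files fetched with curl (or written by many CLI tools) usually won't,
// so Gatekeeper never evaluates them on first launch.
#include <stdio.h>
#include <sys/xattr.h>

int main(int argc, char *argv[]) {
    if (argc != 2) { fprintf(stderr, "usage: %s <path>\n", argv[0]); return 1; }

    char value[256] = {0};
    ssize_t len = getxattr(argv[1], "com.apple.quarantine",
                           value, sizeof(value) - 1, 0, 0);
    if (len < 0) {
        printf("%s: no quarantine attribute (Gatekeeper will not assess it)\n", argv[1]);
    } else {
        // Format is roughly: flags;timestamp;agent name;UUID
        printf("%s: quarantined -> %s\n", argv[1], value);
    }
    return 0;
}
```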
To my knowledge, Apple’s MRT runs at boot and periodically on fixed paths. SIP blocks some write paths. But, attackers have discovered ways to bypass SIP (and some SIP-protected areas still require user approval to modify). In short, Apple’s native defenses leave gaps that EDRs would ideally fill.
See:
- Uncovering the security protections in MAC: XProtect and MRT by Shubham Dubey
- Built-in macOS Security Tools by Stuart Ashenbrenner
- Protecting against malware in macOS by Apple
- macOS Adload: Prolific Adware Pivots Just Days After Apple’s XProtect Clampdown by Phil Stokes
- MRT: what do we know about it? by Dr. Howard Oakley
- Analyzing CVE-2024-44243, a macOS System Integrity Protection bypass through kernel extensions by Jonathan Bar Or
Who is handling the gaps?
Unfortunately, that is not really how EDR capabilities have panned out thus far.
Unless you plan to routinely reverse a vendor’s EDR agent (which they definitely won’t appreciate), you’re not going to get a reliable picture of what their product actually does on each operating system. You’re left relying on their word, and on the ability of your own engineers, macOS-focused or not, to vet EDR claims through testing.
One way to validate an EDR vendor’s claims is to chat with their support and sales teams. Are they comfortable with putting you in contact with a product engineer? If they can’t explain something as basic as the Endpoint Security API on macOS, that’s a red flag. Additionally, if they advertise features that don’t exist on the platform or stitch together Windows, Linux, and macOS documentation to imply unrealistic functionality, that’s another.
If the product is so opaque that you can’t understand its behavior from documentation, support, or public technical resources… and the only way to verify its behavior is through reverse engineering… then either the product is poorly designed, or the vendor is deliberately obscuring functionality. Neither is acceptable.
EDR effectiveness and bypasses
That said, relying entirely on an EDR is risky. Even the best products, people, and systems can be evaded.
For example, Aon’s Stroz Friedberg demonstrated a “Bring Your Own Installer” attack that completely disabled SentinelOne’s agent. This exploit abused a misconfigured update process to disable the protection for ~55 seconds, allowing a Babuk ransomware payload to run.
SentinelOne has since patched this, but the incident shows that attackers can bypass anti-tamper protections and kill the EDR agent if it is not configured properly. Stroz Friedberg noted this general technique could apply to any EDR with a similar upgrade/downgrade path.
Bypasses are comin’
Deployments of the open-source Sliver implant, wrapped with UPX and app bundlers, effectively bypassed many simple scanners. As Kyle Avery’s macOS EDR internals blog mentions, EDR solutions that don’t leverage Apple’s Endpoint Security API often rely on fragile userland hooks or callbacks.
Again, the NimDoor malware demonstrated that an uncommon language and payload-loading chain (Nim executables, AppleScripts, etc.) can slip past signature-based engines.
In practice, verifying any EDR requires testing it against (ideally) novel techniques.
Note: This is what I will be testing in the coming months to validate my theories, but my attacks will not all be novel. I will run both old and new malware on the system. I would feel weird calling any of them truly “novel” since I’m not developing and testing 0 days.
A true gap test would be to create a benign or isolated PoC (e.g., compile a C/Go/Nim program with obfuscation, or use a tool like Platypus) and see whether the EDR detects it. If an organization lacks macOS expertise, it should not assume the EDR’s marketing claims cover all threats.
Much EDR marketing highlights one-off or historical macOS limitations and claims to consistently provide security beyond what Apple offers. In reality, most products lean heavily on enforcing native mechanisms like Gatekeeper, code signing, and TCC policies, with little detection logic beyond what Apple already provides.
This lowers the barrier for EDR evasion, making bypasses more a matter of time than possibility.
Another note: When having discussions with EDR companies, remember that withholding vulnerabilities from Apple to maintain a competitive edge raises serious ethical concerns. If such behavior were substantiated, it could severely damage the vendor’s credibility and complicate any existing or future collaboration with Apple.
Technical analysis
Why would malware written in Go, Rust, or Nim be harder for EDRs to detect if they ultimately call the same macOS APIs?
This comes down to how EDRs recognize malicious behavior. Under the hood, a file delete is a file delete, whether done by a C program or a Go program. The macOS system call (e.g., unlink()) is the same.
However, EDR systems rely on a mix of methods, including API hooking, behavioral heuristics, and binary analysis. Malware in uncommon languages can thwart some of these methods, especially because (again) EDRs often rely on enforcing pre-existing detections or signatures rather than creating novel ones, despite claiming otherwise.
These areas include things like:
- Static and signature evasion
- EDR hooking limitations
- Monolithic static binaries
- Memory management and EDR signals
Static and signature evasion
Traditional AVs and EDRs might use signatures or heuristics tuned for common malware families. Go and Rust produce very large binaries with lots of runtime and library code statically linked. This “noise” makes it harder to spot a small malicious routine.
In the early days of Go malware, security products struggled to even parse the binaries. Phil Stokes and Raffaele Sabato reported that ReaderUpdate (a Mac malware loader) had been in the wild since 2020 but “passed relatively unnoticed by many vendors and remain[ed] widely undetected.”
One reason was that the malware authors kept rewriting it in different languages; first Python in a bundled form, then Crystal, Nim, Rust, and Go. Each rewrite drastically changed the binary footprint, making it difficult to obtain easy signature matches.
Many AV engines lacked YARA rules or models for these new-language binaries because they were rare. By the time patterns emerged, the malware would morph again.
EDR hooking limitations
Endpoint security on macOS often instruments high-level APIs or system libraries to detect malicious actions.
For example, an EDR might hook the execve() library call to catch process launches or monitor file writes via FS events. But languages like Go and Rust don’t always use the standard macOS libc calls. They may perform syscalls directly or use their own runtime. An attacker can also intentionally call lower-level interfaces.
On Windows, I’d think this is analogous to malware skipping the WinAPI (which EDRs hook) and doing direct syscalls. On macOS, if malware avoids calling common monitored APIs (say, it doesn’t call execve() from libc, but instead uses Mach system calls or raw syscalls directly), the EDR might miss it. Lesser-known languages can make such avoidance easier, as malware authors can use built-in runtime functions or inline assembly for syscalls.
In short, the EDR’s “watcher” might be looking at the wrong place. If it hooked a library function that never gets invoked in a Rust binary using syscalls, the malicious behavior slips by.
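To make the “wrong place” problem concrete, here’s a hedged sketch of deleting a file by issuing the BSD unlink syscall directly instead of calling libc’s unlink(). It’s x86_64-only (arm64 uses svc #0x80 with the number in x16) and assumes I have the numbering right: unlink is BSD syscall 10, and BSD syscalls sit in class 2 on XNU. A userland hook on unlink() never fires; a kernel-sourced ES file event still would.

```c
// direct_unlink_x86_64.c — delete a file without touching libc's unlink().
// Illustration only: x86_64, no carry-flag/errno handling. On XNU, BSD
// syscalls live in class 2, so the number is (2 << 24) | 10 = 0x200000a.
#include <stdio.h>

static long raw_unlink(const char *path) {
    long ret;
    __asm__ volatile (
        "syscall"
        : "=a"(ret)                      // result (or errno) comes back in rax
        : "a"(0x200000aL), "D"(path)     // rax = syscall number, rdi = path
        : "rcx", "r11", "memory");       // the syscall instruction clobbers rcx/r11
    return ret;
}

int main(int argc, char *argv[]) {
    if (argc != 2) { fprintf(stderr, "usage: %s <path>\n", argv[0]); return 1; }
    long rc = raw_unlink(argv[1]);
    printf("raw unlink returned %ld\n", rc);
    return rc == 0 ? 0 : 1;
}
```

The broader point: a userland hook only ever sees calls that actually flow through the symbol it patched.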
Monolithic static binaries
Go and Rust binaries are typically built as monolithic executables. Most of their runtime, networking, and cryptography logic is compiled directly into the binary rather than pulled in from shared system libraries at runtime. This reduces reliance on externally loaded dylibs that userland EDRs have historically hooked using dynamic library interposition.
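For reference, this is roughly what that interposition mechanism looks like: a replacement function plus a tuple in the __DATA,__interpose section, injected with DYLD_INSERT_LIBRARIES. It only ever sees calls that resolve through the interposed symbol, which is exactly what a mostly-static Go or Rust binary gives it very few of (and SIP plus the hardened runtime block this kind of injection for most interesting targets anyway).

```c
// interpose_open.c — classic dyld interposition hook for open(2) (sketch).
// Build: clang -dynamiclib interpose_open.c -o libhook.dylib
// Use:   DYLD_INSERT_LIBRARIES=./libhook.dylib ./some_unhardened_victim
// SIP and the hardened runtime ignore DYLD_INSERT_LIBRARIES for protected
// binaries, which is one reason modern macOS EDRs lean on Endpoint Security.
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <sys/types.h>

static int my_open(const char *path, int flags, ...) {
    mode_t mode = 0;
    if (flags & O_CREAT) {               // open(2) is variadic when creating
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }
    fprintf(stderr, "[hook] open(%s)\n", path);
    return open(path, flags, mode);      // dyld routes this to the real open
}

// The tuple dyld reads at load time: { replacement, replacee }.
__attribute__((used, section("__DATA,__interpose")))
static struct { const void *replacement; const void *replacee; }
interposers[] = { { (const void *)my_open, (const void *)open } };
```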
macOS does not support fully static executables, so Go and Rust processes still dynamically link against libSystem. However, higher-level functionality is usually not implemented in separate dylibs, which limits the number of convenient hook points available to EDRs.
Go binaries are only mostly static when CGO is disabled. Enabling CGO pulls in libc and the dynamic loader, restoring some conventional interception surfaces. Even then, large portions of Go’s networking, TLS, and runtime behavior remain inside the Go runtime rather than being delegated to common system libraries like libcurl or CFNetwork.
In contrast, many C and C++ applications rely heavily on widely used shared libraries. Instrumenting those libraries can give EDRs richer semantic insight into application behavior. Go’s internal net stack removes those hooks, forcing EDRs to fall back to lower-level telemetry (I think).
As a result, macOS EDRs can still see what the process does, such as file access, process execution, and network activity, via ES and NetworkExtension. What they may lose is higher-level context about which internal code paths caused those actions. That reduced semantic visibility can limit behavioral fidelity compared to more ✨conventionally✨ structured, dynamically linked applications.

Memory management and EDR signals
Uncommon runtime behaviors can confuse EDR heuristic models.
For example, a Nim or Go malware might allocate a huge amount of memory or use unconventional syscalls. These aren’t illegal per se, and because legitimate software written in those languages does similar things, EDRs can’t blindly flag them.
The result is that early or naive EDR models sometimes whitelisted the runtime behavior of these languages. Attackers noticed this and started writing malware to blend in.
It’s like camouflage: EDRs’ baseline models were trained mostly on C/C++ and script malware, so a Rust binary with lots of harmless-looking runtime calls could fly under the radar.
Examples
I’ve seen multiple macOS malware campaigns shifting to less-common languages. Besides ReaderUpdate, there’s RustBucket (a backdoor written in Rust targeting Macs) and Go-based loaders for Cobalt Strike on macOS.
These often initially slipped past detections. Adva Gabay and Daniel Frank reported new stealer malware in 2023, written in Rust and Nim, that easily evaded Apple’s built-in XProtect signatures.
Eventually, patterns get recognized, and EDRs catch up. Yet, there’s a window of opportunity where such malware is effectively “invisible” to defenses not tuned for it.
Importantly, the macOS APIs remain the same.
If an attacker in Go wants to encrypt files, they still have to call open() and write to files. If they want to steal data, they still have to use system calls to read files or send network traffic.
A well-architected EDR should catch the malicious pattern in theory. But that also expects them to know macOS internals well, which historically has been a lot to ask for.
The problem is in practice: EDRs rely on specific telemetry sources and behavioral analytics that can be subverted or overwhelmed by new implementations.
If the malware performs the API calls in a manner the EDR isn’t watching (or groups them in a way that looks “normal” for a legit app in that language), the EDR may not trigger an alert.
It’s kinda like speaking to the Mac EDR with an unfamiliar accent. The words (APIs) are the same, but the security “ear” doesn’t immediately recognize the threat. This is why malware authors seem to love using lesser-known languages now. It’s another form of obfuscation, at a higher level. It confuses static analysis and can delay behavioral detection.
Mac’s native security APIs and EDR visibility
Apple provides security mechanisms on macOS that Apple (lol) and third-party tools can use.
As I mentioned earlier, one interface is Apple’s Endpoint Security (ES) API that allows security software to subscribe to system events, like process executions, file writes, network connections, and more.
All major Mac EDR products (including SentinelOne, CrowdStrike, etc.) now use the Endpoint Security API, because Apple deprecated the old kernel extensions.
The ES API is powerful but also enforces Apple’s rules. For example, some events are “notify-only”: the security agent learns about them after they happen. Others are “auth” events, where the agent can block the action (e.g., deny an execution) before it proceeds. Apple essentially controls what an EDR can see and do. It’s a sandbox for security software.
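For contrast with the notify-only exec example earlier, here’s a sketch of that blocking path: subscribe to an auth event and answer it before the message deadline. The “/tmp/” policy is purely hypothetical (denying real execs like this would wreck a machine), and the same entitlement and root requirements apply.

```c
// es_block_exec.c — Endpoint Security *auth* client sketch (do not run as-is).
// Same requirements as the notify client: ES entitlement, root,
// and: clang -fblocks es_block_exec.c -lEndpointSecurity
#include <EndpointSecurity/EndpointSecurity.h>
#include <dispatch/dispatch.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    es_client_t *client = NULL;
    es_new_client_result_t res = es_new_client(&client,
        ^(es_client_t *c, const es_message_t *msg) {
            if (msg->event_type != ES_EVENT_TYPE_AUTH_EXEC)
                return;
            const char *path = msg->event.exec.target->executable->path.data;

            // Hypothetical policy for illustration: deny anything in /tmp.
            es_auth_result_t verdict = (strncmp(path, "/tmp/", 5) == 0)
                                           ? ES_AUTH_RESULT_DENY
                                           : ES_AUTH_RESULT_ALLOW;

            // AUTH events must be answered before msg->deadline or the client
            // risks being dropped; the final flag controls result caching.
            es_respond_auth_result(c, msg, verdict, false);
        });
    if (res != ES_NEW_CLIENT_RESULT_SUCCESS) {
        fprintf(stderr, "es_new_client failed: %d\n", res);
        return 1;
    }

    es_event_type_t events[] = { ES_EVENT_TYPE_AUTH_EXEC };
    es_subscribe(client, events, 1);
    dispatch_main();
}
```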
So, could an engineer directly use Apple’s APIs to protect a Mac environment, without an EDR?
To an extent, yes. Apple’s built-in tools use these facilities: XProtect, the built-in AV, uses a database of YARA signatures to catch known malware on download or execution.
This would work even better if you had an MDM, which would give you access to a lot more information and automation.
Gatekeeper uses Code Signing and Notarization checks to block untrusted apps (e.g., an app from the internet that isn’t notarized will be stopped with a warning). Transparency, Consent, Control (TCC) frameworks prompt the user or block apps from accessing sensitive data (camera, mic, etc.) unless explicitly allowed.
Apple even has a system extension for network filtering. Some third-party firewalls use this.
All these are native defenses that, when properly configured, provide a baseline of protection.
Oh, also! macOS Ventura/Monterey introduced XProtect Remediator, which runs periodic scans/remediations for known malware families in the background.
Apple has evolved XProtect from a simple signature check into a mini-AV that can quarantine certain malware.
However, Apple’s built-ins have limitations.
XProtect is signature-based. Thus, zero-days or new malware written in Rust, etc., can slip by until Apple updates signatures, and Gatekeeper can be bypassed by user decisions or creative malware.
Apple’s focus is also on not impeding user experience. They tend to err on the side of allowing execution with a warning rather than outright blocking enterprise-wide.
That’s where EDRs claim to add value: enforcement and visibility. EDRs tie into the ES API to monitor and, optionally, block suspicious behavior in real time, ideally sending telemetry to a central console. Do they do this in practice? Meh.
An Apple admin could build or supplement their own monitoring using ES API. Some open-source tools, like Objective-See’s ProcessMonitor or FileMonitor do this for specific event types. But doing so at scale, with correlation and response, is non-trivial. Again, that’s supposedly the service EDR vendors sell.
Note that both Apple’s built-ins and third-party EDRs are constrained by the same fundamental limits.
One huge limitation: in-memory monitoring.
Since removing kernel extensions, Apple, citing user privacy, does not allow apps to arbitrarily read another process’s memory or inject code into it.
This means neither Apple’s tools nor EDR agents can scan inside a running process’s memory for malicious code. As Patrick Wardle put it, Apple cares more about privacy than security in this realm:
“Due to privacy concerns, Apple does not allow any process, even security tools, to read the memory of another process… reflectively loaded payloads are safe from any non-kernel macOS security tool.”
In other words, if malware never touches the file system (nothing to scan on disk) and executes solely in memory, both Apple’s XProtect and EDRs are largely blind to payload bytes. They can see effects (a process might spawn or make network connections), but they can’t dump its memory to find an injected shellcode.
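You can see this restriction for yourself. The Mach route into another process’s memory starts with task_for_pid(), and without root plus debugging entitlements (and even then, not against platform or hardened binaries) it simply fails. A sketch:

```c
// peek_memory.c — show that cross-process memory access is gated on macOS.
// Expected outcome for nearly any <pid>: task_for_pid() returns an error,
// which is exactly why neither XProtect nor an EDR can dump another
// process's memory to find a reflectively loaded payload.
#include <mach/mach.h>
#include <mach/mach_error.h>
#include <mach/mach_traps.h>
#include <mach/mach_vm.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    if (argc != 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }
    int pid = atoi(argv[1]);

    mach_port_t task = MACH_PORT_NULL;
    kern_return_t kr = task_for_pid(mach_task_self(), pid, &task);
    if (kr != KERN_SUCCESS) {
        // The normal case: blocked unless you are root with debugger
        // entitlements, and SIP still protects platform binaries.
        fprintf(stderr, "task_for_pid(%d): %s\n", pid, mach_error_string(kr));
        return 1;
    }

    // If a task port were granted, reading memory would look like this:
    vm_offset_t data = 0;
    mach_msg_type_number_t count = 0;
    kr = mach_vm_read(task, 0x100000000ULL /* arbitrary address */, 16, &data, &count);
    printf("mach_vm_read: %s\n", mach_error_string(kr));
    return 0;
}
```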
Apple used to allow kernel extensions that could do memory forensics, but those have since been deprecated and locked down for security and stability reasons. So we’re left with a file-system- and behavior-centric detection approach on macOS.
That explains why certain obfuscations (like fileless malware or packing that only decrypts in RAM) are so effective at bypassing signature-based (and occasionally behavioral-based) Mac tools.
The EDR can only scrutinize what’s on disk or what calls are made through monitored interfaces. If an attacker encrypts their code and decrypts it only in memory at execution time, an EDR can’t analyze the decrypted code.
Additionally, Apple’s unified logging system, which records a wealth of system activity, is partially off-limits to third parties. It requires a special/private entitlement to stream all logs.
com.apple.private.logging.stream
Therefore, EDRs must install their own sensors rather than rely on Apple’s logs, unless Apple’s policies change or they somehow obtain that entitlement.
Read: Constructing our own Console.app: A Custom OSLog facility viewer by Osama Alhour.
Conclusion
In summary, Apple’s native APIs make basic malware detection “easy” (via signatures and event notifications) but also level the playing field: all EDRs get the same types of events from Endpoint Security. There’s no secret extra feed that one EDR has, and others don’t.
The differentiation comes in what they do with those events. But it’s important to realize that if something is outside the scope of Apple’s allowed telemetry (e.g., scanning another process’s memory, or blocking an operation that Apple doesn’t allow third parties to block), then no EDR can magically overcome that.
They are all limited to the hooks and kernel-provided data that macOS exposes.
If you enjoyed this post on evading EDRs with lesser-known languages, consider reading Code Obfuscation Techniques on macOS: Beyond Packers.

