I Crashed My Mac 5 Times So You Don't Have To: Mounting S3 Files on macOS

Why This Matters

The Problem: macOS Can't Speak S3 Files

Attempt 1: Native macOS NFS Mount → 💀 Kernel Panic (x5)

Attempt 2: Raw mount -t nfs4 via NLB → ❌ "access denied"

Attempt 3: efs-proxy Without TLS → ❌ "access denied"

Attempt 4: The IPv6 Detour → ✅ First Success (But Wrong Conclusion)

Attempt 5: efs-proxy ReadBypass → ❌ Proxy Crash Loop

Attempt 6: The Full Stack → ✅ It Works

The Benchmark: WebDAV Destroys SMB on macOS

Region Matters: ca-central-1 vs us-east-2

The Architecture

The Developer Experience: Two Commands

The Backstory: Mountpoint for S3 and the iPhone Backup That Almost Worked

What's Next: Use Cases I'm Excited About

A Note on WSL2

S3 Files vs. Mountpoint for Amazon S3

Security: What's Safe and What's Not

The Failure Table

Try It Yourself

Two days ago, AWS launched S3 Files — a managed NFS layer that turns any S3 bucket into a mountable filesystem. Sub-millisecond latency within AWS. Full read/write. Bidirectional sync. The AWS community collectively lost its mind, and rightfully so.

There's just one problem: it only works on AWS compute. EC2, Lambda, EKS, ECS. Not your Mac. Not your laptop. Not the machine where you actually write code.

I spent the last 48 hours fixing that. Along the way, I kernel-panicked my MacBook five times, got "access denied" in three different ways, discovered a crash bug in efs-proxy, and eventually built a tool that mounts S3 Files on macOS with two commands. This is the story of everything that went wrong, and the one thing that finally worked.

Why This Matters

As Corey Quinn put it, S3 has never been a filesystem — but now there's a real one sitting in front of it. Andy Warfield's team didn't just bolt a POSIX layer onto S3 and call it a day. They built a proper filesystem backed by EFS infrastructure, with S3 as the durable source of truth.

Think of S3 Files as another tier in the S3 hierarchy — a file system front end for hot, frequently accessed data that needs mutation, user interaction, or low-latency access. You create a file system on any bucket or prefix with no data migration. Your existing S3 data is immediately visible as files and folders.

The smart defaults are what make it feel magical: changes sync back to S3 approximately every minute; changes to S3 objects sync into the file system via EventBridge notifications; and data expires from the fast tier after 30 days by default (configurable) and rehydrates on next access.

AWS explicitly positions agentic AI as a first-class use case — multi-step, multi-process workloads where agents need to share state, read reference data, and produce outputs collaboratively. That's exactly the use case that got me excited enough to spend 48 hours making this work on a Mac. That's what I wanted. A native Mac folder backed by S3.
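A quick way to see that write-back behavior from a working mount. This is a sketch, assuming the PoC's /tmp/s3files mount point; the bucket name is a placeholder for whatever bucket the file system fronts:

```shell
# Write through the mounted folder...
echo "hello from the mount" > /tmp/s3files/sync-check.txt

# ...wait past the roughly once-per-minute write-back window described above...
sleep 90

# ...then confirm the object landed in the backing bucket.
# "my-s3files-bucket" is a placeholder, not a name from the article.
aws s3api head-object --bucket my-s3files-bucket --key sync-check.txt
```

The reverse direction (S3 object changes appearing in the mount via EventBridge) can be checked the same way with `aws s3 cp` followed by an `ls` on the mount.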
The Problem: macOS Can't Speak S3 Files

S3 Files requires three things that macOS cannot provide: NFSv4.2, mandatory TLS, and IAM authentication. Three hard requirements. Zero macOS support. Let's see how many ways this can fail.

Attempt 1: Native macOS NFS Mount → 💀 Kernel Panic (x5)

My first instinct was the obvious one. S3 Files exposes a mount target with a private IP in your VPC. I put an internet-facing Network Load Balancer in front of it (TCP 2049), pointed my Mac at the NLB, and ran a plain sudo mount -t nfs against it.

The screen went black. Hard reboot. I tried again with different NFS options. Black screen. Reboot. I tried vers=4.0 explicitly. Black screen. Reboot. Five kernel panics in total.

macOS NFSv4 bugs are well-documented — the client chokes on protocol features it doesn't understand. When S3 Files responds with NFSv4.2 capabilities, the macOS NFS client doesn't gracefully degrade. It crashes the kernel.

Lesson: macOS NFSv4 is not just old — it's actively dangerous when pointed at a v4.2 server.

Attempt 2: Raw mount -t nfs4 via NLB → ❌ "access denied"

OK, so macOS is out. I spun up a Docker container running Amazon Linux (which has a proper NFSv4.2 client) and tried a raw NFS mount from inside the container. "Access denied."

This is where I started reading the efs-utils source code. S3 Files isn't a standard NFS server you can just connect to. Before any NFS traffic flows, the client must authenticate via a custom protocol called EFS RPC Bind — essentially proving "I have valid AWS credentials and I'm allowed to mount this filesystem." The efs-proxy binary handles this. A raw mount -t nfs4 skips the entire auth layer.

Lesson: You can't just NFS-mount S3 Files. The auth isn't optional — it's the only way in.

Attempt 3: efs-proxy Without TLS → ❌ "access denied"

I installed amazon-efs-utils in the container and tried mount -t s3files. The efs-proxy binary started up, but I hadn't configured TLS properly (Docker isn't EC2 — there's no instance metadata service, no AZ info, no automatic certificate provisioning). "Access denied." Again.

Digging into the efs-utils config, I found that efs-proxy wraps the TCP connection to port 2049 in TLS 1.2, then performs an RPC Bind — a custom handshake where the client proves it has valid AWS credentials.
Think of it as mTLS with IAM instead of certificates. Without TLS, the mount target drops the connection before auth even begins. I patched the config file (/etc/amazon/efs/s3files-utils.conf) to remove the {az_id} placeholder from the DNS format (no AZ metadata in Docker) and set the region via an environment variable.

Lesson: S3 Files enforces TLS on every single connection. No TLS, no mount. Period.

Attempt 4: The IPv6 Detour → ✅ First Success (But Wrong Conclusion)

At this point I was convinced the NLB was the problem. Something about how it proxied TCP was breaking S3 Files at the NFS protocol level. So I built a workaround: bypass the NLB entirely.

The mount target ENI had an IPv6 address (assigned by the subnet's IPv6 CIDR). My Mac has IPv6 connectivity. Docker Desktop doesn't — but I could bridge the gap with a Python TCP proxy on my Mac that accepts IPv4 from Docker and forwards to the mount target over IPv6. This required opening the mount target's security group directly to my public IPv6 address on port 2049. Not great — exposing a mount target to the internet is exactly the kind of thing Security Hub flags. But for debugging, I went with it.

Inside the container, I used mount -t s3files with mounttargetip pointing at the Mac's Docker gateway. And it worked. Files appeared. Read/write confirmed. S3 sync verified. First success after hours of debugging.

But why did it work when the NLB path didn't? I assumed it was because I'd eliminated the NLB. Wrong. The real reason: mount -t s3files automatically enables TLS. My earlier attempts used manual efs-proxy commands without TLS. The official mount helper adds it by default — S3 Files won't work without it.

I retried the NLB with mount -t s3files instead of manual efs-proxy. It worked perfectly. TLS was the missing piece all along. The NLB was fine — it's Layer 4; it just passes TCP bytes through, TLS and all. I deleted the TCP bridge, removed the IPv6 SG rule, and moved on.
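For reference, the IPv4-to-IPv6 relay the article built as a small Python proxy can be sketched with socat in one line. The IPv6 address below is a documentation-range placeholder, not the real mount target:

```shell
# Accept IPv4 connections from Docker Desktop on port 2049 and relay each one
# to the mount target's IPv6 address. [2001:db8::1234] is a placeholder.
socat TCP4-LISTEN:2049,fork,reuseaddr 'TCP6:[2001:db8::1234]:2049'
```

This path was deleted once TLS turned out to be the real fix; it's shown only to make the detour concrete.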
Lesson: when something works through path A but not path B, the difference might not be the path — it might be what path A does automatically that you forgot to do on path B.

Attempt 5: efs-proxy ReadBypass → ❌ Proxy Crash Loop

With the NLB working via mount -t s3files, I had one more problem. During my earlier manual efs-proxy debugging (before discovering the TLS fix), I'd hit a persistent crash: the proxy would connect, authenticate (BindResponse::READY), then crash the moment NFS traffic flowed. Restart. Crash. Restart. Hundreds of incarnations per second.

After reading the efs-proxy source and the mount.s3files Python wrapper, I found the culprit: the ReadBypass module. Remember how S3 Files serves large files directly from S3's throughput layer? ReadBypass is the efs-proxy implementation of that — it intercepts NFS read requests and serves them directly from S3, bypassing the NFS data path. This is designed for EC2 instances with direct VPC access to S3. In our setup — Docker container, patched efs-utils config, traffic routed through an NLB — the parser chokes on certain response formats and panics. It's not necessarily a bug in ReadBypass itself; it's that we're running efs-proxy far outside its intended environment.

The efs-proxy binary accepts a --no-direct-s3-read flag (I found this by running efs-proxy --help after the --no-read-bypass flag I guessed didn't exist). The mount -t s3files equivalent is the nodirects3read mount option. With ReadBypass disabled, the proxy forwarded NFS traffic cleanly. No crashes.

Lesson: efs-proxy ReadBypass doesn't work in our non-standard Docker + NLB setup. Use nodirects3read to disable it. On a normal EC2 instance, it likely works fine.

Attempt 6: The Full Stack → ✅ It Works

The winning combination: Docker for the NFSv4.2 kernel, efs-proxy for TLS and IAM auth, the NLB for connectivity, nodirects3read to avoid the crash, and WebDAV to hand the mount to macOS.

The Benchmark: WebDAV Destroys SMB on macOS

Wait — WebDAV? Why not just use the Docker mount directly? Because Docker Desktop runs in a Linux VM. The NFS mount lives inside that VM. To access it from macOS, you need to re-export it over a protocol that macOS can mount natively. The two candidates: SMB (Samba) and WebDAV. I benchmarked both.
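A rough shape for that comparison, not the article's actual harness; the mount points and test file are hypothetical placeholders:

```shell
# Compare metadata and read latency across the two re-export protocols.
# /tmp/s3files-smb and /tmp/s3files are hypothetical mount points for SMB
# and WebDAV exports of the same NFS-backed directory.
for mnt in /tmp/s3files-smb /tmp/s3files; do
  echo "== $mnt =="
  time ls -la "$mnt" > /dev/null             # metadata: SMB pays per-op round-trips
  time cat "$mnt/test-file.bin" > /dev/null  # sequential read over the protocol
done
```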
The results were... not close. WebDAV is 10–54x faster than SMB on macOS. Apple's SMB client is notoriously slow — it adds packet signing, metadata prefetching, and delayed TCP acknowledgments to every operation. A simple ls triggers dozens of round-trips. WebDAV is just HTTP requests — one request, one response, done.

I used WsgiDAV as the WebDAV server inside the container. It re-exports the NFS mount at /mnt/s3files over HTTP on port 8080. macOS mounts it natively via mount_webdav.

Region Matters: ca-central-1 vs us-east-2

Since the latency floor is internet RTT, I deployed the same CDK stack to two regions and benchmarked from my Mac in Canada. The CDK stack is region-agnostic — just change -c region=ca-central-1. Pick the region closest to you. For me in Canada, ca-central-1 shaves ~40% off interactive operations.

The Architecture

Your Mac talks WebDAV to a Docker container. The container talks authenticated, encrypted NFSv4.2 to S3 Files through an NLB. The NLB is Layer 4 — it just forwards TCP bytes without inspecting or modifying the TLS payload. S3 Files syncs bidirectionally with your S3 bucket. From your Mac's perspective, it's just a folder.

The Developer Experience: Two Commands

I wrapped everything in a CDK stack and a shell script. The entire setup is two commands: deploy the CDK stack, then run the mount script. docker-mount.sh up builds the container, starts efs-proxy, mounts S3 Files via NFS, starts the WebDAV server, and mounts WebDAV at /tmp/s3files. One command. To tear down: docker-mount.sh down.

The CDK stack provisions everything: VPC with public subnet, S3 bucket (versioning enabled — required by S3 Files), IAM role with the elasticfilesystem.amazonaws.com trust policy, the S3 Files filesystem and mount target, an NLB forwarding TCP 2049, and security groups locking it down.

The Backstory: Mountpoint for S3 and the iPhone Backup That Almost Worked

This isn't my first attempt at mounting S3 locally. Last year, I experimented with Mountpoint for Amazon S3 on Windows via WSL2. Mountpoint is a FUSE-based client that presents S3 as a local filesystem — but it's optimized for read-heavy workloads.
Writes are limited: you can create new files, but you can't modify existing ones in place.

I had a wild idea: back up my iPhone to S3 via iTunes. I mounted an S3 bucket using Mountpoint in WSL2, pointed iTunes at it, and kicked off a backup. The initial full backup actually worked — iTunes wrote all the files sequentially, which is exactly what Mountpoint handles well. Then I tried an incremental backup. iTunes needs to read existing backup files, compare them, and overwrite changed ones. Mountpoint doesn't support overwrites. The backup failed.

S3 Files changes this equation entirely. Full read/write. In-place modifications. Bidirectional sync. The filesystem semantics that iTunes (and every other desktop app) expects. I haven't re-tested the iPhone backup scenario yet with S3 Files, but the technical blockers that stopped Mountpoint are gone. This could finally be the path to backing up an iPhone directly to S3 with full incremental support.

What's Next: Use Cases I'm Excited About

Shared IDE workspace. Mount the same S3 bucket from multiple machines. Edit files in VS Code on your Mac, pick up where you left off on your Linux workstation. S3 is the source of truth. No git push/pull dance for work-in-progress files.

Agentic AI shared state. This is the one that keeps me up at night. AI agents — coding assistants like Kiro, autonomous agents like OpenClaw — increasingly work with files: markdown docs, config files, memory stores, tool outputs. Mount an S3-backed filesystem as the agent's workspace. Multiple agents can read and write to the same shared state. The data lives in S3, durable and accessible from anywhere. It's a shared brain for your agent fleet.

Cross-platform development. Same S3 bucket, three platforms: macOS (via Docker + WebDAV), Windows (via WSL2 — native NFSv4.2, no Docker needed), Linux (native mount -t s3files). One source of truth, zero file sync tools.

A Note on WSL2

If you're on Windows, you might not need Docker at all. WSL2 runs a real Linux kernel (5.15+) with full NFSv4.2 support.
You can install amazon-efs-utils directly in WSL2 and mount S3 Files natively — no WebDAV re-export, no container overhead. The mount appears as a Linux path accessible from Windows Explorer via \\wsl$\. You'd still need the NLB (or a VPN) for connectivity, but the protocol stack is native. I haven't tested this yet, but the kernel capabilities are all there.

S3 Files vs. Mountpoint for Amazon S3

For anyone wondering how these two compare: S3 Files is a managed NFS filesystem with S3 as the durable backend. Mountpoint is a lightweight FUSE client for reading large datasets from S3. Different tools for different jobs. S3 Files gives you the full filesystem semantics that applications like databases, IDEs, and backup tools expect. Mountpoint gives you fast, cheap reads for data pipelines.

Security: What's Safe and What's Not

The PoC uses an internet-facing NLB so Docker Desktop can reach the mount target. This sounds scary, but the actual risk is mitigated: every connection still requires TLS and valid AWS credentials, and the security groups restrict traffic to TCP 2049 from the NLB.

That said, for production use, replace the public NLB with AWS Client VPN. AWS documents this exact pattern for accessing EFS from on-premises networks, and it applies equally to S3 Files. VPN eliminates the internet-facing endpoint entirely. Also use private subnets with a Gateway endpoint for S3 — it's free and routes S3 traffic through the AWS network, bypassing NAT Gateway costs.

The Failure Table

Because every good debugging story deserves a summary of the wreckage: five macOS kernel panics (Attempt 1), two flavors of "access denied" (Attempts 2 and 3), one IPv6 detour built on a wrong conclusion (Attempt 4), and one efs-proxy crash loop (Attempt 5), before the full stack finally worked (Attempt 6).

Try It Yourself

The entire project is open source (MIT): github.com/awsdataarchitect/s3files-mount

Two commands take you from zero to a native Mac folder backed by S3: deploy the CDK stack, then run docker-mount.sh up with the NLB DNS name from the CDK output.

If you try it, break it, improve it, or find new use cases — I'd love to hear about it. Open an issue, submit a PR, or find me on LinkedIn.

S3 has never been a filesystem. But as of this week, your S3 data can live in one — even on your Mac.
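For the untested WSL2 path described earlier, a rough sketch on an Ubuntu distro. Everything below is an assumption layered on the article's claim that the kernel support exists: the efs-utils repo does ship a build-deb.sh script, but whether mount -t s3files works inside WSL2 is exactly the open question the article flags.

```shell
# Inside WSL2 (Ubuntu): install NFS client bits, then build amazon-efs-utils.
sudo apt-get update && sudo apt-get install -y nfs-common git binutils
git clone https://github.com/aws/efs-utils
cd efs-utils && ./build-deb.sh && sudo apt-get install -y ./build/amazon-efs-utils*.deb

# Mount natively (fs-12345678 is a placeholder filesystem ID), then browse
# from Windows Explorer at \\wsl$\Ubuntu\mnt\s3files.
sudo mkdir -p /mnt/s3files
sudo mount -t s3files -o tls fs-12345678:/ /mnt/s3files
```

As in the Docker setup, connectivity to the mount target (NLB or VPN) still has to exist before the mount can succeed.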

Command Reference

Attempt 1 (macOS native NFS mount, kernel panic):

```shell
sudo mount -t nfs -o vers=4 nlb-dns.amazonaws.com:/ /mnt/s3files
```

Attempt 2 (raw NFSv4.2 mount from inside the container, "access denied"):

```shell
mount -t nfs4 -o nfsvers=4.2 nlb-dns.amazonaws.com:/ /mnt/s3files
```

Attempt 4 (the IPv6 detour path):

Docker → Mac TCP bridge (IPv4:2049) → IPv6 → Mount Target (SG opened for my IPv6)

Attempt 5 (the efs-proxy crash loop):

```
ERROR efs_proxy::nfs::nfs_reader Error handling parsing error SendError { .. }
```

Full setup:

```shell
# 1. Deploy infrastructure (VPC, bucket, IAM role, S3 Files, NLB)
cd infra && npm install && npx cdk deploy -c region=ca-central-1

# 2. Mount
./docker/docker-mount.sh up <NLB_DNS_from_CDK_output>

# 3. Use it
ls /tmp/s3files/
echo "hello world" > /tmp/s3files/test.txt
open /tmp/s3files   # opens in Finder
code /tmp/s3files   # opens in VS Code
```

The two-command version:

```shell
cd infra && npx cdk deploy -c region=ca-central-1
./docker/docker-mount.sh up <NLB_DNS>
```

The smart defaults:

- Metadata pre-warms instantly. When you create a file system, all S3 key prefixes are mapped to directories and files. ls works immediately — no waiting. This is a massive differentiator from FUSE-based tools like Mountpoint, where ls on a large dataset can take minutes because it does a HEAD or LIST call per object.
- Small files (under 128KB) auto-sync on directory access. When you cd into a directory, code files, configs, and small assets are pulled into the fast tier automatically. No explicit fetch needed.
- Large files stream directly from S3. Files over 128KB are lazy-loaded on first read, and very large files may be served directly from S3's throughput layer without ever being copied into the file system tier. This is the ReadBypass optimization in efs-proxy — designed for EC2, but as Attempt 5 showed, it doesn't play well with our non-standard Docker + NLB setup.

The three hard requirements macOS can't meet:

- NFSv4.2 — macOS ships with NFSv4.0. The NFS client is baked into the kernel. You can't upgrade it.
- TLS encryption — S3 Files rejects every unencrypted NFS connection. No exceptions.
- IAM authentication — Every mount requires an EFS RPC Bind handshake with AWS credentials, handled by a binary called efs-proxy (part of amazon-efs-utils). This only runs on Linux.
The winning combination (Attempt 6):

- Docker (Amazon Linux) — provides NFSv4.2 kernel support
- efs-proxy — handles TLS + IAM authentication
- NLB — bridges Docker Desktop to the VPC mount target
- nodirects3read — avoids the ReadBypass crash
- WebDAV — re-exports the NFS mount to macOS as a native folder

Security mitigations:

- S3 Files enforces TLS encryption and IAM authentication on every connection — you can't mount without valid AWS credentials
- The NLB security group only allows inbound TCP 2049
- The mount target security group only accepts traffic from the NLB security group
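Put together, the winning combination above boils down to a container-side mount invocation roughly like this. A sketch: the filesystem ID is a placeholder, and the option set is inferred from the options named in the article rather than copied from the project's script.

```shell
# One mount that exercises the pieces in the list above:
# - mount -t s3files launches efs-proxy (TLS wrapping + EFS RPC Bind auth)
# - nodirects3read disables the ReadBypass path that crashed in this setup
# fs-12345678 is a placeholder filesystem ID.
sudo mount -t s3files -o tls,nodirects3read fs-12345678:/ /mnt/s3files
```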