Prerequisites and environment partitioning
Start with a dedicated service account (not a shared interactive login) so permissions and LaunchAgents stay stable across SSH sessions. Plan for two storage tiers even if you only have one disk today: a hot APFS volume or fast folder on internal NVMe for latency-sensitive shards, and a warm tier on an external USB4/Thunderbolt disk or a second volume for bulk archives. Document every environment variable your OpenClaw build reads—common patterns include a home-level config dir plus vendor caches for Hugging Face, Ollama, or llama.cpp—and align names with your installation troubleshooting guide so teammates reproduce the same tree.
- Pin Node 22+ and OpenClaw semver; record them in an internal runbook.
- Create `~/Library/Logs/OpenClawCache/` with mode `755` for the service user.
- Export `HF_HOME`, `OLLAMA_MODELS`, or equivalent in launchd `EnvironmentVariables`—never commit secrets.
- Verify `diskutil apfs list` output and mount paths survive reboot (avoid transient `/Volumes/Untitled` names).
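A preflight sketch for the service account can assert the points above before any pull; the `preflight` helper name is an assumption, and the variable list should match whatever your OpenClaw build actually reads:

```bash
# Preflight sketch (run as the service user): verify required env vars and the
# log directory before any model pull. Adjust the variable list to your build.
preflight() {
  local missing=0 var
  for var in HF_HOME OLLAMA_MODELS; do
    # ${!var} is bash indirect expansion: the value of the named variable.
    if [ -z "${!var:-}" ]; then
      echo "MISSING: $var" >&2
      missing=1
    fi
  done
  if [ ! -d "$HOME/Library/Logs/OpenClawCache" ]; then
    echo "MISSING: $HOME/Library/Logs/OpenClawCache" >&2
    missing=1
  fi
  return "$missing"
}
```

Wire this into your runner's setup phase so a half-configured account fails loudly instead of filling the wrong volume.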
Cache paths and tiering strategy
Pick one canonical path your jobs always use (for example `~/.openclaw/cache/models`). On disk, store blobs under tier-specific roots such as `/Volumes/FastSSD/oc-models` and `/Volumes/BulkDisk/oc-models-archive`. Replace the canonical directory with a symlink to the hot tier so CI scripts never hard-code mount points.
```bash
# Example: canonical path points at fast tier (stop OpenClaw jobs first)
mkdir -p "/Volumes/FastSSD/oc-models"
rm -rf "$HOME/.openclaw/cache/models"   # backup if it was a real directory
ln -s "/Volumes/FastSSD/oc-models" "$HOME/.openclaw/cache/models"
readlink -f "$HOME/.openclaw/cache/models"
```
Promote models from warm to hot with `rsync -a --partial --inplace` or an atomic `mv` within the same volume. Demote cold weights along the reverse path after recording checksums.
| Decision | Hot tier (internal) | Warm tier (external / bulk) |
|---|---|---|
| Latency target | < 5 ms read p95 for small shards | Best-effort; OK for rare pulls |
| Symlink at canonical path | Default for active runway models | Use secondary symlink only if hot is full |
| Shared runner risk | Per-user symlink if multi-tenant | Good for org-wide read-only mirror |
Cache directory checklist
- Canonical symlink exists and survives reboot (`readlink` in CI preflight).
- Tmp/download dirs live on the same volume as the final blobs to avoid cross-device rename failures.
- Manifest or lockfile names version pins—tie cleanup scripts to those filenames.
- Document inode-heavy trees (small files) separately from a few multi-GB weights.
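The first checklist item can run as a CI preflight step; `check_cache_link` is a hypothetical helper name:

```bash
# Sketch: confirm the canonical path is a symlink resolving to a mounted tier.
check_cache_link() {
  local link="${1:-$HOME/.openclaw/cache/models}"
  if [ ! -L "$link" ]; then
    echo "FAIL: $link is not a symlink" >&2
    return 1
  fi
  local target
  target="$(readlink "$link")"
  if [ ! -d "$target" ]; then
    echo "FAIL: $target missing (volume not mounted?)" >&2
    return 1
  fi
  echo "OK: $link -> $target"
}
```

Run it before the first job after every reboot; a non-zero exit should stop the pipeline rather than let pulls land on the boot volume.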
Scheduled cleanup and log rotation
Never delete blindly while OpenClaw holds mmap handles. Prefer a cleanup window when CI is quiet—typical remote Mac pools use 02:00–04:00 local time. Use a user LaunchAgent with `StartCalendarInterval` (hour + minute) instead of cron on macOS unless you already standardize on `/usr/libexec/atrun`-style jobs; LaunchAgents run in the user's session and are easier to audit per tenant.
| Scheduler | Best for | Watch-out |
|---|---|---|
| LaunchAgent + StartCalendarInterval | Per-user Mac runners, GUI keychain access | Duplicate plists if multiple admins edit |
| crontab | Legacy parity with Linux docs | Diff PATH, no GUI session, easy permission drift |
| CI-triggered SSH script | Centralized policy from Git | Needs backoff if runners overlap time zones |
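A minimal per-user LaunchAgent matching the first row above can be generated from shell; the label, script path, and 02:30 slot are assumptions to adapt to your runbook:

```bash
# Sketch: write a per-user LaunchAgent that runs the cleanup script at 02:30.
plist="$HOME/Library/LaunchAgents/com.example.openclaw-cleanup.plist"
mkdir -p "$(dirname "$plist")"
cat > "$plist" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.example.openclaw-cleanup</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>/Users/svc-openclaw/bin/cache-cleanup.sh</string>
  </array>
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key><integer>2</integer>
    <key>Minute</key><integer>30</integer>
  </dict>
  <key>StandardOutPath</key>
  <string>/Users/svc-openclaw/Library/Logs/OpenClawCache/cleanup.out.log</string>
</dict>
</plist>
EOF
# launchctl bootstrap gui/$(id -u) "$plist"   # load it (run once per change)
```

Version the plist in Git so the "duplicate plists" watch-out becomes a reviewable diff instead of drift.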
For log rotation, append dated files under `~/Library/Logs/OpenClawCache/` and add a `/etc/newsyslog.d/` stanza or a weekly gzip pass in the same plist. Keep at least seven days of cleanup stdout to explain sudden cache misses.
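A `newsyslog.d` stanza along these lines keeps seven compressed daily archives; the file path and service-user name are examples to adapt:

```
# Sketch for /etc/newsyslog.d/openclaw-cache.conf (G = glob, Z = gzip)
# logfilename                                          mode count size when flags
/Users/svc-openclaw/Library/Logs/OpenClawCache/*.log   644  7     *    $D0  GZ
```

`$D0` rotates at midnight and `count 7` retains a week, matching the seven-day guidance above.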
```bash
#!/bin/bash
# Sketch: the plist's ProgramArguments runs this after an idle check
set -euo pipefail
# if pgrep -f openclaw-gateway >/dev/null; then exit 0; fi  # optional guard
find "$HOME/.openclaw/cache/tmp" -type f -mtime +3 -delete
find "$HOME/.openclaw/cache/tmp" -type d -empty -delete
```
Quotas and disk watermark thresholds
Pair quotas with simple `df` parsing in your patrol script. A practical 2026 matrix for shared Apple Silicon runners:
| Utilization | Action | Alert channel |
|---|---|---|
| ≥ 80% | Notify + schedule cleanup in next window | Slack/webhook INFO |
| ≥ 85% | Block new experimental model pulls | Pager WARNING |
| ≥ 90% | LRU eviction on tmp + demote to warm tier | Pager CRITICAL |
If your host supports per-volume quotas, set soft limits slightly below marketing capacity so APFS snapshots do not surprise you. Always check `df -h` and `df -i`; inode exhaustion mimics a full disk but will not be fixed by deleting a single 200 GB file.
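The watermark table can drive a small patrol helper; `classify` and `hot_pct` are hypothetical names, and alerting is left as plain output for you to wire into Slack or a pager:

```bash
# Sketch: map a utilization percentage onto the watermark actions above.
classify() {
  local pct="$1"
  if   [ "$pct" -ge 90 ]; then echo "CRITICAL: evict tmp LRU, demote to warm tier"
  elif [ "$pct" -ge 85 ]; then echo "WARNING: block new experimental pulls"
  elif [ "$pct" -ge 80 ]; then echo "INFO: schedule cleanup next window"
  else                         echo "OK"
  fi
}

# Percent-used of a tier; df -P keeps one line per filesystem, and column 5
# is the capacity figure (e.g. "42%"), which awk strips to a bare number.
hot_pct() {
  df -P "${1:-/Volumes/FastSSD}" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}
```

A patrol run is then `classify "$(hot_pct /Volumes/FastSSD)"`, and the CRITICAL branch is where the go-live checklist's non-zero exit belongs.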
Failure retry and troubleshooting
Wrap downloads with capped exponential backoff (for example 5s, 15s, 45s) and jitter so a fleet of Macs does not stampede a mirror. When a job fails:
- ENOSPC / “No space left” — confirm the symlink target volume, not only `$HOME`.
- Stale file handle — remount the external disk; verify APFS encryption unlock order at boot.
- HTTP 429 / 5xx — treat as registry policy; lower concurrency and respect `Retry-After`.
- Permission denied — compare umask vs. the service user; re-run `openclaw doctor` after macOS updates.
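The capped 5s/15s/45s backoff with jitter might be wrapped like this; `retry_with_backoff` and the `OC_BACKOFF` override are assumptions, and the retried command is whatever pull command your runner uses:

```bash
# Sketch: run a command with the capped backoff schedule from the text.
retry_with_backoff() {
  # Delay schedule is overridable via OC_BACKOFF for tests or tuning.
  local delays=(${OC_BACKOFF:-5 15 45}) attempt
  for attempt in 0 1 2 3; do
    if "$@"; then return 0; fi
    [ "$attempt" -lt 3 ] || break
    # 0-1 s of jitter (widen in production) so a fleet of Macs
    # does not retry against the mirror in lockstep.
    sleep $(( delays[attempt] + RANDOM % 2 ))
  done
  return 1
}
```

Keep 429 handling separate: a `Retry-After` header should override this schedule rather than race it.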
Cross-check gateway stability with failure recovery and retry patterns so automation does not fight a flapping daemon.
Go-live acceptance checklist
- Canonical symlink resolves on a cold boot before any job runs.
- Patrol script prints human-readable utilization and exits non-zero on >90% fast tier.
- Cleanup dry-run log archived with ticket ID; destructive mode only after two green dry-runs.
- CI references the canonical path only—no duplicate hard-coded mount URLs.
- Rollback doc explains how to swap symlink back to warm tier within five minutes.
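The rollback itself can be a two-line script; the paths reuse the earlier examples, and as with the original cutover, stop OpenClaw jobs first:

```bash
# Sketch: repoint the canonical path at the warm tier in one step.
mkdir -p "$HOME/.openclaw/cache"
# -f replaces the existing link; -n stops ln from following it into the old target.
ln -sfn "/Volumes/BulkDisk/oc-models-archive" "$HOME/.openclaw/cache/models"
readlink "$HOME/.openclaw/cache/models"
```

Because only the symlink changes, the five-minute budget is dominated by draining jobs, not by data movement.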
Summary and where to go next
Reliable OpenClaw model pulls on a remote Mac come from one canonical cache path, a symlinked hot tier, a documented warm archive, LaunchAgent cleanup windows, rotated logs, and explicit disk watermarks. Treat automation as part of CI: version the plist, test dry-runs, and keep failure retries separate from disk emergencies.
Explore more (no login to browse): read MacPull help, compare pricing, open remote Mac purchase, revisit the blog index, or continue the OpenClaw series with ClawHub CI pre-pull automation and gateway security hardening.
Dedicated Apple Silicon nodes give you predictable SSD headroom and quiet maintenance windows—exactly what tiered caches need to stay reproducible week after week.
Need Disk Headroom for Model Caches?
Rent a remote Mac Mini with fast NVMe and stable SSH. View pricing and purchase without logging in.