Cross-border latency: two linker modes, wall-clock numbers, and what actually differs
On hosted remote Mac CI, cold yarn install time is dominated by TLS round trips and tarball bytes, not local CPU. Yarn Berry with PnP (nodeLinker: pnp) reduces millions of small file writes by keeping a single .pnp.cjs map plus compressed cache entries—often faster when your mirror is far away but APFS is fast. Classic node_modules (nodeLinker: node-modules) explodes inode creation; it can win when tools expect physical paths or when you already pay the cost for native postinstall scripts that rewrite trees.
| Mode | Typical win on remote Mac | Cross-border angle | Hybrid monorepo caveat |
|---|---|---|---|
| PnP | Lower metadata I/O after cache warm; fewer files for APFS to track during link phase | Same tarball count as node_modules; benefit shows after packages hit .yarn/cache | Metro, Jest, or legacy iOS scripts may need SDK integration or node-modules overrides per package |
| node_modules | Maximum compatibility with tools that stat deep trees | High RTT hurts more when many tiny packages serialize; tune concurrent install aggressively | Predictable for CocoaPods-adjacent scripts that assume hoisted paths |
| PnP + loose (intermediate) | Softens strict resolution for some edge packages | Network profile similar to strict PnP | Document why looseness exists—security reviewers will ask |
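For the intermediate "PnP + loose" row, the switch is a single setting; a minimal .yarnrc.yml fragment (pnpMode is a standard Yarn Berry option):

```yaml
nodeLinker: pnp
# Loose mode logs unsafe resolutions as warnings instead of failing them
pnpMode: loose
```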
Neither mode removes the need for a nearby registry mirror or HTTP/2-friendly egress. Treat marketing charts that claim “PnP is always 2× faster” as incomplete unless they cite your geography, mirror hop count, and whether installs are cold or warm.
npm / Yarn registry mirror parameters and .yarnrc.yml knobs
Commit a single source of truth for registry endpoints. For remote Mac CI, pair a mirror hostname approved by security with explicit timeouts so jobs fail loudly instead of hanging. The checklist below maps common Yarn 4 settings to operational intent.
- npmRegistryServer points to your mirror or upstream npm as policy dictates.
- networkConcurrency is capped for cross-border runners (start near 12–20, measure, then adjust).
- httpTimeout reflects the worst-case tarball fetch (often 60–120 s per request behind lossy links).
- httpRetry encodes bounded retries for 5xx/429 (align with internal SRE guidance).
- enableGlobalCache: true only on dedicated warm hosts with tenant isolation; otherwise keep the cache job-local.
- checksumBehavior is throw in CI unless you have an audited exception process.
```yaml
# Example .yarnrc.yml fragments — adjust hosts; never commit secrets
nodeLinker: pnp            # or node-modules
compressionLevel: mixed
npmRegistryServer: "https://registry.npmmirror.com"  # example mirror

# Throttle concurrent fetches on high-latency paths
networkConcurrency: 16
httpTimeout: 90000
httpRetry: 3

# Safer CI default: fail if lockfile would change
enableImmutableInstalls: true

# Optional: global cache on warm single-tenant Mac minis only
# enableGlobalCache: true

# Trusted internal mirror without TLS (rare; requires explicit allowlist)
# unsafeHttpWhitelist:
#   - "npm.internal.corp"
```
Pair this with Node 22+ and corepack enable so developer laptops and Apple Silicon agents resolve the same yarnPath release line.
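A minimal pinning sketch; the release number is illustrative, so substitute the Yarn version your team has audited:

```bash
# Run once on laptops and in CI image setup (Node 22+ bundles corepack)
corepack enable
corepack prepare yarn@4.5.3 --activate   # 4.5.3 is an example release
yarn --version                           # should match on every machine
```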
GitHub Packages and npm: dual-source authentication and token rotation
Hybrid teams frequently pull public packages from npmjs while private @org/* scopes live on GitHub Packages. Yarn supports per-scope registries via npmScopes; map tokens through environment variables injected in CI, not literals in git. Rotation should overlap: load NPM_READ_TOKEN and GITHUB_PACKAGES_TOKEN from your vault, and keep two valid secrets live for at least one pipeline cycle during each rotation.
```yaml
# .yarnrc.yml — use env vars; tokens live in CI secrets
npmScopes:
  myorg:
    npmRegistryServer: "https://npm.pkg.github.com"
    npmAlwaysAuth: true
    npmAuthToken: "${GITHUB_PACKAGES_TOKEN}"
npmRegistries:
  "https://registry.npmjs.org":
    npmAuthToken: "${NPM_READ_TOKEN}"
```
Least privilege: install jobs need GitHub tokens with read:packages only; publishing runs in separate workflows with its own narrowly scoped token. When mirrors sit in another jurisdiction, confirm your compliance team accepts their TLS inspection and logging story: registry mirror choice is partly legal, not only latency.
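During an overlap window, a smoke test like this can confirm the incoming secret reads a private package before the outgoing one is retired. The package name and the GITHUB_PACKAGES_TOKEN_NEXT variable are hypothetical placeholders:

```bash
# Hypothetical rotation check: substitute a real private package and your
# vault's secret naming before wiring this into a pipeline.
curl -fsS \
  -H "Authorization: Bearer ${GITHUB_PACKAGES_TOKEN_NEXT}" \
  "https://npm.pkg.github.com/@myorg%2finternal-lib" > /dev/null \
  && echo "incoming token can read private packages"
```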
CI parallelism, concurrent install, and disk watermark thresholds
Parallel CI jobs multiply concurrent tarball fetches. Cap total outbound connections per Mac build node using networkConcurrency and orchestrator-level job limits. On disk, node_modules trees routinely exceed tens of gigabytes in large monorepos; PnP shifts bytes into .yarn/cache but still needs scratch space for unpacking. Define three thresholds in your runbook: warn (for example, free space below 30 GB), throttle (pause new jobs), and stop (fail fast before a partial install corrupts the cache).
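A minimal sketch of such a gate, assuming macOS df semantics; the throttle and stop levels (20 GB and 10 GB) are placeholder values, and the throttle signal should feed your orchestrator rather than a marker file where possible:

```bash
#!/usr/bin/env bash
# Disk watermark gate for a Mac build node; thresholds are examples.
free_gb=$(df -g / | awk 'NR==2 {print $4}')    # BSD df: -g reports 1 GB blocks
if [ "$free_gb" -lt 10 ]; then
  echo "STOP: ${free_gb} GB free" >&2; exit 1  # fail fast before install starts
elif [ "$free_gb" -lt 20 ]; then
  echo "THROTTLE: ${free_gb} GB free"          # signal the queue to pause new jobs
elif [ "$free_gb" -lt 30 ]; then
  echo "WARN: ${free_gb} GB free"
fi
```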
On shared remote Mac pools, correlate disk alerts with yarn install phases and with native steps (Xcode DerivedData). For a fuller compile-cache angle, read the sccache vs ccache matrix; for conda-heavy scientific packages alongside JS, see conda/mamba/micromamba on remote Mac CI.
Failure retries, backoff, and lockfile consistency acceptance
Wrap yarn install --immutable with bounded exponential backoff (for example 2s, 4s, 8s) on transport failures, but never retry past a lockfile checksum violation—those retries hide supply-chain drift. Acceptance gates for merges should include: identical yarn.lock checksum across macOS agents, no YN0028-class immutable errors on clean checkout, and recorded Yarn release in build metadata.
```bash
#!/usr/bin/env bash
set -euo pipefail

corepack enable
corepack prepare yarn@stable --activate

attempt=1 max=4 delay=2
while [ "$attempt" -le "$max" ]; do
  echo "yarn install attempt $attempt/$max"
  if yarn install --immutable 2>&1 | tee "yarn-install-${attempt}.log"; then
    exit 0
  fi
  # Do not blind-retry immutable / checksum failures (e.g. YN0028)
  if grep -qE 'YN0028|checksumBehavior|checksum' "yarn-install-${attempt}.log"; then
    exit 1
  fi
  sleep "$delay"
  delay=$((delay * 2))
  attempt=$((attempt + 1))
done
exit 1
```
Teach reviewers that concurrent install flakes that disappear after rerunning without code changes often indicate mirror rate limits, not flaky tests—fix the registry tier before masking with jest.retryTimes on unrelated steps.
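One quick triage step, assuming Yarn's transport errors surface the HTTP status in the log text (the exact phrasing varies by release, so treat the pattern as an assumption to adjust):

```bash
# Count suspected rate-limit responses across recent install logs; the
# "response code 429" wording is an assumption about Yarn's error text.
grep -hoiE 'response code 429|too many requests' yarn-install-*.log | wc -l
```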
FAQ
Should we default new monorepos to PnP in 2026? Default to PnP when web tooling supports it out of the box and you want less inode pressure on shared Macs; default to node-modules when mobile or native bridges dominate and you cannot yet invest in resolver configuration.
Does a registry mirror weaken supply-chain guarantees? Only if you bypass integrity checks. Keep checksumBehavior: throw, mirror only approved hosts, and audit mirror TLS like any other dependency proxy.
How does this relate to CocoaPods or SwiftPM? JavaScript install strategy is independent but competes for disk and network on the same host—schedule heavy JS installs before or after native prefetch based on which step is more deterministic for your queue.
Can we mix linkers in one repo? Yarn offers per-package escape hatches such as dependenciesMeta unplugged entries; use them sparingly and document why, because mixed modes complicate support.
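A minimal package.json fragment for that escape hatch; the package name is hypothetical:

```json
{
  "dependenciesMeta": {
    "some-native-pkg": {
      "unplugged": true
    }
  }
}
```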
Summary
Choose Yarn Berry PnP when inode churn and tree size hurt more than resolver work; choose classic node_modules when compatibility buys cheaper operations than tuning Metro or native scripts. In both cases, win remote Mac CI reliability with a governed registry mirror, scoped tokens for npm versus GitHub Packages, conservative concurrent install caps, disk watermarks, immutable lockfile gates, and honest retries that stop at checksum failures.
If you want Apple Silicon capacity colocated with your mirror strategy—predictable queues for Node 22+ pipelines—open pricing, review purchase options, and the help center on macpull.com (no account needed to read), then try a small runner pool before migrating your entire monorepo. The blog index stays updated with remote-cache and registry playbooks as we ship them.