Illustrative BlazingCDN results across workloads:

• AdTech bidder: 35–55% lower egress cost
• Game launcher: 20–40% p95 TTFB drop
• SaaS updater: 75–92% origin offload
• Docs portal: 96–99.5% cache-hit ratio
• GIS tiles: 30–60% egress cost reduction

A funny side-effect of scaling static traffic: the infra budget is itemized, but the carbon footprint is not. One team asked a simple question: "Can we cut costs and CO₂ at the same time just by fixing how we cache?"

Example: a SaaS updater pushing tens of TB/month worldwide. Before: flat cache rules, a 70–80% hit ratio, origins running hot, and power-hungry hardware concentrated in one region. After tightening caching and routing: an illustrative 95–99% hit ratio and 60–80% fewer origin requests, which let them downsize origin clusters and turn off a backup DC.

Why it worked:

• Anycast / RTT-based routing kept users on closer POPs, shrinking network miles per request
• Tiered cache + origin shielding stopped duplicate misses from hammering the origin
• Cache-Control with a long s-maxage + stale-while-revalidate turned "hot forever" assets into true edge-resident objects
• Pre-warming the top N binaries after each deploy avoided stampedes and cold-cache spikes

Quick test: pick your 5–10 largest static paths, tighten cache headers and enable shielding on a subset, then compare cache-hit ratio, origin RPS, and power draw / instance-hours over 2–4 weeks.

For those who've measured it: what's the cleanest way you've quantified "grams CO₂ per GB delivered" at the CDN tier vs the origin tier?

Disclosure: I work at BlazingCDN.
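The Cache-Control recipe mentioned above (short browser TTL, long edge TTL via s-maxage, plus stale-while-revalidate) can be sketched like this. The TTL values are illustrative assumptions, not the team's actual config:

```python
# Illustrative header builder for long-lived static assets.
# All TTL values below are assumptions -- tune per asset class.
def static_asset_headers(max_age=3600, s_maxage=31536000, swr=86400):
    """Build a Cache-Control header: short browser TTL (max_age),
    long shared-cache/edge TTL (s-maxage), and stale-while-revalidate
    so the edge can serve a stale copy while refreshing from origin
    in the background instead of making the user wait."""
    return {
        "Cache-Control": (
            f"public, max-age={max_age}, s-maxage={s_maxage}, "
            f"stale-while-revalidate={swr}"
        )
    }

print(static_asset_headers()["Cache-Control"])
```

The key design point is splitting browser TTL from edge TTL: a deploy can still bust browser caches quickly while the edge keeps the object resident for a year.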
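The post-deploy pre-warm step can be sketched as below. The hostname, paths, and helper names are hypothetical; the idea is just to fetch the top-N hottest binaries once so real users hit a warm edge instead of stampeding the origin:

```python
# Hypothetical pre-warm sketch; host and paths are illustrative.
import urllib.request

def build_prewarm_urls(base_url, paths, top_n=10):
    """Join the top-N asset paths onto the CDN hostname."""
    base = base_url.rstrip("/")
    return [base + "/" + p.lstrip("/") for p in paths[:top_n]]

def prewarm(urls, timeout=10):
    """GET each URL once; return (url, status-or-error) pairs."""
    results = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results.append((url, resp.status))
        except OSError as exc:  # URLError subclasses OSError
            results.append((url, repr(exc)))
    return results

# Example (hypothetical host and paths):
# prewarm(build_prewarm_urls("https://cdn.example.com",
#         ["/launcher/v2.3/win64.bin", "/launcher/v2.3/mac.bin"]))
```

In practice you would run this from one client per region (or rely on tiered cache to fan the fill out), since a cache fill is per-POP.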
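The before/after comparison in the quick test boils down to two numbers per window: cache-hit ratio and origin request rate. A minimal sketch over CDN log records; the `cache_status` field name and `"HIT"` value are assumptions, so map them to your provider's log schema:

```python
# Compute cache-hit ratio and origin RPS from a window of log records.
# Assumes each record has a "cache_status" field with "HIT"/"MISS" values.
def cache_metrics(records, window_seconds):
    hits = sum(1 for r in records if r["cache_status"] == "HIT")
    total = len(records)
    misses = total - hits  # misses are what actually reach the origin
    return {
        "hit_ratio": hits / total if total else 0.0,
        "origin_rps": misses / window_seconds,
    }

# Toy window: 95 hits and 5 misses over 60 seconds.
logs = [{"cache_status": "HIT"}] * 95 + [{"cache_status": "MISS"}] * 5
m = cache_metrics(logs, window_seconds=60)  # hit_ratio = 0.95
```

Track these per path prefix, not globally, so a single hot miss-heavy path can't hide inside an otherwise healthy aggregate.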
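On the closing question: one first-order way to estimate "grams CO₂ per GB delivered" per tier is energy intensity (kWh/GB) times grid carbon intensity (gCO₂/kWh), computed separately for edge and origin. The numbers below are placeholder assumptions, not measurements:

```python
def grams_co2_per_gb(kwh_per_gb, grid_g_per_kwh):
    """First-order estimate: energy per GB (kWh) * grid intensity (gCO2/kWh)."""
    return kwh_per_gb * grid_g_per_kwh

# Placeholder inputs, NOT measured values -- substitute your own
# metered power draw and your region's grid intensity.
edge = grams_co2_per_gb(kwh_per_gb=0.01, grid_g_per_kwh=300)    # 3.0 g/GB
origin = grams_co2_per_gb(kwh_per_gb=0.05, grid_g_per_kwh=450)  # 22.5 g/GB
```

With per-tier figures like these, origin offload translates directly into grams saved: every GB served from the edge instead of the origin saves the difference between the two estimates.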