We’ve been experimenting with running CI/CD jobs multi-cloud across AWS, Azure, and GCP, with a twist: each job gets routed to the lowest-carbon region available at the time, weighted against other factors like latency, performance, and provider preferences.

Our project is called CarbonRunner.io, and what surprised us most is how much the grid carbon intensity varies between cloud regions.

Some regions run largely on hydro, nuclear, or wind; others are still coal- or gas-heavy. By choosing where code runs based on carbon intensity, we’ve seen up to a 90% reduction in emissions per job and 25% lower cost than GitHub Actions, all from a one-line change:

jobs:
  deploy:
-   runs-on: ubuntu-latest
+   runs-on: carbonrunner-4vcpu-ubuntu-latest

For example:

We’ve been tracking the average grid intensity of jobs run on GitHub Actions and Azure (when no region is specified), and we’ve seen an average of ~285 gCO₂/kWh. By contrast, when we set a hard cap of 100 gCO₂/kWh, our jobs averaged just 48 gCO₂/kWh, with some running as low as 24 gCO₂/kWh.
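
To put those intensity figures in per-job terms, here’s a rough back-of-envelope sketch in Python. The 0.02 kWh energy figure is a hypothetical placeholder, not a measurement from our jobs:

# Back-of-envelope: grams of CO2 emitted per CI job at a given grid intensity.
# JOB_ENERGY_KWH is a hypothetical placeholder, not a measured value.
JOB_ENERGY_KWH = 0.02  # assumed energy drawn by one CI job

def job_emissions_g(grid_intensity_g_per_kwh: float, energy_kwh: float = JOB_ENERGY_KWH) -> float:
    return grid_intensity_g_per_kwh * energy_kwh

print(job_emissions_g(285))  # default scheduling: ~5.7 gCO2 per job
print(job_emissions_g(48))   # carbon-aware scheduling: ~0.96 gCO2 per job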

It’s been eye-opening to see how much of a difference regional scheduling can make — especially for something like CI/CD, where latency is often a lesser concern.

It’s made us wonder:

  • Should developers be thinking about this, or should platforms abstract it away?
  • Where’s the right tradeoff between performance, carbon, and cost?

If anyone else here is exploring carbon-aware infra, cloud sustainability, or multi-cloud scheduling, we’d love to hear what you’re seeing or thinking about.

Under the hood, CarbonRunner pulls live grid intensity data and applies weighted logic to select the best region for each job across providers.
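
For anyone curious what that weighted logic can look like, here’s a minimal sketch of the idea in Python. It is not CarbonRunner’s actual implementation; the weights, the hard cap, and the region figures are illustrative assumptions.

# Minimal sketch of carbon-aware region selection: enforce a hard carbon cap,
# then score the remaining candidates on weighted, normalized criteria.
# All weights, caps, and region figures below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    provider: str
    carbon_g_per_kwh: float  # live grid intensity
    latency_ms: float        # e.g. latency to the repo / artifact cache
    cost_per_min: float      # runner price

WEIGHTS = {"carbon": 0.6, "latency": 0.2, "cost": 0.2}  # example weighting
CARBON_CAP_G_PER_KWH = 100.0  # hard limit, as in the experiment above

def pick_region(candidates: list[Region]) -> Region:
    # Apply the hard carbon cap first; fall back to all candidates if none qualify.
    eligible = [r for r in candidates if r.carbon_g_per_kwh <= CARBON_CAP_G_PER_KWH] or list(candidates)

    def norm(value: float, values: list[float]) -> float:
        lo, hi = min(values), max(values)
        return 0.0 if hi == lo else (value - lo) / (hi - lo)

    carbons = [r.carbon_g_per_kwh for r in eligible]
    latencies = [r.latency_ms for r in eligible]
    costs = [r.cost_per_min for r in eligible]

    def score(r: Region) -> float:  # lower is better on every axis
        return (WEIGHTS["carbon"] * norm(r.carbon_g_per_kwh, carbons)
                + WEIGHTS["latency"] * norm(r.latency_ms, latencies)
                + WEIGHTS["cost"] * norm(r.cost_per_min, costs))

    return min(eligible, key=score)

# Made-up candidate data for illustration.
regions = [
    Region("eu-north-1", "aws", 24, 55, 0.008),      # hydro-heavy grid
    Region("northeurope", "azure", 310, 40, 0.009),  # gas-heavy grid
    Region("us-central1", "gcp", 95, 120, 0.007),
]
print(pick_region(regions).name)  # -> eu-north-1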

Please sign up for our waitlist, as we’re starting to onboard early users.