# Speeding up firebase deploy
At $dayjob, we use GCP and Firebase to manage deployments (Firebase Functions). We currently run over 100 Firebase Functions — one main API handler and many cron, Pub/Sub, and task queue triggers. A full deployment takes around 20 minutes, which gets rather annoying when you deploy multiple times a day.
We recently migrated our Firebase Functions from v1 to v2. The migration was not exactly seamless, but we managed (it probably deserves its own post). The good news is that v2 functions run on Cloud Run, and Cloud Run lets you run any Docker image as a container.
Here’s how we cut deploy times for 100+ Firebase Functions in half.
## tl;dr
Let Firebase deploy one function (e.g. `api`). This builds an image, pushes it to GCP Artifact Registry (a container registry), and uses that image to deploy your function. You can then take this image and update all other functions to use it.
- Deploy one function with `firebase deploy`
- Get the Docker image from the newly deployed container
- Swap the Docker image in all of your other functions to use the new image
(Some steps are missing here, but you get the gist)
## Firebase deploy limitations
### Rate Limits
You can trigger a deploy only 60 times per minute (see Rate Limits). This means your deployment can’t just be `firebase deploy`: with 100+ functions it starts hitting the rate limits and deploys start failing. `firebase-tools` performs some automatic retries, but they’re fairly naive and uncontrollable.
So you need to write a custom deploy script that chunks the functions (e.g. by 50) and deploys each chunk. You’ll also need to handle the rate limiting yourself, as in the sketch below.
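Here’s one way such a chunked deploy could look (a minimal bash sketch, not our production script; the function names and the 60-second pause are placeholder choices):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical list of exported function names; in practice you'd
# generate this from your codebase.
FUNCTIONS=(api syncUsers nightlyReport onUserCreated)

CHUNK_SIZE=50
for ((i = 0; i < ${#FUNCTIONS[@]}; i += CHUNK_SIZE)); do
  chunk=("${FUNCTIONS[@]:i:CHUNK_SIZE}")

  # Build an --only target like "functions:api,functions:syncUsers".
  only=$(printf "functions:%s," "${chunk[@]}")
  firebase deploy --only "${only%,}"

  # Crude pause between chunks to stay under the 60-per-minute limit.
  sleep 60
done
```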
### Deployments are slow
`firebase deploy` sets up all the infra you need for e.g. `onSchedule` triggers, Pub/Sub topics, etc. This is great when you add a new function (you don’t need to think about it or do it yourself). But it also reconciles the underlying services against what’s in your code on every deploy.
This reconciliation can take quite a while, especially because of the rate limits. If you know you haven’t made any changes to your infra config (e.g. changing the memory on a handler), you can avoid this reconciliation.
## Speeding up
If you’re using Cloud Functions v2, your functions run on Cloud Run under the hood. That means every time you deploy, Firebase builds a Docker image and pushes it to Artifact Registry — one image per deploy.
You don’t need to rebuild that image for every function if the runtime environment hasn’t changed. Instead, you can reuse the same image across multiple functions and skip the full reconciliation step.
### How it works
When you deploy a v2 function with `firebase deploy`, Firebase:
- Builds a Docker image based on your current code.
- Pushes that image to Artifact Registry.
- Deploys a Cloud Run service for each exported function, using the same image but different `FUNCTION_TARGET` values.
It seems Firebase does some wrapping so that the container knows which handler to invoke, and uses the `FUNCTION_TARGET` environment variable for that.
If two functions share the same code (just different handlers), they can safely point to the same image.
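You can see this for yourself by dumping the environment variables of any deployed v2 function’s Cloud Run service (a sketch; `$PROJECT` and `$REGION` are your own values, and the service name is the lower-cased function name):

```bash
# Print the env vars of the Cloud Run service behind a function.
gcloud run services describe api \
  --project="$PROJECT" \
  --region="$REGION" \
  --format="value(spec.template.spec.containers[0].env)"
# FUNCTION_TARGET=api should show up among the variables.
```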
### When to reuse an image
Before each deploy, check whether the function’s handler configuration changed (e.g., memory, timeout, triggers, schedule, …).
| Condition | What to do |
|---|---|
| Handler config changed | Run `firebase deploy` to let Firebase update infra and image. |
| Handler config unchanged | Reuse the existing image via `gcloud run services update`. |
### Step by step
- Have a custom deploy script to deploy your functions
- Deploy one function with `firebase deploy`
- Get the Docker image from the newly deployed container:

  ```bash
  IMAGE=$(gcloud run services describe "$SERVICE_NAME" \
    --project="$PROJECT" \
    --region="$REGION" \
    --format="value(spec.template.spec.containers[0].image)")
  ```

- Label the service with the latest commit hash (this is so you know how to look for changes in the handler config later):

  ```bash
  COMMIT_HASH=$(git rev-parse HEAD)
  gcloud run services update "$SERVICE_NAME" \
    --project=$PROJECT_ID \
    --region=${REGION} \
    --update-labels commit-hash="${COMMIT_HASH}"
  ```

- Check for diffs in your handler definitions between the latest deployed commit and the current commit
- For any functions with changes in the handler definition, deploy the handler with `firebase deploy` (chunked to avoid getting rate limited, see above)
- For any functions with no changes in the handler definition, swap the Docker image
- Check that all functions are using the latest Docker image
- Any new functions get deployed with `firebase deploy`
### Detecting handler config changes
To determine which functions need a full `firebase deploy` versus just a Docker image swap, you need to detect changes in the handler config between the last deployed commit and the current HEAD.
The approach is to use `git diff` with grep/ripgrep to check for changes in your handler definitions:
```bash
# Get the last deployed commit hash (from the commit-hash label set earlier)
LAST_DEPLOYED_COMMIT="<commit-hash-from-label>"

# Check for changes in handler config
git diff $LAST_DEPLOYED_COMMIT HEAD \
  | grep -E "(onSchedule|onRequest|onCall|topic|schedule)"
```

For each function, if the grep finds changes in the handler config (schedule definitions, Pub/Sub topics, HTTP settings, etc.), that function needs to be deployed with `firebase deploy`. If there are no config changes, you can safely swap the Docker image.
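The `<commit-hash-from-label>` placeholder can be filled in programmatically by reading the `commit-hash` label back off a deployed service (a sketch, matching the label set in the step-by-step above):

```bash
# Read back the commit-hash label set during the last deploy.
LAST_DEPLOYED_COMMIT=$(gcloud run services describe "$SERVICE_NAME" \
  --project="$PROJECT" \
  --region="$REGION" \
  --format="value(metadata.labels.'commit-hash')")
```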
You’ll want to tailor the grep pattern to match the specific patterns in your codebase. For example, if you’re using TypeScript with Firebase Functions SDK v2, you might look for patterns like:
- `onSchedule('...')` for cron triggers
- `onTaskDispatched()` for Cloud Tasks
- `onMessagePublished('...')` for Pub/Sub topics
- Config objects with `schedule`, `topic`, `memory`, `timeoutSeconds`, etc.
For a more robust solution, consider using ts-morph to parse your TypeScript AST and detect changes in function handler configs programmatically, which can help avoid false positives from comments or unrelated code changes.
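Whichever detection method you use, the per-function decision might look something like this (a bash sketch; the `FUNCTIONS` array and the one-handler-per-file layout under `src/handlers/` are hypothetical assumptions, and the image swap is the one described in the next section):

```bash
# For each function: if its diff touches config-like patterns, queue it
# for a chunked firebase deploy; otherwise just point it at the new image.
TO_DEPLOY=()
for fn in "${FUNCTIONS[@]}"; do
  if git diff "$LAST_DEPLOYED_COMMIT" HEAD -- "src/handlers/$fn.ts" \
      | grep -qE "(onSchedule|onRequest|onCall|topic|schedule|memory|timeoutSeconds)"; then
    TO_DEPLOY+=("$fn")
  else
    # Code-only (or no) changes: the freshly built image already has them.
    # Cloud Run service names are the lower-cased function names.
    gcloud run services update "${fn,,}" \
      --image="$IMAGE" \
      --project="$PROJECT" \
      --region="$REGION" \
      --async
  fi
done
```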
### Swapping the docker image
```bash
gcloud run services update <lower-case-function-name> \
  --image="$IMAGE" \
  --project="$PROJECT" \
  --region="$REGION" \
  --async
```

### Checking that functions are using the latest docker image
```bash
gcloud run services list \
  --project=${PROJECT_ID} \
  --region=${REGION} \
  --filter="status.conditions.status:True" \
  --format="json(metadata.name,spec.template.spec.containers[0].image,spec.template.metadata.annotations.'run.googleapis.com/cloudsql-instances')"
# then check that the image is the latest one
```
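To make the check mechanical, one option is to print only the services still running an older image (a sketch):

```bash
# List services whose image differs from the freshly built $IMAGE.
gcloud run services list \
  --project=${PROJECT_ID} \
  --region=${REGION} \
  --format="value(metadata.name,spec.template.spec.containers[0].image)" \
  | awk -v img="$IMAGE" '$2 != img { print $1 }'
```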
## Results

This swap brought the deploy time down from 20 minutes to about 10 minutes. It’s a bit more convoluted than a plain `firebase deploy`, but who wouldn’t want to deploy faster?
## Conclusion
While this approach adds complexity to your deployment pipeline, the time savings become significant as your function count grows (and especially so if you deploy multiple times a day).
This technique works because Cloud Functions v2 is essentially Cloud Run with Firebase’s deployment magic on top. By understanding this, you can selectively bypass the slow parts (full infrastructure reconciliation) while keeping the convenient parts (automatic trigger setup for new/changed functions).
If you’re just starting out or have fewer than 20 functions, stick with `firebase deploy`. But once you hit rate limits and 15+ minute deploys, this approach can make your deployment pipeline much more manageable.