Monorepo p2 - The CI build and deploy

15th Jun 2023


This is the second part of a series where I go through my monorepo setup with GitHub Actions deploying to Vercel. Links to all the articles in the series are at the bottom of the page.

In this article we will look closely at the build and deploy steps of the pipeline.

CI Pipeline

The main CI pipeline is handled by Turborepo. Any package which can be deployed defines a ci script within the package.json. This means that I can entrust Turborepo with only running changed applications as well as keeping the setup for each application encapsulated within.

Here is a snippet from the Github workflow for what we will go through in this article.

- name: Turbo ci pipeline
  run: yarn turbo run ci
- name: Assign domain name alias
  run: "yarn turbo run ci:domains"
  env:
    GIT_REF_NAME: ${{ github.ref_name }}
    GIT_SHA: ${{ github.sha }}
    PR_NUMBER: ${{ github.event.pull_request.number }}

You might notice that, apart from the generic git values above, I don't define application environment variables at a global level or on a step within the GitHub workflow. Anything defined there would be shared across all application deployments, and it is important to keep environment variables for each application separate so that they don't accidentally bleed into each other and give me grief.

I chose to make use of my existing investment in Doppler to handle environment variables. Using the Doppler CLI you can load the environment variables from a particular configuration into the run of a script. I use this within the "ci" task on an application.

"scripts": {
"ci": "doppler run -p appname -c $VERCEL_ENV -- ./scripts/",

Because Doppler is also responsible for Vercel runtime variables, this meant I didn't need to load the Vercel project's environment.
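The `--` in that ci script separates Doppler's own flags from the command it wraps: the command runs as a child process with the chosen config's variables injected into its environment. A minimal sketch of that injection pattern, with plain `env` standing in for Doppler (`FAKE_SECRET` is a made-up example variable):

```shell
# Injection pattern sketch: the child process sees the variable,
# the parent shell does not.
env FAKE_SECRET=shh sh -c 'echo "child sees: $FAKE_SECRET"'
echo "parent sees: ${FAKE_SECRET:-nothing}"
```

This scoping is what keeps each application's variables isolated to its own ci run.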

With the environment set up, all that is left is the vercel build and vercel deploy.

Vercel Build and Deploy

Both of these steps need to be aware of the VERCEL_ENV variable which is global to the whole workflow. If we are deploying to production then the vercel CLI expects the --prod argument passed in.

For example, I have a reusable script which does the following:

if [ "$VERCEL_ENV" = "production" ]; then
vercel build --prod --token=$VERCEL_TOKEN
vercel build --token=$VERCEL_TOKEN

My reusable script is a little more involved. It uses the same pattern, capturing either the deployment url or the stderr from the deploy command.

I call the deploy script with two arguments:

  1. The name of the application I am deploying (-n): used for displaying logs and appending PR comments
  2. The markdown file where it will output the deployment result (-o)

../../scripts/ -n soniq -o ../../
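The snippets further down use $name and ${output}, so the script presumably parses those two flags into variables. A minimal sketch of that parsing using getopts (the parsing itself is my assumption, not the author's code):

```shell
#!/bin/sh
# Hypothetical flag parsing for the deploy script:
#   -n  application name (used in logs and PR comments)
#   -o  markdown file to append the deployment result to
while getopts "n:o:" opt; do
  case $opt in
    n) name=$OPTARG ;;
    o) output=$OPTARG ;;
    *) exit 1 ;;
  esac
done
echo "Deploying $name, writing results to $output"
```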

After an initial setup, the script runs the vercel deploy.

# Deploy and save stdout and stderr to files
if [ "$VERCEL_ENV" = "production" ]; then
  vercel deploy --prebuilt --prod --token=$VERCEL_TOKEN >_tmp_deployment-url.txt 2>_tmp_error.txt
else
  vercel deploy --prebuilt --token=$VERCEL_TOKEN >_tmp_deployment-url.txt 2>_tmp_error.txt
fi
# capture the exit code for the check below
code=$?

On a successful deploy, I append a line to the output markdown which will be used as a PR comment later. If the deploy fails then the script logs the error and exits with a non-zero exit code to stop the workflow.

# check the exit code
if [ $code -eq 0 ]; then
  deploymentUrl=`cat _tmp_deployment-url.txt`
  echo "$name: <$deploymentUrl>" >> ${output}
  echo "Deployed $GITHUB_REF on $name: $deploymentUrl"
else
  echo "ERROR: There was an issue deploying:"
  cat _tmp_error.txt
  exit 1
fi

Turborepo Caching

I want to make a note here about the caching I applied to the Turborepo ci script. Because a PR pipeline is likely to run multiple times, once after each commit, it is also likely that not every app will be redeployed on every push. Skipping unchanged apps speeds up the pipeline considerably, but we don't want to lose the results of previous runs.

With this in mind, the ci script caches the vercel output as well as the output txt files from the script. These output txt files from the deploy script are used as inputs for assigning domains. This means we only assign domains when deployments change.

"ci": {
"dependsOn": ["^prebuild", "^test", "test", "^build"],
"inputs": [
"outputs": [
"ci:domains": {
"inputs": ["_tmp_deployment_url.txt", "_tmp_error.txt"],
"env": ["GIT_REF_NAME", "PR_NUMBER"]

When a subsequent run is triggered, any previously deployed urls will still be accessible in the cached _tmp_deployment-url.txt file.

Assigning Domains

At this point in the process, there may have been deployments across the apps within the monorepo which have a unique vercel domain attached to them. This is fine for some of the apps, but for Soniq, I want a couple of things:

  1. A PR domain like pr-123.domain.tld
  2. If the deploy happened on the canary branch I want to assign the canary.domain.tld domain

Let's take a moment to look at how I implemented both of these scenarios.

The Vercel CLI includes an alias command which can be used to point one domain to another. We will use it to point the desired domain (pr-123.domain.tld) to the deployed preview domain.

I get the deployed preview domain by reading the _tmp_deployment-url.txt file, which is created or restored from cache after a successful deploy. The $VERCEL_TOKEN is an environment variable set on the entire GitHub workflow.

Here is the relevant part of my script which deals with this assignment. It is called from a ci:domains script within the app's package.json.

if [ -f ./_tmp_deployment-url.txt ]; then
  deploymentUrl=`cat _tmp_deployment-url.txt`
  echo "Found deployment url: $deploymentUrl"

  # Are we deploying the canary branch
  if [ "$GIT_REF_NAME" = "canary" ]; then
    echo "Assigning canary alias"
    aliasDomain="canary.domain.tld"
  # PR_NUMBER string length is non-zero
  elif [ -n "$PR_NUMBER" ]; then
    echo "Assigning PR alias"
    aliasDomain="pr-$PR_NUMBER.domain.tld"
  fi

  if [ -n "$aliasDomain" ]; then
    echo "Aliasing $deploymentUrl to $aliasDomain"
    vercel alias --token=$VERCEL_TOKEN "$deploymentUrl" "$aliasDomain"
  fi
fi

Next is Part 3: Updating the Pull Request