CI/CD for a static website hosted in Azure Storage using Azure CDN
In the previous post, Static Website on Azure CDN/Storage vs Azure Static Web App, we compared static websites hosted on Azure CDN/Storage with the newer Azure Static Web Apps option. In this post we’ll learn how to implement CI/CD for our client application with Azure CDN/Storage.
We’ll reproduce the following scenario:
Prerequisites
- An Azure account with an active subscription. Create an account for free.
- A GitHub repository with your static website code. If you don’t have a GitHub account, sign up for free.
Steps
- Azure resources creation.
- GitHub workflow implementation.
1. Creating our infrastructure
In the first place, there are different ways to create Azure resources: the Azure Portal, the Azure CLI, ARM templates, or Terraform (these last two can be added to our CI/CD pipelines). In this post we’ll use Azure CLI commands to create our initial infrastructure.
Note: We could add our Azure CLI scripts to the pipeline itself, but without an infrastructure state to tell us whether resource properties have changed, re-running them would fail.
List of resources to create:
- Resource group.
- Azure Storage Account.
- Azure CDN profile & endpoint.
First, we need to open a terminal on our local computer and log in to our Azure account. I am using Windows Terminal, so the CLI redirects me to a link where I verify my PC with a code in order to authenticate my device with my Azure account, as pictured below:
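For reference, the command is simply:
az login
# or force the device-code flow described above:
az login --use-device-code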
Secondly, we need to create a resource group that holds related resources for an Azure solution with the following command:
az group create --name <resource group name> --location <location>
Abbreviating the parameters:
az group create -n <resource group name> -l <location>
Note: we can abbreviate the parameters, but check each command’s documentation to confirm the short forms it supports.
Personally, I prefer to store these values in environment variables and pass them as parameters, so we’ll type the following commands:
export RGROUP_NAME=Resources
export LOCATION=westeurope
az group create -n $RGROUP_NAME -l $LOCATION
We can observe the output of this command:
Note: The id of the resource group will be useful in later steps, so we should store it in an environment variable: copy this value from the output and add it to $RGROUP_ID.
export RGROUP_ID=/subscriptions/'id'/resourceGroups/Resources
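Alternatively, a small shortcut (not strictly needed) captures the id without copying it by hand:
export RGROUP_ID=$(az group show -n $RGROUP_NAME --query id -o tsv)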
Let’s begin with the creation of the Azure Storage Account:
export STORAGE_NAME=storage12345webhost
az storage account create -n $STORAGE_NAME -g $RGROUP_NAME -l $LOCATION --sku Standard_ZRS
As we can see, we pass the resource group name, the location (the same as the resource group’s), and the SKU of the storage account. The --sku parameter accepts the following values: Premium_LRS, Premium_ZRS, Standard_GRS, Standard_GZRS, Standard_LRS, Standard_RAGRS, Standard_RAGZRS, Standard_ZRS.
Note: SKUs (Stock Keeping Units) are Microsoft’s references for their products. For storage accounts, the SKU determines the performance tier (Standard or Premium) and the replication option (LRS, ZRS, GRS, and so on).
Our output:
The crux of the matter is enabling the static website feature on our storage account. This creates a blob container named $web inside the storage account; you can learn more in the Microsoft documentation about Azure Blob Storage here: Introduction to Azure Blob Storage.
We enable the feature with the following command:
export WEB_INDEX_DOC=index.html
export WEB_ERROR_DOC=error.html
az storage blob service-properties update --account-name $STORAGE_NAME --static-website --404-document $WEB_ERROR_DOC --index-document $WEB_INDEX_DOC
We can pass these optional parameters:
- --404-document Represents the path to the error document shown when a 404 error occurs, in other words, when a browser requests a page that does not exist.
- --index-document Represents the name of the index document, commonly index.html.
Output:
If we go to the Azure portal, we can check that the feature is enabled, with the related document names and paths:
We should not forget, however, that we want to bring the content closer to the end user, so we’ll create an Azure CDN profile and the related endpoint.
Let’s create the Azure CDN profile:
export CDN_PROFILE_NAME=staticwebsitecdn
az cdn profile create -g $RGROUP_NAME -n $CDN_PROFILE_NAME --sku Standard_Microsoft
Note: free subscriptions only accept the Standard_Microsoft SKU for Azure CDN profiles.
Output:
Next, we need to create an endpoint in our CDN profile that points to our storage account’s static website primary endpoint.
First, we should extract our origin endpoint; we can do this by listing the storage accounts in our resource group with the following command:
az storage account list -g $RGROUP_NAME
We’ll see an array with the storage accounts in our resource group. As we just created one, the list has a single element; inspecting it, we find the primaryEndpoints property, from which we’ll extract the origin for our CDN endpoint.
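If we prefer to extract it directly, a one-line query (a convenience, equivalent to reading it from the list output) looks like this:
az storage account show -n $STORAGE_NAME -g $RGROUP_NAME --query "primaryEndpoints.web" -o tsv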
Now we’ll create the CDN endpoint:
export CDN_ORIGIN=storage12345webhost.z6.web.core.windows.net
export CDN_ENDPOINT_NAME=<endpoint name>
az cdn endpoint create -g $RGROUP_NAME -n $CDN_ENDPOINT_NAME --profile-name $CDN_PROFILE_NAME --origin $CDN_ORIGIN --origin-host-header $CDN_ORIGIN
Note: the --origin parameter expects the host name, so we drop the https:// scheme from the primary endpoint value.
Note: Azure CDN origins such as Web Apps, Blob Storage, and Cloud Services require this host header value to match the origin hostname by default.
Output:
Now we can go to Azure Portal and check that our storage is connected with our CDN endpoint.
2. Setting up GitHub Actions workflow
After the required infrastructure has been created, we’ll implement our GitHub workflow. Let’s open our IDE (personally, I use VS Code) and add the .github/workflows folder at the root of the repository. Inside this folder we’ll create the pipeline.yml file where we’ll implement the workflow for our CI/CD.
In order to deploy our code, we’ll generate deployment credentials and add them as a secret in our GitHub repository. To generate these credentials, we’ll create a service principal that grants our repository access to deploy code to our Azure infrastructure.
Using Azure CLI and the following commands we’ll get the credentials.
export SP_NAME=staticwebsite-service-principal
az ad sp create-for-rbac --name $SP_NAME --role contributor --scopes $RGROUP_ID --sdk-auth
Note: For this step we’ll use the environment variable named $RGROUP_ID that we should have initialized when we created our Azure resource group.
Output:
The output is a JSON object with the role assignment credentials that provide access to your storage account, similar to the one below.
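For reference, the --sdk-auth output has roughly this shape (values redacted here, and several endpoint URLs omitted):
{
  "clientId": "<GUID>",
  "clientSecret": "<secret>",
  "subscriptionId": "<GUID>",
  "tenantId": "<GUID>",
  ...
}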
Copy this output and go to your repository and select Settings > Secrets > New secret. Create a new one and paste the output from the Azure CLI command into the secret's value field. Give the secret a name like AZURE_CREDENTIALS.
I’ve used React for my SPA. Basically, the scenario is a Customer Service that creates, updates, and deletes Incidences in order to assign them to the IT department. Therefore, we need to build our client web app and upload the build output to the storage blob container named $web.
Let’s begin to implement our pipeline.yml file:
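A minimal sketch of this first block; the workflow name and the variable values are assumptions that mirror the resources we created earlier:
name: Build and deploy static website

env:
  RGROUP_NAME: Resources
  STORAGE_NAME: storage12345webhost
  CDN_PROFILE_NAME: staticwebsitecdn
  CDN_ENDPOINT_NAME: <endpoint name>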
We just declare the name of our workflow and the environment variables that we’ll need in the Azure CLI inline scripts used to upload the output of our build job.
The next step is to declare which events will trigger our workflow.
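A sketch of the trigger block, matching what the next paragraph describes:
on:
  push:
    branches:
      - master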
As we can observe, we declared the on statement; it defines which actions on our repository trigger the execution of the workflow defined in our pipeline.yml file. Here, the workflow runs whenever someone pushes to the master branch.
After that, we’ll add the jobs to run when the workflow is triggered. In our case we need two jobs: building our application, and deploying the build files to our $web storage blob container.
First of all, let us understand that workflows run on hosted GitHub runners, meaning that every job is set up and runs on a different machine.
GitHub Actions provides actions that help us share data between two jobs (two executions on different machines): we can upload data as an artifact in one job and download that data in the second job.
We’ll use the actions/upload-artifact and actions/download-artifact actions.
Before defining our first job, we need to know that every job contains a sequence of tasks called steps. Steps can run commands, run setup tasks, or run an action from your repository, a public repository, or an action published in a Docker registry. Not all steps run actions, but all actions run as a step.
Now, let’s see how we define the build job:
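A minimal sketch of what the build job can look like; the folder name app, the Node version, and the action versions are assumptions:
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      # Sync the repository with the hosted runner
      - uses: actions/checkout@v2
      # Set up Node.js on the runner
      - name: Use Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '14.x'
      # Move into the SPA code, install dependencies, build
      - name: Build application
        run: |
          cd app
          npm install
          npm run build
      # Store the build output in an artifact
      - name: Upload artifact
        uses: actions/upload-artifact@v2
        with:
          name: build
          path: app/build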
As we observe, inside the job we define its name, which OS it runs on, and the related steps.
We're doing 4 steps:
- Use the checkout action, which is required to sync the GitHub repository with the hosted runner.
- Use the Node.js step, which uses an action to set up Node.js on the hosted runner, defining the Node version through the with statement.
- Run bash scripts; here we run 3 sequential scripts: moving into our SPA code, installing the dependencies from our package.json, and building the application.
- Use the upload-artifact action to store the build output in our artifact.
Note: Artifacts allow us to share data between jobs and to store data once a workflow has completed.
After that we’ll define the deploy job:
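A minimal sketch of what the deploy job can look like, using the azure/login and azure/CLI actions; the inline scripts mirror the CLI commands used earlier, while the action versions and artifact path are assumptions:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    # The deploy job depends directly on the build job
    needs: build
    steps:
      # Sync the repository with the hosted runner
      - uses: actions/checkout@v2
      # Download the files uploaded by the build job into the build folder
      - name: Download artifact
        uses: actions/download-artifact@v2
        with:
          name: build
          path: build
      # Log in with the service principal stored in AZURE_CREDENTIALS
      - name: Login to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      # Upload the build output to the $web container
      - name: Upload to blob storage
        uses: azure/CLI@v1
        with:
          inlineScript: |
            az storage blob upload-batch --account-name ${{ env.STORAGE_NAME }} --auth-mode key -d '$web' -s build
      # Purge cached assets so users always get the latest copy
      - name: Purge CDN endpoint
        uses: azure/CLI@v1
        with:
          inlineScript: |
            az cdn endpoint purge -g ${{ env.RGROUP_NAME }} -n ${{ env.CDN_ENDPOINT_NAME }} --profile-name ${{ env.CDN_PROFILE_NAME }} --content-paths '/*'
      # Log out from Azure
      - name: Logout
        uses: azure/CLI@v1
        with:
          inlineScript: |
            az logout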
Note: Take a minute to check the deploy job. We added the needs statement, meaning that our deploy job depends directly on our build job.
- Use the checkout action, which again syncs the GitHub repository with the hosted runner.
- Use the download-artifact action to place the files we uploaded previously into the build folder.
- Use the azure/login action with the credentials we previously stored in our repository secrets as AZURE_CREDENTIALS.
- Use the Azure CLI action in a step named Upload to blob storage, where we define an inline script that uploads the build files to our storage.
- Use the Azure CLI action in a step named Purge CDN endpoint; we do this to make sure our users always get the latest copy of our assets, since Azure CDN edge nodes cache assets until the asset’s time-to-live expires.
- Finally, use the Azure CLI action in a step named Logout, with an inline script to log out from Azure.
Note: As you can see, we can access secrets and environment variables using ${{ secrets.<nameOfSecret> }} or ${{ env.<nameOfVariable> }}.
Finally, we just need to stage our changes, commit them, and push to our master branch. Then go to the Actions section of your repository, check the list of runs, and open the workflow run named after your commit title.
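For example (the commit message here is illustrative):
git add .
git commit -m "Add CI/CD workflow"
git push origin master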
As we can see, the build job finished correctly while I was writing this; if we open the job, we can see our steps.
Lastly, if we go to the run’s Summary, we can see several interesting things:
- Workflow.
- Annotations like warnings.
- Artifacts created during runtime.
Personal conclusions
On the whole, there are different ways to implement this workflow. Perhaps we should also point out that we are not using any marketplace action to upload our assets to our storage.
From my point of view, when you are learning, it’s good to do all these steps yourself to understand how everything works and what an action does internally, for example this one: Azure Blob Storage Upload · Actions · GitHub Marketplace. It’s up to each of us to decide whether to use it or not.
In brief, there are multiple tools, such as Azure DevOps, Vercel, or CircleCI, to implement continuous integration and continuous deployment, but personally I think GitHub Actions is the easiest way to learn it.