Before understanding how this blog is being hosted, we should first understand how it was created. To build this static website, I used the Hugo framework, writing only some basic HTML and JS on top of it. It’s a very simple and practical way to create a personal website: it’s not hard to learn, and you don’t need to write everything from scratch.
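If you have never used Hugo, this is roughly what getting started looks like. A quick sketch, assuming the Hugo CLI is installed; the project name and the ananke theme are just placeholders, not what this blog uses:

hugo new site myblog && cd myblog           # scaffold the project
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke
echo "theme = 'ananke'" >> hugo.toml        # config.toml on older Hugo versions
hugo new posts/my-first-post.md             # create a content file
hugo server -D                              # local preview, drafts included
hugo                                        # build the static site into public/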
Before moving to S3 and CloudFront configuration, we need to make sure the connection is secure by enabling HTTPS, as well as providing a custom domain (rodrigoascencao.me in my case).
I bought the domain on Cloudflare because it’s cheaper, and Route 53 is not included in the AWS Free Tier plan. You will need to point the Cloudflare domain at your CloudFront distribution and also generate a TLS certificate.
I used AWS ACM to generate the certificate using the DNS validation option. After that, all I had to do was add the CNAME record provided by ACM to my Cloudflare DNS. This process allows AWS to confirm that you really own the domain.
Once validation succeeds, the ACM certificate is issued and you can attach it to your CloudFront distribution, which gives the website HTTPS: the connection is encrypted and secure.
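For reference, here is a rough sketch of that flow with the AWS CLI (the certificate ARN is a placeholder returned by the first command). One detail worth knowing: certificates used by CloudFront must be requested in us-east-1.

# Request a certificate validated via DNS (must be in us-east-1 for CloudFront)
aws acm request-certificate \
  --domain-name rodrigoascencao.me \
  --validation-method DNS \
  --region us-east-1

# Read the CNAME record ACM wants you to create in your DNS (Cloudflare here)
aws acm describe-certificate \
  --certificate-arn <certificate-arn> \
  --region us-east-1 \
  --query "Certificate.DomainValidationOptions[].ResourceRecord"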
There are many different ways to host a static website, but in my case, since I’m an AWS specialist, I chose to use the modern AWS stack for static websites: S3 + CloudFront.
The configuration is pretty simple. You just need to store everything Hugo generates in public/* inside an S3 bucket, enable public access, and configure the proper bucket policies.
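As a sketch with the AWS CLI, assuming the example bucket name used in the policies further below (myhugoproject.com), the setup looks roughly like this:

aws s3 sync public/ s3://myhugoproject.com      # upload the generated site

# Block Public Access is on by default, so allow public bucket policies first
aws s3api put-public-access-block \
  --bucket myhugoproject.com \
  --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

# Grant anonymous read access to the objects
aws s3api put-bucket-policy --bucket myhugoproject.com --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::myhugoproject.com/*"
  }]
}'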
After enabling access at the bucket level, we need to create a CDN using CloudFront and set the S3 bucket as its origin. (Don’t forget to create a CloudFront invalidation after changes so the cache updates as quickly as possible.)
I’ll leave here the documentation I used to follow along: Jeromethibaud Documentation
Well, with S3 + CloudFront + DNS + ACM we basically have everything we need for the website to work in production, but we’re still missing one thing.
Ask yourself: if you wanted to update the content of the website, maybe by adding a new article, you would need to manually change the files, commit to GitHub, generate the public/* folder again, and upload everything one by one to S3… that would be a very annoying process, right?
That’s where CI/CD pipelines come in.
With GitHub Actions, we can create a CI/CD pipeline that automatically deploys our code to S3 whenever we push new code to the repository. This is a standard way of working with applications in DevOps culture, and that’s how we’re going to set up our website.
The process of connecting GitHub Actions with your AWS account is pretty interesting, so I’ll document it in the next section.
Before writing the pipeline code, we need to make sure that GitHub Actions has the proper access to our AWS resources (CloudFront and S3), and this is not as simple as it sounds.
To make this possible, we need to do a few things:
1 - Configure OIDC on AWS
In this scenario, GitHub will be the Identity Provider (IdP), the party that knows who you are, and AWS will be the Relying Party (RP), the party that requires authentication before allowing you to perform actions. Inside the ID token:
The aud (audience) claim will be the AWS Security Token Service (sts.amazonaws.com)
The sub (subject) claim will identify your GitHub repository (and optionally the branch or environment the workflow runs from)
On AWS, we must configure GitHub as an Identity Provider. This allows AWS to properly authenticate incoming requests from GitHub and validate that they are coming from a trusted source before assigning the correct IAM Role.
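As a sketch with the AWS CLI, registering GitHub’s token issuer as an OIDC identity provider looks like this (depending on your CLI version it may also ask for a --thumbprint-list):

# Register GitHub's OIDC issuer in IAM, with STS as the allowed audience
aws iam create-open-id-connect-provider \
  --url "https://token.actions.githubusercontent.com" \
  --client-id-list "sts.amazonaws.com"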
2 - Create a Role for deployment
Here we must create an IAM Role, which is basically an identity that can be assumed and that carries a set of permissions allowing actions against specific AWS resources.
By creating a dedicated role, we can limit permissions to only what is required, following the Principle of Least Privilege.
After creating the role, we need to attach policies to it, making sure that when GitHub Actions authenticates with AWS and assumes the role, it will have the right permissions to perform the actions we want.
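The piece that ties the IdP to the role is the role’s trust policy: it only allows sts:AssumeRoleWithWebIdentity calls whose token comes from your repository. Here is a sketch of it, assuming the AWS CLI; the account ID and <owner>/<repo> are placeholders, and the role name matches the one assumed in the workflow below:

# Trust policy: only GitHub Actions runs from this repository may assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:<owner>/<repo>:*"
        }
      }
    }
  ]
}
EOF

aws iam create-role \
  --role-name MyHugoProject_S3Deployer \
  --assume-role-policy-document file://trust-policy.json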
Policy to allow syncing to S3 with only the minimum required permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SyncToBucket",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::myhugoproject.com/*",
        "arn:aws:s3:::myhugoproject.com"
      ]
    }
  ]
}
Policy that allows the role to invalidate (flush) the CloudFront cache after a deployment:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FlushCache",
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "arn:aws:cloudfront::123456789012:distribution/CFDISTRIBUTIONID"
    }
  ]
}
With everything configured, we can move to the next step.
3 - Configure the GitHub Workflow
Here it’s pretty simple. We’re going to create some GitHub repository variables such as AWS_REGION, BUCKET_NAME, SITE_BASE_URL, CF_DISTRIBUTION_ID, and HUGO_VERSION, and store the proper values there.
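If you prefer the GitHub CLI to the web UI, the same variables can be created from a terminal. A quick sketch with placeholder values:

# Set the repository variables the workflow reads via ${{ vars.* }}
gh variable set AWS_REGION --body "us-east-1"
gh variable set BUCKET_NAME --body "myhugoproject.com"
gh variable set SITE_BASE_URL --body "https://rodrigoascencao.me"
gh variable set CF_DISTRIBUTION_ID --body "CFDISTRIBUTIONID"
gh variable set HUGO_VERSION --body "0.128.0"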
After that, we create the workflow based on the documentation:
# Workflow for building and deploying a Hugo site to S3
name: Deploy Hugo site to S3

on:
  # Runs on pushes targeting the default branch
  push:
    branches: ["main"]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to S3
permissions:
  contents: read
  id-token: write

# Allow only one concurrent deployment
concurrency:
  group: "hugo_deploy"
  cancel-in-progress: false

# Default to bash
defaults:
  run:
    shell: bash

jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    env:
      HUGO_VERSION: ${{ vars.HUGO_VERSION }}
    steps:
      - name: Install Hugo CLI
        run: |
          wget -O ${{ runner.temp }}/hugo.deb https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-amd64.deb \
          && sudo dpkg -i ${{ runner.temp }}/hugo.deb
      - name: Install Dart Sass
        run: sudo snap install dart-sass
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: recursive
      # - name: Install Node.js dependencies
      #   run: "[[ -f package-lock.json || -f npm-shrinkwrap.json ]] && npm ci || true"
      - name: Build with Hugo
        env:
          HUGO_ENVIRONMENT: production
          HUGO_ENV: production
        run: |
          hugo \
            --minify \
            --baseURL "${{ vars.SITE_BASE_URL }}/"
      - name: Upload a Build Artifact
        uses: actions/upload-artifact@v4.3.1
        with:
          name: hugo-site
          path: ./public

  # Deployment job
  deploy:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Download artifacts from previous workflow
        uses: actions/download-artifact@v4
        with:
          name: hugo-site
          path: ./public
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4.0.2
        with:
          aws-region: ${{ vars.AWS_REGION }}
          role-to-assume: arn:aws:iam::123456789012:role/MyHugoProject_S3Deployer
          role-session-name: GithubActions-MyHugoProject
          mask-aws-account-id: true
      - name: Sync to S3
        id: deployment
        run: aws s3 sync ./public/ s3://${{ vars.BUCKET_NAME }} --delete --cache-control max-age=31536000
      - name: CloudFront Invalidation
        id: flushcache
        run: aws cloudfront create-invalidation --distribution-id ${{ vars.CF_DISTRIBUTION_ID }} --paths "/*"
In short, on every push to main:
GitHub connects to AWS
Assumes the IAM Role using OIDC
Deploys the public/* folder into S3
Invalidates the CloudFront cache
Updates the website automatically
And that’s basically how this blog is hosted =)