Hexo Serverless (part deux)

This document is a continuation of Putting the Hexo CMS into a Serverless world.


Time to make Terraform use the AWS S3 bucket we created to store our Terraform state, protecting it from local system failure or corruption.

Create a file called terraform.tf with this content:

terraform {
  required_version = ">= 0.12.3"

  backend "s3" {
    bucket  = "serverless-hexo-demo-tf-state"
    key     = "regions/us-east-1/account.tfstate"
    region  = "us-east-1"
    profile = "default"
  }
}

Remember to change the region and profile accordingly, as backend configuration cannot use variables at this time. Run terraform init to migrate the data from the local terraform.tfstate file to AWS S3. Terraform will also reinitialize the local state from the migrated state to ensure integrity.

Initializing the backend...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new "s3"
backend? Enter "yes" to copy and "no" to start with an empty state.

Enter a value: yes


Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
...

We could move on to creating a DynamoDB table for state locking, but for now let’s assume we are a one-person show and will add locking as we add members to the project. Even so, we still need a place to store and version our code for a plethora of reasons.
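For when that time comes, state locking is a small addition. A sketch, assuming a hypothetical table name (the table must use a hash key named exactly LockID):

```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "serverless-hexo-demo-tf-locks" # hypothetical name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # required key name for the S3 backend

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

The backend block in terraform.tf would then gain a matching `dynamodb_table = "serverless-hexo-demo-tf-locks"` line, followed by another terraform init.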

Todo: Add info about creating a repository and committing to it.
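In the meantime, a minimal sketch of initializing a local repository; the .gitignore entries keep state files and provider plugins out of version control (any remote URL you push to is up to you and not shown here):

```shell
git init
echo "terraform.tfstate*" >> .gitignore   # never commit state files
echo ".terraform/" >> .gitignore          # provider plugins are re-fetched by init
git add .
git commit -m "Initial commit of Terraform configuration"
```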

Now that we have encryption set up for things moving forward, we need something to show for it all. Let’s make the AWS S3 bucket that will hold the code for our Hexo Serverless website. Go back to the s3.tf file and add the following content to create a bucket to hold the content.

resource "aws_s3_bucket" "serverless_hexo" {
  bucket = "${var.s3_bucket_website_content}"
  acl    = "public-read"
  tags   = "${merge(var.tags, map("Name", "Serverless Hexo Demo Website"))}"

  website {
    index_document = "index.html"
    error_document = "error.html"
  }

  versioning {
    enabled = true
  }

  logging {
    target_bucket = "${aws_s3_bucket.bucket_logging.id}"
    target_prefix = "serverless_hexo_website"
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        # using a CMK via SSE-KMS, you must sign requests
        # using default AES256 via SSE-S3, you do not
        sse_algorithm = "AES256"
      }
    }
  }

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::${var.s3_bucket_website_content}/*"]
  }]
}
EOF
}

Again, we enable versioning and logging. We also add encryption, but using the built-in AWS-managed keys instead of our own CMK in KMS. With a CMK, we would need to sign every request to decrypt the content; the S3 SSE engine handles encryption for us without that requirement on this project. Finally, we set a public-read ACL and a bucket policy to allow everyone read access to the public content.

Let’s set up a few things: make our site more professional by using our own domain, make our website load faster for end users by using a CDN, and make it more secure for everyone by enabling TLS.

For this to work, we need to adjust where we are storing the content for our website. It needs to reside in a bucket named after your domain name, and for “www” redirects to work, we need a bucket for that too. Let’s make the changes to s3.tf and variables.tf.
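The exact variables.tf layout is up to you; a sketch of the new variable, assuming the example.com placeholder is swapped for your real domain:

```hcl
variable "domain_name" {
  description = "Apex domain for the website; also used as the content bucket name"
  default     = "example.com" # replace with your own domain
}
```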

Change the resource aws_s3_bucket serverless_hexo to:

resource "aws_s3_bucket" "serverless_hexo" {
  bucket = "${var.domain_name}"
  acl    = "public-read"
  tags   = "${merge(var.tags, map("Name", "Serverless Hexo Demo Website"))}"

  website {
    index_document = "index.html"
    error_document = "error.html"
  }

  versioning {
    enabled = true
  }

  logging {
    target_bucket = "${aws_s3_bucket.bucket_logging.id}"
    target_prefix = "${var.domain_name}"
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        # using a CMK via SSE-KMS, you must sign requests
        # using default AES256 via SSE-S3, you do not
        sse_algorithm = "AES256"
      }
    }
  }

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::${var.domain_name}/*"]
  }]
}
EOF
}

We also need to add a bucket to redirect all “www” requests to the top level domain name. Add the following to s3.tf:

resource "aws_s3_bucket" "serverless_hexo_www" {
  bucket = "www.${var.domain_name}"
  acl    = "public-read"
  tags   = "${merge(var.tags, map("Name", "Serverless Hexo Demo WWW Redirect"))}"

  website {
    redirect_all_requests_to = "${var.domain_name}"
  }

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::www.${var.domain_name}/*"]
  }]
}
EOF
}

You can now also remove the variable s3_bucket_website_content from variables.tf as it is no longer used.

We also need to setup a domain to serve content from. The following will create the DNS records needed to serve our website and redirect requests to “www”. Create and add the following content to route53.tf:

resource "aws_route53_zone" "domain" {
  name = "${var.domain_name}"
}

resource "aws_route53_record" "at" {
  zone_id = "${aws_route53_zone.domain.zone_id}"
  name    = "${var.domain_name}"
  type    = "A"

  alias {
    name                   = "${aws_cloudfront_distribution.serverless_hexo.domain_name}"
    zone_id                = "${aws_cloudfront_distribution.serverless_hexo.hosted_zone_id}"
    evaluate_target_health = true
  }
}

resource "aws_route53_record" "www" {
  zone_id = "${aws_route53_zone.domain.zone_id}"
  name    = "www"
  type    = "A"

  alias {
    name                   = "${aws_s3_bucket.serverless_hexo_www.website_domain}"
    zone_id                = "${aws_s3_bucket.serverless_hexo_www.hosted_zone_id}"
    evaluate_target_health = true
  }
}

resource "aws_route53_record" "domain_validation" {
  count   = "${length(aws_acm_certificate.domain.domain_validation_options)}"
  name    = "${aws_acm_certificate.domain.domain_validation_options.*.resource_record_name[count.index]}"
  type    = "${aws_acm_certificate.domain.domain_validation_options.*.resource_record_type[count.index]}"
  zone_id = "${aws_route53_zone.domain.id}"
  records = ["${aws_acm_certificate.domain.domain_validation_options.*.resource_record_value[count.index]}"]
  ttl     = 60
}

Let’s get a TLS certificate to encrypt our website communications from AWS Certificate Manager by creating a file called acm.tf and placing the following:

resource "aws_acm_certificate" "domain" {
  domain_name               = "${var.domain_name}"
  subject_alternative_names = ["www.${var.domain_name}"]
  validation_method         = "DNS"

  tags = "${merge(var.tags, map("Name", "${var.domain_name}"))}"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_acm_certificate_validation" "domain" {
  count           = "${length(aws_route53_record.domain_validation)}"
  certificate_arn = "${aws_acm_certificate.domain.arn}"
  validation_record_fqdns = [
    "${aws_route53_record.domain_validation.*.fqdn[count.index]}"
  ]
}

Run terraform apply to create the resources we defined in the files above. If all went as desired and defined, we should be able to access the S3 bucket from the domain name specified in our variables.tf file. Visit the domain name in your browser now. You should get a 404 error returned from S3. This is expected, as we haven’t added any content to our bucket yet.
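One caveat: if your domain is registered outside Route 53, you must point the registrar’s NS records at the new hosted zone before any of the DNS above resolves. A hedged sketch of an outputs.tf that surfaces the values you’ll need (output names are my own choice, not part of the project so far):

```hcl
# Name servers to enter at your registrar if the domain lives elsewhere
output "name_servers" {
  value = "${aws_route53_zone.domain.name_servers}"
}

# Direct S3 website endpoint, handy for testing before DNS propagates
output "website_endpoint" {
  value = "${aws_s3_bucket.serverless_hexo.website_endpoint}"
}
```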

Before we move on, AWS ACM needs to validate our domain via the CNAME records we created. Our configuration creates all of the validation records automatically, but it takes some time for AWS to see the updated DNS and validate ownership. You can check the pending DNS entry in the AWS ACM console.

Ensure that your certificate has been Issued and head on over to part 3.