AWS Serverless Deployment with Terraform

Siddharth Malani
4 min read · Jun 21, 2021


When deploying serverless applications you have a few options. SAM nicely wraps up the complexity of deploying code in a serverless microservices architecture, but it comes with its own set of issues. This post discusses two options and strategies you could use to deploy Lambdas in your environments.

Use SAM

SAM works by generating CloudFormation from the supplied parameters and manifests and then deploying the Lambdas. All you need to do is create a build file with SAM commands. I will not go into the details here, as there is plenty of documentation on the subject.

Here is a guide to doing this in GitLab CI; it can be repurposed for any other pipeline you might be using.

https://about.gitlab.com/blog/2019/02/04/multi-account-aws-sam-deployments-with-gitlab-ci/

Use Terraform to deploy lambdas

What if your application objectives are not fully met by serverless components alone? To keep things clean and consistent, I would prefer a single IaC tool rather than a mix of Terraform and SAM. The alternative presented here uses Terraform to deploy Lambdas and shows how to tie it all together.

Dev, Test, UAT and Prod Setup for CI/CD

Here are some important details about this setup.

  1. We use Terraform Cloud for infrastructure deployment and GitHub as the Git repository.
  2. The AWS setup has five stacks: Dev, Test, UAT, Prod and DevOps.
  3. The DevOps stack has four S3 buckets, one per environment. Artifacts dropped there are replicated into a matching bucket in the corresponding stack/environment using S3 replication, which is a clean mechanism for dispersing artifacts.
  4. Each stack also has an auto-deployer Lambda that is triggered whenever an artifact is copied into S3. This Lambda deploys the new version of the microservice automatically.

Now that we know what the setup looks like, let's see how developers can use it to build and deploy microservices effectively.

Build artifact

First, the Lambda resource is deployed via Terraform during infrastructure creation, just like any other resource. One of the variables passed to the Terraform script for each microservice is the SHA of the commit that produced its artifact.

CodeBuild

The pipeline that builds the artifact uses the SHA in its naming convention, for example <lambda-name>_<SHA>.zip.

buildspec.yml

version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  build:
    commands:
      # Vendor the function's dependencies (requirements.txt path assumed) alongside the handler code
      - python -m pip install -r requirements.txt -t ./
      # Package the function; the artifact name carries the commit SHA
      - zip -r test_lambda_$CODEBUILD_RESOLVED_SOURCE_VERSION.zip .
      # Upload under the same key prefix that lambda.tf expects (test_lambda/...)
      - aws s3 cp test_lambda_$CODEBUILD_RESOLVED_SOURCE_VERSION.zip s3://devops-artifacts-bucket-3290293/test_lambda/test_lambda_$CODEBUILD_RESOLVED_SOURCE_VERSION.zip

The generated filename looks something like this:

test_lambda_67dd8101f4960af33db349df8de5b0426e607285.zip

Terraform Lambda scripts

In Terraform, reference the SHA variable in the artifact's S3 key.

lambda.tf

variable "environment" {
}
variable "artifacts_bucket" {
}
variable "test_lambda_version" {
}
resource "aws_lambda_function" "test_lambda" { function_name = join("_", [var.environment, "test_lambda"])
role = aws_iam_role.iam_for_lambda.arn
handler = "lambda_function.lambda_handler"
s3_bucket = var.artifacts_bucket
s3_key = "test_lambda/test_lambda_${var.test_lambda_version}.zip"


runtime = "python3.7"
}
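
How the SHA reaches Terraform depends on your pipeline. One option (my assumption, not something this setup prescribes) is for the build job to write it into an *.auto.tfvars file that Terraform Cloud picks up on its next run:

# versions.auto.tfvars (hypothetical file written by the build pipeline)
# Points the function at the artifact that CodeBuild just uploaded.
environment         = "dev"
artifacts_bucket    = "dev-artifacts-bucket-xxxxx" # the replica bucket in this account
test_lambda_version = "67dd8101f4960af33db349df8de5b0426e607285"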

S3 Replication

S3 replication itself can also be set up via Terraform.

In DevOps account

Create a replication role that the S3 service can assume, and configure replication from the DevOps source bucket into the matching Dev account bucket; the other environments follow the same pattern.

variable "dev_account_artifacts_bucket" {}resource "aws_iam_role" "dev-replication" {
name = "dev-replication"

assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
POLICY
}
resource "aws_s3_bucket" "dev-artifacts" {
bucket = join("-", [
"dev-artifacts",
lower(random_string.random.result)])
acl = "private"
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
replication_configuration {
role = aws_iam_role.dev-replication.arn
rules {
id = "copy_files"
status = "Enabled"
destination {
bucket = var.dev_account_artifacts_bucket
account_id = "<dest account id>"
storage_class = "STANDARD"
access_control_translation {
owner = "Destination"
}
}
}
}
versioning {
enabled = true
}
}
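
The trust policy above only lets the S3 service assume the role; the role also needs permissions to read from the source bucket and replicate into the destination. Here is a minimal sketch of that policy (not part of the original snippet, and assuming var.dev_account_artifacts_bucket holds the destination bucket's ARN, which is what the replication destination expects):

resource "aws_iam_role_policy" "dev-replication" {
  name = "dev-replication"
  role = aws_iam_role.dev-replication.id

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetReplicationConfiguration",
        "s3:ListBucket"
      ],
      "Resource": ["${aws_s3_bucket.dev-artifacts.arn}"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObjectVersionForReplication",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectVersionTagging"
      ],
      "Resource": ["${aws_s3_bucket.dev-artifacts.arn}/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ReplicateObject",
        "s3:ReplicateDelete",
        "s3:ReplicateTags",
        "s3:ObjectOwnerOverrideToBucketOwner"
      ],
      "Resource": ["${var.dev_account_artifacts_bucket}/*"]
    }
  ]
}
POLICY
}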

In accounts with replica S3

In the Dev account, add a bucket policy that allows the DevOps account to replicate artifacts into the dev artifacts bucket.

resource "aws_s3_bucket" "artifacts-bucket" {
bucket = join("-", [
var.environment,
"artifacts-bucket",
lower(random_string.random.result)])
acl = "private"

server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}

versioning {
enabled = true
}
}


resource "aws_s3_bucket_policy" "artifacts-bucket-policy" {
bucket = aws_s3_bucket.artifacts-bucket.id
policy = <<EOF
{
"Version": "2012-10-17",
"Id": "S3-Console-Auto-Gen-Policy-1606711876528",
"Statement": [
{
"Sid": "S3PolicyStmt-DO-NOT-MODIFY-1606711846338",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<devops-account-id>:root"
},
"Action": [
"s3:GetBucketVersioning",
"s3:PutBucketVersioning",
"s3:ReplicateObject",
"s3:ReplicateDelete",
"s3:ObjectOwnerOverrideToBucketOwner"
],
"Resource": [
"${aws_s3_bucket.artifacts-bucket.arn}",
"${aws_s3_bucket.artifacts-bucket.arn}/*"
]
}
]
}
EOF

}

AutoDeploy

Deployment itself is straightforward: add an event notification to the artifacts bucket in each environment account, grant the deployer Lambda the right permissions, and a few lines of code do the rest. The function can even be inlined within Terraform.

import urllib.parse
import boto3

client = boto3.client('lambda')

def lambda_handler(event, context):
    # Triggered by the S3 put event for the newly replicated artifact
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(
        event['Records'][0]['s3']['object']['key'],
        encoding='utf-8'
    )
    # derive_function_name() maps the artifact key back to the Lambda name,
    # e.g. "test_lambda/test_lambda_<sha>.zip" -> "dev_test_lambda"
    response = client.update_function_code(
        FunctionName=derive_function_name(key),
        S3Bucket=bucket,
        S3Key=key,
        Publish=True)
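
For reference, here is a hedged sketch of how the bucket event and invoke permission could be wired up in Terraform; the aws_lambda_function.auto_deployer resource name is an assumption for illustration:

# Hypothetical wiring: let the artifacts bucket invoke the auto-deployer Lambda
# whenever a new .zip artifact lands in it.
resource "aws_lambda_permission" "allow_artifacts_bucket" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.auto_deployer.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.artifacts-bucket.arn
}

resource "aws_s3_bucket_notification" "artifact_uploaded" {
  bucket = aws_s3_bucket.artifacts-bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.auto_deployer.arn
    events              = ["s3:ObjectCreated:*"]
    filter_suffix       = ".zip"
  }

  depends_on = [aws_lambda_permission.allow_artifacts_bucket]
}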

Conclusion

I hope the above gives you some ideas on how this can be done effectively. Please comment below if you would like more details.
