Running a Node app on Amazon ECS

The EC2 Container Service Mega-Walkthrough


Amazon ECS is AWS's venture into the wonderful world of containers – a service for running containerised apps on AWS. You can choose to have ECS run the containers for you, or place them on your own EC2 instances.

Since building the ECS launch demo for AWS back in 2014, we thought we ought to also try the service itself!

So here are some of the experiences we’ve had with ECS, and how we set up the infrastructure to run Node apps on it.

Why ECS?

Before we get into that though, why use ECS?

There are so many options for deploying apps out there, but if you’re running stuff on EC2 then ECS is definitely worth checking out.

Here are a bunch of things we like about it:

  • Centralised Logs – Just like Lambda’s logs, ECS ships the stdout/stderr streams from all containers to a single log group in CloudWatch Logs. No more extra instrumenting to get our logs in one place, and they’re searchable there, too.
  • Tests, CI & Deploy – Working with containers means you can wrap your app and its dependencies into a single image, and only containers that pass their tests get deployed. They can also be linked to external dependencies such as Elasticsearch / Redis running in other containers, and tested against these at build time. * With a little extra magic from our friends CodeBuild and CodePipeline.
  • Scheduler & supervisor – ECS looks after your app. It will move it across hosts if necessary, monitors its health, and replaces processes if they become unwell.
  • Metrics – The essentials – CPU and memory metrics – are built in. It’s another thing you don’t have to worry about configuring yourself.
  • Scaling – Backed by the all-powerful Autoscaling, ECS can also scale the app when load fluctuates.
  • Load balancing – Busy service? No sweat. Balance that traffic across containers like a boss!

Altogether, when used in conjunction with CodeBuild and CodePipeline, it supplants an awful lot of deployment tooling we would normally have to maintain ourselves.

Cons

As with anything, nothing’s perfect, and there are some downsides of ECS to be aware of:

  • EC2 instances – You have to set up the EC2 instances for your ECS cluster yourself. We would love to see a version where AWS managed this behind the scenes so you only have to worry about the containers. * This is no longer the case – see update below!
  • Instance termination – When instances terminate, it’s up to you to drain container tasks off the instance before it shuts down. Otherwise, your tasks could be killed uncleanly. Later we look at how to do this with a Lambda function.
  • Spot instance termination – Similar to the last point, when spot instances terminate they vanish and take your tasks with them. Not ideal! Since it is possible to ask a spot instance if it is about to terminate, we think the ECS agent should do this and drain tasks beforehand.
  • Initiating deploys – It’s not exactly intuitive – you have to know enough about ECS services to figure out that updating task definitions is the way to get ECS to push new containers – and new versions of your app – out to the cluster.
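To make that last point concrete: a “deploy” in ECS amounts to registering a new task definition revision that points at the new image, then updating the service to use that revision. Here’s a minimal Node sketch of the idea – names and values are illustrative, and the real work would happen via the AWS SDK’s registerTaskDefinition and updateService calls:

```javascript
// Sketch only: what "updating the task definition" amounts to.
// The real calls are ecs.registerTaskDefinition() and ecs.updateService()
// in the AWS SDK; names and values here are illustrative.

// Build the input for a new task definition revision pointing at a new image.
function buildNextRevision(taskDef, newImage) {
  return {
    family: taskDef.family,
    containerDefinitions: taskDef.containerDefinitions.map(def =>
      Object.assign({}, def, { image: newImage })
    )
  };
}

const current = {
  family: 'myapp-prod',
  containerDefinitions: [
    { name: 'myapp-prod', image: '123.dkr.ecr.us-east-1.amazonaws.com/myapp-prod:old' }
  ]
};

const next = buildNextRevision(
  current,
  '123.dkr.ecr.us-east-1.amazonaws.com/myapp-prod:new'
);
console.log(next.containerDefinitions[0].image);
// Registering `next` yields a new revision ARN; the service is then pointed
// at it with ecs.updateService({ cluster, service, taskDefinition }).
```

Once you know this is the dance, it’s straightforward – but it’s not something the console makes obvious.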

So there are a few things to iron out before ECS becomes a really nice experience for developers, but it’s still a very useful service. Let’s crack on and see how we got set up.

* Update 2017-12-22: AWS announced Fargate at Re:Invent ’17: you can now choose to deploy containers without EC2 instances! If you wish to deploy with Fargate instead of EC2, you can skip “The Cluster Stack” step and move straight to the “Ship-it” stack.

1. Provisioning the infrastructure

First off, we need to get everything provisioned in AWS:

  1. We need a bunch of EC2 instances to form the basis of our ECS cluster.
  2. Then, we need a CodeBuild project to produce a container from our source code.
  3. CodePipeline pulls the steps together into a linear deployment flow, from source to shippage.
  4. Finally, we can sort out the rest, including the ECS service itself and some autoscaling groups.

As there’s quite a lot involved in the process, we’ll split this out into two separate posts, and focus on points 1-3 in this-a-here post.

Here be dragons

Before we get onto the above though, we need to address a couple of the aforementioned cons first. The questions of:

  • How do we deploy updated versions to ECS, and –
  • How do we drain containers off departing instances?

Never fear, Lambda is here!

1.a. Setting up the supporting Lambdas

Ah, Lambda. The trusty companion for all the grisly workarounds in AWS.

In this case we need two functions, one to address each situation:

  • Deployer Lambda – The deployer Lambda knows enough ECS etiquette to get it deploying our updated container once it’s freshly baked out of CodeBuild and sitting in our container registry.
  • Lifecycle Lambda – The Lifecycle Lambda is our safety marshal, making sure that EC2 doesn’t destroy any instances until all containers have been herded off them safely.
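To give a flavour of the second one: the core of a lifecycle Lambda boils down to parsing the hook notification delivered over SNS, then refusing to let the instance die until it’s running zero tasks. A hedged sketch of that logic (the real implementation lives in the repo below, and also flips the instance to DRAINING via the ECS API):

```javascript
// Hedged sketch of the Lifecycle Lambda's core logic – illustrative only.

// The autoscaling lifecycle hook delivers a JSON message through SNS.
function parseLifecycleEvent(snsRecord) {
  const msg = JSON.parse(snsRecord.Sns.Message);
  return { instanceId: msg.EC2InstanceId, transition: msg.LifecycleTransition };
}

// Only complete the lifecycle action once the instance is running zero
// tasks; otherwise record a heartbeat and check again.
function canTerminate(runningTaskCount) {
  return runningTaskCount === 0;
}

const record = {
  Sns: {
    Message: JSON.stringify({
      EC2InstanceId: 'i-0abc',
      LifecycleTransition: 'autoscaling:EC2_INSTANCE_TERMINATING'
    })
  }
};
const event = parseLifecycleEvent(record);
console.log(event.instanceId, canTerminate(0));
```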

Enter, Serverless

If you haven’t encountered it already in the burgeoning serverless movement – there’s a handy framework called ‘serverless‘ (go figure) for constructing Lambda-backed services.

More broadly it’s a whole different methodology of deploying services but we won’t go into all that here.

Back on topic, we chose serverless to help us get these Lambdas set up.

Free gift – this code is open source

The good news is we’ve released the code, so you don’t have to do any of this yourself!

If you want to take a look you can clone it on GitHub and have a poke around:

git clone git@github.com:gosquared/ops-lambdas.git
cd ops-lambdas

It should be fairly plug-and-play, just take a look at serverless.yml and change the values as necessary.
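For orientation, a Serverless Framework config generally has this shape – the values below are illustrative placeholders, not necessarily what’s in the repo:

```yaml
service: ops-lambdas

provider:
  name: aws
  runtime: nodejs6.10   # Node runtime for the Lambdas
  region: us-east-1     # deploy to the same region as your cluster

functions:
  deployer:
    handler: deployer.handler
  lifecycle:
    handler: lifecycle.handler
```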

What does this do?

Serverless glues together our Lambda code using YAML configuration and can provision it all to AWS.

It uses CloudFormation behind the scenes which lets us use a little trick to hook the Lambdas into the stuff we’ll set up later on.
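For the curious, the trick is CloudFormation’s cross-stack exports: a stack can Export a named output, and any other stack in the same region can read it back with !ImportValue. Schematically (the topic resource name here is illustrative):

```yaml
# In the serverless-managed stack: export a value under a well-known name...
Outputs:
  EcsLifecycleHookTopicArn:
    Value: !Ref EcsLifecycleHookTopic
    Export:
      Name: ops-lambdas-prod:EcsLifecycleHookTopicArn

# ...and in any later stack, import it by that name:
#   Resource: !ImportValue ops-lambdas-prod:EcsLifecycleHookTopicArn
```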

How do I provision the Lambdas?

Once you’ve tweaked the serverless.yml it’s simply an npm run deploy.

1.b. Now we bring out the big guns

I can’t believe we’ve got this far without any code (embedded in the post, at least). Well sorry, you’re not getting off that easy.

This is where it gets a little bit heavy, so bear with me as we wade through some thick CloudFormation material.

The CloudFormation Stacks

What we’re going to do is break down the infrastructure into two CloudFormation stacks:

  1. The Cluster Stack – which will set up the EC2 instances for our cluster.
  2. The Ship-it Stack – all the stuff for deploying our code (CI pipeline for the fancy term).

The ship-it stack will also set up the ECS parts for running and scaling the app.

The Cluster Stack

Update 2017-12-22: AWS announced Fargate at Re:Invent ’17, allowing you to deploy containers without EC2 instances. If you choose to deploy with Fargate, you can skip this step and move straight onto the next stack.

I have to admit, I originally had everything in one stack first time around, but split it into two for fear of hitting the onerous 50KB CloudFormation template size limit.

50KB, you say! Yeah, it can happen. But not today.

Here is the cluster stack:

# Stack to create EC2 instances for ECS cluster.
#
# aws cloudformation deploy \
#   --stack-name app-cluster-prod \
#   --template-file ./aws-cluster-stack.yaml \
#   --parameter-overrides \
#     KeyName=DEFAULT \
#     SecurityGroups=group1,group2 \
#     ImageId=ami-123456 \
#     InstanceType=c5.large \
#     Subnets=subnet-1234,subnet-5678 \
#     EcsClusterName=myapp-prod \
#   --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
#   --no-execute-changeset
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: EC2 instances for ECS cluster.
Parameters:
  KeyName:
    Description: The EC2 Key Pair to allow SSH access to the instances.
    Type: AWS::EC2::KeyPair::KeyName
  SecurityGroups:
    Description: Security group ids to use for the instances.
    Type: CommaDelimitedList
  ImageId:
    Description: ECS-optimised AMI ID for your region. http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html
    Type: String
  InstanceType:
    Description: EC2 instance type.
    Type: String
  Subnets:
    Description: Subnet ids for instance placement.
    Type: CommaDelimitedList
  EcsClusterName:
    Description: Name of the ECS cluster.
    Type: String
Resources:
  ## EC2
  InstanceRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
        - arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - sts:AssumeRole
            Principal:
              Service: ec2.amazonaws.com
      Policies:
        - PolicyName: logs
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:*
                Resource:
                  - !Sub arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:*
  InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: /
      Roles:
        - !Ref InstanceRole
  AutoscalingLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: !Ref ImageId
      InstanceType: !Ref InstanceType
      KeyName: !Ref KeyName
      SecurityGroups: !Ref SecurityGroups
      IamInstanceProfile: !Ref InstanceProfile
      UserData:
        Fn::Base64:
          Fn::Sub:
            - |
              Content-Type: multipart/mixed; boundary="==BOUNDARY=="
              MIME-Version: 1.0

              --==BOUNDARY==
              MIME-Version: 1.0
              Content-Type: text/x-shellscript; charset="us-ascii"

              #!/bin/bash
              echo ECS_CLUSTER=${ClusterName} >> /etc/ecs/ecs.config
              STACK_NAME=${AWS::StackName}
              # Install awslogs and the jq JSON parser
              yum install -y awslogs jq https://s3-${AWS::Region}.amazonaws.com/amazon-ssm-${AWS::Region}/latest/linux_amd64/amazon-ssm-agent.rpm
              # Inject the CloudWatch Logs configuration file contents
              cat > /etc/awslogs/awslogs.conf <<- EOF
              [general]
              state_file = /var/lib/awslogs/agent-state

              [/var/log/dmesg]
              file = /var/log/dmesg
              log_group_name = /var/log/dmesg
              log_stream_name = {cluster}/{container_instance_id}

              [/var/log/messages]
              file = /var/log/messages
              log_group_name = /var/log/messages
              log_stream_name = {cluster}/{container_instance_id}
              datetime_format = %b %d %H:%M:%S

              [/var/log/docker]
              file = /var/log/docker
              log_group_name = /var/log/docker
              log_stream_name = {cluster}/{container_instance_id}
              datetime_format = %Y-%m-%dT%H:%M:%S.%f

              [/var/log/ecs/ecs-init.log]
              file = /var/log/ecs/ecs-init.log.*
              log_group_name = /var/log/ecs/ecs-init.log
              log_stream_name = {cluster}/{container_instance_id}
              datetime_format = %Y-%m-%dT%H:%M:%SZ

              [/var/log/ecs/ecs-agent.log]
              file = /var/log/ecs/ecs-agent.log.*
              log_group_name = /var/log/ecs/ecs-agent.log
              log_stream_name = {cluster}/{container_instance_id}
              datetime_format = %Y-%m-%dT%H:%M:%SZ

              [/var/log/ecs/audit.log]
              file = /var/log/ecs/audit.log.*
              log_group_name = /var/log/ecs/audit.log
              log_stream_name = {cluster}/{container_instance_id}
              datetime_format = %Y-%m-%dT%H:%M:%SZ

              [/var/log/amazon/ssm/amazon-ssm-agent.log]
              file = /var/log/amazon/ssm/amazon-ssm-agent.log
              log_group_name = amazon-ssm
              log_stream_name = agent-$STACK_NAME/{container_instance_id}
              datetime_format = %Y-%m-%dT%H:%M:%SZ

              [/var/log/amazon/ssm/errors.log]
              file = /var/log/amazon/ssm/errors.log
              log_group_name = amazon-ssm
              log_stream_name = errors-$STACK_NAME/{container_instance_id}
              datetime_format = %Y-%m-%dT%H:%M:%SZ
              EOF

              --==BOUNDARY==
              MIME-Version: 1.0
              Content-Type: text/x-shellscript; charset="us-ascii"

              #!/bin/bash
              # Set the region to send CloudWatch Logs data to (the region where the container instance is located)
              region=$(curl 169.254.169.254/latest/meta-data/placement/availability-zone | sed s'/.$//')
              sed -i -e "s/region = us-east-1/region = $region/g" /etc/awslogs/awscli.conf

              --==BOUNDARY==
              MIME-Version: 1.0
              Content-Type: text/upstart-job; charset="us-ascii"

              #upstart-job
              description "Configure and start CloudWatch Logs agent on Amazon ECS container instance"
              author "Amazon Web Services"
              start on started ecs

              script
                exec 2>>/var/log/ecs/cloudwatch-logs-start.log
                set -x
                until curl -s http://localhost:51678/v1/metadata
                do
                  sleep 1
                done
                # Grab the cluster and container instance ARN from instance metadata
                cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster')
                container_instance_id=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $2}' )
                # Replace the cluster name and container instance ID placeholders with the actual values
                sed -i -e "s/{cluster}/$cluster/g" /etc/awslogs/awslogs.conf
                sed -i -e "s/{container_instance_id}/$container_instance_id/g" /etc/awslogs/awslogs.conf
                service awslogs start
                chkconfig awslogs on
              end script
              --==BOUNDARY==--
            - ClusterName: !Ref EcsClusterName
  AutoscalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      VPCZoneIdentifier: !Ref Subnets
      LaunchConfigurationName: !Ref AutoscalingLaunchConfig
      MinSize: 0
      MaxSize: 0
      HealthCheckType: EC2
      Tags:
        - Key: !Ref AWS::StackName
          Value: 'true'
          PropagateAtLaunch: true
        - Key: Name
          Value: !Ref AWS::StackName
          PropagateAtLaunch: true
        - Key: role
          Value: !Ref AWS::StackName
          PropagateAtLaunch: true
  LifecycleHookRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - sts:AssumeRole
            Principal:
              Service: autoscaling.amazonaws.com
      Policies:
        - PolicyName: SNSAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - sns:Publish
                Resource: !ImportValue ops-lambdas-prod:EcsLifecycleHookTopicArn
  AutoscalingGroupInstanceTerminationHook:
    Type: AWS::AutoScaling::LifecycleHook
    Properties:
      AutoScalingGroupName: !Ref AutoscalingGroup
      HeartbeatTimeout: 600
      LifecycleTransition: autoscaling:EC2_INSTANCE_TERMINATING
      NotificationTargetARN: !ImportValue ops-lambdas-prod:EcsLifecycleHookTopicArn
      RoleARN: !GetAtt LifecycleHookRole.Arn
  InstanceScaleOutPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AdjustmentType: PercentChangeInCapacity
      AutoScalingGroupName: !Ref AutoscalingGroup
      EstimatedInstanceWarmup: 420
      PolicyType: StepScaling
      StepAdjustments:
        - MetricIntervalLowerBound: 0
          MetricIntervalUpperBound: 10
          ScalingAdjustment: 10
        - MetricIntervalLowerBound: 10
          ScalingAdjustment: 30
  InstanceScaleInPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AdjustmentType: PercentChangeInCapacity
      AutoScalingGroupName: !Ref AutoscalingGroup
      EstimatedInstanceWarmup: 420
      PolicyType: StepScaling
      StepAdjustments:
        - MetricIntervalUpperBound: 0
          MetricIntervalLowerBound: -10
          ScalingAdjustment: -10
        - MetricIntervalUpperBound: -10
          ScalingAdjustment: -30
  InstanceCpuAlarmHigh:
    Type: AWS::CloudWatch::Alarm
    Properties:
      EvaluationPeriods: 5
      Statistic: Average
      Threshold: 80
      AlarmDescription: Alarm if instance CPU high enough to trigger scale out policy.
      Period: 60
      AlarmActions:
        - !Ref InstanceScaleOutPolicy
      Namespace: AWS/EC2
      Dimensions:
        - Name: AutoScalingGroupName
          Value: !Ref AutoscalingGroup
      ComparisonOperator: GreaterThanOrEqualToThreshold
      MetricName: CPUUtilization
  InstanceCpuAlarmLow:
    Type: AWS::CloudWatch::Alarm
    Properties:
      EvaluationPeriods: 30
      Statistic: Average
      Threshold: 30
      AlarmDescription: Alarm if instance CPU low long enough to trigger scale in policy.
      Period: 60
      AlarmActions:
        - !Ref InstanceScaleInPolicy
      Namespace: AWS/EC2
      Dimensions:
        - Name: AutoScalingGroupName
          Value: !Ref AutoscalingGroup
      ComparisonOperator: LessThanOrEqualToThreshold
      MetricName: CPUUtilization
Outputs:
  AutoscalingGroupName:
    Description: Name of ASG.
    Value: !Ref AutoscalingGroup

Not so bad. If you peruse that, you’ll see we’ve got an autoscaling group to control our instances.

You might also spot the mentions of !ImportValue ops-lambdas-prod:..., which are the little trick we mentioned earlier for integrating the serverless Lambdas into this stack.

The UserData is mostly boilerplate lifted from the ECS docs to get CloudWatch logging and the ECS agent configured on the instance.

Setting the parameters

Note that this doesn’t set up subnets or security groups or anything like that. We assume these are already in place, and their ids can be passed to the stack via the SecurityGroups and Subnets parameters.

You’ll also want to pick an InstanceType that suits your capacity requirements. Then there’s ImageId, which is easy: it’s just an ID from the ECS-optimised AMI page. We leave it to you to pick the latest AMI for your region, because AWS updates them now and then.

There’s just one thing we haven’t figured out yet – what our ECS cluster will be called. There’s a parameter for this, EcsClusterName, because the instances need to be told which cluster they’ll be serving. Now’s a good time to think of a name – something like myapp-prod – but really it can be anything you like. This will also be the name of the Ship-it stack later.

Provisioning the Cluster Stack

Once you’ve got the above parameters ready, we’re good to provision the stack. Let’s hit it! (don’t forget to sub in your params):

aws cloudformation deploy \
--stack-name app-cluster-prod \
--template-file ./aws-cluster-stack.yaml \
--parameter-overrides \
KeyName=DEFAULT \
SecurityGroups=group1,group2 \
ImageId=ami-123456 \
InstanceType=c5.large \
Subnets=subnet-1234,subnet-5678 \
EcsClusterName=myapp-prod \
--capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
--no-execute-changeset

Notice the --no-execute-changeset at the end there. We’ve not actually created anything yet – we’ve just made a changeset so we can check everything is good.

If there are any configuration errors CloudFormation usually flags them up at this point. Often this is followed by some manner of to-and-fro tail-chasing to fix the stack until it is happy. Such is life with CloudFormation.
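If you’d rather eyeball a change set programmatically than in the console, the Changes array returned by aws cloudformation describe-change-set is easy to summarise with a few lines of Node. A sketch, with made-up resource names:

```javascript
// Tally up what a CloudFormation change set is about to do. `changes`
// mirrors the shape of the Changes array returned by
// `aws cloudformation describe-change-set`.
function summariseChangeSet(changes) {
  const summary = { Add: 0, Modify: 0, Remove: 0 };
  for (const change of changes) {
    const action = change.ResourceChange.Action;
    summary[action] = (summary[action] || 0) + 1;
  }
  return summary;
}

// Made-up example resources:
const changes = [
  { ResourceChange: { Action: 'Add', LogicalResourceId: 'AutoscalingGroup' } },
  { ResourceChange: { Action: 'Add', LogicalResourceId: 'InstanceRole' } },
  { ResourceChange: { Action: 'Modify', LogicalResourceId: 'InstanceProfile' } }
];
console.log(summariseChangeSet(changes)); // { Add: 2, Modify: 1, Remove: 0 }
```

A surprise Remove in that tally is usually the cue to stop and read the change set properly.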

Once the changeset goes green, we send it out the door by hitting execute in the CF console.

If that worked first time for you, congrats! Any errors, see if CF gives any hints and try to work through it.

2. The Ship-it Stack

Right, now that the cluster is sorted, time for the real heavyweight.

The Ship-it stack incorporates all of our deployment pipeline and the ECS service.

Before we get into the stack, we’ve got some prep to go over first.

Preparing the pipeline

CodePipeline will need to get your code from somewhere.

GitHub will be the source code provider of choice here, but if you need something else check out the CodePipeline docs for alternatives.

We need to give CodePipeline access to the repo so it can scoop up our code. In the case of GitHub this can be done with an access token.

GitHub auth

To get hold of an access token, create a Personal Access Token on GitHub and grant the repo and admin:repo_hook permissions.

With this token CodePipeline can access the repos on your behalf and get the code.

It actually watches a specific branch on your repo and will start new builds when it sees new commits getting pushed to the branch.

Ship-it template

Prepare yourself for this one. It’s pretty hefty:

# App ship-it stack with ECS, CodeBuild & CodePipeline.
#
# aws cloudformation deploy \
#   --stack-name myapp-prod \
#   --template-file ./aws-ship-it-stack.yaml \
#   --parameter-overrides \
#     KeyName=<KEY_NAME> \
#     GitHubAuthToken=<ACCESS_TOKEN> \
#     RepoOwner=<OWNER_NAME> \
#     RepoName=<REPO_NAME> \
#     BranchName=<BRANCH_NAME> \
#     PipelineBucketName=<BUCKET_NAME> \
#   --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
#   --no-execute-changeset
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: App services on ECS.
Parameters:
  KeyName:
    Description: The EC2 Key Pair to allow SSH access to the instance.
    Type: AWS::EC2::KeyPair::KeyName
  GitHubAuthToken:
    Description: GitHub access token.
    Type: String
  RepoOwner:
    Description: Name of the GitHub user or org who owns the repository.
    Type: String
  RepoName:
    Description: The GitHub repo name.
    Type: String
  BranchName:
    Description: Name of repo branch to watch.
    Type: String
  PipelineBucketName:
    Description: Name of S3 bucket to create for CodePipeline.
    Type: String
Resources:
  ## ECS CLUSTER
  EcsCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: !Ref AWS::StackName
  EcsServiceScalingTargetRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - sts:AssumeRole
            Principal:
              Service: application-autoscaling.amazonaws.com
      Policies:
        - PolicyName: Scaling
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - application-autoscaling:*
                  - ecs:*
                  - cloudwatch:*
                Resource: '*'
  ## SERVICE
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Ref AWS::StackName
      RetentionInDays: 7
  EcrRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: !Ref AWS::StackName
      RepositoryPolicyText:
        Version: '2012-10-17'
        Statement:
          - Sid: CodeBuildPushPull
            Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
            Action:
              - ecr:GetDownloadUrlForLayer
              - ecr:BatchGetImage
              - ecr:BatchCheckLayerAvailability
              - ecr:PutImage
              - ecr:InitiateLayerUpload
              - ecr:UploadLayerPart
              - ecr:CompleteLayerUpload
  EcsTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    DependsOn:
      - EcrRepository
    Properties:
      Family: !Ref AWS::StackName
      ContainerDefinitions:
        - Name: !Ref AWS::StackName
          Image: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${AWS::StackName}:latest
          Cpu: 256
          MemoryReservation: 512
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Ref AWS::StackName
              awslogs-region: !Ref AWS::Region
              awslogs-stream-prefix: !Ref AWS::StackName
          Environment:
            - Name: NODE_ENV
              Value: production
            # any additional env vars
  EcsService:
    Type: AWS::ECS::Service
    DependsOn:
      - EcrRepository
    Properties:
      TaskDefinition: !Ref EcsTaskDefinition
      DesiredCount: 0 # only when creating stack
      LaunchType: FARGATE # or EC2 if using instances
      DeploymentConfiguration:
        MinimumHealthyPercent: 50
      PlacementStrategies:
        - Type: spread
          Field: instanceId
      Cluster: !Ref EcsCluster
  EcsServiceScalingTarget:
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      MinCapacity: 0
      MaxCapacity: 0
      ResourceId:
        !Sub
          - service/${EcsCluster}/${EcsService}
          - EcsCluster: !Ref EcsCluster
            EcsService: !GetAtt EcsService.Name
      RoleARN: !GetAtt EcsServiceScalingTargetRole.Arn
      ScalableDimension: ecs:service:DesiredCount
      ServiceNamespace: ecs
  EcsServiceScaleOutPolicy:
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: EcsServiceScaleOutPolicy
      PolicyType: StepScaling
      ScalingTargetId: !Ref EcsServiceScalingTarget
      StepScalingPolicyConfiguration:
        AdjustmentType: PercentChangeInCapacity
        StepAdjustments:
          - MetricIntervalLowerBound: 0
            MetricIntervalUpperBound: 10
            ScalingAdjustment: 10
          - MetricIntervalLowerBound: 10
            ScalingAdjustment: 30
  EcsServiceScaleInPolicy:
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: EcsServiceScaleInPolicy
      PolicyType: StepScaling
      ScalingTargetId: !Ref EcsServiceScalingTarget
      StepScalingPolicyConfiguration:
        AdjustmentType: PercentChangeInCapacity
        StepAdjustments:
          - MetricIntervalLowerBound: -10
            MetricIntervalUpperBound: 0
            ScalingAdjustment: -10
          - MetricIntervalUpperBound: -10
            ScalingAdjustment: -30
  EcsServiceHighCPUAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      EvaluationPeriods: 2
      Statistic: Average
      Threshold: 70
      AlarmDescription: Alarm if ECS Service CPU high.
      Period: 60
      AlarmActions:
        - !Ref EcsServiceScaleOutPolicy
      Namespace: AWS/ECS
      Dimensions:
        - Name: ClusterName
          Value: !Ref EcsCluster
        - Name: ServiceName
          Value: !GetAtt EcsService.Name
      ComparisonOperator: GreaterThanOrEqualToThreshold
      MetricName: CPUUtilization
  EcsServiceLowCPUAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      EvaluationPeriods: 2
      Statistic: Average
      Threshold: 40
      AlarmDescription: Alarm if ECS Service CPU low.
      Period: 60
      AlarmActions:
        - !Ref EcsServiceScaleInPolicy
      Namespace: AWS/ECS
      Dimensions:
        - Name: ClusterName
          Value: !Ref EcsCluster
        - Name: ServiceName
          Value: !GetAtt EcsService.Name
      ComparisonOperator: LessThanOrEqualToThreshold
      MetricName: CPUUtilization
  ## BUILD
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref PipelineBucketName
  CodeBuildRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - sts:AssumeRole
            Principal:
              Service: codebuild.amazonaws.com
      Policies:
        - PolicyName: S3Access
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:*
                Resource:
                  !Sub
                    - arn:aws:s3:::${S3Bucket}*
                    - S3Bucket: !Ref S3Bucket
        - PolicyName: ServicesAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:*
                  - codecommit:*
                Resource: '*'
  CodePipelineRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - sts:AssumeRole
            Principal:
              Service: codepipeline.amazonaws.com
      Policies:
        - PolicyName: S3Access
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:*
                Resource:
                  !Sub
                    - arn:aws:s3:::${S3Bucket}/*
                    - S3Bucket: !Ref S3Bucket
        - PolicyName: ServicesAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - codepipeline:*
                  - codebuild:*
                  - lambda:*
                  - iam:ListRoles
                  - iam:PassRole
                Resource: '*'
  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Ref AWS::StackName
      Description: App on ECS.
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Source:
        Type: CODEPIPELINE
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_MEDIUM
        Image: aws/codebuild/docker:1.12.1
        EnvironmentVariables:
          - Name: APP_NAME
            Value: !Ref AWS::StackName
          - Name: APP_IMAGE
            Value: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${AWS::StackName}:latest
      TimeoutInMinutes: 10
      Tags:
        - Key: Name
          Value: !Ref AWS::StackName
        - Key: role
          Value: !Ref AWS::StackName
  CodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      ArtifactStore:
        Type: S3
        Location: !Ref S3Bucket
      Name: !Ref AWS::StackName
      RoleArn: !GetAtt CodePipelineRole.Arn
      Stages:
        - Name: Source
          Actions:
            - Name: Source
              RunOrder: 1
              ActionTypeId:
                Version: 1
                Category: Source
                Owner: ThirdParty
                Provider: GitHub
              OutputArtifacts:
                - Name: !Ref AWS::StackName
              Configuration:
                Owner: !Ref RepoOwner
                Repo: !Ref RepoName
                Branch: !Ref BranchName
                OAuthToken: !Ref GitHubAuthToken
        - Name: Build
          Actions:
            - Name: Build
              RunOrder: 1
              Configuration:
                ProjectName: !Ref AWS::StackName
              InputArtifacts:
                - Name: !Ref AWS::StackName
              ActionTypeId:
                Version: 1
                Category: Build
                Owner: AWS
                Provider: CodeBuild
              OutputArtifacts:
                - Name: !Sub ${AWS::StackName}-built
        - Name: Deploy
          Actions:
            - Name: Deployer
              RunOrder: 1
              ActionTypeId:
                Version: 1
                Category: Invoke
                Owner: AWS
                Provider: Lambda
              Configuration:
                FunctionName: !ImportValue ops-lambdas-prod:EcsDeployerLambdaFunctionName
                UserParameters:
                  !Sub
                    - |
                      {
                        "Service": "${AWS::StackName}",
                        "Family": "${AWS::StackName}",
                        "EcsService": {
                          "Name": "${EcsServiceName}",
                          "Arn": "${EcsServiceArn}",
                          "Cluster": "${EcsServiceCluster}"
                        }
                      }
                    - EcsServiceName: !GetAtt EcsService.Name
                      EcsServiceArn: !Ref EcsService
                      EcsServiceCluster: !GetAtt EcsCluster.Arn
Outputs:
  EcsClusterName:
    Description: ECS Cluster Name.
    Value: !Ref EcsCluster
  ServiceName:
    Description: ECS service name.
    Value: !Ref EcsService

In true CloudFormation style the configuration is very verbose but allows us to control virtually all the settings of the infrastructure.

Create the Ship-it stack

Similar to our Cluster stack, we use the aws cli to create a change set for this stack.

Once again, customise the params to your needs, you know the drill.

Just make sure the --stack-name matches the ECS cluster name you came up with earlier.

Also, just to clarify, RepoOwner is your GitHub username / organisation name. So for https://github.com/your-org/your-repo the params would be RepoOwner=your-org and RepoName=your-repo.

aws cloudformation deploy \
  --stack-name myapp-prod \
  --template-file ./aws-ship-it-stack.yaml \
  --parameter-overrides \
     KeyName= \
     GitHubAuthToken= \
     RepoOwner= \
     RepoName= \
     BranchName= \
     PipelineBucketName= \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
  --no-execute-changeset


There’s a fair bit more infrastructure being laid out here, so check everything’s gone through properly. If not, CF should tell you what’s up and you can delete the stack and try again.

Note – if you do delete the stack, there are a couple of parts that need manual cleanup: CF will show you what these are. Everything else should clean up automatically.

Wrapping up part 1

There we go. With that last stack, we now have all the infrastructure in place ready to host our app.

There won’t be any instances or tasks running yet – the autoscaling group and the service are both sized to zero (unless you’ve changed that).

In the next post we’ll go through the whole deployment process and get the app up and running.

Proceed to Part 2.
