CodeDeploy is one piece of AWS’s automated CI/CD pipeline called CodePipeline. CodeDeploy handles the deployment part of the pipeline, getting the latest code updates onto your fleet of servers without requiring you to update them manually.
How Does CodeDeploy Work?
CodeDeploy supports automated deployments to EC2, Lambda, and ECS. For each application type, you create a “Deployment Group” that tells CodeDeploy which servers to update and how. CodeDeploy can update servers one at a time, connecting to your load balancer and automatically routing traffic away from servers being updated. You can also choose to spin up entirely new servers to avoid downtime entirely.
You can also use it to manually deploy new code, but it’s much more useful in conjunction with CodePipeline, which automates the process. Every time new commits are pushed to the chosen release branch in CodeCommit, GitHub, or Bitbucket, CodePipeline sends them over to CodeBuild for automated testing and building. If the build is successful, the finished build is sent to CodeDeploy, which handles the deployment process.
The actual updating is handled by the CodeDeploy agent, which you install and configure with an appspec file on your EC2 instances to handle replacing files, running before/after install scripts, and starting the updated application.
For Lambda, the CodeDeploy setup is much simpler, though you will have to define your Lambda functions with a SAM template. CodeDeploy can slowly shift traffic to the new Lambda function using a separate version and an alias set up to direct a percentage of traffic to the new version.
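As a minimal sketch of what that looks like in a SAM template (the function name, runtime, and paths here are placeholders, not from the original setup):

```yaml
# Hypothetical SAM template snippet; "MyFunction" and the paths are placeholders.
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: ./src
      # Publishes a new version on each deploy and repoints the "live" alias at it
      AutoPublishAlias: live
      DeploymentPreference:
        # CodeDeploy shifts 10% of traffic to the new version, waits
        # 5 minutes, then shifts the remaining 90%
        Type: Canary10Percent5Minutes
```

The `DeploymentPreference` block is what hands the traffic shifting over to CodeDeploy; other built-in patterns include linear and all-at-once shifts.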
For ECS, you need to set up CodeBuild to automate building your container and pushing it to AWS ECR. Once it’s there, though, the actual setup is as simple as selecting your ECS cluster. In fact, you don’t even have to use CodeDeploy for this; you can simply select AWS ECS as the deployment provider.
How to Set Up CodeDeploy
We’re going to walk through the setup for EC2, as the other two are very straightforward. (The hard part for Lambda and ECS is setting up CodeBuild.)
From the CodeDeploy console, create a new application, and choose your compute platform: EC2/On-Premises, Lambda, or ECS. From this application, you create a deployment group:
The deployment group wizard has a lot of options. Give it a name to start, then head over to the IAM console to create a service role for CodeDeploy to operate as. Choose “CodeDeploy” as the service, and select the use case that matches your compute platform.
AWS automatically attaches a proper permissions policy. Just click through, and give it a name. Copy the ARN for the role, then paste it into the CodeDeploy settings.
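If you’d rather create the role from the CLI or a template, the trust policy that lets CodeDeploy assume the role looks like this (the managed AWSCodeDeployRole permissions policy is then attached on top):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codedeploy.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```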
Next, you must choose how you want your servers updated. There are two primary options:
- In-Place, which updates servers without provisioning new ones. Say you have 10 servers; you could configure CodeDeploy to update 2 at a time, while keeping 8 of them healthy. The servers being updated are taken offline, and traffic is redirected to the other 8 (presuming you have a load balancer).
- Blue/Green, which leaves your current servers in place, and slowly fires up additional servers with updated code. Once a new “Green” server comes online, one of the old “Blue” servers will be taken offline.
Both have their pros and cons. They cost you about the same, though Blue/Green costs slightly more if you want to support quick rollbacks, since you need to leave the old servers running for a while. Technically, you’re also paying for the time your green servers take to boot up, but that shouldn’t amount to much if you take the blue servers offline immediately.
Blue/Green also lets you keep the old servers running for a bit, so if your clients hold short-lived sessions, you can let those sessions drain before transitioning clients over to the new servers. Most web-based applications shouldn’t care which server sits behind the load balancer, though.
In-Place is simpler and works with Reserved Instances, but it always takes at least one of your servers offline during a deployment. Blue/Green doesn’t support Reserved Instances, but it works better if you only have a few servers. The choice is up to you; they’re mostly the same to set up.
You can manage the deployment settings more finely with a deployment configuration. There are three defaults for EC2:
- All at once, which you should only use with Blue/Green unless you like angry customers.
- Half at a time, which keeps 50% of your hosts healthy.
- One at a time, which is slow but the safest option.
You can create your own deployment configurations, so you’re not limited to these three. You can specify how many hosts you’d like to keep healthy at any given time, either as a percentage or a number.
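As a sketch, a custom configuration can also be created from the AWS CLI; the configuration name here is a placeholder:

```shell
# Hypothetical custom deployment configuration: keep at least 75% of
# the fleet healthy during a deployment. Use type=HOST_COUNT instead
# of FLEET_PERCENT to specify an absolute number of instances.
aws deploy create-deployment-config \
  --deployment-config-name KeepMostHostsHealthy \
  --minimum-healthy-hosts type=FLEET_PERCENT,value=75
```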
Once that’s configured, you need to select the servers you’re actually deploying to. If you’re using CodeDeploy, you most likely have Auto Scaling configured, so go ahead and select the Auto Scaling group you’re trying to update.
Otherwise, you can select EC2 instances manually by tag and value.
Choose the target group you use for your load balancer. You definitely want to keep load balancing enabled; otherwise, CodeDeploy won’t route traffic away from instances that are in the process of being updated.
Under “Advanced – optional”, you’ll find a useful feature you may want to enable. You can configure CloudWatch alarms for deployment groups, which can alert you if your application is behaving poorly. You can connect this to CodeDeploy to automate a rollback whenever alarm thresholds are met, which minimizes your downtime.
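As a sketch, the same rollback behavior can be wired up from the CLI once the alarm exists; the application, group, and alarm names here are placeholders:

```shell
# Hypothetical names; "HighErrorRate" must already exist as a CloudWatch alarm.
# DEPLOYMENT_STOP_ON_ALARM rolls the deployment back when the alarm fires.
aws deploy update-deployment-group \
  --application-name MyApp \
  --current-deployment-group-name Production \
  --alarm-configuration enabled=true,alarms=[{name=HighErrorRate}] \
  --auto-rollback-configuration enabled=true,events=DEPLOYMENT_STOP_ON_ALARM
```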
CodeDeploy should now be configured, and ready to be integrated into the rest of the pipeline.
In order for CodeDeploy to do anything meaningful, you need to install the CodeDeploy agent on your servers. Really, you should integrate it into your instance creation script or custom AMI; otherwise, you won’t be able to use Auto Scaling or Blue/Green deployments.
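As a sketch, the agent install on Amazon Linux looks something like this in a user-data script. The install bucket is region-specific, so the us-east-1 name here is an assumption; swap in your own region:

```shell
#!/bin/bash
# Install the CodeDeploy agent on Amazon Linux
# (region-specific bucket assumed here: us-east-1)
yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
./install auto
# Verify the agent is running
systemctl status codedeploy-agent
```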
You also need to write a file called appspec.yml and place it in the root of your repository. Essentially, this file tells the CodeDeploy agent what to copy where, what to run to start your application, and which scripts to run before and after the install. In a way, it functions much like a Dockerfile.
The basic structure is as follows:
```yaml
version: 0.0
os: linux
files:
  - source: Config/config.txt
    destination: /webapps/Config
  - source: source
    destination: /webapps/myApp
hooks:
  BeforeInstall:
    - location: Scripts/UnzipResourceBundle.sh
    - location: Scripts/UnzipDataBundle.sh
  AfterInstall:
    - location: Scripts/RunResourceTests.sh
      timeout: 180
  ApplicationStart:
    - location: Scripts/RunFunctionalTests.sh
      timeout: 3600
  ValidateService:
    - location: Scripts/MonitorService.sh
      timeout: 3600
      runas: codedeployuser
```
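The hook scripts themselves are ordinary executables. As an illustrative sketch (the port and health endpoint are assumptions, not part of the appspec above), a ValidateService script might poll the application until it responds:

```shell
#!/bin/bash
# Hypothetical ValidateService hook: poll an assumed health endpoint
# for up to 30 seconds. A non-zero exit fails the lifecycle event,
# which fails the deployment (and can trigger an automatic rollback).
for _ in $(seq 1 30); do
  if curl -sf http://localhost:8080/health > /dev/null; then
    exit 0
  fi
  sleep 1
done
exit 1
```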
You can read our guide to writing appspec and buildspec files to learn more.
Connecting CodeDeploy to Your Pipeline
Connecting to CodePipeline is fairly simple. It can be found in the same sidebar as all the other CodeSuite tools. Create a new pipeline, then select your source repository. CodePipeline watches for changes in this repository and can trigger an automatic update.
The source is sent directly to the build stage. You can use CodeBuild for this, but if your application doesn’t need to be built, you can skip this step.
At the deploy stage, select “AWS CodeDeploy” as the provider, select the region it will operate in, then choose your application and deployment group.
That’s all the configuration necessary. Once you create the pipeline, it will run automatically with whatever is in the source repo. You can run it manually, or push new changes to the source repo to trigger an automatic deployment.
One thing to note: if you skip the build stage, the unbuilt source is sent directly to CodeDeploy. If you have a build stage, CodeDeploy receives its input from CodeBuild instead, and the artifact will include the build output.