
Process Video in the Cloud with AWS’s Elastic Transcoder


Bandwidth costs money, and streaming video out of CloudFront is particularly pricey. You can cut down on that cost by transcoding your videos beforehand with AWS's Elastic Transcoder, which re-encodes them at lower bitrates to shrink file size.

Bandwidth Costs Money

Video files are fairly large as far as media goes, especially compared to images and audio. AWS charges you for storage space and for bandwidth used. If you have a 100 MB video file that is viewed 1,000 times, that's 100 GB of bandwidth, or about $8.50 in bandwidth costs using CloudFront to serve the content. If your application hosts video, this can be a major cost factor.
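The arithmetic above is easy to generalize. Here's a small back-of-the-envelope helper; the $0.085/GB figure is CloudFront's first pricing tier at the time of writing and may differ for your region or usage level:

```javascript
// Rough estimate of CloudFront bandwidth cost for serving one file.
// pricePerGB defaults to CloudFront's first pricing tier ($0.085/GB),
// which varies by region and volume -- check current pricing.
function bandwidthCostUSD(fileSizeMB, views, pricePerGB = 0.085) {
  const totalGB = (fileSizeMB * views) / 1000; // decimal MB -> GB
  return totalGB * pricePerGB;
}

// 100 MB file viewed 1,000 times -> 100 GB served
console.log(bandwidthCostUSD(100, 1000).toFixed(2)); // prints "8.50"
```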

Luckily, video doesn't have to be so large. Through a process called transcoding, you re-encode the video at a different bitrate. Every video is encoded at a certain data rate; for example, a recording may be encoded at 10Mb/s.

Take a look at this zoomed-in frame from some test footage of a jellyfish. The one on the left is encoded at a reasonable 3Mb/s, and the one on the right is encoded at a very high 100Mb/s. Can you tell the difference?

There’s a bit of distortion, and low bitrate video can definitely get blurry with a lot of motion, but for the most part, the 3Mb/s video looks entirely acceptable.


You will notice the difference in size, though: the lower bitrate file takes up 11MB for 30s of footage, while the higher bitrate file takes up 358MB. Obviously you'd never use such a large file in production, but if you accept user uploads without validating them, you might encounter files like this. Even a small decrease in file size can add up to significant savings on files that are downloaded often. For big companies like Netflix, media transcoding is a huge business.
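Those file sizes follow directly from the bitrate: size is just bitrate times duration, divided by 8 to convert megabits to megabytes. Real files land slightly under or over this figure because of container overhead and variable-bitrate encoding, but as a sketch:

```javascript
// Theoretical file size from bitrate: size = (bitrate * duration) / 8.
// Dividing megabits by 8 gives megabytes; real files vary a little
// due to container overhead and variable-bitrate encoding.
function videoSizeMB(bitrateMbps, durationSeconds) {
  return (bitrateMbps * durationSeconds) / 8;
}

console.log(videoSizeMB(3, 30));   // 3 Mb/s for 30 s -> 11.25 MB
console.log(videoSizeMB(100, 30)); // 100 Mb/s for 30 s -> 375 MB
```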

Transcoding is also used to generate different sized videos for different devices. Small mobile devices watching a video in portrait mode probably don’t need a 4K video, or even a 1080p one. Users on slow connections may only be able to stream a 480p video. AWS’s Elastic Transcoder can generate multiple video files for each input.

This is exactly what YouTube does whenever you upload a video—your video is processed, and transcoded for delivery on multiple platforms (and if you’re particularly early to a YouTube video, you might notice the quality doesn’t go above 480p, because the 1080p transcode hasn’t finished yet).

How to Use AWS’s Elastic Transcoder

Head over to the Elastic Transcoder Console. You'll want to create a new Pipeline, a queue that manages the transcoding jobs. Pipelines use S3 buckets for input and output, so you'll want to create two new buckets from the S3 Management Console.

Give your pipeline a name, and then select your input bucket:


You’ll want to make sure your bucket and pipeline are in the same AWS region, or you’ll be charged for data processed and the transcoding will be slower. Select the output bucket, and a bucket to use for thumbnails. (This can be the output bucket.)


Create your pipeline, and make note of the Pipeline ID:


Actually using the Elastic Transcoder is a strangely manual process. You have to create a new job from the console for each file, and queue it with the appropriate settings. Luckily, you can automate the whole process with a Lambda function that will run whenever a new video file is uploaded to S3.

Head over to the Lambda Console, and create a new function. Choose Node.js 10 as the runtime, and paste in this script, courtesy of Swapnil Pawar on Medium.

You’ll want to edit the values for pipelineId and bucket, placing them in quotes like so:


You’ll also want to edit PresetId to the preset you want to transcode, which you can find in the AWS Docs. You can create your own from the transcoder console if the default ones are not sufficient. If you want to queue multiple transcodes for a single file, add more items to this array:


Once you’ve filled in everything, add a trigger for your Lambda function to run whenever an object is created in your input bucket:



Under the execution role, make sure the role has access to Elastic Transcoder.

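For instance, an inline policy granting just job submission might look like this (the `Resource` is left wide open here for brevity; in production you'd scope it to your pipeline's ARN):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elastictranscoder:ReadPipeline",
        "elastictranscoder:CreateJob"
      ],
      "Resource": "*"
    }
  ]
}
```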

You can use this sample jellyfish footage to test your function. Download a medium bitrate file (30Mb/s or so), and upload it into your input bucket. If it was successful, you should see a new job queued in the “Jobs” tab of the Elastic Transcoder Console, and you should see a new “videos” folder in your output bucket that contains the output files. The “Generic 1080p” preset took a 112MB 30Mb/s video, and encoded it down to just 18MB (about 5Mb/s):


If your Lambda function failed, you can view the logs under the “Monitoring” tab. You can also create a test case to run the function without uploading anything to S3, though the job sent to the transcoder will be bunk.
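For a test case, you only need the fields the function actually reads. A minimal event mimicking an S3 upload might look like this (the bucket and key are made-up values; the Lambda console also ships an "Amazon S3 Put" test template you can adapt):

```json
{
  "Records": [
    {
      "s3": {
        "bucket": { "name": "my-input-bucket" },
        "object": { "key": "test-video.mp4" }
      }
    }
  ]
}
```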

This script, in particular, will save the output files with the exact same name as the input, though you can add a prefix if you want. The transcoder works fairly quickly, so you’ll be able to access your video in the output bucket soon after uploading.

Anthony Heddings
Anthony Heddings is the resident cloud engineer for LifeSavvy Media, a technical writer, programmer, and an expert at Amazon's AWS platform. He's written hundreds of articles for How-To Geek and CloudSavvy IT that have been read millions of times.

The above article may contain affiliate links, which help support CloudSavvy IT.