Containers are a Unix concept that allows applications to be packaged with all their required dependencies into one easy-to-run image. This has resounding benefits for a DevOps workflow, but is it worth the extra hassle?
Containers Synchronize Dev and Prod Environments
With containers, the whole idea is that they package everything you need to run your code into an easily distributable image. All that's required to run the image is to download it and start a container from it.
Gone are the days of "it doesn't work on my machine." With containers, provided everyone has Docker installed properly and knows how to use it, the image should run almost exactly the same on your machine as it does on everyone else's.
This extends to your production environment as well. You might enable a few extra features in development builds, but for the most part, containers can be shipped as-is to your production servers, and you shouldn't experience many issues hosting them.
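As a sketch of what that packaging looks like, here's a minimal Dockerfile for a hypothetical Node.js app (the base image, app layout, and port are illustrative assumptions, not from any particular project):

```dockerfile
# Start from an official Node.js base image
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the source and define the startup command
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```

Anyone with Docker installed can then build and run the app with `docker build -t my-app .` followed by `docker run -p 3000:3000 my-app`, without installing Node or any of its dependencies themselves.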
Containers Enable Efficient Scaling
Because it's so easy to run a container, there are loads of services that will run them for you. These are usually referred to as orchestration tools: tools that manage running multiple instances of containers across many servers.
AWS has its Elastic Container Service (ECS), which manages running your containers on a fleet of EC2 instances or on its serverless Fargate offering. Kubernetes is open source, and most major cloud providers offer managed integrations for it.
Each orchestration service can monitor the health of your instances and spin up new ones when traffic is high. This enables efficient scaling, which can save you a lot of money on hosting costs (up to 90% on AWS with Auto Scaling and Spot Instances), and means you don't have to worry too much about outgrowing your infrastructure.
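To make the orchestration idea concrete, here's a minimal sketch of how Kubernetes expresses "keep several copies of this container running" (the app name, image URL, and port are made-up placeholders):

```yaml
# deployment.yaml: asks Kubernetes to keep three replicas of the image running,
# restarting or rescheduling containers that fail health checks
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0
          ports:
            - containerPort: 3000
```

Pairing a Deployment like this with a HorizontalPodAutoscaler is how the "spin up new ones when traffic is high" behavior is typically configured.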
Plus, containers don't suffer the same performance overhead that comes with running virtual machines, since they don't need a full guest OS for every app. This makes container hosting cheaper in general, and much more efficient.
And all of this is enabled due to the nature of containers, with no extra work required. You can do the same thing on AWS using custom AMIs, but they’re much harder to manage than containers, and you’ll be doing much of the same work anyway.
Containers Version Control Your SysAdmin
Perhaps the coolest consequence of containers is that they bring all of your server configuration out of your SysAdmin's head and into git, where it can be managed and tracked. Because every new package, configuration file, installation script, and dependency lives in the container's build directory, it's trivial to hook it all up to source control.
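For example, a server setup that used to live in a runbook ("SSH in, install nginx, copy this config file") becomes a few declarative, diffable lines in a Dockerfile. This is a hypothetical sketch, not a production-hardened config:

```dockerfile
# Hypothetical sketch: server configuration captured as diffable build steps
FROM debian:bookworm-slim

# Packages an admin would otherwise install by hand over SSH
RUN apt-get update && apt-get install -y --no-install-recommends \
        nginx ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Config files live next to the Dockerfile, so every change shows up as a git diff
COPY nginx.conf /etc/nginx/nginx.conf
COPY site/ /var/www/html/

CMD ["nginx", "-g", "daemon off;"]
```

Every change to the server's setup now goes through the same commits, reviews, and rollbacks as application code.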
Containers integrate particularly well with the Operations side of a DevOps workflow. They allow you to use the same version management and testing systems you have in place to manage your server architecture. And because everyone is in sync using the same environment to develop, build, and test, it should flow very smoothly.
Plus, Docker works well with continuous integration systems. Docker builds are easy to automate, especially if you’re using Azure Pipelines. Pushing a Docker image to your fleet of servers is as simple as updating the image in the repository. You can even deploy a new container on a subset of servers to monitor its health before deploying across the whole fleet, something that would be non-trivial to implement without containers.
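As an illustration of how short that automation can be, here's a build-and-push step using the Docker task in Azure Pipelines (the registry connection and repository names are placeholders you'd swap for your own):

```yaml
# azure-pipelines.yml: build the image and push it to a registry on every commit
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    inputs:
      containerRegistry: my-registry-connection  # placeholder service connection
      repository: my-app
      command: buildAndPush
      tags: |
        $(Build.BuildId)
```

Tagging each image with the build ID means every deployment is traceable back to the exact commit and pipeline run that produced it.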
The Downside: The Headache Is Real
Let's be real: containers are certainly the more elegant solution, but they're harder to set up and work with than just firing up a new Linux box and spending an hour installing software. Everyone has done the latter; the former takes more time investment overall. (Though, if you're running a lot of servers, Docker only needs to be configured once.)
If your task isn't particularly complicated, or you don't have a lot of demand, implementing it with containers can be overkill. There's no real reason to containerize a node app if you're just running it on one instance.
And while containers make it easier to manage all the dependencies that come with running your app, it's also more of a pain to run Docker and bind ports whenever you want to test your app, compared to just running npm start in your project directory. This can be mitigated with startup scripts, but if you're on macOS or Windows, you're still running a whole VM just to load up your web app.
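One common way to soften this (a sketch with made-up service and port names) is a checked-in docker-compose.yml, so the port bindings and volume mounts live in one file and local startup is back to a single command:

```yaml
# docker-compose.yml: "docker compose up" replaces the manual docker run
# incantation with its port and volume flags
services:
  web:
    build: .
    ports:
      - "3000:3000"      # host:container port binding, declared once
    volumes:
      - .:/app           # mount the source tree for live editing during dev
```

It doesn't remove the VM overhead on macOS or Windows, but it does bring the day-to-day workflow back to one short command.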
At the end of the day, if you’re a fan of Docker and its concepts, nothing is stopping you from using it for your personal projects. But the benefits of Docker only really start to outweigh the headaches once you’re operating in a larger team. In a team environment, bringing everything surrounding your app into your version management systems and DevOps workflow helps production to flow smoothly.