You’ve finished developing your website and now you want to put it online, except you’re running into a ton of problems.
Your application crashes when you start it. A module is missing. You can’t get it to install. When you finally manage to fix the issue a new one pops up.
“Why does it have to be this hard to get my application in front of my users?”
“Is there an issue with my code or is every build like that?”
Rest assured, it’s not your code. Deploying applications is not trivial; some people are paid to do it full time. It’s an entire industry, even.
When you’re learning to code, having to learn hosting on top of that can feel overwhelming.
Luckily, deploying applications has become easier in recent years and there’s a constant flow of new technologies. It used to be much worse before that. Like The Flintstones worse.
Docker is one such technology that makes application deployment less frustrating for developers. Docker neatly packs your application and its environment so it runs without errors in production just like it does on your local machine. It does that in a documented way tracked by version control, so next time you’re deploying you don’t have to worry about forgetting to run a command on the server host.
Docker allows you to easily share your application with other developers too. No more: “It works on my machine”.
In this guide you will learn:
- How to go from your NodeJS application to a Docker image of your application ready to be deployed
- What a Dockerfile is and how it relates to a Docker image
- The concept of Docker instructions with detailed explanations of a few commonly used ones
- The role of `.dockerignore` and how it makes your life easier
Before we dive in, there are two things you need to follow along: Docker installed on your machine and a NodeJS application of your own. I’m explaining each concept along the way so you can apply it to your situation, and I encourage you to follow along using your own NodeJS application.
The first step of deploying your application with Docker is creating a Docker image. A Docker image is a blueprint of your application that contains everything your application needs to run: your code/binaries (your application), runtimes (e.g. NodeJS), dependencies (e.g. 3rd-party libraries in your package.json), and other filesystem objects.
We’ll create a Docker image in three steps, outlined below.
- Write a `Dockerfile`
- Add a `.dockerignore` file
- Build the Docker image
Let’s get to it!
1. Write a Dockerfile
A `Dockerfile` is a step-by-step recipe for building a Docker image. It tells Docker how to build a filesystem with everything in it so your application can run without errors. Each line in the file is an instruction that describes how the filesystem should look. Let’s have a look at a `Dockerfile` example that has the minimum number of steps for a common NodeJS application.
```dockerfile
FROM node:12.14.1
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "index.js"]
```
This file usually sits at the root of your application, next to your `package.json`.
Let me explain what each line does in detail so you’re not left in the dark.
`FROM node:12.14.1` – Sets the base image to `node:12.14.1`
Every `Dockerfile` needs to start with the `FROM` instruction. It tells Docker the starting point of this image. Since we want to save ourselves the time of building an image from scratch, where we’d have to install NodeJS and configure the server ourselves, we use the official `node:12.14.1` image. This image is pulled from the Docker Hub registry and gives us Node 12.14.1 to work with. If you’re running your application on a different version of Node, change the base image to match your local Node version to avoid pesky errors during the build step later on.
For the inquisitive amongst us who want to see what the `Dockerfile` of the official `node:12.14.1` image looks like, you can find it in the NodeJS GitHub repository. That image is in turn based on another image, which is based on yet another image, and so on. Docker images are, in that sense, building blocks that give you a starting point, saving you time and keeping your `Dockerfile` free of obscure instructions unrelated to your application.
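As a side note, most official images on Docker Hub, including `node`, publish slimmer variants of the same version, which can shrink your final image considerably. The exact tags available depend on the image, so check Docker Hub before relying on one; a sketch:

```dockerfile
# Same Node version, smaller base image (verify the tag exists for your version).
FROM node:12.14.1-alpine
```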
`WORKDIR /usr/src/app` – Sets the working directory for future actions
We use `WORKDIR` to specify that actions from this point forward should be taken from the `/usr/src/app` directory in your image filesystem. Otherwise, the next line would have to be `COPY package.json /usr/src/app/package.json`. We could get rid of this line and be a little more verbose in the others, but since a working directory is set anyway, it’s better to be explicit and choose it ourselves to avoid surprises.
Why do we put the application in `/usr/src/app`, you might wonder? This is what Docker suggests, and it follows the Linux Filesystem Hierarchy Standard. It doesn’t matter much which directory you store your application code in, but sticking to the recommendation keeps things consistent and avoids overwriting files or directories used by the OS.
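To see what `WORKDIR` buys us, here is a sketch of the same `Dockerfile` written without it, assuming the same layout as before; every path has to be spelled out in full:

```dockerfile
FROM node:12.14.1
# Without WORKDIR, each instruction must repeat the full path.
COPY package*.json /usr/src/app/
RUN cd /usr/src/app && npm install
COPY . /usr/src/app
CMD ["node", "/usr/src/app/index.js"]
```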
`COPY package*.json ./` – Copies `package.json` (and `package-lock.json` if it exists) into the image
The `COPY` instruction does exactly what it says. It copies your application’s `package.json` and `package-lock.json` files from the host filesystem to the present location (`./`) in your image, which in this case is `/usr/src/app`, as we defined in the previous step. `COPY` takes two arguments: a source and a destination. The source is relative to the location of the `Dockerfile` in your application. The destination is relative to the working directory set by `WORKDIR`.
We want to copy `package-lock.json` as well so we end up with a `node_modules` folder that is (ideally) identical to the one you’ve been using in development. When running `npm install`, NPM checks whether there is a `package-lock.json` file; if there is, it installs the exact versions of your dependencies instead of the latest versions that match the SemVer rules in your `package.json`.
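As an aside, if you want the build to fail outright when `package.json` and `package-lock.json` are out of sync, newer versions of NPM (5.7+) offer `npm ci`, which installs strictly from the lockfile. Swapping it in is a one-line change; a sketch, optional for this guide:

```dockerfile
COPY package*.json ./
# npm ci requires a package-lock.json and installs exactly what it specifies.
RUN npm ci
```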
`RUN npm install` – Installs your application’s dependencies
You’re probably familiar with this step, since you’ve run it yourself on localhost while developing your application. `RUN` executes a command on the image at the working directory location. We run `npm install` to install the application’s dependencies, which will be placed in a `node_modules` folder.
`COPY . .` – Copies the rest of your application’s code into the image
After installing your application’s dependencies, we copy the rest of the application code into the image at the present location. You may wonder: why didn’t we copy all the code in the first place? The reason we first copy `package.json` and `package-lock.json` and install our dependencies before copying the rest of the application is speed.
Docker images are built in layers, and each line in a `Dockerfile` represents a layer. When you build an image, Docker tries to speed up the build by rebuilding only the layer that has changed, along with the layers on top of it (the ones below it in the `Dockerfile`). If we copied the entire codebase before installing our dependencies, then on each change we made during development Docker would have to reinstall all our dependencies, even though most of the time they haven’t changed. As it stands, Docker only runs `npm install` if your `package.json` or `package-lock.json` has changed. If not, it only copies the latest changes in your codebase. Building an image can take some time, so this is a sane optimisation we want to make use of.
`CMD ["node", "index.js"]` – Sets the command to be executed when running the image
The `CMD` instruction is part of the metadata of the image and is usually located at the end of a `Dockerfile`. Unlike the other steps, this step is not run in the build phase; it’s a way to tell Docker how to run the application in this image. There can only be one `CMD` instruction. It takes an argument in the form of a JSON array of strings that will be joined together to form a single command. In this case, we run the application with `node index.js`. If your application has a different entry point, change this accordingly (e.g. `["node", "server.js"]`).
You can also write `CMD node index.js` (without the array), but there’s a slight difference you should be aware of. In our case, we’re using the exec form, where Docker runs our command without invoking a command shell. This is the recommended way, as your application will properly receive signals such as `SIGTERM`. The form without the array is called the shell form, where Docker executes the command with `/bin/sh -c`. Therefore, `CMD node index.js` is the same as `CMD ["sh", "-c", "node index.js"]`. Refer to the Dockerfile reference for the complete documentation of `CMD`.
2. Add a .dockerignore file
You don’t need a `.dockerignore` file to build a Docker image. It does, however, make your life easier when using the `COPY` instruction*. During the `COPY . .` step, we copied the entire codebase into the image, but we don’t want to copy the `node_modules` folder, as we already installed our dependencies in the image in the step before. So you want to add `node_modules` to your `.dockerignore` file, which tells Docker to exclude it from all `COPY` instructions.
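Following the step above, a minimal `.dockerignore` contains a single line:

```
node_modules
```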
The `.dockerignore` file gives us more flexibility in specifying which files we don’t want to copy into the image. It supports wildcards such as `*`. Generally, we want to tell Docker to ignore extremely large files, files that contain sensitive information (e.g. `.env`), and files that are otherwise irrelevant to running the application in production.
* The `ADD` instruction is the other instruction that takes the contents of `.dockerignore` into consideration.
3. Build the Docker image
You’re now ready to build a Docker image based on the `Dockerfile` you’ve created. Open a terminal at the root of your application directory, where your `Dockerfile` is located, and run the following command:
```shell
docker image build -t [application name] .
```
The `-t` option lets you give a name to the image so it’s easier to find later. Replace `[application name]` with the name of your application. The trailing `.` is the path to your application, which is the present location.
When you run this command, you should see Docker step through each instruction in your `Dockerfile`, building your image as it goes. If successful, the build process ends with a message starting with `Successfully tagged …`. To verify your image has been created, run `docker images` and it should appear in the list.
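Before moving on to deployment, you can sanity-check the image by running a container from it locally. The port mapping below assumes your application listens on port 3000; adjust both the image name and the ports to your setup:

```shell
docker container run -p 3000:3000 [application name]
```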
Congratulations! 🎉 You’ve successfully created a Docker image of your application. This can be a daunting process, so if you’ve made it this far, pat yourself on the back.
You now have a container image that can be pushed to a container registry and downloaded from your production server during deployment. In the next tutorial, Automate your Docker deployments, I teach you how to do that with an automated CI/CD pipeline.