Docker is the de facto toolset for building modern applications and setting up a CI/CD pipeline, helping you build, ship and run your applications in containers on-premises and in the cloud.
Whether you're running on simple compute instances such as AWS EC2 or Azure VMs, or something a little fancier like a hosted Kubernetes service such as AWS EKS, Azure AKS or DigitalOcean Kubernetes, Docker's toolset is your new BFF.
But what about your local development environment? Setting up local dev environments can be frustrating.
Remember the last time you joined a new development team?
You needed to configure your local machine, install development tools, pull repositories, fight through out-of-date onboarding docs and READMEs, and get everything running and working locally, all before you knew much about the code and its architecture. Oh, and don't forget about databases, caching layers and message queues. These are notoriously hard to set up and develop against locally.
I've never worked at a place where we didn't expect at least a week or more of onboarding for new developers.
So what are we to do? There is no silver bullet, and these things are hard to do (that's why you get paid the big bucks), but with the help of Docker and its toolset, we can make things a lot easier.
In Part I of this tutorial, we'll walk through setting up a local development environment for a relatively complex application that uses Node and Express for its microservices, PostgreSQL for its datastore and Redis for its caching layer, all behind an Nginx reverse proxy. Then, we'll use Docker to build our images and Docker Compose to simplify everything.
Let's get started.
Prerequisites
To complete this tutorial, you will need:
- Docker and Docker Compose installed on your development machine
- Git installed for cloning the code
- A Docker Hub account
- An IDE or text editor to use for editing files. I would recommend VSCode.
Did you know that VSCode has an embedded terminal, which can be opened using the Ctrl + ` (backtick) key combination?
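Before moving on, it's worth confirming the tooling is in place. From a terminal (the embedded one works fine), each of these commands should print a version number:
docker --version
docker-compose --version
git --version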
Clone the Code Repository
The first thing we want to do is download the code to our local development machine. Let's do this using the following git command:
git clone https://github.com/ctrlTilde/node-api-skeleton.git
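By default, git clone puts the code in a node-api-skeleton directory, so change into it:
cd node-api-skeleton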
Now that we have the source code available locally, let's look at the project structure. First, open the code in your favourite IDE and expand the root level directories. Then, you'll see the following file structure.
├── docker-compose.yml
├── nginx
│   ├── default.conf
│   └── Dockerfile
└── node_api
    ├── api
    │   ├── controller
    │   │   └── rootController.js
    │   ├── lib
    │   │   ├── logger.js
    │   │   ├── responder.js
    │   │   └── wrapper.js
    │   └── routes
    │       ├── index.js
    │       └── root.js
    ├── app.js
    ├── Dockerfile
    ├── lambda.js
    ├── package.json
    ├── server.js
    └── yarn.lock
The application comprises a simple microservice that provides a RESTful API. Even though it is not consumed as part of the existing skeleton structure, it would use PostgreSQL as its relational datastore. It also has a caching layer, provided by Redis, that can be consumed.
Typically at this point, we would start a local copy of PostgreSQL, or start looking through the project to find out where our application expects to find the database.
Additionally, we would do the same for Redis.
For larger applications with multiple microservices, we would start each one independently, then finally start the UI and hope that the default configuration works.
It can be very complicated and frustrating, especially if our microservices use different versions of Node.js and are configured differently.
To make things worse, when it comes to deployment, we would need to sit and mess with a web server to expose each microservice's path.
So let's walk through making this process easier by dockerizing our application and putting our database into a container.
Dockerizing Applications
Docker is a great way to provide consistent development environments. It will allow us to run each of our services and the UI in a container. We'll also set things up so that we can develop locally and start all of our dependencies with one Docker command.
The first thing we want to do is dockerize each of our applications. For larger applications containing multiple microservices, you can even share the same Dockerfile between them.
Create Dockerfiles
Create a Dockerfile in the node_api directory and add the following commands.
# Use the current LTS release of the official Node.js image
FROM node:lts
# Create the app directory and install nodemon (and the latest npm) globally
RUN mkdir -p /src && npm i -g nodemon npm
WORKDIR /src
# Copy the dependency manifest first so the install layer can be cached
ADD package.json package.json
RUN yarn install
# Copy the rest of the application source into the image
COPY . /src/
The above is a very basic Dockerfile for a Node.js project. If you are not familiar with the commands, you can start with the Docker getting-started guide, and take a look at the Dockerfile reference documentation.
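One useful companion to a Dockerfile like this is a .dockerignore file, which keeps local artefacts out of the build context so that COPY . /src/ doesn't drag them into the image. The skeleton doesn't appear to ship a meaningful one (the build output below transfers only a couple of bytes for it), so treat this as an optional suggestion rather than part of the repository:
node_modules
npm-debug.log
.git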
Building Docker Images
Now that we've created our Dockerfile, let's build our image. Make sure you're in the node_api directory and run the following command:
docker build -t node_api .
$ docker build -t node_api .
[+] Building 1.1s (11/11) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 175B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/node:lts 1.0s
=> [1/6] FROM docker.io/library/node:lts@sha256:ffe804d6fcced29bc 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 94.79kB 0.0s
=> CACHED [2/6] RUN mkdir -p /src && npm i -g nodemon npm 0.0s
=> CACHED [3/6] WORKDIR /src 0.0s
=> CACHED [4/6] ADD package.json package.json 0.0s
=> CACHED [5/6] RUN yarn install 0.0s
=> CACHED [6/6] COPY . /src/ 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:2a6894296dcce08e11624cd8e3d634bbef62d1 0.0s
=> => naming to docker.io/library/node_api 0.0s
Now that we have our image built, let's run it as a container and test that it's working.
docker run --rm -p 8080:8080 --name node node_api yarn start
Okay, we now have a running container, but the command prompt is attached to the container's process. The container will therefore keep running until it crashes or we intervene.
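Before we dig into managing the container, it's worth a quick smoke test from a second terminal. The exact paths that respond depend on the routes defined in routes/root.js, but hitting the root path on the published port should at least return an HTTP response:
curl -i http://localhost:8080/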
While we're in that second terminal, let's go through some Docker basics.
- See running containers:
docker ps
- For this example, I've launched all the containers in the stack (we'll cover how later in this tutorial):
$ docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED         STATUS         PORTS                                       NAMES
1ac9214f991d   node-api-skeleton_nginx      "/docker-entrypoint.…"   6 seconds ago   Up 6 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp           nginx
85f538fa26af   node-api-skeleton_node_api   "docker-entrypoint.s…"   7 seconds ago   Up 6 seconds   8080/tcp                                    node_api
662ac5a63f54   redis                        "docker-entrypoint.s…"   8 seconds ago   Up 7 seconds   0.0.0.0:6379->6379/tcp, :::6379->6379/tcp   redis-cache
fd3d05b487a5   postgres:alpine              "docker-entrypoint.s…"   8 seconds ago   Up 7 seconds   5432/tcp                                    node-api-skeleton_postgres_1
- Find a container ID by filtering on the image it was created from:
docker ps -q --filter ancestor=node_api
$ docker ps -q --filter ancestor=node_api
85f538fa26af
- Stop the container created from the node_api image:
docker stop $(docker ps -q --filter ancestor=node_api)
- Stop and delete the container created from the node_api image:
docker rm $(docker stop $(docker ps -a -q --filter ancestor=node_api))
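Two more basics that are handy while a container is running (these are standard Docker CLI commands, not specific to this project):
- Follow a container's logs:
docker logs -f node_api
- Open an interactive shell inside a container:
docker exec -it node_api sh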
In our new terminal, let us proceed to stop the node_api container:
docker stop $(docker ps -q --filter ancestor=node_api)
Our original terminal has been released, and we can use it again.
Local Database and Containers
So, we have seen how we can use Docker on its own to run our API, but what about the databases and other dependencies?
Not to stress: Docker Compose to the rescue.
Let us open the docker-compose.yml file in our editor:
version: '3'

services:
  postgres:
    restart: always
    image: postgres:alpine
    env_file: ./.env
    environment:
      POSTGRES_USER: $DB_USER
      POSTGRES_PASSWORD: $DB_PASS
      POSTGRES_DB: $DB_NAME
    expose:
      - "5432"
    volumes:
      - data:/var/lib/postgresql/data

  redis:
    container_name: redis-cache
    image: redis
    ports:
      - '6379:6379'
    expose:
      - '6379'

  node_api:
    restart: always
    build: ./node_api
    container_name: node_api
    expose:
      - "8080"
    command: yarn start
    depends_on:
      - postgres
      - redis
    env_file: ./.env
    environment:
      - DB_HOST=postgres
      - VIRTUAL_HOST=proxy.example
    stdin_open: true
    tty: true

  nginx:
    build:
      context: ./nginx
    container_name: nginx
    ports:
      - "80:80"
    links:
      - node_api:node_api

volumes:
  data:
    external: false
As seen above, we use service definitions to pull in the upstream images that supply PostgreSQL and Redis.
This structure will download the images, set them up, pass environment variables into them, and run them with connectivity between each other, while only exposing the REST API to us on port 80 through Nginx.
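Notice that both the postgres and node_api services read an env_file at ./.env. That file isn't part of the tree shown earlier, so create one in the project root before starting the stack. The variable names come from the compose file above; the values here are placeholders to replace with your own:
DB_USER=postgres
DB_PASS=changeme
DB_NAME=node_api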
To launch the stack, run docker-compose up -d from your terminal.
$ docker-compose up -d
Creating network "node-api-skeleton_default" with the default driver
Creating redis-cache ... done
Creating node-api-skeleton_postgres_1 ... done
Creating node_api ... done
Creating nginx ... done
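With the stack up, the API is now only reachable through Nginx on port 80 (the node_api container itself is exposed to the other containers but not published to the host). Which paths respond depends on the routes in the skeleton and on nginx/default.conf, but a quick smoke test against the root path looks like this:
curl -i http://localhost/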
We can now see that, without needing to install any additional dependencies on our machine, we can quickly and easily set up a development environment using nothing but Docker, Git and an IDE such as VSCode.
Until next time, when we will look at deploying our application to the cloud.