Jesse's Software Engineering Blog

Jul 21

Jesse

Docker Container Stacks

Docker is a great tool for defining application environments (containers) and deploying the various containers as a single project. Custom Docker images can be created by writing a Dockerfile, which contains all the instructions for building the image. Multiple Docker images can be managed with Docker Compose, allowing complex container stacks to be easily managed and deployed.

Useful Commands

When building Docker container stacks it can take multiple attempts to get everything up and running correctly, especially when building the images from scratch. Here is a list of useful commands to help speed up the process.

View containers

sudo docker ps # running
sudo docker ps -a # all

Force delete ALL containers

sudo docker rm -f $(sudo docker ps -a -q)

Show / delete local images

sudo docker images
sudo docker rmi <image-id>

Log into an image’s bash prompt to verify configs and settings:

sudo docker run -it <image-id> /bin/bash

Run an image while displaying output

sudo docker run -a STDOUT <image-id>

Verify links on a container

sudo docker inspect -f "{{ .HostConfig.Links }}" <container-id>

View env vars of a service defined in the yml file

sudo docker-compose run <name-from-yml-file> env

Monitor local traffic either outside or inside a container to verify data is being passed around

# specify a network interface
sudo tcpdump -i lo -p -n

# specify a port
sudo tcpdump -p -n dst port 25826

Verify container ports are open

sudo nmap -sU -p 25826 127.0.0.1 # udp
sudo nmap -p 80 127.0.0.1 # tcp

Building An Image

The Dockerfile defines the build instructions for a container. A brief overview of the common instructions (a minimal example sketch follows the list):

  • FROM – sets the base OS for the container
  • ENV – define an environment variable
  • RUN – run a one-time command, good for installing packages, updating the OS, etc.
  • ADD – add a local file to the container
  • VOLUME – declare a directory inside the container as a data volume, which can be mapped to a local directory for data persistence across container restarts
  • CMD – the command to run when the container is booted up

It is important to read through the Docker documentation for a complete overview of all the Dockerfile options.
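
For example, a minimal Dockerfile sketch tying these instructions together (the base image, paths, and script name here are illustrative assumptions, not from any particular project):

# set the base OS for the container
FROM ubuntu:14.04

# define an environment variable
ENV APP_ENV production

# one-time commands: update the OS and install packages
RUN apt-get update && apt-get install -y curl

# add a local file to the container
ADD ./app /opt/app

# declare a data volume for persistence across restarts
VOLUME /opt/app/data

# command to run when the container is booted up
CMD ["/opt/app/start.sh"]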

Once the Dockerfile has been created, an image can be built from it, which is required for running a container.

# -t defines a name (not important yet)
sudo docker build -t jessecascio/local-test:latest .

# verify
sudo docker images

While there are a lot of prebuilt images that should be used for common services like MySQL, Mongo, Apache, etc., custom images can be built on top of them or from scratch. For example, if there is a need for a background cron system built in NodeJS, a custom container could pull the code base from a repo, set up the cron scripts, and run a process in the foreground to prevent the container from terminating.
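
A rough sketch of what that Dockerfile might look like (the repo URL and file names are hypothetical):

# base image with NodeJS preinstalled
FROM node:0.12

# pull the code base from a repo and install cron
RUN apt-get update && apt-get install -y git cron \
    && git clone https://github.com/jessecascio/nodejs-app.git /opt/app

# set up the cron scripts (assumes a crontab file in the repo)
RUN crontab /opt/app/crontab

# run cron in the foreground to prevent the container from terminating
CMD ["cron", "-f"]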

Or, if a service such as SupervisorD needs to be customized and a sufficient prebuilt container cannot be found, a container can be built to install the service, ensure the correct config files are in place, and run any custom init commands.
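
A minimal sketch of that approach, assuming a supervisord.conf file sits next to the Dockerfile:

# set the base OS for the container
FROM ubuntu:14.04

# install the service
RUN apt-get update && apt-get install -y supervisor

# ensure the correct config file is in place
ADD ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# run supervisord in the foreground (-n) so the container stays up
CMD ["/usr/bin/supervisord", "-n"]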

Running A Container

Now that the image has been created, a container can be run from it. It is essential to verify the container is working as expected before incorporating it into a stack. The container can either be run as a daemon (in the background) or be logged into, depending on what needs to be tested.

As a daemon you can verify the container isn’t dying (i.e. the services are being run in the foreground), that the ports are open and accepting data, that services are accessible, etc. From the command line you can verify configs, confirm services are running, check error logs, verify project code, etc. NOTE: When logging into the image directly the CMD from the Dockerfile will not be executed; it will need to be run manually via the command line. Also, if links or env vars are needed they will need to be passed in separately via the run command (see the example after the commands below).

# run in background
sudo docker run -d jessecascio/local-test:latest

# access command line
sudo docker run -it jessecascio/local-test:latest /bin/bash
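
If the container expects links or env vars, they can be passed in manually with the run command; for example (the env var and the linked container name here are hypothetical):

# pass an env var and a link while accessing the command line
sudo docker run -it -e APP_ENV=production --link mongodb:mongodb jessecascio/local-test:latest /bin/bash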

Pushing A Container

Once the container is running and has been verified, it can be committed and pushed to DockerHub so that it is accessible from other machines.

Commit the container by id, setting the name to be used in DockerHub according to the naming conventions. The name is important as this is how the image is pulled in the container stack. Similar to GitHub, newer versions should be tagged with a version number. I typically update the latest tag manually as well when I commit new versions, although this is not required.

sudo docker commit -m "initial commit" -a "jessecascio" <container-id> jessecascio/nodejs-app:0.1.0
sudo docker commit -m "initial commit" -a "jessecascio" <container-id> jessecascio/nodejs-app:latest

Once the images have been committed, simply push (all tags will be pushed)

sudo docker push jessecascio/nodejs-app

Now that the image is in DockerHub it can be viewed via the DockerHub website or pulled to a local machine

# will default to pulling latest tag
sudo docker pull jessecascio/nodejs-app

# or specify version
sudo docker pull jessecascio/nodejs-app:0.1.0

Every container needed for the application should be committed and pushed to DockerHub.

Running The Stack

Once all the Docker containers have been pushed to DockerHub, Docker Compose should be set up. After it has been installed, you just need to create the docker-compose.yml file and run the docker-compose up command.

A brief overview of the yml file options follows. Every run option, and more, can be configured in the yml file, so it is important to read the Docker documentation thoroughly.

  • build – path to Dockerfile
  • environment – environment variables to pass in
  • expose – ports to expose
  • ports – port mapping
  • links – containers to link to

As an example, to set up my NodeJS app to run with MongoDB and RabbitMQ, I could build the following yml file

mongodb:
    image: mongo
    environment:
        - storageEngine=wiredTiger
    expose:
        - "27017"
    ports:
        -"27017:27017"
rabbitmq:
    image: rabbitmq
    expose:
        - "15672"
        - "5672"
    ports:
        -"15672:15672"
        -"5672:5672"
nodeapp:
    image: jessecascio/nodejs-app
    expose:
        - "8080"
        - "80"
    ports:
        - "8080:8080"
        - "80:80"
    links:
        - mongodb
        - rabbitmq

This yml file uses my custom node app image, which contains all my custom logic for pulling code, setting up cron scripts, and running a web server, along with two prebuilt images for running MongoDB and RabbitMQ. All the appropriate ports have been opened, and the app container has been linked to the MongoDB and RabbitMQ containers, sharing container information via env vars and the hosts file. Now that the stack has been defined, running docker-compose up will start containers from all the images and run them as daemons.

sudo docker-compose up -d
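
Once the stack is up, the containers can be checked with Docker Compose directly; the grep pattern below assumes the env var naming that linking injects:

# check the status of the stack's containers
sudo docker-compose ps

# view the aggregated container logs
sudo docker-compose logs

# inspect the env vars the mongodb link injected into the app container
sudo docker-compose run nodeapp env | grep MONGODB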

Alternatively, instead of pulling a prebuilt image with the image option, Docker Compose can build a new image from a Dockerfile; the path to the Dockerfile is specified with the build option

nodeapp:
    build: nodeapp/.

Conclusion

The ability to create and push Docker images, along with composing multiple containers together via a yml file, is extremely powerful. An example use case would be to create a yml file of all needed services and commit that file to source control so it's available to the nodes in a server cluster. Then, when a server needs to be set up, instead of manually ssh'ing in and setting everything up, or using a tool like Puppet or Chef, which incurs unnecessary overhead on smaller servers, simply install Docker and Docker Compose, pull the yml file, and run docker-compose up, and the server will be ready to go. This is especially powerful when packaging a product that requires multiple services, as it allows users a quick and clean product setup.
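
As a rough sketch, the bootstrap for such a server might look like the following (the repo URL is hypothetical, and the install commands assume a Debian/Ubuntu host with pip available):

# install Docker via the official convenience script
curl -fsSL https://get.docker.com | sh

# install Docker Compose (one of several supported install methods)
sudo pip install docker-compose

# pull the yml file from source control
git clone https://github.com/jessecascio/app-stack.git && cd app-stack

# bring up the entire stack as daemons
sudo docker-compose up -d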
