Getting Started with Docker

Softray Solutions
7 min read · Jan 18, 2023


Written by Sanja Blagojević, Software Developer at Softray Solutions

If you work in the IT industry, you have probably heard the sentence, “It worked on my machine!”. We have all said this to our QA team or fellow developers from time to time. Just imagine not having to worry about that anymore: something that works on your machine will also work on other people’s machines or on production servers. Docker aims to make that dream a reality, and it does it pretty well!

Over the past few years, Docker has become one of the most popular tools for developing and deploying software.

What is Docker?

Docker is an open-source platform that enables developers to build, deploy, run, update and manage containers.

So that’s a nice sentence, but what exactly does it mean? What’s a container in software development, and why might we want to use it?

A container in software development is a standardized unit of software: a package of code together with the dependencies needed to run that code. The same container always yields the same application and execution behavior, no matter where or by whom it is executed. Support for containers is built into modern operating systems.

In the end, Docker simplifies the creation and management of such containers.

Getting started with Docker

Installation

The first step is to ensure that you install Docker on your machine. Docker is available for installation on several Linux distributions and Windows and Mac operating systems. Visit https://docs.docker.com/get-docker/ and select installation for your preferred operating system.

Docker Engine, or simply Docker, is the base engine installed on your host machine to build and run containers using Docker components and services.

Docker Client is the way you’ll interact with Docker. The Docker Client uses the Docker API to send commands to the Docker Daemon.

Docker Daemon listens to clients’ requests and interacts with the operating system to create or manage containers.

Docker Registry is an open-source server-side service used for hosting and distributing images. Docker Hub is the largest registry of Docker images. Images can be stored in either public or private repositories. We use the pull and push commands to interact with a Docker Registry: pull downloads a Docker image from a registry so you can build a container from it, and push uploads your Docker image to a registry so others can use it.
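
For instance, interacting with a registry from the command line looks roughly like this (myusername is just a placeholder for your own Docker Hub account, and my-image is the image we will build later in this article):

# Download the official node image from Docker Hub
$ docker pull node

# Tag a local image with your repository name, then upload it
# (you need to be logged in with "docker login" first)
$ docker tag my-image:latest myusername/my-image:latest
$ docker push myusername/my-image:latest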

Docker Repository is a collection of related Docker images with the same name but different tags. For example, node:18 and node:latest are two images in the node repository.

A Docker Image is not an image in the photographic sense; Docker images are more like blueprints or templates for containers. In real life, you can think of Docker images as cookie cutters or molds. Images are read-only and contain the application and the necessary application environment (operating system, runtimes, tools, …).

Dockerfile is a text file with instructions to build Docker images. Each instruction in the file represents a layer of the image.

Docker Container is a runnable instance of an image. It is a standalone, executable software package that includes applications and their dependencies. Using Docker API or CLI, we can start, stop, delete or move the container. A container is defined by the image and any additional configuration options you provide to it when you create or start it. From one image, you can create as many containers as you want.
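
As a quick illustration, here is roughly how you could create two containers from the same image and manage them independently (the image and names below are just placeholders for this example):

# Start two independent containers from the same image
$ docker run -d --name web1 nginx
$ docker run -d --name web2 nginx

# Stop and delete only the first one; the second keeps running
$ docker stop web1
$ docker rm web1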

Let’s dockerize a Node.js web app!

This simple example aims to show the process of getting a Node.js application into a Docker container.

1. Create a Node.js app

First, create a new directory where all the files will live. In this directory, create a package.json file that describes your app and its dependencies.

{
  "name": "docker-guide",
  "version": "1.0.0",
  "description": "",
  "main": "server.js",
  "author": "Sanja Blagojevic",
  "dependencies": {
    "express": "^4.17.1"
  }
}

Then, create a server.js file that defines a web app using the Express.js framework:

const express = require('express')
const app = express()
const port = 80

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`)
})

This app starts a server and listens on port 80 for connections. The app responds with “Hello World!” for requests to the root URL. For every other path, it will respond with a 404 Not Found.

What would you usually do to run this app locally? First, you would need to install Node.js from nodejs.org. Then you would open a terminal in your project folder and run npm install to download and install all dependencies. Once that is done, you could execute the server.js file with the node command, which would start the server on localhost, port 80.
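
For reference, that local workflow would look something like this (assuming Node.js and npm are already installed):

# Install the dependencies listed in package.json
$ npm install

# Start the server (binding to port 80 may require elevated privileges on some systems)
$ node server.js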

We don’t want to do this!

2. Create a Dockerfile

We will create our Dockerfile in the folder that contains our code. Our Dockerfile will look like this:

FROM node

WORKDIR /app

COPY . /app

RUN npm install

EXPOSE 80

CMD ["node", "server.js"]

Let’s go through the Dockerfile line by line:

FROM node

A Dockerfile must begin with a FROM instruction. The FROM instruction specifies the parent image from which you are building. In this case, we are using the official node image.

WORKDIR /app

We then set the working directory in our container with WORKDIR. WORKDIR /app sets the current directory to /app when the container starts running.

COPY . /app

As a next step, we want to tell Docker which files from our local machine should go into the image. For that, Docker uses the COPY instruction, which takes two paths. The first path (.) says that all files from our current directory (the one containing the Dockerfile) should be copied; the second path (/app) is the destination directory inside the image.

RUN npm install

The RUN instruction executes when we build the image. Any additional dependencies or packages are usually installed using RUN. For Node applications, we run npm install in order to install all the dependencies of our application.

EXPOSE 80

The EXPOSE instruction in a Dockerfile documents that the container listens for traffic on the specified port. Our app listens on port 80, so we expose that port. Note that EXPOSE does not actually publish the port; that happens with the -p flag when you run the container.

CMD ["node", "server.js"]

The CMD instruction specifies the command that is executed when we start the container. The difference from RUN is that CMD is not executed when the image is built, but when a container is started based on the image. And that’s exactly what we want.

3. Build your image

Now that you have a Dockerfile, build the image by executing the following command:

$ docker build -t my-image:latest .

The -t flag lets you tag your image so it’s easier to find later. To see all images, execute the docker images command:

$ docker images

REPOSITORY   TAG      IMAGE ID       CREATED              SIZE
my-image     latest   a01687f0e426   About a minute ago   1GB
node         latest   51bd6c84a7f2   7 days ago           998MB

4. Run the image

Running your image with -d runs the container in detached mode, leaving it running in the background. The -p flag publishes a port, mapping a port on the host to a port inside the container. By adding --name, you assign a name to the container, which can then be used for stopping, removing, and so on. Run the image you previously built:

$ docker run -p 3000:80 -d --name my_container my-image:latest 

716c2fe68aac30f3af5d2ad2868462bd975640938170e52b6d0214a8396ea4bf
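
To check that the container is actually serving requests, you can, for example, curl the published port (3000 on the host maps to 80 in the container):

$ curl http://localhost:3000
Hello World!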

Useful Docker commands when working with containers (a short example session follows the list):

  • docker ps: list all running containers (add the -a flag to include stopped ones)
  • docker stop NAME: stop the container with the given name
  • docker rm NAME: delete a stopped container
  • docker rmi IMAGE: remove an image by name or ID
  • docker push IMAGE: push an image to Docker Hub (or another registry); the image name/tag must include the repository name/URL
  • docker pull IMAGE: pull (download) an image from Docker Hub (or another registry); this is done automatically if you just docker run IMAGE and the image wasn’t pulled before
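
As a short example session, cleaning up the container and image from this article might look like this:

# List running containers; my_container should show up here
$ docker ps

# Stop the container by name, then remove it
$ docker stop my_container
$ docker rm my_container

# Once no container uses it, the image itself can be removed
$ docker rmi my-image:latest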

For a full list of all commands, add --help after a command, e.g. docker --help, docker run --help, etc. You can use --help on all subcommands.

Congratulations! You learnt how to dockerize your app!

Before we get to the conclusion, let’s mention a few more important concepts and features.

1. Like Git, Docker also has an ignore file, called .dockerignore. In this file, you define which files and folders should not be copied into the image.

2. Volumes and bind mounts are mechanisms for persisting data generated and used by Docker containers (see the example after this list).

3. Docker Networking allows you to connect Docker containers together.

4. If you have a multi-container application, Docker Compose is the tool for you.
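
For example, here is a rough sketch of how a named volume and a bind mount could be attached to the container from this article with the -v flag (the /app/logs path is hypothetical, since our demo app doesn’t actually write any files):

# "app-logs" is a named volume managed by Docker; it survives container removal.
# The second -v is a bind mount that maps your local project folder into the container.
$ docker run -p 3000:80 -d --name my_container \
    -v app-logs:/app/logs \
    -v "$(pwd):/app" \
    my-image:latest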

Conclusion

The main purpose of this article was to introduce you to Docker. We just scratched the surface with the basics. I hope you enjoyed reading this article.

Docker is a complex system, and if you want to know more, check out the learning resources below.

Docker: Learning Resources

Official Docker Documentation

Docker Cheat Sheet

Docker and Kubernetes: The Practical Guide by Maximilian Schwarzmüller

What Is Docker? | What Is Docker And How It Works? | Docker Tutorial For Beginners | Simplilearn

If you enjoyed reading this, click the clap button so others can find this post.
