Docker 101
You might think Docker is a tool reserved for gnarly sys admins, useful only to service companies that run complicated SaaS applications, but that is not true: Docker is useful for everybody.
Docker [1] manages and runs containers, isolated environments that behave much like a complete operating system. A container is similar to a virtual machine, but it relies heavily on the underlying operating system (called the "host") to work. Instead of building a whole operating system with emulated hardware, its own kernel, and so on, a container uses everything it can from the underlying machine and, if it is well designed, implements only the bare essentials needed to run the application or service you want it to run.
Whereas virtual machines are designed to run everything a regular machine can run, containers are usually designed to run very specific jobs. That is why Docker is so popular for online platforms: You can have a blogging system in one container, a user forum in another, a store in another, and the database engine they all use in the background in another. Every container is perfectly isolated from the others. Docker allows you to link them up and pass information between them. If one goes down, the rest continue working; when the time comes to migrate to a new host, you just have to copy over the containers.
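As a sketch of that idea, a user-defined bridge network lets containers on the same host reach each other by name (the network and container names here are purely illustrative):

```shell
# Create a private network for the containers to share
docker network create site-net

# A database container and a blog container on the same network
# can reach each other by container name (e.g., the host "db")
docker run -d --name db --network site-net postgres
docker run -d --name blog --network site-net wordpress
```

The blog container can then be configured to talk to its database at the hostname db, with no hardcoded IP addresses.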
But there's more: Docker is building a library of images [2] that lets you enjoy whole services just by downloading and running them. These libraries are provided by the Docker company or shared by users and go from the very, very general, like a WordPress container [3], to the very, very niche, like a container that provides the framework to run a Minetest [4] server [5].
This means exactly what you think it means: Download the image, run it (with certain parameters), and your service is ready, madam – no dependency hunting, very little configuring, and not much more beyond hooking up the service to a database (running in another container) and setting your password as the service administrator.
Getting Started
To enjoy the marvels of Docker, first install it on your box. Most, if not all, of the main distributions have relatively modern versions of the Docker packages in their repositories. In Debian, Ubuntu, and other Debian-based distributions, look for a package called docker.io. In Fedora, openSUSE, Arch, Manjaro, Antergos, and others, it is simply docker. You will also find official and updated versions of the software for several systems at the Docker website [6].
Once Docker is downloaded and installed, check that the daemon is running:
systemctl status docker
If it is not, start it and enable it so that it runs every time you boot your machine:
sudo systemctl start docker
sudo systemctl enable docker
Now that Docker is running, it is time to get some images.
Imagine
An image is similar to an ISO image you would use to install a GNU/Linux operating system, except you don't need to burn it to a DVD or USB thumb drive.
You can use the docker utility to search for images like this:
docker search peertube
Docker will show you all the available images that contain the word "peertube" in the name or description (Figure 1). It will also show each image's rating given by users – more stars is better.
To install an image, you can pull it from a repository:
docker pull chocobozzz/peertube
This will download a PeerTube image (see the "What is PeerTube?" box) from Docker's repository and add it to your roster.
What is PeerTube?
PeerTube [7] is a video portal service akin to YouTube and Vimeo (Figure 2), but without any of the dumb restrictions of those closed and proprietary alternatives. It is called PeerTube because anyone can set up a server and join a federated network of PeerTube instances; any video a user uploads to one instance gets propagated to the other instances. All instances share the load of streaming the videos to visitors using P2P technology.
You can check that the image is now installed by running:
docker image list
Among other things, the list will give you a unique identifier (just in case you have two images with the same name) and will tell you how much space the image takes up on disk.
You could also just run the image, even before downloading it. The command
docker run chocobozzz/peertube
will have Docker look for the PeerTube image on your hard disk, and, if it can't find it, it will download it, drag in all the dependencies it needs (including other images, like an image for a PostgreSQL server), and run it (Figure 3, top).
When you run an image, Docker creates a container with the software running inside it. In many cases, it will show the software's output so you can check that everything is working correctly (Figure 3, bottom). In this case, the output tells you that your PeerTube instance is running on localhost. However, if you visit http://localhost:80 with your browser, you probably won't see the PeerTube interface, because Docker sets up its own network for its containers.
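If you would rather reach the service on localhost after all, you can ask Docker to publish the container's port on the host with the -p option. This is just a sketch (the host port 9000 is an arbitrary choice, and a real PeerTube instance will need further configuration):

```shell
# Map port 9000 on the host to port 80 inside the container,
# making the service reachable at http://localhost:9000
docker run -d -p 9000:80 chocobozzz/peertube
```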
To find out which IP address PeerTube is running on, first list your running Docker containers like this:
docker container list
This will give you a container ID (something like 8577b5867b93) and a name that Docker makes up by mashing together random words (something like hopeful_volhard) for your container. You can use either to identify your container and get some details using:
docker container inspect <container_id_or_name>
Toward the end of the output, you will see a line that says "IPAddress" : and, well, an IP address. If you haven't changed Docker's default configuration, it will be something like 172.17.0.2. Point your browser at that, and … Voilà! PeerTube (Figure 4).
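Rather than scrolling through the full inspect output, you can pull out just the address with a Go template via the -f option (the container name here is the made-up one from above):

```shell
# Print only the container's IP address on its network(s)
docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' hopeful_volhard
```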
You can stop a container with stop:
docker stop <container_id_or_name>
And start it again with start:
docker start <container_id_or_name>
Using run will create a completely new Docker container from your original image. If you have made changes (like created or modified a file) within a Docker container of the same image, your changes will not be in the new container. The "Getting Rid of Stuff" box explains how you can cleanly remove both containers and images.
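If you do want to keep the changes you made inside a container, one option is to freeze the container's current state into a new image with docker commit (the container and image names here are illustrative):

```shell
# Save the current state of a container as a new, reusable image
docker commit hopeful_volhard my-peertube:customized

# Containers started from this image will include your changes
docker run -d my-peertube:customized
```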
Getting Rid of Stuff
List the images you have installed and use the ID of the one you want to remove to delete it:
docker image rm <id_number>
You may get an error informing you that the image is in use or needed by a certain container. Note that, even if all your containers are stopped, they are not necessarily removed; they are sitting there waiting to be restarted. You can see all your containers, even those that are not running, with:
docker container list --all
and then you can remove the offending container with:
docker container rm <container_id_or_name>
After that, you can go back and remove the image.
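If you would rather tidy up in bulk than remove items one by one, Docker also offers prune subcommands. Be careful: These delete things for real, so read the confirmation prompts:

```shell
# Remove all stopped containers
docker container prune

# Remove all images not used by at least one container
docker image prune --all
```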
Inside the Container
Finishing off the configuration of PeerTube would require an article of its own (watch this space!), so I'll move on to a more generic image for experimentation. Grab yourself a Linux distro image, like Ubuntu,
docker pull ubuntu
and run it with:
docker run -i -t ubuntu bash
After a few seconds, Docker will dump you into a shell within the container. Unpacking that last command line, the -i option tells Docker that you want an interactive exchange with the container, which means that the commands you type into the host's stdin (usually your shell) will be pushed to the Docker image. The -t option tells Docker to emulate a terminal over which you can send the commands. You will often see both options combined as -it.
Next comes the ID or name of the image you want to interact with (ubuntu in this case). Finally, you pass the name of the command you want to run, in this case a Bash shell.
Find out what the name or ID of the container is (docker container list), and you can open a new shell in the running container using the exec command:
docker exec -it <container_id_or_name> bash
The instruction above logs you into the container, and you can install and remove software, edit files, start and stop services, and so on.
To stop the shell in the container, issue an exit as you would do to exit a regular shell. Once you log out from all the shells, and as long as no other processes are executing, your Ubuntu container will stop. Docker containers are designed to run one process and one process only. Although you can run more, this is frowned upon by Docker purists and considered suboptimal. When that unique process ends, Docker is designed to close down the container.
However, if you want to keep a container running in the background (so you can have it run a non-interactive command sent to it from time to time), you can do this:
docker run -t -d <image_id_or_name>
As you saw above, -t tells Docker to create a faux terminal. The -d option stands for detached and tells Docker to run the container in the background.
To run a command non-interactively in a running container and have the output appear under the command, enter
docker exec <container_id_or_name> ls
which will show the default working directory's contents. You can also show the contents of a directory that is not the default by adding the path, as you would with a regular ls command:
docker exec <container_id_or_name> ls </path/to/container/directory>
Talking of working directories, if you are not sure which is the container's current working directory, try this:
docker exec <container_id_or_name> pwd
Another thing you can do is share directories between the host and a container. For example:
docker run -it -v /home/<your_username>:/home/brian ubuntu bash
The -v option takes the path to a directory on the host (in this case, your own home directory) and maps it to a directory within the container. If either of these directories does not exist, Docker will try to create it for you.
Once you have shared your directory, ls the /home/brian directory from within the container, and you will see the files from your own home directory. If you execute touch /home/brian/from_docker.txt from inside your container, you will see the file from_docker.txt pop up in your home directory on the outside.
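A word of caution: The container can also modify or delete files in a shared directory. If the container only needs to read your files, you can append :ro to make the mapping read-only (the paths here are illustrative):

```shell
# Mount a host directory read-only inside the container
docker run -it -v /home/<your_username>/projects:/data:ro ubuntu bash
```

Inside the container, /data will show your files, but any attempt to write to it will fail.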
This is very useful when you want to use a Docker container to do some dirty work for you, like when you want to make an app for Android.