Part 0
I decided to use a Debian VM to follow the course. It’s been quite a while since I made these notes, and I was still using the deprecated standalone `docker-compose`. All the relevant and up-to-date materials can be found here.
Part 1
What is DevOps
DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. DevOps is complementary with Agile software development; several DevOps aspects came from Agile methodology. (Wikipedia)
Why Docker?
- It solves the “runs on my machine” problem by bundling applications and their dependencies into an image that runs on every machine that can run Docker.
- How is that different from a virtual machine? The following graphical comparison by Docker shows the difference: containers share the host’s kernel, while each VM brings along its own guest operating system.
What’s an Image?
A file that is built from an instructional file called a “Dockerfile”. An image cannot be changed; you can only add a new layer on top of it.
What’s a Container?
Containers are instances of an image. Think of the Dockerfile as your shopping list, and of the image as the ingredients you end up buying. The container is the meal you get at the end if everything worked out.
CLI Basics
| Docker command | Description |
|---|---|
| `images` | list all images |
| `run <img>` | run an image |
| `run -d <img>` | run an image in detached mode |
| `run --name <name> <img>` | run an image with a name for easy reference |
| `run --rm <img>` | run an image and remove the container after exit |
| `rm <id1> <id2>` | remove containers “id1…” and “id2…” |
| `rmi <img>` | remove the image called “img” |
| `container ls` | list running containers |
| `container ls -a` | list all containers |
| `container prune` | remove all stopped containers |
| `pull <img>` | pull the image called “img” from Docker Hub |
| `exec <id> <cmd>` | execute a command in a running container |
| `exec -it <id> <cmd>` | start an interactive session in a tty in a container |
| `search <term>` | search the registry |
| `pause <name>` | pause a container |
| `unpause <name>` | unpause a container |
| `logs -f <name>` | follow a container’s log output in the terminal |
| `start <name>` | start a container |
| `stop <name>` | stop a container |
| `kill <name>` | kill a container if it does not stop |
| `attach <name>` | attach to a container from the terminal |
| `attach --sig-proxy=false <name>` | attach without proxying signals, so Ctrl-C detaches instead of stopping the container |
| `history <img>` | show which operations produced an image’s layers and how they affected its size |
Exercise 1.5
The first non-trivial exercise. I solved it by first running the looper container, which then prompts me for a website. But I know that `curl` is not installed within the container, so in a second terminal I install `curl` inside it. Then I return to the first terminal and type in a website. It works as expected.
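The original commands were lost from these notes; a sketch of what they likely were (the image, loop script, and container name are assumptions based on the course material, not the exercise’s exact code):

```shell
# Start a container that repeatedly asks for a website and curls it
docker run -it --name looper ubuntu:16.04 sh -c \
  'while true; do echo "Input website:"; read website; echo "Searching.."; sleep 1; curl http://$website; done'

# In a second terminal: install curl inside the running container
docker exec -it looper bash -c 'apt-get update && apt-get install -y curl'
```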
Working with Dockerfiles
Exercise 1.6
Exercise 1.7
Exercise 1.8
Exercise 1.9
Running the container with its port published then allows me to visit my Debian VM’s IP at port 5000:
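The exact command did not survive the export; the essential part is publishing the port (the image name is lost, hence the placeholder):

```shell
# Map host port 5000 to container port 5000
docker run -p 5000:5000 <image>
```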
![screenshot](_20201112_161656screenshot.png)
Exercise 1.10
Now running
yields the following:
I found this more elegant version (not using `ubuntu:16.04`) online:
Exercise 1.11
Here, we need a slightly slimmer Dockerfile that exposes another port:
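The Dockerfile itself is missing from these notes; a minimal sketch of the shape it likely had (base image, port, and start command are assumptions):

```dockerfile
FROM ubuntu:18.04

WORKDIR /app
COPY . .

# Tell Docker the app listens on this port (publishing still needs -p at run time)
EXPOSE 8000

# Hypothetical start command
CMD ["./start.sh"]
```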
Exercise 1.12
We can just use the Dockerfiles from the past two exercises and run these two commands:
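The two commands did not survive the export; they were presumably one build-and-run per Dockerfile, along these lines (tags and ports are assumptions):

```shell
docker build -t exercise-frontend . && docker run -d -p 5000:5000 exercise-frontend
docker build -t exercise-backend .  && docker run -d -p 8000:8000 exercise-backend
```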
yielding a working front-end
![screenshot](_20201112_182002screenshot.png)
Exercise 1.13
The Dockerfile looks like this:
Issuing the following commands then builds the image and runs the container,
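The commands themselves are missing here; for a Spring application on port 8080 they were presumably of this shape (the tag name is hypothetical):

```shell
docker build -t spring-example .
docker run -p 8080:8080 spring-example
```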
yielding the Spring application on port 8080.
Exercise 1.14
yielding
Exercises 1.15-1.17
I did not do these, as they are related to publishing images to Docker Hub and Heroku. Also, I want to get started with `docker-compose`. I did like the idea of creating a Docker container for my development environment (tools and libraries). I added that to my roadmap for the future.
Other Key Takeaways
- When commands depend on one another, it is best practice to run them together in a single `RUN` instruction.
- There is an important difference between exposing and publishing a port:
  - Exposing a port means that you tell Docker that the container listens on a certain port. It is done by adding an `EXPOSE <port>` line to your Dockerfile.
  - Publishing a port maps the container’s port to a host port. In order to publish a port, you need to run the container with `-p <host-port>:<container-port>`.
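Both points can be illustrated with a short Dockerfile sketch (the package is just an example):

```dockerfile
FROM ubuntu:20.04

# Chaining dependent commands in one RUN keeps them in one layer:
# if the install list changes, the update runs again instead of
# reusing a stale cached layer.
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

# EXPOSE only documents the port; it does not publish it
EXPOSE 8080
```

Publishing still happens at run time, e.g. `docker run -p 8080:8080 <image>`.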
Part 2
Volumes in docker-compose
Exercise 2.1
This can then be started with `docker-compose up`.
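The compose file itself did not survive the export; for this exercise (persisting the container’s log file on the host) it was presumably shaped like this (image name and paths are assumptions):

```yaml
version: '3.5'

services:
  web-service:
    image: devopsdockeruh/simple-web-service
    volumes:
      # bind mount a host file so the log survives the container
      - ./text.log:/usr/src/app/text.log
```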
Web Services
We learn that we can give ports and environment variables to `docker-compose`.
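A minimal illustration of both keys (the variable name is hypothetical, just to show the syntax):

```yaml
services:
  app:
    image: nginx
    ports:
      - "8080:80"        # host:container
    environment:
      - SOME_FLAG=true   # hypothetical environment variable
```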
Exercise 2.2
easy…
Exercise 2.3 & 2.5
This really shows the advantages of `docker-compose` over using the Docker CLI. I also added Redis, which solves Exercise 2.5.
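The compose file itself is gone from these notes; adding Redis to an existing setup looks roughly like this (the service and variable names are assumptions):

```yaml
services:
  backend:
    # ...existing backend configuration...
    environment:
      - REDIS_HOST=redis   # hypothetical variable the backend reads
  redis:
    image: redis
```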
Scaling
- A really good summary of why you want to use reverse proxies with Docker specifically.
- Jason Wilder, the guy who wrote the article above, also built nginx-proxy (repo), a neat little container that configures `nginx` for us from the Docker daemon as containers are started and stopped.
- Another useful resource is https://colasloth.com — a DNS “hack” which “simply resolves to 127.0.0.1, i.e. localhost, instead of pointing to the address of a specific machine. All subdomains under the domain also resolve to localhost, which can be useful in many situations”. I guess you could easily do that yourself with an unused domain.
All of the above is encapsulated in this simple whoami
single-server hosting
configuration:
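The configuration block is missing from the export; the standard nginx-proxy + whoami pairing looks like this (the `VIRTUAL_HOST` value is an assumption):

```yaml
version: '3.5'

services:
  whoami:
    image: jwilder/whoami
    environment:
      # nginx-proxy reads this to route the hostname to the container
      - VIRTUAL_HOST=whoami.colasloth.com
  proxy:
    image: jwilder/nginx-proxy
    volumes:
      # lets the proxy watch the Docker daemon for started/stopped containers
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - "80:80"
```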
Exercise 2.4
The command is fairly simple. On my configuration, 5 instances of `compute` were enough:
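The command itself is missing from these notes; scaling with compose looks like this (the service name `compute` comes from the text above):

```shell
docker-compose up --scale compute=5
```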
However, I could not access compute.localtest.me:3000 from my host machine running Fedora due to CORS. I would probably have to edit my `hosts` file to make it work. I was not inclined to do so, so I just installed `xfce` in my Debian VM, opened Gnome Boxes, and browsed to localtest.me:3000:
Networking & More Complex Applications
Let’s set up Redmine along with Adminer on the basis of a PostgreSQL database.
NB: We are letting Docker manage our volumes for us here. The docs of the PostgreSQL image do a good job of explaining what that means.
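A Docker-managed named volume, as opposed to a bind mount, is declared like this (names and password are illustrative):

```yaml
services:
  db:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=example
    volumes:
      # named volume: Docker chooses and manages the storage location
      - database:/var/lib/postgresql/data

volumes:
  database:
```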
Exercise 2.6
The following `docker-compose.yaml` does the job:
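The file itself is gone from these notes; for Redmine + PostgreSQL + Adminer it presumably looked roughly like this (ports, passwords, and variable values are placeholders):

```yaml
version: '3.5'

services:
  postgres:
    image: postgres
    environment:
      - POSTGRES_USER=redmine
      - POSTGRES_PASSWORD=example
    volumes:
      - database:/var/lib/postgresql/data
  redmine:
    image: redmine
    environment:
      - REDMINE_DB_POSTGRES=postgres
      - REDMINE_DB_USERNAME=redmine
      - REDMINE_DB_PASSWORD=example
    ports:
      - "9999:3000"
    depends_on:
      - postgres
  adminer:
    image: adminer
    ports:
      - "8083:8080"

volumes:
  database:
```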
yielding a frontend with a working database connection, as the developer console shows
Exercise 2.7
This was an interesting one:
Again, the communication between backend and frontend does not work if I access the frontend from outside my Debian VM. Inside the VM, however, it works flawlessly:
Exercise 2.8, 2.9 and 2.10
Finally, we’re adding a reverse proxy to our front- and backend containers from above:
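The compose file did not survive the export; the nginx reverse-proxy service presumably looked something like this (paths and service names are assumptions):

```yaml
services:
  nginx:
    image: nginx
    volumes:
      # custom config routing / to the frontend and /api to the backend
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
    depends_on:
      - frontend
      - backend
```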
yielding an all-around working application that is accessible simply by
navigating to http://localhost
(inside the VM) or http://192.168.122.67
(outside the VM)
All that I needed to change for exercise 2.9 was to delete the volumes at the end of the `docker-compose` file and change the `postgres` service configuration to the following:
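The changed block is missing; switching from a Docker-managed volume to a bind mount presumably looked like this (directory name and credentials are assumptions):

```yaml
  postgres:
    image: postgres
    environment:
      - POSTGRES_USER=redmine
      - POSTGRES_PASSWORD=example
    volumes:
      # bind mount: the data now lives in a host directory we manage ourselves
      - ./database:/var/lib/postgresql/data
```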
Regarding Exercise 2.10: All buttons worked for me as shown above.
Part 3
- CircleCI is a cool tool once you understand what it is doing for you. Every `git push` triggers a new build of your containers (and possibly an update of the contained images if using Watchtower) and deploys them to Docker Hub. Cool!
- I skipped the exercises until Exercise 3.2, as I did not feel like deploying something on Heroku and it seems fairly obvious how to get it working.
- If you need tools for the build but not for the execution, do a so-called multi-stage build.
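A minimal `.circleci/config.yml` for the build-and-push flow described above might look like this (image tag, workflow name, and the `DOCKER_USER`/`DOCKER_PASS` variables are assumptions):

```yaml
version: 2.1

jobs:
  build:
    machine: true
    steps:
      - checkout
      - run: docker build -t $DOCKER_USER/myapp:latest .
      - run: echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin
      - run: docker push $DOCKER_USER/myapp:latest

workflows:
  build-and-push:
    jobs:
      - build
```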
Exercise 3.3
So I’m going with a simple bash script here that takes an argument, namely the name of the GitHub repository to clone, to then build the image based on the Dockerfile contained within that repository. Finally, it tags and pushes the image to Docker Hub.
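The script itself is missing from these notes; a sketch matching that description (file name, argument order, and tagging scheme are assumptions):

```shell
#!/bin/bash
# builder.sh <github-user/repo> <dockerhub-user/repo>
# Clones the repository, builds the image from its Dockerfile,
# then tags and pushes it to Docker Hub (assumes docker login was done).
set -e

GITHUB_REPO=$1
DOCKER_REPO=$2

git clone "https://github.com/$GITHUB_REPO" repo
cd repo
docker build -t "$DOCKER_REPO" .
docker push "$DOCKER_REPO"
```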
Basically the same stuff as I did in my Debian VM to get Docker running.
Then, we run the following:
Exercise 3.4, 3.5 & 3.7
- Regarding 3.4 and 3.5, I have already done both of these things at earlier stages.
- An example of a multi-stage (node) build using the node modules can be found here. I leave it to my future self to figure it out again:
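A sketch of such a multi-stage node build (stage name, node version, and paths are assumptions):

```dockerfile
# Stage 1: build with the full toolchain
FROM node:16 AS build-stage
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve only the built artifacts; node_modules stay behind
FROM nginx:alpine
COPY --from=build-stage /app/build /usr/share/nginx/html
```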
Podman as an alternative
Podman seems to solve some of Docker’s inherent problems (a big, fat, and often rootful daemon; difficulty inspecting images before pulling them; etc.) and is reasonably compatible with Docker.
- There is a really good presentation by Dan Walsh on Podman’s features (and Docker’s shortcomings)
- Here is a blog post on the alternative for `docker-compose` when using Podman.
- Also review the examples in podman-compose, as the project is maturing every day.