ELI5: Docker containers

I can't seem to wrap my head around (Docker) containers and especially their maintenance.
As I understand it, containers contain a stripped-down OS that shares some resources with the host?
Or is it more like a closed-off part of the file system?

Anyway, when I have several containers running on a host system,
Do I need to keep them all updated separately? If so, how?
Or is it enough to update the host system, and not worry about the containers?

21 comments
  • Docker is essentially a security construct.

    The idea is that the process inside the container, like say MySQL, Python or Django, runs as a process on your machine in such a way that it can only access parts of the system and the world that it's explicitly been granted access to.

    If you naively attempted this, you'd run into a big problem immediately: a program needs access to libraries, so you need to grant access to those too. Libraries might be specific to the program, or they might be system libraries like libc.

    One way is to explicitly enumerate each required library, but then you'd need to install those for each such process, which is inconvenient and a security nightmare.

    Instead you package the libraries and the program together in a package called a Docker image.

    To simplify things, at some point it's easier to start with a minimal set of known files, like say Alpine, Debian, or Fedora.

    This basically means that you're downloading a bunch of stuff to make the program run and thus is born the typical Docker image. If you look at the Python image, you'd see that it's based on some other image. Similarly, a Django image is based on a Python image. It's the FROM line in a Dockerfile.
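
    For example, a bare-bones Dockerfile for a Python app might look like this (a sketch; the package, paths, and command are illustrative, not any official image):

        # Build on the official Python image, which itself builds on Debian
        FROM python:3.12-slim

        # Add the app's own libraries on top of the base layers
        RUN pip install django

        # Copy the application code into the image
        COPY . /app
        WORKDIR /app
        CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]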

    A container is such an image actually running the isolated process, again, like say MySQL.

    Adding information to that process happens in a controlled way.

    You can use an API that the process exposes, like say a MySQL client. You can also choose to include the data in the original image, or you can use a designated directory structure that's visible to both you and the process; this is called a volume.

    To run something like a Django application would require that Python has access to the files; they can be included in the image by using a custom Dockerfile, or accessed by the container whilst it's running, using a volume.

    It gets more interesting when you have two programs needing access to the same files, like say nginx and python. You can create shared volumes to deal with this.
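
    As a rough sketch, mounting a host directory into one or more containers looks like this (the image tags and paths are illustrative):

        # Make ./site on the host visible to nginx inside the container
        docker run -d -p 8080:80 -v "$(pwd)/site:/usr/share/nginx/html:ro" nginx

        # A second container, e.g. one running Python, can mount the same directory
        docker run --rm -v "$(pwd)/site:/site" python:3.12-slim ls /site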

    Ultimately, Docker is about security and making it convenient to implement and use.

    Source: I use Docker every day.

  • I'd say it's more like a closed-off part of the filesystem but with networking and probably lots of other stuff closed off as well.

    Updates on the host are separate from updates of the containers. Ideally the host has only the minimal stuff needed to run the containers.

    Containers are usually updated when the contained apps are updated. That's actually my main concern with containers: when the main app doesn't need an update but some dependency does, you have to actively update the dependency yourself, unless the app maintainers keep up with what their dependencies are doing. And usually you don't even know what the dependencies are, because the whole point of containers is that you only care about the main app.

    • To elaborate on this, when you want an update, you "update the container." This usually means downloading an entirely new container image and replacing yours with the new one, which has new internal versions and data but works the exact same. You rely on the supplier of the container (if you didn't make it yourself) to do all of that for you, and you just receive the update when you request it.

      So ideally, dependencies will be taken care of for you when the container updates, if you are using a pre-built container.
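
      As a sketch with plain docker (the container name, tag, and password here are made up for illustration), the "update" is really a replace:

          # fetch the newer image
          docker pull mysql:8.4

          # remove the old container; the data lives on in a named volume
          docker stop mydb && docker rm mydb

          # recreate the container from the new image; the volume carries the data over
          docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=example \
            -v mysql_data:/var/lib/mysql mysql:8.4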

    • I still don't understand! I feel so dumb when it comes to docker.

      I'm writing an application in Django (a Python web framework), and there are docker images for that. But somehow my code needs to get in there, I guess? Or does my code go alongside the container in a predefined folder? When I'm developing and need to update the docker container over and over between changes for testing, am I creating a whole new container or updating the one I made originally?

      I don't even get the purpose of the million images on docker hub. What's the difference between a MySQL image and requiring MySQL in a docker compose and making my own image?

      So sorry to bother you with this but I'm thinking you might be able to help me understand. I understood packages, jails, and VMs but this is a whole other thing, lol.

      • You would probably make your own image that builds on an existing Django or Python image. Building that image would put your code into the image you made. To ease development you would mount your development directory into the container.

        Then when you release your app you would update your container image with the latest code and also update the Django image it depends on.

        MySQL would live in another container separate from yours. It would need its own mounted directory where all the database files live on the host.

        If you needed some other app with a web API or so you would put that in its own container as well.

        To put everything together you would use docker-compose. That puts them into one network and defines how they may talk with each other, what directories or files from the host to mount and other configuration.
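
        A minimal docker-compose.yml along those lines might look like this (the service names, ports, and paths are illustrative, not a canonical setup):

            services:
              web:
                build: .                 # your own image, built from your Dockerfile
                ports:
                  - "8000:8000"
                volumes:
                  - ./src:/app           # dev: mount your source tree into the container
                depends_on:
                  - db
              db:
                image: mysql:8.4
                environment:
                  MYSQL_ROOT_PASSWORD: example
                volumes:
                  - ./dbdata:/var/lib/mysql   # database files live on the host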

      • You would run “docker build” to create your image, typically from the top of your source tree. The Dockerfile would have a step that copies your code into a directory which will be part of the built image.

        Though as another reply mentions, for dev purposes (probably not for production) you could create a mount point / volume which mounts the source dir from your host inside of the container. This will allow you to make changes to your source code on your host without having to re-run “docker build” every time.
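
        For instance (the image name and paths here are just placeholders):

            # build an image from the Dockerfile at the top of your source tree
            docker build -t myapp .

            # dev mode: bind-mount the live source over the code baked into the image
            docker run --rm -p 8000:8000 -v "$(pwd):/app" myapp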

  • I would highly recommend using docker compose files. The services you are after usually have them in their installation instructions, on github or docker hub (the latter shows how many times an image has been pulled, so you can see what most people are using). Also check out https://awesome-docker-compose.com/apps and https://haxxnet.github.io/Compose-Examples/.

    Then think of each compose file as a separate service that functions completely independently and can't access any others unless you open a port to the host system (ports:) or share a common network (networks:). The container also cannot access or save files on the host unless you mount volumes (volumes:).

    Personally I have separate folders for each service, and I always persist config, data and db files in a subfolder of that so it's all in one place. It's easier to migrate or save your info if something goes wrong, and it makes backups easier to manage.
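
    A compose file for a single prebuilt service might then look like this (the image, port, and paths are illustrative), with config and data persisted in subfolders next to it:

        services:
          app:
            image: nginx:1.27
            ports:
              - "8080:80"                      # publish to the host
            networks:
              - backend                        # shared with any service that joins it
            volumes:
              - ./config:/etc/nginx/conf.d     # config kept in a subfolder
              - ./data:/usr/share/nginx/html   # data kept next to it
        networks:
          backend: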

    In the compose file there is a line like image: <image place/image>:<tag>.

    The tag could be 'latest' or a specific version you can look up on docker hub by searching for that image and looking at the tags that are near the 'latest' tag or have the same file size. For critical services use a specific version, and for non-critical use latest.

    To update the services in a compose file, go to its folder and update the version of the image (e.g. :15.6 to :16.1), or if using the 'latest' tag no need to change anything. Then run "docker compose down && docker compose pull && docker compose up -d" to update the services to the latest image.
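
    Spelled out with comments (assuming you've already bumped the tag in the compose file):

        docker compose down      # stop and remove the old containers
        docker compose pull      # fetch the new image(s)
        docker compose up -d     # recreate the containers from the new images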

    I use wud https://github.com/getwud/wud about once a week to highlight any available updates, then manually update them one by one; before doing so I look at the release notes for breaking changes, and I test the services afterwards. I used to just use latest and blindly update, but I've had occasional issues like bad updates or having to figure out breaking changes. If it goes wrong you can just go back to the old version while you investigate.

    Also, docker keeps old images forever unless you prune them, so look up 'docker image prune' or 'docker system prune' before trying them, as they'll remove a lot.
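
    For reference (both commands ask for confirmation before deleting anything):

        docker image prune       # removes dangling (untagged) images
        docker system prune      # also removes stopped containers, unused networks and build cache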
