Docker Slow Ext4 Partition


We recently bought a new set of laptops at Debricked, and since I’m a huge fan of Arch Linux, I decided to use it as my main operating system. The laptop in question, a Dell XPS 15 7590, is equipped with an Intel CPU with integrated graphics as well as a dedicated Nvidia GPU. You may wonder why this is important to mention, but by the end of this article, you’ll know. For now, let’s discuss the performance issues I quickly discovered.

Docker performance issues

Our development environment makes heavy use of Docker containers, with local folders bind-mounted into the containers. While my new installation seemed snappy enough during regular browsing, I immediately noticed something was wrong when I set up my development environment. One of the first steps is to recreate our test database. This usually takes around 3-4 minutes, and I admit I was eagerly looking forward to benchmarking how fast it would be on my shiny new laptop.

After a minute I started to realize that something was terribly wrong.

16 minutes and 46 seconds later, the script had finished, and I was disappointed. Recreating the database was almost five times as slow as on my old laptop! Running the script again using time ./recreate_database.sh, I got the following output:

What stands out is the extreme amount of time spent in kernel space. What is happening here? A quick check on my old laptop, for reference, showed that it only spent 4 seconds in kernel space for the same script. Clearly something was way off with the configuration of my new installation.

Debug all things: Disk I/O

My initial thought was that the underlying issue was with my partitioning and disk setup. All that time in the kernel must be spent on something, and I/O wait seemed like the most likely candidate. I started by checking the Docker storage driver, since I’ve heard that the wrong driver can severely affect performance, but no, the storage driver was overlay2, just as it was supposed to be.
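Checking the active storage driver is a one-liner, assuming the Docker CLI can reach the daemon:

```shell
# Print the storage driver the Docker daemon is using; "overlay2" is the expected default
docker info --format '{{.Driver}}'
```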

The partition layout of the laptop was a fairly ordinary LUKS+LVM+XFS setup, giving a flexible partition scheme with full-disk encryption. I didn’t see any particular reason why this wouldn’t work, but I tested several alternatives:

  • Using ext4 instead of XFS,
  • Creating an unencrypted partition outside LVM,
  • Using a ramdisk.

After using a ramdisk and still getting an execution time of 16 minutes, I realised that disk I/O clearly couldn’t be the culprit.
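For reference, ruling out disk I/O with a ramdisk can be sketched roughly like this (not the exact commands from my test; data-root is the standard daemon.json key for relocating Docker’s storage):

```shell
# Stop the daemon, mount a tmpfs ramdisk, and point Docker's data-root at it
sudo systemctl stop docker
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk
# In /etc/docker/daemon.json: { "data-root": "/mnt/ramdisk/docker" }
sudo systemctl start docker
```

Since tmpfs lives entirely in RAM, any remaining slowness after this cannot be blamed on the disk.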

What’s really going on in the kernel?

After some searching, I found the very neat perf top tool, which allows profiling of running threads, including threads in the kernel itself. Very useful for what I was trying to do!
Firing up perf top while running the recreate_database.sh script yielded the following very interesting results, as can be seen in the screenshot below.
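perf top ships with the kernel’s perf tooling (packaged as linux-tools on most distributions); a typical invocation while the script runs:

```shell
# Sample all CPUs system-wide and show the hottest symbols, kernel ones included
sudo perf top
```

Adding -g records call graphs, which helps trace who is actually calling a hot function such as read_hpet.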

That’s a lot of time spent in read_hpet, the kernel’s read function for the High Precision Event Timer (HPET). A quick check showed that none of my other computers exhibited the same behaviour. Finally, I had a clue about how to proceed.

The solution

While reading up on the HPET was interesting in its own right, it didn’t give me an immediate answer to what was happening. However, in my aimless, almost desperate searching, I did stumble upon a couple of threads discussing the performance impact of having HPET enabled or disabled while gaming.

While not exactly related to my problem (I simply want my Docker containers to work, not do any high-performance gaming), I did start to wonder which of the graphics cards was actually being used on my system. After installing Arch, the graphical interface had worked from the start without any configuration, so I had never actually selected which driver to use: the one for the integrated GPU, or the one for the dedicated Nvidia card.
After running lsmod to list the currently loaded kernel modules, I discovered that modules for both cards were in fact loaded, in this case both i915 and nouveau. Now, I have no real use for the dedicated graphics card, and having it enabled would probably just draw extra power. So I blacklisted the modules related to nouveau by adding them to /etc/modprobe.d/blacklist.conf, in this case the following modules:
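The exact module list from the original setup isn’t preserved here; a minimal sketch, assuming plain nouveau is the module to block, looks like this:

```shell
# Append a blacklist entry so the nouveau driver is not loaded at boot
echo 'blacklist nouveau' | sudo tee -a /etc/modprobe.d/blacklist.conf
# On Arch, regenerate the initramfs so the blacklist also applies during early boot
sudo mkinitcpio -P
```

Check lsmod output on your own machine for any additional nouveau-related modules to blacklist.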

Upon rebooting the computer, I confirmed that only the i915 module was loaded. To my great surprise, I also noticed that perf top no longer showed any significant time spent in read_hpet. I immediately tried recreating the database again, and finally I got the performance boost I wanted from my new laptop, as can be seen below:

As you can see, almost no time is spent in kernel space, and the total time is now faster than the 3-4 minutes on my old laptop. Finally, to confirm, I removed the modules from the blacklist again, and after a reboot the problem was back! Clearly the loading of nouveau causes a lot of overhead, for some reason still unknown to me.

Conclusion


So there you go: apparently, having the wrong graphics drivers loaded can make your Docker containers unbearably slow. Hopefully this post can help someone else in the same position get their development environment up and running at full speed.


Docker takes a conservative approach to cleaning up unused objects (often referred to as “garbage collection”), such as images, containers, volumes, and networks: these objects are generally not removed unless you explicitly ask Docker to do so. This can cause Docker to use extra disk space. For each type of object, Docker provides a prune command. In addition, you can use docker system prune to clean up multiple types of objects at once. This topic shows how to use these prune commands.

Prune images

The docker image prune command allows you to clean up unused images. By default, docker image prune only cleans up dangling images. A dangling image is one that is not tagged and is not referenced by any container. To remove dangling images:
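The corresponding command:

```shell
# Remove dangling (untagged, unreferenced) images
docker image prune
```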

To remove all images which are not used by existing containers, use the -a flag:
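With the flag:

```shell
# Remove all images not referenced by any container
docker image prune -a
```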

By default, you are prompted to continue. To bypass the prompt, use the -f or --force flag.

You can limit which images are pruned using filtering expressions with the --filter flag. For example, to only consider images created more than 24 hours ago:
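For example:

```shell
# Only prune images created more than 24 hours ago
docker image prune -a --filter "until=24h"
```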

Other filtering expressions are available. See the docker image prune reference for more examples.

Prune containers

When you stop a container, it is not automatically removed unless you started it with the --rm flag. To see all containers on the Docker host, including stopped containers, use docker ps -a. You may be surprised how many containers exist, especially on a development system! A stopped container’s writable layers still take up disk space. To clean this up, you can use the docker container prune command.
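The command:

```shell
# Remove all stopped containers
docker container prune
```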

By default, you are prompted to continue. To bypass the prompt, use the -f or --force flag.

By default, all stopped containers are removed. You can limit the scope using the --filter flag. For instance, the following command only removes stopped containers older than 24 hours:
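For instance:

```shell
# Only remove stopped containers older than 24 hours
docker container prune --filter "until=24h"
```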

Other filtering expressions are available. See the docker container prune reference for more examples.

Prune volumes


Volumes can be used by one or more containers, and take up space on the Docker host. Volumes are never removed automatically, because to do so could destroy data.
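Unused volumes can be removed with:

```shell
# Remove all volumes not used by at least one container
docker volume prune
```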

By default, you are prompted to continue. To bypass the prompt, use the -f or --force flag.

By default, all unused volumes are removed. You can limit the scope using the --filter flag. For instance, the following command only removes volumes which are not labelled with the keep label:
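For instance:

```shell
# Only remove volumes that do NOT carry the "keep" label
docker volume prune --filter "label!=keep"
```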

Other filtering expressions are available. See the docker volume prune reference for more examples.


Prune networks

Docker networks don’t take up much disk space, but they do create iptables rules, bridge network devices, and routing table entries. To clean these things up, you can use docker network prune to clean up networks which aren’t used by any containers.
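The command:

```shell
# Remove all networks not used by at least one container
docker network prune
```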

By default, you are prompted to continue. To bypass the prompt, use the -f or --force flag.

By default, all unused networks are removed. You can limit the scope using the --filter flag. For instance, the following command only removes networks older than 24 hours:
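For instance:

```shell
# Only remove unused networks older than 24 hours
docker network prune --filter "until=24h"
```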

Other filtering expressions are available. See the docker network prune reference for more examples.


Prune everything


The docker system prune command is a shortcut that prunes images, containers, and networks. Volumes are not pruned by default, and you must specify the --volumes flag for docker system prune to prune volumes.
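The basic form:

```shell
# Prune unused images, stopped containers, and unused networks in one go
docker system prune
```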

To also prune volumes, add the --volumes flag:
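With the flag:

```shell
# Additionally remove unused volumes
docker system prune --volumes
```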

By default, you are prompted to continue. To bypass the prompt, use the -f or --force flag.

