In this blog post, I would like to discuss Docker storage, storage drivers, and application data management using Docker Volumes. Every fact we explore here was experimented with in detail, collected, and published.
Docker Container Persistent Storage
When we see the word 'storage', what comes to mind is hard disks, CDs, DVDs, pen drives, shared NFS, and so on. In Docker, storage refers to where images, containers, and volumes live, and to the data that belongs to an application, whether it is application code or the database the application service refers to. Each one is isolated from the others.
Physical storage deals with different devices. Linux provides logical storage, where a single disk can be split into multiple logical drives, much like the C: and D: drives we see in Windows. Disk space from partitions and groups of partitions can be shared across multiple containers, and Docker uses this capability through special storage drivers.
Which bit belongs to which part of the drive is tracked by the filesystem. Docker's storage drivers build on such filesystems, for example OverlayFS, or FUSE-based filesystems in rootless setups.
Docker persistent storage volume
Manage Application data
overlay2 is the default and preferred storage driver on most modern Linux platforms.
device-mapper is generally preferred when we want to take advantage of LVM.
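If you ever need to pick a driver explicitly, the daemon can be told which one to use. Here is a minimal sketch, assuming you manage the daemon through /etc/docker/daemon.json and that the file holds no other settings yet; back up your data first, because switching drivers makes existing images and containers invisible to Docker.

# Set the storage driver explicitly in the daemon configuration
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF

# Restart the daemon so the new driver takes effect, then confirm it
sudo systemctl restart docker
docker info --format '{{.Driver}}'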
What is Docker CoW?
The secret of Docker is Unix CoW (copy-on-write), a key capability that helped build Docker container storage. The idea is pretty simple: image layer content is shipped around as gzipped tarballs, and copy-on-write is a strategy of sharing and copying for maximum efficiency. It saves space and also reduces startup time. The data appears to be a copy, but is only a link (or reference) to the original data; the actual copy happens only when someone tries to change the shared data, and whoever changes it ends up with their own private copy instead.
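Here is a small sketch of copy-on-write in action, assuming the nginx:latest image (any image with a known file would do). A file stays in the shared read-only image layer until a container changes it; only then is it copied up into that container's writable layer.

# Start two containers from the same image - they share the same read-only layers
docker run -d --name cow1 nginx:latest
docker run -d --name cow2 nginx:latest

# Only files nginx itself wrote at startup show up in the writable layer so far
docker diff cow1

# Change a file that came from the image - only now is it copied up into cow1's writable layer
docker exec cow1 sh -c 'echo "# changed" >> /etc/nginx/nginx.conf'
docker diff cow1      # now also shows C /etc/nginx/nginx.conf
docker diff cow2      # cow2 is unaffected and still shares the original layer

# Clean up
docker rm -f cow1 cow2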
Why CoW?
The race condition challenge that CoW poses in Linux is handled by the kernel's memory subsystem with private memory mappings. A new process can be created quickly using fork(), even if it uses many GB of RAM. Changes are visible only to the current process, and private maps are fast even on huge files; files are mapped with mmap() using MAP_PRIVATE. How does it work? The answer is the MMU (Memory Management Unit): each memory access is translated to an actual physical location, or alternatively raises a page fault.
What happens without CoW?
- It would take forever to start a container
- Containers would use up a lot of disk space (see the sketch after this list)
- Without copy-on-write, Docker would not be usable on your desktop or Linux machine
- In short, there would be no Docker at all
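How much space does layer sharing actually save? A quick sketch using docker system df; the sizes shown will of course differ from host to host.

# Summarise disk usage of images, containers and volumes
docker system df

# Verbose mode breaks image sizes into SHARED SIZE (layers reused thanks to CoW)
# and UNIQUE SIZE (layers only this image owns)
docker system df -v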
How do you know which storage driver is in use?
To find the Docker storage driver, we can use the 'docker info' command; one of the attributes it shows is the 'Storage Driver' value. We can also use the JSON query tool 'jq', which is available on all the latest Linux platforms.
docker info -f '{{json .}}'|jq -r '.Driver'
Or you can use the -f or --format option directly, as follows:
docker info -f 'Storage Driver: {{.Driver}}'
docker info --format '{{.Driver}}'
Let's run these commands.
Docker info JSON format extraction methods
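On a host where the overlay2 driver is in use (an assumption; your host may report a different driver), the three commands above would print something like the following.

docker info -f '{{json .}}' | jq -r '.Driver'   # overlay2
docker info -f 'Storage Driver: {{.Driver}}'    # Storage Driver: overlay2
docker info --format '{{.Driver}}'              # overlay2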
Selecting Docker storage drivers
- The storage driver handles the details of how the image layers interact with each other.
- The storage driver controls how images and containers are stored and managed on your Docker host.
- It creates data in the writable layer of the container.
- Data in the container's writable layer is not persistent; it disappears with the container (see the sketch after this list).
- Storage drivers use stackable image layers and the copy-on-write (CoW) strategy.
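A minimal sketch of that lack of persistence, assuming the nginx:latest image: a file written into one container's writable layer never shows up in a fresh container started from the same image.

# Write a file into the writable layer of a throwaway container (--rm removes it on exit)
docker run --rm nginx:latest sh -c 'echo hello > /data.txt && cat /data.txt'

# A new container from the same image starts from the clean image layers - the file is gone
docker run --rm nginx:latest ls /data.txt   # fails: No such file or directory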
Docker Bind mount
Bind mounts can be mounted inside the container in either read-only or read-write mode, so they are flexible and can be made read-only where needed to prevent file corruption.
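A minimal bind-mount sketch; the host path /opt/webdata is just an assumed example directory, so substitute any directory on your host.

# The source directory must already exist for a bind mount
mkdir -p /opt/webdata

# Read-write bind mount of the host directory into the container
docker run -d --name bindrw \
  --mount type=bind,source=/opt/webdata,target=/usr/share/nginx/html \
  nginx:latest

# The same mount in read-only mode - the container cannot modify the host files
docker run -d --name bindro \
  --mount type=bind,source=/opt/webdata,target=/usr/share/nginx/html,readonly \
  nginx:latest

# Clean up
docker rm -f bindrw bindro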
Docker tmpfs mounts
tmpfs mounts are only available on Linux platforms.
tmpfs Sample 1:
docker run -d -it \
  --name tmptest1 \
  --mount type=tmpfs,destination=/app \
  nginx:latest

docker container inspect tmptest1
docker container stop tmptest1
docker container rm tmptest1
# or
docker container rm -v -f tmptest1
Sample tmpfs for a container
tmpfs Sample 2: With the shorter --tmpfs flag we don't mention a destination key; the value we pass is taken as the container path. Let's see.
docker run -d -it --name tmptest2 \
  --tmpfs /app \
  nginx:latest

# Understand the source and destination paths
docker container inspect --format '{{json .Mounts}}' tmptest2
docker container inspect -f '{{.HostConfig.Mounts}}' tmptest2

docker container stop tmptest2
docker container rm tmptest2
Sample 2 tmpfs in Docker
Docker Volume
Docker volumes are easier to back up or migrate than bind mounts.
An interesting feature of volumes is that they work with both Linux and Windows containers.
New volumes can have their content pre-populated by a container.
Volumes can be managed with both the CLI and the API.
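A small sketch of managing a volume from the CLI and reading the same information over the Engine API on the local Unix socket (assuming curl is installed and your user can access /var/run/docker.sock):

# CLI: create, inspect and list volumes
docker volume create demo-volume
docker volume inspect demo-volume
docker volume ls

# API: the same volume list over the local Docker Engine socket
curl --unix-socket /var/run/docker.sock http://localhost/volumes

# CLI: remove the volume again
docker volume rm demo-volume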
Create Docker Volumes
There are two methods to create a docker volume
- Container creation with the -v option (Docker creates the local volume for you)
- Container creation with the --mount option
Let's see examples of both and how they work for containers.
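Here is a minimal sketch of the two options, using an assumed volume name data-vol and the nginx:latest image; with a named volume, both -v and --mount create the volume on the fly if it does not exist yet.

# Method 1: the -v option
docker run -d --name vol-dash-v \
  -v data-vol:/usr/share/nginx/html \
  nginx:latest

# Method 2: the --mount option with explicit source and destination
docker run -d --name vol-mount \
  --mount source=data-vol,destination=/usr/share/nginx/html \
  nginx:latest

# Both containers now share the same volume
docker volume ls

# Clean up
docker rm -f vol-dash-v vol-mount
docker volume rm data-vol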
Container and Volume Relationship
Docker containers are meant to be as ephemeral as possible. Removing a container DOESN'T remove the volumes attached to it. Let's experiment with this. As we know, volumes are persistent, attachable storage: the data lives in a folder on the host machine and is mapped inside the Docker container as a mounted volume.

# Check the available volumes list
docker volume ls

# Create a volume and name it web-volume
docker volume create web-volume

# Use the newly created volume 'web-volume' with the nginx container 'myweb'
docker run -d --name myweb \
  --mount source=web-volume,destination=/usr/share/nginx/html nginx

# Stop and remove the container 'myweb'
docker container stop myweb && docker container rm myweb

# Validate the list of volumes - web-volume still exists
docker volume ls

The execution goes as shown below:
Docker volume persistence example
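To confirm the data really survived, we can attach the same volume to a brand-new container; a small follow-up sketch, where the container name myweb2 is just an assumed example.

# The image's html content was copied into web-volume when 'myweb' first mounted it,
# so a brand-new container sees the same files
docker run -d --name myweb2 \
  --mount source=web-volume,destination=/usr/share/nginx/html nginx
docker exec myweb2 ls /usr/share/nginx/html

# Clean up: remove the container and, once nothing uses it, the volume itself
docker rm -f myweb2
docker volume rm web-volume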
Let's see how to use external storage within a container in another super exciting post.