Docker's Typical Application Scenarios

Docker technology has matured considerably, but many newcomers to Docker still wrestle with the question "what exactly can Docker be used for?" This article summarizes the author's experience applying Docker to application packaging, multi-version hybrid deployment, upgrade and rollback, multi-tenant resource isolation, and internal development environments, in the hope of giving Docker newcomers some inspiration.

Compared to VMs, Docker has clear advantages in light weight, low configuration complexity, and high resource utilization. As Docker technology continues to mature, more and more companies are considering using Docker to improve their IT systems.

This article lists some practical application scenarios for Docker, in the hope of helping you use Docker more conveniently.

Application packaging

Anyone who has produced RPM, GEM, or other software packages knows that each package depends on specific versions of specific libraries, which usually need to be written explicitly into a dependency list. Dependencies are commonly divided into compile-time dependencies and runtime dependencies.

In a traditional infrastructure environment, to ensure that the generated package can be installed and run on other machines, it is generally necessary to create a clean virtual machine, or manually create a chroot environment, before packaging; then install the various dependency packages in this clean environment and execute the packaging script. After the package is generated, you also need to create another clean environment in which to install and run it, to verify that it meets expectations. This approach works, but it has at least the following shortcomings:

  • Time-consuming and labor-intensive
  • Dependencies are easily missed. For example, after several rounds of debugging in a clean environment,
    the missing dependency packages are installed one by one, but a dependency is forgotten when the spec
    file is finally written, so the next build needs re-debugging or the resulting package is unusable.

Docker solves this packaging problem nicely. The specific approach is as follows:

  • A "clean packaging environment" is easy to prepare: the official ubuntu, centos and other system images provided by Docker can serve as pure, uncontaminated packaging environments
  • The Dockerfile itself acts as living documentation: once the Dockerfile is written and the packaging image is built, that image can be reused for packaging any number of times


Suppose we want to build an RPM package for a PHP extension module (e.g. php-redis).

First, write a Dockerfile to create the packaging image, as follows:

  FROM centos:centos6
  RUN yum update -y
  RUN yum install -y php-devel rpm-build tar gcc make
  RUN mkdir -p /rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS} && \
      echo '%_topdir /rpmbuild' > ~/.rpmmacros
  # COPY (not ADD) so the tarball is not auto-extracted by Docker
  COPY redis-2.2.7.tgz /rpmbuild/SOURCES/redis-2.2.7.tgz
  COPY redis.spec /redis.spec
  RUN rpmbuild -bb /redis.spec

Then execute docker build -t php-redis-builder . in the directory containing the Dockerfile. If the build succeeds, the RPM package we need has been generated inside the image.

Next, execute the following commands to copy the generated package out of the container:

  [ -d /rpms ] || mkdir /rpms
  docker run --rm -v /rpms:/rpms:rw php-redis-builder cp /rpmbuild/RPMS/x86_64/php-redis-2.2.7-1.el6.x86_64.rpm /rpms/

The /rpms directory will then contain the RPM package we just built.

Finally, verifying the package is also very simple: just create a new Docker image that ADDs and installs the newly generated package.

The Dockerfile is as follows (the RPM file to be ADDed needs to be saved in the /rpms directory):

  FROM centos:centos6
  ADD php-redis-2.2.7-1.el6.x86_64.rpm /php-redis-2.2.7-1.el6.x86_64.rpm
  RUN yum localinstall -y /php-redis-2.2.7-1.el6.x86_64.rpm
  RUN php -d "extension=redis.so" -m | grep redis

Execute docker build -t php-redis-validator . in the /rpms directory. If it succeeds, the RPM package works properly.

Multi-version hybrid deployment

As products are continuously upgraded, deploying multiple applications, or multiple versions of the same application, on a single server is very common within an enterprise.

But when multiple versions of the same software are deployed on one server, resources such as file paths and ports tend to conflict, so the versions cannot coexist.

With Docker, this problem becomes very simple. Each container has its own independent file system, so there are no file path conflicts; port conflicts are resolved simply by specifying different port mappings when starting the containers.
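As a minimal sketch (the image and container names here are hypothetical, not from a real deployment), two versions of the same service can both listen on port 80 inside their containers while being published on different host ports:

```shell
# Run v1 and v2 of the same service side by side; both listen on port 80
# inside their containers, but are published on different host ports.
docker run -d --name myapp-v1 -p 8080:80 myapp:1.0
docker run -d --name myapp-v2 -p 8081:80 myapp:2.0
```

Clients reach version 1 via host port 8080 and version 2 via 8081; neither container knows the other exists.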

Upgrade and rollback

An upgrade is often not just an upgrade of the application itself but also of its dependencies. The dependencies of the old and new versions may well differ, or even conflict, so rolling back is generally difficult in a traditional environment.

With Docker, we only need to build a new image for each application upgrade, stop the old container, and start the new one. To roll back, stop the new container and start the old one again. The whole process completes in seconds, which is very convenient.
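The upgrade and rollback flow can be sketched as follows (container and image names are again illustrative):

```shell
# Upgrade: stop the old container, then start a container from the new image.
docker stop myapp-v1
docker run -d --name myapp-v2 -p 8080:80 myapp:2.0

# Rollback: stop the new container and restart the old one.
# The old container, with all its old dependencies, is still intact on disk.
docker stop myapp-v2
docker start myapp-v1
```

Because the old container is stopped rather than destroyed, rollback is just a restart; no dependency packages ever need to be downgraded on the host.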

Multi-tenant resource isolation

Resource isolation is a strong requirement for companies that offer shared hosting services. VMs provide very thorough isolation, but their deployment density is relatively low, which drives up costs.

Docker containers make full use of the Linux kernel's namespaces to provide resource isolation.

Combined with cgroups, resource quotas can easily be set per container. This both meets the need for resource isolation and allows different quota levels to be set for different tiers of users.
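As a sketch of per-tenant quotas using docker run's classic cgroup flags (the container name and workload are hypothetical), a lower-tier tenant might be capped like this:

```shell
# Limit this tenant's container to 512 MB of memory and a relative CPU
# weight of 512 (half of the default 1024, so it yields CPU under contention).
docker run -d --name tenant1-app \
    -m 512m \
    --cpu-shares 512 \
    centos:centos6 sleep infinity
```

A higher-paying tenant would simply get a larger -m value and a higher --cpu-shares weight; the quotas are enforced by the kernel's cgroup controllers, not by the application.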

However, in this scenario the programs running in the containers are untrusted from the hosting provider's point of view, so special measures are needed to ensure that users cannot reach the host's resources from inside a container (i.e., "jailbreak" or container escape). The probability of such an escape is very small, but security is never excessive; one more layer of protection certainly offers more peace of mind.

To harden security and isolation, consider the following measures:

  • Use iptables to block all traffic from containers to internal network IPs (permissions can of course be opened for specific IPs/ports if necessary)
  • Use SELinux or AppArmor to restrict the resources a container can access
  • Mount certain sysfs or procfs directories in read-only mode
  • Harden the system kernel with grsec
  • Use cgroups to enforce quotas on memory, CPU, disk I/O and other resources
  • Control each container's bandwidth with tc
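Two of the measures above, the iptables block and the tc bandwidth cap, can be sketched as host-side commands (the subnet, bridge name, and veth interface name are placeholders to adapt per deployment):

```shell
# Drop all traffic forwarded from the docker0 bridge toward an internal
# network range, so container workloads cannot probe internal services.
iptables -I FORWARD -i docker0 -d 10.0.0.0/8 -j DROP

# Cap egress bandwidth on one container's host-side veth interface to
# 10 Mbit/s using a token bucket filter; find the veth name per container.
tc qdisc add dev vethXXXX root tbf rate 10mbit burst 32kbit latency 400ms
```

Both commands run on the host with root privileges; specific exceptions (e.g. allowing a container to reach one internal IP/port) can be inserted as more specific iptables rules ahead of the DROP rule.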

In addition, in actual testing we found that the system's random number generator easily blocks when the entropy pool is exhausted. In a multi-tenant shared environment, you need to enable rng-tools on the host to replenish the entropy pool.
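Whether the entropy pool is running low can be checked directly on the host; persistently low values suggest that readers of /dev/random may block and that running rngd from rng-tools would help:

```shell
# Print the number of bits of entropy currently available to the kernel.
# Values that stay in the low hundreds indicate the pool is being drained.
cat /proc/sys/kernel/random/entropy_avail
```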

Much of what this scenario requires is not provided by Docker itself, and the implementation demands close attention to detail. To this end we provide a security-enhanced Docker management platform that solves the above problems well; interested readers can visit the csphere official website for more details.

Internal development environment

Before the advent of container technology, companies usually provided each developer with one or more virtual machines as a development and test environment.

Development and test environments generally run at low load, so a large amount of system resources is wasted on the virtual machines' own processes.

Docker containers add virtually no extra CPU or memory overhead, which makes them well suited for providing a company's internal development and test environments.
And because Docker images can be shared very conveniently within the company, they also greatly help standardize the development environment.

If you want to use a container as a development machine, you need to solve remote login into the container and process management inside the container. Although Docker was originally designed for the "microservices" architecture, in our actual experience running multiple processes inside a container, even sshd or upstart, is also feasible.
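A rough sketch of such a development container image running sshd might look like the Dockerfile below (the root password and key types are illustrative; in practice you would bake in SSH public keys instead):

```shell
# Dockerfile sketch for an ssh-accessible development container
FROM centos:centos6
RUN yum install -y openssh-server passwd && \
    # set a login password (illustrative; prefer authorized_keys in practice)
    echo 'devpass' | passwd --stdin root && \
    # generate host keys, since no init system runs them for us
    ssh-keygen -q -t rsa -f /etc/ssh/ssh_host_rsa_key -N '' && \
    ssh-keygen -q -t dsa -f /etc/ssh/ssh_host_dsa_key -N ''
EXPOSE 22
# run sshd in the foreground as the container's main process
CMD ["/usr/sbin/sshd", "-D"]
```

Started with docker run -d -p 2222:22 on the host, each developer gets an ssh-reachable environment whose image can be versioned and shared like any other Docker image.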


The above summarizes some scenarios in which we use Docker in our actual development and production environments, together with the problems encountered in each case and the corresponding solutions. We hope this inspires friends who are interested in using Docker, and we also welcome more friends to share their experience with Docker.

Original link: Docker's Typical Application Scenarios (Author: Wei Shijiang; Reviewer: Li Yingjie)


About the author:
Wei Shijiang, co-founder of NiceScale, has long been engaged in DevOps-related research and development, focusing on configuration management automation for web applications and peripheral services in Linux environments. He is proficient in the Go and PHP languages and has done some research on container technology; he currently focuses on Docker-based enterprise solutions. Before founding the company he was a technical manager at Sina App Engine (SAE). Like-minded friends are welcome to get in touch in any way. Weibo: @Wei Shijiang Email:
