Using Docker for a PHP Project Development Environment
[Editor's note] Environment setup is a problem every team has to face. As a system grows, it depends on more and more services, so how can these problems be solved well? This article documents our PHP team's recent evolution toward a Docker-based development workflow, and hopefully it will help fellow PHPers.
Environment setup is a problem every team has to face. As a system grows, it depends on more and more services. For example, our current project uses:
- Web server: Nginx
- Web program: PHP + Node
- Database: MySQL
- Search Engine: ElasticSearch
- Queue Service: Gearman
- Cache Service: Redis + Memcache
- Front-end build tool: npm + bower + gulp
- PHP CLI tool: Composer + PHPUnit
As a result, the team's development environment setup exposed a number of issues:
- With so many dependent services, the cost of building a full environment locally keeps rising, and junior staff struggle to resolve environment setup problems on their own
- Differences in service versions and in operating systems lead to bugs that only show up in the online environment
- Whenever the project introduces a new service, everyone's environment has to be reconfigured
Problem 1 can be solved with a Vagrant-based virtual machine project, with team members sharing one development environment image. Problem 2 can be solved by introducing a multi-version PHP manager such as PHPBrew. But neither solves problem 3 well, because virtual machine images have no notion of version control: when several people maintain one image, missing or conflicting configuration creeps in easily, and transferring huge images is inconvenient.
The arrival of Docker offers a better answer to the problems above. Although I am personally still cautious about applying Docker at scale in production, for testing and development alone I think Docker's container concept really is a silver bullet for environment setup.
The following describes the evolution of a Docker-based PHP project development environment. This article assumes your operating system is Linux, Docker is already installed, and you know what Docker is and how to use its basic command line. If you lack this background, please read up on the basics first.
Let's start with a Hello World of PHP inside a Docker container. Prepare a PHP file named index.php:

<?php
echo "PHP in Docker";

Then, in the same directory, create a text file named Dockerfile with the following contents:
# Build from the official PHP image
FROM php:5.6
# Copy index.php into the container's /var/www directory
ADD index.php /var/www/
# Expose port 8080
EXPOSE 8080
# Set the container's default working directory to /var/www
WORKDIR /var/www/
# The command the container executes by default
ENTRYPOINT ["php", "-S", "0.0.0.0:8080"]
Build this image:

docker build -t allovince/php-helloworld .

Run the container:

docker run -d -p 8080:8080 allovince/php-helloworld

curl localhost:8080
PHP in Docker
We have now created a Docker container running a demo PHP program, and any machine with Docker installed can run this container and get the same result. Likewise, anyone with the index.php and Dockerfile above can build an identical container, which completely eliminates the assorted problems caused by differing environments and versions.
Imagine how we should scale this as the program grows more complex. The direct idea is to keep installing additional services inside the container and start them all together, in which case our Dockerfile is likely to evolve into something like this:
ADD index.php /var/www/
# Install more services
RUN apt-get install -y \
    ...
# Write a startup script that launches all the services
Although we have built a development environment with Docker, doesn't it look familiar? Indeed, this approach is essentially the same as producing a virtual machine image, and it has several problems:
- If you need to verify different versions of a service, say testing PHP 5.3/5.4/5.5/5.6, you must prepare four images, even though they differ only slightly.
- Whenever a new project starts, more services get installed into the container, and eventually nobody can tell which service belongs to which project.
Use single-process containers
The pattern above, with all services in one container, has an unofficial name: the Fat Container. The alternative is to split the services into separate containers. As Docker's design shows, an image build can specify only one default startup command per container, so Docker naturally suits running a single service per container, and this is the approach the official documentation recommends.
The first question when splitting services is: where does the base image for each service come from? There are two options.
Option 1: extend a standard OS image uniformly. For example, the Nginx and MySQL images would be:

FROM ubuntu:14.04
RUN apt-get update -y && apt-get install -y nginx

FROM ubuntu:14.04
RUN apt-get update -y && apt-get install -y mysql-server
The advantage of this approach is that all services share a unified base image and can be extended and modified in the same way; for example, having chosen Ubuntu, every service can be installed with apt-get. The problem is that a large number of services must be maintained by ourselves, and in particular, when a specific version of a service is needed, it often has to be compiled from source, so debugging and maintenance costs are high.
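As an illustration of that maintenance cost, pinning a specific PHP version on top of a bare OS image typically means compiling from source. A minimal sketch, where the PHP version, download URL, and configure flags are only assumptions for illustration:

```dockerfile
FROM ubuntu:14.04
# Toolchain and headers needed to build PHP from source
RUN apt-get update -y && apt-get install -y build-essential libxml2-dev curl
# Download, unpack, compile, and install one specific PHP version
RUN curl -fsSL -o /tmp/php.tar.gz http://cn2.php.net/distributions/php-5.6.9.tar.gz \
    && tar -xzf /tmp/php.tar.gz -C /tmp \
    && cd /tmp/php-5.6.9 \
    && ./configure --enable-fpm \
    && make && make install
```

Every version bump means revisiting this recipe, which is exactly the maintenance burden described above.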
Option 2: inherit directly from the official images on Docker Hub. The same Nginx and MySQL images become:

FROM nginx:1.9.0

FROM mysql:5.6
Docker Hub can be seen as Docker's GitHub. Docker officially maintains images for many commonly used services, and there are plenty of third-party images as well. You can even build your own private Docker Hub in a short time based on the docker-registry project. Building each service's image on top of its official image gives a very rich choice and lets you switch service versions at very low cost. The drawback is that official images are extended in a variety of ways, so you first need to understand how the original image was built. To keep our services as flexible as possible, we choose the latter approach to build our images.
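For reference, a private hub based on the docker-registry project mentioned above can be brought up with the stock registry image; a quick sketch, where the port and the image being pushed are only examples:

```shell
# Start a private registry container listening on port 5000
docker run -d -p 5000:5000 registry
# Re-tag a local image against the private registry, then push it
docker tag eva/php localhost:5000/eva/php
docker push localhost:5000/eva/php
```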
In order to split the service, now our directory becomes as follows:
~ / Dockerfiles
├ ─ ─ mysql
│ └ ─ ─ Dockerfile
├ ─ ─ nginx
│ ├ ─ ─ Dockerfile
│ ├ ─ ─ nginx.conf
│ └ ─ ─ sites-enabled
│ ├── default.conf
│ └── evaengine.conf
│ ├ ─ ─ Dockerfile
│ ├ ─ ─ composer.phar
│ ├── php-fpm.conf
│ ├── php.ini
│ ├── redis.tgz
└ ─ ─ Dockerfile
Create a separate folder for each service and place a Dockerfile in each service folder.
MySQL inherits from the official MySQL 5.6 image. The Dockerfile is just one line, with no extra processing, because the official image already covers the common needs:

FROM mysql:5.6
Run the following in the project root directory:

docker build -t eva/mysql ./mysql

This automatically downloads the base image and builds ours, which we name eva/mysql.
Since all database data would be discarded when the container exits, we persist the MySQL data outside the container so we don't have to re-import it every time. The official image stores databases under /var/lib/mysql, and an administrator password must be set through an environment variable, so the container can be run with:

docker run -p 3306:3306 -v ~/opt/data/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -it eva/mysql

This binds local port 3306 to the container's port 3306, persists the container's databases to the local ~/opt/data/mysql directory, and sets a root password for MySQL.
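If a mysql client happens to be installed on the host, the containerized server can then be checked like any normal MySQL instance, using the port and password from the command above:

```shell
# Connect through the bound local port and print the server version
mysql -h 127.0.0.1 -P 3306 -u root -p123456 -e 'SELECT VERSION();'
```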
The nginx directory holds the Nginx configuration file nginx.conf, project configuration files such as default.conf, and so on, all prepared in advance. The Dockerfile contents are:
FROM nginx:1.9
ADD nginx.conf /etc/nginx/nginx.conf
ADD sites-enabled/* /etc/nginx/conf.d/
RUN mkdir /opt/htdocs && mkdir /opt/log && mkdir /opt/log/nginx
RUN chown -R www-data.www-data /opt/htdocs /opt/log
VOLUME ["/opt"]
The official nginx:1.9 image is based on Debian Jessie, so we first copy the prepared configuration files into place, replacing the configuration inside the image. By personal convention, /opt/htdocs is the web server document root and /opt/log/nginx is the Nginx log directory.
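As a rough idea of what goes into sites-enabled, a minimal default.conf might look like the sketch below; it assumes the /opt/htdocs document root above and a PHP-FPM backend reachable as php:9000 (both the host name and the structure are assumptions of this sketch, not taken from the project files):

```nginx
server {
    listen 80;
    # Document root follows the /opt/htdocs convention above
    root /opt/htdocs;
    index index.php index.html;

    # Hand *.php requests to the PHP-FPM container
    location ~ \.php$ {
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```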
Build the image in the same way:

docker build -t eva/nginx ./nginx

And run the container:

docker run -p 80:80 -v ~/opt:/opt -it eva/nginx
Note that we bind local port 80 to the container's port 80 and mount the local ~/opt directory into the container's /opt directory, so the project source code can be placed under ~/opt and served through the container.
The PHP container is the most complex one, because in a real project we may need to install extra PHP extensions and use some command-line tools; here we take the Redis extension and Composer as examples. First, download the extension source and other files into the php directory in advance, so that builds copy them from local disk instead of downloading over the network every time, which greatly speeds up image builds:
wget https://getcomposer.org/composer.phar -O php/composer.phar
wget https://pecl.php.net/get/redis-2.2.7.tgz -O php/redis.tgz
The php directory also holds the prepared PHP configuration files php.ini and php-fpm.conf. The base image we chose is php:5.6-fpm, also based on Debian Jessie. The official image thoughtfully ships a docker-php-ext-install command that quickly installs commonly used extensions such as GD and PDO. All supported extension names can be obtained by running docker-php-ext-install inside the container.
Take a look at the Dockerfile:

FROM php:5.6-fpm
ADD php.ini /usr/local/etc/php/php.ini
ADD php-fpm.conf /usr/local/etc/php-fpm.conf
COPY redis.tgz /home/redis.tgz
RUN docker-php-ext-install gd \
    && docker-php-ext-install pdo_mysql \
    && pecl install /home/redis.tgz \
    && echo "extension=redis.so" > /usr/local/etc/php/conf.d/redis.ini
ADD composer.phar /usr/local/bin/composer
RUN chmod 755 /usr/local/bin/composer
WORKDIR /opt
RUN usermod -u 1000 www-data
VOLUME ["/opt"]
The build does the following:
- Copies the php and php-fpm configuration files into the appropriate directories
- Copies the Redis extension source into /home
- Installs the GD and pdo_mysql extensions via docker-php-ext-install
- Installs the Redis extension via pecl
- Copies composer into the image as a global command
By personal convention, /opt is again set as the working directory.
One detail here: when copying the tar package, the Docker instruction used is COPY rather than ADD, because ADD automatically extracts tar files.
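The difference is easy to see side by side; a two-line sketch with an illustrative archive name:

```dockerfile
ADD  archive.tgz /home/   # /home/ ends up containing the *extracted* files
COPY archive.tgz /home/   # /home/archive.tgz stays a tarball, ready for pecl install
```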
Now we can finally build and run:

docker build -t eva/php ./php

docker run -p 9000:9000 -v ~/opt:/opt -it eva/php
In most cases, Nginx and PHP read the same project source code, so here we again mount the local ~/opt directory, and bind port 9000.
Besides running php-fpm, the PHP container should also serve as the project's PHP CLI, which guarantees that the PHP version, extensions, and configuration files stay consistent.
For example, Composer can be run inside the container with the following command:

docker run -v $(pwd -P):/opt -it eva/php composer install --dev -vvv

Running this line in any directory is equivalent to dynamically mounting the current directory onto the container's default working directory (/opt, the working directory we set for the PHP container) and executing the command there.
Similarly, command-line tools such as phpunit, npm, and gulp can be run inside the container the same way.
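To make such invocations feel like native commands, they can be wrapped as shell aliases, e.g. in ~/.bashrc; the alias names and the --rm cleanup flag are personal-preference assumptions:

```shell
# Run Composer and PHPUnit from the eva/php container against the current directory
alias composer='docker run --rm -v $(pwd -P):/opt -it eva/php composer'
alias phpunit='docker run --rm -v $(pwd -P):/opt -it eva/php ./vendor/bin/phpunit'
```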
For ease of presentation, Redis is used only as a cache with no persistence requirement, so its Dockerfile is a single line:

FROM redis:3.0
We have now split what originally ran in one container into multiple containers, each running a single service, so the containers need to be able to communicate with each other. Docker containers can communicate in two ways. One is binding container ports to local ports, as described above. The other is Docker's linking feature. In a development environment, communicating through links is more flexible and avoids problems such as port conflicts. For example, Nginx and PHP can be linked like this:
docker run -p 9000:9000 -v ~/opt:/opt --name php -it eva/php

docker run -p 80:80 -v ~/opt:/opt --link php:php -it eva/nginx
In a typical PHP project, Nginx needs to link to PHP, and PHP in turn needs to link to MySQL, Redis, and so on. To make the links between containers easier to manage, Docker officially recommends using docker-compose for these operations.
Install it with a single command:

pip install -U docker-compose
Then prepare a docker-compose.yml file in the root of the Docker project with the following contents:
nginx:
  build: ./nginx
  ports:
    - "80:80"
  links:
    - php
  volumes:
    - ~/opt:/opt
php:
  build: ./php
  ports:
    - "9000:9000"
  links:
    - mysql
    - redis
  volumes:
    - ~/opt:/opt
mysql:
  build: ./mysql
  ports:
    - "3306:3306"
  volumes:
    - ~/opt/data/mysql:/var/lib/mysql
redis:
  build: ./redis
  ports:
    - "6379:6379"
Then run:

docker-compose up

This completes all the port binding, mounting, and linking operations in one go.
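Day-to-day usage then boils down to a few standard docker-compose subcommands:

```shell
docker-compose up -d    # build images if needed and start every service in the background
docker-compose ps       # show the status of the project's containers
docker-compose logs     # view the aggregated service logs
docker-compose stop     # stop all services without removing the containers
```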
More complex examples
The above is the evolution of a standard PHP project's Docker environment. Real projects generally integrate more complex services, but the basic steps still apply. For example, EvaEngine/Dockerfiles is a Docker-based development environment prepared for my open-source project EvaEngine, which depends on the queue service Gearman, the cache services Memcache and Redis, the front-end build tools Gulp and Bower, the back-end CLI tools Composer and PHPUnit, and so on. For the specific implementation, feel free to read the code yourself.
After the team put this into practice, an environment setup that originally took a full day now takes only a dozen or so commands with Docker, and the time has dropped significantly to about 3 hours (most of it spent waiting on downloads). Most importantly, Docker-built environments are 100% identical, so there are no more problems caused by human error. Going forward, we will further apply Docker to CI and to the production environment.
This article was first published in my Wolong Court column, "PHP and Those Startup Things"; please retain this notice when reposting.
About the author: Xu Qian (AlloVince) is a PHPer at a startup and an open-source enthusiast who often shares PHP-related experience on his blog.