GeneDock's Docker-Based Deployment and Operations Practice
[Editor's Note] This article describes how GeneDock built an automated, easy-to-use, and highly available deployment and operations system on top of Docker container technology, and shares the experience and lessons learned along the way.
This article was written by GeneDock intern engineer Hu Yingqian. When reproducing it, please keep the author information and the original link.
GeneDock's system is a typical microservice architecture: more than 20 modules handle the platform's various functions, such as the interface layer, permissions, resource management, compilation, scheduling, and monitoring, and the modules communicate with each other through RESTful interfaces. With this many modules, if operations engineers had no automated deployment tools and did everything by hand, upgrades and deployments would become a major pain point under a fast-paced agile development process.
So the requirement emerged: replace manual work with programs, and achieve scripted or even fully automated deployment.
Where to start? Most engineers would think of two things:
1. Use Docker to publish and deploy all services.
2. Write scripts that automatically download the code, build the image, and start the service process.
Once we actually rolled up our sleeves and got to work, the problems surfaced: the modules had never been cleanly isolated from one another, and there were code and path dependencies between them. As a result, the image-build script for each service was different and full of black magic: for example, copying a source file over from another module, or even several services having to share a single image. Whenever a new version changed the code even slightly, the deployment scripts had to change too, which made mistakes easy.
Bad smell! The perfectionist GDers got fired up and began to refactor:
1. Split the services so that each Docker image starts exactly one logically independent service;
2. Decouple cross-service calls into RESTful interface calls, and forbid importing code across modules;
3. Package the shared base libraries (such as encryption algorithms) as standardized PyPI packages, host them on a private PyPI source built with pypiserver, and install them into the Docker images;
4. Standardize each service's directory structure, start mode, and log format, so that every service container mounts code, data, and logs in the same way. This eliminated the directory chaos.
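The refactoring conventions above might be captured in a per-service Dockerfile along the following lines. This is a hedged sketch: the base image, the internal PyPI hostname, the package names, and the paths are all placeholders, not GeneDock's actual values.

```dockerfile
# Hypothetical per-service image, assuming a Python service and a private
# pypiserver instance reachable at pypi.internal.example (placeholder).
FROM python:2.7

# Install the shared base libraries from the private PyPI source
# (package names here are placeholders)
RUN pip install --index-url https://pypi.internal.example/simple/ \
    --trusted-host pypi.internal.example \
    genedock-crypto genedock-common

# Standardized directory layout: the same paths for every service
WORKDIR /app
VOLUME ["/app/code", "/app/data", "/app/logs"]

# Uniform start mode: every service exposes the same entry point
CMD ["python", "/app/code/run.py"]
```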
After this careful refactoring, we achieved the following goals:
1. Each module's code is fully independent, with cross-module code-level dependencies removed;
2. All services are released as Docker images;
3. Deployment is scripted, and the test environment automatically updates from the GitHub master branch.
Although introducing Docker brought automation, the deployment process was still somewhat long:
1. Download the service code from GitHub to the local server;
2. Build the image with the Docker tooling;
3. Start the Docker container;
4. Confirm the service has started normally, then notify the responsible engineers over WeChat.
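The manual process above can be sketched as a small shell script. The repository URL, service name, and notification step here are placeholders, not GeneDock's actual ones.

```shell
#!/bin/sh
set -e

# Placeholders -- substitute the real repository and service names.
REPO_URL="https://github.com/example-org/example-service.git"
SERVICE="example-service"

# Build a local image reference such as "example-service:v1.0.0"
image_ref() {
    printf '%s:%s' "$1" "$2"
}

deploy() {
    tag="$1"
    # 1. Download the service code from GitHub to the local server
    git clone --branch "$tag" "$REPO_URL" "/tmp/$SERVICE"
    # 2. Build the image with the Docker CLI
    docker build -t "$(image_ref "$SERVICE" "$tag")" "/tmp/$SERVICE"
    # 3. Start the container
    docker run -d --name "$SERVICE" "$(image_ref "$SERVICE" "$tag")"
    # 4. Check that the service came up, then notify the engineers
    sleep 5
    if docker ps --filter "name=$SERVICE" --filter "status=running" | grep -q "$SERVICE"; then
        echo "deploy ok: notify engineers"
    else
        echo "deploy failed: alert engineers"
    fi
}
```

Even scripted, every deployment still repeats the clone-and-build work on the target server, which is what the next section removes.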
In fact, there are many professional third-party cloud services that are both powerful and focused; they can greatly improve deployment efficiency at a modest cost. So, as believers in cloud services, we adopted the following services as advanced productivity tools:
- Use GitHub to manage code, preventing code loss and confusion and improving team collaboration.
- Use Travis CI for continuous integration: every code push triggers automated code checks and unit tests, and every release automatically builds a Docker image and pushes it to the Docker Registry.
- Use Docker Hub to maintain images. Docker Hub has full version-tag support and is more convenient than running a private Docker Registry.
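A Travis CI configuration for this workflow might look roughly as follows. This is a sketch under assumptions: the lint and test commands, the image name, and the credential variable names are placeholders, not GeneDock's actual configuration.

```yaml
# Hypothetical .travis.yml: run checks on every push; build and push a
# Docker image only when a release tag is published.
language: python
services:
  - docker
script:
  - flake8 .            # code checking (placeholder command)
  - python -m pytest    # unit tests (placeholder command)
deploy:
  provider: script
  script: >-
    docker build -t example-org/example-service:$TRAVIS_TAG . &&
    echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin &&
    docker push example-org/example-service:$TRAVIS_TAG
  on:
    tags: true
```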
With the services above, release and deployment are decoupled. When a tag is published on GitHub, the deployment system pulls the service image from the official Docker Hub onto the server and starts the container with `docker run`, which makes off-site deployment and rollback easy. This is where Docker really pays off: services are packaged once, then simply downloaded and run.
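Under this scheme, deploying a new tag and rolling back to an old one become the same operation: pull the image for the tag and restart the container. A hedged sketch with placeholder names:

```shell
#!/bin/sh
set -e

IMAGE="example-org/example-service"   # placeholder Docker Hub repository
SERVICE="example-service"

# Map a GitHub release tag to its Docker Hub image reference
image_for_tag() {
    printf '%s:%s' "$IMAGE" "$1"
}

# Deploying v1.3.0 -- or rolling back to v1.2.0 -- is the same call
deploy_tag() {
    docker pull "$(image_for_tag "$1")"
    docker rm -f "$SERVICE" 2>/dev/null || true
    docker run -d --name "$SERVICE" "$(image_for_tag "$1")"
}
```

No source code or build toolchain ever needs to be present on the production server; only Docker and access to the registry.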
Both GitHub and Docker Hub have solid privatization and permission support. For the self-built PyPI source and other services that may be reached from the public network, we forward traffic through an Nginx reverse proxy protected by Nginx basic authentication, gaining both convenience and security.
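The reverse-proxy setup could be expressed with a server block along these lines. The hostname, certificate paths, and backend port are placeholders for illustration only.

```nginx
# Hypothetical sketch: expose the private pypiserver through Nginx with
# basic authentication. Hostnames and paths are placeholders.
server {
    listen 443 ssl;
    server_name pypi.example.com;

    ssl_certificate     /etc/nginx/ssl/pypi.crt;
    ssl_certificate_key /etc/nginx/ssl/pypi.key;

    location / {
        auth_basic           "Private PyPI";
        auth_basic_user_file /etc/nginx/.htpasswd;   # created with htpasswd
        proxy_pass           http://127.0.0.1:8080;  # pypiserver backend
        proxy_set_header     Host $host;
    }
}
```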
In other words, after these further improvements, we made progress on three fronts:
1. The deployment process is greatly simplified, significantly reducing the operations engineers' burden;
2. Operations engineers no longer need to touch the code, clarifying the boundary between operations and development;
3. Security risks are basically eliminated.
Finally, GeneDock's current deployment and operations architecture is shown in the figure below:
- Code is managed on GitHub, with automated code checks and unit tests run through Travis CI
- Docker images are managed on Docker Hub, with continuous integration and deployment through Travis CI
- Configuration is managed centrally in MongoDB; the relevant configuration is written to a unified path and then mounted into the Docker containers
- Deployment and testing are monitored by Travis CI, which alerts on failure
- Running containers are monitored through Docker's event mechanism
- Server logs are stored in Docker Data Volumes for collection and analysis by the related logging services
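The uniform mount convention for configuration, data, and logs could be expressed, for example, as a Docker Compose service definition. This is a sketch: the service name, image, and host paths are placeholders, and the source does not say Compose itself is used.

```yaml
# Hypothetical Compose sketch of one service following the uniform
# mount convention; names and paths are placeholders.
version: "2"
services:
  example-service:
    image: example-org/example-service:v1.3.0
    volumes:
      - /opt/config/example-service:/app/config:ro  # config materialized from MongoDB
      - /opt/data/example-service:/app/data         # service data
      - /opt/logs/example-service:/app/logs         # logs collected from the Data Volume
    restart: always
```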
When building this system, besides our own exploration, we also consulted a number of best practices and documents, and even read the Docker source code. Every company is different, so the concrete plan must be determined by your actual situation. Any comments and suggestions are welcome; please contact us. If you are interested in the technologies above and think you could do even better, you are welcome to send your resume to email@example.com ; for specific job information, please see https://genendock.com/joinus/ .