How often have you had to set up a server environment from scratch just to deploy your application (for example, a website)? Probably more often than you would like.
In the best case, you had a script that did all of this automatically. In the worst case, the process looked something like this:
- install database D, version X.X;
- install web server N, version X.X, and so on.
Environment management configured this way becomes very resource-intensive over time. Any change to the configuration, even a minor one, means at least:
- every developer must be made aware of the change;
- every change must be safely applied to the production environment.
Tracking and managing such changes without special tools is difficult. One way or another, problems with environment dependencies arise, and the further development progresses, the harder those problems become to find and fix.
What I described above is often called vendor lock-in. For application development, server-side applications in particular, this becomes a big problem. In this article, we will consider one possible solution: Docker. You will learn how to create, deploy, and run an application with it.
*Disclaimer:* this is not a review of Docker. At the end of this article there is a list of useful literature that describes working with Docker in more depth. This is a first entry point for developers who plan to deploy Node.js applications using Docker containers.
While developing one of my projects, I was faced with a lack of detailed articles, which led to a fair amount of wheel reinvention. This post is a slightly belated attempt to fix the lack of information on the topic.
What is Docker, and what is it good for?
Put simply, Docker is an abstraction over Linux containers (it was originally built on top of LXC). Processes launched through Docker see only themselves and their descendants. Such processes are called Docker containers.
To create abstractions on top of such containers, Docker has the notion of an *image* (*docker image*). Based on a Docker image, you can configure and create containers.
There are thousands of ready-made Docker images with pre-installed databases, web servers, and other important building blocks. Another advantage of Docker is that it is economical in its memory consumption: a container uses only the resources it needs.
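A few standard Docker CLI commands make it easy to see what you have locally (assuming Docker is installed):

```shell
docker images   # list the images available locally
docker ps       # list running containers
docker ps -a    # list all containers, including stopped ones
```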
Getting closer
We will not dwell on installation for long: over the past few releases the process has been simplified to a few clicks and commands.
In this article, we will walk through deploying a Docker application using a server-side Node.js application as an example. Here is its primitive source code:
```javascript
// index.js
const http = require('http');

const server = http.createServer(function (req, res) {
  res.write('hello world from Docker');
  res.end();
});

server.listen(3000, function () {
  console.log('server in docker container is started on port: 3000');
});
```
We have at least two ways to package the application in a Docker container:
- create and run a container from an existing image using the command-line interface (CLI);
- build your own image based on an existing one.
The second method is used more often.
To get started, download the official Node.js image:

```shell
docker pull node
```
The docker pull command downloads a Docker image. After that, you can call docker run, which creates and runs a container based on the downloaded image.
```shell
docker run -it -d --rm -v "$PWD":/app -w=/app -p 80:3000 node node index.js
```
This command launches the index.js file, maps container port 3000 to host port 80, and prints the id of the created container. Better already! But the CLI alone will not get you far. Let's create a Dockerfile for our server.
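The flags in the command above can be read as follows (a brief annotation of the standard docker run options):

```shell
# -it            : interactive mode with a pseudo-TTY
# -d             : run the container in the background (detached)
# --rm           : remove the container automatically when it stops
# -v "$PWD":/app : mount the current directory into /app inside the container
# -w=/app        : set /app as the working directory inside the container
# -p 80:3000     : map host port 80 to container port 3000
# node           : the image to run; everything after it is the command to execute
docker run -it -d --rm -v "$PWD":/app -w=/app -p 80:3000 node node index.js
```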
```dockerfile
FROM node
WORKDIR /app
COPY . /app
CMD ["node", "index.js"]
```
This Dockerfile specifies the base image our image inherits from, the working directory in which container commands will run, an instruction that copies the files from the directory where the build is launched into the image, and, on the last line, the command that will run in the created container.
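For a real Node.js application with dependencies, the Dockerfile usually also installs packages. A minimal sketch (assuming the project has a package.json, which the primitive example above does not) could look like this:

```dockerfile
FROM node
WORKDIR /app
# Copy the dependency manifest first, so this layer stays cached
# until package.json actually changes
COPY package.json /app
RUN npm install
# Then copy the rest of the sources
COPY . /app
CMD ["node", "index.js"]
```

Splitting the COPY into two steps means a source-only change does not re-run npm install on every build.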
Next, we need to build an image from this Dockerfile that we will deploy:

```shell
docker build -t username/helloworld-with-docker:0.1.0 .
```

This command creates a new image, names it username/helloworld-with-docker, and tags it 0.1.0.
Our image is ready. We can run it with the docker run command. Thus we solve the vendor lock-in problem: launching the application no longer depends on the environment, and the code is delivered together with the Docker image. These two properties let us deploy the application anywhere we can run Docker.
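A quick local check could look like this (a hypothetical smoke test, assuming Docker is installed and host port 80 is free):

```shell
# Build the image and start a container from it
docker build -t username/helloworld-with-docker:0.1.0 .
docker run -d --rm -p 80:3000 username/helloworld-with-docker:0.1.0

# Ask the server for a response; it should answer
# with "hello world from Docker"
curl http://localhost:80
```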
Deploy
The first 99% is not as terrible as the remaining 99%.
After completing all the instructions above, the deployment itself is a purely mechanical matter that depends on your development environment. We will consider two options for deploying a Docker image:
- manual deployment of Docker image;
- deployment using Travis-CI.
In each case, we will consider delivering the image to an independent environment, for example, the staging server of your product.
Manual deployment
This option is good if you do not have a continuous integration environment. First you need to upload the Docker image to a location accessible to the staging server. In our case, this will be DockerHub. It gives each user one private image repository and an unlimited number of public repositories for free.
Log in to DockerHub:

```shell
docker login -e username@gmail.com -u username -p userpass
```

(Note: recent versions of Docker no longer accept the -e flag; use docker login -u username and enter the password when prompted.)
Then we push our image there:

```shell
docker push username/helloworld-with-docker:0.1.0
```
Next, go to the staging server (remember, Docker must already be installed on it).
To deploy the application on the server, we only need to execute one command:

```shell
docker run -d --rm -p 80:3000 username/helloworld-with-docker:0.1.0
```
And that's all! Docker first checks the local image registry; since it won't find username/helloworld-with-docker:0.1.0 there, it looks the name up in the DockerHub registry. The image is there because we already pushed it, so Docker downloads it, creates a container from it, and launches your application inside.
Now, every time you need to update your application, you push the image with a new tag and restart the container on the server.
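An update cycle could then look like this (a sketch; the 0.1.1 tag and the container id placeholder are assumptions for illustration):

```shell
# On the development machine: build and push the new version
docker build -t username/helloworld-with-docker:0.1.1 .
docker push username/helloworld-with-docker:0.1.1

# On the staging server: stop the old container and start the new one
docker ps                        # find the id of the running container
docker stop <container_id>
docker run -d --rm -p 80:3000 username/helloworld-with-docker:0.1.1
```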
P.S. This method is not recommended if you can use Travis-CI.
Deploy with Travis-CI
First, add the DockerHub credentials to Travis-CI; they will be stored in encrypted environment variables:

```shell
travis encrypt DOCKER_EMAIL=email@gmail.com
travis encrypt DOCKER_USER=username
travis encrypt DOCKER_PASS=password
```
Then we add the received keys to the .travis.yml file, with a comment next to each key so we can tell them apart later.
```yaml
env:
  global:
    - secure: "UkF2CHX0lUZ...VI/LE=" # DOCKER_EMAIL
    - secure: "Z3fdBNPt5hR...VI/LE=" # DOCKER_USER
    - secure: "F4XbD6WybHC...VI/LE=" # DOCKER_PASS
```
Next, we need to log in, build, tag, and push the image:

```yaml
after_success:
  - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
  - docker build -f Dockerfile -t username/hello-world-with-travis .
  - docker tag username/hello-world-with-travis username/hello-world-with-travis:0.1.0
  - docker push username/hello-world-with-travis
```
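Instead of hard-coding 0.1.0, the image can be versioned per build. A sketch using Travis's built-in $TRAVIS_BUILD_NUMBER variable (an assumption, not part of the original config):

```yaml
after_success:
  - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
  # Tag each image with the CI build number so every build is traceable
  - docker build -f Dockerfile -t username/hello-world-with-travis:$TRAVIS_BUILD_NUMBER .
  - docker push username/hello-world-with-travis:$TRAVIS_BUILD_NUMBER
```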
Image delivery can also be triggered from Travis-CI in various ways:
- manually;
- over an ssh connection;
- with online deploy services (DeployBot, DeployHQ);
- with the AWS CLI;
- with Kubernetes;
- with dedicated Docker deployment tools.
Summary
In this article, we examined preparing and deploying a simple Node.js server with Docker in two ways: manually and automated with Travis-CI. I hope this article was useful to you.