Building and deploying similar microservices with werf and GitLab CI





Two years ago we published the article "Building projects with GitLab CI: one .gitlab-ci.yml for hundreds of applications", and now we'll talk about solving a similar problem as it stands today. This new material covers how to build CI/CD processes for a large number of similar applications now that include is available in .gitlab-ci.yml and werf has replaced dapp.



Introduction



The instructions in this article address the following situation: a project contains a large number of applications that are essentially identical. For simplicity and convenience (and as a tribute to fashion), we will call these applications microservices. All of these microservices are built, deployed, and run in the same way, and their specific settings are configured via environment variables.



Clearly, copying .gitlab-ci.yml, werf.yaml, and .helm between repositories causes plenty of problems: any change to the CI configuration, the build process, or the Helm chart description has to be replicated across all the other repositories...



Connecting templates in .gitlab-ci.yml



With the arrival of the include:file directive in GitLab CE (since version 11.7), it became possible to maintain a shared CI configuration. include itself appeared a little earlier (in 11.4), but it could only pull templates from public URLs, which somewhat limited its usefulness. The GitLab documentation describes all of its features and usage examples very well.



This made it possible to stop copying .gitlab-ci.yml between repositories and to keep it up to date in a single place. Here is an example of a .gitlab-ci.yml using include:



include:
  - project: 'infra/gitlab-ci'
    ref: 1.0.0
    file: base-gitlab-ci.yaml
  - project: 'infra/gitlab-ci'
    ref: 1.0.0
    file: cleanup.yaml





We strongly recommend using branch names in ref with caution. Includes are resolved at the moment the pipeline is created, so your CI changes can automatically end up in the production pipeline at the most inopportune moment. Using tags in ref, on the other hand, makes it easy to version the description of your CI/CD processes: every update is as transparent as possible, and if you use semantic versioning for the tags, it is easy to track the history of pipeline changes.
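For illustration, the included base-gitlab-ci.yaml could define a hidden job template that every microservice pipeline then reuses via extends. The template and job names below are assumptions rather than the actual contents of the shared repository; the werf commands mirror the ones shown later in this article:

# base-gitlab-ci.yaml in infra/gitlab-ci (hypothetical contents)
.base_build:
  stage: build
  script:
    - type multiwerf && source <(multiwerf use 1.0 beta)
    - type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
    - werf build-and-publish --stages-storage :local

Build application:
  extends: .base_build
  tags:
    - my-shell-runner-tag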



Connecting .helm from a separate repository



Since these microservices are deployed and run in the same way, they all need the same set of Helm templates. To avoid copying the .helm directory between repositories, we used to clone the repository where the Helm templates were stored and check out the desired tag. It looked something like this:



  - git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/infra/helm.git .helm
  - cd .helm && git checkout tags/1.0.0
  - type multiwerf && source <(multiwerf use 1.0 beta)
  - type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
  - werf deploy --stages-storage :local





There were also variations involving git submodules, but that always felt more like a workaround...



With a recent werf release, it became possible to connect charts from external repositories. Full support for the package manager's features, in turn, made it possible to describe the application's deployment dependencies transparently.



Step by step



Let's get back to solving our microservices problem. First we need a repository for storing Helm charts; ChartMuseum, for example, is easily deployed in a Kubernetes cluster:



helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install stable/chartmuseum --name flant-chartmuseum





Add an Ingress:



apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  name: chart-museum
spec:
  rules:
  - host: flant-chartmuseum.example.net
    http:
      paths:
      - backend:
          serviceName: flant-chartmuseum
          servicePort: 8080
        path: /
status:
  loadBalancer: {}





In the flant-chartmuseum Deployment, the DISABLE_API environment variable needs to be set to false. Otherwise (with its default value), requests to the ChartMuseum API will not work, and it will be impossible to create new charts.
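Instead of patching the Deployment by hand, this value can usually be passed at install time. In the stable/chartmuseum chart it lives under env.open.DISABLE_API, though the exact key may differ between chart versions, so treat this as a sketch:

helm install stable/chartmuseum --name flant-chartmuseum \
  --set env.open.DISABLE_API=false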



Now let's describe the repository where the shared Helm charts will be stored. Its directory structure is as follows:



.
├── charts
│   └── yii2-microservice
│       ├── Chart.yaml
│       └── templates
│           └── app.yaml
└── README.md





Chart.yaml might look like this:



name: yii2-microservice
version: 1.0.4





The templates directory should contain all the Kubernetes primitives needed to deploy the application to the cluster. As you may have guessed, in this case the microservice is a PHP application based on the Yii2 framework. Let's describe its minimal Deployment with two containers, nginx and php-fpm, built with werf:



---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.global.werf.name }}
spec:
  replicas: 1
  revisionHistoryLimit: 3
  template:
    metadata:
      labels:
        service: {{ .Values.global.werf.name }}
    spec:
      imagePullSecrets:
      - name: registrysecret
      containers:
      - name: backend
{{ tuple "backend" . | include "werf_container_image" | indent 8 }}
        command: [ '/usr/sbin/php-fpm7', "-F" ]
        ports:
        - containerPort: 9000
          protocol: TCP
          name: http
        env:
{{ tuple "backend" . | include "werf_container_env" | indent 8 }}
      - name: frontend
        command: ['/usr/sbin/nginx']
{{ tuple "frontend" . | include "werf_container_image" | indent 8 }}
        ports:
        - containerPort: 80
          name: http
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx", "-s", "quit"]
        env:
{{ tuple "frontend" . | include "werf_container_env" | indent 8 }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.global.werf.name }}
spec:
  selector:
    service: {{ .Values.global.werf.name }}
  ports:
  - name: http
    port: 80
    protocol: TCP





The .Values.global.werf.name variable contains the project name from the werf.yaml file, which lets you derive the names of the required Services and Deployments.
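For reference, a minimal werf.yaml for such a microservice might begin like this. The project name and base images below are purely illustrative assumptions; only the image names backend and frontend are implied by the templates above:

project: my-yii2-microservice   # this value ends up in .Values.global.werf.name
configVersion: 1
---
image: backend
from: php:7.3-fpm-alpine        # hypothetical base image
---
image: frontend
from: nginx:stable-alpine       # hypothetical base image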



Let's set up the simplest possible automation for pushing our charts to ChartMuseum on every commit to the master branch. To do this, we describe the following .gitlab-ci.yml:



Build and push to chartmuseum:
  script:
    - for i in $(ls charts); do helm package "charts/$i"; done;
    - for i in $(find . -type f -name "*.tgz" -printf "%f\n"); do curl --data-binary "@$i" http://flant-chartmuseum.example.net/api/charts; done;
  stage: build
  environment:
    name: infra
  only:
    - master
  tags:
    - my-shell-runner-tag





Charts are versioned by changing version in Chart.yaml. All new chart versions will be added to ChartMuseum automatically.
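To check that a new chart version has actually reached the repository, you can query the ChartMuseum API directly (using the example hostname from above); it returns a JSON listing of all published charts and their versions:

curl http://flant-chartmuseum.example.net/api/charts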



We're on the home stretch! In the project repository, declare the chart dependencies in .helm/requirements.yaml:



dependencies:
- name: yii2-microservice
  version: "1.0.4"
  repository: "@flant"





... and run the following in the repository directory:



werf helm repo init
werf helm repo add flant http://flant-chartmuseum.example.net
werf helm dependency update





This gives us a .helm/requirements.lock file in the repository. Now, to deploy the application to the cluster, it is enough to run werf helm dependency build before werf deploy.
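Put together, the deploy step in CI now needs just one extra command before werf deploy. A sketch reusing the same werf invocations as in the earlier snippet:

type multiwerf && source <(multiwerf use 1.0 beta)
type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
werf helm dependency build
werf deploy --stages-storage :local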



To update the application's deployment description, you now only need to go through the microservice repositories and apply small patches that change the versions and hashes in requirements.yaml and requirements.lock. If desired, this operation can also be automated through CI: we already described how to do that in the article mentioned above.
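As a very rough illustration only, such a bulk update could be scripted along these lines. The repository list, the clone URL, and the new chart version are assumptions made up for this sketch; the werf commands are the same ones used above:

#!/usr/bin/env bash
# Hypothetical bulk update: bump the shared chart version in every microservice repository.
NEW_VERSION="1.0.5"
REPOS="service-a service-b service-c"

for repo in $REPOS; do
  git clone "https://gitlab.example.com/apps/${repo}.git"
  cd "$repo"
  # point requirements.yaml at the new chart version
  sed -i "s/version: \".*\"/version: \"${NEW_VERSION}\"/" .helm/requirements.yaml
  # same commands as shown above, run inside the repository
  werf helm repo init
  werf helm repo add flant http://flant-chartmuseum.example.net
  werf helm dependency update            # refreshes .helm/requirements.lock
  git commit -am "Bump yii2-microservice chart to ${NEW_VERSION}"
  git push
  cd ..
done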



Conclusion



We hope the workflow described here for maintaining a set of similar applications will be useful to engineers who face comparable problems. We will be happy to share other practical recipes for using werf, so if you run into difficulties that seem insurmountable or simply unclear, feel free to reach out to us on Telegram or leave requests for future materials in the comments.


