OK, do I really need Kubernetes?





In a large company, getting resources allocated for everyday work is often painfully difficult. All the Agile in the world crashes against the wall of a three-week wait for information security to approve new infrastructure. That is why we regularly get requests to move infrastructure to containers, so that changes can be rolled out not once every three months but whenever the business needs them. Customers usually ask for Kubernetes, as the most popular orchestration tool, although in practice it is genuinely needed in at most three projects out of ten. And often the better choice is not plain Kubernetes but OpenShift, or running Kubernetes not in your own infrastructure but in a public cloud. Below I will talk about real business cases we have solved, describe the main differences between Kubernetes and OpenShift, and explain how we cut information security approval down to 30 minutes with everyone still alive.



We have had several interesting projects where we untangled problems the customer had accumulated. For example, a retail company came to us that needed to ship new features continuously; the competition is fierce. But preparing infrastructure for development took six to ten days each time, which meant idle teams. Solving this by buying new hardware for testing and development is expensive and a road to nowhere, so we moved their IT infrastructure to container virtualization. As a result, the load dropped by 40%, and infrastructure for new development is now prepared in one to four hours. A bonus is the savings: everything kept running on the existing capacity without buying anything new.



What are we going to do with information security?



Information security people are absolutely necessary. They often go a bit overboard with their internal project requirements, but that is far better than one day discovering that your public-facing SFTP server has been taken over by Romanian hackers.



But there is a problem. If you think of the business process as a conveyor belt, their department often becomes the bottleneck everyone else piles up behind. On the one hand, they are responsible for all security-related risks; on the other, they simply cannot physically review the code and every architectural detail of a new product by hand.



The situation is much like security screening in places with heavy foot traffic. You could subject every metro passenger to a full inspection, running them through scanners, turning out pockets and checking phones. But instead of security you would get total transport collapse and a paralyzed system. The only workable option is automated systems that react, say, to faces, identifying wanted persons or flagging abnormal behavior.



In our context, such automated systems are properly built CI/CD pipelines with code analyzers at intermediate stages, solutions such as JFrog Xray for Artifactory, and properly tightened Kubernetes/OpenShift settings that rule out unsafe practices like running a container as root. We are currently preparing a boxed solution that provides all of this.
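
To give a concrete idea of what "tightened settings" means at the workload level, here is a minimal sketch of a pod spec fragment that a cluster with such policies would expect. The field names are standard Kubernetes; the image and user ID are invented for the example.

    # Fragment of a pod template spec (a sketch; image and UID are examples)
    spec:
      securityContext:
        runAsNonRoot: true        # the kubelet refuses to start a container running as UID 0
        runAsUser: 10001
      containers:
      - name: app
        image: registry.example.com/app:1.0   # hypothetical image
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]

OpenShift's default restricted security context constraints enforce much of this without any extra configuration.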



The goal is to move from "nothing goes to production until we have looked at it" to "if there are no objections, it gets deployed automatically." There is no point in automation if the organizational processes stay the same.



In one project we managed to bring the information security veto window down to 30 minutes. In other words, if the security team does not reject a change within half an hour, it is considered approved automatically and the change is rolled out to production. Now we are trying to get all approvers in the project down to a 60-minute deadline without compromising security.
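
To sketch how such a gate can look in a pipeline: a rough shell fragment that waits half an hour for an explicit veto and otherwise lets the deploy continue. The approval service URL and change ID are entirely hypothetical, and in a real setup this logic would live inside the CI system rather than a standalone script.

    # "Silent approval" gate: no rejection within 30 minutes means the change is approved
    DEADLINE=$(( $(date +%s) + 30*60 ))
    while [ "$(date +%s)" -lt "$DEADLINE" ]; do
        # Hypothetical internal approval service and change ID
        STATUS=$(curl -sf https://approvals.example.internal/api/changes/CHG-1234/status)
        if [ "$STATUS" = "rejected" ]; then
            echo "Change vetoed by information security, stopping the pipeline" >&2
            exit 1
        fi
        sleep 60
    done
    echo "No objections within 30 minutes, proceeding with the deploy"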



What is the difference between container management systems?



Kubernetes and OpenShift are the main solutions for container orchestration. Let's analyze the main differences and advantages for the business.



Openness



Kubernetes is a fully open product that can be deployed independently and maintained either in-house or with external support. The labor market has more or less stabilized, and finding experts on the subject is no longer a problem.



OpenShift is a semi-closed product from Red Hat with a rather involved licensing scheme. Under the hood it contains Kubernetes, but it adds a lot of tooling around it that simplifies many tasks. The vendor provides full paid support for its product.



Here the choice comes down to what suits you better: support by your own specialists or by the vendor.



Platforms



Kubernetes runs on almost any Linux platform and with most cloud providers.



OpenShift runs only on RHEL, Red Hat Atomic and Red Hat CoreOS. There is a community version, OKD, but it is just as tightly tied to specific distributions; the one exception is that it can also be installed on CentOS. Keep in mind that Red Hat does not officially provide paid support for it.



Security policies



Out of the box, OpenShift has stricter security settings. That is a plus when deploying infrastructure from scratch, but it can be a problem because some images are incompatible with the policies. For example, OpenShift by default forbids running containers as root, which breaks compatibility with older images. On the other hand, this approach, combined with convenient Active Directory integration and logging based on the EFK stack (Elasticsearch, Fluentd, Kibana), gives you exactly the out-of-the-box security needed to take load off the information security team.
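
When a legacy image genuinely cannot run without root, OpenShift lets you loosen the policy for a single service account rather than the whole cluster. A sketch, assuming a hypothetical project and deployment both called legacy-app:

    # Grant the anyuid SCC only to the service account of the legacy workload,
    # leaving the default "restricted" SCC in force for everything else
    oc create serviceaccount legacy-sa -n legacy-app
    oc adm policy add-scc-to-user anyuid -z legacy-sa -n legacy-app
    # Point the deployment at that service account
    oc set serviceaccount deployment/legacy-app legacy-sa -n legacy-app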



Kubernetes can be brought up to this level as well, but it takes a lot of engineering time and effort.
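
For example, on recent Kubernetes versions part of this can be done with the built-in Pod Security admission: labeling a namespace makes the API server reject pods that break the restricted profile. A minimal sketch (the namespace name is made up):

    # Reject pods in this namespace that violate the "restricted" Pod Security Standard
    # (root user, privilege escalation, extra capabilities and so on)
    kubectl create namespace payments
    kubectl label namespace payments \
        pod-security.kubernetes.io/enforce=restricted \
        pod-security.kubernetes.io/warn=restricted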



Templates



OpenShift templates are less flexible than Kubernetes Helm charts. Because of its stricter security policies, Red Hat cannot offer the same flexibility as Helm charts right now. Nevertheless, in OpenShift 4 the situation has evened out thanks to the integrated OperatorHub.
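
With OperatorHub, installing an operator comes down to a few clicks in the console or a single Subscription object applied with oc apply. A sketch; the package, channel and catalog names are illustrative and should be checked against the actual OperatorHub entry:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: prometheus
      namespace: openshift-operators
    spec:
      # package, channel and catalog are example values
      name: prometheus
      channel: beta
      source: community-operators
      sourceNamespace: openshift-marketplace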



Having well-designed templates out of the box makes life much easier. Essentially, this is a kind of package manager for building and configuring various services.



A single command along the lines of "helm install prometheus-operator" deploys a rather complex system that would take a very long time to set up by hand. Here Kubernetes is in the lead.
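
For reference, with current Helm the chart lives in the prometheus-community repository and is published as kube-prometheus-stack, so the same thing looks roughly like this (the release name and namespace are arbitrary):

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    # Deploys Prometheus, Alertmanager, Grafana and the operator's CRDs in one command
    helm install monitoring prometheus-community/kube-prometheus-stack \
        --namespace monitoring --create-namespace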



General conclusions



Like most vendor products, Red Hat OpenShift is more of a boxed solution and architecturally more rigid. It takes fewer heroics from rare, highly experienced engineers to keep it running, offers more convenient deployment scenarios and integrates very well with CI/CD tools, Jenkins in particular. OpenShift is a good fit for companies that find it easier to pay for support of a finished product than to hire their own specialists.



For companies with strong specialists in this field, Kubernetes may be a more flexible and interesting solution. It may also be suitable for a relatively small business that wants to save on Red Hat licenses, but does not have complex tasks that require highly qualified experts.



Real cases



I will try to show how containerization helped solve business problems, saved on licenses and smoothed out peak loads during mass user influxes.



Case: E-Commerce



Problem



The customer had more than 100 active services running on VMware Cloud Foundation, and this fleet had several problems:



  1. Twelve resource-hungry, low-margin services ran on VMware, eating up roughly $408,000 a year.
  2. Three services (a portal and two mobile applications) entered active development: over seven months their resource requirements grew 4.5 times and keep growing.
  3. Several services have peak loads that require six to seven times more resources than usual. Hardware was allocated to handle them, and most of the time it sat at less than 15% utilization.


On top of all that, the customer wanted to get away from being tied to a single virtualization vendor.



Our solution



The first and simplest move: transfer the services with peak loads to the public CROC cloud, with billing for consumed resources. It is about as simple and boring as it gets; migrating from someone's VMware to our KVM is one of the most common jobs for cloud engineers.



Since the customer had already bought the hardware for scaling, we only had to price the licenses. VMware licenses for the new hosts came to about $2,100,000, which did not make the customer happy. We proposed moving some of the services to KVM under OpenStack and, to keep things manageable, unifying management of both infrastructures (and OpenShift as well) with CloudForms.



As a result, the customer got a second leg of its private cloud, based on OpenStack, which closed the vendor lock-in question. By moving some of the resource-intensive services to the new leg, we managed to cut spending on VMware licenses (round-the-clock support from CROC turned out to be cheaper).



Case: Retail



Problem



The audit showed that infrastructure allocation at the customer was in a terrible state. More than 250 projects, more than 150 development teams, half of them running containers on Kubernetes. Resources were purchased for each new project and stayed assigned to it with no way to reclaim them, even when unused. No billing at all, no single portal. Huge costs for test and pre-production environments, since they also ran on VMware.



At the same time, the customer did not want to switch entirely to a new platform but wanted everything gathered under a single management portal, where "everything" means not just VMware but also PaaS, containers and unified billing.



Our solution



Internally the solution turned out rather elaborate, but from the customer's side everything looks extremely simple.



There is a catalog where the user picks the virtual machine parameters and the target environment: DevTest, PreProd or Prod. CloudForms then decides where to deploy the requested resource: on VMware or on OpenShift. We are still working on unified billing, since a hybrid VMware + OpenShift setup is quite hard to stitch together.



In effect, we brought order to the infrastructure, clearing the rubble the customer had piled up.



Case: Industry



Problem



Copying VMs to the various environments (Test, Dev, Preprod, Prod) took more than three days and required 15 manual operations, each taking up to 30 minutes. A deeper audit showed that allocating resources for a new project used to take two weeks, at 10 to 20 requests per month. Now there are more than ten resource requests per day, which without automation was heading for collapse.



Our solution



Essentially, the customer needed to move its IT infrastructure to a service model, rebuild and automate the change process, create a self-service portal with a service catalog, and introduce a platform for managing containerized applications. We automated VM copying, but it still took 40 to 60 minutes, which did not satisfy the customer. So we proposed switching to containers, which brought the copying time down to three to five minutes.
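
To give a sense of why the difference is so large: once an image is built, standing up yet another isolated copy of a service is a couple of commands rather than a multi-hour VM clone. A sketch with made-up names:

    # Spin up an isolated test copy of a service from an already-built image
    # (image, namespace and port are examples)
    kubectl create namespace feature-x-test
    kubectl create deployment shop --image=registry.example.com/shop:1.4.2 -n feature-x-test
    kubectl expose deployment shop --port=8080 -n feature-x-test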



Conclusions



Containerization and microservices are not an indulgence for bad code slapped together and thrown straight into production. On the contrary, they are a whole way of working that brings a host of improvements through deep automation of every process.





And sometimes containerization is not needed at all, and the problem is solved by migrating to an external cloud. But to make that call, you need a proper external audit and analysis of what is actually going on. In short, containers are just one tool for solving business problems, albeit a very good one.


