Container Security in CI/CD

Autumn is here, and techno-utopia is in full swing. Technology is rushing forward. We carry in our pockets computers with many orders of magnitude more computing power than the machines that guided the flights to the Moon. With YouTube VR we can swim in the ocean with jellyfish and whales, and robots have long been exploring the lifeless horizons of cold planets.



At the same time, engineers, IT service specialists, developers and their countless colleagues have split into two camps: those who create new solutions (software, strategies, information systems), and those who have to make sense of them.



The microservices approach has burst into the application development ecosystem as well. Until recently it was an obscure, closed-off, fundamentally new technique, but today, only a few years later, large and medium-sized companies are already using it confidently in their own development environments. What is it? We will skip the "classic" definitions and explain it in our own words.








Microservice architecture. Containers



Microservices is an approach in which the functionality of a system is split into many small services, each performing a single task within the larger whole. One acts as a web server, another as a database, a third handles something else, and so on. Network communication is set up between these microservices, and each is allocated the resources it needs. A funny but no less apt illustration of the concept is the way a fast-food restaurant takes and fulfils orders, with a small code sketch to follow.



The client (user) places a new order at a terminal or at the counter (the web server) and specifies how many cheeseburgers and fries and which soda go into the meal (filling the database). After payment the data goes to the kitchen, where some employees fry the potatoes (the cooking service) while others assemble the tray and pour the soda (the assembly service). The completed order is then passed to the pickup counter (the delivery service), where the client shows a receipt with the order number (there is even verification!) and receives the meal. Each unit in this process handles exactly one task, and does so quickly and smoothly, but strictly according to its own algorithm. You have to admit it would be silly to expect the front desk to pour your soda faster than the kitchen, or the employee frying patties to take your payment. Yet if the system is properly tuned and the processes run as designed, the whole thing really does work quickly and efficiently.
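Translated into code, the same split might look like the minimal sketch below: an "orders" service that accepts a request and delegates the cooking to a separate "kitchen" service over the network. The service names, ports and the plain standard-library HTTP server are illustrative choices, not a recommendation.

```python
# A minimal sketch of two cooperating "microservices" using only the
# Python standard library. Service names and ports are hypothetical.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class KitchenHandler(BaseHTTPRequestHandler):
    """The 'kitchen' service: prepares whatever item it is asked for."""

    def do_GET(self):
        item = self.path.strip("/") or "unknown"
        body = json.dumps({"status": "cooked", "item": item}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass


class OrderHandler(BaseHTTPRequestHandler):
    """The 'orders' service: takes an order and asks the kitchen
    service to cook it, over the network."""

    def do_GET(self):
        item = self.path.strip("/") or "cheeseburger"
        with urlopen(f"http://localhost:8081/{item}") as resp:
            kitchen_reply = json.load(resp)
        body = json.dumps({"order": item, "kitchen": kitchen_reply}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass


if __name__ == "__main__":
    # Start the kitchen in the background, then serve orders.
    kitchen = HTTPServer(("localhost", 8081), KitchenHandler)
    threading.Thread(target=kitchen.serve_forever, daemon=True).start()

    orders = HTTPServer(("localhost", 8080), OrderHandler)
    print("orders service on :8080, kitchen service on :8081")
    orders.serve_forever()
```

Requesting http://localhost:8080/cheeseburger then returns a response assembled from both services, each of which did only its own job.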



When talking about microservice architecture, one important point is the execution environment. The most widespread option today is containers (hello, Docker). The idea behind containers is to pack the entire required environment into them: a lightweight OS image, the installed packages, the frameworks and libraries, and finally the application itself. How is this useful? At the very least, the developer can no longer shrug off a report that the application is broken with "well, it works on my machine."



Containers let you build an application, pack it together with its entire environment into a ready-made image on top of a lightweight base system, and stop worrying about the problems of running in different environments. Need to deploy the application to another server? Launch the working image in a container there, and everything runs.
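That "launch the image and it just works" step can be as small as the sketch below, assuming a local Docker Engine, the docker SDK for Python (pip install docker) and a publicly available nginx image used purely as an example.

```python
# Sketch: pull a published image and run it as a container, assuming a
# local Docker Engine and the `docker` Python SDK are installed.
import docker

client = docker.from_env()

# The image already carries its OS layer, packages and the application,
# so "deployment" is just pulling and starting it.
client.images.pull("nginx", tag="1.25")        # example image and tag
container = client.containers.run(
    "nginx:1.25",
    detach=True,
    ports={"80/tcp": 8080},   # expose the web server on the host
    name="demo-nginx",        # hypothetical container name
)

print(container.short_id, container.status)
```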



In an information system built on containers, you no longer have to worry about software versions in the environment where the applications run; with continuous delivery in place, shipping the code is all that is left. The container idea implies that the application is developed in the same container environment it will run in, and since the microservice is self-contained, it does not matter which operating system or installed packages surround it. The application works because it was built for a container environment that stays the same regardless of the host around the container. There is no longer a long and tedious round of installing all the required software and wiring up connections, dependencies and packages. You do not have to migrate applications by hand when moving between development and production environments or when data-center capacity grows. A new layer of abstraction has appeared between the final software with its users and the host environment (virtual or bare-metal) on which it all runs. Thanks to its convenience, this approach is steadily gaining momentum and shows no sign of slowing down.



Many large organizations try to adopt the best available techniques for improving business efficiency, growth, scalability and transformation, including in their information systems. It is precisely large, digitally oriented businesses that care most about flexibility, scalability and mobility, because when the industry shifts they are the first to face the difficulties of expanding and adapting to a changing market, and this is not only about IT companies. On CI/CD and its security, our organization is approached by public-sector bodies, major financial players such as banks, and even logistics monopolists. In every case they have one thing in common: the use of containers in one form or another when building applications and services.



Big business is often compared to sharks, and here it is hard to come up with a better comparison. Have you seen how sharks hunt schools of fish? Have you noticed how much more maneuverable the small fish are, even in a large school? Who reacts faster and turns more sharply, the shark or a small mackerel? The same holds for companies in the market as a whole. We are, of course, not counting Microsoft or Apple back when they fit in a garage; those are isolated cases. Statistically, though, the picture is that large companies can dictate trends and set directions, while small companies find it easier to adapt quickly. So large companies also try to increase their mobility and flexibility in whatever ways are available to them.



But, as they say, there is a nuance... A big company is in fact not a monolith. It consists of many divisions, with many departments, teams, units and an even larger crowd of employees. In information systems, and in development in particular, the areas where teams operate can overlap, and it is exactly at that junction that conflicts arise between information security (IS) services and developers. So neither large nor medium-sized businesses are immune to such difficulties.



Sooner or later, an organization that uses containers will face a question. It may come at the very beginning, or it may come after an incident that costs the company money.



How do we make the processes secure?



We hear this question in one form or another quite often, and together with the customer we think through a solution and look for suitable approaches. As a rule, an organization, especially a large one that has adopted CI/CD, employs smart and experienced specialists, including in information security. These people understand why they need microservices, which problems the approach will solve and which new difficulties it will bring. So they do what is needed to adopt and use the technology successfully: prepare the infrastructure, run audits, deploy systems, build processes and agree on internal regulations.



However, it is not always possible to foresee, track and monitor everything. How, for example, do you find out whether the SQL server version inside a container carries a critical vulnerability? Manually? Fine. And if there are dozens of containers? Hundreds?
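At that scale the check has to be scripted. Below is a minimal sketch, assuming the open-source Trivy scanner is installed on the PATH; the image list is illustrative, and exact JSON field names can differ between Trivy versions.

```python
# Sketch: scan a list of images for known vulnerabilities by shelling
# out to the Trivy scanner (assumed to be installed on PATH).
import json
import subprocess

# Illustrative list; in practice it would come from the registry
# or from the container engine.
IMAGES = ["postgres:13", "mysql:8.0", "my-app:latest"]

for image in IMAGES:
    result = subprocess.run(
        ["trivy", "image", "--format", "json",
         "--severity", "HIGH,CRITICAL", image],
        capture_output=True, text=True, check=False,
    )
    if result.returncode != 0:
        print(f"{image}: scan failed ({result.stderr.strip()[:80]})")
        continue

    report = json.loads(result.stdout)
    findings = [
        v
        for target in report.get("Results", [])
        for v in target.get("Vulnerabilities") or []
    ]
    print(f"{image}: {len(findings)} HIGH/CRITICAL vulnerabilities")
    for v in findings[:5]:  # show only a few per image
        print(f"  {v.get('VulnerabilityID')}  {v.get('PkgName')}  {v.get('Severity')}")
```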



How can a security specialist be sure that the base OS image in the container with the application does not hide an exploit? Check by hand? Check what exactly, and where? Across all those tens and hundreds of containers? And where do you find that much time and those resources?
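Part of the answer to "what exactly to check" can also be automated: for instance, verifying that the base images actually in use match digests the security team has reviewed. A sketch assuming the docker Python SDK; the approved digest values are placeholders, not real data.

```python
# Sketch: verify that locally present images match an allowlist of
# approved digests (placeholder values), using the docker Python SDK.
import docker

# Digests of base images the security team has reviewed and approved.
APPROVED_DIGESTS = {
    "alpine:3.19": "sha256:<approved-digest-goes-here>",
    "debian:12-slim": "sha256:<approved-digest-goes-here>",
}

client = docker.from_env()

for ref, approved in APPROVED_DIGESTS.items():
    image = client.images.get(ref)
    # RepoDigests entries look like "alpine@sha256:..."; keep the digest.
    digests = {d.split("@", 1)[1] for d in image.attrs.get("RepoDigests", [])}
    status = "OK" if approved in digests else "MISMATCH - investigate"
    print(f"{ref}: {status}")
```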



As CI/CD spread, the question of security in the development cycle became more pressing. You need at least an acceptable level of confidence in image quality; you need to know about vulnerabilities in software and packages, about the state of running containers, and whether suspicious or plainly illegitimate actions are taking place inside them. And if we are talking about the development cycle specifically, it would be worth having tools that secure the development process itself, its pipeline, and not just the containers. Add to that control over secrets, auditing of registries, control over network interaction and so on. And here we come to the main question.
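Before turning to that main question, one small illustration of what "control over secrets" can mean in practice: a sketch, assuming the docker Python SDK, that flags running containers whose environment variables look like plaintext secrets. The keyword list is illustrative only.

```python
# Sketch: flag running containers whose environment variables look like
# plaintext secrets. The keyword list is illustrative, not exhaustive.
import docker

SUSPICIOUS = ("PASSWORD", "SECRET", "TOKEN", "API_KEY", "PRIVATE_KEY")

client = docker.from_env()

for container in client.containers.list():
    env = container.attrs["Config"].get("Env") or []
    leaks = [
        entry.split("=", 1)[0]
        for entry in env
        if any(word in entry.split("=", 1)[0].upper() for word in SUSPICIOUS)
    ]
    if leaks:
        print(f"{container.name}: possible plaintext secrets in env: {leaks}")
```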



Why is security needed, and what is special about it in CI/CD?



In essence, it is a set of practices for ensuring security in and around the development cycle: the use of dedicated software, the creation of methods and regulations, and even the preparation of teams (teams are the key to everything!). And one important point: all of this security has to be introduced into development with justification, not by hacking away blindly.



Let us dwell on this in more detail. The usual approach to IT security in general, and in development in particular, rests on the principles of "prohibit, restrict, prevent". By far the safest information system is one that does not work at all. But it has to work! And the people using the system have to be able to work with it too.



This is the conflict of interest mentioned above. The security specialist wants a safe environment and full control over it; the developer wants to ship the product, quickly and conveniently; and the IT engineer wants the environment to be workable and fault-tolerant and, together with the developer, strives for automation.






Protection in CI/CD is not about software or regulations; it is about teamwork and embedding security into the development stream. The approach aims at equal involvement of everyone in the process, clearly divided areas of responsibility, automation, monitoring and transparent reporting. Most importantly, security is built into the application development process so that it appears at the early stages and does not demand extra resources, either computational or human.
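A minimal sketch of what "appearing at the early stages" can mean in practice: a gate step inside the pipeline that scans the freshly built image and fails the build before anything reaches production. It assumes the open-source Trivy scanner is on the PATH; the IMAGE_TAG variable name is hypothetical.

```python
# Sketch of a CI "gate" step: scan the freshly built image and fail the
# pipeline (non-zero exit code) if CRITICAL vulnerabilities are found.
# Assumes Trivy on PATH; the IMAGE_TAG variable name is hypothetical.
import json
import os
import subprocess
import sys

image = os.environ.get("IMAGE_TAG", "my-app:latest")

scan = subprocess.run(
    ["trivy", "image", "--format", "json", "--severity", "CRITICAL", image],
    capture_output=True, text=True, check=True,
)
report = json.loads(scan.stdout)
critical = [
    v
    for target in report.get("Results", [])
    for v in target.get("Vulnerabilities") or []
]

if critical:
    print(f"{image}: {len(critical)} CRITICAL vulnerabilities, blocking the build")
    sys.exit(1)

print(f"{image}: no CRITICAL vulnerabilities found, the build may proceed")
```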



Let us look at an example. Suppose one of the development teams builds an application; this happens in a container environment under the supervision of security specialists, while infrastructure engineers keep that environment running. At the end of development a finished application appears, flows into production and starts working there; customers use it and everyone is happy. But some time later a serious vulnerability is discovered in the container with the application. The security specialist registers the vulnerability and hands it to the developers to fix; they patch it, after which dependencies have to be reworked and some packages updated. Then the next version of the application is rolled out.



Add up the time the security team spends localizing the problem, the time the development team spends fixing it, and a bit more while the security service runs a second audit and closes the case. But what if we could take some tool and plug it in at the development stage? What if our application simply never rolled out to production because it did not meet the required security level? What if every specialist involved could see which threats were detected in their own area of responsibility? And what if the whole process were automated, from detection all the way to tickets in the bug tracker?
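Closing the loop "from detection to the bug tracker" can be as mundane as one HTTP call at the end of the scan step. In the sketch below the GitHub Issues API stands in for "the bug tracker"; the repository name, the token variable and the finding itself are hypothetical, the requests library is assumed to be installed, and any tracker with an HTTP API (Jira, GitLab and so on) could be used the same way.

```python
# Sketch: turn a scan finding into a ticket automatically.
# Repository, token variable and the finding data are placeholders.
import os
import requests


def open_security_issue(finding: dict) -> None:
    repo = "example-org/example-app"          # hypothetical repository
    token = os.environ["BUGTRACKER_TOKEN"]    # hypothetical variable name
    resp = requests.post(
        f"https://api.github.com/repos/{repo}/issues",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "title": f"[security] {finding['id']} in {finding['package']}",
            "body": (
                f"Severity: {finding['severity']}\n"
                f"Image: {finding['image']}\n"
                "Found automatically by the pipeline scan step."
            ),
            "labels": ["security", "ci"],
        },
        timeout=10,
    )
    resp.raise_for_status()
    print("created issue:", resp.json().get("html_url"))


open_security_issue({
    "id": "CVE-2024-0000",   # placeholder finding
    "package": "openssl",
    "severity": "CRITICAL",
    "image": "my-app:latest",
})
```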



That is the goal of organizing secure development and deployment: making it not only safe but also convenient. Introducing new processes or extra tools should simplify the work, not the other way around.



There are tools and techniques that let you monitor the state of the relevant parts of the system and the stages of the development process, and that help everyone involved stay on top of events within their area of responsibility. The emphasis here is not on specialized software but on how people interact with the system and with each other. Is it more convenient for the developer to fix and test the application right inside a running container? Fine, let them fix it there. The security service, meanwhile, wants to be sure nothing illegitimate is happening inside that container? No problem, they will know about everything. The task gets done, and nobody gets in anyone's way. Efficiency and rationality! This becomes possible if you apply the right security tools in the right places of the development environment and revise the rules of interaction between services and teams. It may not be utopia, but you must agree that life becomes a little easier for both sides.
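One way to "know about everything" without forbidding anything is to watch the container engine's own event stream. A sketch, assuming a local Docker Engine and the docker Python SDK, that simply reports every exec session opened inside a running container so the security team stays informed while the developer keeps working.

```python
# Sketch: observe instead of prohibit. Listen to Docker engine events
# and report every `exec` session opened inside a container.
# Assumes a local Docker Engine and the docker Python SDK; the loop
# runs until interrupted.
import docker

client = docker.from_env()

for event in client.events(decode=True, filters={"type": "container"}):
    action = event.get("Action", "")
    if action.startswith("exec_create") or action.startswith("exec_start"):
        name = event.get("Actor", {}).get("Attributes", {}).get("name", "?")
        print(f"exec detected in container '{name}': {action}")
        # Here the event could be forwarded to monitoring or a SIEM
        # instead of (or in addition to) being printed.
```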



Why tie the developers' hands, and burden the security service with doing so, when you can build the processes so that the developer works the way they work best (selflessly, in ecstasy over the perfection of their code) while the security specialist oversees the process (not interfering, merely sharing in the pleasure from the sidelines)?



Container security. Is it worth it?



Customers come to us at very different stages and with very different CI/CD experience. There have been large organizations that ran into an undetected backdoor in a production container, through which access to important data was obtained and the data was stolen. It became obvious that in an overgrown, cumbersome system it is too difficult to track and preemptively neutralize potential threats.



There have also been small companies that had only recently adopted CI/CD. After analyzing the processes in their pipelines, their experts concluded that the available security tools covered too little of the process and that there were weak spots through which an attack could sooner or later occur. Perhaps not now, perhaps later, perhaps never. But if it does happen, the price of the mistake will be high.



Our clients share their concerns, which in most cases come down to the familiar idea of slapping the hands of developers, engineers and everyone else in the process whenever they try to step outside the established boundaries. But sometimes stepping outside is simpler and faster, and in some cases there is no other way at all, and then the security service has to turn a blind eye to violations. We stand for a different concept. Why violate or ignore regulations that were written with such pain? Why should teams get in each other's way while doing their jobs? When working with a customer, we start the conversation with their problems. There are always problems.






"Do not look for a client: find a problem and offer a solution, and the client will appear on their own."



After listening to a customer, we usually understand which problems they face: difficulties in team interaction, unfortunate design decisions in the pipeline, a shortage of computing and human resources, and much more. When introducing security tools into the customer's infrastructure, we are guided by the desire to help solve their problems, and to do so in a way that minimizes the likelihood of creating new ones. This lays a foundation for the future: whoever ends up operating the system, many problems should be anticipated and solved at the start. Decisions made at the early stages are in most cases cheaper than decisions made later, under heavy load deep in production.



Based on all this, we suggest starting with an audit that covers not only the state of the information systems and their components, but also the processes between development teams, the approach of the security services and so on. Do not forget that people are the most important factor, both as a source of threats and for the results the system delivers. An audit is a necessary stage; it can reveal important flaws in the system or miscalculations in how people interact, which we then try to resolve or rework together with the customer. It is important to understand that the goal is not to pile up a heap of software products and plug every vulnerability they discover. On the contrary, a scenario is quite likely in which buying expensive software products turns out to be unnecessary. Often the level of security in a CI/CD environment can be raised with the security features the organization already has, both those built into the containerization environment (in the orchestrator, for example) and third-party ones. What matters is not so much their quantity or even quality as their correct application.
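As an illustration of leaning on what the orchestration environment already exposes, here is a hedged sketch (assuming kubectl is installed and configured for the cluster) that flags pods whose containers run privileged or do not declare runAsNonRoot, using nothing beyond the standard Kubernetes Pod fields.

```python
# Sketch of an audit-style check built on the orchestrator's own data:
# list pods via kubectl and flag weak securityContext settings.
# Assumes kubectl is installed and configured for the target cluster.
import json
import subprocess

out = subprocess.run(
    ["kubectl", "get", "pods", "--all-namespaces", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout

for pod in json.loads(out).get("items", []):
    ns = pod["metadata"]["namespace"]
    name = pod["metadata"]["name"]
    for c in pod["spec"].get("containers", []):
        sc = c.get("securityContext") or {}
        if sc.get("privileged"):
            print(f"{ns}/{name}/{c['name']}: runs privileged")
        if not sc.get("runAsNonRoot"):
            print(f"{ns}/{name}/{c['name']}: runAsNonRoot not set")
```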



The results of the audit make it possible to draw up a work plan, a roadmap with clear intermediate objectives and an understandable end goal. Everything beyond that is a matter of engineering. In any undertaking, proper planning is a large part of a successful outcome.



It is important not to get carried away and not to forget why all of this is being done. Nobody's goal is an insanely locked-down system in which no container starts if any threat is detected in it and no application reaches production if it deviates from the protected model in any way. Remember that introducing or enabling protection tools should serve convenience as well as security.



For example, one of these cases involved introducing software for protecting containers and the orchestration environment. It would seem simple: deploy it, configure it, scan, see all the vulnerabilities, and then begin the long process of fixing them. With this approach, however, the problems described at the beginning arise. The security service gets a tool that can block many activities in the orchestration environment and, used carelessly, does more harm than good. The developers' hands are tied, because the security service keeps trimming their capabilities: it is no longer possible to hot-fix something inside a container, and when a package vulnerability is found in an image the developer is dragged into regulatory bureaucracy. As a result, a problem that could have been fixed within a day drags on indefinitely.



To avoid such situations, we recommend arranging the processes so that the security service is not standing apart from the development process, as often happens, but is a participant in it. It certainly takes time, but it is worth debugging the process, and then life genuinely becomes easier. For example, security tools can detect threats already at the build stage inside the CI/CD pipeline, and the security service can record this. At the same time, information about threats at each build stage can be passed automatically to the team or specialist responsible for that stage, who then takes the steps needed to eliminate the threat. And all of this happens not under the whip, but under monitoring by the security service.
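A tiny sketch of that routing idea follows; the stage names, team addresses and the notify() transport are all hypothetical placeholders, since in a real setup the message would go to a chat channel, a mailbox or the bug tracker.

```python
# Sketch: route findings from each pipeline stage to whoever owns that
# stage. Stage names, addresses and the transport are placeholders.
STAGE_OWNERS = {
    "build":   "dev-team@example.org",
    "image":   "platform-team@example.org",
    "deploy":  "ops-team@example.org",
    "runtime": "security-team@example.org",
}


def notify(owner: str, finding: dict) -> None:
    # Placeholder transport: print instead of a real chat/e-mail call.
    print(f"-> {owner}: [{finding['severity']}] {finding['title']}")


def route_finding(stage: str, finding: dict) -> None:
    owner = STAGE_OWNERS.get(stage, "security-team@example.org")
    notify(owner, finding)


route_finding("image", {"severity": "HIGH",
                        "title": "vulnerable openssl in base image"})
route_finding("runtime", {"severity": "MEDIUM",
                          "title": "unexpected exec session in container"})
```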



Ultimately, the time spent fixing vulnerabilities can be cut significantly, and with it the cost to the business, whether financial or reputational.



Whatever the starting point, we try to shape an approach to organizing and raising the security level of container-based development that concentrates not only on designing and implementing specific solutions but also on the people involved. Each participant in the process plays their role, and the more convenient that role is and the less unnecessary interference there is, the better the end result. If you find a balance between the convenience of all the parties involved, you end up with a genuinely effective team. Later, of course, come various optimizations, broader automation of some processes, delegation of tasks and so on. But the main idea stays the same: a rethinking of security as such. Separation of duties, monitoring instead of mindless prohibitions, and every participant acting within their own area of responsibility.



And of course, convenience! Security can be convenient.


