Can autotests replace a human in the search for vulnerabilities: an interview with Alexandra Svatikova







Alexandra Svatikova works as an information security expert at Odnoklassniki. More than eight years ago, she switched from Java development to application security testing.







In our interview with her, we discussed:









There will be no hardcore in this article — for that, you can head to Heisenbug 2019 Moscow, where Alexandra will give a talk on static security testing. We'll return to her talk at the end of the post; for now, read on.







Getting into application security is not as hard as it sounds



- What did you study? And how did you end up in application security?







- I’m probably not an example to follow (laughs). I trained as a programmer; my diploma says something like “software engineering.” I worked first as a mobile developer, then moved to web development in Java. While looking for another Java position, I landed on a team that handled application security — the security-related side of development. They had a well-organized, clearly defined process, and they needed people to do code reviews and work with static analyzers. So they were looking for a Java programmer and were ready to teach them security. It seemed interesting, and I stayed.







- Do you think you’ll stay in this field for the long haul, or will something follow this stage?







- I think I’ll stay forever: I’ve been an application security analyst since 2011, so by now I have more security experience than development experience. A programmer has to be comfortable with routine tasks and bug fixing; a security specialist’s work is different in nature, and it appeals to me more.







- Compared to regular development, what additional areas did you have to master?







- Early in my career, I did testing. It sets your mind straight: you see the system not as a pile of code but as an organism that can be influenced in different ways. Testers and developers really do think differently.







For example, 10-15 years ago it was believed that a tester’s job was to break the system and find bugs by any means necessary. Security professionals sometimes need a similar mindset. For that, you need to understand how the system works as a whole.







Some developers are fixated on details: they know how their part of the system works but don’t understand how it all fits together. A developer might know how JS executes in the browser, yet not know how the browser then communicates with the server, or why things are arranged that way. A tester has to take a bird’s-eye view to assess how components interact, where the weak links are, and where vulnerabilities might lurk.







- Did you have to learn engineering topics, like network stacks, from scratch? For example, how JS, the frontend, and the backend work?







- I was already essentially a full-stack developer, so I understood how the backend and frontend work. You need a certain breadth of outlook, but how deep you go into the details depends on what you’re doing. Developers and testers alike, depending on the project, know more about systems topics (network protocols, say) or more about the frontend. It depends on the circumstances.







- How realistic is it for a typical full-stack developer or full-stack tester — someone doing programming, automation, and testing — to move into application security analysis, that is, into what you do?







- For a tester it’s easy. You do need some programming skills and an understanding of how the system is built from the inside — but a good tester without those no longer exists. Given that, it becomes a question of specialization: reading up on security and on the particular technologies (Android or backend, for example) in which you’ll be hunting for vulnerabilities.







Developers see their role in the process a bit differently, so for them this is a revolution — a different view of the profession. A developer cares about making things work; breaking them is harder for him.







Pentester, researcher, and application security analyst: what’s the difference



- As I understand it, your specialization is related to pentesting. How do pentesters compare to zero-day vulnerability researchers — or is it all mixed together, making it hard to tell who is who?







- There is no clear division into job titles, but I’ll explain the terms the community uses.







There are researchers who study a product, technology, protocol, or server in an attempt to find an interesting problem. “Interesting” means a problem that was not found before, or turned up in an unexpected place, or is a complex combination of previously known issues. As a rule, all those 0-day vulnerabilities are found by researchers.







A pentest is a service. You have a system, and you want its vulnerabilities found. The pentesters’ task is to penetrate the system. They will not find every potential problem — a vulnerability may be mitigated by other security mechanisms at different levels. A pentester looks for what can actually be exploited, then delivers a report and leaves, because making the system more secure or setting up development processes is not their job.







Then there are application security (or product security) analysts. They, by contrast, look at the system from the inside. Their task is to make the system secure. They have information about the system; it is not a “black box” to them. They also rank problems differently: analysts consider even vulnerabilities that cannot be exploited at the moment. Say there is a critical hole in the admin panel, but it is reachable only from the internal network, so it seems not very scary. An insider, though, understands that under certain circumstances something terrible could happen.







Analysts focus more on supporting the process. If pentesters find 20 bugs and leave, and the developers introduce just as many new ones while fixing them, the pentesters won’t help here — their picture of the system’s vulnerabilities is accurate only at that one moment.







- So the application security analyst does this continuously, day to day?







- Yes, and the work goes in two directions at once. On the one hand, you have to search for existing vulnerabilities and deal with them. On the other, there is the task of making the system safer overall.







This can be approached in different ways. For example, you can structure the development process so that fewer errors are made, or so that they are detected quickly. Or you can implement mechanisms that reduce the risk of a vulnerability reaching production. There are several ways to ensure system security.







- So an application security analyst’s work is closely tied to the teams and to the development process from the inside?







- Yes, an application security analyst should raise questions about the development process. SDLC (Secure Development LifeCycle) is the first buzzword you’ll encounter when reading about application security. In short, the goal is to ensure that security considerations are taken into account at every stage of development. Specific tasks are not always performed by a security specialist; sometimes they can be delegated to other team members. After all, the sooner you find a problem, the cheaper it is to fix.







The human mind is still indispensable in finding vulnerabilities



- Product problems can be found at the level of the specification, its discussion, a prototype, or a sketch — before a single line of code exists. At what stage does it become possible to find security issues? Can they be found before the code is even written?







- Of course, because some problems are directly related to how the requirements are formulated. Let me give you an extreme example. You build a login form, and the designer tells you: “let’s tell our users not just that they entered the wrong password, but which letter of the password they got wrong.” That is, the wording of a requirement can be inherently unsafe.
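To see why that requirement is unsafe, here is a minimal sketch (all names made up, not from the interview) of the conventional alternative: a login check should return the same generic message whether the username or the password is wrong, and certainly must not hint at which characters of the password failed.

```python
# Toy user store; a real system would hold salted password hashes, not plaintext.
USERS = {"alice": "s3cret!"}

def login(user: str, password: str) -> str:
    if USERS.get(user) == password:
        return "Welcome"
    # Generic response: reveals neither whether the username exists
    # nor how close the password was to being correct.
    return "Invalid username or password"

print(login("alice", "s3cret?"))  # Invalid username or password
print(login("bob", "anything"))   # Invalid username or password
```

The point is that both failure modes are indistinguishable to an attacker, which is exactly what the designer’s proposed wording would have broken.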







Likewise, at the specification stage we can anticipate that the system will be exposed to certain vulnerabilities and that certain protection mechanisms will have to be implemented. Take that same login form: you definitely need a captcha to prevent password guessing. Points like these should be raised while the architecture is being designed.
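The captcha requirement above can be sketched as a simple mechanism: count consecutive failed attempts per user and demand a captcha once a threshold is crossed. This is a toy illustration with made-up names; a real system would also track IP addresses, persist state, and expire counters.

```python
# Hypothetical threshold: after 3 consecutive failures, require a captcha.
MAX_FREE_ATTEMPTS = 3
failed_attempts = {}  # username -> consecutive failure count

def captcha_required(user: str) -> bool:
    return failed_attempts.get(user, 0) >= MAX_FREE_ATTEMPTS

def record_login(user: str, success: bool) -> None:
    if success:
        failed_attempts.pop(user, None)  # reset the counter on success
    else:
        failed_attempts[user] = failed_attempts.get(user, 0) + 1

for _ in range(3):
    record_login("alice", success=False)
print(captcha_required("alice"))  # True: guessing is now throttled
record_login("alice", success=True)
print(captcha_required("alice"))  # False: counter reset
```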







- How often is security testing embedded in CI/CD processes, and is it difficult? That is, taking any pipeline in Jenkins or TeamCity and adding a separate stage where security tests run — how realistic is that?







- There are guidelines on that same SDLC, and there are regulatory requirements. Accordingly, companies with a mature development process implement them. There are tools for automating parts of the process, but the effort-to-result ratio depends on the nature of the project and on how far along development was when these techniques were introduced.







If the application is being written from scratch, you can propose a static analyzer to keep questionable constructs out of the code. But if the code was written over the preceding ten years and you show up with a tool that costs crazy money, then of course you won’t find much.
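A “questionable construct” of the kind a static analyzer flags might look like the following — SQL assembled by string concatenation next to its parameterized fix. This is a toy Python/sqlite3 sketch, not an example from the interview:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # Flagged by SAST tools: user input concatenated into SQL (injection risk).
    return conn.execute(f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver handles escaping, so no injection.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('alice',)] -- the injection dumps the table
print(find_user_safe(payload))    # [] -- the payload is treated as a literal string
```

In a codebase written from scratch with the analyzer in place, the unsafe variant would never make it past review; in ten years of legacy code, patterns like it are everywhere.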







All automated tools share one problem: out of the box, they don’t know how the system works. If an individual component can be isolated, it can be tested with ready-made tools. But in a system with many dependencies, automated scanners can miss valuable information.







Application security in other companies



- Judging by companies’ public talks and conference reports, which company or individual project do you consider the flagship in application security analysis?







- Microsoft pioneered the SDLC implementation that became the canon. But how they work at the lowest level, and what tools they use, I don’t know.







Facebook writes a lot about how everything works technically: how they scan code, find vulnerabilities in running systems, and so on. Some Facebook tools are even open source, so you can dig into their guts.







Among Russian companies, Sberbank gave an interesting account of how they formalized their SDLC and documented the process. Their application security team is fairly large, but still not enough for all their developers. So they train security champions within the teams, give them some security knowledge, and when something comes up, a champion can raise a red flag.







Yandex gives conference talks about cool things like “how to hack a browser.”







- Is it realistic that a tester who hears a conference talk about threats and tools will see a significant effect after adopting them? Say a company buys a $10,000 scanner that looks for holes in the Jenkins pipeline. Or is it more important to understand the mechanics of exploiting vulnerabilities before implementing such things?







- You can’t buy a scanner for ten thousand dollars (laughs). As for finding vulnerabilities, it comes down to testing specific scenarios, and the scenarios are drawn from shared bodies of knowledge.







For example, OWASP (the Open Web Application Security Project) publishes guides on conducting security testing, code reviews, and so on. The OWASP Application Security Verification Standard describes everything that needs to be tested. Reading it requires no special knowledge beyond an understanding of web applications, so any tester can handle it.







The standard and the cheat sheets are enough to run manual tests: they describe what types of vulnerabilities exist and how to look for them. Some tests cannot be performed by off-the-shelf scanners by definition — checks of business logic, for example.







On the other hand, if you need to find XSS, you have to insert a quotation mark, a bracket, and so on into every parameter that can be changed. With 100 million parameters in the system, that task is no longer feasible by hand — but automated tools handle it just fine.
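The brute-force part of that job is what automation is good at. Here is a toy sketch (all names made up) of what such a tool does at its core: substitute a marker payload containing the dangerous characters into each parameter and check whether it comes back in the page unescaped.

```python
import html

# Marker payload with a quote, an apostrophe, and angle brackets.
PROBE = '"\'<zz>'

def is_reflected_unescaped(response_html: str) -> bool:
    """True if the probe appears in the page verbatim (i.e., not HTML-escaped)."""
    return PROBE in response_html

def scan_params(render, params):
    """Inject the probe into each parameter in turn; collect those reflected raw."""
    vulnerable = []
    for name in params:
        page = render({**params, name: PROBE})
        if is_reflected_unescaped(page):
            vulnerable.append(name)
    return vulnerable

# Toy page: 'user' is escaped (safe), 'q' is echoed raw (vulnerable).
def toy_render(params):
    return f"<p>Hello {html.escape(params['user'])}, you searched for {params['q']}</p>"

print(scan_params(toy_render, {"user": "alice", "q": "test"}))  # ['q']
```

A real scanner does the same thing against HTTP responses, with many payload variants per parameter — which is why it scales to millions of parameters where a human cannot.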







But when you run a scanner, there are three possible scenarios:







  1. The tool produces a report with many good, reproducible bugs and few false positives (ideal, but unlikely).
  2. The report contains 20 thousand findings, of which about 1% are real, so you have to sit down and determine which problems are genuine.
  3. The tool failed to understand something about the system and produced a report in which everything looks fine — but it isn’t. For example, it couldn’t compile the code because it couldn’t find a library. Or a vulnerability scanner made 10 requests, hit anti-flood protection, and got no response from the server for the remaining million requests.


So I think it’s important to understand what’s “under the hood” of a scanner, in order to pick a tool that matches the task at hand and to evaluate its results adequately.







Alexandra will develop the topic of choosing and tuning tools in her talk at Heisenbug, where she will discuss applying and customizing the SAST tools SonarQube and Find Security Bugs to find vulnerabilities in her project. What do these tools provide out of the box? And how can you extend their functionality yourself? As examples, she will examine XSS and IDOR vulnerabilities.



The conference will be held December 5-6 in Moscow.


