In practice, web applications are slow because of the front-end in 80-90% of cases: an interview with Ivan Akulov







Ivan Akulov is a Google Developer Expert in web technologies and the founder of the performance consultancy PerfPerfPerf. Very soon, at HolyJS 2019 Moscow, he will run a workshop dedicated, unsurprisingly, to performance: finding problems in React, debugging slow applications, and other runtime topics.







To immerse readers and HolyJS 2019 Moscow attendees deeper in the topic, we sat down with Ivan for a wide-ranging conversation.









The questions are asked by Dmitry Makhnev and Artyom Kobzar from the HolyJS program committee.







About what he does and how he came to performance



Dmitry: Tell me a few words about yourself.







Ivan: I'm Ivan Akulov, a performance consultant and Google Developer Expert, and I run my own performance consulting agency.







Dmitry: You say the consulting agency isn't your main job. So what do you mainly do?







Ivan: Right now about half of my time goes to one long-standing client, a Brazilian company I work with on a WordPress content-marketing platform. I manage the infrastructure there, do a bit of product development, and help shape the overall vision.







The rest of my time goes to consulting, talks, articles, and so on.







Dmitry: Do you get many requests for performance consulting? How does that even work?







Ivan: In general, it varies a lot from month to month...







Dmitry: When do astrologers announce the month of performance? :)







Ivan: More like when the accountants announce a new quarter! (laughs) I'm not actively looking for clients right now, mainly because there's no time: I'm already fully loaded with what I have. But broadly it works like this: I write articles, give talks, and the clients I work with remember me and recommend me to new ones. Most people come through that network, and they tend to be pretty cool folks.







Artyom: How did you get into performance in the first place, before you started your own consulting firm?







Ivan: It actually started when we spent half a year at EPAM migrating an ancient project to webpack. It was an old project with a pile of legacy code and its own front-end framework, half of which lived in the Java stack. And since we spent half a year making webpack reasonably fast, I built up real webpack experience. By that point I could write a webpack config of medium complexity, 20-40 lines, literally from memory, without googling anything, without peeking at docs, and even without IntelliSense.







And I realized that experience could be useful to someone else, so I decided to try offering consulting in the webpack space. I made a landing page for myself, posted it in a few places, a couple of people reached out, and I worked with one of them. That's how it all began.







Later I gradually drifted from webpack-related performance into performance consulting in general.







Artyom: Did you manage to improve build performance on that project?







Ivan: That project didn't go perfectly, not the way I would have liked it to end up. The client came along at a time when I really needed money and didn't yet know how to negotiate, look after myself, and protect my interests in those negotiations. I proposed a small fixed price for an unbounded amount of work, we worked together, I helped them, made a decision that seemed to work, and fixed the performance issue.







Then it turned out the solution had bugs: it was overcomplicated, and strange bugs kept popping up in rare edge cases. We started fixing them; I fixed one bug, a second, a third, and this dragged on for a month. Eventually the bugs stopped appearing, but when the client asked me for something else, my internal budget for the project was completely spent, so I just said: "Sorry, I'm fully booked and can't help."







In the end, as far as I found out, a couple of months later they replaced my solution with a simpler one that worked in 100% of cases.







Honestly, I don't know myself... It seems I came and helped, and the solution they eventually adopted was born in our earlier conversations, but I don't know how much what I did actually helped them, or how satisfied they were with it all. In short, it wasn't as perfect as I would have liked.







Artyom: And speaking of today: do you keep track of past clients, how they're doing, whether it all helped?







Ivan: Yes, definitely. I've gradually developed a few approaches. First, at the end of an engagement I try to get public feedback from each client, something that can be posted on my site or elsewhere.







Second, I make a point of asking clients, during and at the end of the work: "How satisfied are you with the current process, the current workflow?" with the answer options "more than satisfied", "satisfied", "almost satisfied", and "not really satisfied".







I like this question because it produces specific answers, unlike some silly 1-to-5 scale that everyone interprets their own way. So far no client has answered "almost satisfied" or "not really satisfied"; it's been "satisfied", "happy", and the like.







Artyom: Do I understand correctly that your performance expertise is mainly aimed at the client side? Or do your consultations cover the whole web stack?







Ivan: Yes, my expertise is mainly on the client side; I've worked much less on performance optimization in the back-end or in databases. But in practice, if a web application is slow (there's even research on this), in 80-90% of cases it's slow precisely because of the front-end.







So when a client comes in with something slow, in most cases my expertise is exactly what's needed.







About the most popular issues



Artyom: What about the edge cases? When the problem isn't parsing and executing JavaScript but transport, when the first interaction depends on the back-end as much as on the client. Say gzip is misconfigured and responses take too long. What do you do in those cases? And if your analysis mostly focuses on the front-end, how do you find them?







Ivan: Well, that usually shows up as the server response time getting too long. If I notice it during an audit, I open DevTools, look at the Network panel, and see the server taking 800 ms, which is way too long. Then I go try to understand how their server works and what it does on each response. If it's JavaScript or PHP, I can most likely sort it out and fix everything myself; if it's some language I haven't worked with, I might need their help.







Dmitry: Can you name the top five performance problems you encounter most often in practice?







Ivan: I'll go in the order I remember them. One common problem, not in complex web applications but on simple sites, is when a developer or client puts a lot of scripts inside the head tag without the async or defer attributes. That's bad because the browser can't start rendering the page until it has downloaded and executed those scripts.







And if the server is slow or a script is large and takes a long time to execute, the visitor won't see the page until execution finishes.
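For illustration, a minimal markup sketch of the difference (the file name is made up):

```html
<head>
  <!-- Render-blocking: parsing stops until this script downloads and runs -->
  <script src="/bundle.js"></script>

  <!-- Non-blocking alternatives: -->
  <!-- defer: download in parallel, execute after HTML parsing, in document order -->
  <script src="/bundle.js" defer></script>
  <!-- async: download in parallel, execute as soon as it arrives -->
  <script src="/bundle.js" async></script>
</head>
```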







Another common problem is third-party scripts. I'm currently working with a client I'm helping to improve their Lighthouse score. Initially their score was about 35-40. I came in and did all sorts of things: deleted unnecessary JavaScript, removed render-blocking resources, optimized what I could... After all of that, the score only climbed to about 55. Then it turned out that if you take this optimized page and comment out the single React component that loads all the analytics, the Lighthouse score jumps to 88-90+ points. That's how much JavaScript was being loaded just to collect metrics.







In complex web applications, a frequent issue is when developers install some large library and it ends up in the bundle entirely, even though only part of it is used. Often it's Moment.js, which ships 165 KB of minified localization files that almost nobody needs, or lodash, which has a huge number of methods of which people typically use only 10-20.
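For the Moment.js case specifically, a widely used fix is webpack's IgnorePlugin, which keeps the locale files out of the bundle. A typical config fragment (using webpack 5's option names) looks roughly like this:

```javascript
// webpack.config.js (fragment): keep Moment's locale files out of the bundle.
// App code then explicitly imports any locale it actually needs,
// e.g. require('moment/locale/pt-br').
const webpack = require('webpack');

module.exports = {
  plugins: [
    new webpack.IgnorePlugin({
      // Matches Moment's internal dynamic `require('./locale/' + name)`
      resourceRegExp: /^\.\/locale$/,
      contextRegExp: /moment$/,
    }),
  ],
};
```

The lodash case is usually solved the same way conceptually: import single methods (e.g. `lodash/debounce`) instead of the whole library, so the bundler can drop the rest.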







Rendering performance used to be a frequent problem too: developers would attach an event handler, for example to the scroll event, it would take 5-10 ms, and every time you tried to scroll, the whole page lagged. That happens less now, because Chrome and other browsers added passive event listeners (https://stackoverflow.com/questions/37721782/what-are-passive-event-listeners).
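The passive pattern looks like this; a sketch for a browser environment (the class name is made up):

```javascript
// Marking the listener as passive tells the browser the handler will never
// call preventDefault(), so scrolling can proceed without waiting for it.
function onScroll() {
  // Keep the handler cheap; batch DOM work into the next frame.
  requestAnimationFrame(() => {
    document.body.classList.toggle('scrolled', window.scrollY > 0);
  });
}

// Guard so the snippet is inert outside a browser.
if (typeof window !== 'undefined') {
  window.addEventListener('scroll', onScroll, { passive: true });
}
```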







About how to measure performance



Dmitry: While discussing those cases, Lighthouse came up. If I remember right, all three of us were at BeerJS Summit, where Alexey Kalmakov gave the coolest talk. There's no recording, but I saw a similar article on Habr, and things like that somewhat undermine trust in Lighthouse: if you treat it purely as the tool by which a performance engineer's work is judged, you can game it with tricks.







Do you have tools, maybe even self-written ones, that let you clearly assess whether you succeeded? Say you sign a contract and they require a 2x performance improvement. What set of tools would you use for that?







Ivan: Well, first of all, if we sign a contract and agree on 2x, we'll also agree on a measurement tool. But in general, besides Lighthouse, I really like WebPageTest. It's a very cool, advanced web performance tool that lets you test your application from a real location of your choice, say Brazil or Australia, on a real and, for example, very weak device such as a Moto G.







That's cooler than Lighthouse, because Lighthouse only emulates all of this, and it's only after a test on a real device that you learn the site takes 15 seconds to render. The second benefit is that it gives a bunch of super-detailed metrics and all kinds of charts, such as the download waterfall. I use it regularly to check things.







Dmitry: Have you written any tools of your own, for example on top of the Chrome DevTools Protocol? What's missing from the existing tools?







Ivan: I wrote my own tool on top of Lighthouse. For one client I needed a setup that would let me measure a page's performance with Lighthouse in a reasonably stable environment, compute the Lighthouse score, and copy it into a spreadsheet. The problem is that Lighthouse in PageSpeed Insights isn't very stable: measure the same page three times and you can get three different results. Besides, I needed to measure not pages already published on the web, but local ones. The only option then is to run Lighthouse locally, which also strongly affects the score, because the score starts to depend on whatever else is running on your laptop. Launch Docker, for example, and the score drops by half.







So I needed to measure Lighthouse predictably. I wrote a script that ran Lighthouse three times, took the median of the three scores, and did that for many pages with many setups. I rented a DigitalOcean server and ran everything there to isolate it as much as possible from external factors.
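This is not Ivan's actual script, but a sketch of that kind of setup, assuming the `lighthouse` and `chrome-launcher` npm packages (their standard Node APIs) are installed:

```javascript
// Run Lighthouse several times against one URL and take the median
// performance score, to smooth out run-to-run variance.
const median = (values) => {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
};

async function medianScore(url, runs = 3) {
  const lighthouse = require('lighthouse');
  const chromeLauncher = require('chrome-launcher');
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const scores = [];
  for (let i = 0; i < runs; i++) {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
    });
    scores.push(result.lhr.categories.performance.score * 100);
  }
  await chrome.kill();
  return median(scores);
}

// Usage (not run here): medianScore('http://localhost:3000', 3).then(console.log);
```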







Artyom: You mentioned an interesting case with varying Lighthouse measurements. There was a recent article, "An introduction to the 99th percentile for programmers", which concluded that modern benchmarking tools, and micro-benchmarking alone in general, don't prove anything.







With modern tools it can easily turn out that I write a benchmark, you write one, Dima writes one, and they all say completely different things. Without reasonably deep knowledge of theory and statistics, benchmarking doesn't look very trustworthy these days. You mentioned Lighthouse, but do you seek additional confirmation for the results you get from a benchmark?



Dmitry: Maybe combine Lighthouse with something? You mentioned WebPageTest. Take both, maybe even observations from Chrome DevTools, and somehow mix them together?







Ivan: Ideally, if the project has anything that lets you track RUM (real user metrics), that is, how the site actually loads for real users, then rolling out a change to real users and seeing how it worked for them is the perfect case.







But in general, yes: when I use a tool, I really do run more than one test and take the median. The article raises a real problem: there's a 99th percentile of users for whom everything is very bad, and we won't know about them if we only use Lighthouse. But that doesn't make Lighthouse useless; it still works and shows the average temperature across the hospital. If we've improved performance overall, Lighthouse will show it.







Dmitry: You touched on real user metrics. I used to work at Odnoklassniki, and we struggled with how to track this, because it's not obvious how, and the volumes are huge. We wrote our own collection from users, and it was pure chaos. For an average project, what should you use to measure on the user side? Just window.performance, or is something else recommended? Or some more cunning tactics on real user accounts?







Ivan: First of all, there are ready-made libraries and services that let you set up RUM: add one line to the page and it starts tracking. Second, modern browsers have gained PerformanceObserver, an API that measures all kinds of things and exposes the metrics the browser usually keeps internal, including first contentful paint, first meaningful paint, and so on. Getting such a metric takes just a few lines of code; you don't need to write anything overcomplicated. I subscribe to the events, receive the metric, and send it off.







Dmitry: And what's the most important thing to pay attention to first: First Paint, Time to Interactive, Time to First Byte, something else?







Ivan: I have a talk about exactly that (https://www.youtube.com/watch?v=-H1l9XBXH6Q), where I say that the most important things to look at are first meaningful paint and time to interactive. They matter most because they show exactly what the user came for. The user came either to read or see something, which is first meaningful paint, or to work with the application, which is time to interactive. If you're building a landing page or a news site, optimize first meaningful paint; if you have a complex application, optimize both first meaningful paint and time to interactive.







Artyom: We've covered the most common cases in your practice. Which cases were the rarest or most interesting, where you had to dig deep into the guts of something?







Ivan: I think I have a rather interesting case right now. I'm currently working with Framer. They make a fashionable design tool, an analog of Figma and Sketch, and I'm helping them improve runtime performance, which is a super interesting area for me. They use the browser in very unusual ways. Browsers weren't originally designed for applications this complex, so both Figma and Framer reimplement many browser pieces themselves. They have a lot of homegrown solutions you won't find on other projects, and they're super interesting. I really enjoy working on all of this; it's genuinely new and very cool.







How to optimize performance



Dmitry: We've talked about the main browser nuances. Before moving on to frameworks, I'd like to hear about optimizing performance through architecture or data structures. Have you ever had to change something that fundamental in an application, say, add a prefix tree for search or something else unusual for the front-end? Does that happen?







Ivan: I've hardly ever worked at the level of data structures or algorithmic complexity, because an application has to clear a lot of other hurdles before that becomes the bottleneck rather than something else. As for the big structural choices: when creating a new project, or if the project has just begun and is still quite small, I highly recommend building it on a framework like Next.js or Gatsby.







Both are React frameworks built so that you simply write your application following a couple of the framework's conventions, and it automatically ends up fast. They're very popular, they do their job excellently, I use them in my own production projects, and I highly recommend them to everyone.







Artyom: We recently had an incident where V8 deoptimized React because of how numbers are represented inside V8. Did you run into that issue, or any investigation into why an application slows down because of it?







Ivan: I didn't run into it, and I haven't read that article about the deoptimization. Was it one specific operation, or did React slow down as a whole?







Artyom: Overall, yes: React keeps a timestamp inside each FiberNode. It was initialized to 0 and the object was sealed with preventExtensions, so one shape with a small integer was allocated to optimize operations on numbers. When the timestamp received a real value that climbed beyond 128, the field had to change from a small integer to a mutable heap number, and since preventExtensions had been called, a completely new shape was built for every fiber node. And there are tens or hundreds of thousands of them.







Ivan: I haven't noticed it, and I would hardly have noticed, because when I'm debugging I don't keep in my head that a given React operation should take 10 ms rather than the 20 ms it's taking. I just debug: I watch the performance trace and pick out a bottleneck where something is slow. If the slowdown is distributed, if V8's lags are spread evenly across the whole application, then unless I go into very deep debugging, I won't notice it.







If it shows up in one deoptimized V8 operation that takes a long time, I'll notice and go into deep debugging.







Artyom: Have you ever had to file an issue against React itself, or other frameworks, to get to the bottom of something?







Ivan: I believe I've reported a couple of bugs, but I don't remember the details... I'm not sure it's quite the same thing, but I recently came across an interesting case where CSS variables turned out to be slower than applying the change through a React prop. That feels strange, because we're used to saying CSS is super fast, and suddenly it works much slower than React.







I'm planning to write an article about this. Broadly, it works this way because browsers lack an optimization: if you change a CSS variable on an element with many children, the browser, at least Chrome, doesn't remember which children use that variable, so it goes and recalculates styles for all of them. With many styles and many nodes that takes a lot of time, whereas if you apply the value through React, that full style recalculation may not happen and everything stays fast.
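To make the contrast concrete, here's a sketch of the two approaches (class names are made up; the first variant triggers a style recalculation across the whole subtree in Chrome, the second touches only the nodes that actually use the value):

```javascript
// Variant 1: one property write, but the browser must recompute styles for
// every descendant of .app that *might* read var(--accent).
function setAccentViaCssVariable(color) {
  document.querySelector('.app').style.setProperty('--accent', color);
}

// Variant 2: roughly what passing the value down as a React prop amounts to:
// write inline styles only on the elements that actually use the color.
function setAccentDirectly(color) {
  for (const el of document.querySelectorAll('.uses-accent')) {
    el.style.color = color;
  }
}
```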







I'm 80% sure this is how it works, based on what I understood from the V8 code, but I could be wrong. It's something that could be reported and fixed in browsers, but it's not a micro-bug; it might be a lot of work. I haven't reported it yet, and I don't know how long it would take to fix if they do fix it.







Artyom: Have you ever had to look at the generated bytecode? Say, with those same deoptimizations: do you look at the V8 bytecode to see whether extra operations were inserted?







Ivan: No, I've probably never looked at the bytecode, but I've gotten pretty good at reading minified JavaScript. (laughs)







Dmitry: Following up on your previous answer: do you talk to browser developers to clarify things like this? And if so, do they answer, and how?







Ivan: My GDE (Google Developer Expert) status helps here: thanks to it I was added to a Google repository and Slack channel where I can directly ping Google teams, including Chrome developers. I used it once for a complicated client case where I wanted to confirm some behavior with them. They answered within a couple of days; all fine. But I haven't used it much beyond that.







Dmitry: Cool! Now it’s clear why this status is needed.







Ivan: I'm also in London right now on tickets Google bought for me, so that's another thing GDE is good for (laughs).







Dmitry: The second plus :)







Ivan: For that, though, you have to fly out and speak at a conference that doesn't cover your expenses.







About choosing a framework



Dmitry: We've already talked a little about bundling, and you mentioned a very cool case with Moment.js. I remember Andrey Sitnik telling me you can replace it with Day.js or something else you find on Bundlephobia. But sometimes you use, say, Angular, and a lot of things come along with it at once.







I use it because, judging by their approach and how they describe it being built on types and static analysis, they seem to have a certain margin of safety for optimizing it later. What do you think of such cases? Can you evaluate a framework from the outside and choose it despite some performance issues, on the promise that they'll be fixed in the future? If so, what trust criteria would you use?







Ivan: A difficult question... I don't know. Still, if I were choosing a framework, I'd choose primarily based on popularity, because as a team lead it would be important to me that the project has access to many developers and many third-party libraries, community packages, and components.







I don't think performance would be my first criterion when choosing a framework.







On the other hand, if a framework is popular enough, performance optimizations will appear for it anyway, starting with the basic webpack tricks like code splitting and ending with lighter alternatives. React, for example, has several lightweight counterparts with similar APIs, like Preact and Inferno.
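The basic code-splitting trick is usually a dynamic import(): bundlers like webpack emit each import() target as a separate chunk that loads on demand. A sketch (the './chart.js' module is hypothetical):

```javascript
// The heavy charting code stays out of the main bundle and is fetched
// only when a user first opens a chart.
async function showChart(container, data) {
  const { renderChart } = await import('./chart.js'); // hypothetical module
  renderChart(container, data);
}
```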







Dmitry: And what could stop you from choosing a framework? Say it's popular and has a large community, like Vue.js with its huge number of people whose interactions you can observe. What would be a red flag for you?







Ivan: I definitely wouldn't choose a framework that's in decline. Honestly, I don't know how Angular is doing; I haven't worked with Angular. But looking at Angular's download statistics, I'm not sure it isn't declining. Let's check right now... No, it's growing, cool. Something like Ext JS, though, is definitely headed for decline.







Beyond that, I don't really know what to answer. I don't have an abstract answer; each time I'd just look at the real alternatives available and the real goals of the project. It all depends on the specific case.







About WebAssembly



Artyom: Have you ever hit the limits of JavaScript itself, say with computation-heavy problems? Have you recommended WebAssembly, or prototyped it yourself, to remove computational bottlenecks?







Ivan: I would really love to work on a case like that, WebAssembly and optimizing something low-level and algorithmic, but such cases haven't come up yet.







Artyom: As I understand it, the company you're currently advising may be using WASM. Drawing a parallel with Figma: it seems to run on Blazor, on C# WebAssembly, and that's precisely why their graphics and computation work is so fast. Maybe your current company has such cases, or haven't you gotten to them yet?







Ivan: I'm not sure I'm at liberty to comment on WebAssembly at that particular company. In general I'd like to work with WebAssembly, but it hasn't happened yet; I've had no practical cases.







Dmitry: How do you view it, and what do you expect from it?







Ivan: It's a tool that helps solve specific problems. If something in JavaScript takes a lot of time because of high algorithmic complexity, you can rewrite it in WebAssembly and speed it up, say, three or ten times.







Or if there's a third-party library you want to use that's written not in JavaScript but in C++ or Rust, you can compile it to WebAssembly and use it. It's not a solution to every problem; it's a narrow, well-defined tool that solves specific tasks.
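As a self-contained taste of that workflow, here is a tiny precompiled WebAssembly module: the bytes below encode a single exported add(a, b) function. In practice you would compile such bytes from C, C++, or Rust rather than write them by hand.

```javascript
// Minimal WebAssembly module exporting add(a, b) for two i32 values.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
const add = instance.exports.add;
console.log(add(2, 3)); // 5
```

In a browser you'd more commonly fetch a .wasm file and use WebAssembly.instantiateStreaming(), but the calling convention on the JavaScript side is the same.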







About platforms



Dmitry: Let's move on to platforms. We touched on scroll, and as I recall that's a particular problem on mobile. Have you run into anything truly wild on mobile? For example in Chrome on iOS, or cases on ChromeOS or other semi-exotic platforms? Or, I don't know, Yandex.Browser?







Ivan: A client once came to me needing performance optimization for IE11. It mattered to them because they ran a gambling platform, and it turned out a large share of their customers used IE11. But it didn't work out between us, we never worked together, so I've never optimized for IE11.







Beyond that, not much. Yandex.Browser and Chrome on iOS aren't really exotic: Yandex.Browser is the same Chromium, and Chrome on iOS is the same WebKit, since on iOS every browser uses the same engine.







Dmitry: Well, there are small differences ... It happens (sighs)







About transport issues and migration to HTTP / 3



Artyom: Let's move from what depends on the client to what mostly doesn't: transport. Everything seems to be gradually migrating to HTTP/2, while HTTP/3, the QUIC-based protocol, has already been implemented. The question is probably twofold: does HTTP/2 multiplexing and push really help much, by the metrics we mentioned earlier?







And have there been cases where a team switched to HTTP/3 and all the metrics improved by an order of magnitude?







Ivan: Is HTTP / 3 already implemented and public?







Artyom: Yes, it's definitely implemented in Chrome, and they added it to cURL, as far as I remember.







Ivan: Oh, cool. The last time I checked, I remember that the protocol HTTP/3 is based on was definitely implemented and supported. That is, Google first made QUIC, then it became the basis of HTTP/3, and I wasn't sure whether HTTP/3 itself was finished.







Judging by what I've seen, HTTP/2 definitely helps and is definitely useful on the front-end: if you serve more than 2-3 files from your server, which is almost always the case, things load faster on average over HTTP/2. I haven't tested HTTP/3 in practice yet, but I'm curious and waiting for a case where I can get my hands on it.







Where to find useful performance information



Artyom: In general, where do you get your information, and in what form? Articles, browser-specific blogs, Twitter accounts of individual browser developers?







Ivan: First, I get a lot from Twitter, where articles regularly pop up (for example: https://twitter.com/slightlylate, https://twitter.com/zachleat, https://twitter.com/csswizardry, https://twitter.com/igrigorik, https://twitter.com/philwalton). Second, Google holds a monthly call for GDEs where they talk about new changes in the web platform and performance optimization. That's a pretty interesting source of information.







Third, the folks at Calibre (a performance-monitoring tool) run a cool monthly newsletter, Perf.email, which collects links to new performance articles.







Dmitry: We've found the third GDE perk! On the topic of finding information: you have an excellent Telegram channel where you share all this. Would reading just your channel and a couple of small sources be enough for me? And how did the idea to create and grow it come about? As I understand it, that's not exactly easy.







Ivan: I don't know; I don't have an SLA, so I won't claim to cover 95% of the articles. I try to post what I read on performance myself and what interests me. I also really like the Juliarderity channel, run by Seryozha Rubanov and Roman Dvornov; I love it very much, they're very cool. They write about fresh browser updates, standards, and all that.







Dmitry: I simply call it "the best channel on Telegram"; that's easier than the real name.






