Optimization, or how not to shoot yourself in the foot

Good day. Today I want to talk with you about optimization: what it is, why it is needed, and, most importantly, how to make sure it does not come back to bite you later.







First of all, let's figure out what optimization is in general and what it means in JS. Optimization is the improvement of something according to some quantitative characteristic. For JS I single out four such characteristics:







The amount of code - it is generally accepted that the fewer lines of code you write, the better and more efficient the result. My opinion differs fundamentally: a single line of code can create a memory leak or an infinite loop that simply kills the browser (see the sketch after this list of characteristics).







Speed (performance) - essentially computational complexity, that is, the number of operations the engine has to perform to execute your instructions.







Build speed - it is no secret that hardly any project these days gets by without build tools like Webpack or Gulp, so this characteristic reflects how correctly the build is configured. Believe me, when the build server is only a little smarter than a coffee grinder, this becomes important.







Code reusability - this characteristic shows how well the architecture supports reusing functions, components, and modules.
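
To illustrate the point about code volume, here is a rough sketch of how a single short line can hurt a page. Both lines are intentionally broken examples, not patterns to copy.

```javascript
// 1. An "infinite" loop: the condition never changes, so the main thread is blocked forever.
while (document.title !== 'done') { /* nothing in the loop ever sets the title to 'done' */ }

// 2. A quiet memory leak: a new listener closure is registered every 100 ms and never removed.
setInterval(() => document.body.addEventListener('click', () => console.log('leak')), 100);
```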

Let's consider each category in more detail and analyze what it includes and what it depends on.







Code Volume:









Performance:









Build speed:









Code Reuse:









As I said in previous articles, in order to change something you need to establish a starting point and figure out how bad things really are. Where do you start such a large undertaking? Start with the simplest part: speeding up the build and cutting the excess out of the project. Why start with this? Because the two depend on each other: reducing the amount of code speeds up the build and, consequently, increases your productivity.







Optimizing build time inevitably introduces us to the concept of a "cold" build: the project is built from scratch, all dependencies are fetched, and the code is completely recompiled. Do not confuse it with a rebuild, which recompiles only the client code without pulling in external dependencies and other trimmings.







The following will help increase build speed:









Speeding up both the rebuild and the "cold" build requires cutting out unnecessary comments and dead pieces of code. But what if the project is huge and inspecting it by hand is not feasible? In such cases, code analyzers come to the rescue.







Personally, I periodically use SonarQube. It is not the best tool, but it is flexible and can be taught the peculiarities of the project, if there are any. From time to time it flags things that leave you speechless, so, like any tool, you have to know how to use it and stay skeptical about its remarks. Despite all its shortcomings, it does a great job of finding dead code, leftover comments, copy-paste, and small things like the lack of strict comparison (a small sketch below).
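
As a quick illustration of that last point, here is the kind of loose-versus-strict comparison an analyzer will flag:

```javascript
// Loose equality coerces types and hides bugs that strict equality would surface.
console.log(0 == '');            // true  - '' is coerced to 0
console.log(0 === '');           // false - different types, no coercion
console.log(null == undefined);  // true
console.log(null === undefined); // false

// The usual advice: prefer === / !== unless type coercion is genuinely intended.
const input = '5';
if (input === 5) { /* never runs: a string is not a number */ }
```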







The fundamental difference between SonarQube and ESLint / TSLint / Prettier and the like is that it assesses code quality: it isolates duplication, measures computational complexity, and gives recommendations on what needs to change. The others simply check the code for errors, syntax, and formatting.







In practice I have also come across Codacy, a good service with free and paid plans. It is useful when you need to check something on the side without deploying a full "combine harvester" like SonarQube. It has an intuitive interface, detailed explanations of what is wrong with the code, and much more.







In this article I will not touch on configuring the build, chunks, and the rest, because it all depends on the needs of the project and the bundler in use. Perhaps I will cover that in other articles.







These manipulations have sped up the build; profit. But what next? Since analyzers can find duplicated code, it makes sense to extract it into separate modules or components, thereby increasing code reuse. A minimal sketch of such an extraction follows.
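
Here is a rough sketch of what extracting a duplicated snippet into a module can look like; the file names and the formatPrice helper are invented for the example.

```javascript
// utils/formatPrice.js - the snippet that used to be copy-pasted, now a single reusable module
export function formatPrice(value, currency = 'USD') {
  return new Intl.NumberFormat('en-US', { style: 'currency', currency }).format(value);
}

// cart.js, invoice.js and friends previously held their own copies; now they just import it.
import { formatPrice } from './utils/formatPrice.js';
console.log(formatPrice(19.99)); // "$19.99"
```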







That leaves the one area we have not yet touched: the speed of the code itself. The process of bringing it to its senses goes by the word everyone loves to hate: refactoring. Let's take a closer look at what is worth doing when refactoring and what is not.







The life rule "if it works, don't touch it" should not guide you in this process. The first rule of IT is to make a backup; you will thank yourself later. On the front end, before making any changes, write tests so you do not lose functionality later (a minimal sketch below). Then ask yourself: how do you measure load times and find memory leaks?
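
A minimal sketch of such a safety-net test, assuming Jest and an invented sum module as the code about to be refactored:

```javascript
// sum.test.js - pin down the current behaviour before touching the implementation
const { sum } = require('./sum'); // hypothetical module that is about to be refactored

test('sum adds two numbers', () => {
  expect(sum(2, 3)).toBe(5);
});

test('sum keeps its current behaviour for bad input', () => {
  // Whatever the code does today, capture it so refactoring cannot change it silently.
  expect(sum(2, 'x')).toBeNaN();
});
```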







Measuring those is what browser DevTools is for. It will not only show memory leaks, report page load time and animation frame drops, but, if you are lucky, even run an audit for you (though do not take its word as gospel). DevTools also has a handy feature for throttling the network, which lets you estimate page load speed on a poor connection.







We have identified the problems; now let's solve them!







To begin, let's reduce load time using the browser's caching mechanisms. The browser can cache almost everything and subsequently serve the user data from the cache. And nobody has taken localStorage and sessionStorage away from you: they let you store data that speeds up an SPA on subsequent loads and cuts down on unnecessary server requests. A minimal sketch follows.
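
Here is a hedged sketch of caching an API response in localStorage; the /api/products endpoint and the one-hour TTL are assumptions for the example.

```javascript
const CACHE_KEY = 'products';
const CACHE_TTL = 60 * 60 * 1000; // one hour

async function loadProducts() {
  // Serve from the cache while it is still fresh.
  const cached = JSON.parse(localStorage.getItem(CACHE_KEY) || 'null');
  if (cached && Date.now() - cached.savedAt < CACHE_TTL) {
    return cached.data;
  }

  // Otherwise hit the server and refresh the cache for next time.
  const response = await fetch('/api/products');
  const data = await response.json();
  localStorage.setItem(CACHE_KEY, JSON.stringify({ savedAt: Date.now(), data }));
  return data;
}
```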







It is commonly held that code should be optimized for the environment it will run in, but as practice shows, this eats up a lot of time and effort without bringing a tangible gain. I suggest treating it only as a recommendation.

Naturally, it is advisable to eliminate all memory leaks. I will not dwell on this; I think everyone knows how to fix them, and if not, just google it.







Another of our helpers is the web worker. Web workers are browser-managed threads that can execute JS code without blocking the event loop. They can perform computationally heavy, long-running tasks without blocking the UI thread; in effect, the calculations run in parallel, giving us real multithreading. There are three types of web workers:







  1. Dedicated Workers - instances of dedicated workers are created by the main script, and only that script can exchange data with them.
  2. Shared Workers - a shared worker can be accessed by any context with the same origin as the worker (for example, different browser tabs, iframes, and other workers).
  3. Service Workers - event-driven workers registered by their origin and path. They can control the web page they are associated with by intercepting and modifying navigation and resource requests, and by caching data with very fine-grained control. All this gives us excellent tools for controlling the application's behaviour in particular situations (for example, when the network is unavailable).


Details on how to work with them are easy to find on the Internet; still, here is a minimal sketch of a dedicated worker.
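
A rough sketch of offloading a heavy computation to a dedicated worker; worker.js and the summing loop are assumptions for the example.

```javascript
// worker.js - runs in its own thread, so the heavy loop does not block the UI
self.onmessage = (event) => {
  let sum = 0;
  for (let i = 0; i < event.data; i++) sum += i;
  self.postMessage(sum);
};

// main.js - hand the work to the worker and keep the page responsive
const worker = new Worker('worker.js');
worker.onmessage = (event) => console.log('result:', event.data);
worker.postMessage(1e9);
```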







We have more or less covered the approaches and third-party tools; now I propose to talk about the code itself.







First of all, try to get rid of repeated direct queries to the DOM tree, since these are expensive operations. Imagine that your code constantly manipulates a particular element: instead of querying the DOM every time to find it, hold on to a reference, that is, apply the caching pattern in your code. A sketch follows.
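
A minimal sketch of that caching pattern; the #counter element is an assumption for the example.

```javascript
// Bad: the DOM is queried on every tick.
setInterval(() => {
  document.querySelector('#counter').textContent = Date.now();
}, 100);

// Better: look the element up once and keep ("cache") the reference.
const counter = document.querySelector('#counter');
setInterval(() => {
  counter.textContent = Date.now();
}, 100);
```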







The second step is to get rid of global variables. ES6 gave us the wonderful invention of block-scoped variables; in simple terms, switch your declarations from var to let and const, as in the sketch below.
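
A small sketch of the difference in scoping:

```javascript
// With var the counter leaks out of the loop into the enclosing scope.
for (var i = 0; i < 3; i++) { /* ... */ }
console.log(i); // 3 - still visible after the loop

// With let the variable exists only inside the block, and const forbids reassignment.
for (let j = 0; j < 3; j++) { /* ... */ }
// console.log(j); // ReferenceError: j is not defined
const LIMIT = 3;   // LIMIT = 4 would throw a TypeError
```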







And finally, the tastiest part. Unfortunately, not everyone has enough experience to appreciate the nuance here. I am wary of recursive functions. Yes, they reduce the amount of code you write, but there is a catch: such functions often end up without an exit condition, simply because people forget to add one. As the saying goes, if you smash a finger with a hammer, that is the finger owner's problem, not the hammer's; or, as in the old joke about cats, recursive functions are not bad, you just have to know how to cook them. A sketch of a properly "cooked" one follows.
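
A minimal sketch of what an explicit exit condition looks like:

```javascript
// Without a base case the recursion never stops and overflows the call stack:
// function countdown(n) { console.log(n); countdown(n - 1); }

// With an explicit exit condition the same idea is safe.
function countdown(n) {
  if (n < 0) return;   // base case: stop the recursion
  console.log(n);
  countdown(n - 1);
}

countdown(3); // 3 2 1 0
```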







For all the power of modern front-end applications, do not forget the basics. A clear example of wastefulness is adding new elements to the beginning of an array. Those who know, know; for everyone else, here is what happens. Array elements are indexed, and when we add a new element to the beginning of an array the sequence of actions is as follows (see the comparison sketch after the list):







  1. Determine the array length.
  2. Number each existing element.
  3. Shift each element by one position.
  4. Insert the new element into the array.
  5. Re-index the array elements.
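
A rough way to feel the difference yourself is to compare unshift() with push(); exact timings will vary by engine and machine.

```javascript
const SIZE = 100000;

// unshift() has to shift and re-index every existing element on each call.
console.time('unshift');
const a = [];
for (let i = 0; i < SIZE; i++) a.unshift(i); // O(n) per insertion
console.timeEnd('unshift');

// push() just appends to the end.
console.time('push');
const b = [];
for (let i = 0; i < SIZE; i++) b.push(i);    // O(1) amortized per insertion
console.timeEnd('push');
// If the final order matters, push everything and reverse once at the end - still far cheaper.
```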


Summary:







It is time to wrap up. For those who like the cheat-sheet format, here is a list of steps that will help you figure out what stage of optimization you are at and what to do next:







  1. Determine how good or bad things are; collect the metrics.
  2. Cut out everything unnecessary: unused dependencies, dead code, redundant comments.
  3. Tune and speed up the build; configure separate profiles for different environments.
  4. Analyze the code and decide which parts to optimize and rewrite.
  5. Write tests to avoid losing functionality.
  6. Start refactoring: get rid of global variables, memory leaks, duplicated code, and other rubbish, and do not forget about caching.
  7. Simplify heavy computations and move everything you can to a web worker.


It is all less complicated than it seems at first glance. Your sequence will probably differ from mine, if only because you have your own head on your shoulders. You will add items or, conversely, drop some, but the core of the list will be similar. I deliberately broke the work down this way so that it can run in parallel with your main tasks; the customer is rarely ready to pay for rework, agreed?







And finally.







I believe in you and that you will succeed. Think me naive? You might be surprised, but the fact that you found this article and read it to the end means (good news) that you have a brain and you are working on developing it. I wish you success in the difficult undertaking of optimizing the front end!







