Shaders are not magic. Writing shaders in Unity. Introduction

Hello! My name is Grigory Dyadichenko, and I am the founder and CTO of Foxsys Studios. Today I want to talk about shaders. The ability to write shaders (and to work with rendering in general) is very important when developing for mobile platforms or AR/VR, if you want to achieve good-looking graphics. Many developers believe that shaders are magic: that there is little good information on them, and that to write them you need, at the very least, a PhD. Yes, shader development is very different in its principles from client-side development. But the main thing is to understand the basic principles of shaders and to know their essence, so that nothing about them feels magical and finding information on the topic becomes a simple task. This series of articles is designed for beginners, so if you are already good at programming shaders, this series will not be of interest to you. If you want to understand this topic, welcome under the cut!







This is an introductory article in which I describe the general principles of writing shaders. If the topic proves interesting, we will then look in more detail, in separate articles, at vertex shaders, geometry shaders, fragment/pixel shaders, triplanar shaders, screen-space effects, and compute shaders (OpenCL, CUDA, etc.): in general, all the magic that can be done on the GPU. All of this will be covered in the context of Unity's standard (built-in) pipeline, since LWRP and HDRP still seem a little raw to me.



What is a shader?





Source: www.shadertoy.com/view/MsGSRd



In essence, a shader is a program that runs on the GPU, and its output depends on the type of shader. Vertex shaders output the processed parameters of mesh vertices, while pixel shaders are executed per pixel and output its color.



To understand how shaders work, we first need to explain what the graphics pipeline is. People often discuss this topic in rather complicated terms, but we will make it a little easier to understand, taking OpenGL as an example. In this regard, I really like this picture.







If we omit the parts related to lighting and so on, then from the point of view of writing, say, Unlit shaders in HLSL, the essence is as follows. We have a shader with the directives



#pragma vertex vert
#pragma fragment frag





where we specify that the vertex part of the shader will be written in the vert function, and the fragment part in the frag function.



The structures that we declare in the shader determine what data we take from the mesh (which is attached to our object via the MeshFilter and rendered by the MeshRenderer) and what data we pass on after processing in the vertex shader.



struct appdata
{
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
};

struct v2f
{
    float2 uv : TEXCOORD0;
    UNITY_FOG_COORDS(1)
    float4 vertex : SV_POSITION;
};





Next, the vertex shader does its work: it receives the appdata structure as input and produces a result in the form of a v2f structure, which then goes to the fragment shader, which in turn computes the color of the pixel. Since v2f data is written only at the vertices (of which there are far fewer than pixels), the data in the fragment stage is interpolated. You can picture it like this: vert is evaluated independently for each vertex, then the result is passed to the fragment stage, where frag is evaluated independently for each pixel. Since the calculations are carried out in parallel, neither stage has information about its neighbors (unless you pass it in some clever way).
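
To see how all these pieces fit together, here is the complete shader these snippets come from; it is essentially the code Unity generates when you create a new Unlit shader (only the comments are mine, and the shader name is arbitrary):

Shader "Unlit/SimpleTexture"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile_fog

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                UNITY_FOG_COORDS(1)
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            // Runs once per vertex: transform to clip space, pass the UVs on.
            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                UNITY_TRANSFER_FOG(o, o.vertex);
                return o;
            }

            // Runs once per pixel: the v2f fields arrive already interpolated.
            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                UNITY_APPLY_FOG(i.fogCoord, col);
                return col;
            }
            ENDCG
        }
    }
}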



All the nuances, along with many examples, are described in more detail in the Unity documentation: docs.unity3d.com/Manual/SL-Reference.html



Shader Programming Languages





Source: www.shadertoy.com/view/WsS3Dc



What else is important not to forget is that shaders are written in three programming languages that have nothing to do with Unity itself: Cg, GLSL, and HLSL. The most natural choice for writing shaders in Unity is HLSL, because that is what files with the .shader extension are written in. And while there is comparatively little information on shaders in the Unity context, there are tons of materials on HLSL, GLSL, and Cg individually. The shader documentation describes how to port code written in these languages to Unity, so it turns out that almost all general information about these programming languages remains valid. All three languages are very similar to C, but each has its own peculiarities.



Further along in studying shaders, once these languages no longer raise questions, you can look at what capabilities "UnityCG.cginc" and the other libraries written by Unity provide to simplify your work.
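
As a small illustration (a sketch, with an arbitrary shader name), here is a shader that paints a mesh with its world-space normals; the appdata_base input structure and the UnityObjectToClipPos and UnityObjectToWorldNormal helpers all come from "UnityCG.cginc":

Shader "Unlit/NormalVis"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f
            {
                float4 pos : SV_POSITION;
                float3 worldNormal : TEXCOORD0;
            };

            // appdata_base (vertex, normal, texcoord) is declared in
            // UnityCG.cginc, so we do not have to describe the input ourselves.
            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);             // object -> clip space
                o.worldNormal = UnityObjectToWorldNormal(v.normal); // normal -> world space
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Map the normal from [-1, 1] into the [0, 1] color range.
                return fixed4(i.worldNormal * 0.5 + 0.5, 1);
            }
            ENDCG
        }
    }
}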



Why is if bad in shaders?





Source: www.shadertoy.com/view/Md3cWr



It is important to understand how shaders are executed at the hardware level, and why they are so fast that they can perform millions of operations without breaking a sweat.



The main idea behind GPUs is maximum parallelism of calculations. Here we need to introduce the concept of a "wavefront". It is actually quite simple: a wavefront is a group of shader invocations that perform the same sequence of operations, the only difference between them being the input data. So from the GPU's point of view, the best case is when the same instructions are executed at the same time. The problem with branching is that a situation can arise where, within a single group, invocations must perform different operations. This in turn leads to the creation of a new wavefront, copying data into it, and so on, which is very expensive.



There are nuances and exceptions, but in order to write if safely, you must understand how it behaves on the target version of the graphics API, since, for example, OpenGL ES 2 and DX11 are very different in this regard.
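
As an illustration of one common workaround (a pattern, not a universal rule; the function names are made up for the example, and both assume a v2f with a float2 uv field inside a CGPROGRAM block like the ones above), a per-pixel branch can often be replaced with arithmetic that every invocation executes identically:

// With a branch: invocations within one wavefront may diverge.
fixed4 fragBranch (v2f i) : SV_Target
{
    if (i.uv.x > 0.5)
        return fixed4(1, 0, 0, 1); // red on the right half
    return fixed4(0, 0, 1, 1);     // blue on the left half
}

// Branchless: step() returns 0 or 1, and lerp() picks the color,
// so every invocation executes exactly the same instructions.
fixed4 fragFlat (v2f i) : SV_Target
{
    fixed t = step(0.5, i.uv.x); // 0 when uv.x < 0.5, otherwise 1
    return lerp(fixed4(0, 0, 1, 1), fixed4(1, 0, 0, 1), t);
}

Whether such a rewrite actually helps is exactly the kind of thing that depends on the target hardware and API.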



Why do I need to know this if there are node editors?







It’s important to understand that node editors are primarily a tool for technical artists: specialists who have expertise in mathematics but are closer to designers. Shaders like wireframe rendering (which requires an understanding of barycentric coordinates), or conversions between coordinate systems used for tricky projections, are much easier to write in code, as are many mathematical models of physical materials. At the same time, as a shader programmer you essentially build custom nodes and tools with which technical artists create the real magic; from this point of view, node editors have limited functionality. That is why it is important to be able to write shaders in languages like HLSL, to understand how rendering works, and so on.



Useful Resources for Learning





Source: www.shadertoy.com/view/4tlcWj



In terms of learning shader programming, a good exercise is to rewrite shaders from www.shadertoy.com or glslsandbox.com in Unity. In addition, there is the cool profile of a specialist from Unity, where you can find a lot of interesting things: github.com/keijiro
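
To give an idea of what such an exercise looks like, here is a sketch of Shadertoy's default "new shader" template ported to a Unity Unlit shader (the shader name is arbitrary, and the material is assumed to be placed on a quad); the main substitutions are iTime -> _Time.y, vec3 -> float3, and, in other shaders, fract -> frac and mix -> lerp:

Shader "Unlit/ShadertoyPort"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord.xy; // plays the role of fragCoord / iResolution.xy
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Shadertoy: vec3 col = 0.5 + 0.5*cos(iTime + uv.xyx + vec3(0,2,4));
                float3 col = 0.5 + 0.5 * cos(_Time.y + i.uv.xyx + float3(0, 2, 4));
                return fixed4(col, 1);
            }
            ENDCG
        }
    }
}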



Everything else is mathematics and an understanding of the physics behind the effects. Unless you are solving a specific physical-modeling problem, it is somewhat like mixing ingredients: a lot of interesting things can be done by combining noise, refraction, subsurface scattering, caustics, the Fresnel effect, reaction-diffusion, and other physical properties of objects. In general, shader programming is certainly not elementary, and there is plenty of depth to dig into.



If the topic of shaders is interesting to you, I will try to release a series of articles on it, with concrete examples and tutorials on creating various effects. Suggest in the comments what you would be interested in reading about and which topics to explore. Thanks for your attention!



All effects in this article are recordings of shader effects from Shadertoy.


