What are shaders in games? How do you install shaders in Minecraft? The answers are below.

With global computerization, a huge number of unfamiliar terms have entered our lives. Making sense of them all is not as easy as it seems at first glance: many have similar names, and many have broad functionality. It's time to find out what a shader is, where it came from, what it is for, and what kinds there are.

Optimizer

Most likely, you are an avid Minecraft player and that is why you came here to find out what this is. It should be noted right away that the concept of a "shader" is easily separated from this game and can "live" apart from it, just like mods. So there is no reason to tie the two concepts too tightly together.

Shaders come from programming, where they appeared as a tool for specialists. Calling this tool an "optimizer" would probably be an overstatement, but it really does improve the picture in games. Now that you have a rough idea of what this is, let's move on to a precise definition.

Interpretation

So what is a shader? It is a program that is executed by the processors of the video card. These programs are written in a special language, which can vary depending on the purpose, and are then translated into instructions for the processors of the graphics accelerator.

Application

It must be said right away that the application is largely determined by the purpose. The programs run on the video card's processors, which means they operate on the parameters of objects and images in three-dimensional graphics. They can perform a variety of tasks, including reflection, refraction, darkening, shift effects, and so on.

Premise

Such programs did not always exist. Before shaders, developers did much of this work by hand: the process of forming an image from objects was not automated. Before a game took shape, developers wrote the rendering themselves, composing algorithms for different tasks - instructions for applying textures, video effects, and so on.

Of course, some operations were built into the video cards themselves, and developers could use those algorithms. But they had no way to run their own algorithms on the video card: non-standard instructions had to be executed by the CPU, which is slower than the GPU.

Example

To understand the difference, it is worth looking at a couple of examples. In a game, rendering could be either hardware or software. We all remember the famous Quake 2: with hardware rendering the water was just a blue filter, while with software rendering a water splash effect appeared. It's the same story with CS 1.6: hardware rendering gave only a white flash, while software rendering added a pixelated screen.

Access

So it became clear that such problems needed to be solved. Graphics accelerators began to expand the set of algorithms popular among developers, but it soon became obvious that it was impossible to "stuff" everything into the hardware. It was necessary to give specialists access to the video card itself.

Long before games like Minecraft appeared with mods and shaders, developers were given the ability to work with GPU blocks in pipelines, each responsible for different instructions. This is how the programs called "shaders" came about. Programming languages were specially developed to create them, and video cards began to be loaded not only with standard "geometry" but also with instructions for their processors.

When such access became possible, new programming opportunities opened up. Specialists could now solve general mathematical problems on the GPU; such calculations became known as GPGPU. This required special tools: NVIDIA's CUDA, Microsoft's DirectCompute, and the open OpenCL framework.

Types

The more people learned about shaders, the more information about them and their capabilities came to light. Initially, accelerators had three types of processors, each responsible for its own type of shader. Over time, they were replaced by universal processors whose single instruction set covers all three shader types at once. Although the hardware has been unified, the description of each type has survived to this day.

The vertex type works with the vertices of shapes that have many faces, and a lot of data is involved: texture coordinates, and tangent, binormal and normal vectors.

The geometry type works not with a single vertex but with a whole primitive. The pixel type processes fragments of the rasterized image and textures in general.

In games

If you are looking for shaders for Minecraft 1.5.2, then you most likely just want to improve the picture in the game. To make this possible, the programs went through a lot: shaders were tested and refined, and along the way it became clear that this tool has both advantages and disadvantages.

The ease of composing various algorithms is a huge plus. It brings flexibility, a noticeable simplification of the game development process, and therefore lower cost. The resulting virtual scenes become more complex and realistic, and development itself becomes many times faster.

Of the shortcomings, it is only worth noting that you will have to learn one of the shader programming languages, and that different models of video cards support different sets of algorithms.

Installation

If you have found a shader pack for Minecraft, understand that its installation has many pitfalls. Despite the game's fading popularity, loyal fans remain, and not everyone likes its graphics, especially in 2017. Some believe that shaders will let them improve it. In theory this is correct, but in practice you will not change that much.

But if you are still looking for shaders for Minecraft 1.7, then first of all, be careful. The process itself is not complicated, and any downloadable file usually comes with installation instructions. The main thing is to check that the versions of the game and the shader match; otherwise the shader will not work.

There are many places on the Internet where you can download such a tool. Next, unpack the archive into any folder; inside you will find the file "GLSL-Shaders-Mod-1.7-Installer.jar". After launching it, you will be shown the path to the game; if it is correct, agree with all subsequent prompts.

Then move the "shaderpacks" folder into ".minecraft". Now, when you start the launcher, go to the settings; if the installation was correct, a "Shaders" entry will appear, and you can choose the package you want from the list.
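
As a quick sanity check, here is a hedged sketch in Python that verifies the "shaderpacks" folder ended up in the right place. The Windows default location of the .minecraft folder is an assumption; on Linux and macOS it lives elsewhere.

```python
# Hedged sketch: check that a shaderpack landed in .minecraft/shaderpacks
# after following the steps above. The AppData path below is the usual
# Windows default and is only an assumption.

import os

minecraft_dir = os.path.expanduser(os.path.join("~", "AppData", "Roaming", ".minecraft"))
shaderpacks_dir = os.path.join(minecraft_dir, "shaderpacks")

if os.path.isdir(shaderpacks_dir):
    packs = [name for name in os.listdir(shaderpacks_dir) if name.endswith(".zip")]
    print("Installed shaderpacks:", packs or "none yet")
else:
    print("No 'shaderpacks' folder found - re-check the installer steps above.")
```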

If you need shaders for Minecraft 1.7.10, just find a shaderpack for the required version and do the same. Unstable builds circulate on the Internet, so sometimes you have to swap, reinstall, and search for one that works; it is better to read the reviews and choose the most popular packs.

Introduction

The world of 3D graphics, including games, is filled with terms - terms that do not always have a single correct definition. Sometimes different things are called by the same name, and conversely, the same effect may appear in game settings as "HDR", "Bloom", "Glow", or "Postprocessing". From the developers' boasts about what they have built into their graphics engines, most people are left confused about what these terms really mean.

This article is intended to help you understand what some of these words - the ones most often used in such cases - actually mean. We will not cover all 3D graphics terms, only those that have become widespread in recent years as distinctive features and technologies of game graphics engines and as names of graphics settings in modern games. To begin with, I highly recommend that you familiarize yourself with the introductory articles on 3D graphics.

If something in this article or in Alexander's articles is unclear to you, it makes sense to start with the earliest of them. Those articles are somewhat outdated by now, of course, but the basic, most essential material is there. Here we will talk about "higher-level" terms. You should have a basic understanding of real-time 3D graphics and the structure of the graphics pipeline. On the other hand, do not expect mathematical rigor or academic precision: the short code sketches below are only rough illustrations, not production shader code.

Terms

List of terms described in the article:

Shader

A shader, in a broad sense, is a program for visually defining the surface of an object. This can be a description of lighting, texturing, post-processing, and so on. Shaders grew out of Cook's shade trees and Perlin's pixel stream language; the RenderMan Shading Language is the best known today. Programmable shaders were first introduced in Pixar's RenderMan, which defines several types of shaders: light source shaders, surface shaders, displacement shaders, volume shaders, and imager shaders. These shaders are most often executed in software on general-purpose processors and do not have a full hardware implementation.

Later, many researchers described languages similar to RenderMan but designed for hardware acceleration: the PixelFlow system (Olano and Lastra), Quake Shader Language (used by id Software in the graphics engine of Quake III to describe multi-pass rendering), and others. Peercy and colleagues developed a technique for executing programs with loops and conditions on traditional hardware architectures using multiple rendering passes: RenderMan shaders were broken into several passes that were combined in the framebuffer. Later still came the languages that we now see hardware-accelerated in DirectX and OpenGL. This is how shaders were adapted for real-time graphics applications.

Early video chips were not programmable and performed only hard-wired actions (fixed function); for example, the lighting algorithm was rigidly fixed in the hardware and nothing could be changed. Then video chip manufacturers gradually introduced elements of programmability into their chips. At first these were very weak capabilities (the NV10, known as NVIDIA GeForce 256, was already capable of some primitive programs) that did not receive software support in the Microsoft DirectX API, but over time the capabilities kept expanding. The next step was the NV20 (GeForce 3) and NV2A (the video chip used in the Microsoft Xbox game console), which became the first chips with hardware support for DirectX API shaders. The Shader Model 1.0/1.1 version that appeared in DirectX 8 was very limited: each shader (especially pixel shaders) could be only relatively short and could combine a very limited set of instructions. Later, Shader Model 1 (SM1 for short) was improved with Pixel Shaders 1.4 (ATI R200), which offered more flexibility but still had very limited capabilities. Shaders of that time were written in so-called shader assembly language, which is close to assembly language for general-purpose processors. Its low level makes the code hard to understand and program, especially when the program is large, because it is far from the elegance and structure of modern programming languages.

The Shader Model 2.0 (SM2) version that appeared in DirectX 9 (first supported by the ATI R300, which became the first GPU to support shader model 2.0) significantly expanded the capabilities of real-time shaders, offering longer and more complex shaders and a noticeably expanded instruction set. The ability to do floating-point calculations in pixel shaders was added, which was also a major improvement. Along with the SM2 capabilities, DirectX 9 also introduced the high-level shader language (HLSL), which is very similar to C, together with an efficient compiler that translates HLSL programs into low-level, hardware-friendly code. Several profiles are available for different hardware architectures, so a developer can write one HLSL shader and compile it through DirectX into the optimal program for the video chip installed in the user's system. After that, NVIDIA's NV30 and NV40 chips took hardware shaders one step further, adding even longer shaders, dynamic branching in vertex and pixel shaders, the ability to fetch textures from vertex shaders, and so on. Since then there have been no fundamental changes; those are expected towards the end of 2006 in DirectX 10...

In general, shaders have added many new capabilities to the graphics pipeline for transforming and lighting vertices and for processing pixels individually, the way the developers of each specific application want. And yet the capabilities of hardware shaders have not been fully exploited in applications, and as they grow with each new generation of hardware, we will soon see the level of those RenderMan shaders that once seemed unattainable for gaming video accelerators. So far, the real-time shader models supported by hardware video accelerators define only two types of shaders: vertex and pixel (in the DirectX 9 API definition). DirectX 10 promises to add geometry shaders to them in the future.

Vertex Shader

Vertex shaders are programs executed by video chips that perform mathematical operations on vertices (the vertices that make up 3D objects in games); in other words, they allow programmable algorithms to change the parameters of vertices and their lighting (T&L - Transform & Lighting). Each vertex is defined by several variables; for example, its position in 3D space is determined by the coordinates x, y and z. Vertices can also be described by color characteristics, texture coordinates, and the like. Vertex shaders, depending on their algorithms, change this data in the course of their work, for example calculating and writing new coordinates and/or a new color. That is, the input data of a vertex shader is the data of one vertex of the geometric model currently being processed - typically spatial coordinates, a normal, color components, and texture coordinates. The output of the executed program serves as input for the next part of the pipeline: the rasterizer linearly interpolates the data across the surface of the triangle and executes the corresponding pixel shader for each pixel. A very simple and rough (but, I hope, clear) example: a vertex shader can take a 3D sphere object and turn it into a green cube :).
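
As a very rough illustration of the per-vertex work described above, here is a sketch in plain Python standing in for a real vertex shader written in GLSL or HLSL; the matrix, vertex and light values are made up.

```python
# Minimal sketch of what a vertex shader conceptually does, in plain Python.
# A real shader runs on the GPU, once per vertex, and is written in GLSL/HLSL.

def transform_vertex(position, normal, mvp, light_dir):
    """Transform one vertex and compute simple per-vertex (Gouraud-style) lighting."""
    # 4x4 matrix * 4-component position (homogeneous coordinates)
    x, y, z = position
    clip = [sum(mvp[row][col] * val
                for col, val in enumerate((x, y, z, 1.0)))
            for row in range(4)]

    # Lambert term: brightness = max(0, N . L)
    brightness = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return clip, brightness

# Hypothetical input: identity transform, one vertex, light shining along +Z.
identity = [[1 if r == c else 0 for c in range(4)] for r in range(4)]
clip_pos, shade = transform_vertex((1.0, 2.0, 3.0), (0.0, 0.0, 1.0),
                                   identity, (0.0, 0.0, 1.0))
print(clip_pos, shade)   # -> [1.0, 2.0, 3.0, 1.0] and 1.0
```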

Before the NV20 video chip appeared, developers had two options: either use their own programs and algorithms to change vertex parameters, with all calculations done by the CPU (software T&L), or rely on the fixed algorithms in video chips with hardware transform and lighting support (hardware T&L). The very first DirectX shader model was a big step from fixed vertex transform and lighting functions towards fully programmable algorithms. It became possible, for example, to run the skinning algorithm entirely on the video chip, whereas before it could only be executed on a general-purpose CPU. Now, with capabilities greatly expanded since that NVIDIA chip, you can do almost anything with vertices in a vertex shader (except create them, perhaps)...

Examples of how and where vertex shaders are applied:

Pixel Shader

Pixel shaders are programs executed by the video chip during rasterization for each pixel of the image; they perform texture sampling and/or mathematical operations on the color and depth value (Z-buffer) of pixels. All pixel shader instructions are executed pixel by pixel after the geometry transformation and lighting operations are complete. As a result of its work, the pixel shader produces the final pixel color and Z-value for the next stage of the graphics pipeline, blending. The simplest example of a pixel shader: plain multitexturing - simply mixing two textures (diffuse and lightmap, for example) and applying the result to the pixel.
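
Here is a minimal sketch of that multitexturing example, with plain Python standing in for a pixel shader; the tiny 2x2 "textures" below are made up.

```python
# Sketch of the multitexturing example from the text: a "pixel shader" that
# samples a diffuse texture and a lightmap and multiplies them per pixel.

def sample(texture, u, v):
    """Nearest-neighbour texture fetch for normalized coordinates u, v in [0, 1]."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

def pixel_shader(u, v, diffuse_tex, lightmap_tex):
    """Return the final pixel colour: diffuse * lightmap, per channel."""
    diffuse = sample(diffuse_tex, u, v)
    light = sample(lightmap_tex, u, v)
    return tuple(d * l for d, l in zip(diffuse, light))

diffuse_tex  = [[(1.0, 0.5, 0.2), (0.8, 0.8, 0.8)],
                [(0.2, 0.4, 1.0), (1.0, 1.0, 1.0)]]
lightmap_tex = [[(0.2, 0.2, 0.2), (1.0, 1.0, 1.0)],
                [(0.5, 0.5, 0.5), (0.1, 0.1, 0.1)]]

print(pixel_shader(0.75, 0.25, diffuse_tex, lightmap_tex))  # a brightly lit texel
```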

Before the advent of video chips with hardware support for pixel shaders, developers only had conventional multitexturing and alpha blending, which significantly limited many visual effects and did not allow much of what is available now. And while something could still be done programmatically with geometry, with pixels it could not. Early versions of DirectX (up to 7.0 inclusive) always performed all calculations per vertex and offered only extremely limited per-pixel lighting functionality even in the latest of those versions (remember EMBM - environment bump mapping - and DOT3). Pixel shaders made it possible to illuminate any surface pixel by pixel using developer-programmed materials. The pixel shaders 1.1 (in the DirectX sense) that appeared in the NV20 could do not only multitexturing but much more, although most games using SM1 simply used traditional multitexturing on most surfaces, running more complex pixel shaders only on some surfaces to create a variety of special effects (everyone knows that water is still the most common example of pixel shader use in games). Now, after the advent of SM3 and video chips that support it, the capabilities of pixel shaders have grown to allow even ray tracing, albeit with some limitations.

Examples of using pixel shaders:

Procedural Textures

Procedural textures are textures described by mathematical formulas. Such textures do not take up space in the video memory, they are created by the pixel shader "on the fly", each of their elements (texel) is obtained as a result of executing the corresponding shader commands. The most common procedural textures are: different types of noise (for example, fractal noise), wood, water, lava, smoke, marble, fire, etc., that is, those that can be described mathematically in a relatively simple way. Procedural textures also allow you to use animated textures with just a slight modification of the mathematical formulas. For example, clouds made in this way look pretty decent both in dynamics and in statics.
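
A minimal sketch of the idea follows, in plain Python; the sine-based "wood ring" formula is made up for illustration. A real engine would evaluate something like this per pixel in a shader, usually mixing in noise.

```python
# Sketch of a procedural texture: every texel is computed from a formula
# instead of being read from an image stored in video memory.

import math

def wood_texel(u, v, rings=8.0):
    """Brightness of one texel of a procedural 'wood ring' texture, u and v in [0, 1]."""
    # Distance from the texture centre drives concentric rings.
    dx, dy = u - 0.5, v - 0.5
    distance = math.sqrt(dx * dx + dy * dy)
    # Map a sine wave over the distance into 0..1 brightness.
    return 0.5 + 0.5 * math.sin(distance * rings * 2.0 * math.pi)

# Generate a small 8x8 texture "on the fly" at whatever size is requested.
size = 8
texture = [[wood_texel(x / (size - 1), y / (size - 1)) for x in range(size)]
           for y in range(size)]
for row in texture:
    print(" ".join(f"{t:.2f}" for t in row))
```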

The advantages of procedural textures also include an unlimited level of detail for each texture: there is simply no pixelation, because the texture is always generated at the size needed to display it. Animated procedural textures are also of great interest; with their help you can make waves on water without using pre-calculated animated textures. Another plus of such textures is that the more they are used in a product, the less work there is for artists (though more for programmers) creating regular textures.

Unfortunately, procedural textures have not yet found proper use in games. In real applications it is still often easier to load a regular texture: video memory is growing by leaps and bounds, and the most modern accelerators already carry 512 megabytes of dedicated video memory, which needs to be filled with something anyway. Moreover, developers still often do the opposite: to speed up the math in pixel shaders, they use lookup tables (LUTs) - special textures containing pre-calculated values. Instead of executing a few math instructions for each pixel, they simply read the pre-computed values from a texture. But over time the emphasis should shift back towards mathematical calculations - take the new generation of ATI video chips, the RV530 and R580, which have 12 and 48 pixel processors for every 4 and 16 texture units, respectively. This is especially true for 3D textures: while two-dimensional textures fit into the accelerator's local memory without problems, 3D textures require much more of it.

Examples of procedural textures:

Bump Mapping / Specular Bump Mapping

Bump mapping is a technique for simulating irregularities (or modeling microrelief, whichever you prefer) on a flat surface without large computational costs or changes to the geometry. For each pixel of the surface, lighting is calculated based on the values in a special height map called a bumpmap. This is usually an 8-bit grayscale texture, and its values are not overlaid like a regular texture's colors but are used to describe the roughness of the surface. The value of each texel determines the height of the corresponding point of the relief: higher values mean greater height above the original surface, lower values mean less - or vice versa.

The degree of illumination of a point depends on the angle of incidence of the rays of light. The smaller the angle between the normal and the ray of light, the greater the illumination of a point on the surface. That is, if you take a flat surface, then the normals at each point will be the same and the illumination will also be the same. And if the surface is uneven (in fact, almost all surfaces in reality), then the normals at each point will be different. And the illumination is different, at one point it will be more, at another - less. Hence the principle of bumpmapping - to model irregularities for different points of the polygon, surface normals are set, which are taken into account when calculating per-pixel lighting. As a result, a more natural image of the surface is obtained, bump mapping gives the surface more detail, such as bumps on bricks, pores on the skin, etc., without increasing the geometric complexity of the model, since the calculations are carried out at the pixel level. Moreover, when the position of the light source changes, the illumination of these irregularities changes correctly.
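
A minimal sketch of this principle follows, with plain Python instead of a pixel shader; the height map, light direction and strength factor are made up. A per-pixel normal is derived from neighbouring height samples and fed into a Lambert lighting term.

```python
# Sketch of the bump-mapping idea: derive a per-pixel normal from a height map
# and use it in the lighting calculation. A real implementation runs in a
# pixel shader and works in tangent space.

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def bumped_normal(height_map, x, y, strength=1.0):
    """Estimate the surface normal at (x, y) from neighbouring height samples."""
    h = height_map
    dx = h[y][x + 1] - h[y][x - 1]   # slope along X
    dy = h[y + 1][x] - h[y - 1][x]   # slope along Y
    return normalize((-dx * strength, -dy * strength, 1.0))

def diffuse(normal, light_dir):
    """Lambert lighting: darker where the perturbed normal faces away from the light."""
    return max(0.0, sum(n * l for n, l in zip(normal, normalize(light_dir))))

# A tiny height map that ramps up to the right; the light comes from the upper left,
# so the slope facing the light is shaded brighter than a flat surface would be.
height_map = [[0.0, 0.5, 1.0],
              [0.0, 0.5, 1.0],
              [0.0, 0.5, 1.0]]
print(diffuse(bumped_normal(height_map, 1, 1), (-1.0, -1.0, 1.0)))
```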

Of course, vertex lighting is much simpler computationally, but it looks too unrealistic, especially with relatively low-poly geometry: per-pixel color interpolation cannot reproduce values greater than those calculated at the vertices. That is, pixels in the middle of a triangle cannot be brighter than the fragments near a vertex. Consequently, areas with abrupt changes in illumination, such as highlights and light sources very close to the surface, will not be displayed physically correctly, and this is especially noticeable in motion. The problem can be partially solved by increasing the geometric complexity of the model, dividing it into more vertices and triangles, but per-pixel lighting is the better option.

Before continuing, we should recall the components of lighting. The color of a surface point is calculated as the sum of the ambient, diffuse and specular components from all light sources in the scene (ideally from all of them, though this is often neglected). The contribution of each light source depends on the distance between the light source and the point on the surface.

Lighting components:

Now let's add some bump mapping to this:

The uniform (ambient) component of lighting is an approximation, "initial" lighting for each point of the scene, at which all points are illuminated equally and the illumination does not depend on other factors.
The diffuse component of the light depends on the position of the light source and on the surface normal. This lighting component is different for each vertex of the object, which gives them volume. The light no longer fills the surface with the same shade.
The specular component of illumination manifests itself in the reflection of light rays from the surface. To calculate it, in addition to the light source position vector and the normal, two more vectors are used: the view direction vector and the reflection vector. The specular lighting model was first proposed by Bui Tuong Phong. These highlights significantly increase the realism of the image, because few real surfaces reflect no light at all, so the specular component is very important - especially in motion, since the glare immediately reveals a change in the position of the camera or of the object itself. Later, researchers came up with other, more complex ways to calculate this component (Blinn, Cook-Torrance, Ward) that take into account the distribution of light energy, its absorption by materials and scattering in the form of a diffuse component.
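
Below is a hedged sketch of how these three components are usually combined for one surface point, using the classic Phong model; all vectors and material constants are illustrative, not taken from the article.

```python
# Sketch of combining the lighting components described above
# (ambient + diffuse + Phong specular) for a single surface point.

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(normal, light_dir, view_dir, ambient=0.1, kd=0.7, ks=0.5, shininess=32):
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)

    diffuse = kd * max(0.0, dot(n, l))

    # Reflection of the light direction about the normal: R = 2(N.L)N - L
    ndotl = dot(n, l)
    r = tuple(2.0 * ndotl * nc - lc for nc, lc in zip(n, l))
    specular = ks * max(0.0, dot(r, v)) ** shininess if ndotl > 0 else 0.0

    return ambient + diffuse + specular

# Light slightly off to the side, camera looking straight down the normal.
print(phong(normal=(0, 0, 1), light_dir=(0.3, 0.0, 1.0), view_dir=(0, 0, 1)))
```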

So, Specular Bump Mapping is obtained in this way:

And let's see the same with the example of the game, Call of Duty 2:


The first fragment of the picture is rendered without bump mapping at all, the second (top right) is bump mapping without a specular component, the third is with a specular component of normal strength, as used in the game, and the last, at the bottom right, has the maximum possible specular component.

As for the first hardware applications, some types of bump mapping (Emboss Bump Mapping) were used back in the days of video cards based on NVIDIA Riva TNT chips, but the techniques of that time were extremely primitive and not widely used. The next known type was Environment Mapped Bump Mapping (EMBM), but only Matrox video cards had hardware support for it in DirectX at the time, and again its use was severely limited. Then Dot3 Bump Mapping appeared, and the video chips of that time (GeForce 256 and GeForce 2) required three passes to execute such a mathematical algorithm in full, since they were limited to two simultaneously used textures. Starting with the NV20 (GeForce 3), it became possible to do the same thing in one pass using pixel shaders. Since then, even more effective techniques, such as normal mapping, have come into use.

Examples of using bumpmapping in games:


Displacement Mapping

Displacement mapping is a method of adding detail to 3D objects. Unlike bump mapping and other per-pixel methods, where height maps only correctly model the illumination of a point while its position in space stays the same (giving merely the illusion of increased surface complexity), displacement maps produce genuinely complex 3D objects out of vertices and polygons, without the restrictions inherent in per-pixel methods. This method repositions the vertices of the triangles by shifting them along their normals by an amount based on the values in the displacement map. A displacement map is usually a black-and-white texture whose values determine the height of each point of the object's surface (the values can be stored as 8-bit or 16-bit numbers), similar to a bumpmap. Displacement maps are often used (in which case they are also called height maps) to create terrain with hills and valleys. Since the terrain is described by a two-dimensional displacement map, it is relatively easy to deform when needed: it only requires modifying the displacement map and rendering the surface based on it in the next frame.

The creation of a landscape using displacement maps is clearly shown in the picture. Initially, 4 vertices and 2 polygons were used, as a result, a full-fledged piece of the landscape turned out.

The big advantage of overlaying displacement maps is not just the ability to add details to the surface, but the almost complete creation of the object. A low-poly object is taken, split (tessellated) into more vertices and polygons. The vertices produced by the tessellation are then displaced along the normal based on the value read in the displacement map. We end up with a complex 3D object from a simple one using the appropriate displacement map:


The number of triangles created by the tessellation must be large enough to capture all the detail defined by the displacement map. Sometimes additional triangles are created automatically using N-patches or other methods. Displacement maps are best used in conjunction with bump mapping to create fine detail where proper pixel-by-pixel lighting is sufficient.
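
A minimal sketch of the tessellate-and-displace step just described follows, with plain Python instead of GPU code; the grid size, height map and scale are made up.

```python
# Sketch of displacement mapping: vertices of a tessellated grid are pushed
# along their normals by the value read from a displacement (height) map.

def sample(height_map, u, v):
    """Nearest-neighbour lookup into a 2D list of heights, u and v in [0, 1]."""
    h, w = len(height_map), len(height_map[0])
    return height_map[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

def displace_grid(rows, cols, displacement_map, scale=1.0):
    """Build a flat grid in the XY plane and displace each vertex along +Z."""
    vertices = []
    for r in range(rows):
        for c in range(cols):
            u = c / (cols - 1)
            v = r / (rows - 1)
            # For a flat grid the normal is simply (0, 0, 1),
            # so the displacement just becomes the Z coordinate.
            height = sample(displacement_map, u, v) * scale
            vertices.append((u, v, height))
    return vertices

# A 3x3 "terrain" height map: one hill in the middle.
height_map = [[0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]]
for vertex in displace_grid(5, 5, height_map, scale=2.0):
    print(vertex)
```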

Displacement mapping was first supported in DirectX 9.0 - the first version of this API to support the technique. DX9 supports two types of displacement mapping: filtered and presampled. The first method was supported by the now-forgotten MATROX Parhelia video chip, the second by the ATI RADEON 9700. The filtered method differs in that it allows mip levels to be used for displacement maps, with trilinear filtering applied to them. In this method, the mip level of the displacement map is selected for each vertex based on the distance from the vertex to the camera, that is, the level of detail is selected automatically. This achieves an almost uniform partitioning of the scene, with triangles of approximately the same size.

Displacement mapping can essentially be thought of as a geometry compression technique; using displacement maps reduces the amount of memory required for a given level of 3D model detail. Bulky geometry data is replaced with simple 2D displacement textures, usually 8-bit or 16-bit. This reduces the memory and bandwidth required to deliver geometry data to the video chip, and these constraints are among the main ones for today's systems. Alternatively, for equal bandwidth and storage requirements, displacement mapping allows much more geometrically complex 3D models. Using models of far lower complexity - a few thousand triangles instead of tens or hundreds of thousands - also makes it possible to speed up their animation, or to improve it by applying more complex algorithms and techniques, such as cloth simulation.

Another advantage is that using displacement maps turns complex polygonal 3D meshes into several 2D textures, which are easier to work with. For example, you can apply regular mip-mapping to displacement maps to organize levels of detail. Also, instead of relatively complex algorithms for compressing three-dimensional meshes, you can use ordinary texture compression methods, even JPEG-like ones. And for the procedural creation of 3D objects, you can use ordinary algorithms for 2D textures.

But displacement maps also have some limitations; they cannot be applied in all situations. For example, smooth objects that do not contain a lot of fine detail will be better represented as standard meshes or other higher-order surfaces such as Bezier surfaces. On the other hand, very complex models such as trees or plants are also not easy to represent with displacement maps. There are also problems of convenience: this almost always requires specialized utilities, because it is very difficult to create displacement maps directly (unless we are talking about simple objects such as a landscape). Many of the problems and limitations inherent in displacement maps are the same as those of normal mapping, since the two methods are essentially two different representations of a similar idea.

As an example from real games, I will cite one that uses texture sampling in the vertex shader, a feature that appeared in NVIDIA NV40 video chips and Shader Model 3.0. Vertex texturing can be used for a simple form of displacement mapping performed entirely on the GPU, without tessellation (splitting into more triangles). The use of such an algorithm is limited: it makes sense only if the maps are dynamic, that is, they change during rendering. An example is the rendering of large water surfaces, as done in the game Pacific Fighters:


Normal Mapping

Normal mapping is an improved, extended version of the bump mapping technique described earlier. Bump mapping was developed by Blinn back in 1978; in that relief mapping method, surface normals are altered based on information from bump maps. While bump mapping only perturbs the existing normal at surface points, normal mapping completely replaces the normals by fetching their values from a specially prepared normal map. These maps are usually textures with pre-calculated normal values stored in them, represented as RGB color components (there are also special formats for normal maps, including compressed ones), in contrast to the 8-bit grayscale height maps of bump mapping.

In general, like bump mapping, this is a "cheap" method of adding detail to models of relatively low geometric complexity without using more real geometry, only a more advanced one than bump mapping. One of the most interesting uses of the technique is to significantly increase the detail of low-poly models using normal maps obtained by processing the same model at high geometric complexity. Normal maps contain a more detailed description of the surface than bump maps and allow more complex shapes to be represented. Ideas about extracting information from highly detailed objects were voiced in the mid-90s of the last century, but back then they concerned other uses. Later, in 1998, ideas were presented for transferring details in the form of normal maps from high-poly models to low-poly ones.

Normal maps provide a more efficient way to store detailed surface data than simply using a large number of polygons. Their only serious limitation is that they are not well suited to large details, because normal mapping does not actually add polygons or change the shape of the object - it only creates the appearance of doing so. It is just a simulation of detail based on pixel-level lighting calculations. At the silhouette edges of the object and at steep surface angles this is very noticeable. Therefore, the most reasonable way to apply normal mapping is to make the low-poly model detailed enough to preserve the basic shape of the object, and to use normal maps to add finer details.

Normal maps are usually generated from two versions of the model, low and high poly. The low poly model consists of a minimum of geometry, the basic shapes of the object, and the high poly model contains everything you need for maximum detail. Then, using special utilities, they are compared with each other, the difference is calculated and stored in a texture called a normal map. When creating it, you can additionally use a bump map for very small details that cannot be modeled even in a high-poly model (skin pores, other small depressions).

Normal maps were originally represented as regular RGB textures, where the R, G and B color components (0 to 1) are interpreted as X, Y and Z coordinates, so each texel of the normal map stores the normal of a surface point. Normal maps can be of two types: with coordinates in model space (a shared coordinate system) or in tangent space (the local coordinate system of a triangle). The second option is used more often. When normal maps are stored in model space, they must have three components, since any direction may need to be represented; in the local tangent-space coordinate system, two components are enough, and the third can be reconstructed in the pixel shader.
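
A small sketch of that decoding step follows, with plain Python standing in for a pixel shader; the texel values are made up.

```python
# Sketch of decoding a tangent-space normal-map texel: RGB values in [0, 1]
# are remapped to a vector in [-1, 1], and Z can be reconstructed from X and Y
# when only two channels are stored.

import math

def decode_normal(r, g, b=None):
    """Turn normal-map colour channels back into a unit normal vector."""
    x = r * 2.0 - 1.0
    y = g * 2.0 - 1.0
    if b is not None:
        z = b * 2.0 - 1.0
    else:
        # Two-channel map: reconstruct Z, knowing the normal has unit length
        # and points out of the surface (z >= 0) in tangent space.
        z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    length = math.sqrt(x * x + y * y + z * z) or 1.0
    return (x / length, y / length, z / length)

# The "straight up" colour (0.5, 0.5, 1.0 in normalized RGB) decodes to (0, 0, 1).
print(decode_normal(0.5, 0.5, 1.0))
# The two-channel variant with the blue channel dropped gives the same direction.
print(decode_normal(0.5, 0.5))
```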

Modern real-time applications still fall far short of pre-rendered animation in image quality; this concerns, first of all, the quality of lighting and the geometric complexity of scenes. The number of vertices and triangles that can be calculated in real time is limited, so methods for reducing the amount of geometry are very important. Several such methods were developed before normal mapping, but low-poly models, even with bump mapping, come out noticeably worse than more complex models. Although normal mapping has several drawbacks (the most obvious: since the model stays low-poly, its angular silhouette is easy to spot), the final rendering quality improves noticeably while the geometric complexity of the models stays low. The recent rise in the popularity of this technique and its use in all popular game engines is clearly visible; the reason is the combination of excellent resulting quality and reduced requirements for the geometric complexity of models. The normal mapping technique is now used almost everywhere, and all new games use it as widely as possible. Here is just a short list of well-known PC games that use normal mapping: Far Cry, Doom 3, Half-Life 2, Call of Duty 2, FEAR, Quake 4. They all look much better than games of the past, in part thanks to the use of normal maps.

There is only one negative consequence of using this technique - an increase in the volume of textures. After all, a normal map strongly influences how an object will look, and it must be of a sufficiently large resolution, so the requirements for video memory and its bandwidth are doubled (in the case of uncompressed normal maps). But now video cards with 512 megabytes of local memory are already being produced, its bandwidth is constantly growing, compression methods have been developed specifically for normal maps, so these small restrictions are not very important, in fact. The effect of normal mapping is much greater, allowing relatively low-poly models to be used, reducing the memory requirements for storing geometric data, improving performance and giving a very decent visual result.

Parallax Mapping / Offset Mapping

Normal mapping, developed back in 1984, was followed by Relief Texture Mapping, introduced by Oliveira and Bishop in 1999 - a texture mapping technique based on depth information. The method did not find application in games, but its idea contributed to continued work on parallax mapping and its improvement. Kaneko introduced parallax mapping in 2001, the first efficient method for rendering the parallax effect per pixel. In 2004, Welsh demonstrated the use of parallax mapping on programmable video chips.

This technique probably has more different names than any other. I will list the ones I have come across: Parallax Mapping, Offset Mapping, Virtual Displacement Mapping, Per-Pixel Displacement Mapping. For brevity, the first name is used in this article.

Parallax mapping is another alternative to the bump mapping and normal mapping techniques, one that gives even more detailed surfaces and a more natural display of 3D surfaces, again without losing too much performance. This technique resembles both displacement mapping and normal mapping at the same time; it sits somewhere in between. It, too, is designed to display more surface detail than the original geometric model contains. It is similar to normal mapping, but the difference is that it distorts the texture mapping by changing the texture coordinates so that when you look at the surface from different angles it appears convex, even though in reality the surface is flat and does not change. In other words, Parallax Mapping is a technique for approximating the effect of surface points shifting depending on a change in viewpoint.

The technique shifts the texture coordinates (which is why it is sometimes called offset mapping) so that the surface looks more three-dimensional. The idea behind the method is to return the texture coordinates of the point where the view vector intersects the surface. This would require ray tracing against the height map, but if the height values do not change too sharply (the surface is "smooth"), an approximation is enough. This method works well for surfaces with smoothly varying heights, without computing intersections and without large offset values. Such a simple algorithm differs from normal mapping by only three pixel shader instructions: two math instructions and one additional texture fetch. After the new texture coordinate is calculated, it is used for all further texture reads: the base texture, the normal map, and so on. This kind of parallax mapping on modern video chips is almost as cheap as conventional texture mapping, and its result is a more realistic surface than simple normal mapping.
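
A minimal sketch of this simple offset variant follows, with plain Python instead of shader code; the scale and bias constants are typical-looking but made up.

```python
# Sketch of simple parallax (offset) mapping: read a height, shift the texture
# coordinates along the view direction, then use the shifted coordinates for
# all further texture fetches.

def parallax_offset(u, v, height, view_dir, scale=0.04, bias=-0.02):
    """Return adjusted texture coordinates for a given tangent-space view vector."""
    vx, vy, vz = view_dir            # view vector in tangent space, vz > 0
    # One texture fetch gave us 'height'; two math ops shift the coordinates.
    offset = height * scale + bias
    return (u + vx / vz * offset,
            v + vy / vz * offset)

# Looking at the surface at an angle: texels "high" in the height map appear
# shifted towards the viewer, which is what creates the parallax illusion.
print(parallax_offset(0.50, 0.50, height=1.0, view_dir=(0.6, 0.0, 0.8)))
print(parallax_offset(0.50, 0.50, height=0.0, view_dir=(0.6, 0.0, 0.8)))
```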

But the use of conventional parallax mapping is limited to height maps with small differences in values. "Steep" irregularities are handled incorrectly by the algorithm: various artifacts appear, textures "float", and so on. Several modified methods have been developed to improve the parallax mapping technique. Several researchers (Yerex, Donnelly, Tatarchuk, Policarpo) have described new methods that improve the initial algorithm. Almost all of the ideas are based on ray tracing in the pixel shader to determine where surface details intersect each other. The modified techniques have received several different names: Parallax Mapping with Occlusion, Parallax Mapping with Distance Functions, Parallax Occlusion Mapping. For brevity, we will call them all Parallax Occlusion Mapping.

Parallax Occlusion Mapping methods also include ray tracing to determine heights and to take the visibility of texels into account. Indeed, when viewed at an angle to the surface, texels block one another, and taking this into account adds more depth to the parallax effect. The resulting image becomes more realistic, and such improved methods can be used for deeper relief; they are great for depicting brick and stone walls, pavements, and so on. It should be especially noted that the main difference between Parallax Mapping and Displacement Mapping is that all the calculations are per pixel: the surface geometry is not actually displaced. This is why the method has names like Virtual Displacement Mapping and Per-Pixel Displacement Mapping. Look at the picture - it is hard to believe, but the cobblestones here are just a per-pixel effect:

The method allows you to effectively display detailed surfaces without the millions of vertices and triangles that would be required when implementing this geometry. At the same time, high detail is preserved (except for silhouettes / edges) and animation calculations are greatly simplified. This technique is cheaper than using real geometry, and significantly fewer polygons are used, especially in cases with very small details. There are many applications for the algorithm, and it is best suited for stones, bricks and the like.

Also, an additional advantage is that height maps can dynamically change (water surface with waves, bullet holes in walls, and much more). The disadvantage of the method is the lack of geometrically correct silhouettes (edges of the object), because the algorithm is pixel-by-pixel and is not a real displacement mapping. But it saves performance in the form of reduced load on transformation, lighting and animation of geometry. Saves video memory required for storing large amounts of geometry data. The technology also benefits from a relatively simple integration into existing applications and the use of familiar utilities used for normal mapping in the process.

The technique has already been used in real games of recent times. So far, they get by with simple parallax mapping based on static height maps, without ray tracing and calculating intersections. Here are some examples of how parallax mapping can be used in games:

Postprocessing

In a broad sense, post-processing is everything that happens after the main imaging steps. In other words, post-processing is any change to an image after it has been rendered. Post-processing is a set of tools for creating special visual effects, and their creation is performed after the main work on rendering the scene is completed, that is, when creating post-processing effects, a ready-made raster image is used.

A simple example from photography: you have photographed a beautiful lake with greenery in clear weather. The sky turns out very bright and the trees too dark. You load the photo into a graphics editor and start changing the brightness, contrast and other parameters for parts of the image or for the whole picture. You no longer have the ability to change the camera settings; you are processing the finished image. That is post-processing. Another example: selecting the background in a portrait photo and applying a blur filter to that area for a stronger depth-of-field effect. That is, when you change or correct a frame in a graphics editor, you are doing post-processing. The same can be done in a game, in real time.

There are many different possibilities for post-render image processing. Everyone has probably seen plenty of so-called graphic filters in graphics editors. These are exactly what are called post-filters: blur, edge detection, sharpen, noise, smooth, emboss, etc. Applied to real-time 3D rendering, it works like this: the entire scene is rendered into a special area, the render target, and after the main rendering this image is additionally processed with pixel shaders and only then displayed on the screen. Of the post-processing effects in games, the most commonly used are bloom, motion blur and depth of field. There are many other post effects: noise, flare, distortion, sepia, etc.

Here are a couple of prime examples of post-processing in gaming applications:

High Dynamic Range (HDR)

High Dynamic Range (HDR), as applied to 3D graphics, is rendering with a high dynamic range. The essence of HDR is to describe intensity and color with real physical quantities. The usual model for describing an image is RGB, where all colors are represented as the sum of the primary colors red, green and blue with different intensities, stored as integer values from 0 to 255 for each and encoded with eight bits per color. The ratio of the maximum intensity to the minimum that a particular model or device can display is called its dynamic range. The dynamic range of the RGB model is 256:1, or about 100:1 cd/m² (two orders of magnitude). This model for describing color and intensity is commonly referred to as Low Dynamic Range (LDR).

The possible LDR values are clearly not enough for all cases: a person can see a much larger range, especially at low light intensities, and the RGB model is too limited in such cases (and at high intensities too). The dynamic range of human vision spans from 10⁻⁶ to 10⁸ cd/m², that is, 100,000,000,000,000:1 (14 orders of magnitude). We cannot see the entire range at once, but the range visible to the eye at any given moment is roughly 10,000:1 (four orders of magnitude). Vision shifts to values from other parts of the illumination range gradually, through so-called adaptation, which is easy to describe with the situation of turning off the light in a room at night: at first the eyes see very little, but over time they adapt to the changed lighting conditions and see much more. The same thing happens when you move from a dark environment back to a bright one.

So the dynamic range of the RGB description model is not enough to represent images that a person is able to see in reality; the model significantly narrows the possible light intensity values at the top and bottom of the range. The most common example cited in HDR materials is an image of a darkened room with a window onto a bright street on a sunny day. With the RGB model you can get either a normal display of what is outside the window or only of what is inside the room. Values greater than 100 cd/m² in LDR are simply clipped, which is why it is difficult in 3D rendering to correctly display bright light sources aimed directly at the camera.

Since display devices themselves cannot be seriously improved for now, it makes sense to abandon LDR in the calculations: we can use real physical values of intensity and color (or values linearly proportional to them) and output to the monitor the maximum it is capable of showing. The essence of the HDR representation is to use intensity and color values in real physical quantities (or linearly proportional to them) and to use not integers but high-precision floating-point numbers (for example, 16 or 32 bits). This removes the limitations of the RGB model and dramatically increases the dynamic range of the image. Any HDR image can then be shown on any display device (the same RGB monitor) with the best quality it can achieve, using special algorithms.

HDR rendering allows you to change the exposure after we have rendered the image. It makes it possible to simulate the effect of adaptation of human vision (moving from bright open spaces to dark rooms and vice versa), allows for physically correct lighting, and is also a unified solution for applying post-processing effects (glare, flares, bloom, motion blur). Image processing algorithms, color correction, gamma correction, motion blur, bloom and other post-processing methods are better performed in HDR representation.

In real-time 3D rendering applications (games, mainly), HDR rendering began to be used not so long ago, because it requires calculations and support for a render target in floating point formats, which first became available only on video chips with support for DirectX 9. The usual way of HDR rendering in games: rendering a scene to a floating point buffer, post-processing of an image in an extended color range (changing contrast and brightness, color balance, glare and motion blur effects, lens flare and the like), applying tone mapping to output the final HDR image to LDR display device. Sometimes environment maps are used in HDR formats, for static reflections on objects, the use of HDR in simulating dynamic refractions and reflections is very interesting, for this, dynamic maps in floating point formats can also be used. To this you can add more light maps, calculated in advance and saved in HDR format. Much of the above has been done, for example, in Half-Life 2: Lost Coast.

HDR rendering is very useful for complex post-processing of higher quality than conventional methods allow. The same bloom looks more realistic when computed in the HDR model. For example, Crytek's Far Cry uses standard HDR rendering techniques: bloom filters after Kawase and Reinhard's tone mapping operator.

Unfortunately, in some cases, game developers can hide under the name HDR just a bloom filter calculated in the usual LDR range. And while most of what is being done in games with HDR rendering right now is a better quality bloom, the benefits of HDR rendering are not limited to this post-effect, it's just the easiest to do.

Other examples of HDR rendering in real-time applications:


Tone Mapping

Tone mapping is the process of converting an HDR brightness range to the LDR range displayed by an output device, such as a monitor or printer, since outputting HDR images to them requires converting the dynamic range and gamut of the HDR model into the corresponding LDR dynamic range, most commonly RGB. After all, the brightness range represented in HDR is very wide: several orders of magnitude of absolute dynamic range at once, in a single scene. The range that can be reproduced on conventional output devices (monitors, televisions), on the other hand, is only about two orders of magnitude of dynamic range.

The conversion from HDR to LDR is called tone mapping; it is lossy and mimics the properties of human vision. The algorithms that perform it are commonly referred to as tone mapping operators. Operators divide all brightness values in the image into three types: dark, medium and bright. Based on the estimated brightness of the midtones, the overall illumination is corrected and the brightness values of the scene's pixels are redistributed to fit the output range: dark pixels are lightened and bright ones darkened. Then the brightest pixels in the image are scaled to the range of the output device or output color model. The following picture shows the simplest conversion of an HDR image to the LDR range, a linear transformation, while a more complex tone mapping operator, working as described above, has been applied to the fragment in the center:

It can be seen that only with nonlinear tone mapping can you get the maximum of detail in the image; if you convert HDR to LDR linearly, many small details are simply lost. There is no single correct tone mapping algorithm; there are several operators that give good results in different situations. Here is a good example of two different tone mapping operators:
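
A minimal sketch of the difference just described follows: a Reinhard-style global operator versus naive linear clipping. The sample HDR luminance values are made up.

```python
# Sketch of a simple global tone mapping operator (Reinhard-style) that
# compresses HDR luminance into the displayable 0..1 range, compared with
# a naive linear mapping that clips everything above "white".

def reinhard(luminance):
    """Map an HDR luminance value (0 .. infinity) into 0 .. 1 non-linearly."""
    return luminance / (1.0 + luminance)

def linear_clip(luminance, white=1.0):
    """Linear mapping with clipping: all detail above 'white' is lost."""
    return min(luminance / white, 1.0)

for value in (0.05, 0.5, 2.0, 20.0, 500.0):
    print(f"HDR {value:7.2f} -> reinhard {reinhard(value):.3f}   linear {linear_clip(value):.3f}")
```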

Together with HDR rendering, tone mapping has recently come into use in games. It has become possible to optionally simulate properties of human vision: loss of sharpness in dark scenes, adaptation to new lighting conditions during transitions from very bright to dark areas and back, and sensitivity to changes in contrast and color. This is what the imitation of the eye's ability to adapt looks like in Far Cry. The first screenshot shows the image the player sees just after turning from a dark room towards a brightly lit open space, and the second shows the same image a couple of seconds after adaptation.

Bloom

Bloom is one of the cinematic post-processing effects that brightens the brightest parts of an image. This is the effect of very bright light, which manifests itself in the form of a glow around bright surfaces, after applying the bloom filter, such surfaces not only receive additional brightness, the light from them (halo) partially affects the darker areas adjacent to bright surfaces in the frame. The easiest way to show this is with an example:

In 3D graphics, the bloom filter is implemented as an additional post-processing pass: a frame blurred by a blur filter (either the whole frame or only its bright areas; the filter is usually applied several times) is mixed with the original frame. One of the most commonly used bloom post-filter algorithms in games and other real-time applications works as follows (a minimal code sketch follows the list):

  • The scene is rendered into a framebuffer, the glow intensity of the objects is written to the alpha channel of the buffer.
  • The framebuffer is copied into a special texture for processing.
  • The texture resolution is reduced, for example, by a factor of 4.
  • Blur filters are applied to the image several times, based on the intensity data recorded in the alpha channel.
  • The resulting image is mixed with the original frame in the framebuffer and the result is displayed on the screen.
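
Here is a minimal 1D sketch of the steps listed above, in plain Python; a real implementation works on 2D render targets on the GPU, and all numbers here are made up.

```python
# Minimal 1D sketch of bloom: keep the bright parts of the frame, blur them,
# and add the result back to the original frame.

def bright_pass(frame, threshold=1.0):
    """Keep only the pixels brighter than the threshold (others become 0)."""
    return [p if p > threshold else 0.0 for p in frame]

def blur(frame, passes=2):
    """Simple box blur applied several times, like the repeated blur step above."""
    for _ in range(passes):
        padded = [frame[0]] + frame + [frame[-1]]
        frame = [(padded[i] + padded[i + 1] + padded[i + 2]) / 3.0
                 for i in range(len(frame))]
    return frame

def bloom(frame):
    glow = blur(bright_pass(frame))
    return [original + g for original, g in zip(frame, glow)]

# One very bright pixel (an HDR highlight) bleeds into its neighbours.
hdr_frame = [0.1, 0.1, 0.1, 4.0, 0.1, 0.1, 0.1]
print([round(p, 2) for p in bloom(hdr_frame)])
```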

Like other types of post-processing, bloom is best used when rendering in high dynamic range (HDR). Additional examples of processing the final image by a bloom filter from real-time 3D applications:

Motion blur

Motion blur occurs in photographs and films because objects in the frame move during the frame's exposure time, while the shutter is open. A frame captured by a camera (still or movie) is not a snapshot taken instantly with zero duration. Due to technological limitations, the frame represents a certain period of time, during which objects in the frame can travel some distance; when that happens, all the positions of the moving object while the shutter is open appear in the frame as an image blurred along the motion vector. This happens whenever the object moves relative to the camera or the camera relative to the object, and the amount of blur gives us an idea of how fast the object is moving.

In three-dimensional animation, at any given moment in time (frame), objects are located at certain coordinates in three-dimensional space, similar to a virtual camera with an infinitely fast shutter speed. As a result, there is no blur similar to that obtained by the camera and the human eye when looking at fast moving objects. It looks unnatural and unrealistic. Consider a simple example: several spheres rotate around some axis. Here is an image of how this motion would look with and without blur:

From an image without blurring, one cannot even tell whether the spheres are moving or not, while motion blur gives a clear idea of ​​the speed and direction of movement of objects. By the way, the lack of motion blur is also the reason why motion in games at 25-30 frames per second seems jerky, although movies and videos look great at the same frame rate parameters. To compensate for the lack of motion blur, either a high frame rate (60 frames per second or higher) or the use of additional image processing methods to emulate the effect of motion blur is desirable. This is used to improve the smoothness of the animation and for the effect of photo and film realism at the same time.

The simplest motion blur algorithm for real-time applications is to use data from previous animation frames to render the current frame. But there are more efficient and modern methods of motion blur, which do not use previous frames, but are based on the motion vectors of objects in the frame, also adding just one more post-processing step to the rendering process. The blur effect can be either full-screen (usually done in post-processing), or for individual, fastest moving objects.
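
A minimal sketch of the frame-accumulation approach just mentioned follows, in plain Python on 1D "frames"; the blend weight is made up, and modern methods would use per-pixel motion vectors instead.

```python
# Sketch of the simplest motion blur: blend the current frame with the
# accumulated previous frames, leaving a trail behind moving objects.

def accumulate(current, history, persistence=0.6):
    """Exponentially weighted blend of the current frame with the accumulated past."""
    if history is None:
        return list(current)
    return [persistence * h + (1.0 - persistence) * c
            for h, c in zip(history, current)]

# A bright object (value 1.0) moves one pixel to the right each frame.
frames = [[1, 0, 0, 0],
          [0, 1, 0, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 1]]

history = None
for frame in frames:
    history = accumulate(frame, history)
    print([round(p, 2) for p in history])   # a trail is left behind the object
```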

Possible applications of the motion blur effect in games: all racing games (for creating the effect of very high speed of movement and for use when watching TV-like replays), sports games (the same replays, and in the game itself, blurring can be applied to very fast moving objects, like a ball or a puck), fighting games (fast movements of melee weapons, arms and legs), many other games (during in-game 3D cutscenes on the engine). Here are some examples of the motion blur post effect from games:

Depth Of Field (DOF)

Depth of field, in short, is the blurring of objects depending on their position relative to the camera's focus. In real life, in photographs and in films, we do not see all objects equally clearly; this is due to the structure of the eye and the optics of photo and film cameras. Camera optics have a certain focal distance: objects located at that distance from the camera are in focus and look sharp, while objects closer to or farther from the camera look blurry, with sharpness falling off gradually as the distance increases or decreases.

As you might have guessed, this is a photograph, not a rendering. In computer graphics, each object of the rendered image is perfectly clear, since lenses and optics are not imitated in the calculations. Therefore, in order to achieve photo- and cinematic realism, it is necessary to use special algorithms to do something similar for computer graphics. These techniques simulate the effect of a different focus on objects at different distances.

One of the common techniques for real-time rendering is to blend the original frame and its blurred version (multiple passes of the blur filter) based on the depth data for the pixels in the image. In games, the DOF effect has several uses, for example, cinematics on the game engine, replays in sports and racing games. Real-time depth of field examples:
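A hedged GLSL sketch of that blending approach is given below. The uniform names, the availability of a linear depth value per pixel and the simple linear falloff are assumptions:

#version 330 core
// Depth of field: blend the sharp frame with its blurred copy by distance from the focal plane.
uniform sampler2D uScene;       // sharp frame
uniform sampler2D uSceneBlur;   // the same frame after several blur passes
uniform sampler2D uDepth;       // linear depth of each pixel (an assumption about the pipeline)
uniform float uFocusDist;       // distance that should stay in focus
uniform float uFocusRange;      // how quickly sharpness falls off

in vec2 vUV;
out vec4 fragColor;

void main()
{
    float depth = texture(uDepth, vUV).r;
    // 0 near the focal plane, 1 far from it.
    float blurFactor = clamp(abs(depth - uFocusDist) / uFocusRange, 0.0, 1.0);
    vec3 sharp = texture(uScene, vUV).rgb;
    vec3 blurred = texture(uSceneBlur, vUV).rgb;
    fragColor = vec4(mix(sharp, blurred, blurFactor), 1.0);
}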

Level Of Detail (LOD)

Level of detail in 3D applications is a method of reducing the complexity of rendering a frame, reducing the total number of polygons, textures and other resources in a scene, and generally reducing its complexity. A simple example: the main character model consists of 10,000 polygons. In cases where it is located close to the camera in the processed scene, it is important that all polygons are used, but at a very large distance from the camera, it will occupy only a few pixels in the final image, and there is no point in processing all 10,000 polygons. Perhaps, in this case, hundreds of polygons, or even a couple of polygons and a specially prepared texture will be enough for approximately the same display of the model. Accordingly, at medium distances, it makes sense to use a model consisting of more triangles than the simplest model and less than the most complex one.

The LOD method is usually used when modeling and rendering 3D scenes, with several levels of complexity (geometric or otherwise) for objects, in proportion to their distance from the camera. The technique is often used by game developers to reduce the polygon count in a scene and improve performance. Close to the camera, models with maximum detail are used (more triangles, larger textures, more complex texturing) for the best possible image quality, and vice versa: when models move away from the camera, versions with fewer triangles are used to increase rendering speed. Changing the complexity, in particular the number of triangles in the model, can happen automatically, based on a single 3D model of maximum complexity, or it can be based on several pre-prepared models with different levels of detail. By using less detailed models at greater distances, the rendering cost is reduced with almost no loss in overall image detail.

The method is especially effective when the number of objects in the scene is large and they sit at different distances from the camera. Take a sports game such as a hockey or soccer simulator. Low-poly character models are used when they are far from the camera, and when the camera zooms in, the models are replaced by ones with a larger number of polygons. This example is very simple and shows the essence of the method with just two levels of detail, but nothing prevents creating more levels so that switching between them is not too noticeable and the detail gradually "grows" as the object approaches.

In addition to the distance from the camera, other factors can matter for LOD: the total number of objects on the screen (when one or two characters are in the frame, complex models are used, and when there are 10-20, simpler ones), or the frame rate (FPS thresholds are set at which the level of detail changes, for example, below 30 FPS the complexity of on-screen models is reduced, and at 60 it is increased again). Other possible factors are the speed of the object (you are unlikely to have time to examine a rocket in flight, but a snail is easy) and the importance of the character from the game's point of view (take the same football: the player model you control can get more complex geometry and textures, since you see it closest and most often). It all depends on the desires and capabilities of a particular developer. The main thing is not to overdo it: frequent and noticeable changes in the level of detail are annoying.

Let me remind you that the level of detail does not have to apply only to geometry; the method can also be used to save other resources: in texturing (although video chips already use mipmapping, it sometimes makes sense to swap textures on the fly for ones with different detail), in lighting techniques (close objects are lit with a complex algorithm and distant ones with a simple one - a small GLSL sketch of this case follows), in texturing techniques (complex parallax mapping on near surfaces, plain normal mapping on far ones), etc.
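A hedged GLSL sketch of that lighting LOD idea (the uniform names and the single hard threshold are illustrative assumptions; real engines usually fade between levels rather than switch abruptly):

#version 330 core
// Lighting LOD: full Blinn-Phong up close, plain Lambert diffuse far away.
uniform vec3 uLightDir;        // direction to the light (normalized, assumed)
uniform vec3 uCameraPos;
uniform float uLodDistance;    // beyond this distance the cheap model is used

in vec3 vNormal;
in vec3 vWorldPos;
out vec4 fragColor;

void main()
{
    vec3 n = normalize(vNormal);
    float diffuse = max(dot(n, uLightDir), 0.0);
    float dist = distance(uCameraPos, vWorldPos);

    vec3 color = vec3(diffuse);
    if (dist < uLodDistance)
    {
        // Close to the camera: add the more expensive specular term.
        vec3 viewDir = normalize(uCameraPos - vWorldPos);
        vec3 halfDir = normalize(uLightDir + viewDir);
        color += vec3(pow(max(dot(n, halfDir), 0.0), 32.0));
    }
    fragColor = vec4(color, 1.0);
}

The same pattern - pick a cheaper path beyond some distance - applies equally to geometry and texturing.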

It is not easy to show an example from a game: on the one hand, LOD is used to some extent in almost every game; on the other hand, it cannot always be demonstrated clearly, otherwise there would be little point in LOD itself.

But in this example it is still clear that the closest car model has maximum detail, the next two or three cars are very close to that level, and all the distant ones have visible simplifications; here are just the most significant: no rear-view mirrors, license plates, windshield wipers or additional lighting equipment. And the farthest model does not even cast a shadow on the road. This is the level-of-detail algorithm in action.

Global illumination

It is difficult to simulate realistic lighting of a scene: in reality each ray of light is reflected and refracted many times, and the number of these bounces is not limited. In 3D rendering the number of bounces depends heavily on the available computing resources; any scene calculation is a simplified physical model, and the resulting image is only an approximation of realism.

Lighting algorithms can be divided into two models: direct (local) illumination and global illumination. The local lighting model calculates only direct illumination, the light from light sources up to the first intersection of a ray with an opaque surface; the interaction of objects with each other is not taken into account. The model tries to compensate for this by adding a background, uniform (ambient) term, but that is the crudest approximation, a highly simplified stand-in for all the indirect light, which sets the color and intensity of objects in the absence of direct light sources.

Classic ray tracing, for example, computes the illumination of surfaces only from direct rays of the light sources, and any surface must be directly lit by a source to be visible. This is not enough for photorealistic results: besides direct illumination, secondary illumination by rays reflected from other surfaces must be taken into account. In the real world, rays of light bounce off surfaces several times until they fade out completely. Sunlight coming through a window lights the whole room, even though the rays cannot reach every surface directly. The brighter the light source, the more times its light is reflected. The color of a reflecting surface also affects the color of the reflected light; for example, a red wall will cast a red tint onto an adjacent white object. Here is the clear difference between a calculation without and with secondary lighting:

In the global illumination model, lighting is calculated taking into account the influence of objects on each other: multiple reflections and refractions of light rays off the surfaces of objects, caustics and subsurface scattering. This model gives a more realistic picture but complicates the process and requires significantly more resources. There are several global illumination algorithms; we will take a quick look at radiosity (indirect illumination calculation) and photon mapping (global illumination calculation based on photon maps pre-computed by tracing). There are also simplified methods of simulating indirect lighting, such as changing the overall brightness of a scene depending on the number and brightness of light sources in it, or placing a large number of point lights around the scene to imitate reflected light, but these are still far from a real GI algorithm.

The radiosity algorithm computes the secondary reflections of light rays from one surface to another, as well as from the environment onto objects. Rays from light sources are traced until their energy drops below a certain level or the rays reach a certain number of bounces. This is a common GI technique; the calculations are usually done before rendering, and the results can then be used for real-time rendering. The basic ideas of radiosity come from the physics of heat transfer. The surfaces of objects are divided into small areas called patches, and it is assumed that reflected light is scattered evenly in all directions. Instead of tracing each ray from the light sources, an averaging technique is used: the light is split up according to the energy it produces, and this energy is distributed proportionally among the surface patches.

Another method for calculating global illumination, proposed by Henrik Wann Jensen, is photon mapping. Photon mapping is another ray-traced global illumination algorithm used to simulate how light rays interact with objects in a scene. It computes secondary reflections of rays, refraction of light through transparent surfaces and diffuse reflections. The method calculates the illumination of points on a surface in two passes. The first is direct tracing of light rays with their secondary bounces; this preliminary pass is performed before the main rendering. It computes the energy of photons travelling from the light source to the objects in the scene. When a photon hits a surface, the intersection point, direction and energy of the photon are stored in a cache called a photon map. Photon maps can be saved to disk for later use so they do not have to be recomputed for every frame. Photon bounces are traced until the process stops after a certain number of reflections or once a certain energy level is reached. In the second pass, the illumination of the scene pixels by direct rays is rendered, and the data stored in the photon maps is added to the direct illumination energy.

Global illumination calculations that use a large number of secondary reflections take much longer than direct illumination calculations. There are techniques for hardware-accelerated real-time radiosity that use the capabilities of the latest generations of programmable video chips, but so far the scenes for which real-time global illumination is computed have to be quite simple, and many simplifications are made in the algorithms.

What has been used for a long time, though, is statically pre-computed global illumination, which is acceptable for scenes where the light sources and the large objects that strongly affect the lighting do not move. The global illumination calculation does not depend on the position of the observer, so if the positions of such objects and the parameters of the light sources do not change in the scene, the previously calculated illumination values can be reused. Many games do exactly this, storing the results of the GI calculation in the form of lightmaps.

There are also acceptable algorithms for simulating dynamic global illumination. For example, there is a simple real-time method of calculating indirect illumination of an object in a scene: render all objects with reduced detail (except the one whose lighting is being calculated) into a low-resolution cube map (which can also be used for dynamic reflections on the object's surface), then filter that texture (several passes of a blur filter), and apply the data from the resulting texture to illuminate the object in addition to direct lighting (a hedged GLSL sketch of that last step is given below). In cases where the dynamic calculation is too heavy, static radiosity maps can be used instead. An example from the game MotoGP 2 clearly shows the beneficial effect of even such a simple imitation of GI:
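A minimal GLSL sketch of that last step, adding the blurred environment sample to the direct lighting. The texture names and the very simple lighting model are illustrative assumptions:

#version 330 core
// Indirect light approximation: sample a heavily blurred environment cube map along the normal
// and add it to the direct lighting term.
uniform samplerCube uIrradianceCube;  // low-res cube map rendered around the object and blurred
uniform vec3 uLightDir;               // direct light direction (normalized, assumed)
uniform vec3 uAlbedo;

in vec3 vNormal;
out vec4 fragColor;

void main()
{
    vec3 n = normalize(vNormal);
    vec3 direct = uAlbedo * max(dot(n, uLightDir), 0.0);
    vec3 indirect = uAlbedo * texture(uIrradianceCube, n).rgb;  // "bounced" light from the surroundings
    fragColor = vec4(direct + indirect, 1.0);
}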



"itemprop =" image ">

"What are shaders?" Is a very common question from curious players and novice game developers. In this article, I will tell you clearly and clearly about these terrible shaders.

I consider computer games to be the engine of progress towards photorealistic images in computer graphics, so let's talk about what “shaders” are in the context of video games.

Before the first graphics accelerators appeared, all the work of rendering video game frames was done by the poor central processor.

Drawing a frame is actually a rather routine job: you need to take the "geometry" - the polygonal models (world, character, weapon, etc.) - and rasterize it. What is rasterization? A 3D model consists of tiny triangles, which the rasterizer turns into pixels (that is, to "rasterize" means to turn into pixels). After rasterization, texture data, lighting parameters, fog and so on are used to compute each resulting pixel of the game frame that will be shown to the player.

So, the central processing unit (CPU - Central Processing Unit) is too smart a guy to be stuck with such routine work. Instead, it makes sense to dedicate a hardware module that offloads the CPU so it can do more important intellectual work.

Such a hardware module is the graphics accelerator, or video card (GPU - Graphics Processing Unit). Now the CPU prepares the data and loads its colleague with the routine work. And since the GPU is not just one colleague but a whole crowd of minion cores, it copes with this kind of work in no time.

But we have not yet received an answer to the main question: What are shaders? Wait, I'm getting to this.

Pretty, interesting, near-photorealistic graphics required video card developers to implement many algorithms at the hardware level: shadows, lights, highlights and so on. This approach - algorithms implemented in hardware - is called the "fixed-function pipeline", and where high-quality graphics is required it is no longer found. Its place has been taken by the programmable pipeline.

Players' requests ("come on, give us some good graphics! surprise us!") pushed game developers (and, accordingly, video card manufacturers) towards ever more complex algorithms - until, at some point, the hard-wired hardware algorithms were simply no longer enough.

It was time for graphics cards to become more intelligent. The decision was made to let developers program the GPU pipeline blocks with arbitrary algorithms. That is, game developers and graphics programmers could now write programs for video cards.

And now, finally, we have come to the answer to our main question.

"What are shaders?"

A shader (from the English "shader", a shading program) is a program for the video card that is used in three-dimensional graphics to determine the final parameters of an object or image. It can include descriptions of light absorption and scattering, texture mapping, reflection and refraction, shading, surface displacement and many other parameters.

What can shaders do? For example, you can get an effect like this: a water shader applied to a sphere.

Graphic pipeline

The advantage of the programmable pipeline over its predecessor is that now programmers can create their own algorithms on their own, and not use a hard-wired set of options.

At first, video cards were equipped with several specialized processors that support different sets of instructions. Shaders were divided into three types, depending on which processor will execute them. But then video cards began to be equipped with universal processors that support instruction sets for all three types of shaders. The division of shaders into types has been preserved to describe the purpose of the shader.

In addition to graphics tasks with such intelligent video cards, it became possible to perform general-purpose calculations (not related to computer graphics) on the GPU.

For the first time, full-fledged support for shaders appeared in the GeForce 3 series video cards, but the rudiments were implemented back in the GeForce256 (in the form of Register Combiners).

Shader types

Depending on the stage of the pipeline, shaders are divided into several types: vertex, fragment (pixel) and geometry shaders; the newest pipelines also add tessellation shaders. We will not discuss the graphics pipeline in detail here - I am still deciding whether to write a separate article about it for those who want to study shaders and graphics programming. Write in the comments if you are interested, so I know whether it is worth the time.

Vertex shader

Vertex shaders animate characters, grass and trees, create waves on the water and do many other things. In a vertex shader the programmer has access to the data associated with a vertex, for example: the vertex coordinates in space, its texture coordinates, its color and its normal vector.
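To make this concrete, here is a minimal GLSL vertex shader sketch in the spirit of "waves on the water". The attribute layout, uniform names and wave constants are illustrative assumptions:

#version 330 core
// Vertex shader: displace vertices vertically with a sine wave to fake water motion.
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aTexCoord;

uniform mat4 uMVP;     // combined model-view-projection matrix
uniform float uTime;   // time in seconds, supplied by the application

out vec2 vTexCoord;
out vec3 vNormal;

void main()
{
    vec3 pos = aPosition;
    pos.y += 0.1 * sin(uTime * 2.0 + pos.x * 4.0);  // simple travelling wave
    vTexCoord = aTexCoord;
    vNormal = aNormal;
    gl_Position = uMVP * vec4(pos, 1.0);
}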

Geometric shader

Geometry shaders can create new geometry and can be used to create particles, modify model detail on the fly, create silhouettes, and more. Unlike vertex shaders, they process not just a single vertex but a whole primitive. A primitive can be a segment (two vertices) or a triangle (three vertices), and if information about adjacent vertices (adjacency) is available for a triangle primitive, up to six vertices can be processed.
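A small GLSL sketch of the "create particles" case: a geometry shader that expands each incoming point into a camera-facing quad. The uniform names and the assumption that positions arrive in view space are illustrative:

#version 330 core
// Geometry shader: expand each incoming point into a camera-facing quad (a particle billboard).
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4 uProj;     // projection matrix (positions are assumed to arrive in view space)
uniform float uSize;    // half-size of the particle

out vec2 gTexCoord;     // assumed to be consumed by a fragment shader that textures the particle

void main()
{
    vec4 center = gl_in[0].gl_Position;
    vec2 corners[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                             vec2(-1.0,  1.0), vec2(1.0,  1.0));
    for (int i = 0; i < 4; ++i)
    {
        gTexCoord = corners[i] * 0.5 + 0.5;
        gl_Position = uProj * (center + vec4(corners[i] * uSize, 0.0, 0.0));
        EmitVertex();
    }
    EndPrimitive();
}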

Pixel shader

Pixel shaders perform texture mapping, lighting, and various texture effects such as reflection, refraction, fog, Bump Mapping, etc. Pixel shaders are also used for post effects.

The pixel shader works with bitmap slices and textures - it processes data associated with pixels (for example, color, depth, texture coordinates). The pixel shader is used at the last stage of the graphics pipeline to form a fragment of an image.
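A minimal GLSL fragment (pixel) shader sketch: texturing plus simple diffuse lighting. The uniform and varying names are illustrative and match the vertex shader sketch above:

#version 330 core
// Fragment shader: sample a texture and modulate it by simple diffuse lighting.
uniform sampler2D uDiffuseMap;
uniform vec3 uLightDir;   // normalized direction to the light (an assumption)

in vec2 vTexCoord;
in vec3 vNormal;
out vec4 fragColor;

void main()
{
    vec3 albedo = texture(uDiffuseMap, vTexCoord).rgb;
    float lighting = max(dot(normalize(vNormal), uLightDir), 0.0);
    fragColor = vec4(albedo * lighting, 1.0);
}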

What do shaders write on?

Initially, shaders could be written in an assembler-like language, but later high-level, C-like shader languages appeared, such as Cg, GLSL and HLSL.

Such languages are much simpler than C, because the tasks they solve are much simpler. Their type system reflects the needs of graphics programmers, so they provide special data types: matrices, samplers, vectors, etc. (a tiny example below).
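A tiny GLSL example of those graphics-oriented types in use (the names are illustrative):

#version 330 core
// Demonstration of the built-in graphics types: matrices, samplers, vectors.
uniform mat3 uColorTransform;   // 3x3 matrix, e.g. a color-space conversion
uniform sampler2D uImage;       // texture sampler

in vec2 vUV;                    // 2-component vector
out vec4 fragColor;

void main()
{
    vec3 color = texture(uImage, vUV).rgb;
    fragColor = vec4(uColorTransform * color, 1.0);
}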

RenderMan

Everything we discussed above relates to realtime graphics. But there is also non-realtime graphics. What is the difference? Realtime means real time, here and now: delivering 60 frames per second in a game is a real-time process. Spending several minutes rendering a single complex frame of a cutting-edge animated film is non-realtime. The essence is in the time.

For example, we cannot get graphics of the quality of the latest Pixar animated films in real time. Huge render farms compute the light simulation with completely different, very expensive algorithms that produce almost photorealistic images.

Super-realistic graphics in Sandpiper

For example, look at this cute cartoon, grains of sand, bird feathers, waves, everything looks incredibly real.

* The video may be blocked on YouTube; if it does not open, google "pixar sandpiper" - the short film about the brave little sandpiper is very cute and fluffy. It will touch you and show how impressive computer graphics can be.

So, this is RenderMan from Pixar. It became the first shader programming language. The RenderMan API is the de facto standard for professional rendering and is used in all of Pixar's work and beyond.

Useful information

Now you know what shaders are, but besides shaders, there are other very interesting topics in game development and computer graphics that will certainly interest you:

  • is a technique for creating stunning effects in modern video games. Overview article and video with tutorials on creating effects in Unity3d
  • - If you are thinking about developing video games as a professional career or a hobby, this article contains an excellent set of recommendations "where to start", "what books to read", etc.

If you have any questions

As usual, if you still have questions, ask them in the comments - I will always answer. I would be very grateful for any kind word or for corrections of mistakes.

This tutorial will help you install shaders in Minecraft and thereby improve the game world by adding dynamic shadows, wind and grass noise, realistic water and much more.

It should be noted right away that shaders load the system quite heavily, and if you have a weak or integrated video card, we recommend refraining from installing this mod.

Installation consists of two stages: first you need to install the shader mod itself, and then add shaderpacks to it.

STEP # 1 - Installing the mod for shaders

  1. Download and install Java;
  2. Install OptiFine HD or ShadersMod;
  3. Unpack the resulting archive to any location;
  4. Run the jar file - it is an installer;
  5. The program will show you the path to the game; if everything is correct, click Yes, Ok, Ok;
  6. Go to .minecraft and create a shaderpacks folder there;
  7. Open the launcher and check that the new profile named "ShadersMod" is selected in the profile line; if not, select it manually;
  8. Next, you need to download shaderpacks.

STEP # 2 - Installing the shaderpack

  1. Download the shaderpack you are interested in (list at the end of the article);
  2. Press WIN + R;
  3. Go to .minecraft/shaderpacks. If there is no such folder, create it;
  4. Move or extract the shader archive into .minecraft/shaderpacks. The path should look like this: .minecraft/shaderpacks/SHADER_FOLDER_NAME/shaders/[.fsh and .vsh files inside];
  5. Start Minecraft and go to Settings > Shaders. Here you will see a list of available shaders; select the one you need;
  6. In the shader settings, enable "tweakBlockDamage" and disable "CloudShadow" and "OldLighting".

Sonic Ether's Unbelievable Shaders
Sildur's shaders
Chocapic13's Shaders
sensi277's yShaders
MrMeep_x3's Shaders
Naelego's Cel Shaders
RRe36's Shaders
DeDelner's CUDA Shaders
bruceatsr44's Acid Shaders
Beed28's Shaders
Ziipzaap's Shader Pack
robobo1221's Shaders
dvv16's Shaders
Stazza85 super shaders
hoo00's Shaders pack B
Regi24's Waving Plants
MrButternuss ShaderPack
DethRaid's Awesome Graphics On Nitro Shaders
Edi's Shader ForALLPc's
CrankerMan's TME Shaders
Kadir Nck Shader (for skate702)
Werrus's Shaders
Knewtonwako's Life Nexus Shaders
CYBOX shaderpack
CrapDeShoes CloudShade Alpha
AirLoocke42 Shader
CaptTatsu's BSL Shaders
Triliton's shaders
ShadersMcOfficial's Bloominx Shaders (Chocapic13's Shaders)
dotModded's Continuum Shaders
Qwqx71's Lunar Shaders (chocapic13's shader)

A shader is a program designed for execution by video card processors (GPUs). Shaders are written in one of the specialized programming languages (see below) and compiled into instructions for the GPU.

Application

Prior to the use of shaders, procedural texture generation was used (for example, used in the Unreal game to create animated textures of water and fire) and multitexturing (on which the shader language used in the Quake 3 game was based). These mechanisms did not provide the same flexibility as shaders.

With the advent of reconfigurable graphics pipelines, it became possible to perform general-purpose computations (GPGPU) on the GPU. The best-known GPGPU mechanisms are nVidia CUDA, Microsoft DirectCompute and the open-source OpenCL.
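As a rough illustration of such general-purpose work, here is a minimal GLSL compute shader (a later addition to OpenGL; the buffer layout and names are assumptions) that simply scales every element of a buffer:

#version 430
// GPGPU-style work in a compute shader: scale every element of a buffer, no graphics involved.
layout(local_size_x = 64) in;

layout(std430, binding = 0) buffer Data
{
    float values[];
};

uniform float uScale;

void main()
{
    uint i = gl_GlobalInvocationID.x;
    if (i < uint(values.length()))      // guard against the last, partially filled work group
        values[i] = values[i] * uScale;
}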

Shader types

Vertex shaders

The vertex shader operates on data associated with the vertices of polyhedra, for example, with the coordinates of a vertex (point) in space, with texture coordinates, with a vertex color, with a tangent vector, with a binormal vector, with a normal vector. The vertex shader can be used for viewing and perspective transformation of vertices, for generating texture coordinates, for calculating lighting, etc.

Sample vertex shader code in an assembly-like shader language:

vs.2.0
dcl_position v0 ; vertex position input
dcl_texcoord v3 ; texture coordinate input
m4x4 oPos, v0, c0 ; transform the position by the matrix in constant registers c0-c3
mov oT0, v3 ; pass the texture coordinates through unchanged

Geometric shaders

A geometric shader, unlike a vertex one, is capable of processing not only one vertex, but also a whole primitive. The primitive can be a segment (two vertices) and a triangle (three vertices), and if there is information about adjacent vertices (English adjacency) for a triangular primitive, up to six vertices can be processed. The geometry shader is capable of generating primitives on the fly (without using the central processor).

Geometry shaders were first used on Nvidia 8 series graphics cards.

Pixel (fragment) shaders

The pixel shader works with bitmap slices and textures - it processes data associated with pixels (for example, color, depth, texture coordinates). The pixel shader is used at the last stage of the graphics pipeline to form a fragment of an image.

Sample pixel shader code in an assembly-like shader language:

ps.1.4
texld r0, t0 ; sample the texture using texture coordinates t0
mul r0, r0, v0 ; modulate the texture color by the interpolated vertex color

Advantages and disadvantages

Advantages:

  • the ability to implement arbitrary algorithms (flexibility, a simpler and cheaper development cycle, more complex and realistic rendered scenes);
  • higher execution speed compared to running the same algorithm on the central processor.

Disadvantages:

  • the need to learn a new programming language;
  • the existence of different sets of instructions for GPUs from different manufacturers.

Programming languages

A large number of shader programming languages ​​have been created to meet the different needs of the market (computer graphics have many areas of application).

Usually, languages ​​for writing shaders provide the programmer with special data types (matrices, samplers, vectors, etc.), a set of built-in variables and constants (for interacting with the standard functionality of the 3D API).

Professional rendering

The following are shader programming languages ​​focused on achieving maximum rendering quality. In such languages, the properties of materials are described using abstractions. This allows people who do not have special programming skills and do not know the specifics of hardware implementations to write code. For example, artists can write these shaders to provide the "right look" (texture mapping, light placement, etc.).

Usually, the processing of such shaders is quite resource-intensive: creating photorealistic images requires a lot of computing power. Typically, most of the computation is done by large computer clusters or blade systems.

RenderMan

The shader programming language implemented in Pixar's RenderMan software was the first shader programming language. The RenderMan API, developed by Rob Cook and described in the RenderMan Interface Specification, is the de facto standard for professional rendering, used throughout Pixar's work and beyond.

OSL

OSL (Open Shading Language) is a shader programming language developed by Sony Pictures Imageworks and resembling the C language. It is used in the proprietary renderer "Arnold", developed by Sony Pictures Imageworks, and in the free program Blender, intended for creating three-dimensional computer graphics.

Real-time rendering

GLSL

GLSL (OpenGL Shading Language) is a shader programming language described in the OpenGL standard and based on the version of the C language described in the ANSI C standard. The language supports most ANSI C features and adds data types often used in three-dimensional graphics (vectors, matrices). In GLSL, the word "shader" refers to an independently compiled unit written in this language, and the word "program" refers to a collection of compiled shaders linked together.

Cg

Cg (C for graphics) is a shader programming language developed by nVidia together with Microsoft. The language is similar to C and to the HLSL language developed by Microsoft and included in DirectX 9. The language uses the types "int", "float" and "half" (a 16-bit floating-point number), supports functions and structures, and has peculiar optimizations in the form of "packed arrays".