I've applied the water planes material from Epic (the lake one) as well as the automatic landscape material to my drone game prototype. Both the water and the landscape material required some tweaking, but this was easy to do after carefully studying the inner workings of both (which I discussed in my two previous posts). I was having issues getting my game to cook correctly for Windows due to the long names of the automatic material's shader assets. To work around that I had to rename a few of the assets as well as the directories. Anyway, the result of what I have so far is shown below.
The wonderful folks at Epic released three water shaders (or, as they are known in UE4: materials) that are free to use in any Unreal-related project. I want to incorporate them into my drone scene, but before I do I want to understand how they work. After all, I'm more interested in the inner workings of the engine than in creating a game. Here are my notes on what I've understood about these water shaders. Pro tip: one thing that helped me a lot in understanding these complicated shaders is the node preview feature. Just right-click on any node and choose the 'Start Previewing Node' option.
The Lake Water material is the foundation for understanding the Ocean material and the Translucent material. Metallic is set to 0.8 for all water materials, presumably based on experimentation with what looks nice (that's my theory). The base color is a linear interpolation between two colors driven by a Fresnel node, so that one color shows up more when looking straight down at the surface while the other shows up more at grazing angles. Roughness is simply a linear interpolation between two parameter values. What's interesting is how the lerp is driven, because the same value will also drive the lerp between the two normal results calculated later on. Essentially, the alpha value used for lerping between the two roughness values and the two normal results is calculated with the help of a Motion_4WayChaos node applied to a variation mask texture. A Motion_4WayChaos node takes texture coordinates (UVs), a speed value, and a texture object, and creates a texture that moves in a seemingly random way. Looking inside this node, one can see it is implemented by adding together the results of four Panner nodes, each taking a slightly shifted UV input. It is worth pointing out that the UVs used in the shaders to index the different texture objects fed into the Motion_4WayChaos are obtained by projecting the pixel world coordinates. This matters because it is the reason two water planes can be placed side by side without seams. So, these UVs are scaled using parameters and plugged into the Motion_4WayChaos, and the result is passed through some Power nodes controlled by more parameters that provide extra customization for the material (things like variation amount and variation sharpness).
At the end of this, the resulting sample of the randomly moving texture is masked so that only the R component is used, and this becomes the alpha value driving the lerps between the two normal results and the two roughness values. At a high level, this creates a nice variation effect where the water seems stationary at some points on the surface of the lake and moves a lot at others.
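To make the Motion_4WayChaos idea concrete, here is a minimal Python sketch of what the node computes, assuming a generic `sample(u, v)` lookup in place of a texture fetch. The four directions and the 0.25 rescale are illustrative constants of my own, not the node's actual values.

```python
import math

def panner(uv, direction, speed, time):
    """UE4 Panner node: slide UVs along a direction over time."""
    return (uv[0] + direction[0] * speed * time,
            uv[1] + direction[1] * speed * time)

def motion_4way_chaos(sample, uv, speed, time):
    """Add four panned samples moving in different directions.

    sample(u, v) stands in for a texture lookup. Because the four
    pans drift apart over time, the summed result appears to move
    chaotically. The 0.25 keeps this sketch's output in range.
    """
    directions = [(0.1, 0.1), (-0.05, -0.15), (-0.1, 0.05), (0.05, -0.1)]
    total = 0.0
    for d in directions:
        u, v = panner(uv, d, speed, time)
        total += sample(u, v)
    return total * 0.25
```

At time zero all four pans coincide with the input UV, so the result equals a single static sample; as time advances the four contributions drift apart and the motion starts to look chaotic.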
Next I'll try to explain how the two normal results are calculated, or more specifically, interpolated at the material's normal input. The same concept from the variation mask texture explained above applies here: UVs coming from a pixel world-space coordinate projection are scaled by appropriate parameters and fed into a Motion_4WayChaos, using a normal map texture object and a given speed, to create a randomly moving normal map. The first of these normal results simply uses a small-waves normal map. A parameter controls the 'strength' of these wavelets; it is essentially an alpha value that lerps between the 4WayChaos result and (0, 0, 1). This result with the small wavelets ends up forming the deeper parts of the lake (the pieces of surface that don't show much variation and look almost stationary). The second normal result ends up forming the shallow parts of the lake (the pieces of surface that show more variation and look like they are moving faster). This second normal result is calculated by adding the result of plugging a medium-waves normal map into a 4WayChaos node to the result of plugging a small-waves normal map into another 4WayChaos. The (1, 1, 0) modulating the small-wave intermediate result, before it is added to the medium-wave result, is there because we want to perturb the medium waves with the small waves only in the x and y directions; the z is kept as obtained from the 4WayChaos plugged into the medium-waves normal map.
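As a sanity check on the normal blending just described, here is a rough Python sketch with my own naming; the real graph operates on normal map samples, not hand-typed vectors.

```python
def lerp(a, b, t):
    """Per-component linear interpolation between two vectors."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def deep_normal(small_waves, wavelet_strength):
    """First normal result: lerp from a flat normal (0, 0, 1) toward
    the randomly moving small-waves normal by the 'strength' parameter."""
    return lerp((0.0, 0.0, 1.0), small_waves, wavelet_strength)

def shallow_normal(medium_waves, small_waves):
    """Second normal result: perturb the medium waves with the small
    waves in x and y only -- the (1, 1, 0) mask -- keeping medium's z."""
    return (medium_waves[0] + small_waves[0],
            medium_waves[1] + small_waves[1],
            medium_waves[2])
```

With a strength of zero the deep-water normal collapses to the flat (0, 0, 1), which is exactly the almost-stationary look described above.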
A lot of the concepts explained above for the lake material apply to this material as well, so I will focus on the novelties and differences. As before, Motion_4WayChaos nodes are used to create randomly moving textures from 1) a large-wave normal map, 2) a small-wave normal map, 3) a large-wave height texture map, and 4) a sea foam texture map. And as before, the UVs come from a pixel world coordinate projection after being scaled by the appropriate scale parameters. One novel feature here is that for 1 and 3 the UVs are passed through a Rotator node; however, the time input is connected not to a Time node but to a constant. This means these texture maps are simply rotated clockwise by a constant amount. Also new: for 1 and 2 there is a small amplification network, which simply multiplies the 4WayChaos result with the result of a lerp, driven by a constant alpha (a parameter from the material instance), between two constant vectors. These amplified, randomly moving normal maps are added together to form the material's normal input. Actually, the result of this addition is normalized and transformed from tangent space to world space in order to be used as the normal input of the Fresnel node, which in turn linearly interpolates between two water colors (shallow and deep). The red channel of 3's result is used, after going through some parametrized customization math such as Luminance Bias and Displacement, to create a displacement vector that is added to the vertex normal in world space, providing the world displacement input of the material. This is what creates the up-and-down wavy movement of the highly tessellated plane. It is worth mentioning that the tessellation factor is a constant parameter connected to the material's 'Tessellation Multiplier' input. Similarly, there are constant parameter values for the Roughness, Metallic, and Specular inputs.
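My reading of the displacement path, as a hedged Python sketch; the parameter names mirror the Luminance Bias and Displacement parameters mentioned above, but the exact node wiring is my assumption.

```python
def world_displacement(vertex_normal_ws, height_r, luminance_bias, displacement):
    """Scale the world-space vertex normal by the biased red channel of
    the moving height texture; this pushes vertices up and down along
    the normal, creating the wavy motion of the tessellated plane."""
    amount = (height_r - luminance_bias) * displacement
    return tuple(n * amount for n in vertex_normal_ws)
```

For a flat plane the vertex normal is (0, 0, 1), so the displacement is purely vertical and its sign flips around the Luminance Bias value, letting the surface dip below as well as rise above the rest position.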
Another new concept is that of static switch nodes inside the shader. There are two: Add Sea Foam (X) and Add Reflection Map (Y). They provide two alternative outputs based on whether they are set to true or false. The last input calculated in this material is Base Color, and its result relies heavily on these switches.
When X is FALSE and Y is FALSE: we take the Fresnel water color calculated earlier with the help of the normal map result converted into world coordinates (used as the normal input of the Fresnel node), and we are done.
When X is TRUE and Y is FALSE: we take the Fresnel water color from above and linearly interpolate it with the moving foam texture using the foam texture's green channel (1 where it's 100% foam and 0 where it's 100% ocean). Let's call this the Fresnel foam color. This Fresnel foam color is then linearly interpolated with the regular Fresnel water color using the red channel of the moving height texture (after it has been scaled by various parametrized math nodes). This means the base color will be fully Fresnel foam color where the wave rises the most and fully Fresnel water color where the wave rises the least.
When X is TRUE and Y is TRUE: we take the result from above and add it to the result of the reflection cubemap network. The cubemap network takes the result of sampling the reflection cubemap and modulates it against the product of the moving foam texture's green channel, the moving wave elevation texture's red channel, and the Fresnel output (the one calculated with the moving normal maps).
When X is FALSE and Y is TRUE: we take the Fresnel water color and add it to the result of the reflection cubemap network. The cubemap network in this case takes the result of sampling the reflection cubemap and modulates it against the resulting Fresnel water color.
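Putting the four switch combinations together, here is a loose Python sketch. The reflection modulation factor differs between the Y-on cases as described above; this sketch simplifies it to a single scalar, so treat it as pseudocode-in-Python rather than the actual graph.

```python
def lerp3(a, b, t):
    """Per-channel linear interpolation between two (r, g, b) colors."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def ocean_base_color(fresnel_water, foam_color, foam_g, height_r,
                     reflection, add_sea_foam, add_reflection_map):
    """Base color under the Add Sea Foam (X) / Add Reflection Map (Y)
    static switches. Colors are (r, g, b) tuples."""
    color = fresnel_water
    if add_sea_foam:
        # foam shows where the foam mask and the wave height are high
        fresnel_foam = lerp3(fresnel_water, foam_color, foam_g)
        color = lerp3(fresnel_water, fresnel_foam, height_r)
    if add_reflection_map:
        # simplified: reflection cubemap sample modulated by one scalar
        modulation = foam_g * height_r if add_sea_foam else 1.0
        color = tuple(c + r * modulation for c, r in zip(color, reflection))
    return color
```

Because these are static switches, the engine compiles a separate shader permutation per combination; the branches here only emulate that choice at runtime.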
Here two new nodes are introduced: Scene Depth and Pixel Depth. The Scene Depth node samples the largest depth value, i.e., the depth of whatever lies under the water surface, while the Pixel Depth node samples the depth of the water surface itself from the camera. Hence the ratio SceneDepth / PixelDepth increases as the water becomes deeper. The author takes the current pixel world position and biases it relative to the water height (subtracting the water height from its z component). Call this PixWorldBiasPos. Then he takes the camera world position and biases it the same way. Call this CamWorldBiasPos. Then:
CamWorldBiasPos + [(PixWorldBiasPos - CamWorldBiasPos) * (SceneDepth / PixelDepth)]
Where PixWorldBiasPos - CamWorldBiasPos = Cam2PixWaterSurfaceV = Vector from camera to water surface
CamWorldBiasPos + [Cam2PixWaterSurfaceV * (SceneDepth / PixelDepth)]
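A quick numeric check of this formula, assuming for simplicity that the depth ratio is passed in directly (in the shader it comes from the SceneDepth and PixelDepth nodes):

```python
def underwater_position(cam, pix, depth_ratio, water_height):
    """Reconstruct the world position of the point under the water
    surface by extending the camera-to-surface ray by the ratio
    SceneDepth / PixelDepth. Positions are biased so the water
    surface sits at z = 0."""
    cam_b = (cam[0], cam[1], cam[2] - water_height)   # CamWorldBiasPos
    pix_b = (pix[0], pix[1], pix[2] - water_height)   # PixWorldBiasPos
    cam2pix = tuple(p - c for c, p in zip(cam_b, pix_b))  # Cam2PixWaterSurfaceV
    return tuple(c + v * depth_ratio for c, v in zip(cam_b, cam2pix))
```

With the camera at (0, 0, 110), a surface pixel at (100, 0, 100) and a water height of 100, a depth ratio of 2 lands at z = -10 (ten units below the surface), while a ratio of 1 lands exactly at z = 0, the water surface itself.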
Note what happens at the shore: SceneDepth / PixelDepth ≈ 1.0, hence the result is CamWorldBiasPos + Cam2PixWaterSurfaceV, the point where the camera ray meets the water surface, whose biased z component is zero.
Then the z component of this resulting vector is taken and passed through a 1-x node. In our shore example, 1 - 0 = 1. This is then divided by the 'shore depth' parameter to produce the alpha of a lerp between zero opacity and the base opacity, creating a smooth transition between water and shore. When the water is deep, z is less than zero, so 1 - z is greater than one, ensuring full base opacity in the lerp.
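The opacity fade can be sketched like this (the explicit clamp is my assumption; a lerp alpha is effectively saturated in the material anyway):

```python
def water_opacity(intersection_z, shore_depth, base_opacity):
    """Opacity lerp alpha from the reconstructed underwater z:
    z is 0 right at the shoreline and goes negative as the water
    deepens, so 1 - z grows with depth."""
    alpha = (1.0 - intersection_z) / shore_depth
    alpha = min(max(alpha, 0.0), 1.0)
    return base_opacity * alpha  # lerp(0, base_opacity, alpha)
```

At the shoreline the alpha is 1 / shore_depth, so a large shore depth makes the water's edge nearly transparent; once 1 - z exceeds shore_depth, the clamp holds the opacity at its full base value.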
This 1 - z value is also used to drive the lerp between the deep water color and the shallow water color, in a similar manner to the opacity, except that a divide-by-parameter factor and a power node with a parametrized exponent have been added in the middle for customization. Eventually the result of the deep-vs-shallow water lerp is taken into account in the material's base color calculation.
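And the color side of the same idea, with the divide and power thrown in (the parameter names here are illustrative, not the material's):

```python
def depth_color(deep_color, shallow_color, one_minus_z, depth_scale, depth_exp):
    """Blend shallow and deep water colors from the same 1 - z value;
    the divide (depth_scale) and the power (depth_exp) shape where
    and how sharply the transition happens."""
    t = (max(one_minus_z, 0.0) / depth_scale) ** depth_exp
    t = min(t, 1.0)
    return tuple(s + (d - s) * t for s, d in zip(shallow_color, deep_color))
```

A larger exponent keeps the shallow color over more of the gradient and then snaps to the deep color quickly, which is the kind of artistic control the extra nodes provide.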
For the normals and roughness inputs of the material, the exact same node network described for the lake water material is used: the same idea of the variation mask texture driving the blend of the medium and small normal maps.
The most complicated network in this material is the base color network. It gets complicated because this implementation also calculates refraction as part of the base color, as opposed to using the material's refraction input. At a high level, the author's intent is to build in refraction behavior where shallow parts of the water don't distort much and deeper areas do. To achieve this, he leverages the final normal's (x, y) result to index into a render-to-texture (here called SceneTexture:BaseColor). Basically, he takes the pixel's screen coordinates and reads back the value from SceneTexture:BaseColor. By itself this would do nothing, since it would only read back the value that was just written. The key to understanding it is that these screen-space UVs are distorted using the final normal's x and y values (which themselves have been rescaled using the depth factor mentioned at the beginning). Because the distorted normals are added to the screen-space pixel UVs, the value read at pixel (u, v) contains contributions from the neighboring pixel at (u + x, v + y). This is how the distortion works, and because of the depth factor, the distortion is greater in deeper water. This distorted value is modulated by the shallow-vs-deep water color lerp. The result is further modulated by a Fresnel blend between two water colors. This Fresnel output also controls the alpha of the final lerp into the base color input, which blends the reflection cubemap result with the final calculated color. Hence the reflections are greater when the camera is almost perpendicular to the surface.
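The refraction trick boils down to offsetting the screen-space UV before the read-back. A minimal sketch (the 0.05 strength constant is made up):

```python
def refract_uv(screen_uv, normal_xy, depth_factor, strength=0.05):
    """Distort the screen UV by the water normal's x and y, scaled by
    depth: shallow water (small depth_factor) barely shifts the
    read-back position, deep water shifts it more."""
    return (screen_uv[0] + normal_xy[0] * depth_factor * strength,
            screen_uv[1] + normal_xy[1] * depth_factor * strength)
```

Sampling SceneTexture:BaseColor at this offset UV pulls in a neighboring pixel's color instead of the pixel's own, which is exactly the distortion effect described above.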
I've finally pulled the trigger on the Automatic Landscape Material available on the marketplace. It was $65, but let me tell you, it was money well spent! Basically, this landscape material shades the landscape procedurally based on terrain features such as height and slope angles. The pack is set up as a master material with 13 instanced materials. The master material is fairly complex, as it offers a lot of customization, so I'll devote this post to describing what the different input parameters do, based on my latest reverse engineering efforts. Regarding parameters, I'll be talking mainly about the main data input parameters and skipping over the more artistic ones, such as layer textures and layer base colors. I will also provide notes (mainly for self-reference) on the mechanics of the different subcomponents inside the master material. Kudos to the author; the material works great, looks beautiful, and is very clever.
Material parameter inputs