I've applied the water planes material from Epic (the lake one) as well as the automatic landscape material to my drone game prototype. Both the water and the landscape material required some tweaking, but this was easy to do after carefully studying the inner workings of both (which I discussed in my two previous posts). I was having issues getting my game to cook correctly for Windows due to the long names in the automatic material shader assets. To get around that I had to rename a few of the assets as well as the directories. Anyways, the result of what I have so far is shown below.
The wonderful guys from Epic released three water shaders (or as they are known in UE4: materials) that are free to use in any Unreal-related project. I want to incorporate them into my drone scene, but before I do I want to understand how they work. After all, I'm more interested in the inner workings of the engine than in creating a game. Here are my notes on what I've understood about these water shaders. Pro tip: one thing that helped me a lot in understanding these complicated shaders is the node preview feature. Just right click on any node and choose the 'Start Previewing Node' option.
The Lake Water material is the foundation for understanding the Ocean material and the Translucent material. Metallic is set to 0.8 for all the water materials; my theory is this value was picked by experimenting with what looks nice. The base color is a linear interpolation between two colors driven by a Fresnel node, so that one color shows up more when looking straight down at the surface while the other shows up more when looking at the surface at a grazing angle.
Roughness is simply a linear interpolation between two parameter values. What's interesting is how this lerp is driven, because the same value will also drive the lerp for the two normal results calculated later on. Essentially, the alpha used for lerping between the two roughness values and the two normal results is calculated with the help of a Motion_4WayChaos node applied to a variation mask texture. A Motion_4WayChaos node takes in texture coordinates (UVs), a speed value, and a texture object, and creates a randomly moving version of that texture. Looking inside this node, one can see it is implemented by adding together the results of four Panner nodes, each taking a slightly shifted UV input. It is worth pointing out that the UV inputs used in these shaders to index the texture objects fed into the Motion_4WayChaos are obtained by projecting the pixel world coordinates. This is important since it's the reason why two water planes can be put together side by side without seams.
So, these UVs are scaled using parameters and plugged into the Motion_4WayChaos, and the result of that is passed through some Power nodes controlled by more parameters that provide extra customization for the material (things like variation amount and variation sharpness). At the end of this, the resulting sample of the randomly moving texture is masked so that only the R component is used, and this becomes the alpha value driving the lerps for the two normal results and the two roughness values. At a high level, this creates a nice variation effect where the water seems stationary at some points on the surface of the lake and moving a lot at others.
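Here's a quick CPU-side sketch of what a 4-way panner chaos node boils down to. The pan directions, the UV offsets, and the divide-by-4 at the end are my own illustrative choices (not the node's exact constants), and sampleTexture is a procedural stand-in for a real wrapped texture fetch:

#include <cmath>

struct Texel { float r, g, b; };

// Stand-in for a texture fetch at wrapped UV (u, v).
static Texel sampleTexture(float u, float v)
{
    const float s = 0.5f + 0.5f * std::sin(6.2831853f * u) * std::cos(6.2831853f * v);
    return { s, s, s };
}

// Four panners scroll the same texture in different directions from
// shifted UV offsets; summing the taps produces pseudo-random motion.
static Texel motion4WayChaos(float u, float v, float speed, float time)
{
    const float dir[4][2]    = { { 0.1f, 0.1f }, { -0.1f, -0.1f }, { -0.1f, 0.1f }, { 0.1f, -0.1f } };
    const float offset[4][2] = { { 0.0f, 0.0f }, { 0.42f, 0.17f }, { 0.87f, 0.29f }, { 0.19f, 0.64f } };

    Texel sum = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 4; ++i)
    {
        const float pu = u + offset[i][0] + dir[i][0] * speed * time; // panner i
        const float pv = v + offset[i][1] + dir[i][1] * speed * time;
        const Texel t = sampleTexture(pu, pv);
        sum.r += t.r; sum.g += t.g; sum.b += t.b;
    }
    sum.r *= 0.25f; sum.g *= 0.25f; sum.b *= 0.25f; // keep the result in [0, 1]
    return sum;
}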
Next I'll try to explain how the two normal results are calculated, or more specifically, interpolated at the material's Normal input. The same concept from the variation mask texture explained above applies here: UVs coming from a pixel world space coordinates projection are scaled by appropriate parameters and fed into a Motion_4WayChaos with a normal map texture object and a given speed to create a randomly moving normal map. The first of these normal results simply uses a small waves normal map. A parameter controls the 'strength' of these wavelets; it is essentially an alpha value that lerps between the 4WayChaos result and (0,0,1). This result with the small wavelets ends up forming the deeper parts of the lake (the pieces of surface that don't show much variation and look almost stationary). The second normal result ends up forming the shallow parts of the lake (the pieces of surface that show more variation and look like they're moving faster). This second normal result is calculated by mixing (adding) the result of plugging a medium-size waves normal map into a 4WayChaos node with the result of plugging a small-size waves normal map into another 4WayChaos. The (1,1,0) modulating the small wave intermediate result, before it is added to the medium wave result, is there because we want to perturb the medium waves with the small waves only in the x and y directions; the z is kept as obtained from the 4WayChaos plugged into the medium waves normal map.
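As a small illustration, the (1,1,0) trick boils down to something like this (the minimal vector type and the final normalize are mine; inputs are assumed to be tangent-space normals already decoded from the 4WayChaos samples):

#include <cmath>

struct Vec3 { float x, y, z; };

// The small waves perturb the medium waves only in x and y; z is kept
// from the medium-wave normal, which is what the (1,1,0) multiply does.
static Vec3 combineWaveNormals(const Vec3& mediumN, const Vec3& smallN)
{
    Vec3 n = { mediumN.x + smallN.x, mediumN.y + smallN.y, mediumN.z };
    const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}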
A lot of the concepts explained above for the lake material apply to this material as well, so I will focus on the novelties and differences. As before, Motion_4WayChaos is used to create randomly moving textures of 1) a large wave normal map, 2) a small wave normal map, 3) a large wave height texture map, and 4) a seafoam texture map. And as before, the UVs come from a pixel world coordinates projection scaled by the appropriate scale parameters. One novel feature is that for 1 and 3 the UVs are passed through a Rotator node whose time input is connected to a constant rather than a Time node, meaning these texture maps are simply rotated clockwise by a constant amount. Also new: for 1 and 2 there is a small amplification network, which consists of multiplying the 4WayChaos result by the result of a lerp between two constant vectors with a constant alpha (a parameter in the material instance). These amplified, randomly moving normal maps are added together to form the material's Normal input. Actually, the result of this addition is also normalized and transformed from tangent space to world space in order to be used as the normal input of the Fresnel node that in turn linearly interpolates between the two water colors (shallow and deep). The red channel of 3 is used, after going through some customization math parameters such as Luminance Bias and Displacement, to create a displacement vector along the vertex normal in world space, thereby providing the World Displacement input for the material. This is what creates the up-and-down wavy movement of the highly tessellated plane. It is worth mentioning that the tessellation factor is a constant parameter connected to the material's 'Tessellation Multiplier' input. Similarly, there are constant parameter values for the Roughness, Metallic and Specular inputs.
Another new concept is that of switch nodes inside the shader. There are two: Add Sea Foam (which I'll call X) and Add Reflection Map (Y). They provide two alternative outputs depending on whether they are set to true or false. The last input calculated in this material is Base Color, and its result relies heavily on these switches.
When X is FALSE and Y is FALSE: We take the Fresnel water color calculated earlier, using the normal map result converted to world coordinates as the normal input of the Fresnel node, and we are done.
When X is TRUE and Y is FALSE: We take the Fresnel water color from above and linearly interpolate it with the moving foam texture by using the foam texture's green channel (1 where it's 100% foam and 0 where it's 100% ocean). Let's call this the Fresnel foam color. This Fresnel foam color is then linearly interpolated with the regular Fresnel water color using the red channel of the moving height texture (after it has been scaled by various parametrized math nodes). This means that the base color will be fully Fresnel foam color where the wave rises the most and fully Fresnel water color where it rises the least.
When X is TRUE and Y is TRUE: We take the result from above and add it to the result of the reflection cubemap network. The cubemap network takes the sampled reflection cubemap and modulates it against the product of the moving foam texture's green channel, the moving wave elevation texture's red channel, and the Fresnel output (the one calculated with the moving normal maps).
When X is FALSE and Y is TRUE: We take the plain Fresnel water color and add it to the result of the reflection cubemap network. In this case, the cubemap network takes the sampled reflection cubemap and modulates it against the Fresnel water color.
Here two new nodes are introduced: Scene Depth and Pixel Depth. The Scene Depth node samples the largest depth value, i.e., the depth of whatever lies under the water surface, while the Pixel Depth node samples the depth of the water surface itself from the camera. Hence the ratio SceneDepth / PixelDepth increases as the water becomes deeper. The author takes the current pixel world position and biases it relative to the water height (subtracts the water height from its z component). Call this PixWorldBiasPos. Then he takes the camera world position and biases it the same way. Call this CamWorldBiasPos. Then:
CamWorldBiasPos + [(PixWorldBiasPos - CamWorldBiasPos) * (SceneDepth / PixelDepth)]
Where PixWorldBiasPos - CamWorldBiasPos = Cam2PixWaterSurfaceV = Vector from camera to water surface
CamWorldBiasPos + [Cam2PixWaterSurfaceV * (SceneDepth / PixelDepth)]
Note what happens at the shore: SceneDepth / PixelDepth ≈ 1.0. Hence the result is CamWorldBiasPos + Cam2PixWaterSurfaceV, a ray from the camera to the water surface whose endpoint has a z component of zero.
Then the z component of this result is taken and passed through a 1-x node. In our shore example, 1 - 0 = 1. This is then divided by the 'shore depth' parameter to produce the alpha of a lerp between 0 opacity and the base opacity, which creates a smooth transition between water and shore. When the water is deep, x is less than zero, so 1 - x is greater than one, ensuring full base opacity in the lerp.
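Putting the reconstruction and the 1-x node together, the shore fade reduces to something like this sketch (scalar stand-ins and names are mine, not the material's). Here reconstructedZ is the z of CamWorldBiasPos + Cam2PixWaterSurfaceV * (SceneDepth / PixelDepth): roughly 0 at the shore, increasingly negative as the water deepens.

#include <algorithm>

static float shoreFadeOpacity(float reconstructedZ, float shoreDepth, float baseOpacity)
{
    // 1-x node, then divide by the 'shore depth' parameter; the clamp keeps
    // deep water at full base opacity while the shore fades out.
    const float alpha = std::clamp((1.0f - reconstructedZ) / shoreDepth, 0.0f, 1.0f);
    return alpha * baseOpacity; // lerp(0, baseOpacity, alpha)
}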
This 1 - x is also used to drive the lerp between the deep water color and the shallow water color, in much the same way as the opacity, except that a divide-by-parameter factor and Power nodes with parameter exponents have been added in the middle for extra customization. Eventually, the result of the deep vs. shallow water lerp is taken into account in the material's Base Color input calculation.
For the Normal and Roughness inputs of the material, the exact same node network described for the lake water material is used: the same idea of the macro variation texture along with the medium and small normal maps.
The most complicated network in this material is the Base Color input network. It gets complicated because this implementation also computes refraction in the base color, as opposed to using the material's Refraction input. At a high level, the author's intent is to build in a refraction behavior where shallow parts of the water don't distort much and deeper areas do. To achieve this, he leverages the final normal (x, y) result to index into a render-to-texture (here called SceneTexture:BaseColor). Basically, he takes the pixel's screen coordinates and reads back the value from SceneTexture:BaseColor. By itself this would do nothing, since it would only read back the value previously set and set it again. The key to understanding this is that the screen-space UVs are distorted using the final normal's x,y values (which have themselves been rescaled by the depth factor mentioned at the beginning). Because these offsets are added to the screen-space pixel UVs, the value read at pixel (u, v) contains the contribution of the neighboring pixel at (u+x, v+y). This is how the distortion works, and because of the depth factor, the distortion is greater in deeper water. This distortion value is modulated by the shallow vs. deep water color lerp. The result is further modulated by a Fresnel result between two water colors. That same Fresnel output also controls the alpha of the final lerp into the Base Color input, which chooses between the reflection cubemap result and the final calculated color. Hence the reflections are greater when looking at the surface at a grazing angle.
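Here's a sketch of that fake refraction read-back. sampleSceneColor stands in for SceneTexture:BaseColor, and all names, the clamping, and the placeholder gradient are illustrative, not the material's exact nodes:

#include <algorithm>

struct Color { float r, g, b; };

// Stand-in for reading SceneTexture:BaseColor at screen UV (u, v);
// a real shader would sample the scene render target here.
static Color sampleSceneColor(float u, float v)
{
    return { u, v, 0.5f }; // placeholder gradient so the sketch is self-contained
}

// The screen UV used for the read-back is offset by the water normal's
// x,y, scaled by the depth factor so deep water distorts more.
static Color refractedSceneColor(float u, float v,
                                 float normalX, float normalY,
                                 float depthFactor, float distortionScale)
{
    const float ru = std::clamp(u + normalX * depthFactor * distortionScale, 0.0f, 1.0f);
    const float rv = std::clamp(v + normalY * depthFactor * distortionScale, 0.0f, 1.0f);
    return sampleSceneColor(ru, rv); // pulls in neighboring pixels -> distortion
}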
I've finally pulled the trigger on the Automatic Landscape Material available on the marketplace. It was $65, but let me tell you, it was money well spent! Basically, this landscape material shades the landscape procedurally based on terrain features such as height and slope angle. The pack is set up as a master material with 13 instanced materials. The master material is fairly complex, as it offers a lot of customization, so I'll devote this post to describing what the different input parameters do, based on my latest reverse engineering efforts. I'll mainly cover the main data input parameters and skip over the more artistic ones, such as layer textures and layer base colors. Also, I will provide notes (mainly for self reference) on the mechanics of the different subcomponents inside the master material. Kudos to the author; the material works great, looks beautiful, and is very clever.
Material parameter inputs
Drone Mesh and Material
I found the drone mesh on a website that had many free 3D meshes. However, I've lost the exact link where I got it from :( I do remember that the mesh was in a non-FBX format, so I had to import it into Blender first just to export it to FBX right afterwards. In the import/export process I lost the piece of the mesh that represented the blades. Hence, rather than trying to create some blades in Blender and then somehow animate them, I decided to pretend my drone is a military prototype that uses jet propulsion...
For the military-looking material, I used a navy camo texture for the base color and the noise texture shown below as my roughness map; this is how I'm achieving the worn look on the surface. Btw, I've made this a fully metallic material in UE4's physically based shading model.
Finally, I generated a physics body for the mesh inside the editor. I used a simple bounding volume, shown above as the purple box wireframe. Nothing fancy. Later, through script, I attached 4 thrusters to this box, and this is how I'm achieving flight.
The blueprint class components
All in all, this blueprint class contains the following components; a screenshot of the components view for this blueprint class is shared below.
This is the mesh described above.
There are two cameras. The first one is attached to a spring arm, which is in turn attached to the mesh, and is active by default; it provides the third person camera. The second camera sits right at the nose of the drone to provide a first person camera and is deactivated by default. Inside the event graph I'm using the 'Select' button release event on my PS3 gamepad to trigger a FlipFlop node that switches between the third and first person cameras.
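In C++, the flip-flop logic would look roughly like this (ADronePawn, ThirdPersonCamera and FirstPersonCamera are hypothetical names for my blueprint class and its two UCameraComponent members):

#include "Camera/CameraComponent.h"

void ADronePawn::ToggleCamera()
{
    // FlipFlop equivalent: whichever camera is currently active gets
    // deactivated and the other one takes over.
    const bool bThirdPersonActive = ThirdPersonCamera->IsActive();
    ThirdPersonCamera->SetActive(!bThirdPersonActive);
    FirstPersonCamera->SetActive(bThirdPersonActive);
}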
In the picture below, the red arrows represent the positive x-axis of each of the four physics thruster components. Each thruster's strength is set through script and depends on the gamepad's left and right thumbstick axis inputs. More specifically, the left thumbstick y-axis controls the base strength of each thruster, while the right thumbstick controls pitch and roll via its y-axis and x-axis respectively. In order to implement flying controls, the overall strength of each thruster is calculated as follows:
Thrust = overall thrust strength (left thumbstick y-axis) - pitch amount (right thumbstick y-axis) - roll amount (right thumbstick x-axis).
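In code form, the mixing looks roughly like the sketch below. The post's formula gives the magnitudes; the per-thruster signs (front/rear for pitch, left/right for roll) are my assumed convention:

// Thrusters indexed 0: front-left, 1: front-right, 2: rear-left, 3: rear-right.
void ComputeThrusterStrengths(float ThrottleY, float PitchY, float RollX, float OutThrust[4])
{
    const float PitchSign[4] = { +1.0f, +1.0f, -1.0f, -1.0f }; // front vs. rear
    const float RollSign[4]  = { +1.0f, -1.0f, +1.0f, -1.0f }; // left vs. right
    for (int i = 0; i < 4; ++i)
    {
        // Each stick axis is in [-1, 1]; the result drives each thruster's strength.
        OutThrust[i] = ThrottleY - PitchSign[i] * PitchY - RollSign[i] * RollX;
    }
}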
Jet Exhaust Particle System
This is the exact same particle system discussed in my previous post. As pointed out there, the particles' initial speed was made a parameter so that it can be set through script proportionally to the strength of each thruster component. Controlling the initial speed of the particles is a proxy for the intensity of the thruster visual effect.
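Driving that exposed parameter from code looks roughly like this (the ExhaustPSC member and the "InitialSpeed" name are hypothetical; the name must match whatever was typed into the Cascade module):

#include "Particles/ParticleSystemComponent.h"

void ADronePawn::UpdateExhaust(float ThrusterStrength)
{
    const float SpeedScale = 10.0f; // illustrative tuning factor
    ExhaustPSC->SetFloatParameter(TEXT("InitialSpeed"), ThrusterStrength * SpeedScale);
}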
I found a rocket booster sound on www.freesound.org and added it as an audio component to my blueprint. I then make its pitch and volume change proportionally to the left thumbstick y-axis (the one that controls the main thruster strength).
This particle system was designed using three emitters, briefly discussed below: a combustion emitter, a smoke emitter, and a heat emitter.
Combustion is achieved with a GPU Sprites emitter of 500 particles that are put through a vector field designed to push the particles in a jet-stream-like motion. The vector field is set up as a local vector field with a constant 'intensity' value and a 'tightness' value of zero, meaning it only influences the particles' velocity (as opposed to completely taking it over). Therefore, the particles need an initial velocity when spawned. The velocity cone module spawns the particles with an initial velocity along a cone with a small aperture angle that always points downwards. The angle of the combustion cone is fixed, but the velocity distribution is set up as a 'Distribution Float Particle Parameter' so that it can be controlled from code or blueprints. Exposing this parameter allows controlling how strong the overall combustion effect looks; in this particular case, it's hooked to the strength of the thrusters, which is in turn controlled by pushing the gamepad's left thumbstick up and down.
It is worth mentioning that an alternative way of controlling the overall strength of the combustion effect would have been to use a global vector field instead. In that setup, the blueprint/code defines and owns the vector field, allowing it to modify the field's intensity dynamically (instead of modifying the velocity cone's velocity parameter).
A color over life module is used to create the blue-to-red gradient effect, and a size over life module lets the sprites grow to twice their size as they age.
Lastly, another key module for achieving this combustion effect is the SubImage Index module (sub UV). This module allows a sprite to play a sprite animation; in this case, the explosion sprite atlas shown below was used.
It should be noted that there are two ways to achieve sprite animations via sub UVs. One option is to create a custom material that uses time to programmatically sweep the UVs horizontally and vertically across the sprite atlas. Another, simpler approach, which was used here, is to create a custom material and just use a Particle SubUV expression. This expression automatically does the horizontal and vertical UV sweeps (Cascade tells it how many sub-images there are horizontally and vertically); all that is left to do is provide the sprite atlas texture. When using this approach, the SubImage Index module still needs to be created in order to define how many sub-images are to be used, and the number of sub-images horizontally and vertically is defined through the SubUV section parameters in the 'Required' module.
Finally, a couple of comments regarding collision. I found that GPU collision only works when the 'Use Local Space' option is disabled in the Required module. According to the Unreal forums, this is because "collision on GPU particles calls from Scene Depth and cannot function in local space". However, disabling the local space option causes the particle effect to trail behind when flying, and I didn't like that. Hence, I disabled collisions on this combustion emitter (the Cascade screenshot may not reflect that tho).
Originally the smoke emitter was meant to be a simple CPU sprite emitter simulating spark collisions when the jet exhaust got too close to the ground; this was meant to overcome the limitation of the GPU sprites above not supporting collisions in local space. However, I couldn't get the sparks to look natural enough, and eventually this emitter evolved into smoke. The particles are simply shot downward with an initial velocity and made to collide with any static actor in the world. A maximum of two collisions per particle was chosen, along with a damping factor vector that makes the particles (smoke) bounce upward when colliding. To create the sprite material, a smoke alpha texture was used in a translucent material configuration. As usual, the particle color modulates the material's emissive color as well as its opacity. Hence, to create smooth smoke, a color over life was chosen where the smoke is born light gray and dies black; it is also born almost translucent, becomes fully opaque halfway through, and dies fully translucent again. Finally, the alpha texture used for the smoke sprite is shown below.
The heat effect is mainly achieved by the sprite material. This material is set up as a lit translucent surface, and the main thing about it is its refraction variation, which is driven by a lerp that interpolates between 1.0 and 1.6 for the refraction index input of the shader. This lerp is in turn driven by a modulation between the particle's color and a panning noise texture. Finally, the normal of this material is driven by two normal maps that are panned and added together to create random motion. With the material set up, the emitter spawns particles on the surface of a sphere centered around the combustion emitter, with an initial velocity pointing up (heat rises). A drag module helps slow down the particles as they mature and grow in size. Finally, a scale color module allows the emitter to tell the material to increase the modulation factor driving the refraction interpolation.
In order to better showcase the HD landscape (discussed in my previous post), I decided to create a day-night cycle blueprint. The main idea is simple but, as usual, the devil is in the details...
At a very high level, the main steps were the following:
- Create a 'Timeline' node inside the level blueprint.
- Create the different parameter curves inside the 'Timeline' needed to drive all the cyclic items. More specifically, I'm talking about the sun position, the fog amount, skylight intensity, and the time of day.
- In order for the time of day and sun position to be consistent, I had to drive the sun position from -270 degrees at 00:00 to 90 degrees at 24:00 hrs (i.e., 15 degrees per hour; see the sketch after this list). Moreover, my time scale was 1 hr = 10 seconds.
- The fog parameters were trial and error, but overall I attempted to reduce fog during midday and increase it at night.
- The skylight allowed for nice ambient lighting during the day, but it had issues at night, as it would light up the terrain like crazy. Therefore, I decided to drive the skylight intensity to almost zero during night time.
- Create my own copy of the engine's sky sphere in order to be able to tweak it freely without affecting other levels or projects.
- Cloud amount, cloud speed, and stars brightness can be tweaked in the sky sphere blueprint but only at construction time. Hence, in order to drive those material parameters with the time of day, create another 'Timeline' node inside the sky sphere's BP event graph. Then, create the different curves for cloud amount, cloud speed, and stars brightness inside this timeline node. Afterwards, hook up those outputs to the corresponding parameters of the dynamic material instance. Finally, connect the time of day, from the 'Timeline' node in the level blueprint, to the time input of the Timeline node just created for the sky sphere's BP event graph. This is how the two blueprints communicate.
- Add the Milky Way at night by grabbing a picture from NASA and importing it as a power-of-two texture into the project. After that, find the section where the stars are drawn inside the sky sphere's material and linearly interpolate in the Milky Way texture. Getting the UV tiling right was trial and error, but in the end a U tiling of 2 and a V tiling of 1 worked best for me.
- Create Matinee by adding camera groups, a director group and a soundtrack.
- Need to set the directional light as movable.
- Need to register the directional light with the sky sphere in order to use its angle to drive the sun position.
- Need to call the sky sphere's 'Update Material' function every time the sun position is updated.
- It looks nicer with distance field ray traced shadows enabled.
- Reduce skylight intensity at night in order to avoid glitches where the terrain lights up. I'm still investigating whether this is a skylight glitch or a problem with how I'm using it.
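For reference, here is the time-of-day to sun-pitch mapping from the list above as a tiny function (names are mine): -270 degrees at 00:00 and +90 degrees at 24:00 works out to 15 degrees per hour, with the whole day compressed into 240 real seconds.

#include <cmath>

static float sunPitchDegrees(float realSeconds)
{
    const float secondsPerDay = 24.0f * 10.0f; // 1 hr = 10 s
    const float dayFraction = std::fmod(realSeconds, secondsPerDay) / secondsPerDay;
    return -270.0f + 360.0f * dayFraction; // -270 deg -> +90 deg over one day
}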
To do items
- Explore the distance field shadows parameters better.
- Investigate lighting artifact where there are isolated pockets of the landscape that light up at weird times during night cycle.
- Investigate why the dirt mask applied to camera 2 is not working.
- Improve landscape material. Investigate how to autogenerate landscape materials blends based on terrain features.
- Another reason to improve the landscape material: the grid patterns formed by the UV tiling.
NED offers 15x15 minute files at 1/9th AS resolution for selected areas within the US. This is approximately 3 meter by 3 meter post spacing!! Hence, I embarked on a mission to load one of these files as a landscape in Unreal. The first obstacle in my path was decoding their .IMG file. Luckily, I found GDAL.
In Ubuntu 14.04, installing GDAL for development was straightforward when installed from the main repo (GDAL 1.9).
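If I remember right, the install was something along these lines (the exact package names may vary between releases):

sudo apt-get install libgdal-dev gdal-bin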
Then, in order to compile, do something like the line below. One thing I learned is that g++ really expects you to pass the linker flags after the .cpp file, or else it won't work.
g++ `gdal-config --cflags` myProgram.cpp `gdal-config --libs` -o myProgram
So I played around with the GDAL C++ API but couldn't get too far. To be honest, it has a steep learning curve and I have yet to master it. So far, all I can do is open the file, learn some meta information about it (size, resolution, datum, min/max elevations, etc.), and dump the raster data (elevations) into a grayscale PNG. However, my problem was that I didn't know how to dump a 16-bit grayscale PNG instead of an 8-bit one. I think the problem stemmed from not knowing how to convert the raster data from float to unsigned 16-bit. Because I wasn't doing that, my direct raster dump into PNG caused the PNG 'driver' to default to a float-to-unsigned-8 conversion, clamping all the data at 255, which results in a white image that is obviously not useful.
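In hindsight, the float to unsigned 16 conversion I was missing can be done with something like the minimal sketch below: read the band as Float32, rescale into the 16-bit range, write the result into an in-memory dataset, and let the PNG driver CreateCopy it (the PNG driver has no Create). The file names and the 2560 m max elevation match my data, and error handling is omitted.

#include "gdal_priv.h"
#include <vector>
#include <algorithm>

int main()
{
    GDALAllRegister();

    GDALDataset* src = static_cast<GDALDataset*>(GDALOpen("testIMG.img", GA_ReadOnly));
    if (!src) return 1;

    GDALRasterBand* band = src->GetRasterBand(1);
    const int w = band->GetXSize();
    const int h = band->GetYSize();

    // Read the elevations as 32-bit floats.
    std::vector<float> elev(static_cast<size_t>(w) * h);
    band->RasterIO(GF_Read, 0, 0, w, h, elev.data(), w, h, GDT_Float32, 0, 0);

    // Rescale 0..2560 m into the full 0..65535 range and truncate to uint16.
    std::vector<unsigned short> gray(elev.size());
    for (size_t i = 0; i < elev.size(); ++i)
    {
        const float v = std::max(0.0f, std::min(2560.0f, elev[i]));
        gray[i] = static_cast<unsigned short>(v / 2560.0f * 65535.0f);
    }

    // Stage the 16-bit band in an in-memory dataset...
    GDALDriver* memDrv = GetGDALDriverManager()->GetDriverByName("MEM");
    GDALDataset* mem = memDrv->Create("", w, h, 1, GDT_UInt16, nullptr);
    mem->GetRasterBand(1)->RasterIO(GF_Write, 0, 0, w, h, gray.data(), w, h, GDT_UInt16, 0, 0);

    // ...and let the PNG driver copy it out as a 16-bit grayscale PNG.
    GDALDriver* pngDrv = GetGDALDriverManager()->GetDriverByName("PNG");
    GDALDataset* png = pngDrv->CreateCopy("testPng.png", mem, FALSE, nullptr, nullptr, nullptr);

    GDALClose(png);
    GDALClose(mem);
    GDALClose(src);
    return 0;
}

This compiles with the same g++ line shown earlier.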
Luckily, GDAL comes with a few off-the-shelf utilities. In particular, I found gdal_translate. This utility not only lets me dump the .IMG file directly into a 16-bit grayscale PNG, but also lets me crop it so that I only get the useful info out of the file. Because a picture is worth a thousand words, this is what I mean. Below is an example of how I used the utility to create the PNG I ended up loading into UE4.
gdal_translate -scale 0 2560 0 65535 -srcwin 0 0 5355 3465 -ot Uint16 -of PNG testIMG.img testPng.png
Notice that I knew the maximum elevation within this file when scaling to unsigned 16-bit; I got that information from a small utility I wrote. However, the gdalinfo utility that ships with GDAL can also print all the meta information about a given file. This max elevation value is important to keep in order to achieve the correct z-scaling when importing the landscape into Unreal. In my case, the z-scaling factor was (2560*100)/(255*2) = 501.96. The scaling factor for x,y is straightforward: 300, so that we achieve the data's inherent 3 meters per post spacing.
Finally, this is a rendering of the result using a simple one layer landscape material (grassy), a bit of fog and a simple stationary directional light.
A couple of fly-through shots of my Unreal studio scene using UE4 Matinee. Nothing fancy, just a few different shots under one director group. Adding a fade in and out between shots helps with the camera transitions. Btw, I only changed the default FOV for the camera in the shot that closes in on the golden mesh. It dramatically exaggerated the closing-in effect, but I liked it. Still got to learn more tricks and post-processing effects.
On my previous post I derived the following landscape scaling vector: (277.83, 277.83, 34.69). However, I quickly realized my mistake: while it is safe to assume that the degrees-of-latitude to linear distance conversion remains about the same anywhere on the globe, this is obviously not true for the degrees-of-longitude conversion, since the lines of longitude converge as we approach the poles. Therefore, the scaling factor for the x component of the landscape (longitude lines) needs to be different from the y component (latitude lines). The trouble is that the UE4 Editor won't let you scale these two dimensions independently: as soon as you change the x scaling factor and press enter, the y scaling factor automatically assumes the same value. This is to prevent the quads in the landscape structure from becoming something other than squares, or so I read somewhere in the UE forums.
I found a solution to my dilemma: scale the heightmap so that, in effect, it represents linear distance as opposed to degrees of lat/lon. As a quick recap, my heightmap was built using elevation samples every 9 AS (arc seconds; 1 AS = 1/3600th of a degree) in both latitude and longitude. Latitude-wise, 9 AS is roughly equivalent to (30.87 * 9) = 277.83 meters everywhere on the globe. Longitude-wise, however, 9 AS is equivalent to 277.83 meters only at the equator. My heightmap is based around Mt. Rainier, USA, which according to Google is located at the lat/lon pair (46.8529°, -121.7604°). Therefore, 9 AS at that latitude is equivalent to
(277.83 m)/cos(46.85 deg) = 406.237246926 m
Moreover, assuming we want the heightmap to represent a linear distance equivalent to 501x501 9-AS posts, we need to rescale our heightmap along the horizontal (longitude) axis to
((406.237246926)/277.83)*501 = 732.551778821 or 732 if flooring.
Bottom line: we need to rescale our image from its original 505x505 dimensions to 732x501. This meant it was time to get help from our friend ImageMagick.
The following line does the trick
convert Rainier_505.png -sample 732x501\! Rainier_732x505.png
However, -sample simply repeats some of the columns in order to achieve the requested size, which ends up creating the following 'banding' lighting artifact:
A better solution is to use
convert Rainier_505.png -resize 732x501\! Rainier_732x505.png
The resulting heightmap here has its additional columns interpolated smoothly, avoiding the banding problem illustrated above. The result is shown below.
Now, I've found that the scaling factor I derived for the z component of the landscape has an issue as well. By construction, my 16-bit heightmap encodes values in a uint16 so that 0-65535 represents a height range of 9848 meters. Mt. Everest at 8,848 meters was taken as the reference for the maximum elevation that could be encoded; the extra 1000 m are used to represent down to -1000 m below sea level. Furthermore, the default z scaling factor for a landscape is 100, and this represents -25500 to +25500 meters (yes, I know I'm not using 1 UU as 1 cm, but this is done so that I can move around the editor more easily).
Therefore my correct z scaling factor should be (9848 * 100)/51000 = 19.31
In summary, the scaling factor (which I hope is correct this time) is (277.83, 277.83, 19.31). In order to position my landscape so that ocean level is at 1000 units, I offset the landscape on the z-axis by 9848/2 = 4924 units/meters.
Finally, the image comparison below shows the difference in realism that can be achieved by adding normal and ambient occlusion textures, per layer, to the landscape material.
In the images below I've added a fishnet type texture and a color gradient based on height.