Hi folks, it’s been a while since my last update. Unfortunately, school got in the way, so my progress on Monter stagnated a bit, but I was still able to put some work into it. A terrain walking system has always been my next big goal, but there were still missing pieces here and there, so I spent the past week patching them up. After this blog post, I will finally start working on the terrain walking system.
Light transmittance in air
The only reason we can see anything at all is thanks to light. Light sources emit light, and some of that light hits surfaces. A surface partially absorbs the light and reflects the rest in some direction. Eventually, some of the reflected light reaches our eyes, giving us the information we need to piece together an image in our brain.
It’s easy to think that light travels without energy loss. That is mostly true if the light travels in a vacuum, but since I am simulating an earth-like world, I have to consider that world’s atmosphere. Long story short, light gets scattered as it travels through the atmosphere due to collisions with aerosols, which are the bigger particles in the atmosphere that sit closer to the ground.
Given the above, if light travels a greater distance, it probably hits more aerosols along the way and gets scattered even more. Therefore, the farther away a surface is, the more the light it reflects has been scattered by the time it reaches our eyes. This is why distant objects seem to blend into the atmosphere’s color. This phenomenon is called Mie scattering.
A physically-based simulation of such atmospheric scattering is not trivial, so I decided to first just apply some hacks to see what kind of effect it would have on the scenery.
Here’s an image that’s rendered without taking Mie scattering into account:
Of course this is no good; surfaces far away reflect the same amount of light as ones close to the viewer. Luckily, when a pixel in the scene is shaded, I can access its view-space position. Because these positions are in view space, I can take their Z component and use it to approximate the distance between that point and the camera.
One way to do this is to simply lerp between the shading point’s color and the sky color, using the shading point’s distance as the interpolant. Of course, the interpolant has to be a value within [0, 1], so I just divide the distance by ZFar or some other arbitrarily large value.
// mix() in GLSL is the same as a lerp() function;
// clamp() keeps the interpolant within [0, 1] for points beyond ZFar
vec3 FinalColor = mix(LightExitance, SkyColor, clamp(Position.z / ZFar, 0.0, 1.0));
Same scene with the above code:
The code works correctly, but the blending is too severe and adds a thick fog to the scene.
If we visualize the mix() interpolant as the easing function for Mie scattering, we can easily see that the fog level ramps up at a constant slope, which is too fast for our use case.
Instead, we want the fog level to stay low for the majority of the graph, then ramp up quickly near the end. To do that, we can simply raise the interpolant to some power. Here I am raising the interpolant to the power of 3.
Since that graph looks decent, I apply the change to the one-liner we had:
// mix() in GLSL is the same as a lerp() function
vec3 FinalColor = mix(LightExitance, SkyColor, pow(clamp(Position.z / ZFar, 0.0, 1.0), 3.0));
The rendered scene looks much better now:
We can also do some cool weather effects with this trick. If we reverse the graph so it ramps up quickly near the viewer, and change the ambient and sky colors to match a dust storm, then we can simulate one:

We can also use this to simulate a dark night, a green alien sky, etc., which could come in handy when I start to develop the world.
Memory management issue
Another problem that’s been on my mind for a long time is memory management. Like Handmade Hero, I only do a single allocation for the entire game. The way I manage memory is to subdivide that one memory chunk into multiple memory arenas.
This has its problems. When some bytes from an arena are allocated, they can only be freed if no subsequent allocations are made while those bytes are in use. The only ways around this are to subdivide the arena further, or to allocate from the head and tail of the arena separately. However, in most of these cases, memory only persists for a single frame, so I can just use a global temporary arena that gets cleared every frame and not worry about memory leaks. I managed to come this far with just arenas, but it never felt comfortable, as it constrains the code I write to a certain extent. Just this week, while fixing some animation bugs, I ran into a scenario where a memory arena simply isn’t enough.
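For readers unfamiliar with the scheme, here is a minimal sketch of such an arena (the names PushSize and ClearArena are hypothetical, not Monter’s actual API):

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal linear memory arena: allocation is just a cursor bump. */
typedef struct
{
    uint8_t *Base;  /* start of the arena's slice of the one big allocation */
    size_t   Size;  /* total bytes in this arena */
    size_t   Used;  /* bytes handed out so far */
} memory_arena;

static void *PushSize(memory_arena *Arena, size_t Size)
{
    if (Arena->Used + Size > Arena->Size)
    {
        return 0; /* out of arena memory */
    }
    void *Result = Arena->Base + Arena->Used;
    Arena->Used += Size;
    return Result;
}

/* A per-frame temporary arena is "freed" wholesale by resetting the cursor. */
static void ClearArena(memory_arena *Arena)
{
    Arena->Used = 0;
}
```

There is no per-allocation free at all, which is precisely why individual lifetimes inside an arena are so constrained.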
Memory lifetime problem
Like I mentioned above, most memory problems go away on their own when the lifetime is known before allocation. However, what if an allocation is transient, yet has to persist across multiple frames? That was the case when I implemented cross-fading for animation clip transitions. When I transition from one clip to another, I need to store the skeleton’s current pose and LERP it with the new pose during the transition phase. However, the transition phase can last multiple frames, so I can’t use the global temporary memory arena, or any other memory arena, because while this memory is in use, another allocation request doing the same thing might occur.
Another idea is to just have a fixed-size temporary array that stores all possible transient skeleton poses. That is also not ideal, because the byte size of a skeleton pose can easily reach 5KB, which is too much to store per entity if I were to have a large scene.
I couldn’t think of a good way to solve this problem with just memory arenas, and even if there were one, the fact that memory allocation takes this long to think about is a sign that I need a new memory system.
Since I’m not very well-acquainted with the kinds of memory allocators out there, I’m just going to make the classic free-list general-purpose memory allocator and see where it takes me.
The classic free list memory allocator
The concept of such a general-purpose allocator is simple. The memory is subdivided into blocks, and each block is either free or occupied. The total size of each block is stored in a header at the top of the block. That makes it possible to traverse the blocks as if they were an array and find the next free one to claim memory from.
Because the blocks have to be traversed to find where to claim memory for a Malloc() call, the process can get slow. That’s where the concept of a free list comes in. A free list is just a linked list of all the current free blocks, with an external pointer pointing at the most recent free block. If such a free list is maintained, the worst-case time complexity of Malloc() is only O(number of free blocks) instead of O(number of blocks), because only the free list needs to be traversed to find where to claim memory.
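As a rough sketch of that idea (assumed names, and with block splitting omitted for brevity; a real Malloc() would split an oversized block and return the remainder to the list):

```c
#include <stddef.h>
#include <stdint.h>

/* Each block stores its total size at its head; free blocks additionally
   reuse their payload to store the link to the next free block. */
typedef struct free_block
{
    size_t Size;              /* total block size, header included */
    struct free_block *Next;  /* next free block; only valid while free */
} free_block;

static free_block *FreeListHead;

static void *FreeListAlloc(size_t Size)
{
    size_t Needed = Size + sizeof(size_t); /* payload + size header */
    free_block **Prev = &FreeListHead;
    for (free_block *Block = FreeListHead; Block; Prev = &Block->Next, Block = Block->Next)
    {
        if (Block->Size >= Needed)
        {
            *Prev = Block->Next; /* unlink the claimed block from the free list */
            return (uint8_t *)Block + sizeof(size_t);
        }
    }
    return 0; /* no free block is big enough */
}
```

Only the free list is walked, never the occupied blocks, which is where the O(number of free blocks) bound comes from.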
Another concern is free-block fragmentation. If Malloc() and Free() calls happen frequently, then huge free blocks will quickly get subdivided into small ones. Even if there is plenty of free memory in the allocator overall, an allocation can fail if that memory is fragmented into small blocks. A simple fix is to traverse the entire block array, merge adjacent free blocks, and completely rebuild the free list. However, this operation can be costly, so I will only do it once a frame.
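The once-a-frame merge pass could look something like this (a self-contained sketch assuming the free list is kept sorted by address; the names are mine, not Monter’s):

```c
#include <stddef.h>
#include <stdint.h>

typedef struct free_block
{
    size_t Size;              /* total block size, header included */
    struct free_block *Next;  /* next free block, in address order */
} free_block;

/* Defragmentation pass: merge free blocks that are physically adjacent
   in memory, so large allocations can succeed again. */
static void CoalesceFreeList(free_block *Head)
{
    for (free_block *Block = Head; Block && Block->Next; )
    {
        if ((uint8_t *)Block + Block->Size == (uint8_t *)Block->Next)
        {
            /* the next free block starts right where this one ends: absorb it */
            Block->Size += Block->Next->Size;
            Block->Next  = Block->Next->Next;
        }
        else
        {
            Block = Block->Next;
        }
    }
}
```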
The last concern is fragmentation caused by long-lived allocations. As of right now, the allocator is only used for transient allocations that persist across a few frames; there won’t be any allocation just sitting there for the entire duration of the game. So it isn’t a concern yet, but we will see.
Game is no longer top-down
Just a side note: the camera system has been revised to be more of a third-person camera than a top-down one. Of course, it is buggy at the moment (clipping through trees and terrain), but fixing it will come after the terrain walking and collision systems.
In my opinion, since you have a very bright light, atmospheric attenuation should not happen before a few kilometers. The contrast between the very bright light and the fog behind it makes the screenshot look weird to me. Also, having a sharp attenuation slope in the distance produces a "things coming out of the fog" effect when you walk toward them.
For your memory allocator, how do you handle memory alignment? Do you align all blocks to a certain value? Is it the user's responsibility?
Hmm, you've got a point. I guess I was compensating for the lack of volumetric lighting and didn't realize how weird it looks. I also noticed the "things popping out of the fog" effect, but I was too biased to see it as a problem at the time. I will remove it when I've got something else to replace it with, thanks for that!
As for my memory allocator, each allocated block is forced to be 8-byte aligned so that the least significant bits of each block's metadata can be used as flags. The allocator doesn't have the functionality to force any alignment beyond that right now, but it will be added when I do SIMD work.
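For the curious, the low-bit trick looks roughly like this (a sketch with made-up names, not Monter's actual code):

```c
#include <assert.h>
#include <stddef.h>

/* With 8-byte-aligned block sizes, the low 3 bits of the size field are
   always zero, so one of them can double as a "block is free" flag. */
#define BLOCK_FREE_BIT ((size_t)1)

static size_t PackHeader(size_t Size, int IsFree)
{
    assert((Size & 7) == 0); /* size must be a multiple of 8 */
    return Size | (IsFree ? BLOCK_FREE_BIT : 0);
}

static size_t HeaderSize(size_t Header)
{
    return Header & ~(size_t)7; /* mask off the flag bits */
}

static int HeaderIsFree(size_t Header)
{
    return (Header & BLOCK_FREE_BIT) != 0;
}
```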