Work Blog - JeffR

by damik » Wed May 17, 2017 11:41 am
nice nice nice :D

by JeffR » Wed May 17, 2017 5:38 pm
So, I've been talking about probes a lot of late, and most recently, a fair bit about Spherical Harmonics. I linked to a couple of things, but I figure that for posterity, and clarity, it'd be worth it to just jot down what it is and how it do, if for nothing else except to have a good breakdown of the idea, theory and application for games.

So, the concept of utilizing Spherical Harmonics comes out of the need for fast, cheap ambient lighting that is also directional. As in, if we ask 'Hey, renderer, what does the lighting to our right look like?', we can get particular colors rather than a blanket color like the flat ambient we get off of our sun lights.

There are a lot of different ways to get all that indirect lighting information (I'll cover a few), but it all stems from the idea that when light is cast into a scene and hits a surface, unless that surface is completely absorptive, some light will reflect off it, picking up some coloration from the surface and bouncing onto other surfaces nearby.

The Cornell box is a very standard representation of this idea:

[Image: Cornell box]

As you can see, red wall on the left, green wall on the right. When the light on the ceiling is cast into the box, it hits various surfaces, including those walls, and will bounce light around. When it bounces, color from those surfaces is picked up and bounced onto other nearby surfaces, which can be seen on the 2 small boxes in the room. The red and green bounce onto them from their respective sides.

Now, the CORRECT way to simulate all this is by raytracing thousands of photons per frame from light sources, which hit surfaces and bounce to hit more surfaces and so on, until each photon runs out of energy. This is, however, very. VERY. slow. It's why render times for scenes in offline renderers run from minutes to hours to days.
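
Just to give a feel for the shape of that, here's a tiny, completely hypothetical C++ sketch of the bounce loop. None of these types or helpers are real engine code, they're just stand-ins for the idea:

Code:

    struct Vec3 { float x, y, z; };
    static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3 mul(Vec3 a, Vec3 b) { return { a.x * b.x, a.y * b.y, a.z * b.z }; }

    struct Ray { Vec3 origin, dir; };
    struct Hit { bool valid; Vec3 position, normal, albedo; };

    // Assumed helpers: intersect the scene, pick a random bounce direction in the
    // hemisphere over a normal, and ask how much direct light reaches a point.
    Hit  traceScene(const Ray& r);
    Vec3 randomHemisphereDir(const Vec3& n);
    Vec3 directLightAt(const Vec3& p, const Vec3& n);

    Vec3 tracePath(Ray ray, int maxBounces)
    {
        Vec3 color      = { 0, 0, 0 };
        Vec3 throughput = { 1, 1, 1 }; // how much energy the path still carries

        for (int bounce = 0; bounce < maxBounces; ++bounce)
        {
            Hit hit = traceScene(ray);
            if (!hit.valid)
                break;

            // Direct light hitting this surface, tinted by every surface the path
            // has already bounced off of. That tinting is the color transfer.
            color = add(color, mul(throughput, directLightAt(hit.position, hit.normal)));

            // The surface absorbs some energy and tints what's left with its own color.
            throughput = mul(throughput, hit.albedo);

            // Pick a new direction in the hemisphere and keep bouncing.
            ray.origin = hit.position;
            ray.dir    = randomHemisphereDir(hit.normal);
        }
        return color;
    }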

So obviously that's right out for game rendering; we need to be WAY faster than that. Which is where all our various methods developed over the years come into play. One very common method that's still used today, but has limitations, is lightmapping.

That's where we do the raytracing as per our offline renderer, but we then save the results into a texture that can be very cheaply looked up and applied to our objects, so at runtime it's very fast while still getting those excellent offline render results. However, that comes at the cost of objects not being able to move if they want that fancy lighting. Dynamic objects such as players, cars, etc. don't get that lighting information at all!

So work was done and a few other ways were found to convey that fancy lighting where light bounces and transfers colors and stuff via photons - henceforth referred to as 'indirect lighting'. With modern hardware, we can do some approximate methods that get the gist of the raytracing method but can run in realtime, such as SVOTI or VXGI (though this doesn't leave much room for other stuff to render fast if you don't have a VERY expensive graphics card).

SVOTI, or Sparse Voxel Octree Total Illumination, takes the geometry of the scene, voxelizes it, and then, with the much simpler voxel scene, raymarches from the camera to pixels and samples nearby voxels to get the bounce information. It's not perfectly accurate, but it's pretty close, rather fast, and it looks good. Dynamic objects can get the bounced lighting info from the static objects around them, but dynamic objects don't contribute bounced lighting themselves. So the greenery of a forest will bounce green light onto your soldier guy, but your soldier guy won't bounce lighting onto the greenery. The voxels are calculated on the CPU asynchronously, so it doesn't drag the rendering down, but it's still not super fast (it can have problems keeping up if you're moving quickly, for example) and has higher memory requirements if we don't want to keep recomputing the same voxels.

VXGI, or Voxel Global Illumination, is similar, but the voxelization happens each frame, purely on the GPU. This lets everything bounce lighting, so it's comparatively more accurate, but it's also a lot more expensive to do the voxelization each frame. Even with dedicated hardware support, it's still basically too expensive to actually use. But the results are very nice:



So it's pretty accurate, but it's rather slow still.
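
Both voxel approaches boil down to the same basic sampling trick, the 'cone tracing' I mention again below: step along a direction through the voxelized scene, sampling bigger and bigger voxels as the cone gets wider. Here's a very loose, hypothetical sketch of that loop (sampleVoxels and the little vector types are made up for the example; this isn't SVOTI's or VXGI's actual code):

Code:

    #include <algorithm>

    struct Vec3 { float x, y, z; };
    struct Vec4 { float r, g, b, a; };
    static Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

    // Assumed helper: returns the averaged voxel color (rgb) and how solid the
    // voxels are (a) around a position, at a given sample footprint.
    Vec4 sampleVoxels(const Vec3& pos, float footprint);

    Vec3 coneTrace(Vec3 origin, Vec3 dir, float coneRatio, float maxDist)
    {
        Vec3  bounced   = { 0, 0, 0 };
        float occlusion = 0.0f;
        float dist      = 0.1f; // start a little off the surface to avoid sampling ourselves

        while (dist < maxDist && occlusion < 1.0f)
        {
            // The cone gets wider the further we go, so we sample coarser voxels
            // (clamped to some minimum voxel size) and can take bigger steps.
            float footprint = std::max(coneRatio * dist, 0.05f);
            Vec3  pos       = add(origin, scale(dir, dist));
            Vec4  v         = sampleVoxels(pos, footprint);

            // Front-to-back blend: closer voxels block what's behind them.
            float contrib = (1.0f - occlusion) * v.a;
            bounced    = add(bounced, scale({ v.r, v.g, v.b }, contrib));
            occlusion += contrib;

            dist += footprint;
        }
        return bounced;
    }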

So a middle-ground between lightmaps and voxel tracing methods that has seen a lot of use in realtime rendering is Spherical Harmonics.

Spherical Harmonics is the idea that we want to encode the irradiance of a scene into as compact a form as possible - but one we can still decode - while still being able to get that indirect lighting like we would expect. So what's irradiance? Good question.

[Image: a point on a surface and the 180 degree hemisphere of directions it can 'see']

Irradiance is the concept that, for any given pixel of a surface, practically speaking, that surface can "see" a 180 degree hemisphere around it. So if we take a ball, any given point on that ball can 'see' 180 degrees away from that surface. If you were to shoot a laser at that point, the point could be hit by that laser from anywhere in that 180 degree hemisphere. This means that, when we're talking about indirect lighting, any given pixel will, in principle, be receiving light from ALL directions inside that hemisphere it can 'see'.

In our offline raycast method, this is done by just firing an obscene number of photon rays away from each pixel and sampling them, basically brute forcing what the given pixel can 'see'. The voxel methods use 'cone tracing', which is a rougher approximation requiring fewer samples. For spherical harmonics, though, we have a cubemap.

A cubemap, you say? Yep, a cubemap. See, when the probes are baked to do reflections, we take 6 renders from the probe's position: positive along the X axis, negative along the X axis, positive Y, negative Y, positive Z, negative Z. This lets us know what a reflection would look like from literally any direction around the object.
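
For illustration, those six captures boil down to something like this. This is a hypothetical sketch, not the actual probe bake code; Vec3, CubeFace and renderSceneToFace are just stand-ins:

Code:

    struct Vec3 { float x, y, z; };
    struct CubeFace { Vec3 forward, up; };

    // One common convention for the six faces: +X, -X, +Y, -Y, +Z, -Z, each with
    // an up vector chosen so the faces stitch together into a seamless cube.
    static const CubeFace kFaces[6] = {
        { {  1,  0,  0 }, { 0, 1,  0 } },   // positive X
        { { -1,  0,  0 }, { 0, 1,  0 } },   // negative X
        { {  0,  1,  0 }, { 0, 0, -1 } },   // positive Y
        { {  0, -1,  0 }, { 0, 0,  1 } },   // negative Y
        { {  0,  0,  1 }, { 0, 1,  0 } },   // positive Z
        { {  0,  0, -1 }, { 0, 1,  0 } },   // negative Z
    };

    // Assumed helper: renders the scene from the probe's position, looking down
    // the face's forward vector with a 90 degree FOV, into that face of the cubemap.
    void renderSceneToFace(const Vec3& probePos, const CubeFace& face, int faceIndex);

    void bakeProbeCubemap(const Vec3& probePos)
    {
        // Six 90 degree renders together cover every direction around the probe.
        for (int i = 0; i < 6; ++i)
            renderSceneToFace(probePos, kFaces[i], i);
    }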

When we do the renders for our cubemap, we're also rendering with lighting enabled. This is so reflections are actually accurate, but it ALSO means that we know what the scene looks like from a lighting perspective. If there's little light in the scene, say from a single flashlight, the cubemap is going to be pretty dark. If there's a lot of light, from the sun, we're going to see a LOT of light.

So that's cool, since the cubemap also represents our lighting around the scene, we can just take the pixel, sample from the cubemap, and we're good, right? Well...not quite.
See, the problem goes back to irradiance, with the full hemisphere thing I mentioned above. When we sample the cubemap, we sample one specific point on it based on a direction. When sampling for reflections, this is great because it gives us reflections as sharp or as soft as we need, but when it comes to irradiance, it's not accurate because we're not getting the full hemisphere of lighting info that pixel can 'see'.

[Image: a cubemap, a blurred version of it, and an irradiance map]
An example of a cubemap, a blurred version, and an irradiance map. Irradiance is different from just running a blur filter on a cubemap, because it specifically biases towards bright, lit pixels in the cubemap. You can see in the example that the lit spots from the windows are much better defined in the irradiance map because it's biased towards lights. This is important for our lighting information.

So we need to store that irradiance information. There are two ways to do this: irradiance mapping, or Spherical Harmonics. Irradiance mapping is very accurate, as I've said before, but it requires an entire second cubemap. Even if the cubemap is low resolution, that's a fair bit of additional cost per probe. To calculate irradiance, we pretty much just mathematically take a pixel on an imaginary sphere, then sample every pixel in the cubemap that the pixel on our sphere can see, and use some math to average it all out. It's pretty much brute forcing it, but it works. When we finish that, we know what the irradiance info for every pixel on our sphere is.
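
Here's roughly what that brute-force average looks like, as a hypothetical sketch. Cubemap, CubemapTexel and the little vector helpers are made up for the example; the real bake code will look different:

Code:

    struct Vec3 { float x, y, z; };
    static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3  scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

    // One texel of the baked cubemap: the direction it sits in, its color, and the
    // little chunk of the sphere it covers (its solid angle).
    struct CubemapTexel { Vec3 direction; Vec3 color; float solidAngle; };

    // Assumed interface for walking every texel of the baked cubemap.
    struct Cubemap
    {
        int numTexels() const;
        CubemapTexel texel(int i) const;
    };

    // Irradiance for a surface facing along N: average every texel in N's hemisphere,
    // weighted by how directly that texel faces N (the cosine) and by its solid angle.
    Vec3 irradianceForDirection(const Cubemap& cube, const Vec3& N)
    {
        Vec3  sum    = { 0, 0, 0 };
        float weight = 0.0f;

        for (int i = 0; i < cube.numTexels(); ++i)
        {
            CubemapTexel t = cube.texel(i);
            float cosTheta = dot(N, t.direction);
            if (cosTheta <= 0.0f)
                continue; // behind the surface, so this point can't 'see' it

            float w = cosTheta * t.solidAngle;
            sum     = add(sum, scale(t.color, w));
            weight += w;
        }
        return (weight > 0.0f) ? scale(sum, 1.0f / weight) : sum;
    }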

We could then save it to another cubemap, making an irradiance map, so that when we sample from it for a rendered pixel in the scene, we've basically precomputed the full hemisphere of lighting information that pixel can 'see', and we're done. But as said, second cubemap, more overhead, etc.

So the other way is Spherical Harmonics. We do mostly the same work of calculating irradiance as above, sampling the hemisphere of light for each pixel, but instead of saving it to a cubemap, we use some voodoo math to 'encode' it. Using some very particular math formulas, we can encode all our 360 degrees of irradiance information in just 9 RGB colors - our Spherical Harmonics terms. At 3 orders (9 colors), Spherical Harmonics gets around 90-95% accuracy.
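
The 'voodoo math' is really just projecting onto 9 fixed basis functions. Continuing the hypothetical sketch from above (same made-up Cubemap and Vec3 stand-ins, plus the irradianceForDirection function from the previous snippet), the encode side looks roughly like this:

Code:

    // The 9 terms: one RGB color per spherical harmonic basis function.
    struct SH9Color { Vec3 terms[9]; };

    // The standard 9 spherical harmonic basis functions, evaluated for a unit direction d.
    static void shBasis(const Vec3& d, float out[9])
    {
        out[0] = 0.282095f;                               // band 0 (constant)
        out[1] = 0.488603f * d.y;                         // band 1 (linear)
        out[2] = 0.488603f * d.z;
        out[3] = 0.488603f * d.x;
        out[4] = 1.092548f * d.x * d.y;                   // band 2 (quadratic)
        out[5] = 1.092548f * d.y * d.z;
        out[6] = 0.315392f * (3.0f * d.z * d.z - 1.0f);
        out[7] = 1.092548f * d.x * d.z;
        out[8] = 0.546274f * (d.x * d.x - d.y * d.y);
    }

    // Encode: for every direction on the sphere (every texel), weight the irradiance
    // we calculated for that direction by each basis function and by the texel's
    // solid angle, and accumulate into the 9 terms.
    SH9Color encodeIrradianceToSH(const Cubemap& cube)
    {
        SH9Color sh = {};
        for (int i = 0; i < cube.numTexels(); ++i)
        {
            CubemapTexel t = cube.texel(i);
            Vec3 E = irradianceForDirection(cube, t.direction); // from the sketch above

            float basis[9];
            shBasis(t.direction, basis);
            for (int k = 0; k < 9; ++k)
                sh.terms[k] = add(sh.terms[k], scale(E, basis[k] * t.solidAngle));
        }
        return sh;
    }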

Wow!

So to use it, we pass those 9 colors to the shader, and when we render, we take the pixel's normal (which tells us the direction the pixel faces) and run it through a decoding function, which uses some particular maths to manipulate the 9 terms we have to end up with a single, final RGB color that represents the irradiance that pixel can see.
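
And the decode side is just the mirror of the encode, again as a sketch reusing SH9Color and shBasis from the snippet above rather than the actual shader code. In the real shader it's the same handful of multiply-adds done per pixel, which is why this is so cheap:

Code:

    // Decode: evaluate the 9 basis functions for the pixel's normal, scale each
    // stored term by its basis value, and add everything up. The result approximates
    // the irradiance that pixel's hemisphere can 'see'.
    Vec3 decodeSHIrradiance(const SH9Color& sh, const Vec3& normal)
    {
        float basis[9];
        shBasis(normal, basis);

        Vec3 irradiance = { 0, 0, 0 };
        for (int k = 0; k < 9; ++k)
            irradiance = add(irradiance, scale(sh.terms[k], basis[k]));
        return irradiance;
    }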

So when we do a bake of a probe in our Cornell box, we can get that indirect lighting information happening by calculating the irradiance and encoding it to SH terms for that probe. It lets us do this for any pixel that is inside the probe's radius, working for dynamic and static objects alike. The memory footprint is low because it's only 9 colors plus our reflection cubemap we were going to have anyways. The only limiter is that updating requires re-baking the probe, doing the 6 directional renders again, so doing it in realtime is rather costly and would have to be used VERY sparingly. But it's a good middle-ground technique between lightmaps and full voxel raymarching GI.

[Image: example scene lit with spherical harmonics]
An example scene using spherical harmonics. You can clearly see that we get lighting information directionally. The floor-facing pixels get the darker browns, the right-facing pixels get muted tans from the walls, and the window-facing pixels are brightly lit from the direct light exposure. This is all decoded off the 9 terms we made by encoding the irradiance data.

Once we have our reflections and our irradiance, we need to apply both to the scene. We're still hashing out the best way to do this (do we write both to the same buffer, should irradiance be applied to the direct lighting buffer instead, etc., etc.) but the data is largely there now, so we just need to decide the best way to apply it. I'll probably add a few more images later when I make them to better illustrate some parts, but hopefully this better explains what SH is and why we'd even want to use it.