Offline Light Propagation Volumes

Materials, textures, lighting, postfx
andrewmac
Posts: 295
Joined: Tue Feb 03, 2015 9:45 pm
 
by andrewmac » Wed Feb 04, 2015 12:41 am
I've been interested in light propagation volumes since I first saw them in a Crytek presentation. I think it's a simple and novel idea for decent real-time global illumination. Unfortunately, at the moment, Torque3D is still using DirectX 9, so it lacks two major features I would need for real-time LPVs: rendering to a 3D texture, and compute shaders. It CAN be done without them, but it's messy. What I decided to do in the meantime is implement an offline version of them: completely calculated on the CPU, and static.

How does it work?

You place an OfflineLPV volume around an area in the level. It steps through the area inside the volume and tests for static geometry, producing a voxelized version of that area. Next it detects the lights in the scene and injects them into the grid. Finally, it propagates the light outward through the grid. What is essentially a postfx is run after the lighting pass: it takes the worldspace position of each pixel and checks whether it falls within the volume. If it does, it pulls the color from that spot in the cube and blends it into the lighting buffer.
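
To make the grid side of that concrete, here's roughly what it boils down to. This is just an illustrative sketch with made-up names (OfflineLPVGrid, etc.), not the actual classes in my branch:

Code: Select all
    // Dense grid covering the volume, plus the world-space -> cell mapping
    // used by every stage (voxelize, inject, propagate, final sample).
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct OfflineLPVGrid
    {
        Vec3 mMin, mMax;          // world-space bounds of the volume
        int  mRes;                // cells per axis (e.g. 64)
        std::vector<Vec3> mColor; // propagated light color per cell
        std::vector<bool> mSolid; // voxelized static geometry

        OfflineLPVGrid(Vec3 minB, Vec3 maxB, int res)
            : mMin(minB), mMax(maxB), mRes(res),
              mColor(res * res * res, Vec3{0.0f, 0.0f, 0.0f}),
              mSolid(res * res * res, false) {}

        // True if a world-space point falls inside the volume.
        bool contains(const Vec3& p) const
        {
            return p.x >= mMin.x && p.x <= mMax.x &&
                   p.y >= mMin.y && p.y <= mMax.y &&
                   p.z >= mMin.z && p.z <= mMax.z;
        }

        // Map a world-space position to integer cell coordinates.
        void worldToCell(const Vec3& p, int& ix, int& iy, int& iz) const
        {
            ix = (int)((p.x - mMin.x) / (mMax.x - mMin.x) * mRes);
            iy = (int)((p.y - mMin.y) / (mMax.y - mMin.y) * mRes);
            iz = (int)((p.z - mMin.z) / (mMax.z - mMin.z) * mRes);
            if (ix >= mRes) ix = mRes - 1; // clamp points on the max face
            if (iy >= mRes) iy = mRes - 1;
            if (iz >= mRes) iz = mRes - 1;
        }

        // Flat index into the per-cell arrays.
        int cellIndex(int ix, int iy, int iz) const
        {
            return ix + iy * mRes + iz * mRes * mRes;
        }
    };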

Here's an example of an area being lit entirely by a cube filled with test values (point sampling on for debugging):
Image

I haven't had a chance to put a lot of time into the propagation algorithm yet, but here's a screenshot with some simple tracing:
Image

There's a single green point light offscreen to the right that is directly lighting the convex shape on the left-hand side; the light bounces off and illuminates the back side of the convex shape to the right.

Will post more screenshots and updates as I work on it. As usual my code is on my github: https://github.com/andr3wmac/Torque3D/tree/offlineLPV
andrewmac
Posts: 295
Joined: Tue Feb 03, 2015 9:45 pm
 
by andrewmac » Wed Feb 04, 2015 1:10 am
Here's a video of geometry being voxelized:

LukasPJ
Site Admin
Posts: 344
Joined: Tue Feb 03, 2015 7:25 pm
 
by LukasPJ » Wed Feb 04, 2015 3:20 am
We need like buttons!
Timmy
Posts: 306
Joined: Thu Feb 05, 2015 3:20 am
  by Timmy » Sat Feb 07, 2015 10:20 am
You could modify Luis's GL branch and bump it up to GL 4.3 and use compute shaders. A bit of work, but more realistic than D3D11 at this stage.

Anyways, cool stuff :mrgreen:
buckmaster
Steering Committee
Posts: 321
Joined: Thu Feb 05, 2015 1:02 am
by buckmaster » Sat Feb 07, 2015 1:34 pm
Cool stuff indeed. Is this practical to use on a whole level? Or at least something the size of a building?
andrewmac
Posts: 295
Joined: Tue Feb 03, 2015 9:45 pm
 
by andrewmac » Sat Feb 07, 2015 4:59 pm
@Timmy

I can't really dedicate a lot of time to my experiments in Torque anymore, so I have to keep them very focused and short-lived. What you see so far is a total of maybe 10 hours of programming. I was hoping to have it working well in only a few days, but it's taken a bit longer to work out some bugs with 3D textures in Torque that I didn't see coming.

Luckily, JeffR took interest and is now contributing to the project. It was actually pretty much his idea anyway. When the VXGI demos came out, he was saying it would be interesting to implement it as an offline lighting solution in Torque, and as hardware (and Torque) got better it could be converted to an online solution. These are the first steps toward that, I suppose. It was also partially inspired by the Godot engine. I've never looked at the code, but I strongly suspect their GI baking solution is an offline LPV implementation.

@buckmaster

All of the heavy calculations are done during editing, so even if it were so bad as to take 20 minutes to calculate the final result, it can be stored in a binary format if needed. The actual application of the lighting results is pretty damn light: it's a single fullscreen postfx that samples the depth buffer, determines the worldspace position, does a simple subtraction and division to determine UV coordinates, samples a 3D texture, and blends the result into the light buffer. The heaviest cost of actually displaying it will come from the memory usage of the volumes. If you were using lots of high-resolution volumes in a level, the memory usage could get up there. I've seen people cover all of Sponza with a single 256x256x256 volume. So, 256x256x256 x 4 bytes (RGBA8) = ~67 MB of video RAM. Not too bad, I don't think.
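
For anyone curious, the per-pixel math is roughly this, written out CPU-side for clarity (the real thing is a pixel shader; the names and the additive blend here are illustrative, not the exact code in the branch):

Code: Select all
    // Memory check for the 256^3 example: 256 * 256 * 256 cells * 4 bytes
    // (RGBA8) = 67,108,864 bytes, i.e. the ~67 MB mentioned above.

    struct Vec3 { float x, y, z; };

    // World-space position (reconstructed from the depth buffer) -> 3D texture UVW.
    Vec3 worldToVolumeUVW(const Vec3& worldPos, const Vec3& volumeMin, const Vec3& volumeSize)
    {
        return Vec3{ (worldPos.x - volumeMin.x) / volumeSize.x,
                     (worldPos.y - volumeMin.y) / volumeSize.y,
                     (worldPos.z - volumeMin.z) / volumeSize.z };
    }

    // Leave pixels outside the volume untouched, otherwise add the sampled
    // GI color into the light buffer.
    Vec3 applyLPV(const Vec3& lightBuffer, const Vec3& uvw, const Vec3& sampledGI)
    {
        bool inside = uvw.x >= 0.0f && uvw.x <= 1.0f &&
                      uvw.y >= 0.0f && uvw.y <= 1.0f &&
                      uvw.z >= 0.0f && uvw.z <= 1.0f;
        if (!inside)
            return lightBuffer;
        return Vec3{ lightBuffer.x + sampledGI.x,
                     lightBuffer.y + sampledGI.y,
                     lightBuffer.z + sampledGI.z };
    }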

It also depends on the layout of your level. For instance with something like this:
Image

Maybe it's not the best decision to use a single volume to cover the entire level; that's a lot of unused area. It might make more sense to use multiple volumes. I'll likely have the final shader support up to 4 volumes in one shot (arbitrarily chosen; maybe 8, maybe 16? we'll see), calculate a visibility score for all the volumes in the level on the CPU, and choose the 4 with the highest scores to sample in the shader.
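
The CPU-side selection would be as simple as something like this (the scoring heuristic is still completely undecided, so treat it as a placeholder):

Code: Select all
    #include <algorithm>
    #include <vector>

    struct VolumeCandidate
    {
        int   id;    // which OfflineLPV volume this is
        float score; // visibility score: higher = more worth sampling this frame
    };

    // Pick the best N volumes to bind for the fullscreen pass.
    std::vector<int> pickVolumesForShader(std::vector<VolumeCandidate> candidates,
                                          size_t maxVolumes = 4)
    {
        size_t keep = std::min(maxVolumes, candidates.size());

        // Sort so the highest-scoring volumes come first (only the first
        // 'keep' entries need to end up in order).
        std::partial_sort(candidates.begin(), candidates.begin() + keep, candidates.end(),
                          [](const VolumeCandidate& a, const VolumeCandidate& b)
                          { return a.score > b.score; });

        std::vector<int> chosen;
        for (size_t i = 0; i < keep; ++i)
            chosen.push_back(candidates[i].id);
        return chosen;
    }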
HeadClot
Posts: 58
Joined: Sat Feb 07, 2015 1:29 am
by HeadClot » Sun Feb 08, 2015 1:23 am
Timmy wrote:You could modify Luis's GL branch and bump it up to GL 4.3 and use compute shaders. A bit of work, but more realistic than D3D11 at this stage.

Anyways, cool stuff :mrgreen:


If we are going to do that, we might as well wait and migrate over to OpenGL Next, which is being unveiled at GDC this year :D
JeffR
Steering Committee
Posts: 732
Joined: Tue Feb 03, 2015 9:49 pm
 
by JeffR » Sun Feb 08, 2015 4:53 am
As Andrew said, I've jumped in to help with this, because it's sort of an awesome thing to have for Torque.

I've been focusing on improving the voxelization process. I've got pretty good headway on the new approach, but there are some bugs to iron out before it can be really considered "working".

Image

The new approach finds any object overlapping the LPV volume and asks it to do a buildPolyList.

Then I iterate through and find polys that are inside of or intersect with our volume. This lets a volume process only part of a mesh, which is important in my screenshot above, as that's almost 20k polys for a full level's basic geometry.

For each valid triangle, it builds a bounding box and uses snapping logic to figure out the min/max voxel grid indices it overlaps. This means when we process for the actual voxels that make up the surface, we only need to process the ones that MIGHT intersect, as opposed to the entire volume each time.
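
The snapping itself is nothing fancy; roughly this (illustrative names, not the actual code):

Code: Select all
    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct VoxelRange { int min[3]; int max[3]; };

    // Bounding box of the triangle, snapped to the min/max voxel indices it
    // could possibly overlap, clamped to the grid.
    VoxelRange snapTriangleToGrid(const Vec3 tri[3], const Vec3& gridMin,
                                  float voxelSize, int res)
    {
        float lo[3] = { tri[0].x, tri[0].y, tri[0].z };
        float hi[3] = { tri[0].x, tri[0].y, tri[0].z };
        for (int i = 1; i < 3; ++i)
        {
            const float p[3] = { tri[i].x, tri[i].y, tri[i].z };
            for (int a = 0; a < 3; ++a)
            {
                lo[a] = std::min(lo[a], p[a]);
                hi[a] = std::max(hi[a], p[a]);
            }
        }

        const float origin[3] = { gridMin.x, gridMin.y, gridMin.z };
        VoxelRange r;
        for (int a = 0; a < 3; ++a)
        {
            r.min[a] = std::clamp((int)std::floor((lo[a] - origin[a]) / voxelSize), 0, res - 1);
            r.max[a] = std::clamp((int)std::floor((hi[a] - origin[a]) / voxelSize), 0, res - 1);
        }
        return r;
    }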

Once we have our local block, it iterates through and rejects any voxels that don't intersect the tri's plane. Finally, for the ones that do, we do a final test to detect whether any of the edges of the voxel hit the triangle.

Because we localize everything down, it keeps things pretty fast, but the actual testing done after paring down the data keeps things fairly accurate. There are outlier cases I need to account for, such as a triangle whose corner intersects the voxel but none of the voxel's edges touch the tri, but the basic logic is solid.
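
The plane rejection is the cheap early-out; something along these lines (again, just a sketch of the idea, not the actual code):

Code: Select all
    struct Vec3 { float x, y, z; };

    static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  cross(const Vec3& a, const Vec3& b)
    {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    // If all eight corners of the voxel sit strictly on one side of the
    // triangle's plane, the voxel can't touch the triangle and is rejected
    // before the more expensive edge-vs-triangle test.
    bool voxelCrossesTrianglePlane(const Vec3& voxelMin, const Vec3& voxelMax, const Vec3 tri[3])
    {
        const Vec3  n = cross(sub(tri[1], tri[0]), sub(tri[2], tri[0])); // plane normal
        const float d = dot(n, tri[0]);                                  // plane offset

        bool anyAbove = false, anyBelow = false;
        for (int i = 0; i < 8; ++i)
        {
            // Enumerate the eight corners of the voxel's AABB.
            const Vec3 corner = { (i & 1) ? voxelMax.x : voxelMin.x,
                                  (i & 2) ? voxelMax.y : voxelMin.y,
                                  (i & 4) ? voxelMax.z : voxelMin.z };
            const float side = dot(n, corner) - d;
            if (side >= 0.0f) anyAbove = true;
            if (side <= 0.0f) anyBelow = true;
        }
        return anyAbove && anyBelow; // corners on both sides -> plane passes through the voxel
    }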

Once the better voxelization model is done, I'll start looking at a better propagation scheme. If possible, I'd like to look into baking the bounce data for a given voxel, so when you inject lights and tell it to propagate, each voxel already knows how the propagation will happen and just hammers through it.

*IF* that works well and is fast enough, you could theoretically have dynamic lighting do the propagation in real time. The geometry couldn't change, and dynamic objects still wouldn't affect the lighting, but you could do stuff like flashlights and time-of-day progression and the lighting would just handle it.

There are other hurdles to sort out before that's actually feasible, but that'd probably be the end-goal.
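
To give a rough picture of the baking idea: per voxel you'd store which neighbours it's allowed to bleed into (worked out once, geometry-aware), and the runtime pass just applies those stored links. Nothing like this exists in the branch yet; single channel shown for brevity:

Code: Select all
    #include <cstdint>
    #include <vector>

    struct PropagationLink
    {
        uint32_t targetCell; // flat index of the neighbour this cell bleeds into
        float    weight;     // precomputed attenuation toward that neighbour
    };

    struct BakedPropagation
    {
        // One list of links per voxel, baked once when the geometry is voxelized.
        std::vector<std::vector<PropagationLink>> links;

        // Runtime step: just apply the stored weights. No geometry tests here,
        // which is what would make real-time re-propagation plausible.
        void propagate(const std::vector<float>& srcRed, std::vector<float>& dstRed) const
        {
            for (size_t cell = 0; cell < links.size(); ++cell)
                for (const PropagationLink& link : links[cell])
                    dstRed[link.targetCell] += srcRed[cell] * link.weight;
        }
    };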
andrewmac
Posts: 295
Joined: Tue Feb 03, 2015 9:45 pm
 
by andrewmac » Sun Feb 08, 2015 2:58 pm
I resolved my volume texture issues, so it's no longer corrupting when changing screen resolution. I also added the ability to lock/unlock volume textures for updating, so I'm no longer creating a new volume texture for each update, which makes the whole thing a lot easier to work with and test. I also added the ability for the propagation algorithm to flip/flop between two propagation grids, so you can run it as many times as you want. I can't stress this enough: the propagation algorithm is just a simple bleed and not physically accurate at all. My next step is to make the propagation geometry-aware and add inverse-square falloff.
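
For reference, the bleed itself is about as dumb as it sounds: each pass averages a cell with its six neighbours from one grid into the other, then the grids swap roles. Something along these lines (illustrative, not the actual code, and again not physically accurate):

Code: Select all
    #include <utility>
    #include <vector>

    struct GridRGB { std::vector<float> r, g, b; }; // each channel sized res*res*res

    static int cellIdx(int x, int y, int z, int res) { return x + y * res + z * res * res; }

    // One bleed pass: read from src, write the 6-neighbour average into dst.
    void bleedPass(const GridRGB& src, GridRGB& dst, int res)
    {
        static const int offs[6][3] = { {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1} };
        for (int z = 0; z < res; ++z)
        for (int y = 0; y < res; ++y)
        for (int x = 0; x < res; ++x)
        {
            float r = src.r[cellIdx(x, y, z, res)];
            float g = src.g[cellIdx(x, y, z, res)];
            float b = src.b[cellIdx(x, y, z, res)];
            int count = 1;
            for (const int* o : offs)
            {
                int nx = x + o[0], ny = y + o[1], nz = z + o[2];
                if (nx < 0 || ny < 0 || nz < 0 || nx >= res || ny >= res || nz >= res)
                    continue;
                r += src.r[cellIdx(nx, ny, nz, res)];
                g += src.g[cellIdx(nx, ny, nz, res)];
                b += src.b[cellIdx(nx, ny, nz, res)];
                ++count;
            }
            dst.r[cellIdx(x, y, z, res)] = r / count;
            dst.g[cellIdx(x, y, z, res)] = g / count;
            dst.b[cellIdx(x, y, z, res)] = b / count;
        }
    }

    // Flip/flop: run the pass N times, swapping the two grids between passes
    // so grid 'a' always holds the latest result.
    void propagateN(GridRGB& a, GridRGB& b, int res, int passes)
    {
        for (int i = 0; i < passes; ++i)
        {
            bleedPass(a, b, res);
            std::swap(a, b);
        }
    }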

Now some better screenshots of the whole setup. First, this is what my testing area looks like lit by the sun:
Image

Here are the lighting conditions I'm testing in: two point lights (red and green) off to the right-hand side, both with cubemap shadows on (which have bugs; note the banding in the light. That's not me though, that's stock. They give the best occlusion, so I chose them):
Image

That's what it looks like in stock. Now, here's what it looks like after a single propagation:
Image

And here it is after 3 propagations:
Image
andrewmac
Posts: 295
Joined: Tue Feb 03, 2015 9:45 pm
 
by andrewmac » Sun Feb 08, 2015 6:11 pm
A crappy but valiant attempt at the Cornell box:

Image

Image
