
Offline Light Propagation Volumes


andrewmac


I've been interested in light propagation volumes since first seeing them in a Crytek presentation. I think it's a simple and novel idea for decent real-time global illumination. Unfortunately, at the moment, Torque3D is still using DirectX 9, so it lacks two major features I would need for real-time LPVs: rendering to a 3D texture, and compute shaders. It CAN be done without them, but it's messy. What I decided to do in the meantime is try to implement an offline version of them: completely calculated on the CPU, and static.


How does it work?


You place an OfflineLPV volume around an area in the level. It steps through the area of the volume and tests for static geometry, producing a voxelized version of the area inside the volume. Next it detects the lights in the scene and injects them into the grid. Finally it propagates the light outward through the grid. What is essentially a postfx is run after the lighting pass: it takes the worldspace position of each pixel and checks whether it falls within the volume. If it's in the volume, it pulls the color from that spot in the cube and blends it into the lighting buffer.
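
In rough C++, the bake flow looks something like the sketch below. All the names here are hypothetical placeholders to illustrate the order of operations, not the actual Torque3D classes or functions:

```cpp
// Illustrative outline only: these helpers are stand-ins for the real code.
#include <vector>

struct LightGrid             // 3D array of RGBA light values covering the volume
{
    int size = 0;            // cells per side
    std::vector<float> rgba; // size*size*size*4 floats
};

LightGrid voxelizeStaticGeometry();                 // step 1 (placeholder)
LightGrid injectLights(const LightGrid& geometry);  // step 2 (placeholder)
LightGrid propagate(const LightGrid& light,
                    const LightGrid& geometry);     // step 3 (placeholder)
void      bakeToVolumeTexture(const LightGrid&);    // step 4 (placeholder)

void buildOfflineLPV(int propagationPasses)
{
    LightGrid geometry = voxelizeStaticGeometry();  // voxelize static geometry
    LightGrid light    = injectLights(geometry);    // inject scene lights

    for (int i = 0; i < propagationPasses; ++i)     // bleed the light outward
        light = propagate(light, geometry);

    bakeToVolumeTexture(light);                     // sampled later by the postfx
}
```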


Here's an example of an area being lit entirely by a cube filled with testing values (point sampling on for debugging):

http://i.imgur.com/e7jzyhz.png


I haven't had a chance to put a lot of time into the propagation algorithm yet, but here's a screenshot with some simple tracing:

http://i.imgur.com/VJfFk4D.png


There's a single green pointlight offscreen to the right that is directly lighting the convex shape on the left-hand side; the light bounces off and illuminates the backside of the convex shape to the right.


Will post more screenshots and updates as I work on it. As usual, my code is on my GitHub: https://github.com/andr3wmac/Torque3D/tree/offlineLPV



@ Timmy


I can't really dedicate a lot of time to my experiments in Torque anymore, so I have to keep them very focused and short-lived. What you see so far is a total of maybe 10 hours of programming. I was hoping to have it working well in only a few days, but it's taken a bit longer to work out some bugs with 3D textures in Torque that I didn't see coming. Luckily, JeffR took interest and is now contributing to the project. It was actually pretty much his idea anyway. When the VXGI demos came out he was saying it would be interesting to try to implement it as an offline lighting solution in Torque, and as hardware (and Torque) got better it could be converted to an online solution. These are the first steps towards those dreams, I suppose. It was also partially inspired by the Godot engine. I never looked at the code, but I strongly suspect their GI baking solution is an offline LPV implementation.


@ buckmaster


All of the heavy calculations are done during editing, so even if it were so slow it took 20 minutes to calculate, the final result can be stored in a binary format if needed. The actual application of the lighting results is pretty damn light. It's a single full-screen postfx that samples the depth buffer, determines worldspace position, does a simple subtraction and division to determine UV coordinates, samples a 3D texture, then blends it into the lightbuffer. The heaviest cost for actually displaying it will come from memory usage of the volumes. If you were using lots of high-resolution volumes in a level, the memory usage could get up there. I've seen people cover all of Sponza with a single 256x256x256 volume. So, 256x256x256 x 4 bytes (RGBA8) = ~67 MB of video RAM usage. Not too bad, I don't think.
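
To make the "subtraction and division" bit concrete, here's the same math in plain C++ rather than the actual shader (variable names are just illustrative):

```cpp
// Subtract the volume origin, divide by its size, and you get 3D texture
// coordinates in [0,1]. Names are illustrative, not the real shader uniforms.
#include <array>

std::array<float, 3> worldToVolumeUV(const std::array<float, 3>& worldPos,
                                     const std::array<float, 3>& volumeStart,
                                     const std::array<float, 3>& volumeSize)
{
    std::array<float, 3> uvw;
    for (int i = 0; i < 3; ++i)
        uvw[i] = (worldPos[i] - volumeStart[i]) / volumeSize[i];
    return uvw; // anything outside [0,1] means the pixel is outside the volume
}
```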


It also depends on the layout of your level. For instance with something like this:

http://i.imgur.com/eBswYia.jpg


Maybe it's not the best decision to use a single volume to cover the entire level; that's a lot of unused area. It might make more sense to use multiple volumes. I'll likely have the final shader support up to 4 volumes in one shot (arbitrarily chosen; maybe 8, maybe 16, we'll see), calculate a visibility score for all the volumes in the level on the CPU, and choose the 4 with the highest score to sample in the shader.
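
The selection step I have in mind is nothing fancy, roughly this (the scoring itself isn't shown, and the names are placeholders):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct VolumeScore
{
    int   volumeId;   // which OfflineLPV volume this is
    float visibility; // higher = more likely to matter for the current view
};

// Keep only the N best-scoring volumes; these are the ones the shader samples.
std::vector<VolumeScore> pickTopVolumes(std::vector<VolumeScore> scores,
                                        std::size_t maxVolumes = 4)
{
    const std::size_t count = std::min(maxVolumes, scores.size());
    std::partial_sort(scores.begin(), scores.begin() + count, scores.end(),
                      [](const VolumeScore& a, const VolumeScore& b)
                      { return a.visibility > b.visibility; });
    scores.resize(count);
    return scores;
}
```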


You could modify luis's GL branch and bump it up to GL 4.3 and use compute shaders. Bit of work, but more realistic than D3D11 at this stage.


Anyways, cool stuff :mrgreen:

 

If we are going to do that, we might as well wait and migrate over to OpenGL next, which is being unveiled at GDC this year :D


As Andrew said, I've jumped in to help with this because this is sorta an awesome thing to have for Torque.


I've been focusing on improving the voxelization process. I've made pretty good headway on the new approach, but there are some bugs to iron out before it can really be considered "working".


http://ghc-games.com/public/LPVBetterGeomTesting2.png


The new approach finds any object overlapping the LPV volume and polls it to do a buildPolyList.


Then I iterate through and find polies that are inside of or intersect with our volume. This lets a volume process only part of a mesh, which is important in my screenshot above, as that's almost 20k polies for a full level's basic geometry.


For each valid triangle, it builds a bounding box and uses snapping logic to figure out the min/max voxel grid indices it overlaps. This means that when we process the actual voxels that make up the surface, we only need to process the ones that MIGHT intersect, as opposed to the entire volume each time.
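
Roughly, the snapping boils down to this (names and the exact grid layout are my own illustration, not the engine code):

```cpp
// Clamp a triangle's AABB to the range of voxel indices it can possibly touch.
#include <algorithm>
#include <cmath>

struct GridRange { int min[3]; int max[3]; };

GridRange triangleToVoxelRange(const float triMin[3], const float triMax[3],
                               const float gridOrigin[3], float voxelSize,
                               const int gridDims[3])
{
    GridRange r;
    for (int i = 0; i < 3; ++i)
    {
        // Snap the triangle's AABB down/up to whole voxel indices...
        int lo = (int)std::floor((triMin[i] - gridOrigin[i]) / voxelSize);
        int hi = (int)std::floor((triMax[i] - gridOrigin[i]) / voxelSize);
        // ...and clamp to the grid so we never walk outside the volume.
        r.min[i] = std::max(0, lo);
        r.max[i] = std::min(gridDims[i] - 1, hi);
    }
    return r; // only voxels in [min, max] need the expensive intersection tests
}
```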


Once we have our local block, it iterates through and rejects any voxels that don't intersect the tri's plane. Finally, for the ones that do, we do a final test to detect whether any of the edges of the voxel hit the triangle.
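
The plane-rejection part is the standard box-vs-plane test; something along these lines (the edge-vs-triangle test that follows it isn't shown):

```cpp
// Reject a voxel (treated as an AABB) early if it lies entirely on one side
// of the triangle's plane. Standard center/extents projection test.
#include <cmath>

bool voxelTouchesTrianglePlane(const float voxelCenter[3],
                               const float voxelHalfSize[3],
                               const float planeNormal[3], float planeD)
{
    // Project the voxel's half extents onto the plane normal.
    float radius = voxelHalfSize[0] * std::fabs(planeNormal[0])
                 + voxelHalfSize[1] * std::fabs(planeNormal[1])
                 + voxelHalfSize[2] * std::fabs(planeNormal[2]);

    // Signed distance from the voxel center to the plane (plane: n.p = d).
    float dist = planeNormal[0] * voxelCenter[0]
               + planeNormal[1] * voxelCenter[1]
               + planeNormal[2] * voxelCenter[2] - planeD;

    // The voxel straddles or touches the plane only if the distance is
    // within the projected radius; otherwise it can be rejected.
    return std::fabs(dist) <= radius;
}
```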


Because we localize everything down, it keeps things pretty fast, but the actual testing it does after paring down the data keeps things fairly accurate. There are outlier cases I need to account for, such as a triangle whose corner intersects the voxel but none of the voxel's edges touch the tri, but the basic logic is solid.


Once the better voxelization model is done, I'll start looking at a better propagation scheme. If possible, I'd like to look into baking the bounce data for a given voxel. So when you inject lights and tell it to propagate the light, each voxel already knows how the propagation will happen, and it just hammers through it.


*IF* that works well and is fast enough, you could theoretically have dynamic lighting do the propagation realtime. The geometry couldn't change, and dynamic objects still wouldn't impact the lighting, but you could do stuff like flashlights and Time of Day progression and the lighting would just handle it.


There are other hurdles to sort out before that's actually feasible, but that'd probably be the end-goal.


I resolved my volume texture issues, so it's no longer corrupting when changing screen resolution. I also added the ability to lock/unlock volume textures for updating, so I'm no longer creating a new volume texture for each update, which makes the whole thing a lot easier to work with and test. I added the ability for the propagation algorithm to flip/flop between two propagation grids, so you can run the propagation algorithm as many times as you want. I can't stress this enough: the propagation algorithm is just a simple bleed and not physically accurate at all. That's my next step: make the propagation geometry-aware, with inverse-square falloff.
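
For the curious, the flip/flop amounts to something like this: a deliberately dumb 6-neighbour bleed, with all the names made up for the example:

```cpp
#include <utility>
#include <vector>

// Made-up types for the sketch: a cube grid of RGB values, plus one
// propagation pass that reads from 'src' and writes into 'dst'.
struct Grid
{
    int size = 0;           // cells per side
    std::vector<float> rgb; // size*size*size*3 floats
    int idx(int x, int y, int z) const { return ((z * size + y) * size + x) * 3; }
};

void propagateOnce(Grid& src, Grid& dst)
{
    const int n = src.size;
    const int offsets[6][3] = { {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1} };

    for (int z = 0; z < n; ++z)
        for (int y = 0; y < n; ++y)
            for (int x = 0; x < n; ++x)
                for (int c = 0; c < 3; ++c)
                {
                    // Crude bleed: average this cell with its six neighbours.
                    float sum = src.rgb[src.idx(x, y, z) + c];
                    for (const int* o : offsets)
                    {
                        int nx = x + o[0], ny = y + o[1], nz = z + o[2];
                        if (nx >= 0 && nx < n && ny >= 0 && ny < n && nz >= 0 && nz < n)
                            sum += src.rgb[src.idx(nx, ny, nz) + c];
                    }
                    dst.rgb[dst.idx(x, y, z) + c] = sum / 7.0f;
                }

    // Flip/flop: the next pass reads what this pass just wrote.
    std::swap(src, dst);
}
```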


Now some better screenshots of the whole setup. First, this is what my testing area looks like lit by the sun:

http://i.imgur.com/blfQ3h0.png


Here are the lighting conditions I'm testing in. It's two point lights (red and green) off to the right-hand side, both with cubemap shadows on (which have bugs; note the banding in the light. That's not me, that's stock. They give the best occlusion, so I chose them):

http://i.imgur.com/sLNinNG.png


That's what it looks like in stock. Now, here's what it looks like after a single propagation:

http://i.imgur.com/a3ewWBj.png


And here it is after 3 propagations:

http://i.imgur.com/R5c6HR9.png



OpenGL has been ready for use in commercial games for quite a while now ;) and Valve got great results when they ported.


Both NVIDIA and AMD have decent GL drivers now, much better than in the past anyway.


What I was trying to say is that the next version of OpenGL is coming and is going to be unveiled at GDC 2015 in a month. If it fits our needs we should use it, as it should give us better performance than the current OpenGL 4.3, in theory at least.


Aye, it should be interesting to see what they unveil. I'm feeling a bit excited to see where it goes m'self :D


As for work on this, got the updated voxelizing step mostly sorted with a few things left to patch.


Voxels are now a set size, so density is based on the size you set for them and the scale of the volume.
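
In other words, the grid dimensions fall out of the voxel size and the volume's extents, roughly like this (illustrative only, not the exact engine code):

```cpp
#include <cmath>

// Derive the grid resolution from the fixed voxel size and the volume's
// world-space extents; round up so the grid always covers the whole volume.
void gridDimsFromVoxelSize(const float volumeExtents[3], float voxelSize,
                           int outDims[3])
{
    for (int i = 0; i < 3; ++i)
    {
        outDims[i] = (int)std::ceil(volumeExtents[i] / voxelSize);
        if (outDims[i] < 1)
            outDims[i] = 1;
    }
}
```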


I also started implementing better debug drawing, which Andrew seems to be planning to optimize further. You can choose to have it draw a wireframe overlay of all tris affected by the voxelization process, as well as a wire grid of the voxels that have been generated.


Screens below:

http://ghc-games.com/public/LPV_Wireframe.png


http://ghc-games.com/public/LPV_VoxelWireframe.png


http://ghc-games.com/public/LPV_VoxelWireframe2.png


That last shot is with a voxel size of 0.5. Voxelization time for that entire level area at that resolution is about 15 seconds, and I believe it can be optimized even further going forward.


My next job is to get the propagation voxels and 3D texture to use the same density logic as the geometry grid. After that, I have a few ideas for the propagation logic. :)


That's a pretty good question. I'm honestly not sure. I haven't looked too much into the guts of Recast, but I'd hazard it's *probably* a split between voxelization and generating the new navmesh. You can do both pretty fast, but when you make it happen over large areas, it can get bad.


I did a lot of work to minimize how much computing this needs to go through, and I'm still convinced I can make it even faster, but I wouldn't be surprised if the Recast core does a bit more of a brute-forcey approach to the voxelization phase.


http://ghc-games.com/public/LPV3dTextureTest.png


This was tossed around in IRC last night, but I got the 3D texture and propagation grid stuff updated to the arbitrary grid size.


Propagation itself is broken, but everything else works. The image above is the entire level getting pure white ambient light as long as it's inside the LPV volume. (To clarify, when I said 15 seconds to voxelize, that's the area I've been testing this whole time, not just that hangar area. So it's voxelizing an entire moderate-sized arena level's geometry in 15 seconds. That little dot in the middle is the Soldier.)


From there, we just gotta get propagation fixed and the basics of the system are in. Andrew and I have been going back and forth about encoding spherical harmonics into the voxel ambient data so we can get directionalized ambient lighting in there as well. He's been doing some test runs of the math that make it seem promising.
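
For reference, the kind of math being tested is the standard 2-band spherical-harmonics projection, something like this (exploratory only, nothing like this is in the branch yet):

```cpp
#include <cmath>

struct SH4 { float c[4]; }; // one colour channel: the L0 term plus three L1 terms

// Project a light contribution arriving from 'dir' into 2-band SH.
SH4 projectDirection(const float dir[3], float intensity)
{
    float len = std::sqrt(dir[0]*dir[0] + dir[1]*dir[1] + dir[2]*dir[2]);
    float x = dir[0] / len, y = dir[1] / len, z = dir[2] / len;

    SH4 sh;
    sh.c[0] = 0.282095f * intensity;      // Y(0, 0)
    sh.c[1] = 0.488603f * y * intensity;  // Y(1,-1)
    sh.c[2] = 0.488603f * z * intensity;  // Y(1, 0)
    sh.c[3] = 0.488603f * x * intensity;  // Y(1, 1)
    return sh; // accumulated per voxel, this gives direction-aware ambient light
}
```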


I just discovered that I messed up the code loading the final result into the volume texture. By loading it in the wrong order, X was Z and Z was X, so nothing was displaying right. After fixing that, the results from the Cornell box are a lot better:


http://i.imgur.com/nGj7qjy.png
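
For anyone hitting the same thing, the fix boils down to keeping the flattening order consistent. One common layout (not necessarily the exact engine code) looks like this:

```cpp
// x varies fastest, then y, then z. The bug amounted to feeding the loop
// counters in swapped (z, y, x) order, which mirrors the data across the
// X/Z diagonal. Illustrative only.
inline int flatIndex(int x, int y, int z, int width, int height)
{
    return x + y * width + z * width * height;
}
```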


What it really needs now is the ability to pull the color from the material and store it in the geometry voxel so the bounced light can pick up that color. That way it'll light the backside of the box with red on the left side and green on the right side.


So, JeffR had another one of his crazy ideas: couldn't we trace a ray into the voxel grid in real time to do glossy reflections? Last night I set out to answer that question. After a number of hours of fighting with angles, normals, etc., I emerged victorious!


-myAkLhwP4Q


As you can see, the calculations are still a little rough around the edges, but I'd say it's a good proof of concept for the idea. Combine this with a screen-space reflection shader and I think we could get pretty decent glossy reflections that still work with off-screen objects.
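
The basic idea is just a fixed-step march of the reflected ray through the grid. In rough C++ (the real thing lives in the shader, and sampleLightGrid here is an assumed helper, not actual engine code):

```cpp
#include <cmath>

struct Color { float r, g, b, a; };

// Assumed helper: sample the propagated light grid at a world position.
// Returns a < 0 when the position is outside the volume or the voxel is empty.
Color sampleLightGrid(const float pos[3]);

Color traceReflection(const float origin[3], const float viewDir[3],
                      const float normal[3], float stepSize, int maxSteps)
{
    // Reflect the view direction about the surface normal: r = v - 2(v.n)n.
    float d = viewDir[0]*normal[0] + viewDir[1]*normal[1] + viewDir[2]*normal[2];
    float ray[3] = { viewDir[0] - 2.0f * d * normal[0],
                     viewDir[1] - 2.0f * d * normal[1],
                     viewDir[2] - 2.0f * d * normal[2] };

    float p[3] = { origin[0], origin[1], origin[2] };
    for (int i = 0; i < maxSteps; ++i)
    {
        for (int k = 0; k < 3; ++k)
            p[k] += ray[k] * stepSize;   // march along the reflected ray
        Color c = sampleLightGrid(p);
        if (c.a >= 0.0f)
            return c;                    // first lit voxel we hit
    }
    return Color{ 0.0f, 0.0f, 0.0f, -1.0f }; // nothing hit within range
}
```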


I just pushed to the branch with two great updates. First, I cleaned up the options in the properties a bit, and I added the reflection shader as an option that's rendered on top of the regular stuff. You can turn it on/off.


Second, voxels now detect the diffuse color assigned to the material. The detected color is blended with the light that bounces off it. This finally gives us some color bleed and reflection color:


http://i.imgur.com/Kdevy2A.png


http://i.imgur.com/2iUxAWd.png
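
The blend itself is nothing more than tinting the bounced light by the voxel's diffuse color, along these lines (illustrative only, the real blend in the branch may differ):

```cpp
struct RGB { float r, g, b; };

// Component-wise multiply: a red surface bounces mostly red light.
RGB bounceLight(const RGB& incomingLight, const RGB& voxelDiffuse)
{
    return RGB{ incomingLight.r * voxelDiffuse.r,
                incomingLight.g * voxelDiffuse.g,
                incomingLight.b * voxelDiffuse.b };
}
```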

