Work Blog - JeffR

JeffR
Steering Committee
Posts: 878
Joined: Tue Feb 03, 2015 9:49 pm
 
by JeffR » Sat Apr 07, 2018 9:39 am
Hey everyone, that time again!

So, first and foremost, if you didn't see it, the 3.10.1 hotfix build is out, which resolves a few issues that were causing problems for users working with 3.10, most specifically Microsoft breaking VS2017's ability to compile it.

So, what's been shaking in the land of the R & D? A loooooot.

The big work has been tied to the rendering stuff being worked on, obviously. Timmy's nearing the end of the initial PBR work, so we should be able to get that into the beat-up and merge-in phase soon, which is exciting.

I've also tracked down a rather unpleasant bug that caused crashes when you reloaded missions while using Reflection Probes, so that's another bee removed from the bonnet.


Associated with that is the Render Pipeline work I mentioned in the last blog. Solid progress was made on that, but it led into some problems with how the material/shadergen system integrates into...well, basically everything, currently. So we decided that the best bet would be to put a temporary hold on that and jumpstart the material system refactor. It was always coming, but we decided to move it up a bit.

Material System Refactor
So what's the deal with this? Well, while the current material system, and more specifically Shadergen, have worked very reliably, anyone who's really tried to do advanced work with shaders or materials, or to expand the material/shadergen system, knows that it's a tentacle monster molesting a plate of spaghetti. Everything is intertwined, the relationships and how data is processed and passed around are hard to follow, and as mentioned with the render pipeline work, it extends well outside the mere 'shadergen' part and into lots of logic for what, where, when and how things render in general.

This makes updating and expanding stuff, like PBR, or the Render Pipeline, a lot more complex. So, the thing to do is rework it to not be.

The good news is that I wasn't going into this blind. I'd already blogged about the Custom Shader Feature stuff before. If you missed it, feel free to go back and read up on it. It was basically a nice integration into the material system where we could craft shader code in script and have it integrate nicely into rendering without having to manually muck through the shadergen logic to add that new behavior.

So the Material Refactor is basically the ultimate extrapolation of it. The New-Shadergen will be far, far, far simpler than the current monstrosity, and the primary focus for shader/material authoring will be either writing the shader by hand or using a visual editor to design one. The system will do the assembly work of the shader based on the logic explicitly written, and the shader will inform the system what inputs it utilizes and expects. From there, the system can plug the data in based on common (or custom) fields, so whatever you need the material/shader to do, it can do it, all without needing to worry about what the backend is doing.

How's that work?
Right, let's get into the particulars.

In the current system, everything is dictated by a surprisingly complex system of Material Feature Flags. Any time a material does anything (uses a diffuse texture, runs an animation, etc.), those settings flip the material features on when the material is loaded for generation in shadergen. Shadergen then parses the features, and each feature has its own functions called that inject code, variables, etc. into the shadergen system.

Shadergen ITSELF is actually pretty straightforward, and is mostly just about making sure the variables used appear in the right order and writing out the code body. The MF system that sits on top of it, however, is very involved.
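
To make the shape of that concrete, here's a rough sketch of the flag-driven flow. The feature names and structures are made up for illustration, not the engine's actual identifiers:

```cpp
// Illustrative only: material settings flip flags, and each active flag later
// injects its own variables and code body into the generated shader.
#include <bitset>
#include <string>

enum MaterialFeature { MF_DiffuseMap, MF_NormalMap, MF_UVAnimation, MF_Count };

struct MaterialSettings
{
   bool hasDiffuseTexture = false;
   bool hasNormalMap      = false;
   bool animatesUVs       = false;
};

// Step 1: loading the material flips the relevant feature flags on.
std::bitset<MF_Count> collectFeatures( const MaterialSettings& mat )
{
   std::bitset<MF_Count> features;
   features[MF_DiffuseMap]  = mat.hasDiffuseTexture;
   features[MF_NormalMap]   = mat.hasNormalMap;
   features[MF_UVAnimation] = mat.animatesUVs;
   return features;
}

// Step 2: shadergen walks the flags and each active feature injects code.
std::string generatePixelBody( const std::bitset<MF_Count>& features )
{
   std::string body;
   if( features[MF_UVAnimation] )
      body += "   uv += uvScrollRate * time;\n";
   if( features[MF_DiffuseMap] )
      body += "   col *= diffuseMap.Sample(samp, uv);\n";
   if( features[MF_NormalMap] )
      body += "   norm = unpackNormal(normalMap.Sample(samp, uv));\n";
   return body;
}
```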

That on its own would be passable, but it extends further by having non-material systems inject material features into the generation based on a number of things. Important things that rendering can't happen without, but it's one further layer of complexity that spiderwebs aggressively and makes working with the system a huge pain. Things like foliage wind, deferred rendering, hardware skinning, etc. all do this.

So when I went to work on the Render Pipeline, what we ran into was a requirement for it to inject those critical material features, which drastically pigeonholes the flexibility of the RP system and somewhat defeats the purpose.

So after some discussions, and spurred by my research of the Custom Shader Features, we agreed that pushing on for the Material Refactor would be a good idea, if only so working on stuff going forward doesn't involve dealing with that horrid monstrosity ever again.

Ok, so what's the refactor entail?
The big thing is dropping the easy-breezy features system that currently exists and having things MUCH more explicitly defined. If some element of the shader generation is triggered, it happens on purpose, in a clear way.

So, let's say we want to make a material. We'll ignore the authoring method for the moment and focus just on how the system deals with a user-authored shader/material.
In order for the system to plug in properly, you need to tell it what inputs the shader expects. This can be a wide range of things, but ultimately the list is actually rather predictable. As such, we plan to have a big list of normal inputs that go into the shader, so if you want to, say, pass in an AlbedoTexture, you just inform the material definition that that's an input.

At runtime, when we go to inform the shader of the data we're binding, we have a big ol' list of those common inputs, and because the material definition knows we're expecting an AlbedoTexture, we can quickly skim through the list and bind that Shader Constant and pass in our texture.
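
To make that concrete, here's a minimal sketch of the declare-and-bind idea. The class and function names are placeholders I'm using for illustration, not what the refactor will actually ship:

```cpp
// Minimal sketch of "declare your inputs, the engine binds them".
#include <string>
#include <unordered_map>
#include <vector>

struct TextureHandle {};                 // stand-in for a GFX texture
struct ShaderConstHandle { int slot; };  // stand-in for a bound shader constant

struct MaterialDefinition
{
   // The material author just lists the inputs the shader expects.
   std::vector<std::string> inputs;      // e.g. { "AlbedoTexture", "RoughnessMap" }
};

class ShaderBinder
{
public:
   // The "big ol' list" of common inputs the engine knows how to supply.
   void registerCommonInput( const std::string& name, ShaderConstHandle handle )
   {
      mCommonInputs[name] = handle;
   }

   // At render time, skim the material's declared inputs and bind whatever
   // the engine has for each one; unknown names are simply skipped.
   void bind( const MaterialDefinition& mat,
              const std::unordered_map<std::string, TextureHandle>& data )
   {
      for( const std::string& input : mat.inputs )
      {
         auto handle = mCommonInputs.find( input );
         auto tex    = data.find( input );
         if( handle != mCommonInputs.end() && tex != data.end() )
            setTexture( handle->second, tex->second );
      }
   }

private:
   void setTexture( ShaderConstHandle, const TextureHandle& ) { /* GFX call here */ }
   std::unordered_map<std::string, ShaderConstHandle> mCommonInputs;
};
```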

Because we're operating on a list of inputs, the system doesn't really care what the inputs are or how many there are, so long as they're named correctly. So your material can be as simple as displaying the color red, or as complex as the water surface.

The other huge advantage of this method is that Custom Materials are basically irrelevant now. Regular materials can do the exact same thing, but go through the standard methodology without any janky hacked-in voodoo. You give it the inputs, the engine binds the inputs, and the shader uses the inputs to do the work you wrote the shader to do.

So all the stuff that uses custom materials now, such as lights, water, terrain, etc., can be shifted to utilize JUST Material, which lets us streamline out another big chunk of code.

So, we have a cleaner, explicit backend without a spaghetti-apocalypse, and we can have a singular type of material. Any other benefits?

Well, tying back to what I said about the authoring method, the advantage there is that as long as the shader code uses the right inputs and, well, is valid shader code, the system doesn't care if you hand-write it or get it some other way.
Some other way being a cleaned up and refined ShaderGen.

Separate from the horrid fester-pile that is the Material Features system, Shadergen itself is, as said, pretty clean and smart. It just does what you tell it to. So, we'd have a new interface to tell it what to do, and I'm sure most of you are familiar with the notion of Visual Shader editing. Heck, I even started working on a GUI control years back in anticipation of getting to this point. It's ugly and half-works, but the notion of being able to just connect the bits to author a shader, even if you don't know shader code itself, should open up the whole spectrum of visual fanciness for everyone.

All it'll really do is act as an instruction flow. Each node has a small chunk of Shadergen logic in it, and you connect the nodes together, visual-script style, leading into a final main material node. When that graph is parsed, Shadergen has all the code bits and inputs it expects and will just generate the shader you told it to make.
No voodoo, no arbitrary feature injections.
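
As a toy illustration of that instruction flow (none of these types are the actual editor or shadergen classes), each node would just carry its code chunk and declared inputs, and walking the graph from the final material node assembles the shader body:

```cpp
// Toy node-graph assembly: upstream chunks are emitted before the nodes that
// consume them. Assumes the graph is acyclic, as a node editor would enforce.
#include <set>
#include <string>
#include <vector>

struct ShaderNode
{
   std::string              codeChunk;   // e.g. "float3 albedo = albedoMap.Sample(samp, uv).rgb;"
   std::vector<std::string> inputs;      // e.g. { "AlbedoTexture" }
   std::vector<ShaderNode*> upstream;    // nodes this one is connected to
};

// Depth-first walk ending at the main material node.
void assemble( const ShaderNode* node, std::string& body, std::set<std::string>& inputs )
{
   for( const ShaderNode* up : node->upstream )
      assemble( up, body, inputs );

   body += node->codeChunk + "\n";
   inputs.insert( node->inputs.begin(), node->inputs.end() );
}
```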

Well, except for Material Permutations.

Uh oh, "except for" sounds scary
Nah, it's really not that bad.
As I said, part of the problem is that the Material Features system felt really arbitrary and stuff kicked off features at seemingly random times, depending on render mode, material definition, whether the shape was animated or wind-driven, or whether you had settings low enough that features had to be disabled, etc.

Permutations, instead, will be much more explicit. When defining the material, you basically have a list of accepted permutations.
Does this material work with dynamic lighting? Static lighting? Should it work on models that are animated and thus use hardware skinning? Should it work with wind a la foliage?

These different permutations are basically the explicit command of "Hey, when we generate our material, I need a permutation that supports this". Shadergen will have some code chunks to make that happen, such as with the hardware skinning permutation making sure to add the bone transforms to the vertex shader inputs. But it happens specifically because the material definition told it to, and thus, the end user said so.
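
A rough sketch of what that explicit declaration could look like; the permutation names and the MaterialDefinition layout here are assumptions for illustration only:

```cpp
// Explicit permutation declarations instead of arbitrary feature injection.
#include <string>
#include <vector>

enum class Permutation { StaticLighting, DynamicLighting, HardwareSkinning, FoliageWind };

struct MaterialDefinition
{
   std::string              name;
   std::vector<Permutation> supportedPermutations;   // the explicit "I need these" list
};

bool supports( const MaterialDefinition& mat, Permutation perm )
{
   for( Permutation p : mat.supportedPermutations )
      if( p == perm )
         return true;
   return false;
}

// Shadergen only adds a permutation's code chunk because the definition asked
// for it -- e.g. hardware skinning adds bone transforms to the vertex inputs.
std::string extraVertexInputs( const MaterialDefinition& mat )
{
   std::string inputs;
   if( supports( mat, Permutation::HardwareSkinning ) )
      inputs += "float4 blendWeights : BLENDWEIGHT; uint4 blendIndices : BLENDINDICES;\n";
   return inputs;
}
```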

If a permutation isn't supported but gets requested, it'll just not utilize that material and use a fallback (such as the No Material warning mat).

Likewise, stemming off from this, I plan to take a note from the lighting shaders and look for existing shader permutations much more smartly. One of the problems everyone's seen is that the first time you load and look around, you get load hitches, and part of that is because it usually kicks off a regeneration of the shaders to make sure they're up to date for everything.

Generation takes time, so you get stutters. Given we'll have an explicit list of allowed permutations, we can just generate the permutations during editing time, and then when the material system goes to use a shader, we look for our 'MyCoolMaterial + Hardware Skinning' permutation and bind that sucker. No on-the-fly generation should cut down on the hitching a lot.
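
Conceptually, that lookup is just a cache keyed on material + permutation, something like this sketch (the key format and types are assumptions):

```cpp
// Pre-generated permutation lookup: shaders for each allowed
// (material, permutation) pair are built at edit time and stashed in a map,
// so at render time it's a key lookup and bind -- no on-the-fly generation.
#include <string>
#include <unordered_map>

struct CompiledShader {};  // stand-in for a compiled GFX shader

class ShaderPermutationCache
{
public:
   void add( const std::string& material, const std::string& permutation, CompiledShader* shader )
   {
      mShaders[material + "+" + permutation] = shader;
   }

   // e.g. find("MyCoolMaterial", "HardwareSkinning")
   CompiledShader* find( const std::string& material, const std::string& permutation ) const
   {
      auto it = mShaders.find( material + "+" + permutation );
      return it != mShaders.end() ? it->second : nullptr;  // nullptr => use the fallback material
   }

private:
   std::unordered_map<std::string, CompiledShader*> mShaders;
};
```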

The other advantage is that because we have a predictable, simple list of permutations, stuff like the Render Pipeline doesn't need to fuss with the absurd baggage of trying to figure out what junk needs to be activated on the fly and stuffed into any given material as we render, and that should help a great deal in fighting off the spaghetti monster.

So all in all, should be a pretty sweet shindig. I've expanded my Custom Shader Feature work to basically author the entirety of a functioning shader completely apart from the normal material feature generation, so I can blaze forward on the main refactor now.

Now, that got a bit long and I don't have much graphics-wise to show for that end yet, so let's move on to a bit of shiny to make you feel better after that wall of technical junk.

C++ Assets
One problem that's been bothering me is that you have great portability for modules and assets, where you can just drop a folder into the data directory, load the game and pow, sweet new functionality.

But trying to add in new functionality on the engine side is the terrible, dumb, boring slog it's always been: add files, then write code, then compile, and ugh. So terrible ;)

But more important than that, anyone trying to use a module you wrote that uses custom C++ was in a weird spot with how you'd actually distribute that C++ code in a non-PITA way.

Enter, the ability to make C++ assets in the asset browser:



As you can see, you can generate from a nice list of preestablished types:
    * static classes - good for manager objects
    * regular classes - blank C++ classes for you to do basically anything off of
    * Game Objects - good for porting up a game object you prototyped in script and now want to reap that delicious performance of native code with
    * Components - same as Game Objects, just for whatever new components you drafted up
    * Script Object - a custom type of script object to do any weird behavior you want, but in a simple-to-create way
    * Editor Tool - using the new streamlined EditorTool class as its parent, it's a lean way to do fancy new editor tools without the baggage of writing an entire custom GuiEditor control
    * GuiControl - for any custom gui objects you may want to write
Each of those has a template file set that is filled out based on the asset's name and generates into the module's source/ directory. Tweaks to the cmake file mean it scans for those and, on a generate pass, populates them into your engine project ready to be compiled.

So portability with custom C++ code becomes way easier again. The files can be bundled in the module alongside the assets that utilize it, and all you need to do is drop it in, run a generate on your project, and do a compile, and poof, the custom code is yours and executing, easy-peasy.

You also see a *_Module.cpp file, which is designed for any auto-execute behavior. It hooks through the engine's module system (as in engine modules, not asset modules), so any stuff you need pre-setup on engine launch can be kicked off there automatically and you don't have to manually set it up in script.
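
To show the general shape of that auto-execute idea without tying it to the engine's exact module macros, here's a generic static-registration sketch. The real *_Module.cpp goes through the engine's module system rather than a bare static like this:

```cpp
// Generic sketch of "runs automatically at engine launch" behavior.
#include <cstdio>

struct MyModuleBootstrap
{
   MyModuleBootstrap()
   {
      // Anything the module needs set up at launch goes here:
      // registering console classes, creating manager singletons, etc.
      std::printf( "MyModule: initialized at engine launch\n" );
   }
   ~MyModuleBootstrap()
   {
      // Teardown on shutdown.
      std::printf( "MyModule: shut down\n" );
   }
};

// The static instance is constructed during startup, so the module's setup
// runs automatically without any script-side bootstrapping.
static MyModuleBootstrap sMyModuleBootstrap;
```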

Other bits and bobs
I tracked down some further issues with the popup menu stuff and am working on a better cleanup of it so there's less janky behavior, with stuff not firing off their functions and the like. I still haven't managed to fix the crash that Bloodknight originally spotted, but I've got the general area pegged, so it's mostly just tracking down the actual problem spot.

Speaking of regenerating projects, Timmy's got a start on the new Project Manager, which is most excellent and should make managing modules like the above one much simpler, with less hand-copying of everything, as well as letting you quickly regenerate projects as required (such as with those C++ assets).

I have a nagging feeling that I'm forgetting a few bits yet, but it's late and this has rambled on quite a bit already. I'll be sure to update the thread with some nice pictures of the material refactor as that begins to take shape, as well as anything else we've worked on in the past month on the R&D side that I'd forgotten to jot down.

Peace out, guys!
-JeffR
Steve_Yorkshire
Posts: 299
Joined: Tue Feb 03, 2015 10:30 pm
 
by Steve_Yorkshire » Sat Apr 07, 2018 4:34 pm
but it led into some problems with how the material/shadergen system integrates into...well, basically everything, currently.
And, don't we all know that feeling ;)
tentacle monster molesting a plate of spaghetti
Image
Timmy's got a start on the new Project Manager
Image
And just as I've nearly mastered using cMake! ... kinda :?

Cool (and somewhat baffling at times :oops:) stuff as ever @ JeffR :mrgreen:
Razer
Posts: 38
Joined: Tue Jan 10, 2017 11:29 am
by Razer » Mon Apr 09, 2018 1:07 pm
With all the work needed to move Torque 3D to new graphics, would it have been a better choice to switch to a ready-made open source DX11 3D engine like Urho3D instead of writing a new 3D rendering engine?
https://urho3d.github.io/
Wouldn't that have been a good choice, and an opportunity for new changes like a new data format and moving the Torque editor over to it?
Timmy
Posts: 366
Joined: Thu Feb 05, 2015 3:20 am
by Timmy » Tue Apr 10, 2018 1:41 am
With all the work needed to move Torque 3D to new graphics, would it have been a better choice to switch to a ready-made open source DX11 3D engine like Urho3D instead of writing a new 3D rendering engine?
https://urho3d.github.io/
Wouldn't that have been a good choice, and an opportunity for new changes like a new data format and moving the Torque editor over to it?
No. Urho is a game engine, not a rendering engine; it's the same as saying T3D should use Godot. That statement makes no sense.
Last edited by Timmy on Tue Apr 10, 2018 8:02 am, edited 1 time in total.
Duion
Posts: 1132
Joined: Sun Feb 08, 2015 1:51 am
 
by Duion » Tue Apr 10, 2018 2:50 am
It makes no sense, especially because Torque3D has already worked with DX11 for quite a while now, like over a year or even more.
And having DX11 does not give any benefit by itself, so that's also a misconception. DX11 just opens up new technical possibilities and frees up resources so you can improve the graphics, but by itself you will not see a difference with DX11 vs DX9.
Timmy
Posts: 366
Joined: Thu Feb 05, 2015 3:20 am
by Timmy » Tue Apr 10, 2018 10:25 am
It makes no sense, especially because Torque3D has already worked with DX11 for quite a while now, like over a year or even more.
And having DX11 does not give any benefit by itself, so that's also a misconception. DX11 just opens up new technical possibilities and frees up resources so you can improve the graphics, but by itself you will not see a difference with DX11 vs DX9.
In T3D that is mostly true, because the T3D GFX API is pretty old and crappy now. It's designed around D3D9, so when creating the D3D11 backend it was stuck 'emulating' D3D9, which doesn't allow it to take full advantage of the far superior design of D3D11. Setting T3D aside, there are decent performance advantages to using D3D11 over D3D9, much like there are now big advantages to using Vulkan over older API designs such as D3D11/OpenGL.
Duion
Posts: 1132
Joined: Sun Feb 08, 2015 1:51 am
 
by Duion » Tue Apr 10, 2018 1:44 pm
I'm always a bit skeptical about magical claims of performance increase, since you would need to actually measure it to confirm that, and most people's projects are probably not complex enough to hit the performance limits.
Timmy
Posts: 366
Joined: Thu Feb 05, 2015 3:20 am
by Timmy » Tue Apr 10, 2018 2:25 pm
I'm always a bit skeptical about magical claims of performance increase, since you would need to actually measure it to confirm that, and most people's projects are probably not complex enough to hit the performance limits.
*edit nvm, i'll leave this discussion as it is.
JeffR
Steering Committee
Posts: 878
Joined: Tue Feb 03, 2015 9:49 pm
 
by JeffR » Wed Apr 18, 2018 4:05 pm
Yeah, basically, it's more of a "more efficiency is always good".

The notion behind the improvements to draw performance between DX11 and DX9, for example, could be conveyed with an analogy about moving.

If you had to move from your current residence to a new house, and pick up each item, walk it over to the moving truck, deposit it, and walk back for a new thing, it would take forever. Similarly, it would take forever to unload the truck and put stuff in the new house as well.

However, if you pack stuff up into boxes, you can quickly move a whole bunch of stuff in one trip, which drastically cuts the amount of time it takes to put everything on the truck (and later take it off the truck when you get to the new house).

Now, this generally saves time, but where the big claims of performance increase in the new APIs come from - which we're not currently properly capitalizing on because the existing GFX layer is still written for the old paradigm, which needs a'changing - is the idea that not only do you pack stuff in boxes, but you pack stuff as efficiently as possible in boxes AND keep stuff that's related in the same box.

Rather than sticking your bath towels in with the knives, you would have all your kitchen stuff together so you only need to move it and unpack it once, etc. rather than wasting a bunch of time figuring out what-goes-where even after you've moved the boxes to the new place.

The metaphor is obviously hugely simplifying things, but the general notion of how newer APIs allow you to better organize, pack and process the draw data, which leads to much more efficient rendering, carries over.
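
In code terms, the "same box" idea maps to things like sorting and grouping draws so expensive state changes happen once per group instead of once per draw. A purely illustrative sketch, not the engine's render bin code:

```cpp
// Group related work together ("pack the kitchen stuff in one box") so the
// expensive material bind happens once per group rather than once per draw.
#include <algorithm>
#include <cstdint>
#include <vector>

struct DrawItem
{
   uint32_t materialId;
   uint32_t meshId;
};

void submit( std::vector<DrawItem>& draws )
{
   std::sort( draws.begin(), draws.end(),
              []( const DrawItem& a, const DrawItem& b ) { return a.materialId < b.materialId; } );

   uint32_t boundMaterial = UINT32_MAX;
   for( const DrawItem& d : draws )
   {
      if( d.materialId != boundMaterial )
      {
         // bindMaterial(d.materialId);   // expensive state change, now once per group
         boundMaterial = d.materialId;
      }
      // drawMesh(d.meshId);              // cheap per-item work
   }
}
```
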
JeffR
Steering Committee
Posts: 878
Joined: Tue Feb 03, 2015 9:49 pm
 
by JeffR » Fri Jun 22, 2018 9:26 am
Hey guys!

Whoo, been a bit. Had my head so far in the trenches I forgot to come up and do a workblog post last month. My deepest apologies about that. In an effort to make it up to you guys, this one'll likely end up being quite long, as there's a ton to talk about. So, let's get into it!

PBR
This specter has been wafting around for quite a while now, huh? The good news is, it's really close. Most of the core math is sorted out. We're basically down to sorting out blending behavior, and zoning.

Now, when I say blending behavior, I'm talking about taking a probe's data and applying it into its area of influence when there's another probe's data there already. If you've got some graphical know-how, you may be thinking "We already do that with lights, right? Light A + Light B = combined result. Easy." And for the most part, you're right.

The problem stems out of the fact that sometimes, you want probes to blend, and sometimes you don't. I know, that sounds really dumb, but let me explain.

Let's take this hypothetical building, for example:

Image

We obviously have our skylight, which provides our lighting information - irradiance and specular reflections - and lights the ground and outer walls, and some light would indirectly bounce into the open tunnel area.

If we have our all-affecting skylight probe which provides the general info of the sky, we might also add a probe into the tunnel area to ensure it gets more accurate, directionalized lighting information. After all, as far as that tunnel-room is concerned, light isn't coming from EVERYWHERE, it's coming from one specific direction: out the tunnel.

We can see from the offline render there that the correct behavior is that we get some lighting in from the outside, but not 100% of it, which makes logical sense. The tunnel-room sees some directionalized lighting, but not lighting from straight up where the sky/sun is. Moreover, the room in the back there sees virtually no light at all. But it does see SOME.

Image

If we view from the room, it definitely gets a little bit of lighting from more indirect bounces. So we would put a probe in there as well. However, in this case, we want 0 influence from the skylight, as that room can't see the sky at all.

Let's look at what happens when we just do a basic additive blend of all our probes' info, including the skylight.

Image

So, to articulate: our skylight adds a nice sky-blue influence from the reflected light of the sky, and our probe adds the aforementioned not-quite-full darkness of the captured propagated light. The bottom right corner is what we want: the probes inside the building add their influence, overriding, but casually blending into the blue of the skylight's influence.
If we do a straight additive, however, we get the blue of the skylight contaminating and washing everything out. The room that should be barely lit from propagated light is actually tinted a lovely sky blue, which isn't at all correct. This occurs because when we do additive, we take the higher value. If our probe is barely lit, it's contributing a nearly-black color, whereas the skylight provides a much brighter blue. With a basic 1+1 additive, obviously the color that gets preferred is that blue, contaminating the results, as said. We need a smarter blend that lets the local probes have final say over the colors in their area, but blends nicely into the skylight.

So, what we've been juggling with the past couple weeks is finding an ideal way to achieve the blending behavior we want. We want probes to have dominance in their area of influence so they can only slightly light areas if need be, while letting the skylight provide the map-wide general lighting that the sky provides.
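
Here's a toy numeric version of that, assuming a made-up influence weight rather than the actual math we'll ship: a straight add lets the bright sky term dominate the nearly-black interior term, while a weighted blend gives the local probe authority inside its influence and fades to the skylight at the edge.

```cpp
// Toy version of the probe blend problem; the weighting scheme is illustrative.
struct Color { float r, g, b; };

Color additiveBlend( Color skylight, Color localProbe )
{
   // The bright sky term dominates the nearly-black interior term.
   return { skylight.r + localProbe.r,
            skylight.g + localProbe.g,
            skylight.b + localProbe.b };
}

// 'influence' is 1 deep inside the local probe's volume and falls to 0 at its
// edge, so the interior stays dark but the edge fades into the sky term.
Color priorityBlend( Color skylight, Color localProbe, float influence )
{
   return { localProbe.r * influence + skylight.r * ( 1.0f - influence ),
            localProbe.g * influence + skylight.g * ( 1.0f - influence ),
            localProbe.b * influence + skylight.b * ( 1.0f - influence ) };
}
```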

We've got a couple working ideas, one of which is to allow adding of zones. Similar to culling zones already in the engine, you could add zones that dictate what probes are utilized in that area. This comes with a ton of other benefits as well.

One of the things we've been looking at is how to better manage the probes. What probes should be baked, when they should be baked, can they be triggered to be rebaked, etc. Grouping them together via the zones lets us go 'ok, this zone here, rebake the probes inside you'. This is relevant if you were to, say, open a door or turn on a light: you could enact a rebake of just the probes for that zone/group, to get the changes in lighting without needing to waste time figuring out what to rebake.

It also lets us do tricks like stencil clipping. I'm sure most of you have had a situation where you add a light to a room, then find that the light bleeds through the wall, casting light on the opposing side's floor. This can be remedied by adding shadows, but that's kind of an expensive way to just rid ourselves of an artifact. With the zones approach, we can clip the rendering of everything in the zone to JUST the zone. So any probes (and eventually lights) that overstep the limits of the zone slightly for whatever reason never 'bleed through'.

In order to better facilitate this behavior, we're shifting probes over to utilize texture arrays, which should let us render the zone in one go and do all the probes at once, instead of a draw call per probe, which will help efficiency a great deal. I also implemented a static list of active probes for the render bin, so instead of deleting and recreating the render instance for each probe every frame, we have a static list that is retained in memory frame-to-frame, only changing when we add or delete a probe.
This allows us to have much better cache coherency and reduce memory thrashing since we're not constantly dumping a vector of our probe info then re-adding to it every frame.
Likewise, the plan is to shift the other render bins over to a similar setup which will reduce the overhead of basically everything drawing as much as possible, so our render cost is basically JUST the cost to render, as opposed to additional overhead from all the memory management and extra function hops that come from the existing list-thrash behavior.
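
A sketch of that retained-list idea, with placeholder names rather than the actual render bin classes:

```cpp
// Keep one persistent list of probe render instances instead of clearing and
// refilling a vector every frame; it only changes on register/unregister.
#include <vector>

struct ProbeRenderInst { /* cubemap handles, position, radius, ... */ };

class ProbeRenderList
{
public:
   // Called when a probe is added to or removed from the scene -- not per frame.
   void registerProbe( ProbeRenderInst* inst ) { mActiveProbes.push_back( inst ); }
   void unregisterProbe( ProbeRenderInst* inst )
   {
      for( size_t i = 0; i < mActiveProbes.size(); ++i )
         if( mActiveProbes[i] == inst )
         {
            mActiveProbes.erase( mActiveProbes.begin() + i );
            break;
         }
   }

   // Called every frame: just read the stable list, no rebuild, no thrash.
   const std::vector<ProbeRenderInst*>& getActiveProbes() const { return mActiveProbes; }

private:
   std::vector<ProbeRenderInst*> mActiveProbes;
};
```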

So while PBR isn't *quite* there yet, the majority of the math itself is; what's left is the supporting behavior to make everything blend together nicely. To show where it is currently, here are a few shots:

Image
This is actually a bit old, here. The BRDF textures weren't generating right at this point, so it's not technically correct, but it still looks fairly close even so.

Image
For this one, it's using the current math, so it's almost fully right, but I didn't correct the texture maps on export, so it's actually using incorrect channels for the PBR info, so we're missing out on the emissive stuff, and the metalness/roughness aren't correct either.

Even so, when you compare it to the ground-truth render:
Image

We're looking pretty danged close.

Random Misc. Stuff I Just Remembered
Man, what an inventive section title!

No, but jokes aside, two little tidbits that came up that I got mostly working, but that need some shoring up before roll-in, would be gui3DProjectCtrl and autosave/restore.

For the gui control, check this guy out:


Pretty simple, but it lets you map any gui control to a point in space or have it track an object. Awesome for objective markers and the like, though I do want to create a simplified interface for it so you can do Map Notes. This would be a bit of text you add to a marker placed in the level. During editing, it will display said text at that spot. Good for leaving your level designer suggestions for tweaks to a spot, or leaving some notes to yourself about positioning/flow changes, etc.
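
Under the hood, a control like that mostly boils down to a world-to-screen projection each frame. Here's the generic math (not the actual gui3DProjectCtrl code), which also tells you when the point is behind the camera so the control can hide:

```cpp
// Generic world-to-screen projection for pinning a gui element to a world point.
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[16]; };    // row-major: rows are m[0..3], m[4..7], m[8..11], m[12..15]

// Returns false when the point is behind the camera (control should hide).
bool worldToScreen( const Vec3& world, const Mat4& viewProj,
                    float screenW, float screenH, float& outX, float& outY )
{
   Vec4 clip = {
      viewProj.m[0]*world.x  + viewProj.m[1]*world.y  + viewProj.m[2]*world.z  + viewProj.m[3],
      viewProj.m[4]*world.x  + viewProj.m[5]*world.y  + viewProj.m[6]*world.z  + viewProj.m[7],
      viewProj.m[8]*world.x  + viewProj.m[9]*world.y  + viewProj.m[10]*world.z + viewProj.m[11],
      viewProj.m[12]*world.x + viewProj.m[13]*world.y + viewProj.m[14]*world.z + viewProj.m[15]
   };

   if( clip.w <= 0.0f )
      return false;

   // Perspective divide, then map NDC [-1,1] into pixel coordinates (y flipped).
   outX = ( clip.x / clip.w * 0.5f + 0.5f ) * screenW;
   outY = ( 1.0f - ( clip.y / clip.w * 0.5f + 0.5f ) ) * screenH;
   return true;
}
```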

Talk of level designers leads into the next tidbit: Autosave/Restore:
Image

I need to add several autosaves instead of the single one it does now, but at a given interval it'll do a save of the current map to an *.autosave file. This way if you ever get a crash or some kind of destructive change, instead of losing anywhere from 5 minutes to a few hours of work, you can go to the menu item there, select an autosave file to load from, and pick back up having only lost a few minutes of work at most.

It currently only works with the level file itself, but it can be expanded over time to support terrain changes and other "associated" files.
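
For the curious, the core of that is just an interval timer with a handful of rotating save slots. A minimal sketch, with the file naming and save hook being assumptions rather than the actual implementation:

```cpp
// Interval-based autosave with slot rotation, writing "*.autosave" files.
#include <string>

class AutosaveTimer
{
public:
   AutosaveTimer( float intervalSec, int maxSlots )
      : mInterval( intervalSec ), mMaxSlots( maxSlots ) {}

   // Call from the editor's update loop with the frame's delta time.
   void update( float dt, const std::string& levelPath )
   {
      mElapsed += dt;
      if( mElapsed < mInterval )
         return;

      mElapsed = 0.0f;

      // Rotate through a few slots so one bad save can't clobber every backup:
      // level.0.autosave, level.1.autosave, ...
      std::string target = levelPath + "." + std::to_string( mSlot ) + ".autosave";
      mSlot = ( mSlot + 1 ) % mMaxSlots;

      saveLevelTo( target );   // hypothetical hook into the editor's save path
   }

private:
   void saveLevelTo( const std::string& /*path*/ ) { /* write the mission file */ }

   float mInterval;
   int   mMaxSlots;
   float mElapsed = 0.0f;
   int   mSlot    = 0;
};
```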

Tomorrow, I'll post part 2 (as this update is long, as said) to get to some other pretty nifty thingadoos and some thoughts on future updates.
