Work Blog - JeffR

by JeffR » Sat Apr 07, 2018 9:39 am
Hey everyone, that time again!

So, first and foremost, if you didn't see it, the hotfix build of 3.10.1 is out, which resolves a few issues that were causing problems for users working with 3.10, most notably Microsoft breaking VS2017's ability to compile it.

So, what's been shaking in the land of the R & D? A loooooot.

The big work has been tied to the rendering stuff being worked on, obviously. Timmy's nearing the end of the initial PBR work, so we should be able to get that into the beat-up-and-merge-in stage soon, which is exciting.

I've also tracked down a rather unpleasant bug that caused crashes when you reloaded missions while using Reflection Probes, so that's another bee removed from the bonnet.


Associated with that is the Render Pipeline work I mentioned in the last blog. Solid progress was made on that, but it led to some problems with how the material/shadergen system integrates into...well, basically everything, currently. So we decided that the best bet would be to put a temporary hold on that and jumpstart the material system refactor. It was always coming, but we decided to move it up a bit.

Material System Refactor
So what's the deal with this? Well, while the current material system, and more specifically Shadergen, have worked very reliably, anyone who's really tried to do advanced work with shaders or materials, or expand the material/shadergen system, knows that it's a tentacle monster molesting a plate of spaghetti. Everything is intertwined, relationships and how data is processed and passed around are hard to follow, and as mentioned with the render pipeline work, it extends well outside the mere 'shadergen' part and into lots of logic for what, where, when and how things render in general.

This makes updating and expanding stuff, like PBR, or the Render Pipeline, a lot more complex. So, the thing to do is rework it to not be.

The good news is that I wasn't going into this blind. I'd already blogged about the Custom Shader Feature stuff before; if you missed it, feel free to go back and read up on it. It was basically a nice integration into the material system whereby we could craft shader code in script and have it nicely integrate into rendering without having to manually muck through the shadergen logic to add that new behavior.

So the Material Refactor is basically the ultimate extrapolation of it. The New-Shadergen will be far, far, far simpler than the current monstrosity, and the primary focus for shader/material authoring will be either writing a shader directly, or using a visual editor to design one. This will do the assembly work of the shader based on the logic expressly written, and the shader will declare what inputs it utilizes and expects. From there, the system can plug the data in based on common (or custom) fields, so whatever you need the material/shader to do, it can do it, all without needing to worry about what the backend is doing.

How's that work?
Right, let's get into the particulars.

In the current system, everything is dictated by a surprisingly complex system of Material Feature flags. Any time a material does anything (uses a diffuse texture, runs an animation, etc.), those settings flip the MFs on when the material is loaded for generation in shadergen. Shadergen then parses the features, and each feature has its own functions called that inject code, variables, etc. into the shadergen system.

Shadergen ITSELF is actually pretty straightforward, and is mostly just about making sure the variables used appear in the right order and writing out the code body. The MF system that sits on top of it, however, is very involved.
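
To make that flow concrete, here's a heavily simplified C++ sketch of the flag-then-inject pattern. Every name here is invented for illustration; the engine's actual feature types and shadergen classes are a lot more involved than this.

    #include <bitset>
    #include <iostream>
    #include <string>

    // Invented stand-ins for illustration; not the engine's actual types.
    enum MaterialFeature { MF_DiffuseMap, MF_NormalMap, MF_TexAnim, MF_Count };

    struct GeneratedShader { std::string decls, body; };

    // Each enabled feature injects its own variables and code fragments.
    void processFeature(MaterialFeature f, GeneratedShader& out) {
        switch (f) {
        case MF_DiffuseMap:
            out.decls += "uniform sampler2D diffuseMap;\n";
            out.body  += "color *= tex2D(diffuseMap, uv);\n";
            break;
        case MF_NormalMap:
            out.decls += "uniform sampler2D normalMap;\n";
            out.body  += "normal = unpack(tex2D(normalMap, uv));\n";
            break;
        default:
            break;
        }
    }

    // Material settings flip flags on; shadergen walks the flags and
    // lets each active feature inject into the final shader text.
    GeneratedShader generate(const std::bitset<MF_Count>& flags) {
        GeneratedShader out;
        for (int f = 0; f < MF_Count; ++f)
            if (flags.test(f))
                processFeature(static_cast<MaterialFeature>(f), out);
        return out;
    }

    int main() {
        std::bitset<MF_Count> flags;
        flags.set(MF_DiffuseMap);  // "material uses a diffuse texture"
        GeneratedShader s = generate(flags);
        std::cout << s.decls << s.body;
    }

The sketch alone isn't so bad; the pain comes from everything else that reaches in and flips those flags, as described next.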

That on its own would be passable, but it extends further by having non-material systems inject MFs into the generation based on a number of things. These are important things (rendering can't happen without them), but it's one further layer of complexity that spiderwebs aggressively and makes working with the system a huge pain. Things like foliage wind, deferred rendering, hardware skinning, etc. all do this.

So when I went to work on the Render Pipeline, what we ran into was a requirement for it to inject those critical material features, which drastically pigeonholes the flexibility of the RP system and somewhat defeats the purpose.

So after some discussions, and spurred by my research into the Custom Shader Features, we agreed that pushing on with the Material Refactor would be a good idea, if only so working on stuff going forward doesn't involve dealing with that horrid monstrosity ever again.

Ok, so what's the refactor entail?
The big thing is dropping the easy-breezy features system that currently exists and having things MUCH more explicitly defined. If some element of the shader generation is triggered, it happens on purpose, in a clear way.

So, let's say we want to make a material. We'll ignore the authoring method for the moment and focus just on how the system deals with a user-authored shader/material.
In order for the system to plug in properly, you need to tell it what inputs the shader expects. This can be a wide range of things, but ultimately the list is actually rather predictable. As such, we plan to have a big list of normal inputs that go into the shader, so if you want to, say, pass in an AlbedoTexture, you just inform the material definition that that's an input.

At runtime, when we go to inform the shader of the data we're binding, we have a big ol' list of those common inputs, and because the material definition knows we're expecting an AlbedoTexture, we can quickly skim through the list and bind that Shader Constant and pass in our texture.

Because we're operating on a list of inputs, the system doesn't really care what the inputs are or how many there are, so long as they're named correctly. So your material can be as simple as displaying the color red, or as complex as the water surface.
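
A rough sketch of what that lookup-and-bind could look like; all names here are hypothetical, not the final API:

    #include <map>
    #include <string>
    #include <vector>

    struct Texture {};  // stand-in for a GFX texture resource

    // The material definition just declares which common inputs it expects.
    struct MaterialDefinition {
        std::vector<std::string> expectedInputs;  // e.g. { "AlbedoTexture" }
    };

    struct InputBinder {
        // The "big ol' list" of common inputs the renderer knows how to supply.
        std::map<std::string, Texture*> commonInputs;

        // Stand-in for binding a named shader constant to its data.
        void setShaderConst(const std::string& name, Texture* data) {}

        // Skim the list for each declared input and bind the match.
        void bind(const MaterialDefinition& mat) {
            for (const std::string& name : mat.expectedInputs) {
                auto it = commonInputs.find(name);
                if (it != commonInputs.end())
                    setShaderConst(name, it->second);
            }
        }
    };

    int main() {
        Texture albedo;
        InputBinder binder;
        binder.commonInputs["AlbedoTexture"] = &albedo;

        MaterialDefinition mat;
        mat.expectedInputs.push_back("AlbedoTexture");
        binder.bind(mat);  // binds only what the material asked for
    }

The key design point is that the binder is entirely data-driven: one red-color material and the full water surface both go through this same loop, just with different input lists.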

The other huge advantage of this method is that Custom Materials become basically irrelevant. Regular materials can do the exact same thing, but go through the standard methodology without any janky hacked-in voodoo. You give it the inputs, the engine binds the inputs, and the shader uses the inputs to do the work you wrote the shader to do.

So all the stuff that uses custom materials now, such as lights, water, terrain, etc., can be shifted to utilize JUST Material, which lets us streamline a whole further big chunk of code out.

So, we have a cleaner, explicit backend without a spaghetti-apocalypse, and we can have a singular type of material. Any other benefits?

Well, tying back to what I said about authoring method, the advantage there is that as long as the shader code uses the right inputs and, well, is valid shader code, the system doesn't care whether you hand-write it or get it some other way.
Some other way being a cleaned up and refined ShaderGen.

Separate from the horrid fester-pile that is the Material Features system, Shadergen itself is, as said, pretty clean and smart. It just does what you tell it to. So, we'd have a new interface to tell it what to do, and I'm sure most of you are familiar with the notion of Visual Shader editing. Heck, I even started working on a GUI control years back in anticipation of getting to this point. It's ugly and half-works, but the notion of being able to just connect the bits to author a shader, if you don't know shader code itself, should open up the whole spectrum of visual fanciness for everyone.

All it'll really do is act as an instruction flow. Each node has a small chunk of Shadergen logic in it, and you connect the nodes together, visual-script style, terminating in an ending main material node. When it parses that, Shadergen has all the code bits and inputs it expects and will just generate the shader you told it to make.
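
As a toy illustration of that node walk (purely hypothetical structures, not the actual editor's):

    #include <set>
    #include <string>
    #include <vector>

    // Each node carries a small chunk of shadergen logic plus the inputs it expects.
    struct ShaderNode {
        std::string codeChunk;                    // e.g. "albedo = tex2D(albedoMap, uv);"
        std::vector<std::string> inputs;          // e.g. { "AlbedoTexture" }
        std::vector<const ShaderNode*> upstream;  // nodes wired into this one
    };

    // Walk back from the ending "main material" node, gathering code chunks
    // and expected inputs in dependency order (the visited-set avoids emitting
    // a shared node's chunk twice).
    void assemble(const ShaderNode& node, std::set<const ShaderNode*>& seen,
                  std::string& code, std::vector<std::string>& inputs) {
        if (!seen.insert(&node).second)
            return;  // already emitted this node's chunk
        for (const ShaderNode* up : node.upstream)
            assemble(*up, seen, code, inputs);  // upstream chunks come first
        code += node.codeChunk + "\n";
        inputs.insert(inputs.end(), node.inputs.begin(), node.inputs.end());
    }

    int main() {
        ShaderNode sample{ "float4 albedo = tex2D(albedoMap, uv);",
                           { "AlbedoTexture" }, {} };
        ShaderNode mainMat{ "color = albedo;", {}, { &sample } };

        std::set<const ShaderNode*> seen;
        std::string code;
        std::vector<std::string> inputs;
        assemble(mainMat, seen, code, inputs);  // code holds the full shader body
    }
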
No voodoo, no arbitrary feature injections.

Well, except for Material Permutations.

Uh oh, "except for" sounds scary
Nah, it's really not that bad.
As I said, part of the problem is that the Material Features system felt really arbitrary, and stuff kicked off features at seemingly random times: depending on render mode, material definition, whether the shape was animated or wind-driven, or whether you had settings low so it had to disable features, etc.

Permutations, instead, will be much more explicit. When defining the material, you basically have a list of accepted permutations.
Does this material work with dynamic lighting? Static lighting? Should it work on models that are animated and thus use hardware skinning? Should it work with wind a la foliage?

These different permutations are basically the explicit command of "Hey, when we generate our material, I need a permutation that supports this". Shadergen will have some code chunks to make that happen, such as with the hardware skinning permutation making sure to add the bone transforms to the vertex shader inputs. But it happens specifically because the material definition told it to, and thus, the end user said so.

If a permutation isn't supported and it's requested, the system will just not utilize that material and will use a fallback (such as the No Material warning mat).

Likewise, stemming off from this, I plan to take a note from the lighting shaders and look much more smartly for existing shader permutations. One of the problems everyone's seen is that the first time you load in and look at stuff, you get load hitches, and part of that is because it usually kicks off a regeneration of the shaders to make sure they're up to date for everything.

Generation takes time, so you get stutters. Given we'll have an explicit list of allowed permutations, we can just generate the permutations at editing time, and then when the material system goes to use a shader, we look for our 'MyCoolMaterial + Hardware Skinning' permutation and bind that sucker. No on-the-fly generation should cut down on the hitching a lot.
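
A hypothetical sketch of that declare/pre-generate/lookup cycle, with invented names:

    #include <map>
    #include <set>
    #include <string>

    // Hypothetical permutation tags; the real list would match the material options.
    enum Permutation { DynamicLighting, StaticLighting, HardwareSkinning, FoliageWind };

    struct CompiledShader {};  // stand-in for a generated, compiled shader

    struct Material {
        std::set<Permutation> allowed;                // explicit accepted list
        std::map<Permutation, CompiledShader> cache;  // pre-generated at edit time

        // Edit time: generate every allowed permutation up front.
        void pregenerate() {
            for (Permutation p : allowed)
                cache[p] = CompiledShader{};  // stand-in for actual shadergen work
        }

        // Render time: pure lookup, no on-the-fly generation (and no hitch).
        // Returns null if unsupported, so the caller can fall back to a warning mat.
        const CompiledShader* find(Permutation p) const {
            auto it = cache.find(p);
            return it != cache.end() ? &it->second : nullptr;
        }
    };

    int main() {
        Material mat;
        mat.allowed = { StaticLighting, HardwareSkinning };
        mat.pregenerate();

        const CompiledShader* s = mat.find(HardwareSkinning);  // bind that sucker
        const CompiledShader* missing = mat.find(FoliageWind); // nullptr -> fallback
        (void)s; (void)missing;
    }
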

The other advantage is that because we have a predictable, simple list of permutations, stuff like the Render Pipeline doesn't need to fuss with the absurd baggage of trying to figure out what junk needs to be activated on the fly and stuffed into any given material as we render, and that should help a great deal in fighting off the spaghetti monster.

So all in all, should be a pretty sweet shindig. I've expanded my Custom Shader Feature work to basically author the entirety of a functioning shader completely apart from the normal material feature generation, so I can blaze forward on the main refactor now.

Now, that got a bit long and I don't have much graphics-wise to show for that end yet, so let's move onto a bit of shiny to make you feel better after that wall of technical junk.

C++ Assets
One problem that's been bothering me is that you have great portability for modules and assets, where you can just drop a folder into the data directory, load the game and pow, sweet new functionality.

But trying to add new functionality on the engine side is the terrible, dumb, boring slog it's always been: adding files, then writing code, then compiling, and ugh. So terrible ;)

But more important than that, anyone trying to use a module you wrote that uses custom C++ was in a weird spot with how you'd actually distribute that C++ code in a non-PITA way.

Enter the ability to make C++ assets in the asset browser:



As you can see, you can generate from a nice list of preestablished types:
    * static classes - which are good for manager objects
    * regular classes - blank C++ classes for you to do basically anything off of
    * Game Objects - good for porting a game object you prototyped in script and now want to reap that delicious performance of native code
    * Components - same as game objects, just for whatever new components you drafted up
    * Script Object - a custom type of script object to do any weird behavior you want but in a simple-to-create way
    * Editor Tool - using the new streamlined EditorTool class as its parent, it's a lean way to do fancy new editor tools without the baggage of writing an entire custom GuiEditor control
    * GuiControl - for any custom gui objects you may want to write

Each of those has a template file set that is filled out based on the asset's name and generated into the module's source/ directory. Tweaks to the CMake file mean it scans for those and, on a generate pass, populates them into your engine project ready to be compiled.
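
For a sense of what the output might be, here's a rough sketch of a generated file pair for a hypothetical 'regular class' asset named MyObject, using the engine's console-object macros; the actual templates (and header paths) may differ:

    // MyObject.h - illustrative skeleton of a generated 'regular class' file
    #pragma once
    #include "console/simObject.h"  // assumed engine header; actual path may vary

    class MyObject : public SimObject
    {
        typedef SimObject Parent;

    public:
        MyObject();

        // Registers the class with the engine's console/reflection system.
        DECLARE_CONOBJECT(MyObject);
    };

    // MyObject.cpp - matching implementation stub
    #include "MyObject.h"

    IMPLEMENT_CONOBJECT(MyObject);

    MyObject::MyObject()
    {
        // Defaults filled in from the asset name by the template.
    }
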

So portability with custom C++ code becomes way easier again. The files can be bundled in the module alongside the assets that utilize them, and all you need to do is drop it in, run a generate on your project, and do a compile, and poof, the custom code is yours and executing, easy-peasy.

You also see a *_Module.cpp file, which is designed for any auto-execute behavior. It hooks through the engine's module system (as in Engine-Modules, not asset modules), so any stuff you need pre-set-up on engine launch can be kicked off there automatically without having to manually set it up in script.
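
Assuming it leans on the engine's existing MODULE_ macros from core/module.h, the generated hook would look something along these lines; treat the exact generated contents as my guess:

    // MyModule_Module.cpp - auto-execute hook into the engine's module system
    #include "core/module.h"

    MODULE_BEGIN( MyModule )

       MODULE_INIT
       {
          // Runs automatically at engine launch: register custom objects,
          // set up managers, etc., with no manual script-side setup.
       }

       MODULE_SHUTDOWN
       {
          // Matching cleanup when the engine tears down.
       }

    MODULE_END;
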

Other bits and bobs
I tracked down some further issues with the popup menu stuff and am working on a better cleanup of it so there's less janky behavior with stuff not firing off their functions and the like. I still haven't managed to fix the crash that Bloodknight originally spotted, but I've got the general area pegged, so it's mostly just tracking down the actual problem spot.

Speaking of regenerating projects, Timmy's got a start on the new Project Manager, which is most excellent and should make managing modules like the above much simpler, with less hand-copying of everything, as well as quickly regenerating projects as required (such as with those C++ assets).

I have a nagging feeling that I'm forgetting a few bits yet, but it's late and this has rambled on quite a bit already. I'll be sure to update the thread with some nice pictures of the material refactor as that begins to take shape, as well as anything else we've worked on in the past month on the R&D side that I'd forgotten to jot down.

Peace out, guys!
-JeffR
by Steve_Yorkshire » Sat Apr 07, 2018 4:34 pm
JeffR wrote: but it led to some problems with how the material/shadergen system integrates into...well, basically everything, currently.


And, don't we all know that feeling ;)

JeffR wrote: tentacle monster molesting a plate of spaghetti


JeffR wrote: Timmy's got a start on the new Project Manager

And just as I've nearly mastered using CMake! ... kinda :?

Cool (and somewhat baffling at times :oops:) stuff as ever @JeffR :mrgreen:
by Razer » Mon Apr 09, 2018 1:07 pm
With all the work needed to move Torque 3D to new graphics, would it have been a better choice to switch to a ready-made open source DX11 3D engine like Urho3D instead of writing a new 3D rendering engine?
https://urho3d.github.io/
Wouldn't this have been a good choice, and an opportunity for new changes, like a new data format and moving the Torque editor to it?
by Timmy » Tue Apr 10, 2018 1:41 am
Razer wrote: With all the work needed to move Torque 3D to new graphics, would it have been a better choice to switch to a ready-made open source DX11 3D engine like Urho3D instead of writing a new 3D rendering engine?
https://urho3d.github.io/
Wouldn't this have been a good choice, and an opportunity for new changes, like a new data format and moving the Torque editor to it?


No. Urho is a game engine, not a rendering engine; it's the same as saying T3D should use Godot. That statement makes no sense.
Last edited by Timmy on Tue Apr 10, 2018 8:02 am, edited 1 time in total.
by Duion » Tue Apr 10, 2018 2:50 am
It makes no sense especially because Torque3D has already worked with DX11 for quite a while now, like over a year or even more.
And having DX11 does not give any benefit by itself, so that's also a misconception. DX11 just opens up new technical possibilities and frees up resources so you can improve the graphics, but by itself you will not see a difference between DX11 and DX9.
by Timmy » Tue Apr 10, 2018 10:25 am
Duion wrote: It makes no sense especially because Torque3D has already worked with DX11 for quite a while now, like over a year or even more.
And having DX11 does not give any benefit by itself, so that's also a misconception. DX11 just opens up new technical possibilities and frees up resources so you can improve the graphics, but by itself you will not see a difference between DX11 and DX9.


In T3D that is mostly true, because the T3D GFX API is pretty old and crappy now. It's designed around D3D9, so when creating the D3D11 backend it was stuck 'emulating' D3D9, which doesn't allow it to take full advantage of the far superior design of D3D11. Moving away from T3D, there are decent performance advantages to using D3D11 over D3D9, much like there are now big advantages to using Vulkan over older API designs such as D3D11/OpenGL.
by Duion » Tue Apr 10, 2018 1:44 pm
I'm always a bit skeptical about magical claims of performance increase, since you would need to actually measure it to confirm, and most people's projects are probably not complex enough to hit the performance limits.
by Timmy » Tue Apr 10, 2018 2:25 pm
Duion wrote: I'm always a bit skeptical about magical claims of performance increase, since you would need to actually measure it to confirm, and most people's projects are probably not complex enough to hit the performance limits.


*edit nvm, i'll leave this discussion as it is.
by JeffR » Wed Apr 18, 2018 4:05 pm
Yeah, basically, it's more of a "more efficiency is always good" situation.

The notion behind the improvements in draw performance between DX11 and DX9, for example, could be conveyed with an analogy about moving.

If you had to move from your current residence to a new house, and pick up each item, walk it over to the moving truck, deposit it, and walk back for a new thing, it would take forever. Similarly, it would take forever to unload the truck and put stuff in the new house as well.

However, if you pack stuff up into boxes, you can quickly move a whole bunch of stuff in one trip, which drastically cuts the amount of time it takes to put everything on the truck (and later take it off the truck when you get to the new house).

Now, this generally saves time, but where the big claims of performance increase in the new APIs come from - which we're not currently properly capitalizing on, because the existing GFX layer is still written for the old paradigm and needs a'changing - is the idea that not only do you pack stuff in boxes, but you pack stuff as efficiently as possible in boxes AND keep stuff that's related in the same box.

Rather than sticking your bath towels in with the knives, you would have all your kitchen stuff together so you only need to move it and unpack it once, etc. rather than wasting a bunch of time figuring out what-goes-where even after you've moved the boxes to the new place.

The metaphor is obviously hugely simplifying things, but the general notion carries: newer APIs allow you to better organize, pack and process the draw data, which leads to much more efficient rendering.
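
To put the analogy in rough code terms, here's an illustrative sketch with stubbed calls; none of this is the actual GFX layer:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Item { int materialId; };  // one thing to move/draw

    void bindMaterialState(int materialId) {}  // stand-in for expensive state setup
    void submitDraw(const Item& item) {}       // one draw call
    void submitBatch(const Item* first, std::size_t count) {}  // one batched call

    // DX9-era mindset: one trip per item, rebinding state every time.
    void drawNaive(const std::vector<Item>& items) {
        for (const Item& it : items) {
            bindMaterialState(it.materialId);  // walk to the truck...
            submitDraw(it);                    // ...carry one thing, walk back
        }
    }

    // Newer-API mindset: pack related state together and move whole boxes.
    void drawBatched(std::vector<Item> items) {
        std::sort(items.begin(), items.end(),
                  [](const Item& a, const Item& b) { return a.materialId < b.materialId; });
        for (std::size_t i = 0; i < items.size();) {
            std::size_t j = i;
            while (j < items.size() && items[j].materialId == items[i].materialId)
                ++j;
            bindMaterialState(items[i].materialId);  // bind once per box...
            submitBatch(&items[i], j - i);           // ...move the whole box at once
            i = j;
        }
    }

    int main() {
        std::vector<Item> items{ {2}, {1}, {2}, {1} };
        drawNaive(items);    // four binds, four submissions
        drawBatched(items);  // two binds, two submissions
    }
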