Hey everyone, I figure I'll take a stab at making a blog to cover work I've been doing the past week or two.
Unsurprisingly, I've worked a fair bit on the Entity/Component stuff. Improvements are happening all the time, but it's a gradual thing. The good news is, the next big update should improve a lot of things, in large part due to one of the other things I've been working on:
TAML and Assets in T3D
I mentioned in the tasset idea and discussion thread that I'd been working on this, and the port-over is nearing completion. I need to put it through its paces, but then it'll be tossed up in branches and PR'd against development. If everything looks good, hopefully those can go in for 3.8! It's pretty much a direct port from T2D, so it should be utilized in much the same way as it is there. The serialization is especially important to the entity/component stuff...
Back to Entity/Components
The reason they're so useful - more specifically, why TAML is so useful - is that it streamlines several aspects I've been floundering around trying to work around. Stuff like updates to the prefab system, or component templates.
See, templates in the original draft were like datablock-lites. They served a fair bit of the same purpose, being sent down ahead of any actual components on entities, used as default field references, etc. But it drastically complicated the code, created network dependency-order issues, and caused problems with namespaces and naming conventions.
With TAML, the template version of a component can exist in a non-object, serialized format. A component then just has a reference to that particular TAML file. If it needs to initialize its default fields, or reload them later for whatever reason, it's a snap to just crack open the TAML file, read the field(s) of interest, and set our live component's data. It doesn't require any extra SimObjects lying around, and it's lightweight.
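To give a rough idea of what I mean, here's an illustrative sketch - this is NOT the actual TAML API, and it assumes a flat fieldName="value" layout for the template file just to keep the example small:

```cpp
// Illustrative sketch only -- not the real TAML parser. Assumes a component
// template serialized as flat  fieldName="value"  pairs, one per line.
#include <map>
#include <sstream>
#include <string>

using FieldMap = std::map<std::string, std::string>;

// Parse lines of the form   fieldName="value"   into a field map.
FieldMap parseTemplateFields(const std::string& taml)
{
    FieldMap fields;
    std::istringstream in(taml);
    std::string line;
    while (std::getline(in, line))
    {
        auto eq  = line.find("=\"");
        auto end = line.rfind('"');
        if (eq == std::string::npos || end == std::string::npos || end <= eq + 1)
            continue;
        std::string key = line.substr(0, eq);
        key.erase(0, key.find_first_not_of(" \t"));  // trim leading whitespace
        fields[key] = line.substr(eq + 2, end - (eq + 2));
    }
    return fields;
}

// A live component just keeps its field data; applying defaults is a matter
// of reading the parsed template fields back in -- no extra SimObjects.
struct Component
{
    FieldMap data;
    void applyDefaults(const FieldMap& tmpl) { data = tmpl; }
};
```

The key point is that the "template" never has to exist as a live object - it's just data we can read whenever a component needs its defaults.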
It'll also be a big help with the next iteration of the networking code. It works well enough for now, but each component that has relevancy on the client, such as rendering components, has to be ghosted down. In small projects that's fine, but it can build up in larger ones. So I'm testing out having the owner entity call down into its components, passing its network stream to them so they can add their data to it instead.
Each component manages its own netmasks, so you can have up to 32 masks PER COMPONENT. That will allow people to break state down in a really granular way, minimizing how much you need to write with a given update.
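As a rough sketch of the per-component mask idea - the names here are illustrative, though the pattern is loosely modeled on Torque's existing setMaskBits approach:

```cpp
// Hedged sketch: a per-component 32-bit dirty mask, loosely modeled on
// Torque's NetObject::setMaskBits pattern. All names here are illustrative.
#include <cstdint>

struct ComponentNetState
{
    uint32_t dirtyMask = 0;

    // Mark one or more state groups as needing a network update.
    void setMaskBits(uint32_t bits) { dirtyMask |= bits; }

    // Called when packing an update: returns the bits to write and clears them.
    uint32_t takeDirtyBits()
    {
        uint32_t bits = dirtyMask;
        dirtyMask = 0;
        return bits;
    }
};

// Example mask groups a component might define -- up to 32 per component.
enum : uint32_t
{
    TransformMask = 1u << 0,
    RenderMask    = 1u << 1,
    PhysicsMask   = 1u << 2,
};
```

Since each component gets its own full 32-bit mask, you're not sharing one mask budget across the whole entity.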
As long as the components stay in the same order in the entity, everything should network fine. To facilitate "ghosted" components working properly, we come back to the TAML stuff. When we add/remove a component, we can send down that component's template TAML file and create a local component on the client. Then the component has its fields set per the TAML template file. Once that's done, the networking mentioned above updates changed fields as usual.
This should mean you can have as many components on an entity as you want without it cutting into how many ghosted objects you can have. The only things actually ghosted in the normal networking stuff are the entities themselves.
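Here's a simplified sketch of the delegation idea - all names are illustrative, and a plain vector stands in for the real BitStream:

```cpp
// Hedged sketch of the delegation idea: the owner entity walks its components
// in a fixed order and lets each one append to a shared write stream, so the
// components themselves never need to be ghosted. All names are illustrative.
#include <cstdint>
#include <vector>

using BitStream = std::vector<uint32_t>;  // stand-in for Torque's BitStream

struct NetComponent
{
    uint32_t dirtyMask = 0;
    uint32_t state = 0;

    void packUpdate(BitStream& stream)
    {
        stream.push_back(dirtyMask);   // which field groups follow
        if (dirtyMask)
            stream.push_back(state);   // the actual changed data
        dirtyMask = 0;
    }
};

struct Entity
{
    std::vector<NetComponent> components;

    // As long as both sides keep components in the same order, the index alone
    // identifies each component -- no per-component ghost IDs needed.
    void packUpdate(BitStream& stream)
    {
        for (NetComponent& c : components)
            c.packUpdate(stream);
    }
};
```

The client-side unpack would just walk its local components in the same order and read each mask/state pair back out, which is why keeping component order consistent matters.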
Assets have the potential to become the tasset stuff I was describing in the other thread.
Node and Web Graph GUIs
This is something I've been wanting to get to for a while (I started on it a while back, but it drastically needed a rewrite). Fortunately, I got a chance to return to it this week. I started tests during my lunch breaks at work, and then Friday I pushed it into full prototyping mode. The results are as follows:
So what's the deal with these? They'll act as starting points for any visual editors. The node graph would be good for stuff like Visual Scripting and Visual Shader/Material editing. The web graph would be used for stuff pertaining to state machines. Animation state machine controllers, AI, weapons, etc.
These are pretty much purely the functional GUI; there are no specific rules for them. That'd be what the derived editors would do.
They look a little...bland/basic because they also serve a secondary purpose. Unlike most of T3D's GUI stuff, which is image-based, these are purely generated/rendered by the code.
This has the advantage of requiring a LOT less code to get them to render right, and it's also harder to break them when they're arbitrary/code-based. That improves iteration time when making changes and makes them more flexible overall. It's easy to change the color of a particular node, connection, socket, etc. without needing entirely new sets of images.
And THAT ties into another thing this is good for: a litmus test. One of the things I was curious to look into was retooling the editors to use fully programmatic GUIs rather than the aforementioned image-based system they use now. The editors themselves likely wouldn't change much (yet), but they'd be a lot easier to maintain and modify.
There'd be several other benefits, such as it being reaaaaaaaally easy to implement color themes that affect the whole editor. Rather than the current process, which is a massive chore of replacing images and hoping stuff doesn't break, this would be as simple as changing some preferences that control the colors of the controls, and it'd be changeable at runtime. Think of how in Blender you can easily change the colors of stuff to get a different look.
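A toy sketch of what preference-driven theming could look like - every type and name here is hypothetical, but it shows the mechanism:

```cpp
// Hypothetical sketch of preference-driven theming: every control reads its
// colors from a shared theme at draw time, so editing the theme restyles the
// whole editor on the next frame. All types and names are illustrative.
#include <cstdint>

struct Color { uint8_t r, g, b; };

struct EditorTheme
{
    Color nodeFill   {  60,  60,  70 };
    Color nodeBorder { 200, 200, 210 };
    Color connection { 120, 180, 255 };
};

// Controls hold a pointer to the theme instead of baked-in images/colors.
struct NodeControl
{
    const EditorTheme* theme;

    // At render time, the fill color always comes from the live theme.
    Color currentFill() const { return theme->nodeFill; }
};
```

Change one value in the theme and every control picks it up the next time it draws - no image swapping involved.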
It'd also be WAY easier to implement 'dynamics', such as dragging windows into dockable regions, being able to drag windows into tabs and tabs into windows, and popping those out into their own second canvases for multi-monitor support.
This would also make it easier to extend the editors themselves. If you've looked at the current editors, there's a fair bit of extra code in all of them to get the UIs to render with all the doodads working. A rewrite in this style would let a particular editor GUI manage JUST what it needs to without having to worry about the other parts of the editor imploding. To say nothing of how much easier it'd be to extend/modify the existing editors when they automatically scale to fit stuff, rather than needing black-voodoo coding to try and get the GUI controls to line up like they do now.
This wouldn't touch the regular game GUIs - this style of UI is inferior when it comes to end-user interfaces for games - but for the EDITORS? I feel it'd be a massive step up. Which is why working on the basics via these graph gui controls has been helpful.
Well, that's a breakdown of my little corner of the R&D world. I feel like I'm missing a few things I dabbled in on the side, but if I remember them, I can edit them in here.
Anywho, feel free to level some feedback and ideas on this stuff. More data is always good when trying to hash out where stuff is headed.