  • Work Blog Updates

    • By JeffR in Torque 3D
      Hey Everyone!
      Given that the rapid-fire releases have waned and we’re back to working on the primary feature release of 4.1, it’s good to get back onto regular work blogs.
      And so, to that end of things, here. We. goooooo.
       
      4.1 Progress
      So how is 4.1 going? Getting there. As you know, we shifted the target for development of the engine for 4.1 from the ECS, to Editor/UI work.
So how’s it going? Not bad, overall. Lots of the foundational work was chipped at previously, with some of it going into 4.0, but since it wasn’t “ready” it wasn’t frontline for the featurestuffs.
      But, now that that IS the point, the actual work of “updating the editor suite to not be all old spaghetti” is chugging along.
So, let’s get into talking about the work done so far, and what’s on the TODO!
       
      Objectives module
Az and I recently put the 1.0 of the Objectives Module up on the Torque3DResources repository (you can get it here).
Specifically, it’s a module for managing and utilizing Objectives in your game. “Objectives” is the general-purpose name, but it covers any task a player must complete. Quests, Mission Objectives, yada yada.
We worked on it because we needed it for quest tracking in Catographer anyways, but obviously opening that up for beatups and further refinement by everyone else was an easy win.
Well, that doesn’t sound like it has much to do with Editor or UI updates…
      Right, right. Back on track. One element that was very useful as a tester of the paradigm, but also just useful for our needs, was a custom drop-in editor tool.
      Specifically, the ObjectivesModule within it contains a tools folder for the ObjectivesEditor, as seen below:

This works out quite nicely as a demonstrator for module-tools generally, of course. Being able to drop a module into the data directory and immediately get whatever tooling the module needs, right then and there, without needing additional copies, is an obvious quality-of-life gain.
      But beyond that, there’s the actual tooling utility itself.
      One thing we’ve had in the engine and used lightly in various places previously was the “VariableInspector”.
What is it? It’s basically the existing GuiInspector used any time you went to inspect an object, with all the display of the various fields and properties of the object.
Except the VariableInspector was originally fashioned as a simple inspector for variables, per its namesake.
      For 4.0, I’d expanded the functionality to inspecting objects, arbitrary fields, global variables, callback actions, and support for completely arbitrary field types.
      It was certainly useful, and is used in stuff like the Editor Settings editor and a few other places.
      But ultimately it was a duplicate Inspector class, so the next step - both in the interests of simplifying how many GUI controls we have, as well as simplifying usage of inspecting things generally - was to fold the ability to do arbitrary fields and field types into the GuiInspector directly.
      And so now, you have what we’re doing in the ObjectiveEditor. Below is a code snippet showing off the onInspect callback function:

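The general shape of such an onInspect handler, as a hedged sketch (the group/field helper calls and field names here are illustrative assumptions, not the module’s verbatim code):

```
// Hedged sketch: called by the GuiInspector when an ObjectiveInfo
// scriptObject is inspected. Helper names are illustrative.
function ObjectiveInfo::onInspect(%this, %inspector)
{
   // Make (or fetch) a group to organize the fields under
   %group = %inspector.createGroup("Objective");

   // Common field types: addField(name, label, type, description, default)
   %group.addField("objectiveName", "Name", "string", "Display name", "");
   %group.addField("completed", "Completed", "bool", "Already done?", "0");

   // A completely custom type the engine doesn't know about; the
   // inspector will call back into script to build this control
   %group.addField("nextObjective", "Next Objective", "ObjectiveList", "", "");
}
```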
As you can see, we have the onInspect callback. This is called when any object is inspected (natch).
This is an important notion, though, because normally the GuiInspector doesn’t do that at all. It handles all the field logic internally, with the statically defined fields on the engine side for the class.
Now, though, we inspect the object, we get the onInspect callback, and then we can go on to manage or create groups, as well as add fields to said groups, entirely in script.
There is no source class defining the logic or behavior of the ObjectiveInfo. In the module, that’s just a ScriptObject with the namespace ObjectiveInfo defined. Once it’s inspected, we call through and do the prompted logic.
Further, we can see pretty common types on the fields. Strings, bools, etc. But there’s an interesting one: “ObjectiveList”.
      Now, you may have surmised that that’s not a normal type in T3D, and you’d be correct. That type doesn’t exist. On the engine side.
      Instead, what happens is when the GuiInspector is processing the fields, if it hits a type it is unfamiliar with, it will attempt to call down into the script for a build function for that type, allowing 100% custom field types to be implemented.
      So, we do this function here:

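Roughly, such a build function might look like the following hedged sketch (the hook name, signature, and control wiring are assumptions; the real callback the engine invokes may differ):

```
// Hedged sketch: invoked when the GuiInspector hits the unknown
// "ObjectiveList" type while populating fields. The actual hook
// name/signature the engine uses may differ.
function GuiInspectorField::buildObjectiveListField(%this, %fieldName)
{
   // Build a dropdown listing every Objective currently loaded
   %list = new GuiPopUpMenuCtrl() {
      internalName = %fieldName;
   };

   // ObjectiveSet is assumed here to be a SimSet of loaded objectives
   foreach (%objective in ObjectiveSet)
      %list.add(%objective.objectiveName, %objective.getId());

   return %list;
}
```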
And when that builds the field for us as part of the field population logic, we end up with a nice custom field type: a dropdown that populates a list of the Objectives the engine currently has loaded:

       
      Pretty slick, right?
      So that’s why this is important to the Editor and UI update work. A litmus test and showcase of more free-form handling and boosting the script-only viability of editor tools.
      This will become increasingly important as we overhaul field-heavy editor tools like the Material or Particle Editors.
They currently use custom GUIs with lots of backend logic for all the binding behavior, but for 4.1 we’ll be updating them to utilize GuiInspectors, with the script hooks as needed for special field types.
This will give us less special-snowflake code, more standardized layouts, easier-to-expand-or-update tooling, and just generally a cleaner and more consistent experience overall.
       
      Editor Core work
      Next on the discuss-a-roo for what’s currently baking in the oven at the moment, is the EditorCore module.
      What is that?
      Well, calm down a second and I’ll tell ya!
      The EditorCore module is designed to rectify the whole old-spaghetti issue that currently plagues the whole of the tools folder.
If you’ve ever gone digging into the tools stuffs, you’ll realize it’s a big blob of auto-executing scripts, with seemingly completely arbitrary execution orders of scripts and GUIs, and multiple redundant re-executions of stuff like GuiControlProfiles. Then everything is duct-taped together by just adding every single GUI and EditorTool to the EditorGui control, smooshing everything together and making it almost impossible to edit or detangle without a ton of manual work.
      Not great.
      So the EditorCore is a “clean slate” after a fashion, while also skipping the need to actually start the entire editor suite over, because that would be really, really dumb.
      Instead, we’ll do a progressive update structure with a common core.
      An *EditorCore* 😉
      Specifically, the EditorCore will do the following:
      It acts as the central loading point for the entire tools suite. It always executes first, and all the common-use stuff like icons, common images, GuiControlProfiles, etc are handled in it. Since it is always executed first, you can rely on the fact that anything within it will be ready and loaded before any subsequent tolls are loaded. This will simplify the loading behavior considerably It will provide several standardized utility functions for common behaviors. Stuff like layout management, Menubar/Context Menu building, etc. This will cut out a lot of the mess we currently do with the menubars and RMB popup menus being defined inconsistently, and also all over the place. Help standardize theming, by letting us massively reduce how many random, floating GuiControlProfiles we have everywhere. The EditorCore profiles can be relied on to be there if the tools are loaded, and will be rebuilt from the ground up to be consistent. We’ll also look at theming support with them so tweaking colors and the like for better user experience is less of a nightmare. Once the EditorCore is in place, we can then begin porting the existing tools to actual modules, simplify and standardize the hook-ins and execution code, and then doing any other thematic/format updates to the tools(like the aforementioned updating of the Material/Particle editors to use the Inspectors)
In this way, we can keep the vast majority of the existing functionality, but ensure all the tools get some TLC and are brought up to a modern standard.
       
Nils’ theme and style work
Topical to theming and style, Nils has done some work on a side branch of the engine in his repo that does some very lovely touch-ups. It standardizes some of the colors and stylistic notions, and updates quite a number of the icons and images utilized throughout the editor.

       
Additionally, it has some nice expansions, like the ability to snap certain windows to a layout. While the final implementation in devhead won’t be identical to this, you can expect a similar ability to lock certain windows and panes to a layout (and have it keep that layout the next time the editor loads).
       
      Work on updated AB
Tying into editor form and style updates, we’ve gotten a bunch of feedback on the Asset Browser, which is very good!
      It seems like by and large, the layout and operation of the AB is pretty widely liked, but there’s room for improvement on clarity and functionality.
      So I’ve been working on updates to the layout and behavior. Here’s an image of the WIP:

       
      One thing that gets ‘lost’ in terms of immediate user experience affordances is how spawning/creating objects for the scene works.
      While the drag-and-drop action is intuitive to a lot of people, it isn’t intuitive to everyone - or it could be intuitive if it was more apparent what was a spawnable item listing versus non-spawnable.
For example, one can drag a datablock listed in the AB into the scene, and it will spawn the class associated with that DB. Drag an ItemDatablock in, and it spawns an Item with that datablock.
But an image is not spawnable, and dragging and dropping one does nothing.
So, while the functionality generally won’t change for drag-and-drop, we’ll be having a separate window/tab for “Creator” that is JUST spawnable assets and classes. If you want to spawn an item, waterblock, or light, you can go into the Creator tab, and only spawnable items will be listed there.
Beyond that, once customizable layouts are working, it’ll be easy for you guys to put the Creator tab up by the Scene Tree like the old layout if that’s your fancy, or keep the Browser and Creator docked in a panel at the bottom of the screen, or whatever else makes the most sense for you.
Additionally, per the image, you can see we’re going with icon buttons that also have text for the main action buttons, so newer users not used to the iconography can still very easily find what actions they need to get their work done (this will be a common design thread throughout the editor going forward, incidentally).
And you can see some tweaks to the display of the ‘cards’ for the items listed in the browser, with a whole sub-line indicating each item’s type. In conjunction with the border colorization we already had, there should be multiple ways a user can quickly distinguish an image from a material, or a material from a shape.
       
      Work on updated importer
Another one that’s gotten a lot of excellent feedback is the Asset Import system. In its current form, it’s all processed in the engine, and since very few people have really fiddled with it, it’s a sort of ‘black box’ in terms of how it behaves.
      While there’s a lot of customization settings in the Editor Settings:

It can be a pretty daunting wall of things you “have” to tweak. Plus, some settings impact other settings, and it’s hard to convey that well in this format.
      So, in the interest of boosting maintainability, making it so other people can more readily fiddle with the importer behavior, and making it easier to customize AND understand, the Asset Import toolstuffs will be getting an overhaul as well.
      Specifically, I’ve been experimenting with something like this:

       
      What’s all this, then?
      Well, it’s a WIP of the updated UI for the importer editor/designer.
      Rather than a monolithic blob-class of executable logic, it’ll be shifted over to a “tasks” system. By which I mean, when you go to import a given file, it figures out the type of file, and then that type will have a list of “tasks” assigned to it.
Each task has a discrete set of settings that tell the task how to do its work.
      For example, a RenameAssetTask simply establishes rules to rename the asset from the originating file name, since file names can use characters that asset names don’t support.
      Once it knows the type and has the tasks for that type, it then just walks through them in-order, performing the informed task based on the settings.
This gives complete control to the person setting up the import pipeline, without them having to manually edit any code for sequencing.
      Once the tasks for that item are done, the item is marked as processed, and - presuming all items are processed - final asset creation is performed.
      At which point, the asset is loaded, registered and ready for use immediately.
      Now, ideally, we have a good config out of the box so most people won’t need to fiddle with this at all, but the idea is that if you do - whether it’s because your artists have… interesting naming conventions, or there’s an entirely new type of file and asset you want to import - it’s easy to modify and expand.
      Tasks will be script-defined objects with standard callback functions for them all, so adding new tasks or processing new asset types will be quite easy.
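Since tasks are planned to be script-defined objects with standard callbacks, here’s a hedged sketch of what one might look like (the callback name, settings fields, and item properties are assumptions about the WIP design, not confirmed API):

```
// Hedged sketch of a script-defined import task. The importer walks
// each type's task list in order, calling a standard callback
// (named onProcess here purely for illustration) on each task.
new ScriptObject(RenameAssetTask) {
   class = "ImportTask";
   // Settings that tell the task how to do its work
   removeChars = " -,";
};

function RenameAssetTask::onProcess(%this, %importItem)
{
   // File names can use characters asset names don't support,
   // so derive a clean asset name from the originating file
   %name = fileBase(%importItem.filePath);
   %importItem.assetName = stripChars(%name, %this.removeChars);

   return true; // task complete; the item can move on to the next task
}
```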
       
      ScriptAssets
Now, topical to “new types of assets”, one thing that was confirmed to work (but ultimately not a good fit for the ObjectivesModule) was expanded behavior for ScriptAssets.
Originally, a ScriptAsset was an asset for a script file (natch).
However, this is kinda ‘meh’ for utility, since most of the time scripts are either associated with other stuff (GUIs, shape constructor files, etc.) or they’re standalone and need to be told specifically when to execute as part of the module/gameplay setup logic.
So, not especially useful most of the time.
      But then it occurred: why not just…let ScriptAssets be SCRIPTED assets?
In the same way we have “ScriptObject”, aka an object whose properties and behaviors are defined entirely in script, ScriptAssets can now do similar.
      If you need management and tracking of a special asset type for something, but it doesn’t need special handling code drafted on the engine side, then you can define it as a new ScriptAsset type.
      This can hook against the prior mentioned onInspect behavior to be able to easily draft in custom editor logic.
      Additionally, it has standard callbacks for when the asset is initialized, reloaded or unloaded.
AND you can keep the pre-existing scriptFile association if need be.
      So you can have a full micro-ecosystem of a special type of “thing” via ScriptAsset, with full editor, resource management and execution tracking.
      While still being able to lean on the Asset Dependencies system, easily finding stuff regardless of location with AssetId’s and being able to find by type with the AssetType.
      They’ll even show up in the Asset Browser, and have callbacks for working with that.
      Want a special data type asset for a maze generator ruleset, then be able to drag the asset in and spawn an object that uses that ruleset to spawn a maze on map load?
      All pretty easy to set up by using the callbacks and interop provided.
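As a hedged sketch of what that maze-ruleset example could look like as a scripted asset type (the data fields and the callback name are assumptions about the design described above, not confirmed API):

```
// Hedged sketch: a "maze ruleset" defined as a scripted asset type.
// Field names and the initialization callback are illustrative.
new ScriptAsset(CastleMazeRules) {
   assetName = "CastleMazeRules";
   class = "MazeRuleset";

   // Arbitrary data fields for our custom type
   corridorWidth = 4;
   deadEndChance = 0.25;

   // Optionally keep the classic script-file association
   scriptFile = "./mazeRules.tscript";
};

function MazeRuleset::onInitialize(%this)
{
   echo("Maze ruleset" SPC %this.assetName SPC "loaded and ready");
}
```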
      There’s definitely a lot of room for cool stuff with that, and it also makes the drop-in-and-go-ness of modules with custom types and editors even more expandable, since it means you can often avoid needing to add a new asset type on the engine side.
      Just drop in the module and bam. It’s ready to go now.
       
      Marauder’s work with graphical updates
While not a focal point for 4.1 (since the feature target for it is the UI/Editor updates), it’s worth observing that ongoing work is being done to keep updating the rendering side of things.
Marauder has been doing work on updating some stuff, like expanding the global illumination behavior from just the IBL with probes to a Screen-Space GI implementation. While there’s always more work to do, the work done thus far with it is certainly very promising:

      It's subtle, but there's color bleeding happening with the bounced light from the curtains and the like. The next step is ensuring rays properly contribute to the AO for occluded areas like back behind said curtains and the technique'll be pretty well top-notch AND fully dynamic.
He’s also done some work on more physically correct camera behavior, specifically a higher-quality Depth of Field effect. While not used during gameplay too often, it can be very useful for framing certain shots or cutscenes:

There’s also a technique update for shadows, to make them higher quality, for fewer resources and with fewer irregularities:

      You can see it even does penumbra and loss of sharpness the further the shadow is from the caster. This helps better ground objects and makes the shadows more realistic.
Additionally, there are updates to the physical bloom, so even when over-exposed and bright, it doesn't completely blow out detailing:

      Now, as said, it’s not the focus, but if any of these techniques get finished out by release, then they’ll probably go in. But even if they miss the release target for 4.1, you can expect further fidelity refinements and new techniques to go in render-wise.
       
      Dialogue System
Now, to change gears: another module that’s juuuuust about ready to go in, with just a few bits of refinement left, is another one we’d made for Catographer.
      Specifically, it’s a dialogue system that utilizes the ‘yarnspinner’ format.
      If you’re unfamiliar, yarn, or yarnspinner, is a dialogue library/format standard that has been used in a buuuuunch of indie games.
      It’s pretty well beat up and proven, and we needed a dialogue system for cats. So, the implementation to work with the *.yarn files was done.

      Internally, it converts all logic to torquescript, so you can easily invoke functions, access global variables and do other evaluatable logic as part of the normal flow of it.
      Conversations can have any number of speakers, and the dialogue bubbles will ‘find’ the speaker as long as the AI or Player has an NPCName field to match the speaker from the dialogue file.
      It also supports ‘barks’, or non-conversational dialogue pop-ups, like if a shopkeep tries to call the player over as they walk past.
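If you’ve never seen the format, here’s a tiny example of standard yarn syntax, a bark-style greeting with a choice (the node name and the <<openShop>> command are made up for illustration; the exact subset the module supports may differ):

```yarn
title: Shopkeep_Greeting
---
Shopkeep: Psst! Over here, traveler!
Shopkeep: Finest wares this side of the map.
-> Show me what you've got.
    Shopkeep: Excellent choice!
    <<openShop>>
-> Not today, thanks.
    Shopkeep: Suit yourself...
===
```

Since the implementation converts the logic to TorqueScript internally, a command like that could map straight onto a script function call.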
As said, there are a few bits to polish up, but you can expect that in the Torque3DResources in the near future as well.
For future expansions, I’d like to have it more easily support different styles of UI (as is, it’s designed around ‘word bubble over head’), like being able to have the dialogue show across the bottom of the screen, have character portraits, etc., etc.
      With a fairly consistent file format like yarn, it’s mostly just a question of ensuring flexibility of the UI/UX stuff(and possibly a fully integrated dialogue editor in the future).
       
      Quick overview of TTR gameplan
      Speaking of future prospects, if you’ve been in the discord, especially in the past months, you’ll probably have heard me mention ‘TTR’. Originally a UT99 mod, The Third Reich was a WW2 total conversion mod that emphasized “fun realism”. It had a pretty good sized community back in the day, and a good swath of content.
It was actually what got me into the gamedev scene when I was onboarded, and as the original work on the project waned and faded off, I was put in charge of it.
Unfortunately, being the poopy, baby-faced new developer that I was, I didn’t have the skills or know-how to meaningfully port a full-fat complete project to Torque back in the day.
So it’s just sorta been simmering on the back burner, waiting for an opportunity to be ported and done justice.
      Ok, why is this relevant? Aren’t you working on Catographer?
      Excellent inquiry, reader-stand-in!
      Indeed, far as commercial prospects go, Catographer is the main game project I’m working on. Very much the primary-focus of my time if it isn’t for T3D itself.
      Plus, in the interests of maintaining the legacy of the original mod that was fun and free, I don’t wanna sell TTR for a price tag anyways.
      So why would I bring this up at all?
      Because what TTR can do for the T3D community is quite a lot, actually!
      One thing I’d noted some months back in the discord as the discussion at the time covered modding, was that some of the best introduction to game development was the modding scene.
And while modding may not be quite as gigantic as in its heyday, because people can just grab a game engine and jump straight into their dream project, I think there’s a misjudgment about whether that’s the best route for a complete newbie developer.
      You see, what modding did, was it gave a new developer a fully functional, fully playable game.
      No duh, right?
      But this matters, because it means that what the developer themselves has to do is very, very small.
      If you wanna learn how to map? You have gigabytes of assets already and maps to look at and reference from.
      You don’t have to make or buy trees, and cars, and rock assets to put into your map just so you can learn the ropes.
      Same with modelling, or texturing, or materials, or sounds.
      You can focus on that tiny sliver of game development, drop it in, and immediately play with it in the context of a full project.
      This minimizes how overwhelming starting out brand new to game development can be.
      If you’ve ever watched tutorials for getting started - with any engine, really - the problem is it’s basically a DEEPEST LORE DUMP of the engine you’re trying to use.
You wanna make maps? Well, you gotta have assets first. And code a game mode. And make sure you have a player to spawn. And that player has to have physics and movement. And input/keybinds. And, and, and.
      So even simple introduction tutorials tend to have quite a spread of obligatory knowledge, and for someone that has NO experience, it can be paralyzing.
So, what if you can remove a lot of that initial burden? Then tutorials can be purely focused on a specific thing, because there’s already a full game’s content there.
      It’s also a good morale booster, because getting a gun model in, or new texture, can be part of a full, playable experience immediately.
      It’s a small victory, but for someone just starting out, it can be a big validator that they’re going the right direction.
So, having a fully functional game that’s 100% designed for actual play, not just a random demo scene, is a good foundation for newbies to build off of, rather than jumping in and drowning in the ocean of gamedev.
      So it’s just a moddable project?
      Well, not JUST that.
      As noted, it being a fully featured game, with a lot of existing content that’s just fun to play is the first step.
      Once it’s there, then it can be made moddable.
      This lets us have “first contact” tutorials emphasize modding TTR first as a ‘dip your toes’ deal. And once some foundational knowledge is learned, they can go on and start doing their own projects.
      But beyond that, having a project that’s a sort of “The Engine’s Game” is good for marketing, drawing attention and a bunch of other stuff.
      Like?
      Well, in no specific order of priority, I see TTR as being able to provide the following for the engine and the community:
• Newbie developers’ ‘first contact’ tutorials. Super small, focused tutorials for learning all the baby-step foundational knowledge of gamedev, without needing other foundational knowledge to even get to the point of displaying the results, helps keep new people “in”.
• A game people can play that is upfront about what engine it uses, bringing attention. Similar to Unreal’s Unreal Tournament, or most any game from Valve for Source, having a game that’s up front in its association to the engine, with linkbacks and directed flow back to the engine and community, is a good funnel. If people like playing the game, they may want to mod it. And if they mod it, they may just use the engine full-time for their personal projects.
• A benchmark testbed. We can put camera markers and flight paths in, and run canned benchmarks in the various levels, which lets us find performance hotspots, or see how new tech fares in different environments.
• Demonstrates/tests basically every feature of the engine. The game had a bunch of different maps, and fairly in-depth gameplay. Indoor, outdoor. Forests, water. Vehicles, infantry combat. AI, teams. Times of day, weather, and more. So if there’s a thing the engine can do, the game can show it off. And that means it can also do…
• Feature regression tests. One thing that’s tricky to deal with is regression testing. While we have a few test maps like Outpost, it’s hard to be truly comprehensive for any given feature of the engine, especially as we look into doing stuff like Components in the future. Having all the different features the engine can do in the various maps and gameplay allows a much more comprehensive spread of cases, and makes it easier to see when something goes wrong. Performance regressions can also be detected with the aforementioned benchmarking.
• Network demonstration. As a multiplayer game, it’s a good tester for one of the most fundamentally core features of the engine, the networking.
• New feature testing/demonstrations. As new features go in, like SSGI or Components or whatever, they can be tossed into a build for TTR as a good all-rounder way to test them in different scenarios, to ensure they work well with them all before being rolled in. This should help ensure stability of new roll-ins and minimize how much hotfixing happens.
Not bad. So it’ll be ready soon, then?
      Oh. Ha. Haha, aahhh. No. Not exactly.
      To be specific, most of the OG art and content is basically ready and waiting for being rolled in and turned back into TTR, but on T3D.
      But that takes time, and I have to ensure all the bits are formatted right, conversion from the old archive files or extracted content isn’t hideously mangled, and so on.
      I also gotta finish up the Design Document and probably a roadmap for work on it so if people want to pitch in, they can and know what needs doing.
      Plus, as noted, outside of engine work itself, my main priority is Catographer, at the moment.
      Having said that, the utility TTR can provide to the engine and community is, in my opinion, quite apparent, so I’m not going to just let it sit in a dusty box for several more years.
My plan is, over the next months, to slowly get all the pieces pre-arranged and positioned, with documentation for design and intent on it. Sorta like laying out all the bits before assembling your IKEA furniture.
Once that’s ready, and Catographer is punted out (fingers crossed, this summer!), I can spend some intermediate time after that ripping through the work, getting TTR assembled in T3D, and putting it out there.
      Once it’s live, it’ll be easier for people to poke and prod everything to fill in any missing bits or make their own mods for it to expand upon it.
      So while it’s not right around the corner, I wouldn’t say it’s far off, either. Just a matter of managing the timing and priorities.
       
      In Closing
      So, uh, yeah. Wow, that was quite a bit of yapping, huh?
Buuuut, it’s been a hot minute since I did a workblog, and the release updates for 4.0 and the hotfixes aren’t exactly real replacements for these, either.
But now you’ve got the gist of the big bits that are cooking. Hopefully that’s got the ol’ game development appetite going 🙂
And if you want to help facilitate all this cool stuff and the engine as a whole, I DO have (even though I never remember/bother to mention it) a Patreon.

I’ll be working on updating it this week: providing more details, linking back against roadmaps, and doing more to detail out what exactly the whole shindig with it is, and how it helps the engine.
But it does help. First and foremost, it helps cover all the hosting and service costs we accrue. But it’s also so I can help feed Az, so he doesn’t return to his true shoggoth form and bring about the end of at least 1 small country.
      Quite important indeed 😛
      So, yeah. Every bit helps. But I also acknowledge that just throwing money with absolutely no return isn’t always appealing either. Budgets are tight for everyone these days, and even a token return is better than nothing.
I’ll be looking into Discord integration for patrons to get a nifty special role indicating that you’re donating, and when TTR comes out, Az and I have talked about some bits for that too. Like, if you are a patron, you can get your name in for one of the AI Bot names.
      Small, but fun little things like that.
      Anywho. Lots of work to do, as you can probably guess from the above, so I’d better get back to it!
      Thanks everyone, and happy dev’ing!
      -JeffR
       
    • By JeffR in Torque 3D
So, on today's installment of "random rambles about development things"...
But for real, it's a good time to do a new workblog and keep people in the loop, for those not in the discord, or those that aren't spending every day in it.
So, what's on the ol' discussion stuffs today?
      Well, for the big one, the main Feature target for 4.1: Components. 
      Or, more specifically DOCs. What does DOCs mean? Well... 
      Directors, Objects, Components 
You may have heard us discuss Entity-Components or Entity-Component-Systems (EC and ECS, respectively). For a brief refresher in concept, here's a simple breakdown:
Entity-Components as a paradigm can be described simply as having an Entity object, and then Component objects that both contain the data and implement the logic. As in, if you have a component to render a shape, the component not only holds the info for what shape to render, but also the logic to render the shape. This is how most other engines do components.
      The reason for that is pretty simple. It's robust, and it's easy to work with. It's not the most efficient system, but it's pretty hard to screw up. You slap a component onto an object, set the properties on the component, and then the component does the thing. 
When I did the main previous implementation of components, this was also the system I went with. The MAIN problem with this approach is that any given component is kinda...chonky. You also have a lot of bloat on the Entity object in most cases. And then there are all the bits that have to cross-communicate to ensure dependencies work (you gotta have collisions for physics to work properly, for example), as well as order of events (collisions are calculated, then physics), as well as any deeper engine system dependencies. It can spiral quite a lot.
      Beyond that, it's also very difficult to thread any of the component's workloads because everything cross-communicates in order to work. You can't easily punt a physics component into a thread if other threads need to talk to the same collision component or entity it uses, etc. 
So, advancements in the theory of component implementations led to ECS: Entity-Component-Systems. 
Now, the confusing use of "Systems" aside, the main differentiator from EC is that Components now ONLY contain data. They don't implement any logic whatsoever. Likewise, ALL the burden of functionality is moved off of Entities. In a 'pure' ECS implementation, an Entity is nothing more than an ID for Components and Systems to reference. Instead, Systems implement all functionality logic. If you have a physics component, there's a PhysicsSystem that implements the actual logic for it. 
This is certainly more complex to implement. In fact, very few engines or games use ECS. Unity's new DOTS approach is based on ECS, and a few games like Overwatch have utilized it. But the innate complexity of the approach, and how abstracted the data and implementation are, means that it's far less common. 
      So why use it? 
Because it is MUCH easier to be cache-coherent and thread things. For the non-coders out there, cache-coherency is the idea of keeping all the memory a given chunk of code in the engine uses bunched together. Think of it like studying. Rather than getting a book, reading a paragraph, then walking back to the shelf, putting that book away, getting a new book and reading the next paragraph, and so on - which would be very, very inefficient - you instead just get all the books you need at once and can quickly reference between them. 
In practice, memory in the computer works similarly. So if you can cram all the data you need to work on into the same blob of memory, performance is improved SIGNIFICANTLY. But it's not very 'human friendly', which is why you get stuff like ECS. All components of a type can be crammed into a dense set of data. So when a System goes to implement logic, you've got all the relevant components in a tight blob of memory, and the whole thing can be processed without having to go "get another book" as it were. 
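To make that concrete, here's a hypothetical ECS-flavored sketch (again, illustrative names only, not engine code): an entity is just an ID, components are plain data packed into one contiguous array, and the System owns all the logic, sweeping that array in a single cache-friendly pass.

```cpp
#include <cstdint>
#include <vector>

using EntityId = std::uint32_t; // a 'pure' ECS entity is nothing but an ID

struct PhysicsComponent {       // data only, no methods
    EntityId owner;
    float posY;
    float velY;
};

struct PhysicsSystem {          // all the logic lives in the System
    std::vector<PhysicsComponent> components; // one dense blob of memory

    void update(float dt, float gravity) {
        for (auto& c : components) {          // one straight-shot sweep
            c.velY += gravity * dt;
            c.posY += c.velY * dt;
        }
    }
};
```

Since every `PhysicsComponent` sits next to its neighbors in memory, the CPU never has to "walk back to the shelf" mid-update.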
      In addition to this, the data being more detached from implementing logic(and better managed in memory) makes it much easier to implement the logic in multiple threads. This allows the machine to crunch a lot of objects in parallel - which is especially good on modern CPUs that have sometimes dozens of threads. 
But there's a good number of downsides to this approach as well. It is, as said, not very human friendly. Implementing new components and their associated systems is not how most code is implemented, so it can be difficult to work with. It also requires much more tracking of when things are added or removed, and when things should run. Components and systems are still burdened with tracking dependencies for when to implement things, and scripting it is very, very difficult. 
      All those and a bunch of other smaller inconveniences make it generally a pretty poor paradigm to work with in something as complex as a game engine. There's a LOT of ECS implementations out on the internet. But they're more academic than practical because of the inherent limitations of the approach. Cramming it into a game engine while still making it easy to work with from a scripter, designer or artist's perspective is pretty hard. 
      And both of these have various limitations in how to deal with it from a networking perspective. It's very difficult to have the server and client safely agree on the data the client has without trafficking a ton of data, which is bad for net performance. 
So, between what I learned from implementing an EC-style deal in the first pass of components, and a lot of tests and research into ECS, I settled on the fact that both approaches just kinda aren't ideal. 
      So I did some work and fashioned up a - as far as I can tell - novel components implementation for Torque3D. 
      Directors, Objects, Components, natch. 
      So, what’s the deal then? Well, per the name, there’s 3 main components(heh) to the model, which we’ll cover here: 
      Directors 
      So what’s a director? Well, in practice a Director is a simple class that ‘directs’ when and where updates to components happen, hence the name. The idea is that we want to move the burden of when and why updates happen off the objects and components. 
At its core, a Director is in charge of doing a particular thing, generally updating a specific component or set of components. Like, say, when we want our RenderMesh component to draw. The Director has a specific timing to it (aka, Rendering) that the rest of the engine can invoke through the DirectorManager, which is pretty much just a simple container class. 
When we want anything with the Rendering timing to kick off, we tell that DirectorManager to run an update on said timing. And in turn, any Director with that timing is told to do its work. Simple enough. 
So in our example of the RenderMesh components: the RenderMeshDirector has the Rendering timing, so when the engine goes to draw objects, it can tell the DirectorManager to run the Rendering timing, and our RenderMeshDirector gets told to update. When this happens, the Director loops over valid RenderMesh components and directs them to do their work. 
      And thus, our RenderMesh components have drawn their meshes. 
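The flow above can be sketched in a few lines. To be clear, this is a hypothetical mock-up of the idea as described, not the actual engine API - `Timing`, `DirectorManager`, and `RenderMeshDirector` here are stand-ins:

```cpp
#include <memory>
#include <vector>

enum class Timing { Physics, Rendering }; // add as many phases as you need

struct Director {
    virtual ~Director() = default;
    virtual Timing timing() const = 0;
    virtual void update() = 0; // do this Director's batch of component work
};

struct RenderMeshDirector : Director {
    int meshesDrawn = 0;       // stand-in for looping over RenderMesh components
    Timing timing() const override { return Timing::Rendering; }
    void update() override { ++meshesDrawn; }
};

struct DirectorManager {       // pretty much just a simple container class
    std::vector<std::unique_ptr<Director>> directors;
    void runTiming(Timing t) { // kick off every Director with this timing
        for (auto& d : directors)
            if (d->timing() == t) d->update();
    }
};
```

The engine's render step just calls `runTiming(Timing::Rendering)`, and only Directors registered under that timing do any work.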
      Now this sounds like a lot of work compared to just looping over the objects or components directly, but there’s a bunch of benefits to this. 
For example, as noted with EC and ECS implementations, one of the biggest tangle-up points is dependency management. Normally components have to track what dependencies they have, if they’re fulfilled, and if not then they are not enabled. Any time a new component is added to its owner, the component is in charge of validating its dependencies. 
      This is important, certainly, but it also can lead to a lot of complexity, spiraling dependency chains, and code bulk on the components themselves. So instead, we move that to the director. 
Because ultimately, the director is in charge of a set of components, like our RenderMesh components, we can track which ones are valid in the Director. If an Object adds a new RenderMesh component, it naturally associates to our RenderMeshDirector, which now knows whether it’s valid or not. The component itself doesn’t have to care in the slightest. 
      This keeps the component code leaner and cleaner, so it’s easier to maintain. 
It also means we can much more explicitly control the timing and sequence of when things run in the engine. I used the example of the “Rendering” timing before, but it’s powered by a simple enum, so you have as many entries as you can cram into an enum for when to kick off updates. You can update just physics things, or just rendering things, or specifically objects that have client inputs because they’re controlled. 
      This gives a much more comprehensible order of operations about when and where stuff is executed in the engine, making it easier to track and debug when stuff kicks off. 
      Additionally, because the director has an explicit list of components it’s in charge of, and we are specifically working on that list of components at a specific time in the execution of the engine’s loop, it means that we have MUCH more control over the memory in play for the engine. 
      This ties back to the aforementioned cache coherency. We can keep a list of components, like RenderMesh components, and that list can be much more easily just shoved into memory as a straight shot, minimizing how much the CPU needs to jump around. The Director works on THESE objects, so the CPU can have all the data on hand. 
      It also means that, between the more tightly bound memory and the express execution timing, we can much more safely handle when things are threadable. Which is a big thing for game engines. 
Even major engines are still predominantly single-threaded, so when you’re busting out your brand new CPU with 36 cores, most of them are sitting around doing nothing. With Directors controlling the memory and execution, we can spool up a bunch of tasks in the threadpool and split the workload across those cores/threads that aren’t doing anything, allowing the regular workloads to be processed way faster. And this should, in theory, scale well with object counts. 
      So yeah, Directors are kinda the MVP of the system, what with keeping a tight wrangle on memory, streamlining execution of parts of the engine, standardizing a lot of bits, and also making the engine significantly more threadable than before. 
      So, you know. A little bit of a thing there. Which takes us to the next bit of our paradigm: 
      Objects 
      Compared to everything that directors do and are, Objects are pretty simple in the end. These are your entities that you slap components onto. Unlike a full-fat ECS implementation, Entities are ultimately still full objects. 
There’s a good reason for that, of course. The big one is that T3D has a scripting language, and that’s super useful. So an Entity can’t just be an ID that exists in the void, because we gotta have an object for the scripts to work through. 
      Additionally, T3D also has a very good networking system, and not maximizing that is just dumb. And rather than having each component be manually replicated, or ghosted to clients, we exploit the way T3D does networking streams and packing to go through our Entities. 
Specifically, Entities keep a list of components they own. And if a component is marked to be networked, it goes in a separate list for that. When a networking event happens, such as a component being added, removed, or updated, the Entity is itself flagged for networking action. 
Since the Entity is ghosted to clients as normal, we can then piggyback the Entity’s network updates. Each component that’s networked has its own mask bits for granularity - we only need to update what actually changed - and this is packed into the Entity’s network update. 
This means we can fully network any number of components, but with only 1 ghost per Entity. And what updates we DO traffic to the client are as lean as possible. This keeps the traffic as thin as physically possible without giving up the very solid networking that T3D offers. 
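The dirty-mask piggybacking might look something like this hypothetical sketch (names invented for illustration, not the real T3D networking API): components set granular mask bits, the Entity raises one "I need an update" flag, and the pack step only touches components that actually changed.

```cpp
#include <cstdint>
#include <vector>

struct NetComponent {
    std::uint32_t dirtyMask = 0;                 // granular "what changed" bits
    void setMaskBits(std::uint32_t bits) { dirtyMask |= bits; }
};

struct NetEntity {
    std::vector<NetComponent*> networkedComponents;
    bool netDirty = false;                       // Entity-level flag for ghosting

    void componentChanged(NetComponent& c, std::uint32_t bits) {
        c.setMaskBits(bits);
        netDirty = true;                         // piggyback on the Entity's ghost
    }

    // Returns how many components actually had data to pack this update.
    int packUpdate() {
        int packed = 0;
        for (auto* c : networkedComponents)
            if (c->dirtyMask) { ++packed; c->dirtyMask = 0; }
        netDirty = false;
        return packed;
    }
};
```

One ghost, one update, and only the changed components contribute bytes to it.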
      And lastly, we have the mainstay of any component system(duh): 
      Components 
In DOCs, Components behave very similarly to the previously mentioned EC paradigm. They hold data and implement the functionality for that data. So a RenderMesh component holds what mesh we want to render, and also does the logic to render that mesh. 
The main difference, as covered in the section on Directors, is that the components are stripped down to JUST the data and implementing logic; all the surrounding boilerplate, as well as the timing of when components are told to kick off their logic, is largely standardized up into the Directors. 
      This means that implementing new components can be relatively easy, as you have the basic data/implement setup, along with any networking pass-through logic as noted in the Objects section, and then a companion Director to manage when the whole shindig activates. 
      All in all, while a bit more complex than standard game object classes, you get a lot of flexibility and ability to quickly slap stuff together without completely shifting to a new conceptual paradigm like a pure ECS implementation. 
      It also keeps networking lean, keeps scripting on the table, but also opens up the door to massively thread workloads in the engine. 
      So more flexibility, cleaner structure, more performance, and without compromising the good bits the engine already offers. 
      Not too shabby a deal, eh? 😉
      Now, this update’s already quite a long one, pretty technical and tragically limited on pictures, so I’ll do a follow up post next weekend going into the front-end usage(which is realistically where most people will work with DOCs) as well as other development stuff going on or planned. 
      So I’ll see you all then! 
      -JeffR
    • By JeffR in Torque 3D
      Mid-October workblog time! (Which should've been last month, but chasing down bugs like memleaks and straight up crashes before I wanted to post caused delays, so...whoops!)
      So, how's it going everyone? Time for fanciful development news.
      First, lets go over what all work has happened thus far since the last workblog:
76 pull requests merged in, containing over 164 changes ranging from bugfixes to improvements to additions.
      Notable examples include:
      • Updating SDL to latest
      • Steve passing along a fix to correct render issues for the ribbon particles
      • Preference settings (which will get integration into the options menu soon) for forcing static object fadeout at a distance, as well as limiting the number of local lights renderable at a time, and whether they fade out over distance as well. These can potentially help a lot in very object-dense scenes with lots of small clutter stuff that doesn't need to render at a distance
      • Some better default values for local lights, and cleaning unneeded fields
      • Fixing of gamepad inputs
      • Various shadergen cleanups
      • A whole metric butt-ton of fixes, improvements and QoL changes for the asset workflow
      • Ability to better control script queue ordering between modules
      • A crossplat option for 'open in browser', which could see a lot of use in jumping to documentation
      • Improvements to baking of meshes
      • Adds populating of a level's utilized assets into its dependencies list to ensure everything preloads as expected instead of trying to do it at the last second, which could cause scripts executing during object creation and lead to variable corruption
      • Settings files are now properly sorted (a small change, but it keeps the layout of the settings.xml and asset import config files consistent, making them easier to catch up on changes)
      • Re-implementing SIS files for the importer so there can be special-case overrides for any given file type
      • Fixes the resource change detection for TSStatics, so if a shape file is changed, it auto-updates the objects in the scene
      • Fixed several potential memleaks and one confirmed one that could balloon the mem usage pretty substantially over time
      • Misc improvements to asset import pipeline stuff, such as suffix parsing improvements
      • Shuffled some gui profiles into core to better standardize them
      • Created a number of macros to wrapper around defining and setup/usage of assets (image asset for now, but others later). This is so you don't have to define a bunch of supporting stuff in a given class over and over. Basically, a convenience and code templating thing.
      • GL and GCC compilation fixes
      • Integrated the old MeshRoad Profile editor, so you can have more control over the shape of the meshroad
      • Added guiRenderTargetViz control, which lets you specify any given render target and display it to a GUI control. Minimal direct use currently, but useful in debug operations, and in the future could drastically simplify doing stuff like picture-in-picture displays or multi-view GUIs and the like.
      • Fixed a pretty gnarly memleak, so we're memory stable again.
      • Lukas got his C# project caught up to current BaseGame so we can better test the cinterface changes (and soon people can play around with all that too)
      For some of the bigger changes worth going into more detail:
      First, Mars contributed some very important improvements to window and resolution handling.
This adds in Borderless window mode, as well as the ability to set in the options which display (if more than one is detected) the game window should be on. Additionally, there's better handling for what screen resolution should be in play based on window mode (ie, Borderless is always the desktop resolution), as well as ensuring display options apply properly.
I added handling to disable options fields if they are non-applicable, to avoid confusion.
Secondly, an update to PostFX organization, integration, and editing behavior.
      All stock PostFXs and their shaders are now safely tucked into the core/PostFX module. Easier to find, easier to edit, and the shaders aren't in a completely different module.
      The loader logic was tweaked slightly as well, so that the levelAsset has a field which indicates what the postFX file is. This requires less manual logic for 'lemme go dig around for a posteffectpreset file' and should be a bit more reliable.
      Additionally, the PostFX editor saw some fairly big updates, both in how its accessed, and how it works.
      You can now either edit the default postFX preset, or edit the current Scene's preset, as seen here:

The editor now better integrates with the PostFXs themselves, and auto-applies changes as they happen, which is much, much, much better for dialing in how they impact a scene.

      Importantly, you'll note that the list of PostFX's displayed in the editor there looks kinda...lean.
      This is because it now only displays active PostFXs rather than the entire registered library, which should help cut down on confusion about what PostFXs are actually active and impacting the scene.
Adding is as simple as pressing the green plus button at the top and then picking which to add from the list that auto-populates with all registered PostFXs:

      And then selecting from the list on the left to edit a given postFX:

      Removal is as easy as selecting the PostFX in question in the list and pressing the red minus button.
You may also note a bit of change in what PostFXs are there. Most are the same as always, but a few tweaks happened to make things a little more consistent and interop-friendly, such as moving the LUT color correction logic out of HDR and into its own PostFX. I also added a simple Sharpen PostFX, and integrated a Chromatic Aberration PostFX.
      We also recently shifted over to utilize 'ORM' formatting for the PBR image maps.
In order to better integrate with tools - and because GLTF assumes ORM, and it's the closest thing to an industry standard for the PBR map - we're shuffling it around internally to work as ORM as well. What's ORM? It stands for (Ambient) Occlusion, Roughness, Metalness: the PBR image maps arranged in that specific channel order. GLTF and a handful of engines assume that order, so to keep it simple, we're doing the reorg.
Likewise, we were operating with Smoothness as the front-facing value instead of Roughness. This is - similar to the PBR channel order - sorta an 'eh, whatever works' thing, but Roughness is a bit more standard, so we're going ahead and making that the front-facing standard as well.
      Internally the math assumed roughness anyways, so it isn't a huge change principally.
Touching on the above, we'd noted some oddball behavior when adjusting roughness and metalness values, so with Timmy's fantastical eye for that stuff, he was able to spot some problem areas with the logic and our BRDF image. The changes he passed in result in much, much better behavior across the full roughness/metalness ranges, and thus things look much more correct on any given material. Huge props there.
I had also mentioned some crashes and stuff. There was a bug that snuck in with the probes where baking them could cause crashes. It hit some hardware more consistently than others, and was a right pain to track down. In the end, though, I put up a PR with a big update to the probe handling. Before, you could have up to 50 active probes, and they would all render in one big go. It worked, and it guaranteed consistent performance, but a lot of scenes don't utilize anywhere near that many probes in your line of sight.
      So I shifted the logic to where you have registered probes, and active probes.
You can have up to 250 registered probes in the scene now, which is quite a lot for anything other than a big open-world deal (and you can always unregister them selectively as needed), and an adjustable number of active probes. The default is 8, but it can technically go all the way up to the full 250 (though that's not recommended for performance reasons).
One of the big advantages, outside of the basic performance consideration of not needing to actively render as many probes at once, is that we can now lean on culling to ignore probes you can't even see, which just compounds the performance gains, and be smarter about which ones to bother with. It calculates and picks the best probes based on the camera's position, and I'll be adding an 'influence' value for more artistic control, so certain probes can be noted as more important than others.
      All of this together means it selects the best probes and renders only those up to the set per-frame limit(which again, is adjustable for a given game's needs) yielding the same results in terms of blending between probes, but much smarter and targeted selection of which ones to render, yielding improvements in performance.
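The "registered vs. active" selection boils down to a nearest-N pick. Here's a hypothetical sketch of that idea (invented names, not the actual probe manager code; the real selection also factors in culling and will factor in influence):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Probe {
    float x, y, z; // probe position in the scene
};

// Squared distance from a probe to the camera (no sqrt needed for sorting).
float distSq(const Probe& p, float cx, float cy, float cz) {
    float dx = p.x - cx, dy = p.y - cy, dz = p.z - cz;
    return dx * dx + dy * dy + dz * dz;
}

// Return the indices of the (at most) maxActive registered probes
// nearest the camera - these become this frame's active probes.
std::vector<std::size_t> selectActiveProbes(const std::vector<Probe>& registered,
                                            float cx, float cy, float cz,
                                            std::size_t maxActive) {
    std::vector<std::size_t> indices(registered.size());
    for (std::size_t i = 0; i < indices.size(); ++i) indices[i] = i;
    std::sort(indices.begin(), indices.end(), [&](std::size_t a, std::size_t b) {
        return distSq(registered[a], cx, cy, cz) <
               distSq(registered[b], cx, cy, cz);
    });
    if (indices.size() > maxActive) indices.resize(maxActive);
    return indices;
}
```

So with 250 registered probes and an active limit of 8, only the 8 best candidates ever hit the render path each frame.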
      We also fixed it finally so that surfaces that are metallic will correctly render in the baked cubemaps for probes instead of the flat black they were before, which should yield more accurate results in the reflections.
      While the above probe reorg and crash chase-down sorta dominated the last 2 weeks(crashes are a pretty big deal, so it was important to get out of the way), we can get back on the track of getting the asset integration with the game classes sorted out. I mentioned the utility macros added for image assets, which are most recently utilized by the materials class now.
      This has streamlined quite a bit of code and makes everything Just Work, and between it and the asset usage in TSStatic, we're feeling pretty confident in finally moving forward and adding asset integration for the remaining classes. This is probably the last big obstacle for 4.0, especially as the general asset pipeline is proving to be fairly stable at this point, and PBR is getting a good workout in several projects and what minor issues crop up are getting plugged quickly. The issues list I've got keeps creeping downward, and things are looking quite nice 🙂 
For a parting bit, one thing we also added in recently was a Prototyping module. It has a number of color-coded materials designed to be used for certain surface/object types, visually distinctive so as to help understand the map/design space without needing to worry about fine texture detail. Additionally, a number of primitive objects like a cube, cylinder, etc. are in there.
      However, we also needed a size reference stand-in, and I thought, who better for that than our very own Kork-chan?

      Definitely won't be as long for the next workblog, and I imagine we'll have a number of modules to try, art to ogle, and a new test build up quite soon, so stay posted!
      -JeffR