@LukasPJ has already implemented TAML in T3D in some form; I'm not too familiar with it, but I'm sure he can elaborate. @Hutch recently expressed some opinions on the use of TAML, which prompted me to create this thread. We should have a discussion about how we want to carry forward in this direction.
I'll write in more detail later, but I want to kick off the discussion by saying I'm very much in favour of separating code and data: TAML for data, and TorqueScript for code. At the moment, our .gui and .mis files, for example, are actually TorqueScript code, and are executed as such. Separating them would allow for better editing and tooling. I'll finish for now by quoting Rene Damm, from some private correspondence we had a while ago, which I've asked for his permission to share:
Torque’s serialization/loading system again is basically absent. Melv did some work here for T2D based on the work he did for T4, but in good old T3D that odd idea of serialization by generating script code still is the de-facto persistence mechanism (if that hasn’t changed in the MIT version). This leads to so many problems. Object references need to be by name and can only be resolved after loading. The script VM is dead slow so loading is extremely slow. And so on.
This, however, leads me to another issue. I always liked how “alive” things were in the Torque editor – your game is really running while you edit it. It’s fun. However, in the end, it’s not a good thing. It makes much more sense to separate play testing from editing and while Unity has some issues of its own here, it does this much better.
Finally, there’s the build pipeline – another concept that is sorely missing in Torque. In Torque, neither importing nor building is well-defined. Importing is sort of a per-asset-type kind of thing without any architectural backbone. And building basically equates to you somehow packaging your executable with the scripts and data. In Unity, building is essentially a final compile step with its own pipeline and processors. This allows targeting a metric ton of platforms from the same project data and again makes for a fully automated, deterministic, and repeatable process.
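To make the code-vs-data distinction concrete: today a .gui file is TorqueScript that the VM has to execute just to instantiate the objects. A simplified sketch of what our current files look like:

```
//--- OBJECT WRITE BEGIN ---
%guiContent = new GuiControl(PlayGui) {
   profile = "GuiDefaultProfile";
   position = "0 0";
   extent = "640 480";
};
//--- OBJECT WRITE END ---
```

With TAML, the same object would be plain declarative data that a loader can parse without involving the script VM. Roughly, following the XML-style TAML that T2D uses (field names here are just illustrative, not a spec):

```
<GuiControl
    Name="PlayGui"
    Profile="GuiDefaultProfile"
    Position="0 0"
    Extent="640 480" />
```

The second form can be read, validated, and written back by tools without executing anything, which is exactly the property the editor and build pipeline would benefit from.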