
buckmaster

Contributor
  • Posts

    322
  • Joined

  • Last visited

1 Follower

About buckmaster

  • Birthday 01/15/1991


buckmaster's Achievements

  1. In case you haven't seen it already, I highly recommend this series of articles on Gamasutra to anyone trying to do large-scale rendering. I don't think you're having exactly the issue I thought of first (the author uses double-precision for storing object locations, but still renders using floats), but they definitely solved a lot of issues.
  2. *Applause* @JeffR you deserve tons of kudos for being the only active SC member for a large part of this release. Cheers and great work! Looking forward to 3.9 :)
  3. I've heard rumours about someone trying that. I agree, I think it might be a decent idea, though IIRC @MangoFusion was close to getting GPU skinning working, so it might be moot. It might honestly be easiest to just add more documentation about T3D's threading features and add more examples of their use (maybe in the navmesh/pathfinding code?) so that game devs can decide how they want to use parallelism for their own problems. Sounds like the Life is Feudal team were having problems with parallelising their game; I wonder if better docs and guidance around what parts of the engine can and should be subject to it would have helped. The biggest, biggest thing is obviously the resource manager but that'd be a huge amount of effort I reckon.
  4. Thanks :). Looking good... just those reflections and it'll almost look the same!
  5. @koros is it just me or are those images not public?
  6. Yeah, I definitely agree that having more stuff in the engine would make it easier to ditch TS - and something we were chatting idly about within the SC around the start of the year was starting to port the editor suite into C++ for just that reason. But the effort involved would be monumental.
  7. Thanks for the report! Did you compile a 32-bit or 64-bit binary? CMake or Project Generator? I need to get myself some Windows 10 happening...
  8. Since we were talking about engine startup I'm going to mention t3d-bones, which contains a very minimal example of starting up the engine. By 'minimal' I mean it still has 1MB of scripts in the sys/ directory, but at least it's a bit more approachable. Also, since credit is due: Michael Hall is responsible for pretty much all of sys/; I figured out most of the stuff outside it.

     @eugene2k while I tend to agree with making the engine more configurable at compile time, this strikes me as wishful thinking. T3D is pretty tiny at ~15MB, and unused code will be sitting in RAM, not in your processor's cache; compared to other game assets I think the code is probably the smallest memory worry. As for faster startup times, I'm very skeptical that you'll gain much. Bigger gains are to be had optimising the code that does run rather than removing code that doesn't.

     That may be true, but driver support still isn't ubiquitous. Though if we're talking about D3D maybe it's a moot point; are there any known cases where OpenGL works better on Windows?
  9. I don't think you're likely to kill memory unless you have like 500 people doing this at the same time, or your mesh files are huge. What do you mean by removing it soon after the mouse moves?
  10. Yep, the asserts would go in if we didn't change the division functions. @Azaezel I'm not in favour of either of those solutions. 1) adds complexity, and 2) is just really ad-hoc and might cause other behavioural oddities - though to be fair, probably nothing more odd than regular floating-point behaviour. @Caleb the idea would be that in some cases, you know you can skip the checks (for example, if you're dividing by some constant point you know, or if the maths used to calculate a point is such that it can't result in 0, or if you want to do the check once then divide by the same point several times). These cases are all hypothetical, but in each of them you either don't need to check, or only need to check once.
  11. There was an excellent tutorial series on simple destroyable shapes. I collected all the links here but haven't ported them to the wiki yet.
  12. @Azaezel has been running into cases where divide-by-zero errors, particularly on vectors, are causing errors. Az reckons this is behind some decal stretching issues (653 and 1160 suspected). We had a bit of a chat about this today and haven't reached an agreement, so we'd like to enlist YOUR OPINIONS on the issue. Basically, the question is this: should we make the vector division operations safe by default, or should we require a check before each call that might end up doing a divide by zero?

     Here's an example of the first approach:

     inline Point3F Point3F::operator/(const Point3F &_vec) const
     {
        Point3F tVec = _vec;
        // new code ->
        if (tVec.x == 0.0f) tVec.x = POINT_EPSILON;
        if (tVec.y == 0.0f) tVec.y = POINT_EPSILON;
        if (tVec.z == 0.0f) tVec.z = POINT_EPSILON;
        // <- new code ends
        return Point3F(x / tVec.x, y / tVec.y, z / tVec.z);
     }

     I.e. inside the division operator, we check whether any of the denominators are zero, and if so, set them to some small value to avoid the divide-by-zero error. This means you can always use point1 / point2 without worrying about whether point2 is safe to divide by. It's more foolproof, and it fixes the bugs in a single place rather than at every call site where vector division happens.

     The second option looks more like this:

     Point3F x = getPoint(), y = getAnotherPoint();
     if (y.divisorSafe()) {
        return x / y;
     } else {
        return z; // something else appropriate in this situation
     }

     In this case, divisorSafe would be a new method that returns true if none of the vector's components are 0. We require that every user who might want to divide by a vector check this first, unless, for example, they know the vector will never have 0 components.

     This approach means not changing the implementation of Point3F::operator/, except for adding many more assertions like this one, so that div-by-zero errors are caught as soon as they happen, instead of introducing NaNs and Infs to cause mischief at some later point in the program. Note that this is roughly how / works for plain floats already (i.e. it's unsafe).

     So - thoughts? Az has characterised the former option as 'speed of implementation' (not having to change all the call sites, not having to worry about whether your division is safe), and the latter as 'speed of execution' (no branching inside operator/).
  13. The reason for using CMake was that it supports basically any build system you care to use, and it's a large, popular project, which means we benefit from its documentation, developers, etc., and don't have to maintain that stuff ourselves. On the other hand, its macro language is abominable. Which is why I was suggesting trying to create some sort of nicer interface on top of it, similar to the current project manager, so developers have to deal with it as little as possible.

     My next biggest motivation for replacing the current project manager is to rewrite it as a Torque application rather than Qt. Because a) dogfooding, b) better support for all the platforms Torque runs on, c) not splitting the codebase.

     My final and largest motivation to redo the project manager is to start supporting more features for devs. The biggest one, for me, is being able to pull in script packages instead of just entire templates. I really, really want to eliminate the duplicated scripts in our repo :p, but even more than that, I want to introduce some standard way to add content packs (and script libraries, and even source mods, etc.) to your project, so we can stop dumping everything cool people make into the templates or the main engine.

     As for what I use - if I'm on Windows I'll always use the PM, unless I'm specifically trying to test something CMake-related. Takes fewer clicks to spin up a new project. The project generator has the advantage of being Torque-specific, so we can tailor the generated files to Torque. The downside is obviously that we have to write those files ourselves and keep them up-to-date, and I know I'm sure as hell not up for that. The other benefit of the project generator is that the config is kept in your project. With CMake, you essentially have to run the same command-line command every time (for example if someone else pulls down your project and wants to compile it with the same modules/flags/etc.).

     That's something I'd want to fix with the new project manager - providing some Torque-specific config (like what modules are enabled) that tells CMake what to install. The PG does this already... by using PHP scripts. I'd recommend using XML or JSON files to manage this config instead.

     Okay, so TLDR: my opinion is we should ditch the PG (in the long run) and use CMake as the backend of a new PM frontend tool. We should obviously support both for a while, because our CMake files still aren't feature complete.
  14. FYI the config.cs file is auto-generated when you run the engine if it doesn't exist - but if it DOES exist then it's sometimes used in preference to default.bind.cs, which is annoying. It stores any changes you've made to keybinds in the options GUI.
  15. Where does the control object get selected? It's probably somewhere outside the process tick loop, right? The problem is that even if that's currently the case, someone could write a class that does update the control object. Then we'd have a bug, assuming the second if is significant.