Epic upping the ante again...

Friendly conversations, and everything that doesn't fit into the other forums.
23 posts Page 3 of 3
Posts: 1639
Joined: Sun Feb 08, 2015 1:51 am
by Duion » Fri Jun 05, 2020 8:42 pm
I saw a reaction video from a developer to this demo and it was quite positive: https://www.youtube.com/watch?v=9PmjQvowfAI
Posts: 957
Joined: Tue Feb 03, 2015 9:49 pm
by JeffR » Sun Jun 07, 2020 7:30 am
I've got my own impressions of how they're doing the tech (some of which seem to align with the more technical analyses online, such as Digital Foundry's). It's undeniably cool tech, but in the moment of a glitzy demonstration people tend to forget that literally everything has compromises and downsides. From the looks of it, the 'nanite' tech can push an incredible number of tris to the GPU efficiently, which is awesome. But it appears to only work on un-animated objects (no bone animations, though I'm sure you could move the objects around like any instanced piece). And unless the artist is slapping procedural noise on everything, or you're doing photogrammetry, the likelihood that anyone outside the AAA of the AAA will dedicate thousands of man-hours to making art THAT detailed everywhere (presuming game disk sizes don't, predictably, balloon yet again) is blindingly low.
Its primary advantage is ultimately being able to drop photogrammetry or special set-piece art straight in and let the art pipeline figure it out, without extra steps like normal map baking.
For the vast majority of art assets and projects, it's not going to offer anything that regular content pipelines don't already, save perhaps a few fewer steps in the creation tools.
All you need to do is look at other games that already used photogrammetry for their environment geometry, like Battlefront 2, or compare AAA character models to their demo character, to see that it really isn't some space magic that 'revolutionizes game art as we know it' so much as a very, very slick art pipeline for special-case art that most development teams won't have the budget to utilize effectively.

Still, it'll be interesting to see the tech trickle into the knowledge-sphere of gamedev as a whole for integration down the line, and see if art tools do anything to make that level of art any easier.
Posts: 1639
Joined: Sun Feb 08, 2015 1:51 am
by Duion » Mon Jun 08, 2020 12:55 pm
Somewhere they say they bring the billions and trillions of source polys down to around 20 million that are actually rendered in the scene, and 20 million is very possible to render: I've already done a scene with 20 million polys in Torque, and some games already push close to that number.
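To get a rough sense of why a 20-million-triangle on-screen budget is plausible on current hardware, here is a back-of-envelope memory estimate. All figures (32-byte vertices, 32-bit indices, the vertex-to-triangle ratio) are illustrative assumptions about a plain indexed mesh, not Nanite's actual compressed format:

```python
def mesh_memory_mb(triangles, bytes_per_vertex=32):
    # In a typical closed triangle mesh, the vertex count is roughly
    # half the triangle count (Euler's formula approximation).
    vertices = triangles // 2
    vertex_bytes = vertices * bytes_per_vertex
    index_bytes = triangles * 3 * 4  # one 32-bit index per corner
    return (vertex_bytes + index_bytes) / (1024 ** 2)

print(round(mesh_memory_mb(20_000_000)))  # ~534 MB, uncompressed
```

Around half a gigabyte of raw geometry is well within a modern GPU's memory, which is why the hard part is producing and streaming that much detail, not rendering it.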

Regarding whether creating assets that detailed is harder, I would say yes and no. No, because a good artist usually starts with a high-poly model and then reduces it for the final game; yes, because creating photoscanned objects requires new hardware and software and more work than before. It can take hundreds of photos to create one photoscanned art asset, and afterwards it has to be optimized by hand again, though in the case of Epic's super tech maybe that step can be skipped. The problem remains that with photoscanned assets you are limited to things that actually exist in the real world, and if you want to scan larger objects you need aircraft to scan them.

The game engine market is a bit of a scam anyway: indies will not be able to create such high quality unless they use the big proprietary engines, together with other proprietary software and pre-built assets that in most cases cost extra money, and even then, big studios can afford far more of those.
