I've got my own impressions of how they're doing the tech (some of which the more nerdy technical-analysis people online, such as Digital Foundry, look to generally align with). It's undeniably cool tech, but people tend to forget in the moment of a glitzy demonstration that literally everything has compromises and downsides. From the looks, the 'nanite' tech can push an incredible amount of tris to the GPU efficiently, which is awesome. But it looks to only work on un-animated objects (as in no bone animations; I'm sure you could move the objects around like any instanced piece). And unless the artist is slapping procedural noise on everything, or you're doing photogrammetry, the likelihood that anyone but the most AAA of AAA studios will dedicate thousands of man-hours to making art THAT detailed everywhere (presuming game disk sizes don't, predictably, balloon yet again as well) is blindingly low.
Its primary advantage is ultimately dropping photogrammetry or special set-piece art in and just having the art pipeline figure it out, without extra steps like normal map baking.
For the huge majority of art assets and projects, it's not going to offer anything regular content pipelines don't already, beyond trimming a few steps from the creation tools.
All you need to do is look at other games that already used photogrammetry for their environment geometry, like Battlefront 2, or compare AAA character models to their demo character. It really isn't some space-magic 'this revolutionizes game art as we know it' moment so much as a very, very slick art pipeline for special-case art that most development teams won't have the budget to really utilize effectively.
Still, it'll be interesting to watch the tech trickle into the knowledge-sphere of gamedev as a whole for integration down the line, and to see whether art tools do anything to make that level of art any easier.