The biggest issue with morph targets is that lots of art pipelines don't support them at all anymore.
Assimp doesn't support them, for example.
Torque had some nominal support for them, but I haven't tried it myself.
Most current games I've looked into for this sort of thing seem to have largely shifted to bones, using bone scaling and clever rigging to change the character shape. For example, The Sims 3 and 4 do this.
For a basic breakdown of how both approaches work:
For morph targets, you would create a series of shape keys, or separate models with the adjustments you want (such as, say, a bigger nose). You'd export these, load them into the engine, and store the vert positions as a separate type of animation data rather than bones: each target stores where its verts sit in the fully modified shape.
When you go to blend in a shape key animation, you handle it separately from bone animations: a 0-1 weight dictates how far between 'unchanged' and 'totally changed' you want that shape to be. You do that for each of the morph targets and combine the results to get the final blended positions.
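To make the blending step concrete, here's a minimal CPU-side sketch. The function name and data layout are hypothetical, not from any particular engine: the base mesh and each target are just parallel lists of vertex positions, and each target's delta from the base is scaled by its 0-1 weight and summed onto the base.

```python
def blend_morph_targets(base, targets, weights):
    """Blend each vertex between 'unchanged' (base) and each target's
    'totally changed' position, using a 0-1 weight per target.
    Deltas from all targets are summed onto the base positions."""
    blended = [list(v) for v in base]  # copy base positions
    for target, w in zip(targets, weights):
        for i, (bv, tv) in enumerate(zip(base, target)):
            for axis in range(3):
                blended[i][axis] += w * (tv[axis] - bv[axis])
    return blended

# Two verts; one "bigger nose" target pushes vert 0 forward on z.
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
nose_bigger = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]
result = blend_morph_targets(base, [nose_bigger], [0.5])
print(result)  # vert 0 halfway to the target: [[0.0, 0.0, 0.5], [1.0, 0.0, 0.0]]
```

With several targets active at once, each contributes its own weighted delta, which is why the deltas are accumulated rather than each target overwriting the last.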
I think the main reason this has been phased out is the shift to instanced meshes. Instancing requires the verts to be in the same positions across instances, so applying morph targets on TOP of hardware skinning means you'd not only be passing up the transforms for all the bones affecting the model, but also per-vertex offset data for each morph target. You can chew up a good bit of bandwidth hauling all that data around, and it also complicates the shaders some.
Not impossible, but less than ideal, which again may be why it's less common now.
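To illustrate the data cost, here's a rough sketch of what the per-vertex math looks like when morphs sit on top of linear blend skinning. The function names are hypothetical and the transforms are simplified to a 3x3 rotation/scale plus translation (real skinning uses 4x4 matrices); the point is that besides bone matrices and weights, each vertex now also needs morph deltas and weights fed to the shader.

```python
def skin_vertex(pos, bone_mats, bone_weights):
    """Linear blend skinning: weighted sum of bone-transformed positions."""
    out = [0.0, 0.0, 0.0]
    for (rot_scale, trans), bw in zip(bone_mats, bone_weights):
        for r in range(3):
            v = trans[r]
            for c in range(3):
                v += rot_scale[r][c] * pos[c]
            out[r] += bw * v
    return out

def morph_then_skin(pos, morph_deltas, morph_weights, bone_mats, bone_weights):
    """Apply morph deltas in the rest pose first, then skin the result.
    The extra inputs (morph_deltas, morph_weights) are exactly the data
    that would have to travel to the GPU on top of the bone transforms."""
    morphed = list(pos)
    for delta, w in zip(morph_deltas, morph_weights):
        for axis in range(3):
            morphed[axis] += w * delta[axis]
    return skin_vertex(morphed, bone_mats, bone_weights)

identity = ([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0.0, 0.0, 0.0])
# One identity bone, one morph pushing the vert +1 on z at full weight.
print(morph_then_skin([0.0, 0.0, 0.0], [[0.0, 0.0, 1.0]], [1.0],
                      [identity], [1.0]))  # [0.0, 0.0, 1.0]
```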
The other approach, which seems to be the main way these days, is using bone positions and scaling. This method doesn't require any special pipeline handling, and doesn't change anything about the rendering/shaders in regards to instancing/hardware skinning, which is most likely why it's favored.
The basic idea with this is that you would, when rigging your character as normal, add in additional bones that are children of your regular rig bones. Various parts of the model are skinned to those bones, so they animate normally, but when you adjust their position, or their scale, it influences the look of the geometry of the model.
For example, you could have a bone in the nose, and if you move it up or down via a simple offset value (or a blended animation), this moves the nose up and down on the face. If you scale the bone, the nose gets bigger or smaller, etc.
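A minimal sketch of that nose-bone idea, with hypothetical names and the skinning reduced to a single rigid helper bone: offsetting the bone slides the verts skinned to it, and scaling the bone grows or shrinks them around the bone's head.

```python
def apply_bone(verts, bone_head, offset, scale):
    """Reshape verts fully skinned to one helper bone.
    Offsetting the bone slides the feature; scaling it grows/shrinks it
    around the bone's head position."""
    out = []
    for v in verts:
        local = [v[i] - bone_head[i] for i in range(3)]   # into bone space
        local = [c * scale for c in local]                 # apply bone scale
        out.append([bone_head[i] + local[i] + offset[i] for i in range(3)])
    return out

nose_verts = [[0.0, 1.0, 0.1], [0.0, 1.0, -0.1]]
# Scale the nose bone 2x and nudge it up by 0.05 on y.
print(apply_bone(nose_verts, bone_head=[0.0, 1.0, 0.0],
                 offset=[0.0, 0.05, 0.0], scale=2.0))
```

Because this is just ordinary bone data, the same skinning path the renderer already uses handles it; the customization values only change the bone transforms fed in, not the mesh or the shaders.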
I don't know at a glance what Mixamo Fuse uses. Since it doesn't need to optimize for realtime rendering while you're editing the character, it's possible they went with morphs; but if they did end up using the bones approach, you can check whether those bones are exported along with the model. If they are, you should be able to fairly readily utilize them to get similar customization.
If you're rolling your own character from scratch, you'd naturally need to add those bones yourself. Once the bones exist and are rigged to the mesh though, adjusting them via blended anims or code is fairly straightforward.