Morph targets give you finer control over vertex positions - for example, if you wanted to animate a face or a ball deforming, you'd find that very difficult to do with skinning methods alone.
In fact, mesh-animation techniques like morphing are some of the earliest approaches - because they're usually much easier to understand and implement. For instance, the file format used in Quake 2 for animated characters (the .md2 format) stores the mesh as multiple complete copies in the file. The animation was created by interpolating between the copies of the mesh!
The .md2 file format was fine for simple fixed animations - but if you wanted to mix a walk with a jump animation, you couldn't - because it stored the whole mesh (an identical copy) for every keyframe.
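To make the keyframe idea concrete, here is a minimal sketch of that interpolation, assuming two complete copies of the mesh and a blend factor t between 0 and 1 - the names are illustrative, not taken from the .md2 specification:

```typescript
type Vec3 = [number, number, number];

// MD2-style playback: each animation frame is a full copy of the mesh,
// and the in-between poses are produced by linearly interpolating each
// vertex between two stored frames: result = a + t * (b - a).
function lerpMesh(frameA: Vec3[], frameB: Vec3[], t: number): Vec3[] {
  // Both frames must have the same vertex count.
  return frameA.map((a, i) => {
    const b = frameB[i];
    return [
      a[0] + t * (b[0] - a[0]),
      a[1] + t * (b[1] - a[1]),
      a[2] + t * (b[2] - a[2]),
    ];
  });
}
```

Because each frame is a complete mesh, you can only ever blend along one animation at a time - which is exactly the limitation described above.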
Another way of doing animation with morph targets is to store the difference between the original mesh and the desired mesh. One file format that does this is glTF (a popular open-source 3D file format).
The glTF file stores what are known as morph targets. These morph targets have the same number of vertices as the original mesh - however, they only store the differences (per-vertex offsets) from the original mesh's vertex positions.
The benefit of storing differences is that multiple morph targets can be added to the original mesh vertices to create the final deformed mesh. For example, imagine you have separate morph targets for the eyes, nose and mouth - each can be weighted independently and combined with the original mesh to create the final animation.
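A minimal CPU-side sketch of that blend, assuming delta-style morph targets and one weight per target (this is reference math to show the idea, not an engine implementation):

```typescript
type Vec3 = [number, number, number];

// Each morph target stores per-vertex offsets (deltas) from the base mesh.
// The deformed position of vertex v is:
//   final[v] = base[v] + sum over targets t of (weight[t] * delta[t][v])
function applyMorphTargets(
  base: Vec3[],
  targets: Vec3[][],   // targets[t][v] is the delta for vertex v in target t
  weights: number[],   // one blend weight per morph target
): Vec3[] {
  return base.map((v, vi) => {
    const out: Vec3 = [v[0], v[1], v[2]];
    targets.forEach((delta, ti) => {
      const w = weights[ti];
      out[0] += w * delta[vi][0];
      out[1] += w * delta[vi][1];
      out[2] += w * delta[vi][2];
    });
    return out;
  });
}
```

Note how the eyes, nose and mouth targets can all contribute at once - each delta is scaled by its own weight and summed, which is exactly what the full-mesh-copy approach could not do.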
When you implement morph targets on the GPU, you don't want to loop over all the vertices on the CPU - it can be really slow if you've got hundreds of thousands of vertices and you're doing it 60 times a second.
Instead you copy them over to the GPU in a buffer, then index into them as needed. As you might have several different morph targets, you don't want to create dozens of buffers - instead you can pack them all into a single buffer and keep track of the offsets.
Each morph target has the same number of vertices as the original mesh (so if the original mesh has 1,000 vertices, then each morph target also has 1,000 vertices). With 10 morph targets, that's 10,000 morph vertices in total.
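A small sketch of that single-buffer layout, assuming the targets are packed back to back (target 0's vertices first, then target 1's, and so on) with 3 floats per vertex - the helper names here are hypothetical:

```typescript
// Index of vertex v within morph target t, in a flat packed layout:
// all of target 0's vertices, then all of target 1's, etc.
function morphOffset(targetIndex: number, vertexIndex: number, numVertices: number): number {
  return targetIndex * numVertices + vertexIndex;
}

// Pack all morph target deltas into one flat Float32Array (x, y, z per vertex),
// ready to be uploaded to the GPU as a single buffer.
function packMorphTargets(targets: number[][][], numVertices: number): Float32Array {
  const buf = new Float32Array(targets.length * numVertices * 3);
  targets.forEach((target, t) => {
    target.forEach((v, i) => {
      const base = morphOffset(t, i, numVertices) * 3;
      buf[base + 0] = v[0];
      buf[base + 1] = v[1];
      buf[base + 2] = v[2];
    });
  });
  return buf;
}
```

The same `targetIndex * numVertices + vertexIndex` arithmetic is what the shader below uses to find each vertex's delta inside the packed buffer.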
Remember - the morph targets need to be mixed with the original mesh and cannot be used on their own.
The following is an example of a WGSL vertex shader snippet, which combines the morph targets with the original mesh to produce the final vertex position.
let numMorphVertices : u32 = morphuniforms.numVertices;
// Assumes the uniform struct also carries the target count alongside numVertices.
let numMorphTargets : u32 = morphuniforms.numTargets;
var morphPosition : vec3<f32> = input.position.xyz;

// Apply morph targets.
for (var i : u32 = 0u; i < numMorphTargets; i = i + 1u) {
    // The builtin 'vertexIndex' gives us the index of the vertex in the
    // original mesh - we use it to offset into the packed morph target data.
    let offset = i * numMorphVertices + vertexIndex;

    // The weights tell us how much of each morph target to mix in - remember,
    // multiple morph targets can be combined with the original mesh for
    // complex morph animations.
    morphPosition += morphuniforms.weights[i].x * morphtargets[offset].xyz;
}