After watching this demo today:
And this one:
I am convinced we no longer need markers to track ourselves. This is what's known as markerless tracking: no longer do we need to attach all those tiny white dots to our faces and bodies.
Near as I can tell from the website:
It uses multiple cameras (to establish perspective) and creates a 3D model just from images. Many sites use this technique; this demo better shows how it's done:
So in essence, the software is smart enough to pick out the edges of an object in each camera view and build a silhouette from them, combining those silhouettes into a 3D shape. At the same time it texture-maps that shape from the images, and once it's done computing, it can track the live object and update the virtual 3D model.
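The silhouette-combining step described above is usually called shape-from-silhouette, or visual-hull carving. Here's a minimal sketch in Python with NumPy, under some big simplifying assumptions of my own: the binary silhouette masks already exist, and the cameras are simple orthographic projections rather than the calibrated perspective cameras a real rig would use. The toy demo carves a sphere from three axis-aligned views:

```python
import numpy as np

def carve_voxels(points, views):
    """Shape-from-silhouette: keep only the voxels whose projection
    lands inside the silhouette mask of *every* camera view."""
    homog = np.c_[points, np.ones(len(points))]          # (N, 4) homogeneous
    keep = np.ones(len(points), dtype=bool)
    for P, mask in views:                                # P is a 2x4 projection
        uv = np.rint(homog @ P.T).astype(int)            # world -> pixel coords
        u = np.clip(uv[:, 0], 0, mask.shape[1] - 1)
        v = np.clip(uv[:, 1], 0, mask.shape[0] - 1)
        keep &= mask[v, u]                               # reject if outside any view
    return keep

# --- toy demo: carve a radius-0.6 sphere from three silhouettes ---
res, n, r = 64, 32, 0.6
k = (res - 1) / 2.0                                      # maps world [-1,1] to pixels

# the sphere's silhouette is the same circle from all three axes
px = np.linspace(-1, 1, res)
U, V = np.meshgrid(px, px)
circle = U**2 + V**2 <= r**2

# orthographic "cameras" looking down the z, x, and y axes
P_z = np.array([[k, 0, 0, k], [0, k, 0, k]])   # (x, y) -> (u, v)
P_x = np.array([[0, k, 0, k], [0, 0, k, k]])   # (y, z) -> (u, v)
P_y = np.array([[k, 0, 0, k], [0, 0, k, k]])   # (x, z) -> (u, v)
views = [(P_z, circle), (P_x, circle), (P_y, circle)]

# voxel grid over the unit cube
c = np.linspace(-1, 1, n)
grid = np.stack(np.meshgrid(c, c, c, indexing="ij"), -1).reshape(-1, 3)

kept = carve_voxels(grid, views)
print(f"kept {kept.sum()} of {len(grid)} voxels")
```

The carved result is only the visual hull, the intersection of the silhouette cones, so it's an outer approximation of the true shape; more camera views tighten it. The texture mapping and live tracking from the demo are separate steps layered on top of this.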
Why do I bring all this up? Well, webcams are cheap, even the HD ones. A decent one can be had for under $100. Even if you had to buy 2, 3, or 4 of them, that's considerably cheaper and less time-consuming than the old way of doing it. I'm not entirely sure if this software is automatic, or if you still have to manually match dots (from your live face to the 3D model), much the same way you used to when creating a morph:
Now, I don't know what sort of computing power this requires to run in realtime, but it's pretty exciting. The idea that I can use just webcams to not only capture a 3D model but also update that model in realtime is pretty amazing. This isn't unlike Project Natal for the Xbox 360: