This is an abridged version of the original project's write-up
"Capture human movement over time; capture a 'quiddity' or non-generic quality of the movement."
I’ve been wanting to integrate my background in dance into my art practice for a while now, and the Person in Time project proved to be a perfect opportunity. Originally I was interested in exploring some hidden basic or fundamental movement because, to me, the “quiddity” of these dance movements felt too apparent and obvious. After actually doing the motion capture, however, I saw that the most visually interesting aspect of my recorded motion was not the minute movement details but the circles. I'd forgotten that circular movements are a key component of Chinese dance.
Activity/Situation Captured: circular movements in Chinese Dance moves
Inspiration: This motion capture visualization video that includes some Chinese traditional dance samples, a bunch of SCAD RenderMan animations that use mocap data (notably, these do not create meshes or objects), and Lucio Arese’s Motion Studies
Process:
This project had a two-fold process: performing and interpreting. For the performing part I needed to research, decide on, and rehearse my movements before arranging a motion capture session. I started with some residual knowledge from when I used to do Chinese classical dance, but watching recordings helped me re-familiarize myself with the movements. I decided on four small sections of choreography: “turning (with arms) in place,” a traveling version of turning (with arms), a large leaning movement, and some subtler arm movements.
I was able to arrange a time with Justin Macey to get into the Wean 1 motion capture studio and record all of my dance movements. Justin sent me all of the motion capture data we’d captured in .fbx format and I was able to import all of it directly into Blender. Inside Blender, I wrote and then ran a Python script to extract the location data of each of the bones inside of the animated armatures into text files (thanks to Ashley Kim for helping me here).
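The extraction script itself isn't included in this write-up, but a minimal Blender-Python sketch of the idea might look like the following. It only runs inside Blender's scripting workspace, and the armature name, output path, and row format are placeholders, not the original script:

```python
# Run inside Blender after importing the .fbx.
# Writes one "frame bone x y z" row per bone per frame.
import bpy

scene = bpy.context.scene
armature = bpy.data.objects["Armature"]  # assumed object name from the FBX import

with open("/tmp/bone_locations.txt", "w") as f:
    for frame in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(frame)  # advance the animation to this frame
        for bone in armature.pose.bones:
            # convert the bone head from armature-local to world space
            loc = armature.matrix_world @ bone.head
            f.write(f"{frame} {bone.name} {loc.x} {loc.y} {loc.z}\n")
```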
The next step was to write a script to turn every point into a sphere; this is initially where I stopped.
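Again, the original sphere script isn't shown here, but a rough Blender-Python sketch of the step (the input format, radius, and file path are assumptions carried over from the extraction sketch above) could be:

```python
# Run inside Blender. Reads "frame bone x y z" rows and places a small
# ico sphere at each point. Low subdivisions keep thousands of spheres manageable.
import bpy

with open("/tmp/bone_locations.txt") as f:
    for line in f:
        frame, bone, x, y, z = line.split()
        bpy.ops.mesh.primitive_ico_sphere_add(
            subdivisions=1,  # low-poly sphere (~80 faces)
            radius=0.02,     # assumed scale
            location=(float(x), float(y), float(z)),
        )
```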
The interpreting process as a whole has been open-ended and inconclusive. I originally wanted to copy the SCAD RenderMan method (link: http://www.fundza.com/rfm/ri_mel/mocap/index.html) before realizing that, since it was RenderMan, the effects existed only in the renders. I’d also hoped to use Blender's metaballs because they have a cool sculptural quality, but assigning separate materials to them proved a challenge (they also crashed my entire file once, so I gave up on them).
Even after figuring out a pipeline, it took me a long time to make relatively little progress, and (though I also didn’t really optimize the mesh-creation process) each cloud of 500–3000 balls took around 30 minutes to generate and then finagle into an exportable .obj format. As for the motions’ readability, it was OK: I found the results somewhat cool as abstract forms, though by this point I was already familiar with the movements and shapes. I played around a lot with assigning different colors across frames or body parts, with varying visual success (I was a bit burnt out by all the coding, so I did this by hand in Excel and with this gradient website).
On Nov. 1, during work time in class, while I was trying to figure out how to get Sketchfab to display correctly on WordPress, Golan took a look at the models I had so far and suggested I visualize motion trails instead of just disparate points. He very quickly created a Processing sketch that took every frame of data (I’d been using every 5th or 10th), made lines, and then turned those lines into an exportable mesh.
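Golan's sketch was in Processing, but the core idea, emitting one vertex per frame and connecting consecutive frames with line segments in an .obj file, can be sketched in plain Python. The trail coordinates below are made-up numbers, not real capture data:

```python
def trail_to_obj(points):
    """Turn an ordered list of (x, y, z) samples into OBJ text:
    one 'v' line per point, one 'l' line per consecutive pair.
    OBJ vertex indices are 1-based."""
    lines = [f"v {x} {y} {z}" for x, y, z in points]
    lines += [f"l {i} {i + 1}" for i in range(1, len(points))]
    return "\n".join(lines) + "\n"

# e.g. one bone's position over four frames (made-up numbers)
trail = [(0.0, 1.0, 0.0), (0.1, 1.1, 0.0), (0.2, 1.15, 0.1), (0.3, 1.1, 0.2)]
obj_text = trail_to_obj(trail)
# obj_text starts with "v 0.0 1.0 0.0" and ends with "l 3 4"
```

Blender can import such a file directly as an edge-only mesh, ready for the tube step below.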
After exporting the .obj files, I took the models back into Blender and turned the lines into tubes before exporting the final models you can see below.
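The lines-to-tubes step can be done in Blender's UI, but for completeness, a small Blender-Python sketch of one way to do it (the bevel depth and resolution values are assumed, not the ones used here):

```python
# Run inside Blender with the imported line mesh selected.
# Converting the edge mesh to a curve and giving it a bevel depth
# turns each polyline into a tube.
import bpy

bpy.ops.object.convert(target='CURVE')  # mesh edges -> curve splines
curve = bpy.context.active_object
curve.data.bevel_depth = 0.01       # tube radius (assumed scale)
curve.data.bevel_resolution = 4     # roundness of the tube cross-section
```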