‘Motion sculptures’ 3D-printed from videos

BOSTON: MIT scientists have developed a way to 3D print “motion sculptures” from two-dimensional videos that can help professional athletes and dancers better visualise human body movements.
The system uses an algorithm that can take 2D videos and turn them into 3D motion sculptures that show how a human body moves through space.
In addition to being an intriguing aesthetic visualisation of shape and time, the team envisions that their “MoSculp” system could enable a much more detailed study of motion for professional athletes, dancers, or anyone who wants to improve their physical skills.
“Imagine you have a video of Roger Federer serving a ball in a tennis match, and a video of yourself learning tennis,” said PhD student Xiuming Zhang at Massachusetts Institute of Technology (MIT) in the US.
“You could then build motion sculptures of both scenarios to compare them and more comprehensively study where you need to improve,” said Zhang.
Since motion sculptures are 3D, users can use a computer interface to navigate around the structures and see them from different viewpoints, revealing motion-related information inaccessible from the original viewpoint.
Artists and scientists have long struggled to gain better insight into movement, limited by their own camera lens and what it could provide.
Previous work has mostly used so-called “stroboscopic” photography techniques, which look a lot like the images in a flip book stitched together.
However, since these photos only show snapshots of movement, you wouldn’t be able to see as much of the trajectory of a person’s arm when they’re hitting a golf ball, for example.
These photographs also require laborious pre-shoot setup, such as using a clean background and specialised depth cameras and lighting equipment. All MoSculp needs is a video sequence, researchers said.
Given an input video, the system first automatically detects 2D key points on the subject’s body, such as the hip, knee, and ankle of a ballerina while she’s doing a complex dance sequence.
Then, it takes the best possible poses from those points and turns them into 3D “skeletons.”
After stitching these skeletons together, the system generates a motion sculpture that can be 3D printed, showing the smooth, continuous path of movement traced out by the subject.
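The article does not include MoSculp’s code, but the pipeline it describes can be sketched with off-the-shelf tools. The sketch below is a minimal illustration, not MIT’s implementation: it assumes a hypothetical input clip “serve.mp4” and swaps in MediaPipe for per-frame pose detection and trimesh for mesh export, then sweeps a single joint’s trajectory into a printable tube.

```python
# Illustrative sketch only: detect per-frame pose keypoints, collect one
# joint's 3D trajectory, and sweep it into a printable mesh.
# MediaPipe and trimesh are stand-ins, not the tools used by MoSculp.
import cv2
import mediapipe as mp
import numpy as np
import trimesh

mp_pose = mp.solutions.pose

# Step 1: detect keypoints in each frame of the (hypothetical) input video.
trajectory = []  # 3D positions of the right wrist, one per frame
cap = cv2.VideoCapture("serve.mp4")
with mp_pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_world_landmarks:
            lm = results.pose_world_landmarks.landmark[
                mp_pose.PoseLandmark.RIGHT_WRIST]
            trajectory.append([lm.x, lm.y, lm.z])
cap.release()

# Step 2: "stitch" the per-frame poses into a continuous swept surface by
# joining consecutive joint positions with thin cylinders.
points = np.array(trajectory)
segments = []
for p0, p1 in zip(points[:-1], points[1:]):
    if np.linalg.norm(p1 - p0) > 1e-6:
        segments.append(trimesh.creation.cylinder(radius=0.01,
                                                  segment=[p0, p1]))

# Step 3: export a single mesh that a slicer can turn into a 3D print.
sculpture = trimesh.util.concatenate(segments)
sculpture.export("motion_sculpture.stl")
```

A full system would repeat the sweep for every body part and blend the tubes into one smooth surface; this sketch only traces one joint to make the detect-lift-stitch idea concrete.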
Users can customise their figures to focus on different body parts, assign different materials to distinguish among parts, and even customise lighting.
The system works best for larger movements, like throwing a ball or taking a sweeping leap during a dance sequence.
It also works for situations that might obstruct or complicate movement, such as people wearing loose clothing or carrying objects.
Currently, the system handles only single-person scenarios, but the team hopes to extend it to multiple people soon.
This could open up the potential to study things like social disorders, interpersonal interactions, and team dynamics. (AGENCIES)
