Assignment 2: Human Animation Screen Saver
**Due Date:** March 5
**Grade Value:** 15%
In this assignment, you'll use computer animation and motion capture (mocap) data to make a screen saver of an animated human walking around indefinitely. The idea is to generate a computer animation by connecting different pieces of motion. We'll provide you with a number of segments of mocap data as your raw material, and you'll build a graph data structure to represent the connections between different pieces of these segments. Each frame in the motion can be treated as a node, and an edge is created between two nodes if the character's poses in the two frames are close enough. By randomly traversing the graph, you can generate a continuously playing animation whose content is randomly chosen from the original motion clips.
This assignment re-implements a paper called Video Textures using motion capture data instead of video data. Given a single video clip, that paper generates an infinite amount of similar-looking video. The general approach is to find places in the original video clip where a transition can be made to some other place in the clip without introducing noticeable discontinuities (“good” transitions). The Video Textures paper talks about finding "good" transitions within a single video clip. In this assignment you are given a number of different motion data segments of a human walking; you should look for "good" transitions within each individual segment as well as for “good” transitions between different motion segments.
You will first need to define a “good” transition by designing an evaluation function that determines how similar the two poses you want to transition between are. You will also need to design a fading algorithm to make these transitions look smooth. To represent these transitions you will need to build a graph data structure.
Requirements and Implementation
- Define an evaluation function to determine how similar two poses are, i.e., a distance that measures the difference between two poses. Because a character's pose is described by all of his/her joint angles plus the position and orientation of his/her root node, a simple way to compare two poses is to sum the differences between corresponding joint angles together with the difference in the root position and orientation. Of course, this would not be a good pose metric, since every joint gets the same weight, whereas some joints are probably more important than others in choosing a “good” transition in the motion. You should experiment with different weightings to get a good metric.
To generate a large number of transitions, you should allow transitions between frames that have very different character orientations (looking directions) and root positions, as long as they have similar joint angles (see Fig. 1). To preserve C0 continuity of the root position in the generated motion, you will need to translate the second segment (the segment you are about to use) by the difference between the two transition frames in the XZ plane (see Fig. 2a). To preserve C0 continuity of the character orientation (looking direction), you will also need to rotate the second motion (see Fig. 2b).
You can also include velocity or other information related to the continuity of the motion in the metric, and you can take into account the frames before and after the transition. Experiment with several different options and find a metric that gives you a good measure of the perceptible difference between the motion in the two frames (a sketch of one such metric appears after Fig. 2).
Two notes:
If you decide to compute the character's looking direction, be careful how you do it. First of all, the character may not always look in the direction he walks (for backward or sideways walking). Also, in the .amc file the orientation of the root node is given as fixed angles in XYZ order (or you can think of it as Euler angles in ZYX order). Since the order of rotations is X-Y-Z, you cannot read the looking direction directly from the Y value. One way to do it is to convert to Y-X-Z order, for example (so that Y comes first). Alternatively, you could rotate the default looking direction (toward the positive Z axis) by the root node orientation at a given frame to get the looking direction for that frame (a sketch of this second approach follows these notes).
If you are using Euler angles to represent rotations, you may want to exclude the root orientation from your evaluation function completely, since the same orientation is often represented by very different rotations about the X, Y, and Z axes (this is especially noticeable for the root node, whose orientation varies greatly). For this reason quaternions may be a better rotation representation.
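As an illustration of the second approach in the first note, here is a minimal sketch of deriving the looking direction by rotating the default facing direction (+Z) by the root orientation. The Vec3 type and function names are assumptions made for this sketch, not part of the provided skeleton; the root angles are assumed to be the three .amc root rotation values in degrees, applied as fixed-axis rotations in X, Y, Z order.

```cpp
// Sketch only: compute a heading (yaw) for one frame from the root
// orientation angles stored in the .amc file.  rx, ry, rz are in degrees
// and are treated as fixed-axis rotations applied in X, Y, Z order,
// i.e. R = Rz * Ry * Rx.  We rotate the default facing direction (+Z)
// by R and measure its direction in the XZ plane.
#include <cmath>

struct Vec3 { double x, y, z; };

const double kDeg2Rad = 3.14159265358979323846 / 180.0;

// Apply R = Rz(rz) * Ry(ry) * Rx(rx) to a vector.
Vec3 rotateXYZFixed(double rx, double ry, double rz, Vec3 v) {
    rx *= kDeg2Rad; ry *= kDeg2Rad; rz *= kDeg2Rad;
    // rotate about X
    Vec3 a = { v.x,
               std::cos(rx) * v.y - std::sin(rx) * v.z,
               std::sin(rx) * v.y + std::cos(rx) * v.z };
    // rotate about Y
    Vec3 b = { std::cos(ry) * a.x + std::sin(ry) * a.z,
               a.y,
              -std::sin(ry) * a.x + std::cos(ry) * a.z };
    // rotate about Z
    return { std::cos(rz) * b.x - std::sin(rz) * b.y,
             std::sin(rz) * b.x + std::cos(rz) * b.y,
             b.z };
}

// Yaw of the character's facing direction; 0 means facing +Z.
double lookingDirectionYaw(double rx, double ry, double rz) {
    Vec3 forward = rotateXYZFixed(rx, ry, rz, Vec3{0.0, 0.0, 1.0});
    return std::atan2(forward.x, forward.z);   // radians
}
```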
Fig. 1. Blue and yellow lines represent the trajectory of the root node projected onto the XZ plane. Because we give zero weight to the X and Z components of the root position and to the yaw orientation of the root, the transition shown in the figure will be declared “good” if all the other joint angles are similar.
Fig. 2. (a) Translate the second motion segment to preserve C0 continuity of the root position. (b) Rotate the second motion to preserve C0 continuity of the root orientation.
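To make the first requirement concrete, below is a minimal sketch of one possible weighted pose metric and of the rigid alignment from Fig. 2. The Pose and Frame structs, the joint layout, and the weight vector are assumptions made for illustration, not the skeleton's actual classes, and the weights themselves are exactly what you are asked to experiment with.

```cpp
// Sketch only: a weighted pose-distance metric that ignores the root's X/Z
// position and yaw (so transitions between different places/headings can
// still be "good"), plus the rigid alignment of the second segment from
// Fig. 2.  All types and names here are illustrative assumptions.
#include <cmath>
#include <vector>

// Smallest signed difference between two angles, in degrees.
double angleDiff(double a, double b) {
    double d = std::fmod(a - b, 360.0);
    if (d >  180.0) d -= 360.0;
    if (d < -180.0) d += 360.0;
    return d;
}

struct Pose {
    std::vector<double> angles;  // all joint angles, in degrees
    double rootY;                // vertical root position (X and Z are ignored)
};

// Weighted sum of squared joint-angle differences.  w[i] is the importance
// of joint angle i (e.g. hips and knees weighted higher than wrists).
double poseDistance(const Pose& p, const Pose& q, const std::vector<double>& w) {
    double d = 0.0;
    for (size_t i = 0; i < p.angles.size(); ++i) {
        double e = angleDiff(p.angles[i], q.angles[i]);
        d += w[i] * e * e;
    }
    double dy = p.rootY - q.rootY;   // keep the vertical root component
    return std::sqrt(d + dy * dy);
}

struct Frame {
    double rootX, rootY, rootZ;      // root position
    double yaw;                      // looking direction, radians (see the note above)
    // ... joint angles would live here as well ...
};

// Rigidly align the second segment so its first frame lines up with the
// frame we are transitioning from: rotate about the world Y axis by the yaw
// difference, then translate in the XZ plane (Fig. 2).  The root orientation
// angles of each frame must be adjusted by the same yaw offset as well.
void alignSegment(std::vector<Frame>& seg, const Frame& from) {
    if (seg.empty()) return;
    double dYaw = from.yaw - seg[0].yaw;
    double c = std::cos(dYaw), s = std::sin(dYaw);
    double px = seg[0].rootX, pz = seg[0].rootZ;
    for (Frame& f : seg) {
        double x = f.rootX - px, z = f.rootZ - pz;   // rotate about the first frame's root
        f.rootX = from.rootX + c * x + s * z;
        f.rootZ = from.rootZ - s * x + c * z;
        f.yaw  += dYaw;
    }
}
```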
- Design a fading algorithm to make the transition look smooth. Although the distance function helps you find a close pose, the difference between two poses will usually be noticeable and will cause the transition to look jerky. You can reduce the jerkiness by blending the ending frames of the first motion with the starting frames of the second motion. Alternatively, you can generate several additional transitional frames by interpolating between the ending frames of the segment before the transition and the starting frames of the segment after the transition. This is much like what you did for Assignment 1.
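Here is a minimal sketch of the cross-fade idea, assuming each frame is just a flat vector of joint angles in degrees (the root position, already aligned as above, would be blended the same way). Blending Euler angles directly only works when the two poses are already close; quaternion slerp is a more robust choice.

```cpp
// Sketch only: cross-fade the last frames of the outgoing segment with the
// first frames of the (already aligned) incoming segment.  A frame here is
// simply a flat vector of joint angles in degrees; this is an illustrative
// assumption, not the skeleton's representation.
#include <algorithm>
#include <cmath>
#include <vector>

using JointAngles = std::vector<double>;

// Shortest signed angular difference, in degrees.
double shortestDiff(double from, double to) {
    double d = std::fmod(to - from, 360.0);
    if (d >  180.0) d -= 360.0;
    if (d < -180.0) d += 360.0;
    return d;
}

// Linear blend of two poses; t = 0 gives a, t = 1 gives b.
JointAngles blendPoses(const JointAngles& a, const JointAngles& b, double t) {
    JointAngles out(a.size());
    for (size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] + t * shortestDiff(a[i], b[i]);
    return out;
}

// Replace the overlap region with a fade: t ramps from 0 to 1 so the motion
// eases out of segment A and into segment B over the transition.
std::vector<JointAngles> crossFade(const std::vector<JointAngles>& tailA,
                                   const std::vector<JointAngles>& headB) {
    size_t n = std::min(tailA.size(), headB.size());
    std::vector<JointAngles> out;
    for (size_t i = 0; i < n; ++i) {
        double t = (n > 1) ? double(i) / double(n - 1) : 1.0;
        out.push_back(blendPoses(tailA[i], headB[i], t));
    }
    return out;
}
```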
- Construct a directed graph to represent the transitions. Using the evaluation function, you can calculate the distance between a pair of nodes (poses) and determine whether an edge should be created between those two nodes. Because each mocap file contains many frames, it would be quite time consuming to calculate the distance for every pair of frames. You may want to downsample the data (allow transitions only every few frames). You may want to cull transitions quickly: if a major joint such as the hip is not close, it is probably not worth checking the rest of the joints for similarity. You will also probably want to ignore links that would cause the motion to jump just a few frames ahead of or behind the current frame. You may also want to prune transitions to keep only local maxima of the transition probability, or set all probabilities below some threshold to 0 (see the Video Textures paper for more details). You should also think about other approaches that would save computational time in this step and result in a better transition graph.
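Below is a minimal sketch of the edge-building step, reusing the Pose and poseDistance names from the metric sketch above (both are assumptions, not the provided code). It downsamples with a fixed step, skips transitions that would jump only a few frames within the same clip, and keeps edges whose distance falls below a threshold; the hip-based early-out and local-maxima pruning mentioned above would slot in where the comments indicate.

```cpp
// Sketch only: enumerate candidate transitions between all pairs of clips.
// An edge (fromClip, fromFrame) -> (toClip, toFrame) means "after playing
// fromFrame we may jump to toFrame".  Pose and poseDistance are the
// illustrative types from the metric sketch above.
#include <vector>

struct Transition {
    int fromClip, fromFrame;
    int toClip, toFrame;
    double cost;
};

std::vector<Transition> buildTransitions(
        const std::vector<std::vector<Pose>>& clips,   // all mocap segments
        const std::vector<double>& weights,            // per-angle weights
        double threshold,                              // max distance for a "good" edge
        int step,                                      // test only every `step` frames
        int minJump)                                   // skip tiny jumps within a clip
{
    std::vector<Transition> edges;
    for (int i = 0; i < (int)clips.size(); ++i)
        for (int fi = 0; fi < (int)clips[i].size(); fi += step)
            for (int j = 0; j < (int)clips.size(); ++j)
                for (int fj = 0; fj < (int)clips[j].size(); fj += step) {
                    int jump = fj - fi;
                    if (i == j && jump > -minJump && jump < minJump)
                        continue;   // would barely move: not a useful transition
                    // A cheap early-out (e.g. compare only the hip angles first)
                    // would go here before the full metric is evaluated.
                    double d = poseDistance(clips[i][fi], clips[j][fj], weights);
                    if (d < threshold)
                        edges.push_back({i, fi, j, fj, d});
                    // Optionally prune afterwards, keeping only edges that are
                    // local maxima of the transition probability (Video Textures).
                }
    return edges;
}
```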
- Test on two databases of motions. We provide you with two databases of motion capture files. The first database contains motion data of a human walking forward (6 files total). The second database contains motion data of a human walking in different styles (forward, backward, sideways, …). We recommend you first test your code on the forward database; it should be easier to get your code to work on this database, since you will probably be able to get away with a simpler metric. Then test your code on the second (larger) database.
Hand in
- Your source code.
- Three movies demonstrating your results (each 500 frames in length; each movie should contain at least 15 transitions, and the transitions should vary). For each movie, provide a transition table that shows all the transitions in that movie (see Table 1 for an example of a transition table) and a .amc file (you can use the writeAMCfile function in the Motion class to write a .amc file for a particular motion).
- Movie #1 and Movie #2. Use the forward-motion database to create these movies. For the first movie, allow only a small number of transitions (the best transitions). For the second movie, allow more transitions to happen.
- Movie #3. Use the database with human walking in different styles to create this movie.
- Report. Turn in a writeup of what you did. Tell us what algorithms you tried for each of the points above. Tell us what worked, what didn't work, and what you would have tried given more time.
Table 1. Transitions taken while creating a new movie. Current Frame: the frame number in the new movie at which the transition happened. Transition From: the number of the motion capture file and the frame number within that file of the pose before the transition. Transition To: the number of the motion capture file and the frame number within that file of the pose after the transition.
Grading:
- Report: 25%
- If your code works on the forward database: 60%
- If your code works on the database with walking in different styles: 15%
Given To You
- The same Acclaim file reader/displayer as for the first homework.
- Motion capture data for human walking. Two databases: forward walking and walking in different styles (forward, backward, sideways, ...).
- Note that the source files are at /afs/cs/user/kiranb/www/animation/src and the motion files are at /afs/cs.cmu.edu/user/alla/www/animation/fwd_walk and /afs/cs.cmu.edu/user/alla/www/animation/all_walk.
Some Ideas for Extra Credit
- Set a destination for the character, and plan a shortest path to reach the destination with smooth motion transitions. You can use weighted edges to record the distance between two nodes. When traversing from one node to another, the accumulated cost along the path measures how much effort is required to generate a smooth transition as well as the distance traveled by the character between nodes. Use a breadth-first (or uniform-cost) search to find a minimum-cost path given these two error metrics.
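With weighted edges, a uniform-cost search (Dijkstra's algorithm) is the natural way to realize the minimum-cost path suggested above. Here is a minimal sketch over an adjacency-list graph; the Edge struct, the integer node numbering, and the function name are assumptions made for illustration.

```cpp
// Sketch only: Dijkstra-style minimum-cost path over the transition graph.
// Nodes are integer ids; each edge carries the combined cost (transition
// smoothness plus distance traveled between nodes).
#include <algorithm>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { int to; double cost; };

std::vector<int> minimumCostPath(const std::vector<std::vector<Edge>>& graph,
                                 int start, int goal) {
    const double kInf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(graph.size(), kInf);
    std::vector<int> prev(graph.size(), -1);
    using Item = std::pair<double, int>;               // (cost so far, node)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        Item top = pq.top(); pq.pop();
        double d = top.first;
        int u = top.second;
        if (u == goal) break;
        if (d > dist[u]) continue;                     // stale queue entry
        for (const Edge& e : graph[u]) {
            if (d + e.cost < dist[e.to]) {
                dist[e.to] = d + e.cost;
                prev[e.to] = u;
                pq.push({dist[e.to], e.to});
            }
        }
    }
    std::vector<int> path;
    if (dist[goal] == kInf) return path;               // unreachable: empty path
    for (int v = goal; v != -1; v = prev[v]) path.push_back(v);
    std::reverse(path.begin(), path.end());
    return path;
}
```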
Some references
- This assignment re-implements a paper called Video Textures using motion capture data instead of video data. Although the data are different, you might find some good ideas in the original paper: http://www.cc.gatech.edu/cpl/projects/videotexture/