Real-Time Rendering / Motion
Shown below are results from several projects completed for the computer graphics courses CSCE 441 and VIZA 672. All implementations were written in C++.
Polygon scan conversion and clipping
Active edge table and active edge list data structures were implemented for interactive polygon scan conversion, following the class lecture. Polygon vertices are placed with mouse clicks; the resulting polygons can then be clipped against a rectangular clipping window drawn by clicking and dragging with the mouse.
Initial set of polygons
(drawn through mouse clicks)
Clipped polygons
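The scanline fill can be sketched roughly as follows. The edge bucketing and span pairing mirror the edge table / active edge list idea from the lecture, but the names and details here are illustrative, not the original class implementation:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

// Illustrative sketch of scanline polygon fill with an edge table and an
// active edge list (not the original class implementation).
struct Edge {
    int yMax;     // scanline at which this edge is retired
    double x;     // x intersection at the current scanline
    double dxdy;  // inverse slope: x increment per scanline step
};

struct Point { int x, y; };

// Returns filled spans as {xStart, xEnd, y}.
std::vector<std::array<int, 3>> scanFill(const std::vector<Point>& poly) {
    int yMin = poly[0].y, yMax = poly[0].y;
    for (const Point& p : poly) {
        yMin = std::min(yMin, p.y);
        yMax = std::max(yMax, p.y);
    }

    // Edge table: edges bucketed by the y of their lower endpoint.
    std::vector<std::vector<Edge>> table(yMax - yMin + 1);
    for (size_t i = 0; i < poly.size(); ++i) {
        Point a = poly[i], b = poly[(i + 1) % poly.size()];
        if (a.y == b.y) continue;        // horizontal edges are skipped
        if (a.y > b.y) std::swap(a, b);  // a becomes the lower endpoint
        table[a.y - yMin].push_back(
            {b.y, (double)a.x, (double)(b.x - a.x) / (b.y - a.y)});
    }

    std::vector<std::array<int, 3>> spans;
    std::vector<Edge> active;  // active edge list
    for (int y = yMin; y < yMax; ++y) {
        for (const Edge& e : table[y - yMin]) active.push_back(e);
        active.erase(std::remove_if(active.begin(), active.end(),
                                    [y](const Edge& e) { return e.yMax <= y; }),
                     active.end());
        std::sort(active.begin(), active.end(),
                  [](const Edge& l, const Edge& r) { return l.x < r.x; });
        for (size_t i = 0; i + 1 < active.size(); i += 2)  // pair up crossings
            spans.push_back({(int)std::ceil(active[i].x),
                             (int)std::floor(active[i + 1].x), y});
        for (Edge& e : active) e.x += e.dxdy;  // incremental x update
    }
    return spans;
}
```

The incremental `x += dxdy` update is what makes the active edge list attractive for interactive use: no per-pixel intersection tests are needed.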
Cook-Torrance - Physically based rendering
(spectral ray-tracer)
Light source (Solar 'Global tilt'): http://rredc.nrel.gov/solar/spectra/am1.5/ASTMG173/ASTMG173.html
XYZ Values (CIE 1931 2-deg): http://www.cvrl.org/cmfs.htm
Refractive indices and extinction coefficients: https://refractiveindex.info/
Copper (three different surface roughness values)
Titanium
Iron
Brass
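As a sketch of the specular term such a renderer evaluates per spectral sample, the Cook-Torrance model at a single wavelength might look like the following. The conductor Fresnel approximation, Beckmann roughness parameter m, and the example indices are assumptions for illustration; the project's actual n and k values came from refractiveindex.info:

```cpp
#include <algorithm>
#include <cmath>

const double kPi = 3.14159265358979323846;

// Approximate unpolarized Fresnel reflectance of a conductor with
// refractive index n and extinction coefficient k (per wavelength).
double fresnelConductor(double cosT, double n, double k) {
    double c2 = cosT * cosT;
    double n2k2 = n * n + k * k;
    double rPar  = (n2k2 * c2 - 2 * n * cosT + 1) / (n2k2 * c2 + 2 * n * cosT + 1);
    double rPerp = (n2k2 - 2 * n * cosT + c2) / (n2k2 + 2 * n * cosT + c2);
    return 0.5 * (rPar + rPerp);
}

// Beckmann microfacet distribution with RMS slope (roughness) m.
double beckmannD(double cosH, double m) {
    double c2 = cosH * cosH;
    double tan2 = (1.0 - c2) / c2;
    return std::exp(-tan2 / (m * m)) / (kPi * m * m * c2 * c2);
}

// Cook-Torrance geometric attenuation (masking/shadowing).
double geomAtten(double nh, double nv, double nl, double vh) {
    return std::min(1.0, std::min(2 * nh * nv / vh, 2 * nh * nl / vh));
}

// Specular BRDF value at one wavelength. nh, nv, nl, vh are the usual
// cosines among normal, half-vector, view, and light directions.
double cookTorrance(double nh, double nv, double nl, double vh,
                    double m, double n, double k) {
    return fresnelConductor(vh, n, k) * beckmannD(nh, m) *
           geomAtten(nh, nv, nl, vh) / (4.0 * nv * nl);
}
```

Evaluating this per spectral sample against the ASTM G173 illuminant and then integrating against the CIE 1931 color matching functions is one way the listed data sources fit together; varying m alone reproduces the three copper roughness variants shown.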
Mipmaps
(5 levels)
Perspective view of the textured surface
Source texture
(mipmaps generated using a triangular filter with a circular footprint)
Perspective view of the textured surface
Source texture
(mipmaps generated using a box filter with a circular footprint)
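A minimal sketch of building such a pyramid, using a plain 2x2 box filter with a square footprint for brevity (the circular-footprint box and triangular filters above additionally weight samples by their distance from the texel center):

```cpp
#include <utility>
#include <vector>

// Illustrative grayscale image; a real texture would carry RGB channels.
struct Image {
    int w, h;
    std::vector<float> px;
    float at(int x, int y) const { return px[y * w + x]; }
};

// Halve the resolution by averaging each 2x2 block (box filter).
Image downsample(const Image& src) {
    Image dst{src.w / 2, src.h / 2, {}};
    dst.px.resize(dst.w * dst.h);
    for (int y = 0; y < dst.h; ++y)
        for (int x = 0; x < dst.w; ++x)
            dst.px[y * dst.w + x] =
                0.25f * (src.at(2 * x, 2 * y) + src.at(2 * x + 1, 2 * y) +
                         src.at(2 * x, 2 * y + 1) + src.at(2 * x + 1, 2 * y + 1));
    return dst;
}

// Build a pyramid with the requested number of levels (5 in this project).
std::vector<Image> buildMipmaps(Image base, int levels) {
    std::vector<Image> pyramid;
    pyramid.push_back(std::move(base));
    for (int i = 1; i < levels; ++i)
        pyramid.push_back(downsample(pyramid.back()));
    return pyramid;
}
```

At render time the perspective view selects a level from this pyramid based on the texel-to-pixel footprint, which is what removes the aliasing visible when sampling the source texture directly.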
3D Hierarchical model of an insect
For this project, I modeled an insect using only the shapes provided by GLUT. The entire model was built by hierarchically arranging the shapes and their transformations (scaling, translation, rotation) in the display loop. The insect resembles an ant, with six legs (two segments each) and a pair of antennae.
Using this model, basic insect motion was scripted and driven interactively through keyboard input. Antenna and leg motion was implemented with rotational transformations pivoted at the tips of the segments, with the rotations clamped to fixed boundary values. Translation is applied to a root shape, and all dependent shapes are translated along with it.
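The clamped joint rotations and root-driven translation can be sketched as below. The actual project draws GLUT shapes inside glPushMatrix/glPopMatrix pairs; this standalone sketch, with made-up names and limits, shows only the transform logic:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// One node of the hierarchy: a shape positioned relative to its parent.
// In the GLUT version each node corresponds to a glPushMatrix /
// glTranslatef / glRotatef / draw shape / recurse / glPopMatrix sequence.
struct Node {
    Vec3 offset{0, 0, 0};        // translation relative to the parent
    float angle = 0.0f;          // joint rotation (degrees)
    float minAngle = -30.0f;     // illustrative clamp limits
    float maxAngle = 30.0f;
    std::vector<Node> children;
};

// Keyboard input adjusts a joint, clamped to its boundary values.
void rotateJoint(Node& joint, float delta) {
    joint.angle = std::max(joint.minAngle,
                           std::min(joint.maxAngle, joint.angle + delta));
}

// Because offsets are relative, translating the root implicitly
// translates every dependent shape (rotation omitted for brevity).
Vec3 worldOffset(const Node& n, Vec3 parentPos) {
    return parentPos + n.offset;
}
```

The hierarchy is what keeps the interaction simple: one translation on the root moves the whole ant, while each leg segment only ever rotates about its own pivot.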
Non-Photorealistic Rendering (NPR)
(Gooch shading)
Motion capture - key frame interpolation
For this project, ten different actions were captured with a Vicon motion capture system. The captured data served as the source for the key frame interpolation.
Once motion capture data is loaded with the 'Load Motion' button, key frames for interpolation can be chosen by moving the slider and clicking the 'Add Pose' button. After the key frames are selected, the interpolated motion is computed with the 'Interpolate' button.
The Catmull-Rom algorithm was used to interpolate the poses for the in-between frames.
The implementation was built on top of skeleton code that reads and plays back the source motion capture data.
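The Catmull-Rom step can be sketched per channel as follows. Applying it independently to each joint angle (and the root position), with p1 and p2 the surrounding key poses, is one straightforward way to realize the interpolation; the original may differ in details such as rotation handling:

```cpp
// Catmull-Rom interpolation of one animation channel. p0..p3 are four
// consecutive key values; t in [0,1] parameterizes the segment from p1
// to p2. The curve passes through p1 (t=0) and p2 (t=1), with tangents
// derived from the neighboring keys p0 and p3.
float catmullRom(float p0, float p1, float p2, float p3, float t) {
    return 0.5f * ((2 * p1) +
                   (-p0 + p2) * t +
                   (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t +
                   (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t);
}
```

Because the spline interpolates its control points, every selected key pose is reproduced exactly; only the in-between frames are synthesized.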
Catmull-Rom - key frame interpolation
(63 key frames were chosen randomly out of 803 total frames for interpolation)
Radiosity - Global Illumination
Pre-computed radiosity (10000 iterations)
Progressive refinement