3D Video Fragments


Rendering

We use point-based rendering to produce 2D images of the 3D video fragment cloud from arbitrary viewpoints. Each fragment represents a small circular disk, called a surfel, tangential to the object surface. Each disk has an associated normal and radius. The radii are chosen just large enough to create small overlaps between neighboring surfels, which prevents holes in the rendering. The image is generated by projecting the surfels into screen space and rendering each projected surfel with an elliptical Gaussian alpha mask, which blends the surfels smoothly in their overlap regions. The whole rendering process can be performed in the vertex and fragment shaders of current graphics hardware.
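To make the splatting step concrete, the sketch below shows a minimal CPU version of the idea in Python/NumPy. It assumes a simple pinhole projection and approximates the elliptical footprint by a circular Gaussian; all names (splat_surfels, focal, etc.) are illustrative, and the actual system performs this in the vertex and fragment shaders as described above.

    # Minimal CPU sketch of surfel splatting with Gaussian alpha masks.
    # Names and the circular-footprint simplification are assumptions,
    # not the shader implementation described in the text.
    import numpy as np

    def splat_surfels(positions, colors, radii, width, height, focal=500.0):
        """Project surfels to the screen and blend them with Gaussian weights."""
        accum = np.zeros((height, width, 3))   # weighted color sum
        weight = np.zeros((height, width))     # accumulated alpha weights

        for p, c, r in zip(positions, colors, radii):
            if p[2] <= 0:
                continue                        # behind the camera
            # Perspective projection of the surfel center.
            u = focal * p[0] / p[2] + width / 2
            v = focal * p[1] / p[2] + height / 2
            # Screen-space footprint radius (circular approximation of the ellipse).
            sr = max(focal * r / p[2], 1.0)

            u0, u1 = int(max(u - 2*sr, 0)), int(min(u + 2*sr, width - 1))
            v0, v1 = int(max(v - 2*sr, 0)), int(min(v + 2*sr, height - 1))
            for y in range(v0, v1 + 1):
                for x in range(u0, u1 + 1):
                    d2 = (x - u)**2 + (y - v)**2
                    a = np.exp(-d2 / (2 * sr * sr))   # Gaussian alpha mask
                    accum[y, x] += a * c
                    weight[y, x] += a

        mask = weight > 1e-6
        accum[mask] /= weight[mask, None]       # normalize the blended colors
        return accum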

The radius and normal of a surfel are determined from the positions of its neighbors. By processing the fragment cloud of each acquisition camera separately, we can exploit the regular structure imposed by the camera's pixel array for the neighborhood search. This allows these attributes to be computed on the fly during rendering.
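The following sketch illustrates how grid neighbors can yield per-surfel attributes. It assumes the fragments of one camera are stored in an H x W x 3 array of 3D positions (here called depth_points, a hypothetical name), so the spatial neighbors of a surfel are simply its pixel-grid neighbors; the overlap factor of 0.75 is an assumption, not a value from the original system.

    # Hedged sketch: per-surfel normal and radius from the camera's pixel grid.
    import numpy as np

    def surfel_attributes(depth_points, i, j):
        """Normal and radius for the surfel at pixel (i, j) of one camera."""
        p = depth_points[i, j]
        right = depth_points[i, j + 1] - p
        down  = depth_points[i + 1, j] - p
        # Normal from the cross product of the two grid-neighbor differences.
        n = np.cross(right, down)
        n /= np.linalg.norm(n)
        # Radius slightly larger than half the neighbor spacing so that
        # adjacent surfels overlap and no holes appear (factor is an assumption).
        radius = 0.75 * max(np.linalg.norm(right), np.linalg.norm(down))
        return n, radius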

View-dependent Blending

The renderer processes only those fragments that are relevant for the current view, namely the fragments provided by the set of texture-active cameras, which is determined view-dependently by the dynamic system control. The contribution of each active camera is weighted by the angle between its look-at vector and the current viewing direction in order to achieve smooth transitions when the viewpoint changes. We then render the point cloud of each active camera separately and finally blend the resulting images according to the camera weights.
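A small sketch of this view-dependent blending follows. It assumes the weight falls off linearly with the angle between a camera's look-at vector and the viewing direction, with a cutoff angle beyond which a camera is inactive; the exact weighting function, the cutoff, and all names are assumptions for illustration only.

    # Sketch of view-dependent camera weights and per-camera image blending.
    import numpy as np

    def camera_weights(look_at_vectors, view_dir, max_angle=np.pi / 3):
        """Weight each camera by its angle to the current viewing direction."""
        view_dir = view_dir / np.linalg.norm(view_dir)
        weights = []
        for l in look_at_vectors:
            l = l / np.linalg.norm(l)
            angle = np.arccos(np.clip(np.dot(l, view_dir), -1.0, 1.0))
            # Cameras beyond the cutoff angle are inactive; closer cameras
            # receive smoothly larger weights.
            weights.append(max(0.0, 1.0 - angle / max_angle))
        weights = np.asarray(weights)
        return weights / weights.sum() if weights.sum() > 0 else weights

    def blend_images(images, weights):
        """Blend the per-camera renderings according to the camera weights."""
        out = np.zeros_like(images[0], dtype=float)
        for img, w in zip(images, weights):
            out += w * img
        return out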

Figure: Blending factors of the texture-active cameras (yellow) and inactive cameras (green) for two slightly different viewpoints.
