The basic primitive of our framework is the 3D video fragment,
a point sample that can be dynamically generated, deleted, and
updated. As opposed to mesh-based representations, 3D video
fragments provide a one-to-one mapping between points and their
associated color and normal attributes, avoiding interpolation and
alignment artifacts. In particular, the lack of local connectivity
makes 3D video fragments much more efficient for updating,
coarse-to-fine sampling, progressive streaming, and compression.
Another benefit of retaining an underlying point-based representation
is that it maps directly onto point-based rendering on graphics
hardware.
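To make the primitive concrete, the following minimal C++ sketch
illustrates a 3D video fragment and the dynamic generate, delete, and
update operations; the struct layout and the FragmentBuffer container
are illustrative assumptions, not our actual implementation:

```cpp
#include <cstdint>
#include <unordered_map>

// Illustrative layout of a 3D video fragment: a point sample that
// carries its own attributes and has no connectivity to its neighbors.
struct VideoFragment3D {
    float        position[3];  // 3D point location
    float        normal[3];    // surface normal at the sample
    std::uint8_t color[4];     // RGBA color attribute
};

// Because fragments are independent, a stream of them can be kept in a
// flat keyed container: generation, deletion, and update are constant-
// time operations that never trigger re-meshing or connectivity repair.
class FragmentBuffer {
public:
    void generate(std::uint32_t id, const VideoFragment3D& f) { fragments_[id] = f; }
    void remove(std::uint32_t id)                              { fragments_.erase(id); }
    void update(std::uint32_t id, const VideoFragment3D& f)    { fragments_[id] = f; }
private:
    std::unordered_map<std::uint32_t, VideoFragment3D> fragments_;
};
```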
We coined the term 3D video fragment as the three-dimensional analog
of the 2D video fragment. In the classic computer graphics literature,
a fragment is defined as a display pixel, thus a 2D video fragment,
with attached attributes such as depth or an alpha value.
Consequently, a 3D video fragment is a three-dimensional point sample
with attached attributes (e.g., a position, a surface normal, a color)
that is generated from a 2D video pixel or fragment.
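The following sketch illustrates this generation step under a standard
pinhole camera model; the function name and the parameters (intrinsics
fx, fy, cx, cy and per-pixel depth z) are assumptions made for
illustration only:

```cpp
#include <cstdint>

struct VideoFragment3D {        // attributes of one 3D video fragment
    float        position[3];
    float        normal[3];
    std::uint8_t color[4];
};

// Back-project pixel (u, v) with reconstructed depth z into a fragment,
// assuming a pinhole camera with focal lengths (fx, fy) and principal
// point (cx, cy); all parameter names are illustrative.
VideoFragment3D fragmentFromPixel(int u, int v, float z,
                                  float fx, float fy, float cx, float cy,
                                  const std::uint8_t rgba[4],
                                  const float n[3]) {
    VideoFragment3D f;
    f.position[0] = (u - cx) * z / fx;  // X = (u - cx) Z / fx
    f.position[1] = (v - cy) * z / fy;  // Y = (v - cy) Z / fy
    f.position[2] = z;                  // Z is the extracted depth
    for (int i = 0; i < 3; ++i) f.normal[i] = n[i];
    for (int i = 0; i < 4; ++i) f.color[i] = rgba[i];
    return f;
}
```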
The framework is generic in the sense that it works with any real-time
3D reconstruction method that extracts depth from images. It is thus
complementary to model or scene reconstruction methods based on
volumetric (e.g., space carving, voxel coloring), polygonal (e.g.,
polygonal visual hulls), or image-based (e.g., image-based visual
hulls) approaches. The framework can therefore serve as an abstraction
layer that decouples the free-viewpoint video representation and its
streaming from the underlying 3D reconstruction.
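As an illustration of this decoupling, the hypothetical interface
below shows how any depth-extracting reconstruction method could plug
into the framework; the DepthReconstructor interface and all names are
assumptions for the sake of the sketch, not the framework's actual API:

```cpp
#include <cstdint>
#include <vector>

struct DepthMap {                  // per-pixel depth from reconstruction
    int width = 0, height = 0;
    std::vector<float> depth;      // depth <= 0 marks pixels with no surface
};

// Abstraction over the reconstruction stage: any real-time method that
// yields per-pixel depth can implement this interface, regardless of
// whether it is volumetric, polygonal, or image-based internally.
class DepthReconstructor {
public:
    virtual ~DepthReconstructor() = default;
    virtual DepthMap reconstruct(const std::vector<std::uint8_t>& image,
                                 int width, int height) = 0;
};

// The representation and streaming layers depend only on the interface,
// so the reconstruction method can be swapped without touching them.
void processFrame(DepthReconstructor& recon,
                  const std::vector<std::uint8_t>& image, int w, int h) {
    DepthMap d = recon.reconstruct(image, w, h);
    // ... turn valid depth pixels into 3D video fragments and stream them ...
    (void)d;  // placeholder: fragment generation omitted in this sketch
}
```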