Computer Graphics Laboratory ETH Zurich


Advanced Image Synthesis (SS 06) - Exercises



As an exercise you will work on a project in which you program a demo or game combining some state-of-the-art image synthesis techniques to create appealing visual effects. Some possible project ideas are listed below. This list is neither complete nor mandatory; it merely illustrates what such a project could look like. We strongly encourage you to develop your own ideas, so be creative! Maybe you found an interesting algorithm on the internet that you want to re-implement. Maybe you've seen a cool effect in a movie. Maybe you have a cool idea for a game. Look at the links provided below and search the web for inspiration. Everything that has to do with graphics is welcome. You are not constrained to the techniques learned in class; you can implement or invent any graphics algorithm you like. It just has to look cool!

Introductory slides

Final Student Projects

Best Utility Award
Oliver Saurer, Gerhard Röthlin
Lazy Cutting
Best Demo Award
David Steiger, Daniel Kistler
Level of Detail and Culling
Best Implementation Award
Kaspar Rohrer
Procedural Buildings
Jessy Kiriyanthan, Marcel Meili
Ingemar Rask, Johannes Schmid
Rendering Smoke and Fire in Real-Time
Lukas Novosad, Michael Gubser
Water & Fire
Jens Puwein, Pascal Rota
Andri Bühler, André Schmidt
Image Processing via Graph Cuts


04.04.2006 Introduction
The procedure of the mini project is introduced in the first exercise lesson. Various ideas for possible projects are presented.
The students form groups of two or three people in which the projects will be carried out. Each group should start looking for ideas and creating a web site for documentation of their project.

11.04.2006 Submission deadline for project proposals
Each group hands in a proposal describing the planned project. The proposal should consist of approximately one page covering the following issues:
• What do we want to do?
• What results do we want to show?
• What are the technical problems we have to solve?
• Which steps are necessary to solve those problems?
• What third-party software or libraries do we want to use?
• What software parts do we implement ourselves?
The proposals should be put on the individual project web sites. A link to the site should be sent to the assistant Michael Waschbüsch.
Each group presents its proposal in the exercise lesson.

18.04.2006 Confirmation of acceptance of the proposals
The assistant contacts the groups to approve their projects. The students can start implementing.

27.06.2006 Final project deadline
All projects should be finished. The groups finalize their project web sites containing:
• The initial project proposal.
• Result images and videos.
• A description of the used and implemented technologies.
• The source files of their software.
• An executable of their programs.
• A software documentation.
A link to the site should be sent to the assistant Michael Waschbüsch.

04.07.2006 Project presentation
Each group gives a demonstration of its project in the lecture. The fanciest projects win cool prizes!

End of Semester: Grading
The projects are graded by the lecturers. The project grade counts for 25% of the final grade for this course.

If you have any questions, feel free to contact Michael Waschbüsch.

Project ideas

Image-based rendering

Waschbüsch et al.; Scalable 3D Video of Dynamic Scenes;
Pacific Graphics 2005

In image-based rendering, both the geometry and the appearance of the scene are derived from real photographs. In contrast to artificial models, this technique usually allows for a higher level of photorealism. The input data is generated by taking images of an object or scene from multiple views. Applications of image-based rendering include re-rendering the scene from novel views and synthetic aperture effects such as digital refocusing. Different image-based rendering technologies use different amounts of geometry: light fields require no geometry at all but many images, whereas the 3D Video projects by Microsoft Research, the MPI Saarbrücken and ETH Zurich explicitly model the geometry of the scene and thus require only a sparse camera setup. In general, the fewer images are available, the more accurate the geometry has to be. A very good overview of image-based rendering is provided by the Siggraph'99 course.

As a project, you could explore different aspects of image-based rendering, from acquisition over 3D reconstruction to the rendering. Possible ideas include:
  • Use available data sets of light fields or videos with depth. Try to render them from novel views. Unstructured lumigraph rendering provides a mathematical framework to generate high-quality images. You can also try to do synthetic aperture effects on the light fields.
  • Acquire a light field by taking photos of an object or scene from multiple viewpoints with your digital camera and try to re-render it. For acquisition you need to calibrate the camera which can be done by placing a checkerboard in the scene and using the Camera Calibration Toolbox for Matlab. This software also provides a tool to compensate for the radial distortion of the camera lens.
  • Acquire a single object in front of a uniform colored background. Extract its silhouette and reconstruct an approximate geometry using visual hulls.
  • Try to acquire depth maps from complete scenes using stereo vision. Various implementations are available here. Before performing stereo matching you also have to do an image rectification.
More on computer vision can be found in Olivier Faugeras' book "Three-Dimensional Computer Vision" and on the Computer Vision Homepage. Basic computer vision algorithms are provided by the OpenCV library.
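The visual hull idea mentioned above can be sketched in a few lines: a voxel is kept only if it projects inside the silhouette in every view. The sketch below is hypothetical and not part of any course material; it uses two orthographic views along the coordinate axes, whereas a real setup would use calibrated perspective cameras (e.g. via the Camera Calibration Toolbox).

```python
# Minimal visual-hull sketch (hypothetical): carve a voxel grid using
# binary silhouettes from two orthographic views along the x- and y-axes.

def carve_visual_hull(sil_x, sil_y, n):
    """Keep voxel (x, y, z) only if it projects inside every silhouette.

    sil_x[y][z] -- silhouette seen along the x-axis (True = object)
    sil_y[x][z] -- silhouette seen along the y-axis
    n           -- grid resolution (n x n x n voxels)
    """
    hull = set()
    for x in range(n):
        for y in range(n):
            for z in range(n):
                if sil_x[y][z] and sil_y[x][z]:
                    hull.add((x, y, z))
    return hull

# Toy example: a 4x4x4 grid where both silhouettes are a 2x2 square,
# so the hull is the 2x2x2 cube at the grid centre.
n = 4
square = [[1 <= a <= 2 and 1 <= b <= 2 for b in range(n)] for a in range(n)]
hull = carve_visual_hull(square, square, n)
print(len(hull))  # 8 voxels survive the carving
```

With more views the hull tightens around the object; concavities, however, can never be recovered from silhouettes alone, which is why the result is only an approximate geometry.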

Image editing

Kwatra et al.; Graphcut Textures: Image and Video Synthesis Using Graph Cuts;
Siggraph 2003

The lecture introduces several advanced image editing algorithms including segmentation, matting and texture synthesis. Implement one of them and integrate it into a small image editing tool. These tasks involve some advanced numerical computations; a good introduction to this field can be found in Numerical Recipes in C, and code is available in the GNU Scientific Library.
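The core idea behind graph-cut texture synthesis is to stitch patches along a seam of minimal visible error. A full graph-cut solver is involved; the hypothetical sketch below illustrates the simpler dynamic-programming variant ("minimum error boundary cut", as used in image quilting), which graph-cut textures generalise to arbitrary seam shapes.

```python
# Hypothetical sketch: find a minimum-cost vertical seam through an
# overlap-error grid with dynamic programming.

def min_error_seam(cost):
    """cost[row][col] = squared difference between overlapping patches.
    Returns one column index per row, forming the cheapest 8-connected
    path from the top row to the bottom row."""
    rows, cols = len(cost), len(cost[0])
    # total[r][c] = cheapest accumulated cost of any path reaching (r, c)
    total = [row[:] for row in cost]
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols, c + 2)
            total[r][c] += min(total[r - 1][lo:hi])
    # backtrack from the cheapest bottom cell
    seam = [min(range(cols), key=lambda c: total[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(0, c - 1), min(cols, c + 2)
        seam.append(min(range(lo, hi), key=lambda c2: total[r][c2]))
    seam.reverse()
    return seam

cost = [
    [9, 1, 9],
    [9, 1, 9],
    [9, 9, 1],
]
print(min_error_seam(cost))  # [1, 1, 2]
```

Pixels left of the seam come from one patch, pixels right of it from the other, so the cut follows the path where the two patches already agree best.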

Video editing

Wang et al.; Interactive Video Cutout;
Siggraph 2005

Video cubes are a convenient way to represent spatio-temporal movie data as a static three-dimensional volume. By defining cut planes, the user can navigate freely through both the space and time domains. Complex editing tasks can already be achieved by simply transforming the cube, as shown in the Proscenium framework. Well-known image editing techniques can easily be adapted to videos by extending them into the third dimension, allowing for complex special effects such as video cutout or video object cut-and-paste. The last example also provides a tool for defining arbitrary cutout paths through the cube, yielding higher flexibility than cut planes. A very simple video cube demo software is available from Microsoft Research (scroll down to "Video Cube").

Your task in this project is to write a video editing tool. Possible features are:
  • Represent the movie as a video cube.
  • Provide a navigation tool by defining cut planes through the cube.
  • Provide a navigation tool by defining arbitrary cut paths through the cube.
  • Do a simple object removal by inserting the background of other frames, as shown in Proscenium.
  • Adapt some image processing algorithms to the 3D cube.
  • Make cool special effects.
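To make the cut-plane idea concrete, here is a hypothetical sketch (not from any of the cited frameworks): the video cube is a nested list `cube[t][y][x]` of grey values, and a slanted cut plane assigns each image column a different point in time, producing a simple space-time effect.

```python
# Hypothetical sketch: extract a slanted temporal slice from a video
# cube, sweeping linearly from the first frame to the last across
# the image columns.

def slanted_slice(cube, width):
    """For column x, sample the frame at t = x * (T-1) // (width-1)."""
    frames = len(cube)
    height = len(cube[0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            t = x * (frames - 1) // (width - 1)
            row.append(cube[t][y][x])
        out.append(row)
    return out

# Toy cube: 3 frames of a 2x3 image, each pixel set to its frame index,
# so the slanted slice shows time increasing from left to right.
cube = [[[t] * 3 for _ in range(2)] for t in range(3)]
print(slanted_slice(cube, 3))  # [[0, 1, 2], [0, 1, 2]]
```

An axis-aligned plane (fixed t) gives back an ordinary frame; other plane orientations mix space and time, which is exactly the navigation freedom the video-cube representation buys.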

Real-time graphics

Andromeda Software Development; Iconoclast;
Winner of demo awards 2005

Write some demo or game with cool real-time visual effects. Get your inspiration from the Demo Scene and the technology demos by NVIDIA and ATI.
  • Use OpenGL or Direct3D for accelerated graphics.
  • Write your own shaders for the GPU. See ShaderTech for info, tutorials and sample code.
  • Use procedural (i.e. computer generated) geometry and textures.
  • Use the acceleration methods learned in the lecture, like culling or level of detail.
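The level-of-detail acceleration mentioned in the last point boils down to picking a coarser mesh the farther an object is from the camera. The sketch below is a hypothetical illustration (names and thresholds are made up, not from the lecture); in a real demo the chosen level would index into precomputed meshes before the OpenGL or Direct3D draw call.

```python
# Hypothetical sketch: distance-based level-of-detail selection.
# Level 0 is the full-detail mesh; each coarser level kicks in
# when the distance to the camera doubles.

def select_lod(distance, base_distance=10.0, num_levels=4):
    """Return 0 (full detail) up to base_distance, then one coarser
    level each time the distance doubles, capped at num_levels - 1."""
    level = 0
    threshold = base_distance
    while distance > threshold and level < num_levels - 1:
        level += 1
        threshold *= 2.0
    return level

for d in (5.0, 15.0, 25.0, 100.0):
    print(d, select_lod(d))  # 0, 1, 2, 3
```

Combined with view-frustum culling (skipping objects outside the camera's view entirely), this keeps the triangle count roughly proportional to what is actually visible.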


Adams, Antunez, Talvala; Redwood Trees in Fog;
Winner of the Stanford Rendering Competition 2005

Render a cool picture or video. Get your inspiration from the Stanford or UCSD rendering competitions. Implement and combine various techniques to generate fancy visual effects, such as:
  • Complex geometry such as feathers.
  • Realistic appearance models such as donut coatings.
  • Exotic wavelength-dependent effects such as the skin of a snake.
  • Volumetric effects such as clouds.
  • Or other interesting things like fire, water, snow, plants, jellyfish, ...
We strongly recommend that you construct your scene using the RenderMan language, because it is the de-facto standard interface to rendering software. It was defined by Pixar, who use it for their famous movies, and most other special effects companies use it too. Besides defining scene geometry, its full power lies in the ability to implement custom shaders for complex materials and lighting effects. It is thus comparable to high-level GPU programming languages, but designed for software renderers. There are many freely available modeling and rendering programs supporting RenderMan. Here is a small selection:
  • Aqsis is a RenderMan-compatible renderer implementing the traditional Reyes algorithm.
  • Pixie is another RenderMan-compatible renderer which also supports ray tracing and global illumination in addition to Reyes.
  • K-3D is a simple RenderMan-compatible 3D modeler. It has built-in support for the Aqsis renderer.
  • Blender is the most powerful, and most complex, open-source 3D modeling software. However, its built-in renderer is not compatible with RenderMan; a RenderMan export plugin can be installed separately.