SIGGRAPH 2003: Day two - Textures, Video, and Lighting


The second day of SIGGRAPH this year (Monday) was headlined by the keynote and awards ceremony and then went right into the meat of the conference: papers, sketches, and applications.

Most of my day was spent at the papers, hearing what the researchers are doing and noting what would be worth following up in TOG (ACM Transactions on Graphics).

Keynote

The keynote this morning was an interesting choice. Last year it was about intellectual property (relegated to an AM course that I skipped this year), but this year the renowned cosmologist Dr. Lasenby from Cambridge University spoke on Conformal Geometric Algebra and its relationship to predicting the age of the universe and its continued expansion. The talk was interesting, but unfortunately the room was a bit warm and the slides had a bit too much math on them... not a good combination for somebody without coffee. Still, it was clear that there are interesting implications in using Conformal Geometric Algebra to tie together geometry in Euclidean, hyperbolic, and spherical spaces.

Papers - Textures

The morning paper session was on textures, and in particular the creation of textures from samples. Each of the papers was interesting, but some clearly had more real-world applications than others.

GraphCut Textures: Image and Video Synthesis Using Graph Cuts described a method for algorithmically analyzing texture samples to create tileable textures using graph cuts, a process in which the samples are searched and matched along optimal seams rather than blended with averaging techniques. The method isn't revolutionary in itself, as patch-based synthesis has been done before; however, the application in both the spatial and temporal dimensions for use with video was new, and the results (both for video and images) were stunning.
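As a rough illustration of the core seam step (my own sketch, not the authors' implementation): treat the overlap between two patches as a grid graph, weight each edge with the Kwatra-style matching cost, and take a min-cut to decide which pixels come from which patch. This assumes small grayscale patches and leans on networkx for the min-cut:

```python
import numpy as np
import networkx as nx

def graphcut_seam(patch_a, patch_b):
    """Label each overlap pixel as coming from patch A or patch B by
    solving a min-cut on a 4-connected grid (in the spirit of Kwatra
    et al. 2003). Toy version: small grayscale float arrays only."""
    h, w = patch_a.shape

    def cost(p, q):
        # Matching cost across a potential cut between neighbors p and q
        return abs(patch_a[p] - patch_b[p]) + abs(patch_a[q] - patch_b[q])

    G = nx.DiGraph()
    for y in range(h):
        for x in range(w):
            for q in ((y + 1, x), (y, x + 1)):
                if q[0] < h and q[1] < w:
                    c = cost((y, x), q)
                    G.add_edge((y, x), q, capacity=c)
                    G.add_edge(q, (y, x), capacity=c)
    for y in range(h):
        # In networkx, edges without a 'capacity' attribute are infinite,
        # which pins the left column to A and the right column to B.
        G.add_edge('src', (y, 0))
        G.add_edge((y, w - 1), 'snk')

    _, (side_a, side_b) = nx.minimum_cut(G, 'src', 'snk')
    labels = np.zeros((h, w), dtype=bool)  # False -> use A, True -> use B
    for node in side_b - {'snk'}:
        labels[node] = True
    return labels
```

The real system extends the same cut to a 3D spatio-temporal grid for video, but the 2D case above is the whole idea in miniature.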

Wang Tiles for Image and Texture Generation was also interesting, focusing on the creation and use of orientation-specific Wang tiles for creating massive textures. The goal was to create an infinite field of repeating textures that don't visually repeat. They illustrated using the technique to create enormous fields of texture, but also combined it with Poisson disc distributions to create points for growing three-dimensional objects. The most impressive demo was a field of sunflowers that had been grown using this technique. To top it off, they added another dimension (literally) by taking the Poisson point tiles and rendering the sunflowers from four directions and a few distances and heights, resulting in 64 tiles that could be rendered back-to-front to create a real-time three-dimensional flyby that was quite compelling.
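The tiling itself is simple to do at run time: scan the plane, and at each cell pick any tile whose north and west edge colors match the already-placed neighbors. Here's a minimal sketch of that scanline placement (my own illustration, not the paper's code), where each tile is just a hypothetical (north, east, south, west) tuple of edge colors:

```python
import random

def tile_plane(tiles, rows, cols):
    """Scanline Wang tiling: place tiles left-to-right, top-to-bottom so
    edge colors match already-placed neighbors. `tiles` is a hypothetical
    list of (north, east, south, west) edge-color tuples."""
    grid = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            candidates = [
                t for t in tiles
                if (r == 0 or t[0] == grid[r - 1][c][2])   # north matches neighbor's south
                and (c == 0 or t[3] == grid[r][c - 1][1])  # west matches neighbor's east
            ]
            grid[r][c] = random.choice(candidates)
    return grid
```

As long as the tile set contains at least one tile for every (north-color, west-color) combination, `candidates` is never empty, which is the property such tile sets are built to guarantee.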

Papers - Images, Video and Texture

The second set of papers dealt primarily with video and real-time rendering issues. Three of the four papers were good (not a bad ratio), and two had some really interesting content.

Poisson Image Editing

This paper described building fuzzy compositing, tiling, and smoothing tools from very simple selection techniques (like drawing a very rough outline): the selected region is filled by solving a Poisson equation that preserves the source's gradients while matching the destination at the boundary, smoothing the transition between the two. This was fascinating stuff in terms of ease of use and effectiveness, and unlike much here at SIGGRAPH, people will be able to get their hands on at least some of it pretty quickly: the cloning tool will be in Microsoft's Digital Image Pro, which is slated for release this week.
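For the curious, the heart of the cloning tool can be sketched in a few lines (my own toy version, not the paper's solver): iterate toward the solution of the discrete Poisson equation inside the selection, using the source's Laplacian as guidance and the destination as the boundary condition.

```python
import numpy as np

def seamless_clone(src, dst, mask, iters=2000):
    """Jacobi iteration for the discrete Poisson equation: inside `mask`,
    match the source's Laplacian; outside, keep the destination.
    Grayscale float arrays of identical shape; the mask must not touch
    the image border (np.roll wraps around)."""
    out = dst.astype(float).copy()
    # Guidance: the 5-point Laplacian of the source
    lap = (4 * src
           - np.roll(src, 1, 0) - np.roll(src, -1, 0)
           - np.roll(src, 1, 1) - np.roll(src, -1, 1))
    inside = mask.astype(bool)
    for _ in range(iters):
        nbrs = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        # Fixed point: 4*out - nbrs = lap inside the selection
        out[inside] = (nbrs[inside] + lap[inside]) / 4.0
    return out
```

A real implementation would use a fast solver (multigrid or FFT) instead of plain Jacobi iteration, but the fixed point it converges to is the same seamless composite.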

High Dynamic Range Video

This paper dealt with near-real-time acquisition of high dynamic range video with inexpensive equipment. HDR video is video that has widely varying levels of light and therefore can't reasonably be captured at a single exposure. The technique described in this paper involves modifying the software in an inexpensive VGA-scale camera to alternate between short and long exposures every other frame, then passing that data through an output filter algorithm that composites the frames after analysis. Because this is done with video and in near-real time (the cameras were only 30 fps to begin with), the algorithm needs to quickly estimate motion so that it can register adjacent frames to each other, then choose which sections of the frames contain more data at each of the three light levels (the original slow exposure, the original fast exposure, or a boosted exposure created by adjusting the contrast in the original slow frames). The results of the technique were very impressive.
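A hedged sketch of the compositing step (mine, with invented weighting thresholds, and skipping the motion-registration stage entirely): scale the aligned short exposure by the exposure ratio so both frames share a radiance scale, then blend per pixel, trusting the long exposure except where it saturates and the short exposure except where it is buried in shadow noise.

```python
import numpy as np

def fuse_exposures(short_f, long_f, ratio):
    """Blend an aligned short/long exposure pair into one radiance frame.
    `ratio` is long exposure time / short exposure time; pixel values are
    floats in [0, 1]. The 0.85 and 0.05 thresholds are invented for this
    sketch, and frame registration is assumed to have happened already."""
    rad_short = short_f * ratio  # scale short exposure to the long frame's radiance scale
    rad_long = long_f
    w_long = 1.0 - np.clip((long_f - 0.85) / 0.15, 0.0, 1.0)  # distrust saturated pixels
    w_short = np.clip(short_f / 0.05, 0.0, 1.0)               # distrust deep-shadow noise
    total = w_long + w_short + 1e-6
    return (w_long * rad_long + w_short * rad_short) / total
```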

Matchmaker: Computing Constrained Texture Maps

Matchmaker is a piece of software used to modify geometry and fit texture maps based on a series of constraints. If you've ever seen somebody's face remapped onto somebody else's head, this is the kind of technology that helps with that operation. In the past, most of the texture fitting for these operations has been done by hand, but this technique lets the user mark matching points on the texture and the model, and the software then modifies the model and remaps the texture so the result looks reasonable. Marking the points is similar to creating morph targets, and the key is accurately choosing which features must match which parts of the picture; in this case it was done with a small number of vertex and texture coordinate pairs. Based on this work, a generic set of "heads" could be modeled and marked (indicating which vertices correspond to which major features, like eyeballs, chin, etc.), a set of textures could be similarly marked, and the software could then mix and match any combination. Pretty spiffy stuff.
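The paper's constrained parameterization is well beyond a blog snippet, but the flavor of "marked pairs drive the mapping" can be conveyed with something as simple as a least-squares fit (a hypothetical stand-in, not Matchmaker's method), taking marked model-space points to their marked texture coordinates:

```python
import numpy as np

def fit_uv_warp(model_pts, tex_pts):
    """Least-squares affine map taking marked model-space feature points
    (N, 2) to their marked texture coordinates (N, 2). Returns a function
    that warps arbitrary points; needs at least 3 non-collinear pairs."""
    n = len(model_pts)
    A = np.hstack([model_pts, np.ones((n, 1))])      # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, tex_pts, rcond=None)  # 3x2 affine matrix

    def warp(pts):
        p = np.hstack([pts, np.ones((len(pts), 1))])
        return p @ M

    return warp
```

Matchmaker itself satisfies the marked pairs as hard constraints while keeping the rest of the embedding valid (no flipped triangles), which is exactly what a global affine fit can't do; the snippet only shows the input/output shape of the problem.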

Papers - Precomputed Radiance Transfer

This section of the papers was dedicated to environmental lighting using precomputed radiance transfer. With this technique, the way that light interacts with a model is precomputed for a given environment and used later for real-time lighting (or relighting) of the modeled objects.

All-Frequency Shadows Using Non-Linear Wavelet Lighting Approximation

By far the most interesting of the techniques described in this section: the authors made use of wavelet compression to compress both the environment lighting and the light transport matrix, leading to near-real-time rendering of some pretty amazing lighting effects, including caustics (light focusing through reflective and translucent surfaces, like that seen when light shines through a crystal vase). Developing the initial maps is still pretty compute-intensive, but it was clear from the discussion afterwards that the developers are interested in seeing the approach used in games in the near future.
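The run-time side of the approach reduces to a pleasingly small operation, sketched here under my own assumptions (a precomputed transport matrix already expressed in the wavelet basis): keep only the largest wavelet coefficients of the environment light, the "non-linear approximation" of the title, and relight with a sparse matrix-vector product.

```python
import numpy as np

def relight(transport, light_wavelet, keep=100):
    """Relight precomputed geometry from a wavelet-compressed environment.
    `transport` is a hypothetical (num_vertices, num_coeffs) light
    transport matrix in the wavelet basis; `light_wavelet` is the
    environment map's wavelet coefficient vector."""
    # Non-linear approximation: keep only the largest-magnitude coefficients
    idx = np.argsort(np.abs(light_wavelet))[-keep:]
    # Sparse matvec: only the kept columns contribute to exit radiance
    return transport[:, idx] @ light_wavelet[idx]
```

The expensive part is building `transport` in the first place; once it exists, swapping in a new environment map is just a new coefficient vector.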

And that was the end of the first day's papers.