CMPS 161 Final Project
Radiosity Primer
Andrew Ames

Introduction

The built-in OpenGL and Direct3D lighting models are local illumination models. This means that the light contributions to a single vertex of a single polygon only take into account the relationship between the vertex and the light sources, such as angle of incidence and distance. No information is used regarding occluding objects or semi-reflective surfaces. Additionally, the only light types supported are typically point, directional, spot, and ambient light sources.

Ray-tracing is a popular global illumination solution to this problem. Effectively, a ray is traced from the center of projection through every pixel in screen space. As the ray intersects objects in the scene, the ray collects information regarding the material properties of the object. Rays are recursively generated from that point to determine light source contributions. Ultimately, the final color is computed and stored at the pixel on the screen. Figure 1 is an example ray-traced scene.


Figure 1. Ray-traced scene.

One side effect of ray-tracing is that shadow boundaries tend to be extremely sharp due to the nature of the ray computations. When firing a ray from a point on an object to a light source, the ray either is occluded by an object or it is not. Even though ray-traced images have advanced greatly, they still end up looking so sharp that the eye is not fooled into thinking they are real. In fact, the brain usually alerts us that something is wrong with the picture.

Additionally, if only ray-tracing were used to compute color contributions, then all areas completely in shadow would be black. Obviously, if the example scene is a room lit by sunlight through the windows, the surfaces in shadow would not be completely black. This is usually solved by adding an ambient light source that contributes light equally to all surfaces, which is what was done for the scene in Figure 1.

Radiosity, on the other hand, is a global illumination technique designed to solve these problems. The effects of every surface are considered when computing the light contributions not only at each vertex of a polygon, but at interior points of the polygon as well. Radiosity also supports planar light sources very naturally, since each polygon is effectively both an absorber and an emitter of light.

As a result, realistic soft shadows are generated and no ambient light source is needed to light shadowed surfaces. Even shadowed surfaces will have light contributed to them indirectly via other surfaces that are hit directly by light. See Figure 2. Notice how the ceiling has a slightly pink tint due to reflecting the floor color.


Figure 2. Radiosity scene.

Like ray-tracing, radiosity is an offline algorithm in that a significant amount of pre-computation must be performed before rendering takes place. Unlike ray-tracing, however, radiosity is view-independent: once the pre-computation is complete, the scene can be rendered from any viewpoint in real time, whereas ray-tracing requires new rays to be generated every time the viewpoint changes.

The view-independence is also a drawback, because specular effects are view-dependent and therefore not directly handled by radiosity. Radiosity is often used to complement other dynamic, view-dependent algorithms.

Another limitation of radiosity is that the traditional light sources (e.g. point and directional) are difficult to model since all light sources are planar. In my implementation, this is solved by using a ray-tracer to cast the initial light into the scene from non-planar light sources.

Algorithm

Preparation

For reasons that will become clear later, my radiosity implementation only supports oriented rectangular surfaces. No triangles. Each surface must be a quad whose opposite edges are equal in length and whose angles are all right angles. Radiosity can be implemented with arbitrary polygons, but restricting surfaces to rectangles greatly simplifies the implementation.

To capture how the lighting varies across a face, each rectangular face is subdivided into a regular grid of patches. Each patch is initialized such that its incident light value is (0, 0, 0, 0) (RGBA) and its excident light value is (0, 0, 0, 0) for non-emitting faces or (Re, Ge, Be, Ae) for emitting faces.

At this point, if each patch were rendered as the color of its excident light value, only the light emitters would be visible.
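
As a concrete starting point, a minimal patch structure and its initialization might look like the following sketch (the Color type, field names, and InitPatch helper are my own illustrations, not the project's actual code):

struct Color { float r, g, b, a; };

struct Patch {
    Color incident;   // light arriving at the patch, accumulated each pass
    Color excident;   // light leaving the patch, used when rendering
};

// Non-emitting faces start black; emitters start with their emissive color.
void InitPatch(Patch& p, bool emitter, const Color& emissive)
{
    p.incident = Color{0, 0, 0, 0};
    p.excident = emitter ? emissive : Color{0, 0, 0, 0};
}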

Non-Planar Lights

If there are any non-planar light sources, such as point, directional, spot, or ambient, the incident light value from each source is computed for every patch in the scene. Their sum is set as the incident light value for the patch. Following is a description of how the incident light is calculated for non-planar light sources.

Point

Point light sources emit light equally in all directions. The point light source effects are attenuated by a quadratic attenuation curve and are finally clamped at a specified range.

For a given patch, a ray is traced from each corner of the patch to the point light source. The incident light for that patch from that light source is computed as follows.

incident = P * c * SUM((N dot Li) / (att0 + att1 * di + att2 * di*di))

where SUM sums the values for all rays that are NOT occluded by an intermediate face and are NOT out of range of the light source; P is the fraction of the rays that reach the light source; att0, att1, and att2 are the constant, linear, and quadratic attenuation factors; di is the distance between corner i and the light source; c is the diffuse color of the light source; (N dot Li) is the dot product of the face's normal and the direction of ray i toward the light.
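
Expressed as code, the computation might look like the sketch below. The Vec3 helpers (Dot, Length), the componentwise Color arithmetic, and the RayOccluded visibility test are assumptions standing in for the project's actual routines:

// Incident light on a patch from a point light, per the equation above.
// corners[] holds the four patch corners; N is the patch's unit normal.
Color PointLightIncident(const Vec3 corners[4], const Vec3& N,
                         const Vec3& lightPos, const Color& c, float range,
                         float att0, float att1, float att2)
{
    float sum = 0.0f;
    int reached = 0;
    for (int i = 0; i < 4; ++i) {
        Vec3 toLight = lightPos - corners[i];
        float d = Length(toLight);
        if (d > range || RayOccluded(corners[i], lightPos))
            continue;                        // out of range or occluded: skip
        ++reached;
        Vec3 Li = toLight / d;               // unit direction toward the light
        float nDotL = Dot(N, Li);
        if (nDotL > 0.0f)
            sum += nDotL / (att0 + att1 * d + att2 * d * d);
    }
    float P = reached / 4.0f;                // fraction of rays reaching the light
    return c * (P * sum);                    // scale the light color componentwise
}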

Directional

Directional light sources have a direction, but no position. Attenuation and range factors are not applied for them.

Directional lights are treated as a light source at infinity, such that all of their rays are parallel. To determine if a directional light ray reaches a vertex of a patch, a ray is cast from that vertex in the opposite direction of the directional light vector. If the ray is NOT occluded by a face before reaching the scene's bounding box, then the vertex is lit by the directional light.

The bounding box is enlarged for scene faces that coincide with a bounding box plane. Following is the incident light equation for a directional light's contribution to a patch.

incident = P * c * (N dot L)

where P is the fraction of the rays that reach the scene's bounding box; (N dot L) is the dot product of the face's normal and the light ray direction; c is the diffuse color of the light source.
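
A matching sketch for directional lights, again assuming a RayEscapesScene test that returns true when a ray reaches the scene's bounding box unoccluded:

// Incident light on a patch from a directional light with direction Ld.
Color DirectionalLightIncident(const Vec3 corners[4], const Vec3& N,
                               const Vec3& Ld, const Color& c)
{
    Vec3 L = Normalize(-Ld);                 // from the surface toward the light
    int reached = 0;
    for (int i = 0; i < 4; ++i)
        if (RayEscapesScene(corners[i], L))
            ++reached;
    float P = reached / 4.0f;                // fraction of unoccluded corner rays
    float nDotL = Dot(N, L);
    return (nDotL > 0.0f) ? c * (P * nDotL) : Color{0, 0, 0, 0};
}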

Spot

Spot lights are similar to point lights, with the exception of the added spotlight attenuation factor. The equations are as follows.

incident = P * c * SUM((N dot Li) * spoti / (att0 + att1 * di + att2 * di*di))

where spoti is the spotlight attenuation factor computed as follows.

spoti = 1, if rhoi > cos(theta/2)
spoti = 0, if rhoi <= cos(phi/2)
spoti = ((rhoi - cos(phi/2)) / (cos(theta/2) - cos(phi/2)))^falloff, otherwise

where rhoi = norm(Ld) dot norm(-L); Ld is the spotlight direction; -L is the opposite of the ray's direction vector; theta is the angle of the spotlight's umbra in radians, [0,pi); phi is the penumbra angle in radians, [theta, pi); falloff is the spotlight falloff factor.
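
The spotlight factor for a single corner ray might be computed as follows (a sketch; L is taken to be the unit vector from the corner toward the light, matching the ray's direction in the definition above):

#include <cmath>

// Piecewise spotlight attenuation factor from the definition above.
float SpotFactor(const Vec3& Ld, const Vec3& L,
                 float theta, float phi, float falloff)
{
    float rho      = Dot(Normalize(Ld), Normalize(-L));
    float cosTheta = cosf(theta * 0.5f);     // umbra half-angle cosine
    float cosPhi   = cosf(phi * 0.5f);       // penumbra half-angle cosine
    if (rho > cosTheta) return 1.0f;         // inside the umbra: full intensity
    if (rho <= cosPhi)  return 0.0f;         // outside the penumbra: no light
    // Between umbra and penumbra: interpolate, shaped by the falloff exponent.
    return powf((rho - cosPhi) / (cosTheta - cosPhi), falloff);
}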

Ambient

The incident light for a patch from an ambient light source is simply the light source's diffuse color.

Excident Light

Once all the light contributions have been summed for a patch, the patch's excident light value is computed as follows.

excident = incident * diffuse + emissive

where diffuse is the patch face's diffuse material component; emissive is the patch face's emissive material component.
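
In code this is a one-liner, assuming componentwise Color arithmetic:

// Excident light for a patch, per the equation above.
Color Excident(const Color& incident, const Color& diffuse, const Color& emissive)
{
    return incident * diffuse + emissive;    // componentwise multiply and add
}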

If the patches were rendered at this point, all the patches that are directly receiving light from a non-planar light source would be visible in addition to the light-emitting patches. Figure 3 shows what that might look like for our example scene with a directional light for the sun shining through two windows.


Figure 3. Non-planar light sources.

Notice how the patches on the shadow boundaries are partially lit, because not all four of their corner rays reached the scene's bounding box unoccluded.

Radiosity: 1st Pass

Finally, we can begin the radiosity portion. What is described here is not the traditional radiosity algorithm; instead, the graphics hardware's efficient Z-Buffer is used to handle visibility.

For a given patch, we want to compute how much light is incident on the patch from the scene. In other words, how much light does the patch see? This is the basic visibility problem.

Using the graphics hardware, render the scene with the viewpoint fixed at the center of the patch and the view direction along the patch's normal. For this front view, the near Z plane is set to the average of the two edge lengths of the rectangular patch.

For each of the four remaining views, the view direction is set parallel to one of the patch's edges, with the near Z plane at half that edge length, and only the half of the rendered image on the front side of the patch is used. Figure 4 shows the results of the 5 rendering phases: front, left, right, up, and down. Together, the planes form an unfolded hemicube around the patch. (A code sketch of the five passes follows the figure.)


Figure 4. Hemicube planes.
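
A sketch of the five render passes, assuming a RenderView helper that draws the scene into an off-screen buffer given an eye position, view direction, and near-plane distance (all names hypothetical; which edge length pairs with which side view is also my assumption):

// Render the five hemicube views for a patch centered at 'center' with unit
// normal N and unit edge directions U, V (edge lengths lenU, lenV).
void RenderHemicube(const Vec3& center, const Vec3& N,
                    const Vec3& U, const Vec3& V, float lenU, float lenV)
{
    RenderView(center,  N, 0.5f * (lenU + lenV)); // front: average edge length
    RenderView(center,  U, 0.5f * lenU);          // right: half the edge length
    RenderView(center, -U, 0.5f * lenU);          // left
    RenderView(center,  V, 0.5f * lenV);          // up
    RenderView(center, -V, 0.5f * lenV);          // down
    // For the four side views, only the half of the image in front of the
    // patch's plane is kept.
}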

Every pixel in each hemicube plane is a light contribution to the patch. The hemicube must go through two filters in order to use the data correctly. The first filter takes into account the hemicube shape and compensates for the distortion that occurs when 3D space is projected onto a plane.

The second filter takes into account Lambert's Cosine Law: the color contribution is weighted by the cosine of the angle between the pixel's direction vector and the patch's normal. The two filters are shown in Figure 5.


Figure 5. Hemicube filters: (a) hemicube perspective distortion, (b) Lambert's Cosine Law, and (c) combined.

Before the combined filter is used, it is normalized such that its pixels sum to (1, 1, 1, 1). The hemicube planes are multiplied by the filter to achieve the results in Figure 6. (A sketch of the filter computation follows the figure.)


Figure 6. Filtered hemicube. (A non-normalized filter was used.)
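
One way to build the combined, pre-normalization filter is with the standard hemicube delta form factors, which fold the projection distortion and the cosine law into a single weight. This sketch computes the weights for the front face only, assuming its pixels map onto the plane z = 1 over [-1, 1] x [-1, 1]:

#include <vector>

// Combined weight for each pixel of the front hemicube face. For a pixel at
// (x, y) on the plane z = 1, the delta form factor is 1 / (pi * (x^2+y^2+1)^2)
// times the pixel's area.
std::vector<float> FrontFaceFilter(int res)
{
    std::vector<float> w(res * res);
    const float dA = 4.0f / (res * res);     // pixel area on the [-1,1]^2 face
    const float pi = 3.14159265f;
    for (int j = 0; j < res; ++j)
        for (int i = 0; i < res; ++i) {
            float x = -1.0f + (i + 0.5f) * 2.0f / res;
            float y = -1.0f + (j + 0.5f) * 2.0f / res;
            float r2 = x * x + y * y + 1.0f;
            w[j * res + i] = dA / (pi * r2 * r2);
        }
    return w;
}

The four side faces take analogous weights with an extra term for their tilted geometry; once all five faces are filled in, dividing every weight by their total gives the normalized filter described above.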

Finally, the pixels from the filtered hemicube are summed to yield the incident light for the patch. The excident light is computed the same way as for the non-planar light sources; the equation is shown here again for convenience.

excident = incident * diffuse + emissive
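
Putting the pieces together, the gather step for one patch is just a weighted sum of the filtered pixels (componentwise Color arithmetic again assumed):

#include <cstddef>
#include <vector>

// Sum the filter-weighted hemicube pixels into the patch's incident light.
Color GatherIncident(const std::vector<Color>& pixels,
                     const std::vector<float>& filter)
{
    Color sum{0, 0, 0, 0};
    for (std::size_t i = 0; i < pixels.size(); ++i)
        sum = sum + pixels[i] * filter[i];
    return sum;
}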

The process is performed for every patch in the scene. Figure 7 shows the results.


Figure 7. Radiosity: 1st pass.

Radiosity: N passes

Any number of passes may be performed. Eventually, the solution begins to stabilize. Figure 8 shows the results after the 2nd, 3rd, and 15th passes. (An outline of a single pass follows the figure.)


Figure 8. Radiosity: 2nd, 3rd, and 15th passes.
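
In outline, a full pass just repeats the gather step for every patch. This sketch assumes a Scene holding the patches and the normalized filter, a RenderHemicubePixels helper producing the five views' pixels, and diffuse/emissive colors stored per patch (all hypothetical names):

#include <cstddef>
#include <vector>

// One radiosity pass over all patches in the scene. New excident values are
// staged and applied at the end, so every patch reads the previous pass's
// results regardless of processing order.
void RadiosityPass(Scene& scene)
{
    std::vector<Color> next(scene.patches.size());
    for (std::size_t i = 0; i < scene.patches.size(); ++i) {
        const Patch& p = scene.patches[i];
        std::vector<Color> pixels = RenderHemicubePixels(scene, p);  // 5 views
        Color incident = GatherIncident(pixels, scene.filter);
        next[i] = Excident(incident, p.diffuse, p.emissive);
    }
    for (std::size_t i = 0; i < scene.patches.size(); ++i)
        scene.patches[i].excident = next[i];
}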

Advanced Techniques

There are a few additional techniques implemented in my project. They are described here.

Lightmaps

Rendering the faces in the scene as a collection of quad patches is not as efficient as rendering only the faces themselves. To optimize the real-time rendering, a lightmap is generated for each face at the same resolution as the face's patch grid. Bilinear and mipmap filtering are applied to the lightmap to smooth the results. Although not implemented in my project, a base texture could be modulated with the lightmap.

Since only rectangular faces are used, it is simple to compute the lightmap resolution and pixel data. This is why my project restricts surfaces to be rectangular.

For machines with generous texture budgets, computing lightmaps greatly increases performance.
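
For instance, a face's patch grid can be uploaded as an OpenGL texture with bilinear and mipmap filtering enabled (a sketch using the fixed-function-era API; how patch colors are packed into texels is up to the implementation):

#include <GL/glu.h>

// Build a lightmap texture for one face; texels holds width*height RGBA
// bytes, one texel per patch.
GLuint BuildLightmap(const unsigned char* texels, int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, width, height,
                      GL_RGBA, GL_UNSIGNED_BYTE, texels);
    return tex;
}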

Adaptive Subdivision

It should be obvious that some surfaces have a very small light gradient, while others have a large one. My algorithm employs adaptive subdivision so that a face's patches are subdivided only where necessary.

Adaptive subdivision occurs in two places: 1) non-planar light source ray-tracing and 2) radiosity passes. In the first case, when rays are fired from a patch to a light source, the patch is subdivided if its vertices do not all agree on whether the ray was occluded. This subdivision continues down to a user-specified resolution.

In the second case, when a patch is about to have its hemicube computed, it is first subdivided if its excident light differs from that of any neighboring patch by more than a user-specified threshold. (A sketch of this test follows.)
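
The second criterion might be expressed like this (a sketch; MaxChannelDifference is a hypothetical helper returning the largest per-channel difference between two colors):

#include <vector>

// Decide whether a patch should be subdivided before its hemicube is
// computed, based on the excident contrast with its neighbors.
bool NeedsSubdivision(const Patch& p,
                      const std::vector<const Patch*>& neighbors,
                      float threshold)
{
    for (const Patch* n : neighbors)
        if (MaxChannelDifference(p.excident, n->excident) > threshold)
            return true;
    return false;
}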

When lightmaps are generated for the faces, the lightmap resolution used is the highest resolution that will fit the most finely subdivided patch in the face.

Demo Scene

The Interchange World Format (IWF) is used for storing radiosity scenes. An extended version of IWF is used for storing the radiosity-specific information. I call this the Radiosity World Format (RWF).

IWF was designed at GameInstitute.com for their courseware. I used the GameInstitute Level Editing Software (GILES) to construct the demonstration scene(s). The demo scene is shown in Figure 9.


Figure 9. GILES editor and scene.

Following is a collection of screen shots of the GUI and demo scene.


Screen 1. Initial demo room rendered with OpenGL lighting model. There is a directional light shining in through the windows. The background color is white to show the window openings.


Screen 2. The directional light has been ray-traced. Only surfaces that have non-occluded rays to the directional light source are visible.


Screen 3. A patch's hemicube is shown. The left window shows the selected patch with white light rays indicating where the light is relative to the patch. The right window shows the patch's hemicube.


Screen 4. The adaptive subdivision of patches along the border of the shadow caused by the directional light source.


Screen 5. After 1 radiosity pass.


Screen 6. After 3 radiosity passes.


Screen 7. After 5 radiosity passes using lightmaps to smooth the gradient.


Screen 8. Another angle showing soft shadows in the corners and behind the pillars.


Screen 9. A freak of nature half-way through the project.