Apr. 18, 2011 In part 2 of this series I explained how to render the sky. This entry is about how to compute an irradiance environment map from that cubemap.

Lighting is an important part of graphics. Two of the simplest lighting models are diffuse (Lambertian) and specular reflection. These models can look fairly boring when all you have is one or two lights. A solution to this is to use a cubemap for lighting: in principle you get an infinite number of lights, in practice you're bounded by how large you can make your cubemap.

## Demo

You can try the live demo. The source is available in the Mercurial repository.

## Screenshots

If you cannot run the demo, below you will find some screenshots.

## Cubemap Lookup

A cubemap is a special kind of texture that has 6 sides, each facing the positive or negative direction of one of the 3 principal axes (x, y or z).
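To illustrate which side a given direction addresses: the hardware selects the face by the component with the largest magnitude. A small JavaScript sketch of that selection rule (illustrative only, `cubemapFace` is a hypothetical helper, not part of the demo):

```javascript
// Pick the cubemap face a direction vector addresses: the face is
// determined by the component with the largest absolute value.
function cubemapFace(x, y, z) {
    var ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
    if (ax >= ay && ax >= az) return x > 0 ? '+x' : '-x';
    if (ay >= ax && ay >= az) return y > 0 ? '+y' : '-y';
    return z > 0 ? '+z' : '-z';
}
```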

Texture lookups into cubemaps are performed with a direction vector instead of a UV coordinate.

```
vec4 diffuse_color = textureCube(diffuse, normal);
```

If you are going to use a cubemap for lighting, you address it with a world-space direction. For diffuse lookups the surface normal of the model is used.

```
vec3 normal = normalize(v_normal);
vec4 diffuse_color = textureCube(diffuse, normal);
```

For specular lookups you reflect the viewing direction around the surface normal.

```
vec3 specular_normal = reflect(eye_normal, normal);
vec4 specular_color = textureCube(specular, specular_normal);
```

The illustration below shows the concept. Please consult the cube lighting shader file for the details of the implementation.
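For reference, GLSL's built-in `reflect(I, N)` computes `I - 2*dot(N, I)*N`, assuming `N` is normalized. The same formula in plain JavaScript (illustrative only, operating on 3-element arrays):

```javascript
// JavaScript equivalent of GLSL reflect(I, N): I - 2*dot(N, I)*N.
// N is assumed to be normalized.
function reflect(I, N) {
    var d = 2 * (N[0]*I[0] + N[1]*I[1] + N[2]*I[2]);
    return [I[0] - d*N[0], I[1] - d*N[1], I[2] - d*N[2]];
}
```

A ray looking straight down the negative z axis, hitting a surface facing positive z, bounces straight back.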

A cubemap filled with a normally rendered scene can only represent mirror-like reflection. This is interesting, but diffuse and semi-specular reflections would be missing. Using Lambert's law, you can imagine a cubemap as a large number of lights, and you have to compute the influence of each light (that is, each pixel of the cubemap) on a given lookup direction. It would obviously be quite expensive to perform that many lookups in real time. Fortunately the irradiance environment map can be computed once, and then reused for every frame (until the lighting changes).
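The "cubemap as many lights" idea can be sketched on the CPU. This is illustrative only; `irradiance` is a hypothetical helper, and in the real shader the light directions and colors come from the cubemap texels:

```javascript
// Sum the contribution of a list of directional "lights" (cubemap texels)
// for a surface normal, weighting each by Lambert's cosine law.
// Lights facing away from the normal contribute nothing.
function irradiance(normal, lights) {
    var sum = 0;
    for (var i = 0; i < lights.length; i++) {
        var L = lights[i];
        var lambert = Math.max(0,
            normal[0]*L.dir[0] + normal[1]*L.dir[1] + normal[2]*L.dir[2]);
        sum += L.intensity * lambert;
    }
    return sum;
}
```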

## Cubemap Scaling

Each side of the cubemap I start with has a size of 128x128 pixels. It would be quite expensive to convolve at that resolution, so I scale each face down from 128x128 to 16x16 in 3 halving passes. The core of the shader to do this looks like this:

```
vec4 sample(float xoff, float yoff){
    vec2 off = gl_FragCoord.xy*2.0 + vec2(xoff, yoff);
    vec3 normal = get_world_normal(off, viewport*2.0);
    return textureCube(source, normal);
}

void main(void){
    vec4 color = (
        sample(-0.5, -0.5) +
        sample(-0.5, +0.5) +
        sample(+0.5, -0.5) +
        sample(+0.5, +0.5)
    ) * 0.25;
    gl_FragColor = vec4(color.rgb, 1.0);
}
```

The same shader is applied at each level before proceeding to the next, so the passes are rendered in this order:

```
this.scattering.render();
this.level1.render();
this.level2.render();
this.level3.render();
```

Please consult the downsample shader and the JavaScript implementation for more details.
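What each pass computes is an ordinary 2x2 box filter: four source pixels average into one destination pixel, halving the resolution each time (128 → 64 → 32 → 16). A CPU sketch of one pass on a flat grayscale array (illustrative only; the demo does this on the GPU):

```javascript
// One 2x2 box-filter pass: averages each 2x2 block of a size x size
// grayscale image into one pixel of a (size/2) x (size/2) image.
function downsample2x(src, size) {
    var half = size / 2, dst = new Float32Array(half * half);
    for (var y = 0; y < half; y++) {
        for (var x = 0; x < half; x++) {
            dst[y*half + x] = 0.25 * (
                src[(2*y  )*size + 2*x] + src[(2*y  )*size + 2*x+1] +
                src[(2*y+1)*size + 2*x] + src[(2*y+1)*size + 2*x+1]);
        }
    }
    return dst;
}
```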

## Cubemap Convolution

Convolution applies a computation to each entry in a dataset while taking all other entries into account. For a cubemap this means that each pixel needs to consider every other pixel in order to produce the result.

The desired result in this case, for a given cubemap pixel Pn, is the sum of all other pixels Pm, each multiplied by the cosine of the angle between Pn and Pm. This function is evaluated in the fragment shader. Since each fragment (in a postprocessing fashion) represents one pixel of the cubemap, the eye ray of that pixel is Pn. I then need to work out where to look up all the other pixels Pm. I can get this by applying the 45 degree inverse projection matrix to the device coordinate of each pixel of that side. So I need to work out the device coordinate stepping first.

```
const float size = 16.0;
const float start = ((0.5/size)-0.5)*2.0;
const float end = -start;
const float incr = 2.0/size;
```
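These constants step through the centers of the 16 texels of a face in normalized device coordinates (-1 to 1). The same arithmetic in JavaScript, to see the values it produces:

```javascript
// Reproduce the shader's stepping constants for a 16x16 face in
// normalized device coordinates (-1..1): samples land on texel centers.
var size = 16.0;
var start = ((0.5/size) - 0.5) * 2.0;  // center of the first texel: -0.9375
var end = -start;                      // center of the last texel: 0.9375
var incr = 2.0 / size;                 // one texel in NDC: 0.125
var samples = [];
for (var x = start; x <= end; x += incr) samples.push(x);
```

All values are exact binary fractions, so the loop hits the end point exactly and produces 16 samples per axis.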

Then I need to set up a running variable to collect the result in, get the eye direction for that fragment and start iterating the loop.

```
void main(){
    vec4 result = vec4(0.0);
    vec3 eyedir = get_eye_normal(), ray;

    for(float xi=start; xi<=end; xi+=incr){
        for(float yi=start; yi<=end; yi+=incr){
            ray = normalize(
                (inv_proj * vec4(xi, yi, 0.0, 1.0)).xyz
            );
        }
    }
```

When I've worked out Pm for one side this way, I can reuse it for each of the 6 sides of the cubemap; all I need to do is rotate it appropriately. This can be done with 3x3 matrices set up to act as a coordinate system conversion.

```
const vec3 x = vec3(1.0, 0.0, 0.0);
const vec3 y = vec3(0.0, 1.0, 0.0);
const vec3 z = vec3(0.0, 0.0, 1.0);

const mat3 front = mat3(x, y, z);
const mat3 back = mat3(x, y, -z);
const mat3 right = mat3(z, y, x);
const mat3 left = mat3(z, y, -x);
const mat3 top = mat3(x, z, y);
const mat3 bottom = mat3(x, z, -y);
```
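Note that GLSL's `mat3(a, b, c)` takes column vectors, so multiplying one of these matrices with the forward ray (0, 0, 1) simply selects its third column, i.e. that face's outward axis. A quick JavaScript check of that (columns stored as arrays; illustrative only):

```javascript
// Multiply a column-major 3x3 matrix (like GLSL mat3(c0, c1, c2)) with a
// vector: M*v = c0*v.x + c1*v.y + c2*v.z.
function mulMat3(cols, v) {
    var r = [0, 0, 0];
    for (var c = 0; c < 3; c++)
        for (var i = 0; i < 3; i++)
            r[i] += cols[c][i] * v[c];
    return r;
}
var x = [1, 0, 0], y = [0, 1, 0], z = [0, 0, 1];
var neg = function(v){ return [-v[0], -v[1], -v[2]]; };
var front = [x, y, z],      back = [x, y, neg(z)];
var right = [z, y, x],      left = [z, y, neg(x)];
// The forward ray (0, 0, 1) rotates to each face's outward direction:
var fwd = [0, 0, 1];
```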

By multiplying the ray with one of these matrices, I can rotate it to each of the desired cubemap faces. I've encapsulated this logic in a sample function that looks like this:

```
vec4 sample(mat3 side, vec3 eyedir, vec3 base_ray){
    vec3 ray = side*base_ray;
    float lambert = max(0.0, dot(ray, eyedir));
    float term = pow(lambert, specularity)*base_ray.z;
    return vec4(textureCube(source, ray).rgb*term, term);
}
```

This rotates the ray to the given side, computes the lambert term and performs the lookup in the cubemap. Please note three details about this function:

• It multiplies the term by base_ray.z; this is done to account for perspective distortion (due to projection).
• The lambert term is taken to the power of specularity, so the same shader can compute diffuse and specular irradiance maps.
• The alpha channel of the returned result holds the computed term, so the summed result can later be divided by it.
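The last point is just a weighted average in disguise: accumulate color times weight in the color channels and the weight itself in alpha, then divide at the end. A one-channel sketch with hypothetical values:

```javascript
// Weighted average via an "alpha" accumulator, mirroring the shader:
// rgb collects color*term, a collects the term itself, and dividing
// by a at the end normalizes the sum of weights to 1.
function weightedAverage(samples) {
    var rgb = 0, a = 0;
    for (var i = 0; i < samples.length; i++) {
        rgb += samples[i].color * samples[i].term;
        a += samples[i].term;
    }
    return rgb / a;
}
```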

Now I can simply sum up the sampled result for each side:

```
for(float xi=start; xi<=end; xi+=incr){
    for(float yi=start; yi<=end; yi+=incr){
        ray = normalize(
            (inv_proj * vec4(xi, yi, 0.0, 1.0)).xyz
        );
        result += sample(front, eyedir, ray);
        result += sample(back, eyedir, ray);
        result += sample(top, eyedir, ray);
        result += sample(bottom, eyedir, ray);
        result += sample(left, eyedir, ray);
        result += sample(right, eyedir, ray);
    }
}
result /= result.w;
gl_FragColor = vec4(result.rgb, 1.0);
```

Please see the shader and JavaScript implementation for the full details of how the convolution works.

## Performance Considerations

It is certainly not cheap to do over two million cubemap lookups (16x16x6 for each of the 16x16x6 pixels in the cubemap). On the other hand, it is surprisingly fast, at least on my graphics card (a GTX 460).
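The lookup count can be checked with a little arithmetic:

```javascript
// Total cubemap lookups for one convolution pass:
// each of the 16*16*6 output pixels samples all 16*16*6 directions.
var pixels = 16 * 16 * 6;       // 1536 pixels in the cubemap
var lookups = pixels * pixels;  // "over two million"
```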

It would seem that Fermi-architecture GPUs can run such algorithms quite efficiently. Since each fragment accesses essentially the same data over and over again, it is likely making good use of the cache.

## Caveats

The result is not perfect; due to the heavy downsampling it suffers somewhat from varying luminosities, especially at cubemap borders.

Unfortunately WebGL does not implement the seamless cubemap extension, so some linear-interpolation artefacts due to edge clamping can be visible (although they are likely not very noticeable when used with varied geometry and textures).

It would be possible, but somewhat expensive, to run this computation every frame. This is not required, however; it only needs to be run when the lighting changes.

## Alternatives

This GPU Gem implements irradiance environment lighting by working out the spherical harmonics coefficients for an environment map and storing them in a texture. I think the main disadvantage of that technique is that it requires 54 texture lookups at frame render time in order to fetch the required coefficients.

## Next Part

I hope these techniques are useful to you. In part 4 I'm going to talk about screen space ambient occlusion.