How to write portable WebGL

Feb. 22, 2013

When programming WebGL you need to be careful to make it portable. This post explains how to make WebGL portable across many devices, what to look out for, and techniques to work around the limitations you will face.

If you are interested in the reasons to use WebGL, you can check out my earlier article on that. This post also assumes a good familiarity with WebGL; if you are just starting out I recommend the learningwebgl.com lessons.

Why portable?

If you are a web developer you will likely approach WebGL like any other web technology. You might reasonably expect it to work fully in all circumstances or not at all.

This is unfortunately not true for WebGL (or for any other 3D graphics development, even Direct3D) because hardware capabilities differ, such as:

  • Amount of available VRAM and what happens when you overstep it
  • Available extensions
  • Limits of usable ESSL (GLSL for those coming from OpenGL)
  • Differences in queryable capabilities.

How to get WebGL

WebGL is obtained from a canvas object by a call to canvas.getContext.

  • The return value may be null
  • getContext may throw an exception

The name of the context will be either "experimental-webgl" or just "webgl". So to reliably get WebGL you need to follow this pattern:

var gl = null;
var names = ['experimental-webgl', 'webgl'];
for(var i=0; i<names.length && gl == null; i++){
    try{
        gl = canvas.getContext(names[i]);
    }
    catch(error){}
}

if(gl != null){
    // WebGL is supported 
}
else{
    // WebGL is not supported
}

Testing

One good way to avoid pitfalls in WebGL is to do a lot of testing on different platforms. Since different platforms use different rendering backends, such as OpenGL, OpenGL ES or Direct3D, issues tend to emerge when moving between them.

  • Test it on Linux or OSX for OpenGL
  • Test on Android phones/tablets for OpenGL ES
  • Test on Windows for Direct3D

Debugging

You should always have the WebGL debug context utility from Khronos at hand to drop into your application. It will slow down your application, but it will also point out issues quite well.
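
If you have webgl-debug.js from the Khronos repository included, wrapping your context is a one-liner; the error callback below is just a sketch:

// assumes webgl-debug.js is loaded, which exposes the global WebGLDebugUtils
var onGLError = function(err, funcName, args){
    // glEnumToString turns the numeric error code into a readable name
    console.error(WebGLDebugUtils.glEnumToString(err) + ' was caused by a call to ' + funcName);
};
gl = WebGLDebugUtils.makeDebugContext(gl, onGLError);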

Video Memory

Graphics cards have their own dedicated RAM to work with. VRAM sizes vary a lot between graphics cards and devices. The following WebGL calls allocate that RAM in bulk:

gl.bufferData(target, size, usage);
gl.bufferData(target, data, usage);

gl.texImage2D(target, level, internalformat, width, height, border, format, type, pixels);
gl.texImage2D(target, level, internalformat, format, type, source);

gl.renderbufferStorage(target, internalformat, width, height);

Other objects (besides buffers and textures) also consume a little VRAM, but commonly not nearly as much.

Can I know how much VRAM I can use?

No. Browser vendors do not expose that information because it could be used to help fingerprint a user without their consent.

What happens when I overstep it?

If you overstep the invisible VRAM bounds two things happen.

  • Depending on your platform the system might begin to swap VRAM to RAM, at this point things will get very slow.
  • At some point in allocating VRAM beyond possible bounds, the browser will kill your WebGL context.

How do I deal with VRAM limits?

There are three things you can do to mitigate VRAM issues in your application.

  1. Use as little VRAM as possible; this lessens the chance of bad performance and of losing your context.
  2. Allocate VRAM gradually over multiple frames and watch for the moment the FPS suddenly drops; at that point you have hit VRAM swapping (see the sketch after this list).
  3. Deal with context loss (explained in the next section).
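
As a rough sketch of the second point (the 100ms threshold and the per-frame allocation size are arbitrary):

var textures = [];
var last = performance.now();

var allocateStep = function(){
    var now = performance.now();
    var delta = now - last;
    last = now;

    // a sudden spike in frame time suggests the driver has started swapping VRAM
    if(delta > 100 && textures.length > 0){
        gl.deleteTexture(textures.pop()); // back off a little and stop allocating
        return;
    }

    // allocate roughly 4MB of VRAM per frame (a 1024x1024 RGBA texture)
    var texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1024, 1024, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
    textures.push(texture);

    requestAnimationFrame(allocateStep);
};
requestAnimationFrame(allocateStep);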

WebGL Context Loss

There are a number of circumstances in which your WebGL context will get killed. This is known as context loss.

This happens in the following circumstances:

  • When you overstep VRAM limits
  • If a shader takes an extraordinarily long time to compile
  • On some mobile hardware, when you switch to a different browser tab
  • If too many WebGL contexts are open in the browser simultaneously
  • When your browser vendor decides to do it, seemingly at random

Focus-related context losses should not happen as long as your context has focus, but that is no guarantee that a loss will not happen for other reasons.

How to detect context loss?

The canvas exposes two events to detect context loss and restoration:

canvas.addEventListener('webglcontextlost', function(){
    // context is lost
}, false);

canvas.addEventListener('webglcontextrestored', function(){
    // context is restored
}, false);

You can also query on a context if it is lost:

gl.isContextLost();

Trigger context loss

You can use the WEBGL_lose_context extension to intentionally trigger a context loss and restoration for test purposes.

var ext = gl.getExtension('WEBGL_lose_context');
ext.loseContext(); // trigger a context loss
ext.restoreContext(); // restores the context

How do I deal with context loss?

There are really two options. The first is to ignore it, since context loss can be very difficult to deal with depending on your use case and application architecture.

The second option is to try to restore your lost resources when the context is restored, as outlined in this page by Khronos (a rough sketch follows the list below).

There are some drawbacks to context loss handling that limit its practical use:

  • You cannot save your resources before a loss happens.
  • Resources (such as computed textures) that only live on the GPU and take a long time to produce will be lost unrecoverably.
  • Some variants of application architecture that make writing code easy, make dealing with context loss hard.
  • If you lose context because of using too many resources, restoring the resources will result in another context loss.
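
As a rough sketch of the second option (setupResources, update and frameId are hypothetical names from your own application):

var frameId;

canvas.addEventListener('webglcontextlost', function(event){
    // preventDefault signals that you intend to handle restoration
    event.preventDefault();
    cancelAnimationFrame(frameId); // stop the render loop
}, false);

canvas.addEventListener('webglcontextrestored', function(){
    setupResources(); // hypothetical: re-create buffers, textures and shaders
    frameId = requestAnimationFrame(update); // resume the render loop
}, false);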

Antialiasing

Multisample antialiasing in WebGL is only available when drawing directly to the canvas. It may not be available at all depending on your platform, GPU and driver.

How to request antialiasing?

By default antialiasing is enabled. You can change the preference when you request a WebGL context:

var gl = canvas.getContext('webgl', {antialias:true})

How to detect antialiasing?

The WebGLContextAttributes carry the actually observed value for antialias as true or false.

var antialias = gl.getContextAttributes().antialias;

You can also query the coverage size of antialiasing:

var size = gl.getParameter(gl.SAMPLES);

If the value is 0 you do not have antialiasing. A value of 4 would indicate 4x MSAA (4 samples per pixel).

How to deal with no antialiasing?

Apart from simply ignoring the problem and accepting the degraded quality, there are ways to supply your own antialiasing.

  • You can implement the latest version of FXAA (3.11). Note that this is not a good substitute for MSAA in all cases.
  • You can implement supersample antialiasing (SSAA) by either rendering your scene at twice or four times the size and then downscaling, or by rendering the scene 4 or 16 times with small offsets applied in normalized device coordinates (a sketch of the first variant follows below).
  • You can implement morphological antialiasing.
  • An edge detection (such as Sobel or Laplace) can be used on the depth and/or normals to compute weights for a gaussian blur.
  • Any of the above methods may be accelerated by multi-frame accumulation and temporal reprojection.

There are many more subtle ways to achieve antialiasing, those are just the most obvious.
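
As a rough sketch of the first SSAA variant, you can render into a drawing buffer twice the display size in each dimension and let the browser scale the canvas down via CSS. Note that the quality of the browser's downscaling filter is not guaranteed; rendering to a texture and downsampling yourself gives more control.

// render into a drawing buffer twice the display size (2x2 SSAA)
var displayWidth = canvas.offsetWidth;
var displayHeight = canvas.offsetHeight;

canvas.width = displayWidth * 2;
canvas.height = displayHeight * 2;
canvas.style.width = displayWidth + 'px'; // keep the displayed size unchanged
canvas.style.height = displayHeight + 'px';

gl.viewport(0, 0, canvas.width, canvas.height);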

Shader Problems

Shader problems fall in roughly two categories:

  • Your shader might simply not be valid. This is easily detected on your development machine, since WebGL performs very rigorous validation of shaders.
  • Specific rendering backends (such as Direct3D) might choke on shaders that are otherwise valid, causing the WebGL context to be lost or strange, unexplained errors to be thrown.

Dealing with shader bugs

If a shader fails to compile after you supplied the source with gl.shaderSource and asked for a compile with gl.compileShader you can query it:

if(gl.getShaderParameter(shader, gl.COMPILE_STATUS) == false){
    var error = gl.getShaderInfoLog(shader);
}

The resulting errors typically follow the same format across platforms and can look like this:

ERROR: 0:6: 'size' : undeclared identifier 
ERROR: 0:9: '=' :  cannot convert from 'const mediump float' to '3-component vector of float'

You can parse them in the following way:

var lines = error.split('\n');
for(var i=0; i<lines.length; i++){
    var match = lines[i].match(/ERROR: (\d+):(\d+): (.*)/);
    if(match){
        var fileno = parseInt(match[1], 10)-1;
        var lineno = parseInt(match[2], 10)-1;
        var message = match[3];
    }
}

If you insert line directives into your GLSL you can modify the fileno and lineno you will get, for example:

#line 10 100
float foo = size; // size would not exist

Would lead to the following error

ERROR: 100:11: 'size' : undeclared identifier

You can use this to build a debugging facility for your shaders that can pinpoint from which line and file an error originates. Note that files can only be denoted by an integer, not by a string.
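
A rough sketch of such a facility (the chunk names and source variables are hypothetical):

// concatenate named source chunks, tagging each with a #line directive
var chunks = [
    {name: 'common.glsl',   source: commonSource},   // reported as file 0
    {name: 'lighting.glsl', source: lightingSource}  // reported as file 1
];

var source = '';
for(var i=0; i<chunks.length; i++){
    // "#line 0 i" makes the first line of chunk i be reported as line 1 of file i
    source += '#line 0 ' + i + '\n' + chunks[i].source + '\n';
}

// map a fileno parsed from the error log back to a chunk name
var chunkName = function(fileno){
    return chunks[fileno].name;
};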

Fragment Shader precision

In fragment shaders WebGL requires you to declare the desired default precision for ints and floats (a separate declaration for vec2, vec3 or vec4 is not needed; the float precision covers them). If the requested precision is not available the driver may fall back to a lower one, but you should not rely on this and instead query support as described below.

You can include this by default:

precision highp int;
precision highp float;

Note that for mobiles you will probably want to replace this with mediump, since highp might be slower or not supported at all in the fragment shader.

How to query fragment shader precision?

You can query precision for formats in the following way:

var highp = gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.HIGH_FLOAT);
var highpSupported = highp.precision != 0;

If highp is not supported, then highp.precision will be 0.
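
Building on that query, a small sketch that picks a default precision and prepends it to a fragment shader (the fragmentSource variable is hypothetical):

var highp = gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.HIGH_FLOAT);
var precision = (highp.precision != 0) ? 'highp' : 'mediump';

// prepend the chosen default precision to the shader body before compiling
var header = 'precision ' + precision + ' float;\n';
var finalSource = header + fragmentSource;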

How to query precision inside a shader?

ESSL predefines the macro GL_FRAGMENT_PRECISION_HIGH to be 1 if highp is supported, so you can test it like this:

#ifdef GL_FRAGMENT_PRECISION_HIGH
    // highp is supported
#else
    // highp is not supported
#endif

Texture/Renderbuffer size limits

Textures and Renderbuffers cannot assume arbitrary sizes.

How to query possible sizes?

You can query the maximum renderbuffer size and the maximum texture sizes for 2D textures and cube map faces as parameters. The value returned is the maximum width/height you can use.

var maxTexSize = gl.getParameter(gl.MAX_TEXTURE_SIZE);
var maxCubeSize = gl.getParameter(gl.MAX_CUBE_MAP_TEXTURE_SIZE);
var maxRenderbufferSize = gl.getParameter(gl.MAX_RENDERBUFFER_SIZE);

The cube map and texture size limits matter to gl.texImage2D; the renderbuffer size limit matters to gl.renderbufferStorage as well as to the canvas width/height.
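
For example, a small sketch that clamps a desired texture size to the reported limit (the 4096 here is a hypothetical target):

var maxTexSize = gl.getParameter(gl.MAX_TEXTURE_SIZE);
var size = Math.min(4096, maxTexSize); // use the desired size only if the device allows it

var texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);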

What are safe texture sizes?

WebGL Stats measures maximum texture sizes. A safe value for 2D textures is 2048x2048. Going bigger drops support to around 90% of people, and bigger still loses even more.

What are safe renderbuffer sizes?

Renderbuffers are usually fine up to the size of the device's display. Larger sizes may be supported, but that is not guaranteed.

Texture Unit limits

When texturing, each texture you use occupies a "slot". These slots are known as "texture units" and their number varies between machines and between shading stages.

How to query texture units?

There are three values you can query:

  • The max vertex stage texture units. These matter to the vertex shader.
  • The max fragment stage texture units. These matter to the fragment shader.
  • The max combined texture units. These matter to both shading stages combined.

The latter point on combined texture units relates to the fact that if you use the same texture in both the vertex and fragment shader, it will only consume one unit. The combined value may be higher than the sum of both vertex and fragment texture units. However it could also be lower, in which case you have to watch out.

var vertexUnits = gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS);
var fragmentUnits = gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS);
var combinedUnits = gl.getParameter(gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS);

What are safe texture unit counts?

This differs between the vertex and fragment shader. WebGL Stats collects information on these counts.

  • Vertex Shaders: the unit count may be 0. This indicates that no vertex shader texturing is supported, as is the case for 15% of people who have WebGL. If you really need that feature, a safe unit count is 4, which is still supported by 85%.
  • Fragment shaders: 16 texturing units are supported by 99.5% of people with WebGL.

Maximum Vertex Attributes

Vertex data is handed to WebGL by creating buffers, filling them and binding them to a vertex attribute location with a call to gl.vertexAttribPointer. Like for textures, a limited amount of slots is available.

How to query maximum vertex attributes?

var maxVSattribs = gl.getParameter(gl.MAX_VERTEX_ATTRIBS);

What is a safe vertex attribute count?

WebGL Stats collects this information and it is safe to use 16 vertex attributes supported by 99.9%.

Uniform Limits

Uniforms are passed to shaders by calls to gl.uniform[1234][fi] and gl.uniform[1234][fi]v. For instance gl.uniform4f. Each uniform you pass is aligned to 4 floats. So even if you pass a two float uniform, you will consume 4 floats. If you pass an array of 5 floats, you will consume 8 floats.

How do I query how many uniforms I can use?

You can query the value for the maximum amount of 4-component floats by these calls:

var maxVertexShader = gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS);
var maxFragmentShader = gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS);

What are safe uniform counts?

WebGL Stats collects this information. There is a lot of variation in the values but relatively safe are:

  • vertex shader: 253 4-float components supported by 99.6%.
  • fragment shader: 29 4-float components supported by 100% or 221 4-float components supported by 90%.

Varying limits

Varyings are used to pass values from the vertex shader to the fragment shader. Like uniforms they are aligned to 4-component floats.

How do I query varying limits?

The varying limits are expressed in how many 4-component floats you can use.

var maxVaryings = gl.getParameter(gl.MAX_VARYING_VECTORS);

What are safe varying counts?

WebGL Stats collects this information. It is safe to use 8 varyings supported by 100%. 10 varyings are still supported by 90%.

Performance Differences

Because GPUs and devices have very different performance characteristics, your WebGL code may run slower or faster on machines other than the one you wrote it on.

How do I detect performance problems?

There are two ways to do this, the first method relies on gl.finish (not recommended, but sometimes useful).

var start = performance.now(); // or Date.now()
// rendering here
gl.finish();
var end = performance.now();
var delta = end - start;

The variable delta will contain the time it took to render. The call to gl.finish forces your JavaScript to wait until WebGL is done rendering, which will slow down your application's performance. It is, however, a fairly accurate way to measure it.

A second method is to measure FPS:

var last = performance.now();
var update = function(){
    var now = performance.now();
    var delta = now - last;
    last = now;
    
    // rendering here

    requestAnimationFrame(update);
};
requestAnimationFrame(update);

The idea behind this is to measure how long it took for the next frame to arrive. This method is somewhat limited as a browser will cap framerates at 30 to 60 FPS. However you can reliably detect framerate this way and react to slow framerates.

How to deal with performance differences?

The simplest strategy is to use as little performance as you can get away with, to ensure that your application will run satisfactorily for most people.

You can render at a reduced size; this is accomplished by setting the canvas's width/height. For instance:

canvas.width = canvas.offsetWidth/2;
canvas.height = canvas.offsetHeight/2;

Some usecases can expect a relatively fixed range of hardware (such as when developing for mobiles, or for high end machines) and there you can tune your performance to these targets.

Beyond these strategies, you can employ various level-of-detail schemes, either scaling dynamically at runtime or exposing quality settings to the user.
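
As a rough sketch, dynamic scaling can combine the FPS measurement from above with the reduced-size technique (the thresholds and step size are arbitrary, and the CSS display size of the canvas is assumed to be fixed):

var scale = 1.0;
var last = performance.now();

var resize = function(){
    canvas.width = Math.floor(canvas.offsetWidth * scale);
    canvas.height = Math.floor(canvas.offsetHeight * scale);
    gl.viewport(0, 0, canvas.width, canvas.height);
};

var update = function(){
    var now = performance.now();
    var delta = now - last;
    last = now;

    // shrink when frames take too long, grow back slowly when they are fast again
    if(delta > 40 && scale > 0.5){
        scale -= 0.05;
        resize();
    }
    else if(delta < 20 && scale < 1.0){
        scale += 0.05;
        resize();
    }

    // rendering here

    requestAnimationFrame(update);
};
resize();
requestAnimationFrame(update);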

Valid Framebuffer Objects

A framebuffer object is a way to render things off-screen and into textures. Not all combinations/sizes of framebuffers may be supported.

How to test if a framebuffer object is valid?

After you have created a framebuffer with gl.createFramebuffer() and attached some buffers/textures you should execute the following validation:

var checkFramebuffer = function(){
    // assumes the framebuffer is bound
    var valid = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
    switch(valid){
        case gl.FRAMEBUFFER_UNSUPPORTED:
            throw 'Framebuffer is unsupported';
        case gl.FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
            throw 'Framebuffer incomplete attachment';
        case gl.FRAMEBUFFER_INCOMPLETE_DIMENSIONS:
            throw 'Framebuffer incomplete dimensions';
        case gl.FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT:
            throw 'Framebuffer incomplete missing attachment';
    }
};

Extensions

WebGL supports a variety of extensions. Some of these are available for many people, some are not. You can get support levels for most extensions from WebGL Stats.

How to test if an extension is supported?

You can either test if the extension name is in the list of supported extensions like so:

var extensions = gl.getSupportedExtensions();
var extname = 'OES_standard_derivatives';
var idx = extensions.indexOf(extname);
if(idx != -1){
    // extension supported
}
else{
    // extension not supported
}

Or you can try getting the extension and see if you get null:

var extname = 'OES_standard_derivatives';
var ext = gl.getExtension(extname);
if(ext != null){
    // extension supported
}
else{
    // extension not supported
}

Depth Textures

Depth textures allow you to capture both the rendering output and the depth in one renderpass. They are used to accelerate rendering when you need to look up both later. Brandon Jones has a good tutorial on how to use them.

How can I detect depth texture support?

You query the extension for depth textures:

var ext = gl.getExtension('WEBGL_depth_texture');
if(ext != null){ /* depth texture supported */ }
else{ /* depth texture not supported */ }

How to deal with no depth textures support?

About 40% have support for depth textures.

If you have floating point texture support and you do not use the alpha channel, you can render depth into the alpha channel:

gl_FragColor = vec4(color, depth);

If you need the alpha channel you need to render the depth in a second renderpass.

Without floating point texture support you can render the depth in a second pass into a byte texture, packing it into 2 bytes:

// scale the depth to between 0 and 1
float scaledDepth = clamp((depth-near)/(far-near), 0.0, 1.0);
float highByte = scaledDepth;
float lowByte = fract(scaledDepth*255.0);
gl_FragColor = vec4(highByte, lowByte, 0.0, 0.0);

You can then unpack when looking it up:

vec2 bytes = texture2D(source, texcoord).xy;
float scaledDepth = bytes.x + bytes.y/255.0;

Floating Point Textures

Floating point textures are useful for a range of problems, from shadow map computations to deferred rendering and terrain displays.

However, they have additional limitations compared to normal textures, such as:

  • They might not be available at all
  • You might not be able to render to them
  • Linear interpolation might not work

How can I detect floating point texture support?

Floating point textures come in two flavors: single float at 4 bytes per channel (OES_texture_float) and half float at 2 bytes per channel (OES_texture_half_float).

var singleFloat = gl.getExtension('OES_texture_float');
if(singleFloat != null){ /* single float supported */ }
else{ /* single float not supported */ }

var halfFloat = gl.getExtension('OES_texture_half_float');
if(halfFloat != null){ /* half float supported */ }
else{ /* half float not supported */ }

How can I detect if I can render to floating point textures?

You can check that by attaching a floating point texture to a framebuffer and running the validation.

// setup the texture
var texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(
    gl.TEXTURE_2D,
    0,
    gl.RGBA,
    2, 2, // width and height
    0,
    gl.RGBA,
    gl.FLOAT, // or halfFloat.HALF_FLOAT_OES
    null
);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

// setup the framebuffer
var framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(
    gl.FRAMEBUFFER,
    gl.COLOR_ATTACHMENT0,
    gl.TEXTURE_2D, 
    texture,
    0
);

// check the framebuffer
var check = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
if(check == gl.FRAMEBUFFER_COMPLETE){
    // rendering to that texture is supported
}
else{
    // rendering to that texture is not supported
}

// cleanup
gl.deleteTexture(texture);
gl.deleteFramebuffer(framebuffer);
gl.bindTexture(gl.TEXTURE_2D, null);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);

How can I detect linear filtering support?

There is no extension available for that yet. There are two proposed extensions (OES_texture_float_linear and OES_texture_half_float_linear) to solve this problem. When they become available, this is how it should work:

var singleFloatLinear = gl.getExtension('OES_texture_float_linear');
if(singleFloatLinear != null){ /* single float linear supported */ }
else{ /* single float linear not supported */ }

var halfFloatLinear = gl.getExtension('OES_texture_half_float_linear');
if(halfFloatLinear != null){ /* half float linear supported */ }
else{ /* half float linear not supported */ }

How to deal with no linear filtering support?

You can write your own linear filter lookup function in ESSL.

// the size parameter would be the texture size as width and height
vec4 texture2DLerp(sampler2D source, vec2 texcoord, vec2 size){
    // texel size, fractional position and centroid UV
    vec2 texel = 1.0/size;
    vec2 f = fract(texcoord*size+0.5);
    vec2 uv = floor(texcoord*size+0.5)/size;

    // lookup the 4 corners
    vec4 lb = texture2D(source, uv);
    vec4 lt = texture2D(source, uv+vec2(0.0, texel.y));
    vec4 rb = texture2D(source, uv+vec2(texel.x, 0.0));
    vec4 rt = texture2D(source, uv+texel);

    // interpolation in y
    vec4 a = mix(lb, lt, f.y);
    vec4 b = mix(rb, rt, f.y);

    // interpolation in x
    return mix(a, b, f.x);
}

How do I pack normals without floating point textures?

Normals can be packed in multiple ways. If you have 3 byte channels available you can just pack a normal into those:

// scales a normal to between 0 and 1
vec3 scaleNormal = normal*0.5+0.5;
gl_FragColor = vec4(scaleNormal, 1.0);

Unpack as follows:

vec3 scaleNormal = texture2D(source, texcoord).xyz;
vec3 normal = normalize(scaleNormal*2.0-1.0);

You may want to use only 2 bytes so you can pack depth and normals together. There are multiple ways to do this; this site compares good normal packing methods.

How do I pack HDR colors without floating point textures?

If you have an alpha channel free to use, you can store your color in an exponential format.

vec4 packColor(vec3 color){
    float maxColor = max(max(color.r, color.g), color.b);
    float exponent = ceil(log(maxColor)/log(2.0));
    float scaledExp = (exponent+128.0)/255.0;
    float f = pow(2.0, exponent);
    return vec4(color/f, scaledExp);
}

Unpack as follows:

vec3 unpackColor(vec4 color){
    float exponent = color.a*255.0-128.0;
    float f = pow(2.0, exponent);
    return color.rgb*f;
}

The major drawback of this method of course is color banding. If you know a better one, please let me know.