Backporting GL3 software to older versions

Today's games usually support only current hardware and the major operating systems. However, if you are nice enough to also consider older hardware and exotic software (which is never a bad thing; the more, the better), here's a cheatsheet for porting modern GL programs to older versions.

I am not an expert in graphics hardware, chips, or anything of the sort. I am merely a hobbyist who cares about people with old laptops :), so I may get many things wrong here; feel free to shoot me an e-mail if you spot a mistake I've made.


Vertex Array Objects

This one is simple enough: VAOs are mostly a convenience feature, so wherever you would bind a VAO you must instead bind the VBO and set up its vertex attributes separately (enabling and disabling them as needed). You can also support VAOs through the ARB_vertex_array_object extension, although there have been typical driver bugs where the ELEMENT_ARRAY_BUFFER binding was not saved in the VAO. I am not sure whether this extension has an EXT predecessor.
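The manual setup can be sketched as follows; the vertex layout, attribute locations, and the `mesh_vbo`/`mesh_ibo` names are illustrative assumptions, not part of any real API:

```c
/* Emulating a VAO bind on GL 2.x: bind the buffers and describe the
   attributes by hand each time you draw. */
typedef struct { float pos[3]; float uv[2]; } Vertex;

glBindBuffer(GL_ARRAY_BUFFER, mesh_vbo);
glEnableVertexAttribArray(0);                     /* position */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (void*)0);
glEnableVertexAttribArray(1);                     /* texcoord */
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (void*)(3 * sizeof(float)));

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh_ibo);
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, 0);

glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
```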


Framebuffer Objects

Arguably the most useful GL3 feature is the FBO, which is used for many fancy effects. There are three ways to go about this. The first is to use the EXT_framebuffer_object extension, which is the obvious one.
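A minimal sketch of the extension path; the EXT entry points mirror the core ones, and `color_tex` is assumed to be an already-created texture of the right size:

```c
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, color_tex, 0);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
    /* fall back to one of the other two methods */
}
```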

The second way relies on the function glCopyTexSubImage2D. Basically, whatever would have been drawn to an FBO is instead drawn to the main backbuffer and then copied into a texture with glCopyTexSubImage2D. A few problems with this:

  1. It causes a pipeline stall. I don't understand why a stall is necessary, but that's the sad truth, so call this function as rarely as possible.
  2. As you cannot render colors outside the 0..1 range to the framebuffer, HDR is not doable. Consider using MRT and saving the "exponent" value to some other texture (RGBE).

On the bright side, depth textures are supported, so this method still allows you to create dynamic shadows. The way to use them is to create a texture with internal format GL_DEPTH_COMPONENT; glCopyTexSubImage2D will automatically pick up on that.
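The copy-to-texture fallback, sketched under the assumption that the scene was just rendered to the backbuffer and that `width`/`height` match the texture's size:

```c
/* Creating a depth texture the copy can target: */
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

/* After rendering to the backbuffer, copy it into the bound texture
   (this is the call that stalls the pipeline): */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
```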

The third method is more related to MRT: we can take advantage of auxiliary buffers, the mere mention of which will make many a modern GL fanboy shudder. Simply call glDrawBuffer or glDrawBuffers with the argument GL_AUX0 + n, where n is the index of the aux buffer. If you use shaders, write to gl_FragData[n] to target the buffers you set via glDrawBuffers.

Auxiliary buffers have even more disadvantages. They are color-only, which means you can't even do dynamic shadows with them without nasty hacks such as writing depth as a color. Secondly, they have the exact same pixel format as the main framebuffer, so HDR is still not easily done. On the other hand, they are mighty handy if you want to draw to multiple buffers in one pass without FBO support, so I'd still consider them from time to time. Note that auxiliary buffers must be explicitly requested when creating the OpenGL context, so look into your windowing toolkit to see whether it supports that.
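A sketch of a two-target pass over the default framebuffer; the buffer list and the shader-side indices are assumptions for illustration:

```c
/* Draw to the backbuffer and the first aux buffer in one pass. The context
   must have been created with at least one aux buffer (toolkit-specific). */
const GLenum bufs[2] = { GL_BACK_LEFT, GL_AUX0 };
glDrawBuffers(2, bufs);
/* In the fragment shader, gl_FragData[0] now goes to GL_BACK_LEFT
   and gl_FragData[1] goes to GL_AUX0. */
```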

Limits (ignore this section at the moment)

ARB_vertex_program 2.1 / ARB_vertex_shader 3.0

  Vertex attributes:          16
  Vertex uniform components:  512
  Varying floats:             32


Shader inputs and outputs

Modern GL has switched to in and out because they describe the pipeline better (each pipeline stage consumes the outputs of the previous one).

In your vertex shader, replace all in and out occurrences with attribute and varying, respectively. In the fragment shader, all in occurrences become varying.
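As a quick before/after sketch (the variable names are illustrative):

```glsl
// Vertex shader, GLSL 1.30+ (GL3):
in vec3 position;
out vec2 uv;

// The same declarations in GLSL 1.10/1.20 (GL2):
attribute vec3 position;
varying vec2 uv;
```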

Fragment shaders in modern GL can have multiple outs, all of which are named. In older versions, you must instead write to gl_FragData[n], where n is the index of a draw buffer specified with glDrawBuffers. If you only write a single color, write to gl_FragColor instead (writing to both gl_FragData[0] and gl_FragColor is undefined behaviour).
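The fragment-side translation, sketched with made-up output names:

```glsl
// GL3:
out vec4 out_color;
out vec4 out_normal;
void main() {
    out_color  = vec4(1.0);
    out_normal = vec4(0.0, 0.0, 1.0, 0.0);
}

// GL2 fallback (indices match the order passed to glDrawBuffers):
void main() {
    gl_FragData[0] = vec4(1.0);
    gl_FragData[1] = vec4(0.0, 0.0, 1.0, 0.0);
}
```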

Shader Storage

Well... yeah, I couldn't find much information on this one. You could try the ARB_uniform_buffer_object extension, but it was only approved in 2009, when GL 3.1 was released, so realistically speaking you won't find it on older drivers or hardware. Buffer textures appeared earlier, but they also happen to require EXT_gpu_shader4. Just try squeezing in as much as you can; you know, don't be lazy or spoiled. But if you can't, you're doomed.

NVidia hardware since the GeForce 8 Series has supported EXT_bindable_uniform, the predecessor to ARB_uniform_buffer_object, which works similarly.


Instancing

Same issues as the above: you're likely to run out of vertex attribute or uniform space if you try to substitute for ARB_draw_instanced or ARB_instanced_arrays. If your target hardware supports vertex texture lookups (and probably ARB_texture_float), consider putting data such as skeletal animation data in a texture. Don't forget to profile: texture lookups in the vertex shader can be very slow.
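The crudest fallback is pseudo-instancing: loop on the CPU and upload the per-instance data as uniforms. A sketch, where `mvp_loc` and the `instances` array are assumptions:

```c
/* One draw call per instance: cheap on state changes, heavy on CPU overhead,
   but needs no extensions at all. */
for (int i = 0; i < instance_count; ++i) {
    glUniformMatrix4fv(mvp_loc, 1, GL_FALSE, instances[i].mvp);
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, 0);
}
```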

This all seems like too much work!

Well, nobody said your game has to look as good as it does on newer hardware. People with such hardware don't care that much about graphics anyway; they care about gameplay, so make sure the gameplay is good!

Dude, feature "x" has existed for "y" years, all hardware supports it!

You seem to have missed the point of this guide.