Last week I was working on improving the management of OpenGL framebuffers. This is needed by the new lens flare rendering, and also for planned use cases in the future.
The framebuffer is essentially an area of (graphics) memory where drawing takes place. Once a frame has been rendered, the contents of the framebuffer become visible on screen.
The core idea was to introduce more flexible framebuffers that allow rendering to GL textures in such a way that all the needed pixel components (color, depth and stencil) can be accessed from shaders. For example, the new lens flare renderer relies on access to depth information.
I added a class called GLFramebuffer for this purpose: we can now use the same class for the main window, Oculus Rift, post-processing, and busy transition framebuffers. In each case, the frame's depth information can be accessed via a texture in shaders.
The result is a great deal of added flexibility when drawing the frame. For instance, when drawing 3D weapon HUD models it was previously necessary to clear the depth buffer beforehand so that the models wouldn't be occluded by map geometry. That solution is no longer feasible, because we need valid depth information later on during the rendering of a frame. The new flexibility allows temporarily switching to a different depth buffer while keeping the rest of the frame the same.
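To illustrate the idea, here's a context-free sketch of what such a wrapper might look like. The class and method names are hypothetical (not the engine's actual API), and the GL calls are only recorded as log entries; the real GLFramebuffer naturally issues actual GL commands:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of a GLFramebuffer-style wrapper -- not the actual
// engine class. Real GL calls (glGenFramebuffers, glFramebufferTexture2D,
// etc.) are replaced by an operation log so the structure can be shown
// and exercised without a live GL context.
struct GLFramebufferSketch {
    int width, height;
    std::vector<std::string> log;

    GLFramebufferSketch(int w, int h) : width(w), height(h) {
        log.push_back("create framebuffer object");
        // Color, depth and stencil live in textures so that shaders
        // (e.g., the lens flare renderer) can sample them later.
        log.push_back("attach color texture");
        log.push_back("attach depth/stencil texture");
    }

    // Temporarily substitute another depth buffer (e.g., for 3D weapon
    // HUD models) while keeping the color attachment intact.
    void pushDepth() { log.push_back("attach alternate depth texture"); }
    void popDepth()  { log.push_back("restore original depth texture"); }
};
```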
One big hurdle was antialiasing, or more accurately, multisampling. In practice this means that the GPU evaluates multiple sub-points within a single pixel for smoother results. Unfortunately, the version of OpenGL we are targeting does not support rendering to textures with multisampling out of the box, so we have to rely on OpenGL extensions. This means the multisampled graphics need to be drawn into a separate, multisampled buffer that is then copied to the framebuffer's non-multisampled GL texture, merging the samples in the process (a "resolve").
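The copy step can be sketched as follows. The GL calls are only named in the comments (on older GL versions the blit comes from the GL_EXT_framebuffer_blit extension rather than core); the function itself just records the order of operations, since they can't run without a GL context:

```cpp
#include <string>
#include <vector>

// Sketch of resolving a multisampled buffer into a plain texture.
// Only the order of operations is modeled here; the comments name the
// kind of GL call a real implementation would make at each step.
std::vector<std::string> resolveMultisample() {
    std::vector<std::string> ops;
    // glBindFramebuffer(GL_READ_FRAMEBUFFER, multisampledFbo)
    ops.push_back("bind multisampled FBO for reading");
    // glBindFramebuffer(GL_DRAW_FRAMEBUFFER, textureBackedFbo)
    ops.push_back("bind texture-backed FBO for drawing");
    // glBlitFramebuffer(..., GL_COLOR_BUFFER_BIT, GL_NEAREST)
    ops.push_back("blit color (samples merged)");
    // glBlitFramebuffer(..., GL_DEPTH_BUFFER_BIT, GL_NEAREST)
    ops.push_back("blit depth (samples merged)");
    return ops;
}
```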
This is pretty tricky stuff, though. For a couple of days I was really struggling to get anything valid onto the screen. As an example of the level of complexity in one of the worst-case scenarios, here's what happens when rendering in Oculus Rift mode with a post-processing shader and multisampling. If any small detail in this chain is wrong, the visible results will be wrong:
- switch target to Oculus Rift framebuffer (bigger than the main window)
- switch target to post-processing framebuffer (one per player view)
- render frame to the multisampled buffer of the post-processing framebuffer
- copy multisampled results to color and depth textures (merging samples)
- use post-processing shader to apply an effect on produced color texture, drawing to Oculus Rift framebuffer
- repeat above for the other eye (stereoscopic)
- copy the Oculus Rift framebuffer's multisampled contents to a color texture (merging samples)
- use shader to apply barrel distortion on produced color texture, drawing to window framebuffer
- copy multisampled window framebuffer to the visible buffer (merging samples)
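The steps above can also be expressed as an ordered list in code, which makes the per-eye repetition explicit. This is purely illustrative; it models only the order of operations, not the actual GL calls:

```cpp
#include <string>
#include <vector>

// The Oculus Rift rendering chain as an ordered list of steps.
// Each resolve merges the samples of a multisampled buffer into a
// plain texture that shaders can read.
std::vector<std::string> riftFrameSteps() {
    std::vector<std::string> steps;
    steps.push_back("bind Oculus Rift framebuffer");
    for (std::string eye : {"left", "right"}) {
        steps.push_back("bind post-processing framebuffer (" + eye + " eye)");
        steps.push_back("render frame into multisampled buffer (" + eye + " eye)");
        steps.push_back("resolve to color and depth textures (" + eye + " eye)");
        steps.push_back("post-process into Rift framebuffer (" + eye + " eye)");
    }
    steps.push_back("resolve Rift framebuffer to color texture");
    steps.push_back("barrel distortion into window framebuffer");
    steps.push_back("resolve window framebuffer to visible buffer");
    return steps;
}
```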
I'm still not fully confident that everything is running quite right. I will continue debugging and improving this until the stable release.
As we're rapidly closing in on the planned release date, it is quite possible that I won't have enough time to finish integrating and adjusting the new lens flares. However, this week we intend to merge all ongoing work into the master branch and start stabilizing it for the release. If there's still some time after we've reached an adequate level of stability, I will continue working on the lens flares. I would really like to get them into 1.13, though.