Auto Generating Texture Atlases
<i>This post was originally made by <b>danij</b> on the dengDevs blog. It was posted under the category: Engine.</i>
As you may be aware there are currently various issues with the management of the font patches as used (primarily) for the in-game menu and the player message log. The main problems being:
<ul>
<li>High res font patches are currently ignored.</li>
<li>"Upscale and sharpen" processing is only done on initial load.</li>
</ul>
A couple of days ago I began musing on how to address these problems, but I kept bumping into architectural shortcomings that either flat-out prevented or severely limited whatever scheme I came up with. Throughout this process I kept reminding myself of the many secondary issues we have with these resources, and that led me to thinking about larger changes.
It occurred to me that the best solution for resolving the current problems was to abstract away the game-side drawing of the UI patches completely (i.e., do not use lump names/numbers for patches) and to integrate patch texture handling with the <tt>gltexture</tt> stuff. This then led to further thoughts on improving render quality of the UI...
[caption id="attachment_232" align="alignnone" width="511" caption="Texture atlas preview"]<img src="http://www.dengine.net/blog/wp-content/uploads/2009/05/texatlas1.png" alt="Texture atlas preview" title="texatlas1" width="511" height="510" class="size-full wp-image-232" />[/caption]
The above image is a preview of a texture produced by a new algorithm I've been working on, which automatically generates a texture atlas from a list of patch lump names, combines them into one texture, and then generates a texture coordinate LUT.
The packing algorithm itself is fundamentally based on a (specialised) kdtree-like approach, which recursively subdivides the available texture space at texture insert time.
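To illustrate the idea, here is a minimal sketch of that kind of kd-tree-like packer: each node covers a rectangular region of the atlas, and inserting a patch either claims a free leaf exactly or splits the leaf into a used strip plus a free remainder. All names (<tt>node_t</tt>, <tt>insertPatch</tt>, etc.) are hypothetical and purely illustrative, not the engine's actual code.

```c
#include <stdlib.h>

/* Hypothetical sketch of kd-tree-style rectangle packing. Each node covers
 * a region of atlas space; a leaf is either free or occupied by one patch. */
typedef struct node_s {
    int x, y, w, h;          /* region of atlas space covered by this node */
    int used;                /* non-zero once a patch occupies this leaf */
    struct node_s *child[2]; /* subregions created by a split (NULL = leaf) */
} node_t;

static node_t *newNode(int x, int y, int w, int h) {
    node_t *n = calloc(1, sizeof(node_t));
    n->x = x; n->y = y; n->w = w; n->h = h;
    return n;
}

/* Try to place a w*h patch somewhere under n at insert time;
 * returns the leaf it landed in, or NULL if it does not fit. */
static node_t *insertPatch(node_t *n, int w, int h) {
    if (n->child[0]) { /* interior node: recurse into the subregions */
        node_t *r = insertPatch(n->child[0], w, h);
        return r ? r : insertPatch(n->child[1], w, h);
    }
    if (n->used || w > n->w || h > n->h)
        return NULL; /* occupied, or the patch does not fit here */
    if (w == n->w && h == n->h) {
        n->used = 1; /* perfect fit: claim the whole leaf */
        return n;
    }
    /* Split along the axis with the most leftover space;
     * the patch always ends up in child[0] after recursing. */
    if (n->w - w > n->h - h) {
        n->child[0] = newNode(n->x, n->y, w, n->h);
        n->child[1] = newNode(n->x + w, n->y, n->w - w, n->h);
    } else {
        n->child[0] = newNode(n->x, n->y, n->w, h);
        n->child[1] = newNode(n->x, n->y + h, n->w, n->h - h);
    }
    return insertPatch(n->child[0], w, h);
}
```

This is why the final atlas dimensions must be fixed up front: the root node's extents bound all subdivision, and any region never claimed by a leaf simply remains wasted space.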
Although I would deem the packing algorithm fairly naive at this point, it now manages to pack the textures fairly well and is ready for integration into our texture manager. (The main shortcoming is that the dimensions of the final texture must be specified up front in order to build the atlas; if, once built, the used area is smaller than this, the resulting texture has a lot of wasted space.) I can continue working on improving the packing later on.
So, what are your thoughts (skyjake) on integrating auto generated texture atlases with our texture manager?
Comments
On the whole I think it's a good idea to use atlases, especially if they can be generated on the fly, but my main worry is that we shouldn't introduce extra complexity into the system. My initial feeling is that if atlases are taken into use they should be applied to all graphics as broadly as possible, so that we don't end up just introducing entropy into the engine, so to speak.
I would appreciate seeing a DEW proposal about the changes this would mean for the games, the public API, and engine internals.
The game would indicate which patch to draw via the existing material selection mechanism. At material selection time, the game would additionally specify how the material is to be drawn (the key bits of info we are looking for here are whether tiling and/or mipmaps are to be used). On the rare occasion when tiling is needed, we would load and bind a copy of the patch as we do currently. Otherwise, the engine would elect to bind the atlas texture containing the patch.
When a texture from an atlas is selected by way of material selection, a (new) special mode is enabled in the public DGL interface which alters how texture coordinates specified by the game are interpreted. This mode would remain active until it is logically disabled automatically (e.g., a non-atlas texture is bound or some other state change occurs). Whilst in this mode, all texture coordinates specified by the game (game-side, the coordinates of the patch run 0...1 as usual) are translated engine-side, relative to the coordinates of the patch within the current atlas texture, before being passed to OpenGL.
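The translation itself is a simple affine remap. A minimal sketch, assuming a hypothetical per-patch record of its sub-rectangle within the atlas (the names <tt>atlascoords_t</tt> and <tt>translateTexCoord</tt> are illustrative, not the actual DGL internals):

```c
/* Hypothetical record of where a patch lives inside its atlas texture,
 * expressed in the atlas's own 0...1 coordinate space. */
typedef struct {
    float s, t;             /* top-left corner of the patch in the atlas */
    float sExtent, tExtent; /* size of the patch within the atlas */
} atlascoords_t;

/* Remap one game-side (s,t) pair (0...1 patch space) into atlas space.
 * This is what the engine would do transparently while atlas mode is on. */
static void translateTexCoord(const atlascoords_t *c, float s, float t,
                              float *outS, float *outT) {
    *outS = c->s + s * c->sExtent;
    *outT = c->t + t * c->tExtent;
}
```

Because the remap happens entirely engine-side, the game keeps passing plain 0...1 coordinates and never learns that an atlas is in use.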
By using this approach the game has no idea that the engine is in fact using texture atlases.
<blockquote>On the whole I think it's a good idea to use atlases, especially if they can be generated on the fly, but my main worry is that we shouldn't introduce extra complexity into the system. My initial feeling is that if atlases are taken into use they should be applied to all graphics as broadly as possible, so that we don't end up just introducing entropy into the engine, so to speak.</blockquote>Yes, I agree. The atlas preview image above was perhaps not the best choice I could have made to illustrate the idea. I further compounded that misdirection by talking about wasted space in the atlases.
My plan is not to introduce any kind of arbitrary grouping scheme; the engine would add any and all non-tiled textures to an atlas irrespective of the games' use of them. Consequently, there would be no significant wasted space, as the engine would simply keep using one atlas until forced to create another.
Currently, the texture blit from atlas is done in one pass (it is not possible to update a section of the atlas texture). This will change once integrated into the engine.
<blockquote>I would appreciate seeing a DEW proposal about the changes this would mean for the games, the public API, and engine internals.</blockquote>I do plan to create a proposal for this; I just wanted to raise it here so that we could thrash out the core implementation specifics first.
Regarding tiling: I'm fairly certain it's a straightforward matter to write a fragment shader to handle tiling within the atlas, so it shouldn't be necessary to treat tiling as a special case. We might consider the use of some simple shaders like this before the eventual renderer redesign/rewrite that we discussed earlier. There's a lot of fluff, especially in the dynamic lights blending, that would benefit from some fragment shaders.
<blockquote>We might consider the use of some simple shaders like this before the eventual renderer redesign/rewrite that we discussed earlier.</blockquote>Yes, I think we are just about ready to make the move now and perhaps this and a light diminishing shader would be a good place to start.
How do you envision integrating the shader management? I suppose that, ideally, we should be compiling shaders via our deferred GL task system when possible?
There should be a new source file (gl_shader.c maybe?) which will handle managing the shaders (activating by name, knowing whether they're compiled or not).