Week 44/2013: Input and GL composition
Much of my last week was still related to Oculus Rift support in one way or another.
Oculus Rift's head tracking angles are a new type of input, so I added a new virtual input device for them into Doomsday's input subsystem. This allows the head angles to be bound just like the inputs of any other controller.
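To sketch the idea in code (the class and member names below are hypothetical, not Doomsday's actual input API), a head tracker can be exposed as a virtual device whose axes the binding system reads through the same interface it already uses for joysticks and mice:

```cpp
// Hypothetical sketch of a head tracker as a virtual input device.
// Names (HeadTrackerDevice, AxisState) are illustrative only.
#include <array>

struct AxisState { float position = 0.f; };

class HeadTrackerDevice {
public:
    enum Axis { Yaw, Pitch, Roll, AxisCount };

    // Fed once per frame with the latest orientation from the sensor.
    void update(float yaw, float pitch, float roll) {
        axes_[Yaw].position   = yaw;
        axes_[Pitch].position = pitch;
        axes_[Roll].position  = roll;
    }

    // The binding system reads axes the same way it reads any other
    // controller's, so "head yaw" can be bound like any axis.
    float axis(Axis which) const { return axes_[which].position; }

private:
    std::array<AxisState, AxisCount> axes_{};
};
```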
I also took some time to look into low-latency player controls, as head tracking by nature should be as latency-free as possible. I started a new work branch where I applied some low-level playsim hacks that basically allow players and their mobjs to be updated as frequently as needed, without having to conform to Doom's 35 Hz input rate. I hope this will eventually allow adding an official new input mode for "modern" player controls that react as fast as possible. However, there is still plenty of work needed to fine-tune the side effects this has on gameplay. Needless to say, the "modern" controls will result in quite a different feel — especially if one is very accustomed to the vanilla controls. We will naturally keep the original 35 Hz input rate as the default configuration, though.
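A minimal sketch of the loop structure this implies, assuming placeholder functions (runPlaysimTic, applyHeadTrackingToViewAngles, and the rest are invented for illustration, not actual engine calls): the simulation still advances in fixed 35 Hz tics, while view input is sampled and applied once per rendered frame.

```cpp
#include <chrono>

bool running();                        // placeholder: game still active?
void runPlaysimTic();                  // placeholder: one 35 Hz sim step
void applyHeadTrackingToViewAngles();  // placeholder: latest sensor pose
void renderFrame();                    // placeholder: draw one frame

constexpr double TICRATE   = 35.0;
constexpr double TICLENGTH = 1.0 / TICRATE;

void gameLoop() {
    using Clock = std::chrono::steady_clock;
    auto   last        = Clock::now();
    double accumulator = 0.0;

    while (running()) {
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - last).count();
        last = now;

        // Fixed-rate playsim: movement, collisions, and AI stay at 35 Hz.
        while (accumulator >= TICLENGTH) {
            runPlaysimTic();
            accumulator -= TICLENGTH;
        }

        // Low-latency path: head orientation is applied to the view
        // immediately, once per frame, without waiting for the next tic.
        applyHeadTrackingToViewAngles();
        renderFrame();
    }
}
```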
We then continued to hack, debug, and fix various parts of the UI and GL rendering to support the Oculus Rift barrel distortion shader. cmbruns recorded a brief video to show the results. All in all, Oculus Rift support has now cleared most of the major hurdles. The remaining work is related to improving how the 2D parts of the UI are displayed and fine-tuning the overall experience. I am very happy to see this progressing at a good pace, although I sadly cannot yet take advantage of this myself as I don't have the Oculus Rift development kit.
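For reference, the barrel distortion itself is a radial remapping done per-fragment in a shader; the same math is shown below as a plain C++ function. The polynomial form follows the style of the early Oculus SDK warp, and the k coefficients are illustrative values in the ballpark of the DK1 defaults, not tuned constants.

```cpp
// Barrel distortion as plain C++ (in practice this runs in GLSL).
// Coordinates are centered on the lens; k0..k3 are illustrative.
struct Vec2 { float x, y; };

Vec2 barrelWarp(Vec2 uv) {
    const float k0 = 1.0f, k1 = 0.22f, k2 = 0.24f, k3 = 0.0f;
    const float r2 = uv.x * uv.x + uv.y * uv.y;            // squared radius
    const float s  = k0 + r2 * (k1 + r2 * (k2 + r2 * k3)); // radial scale
    // Sampling farther from the centre as the radius grows compresses
    // the rendered image into a barrel shape the lens then re-expands.
    return { uv.x * s, uv.y * s };
}
```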
I've since been making further improvements to the UI framework. My goal is to allow parts of the widget tree to be drawn onto offscreen render targets (textures) that can then be drawn back to the screen with effects applied. The intent is to draw all non-3D-world graphics onto a separate layer. This is needed to gain more control over how the 2D portions of the UI (which assume a flat drawing surface) are positioned in a 3D view.
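To make the mechanism concrete, here is a minimal render-to-texture sketch in raw OpenGL; Doomsday wraps this in its own GL2 classes, so none of these calls should be read as the engine's actual code.

```cpp
#include <GL/glew.h>  // assumes a GL 3.0+ context and the GLEW loader

// Creates a framebuffer with a color texture attachment. The 2D UI can
// be drawn into colorTex, which is then composed back onto the screen
// with whatever effect is wanted.
GLuint makeUiLayer(int w, int h, GLuint &colorTex) {
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);  // back to the default target
    return fbo;
}
```

Usage follows the description above: bind the FBO, draw the widget tree as normal, unbind, then draw colorTex on a quad, either flat on the screen or positioned as a surface within the 3D view.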
My plan is to continue debugging and improving the UI framework's offscreen composition. This also ties quite nicely into the future map renderer improvements because it is forcing us to make the old GL code more compatible with the new GL2 graphics. After I've sorted out the current set of issues it will be time to start cleaning things up for merging all the recent suitable progress into the master branch.
Comments
Following the 1.12 release I began a new work branch aimed at completely overhauling the map renderer's render lists, using an object-oriented API design and leveraging the newer GL2 components. This branch is now rapidly approaching the point where it can be merged back to the master.
One of the issues I kept running into during 1.12 (and earlier) is how to begin redesigning the map geometry generation to use GL2 components and geometry representations while maintaining an acceptable framerate. The biggest issue there was the inflexible interface to the render lists, which placed significant restrictions on how geometry is passed to, and then stored in, the central global geometry buffer.
In the future, map rendering will rely more on drawing with geometry resident on the GPU; however, until that framework is in place I found it difficult to progress the geometry generation very far. In my work branch this bottleneck is no longer an issue, as we can now easily extend the "DrawList" class API with methods accepting GL2 geometry representations.
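As a rough sketch of what such an extension could look like (the signatures below are invented for illustration and may well differ from the real DrawList API in the branch):

```cpp
// Illustrative only: invented signatures in the spirit of the change,
// not the actual DrawList API.
class GeometryBuffer;  // hypothetical shared GL2 vertex buffer

struct Vertex { float pos[3]; float uv[2]; float rgba[4]; };

class DrawList {
public:
    // Legacy path: raw vertices are copied into the central store.
    void write(Vertex const *verts, int count);

    // New path: reference a span of geometry that already lives in a
    // shared GL2 buffer, so draws can be batched by buffer and state.
    void write(GeometryBuffer const &buffer, int firstVertex, int count);
};
```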
The final task in this branch is to replace the map renderer's central geometry store with a GLBuffer. In the process this means revising plane geometry construction to use triangle strips rather than fans, with zero-area triangles bridging discontinuities. The idea is that with all map geometry expressed as strips, the whole of the buffer's contents can be transferred to GL much more efficiently.
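The strip-joining trick can be illustrated with a short sketch (assumed minimal types; this shows the principle, not the branch's actual code):

```cpp
#include <vector>

struct Vertex { float x, y, z; };

// Appends one triangle strip to a batch, inserting two repeated
// vertices at the seam. The repeats form zero-area triangles that
// rasterize to nothing, so the joined batch draws as one
// GL_TRIANGLE_STRIP. (An extra repeat can be added when a strip has an
// odd vertex count, to keep the winding order consistent.)
void appendStrip(std::vector<Vertex> &batch,
                 std::vector<Vertex> const &strip) {
    if (strip.empty()) return;
    if (!batch.empty()) {
        batch.push_back(batch.back());   // degenerate: repeat last vertex
        batch.push_back(strip.front());  // degenerate: repeat next first
    }
    batch.insert(batch.end(), strip.begin(), strip.end());
}
```

The whole batch can then be uploaded once and drawn with a single call, instead of issuing one draw per fan.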
My plan for the coming week is to complete this work and merge it into the master branch.
I've also added the option to bind the Rift's yaw angle so that it affects only looking, not aiming/moving, but it's probably not working well enough yet.