Development System Upgrade

edited 2009 May 15 in Developers
<i>This post was originally made by <b>danij</b> on the dengDevs blog. It was posted under the categories: Blog, Windows.</i>

A couple of weeks ago I decided it was high time to upgrade my development system. That system was no slouch by any means, but I was beginning to notice it struggling with some of the whizz-bang visual effects in the latest games.

After a bit of research I settled on the following system:

CPU: Intel Core i7-920 (D0 step)
Cooler: Zalman CNPS 9900
Motherboard: Gigabyte EX58-UD5
RAM: 6GB Corsair Dominator 1066
Video: Geforce GTX 275
Audio: Auzentech Prelude 7.1
Primary Storage: 80GB Western Digital SATA hard disk
Auxiliary Storage: 500GB Buffalo Link Station Live

It took me the best part of three days to install and configure this new system, but I'm now ready to continue deng development.

I am extremely happy with this new system; it has proven most capable and has yet to flinch at anything I've thrown at it. So much so that I've not bothered overclocking it at all (yet).

When configuring the video card I noted the presence of an "Ambient Occlusion" option. Apparently this is a driver-level implementation of the <a href="http://en.wikipedia.org/wiki/Screen_Space_Ambient_Occlusion">screen space ambient occlusion algorithm</a> first seen in Crysis. Naturally, I was eager to compare the results to that of Doomsday's fakeradio system.

I found that although shadow fidelity was noticeably more refined and intricate when using Nvidia's ambient occlusion, it was (expectedly) much slower than our own fakeradio method.

It will be interesting to see whether ATI follow suit.

I will try and upload some comparison screenshots soon.

Comments

  • Sounds like a very nice system, congrats.
    <blockquote>I found that although shadow fidelity was noticeably more refined and intricate when using Nvidia's ambient occlusion, it was (expectedly) much slower than our own fakeradio method.</blockquote> I bet that has something to do with the fact that we are basically re-uploading all geometry on every frame, leaving the card & driver no chance to cache the data. If we froze static parts of the world into drawing lists, or used vertex buffers, the performance might be much better for the ambient occlusion. (Well, it would be much better for normal rendering, too.)
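
    A minimal sketch of what freezing static world geometry into an OpenGL vertex buffer object might look like is below. The vertex layout and function names are purely illustrative rather than actual Doomsday code, and the buffer-object entry points (GL 1.5+) are assumed to be available via an extension loader:

        /* Illustrative sketch only: upload the static world geometry once,
         * then draw it every frame without re-submitting any vertex data. */
        #include <GL/gl.h>
        #include <stddef.h>

        typedef struct worldvertex_s {
            float pos[3];
            float texCoord[2];
            float color[4];
        } worldvertex_t;

        static GLuint  staticVBO;
        static GLsizei staticVertexCount;

        void R_FreezeStaticGeometry(const worldvertex_t* verts, GLsizei count)
        {
            staticVertexCount = count;
            glGenBuffers(1, &staticVBO);
            glBindBuffer(GL_ARRAY_BUFFER, staticVBO);
            /* GL_STATIC_DRAW hints that the data is uploaded once and drawn many times. */
            glBufferData(GL_ARRAY_BUFFER, count * sizeof(worldvertex_t), verts, GL_STATIC_DRAW);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

        void R_DrawStaticGeometry(void)
        {
            glBindBuffer(GL_ARRAY_BUFFER, staticVBO);
            glVertexPointer(3, GL_FLOAT, sizeof(worldvertex_t), (void*) offsetof(worldvertex_t, pos));
            glTexCoordPointer(2, GL_FLOAT, sizeof(worldvertex_t), (void*) offsetof(worldvertex_t, texCoord));
            glColorPointer(4, GL_FLOAT, sizeof(worldvertex_t), (void*) offsetof(worldvertex_t, color));
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glDrawArrays(GL_TRIANGLES, 0, staticVertexCount);
            glDisableClientState(GL_COLOR_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

    The point of the GL_STATIC_DRAW upload is that the vertex data crosses the bus only once, after which the driver is free to keep it in video memory.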
  • I'm not convinced that the speed hit has much to do with the way we upload geometry. Given that this ambient occlusion algorithm works in screen space and its input is essentially just the content of the depth buffer, that would logically suggest (to me) that it would be relatively unaffected by the method and/or frequency of geometry transfer.

    Regardless, we really do need to start thinking about how we are going to implement this geometry freezing as it will affect pretty much everything we want to achieve in the 1.9.x series.

    I'm liking the looks of the <i>shard</i> stuff in the Hawthorn repo. With respect to the world geometry, have you had any thoughts on granularity and what a single shard would consist of?

    Perhaps the immediate choice would be to divide the world into shards at subsector level. This would likely be the easiest approach to implement initially as we transition to a cached geometry schema.
  • <blockquote>Given that this ambient occlusion algorithm works in screen space and its input is essentially just the content of the depth buffer</blockquote> I see; well, that does sound like it would be independent of how the geometry is defined.

    <blockquote>we really do need to start thinking about how we are going to implement this geometry freezing </blockquote> I agree.

    The Hawthorn shards basically consist of the static geometry, an atlas of textures, and vertex & fragment shaders that allow for dynamic things (like animating vertices and textures, and of course surface effects such as per-pixel lighting).

    When we start freezing geometry in Doomsday, I think we also need to start using shaders. This way we can implement a distance fog effect much like in the original games, one that isn't affected by the typical problems of vertex-based lighting. Also, we should look into animating planes using vertex shaders, which reduces the need for un/refreezing geometry.

    Dynamic lights are a separate issue, though. They need to remain unfrozen. However, fragment shaders should provide a more straightforward implementation for rendering them.

    When it comes to the granularity of shards, I think that subsectors may be too small. It should somehow be a dynamic system so that if the level contains no "moving parts", it should automatically converge into one big frozen shard. If that gets too big, we could maintain a simple 2D grid over the world that separates subsectors into different shards.

    Another issue is the textures. We could of course include texture changes into OpenGL drawing lists, but the rendering is more efficient when state changes aren't needed. It might be difficult or impractical to divide the textures into atlases based on where they appear on the map, though.

    This feels a lot like a complete rewrite of the renderer, though. Maybe this is one area where we could utilize the Hawthorn code, while at the same time allowing me to develop it a bit further.
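
    To make the shard described earlier in this comment a bit more concrete, here is a hypothetical sketch in C; none of the names or numbers below come from Hawthorn or Doomsday, they simply restate the description above (frozen geometry, a texture atlas, a shader pair, and a coarse 2D grid for deciding which subsectors end up in which shard):

        /* Illustrative sketch only: a "shard" as described above. */
        #include <GL/gl.h>

        typedef struct shard_s {
            GLuint  vbo;            /* frozen (static) world geometry */
            GLsizei vertexCount;
            GLuint  atlasTexture;   /* texture atlas for the surfaces in this shard */
            GLuint  shaderProgram;  /* vertex + fragment shaders (surface effects, fog, etc.) */
        } shard_t;

        /* One possible take on granularity: a coarse 2D grid over the map, each
         * cell collecting the subsectors whose center falls inside it. If the map
         * has no moving parts, the grid could collapse to a single cell, i.e. one
         * big frozen shard. The cell size here is purely illustrative. */
        #define SHARD_CELL_SIZE 2048.0f  /* map units per grid cell */

        int Shard_GridIndex(float x, float y, float mapMinX, float mapMinY, int gridWidth)
        {
            int col = (int) ((x - mapMinX) / SHARD_CELL_SIZE);
            int row = (int) ((y - mapMinY) / SHARD_CELL_SIZE);
            return row * gridWidth + col;
        }

    Dynamic lights would stay outside the shard, as noted above, and be applied at draw time by the fragment shader.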
  • <blockquote>
    <p>When we start freezing geometry in Doomsday, I think we also need to start using shaders.</p></blockquote>I agree.
    <blockquote><p>This way we can implement a distance fog effect much like in the original games, one that isn't affected by the typical problems of vertex-based lighting.</p></blockquote>I would think this isn't really necessary to begin with, but I imagine the implementation shouldn't be particularly difficult, so why not.
    <blockquote><p>Also, we should look into animating planes using vertex shaders, which reduces the need for un/refreezing geometry.</p></blockquote>Interesting. I had been considering vertex shaders for planes, but only insofar as using them for dynamic surfaces such as water. It hadn't occurred to me to handle this on the GPU as well.

    I like this idea, though it does mean we will need to carefully design much of the rest of the renderer around it.

    With that in mind, am I correct in my assumption that you are now suggesting we abandon the fixed-function render pipeline entirely?
    <blockquote><p>When it comes to the granularity of shards, I think that subsectors may be too small. It should somehow be a dynamic system so that if the level contains no "moving parts", it should automatically converge into one big frozen shard. If that gets too big, we could maintain a simple 2D grid over the world that separates subsectors into different shards.</p></blockquote>I've been thinking a lot recently about this and how it fits with the BSP and the role of subsectors in our renderer vs the playsim. It seems to me that (in general) we should be aiming to reduce our dependency upon the BSP, at least for drawing purposes. With world geometry tied so closely to the subsectors, we are somewhat restricted in what we can do.

    Another factor is the irregularity/density of plane geometry, due to it being derived straight from the subsectors. This has many implications, given the sheer size of the individual polygons that often result.

    I would like to suggest that rather than deriving plane geometry directly from the subsectors, on completion of the BSP process we further subdivide the world using a regular (i.e., uniform) grid (isometric?) overlaying the entire map, generating new segs as necessary. Although this will obviously increase world complexity, with throughput being what it is today I don't think it will matter much, especially given we plan to freeze the world geometry anyway. I feel the many benefits of doing something like this would certainly be worth it.

    As we would then have world geometry that nicely lends itself to "chunking", converging adjoining chunks into shards would theoretically become more feasible. (A rough sketch of how this kind of grid clipping might work is included at the end of this comment.)
    <blockquote><p>Another issue is the textures. We could of course include texture changes into OpenGL drawing lists, but the rendering is more efficient when state changes aren't needed. It might be difficult or impractical to divide the textures into atlases based on where they appear on the map, though.</p></blockquote>In general I think that a texture atlas per shard is a good idea, but only when dealing with cases that lend themselves to it. Implementing some form of spatial-relationship logic to govern world geometry and atlas texture grouping sounds like a rather difficult problem to me. I think it might be altogether easier to handle world geometry shards a little differently. How? I'm not sure right now; I'll get my thinking cap on.
    <blockquote><p>This feels a lot like a complete rewrite of the renderer, though. Maybe this is one area where we could utilize the Hawthorn code, while at the same time allowing me to develop it a bit further.</p></blockquote>I think for now, we should put all thoughts on this to the back of our minds (yes, I know I brought it up and yes, I know this is the fun stuff) and try to concentrate our efforts on the unified networking.

    As I've mentioned previously, my knowledge of our current net code is limited to say the least but by contrast, I feel quite comfortable with pretty much everything else we are doing right now. So, how about this:

    We focus all our effort on the unified networking (and related work) and get 1.9.0-beta7 out sooner rather than later. Once unified networking is in and the core issues around it have been resolved, we then divide our development effort thus: you (skyjake) can focus more on the renderer side while I work on the back-end management of the world data and the game-side stuff.
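
    As a purely illustrative aside on the grid subdivision idea above, clipping each (convex) subsector polygon against an axis-aligned grid cell could look roughly like the following; the names and the Sutherland-Hodgman-style clipper are a sketch, not engine code:

        /* Illustrative sketch only: clip a convex subsector polygon against one
         * cell of a uniform grid, producing a regular "chunk" of plane geometry. */
        typedef struct { float x, y; } point_t;

        #define MAX_CLIP_POINTS 64  /* ample for a convex polygon clipped by four edges */

        /* Keep the part of the polygon satisfying nx*x + ny*y <= d (one
         * Sutherland-Hodgman clipping step). Returns the new point count. */
        static int clipAgainstEdge(const point_t* in, int inCount, point_t* out,
                                   float nx, float ny, float d)
        {
            int i, outCount = 0;
            for (i = 0; i < inCount; ++i)
            {
                const point_t* cur = &in[i];
                const point_t* nxt = &in[(i + 1) % inCount];
                float curDist = nx * cur->x + ny * cur->y - d;
                float nxtDist = nx * nxt->x + ny * nxt->y - d;

                if (curDist <= 0)
                    out[outCount++] = *cur;               /* current point is inside */
                if ((curDist <= 0) != (nxtDist <= 0))     /* edge crosses the boundary */
                {
                    float t = curDist / (curDist - nxtDist);
                    out[outCount].x = cur->x + t * (nxt->x - cur->x);
                    out[outCount].y = cur->y + t * (nxt->y - cur->y);
                    ++outCount;
                }
            }
            return outCount;
        }

        /* Clip a convex polygon to the cell [minX,maxX] x [minY,maxY]. */
        int ClipSubsectorToCell(const point_t* poly, int count, point_t* out,
                                float minX, float minY, float maxX, float maxY)
        {
            point_t a[MAX_CLIP_POINTS], b[MAX_CLIP_POINTS];
            int n;
            n = clipAgainstEdge(poly, count, a, -1.0f,  0.0f, -minX); /* x >= minX */
            n = clipAgainstEdge(a, n, b,      1.0f,  0.0f,  maxX);    /* x <= maxX */
            n = clipAgainstEdge(b, n, a,      0.0f, -1.0f, -minY);    /* y >= minY */
            n = clipAgainstEdge(a, n, out,    0.0f,  1.0f,  maxY);    /* y <= maxY */
            return n;
        }

    Running every subsector through something like this for each overlapped grid cell would yield the regular, uniformly sized chunks described above, at the cost of some extra vertices and segs.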
  • <blockquote>With that in mind, am I correct in my assumption that you are now suggesting we abandon the fixed-function render pipeline entirely?</blockquote> Yes, that would be the case.

    <blockquote>We focus all our effort on the unified networking (and related work) and get 1.9.0-beta7 out sooner rather than later.</blockquote> I fully agree that even though the graphics stuff is fascinating, the focus should currently be on beta7 things. I've been actually doing some thinking about the unified networking, related to how our code is organized. I think it makes sense now to make a separate "libdeng" library for the core engine functionality, which can then be utilized as a shared library both in the (dedicated) server process and the client (i.e. UI) process. This should remove the need for the relatively unorthodox practice of back-linking to stuff exported from the Doomsday executable. Also, all plugins could simply be linked against libdeng and everything should be fine and dandy. As far as the API is concerned, most of our public engine API should be directly usable as a shared library API. (A rough sketch of what such a split might look like is at the end of this comment.)

    I'm wondering whether it's OK that a plugin like jDoom contains both server-side and client-side functionality. Perhaps it would be possible to create a generic client-side game plugin that handles the UI stuff using some common code, configured with DEDs and other textual configuration files (for menus, etc.).
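
    As a rough illustration of the libdeng split described above, the public header and the executables that use it might look something like this; the function names are hypothetical, not the actual engine API:

        /* Illustrative sketch only: deng.h, the public API exported by a shared libdeng. */
        #ifndef LIBDENG_H
        #define LIBDENG_H

        #ifdef __cplusplus
        extern "C" {
        #endif

        int  Deng_Init(int argc, char** argv);  /* core engine startup; returns nonzero on success */
        void Deng_RunGameLoop(void);            /* ticks the game/network loop */
        void Deng_Shutdown(void);

        #ifdef __cplusplus
        }
        #endif

        #endif /* LIBDENG_H */

        /* Both the client and the dedicated server would then be thin wrappers: */
        #include "deng.h"

        int main(int argc, char** argv)
        {
            if (!Deng_Init(argc, argv))   /* the client would also bring up its UI here */
                return 1;
            Deng_RunGameLoop();
            Deng_Shutdown();
            return 0;
        }

    Game plugins would link against libdeng in the same way, instead of back-linking to symbols exported by the Doomsday executable.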
  • <blockquote><p>I've been actually doing some thinking about the unified networking, related to how our code is organized. I think it makes sense now to make a separate "libdeng" library for the core engine functionality, which can then be utilized as a shared library both in the (dedicated) server process and the client (i.e. UI) process. This should remove the need for the relatively unorthodox practice of back-linking to stuff exported from the Doomsday executable. Also, all plugins could simply be linked against libdeng and everything should be fine and dandy. As far as the API is concerned, most of our public engine API should be directly usable as a shared library API.</p></blockquote>Sounds good to me. Does this mean you intend to use a daemon too (similar to Hawthorn)? How does this plan mesh with our use of SDL for video on Mac OS and *nix?
    <blockquote><p>I'm wondering whether it's OK that a plugin like jDoom contains both server-side and client-side functionality. Perhaps it would be possible to create a generic client-side game plugin that handles the UI stuff using some common code, configured with DEDs and other textual configuration files (for menus, etc.).</p></blockquote>I think that right now, having both server- and client-side functionality in the game plugin makes the most sense. I agree that the ideal would be a generic client-side plugin that is fully configurable via DEDs/whatever. However, getting to that point would require a considerable amount of work to elevate the common code library into a complete, flexible API for menus, UI and other displays. I suggest that we do the former for now and make the generic client-side game plugin a future goal.

    EDIT:
    Thinking about this more, the generic client-side game plugin idea does actually present a natural opportunity for dividing our development effort. As I won't be of much use when it comes to developing the server-side back end, I could instead focus my efforts on the client-side front end (menus and other UIs). It seems to make sense to me given that I've already got some of the work done in private branches (e.g., merged control panel UI and game menu). Just a thought.