Resizing Map Data Structures
<i>This post was originally made by <b>danij</b> on the dengDevs blog. It was posted under the category: Engine.</i>
I'm working on the map loading/node caching at the moment. During the load process we need to prune duplicate data structures (e.g. vertexes) and at other times create new ones (e.g. sidedefs). We could simply allocate another complete array to do what we need and copy over the original, but should we be thinking about replacing them with structures more efficient for these tasks?
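As a rough illustration of the alternative, here is a minimal growable-array sketch in C. All of the names (vertex_t, VA_Append, VA_PruneDuplicates) are hypothetical, not engine API, and the O(n²) duplicate scan is only for clarity; at real map sizes a coordinate hash would scale better.

```c
#include <stdlib.h>

/* Hypothetical vertex record; the real engine structs will differ. */
typedef struct {
    float x, y;
} vertex_t;

/* A minimal growable array: capacity doubles on demand, so appending
 * new objects (e.g. sidedefs) is amortized O(1) and the items remain
 * one contiguous block (preserving linear traversal/locality). */
typedef struct {
    vertex_t *items;
    size_t    count;
    size_t    capacity;
} vertexarray_t;

static void VA_Append(vertexarray_t *a, vertex_t v)
{
    if (a->count == a->capacity) {
        a->capacity = a->capacity ? a->capacity * 2 : 8;
        a->items = realloc(a->items, a->capacity * sizeof(vertex_t));
    }
    a->items[a->count++] = v;
}

/* Prune exact duplicates in place, keeping the first occurrence. */
static void VA_PruneDuplicates(vertexarray_t *a)
{
    size_t out = 0;
    for (size_t i = 0; i < a->count; ++i) {
        size_t j;
        for (j = 0; j < out; ++j) {
            if (a->items[j].x == a->items[i].x &&
                a->items[j].y == a->items[i].y)
                break;
        }
        if (j == out)
            a->items[out++] = a->items[i];
    }
    a->count = out;
}
```

The point is that growth and pruning become cheap local operations, while traversal stays a flat loop over one allocation.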
Ideally, the same structures could be used during normal play too, but we need to be careful in our choice(s) so as not to negatively impact performance. For example, various existing processes like PG_InitForNewFrame() would become more complex than they currently are (at present, a quick memset() of a parallel array).
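For context, the reason the parallel-array reset is so cheap is that the whole per-frame layer is one contiguous block. The sketch below shows the pattern; the names (MAX_SECTORS, sectorFrameFlags, ResetFrameFlags) are hypothetical stand-ins, not what PG_InitForNewFrame() actually touches.

```c
#include <string.h>

#define MAX_SECTORS 1024  /* hypothetical fixed capacity */

/* Per-frame state held in a parallel array: one contiguous block,
 * so clearing it for a new frame is a single memset() call. */
static unsigned char sectorFrameFlags[MAX_SECTORS];

static void ResetFrameFlags(void)
{
    memset(sectorFrameFlags, 0, sizeof(sectorFrameFlags));
}
```

Any replacement structure that scatters this state across separate allocations would turn that one memset() into a per-object loop.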
The benefits of using parallel arrays to hold the map data objects (and related data) are compelling as long as the sets are stable (linear traversal, locality of reference, etc.). However, now that we need at least some ability to grow and shrink them, it might be a good idea to use something else (and with future run-time editing in mind, the concern will only grow).
One particular example is the $nplanes work I have been doing recently, where a sector can support an unlimited number of planes.
Comments
From a pure performance perspective, re-arranging the data so we can "strip mine" each layer would be best, but I don't believe the data lends itself to that technique. Ideally we want to keep the data aligned on a 4-byte boundary, with padding bytes if needed.
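A small sketch of the padding idea, with a hypothetical record (planeinfo_t is not an engine type): the 2-byte field is padded explicitly so every array element starts on a 4-byte boundary, rather than relying on whatever implicit padding the compiler inserts.

```c
#include <stddef.h>

/* Keeping sizeof(record) a multiple of 4 keeps each element of a
 * packed array 4-byte aligned. The padding is written out explicitly
 * so the layout is the same on every compiler. */
typedef struct {
    float         height;  /* 4 bytes */
    short         flags;   /* 2 bytes */
    unsigned char pad[2];  /* explicit padding to the 4-byte boundary */
} planeinfo_t;
```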
Could you list all the requirements we need from these structures, and their current sizes? It would help me better pick it apart for you.