Tales of the New Order

edited 2009 Jul 8 in Developers
<i>This post was originally made by <b>skyjake</b> on the dengDevs blog. It was posted under the categories: Engine, Games, Mac OS X, Unix/Linux, Windows.</i>

I'm about to complete Phase 1 of my plan to transform the current Doomsday into what will eventually become the 2.0 incarnation. At the moment the entire project (sans jdoom64) builds and I can start the dedicated server (dengsv) on both OS X and Windows. There is still some tweaking to do on Win32 before everything is up and running, though, since on Windows it currently just crashes after setting up the window.

I thought this would be a good time to mention some of the important changes happening in the New Order branch. The changes will be merged back to the trunk after I have Phase 1 running on Windows.

<ul>

<li> Exported symbols are no longer defined in the .def file on Windows. Instead, they are marked with a PUBLIC_API macro in the <tt>doomsday.h</tt> header. Also, the public declarations have been removed from the internal headers to avoid redundancy. This should simplify maintenance of the code base while also improving readability, as the exported functions are clearly marked in the headers. (See the sketch after this list.)

<li>CMake is now the Official Build System for all platforms. (This Time It's For Real™) This is good for scripted builds as everything can be done from the command line. I have been revising all the CMakeLists in the project (that is, rewriting all of them).

<li>There are significant changes in the source tree directory structure: the "engine" directory has been renamed "libdeng", and the game plugins have lost the "j" prefix. The plugins now follow a new naming scheme, with "deng_" and "dengplugin_" prefixes.

<li>There are now two executables instead of just doomsday(.exe). "dengcl" is the client and "dengsv" is the server. The plan is that the client will run the server in the background for single player games, starting dengsv when a single-player game begins.

<li>libdeng2 has its own source tree structure and conventions. It uses the "de" namespace for all of its classes. The public headers are under a "de" directory, so they are always included like this:
<pre>#include <de/Something></pre>
There is a convention borrowed from Qt: each class has a convenience header named after the class, for instance to use the class <code>de::CommandLine</code>, one would just need <tt>#include <de/CommandLine></tt>. The (portable) sources and internal headers are all under "libdeng2/portable".

</ul>
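
To give an idea of the PUBLIC_API marking mentioned above, here is a minimal sketch. The macro body and the example function are illustrative assumptions based on the usual Windows DLL export pattern, not the actual definitions:
<pre>
/* Hypothetical definition -- the real macro may differ in detail. */
#if defined(WIN32) && defined(DOOMSDAY_EXPORTS)
#  define PUBLIC_API __declspec(dllexport)
#elif defined(WIN32)
#  define PUBLIC_API __declspec(dllimport)
#else
#  define PUBLIC_API
#endif

/* An exported function is then clearly marked in the header: */
PUBLIC_API int DD_GetInteger(int ddvalue);
</pre>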

There are some exciting changes waiting for Phase 2...

Comments

  • Nice work.

    Is the CMake build system now able to generate the Visual C++ 2008 solution and project files or will I need to create and commit these myself?
  • Thanks. Yes, CMake can generate the Visual C++ solution. I'm using the Visual Studio 9 2008 Express edition (?) myself. In fact, the command line tool vcbuild uses the very same solution/vcproj files that the IDE does. It's quite nifty. I really want to eliminate the need for any manually-maintained project files.

    CMake will generate a deng.sln and some vcprojs. Then it's possible to, for instance:
    <pre>
    vcbuild deng.sln "Debug|Win32"
    </pre>
    to build a specific configuration, and then
    <pre>
    vcbuild INSTALL.vcproj debug
    </pre>
    to install the built files into a runtime directory, with the necessary data files and DLLs. I've set the install directory to "./install" in the CMakeLists.txt. The idea is then that the "installed" files can be packaged into a distribution package.
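
    For completeness, generating the solution in the first place is a one-liner, too. A minimal sketch, assuming an out-of-source build directory with the sources next to it (the actual paths vary):
    <pre>
    cmake -G "Visual Studio 9 2008" ..\doomsday
    </pre>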
  • So, what's next on the agenda? I presume the next step will be a completely new Doomsday plugin loader/manager. I recall you said that you wanted to pursue run-time game plugin changes (good idea); what other changes/features do you intend for this rewrite?

    I agree with your comment at the forum regarding the future of Snowberry: once the engine is capable of changing the game plugin at runtime, a dedicated launcher application becomes somewhat redundant. Combine this with a new VFS and addon management facilities in the engine itself, and a launcher becomes unnecessary.
  • Yes, you have the gist of it.

    The very next step I was planning is to start work on the new file system -- haven't quite decided on a name for it, though. As far as the plugins are concerned, I was planning on a more dynamic approach, so that, in essence, plugins located anywhere in the virtual file system would be found and treated appropriately, whether they are games or other kinds of plugins such as audio drivers. libdeng2 will query the plugins for some metadata on how they should be treated (e.g., game or something else).

    In the end, it would be possible to package all of jDoom, for example, into a single bundle and have it show up in the "ring zero" Doomsday setup UI with all its game modes and whatnots.

    But that's the long term vision.

    Now that phase 1 is done I will also start looking into what's bugging the multiplayer. Specifically, the goal is to get co-op working in Doom1 E1M1 with a single player. As far as the plugin thing is concerned, as I said I will start with the file system basics, so I can get rid of the hardcoded command line arguments and paths currently in dengsv/dengcl.
  • Something I would particularly like to see with the new filesystem is the implementation of an abstract file/directory node system where third parties can add/remove nodes from it. Ideally, support for container formats such as the existing WAD and PK3 managers would be externalized into plugins, but all the files within would be added to The One file system as virtual file nodes. I seem to recall our discussing something along these lines a while back.

    I feel that externalizing support for the various container formats is the next logical step for this system.
  • I picture the new file system as being something akin to the Hawthorn "data object hierarchy", although perhaps one step closer to real files and folders. The basic idea is that the engine (or the <code>de::FS</code> class) deals with a tree of File and Folder objects. There is then a variety of subclasses of Files and Folders, for example a ZipFolder for ZIP/PK3s.

    All file access happens through these classes and <code>de::FS</code> itself, both reading and writing. For instance, I expect that the future save games will just be a ZipFolder that contains the data in a bunch of files, that gets created at runtime in memory and then written to disk.
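
    As a rough sketch of what I mean (only <code>de::FS</code>, File, Folder, and ZipFolder come from the plan above; the members shown are assumptions for illustration):
    <pre>
    // A sketch only; the member variables are illustrative assumptions.
    #include <string>
    #include <vector>

    namespace de {
        class File {
        public:
            virtual ~File() {}
            std::string name; // hypothetical
        };

        class Folder : public File {
        public:
            std::vector<File*> contents; // Files and sub-Folders
        };

        // Presents the contents of a ZIP/PK3 archive as a Folder.
        class ZipFolder : public Folder { /* ... */ };
    }
    </pre>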

    However, when it comes to using actual plugins for each format, the only justification I see is the isolation of dependencies to third party libraries. The benefit of having something as a plugin is that it can be disabled. I think that it's perfectly fine to have libdeng2 depend directly on, e.g., zlib and SDL_image, though. That's very basic functionality you'd really not want to disable.

    Thanks to the object-oriented code, it's pretty straightforward to isolate some classes from the main libdeng2 library and make them optional plugins in the future, if need be.
  • The rationale behind allowing support for container formats such as WAD and PK3 in plugins rather than in the engine is primarily to allow for third party extensibility. If, for example, I decided to begin work on a Duke Nukem 3D plugin, I would have to work on the engine itself to implement the necessary GRP reader (another container format).
  • Extensibility is a valid reason, yes. However, the default formats need not be plugins; in the end it doesn't matter whether the class instances come from a plugin via a factory, or directly from libdeng2 classes (via the same factory) -- see the sketch below.

    I guess what I'm saying is that the factory mechanism and extension plugins are something that can be added later on -- there is no need to jump in the deep end right away.
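
    To make the idea concrete, here is a hypothetical sketch of such a factory (none of these names are from the actual code):
    <pre>
    // Hypothetical; illustrates the factory idea only.
    #include <map>
    #include <string>

    class Folder; // e.g., a ZipFolder or a plugin-provided folder type

    typedef Folder* (*InterpreterFunc)(const std::string& path);

    class InterpreterFactory {
    public:
        // Built-in formats and plugin-provided ones register alike.
        void registerFormat(const std::string& ext, InterpreterFunc f) {
            interpreters[ext] = f;
        }
        Folder* interpret(const std::string& ext, const std::string& path) {
            std::map<std::string, InterpreterFunc>::iterator i = interpreters.find(ext);
            return i != interpreters.end() ? i->second(path) : 0;
        }
    private:
        std::map<std::string, InterpreterFunc> interpreters;
    };
    </pre>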
  • I agree that it's not needed right now (i.e., for beta7).

    However, considering the plan is to freeze the public API for 1.9.0 (the purpose being to encourage third party plugin development), I would argue that without some method for third parties to develop support for any new container formats they need (and tie them into the VFS), the plan to freeze is fundamentally undermined.
  • Yes, I see your point.

    The nice thing about freezing, though, is that new API functions and classes (and in some cases, even member functions) can be added as long as the frozen ones stay in place and work as expected. Even though at some point in time there is no facility for, say, registering new file format interpreters into the file system, such an API can be added later on.

    That said, when the actual freeze occurs (I would call it version 2.0), there will definitely be a way to register new format interpreters into the FS (among other things).
  • My interpretation of "freezing the public API for 1.9.0" (the intended name at the time) was a literal one, i.e., <em>it will <strong>not</strong> change</em>. Clearly though, that is not your intention (perhaps changed since the conception of libdeng2 and the revised unified networking plan), so my point is rather moot given the different dynamic.
  • There are actually two public APIs to consider now, due to the revised plans:
    <ul><li>The original Doomsday public API, which is now called libdeng. This is a set of C functions. I expect that this API will be frozen so that nothing will be added (or removed, obviously) at the point of stable release (which is when it makes sense to do the freezing). Nothing needs to be added because this is essentially a legacy API, and not a place where future development needs to occur. Plugins linked against libdeng are guaranteed to work the same way indefinitely. The downside? New features added to the engine at some point in the future may not be accessible through this API, or they will be inconvenient to access.
    <li>Then there is the new public API, which is a set of C++ classes exported from libdeng2. This is completely separate from the libdeng API. I expect that this API will never be fully frozen because of functionality evolving in the engine. However, it makes sense to freeze certain classes for forward compatibility: like the abstract base classes that handle files and folders in the FS. This way libdeng2-based plugins can rely on these classes and remain compatible indefinitely.
    </ul>

    The interesting question is whether it makes any sense to, e.g., allow writing FS plugins through the libdeng API. There should be ways to do it, although I doubt they are actually sensible: it's simpler to just derive a few classes in a plugin.
  • <blockquote><p>The interesting question is whether it makes any sense to, e.g., allow writing FS plugins through the libdeng API. There should be ways to do it, although I doubt they are actually sensible: it's simpler to just derive a few classes in a plugin.</p></blockquote>
    At this point I'm struggling to think of a situation where a libdeng API for FS plugins would be needed when said plugin could simply make use of the classes provided by libdeng2. For now I think we can do without a libdeng C API wrapper for this functionality.
  • Over the past few days I have begun work on the 2.0 version of world (map) management. So far this work has been limited to defining a new hierarchy of abstraction layers between the client and server side worlds.

    Most of the design is now complete (in my head) and I'm about ready to begin implementation. To make the transition easier I'm planning to begin work by introducing a couple of new abstraction layers to serve as wrappers for the planned new functionality in libdeng2.

    So, is the trunk now ready for this type of work or should I hold back a little while longer?
  • I'm currently revising the lowest level stuff in the new-order branches (init/shutdown). The trunk has the phase1 changes, so it's pretty much hard-coded to launch Doom1 E1M1 multiplayer.

    On the whole I think that the branching approach is pretty nice, as it forces you to finish a relatively small but functionally complete set of changes before making them "visible" in the main trunk. Also, when it comes time to merge (and close) the branch, it's up to the author of the changes to resolve any conflicts with other changes that have occurred in the trunk. And finally, the trunk stays unbroken because the author of the branch is responsible for making sure the merge is successful. (Extra bonus: you can commit unfinished stuff whenever you want, since it's your private branch -- good when working on multiple computers.)

    So, I would recommend that you create a branch for your map management changes.

    You probably could branch it out right now from the trunk, but since the launch is currently hardcoded for multiplayer debugging purposes, it may be a bit cumbersome in practice to test that your changes work.

    I don't expect to make game logic related changes in the new order branches so there should be no conflicts due to that. (Of course, apart from what is needed for fixing the multiplayer.)

    It would be good to get yourself comfortable with the revised CMake build system, though.

    Those are the pros and cons I can think of...
  • For now, I think I'll create a private branch from the current beta6 branch. There is plenty of stuff within the current framework that will need to be addressed (before I can start with the real, evolutionary changes) which can be done while you work on the phase two, new-order changes.

    Once new-order phase two is merged back to trunk I'll then update my branch and start on the bigger stuff.
  • I've created a branch from the head of the current beta6, named beta6-with-mapcache. This branch will see the preparatory work towards the new map cache and backend management of maps.

    The basic concept is as follows:
    <ul><li>Internally, the engine only supports reading/writing maps in a bespoke Doomsday format.</li>
    <li>When a map load request is made, the engine first checks the map cache to see if the map is available (in the Doomsday format); if present, it is loaded server-side (and will be transmitted to clients later). If the requested map is not present in the cache, the engine then invokes each currently loaded map conversion plugin in turn until one of them recognises a map it can convert.</li>
    <li>Converting a map into the Doomsday map format works as follows: first, the conversion plugin does the low-level read and interprets any format-specific data into a form that can be easily transferred to the engine's runtime map-building interface. Once the map is transferred engine-side (and the signal received that map editing has finished), the map is written to the map cache. The load process then re-runs, except this time the map will be found in the cache and thus loaded. (This flow is sketched below.)</li>
    </ul>
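
    A sketch of the load logic described above (all of the names below are hypothetical):
    <pre>
    #include <string>
    #include <vector>

    class Map;

    class MapCache {
    public:
        bool contains(const std::string& mapId) const;
        Map* load(const std::string& mapId); // Doomsday-format maps only
    };

    class MapConverterPlugin {
    public:
        virtual ~MapConverterPlugin() {}
        // Returns true if the plugin recognised the map and wrote a
        // converted, Doomsday-format copy into the cache.
        virtual bool convertToCache(MapCache& cache, const std::string& mapId) = 0;
    };

    Map* loadMap(MapCache& cache,
                 const std::vector<MapConverterPlugin*>& converters,
                 const std::string& mapId)
    {
        if (!cache.contains(mapId)) {
            // Invoke each conversion plugin until one recognises the map.
            for (size_t i = 0; i < converters.size(); ++i) {
                if (converters[i]->convertToCache(cache, mapId)) break;
            }
        }
        // Re-run the load; this time the map should be in the cache.
        return cache.contains(mapId) ? cache.load(mapId) : 0;
    }
    </pre>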

    We should now make the final decision on whether we want to make use of the UDMF map format as our native map format or whether another format is better suited to our needs (it would always be possible to support UDMF at a future point via a converter plugin). I don't think UDMF is really what we want, as I feel a text-based format is both unnecessary and unwieldy for use as our native map format.

    The biggest mark in the 'against' column for UDMF, though, is that it is an open format. The whole point of a Doomsday-native map format is that we are reading maps in a format specifically tailored to our needs. If we opted to use UDMF, we would be opening the door to another level of compatibility issues within the engine itself. As such, I think that as far as UDMF is concerned, we should support it only by way of a conversion plugin, not natively.

    A better solution for us would be a ZIP containing a separate binary file for each class of map data, each easily patched by its counterpart in a saved game; a hypothetical layout is sketched below.
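
    For illustration, a purely hypothetical layout (none of these file names are decided):
    <pre>
    e1m1.map (a ZIP archive)
        info        -- map metadata, format version
        vertexes    -- binary vertex data
        sectors     -- binary sector data
        ...one file per class of map data...
    </pre>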

    Thoughts?

    <strong>EDIT</strong>: verbosity++
  • Sounds sensible. When it comes to UDMF, I see us having at most a UDMF import (read-only) plugin.

    I think a ZIP-based format for storing maps would be good. It's flexible as it allows both extra files and subdirectories within the ZIP without breaking compatibility. The point about the savegames is a good one.

    Since libdeng2 will also provide the facilities for modifying loaded ZipFolders, this opens up interesting possibilities for in-game editing of maps. It'll basically be as simple as editing regular files in some directory.

    Any ideas for the file name extensions? There's already going to be <tt>.addon</tt> and <tt>.box</tt> for generic loadable packages (which could be ZIP compressed also thanks to the upcoming libdeng2 ZipFolder, btw). I think the maps and savegames should have their own extensions. Maybe just <tt>.map</tt> and <tt>.save</tt>? And why not call PK3s just <tt>.pack</tt>, while we're at it? (And there's going to be <tt>.demo</tt> for the demos.)
  • <blockquote>Sounds sensible. When it comes to UDMF, I see us having at most a UDMF import (read-only) plugin.</blockquote>
    Agreed.
    <blockquote>I think a ZIP-based format for storing maps would be good. It's flexible as it allows both extra files and subdirectories within the ZIP without breaking compatibility. The point about the savegames is a good one.

    Since libdeng2 will also provide the facilities for modifying loaded ZipFolders, this opens up interesting possibilities for in-game editing of maps. It'll basically be as simple as editing regular files in some directory.</blockquote>
    My thinking is that if we do this right, patching a cached map with change deltas from a saved game can be done at file-level (let files in <tt>.save</tt> override those in <tt>.map</tt>).
    The possibilities for in-game editing are huge with such a design.

    The key will be coming up with a native map format that can be loaded near-instantly (i.e., a format that mirrors our runtime representation).
    <blockquote>Any ideas for the file name extensions? There's already going to be <tt>.addon</tt> and <tt>.box</tt> for generic loadable packages (which could be ZIP compressed also thanks to the upcoming libdeng2 ZipFolder, btw). I think the maps and savegames should have their own extensions. Maybe just <tt>.map</tt> and <tt>.save</tt>? And why not call PK3s just <tt>.pack</tt>, while we're at it? (And there's going to be <tt>.demo</tt> for the demos.)</blockquote>Sounds good to me, agreed on all names too.
  • <blockquote>My thinking is that if we do this right, patching a cached map with change deltas from a saved game can be done at file-level (let files in .save override those in .map).</blockquote> Yes, the binary format and files in <code>.save</code> and <code>.map</code> should be identical. The <code>.save</code> is essentially a subset of <code>.map</code>, after all (with some extra stuff).

    However, keep in mind when you proceed with your implementation that libdeng2 will be providing generic network-byte-ordered writer/reader utilities, a la Hawthorn. They should be used on all data read and written by Doomsday, including network packets, maps, and savegames.
  • How do you plan to construct the interface for the native file write/read utilities; is there a particular design pattern you have in mind (I'm thinking that a map data read should be a linear, sequential process without the need to jump around a file "randomly")?

    We should decide upon standard (basic) binary format structure that is applied to all map data files other than resources (not the saved game, client-view screenshot for example). I propose a very simple chunked structure, with a header (containing additional metadata describing the content of the file so that the type of data and its validity can be discerned easily) followed by a chunk of data.
  • <blockquote>How do you plan to construct the interface for the native file write/read utilities; is there a particular design pattern you have in mind (I'm thinking that a map data read should be a linear, sequential process without the need to jump around a file "randomly")?</blockquote> Most of the classes are there now in the phase3 branch (running Doxygen on it is recommended). The read/write system has a couple of components:
    <ul>
    <li>Random access to an array of bytes is provided through the <code>IByteArray</code> interface (get(), set(), size()).
    <li><code>NativeFile</code> (among others) implements this interface. In the case of native files, the get() and set() translate to file reads and writes. (However, <code>NativeFile</code> will probably use some buffering for the writing, so that the actual writing to a file is performed upon flushing/closing.)
    <li>The <code>Reader</code> and <code>Writer</code> classes take any object that implements <code>IByteArray</code> and provide stream-like access (i.e., they keep track of the position in the byte array when things get read/written). <code>Reader</code> and <code>Writer</code> take care of byte order conversion (from/to network byte order) and provide methods for writing the basic data types (ints, floats, doubles, IByteArray).
    <li>Finally, there is the <code>ISerializable</code> interface. An object that implements this interface can be given directly to a <code>Writer</code> or <code>Reader</code>. In practice, the interface contains methods that convert the object into a byte array and restore it from one. (Actually, now that I think about it, the interface methods should be given a <code>Reader</code> or <code>Writer</code> object, not an IByteArray... I'll change it.)
    </ul>

    Some simple examples:
    <pre>
    Block block; // Just an array of bytes.
    Writer writer(block);
    writer << "Hello world" << dint(10);

    std::string str;
    dint num;
    Reader(block) >> str >> num;

    (...)

    File* someFile; // From the de::FS
    Writer writer(*someFile);
    writer << mySerializableObject;
    </pre>

    <blockquote>We should decide upon standard (basic) binary format structure that is applied to all map data files other than resources (not the saved game, client-view screenshot for example). I propose a very simple chunked structure, with a header (containing additional metadata describing the content of the file so that the type of data and its validity can be discerned easily) followed by a chunk of data.</blockquote> Yes, let's keep it as simple as possible. Something like this maybe:
    <pre>
    CHUNK HEADER: magic number ID (determines type of chunk)
                  length of data
    CHUNK DATA:   ...bytes...

    (next chunk header... / end of file)
    </pre>

    When it comes to versioning (e.g., when a new data member is added to a sector and it needs to be saved), it would be nice if the format could accommodate it without needing to resort to splitting the data of a single sector into multiple chunks (where the new data is stored in a separate chunk after the old one). Maybe each data member (e.g., sector floor height) should have its own ID number, so that the members can be read/written in any order and new ones can be added without breaking the format.
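
    For example, writing a sector could then look something like this (a sketch only; the IDs, types, and member names are illustrative, and <code>dfloat</code> is an assumed typedef):
    <pre>
    // Illustrative only; IDs and members are not final.
    struct Sector { dfloat floorHeight, ceilingHeight; };

    enum SectorMember { FloorHeight = 1, CeilingHeight = 2 };

    void writeSector(Writer& to, const Sector& sector)
    {
        // Each member is prefixed with its ID and length, so readers
        // can skip members whose IDs they don't recognise.
        to << dint(FloorHeight) << dint(sizeof(dfloat)) << sector.floorHeight;
        to << dint(CeilingHeight) << dint(sizeof(dfloat)) << sector.ceilingHeight;
    }
    </pre>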

    Whatever the format ends up being, it should be documented in the wiki.