GL2 models; Oculus Rift DK2
I've merged my remaining bigger 1.15 work items to the master: the "beta" version of the GL2 model renderer and updates to the VR code for Oculus Rift DK2. The objective is now to clean these up and start bug fixing for 1.15.
GL2 model renderer
I've been working on a new 3D model renderer that is based on the Doomsday 2 libraries. It is completely separate from the old renderer; all existing MD2/DMD model packs are still rendered using the old renderer and this new code will not improve their appearance at all. At some point I anticipate that the old renderer will be removed; the features of the old renderer will be emulated on top of the new code.
In the 1.15 release, the new GL2 model renderer will not yet be feature complete. I'm calling this a "beta" because I want to develop the renderer together with you guys who actually create the models. The functionality that is currently implemented is enough for developing models with new file formats, skeletal animation, normal/specular/emission maps, and basic state-triggered animations. I've written a wiki page that explains the current status: GL2 model renderer.
The new renderer is significantly different from the old one; you should basically forget everything you know about the old model definitions and get acquainted with the Doomsday 2 package and asset system.
Feedback and questions are welcome! After the 1.15 release is out of the way, I'll continue developing the GL2 renderer in unstable builds guided by your needs. By 1.16, this should be solid enough for actual gameplay with full-blown model packs.
NOTE: The current unstable builds don't yet allow loading packages by user request. This will be implemented soon in an upcoming build.
Oculus Rift DK2
Another bigger topic is the updated VR code that is now compatible with Oculus Rift DK2. With the much improved head tracking, the Doomsday UI is now visualized as a more comfortable large wall-like display in front of you. It is automatically scaled for a suitable size. You can also look around more freely in the game world, peeking around corners etc. Remember to reset your tracking position while standing up to match the Doom Guy's pose.
I faced some trouble getting the Windows build up and running; at this time I haven't been able to verify that it actually works at all. In other words, YMMV! We'll sort out any issues in upcoming builds, of course.
See How to use Oculus Rift in the wiki for instructions.
Binary packages for Ubuntu
Finally, a note about the Ubuntu binary package we offer in the Build Repository. Going forward, they will be for 14.04 LTS. (Also, don't forget about the PPA.)
Comments
Keep up the good work.
I saw on the page that, for certain textures, no alpha value is permitted. It didn't say anything about the emission maps. Does this make it possible to use an alpha value, allowing for a smooth transition from dark to glowing, for example on a cacodemon's eye when it gets ready to fire?
Also, since the emission maps are basically just added afterwards, this means that you use them as emission masks, right?
Sorry to bother so much about it, but it is just that it is extremely interesting to me!
-Jon
The current generic shader applies the emission map simply additively as RGB multiplied by the emission map's alpha channel, i.e., it's just an additional layer on top of the model.
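To make the blending rule concrete, here is a toy per-pixel sketch in Python (my own illustration, not Doomsday's actual GLSL shader): the emissive RGB, scaled by the emissive alpha, is added on top of the lit diffuse color and clamped.

```python
# Additive emission as described: result = lit diffuse + emissive.rgb * emissive.a,
# clamped to the displayable range. Purely illustrative, not engine code.

def blend_emissive(diffuse, emissive, light=1.0):
    """diffuse: (r, g, b) in 0..1; emissive: (r, g, b, a) in 0..1;
    light: scalar lighting factor applied to the diffuse layer only."""
    er, eg, eb, ea = emissive
    return tuple(min(1.0, d * light + e * ea)
                 for d, e in zip(diffuse, (er, eg, eb)))

# A heavily shadowed red eye (light=0.2) with a strong red emissive layer
# still glows, because the emission ignores the lighting term:
glowing = blend_emissive((0.5, 0.1, 0.1), (1.0, 0.0, 0.0, 0.8), light=0.2)
assert glowing[0] > 0.8  # red channel stays bright despite the shadow
```

Note how the emission is unaffected by the `light` factor, which is exactly why it reads as "an additional layer on top of the model".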
I haven't been creating many 3D models myself; what would be the most useful default way to apply the emission map?
- What model formats are supported? The page on the wiki is empty: http://dengine.net/dew/index.php?title=Supported_3D_model_formats
- Does your general-purpose shader support different blending modes such as alpha test, alpha blend, and additive blend?
- For effects, it's very important to support material animation features (opacity, emissive mask intensity, texture changes, and UV scrolling). Are you planning to implement these?
I'm ready to move my models onto the new rails and help you polish the new model renderer.
So far I've been using MD5, however other formats have more features. We should pick one or two recommended formats that allow the artist to specify as many things as possible in the model itself, so that Doomsday's model asset definitions can be less verbose / easier to create. I expect that we'll need to compromise between specifying some things in the model file and some in the definitions, though.
Which format would you recommend out of the ones Assimp supports?
Blending modes are not yet implemented, however there will at least be alpha blended and additive modes.
It does seem very important to be able to animate the materials/UVs, yes. Is this something that can be specified in the model file (in which case Assimp hopefully can pass the information along to Doomsday), or should this stuff be specified externally in the model definition? I suppose if it's possible to specify a set of materials in the model, Doomsday could animate the sequence of materials.
Excellent! It'll be a lot of help to have some real-world models to develop the renderer with.
I've got to put a vote in for FBX, though. It's a modern format and is used by several game engines, such as Unity, to load models. I think having that option would be a boon to Doomsday in general by encouraging a wider range of 3D artists to contribute. Again, there is a wide range of tools, converters, viewers, etc. available.
KuriKai and I can supply a variety of meshes in different formats; what do you need? <:-P
I know nothing about model making, so this may be a stupid question, but I notice that there is a limit of 4 maps while the new renderer currently supports 5 different map types.
Is 4 the limit due to typical video cards, or are two of the map types so similar that they never need to be (or can't be) combined? The number 4, when there are 5 map types, just stood out to me.
There are a lot of different maps supported by different types of renderer. MD5 supports height maps and there are certain renderers that support separate specular and gloss maps. It is possible to combine them. Parallax maps have a height map packaged into the alpha channel of a normal map. So you can save on texture maps (and thus on package sizes, loading times etc.).
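A minimal sketch of the packing trick mentioned above (my own illustration, not engine code): a parallax/height map stored in the alpha channel of an RGBA normal map, so one texture carries both surface normals and height.

```python
# Pack a height map into the alpha channel of a normal map, and unpack it
# again. Pixels are plain tuples here; a real engine would use image buffers.

def pack_parallax(normal_rgb, height):
    """normal_rgb: list of (r, g, b) pixels; height: list of 0..1 values."""
    return [(r, g, b, h) for (r, g, b), h in zip(normal_rgb, height)]

def unpack_parallax(rgba):
    """Split packed RGBA pixels back into normals and heights."""
    normals = [(r, g, b) for r, g, b, _ in rgba]
    heights = [a for _, _, _, a in rgba]
    return normals, heights

# A "flat, facing up" tangent-space normal (0.5, 0.5, 1.0) with height 0.25:
packed = pack_parallax([(0.5, 0.5, 1.0)], [0.25])
assert unpack_parallax(packed) == ([(0.5, 0.5, 1.0)], [0.25])
```

The savings come from halving the number of texture files and lookups: one RGBA sample yields both the normal and the height.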
One thing I would suggest is to support PNG graphics. Managing alpha on PNG is a lot more straightforward.
I think it is better to keep the material definitions outside the models. Doomsday materials may have many more parameters than the model format can handle.
N.B. Of course I don't mean animating the UVs, but rather a repeatable texture scrolling through static UVs.
I'll send a new model to you for testing at the end of the week.
Also, are you going to have blocking meshes to stop player progression? If so, you could import models and use them as map units (terrain, streets, buildings etc). The possibilities are endless.
http://forum.unity3d.com/attachments/bi ... jpg.20237/
Skyjake: I just deleted what I wrote in this message, 'cause I made an error. Next message is to you, though!
-Jon
Oh no! I am so tired after work that I completely misread what you wrote!
The best way, in my opinion, would be to have the pixel be fully bright, ignoring lighting, when it is the highest value.
When it is dark, there is no change to the texture.
Is the emit texture black and white, or is it colored? If colored, I don't know - maybe the color would override the diffuse with a higher value.
Here's the thing: what's most useful?
A solid-color emission obeying light wouldn't make much sense in a model or be very useful, since you may as well have just drawn the diffuse texture as the end result of the diffuse+emit.
A colorized emit map might be better than monochrome, since it might be good to have something like the Iron Lich's eyes, for instance, be fully bright red when animated, but dull grey or black when destroyed. With a monochrome map, that wouldn't be achievable. Everything achievable with a monochrome map would be achievable with a colored one; you'd just use the same color on the model as the original diffuse texture to make it appear bright regardless of light level.
Brightness based on luminosity (there's that word I was looking for; still recuperating from work) would keep that area of the model at at least a certain brightness, regardless of light level, depending on the luminosity.
In other words, it could work like this image, where the glow always stays bright:
http://blenderartists.org/forum/attachm ... 1261318287
Notice how the blue windows are bright, regardless of the shadow? Something affected by the light wouldn't be as useful.
Now, the luminosity or intensity of the color in the emit map (how close to 255) would determine how bright it is, but not how "white" it is.
I'm a little confused as to whether or not that would cause problems with white lights. No, because white would be the default full intensity, right? Just as (0, 0, 255) would be a solid, bright blue, and darker pixels would be less bright or faded, right? There's no such thing as grey or dark lights in the real world. The way a dim light would be simulated (e.g. a darker red light) would be to just give those pixels less intensity anyway. Yes, that sounds proper! Only adding brightness, never subtracting it, and doing it with color, would be the most versatile and useful solution. This would be on a per-pixel basis for the texture, the brightness that is! It would help simulate more realistic glows that fade with distance from the place where the emission is intended to be brightest.
I hope that helps, because I think that would be the most useful. After all, I can't think of a use for light-affected solid emit maps that only act as decals, except maybe to fade from one texture to another, and I can't see a more realistic application of that than a glow. As for similar textures with things added onto them, like pain skins, you could just as easily make differing diffuse textures already drawn differently, right?
Also models with complex materials (normals/height and specular, at least) would be good.
This is because I've added normal maps and height maps separately. Both result in surface normals; the height map gets converted to a normal map before use. Only one surface normal map is passed to the shader.
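The height-to-normal conversion mentioned above can be sketched like this (an illustrative Python version using central differences, not Doomsday's actual implementation): the gradient of the height field gives the surface slope, from which a per-pixel unit normal is built.

```python
# Derive per-pixel normals from a 2D height field. The `scale` parameter is a
# made-up knob controlling how strongly height differences tilt the normals.
import math

def height_to_normals(h, scale=1.0):
    """h: 2D list of heights; returns a 2D list of unit normals (x, y, z)."""
    rows, cols = len(h), len(h[0])
    out = []
    for y in range(rows):
        row = []
        for x in range(cols):
            # Central differences, clamped at the edges of the map.
            dx = (h[y][min(x + 1, cols - 1)] - h[y][max(x - 1, 0)]) * scale
            dy = (h[min(y + 1, rows - 1)][x] - h[max(y - 1, 0)][x]) * scale
            n = (-dx, -dy, 1.0)
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            row.append(tuple(c / length for c in n))
        out.append(row)
    return out

# A flat height field yields straight-up normals:
flat = height_to_normals([[0.0, 0.0], [0.0, 0.0]])
assert flat[0][0] == (0.0, 0.0, 1.0)
```

Since both a height map and an authored normal map reduce to the same data, the renderer only ever needs to hand the shader a single normal map.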
The renderer combines all the texture maps of a model into an atlas, so the shader can apply as many maps as it wants. One can freely modify the number of texture coordinates passed to the renderer, so it really depends on what the maps are being used for.
Yes, PNGs are supported.
I see, yeah. Thanks for the UV example!
In the long run, we should be able to use similar GL2 materials with world/map surfaces, too, if they're implemented in the engine.
Thanks!
This is a good idea. In the future, in approximate terms, the map renderer will be operating behind the scenes so that it basically breaks up the map into static "3D models" that then get drawn using GL2 code much like the 3D models representing objects are drawn. We can well use some custom-designed 3D models representing parts of the map, too. My only reservation is regarding collision testing, if that is derived automatically from the shape of the model. The solid volumes could be defined manually, of course.
The emissive map is a colored (RGBA) image.
The way the current shader applies emitted light, it basically requires having black regions in the diffuse map, like in this picture: http://www.witchbeam.com.au/unityboard/ ... _enemy.jpg
If I understood you correctly, this is basically what you meant?
I didn't know it was a truly emissive map! No complaints! No suggestions! I misunderstood exactly what it was! I think you totally knew what you were doing with that! So it's basically for lights and glows then? That's what it looks like. That looks very useful!
You said it requires black on the diffuse, right? If there is no black on the diffuse map, does it merely blend with the portion of the diffuse texture that it overlaps? If so, that's even better, given one other thing: that the strength of the emission can be controlled. You know, like a throbbing glow in some instances, say if you added that material to a wall with glowing alert lights or something! You would have the "off" texture of the wire lamp, or something like this:
http://www.classicnosparts.com/wp-conte ... mp%20a.JPG
The glow then would blend with and/or brighten it to look like it's turning on. Does it work like that, or is it strictly only working if the diffuse texture is black there?
The example I gave in the other post was with eyes, which would require their normal texture. Let's take the cacodemon, for instance. You know, he has regular old cat-looking eyes or something like that. Then, when he goes to shoot fireballs, his eye lights up like a Christmas bulb! In such a case, it would be useful to be able to have the emission map over a texture that is not completely black, you know what I mean? Otherwise, if it must be only over black, could the emission strength be controlled to the point where it can be dulled so it doesn't look glowing, and only brightened intensely when it needs to be bright?
Like this:
http://3.bp.blogspot.com/-LYXIHZc4bqQ/T ... ender1.jpg
I'm just trying to think about the games themselves, you know what I mean?
For instance, a glow on a weapon would only be in effect while the weapon is being fired, except of course for certain variations of plasma weapons.
In Heretic, the elvenwand crystal would glow.
In Hexen, maybe the serpent staff would glow when used.
You know what I mean? I can't think of too many things that are solid lights in the games besides lights themselves, you know what I'm saying?
Sorry to bother you with too much stuff - I am just so excited!
I noticed in the documentation for model assets that, to have an animation for a certain section of the model, you use the node variable. I'm wondering how you define this. Are you defining, for instance, in MD5 files the root joint, and it affects all of the joints branching out from it? I'm just slightly unclear on this.
I just mean to say with root joints like this:
Say you only want for the running animation, your character's legs always running, whether he's shooting or not. So, the foot bone is connected to the shin bone and the shin bone is connected to the leg bone; the leg bone is connected to the pelvis bone (lol). Anyway, say the pelvis bone is named "pelvis" in the MD5 files. Would the code look like this?
Not sure how that would work with a player, but is that basically how the node is defined?
Will it be possible to change texture maps within an animation cycle? Say for example, during a pain animation, I can swap out the texture maps for different pain skins within that animation, so that different skins can be applied to the model for different frames of the animation?
So, please do prepare alternative damage/animation textures, although keep in mind the limitation that they all need to fit on a single texture atlas. This will restrict either the texture resolution or the number of frames used.
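The trade-off is simple arithmetic; here's a back-of-the-envelope illustration in Python (the 4096-pixel atlas size is an assumption for the example, not an engine constant): with a fixed atlas, more frames means lower per-frame resolution.

```python
# How many square frames of a given size fit in a square atlas, assuming a
# simple uniform grid packing. Purely illustrative numbers.

def max_frames(atlas_size, frame_size):
    """Count frame_size x frame_size tiles fitting in an atlas_size square."""
    per_side = atlas_size // frame_size
    return per_side * per_side

assert max_frames(4096, 1024) == 16  # 16 high-resolution frames...
assert max_frames(4096, 512) == 64   # ...or 64 frames at half the resolution
```

So doubling the number of animation skins roughly means halving the linear resolution of each, once the atlas is full.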
I'm mainly thinking of the movement-modifying code pointers: Heretic's A_Sor1Chase and, in particular, HeXen's A_FastChase.
I suppose A_Sor1Chase just speeds up the mobj's movement states for a bit and thus doesn't 'really' need a special animation. Though it does have an internal counter (i.e. it speeds up the mobj for the next 20 walking states, IIRC) for basing a special animation off.
But A_FastChase I could imagine being a good candidate for a special animation, i.e. Zedek visually strafing, or strafing and attacking. Of course, A_FastChase has no counter like A_Sor1Chase; as I understand it, it literally just thrusts the mobj 90 degrees.
Of course, this is something that can go further and further... I admit, where does one stop...
For two examples, one could go beyond 'damage skins' and have 'damage animations' on top of that (i.e. a highly damaged Baron of Hell switching to a limping movement animation while a healthy one uses the standard movement animation). Or why not special animations for when A_FaceTarget is called (i.e. if you are strafing an Imp throwing a fireball, it will enter a unique animation of it turning to track you, different depending on whether A_FaceTarget leads to it turning left or right)?
And then there are the player mobjs; special animations for strafing while attack, turning on the spot... etc etc...
When it comes to MD5 in particular, I'm not sure what the established convention is, however there are multiple possibilities we can go with (none of this is implemented yet):
- An .md5anim can store all the animation sequences as one long animation, and we use Doomsday's model asset definition to break it down into shorter sequences that are then assigned to different actions of the object. This is the most straightforward to implement.
- One .md5anim can be used to animate several different models, if they share similar shape/structure.
- There can be a different .md5anim for each animation sequence.
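The first option above can be sketched like this (an illustrative Python version; the sequence names and frame ranges are invented for the example, and the real asset-definition syntax may differ): one long animation track is broken into named sub-sequences by frame range.

```python
# Slice one long list of animation frames into named sequences, the way an
# asset definition might map ranges of an .md5anim to different actions.

def split_sequences(frames, ranges):
    """frames: list of poses; ranges: {name: (start, end)}, end exclusive."""
    return {name: frames[start:end] for name, (start, end) in ranges.items()}

# Stand-in for 60 poses exported as a single .md5anim track:
all_frames = list(range(60))
seqs = split_sequences(all_frames,
                       {"walk": (0, 20), "attack": (20, 35), "pain": (35, 45)})
assert len(seqs["walk"]) == 20
assert seqs["attack"][0] == 20
```

The appeal of this approach is that the exporter stays dumb (one track out) while the definition file carries the semantic knowledge of which frames mean "walk", "attack", and so on.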
The current plan is to allow mobj states to trigger animation sequences. However, I would very much like to have Doomsday Script-controlled animation triggers as well, for more advanced logic. Furthermore, combining movement and other animations should bring a lot of extra value; strafing would be a movement animation separate from walking. Assuming a model has a separate bone for turning the head, it should be possible to have it controlled by Doomsday so that the head angle is aligned with the player's direction. I like the idea (filing it to the tracker).
I don't know if the above is the best wording, so I'll try to give an example: the wings of Heretic's Gargoyle being separate models going through a 'flapping' animation regardless of what animation the rest of the body is doing, with Dday being told to try to keep the wings attached to a certain point on the main body (i.e. if the body arches back, leans forward, etc., Dday will automatically move and rotate the independently animating wings).
I suppose, looking at the original sprites, I could sort of point at the Heresiarch's mana cubes in HeXen.
Again, I may be thinking of impossible functionality.
GZDoom is currently publicly beta-testing a GL 4.0-centred overhaul of its renderer, which seems to have the ability to scale down features to those only available with GL 3.0.
Newer versions of OpenGL certainly do feature some things that are potentially beneficial for drawing DOOM maps. However, the plan is to make some radical changes here in any case, which will likely not require the latest and greatest version of the API. Right now it's more beneficial to set the bar at a conservative level and then re-evaluate as necessary as the design for the 2.0 map renderer becomes more concrete.
Also, it doesn't make sense to put an increasingly hardware-intensive game on a puny smartphone with battery limitations, let alone no gamepad, mouse, or keyboard. I remember when they made a Star Fox game for the DS that used the stylus; it completely ruined the gameplay and I regretted buying it.
Let's not hold back progress just to appeal to a ridiculous platform like a smartphone. It isn't made for games like Doom; playing on a smartphone isn't at all practical, and I don't think it would be any fun. A touchscreen is only practical for games like Candy Crush and puzzles. What is the gaming world coming to these days?
The other mistake that I see a lot on the internet is the idea that "mobile gaming means Candy Crush-like experiences". This is naive, to say the least, and if you believe that this is all mobile gaming has to offer then you are missing out considerably.
Yesterday I was lying in bed, very much enjoying a spot of Quake 3 Arena on my Galaxy Note 3 with a Moga Pro. This is console-quality entertainment on a mobile device that easily fits in my pocket and that I can play anywhere.
As for the screen size, that's really not even an issue. If I'm playing in bed, then it's only inches from my face. Otherwise, I can beam the image to any TV in the house if I do want to play on the big screen.
Long story short: mobiles are absolutely viable platforms for games like this. It is just taking the wider gaming community a long time to wake up and take notice.
To provide further anecdotal evidence, another game I play a lot of is Hearthstone, the CCG by Blizzard. Despite it being a natural fit for mobile and tablet devices, I'm presently forced to boot up my desktop simply to play it. So from my perspective, the ties to the desktop are actually a negative influence on my gaming tendencies. Once Hearthstone is ported to Android (this fall, I believe) and, in the future, Doomsday as well, the majority of my gaming will be done on mobile. Clearly, I'm not the average user, as I don't play all that many games. It is, however, food for thought.
What is interesting to me is that the overall trend is that the desktop is slowly moving away from the "jack of all trades" role and is instead becoming more the preserve of the creative industries: artists, musicians, programmers, etc.