GL2 models; Oculus Rift DK2

edited 2014 Sep 24 in Developers
I've merged my remaining bigger 1.15 work items into master: the "beta" version of the GL2 model renderer and updates to the VR code for the Oculus Rift DK2. The objective now is to clean these up and start bug fixing for 1.15.

GL2 model renderer

I've been working on a new 3D model renderer that is based on the Doomsday 2 libraries. It is completely separate from the old renderer; all existing MD2/DMD model packs are still rendered using the old renderer and this new code will not improve their appearance at all. At some point I anticipate that the old renderer will be removed; the features of the old renderer will be emulated on top of the new code.

In the 1.15 release, the new GL2 model renderer will not yet be feature complete. I'm calling this a "beta" because I want to develop the renderer together with you guys who actually create the models. The functionality that is currently implemented is enough for developing models with new file formats, skeletal animation, normal/specular/emission maps, and basic state-triggered animations. I've written a wiki page that explains the current status: GL2 model renderer.

The new renderer is significantly different from the old one; you should basically forget everything you know about the old model definitions and get acquainted with the Doomsday 2 package and asset system.
Feedback and questions are welcome! After the 1.15 release is out of the way, I'll continue developing the GL2 renderer in unstable builds guided by your needs. By 1.16, this should be solid enough for actual gameplay with full-blown model packs.

NOTE: The current unstable builds don't yet allow loading packages by user request. This will be implemented soon in an upcoming build.

Oculus Rift DK2

Another bigger topic is the updated VR code that is now compatible with Oculus Rift DK2. With the much improved head tracking, the Doomsday UI is now visualized as a more comfortable large wall-like display in front of you. It is automatically scaled for a suitable size. You can also look around more freely in the game world, peeking around corners etc. Remember to reset your tracking position while standing up to match the Doom Guy's pose.

I faced some trouble getting the Windows build up and running; at this time I haven't been able to verify that it actually works at all. In other words, YMMV! We'll sort out any issues in upcoming builds, of course.

See How to use Oculus Rift in the wiki for instructions.

Binary packages for Ubuntu

Finally, a note about the Ubuntu binary packages we offer in the Build Repository. Going forward, they will be built for 14.04 LTS. (Also, don't forget about the PPA.)

Comments

  • Sounds nice!

    Keep up the good work.
  • Hey SkyJake (or DaniJ),

    I saw on the page that, for certain textures, there is no alpha value permitted. It did not say anything for the emission maps. Does this then make it possible to use an alpha value, allowing for a smooth transition from dark to glowing for, for example, a cacodemon's eye when it gets ready to fire? :)

    Also, since the emission maps are basically just added afterwards, this means that you use it as an emission mask, right?

    Sorry to bother so much about it, but it is just that it is extremely interesting to me!
    :)

    -Jon
  • Psychikon wrote:
    I saw on the page that, for certain textures, there is no alpha value permitted. It did not say anything for the emission maps. Does this then make it possible to use an alpha value, allowing for a smooth transition from dark to glowing for, for example, a cacodemon's eye when it gets ready to fire? :)
    It's good to note that the exact way the texture maps are applied depends on the shader. Later there will be an option to choose a custom shader for a model, which means you can apply the texture color values exactly as you want.

    The current generic shader applies the emission map simply additively as RGB multiplied by the emission map's alpha channel, i.e., it's just an additional layer on top of the model.
    Also, since the emission maps are basically just added afterwards, this means that you use it as an emission mask, right?
    I haven't created many 3D models myself; what would be the most useful default way to apply the emission map:
    • Simply additively on top of everything else, like now.
    • As a mask (A=255 means that only the emission map is visible at the pixel?).
    • Added to diffuse lighting, meaning that a white emission map pixel means the diffuse map is fully lit at that spot. Specular light added normally.
    • Something else?
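For concreteness, the candidate modes above can be sketched per color channel as follows. This is a hedged illustration only: the function names and exact formulas are assumptions for the sake of the example, not Doomsday's actual shader code; all values are normalized floats in [0, 1].

```python
# Illustrative per-channel sketch of candidate emission blend modes.
# NOT Doomsday's actual shader formulas; all values are in [0, 1].

def blend_additive(diffuse, lit, emissive, emissive_a):
    # Current behavior: emissive RGB scaled by its alpha, added on top.
    return min(1.0, diffuse * lit + emissive * emissive_a)

def blend_mask(diffuse, lit, emissive, emissive_a):
    # Emission alpha selects between the lit diffuse and the raw emissive
    # color (A = 1.0 means only the emission map is visible at the pixel).
    return diffuse * lit * (1.0 - emissive_a) + emissive * emissive_a

def blend_light_boost(diffuse, lit, emissive, emissive_a):
    # Emission raises the diffuse light level: a white emissive pixel
    # means the diffuse map is fully lit at that spot.
    return diffuse * min(1.0, lit + emissive * emissive_a)
```

In an unlit area (`lit = 0.0`), the additive mode shows the raw emissive color, the mask mode shows it weighted by its alpha, and the light-boost mode shows the diffuse color at the emissive's brightness.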
  • Earlier I forgot to mention one feature that model animation will have: random alternative sequences. Meaning that you animate a few alternative walk styles, attacks, pain, etc. and they will play out randomly, giving some nice variation to the behavior of the objects and thus hopefully making things appear more life-like. No more robotic monotonous zombie walking, etc.
  • Hi Skyjake! That's really good news about the new GL2 models, and I have some questions.
    - What model formats can we use? The page on the wiki is empty: http://dengine.net/dew/index.php?title=Supported_3D_model_formats
    - Does your general-purpose shader support different blending modes such as alpha-test, alpha-blend, and additive blend?
    - For effects, it's essential to support material animation features (opacity, emissive mask intensity, texture changes, and UV scrolling). Do you plan to implement these?

    I'm ready to move my models onto the new rails and help you polish the new model renderer. :)
  • veirdo wrote:
    What model formats can we use?
    I'll update the wiki page a bit later. We use the Open Asset Import Library, which supports a wide variety of file formats: http://assimp.sourceforge.net/main_feat ... rmats.html

    So far I've been using MD5, however other formats have more features. We should pick one or two recommended formats that allow the artist to specify as many things as possible in the model itself, so that Doomsday's model asset definitions can be less verbose / easier to create. I expect that we'll need to compromise between specifying some things in the model file and some in the definitions, though.

    Which format would you recommend out of the ones Assimp supports?
    Does your general-purpose shader support different blending modes such as alpha-test, alpha-blend, and additive blend?
    Blending modes are not yet implemented, however there will at least be alpha blended and additive modes.
    For effects, it's essential to support material animation features (opacity, emissive mask intensity, texture changes, and UV scrolling). Do you plan to implement these?
    It does seem very important to be able to animate the materials/UVs, yes. Is this something that can be specified in the model file (in which case Assimp hopefully can pass the information along to Doomsday), or should this stuff be specified externally in the model definition? I suppose if it's possible to specify a set of materials in the model, Doomsday could animate the sequence of materials.
    I'm ready to move my models onto the new rails and help you polish the new model renderer.
    Excellent! It'll be a lot of help to have some real-world models to develop the renderer with.
  • Very welcome to hear this! Also bravo for the multiple formats. I have a personal preference for something like Valve or MD5, just as I'm used to working with them and there are loads of tools like viewers, compilers, tutorials available for using them.

    I've got to put a vote in for FBX though. It's a modern format and is used by several game engines, such as Unity, to load models. I think having that option would be a boon to Doomsday in general by encouraging a wider range of 3D artists to contribute. Again, there is a wide range of tools, converters, viewers, etc. available.

    KuriKai and myself can supply a variety of meshes in different formats, what do you need? <:-P
  • Firstly, of course this is awesome.

    However, I know nothing about model making and thus this may be a stupid question, but I notice that there is a limit of 4 maps, but the new renderer currently supports 5 different maps.

    Is the limit of 4 due to the typical video card, or are two of the map types so similar that they would never need to (or can't) be combined? Just the number 4, when there are 5 map types, stood out to me.
  • I'm pretty sure you can have a variety of maps on modern cards. I think the map limit exists from previous shader models that are supported for legacy purposes. I can make a next gen model, but I'm not really up on the technical specs for cards or shader models.

    There are a lot of different maps supported by different types of renderer. MD5 supports height maps and there are certain renderers that support separate specular and gloss maps. It is possible to combine them. Parallax maps have a height map packaged into the alpha channel of a normal map. So you can save on texture maps (and thus on package sizes, loading times etc.).

    One thing I would suggest is to support PNG graphics. Managing alpha with PNG is a lot more straightforward.
  • skyjake wrote:
    Which format would you recommend out of the ones Assimp supports?
    .SMD and .MD5 are widely supported by 3D modeling packages, and I agree with Tea Monster: it's easier to find information about them.
    skyjake wrote:
    It does seem very important to be able to animate the materials/UVs, yes. Is this something that can be specified in the model file (in which case Assimp hopefully can pass the information along to Doomsday), or should this stuff be specified externally in the model definition?
    I think it is better to keep the material definitions outside the models. Doomsday materials may have many more parameters than the model format can handle.
    N.B. Of course, I don't mean animating the UVs, but repeatable texture scrolling across static UVs.
    uvscroll.gif
    skyjake wrote:
    Excellent! It'll be a lot of help to have some real-world models to develop the renderer with.
    I'll send a new model to you for testing at the end of the week.
  • Yes, that looks very nice!

    Also, are you going to have blocking meshes to stop player progression? If so, you could import models and use them as map units (terrain, streets, buildings etc). The possibilities are endless.

    http://forum.unity3d.com/attachments/bi ... jpg.20237/
  • edited 2014 Aug 27
    Veirdo: Good idea! That reminds me of a certain Quake 3 shader.

    Skyjake: I just deleted what I wrote in this message, 'cause I made an error. Next message is to you, though! :)

    -Jon
  • @Skyjake:
    Oh no! I am so tired after work that I completely misread what you wrote!

    The best way, in my opinion, would be to have the pixel be fully bright, ignoring lighting, when it is the highest value.
    When it is dark, there is no change to the texture.

    Is the emit texture black and white, or is it colored? If colored, I don't know - maybe the color would override the diffuse with a higher value.

    Here's the thing: what's most useful?

    A solid-color emission obeying light wouldn't make much sense in a model or be very useful, since you may as well have just drawn the diffuse texture as the end result of the diffuse+emit.

    A colorized emit map might be better than a monochrome one, since it might be good to have something like the Iron Lich's eyes, for instance, be fully bright red when animated, but dull grey or black when it is destroyed. With a monochrome map, that wouldn't be achievable. All that can be achieved with a monochrome map would be achievable with a colored one; you'd just use the same color on the model as the original diffuse texture to make it appear bright regardless of light level.

    Brightness based on luminosity (there's that word I was looking for - still recuperating from work) would affect the model regardless of light level, always keeping that area at at least a certain brightness.

    In other words, it could work like this example, which always stays bright:
    http://blenderartists.org/forum/attachm ... 1261318287

    Notice how the blue windows are bright, regardless of the shadow? Something affected by the light wouldn't be as useful.

    Now, the luminosity or intensity of the color in the emit map (how close to 255) would determine how bright it is, but not how "white" it is.

    I'm a little confused as to whether or not that would cause problems with white lights. No, because white would be the default full intensity, right? Just as (0,0,255) would be a solid, bright blue, and darker pixels would be less bright or faded. There's no such thing as grey or dark lights in the real world; a dim light (e.g. a darker red light) would be simulated by just giving those pixels less intensity anyway. Yes, that sounds proper! Only adding brightness, never subtracting it, and doing it in color, would be the most versatile and useful solution. This would be on a per-pixel basis for the texture - the brightness, that is! It would help simulate more realistic glows that fade with distance from the place where the emission is intended to be brightest.

    I hope that helps, because I think that would be the most useful. After all, I can't think of a use for light-affected solid emit maps that only act as decals, except maybe to fade from one texture to another, and I can't see a more realistic application of that than a glow. As for similar textures with things added onto them, like pain skins, you could just as easily make differing diffuse textures already drawn differently, right?
  • Imagine the possibilities if one day you combine this with advanced Brutal Doom behavior. This is a major step forward and long awaited. It looks like we are almost at Doomsday 2.0.
  • KuriKai and myself can supply a variety of meshes in different formats, what do you need?
    It would be great to have some animated models so that I can develop the animation system further: could be for monsters, the player, or just animated decoration objects (trees, etc.). Don't worry about matching game tick timing exactly; these will be adjustable with definitions. The important part is that the animations look natural. For instance, an attack animation can have the monster's arms fully following their natural inertia and return to the basic pose. The animation can still be played out fully regardless of what the mobj state is.

    Also models with complex materials (normals/height and specular, at least) would be good.
    Vermil wrote:
    I notice that there is a limit of 4 maps, but the new renderer currently supports 5 different maps.
    This is because I've added normal map and height map separately. Both result in surface normals; the height map gets converted to a normal map before use. Only one surface normal map is passed to the shader.
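As an aside, a height-to-normal conversion like the one described here is typically done with finite differences on neighboring height samples. A minimal sketch under that assumption (illustrative only, not Doomsday's actual conversion code):

```python
import math

def height_to_normal(height, x, y, w, h, strength=1.0):
    """Derive a tangent-space normal at (x, y) from a height map.

    `height` is a function (x, y) -> value in [0, 1]; `w` and `h` are
    the map dimensions. Illustrative sketch using central differences.
    """
    def sample(sx, sy):
        # Clamp coordinates at the edges of the map.
        return height(min(max(sx, 0), w - 1), min(max(sy, 0), h - 1))

    dx = (sample(x + 1, y) - sample(x - 1, y)) * strength
    dy = (sample(x, y + 1) - sample(x, y - 1)) * strength
    # Flat regions yield the straight-up normal (0, 0, 1).
    length = math.sqrt(dx * dx + dy * dy + 1.0)
    return (-dx / length, -dy / length, 1.0 / length)
```

The `strength` factor controls how steep the resulting normals look; an engine would typically bake this per-texel into a normal map once at load time.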

    The renderer combines all the texture maps of a model into an atlas, so the shader can apply as many maps as it wants. One can freely modify the number of texture coordinates passed to the renderer, so it really depends on what the maps are being used for.
    One thing I would suggest is to support PNG graphics.
    Yes, PNGs are supported.
    veirdo wrote:
    I think it is better to keep the material definitions outside the models.
    I see, yeah. Thanks for the UV example!

    In the long run, we should be able to use similar GL2 materials with world/map surfaces, too, if they're implemented in the engine.
    veirdo wrote:
    I'll send a new model to you for testing at the end of the week.
    Thanks!
    Also, are you going to have blocking meshes to stop player progression? If so, you could import models and use them as map units (terrain, streets, buildings etc). The possibilities are endless.
    This is a good idea. In the future, in approximate terms, the map renderer will be operating behind the scenes so that it basically breaks up the map into static "3D models" that then get drawn using GL2 code much like the 3D models representing objects are drawn. We can well use some custom-designed 3D models representing parts of the map, too. My only reservation is regarding collision testing, if that is derived automatically from the shape of the model. The solid volumes could be defined manually, of course.
    Psychikon wrote:
    Is the emit texture black and white, or is it colored?
    The emissive map is a colored (RGBA) image.

    The way the current shader applies emitted light, it basically requires having black regions in the diffuse map, like in this picture: http://www.witchbeam.com.au/unityboard/ ... _enemy.jpg

    If I understood you correctly, this is basically what you meant?
  • @Skyjake:

    I didn't know it was a truly emissive map! No complaints! No suggestions! I misunderstood exactly what it was! I think you totally knew what you were doing with that! So it's basically for lights and glows then? That's what it looks like. That looks very useful!

    You said it requires black on the diffuse, right? If there is no black on the diffuse map, does it merely blend with the portion of the diffuse texture that it overlaps? If so, that's even better, given one other thing: the strength of the emission can be controlled. You know, like a throbbing glow in some instances - say, if you added that material to a wall with glowing alert lights or something! You would have the "off" texture of the wire lamp, something like this:
    http://www.classicnosparts.com/wp-conte ... mp%20a.JPG
    The glow then would blend with and/or brighten it to look like it's turning on. Does it work like that, or is it strictly only working if the diffuse texture is black there?

    The example I gave in the other post was with eyes, which would require their normal texture. Let's take the cacodemon, for instance. He has regular old cat-looking eyes or something like that. Then, when he goes to shoot fireballs, his eye lights up like a Christmas bulb! In such a case, it would be useful to be able to have the emission map over a texture that is not completely black, you know what I mean? Otherwise, if it must only be over black, would the emission strength be controllable so that it can be dulled until it doesn't look glowing, and only brightened intensely when it needs to be bright?

    Like this:
    http://3.bp.blogspot.com/-LYXIHZc4bqQ/T ... ender1.jpg

    I'm just trying to think about the games themselves, you know what I mean?

    For instance, a glow on a weapon would only be in effect while the weapon is being fired, except of course for certain variations of plasma weapons.
    In Heretic, the elvenwand crystal would glow.
    In Hexen, maybe the serpent staff would glow when used.

    You know what I mean? I can't think of too many things that are solid lights in the games besides lights themselves, you know what I'm saying? :)

    Sorry to bother you with too much stuff - I am just so excited! :D
  • skyjake wrote:
    Excellent! It'll be a lot of help to have some real-world models to develop the renderer with.
    Check your PM box.
  • Curious on something else:

    I noticed in the documentation for model assets that, to have an animation for a certain section of the model, you use the node variable. I'm wondering how you define this. Are you defining, for instance, in MD5 files the root joint, and it affects all of the joints branching out from it? I'm just slightly unclear on this.

    I just mean to say with root joints like this:
    Say you only want for the running animation, your character's legs always running, whether he's shooting or not. So, the foot bone is connected to the shin bone and the shin bone is connected to the leg bone; the leg bone is connected to the pelvis bone (lol). Anyway, say the pelvis bone is named "pelvis" in the MD5 files. Would the code look like this?
    state animation.POSS_RUN1 {
        sequence @0 {
            prob = 1.0
            node = "pelvis"  // legs running
        }
    }
    Not sure how that would work with a player, but is that basically how the node is defined?
  • Psychikon wrote:
    Not sure how that would work with a player, but is that basically how the node is defined?
    You have the gist of it, yeah. However, this part of the animation mechanism hasn't been fully implemented yet. My plan was to have Doomsday automatically control the running/walking/standing animation based on the speed of the object. This would not be defined with "state" defs, though, but with some kind of "movement" defs. This will allow a monster to play its attack animation (via the attack state sequence) even though it's walking at the same time.
  • Just curious, and I think you stated this earlier, but just confirming this...

    Will it be possible to change texture maps within an animation cycle? Say for example, during a pain animation, I can swap out the texture maps for different pain skins within that animation, so that different skins can be applied to the model for different frames of the animation?
  • Will it be possible to change texture maps within an animation cycle? Say for example, during a pain animation, I can swap out the texture maps for different pain skins within that animation, so that different skins can be applied to the model for different frames of the animation?
    Changing texture maps "on the fly" will be supported at some point in the future. I'm thinking this will be an improved version of the old "selector" feature, where one can specify when texture maps change either as part of an animation sequence or due to state changes in the represented object.


    So, please do prepare alternative damage/animation textures, although keep in mind that there will be the limitation that they need to fit on a single texture atlas. This will impose a restriction on either texture resolution or number of frames used.
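To get a feel for the single-atlas restriction, some back-of-the-envelope arithmetic helps. All sizes here are purely illustrative assumptions; the actual atlas dimensions and packing in Doomsday may differ.

```python
def max_skin_variants(atlas_size, map_size, maps_per_skin):
    """How many complete skin variants fit on one square atlas.

    Purely illustrative arithmetic: assumes square maps packed on a
    simple grid, with `maps_per_skin` textures per variant (e.g.
    diffuse + normal + specular + emissive = 4).
    """
    slots = (atlas_size // map_size) ** 2
    return slots // maps_per_skin

# E.g. a 4096x4096 atlas with 1024x1024 maps and 4 maps per skin
# leaves room for 4 complete skin variants; halving the map size
# to 512x512 raises that to 16.
```

This is the resolution-versus-frame-count trade-off in a nutshell: every extra damage skin costs atlas slots that could otherwise go to higher-resolution maps.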
  • We have got to the point where we have started to animate different monsters in the new pack. Can you get in touch with Kuri Kai, or let us know here some more details of how the new animation system works? Is it just like MD5 in that you have to animate different actions? Does MD5 operate just like in Doom 3 or are there any changes?
  • It's probably the sort of thing that is always going to be low priority, but something I wonder about with model animation is ultimately being able to define special (i.e. dedicated) animations for when mobj states are dynamically modified by code pointers.

    I'm mainly thinking of the movement modifying code pointers; Heretic's A_Sor1Chase and in particular HeXen's A_FastChase.

    I suppose A_Sor1Chase just speeds up the mobj's movement states for a bit and thus doesn't 'really' need a special animation. Though it does have an internal counter (i.e. it speeds up the mobj for the next 20 walking states, IIRC) that a special animation could be based on.

    But A_FastChase I could imagine being a good candidate for a special animation; i.e. Zedek visually strafing, or strafing and attacking. Of course, A_FastChase has no counter like A_Sor1Chase; as I understand it, it literally just thrusts the mobj 90 degrees.

    Of course, this is something that can go further and further... I accept it's hard to say where one stops...

    For two examples, one could go beyond 'damage skins' and have 'damage animations' on top of that (i.e. a highly damaged Baron of Hell switching to a limping movement animation while a healthy one uses the standard movement animation). Or why not special animations for when A_FaceTarget is called (i.e. if you are strafing an Imp throwing a fireball, it will enter a unique animation of turning to track you, which will differ depending on whether A_FaceTarget leads to it turning left or right)?

    And then there are the player mobjs; special animations for strafing while attacking, turning on the spot... etc., etc.
  • Not sure that this fits but, I've always thought that while an enemy is making his zig zag attack, he should be keeping an eye on the player, sort of scoping him out, sizing him up. Do you know what I mean? Can this be done? I know that the zig zag was a result of how sprites are presented, but with models, maybe it could make things more believable. I only offer this because of VR which is something that I am very interested in.
  • We have got to the point where we have started to animate different monsters in the new pack. Can you get in touch with Kuri Kai, or let us know here some more details of how the new animation system works? Is it just like MD5 in that you have to animate different actions? Does MD5 operate just like in Doom 3 or are there any changes?
    There are no details yet to share here because the animation system isn't fully implemented, nor will it be until 1.16. However, what you should be preparing for is having a skeletal animation sequence defined for each of the actions (state sequences) that an enemy can perform in Doom: idle/standing, walking/running, melee attack, missile attack, death (and possibly the "gib"/extra death), and raising from dead. You can also animate alternate variants of each sequence to give some extra flavor; Doomsday will choose between the specified variants randomly. Furthermore, I'm planning to allow mixing movement and attack animations (assigning separate sequences for different subtrees of the model nodes).

    When it comes to MD5 in particular, I'm not sure what the established convention is, however there are multiple possibilities we can go with (none of this is implemented yet):
    • An .md5anim can store all the animation sequences as one long animation, and we use Doomsday's model asset definition to break it down into shorter sequences that are then assigned to different actions of the object. This is the most straightforward to implement.
    • One .md5anim can be used to animate several different models, if they share similar shape/structure.
    • There can be a different .md5anim for each animation sequence.
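To illustrate the first option, slicing one long animation into named per-action sequences might look roughly like the following in a model asset definition. To be clear, every field name and the `frames` syntax here are hypothetical, invented purely for illustration; none of this is implemented or finalized.

```
// Hypothetical sketch only -- not actual syntax; nothing here is implemented yet.
asset model.monster.zombie {
    animation {
        // Slice one long .md5anim into per-action sequences by frame range.
        sequence walk   { frames = 0..19 }
        sequence attack { frames = 20..35 }
        sequence death  { frames = 36..60 }
    }
}
```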
    Vermil wrote:
    Something I wonder about with model animation is ultimately being able to define special (i.e. dedicated) animations for when mobj states are dynamically modified by code pointers.
    The current plan is to allow mobj states to trigger animation sequences. However, I would very much like to have Doomsday Script controlled animation triggers as well for more advanced logic. Furthermore, combining movement and other animations should bring a lot of extra value; strafing would be a movement animation separate from walking.
    PostFatal wrote:
    keeping an eye on the player, sort of scoping him out, sizing him up.
    Assuming a model that has a separate bone for turning the head, it should be possible to have it controlled by Doomsday so that the head angle is aligned with the player's direction. I like the idea (filing it in the tracker).
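The underlying math for such a head bone is simple to sketch. This is an illustrative stand-alone example only; the names, coordinate conventions, and clamping behavior are assumptions, not Doomsday API.

```python
import math

def head_yaw_toward(mobj_pos, body_yaw, target_pos, max_turn=math.radians(70)):
    """Yaw offset (radians) for a head bone so it looks at the target.

    Illustrative sketch: positions are (x, y) map coordinates, yaw 0
    points along +x, and the result is clamped so the head cannot
    rotate past `max_turn` relative to the body.
    """
    dx = target_pos[0] - mobj_pos[0]
    dy = target_pos[1] - mobj_pos[1]
    # Angle to the target, relative to the body's facing direction.
    offset = math.atan2(dy, dx) - body_yaw
    # Wrap into [-pi, pi) so the head turns the short way around.
    offset = (offset + math.pi) % (2.0 * math.pi) - math.pi
    return max(-max_turn, min(max_turn, offset))
```

In practice the engine would blend this offset onto the head bone's rest pose each frame, easing toward the target angle rather than snapping.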
  • Something I've wondered about is the possibility of being able to have multiple unrelated animations occurring on a model at the same time.

    I don't know if the above is the best wording, so I'll try to give an example: the wings of Heretic's Gargoyle being separate models going through a 'flapping animation' regardless of what animation the rest of the body is doing, with Dday being told to try to keep the wings attached to a certain point on the main body (i.e. if the body arches back, leans forward, etc., Dday will automatically move and rotate the independently animating wings).

    I suppose, looking at the original sprites, I could sort of point at the Heresiarch's mana cubes in HeXen.

    Again, I may be thinking of impossible functionality.
  • I'm not sure I'm going to understand the response (all I really know is that a given visual/rendering feature/method requires some version of OpenGL or higher), but why is Dday's new renderer seemingly aimed at GL 2.0 and not 3.0 or 4.0?

    GZDoom is currently publicly beta-testing a GL 4.0-centred overhaul of its renderer, which seems to have the ability to scale down features to those available with GL 3.0.
  • The straightforward answer is that we are aiming, where possible, for feature parity with mobile platforms like Android and iOS. We intend to bring Doomsday to mobile in the not-too-distant future. Support for anything newer than OpenGL ES 2.0 is still quite limited on those platforms, so to reach the widest possible user base we're aiming for OpenGL 2.0-level functionality as a minimum.

    Newer versions of OpenGL certainly do offer some things that are potentially beneficial for drawing DOOM maps. However, the plan is to make some radical changes here in any case, which will likely not require the latest and greatest version of the API. Right now it's more beneficial to set the bar at a conservative level and then re-evaluate as necessary as the design for the 2.0 map renderer becomes more concrete.
  • Why play a game like that on a tiny mobile screen with no keyboard or joystick? Too many games are made for mobile devices these days as it is. More focus should go to desktops, consoles, and laptops. It's like how some game companies decided to make games on tiny handhelds and reduce quality, instead of the good old days when they made games for advanced consoles. You can use an emulator, but resolution and quality are sacrificed, since they took the cheap, lower-quality route and made it for a puny handheld. It's even worse on a smartphone, because there are no buttons. I don't think a first-person shooter has any business on a smartphone. How on earth could that work? And why play it on a tiny screen? This era of tiny screens makes no sense. When it comes to displays, bigger width and height are better, as long as resolution is on par.

    Also, it doesn't make sense to put an increasingly hardware-intensive game on a puny smartphone with battery limitations, let alone no gamepad, mouse, or keyboard. I remember when they made a Star Fox game for the DS that used the stylus, and that completely ruined the gameplay; I regretted buying it.

    Let's not hold back progress just to appeal for a ridiculous platform like a smartphone. It isn't made for games like Doom, and using a smartphone to play it isn't at all practical, and I don't think it would be any fun. Using a touch screen for a game is no fun. A touchscreen for a game is only practical for games like Candy Crush and puzzles. What is the gaming world coming to these days?
  • One mistake you are making is assuming that the presence of Doomsday on mobile platforms will negatively affect the experience on other platforms (like your preferred Windows desktop). The fact is that market trends show the traditional desktop slowly but surely disappearing from the set of devices the average user owns. This means that in order to stay relevant, it is absolutely necessary for Doomsday to be made available on platforms that are on the rise. If recent figures are to be believed, there are approximately 2 billion smartphone devices in use worldwide, many of which are potentially capable of running Doomsday.

    The other mistake that I see a lot on the internet is the idea that "mobile gaming means Candy Crush -like experiences". This is naive to say the least and if you believe that this is all mobile gaming has to offer then you are missing out considerably.

    Yesterday I was lying in bed and very much enjoying a spot of Quake 3 Arena on my Galaxy Note 3 with a Moga Pro. This is console-quality entertainment on a mobile device that easily fits in my pocket, and that I can play anywhere.

    As for the screen size, that's really not even an issue. If I'm playing in bed, it's only inches from my face. Otherwise, I can beam the image to any TV in the house if I do want to play on the big screen.

    Long story short: mobiles are absolutely viable platforms for games like this. It is just taking the wider gaming community a long time to wake up and take notice.

    To provide further anecdotal evidence, another game I play a lot is Hearthstone, the CCG by Blizzard. Despite being a natural fit for mobile and tablet devices, I'm presently forced to boot up my desktop simply to play it. So from my perspective, the ties to the desktop are actually a negative influence on my gaming tendencies. Once Hearthstone is ported to Android (this fall, I believe) and, in the future, Doomsday also, the majority of my gaming will be done on mobile. Clearly I'm not the average user, as I don't play all that many games. It is, however, food for thought.

    What is interesting to me is that the overall trend is that the desktop is slowly moving away from the "jack of all trades" role and is instead becoming more the preserve of creative industry - artists, musicians, programmers, etc..