The test release is going to happen any day now. I'm thinking that with the extent of changes since the last time around a bump in the version number is now appropriate, so say a big "hello and welcome" to DirectQ 1.8.8.
As I have said, this is going to be a test release. I personally consider it quite stable, and it has circulated privately among a few people at various stages recently, but the real proof doesn't happen until something goes public.
One thing that has been on my mind lately.
The quality of DirectQ's external texture loader is something that I remember mentioning some months ago. This is probably some of the most horrible code I've ever written, and that was really brought home to me when I gave a copy of the full code to someone a coupla days ago. It works for sure, but it's messy, it's difficult to follow, it's not always clear what's going on in it, and it's a maintenance nightmare for me.
Adapting it to load .lmp files for Kurok skyboxes was the final straw. I tried once, gave up, and wrote a separate skybox loader for Kurok.
It's now clear that this was a misguided attempt at being all things to all people. The ability to load any texture from any directory, and caching of all external textures when a game directory is loaded - they're nice features for sure, but they're pretty damn marginal.
So this has been on the cards for being ripped apart and replaced with something more sensible for quite some time now, and it's going to happen soon. I won't hold up the release on account of it, but it will happen over the next few weeks.
The way I see the new code working is that it will first attempt to load a texture directly from the path specified, then fall back on /textures if not found. I can cache info about previously found textures for faster loading next time they're needed, but building a cache on startup will go.
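To make the intent concrete, the new lookup order might sketch out something like this (all the names here - FindExternalTexture, the exists() callback - are hypothetical illustrations, not DirectQ's actual code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the simplified lookup: try the path the caller specified
   first, then fall back on /textures.  The exists() callback stands in
   for whatever filesystem check the engine actually uses. */
static int FindExternalTexture (const char *path, const char *name,
                                int (*exists) (const char *),
                                char *out, size_t outsize)
{
    snprintf (out, outsize, "%s/%s", path, name);
    if (exists (out)) return 1;

    snprintf (out, outsize, "textures/%s", name);
    return exists (out);
}

/* toy filesystem stub for the example: only textures/rock.tga "exists" */
static int demo_exists (const char *p)
{
    return strcmp (p, "textures/rock.tga") == 0;
}
```

Caching a successful lookup then just means storing the final path against the name for next time, rather than rebuilding anything at startup.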
Textures in BSPs are the only real special case (there might be one other...) - in this case the search paths are /textures/mapname and /textures. I personally object to /textures/mapname on religious grounds (I think it should use the WAD name specified in worldspawn) but I'm going to be pragmatic and swallow my pride.
The one other special case might be player models. DirectQ doesn't really support external textures on player models owing to the need to support different shirt and pants colours, and that needs to change. I have some ideas on how to do it, one of which is greyscaling the texture and doing some pixel shader magic, but overall compatibility with other engines will be an important factor so I need to study before I commit.
All of this will finally and definitively resolve the ongoing glitches with external textures that have been reported to me, and that are another motivation for tackling this.
And there we have it, one part of the roadmap for the future (no, I haven't forgotten sounds...)
Tuesday, May 31, 2011
Posted by mhquake at 7:13 PM
As indicated previously, I'm going to be doing a test release shortly. This should be fun.
Today I started bringing in Kurok support. I guess that Kurok is a mite old now and may no longer be in its first flush of interest, but it's something I had a stated intention of doing for a long time, so here we are.
I'm not going to do full-on 100% support; if nothing else there are many default cvar values that it changes, so it's just flat-out not possible. I also have no intention of writing a new menu system, rewriting the HUD (again!) and writing another set of fog shaders - Kurok will just have to make do with some hacks around what DirectQ offers as standard. I'm not going to disrupt the basic game for the sake of a mod.
Skyboxes don't load for some reason; I haven't investigated what it's doing at all, but since it's based on an older Fitz I can compare and figure what's happening.
It uses the older fog start/end params, so I'll need to intercept its fog commands and tweak them so that it at least looks kinda right in DirectQ.
It stuffcmds a load of stuff every single frame. Eeeeeevil.
It seems to assume that fullbright support is not present in the engine. So does Nehahra, and the solution is the same - just don't load fullbrights if Kurok is detected running.
The release will include Kurok support, but I'm not sure yet how comprehensive it's going to be. We'll see.
Posted by mhquake at 12:42 AM
Monday, May 30, 2011
Textures went quick and easy and I've now got a lit, textured and animated IQM in-game. Still have loose ends to finish up, but it's plain sailing from here.
Right now I'm just hacking a model in to replace the default grunt (and hacking a bit more to get scaling right), so the animations use the grunt's frame numbers but the actual frames come from the model I'm using - they don't quite match. Yes, we have QC $frame compatibility.
It's slow. By God it's slow.
Part of the blame for that must lie with the vertex counts in the sample model I'm using (which are quite high), but a good chunk of it comes from a pure software animation scheme (high vertex counts don't help here either).
In the normal run of things I can get maybe 1200 FPS in that scene so the drop to under 500 comes from IQM alone. Run without animation and we get almost 1000, so software animation effectively cuts framerates in half (and I've a ninja CPU so you can't blame that). This is where designing a 3D model format around graphics hardware would have been a Good Thing; animate it in hardware and we'll get MUCH better performance.
I could probably do some tricks like cache animations, coarsen interpolation factors, and so on, and claw some of that performance back, but it shouldn't be necessary. Yes, I'm pissed off at this.
Posted by mhquake at 8:19 PM
Today I got the basics of IQM loading and drawing up and running in RMQ. By basics I mean bare minimum necessary to get an IQM in-game - the loader, some animation, and minimal drawing. It's untextured, I'm using a sample model, animation just cycles through the frames in the model, and a lot of other stuff is missing.
My general impression is that the loader is disgusting, animation is disgusting and drawing is clean and easy.
First up, the loader. I'm not overly familiar with model formats so I can't say what I would do differently, but I was rather shocked at the amount of load-time processing that's necessary. What seems even more bizarre is that it doesn't appear to have been done with the aim of reducing file size or anything like that - the processing is more or less done in-place. I'm not entirely certain what the thinking behind that was, so I'll move on.
Animation. It's all CPU-side. Software T&L country, 12-year-old technology, steam-power man! DirectQ animates MDLs on the GPU and it more than doubled framerates in many cases; the format should have been designed around GPU-side animation. Store your bone matrices in a shader's constant registers, store an index with each vertex position, and do a lookup. Bam. Of course it reduces the number of bones you can have, but for most practical purposes you're not going to be going overboard with this in a game engine. Any forward-thinking format should really be designed around doing as much as possible on the GPU. Quake is already CPU-bound; this just makes it worse.
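For illustration, here's that lookup idea in CPU-side C - a sketch only, as the real thing would live in a vertex shader with the bones array in constant registers, and the names and 3x4 matrix layout are my assumptions, not IQM's or DirectQ's:

```c
#include <assert.h>

typedef struct { float m[3][4]; } bonemat_t;        /* 3x4 bone transform */
typedef struct { float pos[3]; int bone; } skinvert_t;

/* What the proposed vertex shader would do: look the bone matrix up by
   the index stored with the vertex and transform the position by it.
   On the GPU the bones array would sit in constant registers, which is
   what caps the bone count. */
static void SkinVertex (const bonemat_t *bones, const skinvert_t *v, float out[3])
{
    const float (*m)[4] = bones[v->bone].m;

    for (int i = 0; i < 3; i++)
        out[i] = m[i][0] * v->pos[0] + m[i][1] * v->pos[1]
               + m[i][2] * v->pos[2] + m[i][3];
}
```

With SM 2.0's 256 constant registers and 4 registers per 3x4 matrix, you'd get room for roughly 60 bones per draw call - tight, but workable for a game model.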
On the bright side though it looks like I'm going to be able to support the $frame stuff QC-side. I don't see any reason whatsoever why it wouldn't work.
Drawing. Just set your vertex arrays and make a glDrawElements call, that simple. Right now I'm making no attempt to be efficient about stuff like state change batching, as the priority was just to get a working implementation. I'll come back and handle that later.
More drawing. The format uses 32-bit indexes by default. More software T&L country. If your 3D hardware doesn't support 32-bit indexes - guess what - OpenGL will drop your vertex pipeline to software emulation. Granted, most hardware does nowadays, but Quake remains one area where legacy hardware is still more widely used than normal. I switched the in-memory format to 16-bit indexes, which puts an upper limit of 64k vertexes on an IQM for use in RMQ. It's still high enough.
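The narrowing itself is a trivial load-time pass; something along these lines (hypothetical names, and rejecting rather than splitting models that don't fit):

```c
#include <assert.h>
#include <stdint.h>

/* Narrow the IQM's 32-bit indexes to 16-bit at load time.  Returns 0 if
   any index won't fit - i.e. the model has more than 64k vertexes and
   would need to be rejected (or kept at 32 bits on capable hardware). */
static int NarrowIndexes (const uint32_t *in, uint16_t *out, int count)
{
    for (int i = 0; i < count; i++)
    {
        if (in[i] > 0xffff) return 0;
        out[i] = (uint16_t) in[i];
    }

    return 1;
}
```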
Adding a new model type to Quake is moderately painful. They don't fit seamlessly in, and there are a few places where you need to handle things. Going OO, with a base model_t class and every other type inheriting from it would be a huge improvement in code cleanliness and work required.
I decided not to put IQMs into cache memory as it simplifies a lot of things in the loader and the renderer. Cache memory is really an old-hat concept from the Pentium 60/MS-DOS/8 MB days and should probably be taken out back and sent to a better place anyway.
Think that's about it for now; the next step is to finish one outstanding part of the loader (textures) and clean up things OpenGL-side a bit more. I don't doubt that I'll have something more to report after that.
Posted by mhquake at 12:34 AM
Saturday, May 28, 2011
I'm away from my main machine at the moment and not getting much done, but I did finally bite the bullet and remove normal vector compression from MDL rendering. This comes close to tripling the size of an MDL in video RAM, but the tradeoff is that drawing the thing can now skip two decompression steps per vertex (for the current and previous interpolation blends) and gain some good extra performance when under load.
On a typical 128 MB D3D9-class accelerator this won't cause any problems for anybody, but it's something you should be aware of. To balance it, I've reduced the system memory overhead of MDLs by a roughly equal amount; ideally I shouldn't even be storing them in system memory anymore, as the GPU-side copy is all that's needed for drawing, but I still need the data for recreating vertex buffers in the event of a lost device (that's another reason to support a proposed move to D3D 11).
I haven't done any scientifically valid measurements, but I have a reasonably solid hunch that in a typical scene many triangles in an MDL are going to end up being transformed to sub-pixel (or close to sub-pixel) sizes. Even if not, it would require a transformation to 4 or more pixels to justify loading extra work on the vertex shader stage for this particular type of object, so the savings mount up.
This is interesting to me because it backs up something I've been aware of for quite some time now - doing things that on the surface save memory may seriously compromise performance in other areas. That should be a total no-brainer for anyone, but yet we still see it happening in a lot of places.
It's all a matter of profiling, choosing the correct tradeoffs, and having the raw results to back it up.
I'm thinking that I'm probably going to need to do a new release shortly. The current code has circulated around a few people at various recent stages of its evolution, but it's getting near time for a more intensive public bashing. What comes out is going to be semi-unfinished (although quite close), as there will remain a few places where cleanups are necessary. A lot of this revolves around correctly sequencing the spawning of dynamic effects - particles and lights.
I mentioned D3D 11 above. Previously my main motivation for wanting to move was to do particles in a geometry shader; recent work on my particle system has completely removed that motivation, but I now have two other reasons, one minor and one major.
The minor one was mentioned above - removing the need to retain system memory copies of data. The major one however is pretty significant, as it addresses one of the main remaining bottlenecks in DirectQ's renderer.
This is in the area of needing to update a dynamic vertex buffer with brush surfaces in order to draw the world and all brush models. This is quite a painful operation, and profiling reveals that about a quarter to a third of time spent drawing a scene is spent doing this. With D3D 11 I can more or less just do a straight copy from one vertex buffer to another and have the operation happen entirely on the GPU, which completely removes the CPU-to-GPU bandwidth overhead, as well as the overhead of locking and unlocking buffers. That's quite significant and should translate into measurable gains.
There are other experiments in this area which I'm doing right now, including moving all brush models above a certain complexity threshold to static vertex buffers. There is, as always, a tradeoff - in this case losing backface culling - but I reasoned that since the data was already on the GPU anyway, and since the GPU will backface cull anyway (albeit after the vertex shader stage in the pipeline, so a certain amount of work is done only to be thrown away), it was worth a try. The end result is that I netted a 1.5x improvement in certain scenes, which was impressive enough to get me considering similar options for drawing the world. Some initial research work has been done, but nothing concrete or usable has yet emerged.
All of this also has a cost in code complexity with D3D 9, and that does pile up and start looking ugly. It's inevitable that D3D 11 will happen at some stage, so the only question is one of timing the move. Right now I think it's at least a year off, but that is of course subject to change, and if another significant advantage appears it may tip the balance to sooner. Likewise - as happened with the particle system - if a more elegant way of achieving the same end result with high performance is found, it may tip to later.
Posted by mhquake at 10:38 PM
Thursday, May 26, 2011
One of the really neat things about working on two different engines at once is that I get to try out many ideas in both, and cross-check how something works in one with the other. The net result is that something that's potentially new and dangerous becomes a lot safer and more robust.
So the whole FPS-independence thing is coming to a very satisfactory close. There are just a few edge-cases to work out, but overall it's working extremely well and completely glitch-free. Both RMQ and DirectQ now have it, with DirectQ being used as the trailblazer and RMQ being used to consolidate, confirm and cross-check.
One really good thing about it is that the code took a very unexpected turn and ended up being far far simpler than I had ever anticipated. In fact, on a first read it seems more or less identical to ID Quake; a few functions have been moved around, a few got an extra "frametime" parameter, and a few were split into two functions. But nothing totally earth-shattering came out of it.
MDL movement interpolation has also taken a strange turn - again in both. It turns out that the old QER code is - fundamentally - total bollocks for a lot of cases. I'm still using it for multiplayer games, but for single player I ended up scavenging some code from an early DarkPlaces that does movement interpolation on the server. This works a LOT cleaner and neater, and doesn't suffer from occasional timing glitches.
The by now mandatory "other news".
RMQ is getting IQM as an optional replacement for MDLs. Things have gotten to the stage where the limitations of MDL are becoming a serious factor, and something better is just flat-out needed. Thankfully a sane and sensible option is available, rather than a horrible monstrosity with everyone's favourite feature bolted on. Now if only similar was available for BSP...
DirectQ has gone through another evolution of its particle system. Previously it used one of two modes - either with or without hardware geometry instancing. That's all been ripped apart and replaced by a much cleaner and simpler (and less CPU- and bandwidth-hungry) system. If you're familiar with the DirectX SDK Instancing sample, it's option 2 - "Shader Instancing with Draw Call Batching". This gets particle submission down to a single vertex per particle (instead of 4), runs on SM 2.0 hardware, and - in all of my tests - is a good deal faster than anything else when under load.
I've used a similar technique for controlling the view model interpolation, so it's no longer necessary to refill a dynamic vertex buffer with blendweights each frame. All small stuff in the performance stakes, but small stuff can add up.
Occlusion queries have come and gone again in DirectQ. I was considering them for RMQ as well, but they won't be used in either now. The simple fact is that I was optimizing for freak extreme conditions which ended up being slower in 99% of more common cases - sometimes much slower. Occlusion queries are really only of value when the effort required to just draw the thing is higher than the overhead from issuing queries, drawing bounding boxes and collecting results. You also need to have most of the objects in your scene actually occluded for them to work right - otherwise you're expending extra effort just to get back a "yeah, draw this object anyway" result.
I'm still slightly intrigued by the possibility of using them as a replacement for PVS, but that of course is only valid on the client. The server still needs its own PVS and you can't replace that with renderer code.
Finally, both engines have now got ultra-smooth player movement. There's always been some low-level grittiness or jerkiness in Quake's player movement, which has been completely eradicated. It's now even possible to run at the standard 72 FPS and not get even a single jerk, but combined with the FPS-independent stuff, if you want to go faster for whatever reason (to match your monitor's refresh rate, say) you can.
Over the next short while, implementing IQM is going to be my primary thing, so I'll probably write some on that soon-ish.
Posted by mhquake at 12:47 AM
Thursday, May 19, 2011
...if I got rid of the view pitch drifting code?
This was old stuff from the days of keyboard looking, or having to hold down a key to mouselook, which automatically recentered your view after a short while as soon as you stopped looking.
I think everybody plays with mouselooking on these days, so it would help resolve a few complexities if I could just delete it.
Posted by mhquake at 8:17 PM
Tuesday, May 17, 2011
There are a lot of subtleties involved when framerates go over 1000, and a lot of strange things suddenly start happening. I guess this is partially reflected in the "don't allow really short or really long frames" comment in the ID Quake source, but it would have been nice if that had been followed up with "...because this, that and the other happen". All the same, I don't expect that Quake was ever tested at this kind of framerate back in 1996 so I'll let it pass without further comment.
One item of concern is dynamic lights. There are some dynamic lights in the engine which are given a die time of 0.001 seconds after they are spawned, with the obvious intention being that they will last for one frame and be respawned again immediately afterwards. At over 1000 FPS we suddenly have them lasting for more than one frame, with the end result being that we could have multiple such dynamic lights on the go at one time. If we were running at 5000 FPS we would actually have 5 dynamic lights being thrown around a player who has the Quad Damage - bad for performance indeed (although at that kind of speed performance isn't something you worry about).
A really weird and subtle one emerged in conjunction with timer decoupling. In order to get smooth and responsive input it's necessary to accumulate input events every frame, then gather them up and send them to the server at 72 FPS. This only becomes an issue at this kind of framerate (when movement suddenly becomes ultra-jittery otherwise); run any slower and you can ignore it.
Anyway, when I simulated a framerate of 10,000 FPS (via host_framerate) I discovered a very strange effect - pressing the forward key would cause me to move backwards, and vice-versa. Some digging around revealed the answer - Quake transmits the forward movement to the server as a short (16-bit) integer. This has a maximum and minimum of about 32,000, and the accumulated input was causing it to overflow and wrap.
I guess that's a definite case of a protocol limit on how fast Quake can run. I'm not too certain of where the precise cutoff point is; I measured it around the 5,800 FPS mark but can't be any more precise.
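A back-of-envelope check fits the measured number surprisingly well. Assuming each rendered frame adds cl_forwardspeed (400 with always-run) to the accumulated move, and packets go out at 72 Hz, the short wraps at around 32767 * 72 / 400, which is roughly 5,900 FPS - right in the neighbourhood of the ~5,800 measured. A toy demonstration (my assumptions about the accumulation, not the engine's exact code):

```c
#include <assert.h>

/* Accumulated forward movement between 72Hz packets, truncated to the
   protocol's 16-bit field the way the wire format forces.  Past roughly
   5,900 FPS (at forwardspeed 400) the short wraps negative - and forward
   becomes backward. */
static short NetForwardMove (float forwardspeed, double fps)
{
    double accum = forwardspeed * (fps / 72.0);
    return (short) (int) accum;   /* via int first to avoid UB, then wrap */
}
```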
Particle effects are another interesting one, and this affects framerates below 1,000 too. I mentioned this one earlier, but the technical details are that it's necessary to split particle spawning off from CL_RelinkEntities. CL_RelinkEntities is called by CL_ReadFromServer, which must be called every frame (otherwise movement goes totally to herky-jerky land), but if you spawn particles behind a rocket every frame you'll be spawning maybe 10 times as much as you should. So particle spawning also needs to run at a slowed-down rate, otherwise the faster we go the more we'll hurt framerate. At maybe 1000 FPS it translates into a halving of framerates every time a smoke trail is spawned.
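One common way to do the slow-down is a fractional accumulator: credit the spawner with rate * frametime particles each frame and emit only the whole-number part. A sketch with hypothetical names, not DirectQ's actual implementation:

```c
#include <assert.h>

/* Fixed-rate spawning: however many frames you render, trail particles
   are emitted at 'rate' per second.  The fractional remainder carries
   over so nothing is lost at high framerates. */
typedef struct { double accum; } spawnclock_t;

static int ParticlesThisFrame (spawnclock_t *c, double frametime, double rate)
{
    c->accum += frametime * rate;

    int n = (int) c->accum;     /* whole particles owed this frame */
    c->accum -= n;              /* keep the fraction for next frame */

    return n;
}
```

At 72 FPS this behaves exactly like per-frame spawning; at 1000 FPS most frames emit nothing and the trail density stays constant.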
All rather curious stuff indeed.
Posted by mhquake at 12:05 AM
Monday, May 16, 2011
I've taken another pass through the whole timer and timer decoupling code, and now have a much better, more stable and more flexible solution. Instead of being hacked together based on a best-guess, this one is actually based on the proper documented way of doing this stuff, which is nice. There are still lots of subtleties with Quake's timing (I'll be mentioning one later) so it's still somewhat in the experimental/disabled-by-default bracket, but I can confidently say that I'm now at the stage where I can see it becoming the hard-coded enforced behaviour at some time in the future.
Particles have been worked over some more, and a lot of what I wrote about a coupla days back is now totally overturned. There is no longer any particle texture in the engine (and therefore no quality cvar to control it): instead it's entirely generated on the GPU which gives extremely high quality up-close but scales back beautifully when particles are further away. It's a mite slower in some circumstances but faster in others, and overall I think the quality tradeoff is well worth it.
The only thing relating to particles left on the CPU is now velocity and position updates; everything else is GPU-side.
Speaking of particles, that timer subtlety I mentioned rears its ugly head here. It turns out that when running at a high framerate and connected to a remote server (or playing a demo), a lot more particles are generated than when running at a lower framerate (which is the cause of a performance problem too). You can see this yourself by checking out the lava ball trail in the Start map at different values of host_maxfps. One solution is to use timer decoupling to scale the rate at which particles are generated back to a constant rate irrespective of framerate.
My proposed breaking change to the video code is likely going to be deferred for a while; I've reviewed the code and made a first attempt (which I very quickly reverted from), and it's quite obvious that the whole startup code is a mess that probably needs to be gutted and rewritten more than anything else. A huge part of the reason for that is that much of it dates back to my original D3D port and has been hacked around to make things work rather than being properly implemented.
All in all an interesting batch of updates.
Posted by mhquake at 1:03 PM
Saturday, May 14, 2011
Been a while since the last update but I have been working away behind the scenes on a few things, just tuning and optimizing more parts of the engine.
There is a change in the visual appearance of the particle dot, which has now moved from being a resource embedded in the engine to being a procedurally generated texture. It's not generated in the shader, but rather in engine code; all the same it does mean that you can now control its quality level (although in practice there's no performance gain from using a lower level, so it's for visual preference only) via the r_particlequality cvar.
Particle transforms have moved entirely to the GPU (which was very nice to do) and performance is up overall.
Likewise the underwater warp texture has made a similar move to a procedurally generated texture and can be controlled with r_waterquality. This texture is also used for controlling regular water warps, so I felt justified in using the same name as FitzQuake uses. I've also chosen the same default value for this as Fitz for better engine cohabitation. In this case there is actually no quality gain from choosing a higher value, and I recommend that you just leave it at the default.
Dynamic light updates have been improved with better lighting falloff and faster updating in general; the lighting model looks a little different to GLQuake now, but overall I think it's better.
Brush surfaces have also been changed a little with better vertex buffer locking/unlocking behaviour; there's still huge room for performance improvements with this code, but it is getting better all the time. I'm currently debating what to do with the old gl_keeptjunctions cvar; this was present but did nothing in previous versions, but I've now restored the behaviour. In case you don't know what it does, setting it to 0 (GLQuake's default) will remove some vertexes from surfaces, which gives higher performance but at the expense of the occasional blinking pixel onscreen. I've tested a few heavy scenes and can confirm that it can make things go up to 20% faster (typical scenes will be lower, of course), but is the visual tradeoff worth it? So what should the default be? You decide.
Finally, and now that it seems that on-the-fly window resizing is stable, I've added two cvars - d3d_width and d3d_height - to save out the new sizes. The reason why these are "d3d_" and not "vid_" is the same reason why I chose "d3d_mode" instead of "vid_mode" - these cvars with Direct3D don't coexist peacefully with OpenGL engines. I'll probably also add cvars for saving out the window position too - right now all I do is just center the window onscreen whenever a mode change happens.
So with all of that I'm considering a breaking change to video startup. The current mode list is divided in two, with approximately the first half being windowed modes and the second half being fullscreen. Under the new way I'm planning, mode 0 will be the only windowed mode and the d3d_width and d3d_height cvars will control its size; modes 1 and above will be the fullscreen modes. This will actually roughly realign DirectQ with the way GLQuake handled it, and it's also attractive because I can provide arbitrarily sized windowed modes in the menu. So it's fairly definite that I'm going to do it; the only question is: do I do it now, or do I wait until after the next release?
Posted by mhquake at 8:39 PM
Monday, May 9, 2011
Currently working on fixing up DirectQ's FOV support for widescreen resolutions; the old code I had was fairly cruddy and hacked around with over time, so it's good to go back and clean things out.
At this point I think it's reasonable to throw a question out in the open: how do people want FOV to work? There are a number of options and things to consider here:
- A cvar to switch DirectQ's FOV handling back to the way GLQuake did it seems reasonable and sensible. This can also serve as a last-resort panic button: if things get terminally screwed up for you then at least it'll get you back to something that works. It may not be great, but at least it will work. This is currently present (and has been for the past coupla years) and is called "fov_compatible".
- In relation to this, what should the default handling be? This is one clear case where I think "the way GLQuake did it" is not a good default; the new method should be the default. Everyone agree?
- Correcting FOV for widescreen aspect ratios requires a baseline aspect ratio to derive the values from. Should this be software Quake's 320x(200-48) or GLQuake's 640x(480-48)? (The -48 is for the default status bar size.) I favour GLQuake as the baseline here; it might not be the absolutely "correct" baseline derived from the original Quake engine, but people are so used to it that going back to the original just looks weird.
- Handling of the gun. Previously I've (except when I've done it wrong) drawn the gun as if FOV were 90 when FOV is >= 90, but drawn it at the reduced FOV otherwise (with the new handling), or just drawn it the way GLQuake did (with the old handling). Is there any requirement at all to draw it the way GLQuake did for FOV > 90? I'm thinking this is another one of those cases where "the way GLQuake did it" is actually crap and - this time - should not only not be the default, but should not even happen at all.
Posted by mhquake at 6:21 PM
Sunday, May 8, 2011
As is usual when I do an extensive update with lots of things changed, a patch release is looking like it's going to be necessary sometime soon. This is just to cover a few problems that are manifesting on some people's machines, and to fix up a few things that I'd left out.
That doesn't mean that new features won't be forthcoming. Two for you so far; first one is the return of gl_picmip for you QuakeWorld-look fanatics. Now you can set it to 10 billion and get flat shaded everything in DirectQ too!
The second is optional software Quake mipmapping. What this means is that DirectQ can generate mipmaps in a similar manner to software Quake - only 4 miplevels get generated, and liquid textures are not mipmapped at all. You can toggle this at runtime by setting gl_softwarequakemipmaps to 1; there's no need to restart the map or the renderer to make the change. Combine it with point filtering and square particles for the full effect.
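The miplevel rule is simple to state in code; a sketch of the sizes involved (my formulation, not DirectQ's actual loader):

```c
#include <assert.h>

/* Software-Quake-style mip chain: exactly four miplevels, each half the
   previous, and liquid (turbulent) textures get no mips at all.  Returns
   the number of levels written into w[]/h[]. */
static int SWQuakeMipLevels (int width, int height, int isliquid,
                             int w[4], int h[4])
{
    if (isliquid)
    {
        w[0] = width;
        h[0] = height;
        return 1;
    }

    for (int i = 0; i < 4; i++)
    {
        w[i] = width >> i;
        h[i] = height >> i;
    }

    return 4;
}
```

Compare with the normal GL-style behaviour of generating the full chain down to 1x1, which is what gives distant surfaces their smoother (less "software") look.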
Posted by mhquake at 8:09 PM
Friday, May 6, 2011
Been digging into the timer functions again; I've reverted some of them back to the way ID Quake did things because I started running into some serious precision issues at stupidly high framerates. Overall it became a case of "if you find yourself adding more and more complexity to something, then you're probably doing it wrong and need to row back and rethink", so it's good to have this sorted out before releasing.
Brush surface draw call batching behaviour is now user-configurable (via r_surfacebatchcutoff and a menu option); the default behaviour is to batch as aggressively as possible (unless there is only one surface to draw, in which case batching incurs unnecessary overhead). On some hardware it may be slightly faster to tweak this parameter.
Dynamic light updates have been reworked for some small extra performance. There are still a few more frames to be pulled out of this, but overall it's already very fast anyway so it's not too high a priority.
...and we're getting closer...
Posted by mhquake at 9:14 PM
Wednesday, May 4, 2011
Just some small sanity checking and fine-tuning to be done before releasing; think we're almost there.
Marcher Fortress performance on the Intel 945 is now up to 170 FPS; close to a doubling as a result of recent work on reducing CPU load.
As well as raw performance, there are a few extra features coming through in this version which are worth mentioning.
Gamma adjusted lightmaps, via the lm_gamma cvar. This defaults to 1.0 (no adjustment), and may be useful for tweaking brightness in cases where you don't want to adjust your global gamma (e.g. if running in a windowed mode). Note that it only affects lit objects (solid surfaces and MDLs); sky and liquids are not gamma adjusted using this method, and nor are 2D GUI textures. Try it and see.
Mousewheel support in the menus and console. Pretty rudimentary, but all the same. The mousewheel now scrolls the console properly (a long-outstanding request) and will also scroll through menu options.
FitzQuake-compatible menu/console/status bar scaling. (Updated) - this is now more or less fully compatible with Fitz for peaceful engine coexistence (which is very important). DirectQ's old gl_conscale cvar still exists and still maintains the old behaviour if you prefer that. For layout reasons DirectQ doesn't allow a virtual size below 640x480 using either method.
Crappy edges around textboxes have been fixed. ;)
More news as it happens.
Posted by mhquake at 1:56 PM
Tuesday, May 3, 2011
I'm going to be dropping the automap from the upcoming release; reason why is that it's just not playing nice with some of the more recent code changes and needs a row-back and re-think in a few places. Until then I'd rather not have it at all than have a buggy half-working implementation.
I'm interested in knowing if there's actually any need to bring it back at all. Is it a feature that people actually feel they need, or is it just a novelty thing that you look at once and then forget about?
The current schedule for this is that the release is going to happen over the next coupla days, so stay tuned.
If you've been commenting on posts and have noticed the comments disappearing, you should be aware that Blogger now has an automated spam filtering service which seems to have trapped quite a few posts as false-positives. I've only just become aware of this, and will be reviewing it from time to time to make sure that nothing gets caught in it. It's a bit of a pain that I can't opt out of it (I don't get any spam anyway) but them's the breaks. Anyway, I've let through all posts that have been trapped by it to date, and will be doing so each time I review it going forward (unless it's donkey porn, of course).
Posted by mhquake at 1:29 PM