To celebrate my return to DirectQ I'm going to fire off a rant. Aaaaah - just like I've never been away...
But before I say why I don't like it, I want to discuss what I think is good about it. It's nice to be able to give the user a generic performance vs quality option without expecting them to have any technical knowledge of the API. glHint is great for this: GL_NICEST or GL_FASTEST, job done. Also - and this is a more general strength of OpenGL - when developing test apps you can use it to get a quick and easy result without having to delve too deep into the details. It's sorta like saying "I need this part to be fast, I want this part to look good" and so on.
However, that last one is one of the precise reasons why it fails, because in so many cases OpenGL goes no further. The setting is purely binary; there's no intermediate grading. It's as if you were saving a JPEG and could only select 0 or 100 for the quality level.
Even worse, and I'm quoting from the OpenGL spec here:
"Though the implementation aspects that can be hinted are well defined, the interpretation of the hints depends on the implementation."

In other words, no matter what you select for your hint, an implementation is not bound to honour it. So back to the JPEG example: it's not even 0 or 100; it could potentially be any intermediate value. The implementation could even totally ignore the hint and just do its own thing. So here we have a situation where code that does one thing on one 3D card could very well do something entirely different on another.
Control is completely removed from the developer and you are at the mercy of your driver. GL_FASTEST may be optimal on one vendor's card but ugly as a box of frogs on another's; GL_NICEST could give good performance on one but be a slideshow on another; there might be no difference at all between them on a third. You have no control over what your code does.
Now, call me old-fashioned but I think this is a Bad Thing writ large.