== Some context on different perceptions ==
Assume you have an interesting feature in mind which is not in FG. Chances are, you assume that nobody has thought about it before. Chances are, however, that this assumption is wrong (a rough estimate would be that perhaps 5% of suggestions are genuinely new). People working with shader effects have spent thousands of hours on rendering code and usually have a habit of comparing nature and FG to see how to improve things. They also know what other games do and have read books and tutorials on GLSL shaders. So most of the time, if a feature is not there, it failed a cost-benefit analysis - either it is not interesting enough to be high on a to-do list, or it would cost too much framerate, or too much coding time. As a result, the reaction to simply suggesting the feature will be along the lines of 'Oh yeah, thought about this a while ago, not so interesting...' So if you want to change this, you have to understand and change the cost-benefit analysis.
=== Case study - when is rendering simple? ===
A fairly recurrent question is why, with all the rendering effects we have on models, we don't have something as simple as a reflection in the water. After all, the math needed to determine the varying colors of a cloud at sunrise, depending on whether the light shines through it or not, is university level - one needs to solve two nested integrals for the correct answer - whereas everyone knows how to compute a reflection.
The problem is that real-time rendering is simple not when the math is simple, but when it can be parallelized. Graphics cards are built to solve one particular problem very fast: a ray from the eye (E) hits a surface (S) and is then tracked to the light (L). If the surface normal is known at the pixel location, all relevant vectors are known without having to know anything else about the scene; the computation is purely local and can be parallelized. So whenever an effect fits this ESL situation, it is simple to implement.
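To make the 'purely local' point concrete, here is a minimal sketch (in Python rather than actual GLSL, and not FG code) of the ESL computation a fragment shader performs: given only the surface normal, the direction to the light and the direction to the eye at one pixel, it returns diffuse and specular lighting terms. No other knowledge of the scene is needed, so millions of pixels can be shaded independently in parallel.

```python
import math

def phong_shade(normal, to_light, to_eye, shininess=32.0):
    """Per-pixel ESL lighting: only local data is used - the surface
    normal at this pixel, the direction to the light (L) and the
    direction to the eye (E). Returns (diffuse, specular) terms."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def normalize(v):
        n = math.sqrt(dot(v, v))
        return tuple(x / n for x in v)

    n, l, e = normalize(normal), normalize(to_light), normalize(to_eye)
    diffuse = max(dot(n, l), 0.0)
    # Reflect L about the normal: R = 2(N.L)N - L, then compare R with E.
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, e), 0.0) ** shininess if diffuse > 0.0 else 0.0
    return diffuse, specular
```

In an actual GLSL shader the same quantities appear as `dot(N, L)` and `pow(max(dot(R, E), 0.0), shininess)`; the point is that each pixel reads only its own inputs and nothing about the rest of the scene.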
Water reflection, however, needs at minimum ESSL (follow the light from the eye to the water surface, then to whatever is reflected in the water, then to the light). Raytracing codes solve such problems just fine: they track a large number of test rays from the first surface hit to everywhere else in the scene to see where the reflected light comes from. However, that doesn't run in real time - if you use 10,000 test rays, the computational effort goes up by a factor of 10,000 and the framerate drops by the same factor. What's perfectly okay if you can take a minute to render a picture is not okay if you need 60 frames per second.
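The scaling argument can be put into rough numbers with a toy cost model (assumed linear scaling, purely for illustration - not measured FG data):

```python
def frame_time_ms(base_ms, rays_per_hit):
    """Toy cost model: tracing N secondary test rays per surface hit
    multiplies the per-frame shading work by N (linear scaling is
    assumed here for illustration)."""
    return base_ms * rays_per_hit

budget = 1000.0 / 60.0             # ~16.7 ms per frame at 60 fps
t = frame_time_ms(budget, 10_000)  # roughly 166,667 ms per frame
fps = 1000.0 / t                   # roughly 0.006 frames per second
```

In other words, a frame that fit a 60 fps budget now takes close to three minutes - fine for an offline render, hopeless for a simulator.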
There are ways around this. Shadows, for instance, are also ESSL problems, and they can (approximately) be solved by techniques like a shadow camera pass in deferred rendering (Rembrandt in FG) - then the scene needs to be rendered just twice, which is far more acceptable. But this requires lots of additional resources and cleverness.
The lesson here: many things which look (and even are, mathematically) simple to do are not implemented because, as far as real-time rendering is concerned, they are not simple - they are not ESL - and hence their cost-benefit analysis is judged poor. Clouds reflecting in a pool of still water are a cool effect, but would it still be cool if you had to code for half a year and have your framerate cut in half?
(Side note: Plenty of first-person shooters and racing games can run very fancy 3d effects. However, they can also apply very aggressive optimization because your movement is restricted to 'levels' - so the fact that game XY can do something doesn't mean that FG, where we need to render scenes from the ground into orbit, can do the same thing.)