Graphics Performance Ideas
Vertex vs. Fragment Shader use for AI/MP vs. Skydome
The vertex shader runs once for each vertex of the mesh every time the mesh is rendered. The fragment shader runs once for each fragment generated from the mesh: if the mesh covers a small part of the screen there will be few fragments; if it covers a large part of the screen there will be many (excluding fragments rejected by the early Z/depth test that some GPUs perform).
Some implications:
- The amount of vertex shader work needed to draw a mesh ought to scale with the number of vertices and the complexity of the vertex shader. For a given mesh and shader this cost is the same every time the mesh is drawn.
- The amount of fragment shader work ought to scale with the number of fragments (roughly, covered pixels) and the complexity of the fragment shader. This varies at run time depending on how much of the screen the mesh covers (a GLSL sketch contrasting the two cases follows this list).
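As a rough, hypothetical illustration (not taken from any actual FlightGear shader; effectCenter, effectFactor and the 0.002 constant are made-up names/values), the legacy-GLSL sketch below computes the same attenuation value in two places. Variant A evaluates it per vertex and lets the hardware interpolate the result; Variant B forwards the raw data and evaluates it per fragment.

    // Variant A - compute per vertex, interpolate the result.
    // Cost: one exp/distance evaluation per vertex, regardless of screen coverage.

    // vertex shader
    varying float effectFactor;
    uniform vec3 effectCenter;              // hypothetical effect parameter
    void main() {
        vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
        effectFactor = exp(-0.002 * distance(eyePos.xyz, effectCenter));
        gl_Position = gl_ProjectionMatrix * eyePos;
    }

    // fragment shader
    varying float effectFactor;
    void main() {
        gl_FragColor = vec4(vec3(effectFactor), 1.0);   // just uses the interpolated value
    }

    // Variant B - pass the data through, compute per fragment.
    // Cost: one exp/distance evaluation per covered pixel, regardless of vertex count.

    // vertex shader
    varying vec3 eyePos;
    void main() {
        vec4 p = gl_ModelViewMatrix * gl_Vertex;
        eyePos = p.xyz;                     // only forwards data
        gl_Position = gl_ProjectionMatrix * p;
    }

    // fragment shader
    varying vec3 eyePos;
    uniform vec3 effectCenter;
    void main() {
        float effectFactor = exp(-0.002 * distance(eyePos, effectCenter));
        gl_FragColor = vec4(vec3(effectFactor), 1.0);
    }

With Variant A the expensive math is paid once per vertex however large the mesh appears; with Variant B it is paid once per covered pixel however many vertices the mesh has.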
So, e.g., for effects used on (MP/AI) aircraft it would seem preferable to push complexity into the fragment shader since, most of the time, most of the aircraft will cover only a few pixels on screen.
OTOH this would seem to suggest the opposite for the sky-dome - relatively few vertices and often covering a significant part of the screen, so complexity is better kept at the vertex level (some rough numbers below).
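To put made-up but plausible numbers on it: an AI aircraft model with ~20,000 vertices seen a few km away might cover ~100 pixels, so a per-fragment computation runs roughly 200 times less often than the same computation per vertex; a sky-dome with ~1,000 vertices filling half of a 1920x1080 screen covers ~1,000,000 pixels, flipping the ratio by about three orders of magnitude in favour of per-vertex work.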
Pushing most of the haze shader computation from the vertex to the fragment level would also suggest an approximately constant haze cost for a given view regardless of scenery complexity, since the number of hazy fragments remains about the same; a sketch of what that could look like follows.
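This is only a minimal sketch of the idea, not the actual FlightGear haze shader; baseTexture, hazeColor and hazeDensity are assumed names. The vertex shader forwards nothing haze-related except the eye-space distance, and all the haze math happens per fragment.

    // vertex shader: forward the eye-space distance only
    varying float eyeDist;
    void main() {
        vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
        eyeDist = length(eyePos.xyz);
        gl_TexCoord[0] = gl_MultiTexCoord0;
        gl_Position = gl_ProjectionMatrix * eyePos;
    }

    // fragment shader: all haze math done per covered pixel
    varying float eyeDist;
    uniform sampler2D baseTexture;     // assumed base texture sampler
    uniform vec3 hazeColor;            // assumed haze parameters
    uniform float hazeDensity;
    void main() {
        vec4 base = texture2D(baseTexture, gl_TexCoord[0].st);
        float hazeFactor = 1.0 - exp(-hazeDensity * eyeDist);   // exponential haze
        gl_FragColor = vec4(mix(base.rgb, hazeColor, hazeFactor), base.a);
    }

Per frame the haze cost is then roughly (hazy pixels covered) x (fragment shader cost), which stays about the same for a given view however dense the scenery is.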