Classic Pipeline

From FlightGear wiki

The Classic pipeline is a Compositor pipeline that attempts to recreate the rendering pipeline FlightGear has used since the move to OpenSceneGraph.

FlightGear displays an enormous visual range, from 4 inches in front of your eyes out to 120 km now, and more in the future. It's a basic fact when using Z-buffered computer graphics that the precision of the Z buffer deteriorates with huge near-far spreads, and that the near plane distance has a much greater effect than the far plane. The symptoms are flickering, jitter, and other unpleasantness. Tim Moore added a scheme that uses several cameras within the scene graph to work around this problem. Each camera draws a slice of the scene using the full range of the Z buffer. This mostly works well, though depending on your video hardware you can occasionally see a line in the scene.
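The dominance of the near plane can be illustrated with a small calculation (a sketch, not FlightGear code): for a standard perspective projection, window depth is d = f(z-n)/(z(f-n)), so the world-space spacing between adjacent depth values grows quickly with distance and shrinks dramatically when the near plane is pushed out, as a camera slice does.

```python
def depth_resolution(n, f, z, bits=24):
    """World-space spacing between adjacent depth-buffer values at eye-space
    distance z, for a standard perspective projection and an integer Z buffer
    `bits` deep."""
    steps = 2 ** bits
    # window depth in [0, 1] for eye-space distance z
    d = (f / (f - n)) * (1.0 - n / z)
    # invert the mapping one representable depth step later
    d_next = d + 1.0 / steps
    z_next = (f * n) / (f - d_next * (f - n))
    return z_next - z

# Huge near-far spread: ~0.1 m near plane out to 120 km
wide = depth_resolution(0.1, 120_000.0, 10_000.0)
# Same far plane, but near plane pushed out to 100 m (a far-camera slice)
sliced = depth_resolution(100.0, 120_000.0, 10_000.0)
print(wide, sliced)  # the sliced camera resolves roughly 1000x finer at 10 km
```

With the 0.1 m near plane, adjacent depth values at 10 km are tens of meters apart, which is exactly the flicker and jitter described above; the 100 m near plane brings that down to centimeters.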

  • Clouds are drawn from the outside in because they are in a depth-sorted bin; this is why clouds obscure other clouds properly.
  • Hills are not drawn from the outside in but in some (unspecified) order. They are drawn in two passes, and the second pass has
    <write-mask type="bool">false</write-mask>

declared, which presumably does the trick of running a fragment only if its depth is less than or equal to the buffered value, without altering the depth buffer itself. There is no <depth> tag during the first pass, so the depth buffer seems to be doing something by default - at least writing, and perhaps also lequal testing. Render bin numbers don't have to be positive. Rendering a transparent object twice alters its transparency; of course, you can avoid rendering it into the color buffer by using a write mask in one of the passes.
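As a sketch of how such a two-pass technique might look (a hypothetical fragment using the effect-file tags discussed above, not a copy of any shipped .eff file):

```xml
<!-- Hypothetical sketch of a two-pass technique. First pass: depth test
     and depth write are left at their defaults, so the depth buffer is
     filled. Second pass: test against the buffered depth (lequal) but do
     not modify it. -->
<technique n="1">
  <pass>
    <render-bin>
      <bin-number>0</bin-number>
    </render-bin>
  </pass>
  <pass>
    <depth>
      <function>lequal</function>
      <write-mask type="bool">false</write-mask>
    </depth>
    <render-bin>
      <bin-number>1</bin-number>
    </render-bin>
  </pass>
</technique>
```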

The scene is first traversed to collect objects, see if they fit in the view frustum, and put them in render bins. This stage is called the cull traversal. The cull callback is called from OSG's culling traversal. If OSG determines that an object's bounding sphere intersects the viewing frustum, it calls the cull callback -- if there is one -- to traverse that object and perform finer-grained culling. If there is no cull callback, OSG does the traversal itself. In any case, it's a good way to perform an action when an object is in view. It is possible for the tile cache code, which runs in the update traversal, and the cull callback to run in different threads. However, they should never run at the same time: the cull traversal starts when the update traversal has finished, and the next update traversal blocks on the cull and draw traversals. Now, the code that actually loads the tiles -- the database pager -- does run asynchronously. I'll need to check if any of the tile cache code runs in the database pager thread, but I don't think any does.
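The coarse test at the heart of the cull traversal can be sketched in a few lines (a simplification for illustration, not OSG code): each node's bounding sphere is tested against the inward-facing frustum planes, and only nodes that are not entirely outside some plane are traversed further and binned.

```python
# Simplified sketch of coarse frustum culling, not actual OSG code.
# A frustum is modeled as a list of inward-facing planes (nx, ny, nz, d)
# with nx*x + ny*y + nz*z + d >= 0 for points inside.

def sphere_in_frustum(center, radius, planes):
    """Return False as soon as the sphere is entirely outside one plane."""
    cx, cy, cz = center
    for nx, ny, nz, d in planes:
        if nx * cx + ny * cy + nz * cz + d < -radius:
            return False
    return True

# A single plane x >= 0 (a degenerate "frustum", just for illustration)
planes = [(1.0, 0.0, 0.0, 0.0)]
print(sphere_in_frustum((5.0, 0.0, 0.0), 1.0, planes))   # inside -> True
print(sphere_in_frustum((-5.0, 0.0, 0.0), 1.0, planes))  # outside -> False
print(sphere_in_frustum((-0.5, 0.0, 0.0), 1.0, planes))  # straddling -> True
```

A straddling sphere passes the coarse test, which is exactly when the finer-grained culling (or a cull callback) is worth running.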

Then render bins are sorted by their numbers and drawn in that order.

When we declare multiple passes in an effect, every object affected by the effect is duplicated once per pass during the cull traversal. As each pass can have a render-bin clause, these duplicates are distributed across the render bins before the draw stage.

To summarize, all objects having a pass with render bin -1 are rendered before any object having render bin 1. If an object has two passes, it is rendered twice: once with the objects in the same render bin as its first pass, and once with the objects in the same render bin as its second pass. The second pass can even be rendered before the first if the render bin numbers are inverted (the pass number has no rendering meaning).
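The ordering rules above can be simulated directly (hypothetical names, just to make the sorting concrete): each (object, pass) duplicate is dropped into the bin named by that pass, bins are sorted by number, and the draw order falls out.

```python
from collections import defaultdict

def draw_order(objects):
    """objects: list of (name, [bin_number_per_pass]).
    Returns the names in the order their pass duplicates are drawn."""
    bins = defaultdict(list)
    for name, pass_bins in objects:
        # During cull, an object with N passes is put into N bins.
        for bin_num in pass_bins:
            bins[bin_num].append(name)
    order = []
    for bin_num in sorted(bins):  # bins are drawn in ascending numeric order
        order.extend(bins[bin_num])
    return order

# "haze" has its first pass in bin -1 and its second pass in bin 2:
# bin numbers, not pass order, decide when each duplicate is drawn.
print(draw_order([("terrain", [0]), ("haze", [-1, 2]), ("model", [1])]))
# -> ['haze', 'terrain', 'model', 'haze']
```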


Note  By the way, with the far camera being rendered before the near camera, I don't see how we can mask the outside view with the cockpit. The cockpit is the biggest potential gain, but due to the near camera / far camera split, I don't see how this can be done at the level of editing effect files - maybe a suitable edit of the camera group code can pull that off


Note  We could use the stencil buffer without copying anything: render the near scene first, setting stencil bits, then enable the stencil test for the far scene. I believe that the stencil test has been extremely fast for years. But that can't handle transparent objects in the near scene -- e.g., the windshield -- without using either alpha bits in the frame buffer or doing a third pass for near transparent objects. Historically we have avoided frame buffer alpha as being exotic and/or slow

A pass is a state set: all the OpenGL attributes of the geometry. When you declare multiple passes, it's because you want the same geometry to be drawn several times. You may want to initialize the stencil buffer in one pass (you don't need material properties then) and then draw the object with the stencil test enabled. If you play with the render bins and the draw order, which are settable in each pass, you can achieve effects such as the light cone (pre-Rembrandt).

In order to combat depth buffer precision problems, we draw the whole scene in two passes, with a near camera and a far camera. See Viewer/CameraGroup.cxx. The far view is drawn first, then the depth buffer is cleared and the near scene is drawn on top. Within each of those ranges we get good depth buffer precision.

In order to draw our huge Z range -- from the tip of your nose (more or less) out to the horizon -- without flickering and other artifacts, the scene is drawn twice. It's drawn with the near plane set to 100 meters, then the depth buffer is cleared, and the scene is drawn again with the far plane at 100 meters and the near plane at its nominal value, currently .1 meters by default. It's been done this way for some time by a ViewPartitionNode in the scene graph. I recently changed the scheme to use two slave cameras as the camera-like nature of the ViewPartitionNode was screwing up view-dependent shadow work I am doing. Plus, this is the recommended way to do such a partition, according to the wisdom of the OSG users list. There shouldn't be any performance difference in the change to slave cameras, but the statistics for the two cameras will be displayed in the stats display.

The relevant code for passes is here:

A pass is an OSG StateSet, a collection of OpenGL states that has a draw order (the <render-bin> bin number).

This code renders geometry n times, once for each pass. You have to understand that at this stage, geometry is only stored in collections; after the whole scene is traversed, these collections (render bins) are sorted before the draw stage. The end result is that all geometry having a pass with an order number of 0 is rendered before any geometry having a pass with a higher order number.

What can happen with the two cameras is that the stencil buffer is shared, but

If you want to experiment, try changing line 668 of renderer.cxx, and change:


to:



You can integrate arbitrary OpenGL code with an OSG application. It is most friendly to set up and change all OpenGL state using OSG's StateSet mechanism, but even that is not necessary. We use the GUI code from PLIB, which doesn't know anything about OSG. See the SGPuDrawable class in Main/renderer.cxx for the implementation. The one catch is that OSG has a strong notion of separation between the update of a "scene" and its rendering, and that might not play well with arbitrary existing OpenGL code.

StateAttributes and state modes deeper in the tree (closer to the leaves) replace the StateAttributes and state modes closer to the root. There are override and protected flags for them, but leave them out for the first cut.
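This inheritance can be sketched as a simple dictionary merge (a simplification of osg::StateSet semantics; the OVERRIDE and PROTECTED flags are omitted, as the text suggests for a first cut):

```python
def effective_state(path):
    """path: StateSets from root to leaf, each a dict of attribute -> value.
    Later (leaf-ward) entries replace earlier (root-ward) ones."""
    state = {}
    for state_set in path:
        state.update(state_set)
    return state

root = {"lighting": "on", "fog": "on"}
leaf = {"lighting": "off"}            # closer to the leaves, so it wins
print(effective_state([root, leaf]))  # -> {'lighting': 'off', 'fog': 'on'}
```

An attribute set only at the root ("fog") survives to the leaf; one set in both places takes the leaf's value.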

Those overrides must also be handled with care: they might interfere with effects like shadows, where we need to do multi-pass rendering of the same scene graph with different override attributes. Even if there is currently no effect that breaks when overrides or protected state attributes are used, this has the potential to break.

The osg::State can collapse the state sets more efficiently when the pointers to the state sets compare equal: the comparison is then already finished, and it does not need to look into the StateSets' attributes and modes.

StateSets can be put in any node in the scene graph. If you want to share drawables with different StateSets, the StateSets would go in the geodes above the drawables and the drawables wouldn't have StateSets.


CameraGroup objects are the bridge between an FGViewer and the OSG cameras that render the view. An FGViewer points to one CameraGroup, and only one active view can drive a CameraGroup at a time. The CameraGroup manipulates osg::Camera objects as necessary. Subclasses of CameraGroup might not respond to FGViewer requests to change camera parameters.

CameraGroups allow the specification of the graphics windows to which the slave cameras in a CameraGroup are assigned, and the full specification of viewing parameters -- position and orientation -- either relative to a master camera or independent of it.

Camera groups can be created and destroyed on the fly; the CameraGroup will create OSG cameras as necessary and attach them to the proper graphics window.

FGViewer objects can either use named camera groups or can create new ones on the fly.

The cameras in a camera group don't need to render directly to the screen. They can render to a texture which can be used either in the scene, like in a video screen in the instrument panel, or for distortion correction in a projected or dome environment.

Open Scene Graph supports a CompositeViewer object that supports rendering from several widely separated viewpoints, complete with support for multiple terrain pager threads. We could move to CompositeViewer and support simultaneous views from e.g., the tower, AI models, drones, etc.

The Far Camera

Because of the depth ordering, the far camera must be drawn before the near camera. Note that this is not new behavior, it is just now exposed in the timing statistics.


Also see

we DO load all textures for all effects right now - this is bug #610, which I was recently reminded about, and am doing some hacking on. This is certainly not helping our performance or memory footprint on lower-end machines, since the various textures for highest-quality effects (the water depth shader, bump maps, reflection maps) are all being loaded. It's also making startup / reset slower.
— James Turner (Apr 1st, 2014). Re: [Flightgear-devel] Towards better scenery rendering.

  • Effects are created at startup and during model load, possibly from different threads.
  • Effects are reused if instantiated multiple times, but this is only true for the "root" Effect, not for the Effects it inherits from.
  • Material effects are never reused; their root effect is created from C++ code, not from XML files.
  • The uniforms an effect uses are NOT reused: every instance of an effect creates its own set of uniforms.
  • Uniforms use SGPropertyChangeListeners to get the values of the properties defined in <use> (directly or via the <parameter> section).
  • The uniforms' listeners attach to and detach from the property tree at a very high frequency as objects enter or leave the scene; this is done from an OSG thread while traversing the scene graph (and leads to the well-known crash). The more complex the scenery is, and the more dynamic objects (e.g. AI traffic) move around, the higher the activity at the property tree.

Truth to tell, I have little control over this. The properties of the point sprites are set somewhere in the C++ part outside the shader - they all have a color, a base size, a minimum size beyond which they're faded, a max. size beyond which they're no longer magnified, and an attenuation behaviour, and that's all.

I've been using color and size information to display their relative brightness; max. and min. size and attenuation behaviour are taken over by ALS code, and that's all I have. If two sprites are initialized with the same size, I have no way of making them different in the shader. Relative brightness of runway vs. taxiway lights seems to be part of the fixed pipeline rendering already (see the first pic), and so someone needs to visit the C++ code generating the lights to do this.

— Thorsten (Tue Feb 25). Re: Surface light shaders.
Light objects are built in simgear/scene/tgdb/pt_lights.cxx

Their effect is built in C++ in the getLightEffect function. It is not configurable as it is now. Ideally, this function should be replaced by a lookup in the material file to find a configurable effect. But I didn't think about the implications of doing so.

— Frederic Bouvier (2013-04-08). Re: [Flightgear-devel] A collection of questions.

yes, I could put the light definitions in the effects file itself (in fact the C++ code sets the equivalent properties, so the code changes would be minimal), but it would mean an effects file for each light type, and that feels a bit like overkill here. But then, when has that ever stopped us
— stuart (Wed Feb 26). Re: Surface light shaders.
I have to chime in now that there is shader support for lights, because one of the things that annoyed me was the feature curt mentioned that was lost during the transition to OSG: light direction. All lights in the C++ code have the direction (normal), but it is not bound, I assume due to some issue in the fixed rendering pipeline? Anyway, with shaders the normal should be bound, and its length could represent the intensity if needed. Thus runway lights would dim when looked at from the side and be very bright when straight in front of them, etc...

I assume the right place would be the pt_lights.cxx function getLightDrawable, where there is a BIND_OFF currently for the normal? Any opposition to enabling normals?

— Zan (Mon Mar 03). Re: Surface light shaders.
Precipitations use the OSG particle effect. I don't think it is something configurable, as the shader is coded in OSG C++ code. Maybe this is something we should try rewriting in order to make the lighting different. The implementation of the effect is in

— Frederic Bouvier (2013-04-08). Re: [Flightgear-devel] A collection of questions.

You don't have to create a parameter for the properties you want to test in the predicate. All the parameters of all passes of all techniques of an effect need to be declared in a single section.

In the first rendering pass of default terrain rendering, we use default.vert and terrain-nocolor.frag as shaders. Its purpose seems to be to establish that faraway scenery is not rendered in front of nearby scenery (I think Fred called this 'initializing the z-buffer').

This is an optimization to avoid running really expensive shaders on geometry that will be hidden from view. The GPU has an "early Z" capability and won't run a fragment shader if it knows the result will be discarded. Of course, to be effective, the shaders run while the depth buffer is being filled need to be fast.
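The benefit can be made concrete with a toy single-pixel simulation (an illustration, not GPU code): with a cheap depth-only prepass, the expensive shader runs only for the fragment that survives the depth test, instead of once per overlapping surface drawn back to front.

```python
def expensive_shader_runs(depths, prepass):
    """depths: per-fragment depths arriving at one pixel, in draw order.
    Returns how many times the expensive shader would run at that pixel."""
    runs = 0
    z_buffer = float("inf")
    if prepass:
        # A cheap depth-only pass fills the Z buffer first.
        z_buffer = min(depths)
    for z in depths:
        if z <= z_buffer:       # early Z: on failure the shader is skipped
            runs += 1
            z_buffer = z
    return runs

overdraw = [9.0, 7.0, 3.0, 1.0]  # four surfaces, drawn far to near
print(expensive_shader_runs(overdraw, prepass=False))  # -> 4
print(expensive_shader_runs(overdraw, prepass=True))   # -> 1
```

Without the prepass, worst-case draw order runs the expensive shader once per overlapping surface; with it, only the visible fragment pays the cost.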

The default effect for terrain has a shader that does per-pixel lighting, with a fallback to the traditional pipeline if a system doesn't support shaders. This effect is in Effects/terrain-default.eff. Also, you can disable the use of shader effects with the property /sim/rendering/shader-effects.

Within effects, we do a first pass that writes the depth buffer. If you remove that pass and don't change anything else, you will see artifacts because the later passes don't write to the depth buffer. Within each camera pass (far camera, near camera), all the passes of a technique are run.

We texture and fog during the first pass. The main reason to render textures at this stage is that textures with transparency do change the fragments that are rendered.

For instance, drawing the bridges without textures will show a wall instead of the suspension chain, the strands, and the iron structure. I had the same problem rendering to the shadow map. So you won't see a boat behind the bridge through the structure or between the strands if you don't render the alpha-tested transparency embedded inside the textures.
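The problem can be sketched with the same kind of toy pixel model (hypothetical, not the real renderer): if the depth prepass ignores the texture's alpha, a fully transparent texel of the bridge still writes depth and wrongly occludes the boat behind it.

```python
def visible_surface(surfaces, alpha_test_in_depth_pass):
    """surfaces: list of (name, depth, alpha) covering one pixel.
    Returns the name of the surface left visible after the depth prepass."""
    z_buffer = float("inf")
    winner = None
    for name, depth, alpha in surfaces:
        if alpha_test_in_depth_pass and alpha < 0.5:
            continue  # transparent texel discarded: it writes no depth
        if depth < z_buffer:
            z_buffer = depth
            winner = name
    return winner

# A transparent gap between bridge strands, with a boat behind it.
pixel = [("bridge-gap", 1.0, 0.0), ("boat", 5.0, 1.0)]
print(visible_surface(pixel, alpha_test_in_depth_pass=False))  # -> 'bridge-gap'
print(visible_surface(pixel, alpha_test_in_depth_pass=True))   # -> 'boat'
```

This is why the depth-filling pass must sample the textures: the alpha test changes which fragments exist at all.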

If you want to apply effects to other kinds of models, you would need to generalize MakeEffectVisitor "in both directions." StateSet objects can appear in any scene graph node and also in the Geometry nodes that sit below osg::Geode. A more general effects visitor needs to track the graphics state as it traverses the scene graph and also needs to examine the geometry. Effects sit at the Geode level.

The Effects system caches Texture2D texture objects. If a parameter such as the clamping is different from a texture that is otherwise identical in the cache, it has to create a new texture object. That's just the way it is. Note that this should not force the image to be loaded from disk again.


Ideally we would throw out the tile-cache and let the OSG DB-Pager handle all this - Mathias has even done some work towards that solution, and the osgEarth renderer does something similar (I think it has some layers above the osgDB pager, but that's what it uses underneath)
— James Turner (2014-02-15). Re: [Flightgear-devel] Memory and tile cache.
The ideal approach is to use PagedLOD, i.e. let the osgDB pager do the job it's intended for. So the base tile would have a PagedLOD which loads the buildings / trees / objects when the LOD threshold trips, with the usual queuing system and unloading. What this needs is to make a pseudo-file-name to add to the loader, which causes a custom osgDB ReaderWriter to run. (Likely with a custom Options instance set specifying any parameter data needed for the tile - is there any? I can't recall.) That ReaderWriter can then return the root osg::Node for the trees/buildings/objects as we already do.
— James Turner (2013-09-18). Re: [Flightgear-devel] Upcoming Random Buildings changes.
Given that we already create LOD nodes, I assume it's switching those to be PagedLOD, and setting the filename / extension / reader-writer Options to some magic, and creating a loader which matches that, which creates the buildings geometry.

That is how PagedLODs work, without too much magic. Some data will need to be passed to the loader. A low-tech way to do that is to encode parameters in the "file name" passed to the loader, but in this case you will probably need access to the scenery to place buildings. You can subclass the DatabaseOptions object stored in the PagedLOD to store whatever you need.

— Tim Moore (2012-08-14). Re: [Flightgear-devel] Memory issues.
We register loading callbacks using both osgDB mechanisms and some SimGear goo to control optimizations, construction of BVH trees, etc.; grep for ModelRegistryCallbackProxy. For the body of the loader, there is nothing special; just don't read any files.

— Tim Moore (2012-08-14). Re: [Flightgear-devel] Memory issues.
Even the stuff that is currently not managed by osgDB will be handled by osgDB in the future. I intend to make model and tile loading an osgDB reader as well - at least an internal reader that registers itself at osgDB. This way we will not have any frame drops any more in the future. Every I/O-bound operation is done in an offline thread, and osgDB makes sure that display lists are already compiled when the model/tile/whatever is plugged into the scenegraph.
... no hangs anymore when an AIModel comes in sight ...

— Mathias (2007-05-26). Re: [Flightgear-devel] small "thread safe" patch.
The ReaderWriters run in the osgDB pager thread, which is exactly where the current ReaderWriterSTG runs (which ultimately does the current tree/object/building placement, I think) - so the threading should not change at all. Indeed, the more I think on it, the more it feels like this should be a very small restructuring of the code, just requiring some slightly delicate PagedLOD plumbing to make it work. We're already doing the right work (building an osg::Node) and doing it in the right thread, we just need to change *when* we do it.
— James Turner (2013-09-18). Re: [Flightgear-devel] Upcoming Random Buildings changes.
The Drawables are the leaf nodes in OSG. They can have StateSets attached to them. With one Drawable there is one display list.

That means if we want to share the geometry, the drawables must be shared. When you get models from the loaders, there might be textures attached to the drawables, which is not that good in the presence of liveries and material animations. When a new model is loaded, osgDB provides you a cached model that is already loaded. This one is cloned, except the Texture StateAttributes and the drawables. This way you will share the display lists and the textures.

— Mathias (2007-05-19). Re: [Flightgear-devel] osg material animaton.

We take an already loaded model that is cloned except the drawables and textures to share the display lists and textures:

Not only a geode, but a whole tree. You get a cached complete tree. That is cloned except the drawables and textures; these are shared. Note that the StateSets are not shared anymore, just the Texture StateAttributes.
— Mathias (2007-05-19). Re: [Flightgear-devel] osg material animaton.

Then there is a visitor that walks the texture attributes and checks whether the loaded image is the same as the one in the current livery load path. If it does not match, the visitor replaces the texture attribute with one carrying the correct livery texture. Then osgDB walks over the tree again and collapses identical textures back to a single one, which is also shared with other models if it is the same.

When a new model is loaded, osgDB provides a cached model that is already loaded; it is cloned except for the Texture StateAttributes and the drawables, so the display lists and textures are shared. osgDB also hands you a cached Image. Two texture state attributes with the same parameters and the same texture are identical, and the osgDB step during model loading that shares duplicate state will collapse them together.
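A minimal sketch of such a cache (hypothetical names, not SimGear's actual code): textures are keyed by image plus sampling parameters, so two requests that differ only in, say, clamping produce two texture objects while still sharing one image.

```python
texture_cache = {}

def get_texture(image_name, wrap="repeat", filtering="linear"):
    """Return a shared texture object for identical (image, parameter) keys."""
    key = (image_name, wrap, filtering)
    if key not in texture_cache:
        # The image itself would be cached separately, so a parameter
        # mismatch creates a new texture object but no second disk read.
        texture_cache[key] = {"image": image_name, "wrap": wrap,
                              "filtering": filtering}
    return texture_cache[key]

a = get_texture("runway.png")
b = get_texture("runway.png")                # same parameters: shared object
c = get_texture("runway.png", wrap="clamp")  # differs: new texture object
print(a is b, a is c)  # -> True False
```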


we deal with STATIC stg declarations differently from SHARED (simgear/scene/tgdb/ReaderWriterSTG.cxx) in a number of ways:
  • We use an osg::ProxyNode for STATIC objects so the loading is deferred until we get within 20 km of the object, while the assumption is that we should just load SHARED models immediately.
  • We cache the object model for SHARED objects for later. (I haven't checked what rules are set for ageing items out of the cache.)
  • We use additional search paths for SHARED models - ../../../ from the .STG location, the terrasync root directory, and FG_ROOT. The terrasync directory is included as we distribute the shared models over Terrasync. Note also that model loading is offloaded to a separate thread these days, so there shouldn't be a big frame-rate impact from loading a static model.
    — Stuart Buchanan (Mar 1st, 2016). Re: [Flightgear-devel] Shared/Static models?.


This all happens in one go, within the near/far scenes. All the geometry associated with a pass is "collected" and rendered at once.


Rembrandt / Deferred

The deferred rendering technique that Project Rembrandt implements separates geometry and lighting. That means that shaders attached to models or terrain don't do lighting. Lighting is done globally, with only one shader per light; atmospheric effects are also done globally. These passes compute lighting and fog *only* on the visible surfaces, after hidden surface removal. The same shader pass computes the lighting of the cockpit and of the distant terrain, so it should be prepared to optimize both for near surfaces that don't need to be fogged and for distant surfaces that don't need to receive shadows.

So in other words, in Rembrandt you don't have to implement fogging and lighting multiple times, or worry whether this model or that one has the correct haze calculation to match the terrain - models don't have one, period.

The small print is this: deferred rendering can't be applied to transparent surfaces. Clouds or windshields are added to the scene *after* the light calculation. They don't cast shadows (if objects like clouds or trees have an opaque part - the center, with a transparent corona - that part can cast shadows), and their shaders need to match the global lighting pass as well as the global haze pass.

The skydome is also rendered separately, before anything else, because it is fake geometry. The stars are a collection of points, the moon is a textured sphere lit by a constant OpenGL light source, and the sun is two small quads (one for the halo, one for the celestial object).

Instead of pure geometry, the sky could be drawn with a fullscreen quad (Fred did that in my unpublished engine).

Depth Buffer

Rembrandt needs a monotonic depth buffer, and erasing it in the middle of a frame is not an option.

Rembrandt can't use a scheme where the depth buffer is cleared in between, because it relies on the depth buffer to compute positions. But it exhibits depth buffer precision problems too, especially when computing lights (if the light volume is too tight, it can fail to intersect the terrain). So I was thinking of playing with depth ranges: the far camera renders with a range of [0.5..1] and then the near camera renders with the range [0..0.5].
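The idea corresponds to glDepthRange: each camera keeps its own projection, but its window-space depths are remapped into a disjoint half of the buffer, so every near-scene fragment sorts in front of every far-scene fragment without a mid-frame clear. A sketch of the remapping (plain arithmetic, not Rembrandt code):

```python
def window_depth(d_ndc01, range_near, range_far):
    """Remap a camera's [0, 1] depth into its assigned slice of the buffer,
    in the manner of glDepthRange(range_near, range_far)."""
    return range_near + d_ndc01 * (range_far - range_near)

# Far camera owns [0.5, 1.0]; near camera owns [0.0, 0.5].
far_closest = window_depth(0.0, 0.5, 1.0)    # nearest far-scene depth -> 0.5
near_farthest = window_depth(1.0, 0.0, 0.5)  # farthest near-scene depth -> 0.5
print(far_closest, near_farthest)
# The two ranges meet exactly at 0.5, so the ordering between the scenes
# is preserved while each camera keeps its full projection precision.
```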