Uniform Buffer Objects
See Core Profile support for the main article about this subject.
Each individual graphic can be done in a shader, but that's not the issue: the issue is the data coming in from Nasal, and especially complex SVG shapes, or, for a pitch ladder, potentially hundreds of ticks/lines. Passing that data to a shader efficiently is tricky. What needs to be done is to convert the inputs we have into a list of geometric primitives which can then be passed as geometry plus shaders, and that is precisely what the various libraries (Shiva, Nano, or any of the others) do: tessellate the paths, collect lines with matching drawing style (pen width, stroke pattern, etc.) and then send them to OpenGL using vertex arrays and shaders.
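As an illustration of that "collect by style, then draw in one go" step, here is a minimal sketch. The types and the batching scheme are hypothetical (this is not the actual Canvas code), and classic GL vertex arrays are used for brevity:

```cpp
#include <map>
#include <vector>
#include <GL/gl.h>

struct Style {                       // pen width, stroke pattern, ...
    float penWidth;
    GLushort stipplePattern;
    bool operator<(const Style& o) const {
        return penWidth != o.penWidth ? penWidth < o.penWidth
                                      : stipplePattern < o.stipplePattern;
    }
};

struct Vertex { float x, y; };

// One batch per style: tessellation appends vertices here only when a
// path's shape or style changes; otherwise the cached data is reused.
std::map<Style, std::vector<Vertex>> batches;

void drawBatches() {
    glEnable(GL_LINE_STIPPLE);
    glEnableClientState(GL_VERTEX_ARRAY);
    for (const auto& b : batches) {
        glLineWidth(b.first.penWidth);
        glLineStipple(1, b.first.stipplePattern);
        glVertexPointer(2, GL_FLOAT, sizeof(Vertex), b.second.data());
        glDrawArrays(GL_LINES, 0, static_cast<GLsizei>(b.second.size()));
    }
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisable(GL_LINE_STIPPLE);
}
```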
All of this needs to be done knowing, for example, what clip/mask is in effect, and with the correct ordering for the opacity of elements to work. (Clipping in particular adds a ton of complexity.)
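For the clipping part, one common building block is a stencil pass. A minimal raw-GL sketch (illustrative only; drawClipShape() is an assumed helper, and drawBatches() is the per-style draw from the sketch above):

```cpp
void drawClipShape();   // assumed helper: rasterizes the clip path
void drawBatches();     // per-style batches, as sketched above

// Stencil-based clipping: write the clip shape into the stencil buffer,
// then draw the element only where the stencil bit is set.
void drawClipped() {
    glEnable(GL_STENCIL_TEST);

    // Pass 1: write the clip region into the stencil buffer only.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawClipShape();

    // Pass 2: draw the element where the stencil matches.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawBatches();

    glDisable(GL_STENCIL_TEST);
}
```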
The good news is that the work only needs to be redone when the path (etc.) changes style or shape: as long as those remain fixed, the cached triangle/line representation can be drawn very efficiently.[1]
Longer term, we can look at passing this information using UBOs, which are designed for exactly this problem, but I'm not sure what OSG's support for them is like. And of course, UBOs need a relatively high OpenGL version (3.1, or the GL_ARB_uniform_buffer_object extension), which means we again need to focus on Core profile support :)[2]
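For reference, the raw OpenGL side of a UBO looks roughly like this. A minimal sketch with plain GL calls; the block name "CanvasParams" and its members are made up for illustration:

```cpp
#include <GL/glew.h>   // or any loader exposing GL 3.1+ entry points

// GLSL side (for reference):
//   layout(std140) uniform CanvasParams {
//       vec4 fillColor;
//       vec4 strokeColor;
//   };

struct CanvasParams {      // matches the std140 layout above (vec4s only)
    float fillColor[4];
    float strokeColor[4];
};

GLuint createParamsUBO(GLuint program, const CanvasParams& params) {
    GLuint ubo;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(params), &params, GL_DYNAMIC_DRAW);

    // Associate the shader's uniform block with binding point 0,
    // then bind the buffer to that same point.
    GLuint blockIndex = glGetUniformBlockIndex(program, "CanvasParams");
    glUniformBlockBinding(program, blockIndex, 0);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
    return ubo;
}
```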
Background
It would be nice if the Effects framework had a way to load arbitrary textures and make them available to effects.[3]
You can't pass arrays through the effect framework, so if you want to extend it you have to write every extension explicitly anyway; arrays offer no advantage there. The idea is good, and I looked for a good solution when making the cloud shadows, but there is none: every array element has to be handed over separately.[4]
It does not currently seem possible to expose an array directly, but I think it should be possible to use an array in the shader code and use e.g. coordinates[3] as the name in the effect file.[5]
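This works because OpenGL lets individual array elements be addressed by name. A minimal sketch of the equivalent outside the effect framework (the uniform name "coordinates" and the helper are illustrative):

```cpp
#include <cstdio>
#include <GL/glew.h>

// Upload an array of positions element by element, which is what handing
// each element over separately amounts to at the GL level.
void setCoordinates(GLuint program, const float (*pos)[3], int count) {
    for (int i = 0; i < count; ++i) {
        char name[32];
        std::snprintf(name, sizeof(name), "coordinates[%d]", i);
        GLint loc = glGetUniformLocation(program, name);
        if (loc >= 0)
            glUniform3f(loc, pos[i][0], pos[i][1], pos[i][2]);
    }
}
```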
There are different strategies for doing this: for instance, generating a shadow map not via a pass on the GPU, as Rembrandt does, but procedurally on the CPU from meta info. Now I'm generating it procedurally on the GPU from meta info and just need to pass that meta info. In the current state, I'm passing 32 uniforms to encode position information, which seems to work without a significant loss of performance and gives a decent visual impression. It might still grow a bit more, but my current thinking is to try to get priority for everything in the field of view, to get more bang for the buck. A texture seems a bit of overkill for such a number, especially given that we run out of texture units in some effects already (the model shader is using 8 texture units already).[6]
If you want to pass substantial amounts of data, it is currently recommended to use a texture (with filtering disabled, probably) to pass the info, since we don't have much chance of using the "correct" solution (UBOs) in the near future.[7]
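A minimal sketch of that texture-as-data-buffer approach (raw GL, illustrative names; requires float-texture support). The key points are a float format and nearest filtering, so the shader reads back the raw values unfiltered:

```cpp
#include <GL/glew.h>

// Pack 'count' vec4 records into a 1D float texture that the shader can
// read back exactly with texelFetch(). 'data' holds 4*count floats.
GLuint createDataTexture(const float* data, int count) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_1D, tex);
    // GL_NEAREST: disable filtering so values are returned unmodified.
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA32F, count, 0,
                 GL_RGBA, GL_FLOAT, data);
    return tex;
}

// GLSL side (for reference):
//   uniform sampler1D dataTex;
//   vec4 record = texelFetch(dataTex, index, 0);
```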
The main challenge seems to be using a standard alignment/packing format for the passed data (see https://www.opengl.org/wiki/Uniform_Buffer_Object#Layout_queries). It also seems that Tim Moore was originally planning on supporting UBOs in OSG: http://markmail.org/message/nk3dswtmkcrn2j4m http://forum.openscenegraph.org/viewtopic.php?t=12732 [8]
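To illustrate the alignment pitfall: under the std140 layout a vec3 is padded to 16 bytes, so a naively packed C++ struct will not match the GLSL block. A sketch (block and member names are illustrative):

```cpp
// GLSL side (std140):
//   layout(std140) uniform TickParams {
//       vec3  origin;     // occupies 16 bytes (vec3 is padded to vec4)
//       float spacing;    // fits into the padding slot after 'origin'
//       vec2  extent;     // aligned to 8 bytes
//   };

// Matching C++ struct: explicit padding keeps offsets in sync with std140.
struct TickParams {
    float origin[3];   // offset  0
    float spacing;     // offset 12: reuses the vec3's padding slot
    float extent[2];   // offset 16
    float _pad[2];     // round the block up to a 16-byte multiple, to be safe
};
static_assert(sizeof(TickParams) == 32, "must match std140 block size");
```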
If/when the Canvas system is sufficiently integrated with the effects/shader system, any camera (MFD/scenery, offscreen or not) can trivially make use of this functionality without requiring tons of custom code, at the mere cost of adding the corresponding Canvas elements to the Canvas::Group registry. This would also work for slave cameras (or those using the OSG CompositeViewer).[9]
References