This page is intended to keep track of some of the more commonly asked questions related to the Canvas system, and if/how well it is integrated with other subsystems and features.
- Canvas vs. Liveries
- Canvas vs. Rembrandt
- Canvas vs. Materials
- Canvas vs. Effects/Shaders
- Canvas vs. Tail/Scene Cams
- Canvas vs. MFDs
- Canvas vs. GUI
- Canvas vs. OS Windows
- Canvas vs. osgEarth
- Canvas vs. OpenGL ES
- Canvas vs. FGCanvas
- Canvas vs. HUDs
- Canvas vs. 2D Panels
- Canvas vs. Splash Screens
- References
Canvas vs. Liveries
We have even seen Canvas-based liveries: Howto:Dynamic Liveries via Canvas
If a canvas is internally referenceable as a texture, it ought to be possible to also dynamically generate normal and reflection maps, no? And this ought to be a path towards the feature request to provide the option of changing such maps along with the livery. Another thought in a similar vein: how does Canvas manage memory? If I declare a 20k×20k canvas and put just a few small raster images on it, do we need the full memory space of the 20k texture, or is there cleverness applied by dispatching lookup calls to the smaller textures (which likely costs extra performance, but hey...)?
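For reference, the dynamic-livery setup referred to above boils down to replacing a model texture with a canvas. A minimal Nasal sketch (the object name "Fuselage" and the file paths are examples, not taken from any real aircraft):

<pre>
# Minimal dynamic-livery sketch: a canvas replacing the livery
# texture of a model object. Names/paths below are illustrative.
var livery = canvas.new({
    "name": "livery",
    "size": [2048, 2048],   # backing texture size
    "view": [2048, 2048],
});
livery.addPlacement({ "node": "Fuselage" });  # 3D object to retexture

var root = livery.createGroup();
root.createChild("image")
    .setFile("Aircraft/MyPlane/Liveries/default.png")
    .setSize(2048, 2048);

# Decals/registration can now be drawn on top at runtime, e.g.:
root.createChild("text")
    .setText("D-ABCD")
    .setFontSize(120)
    .setTranslation(1024, 512);
</pre>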
Canvas vs. Rembrandt
Canvas vs. Materials
We could implement "chains" of materials where some are the input for a Canvas texture, while others are the "output" of a Canvas texture. This would mean that even schemes like Rembrandt (deferred rendering) could be implemented largely in fgdata space (effects/shaders); for the time being, Rembrandt uses C++ code to set up the corresponding buffers and stages to put everything together. There would be zero Nasal overhead involved as long as only the Canvas stage is used, i.e. none of the elements (an empty Canvas referencing a material created by another Canvas/effect). At that point, you could have a multi-stage, multi-pass rendering pipeline implemented solely on top of the FlightGear property tree, by setting up a handful of "buffers" (Canvas FBOs/RTTs) and using them in a "creative" way to chain them together and create the final output. Also note that we are talking here about roughly 50-100 lines of C++ code to teach the materials manager to retrieve a texture from the Canvas subsystem (instead of the base package). ThomasS recently implemented the code for the Canvas-specific side of this, because he also required hooks to get a Canvas as an osg::Image, so that he could serve it via http (i.e. as a jpeg/png). The Canvas C++ code already has a getTexture2D() callback that we can use.
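As an illustration of the "input" half of such a chain, one canvas can already be used as the texture source of another today via the canvas:// path protocol. A Nasal sketch (the texture index is illustrative; it depends on the order in which canvases were created at runtime):

<pre>
# One canvas used as the texture input of another ("chaining").
var buffer = canvas.new({ "name": "buffer", "size": [512, 512], "view": [512, 512] });
buffer.createGroup().createChild("text").setText("stage 1");

var output = canvas.new({ "name": "output", "size": [512, 512], "view": [512, 512] });
output.createGroup().createChild("image")
    .setFile("canvas://by-index/texture[0]")  # reference the first canvas
    .setSize(512, 512);
</pre>

The missing half is the "output" direction, i.e. letting a material or effect render into such a buffer, which is what the 50-100 lines of C++ mentioned above would provide.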
Canvas vs. Effects/Shaders
Reflections are typically implemented using effects and shaders. For the time being, the Canvas system is not (yet) hooked up to the effects/shader system. However, there is experimental proof-of-concept code demonstrating that this is possible: Canvas Development#Effects .2F Shaders
This would be a welcome addition to the effect framework, but currently it doesn't work like this (the ability to use a canvas as a functional texture, i.e. runtime-writable normal or specular maps, would also be a 'cool to have' feature).
Tim and Curt once talked about this idea, and more recently James also suggested something along these lines; for a summary of the corresponding effects/Canvas related discussions, see: Canvas Development#Effects .2F Shaders. However, I think Stuart is the one with the most recent experience touching the materials framework, which would be one of the lowest-hanging fruits for mutually integrating the materials and Canvas frameworks: a Canvas texture could be treated as an input for a material, but also as an output/target buffer, so that nested effects using chained Canvas FBOs would become possible. This also touches on psadro_gm's recent work relating to procedural texturing/draping: Canvas Scenery Overlays. If/when this functionality is added, it would become possible to move most of the Rembrandt-related FBO setup logic into xml/fgdata space without sacrificing any performance, while gaining tons of flexibility, also for people wanting to experiment with additional/alternate rendering approaches, such as: Experimental terrain engine
Canvas vs. Tail/Scene Cams
See CompositeViewer Support for the main article about this subject.
Simultaneously showing different views, such as cockpit view and helicopter view, on one or more screens is, as far as I know, still not supported, since it would reportedly require a fairly major rewrite of how FlightGear uses the OSG camera/viewer, plus fixes for all explicit and/or implicit assumptions in the FlightGear code that the active cameras are all in the same position.
The external camera, and the closely related rear-view mirror, have been asked for many times, and the consensus is that they are quite feasible. However, the problem is that nobody with the relevant skills has yet taken up the challenge. My understanding is that most (but not all) of the interfaces are already there.
If you want more than just a simple reflection (i.e. a mirror/tail cam view etc.), you would need to add support for a dedicated Canvas camera element:
While there is code doing this sort of thing, it isn't yet integrated with/exposed to Canvas, and it is also, at least in part, "blocked" by the ongoing PagedLOD work, which in turn blocks CompositeViewer adoption (having multiple independent viewers requires PagedLOD): CompositeViewer Support
The only thing that we can currently support with reasonable effort is "slaved views" (as per $FG_ROOT/Docs/README.multiscreen). That would not require too much in terms of coding, because the code is already there; in fact, CameraGroup.cxx already contains an RTT/FBO (render-to-texture) implementation that renders slaved views to an offscreen context. This is also how Rembrandt buffers are set up behind the scenes. So basically, the code is there; it would need to be extracted/generalized and turned into a CanvasElement, and possibly integrated with the existing view manager code. And then there is also Zan's newcameras branch, which exposes rendering stages (passes) to XML/property tree space, so that individual stages are made accessible to shaders/effects. Thus, most of the code is there; it is mainly a matter of integrating things. That would require someone able to build SG/FG from source, familiar with C++, and willing/able to work through some OSG tutorials/docs to make this work: Canvas Development#Supporting Cameras

On the other hand, Canvas is/was primarily about exposing 2D rendering to fgdata space, so that fgdata developers could develop and maintain 2D rendering related features without having to be core developers (core development being an obvious bottleneck, with a significant barrier to entry). In other words, people would need to be convinced that they want to let Canvas evolve beyond the 2D use-case, i.e. by allowing effects/shaders per element, but also by letting cameras be created/controlled easily. Personally, I do believe that this is a worthwhile thing to aim for, as it would help unify (and simplify) most RTT/FBO handling in SG/FG, and make this available to people like Thorsten, who have a track record of doing really fancy, unprecedented stuff with this kind of flexibility.
Equally, there are tons of use-cases where aircraft/scenery developers may want to set up custom cameras (A380 tail cam, Space Shuttle) and render them to an offscreen texture (e.g. a GUI dialog and/or MFD screen). It is true that "slaved views" are kinda limited at the moment, but they are also comparatively easy to set up, so I think that supporting slaved camera views via Canvas could be a good way to bootstrap/boost this development and pave the way for CompositeViewer adoption/integration in the future. However, right now I am not aware of anybody working towards this. Ironically, this gives a lot of momentum to poweroftwo's osgEarth effort, because that can already support independent viewers/cameras, and it would be pretty straightforward to render an osgEarth camera/map to a Canvas texture and use that elsewhere (GUI dialog/MFD screen etc.).

However, currently I am inclined to state that Canvas is falling victim to its own success: the way people (early adopters) are using it is hugely problematic and does not scale at all. So we really need to stop documenting certain APIs and instead provide a single, scalable extension mechanism, i.e. registering new features as dedicated Canvas elements implemented in Nasal space and registered with the CanvasGroup helper. Absent that, the situation with Canvas contributions is likely to approach exactly the dilemma we are seeing with most Nasal spaghetti code, which is unmaintainable and begging to be rewritten/ported from scratch. This is simply because most aircraft developers are only interested in a single use-case (usually their own aircraft/instrument), and they don't care about long-term potential and maintenance.

There are now tons of Canvas-based features that would be useful in theory, but which are implemented in a fashion that renders them non-reusable elsewhere: Canvas Development#The Future of Canvas in FlightGear. So at the moment I am not too thrilled to add too many new features to Canvas until this is solved, because we are seeing so much Nasal/Canvas code that is simply a dead end due to the way it is structured, i.e. it won't be able to benefit from future optimizations short of a major rewrite or tons of 1:1 support by people familiar with the Canvas system. Which is why I am convinced that we need to stop implementing useful functionality using the existing approach, and instead adopt one that is CanvasElement-centric, where useful instruments, widgets and MFDs would be registered as custom elements implemented in Nasal space (via cppbind sub-classing). If we don't do that, we will continue to see cool Canvas features implemented as spaghetti-code monsters that reflect badly upon Nasal and Canvas due to their lack of design and poor performance.
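To illustrate the element-centric idea in concrete terms (all names below are made up for illustration; this is not an existing fgdata API), a reusable instrument would be encapsulated as a Nasal class that aircraft simply instantiate, instead of copy/pasted per-aircraft code:

<pre>
# Illustrative sketch only: a reusable MFD building block wrapped
# as a Nasal class, so aircraft instantiate it instead of
# duplicating code. Class/file names are hypothetical.
var NavDisplay = {
    new: func(parent_group) {
        var m = { parents: [NavDisplay] };
        m.group = parent_group.createChild("group", "nav-display");
        m.rose  = m.group.createChild("image")
                         .setFile("Aircraft/Instruments/compass-rose.png");  # example path
        return m;
    },
    # rotate the compass rose to track the aircraft heading
    update: func(hdg_deg) {
        me.rose.setRotation(-hdg_deg * math.pi / 180);
    },
};
</pre>

An aircraft would then merely call NavDisplay.new(root) and drive update() from a timer; any future optimization of the element automatically benefits every aircraft using it.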
Tail cams are slaved cameras, so they could use code that already exists in FlightGear, which would need to be integrated with the Canvas system and exposed as a dedicated Canvas element (kinda like the view manager rendering everything to a texture/osg::Geode). There is window setup/handling code in CameraGroup.cxx which sets up these slaved views and renders the whole thing to an osg::TextureRectangle, which is pretty much what needs to be extracted and integrated with a new "CanvasCamera" element, the boilerplate for which can be seen at: Canvas Development#Supporting Cameras. The whole RTT/FBO texture setup can be seen here: http://sourceforge.net/p/flightgear/flightgear/ci/next/tree/src/Viewer/CameraGroup.cxx#l994 That code would be redundant in the Canvas context, i.e. it could be replaced by a Canvas FBO instead. The next step would then be wrapping the whole thing in a CanvasCamera and exposing the corresponding view parameters as properties (propertyObject), so that slaved cameras can be controlled via Canvas. Otherwise, very little else is needed, because the CanvasMgr would handle updating the camera and render everything to the texture that you specified.
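If such a CanvasCamera element existed, using it from Nasal might look roughly like the sketch below. Note that the "camera" element type and its property names are hypothetical; only canvas.new(), addPlacement() and createGroup()/createChild() are existing API:

<pre>
# Hypothetical sketch: a slaved tail cam rendered to a canvas.
# The "camera" element and its property keys do not exist yet.
var tailcam = canvas.new({ "name": "TailCam", "size": [512, 512], "view": [512, 512] });
tailcam.addPlacement({ "node": "TailCamScreen" });  # example object name in the model

var cam = tailcam.createGroup().createChild("camera");
cam.setDouble("heading-offset-deg", 180);  # look backwards along the fuselage
cam.setDouble("pitch-offset-deg", -10);
</pre>

The CanvasMgr would then update the slaved view each frame and render it into the texture placed on the model, exactly as described above.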
We have had a number of aircraft developers who would also require this functionality for implementing mirrors and/or tail cam views rendered to instruments, or FLIR-type views. All of these would become possible to support once the view manager is refactored so that it can render to a Canvas: Canvas Development#Supporting Cameras. For the time being, I'd suggest using the multi-instance approach mentioned by ludomotico; depending on your requirements (and your willingness to tinker with experimental code), you could also check out FGViewer: FGViewer. Given how FlightGear has evolved over time, not just regarding effects/shaders, but also complementary efforts like deferred rendering (via Rembrandt), we will probably see cameras (and maybe individual rendering stages) exposed as Canvases, so that there is a well-defined interface for hooking up custom effects/shaders to each stage in the pipeline. Zan's newcameras work demonstrates just how much flexibility can be accomplished this way; basically, schemes like Rembrandt could then be entirely maintained in XML/effects and shader (fgdata) space.
Canvas vs. MFDs
Canvas vs. GUI
Canvas vs. OS Windows
Canvas vs. osgEarth
Canvas vs. OpenGL ES
We have other folks interested in running such things on Android devices or mobile phones, so it would make sense to coordinate such efforts, because the requirements will be very similar.
Even some core developers have discussed this on the devel list; for example, see: Howto:Optimizing FlightGear for mobile devices#Status .2809.2F2013.29. The Canvas is a property-tree-based subsystem: primarily a wrapper on top of Shiva and OSG that is invoked via listeners.
So all the OpenGL code is located either in Shiva or in OSG, and OSG can be told to use OpenGL ES; it would be a matter of experimenting with it.
For this particular project, I would suggest extracting the Canvas into a standalone executable. Something like this has previously been done by TorstenD when he came up with FGPanel: he "just" extracted FlightGear's 2D panel code and turned it into a standalone binary. This alone would help us ensure that we can optimize the Canvas to support OpenGL ES; once that is working, you could cross-compile the standalone Canvas binary. Technically, this will involve some, but not all, of the steps outlined at: FGCanvas
This should give you a rough idea on what's involved in extracting the canvas system into a separate code base, to cross-compile it for other devices.
Basically, the steps are:
- use the FGPanel/FGRadar or SGApplication code base to come up with an SGSubsystem-based program
- add the Nasal, events (timers) and property tree subsystems
- add the canvas system
- check where OpenGL ES is not yet supported, report issues or fix them directly
- come up with workarounds regarding the FBO issue
40-60% of this is already done inside FGPanel and FGRadar, so the first weekend will be primarily spent doing "copy & paste".
If you are interested in working on this, you should obviously know some C++ and you should be able to build from source.
If that's not a problem, I suggest raising the question in the canvas forum, so that TheTom can provide some more informed input. It would definitely be a useful project, not just for Raspberry Pi support, but for FG itself, because the whole FGPanel/FGCanvas idea is generally agreed to be useful, so any work related to this would be highly appreciated, and we are here to help you accordingly.
Canvas vs. FGCanvas
Canvas vs. HUDs
Canvas vs. 2D Panels
Canvas vs. Splash Screens