Canvas development

[Pyramid diagram visualizing the various abstraction layers involved in the Canvas system, including scripting-space Canvas frameworks]


This article collects Canvas-related proposals and discussions that have come up over the years; some of these are efforts currently in progress.

Note  This article is primarily of interest to people familiar with Building FlightGear from source who want to extend the Canvas 2D rendering system in SimGear ($SG_SRC/canvas). Readers are assumed to be familiar with C++ and OSG, the Property Tree, and fundamental FlightGear APIs like SGPropertyNode (doxygen), Property Objects, SGSubsystem and SGPropertyChangeListener (the latter being wrapped via simgear::PropertyBasedElement). The Canvas code itself makes extensive use of the STL and Boost. The latest Canvas/Doxygen docs can be found here.

There are two main ways to extend FlightGear's Canvas system:

  • new/extended elements: elements determine what is rendered and how (text, image, line, circle), i.e. new rendering primitives that cannot be easily or efficiently expressed using existing means, e.g. a moving map/terrain heightmap, camera views or ESRI shapefile support
  • new/extended placements: placements determine where a canvas texture is to be placed (shown), e.g. in the cockpit, in the scenery, in a GUI dialog or in an osgviewer window

Whenever all existing Canvas elements (group, map, text, image, path) should benefit from an addition, such as effects/shader support, it makes sense to extend the underlying base class itself, i.e. Canvas::Element. In addition, the map element (a subclass of group) can be extended to support additional map projections (see simgear/simgear/canvas/elements/map/projection.hxx). People who just want to add a new layer to an existing dialog or instrument will probably want to refer to Canvas MapStructure instead.
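To make the projection hook concrete, the following stdlib-only C++ sketch shows the general shape of such a projection: a mapping from geographic to screen coordinates. All class and member names here are illustrative, not the actual SimGear interface (which is defined in the projection header referenced above):

```cpp
#include <cmath>

// Illustrative stand-in for a Canvas map projection: maps (lat, lon) in
// degrees to 2D screen coordinates. All names are made up for this sketch.
struct Projection {
  virtual ~Projection() = default;
  virtual void project(double lat, double lon, double& x, double& y) const = 0;
};

// A simple equirectangular projection centred on a reference point.
struct Equirectangular : Projection {
  double ref_lat, ref_lon, scale;  // scale: pixels per degree

  Equirectangular(double lat, double lon, double s)
    : ref_lat(lat), ref_lon(lon), scale(s) {}

  void project(double lat, double lon, double& x, double& y) const override {
    const double deg2rad = 3.14159265358979323846 / 180.0;
    // Longitude is compressed by cos(ref_lat) so distances stay roughly correct.
    x = (lon - ref_lon) * std::cos(ref_lat * deg2rad) * scale;
    y = (ref_lat - lat) * scale;  // screen y grows downwards
  }
};
```

A new projection subclass would simply implement a different project() mapping; the rest of the map element stays unchanged.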

The canvas system is a property-driven FlightGear subsystem that allows creating, rendering and updating dynamic OpenGL textures at runtime by setting properties in the main FlightGear Property Tree.

The Property Tree is the sole interfacing mechanism in use by the Canvas system. A so-called listener-based subsystem (via SGPropertyChangeListener) watches the canvas subtree in the main property tree for supported "events" (i.e. properties being set, written to or modified), and then updates each associated texture accordingly, e.g. by adding a requested vector or raster image, drawing a map/item, placing symbols or placing text labels with custom fonts.
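The listener idea can be illustrated with a stdlib-only C++ sketch. This is not the actual SGPropertyChangeListener API, just a minimal model of the mechanism: writes under a watched prefix fire update events:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal stand-in for the listener mechanism: writes to a watched subtree
// are turned into update events. The real system uses SGPropertyChangeListener
// on the canvas subtree; everything below is an illustrative sketch.
class PropertyTree {
public:
  using Listener = std::function<void(const std::string&, const std::string&)>;

  void watch(const std::string& prefix, Listener l) {
    listeners_[prefix].push_back(std::move(l));
  }

  void set(const std::string& path, const std::string& value) {
    values_[path] = value;
    for (auto& [prefix, ls] : listeners_)
      if (path.compare(0, prefix.size(), prefix) == 0)
        for (auto& l : ls) l(path, value);  // fire "valueChanged" events
  }

private:
  std::map<std::string, std::string> values_;
  std::map<std::string, std::vector<Listener>> listeners_;
};
```

In the real system, the fired events are what trigger re-drawing of the associated canvas texture; properties outside the watched subtree are ignored.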

Elements can be nested and added to groups which support showing/hiding and clipping of segments. Vector drawing is handled via ShivaVG (OpenVG).

All property updates result in native C++/OSG data structures being updated (typically using OSG/STL/Boost containers), so that the property tree and scripting are solely used to send update events. This ensures that Canvas-based systems are typically fast enough, often delivering frame rates of ~40-60 fps or beyond.

Animations are currently not directly supported. Instead, they can be implemented by using separate canvas groups and hiding/showing them as needed, or simply by changing the size/color/styling attributes of a canvas group using Nasal timers/listeners. Another option to update a canvas without relying on Nasal timers (e.g. due to GC considerations) is using so-called "Property Rules", which are not yet exposed to Nasal, but which can be used wherever scripting overhead should be minimal. Sooner or later, we're probably going to come up with a scripting-space wrapper encapsulating most animation needs, so that existing Canvas frameworks can use a single back-end that can be customized and optimized over time, possibly by adding native support for animations and/or by allowing animations to be handled without going through scripting space.

The Canvas fully supports recursion, by allowing other canvases (and sub-regions of them via texture-mapping) to be referenced and used as raster images, so that multiple canvases can be chained together, but also through the notion of "groups", which are containers for other canvas elements, including child groups or elements referencing other canvases.

This can be particularly useful for projects requiring multi-texturing and other multi-pass texturing stages. This mechanism is also one of the main building blocks used by the MapStructure charting framework to implement caching support via texture maps, without needing any changes on the C++ side to handle symbol instancing.
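The sub-region referencing used for such texture maps boils down to mapping a pixel rectangle of the source canvas to normalized texture coordinates. A stdlib-only C++ sketch (the helper name is made up for illustration):

```cpp
// Illustrative helper: given a source canvas of size (w, h) and a pixel
// rectangle inside it, compute the normalized [0..1] texture coordinates
// that an image element would use to display just that sub-region.
struct TexRegion { double u0, v0, u1, v1; };

TexRegion subRegion(double w, double h,
                    double x, double y, double rw, double rh) {
  return { x / w, y / h, (x + rw) / w, (y + rh) / h };
}
```

This is essentially what happens when a MapStructure symbol cache hands out a sub-region of a shared canvas instead of a whole texture.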

The canvas itself is developed with a focus on primarily being an enabling technology. In other words, the canvas is not about implementing individual features like a PFD, ND, EFIS, EICAS or other MFD instruments like a moving map or a GUI library.

Rather, the Canvas system is all about coming up with a flexible and efficient system that allows end-users (aircraft developers and other base package contributors) to develop such features themselves in user space (i.e. the base package) via scripting - without having to be proficient C++ programmers, without having to rebuild FlightGear from source, and without having to be familiar with OpenSceneGraph, OpenGL or other technologies that typically involve a steep learning curve (i.e. STL/Boost).

This approach has several major advantages - in particular, it frees core developers from having to develop and maintain end-user features like a wxradar, groundradar or Navigational Display/PFD by empowering content/base package developers to handle the implementation of such features themselves.

Thus, development of content is moved to user space (i.e. the base package). Recently, we've seen a shift in this trend: instead of implementing end-user feature requests themselves, more and more core developers implement the building blocks and infrastructure needed to delegate the implementation of these features to user space.

Besides, core developers are generally overstretched, and there are not enough of them to handle all core-development tasks:

"Unfortunately, most of the active FG developers are currently very overstretched in terms of the areas that they have ownership of, which is affecting how much can actually be done. Fundamentally we need more core devs." [1]
— Stuart Buchanan
  1. Stuart Buchanan (Thu, 25 Apr 2013 07:28:28 -0700). Atmospheric Light Scattering.

The only way to deal with this is to shift the core development focus from developing complex high-level end-user features (such as an ND, TCAS or WXRADAR) that take years to fully develop, to just providing lower-level APIs (like a navdb API or a 2D drawing API like Canvas) that enable base package developers to develop those really high-level features themselves, without being affected by any core-development "bottlenecks".

This is the route that seemed to work out fairly well for the local weather system, which was prototyped and implemented by a single base package developer in scripting space, who just asked for certain scripting hooks to be provided at some point.

For example, when Stuart, Torsten or Erik implemented LW-specific core extensions, these were about providing new hooks to be used by Thorsten. They didn't commit to implementing a weather system, they just enabled somebody else to continue his work. So this strategy is as much about delegation, as it is about organizing core development.

Core developers cannot possibly implement all the ideas and feature requests that aircraft developers and end users may have, but they can at least provide a toolbox for base package developers to implement such features. Without doubt, implementing a WXRADAR, TCAS, AGRADAR or even a full ND/MFD is incredibly complex and time-consuming, especially when taking into account the plethora of instrument variations in existence today.

Exposing a 2D drawing API or a navdb API to base package developers would have been much simpler and less time-consuming, at the cost of possibly not providing certain instruments/features directly - while still providing the building blocks for skilled base package contributors to implement such instruments eventually within the base package, rather than within the C++ source code where evolution and maintenance of such instruments is inherently limited by the availability of C++ developers.

Given the progress we've seen in Canvas-related contributions, boosted by having a 2D API, this is a very worthwhile route for developing MFD-style instruments and other end-user features without being limited by our shortage of core developers.

Furthermore, the amount of specialized code in the main FlightGear code base is significantly reduced and increasingly unified: One major aspect of adopting the Canvas system was Unifying the 2D rendering backend via canvas, so that more and more of the old/legacy code can be incrementally re-implemented and modernized through corresponding wrappers, which includes scripting-space frameworks for existing features like the Hud system, but also our existing PLIB/PUI-based GUI, and the old 2D panels code or the Map dialog.

Many of these features are currently using legacy code that hasn't been maintained in years, causing issues when it comes to making use of certain OSG optimizations, or interoperability with new code.

In addition, widgets and instruments will no longer be hard-coded, but rather "compiled" into hardware-accelerated Canvas data structures at initialization time, typically animated using timers or listeners (via scripting or property rules). The fact that previously hard-coded widgets or instruments are now fully implemented in scripting space also means that deploying updates no longer necessarily requires installing new binaries.

This is analogous to how more and more software programs, such as the Firefox and Chrome browsers, use an increasingly scripted approach to implementing functionality, e.g. JavaScript/XUL, moving the implementation of certain features out of native code.

Finally, an increasingly unified 2D rendering back-end also makes porting/re-targeting FlightGear increasingly feasible, no matter whether this is about mobile gaming platforms, mobile phones (e.g. Android) or embedded hardware like a Raspberry Pi: without a unified 2D rendering back-end, every other subsystem doing 2D rendering would need to be ported manually (HUD, cockpit, instruments, GUI etc.):

"I'll get a Raspberry Pi soon, so I will probably try to also run the canvas there. I've just seen that the Raspberry Pi has hardware accelerated OpenVG support, so we won't need ShivaVG and the path rendering should be quite efficient."
— TheTom (Sat Feb 01)

"Right, not only is OpenVG natively supported in hardware, but there's even a vector font library available named "vgfont".

This OSG discussion may also be of interest for anybody pursuing this venture: http://forum.openscenegraph.org/viewtop ... &view=next
And specifically: https://code.google.com/p/osgrpi/"
— Hooray (Sat Feb 01)

A unified 2D rendering back-end using the Canvas system ensures instead that all Canvas-based features will remain functional, as long as the Canvas itself is able to run on the corresponding platform/hardware, because there's really just a single subsystem that handles all 2D rendering via different user-space wrappers, and that would need porting (e.g. to support OpenGL ES).

Also, GUI dialogs and instruments can make use of other Canvas-based features, e.g. for showing a GUI dialog on an instrument, or instruments in dialogs.

The property tree centric implementation approach of the Canvas also means that all Canvas-based frameworks could technically work in a standalone FGCanvas/FGPanel mode eventually, but also in multi-instance (master/slave) setups such as those common for FSWeekend/LinuxTag.

This is yet another novelty, because most existing hard-coded instruments cannot be easily modified to work in such multi-instance setups. The Canvas system, however, being based on the property tree, could retrieve property updates from external instances, e.g. via telnet/UDP or HLA, without requiring major re-architecting.

This also means that Canvas-based GUI dialogs could similarly be shown by a separate fgfs instance - for example, in order to provide an Instructor Station or to display a MapStructure-based moving map dialog/window.


[Diagram: property tree & canvas]


Frameworks

[Overview of the Canvas system and its scripting-space frameworks]

Obviously, the Canvas APIs themselves are not intended for specific end-user features like developing a PFD, ND or EICAS. Therefore, you will typically see wrappers implemented in scripting space for certain needs, i.e. Canvas frameworks intended to help with the development of certain types of instruments. Frameworks will usually use the Canvas scripting-space API directly, while providing a more concrete, use-case-specific API on top.



Internals

A Canvas is a conventional OSG texture that is allocated.

The main difference is that certain OSG parameters are exposed in the form of properties: setting a property with a certain name and type/value invokes the corresponding OSG machinery to update the texture internally.[1]


However, the Canvas image element already supports texture mapping, i.e. you can treat a raster image (including another Canvas) as the source of an image element and display only a portion of it: Howto:Using raster images and nested canvases#Texture Maps

Once you stop manipulating a Canvas in the tree (and especially its child elements), it's all native C++ code that is running - i.e. no Nasal or property overhead once the corresponding data structures are set up, but that only holds true until the next "update", at which point everything is removed, re-parsed and updated/re-drawn.[2]


For instance, for Rembrandt (buffer setup), that would require additional hooks, because things like the internal texture format are not currently configurable via "Canvas properties", i.e. it's a hard-coded thing - however, Rembrandt makes extensive use of different kinds of buffers and in-memory representations, probably for pretty much the same reasons that you have in mind regarding the first question you asked. I guess, to answer your first question, we would need to look at the way Rembrandt is setting up, and managing, its buffers and compare that to the standard Canvas FBO - but I really think that it's not doing anything fancy at all, because that would introduce hard-coded assumptions that may fail under certain circumstances. Basically, what you are suggesting would require some way to encode the internal representation using a configurable lookup. What is really taking place behind the scenes, is that the Canvas system is built on the old FGODGauge code, Tom ended up rewriting it from scratch basically, but it's still using the same mechanism that hard-coded "owner-drawn" (OD) gauges like the agradar/navdisplay were using. The allocation (OSG setup) can be found here: https://sourceforge.net/p/flightgear/simgear/ci/next/tree/simgear/canvas/ODGauge.cxx#l218

The internal representation/format is a hard-coded thing: https://sourceforge.net/p/flightgear/simgear/ci/next/tree/simgear/canvas/ODGauge.cxx#l255 And even the cameragroup stuff is using the same hard-coded assumption: https://sourceforge.net/p/flightgear/flightgear/ci/next/tree/src/Viewer/CameraGroup.cxx#l994 Rembrandt, and effects (simgear), are much more flexible (for now), e.g. see: https://sourceforge.net/p/flightgear/flightgear/ci/next/tree/src/Viewer/renderingpipeline.cxx#l164

https://sourceforge.net/p/flightgear/flightgear/ci/next/tree/src/Viewer/renderer.cxx#l769

[3]

Elements

[Overview of currently supported Canvas elements]

All new canvas elements need to implement the Canvas::Element interface (new elements can also subclass existing elements; e.g. see the implementation of the Map and Window ($FG_SRC/Canvas) elements). The canvas system currently supports the following primitives (see $SG_SRC/canvas/elements):

  • CanvasGroup - main element: for grouping (a group of arbitrary canvas primitives, including other groups)
  • CanvasText - for rendering text (mapped to osgText)
  • CanvasPath - for rendering vector graphics (mapped to OpenVG, currently also used to render SVGs into groups)
  • CanvasMap - for rendering maps (automatic projection of geographic coordinates to screen coordinates, subclass of group)
  • CanvasImage - for rendering raster images (mapped to osg::Image)
  • CanvasWindow - part of $FG_SRC/Canvas/Window.?xx; a subclass of CanvasImage, used to implement windows (as of 05/2014 also to be found in SimGear)


Canvas vs. Canvas Elements

Most end-user features can be decomposed into lower-level components that need to be available in order to implement the corresponding feature in user-space.

Thus, the canvas system is based on a handful of rendering modes, each supporting different primitives. Each of those modes is implemented as a so-called "Canvas Element": a named, property-tree-controlled subtree of a canvas texture that supports specific events and notifications.

According to the development philosophy outlined above, you obviously won't see new canvas elements that are highly use-case specific, such as a "night vision" or FLIR element. Instead, what is more likely to be supported are the lower-level building blocks that enable end users to create such features, i.e. by adding support for running custom effects/shaders and by rendering scenery views to canvas textures.
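The name-based element instantiation described above can be sketched as a factory registry in stdlib-only C++. The real mechanism is the `_child_factories` map populated in Group::staticInit(); all names below are illustrative:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Sketch of the element registry: each element type registers a constructor
// under its TYPE_NAME, and groups instantiate children by the node name
// found in the property tree. Names are made up for this illustration.
struct Element { virtual ~Element() = default; };
struct Text  : Element {};
struct Image : Element {};

using Factory = std::function<std::unique_ptr<Element>()>;
static std::map<std::string, Factory> child_factories;

template<class T>
void add(const std::string& type_name) {
  child_factories[type_name] = []{ return std::make_unique<T>(); };
}

std::unique_ptr<Element> createChild(const std::string& type_name) {
  auto it = child_factories.find(type_name);
  return it == child_factories.end() ? nullptr : it->second();
}
```

Creating a child with an unregistered type name simply fails, which is why a new element must be registered before it can show up in a property-tree subtree.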

Adding a new Element

You will want to add a new Canvas::Element whenever you want to add support for features which cannot be currently expressed easily (or efficiently) using existing means (i.e. via existing elements and scripting space frameworks). For example, this may involve projects requiring camera support, rendering scenery views to a texture, rendering 3D models to a texture or doing a complete moving map with terrain elevations/height maps (even though the latter could be implemented by sub-classing Canvas::Image to some degree).

Another good example for implementing new elements is rendering file formats like PDF, 3D models or ESRI shape files.

To add a new element, these are the main steps:

diff --git a/simgear/canvas/elements/CMakeLists.txt b/simgear/canvas/elements/CMakeLists.txt
index bd21c13..9fdd48d 100644
--- a/simgear/canvas/elements/CMakeLists.txt
+++ b/simgear/canvas/elements/CMakeLists.txt
@@ -1,6 +1,7 @@
 include (SimGearComponent)
 
 set(HEADERS
+  myElement.hxx
   CanvasElement.hxx
   CanvasGroup.hxx
   CanvasImage.hxx
@@ -14,6 +15,7 @@ set(DETAIL_HEADERS
 )
 
 set(SOURCES
+  myElement.cxx
   CanvasElement.cxx
   CanvasGroup.cxx
   CanvasImage.cxx
@@ -23,4 +25,4 @@ set(SOURCES
 )

Next, open the header file and add a new Element class:


#ifndef CANVAS_MYELEMENT_HXX_
#define CANVAS_MYELEMENT_HXX_

#include <simgear/props/propsfwd.hxx>

#include "CanvasElement.hxx"

namespace simgear
{
namespace canvas
{

  class myElement : public Element
  {
    public:
      static const std::string TYPE_NAME;
      static void staticInit();

      myElement( const CanvasWeakPtr& canvas,
                 const SGPropertyNode_ptr& node,
                 const Style& parent_style = Style(),
                 Element* parent = 0 );
      virtual ~myElement();

    protected:
      virtual void update(double dt);

    private:
      myElement(const myElement&) /* = delete */;
      myElement& operator=(const myElement&) /* = delete */;
  };

} // namespace canvas
} // namespace simgear

#endif /* CANVAS_MYELEMENT_HXX_ */

Next, add the source file implementing the new myElement class:

#include "myElement.hxx"

#include <simgear/debug/logstream.hxx>
#include <simgear/props/props.hxx>

namespace simgear
{
namespace canvas
{

  const std::string myElement::TYPE_NAME = "myelement";

  //----------------------------------------------------------------------------
  void myElement::staticInit()
  {
    if( isInit<myElement>() )
      return;

    // register Style setters/animated properties here
  }

  //----------------------------------------------------------------------------
  myElement::myElement( const CanvasWeakPtr& canvas,
                        const SGPropertyNode_ptr& node,
                        const Style& parent_style,
                        Element* parent ):
    Element(canvas, node, parent_style, parent)
  {
    SG_LOG(SG_GENERAL, SG_ALERT, "New Canvas::myElement element added!");
  }

  //----------------------------------------------------------------------------
  myElement::~myElement()
  {
    SG_LOG(SG_GENERAL, SG_ALERT, "Canvas::myElement element destroyed!");
  }

  //----------------------------------------------------------------------------
  void myElement::update(double dt)
  {
  }

} // namespace canvas
} // namespace simgear

Next, edit CanvasGroup.cxx to register your new element (each canvas has a top-level root group, so that's how elements show up), navigate to Group::staticInit() and add your new element type there (don't forget to add your new header):

diff --git a/simgear/canvas/elements/CanvasGroup.cxx b/simgear/canvas/elements/CanvasGroup.cxx
index 51523f4..24e19d3 100644
--- a/simgear/canvas/elements/CanvasGroup.cxx
+++ b/simgear/canvas/elements/CanvasGroup.cxx
@@ -21,6 +21,7 @@
 #include "CanvasMap.hxx"
 #include "CanvasPath.hxx"
 #include "CanvasText.hxx"
+#include "myElement.hxx"
 #include <simgear/canvas/CanvasEventVisitor.hxx>
 #include <simgear/canvas/MouseEvent.hxx>
 
@@ -60,6 +61,7 @@ namespace canvas
       return;
 
     add<Group>(_child_factories);
+    add<myElement>(_child_factories);
     add<Image>(_child_factories);
     add<Map  >(_child_factories);
     add<Path >(_child_factories);

Next, navigate to $FG_ROOT/Nasal/canvas/api.nas and extend the module to add support for your new element:


diff --git a/Nasal/canvas/api.nas b/Nasal/canvas/api.nas
index 85f336a..81c0fa0 100644
--- a/Nasal/canvas/api.nas
+++ b/Nasal/canvas/api.nas
@@ -314,6 +314,18 @@ var Element = {
   }
 };
 
+# myElement
+# ==============================================================================
+# Class for a custom element on a canvas
+#
+var myElement = {
+# public:
+  new: func(ghost)
+  {
+    return { parents: [myElement, Element.new(ghost)] };
+  },
+};
+
 # Group
 # ==============================================================================
 # Class for a group element on a canvas
@@ -958,7 +970,8 @@ Group._element_factories = {
   "map": Map.new,
   "text": Text.new,
   "path": Path.new,
-  "image": Image.new
+  "image": Image.new,
+  "myelement": myElement.new,
 };

Next, rebuild SG/FG and open the Nasal Console and run a simple demo to test your new element:

var CanvasApplication = {
  ##
  # constructor
  new: func(x=300, y=200) {
    var m = { parents: [CanvasApplication] };
    m.dlg = canvas.Window.new([x, y], "dialog");
    m.canvas = m.dlg.createCanvas().setColorBackground(1, 1, 1, 1);
    m.root = m.canvas.createGroup();

    ##
    # create an instance of the new element
    m.myElement = m.root.createChild("myelement");

    m.init();
    return m;
  }, # new

  init: func() {
    var filename = "Textures/Splash1.png";
    # create an image child for the texture
    var child = me.root.createChild("image")
                       .setFile(filename)
                       .setTranslation(25, 25)
                       .setSize(250, 250);
  }, # init

}; # end of CanvasApplication

var splash = CanvasApplication.new(x: 300, y: 300);

print("Script parsed");

You may also want to check out $FG_SRC/Scripting/NasalCanvas.?xx to learn more about exposing custom elements to scripting space via Nasal/CppBind. Next, you'll want to implement the update() method and the various notification methods supported by CanvasElement:

  • childAdded
  • childRemoved
  • childChanged
  • valueChanged

For event handling purposes, you'll also want to check out the following virtual CanvasElement methods:

  • accept()
  • ascend()
  • traverse()
  • handleEvent()
  • hitBound()
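As a rough stdlib-only model of what accept()/traverse() do: an event descends from the root group into every child that passes the hit test, giving each element on the path a chance to handle (or swallow) it. Everything below is an illustrative sketch, not the actual CanvasElement code:

```cpp
#include <string>
#include <vector>

// A mouse/click event descending through the element tree.
struct Event { std::string type; bool handled = false; };

struct Node {
  std::string name;
  std::vector<Node*> children;
  bool hit = false;                       // stands in for a real hit test
  std::vector<std::string>* log = nullptr;  // records the traversal order

  // accept(): visit this element if it was hit, then traverse into children.
  void accept(Event& ev) {
    if (!hit || ev.handled) return;
    if (log) log->push_back(name);  // a real element would handleEvent() here
    for (Node* c : children) c->accept(ev);
  }
};
```

An element that sets `ev.handled` stops further traversal, which is how an element can swallow an event before it reaches siblings deeper in the tree.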

Integrating OSG/OpenGL Code

[Custom hard-coded Canvas element demonstrating how to hook up existing OpenGL/OSG code with Canvas elements]

Once you have the basic boilerplate code in place, you can directly invoke pretty much arbitrary OpenGL/OSG code - for instance, the following snippet will render an osgText string to the Canvas element (added simply to the constructor here for clarity):

// (requires <osg/Geode>, <osg/Projection> and <osgText/Text>)
osg::Geode* geode = new osg::Geode();
osg::Projection* projectionMatrix = new osg::Projection;
projectionMatrix->setMatrix(osg::Matrix::ortho2D(0, 1024, 0, 768));

std::string timesFont("fonts/arial.ttf");

// turn lighting off for the text and disable depth testing to ensure it's always on top
osg::StateSet* stateset = geode->getOrCreateStateSet();
stateset->setMode(GL_LIGHTING, osg::StateAttribute::OFF);
stateset->setMode(GL_DEPTH_TEST, osg::StateAttribute::OFF);

osgText::Text* text = new osgText::Text;
geode->addDrawable(text);

text->setFont(timesFont);
osg::Vec3 position(200.0f, 350.0f, 0.0f);
text->setPosition(position);
text->setText("Some OpenGL/OSG Code ...");
text->setColor(osg::Vec4(1.0f, 0.0f, 0.0f, 1.0f));

// add the geode to the projection matrix
projectionMatrix->addChild(geode);

// add the projection matrix to the transform used by the Canvas element
_transform->addChild(projectionMatrix);

For testing purposes, you can use the following Nasal snippet (e.g. executed via the Nasal Console):

var element_name = 'myelement';
var window = canvas.Window.new([640,480],"dialog");
var myCanvas = window.createCanvas().set("background", canvas.style.getColor("bg_color"));
var root = myCanvas.createGroup();
var osgemap = root.createChild(element_name);

Discussed Enhancements

Note  The features described in the following section aren't currently supported or being worked on, but they've seen lots of community discussion over the years, so this serves as a rough overview. However, this doesn't necessarily mean that work on these features is in any way prioritized or even endorsed by fellow contributors; often enough, such discussions become outdated pretty quickly due to recent developments. So if in doubt, please do get in touch via the Canvas sub-forum before starting to work on anything related, to help coordinate things a little. Thank you!

AI/MP models

It appears as though it is not possible for Canvas to locate a texture that is in a multiplayer aircraft model; this has also been seen in the efforts to get Canvas displays working on the B777.[4] In SimGear's Canvas::update, it appears to be using the factories to find the element, and this means that it can't find the named OSG node, which suggests that it may only be looking in the ownship (which is a null model).[5]

Property I/O observations

Speaking for the Shuttle, that (performance problems/lag) has very little to do with canvas as such. There are 11 MDUs on the Shuttle flightdeck, and the way the Shuttle avionics works, they typically display close to a hundred values each, so that's of the order of ~1000 different parameters that need to be simulated, fetched and displayed _per update cycle_ (and yeah, most parameters you see are really simulated and not just unchanging text).[6]

Structured in a reasonable way (i.e. minimizing property I/O in update cycles, avoiding canvas pitfalls which trigger high performance needs etc.), canvas is pretty fast.[7]


The actual problem is property I/O - we can't read/write several hundred properties per frame without creating a bottleneck. So it's largely irrelevant how fast the Nasal code runs, whether it's parallel or whether it's Python-driven code running on the GPU - as long as property I/O speed doesn't change, performance will be stuck right there.[8]

We're talking 500 properties in addition to everything else that is happening (limit checks, thermal system updates, CWS queries, simulation of co-orbiting objects, simulation of sensor errors,...)[9]


For the MFDs... let's go through the numbers. We have 40 pages by now, each displaying on average something like 50 properties. That's 2000 getprop calls for the data provider to manage. At 3 Hz and 30 fps, that's 200 requests per frame. Now, of these 40, no more than 9 different ones can actually be on at any given time - so that's 450 getprop calls per frame if you do it without a data provider. Now, we're not updating them all at once, we're updating in a staggered fashion - user selectable, but per default just one display per frame - so that's 50 getprop calls per frame. So effectively you get an update rate of ~3 Hz and query only the properties you really need.[10]
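The arithmetic in the quote above is easy to reproduce. The following stdlib-only helper (an illustration, not FlightGear code) computes the per-frame property fetches for the strategies discussed:

```cpp
// Per-frame property fetches for a given MFD update strategy:
// pages refreshed, properties per page, desired update rate, render rate.
// Figures plugged in below are the ones from the quoted post.
int fetchesPerFrame(int pages, int propsPerPage, double updateHz, double fps) {
  return static_cast<int>(pages * propsPerPage * updateHz / fps);
}
```

Plugging in the quoted numbers: all 40 pages at 3 Hz on a 30 fps sim gives 200 fetches per frame; the 9 visible displays updated every frame give 450; staggering to one display per frame gives 50.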


The workload is certainly a function of the number of screens (canvas textures/FBOs), unless you have duplicate screens, in which case you can cut it down by re-using a canvas or using a data provider. Creating the property structure by simply copying it turned out to be the largest drag in setting up a canvas display.[11] If you have a page that displays 90 data values as text, you actually have to fetch all 90 of them. With 9 displays open, that's 810 properties to be fetched and then to be written so that canvas can display them. If you try that per frame, you'll quickly see why it doesn't work.[12] Of course there needs to be an information merging and representation stage, which the Shuttle doesn't have - but if you put this into the display code itself... see above. Fetching dozens of properties when all you need is four pre-computed ones is a bad idea.[13]

In an extreme case, the Shuttle needs to read (and canvas later write) some 800 properties for one screen processing cycle. Part of those trigger unit conversions, mappings to strings, etc. A small subset goes into translating, rotating and scaling line elements. Our experience is that property reading and writing is usually the most expensive part - with Advanced Weather, Thorsten did not manage, even with complex cloud setup calls squeezed into a single frame, to make even a dent in the framerate or latency (not for lack of trying), but property access does as soon as you reach ~1000 accesses per frame.[14]


500+ property updates (polling) would surely show up - especially given that a few years ago, that was pretty much the load caused by the whole simulator per frame. So it will be interesting to see if/how the complexity of these instruments adds up (or not). But all the sprintf/getprop-level overhead accumulating through update() loops invoked via timers would be straightforward to reduce significantly (or even eliminate) by extending CanvasElement/CanvasText to support labels in the form of sprintf format strings that are populated from a property node (sub-tree), which would mean zero Nasal overhead for those labels/nodes that can be expressed using static format strings and a fixed set of dynamic properties. All the polling could be prevented then, and updating would be moved to C++ space. We ended up using a similar approach when we noticed that drawing taxiway layers would create remarkable property overhead, so we troubleshot the whole thing, at which point TheTom added helpers to further reduce system/Nasal load.[15] Common coding constructs (such as the sprintf/getprop idiom) are put into a helper function, which can later be re-implemented/optimized without having to touch tons of files/functions.[16]

In the case of property-driven labels that are formatted using sprintf(), it would probably be easier to just introduce a helper function and delegate that to C++ code - as per the comments at [3].[17] It would be better to extend the Canvas system to directly support a new "property mode" using sprintf-style format strings that are assembled/updated in C++ space, i.e. without any Nasal overhead, which would benefit other efforts, too - including the PFD/ND efforts, re-implementing the HUD/2D panels on top of Canvas, and even pui2canvas.[18]

Since it is all about updating properties and updating a label/text element accordingly, we could dramatically reduce the Nasal overhead by allowing text to be specified using printf-style format strings that get their values from a sub-branch in the element's tree (one node for each %s, %d). That way, the whole thing could be processed in C++ space, and we would not need any Nasal for updating/building strings. If this could be supported, we could also provide two modes - polling and on-update - to ensure that there is no unnecessary C++ overhead. Complex dialogs with lots of dynamic labels could then be re-implemented much more easily, without having to register 5-10 callbacks (or timers/listeners) per metric, even though a timer-based update mode may also be useful for the C++ change. Note that this would also be useful for the PUI parser itself, because it already supports values provided by a property using printf-style formatting; there, it is limited to a single format string - with Canvas, we could support an arbitrary number of sub-nodes that are updated as needed. Ultimately, that would also help with the HUD/2D panels work, because taking values from properties and updating them using sprintf-style code is extremely common there, too - and we could avoid tons of Nasal overhead that way.
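The "property mode" described above could look roughly like the following sketch. Note that this is a standalone illustration, not actual SimGear code: the `format_label()` helper and its substitution behavior are assumptions about how a C++-side CanvasText extension might rebuild a label from a format string plus one source property per placeholder, without any Nasal involvement.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch: a C++-side label formatter for a Canvas text element.
// The element would hold a printf-style format string plus one "source"
// property per placeholder; on a change notification it rebuilds the text
// in C++ space instead of running a Nasal timer/listener callback.
// The function name and semantics are illustrative, not existing Canvas API.
std::string format_label(const std::string& fmt,
                         const std::vector<std::string>& values) {
    std::string out;
    std::size_t vi = 0;
    for (std::size_t i = 0; i < fmt.size(); ++i) {
        if (fmt[i] == '%' && i + 1 < fmt.size()) {
            if (fmt[i + 1] == '%') { out += '%'; ++i; continue; }
            // Consume one conversion (%s, %d, ...) and substitute the next
            // property value verbatim; a real implementation would honor
            // the conversion type, width and precision flags.
            if (vi < values.size()) out += values[vi++];
            ++i;
            continue;
        }
        out += fmt[i];
    }
    return out;
}
```

In a real element, the `values` vector would be populated from the sub-branch of the element's property tree (one node per placeholder), with either polling or listener-driven ("on-update") refresh as discussed above.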

[19]

References
  1. Hooray  (Nov 5th, 2016).  Re: Dirty airplanes?? .
  2. Hooray  (Nov 5th, 2016).  Re: Dirty airplanes?? .
  3. Hooray  (Nov 5th, 2016).  Re: Dirty airplanes?? .
  4. Hyde  (Mar 15th, 2016).  Re: Dual control for Boeing 777 .
  5. Richard Harrison  (May 15th, 2016).  [Flightgear-devel] Canvas in dynamically loaded scene models .
  6. Thorsten Renk  (Jul 3rd, 2017).  Re: [Flightgear-devel] RFD: FlightGear and the changing state of air navigation .
  7. Thorsten Renk  (Jul 3rd, 2017).  Re: [Flightgear-devel] RFD: FlightGear and the changing state of air navigation .
  8. Thorsten  (Oct 9th, 2016).  Re: Nasal must go .
  9. Thorsten  (Oct 29th, 2016).  Re: Nasal must go .
  10. Thorsten  (Oct 28th, 2016).  Re: Nasal must go .
  11. Thorsten  (May 10th, 2016).  Re: Space Shuttle .
  12. Thorsten  (May 10th, 2016).  Re: Space Shuttle .
  13. Thorsten  (May 10th, 2016).  Re: Space Shuttle .
  14. Thorsten  (May 12th, 2016).  Re: Space Shuttle .
  15. Hooray  (Dec 19th, 2015).  Re: Space Shuttle .
  16. Hooray  (Dec 19th, 2015).  Re: Space Shuttle .
  17. Hooray  (Dec 19th, 2015).  Re: Space Shuttle .
  18. Hooray  (Nov 30th, 2015).  Re: Space Shuttle .
  19. Hooray  (Nov 30th, 2015).  Canvas performance (property overhead) (pm/space shuttle) .

Instancing

Cquote1.png once we've parsed an SVG, is there a way to re-use the structure? This would have to be copy by value rather than passing a pointer, because we'd want these structures to be independently controllable for each MFD, so we need multiple instances of the corresponding canvas elements.
— Thorsten (Apr 25th, 2016). Re: Best way to learn Canvas?.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png The issue there (at least based on the NavDisplay) is that there's quite high variance in the symbols, e.g. colour changes. For the ND I keep the symbols at greyscale, and colour them based on parameter data (active vs tuned vs inactive for navaids, for example)
— zakalawe (Sep 25th, 2012). Re: Using a canvas map in the GUI.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png For efficiency reasons it would be good to draw all symbols to a single canvas/texture and put all quads into a single node. So probably I'll add a new element type for putting quads into a single element which are all rendered at once. Maybe we can even use a geometry shader to just copy the positions to the GPU and generate the full quads with the shader. Ideas and suggestions are always welcome
— TheTom (Sep 24th, 2012). Re: Using a canvas map in the GUI.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I suspect that without some C++ level sharing of instanced elements, large numbers of complex (SVG path) symbols + DATA will be an issue. Of course based on my experience at FSWeekend last year, this is an issue in the real systems too - eg show WPT at > 80nm range! Tom has mentioned a sprite cache for instancing, that would work great as a solution but I don't know if he or anyone else has worked on it.
— zakalawe (Oct 15th, 2013). Re: Dynamic duplication of elements.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I will add a new element type to render symbols from a "Cache-Texture" to improve speed of canvasses showing lots of symbols like eg. the navigation display. You will basically be able to set position (maybe rotation) and index of the symbol in the cache-texture and possibly a color for each instance...
— TheTom (Nov 12th, 2013). Re: How to display Airport Chart?.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png just wondering if it is possible to duplicate a SVG element in Nasal? I'd like to draw a symbol only once in the SVG and then place multiple instances of it on my display. They need to be unique elements, as I'd like to animate them independently. I've looked for "duplicate" or "copy" in the API, but didn't find anything...
— Gijs (Oct 15th, 2013). Dynamic duplication of elements.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png way to easily instantiate a symbol/geometry/group multiple times, in a cached fashion, without eating up unnecessary memory for multiple independently-animated instances of a symbol
— Hooray (Oct 15th, 2013). Re: Dynamic duplication of elements.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I'm currently not sure if we can share the canvas elements across displays, so I've made copies of everything for each display.
— Richard (Dec 19th, 2015). Re: Space Shuttle.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png You are right, that would help reduce the OSG-level workload, i.e. scene graph-level instancing. But for the time being, Canvas does not support anything like that.
— Hooray (Dec 19th, 2015). Re: Space Shuttle.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png It's also lead me onto wonder if instancing could be generally useful (as we have a quite a lot of items in the scenery that are the same model); but to be honest I've not really got enough of a clue how the culling would work.
Cquote2.png
Cquote1.png this is one of the most common feature requests related to Canvas
— Hooray (Dec 19th, 2015). Re: Canvas::Element Instancing at the OSG level.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png For GUI widgets, but also MFDs, as well as 2D panels, it would be good to implement "instancing" support by mapping the corresponding OSG APIs to Canvas::Element attributes (properties), so that certain element parts can be shared, i.e. a typical dialog/MFD may have tons of identical elements (OSG scene graph elements), that could use a common osg::group and/or osg::StateSet internally, which should help significantly reduce the osg overhead.

We could mark some properties as "invariant" (static) and map OSG's support for deep/shallow cloning to helpers, so that an existing element can be used as the "template" for other/similar elements (think GUI buttons, labels, but also MFD elements). Internally, OSG can even build a texture atlas for recurring textures to reduce unnecessary osg::StateSet changes.

We could also expose a method to "finalize" a Canvas Element, at which point its osg data variance may be changed to be STATIC instead of DYNAMIC - this could for example apply to background images and other static overlays.
— Hooray (Nov 23rd, 2015). Canvas::Element Instancing at the OSG level.
(powered by Instant-Cquotes)
Cquote2.png
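The shallow-vs-deep cloning distinction the quote refers to (OSG's `osg::clone` with `CopyOp::SHALLOW_COPY` vs `DEEP_COPY_ALL`) can be illustrated with a toy node tree. This is purely conceptual - the `Node` struct below is not `osg::Node` and none of this is existing Canvas API - but it shows why a "template" element could share static children cheaply while duplicating the parts that must be animated independently.

```cpp
#include <memory>
#include <string>
#include <vector>

// Toy stand-in for a scene-graph node; illustrative only.
struct Node {
    std::string name;
    std::vector<std::shared_ptr<Node>> children;
};

// Shallow clone: the copy shares the original's children (cheap, but a
// mutation of a shared child becomes visible through both parents).
std::shared_ptr<Node> shallow_clone(const Node& src) {
    return std::make_shared<Node>(src);
}

// Deep clone: every child is duplicated recursively, so instances can be
// animated independently - at the cost of memory, which is exactly the
// trade-off behind marking some properties "invariant" (static).
std::shared_ptr<Node> deep_clone(const Node& src) {
    auto copy = std::make_shared<Node>();
    copy->name = src.name;
    for (const auto& c : src.children)
        copy->children.push_back(deep_clone(*c));
    return copy;
}
```

An instancing-aware Canvas::Element would presumably pick between the two per sub-tree, based on which properties are marked invariant.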
Cquote1.png as far as I am aware, there really isn't any "better" or more "efficient" way to do this. The thing is, Canvas still does not have any way of "instancing" support for OSG-level data structures - this would be the main thing that would help with reducing the rendering related workload. Aside from that, it's the update/animation logic that is typically implemented via Nasal timers/listeners that will add up, and show up over time.
— Hooray (Feb 21st, 2016). Re: Space Shuttle.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png The main reason for doing that is to ensure that you can easily adopt more native primitives if/when they become available - for instance, the lack of a dedicated animation-handling element at the Canvas::Element level is one of the most obvious issues, because it links rendering related OSG code to Nasal space callbacks that are running within the FlightGear main loop.

And one of the most logical optimizations would be to look up suitable OSG-level data structures and expose those as Canvas::Elements that we can then reuse to implement such animations/updates without necessarily going through Nasal space - there are quite a few osg classes that could help with that, some of which we are currently re-implementing via Nasal to animate PFD/ND logic.

Looking specifically at some of the most complex Canvas-based avionics we have in FlightGear, things like Avidyne Entegra R9 will be difficult to update easily once such a dedicated element becomes available - but people can easily make that possible by using a single helper function/class that handles the update/animation semantics, and which isolates the remaining code from any internals - so that things like an animated bar can be easily delegated to OSG/C++ code as soon as the corresponding OSG classes are mapped to a dedicated Canvas element: Canvas Sandbox#CanvasAnimation
— Hooray (Feb 21st, 2016). Re: Space Shuttle.
(powered by Instant-Cquotes)
Cquote2.png


Canvas-based Splash Screens

Note that in this system it would be important that the screenshots are *just* screenshots - no border, badges or other info, as some splash screens currently have. The idea is to add this dynamically using OSG from the metadata, so that it can be restyled as needed.[1]


Cquote1.png We've been discussing phasing out the hard-coded splash screen and replacing it with a Nasal/Canvas wrapper showing a decoration-less window with CanvasImage and CanvasText elements - the splash screen itself can already be disabled via a property, so showing a corresponding Canvas window should be fairly straightforward.

For details, see:

This isn't currently prioritized - but it's fairly straightforward, and will help us with unifying the 2D rendering back-end via Canvas, so it's going to happen sooner or later - and any experimental code should help us prototype this, so that we can get rid of the corresponding C++ code.


— Hooray (Mon Dec 08). Re: How does serviceable and failures work?.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png The naming scheme is why the first run of a new stable release of FG always does a cache rebuild - it caused some complaints, since the feedback on cache rebuilds is not great (we indicate activity but not progress through the data). It does mean you can run stable and dev versions side by side without continual cache rebuilds, of course.

— James Turner (2014-11-13). Re: [Flightgear-devel] Random oddities.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I have a plane and when you start it up I would like it to pick a custom splash screen, but instead of having the same splash screen every time it would pick one at random and display it. So someone could make multiple splash screens for a plane and the nasal script would randomly pick one of them. I was thinking it could be a custom nasal script right in the -set file or it would call up a .nas file. I don't see why this couldn't be done using nasal, but I don't know how to write the code. Anybody want to give it a shot?
— jonbourg (Thu Jul 10). Use Nasal to Randomize Splash Screens.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png this keeps coming up - we've had some aircraft developers who wanted to show random splash screens, multiple images (rotating) or who'd like to see some kind of "hall of fame" shown while booting.


The existing scheme is hard-coded and cannot be scripted. The only kind of flexibility it provides is that it can randomly select splash screens for aircraft that don't have a corresponding entry set. Otherwise, even splash screen updates are hard-coded, despite being property-based.


— Hooray (Thu Jul 10). Re: Use Nasal to Randomize Splash Screens.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png One possibility would be to have a group of splash images in a directory and then have a nasal script that would copy images from that directory in some kind of random or rotating order over the file that is used for the splash screen after each start up of the sim. This would change the splash screen every time the software was restarted. So in pseudo code it might look something like this:


Have the potential splash screens images in a special directory like Aircraft/<my aircraft>/SplashImages
This would need to be image files of the same type ONLY to keep things simple.

Listen for initialization to finish.


— hvengel (Thu Jul 10). Re: Use Nasal to Randomize Splash Screens.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png this is the kind of thing that is trivial to do in Canvas using 5 lines of code, and changing an image is just the equivalent of a .set/setprop() call basically. So I wouldn't spend too much time working around this limitation - as can be seen, making the splash screen 100% dynamic at run-time isn't exactly rocket science using Canvas. The really hard work has already been done by TheTom. Thus, disabling the hard-coded splash screen and showing a GUI dialog without window decoration, would give aircraft developers all the flexibility they need - they could even fetch stuff via http easily
— Hooray (Thu Jul 10). Re: Use Nasal to Randomize Splash Screens.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png What I want as an aircraft dev, and perhaps this is true of others as well, is a standardized way to do this in the *-set.xml, like: 1. Set either a splash screen image (just like the current set up, for backwards compatibility, but perhaps discourage this so that it goes away at some point) or a splash screen image directory in the *-set.xml file. If this is set to use a single image, it works just like it does now. If this is set to a splash screen image directory, it rotates through the images in that directory. So it is one line of XML, like the current setup, that allows for rotating splash screen images. The aircraft devs shouldn't have to do more than that, even if the alternative is only 5 lines of nasal code. IMO those 5 lines of canvas/Nasal code should be a standard part of the FG start up and the functionality should just be there and be a one liner to use.
— hvengel (Thu Jul 10). Re: Use Nasal to Randomize Splash Screens.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I think the best way to go WRT configurability is to use the same node (/sim/startup/splash-texture) for either a single file, a single directory, or a single URL (HTTP), and then allow several of these nodes, from which a list of images would be drawn up and a random one chosen. That way it's simple and backwards-compatible yet still supporting the features you want. This would require less than 60 LOC to support from Nasal (including some other features) and could be loaded ASAP with early Nasal.
— Philosopher (Thu Jul 10). Re: Use Nasal to Randomize Splash Screens.
(powered by Instant-Cquotes)
Cquote2.png
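The selection scheme Philosopher describes - flatten every `/sim/startup/splash-texture` entry (file, directory or URL) into one candidate list and pick one at random - could be sketched as follows. This is an illustration, not the actual implementation; property access and directory expansion are stubbed out, and only the selection logic is shown.

```cpp
#include <random>
#include <string>
#include <vector>

// Sketch: pick one splash image from the flattened candidate list built
// from all /sim/startup/splash-texture nodes. The function signature is a
// hypothetical helper, not an existing FlightGear API; a real version
// would expand directories and URLs into individual image entries first.
std::string pick_splash(const std::vector<std::string>& candidates,
                        unsigned seed) {
    if (candidates.empty())
        return "";  // fall back to the built-in/default splash behavior
    std::mt19937 rng(seed);
    std::uniform_int_distribution<std::size_t> pick(0, candidates.size() - 1);
    return candidates[pick(rng)];
}
```

Seeding from the clock (rather than a fixed seed) would give the "different splash every startup" behavior the aircraft developers are asking for, while a single-entry list degenerates to the current fixed-image scheme, keeping it backwards-compatible.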
Cquote1.png Implementing support for this should be under 50 lines of Nasal/Canvas code, and anybody wanting more elaborate schemes should obviously use the 3rd option (even though that doesn't mean that certain schemes couldn't also be registered). The flexibility provided by Nasal/Canvas is hard to beat here ... and it means we get to get rid of very old and ugly C++ code, too :D


I still find it amazing how often aircraft devs keep coming up with this - but technically, I agree that it would make sense to get rid of certain C++ code and make it more flexible


— Hooray (Thu Jul 10). Re: Use Nasal to Randomize Splash Screens.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I just always thought it would be neat if you could make a set of splash screens for a particular aircraft and it would simply pick one of them at startup. It's just one of those neat to have, but not necessary things.
— jonbourg (Thu Jul 10). Re: Use Nasal to Randomize Splash Screens.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png this (and much more!) will be trivial to do in a few months time using Nasal once the "reinit Nasal early" work has materialized/stabilized a little more - and it will not involve any C++ at all, like Philosopher said: it will probably be just ~30 lines of Nasal code looking for certain XML tags in your aircraft-set.xml file, and maybe an option to run your own Nasal code in case you want to do anything fancy.
— Hooray (Thu Jul 10). Re: Use Nasal to Randomize Splash Screens.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Regarding the initialization concerns, I would not worry about those too much - eventually, we will make Nasal available for other reasons, and that will inevitably include events (timers) and the property tree (listeners) - so this would already include most things required by the canvas to address a single texture and update it dynamically.


I think technically, it's just a new placement mode that we need here, and maybe supporting loading *.nas from $FG_ROOT/Textures/Splashs - i.e. we could simply recognize *.nas as an extension for the splash-texture tag in aircraft-set.xml tags and then directly attach a single canvas.


— Hooray (Fri Feb 07). Re: Splash Screens + Canvas.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I don't particularly like the idea of adding even more low level OpenGL code to the init code to implement such things, but once we can initialize Nasal earlier (which will probably use Philosopher's bootstrap.nas script, but which also depends on Zakalawe's reset/re-init work) we should be able to pull this off with less than 50 lines of Nasal/Canvas code.


I think a new splash screen "placement" would make a nice tutorial for the wiki


— Hooray (Sun Apr 27). Re: Splash Screens + Canvas.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I just had a look at the code ($FG_SRC/Viewer/splash.cxx) - it seems fairly straightforward to re-implement the existing functionality using ~50-80 lines of Nasal/Canvas code.


Currently, that's simply not a priority though - so you may still want to file a feature request (issue tracker) so that we don't forget about this.
Obviously, the existing feature works "well enough", and touching working code isn't exactly popular :D

Technically, it would even be a good thing to pursue - the splash screen code is not exactly compact and it handles several corner cases already.
So this could be greatly simplified by supporting Nasal & Canvas. So I may revisit this depending on progress in the FGCanvas/Nasal-initialization department, and spare time obviously.


— Hooray (Sat Jul 12). Re: Use Nasal to Randomize Splash Screens.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png it's actually trivial, we don't even have to touch the viewer/splash screen code at all: there's an existing option to disable the whole thing via --disable-splash-screen, which is currently defaulted to false - but we only need to edit options.cxx to default it to true, which will keep all the splash code disabled - so that we can directly use Nasal/Canvas to show a GUI dialog without decoration to render things, including randomly-selected splash screens. Amount of C++ changes: ~1 line - and then it's roughly 30 lines of Nasal code to come up with a scripted splash screen that will be loaded from $FG_ROOT/Boot/default.boot - and which could support the existing scheme, as well as a "random" selection scheme, but also a totally scripted Nasal/Canvas splash mode :D


Once this is in place, we can safely remove the hard-coded splash screen


— Hooray (Sat Jul 12). Re: Use Nasal to Randomize Splash Screens.
(powered by Instant-Cquotes)
Cquote2.png

Serializing a Canvas to SVG (brainstorming)

Note  For now this is just a brainstorming to explore possible ways to better integrate the recent mongoose/httpd work with Canvas-based efforts like Gijs' NavDisplay or PFD - i.e. the idea is to see if we can come up with a consistent framework that would allow a Canvas-based display/MFD (or any instrument) to be rendered in a browser, updated asynchronously via AJAX. Currently, the focus is on serializing an existing Canvas by iterating over all elements and turning each CanvasElement into its SVG equivalent (e.g. svg image, raster image or text). That alone would mean that we could serve a static image of the canvas, animations and updates would then be handled by a shim layer that is based on a safe subset of both, Nasal and JavaScript. The long-term idea is to allow MFDs like the NavDisplay to be served to, and viewed by, a browser.
Cquote1.png we've seen half a dozen of glass cockpit related efforts over the years - including stuff like OpenGC (early 2000s) and FGGC (mid 2000s), and quite a few others in the meantime.

At the end of the day, this always meant that we had competing, and even conflicting, technology stacks involved - where one technology (instrument/MFD) would not work within the other run-time environment. Canvas, coupled with HLA (or even just remote/telnet properties), has the potential to solve this once and for all.
— Hooray (Sun Jun 15). Re: Instruments for homecockpit panel..
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png ultimately, HTML5/Canvas + JavaScript isn't going to be hardware-accelerated in the form that FlightGear's Canvas system is. Then again, what you mentioned regarding HTML5/JavaScript support isn't all that far-fetched either - OSG can certainly already render WebKit views to a texture. So this is, once again, one of those cases where FlightGear turns out to be very much disorganized, with even conflicting solutions being worked on by different contributors - obviously, this isn't the first time, and it's also not going to be the last time something like this happens. I think we simply have to embrace the opportunity and see what prevails.
From my standpoint, having -yet again- different types of instruments that are specific to an external run-time environment is very much a maintenance nightmare.
— Hooray (Sun Jun 15). Re: Instruments for homecockpit panel..
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png meanwhile, Canvas is our most solid and most unified approach to tackle those challenges, without them being specific to an external run-time environment, while still providing all the theoretical benefits, plus quite a few more.
— Hooray (Sun Jun 15). Re: Instruments for homecockpit panel..
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png The main advantage of standalone applications using HTML/CSS/JS/AJAX is that they run on almost every browser without the need for installation of software. My prototype PFD runs on iOS, Android, Windows, Linux, OSX by just punching a URL into the browser's address field.
— Torsten (Mon Jun 16). Re: Instruments for homecockpit panel..
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Being able to run such things in any supported browser is obviously pretty cool, and often, people don't really need full hardware acceleration. The thing that I find a little difficult is the lack of consistency - obviously, glass/MFD support is something that's been lacking in FG pretty much since the very beginning.

And I find the idea very compelling not to have -yet again- different "code bases" implementing semantically equivalent instruments/functionality, like a PFD and/or ND, EICAS/EFIS or EFB displays.

Thus, I am really interested in supporting efforts like the work that Gijs and Hyde have done with the NavDisplay: It supports multiple instances per aircraft, and can easily be integrated with other aircraft - and it is even prepared to support different styling, and different types of NDs. The way the NavDisplay/MapStructure efforts have evolved meanwhile, the work originally written by a single guy (Gijs) can now be used in a ton of places ... these displays are no longer aircraft- or instrument-specific, and can directly be used in any FlightGear dialog, PUI or Canvas.

The other issue with a purely "W3C-based" approach is that we'd need to become fairly creative when it comes to supporting modern avionics/MFD features, such as for example tail view cameras or even providing full GUI support along the lines of ARINC 661. Canvas has become sort of a technology enabler; it is not so much about end-user features, but it's become a platform that ties together features that were previously implemented in a very inconsistent fashion, one that also didn't exactly improve our degree of OpenGL compatibility due to a ton of legacy code all over the place, often not even using OSG StateSets etc.
— Hooray (Mon Jun 16). Re: Instruments for homecockpit panel..
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Who knows, maybe there's even a way that we can find a compromise to optionally integrate both worlds to /some/ degree - i.e. we could serve Canvas-based textures as PNGs to a browser and actually let users decide on which side they want to use "native" FlightGear solutions, and where they'd prefer to use W3C options instead.

Obviously, JavaScript is in many ways superior to Nasal, and the way Nasal is integrated in FG, we cannot easily write async code either.

Being able to stream Canvas images/video to an external browser/viewer (via a worker thread) would also allow us to support a variety of other interesting use-cases, such as UAV stuff, OpenCV post-processing etc. The only thing that's missing to pull this off is a new placement type that exposes a canvas as either an osg::Image buffer that is serialized to a browser-format like PNG, or to some video stream. At that point, a browser could -in theory- even render live FG camera views streamed via UDP to implement a browser-based instructor console that can view individual Canvas MFDs/instruments, but even scenery views.

This kind of stuff has been discussed a number of times, and even Curt & Tim agreed (in the pre-canvas days) that this would be cool to support at some point: http://wiki.flightgear.org/Canvas_Devel ... ter_Vision
— Hooray (Mon Jun 16). Re: Instruments for homecockpit panel..
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png We tend to have a lot of screen shots of PUI dialogs that are heavily annotated for use in our docs (manual/wiki), for example see:

Dialog CheckList.png
HUD default overview.png

Obviously, such screen shots "break" whenever the dialogs are updated, i.e. need to be manually re-created, and annotated again.

If we could extend the Canvas system to get an osg::Image for any canvas, we could easily create such screen shots procedurally and serialize them to disk. GUI dialogs could basically become self-documenting, because we could just annotate a canvas procedurally and write the image to a file that can be used in the manual/wiki


— Hooray (Sun Jun 22). Serializing a Canvas to an osg::Image.
(powered by Instant-Cquotes)
Cquote2.png

Canvas works mainly in terms of 1) OpenVG paths, and 2) raster images - most other elements are built on top of these two primitives. In fact, we don't even have native SVG support, we are merely using a Nasal script named "svg.nas" to turn SVG markup into OpenVG paths.

In other words, we could probably serialize a "live" canvas into a SVG image that merely references external files that are served via mongoose for each non-static element/group of the canvas, those would be either SVG files or raster images that would need to be internally serialized, sent to the browser and updated on demand.

Animation is a different thing obviously, but we were once wondering if we should come up with a "safe" subset of JavaScript that would be valid Nasal and vice versa - such a "subset" library could be used to animate instruments.

Basically, that would mean that we could combine both worlds to arbitrary degrees, and e.g. display MFDs like Gijs' ND or PFD in a browser that simply fetches a SVG from mongoose, which is a serialized canvas, broken up into 1) OpenVG paths and 2) raster images. To update individual elements selectively, we'd need to use your listener notifications or some other pub/sub mechanism. Something like that should be far more efficient than streaming the final texture, and it would allow us to reuse existing stuff, without necessarily asking people to re-invent instruments from scratch just because they want to use a different technology (Canvas vs. W3C).

Except for the "map" element supported by canvas (which directly projects symbols according to lat/lon), most things could be mapped onto SVG directly, i.e. referencing external SVGs and raster images via the <image> tag. If that is something that you find interesting, I am sure that I could help restructure the Canvas/MapStructure side of things to serve a SVG for a canvas - even event handling could be supported that way.

Another option might be generalizing the Nasal framework to be also valid JavaScript so that people could use a single framework that just animates SVGs and raster images via timers and listeners, so that both methods could benefit from each other in the long-term, because people could easily reuse stuff.

The main challenge being how to allow MapStructure/MFD stuff to be serialized as a canvas that consists of <image> entries for each group/element that either refers to another OpenVG group/SVG or a raster image. I think we could use a fairly thin Nasal/JavaScript subset as a shim layer to selectively update such SVGs even in the browser. I would probably need to restructure MapStructure to make better use of caching so that semi-static content is served as raster images. But otherwise it seems feasible to serialize a canvas to a SVG file that links to localhost:/canvas[x]/by-file/filename.png or filename.svg

We're already doing the opposite in svg.nas. It's not important at the moment, I'd just like to explore ways to unify both worlds at least to /some/ degree.

Thinking about it, a canvas is already a "tree" thanks to the property tree, i.e. very much analogous to an SVG DOM, so I don't think that the idea of representing/serializing a canvas as an SVG that uses special URLs to address certain elements is all that far-fetched.
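The serialization idea above can be sketched with a toy element tree. This is a standalone illustration under stated assumptions: the `Element` struct is not the real Canvas element hierarchy, and the localhost URL scheme for served raster layers is the hypothetical addressing discussed above, not an existing feature.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Toy canvas-like element; the real Canvas has group/map/text/image/path
// elements driven by the property tree. Illustrative only.
struct Element {
    std::string type;  // "group", "image" or "text"
    std::string data;  // href for images, content for text
    std::vector<Element> children;
};

// Walk the tree and emit SVG markup: groups become <g>, raster layers
// become <image> tags referencing URLs that would be served by the
// built-in httpd, text elements become <text>.
void to_svg(const Element& e, std::ostringstream& out) {
    if (e.type == "group") {
        out << "<g>";
        for (const auto& c : e.children) to_svg(c, out);
        out << "</g>";
    } else if (e.type == "image") {
        out << "<image href=\"" << e.data << "\"/>";
    } else if (e.type == "text") {
        out << "<text>" << e.data << "</text>";
    }
}

std::string serialize(const Element& root) {
    std::ostringstream out;
    out << "<svg xmlns=\"http://www.w3.org/2000/svg\">";
    to_svg(root, out);
    out << "</svg>";
    return out.str();
}
```

A browser fetching such a document would then only need the pub/sub shim discussed above to selectively refresh the referenced `<image>` layers, rather than re-streaming the whole texture.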

Supporting Cameras

Note  There is a related patch available at https://forum.flightgear.org/viewtopic.php?p=317448#p317448
OSG slave camera rendered to a TextureRectangle [2]
Note  People interested in working on this may want to check out the following pointers:

Given how FlightGear has evolved over time, not just regarding effects/shaders, but also complementary efforts like deferred rendering (via rembrandt), we'll probably see cameras (and maybe individual rendering stages) exposed as Canvases, so that there's a well-defined interface for hooking up custom effects/shaders to each stage in the pipeline - Zan's newcamera work demonstrates just how much flexibility can be accomplished this way, basically schemes like Rembrandt could then be entirely maintained in XML/effects and shader (fgdata) space. And even the fgviewer code base could be significantly unified by just working in terms of canvases that deal with camera views, which also simplifies serialization for HLA.


Background:

Also see The FlightGear Rendering Pipeline

Cquote1.png if we ever get multi-view capability, or even use a crude method of simply making the FLIR display "full screen" and hence don't need multi-view capability (that's how FLIR in ArmA 2 actually works, since it didn't have picture-in-picture capability). Now that I think about it, a similar approach could be used to create a cheap method of a terrain avoidance radar display.
— clipper996 (Mar 16th, 2016). Re: ALS night vision (and others).
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Taxi Camera on navigation display (as seen on FSX and X-Plane)
— CaptainTech (Jan 24th, 2016). Thanks.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I have to create two master cameras to control the two different views in two different scenes, rendered dynamically.

But FlightGear only uses the Viewer class, which creates one master camera and a number of slave cameras. I need the CompositeViewer class instead - how can the CompositeViewer class be used in FlightGear, and how do I render through it?


Cquote2.png
Cquote1.png any idea how long we have to wait for this to be added to canvas (with custom render options)?
— www2 (Oct 16th, 2015). Re: WINDOW IN WINDOW.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I think we will soon need to add this to canvas for cameras.
— www2 (Oct 24th, 2015). Re: WINDOW IN WINDOW.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Anyway, I'm talking about rendering (terrain) camera view to texture using od_gauge. I know you can get terrain camera view and place it on the screen. Alright, it doesn't even have to be terrain, just normal camera view. It's rendered to screen every frame. The same way, can't we use the od_gauge instrument to render the view to texture? I just need some good info/doc on how we can do it.
— Merlion Aerosuperb (2012-03-21). [Flightgear-devel] Rendering Terrain Camera View to Texture.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Back when the whole Canvas idea was originally discussed, none of the people involved in that discussion stepped up to actually prototype, let alone implement, the system - so it took a few years until the idea took shape, and the developer who prototyped and designed the system went quite a bit further than originally anticipated - but I think it's safe to say that not even Tom was foreseeing the increasing focus on GUI and MFD use-cases, as well as the increasing trend to use it for mapping/charting purposes.

So the original focus on 2D rendering is/was very valid, and the system is sufficiently flexible to allow it to be extended using custom elements for rendering camera/scenery views at some point. All the community support and momentum certainly is there, and I'm sure that TheTom will gladly review any contributions related to this.


Cquote2.png
Cquote1.png The external camera and closely related rear view mirror have been asked for very many times, and the consensus is that it is quite feasible. However, the problem is that nobody with the relevant skills has yet taken up the challenge. My understanding is that most (but not all) of the interfaces are already there.
— Alant (Wed Aug 27). Re: Gear view in cockpit computer.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I've seen a few screen shots from people working towards this independently, but those efforts seem to have stalled meanwhile.

I do know that omega95 and a few other aircraft developers are highly interested in supporting this feature - but as was said previously, it will involve C++ changes.
While I wouldn't necessarily say that we're lacking people with the skills to make this happen (we have at least half a dozen developers who know perfectly well how to do this), it's mainly a matter of different priorities for the time being. Airliner development has never been a priority for FlightGear core developers, even though some aircraft developers are suggesting otherwise; the truth is that the relatively high number of "airliners" in FlightGear is mainly because that's what many of our younger aircraft developers are interested in. Still, there are many core features/building blocks missing to fully develop modern cockpit displays - in FlightGear terms, Canvas is a fairly recent and even "novel" addition.


— Hooray (Sat Aug 30). Re: Gear view in cockpit computer.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png It is probably going to be added via Canvas eventually - not so much for airliners in particular, but to support a number of other use-cases, regardless of airliners - such as e.g. tail cams, but also FLIR views etc.

Please keep in mind that it took several years to materialize for the Canvas system itself - the original proposal got first discussed pre-2010, and it got prototyped by TheTom in early 2012. In other words, even good ideas may have a certain "shelf life" and may need to grow momentum/demand to be recognized as such :D
So nobody is saying that the idea to add this to Canvas would be bad - quite the opposite actually: We'd all agree that this would be a great feature to have and that FlightGear would improve significantly - but for the time being core development resources are allocated elsewhere, and external core contributors are also more interested in other aspects of the simulator.


— Hooray (Sat Aug 30). Re: Gear view in cockpit computer.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png hi

i need to create 2 window with different view.
for example window1 show cockpit view and window2 show tower view


— sgb110 (Sat Aug 16). create window.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png To support this kind of thing via Canvas, we'd need to adapt the existing view manager code and render a slave camera to a Canvas group - i.e. by turning the whole thing into a CanvasElement sooner or later. That would allow cameras to be specified according to the existing syntax/properties.


Fully-independent cameras would still not be supported though, because that would require the previously mentioned switch to OSG's CompositeViewer, so that each camera can have its own associated tile manager; meanwhile, the necessary steps to make this possible were completed by Zakalawe and Stuart as part of the PagedLOD work.

This is actually one of the longer-standing feature requests, one that even pre-dates Canvas in its current form - the original idea to implement this is detailed at Howto:Use_a_Camera_View_in_an_Instrument (particularly, see Zan's comments).


— Hooray (Tue Oct 21). Re: Gear view in cockpit computer.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I think, but I'm not really sure, that FlightGear does not support two different views even if you have two windows.
— ludomotico (Sat Aug 16). Re: create window.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png while we've had a number of discussions about possibly supporting camera views as Canvas elements, this isn't currently supported. At some point, this will probably be added, because it would simplify quite a bit of existing code (especially the view manager, and the way camera groups are set up) - however, the corresponding C++ code predates Canvas by many years, so it would involve a bit of work.


But we've had a number of aircraft developers who would also require this functionality for implementing mirrors and/or tailcam views rendered to instruments, or FLIR-type views. All of these would become possible once the view manager is refactored accordingly (see Canvas Development#Supporting Cameras).


— Hooray (Sat Aug 16). Re: create window.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png And then I have the biggest challenge in the aircraft, the Dynon Skyview SV-D1000 display, which has a PFD, a Terrain Map (not that hard as we already have code from the GPS), Engine Controls and the control center for the aircraft's TruTrak Autopilot System. The challenge here is synthetic vision. Which means I need to either be able to render 3D terrain view to texture, OR be able to create my own 3D terrain view with projection calculations.
— omega95 (Sun Mar 18). Re: Jabiru J-170 (DEVELOPMENT).
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png We're waiting for the Canvas Properties 2D drawing API and Camera View so we can create the PFD.
— omega95 (Sat Jun 02). Re: Jabiru J-170 (DEVELOPMENT).
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I'm looking to replicate a camera with a fixed viewpoint from the aircraft. For example looking directly down. Is there a way I can use some scripting method to call a new window displayed in the bottom right hand side of the screen showing a fixed camera view, without having to edit the preferences for my machine? I'd like it to be easily distributable. [2]
— Avionyx
Cquote2.png
Cquote1.png I was wondering if it were possible to restrict the camera output to only one half of the running FG window? I'm hoping to do this so that I may have the map and route manager GUIs active in the other half, so that they aren't obscuring the camera view (and also have the entire HUD visible). So basically, half the window straight down the center - left half is just black, right half is the camera.

Although this would also be solved if there were an external FG dynamic navigational map program, that also displayed waypoints... (I don't think there is one, right?).

Additionally, I would love to hear that this question can be answered with Nasal, as I really can't afford to edit the source code and recompile (it's for a project, and I have no admin rights on the laboratory machines).[3]
— seabutler
Cquote2.png
Cquote1.png I'm trying to debug a reflection shader I'm working on. I have a camera attached to a scene graph, which pre-renders (osg::Camera::PRE_RENDER) the scene into an offscreen surface (osg::Camera::FRAME_BUFFER_OBJECT). For debugging purposes I have to see the result of that render pass. I'm not yet very familiar with FG's internal structure, so I'd like to ask - can this camera somehow be attached to FG camera views (v), or embedded as a separate window?[4]
— Vladimir Karmisin
Cquote2.png
Cquote1.png I want to give access to every stage of the rendering to the effect system. The geometry pass outputs to render target, but the fog, the lights, the bloom need to have access to the textures of the buffer, and there is a separate one for each camera associated to windows or sub windows. [5]
— Frederic Bouvier
Cquote2.png
Cquote1.png It would be nice if the Effects framework had a way to load arbitrary textures and make them available to effects. I don't know if there is a better way to create your texture offline than writing C++ code in simgear. OSG will read a TIFF file with 32 bits per component as a floating point texture... assuming you can create such a thing.[6]
— Tim Moore
Cquote2.png
Cquote1.png modify the Renderer class to separate from the scenegraph, terrain and models on one hand, the skydome and stars on the other, and finally the clouds. These three elements are passed to the CameraGroup class in order to be treated separately in the new rendering engine (and put together in the current one).[7]
— Frederic Bouvier
Cquote2.png
Cquote1.png I want to point out my work on my "newcameras" branch: https://gitorious.org/fg/zans-flightgear?p=fg:zans-flightgear.git;a=shortlog;h=refs/heads/newcameras which allows user to define the rendering pipeline in preferences.xml. It does not (yet?) have everything Rembrandt's pipeline needs, but most likely is easily enhanced to support those things.

Basically this version adds support for multiple camera passes, texture targets, texture formats, passing textures from one pass to another etc, while preserving the standard rendering line if user wants that.

I wish this work could be extended (or maybe even I can extend it ;) to handle the Rembrandt camera system. This will not solve all problems in the merge, but some of them.[8]
— Lauri Peltonen
Cquote2.png
Cquote1.png I was not aware of your work. But given what you write here, this looks pretty promising. Fred mentioned your name in an offline mail. I would highly appreciate that we do not lock out low-end graphics boards by not having any fallback. Maybe you both should combine forces? From what I read, I think both are heading in the same global direction and both implementations have some benefits over the other?[9]
— Mathias Fröhlich
Cquote2.png
Cquote1.png I would like to extend the format to avoid duplicating the stages when you have more than one viewport. What I see is to specify a pipeline as a template, with conditions like in effects, and have the current camera layout refer the pipeline that would be duplicated, resized and positioned for each declared viewport[10]
— Frederic Bouvier
Cquote2.png
Cquote1.png Mapping cameras to different windows, which can be opened on arbitrary screens, will absolutely still be supported. I know that multi-GPU setups are important for professional users and our demos.[11]
— Tim Moore
Cquote2.png
Cquote1.png I believe that we need to distinguish between different render-to-texture cameras. Camera nodes must be accessible from within flightgear: the ones that will end up in MFD displays or HUDs or whatever is pinned onto models, and the ones that are real application windows like what you describe - additional fly-by views, and so on. And I believe that we should keep these separate and not intermix the code required for application-level stuff with the building of 3D models that do not need any application-level code to animate the models ... I think of some kind of separation that will also be good if we do HLA between a viewer and an application computing physical models, or controlling an additional view hooking into a federate ...[12]
— Mathias Fröhlich
Cquote2.png


Cquote1.png I've done some work with setting up a model of a pan/tilt camera system that can point at a specific wgs84 point or along a specific NED vector (i.e. nadir, or exactly at my shadow, etc.) This was [unfortunately] for a paid consulting project so that code doesn't live in the FlightGear tree. However, it's really easy to configure a view that stays locked on a specific lon/lat and I hacked a small bit of nasal to copy the point you click on over into the view target variables so you can click anywhere in the scene and the pan/tilt camera will hold center on that exact location. FlightGear offers a lot of flexibility and capability in this arena.[13]
— Curtis Olson
Cquote2.png


Cquote1.png Would it be possible to place the new "view" into a window instead of having a dedicated view? That would allow you to have an instrument panel with a blank cut-out that could hold this newscam/FLIR window. The easiest way to visualize the idea I have is to think about the view you'd see in one of the rear-view mirrors that most fighters have along the canopy bow (and the Spitfire has mounted on top of the canopy bow, outside the cockpit). You'd see your full screen view as usual, but you'd also have these "mirrors" showing the view behind you at the same time.[14]
— Gene Buckle
Cquote2.png


Cquote1.png One thing we have to consider with rear view mirrors is that we don't currently have the ability to flip the display for the "mirror" effect. There's got to be a very simple view transform matrix that would invert the display in the horizontal direction - probably the identity matrix with the appropriate axis negated (-1). It might be a relatively simple thing to add to the view transform pipeline at some point.[15]
— Curtis Olson
Cquote2.png
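The transform Curtis describes can be sketched with a few lines of stdlib C++ (no OSG): negating one axis, i.e. multiplying by diag(-1, 1, 1, 1) in homogeneous coordinates, mirrors the image horizontally. The mirrorX helper name is illustrative only.

```cpp
#include <array>

// Sketch of the "identity matrix with one axis negated" mirror transform:
// applying scale(-1, 1, 1) after the view transform flips the rendered
// image horizontally, turning a rear-facing camera into a mirror view.
using Vec3 = std::array<double, 3>;

Vec3 mirrorX(const Vec3& v) {
    // equivalent to multiplying by diag(-1, 1, 1, 1) in homogeneous coords
    return { -v[0], v[1], v[2] };
}
```

In OSG terms this would presumably amount to prepending a negative scale to the camera's view matrix; note that a negative-determinant transform also reverses triangle winding, so the front-face convention would need to be flipped as well.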
Cquote1.png I had a look at this idea a while back - the problem I came across was that the camera would show the view to the rear, NOT the mirror image. I couldn't see a way around that without a great deal of processing. At that point I gave up.[16]
— Vivian Meazza
Cquote2.png
Cquote1.png Canvas is another excellent example for the dilemma that Rembrandt is facing: we've been wanting to support dedicated camera/sub-camera passes for years, and it was thanks to Zan's work that this became possible a while ago, but all of the main rendering guys back then (Mathias, Tim and Fred) were completely unaware of this work, despite agreeing that they wanted to see it integrated - so Rembrandt pre-dated Zan's work. And equally, some of the features implemented in other areas of FG would be better implemented by adapting Zan's code, which is something that Zan, Fred and Mathias agreed on back then. So the problem is not a lack of agreement; it is having someone to do the integration work.

— Hooray (Fri Oct 17). Re: Orbital Makes the Sky Black.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Given all the recent development, the most natural development to take place would be modularizing it by splitting it up and re-implementing it on top of Zan's newcamera work - at that point, the integration layer would be much more modular, and could even be integrated with Canvas, which is another natural development step, simply because all the stuff that people cannot currently do, would become possible, without being tied to a particular rendering framework or even scenery engine.
— Hooray (Fri Oct 17). Re: Orbital Makes the Sky Black.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png it's all the Rembrandt init/setup code that is hard-coded, and which used to contain a few hard-coded shaders - those are basically different rendering buffers that are chained together to set up a deferred rendering pipeline - this isn't done in a "plug & play" fashion currently - exposing this to XML/property tree space would be a huge undertaking probably - Zan's "newcamera" work really is the best match here - and RTT/buffer management is exactly what Canvas is already doing under the hood.

Thus, each RTT buffer could simply be a Canvas texture internally - so that all the hard-coded Rembrandt logic could be maintained more easily at some point.

It is indeed lack of consistency and integration that is the main challenge here - because all of these features were developed at a different point in time, and people were usually only interested in making one thing work, instead of unifying those solutions (effects + newcamera branch + Canvas). And it is indeed a lot of work to do this properly - a unified approach takes a lot of time and energy.


— Hooray (Sun Oct 19). Re: Orbital Makes the Sky Black.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Rembrandt pre-dates the whole reset/re-init effort by several years, and while SGSubsystem does provide the corresponding interfaces to be implemented by each subsystem to handle simulator resets, our rendering system isn't a conventional SGSubsystem - equally, all the CameraGroup stuff has become fairly massive meanwhile.


It is definitely possible to implement dynamic reset/re-init even for the renderer, including all buffers and windows/views - Zan's work still is the most promising effort in this department.

But that, too, predates the whole Canvas effort.

Like wlbragg said: we don't necessarily need a lot of dedicated C++ support code to implement alternate rendering schemes like Rembrandt; the main hooks required to support arbitrary -and fully dynamic- schemes are already in place.


— Hooray (Sun Oct 19). Re: Orbital Makes the Sky Black.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Internally, FG doesn't usually use the C/C++ APIs for GLSL directly, but uses the OSG abstraction layers instead (which are fairly well documented, even for people new to shaders).


The "obscure" parts of Rembrandt are not necessarily its effects or shaders, but the underlying C++ code which sets up all the buffers and sequencing. Once that is either documented or exposed, it is foreseeable that deferred rendering will be resurrected again, even if FredB should still not be around - such an effort would not need to involve Zan's work or Canvas, but it would be the most logical step for the time being, absent some other overlapping development effort.

But as has been said by a number of people already, doing all the integration work can be really tedious and frustrating.


— Hooray (Sun Oct 19). Re: Orbital Makes the Sky Black.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Internally, a Rembrandt buffer is not much different from any other RTT context - Canvas is all about rendering to a dynamic texture and updating it dynamically by modifying a sub-tree in the property tree - but its primary primitives are 1) osgText, 2) ShivaVG/OpenVG paths, 3) static raster images, 4) groups/maps - none of these would be particularly useful in this context. But Zan's newcamera work could be turned into a new "CanvasCamera" element to allow camera views to be rendered to a Canvas, including not just scenery views but also individual rendering stages. Canvas itself maintains an FBO for each texture, which is also the mechanism in use by Rembrandt. Tim's CameraGroup code is designed such that it does expose a bunch of windowing-related attributes to the property tree - equally, our view manager is property-controlled.
— Hooray (Sun Oct 19). Re: Orbital Makes the Sky Black.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Meanwhile, we ended up with Canvas as an abstraction mechanism for FBO management via properties - so integrating Canvas would indeed be a logical choice, unrelated to any particular manifestation like ALS or Rembrandt - integrating these technologies would primarily mean that new features could be prototyped without necessarily having to customize the hard-coded renderer logic - including things like our hard-coded skydome for example, which could be implemented in fgdata space then - which would not just be relevant for efforts like Earthview (orbital flight), but also make other things possible that would currently require a fair amount of tinkering with the C++ code.

— Hooray (Sun Oct 19). Re: Orbital Makes the Sky Black.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I am not talking about Rembrandt and/or ALS in particular here - I am just seeing the main challenge being the lack of accessibility when it comes to required structural changes to the C++ code - regardless of the concrete renderer - the lack of Rembrandt maintenance, and the slow response whenever ALS requires C++ level changes, is primarily because the corresponding renderer code is not being maintained actively - moving this into fgdata space via effects and shaders is a logical thing to do, and will allow people like Thorsten (or yourself) to make corresponding modifications without facing a core development bottleneck when it comes to Rembrandt/FGRenderer or any other $FG_SRC/Viewer modifications.


The CameraGroup.cxx file is basically begging to be refactored sooner or later. None of this needs to involve Canvas; it would just be a straightforward and generic approach to do so, but certainly not mandatory - Zan's original work was implemented directly using XML and the property tree - however, Canvas contains a few helpers that make this increasingly straightforward, requiring very little in terms of code (e.g. PropertyBasedElement as a container for subsystems implemented on top of the property tree).


— Hooray (Sun Oct 19). Re: Orbital Makes the Sky Black.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png As has been said previously, the proper way to support "cameras" via Canvas is using CompositeViewer, which does require re-architecting several parts of FG: CompositeViewer Support. Given the current state of things, that seems at least another 3-4 release cycles away. So, short of that, the only thing that we can currently support with reasonable effort is "slaved views" (as per $FG_ROOT/Docs/README.multiscreen). That would not require too much in terms of coding, because the code is already there - in fact, CameraGroup.cxx already contains an RTT/FBO (render-to-texture) implementation that renders slaved views to an offscreen context. This is also how Rembrandt buffers are set up behind the scenes. So basically, the code is there; it would need to be extracted/generalized and turned into a CanvasElement, and possibly integrated with the existing view manager code.
— Hooray (Oct 17th, 2015). Re: WINDOW IN WINDOW.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png And then, there also is Zan's newcameras branch, which exposes rendering stages (passes) to XML/property tree space, so that individual stages are made accessible to shaders/effects. Thus, most of the code is there; it is mainly a matter of integrating things, i.e. that would require someone able to build SG/FG from source, familiar with C++, and willing/able to work through some OSG tutorials/docs to make this work: Canvas Development#Supporting Cameras. On the other hand, Canvas is/was primarily about exposing 2D rendering to fgdata space, so that fgdata developers could develop and maintain 2D rendering related features without having to be core developers (core development being an obvious bottleneck, as well as having a significant barrier to entry). In other words, people would need to be convinced that they want to let Canvas evolve beyond the 2D use-case, i.e. by allowing effects/shaders per element, but also by letting cameras be created/controlled easily. Personally, I do believe that this is a worthwhile thing to aim for, as it would help unify (and simplify) most RTT/FBO handling in SG/FG, and make this available to people like Thorsten who have a track record of doing really fancy, unprecedented stuff with this flexibility. Equally, there are tons of use-cases where aircraft/scenery developers may want to set up custom cameras (A380 tail cam, space shuttle) and render those to an offscreen texture (e.g. a GUI dialog and/or MFD screen).
— Hooray (Oct 17th, 2015). Re: WINDOW IN WINDOW.
(powered by Instant-Cquotes)
Cquote2.png


Cquote1.png tail cams are slaved cameras, so could be using code that already exists in FG, which would need to be integrated with the Canvas system, to be exposed as a dedicated Canvas element (kinda like the view manager rendering everything to a texture/osg::Geode). There's window setup/handling code in CameraGroup.cxx which sets up these slaved views and renders the whole thing to an osg::TextureRectangle, which is pretty much what needs to be extracted and integrated with a new "CanvasCamera" element - the boilerplate for which can be seen at: [Canvas]. The whole RTT/FBO texture setup can be seen here: http://sourceforge.net/p/flightgear/flightgear/ci/next/tree/src/Viewer/CameraGroup.cxx#l994 That code would be redundant in the Canvas context, i.e. could be replaced by a Canvas FBO instead. The next step would then be wrapping the whole thing in a CanvasCamera and exposing the corresponding view parameters as properties (propertyObject) so that slaved cameras can be controlled via Canvas. Otherwise, there is very little else needed, because the CanvasMgr would handle updating the camera and render everything to the texture that you specified.
— Hooray (Oct 17th, 2015). Re: WINDOW IN WINDOW.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png As can be seen by the screen shots above, the code for doing this sort of stuff is readily available in FG. Obviously, someone interested in this would need to know how to patch/build FG from source, i.e. after making C++ modifications; some of this is touching OSG (cameras and offscreen rendering specifically). Otherwise, it's relatively straightforward: CameraGroup.cxx already contains code to render a static camera to a texture, which is stored in a TextureMap named _textureTargets - internally, this is used for building the distortion camera - however, you can also exploit this to render an arbitrary camera view to a texture. At the Canvas level, you would then have to call the equivalent of flightgear::CameraGroup::getDefault() - this would be done at the FGCanvasSystemAdapter level, i.e. adding a getter function there which returns the TextureRectangle map. Once you have a texture rectangle, you can also get the osg::Image for it, and that can be assigned to a Canvas image. Admittedly, that's a little brute force, but it should only require ~30 lines of code added to SG/FG to add a static camera view as a Canvas raster image. Ideally, something like this would be integrated with the existing view manager, i.e. using the same property names (via property objects), and then hooked up to CanvasImage, e.g. as a custom camera:// protocol (we already support canvas:// and http(s)://). So some kind of dedicated CanvasCamera element would make sense, possibly inheriting from CanvasImage. And it would also make sense to look at Zan's newcameras patches, because those add tons of features to CameraGroup.cxx. This would already allow arbitrary views slaved to the main view (camera). So as you can see, PagedLOD/CompositeViewer don't need to be involved to make this happen.
— Hooray (Oct 25th, 2015). Re: WINDOW IN WINDOW.
(powered by Instant-Cquotes)
Cquote2.png
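The camera:// idea from the quote above could be sketched as a simple source-scheme dispatch, analogous to how CanvasImage distinguishes canvas:// and http(s):// sources today. This is a stdlib-only illustration; the ImageSource enum, the classifySource helper and the camera:// scheme itself are assumptions, not existing code.

```cpp
#include <string>

// Hypothetical sketch: dispatch a Canvas image "src" property by URL
// scheme. canvas:// (recursive canvas) and http(s):// already exist;
// camera:// is the proposed extension for slaved camera views.
enum class ImageSource { File, CanvasRecursive, Http, Camera, Unknown };

ImageSource classifySource(const std::string& url) {
    auto starts = [&url](const char* p) { return url.rfind(p, 0) == 0; };
    if (starts("canvas://")) return ImageSource::CanvasRecursive;
    if (starts("http://") || starts("https://")) return ImageSource::Http;
    if (starts("camera://")) return ImageSource::Camera;  // hypothetical
    if (!url.empty())        return ImageSource::File;    // plain path
    return ImageSource::Unknown;
}
```

A CanvasCamera element inheriting from CanvasImage could then claim the camera:// case and bind the texture rendered by the slaved view, leaving the existing source types untouched.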
  1. James Turner (Nov 16th, 2016). [Flightgear-devel] Hangar thumbnails, screenshots, splash-screens.
  2. Avionyx (Wed Mar 12, 2014 7:08 am). Sub window view.
  3. seabutler (Fri Jan 24, 2014 5:38 am). "Half" the FG window?.
  4. Vladimir Karmisin (Thu, 08 Jan 2009 05:17:07 -0800). FG - camera for debugging purposes..
  5. Frederic Bouvier (Sun, 01 Jan 2012 07:14:43 -0800). Announcing Project Rembrandt.
  6. Tim Moore (Tue, 24 Jul 2012 22:38:35 -0700). Functions to textures?.
  7. Frederic Bouvier (Wed, 07 Mar 2012 05:08:06 -0800). RFC: changes to views and cameras.
  8. Lauri Peltonen (Wed, 07 Mar 2012 04:58:44 -0800). Rembrandt the plan.
  9. Mathias Fröhlich (Wed, 07 Mar 2012 10:15:31 -0800). Rembrandt the plan.
  10. Frederic Bouvier (Wed, 07 Mar 2012 05:08:06 -0800). RFC: changes to views and cameras.
  11. Tim Moore (30 Jun 2008 22:46:34 -0700). RFC: changes to views and cameras.
  12. Mathias Fröhlich (30 Jun 2008 22:46:34 -0700). RFC: changes to views and cameras.
  13. Curtis Olson (Tue, 15 May 2012 14:19:34 -0700). LiDAR simulation in FG and powerline scenery.
  14. Gene Buckle (Thu, 23 Jul 2009 10:11:05 -0700). view manager "look at" mode.
  15. Curtis Olson (Thu, 23 Jul 2009 10:11:05 -0700). view manager "look at" mode.
  16. Vivian Meazza (Thu, 23 Jul 2009 10:11:05 -0700). view manager "look at" mode.

Effects / Shaders

Note  When it comes to supporting effects and shaders, people generally have two use-cases in mind for Canvas:
  • using effects/shaders per Canvas (or ideally per element)
  • using Canvas textures in effects (i.e. registered as materials via a corresponding new 'materials placement')

extending Canvas to allow effects (or at least shaders) to be applied to a Window/Desktop should be easy. [1]

In mid 2016, a number of contributors discussed another workaround to use Canvas textures in conjunction with effects/shaders: simply by allowing an arbitrary Canvas to be registered as a material via SGMaterialLib, e.g. using an API of the form myCanvas.registerMaterial(name: "myCanvasMaterial");

Equally, materials would make it possible to easily use arbitrary effects and shaders per Canvas element, i.e. just by setting a few properties that are then processed by a Canvas::Element helper function:

Effect* effect = NULL;

// look up the (hypothetical) material entry registered for this canvas
SGMaterialCache* matcache = matlib->generateMatCache(b.get_center());
SGMaterial* mat = matcache->find("myCanvasMaterial");
delete matcache;

if (mat != NULL) {
    // apply the material's effect to the element's OSG state
    effect = mat->get_effect();
} else {
    SG_LOG(SG_TERRAIN, SG_ALERT, "Ack! unknown material name = myCanvasMaterial");
}
Cquote1.png Could canvas be used to take a view from a certain area in a certain direction and render it onto a fuselage--in other words, to create a reflection?
— MIG29pilot (Dec 29th, 2015). Using Canvas to create reflections.
(powered by Instant-Cquotes)
Cquote2.png

The effects system pre-dates Canvas by several years - meanwhile, it would be one of the more natural choices to optionally provide support for interfacing/integrating both, without this integration being specific to a single use-case (e.g. aircraft/liveries). We've got other useful work related to effects that never made it into git and that predates Canvas by several years - but when it comes to managing dynamically created textures, Canvas can probably be considered the common denominator, and it doesn't make much sense to add even more disparate features that cannot be used elsewhere.

Cquote1.png I'm currently experimenting with a 2D Canvas and rendering everything to a texture. For this I use FGODGauge to render to texture and FGODGauge::set_texture to replace a texture in the cockpit with the texture from the fbo. This works very well [...] I have just extended the ReplaceStaticTextureVisitor::apply(osg::Geode& node) method to also replace textures inside effects.

It works now by using the same technique as for the SGMaterialAnimation, where a group is placed in between the object whose texture should be changed and its parent. This group overrides the texture:
virtual void apply(osg::Geode& node)
{
  simgear::EffectGeode* eg =
    dynamic_cast<simgear::EffectGeode*>(&node);
  if( eg )
  {
    osg::StateSet* ss = eg->getEffect()->getDefaultStateSet();
    if( ss )
      changeStateSetTexture(ss);
  }
  else if( node.getStateSet() )
    changeStateSetTexture(node.getStateSet());

  int numDrawables = node.getNumDrawables();
  for (int i = 0; i < numDrawables; i++) {
    osg::Drawable* drawable = node.getDrawable(i);
    osg::StateSet* ss = drawable->getStateSet();
    if (ss)
      changeStateSetTexture(ss);
  }
  traverse(node);
}
stateSet->setTextureAttribute(0, _new_texture,
                              osg::StateAttribute::OVERRIDE);
stateSet->setTextureMode(0, GL_TEXTURE_2D, osg::StateAttribute::ON);
[2]
— Thomas Geymayer
Cquote2.png
Cquote1.png If you want to pass substantial amounts of data, I’d suggest to use a texture (with filtering disabled, probably) to pass the info. Since we don’t have much chance of using the ‘correct’ solution (UBOs) in the near future. If you need help generating a suitable texture on the CPU side, let me know.[3]
— James Turner
Cquote2.png
Cquote1.png I think getting large amount of data into a shader on a per frame basis may be a bit tricky. I could imagine using a texture but it will have to be copied to or updated in the graphics card memory for each frame which probably is fairly expensive. OTOH you'd get wakes and all if you succeed.
— AndersG (Fri Aug 08). Re: Export water/wave surface geometry.
(powered by Instant-Cquotes)
Cquote2.png
  1. https://sourceforge.net/p/flightgear/mailman/message/37608469/
  2. Thomas Geymayer (Tue, 01 May 2012 15:34:41 -0700). Replace texture with RTT.
  3. James Turner (2014-03-07 10:27:40). Passing arrays to a shader.

At some point, the canvas system itself could probably benefit from also being able to use the Effects/Shader framework, so that canvas textures can optionally be processed via effects and shaders before they get drawn. That would make all sorts of fancy effects possible, such as night vision or thermal view cameras, rendered to canvas textures/groups.

It is not yet clear how best to address this; the easiest option might be to specify via properties (a boolean flag, plus file names relative to $FG_ROOT) whether effects or vertex/fragment shaders shall be invoked.

That would then disable the default rendering pipeline for those canvas textures and use the specified shaders instead.

Basically, anything that is not directly possible via the core canvas system or via its Nasal wrappers would then be handled via effects/shaders, gaining a lot of flexibility as well as performance benefits.

For the time being, neither effects nor shaders are exposed/accessible to the Canvas system, so depending on what you have in mind, you may need to extend the underlying base class accordingly - a simple proof-of-concept to get you going would be this:

#include <osg/Shader>
....
osg::ref_ptr<osg::Program> shadeProg(new osg::Program);

// set up the vertex shader
osg::ref_ptr<osg::Shader> vertShader(
    osg::Shader::readShaderFile(osg::Shader::VERTEX, filename1));

// set up the fragment shader
osg::ref_ptr<osg::Shader> fragShader(
    osg::Shader::readShaderFile(osg::Shader::FRAGMENT, filename2));

// bind each shader to the program
shadeProg->addShader(vertShader.get());
shadeProg->addShader(fragShader.get());

// attach the shader program to the node
osg::ref_ptr<osg::StateSet> objSS = _transform->getOrCreateStateSet();
objSS->setAttribute(shadeProg.get());

To make things more configurable, you can expose things like the shader type and file name to the property tree by using the PropertyObject<> template, e.g.:

#include <simgear/props/propertyObject.hxx>
....
simgear::PropertyObject<std::string> vertex_filename(
    simgear::PropertyObject<std::string>::create(n, "shader.vert"));
simgear::PropertyObject<std::string> fragment_filename(
    simgear::PropertyObject<std::string>::create(n, "shader.frag"));

For additional details, refer to Howto:Use Property Tree Objects.

Ideally, there could be a simple interface class so that these things can be customized via listeners, analogous to the property-observer helper, but specific to enabling shaders for a canvas texture.
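Such an interface could be sketched as follows. This is only an illustrative, stand-alone mock-up: ShaderConfigurable, ShaderConfigObserver and DummyCanvasTexture are hypothetical names, not part of the current SimGear API, and a real implementation would hook into SGPropertyChangeListener instead of the toy observer shown here:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical interface: anything that can have a shader/effect attached
// reacts to property-style change notifications.
class ShaderConfigurable {
public:
  virtual ~ShaderConfigurable() = default;
  // called whenever e.g. "shader.frag" or "shader.vert" changes
  virtual void shaderConfigChanged(const std::string& key,
                                   const std::string& value) = 0;
};

// Toy stand-in for a property observer that forwards changes to a
// registered ShaderConfigurable (a real one would listen on SGPropertyNode).
class ShaderConfigObserver {
public:
  void attach(ShaderConfigurable* target) { _target = target; }
  void set(const std::string& key, const std::string& value) {
    _values[key] = value;
    if (_target)
      _target->shaderConfigChanged(key, value);
  }
private:
  ShaderConfigurable* _target = nullptr;
  std::map<std::string, std::string> _values;
};

// Example consumer: remembers the last fragment-shader file it was told about.
class DummyCanvasTexture : public ShaderConfigurable {
public:
  std::string fragShader;
  void shaderConfigChanged(const std::string& key,
                           const std::string& value) override {
    if (key == "shader.frag")
      fragShader = value;
  }
};
```

A real Canvas::Element implementation would receive such notifications through its valueChanged() override and rebuild its osg::StateSet accordingly.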

So if people want to create really fancy textures or camera views, they could then use effects/shaders, which would keep the design truly generic and ensure that no bloat is introduced into the main canvas system.

We did have some discussions about supporting per-canvas (actually per Canvas::Element) effects and shaders via properties. TheTom even mentioned that he was interested in supporting this at some point, especially given the number of projects that could be realized this way (FLIR, night vision, thermal imaging etc.) - but so far, quite a few other things are obviously taking precedence. As far as I am aware, nobody is currently working on effects/shader support for Canvas, even though this would surely be highly appreciated.

At the time of writing (02/2014), the Canvas does not yet include any support for applying custom effects or shaders to canvas elements or to the whole canvas itself - however, supporting this has been repeatedly discussed over time, so we're probably going to look into it eventually[4].

If the canvas can internally be referenced by a texture2D() call, then it should be easy - the fragment shader knows screen-resolution pixel coordinates, so it is straightforward to look up the local pixel from the texture and then blur, recolor or distort it, or whatever else you have in mind.

Menu lighting based on light in the scene might be cool - and these effects shouldn't even be very complicated to do.

Assuming the canvas is internally a quad with a properly uv-mapped texture, then:

  • making the vertex shader just pass everything through, and
  • declaring uniform sampler2D myTexture; should make that texture available to the fragment shader
  • vec2 coords = gl_TexCoord[0].xy; should then get the coordinates of the local pixel inside the texture


#version 120
uniform sampler2D input_tex;

void main() {
    // get the texture coords of the pixel
    vec2 coords = gl_TexCoord[0].xy;

    // look up the pixel color from the input texture
    vec4 color = texture2D(input_tex, coords);

    // and pass the pixel color through
    gl_FragColor = color;
}

There are at least 2-3 people who can help with pointers, but we don't have time to implement this ourselves - so if anybody is interested, please get in touch via the canvas subforum.

The Effects framework is implemented in SimGear: https://sourceforge.net/p/flightgear/simgear/ci/next/tree/simgear/scene/material


airport selection dialog with a greyscale fragment shader applied to the canvas


a trivial fragment shader applied to a canvas (the original colors used in the Nasal code were not changed!)
void main(void) {
	gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}


unmodified airport selection canvas with fragment shader applied
// based on:
// http://people.freedesktop.org/~idr/OpenGL_tutorials/03-fragment-intro.html
// adapted by i4dnf as per: http://wiki.flightgear.org/Talk:Canvas_Development
// ** untested ** (note: dist_squared is not declared here; it would need
// to be provided, e.g. as a varying computed in the vertex shader)
void main(void)
{
    vec4 baseColor = vec4(0.90, 0.90, 0.90, 0.0);
    vec4 subtractColor = vec4(-0.70, -0.70, -0.50, -0.2);
    float doSubtract = step(400.0, dist_squared);
    vec4 fragColor = doSubtract * subtractColor + baseColor;
    gl_FragColor = fragColor;
}
more canvas fragment shading experiments

Implementation-wise, supporting shaders per canvas seems straightforward - but it would probably be better to support shaders per element, where each element renders its own sub-texture if shaders/effects are specified, and applies the canvas' osg::StateSet otherwise. We could add an interface on top of SimGear's "Effects" framework which would be implemented by the Canvas itself, but also by Canvas::Element.


  • Probably need to extend the Effects framework to support reloading effects/shaders from disk for testing purposes


Also see:


References

GDAL/OGR

WIP.png Work in progress
This article or section will be worked on in the upcoming hours or days.
See history for the latest developments.

a large benefit of using the raw DEM will be for moving maps - the elevation is pretty much displayable as-is.[1]


PDF

There's also been talk about possibly supporting a dedicated PDF element eventually:

Cquote1.png Hmmm, I'm now wondering about a canvas PDF viewer!
— bugman (Sep 18th, 2015). Re: Space Shuttle.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Now to see what happens with the EFB ideas and the canvas PDF support.
— bugman (Sep 28th, 2015). Re: Space Shuttle.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Canvas cannot currently deal with PDF files directly - even though OSG does have support for doing this kind of thing, but we would need to add a few dependencies, i.e. a PDF rendering library like "poppler" that would render a PDF to an osg::Image. At that point, it could be dealt with like a conventional canvas image, and could even be retrieved via HTTP. Extending Canvas accordingly could actually be useful, because it would even allow us to render other PDFs inside dialogs - such as for example the manual itself, i.e. as part of some kind of integrated "help" system. The question is if TheTom can be convinced that this is a worthwhile goal or not. But it's clearly something for post 3.2


Based on Tom's previous comments, he doesn't really favor procedural chart generation either, but would prefer having some kind of "web service" from which charts etc. could be fetched.

Not sure if you'd really want to come up with a corresponding "standard" from scratch, it should be easier to support the real thing, i.e. ARINC 424 / AIXM.

Traditionally, there are "CAD" tools for designing terminal procedures, i.e. tools like ArcGIS: http://webhelp.esri.com/arcgisdesktop/9 ... l_Solution

Also see this related discussion on the XP forum: http://forums.x-plane.org/?showtopic=46676


— Hooray (Sun Jun 22). Re: 777 EFB: initial feedback.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png It may make sense to revisit this idea; supporting a subset of PDF would not be too difficult, but it would be better to really use a PDF library and OSG's built-in support for rendering a PDF to a texture, which could then be easily turned into a new Canvas Element, as per the example at: Canvas Development#Adding a new Element. The coding part is relatively straightforward (basically copy&paste), but getting the dependencies/cmake magic right for all supported FG platforms would probably require a bit of work.
— Hooray (Sep 21st, 2015). Re: 777 EFB: initial feedback.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png the whole EFB idea has also been discussed previously, with a focus on using Canvas and Nasal - analogous to how Richard's MFD framework is just a front-end on top of Canvas/Nasal. Obviously, this approach (it not using Phi) has the limitation that it can only display stuff inside the fgfs main window. But we do have code/prototypes doing EFB handling using Nasal & Canvas
— Hooray (Sep 27th, 2015). Canvas MFD framework vs. EFB functionality.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png More recently, another idea is to add dedicated PDF support to the core Canvas system, so that arbitrary PDF files can be rendered onto a Canvas: https://forum.flightgear.org/viewtopic.php?p=258282#p258282
— Hooray (Sep 27th, 2015). Canvas MFD framework vs. EFB functionality.
(powered by Instant-Cquotes)
Cquote2.png

If you are interested in working on any of these, please get in touch via the canvas sub forum first.

  1. psadro_gm (Sep 10th, 2016). Re: Next-generation scenery generating?.


You will want to add a new Canvas::Element subclass whenever you want to add support for features which cannot currently be expressed easily (or efficiently) using existing means/canvas drawing primitives (i.e. via existing elements and scripting space frameworks).

For example, this may involve projects requiring camera support, i.e. rendering scenery views to a texture, rendering 3D models to a texture or doing a complete moving map with terrain elevations/height maps (even though the latter could be implemented by sub-classing Canvas::Image to some degree).

Another good example for implementing new elements is rendering file formats like PDF, 3D models or ESRI shapefiles.

To create a new element, you need to create a new child class which inherits from the Canvas::Element base class (or any of its child classes, e.g. Canvas::Image) and implement the interface of the parent class by providing/overriding the corresponding virtual methods.

To add a new element, these are the main steps:

  • Set up a working build environment (including SimGear): Building FlightGear
  • Update/pull simgear, flightgear and fgdata
  • Check out a new set of topic branches for each repo: git checkout -b topic/canvas-CanvasPDF
  • Navigate to $SG_SRC/canvas/elements
  • Create a new set of files CanvasPDF.cxx/.hxx (as per Adding a new Canvas element)
  • Add them to $SG_SRC/canvas/elements/CMakeLists.txt (as per Developing using CMake)
  • Edit $SG_SRC/canvas/elements/CanvasGroup.cxx to register your new element (header and staticInit)
  • Begin replacing the stubs with your own C++ code
  • Map the corresponding OSG/library APIs to properties/events understood by the Canvas element (see the valueChanged() and update() methods)
  • Alternatively, consider using dedicated Nasal/CppBind bindings
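The registration step in CanvasGroup.cxx can be illustrated with a self-contained sketch of the factory pattern involved. Element, PDFElement and the element_factories map below are simplified stand-ins for the real SimGear types (simgear::canvas::Element and the Group child-factory map), shown only to clarify the mechanism:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

// Simplified stand-in for simgear::canvas::Element
struct Element {
  virtual ~Element() = default;
  virtual std::string typeName() const = 0;
};

// The hypothetical new element; a real one would subclass
// simgear::canvas::Element and override update()/valueChanged().
struct PDFElement : Element {
  std::string typeName() const override { return "pdf"; }
};

// Factory map analogous to the child-factory map in CanvasGroup.cxx,
// keyed by the element name used in the property tree.
using ElementFactory = std::function<std::unique_ptr<Element>()>;
std::map<std::string, ElementFactory> element_factories;

// Analogous to the staticInit() registration step
void staticInit() {
  element_factories["pdf"] = [] { return std::make_unique<PDFElement>(); };
}
```

Once registered, creating a property-tree node named after the element (here "pdf") would cause the group to instantiate it via this factory lookup.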

Below, you can find patches illustrating how to approach each of these steps using boilerplate code, which you will need to customize/replace accordingly:

Caution  This custom Canvas element requires a 3rd-party library which is not currently used by SimGear/FlightGear, so the top-level CMakeLists.txt file in $SG_SRC needs to be modified to add a corresponding find_package() call, and you also need to download/install the corresponding library for building sg/fg. In addition, the CMake module itself may need to be placed in $SG_SRC/CMakeModules:

Discussed new Elements

1rightarrow.png See Canvas Sandbox for the main article about this subject.

The previously mentioned primitives alone can already be used to create very sophisticated avionics and dialogs - however, depending on your needs, you may want to extend the canvas system to support additional primitives. Typically, you'll want to add new primitives in order to optimize performance or simplify the creation of more sophisticated avionics and/or dialogs (e.g. for mapping/charting purposes). If you are interested in adding new primitives, please take a look at the sources in $SG_SRC/canvas/elements.

For example, there's been talk about possibly adding the following additional primitives at some point. However, none of these are currently a priority or being worked on by anybody:

  • support for a vertical mapping mode (e.g. to create Vertical Situation Displays or flight path evaluation dialogs); it would probably make sense to use PROJ4 for additional projection support
  • support for rendering scenery views (e.g. for tail cameras, mirrors etc.) [5] [6] ticket #1250
  • support for ESRI shapefiles (instead of using shapelib, it would make sense to use GDAL/OGR here, or directly the OSG ReaderWriterOGR plugin) [7] (FlightGear/osgEarth now depends on GDAL, so this should be straightforward dependency-wise)
  • support for GeoTIFF files or terrain height profiles using the tile cache
  • rendering 3D objects
  • support for orthographic moving map displays, e.g. using Atlas [8] (ideally using CompositeViewer Support)


There is already support for creating multiple osgviewer windows in FlightGear; this is commonly used in multiscreen setups. To support the creation and usage of osgviewer windows in Canvas, we would need to look into adding a new placement type to the canvas system, so that osgviewer/OS windows can be created and controlled via the canvas system and a handful of placement-specific properties [9].

Placements

Obviously, users can use the canvas system for developing all sorts of features that may need to be accessible using different interfaces - for these reasons, the canvas uses the concept of so-called placements, so that a canvas texture can be shown inside GUI windows, GUI dialogs, cockpits, aircraft textures (liveries) - and also as part of the scenery (e.g. for a VGDS).


In SimGear's Canvas::update it appears to be using the factories to find the element; this means that it can't find the named OSG node, which makes me think that maybe it is only looking in the ownship (which is a null model).

PlacementFactoryMap::const_iterator placement_factory =
  _placement_factories.find( node->getStringValue("type", "object") );
if( placement_factory != _placement_factories.end() )
{
  Placements& placements =
    _placements[ node->getIndex() ] = placement_factory->second(node, this);
  node->setStringValue( "status-msg",
                        placements.empty() ? "No match" : "Ok" );
}

void CanvasMgr::init() calls sc::Canvas::addPlacementFactory. [1]
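The lookup quoted above can be mimicked with a stand-alone sketch of the placement-factory mechanism. The types below are simplified stand-ins (the real code works on SGPropertyNode and simgear::canvas::Placements), intended only to illustrate how a new placement type would plug in:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Stand-in for simgear::canvas::Placements (the list of placements
// created for one canvas); here just a list of names.
using Placements = std::vector<std::string>;
using PlacementFactory = std::function<Placements(const std::string& node)>;

// Analogous to the _placement_factories map, filled via addPlacementFactory()
std::map<std::string, PlacementFactory> placement_factories;

void addPlacementFactory(const std::string& type, PlacementFactory factory) {
  placement_factories[type] = factory;
}

// Mirrors the quoted lookup in Canvas::update(): find the factory matching
// the placement's "type" property, or report "No match" otherwise.
Placements createPlacement(const std::string& type, const std::string& node) {
  auto it = placement_factories.find(type);
  if (it == placement_factories.end())
    return {};               // status-msg would be set to "No match"
  return it->second(node);   // status-msg "Ok"
}
```

A hypothetical osgviewer placement would then simply be one more addPlacementFactory("osgviewer", ...) call during CanvasMgr::init().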


supported canvas placements as of 03/2014


Note  The features described in the following section aren't currently supported or being worked on, but they've seen lots of community discussion over the years, so that this serves as a rough overview.

Scenery Overlays


Also see the Photoscenery via Canvas? post on the forum and the A project to create a source of free geo-referenced instrument charts post on the forum.

Cquote1.png I've been wondering how hard it would be to add a tile loader mode where the default texture is ignored, and instead, a photo texture of the tile is applied. It may not be an optimal photo-texture implementation (but it might be good enough to be fun and interesting?)
— Curtis Olson (Oct 1st, 2008). Re: [Flightgear-devel] Loading Textures for Photo-Scenery?.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png [...] FlightGear to superimpose a given texture over a whole terrain tile, given that a texture file with the same name as the tile is found. I think that this would require that either a) TerraGear generate appropriate texture coordinates for the tile, mapping the texture continuously over the whole tile, or b) in case of loadPhotoScenery, the texture coordinates contained in the .btg.g must be ignored and rebuilt on the fly by FlightGear.
— Ralf Gerlich (Oct 1st, 2008). Re: [Flightgear-devel] Loading Textures for Photo-Scenery?.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Chris Schmitt, Pete & myself have also discussed the MSFS approach, where we render surface information to textures, either on the CPU or GPU. This solves a whole bunch of issues in airports, and allows the generation of the textures to be defined based on user settings, performance, available texture RAM and so on. (Don’t render roads, render fancy boundaries for coastlines, paint snow onto crops based on season) If the textures are re-generated dynamically based on changing view, the user need never see a ‘blurry’ texture. The generated texture doesn’t need to encode RGB, it can encode whatever inputs the shaders like - eg material ID, gradient, distance to boundaries. (And of course, for far away areas, we generate or read a coarse, low-resolution map very cheaply) From my perspective the appeal is this work can be done on a spare CPU core, and it actually fits quite well with something like osgEarth - we let osgEarth handle the elevation data, and the texture-generating code simply becomes the source of raster data which osgEarth overlays on top. With the GPU-based flattening of elevation data it even works to make roads/railways interact with terrain nicely. Whether or not the memory-bandwidth burned in moving textures to the GPU is better or worse than doing everything GPU-side as Tim suggests with decals, I have no clue about. Similarly I don’t know how disruptive this scheme would be architecturally - intuitively osgEarth must handle loading different resolutions of raster data interactively - that’s exactly what it does for photo-scenery after all - but I haven’t looked at the API to see how easy or hard such an integration would be.
— James Turner (Nov 27th, 2013). Re: [Flightgear-devel] Rendering strategies.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Texture overlays - FG scenery engine does the chopping and texture co-ord generation. [2]
— Paul Surgeon
Cquote2.png
Cquote1.png For the sake of completeness, and I am not saying that you should do this (and it is almost certainly going to be much worse performance-wise than any shaders) - but if you want the shadow to be accurate despite potential terrain sloping, you could apply a Canvas texture onto the surface (admittedly, this is much more straightforward in the case of an actual 3D model like a carrier) - otherwise, you'll also want to use a workaround and attach the texture to the 3D model (aka main aircraft). But people have been using Canvas for all sorts of purposes, including even liveries: Howto:Dynamic_Liveries_via_Canvas

But unlike glsl/shaders, a Canvas is not primarily a GPU thing, i.e. there's lots of CPU-level stuff going on affecting performance.


— Hooray (Sat Jan 17). Re: 2D shadow on ground?.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I am looking for a method for adding a graphical overlay channel to Flightgear. This overlay would consist of a dynamic texture that can be modified in real time. I've used other OpenGL based systems with this feature but don't know where to start with implementing it in Flightgear.[3]
— Noah Brickman
Cquote2.png


Cquote1.png Once the frame is converted to an opengl texture, then it would be a very simple matter of displaying it on the screen with a textured rectangle drawn in immediate mode ... possibly with some level of transparency, or not ...

I'm involved in some UAV research where we are using FlightGear to render a synthetic view from the perspective of a live flying uav. Would be really cool to super impose the live video over the top of the FlightGear synthetic view. Or super impose an F-16 style HUD on top of the live video ... I have lots of fun ideas for someone with a fast frame grabber and a bit of time [...]

Then do whatever bit fiddling is needed to scale/convert the raster image to an opengl texture. Then draw this texture on a quad that is aligned correctly relative to the camera. It might be possible to get fancy and alpha blend the edges a bit.

Given an image and the location and orientation of the camera, it would be possible to locate world coordinates across a grid on that image. That would allow a quick/crude orthorectification where the image could be rubber sheeted onto the terrain. This would take some offline processing, but you could end up building up a near real time 3d view of the world that could then be viewed from a variety of perspectives. The offline tools could update the master images based on resolution or currency ... that's probably a phd project for someone, but many of the pieces are already in place and the results could be extremely nice and extremely useful (think managing the effort to fight a dynamic forest fire, or other emergency/disaster management, traffic monitoring, construction sites, city/county management & planning, etc.) I could even imagine some distributed use of this so that if you have several uav's out flying over an area, they could send their imagery back to a central location to update a master database ... then the individual operators could see near real time 3d views of places that another uav has already overflown.

If we started building up more functionality in this area, there are a lot of different directions we could take it, all of which could be extremely cool.[4]
— Curtis Olson
Cquote2.png
Cquote1.png Could we generate the texture on the fly? Based on landclass and road data? I could see a number of advantages/disadvantages here as compared to our current, generic textures:
  • much better autogen scenery possible: many textured streets/railroads without additional scenery vertices
  • shared models with an individual piece of ground texture
  • get rid of sharp landclass borders
  • possibly improved resolution[5]
— Thomas Albrecht
Cquote2.png
Cquote1.png A very interesting idea - so interesting I thought of it and discussed it with some people last year :) The summary answer is, it should be possible, it would have pretty much the benefits and drawbacks you mention (especially the VRAM consumption), and it would allow nice LoD and solve some other issues. Especially it avoids the nasty clipping issues we have with surface data in TerraGear, since you just paint into the texture, no need to clip all the linear data.[6]
— James Turner
Cquote2.png

What we could do is identify which hooks are needed to make this work and provide those via the Canvas system: Canvas textures can already be placed in the scenery, so there should be very little needed in terms of placement-specific attributes, and the corresponding code should be available in SimGear/FlightGear already.

The patch required to modify FlightGear obviously already uses shaders and effects, and it's mostly about exposing additional parameters to the shaders.


  1. Richard Harrison  (May 15th, 2016).  [Flightgear-devel] Canvas in dynamically loaded scene models .
  2. Paul Surgeon. Scenery engine features.
  3. Noah Brickman. Overlay Plane.
  4. Curtis Olson (Fri, 25 Jan 2008 07:51:41 -0800). Replace fg visualization with streaming video.
  5. Thomas Albrecht. Generating ground textures on the fly?.
  6. James Turner. Generating ground textures on the fly?.

Native Windows

Note  People interested in working on this may want to check out the following files:

Currently, all placements are within the main FlightGear window; however, there's been talk about providing support for additional Canvas placements, such as osgviewer placements, to help generalize our window management routines, so that a canvas can be rendered inside a dedicated OS window:

Would it be possible to place the new "view" into a window instead of having a dedicated view? That would allow you to have an instrument panel with a blank cut-out that could hold this newscam/FLIR window.[1]

Several responded that you can have a view, or multiple camera offsets, shared across many screens. I have tried this and it works well on the Mac. But what I want to do is have two windows, one with a custom view I have defined, and another window with the cockpit view. I'll keep digging, but I read somewhere that this particular thing is hard... because there is only one view manager instance, and it can only allow multiple camera offsets...[2]

We can define arbitrary areas of the screen and draw any view perspective into them. However, I think all the views need to be from the same eye point (i.e. you can't have a cockpit view in one window and a chase view in another?). However, the capability we do have is very nice for supporting devices like the Matrox Triple Head 2 Go box, or Twin View, or any "spanning" desktop system. And we have the ability to extend this to multiple displays. There is an AMD/ATI demo movie floating around on YouTube that shows FlightGear running on 8 monitors using 4 dual-headed video cards.[3]


Cquote1.png Support multiple views/windows: Currently the GUI can only be placed inside one view/window (see Docs/README.multiscreen) but it would be nice to be able to move windows between views.[4]
— Thomas Geymayer
Cquote2.png


Cquote1.png I have just been trying out the multiple screen feature in FG. I found that the GUI camera (including the menu bar, hud and 2D panel) appears in only one of the windows. Is there any way I can make the GUI to appear in all the windows? Actually I want to be able to view the hud and 2D panel in all the windows.[5]
— Kavya Meyyappan
Cquote2.png
Cquote1.png there's a limitation in Plib that forces the GUI to be drawn on one window.[6]
— Tim Moore
Cquote2.png
Cquote1.png I think you have just summarized all the limitations of the FlightGear multi-camera/view/display system. I know that in the case of menus, hud, 2d instrument panels, there would need to be some significant code restructuring to allow these to be displayed on other windows.[7]
— Curtis Olson
Cquote2.png
Cquote1.png Good thing to have!!! Just still support graphics context on different screens/displays too ...[8]
— Mathias Fröhlich
Cquote2.png
Cquote1.png it can be solved by using multiple osg windows to contain whatever GUI solution we go with - canvas, osgWidget or PUI-port.

Or to put it another way - the actual hard part is running the widgets in the main OpenGL window - which *is* a requirement for full-screen apps and multi-monitor setups. (Some people have claimed otherwise, but I believe we need the option of 'in-window' UI for many cases).

So, this is a desirable feature, but doesn't dictate the choice of GUI technology. And can be done as a separate step from replacing PLIB.[9]
— James Turner
Cquote2.png
  1. Gene Buckle (Jul 23rd, 2009). Re: [Flightgear-devel] view manager "look at" mode.
  2. Carson Fenimore (Feb 6th, 2009). [Flightgear-users] multiple views.
  3. Curtis Olson (Jul 23rd, 2009). Re: [Flightgear-devel] view manager "look at" mode.
  4. Thomas Geymayer (07-30-2012). Switching from PUI to osgWidget.
  5. Kavya Meyyappan (Fri, 19 Mar 2010 03:31:50 -0700). [Flightgear-devel] Help needed with multi-screen.
  6. Tim Moore (Sat, 20 Mar 2010 01:42:31 -0700). Re: [Flightgear-devel] Help needed with multi-screen.
  7. Curtis Olson (Fri, 19 Mar 2010 08:36:22 -0700). Re: [Flightgear-devel] Help needed with multi-screen.
  8. Mathias Fröhlich (Sat, 28 Jun 2008 00:05:19 -0700). Re: [Flightgear-devel] RFC: changes to views and cameras.
  9. James Turner (Wed, 25 Jul 2012 02:28:42 -0700). Switching from PUI to osgWidget.

Placement/Element for Streaming: Computer Vision

Note  There seem to be two main use-cases discussed by contributors:
  1. The UAV guys want to view/use external live video inside FlightGear as an instrument/texture (which would require a new Canvas::Element to render an external video stream to a canvas)
  2. The computer vision (OpenCV) guys want to stream FlightGear live video itself to another application for image processing purposes - the latter would require streaming FlightGear's main window view to an external program (i.e. by using FlightGear's CameraGroup code), possibly by using a corresponding "virtual placement" that opens a socket to provide a live stream of the FlightGear main window via a background thread. This only makes sense to pursue once we can render camera views to a canvas, though.

One of the suggestions would be to develop some kind of shared memory interface, with metadata embedded in the same memory space. After each rendering step, the image would simply be copied to the memory along with the metadata and a frame counter. I have already done some tests on the Windows platform and it works quite well. It is also possible to enable/disable the copy process (which is not too slow, but it is interesting to have a way of controlling it) using command line parameters.

From the shared memory position, any other process could read it and do whatever it wants, which would create a complete horizon of possibilities like streaming, video recording and a more modular architecture for anything related to gathering images; the jpeg server could be separated from FlightGear, for example. Obviously, this requires some kind of process synchronization such as mutexes, which relies on the reading software not blocking it for too long.

Another approach would be to have a different architecture inside FlightGear, something like: Renderer -> ImageGrabber -> ImageSaver, where the ImageGrabber is the part of code that reads the image and saves it to a buffer, and ImageSaver is the "externalizer" (JPEGSaver, SharedMemorySaver, MPEGSaver and so on). However, I personally prefer the first option, which enables people to grab images and do whatever they want without the necessity of understanding and recompiling FlightGear source code.[1]


The HTTP server already does this - if you select a ‘low compression’ image format such as TGA or uncompressed PNG, it’s very close to what you want. It will be using a local TCP socket, not shared memory, but unless you want really large images, I am not sure the additional complexity is worth adding an entirely new image output system. See the code for how to increase the max-fps (defaults to 5 Hz but could be 30 or 60 Hz) and the file format of the http-server; any image format supported by an OSG ReaderWriter plugin should work. (Well, so long as the plugin implements writing!)[2]


Cquote1.png I have a project using FlightGear to simulate a real 3D flight scenario, get the images and process them with image processing algorithms. So what I want are the coordinates of the airplane and the coordinates of the camera. I encountered some problems when I was trying to set my own view points.
— yanzhang (Fri Sep 12). Render image in Flightgear.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I am using the http stream feature to capture videos with ffmpeg. It is a great feature!
— Adam Dershowitz (Aug 17th, 2015). [Flightgear-devel] httpd stream question.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png what is the current suggested easiest way to capture images and videos from FlightGear on a Mac?
— Adam Dershowitz (May 29th, 2014). [Flightgear-devel] Saving Videos.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png the /way/ we acquire frame grabs changed completely, to interact much better with OSG - we no longer render the scene again, instead we simply read the framebuffer back after OSG says rendering is complete. You can adjust the maximum frame capture rate, so it should be possible to achieve 25fps output or more this way. I think the default is capped to 5 or 10 Hz however.
— James Turner (May 30th, 2014). Re: [Flightgear-devel] Saving Videos.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png It uses the same last-camera-callback technique and now supports mjpeg streaming, too.
— Torsten Dreyer (May 30th, 2014). Re: [Flightgear-devel] Saving Videos.
(powered by Instant-Cquotes)
Cquote2.png


Cquote1.png The problem is not the decoder but the encoder. I don't have a fast-enough real-time video encoder that lives happily in the FG main loop. I have experimented with ffmpeg, which was promising, but it ended up at the very bottom of my backlog :-/ We can do an MJPEG stream, try to use /screenshot?stream=y as the screenshot url. MJPEG is ugly and a resource hog but works reasonably well for image sizes of roughly 640x480. Scale down your FG window and give it a try.
— Torsten Dreyer (Oct 12th, 2015). Re: [Flightgear-devel] phi interface updates.
(powered by Instant-Cquotes)
Cquote2.png

People interested in doing UAV work that involves computer vision (e.g. using OpenCV, see ticket #924 and the related FlightGear forum threads) will probably also want to look into using a dedicated Canvas placement for this, in combination with a dedicated Canvas::Element that renders scenery views to a texture using CompositeViewer Support - these two features would provide a straightforward mechanism to export a live video stream of FlightGear via a dedicated port.

Note  There were several early attempts at bringing streaming capabilities to FlightGear in the pre-OSG days that are meanwhile unmaintained, e.g.:
Cquote1.png Is there generally the possibility to use image-processing-algorithms in NASAL? I mean things like FFT, convolution or a sobel-operator...

This is not only for image-processing of course, but will be used often for this. And it is not only for Images!, also for other 2d-datas like a terrain-representation to detect edges there or so...


— St.Michel (Tue Dec 02). "Image-Processing" or FFT in NASAL/FG ???.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I am currently working with image processing and found that FlightGear is an extremely valuable resource for this kind of research. However, to work with these images, it is necessary to be able to gather image and metadata (Aircraft Position and Orientation, Camera info and other information like model position) from the simulator.

After some time reading the FlightGear forum and wiki, I found the following possibilities:

  • Jpeg Server
  • Sequential Snapshots from NASAL
  • Fraps

Nevertheless, these approaches have some small limitations:

  • Jpeg Server: Reading the jpgfactory file, I understood that each time an image is requested, a rendering is started and waited for. HTTP processing is also necessary, which would probably be too slow for my purposes. Compressing and saving the file is also done inside FlightGear; since I don't know whether that happens on the main loop or in another thread, I don't know its performance implications.
  • Snapshots: The problem is gathering metadata synchronized with the images. Also, the code is limited to 999 pictures, which is a problem (this limitation, however, is not difficult to solve). This approach has not been tested, but may be too slow for videos and online processing.
  • Fraps or similar software: Completely impossible to get precisely synchronized metadata.

My suggestion would be to develop some kind of shared memory interface, with metadata embedded in the same memory space. After each rendering step, the image would simply be copied to the memory along with the metadata and a frame counter. I already have some tests done on the Windows platform and it works quite well. It is also possible to enable/disable the copy process (which is not too slow, but it is interesting to have a way of controlling it) using command line parameters.

From the shared memory region, any other process could read it and do whatever it wants, which would open up a complete horizon of possibilities like streaming, video recording and a more modular architecture for anything related to gathering images; the jpeg server could be separated from FlightGear, for example. Obviously, this requires some kind of process synchronization such as mutexes, which relies on the reading software not blocking it for too long.

Another approach would be to have a different architecture inside FlightGear, something like:

Renderer -> ImageGrabber -> ImageSaver

Here the ImageGrabber is the part of the code that reads the image and saves it to a buffer, and the ImageSaver is the "externalizer" (JPEGSaver, SharedMemorySaver, MPEGSaver and so on). However, I personally prefer the first option, which enables people to grab images and do whatever they want without having to understand and recompile the FlightGear source code.

I'm looking for opinions, suggestions and observations about this technique before implementing it in a more standardized way and proposing the code.[3]
— Emilio Eduardo
Cquote2.png
Cquote1.png I'm new to FlightGear, and am trying to use it as an image generator for a simulator I'm developing...I've got it configured to take inputs from a UDP port to fly, but I want to disable a lot of features so that all FlightGear does is draw scenery. [4]
— Drew
Cquote2.png
Cquote1.png I would like to use FlightGear to generate the scene observed by a UAV's onboard camera.

Basically, this would translate to feeding FlightGear the FDM data and visualizing the image generated by FlightGear in another computer, across a network, using for example streaming video.

I suppose this is a bit of a far-fetched idea, but is there any sort of support for this (or something similar) already implemented? [5]
— Antonio Almeida
Cquote2.png


Cquote1.png I am interested in using it as a visualization tool for UAVs. I would like to replace the fg scenery with images captured from a camera onboard an aircraft. I was wondering if there is any way to import images into FlightGear on the fly. The basic goal would be to show live video where available and fall back to FlightGear visuals when the feed is lost (using a custom view from the camera perspective).[6]
— STEPHEN THISTLE
Cquote2.png
Cquote1.png I'm hooking up a lumenera Camera for a live video feed from a UAV, so that the video gets handed to Flightgear, which then draws its HUD over the video stream. In order to do this, I need to be able to communicate with the window controls. My camera can display the video in a new window, but I want it to draw to the video screen that Flightgear is already using.[7]
— Bruce-Lockhart
Cquote2.png
Cquote1.png I don't think there's any current way to do this. However, I think what is needed is to link in some video capture library to do frame grabs from your video camera as quickly as possible. Then do whatever bit fiddling is needed to scale/convert the raster image to an opengl texture. Then draw this texture on a quad that is aligned correctly relative to the camera. It might be possible to get fancy and alpha blend the edges a bit.

Given an image and the location and orientation of the camera, it would be possible to locate world coordinates across a grid on that image. That would allow a quick/crude orthorectification where the image could be rubber sheeted onto the terrain. This would take some offline processing, but you could end up building a near real time 3d view of the world that could then be viewed from a variety of perspectives. The offline tools could update the master images based on resolution or currency ... that's probably a phd project for someone, but many of the pieces are already in place and the results could be extremely nice and extremely useful (think managing the effort to fight a dynamic forest fire, or other emergency/disaster management, traffic monitoring, construction sites, city/county management & planning, etc.) I could even imagine some distributed use of this, so that if you have several uav's out flying over an area, they could send their imagery back to a central location to update a master database ... then the individual operators could see near real time 3d views of places that another uav has already overflown.

If we started building up more functionality in this area, there are a lot of different directions we could take it, all of which could be extremely cool.[8]
— Curtis Olson
Cquote2.png


Cquote1.png Getting live video onto a texture is pretty standard stuff in the OpenSceneGraph community[9]
— Tim Moore
Cquote2.png
Cquote1.png I imagined embedding some minimal routine that talks to the camera and grabs an image frame. Then usually you can directly map this into an opengl texture if you figure out the pixel format of your frame grab and pass the right flags to the opengl texture create call. Then you should be able to draw this texture on top of any surface just like any other texture ... you could map it to a rectangular area of the screen, you could map it to a rotating cube, map it to the earth surface, etc. That's about as far as I've gone with thinking through the problem.[10]
— Curtis Olson
Cquote2.png


Cquote1.png I want to draw something in the front face of the FlightGear view, but I don't want to recompile / modify any code, so if FlightGear could give me an interface to draw something myself through a DLL, that would be perfect.[11]
— CHIV
Cquote2.png
  1. Emilio Eduardo Tressoldi Moreira (Jun 30th, 2014). [Flightgear-devel] Rendered image export to Shared Memory.
  2. James Turner (Jun 30th, 2014). Re: [Flightgear-devel] Rendered image export to Shared Memory.
  3. Emilio Eduardo (2014-06-30 13:33:10). Rendered image export to Shared Memory - msg#00118 (http://osdir.com/ml/flightgear-sim/2014-06/msg00118.html).
  4. Drew (Tue, 25 Jan 2005 09:24:30 -0800). Disabling functionality.
  5. Antonio Almeida (Tue, 22 May 2007 10:14:46 -0700). Flightgear visualization as streaming video.
  6. STEPHEN THISTLE (Fri, 25 Jan 2008 06:32:03 -0800). Replace fg visualization with streaming video.
  7. cullam Bruce-Lockhart (Tue, 29 Jul 2008 09:23:54 -0700). Window controls.
  8. Curtis Olson (Fri, 25 Jan 2008 07:51:41 -0800). Replace fg visualization with streaming video (http://www.mail-archive.com/flightgear-devel@lists.sourceforge.net/msg15459.html).
  9. Tim Moore (Fri, 25 Jan 2008 08:31:40 -0800). Replace fg visualization with streaming video.
  10. Curtis Olson. Window controls.
  11. CHIV (Thu May 08, 2014 3:03 am). One suggestion: FlightGear would support plugins like this! (FlightGear forum).

Adding new Placements

Note  should be linking to the actual sources/line numbers here
Screenshot-streaming.png
WIP.png Work in progress
This article or section will be worked on in the upcoming hours or days.
See history for the latest developments.

Let's assume we'd like to add a new type of placement: one for treating any Canvas as a raster image that can be fetched via the built-in httpd server, or even streamed as MJPEG. For that to work, we need to be able to fetch the Canvas, convert it to an osg::Image and register the whole thing with the mongoose integration ($FG_SRC/Network/httpd); next, we need to register a corresponding camera draw callback to obtain the image, and notify the mongoose code to register a new handler and a class providing the corresponding image [1].

In Canvas terms, the way a Canvas is placed is handled by a so-called Placement: just another class that responds to placement-specific events, mainly relevant property updates.

In this particular case, it would make sense to support a handful of events/attributes:

  • output format (png, jpeg, mjpeg)
  • size of the image to be streamed (width/height)
  • color depth
  • name (to be used for requests)
  • update frequency (usually, once or twice per second should suffice)
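
Such a placement would then be driven purely by property updates, following the convention used by existing placements (a placement node with a type child). A sketch of what the property layout might look like - only the placement/type convention matches existing placements; every other child name below is hypothetical, chosen to mirror the attribute list above:

```xml
<!-- Hypothetical property layout for an "httpd" placement.
     Only <placement>/<type> follows the existing convention;
     the remaining names are illustrative, not an implemented API. -->
<placement>
  <type>httpd</type>
  <name>nd-stream</name>        <!-- name used when requesting the image -->
  <format>mjpeg</format>        <!-- png, jpeg or mjpeg -->
  <width>512</width>
  <height>512</height>
  <color-depth>24</color-depth>
  <update-rate>2</update-rate>  <!-- updates per second -->
</placement>
```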


  1. Create a new set of files in $SG_SRC/canvas, named CanvasHttpdPlacement.cxx/.hxx.
  2. Use the CanvasObjectPlacement files as a template: rename them and update the include guards/comments accordingly.
  3. Open FGCanvasSystemAdapter.cxx/hxx in $FG_SRC/Canvas to add helpers for your new placement, e.g. getImage(): http://wiki.flightgear.org/Canvas_Troubleshooting#Serializing_a_Canvas_to_disk_.28as_raster_image.29
  4. ...
  5. Open $FG_SRC/Canvas/canvas_mgr.cxx, navigate to CanvasMgr::init() and add your new placement there.

Projections

Also see

Background

For visualizing orbital mechanics, the two most useful projections are the groundtrack (for inclination, node crossings and what you should see looking out of the window) and the projection orthogonal to the orbital plane[2]


Cquote1.png I stumbled across what is perhaps closer to the core of the issue in a flight over the North Pole. Flightplan legs are rendered as great circle segments, so long legs are drawn with a curve. Somewhere, the flightplan has to be flattened into a map view. It appears that this is easy to do over short distances in lower latitudes, but becomes increasingly difficult over long distances with a bigger component of Earth's curvature involved. The map view is not really geared for polar routes, so the leg that goes over the pole has an extreme curve drawn in it. And when that leg was in range, the frame rate dropped from 25-30 down to 8-12. Once it went out of range, frame rate was back to normal. It seems like calculating curvature may be the rate-limiting step.
— tikibar (Dec 23rd, 2014). Re: Canvas ND performance issues with route-manager.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I narrowed this down to the way map segments around the curving earth are calculated in the canvas ND. I added a hard coded distance limiter to it that restored the calculation speed as long as no leg was longer than about 800 nm. Hooray suggested an approach that was more dynamic, but I never got around to working on it. Bottom line, it's not a graphics card issue but a calculation issue. I've seen it in both the 757 and 747-8 series using the canvas ND. The old thread about it is on the FlightGear forum.
— tikibar (Feb 9th, 2016). Re: Root Manager consumes a lot of Frame Rate.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Gijs provided a patch to fix the hard-coded Map dialog (and possibly the ND), it's the projection code that is causing this - as far as I know, Gijs' patches never got integrated with the Canvas system, my original suggestion was to extend the canvas projection code so that projection code can be implemented in the form of Nasal code and/or property rules.
— Hooray (Feb 10th, 2016). Re: Route Manager consumes a lot of Frame Rate.
(powered by Instant-Cquotes)
Cquote2.png

Adding new Projections

Note  Discuss base class that needs to be implemented
We've already fixed that in the (old) map dialog, by using an azimuthal equidistant projection. Porting the projection to Canvas is on the todo list. Such a projection is much, much better for navigational use. Curves in routes are not calculated by Canvas, nor by the ND, though. It's the route manager that splits up a route into segments in order to get smooth transitions.[1]

That's a coordinate singularity of a (lat/lon) grid, and things like your course cease to be well-defined in its vicinity - so you can't expect normal code to work. Usually you need special provisions to deal with such singularities (from my own experience, the Shuttle has four different coordinate systems to switch between, and fallback rules for what to display when close to a singularity; the AP for liftoff uses a different coordinate grid (based on vectors rather than angles) from the AP later during launch, because the launch goes right into the singularity - there is no course defined for vertical ascent - so one can't steer to any particular course until later). [2]


Cquote1.png Supporting a few common GIS projections would seem useful though, i.e. integrating Proj4 - and if we really want to create a 3D projection, we'd probably be better off by rendering the scene to a texture and using canvas shaders then to add flight path info that way.
— Hooray (Fri Jul 18). Re: Cannot use .setText on SVG text element.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png There's a projection library available called "proj4", it comes with a number of different projections, we may absorb that into simgear and use it for projection handling - which would free us from having to implement/test and maintain our own
Cquote2.png
Cquote1.png We've already fixed that in the (old) map dialog, by using an azimuthal equidistant projection (see screenshot). Porting the projection to Canvas is on my todo list. Such a projection is much much better for navigational use.

Curves in routes are not calculated by Canvas, nor by the ND though. It's the route manager that splits up a route in segments in order to get smooth transitions.


Cquote2.png
Cquote1.png The new ND uses the actual route-manager paths, which allows it to draw holdings, flyby waypoints (thanks to James recent work) etc. But we'll need the azimuthal projection anyway, so I'll bump my todo list
Cquote2.png
Cquote1.png I do agree that it would make sense to sub-class the Canvas projection class and implement Gijs' changes there, like we originally discussed in the merge request: FlightGear commit 3f433e2c35ef533a847138e6ae10a5cb398323d7


flightgear/flightgear/96a2673dd8a08b70396e2be1e567c0e89d8cf6e3/src/GUI/MapWidget.cxx#l1573

Ideally, we would expose the projection as a property for each Map so that it can be changed dynamically.


Cquote2.png

Styling (osgText)

Gijs was looking for an outline that follows the shape of the text, which is what backdrop provides.

For his solution, see the two diffs below. He didn't add the full range of backdrop options, just outline for now (see the FlightGear forum).

Cquote1.png And this is how it looks in FlightGear now :-) Notice how the overlapping waypoints are easier to read (this image is a little exaggerated with all those fixes).


(see the linked image)


— Gijs (Mon Jul 07). Re: osgText backdrop.
(powered by Instant-Cquotes)
Cquote2.png
commit 5cc0adc778bda1773189b0119d24fbaf5ecd4500
Author: Gijs de Rooy
Date:   Mon Jul 7 18:26:16 2014 +0200

    Canvas: add backdrop option to text

diff --git a/simgear/canvas/elements/CanvasText.cxx b/simgear/canvas/elements/CanvasText.cxx
index d99760a..3a986e1 100644
--- a/simgear/canvas/elements/CanvasText.cxx
+++ b/simgear/canvas/elements/CanvasText.cxx
@@ -39,6 +39,7 @@ namespace canvas
       void setLineHeight(float factor);
       void setFill(const std::string& fill);
       void setBackgroundColor(const std::string& fill);
+      void setOutlineColor(const std::string& backdrop);
 
       SGVec2i sizeForWidth(int w) const;
       osg::Vec2 handleHit(const osg::Vec2f& pos);
@@ -97,6 +98,15 @@ namespace canvas
   }
 
   //----------------------------------------------------------------------------
+  void Text::TextOSG::setOutlineColor(const std::string& backdrop)
+  {
+    osg::Vec4 color;
+    setBackdropType(osgText::Text::OUTLINE);
+    if( parseColor(backdrop, color) )
+      setBackdropColor( color );
+  }
+
+  //----------------------------------------------------------------------------
   // simplified version of osgText::Text::computeGlyphRepresentation() to
   // just calculate the size for a given weight. Glpyh calculations/creating
   // is not necessary for this...
@@ -546,6 +556,7 @@ namespace canvas
 
     addStyle("fill", "color", &TextOSG::setFill, text);
     addStyle("background", "color", &TextOSG::setBackgroundColor, text);
+    addStyle("backdrop", "color", &TextOSG::setOutlineColor, text);
     addStyle("character-size",
              "numeric",
              static_cast<


commit 838cabd2a551834cbcac2b3edd21500409ff2e98
Author: Gijs de Rooy
Date:   Mon Jul 7 18:27:50 2014 +0200

    Canvas: add backdrop option to text

diff --git a/Nasal/canvas/api.nas b/Nasal/canvas/api.nas
index 8bc12d8..3047dbf 100644
--- a/Nasal/canvas/api.nas
+++ b/Nasal/canvas/api.nas
@@ -634,6 +634,8 @@ var Text = {
 
   setColorFill: func me.set('background', _getColor(arg)),
   getColorFill: func me.get('background'),
+  
+  setBackdropColor: func me.set('backdrop', _getColor(arg)),
 };
 
 # Path

Event Handling

Note  Discuss CanvasEventManager, CanvasEvent, CanvasEventVisitor

Canvas Integration

Note  Discuss FGCanvasSystemAdapter - for the time being, check out Howto:Extending_Canvas_to_support_rendering_3D_models#Extending_FGCanvasSystemAdapter to learn more about the purpose/usage of the CanvasSystemAdapter, which basically serves as a bridge between FlightGear and SimGear, i.e. to expose FG specific APIs to Canvas (which lives in SimGear).

There is a dedicated FGCanvasSystemAdapter in $FG_SRC/Canvas that encapsulates the model lookup [3]


For instance, say you'd like to access the FlightGear view manager via the Canvas system: you don't need to move the view manager to SimGear to accomplish this - as mentioned previously, the correct way to access FG-level subsystems from the Canvas system is to review/extend the FGCanvasSystemAdapter to expose the corresponding APIs. Code snippets illustrating how to do this were posted for the 3D model loader; specifically, look for the FGCanvasSystemAdapter changes in both $FG_SRC and $SG_SRC. Step-by-step instructions can be found here: Howto:Extending Canvas to support rendering 3D models#Extending FGCanvasSystemAdapter. In other words: any API that you need to access from the Canvas system needs a corresponding "getter" added to retrieve the handle from the FG host application.

And that should be reflected in the header file, but the implementation would reside in $FG_SRC/Canvas/FGCanvasSystemAdapter.cxx - the SimGear code would only have a copy of the corresponding header file.[4]

Or let's say you'd like to access the CameraGroup/Viewer APIs: it's relatively straightforward. CameraGroup.cxx already contains code to render a static camera to a texture, stored in a TextureMap named _textureTargets - internally, this is used for building the distortion camera, but you can also exploit it to render an arbitrary camera view to a texture. At the Canvas level, you would then call the equivalent of flightgear::CameraGroup::getDefault() - this would be done at the FGCanvasSystemAdapter level, i.e. by adding a getter function there which returns the TextureRectangle map.

Once you have a texture rectangle, you can also get the osg::Image for it, and that can be assigned to a Canvas image.

Admittedly, that's a little brute force, but it should only require ~30 lines of code added to SG/FG to add a static camera view as a Canvas raster image. Ideally, something like this would be integrated with the existing view manager, i.e. using the same property names (via property objects), and then hooked up to CanvasImage, e.g. as a custom camera:// protocol (we already support canvas:// and http(s)://). So some kind of dedicated CanvasCamera element would make sense, possibly inheriting from CanvasImage.

And it would also make sense to look at Zan's new-cameras patches, because those add tons of features to CameraGroup.cxx. This would already allow arbitrary views slaved to the main view (camera). So, as you can see, PagedLOD/CompositeViewer don't need to be involved to make this happen.[5]

Finally, to use Canvas outside FG, you would also need to look at the FGCanvasSystemAdapter in $FG_SRC/Canvas and provide your own wrapper for your own app (trivial).[6]

Optimizing Canvas

Cquote1.png For efficiency reasons it would be good to draw all symbols to a single canvas/texture and put all quads into a single node. So probably I'll add a new element type for putting quads into a single element which are all rendered at once. Maybe we can even use a geometry shader to just copy the positions to the GPU and generate the full quads with the shader. Ideas and suggestions are always welcome
— TheTom (Mon Sep 24). Re: Using a .
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I'm also interested in any performance issues. For example a canvas is always redrawn if any property changes within the current frame, even if the same value is just written again or changes are too small to be noticeable. Also if a property of a hidden element/group is changed, the canvas is redrawn. Maybe checking if properties have changed enough will gain some speed, but I'm not sure if this will be noticeable at all (only if always the same values are written to the tree...)
— TheTom (Tue Nov 12). Re: How to display Airport Chart?.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I will add a new element type to render symbols from a "Cache-Texture" to improve speed of canvasses showing lots of symbols like eg. the navigation display. You will basically be able to set position (maybe rotation) and index of the symbol in the cache-texture and possibly a color for each instance...
— TheTom (Tue Nov 12). Re: How to display Airport Chart?.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Regarding spatial queries, on the nav-cache side, delta queries would be complex to support. What the C++ NavDisplay does is keep a persistent list which is updated infrequently - only when a setting changes (range, view options config), or when the aircraft moves > 1nm. In C++, computing the delta of two arrays is fast
— zakalawe (Mon Nov 04). Re: How to display Airport Chart?.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I guess we need to come up with some heuristics at the C++ level for selectively updating/rendering parts of the route that are visible/relevant (i.e. not necessarily visible, but part of a visible line segment)
Cquote2.png
Cquote1.png if it's just as fast, it's rendering / rasterization that is probably taking so long, which would mean that we'd need to explore selective updating/rendering of nodes that are neither visible, nor connected to anything visible (line segments).
Cquote2.png


Cquote1.png What would be good to have is the ability to specify a completely different scenegraph in some subcameras. I think of having panel-like instruments on an additional screen/display, for example.
— Mathias Fröhlich (2008-06-28). Re: [Flightgear-devel] RFC: changes to views and cameras.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I have an animation that I call rendertexture, where you can replace a texture on a subobject with such a rtt camera. Then specify a usual scenegraph to render to that texture and voila. I believe that I could finish that in a few days - depending on the weather here :)

The idea is to make mfd instruments with usual scenegraphs and pin that on an object ...


— Mathias Fröhlich (2008-06-28). Re: [Flightgear-devel] RFC: changes to views and cameras.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I believe that we need to distinguish between different render-to-texture cameras: the ones that will end up in mfd displays or a hud or whatever is pinned onto models, and the ones that are real application windows like what you describe - an additional fly-by view, and so on. And I believe that we should keep those separate and not intermix the code required for application-level stuff with the building of 3d models that do not need any application-level code to animate them ... I think of some kind of separation that will also be good if we would do HLA, between a viewer and an application computing physical models or controlling an additional view hooking into a federate ...

— Mathias Fröhlich (2008-07-01). Re: [Flightgear-devel] RFC: changes to views and cameras.
(powered by Instant-Cquotes)
Cquote2.png

The Future of Canvas in FlightGear

Lessons learnt

Canvas is being increasingly adopted, primarily by aircraft developers with little or no coding background. As a result, more and more Canvas-related additions unnecessarily violate design principles of modularization and code reuse: many Canvas-related efforts are not sufficiently generic and lack a unified design/approach, which often makes them useful only in a single context (think instrument/aircraft/GUI dialog).

This is a challenge that Canvas-based features have in common with other aircraft-specific contributions, especially Nasal code. Aircraft developers tend to use copy&paste to adopt new features.

Concepts like object-oriented programming, encapsulation and abstract interfaces that make code reusable and generic are obviously not easily conveyed to non-coders, and even more experienced contributors have faced challenges related to that:

Cquote1.png This sounds like a reusable framework, but the encapsulation doesn't go that far and it is optimised for internal needs. There are some calls going over parents where no interface is "reachable" or defined.
— D-Leon (Feb 22nd, 2014). extra500 - Avidyne Entegra 9 IFD - approach.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png complex MFD instruments like the G1000 series or the Avidyne Entegra R9 are better not implemented directly, but using a "bottom-up" approach, where you identify all required building blocks (e.g. screen component, page component) and build higher-level components on top. Otherwise, there will be very tight coupling at some point, so that it will be really hard to generalize/maintain the underlying code (look at D-LEON's comments above).
— Hooray (Feb 2nd, 2015). Re: Project Farmin [Garmin Flightdeck Frame work].
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Canvas & Nasal are still fairly low-level for most aircraft developers; to come up with good - and fast - displays (code), people still need to be experienced coders, familiar with FlightGear scripting and Canvas technologies/elements and with the way performance is affected by certain constructs. So far, we have the means to create the corresponding visuals, but there's still quite some work ahead to re-implement existing hard-coded displays - and to implement a compelling jet fighter, including a credible cockpit, you would need more than "just" the visuals, i.e. lots of handbooks/manuals, building blocks for creating systems and components, and scripting-space frameworks to help with the latter. The best option to pave the way for this is to keep generalizing existing code, so that instruments support multiple instances, multiple aircraft, and multiple "sensors".
— Hooray (Thu May 29). Re: Does FlightGear has Multiplayer Combat mode?.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Regarding "de-skilling", that's exactly the point of introducing more specific frameworks on top of Nasal and Canvas, developed by more experienced programmers, usable by less-experienced contributors, who often don't need any programming experience at all (see for example Gijs' ND work, which can now be integrated and used with different aircraft, without requiring ANY coding, it's just configuration markup, analogous to XML, but more succinct)
— Hooray (Thu May 29). Re: Does FlightGear has Multiplayer Combat mode?.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Claiming that Nasal/Canvas would be "a failure as a tool" just because people can still implement slow code, is far too short-sighted - just because you are allowed to drive a car (or fly an airplane) doesn't make you an expert in car engines or airplane turbines - things like Nasal and Canvas are really just enablers, that are truly powerful in the hands of people who know how to use them, but that can still be misused by less-experienced contributors.


That is exactly why people are working towards more targeted frameworks on top of Nasal/Canvas - but it's a process that is very much still in progress, and probably will be for at least another 2-3 release cycles.


— Hooray (Fri May 30). Re: Does FlightGear has Multiplayer Combat mode?.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png currently, I am inclined to state that Canvas is falling victim to its own success, i.e. the way people (early-adopters) are using it is hugely problematic and does not scale at all. So we really need to stop documenting certain APIs and instead provide a single scalable extension mechanism, i.e. registering new features as dedicated Canvas Elements implemented in Nasal space, and registered with the CanvasGroup helper - absent that, the situation with Canvas contributions is likely to approach exactly the dilemma we're seeing with most Nasal spaghetti code, which is unmaintainable and is begging to be rewritten/ported from scratch. Which is simply because most aircraft developers are only interested in a single use-case (usually their own aircraft/instrument), and they don't care about long-term potential and maintenance, i.e. there are now tons of Canvas based features that would be useful in theory, but which are implemented in a fashion that renders them non-reusable elsewhere: Canvas Development#The Future of Canvas in FlightGear. So at the moment, I am not too thrilled to add too many new features to Canvas, until this is solved - because we're seeing so much Nasal/Canvas code that is simply a dead-end due to the way it is structured, i.e. it won't be able to benefit from future optimizations short of a major rewrite or tons of 1:1 support by people familiar with the Canvas system. Which is why I am convinced that we need to stop implementing useful functionality using the existing approach, and instead adopt one that is CanvasElement-centric, where useful instruments, widgets and MFDs would be registered as custom elements implemented in Nasal space (via cppbind sub-classing). If we don't do that, we will continue to see cool Canvas features implemented as spaghetti code monsters that reflect badly upon Nasal and Canvas due to lack of design, and performance.
— Hooray (Oct 17th, 2015). Re: WINDOW IN WINDOW.
(powered by Instant-Cquotes)
Cquote2.png

Yet, many Canvas early-adopters were/are working on conceptually similar, and often even identical, features and functionality, so a lot of time is wasted by people not knowing how to provide, and reuse, functionality in a "library" fashion that is agnostic to the original use-case/aircraft (think MapStructure).

Still, contributions developed by aircraft developers are often "singletons by accident", i.e. they support only a single system-wide instance, or are at least implemented in an aircraft-specific fashion, so that they cannot be easily reused elsewhere (original 747 ND/PFD, 777 EFB, extra500/Avidyne Entegra R9).

Cquote1.png the most complex and most sophisticated MFD still is the Avidyne Entegra R9 (extra500): Avidyne Entegra R9
— Hooray (Feb 7th, 2016). Re: Garmin gns530.
(powered by Instant-Cquotes)
Cquote2.png


In addition, contributions tend to be insufficiently structured so that the only way of adopting a popular feature is "Copy & Paste-programming". Even the original Canvas-based airport selection dialog was primarily done using "Copy&Paste" and is still a maintenance challenge, despite having been developed by an experienced FlightGear core developer.

Furthermore, coordinating related efforts to help people come up with generic, reusable and modular implementations is a tedious process that takes up a lot of energy and time (e.g. see the MapStructure and ND/PFD efforts). People tend to get in touch only once they have something to "show", at which point it is often too late to affect the design of a Canvas-based feature to make it sufficiently generic and reusable without major effort, or restructuring the code accordingly takes a lot of time and energy (e.g. 777 EFB). The result is often unmaintainable for people less familiar with fundamental coding concepts, at which point ownership/maintenance is typically delegated to the very people trying to help with design issues, who are usually already juggling dozens of projects.

Additionally, many aircraft developers simply don't know how to identify overlapping functionality and how to come up with generic building blocks that can be used elsewhere, while others are generally not interested in helping contribute to a unified framework out of fear that their time is "wasted" and should be better spent working on their own aircraft/feature instead (extra500/Avidyne Entegra R9).

Equally, multi-instance setups like those at FSWeekend are still not explicitly supported by any Canvas-related efforts, which means that glass-cockpit functionality (MFDs like a PFD or ND) cannot currently be easily replicated/synchronized across several instances (think multiplayer/dual-pilot or master/slave setups). This matches restrictions found in the original od_gauge based instruments, even though it need not be the case, given the generic nature of the Canvas system and its foundation on key property tree concepts.

One key concept that aircraft developers are familiar with, however, is the property tree, which could - and should - thus be the mechanism to provide interfaces that "just work": expose the existing Canvas APIs to scripting space and encourage new features to be provided as PropertyBasedElements registered with the main Canvas system, implicitly supporting multiple instances, different aircraft, styling and multi-instance setups.

In the last couple of years we've been increasingly prototyping useful features in scripting space, so that Canvas is primarily useful due to extensive Nasal support. In fact, many recent additions would be crippled without also using Nasal and its cppbind/canvas bindings. However, adding new Nasal dependencies is generally frowned upon by core developers due to Nasal's GC issue. In addition, Nasal is too low-level for most aircraft developers, who often don't know how to create a component in such a way that the component is truly generic and reusable. Nasal coding makes this job even harder for many people.

However, the nature of the property tree makes it possible to map components onto a property tree hierarchy, so that these components inherently support important design characteristics (multiple instances, property inheritance, aircraft independence etc).

Currently, we're adding an increasing number of useful Canvas-based systems to FlightGear, such as the ND, PFD, MapStructure, Avidyne Entegra R9 and various other modules. However, all of these are mainly Nasal-based, and there's no way for people to instantiate these modules without also knowing Nasal. This is breaking some important concepts of the property tree and Canvas: namely, system-wide orthogonality. A properly-designed Canvas module would be usable even outside just Nasal space, e.g. just via the property tree (refer to the AI traffic system or the Canvas system for example).

Thus, a new Canvas component like a ND or PFD would ideally still be implemented in scripting space using a few Canvas bindings, but the abstract interface for setting up and controlling the system would live solely in property tree space, without people necessarily having to touch any Nasal code.

This would be in line with existing hard-coded gauges, whose external interface is solely the property tree (e.g. wxradar, od_gauge, agradar etc). In addition, establishing the property tree as the main interfacing mechanism for new Canvas-based elements also means that a stable API is much easier to provide/maintain, as it would mainly live in property space.

That can be accomplished by allowing custom Canvas elements to be implemented in Nasal and registered with the Canvas system, so that an ND/PFD widget could be instantiated analogously to any other Canvas element by modifying the property tree, which would internally map things to a Canvas::Element/PropertyBasedElement sub-class implemented in scripting space.

The major advantage here is a strong focus on encapsulation, as well as clean interfaces that lend themselves to being easily re-implemented/optimized in C++ space, e.g. by moving certain prototyped functionality (think Canvas animations using timers/listeners) out of Nasal space into C++ for better performance once the need arises.

Equally, such a modular approach would allow us to easily sync multiple fgfs instances (think dual-pilot/multiplayer) by using just properties, without any explicit Nasal calls having to be made in other instances, because things would be transparently dispatched behind the scenes, using just properties.

Goals

Cquote1.png the main motivation is that we want to provide some more "structure" for people creating canvas-based features like the pfd, nd, efb, HUD, cdu etc - but also MapStructure/Avidyne stuff. As long as new displays/instruments can be registered as higher-level canvas elements, we would ensure that some form of encapsulation is enforced, i.e. so that multiple instances can be trivially supported, and I/O really just takes place via the property tree. People would need to declare the properties that they read/write through custom attributes, which would make it straightforward to support distributed setups for any canvas-based textures, including multiplayer, but also multi-instance setups like those at FSWeekend.
— Hooray (Mon Jun 02). Re: NasalElement vs. CanvasMgr::elementCreated.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png The long-term idea is to establish sufficient encapsulation, so that we can also support recursion, and use the whole thing in stand-alone mode, e.g. something like FGCanvas, without being extremely specific to a single aircraft/instrument. I think this could be a worthwhile direction to explore in the long term.
— Hooray (Mon Jun 02). Re: NasalElement vs. CanvasMgr::elementCreated.
(powered by Instant-Cquotes)
Cquote2.png

The other goal is improved accessibility for existing features/code wanting to use Canvas-based functionality (think MapStructure layers) without adding any explicit Nasal dependencies, e.g. the new Integrated Qt5 Launcher, where increasing code duplication and added maintenance workload are an issue, too.

This can be seen in functionality that is now getting added/re-implemented in Qt5/C++ space despite already existing elsewhere: there is currently no code reuse taking place when it comes to the Qt5-based location tab, even though the Canvas/MapStructure-based airport/taxiway layers are very much superior in comparison, as well as much more maintainable (living in fgdata) - so it would make sense to work out a way to reuse existing code instead.

Once the PropertyBasedElement/CanvasElement wrappers are fully exposed to scripting space, we could easily register MapStructure as a new Canvas element for directly making the corresponding layers available to C++ code, without introducing any direct Nasal dependencies - i.e. the corresponding airports/runway and taxiways diagrams would be transparently created by Nasal-based Canvas elements, with the Qt5/C++ code only ever having to set a handful of properties (airport ICAO id, styling, range etc).

Examples

From a design standpoint, we would then be able to use something like group.createChild("widget-button").set("label","Exit"), which would be straightforward to synchronize (a handful of properties vs. a full Canvas group) - relevant not just for MP scenarios, but also for external GUIs interfaced to FG, e.g. an instructor console.

We should probably keep this in mind, even if we end up using some compromise - personally, I would appreciate being able to expose *complex* canvas systems like the ND/PFD as a dedicated PropertyBasedElement that has its own property interface, possibly even by locking/hiding some internal state at some point.

Exposing PropertyBasedElement as a base class would be a good first step, and maybe we could add some methods to set up "interface properties" via attributes - Canvas kinda has all the code in place already because of the CSS/styling parsing code it has in the CanvasElement base class.

Approach

Cquote1.png what is the recommended way to register new elements at runtime? I can see that sc::Element is already exposed to Nasal via NasalCanvas.cxx - I have added another _newCanvasElement() member to the canvas namespace, and am looking to add a method to expose the std::map with group elements so that new groups can be added.

The main idea here is that I want to be able to extend the core canvas system by allowing custom elements to be prototyped & registered via Nasal (analogous to addcommand()). According to CanvasMgr this seems to be supported already to some extent?


— Hooray (Mon May 12). NasalElement vs. CanvasMgr::elementCreated.
(powered by Instant-Cquotes)
Cquote2.png

That is why I believe that we should work out a way to allow new Canvas functionality to OPTIONALLY also be mapped to a PropertyBased*-interface - for example, we could have a property-based element that maps property writes to calls for widget creation using factories (as is the case already for existing elements). The difficult code is already in place; the main thing we would need is a property tree interface and a mapping scheme for calling the right APIs for user-defined elements.

Similarly, we could expose MFDs like a PFD, ND or EFB as a property-based system within each instance's property tree.

Challenge: Instancing

Despite Canvas internally using/being OOP, and despite using OOP at the Nasal level, the representation in the property tree itself mainly deals with texture elements that lack any notion of formalized dependencies and behavior, i.e. in terms of what is represented and which events (signals) are supported. As a result, the texture state only means something during an active fg session, and is specific to that single session, too - i.e. MFD state cannot currently be replicated easily to other instances (think multiplayer, dual-pilot, FSWeekend-like setups), due to this lack of encoding data dependencies at the tree level, where really just Canvas primitives are animated/updated, without the tree/Canvas system itself having any concept of what it is doing from a high-level standpoint.

And because of all this, we are sacrificing potential to optimize things, i.e. OSG no longer knows that it is rendering the same thing (sub-scene graph) when showing 20 trajectory maps or 10 PFD/MFDs - it will just happily be as wasteful as it can be, creating each scenegraph from scratch.

All this because we are currently failing to provide the required meta information by annotating Canvas-related state/groups that can/should be shared, or merely parametrized.

This may not seem relevant in the context of the trajectory map, because OSG/SG will internally cache the texture, but more complex dialogs/MFDs with their own scenegraph would greatly benefit from encoding what is instance-specific and what isn't (what is common and can be shared) - e.g. imagine a complex dialog showing several instances of the same PFD/MFD, driven by different data (think AI aircraft). At the scenegraph level, it would make sense to use instancing whenever possible, including shared geometries - i.e. shallow clones whenever possible, deep clones if necessary.

Looking at Canvas-based features that are massively slow (think extra500/Avidyne Entegra R9), those would indeed be faster in C++ - but only because C++ is closer to the metal than Nasal/Canvas, the underlying approach is still unfriendly to OSG/OpenGL overall, because there is hardly any stateset/resource sharing going on, and because of the code doing unnecessary/redundant things.

Referring to the Avidyne Entegra R9, just imagine running 10 independent instances of the instrument shown in a GUI dialog.

At the Canvas/Element level, we could change that by encoding meta information, to declare what Canvas state/groups (osg::StateSet/osg::Node) can/should be instanced, and which ones cannot.

Sooner or later we will need to come up with features that allow avionics developers to declare whether a group can be considered static/final, e.g. for background images (no DYNAMIC variance, sharing/instancing allowed), or whether a group actually represents fully dynamic state, such as an MFD screen, whose elements may still be instanced (imagine GUI widgets like a button).

Shared Cockpit MFDs

Cquote1.png It would be a great feature if multiplayer mode could allow two or more online or local network players to share one cockpit. Is this already possible or not yet? Then one player can be the captain, another one the first officer, and the third one the flight engineer. Maybe even a second officer.
— CaptainTech (Dec 29th, 2015). Global Feature Suggestion for FlightGear: Cockpit Sharing.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png it would be possible to support the whole thing in a "global" fashion with a handful of minor additions, mainly based on looking at related subsystems, e.g. the "instant replay" (flight recorder) feature - and hooking that up to the multiplayer system by XDR-encoding the corresponding properties. The main thing that aircraft developers would still have to do is to create a corresponding "flightrecorder" configuration for their aircraft/cockpit to encode the transmission/update semantics accordingly.
Cquote2.png
Cquote1.png More complex cockpits/aircraft require more changes. But under the hood, it is mainly about formalizing state management - which overlaps with the way the flight recorder has to work, but also the MP protocol.
Cquote2.png
Cquote1.png any aircraft that 1) supports multiplayer and 2) supports the flight recorder/replay feature and 3) distributed setups (like those at FSWeekend/LinuxTag), could /in theory/ also support "Dual Control" - certainly once/if the underlying systems are merged.

The building blocks to make something like this possible are already there - the difficult stuff is convincing aircraft developers (like yourself) to adopt the corresponding systems (multiplayer and the flight recorder). So the whole "global" thing would be possible to pull off, but it should not be done using Nasal and the existing MP system. In the case of the shuttle, or even just complex airliners, formaliing data dependencies (switch states, annunicator states etc), that would be tons of work to do manually, given the plethora of switches and state indicators - which is why I am not convinced that this should be done manually, but done semi-automatically by annotating properties (and possibly even branches of properties in the tree). A while ago, I did experiment with replicating a Canvas-based PFD/ND display in another fgfs instance using the "brute force" approach - i.e. copying the whole property branch of the corresponding Canvas via telnet and patching it up via Nasal subsequently, the whole thing was not very elegant, but it actually worked. So I do understand how difficult this is, as well as the limitations of the current system - however, if aircraft/cockpit developers had a handful of property attributes to differentiate between different kinds of simulator state (local/remote vs. switches vs. displays), it would be possible to pull this off, pretty much by using the existing technology stack - the main limitation would be bandwidth then, i.e. you would have to be on the same LAN as the other instances, because it simply isn't feasible to replicate a PFD/ND using low-level calls (primitives) - instead, the whole instrument logic would need to be running in each fgfs instance, with only events being propagated accordingly - i.e .in a master/slave fashion. 
Admittedly, this is a restriction/challenge that even recent MFD cockpits are sharing with ODGauge-based cockpits (think wxradar, agradar, navdisplay etc), but that does not have to be the case necessarily, because we can already readily access all the internal state by looking at the property tree. But even if such a system were in place today, the way we are using Nasal and Canvas to create MFDs would need to change, i.e. to formalize data dependencies, and to move away from low-level primitives that are only understood by Nasal code - which is to say that new Canvas-based features (e.g. MFDs) would need to be themselves registered as Canvas::Element instances, implemented in scripting space, to ensure that a sane property-based interface is provided and used, without adding explicit Nasal dependencies all over the place: Canvas Development#The Future of Canvas in FlightGear

So we would need both 1) an updated transport/IPC mechanism, and 2) a better way to encapsulate Canvas-based features in a way that properties are the primary I/O means, which is ironically how hard-coded instruments are working already - we are just violating the whole design concept via Nasal currently, which is also making it more difficult to augment/replace Nasal-based components that turn out to be performance-critical.
Cquote2.png

Challenge: IPC and Serialization

Cquote1.png In terms of integration with the property tree, I'm thinking that in the short term all the different components that we split out into separate threads or executables will simply use their own properties trees, and use the RTI to reflect the particular (minimal) data that needs to be passed between components.
— Stuart Buchanan (Nov 19th, 2015). Re: [Flightgear-devel] HLA developments.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Will network-linking of FG sessions synchronise ALL of the aircraft's property data, thus also syncing radio, instrument and cockpit data? For the visuals, only the basic 6DOF are needed, but is there also a way to keep everything inside the A/C's panels up to date all the time?
— Robin van Steenbergen (Sep 22nd, 2007). Re: [Flightgear-devel] Serious simmer.
(powered by Instant-Cquotes)
Cquote2.png


Cquote1.png my original issue was to make external instrumentation possible over the network, not on a single PC with 6 monitors on it. Distribute the computing power, allowing more processing power for the flight dynamics and visuals and a flexible instrument setup.
— Robin van Steenbergen (Sep 22nd, 2007). Re: [Flightgear-devel] Serious simmer.
(powered by Instant-Cquotes)
Cquote2.png


Cquote1.png Some of the intelligence could be transferred from FG to the external applications and interface logic, while still keeping FG up to date on any changes, through the property system.
— Robin van Steenbergen (Sep 21st, 2007). Re: [Flightgear-devel] Serious simmer.
(powered by Instant-Cquotes)
Cquote2.png


Cquote1.png ARINC661, for example, has a clear separation between the display graphics and the rendering engine.
— Robin van Steenbergen (Sep 21st, 2007). Re: [Flightgear-devel] Serious simmer.
(powered by Instant-Cquotes)
Cquote2.png


Cquote1.png Also it would be nice if the state of the canvas can be serialized easily and with only little data into an other application. That is to be able to set up multiple viewer applications all displaying the same content. Think of an mfd that is shown in a bigger multi viewer environment. This should be efficient. How to achieve this efficiently requires a lot of thought.
— Mathias Fröhlich (2012-10-22). Re: [Flightgear-devel] Canvas reuse/restructuring.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png If we're rendering each display as an OSG sub-camera, extracting that logic and wrapping it in a stand-alone OSG viewer should be simplicity itself - and so long as it's driven by properties, those can be sent over a socket. That's an approach which seems a lot more bearable to me than sending per-frame pixel surfaces over shared memory or sockets / pipes.
Cquote2.png
Cquote1.png I think of some kind of separation that will also be good if we would do HLA between a viewer and an application computing physical models or controlling an additional view hooking into a federate ...[7]
— Mathias Fröhlich
Cquote2.png


Using the PropertyBasedElement interface means we do not have to abandon the established "property-for-IPC" mechanism.

Currently, replicating the ND in another instance is a fairly massive undertaking across telnet - and while telnet is unnecessarily slow, we really only need to sync very specific state, not the full canvas. This approach could serve us well in the long term, not just for fgcanvas usage, but for anything that involves multiple fgfs instances.

Challenge: Multithreading

Cquote1.png once it [Canvas] is in simgear, it should be really multi-viewer/threading capable. Everything that is not might be changed at some time to match this criterion.

Such a change often comes with changes in the behavior that are not strictly needed but where people started relying on at some time. So better think about that at the first time.


— Mathias Fröhlich (2012-10-22). Re: [Flightgear-devel] Canvas reuse/restructuring.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png If it's really the amount of property accesses that contributes to the performance issue, it'd be a classical case where a conventional GUI fires a worker thread to remain responsive until all the background work has been processed.
— Hooray (Thu Sep 20). Re: Using a .
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png This would seem to suggest that, with some changes to the C++ code, we could assemble a canvas tree asynchronously (in the background) and afterwards make it available in the global tree. In that case, it would probably make sense to add a hook to the canvas system to disable its listeners for a certain branch, so that the copy operation doesn't invoke any listeners for any of the added children - and just set a "finished" signal after the copy operation has completed, so that the canvas can "parse" and process the new tree.
— Hooray (Thu Sep 20). Re: Using a .
(powered by Instant-Cquotes)
Cquote2.png

Originally, the whole Canvas idea started out as a property-driven 2D drawing system, but admittedly, what we ended up with is, unfortunately, a system that is by now tightly coupled to Nasal. Indeed, there are some things where you definitely need to use Nasal to set up/initialize things. But under the hood, 99% still is pure property I/O, which is also why the property tree is becoming a bottleneck.

In general, Nasal is not the problem here - but the way the Canvas system is designed, and the way both Nasal and Canvas are integrated: it's a single-threaded setup, i.e. we are inevitably adding framerate-limited scripted code that runs at <= 60 Hz in the main loop, to update rendering-related state. This is a bit problematic, but not a real problem to fix.[8]

One option would be giving each Canvas its own private property tree that merely receives/dispatches events, possibly even with its own FGNasalSys instance, to ensure that there is no unnecessary serialization overhead - at that point, you could update Canvas textures ("displays") asynchronously and let OSG's CompositeViewer handle the nitty-gritty details of getting each sub-camera drawn/updated without running in the main loop.[9]

Most Canvases could in fact have their own private property tree and a private Nasal instance directly hooked up to that tree, instead of using the current approach - as long as we're working with the assumption that all stuff only ever runs in the main loop, we are not exactly doing Nasal a huge service ....[10]

It is trivial to run Nasal in another thread, and even to thread out algorithms using Nasal. Nasal itself was designed with thread-safety in mind, by an enormously talented software engineer with a massive track record in this kind of thing (a background in embedded engineering at the time). FlightGear, however, was never "designed" in the way Thorsten alluded to - rather, its architecture "happened", shaped by dozens of people over the course of almost two decades.

The bottleneck when it comes to threading in Nasal is indeed FlightGear, the very instant you access any non-native Nasal APIs, i.e. anything that is FlightGear specific (property tree, extension functions, fgcommands, canvas) - the whole thing is no longer easy to make work correctly, without re-architecting the corresponding component (think Canvas).

In the case of Canvas, it would be relatively straightforward to do just that, by introducing a new canvas mode where each canvas (texture) gets its own private property tree node (SGPropertyNode) that is part of simgear::canvas. At that point, you can also add a dedicated FGNasalSys instance (Nasal interpreter) to each canvas texture, which could be threaded out using either Nasal's threading support or simgear's SGThread API.

Obviously, there would remain synchronization points, where this "canvas process" (thread) would fetch data from FlightGear (properties) and also send back its output to FlightGear (aka the final texture).

Other than that, it really is surprisingly straightforward to come up with a thread-safe version of the Canvas system by making these two major changes - the FGNasalSys interpreter would then no longer have access to the global namespace or any of the standard extension functions, it could only manipulate its own canvas property tree - all I/O between the canvas texture thread (Nasal) and the main loop (thread) would have to take place using a well defined I/O mechanism, in its simplest form a simple network protocol (even telnet/props or Torsten's AJAX/mongoose layer would work "as is") - more likely, this would evolve into something like Richard's Emesary system.[11]


[...] there is a thing called the global property tree, and there is a single global scripting interpreter. The bottleneck when it comes to Nasal and Canvas is unnecessary, because the property tree merely serves as an encapsulation mechanism, i.e. strictly speaking, we're abusing the FlightGear property tree to use listeners that are mapped to events, which in turn are mapped to lower-level OSG/OpenGL calls - which is to say, this bottleneck would not exist if a different property tree instance were used (per Canvas/texture).

This, in turn, is easy to change - because during the creation of each canvas, the global property tree _root is set, which could also be a private tree instead.

Quite literally, this means changing 5 lines of C++ code to use an instance-specific SGPropertyNode_ptr instead of the global one.

At that point, you have a canvas that is inaccessible from the main thread (which sounds dumb, but once you think about it, that's exactly the point). So the next step is to provide this canvas instance with a way to access its property tree, which boils down to adding an FGNasalSys instance to each canvas - that way, each canvas texture would get its own instance of SGPropertyNode + FGNasalSys.

Anybody who's ever done any avionics coding will quickly realize that you still need a way to fetch properties from the main loop (think /fdm, /position, /orientation), but that's really easy to do using the existing infrastructure - you could use any of the existing I/O protocols (think Torsten's AJAX stuff) - and you'd end up with Nasal/Canvas running outside the main loop.

The final step is obviously making the updated texture available to the main loop, but other than that, it's much easier to fix up the current infrastructure than fixing up all the legacy code ...

[...] telling the canvas system to use another property tree (SGPropertyNode instance) is really straightforward - but at that point, it's no longer accessible to the rest of the sim. You can easily try it for yourself: just add a "text" element to that private canvas. The interesting part is making that show up again (i.e. via placements). Once you are able to tell a placement to use such a private property tree, you can synchronize access by using a separate thread for each canvas texture (property tree). But again, it would be a static property tree until you provide /some/ access to it, so that it can be modified at runtime - and given what we have already, hooking up FGNasalSys is the most convenient method. But all of the canvas bindings/APIs we have already would need to be reviewed to get rid of the hard-coded assumption that there is only a single canvas tree in use.

Like you said, changing fgfs to operate on a hidden/private property tree is the easy part, interacting with that property tree is the interesting part.

Also, it would be a very different way of coding: we would need some kind of dedicated scheduling mechanism, or such background threads might "busy wait" unnecessarily.[12]
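One conventional way to avoid such busy waiting is to have the background thread block on a condition variable until the main loop signals that there is work to do. A minimal sketch, using plain C++11 threads as a stand-in for SGThread (all class names are illustrative):

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// A canvas worker that sleeps until the main loop signals a new frame,
// instead of polling its property tree in a tight loop.
class CanvasWorker {
public:
    CanvasWorker() : thread_([this] { run(); }) {}
    ~CanvasWorker() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            quit_ = true;
        }
        cv_.notify_one();
        thread_.join();
    }
    // Called from the main loop at a well-defined synchronization point.
    void signalNewFrame() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            pending_ = true;
        }
        cv_.notify_one();
    }
    int framesDrawn() const { return frames_.load(); }
private:
    void run() {
        std::unique_lock<std::mutex> lock(mutex_);
        for (;;) {
            // Sleep (no CPU use) until there is a pending frame or a shutdown.
            cv_.wait(lock, [this] { return pending_ || quit_; });
            if (quit_) return;
            pending_ = false;
            // Redraw the private canvas tree here, then publish the texture.
            frames_.fetch_add(1);
        }
    }
    std::mutex mutex_;
    std::condition_variable cv_;
    bool pending_ = false;
    bool quit_ = false;
    std::atomic<int> frames_{0};
    std::thread thread_;
};
```

Note that multiple signals arriving before the worker wakes up coalesce into one redraw, which is usually the desired behavior for a texture.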

Providing a new execution model for new Canvas modules, where a Canvas texture has a private property tree that can only be updated by a Nasal script running outside the main loop, would be feasible and is in line with ideas previously discussed on the developers mailing list. Furthermore, that approach is also in line with the way web browsers have come to address the long-standing issue of JavaScript blocking tabs, by coming up with the "web extension" framework and its message-passing based approach: one script context runs outside the main thread ("background scripts"), another one ("content scripts") runs inside the main loop, and the two communicate only via "events" (messages).

This kind of setup could be made to work by providing a new/alternate Canvas mode, where the Canvas tree would never show up in the global tree, but would instead be bound to a private FGNasalSys instance, minus all the global extension functions.

With the exception of nested canvases (i.e. those referencing another canvas via a raster image lookup), canvas textures could be updated/re-drawn outside the main loop, and would only require a few well-defined synchronization points: fetching updated properties/navaid info, and providing the final texture to the main loop. This is where Emesary could become a real asset.
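The message-passing idea can be reduced to a very small sketch, loosely modeled on the Emesary transmitter/recipient pattern mentioned above (these are not the real Emesary APIs; all names are illustrative): components never call each other directly, they only exchange notifications through a transmitter.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// A notification carries a type tag and an opaque payload, e.g.
// "properties-updated" going into the canvas thread, or "texture-ready"
// coming back out to the main loop.
struct Notification {
    std::string type;
    std::string payload;
};

// The transmitter is the only coupling point between the two sides.
class Transmitter {
public:
    using Recipient = std::function<void(const Notification&)>;
    void registerRecipient(Recipient r) {
        recipients_.push_back(std::move(r));
    }
    void notifyAll(const Notification& n) {
        for (auto& r : recipients_) r(n);
    }
private:
    std::vector<Recipient> recipients_;
};
```

In a threaded setup, notifications would be queued and drained at the synchronization points described above rather than dispatched synchronously.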

In and of itself, this won't help with legacy aircraft/code - at least not directly - but it would provide an alternative that people interested in better performance could adopt over time, while investigating how legacy code could be dealt with so that it can benefit without too much manual work (such as providing a list of subscribed properties that are automatically copied to the private property tree running in the background context). This won't be as efficient, but having a list of input/output properties could work well enough for most people's code.[13]
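The "list of subscribed properties" idea can be sketched as follows, with plain string-keyed maps standing in for the two property trees (the helper name and paths are illustrative): at each synchronization point, the main loop copies only the declared input properties across into the background canvas tree.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Plain maps stand in for the main-loop tree and the private canvas tree.
using PropertyTree = std::map<std::string, double>;

// Copy only the properties a legacy instrument declared as inputs into the
// background canvas tree; everything else stays out of the private tree.
void syncSubscribed(const PropertyTree& mainTree,
                    const std::vector<std::string>& subscribed,
                    PropertyTree& privateTree) {
    for (const auto& path : subscribed) {
        auto it = mainTree.find(path);
        if (it != mainTree.end())
            privateTree[path] = it->second;  // copy the input value across
    }
}
```

Output properties would be handled symmetrically, copied from the private tree back into the main tree at the same synchronization point.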

Use Cases

The main point being that we do want to support complex FG setups that are using multiple inter-linked fgfs instances/sessions - back when we played with this ~12 months ago, it was working simply by replicating raw Canvas properties from one instance to another - I think we were using just telnet + listeners to copy one canvas tree to another instance.

And this is an important consideration because we are still supporting native protocol master/slave setups, but our existing hard-coded od_gauge based glass instruments do not provide support for sync'ing.

With canvas we can easily "sync", but it will be fairly low-level to sync an MFD using just a handful of canvas primitives.

The issue here is that while that works, it is understandably very low-level - not so much for primitives like placing a label or an image, but for complex canvas contents such as widgets and, especially, MFDs.

That way, each fgfs/fgcanvas instance would have some awareness of what it is rendering, and could be much more efficient when it comes to updating/sync'ing state.

The "raw" mode would require all canvas primitives to be copied 1:1, while a "smart" property-based approach would know that it only needs to make a certain call to replicate a certain canvas - such as a PFD/ND or even just a button/widget - because the encapsulated property-based element would expose its own interface.

That would mean that in an inter-linked fgfs setup, exchange between multiple instances could be much more efficient.
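The difference between the two replication modes can be illustrated schematically, with strings standing in for protocol messages (both helper functions are purely hypothetical): "raw" mode mirrors every primitive property, while "smart" mode sends one high-level call because the element exposes its own interface.

```cpp
#include <cassert>
#include <string>
#include <vector>

// "Raw" replication: one message per canvas primitive property.
std::vector<std::string> rawSync(int primitiveCount) {
    std::vector<std::string> messages;
    for (int i = 0; i < primitiveCount; ++i)
        messages.push_back("/canvas/texture/primitive[" +
                           std::to_string(i) + "]");
    return messages;
}

// "Smart" replication: one high-level call against the element's own
// interface, e.g. element = "nd", call = "set-mode=MAP".
std::vector<std::string> smartSync(const std::string& element,
                                   const std::string& call) {
    return {element + ":" + call};
}
```

For an ND built from hundreds of primitives, the saving per update is roughly the primitive count itself, which is why the "smart" mode matters for inter-linked fgfs setups.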

Candidates

Cquote1.png I don't think that we need many Nasal-driven elements - the motivation is really just to provide a mechanism so that new building blocks can be directly registered as new elements, which may initially be prototyped in scripting space, i.e. by the MapStructure/ND folks. I am primarily thinking in terms of things like our SymbolCache for now, which would seem like a sure candidate for being re-implemented in C++ at some point.

This would mean that we can prototype new element types, and if/when those are optimized through C++ additions, any back-end code will automatically benefit from such optimizations, without having to be ported, because it already was a proper CanvasElement previously.


— Hooray (Mon May 12). NasalElement vs. CanvasMgr::elementCreated.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png Likewise, our current "SymbolCache" could be registered as a custom NasalCanvasElement, and easily re-implemented in C++ at some point.

MFDs are another good example, because page/mode management is one of the most common requirements here, which is explicitly implemented using fairly ugly Nasal code at the moment, but can be trivially expressed through supporting a custom "MFD" element with "page" children (basically groups).


— Hooray (Mon Jun 02). Re: NasalElement vs. CanvasMgr::elementCreated.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png I am basically thinking of turning important building blocks (PFD/ND, HUD/EFB) into NasalCanvasElements and allowing those to be registered, so that any canvas can use such elements, i.e. to implement support for animations, caching.

The idea is to establish boundaries and interfacing mechanisms, so that things can be easily optimized/re-implemented or replaced once the need arises, without us having to touch a ton of places. For example, a simple "animation" element would accept a handful of property events and internally manage timers and listeners - if that shows up as a bottleneck at some point, it can be optimized or re-implemented through native C++ code, without having to change the front-end code.


— Hooray (Mon Jun 02). Re: NasalElement vs. CanvasMgr::elementCreated.
(powered by Instant-Cquotes)
Cquote2.png
Cquote1.png For starters, this may involve a simple Nasal-driven element for handling things like 1) styling & animations, 2) terrain height maps or even 3) ESRI shape files.

Currently, we are implementing animations by using timers and listeners - in the long run, we may want to use SGCondition/SGStateMachine or even some of the AP/PID components to reduce Nasal overhead - thus, exposing a dedicated wrapper means encapsulating things, so that we only have to update a single place then. We could prototype such things in Nasal and if we find things being too slow, we can trivially update stuff.


— Hooray (Mon May 12). NasalElement vs. CanvasMgr::elementCreated.
(powered by Instant-Cquotes)
Cquote2.png

Benefits

Using the PropertyBased approach, where each canvas feature can register itself as an extension of the core system, would mean that the sync mechanism can be really lightweight, and could even be implemented on top of our existing I/O protocols (think multiplayer/dual-pilot).

The other issue here is that with all these canvas-based efforts going on, people need to be "forced" to establish generic systems and interfaces - or they'll just use copy & paste, and unnecessarily end up with widgets and instruments that are singletons or aircraft-specific.

Conclusion

If we continue "as is", we're abandoning the "sole-property" philosophy in the mid-term, simply because we're implementing increasingly complex systems (MFDs, GUI widgets, HUDs, 2D panels) on top of canvas, without the property tree being aware of what a given canvas tree actually represents internally in terms of actual functionality, and external data dependencies.

But as soon as we expose PropertyBasedElement as an interface via cppbind, we can establish "best practices" to demonstrate how new Canvas features can be implemented in a property-tree-aware fashion. The only thing missing is some kind of simple access restriction in a public/private/protected fashion, so that the internal state of a widget cannot be mutated.

This may all seem very complicated and like over-engineering, but it can be implemented by inheriting from PropertyBasedElement and using a handful of attributes that specify an interface in the form of XML attributes.

Implementation

1rightarrow.png See Canvas_Sandbox#CanvasNasal for the main article about this subject.

When it comes to exposing Nasal features via the property tree in the scope of Canvas, there are 3 main building blocks:

  • PropertyBasedElement
  • Canvas::Element
  • Canvas::Group

Of these three, only the last one really needs to be exposed to accomplish our goal of allowing Nasal space elements to be registered as Canvas elements, while retaining the existing property tree interface, without having to go through scripting space for instantiating a new element.

This would for example make it possible to allocate a new window, widget (button, checkbox, label etc.) or MFD (navdisplay, PFD, EFB) just by setting a few properties, analogous to the already existing Canvas elements (text, image, path). By using Canvas::Group as the base class for doing so, we ensure that we can create arbitrarily-nested hierarchies of top-level wrappers for custom elements, which would internally preserve the usual Canvas structure.
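A reduced sketch of what such a name-based element registry could look like (illustrative names, not the actual Canvas factory code): custom "meta" elements are registered by name next to the built-in primitives, so a createChild()-style call can instantiate them purely from a string, just as setting a property would.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

// Common base for all elements, standing in for Canvas::Element/Group.
struct Element {
    virtual ~Element() = default;
    virtual std::string typeName() const = 0;
};

// Maps element names (e.g. "button-widget", "nd-mfd") to factories, so new
// "meta" elements can be registered without touching the core system.
class ElementRegistry {
public:
    using Factory = std::function<std::unique_ptr<Element>()>;
    void registerElement(const std::string& name, Factory f) {
        factories_[name] = std::move(f);
    }
    std::unique_ptr<Element> createChild(const std::string& name) const {
        auto it = factories_.find(name);
        return it == factories_.end() ? nullptr : it->second();
    }
private:
    std::map<std::string, Factory> factories_;
};

// Example custom element; in Canvas this would wrap a group of primitives.
struct ButtonWidget : Element {
    std::string typeName() const override { return "button-widget"; }
};
```

Unknown names simply fail to resolve, which is also where a version property could hook in to select between old and new implementations of the same element name.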

The code required to expose an existing C++ base class to Nasal space, so that it can be sub-classed there, can be seen in $SG_SRC/canvas/layout/NasalWidget.cxx, where the underlying C++ interface is registered as a base class - more specific widgets then inherit from the C++ interface class, while all the key functionality is implemented in scripting space.

The same approach would allow new Canvas-based features to be developed while maintaining parity with existing property interfaces, especially the clean separation of different canvas elements.

Over time, we would move away from the increasingly Nasal-focused approach of declaring, using and instantiating/maintaining Canvas-based features, towards retaining the FlightGear property tree as the sole/main interfacing mechanism for creating/controlling new functionality, including GUI and MFD features.

This would be in stark contrast to the current practice of having relatively low-level building blocks represented in the property tree, with functionality mainly being determined in scripting space, and data dependencies not being properly formalized.

Sooner or later, this would help us establish the property tree as the main access point for any GUI/MFD functionality, which also means that we can trivially support backward compatibility - e.g. by honoring a corresponding version property for "meta" elements like a PFD, ND or GUI widget.

Equally, it would be possible to identify performance-critical components (think animation handling) and easily augment/re-implement those in C++ space, without breaking existing code - as long as the latter is only using dedicated property tree APIs, and not any scripting space calls directly.

Nasal would mainly be used for quickly prototyping new elements, while ensuring that all new functionality is a first class concept - without introducing any unnecessary Nasal dependencies.

Aircraft would no longer need to call custom Nasal space APIs to use a certain MFD or GUI widget, but would merely invoke canvas.createChild() with the corresponding arguments, e.g.:

  • myGroup.createChild('label-widget');
  • myGroup.createChild('checkbox-widget');
  • myGroup.createChild('button-widget');
  • myGroup.createChild('repl-widget');
  • myGroup.createChild('map-widget');
  • myGroup.createChild('pfd-mfd');
  • myGroup.createChild('nd-mfd');

Internally, these would still be mapped to the already existing Nasal APIs (think Widget.nas, Button.nas etc) - while establishing a clean interfacing boundary, so that aircraft developers can rely on certain features to "just work".

Likewise, supporting multi-instance Canvas use-cases would become much more straightforward this way. If we should ever need to optimize/re-implement certain parts in C++ space, there would be a clean property interface to do so (which could even support versioning/backward compatibility easily). In fact, by using this approach, we could even replace the Nasal engine entirely or add a new scripting engine at some point, without Canvas-based MFDs having any external Nasal interfacing requirements - because, by allowing custom Canvas elements to be registered and implemented in scripting space, the main interfacing mechanism for any Canvas MFD would still be the property tree.

Suggested reading

References