Canvas development
The FlightGear forum has a subforum dedicated to Canvas.
The following is a list of Canvas related proposals and discussions that have come up over the years, some of these are efforts currently in progress:
- Hackathon Proposal:Canvas Widgets - getting rid of PUI via the Canvas GUI system
- Unifying the 2D rendering backend via canvas - getting rid of 2D Panels and the hard-coded HUD via the Canvas
- Shiva Alternatives - porting Canvas.Path
- Canvas SVG - making Canvas SVG handling faster
- Canvas popout windows - allowing canvas popout windows (Jules came up with related code as part of the CompositeViewer code)
- Canvas instancing - making Canvas displays more efficient
- Canvas Threading - a New Canvas Execution Model (RFC)
- Canvas sandbox - various Canvas/core related patches that were contributed by various folks over the years
Note This article is primarily of interest to people familiar with Building FlightGear from source who want to extend the Canvas 2D rendering system in SimGear ($SG_SRC/canvas). Readers are assumed to be familiar with C++ and OSG, the Property Tree and fundamental FlightGear APIs like SGPropertyNode (doxygen), Property Objects, SGSubsystem and SGPropertyChangeListener (the latter being wrapped via simgear::PropertyBasedElement). The Canvas code itself makes extensive use of the STL and Boost. The latest Canvas/Doxygen docs can be found here.
There are two main ways to extend FlightGear's Canvas system:
Whenever all existing Canvas elements (group, map, text, image, path) should benefit from an addition, such as effects/shader support, it makes sense to extend the underlying base class itself, i.e. Canvas::Element. In addition, the map element (a subclass of group) can be extended to support additional map projections (see simgear/simgear/canvas/elements/map/projection.hxx). People just wanting to add a new layer to an existing dialog or instrument will probably want to refer to Canvas MapStructure instead.
The canvas system is a property-driven FlightGear subsystem that allows creating, rendering and updating dynamic OpenGL textures at runtime by setting properties in the main FlightGear Property Tree.
The Property Tree is the sole interfacing mechanism used by the Canvas system. A so-called listener-based subsystem (via SGPropertyChangeListener) watches the canvas subtree in the main property tree for supported "events" (i.e. properties being set, written to or modified), and then updates each associated texture accordingly, e.g. by adding a requested vector or raster image, drawing a map/item, placing symbols or placing text labels with custom fonts.
Elements can be nested and added to groups which support showing/hiding and clipping of segments. Vector drawing is handled via ShivaVG (OpenVG).
All property updates result in native C++/OSG data structures being updated (typically using OSG/STL/Boost containers), so that the property tree and scripting are solely used to send update events. This ensures that Canvas-based systems are typically fast enough, often delivering frame rates beyond ~40-60 fps.
Animations are currently not directly supported. Instead, they can be implemented by using separate canvas groups and hiding/showing them as needed, or simply by changing the size/color/styling attributes of a canvas group using Nasal timers/listeners. Another option to update a canvas without relying on Nasal timers (i.e. due to GC considerations) is using so-called "Property Rules", which are not yet exposed to Nasal, but which can be used wherever scripting overhead should be minimal. Sooner or later, we're probably going to come up with a scripting-space wrapper for encapsulating most animation needs, so that existing Canvas frameworks can use a single back-end, which can be customized and optimized over time - possibly by adding native support for animations and/or by allowing animations to be handled without going through scripting space.
The Canvas fully supports recursion, by allowing other canvases (and sub-regions of them via texture-mapping) to be referenced and used as raster images, so that multiple canvases can be chained together, but also through the notion of "groups", which are containers for other canvas elements, including child groups or elements referencing other canvases.
This can be particularly useful for projects requiring multi-texturing and other multi-pass texturing stages. This mechanism is also one of the main building blocks used by the MapStructure charting framework to implement caching support via texture maps, without needing any changes on the C++ side to handle symbol instancing.
The canvas itself is developed with a focus on primarily being an enabling technology. In other words, the canvas is not about implementing individual features like a PFD, ND, EFIS, EICAS or other MFD instruments like a moving map or a GUI library.
Rather, the Canvas system is all about coming up with a flexible and efficient system that allows end-users (aircraft developers and other base package contributors) to develop such features themselves in user space (i.e. the base package) via scripting - without having to be proficient C++ programmers, without having to rebuild FlightGear from source, and without having to be familiar with OpenSceneGraph, OpenGL or other technologies that typically involve a steep learning curve (i.e. STL/Boost).
This approach has several major advantages - in particular, it frees core developers from having to develop and maintain end-user features like a wxradar, groundradar or Navigational Display/PFD by empowering content/base package developers to handle the implementation of such features themselves.
Thus, development of content is moved to user space (i.e. the base package). Recently, we've seen a shift in this trend: instead of implementing end-user feature requests themselves, more and more core developers implement the building blocks and infrastructure needed to delegate the implementation of these features to user space.
Besides, core developers are generally overstretched, and there are not enough core developers to handle all core development related tasks:
Unfortunately, most of the active FG developers are currently very overstretched in terms of the areas that they have ownership of, which is affecting how much can actually be done. Fundamentally we need more core devs. [1] — Stuart Buchanan
- ↑ Stuart Buchanan (Thu, 25 Apr 2013 07:28:28 -0700). Atmospheric Light Scattering.
The only way to deal with this is to shift the core development focus from developing complex high-level end-user features (such as an ND, TCAS or WXRADAR) that take years to fully develop, to just providing lower-level APIs (like a navdb API or a 2D drawing API like Canvas) to enable base package developers to develop those really high-level features themselves, without being affected by any core development related "bottlenecks".
This is the route that seemed to work out fairly well for the local weather system, which was prototyped and implemented by a single base package developer in scripting space, who just asked for certain scripting hooks to be provided at some point.
For example, when Stuart, Torsten or Erik implemented LW-specific core extensions, these were about providing new hooks to be used by Thorsten. They didn't commit to implementing a weather system, they just enabled somebody else to continue his work. So this strategy is as much about delegation, as it is about organizing core development.
Core developers cannot possibly implement all the ideas and feature requests that aircraft developers and end users may have, but they can at least provide a toolbox for base package developers to implement such features. Now, without doubt, implementing a WXRADAR, TCAS, AGRADAR or even a full ND/MFD is incredibly complex and time-consuming, especially when taking into account the plethora of instrument variations in existence today.
Exposing a 2D drawing API or a navdb API to base package developers would have been much simpler and less time-consuming, at the cost of possibly not providing certain instruments/features directly - while still providing the building blocks for skilled base package contributors to implement such instruments eventually within the base package, rather than within the C++ source code where evolution and maintenance of such instruments is inherently limited by the availability of C++ developers.
Given the progress we've seen in Canvas-related contributions boosted by having a 2D API, this is a very worthwhile route for developing MFD-style instruments or other end-user features without being limited by our shortage of core developers.
Furthermore, the amount of specialized code in the main FlightGear code base is significantly reduced and increasingly unified: One major aspect of adopting the Canvas system was Unifying the 2D rendering backend via canvas, so that more and more of the old/legacy code can be incrementally re-implemented and modernized through corresponding wrappers, which includes scripting-space frameworks for existing features like the Hud system, but also our existing PLIB/PUI-based GUI, and the old 2D panels code or the Map dialog.
Many of these features are currently using legacy code that hasn't been maintained in years, causing issues when it comes to making use of certain OSG optimizations, or interoperability with new code.
In addition, widgets and instruments will no longer be hard-coded, but rather "compiled" into hardware-accelerated Canvas data structures during initialization, typically animated by using timers or listeners (via scripting or property rules). The fact that previously hard-coded widgets or instruments are now fully implemented in scripting space also means that deploying updates no longer necessarily requires manual installation of binaries.
This is analogous to how more and more software programs, such as browsers like Firefox/Chrome, are using an increasingly scripted approach towards implementing functionality (e.g. JavaScript/XUL in Firefox) to move the implementation of certain features out of native code.
Finally, an increasingly unified 2D rendering back-end also makes porting/re-targeting FlightGear increasingly feasible, no matter if this is about mobile gaming platforms, mobile phones (e.g. Android) or embedded hardware like a Raspberry Pi: without a unified 2D rendering back-end, all other subsystems doing 2D rendering would need to be manually ported (HUD, cockpit, instruments, GUI etc.):
Right, not only is OpenVG natively supported in hardware, but there's even a vector font library available named "vgfont". This OSG discussion may also be of interest for anybody pursuing this venture: http://forum.openscenegraph.org/viewtop ... &view=next
A unified 2D rendering back-end using the Canvas system ensures instead that all Canvas-based features will remain functional, as long as the Canvas itself is able to run on the corresponding platform/hardware, because there's really just a single subsystem that handles all 2D rendering via different user-space wrappers, and that would need porting (e.g. to support OpenGL ES).
Also, GUI dialogs and instruments can make use of other Canvas-based features, e.g. for showing a GUI dialog on an instrument, or instruments in dialogs.
The property tree centric implementation approach of the Canvas also means that all Canvas-based frameworks could technically work in a standalone FGCanvas/FGPanel mode eventually, but also in multi-instance (master/slave) setups such as those common for FSWeekend/LinuxTag.
This is yet another novelty, because most existing hard-coded instruments cannot be easily modified to work in such multi-instance setup scenarios. The Canvas system however, being based on the property tree, could retrieve property updates from external instances, e.g. via telnet/UDP or HLA, without requiring major re-architecting.
This also means that Canvas-based GUI dialogs could similarly be shown by a separate fgfs instance - for example, in order to provide an Instructor Station or to display a MapStructure-based moving map dialog/window.
Frameworks
Obviously, the Canvas APIs themselves are not intended for specific end-user features like developing a PFD, ND or EICAS - therefore, you will typically see wrappers implemented in scripting space for certain needs, i.e. Canvas frameworks intended to help with the development of certain types of instruments. Frameworks will usually use the Canvas scripting space API directly, while providing a more concrete, use-case-specific API on top.
- Canvas EFIS Framework (2020/02): jsb
- Canvas MapStructure (2013/2014: Philosopher & Hooray)
- NavDisplay (2013/2014: Gijs, Hyde)
- Canvas GUI (2013-2015: TheTom)
- Canvas MCDU Framework (2012: TheTom)
Internals
What gets allocated is a conventional OSG texture. The main difference is that certain OSG parameters are exposed in the form of properties: setting a property with a certain name and type/value invokes the corresponding OSG machinery to update the texture internally.[1]
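This dispatch mechanism can be sketched with a minimal, self-contained C++ analogy. Note that all names here are hypothetical; the real implementation uses SGPropertyChangeListener and lives in SimGear. The idea is simply that writes to known property paths invoke handlers that update native rendering state, instead of anything scanning the tree each frame:

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical stand-in for the OSG-side state of a canvas texture.
struct TextureState {
    int width  = 0;
    int height = 0;
};

// Minimal analogy of the listener-based dispatch: property writes are
// forwarded to per-path handlers that update native state.
class PropertyWatcher {
public:
    using Handler = std::function<void(int)>;

    void watch(const std::string& path, Handler h) {
        handlers_[path] = std::move(h);
    }

    // Called for every property write; dispatches to the matching handler.
    void valueChanged(const std::string& path, int value) {
        auto it = handlers_.find(path);
        if (it != handlers_.end())
            it->second(value);
    }

private:
    std::map<std::string, Handler> handlers_;
};

// Wire a texture's parameters to canvas-style property paths.
void wireTexture(PropertyWatcher& watcher, TextureState& tex) {
    watcher.watch("/canvas/texture[0]/size-x", [&tex](int v) { tex.width  = v; });
    watcher.watch("/canvas/texture[0]/size-y", [&tex](int v) { tex.height = v; });
}
```

Writing to /canvas/texture[0]/size-x then updates the native state directly, and writes to unknown paths are simply ignored - which mirrors how Canvas reacts to property "events" rather than polling.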
However, the Canvas image element already supports texture mapping, i.e. you can treat a raster image (including another Canvas) as the source and only display a portion of it: Howto:Using raster images and nested canvases#Texture Maps
Once you stop manipulating a Canvas in the tree (and especially its child elements), it's all native C++ code that is running - i.e. no Nasal or property overhead once the corresponding data structures are set up, but that only holds true until the next "update", at which point everything is removed, re-parsed and updated/re-drawn.[2]
For instance, for Rembrandt (buffer setup), that would require additional hooks, because things like the internal texture format are not currently configurable via "Canvas properties", i.e. it's a hard-coded thing - however, Rembrandt makes extensive use of different kinds of buffers and in-memory representations, probably for pretty much the same reasons that you have in mind regarding the first question you asked.
I guess, to answer your first question, we would need to look at the way Rembrandt is setting up, and managing, its buffers and compare that to the standard Canvas FBO - but I really think that it's not doing anything fancy at all, because that would introduce hard-coded assumptions that may fail under certain circumstances.
Basically, what you are suggesting would require some way to encode the internal representation using a configurable lookup.
What is really taking place behind the scenes is that the Canvas system is built on the old FGODGauge code; Tom ended up basically rewriting it from scratch, but it's still using the same mechanism that hard-coded "owner-drawn" (OD) gauges like the agradar/navdisplay were using.
The allocation (OSG setup) can be found here: https://sourceforge.net/p/flightgear/simgear/ci/next/tree/simgear/canvas/ODGauge.cxx#l218
The internal representation/format is a hard-coded thing: https://sourceforge.net/p/flightgear/simgear/ci/next/tree/simgear/canvas/ODGauge.cxx#l255 And even the cameragroup stuff is using the same hard-coded assumption: https://sourceforge.net/p/flightgear/flightgear/ci/next/tree/src/Viewer/CameraGroup.cxx#l994 Rembrandt, and effects (simgear), are much more flexible (for now), e.g. see: https://sourceforge.net/p/flightgear/flightgear/ci/next/tree/src/Viewer/renderingpipeline.cxx#l164
https://sourceforge.net/p/flightgear/flightgear/ci/next/tree/src/Viewer/renderer.cxx#l769
Elements
All new canvas elements need to implement the Canvas::Element interface (new elements can also sub-class existing elements, e.g. see the implementation of the Map and Window ($FG_SRC/Canvas) elements). The canvas system currently supports the following primitives (see $SG_SRC/canvas/elements):
- CanvasGroup - main element: for grouping (a group of arbitrary canvas primitives, including other groups)
- CanvasText - for rendering texts (mapped to osgText)
- CanvasPath - for rendering vector graphics (mapped to OpenVG, currently also used to render SVGs into groups)
- CanvasMap - for rendering maps (automatic projection of geographic coordinates to screen coordinates, subclass of group)
- CanvasImage - for rendering raster images (mapped to osg::Image)
- CanvasWindow - part of $FG_SRC/Canvas/Window.?xx; a subclass of CanvasImage, used to implement windows (as of 05/2014 also to be found in SimGear)
Most end-user features can be decomposed into lower-level components that need to be available in order to implement the corresponding feature in user-space.
Thus, the canvas system is based on a handful of rendering modes, each supporting different primitives. Each of those modes is implemented as a so-called "Canvas Element": a property-tree-controlled subtree of a canvas texture, identified by a certain name, that supports specific events and notifications.
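The name-to-element mapping can be illustrated with a small, self-contained C++ sketch. The classes and function names below are hypothetical stand-ins; the actual registration happens in Group::staticInit() in CanvasGroup.cxx (shown later in this article), but the underlying pattern is the same: a factory map keyed by element name.

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Hypothetical element hierarchy standing in for Canvas::Element subclasses.
struct Element {
    virtual ~Element() = default;
    virtual std::string type() const = 0;
};
struct Text  : Element { std::string type() const override { return "text"; } };
struct Image : Element { std::string type() const override { return "image"; } };

using Factory = std::function<std::unique_ptr<Element>()>;

// Global factory registry, keyed by element name.
std::map<std::string, Factory>& factories() {
    static std::map<std::string, Factory> f;
    return f;
}

// Register a new element type under the given name.
template <class T>
void registerElement(const std::string& name) {
    factories()[name] = [] { return std::unique_ptr<Element>(new T()); };
}

// Look up the factory for a name and create the child element.
std::unique_ptr<Element> createChild(const std::string& name) {
    auto it = factories().find(name);
    if (it == factories().end())
        return nullptr;  // unknown element type
    return it->second();
}
```

This is also why a scripting-space createChild("myelement") call can only succeed once the corresponding factory has been registered on the C++ side.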
According to the development philosophy outlined above, you obviously won't see new canvas elements that are highly use-case specific, such as a "night vision" or FLIR element. Instead, what is more likely to be supported are the lower-level building blocks that enable end users to create such features themselves, i.e. by adding support for running custom effects/shaders and by rendering scenery views to canvas textures.
Adding a new Element
You will want to add a new Canvas::Element whenever you want to add support for features which cannot be currently expressed easily (or efficiently) using existing means (i.e. via existing elements and scripting space frameworks). For example, this may involve projects requiring camera support, rendering scenery views to a texture, rendering 3D models to a texture or doing a complete moving map with terrain elevations/height maps (even though the latter could be implemented by sub-classing Canvas::Image to some degree).
Another good example for implementing new elements is rendering file formats like PDF, 3D models or ESRI shape files.
To add a new element, these are the main steps:
- Navigate to $SG_SRC/canvas/elements
- Create a new set of files myElement.cxx myElement.hxx
- add them to $SG_SRC/canvas/elements/CMakeLists.txt (as per Developing using CMake)
diff --git a/simgear/canvas/elements/CMakeLists.txt b/simgear/canvas/elements/CMakeLists.txt
index bd21c13..9fdd48d 100644
--- a/simgear/canvas/elements/CMakeLists.txt
+++ b/simgear/canvas/elements/CMakeLists.txt
@@ -1,6 +1,7 @@
include (SimGearComponent)
set(HEADERS
+ myElement.hxx
CanvasElement.hxx
CanvasGroup.hxx
CanvasImage.hxx
@@ -14,6 +15,7 @@ set(DETAIL_HEADERS
)
set(SOURCES
+ myElement.cxx
CanvasElement.cxx
CanvasGroup.cxx
CanvasImage.cxx
@@ -23,4 +25,4 @@ set(SOURCES
)
Next, open the header file and add a new Element class:
#ifndef CANVAS_MYELEMENT_HXX_
#define CANVAS_MYELEMENT_HXX_
#include <simgear/props/propsfwd.hxx>
#include "CanvasElement.hxx"
namespace simgear
{
namespace canvas
{
class myElement : public Element
{
public:
static const std::string TYPE_NAME;
static void staticInit();
myElement( const CanvasWeakPtr& canvas,
const SGPropertyNode_ptr& node,
const Style& parent_style = Style(),
Element* parent = 0 );
virtual ~myElement();
protected:
virtual void update(double dt);
private:
myElement(const myElement&) /* = delete */;
myElement& operator=(const myElement&) /* = delete */;
};
} // namespace canvas
} // namespace simgear
#endif /* CANVAS_MYELEMENT_HXX_ */
Next, add the source file implementing the new myElement class:
#include "myElement.hxx"
#include <simgear/props/props.hxx>
#include <simgear/misc/sg_path.hxx>
namespace simgear
{
namespace canvas
{
const std::string myElement::TYPE_NAME = "myelement";
void myElement::staticInit()
{
if( isInit<myElement>() )
return;
}
//----------------------------------------------------------------------------
myElement::myElement( const CanvasWeakPtr& canvas,
const SGPropertyNode_ptr& node,
const Style& parent_style,
Element* parent ):
Element(canvas, node, parent_style, parent)
{
SG_LOG(SG_GENERAL, SG_ALERT, "New Canvas::myElement element added!");
}
//----------------------------------------------------------------------------
myElement::~myElement()
{
SG_LOG(SG_GENERAL, SG_ALERT, "Canvas::myElement element destroyed!");
}
void myElement::update(double dt)
{
}
} // namespace canvas
} // namespace simgear
Next, edit CanvasGroup.cxx to register your new element (each canvas has a top-level root group, so that's how elements show up), navigate to Group::staticInit() and add your new element type there (don't forget to add your new header):
diff --git a/simgear/canvas/elements/CanvasGroup.cxx b/simgear/canvas/elements/CanvasGroup.cxx
index 51523f4..24e19d3 100644
--- a/simgear/canvas/elements/CanvasGroup.cxx
+++ b/simgear/canvas/elements/CanvasGroup.cxx
@@ -21,6 +21,7 @@
#include "CanvasMap.hxx"
#include "CanvasPath.hxx"
#include "CanvasText.hxx"
+#include "myElement.hxx"
#include <simgear/canvas/CanvasEventVisitor.hxx>
#include <simgear/canvas/MouseEvent.hxx>
@@ -60,6 +61,7 @@ namespace canvas
return;
add<Group>(_child_factories);
+ add<myElement>(_child_factories);
add<Image>(_child_factories);
add<Map >(_child_factories);
add<Path >(_child_factories);
Next, navigate to $FG_ROOT/Nasal/canvas/api.nas and extend the module to add support for your new element:
diff --git a/Nasal/canvas/api.nas b/Nasal/canvas/api.nas
index 85f336a..81c0fa0 100644
--- a/Nasal/canvas/api.nas
+++ b/Nasal/canvas/api.nas
@@ -314,6 +314,18 @@ var Element = {
}
};
+# myElement
+# ==============================================================================
+# Class for a group element on a canvas
+#
+var myElement = {
+# public:
+ new: func(ghost)
+ {
+ return { parents: [myElement, Element.new(ghost)] };
+ },
+};
+
# Group
# ==============================================================================
# Class for a group element on a canvas
@@ -958,7 +970,8 @@ Group._element_factories = {
"map": Map.new,
"text": Text.new,
"path": Path.new,
- "image": Image.new
+ "image": Image.new,
+ "myelement": myElement.new,
};
Next, rebuild SG/FG and open the Nasal Console and run a simple demo to test your new element:
var CanvasApplication = {
##
# constructor
new: func(x=300,y=200) {
var m = { parents: [CanvasApplication] };
m.dlg = canvas.Window.new([x,y],"dialog");
m.canvas = m.dlg.createCanvas().setColorBackground(1,1,1,1);
m.root = m.canvas.createGroup();
##
# creates a new element
m.myElement = m.root.createChild("myelement");
m.init();
return m;
}, # new
init: func() {
var filename = "Textures/Splash1.png";
# create an image child for the texture
var child=me.root.createChild("image")
.setFile( filename )
.setTranslation(25,25)
.setSize(250,250);
}, #init
}; # end of CanvasApplication
var splash = CanvasApplication.new(x:300, y:300);
print("Script parsed");
You may also want to check out $FG_SRC/Scripting/NasalCanvas.?xx to learn more about exposing custom elements to scripting space via Nasal/CppBind. Next, you'll want to implement the update() method and the various notification methods supported by CanvasElement:
- childAdded
- childRemoved
- childChanged
- valueChanged
For event handling purposes, you'll also want to check out the following virtual CanvasElement methods:
- accept()
- ascend()
- traverse()
- handleEvent()
- hitBound()
Integrating OSG/OpenGL Code
Once you have the basic boilerplate code in place, you can directly invoke pretty much arbitrary OpenGL/OSG code - for instance, the following snippet will render an osgText string to the Canvas element (added simply to the constructor here for clarity):
osg::Geode* geode = new osg::Geode();
osg::Projection* ProjectionMatrix = new osg::Projection;
ProjectionMatrix->setMatrix(osg::Matrix::ortho2D(0,1024,0,768));
std::string timesFont("fonts/arial.ttf");
// turn lighting off for the text and disable depth test to ensure it's always on top.
osg::StateSet* stateset = geode->getOrCreateStateSet();
stateset->setMode(GL_LIGHTING,osg::StateAttribute::OFF);
stateset->setMode(GL_DEPTH_TEST,osg::StateAttribute::OFF);
osgText::Text* text = new osgText::Text;
geode->addDrawable(text);
text->setFont(timesFont);
osg::Vec3 position(200.0f,350.0f,0.0f);
text->setPosition(position);
text->setText("Some OpenGL/OSG Code ...");
text->setColor(osg::Vec4(1.0f,0.0f,0.0,1.0f));
// add the geode to the projection matrix
ProjectionMatrix->addChild(geode);
// add the projection matrix to the transform used by the Canvas element
_transform->addChild(ProjectionMatrix);
For testing purposes, you can use the following Nasal snippet (e.g. executed via the Nasal Console):
var element_name = 'myelement';
var window = canvas.Window.new([640,480],"dialog");
var myCanvas = window.createCanvas().set("background", canvas.style.getColor("bg_color"));
var root = myCanvas.createGroup();
var osgemap = root.createChild(element_name);
Discussed Enhancements
Note The features described in the following section aren't currently supported or being worked on, but they've seen lots of community discussion over the years, so this serves as a rough overview. However, this doesn't necessarily mean that work on these features is in any way prioritized or even endorsed by fellow contributors - often enough, such discussions may become outdated pretty quickly due to recent developments. So if in doubt, please do get in touch via the Canvas sub-forum before starting to work on anything related, to help coordinate things a little. Thank you!
AI/MP models
It appears as though it is not possible for Canvas to locate a texture that is in a multiplayer aircraft model; this has also been seen in the efforts to get Canvas displays working on the B777.[4] In SimGear's Canvas::update it appears to be using the factories to find the element, and this means that it can't find the named OSG node, which makes me think that maybe it is only looking in the ownship (which is a null model).[5]
Property I/O observations
Speaking for the Shuttle, that (performance problems/lag) has very little to do with canvas as such. There are 11 MDUs on the Shuttle flightdeck, and the way the Shuttle avionics works, they typically display close to a hundred values each, so that's of the order of ~1000 different parameters that need to be simulated, fetched and displayed _per update cycle_ (and yeah, most parameters you see are really simulated and not just unchanging text).[6]
Structured in a reasonable way (i.e. minimizing property I/O in update cycles, avoiding canvas pitfalls which trigger high performance needs etc.), canvas is pretty fast[7]
The actual problem is property I/O - we can't read/write several hundreds of properties per frame without creating a bottleneck. So it's largely irrelevant how fast the Nasal code runs, whether it's parallel or whether it's Python-driven code running on the GPU - as long as property I/O speed doesn't change, performance will be stuck right there.[8]
We're talking 500 properties in addition to everything else that is happening (limit checks, thermal system updates, CWS queries, simulation of co-orbiting objects, simulation of sensor errors,...)[9]
For the MFDs... let's go through the numbers. We have 40 pages by now, each displays on average something like 50 properties. That's 2000 getprop calls for the data provider to manage. At 3 Hz and 30 fps, that's 200 requests per frame.
Now, of these 40, no more than 9 different ones can actually be on at any given time - so that's 450 getprop calls if you do it without a data provider.
Now, we're not updating them all at once, we're updating in a staggered fashion - user selectable but per default just one display in a frame - so that's 50 getprop calls per frame. So effectively you get an update rate of ~3 Hz and query only the properties you really need.[10]
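The staggered scheme described in the quote above can be sketched as a simple round-robin scheduler. This is a self-contained illustration using the numbers from the quote (50 fetches per display, 9 active displays), not actual Shuttle code:

```cpp
#include <cstddef>
#include <vector>

// Each display page fetches roughly this many properties per update.
const int kFetchesPerDisplay = 50;

struct Display {
    int fetches = 0;  // total property fetches performed so far
};

// Round-robin updater: only one display is refreshed per frame, so the
// per-frame property load stays at ~50 instead of ~450 for 9 displays.
class StaggeredUpdater {
public:
    explicit StaggeredUpdater(std::vector<Display>& displays)
        : displays_(displays) {}

    // Simulate one frame; returns the number of properties fetched.
    int frame() {
        Display& d = displays_[next_];
        next_ = (next_ + 1) % displays_.size();
        d.fetches += kFetchesPerDisplay;
        return kFetchesPerDisplay;
    }

private:
    std::vector<Display>& displays_;
    std::size_t next_ = 0;
};
```

With nine active displays at 30 fps, each display is refreshed every 9 frames, i.e. at roughly the ~3 Hz effective update rate mentioned in the quote, while the per-frame property load stays constant.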
The workload is certainly a function of the number of screens (canvas textures/FBOs) (unless you have duplicate screens, in which case you can cut it by re-using a canvas or using a data provider).
Creating the property structure by simply copying it turned out to be the largest drag in setting up a canvas display.[11]
if you have a page that displays 90 data values in text, you actually have to fetch all 90 of them. With 9 displays open, that's 810 properties to be fetched and then to be written so that canvas can display them. If you try that per frame, you'll see quickly why it doesn't work.[12]
Of course there needs to be an information merging and representation stage which the Shuttle doesn't have - but if you put this into the display code itself... see above. Fetching dozens of properties when all you need is four pre-computed ones is a bad idea.[13]
In an extreme case, the shuttle needs to read (and canvas later write) some 800 properties for one screen processing cycle. Part of those trigger unit conversions, mappings to strings,... A small subset goes into translating, rotating and scaling line elements. Our experience is that property reading and writing is usually the most expensive part - with AW, Thorsten did not manage to make even a dent in the framerate or latency even with complex cloud setup calls squeezed into a single frame (not for lack of trying), but property access does it as soon as you reach ~1000 per frame.[14]
500+ property updates (polling) would surely show up - especially given that a few years ago, that was pretty much the load caused by the whole simulator per frame. So it will be interesting to see if/how the complexity of these instruments is adding up (or not).
But all the sprintf/getprop-level overhead that is accumulating through update() loops invoked via timers would be straightforward to reduce significantly (or even eliminate) by extending CanvasElement/CanvasText so that it supports labels in the form of sprintf format strings that are populated by using a property node (sub-tree), which would mean that there would be zero Nasal overhead for those labels/nodes that can be expressed using static format strings and a fixed set of dynamic properties.
All the polling could be prevented then, and updating would be moved to C++ space.
We ended up using a similar approach when we noticed that drawing taxiway layers would create remarkable property overhead; we troubleshooted the whole thing, at which point TheTom added helpers to further reduce system/Nasal load[15]
common coding constructs (such as the sprintf/getprop idiom) are put into a helper function, which can later on be re-implemented/optimized without having to touch tons of files/functions.[16]
In the case of property-driven labels that are formatted using sprintf(), it would probably be easier to just introduce a helper function, and delegate that to C++ code - as per the comments at: [3] [17] It would be better to extend the Canvas system to directly support a new "property mode" using sprintf-style format strings that are assembled/updated in C++ space, i.e. without any Nasal overhead, which would benefit other efforts, too - including the PFD/ND efforts, re-implementing HUD/2D panels on top of Canvas, but even pui2canvas[18]
It is all about updating properties and updating a label/text element accordingly. We could dramatically reduce the degree of Nasal overhead by allowing text to be specified using printf-style format strings that get their values from a sub-branch in the element's tree (one node for each %s, %d) - that way, the whole thing could be processed in C++ space, and we would not need any Nasal for updating/building strings. If this could be supported, we could also provide two modes, polling and on-update, to ensure that there is no unnecessary C++ overhead. Complex dialogs with lots of dynamic labels could then be re-implemented much more easily, without having to register 5-10 callbacks per metric (or timers/listeners), even though a timer-based update mode may also be useful for the C++ change. Note that this would also be useful for the PUI parser itself, because that already supports values that may be provided by a property using printf-style formatting; there, it is limited to a single format string - with Canvas, we could support an arbitrary number of sub-nodes that are updated as needed. Ultimately, that would also help with the HUD/2D panels stuff, because taking values from properties and updating them using sprintf-style code is extremely common there, too - and we could avoid tons of Nasal overhead like that.
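To illustrate the proposed "property mode", here is a minimal C++ sketch. This is not an existing FlightGear API - formatLabel and its semantics are hypothetical, the values vector merely stands in for the element's property sub-nodes (one node per placeholder), and only %d and %f are handled:

```cpp
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// Sketch: fill a printf-style label from property values entirely in C++,
// so no per-update Nasal callback is needed for static format strings.
std::string formatLabel(const std::string& fmt, const std::vector<double>& values) {
    std::string out;
    std::size_t vi = 0;  // next property value to consume
    for (std::size_t i = 0; i < fmt.size(); ++i) {
        if (fmt[i] == '%' && i + 1 < fmt.size() && vi < values.size()) {
            char buf[32];
            if (fmt[i + 1] == 'd') {
                std::snprintf(buf, sizeof buf, "%d", static_cast<int>(values[vi++]));
                out += buf;
                ++i;  // skip the conversion character
                continue;
            }
            if (fmt[i + 1] == 'f') {
                std::snprintf(buf, sizeof buf, "%.1f", values[vi++]);
                out += buf;
                ++i;
                continue;
            }
        }
        out += fmt[i];
    }
    return out;
}
```

For example, formatLabel("ALT %d FT", {12500}) yields "ALT 12500 FT". In the real Canvas case, the values would be read from the element's property sub-nodes whenever one of them changes (on-update mode) or at a configurable rate (polling mode), and the resulting string would be handed straight to the osgText machinery.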
References
|
Instancing
a way to easily instantiate a symbol/geometry/group multiple times, in a cached fashion, without eating up unnecessary memory for multiple independently-animated instances of a symbol |
I'm currently not sure if we can share the canvas elements across displays, so I've made copies of everything for each display. |
You are right, that would help reduce the OSG-level workload, i.e. scene graph-level instancing.
But for the time being, Canvas does not support anything like that. |
It's also led me to wonder if instancing could be generally useful (as we have quite a lot of items in the scenery that are the same model); but to be honest I've not really got enough of a clue how the culling would work. — Richard (Dec 19th, 2015). Re: optimizing random trees: strong effect on performance ?.
(powered by Instant-Cquotes) |
this is one of the most common feature requests related to Canvas — Hooray (Dec 19th, 2015). Re: Canvas::Element Instancing at the OSG level.
(powered by Instant-Cquotes) |
The main reason for doing that is to ensure that you can easily adopt more native primitives if/when they become available - for instance, the lack of a dedicated animation-handling element at the Canvas::Element level is one of the most obvious issues, because it ties rendering-related OSG code to Nasal-space callbacks running within the FlightGear main loop.
And one of the most logical optimizations would be to look up suitable OSG-level data structures and expose those as Canvas::Elements that we can then reuse to implement such animations/updates without necessarily going through Nasal space - there are quite a few osg classes that could help with that, some of which we are currently re-implementing via Nasal to animate PFD/ND logic. Looking specifically at some of the most complex Canvas-based avionics we have in FlightGear, things like the Avidyne Entegra R9 will be difficult to migrate once such a dedicated element becomes available - but people can prepare for that by using a single helper function/class that handles the update/animation semantics and isolates the remaining code from any internals, so that things like an animated bar can easily be delegated to OSG/C++ code as soon as the corresponding OSG classes are mapped to a dedicated Canvas element: Canvas Sandbox#CanvasAnimation |
- http://learnopengl.com/#!Advanced-OpenGL/Instancing
- http://www.opengl-tutorial.org/intermediate-tutorials/billboards-particles/particles-instancing/
- http://forum.openscenegraph.org/viewtopic.php?t=5592
- http://android-developers.blogspot.com/2015/05/game-performance-geometry-instancing.html
- http://www.informit.com/articles/article.aspx?p=2033340&seqNum=5
- http://3dcgtutorials.blogspot.com/2013/08/instancing-with-openscenegraph.html
- http://3dcgtutorials.blogspot.com/2013/09/instancing-with-openscenegraph-part-ii.html
- http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter03.html
- http://trac.openscenegraph.org/projects/osg//browser/OpenSceneGraph/trunk/examples/osgdrawinstanced/osgdrawinstanced.cpp
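The memory argument behind element instancing can be illustrated with a conceptual stand-in (not OSG or Canvas code; Geometry, Instance and placeSymbols are invented here): many placements share one immutable geometry, much like OSG shares a single Drawable among several parent transforms, so N symbols cost one geometry plus N lightweight transforms.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// One shared, immutable geometry (analogous to an osg::Drawable).
struct Geometry { std::vector<float> verts; };

// A lightweight per-instance record: shared geometry + its own transform
// (analogous to an osg::MatrixTransform parenting the shared Drawable).
struct Instance {
    std::shared_ptr<const Geometry> geom;
    float x, y, rotDeg;
};

// Place n copies of a symbol without duplicating its vertex data.
std::vector<Instance> placeSymbols(std::shared_ptr<const Geometry> geom, int n)
{
    std::vector<Instance> out;
    out.reserve(std::size_t(n));
    for (int i = 0; i < n; ++i)
        out.push_back({geom, float(i) * 10.0f, 0.0f, 0.0f});
    return out;
}
```

GPU-level instancing (osg::Geometry draw instancing, see the links above) goes one step further by also collapsing the N draw calls into one, but the sharing idea is the same.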
Canvas-based Splash Screens
Note In this system it would be important that the screenshots are *just* screenshots - no border, badges or other info, as some splash screens currently have. The idea being to add this dynamically using OSG from the metadata, so that it can be restyled as needed.[1]
Serializing a Canvas to SVG (brainstorming)
Note For now this is just brainstorming to explore possible ways to better integrate the recent mongoose/httpd work with Canvas-based efforts like Gijs' NavDisplay or PFD - i.e. the idea is to see if we can come up with a consistent framework that would allow a Canvas-based display/MFD (or any instrument) to be rendered in a browser and updated asynchronously via AJAX. Currently, the focus is on serializing an existing Canvas by iterating over all elements and turning each CanvasElement into its SVG equivalent (e.g. svg image, raster image or text). That alone would mean that we could serve a static image of the canvas; animations and updates would then be handled by a shim layer based on a safe subset of both Nasal and JavaScript. The long-term idea is to allow MFDs like the NavDisplay to be served to, and viewed by, a browser. |
Who knows, maybe there's even a way that we can find a compromise to optionally integrate both worlds to /some/ degree - i.e. we could serve Canvas-based textures as PNGs to a browser and actually let users decide on which side they want to use "native" FlightGear solutions, and where they'd prefer to use W3C options instead. Obviously, JavaScript is in many ways superior to Nasal, and the way Nasal is integrated in FG, we cannot easily write async code either. Being able to stream Canvas images/video to an external browser/viewer (via a worker thread) would also allow us to support a variety of other interesting use-cases, such as UAV stuff, OpenCV post-processing etc. The only thing that's missing to pull this off is a new placement type that exposes a canvas as either an osg::Image buffer that is serialized to a browser-format like PNG, or to some video stream. At that point, a browser could -in theory- even render live FG camera views streamed via UDP to implement a browser-based instructor console that can view individual Canvas MFDs/instruments, but even scenery views. This kind of stuff has been discussed a number of times, and even Curt & Tim agreed (in the pre-canvas days) that this would be cool to support at some point: http://wiki.flightgear.org/Canvas_Devel ... ter_Vision |
Canvas works mainly in terms of 1) OpenVG paths, and 2) raster images - most other elements are built on top of these two primitives. In fact, we don't even have native SVG support, we are merely using a Nasal script named "svg.nas" to turn SVG markup into OpenVG paths.
In other words, we could probably serialize a "live" canvas into a SVG image that merely references external files that are served via mongoose for each non-static element/group of the canvas, those would be either SVG files or raster images that would need to be internally serialized, sent to the browser and updated on demand.
Animation is a different thing obviously, but we were once wondering whether we should come up with a "safe" subset of JavaScript that would be valid Nasal and vice versa - such a "subset" library could be used to animate instruments.
Basically, that would mean that we could combine both worlds to arbitrary degrees, and e.g. display MFDs like Gijs' ND or PFD in a browser that simply fetches a SVG from mongoose, which is a serialized canvas, broken up into 1) OpenVG paths and 2) raster images. To update individual elements selectively, we'd need to use your listener notifications or some other pub/sub mechanism. Something like that should be far more efficient than streaming the final texture, and it would allow us to reuse existing stuff, without necessarily asking people to re-invent instruments from scratch just because they want to use a different technology (Canvas vs. W3C).
Except for the "map" element supported by canvas (which directly projects symbols according to lat/lon), most things could be mapped onto SVG directly, i.e. referencing external SVGs and raster images via the <image> tag. If that is something that you find interesting, I am sure that I could help restructure the Canvas/MapStructure side of things to serve a SVG for a canvas - even event handling could be supported that way.
Another option might be generalizing the Nasal framework to be also valid JavaScript so that people could use a single framework that just animates SVGs and raster images via timers and listeners, so that both methods could benefit from each other in the long-term, because people could easily reuse stuff.
The main challenge being how to allow MapStructure/MFD stuff to be serialized as a canvas that consists of <image> entries for each group/element that either refers to another OpenVG group/SVG or a raster image. I think we could use a fairly thin Nasal/JavaScript subset as a shim layer to selectively update such SVGs even in the browser. I would probably need to restructure MapStructure to make better use of caching so that semi-static content is served as raster images. But otherwise it seems feasible to serialize a canvas to a SVG file that links to localhost:/canvas[x]/by-file/filename.png or filename.svg
The opposite is what we're already doing in svg.nas. It's not important at the moment; I'd just like to explore ways to unify both worlds at least to /some/ degree.
Thinking about it, a canvas is already a "tree" due to the property tree, i.e. very much analogous to an SVG DOM, so I don't think that representing/serializing a canvas as an SVG that uses special URLs to address certain elements would be all that far-fetched eventually.
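As a rough illustration of the serialization idea (all names and the URL scheme below are hypothetical; a real implementation would walk the canvas property tree via SGPropertyNode), each group could be flattened to either an OpenVG path served as SVG or a raster image served as PNG, and referenced from one generated SVG document:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Stand-in for a flattened canvas group: either a vector layer (served as
// SVG) or a semi-static raster layer (served as PNG).
struct Layer { std::string name; bool isRaster; int x, y, w, h; };

// Serialize a canvas into an SVG document whose <image> entries point at a
// hypothetical per-canvas HTTP endpoint (the /canvas[i]/by-file/ scheme is
// taken from the brainstorming above, not an existing API).
std::string serializeToSvg(int canvasIndex, const std::vector<Layer>& layers)
{
    std::ostringstream svg;
    svg << "<svg xmlns='http://www.w3.org/2000/svg'>\n";
    for (const Layer& l : layers) {
        const std::string ext = l.isRaster ? ".png" : ".svg";
        svg << "  <image x='" << l.x << "' y='" << l.y
            << "' width='" << l.w << "' height='" << l.h
            << "' href='/canvas" << canvasIndex << "/by-file/"
            << l.name << ext << "'/>\n";
    }
    svg << "</svg>\n";
    return svg.str();
}
```

Selective updates would then amount to the browser re-fetching only the referenced sub-resources that a listener/pub-sub notification flagged as dirty, instead of streaming the whole composited texture.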
Supporting Cameras
Note There is a related patch available at https://forum.flightgear.org/viewtopic.php?p=317448#p317448 |
Note People interested in working on this may want to check out the following pointers:
|
Given how FlightGear has evolved over time, not just regarding effects/shaders, but also complementary efforts like deferred rendering (via rembrandt), we'll probably see cameras (and maybe individual rendering stages) exposed as Canvases, so that there's a well-defined interface for hooking up custom effects/shaders to each stage in the pipeline - Zan's newcamera work demonstrates just how much flexibility can be accomplished this way, basically schemes like Rembrandt could then be entirely maintained in XML/effects and shader (fgdata) space. And even the fgviewer code base could be significantly unified by just working in terms of canvases that deal with camera views, which also simplifies serialization for HLA.
Background:
- Rembrandt Status
- Talk:Project_Rembrandt
- CompositeViewer Support
- Howto:Use a Camera View in an Instrument
- $FG_ROOT/Docs/README.multiscreen
- http://api-docs.freeflightsim.org/flightgear/classflightgear_1_1CameraGroup.html
- https://gitorious.org/fg/zans-flightgear?p=fg:zans-flightgear.git;a=commit;h=09832d8076a985a329500c027c1ed4f9b72bb1a9
- http://trac.openscenegraph.org/projects/osg/wiki/Support/ProgrammingGuide/ViewerVSCompositeViewer
Also see The FlightGear Rendering Pipeline
Taxi Camera on navigation display (as seen on FSX and X-Plane) |
I have to create two master cameras to control the two different views in two different scenes, rendered dynamically.
But FlightGear only uses the Viewer class, which creates one master camera and a number of slave cameras. I need the CompositeViewer class: how can CompositeViewer be used in FlightGear, and how do I render through it? — Divi (Tue Dec 23). How to create 2 master camera and 2 views in flightgear 3.0.
(powered by Instant-Cquotes) |
any idea how long we have to wait for this to be added to canvas (with custom render options)? |
I think we need to add this to canvas for cameras soon. |
Anyway, I'm talking about rendering (terrain) camera view to texture using od_gauge. I know you can get terrain camera view and place it on the screen. Alright, it doesn't even have to be terrain, just normal camera view. It's rendered to screen every frame. The same way, can't we use the od_gauge instrument to render the view to texture? I just need some good info/doc on how we can do it.
— Merlion Aerosuperb (2012-03-21). [Flightgear-devel] Rendering Terrain Camera View to Texture.
(powered by Instant-Cquotes) |
Back when the whole Canvas idea was originally discussed, none of the people involved in that discussion stepped up to actually prototype, let alone implement, the system - so it took a few years until the idea took shape, and the developer who prototyped and designed the system went quite a bit further than originally anticipated - but I think it's safe to say that not even Tom foresaw the increasing focus on GUI and MFD use-cases, as well as the increasing trend to use it for mapping/charting purposes.
So the original focus on 2D rendering is/was very valid, and the system is sufficiently flexible to allow it to be extended using custom elements for rendering camera/scenery views at some point. All the community support and momentum certainly is there, and I'm sure that TheTom will gladly review any contributions related to this. — Hooray (Sun Jan 04). Re: How to create 2 master camera and 2 views in flightgear.
(powered by Instant-Cquotes) |
hi, i need to create 2 windows with different views. |
To support this kind of thing via Canvas, we'd need to adapt the existing view manager code and render a slave camera to a Canvas group - i.e. by turning the whole thing into a CanvasElement sooner or later. That would allow cameras to be specified according to the existing syntax/properties.
|
I think, but I'm not really sure, that FlightGear does not support two different views even if you have two windows.
|
while we've had a number of discussions about possibly supporting camera views as Canvas elements, this isn't currently supported. At some point, this will probably be added, because it would simplify quite a bit of existing code (especially the view manager, and the way camera groups are set up) - however, the corresponding C++ predates Canvas by many years, so it would involve a bit of work.
|
We're waiting for the Canvas Properties 2D drawing API and Camera View so we can create the PFD.
|
I'm looking to replicate a camera with a fixed viewpoint from the aircraft. For example looking directly down.
Is there a way I can use some scripting method to call a new window displayed in the bottom right hand side of the screen showing a fixed camera view, without having to edit the preferences for my machine? I'd like it to be easily distributable. [2] — Avionyx
|
I was wondering if it were possible to restrict the camera output to only one half of the running FG window? I'm hoping to do this so that I may have the map and route manager GUIs active in the other half, so that they aren't obscuring the camera view (and also have the entire HUD visible). So basically, half the window straight down the center - left half is just black, right half is the camera.
Although this would also be solved if there were an external FG dynamic navigational map program, that also displayed waypoints... (I don't think there is one, right?). Additionally, I would love to hear that this question can be answered with Nasal, as I really can't afford to edit the source code and recompile (it's for a project, and I have no admin rights on the laboratory machines).[3]— seabutler
|
I'm trying to debug a reflection shader I'm working on. I have a camera attached to a scene graph, which pre-renders (osg::Camera::PRE_RENDER) the scene into an offscreen surface (osg::Camera::FRAME_BUFFER_OBJECT). For debugging purposes I have to see the result of that render pass.
I'm not very good yet in FG internal structure, so I'd like to ask - can this camera be somehow attached to FG camera views (v), or embedded as a separate window?[4] — Vladimir Karmisin
|
I want to give access to every stage of the rendering to the effect system. The geometry pass outputs to render target, but the fog, the lights, the bloom need to have access to the textures of the buffer, and there is a separate one for each camera associated to windows or sub windows. [5] — Frederic Bouvier
|
It would be nice if the Effects framework had a way to load arbitrary textures and make them available to effects. I don't know if there is a better way to create your texture offline than to write C++ code in simgear. OSG will read a TIFF file with 32 bits per component as a floating point texture... assuming you can create such a thing.[6] — Tim Moore
|
modify the Renderer class to separate from the scenegraph, terrain and models on one hand, the skydome and stars on the other, and finally the clouds.
These three elements are passed to the CameraGroup class in order to be treated separately in the new rendering engine (and put together in the current one).[7] — Frederic Bouvier
|
I want to point out my work on my "newcameras" branch: https://gitorious.org/fg/zans-flightgear?p=fg:zans-flightgear.git;a=shortlog;h=refs/heads/newcameras which allows user to define the rendering pipeline in preferences.xml. It does not (yet?) have everything Rembrandt's pipeline needs, but most likely is easily enhanced to support those things.
Basically this version adds support for multiple camera passes, texture targets, texture formats, passing textures from one pass to another etc, while preserving the standard rendering line if user wants that. I wish this work could be extended (or maybe even I can extend it ;) to handle the Rembrandt camera system. This will not solve all problems in the merge, but some of them.[8]— Lauri Peltonen
|
I was not aware of your work. But given what you write here, this looks pretty promising. Fred mentioned your name in an offline mail. I would highly appreciate that we do not lock out low-end graphics boards by not having any fallback. Maybe you both should combine forces?
From what I read, I think both are heading in the same global direction and both implementations have some benefits over the other?[9] — Mathias Fröhlich
|
I would like to extend the format to avoid duplicating the stages when you have more than one viewport. What I see is to specify a pipeline as a template, with conditions like in effects, and have the current camera layout refer the pipeline that would be duplicated, resized and positioned for each declared viewport[10] — Frederic Bouvier
|
Mapping cameras to different windows, which can be opened on arbitrary screens, will absolutely still be supported. I know that multi-GPU setups are
important for professional users and our demos.[11] — Tim Moore
|
I believe that we need to distinguish between different render-to-texture cameras. Camera nodes must be accessible from within flightgear: the ones that will end up in MFD displays or HUDs or whatever is pinned onto models, and the ones that are real application windows like what you describe - an additional fly-by view, and so on. And I believe that we should keep these separate and not intermix the code required for application-level stuff with the building of 3d models that do not need any application-level code to animate the models ... I think of some kind of separation that will also be good if we do HLA between a viewer and an application computing physical models, or controlling an additional view hooking into a federate ...[12] — Mathias Fröhlich
|
I've done some work with setting up a model of a pan/tilt camera system that can point at a specific wgs84 point or along a specific NED vector
(i.e. nadir, or exactly at my shadow, etc.) This was [unfortunately] for a paid consulting project so that code doesn't live in the FlightGear tree. However, it's really easy to configure a view that stays locked on a specific lon/lat and I hacked a small bit of nasal to copy the point you click on over into the view target variables so you can click any where in the scene and the pan/tilt camera will hold center on that exact location. FlightGear offers a lot of flexibility and comparability in this arena.[13] — Curtis Olson
|
Would it be possible to place the new "view" into a window instead of having a dedicated view? That would allow you to have an instrument panel with a blank cut-out that could hold this newscam/FLIR window. The easiest way to visualize the idea I have is to think about the view you'd see in one of the rear-view mirrors that most fighters have along the canopy bow (and the Spitfire has mounted on top of the canopy bow, outside the cockpit). You'd see your full screen view as usual, but you'd also have these "mirrors" showing the view behind you at the same time.[14] — Gene Buckle
|
One thing we have to consider with rear view mirrors is that we don't currently have the ability to flip the display for the "mirror" effect. There's got to be a very simple view transform matrix that would invert the display in the horizontal direction - probably the identity matrix with the appropriate axis negated (-1). It might be a relatively simple thing to add to the view transform pipeline at some point.[15] — Curtis Olson
|
I had a look at this idea a while back - the problem I came across was that the camera would show the view to the rear, NOT the mirror image. I couldn't see a way around that without a great deal of processing. At that point I gave up.[16] — Vivian Meazza
|
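The mirror transform Curt describes amounts to scaling one axis of the view transform by -1; a minimal sketch (Vec3 and mirrorX are invented for illustration, and applying this to an actual osg::Camera view matrix is left out):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

// Equivalent to multiplying by the matrix diag(-1, 1, 1): the identity
// with the horizontal axis negated, which turns a rear-facing camera
// view into a proper mirror image.
Vec3 mirrorX(const Vec3& v)
{
    return { -v.x, v.y, v.z };
}
```

In OSG terms this would be a `osg::Matrix::scale(-1, 1, 1)` pre-multiplied into the slave camera's view matrix; note that a negated axis also flips triangle winding, so front-face culling would need to be swapped for that pass.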
As has been said previously, the proper way to support "cameras" via Canvas is using CompositeViewer, which does require re-architecting several parts of FG: CompositeViewer Support. Given the current state of things, that seems at least another 3-4 release cycles away. So, short of that, the only thing that we can currently support with reasonable effort is "slaved views" (as per $FG_ROOT/Docs/README.multiscreen). That would not require too much in terms of coding, because the code is already there - in fact, CameraGroup.cxx already contains a RTT/FBO (render-to-texture) implementation that renders slaved views to an offscreen context. This is also how Rembrandt buffers are set up behind the scenes. So basically, the code is there; it would need to be extracted/generalized and turned into a CanvasElement, and possibly integrated with the existing view manager code. |
And then, there also is Zan's newcameras branch, which exposes rendering stages (passes) to XML/property tree space, so that individual stages are made accessible to shaders/effects. Thus, most of the code is there; it is mainly a matter of integrating things, i.e. it would require someone able to build SG/FG from source, familiar with C++, and willing/able to work through some OSG tutorials/docs to make this work: Canvas Development#Supporting Cameras. On the other hand, Canvas is/was primarily about exposing 2D rendering to fgdata space, so that fgdata developers could develop and maintain 2D rendering related features without having to be core developers (core development being an obvious bottleneck, as well as having a significant barrier to entry). In other words, people would need to be convinced to let Canvas evolve beyond the 2D use-case, i.e. by allowing effects/shaders per element, but also by letting cameras be created/controlled easily. Personally, I do believe that this is a worthwhile thing to aim for, as it would help unify (and simplify) most RTT/FBO handling in SG/FG, and make this available to people like Thorsten who have a track record of doing really fancy, unprecedented stuff with this flexibility. Equally, there are tons of use-cases where aircraft/scenery developers may want to set up custom cameras (A380 tail cam, space shuttle) and render those to an offscreen texture (e.g. a GUI dialog and/or MFD screen). |
Tail cams are slaved cameras, so they could use code that already exists in FG, which would need to be integrated with the Canvas system and exposed as a dedicated Canvas element (kinda like the view manager rendering everything to a texture/osg::Geode). There's window setup/handling code in CameraGroup.cxx which sets up these slaved views and renders the whole thing to an osg::TextureRectangle, which is pretty much what needs to be extracted and integrated with a new "CanvasCamera" element - the boilerplate for which can be seen at: [Canvas]. The whole RTT/FBO texture setup can be seen here: http://sourceforge.net/p/flightgear/flightgear/ci/next/tree/src/Viewer/CameraGroup.cxx#l994 That code would be redundant in the Canvas context, i.e. it could be replaced by a Canvas FBO instead. The next step would then be wrapping the whole thing in a CanvasCamera and exposing the corresponding view parameters as properties (propertyObject), so that slaved cameras can be controlled via Canvas. Otherwise, there is only very little else needed, because the CanvasMgr would handle updating the Camera and render everything to the texture that you specified. |
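A very rough sketch of what such a "CanvasCamera" element could look like - entirely hypothetical: a std::map stands in for SGPropertyNode/propertyObject here, and the actual slaved-view and RTT/FBO setup from CameraGroup.cxx is omitted:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical CanvasCamera element: view parameters are exposed as
// properties so a slaved camera can be steered from property space.
class CanvasCamera {
public:
    void setProperty(const std::string& name, double value) { _props[name] = value; }

    double getProperty(const std::string& name) const {
        auto it = _props.find(name);
        return it == _props.end() ? 0.0 : it->second;
    }

    // Called once per frame from the element's update path: in a real
    // implementation, the orientation would be written into the slaved
    // osg::Camera's view matrix before the RTT pass.
    void update() { _headingDeg = getProperty("heading-deg"); }

    double headingDeg() const { return _headingDeg; }

private:
    std::map<std::string, double> _props;
    double _headingDeg = 0.0;
};
```

The point of the sketch is the split of responsibilities: the element only mirrors properties into camera state, while the CanvasMgr drives update() and the existing CameraGroup RTT code renders into the canvas texture.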
- ↑ James Turner (Nov 16th, 2016). [Flightgear-devel] Hangar thumbnails, screenshots, splash-screens .
- ↑ Avionyx (Wed Mar 12, 2014 7:08 am). Sub window view.
- ↑ seabutler (Fri Jan 24, 2014 5:38 am). "Half" the FG window?.
- ↑ Vladimir Karmisin (Thu, 08 Jan 2009 05:17:07 -0800). FG - camera for debugging purposes..
- ↑ Frederic Bouvier (Sun, 01 Jan 2012 07:14:43 -0800). Announcing Project Rembrandt.
- ↑ Tim Moore (Tue, 24 Jul 2012 22:38:35 -0700). Functions to textures?.
- ↑ Frederic Bouvier (Wed, 07 Mar 2012 05:08:06 -0800). RFC: changes to views and cameras.
- ↑ Lauri Peltonen (Wed, 07 Mar 2012 04:58:44 -0800). Rembrandt the plan.
- ↑ Mathias Fröhlich (Wed, 07 Mar 2012 10:15:31 -0800). Rembrandt the plan.
- ↑ Frederic Bouvier (Wed, 07 Mar 2012 05:08:06 -0800). RFC: changes to views and cameras.
- ↑ Tim Moore (30 Jun 2008 22:46:34 -0700). RFC: changes to views and cameras.
- ↑ Mathias Fröhlich (30 Jun 2008 22:46:34 -0700). RFC: changes to views and cameras.
- ↑ Curtis Olson (Tue, 15 May 2012 14:19:34 -700). LiDAR simulation in FG and powerline scenery.
- ↑ Gene Buckle (Thu, 23 Jul 2009 10:11:05 -0700). view manager "look at" mode.
- ↑ Curtis Olson (Thu, 23 Jul 2009 10:11:05 -0700). view manager "look at" mode.
- ↑ Vivian Meazza (Thu, 23 Jul 2009 10:11:05 -0700). view manager "look at" mode.
Effects / Shaders
Note When it comes to supporting effects and shaders, people generally have two use-cases in mind for Canvas:
|
extending Canvas to allow effects (or at least shaders) to be applied to a Window/Desktop should be easy. [1]
In mid-2016, a number of contributors discussed another workaround to use Canvas textures in conjunction with effects/shaders: simply allowing an arbitrary Canvas to be registered as a material via SGMaterialLib, e.g. using an API in the form of myCanvas.registerMaterial(name: "myCanvasMaterial");
Equally, materials would make it possible to easily use arbitrary effects and shaders per Canvas element, i.e. just by setting a few properties that are then processed by a Canvas::Element helper function:
Effect *effect = 0;
SGMaterialCache* matcache = matlib->generateMatCache(b.get_center());
SGMaterial* mat = matcache->find( "myCanvasMaterial" );
delete matcache;
if ( mat != NULL ) {
// set OSG State
effect = mat->get_effect();
} else {
SG_LOG( SG_TERRAIN, SG_ALERT, "Ack! unknown use material name = myCanvasMaterial");
}
Could canvas be used to take a view from a certain area in a certain direction and render it onto a fuselage--in other words, to create a reflection? |
The effects system pre-dates Canvas by several years - meanwhile, it would be one of the more natural choices for optionally interfacing/integrating both, without this integration being specific to a single use-case (e.g. aircraft/liveries). We've got other useful work related to effects that never made it into git and that predates Canvas by several years - but when it comes to managing dynamically created textures, canvas can probably be considered the common denominator, and it doesn't make much sense to add even more disparate features that cannot be used elsewhere.
I'm currently experimenting with a 2D Canvas and rendering everything to a texture. For this I use FGODGauge to render to texture and FGODGauge::set_texture to replace a texture in the cockpit with the texture from the fbo. This works very well [...] I have just extended the ReplaceStaticTextureVisitor::apply(osg::Geode& node) method to also replace textures inside effects. It works now by using the same technique as for the SGMaterialAnimation, where a group is placed in between the object whose texture should be changed and its parent. This group overrides the texture:
virtual void apply(osg::Geode& node)
{
simgear::EffectGeode* eg =
dynamic_cast<simgear::EffectGeode*>(&node);
if( eg )
{
osg::StateSet* ss = eg->getEffect()->getDefaultStateSet();
if( ss )
changeStateSetTexture(ss);
}
else
if( node.getStateSet() )
changeStateSetTexture(node.getStateSet());
int numDrawables = node.getNumDrawables();
for (int i = 0; i < numDrawables; i++) {
osg::Drawable* drawable = node.getDrawable(i);
osg::StateSet* ss = drawable->getStateSet();
if (ss)
changeStateSetTexture(ss);
}
traverse(node);
}
stateSet->setTextureAttribute(0, _new_texture,
osg::StateAttribute::OVERRIDE);
stateSet->setTextureMode(0, GL_TEXTURE_2D, osg::StateAttribute::ON);
— Thomas Geymayer
|
If you want to pass substantial amounts of data, I’d suggest to use a texture (with filtering disabled, probably) to pass the info. Since we don’t have much chance of using the ‘correct’ solution (UBOs) in the near future.
If you need help generating a suitable texture on the CPU side, let me know.[3] — James Turner
|
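The CPU side of James' suggestion might be sketched like this - an illustration under assumptions: the RGBA packing scheme, the padding, and the function name are made up here, and uploading the resulting buffer as a floating-point osg::Image/texture is omitted:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Pack an arbitrary float array into an RGBA float "texture" buffer, one
// vec4 texel per four values, zero-padded to whole texels - so a shader
// can fetch it (with filtering disabled) as a poor man's UBO.
std::vector<float> packAsRgbaTexture(const std::vector<float>& data, int* outTexels)
{
    const int components = 4;  // RGBA
    const int texels = (int(data.size()) + components - 1) / components;
    std::vector<float> texture(std::size_t(texels) * components, 0.0f);
    std::copy(data.begin(), data.end(), texture.begin());
    *outTexels = texels;
    return texture;
}
```

Filtering must stay disabled (nearest sampling) on the GPU side, since interpolating between neighbouring texels would mix unrelated data values.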
- ↑ https://sourceforge.net/p/flightgear/mailman/message/37608469/
- ↑ Thomas Geymayer (Tue, 01 May 2012 15:34:41 -0700). Replace texture with RTT.
- ↑ James Turner ( 2014-03-07 10:27:40). Passing arrays to a shader.
At some point, the canvas system itself could probably benefit from also being able to use the Effects/Shader framework, so that canvas textures can optionally be processed via effects and shaders before they get drawn. That should make all sorts of fancy effects possible, such as night vision cameras or thermal views rendered to canvas textures/groups.
It is not yet clear how best to address this; the easiest option might be to specify via (boolean) properties whether effects or vertex/fragment shaders shall be invoked, including their file names relative to $FG_ROOT.
That would then disable the default rendering pipeline for those canvas textures and use the shaders instead.
Basically, anything that's not directly possible via the core canvas system or via its Nasal wrappers, would then be handled via effects/shaders. So we would gain lots of flexibility, and performance benefits.
For the time being, neither effects nor shaders are exposed/accessible to the Canvas system, so depending on what you have in mind, you may need to extend the underlying base class accordingly - a simple proof-of-concept to get you going would be this:
#include <osg/Shader>
....
osg::ref_ptr<osg::Program> shadeProg(new osg::Program);
// set up Vertex shader
osg::ref_ptr<osg::Shader> vertShader(
osg::Shader::readShaderFile(osg::Shader::VERTEX, filename1));
// set up fragment shader
osg::ref_ptr<osg::Shader> fragShader(
osg::Shader::readShaderFile(osg::Shader::FRAGMENT, filename2));
//Bind each shader to the program
shadeProg->addShader(vertShader.get());
shadeProg->addShader(fragShader.get());
//Attaching the shader program to the node
osg::ref_ptr<osg::StateSet> objSS = _transform->getOrCreateStateSet();
objSS->setAttribute(shadeProg.get());
To make things better configurable, you can expose things like the type of shader and filename to the property tree by using the propertyObject<> template, e.g.:
#include <simgear/props/propertyObject.hxx>
....
simgear::PropertyObject<std::string> vertex_filename(simgear::PropertyObject<std::string>::create(n, "shader.vert"));
simgear::PropertyObject<std::string> fragment_filename(simgear::PropertyObject<std::string>::create(n, "shader.frag"));
For additional details, refer to Howto:Use Property Tree Objects.
Ideally, there could be a simple interface class, so that these things can be customized via listeners, like the property-observer helper, just specific to enabling shaders for a canvas texture.
So if people want to create really fancy textures or camera views, they could then use effects/shaders, which would keep the design truly generic and ensure that no bloat is introduced into the main canvas system.
We did have some discussions about supporting per-canvas (actually per-Canvas::Element) effects and shaders via properties; TheTom even mentioned that he was interested in supporting this at some point, especially given the number of projects that could be realized like that (FLIR, night vision, thermal imaging etc.) - but so far, quite a few other things are obviously taking precedence - so, as far as I am aware, nobody is currently working on effects/shader support for canvas, even though I am sure that this would be highly appreciated.
At the time of writing this (02/2014) the Canvas does not yet include any support for applying custom effects or Shaders to canvas elements or the whole canvas itself - however, supporting this is something that's been repeatedly discussed over time, so we're probably going to look into supporting this eventually[4].
If the canvas can internally be referenced by a texture2D() call, then it should be easy: the fragment shader knows screen-resolution pixel coordinates, so it is straightforward to look up the local pixel from the texture and then blur, recolor or distort it, or do whatever else you have in mind.
Menu lighting based on light in the scene might be cool
These shouldn't even be very complicated to do
Assuming the canvas is internally a quad with a properly uv-mapped texture, then:
- making the vertex shader just pass everything through and
- uniform sampler2D myTexture; should make that texture available to the fragment shader
- vec2 coords = gl_TexCoord[0].xy; should get the coordinates of the local pixel inside the texture
#version 120
uniform sampler2D input_tex;
void main() {
// get the texture coords of the pixel
vec2 coords = gl_TexCoord[0].xy;
//look up the pixel color from the input texture
vec4 color = texture2D( input_tex, coords) ;
// and pass the pixel color through
gl_FragColor = color;
}
There are at least 2-3 people who can help with pointers, but we don't have time to implement this ourselves - so if anybody is interested, please get in touch via the canvas subforum.
The Effects framework is implemented in SimGear: https://sourceforge.net/p/flightgear/simgear/ci/next/tree/simgear/scene/material
void main(void) {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
// based on:
// http://people.freedesktop.org/~idr/OpenGL_tutorials/03-fragment-intro.html
// adapted by i4dnf as per: http://wiki.flightgear.org/Talk:Canvas_Development
// ** untested **
varying float dist_squared; // must be written by a matching vertex shader
void main(void)
{
vec4 baseColor = vec4(.90, .90, .90, 0.0);
vec4 subtractColor = vec4(-.70, -.70, -.50, -0.2);
float doSubtract = step(400.0, dist_squared);
vec4 fragColor = doSubtract * subtractColor + baseColor;
gl_FragColor = fragColor;
}
Implementation-wise, supporting shaders per canvas seems straightforward, but it would probably be better to support shaders per element: each element would render its own sub-texture if shaders/effects are specified, and apply the canvas' osg::StateSet otherwise. We could add an interface on top of SimGear's Effects framework which would be implemented by the Canvas itself, but also by Canvas::Element.
- Probably need to extend the Effects framework to support reloading effects/shaders from disk for testing purposes
Also see:
- http://trac.openscenegraph.org/projects/osg/wiki/Support/Tutorials/ShadersIntroduction
- http://trac.openscenegraph.org/projects/osg/wiki/Support/Tutorials/ShadersUpdating
- http://en.wikipedia.org/wiki/OpenGL_Shading_Language
- http://www.cuboslocos.com/tutorials/OSG-Shader
- https://forum.flightgear.org/viewtopic.php?t=22166
- https://forum.flightgear.org/search.php?keywords=canvas+shader
- https://forum.flightgear.org/search.php?keywords=canvas+shaders
- https://forum.flightgear.org/search.php?keywords=canvas+effects
References
|
GDAL/OGR
Work in progress This article or section will be worked on in the upcoming hours or days. See history for the latest developments. |
A large benefit of using the raw DEM will be for moving maps - the elevation is pretty much displayable as-is.[1]
there's also been talk about possibly supporting a dedicated PDF element eventually:
Hmmm, I'm now wondering about a canvas PDF viewer ! |
Now to see what happens with the EFB ideas and the canvas PDF support . |
Canvas cannot currently deal with PDF files directly. OSG does have support for this kind of thing, but we would need to add a few dependencies, i.e. a PDF rendering library like "poppler" that would render a PDF to an osg::Image. At that point, it could be dealt with like a conventional canvas image, and could even be retrieved via HTTP. Extending Canvas accordingly could actually be useful, because it would allow us to render other PDFs inside dialogs - such as the manual itself, i.e. as part of some kind of integrated "help" system. The question is whether TheTom can be convinced that this is a worthwhile goal. But it's clearly something for post-3.2
|
It may make sense to revisit this idea. Supporting a subset of PDF would not be too difficult, but it would be better to use a proper PDF library and OSG's built-in support for rendering a PDF to a texture, which could then be easily turned into a new Canvas element, as per the example at Canvas Development#Adding a new Element. The coding part is relatively straightforward (basically copy&paste), but getting the dependencies/CMake magic right for all supported FG platforms would probably require a bit of work. |
More recently, another idea is to add dedicated PDF support to the core Canvas system, so that arbitrary PDF files can be rendered onto a Canvas: https://forum.flightgear.org/viewtopic.php?p=258282#p258282 |
If you are interested in working on any of these, please get in touch via the canvas sub forum first.
- ↑ psadro_gm (Sep 10th, 2016). Re: Next-generation scenery generating? .
You will want to add a new Canvas::Element subclass whenever you want to add support for features which cannot currently be expressed easily (or efficiently) using existing means/canvas drawing primitives (i.e. via existing elements and scripting-space frameworks).
For example, this may involve projects requiring camera support, i.e. rendering scenery views to a texture, rendering 3D models to a texture or doing a complete moving map with terrain elevations/height maps (even though the latter could be implemented by sub-classing Canvas::Image to some degree).
Another good example for implementing new elements is rendering file formats like PDF, 3d models or ESRI shape files.
To create a new element, you need to create a child class which inherits from the Canvas::Element base class (or any of its subclasses, e.g. Canvas::Image) and implement the parent class' interface by providing/overriding the corresponding virtual methods.
To add a new element, these are the main steps:
- Set up a working build environment (including simgear): Building FlightGear
- update/pull simgear,flightgear and fgdata
- check out a new set of topic branches for each repo: git checkout -b topic/canvas-CanvasPDF
- Navigate to $SG_SRC/canvas/elements
- Create a new set of files CanvasPDF.cxx/.hxx (as per Adding a new Canvas element)
- add them to $SG_SRC/canvas/elements/CMakeLists.txt (as per Developing using CMake)
- edit $SG_SRC/canvas/elements/CanvasGroup.cxx to register your new element (header and staticInit)
- begin replacing the stubs with your own C++ code
- map the corresponding OSG/library APIs to properties/events understood by the Canvas element (see the valueChanged() and update() methods)
- alternatively, consider using dedicated Nasal/CppBind bindings
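The registration step (the staticInit part of CanvasGroup.cxx) boils down to a name-to-factory map: each element type is created by name as it appears in the property tree. The following is a stripped-down, dependency-free sketch of that pattern in plain C++; the class names (Element, ElementRegistry, PdfElement) are hypothetical stand-ins, not the actual SimGear types:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

// Hypothetical sketch of the element-factory pattern used when
// registering Canvas elements: each element type registers a factory
// under its property-tree name (e.g. "group", "text", "pdf").
struct Element {
    virtual ~Element() = default;
    virtual std::string typeName() const = 0;
};

using ElementFactory = std::function<std::unique_ptr<Element>()>;

class ElementRegistry {
public:
    void add(const std::string& name, ElementFactory f) {
        _factories[name] = std::move(f);
    }

    // Look up the factory by element name; unknown names yield nullptr
    std::unique_ptr<Element> create(const std::string& name) const {
        auto it = _factories.find(name);
        if (it == _factories.end())
            return nullptr;
        return it->second();
    }

private:
    std::map<std::string, ElementFactory> _factories;
};

// A new element such as the hypothetical CanvasPDF would register
// itself under the name used in the property tree:
struct PdfElement : Element {
    std::string typeName() const override { return "pdf"; }
};
```

In the real code, the factory additionally receives the SGPropertyNode describing the element, and registration happens once at startup.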
Below, you can find patches illustrating how to approach each of these steps using boilerplate code, which you will need to customize/replace accordingly:
Caution This custom Canvas element requires a 3rd-party library which is not currently used by SimGear/FlightGear, so the top-level CMakeLists.txt file in $SG_SRC needs a corresponding find_package() call, and you also need to download/install the corresponding library for building sg/fg. In addition, the CMake module itself may need to be placed in $SG_SRC/CMakeModules:
CanvasPDF: (required cmake changes)
|
Discussed new Elements
See Canvas Sandbox for the main article about this subject. |
The previously mentioned primitives alone can already be used to create very sophisticated avionics and dialogs - however, depending on your needs, you may want to extend the canvas system to support additional primitives. Typically, you'll want to add new primitives in order to optimize performance or simplify the creation of more sophisticated avionics and/or dialogs (e.g. for mapping/charting purposes). If you are interested in adding new primitives, please take a look at the sources in $SG_SRC/canvas/elements.
For example, there's been talk about possibly adding the following additional primitives at some point. However, none of these are currently a priority or being worked on by anybody:
- support for a vertical mapping mode (e.g. to create Vertical Situation Displays or flight-path evaluation dialogs); it would probably make sense to use PROJ4 for additional projection support
- support for rendering scenery views (e.g. for tail cameras or mirrors etc) [5] [6] ticket #1250
- support for ESRI shapefiles (instead of using shapelib, it would make sense to use GDAL/OGR here, or directly the OSG/ReaderWriterOGR plugin) [7] (FlightGear/osgEarth now depends on GDAL, so should be straightforward dependency-wise):
- support for GeoTIFF files or terrain height profiles using the tile cache
- rendering 3D objects
- support for orthographic moving map displays, e.g. using atlas [8] (ideally using CompositeViewer Support):
There is already support for creating multiple osgviewer windows in FlightGear; this is commonly used in multiscreen setups. To support the creation and usage of osgviewer windows in Canvas, we would need to look into adding a new placement type to the canvas system, so that osgviewer/OS windows can be created and controlled via the canvas system and a handful of placement-specific properties [9].
Placements
Obviously, users can use the canvas system for developing all sorts of features that may need to be accessible through different interfaces. For this reason, the canvas uses the concept of so-called placements, so that a canvas texture can be shown inside GUI windows, GUI dialogs, cockpits, aircraft textures (liveries) - and also as part of the scenery (e.g. for a VGDS).
In simgear's Canvas::update it appears to be using the factories to find the element; this means that it can't find the named OSG node, which makes me think that maybe it is only looking in the ownship (which is a null model).
PlacementFactoryMap::const_iterator placement_factory =
  _placement_factories.find( node->getStringValue("type", "object") );
if( placement_factory != _placement_factories.end() )
{
  Placements& placements =
    _placements[ node->getIndex() ] = placement_factory->second(node, this);
  node->setStringValue( "status-msg",
                        placements.empty() ? "No match" : "Ok" );
}
void CanvasMgr::init() calls sc::Canvas::addPlacementFactory. [1]
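The lookup quoted above can be reduced to the following dependency-free sketch (hypothetical types; the real code lives in SimGear's Canvas sources and passes the SGPropertyNode and canvas pointer to the factory):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the placement-factory lookup performed in
// Canvas::update(): the "type" property selects a factory, which
// returns zero or more placements, and the status message reports
// whether anything matched.
using Placements = std::vector<std::string>;  // stand-in for real placement objects
using PlacementFactory = std::function<Placements(const std::string& /*type*/)>;

std::string applyPlacement(
    const std::map<std::string, PlacementFactory>& factories,
    const std::string& type)
{
    auto it = factories.find(type);
    if (it == factories.end())
        return "Unknown placement type";      // no factory registered for this type
    Placements placements = it->second(type); // factory builds the placements
    return placements.empty() ? "No match" : "Ok";
}
```

This is why adding a new placement type (e.g. an osgviewer window) mostly amounts to registering one more factory via CanvasMgr::init()/addPlacementFactory.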
Note The features described in the following section aren't currently supported or being worked on, but they've seen lots of community discussion over the years, so that this serves as a rough overview. |
Scenery Overlays
Also see Photoscenery via Canvas? post on the forum and A project to create a source of free geo-referenced instrument charts post on the forum
I've been wondering how hard it would be to add a tile loader mode where the default texture is ignored, and instead, a photo texture of the tile is applied. It may not be an optimal photo-texture implementation (but it might be good enough to be fun and interesting?) — Curtis Olson (Oct 1st, 2008). Re: [Flightgear-devel] Loading Textures for Photo-Scenery?.
(powered by Instant-Cquotes) |
[...] FlightGear to superimpose a given texture over a whole terrain tile, given that a texture file with the same name as the tile is found. I think that this would require that either a) TerraGear generate appropriate texture coordinates for the tile, mapping the texture continuously over the whole tile, or b) in the case of loadPhotoScenery, the texture coordinates contained in the .btg.g must be ignored and rebuilt on the fly by FlightGear. — Ralf Gerlich (Oct 1st, 2008). Re: [Flightgear-devel] Loading Textures for Photo-Scenery?.
(powered by Instant-Cquotes) |
Chris Schmitt, Pete & myself have also discussed the MSFS approach, where we render surface information to textures, either on the CPU or GPU. This solves a whole bunch of issues in airports, and allows the generation of the textures to be defined based on user settings, performance, available texture RAM and so on. (Don’t render roads, render fancy boundaries for coastlines, paint snow onto crops based on season) If the textures are re-generated dynamically based on changing view, the user need never see a ‘blurry’ texture. The generated texture doesn’t need to encode RGB, it can encode whatever inputs the shaders like - eg material ID, gradient, distance to boundaries. (And of course, for far away areas, we generate or read a coarse, low-resolution map very cheaply) From my perspective the appeal is this work can be done on a spare CPU core, and it actually fits quite well with something like osgEarth - we let osgEarth handle the elevation data, and the texture-generating code simply becomes the source of raster data which osgEarth overlays on top. With the GPU-based flattening of elevation data it even works to make roads/railways interact with terrain nicely. Whether or not the memory-bandwith burned in moving textures to the GPU is better or worse than doing everything GPU-side as Tim suggests with decals, I have no clue about. Similarly I don’t know how disruptive this scheme would be architecturally - intuitively osgEarth must handle loading different resolutions of raster data interactively - that’s exactly what it does for photo-scenery after all - but I haven’t looked at the API to see how easy or hard such an integration would be. — James Turner (Nov 27th, 2013). Re: [Flightgear-devel] Rendering strategies.
(powered by Instant-Cquotes) |
Texture overlays - FG scenery engine does the chopping and texture co-ord generation. [2] — Paul Surgeon
|
For the sake of completeness (and I am not saying that you should do this; it is almost certainly going to be much worse performance-wise than any shader): if you want the shadow to be accurate despite potential terrain sloping, you could apply a Canvas texture onto the surface (admittedly, this is much more straightforward in the case of an actual 3D model like a carrier). Otherwise, you'll want to use a workaround and attach the texture to the 3D model (aka the main aircraft). But people have been using Canvas for all sorts of purposes, including even liveries: Howto:Dynamic_Liveries_via_Canvas
But unlike glsl/shaders, a Canvas is not primarily a GPU thing, i.e. there's lots of CPU-level stuff going on affecting performance. |
I am looking for a method for adding a graphical overlay channel to Flightgear. This overlay would consist of a dynamic texture that can be
modified in real time. I've used other OpenGL based systems with this feature but don't know where to start with implementing it in Flightgear.[3] — Noah Brickman
|
Once the frame is converted to an opengl texture, then it would be a very simple matter of displaying it on the screen with a textured rectangle drawn in immediate mode ... possibly with some level of transparency, or not ...
I'm involved in some UAV research where we are using FlightGear to render a synthetic view from the perspective of a live flying uav. Would be really cool to super impose the live video over the top of the FlightGear synthetic view. Or super impose an F-16 style HUD on top of the live video ... I have lots of fun ideas for someone with a fast frame grabber and a bit of time [...] Then do whatever bit fiddling is needed to scale/convert the raster image to an opengl texture. Then draw this texture on a quad that is aligned correctly relative to the camera. It might be possible to get fancy and alpha blend the edges a bit. Given an image and the location and orientation of the camera, it would be possible to locate world coordinates across a grid on that image. That would allow a quick/crude orthorectification where the image could be rubber sheeted onto the terrain. This would take some offline processing, but you could end up building up a near real time 3d view of the world than could then be viewed from a variety of perspectives. The offline tools could update the master images based on resolution or currency ... that's probably a phd project for someone, but many of the pieces are already in place and the results could be extremely nice and extremely useful (think managing the effort to fight a dynamic forest fire, or other emergency/disaster management, traffic monitoring, construction sites, city/county management & planning, etc.) I could even imagine some distrubuted use of this so that if you have several uav's out flying over an area, they could send their imagery back to a central location to update a master database ... then the individual operators could see near real time 3d views of places that another uav has already overflown. If we started building up more functionality in this area, there are a lot of different directions we could take it, all of which could be extremely cool.[4]— Curtis Olson
|
Could we generate the texture on the fly? Based on landclass and road data? I could see a number of advantages/disadvantages here as compared to our current, generic textures:
|
A very interesting idea - so interesting I thought of it and discussed it with some people last year :) The summary answer is, it should be possible, it would have pretty much the benefits and drawbacks you mention (especially the VRAM consumption), and it would allow nice LoD and solve some other issues. Especially it avoids the nasty clipping issues we have with surface data in TerraGear, since you just paint into the texture, no need to clip all the linear data.[6]— James Turner
|
What we could do is identify which hooks are needed to make this work and provide those via the Canvas system: Canvas textures can already be placed in the scenery, so there should be very little needed in terms of placement-specific attributes, and the corresponding code should be available in SimGear/FlightGear already.
The patch required to modify FlightGear obviously already uses shaders and effects, and it's mostly about exposing additional parameters to the shaders.
- ↑ Richard Harrison (May 15th, 2016). [Flightgear-devel] Canvas in dynamically loaded scene models .
- ↑ Paul Surgeon. Scenery engine features..
- ↑ Noah Brickman. Overlay Plane.
- ↑ Curtis Olson (Fri, 25 Jan 2008 07:51:41 -0800). Replace fg visualization with streaming video. http://www.mail-archive.com/flightgear-devel@lists.sourceforge.net/msg15459.html
- ↑ Thomas Albrecht. Generating ground textures on the fly?.
- ↑ James Turner. Generating ground textures on the fly?.
Native Windows
Note People interested in working on this may want to check out the following files: |
Currently, all placements are within the main FlightGear window, however there's been talk about providing support for additional Canvas placements, such as e.g. osgviewer placements to help generalize our Window Management routines, so that a canvas can be rendered inside a dedicated OS window:
Would it be possible to place the new "view" into a window instead of having a dedicated view? That would allow you to have an instrument panel with a blank cut-out that could hold this newscam/FLIR window.[1]
Several responded that you can have a view, or multiple camera offsets, shared across many screens. I have tried this and it works well on the Mac. But what I want to do is have two windows, one with a custom view I have defined, and another window with the cockpit view. I'll keep digging, but I read somewhere that this particular thing is hard... because there is only one view manager instance, and it can only allow multiple camera offsets.[2]
We can define arbitrary areas of the screen and draw any view perspective into them. However, I think all the views need to be from the same eye point (i.e. you can't have a cockpit view in one window and a chase view in another). However, the capability we do have is very nice for supporting devices like the Matrox Triple Head 2 Go box, or Twin View, or any "spanning" desktop system. And we have the ability to extend this to multiple displays. There is an AMD/ATI demo movie floating around on youtube that shows FlightGear running on 8 monitors using 4 dual-headed video cards.[3]
Support multiple views/windows: Currently the GUI can only be placed inside one view/window (see Docs/README.multiscreen) but it would be nice to be able to move windows between views.[4] — Thomas Geymayer
|
I have just been trying out the multiple screen feature in FG. I found that the GUI camera (including the menu bar, hud and 2D panel) appears in only one of the windows. Is there any way I can make the GUI to appear in all the windows? Actually I want to be able to view the hud and 2D panel in all the windows.[5] — Kavya Meyyappan
|
there's a limitation in Plib that forces the GUI to be drawn on one window.[6] — Tim Moore
|
I think you have just summarized all the limitations of the FlightGear multi-camera/view/display system. I know that in the case of menus, hud, 2d instrument panels, there would need to be some significant code restructuring to allow these to be displayed on other windows.[7] — Curtis Olson
|
Good thing to have!!! Just still support graphics context on different screens/displays too ...[8] — Mathias Fröhlich
|
it can be solved by using multiple osg windows to contain whatever GUI solution we go with - canvas, osgWidget or PUI-port.
Or to put it another way - the actual hard part is running the widgets in the main OpenGL window - which *is* a requirement for full-screen apps and multi-monitor setups. (Some people have claimed otherwise, but I believe we need the option of 'in-window' UI for many cases). So, this is a desirable feature, but doesn't dictate the choice of GUI technology. And can be done as a separate step from replacing PLIB.[9]— James Turner
|
- ↑ Gene Buckle (Jul 23rd, 2009). Re: [Flightgear-devel] view manager "look at" mode .
- ↑ Carson Fenimore (Feb 6th, 2009). [Flightgear-users] multiple views .
- ↑ Curtis Olson (Jul 23rd, 2009). Re: [Flightgear-devel] view manager "look at" mode .
- ↑ Thomas Geymayer (07-30-2012). Switching from PUI to osgWidget.
- ↑ Kavya Meyyappan (Fri, 19 Mar 2010 03:31:50 -0700). [Flightgear-devel] Help needed with multi-screen.
- ↑ Tim Moore (Sat, 20 Mar 2010 01:42:31 -0700). Re: [Flightgear-devel] Help needed with multi-screen.
- ↑ Curtis Olson (Fri, 19 Mar 2010 08:36:22 -0700). Re: [Flightgear-devel] Help needed with multi-screen.
- ↑ Mathias Fröhlich (Sat, 28 Jun 2008 00:05:19 -0700). Re: [Flightgear-devel] RFC: changes to views and cameras.
- ↑ James Turner (Wed, 25 Jul 2012 02:28:42 -0700). Switching from PUI to osgWidget.
Placement/Element for Streaming: Computer Vision
Note There seem to be two main use-cases discussed by contributors:
|
One of the suggestions would be to develop some kind of shared memory interface, with metadata embedded in the same memory space. After each rendering step, the image would simply be copied to the memory along with the metadata and a frame counter. I already have some tests done on the Windows platform and it works quite well. It is also possible to enable/disable the copy process (which is not too slow, but it is interesting to have a way of controlling it) using the command line parameters. From the shared memory position, any other process could read it and do whatever it wants, which would create a complete horizon of possibilities like streaming, video recording and a more modular architecture for anything related to gathering images; the jpeg server could be separated from FlightGear, for example. Obviously, this requires some kind of process synchronization such as mutexes, which relies on the reading software not blocking it for too long. Another approach would be to have a different architecture inside FlightGear, something like: Renderer -> ImageGrabber -> ImageSaver, where the ImageGrabber is the part of code that reads the image and saves it to a buffer and the ImageSaver is the "externalizer" (JPEGSaver, SharedMemorySaver, MPEGSaver and so on). However, I personally prefer the first option, which enables people to grab images and do whatever they want without the necessity of understanding and recompiling FlightGear source code.[1]
The HTTP server already does this - if you select a ‘low compression’ image format such as TGA or uncompressed PNGs, it’s very close to what you want. It will be using a local TCP socket, not shared memory, but unless you want really large images, I am not sure the additional complexity is worth adding an entirely new image output system for. See the code for how to increase the max-fps (defaults to 5Hz but could be 30 or 60Hz) and file-format of the http-server; any image format supported by OSG ReaderWriter plugin should work. (Well, so long as the plugin implements writing!)[2]
I am using the http stream feature to capture videos with ffmpeg. It is a great feature! — Adam Dershowitz (Aug 17th, 2015). [Flightgear-devel] httpd stream question.
(powered by Instant-Cquotes) |
what is the current suggested easiest way to capture images and videos from FlightGear on a Mac? |
It uses the same last-camera-callback technique and now supports mjpeg streaming, too. — Torsten Dreyer (May 30th, 2014). Re: [Flightgear-devel] Saving Videos.
(powered by Instant-Cquotes) |
The problem is not the decoder but the encoder. I don't have a fast-enough real-time video encoder that lives happily in the FG main loop. I have experimented with ffmpeg which was promising, but it ended up on the very bottom of my backlog :-/ We can do MJPEG stream, try to use /screenshot?stream=y as the screenshot url. MJPEG is ugly and a resource hog but works reasonably well for image sizes of probably 640x480. Scale down your FG window and give it a try. — Torsten Dreyer (Oct 12th, 2015). Re: [Flightgear-devel] phi interface updates.
(powered by Instant-Cquotes) |
People interested in doing UAV work that involves computer vision (e.g. using OpenCV, see ticket #924, [10] , [11] ) will probably also want to look into using a dedicated Canvas placement for this, in combination with adding a dedicated Canvas::Element to render scenery views to a texture using CompositeViewer Support - these two features would provide a straightforward mechanism to export a live video stream of FlightGear via a dedicated port.
Note There were several early attempts at bringing streaming capabilities to FlightGear in the pre-OSG days that are meanwhile unmaintained, e.g.: |
I am currently working with image processing and found that FlightGear is an extremely valuable resource for this kind of research. However, to work with these images, it is necessary to be able to gather image and metadata (Aircraft Position and Orientation, Camera info and other information like model position) from the simulator.
After some time reading the FlightGear forum and wiki, I found the following possibilities:
Nevertheless, these approaches have some small limitations:
My suggestion would be to develop some kind of shared memory interface, with metadata embedded on the same memory space. After each rendering step, the image would be simply copied to the memory along the metadata and a frame counter. I have already some tests done on Windows platform and it works quite well. It is also possible to enable/disable the copy process(Which is not too slow, but it is interesting to have a way of controlling it) using the command line parameters. From the shared memory position, any other process could read it and do whatever it wants, which would create a complete horizon of possibilities like streaming, video recording and a more modular architecture to anything related to gathering images, the jpeg server could be separated from FlightGear, for example. Obviously, this requires some kind of process synchronization such as mutexes, which relies on the reading softwares not to block it for a too long time. Another approach would be to have a different architecture inside FlightGear, something like: Renderer -> ImageGrabber -> ImageSaver Where the ImageGrabber is the part of code that reads image and saves it on a buffer and ImageSaver is the "externalizer" (JPEGSaver, SharedMemorySaver, MPEGSaver and so on). However, I personally prefer the first option, which enables people to grab image and do whatever they want without the necessity of understanding and recompiling FlightGear source code. I'm looking for opinions, suggestions and observations about this technique before implementing it in a more standardized way and proposing the code.[3]— Emilio Eduardo
|
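The shared-memory approach quoted above hinges on a frame counter so that readers can detect new frames without blocking the renderer. Here is a minimal, single-process sketch of that handshake in plain C++; it is purely illustrative (no actual shared-memory mapping or mutexes, and all names are hypothetical):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch of the producer/consumer handshake described
// above: after each rendering step the renderer copies the frame plus
// metadata into a shared buffer and bumps a frame counter; readers
// poll the counter to detect fresh frames. Real code would place this
// struct in an OS shared-memory segment and guard it with a mutex.
struct FrameBuffer {
    uint64_t frameCounter = 0;                 // incremented after each copy
    double   latitude = 0.0, longitude = 0.0;  // embedded metadata
    std::vector<uint8_t> pixels;               // raw image data
};

// Producer side (the renderer): copy the frame and signal via the counter
void publishFrame(FrameBuffer& shared, const std::vector<uint8_t>& frame,
                  double lat, double lon)
{
    shared.pixels = frame;   // copy image into the "shared" region
    shared.latitude = lat;
    shared.longitude = lon;
    ++shared.frameCounter;   // the counter bump signals a new frame
}

// Reader side: true if a frame newer than lastSeen is available
bool hasNewFrame(const FrameBuffer& shared, uint64_t lastSeen)
{
    return shared.frameCounter > lastSeen;
}
```

This keeps the consumer fully decoupled from FlightGear, which is exactly the modularity argument made in the quote.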
I'm new to FlightGear, and am trying to use it as an image generator for a simulator I'm developing... I've got it configured to take inputs from a UDP port to fly, but I want to disable a lot of features so that all FlightGear does is draw scenery. [4] — Drew
|
I would like to use FlightGear to generate the scene observed by a UAV's onboard camera.
Basically, this would translate to feeding FlightGear the FDM data and visualizing the image generated by FlightGear in another computer, across a network, using for example streaming video. I suppose this is a bit of a far-fetched idea, but is there any sort of support for this (or something similar) already implemented? [5]— Antonio Almeida
|
I am interested in using it as a visualization tool for UAV's. I would like to replace the fg scenery with images captured from a camera onboard an aircraft. I was wondering if there is any way to import images into flightgear on the fly. The basic goal would be to show live video where available and fall over to flight gear visuals when the feed is lost (using a custom view from the camera perspective).[6] — STEPHEN THISTLE
|
I'm hooking up a lumenera Camera for a live video feed from a UAV, so that the video gets handed to Flightgear, which then draws its HUD over the video stream. In order to do this, I need to be able to communicate with the window controls. My camera can display the video in a new window, but I want it to draw to the video screen that Flightgear is already using.[7] — Bruce-Lockhart
|
I don't think there's any current way to do this. However, I think what is needed is to link in some video capture library to do frame grabs from your video camera as quickly as possible. Then do whatever bit fiddling is needed to scale/convert the raster image to an opengl texture. Then draw this texture on a quad that is aligned correctly relative to the camera. It might be possible to get fancy and alpha blend the edges a bit.
Given an image and the location and orientation of the camera, it would be possible to locate world coordinates across a grid on that image. That would allow a quick/crude orthorectification where the image could be rubber sheeted onto the terrain. This would take some offline processing, but you could end up building up a near real time 3d view of the world than could then be viewed from a variety of perspectives. The offline tools could update the master images based on resolution or currency ... that's probably a phd project for someone, but many of the pieces are already in place and the results could be extremely nice and extremely useful (think managing the effort to fight a dynamic forest fire, or other emergency/disaster management, traffic monitoring, construction sites, city/county management & planning, etc.) I could even imagine some distrubuted use of this so that if you have several uav's out flying over an area, they could send their imagery back to a central location to update a master database ... then the individual operators could see near real time 3d views of places that another uav has already overflown. If we started building up more functionality in this area, there are a lot of different directions we could take it, all of which could be extremely cool.[8]— Curtis Olson
|
Getting live video onto a texture is pretty standard stuff in the OpenSceneGraph community[9] — Tim Moore
|
I imagined embedding some minimal routine that talks to the camera and grabs an image frame. Then usually you can directly map this into an OpenGL texture if you figure out the pixel format of your frame grab and pass the right flags to the OpenGL texture create call. Then you should be able to draw this texture on top of any surface just like any other texture ... you could map it to a rectangular area of the screen, you could map it to a rotating cube, map it to the earth surface, etc. That's about as far as I've gone with thinking through the problem.[10] — Curtis Olson
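The pixel-format conversion Curtis mentions is usually the only non-trivial step. Below is a self-contained sketch (not FlightGear code) of converting a YUYV (YUV 4:2:2) camera frame, a common webcam format, into the RGB byte layout an OpenGL texture upload expects; the fixed-point coefficients are the usual ITU-R BT.601 approximations.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Clamp an intermediate value into the valid 0..255 byte range.
static inline uint8_t clamp8(int v) { return static_cast<uint8_t>(std::min(255, std::max(0, v))); }

// Convert a YUYV (YUV 4:2:2) frame to a packed RGB888 buffer.
// Each 4-byte macropixel (Y0 U Y1 V) yields two RGB pixels.
std::vector<uint8_t> yuyvToRgb(const std::vector<uint8_t>& yuyv, int width, int height)
{
    std::vector<uint8_t> rgb(static_cast<size_t>(width) * height * 3);
    for (size_t i = 0, j = 0; i + 3 < yuyv.size(); i += 4) {
        int y0 = yuyv[i], u = yuyv[i + 1] - 128, y1 = yuyv[i + 2], v = yuyv[i + 3] - 128;
        for (int y : {y0, y1}) {
            rgb[j++] = clamp8(y + ((359 * v) >> 8));           // R
            rgb[j++] = clamp8(y - ((88 * u + 183 * v) >> 8));  // G
            rgb[j++] = clamp8(y + ((454 * u) >> 8));           // B
        }
    }
    return rgb;
}
```

In real code the resulting buffer would be handed to something like osg::Image::setImage() (or glTexImage2D) once per frame, and the texture mapped onto whatever geometry is desired.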
|
I want to draw something on the front face of the FlightGear view, but I don't want to recompile/modify any code. So if FlightGear could give me an interface to draw something myself through a DLL, that would be perfect.[11] — CHIV
|
- ↑ Emilio Eduardo Tressoldi Moreira (Jun 30th, 2014). [Flightgear-devel] Rendered image export to Shared Memory .
- ↑ James Turner (Jun 30th, 2014). Re: [Flightgear-devel] Rendered image export to Shared Memory .
- ↑ Emilio Eduardo (2014-06-30 13:33:10). Rendered image export to Shared Memory - msg#00118.
- ↑ Drew (Tue, 25 Jan 2005 09:24:30 -0800). Disabling functionality.
- ↑ Antonio Almeida (Tue, 22 May 2007 10:14:46 -0700). Flightgear visualization as streaming video.
- ↑ STEPHEN THISTLE (Fri, 25 Jan 2008 06:32:03 -0800). Replace fg visualization with streaming video.
- ↑ cullam Bruce-Lockhart (Tue, 29 Jul 2008 09:23:54 -0700). Window controls.
- ↑ Curtis Olson (Fri, 25 Jan 2008 07:51:41 -0800). Replace fg visualization with streaming video.
- ↑ Tim Moore (Fri, 25 Jan 2008 08:31:40 -0800). Replace fg visualization with streaming video.
- ↑ Curtis Olson. Window controls.
- ↑ CHIV (Thu May 08, 2014 3:03 am). One suggestion: FlightGear wolud support plugins like this!.
Adding new Placements
Note should be linking to the actual sources/line numbers here |
Work in progress This article or section will be worked on in the upcoming hours or days. See history for the latest developments. |
Let's assume we'd like to add a new type of placement: one for treating any Canvas as a raster image that can be fetched via the built-in httpd server, or even streamed as MJPEG. For that to work, we need to be able to fetch the Canvas, convert it to an osg::Image and register the whole thing with the mongoose integration ($FG_SRC/Network/httpd). Next, we need to register a corresponding camera draw callback to obtain the image, and notify the mongoose code to register a new handler and a class providing the corresponding image [1].
In Canvas terms, the way a Canvas is placed is handled by a so-called Placement: just another class that responds to placement-specific events, mainly relevant property updates.
In this particular case, it would make sense to support a handful of events/attributes:
- output format (png, jpeg, mjpeg)
- size of the image to be streamed (width/height)
- color depth
- name (to be used for requests)
- update frequency (usually, once or twice per second should suffice)
The main steps would be:
- Create a new set of files in $SG_SRC/canvas, named CanvasHttpdPlacement.cxx/.hxx
- Use the CanvasObjectPlacement files as a template: rename them and update the include guards/comments accordingly
- Open FGCanvasSystemAdapter.cxx/.hxx in $FG_SRC/Canvas to add helpers for your new placement, e.g. getImage(): http://wiki.flightgear.org/Canvas_Troubleshooting#Serializing_a_Canvas_to_disk_.28as_raster_image.29
- ...
- Open $FG_SRC/Canvas/canvas_mgr.cxx, navigate to
CanvasMgr::init()
and register your new placement there
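The registration step can be illustrated with a self-contained model of the placement-factory pattern; the names here (Options, HttpdPlacement, addPlacementFactory) are illustrative stand-ins, as the real code would subclass simgear::canvas::Placement and read its options from property nodes:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <utility>

// Stand-in for the placement's property node (name, format, update rate, ...).
struct Options {
    std::map<std::string, std::string> props;
};

// Minimal placement base class; the real one reacts to property updates.
struct Placement {
    virtual ~Placement() = default;
    virtual std::string describe() const = 0;
};

// Hypothetical httpd placement: picks up its options on construction.
struct HttpdPlacement : Placement {
    std::string name, format;
    explicit HttpdPlacement(const Options& o)
      : name(o.props.count("name") ? o.props.at("name") : "unnamed"),
        format(o.props.count("format") ? o.props.at("format") : "png") {}
    std::string describe() const override { return "httpd:" + name + "(" + format + ")"; }
};

using PlacementFactory = std::function<std::unique_ptr<Placement>(const Options&)>;

// Global registry mapping placement type names to factories.
std::map<std::string, PlacementFactory>& placementFactories() {
    static std::map<std::string, PlacementFactory> reg;
    return reg;
}

void addPlacementFactory(const std::string& type, PlacementFactory f) {
    placementFactories()[type] = std::move(f);
}

std::unique_ptr<Placement> createPlacement(const std::string& type, const Options& o) {
    auto it = placementFactories().find(type);
    return it != placementFactories().end() ? it->second(o) : nullptr;
}
```

In the real code, the one-time registration call would live in CanvasMgr::init(), so that property-driven placement nodes of the new type get instantiated automatically.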
Projections
Also see
Background
for visualizing orbital mechanics, the two most useful projections are groundtrack (for inclination, node crossing and what you should see looking out of the window) and the projection orthogonal to the orbital plane[2]
I stumbled across what is perhaps closer to the core of the issue in a flight over the North Pole. Flightplan legs are rendered as great circle segments, so long legs are drawn with a curve. Somewhere, the flightplan has to be flattened into a map view. It appears that this is easy to do over short distances in lower latitudes, but becomes increasingly difficult over long distances with a bigger component of Earth's curvature involved. The map view is not really geared for polar routes, so the leg that goes over the pole has an extreme curve drawn in it. And when that leg was in range, the frame rate dropped from 25-30 down to 8-12. Once it went out of range, frame rate was back to normal. It seems like calculating curvature may be the rate-limiting step. — tikibar (Dec 23rd, 2014). Re: Canvas ND performance issues with route-manager.
(powered by Instant-Cquotes) |
I narrowed this down to the way map segments around the curving earth are calculated in the canvas ND. I added a hard coded distance limiter to it that restored the calculation speed as long as no leg was longer than about 800 nm. Hooray suggested an approach that was more dynamic, but I never got around to working on it. Bottom line, it's not a graphics card issue but a calculation issue. I've seen it in both the 757 and 747-8 series using the canvas ND.
The old thread about it is here: [12] — tikibar (Feb 9th, 2016). Re: Root Manager consumes a lot of Frame Rate.
(powered by Instant-Cquotes) |
Gijs provided a patch to fix the hard-coded Map dialog (and possibly the ND), it's the projection code that is causing this - as far as I know, Gijs' patches never got integrated with the Canvas system, my original suggestion was to extend the canvas projection code so that projection code can be implemented in the form of Nasal code and/or property rules. — Hooray (Feb 10th, 2016). Re: Route Manager consumes a lot of Frame Rate.
(powered by Instant-Cquotes) |
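The "hard coded distance limiter" tikibar mentions can be sketched as a subdivision heuristic: short legs are drawn as straight lines, while long great-circle legs are split into more segments, up to a hard cap so that a polar leg cannot explode into thousands of points. All thresholds below are assumptions for illustration, not values from the actual ND code:

```cpp
#include <algorithm>
#include <cmath>

// Pick how many straight segments to use when flattening one
// great-circle leg into the 2D map view.
int segmentCount(double legDistanceNm,
                 double maxSegmentNm = 50.0,  // target length per segment (assumed)
                 int maxSegments = 64)        // hard cap for very long legs (assumed)
{
    if (legDistanceNm <= maxSegmentNm)
        return 1;  // short leg: a single straight segment is visually fine
    int n = static_cast<int>(std::ceil(legDistanceNm / maxSegmentNm));
    return std::min(n, maxSegments);  // cap the work for polar/long-haul legs
}
```

The trade-off is purely visual: a capped leg is drawn slightly less smoothly, but the per-frame path-generation cost stays bounded regardless of route length.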
- ↑ https://forum.flightgear.org/viewtopic.php?p=297413#p297413
- ↑ Thorsten (May 20th, 2016). Re: Space Shuttle .
Adding new Projections
Note Discuss base class that needs to be implemented |
that's a coordinate singularity of a (lat/lon) grid and things like your course cease to be well-defined in the vicinity - so you can't expect normal code to work. Usually you need special provisions to deal with such singularities (from my own experience, the Shuttle has four different coordinate systems to switch, and fallback rules what to display when close to a singularity and the AP for liftoff uses a different coordinate grid (based on vectors rather than angles) from the AP later during launch because the launch is done right into the singularity (there's no course defined for vertical ascent) and so one can't steer to any particular course until later). [2]
There's a projection library available called "proj4"; it comes with a number of different projections. We may absorb that into SimGear and use it for projection handling, which would free us from having to implement, test and maintain our own
— Hooray (Thu Jul 17). Re: NavDisplay & MapStructure discussion (previously via PM).
(powered by Instant-Cquotes) |
We've already fixed that in the (old) map dialog, by using an azimuthal equidistant projection (see screenshot). Porting the projection to Canvas is on my todo list. Such a projection is much much better for navigational use.
Curves in routes are not calculated by Canvas, nor by the ND though. It's the route manager that splits up a route in segments in order to get smooth transitions. — Gijs (Tue Dec 23). Re: Canvas ND performance issues with route-manager.
(powered by Instant-Cquotes) |
The new ND uses the actual route-manager paths, which allows it to draw holdings, flyby waypoints (thanks to James' recent work), etc. But we'll need the azimuthal projection anyway, so I'll bump my todo list
— Gijs (Tue Dec 23). Re: Canvas ND performance issues with route-manager.
(powered by Instant-Cquotes) |
I do agree that it would make sense to sub-class the Canvas projection class and implement Gijs' changes there, like we originally discussed in the merge request: FlightGear commit 3f433e2c35ef533a847138e6ae10a5cb398323d7
— Hooray (Wed Dec 24). Re: Canvas ND performance issues with route-manager.
(powered by Instant-Cquotes) |
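The azimuthal equidistant projection Gijs refers to is straightforward to sketch. The formula below is the standard spherical form (inputs in radians, output in radians on the unit sphere; multiply by the Earth radius for metres), not the actual Canvas projection API:

```cpp
#include <algorithm>
#include <cmath>
#include <utility>

// Azimuthal equidistant projection centred on (lat0, lon0): all
// distances and azimuths measured from the centre are true, so a
// polar leg no longer produces extreme curves on the map.
std::pair<double, double>
azimuthalEquidistant(double lat0, double lon0, double lat, double lon)
{
    const double dlon = lon - lon0;
    const double cosc = std::sin(lat0) * std::sin(lat)
                      + std::cos(lat0) * std::cos(lat) * std::cos(dlon);
    // c is the angular distance from the projection centre.
    const double c = std::acos(std::max(-1.0, std::min(1.0, cosc)));
    const double k = (c < 1e-12) ? 1.0 : c / std::sin(c);  // avoid 0/0 at the centre
    const double x = k * std::cos(lat) * std::sin(dlon);
    const double y = k * (std::cos(lat0) * std::sin(lat)
                        - std::sin(lat0) * std::cos(lat) * std::cos(dlon));
    return {x, y};
}
```

A Canvas projection subclass would wrap exactly this kind of forward transform; alternatively, the same projection is available in proj4 as +proj=aeqd.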
Styling (osgText)
Gijs was looking for an outline that follows the shape of the text, which is what backdrop provides.
For his solution, see the two diffs below. He didn't add the full range of backdrop options, just outline for now [13] .
And this is how it looks in FlightGear now :-) Notice how the overlapping waypoints are easier to read (this image is a little exaggerated with all those fixes).
|
commit 5cc0adc778bda1773189b0119d24fbaf5ecd4500
Author: Gijs de Rooy
Date: Mon Jul 7 18:26:16 2014 +0200
Canvas: add backdrop option to text
diff --git a/simgear/canvas/elements/CanvasText.cxx b/simgear/canvas/elements/CanvasText.cxx
index d99760a..3a986e1 100644
--- a/simgear/canvas/elements/CanvasText.cxx
+++ b/simgear/canvas/elements/CanvasText.cxx
@@ -39,6 +39,7 @@ namespace canvas
void setLineHeight(float factor);
void setFill(const std::string& fill);
void setBackgroundColor(const std::string& fill);
+ void setOutlineColor(const std::string& backdrop);
SGVec2i sizeForWidth(int w) const;
osg::Vec2 handleHit(const osg::Vec2f& pos);
@@ -97,6 +98,15 @@ namespace canvas
}
//----------------------------------------------------------------------------
+ void Text::TextOSG::setOutlineColor(const std::string& backdrop)
+ {
+ osg::Vec4 color;
+ setBackdropType(osgText::Text::OUTLINE);
+ if( parseColor(backdrop, color) )
+ setBackdropColor( color );
+ }
+
+ //----------------------------------------------------------------------------
// simplified version of osgText::Text::computeGlyphRepresentation() to
// just calculate the size for a given weight. Glpyh calculations/creating
// is not necessary for this...
@@ -546,6 +556,7 @@ namespace canvas
addStyle("fill", "color", &TextOSG::setFill, text);
addStyle("background", "color", &TextOSG::setBackgroundColor, text);
+ addStyle("backdrop", "color", &TextOSG::setOutlineColor, text);
addStyle("character-size",
"numeric",
static_cast<
commit 838cabd2a551834cbcac2b3edd21500409ff2e98
Author: Gijs de Rooy
Date: Mon Jul 7 18:27:50 2014 +0200
Canvas: add backdrop option to text
diff --git a/Nasal/canvas/api.nas b/Nasal/canvas/api.nas
index 8bc12d8..3047dbf 100644
--- a/Nasal/canvas/api.nas
+++ b/Nasal/canvas/api.nas
@@ -634,6 +634,8 @@ var Text = {
setColorFill: func me.set('background', _getColor(arg)),
getColorFill: func me.get('background'),
+
+ setBackdropColor: func me.set('backdrop', _getColor(arg)),
};
# Path
Event Handling
Note Discuss CanvasEventManager, CanvasEvent, CanvasEventVisitor |
Canvas Integration
Note Discuss FGCanvasSystemAdapter - for the time being, check out Howto:Extending_Canvas_to_support_rendering_3D_models#Extending_FGCanvasSystemAdapter to learn more about the purpose/usage of the CanvasSystemAdapter, which basically serves as a bridge between FlightGear and SimGear, i.e. to expose FG specific APIs to Canvas (which lives in SimGear). |
there is a dedicated FGCanvasSystemAdapter in $FG_SRC/Canvas that encapsulates the model lookup [3]
For instance, say you'd like to access the FlightGear view manager via the access system: you don't need to move the view manager to SimGear to accomplish this - like I mentioned previously, the correct way to access FG-level subsystems via the Canvas system is to review/extend the FGCanvasSystemAdapter to expose the corresponding APIs.
I actually posted code snippets that illustrate how to do this, for example in the 3D model loader, specifically look for the FGCanvasSystemAdapter changes in both $FG_SRC and $SG_SRC
There is step by step instructions which can be found here: Howto:Extending Canvas to support rendering 3D models#Extending FGCanvasSystemAdapter
In other words: any API that you need to access from the Canvas system would need a corresponding "getter" added to retrieve the handle from the FG host application.
That should be reflected in the header file, but the implementation would reside in $FG_SRC/Canvas/FGCanvasSystemAdapter.cxx; the SimGear code would only have a copy of the corresponding header file.[4]
Or let's say you'd like to access the CameraGroup/Viewer APIs: it's relatively straightforward. CameraGroup.cxx already contains code to render a static camera to a texture, which is stored in a TextureMap named _textureTargets. Internally, this is used for building the distortion camera; however, you can also exploit it to render an arbitrary camera view to a texture. At the Canvas level, you would then have to call the equivalent of flightgear::CameraGroup::getDefault() - this would be done at the FGCanvasSystemAdapter level, i.e. adding a getter function there which returns the TextureRectangle map.
Once you have a texture rectangle, you can also get the osg::Image for it, and that can be assigned to a Canvas image.
Admittedly, that's a little brute force, but it should only require ~30 lines of code added to SG/FG to add a static camera view as a Canvas raster image. Ideally, something like this would be integrated with the existing view manager, i.e. using the same property names (via property objects), and then hooked up to CanvasImage, e.g. as a custom camera:// protocol (we already support canvas:// and http(s)://). So some kind of dedicated CanvasCamera element would make sense, possibly inheriting from CanvasImage.
And it would also make sense to look at Zan's new-cameras patches, because those add tons of features to CameraGroup.cxx. This would already allow arbitrary views slaved to the main view (camera). So as you can see, PagedLOD/CompositeViewer don't need to be involved to make this happen.[5]
Finally, to use Canvas outside FG, you would also need to look at the FGCanvasSystemAdapter in $FG_SRC/Canvas and provide your own wrapper for your own app (trivial).[6]
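The bridge pattern described above can be reduced to a minimal self-contained model; ViewManager, SystemAdapter and getViewManager() below are illustrative stand-ins, not the real FlightGear/SimGear signatures:

```cpp
#include <string>

// Stand-in for an FG-level subsystem that Canvas code wants to reach.
struct ViewManager {
    std::string currentView() const { return "cockpit"; }
};

// Abstract adapter: this is all the SimGear-side Canvas code ever sees
// (in reality, only the header is shared with SimGear).
struct SystemAdapter {
    virtual ~SystemAdapter() = default;
    virtual ViewManager* getViewManager() const = 0;
};

// FlightGear-side implementation (would live in $FG_SRC/Canvas):
// each new API simply gets another getter here.
struct FGSystemAdapter : SystemAdapter {
    mutable ViewManager views;
    ViewManager* getViewManager() const override { return &views; }
};
```

A standalone application embedding Canvas would provide its own SystemAdapter subclass instead, which is exactly the "trivial wrapper" the quote above refers to.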
Optimizing Canvas
I guess we need to come up with some heuristics at the C++ level for selectively updating/rendering parts of the route that are visible/relevant (i.e. not necessarily visible, but part of a visible line segment)
— Hooray (Sat Dec 20). Re: Canvas ND performance issues with route-manager.
(powered by Instant-Cquotes) |
if it's just as fast, it's rendering / rasterization that is probably taking so long, which would mean that we'd need to explore selective updating/rendering of nodes that are neither visible, nor connected to anything visible (line segments).
— Hooray (Sat Dec 20). Re: Canvas ND performance issues with route-manager.
(powered by Instant-Cquotes) |
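The "selective rendering" idea can be sketched as a conservative trivial-reject test: a route segment whose endpoints both lie on the same side of the visible map rectangle can never intersect it and can be skipped. This is a generic illustration, not code from the Canvas ND:

```cpp
// Visible map area in projected map coordinates.
struct Rect { double xmin, ymin, xmax, ymax; };

// Cheap, conservative visibility test for one route segment:
// it never culls a segment that could actually be visible.
bool maybeVisible(double x1, double y1, double x2, double y2, const Rect& view)
{
    if (x1 < view.xmin && x2 < view.xmin) return false;  // both left of view
    if (x1 > view.xmax && x2 > view.xmax) return false;  // both right of view
    if (y1 < view.ymin && y2 < view.ymin) return false;  // both below view
    if (y1 > view.ymax && y2 > view.ymax) return false;  // both above view
    return true;  // may intersect; let the rasterizer handle it
}
```

Segments failing this test would neither be updated nor handed to the rasterizer, which is the kind of heuristic the quotes above ask for.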
What would be good to have is to specify a completely different scenegraph in some subcameras. I think of having panel-like instruments on an additional screen/display, for example.
— Mathias Fröhlich (2008-06-28). Re: [Flightgear-devel] RFC: changes to views and cameras.
(powered by Instant-Cquotes) |
I have an animation that I call rendertexture, where you can replace a texture on a subobject with such a rtt camera. Then specify a usual scenegraph to render to that texture and voila. I believe that I could finish that in a few days - depending on the weather here :)
The idea is to make mfd instruments with usual scenegraphs and pin that on an — Mathias Fröhlich (2008-06-28). Re: [Flightgear-devel] RFC: changes to views and cameras.
(powered by Instant-Cquotes) |
I believe that we need to distinguish between different render-to-texture cameras: the ones that will end up in MFD displays or HUDs or whatever that is — Mathias Fröhlich (2008-07-01). Re: [Flightgear-devel] RFC: changes to views and cameras.
(powered by Instant-Cquotes) |
The Future of Canvas in FlightGear
Lessons learnt
Canvas is being increasingly adopted, primarily by aircraft developers with little or no coding background. As a result, more and more Canvas-related additions unnecessarily violate design principles of modularization and code reuse: many Canvas-related efforts are not sufficiently generic and lack a unified design/approach, which often makes them useful only in a single context (think instrument/aircraft/GUI dialog).
This is a challenge that Canvas-based features share with other aircraft-specific contributions, especially Nasal code: aircraft developers tend to adopt new features via copy & paste.
Concepts like object-oriented programming, encapsulation and abstract interfaces that make code reusable and generic are obviously not easy to bring across to non-coders, and even more experienced contributors have faced related challenges:
This sounds like a reusable framework, but the encapsulation doesn't go that far and it's optimised for internal needs.
There are some calls going through parents where no interface is reachable or defined. |
complex MFD instruments like the G1000 series or the Avidyne Entegra R9 are better not implemented directly, but using a "bottom-up" approach, where you identify all required building blocks (e.g. screen component, page component) and build higher-level components on top. Otherwise, there will be very tight coupling at some point, making it really hard to generalize/maintain the underlying code (look at D-LEON's comments above). — Hooray (Feb 2nd, 2015). Re: Project Farmin [Garmin Flightdeck Frame work].
(powered by Instant-Cquotes) |
Canvas & Nasal are still fairly low-level for most aircraft developers: to come up with good (and fast) display code, people still need to be experienced coders, familiar with FlightGear scripting, Canvas technologies/elements, and the way performance is affected by certain constructs. So far, we have the means to create the corresponding visuals, but there's still quite some work ahead to re-implement existing hard-coded displays. And to implement a compelling jet fighter, including a credible cockpit, you would need more than "just" the visuals, i.e. lots of handbooks/manuals, building blocks for creating systems and components, and scripting-space frameworks to help with the latter. The best option to pave the way for this is to keep generalizing existing code, so that instruments support multiple instances, multiple aircraft, and multiple "sensors".
— Hooray (Thu May 29). Re: Does FlightGear has Multiplayer Combat mode?.
(powered by Instant-Cquotes) |
Regarding "de-skilling", that's exactly the point of introducing more specific frameworks on top of Nasal and Canvas, developed by more experienced programmers, usable by less-experienced contributors, who often don't need any programming experience at all (see for example Gijs' ND work, which can now be integrated and used with different aircraft, without requiring ANY coding, it's just configuration markup, analogous to XML, but more succinct)
— Hooray (Thu May 29). Re: Does FlightGear has Multiplayer Combat mode?.
(powered by Instant-Cquotes) |
Claiming that Nasal/Canvas would be "a failure as a tool" just because people can still implement slow code, is far too short-sighted - just because you are allowed to drive a car (or fly an airplane) doesn't make you an expert in car engines or airplane turbines - things like Nasal and Canvas are really just enablers, that are truly powerful in the hands of people who know how to use them, but that can still be misused by less-experienced contributors.
— Hooray (Fri May 30). Re: Does FlightGear has Multiplayer Combat mode?.
(powered by Instant-Cquotes) |
Currently, I am inclined to state that Canvas is falling victim to its own success, i.e. the way people (early adopters) are using it is hugely problematic and does not scale at all. So we really need to stop documenting certain APIs and instead provide a single scalable extension mechanism, i.e. registering new features as dedicated Canvas elements implemented in Nasal space and registered with the CanvasGroup helper. Absent that, the situation with Canvas contributions is likely to approach exactly the dilemma we're seeing with most Nasal spaghetti code, which is unmaintainable and begging to be rewritten/ported from scratch. Which is simply because most aircraft developers are only interested in a single use-case (usually their own aircraft/instrument), and they don't care about long-term potential and maintenance, i.e. there are now tons of Canvas-based features that would be useful in theory, but which are implemented in a fashion that renders them non-reusable elsewhere: Canvas Development#The Future of Canvas in FlightGear. So at the moment, I am not too thrilled to add too many new features to Canvas until this is solved, because we're seeing so much Nasal/Canvas code that is simply a dead end due to the way it is structured, i.e. it won't be able to benefit from future optimizations short of a major rewrite or tons of 1:1 support by people familiar with the Canvas system. Which is why I am convinced that we need to stop implementing useful functionality using the existing approach, and instead adopt one that is CanvasElement-centric, where useful instruments, widgets and MFDs would be registered as custom elements implemented in Nasal space (via cppbind sub-classing). If we don't do that, we will continue to see cool Canvas features implemented as spaghetti-code monsters that reflect badly upon Nasal and Canvas due to lack of design, and performance. |
Yet, many Canvas early-adopters were/are working on conceptually-similar, and often even identical, features and functionality so that a lot of time is being wasted by people not knowing how to provide, and reuse, functionality in a "library"-fashion that is agnostic to the original use-case/aircraft (think MapStructure).
Still, most contributions developed by aircraft developers are "singletons by accident", i.e. they support only a single system-wide instance, or are at least implemented in an aircraft-specific fashion, so that they cannot easily be reused elsewhere (original 747 ND/PFD, 777 EFB, extra500/Avidyne Entegra R9).
In addition, contributions tend to be insufficiently structured so that the only way of adopting a popular feature is "Copy & Paste-programming". Even the original Canvas-based airport selection dialog was primarily done using "Copy&Paste" and is still a maintenance challenge, despite having been developed by an experienced FlightGear core developer.
Furthermore, coordinating related efforts to help people come up with generic, reusable and modular implementations is a tedious process that takes up a lot of resources, i.e. energy and time (e.g. see the MapStructure and ND/PFD efforts) - especially because people tend to get in touch only once they have something to "show", at which point it is often too late to affect the design of a Canvas-based feature to make it sufficiently generic and reusable without too much effort, or it takes a lot of time and energy to restructure the code accordingly (e.g. 777 EFB). This often renders the result unmaintainable for people less familiar with fundamental coding concepts, at which point ownership/maintenance is typically delegated to the very people trying to help with design issues, who are usually already juggling dozens of projects.
Additionally, many aircraft developers simply don't know how to identify overlapping functionality and how to come up with generic building blocks that can be used elsewhere, while others are generally not interested in helping contribute to a unified framework out of fear that their time is "wasted" and should be better spent working on their own aircraft/feature instead (extra500/Avidyne Entegra R9).
Equally, multi-instance setups like those at FSWeekend are still not explicitly supported by any Canvas-related efforts, which means that glass-cockpit functionality (MFDs like a PFD or ND) cannot currently be easily replicated/synchronized across several instances (think multiplayer/dual-pilot or master/slave setups). This matches restrictions found in the original od_gauge based instruments, even though, given the generic nature of the Canvas system and its grounding in key property tree concepts, this need not be the case.
One key concept that aircraft developers are familiar with, however, is the property tree, which could - and should - thus be the mechanism to provide interfaces that "just work" using existing Canvas APIs: by exposing those to scripting space and encouraging new features to be provided as PropertyBasedElements that can be registered with the main Canvas system, and that implicitly support multiple instances, different aircraft, styling and multi-instance setups.
In the last couple of years we've been increasingly prototyping useful features in scripting space, so that Canvas is primarily useful due to extensive Nasal support. In fact, many recent additions would be crippled without also using Nasal and its cppbind/canvas bindings. However, adding new Nasal dependencies is generally frowned upon by core developers due to Nasal's GC issue. In addition, Nasal is too low-level for most aircraft developers, who often don't know how to create a component in such a way that the component is truly generic and reusable. Nasal coding makes this job even harder for many people.
However, the nature of the property tree makes it possible to map components onto a property tree hierarchy, so that these components inherently support important design characteristics (multiple instances, property inheritance, aircraft independence etc.).
Currently, we're adding an increasing number of useful Canvas-based systems to FlightGear, such as the ND, PFD, MapStructure, Avidyne Entegra R9 and various other modules. However, all of these are mainly Nasal-based, and there's no way for people to instantiate these modules without also knowing Nasal. This is breaking some important concepts of the property tree and Canvas: namely, system-wide orthogonality. A properly-designed Canvas module would be usable even outside just Nasal space, e.g. just via the property tree (refer to the AI traffic system or the Canvas system for example).
Thus, a new Canvas component like a ND or PFD would ideally still be implemented in scripting space using a few Canvas bindings, but the abstract interface for setting up and controlling the system would live solely in property tree space, without people necessarily having to touch any Nasal code.
This would be in line with existing hard-coded gauges, whose external interface is solely the property tree (e.g. wxradar, od_gauge, agradar etc). In addition, establishing the property as the main interfacing mechanism for new Canvas-based elements, also means that a stable API is much easier to provide/maintain, as it would mainly live in property space.
That can be accomplished by allowing custom Canvas elements to be implemented in Nasal and registered with the Canvas system, so that an ND/PFD widget could be instantiated analogous to any other Canvas element by modifying the property tree, which would internally map things to a Canvas::Element/PropertyBasedElement sub-class implemented in scripting space.
The major advantage here is a strong focus on encapsulation, as well as clean interfaces that lend themselves to being easily re-implemented/optimized in C++ space, e.g. by moving certain prototyped functionality (think Canvas animations using timers/listeners) out of Nasal space into C++ for better performance once the need arises.
Equally, such a modular approach would allow us to easily sync multiple fgfs instances (think dual-pilot/multiplayer) by using just properties, without any explicit Nasal calls having to be made in other instances, because things would be transparently dispatched behind the scenes, using just properties.
Goals
Another goal is improved accessibility for existing features/code wanting to use Canvas-based functionality (think MapStructure layers) without adding any explicit Nasal dependencies, e.g. the new Integrated Qt5 Launcher, where increasing code duplication and added maintenance workload are an issue, too.
This can be seen in functionality that is now being added/re-implemented in Qt5/C++ space despite already existing elsewhere:
A Canvas/MapStructure based view of airports with runways/taxiways
As can be seen, there's currently no code reuse taking place when it comes to the Qt5-based location tab, even though the Canvas/MapStructure-based airport/taxiway layers are very much superior in comparison, as well as much more maintainable (living in fgdata) - so it would make sense to work out a way to reuse existing code instead.
Once the PropertyBasedElement/CanvasElement wrappers are fully exposed to scripting space, we could easily register MapStructure as a new Canvas element for directly making the corresponding layers available to C++ code, without introducing any direct Nasal dependencies - i.e. the corresponding airports/runway and taxiways diagrams would be transparently created by Nasal-based Canvas elements, with the Qt5/C++ code only ever having to set a handful of properties (airport ICAO id, styling, range etc).
Examples
From a design standpoint, we would then be able to use something like group.createChild("widget-button").set("label","Exit"), which would be straightforward to synchronize (a handful of properties vs. a full Canvas group). This would not just be relevant for MP scenarios, but also for external GUIs that could be interfaced to FG, e.g. an instructor console.
We should probably keep this in mind, even if we end up using some compromise - personally, I would appreciate being able to expose *complex* canvas systems like the ND/PFD as a dedicated PropertyBasedElement that has its own property interface, possibly even by locking/hiding some internal state at some point.
Exposing PropertyBasedElement as a base class would be a good first step, and maybe we could add some methods to set up "interface properties" via attributes - Canvas kinda has all the code in place already because of the CSS/styling parsing code it has in the CanvasElement base class.
Approach
That is why I believe we should work out a way to allow new Canvas functionality to be OPTIONALLY also mapped to a PropertyBased* interface. For example, we could have a property-based element that maps property writes to calls for widget creation using factories (as is already the case for existing elements). The difficult code is already in place; the main things we would need are a property tree interface and a mapping scheme for calling the right APIs for user-defined elements.
Similarly, we could expose MFDs like a PFD, ND or EFB as a property-based system within each instance's property tree.
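The proposed property-mapped factory scheme might look like the following self-contained model; Element, ButtonWidget and the "widget-button" name are illustrative assumptions, as the real implementation would hook into Canvas::Group and cppbind:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Minimal element base class: property writes are forwarded to it,
// standing in for the PropertyBasedElement machinery.
struct Element {
    std::map<std::string, std::string> props;
    virtual ~Element() = default;
    void set(const std::string& key, const std::string& value) { props[key] = value; }
};

// A custom element type, e.g. one implemented in Nasal via cppbind.
struct ButtonWidget : Element {};

using ElementFactory = std::function<std::unique_ptr<Element>()>;

// Model of a Canvas group: creating a child node whose type name is
// registered instantiates the corresponding custom element.
struct Group {
    static std::map<std::string, ElementFactory>& factories() {
        static std::map<std::string, ElementFactory> reg;
        return reg;
    }
    std::unique_ptr<Element> createChild(const std::string& type) {
        auto it = factories().find(type);
        return it != factories().end() ? it->second() : nullptr;
    }
};
```

With such a registry, a dialog (or a remote fgfs instance replaying property writes) only ever sees property operations, never Nasal calls.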
Challenge: Instancing
Despite Canvas internally using/being OOP, and despite using OOP at the Nasal level, the representation in the property tree itself mainly deals with texture elements that lack any notion of formalized dependencies and behavior, i.e. in terms of what is represented and which events (signals) are supported. As a result, the texture state only means something during an active fg session, and it is specific to that single session, too - i.e. MFD state cannot currently be replicated easily to other instances (think multiplayer, dual pilot, FSWeekend-like setups), due to this lack of encoding data dependencies at the tree level, where really just Canvas primitives are animated/updated, without the tree/Canvas system itself having any concept of what it is doing from a high-level standpoint.
And because of all this, we are sacrificing optimization potential: OSG no longer knows that it is rendering the same thing (sub-scenegraph) when showing 20 trajectory maps or 10 PFD/MFDs - it will just happily be as wasteful as it can be by creating each scenegraph from scratch.
All this because we are currently failing to provide the required meta information by annotating Canvas-related state/groups that can/should be shared, or merely parametrized.
This may not seem relevant in the context of the trajectory map, because OSG/SG will internally cache the teture, but more complex dialogs/MFDs with their own scenegraph would greatly benefit from encoding what is instance-specific, and what isn't (what is common and can be shared) - e.g. imagine a complex dialog showing several instances of the same PFD/MFD, driven by different data (think AI aircraft) - at the scenegraph level, would make sense to use instancing whenever possible, including shared geometries - i.e. shallow clones whenever possible, deep clones if necessary.
Looking at Canvas-based features that are massively slow (think extra500/Avidyne Entegra R9), those would indeed be faster in C++ - but only because C++ is closer to the metal than Nasal/Canvas, the underlying approach is still unfriendly to OSG/OpenGL overall, because there is hardly any stateset/resource sharing going on, and because of the code doing unnecessary/redundant things.
Referring to the Avidyne Entegra R9, just imagine running 10 independent instances of the instrument shown in a GUI dialog.
At the Canvas/Element level, we could change that by encoding meta information, to declare what Canvas state/groups (osg::StateSet/osg::Node) can/should be instanced, and which ones cannot.
Sooner or later we will need to come up with features that allow avionics developers to declare whether a group can be considered static/final, e.g. for background images (no DYNAMIC variance, sharing/instancing allowed), or whether a group actually represents fully dynamic state, such as an MFD screen, whose elements may still be instanced (imagine GUI widgets like a button).
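A hedged sketch of what such a declaration could look like on the C++ side - the enum names and the property value syntax are invented for illustration, not existing Canvas code: a per-group variance hint determines whether a group may be shared, shallow-cloned or deep-cloned across instances.

```cpp
#include <string>

// Hypothetical variance hint per canvas group; a future Canvas could map
// this onto osg::Object::setDataVariance() and its clone policies.
enum class Variance { Static, PerInstance, Dynamic };
enum class CloneMode { Share, ShallowClone, DeepClone };

// Static groups (e.g. background images) can be shared outright; groups that
// only differ in parameters can share geometry via a shallow clone; fully
// dynamic groups (e.g. an MFD screen) need a deep clone.
inline CloneMode cloneModeFor(Variance v) {
    switch (v) {
    case Variance::Static:      return CloneMode::Share;
    case Variance::PerInstance: return CloneMode::ShallowClone;
    case Variance::Dynamic:     return CloneMode::DeepClone;
    }
    return CloneMode::DeepClone; // unreachable, keeps compilers happy
}

// Parse the hint from a property value such as <variance>static</variance>
// (the property name and its values are made up for this sketch).
inline Variance varianceFromString(const std::string& s) {
    if (s == "static")       return Variance::Static;
    if (s == "per-instance") return Variance::PerInstance;
    return Variance::Dynamic; // conservative default: never share blindly
}
```

The conservative default matters: anything not explicitly annotated must be treated as fully dynamic, or sharing would silently corrupt unrelated instances.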
It would be a great feature, if multiplayer mode could allow two or more online or local network players to share one cockpit. Is this already possible or not yet?
Then one player can be the captain, another one the first officer, and the third one is the flight engineer. Maybe another second officer — CaptainTech (Dec 29th, 2015). Global Feature Suggestion for FlightGear: Cockpit Sharing.
(powered by Instant-Cquotes) |
it would be possible to support the whole thing in a "global" fashion with a handful of minor additions, mainly based on looking at related subsystems, e.g. the "instant replay" (flight recorder) feature - and hooking that up to the multiplayer system by XDR-encoding the corresponding properties.
The main thing that aircraft developers would still have to do is to create a corresponding "flightrecorder" configuration for their aircraft/cockpit to encode the transmission/update semantics accordingly. — Hooray (Dec 29th, 2015). Re: Global Feature Suggestion for FlightGear: Cockpit Sharing.
More complex cockpits/aircraft require more changes.
But under the hood, it is mainly about formalizing state management - which overlaps with the way the flight recorder has to work, but also the MP protocol. — Hooray (Dec 29th, 2015). Re: Global Feature Suggestion for FlightGear: Cockpit Sharing.
any aircraft that 1) supports multiplayer and 2) supports the flight recorder/replay feature and 3) distributed setups (like those at FSWeekend/LinuxTag), could /in theory/ also support "Dual Control" - certainly once/if the underlying systems are merged.
The building blocks to make something like this possible are already there - the difficult stuff is convincing aircraft developers (like yourself) to adopt the corresponding systems (multiplayer and the flight recorder). So the whole "global" thing would be possible to pull off, but it should not be done using Nasal and the existing MP system. In the case of the shuttle, or even just complex airliners, formalizing data dependencies (switch states, annunciator states etc) would be tons of work to do manually, given the plethora of switches and state indicators - which is why I am not convinced that this should be done manually, but semi-automatically, by annotating properties (and possibly even branches of properties in the tree).
A while ago, I did experiment with replicating a Canvas-based PFD/ND display in another fgfs instance using the "brute force" approach - i.e. copying the whole property branch of the corresponding Canvas via telnet and patching it up via Nasal subsequently; the whole thing was not very elegant, but it actually worked. So I do understand how difficult this is, as well as the limitations of the current system.
However, if aircraft/cockpit developers had a handful of property attributes to differentiate between different kinds of simulator state (local/remote vs. switches vs. displays), it would be possible to pull this off, pretty much by using the existing technology stack. The main limitation would then be bandwidth, i.e. you would have to be on the same LAN as the other instances, because it simply isn't feasible to replicate a PFD/ND using low-level calls (primitives) - instead, the whole instrument logic would need to be running in each fgfs instance, with only events being propagated accordingly - i.e. in a master/slave fashion.
Admittedly, this is a restriction/challenge that even recent MFD cockpits share with ODGauge-based cockpits (think wxradar, agradar, navdisplay etc), but that does not necessarily have to be the case, because we can already readily access all the internal state by looking at the property tree. But even if such a system were in place today, the way we are using Nasal and Canvas to create MFDs would need to change, i.e. to formalize data dependencies, and to move away from low-level primitives that are only understood by Nasal code - which is to say that new Canvas-based features (e.g. MFDs) would need to be themselves registered as Canvas::Element instances, implemented in scripting space, to ensure that a sane property-based interface is provided and used, without adding explicit Nasal dependencies all over the place: Canvas Development#The Future of Canvas in FlightGear. So we would need both 1) an updated transport/IPC mechanism, and 2) a better way to encapsulate Canvas-based features such that properties are the primary I/O means - which is ironically how hard-coded instruments work already; we are just violating the whole design concept via Nasal currently, which is also making it more difficult to augment/replace Nasal-based components that turn out to be performance-critical. — Hooray (Dec 29th, 2015). Re: Global Feature Suggestion for FlightGear: Cockpit Sharing.
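The "annotating properties" idea from the quote above could be sketched roughly as follows - purely illustrative C++, not existing FlightGear code: each property path carries a replication class, and only shared state (switches, annunciators) is transmitted, while display state is recomputed locally by the instrument logic running in each instance.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Illustrative only - not existing FlightGear code. Each property path is
// annotated with a replication class, so the MP/flightrecorder layer can
// decide what actually needs to go over the wire.
enum class ReplicationClass {
    LocalOnly,    // e.g. view settings: never replicated
    SharedState,  // e.g. switches, annunciators: replicated as events
    DerivedState  // e.g. MFD screens: recomputed locally from shared state
};

struct AnnotatedProperty {
    std::string path;
    ReplicationClass repl;
};

// Only SharedState needs transmitting; DerivedState is regenerated by the
// instrument logic running in each fgfs instance (master/slave fashion).
inline std::vector<std::string>
propertiesToTransmit(const std::vector<AnnotatedProperty>& props) {
    std::vector<std::string> out;
    for (const auto& p : props)
        if (p.repl == ReplicationClass::SharedState)
            out.push_back(p.path);
    return out;
}

// Demo helper: of three annotated paths, only the switch state is sent.
inline std::size_t demoTransmittedCount() {
    std::vector<AnnotatedProperty> props = {
        {"/controls/gear/gear-down", ReplicationClass::SharedState},
        {"/sim/current-view/view-number", ReplicationClass::LocalOnly},
        {"/canvas/by-index/texture[0]", ReplicationClass::DerivedState},
    };
    return propertiesToTransmit(props).size();
}
```

This is exactly why the bandwidth stays manageable: the wire carries switch events, not canvas primitives.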
Challenge: IPC and Serialization
In terms of integration with the property tree, I'm thinking that in the short term all the different components that we split out into separate threads or executables will simply use their own properties trees, and use the RTI to reflect the particular (minimal) data that needs to be passed between components. — Stuart Buchanan (Nov 19th, 2015). Re: [Flightgear-devel] HLA developments.
Will network-linking of FG sessions synchronise ALL of the aircraft's property data, thus also syncing radio, instrument and cockpit data? For the visuals, only the basic 6DOF are needed, but is there also a way to keep everything inside the A/C's panels up to date all the time? — Robin van Steenbergen (Sep 22nd, 2007). Re: [Flightgear-devel] Serious simmer.
my original issue was to make external instrumentation possible over the network, not on a single PC with 6 monitors on it. Distribute the computing power, allowing more processing power for the flight dynamics and visuals and a flexible instrument setup. — Robin van Steenbergen (Sep 22nd, 2007). Re: [Flightgear-devel] Serious simmer.
Some of the intelligence could be transferred from FG to the external applications and interface logic, while still keeping FG up to date on any changes, through the property system. — Robin van Steenbergen (Sep 21st, 2007). Re: [Flightgear-devel] Serious simmer.
ARINC661, for example, has a clear separation between the display graphics and the rendering engine. — Robin van Steenbergen (Sep 21st, 2007). Re: [Flightgear-devel] Serious simmer.
Also it would be nice if the state of the canvas can be serialized easily and with only little data into another application. That is, to be able to set up multiple viewer applications all displaying the same content. Think of an MFD that is shown in a bigger multi-viewer environment. This should be efficient. How to achieve this efficiently requires a lot of thought.
— Mathias Fröhlich (2012-10-22). Re: [Flightgear-devel] Canvas reuse/restructuring.
If we're rendering each display as an OSG sub-camera, extracting that logic and wrapping it in a stand-alone OSG viewer should be simplicity itself - and so long as it's driven by properties, those can be sent over a socket. That's an approach which seems a lot more bearable to me than sending per-frame pixel surfaces over shared memory or sockets / pipes.
— James Turner (2008-08-04). Re: [Flightgear-devel] Cockpit displays (rendering, modelling).
I think of some kind of separation that will also be good if we would do HLA between a viewer and an application computing physical models or controlling an additional view hooking into a federate ...[7] — Mathias Fröhlich
Using the PropertyBasedElement interface does not mean we have to abandon the established "property-for-IPC" mechanism.
Currently, replicating the ND in another instance is a fairly massive undertaking across telnet - and while telnet is unnecessarily slow, the real point is that we only need to sync very specific state, not the full canvas. This approach could serve us well in the long term, not just for fgcanvas usage, but for anything involving multiple fgfs instances.
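Syncing only "very specific state" rather than the full canvas could, for instance, be built on a simple delta scheme. A hypothetical sketch (not existing code): keep a shadow copy of the last transmitted values and only send the properties that changed since the previous frame.

```cpp
#include <cstddef>
#include <map>
#include <string>

// Sketch only: a shadow copy of the last transmitted values lets us send a
// per-frame delta instead of copying the whole canvas tree over telnet.
using PropertyMap = std::map<std::string, std::string>;

inline PropertyMap computeDelta(const PropertyMap& lastSent,
                                const PropertyMap& current) {
    PropertyMap delta;
    for (const auto& entry : current) {
        auto it = lastSent.find(entry.first);
        if (it == lastSent.end() || it->second != entry.second)
            delta[entry.first] = entry.second; // new or changed: transmit
    }
    return delta;
}

// Demo helper: /b changed and /c is new, /a is unchanged - delta size is 2.
inline std::size_t demoDeltaSize() {
    PropertyMap last{{"/a", "1"}, {"/b", "2"}};
    PropertyMap now{{"/a", "1"}, {"/b", "3"}, {"/c", "4"}};
    return computeDelta(last, now).size();
}
```

A "smart" property-based element would shrink the delta further, since it would expose a handful of interface properties instead of hundreds of primitives.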
Challenge: Multithreading
once it [Canvas] is in simgear it should be really multi-viewer/threading capable. Everything that is not might be changed at some time to match this criterion.
Such a change often comes with changes in the behavior that are not strictly needed but where people started relying on at some time. So better think about that at the first time. — Mathias Fröhlich (2012-10-22). Re: [Flightgear-devel] Canvas reuse/restructuring.
Originally, the whole Canvas idea started out as a property-driven 2D drawing system, but admittedly, what we ended up with is a system that is by now tightly coupled to Nasal. Indeed, there are some things where you definitely need to use Nasal to set up/initialize things. But under the hood, 99% is still pure property I/O, which is also why the property tree is becoming a bottleneck.
In general, Nasal is not the problem here - it is the way the Canvas system is designed, and the way both Nasal and Canvas are integrated: it's a single-threaded setup, i.e. we are inevitably adding framerate-limited scripted code that runs at <= 60 Hz to the main loop, to update rendering-related state. This is a bit problematic, but it is fixable.[8]
One option would be giving each Canvas its own private property tree that merely receives/dispatches events, possibly even with its own FGNasalSys instance to ensure that there is no unnecessary serialization overhead - at that point, you could update Canvas textures ("displays") asynchronously and let OSG's CompositeViewer handle the nitty-gritty details of getting each sub-camera drawn/updated without running in the main loop.[9]
Most Canvases could in fact have their own private property tree and a private Nasal instance directly hooked up to that tree, instead of using the current approach - as long as we're working with the assumption that all stuff only ever runs in the main loop, we are not exactly doing Nasal a huge service...[10]
It is trivial to run Nasal in another thread, and even to thread out algorithms using Nasal. Nasal itself was designed with thread-safety in mind, by an enormously talented software engineer with a massive track record in this kind of work (a background in embedded engineering at the time). FlightGear, however, was never "designed", as Thorsten alluded to; rather, its architecture "happened", shaped by dozens of people over the course of almost two decades.
The bottleneck when it comes to threading in Nasal is indeed FlightGear, the very instant you access any non-native Nasal APIs, i.e. anything that is FlightGear specific (property tree, extension functions, fgcommands, canvas) - the whole thing is no longer easy to make work correctly, without re-architecting the corresponding component (think Canvas).
In the case of Canvas, it would be relatively straightforward to do just that, by introducing a new canvas mode where each canvas (texture) gets its own private property tree node (SGPropertyNode) that is part of simgear::canvas; at that point, you can also add a dedicated FGNasalSys instance (Nasal interpreter) to each canvas texture, which could be threaded out using either Nasal's threading support or SimGear's SGThread API.
Obviously, there would remain synchronization points, where this "canvas process" (thread) would fetch data from FlightGear (properties) and also send back its output to FlightGear (aka the final texture).
Other than that, it really is surprisingly straightforward to come up with a thread-safe version of the Canvas system by making these two major changes - the FGNasalSys interpreter would then no longer have access to the global namespace or any of the standard extension functions, it could only manipulate its own canvas property tree - all I/O between the canvas texture thread (Nasal) and the main loop (thread) would have to take place using a well defined I/O mechanism, in its simplest form a simple network protocol (even telnet/props or Torsten's AJAX/mongoose layer would work "as is") - more likely, this would evolve into something like Richard's Emesary system.[11]
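A stripped-down model of this execution scheme can be sketched in portable C++ - here std::thread stands in for SGThread, a string map stands in for the private property tree, and a frame counter stands in for the final texture; all names are invented for illustration. The main loop posts events through a queue (the well-defined I/O mechanism) and reads back the result, while the worker owns its private state exclusively and therefore needs no locking on it.

```cpp
#include <condition_variable>
#include <map>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Hypothetical sketch of the proposed per-canvas execution model: a worker
// owns a private property map and processes property-write events posted
// from the main loop; the main loop only ever sees the finished "texture"
// (here a frame counter standing in for one).
class CanvasWorker {
public:
    struct Event { std::string path, value; };

    CanvasWorker() : _thread([this] { run(); }) {}
    ~CanvasWorker() {
        post({"__quit__", ""});
        _thread.join();
    }
    // Called from the main loop: the only way in.
    void post(Event e) {
        std::lock_guard<std::mutex> lk(_mutex);
        _queue.push(std::move(e));
        _cv.notify_one();
    }
    // Called from the main loop: the only way out (the "final texture").
    int framesRendered() {
        std::lock_guard<std::mutex> lk(_mutex);
        return _frames;
    }
private:
    void run() {
        for (;;) {
            Event e;
            {
                std::unique_lock<std::mutex> lk(_mutex);
                _cv.wait(lk, [this] { return !_queue.empty(); });
                e = std::move(_queue.front());
                _queue.pop();
            }
            if (e.path == "__quit__") return;
            _tree[e.path] = e.value; // worker-private tree: no locking needed
            std::lock_guard<std::mutex> lk(_mutex);
            ++_frames; // stand-in for re-drawing the texture
        }
    }
    std::map<std::string, std::string> _tree; // private property tree
    std::queue<Event> _queue;
    std::mutex _mutex;
    std::condition_variable _cv;
    int _frames = 0;
    std::thread _thread; // declared last: starts after members are ready
};

// Demo helper: post an event and wait until the worker has processed it.
inline int postAndWait(CanvasWorker& w, const std::string& p,
                       const std::string& v) {
    int before = w.framesRendered();
    w.post({p, v});
    while (w.framesRendered() == before)
        std::this_thread::yield();
    return w.framesRendered();
}

// Demo helper: run one worker through n events, return its frame count.
inline int demoRun(int nEvents) {
    CanvasWorker w;
    for (int i = 0; i < nEvents; ++i)
        postAndWait(w, "/text", std::to_string(i));
    return w.framesRendered();
}
```

Note the condition variable: the worker sleeps until an event arrives, which is also the answer to the "busy wait" concern raised further below.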
[...] there is a thing called the global property tree, and there is a single global scripting interpreter. The bottleneck when it comes to Nasal and Canvas is unnecessary, because the property tree merely serves as an encapsulation mechanism - strictly speaking, we're abusing the FlightGear property tree to use listeners that are mapped to events, which in turn are mapped to lower-level OSG/OpenGL calls. Which is to say, this bottleneck would not exist if a different property tree instance were used per Canvas (texture).
This, in turn, is easy to change - because during the creation of each canvas, the global property tree _root is set, which could also be a private tree instead.
Quite literally, this means changing 5 lines of C++ code to use an instance-specific SGPropertyNode_ptr instead of the global one.
At that point, you have a canvas that is inaccessible from the main thread (which sounds dumb, but once you think about it, that's exactly the point). So, the next step is to provide this canvas instance with a way to access its property tree, which boils down to adding an FGNasalSys instance to each Canvas - that way, each canvas texture would get its own instance of SGPropertyNode + FGNasalSys.
Anybody who's ever done any avionics coding will quickly realize that you still need a way to fetch properties from the main loop (think /fdm, /position, /orientation) but that's really easy to do using the existing infrastructure, you could really use any of the existing I/O protocols (think Torsten's ajax stuff), and you'd end up with Nasal/Canvas running outside the main loop.
The final step is obviously making the updated texture available to the main loop, but other than that, it's much easier to fix up the current infrastructure than fixing up all the legacy code ...
[...] telling the canvas system to use another property tree (SGPropertyNode instance) is really straightforward - but at that point, it's no longer accessible to the rest of the sim. You can easily try it for yourself: just add a "text" element to that private canvas. The interesting part is making that show up again (i.e. via placements). Once you are able to tell a placement to use such a private property tree, you can synchronize access by using a separate thread for each canvas texture (property tree). But again, it would be a static property tree until you provide /some/ access to it - so that it can be modified at runtime - and given what we have already, hooking up FGNasalSys is the most convenient method. But all of the canvas bindings/APIs we already have would need to be reviewed to get rid of the hard-coded assumption that there is only a single canvas tree in use.
Like you said, changing fgfs to operate on a hidden/private property tree is the easy part, interacting with that property tree is the interesting part.
Also, it would be a very different way of coding, we would need to use some kind of dedicated scheduling mechanism, or such background threads might "busy wait" unnecessarily.[12]
providing a new execution model for new Canvas modules, where a Canvas texture has a private property tree that can only be updated by a Nasal script running outside the main loop, would be feasible, and is in line with ideas previously discussed on the developers mailing list. Furthermore, that approach matches the way web browsers have come to address the long-standing issue of JavaScript blocking tabs, by coming up with the "web extension" framework and its message-passing based approach - with one script context running outside the main thread ("background scripts") and another one ("content scripts") running inside the main loop, communicating only via "events" (messages).
This kind of setup could be made to work by providing a new/alternate Canvas mode, where the Canvas tree would never show up in the global tree, but would instead be bound to a private FGNasalSys instance, minus all the global extension functions.
With the exception of nested canvases (i.e. those referencing another canvas via a raster image lookup), those canvas textures could be updated/re-drawn outside the main loop, and would only require a few well-defined synchronization points - those fetching updated properties/navaid info, and the one providing the final texture to the main loop - and this is where Emesary could become a real asset.
In and of itself, this won't help with legacy aircraft/code - at least not directly, but it would provide an alternative that people interested in better performance could adopt over time, while investigating how legacy code could be dealt with, so that it can benefit without too much manual work (such as providing a list of subscribed properties, that are automatically copied to the private property tree running in the background context) - this won't be as efficient, but having a list of input/output properties could work well enough for most people's code[13]
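The "list of subscribed properties" idea for legacy code could look roughly like this (hypothetical names, illustrative only): the background context declares its input and output property paths; before each update the main loop copies just the inputs into the private tree, and afterwards copies the declared outputs back.

```cpp
#include <map>
#include <set>
#include <string>

// Illustrative sketch: string maps stand in for property trees, and the
// Subscription struct stands in for the declared input/output lists.
using Tree = std::map<std::string, std::string>;

struct Subscription {
    std::set<std::string> inputs;  // e.g. /position/latitude-deg
    std::set<std::string> outputs; // e.g. a status property written back
};

// Main loop -> background context: copy only the subscribed inputs.
inline void copyInputs(const Tree& global, Tree& priv, const Subscription& s) {
    for (const auto& path : s.inputs) {
        auto it = global.find(path);
        if (it != global.end())
            priv[path] = it->second;
    }
}

// Background context -> main loop: copy only the declared outputs.
inline void copyOutputs(const Tree& priv, Tree& global, const Subscription& s) {
    for (const auto& path : s.outputs) {
        auto it = priv.find(path);
        if (it != priv.end())
            global[path] = it->second;
    }
}

// Demo helper: one input flows in, "instrument logic" runs, one output
// flows back; the unsubscribed /sim/time property is never copied.
inline std::string demoRoundTrip() {
    Tree global{{"/position/latitude-deg", "53.5"}, {"/sim/time", "12:00"}};
    Tree priv;
    Subscription s{{"/position/latitude-deg"}, {"/instr/out"}};
    copyInputs(global, priv, s);
    priv["/instr/out"] = priv["/position/latitude-deg"]; // "instrument logic"
    copyOutputs(priv, global, s);
    return global["/instr/out"];
}
```

This is less efficient than a fully event-driven design, but as the text notes, an explicit input/output list would likely work well enough for most legacy code.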
Use Cases
The main point being that we do want to support complex FG setups that are using multiple inter-linked fgfs instances/sessions - back when we played with this ~12 months ago, this was working simply by replicating Canvas raw properties from one instance to another - I think we were using just telnet + listeners to copy one canvas tree to another instance.
And this is an important consideration because we are still supporting native protocol master/slave setups, but our existing hard-coded od_gauge based glass instruments do not provide support for sync'ing.
With canvas we can easily "sync", but it will be fairly low-level to sync a MFD using just a handful of canvas primitives.
The issue here is that while that works, it is understandably very low-level - not so much for primitives like placing a label or an image, but for complex canvas contents such as widgets and, especially, MFDs.
That way, each fgfs/fgcanvas instance would have some awareness of what it is rendering, and could be much more efficient when it comes to updating/sync'ing state.
The "raw" mode would require all canvas primitives to be copied 1:1 - while a "smart" property-based approach would know that it only needs to make a single call to replicate a certain canvas element - such as a PFD/ND or even just a button/widget - because the encapsulated property-based element would expose its own interface.
That would mean that in an inter-linked fgfs setup, exchange between multiple instances could be much more efficient.
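To make the raw-vs-smart difference concrete, here is a toy comparison - the message formats are entirely made up: raw mode mirrors every primitive write, while a property-based element needs just one high-level creation message plus its actual input parameters.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical message format, purely for counting purposes.
struct SyncMessage { std::string payload; };

// Raw mode: one message per canvas primitive (paths, texts, transforms...).
inline std::vector<SyncMessage> rawSync(std::size_t primitiveCount) {
    std::vector<SyncMessage> msgs;
    for (std::size_t i = 0; i < primitiveCount; ++i)
        msgs.push_back({"set /canvas/.../primitive[" + std::to_string(i) + "]"});
    return msgs;
}

// Smart mode: the encapsulated element exposes its own interface, so a
// single "create" message plus its parameters suffices.
inline std::vector<SyncMessage>
smartSync(const std::string& elementType,
          const std::vector<std::string>& params) {
    std::vector<SyncMessage> msgs{{"create " + elementType}};
    for (const auto& p : params)
        msgs.push_back({"param " + p});
    return msgs;
}
```

An ND made of hundreds of primitives collapses into one creation message plus a handful of parameters - which is why the sync mechanism can stay lightweight enough for the existing I/O protocols.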
Candidates
Benefits
Using the PropertyBased-approach where each canvas feature can register itself as an extension of the core system, would mean that the sync mechanism can be really lightweight, and even be implemented on top of our existing I/O protocols (think multiplayer/dual-pilot).
The other issue here is that with all these canvas-based efforts going on, people need to be "forced" to establish generic systems and interfaces - or they'll just use copy & paste, and unnecessarily end up with widgets and instruments that are singletons or aircraft-specific.
Conclusion
If we continue "as is", we're abandoning the "sole-property" philosophy in the mid-term, simply because we're implementing increasingly complex systems (MFDs, GUI widgets, HUDs, 2D panels) on top of canvas, without the property tree being aware of what a given canvas tree actually represents internally in terms of actual functionality, and external data dependencies.
But as soon as we expose PropertyBasedElement as an interface via cppbind, we can establish "best practices" to demonstrate how new Canvas features can be implemented in a property-tree-aware fashion. The only thing that is missing is some kind of simple access restriction in a public/private/protected fashion, so that the internal state of a widget cannot be mutated externally.
This may all seem very complicated and like over-engineering, but it can be implemented by inheriting from PropertyBasedElement and using a handful of attributes that specify an interface in the form of XML attributes.
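The missing access restriction mentioned above could be prototyped along these lines - an illustrative sketch, not a proposal for concrete SimGear API: a widget declares which properties form its public interface (e.g. via XML attributes), and external writes to anything else are rejected.

```cpp
#include <map>
#include <set>
#include <string>

// Illustrative sketch of a public/private access restriction: the widget
// declares its interface properties; writes from outside are only accepted
// for those. All names are hypothetical.
class GuardedElement {
public:
    explicit GuardedElement(std::set<std::string> publicProps)
        : _public(std::move(publicProps)) {}

    // External property write: only interface properties may be touched.
    bool setFromOutside(const std::string& path, const std::string& value) {
        if (!_public.count(path))
            return false; // internal widget state stays private
        _tree[path] = value;
        return true;
    }
    // The widget's own implementation may write anything.
    void setInternal(const std::string& path, const std::string& value) {
        _tree[path] = value;
    }
    std::string get(const std::string& path) const {
        auto it = _tree.find(path);
        return it == _tree.end() ? "" : it->second;
    }
private:
    std::set<std::string> _public;
    std::map<std::string, std::string> _tree;
};

// Demo helper: a button whose interface is just "label" and "enabled".
inline bool guardDemo(const std::string& path) {
    GuardedElement btn({"label", "enabled"});
    return btn.setFromOutside(path, "x");
}
```

The point is that the interface declaration doubles as documentation: whatever is not listed is an implementation detail that C++ (or a replacement scripting engine) is later free to change.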
Implementation
See Canvas_Sandbox#CanvasNasal for the main article about this subject.
When it comes to exposing Nasal features via the property tree in the scope of Canvas, there are three main building blocks:
- PropertyBasedElement
- Canvas::Element
- Canvas::Group
Of these three, only the last one really needs to be exposed to accomplish our goal: allowing Nasal-space elements to be registered as Canvas elements while retaining the existing property tree interface, without having to go through scripting space to instantiate a new element.
This would, for example, make it possible to allocate a new window, widget (button, checkbox, label etc) or MFD (navdisplay, pfd, efb) just by setting a few properties, analogous to the already existing Canvas elements (text, image, path). By using Canvas::Group as the base class for doing that, we ensure that we can create arbitrarily-nested hierarchies of top-level wrappers for custom elements, which would internally preserve the usual Canvas structure.
The code required to expose an existing C++ base class to Nasal space in order to allow it to be sub-classed there can be seen in $SG_SRC/canvas/layout/NasalWidget.cxx , where the underlying C++ interface is registered as a base class - so that more specific widgets can be inherited from the C++ interface class, while all the key functionality is implemented in scripting space.
The same approach would allow new Canvas-based features to be developed while maintaining parity with existing property interfaces, especially the clean separation of different canvas elements.
Over time, we would move away from the increasingly Nasal-focused approach of declaring, using and instantiating/maintaining Canvas-based features, towards retaining the FlightGear property tree as the sole/main interfacing mechanism for creating/controlling new functionality, including GUI and MFD features.
This would be in stark contrast to the current practice of having relatively low-level building blocks represented in the property tree, with functionality mainly being determined in scripting space, and data dependencies not being properly formalized.
Sooner or later, this would help us establish the property tree as the main access point for any GUI/MFD functionality, which also means that we can trivially support backward compatibility - e.g. by honoring a corresponding version property for "meta" elements like a PFD, ND or GUI widget.
Equally, it would be possible to identify performance-critical components (think animation handling) and easily augment/re-implement those in C++ space, without breaking existing code - as long as the latter is only using dedicated property tree APIs, and not any scripting space calls directly.
Nasal would mainly be used for quickly prototyping new elements, while ensuring that all new functionality is a first class concept - without introducing any unnecessary Nasal dependencies.
Aircraft would no longer need to call custom Nasal-space APIs for using a certain MFD or GUI widget, but would merely invoke canvas.createChild() with the corresponding arguments, e.g.:

myGroup.createChild('label-widget');
myGroup.createChild('checkbox-widget');
myGroup.createChild('button-widget');
myGroup.createChild('repl-widget');
myGroup.createChild('map-widget');
myGroup.createChild('pfd-mfd');
myGroup.createChild('nd-mfd');
Internally, these would still be mapped to the already existing Nasal APIs (think Widget.nas, Button.nas etc) - while establishing a clean interfacing boundary, so that aircraft developers can rely on certain features to "just work".
Likewise, supporting multi-instance Canvas use cases would become much more straightforward this way. If we should ever need to optimize/re-implement certain parts in C++ space, there would be a clean property interface to do so (which could even support versioning/backward compatibility easily). In fact, by using this approach we could even entirely replace the Nasal engine, or add a new scripting engine at some point, without Canvas-based MFDs having any external Nasal interfacing requirements - because the main interfacing mechanism for any Canvas MFD would still be the property tree, with custom Canvas elements registered and implemented in scripting space.
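The versioning idea could be as simple as the following sketch - the version numbers and implementation names are made up for illustration: a factory inspects the requested version property and dispatches to the matching implementation, so old aircraft keep working while new ones get the native code path.

```cpp
#include <string>

// Made-up version numbers and implementation names, purely illustrative:
// a "meta" element such as a PFD carries a version property, and the
// factory dispatches to the matching implementation.
inline std::string selectImplementation(const std::string& elementType,
                                        int requestedVersion) {
    if (elementType == "pfd-mfd") {
        if (requestedVersion <= 1)
            return "pfd-nasal-legacy"; // original scripting-space version
        return "pfd-native";           // hypothetical later C++ rewrite
    }
    return "generic-" + elementType;   // no versioned implementations yet
}
```

Since aircraft only ever write the version property, swapping the backing implementation never requires touching aircraft code.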
Suggested reading
- Using Property Listeners
- SGPropertyChangeListener (doxygen)
- Howto:Use Property Tree Objects
- OSG Programming
References