Canvas widgets



As of 07/2012, there is a general consensus to replace the current FlightGear GUI (based on PLIB/PUI) with the new Canvas system.

However, before we can work on adding new widgets to FlightGear, we need to provide wrappers for the existing hardcoded PUI widgets, see Canvas GUI.

Canvas widgets are dynamically created, owner-drawn GUI widgets that use the Canvas subsystem and are scripted in Nasal. The textures are conventional canvas textures; however, they are rendered by the GUI system, so that scripted event handlers can be implemented which respond to GUI events such as keyboard/mouse input or other events (resize, update, redraw etc.).

Using canvas widgets, it will be possible to create your own GUI styles and even completely new, fully interactive GUI controls (buttons, text boxes, list views, tree views etc.) just by using Nasal, without touching any C++ code and without rebuilding FlightGear. The Canvas system supports loading SVG files and raster images, so that canvas elements and GUI widgets can be assembled from SVG elements, meaning that you will be able to use a conventional SVG editor like Inkscape to create/edit FlightGear GUI widgets.
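
To make this concrete, here is a minimal sketch assuming the canvas Nasal helpers described on the Canvas page (canvas.new, addPlacement, createGroup, parsesvg); the placement name and SVG path are made up for this example:

# minimal sketch: build a widget face from an SVG created in Inkscape
var my_canvas = canvas.new({
  "name": "widget-demo",          # arbitrary canvas name
  "size": [512, 512],             # underlying texture size
  "view": [256, 128]              # logical drawing area
});
# place the texture somewhere visible (here: a named object in an aircraft model)
my_canvas.addPlacement({"node": "widget-face"});
var root = my_canvas.createGroup();
# load the SVG file into the canvas group - element IDs stay addressable afterwards
canvas.parsesvg(root, "Aircraft/MyAircraft/gui/widget-face.svg");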


The Idea

This section lists a bunch of ideas related to supporting Canvas textures for GUI drawing needs, i.e. rendering Canvases as textures, but also increasingly implementing parts of the GUI (such as custom widgets) using interactive Canvases with scripted callbacks. The idea is to work around existing GUI/PUI shortcomings by using Canvases to implement custom styles and widgets.

We could allow aircraft dialogs to include custom widgets, although that might be unwise for other reasons[1]
— James Turner
  1. James Turner (Tue, 24 Jul 2012 10:36:26 -0700). Re: [Flightgear-devel] Switching from PUI to osgWidget.

The Plan

In contrast to using some hardcoded GUI system (PUI, osgWidget, etc.), this approach would give much more flexibility and also the means of modifying and creating new widgets without the need to touch any core code.

With the Canvas system, every type of widget would be possible, so that things like submenus can also be realized.

Another advantage of the Canvas approach is that it makes heavy use of the property tree and therefore is already fully accessible from Nasal code and also configurable with the existing XML formats. [1]
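
For example, an existing canvas can be inspected and tweaked with nothing but plain property accessors; the index and element path below are purely illustrative:

# hypothetical example: poke an existing canvas purely via the property tree
var base = "/canvas/by-index/texture[0]";
print("canvas size: ", getprop(base ~ "/size[0]"), "x", getprop(base ~ "/size[1]"));
# update a text element that was created underneath this canvas
setprop(base ~ "/group/text[0]/text", "Hello, Canvas!");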

The point of the canvas widget demo is to demonstrate how powerful and flexible the new canvas system really is: it can be used not just for aircraft scripting (instruments, MFDs), but also for GUI scripting. Using the canvas system would unify the 2D rendering backend (all 2D rendering can be handled via canvas) while reducing the amount of C++ code doing these things. That means the GUI system could be maintained entirely in scripting space, i.e. as part of the base package, by people who don't need to know C++ - some basic Nasal knowledge will do.

Basically, adopting the new canvas system for such and similar purposes will mean that tons of old/outdated C++ code can be phased out and replaced by a single consistent implementation that uses modern C++/OSG code. Ultimately, that also means OSG itself can make more assumptions about what is being rendered, so that more optimizations (= better frame rates) can be accomplished more easily by using OSG coding patterns and protocols in a single place, instead of outdated/custom/3rd-party libraries which would need to be manually baked into the existing FG/SG/OSG ecosystem.

Currently, the canvas system is integrated in such a fashion that it keeps working with the old GUI code still in place. In addition, all of the existing GUI features (layouting, XML processing) are implicitly supported due to the way the canvas system is implemented at the moment. These are real roadblocks when implementing a new GUI library next to PUI, because all of the existing stuff would need to be explicitly ported (either in C++ space or by converting tons of XML files).

Overall, the canvas system will give us all of this "for free", and it will mean less C++ code in the source tree, too - i.e. better maintainability.

Once the standalone "FGCanvas" is available, it would also be possible to run the GUI in multiple windows or even in separate processes.

In addition, by using the canvas system for GUI widgets, it would also be possible to render aircraft instruments, MFDs, HUDs etc. within GUI dialogs.

Status (06/2014)

Warning  The canvas GUI always handles events first and forwards events to scenery picks and PUI dialogs only if no window was hit. This means that (old) PUI dialogs rendered on top of new canvas windows will currently not receive their GUI events. With FlightGear 3.1+ this is no longer a problem, as Canvas windows are drawn on top of PUI dialogs, so that the rendering order matches the event handling order.
  • TheTom (05/2013): "I'm still not completely sure how to implement the GUI, but currently I'm thinking of something similar to most available UI toolkits with mainly using images together with 9-scale/slicing. Theming would be possible by simply exchanging the images and/or modulate them with a color. For some icons/elements also SVG could be used, and maybe I'll implement the possibility to cache rendered images of SVG elements for faster rendering of them." [2]
  • TheTom (30/07/2012): I have now pushed some updates to my branch. It is now possible to create windows (texture rectangles) just by using the property tree and to place a canvas texture onto them. Mouse events are passed to the active window (= the window the cursor is hovering over, or, for dragging, the window where the drag gesture started) and can be handled from Nasal or anything else that has access to the property tree.
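
Based on that description, driving such a window from Nasal could look roughly like the sketch below; all property paths and names are illustrative only and need to be checked against the actual branch:

# hypothetical property layout for a canvas GUI window
var wnd = "/canvas/gui/window[0]";
setprop(wnd ~ "/x", 100);                 # initial screen position
setprop(wnd ~ "/y", 150);
setprop(wnd ~ "/size[0]", 400);           # size of the texture rectangle
setprop(wnd ~ "/size[1]", 300);
# react to mouse events that the core forwards to the active window
setlistener(wnd ~ "/mouse/button", func(n) {
  if (n.getValue())
    print("mouse button pressed inside window 0");
});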

Missing / Todo

Last updated: 08/2012

List of missing things (if you'd like to get involved to help with any of these, please get in touch via the canvas forum):

Misc

  • Documentation: Read, ask questions, extend. I haven't done much documentation (apart from inline documentation) simply because the API is not completely stable yet. You could also try different use cases and maybe find some examples where the API lacks some features. Not done
  • Own ideas: Come up with a new idea or something that I have already mentioned somewhere else. Feedback required
  • Find more work I've currently forgotten about :) Feedback required

C++

  • Keyboard input: I haven't thought too much about it and also haven't done anything yet, but we will definitely need access to keyboard events. Not done
  • Clipping: For different reasons we will need to be able to clip some elements to certain regions. It should work either by specifying a clipping rectangle or by using a path. OpenVG seems to have support for it, although I haven't looked into it too deeply. We also need to ensure that it works with text. At least rectangular regions are needed (e.g. group/clip-min[0..1], group/clip-max[0..1]). Not done
  • Animations: I don't know if we should do animations just by using interpolate() and settimer() from Nasal, or if we should implement some time-based animations directly in C++. At least we need some helper functions (e.g. for blinking elements -> cursor, fading, ...). It would also be possible to implement animations purely in Nasal space, e.g. by supporting a subset of SMIL for SVG, so that existing tools could be used to create animated vector images that are converted to canvas properties by the Nasal parser (see the sketch after this list).
  • Check what is missing to implement the different hardcoded instruments. Not done
  • Provide Nasal hooks to access taxiways, parking positions etc.
  • Maybe support displaying shapefiles. Not done
  • Unify the canvas creation a bit, such that canvases can be moved seamlessly between the different placements (gui, model, hud, etc.). The normal model placement is great, but the gui widget placement needs to be able to also use an already existing canvas. Not done
  • Support multiple views/windows: Currently the GUI can only be placed inside one view/window (see Docs/README.multiscreen), but it would be nice to be able to move windows between views. Not done
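
As an illustration of the scripting-space route mentioned in the animations item above, a blinking cursor only needs settimer() and an element's show()/hide() methods; the my_cursor element stands for any canvas element created elsewhere:

# minimal blinking-cursor sketch using only core Nasal helpers
var blink_state = 1;
var blink = func {
  blink_state = !blink_state;
  if (blink_state) my_cursor.show();   # my_cursor: some previously created canvas element
  else             my_cursor.hide();
  settimer(blink, 0.5);                # re-arm the timer every 0.5 seconds
};
blink();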

Nasal

Fully Canvas-based implementation (in progress as of 08/2012)

This section describes a possible way of completely getting rid of PUI and providing a GUI system by only using the Canvas system.

C++ core requirements Pending

40% completed

Certain properties can be set to affect the appearance and behavior of the dialog:

  • Initial position
  • Draggable
  • Resizeable
    • If true, min/max/initial size
    • If false, size
  • Modal
  • Texture coordinates (allow using just part of the canvas, e.g. to leave enough space for a larger window after resize)
  • ...
  • The Canvas manager needs to be extended to support using such a window as the target of a placement definition. If a window is resizable, the size property nodes will be updated to allow the canvas and/or its viewport to be resized. Done
  • It would also be good to allow canvas textures to be resized at runtime, so that we don't have to reserve overly large canvases just because the dialogs could be resized. Pending
  • The GUI system should take care of handling and forwarding mouse and keyboard events to the property tree as needed (some parts of the existing code could probably be reused). E.g. if a canvas is assigned, picking should occur on mouse clicks and be forwarded to the property tree. Pending
  • The existing dialog-show command needs to be modified to call the corresponding function in Nasal space, which will handle the whole creation and updating of the GUI. Pending (can in the meantime be done by using the new removecommand/addcommand APIs; see the sketch below)
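
A hedged sketch of that interim addcommand/removecommand route: take over the built-in dialog-show command and hand the request to a Nasal-space factory (canvas_gui.showDialog is a hypothetical helper, not existing code):

# redirect the existing fgcommand into scripting space
removecommand("dialog-show");
addcommand("dialog-show", func(node) {
  # the command's argument node carries the dialog name, as with the old PUI GUI
  var name = node.getNode("dialog-name", 1).getValue();
  canvas_gui.showDialog(name);    # hypothetical Nasal-space dialog factory
});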

Keyboard Handling

Feedback required

Looking at the mouse handling code, the same technique could obviously also be used for keyboard handling:

So it just involves checking for osgGA::GUIEventAdapter::KEYDOWN and propagating the events to a handful of child properties for each canvas element that subscribed to keyboard events.

It'd make sense to also use a boolean property to enable/disable reception of keyboard events; that will allow us to set the "focus" of windows and widgets. And then we only need to expose some keyboard-specific events like key-up, key-down, key-value etc. at the C++ level, which all boils down to just calling fgSet*Value().

The rest will be handled at the Nasal level, so that users can register their own listeners.
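
To make that division of labour concrete, here is a sketch of the Nasal side consuming such key events once the C++ code sets a handful of properties; every property path below is an assumption, not an existing interface:

# assumed layout: the core writes the last key event below the focused window
var kbd = "/canvas/gui/window[0]/keyboard";
setprop(kbd ~ "/accept-events", 1);       # the boolean "focus" flag proposed above
setlistener(kbd ~ "/key-value", func(n) {
  print("key pressed in window 0: ", n.getValue());
  # a widget implementation would update its canvas text element here
});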

The nice thing is that once this is properly implemented, we'll already have a unified I/O handling system for GUIs, instruments, HUDs etc. and it moves all the implementation details to the base package again.

BTW: There is already a property to toggle the mouse cursor via setprop, so this could also be wrapped in gui.nas. Obviously, being able to use a canvas texture as the cursor would prove the point that "everything is a canvas" in the design.

I was looking through our earlier discussions about how osgGA can be used for keyboard support. And like I mentioned, I think it would make sense to adopt the same approach that you used for mouse support here.

Overall, I feel it would make sense to closely model this after the way this is handled in Java, i.e. using the same "property events" that are supported in Java for mouse/keyboard and "windows" - just specific to canvas textures, instead of just "windows".

So instead of "window" events, we'd have "texture" events, so that these can also be used for non-GUI purposes, such as instruments.

Looking at our last discussion on forwarding mouse events to nested canvases, it would then really make sense to also have "window/texture" events (create, update, redraw, resize, move etc). So that we don't need to add any C++ code for such things.

And by having "high level" events such as "mouse-enter", "mouse-leave" or "texture-resize", "texture-moved", "has-focus", "lost-focus" etc., it would be very simple to implement MFDs and GUI widgets in Nasal without requiring C++ workarounds.

So it's really just about supporting 10-15 event types in the C++ code which set some properties, so that the rest can be implemented in Nasal.

Obviously, some events would be texture specific, while others would be specific to groups or elements and should probably be forwarded, like you suggested earlier.

And then it'd be very close to how this is handled in Java, because we could use conventional Nasal listener callbacks to model the details in scripting space.

Specialized widgets could then use these events to implement widget-specific events, i.e. for list views, tree views or other complex widgets - but these could then be implemented in Nasal.

We probably want to support Java-style key listeners and key bindings, so that widget behavior can be easily implemented in Nasal space.
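
One way such Java-style key bindings could be modeled in Nasal is a per-widget lookup table mapping key names to actions; this is only a design sketch, and none of these widget methods exist yet:

# hypothetical key binding table for a list view widget
var ListViewBindings = {
  "Up":     func(widget) { widget.selectPrevious(); },
  "Down":   func(widget) { widget.selectNext(); },
  "Return": func(widget) { widget.activateSelection(); }
};
# the (yet to be written) GUI event dispatcher would then simply do:
var dispatchKey = func(widget, key) {
  if (contains(ListViewBindings, key))
    ListViewBindings[key](widget);
};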

Nasal requirements

Feedback required

  • The API needs to be extended to allow the creation of windows and placing canvases onto them. Done
  • We also need a function which parses existing dialog XML files (reimplementing [http://gitorious.org/fg/flightgear/blobs/next/src/GUI/FGPUIDialog.cxx#line708 FGPUIDialog::makeObject]) and maps them to the new canvas widgets. Each widget sits in its own Nasal file (e.g. inside $FG_DATA/gui/widgets) and has to be implemented using a hash with several required functions, implementing the abstract interface of a widget:
var SampleWidget = {
  # Add the widget to the parent
  #
  # @param parent The parent element/group the widget is added to.
  # @param config A hash containing all parameters for this widget.
  new: func(parent, config) {},
  # Get the minimal required size. Used for positioning following elements
  # if no absolute coordinates are given and to calculate the available space
  # for widgets with vertical or horizontal stretch enabled.
  getMinSize: func() {},
};

In addition, the interface should probably contain methods to:

  • show/hide widget
  • destroy widget
  • enable/disable propagation of events (mouse/keyboard)
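
Applied to the SampleWidget interface above, those additions could look roughly as follows; the method names are suggestions rather than an agreed-on API:

# possible extension of the abstract widget interface sketched earlier
var SampleWidget = {
  # ... new() and getMinSize() as above ...
  show: func() {},                      # make the widget visible
  hide: func() {},                      # hide it without destroying it
  del:  func() {},                      # destroy the widget and release its resources
  setEventsEnabled: func(enabled) {}    # toggle mouse/keyboard event propagation
};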


Also, we are currently able to reload the GUI in FlightGear; we will want to retain this feature. Once we start implementing widgets in Nasal, we won't just want to reload the GUI XML files, but also the widget modules from $FG_ROOT/Nasal/widgets, so that widgets can be easily developed and tested without having to restart FlightGear.

This can be implemented in Nasal space; there is no need to modify the Nasal submodules code for this. However, we also need to ensure that there is a sane way to terminate all active widgets, i.e. by stopping all running instances. This will also be important for handling simulator reinit/reset.

Thus, during instantiation, all widgets would need to register a listener, so that they can be terminated properly using a signal property.
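
A small sketch of how both requirements could be honoured with existing helpers: io.load_nasal() for reloading a single widget module, and a listener on the reinit signal for cleanup (the widget file name is hypothetical, and whether /sim/signals/reinit is the right trigger for GUI reloads is an assumption):

# reload a single widget module without restarting FlightGear
io.load_nasal(getprop("/sim/fg-root") ~ "/Nasal/widgets/button.nas", "widgets_button");

# inside a widget instance: register for termination so reinit/reset can stop it
setlistener("/sim/signals/reinit", func {
  # stop timers, remove listeners and delete canvas elements of this instance here
  print("widget instance terminated");
});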

Dialog Parser

Feedback required

The parsegui function parses a dialog XML file and calls the constructor of the corresponding widget. The GUI parser should handle XML versioning, so that future updates to the underlying XML format can be easily supported without tons of custom code. It might make sense to provide a base class, so that future parsers can be easily implemented next to the existing code and new parsers don't need to touch any of the old code!

We need to keep the existing way of specifying GUI files via XML - it's a nice, declarative way of building the dialogs. Switching to an imperative system would be a step backwards. I do like the idea of a gui/widget/widgetname.nas structure so we can easily create a factory function and hack / add widgets.

It is important to keep in mind that the Canvas system will have at least 3 related uses:

  • HUDs Pending
  • GUIs Pending
  • 2D panels Pending

All of these will require an XML parser that turns the existing structure into canvas nodes. The existing SVG parser is implemented purely in scripting space (see svg.nas), using the XML parser in $FG_ROOT/Nasal/io.nas.

Given that all three file formats are PropertyList-encoded XML files, it should be possible to come up with a "Xml2Canvas" interface class which implements the XML parser and the Canvas interfaces. That will ensure a maximum degree of code reuse.
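
As a starting point, such a parser could build on io.read_properties(), which already turns any PropertyList-encoded XML file into a props.Node tree; the widgets factory hash and root_group used below are hypothetical:

# minimal parsegui sketch: read a dialog definition and dispatch to widget constructors
var parsegui = func(path) {
  var dlg = io.read_properties(path);    # returns a props.Node tree (or nil on error)
  if (dlg == nil) return;
  foreach (var child; dlg.getChildren()) {
    var type = child.getName();          # e.g. "button", "text", "group"
    if (contains(widgets, type))         # widgets: hypothetical hash of widget factories
      widgets[type].new(root_group, child);
  }
};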


Afterwards, the getMinSize method is used to calculate the available space and stretch widgets if required.

To ensure that the core widgets in $FG_ROOT/Nasal/widgets cannot be accidentally invalidated by users, the widget namespace/hash will be made immutable using globals.nas magic.

Building Widgets at the C++ Level

We can always create widgets from C++ by adding canvas elements directly through the property tree, but I think a fully scripted GUI is the most flexible and powerful approach. Let's keep in mind that all the original hard coded PUI dialogs were increasingly replaced with XML dialogs in FlightGear, which was important and a lot of work.

One of the most important goals of the canvas system is to make 2D drawing accessible to end users, without having to know C++ and without having to rebuild FlightGear. This automatically means that large parts of FlightGear will become more maintainable, because they can be moved over to the base package. Starting to implement GUI widgets at the C++ level would defeat the purpose.

While it would definitely be possible to implement widgets by directly creating a canvas via the property tree, that would probably be counter-productive, because we clearly don't want any hard coded C++ special cases.

Now, if someone really wants to implement some custom canvas widget that cannot/shouldn't be modeled in scripting space, then the canvas infrastructure can be extended.

Just take a look at how the existing Canvas Maps are implemented currently. The same approach could be used to provide support for new/more specific drawing modes, such as:

  • aircraft/3D model previews
  • scenery cameras
  • moving map layers
  • shapefiles

The canvas isn't really about end user features, it's a provider of an infrastructure, so that end user features can be more easily expressed and modeled in scripting space, using Nasal. That's why all end user features should be expressible in scripting space, and only the core infrastructure should need C++ extensions to make this easier. The end user APIs will be mostly designed in scripting space.

Related

Related Discussions