Canvas widgets

* The GUI system should take care of handling and forwarding mouse and keyboard events to the property tree as needed (some parts of the existing code could probably be reused). E.g. if a canvas is assigned, picking should occur on mouse clicks and be forwarded to the property tree. {{Pending}}
* The existing dialog-show command needs to be modified to call the corresponding function in Nasal space, which will handle the whole creation and updating of the GUI. {{Pending}}
=== Picking & Widget Callbacks {{Pending}} ===
{{Progressbar|50}}
For mouse handling I like the idea of having event handlers (e.g. click, drag, hover, etc.). So instead of just one property holding the events of the whole dialog/canvas, I want to forward the event to the corresponding element by using picking, or maybe, as a first step, just bounding-box checking against the current mouse position. It would still just set the three properties as before (button, x, y), but we could add a helper function which adds a listener to the button property and calls a function with all three parameters when the event is triggered. (I always want to keep the basic idea of only communicating via the property tree.)
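A minimal sketch of such a helper, assuming the mouse values live in a canvas-local <code>mouse/</code> branch as listed further below (the helper name and canvas path are illustrative, not an existing API):
<syntaxhighlight lang="php">
# Hypothetical helper: run a callback with (button, x, y) whenever the
# button property of the given canvas changes.
var addMouseHandler = func(canvas_path, callback)
{
  setlistener(canvas_path ~ "/mouse/button", func(node) {
    callback(node.getValue(),
             getprop(canvas_path ~ "/mouse/x"),
             getprop(canvas_path ~ "/mouse/y"));
  });
}

# Usage: print every button event on the first canvas
addMouseHandler("/canvas/by-index/texture[0]", func(button, x, y) {
  print("button ", button, " at ", x, ", ", y);
});
</syntaxhighlight>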
To determine which element the mouse is currently pointing at, this could either be the (leaf) element whose bounding box contains the mouse position, or, a bit more involved, the element the mouse is exactly inside of, determined by checking which element the current pixel belongs to (similar to OpenGL picking). The exact mouse position should also be passed to the widget, as this could be needed e.g. by a map widget to determine where the user actually clicked in world coordinates and add waypoints or something else.
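A sketch of the simpler bounding-box variant, walking the element hierarchy in the property tree and returning the deepest element containing the given position (the <code>bounding-box</code> child names are assumptions):
<syntaxhighlight lang="php">
# Read a numeric child value (returns nil if the child does not exist)
var val = func(node, name)
{
  var n = node.getNode(name);
  return n != nil ? n.getValue() : nil;
}

# Return the deepest element whose bounding box contains (x, y),
# or nil if the position is outside the given subtree.
var pickElement = func(node, x, y)
{
  var bb = node.getNode("bounding-box");
  if (bb != nil
      and (x < val(bb, "min-x") or x > val(bb, "max-x")
           or y < val(bb, "min-y") or y > val(bb, "max-y")))
    return nil; # outside this element: prune the whole subtree

  # inside (or no bounding box available): the deepest matching child wins
  foreach (var child; node.getChildren())
  {
    var hit = pickElement(child, x, y);
    if (hit != nil)
      return hit;
  }
  return node;
}
</syntaxhighlight>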
I'm currently thinking of adding a property to each element which allows enabling picking for this element. If the element is a group it receives the picking events of all of its children, otherwise only its own. We should also keep in mind that the Canvas won't be used only for the GUI but also e.g. for MFDs, which may have a touch interface that does not necessarily use widgets...
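With such a property the script side would only need to flip a single switch per element or group, e.g. (the property name <code>pickable</code> is purely illustrative, and the generic <code>set()</code> element method is assumed):
<syntaxhighlight lang="php">
# Enable picking for a whole group: it would then receive the
# picking events of all of its children ...
instrument_group.set("pickable", 1);

# ... or for a single element only
close_button.set("pickable", 1);
</syntaxhighlight>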
The Canvas is not primarily/solely about GUIs; that's just one particular use case ...
So picking really '''needs''' to be supported at the core/C++ level, as part of the canvas infrastructure itself, so that it is automatically available for other canvas uses - not just the GUI, but also the 2D panel code (once we have a 2D panel parser/converter) and MFD/touch screens.
If done this way, the GUI in FG 3.0 could probably be entirely canvas-driven, with very little coding required in Nasal and especially in C++: all the widget markup would be loaded from SVGs (dynamically turned into canvas nodes) and linked to events via the property tree, so that each widget really just needs to implement a handful of callbacks in Nasal, using a syntax like the following:
<syntaxhighlight lang="php">
var scroll_bar = scroll.createChild("path")
                       .moveTo(764, 2)
                       .vert(100)
                       .setStrokeLineWidth(4)
                       .setColor(0.94, 0.47, 0.27)
                       .setHandler( func { print("GUI event!"); } );
</syntaxhighlight>
The "'''setHandler'''" method would then just register a listener via _setlistener() for its own group/child region.
Another advantage would be that intersection tests would all be done in C++ space, i.e. very fast.
But the C++ code would only ever toggle canvas child properties.
Also, the same technique could be useful not just for GUI widgets but also for MFD touchscreens - because it would always just be a property that is triggered for mouse/keyboard events.
More complex features could be implemented on top of this easily.
This would make it possible to implement pretty advanced nested hierarchies, e.g. a treeview with edit boxes - all just using properties to fire off their Nasal callbacks.
All of this would be optional, and only explicitly enabled - so that there's no cost for conventional displays which don't require picking.
These "events" could be recursively passed on to other group items, so that the final child knows that it's the last item and responsible for the event.
And then there could be boolean "focus" property to indicate that a child is reponsible and that no other chilren should be informed
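A sketch of that recursive hand-off (property names are illustrative): the event walks down the hierarchy and stops at the first child that has claimed the focus.
<syntaxhighlight lang="php">
# Pass an event down the group hierarchy: a child with the boolean
# "focus" flag set takes over; otherwise this node keeps the event.
var dispatchEvent = func(node, event)
{
  foreach (var child; node.getChildren())
  {
    var focus = child.getNode("focus");
    if (focus != nil and focus.getBoolValue())
      return dispatchEvent(child, event); # the focused child wins
  }
  # no focused child below us: this node is responsible for the event
  node.getNode("mouse/event", 1).setValue(event);
}
</syntaxhighlight>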
And new GUI styles could easily be created by adding new CSS files, while new widgets could be added through new SVG files and Nasal modules which load the elements and implement their callbacks.
Currently all values are copied to the property tree. The following properties are set on the canvas of the active window:
* mouse/x
* mouse/y
* mouse/dx
* mouse/dy
* mouse/button
* mouse/state
* mouse/mod
* mouse/scroll
* mouse/event
The values are only valid for a single window (coordinates are relative to the window origin); see the example below.
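For example, a single listener on <code>mouse/event</code> is enough to pick up a complete event from these properties (the canvas path below is illustrative):
<syntaxhighlight lang="php">
# Watch one window's canvas and dump the values copied into the tree.
var mouse = "/canvas/by-index/texture[0]/mouse";
setlistener(mouse ~ "/event", func(node) {
  printf("%s: button=%d state=%d mod=%d pos=%d,%d delta=%d,%d",
         node.getValue(),
         getprop(mouse ~ "/button"), getprop(mouse ~ "/state"),
         getprop(mouse ~ "/mod"),
         getprop(mouse ~ "/x"), getprop(mouse ~ "/y"),
         getprop(mouse ~ "/dx"), getprop(mouse ~ "/dy"));
});
</syntaxhighlight>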


=== Keyboard Handling ===
