Canvas widgets: Difference between revisions

* That will get rid of the legacy (GLUT based) mouse event API that PUI uses, and means we can support any mouse event supported by osgGA and osgViewer - mouse wheel, buttons and mouse-over. {{Pending}}
* Basically we only need to pass mouse/keyboard events and handle dragging/resizing and input focus/stacking order of multiple dialogs. {{Pending}}
* This is certainly where things need to start, then. I'd hope dragging and resizing can be mostly handled by the Nasal layer; the C++ side only needs to keep the window and canvas texture in sync when a resize happens. {{Done}}
* <del>We should consider the way osgWidget gets events from osgGA as the 'correct' way. We want to be passing osgGA events around, not raw x,y values, and pretty much all of fg_os.hxx dates from supporting GLUT and OSG in a single codebase - which forces us to use a very crude API. If we use the real OSG events we get lots of information about button index, event type and modifier state, all encapsulated in a clean way.</del> {{Done}}
* On a different note: window management. I've seen that in Tom's private branch he has started on a window class (albeit it is just a skeleton currently). I think we indeed want a C++ window manager, which is basically just a dumb "visible thing that renders a texture". That way, the canvas code only has to render stuff to a texture and expose that, which is a nice abstraction point. {{Done}}
* Agreed - I was planning to dive into adapting the window on Tom's branch, with a C++ host for a canvas. Note that the canvas currently assumes render-to-texture, but for the GUI I'm not sure that's actually desirable - simply a separate camera per GUI window may be sufficient. Since the camera already arranges everything beneath the RTT camera, this should be a fairly minor change, if it's desirable. (Saves some memory, makes resizing GUI windows a little easier, might make clipping or other state management less efficient in the main GUI camera ... but probably not.) {{Pending}}
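The "dumb visible thing that renders a texture" idea above can be sketched in a few lines. This is an illustrative sketch only - the `Window` and `WindowManager` names are hypothetical, not FlightGear classes - but it shows the abstraction point: the canvas side only produces a texture handle, while the manager deals with stacking order, focus and picking, and a resize merely updates the size the canvas must sync its texture to.

```cpp
#include <algorithm>
#include <iterator>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Hypothetical "dumb window": a visible rectangle that displays a texture.
struct Window {
    std::string name;
    int x = 0, y = 0, width = 0, height = 0;
    unsigned textureId = 0;   // handle to the canvas render target

    // On resize the window just stores the new size; the canvas code
    // would reallocate its texture to match.
    void resize(int w, int h) { width = w; height = h; }

    bool contains(int px, int py) const {
        return px >= x && px < x + width && py >= y && py < y + height;
    }
};

// Hypothetical manager: keeps stacking order and input focus, nothing more.
class WindowManager {
    std::vector<std::shared_ptr<Window>> stack_;  // back = topmost
public:
    void add(std::shared_ptr<Window> w) { stack_.push_back(std::move(w)); }

    // Clicking a window raises it to the top and gives it focus.
    std::shared_ptr<Window> pick(int px, int py) {
        for (auto it = stack_.rbegin(); it != stack_.rend(); ++it) {
            if ((*it)->contains(px, py)) {
                auto w = *it;
                stack_.erase(std::next(it).base());  // remove from old slot
                stack_.push_back(w);                 // re-insert on top
                return w;
            }
        }
        return nullptr;  // click hit no window
    }

    std::shared_ptr<Window> focused() const {
        return stack_.empty() ? nullptr : stack_.back();
    }
};
```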
* I'd suggest starting with the current GUICamera code, and especially the osgWidget base class. In particular, my goal is to be able to kill off fg_os.hxx in the near future, i.e. to have all events being passed into the canvas as osgGA types, not the old GLUT interface of raw x/y floats.
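To see what is gained by passing structured events around instead of raw x/y floats, here is a minimal sketch. The `GUIEvent` fields loosely mirror the kind of information `osgGA::GUIEventAdapter` encapsulates (event type, button index, modifier state, scroll input), but the types and the dispatcher below are hypothetical illustrations, not OSG or FlightGear APIs.

```cpp
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical structured GUI event: one object carries everything a
// handler might need, instead of a bare (x, y) pair.
struct GUIEvent {
    enum Type { PUSH, RELEASE, DRAG, MOVE, SCROLL, KEY_DOWN, KEY_UP };
    Type type;
    float x = 0, y = 0;       // pointer position is still available
    int button = 0;           // which mouse button, not just "a click"
    uint32_t modifiers = 0;   // bitmask of modifier keys held down
    float scrollDelta = 0;    // mouse-wheel support, impossible with raw x/y
};

constexpr uint32_t MOD_SHIFT = 1u << 0;
constexpr uint32_t MOD_CTRL  = 1u << 1;

// Dispatcher in the osgGA handler-chain style: each handler inspects the
// whole event and returns true when it consumed it, stopping propagation.
class EventDispatcher {
    std::vector<std::function<bool(const GUIEvent&)>> handlers_;
public:
    void addHandler(std::function<bool(const GUIEvent&)> h) {
        handlers_.push_back(std::move(h));
    }
    bool dispatch(const GUIEvent& ev) {
        for (auto& h : handlers_)
            if (h(ev)) return true;  // consumed
        return false;                // nobody handled it
    }
};
```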
* It should probably even be enough to use just one single camera for all windows. The big advantage we gain with render-to-texture is that by using a larger texture we can get better anti-aliasing. For rendering paths the stencil buffer is used, so we only have pixel resolution, which is visible on non-horizontal or non-vertical lines or curves.
* We could also use lazy rendering to only update the GUI texture if something changes. Normally dialogs should be pretty much static... {{Pending}}
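The lazy-rendering suggestion boils down to a dirty flag on the canvas: skip the render-to-texture pass entirely while nothing has changed, since static dialogs then cost no GPU work per frame. A minimal sketch of that pattern, assuming a hypothetical `LazyCanvas` class (not the real canvas implementation):

```cpp
// Hypothetical sketch of lazy rendering via a dirty flag.
class LazyCanvas {
    bool dirty_ = true;     // start dirty so the first frame renders
    int renderCount_ = 0;   // stands in for the expensive RTT pass
public:
    // Property changes (text, colors, geometry) mark the canvas dirty.
    void markDirty() { dirty_ = true; }

    // Called once per frame; skips the texture update while nothing changed.
    void update() {
        if (!dirty_)
            return;           // static dialog: no GPU work this frame
        ++renderCount_;       // real code would redraw into the texture here
        dirty_ = false;
    }

    int renderCount() const { return renderCount_; }
};
```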
* <del>The current window manager registers itself with the main viewer and subscribes to osgGA events, so the canvas has no dependencies on the old system.</del> {{Done}}
