Canvas development

# The UAV guys want to view/use external live video inside FlightGear as an instrument/texture (which would require a new Canvas::Element to render an external video stream to a canvas; see the OSG-level sketch after this note)
# The computer vision (OpenCV) guys want to stream FlightGear's live video itself to another application for image processing purposes - the latter would require streaming FlightGear's main window view to an external program (i.e. by using FlightGear's CameraGroup code), possibly via a corresponding "virtual Placement" that opens a socket and provides a live stream of the FlightGear main window from a background thread. This only makes sense to pursue once we can [[#Supporting Cameras|render camera views to a canvas]], though.}}
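At the OSG level, the first item above essentially means keeping an <code>osg::Image</code> that is refreshed from an external frame source and bound to a texture. The sketch below illustrates just that idea; it is not FlightGear's actual Canvas API, the element/placement wiring is omitted, and <code>onVideoFrame()</code> is a hypothetical decoder callback:

<syntaxhighlight lang="cpp">
// OSG-level idea behind a hypothetical video Canvas::Element: an osg::Image
// refreshed from an external frame source each time a new frame arrives.
#include <osg/Image>
#include <osg/Texture2D>

osg::ref_ptr<osg::Image> gVideoImage = new osg::Image;

// Hypothetical callback invoked by an external video decoder.
void onVideoFrame(const unsigned char* rgba, int w, int h)
{
    // Point the image at the new pixel data; NO_DELETE because the decoder
    // owns the buffer. dirty() tells OSG to re-upload the texture.
    gVideoImage->setImage(w, h, 1, GL_RGBA, GL_RGBA, GL_UNSIGNED_BYTE,
                          const_cast<unsigned char*>(rgba),
                          osg::Image::NO_DELETE);
    gVideoImage->dirty();
}

// The texture a canvas element could then place on its geometry.
osg::ref_ptr<osg::Texture2D> makeVideoTexture()
{
    osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D(gVideoImage.get());
    tex->setResizeNonPowerOfTwoHint(false);
    return tex;
}
</syntaxhighlight>

Using <code>osg::Image::NO_DELETE</code> avoids a per-frame copy, but makes the decoder responsible for keeping the buffer alive until the next frame arrives; copying into an image owned by OSG would be the safer variant.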
One suggestion would be to develop some kind of shared memory interface, with the metadata embedded in the same memory space. After each rendering step, the image would simply be copied into that memory along with the metadata and a frame counter. I have already done some tests on Windows and it works quite well. It is also possible to enable/disable the copy process via command line parameters (it is not too slow, but it is useful to have a way of controlling it). From the shared memory segment, any other process could read the image and do whatever it wants, which would open up a whole horizon of possibilities such as streaming, video recording, and a more modular architecture for anything related to gathering images; the JPEG server could be separated from FlightGear, for example. Obviously, this requires some kind of process synchronization, such as mutexes, which relies on the reading software not holding the lock for too long. Another approach would be a different architecture inside FlightGear, something like Renderer -> ImageGrabber -> ImageSaver, where the ImageGrabber is the code that reads the image and saves it into a buffer, and the ImageSaver is the "externalizer" (JPEGSaver, SharedMemorySaver, MPEGSaver and so on). However, I personally prefer the first option, which lets people grab the image and do whatever they want without having to understand and recompile the FlightGear source code.<ref>{{cite web
  |url    =  https://sourceforge.net/p/flightgear/mailman/message/32521939/
  |title  =  <nowiki> [Flightgear-devel] Rendered image export to Shared Memory </nowiki>
  |author =  <nowiki> Emilio Eduardo Tressoldi Moreira </nowiki>
  |date  =  Jun 30th, 2014
  |added  =  Jun 30th, 2014
  |script_version = 0.36
  }}</ref>
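A minimal sketch of the shared-memory layout described above, writer side. The original tests were done on Windows; this version uses POSIX <code>shm_open</code>/<code>mmap</code> and a process-shared pthread mutex instead, and all names (<code>FGVideoSHM</code>, <code>/fg_video</code>, <code>publishFrame</code>) are made up for illustration:

<syntaxhighlight lang="cpp">
// Shared-memory frame export, writer side (POSIX sketch).
// Build on Linux with: g++ -c shm_export.cpp -pthread   (link with -lrt)
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>
#include <cstring>

// Assumed maximum frame size; a real implementation would size the
// segment from the actual window dimensions.
constexpr size_t kMaxBytes = size_t(1920) * 1080 * 4;

struct FGVideoSHM {                  // metadata embedded in the same segment
    pthread_mutex_t mutex;           // process-shared lock
    uint64_t        frameCounter;    // incremented after every copied frame
    uint32_t        width, height, bpp;
    uint8_t         pixels[kMaxBytes];
};

FGVideoSHM* openSegment()
{
    int fd = shm_open("/fg_video", O_CREAT | O_RDWR, 0666);
    if (fd < 0) return nullptr;
    if (ftruncate(fd, sizeof(FGVideoSHM)) != 0) { close(fd); return nullptr; }
    void* mem = mmap(nullptr, sizeof(FGVideoSHM),
                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (mem == MAP_FAILED) return nullptr;

    auto* shm = static_cast<FGVideoSHM*>(mem);
    pthread_mutexattr_t attr;        // the mutex must work across processes
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&shm->mutex, &attr);  // simplification: only the
    return shm;                              // first process should do this
}

// Called after each rendering step: copy the frame, then bump the counter.
void publishFrame(FGVideoSHM* shm, const uint8_t* rgba,
                  uint32_t w, uint32_t h)
{
    pthread_mutex_lock(&shm->mutex);
    shm->width = w; shm->height = h; shm->bpp = 4;
    std::memcpy(shm->pixels, rgba, size_t(w) * h * 4);
    ++shm->frameCounter;             // readers poll this to detect new frames
    pthread_mutex_unlock(&shm->mutex);
}
</syntaxhighlight>

A reader would open the same segment, lock the mutex, compare <code>frameCounter</code> against the last value it saw, and copy the pixels out. Keeping the critical section down to a single <code>memcpy</code> is what makes the "readers must not hold the lock for too long" requirement realistic.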


The HTTP server already does this - if you select a 'low compression' image format such as TGA or uncompressed PNG, it's very close to what you want. It uses a local TCP socket rather than shared memory, but unless you need really large images, I am not sure the additional complexity is worth adding an entirely new image output system. See the code for how to increase the max-fps (it defaults to 5 Hz but could be 30 or 60 Hz) and the file format of the http-server; any image format supported by an OSG ReaderWriter plugin should work (as long as the plugin implements writing!).
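For completeness, this is roughly what the consumer side of the HTTP route could look like: a few lines of libcurl fetching one frame and writing it to disk. Starting FlightGear with <code>--httpd=8080</code> enables the built-in HTTP server; the <code>/screenshot</code> path and its <code>type</code> query parameter are assumptions here, so verify them against the screenshot URI handler in your FlightGear version:

<syntaxhighlight lang="cpp">
// Fetch one frame from FlightGear's built-in HTTP server via libcurl.
// Build with: g++ fetch_frame.cpp -lcurl
#include <curl/curl.h>
#include <cstdio>
#include <string>

// Append each received chunk to a std::string buffer.
static size_t writeCb(char* data, size_t size, size_t nmemb, void* userp)
{
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::string image;
    // Assumed endpoint/parameter; depends on the FlightGear version.
    curl_easy_setopt(curl, CURLOPT_URL,
                     "http://localhost:8080/screenshot?type=png");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeCb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &image);

    CURLcode rc = curl_easy_perform(curl);
    if (rc == CURLE_OK) {
        if (FILE* f = fopen("frame.png", "wb")) {
            fwrite(image.data(), 1, image.size(), f);
            fclose(f);
        }
    } else {
        fprintf(stderr, "fetch failed: %s\n", curl_easy_strerror(rc));
    }
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
</syntaxhighlight>

Polling this endpoint at the configured max-fps gives roughly the same pipeline as the shared-memory approach, at the cost of encoding and decoding every frame on both ends.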
