FlightGear CIGI support (Common Image Generator Interface)
|IMPORTANT: Some, and possibly most, of the features/ideas discussed below are likely to be affected, and possibly even deprecated, by the ongoing work on providing a property-tree-configurable rendering pipeline, accessible via XML, using the new Compositor system available since FlightGear 2020.3 (11/2020). The main rendering pipeline (on next, at least) is now the Compositor using ALS; the "classical renderer" and "Project Rembrandt" have been superseded. From the perspective of creating regional material definitions, you can simply develop against ALS and you will be fine.
Please see Post FlightGear 2020.2 LTS changes for further information. You are advised not to start working on anything directly related to this without first discussing/coordinating your ideas with other FlightGear contributors, using the FlightGear developers mailing list, the Effects & Shaders subforum, or the talk page.
The Common Image Generator Interface (CIGI) is an interface specification designed to establish a standard way for a host device to communicate with an image generator (IG) in the simulation industry. Its purpose is to provide interoperability between real-time Image Generator (IG) and host computational system providers by using a common method of communication. CIGI is envisioned to work in concert with the Distributed Interactive Simulation (DIS) and High Level Architecture (HLA) standards, to leverage and benefit from previous investments in these two interfaces.
CIGI is designed to assist suppliers and integrators of IG systems with ease of integration, code reuse, and overall cost reduction. IGs share common controlling attributes, and can capitalize on previous investments through the use of a common interface. CIGI and its tool suite are released as open-source, non-proprietary software under the GNU General Public License (GPL) and GNU Lesser General Public License (LGPL). CIGI software tools are available for CIGI development and testing.
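As a concrete illustration of the kind of message a CIGI host exchanges with an IG, the sketch below packs a simplified, CIGI-style entity-state packet and sends it over UDP. The field layout here is purely illustrative (it is not the exact byte layout of any CIGI revision), and the port number is an assumption; a real implementation would use the CIGI Class Library (CCL) or follow the published interface control document.

```python
import socket
import struct

# Simplified, CIGI-style packet: NOT the real CIGI wire format.
# Real CIGI packets do start with a one-byte packet ID and a one-byte
# packet size, but consult the CIGI ICD for the actual field layouts.
ENTITY_STATE_FMT = ">BBHdddfff"  # id, size, entity_id, lat, lon, alt, yaw, pitch, roll

def pack_entity_state(entity_id, lat, lon, alt, yaw, pitch, roll):
    size = struct.calcsize(ENTITY_STATE_FMT)
    return struct.pack(ENTITY_STATE_FMT, 1, size, entity_id,
                       lat, lon, alt, yaw, pitch, roll)

def unpack_entity_state(data):
    return struct.unpack(ENTITY_STATE_FMT, data)

if __name__ == "__main__":
    msg = pack_entity_state(7, 37.6213, -122.3790, 1200.0, 90.0, 2.5, 0.0)
    # A host would send such packets to the IG at its update rate (e.g. 60 Hz):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg, ("127.0.0.1", 8004))  # port 8004 is an assumption
    print(unpack_entity_state(msg))
```

The point is only that the host owns the simulation state and pushes it across the wire each frame; the IG's sole job is to render what it is told.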
HLA vs. CIGI
CIGI is all about the Image Generator/Host relationship, i.e. rendering and standalone viewers. HLA, by contrast, is a standard for distributed simulations (so-called federations), where standalone entities/processes (so-called "federates") agree on a certain vocabulary (the FOM) and a set of supported interfaces/events/messages, which are propagated through a central broker (the RTI); it is all publish/subscribe based. HLA is primarily adopted for distributed simulations; FlightGear developers are adopting it to partition the simulator into federates, so that the main loop becomes tighter. Federates would then run through a local RTI, where each federate and the RTI would be running in dedicated threads. Distributed simulations will also be relevant for replacing the multiplayer and AI traffic systems. Thus, the current fgviewer/HLA code slightly overlaps with a potential CIGI implementation, because the remote viewer capability is being built on top of HLA.
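The publish/subscribe pattern at the heart of HLA can be illustrated with a toy broker standing in for the RTI. This is only a conceptual analogy, not a real RTI, and the class and attribute names are made up:

```python
from collections import defaultdict

class ToyRTI:
    """Toy stand-in for an HLA RTI: routes published attribute
    updates to every federate subscribed to that object class."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, object_class, callback):
        self.subscribers[object_class].append(callback)

    def publish(self, object_class, attributes):
        for callback in self.subscribers[object_class]:
            callback(attributes)

if __name__ == "__main__":
    rti = ToyRTI()
    received = []
    # A viewer federate subscribes to aircraft state updates...
    rti.subscribe("Aircraft", received.append)
    # ...and an FDM federate publishes them through the broker:
    rti.publish("Aircraft", {"lat": 37.62, "lon": -122.38, "alt_ft": 1200})
    print(received)
```

In a real federation the FOM defines which object classes and attributes exist, and the RTI handles time management and distribution across processes and hosts; the decoupling of publisher from subscriber is the part this toy preserves.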
the three rendering engines built into FG are very good and, more importantly, work with the terrain model we have. If anyone asked me what the most time-consuming and difficult part of creating a new simulator would be, I'd answer creating the scenery database for the world, simply because it is such an enormous task.
FlightGear & CIGI
At the time of writing, FlightGear does not yet support CIGI. However, over the years a number of people have discussed implementing CIGI support, and more recently (05/2013) a FlightGear contributor has been looking into using CIGI via OSG. There is an outdated OSG-based MPV (multi-purpose viewer) implementation that uses CIGI (last updated in 2009, still based on osgProducer).
- OpenIG (2015)
An interesting topic popped up on the forums; I will forward it here out of curiosity: https://forum.flightgear.org/viewtopic.php?f=3&t=29307 The project's website is http://openig.compro.net/ and, unless the plugins are commercial, the rest is free and open source, though I have yet to find a license.
|"The Common Image Generator Interface (CIGI) is an interface designed to promote a standard way for a host device to communicate with an image
generator (IG) in the simulation industry."
— Norman Vine
|To understand just what CIGI (Common Image Generator Interface) is, you should take a look at the CIGI API User's Guide Overview section,
http://cigi.sourceforge.net/manual/section_1.html Most high-end simulators do not have everything running on a single machine the way FlightGear is currently implemented. The airplane model is run on one machine, normally referred to as the host, and the out-the-window visuals or scene graph program is run on another, usually referred to as an Image Generator. CIGI is the interface between the 'host' and the 'image generator'. I hope this clears things up a bit. I am currently using CIGI for a PC Image Generator at work.
— Bruce Finney
|FlightGear has been used as an image generator on an FAA Level 3 FTD certified simulator. I've seen people post questions who are also working on leveraging FG as an image generator in one way or another ... either interfacing it to an existing simulator, or trying to import the FG scenery into their existing image generation software, or trying to import their existing image generation scenery back into FG.
— Curtis Olson
|I'm new to FlightGear, and am trying to use it as an image generator for a simulator I'm developing... I've got it configured to take inputs
from a UDP port to fly, but I want to disable a lot of features so that all FlightGear does is draw scenery.
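FlightGear can already be run in a stripped-down, IG-like mode along these lines, by feeding it FDM state over UDP and disabling the subsystems it does not need. The command below is a sketch; exact option names vary between FlightGear versions, so check `fgfs --help --verbose` for your release:

```shell
# Run FlightGear as a pure scenery renderer driven by external FDM data.
# Flag names are from recent FlightGear releases; verify against your version.
fgfs --fdm=null \
     --native-fdm=socket,in,60,,5500,udp \
     --disable-panel \
     --disable-hud \
     --disable-sound \
     --disable-ai-traffic
```

The `--native-fdm` argument format is medium,direction,hz,host,port,style; with an empty host, FlightGear listens on the given UDP port for FGNetFDM packets.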
|I am interested in using it as a visualization tool for UAVs. I would like to replace the FG scenery with images captured from a camera onboard an aircraft. I was wondering if there is any way to import images into FlightGear on the fly. The basic goal would be to show live video where available and fall back to FlightGear visuals when the feed is lost (using a custom view from the camera perspective).
— STEPHEN THISTLE
|I'm hooking up a Lumenera camera for a live video feed from a UAV, so that the video gets handed to FlightGear, which then draws its HUD over the video stream. In order to do this, I need to be able to communicate with the window controls. My camera can display the video in a new window, but I want it to draw to the video screen that FlightGear is already using.
|I imagined embedding some minimal routine that talks to the camera and grabs an image frame. Then usually you can directly map this into an opengl texture if you figure out the pixel format of your frame grab and pass the right flags to the opengl texture create call. Then you should be able to draw this texture on top of any surface just like any other texture ... you could map it to a rectangular area of the screen, you could map it to a rotating cube, map it to the earth surface, etc.
That's about as far as I've gone with thinking through the problem.
— Curtis Olson
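The "figure out the pixel format" step Curtis describes is mostly channel-order and row-order conversion. A minimal sketch in plain Python, assuming the grabbed frame arrives as a top-down BGR byte buffer (as many capture APIs deliver) and that the target is an RGB texture suitable for OpenGL's glTexImage2D:

```python
def frame_to_gl_rgb(frame, width, height):
    """Convert a top-down BGR frame (bytes) into the contiguous RGB
    byte string expected by glTexImage2D, flipping the row order
    because OpenGL's texture origin is the bottom-left corner."""
    rows = []
    for y in range(height):
        row = bytearray()
        for x in range(width):
            i = (y * width + x) * 3
            b, g, r = frame[i], frame[i + 1], frame[i + 2]
            row += bytes((r, g, b))      # swap BGR -> RGB
        rows.append(bytes(row))
    return b"".join(reversed(rows))      # flip rows for OpenGL

if __name__ == "__main__":
    # 1x2 test frame: one blue pixel, one red pixel, in BGR order.
    bgr = bytes([255, 0, 0,   0, 0, 255])
    print(frame_to_gl_rgb(bgr, width=2, height=1))
```

The resulting byte string can be handed to glTexImage2D(..., GL_RGB, GL_UNSIGNED_BYTE, data) and drawn on a textured quad; in production you would do this conversion with NumPy or on the GPU rather than per-pixel in Python.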
|The issue of the non-portability of the time-critical net code in FlightGear (and therefore the lack of a stable protocol layout) is a
long-standing one; it has been discussed several times and has already prevented some projects from using FlightGear as an image generator.
— Martin Spott
|I'm using FlightGear as a means of showing what a UAV is doing, as it is a much more natural display mode than simply seeing some telemetry numbers and a map-view flight plan. Thus far, I've painted over the FlightGear graphics with a video stream taken from the plane, and a HUD which displays needed information.
|I would like to use FlightGear to generate the scene observed by a UAV's onboard camera.
Basically, this would translate to feeding FlightGear the FDM data and visualizing the image generated by FlightGear on another computer, across a network, using for example streaming video.
I suppose this is a bit of a far-fetched idea, but is there any sort of support for this (or something similar) already implemented? [...] A standard streaming video would still be nicer. If FlightGear is able to generate those JPEGs, then it may be possible to encode them to a video in real time, using third-party tools, and stream it.
— Antonio Almeida
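Parts of this already exist: older FlightGear versions could serve screen captures as JPEGs via a built-in HTTP server (the --jpg-httpd option, since removed from newer releases), and a third-party tool such as ffmpeg can re-encode such an MJPEG source into a proper stream. A rough sketch, where the port number and destination address are assumptions:

```shell
# Older FlightGear releases: serve rendered frames as JPEG over HTTP
fgfs --jpg-httpd=5400

# Re-encode the MJPEG source into an H.264 stream with ffmpeg
ffmpeg -f mjpeg -i http://localhost:5400/ \
       -c:v libx264 -preset ultrafast -tune zerolatency \
       -f mpegts udp://192.168.1.10:1234
```

The frame rate and latency of this path are poor compared to a real IG channel, but it demonstrates the pipeline the poster is asking about.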
|I don't think there's any current way to do this. However, I think what is needed is to link in some video capture library to do frame grabs from your video camera as quickly as possible. Then do whatever bit fiddling is needed to scale/convert the raster image to an opengl texture. Then draw
this texture on a quad that is aligned correctly relative to the camera. It might be possible to get fancy and alpha-blend the edges a bit.
Given an image and the location and orientation of the camera, it would be possible to locate world coordinates across a grid on that image. That would allow a quick/crude orthorectification where the image could be rubber-sheeted onto the terrain. This would take some offline processing, but you could end up building up a near-real-time 3D view of the world that could then be viewed from a variety of perspectives. The offline tools could update the master images based on resolution or currency ... that's probably a PhD project for someone, but many of the pieces are already in place and the results could be extremely nice and extremely useful (think managing the effort to fight a dynamic forest fire, or other emergency/disaster management, traffic monitoring, construction sites, city/county management & planning, etc.) I could even imagine some distributed use of this, so that if you have several UAVs out flying over an area, they could send their imagery back to a central location to update a master database ... then the individual operators could see near-real-time 3D views of places that another UAV has already overflown.
If we started building up more functionality in this area, there are a lot of different directions we could take it, all of which could be extremely cool.
— Curtis Olson
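The "quick/crude orthorectification" Curtis sketches boils down to projecting each image pixel through the camera onto the terrain. A minimal version of that projection, assuming a pinhole camera looking straight down at flat ground at z = 0 (real terrain and an arbitrary camera orientation would need a full ray/terrain intersection):

```python
import math

def pixel_to_ground(px, py, width, height, cam_x, cam_y, cam_alt, fov_deg):
    """Project image pixel (px, py) onto the z = 0 ground plane for a
    pinhole camera at (cam_x, cam_y, cam_alt) looking straight down.
    fov_deg is the camera's horizontal field of view."""
    # Focal length in pixel units, derived from the horizontal FOV
    focal = (width / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    # Pixel offsets from the image centre
    dx = px - width / 2.0
    dy = py - height / 2.0
    # Similar triangles: ground offset = altitude * (pixel offset / focal)
    return (cam_x + cam_alt * dx / focal,
            cam_y + cam_alt * dy / focal)

if __name__ == "__main__":
    # Camera 100 m above (0, 0): the image centre maps straight down.
    print(pixel_to_ground(320, 240, 640, 480, 0.0, 0.0, 100.0, 90.0))  # → (0.0, 0.0)
```

Evaluating this over a grid of pixels gives the world-coordinate mesh onto which the image can be rubber-sheeted as a texture.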
|I am looking for a method for adding a graphical overlay channel to FlightGear. This overlay would consist of a dynamic texture that can be
modified in real time. I've used other OpenGL-based systems with this feature but don't know where to start with implementing it in FlightGear.
— Noah Brickman
|I was thinking that a nice project for "someday" would be to get some video frame grabbing hardware and be able to capture video frames and convert them to opengl textures. I've seen this done in only a few lines of code once you get the byte ordering and various graphics formats issues figured out. I don't know, this might even be possible with a cheap usb web cam?
Once the frame is converted to an opengl texture, then it would be a very simple matter of displaying it on the screen with a textured rectangle drawn in immediate mode ... possibly with some level of transparency, or not ...
I'm involved in some UAV research where we are using FlightGear to render a synthetic view from the perspective of a live flying UAV. Would be really cool to superimpose the live video over the top of the FlightGear synthetic view.
— Curtis Olson
|I am working with an aero simulation that has multiple aircraft. We have never had a graphics display, only position data to analyze post-flight. I want to use FlightGear to display all the aircraft in real time.
|I need to use FG to calculate the flight and navigation, and as an image generator using a different system.
— ing.Petr Ondra