Slaving for Dummies

This article is a stub. You can help the wiki by expanding it.

Back in the early days of the project (pre-dating OSG and multi-window/multi-view support), multi-screen setups would use a master/slave configuration based on the "netfdm" hack - i.e. multiple standalone fgfs instances (usually running on different computers) slaved to a single master via networking, with state (mainly FDM properties, serialized as plain C structs) being synchronized across UDP: Property_Tree/Native_Protocol_Slaving

There are a few hard-coded protocols for sync'ing other state across multiple instances.
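For instance, the native controls protocol (--native-ctrls) follows the same pattern as --native-fdm, i.e. a fixed C struct pushed over a UDP socket, and --native-gui works analogously. A minimal sketch (port numbers are arbitrary; the empty host field means localhost and would be replaced with the slave's address in a multi-machine setup) - the master sends both streams, while the slave consumes them instead of running its own FDM:

fgfs --native-fdm=socket,out,60,,5500,udp --native-ctrls=socket,out,60,,5501,udp
fgfs --fdm=null --native-fdm=socket,in,60,,5500,udp --native-ctrls=socket,in,60,,5501,udp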

But overall, this approach is and remains a huge, ugly hack that happens to work well enough for some use cases, yet fails the very instant someone wants to sync multiple subsystems (think AI/ATC or weather/environment).

Equally, instrumentation stuff (especially hard-coded MFDs) isn't easily sync'ed:

The main problem here is lack of consistency: we've seen half a dozen glass cockpit related efforts over the years - including stuff like OpenGC (early 2000s) and FGGC (mid 2000s), and quite a few others in the meantime. At the end of the day, this always meant that we had competing, and even conflicting, technology stacks involved - where one technology (instrument/MFD) would not work within the other run-time environment. Canvas, coupled with HLA (or even just remote/telnet properties), has the potential to solve this once and for all.
— Hooray (Sun Jun 15). Re: Instruments for homecockpit panel..
I played around with showing canvas instruments in a slaved fgfs instance - and it actually worked pretty well just by using the existing telnet interface, the subscribe command and 20 lines of Nasal to fix up property path index numbers. While the telnet protocol isn't very fast, it was sufficient to show the airport selection map in a slave instance, as a cockpit texture - without any C++ changes. We once had a long discussion about necessary changes to "mount" remote property trees in a local property tree to replicate state - but the experiment showed that even a crude synchronization mechanism like the props protocol works well enough for a single instrument.

While it may not seem important right now, because most other simulator features are similarly broken or "crippled" when it comes to distributed multi-instance setups, the canvas system is the most feasible chance to address these once and for all.


— Hooray (Tue Oct 15). Re: Dynamic duplication of elements.
You should be aware of glass cockpit related efforts, especially Canvas - most airliners & jets will sooner or later benefit from being ported to Canvas, e.g. to use Gijs' NavDisplay framework, or at least Philosopher's MapStructure framework for mapping purposes.

Thus, if this is also about the actual display itself, people should be aware of related canvas efforts, especially FGCanvas.
I am involved in both the NavDisplay and MapStructure efforts, and my mid-term plan involves supporting a standalone mode for all Canvas-based glass instruments, including the ND, but also other instruments like the PFD, EICAS, CDU or EFB. This may sound like a lot of work, but it's mainly a matter of introducing a few helper classes and ensuring that people actually adopt and use those.

In the long-term, I really want to support distributed FlightGear setups like those at FSWeekend/LinuxTag, where multiple computers may be used to run a single simulator session - including properly synchronized glass instruments like the PFD/ND etc. This would also help improve the multiplayer experience, especially dual-pilot setups etc.


— Hooray (Sat Jun 07). Re: computer2cockpit.
our MP system is one of those components that will greatly benefit from being re-implemented sooner or later. Discussing this with non-developers is kinda pointless however. HLA is the right technology here, as it also handles multi-instance state synchronization/replication, i.e. for distributed setups, or even just professional multi-machine setups.

Mostly, FlightGear is an extremely inconsistent piece of software, with many features being either partially re-invented in other places or even completely incompatible. Things like the MP protocol or the native/controls protocols, but also the generic protocol system, are basically solving the same underlying problem, yet were never unified - so each has some great ideas and concepts that remain mutually incompatible.


— Hooray (Sat May 03). Re: Flightgear and vatsim.


Canvas-based MFDs can in theory be explicitly sync'ed using a generic protocol and/or a telnet connection (the latter does have support for basic "on demand" push semantics via its subscribe command).
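In practice, the generic protocol route boils down to a small protocol definition file placed in $FG_ROOT/Protocol/ and referenced from both instances via --generic. The sketch below is purely illustrative (the file name mfd-sync.xml and the choice of properties are placeholders - a real MFD would need to replicate considerably more state):

 <?xml version="1.0"?>
 <!-- $FG_ROOT/Protocol/mfd-sync.xml (hypothetical file name) -->
 <PropertyList>
  <generic>
   <output>
    <line_separator>newline</line_separator>
    <var_separator>,</var_separator>
    <chunk>
     <name>indicated-altitude-ft</name>
     <type>float</type>
     <node>/instrumentation/altimeter/indicated-altitude-ft</node>
    </chunk>
    <chunk>
     <name>heading-bug-deg</name>
     <type>float</type>
     <node>/autopilot/settings/heading-bug-deg</node>
    </chunk>
   </output>
   <input>
    <line_separator>newline</line_separator>
    <var_separator>,</var_separator>
    <chunk>
     <name>indicated-altitude-ft</name>
     <type>float</type>
     <node>/instrumentation/altimeter/indicated-altitude-ft</node>
    </chunk>
    <chunk>
     <name>heading-bug-deg</name>
     <type>float</type>
     <node>/autopilot/settings/heading-bug-deg</node>
    </chunk>
   </input>
  </generic>
 </PropertyList>

The master would then send the stream, while the slave reads it back into its own property tree:

fgfs --generic=socket,out,30,,5600,udp,mfd-sync
fgfs --generic=socket,in,30,,5600,udp,mfd-sync

Alternatively, the telnet/props server can be used for on-demand replication, roughly along the lines of the experiment quoted above (the port number is arbitrary):

fgfs --telnet=5401
telnet localhost 5401
get /instrumentation/altimeter/indicated-altitude-ft
subscribe /instrumentation/altimeter/indicated-altitude-ft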


But in reality, using a single instance with multiple views/windows tends to work better for more involved use cases, simply because much (if not most) of FG hasn't been designed with a distributed IG setup in mind: Howto:Configure_camera_view_windows
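For reference, such a single-instance multi-window configuration lives under /sim/rendering/camera-group and is typically loaded via --config. The following is only a rough, hand-written sketch (window names, screen assignments and view offsets are made up) - the linked howto documents the actual schema and available options:

 <?xml version="1.0"?>
 <!-- e.g. loaded with: fgfs --config=/path/to/two-windows.xml (hypothetical file) -->
 <PropertyList>
  <sim>
   <rendering>
    <camera-group>
     <camera>
      <window>
       <name type="string">center</name>
       <host-name type="string"></host-name>
       <display>0</display>
       <screen>0</screen>
       <width>1280</width>
       <height>1024</height>
      </window>
      <view>
       <heading-deg type="double">0</heading-deg>
       <pitch-deg type="double">0</pitch-deg>
       <roll-deg type="double">0</roll-deg>
      </view>
     </camera>
     <camera>
      <window>
       <name type="string">left</name>
       <host-name type="string"></host-name>
       <display>0</display>
       <screen>1</screen>
       <width>1280</width>
       <height>1024</height>
      </window>
      <view>
       <heading-deg type="double">-60</heading-deg>
       <pitch-deg type="double">0</pitch-deg>
       <roll-deg type="double">0</roll-deg>
      </view>
     </camera>
     <gui>
      <window>
       <name type="string">center</name>
      </window>
     </gui>
    </camera-group>
   </rendering>
  </sim>
 </PropertyList>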

Obviously, there are performance issues, and especially restrictions WRT only supporting slaved views - i.e. CompositeViewer support unfortunately still is "pie in the sky", despite being regularly brought up: CompositeViewer_Support

FG devs are currently re-inventing CIGI functionality on top of HLA (see FGViewer), so that could be a more appropriate workaround than some generic protocol hacks:

Given that CIGI support doesn't exist so far, jumping on the HLA bandwagon would seem to be the right thing for a "proper" IG-based setup, though a workaround would seem possible using existing/extended I/O means. For FlightGear and any professional users, HLA and/or CIGI would obviously seem more relevant/interesting, because there are already so many hacks in various places - which is how $FG_SRC/Networking came into existence, i.e. with tons of C structs put on the wire via UDP ...

It is worth noting, though, that the existing multi-screen/multi-window implementation unfortunately seems to be particularly prone to race conditions: Howto:Activate_multi_core_and_multi_GPU_support

Examples

First, let's start up an fgfs slave instance with its FDM disabled, so that an incoming native FDM socket can drive the instance:

fgfs --airport=KSFO --runway=28R --aircraft=ufo --native-fdm=socket,in,60,,5500,udp --fdm=null

Next, start the master and tell it to send native FDM packets to the slave (the empty host field in the socket specification means localhost; in a multi-machine setup, put the slave's hostname or IP address there):

fgfs --airport=KSFO --runway=28R --aircraft=ufo --native-fdm=socket,out,60,,5500,udp

And here's how to start up a master that is itself driven by a standalone JSBSim instance - it receives native FDM packets from JSBSim on port 5600, while still sending its own native FDM stream to the slave on port 5500:

fgfs --airport=KSFO --runway=28R --aircraft=ufo --native-fdm=socket,out,60,,5500,udp --fdm=null --native-fdm=socket,in,60,,5600,udp
Note
the primary purpose of this type of JSBSim->FG setup is visualising a non-interactive test case (FG is not sending any user inputs back to JSBSim). JSBSim also only sends a limited set of parameters to FG, so FG needs to be started with a representative aircraft (--aircraft=Short_Empire in the example) and, preferably, with an initial location near to where the scripted run takes place (--airport=SP01) to facilitate scenery fetching/loading and to make any command line time commands useful (e.g. --timeofday=morning).
— AndersG (Feb 23rd, 2016). Re: JSBSim interfacing with FlightGear.

On the JSBSim side, you'll want to add the following to the aircraft's top-level fdm_config section:

 <output name="localhost" type="FLIGHTGEAR" port="5600" rate="60" protocol="UDP"/>
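To make the placement explicit (the surrounding attribute values are placeholders, not taken from a real model): the <output> element goes directly inside the aircraft's root fdm_config element, e.g.:

 <?xml version="1.0"?>
 <fdm_config name="MyAircraft" version="2.0" release="ALPHA">
     <!-- metrics, mass_balance, propulsion, flight_control, aerodynamics, ... -->

     <!-- stream the FDM state to the FlightGear master on UDP port 5600 -->
     <output name="localhost" type="FLIGHTGEAR" port="5600" rate="60" protocol="UDP"/>
 </fdm_config>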

And then, start up JSBSim with the --realtime parameter.
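For example, a hypothetical invocation using one of the sample scripts shipped with the JSBSim source tree (the script name is only an illustration - any script of your own works the same way); with the <output> element above in place, JSBSim will then stream its FDM state to the master fgfs instance on port 5600:

JSBSim --realtime --script=scripts/c1723.xml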