Property threading
Caution Developer information:
Some, and possibly most, of the details below are likely to be affected, or even deprecated, by the ongoing work on implementing and improving HLA support in FlightGear. For recent updates, please refer to HLA Timeline.
Summary of ideas for making the property system thread-aware and thread-safe. The intention is that if the property subsystem were transparently thread-safe, many existing parts (subsystems) of the code would be thread-independent, or very close to it. This includes large portions of the simulation code: the FDM, instruments and the environment code. It potentially also includes Nasal scripts (with some additional work).
Basic concept
- A property tree per thread
- Existing functions to get/set/tie keep the same external API
- The first get/lookup of a node on a non-main thread creates that property locally, adding it to a per-thread list of properties to be synced
- Sets of a property schedule the new value to be pushed back to the main tree on sync
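The per-thread behaviour described above can be sketched roughly as follows. This is an illustrative sketch only: the names (`ThreadLocalTree`, `PendingWrite`, `takePending`) are hypothetical and not part of SimGear's actual API, and real property nodes are variant-typed rather than plain doubles.

```cpp
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// A write queued for push-back to the main tree at the next sync point.
struct PendingWrite {
    std::string path;
    double value;   // real properties are variant-typed; double keeps the sketch short
};

class ThreadLocalTree {
public:
    // First lookup on a non-main thread creates the property locally and
    // registers it in the per-thread list of properties to be synced.
    double get(const std::string& path) {
        auto it = local_.find(path);
        if (it == local_.end()) {
            it = local_.emplace(path, 0.0).first; // a real impl would copy from the main tree
            syncList_.push_back(path);
        }
        return it->second;
    }

    // Sets update the local copy and schedule the new value to be pushed
    // back to the main tree on sync.
    void set(const std::string& path, double value) {
        local_[path] = value;
        pending_.push_back({path, value});
    }

    // Drained by the sync operation; leaves the pending list empty.
    std::vector<PendingWrite> takePending() {
        std::vector<PendingWrite> out;
        out.swap(pending_);
        return out;
    }

private:
    std::unordered_map<std::string, double> local_;
    std::vector<std::string> syncList_;  // properties to pull from the main tree
    std::vector<PendingWrite> pending_;  // writes to push to the main tree
};
```

The external get/set calls look identical to the single-threaded case; all the thread awareness is hidden behind them.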
Sync Operation
Sync operations are assumed to occur at some periodic, well-defined point on each thread. The obvious example would be just prior to, or just after, SGSubsystemManager::update running for all the subsystems belonging to that thread.
The sync operation has two main components:
- Push updated values out to the main tree, firing property change listeners in the main tree at that point. There's a control-flow transfer here, but obviously we don't want to block either thread, so we need some asynchronous mechanism to contain the changed property values until the main thread accepts them.
- Pull updated values in from the main thread. This could be a complete traversal of all properties used by the thread, or a cleverer change-tracking scheme where only properties changed since the last sync of this thread are updated, depending on whether the book-keeping cost in time and code complexity is worth it. (At this point we fire change listeners belonging to the slave thread.)
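The asynchronous hand-off mentioned in the push step could be as simple as a small mailbox: the queue itself is locked only for the duration of a move, so neither thread ever blocks on the other's processing. This is a hypothetical sketch, not existing FlightGear code; `SyncMailbox`, `post` and `collect` are illustrative names.

```cpp
#include <map>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

// A set of changed properties (path -> new value), owned by one thread at a time.
using ChangeSet = std::map<std::string, double>;

class SyncMailbox {
public:
    // Worker side: publish a completed change-set; ownership moves in.
    void post(ChangeSet cs) {
        std::lock_guard<std::mutex> g(mu_);
        queued_.push_back(std::move(cs));
    }

    // Main-thread side, at its sync point: accept everything posted since
    // the last sync, then fire change listeners outside the lock.
    std::vector<ChangeSet> collect() {
        std::lock_guard<std::mutex> g(mu_);
        std::vector<ChangeSet> out;
        out.swap(queued_);
        return out;
    }

private:
    std::mutex mu_;
    std::vector<ChangeSet> queued_;
};
```

Note the lock guards only the queue of sets, never the sets' contents, which matches the single-owner rule described below.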
The key data structure is a list of changed properties (and their new values) that can be built up incrementally and safely passed between threads (and potentially merged, if the same property is updated multiple times). This updated state should only be owned by a single thread at any instant (i.e. it should not need any locking objects associated with it), but would be explicitly handed from one thread to another during a sync.
If the main thread were running slowly, at its sync point it might have many (potentially overlapping) change-sets to incorporate into the main tree, but incorporating the sets is presumed to be very fast (linear in the number of changes), and there should be a minimal number of property listeners on the main thread.
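The merging and incorporation behaviour can be sketched in a few lines, assuming the same path-to-value map as above. The function names `record` and `incorporate` are illustrative, not existing API.

```cpp
#include <map>
#include <string>
#include <vector>

// A set of changed properties (path -> new value), owned by one thread at a time.
using ChangeSet = std::map<std::string, double>;

// Recording overwrites any earlier entry for the same path, so multiple
// updates to one property merge naturally: last write wins, and the set
// never grows beyond the number of distinct properties touched.
inline void record(ChangeSet& cs, const std::string& path, double value) {
    cs[path] = value;
}

// Incorporating N queued sets into the tree is one pass over every change,
// i.e. linear in the total number of changes. Applying sets in arrival
// order means later (overlapping) sets overwrite earlier ones.
inline void incorporate(ChangeSet& tree, std::vector<ChangeSet> queued) {
    for (auto& cs : queued)
        for (auto& [path, value] : cs)
            tree[path] = value;
}
```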
Discussion
Tied properties still need some special handling: for JSBSim we can do an evil hack; otherwise they need to call fireChangeListener() or something equivalent to ensure they are pushed to other threads correctly. My preference would be to do the evil hackery for JSBSim, and get rid of tied props everywhere else.
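The reason tied properties are awkward here is that the value lives in code the property system does not own, so a direct write to the underlying variable bypasses change tracking entirely; only an explicit notification makes it visible to the sync machinery. A generic sketch of the problem (the names `TiedDouble` and `markDirty` are hypothetical, not SimGear's actual tying API):

```cpp
#include <functional>
#include <utility>

// A property "tied" to external state via a getter. The sync machinery
// cannot see direct writes to that state; the owner must flag them.
class TiedDouble {
public:
    explicit TiedDouble(std::function<double()> getter)
        : get_(std::move(getter)) {}

    // The fireChangeListener()-style notification: tells the sync
    // machinery the tied value has changed behind its back.
    void markDirty() { dirty_ = true; }

    // Called at the sync point: reads the current tied value into `out`
    // and returns true only if the property was flagged as changed.
    bool syncIfDirty(double& out) {
        if (!dirty_) return false;
        out = get_();
        dirty_ = false;
        return true;
    }

private:
    std::function<double()> get_;
    bool dirty_ = false;
};
```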
The proposed design means the latency between two slave threads might increase significantly - and the same for listeners. In practice I think this would be fine, because of how systems are or aren't coupled. The one interesting area will be Nasal, which has many listeners - it would be great to run Nasal in its own thread. In fact I expect the breakdown would be the main thread (running OSG/view/tile-manager), a simulation thread running FDM, environment and systems (and AI?) and then a Nasal thread.
Relation to HLA and multi-processing
It should be observed that in the proposed scheme, if properties were the only IPC mechanism between threads, we could trivially replace threads with processes, and the resulting situation would be very close to the HLA setup - processes publishing snapshots (the changed property lists) of their state at time values (the sync points) for other processes to receive.