FlightGear benchmark



== Objective ==
A long time ago, we had a FG-specific benchmark suite called "FGBenchmark"; over time it was no longer updated and got phased out. Meanwhile, a number of end-users and long-term contributors have been talking about re-introducing a form of scriptable benchmark, directly as part of FlightGear itself, using [[Nasal]] scripting to recreate certain situations (location, aircraft, rendering settings etc.) in order to gather runtime statistics, but also to allow for better regression testing.


Obviously, FlightGear has drastically evolved since the early days of FGBenchmark, so lots of benchmarking metrics can now be gathered without touching the C++ source code and without using any external tools or introducing platform-specific dependencies. Basically, a simple form of regression testing or benchmarking (unit tests) can now be implemented directly in FlightGear through Nasal scripting. Currently, the main technical restrictions are:


* FlightGear expects an aircraft to be selected at startup, so benchmarks can only be self-contained if they are provided as a custom set of aircraft-set.xml files, simply because [[FlightGear Sessions|we cannot yet switch aircraft at runtime]]
* The Nasal scripting interpreter is initialized pretty late because it has some hard-coded assumptions regarding available subsystems; on the other hand, it could be doing useful work if a restricted interpreter were available earlier, e.g. to help with simulator re-initialization, see [[Initializing Nasal early]]
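To illustrate the kind of runtime statistics that can already be gathered from Nasal today, here is a minimal sketch that periodically samples the frame rate from the property tree and reports an average. It assumes the standard <code>/sim/frame-rate</code> property and the <code>settimer()</code> API; the property path and sampling scheme are just an example, not a fixed benchmark design:

<syntaxhighlight lang="nasal">
# Hypothetical sketch: sample the current frame rate once per second
# and print the average after collecting 10 samples.
var samples = [];

var sample_fps = func {
    # /sim/frame-rate holds the most recent frames-per-second value
    append(samples, getprop("/sim/frame-rate"));
    if (size(samples) < 10) {
        settimer(sample_fps, 1.0);  # re-arm the timer for the next sample
    } else {
        var total = 0;
        foreach (var fps; samples)
            total += fps;
        print("average fps over ", size(samples), " samples: ", total / size(samples));
    }
}

settimer(sample_fps, 1.0);
</syntaxhighlight>

A real benchmark module would additionally set up a reproducible situation first (location, aircraft, rendering settings) before sampling, so that numbers from different users are comparable.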


Our hope is that we'll be able to come up with a simple benchmark suite to help users provide better troubleshooting reports, but also to allow developers to do largely automated regression tests, i.e. through benchmarks or scripted flights. The recent advances in deferred rendering support ([[Rembrandt]]) have also resulted in tons of GPU/GLSL related bug reports that are often hardware-specific and difficult to reproduce. Also see: [[Howto:Debugging extreme lag]].


In the long run, the corresponding data could also help us to provide more reliable [[Hardware Recommendations]].
