Software testing



Under each of these directories are topic directories.
For example, under flightgear/test_suite/unit_tests, you'll find directory names (topics) that roughly correspond to the directories under flightgear/src.
```
└── unit_tests
     ├── Add-ons
     ├── AI
     ├── Airports
     ├── Autopilot
     ├── CMakeLists.txt
     ├── FDM
     ├── general
     ├── Input
     ├── Instrumentation
     ├── Main
     ├── Navaids
     ├── Network
     └── Scripting
```
For this example we'll look at the steps taken to add the httpd tests under test_suite/Network.


Steps:
*Create two new files: test_suite/Network/test_httpd.cxx and test_httpd.hxx. Note that these filenames start with test_. That's a FlightGear standard.
*Add the appropriate license headers and boilerplate code.
*Modify two names in the boilerplate.


Add the appropriate license header at the top of each of these files. Then declare the test fixture class:

```
class HttpdTest : public CppUnit::TestFixture
{
    // Set up the test suite.
    CPPUNIT_TEST_SUITE(HttpdTest);
    CPPUNIT_TEST(testHttpd);
    CPPUNIT_TEST_SUITE_END();

public:
    // Set up function for each test.
    void setUp();

    // Clean up after each test.
    void tearDown();

    // Test.
    void testHttpd();
};
```


===Headless testing===
{{Main article|FlightGear Headless}}


For an FDM+systems test, we should run FG without a renderer (which is what the test_suite does) to benchmark the pure C++ performance of whatever system we care about (FDM or whatever). But a few hours playing with perf on Linux or Instruments on macOS will show you that OSG + the GL drivers use 80% of our CPU time, and hence Amdahl's law will always get you.<ref>https://sourceforge.net/p/flightgear/mailman/message/36977666/</ref>


===Graphics testing===
{{See also|FlightGear Benchmark}}




For example, a test would use several rc files (.[[fgfsrc]] variants) with different renderers and threading modes, including static values for:
*c172p at some parking position in a detailed airport
*camera set with a specific direction and field of view
*AW with specific METARs around (if not possible, BW with a specific METAR)
*fixed rendering settings<ref>https://sourceforge.net/p/flightgear/mailman/message/36975122/</ref>
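One of those rc variants could look roughly like the fragment below. The option names are illustrative and the METAR string is made up; check fgfs --help --verbose for the exact options your build supports:

```
# rc variant: fixed aircraft, position, weather, and view
--aircraft=c172p
--airport=EDDF
--fov=55
--disable-real-weather-fetch
--metar=EDDF 121200Z 24008KT 9999 FEW030 18/12 Q1015
```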


We can load a [[Instant Replay|replay tape]] on startup. Since the FDM and user interface are unavailable during replay, replays are suitable for testing rendering and performance.
A rendering performance test would likely do the following:


*Select some particular rendering settings (clouds, draw distance, etc.)
*Run a saved fgtape recording
*Record the mean/min/max FPS during the run and save it to a text file or copy it to the clipboard


Also, the multi-monitor setup is an area that could use additional unit testing.<ref>https://sourceforge.net/p/flightgear/mailman/message/36904782/</ref>


==Fgdata==
===Nasal scripting (comments)===
{{WIP}}


However, all scripts found and executed will be seen as a single test within the test suite. So maybe we should have a $FG_SRC/test_suite/nasal_staging/ directory for the initial development of such auto-scanned scripts, and then have someone shift them into $FG_SRC/test_suite/system_tests/, $FG_SRC/test_suite/unit_tests/, or $FG_SRC/test_suite/fgdata_tests/ later on. That would give better diagnostics and would avoid long-term clutter.<ref>https://sourceforge.net/p/flightgear/mailman/message/36991198/</ref>


*First, we should probably hard-code tests into the C++ framework. For this, the CppUnit assertion macros will have to be wrapped up as Nasal functions.
*Next, implement the scanning code, which needs some CMake magic (probably using file(COPY ...)).
*Finally, we must determine if and how to improve the Nasal debugging output.


The code could go into a subdirectory in $FG_SRC/test_suite/fgdata_tests/, and the Nasal script in $FG_SRC/test_suite/shared_data/nasal/.


==References==
{{Appendix}}


[[Category:Core development]]