Software testing

Note: There’s already the test_suite in flightgear/test_suite using CppUnit, thanks to some hard work by Edward. We need more tests written for it, submissions welcome (pick an area of interest).[1]

The FlightGear source code and data are tested by the FlightGear developers using a number of tools. This includes automated testing via unit tests in SimGear and a full test suite with multiple test categories in the flightgear repository, as well as manual in-sim testing. Writing tests is one of the best ways to jump into FlightGear development.

SimGear

The SimGear sources are checked via unit testing. This is implemented using the CMake CTest infrastructure. A number of tests are currently written against the Boost unit testing framework and tied to the build system via CTest; however, this approach should be avoided for new tests, as the FlightGear developers are moving towards eliminating Boost.
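
Conceptually, a CTest-driven test is just an executable whose exit status reports the result. Here is a minimal standalone sketch (the file and function names are illustrative, not an actual SimGear test):

// test_minimal.cxx: CTest treats a zero exit code as a pass and any
// non-zero exit code as a failure.
#include <cstdlib>

static bool testAddition()
{
    return (2 + 2) == 4;  // stand-in for a real SimGear check
}

int main()
{
    return testAddition() ? EXIT_SUCCESS : EXIT_FAILURE;
}

Such an executable would then be hooked into the build with CMake's add_test() command so that ctest can find and run it.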

Building and running the SimGear tests

$ make
$ ctest
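
CTest can also select tests by name with a regular expression and show their output; the test name below is illustrative:

$ ctest -R mathTest -V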

FlightGear

Building the test suite

To run the tests you will need to build FlightGear from source using CMake. See Building FlightGear for details.

Once you have your CMake build environment, do the following:

  1. Change to your FlightGear build directory
  2. Enable building the tests by setting a CMake variable: cmake -DBUILD_TESTING=ON .
  3. Build the test suite: make test_suite

This will also run the full test suite (see below), because you will typically want to write a test and then immediately compile and run it.
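
Putting those steps together, a typical session looks like this (the build directory path is illustrative):

$ cd ~/flightgear/build
$ cmake -DBUILD_TESTING=ON .
$ make test_suite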

Running the test suite

To run the test suite, simply run ./test_suite/fgfs_test_suite

This will run the full test suite and print a Synopsis of results similar to the following:

Synopsis
========

System/functional tests ....................................... [ OK ]
Unit tests .................................................... [ OK ]
Simgear unit tests ............................................ [ OK ]
FGData tests .................................................. [ OK ]
Synopsis ...................................................... [ OK ]


You can also run individual test cases. Run ./test_suite/fgfs_test_suite -h to see the various options.

Why write unit tests?

A well-tested piece of software will have a much lower bug count. An extensive test suite with unit tests, system/functional tests, GUI tests, installer tests, and other categories of tests can significantly help in this regard.

The benefits of not just chasing clear "wins" are great: An awesome learning experience for new developers; the ability to catch latent, unreported bugs; making it easier to refactor current code by creating a safety net; making it easier for current developers to accept new contributions (when accompanied with passing tests); helping other test writers by contributing to the common test suite infrastructure; and being able to easily check for memory leaks or other issues via Valgrind.[2]

If you are a new developer, just jump in and write any test! It does not need to catch a bug. Do whatever you wish, however you wish! Just dive into the shallow end and you'll see that the water is not cold.

A test can also act as a safety net: you write the test to pass, make your changes, and then make sure that the test still passes. Then you push both the test and the core changes.[3]
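
As a concrete sketch of that safety-net workflow (commands only; branch and commit details omitted):

$ make test_suite    # the new test passes against the unchanged code
  ... make your core changes ...
$ make test_suite    # the test still passes
$ git add . && git commit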

Benefits of unit testing

There are lots of benefits to writing tests that pass.

Benefits include:

  • Learning! New developers can learn a ton from writing a number of passing tests in the area they are interested in. This is one of the quickest ways to learn about a pre-existing and mature code base. You have zero worries about breaking things. This is diving in the shallow end.
  • Latent bug uncovering. For every ten tests you write expecting them to pass, one will probably fail, or at least uncover unexpected behavior that can be improved.
  • Refactoring. If we had 10,000 passing tests (assuming universal test coverage), large scale refactoring of the entire code base would be quick and reliable. It would enable refactoring on a scale currently unimaginable. I cannot emphasize enough how much of a benefit this would be.
  • Developer turnover. Again, if we had 10,000 passing tests (assuming universal test coverage), it would encourage new developers, because the fear of breaking something is removed. It is a total safety net. It also would give existing developers peace of mind when a new developer is touching one of the dark parts of FlightGear that no current developer understands (there are plenty of those).
  • Test suite infrastructure. The more passing tests written, the better the test suite infrastructure will become. We can already do a lot. But the addition of more passing tests will help other test writers.
  • Memory checking. Running a single test through Valgrind is amazing, whereas running all of FlightGear through Valgrind is close to impossible. Passing tests can be written to catch memory leaks (see the example after this list)!
  • Low code quality bar. This is related to the learning point: as long as a test compiles on all OSes without warnings, it passes, and Valgrind gives it an OK, it is good enough. You don't need to be a C++ expert to dive into this shallow end of the pool.
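
For the memory checking point above, a Valgrind run over the test suite binary could look like this (standard Valgrind options; individual tests can be selected via the suite's own options shown by -h):

$ valgrind --leak-check=full ./test_suite/fgfs_test_suite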

Bootstrapping completely new tests

To start diving straight into the test suite code, first copy what has been done in this commit: edauvergne/flightgear/8474df


Just modify all the names for a JSBSim test (or any other test fixture you want to code). You should then be able to compile and check that your new testDummy() test passes as expected. You can then slowly build up from this basic infrastructure as you learn the FlightGear internals, C++, and Git skills required for implementing your test on your fork's new development branch :)
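
For orientation, a minimal CppUnit fixture looks roughly like this (all names here are placeholders; the referenced commit shows the exact FlightGear registration and build wiring):

// A hypothetical minimal test fixture with a single dummy test.
#include <cppunit/extensions/HelperMacros.h>

class DummyTests : public CppUnit::TestFixture
{
    // Declare the suite and register its single test.
    CPPUNIT_TEST_SUITE(DummyTests);
    CPPUNIT_TEST(testDummy);
    CPPUNIT_TEST_SUITE_END();

public:
    void setUp() {}       // per-test initialisation
    void tearDown() {}    // per-test cleanup

    // A trivial test that should always pass.
    void testDummy() { CPPUNIT_ASSERT_EQUAL(4, 2 + 2); }
};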

Headless testing

Fgdata

Nasal scripting (comments)

Work in progress: This article or section will be worked on in the upcoming hours or days. See history for the latest developments.

The now built-in CppUnit framework can solve all the issues identified in the old Nasal Unit Testing Framework wiki article and the discussions it points to, and can provide the full framework required.[4]

We have route-manager tests which validate that route_manager.nas is working correctly, and we have Canvas tests which poke the Nasal API.[5]

We need more FGData testing via the test suite.

Adding the CppUnit assertions to Nasal is the task James has been suggesting he would take on, so that others can write tests in pure Nasal.

The other piece is some C++ code to scan a directory for files matching a pattern, e.g. test_XYZ.nas, and to run each of those automatically.


The idea for testing Nasal would be that you write a small CppUnit interface in C++ in $FG_SRC/test_suite/*_tests/ (the FGData Nasal testing would be in an fgdata_tests/ directory). This would register each test, with each test pointing to a script in $FG_SRC/test_suite/shared_data/nasal/; the setUp() and tearDown() functions would use helper functions in the fgtest namespace to start and stop Nasal. The Nasal scripts could then call the CppUnit assertion macros, wrapped up as Nasal functions, to communicate failures and errors to the test suite.[6]
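
A hedged sketch of such a fixture follows. The fgtest helper calls are placeholders for the proposed infrastructure described above, not existing APIs, so they are left as comments:

#include <cppunit/extensions/HelperMacros.h>

class NasalDemoTests : public CppUnit::TestFixture
{
    CPPUNIT_TEST_SUITE(NasalDemoTests);
    CPPUNIT_TEST(testDemoScript);
    CPPUNIT_TEST_SUITE_END();

public:
    void setUp()    { /* an fgtest helper would start Nasal here */ }
    void tearDown() { /* an fgtest helper would stop Nasal here */ }

    // Would run a script from $FG_SRC/test_suite/shared_data/nasal/; the
    // script reports failures through CppUnit assertions wrapped as Nasal
    // functions.
    void testDemoScript()
    {
        /* e.g. fgtest::runNasalScript("test_demo.nas"); */
    }
};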

Scanning for scripts is a great idea. Then developers (core and content) could write tests in pure Nasal.

However, all scripts found and executed would be seen as a single test within the test suite. So maybe we should have a $FG_SRC/test_suite/nasal_staging/ directory for the initial development of such auto-scanned scripts, and then have someone shift them into $FG_SRC/test_suite/system_tests/, $FG_SRC/test_suite/unit_tests/, or $FG_SRC/test_suite/fgdata_tests/ later on. That would give better diagnostics and would avoid long-term clutter.[7]

  • First, we should probably hard-code tests into the C++ framework. The CppUnit assertion macros will have to be wrapped up as Nasal functions for this.
  • Then, implement the scanning code, as we need some CMake magic (probably using file(COPY ...)).
  • Finally, work out if and how we need to improve the Nasal debugging output.

The code could go into a subdirectory of $FG_SRC/test_suite/fgdata_tests/, and the Nasal script into $FG_SRC/test_suite/shared_data/nasal/.

References