Software testing

Note  There is already a test suite in flightgear/test_suite, using CppUnit, thanks to some hard work by Edward. We need more tests written for it, and submissions are welcome (pick an area of interest).[1]

FlightGear has a small (but hopefully growing!) set of unit tests. These are short tests designed to quickly exercise small pieces of code, in contrast to the more broadly scoped functional and system tests.

Building the test suite

To run the tests you will need to build FlightGear from source using CMake. See Building FlightGear for details.

Once you have your CMake build environment, do the following:

  1. Change to your FlightGear build directory.
  2. Enable building the tests by setting a CMake variable: cmake -DBUILD_TESTING=ON .
  3. Build the test suite: make test_suite

Building this target will also run the full test suite (see below), since you will typically want to write a test and then immediately compile and run it.

Running the test suite

To run the test suite, simply run ./test_suite/fgfs_test_suite

This will run the full test suite and print a Synopsis of results similar to the following:

Synopsis
========

System/functional tests ....................................... [ OK ]
Unit tests .................................................... [ OK ]
Simgear unit tests ............................................ [ OK ]
FGData tests .................................................. [ OK ]
Synopsis ...................................................... [ OK ]


You can also run individual test cases; run ./test_suite/fgfs_test_suite -h to see the available options.

Why write unit tests?

A well-tested piece of software will have a much lower bug count. An extensive test suite with unit tests, system/functional tests, GUI tests, installer tests, and other categories of tests helps significantly in this regard.

The benefits go well beyond chasing clear "wins": a great learning experience for new developers; the ability to catch latent, unreported bugs; a safety net that makes it easier to refactor the current code; making it easier for current developers to accept new contributions (when accompanied by passing tests); improvements to the common test suite infrastructure that help other test writers; and the ability to easily check for memory leaks and other issues via Valgrind.[2]

If you are a new developer, just jump in and write any test! It does not need to catch a bug. Do whatever you wish, however you wish! Just dive into this shallow end and you'll see that the water is not cold.

A test can also serve as a safety net: write the test to pass, make your changes, then make sure that the test still passes. Then push both the test and the core changes.[3]
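
In practice, that workflow can look roughly like the following sketch, assuming the CMake build directory described above and a Git checkout of the sources:

  make test_suite    # build and run the suite; confirm the new test passes
  # ... make your core changes ...
  make test_suite    # confirm the test still passes
  git commit -a      # commit the test together with the core changes
  git push           # push everything to your fork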

Benefits of unit testing

There are many benefits to writing tests that pass, including:

  • Learning! New developers can learn a ton from writing a number of passing tests in an area they are interested in. This is one of the quickest ways to learn about a mature, pre-existing code base, with zero worries about breaking things. This is diving in at the shallow end.
  • Latent bug uncovering. For every ten tests you write expecting them to pass, one will probably fail, or at least uncover unexpected behavior that can be improved.
  • Refactoring. If we had 10,000 passing tests (assuming universal test coverage), large-scale refactoring of the entire code base would be quick and reliable. It would enable refactoring on a scale currently unimaginable. I cannot emphasize enough how much of a benefit this would be.
  • Developer turnover. Again, if we had 10,000 passing tests (assuming universal test coverage), it would encourage new developers, because the fear of breaking something is removed. It is a total safety net. It would also give existing developers peace of mind when a new developer is touching one of the dark parts of FlightGear that no current developer understands (there are plenty of those).
  • Test suite infrastructure. The more passing tests are written, the better the test suite infrastructure will become. We can already do a lot, but the addition of more passing tests will help other test writers.
  • Memory checking. Running a single test through Valgrind is amazing, while running all of FlightGear through Valgrind is close to impossible. Passing tests can be written to catch memory leaks (see the example after this list)!
  • Low barrier to entry. This is related to the learning point: as long as a test compiles on all OSes without warnings, passes, and Valgrind gives it an OK, it is good enough. You don't need to be a C++ expert to dive into this shallow end of the pool.
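
For example, a single test can be checked for memory leaks by running it under Valgrind. This is only a sketch: --leak-check=full is a standard Valgrind option, but see the suite's -h output for how to actually select an individual test; the trailing test name below is a placeholder.

  valgrind --leak-check=full ./test_suite/fgfs_test_suite <TestName>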

Bootstrapping completely new tests

To start diving straight into the test suite code, first copy what has been done in this commit: https://sourceforge.net/u/edauvergne/flightgear/ci/8474df

Just modify all the names for a JSBSim test (or any other test fixture you want to code). You should then be able to compile and check that your new testDummy() test passes as expected. You can then slowly build up from this basic infrastructure as you learn the fgfs internals, C++, and Git skills required for implementing your test on your fork's new development branch :)
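
For orientation, a minimal CppUnit test fixture looks roughly like the following. This is only a sketch: the DummyTests class name is a placeholder, and the real file layout, registration, and naming conventions should be copied from the commit linked above.

  #include <cppunit/TestFixture.h>
  #include <cppunit/extensions/HelperMacros.h>

  // Placeholder fixture; copy the real layout and registration
  // from the commit linked above.
  class DummyTests : public CppUnit::TestFixture
  {
      // Declare the suite and the tests it contains.
      CPPUNIT_TEST_SUITE(DummyTests);
      CPPUNIT_TEST(testDummy);
      CPPUNIT_TEST_SUITE_END();

  public:
      // Per-test set up and clean up hooks.
      void setUp() {}
      void tearDown() {}

      // A trivial test that should always pass.
      void testDummy() { CPPUNIT_ASSERT(true); }
  };

Once the names have been adapted, rebuild with make test_suite and the new test should show up in the unit test results.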

References