Nasal unit tests
Nasal unit tests are a way to verify that Nasal functions return the expected values.
By writing unit tests for functions before you implement them, you can verify that they behave as intended by comparing expected and actual return values. Writing the tests first often clarifies what the function should do and helps you define its expected behaviour before you start coding.
Unit tests also help you detect bugs introduced when you extend functionality, fix issues, refactor, or otherwise maintain your Nasal code.
Later on, they also help others ensure they don't accidentally break your code - in other words, introduce regressions - when they continue maintaining it.
Overview
FlightGear has two frameworks for unit testing Nasal code. Most of the time you will want to use the standalone framework, but the in-sim TestSuite framework is still useful when you need to test things that interact with the live property tree.
The standalone framework was written by James Turner in 2020. It is built into FlightGear and provides a unitTest namespace with assertion functions. Test files use the .nut file extension. This is the recommended framework for new tests.
The in-sim TestSuite framework was written by Anton Gomez Alvedro in 2014. It is written entirely in Nasal and uses an object-oriented TestSuite pattern with .nas files. It is still used by the FailureMgr tests and is useful when you need to test things that interact with the live property tree inside a running simulator.
How the standalone framework works
The unitTest namespace is automatically available in Nasal. It provides assertion functions similar to those in CppUnit.
There are two underlying C++ implementations of the same Nasal testing API. When you run fgfs normally, the in‑sim version is used and test results appear in the console. When the test suite (fgfs_test_suite) runs on CI/Jenkins, a CppUnit-backed implementation is used instead, allowing failures to be reported as CppUnit assertions.
This difference is handled entirely by FlightGear’s internals; your Nasal test code works the same in both environments.
Unit test files
Test files are usually placed in the same directory as the Nasal modules they cover, and there is typically a one‑to‑one relationship between them: each module has a corresponding test file, and each function has its own test function.
The test files usually have the prefix test_ and then the same name as the Nasal module to be tested. They are regular Nasal files, but have the file suffix .nut (for Nasal unit test). For example test_math.nut is the test file for math.nas.
Any top-level var whose name starts with test_ and is a function will be automatically discovered and executed by the framework. You do not need to register your tests anywhere. Each test file runs in its own isolated namespace (prefixed with _test_), so your tests will not interfere with other code.
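For example, in the sketch below (the function names and bodies are made up for illustration), only the first function would be discovered and run:

```nasal
# Discovered and run automatically (name starts with test_):
var test_addition = func {
    unitTest.assert_equal(4, 2 + 2, "Addition");
};

# Ignored by the test runner (no test_ prefix), so usable as a helper:
var make_fixture = func {
    return { value: 42 };
};
```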
Optional header comment block
Test files often begin with a comment block containing a short description of the file, followed by a commented line showing how to run the test, for example:
#-------------------------------------------------------------------------------
# Test file for multiplayer time synchronization
# File: test_mp_time_sync.nut
# Author: Iam Cardholder
# Created: 2026-03-14
# Licence: GPLv2 or later
#-------------------------------------------------------------------------------
# fgcommand("nasal-test", props.Node.new({"path":"test_mp_time_sync.nut"}));
Including the fgcommand run command as a comment at the top is a useful convention, as it makes it easy to copy and paste into the Nasal Console later.
Optional setUp and tearDown functions
Optionally the unit test file can have a setUp() and a tearDown() function. These are called before and after each test function. They are useful, for example, when a module has dependencies on other modules.
Most of the time these functions are used for printing a message to the console and/or log noting that the test run has started or finished.
# Optional setup function
var setUp = func {
logprint(LOG_INFO, "mp_time_sync tests begin");
};
# Optional tear-down function
var tearDown = func {
logprint(LOG_INFO, "mp_time_sync tests finished");
};
If you do not need setup or tear-down, you can simply leave them out.
Test functions and test cases
The names of the test functions, like the names of the test files, usually begin with test_ and match the names of the functions being tested.
The test functions set up variables etc, and then have one or more test cases using the assertion functions described below.
If a test function has a failing assertion, execution of that test function stops immediately. The framework will still call tearDown (if you have one) before moving on to the next test function.
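As a sketch of that behaviour, with deliberately trivial assertions:

```nasal
var test_stops_at_first_failure = func {
    unitTest.assert(0, "This assertion fails");    # execution stops here
    unitTest.assert(1, "This is never evaluated"); # not reached
};
```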
You can also define helper functions in the same file. As long as their names do not start with test_, the framework will ignore them and not try to run them as tests.
unitTest.assert()
unitTest.assert(bool[, message])
Asserts that a boolean is true. If it is false, the test fails with the optional message, the file path, the line number, and the runtime error "Test assert failed".
- bool
- A boolean. Usually the return from the function to test. Can be an expression.
- message
- An optional string with a message to show with the file path and line number.
Example
Checking basic conditions
unitTest.assert(1 == 1, "Math equality");
unitTest.assert(myValue > 0, "Value must be positive");
unitTest.fail()
unitTest.fail([message])
Fails the test with the optional message, the file path, the line number, and the runtime error "Test failed".
- message
- An optional string with a message to show with the file path and line number.
Example
Forcing a test failure
unitTest.fail("This feature is not yet implemented");
unitTest.assert_equal()
unitTest.assert_equal(a, b[, message])
Compares a and b. If they are not equal, the test fails with the file path, the line number, and the message if given, otherwise the runtime error "assert_equal failed". Works with strings, numbers, vectors, and hashes.
- a
- A value. Usually the expected value to be returned. Can be an expression.
- b
- A value. Usually the actual value returned from the function to be tested. Can be an expression.
- message
- An optional string with a message to show with the file path and line number. Will also be the runtime error. Defaults to "assert_equal failed".
Example
Comparing values
unitTest.assert_equal("apples", result, "String comparison");
unitTest.assert_equal(x.vector, ["a", "b", "c"], "Vector contents");
unitTest.assert_doubles_equal()
unitTest.assert_doubles_equal(a, b, tolerance[, message])
Compares two doubles. If the difference is greater than or equal to the tolerance, the test fails with the file path, the line number, and the message if given, otherwise the runtime error "assert_doubles_equal failed".
- a
- A double. Usually the expected value to be returned. Can be an expression.
- b
- A double. Usually the actual value returned from the function to be tested. Can be an expression.
- tolerance
- A double with the tolerance.
- message
- An optional string with a message to show with the file path and line number. Will also be the runtime error. Defaults to "assert_doubles_equal failed".
Example
Comparing floating point values with tolerance
unitTest.assert_doubles_equal(3.141, 3.14, 0.1, "Pi-ish");
unitTest.equal()
unitTest.equal(a, b)
Compares two values for structural equality. Returns 1 if they are equal, 0 if not. Unlike the other functions, this one does not fail the test; it is a query function that you can use for conditional logic within your tests.
- a
- A value.
- b
- A value.
Example
Using equal() for conditional checks
if (unitTest.equal(result, expected)) {
# do something
}
Writing a test file
A minimal test file looks something like this:
# fgcommand("nasal-test", props.Node.new({"path":"my_test.nut"}));
# Called before each test function (optional)
var setUp = func {
# initialisation code
};
# Called after each test function (optional)
var tearDown = func {
# cleanup code
};
var test_basic_assertion = func {
unitTest.assert(1 == 1, "Math equality");
unitTest.assert(1 < 2, "Math less than");
};
var test_equality = func {
var result = "ap" ~ "ples";
unitTest.assert_equal("apples", result, "String concatenation");
};
var test_doubles = func {
var pi = 3.14159;
unitTest.assert_doubles_equal(pi, 3.14, 0.01, "Pi approximation");
};
Running the tests
The tests are run from the Nasal Console with the following FGCommands:
# Run a single test file (relative path searches in $FG_ROOT/Nasal/)
fgcommand("nasal-test", props.Node.new({"path":"test_math.nut"}));
# Run all .nut files in a directory
fgcommand("nasal-test-dir", props.Node.new({"path":"/full/path/to/directory"}));
If the path is relative, FlightGear will search for the file in $FG_ROOT/Nasal/.
There is also a simple GUI dialog available under Development > nasal-test in the menu bar. It lets you type in the name of a .nut file and run it with a button click, without having to type the full fgcommand in the Nasal console.
What happens when you run a test
When you run a test file, the framework does the following:
- It parses the .nut file.
- It creates an isolated namespace for the test (named _test_<filepath>).
- It executes the file's top-level code, which defines your variables and functions.
- It looks through all the members of the namespace for functions whose names start with test_.
- For each test function it finds:
  - It creates a fresh test context.
  - It calls your setUp() function, if you have one.
  - It calls the test function.
  - It logs whether the test passed or failed, along with the source file name and line number.
  - It calls your tearDown() function, if you have one.
  - It resets the test context and moves on to the next test.
Note that the order in which the test functions run is not guaranteed. Each test should be independent and not rely on other tests having run before it.
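As a hypothetical illustration (MyThing is a made-up module), avoid sharing state between tests:

```nasal
# Fragile: test_use only passes if test_create happened to run first
var shared = nil;
var test_create = func { shared = MyThing.new(); };
var test_use = func { unitTest.assert(shared != nil, "shared exists"); };

# Robust: each test builds the state it needs itself
var test_use_independent = func {
    var thing = MyThing.new();
    unitTest.assert(thing != nil, "construction succeeded");
};
```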
Worked example: test_emesary.nut
This section walks through [GitLab]/flightgear/fgdata/next/Nasal/test_emesary.nut to show how a real test file is put together.
The run command
As described above, the run command is included as a comment at the top of the file:
# fgcommand("nasal-test", props.Node.new({"path":"test_emesary.nut"}));
setUp and tearDown
The file has simple setUp and tearDown functions that log when the tests begin and finish:
var setUp = func {
logprint(LOG_INFO, "Emesary tests begin");
};
var tearDown = func {
logprint(LOG_INFO, "Emesary tests finished");
};
These are called before and after each of the three test functions in the file.
The test functions
The file has three test functions. Each one is automatically discovered and run by the framework:
- test_emesary_transmit_receive tests the core Emesary message-passing system, including registering and deregistering recipients, and sending notifications.
- test_transfer tests the binary and ASCII encoding utilities used for multiplayer data transfer.
- test_mp_bridge tests the multiplayer bridge from end to end.
Using a helper function
test_emesary_transmit_receive has a lot of similar checks, so it defines a helper function called PerformTest to avoid repeating the same assertion pattern over and over. Because the name does not start with test_, the framework will not try to run it as a test:
var PerformTest = func(tid, expected_value, method) {
var testResult = method();
    unitTest.assert_equal(expected_value, testResult, tid);
};
Each check then becomes a single call. For example, this one registers a recipient on the global transmitter and checks that the recipient count went up by one:
PerformTest("Register tt", 1 + baseRecipientCount, func {
emesary.GlobalTransmitter.Register(tt);
return emesary.GlobalTransmitter.RecipientCount();
});
If the count does not match, the test will fail with the label "Register tt", so you can easily see which check failed.
Using assertions in loops
test_transfer uses assertions directly inside loops to verify that encoding and decoding round-trips work correctly across a range of values:
# Test float normalisation encode/decode with tolerance
for (i = -1; i <= 1; i += 0.1) {
var dv = emesary.TransferNorm.encode(i, 2);
var v = emesary.TransferNorm.decode(dv, 2, 0);
var delta = math.abs(i - v.value);
unitTest.assert(delta <= 0.01, sprintf("Norm: Fail: %f => %f : d=%f", i, v.value, delta));
}
# Test byte encode/decode
for (i = -124; i < 124; i += 1) {
var dv = emesary.TransferByte.encode(i);
var v = emesary.TransferByte.decode(dv, 0);
unitTest.assert(i == v.value, sprintf("Byte: fail: %d => %d", i, v.value));
}
# Test string encode/decode
var dv = emesary.TransferString.encode("hello");
var nv = emesary.TransferString.decode(dv, 0);
unitTest.assert_equal("hello", nv.value, "emesary.TransferString");
Notice how sprintf() is used in the assertion messages. This is a good habit, as it includes the actual values in the failure message, making it much easier to figure out what went wrong.
Using assert_equal for exact comparisons
test_transfer also shows how to use assert_equal when you want to check that two values are exactly the same:
var v = emesary.BinaryAsciiTransfer.encodeNumeric(123, 1, 1.0);
var dv = emesary.BinaryAsciiTransfer.decodeNumeric(v, 1, 1.0, pos);
unitTest.assert_equal(dv.value, 123, "BinaryAsciiTransfer.encodeNumeric");
Things to take away
- Include the fgcommand run command as a comment at the top of your file for easy copy-paste.
- Give your helper functions names that do not start with test_ so the framework ignores them.
- Use sprintf() in your assertion messages to include actual values. This makes failures much easier to diagnose.
- Each test_ function should set up its own state and not depend on other tests having run first.
Existing test files
The following test files ship with $FG_ROOT/Nasal/:
| File | What it tests | Author |
|---|---|---|
| [GitLab]/flightgear/fgdata/next/Nasal/test_math.nut | Basic assertion examples (equality, doubles comparison) | - |
| [GitLab]/flightgear/fgdata/next/Nasal/test_emesary.nut | Emesary transmitter/receiver, MP bridge, binary encoding | Richard Harrison |
| [GitLab]/flightgear/fgdata/next/Nasal/test_frame_utils.nut | PartitionProcessor | Richard Harrison |
| [GitLab]/flightgear/fgdata/next/Nasal/std.nut | std.Hash, std.Vector, std.String, std.stoul | Henning Stahlke |
| [GitLab]/flightgear/fgdata/next/Nasal/props.nut | props.Node methods (add, sub, isValidPropName, makeValidPropName) | Henning Stahlke |
| [GitLab]/flightgear/fgdata/next/Nasal/canvas/api/svgcanvas.nut | SVG canvas API | - |
These are good to look at when you are writing your own tests, as they show different ways of using the framework.
In-sim testing (TestSuite framework)
The TestSuite framework was written in 2014 by Anton Gomez Alvedro. It is a pure-Nasal testing system with no C++ integration. It is still used by the FailureMgr tests.
Why you might want to use it
Because the tests run inside the simulator, they can interact with the live property tree, use setprop() and getprop(), use setlistener(), and exercise code paths that only work inside a running fgfs session.
This makes it useful for testing subsystems that depend on the full simulator environment, for example property tree listeners, failure triggers, or module interactions that need the FlightGear runtime to be up and running.
The trade-off is that you have to run these tests manually from the Nasal Console while the simulator is running. They are not integrated with CI.
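For example, a listener round-trip along these lines (the /test/flag property is made up for illustration) only works inside a running simulator, because it needs the live property tree:

```nasal
# Verify that writing a property fires a listener exactly once
var fired = 0;
var l = setlistener("/test/flag", func { fired += 1; });
setprop("/test/flag", 1);    # value change triggers the listener
removelistener(l);           # clean up so later tests are unaffected
assert(fired == 1, "listener should have fired exactly once");
```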
Writing a test suite
Unlike the standalone framework where tests are top-level functions, the in-sim TestSuite framework uses Nasal's object system. You create a test suite object that inherits from TestSuite, and you put your test methods inside it.
Each test file includes the framework and any modules it depends on using io.include():
io.include("Aircraft/Generic/Systems/Tests/test.nas");
io.include("Aircraft/Generic/Systems/failures.nas");
var MyTestSuite = {
parents: [TestSuite],
# Called before each test (optional)
setup: func {
props.globals.initNode("/test");
},
# Called after each test (optional)
cleanup: func {
me.trigger = nil;
props.globals.getNode("/test").remove();
},
# Helper methods (not run as tests because no "test_" prefix)
_my_helper: func {
# ...
},
# Test cases (must start with "test_")
test_something: func {
assert(1 == 1, "basic check");
},
test_prop_exists: func {
assert_prop_exists("/sim/version/flightgear");
},
};
A few things to note:
- The parents: [TestSuite] line is required. The run_tests() function uses isa() to find test suites, so without this line your tests will not be discovered.
- The lifecycle methods are called setup() and cleanup() (lowercase), not setUp and tearDown like in the standalone framework. They are called before and after each test method.
- Methods that do not start with test_ are ignored by the test runner, so you can use them as helpers. A common convention is to prefix them with an underscore.
- Tests use Nasal's built-in assert(), not unitTest.assert(). A failing assertion calls die(), which is caught by the framework.
Testing with the live property tree
The FailureMgr tests are a good example of why you might want to test inside the simulator. They create temporary properties, manipulate them to simulate things like gear cycles or altitude changes, and then check that the triggers and failure modes respond correctly.
For example, TestCycleCounter creates a temporary property under /test/, oscillates its value to simulate gear up/down cycles, and checks that the cycle counter tracks them:
var TestCycleCounter = {
parents: [TestSuite],
setup: func {
props.globals.initNode("/test");
},
cleanup: func {
props.globals.getNode("/test").remove();
me.counter = nil;
},
# Helper: simulate property oscillation (e.g. gear up/down cycles)
_shake_that_prop: func (pattern=nil) {
if (pattern == nil)
pattern = [0, -10, 10, -10, 10, -10, 10, 0];
setprop("/test/property", pattern[0]);
me.counter.reset();
var i = 0;
var value = pattern[0];
while (i < size(pattern) - 1) {
var target = pattern[i+1];
var delta = target > pattern[i] ? 1 : -1;
while (value != target) {
value += delta;
setprop("/test/property", value);
}
i += 1;
}
},
test_cycles_dont_grow_while_disabled: func {
me.counter = CycleCounter.new("/test/property");
me._shake_that_prop();
assert(me.counter.cycles == 0);
},
test_cycles_grow_while_enabled: func {
me.counter = CycleCounter.new("/test/property");
me._shake_that_prop();
assert(me.counter.cycles == 0);
me.counter.enable();
me._shake_that_prop();
assert(me.counter.cycles == 3);
},
};
Notice how the cleanup() method removes the /test/ property node after each test. This is important so that tests do not interfere with each other or leave debris in the property tree.
TestAltitudeTrigger is another good example. It tests that binding a trigger creates the expected properties, and that unbinding removes them without touching unrelated properties:
test_binding: func {
setprop("/test/foreign-property", 25);
me.trigger = AltitudeTrigger.new(100, 200);
me.trigger.bind("/test/");
# Check that the binding created the expected properties
assert_prop_exists("/test/reset");
assert_prop_exists("/test/min-altitude-ft");
me.trigger.unbind();
# Check that the unbinding removed them
fail_if_prop_exists("/test/reset");
fail_if_prop_exists("/test/min-altitude-ft");
# Check that unrelated properties were not touched
assert_prop_exists("/test/foreign-property");
},
TestFailureMode tests that setting a failure level from Nasal code is reflected in the property tree, and vice versa. This kind of two-way synchronisation test is only possible when running inside the simulator with the live property tree:
test_setting_level_from_nasal_is_shown_in_prop: func {
# ... (actuator and mode setup) ...
me.mode.set_failure_level(1);
assert(level == 1);
var prop_value = getprop("/test/instruments/compass/failure-level");
assert(prop_value == 1);
me.mode.set_failure_level(0.5);
assert(level == 0.5);
prop_value = getprop("/test/instruments/compass/failure-level");
assert(prop_value == 0.5);
},
Aggregating test suites
If you have several test files that belong together, you can create an aggregation file that includes them all using io.include(). The FailureMgr tests do this with [GitLab]/flightgear/fgdata/next/Aircraft/Generic/Systems/Tests/FailureMgr/test_all.nas:
# Aggregation of all tests for the Failure Manager
io.include("Aircraft/Generic/Systems/Tests/FailureMgr/test_cycle_counter.nas");
io.include("Aircraft/Generic/Systems/Tests/FailureMgr/test_altitude_trigger.nas");
io.include("Aircraft/Generic/Systems/Tests/FailureMgr/test_mcbf_trigger.nas");
io.include("Aircraft/Generic/Systems/Tests/FailureMgr/test_mtbf_trigger.nas");
io.include("Aircraft/Generic/Systems/Tests/FailureMgr/test_failure_mode.nas");
This allows you to run all the related test suites with a single run_tests() call.
Running In-sim tests
In-sim tests are run manually from the Nasal Console while the simulator is running. There is no automatic loading mechanism or menu item for these tests.
To run a single test file, you do the following three steps:
delete(globals, "test");
io.load_nasal("Aircraft/Generic/Systems/Tests/FailureMgr/test_cycle_counter.nas", "test");
test.run_tests();
What this does is:
- delete(globals, "test") clears the test namespace to remove any previous test run.
- io.load_nasal(..., "test") loads the test file into the test namespace. The framework (test.nas) is loaded automatically because the test file includes it via io.include().
- test.run_tests() finds and executes all TestSuite objects in the namespace.
If you want to run all the FailureMgr tests at once, you can load the aggregation file instead:
delete(globals, "test");
io.load_nasal("Aircraft/Generic/Systems/Tests/FailureMgr/test_all.nas", "test");
test.run_tests();
The output will appear in the console and looks something like:
Running test suite TestCycleCounter
Running test suite TestAltitudeTrigger
...
25 tests run. 25 passed, 0 failed
TestSuite helper functions
The framework provides a few helper functions in addition to the TestSuite base object and run_tests():
| Function | What it does |
|---|---|
| run_tests(namespace) | Finds and executes all test suites in the given namespace. If no namespace is given, it uses the namespace where run_tests is defined. |
| assert_prop_exists(prop_path) | Fails if the property node does not exist in the property tree. |
| fail_if_prop_exists(prop_path) | Fails if the property node exists in the property tree. |
Existing in-sim test files
All in-sim test files are in $FG_ROOT/Aircraft/Generic/Systems/Tests/FailureMgr/:
- test_all.nas (aggregation of all the suites below)
- test_cycle_counter.nas
- test_altitude_trigger.nas
- test_mcbf_trigger.nas
- test_mtbf_trigger.nas
- test_failure_mode.nas
Differences from the standalone framework
If you are trying to decide which framework to use, or are reading test files and want to understand what you are looking at, here is a summary of the differences:
| Aspect | Standalone (unitTest) | In-sim (TestSuite) |
|---|---|---|
| File extension | .nut | .nas |
| Lifecycle methods | setUp / tearDown | setup / cleanup |
| Test structure | Top-level var functions | Methods on an object inheriting from TestSuite |
| Assertions | unitTest.assert(), unitTest.assert_equal(), etc. | Nasal built-in assert() or die() |
| Property tree helpers | - | assert_prop_exists(), fail_if_prop_exists() |
| C++ / CI integration | Yes, built into FlightGear with CppUnit support | No, pure Nasal |
| How to run | fgcommand("nasal-test", ...) | run_tests() from the Nasal Console |
| Best used for | Unit testing Nasal functions | In-sim testing of subsystems that interact with the property tree |
See also
Wiki articles
- Nasal scripting language
- Nasal Console
- Emesary
- Software testing
- Nasal Unit Testing Framework (An older effort.)
- How to write tests for FlightGear
Examples (standalone framework)
- [GitLab]/flightgear/fgdata/next/Nasal/test_math.nut
- [GitLab]/flightgear/fgdata/next/Nasal/std.nut
- [GitLab]/flightgear/fgdata/next/Nasal/props.nut
- [GitLab]/flightgear/fgdata/next/Nasal/test_emesary.nut
- [GitLab]/flightgear/fgdata/next/Nasal/test_frame_utils.nut
Examples (in-sim framework)
- [GitLab]/flightgear/fgdata/next/Aircraft/Generic/Systems/Tests/FailureMgr/test_all.nas
- [GitLab]/flightgear/fgdata/next/Aircraft/Generic/Systems/Tests/FailureMgr/test_cycle_counter.nas
- [GitLab]/flightgear/fgdata/next/Aircraft/Generic/Systems/Tests/FailureMgr/test_altitude_trigger.nas
- [GitLab]/flightgear/fgdata/next/Aircraft/Generic/Systems/Tests/FailureMgr/test_mcbf_trigger.nas
- [GitLab]/flightgear/fgdata/next/Aircraft/Generic/Systems/Tests/FailureMgr/test_mtbf_trigger.nas
- [GitLab]/flightgear/fgdata/next/Aircraft/Generic/Systems/Tests/FailureMgr/test_failure_mode.nas
External links
- CppUnit Documentation - Making assertions - The C++ unit test framework that backs the standalone framework on CI.