This page explains how to run the tests, how to select the ones you want, and how to add new tests to the archive.
SimGrid's code coverage usually lies between 70% and 80%, which is much higher than in most projects out there. This is because SimGrid is a rather complex project, and thorough tests let us modify it with less fear.
We have two sets of tests in SimGrid: each of the 10,000+ unit tests checks one specific case of one specific function, while each of the 500+ integration tests runs a given simulation specifically intended to exercise many functions together. Every example provided in examples/ is used as an integration test, while other torture tests and corner-case integration tests are located in teshsuite/. For each integration test, we ensure that the output exactly matches the defined expectations. Since SimGrid displays the timestamp of every logged line, this guarantees that any change in the models' predictions will be noticed. All these tests should ensure that SimGrid is safe to use and to depend on.
Running the tests is done using the ctest binary that comes with cmake. These tests are run for every commit and the result is publicly available.
ctest                                  # Launch all tests
ctest -R msg                           # Launch only the tests whose name matches "msg"
ctest -j4                              # Launch all tests in parallel, at most 4 at the same time
ctest --verbose                        # Display all details on what's going on
ctest --output-on-failure              # Only get verbose for the tests that fail
ctest -R msg- -j5 --output-on-failure  # You changed MSG and want to check that you didn't
                                       # break anything, huh? That's fine, I do so all the
                                       # time myself.
All unit tests are packed into the testall binary, which lives at the source root. These tests are run when you launch ctest, don't worry.
make testall                     # Rebuild the test runner when needed
./testall                        # Launch all tests
./testall --help                 # Remind yourself of how it works if you forgot
./testall --tests=-all           # Run no test at all (yeah, that's useless)
./testall --dump-only            # Display all existing test suites
./testall --tests=-all,+dict     # Only launch the tests from the dict test suite
./testall --tests=-all,+foo:bar  # Run only the bar test from the foo suite
If you want to test a specific function or set of functions, you need a unit test. Edit the file tools/cmake/UnitTesting.cmake to add your source file to the FILES_CONTAINING_UNITTESTS list. For example, if you want to create unit tests in the file src/xbt/plouf.c, your changes should look like this:
--- a/tools/cmake/UnitTesting.cmake
+++ b/tools/cmake/UnitTesting.cmake
@@ -11,6 +11,7 @@ set(FILES_CONTAINING_UNITTESTS
   src/xbt/xbt_sha.c
   src/xbt/config.c
+  src/xbt/plouf.c
 )
 if(SIMGRID_HAVE_MC)
Then, you want to actually add your tests to the source file. All the tests must be protected by "#ifdef SIMGRID_TEST" so that they don't get included in the regular build. The SIMGRID_TEST marker must also appear on the closing #endif line for the extraction script to work properly.
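For illustration, here is the skeleton that the hypothetical src/xbt/plouf.c from the example above would follow (only the SIMGRID_TEST markers matter here):

/* src/xbt/plouf.c -- the regular implementation code goes above */

#ifdef SIMGRID_TEST
/* Unit tests go here; they are skipped in regular builds and
 * extracted into the testall runner by tools/sg_unit_extractor.pl. */
#endif /* SIMGRID_TEST */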
Tests are subdivided into three levels. The top level, called a test suite, is created with the macro XBT_TEST_SUITE. There can be only one suite per source file. A suite contains test units that you create with the XBT_TEST_UNIT macro. Finally, you start actual tests with xbt_test_add. There is no closing marker of any sort: a unit is closed when the next unit starts, or when the end of the file is reached.
Once a given test is started with xbt_test_add, use xbt_test_assert to check a condition, or xbt_test_fail to report that it failed (if your test cannot easily be written as an assert). xbt_test_exception can be used to report that it failed with an exception. There is nothing to do to report that a given test succeeded: just start the next test without reporting any issue. Finally, xbt_test_log can be used to report intermediate steps; its messages are shown only if the corresponding test fails.
Here is a recapping example, inspired by src/xbt/dynar.h (see that file for details).
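The sketch below merely illustrates the overall structure, using the xbt dynar API; the real code in the sources differs in its details:

#ifdef SIMGRID_TEST
/* The one and only test suite of this file */
XBT_TEST_SUITE("dynar", "Dynar data container");

XBT_TEST_UNIT("int", test_dynar_int, "Dynars of integers")
{
  int i;
  unsigned int cursor;
  xbt_dynar_t d = xbt_dynar_new(sizeof(int), NULL);

  xbt_test_add("Traverse the empty dynar");
  xbt_dynar_foreach (d, cursor, i)
    xbt_test_fail("Damnit, there is something in the empty dynar");

  xbt_test_add("Push 10 integers, then read them back");
  for (i = 0; i < 10; i++)
    xbt_dynar_push_as(d, int, i);
  xbt_test_log("Pushed %d elements", 10); /* displayed only if the test fails */
  xbt_dynar_foreach (d, cursor, i)
    xbt_test_assert(i == (int)cursor, "Got %d at position %u", i, cursor);

  xbt_dynar_free(&d);
}
#endif /* SIMGRID_TEST */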
For more details on the macros used to write unit tests, see their reference guide: Unit testing support. For details on how the tests are extracted from the module sources, check the tools/sg_unit_extractor.pl script directly.
Last note: please try to keep your tests fast. We run them very often, and you should strive to make them as fast as possible so as not to upset the other developers. Do not hesitate to stress test your code with such unit tests, but make sure that they run reasonably fast, or nobody will run "ctest" before committing code.
TESH (the TEsting SHell) is the test runner that we wrote for our integration tests. It is distributed with the SimGrid sources, and even comes with a man page. TESH ensures that the output produced by a command perfectly matches the expected output. This is very precious to ensure that no change modifies the timings computed by the models without notice.
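A tesh file interleaves the commands to run with their exact expected output: lines starting with $ give a command, and lines starting with > give the output that tesh must observe, byte for byte. As an illustration (this minimal file is hypothetical, not taken from the archive):

# hello.tesh -- hypothetical example
$ echo "hello world"
> hello world

See the tesh man page for the other directives.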
To add a new integration test, you thus have 3 things to do: write the tesh file, declare it in <project/directory>/teshsuite/<interface, e.g. msg>/CMakeLists.txt, and pick a wise name for it. It is often useful to run a whole category of tests together, and the only way to do so in ctest is the -R argument, which specifies a regular expression that the test names must match. For example, you can run all MSG tests with "ctest -R msg". That explains the importance of the test names.
Once the name is chosen, create a new test by adding a line similar to the following (assuming that you use tesh as expected).
# Usage: ADD_TEST(test-name ${CMAKE_BINARY_DIR}/bin/tesh <options> <tesh-file>)
#  option --setenv bindir  sets the directory containing the binary
#         --setenv srcdir  sets the directory containing the source file
#         --cd             sets the working directory
ADD_TEST(my-test-name ${CMAKE_BINARY_DIR}/bin/tesh
         --setenv bindir=${CMAKE_BINARY_DIR}/examples/my-test/
         --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/my-test/
         --cd ${CMAKE_HOME_DIRECTORY}/examples/my-test/
         ${CMAKE_HOME_DIRECTORY}/examples/msg/io/io.tesh
)
As usual, you must run "make distcheck" after modifying the cmake files, to ensure that you did not forget any files in the distributed archive.
We use several systems to automatically test SimGrid with a large set of parameters, across as many platforms as possible. We use Jenkins on Inria servers as a workhorse: it runs all of our tests for many configurations. It takes a long time to answer and often reports issues, but when it's green, you know that SimGrid is very fit! We use Travis to quickly run some tests on Linux and Mac: it answers quickly but may miss issues. And we use AppVeyor to build and somewhat test SimGrid on Windows.
You should not have to change the configuration of the Jenkins tool yourself, although you may have to change the slaves' configuration using the CI interface of INRIA; refer to the CI documentation.
The result can be seen here: https://ci.inria.fr/simgrid/
We have two interesting projects on Jenkins, each driven by its own script:
tools/jenkins/build.sh
tools/jenkins/DynamicAnalysis.sh
In each case, SimGrid gets built in /builds/workspace/$PROJECT/build_mode/$CONFIG/label/$SERVER/build, where $PROJECT is for instance "SimGrid-Multi", $CONFIG is "DEBUG" or "ModelChecker", and $SERVER is for instance "simgrid-fedora20-64-clang".
If some configurations are known to fail on some systems (such as model checking on non-Linux systems), go to your project and click on "Configuration". There, find the field "combination filter" (if your interface language is English) and tick the checkbox; then add a Groovy expression to disable a specific configuration. For example, to disable the "ModelChecker" build on host "small-netbsd-64-clang", use:
(label=="small-netbsd-64-clang").implies(build_mode!="ModelChecker")
Just for the record, the slaves were created from the available template with the following commands:
# Debian/Ubuntu
apt-get install gcc g++ gfortran automake cmake libboost-dev openjdk-8-jdk openjdk-8-jre libxslt-dev libxml2-dev libevent-dev libunwind-dev libdw-dev htop git python3 xsltproc libboost-context-dev
# for DynamicAnalysis:
apt-get install jacoco libjacoco-java libns3-dev pcregrep gcovr ant lua5.3-dev sloccount

# Fedora
dnf install libboost-devel openjdk-8-jdk openjdk-8-jre libxslt-devel libxml2-devel xsltproc git python3 libdw-devel libevent-devel libunwind-devel htop lua5.3-devel

# NetBSD
pkg_add cmake gcc7 boost boost-headers automake openjdk8 libxslt libxml2 libunwind git htop python36

# OpenSUSE
zypper install cmake automake clang boost-devel java-1_8_0-openjdk-devel libxslt-devel libxml2-devel xsltproc git python3 libdw-devel libevent-devel libunwind-devel htop binutils gcc7-fortran

# FreeBSD
pkg install boost-libs cmake openjdk8 automake libxslt libxml2 libunwind git htop python3 automake gcc6 flang elfutils libevent
# + clang-devel from ports

# OS X
brew install cmake boost libunwind-headers libxslt git python3
Travis is a free (as in free beer) Continuous Integration system that open-source projects can use freely. It is very well integrated in the GitHub ecosystem, and there is plenty of documentation out there. Our configuration is in the file .travis.yml, as it should be, and the result is here: https://travis-ci.org/simgrid/simgrid
The .travis.yml configuration file can be useful if you fail to get SimGrid to compile on modern Mac systems: we use the brew package manager there, and it works like a charm.
AppVeyor aims at becoming the Travis of Windows. It is maybe less mature than Travis, or maybe I am just less experienced with Windows. Our configuration is in the file appveyor.yml, as it should be, and the result is here: https://ci.appveyor.com/project/simgrid/simgrid
We use Choco as a package manager on AppVeyor, and it is sufficient for us. In the future, we will probably move to the Ubuntu subsystem of Windows 10: SimGrid performs very well in that setting, but unfortunately no continuous integration service provides it yet, so we cannot drop AppVeyor.
Since SimGrid is packaged in Debian, we benefit from their huge testing infrastructure. That's an interesting torture test for our code base. The downside is that it only covers released versions of SimGrid. That is why the Debian build does not stop when the tests fail: post-release fixes do not fit well in our workflow, and we fix only the most important breakages.
The build results are here: https://buildd.debian.org/status/package.php?p=simgrid
SonarQube is an open-source code quality analysis solution. Their nice code scanners are provided as plugins. The one for C++ is not free, but open-source projects can use it at no cost. That is what we are doing.
Don't miss the great-looking dashboard here: https://nemo.sonarqube.org/overview?id=simgrid
This tool is enriched by the script tools/internal/travis-sonarqube.sh, which is run from .travis.yml.