Basic Structure

The interface design and naming are inspired by the Boost Test Library. Following this naming scheme, the universal testing package consists of three basic structural elements: the test run, the test suite, and the test case.

The basic building blocks of the Igor Pro Universal Testing Framework are assertions, which check whether a condition is true. See Assertion Types for a clarification of the difference between the three assertion types. Assertions are grouped into single test cases, and test cases are organized in test suites.

A test suite is a group of test cases that live in a single procedure file. You can group multiple test suites in a named test environment by using the optional parameter name of RunTest().

For a list of all objects see Index or use the Search Page.

Test Run

A Test Run is executed using RunTest(), whose only mandatory parameter is the list of Test Suites to run (procWinList).

Function definition of RunTest

variable RunTest(string procWinList, string name = defaultValue, string testCase = defaultValue, variable enableJU = defaultValue, variable enableTAP = defaultValue, variable enableRegExp = defaultValue, variable allowDebug = defaultValue, variable debugMode = defaultValue, variable keepDataFolder = defaultValue, string traceWinList = defaultValue, string traceOptions = defaultValue, variable fixLogName = defaultValue, variable waveTrackingMode = defaultValue, variable retry = defaultValue, variable retryMaxCount = defaultValue, variable shuffle = defaultValue)

Main function to execute test suites with the universal testing framework.

You can abort the test run using Command-dot on Macintosh, Ctrl+Break on Windows or Shift+Escape on all platforms.

usage example
RunTest("proc0;proc1", name="myTest")

This command will run the test suites proc0 and proc1 in a test named myTest.

Parameters:
  • procWinList

    A list of procedure files that should be treated as test suites.

    The list should be given semicolon (“;”) separated.

    The procedure name must not include Independent Module specifications.

    This parameter can be given as a regular expression with enableRegExp set to 1.

  • name

    (optional) default “Unnamed”

    descriptive name for the executed test suites. This can be used to group multiple test suites into a single test run.

  • testCase

    (optional) default “.*” (all test cases in the list of test suites)

    Function names of the test cases that should be executed in the given list of test suites (procWinList).

    The list should be given semicolon (“;”) separated.

    This parameter can be treated as a regular expression with enableRegExp set to 1.

  • enableJU

    (optional) default disabled, enabled when set to 1:

    A JUnit-compatible XML file is written at the end of the Test Run. This allows combining the framework with continuous integration servers like Atlassian Bamboo, GitLab, etc. The experiment must be saved somewhere on disk (it is okay to have unsaved changes).

  • enableTAP

    (optional) default disabled, enabled when set to 1:

    A TAP compatible file is written at the end of the test run.

    Test Anything Protocol (TAP) standard 13

    The experiment must be saved somewhere on disk (it is okay to have unsaved changes).

  • enableRegExp

    (optional) default disabled, enabled when set to 1:

    The input for test suites (procWinList) and test cases (testCase) is treated as a regular expression.

    Example
    RunTest("example[1-3]-plain\\.ipf", enableRegExp=1)
    

    This command will run all test cases in the test suites example1-plain.ipf, example2-plain.ipf and example3-plain.ipf.

  • allowDebug

    (optional) default disabled, enabled when set to 1:

    The Igor debugger will be left in its current state when running the tests. This setting is ignored when debugMode is also enabled.

  • debugMode

    (optional) default disabled, enabled when set to 1-15:

    The Igor debugger will be turned on with the state given by a combination of the following bits:

    1st bit = 1 (IUTF_DEBUG_ENABLE): Enable Debugger (only breakpoints)

    2nd bit = 1 (IUTF_DEBUG_ON_ERROR): Debug on Error

    3rd bit = 1 (IUTF_DEBUG_NVAR_SVAR_WAVE): Check NVAR SVAR WAVE

    4th bit = 1 (IUTF_DEBUG_FAILED_ASSERTION): Debug on failed assertion

    Example
    RunTest(..., debugMode = IUTF_DEBUG_ON_ERROR | IUTF_DEBUG_FAILED_ASSERTION)
    

    This will enable the debugger with Debug On Error and debugging on failed assertion.

  • keepDataFolder

    (optional) default disabled, enabled when set to 1:

    The temporary data folder in which each test case is executed is not removed at the end of the test case. This allows reviewing the produced data.

  • traceWinList – (optional) default “” A list of procedure windows where execution gets traced. The universal testing framework saves an RTF document for each traced procedure file. When REGEXP is set in traceOptions, traceWinList is also interpreted as a regular expression. The experiment must be saved somewhere on disk (it is okay to have unsaved changes).

  • traceOptions – (optional) default “” A key:value pair list of additional tracing options. Currently supported keys are:

    • INSTRUMENTONLY:boolean – When set, run instrumentation only and return. No tests are executed.

    • HTMLCREATION:boolean – When set to zero, no htm result files are created at the end of the run.

    • REGEXP:boolean – When set, traceWinList is interpreted as a regular expression.

    • COBERTURA:boolean – When set, the tracing results are exported in Cobertura format.

    • COBERTURA_SOURCES:string – A comma (,) delimited list of directory paths that should be used as source paths for the procedure files. If this list is empty or this option is not set, the current home directory of this experiment is used as the source path for all procedure files.

    • COBERTURA_OUT:string – The output directory for the generated Cobertura files. The default is the current home directory.

  • fixLogName

    (optional) default 0 disabled, enabled when set to 1:

    If enabled the output files that will be generated after an autorun will have predictable names like “IUTF_Test.log”. If disabled the file names will always contain the name of the procedure file and a timestamp.

  • waveTrackingMode

    (optional) default disabled, enabled when set to a value different than 0:

    Monitors the number of free waves before and after a test case run. If for some reason the number is not the same as before, this is considered an error. If you want to opt out a single test case, you have to tag it with IUTF_NO_WAVE_TRACKING. This uses the flags UTF_WAVE_TRACKING_FREE, UTF_WAVE_TRACKING_LOCAL and UTF_WAVE_TRACKING_ALL. This feature is only available in Igor Pro 9 or later.

  • retry – (optional) default IUTF_RETRY_NORETRY Set the conditions and options when IUTF should retry a test case. The following flags are allowed:

    • IUTF_RETRY_FAILED_UNTIL_PASS: Reruns every failed flaky test up to retryMaxCount. A flaky test case needs the IUTF_RETRY_FAILED function tag.

    • IUTF_RETRY_MARK_ALL_AS_RETRY: Treats all test cases as flaky. There is no need to use the IUTF_RETRY_FAILED function tag. This option does nothing if IUTF_RETRY_FAILED_UNTIL_PASS is not set.

    • IUTF_RETRY_REQUIRES: Allows retrying failed REQUIRE assertions. This option does nothing if IUTF_RETRY_FAILED_UNTIL_PASS is not set.

  • retryMaxCount – (optional) default IUTF_MAX_SUPPORTED_RETRY Sets the maximum number of retries if rerunning of flaky tests is enabled. Setting this number higher than IUTF_MAX_SUPPORTED_RETRY is not allowed.

  • shuffle – (optional) default IUTF_SHUFFLE_NONE A combination of flags which specify the current shuffle mode. Supported flags are:

    • IUTF_SHUFFLE_NONE: Shuffle nothing. Use a deterministic execution order.

    • IUTF_SHUFFLE_TEST_SUITES: Shuffle the order of execution of the test suites.

    • IUTF_SHUFFLE_TEST_CASES: Shuffle the order of execution of the test cases inside the test suites. You can opt out single procedure files if you place the tag IUTF_NO_SHUFFLE_TEST_CASE somewhere in these specific files.

    • IUTF_SHUFFLE_ALL: A combination of IUTF_SHUFFLE_TEST_SUITES and IUTF_SHUFFLE_TEST_CASES.

Returns:

total number of errors
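
As a more complete illustration, several of the optional parameters can be combined in a single call. The following sketch assumes a test suite file named MyTests.ipf and a test run name nightly (both placeholders); the flags are the ones documented above:

Example
RunTest("MyTests.ipf", name="nightly", enableJU=1, retry=IUTF_RETRY_FAILED_UNTIL_PASS, shuffle=IUTF_SHUFFLE_ALL)

This runs all test cases of MyTests.ipf in shuffled order, reruns failed test cases that carry the IUTF_RETRY_FAILED function tag, and writes a JUnit-compatible XML file at the end of the test run.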

Test Suite

A Test Suite is a group of Test Cases which belong together. All test functions are defined in a single procedure file, and RunTest() calls them from top to bottom. Generally speaking, a Test Suite is equal to a procedure file. Therefore test suites cannot be nested, although multiple test suites can be run with one command by supplying a list to the parameter procWinList in RunTest().

Note

Although possible, a test suite should not live inside the main program. It should be separated from the rest of the project into its own procedure file. This also allows loading only the necessary parts of your program into the unit test.

Test Case

A Test Case is one of the basic building blocks grouping assertions together. A function is considered a test case if it fulfills all of the following properties:

  1. It takes no parameters.

  2. It returns a numeric value (Igor Pro default).

  3. Its name does not end with _IGNORE or _REENTRY.

  4. It is either non-static, or static and part of a regular module.

The first rule makes the test case callable in automated test environments.

The third rule reserves the _IGNORE namespace to allow advanced users to add their own helper functions. It is advised to define all test cases as static functions and to create one regular, distinctive module per procedure file. This keeps the Test Cases in their own namespace so they do not interfere with user-defined functions in ProcGlobal.
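
As a minimal sketch of this advice, a test suite procedure file could look like the following. The module name TST and the test case name TestNothing are placeholders, and the #include name may differ depending on the installed version of the framework:

Example
#pragma rtGlobals=3
#pragma ModuleName=TST

#include "igortest"

// a static test case inside the regular module TST
static Function TestNothing()
    CHECK(1 == 1)
End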

A defined list of test cases in a test suite can be run using the optional parameter testCase of RunTest(). When executing multiple test suites, a test case that is found in more than one test suite is executed in every matching test suite.

Test cases can be marked as expected failures. The assertions are executed as normal, and the error counter is reset to zero if one or more assertions failed during the execution of the test case. Only if the test case finishes without any failed assertion is the test case itself considered failed. To mark a test case as an expected failure, write the keyword in the comment above the function (all comment lines above the Function declaration, up to the previous Function, are considered tags; each tag goes on a separate line):

// IUTF_EXPECTED_FAILURE
Function TestCase_NotWorkingYet()
    CHECK_EQUAL_VAR(1, 2) // fails, but the failure is expected
End

All assertions in such a test case are marked as expected failures. If the test case ends due to an Abort, AbortOnRTE or a pending RTE, this is also considered an expected failure, and neither the error counter is increased nor is the test case marked as failed.

Example:

In Test Suite TestSuite_1.ipf the Test Cases static Duplicate() and static Unique_1() are defined. In Test Suite TestSuite_2.ipf the Test Cases static Duplicate() and static Unique_2() are defined.

RunTest("TestSuite_1.ipf;TestSuite_2.ipf", testCase="Unique_1;Unique_2;Duplicate")

This command will run the two test suites TestSuite_1.ipf and TestSuite_2.ipf separately. Within each test suite two test cases are executed: the Unique* test case and the Duplicate test case. Since the Duplicate test case is found in both test suites, it is executed in both; the two Duplicate test cases do not interfere with each other because they are static to their corresponding procedure files.

Note

The Test Run will not execute if one of the specified test cases cannot be found in the given list of test suites. This also applies if no test case could be found using a regular expression pattern.

Assertion Types

An assertion checks that a given condition is true or, in more general terms, that an entity fulfills specific properties. Test assertions are defined for strings, variables and waves and have ALL_CAPS names. The assertion group is specified with a prefix to the assertion name, using one of WARN, CHECK or REQUIRE. Assertions usually come in these triplets, which differ only in how they react to a failed assertion. The following table clarifies the difference between the three assertion prefix groups:

Type      Create Log Message   Increment Error Count   Abort execution immediately
WARN      YES                  NO                      NO
CHECK     YES                  YES                     NO
REQUIRE   YES                  YES                     YES

The simplest assertion is CHECK(), which tests if its argument is true. If you do not want to increase the error count, you can use the corresponding WARN() function, and if you want to abort the execution of the current test case when the supplied argument is false, you can use the REQUIRE() variant.

Similar to these simple assertions there are many different checks for typical use cases. Comparing two variables, for example, can be done with WARN_EQUAL_VAR(), CHECK_EQUAL_VAR() or REQUIRE_EQUAL_VAR(). Take a look at Example10 for a test case with various assertions.
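
As a small sketch of how such assertions might look inside a test case (the assertion names follow the WARN/CHECK/REQUIRE pattern described above):

Example
static Function TestBasicAssertions()
    variable result = 2 + 2

    WARN(result > 0)            // logs a message on failure, but does not count as an error
    CHECK_EQUAL_VAR(result, 4)  // compares two variables, counts as an error on failure
    REQUIRE(result != 0)        // aborts the test case immediately on failure
End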

Note

See Assertions for a complete list of all available checks. If in doubt use the CHECK variant.

Assertions with only one variant are PASS() and FAIL(). If you want to know more about how to use these two special assertions, take a look at Example7.
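
A common pattern, shown here only as a rough sketch (not taken from Example7), is to use FAIL() to flag a code path that must not be reached and PASS() to record that the expected code path was taken. The out-of-range wave access below is merely a way to provoke a runtime error:

Example
static Function TestExpectedRuntimeError()
    variable err

    Make/FREE/N=1 wv

    try
        wv[10] = 1; AbortOnRTE // out-of-range access provokes a runtime error
        FAIL()                 // reaching this line means the expected error did not occur
    catch
        err = GetRTError(1)    // clear the runtime error
        PASS()                 // the expected error path was taken
    endtry
End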

Aborting the test run

You can abort the execution of the test run by clicking the Abort button in the status bar or by pressing one of the following abort key combinations:

Command-dot     Macintosh only
Ctrl+Break      Windows only
Shift+Escape    All platforms