Advanced Usage

Test Hooks

A Test Run can be extended with user-defined code at specific points during its execution. These predefined injection points are at the beginning and at the end of a complete Test Run, of each Test Suite, and of each Test Case.

The following functions are reserved for user code injections:

TEST_BEGIN_OVERRIDE()

Executed at the beginning of a Test Run.

TEST_END_OVERRIDE()

Executed at the end of a Test Run.

TEST_SUITE_BEGIN_OVERRIDE()

Executed at the beginning of a Test Suite.

TEST_SUITE_END_OVERRIDE()

Executed at the end of a Test Suite.

TEST_CASE_BEGIN_OVERRIDE()

Executed at the beginning of a Test Case.

TEST_CASE_END_OVERRIDE()

Executed at the end of a Test Case.
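
Each hook receives the name of the current Test Run, Test Suite, or Test Case as a string parameter, as shown by the call tcbegin_global(TestCaseName) further below. A minimal global hook might look like this sketch; the print statement is just an illustrative placeholder:

Function TEST_CASE_BEGIN_OVERRIDE(name)
	string name

	// user code that runs before every test case
	print "Starting test case", name
End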

Note

TEST_END_OVERRIDE() is executed at the very end of a Test Run, so the Igor debugger state has already been reset to the state it had before RunTest() was executed.

Note

The functions TEST_SUITE_BEGIN_OVERRIDE() and TEST_SUITE_END_OVERRIDE() as well as TEST_CASE_BEGIN_OVERRIDE() and TEST_CASE_END_OVERRIDE() can also be defined locally in a Test Suite with the static keyword. Example2 shows how static functions are called by the framework.

These functions are executed automatically if they are defined anywhere in global or local context. For example, TEST_CASE_BEGIN_OVERRIDE() is executed at the beginning of each Test Case. Locally defined functions always override globally defined ones of the same name. Consider the following scenario: a user wants code to be executed only in one specific Test Suite. The functions TEST_SUITE_BEGIN_OVERRIDE() and TEST_SUITE_END_OVERRIDE() can then be declared static within that Test Suite. The local (static) functions replace any globally defined functions of the same name. Extending a Test Run with user code at these injection points is demonstrated in Example5.

Note

If the locally defined function should only extend a global function the user can call the global function within the local function as follows:

FUNCREF USER_HOOK_PROTO tcbegin_global = TEST_CASE_BEGIN_OVERRIDE
tcbegin_global(TestCaseName)

To give a possible use case, consider the following scenario: by default, each Test Case is executed in its own temporary data folder. TEST_CASE_BEGIN_OVERRIDE() can be used to set the data folder to root:. Each Test Case is then executed in root: and no cleanup is done afterward, so the next Test Case starts with the data the previous Test Case left in root:.
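
The scenario above could be sketched as follows; this is a minimal sketch that assumes no other setup is needed in the hook:

Function TEST_CASE_BEGIN_OVERRIDE(name)
	string name

	// execute every test case in root: instead of a temporary data folder
	SetDataFolder root:
End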

Note

By default the Igor debugger is disabled during the execution of a test run.

JUNIT Output

The Igor unit-testing framework supports output of Test Run results in a JUNIT-compatible format. The output can be enabled by adding the optional parameter enableJU=1 to RunTest(). The XML output files are written to the experiment's home directory and named JU_Experiment_Date_Time.xml. If a file with the same name already exists, a three-digit number is appended to the name. The JUNIT output also contains the history log of each Test Case and Test Suite.
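
Enabling the output could look like the following sketch; the procedure file name is hypothetical:

RunTest("MyTestSuite.ipf", enableJU = 1)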

Test Anything Protocol Output

Output according to the Test Anything Protocol (TAP) standard, version 13, can be enabled with the optional parameter enableTAP = 1 of RunTest().
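
Enabling TAP output could look like the following sketch; the procedure file name is hypothetical:

RunTest("MyTestSuite.ipf", enableTAP = 1)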

The output is written into a file in the experiment folder with a uniquely generated name of the form tap_'time'.log. This prevents accidental overwrites of previous test runs. A TAP output file combines all Test Cases from all Test Suites given in RunTest(). Additional TAP-compliant descriptions and directives for each Test Case can be added in the two lines preceding the function of a Test Case:

// #TAPDescription: My description here
// #TAPDirective: My directive here

For directives two additional keywords are defined that can be written at the beginning of the directive message.

  • TODO indicates a Test that includes a part of the program still in development. Failures here will be ignored by a TAP consumer.
  • SKIP indicates a Test that should be skipped. A Test with this directive keyword is not executed and is always reported as ‘ok’.

Examples:

// #TAPDirective: TODO routine that should be tested is still under development

or

// #TAPDirective: SKIP this test gets skipped

See the Experiment in the TAP_Example folder for reference.

Automate Test Runs

To further simplify test execution it is possible to automate test runs from the command line.

Steps to do that include:

  • Implement a function called run() in ProcGlobal context taking no parameters. This function must perform all necessary steps for test execution, which is at least one call to RunTest().
  • Put the test experiment together with your Test Suites and the script helper/autorun-test.bat into its own folder.
  • Run the batch file autorun-test.bat.
  • Inspect the created log file.
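
The run() function from the first step can be as simple as the following sketch; the procedure file name is hypothetical:

Function run()

	// perform all necessary steps for test execution
	RunTest("MyTestSuite.ipf")
End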

The example batch files for autorun create a file named DO_AUTORUN.TXT before starting Igor Pro, which enables autorun mode. After the run() function has executed and returned, the log is saved to a file on disk and Igor Pro quits.

A different autorun mode is enabled if the file is named DO_AUTORUN_PLAIN.TXT instead. In this mode no log file is saved after test execution and Igor Pro does not quit. This mode also does not use the Operation Queue.

See also Example6.

Running in an Independent Module

The unit-testing framework can itself be run in an independent module. This can be required in rare cases when the ProcGlobal procedures might not always be compiled.

See also Example9.

Handling of Abort Code

The unit-testing framework catches Abort, logs the abort code, and continues with the next Test Case. The abort conditions currently differentiated are manual user aborts, stack overflows, and an Abort encountered in the code. The framework terminates when the Abort button is pressed manually.

Note

Igor Pro 6 cannot differentiate between manual user aborts and programmatic abort codes. Pressing the Abort button in Igor Pro 6 will therefore terminate only the current Test Case and continue with the next queued Test Case.

Test Cases with Background Activity

There exist situations where a Test Case needs to return temporarily to the Igor command prompt and continue after a background task has finished. A real-world use case is code that runs data acquisition in a background task, where the Test Case should continue after the acquisition has finished.

The unit-testing framework supports such cases with a feature that allows one or more background tasks to be registered for monitoring. A procedure name can be given that is called when the monitored background tasks finish. After the current Test Case procedure finishes, the framework returns to Igor's command prompt, which allows the user's background task(s) to do their job. After the task(s) finish, the framework continues the Test Case with the registered procedure.

The registration is done by calling RegisterUTFMonitor() from a Test Case or a BEGIN hook. The registration takes a list of background tasks that should be monitored. The mode parameter sets whether all tasks or just one task has to finish to continue test execution. Optionally, a timeout can be set after which the test continues independently of the state of the user task(s).

See also Background Monitor Modes.

Function definition of RegisterUTFMonitor

variable RegisterUTFMonitor(string taskList, variable mode, string reentryFunc, variable timeout = defaultValue, variable failOnTimeout = defaultValue)

Registers a background monitor for a list of other background tasks.

usage example
RegisterUTFMonitor("TestCaseTask1;TestCaseTask2", BACKGROUNDMONMODE_OR, \
                   "testcase_REENTRY", timeout = 60)

This command will register the UTF background monitor task to monitor the state of TestCaseTask1 and TestCaseTask2. As mode is set to BACKGROUNDMONMODE_OR, when TestCaseTask1 OR TestCaseTask2 has finished the function testcase_REENTRY() is called to continue the current test case. The reentry function is also called if after 60 seconds both tasks are still running.

Parameters
  • taskList: A list of background task names that should be monitored by the unit testing framework

    The list should be given semicolon (“;”) separated.

  • mode: Mode sets how multiple tasks are evaluated. If set to BACKGROUNDMONMODE_AND all tasks of the list must finish (AND). If set to BACKGROUNDMONMODE_OR one task of the list must finish (OR).
  • reentryFunc: Name of the function that the unit-testing framework calls when the monitored background tasks have finished. The function name must end with _REENTRY and it must be of the form $fun_REENTRY() (same format as test cases). The reentry function continues the current Test Case, therefore no hooks are called.
  • timeout: (optional) default 0. Timeout in seconds that the background monitor waits for the test case task(s). A timeout of 0 equals no timeout. If the timeout is reached the registered reentry function is called.
  • failOnTimeout: (optional) defaults to false. If set to true, the Test Case fails on reaching the timeout.
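
A registration that uses both optional parameters might look like the following sketch; the task and reentry function names are hypothetical:

RegisterUTFMonitor("AcquisitionTask", BACKGROUNDMONMODE_AND, \
                   "testcase_REENTRY", timeout = 10, failOnTimeout = 1)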

The function that is registered to continue the test execution must have the same format as a Test Case function and its name has to end with _REENTRY. When the unit-testing framework temporarily drops to Igor's command line and resumes later, no begin/end hooks are executed; logically, the unit-testing framework stays in the same Test Case. It is allowed to register another monitoring in the _REENTRY function.

Multiple subsequent calls to RegisterUTFMonitor() in the same function overwrite the previous registration.
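
Registering another monitor from within a reentry function could be sketched as follows; the task name and the UserTask2 background procedure are hypothetical:

Function testCase_REENTRY()

	// the first monitored task has finished; start and monitor a second one
	CtrlNamedBackGround secondtask, proc=UserTask2, period=1, start
	RegisterUTFMonitor("secondtask", BACKGROUNDMONMODE_AND, "testCaseStep2_REENTRY")
End

Function testCaseStep2_REENTRY()

	// continues the same test case after the second task has finished
	PASS()
End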

Test Cases with background activity are also supported for multi data test cases, see Multi Data Test Cases with Background Activity.

See also Example11 and Example12.

Multi Data Test Cases

Often the same test should be run multiple times with different sets of data. The unit-testing framework offers direct support for such tests. A test case that is run with multiple data takes one optional argument, and a data generator function that returns a wave is attributed to the test case. The test case is run once for each element of that wave. This sketches a simple multi data test case:

// UTF_TD_GENERATOR DataGeneratorFunction
Function myTestCase([arg])
  variable arg
  // add checks here
End

Function/WAVE DataGeneratorFunction()
  Make/FREE data = {1, 2, 3, 4}
  return data
End

The data generator function is attributed to the test case myTestCase with the comment line above it, using the key word UTF_TD_GENERATOR. The data generator DataGeneratorFunction returns a wave of numeric type, and the test case takes one optional argument of numeric type. When run, myTestCase is executed four times with the argument arg set to 1, 2, 3 and 4.

Supported types for arg are variable, string, complex, Integer64, data folder reference and wave reference. The type of the wave returned by the attributed data generator function must fit the argument type that the multi data test case takes. The data generator function name must be attributed with a comment within the three lines above the test case's Function line. The key word is UTF_TD_GENERATOR, with the data generator's function name following, as seen in the simple example above. If no data generator is given, or the format of the test case function does not fit the wave type, then an error message is printed and the test case is ignored.
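
For example, a multi data test case taking a string argument would be attributed a data generator returning a text wave. This is a sketch with hypothetical function names:

// UTF_TD_GENERATOR StringGeneratorFunction
Function myStringTestCase([str])
	string str

	CHECK(strlen(str) > 0)
End

Function/WAVE StringGeneratorFunction()
	Make/FREE/T data = {"first", "second"}
	return data
End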

By default, the test case names are extended with :num, where num is the index of the wave element returned from the data generator. For convenience, dimension labels can be set for each wave element in the data generator; these are then used instead of the index.

Function/WAVE DataGeneratorFunction()
  Make/FREE data = {1, 2, 3, 4}
  SetDimLabel 0, 0, first, data
  SetDimLabel 0, 1, second, data
  SetDimLabel 0, 2, third, data
  SetDimLabel 0, 3, fourth, data
  return data
End

The test case names would now be myTestCase:first, myTestCase:second and so on.

The optional argument of the test case function is always supplied from the data generator wave elements. Thus the case that ParamIsDefault(arg) is true never happens.

See also Example13.

Multi Data Test Cases with Background Activity

Multi data test cases that register a background task to be monitored are supported. For a multi data test case, the reentry function can have one of two formats:

  • Function fun_REENTRY() with no argument as described in Test Cases with Background Activity
  • Function fun_REENTRY([arg]) with the same argument type as the originating multi data test case.

In the second case, the reentry function is called with the same wave element as argument that the multi data test case was started with.

If the reentry function uses a different argument type than the test case entry function, an error is printed on reentry to the unit-testing framework and further test execution is aborted.

// UTF_TD_GENERATOR DataGeneratorFunction
Function myTestCase([var])
  variable var

  CtrlNamedBackGround testtask, proc=UserTask, period=1, start
  RegisterUTFMonitor("testtask", 1, "testCase_REENTRY")
  CHECK(var == 1 || var == 5)
End

Function UserTask(s)
  STRUCT WMBackgroundStruct &s

  return !mod(trunc(datetime), 5)
End

Function/WAVE DataGeneratorFunction()
  Make/FREE data = {5, 1}
  SetDimLabel 0, 0, first, data
  SetDimLabel 0, 1, second, data
  return data
End

Function testCase_REENTRY([var])
  variable var

  print "Reentered test case with argument ", var
  PASS()
End