Examples

This section shows the Igor Unit Testing Framework in use. If you are just starting with the framework, consider taking the Guided Tour first.

Example1

This example shows the basic working principle of the comparison assertions. Constant values are passed to the built-in function abs() and its output is checked for equality.

The unit test makes sure that abs() behaves as expected. For example, if abs() is given NaN as input, the output value is also NaN. The function is also capable of handling INF values.

example1-plain.ipf
#pragma rtGlobals=3
#pragma TextEncoding="UTF-8"

#include "unit-testing"

Function TestAbs()

	CHECK_EQUAL_VAR(abs(1.5), 1.5)
	CHECK_EQUAL_VAR(abs(-1.5), 1.5)
	CHECK_EQUAL_VAR(abs(NaN), NaN)
	WARN(abs(NaN) == NaN)
	CHECK_EQUAL_VAR(abs(INF), INF)
	CHECK_EQUAL_VAR(abs(-INF), INF)
End

The test suite can be executed using the following command:

command
RunTest("example1-plain.ipf")

The assertion CHECK_EQUAL_VAR(abs(NaN), NaN) shows that CHECK_EQUAL_VAR() is a better way of comparing numeric variables than the plain CHECK() assertion, because NaN == NaN evaluates to false. The plain comparison in the following line is expected to fail; using the WARN() variant keeps that failure from raising the error counter. If you want to know to what extent these assertion variants differ, take a look at the section on Assertion Types.
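
For comparison, a plain CHECK() on the same expression would count as a failed assertion, while CHECK_EQUAL_VAR() treats two NaN values as equal (a short sketch, not part of example1-plain.ipf):

	CHECK(abs(NaN) == NaN)         // fails: NaN == NaN is false, the error counter increases
	WARN(abs(NaN) == NaN)          // fails as well, but only warns and leaves the error counter untouched
	CHECK_EQUAL_VAR(abs(NaN), NaN) // passes: NaN is considered equal to NaN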

Note

It is recommended to take a look at the complete list of assertions. This will help in choosing the right assertion type for a comparison.

The definitions of the assertions used in this test suite can be found in the Assertions section.

Example2

This test suite has its own run routine. The run_IGNORE function serves as an entry point for "example2-plain.ipf". Because of the _IGNORE suffix, the function itself is not registered as a test case; this is also explained in the section about Test Cases. Without the suffix, calling RunTest() from within it would lead to a recursion error.

There are multiple calls to RunTest() in run_IGNORE to demonstrate the use of the optional arguments. Calling RunTest() without any optional argument searches the procedure file for all available test cases and executes them. You can also execute specific test cases by passing their names via the testCase parameter.

The optional parameter name is especially useful for bundling more than one procedure file into a single test run.
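
As a short sketch (the file names are placeholders, not files shipped with the examples), such a bundled run could look like this:

	// bundle two hypothetical procedure files into one named test run
	RunTest("suiteA.ipf;suiteB.ipf", name="My bundled test run")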

The test suite itself lives in a module and all test cases are static to that module. This is the recommended setup for a test suite. When using the static keyword, you also have to define a module with #pragma ModuleName=Example2.

example2-plain.ipf
#pragma rtGlobals=3
#pragma TextEncoding="UTF-8"
#pragma ModuleName=Example2

#include "unit-testing"

Function run_IGNORE()

	// executes all test cases of this file
	RunTest("example2-plain.ipf")
	// execute only one test case at a time
	RunTest("example2-plain.ipf", testCase="VerifyStringComparison")
	// explicitly specify both tests
	RunTest("example2-plain.ipf", testCase="VerifyStringComparison;VerfifyEmptyString")
	// specify with regular expression
	RunTest("example2-plain.ipf", testCase="Verify.*", enableRegExp = 1)
	// Give the test a descriptive name
	RunTest("example2-plain.ipf", name="My first test")
End

static Function VerifyStringComparison()

	string strLow      = "123abc"
	string strUP       = "123ABC"

	// by default, string comparison is case insensitive
	CHECK_EQUAL_STR(strLow, strUP)
	// case sensitivity can be explicitly enabled or disabled
	CHECK_EQUAL_STR(strLow, strUP, case_sensitive = 0)
	// WARN is used here because the strings differ when compared case sensitively
	WARN_EQUAL_STR(strLow, strUP, case_sensitive = 1)
	// other comparisons are also possible
	CHECK_EQUAL_VAR(strlen(strLow), 6)
End

static Function VerifyEmptyString()

	string nullString
	string emptyString = ""
	string filledString = "filled"

	// an uninitialized string is not equal to an empty string.
	CHECK_NEQ_STR(emptyString, nullString)
	// same as for a filled string
	CHECK_NEQ_STR(filledString, nullString)
	// there is an explicit function for empty strings
	CHECK_EMPTY_STR(emptyString)
	// and also for null strings.
	CHECK_NULL_STR(nullString)
End
command
run_IGNORE()

Note

The definitions of the assertions used in this test suite can be found in the Assertions section.

Example3

This test suite emphasizes the difference between the WARN(), CHECK(), and REQUIRE() assertion variants.

The WARN_* variants do not increment the error count if the assertion fails. CHECK_* variants increment the error count on failure. REQUIRE_* variants also increment the error count and additionally stop the execution of the test case immediately if the assertion fails.

Even if a test has failed, the test end hook is still executed. See Example5 for more details on hooks.

example3-plain.ipf
#pragma rtGlobals=3
#pragma TextEncoding="UTF-8"
#pragma ModuleName=Example3

#include "unit-testing"

// WARN_* does not increment the error count
Function WarnTest()

	WARN_EQUAL_VAR(1.0,0.0)
End

// CHECK_* increments the error count
Function CheckTest()

	CHECK_EQUAL_VAR(1.0,0.0)
End

// REQUIRE_* will stop execution of the test case immediately
Function RequireTest()

	REQUIRE_EQUAL_VAR(1.0,0.0)
	print "If I'm reached math is wrong !"
End
command
print RunTest("example3-plain.ipf")

The error count returned for this test suite is 2: WarnTest does not add to the count, while CheckTest and RequireTest contribute one failed assertion each.

Note

See also the section on Assertion Types.

Example4

This test suite shows the use of test assertions for waves.

The type of a wave can be checked with CHECK_WAVE() using binary flags for the major type and the optional minorType parameter. All flags are defined in Test Wave Flags and can be combined with the bitwise OR operator, as shown in the CHECK_WAVE() call on wv2. Such a combined check fails if a single flag does not match. This is demonstrated by the subsequent WARN_WAVE() call, which fails because wv2 is not a free wave, even though it is, as proven before, definitely numeric.

It is noteworthy that each test case is executed in a fresh and empty datafolder. There is no need to use KillWaves or Make/O here.

example4-wavechecking.ipf
#pragma rtGlobals=3
#pragma TextEncoding="UTF-8"
#pragma ModuleName=Example4

#include "unit-testing"


static Function CheckMakeDouble()

	CHECK_EMPTY_FOLDER()

	Make/D myWave
	CHECK_WAVE(myWave, NUMERIC_WAVE, minorType = DOUBLE_WAVE)
	CHECK_EQUAL_VAR(DimSize(myWave, 0), 128)

	Duplicate myWave, myWaveCopy
	CHECK_EQUAL_WAVES(myWave, myWaveCopy)

End

static Function CheckMakeText()

	CHECK_EMPTY_FOLDER()

	Make/T myWave
	CHECK_WAVE(myWave, TEXT_WAVE)
	CHECK_EQUAL_VAR(DimSize(myWave, 0), 128)

	Duplicate/T myWave, myWaveCopy
	CHECK_EQUAL_WAVES(myWave, myWaveCopy)
End

static Function CheckWaveTypes()

	WAVE/Z wv
	CHECK_WAVE(wv, NULL_WAVE)

	Make/FREE/U/I wv0
	CHECK_WAVE(wv0, FREE_WAVE | NUMERIC_WAVE, minorType = UNSIGNED_WAVE | INT32_WAVE)

	Make/FREE/T wv1
	CHECK_WAVE(wv1, FREE_WAVE | TEXT_WAVE)

	Make/O/U/I root:wv2/WAVE=wv2
	CHECK_WAVE(wv2, NORMAL_WAVE | NUMERIC_WAVE, minorType = UNSIGNED_WAVE | INT32_WAVE)
	//The following check for a free wave is intended to fail
	WARN_WAVE(wv2, FREE_WAVE | NUMERIC_WAVE, minorType = UNSIGNED_WAVE | INT32_WAVE)
End
command
print RunTest("example4-wavechecking.ipf")

Helper functions to check wave types and compare with reference waves are also provided in Assertions.

Example5

The two test suites show how to use test hook overrides.

They show how user code can be executed at certain points of a test run. In this test suite, additional code runs at the beginning and end of each test case. This is done by declaring the TEST_CASE_BEGIN_OVERRIDE and TEST_CASE_END_OVERRIDE functions static. Functions with this specific naming and the _OVERRIDE suffix are automatically found and registered as hooks.

Be aware that a static hook overrides any global TEST_CASE_BEGIN_OVERRIDE function for this test suite. If you want to execute the global TEST_CASE_BEGIN_OVERRIDE as well, add this code to the static override function:

FUNCREF USER_HOOK_PROTO tcbegin_global = $"ProcGlobal#TEST_CASE_BEGIN_OVERRIDE"
tcbegin_global(name)
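
Put together, a static begin hook that chains to the global one could look like the following sketch (not part of the shipped example files):

static Function TEST_CASE_BEGIN_OVERRIDE(name)
	string name

	// call the global hook defined in ProcGlobal context first
	FUNCREF USER_HOOK_PROTO tcbegin_global = $"ProcGlobal#TEST_CASE_BEGIN_OVERRIDE"
	tcbegin_global(name)

	// suite-specific additions follow here
	printf ">> Additional begin code for test case %s <<\r", name
End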

The second procedure file, example5-extensionhooks-otherSuite.ipf, is in ProcGlobal context, so its test hook extensions apply globally.

example5-extensionhooks.ipf
#pragma rtGlobals=3
#pragma TextEncoding="UTF-8"
#pragma ModuleName=Example5

#include "unit-testing"

static Function TEST_CASE_BEGIN_OVERRIDE(name)
	string name

	printf ">> Begin of Test Case %s was extended in this test suite only <<\r", name
End

static Function TEST_CASE_END_OVERRIDE(name)
	string name

	printf ">> End of Test Case %s was extended in this test suite only <<\r", name
End

static Function CheckSquareRoot()

	CHECK_EQUAL_VAR(sqrt(4.0), 2.0)
	CHECK_CLOSE_VAR(sqrt(2.0), 1.4142, tol = 1e-4)
End
example5-extensionhooks-otherSuite.ipf
#pragma rtGlobals=3
#pragma TextEncoding="UTF-8"

#include "unit-testing"

Function TEST_BEGIN_OVERRIDE(name)
	string name

	print ">> The global Test Begin is extended by this output <<"
End

Function TEST_END_OVERRIDE(name)
	string name

	print ">> The global Test End is extended by this output <<"
End

Function TEST_CASE_END_OVERRIDE(name)
	string name

	print ">> This is the global extension for the End of Test Cases <<"
End

Function TEST_SUITE_BEGIN_OVERRIDE(name)
	string name

	print ">> The Test Suite Begin is globally extended by this output <<"
End

Function TEST_SUITE_END_OVERRIDE(name)
	string name

	print ">> The Test Suite End is globally extended by this output <<"
End

Function CheckBasicMath()

	CHECK_EQUAL_VAR(1+2,3)
End
command
RunTest("example5-extensionhooks.ipf;example5-extensionhooks-otherSuite.ipf")

Each hook outputs a message starting with >>. After the test run has finished, you can see at which points the additional user code was executed.

Note

Also take a look at the Test Hooks section.

The definitions of the assertions used in this test suite can be found in the Assertions section.

Example6

This test suite shows the automatic execution of test runs from the command line. On Windows, call “autorun-test-xxx.bat” from the helper folder.

The autorun batch script executes a test run for every pxp experiment file in the current folder. After each run, a log file containing the history of the Igor Pro experiment is created in that folder. See also the section on Automate Test Runs.

example6-automatic-invocation.ipf
#pragma rtGlobals=3
#pragma TextEncoding="UTF-8"
#pragma ModuleName=Example6

#include "unit-testing"

static Function CheckTrigonometricFunctions()

	CHECK_EQUAL_VAR(sin(0.0), 0.0)
	CHECK_EQUAL_VAR(cos(0.0), 1.0)
	CHECK_EQUAL_VAR(tan(0.0), 0.0)
End
example6-runner.ipf
#pragma rtGlobals=3
#pragma TextEncoding="UTF-8"

#include "unit-testing"

Function run()

	RunTest("example6-automatic-invocation.ipf", enableJU=1)
End

In this example, the automatic invocation of the Unit Testing Framework also produces JUnit output. This allows the framework to be used in automated CI/CD pipelines.

Note

The definitions of the assertions used in this test suite can be found in the Assertions section.

Example7

This test suite shows how unhandled aborts in test cases are reported.

The test environment catches such conditions and treats the affected test case as failed. This works with both Abort and AbortOnValue.

example7-uncaught-aborts.ipf
#pragma rtGlobals=3
#pragma TextEncoding="UTF-8"
#pragma ModuleName=Example7

#include "unit-testing"

Function CheckNumber(a)
	variable a

	PASS()
	if(numType(a) == 2) // numtype 2 means a is NaN
		Abort
	endif
	return 1
End

static Function CheckNumber_not_nan()

	CheckNumber(1.0)
End

static Function CheckNumber_nan()

	CheckNumber(NaN)
End
command
RunTest("example7-uncaught-aborts.ipf")

Note

The definitions of the assertions used in this test suite can be found in the Assertions section.

Example8

This test suite shows the behaviour of the unit testing environment when user code generates an uncaught runtime error (RTE). The test environment catches this condition and prints a detailed error message to the history. The RTE is of course treated as a failed assertion.

In this example, both WAVE/Z/SDFR statements generate such an RTE because the referenced data folder does not exist.

There might be situations where the user wants to catch a runtime error (RTE) himself. TestWaveOpSelfCatch shows how to catch the RTE before the test environment handles it. The test result is then controlled manually with PASS() and FAIL(): PASS() increments the assertion counter, and FAIL() marks the assertion as failed when an RTE was caught.

example8-uncaught-runtime-errors.ipf
#pragma rtGlobals=3
#pragma TextEncoding="UTF-8"
#pragma ModuleName=Example8

#include "unit-testing"

Function TestWaveOp()

	WAVE/Z/SDFR=$"I dont exist" wv
End

Function TestWaveOpSelfCatch()

	try
		WAVE/Z/SDFR=$"I dont exist" wv; AbortOnRTE
		PASS()
	catch
		// Do not forget to clear the RTE
		variable err = getRTError(1)
		FAIL()
	endtry
End
command
RunTest("example8-uncaught-runtime-errors.ipf")

Note

The definitions of the assertions used in this test suite can be found in the Assertions section.

Example9

This example shows how the whole framework can be run in an independent module.

Please note that when calling the test suite, the procedure window name does not need to include any independent module specification.

example9-IM.ipf
#pragma TextEncoding = "UTF-8"
#pragma rtGlobals=3
#pragma IndependentModule=Example9

#include "unit-testing"

Function TestMe()

	CHECK_EQUAL_VAR(1, 1)
End
command
Example9#RunTest("example9-IM.ipf")

Note

The definitions of the assertions used in this test suite can be found in the Assertions section.

Example10

This example tests the functionality of a peak-find library found on GitHub. It demonstrates that by defining a unit test we can rely on the functionality of an external library. Even though we do not inspect the library's code itself, we can test it and see whether it fits our needs. Keep in mind that a program is only as good as the unit tests that define it.

example10-peakfind.ipf
#pragma TextEncoding = "UTF-8"
#pragma rtGlobals=3		// Use modern global access method and strict wave access.

#include "unit-testing"

// https://github.com/ukos-git/igor-common-utilities.git
#include "common-utilities"

Function testSinglePeakFit()

	// define a peak
	variable peak_position = 570
	variable peak_fwhm = 50

	// create the peak
	Make/O root:spectrum/WAVE=peak
	SetScale x, 0, 1000, "nm", peak
	peak = Gauss(x, peak_position, peak_fwhm) + gnoise(1e-3)

	// do the fit
	wave/Z/WAVE peakParam = Utilities#FitGauss(peak)

	// check that our input wave was good
	REQUIRE_WAVE(peak, NUMERIC_WAVE, minorType = FLOAT_WAVE)
	// check that the returned function is a valid wave
	REQUIRE_WAVE(peakParam, FREE_WAVE | WAVE_WAVE)
	// require at least one peak
	REQUIRE_EQUAL_VAR(1, DimSize(peakParam, 0) > 0)
	// warn if more than one peak was found
	WARN_EQUAL_VAR(1.0, DimSize(peakParam, 0))

	// convert to human readable result
	wave/Z peakInfo = Utilities#peakParamToResult(peakParam)

	// again, check that the function returned a valid wave
	CHECK_WAVE(peakInfo, FREE_WAVE | NUMERIC_WAVE)
	// check the found peak against the peak definition
	REQUIRE_CLOSE_VAR(peakInfo[0][%position], peak_position, tol=peakInfo[0][%position_err])
	REQUIRE_CLOSE_VAR(peakInfo[0][%fwhm], peak_fwhm, tol=peakInfo[0][%fwhm_err])
End
command
RunTest("example10-peakfind.ipf")

Note

The definitions of the assertions used in this test suite can be found in the Assertions section.

Example11

This example demonstrates the usage of the unit-testing framework's background monitor. It contains test cases that register a user task to be monitored. After the initial test case procedure finishes, the unit-testing framework drops back to Igor's command line. Once the user task finishes, the framework resumes the test case in the given _REENTRY function. To emphasize that this feature can be chained, the first _REENTRY function registers the same user task again with another _REENTRY function to resume.

example11-background.ipf
#pragma rtGlobals=3
#pragma TextEncoding="UTF-8"
#pragma ModuleName=Example11

#include "unit-testing"

// A task that is run by the test. It could be data acquisition,
// but to keep it simple this task runs until the current seconds
// are divisible by 5 without remainder.
static Function ReEntryTask(s)
	STRUCT WMBackgroundStruct &s

	return !mod(trunc(datetime), 5)
End

// A test case that sets up the task, registers it to be monitored
// and registers a reentry function that is called when the task finishes (or times out).
// The reentry function's code belongs to the same test case BackgroundTest().
Function BackgroundTest()

	CtrlNamedBackGround testtask, proc=Example11#ReEntryTask, period=1, start
	RegisterUTFMonitor("testtask", 1, "FirstReentry_reentry")
End

// A second test case that registers our second reentry function.
Function BackgroundTest2()

	CtrlNamedBackGround testtask, proc=Example11#ReEntryTask, period=1, start
	RegisterUTFMonitor("testtask", 1, "SecondReentry_REENTRY", timeout=2)
End

// The registered reentry function from BackgroundTest().
// This does not have to be the end of the test case, so let's assume there is more work to do
// and register our testtask again, this time with another reentry function.
Function FirstReentry_REENTRY()

	WARN_EQUAL_VAR(1, 0)
	// Setup follow up background task
	CtrlNamedBackGround testtask, proc=Example11#ReEntryTask, period=1, start
	RegisterUTFMonitor("testtask", 1, "SecondReentry_REENTRY")
End

// After two task runs, our BackgroundTest() test case concludes with this final reentry function.
// Note that the test case BackgroundTest2() registers this function as its reentry function as well,
// so the code in this function is part of both test cases.
Function SecondReentry_REENTRY()

	WARN_EQUAL_VAR(2, 0)
End
command
RunTest("example11-background.ipf")

Note

The definitions of the assertions used in this test suite can be found in the Assertions section.

Example12

This example demonstrates the usage of the unit-testing framework's background monitor from a TEST_CASE_BEGIN_OVERRIDE() hook, see Test Hooks. The background monitor registration can be done from any begin hook.

example12-background-using-hooks.ipf
#pragma rtGlobals=3
#pragma TextEncoding="UTF-8"
#pragma ModuleName=Example12

#include "unit-testing"

// A task that is run by the test. It could be data acquisition,
// but to keep it simple this task runs until the current seconds
// are divisible by 5 without remainder.
static Function ReEntryTask(s)
	STRUCT WMBackgroundStruct &s

	return !mod(trunc(datetime), 5)
End

static Function TEST_CASE_BEGIN_OVERRIDE(name)
	string name

	printf ">> Registration of task monitoring in test case begin hook <<\r"

	CtrlNamedBackGround testtask, proc=Example12#ReEntryTask, period=1, start
	RegisterUTFMonitor("testtask", 1, "TestCase_reentry")
End

// A test case that shows output generated by a warning.
// The test case begin hook above is called before this function runs.
// So there is already a reentry function registered that continues execution
// of this test case later.
Function MonitorInHookTest()

	string infoText = "Entering example test case"
	string badComparison = "in MonitorInHookTest()"
	// the warning will generate output to history
	WARN_EQUAL_STR(infoText, badComparison)
End

// The registered reentry function from the test case begin hook.
// The MonitorInHookTest test case continues here after ReEntryTask has finished.
// We generate some history output with a warning.
Function TestCase_reentry()

	string infoText = "Continuing example test case in reentry function..."
	string badComparison = "in TestCase_reentry()"
	// the warning will generate output to history
	WARN_EQUAL_STR(infoText, badComparison)
End
command
RunTest("example12-background-using-hooks.ipf")

Note

The definitions of the assertions used in this test suite can be found in the Assertions section.

Example13

This example shows how test cases are used with data generators. It includes test cases that take one argument, which is provided by a data generator function. The data generator returns a wave of the argument's type, and the test case is called once for each element of that wave.

example13-multi-test-data.ipf
#pragma rtGlobals=3
#pragma ModuleName=Example13

#include "unit-testing"

// RunTest("example13-multi-test-data.ipf")

// This example demonstrates the usage of multi data test cases.
// Each test case accepts an optional parameter and is tagged by a comment above it
// that names a data generator. The attributed data generator returns a wave of the
// same type as the parameter the test case accepts.
// The test case is executed once for each wave element.

// This is the first data generator function used by MDTestCaseVar
static Function/WAVE tcDataGenVar()
	Make/FREE data = {5, 1}
	SetDimLabel 0, 0, first, data
	SetDimLabel 0, 1, second, data
	return data
End

// UTF_TD_GENERATOR tcDataGenVar
static Function MDTestCaseVar([var])
	variable var

	CHECK(var == 1 || var == 5)
End

static Function/WAVE tcDataGenStr()
	Make/FREE/T favorites = {"Dancing with the Source", "A Tear in the Veil", "The Battle for Divinity"}
	return favorites
End

// UTF_TD_GENERATOR tcDataGenStr
static Function MDTestCaseStr([str])
	string str

	CHECK(strsearch(str, "the", 0, 2) >= 0)
End

static Function/WAVE tcDataGenWv()
	Make/FREE wa = {1}
	Make/FREE wb = {1}
	Make/FREE/WAVE w1 = {wa, wb}
	Make/FREE/WAVE w2 = {wa, wb}
	Make/FREE/WAVE wr = {w1, w2}
	return wr
End

// UTF_TD_GENERATOR tcDataGenWv
static Function MDTestCaseWv([wv])
	WAVE wv

	WAVE/WAVE wr = wv

	CHECK_EQUAL_WAVES(wr[0], wr[1])
End

static Function/WAVE tcDataGenDFR()
	DFREF dfr = NewFreeDataFolder()
	string/G dfr:data = "Damn it, Steve!"
	Make/FREE/DF w = {dfr}
	return w
End

// UTF_TD_GENERATOR tcDataGenDFR
static Function MDTestCaseDFR([dfr])
	DFREF dfr

	SVAR/Z s = dfr:data
	CHECK(strsearch(s, "Steve!", 0) >= 0)
End
command
RunTest("example13-multi-test-data.ipf")

Note

The definitions of the assertions used in this test suite can be found in the Assertions section.