Advanced Usage

Test Hooks

A Test Run can be extended with user-defined code at specific points during its execution. These pre-defined injection points are at the beginning and at the end of a complete Test Run, of each Test Suite, and of each Test Case.

The following functions are reserved for user code injections:


Executed at the beginning of a Test Run.


Executed at the end of a Test Run.


Executed at the beginning of a Test Suite.


Executed at the end of a Test Suite.


Executed at the beginning of a Test Case.


Executed at the end of a Test Case.


TEST_END_OVERRIDE() is executed at the very end of a test run, so the Igor debugger state has already been reset to the state it had before RunTest() was executed.


The functions TEST_SUITE_BEGIN_OVERRIDE() and TEST_SUITE_END_OVERRIDE() as well as TEST_CASE_BEGIN_OVERRIDE() and TEST_CASE_END_OVERRIDE() can also be defined locally in a test suite with the static keyword. Example2 shows how static functions are called by the framework.

These functions are executed automatically if they are defined anywhere in the global or local context. For example, TEST_CASE_BEGIN_OVERRIDE() is executed at the beginning of each Test Case. Locally defined functions always override globally defined ones of the same name. To illustrate this behavior, consider the following scenario: a user wants code to be executed only in one specific Test Suite. The functions TEST_SUITE_BEGIN_OVERRIDE() and TEST_SUITE_END_OVERRIDE() can then be defined locally within that Test Suite by declaring them static. The local (static) functions replace any previously defined global functions. Injecting additional user code at certain points of a Test Run is demonstrated in Example5.
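As a sketch, such local hooks could look like the following. The single string parameter (the suite name) and the printed messages are illustrative assumptions, not a definitive implementation:

// Local hooks for this Test Suite only; they shadow any global
// TEST_SUITE_*_OVERRIDE functions of the same name.
static Function TEST_SUITE_BEGIN_OVERRIDE(name)
  string name

  print "Setting up suite: " + name
End

static Function TEST_SUITE_END_OVERRIDE(name)
  string name

  print "Tearing down suite: " + name
End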

To give a possible use case, consider the following scenario: by default, each Test Case is executed in its own temporary data folder. TEST_CASE_BEGIN_OVERRIDE() can be used to set the data folder to root:. As a result, each Test Case is executed in root: and no cleanup is done afterward. The next Test Case then starts with the data the previous Test Case left in root:.
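This use case could be sketched as follows (a hedged example; the hook's string parameter name is an assumption):

// Run every Test Case in root: instead of a temporary folder.
// NOTE: no cleanup is performed, data accumulates in root:.
Function TEST_CASE_BEGIN_OVERRIDE(name)
  string name

  SetDataFolder root:
End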


By default the Igor debugger is disabled during the execution of a test run.

Assertions can be used in test hooks. However, the IUTF enforces that the test case itself contains at least one assertion.

JUNIT Output

All common continuous integration frameworks support JUNIT XML files as input. The Igor Unit Testing Framework supports output of test run results in JUNIT XML format. The output can be enabled by adding the optional parameter enableJU=1 to RunTest().
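For example, assuming a test suite file named myTests.ipf (a placeholder), JUNIT output could be enabled like this:

Function runWithJUnit()
  // enableJU=1 writes a JU_*.xml file to the experiment's home directory
  RunTest("myTests.ipf", enableJU = 1)
End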

The XML output files are written to the experiment's home directory with the naming scheme JU_Experiment_Date_Time.xml. If a file with the same name already exists, a three-digit number is appended to the name. The JUNIT output includes the results and history log of each test case and test suite.

The format reference that the IUTF uses is described in the section JUNIT Reference.

Test Anything Protocol Output

Output according to the Test Anything Protocol (TAP) standard, version 13, can be enabled with the optional parameter enableTAP = 1 of RunTest().
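Analogous to the JUNIT case, TAP output could be enabled as follows (the test suite name is a placeholder):

Function runWithTAP()
  // enableTAP=1 writes a uniquely named tap_*.log file to the experiment folder
  RunTest("myTests.ipf", enableTAP = 1)
End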

The output is written into a file in the experiment folder with a unique generated name of the form tap_'time'.log. This prevents accidental overwrites of previous test runs. A TAP output file combines all Test Cases from all Test Suites given in RunTest(). Additional TAP-compliant descriptions and directives for each Test Case can be added in the lines preceding the function of a Test Case (a maximum of 4 lines above Function are considered as tags, with every tag on a separate line):

// #TAPDescription: My description here
// #TAPDirective: My directive here

For directives two additional keywords are defined that can be written at the beginning of the directive message.

  • TODO indicates a Test that includes a part of the program still under development. Failures here will be ignored by a TAP consumer.

  • SKIP indicates a Test that should be skipped. A Test with this directive keyword is not executed and is always reported as ‘ok’.


// #TAPDirective: TODO routine that should be tested is still under development


// #TAPDirective: SKIP this test gets skipped

See the Experiment in the TAP_Example folder for reference.

Automate Test Runs

To further simplify test execution it is possible to automate test runs from the command line.

Steps to do that include:

  • Implement a function called run() in ProcGlobal context taking no parameters. This function must perform all necessary steps for test execution, which is at least one call to RunTest().

  • Put the test experiment together with your Test Suites and the script helper/autorun-test.bat into its own folder.

  • Run the batch file autorun-test.bat.

  • Inspect the created log file.
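A minimal run() function for autorun mode could look like this (the test suite name is a placeholder):

// Called automatically in autorun mode; must be in ProcGlobal context
// and take no parameters.
Function run()
  RunTest("myTests.ipf")
End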

The example batch files for autorun create a file named DO_AUTORUN.TXT before starting Igor Pro. This enables autorun mode. After the run() function has executed and returned, the log is saved in a file on disk and Igor Pro quits.

A different autorun mode is enabled if the file is named DO_AUTORUN_PLAIN.TXT. In this mode no log file is saved after the test execution and Igor Pro does not quit. This mode also does not use the Operation Queue.

See also Example6.

Running in an Independent Module

The unit-testing framework can itself be run in an independent module. This can be required in the very rare case that the ProcGlobal procedures might not always be compiled.

See also Example9.

Handling of Abort Code

The unit-testing framework catches Abort, logs the abort code, and continues with the next test case. The abort conditions currently differentiated are manual user aborts, stack overflows, and an Abort encountered in the code. The framework terminates when the Abort button is pressed manually.


Igor Pro 6 cannot differentiate between manual user aborts and programmatic abort codes. Pressing the Abort button in Igor Pro 6 will therefore terminate only the current test case and continue with the next queued test case.

Test Cases with Background Activity

There are situations where a test case needs to return temporarily to the Igor command prompt and continue after a background task has finished. A real-world use case is, for example, code that runs data acquisition in a background task, where the test case should continue after the acquisition has finished.

The unit-testing framework supports such cases with a feature that allows one or more background tasks to be registered for monitoring. A procedure name can be given that is called when the monitored background tasks finish. After the current test case procedure finishes, the framework returns to Igor's command prompt. This allows the user's background task(s) to do their job. After the task(s) finish, the framework continues the test case with the registered procedure.

The registration is done by calling RegisterUTFMonitor() from a test case or a BEGIN hook. The registration takes a list of background tasks that should be monitored. The mode parameter sets whether all tasks or just one task has to finish for test execution to continue. Optionally, a timeout can be set after which the test continues independently of the state of the user task(s).

See also Background Monitor Modes.

Function definition of RegisterUTFMonitor

variable RegisterUTFMonitor(string taskList, variable mode, string reentryFunc, variable timeout = defaultValue, variable failOnTimeout = defaultValue)

Registers a background monitor for a list of other background tasks.

usage example
RegisterUTFMonitor("TestCaseTask1;TestCaseTask2", BACKGROUNDMONMODE_OR, \
                   "testcase_REENTRY", timeout = 60)

This command registers the UTF background monitor task to monitor the state of TestCaseTask1 and TestCaseTask2. As mode is set to BACKGROUNDMONMODE_OR, the function testcase_REENTRY() is called to continue the current test case as soon as TestCaseTask1 OR TestCaseTask2 has finished. The reentry function is also called if both tasks are still running after 60 seconds.

  • taskList

    A list of background task names that should be monitored by the unit testing framework

    The list entries are separated by semicolons (";").

  • mode – Sets how multiple tasks are evaluated. If set to BACKGROUNDMONMODE_AND, all tasks in the list must finish (AND). If set to BACKGROUNDMONMODE_OR, one task in the list must finish (OR).

  • reentryFunc – Name of the function that the unit-testing framework calls when the monitored background tasks have finished. The function name must end with _REENTRY and it must be of the form $fun_REENTRY() (same format as test cases). The reentry function continues the current test case, therefore no hooks are called.

  • timeout – (optional) default 0. Timeout in seconds that the background monitor waits for the test case task(s). A timeout of 0 means no timeout. If the timeout is reached, the registered reentry function is called.

  • failOnTimeout – (optional) defaults to false. If set to true, the test case fails when the timeout is reached.

The function that is registered to continue the test execution must have the same format as a test case function, and its name has to end with _REENTRY. When the unit-testing framework temporarily drops to Igor's command line and resumes later, no begin/end hooks are executed; logically, the unit-testing framework stays in the same test case. It is allowed to register another monitor in the _REENTRY function.

Multiple subsequent calls to RegisterUTFMonitor() in the same function overwrite the previous registration.

Test Cases with background activity are also supported for multi data test cases, see Multi Data Test Cases with Background Activity.

See also Example11.

See also Example12.

Multi Data Test Cases

Often the same test should be run multiple times with different sets of data. The unit-testing framework offers direct support for such tests. Test cases that are run with multiple data take one optional argument. A data generator function that returns a wave is attributed to the test case, and the test case is run once for each element of that wave. The following sketches a simple multi data test case:

// UTF_TD_GENERATOR DataGeneratorFunction
Function myTestCase([arg])
  variable arg
  // add checks here
End

Function/WAVE DataGeneratorFunction()
  Make/FREE data = {1, 2, 3, 4}
  return data
End

The data generator function name is attributed to the test case myTestCase with the comment line above it, following the tag word UTF_TD_GENERATOR. A maximum of four lines above Function are considered as tags, with every tag on a separate line. If the data generator function is not found in the current procedure file, it is searched for as a non-static function (ProcGlobal context) in all procedure files of the current compilation unit. A static data generator function in another procedure file can also be specified by adding the module name to the specification. There is no search in other procedure files if such a specified function is not found.

// UTF_TD_GENERATOR GeneratorModule#DataGeneratorFunction

The data generator DataGeneratorFunction returns a wave of numeric type, and the test case takes one optional argument of numeric type. When run, myTestCase is executed four times with argument arg set to 1, 2, 3 and 4.

Supported types for arg are variable, string, complex, Integer64, data folder references and wave references. The type of the wave returned by the attributed data generator function must fit the argument type that the multi data test case takes. The data generator function name must be attributed with a comment within four lines above the test case's Function line. The key word is UTF_TD_GENERATOR, with the data generator's function name following, as seen in the simple example above. If no data generator is given, or the format of the test case function does not fit the wave type, an error message is printed and the test run is aborted.

The test case names are by default extended with :num, where num is the index of the wave element returned from the data generator. For convenience, dimension labels can be set in the data generator for each wave element; they are then used instead of the index.

Function/WAVE DataGeneratorFunction()
  Make/FREE data = {1, 2, 3, 4}
  SetDimLabel 0, 0, first, data
  SetDimLabel 0, 1, second, data
  SetDimLabel 0, 2, third, data
  SetDimLabel 0, 3, fourth, data
  return data
End

The test case names would now be myTestCase:first, myTestCase:second and so on.

The optional argument of the test case function is always supplied from the data generator wave elements. Thus the case that ParamIsDefault(arg) is true never happens.

When setting up a multi data test case with a data generator returning wave references, the test case can also use typed waves. Supported are text waves (WAVE/T), waves with data folder references (WAVE/DF) and waves with wave references (WAVE/WAVE). For such a test case or reentry function, the associated data generator must return a wave reference wave where each wave element refers to a wave of the fitting type. For a test case set up with the generic WAVE type, the type is not fixed across the elements from the data generator.
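A sketch of a test case taking a text wave, with a data generator returning a wave reference wave (the function names here are hypothetical):

// UTF_TD_GENERATOR TextWaveGenerator
Function myTextCase([wv])
  WAVE/T wv

  // each call receives one text wave from the container below
  CHECK_EQUAL_VAR(DimSize(wv, 0), 2)
End

Function/WAVE TextWaveGenerator()
  Make/FREE/T first = {"a", "b"}
  Make/FREE/T second = {"c", "d"}
  Make/FREE/WAVE/N=2 container
  container[0] = first
  container[1] = second
  return container
End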

See also Example13.

Multi Data Test Cases with Background Activity

Multi data test cases that register a background task to be monitored are supported. For a multi data test case each reentry function can have one of two different formats:

  • Function fun_REENTRY() with no argument as described in Test Cases with Background Activity

  • Function fun_REENTRY([arg]) with the same argument type as the originating multi data test case.

For the second case, the reentry function is called with the same wave element as argument as when the multi data test case was started.

If the reentry function uses a different argument type than the test case entry function, an error is printed on reentry to the unit-testing framework and further test execution is aborted.

// UTF_TD_GENERATOR DataGeneratorFunction
Function myTestCase([var])
  variable var

  CtrlNamedBackGround testtask, proc=UserTask, period=1, start
  RegisterUTFMonitor("testtask", 1, "testCase_REENTRY")
  CHECK(var == 1 || var == 5)
End

Function UserTask(s)
  STRUCT WMBackgroundStruct &s

  return !mod(trunc(datetime), 5)
End

Function/WAVE DataGeneratorFunction()
  Make/FREE data = {5, 1}
  SetDimLabel 0, 0, first, data
  SetDimLabel 0, 1, second, data
  return data
End

Function testCase_REENTRY([var])
  variable var

  print "Reentered test case with argument ", var
End

Code Coverage Determination

When running Igor Pro 9 or newer, the Igor Unit Testing Framework offers the feature to obtain code coverage information. When enabled, the IUTF adds code to the functions in the target procedure files to track execution. At the end of the test run, the IUTF outputs files in HTML format with coverage information.

This feature is enabled when the optional parameter traceWinList is set and non-empty when calling RunTest. Before the actual tests are executed, the given procedure files are modified on disk and additional function calls are inserted. The additional code does not change the execution of the original code. This step is named Instrumentation. The coverage results are output as HTML files in the experiment's folder, one for each procedure file, in the form:

   ________|________|________|threadsafe static Function Workload(variable count)
   00000100|________|________|// Inside Function comment
   00000100|________|________|	count += 1
   ________|________|________|static Function TracingTest()
   00000001|________|________|	Make/FREE/N=100 index
   00000001|________|________|	MultiThread/NT=32 index = Workload(p)
   00000001|00000000|00000001|	if(0)
   00000001|00000001|00000000|	elseif(1)
   00000001|________|________|	else
   ________|________|________|	endif

The code is prefixed with three columns, where the number in the first column is the count of how many times the line was executed. The second and third columns are used when the line contains an if conditional: the second column counts the executions where the condition was true, and the third column counts those where it was false.


The optional parameter traceOptions for RunTest allows tuning the execution with code coverage. This parameter is a list of key-value pairs that can be set using the Igor functions ReplaceNumberByKey or ReplaceStringByKey, respectively. For each settings key, a constant is defined in TraceOptionKeyStrings. The following keys are available:

  • UTF_KEY_REGEXP (REGEXP:boolean) When set, the parameter traceWinList is parsed as a regular expression matching procedure window names.

  • UTF_KEY_HTMLCREATION (HTMLCREATION:boolean) When set to zero, no HTML files are created after the test run. HTML files can be created by calling UTF_Tracing#AnalyzeTracingResult() manually after a test run.

  • UTF_KEY_INSTRUMENTATIONONLY (INSTRUMENTONLY:boolean) When set, the IUTF will only do the code instrumentation and then return. No tests get executed.

Additionally, functions and macros can be excluded from instrumentation by adding the special comment // UTF_NOINSTRUMENTATION before the first line of the function. This special comment can be in one of the five lines before the function or macro starts. Excluding basic functions or macros that are called very often can speed up the execution of instrumented code.
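Excluding a frequently called helper could look like this (the helper itself is a hypothetical example):

// UTF_NOINSTRUMENTATION
Function FastHelper(x)
  variable x

  // hot path, deliberately left uninstrumented
  return x * x
End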

Static functions in procedure files can only be instrumented if the procedure file has the pragma ModuleName set, e.g. #pragma ModuleName=myUtilities. For static functions that exist in a given procedure file without ModuleName, a warning is printed to the history. These functions are not instrumented and appear in the coverage result file with zero executions.

Instrumented code runs roughly 30% slower. In special cases a stronger slowdown can occur. In such cases, consider excluding very frequently called functions from the instrumentation with the special comment // UTF_NOINSTRUMENTATION as described above.

Coverage logging also works for threadsafe functions and functions that are executed in preemptive threads.

The instrumented code that is written to disk and executed with code coverage logging is based on the current code within Igor Pro at the time when RunTest is called. The evaluation of gathered coverage data refers to the procedure file content on disk when RunTest was called. Thus, unsaved changes in procedure files that are targeted for instrumentation will result in incorrect result files. It is strongly recommended to save all procedure file changes to disk before running a test with code coverage logging.

At the end of a run with code coverage determination, Igor Pro outputs the global coverage to stdout in the form Coverage: 12.3%. The following regular expression can be used in CI services (e.g. in GitLab) to retrieve the number: (?:^Coverage: )(\d+\.\d+)(?:%$).


The function that calls RunTest with tracing enabled must return to the Igor Pro command line afterwards to allow recompilation of the instrumented code. It is not allowed to have another RunTest call in between. The unit testing framework will abort with an error in that case.

If the full autorun feature is enabled through DO_AUTORUN.TXT the RunTest call with instrumentation must be the only call in the experiment. Specifically, if a RunTest call without tracing is placed before then the RunTest call with tracing will not execute tests.


Sets up a test and enables coverage determination for all procedure files that start with CODE_.
Function Run1()
    string traceOptions = ""
    string testsuite = "myTests.ipf"

    // Use traceWinList as regular expression
    traceOptions = ReplaceNumberByKey(UTF_KEY_REGEXP, traceOptions, 1)

    RunTest(testsuite, traceWinList="CODE_.*\.ipf", traceOptions=traceOptions)
End
Enables coverage determination for all procedure files that start with CODE_, but stops after instrumentation of the code.
Function Run2()
    string traceOptions = ""
    string testsuite = "myTests.ipf"

    // Use traceWinList as regular expression
    traceOptions = ReplaceNumberByKey(UTF_KEY_REGEXP, traceOptions, 1)
    // Execute only instrumentation
    traceOptions = ReplaceNumberByKey(UTF_KEY_INSTRUMENTATIONONLY, traceOptions, 1)

    RunTest(testsuite, traceWinList="CODE_.*\.ipf", traceOptions=traceOptions)
End

JUNIT Reference

The JUNIT implementation in the IUTF is based on the XML scheme definition from Jenkins XUnit plugin.

Example XML reference file.

Example XML file with the attributes used that are also supported by the Jenkins JUnit plugin, based on the file published at <>.
<?xml version="1.0" encoding="UTF-8"?>
<!-- a description of the JUnit XML format and how Jenkins parses it. See also junit.xsd -->

<!-- if only a single testsuite element is present, the testsuites
     element can be omitted. All attributes are optional.
     Not supported by maven surefire.
<testsuites disabled="" <!-- total number of disabled tests from all testsuites. -->
            errors=""   <!-- total number of tests with error result from all testsuites. -->
            failures="" <!-- total number of failed tests from all testsuites. -->
            tests=""    <!-- total number of tests from all testsuites. Some software may expect to only see the number of successful tests from all testsuites though. -->
            time=""     <!-- time in seconds to execute all test suites. -->

  <!-- testsuite can appear multiple times, if contained in a testsuites element.
       It can also be the root element. -->
  <testsuite name=""      <!-- Full (class) name of the test for non-aggregated testsuite documents.
                               Class name without the package for aggregated testsuites documents. Required -->
         tests=""     <!-- The total number of tests in the suite, required. -->
             errors=""    <!-- The total number of tests in the suite that errored. An errored test is one that had an unanticipated problem,
                               for example an unchecked throwable; or a problem with the implementation of the test. optional -->
             failures=""  <!-- The total number of tests in the suite that failed. A failure is a test which the code has explicitly failed
                               by using the mechanisms for that purpose. e.g., via an assertEquals. optional -->
             hostname=""  <!-- Host on which the tests were executed. 'localhost' should be used if the hostname cannot be determined. optional. not supported by maven surefire. -->
         id=""        <!-- Starts at 0 for the first testsuite and is incremented by 1 for each following testsuite. optional. not supported by maven surefire. -->
         package=""   <!-- Derived from testsuite/@name in the non-aggregated documents. optional. not supported by maven surefire. -->
         skipped=""   <!-- The total number of skipped tests. optional -->
         time=""      <!-- Time taken (in seconds) to execute the tests in the suite. optional -->
         timestamp="" <!-- when the test was executed in ISO 8601 format (2014-01-21T16:17:18). Timezone may not be specified. optional. not supported by maven surefire. -->

    <!-- Properties (e.g., environment settings) set during test execution.
         The properties element can appear 0 or once. -->
      <!-- property can appear multiple times. The name and value attributes are required. -->
      <property name="" value=""/>

    <!-- testcase can appear multiple times, see /testsuites/testsuite@tests -->
    <testcase name=""       <!-- Name of the test method, required. -->
          assertions="" <!-- number of assertions in the test case. optional. not supported by maven surefire. -->
          classname=""  <!-- Full class name for the class the test method is in. required -->
          status=""     <!-- optional. not supported by maven surefire. -->
          time=""       <!-- Time taken (in seconds) to execute the test. optional -->

      <!-- If the test was not executed or failed, you can specify one of the skipped, error or failure elements. -->

      <!-- skipped can appear 0 or once. optional -->
      <skipped message=""   <!-- message/description string why the test case was skipped. optional -->

      <!-- error indicates that the test errored.
           An errored test had an unanticipated problem.
           For example an unchecked throwable (exception), crash or a problem with the implementation of the test.
           Contains as a text node relevant data for the error, for example a stack trace. optional -->
      <error message="" <!-- The error message. e.g., if a java exception is thrown, the return value of getMessage() -->
         type=""    <!-- The type of error that occured. e.g., if a java execption is thrown the full class name of the exception. -->
         >error description</error>

      <!-- failure indicates that the test failed.
           A failure is a condition which the code has explicitly failed by using the mechanisms for that purpose.
           For example via an assertEquals.
           Contains as a text node relevant data for the failure, e.g., a stack trace. optional -->
      <failure message="" <!-- The message specified in the assert. -->
           type=""    <!-- The type of the assert. -->
           >failure description</failure>

      <!-- Data that was written to standard out while the test was executed. optional -->
      <system-out>STDOUT text</system-out>

      <!-- Data that was written to standard error while the test was executed. optional -->
      <system-err>STDERR text</system-err>

    <!-- Data that was written to standard out while the test suite was executed. optional -->
    <system-out>STDOUT text</system-out>
    <!-- Data that was written to standard error while the test suite was executed. optional -->
    <system-err>STDERR text</system-err>
XSD (XML scheme definition) file for JUNIT
<?xml version="1.0" encoding="UTF-8" ?>
<!--
The MIT License (MIT)

Copyright (c) 2014, Gregory Boissinot

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
-->

<xs:schema xmlns:xs="">
    <xs:simpleType name="SUREFIRE_TIME">
        <xs:restriction base="xs:string">
            <xs:pattern value="(([0-9]{0,3},)*[0-9]{3}|[0-9]{0,3})*(\.[0-9]{0,3})?"/>

    <xs:complexType name="rerunType" mixed="true"> <!-- mixed (XML contains text) to be compatible with version previous than 2.22.1 -->
            <xs:element name="stackTrace" type="xs:string" minOccurs="0" /> <!-- optional to be compatible with version previous than 2.22.1 -->
            <xs:element name="system-out" type="xs:string" minOccurs="0" />
            <xs:element name="system-err" type="xs:string" minOccurs="0" />
        <xs:attribute name="message" type="xs:string" />
        <xs:attribute name="type" type="xs:string" use="required" />

    <xs:element name="failure">
        <xs:complexType mixed="true">
            <xs:attribute name="type" type="xs:string"/>
            <xs:attribute name="message" type="xs:string"/>

    <xs:element name="error">
        <xs:complexType mixed="true">
            <xs:attribute name="type" type="xs:string"/>
            <xs:attribute name="message" type="xs:string"/>

    <xs:element name="skipped">
        <xs:complexType mixed="true">
            <xs:attribute name="type" type="xs:string"/>
            <xs:attribute name="message" type="xs:string"/>

    <xs:element name="properties">
                <xs:element ref="property" minOccurs="0" maxOccurs="unbounded"/>

    <xs:element name="property">
            <xs:attribute name="name" type="xs:string" use="required"/>
            <xs:attribute name="value" type="xs:string" use="required"/>

    <xs:element name="system-err" type="xs:string"/>
    <xs:element name="system-out" type="xs:string"/>
    <xs:element name="rerunFailure" type="rerunType"/>
    <xs:element name="rerunError" type="rerunType"/>
    <xs:element name="flakyFailure" type="rerunType"/>
    <xs:element name="flakyError" type="rerunType"/>

    <xs:element name="testcase">
                <xs:choice minOccurs="0" maxOccurs="unbounded">
                    <xs:element ref="skipped"/>
                    <xs:element ref="error"/>
                    <xs:element ref="failure"/>
                    <xs:element ref="rerunFailure" minOccurs="0" maxOccurs="unbounded"/>
                    <xs:element ref="rerunError" minOccurs="0" maxOccurs="unbounded"/>
                    <xs:element ref="flakyFailure" minOccurs="0" maxOccurs="unbounded"/>
                    <xs:element ref="flakyError" minOccurs="0" maxOccurs="unbounded"/>
                    <xs:element ref="system-out"/>
                    <xs:element ref="system-err"/>
            <xs:attribute name="name" type="xs:string" use="required"/>
            <xs:attribute name="time" type="xs:string"/>
            <xs:attribute name="classname" type="xs:string"/>
            <xs:attribute name="group" type="xs:string"/>

    <xs:element name="testsuite">
            <xs:choice minOccurs="0" maxOccurs="unbounded">
                <xs:element ref="testsuite"/>
                <xs:element ref="properties"/>
                <xs:element ref="testcase"/>
                <xs:element ref="system-out"/>
                <xs:element ref="system-err"/>
            <xs:attribute name="name" type="xs:string" use="required"/>
            <xs:attribute name="tests" type="xs:string" use="required"/>
            <xs:attribute name="failures" type="xs:string" use="required"/>
            <xs:attribute name="errors" type="xs:string" use="required"/>
            <xs:attribute name="group" type="xs:string" />
            <xs:attribute name="time" type="SUREFIRE_TIME"/>
            <xs:attribute name="skipped" type="xs:string" />
            <xs:attribute name="timestamp" type="xs:string" />
            <xs:attribute name="hostname" type="xs:string" />
            <xs:attribute name="id" type="xs:string" />
            <xs:attribute name="package" type="xs:string" />
            <xs:attribute name="file" type="xs:string"/>
            <xs:attribute name="log" type="xs:string"/>
            <xs:attribute name="url" type="xs:string"/>
            <xs:attribute name="version" type="xs:string"/>

    <xs:element name="testsuites">
                <xs:element ref="testsuite" minOccurs="0" maxOccurs="unbounded" />
            <xs:attribute name="name" type="xs:string" />
            <xs:attribute name="time" type="SUREFIRE_TIME"/>
            <xs:attribute name="tests" type="xs:string" />
            <xs:attribute name="failures" type="xs:string" />
            <xs:attribute name="errors" type="xs:string" />