Organizing Tests into Suites

==Overview==
Unless you specify otherwise, when [[Running Tests|running tests]], stride publishes the results from all of your test units into a single top-level test suite. This arrangement is satisfactory as long as the number of tests is relatively small. As the number of test units grows, or as test units from disparate groups are aggregated (as in a Continuous Integration arrangement), this flat report organization becomes hard to use.


The software under test itself is typically broken up into separate units (e.g. libraries, components, objects, etc.) and organized into a hierarchy that reflects the functional organization of the software. If the results from STRIDE tests could be organized identically to the software under test, understanding and analyzing test results would be much simpler.


== Organizing Into Suites at Test Runtime ==
The [[STRIDE Runner|stride test runner]] allows you to specify a "path" under which a test's results will be published. The path is a forward-slash-delimited list of suite names, optionally specified in the argument to the <tt>-r</tt> command line option.
Following are some examples; they apply to both host script test modules and target test units. Many more examples can be seen [[Stride_Runner#Input|here]].
;<tt>-r /{Test1; Test2}</tt>
: Here we specify two tests to be run and published to the root suite.
;<tt>-r /suite1{Test1} -r /suite2{Test2}</tt>
: Here the two tests are each published to their own suite. (Note that <tt>-r</tt> can be specified any number of times on a stride command line.)
;<tt>-r /suite1/suite1.1/suite1.1.1{Test1; Test2}</tt>
: Here, the two tests are published to a suite that is several levels deep in a hierarchy. Note that stride will create suites as needed to fulfill your publishing requirements.
Another important benefit of organizing your tests into suites is that each suite provides a subtotal roll-up of several important results: pass/fail counts, error counts, and time under test are all summed at the suite level across all tests under that suite and any of its subsuites.
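
As a purely illustrative sketch (this is not the actual report layout, and the suite and test names are hypothetical), the rolled-up subtotals give you a summary view along these lines:

<pre>
/suite1                  12 passed, 1 failed, 0 errors, 4.2s
  /suite1.1               7 passed, 1 failed, 0 errors, 2.5s
    Test1                 passed
    Test2                 failed
  /suite1.2               5 passed, 0 failed, 0 errors, 1.7s
</pre>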


== Techniques for Test Organization ==
=== Using an Options File ===
One way to organize an aggregation of tests into suites is to create an options file comprising stride run options that map tests to the suites under which they are to be published. This options file is then submitted to stride along with any other options.
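
For instance, such an options file might look like the following (the suite and test unit names here are hypothetical):

<pre>
-r /componentA{UnitTest1; UnitTest2}
-r /componentA/io{UnitTest3}
-r /componentB{UnitTest4}
</pre>

The file is then passed on the command line with <tt>-O</tt> (or <tt>--options_file</tt>), alongside the rest of the run options.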


This technique has a shortcoming in an environment where test units may be added or removed. Even if controls are put into place, the options file will inevitably get out of sync with the test content.<ref>The <tt>-</tt> (hyphen) [[STRIDE_Runner#Wildcard_Matching|wildcard]] can help with this problem, but test units that aren't specified will all be published to a single suite instead of where they ought to be.</ref>


Another variation on this technique is to have each workgroup maintain its own options file, then submit all of the options files to stride when the aggregated tests are run. (You can specify more than one options file on the stride command line: e.g. <tt>-O group1.txt -O group2.txt ...</tt>.)
=== Organize Test Modules in Directories ===
We recommend that you leverage host directory structures to organize your script-based test modules. Entire directories of test modules can be executed (searched recursively) by passing the directory names to the <tt>--run</tt> option.
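
For example, given a directory layout like the following (the directory and file names are hypothetical):

<pre>
tests/
  componentA/
    test_alpha.pl
    test_beta.pl
  componentB/
    test_gamma.pl
</pre>

passing <tt>--run tests</tt> would execute every test module in the tree, while <tt>--run tests/componentA</tt> would restrict the run to component A's modules.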
 
===Automating Your Test Unit Organization===
If you adopt a standard project-wide naming convention for your native code test units, a little scripting glue can automate the organization, letting each test unit's author determine where the unit will be published within the suite hierarchy.
 
The technique works like this: when the tests are run, the available test unit names are listed, and the path of parent suites is derived from each name. An options file is created dynamically from this information and submitted to stride.
 
The naming convention follows the suite specification pattern: ''Name segments describing a hierarchy are delimited by a unique character or character sequence''.
 
For C++ environments where namespaces are used, the scope operator (<tt>::</tt>) makes a convenient delimiter. For other environments, the underscore (<tt>_</tt>) is a good choice. The [[C/C%2B%2B_Samples|Sample Tests]] use these naming conventions and can provide a convenient testbed to see how the automation technique works.
 
Taking a couple of example test units from the [[C/C%2B%2B_Samples|Sample Tests]]:
 
  s2_testclass::RuntimeServices::Simple --translates to--> -r /s2/testclass/RuntimeServices{s2_testclass::RuntimeServices::Simple}


and


  s2_testcclass_basic_fixtures --translates to--> -r /s2/testcclass/basic{s2_testcclass_basic_fixtures}


Note that the full test unit name (in curly braces) must be specified since this is the name that is known to stride.


==== OrganizeIntoSuites.pl ====
The following Perl script reads a list of test unit names from <tt>stdin</tt> and writes a "suite-ized" <tt>-r</tt> run parameter for each to <tt>stdout</tt>.


<source lang="Perl">
<source lang="Perl">
#!/usr/bin/perl
#!/usr/bin/perl
Line 27: Line 55:
use warnings;
use warnings;


# set this to your separator character or string
# regular expression specifying the name separator
my $token = '::';
my $token = '::|_';


while( <> ) {
while( <> ) {
Line 39: Line 67:
         print "$segment/";
         print "$segment/";
     }
     }
     print "($_)\n";
     print "{$_}\n";
}
}
</source>
</source>


==== Sample Output ====
If we build a target application to include all of the sample tests, then list the test units and pipe the output to <tt>OrganizeIntoSuites.pl</tt> like this:
 stride --list --database TestApp.sidb | perl OrganizeIntoSuites.pl
the following output is produced:
 
<pre>
-r /s2/testcclass/basic{s2_testcclass_basic_fixtures}
-r /s2/testcclass/basic{s2_testcclass_basic_simple}
-r /s2/testcclass/runtimeservices{s2_testcclass_runtimeservices_dynamic}
-r /s2/testcclass/runtimeservices{s2_testcclass_runtimeservices_override}
-r /s2/testcclass/runtimeservices{s2_testcclass_runtimeservices_simple}
-r /s2/testcclass/runtimeservices{s2_testcclass_runtimeservices_varcomment}
-r /s2/testclass/Basic{s2_testclass::Basic::Exceptions}
-r /s2/testclass/Basic{s2_testclass::Basic::Fixtures}
-r /s2/testclass/Basic{s2_testclass::Basic::Simple}
-r /s2/testclass/RuntimeServices{s2_testclass::RuntimeServices::Dynamic}
-r /s2/testclass/RuntimeServices{s2_testclass::RuntimeServices::Override}
-r /s2/testclass/RuntimeServices{s2_testclass::RuntimeServices::Simple}
-r /s2/testclass/RuntimeServices{s2_testclass::RuntimeServices::VarComment}
-r /s2/testclass/srTest{s2_testclass::srTest::Dynamic}
-r /s2/testclass/srTest{s2_testclass::srTest::Simple}
-r /s2/testdouble/Basic{s2_testdouble::Basic::TestFunction}
-r /s2/testdouble/Basic{s2_testdouble::Basic::TestFunctionWithDepend}
-r /s2/testflist/basic{s2_testflist_basic_fixtures}
-r /s2/testflist/basic{s2_testflist_basic_simple}
-r /s2/testflist/runtimeservices{s2_testflist_runtimeservices_override}
-r /s2/testflist/runtimeservices{s2_testflist_runtimeservices_simple}
-r /s2/testflist/runtimeservices{s2_testflist_runtimeservices_varcomment}
-r /s2/testflist/runtimservices{s2_testflist_runtimservices_dynamic}
-r /s2/testintro{s2_testintro_cclass}
-r /s2/testintro{s2_testintro_flist}
-r /s2/testintro{s2_testintro_testdoubles}
-r /s2/testintro{s2_testintro_testpoints}
-r /s2/testlog/Basic{s2_testlog::Basic::Log}
-r /s2/testmacro/Basic{s2_testmacro::Basic::Asserts}
-r /s2/testmacro/Basic{s2_testmacro::Basic::CString}
-r /s2/testmacro/Basic{s2_testmacro::Basic::Comparison}
-r /s2/testmacro/Basic{s2_testmacro::Basic::Exceptions}
-r /s2/testmacro/Basic{s2_testmacro::Basic::ExpectBool}
-r /s2/testmacro/Basic{s2_testmacro::Basic::FloatingPointComparison}
-r /s2/testmacro/Basic{s2_testmacro::Basic::Predicates}
-r /s2/testpoint{s2_testpoint_basic}
 
</pre>
 
==== Putting It All Together ====
By combining the pre-processing of the test unit list with the running of the tests in a shell script or batch file, you can create a fully automated solution.
 
Here's an example of a script used to preprocess and run the tests:
 
<source lang="dos">
<source lang="dos">
stride --list --database TestApp.sidb | perl OrganizeIntoSuites.pl > TestsInSuites.txt
stride --list --database TestApp.sidb | perl OrganizeIntoSuites.pl > TestsInSuites.txt
stride --database TestApp.sidb --device TCP:localhost:8000 -O TestsInSuites.txt
stride --database TestApp.sidb --device TCP:localhost:8000 --options_file TestsInSuites.txt
</source>
</source>
== Aggregating Report Output Across Multiple Stride Invocations ==
In some cases, it may be necessary to invoke stride more than once as part of a larger overall test. In this case, you can use Test Space to aggregate the results into a single comprehensive test report.
The feature that makes this aggregation possible is the argument to [[STRIDE Runner|stride]]'s <tt>--upload</tt> parameter. If no argument is given, results are uploaded to Test Space in a single piece and marked complete. (This is equivalent to the default <tt>all</tt> argument.)
To aggregate the output of multiple executions of stride, use parameters as shown in the following example:
<pre>
> stride --options_file runComponentA.opt  --upload=start
> stride --options_file runComponentB.opt  --upload=add
> ..
> stride --options_file runComponentN.opt  --upload=finish
</pre>
(Of course, all of the <tt>--project</tt> arguments must be identical and all of the <tt>--space</tt> arguments must be identical in each execution for the results to be aggregated to the same report.)
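For instance, spelled out with explicit (hypothetical) project and space names, the sequence above would become:
<pre>
> stride --options_file runComponentA.opt --project myproject --space nightly --upload=start
> stride --options_file runComponentB.opt --project myproject --space nightly --upload=add
> ..
> stride --options_file runComponentN.opt --project myproject --space nightly --upload=finish
</pre>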
For more information on the <tt>--upload</tt> arguments, see [[STRIDE Runner]].
==Notes==
<references/>
[[Category:Running Tests]]
