Organizing Tests into Suites

From STRIDE Wiki

Overview

Unless you specify otherwise, stride publishes results from all of your test units into a single top-level test suite. This arrangement is satisfactory as long as the number of tests is relatively small. But as the number of test units grows, or as test units from disparate groups are aggregated (as in a Continuous Integration arrangement), this flat report organization becomes hard to use.

The software under test is typically broken up into separate units (e.g. libraries, components, objects) and organized into a hierarchy that reflects its functional organization. If STRIDE test results were organized identically to the software under test, understanding and analyzing them would be much simpler.


Organizing Into Suites at Test Runtime

The stride test runner allows you to specify a "path" under which a test unit's results will be published. The path is a forward-slash delimited list of suite names. The path is optionally specified in the argument to the -r command line option.

Following are some examples:

<-r omitted>
If you omit the -r option, stride runs all of the test units specified in the database and publishes them into a single top-level root test suite.
-r /{TestUnit1; TestUnit2}
Here we specify two test units to be run and published to the root suite.
-r /suite1{TestUnit1} -r /suite2{TestUnit2}
Here the two test units are each published to their own suite. (Note that -r can be specified any number of times on a stride command line.)
-r /suite1/suite1.1/suite1.1.1{TestUnit1; TestUnit2}
Here, the two test units are published to a suite that is several levels deep in a hierarchy. Note that stride will create suites as needed to fulfill your publishing requirements.

Another important benefit of organizing your test units into suites is that each suite provides a subtotal roll-up of several important results. Pass/fail counts, error counts, and time under test are summed at each suite level for all tests under that suite and any of its subsuites.

Techniques for Test Organization

Using an Options File

One way to organize an aggregation of tests into suites is to create an options file of stride run options that map tests to the suites under which they are to be published. This options file is then submitted to stride along with any other options.

This technique has a big shortcoming in an environment where test units may be added or removed. Even if controls are put into place, the options file will inevitably get out of sync with the test content.[1]

Another variation on this technique is to have each workgroup maintain their own options file, then submit all of the options files to stride when the aggregated tests are run. (You can specify more than one options file on the stride command line: e.g. -O group1.txt -O group2.txt ....)
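For illustration, a per-group options file might look like the following. The suite paths and test unit names here are hypothetical, and we assume the file simply lists run options, one per line, in the same form they would take on the command line:

```
-r /net/ip{IpUnitTests}
-r /net/tcp{TcpUnitTests}
```

Submitting each workgroup's file with -O then publishes that group's test units under its own branch of the suite hierarchy.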

Automating Your Test Organization

The key to reliable and repeatable organization is automation. If you adopt a standard project-wide naming convention for your test units, a little scripting glue can provide the organization, letting test unit authors themselves determine where each test unit is published within the suite hierarchy.

The technique works as follows: when the tests are to be run, the available test unit names are listed, and the path of parent suites is derived from each name. An options file is generated dynamically from this information and submitted to stride with the run.

The naming convention follows the suite specification pattern: Name segments describing a hierarchy are delimited by a unique character or character sequence.

For C++ environments where namespaces are used, the scope operator (::) makes a convenient delimiter. For other environments, the underscore (_) is a good choice. The Sample Tests use these naming conventions and can provide a convenient testbed to see how the automation technique works.

Taking a couple of example test units from the Sample Tests:

s2::RuntimeServices::Simple --translates to--> -r /s2/RuntimeServices/(s2::RuntimeServices::Simple)

and

s2_testcclass_basic_fixtures --translates to--> -r /s2/testcclass/basic/(s2_testcclass_basic_fixtures)

Note that the full test unit name (in parentheses) must be specified since this is the name that is known to stride.

OrganizeIntoSuites.pl

The following Perl script reads a list of test unit names from stdin and, for each, writes a "suite-ized" -r run option to stdout.

#!/usr/bin/perl
use strict;
use warnings;

# regular expression specifying the name separator
my $token = '::|_';

while( <> ) {
    chomp;
    my @segments = split(/$token/);
    # remove the last segment from the array
    my $testunit = pop @segments;
    print "-r /";
    foreach my $segment (@segments) {
        print "$segment/";
    }
    print "($_)\n";
}

Sample Output

If we build a target application that includes all of the sample tests, then list the test units and pipe the output to OrganizeIntoSuites.pl like this:

stride --list --database TestApp.sidb | perl OrganizeIntoSuites.pl

the following output is produced:

-r /s2/testcclass/basic/(s2_testcclass_basic_fixtures)
-r /s2/testcclass/basic/(s2_testcclass_basic_simple)
-r /s2/testcclass/runtimeservices/(s2_testcclass_runtimeservices_dynamic)
-r /s2/testcclass/runtimeservices/(s2_testcclass_runtimeservices_override)
-r /s2/testcclass/runtimeservices/(s2_testcclass_runtimeservices_simple)
-r /s2/testcclass/runtimeservices/(s2_testcclass_runtimeservices_varcomment)
-r /s2/testclass/Basic/(s2_testclass::Basic::Exceptions)
-r /s2/testclass/Basic/(s2_testclass::Basic::Fixtures)
-r /s2/testclass/Basic/(s2_testclass::Basic::Simple)
-r /s2/testclass/RuntimeServices/(s2_testclass::RuntimeServices::Dynamic)
-r /s2/testclass/RuntimeServices/(s2_testclass::RuntimeServices::Override)
-r /s2/testclass/RuntimeServices/(s2_testclass::RuntimeServices::Simple)
-r /s2/testclass/RuntimeServices/(s2_testclass::RuntimeServices::VarComment)
-r /s2/testclass/srTest/(s2_testclass::srTest::Dynamic)
-r /s2/testclass/srTest/(s2_testclass::srTest::Simple)
-r /s2/testdouble/Basic/(s2_testdouble::Basic::TestFunction)
-r /s2/testdouble/Basic/(s2_testdouble::Basic::TestFunctionWithDepend)
-r /s2/testflist/basic/(s2_testflist_basic_fixtures)
-r /s2/testflist/basic/(s2_testflist_basic_simple)
-r /s2/testflist/runtimeservices/(s2_testflist_runtimeservices_override)
-r /s2/testflist/runtimeservices/(s2_testflist_runtimeservices_simple)
-r /s2/testflist/runtimeservices/(s2_testflist_runtimeservices_varcomment)
-r /s2/testflist/runtimservices/(s2_testflist_runtimservices_dynamic)
-r /s2/testintro/(s2_testintro_cclass)
-r /s2/testintro/(s2_testintro_flist)
-r /s2/testintro/(s2_testintro_testdoubles)
-r /s2/testintro/(s2_testintro_testpoints)
-r /s2/testlog/Basic/(s2_testlog::Basic::Log)
-r /s2/testmacro/Basic/(s2_testmacro::Basic::Asserts)
-r /s2/testmacro/Basic/(s2_testmacro::Basic::CString)
-r /s2/testmacro/Basic/(s2_testmacro::Basic::Comparison)
-r /s2/testmacro/Basic/(s2_testmacro::Basic::Exceptions)
-r /s2/testmacro/Basic/(s2_testmacro::Basic::ExpectBool)
-r /s2/testmacro/Basic/(s2_testmacro::Basic::FloatingPointComparison)
-r /s2/testmacro/Basic/(s2_testmacro::Basic::Predicates)
-r /s2/testpoint/(s2_testpoint_basic)


Putting It All Together

By combining the pre-processing of the test unit list with the running of the tests in a shell script or batch file, you can create a fully automated solution.

Here's an example of a script used to preprocess and run the tests:

stride --list --database TestApp.sidb | perl OrganizeIntoSuites.pl > TestsInSuites.txt
stride --database TestApp.sidb --device TCP:localhost:8000 -O TestsInSuites.txt


Notes

  1. The - (hyphen) wildcard can help with this problem, but test units that aren't specified will all be published to a single suite instead of where they ought to be.