Test Units Overview
Revision as of 15:10, 6 April 2009
Introduction
STRIDE enables testing of C/C++ code through the use of xUnit-style test units. Test units can be written by developers, captured using an SCL pragma, and executed from the host. STRIDE facilitates the execution of some or all of the test units by automatically creating entry points for test unit execution on the target.
What are STRIDE Test Units?
STRIDE Test Units is a general term for xUnit-style test modules running within the STRIDE runtime framework. These tests, written in C and C++, are compiled and linked with your embedded software and run in place on your target hardware. They are suitable both for developer unit testing and for ongoing regression testing.
An external Test Runner is provided which controls the execution of the tests and publishes test results to the local filesystem and optionally to S2's Internet STRIDE Test Space.
Test Unit Features
STRIDE Test Units provide the following capabilities, typical of all xUnit-style testing frameworks:
- Specification of a test as a test method
- Aggregation of individual tests into test suites which form execution and reporting units
- Specification of expected results within test methods (typically by using one or more Test Macros)
- Test fixturing (optional setup and teardown)
- Automated execution
- Automated results report generation
They also provide these unique features:
- Remote Execution
- Execution and reporting controlled from a remote host, thus making the framework useful for on-target embedded system testing
- Dynamic Test and Suite Generation
- Test cases and suites can be created and manipulated at runtime
- Test Doubles
- Dynamic runtime function substitution to implement on-the-fly mocks, stubs, and doubles
- Asynchronous Testing Framework
- Support for testing of asynchronous activities occurring in multiple threads
- Multiprocess Testing Framework
- Support for testing across multiple processes running simultaneously on the target
- Automatic Timing Data Collection
- Automatic "time under test" collection
- Automatic Results Publishing to Local Disk and Internet
- Automatic publishing of test results to STRIDE Test Space
Test Unit Deployment
Test Units implement three different test deployment strategies, each of which is demonstrated in the samples:
- test units based on C++ test classes,
- test units based on C test functions,
- test units based on a C language implementation of test classes (struct containing function pointers)
The required steps to get started with writing test units are as follows:
- Write a test unit and capture it with one of the Test Unit pragmas. You may simply create a C++ class with a number of test methods and capture it using the scl_test_class pragma:
- Build and generate the IM code using STRIDE Build Tools:
- Build the generated IM code along with the rest of the source to create your application's binary.
- Download your application to the Target and start it.
- Execute your test units and publish results using the Test Unit Runner.
// testcpp.h
class Simple
{
public:
int test1() { return 0;} // PASS
int test2() { return 23;} // FAIL <>0
bool test3() { return true;} // PASS
bool test4() { return false;} // FAIL
};
#ifdef _SCL
#pragma scl_test_class(Simple)
#endif
Or, if you are writing in C, create a set of global functions and capture them with the scl_test_flist pragma (in more complicated scenarios where initialization is required, the scl_test_cclass pragma may be a better choice):
// testc.h
#ifdef __cplusplus
extern "C" {
#endif
int test1(void)
{
return 0; // PASS
}
int test2(void)
{
return 23; // FAIL <>0
}
#ifdef __cplusplus
}
#endif
#ifdef _SCL
#pragma scl_test_flist("Simple", test1, test2)
#endif
> s2scompile --c++ testcpp.h
> s2scompile --c testc.h
> s2sbind --output=test.sidb testcpp.h.meta testc.h.meta
> s2sinstrument --im_name=test test.sidb
If using STRIDE Studio, create a new workspace (or open an existing one), add the above source files, adjust your compiler settings, build and generate the IM manually through the UI, or write custom scripts to automate the same sequence.
> perl testunitrun.pl -u -d test.sidb
If using STRIDE Studio, you can execute individual test units interactively by opening the user interface view corresponding to the test unit you would like to execute and calling it. Furthermore, you can write a simple script to automate test unit execution and result publishing.
Requirements
Several variations on typical xUnit-style test units are supported. The additional supported features include:
- Test status can be set using STRIDE Runtime APIs or by specifying simple return types for test methods.
- Integral return types: 0 = PASS; <> 0 = FAIL
- C++ bool return type: true = PASS; false = FAIL
- A void return type with no explicit status setting is assumed to PASS
- Test writers can create additional child suites and tests at runtime by using Runtime APIs.
- The framework does not rely on exceptions for status reporting.
- One of the Test Unit pragmas must be applied.
The STRIDE test class framework has the following requirements of each test class:
- The test class must have a suitable default (no-argument) constructor.
- The test class must have one or more public methods suitable as test methods. Allowable test methods always take no arguments (void) and return either void, simple integer types (int, short, long or char) or bool. At this time, we do not allow typedef types or macros for the return values specification.
- The scl_test_class pragma must be applied to the class.
Simple example using return values for status
Using a Test Class
#include <srtest.h>
class Simple {
public:
int tc_Int_ExpectPass() {return 0;}
int tc_Int_ExpectFail() {return -1;}
bool tc_Bool_ExpectPass() {return true;}
bool tc_Bool_ExpectFail() {return false;}
};
#ifdef _SCL
#pragma scl_test_class(Simple)
#endif
Using a Test Function List
#include <srtest.h>
#ifdef __cplusplus
extern "C" {
#endif
int tf_Int_ExpectPass(void) {return 0;}
int tf_Int_ExpectFail(void) {return -1;}
#ifdef _SCL
#pragma scl_test_flist("Simple", tf_Int_ExpectPass, tf_Int_ExpectFail)
#endif
#ifdef __cplusplus
}
#endif
Simple example using runtime test service APIs
Using a Test Class
#include <srtest.h>
class RuntimeServices_basic {
public:
void tc_ExpectPass()
{
srTestCaseAddComment(srTEST_CASE_DEFAULT, "this test should pass");
srTestCaseSetStatus(srTEST_CASE_DEFAULT, srTEST_PASS, 0);
}
void tc_ExpectFail()
{
srTestCaseAddComment(srTEST_CASE_DEFAULT, "this test should fail");
srTestCaseSetStatus(srTEST_CASE_DEFAULT, srTEST_FAIL, 0);
}
void tc_ExpectInProgress()
{
srTestCaseAddComment(srTEST_CASE_DEFAULT, "this test should be in progress");
}
};
#ifdef _SCL
#pragma scl_test_class(RuntimeServices_basic)
#endif
Using a Test Function List
#include <srtest.h>
#ifdef __cplusplus
extern "C" {
#endif
void tf_ExpectPass(void)
{
srTestCaseAddComment(srTEST_CASE_DEFAULT, "this test should pass");
srTestCaseSetStatus(srTEST_CASE_DEFAULT, srTEST_PASS, 0);
}
void tf_ExpectFail(void)
{
srTestCaseAddComment(srTEST_CASE_DEFAULT, "this test should fail");
srTestCaseSetStatus(srTEST_CASE_DEFAULT, srTEST_FAIL, 0);
}
void tf_ExpectInProgress(void)
{
srTestCaseAddComment(srTEST_CASE_DEFAULT, "this test should be in progress");
}
#ifdef _SCL
#pragma scl_test_flist("RuntimeServices_basic", tf_ExpectPass, tf_ExpectFail, tf_ExpectInProgress)
#endif
#ifdef __cplusplus
}
#endif
Simple example using srTest base class
#include <srtest.h>
class MyTest : public stride::srTest {
public:
void tc_ExpectPass()
{
testCase.AddComment("this test should pass");
testCase.SetStatus(srTEST_PASS, 0);
}
void tc_ExpectFail()
{
testCase.AddComment("this test should fail");
testCase.SetStatus(srTEST_FAIL, 0);
}
void tc_ExpectInProgress()
{
testCase.AddComment("this test should be in progress");
}
int tc_ChangeMyName()
{
testCase.AddComment("this test should have name = MyChangedName");
testCase.SetName("MyChangedName");
return 0;
}
int tc_ChangeMyDescription()
{
testCase.AddComment("this test should have a description set");
testCase.SetDescription("this is my new description");
return 0;
}
};
#ifdef _SCL
#pragma scl_test_class(MyTest)
#endif
Using Testpoints
Testpoints are covered in the article Using Testpoints.
Scripting a Test Unit
To automate the execution and reporting of a Test Unit, a script is required. Scripts can be written by hand or generated automatically using the Script Wizard and a corresponding template script. Test unit scripts use the AutoScript TestUnits collection; an AutoScript TestUnit object assembles all of the reporting information for the test unit and its corresponding test methods.
- Require use of the AutoScript TestUnits collection
- Can be written by hand (see the examples below)
- Can leverage templates via the Script Wizard
- Order of multiple test units is dictated by SUID assignment
Single test unit example
The following example script is used to harness a test unit that has been captured using #pragma scl_test_class(Simple).
JavaScript
var tu = ascript.TestUnits.Item("Simple");
// Ensure test unit exists
if (tu != null)
tu.Run();
Perl
use strict;
use Win32::OLE;
Win32::OLE->Option(Warn => 3);
my $tu = $main::ascript->TestUnits->Item("Simple");
if (defined $tu) {
$tu->Run();
}
Multiple test units example
The following example script is used to harness two test units that have been captured using #pragma scl_test_class(Simple1) and #pragma scl_test_class(Simple2).
JavaScript
var Units = ["Simple1", "Simple2"];
// iterate through each test unit
for (var i = 0; i < Units.length; i++)
{
var tu = ascript.TestUnits.Item(Units[i]);
if (tu != null)
tu.Run();
}
Perl
use strict;
use Win32::OLE;
Win32::OLE->Option(Warn => 3);
# initialize an array with the test unit names
my @UnitNames = ("Simple1","Simple2");
foreach (@UnitNames) {
my $tu = $main::ascript->TestUnits->Item($_);
die "TestUnit not found: $_\n" unless (defined $tu);
$tu->Run();
}