Test Units Overview

== What are STRIDE Test Units? ==
STRIDE Test Units is a general term for xUnit-style test modules running within the STRIDE runtime framework. These tests, written in C and C++, are compiled and linked with your embedded software and run in place on your target hardware. They are suitable both for developer unit testing and for ongoing regression testing.


An external [[Test_Runners|Test Runner]] is provided which controls the execution of the tests and publishes test results to the local filesystem and, optionally, to S2's Internet STRIDE Test Space.


== Test Unit Features ==
In all cases, STRIDE Test Units provide the following capabilities, typical of xUnit-style testing frameworks:
* Specification of a test as a test method
* Aggregation of individual tests into test suites, which form execution and reporting units
* Specification of expected results within test methods (typically by using one or more Test Macros)
* Test fixturing (optional setup and teardown)
* Automated execution
* Automated results report generation


== Unique Test Unit Features ==
In addition, STRIDE Test Units offer these unique features:
;Remote Execution
:Execution and reporting are controlled from a remote host, making the framework useful for on-target embedded system testing.
;Dynamic Test and Suite Generation
:Test cases and suites can be created and manipulated at runtime.
;Test Doubles
:Dynamic runtime function substitution implements on-the-fly mocks, stubs, and doubles.
;Asynchronous Testing Framework (Test Points)
:Support for testing asynchronous activities occurring in multiple threads.
;Multiprocess Testing Framework
:Support for testing across multiple processes running simultaneously on the target.
;Automatic Timing Data Collection
:Automatic "time under test" collection.
;Automatic Results Publishing to Local Disk and Internet
:Automatic publishing of test results to STRIDE Test Space.


== Test Unit Deployment ==
=== Individual Tests ===
Individual tests are implemented as test functions or methods that follow a four-phase testing pattern (sketched in code below):
# Set up a test fixture (optional)
# Exercise the System Under Test (SUT)
# Verify that the expected outcome has occurred (typically using calls to assertion or expectation macros)
# Tear down the test fixture (optional)
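As a minimal illustration of the pattern, the sketch below exercises a hypothetical Counter class (not part of STRIDE) from a test method; the srEXPECT_EQ macro and scl_test_class pragma used here are covered in the examples later in this article.
<source lang=cpp>
#include <srtest.h>

// Hypothetical system under test, used only to illustrate the four phases;
// it is not part of the STRIDE framework.
class Counter
{
public:
    Counter() : value_(0) {}
    void Increment() { ++value_; }
    int Value() const { return value_; }
private:
    int value_;
};

class CounterTest : public stride::srTest
{
public:
    void IncrementOnce()
    {
        Counter counter;                  // 1. set up the fixture
        counter.Increment();              // 2. exercise the SUT
        srEXPECT_EQ(counter.Value(), 1);  // 3. verify the expected outcome
    }                                     // 4. tear down: the fixture goes
                                          //    out of scope automatically
};

#ifdef _SCL
#pragma scl_test_class(CounterTest)
#endif
</source>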
=== Test Units ===
Individual functions or methods, each typically implementing a single test case, are grouped into one or more Test Units, which are executed as atomic entities.


Grouping of individual tests into a Test Unit can be accomplished in any of three ways:
* A Test Unit can consist of the member functions of a C++ class
* A Test Unit can consist of a set of C functions
* A Test Unit can consist of C functions pointed to by members of a C struct


The best choice is usually the C++ class, since it offers the best mix of features and ease of use. (You can test code written in C or C++ using C++ class test units.) However, compiling C++ is not always possible; in that case, one of the C-based test unit packaging options must be used.
You can freely mix different deployment methods across a project if desired; the format of the results is consistent across all test unit packaging options.




== Simple Test Unit Examples ==
Following are a few short examples. In each example, a single test unit with the name "MyTest" is identified to the [[s2scompile|STRIDE compiler]] via a custom [[Test Unit Pragmas|STRIDE #pragma]].




=== Test Unit as C++ Class ===
==== MyTest.h ====
<source lang=cpp>
#include <srtest.h>

class MyTest : public stride::srTest
{
public:
  void ExpectPass()
  {
    srLOG_INFO("this test should pass");
    srEXPECT_EQ(2 + 2, 4);
  }
  void ExpectFail()
  {
    srLOG_INFO("this test should fail");
    srEXPECT_GT(2 * 3, 7);
  }
  int ChangeMyName()
  {
    srLOG_INFO("this test should have name = MyChangedName");
    testCase.SetName("MyChangedName");
    return 0;
  }
  int ChangeMyDescription()
  {
    srLOG_INFO("this test should have a description set");
    testCase.SetDescription("this is my new description");
    return 0;
  }
};

#ifdef _SCL
// this pragma identifies MyTest as a test class to the STRIDE compiler
#pragma scl_test_class(MyTest)
#endif
</source>
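In the last two methods above, the testCase member is provided by the stride::srTest base class; it gives a test method a handle to its own test case so that the case's name and description can be changed at runtime.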


=== Test Unit as C Class ===
 
====MyTest.h====
<source lang=c>
#include <srtest.h>
 


typedef struct MyTest
{
    void (*ExpectPass)(struct MyTest* self);
    void (*ExpectFail)(struct MyTest* self);
    int (*ChangeMyName)(struct MyTest* self);
    int (*ChangeMyDescription)(struct MyTest* self);
} MyTest;

void MyTest_Init(MyTest* self);


#ifdef _SCL
// This pragma identifies MyTest as a C test class to the STRIDE compiler.
// Extra instrumentation code will be generated to call MyTest_Init() before
// tests are run.
#pragma scl_test_cclass(MyTest, MyTest_Init)
#endif
</source>
 
====MyTest.c====
<source lang=c>
#include "MyTest.h"


static void ExpectPass(MyTest* self)
{
    srLOG_INFO("this test should pass");
    srEXPECT_EQ(2 + 2, 4);
}
static void ExpectFail(MyTest* self)
{
    srLOG_INFO("this test should fail");
    srEXPECT_GT(2 * 3, 7);
}
static int ChangeMyName(MyTest* self)
{
    srLOG_INFO("this test should have name = MyChangedName");
    srTestCaseSetName(srTEST_CASE_DEFAULT, "MyChangedName");
    return 0;
}
static int ChangeMyDescription(MyTest* self)
{
    srLOG_INFO("this test should have a description set");
    srTestCaseSetDescription(srTEST_CASE_DEFAULT, "this is my new description");
    return 0;
}
void MyTest_Init(MyTest* self)
{
    self->ExpectPass = ExpectPass;
    self->ExpectFail = ExpectFail;
    self->ChangeMyName = ChangeMyName;
    self->ChangeMyDescription = ChangeMyDescription;
}
</source>


=== Test Unit as Group of Free Functions ===
 
====MyTest.h====
<source lang=c>
#include <srtest.h>
void ExpectPass();
void ExpectFail();
int ChangeMyName();
int ChangeMyDescription();


#ifdef _SCL
// this pragma identifies the functions below as the test unit "MyTest" to the STRIDE compiler
#pragma scl_test_flist("MyTest", ExpectPass, ExpectFail, ChangeMyName, ChangeMyDescription)
#endif
</source>
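Note that, unlike scl_test_class and scl_test_cclass, the scl_test_flist pragma takes the test unit name as a quoted string, since there is no C type or class whose name can identify the group of free functions.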


====MyTest.c====
<source lang=c>
#include "MyTest.h"
void ExpectPass()
{
    srLOG_INFO("this test should pass");
    srEXPECT_EQ(2 + 2, 4);
}
void ExpectFail()
{
    srLOG_INFO("this test should fail");
    srEXPECT_GT(2 * 3, 7);
}
int ChangeMyName()
{
    srLOG_INFO("this test should have name = MyChangedName");
    srTestCaseSetName(srTEST_CASE_DEFAULT, "MyChangedName");
    return 0;
}
int ChangeMyDescription()
{
    srLOG_INFO("this test should have a description set");
    srTestCaseSetDescription(srTEST_CASE_DEFAULT, "this is my new description");
    return 0;
}
</source>


== Integrating Test Units Into Your Target Build ==
STRIDE Test Units are easily integrated into your target build, since all required test harnessing code is automatically generated from header files that include [[Test Unit Pragmas|STRIDE #pragmas]].


This harnessing code (referred to as the [[Intercept Module]], or IM) is responsible for:
* Communicating with the I/O portion of the STRIDE target runtime
* Instantiating each specified Test Unit
* Running each member test of the Test Unit
* Collecting test output


Harnessing code generation is the responsibility of the [[Build Tools|STRIDE Build Tools]]. The useful artifacts created by the build tools are the STRIDE database (xx.sidb) and the IM source files (strideIM.c/cpp, strideIM.h, and strideIMEntry.h).


To build a fully-instrumented target:
* Several statements are added to your application's main() function (or equivalent) to start and stop the STRIDE I/O and IM threads (a sketch follows below)
* The generated IM source files are compiled and linked with your target application
* The STRIDE library (which provides I/O and common services) is also linked with your application
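The exact calls are declared in the generated IM sources for your build; the following is only a hypothetical sketch (StartStrideThreads and StopStrideThreads are placeholder names, not actual STRIDE APIs) of where the start/stop statements fit in main():
<source lang=cpp>
// Hypothetical sketch only: StartStrideThreads/StopStrideThreads are
// placeholder names, not actual STRIDE APIs. The real start/stop calls
// are part of the generated IM source (see strideIMEntry.h).
void StartStrideThreads(void);   // placeholder: starts the STRIDE I/O and IM threads
void StopStrideThreads(void);    // placeholder: stops them again

int main(void)
{
    StartStrideThreads();        // bring up STRIDE I/O and the Intercept Module

    /* ... the application's normal processing runs here ... */

    StopStrideThreads();         // shut the STRIDE threads down before exit
    return 0;
}
</source>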


For example, the IM code can be generated from the instrumented headers using the [[Build Tools|STRIDE Build Tools]] from the command line:
<pre>
> s2scompile --c++ testcpp.h
> s2scompile --c testc.h
> s2sbind --output=test.sidb testcpp.h.meta testc.h.meta
> s2sinstrument --im_name=test test.sidb
</pre>
''If using [[STRIDE Studio]], create a new workspace (or open an existing one), add the above source files, adjust your compiler settings, and build and generate the IM through the UI, or write custom scripts to automate the same sequence.''

The generated IM code is then compiled along with the rest of the source to create your application's binary, and the application is downloaded to the target and started.

== Running Target-Based Tests ==
After configuring TCP/IP or COM port communication parameters, tests are controlled and run from a [[Running Test Units|remote host computer]]. Test results are also reported from the host computer. Options are available to run a subset of the available test units and/or to run test units in a specified order.

For example, you can execute your test units and publish results using the [[Test_Runners#TestUnitRun.pl|Test Unit Runner]]:
<pre>
> perl testunitrun.pl -u -d test.sidb
</pre>
''If using [[STRIDE Studio]], you can execute individual test units interactively by opening the user interface view corresponding to the test unit you would like to execute, then calling it. Furthermore, you may write a simple script to automate [[#Scripting_a_Test_Unit|test unit execution]] and result publishing.''
 
== Requirements ==
 
Several variations on typical xUnit-style test units are supported. The additional supported features include:
 
*Test status can be set using STRIDE Runtime APIs ''or'' by specifying simple return types for test methods (illustrated in the sketch below):
**Integral return types: 0 = PASS; non-zero = FAIL
**C++ bool return type: true = PASS; false = FAIL
**void return type with no explicit status setting is assumed to PASS
*Test writers can create additional child suites and tests at runtime by using Runtime APIs.
*Exceptions are not relied upon for reporting status.
*One of the [[SCL_Pragmas#Test_Units|Test Unit pragmas]] must be applied.
 
The STRIDE test class framework places the following requirements on each test class:

*The test class must have a suitable default (no-argument) constructor.
*The test class must have one or more public methods suitable as test methods. Test methods always take no arguments (void) and return either void, a simple integer type (int, short, long, or char), or bool. At this time, typedef'd types and macros are not allowed in the return type specification.
*The [[scl_test_class]] pragma must be applied to the class.
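As an illustration of these rules, the following sketch (a hypothetical class name; the status conventions are those listed above) mixes the supported return types in a single test class:
<source lang=cpp>
#include <srtest.h>

// Hypothetical example class; the implicit default constructor
// satisfies the constructor requirement.
class StatusConventions : public stride::srTest
{
public:
    int  IntReturn()  { return 0; }      // integral: 0 = PASS, non-zero = FAIL
    bool BoolReturn() { return true; }   // bool: true = PASS, false = FAIL
    void VoidReturn()                    // void with no explicit status = PASS,
    {                                    // unless a test macro sets the status:
        srEXPECT_EQ(1 + 1, 2);
    }
};

#ifdef _SCL
#pragma scl_test_class(StatusConventions)
#endif
</source>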
 
 
 
==== Using a Test Function List ====
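This example shows a minimal function-list test unit: two C functions that use the integral return convention (0 = PASS, non-zero = FAIL), grouped under the name "Simple" via the scl_test_flist pragma.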
<source lang=c>
#include <srtest.h>
 
#ifdef __cplusplus
extern "C" {
#endif
 
int tf_Int_ExpectPass(void) {return 0;}
int tf_Int_ExpectFail(void) {return -1;}
 
#ifdef _SCL
#pragma scl_test_flist("Simple", tf_Int_ExpectPass, tf_Int_ExpectFail)
#endif
 
#ifdef __cplusplus
}
#endif
</source>
 
 
 
 
 
 
== Using Testpoints ==
Testpoints are described in the article [[Using Testpoints]].
 


[[Category:Test Units]]
[[Category:Reference]]
