Test Units Overview

From STRIDE Wiki

== What are STRIDE Test Units? ==
STRIDE Test Units is a general term for xUnit-style test modules that run within the STRIDE runtime framework. These tests, written in C and C++, are compiled and linked with your embedded software and run in place on your target hardware. They are suitable both for developer unit testing and for end-to-end integration testing.

An external Test Runner is provided that controls the execution of the tests and publishes test results to the local file system and, optionally, to S2's Internet [[STRIDE Test Space]].

== Test Unit Features ==
In all cases, STRIDE Test Units provide the following capabilities, typical of all xUnit-style testing frameworks:
* Specification of a test as a test method
* Aggregation of individual tests into test suites, which form execution and reporting units
* Specification of expected results within test methods (typically by using one or more [[Test Code Macros]])
* Test fixturing (optional setup and teardown)
* Test parametrization (optional constructor/initialization parameters)
* Automated execution
* Automated results report generation

== Unique STRIDE Test Unit Features ==
In addition, STRIDE Test Units offer these unique features:
; On-Target Execution
: Tests execute on the target hardware in a true operational environment. Execution and reporting are controlled from a remote desktop (Windows, Linux, or FreeBSD) host
; Dynamic Test and Suite Generation
: Test cases and suites can be created and manipulated at runtime
; [[Using Test Doubles|Test Doubles]]
: Dynamic runtime function substitution to implement on-the-fly mocks, stubs, and doubles
; [[Test Point Testing in C/C++|Behavior Testing]] (Test Points)
: Support for testing of asynchronous activities occurring in multiple threads
; Multiprocess Testing Framework
: Support for testing across multiple processes running simultaneously on the target
; Automatic Timing Data Collection
: Durations are automatically measured for each test case
; Automatic Results Publishing to Local Disk and Internet
: Automatic publishing of test results to [[STRIDE Test Space]]
== Test Unit Deployment ==
=== Individual Tests ===
Individual tests are implemented as test functions or methods that follow a four-phase testing pattern (a minimal sketch follows the list):
# Setting up a test fixture (optional)
# Exercising the System Under Test (SUT)
# Verifying that the expected outcome has occurred (typically using calls to ''Pass/Fail Macros'' or ''Test Points'')
# Tearing down the test fixture (optional)
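For illustration, here is a minimal sketch of the four phases inside a single test method, using the C++ class form described below. The helper <code>sut_append</code> is a hypothetical stand-in for your own code under test; the macros are the same ones used in the examples that follow.
<source lang=cpp>
#include <srtest.h>
#include <string>

// Hypothetical code under test: appends "!" to a string.
static std::string sut_append(const std::string& s)
{
  return s + "!";
}

class FourPhaseExample : public stride::srTest
{
public:
  void AppendAddsBang()
  {
    // 1. Set up the test fixture (optional)
    std::string input("hello");
    // 2. Exercise the System Under Test
    std::string result = sut_append(input);
    // 3. Verify the expected outcome
    srEXPECT_EQ(result.size(), input.size() + 1);
    // 4. Tear down the fixture (optional; here locals simply go out of scope)
  }
};
#ifdef _SCL
#pragma scl_test_class(FourPhaseExample)
#endif
</source>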
=== Test Units ===
Individual functions or methods, each of which typically implements a single test case, are grouped into one or more Test Units, which are executed as atomic entities.
Grouping of individual tests into a Test Unit can be accomplished in any of three ways:
* A Test Unit can consist of the member functions of a '''C++ class'''
* A Test Unit can consist of a set of '''C functions'''
* A Test Unit can consist of C functions pointed to by members of a '''C struct'''
The best choice is usually the C++ class, since it offers the best mix of features and ease of use. (You can test code written in either C or C++ using C++ class test units.) However, compiling C++ is not always possible; in that case, one of the C-based test unit packaging options must be used.
You can freely mix deployment methods across a project if desired; the format of the results is consistent across all test unit packaging options.
== Simple Test Unit Examples ==
Following are a few short examples. In each example, a single test unit named "MyTest" is identified to the [[s2scompile|STRIDE compiler]] via a [[Test Unit Pragmas|test pragma]].
=== Test Unit as C++ Class  ===
==== MyTest.h ====
<source lang=cpp>
#include <srtest.h>
 
class MyTest : public stride::srTest
{
public:
  void ExpectPass()
  {
    srLOG_INFO("this test should pass");
    srEXPECT_EQ(2 + 2, 4);
  }
  void ExpectFail()
  {
    srLOG_INFO("this test should fail");
    srEXPECT_GT(2 * 3, 7);
  }
  int ChangeMyName()
  {
    srLOG_INFO("this test should have name = MyChangedName");
    testCase.SetName("MyChangedName");
    return 0;
  }
  int ChangeMyDescription()
  {
    srLOG_INFO("this test should have a description set");
    testCase.SetDescription("this is my new description");
    return 0;
  }
};
#ifdef _SCL
// this pragma identifies MyTest as a test class to the STRIDE compiler
#pragma scl_test_class(MyTest)
#endif
</source>
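Setup and teardown for a class-based test unit can be expressed with ordinary C++ construction and destruction: the Intercept Module instantiates each test unit before running its member tests (see [[#Integrating Test Units Into Your Target Build|below]]), so the constructor and destructor can serve as fixture setup and teardown. In this sketch, the <code>connection_open</code>/<code>connection_close</code> helpers and the <code>m_conn</code> member are hypothetical stand-ins for real fixture resources.
<source lang=cpp>
#include <srtest.h>

// Hypothetical resource helpers standing in for real setup/teardown work.
static int  connection_open(void)        { return 42; }
static void connection_close(int handle) { (void)handle; }

class MyFixturedTest : public stride::srTest
{
public:
  MyFixturedTest() : m_conn(connection_open()) {}  // setup
  ~MyFixturedTest() { connection_close(m_conn); }  // teardown

  void ConnectionIsValid()
  {
    srEXPECT_GT(m_conn, 0);
  }

private:
  int m_conn;
};
#ifdef _SCL
#pragma scl_test_class(MyFixturedTest)
#endif
</source>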
=== Test Unit as C Class ===
====MyTest.h====
<source lang=c>
#include <srtest.h>
 
typedef struct MyTest
{
    void (*ExpectPass)(struct MyTest* self);
    void (*ExpectFail)(struct MyTest* self);
    int (*ChangeMyName)(struct MyTest* self);
    int (*ChangeMyDescription)(struct MyTest* self);
} MyTest;
void MyTest_Init(MyTest* self);
#ifdef _SCL
// This pragma identifies MyTest as a test c class to the STRIDE compiler.
// Extra instrumentation code will be generated to call MyTest_Init() before
// tests are run.
#pragma scl_test_cclass(MyTest, MyTest_Init)
#endif
</source>
====MyTest.c====
<source lang=c>
#include "MyTest.h"
static void ExpectPass(MyTest* self)
{
    srLOG_INFO("this test should pass");
    srEXPECT_EQ(2 + 2, 4);
}
static void ExpectFail(MyTest* self)
{
    srLOG_INFO("this test should fail");
    srEXPECT_GT(2 * 3, 7);
}
static int ChangeMyName(MyTest* self)
{
    srLOG_INFO("this test should have name = MyChangedName");
    srTestCaseSetName(srTEST_CASE_DEFAULT, "MyChangedName");
    return 0;
}
static int ChangeMyDescription(MyTest* self)
{
    srLOG_INFO("this test should have a description set");
    srTestCaseSetDescription(srTEST_CASE_DEFAULT, "this is my new description");
    return 0;
}
void MyTest_Init(MyTest* self)
{
    self->ExpectPass = ExpectPass;
    self->ExpectFail = ExpectFail;
    self->ChangeMyName = ChangeMyName;
    self->ChangeMyDescription = ChangeMyDescription;
}
</source>
=== Test Unit as Group of Free Functions ===
====MyTest.h====
<source lang=c>
#include <srtest.h>
void ExpectPass();
void ExpectFail();
int ChangeMyName();
int ChangeMyDescription();
#ifdef _SCL
// this pragma identifies MyTest as a test unit to the STRIDE compiler, specifying
// the four functions as members of the test unit
#pragma scl_test_flist("MyTest", ExpectPass, ExpectFail, ChangeMyName, ChangeMyDescription)
#endif
</source>
====MyTest.c====
<source lang=c>
#include "MyTest.h"
void ExpectPass()
{
    srLOG_INFO("this test should pass");
    srEXPECT_EQ(2 + 2, 4);
}
void ExpectFail()
{
    srLOG_INFO("this test should fail");
    srEXPECT_GT(2 * 3, 7);
}
int ChangeMyName()
{
    srLOG_INFO("this test should have name = MyChangedName");
    srTestCaseSetName(srTEST_CASE_DEFAULT, "MyChangedName");
    return 0;
}
int ChangeMyDescription()
{
    srLOG_INFO("this test should have a description set");
    srTestCaseSetDescription(srTEST_CASE_DEFAULT, "this is my new description");
    return 0;
}
</source>
== Integrating Test Units Into Your Target Build ==
STRIDE Test Units are easily [[Integration_Overview|integrated into your target build]] since all required test harnessing code is automatically generated based on header files that include [[Test Unit Pragmas|STRIDE Test pragmas]].
This harnessing code (referred to as the [[Intercept Module|Intercept Module]], or IM code) is responsible for:
* Communicating with the I/O portion of the STRIDE target runtime
* Instantiating each specified Test Unit
* Running each member test of the Test Unit
* Collecting test output
Harnessing code generation is the responsibility of the [[Build Tools|STRIDE Build Tools]]. The artifacts created by the build tools are the STRIDE database (xx.sidb) and the IM source files (strideIM.c/cpp, strideIM.h, and strideIMEntry.h).
To build a fully instrumented target:
* Several statements are added to your application's main() function (or equivalent) to start and stop the STRIDE I/O and IM threads (see the sketch below)
* The generated IM source files are compiled and linked with your target application
* The STRIDE library (which provides I/O and common services) is also linked with your application
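A minimal sketch of what such a main() might look like is shown below. The hook names <code>stride_io_start</code>, <code>stride_im_start</code>, <code>stride_im_stop</code>, and <code>stride_io_stop</code> are hypothetical placeholders, not the actual STRIDE API; the real entry points are declared in the generated IM headers and the STRIDE runtime library.
<source lang=cpp>
#include "strideIM.h"  // generated IM declarations

// Hypothetical runtime hooks; substitute the entry points declared in
// your generated strideIM.h/strideIMEntry.h and the STRIDE library.
extern "C" void stride_io_start(void);
extern "C" void stride_im_start(void);
extern "C" void stride_im_stop(void);
extern "C" void stride_io_stop(void);

int main(void)
{
  stride_io_start();  // start the STRIDE I/O thread
  stride_im_start();  // start the Intercept Module (IM) thread

  /* ... normal application work runs here ... */

  stride_im_stop();   // stop the IM thread
  stride_io_stop();   // stop the I/O thread
  return 0;
}
</source>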
== Running Target-Based Tests ==
After TCP/IP or COM port communication parameters have been configured, tests are controlled and run from a [[Running Test Units|remote host computer]]. See [[Stride Runner]] for details.


[[Category:Test Units]]
[[Category:Reference]]
[[Category:Testing]]
