Test Point Sample

From STRIDE Wiki
== Introduction ==
These examples demonstrate a simple technique to monitor and test activity occurring in another thread. The samples show a common testing scenario of this type: verifying the behavior of a state machine.

If you are not familiar with test points, you may find it helpful to review the [[Test Point]] article before proceeding.


== Source under test ==


=== <tt>s2_testpoint_source.c / h</tt> ===
These files implement a simple state machine that we wish to test. The state machine runs in its own thread, and starts when the thread function '''StateControllerTask''' is executed.


The expected state transitions are as follows:

 eSTART -> eIDLE -> eACTIVE -> eIDLE -> eEND

The states don't do any work; instead they just sleep() so there's some time spent in each one.

Each state transition is managed through a call to '''SetNewState()''', which communicates the state transition to the test thread using the '''srTEST_POINT()''' macro. We set the macro argument to the name of the state we are transitioning to, as this is the 'value' of the testpoint that will be received by the test thread.
== Tests Description ==

=== s2_testpoint_basic ===


This example implements several tests of the state machine implemented in s2_testpoint_source. These tests demonstrate the use of '''srTestPointWait()''' to verify activity in another thread and '''srTestPointCheck()''' to verify an already completed activity.


Each test follows the same pattern in preparing and using the test point feature:
# specify the set of test points of interest - create an array of type '''srTestPointExpect_t''' which specifies the expected test points and, optionally, an array of type '''srTestPointUnexpect_t''' for unexpected test points
# set the expectation array using '''srTestPointSetup()'''
# start the state machine
# use '''srTestPointCheck()''' or '''srTestPointWait()''' macro to validate the expected test points


We create an "expectation" of activity and then validate the observed activity against the expectation using rules that we specify. If the expectation is met, the test passes; if the expectation is not met, the test fails.


The main difference between the tests is the values of the parameters provided to each test's validation API.


==== SyncExact ====


Here we verify an exact match between the contents of the ''expected'' array and the observed testpoints. The combination of ''srTEST_POINT_EXPECT_ORDERED'' and an ''unexpected list'' specifies that the test will pass only if:
* only the testpoints in the expected array are seen, and
* the testpoints are seen in the order specified


==== SyncLooseTimed ====
Here we loosen the restrictions of the exact test. By specifying ''srTEST_POINT_EXPECT_UNORDERED'' and an empty ''unexpected list'', we now will:
* ignore any testpoints seen that aren't in the expected array, and
* disregard the order in which the testpoints are received
Note that the "IDLE" testpoint is now included in the expected array only once, but with an expected count of 2.


srTestPointCheck() will now cause the test to fail only if the expected testpoints are not all seen (the specified number of times) within the timeout period.
 
==== AsyncLooseTimed ====
This test is identical to the SyncLooseTimed test, except that we call srTestPointWait() and pass a timeout value of 400 milliseconds, which will result in a test failure, as it takes approximately 600 milliseconds for the testpoint expectations to be satisfied.
 
==== CheckData ====
This test is identical to SyncLooseTimed, except that we specify srTEST_POINT_EXPECT_ORDERED for an ordered expectation set '''and''' we specify expected data for some of our test points. This test will pass only if the test points are seen in the specified order and match both the label and data specified.
 
==== CheckBinaryData ====
This test demonstrates the validation of a test point with a binary payload using a predicate function. A structure is used as the test point payload, and a predicate is written to validate one or more fields in the resulting structure payload.
 
== Run Tests ==
Now launch the test app (if you have not already) and execute the runner with the following commands:
 
''Test Point tests'':
<pre>
stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="s2_testpoint_basic" --log_level=all --output=TestPoint.xml
</pre>
 
Note that the command will produce distinct result files for the run (per the --output option above). Open each result file in your browser to peruse the results.
 
== Observations ==
This sample demonstrates how to write tests that validate [[Test_Point_Testing_in_C/C++|STRIDE Test Points]] with native test code. Although test point validation tests can be written in host-based scripting languages as well, sometimes it's preferable to write (and execute) the test logic in native target code - for instance, when validating large or otherwise complex data payloads. Review the source code in the directory and follow the sample description.
 
The following are some test observations:
* the test point test cases show the test points that were encountered, information about failures (if any) and log messages (the latter only because we included the --log_level=all option when executing the runner).
* test point tests can be packaged into a harness using any of the [[Test_Units#Test_Units|three types of test units]] that we support. In this case, we used an FList so that the sample could be used on systems that were not C++ capable.
* one of two methods is used to process the test: [[Test_Point_Testing_in_C/C++#srTestPointWait|srTestPointWait]] or [[Test_Point_Testing_in_C/C++#srTestPointCheck|srTestPointCheck]]. The former is used to process test points as they happen (with a specified timeout) and the latter is used to process test points that have already occurred at the time it is called (post completion check).
* due to the limitations of C syntax, it can be ugly to create the srTestPointExpect_t data, especially where user data validation is concerned (see the ''CheckData'' example, for instance).




[[Category:Samples]]

Latest revision as of 17:08, 9 June 2011
