Expectations Sample
Introduction
This example demonstrates a simple technique for monitoring and testing, from test logic implemented on the host, activity that occurs in instrumented source code on the device. The sample illustrates a common testing scenario: verifying the behavior of a state machine.
If you are not familiar with test points, you may find it helpful to review the Test Point article before proceeding.
Source under test
s2_expectations_source.c / h
These files implement a simple state machine that we wish to test. The state machine runs when DoStateChanges() is executed; this function is itself instrumented so that it can be invoked remotely from the host.
The expected state transitions are as follows:
eSTART -> eIDLE -> eACTIVE -> eIDLE -> eEND
The states do no real work; each one simply calls sleep() so that some time is spent in each state.
Each state transition is made through a call to SetNewState(), which communicates the transition to the test thread using the srTEST_POINT() macro. We also provide an incrementing counter value as the data for each of these test points; this data is used for validation in one of the example scenarios below.
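To make the instrumentation concrete, the sketch below shows, in the Perl list style used by the tests later in this article, the stream of test point events that one run of DoStateChanges() should produce. This is an assumption-laden illustration: the label strings (other than "IDLE", which is named later) and the counter's starting value are guesses about the instrumentation, not taken from the sample source.

    # Illustrative only: the test point stream one run of DoStateChanges()
    # should generate, assuming each test point is labeled with the name of
    # the state being entered and the counter starts at 1.
    my @observed = (
        { label => 'IDLE',   data => 1 },   # eSTART  -> eIDLE
        { label => 'ACTIVE', data => 2 },   # eIDLE   -> eACTIVE
        { label => 'IDLE',   data => 3 },   # eACTIVE -> eIDLE
        { label => 'END',    data => 4 },   # eIDLE   -> eEND
    );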
Tests Description
s2_expectations_testmodule
This module implements four tests of the state machine from s2_expectations_source. The tests demonstrate the use of the Perl Script APIs to validate expectations.
Each test follows the same pattern in preparing and using the test point feature:
- Call TestPointSetup() with the order parameter as well as the expected and unexpected lists.
- Invoke the target processing by calling the remote function DoStateChanges().
- Use Check() or Wait() on the returned test point object to process the expectations.
We create an "expectation" of activity and then validate the observed activity against the expectation using rules that we specify. If the expectation is met, the test passes; if the expectation is not met, the test fails.
The main difference between the tests is the values of the parameters provided to each test's validation API.
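A minimal sketch of this shared pattern follows. The exact TestPointSetup() argument order and the way the remote DoStateChanges() call is exposed to the script are assumptions here; consult the generated test module for the real code.

    # Hedged sketch of the pattern shared by all four tests.
    # (@expected and @unexpected are built differently per test; see below.)
    my $tp = $self->TestPointSetup(
        srTEST_POINT_EXPECT_ORDERED,   # order parameter (or ..._UNORDERED)
        \@expected,                    # test points we expect to observe
        \@unexpected);                 # test points that must not appear
    $self->DoStateChanges();           # remote call: run the state machine
    $tp->Check();                      # or $tp->Wait($timeout_ms)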
sync_exact
Here we verify an exact match between the contents of the expected array and the observed test points. The combination of srTEST_POINT_EXPECT_ORDERED and an unexpected list specifies that the test will pass only if:
- only the test points in the expected array are seen, and
- the test points are seen in the order specified
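A hedged sketch of this setup follows; the expected labels other than "IDLE" and the contents of the unexpected list are placeholders, since the article does not spell them out.

    # sync_exact sketch: exact, ordered matching.
    my @expected   = ('IDLE', 'ACTIVE', 'IDLE', 'END');   # labels assumed
    my @unexpected = ('SOME_OTHER_LABEL');   # placeholder: disallowed test points
    my $tp = $self->TestPointSetup(srTEST_POINT_EXPECT_ORDERED,
                                   \@expected, \@unexpected);
    $self->DoStateChanges();
    $tp->Check();   # fails on any extra, missing, or out-of-order test point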
sync_loose_timed
Here we loosen the restrictions of the exact test. By specifying srTEST_POINT_EXPECT_UNORDERED and an empty unexpected list, the test will now:
- ignore any observed test points that aren't in the expected array, and
- disregard the order in which the test points are received
Note that the "IDLE" test point is now included in the expected array only once, but with an expected count of 2.
The Check() method will now fail the test only if one or more of the expected test points is not seen the specified number of times.
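A sketch follows; the hash form used to attach an expected count to an entry is an assumption about the API, not confirmed by the article.

    # sync_loose_timed sketch: unordered matching, empty unexpected list.
    my @expected = (
        { label => 'IDLE', count => 2 },   # must be observed twice
        'ACTIVE',
        'END',
    );
    my $tp = $self->TestPointSetup(srTEST_POINT_EXPECT_UNORDERED,
                                   \@expected, []);   # nothing is disallowed
    $self->DoStateChanges();
    $tp->Check();   # fails only if an expected entry is missing or under-counted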
async_loose_timed
This test is identical to the sync_loose_timed test, except that we call Wait() and pass a timeout value of 200 milliseconds. This results in a test failure, because it takes approximately 600 milliseconds (due to the sleep statements in the state machine code) for the test point expectations to be satisfied.
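The sketch below assumes Wait() takes its timeout in milliseconds and that DoStateChanges() returns control to the script while the target continues running; both details are assumptions about the API.

    # async_loose_timed sketch: same setup as sync_loose_timed,
    # but validated with a timed wait instead of Check().
    my $tp = $self->TestPointSetup(srTEST_POINT_EXPECT_UNORDERED,
                                   \@expected, []);   # @expected as above
    $self->DoStateChanges();   # target keeps running while we wait
    $tp->Wait(200);            # times out before the ~600 ms of sleeps
                               # complete, so this test fails by design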
check_data
This test is identical to the sync_loose_timed test, except that we specify srTEST_POINT_EXPECT_ORDERED for an ordered expectation set and we specify expected data for some of our test points. This test will pass only if the test points are seen in the specified order and match both the label and the data specified.
For the test points with binary data, we use the Perl pack() function to create a scalar value with the proper bit pattern. Whenever you validate target data, you must account for the byte ordering of basic types; here we assume the target has the same byte ordering as the host.
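A sketch follows; the hash form for attaching expected data to an entry is an assumption, and pack('L', ...) assumes the counter is a 32-bit unsigned value in native (host) byte order, per the paragraph above.

    # check_data sketch: ordered matching with expected payloads.
    my @expected = (
        { label => 'IDLE',   data => pack('L', 1) },   # 32-bit native-order value
        { label => 'ACTIVE', data => pack('L', 2) },
        'IDLE',                                        # label-only entry: data ignored
        { label => 'END',    data => pack('L', 4) },
    );
    my $tp = $self->TestPointSetup(srTEST_POINT_EXPECT_ORDERED,
                                   \@expected, []);
    $self->DoStateChanges();
    $tp->Check();   # passes only if labels and data match, in order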