Expectations Sample

Introduction

This example demonstrates a simple technique for monitoring and testing activity that occurs in instrumented source code on the device from test logic implemented on the host. This sample shows a common testing scenario - namely, verifying the behavior of a state machine.

If you are not familiar with test points you may find it helpful to review the Test Point article before proceeding.

Source under test

s2_expectations_source.c / h

These files implement a simple state machine that we wish to test. The state machine runs when Exp_DoStateChanges is executed; this function is also instrumented so that it can be invoked remotely from the host.

The expected state transitions are as follows:

eSTART -> eIDLE -> eACTIVE -> eIDLE -> eEND

The states don't do any work; instead they just sleep() so there's some time spent in each one.

Each state transition is managed through a call to SetNewState(), which communicates the state transition to the test thread using the srTEST_POINT() macro. We also provide an incrementing counter value as data to each of these test points - this data is used for validation in one of our example scenarios.

Tests Description

s2_expectations_testmodule

This example implements seven tests of the state machine implemented in s2_expectations_source. These tests demonstrate the use of the Perl Script APIs to validate expectations.

Each test follows the same pattern in preparing and using the test point feature:

  1. Call TestPointSetup with an order parameter as well as expected and unexpected lists.
  2. Invoke target processing by calling the remote function Exp_DoStateChanges.
  3. Use Check or Wait on the test point object to process the expectations.

We create an "expectation" of activity and then validate the observed activity against the expectation using rules that we specify. If the expectation is met, the test passes; if the expectation is not met, the test fails.

The main difference between the tests lies in the parameter values provided to each test's validation API.
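
The overall flow in Perl looks roughly like the sketch below. The TestPointSetup, Check, and Wait names come from the steps above, but the exact argument order, the availability of the srTEST_POINT_EXPECT_* constants, and the remote-call syntax for Exp_DoStateChanges are assumptions rather than the sample's literal code.

  # Sketch of the common pattern; assumes the STRIDE scripting environment has
  # exported TestPointSetup, the srTEST_POINT_EXPECT_* constants, and the
  # remote function Exp_DoStateChanges into this script.
  use strict;
  use warnings;

  my @expected   = ('IDLE', 'ACTIVE', 'IDLE', 'END');   # illustrative labels
  my @unexpected = ();                                   # labels that must not be seen

  # 1. Set up the expectation: order flag plus expected and unexpected lists.
  my $tp = TestPointSetup(srTEST_POINT_EXPECT_ORDERED, \@expected, \@unexpected);

  # 2. Invoke the instrumented target function remotely.
  Exp_DoStateChanges();

  # 3. Process the expectations: Check() validates what has been observed so far,
  #    while Wait() blocks until the expectation is met or a timeout expires.
  my $passed = $tp->Check();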

sync_exact

Here we verify an exact match between the contents of the expected array and the observed testpoints. The combination of srTEST_POINT_EXPECT_ORDERED and an unexpected list specifies that the test will pass only if:

  • only the testpoints in the expected array are seen, and
  • the testpoints are seen in the order specified
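
Expressed in terms of the sketch above, only the setup parameters change. The labels below are illustrative, and the contents of the unexpected list are defined in the sample module rather than reproduced here:

  # Ordered expectation plus a non-empty unexpected list: any stray or
  # out-of-order test point fails the test. @unexpected holds a placeholder
  # entry here; the real sample's entries are not shown in this description.
  my @expected   = ('IDLE', 'ACTIVE', 'IDLE', 'END');
  my @unexpected = ('SOME_OTHER_POINT');
  my $tp = TestPointSetup(srTEST_POINT_EXPECT_ORDERED, \@expected, \@unexpected);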

sync_loose_timed

Here we loosen the restrictions of the exact test. By specifying srTEST_POINT_EXPECT_UNORDERED and an empty unexpected list, we now will:

  • ignore any testpoints seen that aren't in the expected array, and
  • disregard the order in which the testpoints are received

Note that the "IDLE" testpoint is now included in the expected array only once, but with an expected count of 2.

The Check() method will now cause the test to fail only if one or more of the expected testpoints is not seen the specified number of times.
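
A sketch of how this setup might differ from the exact test is shown below; the hash form used to attach a count to the "IDLE" entry is an assumption about the API, and the labels remain illustrative:

  # Unordered expectation with an empty unexpected list: order is ignored and
  # stray test points are tolerated. "IDLE" appears once with a count of 2.
  my @expected = (
      { label => 'IDLE',   count => 2 },
      { label => 'ACTIVE', count => 1 },
      { label => 'END',    count => 1 },
  );
  my $tp = TestPointSetup(srTEST_POINT_EXPECT_UNORDERED, \@expected, []);
  my $passed = $tp->Check();   # fails only if an expected entry misses its count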

async_loose_timed

This test is identical to the sync_loose_timed test, except that we call Wait() and pass a timeout value of 200 milliseconds. This results in a test failure, as it takes approximately 600 milliseconds for the testpoint expectations to be satisfied (due to the sleep statements in the state machine code).
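
The only change from the previous test is the waiting call itself; the millisecond unit comes from the description above, while the method-call form is an assumption:

  # Wait up to 200 ms for the expectations to be satisfied. The state machine
  # sleeps for roughly 600 ms in total, so the wait times out and the test fails.
  my $passed = $tp->Wait(200);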

check_data

This test is identical to the sync_loose_timed test, except that we specify srTEST_POINT_EXPECT_ORDERED for an ordered expectation set and we specify expected data for some of our test points. This test will pass only if the test points are seen in the specified order and match both the label and the data specified.

For the test points with binary data, we have to use the perl pack() function to create a scalar value that has the proper bit pattern. Whenever you are validating target data, you will need to take into account byte ordering for basic types. Here we assume the target has the same byte ordering as the host.
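
For example, if the incrementing counter is sent as a native-order 32-bit unsigned integer (an assumption, as is the hash field used to attach data to an expected entry), the expected payloads could be built like this:

  # pack('L', ...) produces a 32-bit unsigned integer in the host's byte order,
  # matching the assumption that target and host share the same byte ordering.
  # The counter values shown are illustrative.
  my @expected = (
      { label => 'IDLE',   data => pack('L', 1) },
      { label => 'ACTIVE', data => pack('L', 2) },
      { label => 'IDLE',   data => pack('L', 3) },
      { label => 'END',    data => pack('L', 4) },
  );
  my $tp = TestPointSetup(srTEST_POINT_EXPECT_ORDERED, \@expected, []);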

trace_data

This test is similar to sync_exact, except the expectations are loaded from a trace data file that was created using the --trace option on the STRIDE Runner. By default, tests that use a trace data file perform validation based on both the test point label AND the data values in the trace file.

trace_data_predicate

This test is identical to the trace_data test except that a custom predicate is specified for the data validation. The custom predicate in this example just validates binary data using the standard memory comparison and implicitly passes for any non-binary payloads.
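
Such a predicate can be written as an ordinary Perl subroutine; the argument list it receives is an assumed calling convention, not necessarily the framework's exact one:

  # Byte-wise comparison for binary payloads, implicit pass for anything else.
  # 'eq' on two packed scalars compares their raw bytes (a memcmp-style check).
  sub binary_only_predicate {
      my ($expected_data, $actual_data, $is_binary) = @_;
      return 1 unless $is_binary;              # non-binary payloads pass implicitly
      return $expected_data eq $actual_data;   # require a byte-for-byte match
  }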

json_data

This test demonstrates the use of JSON formatted string data from the target in predicate validation. The string payload for the test point is decoded using the standard Perl JSON library and the object's fields are validated in the predicate function. JSON is a logical choice if you want to serialize small amounts of data in a platform-agnostic way, since it is widely supported across languages.
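
Decoding and checking the payload uses the standard Perl JSON module; the field names and the predicate's argument list below are illustrative assumptions:

  use JSON;   # provides decode_json()

  # Decode the JSON string payload and validate individual fields.
  # 'state' and 'counter' are example field names, not the sample's actual schema.
  sub json_predicate {
      my ($expected_data, $actual_data) = @_;
      my $obj = eval { decode_json($actual_data) };   # decode_json dies on malformed input
      return 0 unless ref $obj eq 'HASH';
      return defined $obj->{state}
          && $obj->{state} eq 'ACTIVE'
          && defined $obj->{counter}
          && $obj->{counter} >= 1;
  }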