Training Tests in Script

Revision as of 20:57, 19 May 2010

Background

The STRIDE Framework allows you to write expectation tests that execute on the host while connected to a running device that has been instrumented with STRIDE Test Points. Host-based expectation tests leverage the power of scripting languages (perl is currently supported, others are expected in the future) to quickly and easily write validation logic for the test points on your system. What's more, since the test logic is implemented and executed on the host, your device software does not have to be rebuilt when you want to create new tests or change existing ones.

Please review the following reference articles before proceeding:

What is an expectation test?

An expectation test validates behavior by verifying the occurrence (or non-occurrence) of specific test points on your running device. The STRIDE Framework makes it easy to define expectation tests via a single setup API (see TestPointSetup). Once defined, an expectation test is executed by invoking a wait method that evaluates the test points on the device as they occur. The wait method typically blocks until the entire expectation set has been satisfied or until an optional timeout is exceeded.
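The define-then-wait shape described above can be sketched in plain perl. This is not the STRIDE API -- the list below stands in for test points arriving from a running device, and the function names are invented for the sketch -- but it shows the pattern of declaring an expectation set and then consuming events until it is satisfied:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-in for test points arriving from a running device.
my @incoming = qw(TP_INIT TP_STATE_A TP_STATE_B TP_DONE);

# Define the expectation set: the test points we require, in order.
my @expected = qw(TP_STATE_A TP_STATE_B TP_DONE);

# A wait loop: consume test points until the expectation set is satisfied
# or the events run out (a real wait would block, with an optional timeout).
sub wait_for_expectations {
    my ($expected, $incoming) = @_;
    my @pending = @$expected;
    for my $tp (@$incoming) {
        shift @pending if @pending && $tp eq $pending[0];
        last unless @pending;
    }
    return @pending == 0;    # true if every expected test point occurred
}

print wait_for_expectations(\@expected, \@incoming) ? "PASS\n" : "FAIL\n";
```

In the real framework the wait call returns a pass/fail result the same way, but the events are delivered live over the device connection rather than from an in-memory list.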

How do I start my target processing scenario?

It's often necessary, as part of a test, to start the processing on the device that causes the test scenario to occur. Sometimes processing is invoked via external stimulus (e.g. sending a command to a serial port or sending a network message). Given the wealth of libraries available for perl, it's likely that you'll find modules to help you automate common communication protocols.

If, on the other hand, the processing can be invoked by code paths in your application, consider using function fixturing via STRIDE. STRIDE function remoting allows you to specify a set of functions on the device that are to be made available in the host scripting environment for remote execution. This can be a convenient way to expose device application hooks to the host scripting environment.
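As a rough illustration of the idea -- the package and method names here are invented for the sketch, not the actual STRIDE interface -- a remoted function behaves like a local call through a proxy object; the remoting layer marshals the call to the device and returns the result:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical proxy: maps remoted function names to dispatch logic.
# In a real setup the call would be marshalled to the device and the
# return value sent back; here a local table stands in for the device.
package DeviceFunctions;

sub new {
    my ($class, %fns) = @_;
    return bless { fns => \%fns }, $class;
}

sub call {
    my ($self, $name, @args) = @_;
    my $fn = $self->{fns}{$name} or die "no remoted function '$name'";
    return $fn->(@args);    # a real remoting layer would block here by default
}

package main;

# Exp_DoStateChanges is the captured device function from the sample;
# its body here is only a placeholder.
my $functions = DeviceFunctions->new(
    Exp_DoStateChanges => sub { my ($count) = @_; return "changed $count states" },
);
print $functions->call('Exp_DoStateChanges', 3), "\n";
```

The point of the sketch is the calling convention: the test script invokes device behavior by name, without linking against the device code.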

Whatever the approach, we strongly encourage test authors to find ways to minimize the amount of manual interaction required to execute expectation tests. Wherever possible, we encourage you to try to find ways to fully automate the interaction required to execute your expectation scenarios. Fully automated tests are more likely to be run regularly and therefore provide a tighter feedback loop for your software quality.

Why would I want to write one in script?

TBD or remove.

Sample: Expectations

For this training, we will again be using the sample code provided in the Expectations Sample. This sample demonstrates a number of common expectation test patterns as implemented in perl.

Sample Source: test_in_script/Expectations/s2_expectations_testmodules.pm

To begin, let's briefly examine the perl test module that implements the test logic. Open the file in your favorite editor (preferably one that supports perl syntax highlighting) and review the source code along with the description provided at the beginning of each test. Here are some things to observe:

  • The package name matches the file name (this is required)
  • We have included documentation for the module and test cases using standard POD formatting codes. As long as you follow the rules described here, the STRIDE framework will automatically extract the POD during execution and annotate the report accordingly.
  • Most of the tests invoke a remote function in the target app (Exp_DoStateChanges()). This function has been captured using STRIDE and is therefore available for invocation using the Functions object in the test module. In one case, we also invoke the function asynchronously (see async_loose). Functions are invoked synchronously by default.
  • In the check_data test, we validate integer values that were passed as binary payloads to the test point. We use the perl pack function to create a scalar value that matches the data expected from the target (the target is the same as the host, in this case). If we were testing against a target with different integer characteristics (size, byte ordering), we would have to adjust our pack statement to produce a bit pattern that matches the target value(s). In many cases this proves difficult to maintain, which is why we typically recommend using string data payloads on test points wherever possible.
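The byte-ordering concern in the last point can be seen with plain perl, independent of STRIDE: the bit pattern pack produces for the same 32-bit integer differs by template, which is exactly why a payload built for one target's byte order may not match another's:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $value = 0x01020304;

# Explicit little-endian ('V') and big-endian ('N') 32-bit encodings.
my $le = pack('V', $value);
my $be = pack('N', $value);

printf "little-endian: %s\n",
    join(' ', map { sprintf '%02x', $_ } unpack('C4', $le));
printf "big-endian:    %s\n",
    join(' ', map { sprintf '%02x', $_ } unpack('C4', $be));
# little-endian: 04 03 02 01
# big-endian:    01 02 03 04
```

A test that hard-codes one of these patterns silently breaks when the target's endianness or integer size changes; a string payload has no such ambiguity.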

Build the test app

So that we can run the sample, let's now build an off-target test app that contains the source under test -- you can follow the generic steps described here. Note: you can copy all of the sample files into the sample_src directory -- although only the source will be compiled into the app, this will make it easier to run the module.

Run the sample

Now launch the test app (if you have not already) and execute the runner with the following command:

stride --device="TCP:localhost:8000" --database=../out/TestApp.sidb --run=../sample_src/s2_expectations_testmodule.pm

(This assumes you are running from the src directory of your off-target SDK. If that's not the case, change the paths to the database and test module files accordingly.)

If you'd like to see how log messages will appear in reports, you can add --log_level=all to this command.

Examine the results

TBD