Training Tests in Script

== Background ==

The STRIDE Framework allows you to write expectation tests that execute on the host while connected to a running device that has been instrumented with STRIDE Test Points. Host-based expectation tests leverage the power of scripting languages (perl is currently supported; others are expected in the future) to quickly and easily write validation logic for the test points on your system. What's more, since the test logic is implemented and executed on the host, your device software does not have to be rebuilt when you want to create new tests or change existing ones.

== What is an expectation test? ==

An expectation test is a test that validates behavior by verifying the occurrence (or non-occurrence) of specific test points on your running device. The STRIDE Framework makes it easy to define expectation tests ''via a single setup API'' ([[Perl_Script_APIs#Methods|see TestPointSetup]]). Once defined, expectation tests are executed by invoking a ''wait'' method that evaluates the test points on the device as they occur. The wait method typically blocks until the entire defined expectation set has been satisfied '''or''' until an optional timeout is exceeded.
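To make the flow concrete, here is a minimal sketch of the pattern. The accessor names, test point labels, argument keys, and timeout units shown are illustrative assumptions, not the documented signatures -- see [[Perl_Script_APIs#Methods|TestPointSetup]] for the actual API.
<pre>
# Hypothetical sketch of an expectation test case: declare the expected test
# points, start the target processing, then wait for the expectation set.
sub state_changes {
    my ($ctx) = @_;    # assumed test-module context object

    # Assumed setup call: declare the test points we expect to observe.
    my $expectations = $ctx->TestPointSetup(
        expected => [ 'STATE_IDLE', 'STATE_RUNNING', 'STATE_DONE' ],
    );

    # Start the scenario on the device (here via a remoted function).
    $ctx->Functions->Exp_DoStateChanges();

    # Block until every expected test point is hit, or fail after ~10 seconds.
    return $expectations->Wait(10);
}
</pre>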


== How do I start my target processing scenario? ==

It's often necessary, as part of a test, to start the processing on the device that causes the test scenario to occur. Sometimes processing is invoked via external stimulus (e.g. sending a command to a serial port or sending a network message). Given the [http://search.cpan.org/ wealth of libraries] available for perl, it's likely that you'll be able to find modules to help you automate common communication protocols.
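For instance, if the scenario is kicked off by a simple TCP command, a few lines of perl using the core <tt>IO::Socket::INET</tt> module are enough. The address, port, and command string below are placeholders, not values from the sample.
<pre>
use strict;
use warnings;
use IO::Socket::INET;

# Placeholder stimulus: send a one-line command to the device over TCP.
my $sock = IO::Socket::INET->new(
    PeerAddr => '192.168.1.50',   # device address (placeholder)
    PeerPort => 5000,             # command port (placeholder)
    Proto    => 'tcp',
    Timeout  => 5,
) or die "connect failed: $!";

print $sock "START_SCENARIO\n";   # command that triggers the processing
close $sock;
</pre>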


If, on the other hand, the processing can be invoked by direct code paths in your application, you can consider using [[Function_Capturing|function fixturing]] via STRIDE. STRIDE function remoting allows you to specify a set of functions on the device that are to be made available in the host scripting environment for remote execution. This can be a convenient way to expose device application hooks to the host scripting environment.
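As used later in the sample, a captured function such as <tt>Exp_DoStateChanges()</tt> can then be called from the host script through the Functions object. The call shape below is a hedged sketch rather than the documented syntax; consult [[Function_Capturing|the function capturing documentation]] for the actual interface.
<pre>
# Hypothetical sketch: invoke a captured device function from the host script.
# The call executes on the device; by default it blocks until the function
# returns (the sample's async_loose test also shows an asynchronous variant).
$ctx->Functions->Exp_DoStateChanges();
</pre>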


Whatever the approach, we strongly encourage test authors to minimize the amount of manual interaction required to execute expectation tests. Wherever possible, fully automate the interaction needed to run your expectation scenarios. Fully automated tests are more likely to be run regularly and therefore provide a tighter feedback loop for your software quality.


== Can I use STRIDE test modules for any other testing besides expectation validation? ==

'''Yes''' - STRIDE test modules provide a language-specific way to harness test code. If you have other procedures that can be automated with perl code on the host, you can certainly use STRIDE test modules to harness that test code. In doing so, you get the reporting conveniences that test modules provide (automatic POD documentation extraction, suite/test case generation, and so on) as well as unified reporting with your other STRIDE test cases.
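As an outline, such a module might look like the sketch below. The package name, POD layout, and the pass/fail return convention shown here are assumptions for illustration; the authoritative rules are in the [[Perl_Script_APIs|script API documentation]].
<pre>
package s2_my_checks;   # hypothetical module; package name must match the file name (s2_my_checks.pm)

use strict;
use warnings;

=head1 NAME

s2_my_checks - host-side checks harnessed as a STRIDE test module

=head2 build_artifacts_present

Verifies that the expected build output exists on the host (a plain
host-side check with no expectation validation involved).

=cut

sub build_artifacts_present {
    # Assumed convention: return a true value for pass, false for fail.
    return -e '../out/TestApp.sidb';
}

1;
</pre>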


== Sample: Expectations ==


For this training, we will again be using the sample code provided in the [[Expectations Sample]]. This sample demonstrates a number of common expectation test patterns as implemented in perl.


=== Sample Source: test_in_script/Expectations/s2_expectations_testmodules.pm ===
To begin, let's briefly examine the perl test module that implements the test logic. Open the file in your favorite editor (preferably one that supports perl syntax highlighting) and examine the source code along with the description provided at the beginning of each test. Here are some things to observe:
* The package name matches the file name -- ''this is required''.
* We have included documentation for the module and test cases using standard POD formatting codes. As long as you follow the rules described [[Perl_Script_APIs#Documentation|here]], the STRIDE framework will automatically extract the POD during execution and annotate the report accordingly.
* Most of the test sequences are initiated via a remote function in the target app (<tt>Exp_DoStateChanges()</tt>). This function has been [[Function Capturing|captured]] using STRIDE and is therefore available for invocation using the Functions object in the test module. In one case, we also invoke the function asynchronously (see <tt>async_loose</tt>). Functions are invoked synchronously by default.
* In the <tt>check_data</tt> test, we validate integer values that were passed as binary payloads to the test point. We use the perl [http://perldoc.perl.org/functions/pack.html pack] function to create a scalar value that matches the data expected from the target (the target is the same as the host in this case). If we were testing against a target with different integer characteristics (size, byte ordering), we would have to adjust our pack statement accordingly to produce a bit pattern that matches the target value(s); a standalone illustration follows this list. In many cases this binary payload validation proves difficult to maintain, which is why ''we typically recommend using string data payloads'' on test points wherever possible.
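Here is the byte-ordering concern from the last bullet in standalone perl (not code taken from the sample module itself):
<pre>
use strict;
use warnings;

my $value = 0x12345678;

# 'L' packs a 32-bit unsigned integer in the host's native byte order --
# adequate here because the off-target test app runs on the host itself.
my $native = pack('L', $value);

# A target with different byte ordering needs an explicit template:
my $little = pack('V', $value);   # little-endian 32-bit
my $big    = pack('N', $value);   # big-endian 32-bit

printf "native: %s\n", unpack('H*', $native);
printf "little: %s\n", unpack('H*', $little);
printf "big:    %s\n", unpack('H*', $big);
</pre>
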
=== Build the test app ===
So that we can run the sample, let's now build an off-target test app that contains the source under test -- you can follow the generic steps [[Off-Target_Test_App#Copy_Sample_Source|described here]]. '''Note:''' you can copy all of the sample files into the <tt>sample_src</tt> directory -- although only the source will be compiled into the app, this will make it easier to run the module.


=== Run the sample ===
Now launch the test app (if you have not already) and execute the runner with the following command:
<pre>
stride --device="TCP:localhost:8000" --database=../out/TestApp.sidb --run=s2_expectations_testmodule.pm
</pre>
(This assumes you are running from the <tt>sample_src</tt> directory of your off-target SDK; if not, change the paths to the database and test module files accordingly.)
If you'd like to see how log messages will appear in reports, you can add <tt>--log_level=all</tt> to this command.
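For example, assuming the option can simply be appended to the command shown above:
<pre>
stride --device="TCP:localhost:8000" --database=../out/TestApp.sidb --run=s2_expectations_testmodule.pm --log_level=all
</pre>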


=== Examine the results ===
The runner will produce a results (XML) file once the execution is complete. The file will (by default) have the same name as the database and be located in the current directory - so find the <tt>TestApp.xml</tt> file and open it with your browser. You can then use the buttons to expand the test suites and test cases. Here are some things to observe about the results:
* There is a single suite called '''s2_expectations_module'''. This matches the name given to the test module.
* The test module's suite has one annotation - it is a trace file containing all of the test points and logs that were reported to the host during the execution of the test module. This trace file can be useful if you want a sequential view of all test points that were encountered during the execution of the module.
* The test module suite contains 14 test cases -- each one corresponds to a single test case (test function) in the module. The description for each test case was automatically generated from the POD documentation that was included in the test module file.
* Each test case has a list of several annotations. The first is a simple HTML view of the source code of the test itself. This can be useful for quickly inspecting the code that was used to run the test without having to go to your actual test module implementation file. The remaining annotations contain information about each test point that was hit during processing and any expectation failures or timeouts that were encountered.
[[Category: Training_OLD]]
