Script Samples
== Introduction ==
The samples are [[Desktop_Installation#Directories_and_Files | provided with]] the Desktop installation package. They are self-documented (using perldoc), and this content is attached to the test report whenever a sample is executed. They are readily executable using our [[STRIDE Off-Target Environment | off-target (desktop) environment]] and can easily be [[Building_an_Off-Target_Test_App | built and executed]]. In every case, you can include a sample simply by copying its source files to the SDK's [[Desktop_Installation#SDK | sample_src]] directory and rebuilding the testapp.


The samples were created to be as simple as possible while sufficiently demonstrating the topic at hand. In particular, the samples are '''very light on core application logic (that is, the source under test)''' -- they focus instead on the code that leverages STRIDE to define and execute tests. As you review the sample code, if you find yourself confused about which code is the test logic and which is the source under test, try reading the source file comments to discern this.


== What you need to do ==
 
In order to get the full benefit from the '''samples''', we recommend you do the following:
 
* follow the '''reference wiki links''' we provide in the main sample articles. These links provide rich technical information on the topics covered by the sample. These are also articles you will likely refer to in the future when you are implementing your own tests.
* read/review all sample source code prior to running. The samples consist almost entirely of source code, so it makes sense to use a source code editor (one you are familiar with) for this purpose.
* build and execute the samples using the off-target framework. If you completed your installation as described in the Introduction, the framework should be fully functional when executing them.
* review the reports that are produced when you run the samples. The reports give you a feel for how data is reported in the STRIDE Framework. The extracted documentation is also provided in the report.
* for most samples, we provide some observations that help summarize aspects of the results that might be of interest to you. These observations are not necessarily comprehensive - in fact, we hope you'll discover other interesting features in the samples that we haven't mentioned.
 
== Background ==
The STRIDE Framework allows you to write expectation tests that execute on the host while connected to a running device that has been instrumented with [[Test Point|STRIDE Test Points]]. Host-based expectation tests leverage the power of scripting languages (perl is currently supported, others are expected in the future) to quickly and easily write validation logic for the test points on your system. What's more, since the test logic is implemented and executed on the host, your device software does not have to be rebuilt when you want to create new tests or change existing ones.
 
Please review the following reference articles before proceeding:
 
* [[Test Modules Overview|Scripting Overview]]
* [[Perl Script APIs|perl Test Modules]]
 
== What is an expectation test? ==
 
An expectation test is a test that validates behavior by verifying the occurrence (or non-occurrence) of specific test points on your running device. The STRIDE Framework makes it easy to define expectation tests ''via a single setup API'' ([[Perl_Script_APIs#Methods|see TestPointSetup]]). Once defined, expectation tests are executed by invoking a ''wait'' method that evaluates the test points on the device as they occur. The wait method typically blocks until the entire defined expectation set has been satisfied '''or''' until an (optional) timeout has been exceeded.
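To make the flow concrete, here is a minimal sketch of what such a test might look like inside a perl test module. The exact argument and return conventions of <code>TestPointSetup</code> and the wait method are documented in [[Perl Script APIs]]; the call shapes, test point labels, and timeout below are illustrative assumptions rather than a definitive reference.

<pre>
# A minimal expectation test sketch -- API shapes are assumptions,
# see Perl Script APIs for the actual signatures.
sub testStartupSequence {
    my $self = shift;

    # Define the expectation set: the test points the device must hit.
    my $expected = $self->TestPointSetup(
        [
            'INIT_BEGIN',    # hypothetical test point label
            'INIT_END',      # hypothetical test point label
        ]
    );

    # Start the device-side scenario here (see the next section), then
    # block until all expected test points fire or 5000 ms elapse.
    return $expected->Wait(5000);
}
</pre>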
 
== How do I start my target processing scenario? ==
 
It's often necessary, as part of a test, to start the processing on the device that causes the test scenario to occur. Sometimes processing is invoked via external stimulus (e.g., sending a command to a serial port or sending a network message). Given the [http://search.cpan.org/ wealth of libraries] available for perl, it's likely that you'll be able to find modules to help you automate common communication protocols.
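For example, if your device exposes a TCP command port, a few lines using the standard <code>IO::Socket::INET</code> module are enough to deliver the stimulus from within your test script. The address, port, and command string below are hypothetical:

<pre>
use IO::Socket::INET;

# Connect to the device's command port and trigger the scenario under test.
my $device = IO::Socket::INET->new(
    PeerAddr => '192.168.0.50',    # device address (assumption)
    PeerPort => 5000,              # device command port (assumption)
    Proto    => 'tcp',
) or die "cannot connect to device: $!";

print $device "START_SCENARIO\n";  # hypothetical command understood by the device
close $device;
</pre>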
 
If, on the other hand, the processing can be invoked by direct code paths in your application, you can consider using [[Function_Capturing|function fixturing]] via STRIDE. STRIDE function remoting allows you to specify a set of functions on the device that are to be made available in the host scripting environment for remote execution. This can be a convenient way to expose device application hooks to the host scripting environment.
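As a sketch, suppose the device function <code>StartDataPump()</code> has been captured for remote execution and is exposed as an ordinary perl sub in the test module's environment (an assumption for illustration; consult [[Function_Capturing]] for the actual host-side calling convention). A test could then start the scenario with a direct call and immediately validate the resulting test points:

<pre>
# Illustrative fixturing via function remoting -- names and call
# shapes are assumptions.
sub testDataPump {
    my $self = shift;

    # Expect the device to emit this test point when the pump completes.
    my $expected = $self->TestPointSetup( [ 'PUMP_DONE' ] );

    # Invoke the remoted device function directly from the host script;
    # no external stimulus (serial command, network message) is needed.
    StartDataPump(100);    # hypothetical device function, rate = 100

    return $expected->Wait(10000);
}
</pre>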
 
Whatever the approach, we strongly encourage test authors to minimize the amount of manual interaction required to execute expectation tests and, wherever possible, to fully automate it. Fully automated tests are more likely to be run regularly and therefore provide a tighter feedback loop for your software quality.
 
== Can I use test scripts for any other testing besides expectation validation? ==
 
'''Yes''' - STRIDE test modules provide a language-specific way to harness test code. If you have other procedures that can be automated using perl code on the host, then you can certainly use STRIDE test modules to harness the test code. In doing so, you will get the reporting conveniences that test modules provide (such as automatic POD documentation extraction and suite/test case generation) - as well as unified reporting with your other STRIDE test cases.
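As a rough sketch, a host-only check can be packaged like any other perl test module. The layout below (a package with one method per test case, plus POD that gets extracted into the report) is a plausible shape rather than a prescription; see [[Perl Script APIs|perl Test Modules]] for the actual conventions.

<pre>
package HostOnlyChecks;    # hypothetical test module name

=pod

=head1 NAME

HostOnlyChecks - host-side procedures harnessed as a STRIDE test module

=head1 DESCRIPTION

This POD block is extracted automatically and attached to the test report.

=cut

use strict;
use warnings;

sub new { my $class = shift; return bless {}, $class; }

# Any automatable host-side procedure can serve as a test case. Here we
# simply verify that a generated artifact exists -- no test points involved.
sub testArtifactPresent {
    my $self = shift;
    return -e '/tmp/device_output.log';    # hypothetical path; assumed truthy = pass
}

1;
</pre>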
 
== Samples ==
The following samples are available to illustrate the use of host scripting with the STRIDE Framework. They are a collection of native source under test and script test code that demonstrates the techniques for creating and executing tests using the STRIDE Framework. These are the introductory samples for the STRIDE Framework that illustrate how to do [[Expectations | expectation testing]] of instrumented source under test using script test modules on the host.
 
Once you have installed the STRIDE framework on your host machine, you can easily build and run any combination of these samples. In each case, you can include the source under test for the sample simply by copying its source files to the SDK's [[Framework_Installation#SDK | sample_src]] directory and [[Building_an_Off-Target_Test_App |rebuilding]] the [[STRIDE Off-Target Environment | off-target]] testapp.


;[[Expectations Sample]]
: This sample provides some pre-instrumented source code that includes [[Test Point|STRIDE Test Points]] and [[Test Log|Test Logs]]. A single perl test module is included that implements a few examples of expectation testing based on the software under test. The process of including this sample, as well as running it and publishing results, is covered in [[Running and Publishing the Expectations Sample]].


;[[FileTransfer Sample]]
: This sample shows an example of how you might use helper functions on the target to invoke [[File Transfer Services|STRIDE File Transfer services]]. The example is driven by host script logic that invokes remote target functions to actuate a file transfer to the target.
;[[FunctionRemoting Sample]]
: This sample shows some examples of invoking [[Function_Capturing|remote functions]] for the purpose of fixturing your behavior tests written in script on the host.


[[Category:Samples]]
[[Category:Tests in Script]]
