Source Instrumentation Overview

== Introduction ==
Source instrumentation is the process by which developers and domain experts selectively instrument the source under test for the purpose of writing test scenarios against the executing application. Implementing tests that leverage source instrumentation is called '''Expectation Testing'''. This validation technique is very useful for verifying proper code sequencing based on the software's internal design.

Unlike traditional unit testing, which drives tests through input parameters and isolates functionality, expectation testing is executed within a fully functional software build running on a real target platform. Expectation tests are not dependent on input parameters, but often leverage the same types of input/output controls used by functional and black-box testing.

Another unique feature of expectation testing is that domain expertise is not required to implement a test. Developers and domain experts use instrumentation to export design knowledge of the software to the entire team. Furthermore, no stubbing is required, no special logic is needed to generate input parameters, and no advanced knowledge of how the application software is coded is necessary.

To enable effective test coverage, developers and domain experts are required to insert instrumentation at key locations to gain insight and testability. Here are some general suggested source code areas to consider instrumenting:
* critical function entry/exit points
* state transitions
* critical or interesting data transitions (using the optional payload to convey data values)
* callback routines
* data persistence
* error conditions


== Instrumentation ==
To make the software ''testable'', the first step in the process is for the experts to selectively insert instrumentation macros, called [[Test_Point | Test Points]], into the source code. Test Points themselves have nominal impact on the performance of the application; they are only active during test data collection<ref name="n1">Test data collection is typically implemented in a low-priority background thread. The data is captured in the calling routine's thread context (no context switch) but processed in the background or on the host. Instrumentation macros return immediately to the caller (i.e. they are no-ops) when testing is not active.</ref>. Test Points contain names and optional payload data. When Test Points are activated, they are collected in the background, along with timing and any associated data. The set of Test Points hit, their order, timing, and data content can all be used to validate that the software is behaving as expected. [[Test_Log | Test Logs]] can also be added to the source code to provide additional information in the context of an executing test.
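
To make the discussion concrete, here is a minimal sketch of what instrumented source might look like. The macro names (srTEST_POINT, srTEST_POINT_DATA) and the label strings are illustrative assumptions for this sketch, not the framework's actual API; see [[Test_Point | Test Points]] for the real macros.
<pre>
#include "srtest.h"   /* STRIDE Runtime instrumentation header */

/* Macro names below are hypothetical, for illustration only. */
static int open_connection(int channel)
{
    srTEST_POINT("open_connection.enter");            /* entry point */

    if (channel < 0) {
        /* error condition, exporting the failing value as payload */
        srTEST_POINT_DATA("open_connection.bad_channel",
                          &channel, sizeof(channel));
        return -1;
    }

    /* ... real work: connection moves from IDLE to OPEN ... */
    srTEST_POINT("connection.state.open");            /* state transition */

    srTEST_POINT("open_connection.exit");             /* exit point */
    return 0;
}
</pre>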


Here are the requirements for source instrumentation using the STRIDE Framework:
* define the STRIDE_ENABLED preprocessor macro in your build system (see the sketch after this list)
* include the srtest.h header file. This file is included in the Runtime source distribution
* selectively instrument strategic locations in your source code with [[Test_Point | Test Points]] and optionally [[Test_Log | Test Logs]]
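
Conceptually, the STRIDE_ENABLED switch lets the same source build with or without active instrumentation. The fragment below sketches that idea only; the real definitions are provided by srtest.h, and the macro shown is the same hypothetical one used in the sketch above.
<pre>
/* Sketch of the conditional-compilation idea behind STRIDE_ENABLED.
   This is not the actual contents of srtest.h. */
#ifdef STRIDE_ENABLED
  #define srTEST_POINT(name)  srtest_report_hit(name)  /* hypothetical hook */
#else
  #define srTEST_POINT(name)  ((void)0)                /* compiles away */
#endif
</pre>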


== Expectations ==
In addition to instrumenting the source code with Test Points, you must also define the [[Expectations]] of the Test Points. This involves defining the list of Test Points expected to be hit during a given test scenario. An expectation can also include any expected '''data''' associated with a Test Point that requires validation.
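
Informally, an expectation boils down to a list of Test Point labels, expected hit counts, and optional payload checks. The structure below is invented purely for exposition and is not a STRIDE data type:
<pre>
/* Invented for exposition: an expectation as label / count / data check. */
typedef struct {
    const char *label;       /* Test Point name                      */
    unsigned    min_hits;    /* minimum number of expected hits      */
    const char *data_check;  /* payload validation to apply, or NULL */
} expected_test_point;

/* Expected Test Points for the open_connection scenario sketched above. */
static const expected_test_point scenario_open[] = {
    { "open_connection.enter", 1, NULL },
    { "connection.state.open", 1, NULL },
    { "open_connection.exit",  1, NULL },
};
</pre>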


== Testing ==
Once the source under test has been instrumented and the [[Expectations | expectations]] defined, STRIDE offers a number of techniques for implementing '''expectation tests'''. For non-developers, the [[Test_Modules_Overview | STRIDE Scripting Solution]] is recommended. Scripting allows testers to leverage the power of dynamic languages that execute on the host. The framework provides script libraries that automate the behavior validation, as well as hooks for customization. The script implementation also reduces software build dependencies, since the test code is not part of the device image. Scripting can also leverage [[Function_Capturing | function remoting]] to fully automate test execution by [[Perl_Script_Snippets#Invoking_a_function_on_the_target | invoking functions on the target]].

For developers writing expectation tests, the [[Test_Units_Overview | STRIDE Test Units]] with test logic implemented in [[Expectation_Tests_in_C/C%2B%2B | C or C++]] are recommended as a starting point.
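
As a rough sketch of the shape such a test might take, the fragment below drives a scenario and then validates the collected Test Points. Every helper name here is invented for illustration; the real entry points come from the STRIDE Test Units, as described in [[Expectation_Tests_in_C/C%2B%2B | Expectation Tests in C/C++]].
<pre>
/* Hypothetical helpers, declared only so the sketch is self-contained;
   the real API is provided by the STRIDE Test Units. */
void expect_test_point(const char *label, unsigned min_hits);
void assert_expectations_met(void);
int  open_connection(int channel);       /* code under test */

void test_open_connection_sequence(void)
{
    /* declare the expected Test Points for this scenario */
    expect_test_point("open_connection.enter", 1);
    expect_test_point("connection.state.open", 1);
    expect_test_point("open_connection.exit",  1);

    open_connection(3);                  /* drive the scenario */

    assert_expectations_met();           /* validate hits, order, data */
}
</pre>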


== Notes ==
<references/>