Training Instrumentation

From STRIDE Wiki

== Background ==

Source instrumentation is the means by which application domain experts strategically instrument the source under test so as to be able to write tests against the expected behavior of the code while it is running. Source instrumentation is one of the first steps toward enabling expectation testing of your application.
 
Source instrumentation is accomplished by a set of simple yet powerful macros provided by the STRIDE Runtime library. The macros are easily activated/deactivated for any build through the use of a single preprocessor macro value. When activated, they provide a means to validate the behavior of your application as it is running.

Please review the following reference articles before proceeding:

* [[Source Instrumentation Overview|Instrumentation Overview]]
* [[Test Point|Test Points]]
* [[Test Log|Test Logs]]

== How Do I Instrument? ==

As described [[Source Instrumentation Overview|here]], you can begin instrumenting your source code by including the <tt>srtest.h</tt> header file and then adding '''srTEST_POINT*''' and '''srTEST_LOG*''' in source locations of interest. These macros will be inactive (no-op) unless '''STRIDE_ENABLED''' is defined during compilation.
  
 
== Where Should I Instrument? ==

When thinking about where to instrument your source under test, consider these high-value locations:

* Function entry and exit points. Include call parameters as data if appropriate.
* State transitions
* Data transitions (i.e., any point where important data changes value)
* Error conditions
  
 
== What About Data? ==

Including data with your test points adds another level of power to the validation of your source. Here are some general recommendations for using data effectively:

* Try to use string data payloads wherever possible. String payloads are considerably more human-friendly when viewing test results and they allow for relatively simple string comparison validation.
* If you need complex validation of '''multi-field data''' in a test point, consider using an object serialization format such as [http://json.org/ JSON]. Standard formats like this are readily parsable in host scripting languages. If, however, you will only be writing expectation tests in native code for execution on the target, then string serialization formats might be too cumbersome for validation. In that case, using binary payloads (structures, typically) is sensible.
* TBD
 
  
 
== Sample Code: s2_expectations_source ==

TBD

Revision as of 13:02, 11 May 2010
