Training Instrumentation

From STRIDE Wiki
Revision as of 20:36, 11 May 2010

Background

Source instrumentation is the means by which application domain experts strategically instrument the source under test so as to be able to write tests against the expected behavior of the code as it runs. Source instrumentation is one of the first steps toward enabling expectation testing of your application.

Source instrumentation is accomplished by a set of simple yet powerful macros provided by the STRIDE Runtime library. The macros are easily activated/deactivated for any build through the use of a single preprocessor macro value. When activated, they provide a means to validate the behavior of your application as it is running.

Please review the following reference articles before proceeding:

How Do I Instrument?

As described in the overview, you can begin instrumenting your source code by including the srtest.h header file and then adding srTEST_POINT* and srTEST_LOG* in source locations of interest. These macros will be inactive (no-op) unless STRIDE_ENABLED is defined during compilation.

Where Should I Instrument?

When thinking about where to instrument your source under test, consider these high-value locations:

  • Function entry and exit points. Include call parameters as data if appropriate.
  • State transitions
  • Data transitions (i.e. any point where important data changes value)
  • Error conditions

What's the Difference Between a Test Point and a Test Log?

Test Points can be used for validation: they are the events your expectations are checked against when you run a STRIDE expectation test. Test Logs, on the other hand, are purely informational; they are included in the report according to the log level specified when the STRIDE Runner was executed. Refer to the background links above for more information on each of these instrumentation types.

What About Data?

Including data with your test points adds another level of power to the validation of your source. Here are some general recommendations for using data effectively:

  • Try to use string data payloads wherever possible. String payloads are considerably more human-friendly when viewing test results and they allow for relatively simple string comparison validation.
  • If you need to perform complex validation of multi-field data in a test point, consider using an object serialization format such as JSON. Standard formats like this are readily parsable in host scripting languages. If, however, you will only be writing expectation tests in native code for execution on the target, then string serialization formats might be too cumbersome to validate. In that case, using binary payloads (typically structures) is sensible.
  • TBD
  • TBD


Sample Code: s2_expectations_source

TBD