Training Instrumentation

== Background ==

Source instrumentation is the means by which application domain experts strategically instrument the source under test so as to enable themselves or others to write expectation tests against the running code. Source instrumentation is one of the first steps toward enabling expectation testing of your application.


Source instrumentation is accomplished by a set of simple yet powerful macros provided by the STRIDE Runtime library. The macros are easily activated/deactivated for any build through the use of a single preprocessor macro value. When activated, they provide a means to validate the behavior of your application as it is running.

Please review the following reference articles before proceeding:

== How Do I Instrument ? ==


As described in [[Source Instrumentation Overview|the overview]], you can begin instrumenting your source code by including the <tt>srtest.h</tt> header file and then adding '''srTEST_POINT*''' and '''srTEST_LOG*''' in source locations of interest. These macros will be inactive (no-op) unless '''STRIDE_ENABLED''' is defined during compilation.
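
To make this concrete, here is a minimal sketch of an instrumented function. The particular macro forms shown (a label-only test point and a simple log message) are assumptions for illustration only; consult <tt>srtest.h</tt> for the exact '''srTEST_POINT*''' and '''srTEST_LOG*''' variants and signatures provided by your version of the STRIDE Runtime.

<source lang="c">
/*
 * Illustrative sketch only -- the exact srTEST_POINT* / srTEST_LOG* variants
 * and their signatures are assumptions; see srtest.h for the real macros.
 * Build with STRIDE_ENABLED defined (e.g. -DSTRIDE_ENABLED on the compiler
 * command line) to activate the instrumentation; otherwise these macros
 * compile away to no-ops.
 */
#include "srtest.h"

int ProcessRequest(int requestId)
{
    /* function entry: a label-only test point marks that this path was taken */
    srTEST_POINT("ProcessRequest.enter");

    if (requestId < 0)
    {
        /* purely informational message, reported according to the log level */
        srTEST_LOG("ProcessRequest: negative request id");
        return -1;
    }

    /* ... normal processing elided ... */

    /* function exit */
    srTEST_POINT("ProcessRequest.exit");
    return 0;
}
</source>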


== Where Should I Instrument ? ==

When thinking about where to instrument your source under test, consider these high-value locations:

* Function entry and exit points. Include call parameters as data if appropriate.
* State transitions
* Data transitions (i.e. any point where important data changes value)
* Error conditions
* Callback functions

In addition, when you start to instrument your source code, it's beneficial to pause and consider some of the test cases you expect to validate against the test points you are inserting. For each potential test case, you might also want to consider some of the characteristics you'd use for those tests, as described in [[Expectations|this article]]. The characteristics you expect to apply for various test cases might, for example, inform things like how you label your test points or what kind of data you include.
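
For example, here is a sketch covering two of the locations listed above: a state transition (with the new state's name attached as string data) and an error condition. The names loosely echo the sample reviewed later in this article but are not its actual code, and the data-bearing macro '''srTEST_POINT_STR''' is a hypothetical variant name; the real data-carrying macros are declared in <tt>srtest.h</tt>.

<source lang="c">
/*
 * Sketch only: srTEST_POINT_STR is a hypothetical name for a test point
 * variant that carries string data -- check srtest.h for the actual
 * data-bearing macros in your STRIDE Runtime.
 */
#include "srtest.h"

typedef enum { STATE_IDLE, STATE_ACTIVE, STATE_END } State;

static State currentState = STATE_IDLE;

static const char *GetStateName(State s)
{
    switch (s)
    {
    case STATE_IDLE:   return "Idle";
    case STATE_ACTIVE: return "Active";
    default:           return "End";
    }
}

void SetNewState(State next)
{
    /* state transition: the label names the event, the string data carries
       the name of the state being entered */
    srTEST_POINT_STR("StateChange", GetStateName(next));
    currentState = next;
}

int LoadConfig(const char *path)
{
    if (path == NULL)
    {
        /* error condition: a dedicated test point lets an expectation test
           assert whether this failure path was taken */
        srTEST_POINT("LoadConfig.nullPath");
        return -1;
    }
    /* ... real work elided ... */
    return 0;
}
</source>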

== What's The Difference between a Test Point and a Test Log ? ==

[[Test Point|Test Points]] can be used for validation since they are what's checked when you run a STRIDE expectation test. [[Test Log|Test Logs]], on the other hand, are purely informational and will be included in the report, according to the log level indicated when the [[Stride_Runner#Options|STRIDE Runner]] was executed. Refer to the [[#Background|background links above]] for more information on each of these instrumentation types.


== What About Data ? ==

Including data with your test points adds another level of power to the validation of your source. Here are some general recommendations for using data effectively:


* Try to use string data payloads wherever possible. String payloads are considerably more human-friendly when viewing test results and they allow for relatively simple string comparison validation.
* If you need to do complex validation of '''multi-field data''' in a test point, consider using an object serialization format such as [http://json.org/ JSON] (see the sketch after this list). Standard formats like this are readily parsable in host scripting languages. If, however, you will ''only'' be writing expectation tests in native code for execution on the target, then string serialization formats might be too cumbersome for validation. In that case, using binary payloads (structures, typically) is reasonable.
* The data payloads for a test point are limited to a fixed size (512 bytes by default, but configurable if needed). If you have large data payloads that you need to validate and you are using host-based script validation logic, consider using [[File Transfer Services]] to read/write the data as files on the host. If, on the other hand, you are using native code for validation, you can independently manage your own buffers of data (heap allocated, for example) for validation and use the test point payloads only to transmit addresses and sizes of the payloads.
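
As a sketch of the JSON suggestion above, standard C string formatting is enough to build a small multi-field payload. The data-bearing macro '''srTEST_POINT_STR''' is again a hypothetical variant name (see <tt>srtest.h</tt> for the real macros), and the buffer is sized with the 512-byte default payload limit in mind.

<source lang="c">
/*
 * Sketch only: builds a small JSON payload with snprintf and attaches it to
 * a test point. srTEST_POINT_STR is a hypothetical data-bearing variant --
 * see srtest.h for the actual macros. Keep the serialized payload under the
 * configured test point payload limit (512 bytes by default).
 */
#include <stdio.h>
#include "srtest.h"

void ReportTransition(const char *state, int transitionCount, int errorCount)
{
    char payload[256];  /* comfortably below the 512-byte default limit */

    snprintf(payload, sizeof(payload),
             "{\"state\":\"%s\",\"transitions\":%d,\"errors\":%d}",
             state, transitionCount, errorCount);

    /* a host-side script test can parse this payload with any JSON library */
    srTEST_POINT_STR("StateReport", payload);
}
</source>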
 
== Sample Code: s2_expectations_source ==


For this training, we will simply review some existing sample source code and explain the motivation behind some of the instrumentation. In particular, we will peruse the source code associated with the [[Expectations Sample]].


=== Samples/test_in_script/Expectations/s2_expectations_source.c ===
 
This is the source under test for a simple example that demonstrates expectation tests where the test logic is written in a script module (perl) that runs on the host. The source under test is a simple state machine and we have chosen to instrument each of the 4 states with one or more test points.  Open the source file in your preferred editor and search for '''srTEST_POINT'''. You will see that we have test points in the following functions:
 
* ''SetNewState()'': This is a shared function that is called to effectuate a state change. The test point label is just a static string chosen by the instrumenter and the name of the new state is included as string data on the test point.
* ''Start()'': The Start state includes a single test point with no data. The label is from a shared function that returns a string representation for any given state (''GetStateName()'').
* ''Idle()'': The Idle state records a single test point. The test point label is again obtained from ''GetStateName()'' and the data associated with the test point is a transition count value that the software under test is maintaining. This state function also includes an ''info'' level test log. Since it is an info level log, it will not be captured during testing unless you explicitly set the log level to ''info'' or higher when executing the tests.
* ''Active()'': The Active state has three distinct test points. The first point is similar to previous states and records a test point with the name of the current state as a label and transition count as data. The second test point shows another example of including string data in a test point. The third test point shows an example of simple JSON serialization to include several values in a single test point payload.
* ''Exp_DoStateChanges()'': This function drives the state transitions and includes one warning log message that is not hit during normal execution of the code.
* ''End()'': The End state records a single test point with the state name as label, similar to the other states already mentioned.
 
=== Build, Run, and Trace ===
 
So that you can see the test points and logs in action, we will now build an off-target test app with this sample source included. Follow the [[Off-Target_Test_App#Copy Sample Source|instructions here]], using the same source files from the Expectation sample that we just reviewed. Once the test app is built, we recommend that you manually open a new console (or Windows equivalent) and start up the application.
 
Now we want to [[Function_Capturing|remotely invoke a function]] from our source under test using the [[STRIDE Runner]] and request that the runner show a trace of any test points that occur in the test app during execution. Calling a function remotely in this example allows us to better control the behavior (state) of the software under test. In many cases you can simply run a [[Tracing|trace]] on the application to view test points and test logs.
 
In order to invoke the function that starts our state transitions, let's create the following two-line perl script (use your favorite editor):
 
<source lang="perl">
use STRIDE;
$STRIDE::Functions->Exp_DoStateChanges();
</source>
 
Save this file as <tt>do_state_changes.pl</tt> in the <tt>sample_src</tt> directory of your SDK. Now open a command prompt and change to that same directory. Execute the stride runner:
 
<pre>
stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run=do_state_changes.pl --trace=-
</pre>
 
This tells the runner to execute our script ''and'' to report any test points that are encountered on the device (or our test app, in this case) during the execution of that script. You should see a number of test points reported while the script executes and then some summary information about the tests that were executed (there are no tests executed in this case).
 
Now let's include any log messages by specifying the <tt>--log_level</tt> flag:
 
<pre>
stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb"  --run=do_state_changes.pl --trace=- --log_level=all
</pre>
 
When you run this, you should see the same output as before as well as one ''LOG'' entry. If we had many LOG entries and wanted to filter based on log_level, we would change the value passed to <tt>--log_level</tt> accordingly.
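
For example, assuming the runner accepts named levels such as ''info'' (as suggested by the info-level log described earlier), restricting the report to informational messages and above might look like this; confirm the accepted values in the [[Stride_Runner#Options|STRIDE Runner]] options documentation.

<pre>
stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run=do_state_changes.pl --trace=- --log_level=info
</pre>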
 
== What next ? ==
 
Now you've seen how instrumentation is strategically placed in source under test and can be traced during execution with the STRIDE Runner. We recommend that you proceed to some of our other training topics to learn how to create tests on the host (in script) or on the target (in native code) that use your instrumentation test points as a means for validation.


[[Category: Training_OLD]]
