Tracing

Using the [[STRIDE Runner]], it is relatively easy to observe the execution of a running target, or to capture that trace content and reuse it as the basis for test module expectation tests.
== Prerequisites ==
As with [[Running_Tests#Prerequisites|Running Tests]], you need fully set up Host and Target environments.
== Observation of the Execution Process ==
Once you have instrumented your source code, you might want to observe the execution process, or more specifically, verify that STRIDE Test Points are indeed being reported as you expect. One easy way to accomplish this is by using the console trace output feature in the [[STRIDE Runner]].

The simplest usage model for tracing with the runner is to add the <tt>--trace</tt> flag with a value of "-" (dash) to your command-line runner parameters, for example:

<pre>
stride --trace=- --trace_timeout=600 --log_level=all --database="/my/path/targetApp.sidb" --device=TCP:localhost:8000
</pre>

This will cause the runner to listen for 600 seconds for test point and log messages (controlled via the <tt>--log_level</tt> flag) from the target device. When the runner is passively tracing like this, you can terminate it by entering '''<tt>Ctrl-C</tt>''' in the console window. This mode of tracing works well if your device under test is continuously executing the instrumented code '''or''' if you can manually start the processing on the device (via external input, for example).

If, however, the source code under test must be initiated by a remote function call (using [[Function_Capturing#Remotting|STRIDE remoting]]), then you will need to incorporate the function call into the execution of the runner. The simplest way to accomplish this is to write a short script that makes the remote function call, as shown [[Perl_Script_Snippets#Standalone_two-line_script_for_invoking_a_remote_function|here for perl]].
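
For illustration, a minimal two-line sketch of such a script is shown below. The module name and call wrapper are assumptions here (the linked snippet page documents the actual API); the shape is simply: load the STRIDE host scripting module and invoke the instrumented function by name (the function and its two arguments match the <tt>run_through_all</tt> call visible in the trace output further down).

<pre>
# Hypothetical sketch only -- "Stride" and the call wrapper below are
# illustrative stand-ins for the documented Perl remoting API.
use Stride;                                # assumed host-side module
Stride::call("run_through_all", 0, 0);     # with_logs=0, with_delay=0
</pre>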


Once you have written a simple script to invoke your function(s), you can execute it with the runner using:

<pre>
stride --run=MyFunctionCall.pl --trace=- --trace_timeout=600 --log_level=all --database="/my/path/targetApp.sidb" --device=TCP:localhost:8000
</pre>
(assuming you named your script ''MyFunctionCall.pl'')

Once you have managed to start the source under test on the device and have connected with the runner, you should see output that looks roughly like:

<pre>
Loading database...
Connecting to device...
Executing...
   script "c:\s2\MyFunctionCall.pl"
0269952066 called       run_through_all                - {with_logs=0,with_delay=0}
0269952066 test point   "A"                            - {label="A",timestamp=151661000,file="../sample_src/script_testing.c",line=6,data=NULL,size=0,bin=1}
0269952066 test point   "B"                            - {label="B",timestamp=151661001,file="../sample_src/script_testing.c",line=11,data=*(97 98 99 100 101 102 103 104 105 106 107...),size=27,bin=0}
0269952066 test point   "C"                            - {label="C",timestamp=151661002,file="../sample_src/script_testing.c",line=17,data=*(0 1 2 3 4 5),size=6,bin=1}
0269952066 test point   "D"                            - {label="D",timestamp=151661003,file="../sample_src/script_testing.c",line=22,data=NULL,size=0,bin=1}
0269952066 test point   "E"                            - {label="E",timestamp=151661004,file="../sample_src/script_testing.c",line=27,data=*(48 49 50 51 52 53 54 55 56 57 0...),size=11,bin=0}
0269952066 test point   "F"                            - {label="F",timestamp=151661005,file="../sample_src/script_testing.c",line=33,data=*(5 4 3 2 1 0),size=6,bin=1}
0269952066 test point   "G"                            - {label="G",timestamp=151661006,file="../sample_src/script_testing.c",line=38,data=*(0),size=1,bin=1}
0269952066 return       run_through_all
   ---------------------------------------------------------------------
   Summary: 0 passed, 0 failed, 0 in progress, 0 not in use.

Disconnecting from device...
Saving result file...
</pre>

As you can see, any intercepted functions, test points, or logs executed in the source under test are reported in the console output. This provides a means to peruse the output of your instrumented source code and do a quick sanity check of your instrumentation.

Tracing within the runner also allows a timeout interval to be specified; more details can be found on the [[STRIDE Runner]] page.

== Capturing Trace Content ==
Besides console logging, the trace content can be redirected to a file attachment in the report. Simply specify "yaml" or "txt" as the value of the <tt>--trace</tt> flag in your command-line runner parameters, for example:

<pre>
stride --run=MyFunctionCall.pl --trace=yaml --trace_timeout=600 --database="/my/path/targetApp.sidb" --device=TCP:localhost:8000
</pre>

The trace parameters specify that tracing should be activated (<tt>--trace</tt>) and that the results should be written to a file attachment in [http://en.wikipedia.org/wiki/Yaml yaml] format or in a single-entry-per-line text format. To see the trace, open your output report file and click the "Trace Data" link under the top-level annotations.

Once you have captured your trace data into a file, you should look it over (using a text editor of your choice) and make sure that the test points and data captured are what you expect. You are free to manually edit this file as long as you maintain the existing field hierarchy for each entry.
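
For orientation, here is a hypothetical sketch of what a single test point entry might look like in the yaml trace file. The field names are taken from the console trace shown above, but the exact layout and nesting are assumptions; treat a file you actually capture as the authoritative reference.

<pre>
# Hypothetical entry layout -- field names mirror the console trace output;
# inspect a real capture for the actual hierarchy before hand-editing.
- test_point:
    label:     "A"
    timestamp: 151661000
    file:      "../sample_src/script_testing.c"
    line:      6
    data:      ~        # NULL payload
    size:      0
    bin:       1
</pre>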


You can reuse this data as the basis for a test module test; a simple example is available [[Perl_Script_Snippets#Reusing_a_trace_data_file_as_expectation|here for perl]]. The test will load the test points from the trace data file and use them as the basis for your expectation test. By default, both the test point label and data are checked, although this behavior can be modified using options and custom predicates. More information on the options available when setting up an expectation test is available [[Perl_Script_APIs#Methods|here]].
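
To make that flow concrete, a minimal sketch in perl follows. The module and method names are illustrative assumptions, not the documented API (see the linked snippet and API pages for the real names); the shape is simply: load the captured trace, use its test points as the expected sequence, and run the comparison.

<pre>
# Hypothetical sketch only -- "TraceExpectation" and its methods are
# illustrative stand-ins for the documented Perl expectation API.
use strict;
use warnings;
use TraceExpectation;                      # assumed helper module

my $test = TraceExpectation->new();
$test->load_trace("trace_data.yaml");      # test points captured earlier
$test->run();                              # compare against live test points
</pre>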




[[Category:Source Instrumentation]]
[[Category:Running Tests & Publishing Results]]
