Training Running Tests
Background
Device connection and STRIDE test execution are handled by the STRIDE Runner (aka "the runner"). The runner is a command line tool with a number of options for listing runnable items, executing tests, tracing on test points, and uploading your results. It is designed for both ad-hoc test execution and fully automated CI execution, and its command line interface is configurable enough to support a variety of uses. You've already seen some basic test execution scenarios in the other training sections; here we look explicitly at several of the most common use cases for the runner.
Please review the following reference articles before proceeding:
Build a test app
Let's begin by building an off-target test app to use for these examples. The sources we want to include in this app are test_in_script/Expectations and test_in_c_cpp/TestClass. Copy these source files to your sample_src directory and follow these instructions for building.
List items
Listing the contents of the database is an easy way to see what test units and fixturing functions are available for execution. Open a command line and try the following[1]:
stride --database="../out/TestApp.sidb" --list
You should see output something like this:
Functions
    Exp_DoStateChanges()
Test Units
    s2_testclass::Basic::Exceptions()
    s2_testclass::Basic::Fixtures()
    s2_testclass::Basic::Parameterized(char const * szString, unsigned int uExpectedLen)
    s2_testclass::Basic::Simple()
    s2_testclass::RuntimeServices::Dynamic()
    s2_testclass::RuntimeServices::Override()
    s2_testclass::RuntimeServices::Simple()
    s2_testclass::RuntimeServices::VarComment()
    s2_testclass::srTest::Dynamic()
    s2_testclass::srTest::Simple()
A few things to notice:
- The Functions (if any) are listed before the Test Units.
- Function and Test Unit arguments (input parameters) are shown, if any. The parameter types are shown for each, absent any typedefs; an illustrative sketch follows.
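To illustrate the second point, suppose the test class had declared its parameterized method using a typedef. The declaration below is hypothetical, written only for illustration; the real source lives in test_in_c_cpp/TestClass:

// Hypothetical declaration for illustration only.
typedef unsigned int length_t;

class Basic {
public:
    // Appears in the listing as:
    //   s2_testclass::Basic::Parameterized(char const * szString, unsigned int uExpectedLen)
    // i.e., the typedef name length_t is not shown; the underlying type is.
    void Parameterized(const char* szString, length_t uExpectedLen);
};

In other words, the listing reports the resolved parameter types rather than any typedef names used in your source.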
Trace on test points
Tracing with the runner shows any STRIDE Test Points generated on the device while the runner is connected. If your test points are emitted continuously (for instance, from a background thread), you can simply connect to the device with tracing enabled to see them; in that case you'll also need a --trace_timeout parameter to tell the runner how long to trace (a sketch of this appears after the example output below). If your test points require some fixturing to be hit, you'll instead specify a script to execute that makes the necessary fixture calls. This is precisely what we did in the earlier Instrumentation training, where we ran the following[1]:
stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run=do_state_changes.pl --trace
You should see output resembling this:
Loading database...
Connecting to device...
Executing...
  script "C:\s2\seaside\SDK\Windows\src\do_state_changes.pl"
    1032564500 POINT "SET_NEW_STATE" - START [../sample_src/s2_expectations_source.c:49]
    1032564501 POINT "START" [../sample_src/s2_expectations_source.c:63]
    1032574600 POINT "SET_NEW_STATE" - IDLE [../sample_src/s2_expectations_source.c:49]
    1032574601 POINT "IDLE" - 02 00 00 00 [../sample_src/s2_expectations_source.c:78]
    1032584600 POINT "SET_NEW_STATE" - ACTIVE [../sample_src/s2_expectations_source.c:49]
    1032584601 POINT "ACTIVE" - 03 00 00 00 [../sample_src/s2_expectations_source.c:101]
    1032594700 POINT "ACTIVE Previous State" - IDLE [../sample_src/s2_expectations_source.c:103]
    1032594701 POINT "JSON_DATA" - {"string_field": "a-string-value", "int_field": 42, "bool_field": true, "hex_field": "0xDEADBEEF"} [../sample_src/s2_expectations_source.c:105]
    1032604800 POINT "SET_NEW_STATE" - IDLE [../sample_src/s2_expectations_source.c:49]
    1032604801 POINT "IDLE" - 04 00 00 00 [../sample_src/s2_expectations_source.c:78]
    1032614800 POINT "SET_NEW_STATE" - END [../sample_src/s2_expectations_source.c:49]
    1032614801 POINT "END" - 05 00 00 00 [../sample_src/s2_expectations_source.c:117]
  > 0 passed, 0 failed, 0 in progress, 0 not in use.
---------------------------------------------------------------------
Summary: 0 passed, 0 failed, 0 in progress, 0 not in use.
Disconnecting from device...
Saving result file...
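If, on the other hand, your test points are emitted continuously and no fixturing script is needed, you can connect with tracing alone and let the runner listen for a fixed period. Here is a minimal sketch, assuming the timeout value is given in seconds and that 30 is long enough to capture the points you care about; adjust both for your environment:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --trace --trace_timeout=30

With no script to execute, the runner simply records whatever test points arrive until the timeout expires.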
Now, let's trace again, but include a filter expression for the test points:
stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run=do_state_changes.pl --trace="ACTIVE.*"
...and now you should see fewer trace points emitted:
Loading database...
Connecting to device...
Executing...
  script "C:\s2\seaside\SDK\Windows\src\do_state_changes.pl"
    1047379801 POINT "ACTIVE" - 03 00 00 00 [../sample_src/s2_expectations_source.c:101]
    1047389800 POINT "ACTIVE Previous State" - IDLE [../sample_src/s2_expectations_source.c:103]
  > 0 passed, 0 failed, 0 in progress, 0 not in use.
---------------------------------------------------------------------
Summary: 0 passed, 0 failed, 0 in progress, 0 not in use.
Disconnecting from device...
Saving result file...
The --trace argument accepts an optional filter expression, which takes the form of a regular expression applied to the test point label. In this case, we've specified a filter that matches any test point whose label begins with ACTIVE. Filtering gives you a convenient way to quickly inspect specific behavioral aspects of your STRIDE-instrumented software.
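Because the filter is a regular expression, you can also match several labels at once. For instance, a pattern like the one below (chosen purely for illustration) would limit the trace to the IDLE and END points from the same script run:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run=do_state_changes.pl --trace="IDLE|END"

Any label matching either alternative is shown and everything else is suppressed, just as in the ACTIVE.* example above.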
Organizing with suites
TBD