Training Running Tests

Background

Device connection and STRIDE test execution is handled by the STRIDE Runner (aka "the runner"). The runner is a command line tool with a number of options for listing runnable items, executing tests, tracing on test points, and uploading your results. The runner is designed for use in both ad-hoc execution of tests and fully automated CI execution. You've already seen some basic test execution scenarios in the other training sections - now we will look explicitly at several of the most common use-cases for the runner.

Please review the following reference articles before proceeding:

Build a test app

Let's begin by building an off-target test app to use for these examples. The sources we want to include in this app are test_in_script/Expectations and test_in_c_cpp/TestClass. Copy these source files to your sample_src directory and follow the build instructions to create the app.
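
For reference, the copy step might look something like this in a POSIX shell (a rough sketch only; the exact relative paths depend on how the sample sources are laid out in your SDK, so adjust them to your installation):

cp -r test_in_script/Expectations/* sample_src/
cp -r test_in_c_cpp/TestClass/* sample_src/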

Listing items

Listing the contents of the database is an easy way to see what test units and fixturing functions are available for execution. Open a command line and try the following[1]:

stride --database="../out/TestApp.sidb" --list

You should see output something like this:

Functions
  Exp_DoStateChanges()
Test Units
  s2_testclass::Basic::Exceptions()
  s2_testclass::Basic::Fixtures()
  s2_testclass::Basic::Parameterized(char const * szString, unsigned int uExpectedLen)
  s2_testclass::Basic::Simple()
  s2_testclass::RuntimeServices::Dynamic()
  s2_testclass::RuntimeServices::Override()
  s2_testclass::RuntimeServices::Simple()
  s2_testclass::RuntimeServices::VarComment()
  s2_testclass::srTest::Dynamic()
  s2_testclass::srTest::Simple()

A few things to notice:

  • The Functions (if any) are listed before the Test Units.
  • Function and Test Unit arguments (input parameters) are shown, if any, along with the type of each parameter.

Tracing on test points

Tracing using the runner will show any STRIDE Test Points that are generated on the device during the window of time that the runner is connected. If you have test points that are continuously being emitted (for instance, in some background thread), then you can just connect to the device with tracing enabled to see them (you'll need to specify a --trace_timeout parameter to tell the runner how long to trace for). If your test points require some fixturing to be hit, then you'll need to specify a script to execute that makes the necessary fixture calls. This is precisely what we did in our previous Instrumentation training. If you recall from that training, we did the following[1]:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run=do_state_changes.pl --trace 

You should see output resembling this:

Loading database...
Connecting to device...
Executing...
  script "C:\s2\seaside\SDK\Windows\src\do_state_changes.pl"
1032564500 POINT "SET_NEW_STATE" - START [../sample_src/s2_expectations_source.c:49]
1032564501 POINT "START" [../sample_src/s2_expectations_source.c:63]
1032574600 POINT "SET_NEW_STATE" - IDLE [../sample_src/s2_expectations_source.c:49]
1032574601 POINT "IDLE" - 02 00 00 00 [../sample_src/s2_expectations_source.c:78]
1032584600 POINT "SET_NEW_STATE" - ACTIVE [../sample_src/s2_expectations_source.c:49]
1032584601 POINT "ACTIVE" - 03 00 00 00 [../sample_src/s2_expectations_source.c:101]
1032594700 POINT "ACTIVE Previous State" - IDLE [../sample_src/s2_expectations_source.c:103]
1032594701 POINT "JSON_DATA" - {"string_field": "a-string-value", "int_field": 42, "bool_field": true, "hex_field": "0xDEADBEEF"} [../sample_src/s2_expectations_source.c:105]
1032604800 POINT "SET_NEW_STATE" - IDLE [../sample_src/s2_expectations_source.c:49]
1032604801 POINT "IDLE" - 04 00 00 00 [../sample_src/s2_expectations_source.c:78]
1032614800 POINT "SET_NEW_STATE" - END [../sample_src/s2_expectations_source.c:49]
1032614801 POINT "END" - 05 00 00 00 [../sample_src/s2_expectations_source.c:117]
    > 0 passed, 0 failed, 0 in progress, 0 not in use.
  ---------------------------------------------------------------------
  Summary: 0 passed, 0 failed, 0 in progress, 0 not in use.

Disconnecting from device...
Saving result file...
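
By contrast, if your instrumentation emits test points continuously (for example from a background thread), no script is needed at all; you can connect with tracing enabled and let the runner listen for a fixed period. A minimal sketch, assuming --trace_timeout takes its duration in seconds (check the runner reference for the exact units):

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --trace --trace_timeout=30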

Now, let's trace again, but include a filter expression for the test points:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run=do_state_changes.pl --trace="ACTIVE.*"

...and now you should see fewer test points emitted:

Loading database...
Connecting to device...
Executing...
  script "C:\s2\seaside\SDK\Windows\src\do_state_changes.pl"
1047379801 POINT "ACTIVE" - 03 00 00 00 [../sample_src/s2_expectations_source.c:101]
1047389800 POINT "ACTIVE Previous State" - IDLE [../sample_src/s2_expectations_source.c:103]
    > 0 passed, 0 failed, 0 in progress, 0 not in use.
  ---------------------------------------------------------------------
  Summary: 0 passed, 0 failed, 0 in progress, 0 not in use.

Disconnecting from device...
Saving result file...

The --trace argument accepts a filter expression which takes the form of a regular expression that is applied to the test point label. In this case, we've specified a filter that permits any test points that begin with ACTIVE. Filtering gives you a convenient way to quickly inspect specific behavioral aspects of your instrumented software.
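
Since the filter is an ordinary regular expression matched against the label, you can zero in on whatever you like. For example, to watch only the state transitions from the run above (using the SET_NEW_STATE label shown in the earlier output), you could trace with:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run=do_state_changes.pl --trace="SET_NEW_STATE"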

Organizing with suites

Now let's briefly describe how you can use the runner to organize subsets of test units into suites. First, let's run our current set of test units without any explicit suite hierarchy:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb"  --run=*

If you examine the results file, you will see that this creates the default flat hierarchy with each test unit's corresponding suite at the root level of the report.

Now, let's try grouping our tests into suites:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="/BasicTests{s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures; s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple}" --run="/srTest{s2_testclass::srTest::Dynamic; s2_testclass::srTest::Simple}"

Now when you view the results, you will see two top-level suites - BasicTests and srTest - and within each are the suites for the test units we assigned to it.

If you plan to use this functionality to organize your tests into sub-suites, we recommend that you create options files to specify test unit groupings. This makes it easier to update and manage the suite hierarchy for your tests.

Using options files

The runner also supports options files which allow you to place commonly used or lengthy command line options in a file. Let's run the same example as above (using suites for organization) - however, this time we'll put the --run arguments into a file.

First, let's create a text file with the --run arguments in it - call it run_suites.opt and add these lines:

--run=/BasicTests{s2_testclass::Basic::Exceptions;s2_testclass::Basic::Fixtures;s2_testclass::Basic::Parameterized;s2_testclass::Basic::Simple}
--run=/srTest{s2_testclass::srTest::Dynamic;s2_testclass::srTest::Simple}

When creating an options file, you can separate individual options with newlines for easier maintenance and readability. You'll also notice we've omitted the quotation marks around the strings passed to each option; they are generally not needed in a file (on the command line they are needed so the shell passes the arguments through to the runner intact).

Once you have created the options file, you can use it with the runner like:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --options_file=run_suites.opt

When you execute this, you should see the same results as in the previous section.

Options files are a convenient way to persist and reuse common runner command line settings and to organize how your individual test units are run. Once many groups are adding tests to your system, options files provide a manageable way to group subsets of tests for execution. They also lend themselves to persisting and reusing your test space upload settings (if you are uploading), which typically do not change often.
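
As a further sketch, you could even keep the connection settings in the same file so that the whole invocation reduces to a single --options_file argument. This assumes the runner accepts --device and --database in an options file just as it does on the command line, and the file name run_all.opt is purely illustrative:

--device=TCP:localhost:8000
--database=../out/TestApp.sidb
--run=/BasicTests{s2_testclass::Basic::Exceptions;s2_testclass::Basic::Fixtures;s2_testclass::Basic::Parameterized;s2_testclass::Basic::Simple}
--run=/srTest{s2_testclass::srTest::Dynamic;s2_testclass::srTest::Simple}

The runner would then be invoked with just:

stride --options_file=run_all.opt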

Notes

  1. These examples assume you are executing the runner from the sample_src directory of your off-target framework. If that's not the case, you will need to adjust the database path accordingly.