<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.stridewiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jeffs</id>
	<title>STRIDE Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.stridewiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jeffs"/>
	<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Special:Contributions/Jeffs"/>
	<updated>2026-04-28T14:24:14Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.10</generator>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=STRIDE_Runner&amp;diff=14659</id>
		<title>STRIDE Runner</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=STRIDE_Runner&amp;diff=14659"/>
		<updated>2015-09-09T03:00:32Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: Switch to testrun from stride.exe&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
The &#039;&#039;&#039;testrun&#039;&#039;&#039; executable (a.k.a. Stride Runner) runs script or native target-based tests from a desktop (Windows, Linux or FreeBSD) host computer and optionally uploads results to [http://www.testspace.com Testspace]. &lt;br /&gt;
&lt;br /&gt;
 testrun [&amp;lt;i&amp;gt;options&amp;lt;/i&amp;gt;] [&amp;lt;i&amp;gt;TU...&amp;lt;/i&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
Each command line option that accepts an argument should be entered with a space between the option and its argument, e.g.&lt;br /&gt;
 testrun -o argument --option1 argument1&lt;br /&gt;
&lt;br /&gt;
= Input =&lt;br /&gt;
In order to run tests, you must provide the following information to the Runner:&lt;br /&gt;
&lt;br /&gt;
;database file&lt;br /&gt;
: This is the .sidb file that is created by the [[Build Tools]] during the target build process. This file contains meta-information used to run tests.&lt;br /&gt;
&lt;br /&gt;
;device parameters&lt;br /&gt;
: This tells how to connect to the target.&lt;br /&gt;
&lt;br /&gt;
;tests to run&lt;br /&gt;
: A set of Test Units and/or Test Scripts.&lt;br /&gt;
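For example, a minimal invocation that supplies all three inputs might look like this (the file name, host, and port are placeholders, not values from this page):&lt;br /&gt;

```text
testrun --database ./myproject.sidb --device TCP:localhost:8000 --run "/{*}"
```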
&lt;br /&gt;
== Running ==&lt;br /&gt;
By default, no tests are run. You need to explicitly list them using the &amp;lt;tt&amp;gt;--run&amp;lt;/tt&amp;gt; option.&lt;br /&gt;
&lt;br /&gt;
=== Order ===&lt;br /&gt;
When explicitly specified, Tests are run in the order in which they are given on the command line.&lt;br /&gt;
&lt;br /&gt;
It is possible to run all Test Units by explicitly specifying a wildcard (*). In that case they are run in alphabetical order.&lt;br /&gt;
&lt;br /&gt;
=== Rules ===&lt;br /&gt;
&lt;br /&gt;
* Test Units are specified by name or wildcard&lt;br /&gt;
* Scripts are specified by relative or absolute file name &lt;br /&gt;
* Tests (scripts or test units) are grouped together within curly braces, i.e. &amp;quot;{&amp;quot; and &amp;quot;}&amp;quot;&lt;br /&gt;
* When specifying more than one Test in a group, each Test instance must be delimited by a semicolon.&lt;br /&gt;
* When a Test accepts parameters, they are passed after the Test name, comma-separated in parentheses, i.e. &amp;quot;&amp;lt;tt&amp;gt;myTest(param1, param2)&amp;lt;/tt&amp;gt;&amp;quot;&lt;br /&gt;
* The output of each Test group is placed within a hierarchy of named suites. (The root suite does not have a name.)&lt;br /&gt;
* The suite into which a Test group is placed is specified immediately before the group, e.g. &#039;&#039;&amp;lt;tt&amp;gt;suitepath&amp;lt;/tt&amp;gt;&#039;&#039;&amp;lt;tt&amp;gt;{&amp;lt;/tt&amp;gt;&#039;&#039;&amp;lt;tt&amp;gt;testunitgroup&amp;lt;/tt&amp;gt;&#039;&#039;&amp;lt;tt&amp;gt;}&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Hierarchical suite paths are delimited by &amp;quot;/&amp;quot; (forward slash) and are always specified starting from the root.&lt;br /&gt;
* If the same Test is specified to be run more than once, and one or more results are to be written to the same suite, each conflicting Test name is appended with an incrementing count in the form of &amp;quot;(n)&amp;quot;. For example: results from three runs of Test Unit &amp;quot;myTest&amp;quot; are all written to the root suite. The Tests will be reported with the names &amp;quot;myTest&amp;quot;, &amp;quot;myTest(1)&amp;quot;, and &amp;quot;myTest(2)&amp;quot;.&lt;br /&gt;
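The &quot;(n)&quot; renaming described above can be sketched as follows (illustrative Python only; this is not the actual Runner implementation):&lt;br /&gt;

```python
# Illustrative sketch: duplicate Test names reported into the same
# suite are disambiguated with an incrementing "(n)" suffix.
def disambiguate(names):
    counts = {}
    result = []
    for name in names:
        n = counts.get(name, 0)
        result.append(name if n == 0 else "%s(%d)" % (name, n))
        counts[name] = n + 1
    return result

print(disambiguate(["myTest", "myTest", "myTest"]))
# prints ['myTest', 'myTest(1)', 'myTest(2)']
```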
&lt;br /&gt;
==== Wildcard Matching ====&lt;br /&gt;
The following wildcard characters are recognized in Test Unit specifications:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;*&amp;lt;/code&amp;gt;&#039;&#039;&#039; (asterisk) matches all Test Units&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt;&#039;&#039;&#039; (hyphen) matches all remaining Test Units (useful when putting Test Units into suites) &lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-r &amp;quot;/{*}&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
: Run all Test Units and put results into the root-level suite.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-r &amp;quot;/Suite{*}&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
: Run all Test Units and put results into a suite named &amp;quot;Suite&amp;quot; that is a child of the root suite.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-r &amp;quot;/{Test1;Test2;Test3}&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
: Run the Tests named &amp;quot;Test1&amp;quot;, &amp;quot;Test2&amp;quot;, and &amp;quot;Test3&amp;quot; in the designated order; put results into the root suite.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-r &amp;quot;/{Test1;Test2;Test3}&amp;quot; -r &amp;quot;/SecondPass{Test1}&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
: Run the Tests named &amp;quot;Test1&amp;quot;, &amp;quot;Test2&amp;quot;, and &amp;quot;Test3&amp;quot; in the designated order; put results into the root suite. Then run the Test Unit named &amp;quot;Test1&amp;quot; again and put the results into a suite named &amp;quot;SecondPass&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-r &amp;quot;/{Test1;Test2;Test3}&amp;quot; -r &amp;quot;/Remaining{-}&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
: Run the Test Units named &amp;quot;Test1&amp;quot;, &amp;quot;Test2&amp;quot;, and &amp;quot;Test3&amp;quot; in the designated order; put results into the root suite. Then run all remaining Test Units and put the results into a suite named &amp;quot;Remaining&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-r &amp;quot;/Suite1/Suite2{TestA; TestB}&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
: Run the Tests named &amp;quot;TestA&amp;quot; and &amp;quot;TestB&amp;quot;, and put the results into a suite named &amp;quot;Suite2&amp;quot; that is a child of a suite named &amp;quot;Suite1&amp;quot; that is a child of the root. Note that we must enclose the specification in quotes since the specification contains a space.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-r &amp;quot;/{TestA(29); TestB(5.67, \&amp;quot;some text\&amp;quot;)}&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
: Run the Tests named &amp;quot;TestA&amp;quot; with parameter 29 and &amp;quot;TestB&amp;quot; with parameters 5.67 and &amp;quot;some text&amp;quot;, and put the results into the root suite. Note that we must enclose the specification in quotes since it contains spaces and embedded quotes (which need to be escaped with backslashes).&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-r &amp;quot;TestX; TestY; TestZ&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
: This is a special convenience syntax. If the suite path and grouping braces are omitted, the Runner runs the named Test Units and puts their results into the root suite.&lt;br /&gt;
&lt;br /&gt;
= Output =&lt;br /&gt;
Upon test completion, test output is always written as follows:&lt;br /&gt;
&lt;br /&gt;
;console output&lt;br /&gt;
: A quick summary of results is written to standard output. Test counts are shown for the categories of &#039;&#039;&#039;passed&#039;&#039;&#039;, &#039;&#039;&#039;failed&#039;&#039;&#039;, &#039;&#039;&#039;in progress&#039;&#039;&#039;, and &#039;&#039;&#039;not in use&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
;local XML file&lt;br /&gt;
: Detailed results are written to a local XML file. By default, this file is written to the directory where the input STRIDE database file is located and is given the same name as the database file with an extension of &amp;quot;.xml&amp;quot;. If you open this file in a web browser, an XSL transform is automatically downloaded and applied before rendering.&lt;br /&gt;
&lt;br /&gt;
Optionally, you may also publish the results to your [http://www.testspace.com Testspace] upon test completion.&lt;br /&gt;
&lt;br /&gt;
;Testspace&lt;br /&gt;
: Results are uploaded using your Testspace URL and login credentials. You must specify the testspace name and project name when using this option.&lt;br /&gt;
&lt;br /&gt;
= Options =&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;10&amp;quot; style=&amp;quot;align:left;background-color:#ffffcc;&amp;quot;   &lt;br /&gt;
!width=&amp;quot;200pt&amp;quot;|&#039;&#039;&#039;Option&#039;&#039;&#039;&lt;br /&gt;
!width=&amp;quot;500pt&amp;quot;|&#039;&#039;&#039;Description&#039;&#039;&#039;&lt;br /&gt;
|- &lt;br /&gt;
| &#039;&#039;&#039;--database&#039;&#039;&#039; [ -d ] &#039;&#039;arg&#039;&#039;&lt;br /&gt;
| Specifies the name of an existing STRIDE database (.sidb) file that will be used for test execution.&lt;br /&gt;
&lt;br /&gt;
This can be a relative or absolute path. If the path contains one or more spaces, it must be enclosed in quotes. Under Windows, both DOS and [http://en.wikipedia.org/wiki/Path_(computing)#Uniform_Naming_Convention UNC names] are accepted.&lt;br /&gt;
&lt;br /&gt;
As a path delimiter you may use either the forward slash or backslash character. (Forward slashes are typically preferred due to the standard use of the backslash as an escape character.)&lt;br /&gt;
|- &lt;br /&gt;
| &#039;&#039;&#039;--device&#039;&#039;&#039; &#039;&#039;arg&#039;&#039;&lt;br /&gt;
| Specifies the parameters to be used to connect to the target device (i.e. TCP:&#039;&#039;host&#039;&#039;:&#039;&#039;port&#039;&#039; or COM&#039;&#039;port&#039;&#039;:&#039;&#039;rate&#039;&#039;:&#039;&#039;mode&#039;&#039;). For example: &lt;br /&gt;
* &amp;lt;code&amp;gt;TCP:localhost:8000&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;COM7:28800:8N1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/dev/ttyS2:57600:8N1&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;--timeout&#039;&#039;&#039; &#039;&#039;arg&#039;&#039; (=0)&lt;br /&gt;
| Specifies a watchdog timeout (in seconds) per single test function or test method. &lt;br /&gt;
Default value is 0. (0 = infinite timeout)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;--run&#039;&#039;&#039; [ -r ] &#039;&#039;arg&#039;&#039;&lt;br /&gt;
| Specifies a list of Test Units or Scripts to execute and their order of execution, with optional report grouping by suite. You may specify this option multiple times on the command line to include multiple groupings in a single test run. A Test may be specified more than once to run it multiple times. See the [[#Input|Input]] section above for more details.&lt;br /&gt;
Any nameless positional argument is treated as this option.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;--trace&#039;&#039;&#039; [ -t ] &#039;&#039;arg&#039;&#039; (=txt)&lt;br /&gt;
| Specifies if and how to trace on-target instrumentation (functions, messages, and test points/logs). &amp;lt;br/&amp;gt;&lt;br /&gt;
The following types of tracing formats are supported:&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;txt&amp;quot;&amp;lt;/code&amp;gt; - single-entry-per-line plain text file&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;yml&amp;quot;&amp;lt;/code&amp;gt; - [http://en.wikipedia.org/wiki/Yaml YAML] formatted file&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;-&amp;quot;&amp;lt;/code&amp;gt; - interactive single-entry-per-line plain text&lt;br /&gt;
With the exception of the interactive type, all trace output is attached as an annotation to the output file.&amp;lt;br/&amp;gt;&lt;br /&gt;
Default value is &amp;quot;txt&amp;quot;.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;--trace_timeout&#039;&#039;&#039; &#039;&#039;arg&#039;&#039;&lt;br /&gt;
| Specifies a trace timeout (in seconds).&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;--upload&#039;&#039;&#039; [ -u ] &#039;&#039;arg&#039;&#039; (=start)&lt;br /&gt;
| Specifies if and how to upload the results to Testspace. The argument specifies the type of upload. &amp;lt;br/&amp;gt;&lt;br /&gt;
The following types of uploads are supported:&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;full&amp;quot;&amp;lt;/code&amp;gt; - This causes a new result set to be created and marked complete. &lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;start&amp;quot;&amp;lt;/code&amp;gt; - This is used to create a new result set and mark it incomplete. An upload of this type should be followed by zero or more uploads that add results and one final upload that finishes the result set.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;add&amp;quot;&amp;lt;/code&amp;gt; - This is used to add more data to an existing incomplete result set. The result set will still be marked incomplete after this upload type.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;finish&amp;quot;&amp;lt;/code&amp;gt; - This is used to add more data to an existing incomplete result set and mark it complete. This is the last step in creating a result set from multiple executions.&lt;br /&gt;
Default value is &amp;quot;start&amp;quot;. &lt;br /&gt;
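As an illustration, a single result set assembled from three separate executions might be uploaded like this (the database, group, and Testspace names are placeholders):&lt;br /&gt;

```text
testrun -d app.sidb -r "/{GroupA}" -u start  --testspace user:pwd@acme.testspace.com/proj/space
testrun -d app.sidb -r "/{GroupB}" -u add    --testspace user:pwd@acme.testspace.com/proj/space
testrun -d app.sidb -r "/{GroupC}" -u finish --testspace user:pwd@acme.testspace.com/proj/space
```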
|-&lt;br /&gt;
| &#039;&#039;&#039;--testspace&#039;&#039;&#039; &#039;&#039;arg&#039;&#039;&lt;br /&gt;
| Specifies the Testspace URL to which the results will be uploaded, expressed in the following form: &amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;user:pwd@&amp;lt;domain&amp;gt;.testspace.com/&amp;lt;project&amp;gt;/&amp;lt;space&amp;gt;&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
Where &#039;domain&#039; is your organization&#039;s subdomain, and &#039;project&#039; and &#039;space&#039; are the names of the corresponding project and space. &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;--result&#039;&#039;&#039; &#039;&#039;arg&#039;&#039;&lt;br /&gt;
| Specifies the name of the output XML result file and, optionally, when uploading, the result set name along with a folder path (default &amp;quot;root&amp;quot;).&lt;br /&gt;
The following forms are supported:&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;myfile.xml&amp;quot;&amp;lt;/code&amp;gt; - sets the output XML result file and the result set name to the same name.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;myname|&amp;quot;&amp;lt;/code&amp;gt; - sets the result set name only; the default &amp;quot;testresult.xml&amp;quot; is used as the output XML result file.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;|myfile.xml&amp;quot;&amp;lt;/code&amp;gt; - sets the output XML result file only; Testspace chooses the result set name.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;myname|myfile.xml&amp;quot;&amp;lt;/code&amp;gt; - sets the result set name and the output XML result file to different names.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;myname/path/to/folder|myfile.xml&amp;quot;&amp;lt;/code&amp;gt; - sets the result set name and the output XML result file to different names, and uploads under an explicit folder path.&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;/path/to/folder|myfile.xml&amp;quot;&amp;lt;/code&amp;gt; - sets the output XML result file; Testspace chooses the result set name, but the upload goes under an explicit folder path.&lt;br /&gt;
Default value is &amp;quot;testresult.xml&amp;quot;. &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;--list&#039;&#039;&#039;&lt;br /&gt;
| Lists all Test Units and intercepted Functions in the specified STRIDE database to standard output.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;--diagnostics&#039;&#039;&#039;&lt;br /&gt;
| Performs a set of built-in diagnostic tests on the target. See [[Build_Integration#Running_Diagnostics|Running Diagnostics]] for details.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;--log_level&#039;&#039;&#039; &#039;&#039;arg&#039;&#039; (=warn)&lt;br /&gt;
| Controls the target source [[Test Log |Test Logs]] level. &amp;lt;br/&amp;gt;&lt;br /&gt;
The following levels are supported:&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;all&amp;quot;&amp;lt;/code&amp;gt; - enables all logging&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;info&amp;quot;&amp;lt;/code&amp;gt; - enables only INFO and higher&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;warn&amp;quot;&amp;lt;/code&amp;gt; - enables only WARN and higher&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;err&amp;quot;&amp;lt;/code&amp;gt; - enables only ERROR and higher&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;quot;off&amp;quot;&amp;lt;/code&amp;gt; - disables all logging&lt;br /&gt;
Default value is &amp;quot;warn&amp;quot;. &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;--options_file&#039;&#039;&#039; &#039;&#039;arg&#039;&#039;&lt;br /&gt;
| Specifies a file from which the program reads command line options. The format is the same as the command line except that options may be split across multiple lines. Lines in an options file that begin with the character &#039;#&#039; are ignored.&lt;br /&gt;
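A hypothetical options file might look like this (all values are placeholders):&lt;br /&gt;

```text
# options shared by all nightly runs
--database ./build/myproject.sidb
--device TCP:localhost:8000
--timeout 60
--run "/{*}"
```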
|- &lt;br /&gt;
| &#039;&#039;&#039;--version&#039;&#039;&#039;&lt;br /&gt;
| Prints version information to the console.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;--dry&#039;&#039;&#039;&lt;br /&gt;
| Performs a dry run: validates the input without connecting to the device or executing tests.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Using a Proxy ===&lt;br /&gt;
If you access the Internet via an HTTP proxy, you must set the environment variables &amp;lt;tt&amp;gt;HTTP_PROXY&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;HTTPS_PROXY&amp;lt;/tt&amp;gt; to specify the name and port of the proxy server.&lt;br /&gt;
&lt;br /&gt;
Symptoms of needing to specify a proxy are errors like the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Unable to proceed. Invalid testspace address or credentials. Couldn&#039;t access the specified company. &lt;br /&gt;
Failed performing request: [7] Couldn&#039;t connect to server&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Unable to proceed. Failed performing request: [6] Couldn&#039;t resolve host name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Define the environment variables in the format &#039;&#039;&#039;&amp;lt;tt&amp;gt;&amp;lt;nowiki&amp;gt;host:port&amp;lt;/nowiki&amp;gt;&amp;lt;/tt&amp;gt;&#039;&#039;&#039;, where &#039;&#039;host&#039;&#039; and &#039;&#039;port&#039;&#039; correspond to your proxy server. For example, on Windows:&lt;br /&gt;
 &amp;gt; set HTTP_PROXY=myproxy:8765&lt;br /&gt;
 &amp;gt; set HTTPS_PROXY=myproxy:8765&lt;br /&gt;
would instruct the Stride Runner to use the proxy named &amp;lt;tt&amp;gt;myproxy&amp;lt;/tt&amp;gt; on port &amp;lt;tt&amp;gt;8765&amp;lt;/tt&amp;gt; for both HTTP and HTTPS communication.&lt;br /&gt;
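On a Linux or FreeBSD host the equivalent would be (the proxy name and port are placeholders):&lt;br /&gt;

```shell
# Unix equivalent of the Windows 'set' commands above
export HTTP_PROXY=myproxy:8765
export HTTPS_PROXY=myproxy:8765
echo "$HTTP_PROXY"   # prints myproxy:8765
```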
&lt;br /&gt;
If your proxy requires authentication use the format &#039;&#039;&#039;&amp;lt;tt&amp;gt;&amp;lt;nowiki&amp;gt;username:password@host:port&amp;lt;/nowiki&amp;gt;&amp;lt;/tt&amp;gt;&#039;&#039;&#039; where &#039;&#039;username&#039;&#039; and &#039;&#039;password&#039;&#039; are your credentials.&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Test_Log&amp;diff=14634</id>
		<title>Test Log</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Test_Log&amp;diff=14634"/>
		<updated>2015-07-21T18:54:21Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Test Log macros provide a simple means to add information from the source under test to the currently executing test case. This information is added to the test report as annotations with a level of either &#039;&#039;Error&#039;&#039;, &#039;&#039;Warning&#039;&#039;, or &#039;&#039;Info&#039;&#039; according to the macro that is used. The log messages are also captured when tracing is enabled in the [[Stride Runner]].&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;srTEST_LOG_xx()&amp;lt;/tt&amp;gt; macros are intended as general instrumentation macros for capturing information in the source under test. Logs differ from Test Points in that they cannot be used as the basis of expectation tests - they are strictly informational. The Stride log messages are intended to supplement, not supplant, [[Test Point | Test Points]].&lt;br /&gt;
&lt;br /&gt;
= Reference =&lt;br /&gt;
To use these macros you should include the &#039;&#039;&#039;srtest.h&#039;&#039;&#039; header file from the Stride Runtime in your compilation unit. The Test Log macros are active only when &amp;lt;tt&amp;gt;STRIDE_ENABLED&amp;lt;/tt&amp;gt; is &amp;lt;tt&amp;gt;#define&amp;lt;/tt&amp;gt;d, therefore it is practical to place these macros in-line in production source. When &amp;lt;tt&amp;gt;STRIDE_ENABLED&amp;lt;/tt&amp;gt; is not &amp;lt;tt&amp;gt;#define&amp;lt;/tt&amp;gt;d, these macros evaluate to nothing. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Error Logging&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| srTEST_LOG_ERROR(&#039;&#039;message&#039;&#039;, ...)&lt;br /&gt;
| &#039;&#039;message&#039;&#039; is a pointer to a null-terminated format string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;...&#039;&#039; variable list matching the format string&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Warning Logging&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| srTEST_LOG_WARN(&#039;&#039;message&#039;&#039;, ...)&lt;br /&gt;
| &#039;&#039;message&#039;&#039; is a pointer to a null-terminated format string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;...&#039;&#039; variable list matching the format string&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Info Logging&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| srTEST_LOG_INFO(&#039;&#039;message&#039;&#039;, ...)&lt;br /&gt;
| &#039;&#039;message&#039;&#039; is a pointer to a null-terminated format string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;...&#039;&#039; variable list matching the format string&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* The maximum length of the message string is approximately 1000 characters. If the maximum length is exceeded, the message string is truncated.&lt;br /&gt;
&lt;br /&gt;
=== C++ Only Features ===&lt;br /&gt;
In C++ source, the macros above support appending additional content to the message using the &amp;lt;&amp;lt; operator.&lt;br /&gt;
&lt;br /&gt;
=== Log Level ===&lt;br /&gt;
&lt;br /&gt;
By default, only logs with a level of &#039;&#039;Error&#039;&#039; or &#039;&#039;Warning&#039;&#039; are propagated to the host. The [[STRIDE Runner#Options | Stride Runner]] provides finer control over this behavior via the &amp;lt;tt&amp;gt;--log_level&amp;lt;/tt&amp;gt; option.&lt;br /&gt;
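For example, to propagate all log levels during a run (the file name, host, and port are placeholders):&lt;br /&gt;

```text
testrun -d ./myproject.sidb --device TCP:localhost:8000 -r "/{*}" --log_level all
```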
&lt;br /&gt;
== Code Snippets ==&lt;br /&gt;
&amp;lt;source lang=&#039;c&#039;&amp;gt;&lt;br /&gt;
#include &amp;lt;srtest.h&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
srTEST_LOG_ERROR(&amp;quot;This is an error message.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
srTEST_LOG_WARN(&amp;quot;This is a warning message with format string %d.&amp;quot;, 123);&lt;br /&gt;
&lt;br /&gt;
srTEST_LOG_INFO(&amp;quot;This is an info message with format string %s and %s.&amp;quot;, &amp;quot;this&amp;quot;, &amp;quot;that&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
#ifdef __cplusplus&lt;br /&gt;
srTEST_LOG_ERROR(&amp;quot;some error: &amp;quot;) &amp;lt;&amp;lt; &amp;quot;stream input supported under c++&amp;quot;;&lt;br /&gt;
#endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Test_Point&amp;diff=14633</id>
		<title>Test Point</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Test_Point&amp;diff=14633"/>
		<updated>2015-07-21T17:21:26Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: Update implementation step titles&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Source instrumentation is the process by which developers and domain experts selectively instrument the source under test for the purpose of writing test scenarios against the executing application. Implementing tests that leverage source instrumentation is called &#039;&#039;&#039;Expectation Testing&#039;&#039;&#039;. This validation technique is very useful for verifying proper code sequencing based on the software&#039;s internal design. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;c&#039;&amp;gt;&lt;br /&gt;
#include &amp;lt;srtest.h&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
/* a test point with no payload */&lt;br /&gt;
srTEST_POINT(&amp;quot;first test point&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
/* a test point with binary payload */&lt;br /&gt;
srTEST_POINT_DATA(&amp;quot;second test point&amp;quot;, myData, sizeofMyData);&lt;br /&gt;
&lt;br /&gt;
/* a test point with simple string payload */&lt;br /&gt;
srTEST_POINT_STR(&amp;quot;third test point&amp;quot;, &amp;quot;payload with simple string&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
/* a test point with formatted string payload */&lt;br /&gt;
srTEST_POINT_STR(&amp;quot;third test point&amp;quot;, &amp;quot;payload with format string %d&amp;quot;, myVar);&lt;br /&gt;
&lt;br /&gt;
#ifdef __cplusplus&lt;br /&gt;
srTEST_POINT_STR(&amp;quot;c++ test point&amp;quot;, &amp;quot;&amp;quot;) &amp;lt;&amp;lt; &amp;quot;stream input supported under c++&amp;quot;;&lt;br /&gt;
#endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unlike traditional unit testing, which drives testing based on input parameters and isolated functionality, Expectation Testing is executed within a fully functional software build running on a real target platform. Test Points are not dependent on input parameters, but often leverage the same types of input/output controls used by functional and black-box testing. &lt;br /&gt;
&lt;br /&gt;
Another unique feature of this type of testing is that domain expertise is not required to implement a test. Developers and domain experts use instrumentation to export design knowledge of the software to the entire team. Furthermore, no stubbing is required, no special logic is needed to generate input parameters, and no advanced knowledge of how the application software is coded is required. &lt;br /&gt;
&lt;br /&gt;
To enable effective test coverage, developers and domain experts insert instrumentation at key locations to gain insight and testability. Here are some general suggested source code areas to consider instrumenting:&lt;br /&gt;
* entry/exit points of critical functions&lt;br /&gt;
* state transitions&lt;br /&gt;
* critical or interesting data transitions (using optional payload to convey data values)&lt;br /&gt;
* callback routines&lt;br /&gt;
* data persistence&lt;br /&gt;
* error conditions&lt;br /&gt;
&lt;br /&gt;
The steps required to implement an Expectation test are the following:&lt;br /&gt;
#[[#Instrument Code | Instrument Code]]&lt;br /&gt;
#[[#Define Expectations | Define Expectations]]&lt;br /&gt;
#[[#Write Test Units | Write Test Units]]&lt;br /&gt;
&lt;br /&gt;
== Instrument Code ==&lt;br /&gt;
To make the software &#039;&#039;testable&#039;&#039;, the first step in the process is for the experts to selectively insert instrumentation macros into the source code. Test Points have a nominal impact on the performance of the application as they are only active during test data collection&amp;lt;ref name=&amp;quot;n1&amp;quot;&amp;gt; Test data collection is typically implemented in a low priority background thread. The data is captured in the calling routine&#039;s thread context (no context switch) but processed in the background or on the host. Instrumentation macros return immediately to the caller (i.e. no-op) when testing is not active.&amp;lt;/ref&amp;gt;. Test Points contain names and optional payload data. When Test Points are activated, they are collected in the background, along with timing and any associated data. The set of Test Points hit, their order, timing, and data content can all be used to validate that the software is behaving as expected. [[Test_Log | Test Logs]] can also be added to the source code to provide additional information in the context of an executing test. &lt;br /&gt;
&lt;br /&gt;
To specify a Test Point you should include the &#039;&#039;&#039;srtest.h&#039;&#039;&#039; header file from the Stride Runtime in your compilation unit. The Test Point macros are active only when &amp;lt;tt&amp;gt;STRIDE_ENABLED&amp;lt;/tt&amp;gt; is &amp;lt;tt&amp;gt;#define&amp;lt;/tt&amp;gt;d, therefore it is practical to place these macros in-line in production source. When &amp;lt;tt&amp;gt;STRIDE_ENABLED&amp;lt;/tt&amp;gt; is not &amp;lt;tt&amp;gt;#define&amp;lt;/tt&amp;gt;d, these macros evaluate to nothing.  &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Test Point Macros&#039;&#039;&#039;&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;srTEST_POINT&#039;&#039;&#039;(&#039;&#039;label&#039;&#039;)&lt;br /&gt;
| &#039;&#039;label&#039;&#039; is a pointer to a null-terminated string&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;srTEST_POINT_DATA&#039;&#039;&#039;(&#039;&#039;label&#039;&#039;, &#039;&#039;data&#039;&#039;, &#039;&#039;size&#039;&#039;)&lt;br /&gt;
| &#039;&#039;label&#039;&#039; is a pointer to a null-terminated string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;data&#039;&#039; is a pointer to a byte sequence&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;size&#039;&#039; is the size of the &#039;&#039;data&#039;&#039; in bytes&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;srTEST_POINT_STR&#039;&#039;&#039;(&#039;&#039;label&#039;&#039;, &#039;&#039;message&#039;&#039;, ...)&lt;br /&gt;
| &#039;&#039;label&#039;&#039; is a pointer to a null-terminated string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;message&#039;&#039; is a pointer to a null-terminated format string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;...&#039;&#039; variable list matching the format string&amp;lt;br/&amp;gt;&lt;br /&gt;
When used in the context of a C++ compilation unit, this macro also supports the streaming operator to append to the message string (see the example above)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Define Expectations ==&lt;br /&gt;
&lt;br /&gt;
In addition to instrumenting the source code with Test Points, you must also define the [[Expectations]] of the Test Points. This involves defining the list of Test Points expected to be hit during a given test scenario. Expectations can also include any &#039;&#039;&#039;data&#039;&#039;&#039; associated with a Test Point that requires validation.&lt;br /&gt;
&lt;br /&gt;
== Write Test Units ==&lt;br /&gt;
Once the source under test has been instrumented and the [[Expectations]] defined, Stride offers a number of techniques that can be used for implementing &#039;&#039;&#039;expectation tests&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* Tests can be written in [[Test_Point_Testing_in_C/C%2B%2B | C or C++]] and executed on the device under test using the Stride framework. &lt;br /&gt;
* Tests can be written in [[Perl_Script_APIs | Perl]].&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Test_Point&amp;diff=14632</id>
		<title>Test Point</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Test_Point&amp;diff=14632"/>
		<updated>2015-07-21T17:16:24Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Source instrumentation is the process by which developers and domain experts selectively instrument the source under test for the purpose of writing test scenarios against the executing application. Implementing tests that leverage source instrumentation is called &#039;&#039;&#039;Expectation Testing&#039;&#039;&#039;. This validation technique is very useful for verifying proper code sequencing based on the software&#039;s internal design. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;c&#039;&amp;gt;&lt;br /&gt;
#include &amp;lt;srtest.h&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
/* a test point with no payload */&lt;br /&gt;
srTEST_POINT(&amp;quot;first test point&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
/* a test point with binary payload */&lt;br /&gt;
srTEST_POINT_DATA(&amp;quot;second test point&amp;quot;, myData, sizeofMyData);&lt;br /&gt;
&lt;br /&gt;
/* a test point with simple string payload */&lt;br /&gt;
srTEST_POINT_STR(&amp;quot;third test point&amp;quot;, &amp;quot;payload with simple string&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
/* a test point with formatted string payload */&lt;br /&gt;
srTEST_POINT_STR(&amp;quot;third test point&amp;quot;, &amp;quot;payload with format string %d&amp;quot;, myVar);&lt;br /&gt;
&lt;br /&gt;
#ifdef __cplusplus&lt;br /&gt;
srTEST_POINT_STR(&amp;quot;c++ test point&amp;quot;, &amp;quot;&amp;quot;) &amp;lt;&amp;lt; &amp;quot;stream input supported under c++&amp;quot;;&lt;br /&gt;
#endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unlike traditional unit testing, which drives tests with input parameters and isolates functionality, Expectation testing executes within a fully functional software build running on a real target platform. Test Points are not dependent on input parameters, but often leverage the same types of input/output controls used by functional and black-box testing.&lt;br /&gt;
&lt;br /&gt;
Another unique feature of this type of testing is that domain expertise is not required to implement a test. Developers and domain experts use instrumentation to export design knowledge of the software to the entire team. Furthermore, tests require no stubbing, no special logic to generate input parameters, and no advanced knowledge of how the application software is coded.&lt;br /&gt;
&lt;br /&gt;
To enable effective test coverage, developers and domain experts insert instrumentation at key locations to gain insight and testability. Here are some general suggested source code areas to consider instrumenting:&lt;br /&gt;
* entry/exit points of critical functions&lt;br /&gt;
* state transitions&lt;br /&gt;
* critical or interesting data transitions (using optional payload to convey data values)&lt;br /&gt;
* callback routines&lt;br /&gt;
* data persistence&lt;br /&gt;
* error conditions&lt;br /&gt;
&lt;br /&gt;
The steps required to implement an Expectation test are the following:&lt;br /&gt;
#[[#Instrumentation | Instrumentation]]&lt;br /&gt;
#[[#Define your Expectations | Define your Expectations]]&lt;br /&gt;
#[[#Write the Test Unit | Write the Test Unit]]&lt;br /&gt;
&lt;br /&gt;
== Instrumentation ==&lt;br /&gt;
To make the software &#039;&#039;testable&#039;&#039;, the first step in the process is for the experts to selectively insert instrumentation macros into the source code. Test Points have a nominal impact on the performance of the application as they are only active during test data collection&amp;lt;ref name=&amp;quot;n1&amp;quot;&amp;gt; Test data collection is typically implemented in a low priority background thread. The data is captured in the calling routine&#039;s thread context (no context switch) but processed in the background or on the host. Instrumentation macros return immediately to the caller (i.e. no-op) when testing is not active.&amp;lt;/ref&amp;gt;. Test Points contain names and optional payload data. When Test Points are activated, they are collected in the background, along with timing and any associated data. The set of Test Points hit, their order, timing, and data content can all be used to validate that the software is behaving as expected. [[Test_Log | Test Logs]] can also be added to the source code to provide additional information in the context of an executing test. &lt;br /&gt;
&lt;br /&gt;
To specify a Test Point, include the &#039;&#039;&#039;srtest.h&#039;&#039;&#039; header file from the Stride Runtime in your compilation unit. The Test Point macros are active only when &amp;lt;tt&amp;gt;STRIDE_ENABLED&amp;lt;/tt&amp;gt; is &amp;lt;tt&amp;gt;#define&amp;lt;/tt&amp;gt;d; it is therefore practical to leave these macros in-line in production source. When &amp;lt;tt&amp;gt;STRIDE_ENABLED&amp;lt;/tt&amp;gt; is not &amp;lt;tt&amp;gt;#define&amp;lt;/tt&amp;gt;d, these macros evaluate to nothing.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Test Point Macros&#039;&#039;&#039;&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;srTEST_POINT&#039;&#039;&#039;(&#039;&#039;label&#039;&#039;)&lt;br /&gt;
| &#039;&#039;label&#039;&#039; is a pointer to a null-terminated string&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;srTEST_POINT_DATA&#039;&#039;&#039;(&#039;&#039;label&#039;&#039;, &#039;&#039;data&#039;&#039;, &#039;&#039;size&#039;&#039;)&lt;br /&gt;
| &#039;&#039;label&#039;&#039; is a pointer to a null-terminated string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;data&#039;&#039; is a pointer to a byte sequence&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;size&#039;&#039; is the size of the &#039;&#039;data&#039;&#039; in bytes&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;srTEST_POINT_STR&#039;&#039;&#039;(&#039;&#039;label&#039;&#039;, &#039;&#039;message&#039;&#039;)&lt;br /&gt;
| &#039;&#039;label&#039;&#039; is a pointer to a null-terminated string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;message&#039;&#039; is a pointer to a null-terminated format string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;...&#039;&#039; variable list matching the format string&amp;lt;br/&amp;gt;&lt;br /&gt;
When used in a C++ compilation unit, this macro also supports the streaming operator to append to the message string (see example above)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Define your Expectations ==&lt;br /&gt;
&lt;br /&gt;
In addition to instrumenting the source code with Test Points, you must also define the [[Expectations]] of the Test Points. This involves defining the list of Test Points expected to be hit during a given test scenario. Expectations can also include any &#039;&#039;&#039;data&#039;&#039;&#039; associated with a Test Point that requires validation.&lt;br /&gt;
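&lt;br /&gt;
As an illustrative sketch only (the structure below is a simplified stand-in for the srTestPointExpect_t type used by the C test point APIs; the real declaration lives in srtest.h), an expected list for a hypothetical scenario can be written as plain data:&lt;br /&gt;

```c
/* Illustrative sketch only: a simplified stand-in for the expected-list
   structure; the real srTestPointExpect_t is declared in srtest.h. */
typedef struct {
    const char   *label;   /* test point identity */
    unsigned long count;   /* optional expected hit count */
} ExpectSketch;

/* hypothetical scenario: START once, TICK three times, STOP once */
static const ExpectSketch expected[] = {
    { "START", 1 },
    { "TICK",  3 },
    { "STOP",  1 },
    { 0 }                  /* all-zero terminator marks the end of the list */
};

/* count the real entries before the terminator */
static int expected_len(void) {
    int n = 0;
    while (expected[n].label != 0) {
        n++;
    }
    return n;
}
```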
&lt;br /&gt;
== Write the Test Unit ==&lt;br /&gt;
Once the source under test has been instrumented and the [[Expectations]] defined, Stride offers a number of techniques that can be used for implementing &#039;&#039;&#039;expectation tests&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* Tests can be written in [[Test_Point_Testing_in_C/C%2B%2B | C or C++]] and executed on the device under test using the Stride framework. &lt;br /&gt;
* Tests can be written in [[Perl_Script_APIs | Perl]].&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Test_Log&amp;diff=14589</id>
		<title>Test Log</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Test_Log&amp;diff=14589"/>
		<updated>2015-07-08T21:25:22Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: Replace STRIDE with Stride&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Test Log macros provide a simple means to add information from the source under test to the currently executing test case. This information is added to the test report as annotations with a level of either &#039;&#039;Error&#039;&#039;, &#039;&#039;Warning&#039;&#039;, or &#039;&#039;Info&#039;&#039; according to the macro that is used. The log messages are also captured when tracing is enabled in the [[Stride Runner]].&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;srTEST_LOG_xx()&amp;lt;/tt&amp;gt; macros are general-purpose instrumentation macros for capturing information in the source under test. Logs differ from test points in that they cannot be used as the basis of expectation tests; they are strictly informational. The Stride log messages are intended to supplement, not supplant, [[Test Point | Test Points]].&lt;br /&gt;
&lt;br /&gt;
= Reference =&lt;br /&gt;
To use these macros, include the &#039;&#039;&#039;srtest.h&#039;&#039;&#039; header file from the Stride Runtime in your compilation unit. The Test Log macros are active only when &amp;lt;tt&amp;gt;STRIDE_ENABLED&amp;lt;/tt&amp;gt; is &amp;lt;tt&amp;gt;#define&amp;lt;/tt&amp;gt;d; it is therefore practical to leave these macros in-line in production source. When &amp;lt;tt&amp;gt;STRIDE_ENABLED&amp;lt;/tt&amp;gt; is not &amp;lt;tt&amp;gt;#define&amp;lt;/tt&amp;gt;d, these macros evaluate to nothing.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Error Logging&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| srTEST_LOG_ERROR(&#039;&#039;message&#039;&#039;, ...)&lt;br /&gt;
| &#039;&#039;message&#039;&#039; is a pointer to a null-terminated format string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;...&#039;&#039; variable list matching the format string&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Warning Logging&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| srTEST_LOG_WARN(&#039;&#039;message&#039;&#039;, ...)&lt;br /&gt;
| &#039;&#039;message&#039;&#039; is a pointer to a null-terminated format string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;...&#039;&#039; variable list matching the format string&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Info Logging&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| srTEST_LOG_INFO(&#039;&#039;message&#039;&#039;, ...)&lt;br /&gt;
| &#039;&#039;message&#039;&#039; is a pointer to a null-terminated format string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;...&#039;&#039; variable list matching the format string&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* The maximum length of the message string is approximately 1000 characters. If the maximum length is exceeded, the message string is truncated.&lt;br /&gt;
&lt;br /&gt;
=== C++ Only Features ===&lt;br /&gt;
In C++ source, the macros above support appending additional content to the message by using the &amp;lt;&amp;lt; operator.&lt;br /&gt;
&lt;br /&gt;
=== Log Level ===&lt;br /&gt;
&lt;br /&gt;
By default, only logs with a level of &#039;&#039;Error&#039;&#039; or &#039;&#039;Warning&#039;&#039; are propagated to the host. The [[STRIDE Runner#Options | Stride Runner]] &amp;lt;tt&amp;gt;--log_level&amp;lt;/tt&amp;gt; option provides finer control over this behavior.&lt;br /&gt;
&lt;br /&gt;
== Code Snippets ==&lt;br /&gt;
&amp;lt;source lang=&#039;c&#039;&amp;gt;&lt;br /&gt;
#include &amp;lt;srtest.h&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
srTEST_LOG_ERROR(&amp;quot;This is an error message.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
srTEST_LOG_WARN(&amp;quot;This is a warning message with format string %d.&amp;quot;, 123);&lt;br /&gt;
&lt;br /&gt;
srTEST_LOG_INFO(&amp;quot;This is an info message with format string %s and %s.&amp;quot;, &amp;quot;this&amp;quot;, &amp;quot;that&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
#ifdef __cplusplus&lt;br /&gt;
srTEST_LOG_ERROR(&amp;quot;some error: &amp;quot;) &amp;lt;&amp;lt; &amp;quot;stream input supported under c++&amp;quot;;&lt;br /&gt;
#endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Test_Point_Testing_in_C/C%2B%2B&amp;diff=14588</id>
		<title>Test Point Testing in C/C++</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Test_Point_Testing_in_C/C%2B%2B&amp;diff=14588"/>
		<updated>2015-07-08T21:14:08Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: Replace STRIDE with Stride&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Test code to validate the [[Expectations]] of the Test Points often follows this basic pattern:&lt;br /&gt;
&lt;br /&gt;
# Specify an expectation set consisting of expected (i.e. the test points that are expected to be hit) and optionally unexpected (i.e. the test points that are not expected to be hit) test points&lt;br /&gt;
# Register the expectation set with the Stride runtime&lt;br /&gt;
# Invoke the software under test, causing instrumentation points to be hit (this may not be necessary if the instrumented software under test is constantly running)&lt;br /&gt;
# Wait for the expectation set to be satisfied or a timeout to occur&lt;br /&gt;
&lt;br /&gt;
Here is an example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#include &amp;lt;srtest.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
void tf_testpoint_wait(void)&lt;br /&gt;
{&lt;br /&gt;
  /* specify expected set */&lt;br /&gt;
  srTestPointExpect_t expected[]= {&lt;br /&gt;
        {&amp;quot;START&amp;quot;}, &lt;br /&gt;
        {&amp;quot;ACTIVE&amp;quot;}, &lt;br /&gt;
        {&amp;quot;IDLE&amp;quot;},&lt;br /&gt;
        {&amp;quot;END&amp;quot;}, &lt;br /&gt;
        {0}};&lt;br /&gt;
&lt;br /&gt;
  /* specify unexpected set */&lt;br /&gt;
  srTestPointUnexpect_t unexpected[]= {&lt;br /&gt;
        {&amp;quot;INVALID&amp;quot;}, &lt;br /&gt;
        {0}};&lt;br /&gt;
&lt;br /&gt;
  /* register the expectation set */&lt;br /&gt;
  srWORD handle;&lt;br /&gt;
  srTestPointSetup(expected, unexpected, srTEST_POINT_EXPECT_UNORDERED, srTEST_CASE_DEFAULT, &amp;amp;handle);&lt;br /&gt;
&lt;br /&gt;
  /* start your asynchronous operation */&lt;br /&gt;
  ...&lt;br /&gt;
&lt;br /&gt;
  /* wait for expectation set to be satisfied or a timeout to occur */&lt;br /&gt;
  srTestPointWait(handle, 1000);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
#ifdef _SCL&lt;br /&gt;
#pragma scl_test_flist(&amp;quot;testfunc&amp;quot;, tf_testpoint_wait)&lt;br /&gt;
#endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
&lt;br /&gt;
=== Expectation Set ===&lt;br /&gt;
An expectation set is specified with an [[Expectations#Expected_List|expected]] array of &#039;&#039;&#039;srTestPointExpect_t&#039;&#039;&#039; structures and a second optional [[Expectations#Unexpected_List|unexpected]] array of &#039;&#039;&#039;srTestPointUnexpect_t&#039;&#039;&#039; structures. &lt;br /&gt;
&lt;br /&gt;
==== Expected Array ====&lt;br /&gt;
srTestPointExpect_t is typedef&#039;d as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;c&#039;&amp;gt;&lt;br /&gt;
typedef struct&lt;br /&gt;
{&lt;br /&gt;
    /* the label value is considered the test point&#039;s identity */&lt;br /&gt;
    const srCHAR *          label;&lt;br /&gt;
    /* optional, count specifies the number of times the test point is expected to be hit */ &lt;br /&gt;
    srDWORD                 count;&lt;br /&gt;
    /* optional, predicate function to use for payload validation against user data */ &lt;br /&gt;
    srTestPointPredicate_t  predicate;&lt;br /&gt;
    /* optional, user data to validate the payload against */&lt;br /&gt;
    void *                  user;&lt;br /&gt;
} srTestPointExpect_t;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTES:&#039;&#039;&#039;&lt;br /&gt;
* The end of the array must be marked by an srTestPointExpect_t entry set to all zero values&lt;br /&gt;
* The &#039;&#039;count&#039;&#039;, &#039;&#039;predicate&#039;&#039; and &#039;&#039;user&#039;&#039; members may be omitted in the array declaration (the compiler automatically initializes them to 0)&lt;br /&gt;
* A &#039;&#039;count&#039;&#039; value of either 0 or 1 is interpreted as 1&lt;br /&gt;
* The &#039;&#039;count&#039;&#039; can be set to &amp;quot;0 or more&amp;quot; by using the [[Expectations#Special_Processing|special]] srTEST_POINT_ANY_COUNT symbolic constant&lt;br /&gt;
* A &#039;&#039;predicate&#039;&#039; value of 0 indicates that any data associated with the test point payload will be ignored&lt;br /&gt;
* A &#039;&#039;user&#039;&#039; value of 0 indicates that there is no user data associated with this test point&lt;br /&gt;
* The &#039;&#039;label&#039;&#039; can be set to match &#039;&#039;any test point&#039;&#039; within the current expected set by using the [[Expectations#Special_Processing|special]] srTEST_POINT_ANY_IN_SET or srTEST_POINT_ANY_AT_ALL symbolic constants.&lt;br /&gt;
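&lt;br /&gt;
The notes above can be illustrated with a self-contained sketch. The basic types here are stand-ins so the snippet compiles on its own; the real definitions come from srtest.h:&lt;br /&gt;

```c
/* Sketch: stand-in basic types so the snippet is self-contained;
   the real definitions come from srtest.h. */
typedef char          srCHAR;
typedef unsigned long srDWORD;
typedef struct srTestPoint srTestPoint_t;
typedef unsigned char (*srTestPointPredicate_t)(const srTestPoint_t *, void *);

typedef struct {
    const srCHAR           *label;      /* the test point identity */
    srDWORD                 count;      /* optional expected hit count */
    srTestPointPredicate_t  predicate;  /* optional payload validator */
    void                   *user;       /* optional user data */
} srTestPointExpect_t;

/* trailing members may be omitted; missing initializers default to 0 */
static const srTestPointExpect_t expected[] = {
    { "IDLE" },        /* count 0 is interpreted as 1, payload ignored */
    { "ACTIVE", 2 },   /* expected to be hit exactly twice */
    { 0 }              /* all-zero terminator */
};
```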
&lt;br /&gt;
==== Unexpected Array ====&lt;br /&gt;
srTestPointUnexpect_t is typedef&#039;d as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;c&#039;&amp;gt;&lt;br /&gt;
typedef struct&lt;br /&gt;
{&lt;br /&gt;
    /* the label value is considered the test point&#039;s identity */&lt;br /&gt;
    const srCHAR *          label;&lt;br /&gt;
} srTestPointUnexpect_t;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTES:&#039;&#039;&#039;&lt;br /&gt;
* The end of the array must be marked by an srTestPointUnexpect_t entry set to all zero values&lt;br /&gt;
* The &#039;&#039;label&#039;&#039; can be set to match &#039;&#039;everything else&#039;&#039; relative to the &#039;&#039;&#039;expected&#039;&#039;&#039; array by using the [[Expectations#Special_Processing|special]] srTEST_POINT_EVERYTHING_ELSE symbolic constant.&lt;br /&gt;
&lt;br /&gt;
==== srTestPointPredicate_t ====&lt;br /&gt;
When defining the expectation set, an optional [[Expectations#State_Data_Validation|payload validation]] predicate function can be specified per entry. Its signature should match the following type:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
extern &amp;quot;C&amp;quot; typedef srBYTE (*srTestPointPredicate_t)(const srTestPoint_t* ptTP, void* pvUser);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;10&amp;quot; style=&amp;quot;align:left;&amp;quot;  &lt;br /&gt;
| &#039;&#039;&#039;Parameters&#039;&#039;&#039; &lt;br /&gt;
| &#039;&#039;&#039;Type&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Description&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| ptTP &lt;br /&gt;
| Input&lt;br /&gt;
| Pointer to the currently processed Test Point.&lt;br /&gt;
|-&lt;br /&gt;
| pvUser&lt;br /&gt;
| Input &lt;br /&gt;
| Pointer to opaque user data associated with an entry in the expectation set.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;10&amp;quot; style=&amp;quot;align:left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;Return Value&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039; Description&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| srBYTE &lt;br /&gt;
| srTRUE for valid, srFALSE for invalid, srIGNORE otherwise.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE:&#039;&#039;&#039;&lt;br /&gt;
* As part of the standard Stride distribution there are three predefined function predicate helpers:&lt;br /&gt;
** srTestPointMemCmp - byte comparison&lt;br /&gt;
** srTestPointStrCmp - string case sensitive comparison&lt;br /&gt;
** srTestPointStrCaseCmp - string case insensitive comparison&lt;br /&gt;
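&lt;br /&gt;
A custom predicate can be supplied as well. The sketch below assumes a simplified srTestPoint_t payload layout (the field names are illustrative; the real type is declared in srtest.h) and implements a case-sensitive string comparison equivalent in spirit to srTestPointStrCmp:&lt;br /&gt;

```c
/* Sketch of a user-supplied payload predicate. The srTestPoint_t layout
   shown is illustrative; the real type is declared in srtest.h. */
typedef unsigned char srBYTE;
enum { srFALSE = 0, srTRUE = 1 };

typedef struct {
    const char   *data;  /* illustrative payload fields */
    unsigned long size;
} srTestPoint_t;

/* return srTRUE when the string payload equals the user-supplied string */
static srBYTE match_str(const srTestPoint_t *tp, void *user) {
    const char *got  = tp->data;
    const char *want = (const char *)user;
    while (*got == *want) {
        if (*got == 0) {
            return srTRUE;   /* both strings ended together: equal */
        }
        got++;
        want++;
    }
    return srFALSE;          /* first mismatch: not equal */
}
```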
&lt;br /&gt;
==== srTestPointSetup ====&lt;br /&gt;
The srTestPointSetup() routine is used to register an expectation set.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
srBOOL srTestPointSetup(srTestPointExpect_t* ptExpected, &lt;br /&gt;
                        srTestPointUnexpect_t* ptUnexpected, &lt;br /&gt;
                        srBYTE yMode, &lt;br /&gt;
                        srTestCaseHandle_t tTestCase, &lt;br /&gt;
                        srWORD* pwHandle);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;10&amp;quot; style=&amp;quot;align:left;&amp;quot;  &lt;br /&gt;
| &#039;&#039;&#039;Parameters&#039;&#039;&#039; &lt;br /&gt;
| &#039;&#039;&#039;Type&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Description&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| ptExpected&lt;br /&gt;
| Input&lt;br /&gt;
| Pointer to the expected array.&lt;br /&gt;
|-&lt;br /&gt;
| ptUnexpected&lt;br /&gt;
| Input&lt;br /&gt;
| Pointer to the unexpected array. This parameter is optional and can be set to srNULL.&lt;br /&gt;
|-&lt;br /&gt;
| yMode &lt;br /&gt;
| Input &lt;br /&gt;
| Bitmask that specifies whether the expected test points must occur [[Expectations#Sequencing_Properties|in order and/or strictly]]. Possible values are: &amp;lt;br/&amp;gt; &lt;br /&gt;
srTEST_POINT_EXPECT_ORDERED - the test points are expected to be hit exactly in the defined order &amp;lt;br/&amp;gt;&lt;br /&gt;
srTEST_POINT_EXPECT_UNORDERED - the test points may be hit in any order &amp;lt;br/&amp;gt;&lt;br /&gt;
srTEST_POINT_EXPECT_STRICT - the test points are expected to be hit exactly as specified (no consecutive duplicate hits)&amp;lt;br/&amp;gt;&lt;br /&gt;
srTEST_POINT_EXPECT_NONSTRICT - other test points from the universe may be hit in between &amp;lt;br/&amp;gt;&lt;br /&gt;
srTEST_POINT_EXPECT_CONTINUE - on successful expectation satisfaction continue processing until the wait timeout expires&lt;br /&gt;
|-&lt;br /&gt;
| tTestCase  &lt;br /&gt;
| Input &lt;br /&gt;
| Handle to a test case. srTEST_CASE_DEFAULT can be used for the default test case.&lt;br /&gt;
|-&lt;br /&gt;
| pwHandle &lt;br /&gt;
| Output &lt;br /&gt;
| Handle that represents the registered expectation set&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;10&amp;quot; style=&amp;quot;align:left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;Return Value&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039; Description&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| srBOOL &lt;br /&gt;
| srTRUE on success, srFALSE otherwise.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== srTestPointWait ====&lt;br /&gt;
The srTestPointWait() routine is used to wait for the expectation to be satisfied. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
srBOOL srTestPointWait(srWORD wHandle, &lt;br /&gt;
                       srDWORD dwTimeout);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;10&amp;quot; style=&amp;quot;align:left;&amp;quot;  &lt;br /&gt;
| &#039;&#039;&#039;Parameters&#039;&#039;&#039; &lt;br /&gt;
| &#039;&#039;&#039;Type&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Description&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| wHandle &lt;br /&gt;
| Input&lt;br /&gt;
| Handle to a registered expectation set.&lt;br /&gt;
|-&lt;br /&gt;
| dwTimeout &lt;br /&gt;
| Input &lt;br /&gt;
| Timeout value in milliseconds; 0 means just check without waiting.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;10&amp;quot; style=&amp;quot;align:left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;Return Value&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039; Description&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| srBOOL &lt;br /&gt;
| srTRUE on success, srFALSE otherwise.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTES:&#039;&#039;&#039;&lt;br /&gt;
* The test thread blocks until the expectation set is satisfied or the timeout elapses; if srTEST_POINT_EXPECT_CONTINUE is specified at setup, the thread blocks for the full timeout.&lt;br /&gt;
* All test points hit during the wait (both expected and unexpected) are added to the test report as testcase comments&lt;br /&gt;
* Once the wait is over (whether the expectation set has been satisfied or there has been a test failure), the current expectation set is automatically unregistered from the Stride runtime and the handle is released&lt;br /&gt;
* To return immediately from a test case when the expectation fails, make the check/wait call an argument to the &#039;&#039;srASSERT_TRUE()&#039;&#039; macro.&lt;br /&gt;
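&lt;br /&gt;
As a sketch of that last point, the stubs below stand in for the real srtest.h definitions so the early-return control flow can be shown in isolation (the real srASSERT_TRUE also records the failure in the test report):&lt;br /&gt;

```c
/* Sketch only: srTestPointWait and srASSERT_TRUE are stubbed here so the
   early-return flow can be shown in isolation; the real definitions live
   in srtest.h, and the real srASSERT_TRUE also records the failure. */
typedef int            srBOOL;
typedef unsigned short srWORD;
typedef unsigned long  srDWORD;
enum { srFALSE = 0, srTRUE = 1 };

/* stub: pretend the expectation set was not satisfied (timeout) */
static srBOOL srTestPointWait(srWORD wHandle, srDWORD dwTimeout) {
    (void)wHandle;
    (void)dwTimeout;
    return srFALSE;
}

/* simplified stand-in: on failure, leave the test case immediately */
#define srASSERT_TRUE(expr) do { if (!(expr)) return; } while (0)

static int reached_end = 0;

static void tc_wait_example(void) {
    srWORD handle = 0;   /* would come from srTestPointSetup() */
    srASSERT_TRUE(srTestPointWait(handle, 1000));
    reached_end = 1;     /* not reached when the wait fails */
}
```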
&lt;br /&gt;
==== srTestPointCheck ====&lt;br /&gt;
The srTestPointCheck() routine is used to check the expectation set after the routine under test has completed. This is useful for verifying a set of expectation events that should have already transpired (and are thus waiting to be processed).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
srBOOL srTestPointCheck(srWORD wHandle);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;10&amp;quot; style=&amp;quot;align:left;&amp;quot;  &lt;br /&gt;
| &#039;&#039;&#039;Parameters&#039;&#039;&#039; &lt;br /&gt;
| &#039;&#039;&#039;Type&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Description&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| wHandle &lt;br /&gt;
| Input&lt;br /&gt;
| Handle to a registered expectation set.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;10&amp;quot; style=&amp;quot;align:left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;Return Value&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039; Description&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| srBOOL &lt;br /&gt;
| srTRUE on success, srFALSE otherwise.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTES:&#039;&#039;&#039;&lt;br /&gt;
* All test points hit before the check (both expected and unexpected) are added to the test report as testcase comments&lt;br /&gt;
* Once the check is done (whether the expectation set has been satisfied or there has been a test failure), the current expectation set is automatically unregistered from the Stride runtime and the handle is released&lt;br /&gt;
* To return immediately from a test case when the expectation fails, make the check/wait call an argument to the &#039;&#039;srASSERT_TRUE()&#039;&#039; macro.&lt;br /&gt;
&lt;br /&gt;
=== C++ Facade Class ===&lt;br /&gt;
The &#039;&#039;srtest.h&#039;&#039; file provides a [http://en.wikipedia.org/wiki/Facade_pattern facade] that wraps the test point APIs described above in a simple C++ class called &#039;&#039;&#039;srTestPointsHandler&#039;&#039;&#039;. If you are writing your unit tests in C++ (using Stride test classes), then this class is available for your use. The class implements the following methods, which correspond exactly to the C API equivalents:&lt;br /&gt;
&lt;br /&gt;
; srTestPointsHandler : constructor. Takes four arguments which match exactly the first four parameters of the [[#srTestPointSetup|srTestPointSetup]] function.&lt;br /&gt;
; Wait : instance method that provides same functionality as [[#srTestPointWait|srTestPointWait]]. &lt;br /&gt;
; Check : instance method that provides same functionality as [[#srTestPointCheck|srTestPointCheck]].&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Test_Point&amp;diff=14587</id>
		<title>Test Point</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Test_Point&amp;diff=14587"/>
		<updated>2015-07-08T21:05:24Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Source instrumentation is the process by which developers and domain experts selectively instrument the source under test for the purpose of writing test scenarios against the executing application. Implementing tests that leverage source instrumentation is called &#039;&#039;&#039;Expectation Testing&#039;&#039;&#039;. This validation technique is very useful for verifying proper code sequencing based on the software&#039;s internal design. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;c&#039;&amp;gt;&lt;br /&gt;
#include &amp;lt;srtest.h&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
/* a test point with no payload */&lt;br /&gt;
srTEST_POINT(&amp;quot;first test point&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
/* a test point with binary payload */&lt;br /&gt;
srTEST_POINT_DATA(&amp;quot;second test point&amp;quot;, myData, sizeofMyData);&lt;br /&gt;
&lt;br /&gt;
/* a test point with simple string payload */&lt;br /&gt;
srTEST_POINT_STR(&amp;quot;third test point&amp;quot;, &amp;quot;payload with simple string&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
/* a test point with formatted string payload */&lt;br /&gt;
srTEST_POINT_STR(&amp;quot;third test point&amp;quot;, &amp;quot;payload with format string %d&amp;quot;, myVar);&lt;br /&gt;
&lt;br /&gt;
#ifdef __cplusplus&lt;br /&gt;
srTEST_POINT_STR(&amp;quot;c++ test point&amp;quot;, &amp;quot;&amp;quot;) &amp;lt;&amp;lt; &amp;quot;stream input supported under c++&amp;quot;;&lt;br /&gt;
#endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unlike traditional unit testing, which drives tests with input parameters and isolates functionality, &#039;&#039;Expectation&#039;&#039; testing executes within a fully functional software build running on a real target platform. Test Points are not dependent on input parameters, but often leverage the same types of input/output controls used by functional and black-box testing.&lt;br /&gt;
&lt;br /&gt;
Another unique feature of this type of testing is that domain expertise is not required to implement a test. Developers and domain experts use instrumentation to export design knowledge of the software to the entire team. Furthermore, tests require no stubbing, no special logic to generate input parameters, and no advanced knowledge of how the application software is coded.&lt;br /&gt;
&lt;br /&gt;
To enable effective test coverage, developers and domain experts are required to insert instrumentation at key locations to gain insight and testability. Here are some general suggested source code areas to consider instrumenting:&lt;br /&gt;
* entry/exit points of critical functions&lt;br /&gt;
* state transitions&lt;br /&gt;
* critical or interesting data transitions (using optional payload to convey data values)&lt;br /&gt;
* callback routines&lt;br /&gt;
* data persistence&lt;br /&gt;
* error conditions&lt;br /&gt;
&lt;br /&gt;
The steps required to implement an Expectation test are the following:&lt;br /&gt;
#[[#Instrumentation | Instrumentation]]&lt;br /&gt;
#[[#Define your Expectations | Define your Expectations]]&lt;br /&gt;
#[[#Write the Test Unit | Write the Test Unit]]&lt;br /&gt;
&lt;br /&gt;
== Instrumentation ==&lt;br /&gt;
To make the software &#039;&#039;testable&#039;&#039;, the first step in the process is for the experts to selectively insert instrumentation macros into the source code. Test Points themselves have nominal impact on the performance of the application – they are only active during test data collection&amp;lt;ref name=&amp;quot;n1&amp;quot;&amp;gt; Test data collection is typically implemented in a low priority background thread. The data is captured in the calling routine&#039;s thread context (no context switch) but processed in the background or on the host. Instrumentation macros return immediately to the caller (i.e. no-op) when testing is not active.&amp;lt;/ref&amp;gt;. Test Points contain names and optional payload data. When Test Points are activated, they are collected in the background, along with timing and any associated data. The set of Test Points hit, their order, timing, and data content can all be used to validate that the software is behaving as expected. [[Test_Log | Test Logs]] can also be added to the source code to provide additional information in the context of an executing test. &lt;br /&gt;
&lt;br /&gt;
To specify a test point, include the &#039;&#039;&#039;srtest.h&#039;&#039;&#039; header file from the Stride Runtime in your compilation unit. The Test Point macros are active only when &amp;lt;tt&amp;gt;STRIDE_ENABLED&amp;lt;/tt&amp;gt; is &amp;lt;tt&amp;gt;#define&amp;lt;/tt&amp;gt;d, so it is practical to leave these macros in-line in production source. When &amp;lt;tt&amp;gt;STRIDE_ENABLED&amp;lt;/tt&amp;gt; is not &amp;lt;tt&amp;gt;#define&amp;lt;/tt&amp;gt;d, these macros evaluate to nothing.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Test Point Macros&#039;&#039;&#039;&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;srTEST_POINT&#039;&#039;&#039;(&#039;&#039;label&#039;&#039;)&lt;br /&gt;
| &#039;&#039;label&#039;&#039; is a pointer to a null-terminated string&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;srTEST_POINT_DATA&#039;&#039;&#039;(&#039;&#039;label&#039;&#039;, &#039;&#039;data&#039;&#039;, &#039;&#039;size&#039;&#039;)&lt;br /&gt;
| &#039;&#039;label&#039;&#039; is a pointer to a null-terminated string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;data&#039;&#039; is a pointer to a byte sequence&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;size&#039;&#039; is the size of the &#039;&#039;data&#039;&#039; in bytes&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;srTEST_POINT_STR&#039;&#039;&#039;(&#039;&#039;label&#039;&#039;, &#039;&#039;message&#039;&#039;)&lt;br /&gt;
| &#039;&#039;label&#039;&#039; is a pointer to a null-terminated string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;message&#039;&#039; is a pointer to a null-terminated format string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;...&#039;&#039; variable list matching the format string&amp;lt;br/&amp;gt;&lt;br /&gt;
When used in a C++ compilation unit, this macro also supports the streaming operator to append to the message string (see example below)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Define your Expectations ==&lt;br /&gt;
&lt;br /&gt;
In addition to instrumenting the source code with Test Points, you must also define the [[Expectations]] of the Test Points. This involves defining the list of Test Points expected to be hit during a given test scenario. Expectations can also include any &#039;&#039;&#039;data&#039;&#039;&#039; associated with a Test Point that requires validation.&lt;br /&gt;
&lt;br /&gt;
== Write the Test Unit ==&lt;br /&gt;
Once the source under test has been instrumented and the [[Expectations]] defined, Stride offers a number of techniques that can be used for implementing &#039;&#039;&#039;expectation tests&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* Tests can be written in [[Test_Point_Testing_in_C/C%2B%2B | C or C++]] and executed on the device under test using the Stride framework. &lt;br /&gt;
* Tests can be written in [[Perl_Script_APIs | Perl]].&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Test_Point&amp;diff=14586</id>
		<title>Test Point</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Test_Point&amp;diff=14586"/>
		<updated>2015-07-08T20:59:06Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Instrumentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Source instrumentation is the process by which developers and domain experts selectively instrument the source under test for the purpose of writing test scenarios against the executing application. Implementing tests that leverage source instrumentation is called &#039;&#039;&#039;Expectation Testing&#039;&#039;&#039;. This validation technique is very useful for verifying proper code sequencing based on the software&#039;s internal design. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;c&#039;&amp;gt;&lt;br /&gt;
#include &amp;lt;srtest.h&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
/* a test point with no payload */&lt;br /&gt;
srTEST_POINT(&amp;quot;first test point&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
/* a test point with binary payload */&lt;br /&gt;
srTEST_POINT_DATA(&amp;quot;second test point&amp;quot;, myData, sizeofMyData);&lt;br /&gt;
&lt;br /&gt;
/* a test point with simple string payload */&lt;br /&gt;
srTEST_POINT_STR(&amp;quot;third test point&amp;quot;, &amp;quot;payload with simple string&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
/* a test point with formatted string payload */&lt;br /&gt;
srTEST_POINT_STR(&amp;quot;third test point&amp;quot;, &amp;quot;payload with format string %d&amp;quot;, myVar);&lt;br /&gt;
&lt;br /&gt;
#ifdef __cplusplus&lt;br /&gt;
srTEST_POINT_STR(&amp;quot;c++ test point&amp;quot;, &amp;quot;&amp;quot;) &amp;lt;&amp;lt; &amp;quot;stream input supported under c++&amp;quot;;&lt;br /&gt;
#endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unlike traditional unit testing that drives testing based on input parameters and isolating functionality, &#039;&#039;Expectation&#039;&#039; testing is executed within a fully functional software build running on a real target platform. Test Points are not dependent on input parameters, but often leverage the same types of input/output controls used by functional and black-box testing. &lt;br /&gt;
&lt;br /&gt;
Another unique feature of this type of testing is that domain expertise is not required to implement a test. Developers and domain experts use instrumentation to export design knowledge of the software to the entire team. Furthermore, there is no stubbing required, no special logic to generate input parameters, and no advanced knowledge required of how the application software is coded. &lt;br /&gt;
&lt;br /&gt;
To enable effective test coverage, developers and domain experts must insert instrumentation at key locations to gain insight and testability. Here are some generally suggested source code areas to consider instrumenting:&lt;br /&gt;
* critical function entry/exit points&lt;br /&gt;
* state transitions&lt;br /&gt;
* critical or interesting data transitions (using optional payload to convey data values)&lt;br /&gt;
* callback routines&lt;br /&gt;
* data persistence&lt;br /&gt;
* error conditions&lt;br /&gt;
&lt;br /&gt;
The steps required to implement an Expectation test are the following:&lt;br /&gt;
#[[#Instrumentation | Instrumentation]]&lt;br /&gt;
#[[#Define your Expectations | Define your Expectations]]&lt;br /&gt;
#[[#Write the Test Unit | Write the Test Unit]]&lt;br /&gt;
&lt;br /&gt;
== Instrumentation ==&lt;br /&gt;
To make the software &#039;&#039;testable&#039;&#039;, the first step in the process is for the experts to selectively insert instrumentation macros into the source code. Test Points themselves have nominal impact on the performance of the application – they are only active during test data collection&amp;lt;ref name=&amp;quot;n1&amp;quot;&amp;gt; Test data collection is typically implemented in a low priority background thread. The data is captured in the calling routine&#039;s thread context (no context switch) but processed in the background or on the host. Instrumentation macros return immediately to the caller (i.e. no-op) when testing is not active.&amp;lt;/ref&amp;gt;. Test Points contain names and optional payload data. When Test Points are activated, they are collected in the background, along with timing and any associated data. The set of Test Points hit, their order, timing, and data content can all be used to validate that the software is behaving as expected. [[Test_Log | Test Logs]] can also be added to the source code to provide additional information in the context of an executing test. &lt;br /&gt;
&lt;br /&gt;
To specify a test point, include the &#039;&#039;&#039;srtest.h&#039;&#039;&#039; header file from the Stride Runtime in your compilation unit. The Test Point macros are active only when &amp;lt;tt&amp;gt;STRIDE_ENABLED&amp;lt;/tt&amp;gt; is &amp;lt;tt&amp;gt;#define&amp;lt;/tt&amp;gt;d, so it is practical to leave these macros in-line in production source. When &amp;lt;tt&amp;gt;STRIDE_ENABLED&amp;lt;/tt&amp;gt; is not &amp;lt;tt&amp;gt;#define&amp;lt;/tt&amp;gt;d, these macros evaluate to nothing.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Test Point Macros&#039;&#039;&#039;&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;srTEST_POINT&#039;&#039;&#039;(&#039;&#039;label&#039;&#039;)&lt;br /&gt;
| &#039;&#039;label&#039;&#039; is a pointer to a null-terminated string&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;srTEST_POINT_DATA&#039;&#039;&#039;(&#039;&#039;label&#039;&#039;, &#039;&#039;data&#039;&#039;, &#039;&#039;size&#039;&#039;)&lt;br /&gt;
| &#039;&#039;label&#039;&#039; is a pointer to a null-terminated string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;data&#039;&#039; is a pointer to a byte sequence&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;size&#039;&#039; is the size of the &#039;&#039;data&#039;&#039; in bytes&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;srTEST_POINT_STR&#039;&#039;&#039;(&#039;&#039;label&#039;&#039;, &#039;&#039;message&#039;&#039;)&lt;br /&gt;
| &#039;&#039;label&#039;&#039; is a pointer to a null-terminated string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;message&#039;&#039; is a pointer to a null-terminated format string&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;...&#039;&#039; variable list matching the format string&amp;lt;br/&amp;gt;&lt;br /&gt;
When used in a C++ compilation unit, this macro also supports the streaming operator to append to the message string (see example below)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Define your Expectations ==&lt;br /&gt;
&lt;br /&gt;
In addition to instrumenting the source code with Test Points, you must also define the [[Expectations]] of the Test Points. This involves defining the list of Test Points expected to be hit during a given test scenario. Expectations can also include any &#039;&#039;&#039;data&#039;&#039;&#039; associated with a Test Point that requires validation.&lt;br /&gt;
&lt;br /&gt;
== Write the Test Unit ==&lt;br /&gt;
Once the source under test has been instrumented and the [[Expectations]] defined, Stride offers a number of techniques that can be used for implementing &#039;&#039;&#039;expectation tests&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* Tests can be written in [[Test_Point_Testing_in_C/C%2B%2B | C or C++]] and executed on the device under test using the Stride framework. &lt;br /&gt;
* Tests can be written in [[Perl_Script_APIs | Perl]].&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Desktop_Installation&amp;diff=13992</id>
		<title>Desktop Installation</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Desktop_Installation&amp;diff=13992"/>
		<updated>2014-01-21T17:12:02Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Create/Update STRIDE_DIR */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Installation Packages ==&lt;br /&gt;
Files are installed by unzipping the provided package to your PC. Packages are available targeting the following operating systems (your version number may differ from that shown):&lt;br /&gt;
;Windows (x86)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-windows_4.x.yy.zip&amp;lt;/tt&amp;gt;&lt;br /&gt;
;Linux (x86)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-linux_4.x.yy.tgz&amp;lt;/tt&amp;gt;&lt;br /&gt;
;FreeBSD (x86)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-freebsd_4.x.yy.tgz&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please see the appropriate installation instructions below.&lt;br /&gt;
&lt;br /&gt;
== Windows Installation ==&lt;br /&gt;
&lt;br /&gt;
=== Unpacking ===&lt;br /&gt;
The following installation example assumes that the installation package is located in your root directory and that the directory &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt; exists. You can choose to install to a different location (all instructions below assume you are installing into &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The example uses the open source [http://www.7-zip.org/ 7-Zip] utility to unzip the archive.&lt;br /&gt;
&lt;br /&gt;
 cd \stride&lt;br /&gt;
 &amp;quot;\Program Files\7-Zip\7z&amp;quot; x ..\STRIDE_framework-windows_4.x.yy.zip&lt;br /&gt;
&lt;br /&gt;
Once unzipped, files will have been installed under the &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
=== Verify Environment Variables ===&lt;br /&gt;
&lt;br /&gt;
==== Updated PATH ====&lt;br /&gt;
As a final step, you will need to update your &amp;lt;tt&amp;gt;[http://en.wikipedia.org/wiki/Path_(variable) PATH]&amp;lt;/tt&amp;gt; environment variable to include &amp;lt;tt&amp;gt;\stride\bin&amp;lt;/tt&amp;gt;. &lt;br /&gt;
For instructions on modifying it, please see [http://support.microsoft.com/kb/310519 http://support.microsoft.com/kb/310519].&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;Make sure to insert &#039;&#039;&#039;no spaces&#039;&#039;&#039; before and after the semicolon separators (;).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Create/Update STRIDE_DIR====&lt;br /&gt;
&lt;br /&gt;
Verify that the  &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable exists and is set to the root installation directory (&amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt;). If this environment variable does not yet exist, you should create it as a user environment variable.&lt;br /&gt;
&lt;br /&gt;
To confirm installation and display &#039;&#039;help&#039;&#039; run the following command in a console window:&lt;br /&gt;
&lt;br /&gt;
 stride -h&lt;br /&gt;
&lt;br /&gt;
=== Uninstalling ===&lt;br /&gt;
To uninstall STRIDE simply:&lt;br /&gt;
* Remove any reference to &amp;lt;tt&amp;gt;\stride\bin&amp;lt;/tt&amp;gt; in your &amp;lt;tt&amp;gt;PATH&amp;lt;/tt&amp;gt; environment variable. &lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable.&lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
== Linux/FreeBSD Installation ==&lt;br /&gt;
&lt;br /&gt;
=== Unpacking ===&lt;br /&gt;
The following installation example assumes that the installation package is located in your home directory and that the directory &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt; exists. You can choose to install to a different location (all instructions below assume you are installing into &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
 cd ~/stride&lt;br /&gt;
 tar -zxvf ../STRIDE_framework-linux_4.x.yy.tgz&lt;br /&gt;
&lt;br /&gt;
Once extracted, files will have been installed under the &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
=== Verify Environment Variables ===&lt;br /&gt;
&lt;br /&gt;
==== Updated PATH ====&lt;br /&gt;
As a final step, you will need to update your &amp;lt;tt&amp;gt;[http://en.wikipedia.org/wiki/Path_(variable) PATH]&amp;lt;/tt&amp;gt; environment variable to include &amp;lt;tt&amp;gt;~/stride/bin&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
If you use the bash shell, enter the following at a command prompt, or to automatically set at each login, add to your &amp;lt;tt&amp;gt;.bashrc&amp;lt;/tt&amp;gt;:&lt;br /&gt;
 export PATH=$PATH:~/stride/bin&lt;br /&gt;
&lt;br /&gt;
For other shells, and more information, please see the following articles:&lt;br /&gt;
* [http://www.linuxheadquarters.com/howto/basic/path.shtml http://www.linuxheadquarters.com/howto/basic/path.shtml].&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Environment_variable#UNIX http://en.wikipedia.org/wiki/Environment_variable]&lt;br /&gt;
&lt;br /&gt;
==== Create/Update STRIDE_DIR====&lt;br /&gt;
Verify that the &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable exists and is set to the root installation directory (&amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt;). If this environment variable does not yet exist, set it automatically at each login by adding the following to your &amp;lt;tt&amp;gt;.bashrc&amp;lt;/tt&amp;gt;:&lt;br /&gt;
 export STRIDE_DIR=~/stride&lt;br /&gt;
&lt;br /&gt;
To confirm installation and display &#039;&#039;help&#039;&#039; run the following command in a console window:&lt;br /&gt;
&lt;br /&gt;
 stride -h&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;In a 64-bit environment the above may fail with errors like: &amp;lt;code&amp;gt;&amp;quot;/lib/ld-linux.so.2: bad ELF interpreter: No such file or directory&amp;quot;&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;&amp;quot;ELF interpreter /libexec/ld-elf32.so.1 not found&amp;quot;&amp;lt;/code&amp;gt;. To resolve this issue install the appropriate 32-bit compatibility libraries for your Linux/FreeBSD distribution:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Debian / Ubuntu&lt;br /&gt;
 sudo apt-get install ia32-libs&lt;br /&gt;
 sudo apt-get install ia32-libs-multiarch:i386 (for 12.04 or higher)&lt;br /&gt;
* Fedora / CentOS / RHEL&lt;br /&gt;
 sudo yum -y install glibc.i686 libstdc++.i686&lt;br /&gt;
* FreeBSD&lt;br /&gt;
Make sure to have &amp;lt;code&amp;gt;lib32&amp;lt;/code&amp;gt; installed (via [http://www.freebsd.org/cgi/man.cgi?query=sysinstall&amp;amp;apropos=0&amp;amp;sektion=0&amp;amp;manpath=FreeBSD+8.4-RELEASE&amp;amp;arch=default&amp;amp;format=html sysinstall(8)] - Configure|Distributions|lib32) and have your kernel built with:&lt;br /&gt;
 options 	COMPAT_FREEBSD32	# Compatible with i386 binaries&lt;br /&gt;
&lt;br /&gt;
=== Uninstalling ===&lt;br /&gt;
To uninstall STRIDE simply:&lt;br /&gt;
* Remove any reference to &amp;lt;tt&amp;gt;~/stride/bin&amp;lt;/tt&amp;gt; in your &amp;lt;tt&amp;gt;PATH&amp;lt;/tt&amp;gt; environment variable. &lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable.&lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
== Directories and Files ==&lt;br /&gt;
&lt;br /&gt;
To integrate STRIDE into your target build system, you need to understand the directory layout and the files within it. A quick orientation is shown below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;&#039;&#039;NOTE:&#039;&#039;&amp;lt;/u&amp;gt; &#039;&#039;It&#039;s not necessary to understand the workings of the STRIDE framework to perform evaluation or training. The desktop package contains an [[STRIDE Off-Target Environment]] that utilizes an SDK set up with appropriate options and settings to enable &amp;quot;out of the box&amp;quot; functionality.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;bin&amp;lt;/tt&amp;gt;===&lt;br /&gt;
This directory contains the [[Build Tools|STRIDE Build Tools]] and the [[STRIDE Runner]].&lt;br /&gt;
&lt;br /&gt;
The build tools are invoked early on in the target software build process to generate special STRIDE artifacts that are used in subsequent build steps and later when running tests against the target. In an Off-Target Environment installation, these files are needed on the host computer since this is where we are building the target application. In a production environment, these files are needed only on the computer that performs the target software build.&lt;br /&gt;
&lt;br /&gt;
The [[STRIDE Runner]] is the program you use to run tests from the host.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;lib&amp;lt;/tt&amp;gt;===&lt;br /&gt;
This directory contains a set of STRIDE-specific core scripting libraries along with prebuilt binaries intended to be used for [[Test Modules Overview|testing in scripts]].&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;Samples&amp;lt;/tt&amp;gt;===&lt;br /&gt;
The Samples directory contains a number of sub-directories, each containing the source for a [[Samples|sample test]].&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;SDK&amp;lt;/tt&amp;gt;===&lt;br /&gt;
This directory contains the sub-directories &amp;lt;tt&amp;gt;Posix/Windows&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;Runtime&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;SLAP&amp;lt;/tt&amp;gt;, which contain source code that comprises the [[Runtime_Reference|STRIDE Runtime]]. These sources are built into a static library (the STRIDE Runtime library, &amp;lt;tt&amp;gt;stride.a/lib&amp;lt;/tt&amp;gt;) as a dependency of your Test Application.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;Posix&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;Windows&amp;lt;/tt&amp;gt; directories contain the target operating system specific source and configuration. If you are interested in the details, consult the articles [[Posix SDK]] and [[Windows SDK]]. Each of them contains the following sub-directories:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;tt&amp;gt;settings&amp;lt;/tt&amp;gt;&lt;br /&gt;
: This directory contains a set of &amp;lt;tt&amp;gt;stride.XXX.s2scompile&amp;lt;/tt&amp;gt; files, where &amp;lt;tt&amp;gt;XXX&amp;lt;/tt&amp;gt; corresponds to the target CPU architecture (e.g. X86, ARM...). These files, used by the [[s2scompile|STRIDE Compiler]], specify target CPU characteristics (endianness, data sizes, and alignments). On Windows, this directory also contains a set of files for [[STRIDE_Extensions_for_Visual_Studio|use in building target apps with Visual Studio]].&lt;br /&gt;
*&amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt;&lt;br /&gt;
: This directory contains the source of the target [[Platform Abstraction Layer]] (PAL). In addition, there is a sample Makefile used to produce a sandbox TestApp.&lt;br /&gt;
&lt;br /&gt;
== Perl Installation (Optional) ==&lt;br /&gt;
&amp;lt;u&amp;gt;&#039;&#039;NOTE:&#039;&#039;&amp;lt;/u&amp;gt; &#039;&#039;This is &#039;&#039;&#039;NOT&#039;&#039;&#039; required if only [[Test_Units_Overview|tests in C/C++]] will be run or to complete the [[Training_Getting_Started|STRIDE training]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you intend to use [[Test Modules Overview|STRIDE Script modules]] for testing in script, you will need a recent version of Perl (x86 with threads support) installed. &lt;br /&gt;
&lt;br /&gt;
As of this writing, we support only the 32-bit versions 5.8.9, 5.10.x, 5.12.x, 5.14.x and 5.16.x of Perl. &lt;br /&gt;
&lt;br /&gt;
=== Windows === &lt;br /&gt;
You must use one of the standard 32-bit Perl distributions from [http://www.activestate.com/activeperl ActiveState].&lt;br /&gt;
&lt;br /&gt;
The following additional (non-standard) Perl packages are also required for full functionality of STRIDE tests in perl:&lt;br /&gt;
&lt;br /&gt;
* [http://search.cpan.org/~smueller/Class-ISA-0.36/lib/Class/ISA.pm Class::ISA]&lt;br /&gt;
* [http://search.cpan.org/~andrewf/Pod-POM-0.27/lib/Pod/POM.pm Pod::POM]&lt;br /&gt;
* [http://search.cpan.org/~andk/Devel-Symdump-2.08/lib/Devel/Symdump.pm Devel::Symdump]&lt;br /&gt;
&lt;br /&gt;
You can easily install these packages using the [http://docs.activestate.com/activeperl/5.10/faq/ActivePerl-faq2.html ppm tool]. If you access the Internet via a proxy, make sure to read [http://docs.activestate.com/activeperl/5.10/faq/ActivePerl-faq2.html#ppm_and_proxies this]. Command-line installation of PACKAGE_NAME (the package to install) typically requires just:&lt;br /&gt;
&lt;br /&gt;
 ppm install PACKAGE_NAME&lt;br /&gt;
&lt;br /&gt;
=== Linux/FreeBSD ===&lt;br /&gt;
We recommend using the standard 32-bit Perl distribution that comes with your OS version. If you need to build Perl from source, make sure to configure &amp;quot;shared library&amp;quot; (&amp;lt;tt&amp;gt;-Duseshrplib&amp;lt;/tt&amp;gt;), &amp;quot;thread support&amp;quot; (&amp;lt;tt&amp;gt;-Duseithreads&amp;lt;/tt&amp;gt;) and no &amp;quot;64-bit support&amp;quot; (&amp;lt;tt&amp;gt;-Uuse64bitint -Uuse64bitall&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The following additional (non-standard) Perl packages are also required for full functionality of STRIDE tests in perl:&lt;br /&gt;
&lt;br /&gt;
* [http://search.cpan.org/~ingy/YAML-LibYAML-0.38/lib/YAML/XS.pm YAML::XS]&lt;br /&gt;
* [http://search.cpan.org/~smueller/Class-ISA-0.36/lib/Class/ISA.pm Class::ISA]&lt;br /&gt;
* [http://search.cpan.org/~andrewf/Pod-POM-0.27/lib/Pod/POM.pm Pod::POM]&lt;br /&gt;
* [http://search.cpan.org/~andk/Devel-Symdump-2.08/lib/Devel/Symdump.pm Devel::Symdump]&lt;br /&gt;
&lt;br /&gt;
If your perl is installed in a system directory (&amp;lt;tt&amp;gt;/usr/bin/perl&amp;lt;/tt&amp;gt;, for instance), you will need root access to install shared modules. The simplest method for installing packages is via the [http://www.perl.com/doc/manual/html/lib/CPAN.html CPAN shell]. If you access the Internet via a proxy make sure to set the appropriate [http://search.cpan.org/dist/CPAN/lib/CPAN.pm#Config_Variables CPAN config variables]. To start the shell in interactive mode:&lt;br /&gt;
&lt;br /&gt;
 sudo perl -MCPAN -eshell&lt;br /&gt;
&lt;br /&gt;
Once in the shell, search for and install the latest stable version of PACKAGE_NAME (the package to install):&lt;br /&gt;
&lt;br /&gt;
 install PACKAGE_NAME&lt;br /&gt;
&lt;br /&gt;
The STRIDE perl packages also need to load your system&#039;s &#039;&#039;&#039;libperl.so&#039;&#039;&#039; (shared object file) at runtime. Depending on your system, this file should be loadable from a perl CORE directory or from one of the shared system directories. If you &#039;&#039;&#039;DO NOT&#039;&#039;&#039; have this shared library on your system, you might need to install a &#039;&#039;libperl-dev&#039;&#039;, &#039;&#039;perl-devel&#039;&#039; or &#039;&#039;perl-libs&#039;&#039; package in order to get it. Here is how you can do that on the console of some Linux distributions:&lt;br /&gt;
&lt;br /&gt;
* Debian / Ubuntu&lt;br /&gt;
 sudo apt-get install libperl-dev&lt;br /&gt;
* Fedora / CentOS / RHEL&lt;br /&gt;
 sudo yum -y install perl-devel&lt;br /&gt;
&lt;br /&gt;
=== Validation ===&lt;br /&gt;
Once you have installed Perl, we recommend running the following command in a console window:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
stride --device NULL  --diagnostics Perl --output PerlCheck&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If everything was properly set up, you should see the following output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Executing diagnostics...&lt;br /&gt;
  script &amp;quot;diagnostics.pl&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  ---------------------------------------------------------------------&lt;br /&gt;
  Summary: 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition, a report file named &amp;lt;tt&amp;gt;PerlCheck.xml&amp;lt;/tt&amp;gt; will be created in the current directory. If you are interested in the details, you can open that report file in a browser of your choice.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Installation]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13950</id>
		<title>Training Doubling</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13950"/>
		<updated>2013-02-27T17:41:22Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Implement Exercise */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on explaining how to leverage &#039;&#039;&#039;Test Doubles&#039;&#039;&#039; in the context of executing a test. For an overview of &#039;&#039;&#039;intercepting&#039;&#039;&#039; existing functions please refer to [[Using_Test_Doubles | Test Doubles]]. The module covers the following topics:&lt;br /&gt;
* How to apply pragmas for [[Function_Capturing | function intercepting]]&lt;br /&gt;
* How to decide what kind of &#039;&#039;mangling&#039;&#039; is required&lt;br /&gt;
** &#039;&#039;Definition&#039;&#039; versus &#039;&#039;Reference&#039;&#039;&lt;br /&gt;
** &#039;&#039;Explicit&#039;&#039; versus &#039;&#039;Implicit&#039;&#039;&lt;br /&gt;
* Setting and Resetting the Double implementation&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two new files are used -- &#039;&#039;&#039;TestDouble.cpp&#039;&#039;&#039; and &#039;&#039;&#039;TestDouble.h&#039;&#039;&#039; -- which implement two Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Both Test Units have test cases already implemented (used for reference) and one test method, called &#039;&#039;&#039;Exercise&#039;&#039;&#039;, that you are required to implement. Currently the exercise methods return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Double&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference --run TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 0 failed, 0 in progress, 2 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results using a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (based on the &#039;&#039;output&#039;&#039; option). Opening the XML file in a web browser automatically applies the XSL to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;strlen()&#039;&#039; routine:&lt;br /&gt;
*** Add a &#039;&#039;NOTE&#039;&#039; capturing its name when it is called&lt;br /&gt;
*** Use the real &#039;&#039;strlen()&#039;&#039; to return the length of the passed-in string&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will probably want to read about [[Using Test Doubles]]&lt;br /&gt;
** Use &#039;&#039;srEXPECT_EQ()&#039;&#039; to validate that &#039;&#039;sut_strcheck()&#039;&#039; returns the correct length&lt;br /&gt;
** Make sure to restore the original routine &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;sut_strcpy()&#039;&#039; routine:&lt;br /&gt;
*** Log its name when it is called&lt;br /&gt;
*** Validate string passed to &#039;&#039;sut_strsave()&#039;&#039; is received correctly by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
*** Remember that &#039;&#039;sut_strsave()&#039;&#039; calls &#039;&#039;sut_strcpy()&#039;&#039; with the string passed to it&lt;br /&gt;
*** You can use a test macro to validate that the string is correctly passed to the double&lt;br /&gt;
*** Call the original function (&#039;&#039;sut_strcpy()&#039;&#039;) with a &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; (e.g. &amp;quot;Exercise Test String 2&amp;quot;)&lt;br /&gt;
**** Call the original within the test double &lt;br /&gt;
**** Make sure to reset the original function within the mock before calling it (i.e. srDOUBLE_RESET)&lt;br /&gt;
** Call &#039;&#039;sut_strsave()&#039;&#039; with a string (e.g. &amp;quot;Exercise Test String 1&amp;quot;)&lt;br /&gt;
** Use &#039;&#039;sut_strget()&#039;&#039; to retrieve a string&lt;br /&gt;
** Compare retrieved string with the &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; inserted by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
** NOTE: this won&#039;t work if you wait until the end of the test to restore the original routine (i.e. srDOUBLE_SET)&lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Double&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 6 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the exercise(s), publish your results to Test Space. If you have not added Test Space options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;), please see [[Training_Getting_Started#Test_Space_Access| Test Space access]]. &lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition --space TestDouble --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to &#039;&#039;Test Doubles&#039;&#039; and function interception.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
* [[Using_Test_Doubles | Using Test Doubles]] - Outlines the basic steps to enable a &#039;&#039;double&#039;&#039;&lt;br /&gt;
** Capture the f(x) that is going to be doubled&lt;br /&gt;
** Code the test logic to additionally set the Double and restore original f(x)&lt;br /&gt;
&lt;br /&gt;
* [[Scl_function | Intercepting a function]] - Pragma specifics outline&lt;br /&gt;
** For this exercise, there are &#039;&#039;&#039;NO&#039;&#039;&#039; optional parameters when capturing a f(x) to be &#039;&#039;&#039;intercepted&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Details of the pragma parameters are described in the [[Intercept_Module#Interceptor | Intercept Module]] article. Please review the following in detail:&lt;br /&gt;
** &#039;&#039;context&#039;&#039; - This refers to whether interception occurs in the calling routine or in the called routine &lt;br /&gt;
** &#039;&#039;name-mangling&#039;&#039; - How the function name is switched during the compilation process&lt;br /&gt;
** &#039;&#039;group-id&#039;&#039; - Used to associate a group of intercepted functions &lt;br /&gt;
&lt;br /&gt;
* Some &#039;&#039;&#039;external links&#039;&#039;&#039; to review:&lt;br /&gt;
** [http://www.martinfowler.com/bliki/TestDouble.html Martin Fowler definition]&lt;br /&gt;
** [http://en.wikipedia.org/wiki/Test_Double Wikipedia definition]&lt;br /&gt;
** [http://msdn.microsoft.com/en-us/magazine/cc163358.aspx MSDN Magazine description]&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Double_Sample | Test Double Sample]] that can be a useful reference. This example can be built and executed, like all of the samples, using the Off-Target environment.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Expectations&amp;diff=13949</id>
		<title>Training Expectations</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Expectations&amp;diff=13949"/>
		<updated>2013-02-27T17:39:04Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Implement Exercise */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on &#039;&#039;Test Points&#039;&#039; and how to validate them using &#039;&#039;Expectations&#039;&#039;.  The module covers the following topics:&lt;br /&gt;
* Presentation of a [[Expectations | validation]] technique based on &#039;&#039;code sequencing&#039;&#039; and &#039;&#039;state data&#039;&#039;&lt;br /&gt;
* Overview of [[Source_Instrumentation_Overview#Instrumentation | source instrumentation]]&lt;br /&gt;
* Review of [[Test_Point_Testing_in_C/C%2B%2B | expectation tables and predicates]]&lt;br /&gt;
* Example use cases such as concurrent validation, using trigger conditions, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two test files used -- &#039;&#039;&#039;TestExpect.cpp &amp;amp; TestExpect.h&#039;&#039;&#039; -- that implement three Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Seq&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Data&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Misc&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The first two Test Units have two test methods already implemented and have one method each that you are required to implement called &#039;&#039;&#039;Exercise&#039;&#039;&#039;. The third Test Unit has four test methods already implemented and also has one method to implement as an exercise. Currently the &#039;&#039;exercise methods&#039;&#039; return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Startup TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Expectations&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  -----------------------------------------------------------&lt;br /&gt;
  Summary: 7 passed, 1 failed, 0 in progress, 3 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results using a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (based on the &#039;&#039;output&#039;&#039; option). By opening the XML file in a web browser, the XSL is automatically applied to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Seq::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate &#039;&#039;&#039;ALL&#039;&#039;&#039; upper case Test Points (A - I)&lt;br /&gt;
** Use &#039;&#039;Unordered&#039;&#039; and &#039;&#039;Nonstrict&#039;&#039; sequencing&lt;br /&gt;
** Use &#039;&#039;sut_DoSequencing(SEQ_1)&#039;&#039; to generate part of the sequence&lt;br /&gt;
** Use &#039;&#039;sut_start_thread(SEQ_3)&#039;&#039; to generate the rest of the sequence&lt;br /&gt;
** Add a &#039;&#039;NOTE_INFO(..)&#039;&#039; to log that the test is executing&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will probably want to read about [[Test Point Testing in C/C++]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Data::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate the following Test Points &#039;&#039;&#039;{D, G, F, H}&#039;&#039;&#039;&lt;br /&gt;
** Check that &#039;&#039;&#039;F&#039;&#039;&#039; occurs 2 times&lt;br /&gt;
** Use &#039;&#039;Unordered&#039;&#039; sequencing&lt;br /&gt;
** Write a new custom predicate that validates data for both &#039;&#039;&#039;D&#039;&#039;&#039; and &#039;&#039;&#039;H&#039;&#039;&#039; Test Points&lt;br /&gt;
*** Add &#039;&#039;NOTE_INFO(..)&#039;&#039; to capture content of the Test Points&lt;br /&gt;
*** Add extra check for &#039;&#039;&#039;D&#039;&#039;&#039; that the status is &#039;&#039;&#039;GOOD&#039;&#039;&#039;&lt;br /&gt;
*** Confirm that data fields &#039;&#039;&#039;d1&#039;&#039;&#039; and &#039;&#039;&#039;d2&#039;&#039;&#039; are as expected&lt;br /&gt;
*** Pass the expected data fields (for both Test Points) as part of the &#039;&#039;&#039;user&#039;&#039;&#039; data within the setup&lt;br /&gt;
*** Use &#039;&#039;sut_start_thread(SEQ_3)&#039;&#039; to generate the sequence&lt;br /&gt;
** Write another custom predicate for validating data for &#039;&#039;&#039;G&#039;&#039;&#039;&lt;br /&gt;
*** Add &#039;&#039;NOTE_INFO(..)&#039;&#039; to capture content of the Test Point&lt;br /&gt;
*** Validate the expected string using &#039;&#039;&#039;user&#039;&#039;&#039; data&lt;br /&gt;
** Add a &#039;&#039;NOTE_INFO(..)&#039;&#039; to log that the test is executing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Misc::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate 2 sequences using a Trigger in between&lt;br /&gt;
*** Sequence 1 = &#039;&#039;&#039;D&#039;&#039;&#039; &#039;&#039;&#039;E&#039;&#039;&#039; and &#039;&#039;&#039;A&#039;&#039;&#039;&lt;br /&gt;
*** Trigger = &#039;&#039;&#039;C&#039;&#039;&#039;&lt;br /&gt;
*** Sequence 2 =  &#039;&#039;&#039;F&#039;&#039;&#039; and &#039;&#039;&#039;F&#039;&#039;&#039; (2 occurrences)&lt;br /&gt;
** Use &#039;&#039;ANY AT ALL&#039;&#039; special member with trigger&lt;br /&gt;
** Use &#039;&#039;Ordered&#039;&#039; and &#039;&#039;Strict&#039;&#039; sequencing &lt;br /&gt;
** Use &#039;&#039;sut_start_thread(SEQ_2)&#039;&#039; to generate the expected sequences  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Expectations&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
      &amp;gt; 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    -----------------------------------------------------------&lt;br /&gt;
    Summary: 10 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the exercise(s), publish your results to Test Space. If you have not added Test Space options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;), please see [[Training_Getting_Started#Test_Space_Access| Test Space access]].&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc --space TestExpect --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to &#039;&#039;Test Points&#039;&#039; and &#039;&#039;Expectations&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Source_Instrumentation_Overview | Instrumentation Overview]] providing high-level concepts of this validation technique&lt;br /&gt;
* [[Test_Point |Test Point]] Macro definition &lt;br /&gt;
* [[Expectations |Expectations]] definition and how to set your &#039;&#039;Expectations&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Point_Sample | Test Point Sample]] - Demonstrates simple technique to monitor and test activity occurring in another thread&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Building_an_Off-Target_Test_App&amp;diff=13948</id>
		<title>Building an Off-Target Test App</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Building_an_Off-Target_Test_App&amp;diff=13948"/>
		<updated>2013-02-27T16:56:00Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Prerequisite ==&lt;br /&gt;
This article guides you through building a test application (&#039;&#039;&#039;TestApp&#039;&#039;&#039;) for the purpose of running sample code using the [[STRIDE Off-Target Environment]]. &lt;br /&gt;
&lt;br /&gt;
It requires an installation of the STRIDE Framework desktop package. If not installed, please see [[Desktop Installation]] for more information. &lt;br /&gt;
&lt;br /&gt;
It also requires that your desktop contains one of the following &#039;&#039;&#039;compilers&#039;&#039;&#039;:&lt;br /&gt;
* For Windows, Microsoft Visual Studio 2008 or later is required. If you don&#039;t already have Visual Studio, the free [http://en.wikipedia.org/wiki/Microsoft_Visual_Studio_Express Visual C++ Express] can be used (download [http://www.microsoft.com/express/download/#webInstall here]). &amp;lt;i&amp;gt;If you have [http://www.cygwin.com Cygwin] installed, the [http://en.wikipedia.org/wiki/GNU_Compiler_Collection GNU Compiler Collection] can be used as an alternative.&amp;lt;/i&amp;gt;&lt;br /&gt;
* For Linux, the [http://en.wikipedia.org/wiki/GNU_Compiler_Collection GNU Compiler Collection] (included by default in almost all Linux distros) is required.&lt;br /&gt;
&lt;br /&gt;
== Building a TestApp ==&lt;br /&gt;
&lt;br /&gt;
=== SDK Makefile ===&lt;br /&gt;
The SDK Makefile is set up so that all &amp;lt;tt&amp;gt;.c&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files found in the directory &amp;lt;tt&amp;gt;SDK\Windows\sample_src&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;SDK/Posix/sample_src&amp;lt;/tt&amp;gt; for Linux) are included in the compile and link of the &#039;&#039;&#039;testapp&#039;&#039;&#039; target.&lt;br /&gt;
&lt;br /&gt;
Further, as a pre-compilation step, any &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files found in &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt; are submitted to the [[STRIDE Build Tools]]. This will result in &lt;br /&gt;
* the detection of [[Test_Unit_Pragmas| test pragmas]] used to declare Test Units in these &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files&lt;br /&gt;
* the detection of [[Scl_function | function pragmas]] used to declare remoting of functions also found in &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files&lt;br /&gt;
* the inclusion of metadata into the &amp;lt;tt&amp;gt;sidb&amp;lt;/tt&amp;gt; file created&lt;br /&gt;
* the generation of an [[Intercept Module]] required for executing tests&lt;br /&gt;
&lt;br /&gt;
=== Build Steps ===&lt;br /&gt;
To begin, be sure that TestApp is not running, then perform the following steps:&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;If you experience any build problem please make sure to read [[Troubleshooting Build Problems]] for possible resolution.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
====Linux====&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Build the test app&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd $STRIDE_DIR/SDK/Posix/src&lt;br /&gt;
make testapp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Note that the following artifacts are produced by the build:&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/out/bin/TestApp&amp;lt;/tt&amp;gt;&lt;br /&gt;
: the test application&lt;br /&gt;
;&amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;lt;/tt&amp;gt;&lt;br /&gt;
: the STRIDE interface database file which contains metadata describing the interfaces remoted by the test app (along with other data)&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Windows====&lt;br /&gt;
NOTE: &#039;&#039;If you have [http://www.cygwin.com Cygwin] and the [http://en.wikipedia.org/wiki/GNU_Compiler_Collection GNU Compiler Collection] installed and prefer to use them, please follow the build steps for Linux ([[#Linux|above]]) and ignore the steps here.&#039;&#039;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;If using Microsoft Visual Studio, open a [http://msdn.microsoft.com/en-us/library/ms235639(v=vs.100).aspx Visual Studio Command Prompt] to ensure that the compiler and linker are on your PATH.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Build the test app using the supplied GNU make. (You will get Makefile errors if you use the default make.)&lt;br /&gt;
&amp;lt;source lang=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
cd %STRIDE_DIR%\SDK\Windows\src&lt;br /&gt;
..\bin\make testapp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Note that the following artifacts are produced by the build:&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\out\bin\TestApp.exe&amp;lt;/tt&amp;gt;&lt;br /&gt;
: the test application&lt;br /&gt;
;&amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;lt;/tt&amp;gt;&lt;br /&gt;
: the STRIDE interface database file which contains metadata describing the interfaces remoted by the test app (along with other data)&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;If you prefer to use Visual Studio to build/debug/run your testapp, we provide instructions [[STRIDE_Extensions_for_Visual_Studio|here]] about how to accomplish this.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Diagnostics ==&lt;br /&gt;
The test app we just built does not have any user tests in it. At this point it provides a starting point for the tests that we will subsequently add.&lt;br /&gt;
&lt;br /&gt;
However, a set of diagnostic tests that verify operation of the STRIDE runtime itself is always built into the generated TestApp executable. We recommend that you run them as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Invoke the TestApp. In order to see TestApp&#039;s output, we recommend that you manually run it in a console window (or the Windows equivalent): &lt;br /&gt;
;Linux&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$STRIDE_DIR/SDK/Posix/out/bin/TestApp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
;Windows&lt;br /&gt;
&amp;lt;source lang=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
%STRIDE_DIR%\SDK\Windows\out\bin\TestApp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
(...or launch from the file explorer)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Note TestApp&#039;s output upon startup.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
STRIDE Test Console Application.&lt;br /&gt;
Enter &#039;Ctrl+C&#039; to Quit.&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
Listening on TCP port 8000&lt;br /&gt;
starting up...&lt;br /&gt;
&amp;quot;_srThread&amp;quot; thread started.&lt;br /&gt;
&amp;quot;stride&amp;quot; thread started.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;From a second console window, invoke &amp;lt;tt&amp;gt;[[STRIDE_Runner|stride]]&amp;lt;/tt&amp;gt; as follows, to verify connectivity with the test app and STRIDE runtime operation:&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
;Linux&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
stride --diagnostics --database=&amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot; --device=TCP:localhost:8000 --run=&amp;quot;*&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
;Windows&lt;br /&gt;
&amp;lt;source lang=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
stride --diagnostics --database=&amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot; --device=TCP:localhost:8000 --run=&amp;quot;*&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As the tests run you will see output in both the TestApp (target) and stride (host) console windows.&lt;br /&gt;
&lt;br /&gt;
The host console window output is shown here:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
  runtime version: 4.3.0x &lt;br /&gt;
Executing diagnostics...&lt;br /&gt;
  test unit &amp;quot;Link&amp;quot;&lt;br /&gt;
    Loopback ............&lt;br /&gt;
    Payload Fragmentation&lt;br /&gt;
    Stub-Proxy Deadlock&lt;br /&gt;
    Target Characteristics&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;Stat&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;Time&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 8 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Note the Summary results shown in the host output; all in-use tests should pass.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;To exit TestApp, give the target window focus and enter Ctrl-C (or &#039;q&#039; under Windows).&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Samples ==&lt;br /&gt;
&lt;br /&gt;
The initial desktop installation of STRIDE does not set up any source code (with the exception of a set of system diagnostic tests) for automatic inclusion in a test application. The [[Desktop Installation | desktop framework]] distribution, however, comes with a set of [[Samples_Overview | Samples]]. &lt;br /&gt;
&lt;br /&gt;
To demonstrate how to build a sample, we will add the [[Test Intro Sample]], which provides an overview of STRIDE testing techniques. For an overview from a C++ perspective, please see [[Test Intro Cpp Sample]]. &lt;br /&gt;
&lt;br /&gt;
The following steps are applicable for &#039;&#039;&#039;all&#039;&#039;&#039; [[Samples_Overview | Samples]].&lt;br /&gt;
&lt;br /&gt;
To begin, be sure that TestApp is not running, then copy the &amp;lt;tt&amp;gt;.c&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files found in &amp;lt;tt&amp;gt;Samples/test_in_c_cpp/TestIntro&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;SDK/Windows/sample_src&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;SDK/Posix/sample_src&amp;lt;/tt&amp;gt; for Linux).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039; Only files in the sample_src directory will be picked up by the makefile. Files in any subdirectories will be ignored.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Once the files have been copied to &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt;, simply build TestApp as described above. Note that if you previously copied other sample source to this directory, you should decide whether to remove those files first. Any source that is in this directory at the time of the build will be included in the test app.&lt;br /&gt;
&lt;br /&gt;
=== Running the Tests ===&lt;br /&gt;
Here we will run all tests in the &amp;lt;tt&amp;gt;TestApp.sidb&amp;lt;/tt&amp;gt; database.&amp;lt;ref&amp;gt;Note that the S2 diagnostic tests are treated separately, and are not run unless the &amp;lt;tt&amp;gt;--diagnostics&amp;lt;/tt&amp;gt; option is specified to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Run the TestApp built above in a console window.&lt;br /&gt;
# Invoke &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt; in a separate console window (different from the one running TestApp), as shown below, and verify the Summary results.&lt;br /&gt;
&lt;br /&gt;
Here are the command line parameters that we will submit to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--database ./out/TestApp.sidb &lt;br /&gt;
--device TCP:localhost:8000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
* If you set up an [[STRIDE_Runner#Environment_Variables | environment variable]] for the &#039;&#039;&#039;device&#039;&#039;&#039; option, then it is not required in the options file. Note: Command line options override environment variables.&lt;br /&gt;
* You may want to create a text file named &#039;&#039;RunTestIntro.txt&#039;&#039; in the &amp;lt;tt&amp;gt;SDK\Windows&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;SDK/Posix&amp;lt;/tt&amp;gt; for Linux) directory as an options file to submit to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;.&lt;br /&gt;
* &#039;&#039;stride --help&#039;&#039; provides [[STRIDE_Runner#Options | options information]]&lt;br /&gt;
&lt;br /&gt;
If you haven&#039;t done so already, start &amp;lt;tt&amp;gt;TestApp&amp;lt;/tt&amp;gt; running in a separate console window.&lt;br /&gt;
&lt;br /&gt;
Now run stride as follows (starting from the &amp;lt;tt&amp;gt;SDK\Windows&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;SDK/Posix&amp;lt;/tt&amp;gt; directory):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
stride --options_file RunTestIntro.txt --run=&amp;quot;*&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output should look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  test unit &amp;quot;s2_testintro_flist&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;s2_testintro_cclass&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;s2_testintro_testdoubles&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;s2_testintro_testpoints&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;s2_testintro_parameters&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 11 passed, 2 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Interpreting Results ===&lt;br /&gt;
Open &#039;&#039;&#039;TestApp.xml&#039;&#039;&#039; in your browser; this file is created in the directory from which you ran &#039;&#039;&#039;stride&#039;&#039;&#039; (or the directory specified via the &amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt; command line option). If you were connected to the Internet when you ran the tests, the TestApp.xsl file is also generated in that directory. By opening TestApp.xml in a web browser, the XSL is automatically applied to render HTML in the browser. If you use Google Chrome, please see [[Browser Compatibility]].&lt;br /&gt;
&lt;br /&gt;
If you&#039;re interested in the details of the tests, please refer to the test documentation contained in the test report.&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Installation]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13947</id>
		<title>Training Getting Started</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13947"/>
		<updated>2013-02-27T16:47:31Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Run Training Tests */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
Our training approach is based on a self-guided tour of the [[STRIDE_Overview#STRIDE_Testing_Features | STRIDE Testing Features]] using reference examples and assigned exercises. The set of examples and the implemented exercises will be built and executed using a standard desktop computer. &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039;, &#039;&#039;reference examples&#039;&#039;, and &#039;&#039;exercises&#039;&#039; have been designed to be as &#039;&#039;&#039;&#039;&#039;simple as possible while sufficiently demonstrating the topic at hand&#039;&#039;&#039;&#039;&#039;. In particular, the &#039;&#039;software under test&#039;&#039; is &#039;&#039;&#039;very light&#039;&#039;&#039; on core application logic -- the focus instead is on the test code that leverages STRIDE to define and execute tests. &lt;br /&gt;
&lt;br /&gt;
The user is expected to work through each of the training modules covering a specific testing feature. Once the exercises are completed (actual test cases implemented), the results are published to [[STRIDE_Overview#STRIDE_Test_Space | Test Space]] (and validated against a pre-created baseline).&lt;br /&gt;
&lt;br /&gt;
The training collateral consists of the following:&lt;br /&gt;
# The [[STRIDE_Overview#STRIDE_Framework | STRIDE Framework]] used to implement and execute tests&lt;br /&gt;
#* The Framework is configured in an [[STRIDE_Off-Target_Environment | Off-Target Environment]] &lt;br /&gt;
# A set of specific [[#Training | Training Modules]] that will guide you through the exercises&lt;br /&gt;
# [[Main_Page | &#039;&#039;Wiki articles&#039;&#039;]] that will provide background material and other technical information&lt;br /&gt;
# [[STRIDE_Overview#STRIDE_Test_Space | STRIDE Test Space]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For more details concerning STRIDE refer to the following:&lt;br /&gt;
* &#039;&#039;[[STRIDE_Overview_Video | Overview screencast]]&#039;&#039;  &lt;br /&gt;
* &#039;&#039;[[What is Unique About STRIDE | What is Unique About STRIDE]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Types of Testing Supported by STRIDE | Types of Testing Supported]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Frequently Asked Questions About STRIDE | Frequently Asked Questions]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;For questions and support, [mailto:training@s2technologies.com email us]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Before Starting ==&lt;br /&gt;
Before you can start setting up your environment, you need the following three items:&lt;br /&gt;
* &#039;&#039;&#039;STRIDE Desktop Installation Package&#039;&#039;&#039; (one of the following)&lt;br /&gt;
**&amp;lt;tt&amp;gt;STRIDE_framework-windows_4.3.yy.zip&amp;lt;/tt&amp;gt; (Windows desktop)&lt;br /&gt;
**&amp;lt;tt&amp;gt;STRIDE_framework-linux_4.3.yy.tgz&amp;lt;/tt&amp;gt; (Linux desktop)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Training Source files&#039;&#039;&#039;&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_training_source.zip&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Test Space User Account URL and Logon Credentials&#039;&#039;&#039;&lt;br /&gt;
** Test Space URL (&amp;lt;tt&amp;gt;&amp;lt;i&amp;gt;YourCompany&amp;lt;/i&amp;gt;.stridetestspace.com&amp;lt;/tt&amp;gt;)&lt;br /&gt;
** User-name&lt;br /&gt;
** User-password&lt;br /&gt;
&lt;br /&gt;
=== Desktop Setup ===&lt;br /&gt;
The training requires that you install the [[Desktop_Installation | Desktop Framework]] and that the [[STRIDE_Off-Target_Environment | Off-Target Environment]] is setup correctly and verified. &lt;br /&gt;
&lt;br /&gt;
For an overview of the installation steps, please refer to the [[Installation_Overview | Installation Overview]] article.&lt;br /&gt;
&lt;br /&gt;
The following steps are required:&lt;br /&gt;
# Install your [[Desktop_Installation | desktop Framework package]]&lt;br /&gt;
# Read about the [[STRIDE_Off-Target_Environment | Off-Target Environment]]&lt;br /&gt;
# Install a [[STRIDE_Off-Target_Environment#Host_Compiler | host C++ compiler]] for your desktop (if needed)&lt;br /&gt;
# Use the Off-Target SDK to [[Building_an_Off-Target_Test_App | build a Test App]]&lt;br /&gt;
# [[Building_an_Off-Target_Test_App#Diagnostics | Run STRIDE diagnostics]] with the Test App built&lt;br /&gt;
&lt;br /&gt;
=== Training Setup ===&lt;br /&gt;
The following source code can be found in the &#039;&#039;&#039;STRIDE_training_source.zip&#039;&#039;&#039; file:&lt;br /&gt;
&lt;br /&gt;
   software_under_test.c | h&lt;br /&gt;
   TestBasic.cpp | h&lt;br /&gt;
   TestParam.cpp | h&lt;br /&gt;
   TestFixture.cpp | h&lt;br /&gt;
   TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime.cpp | h&lt;br /&gt;
   TestFile.cpp | h&lt;br /&gt;
   TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039; is contained in the file &#039;&#039;&#039;software_under_test.c | h&#039;&#039;&#039;.  All of the public functions use &#039;&#039;&#039;sut_&#039;&#039;&#039; as a prefix. All training modules test against this file. Although the test examples are contained in C++ files, most of the test logic is written in standard C. &lt;br /&gt;
 &lt;br /&gt;
* Extract STRIDE_training_source.zip into the directory &amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\sample_src&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/sample_src&amp;lt;/tt&amp;gt; for Linux)&lt;br /&gt;
* Rebuild the TestApp following the instructions for [[Building_an_Off-Target_Test_App#Build_Steps | Building a TestApp]].&lt;br /&gt;
* List all [[Test_Units | Test Units]] within the generated database file using the following command:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
The following remoted Functions and Test Units should be displayed:&lt;br /&gt;
 &lt;br /&gt;
   Functions&lt;br /&gt;
     sut_strcpy(char const * input, char * output) : void&lt;br /&gt;
     strlen(char const * _Str) : size_t&lt;br /&gt;
   Test Units&lt;br /&gt;
     TestBasic()&lt;br /&gt;
     TestDouble_Reference()&lt;br /&gt;
     TestDouble_Definition()&lt;br /&gt;
     TestExpect_Seq()&lt;br /&gt;
     TestExpect_Data()&lt;br /&gt;
     TestExpect_Misc()&lt;br /&gt;
     TestFile()&lt;br /&gt;
     TestFixture()&lt;br /&gt;
     TestParam(int data1, int data2, char * szString)&lt;br /&gt;
     TestRuntime_Static()&lt;br /&gt;
     TestRuntime_Dynamic(int NumberOfTestCases)&lt;br /&gt;
&lt;br /&gt;
=== Run Training Tests ===&lt;br /&gt;
Here we will run all tests in the &amp;lt;tt&amp;gt;TestApp.sidb&amp;lt;/tt&amp;gt; database.&amp;lt;ref&amp;gt;Note that the S2 diagnostic tests are treated separately, and are not run unless the &amp;lt;tt&amp;gt;--diagnostics&amp;lt;/tt&amp;gt; option is specified to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* If you haven&#039;t done so already, [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using the SDK makefile&lt;br /&gt;
* Invoke TestApp found in the &#039;&#039;/out/bin&#039;&#039; directory&lt;br /&gt;
* Create an [[STRIDE_Runner#Options | option file]] (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) in a directory of your choice.&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options ######&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;%STRIDE_DIR%\SDK\Windows\sample_src\TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options #####&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;$STRIDE_DIR/SDK/Posix/sample_src/TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all &lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
* If you set up an [[STRIDE_Runner#Environment_Variables | environment variable]] for the &#039;&#039;&#039;device&#039;&#039;&#039; option then it is not required in the option file. Note: Command line options override environment variables.  &lt;br /&gt;
* &#039;&#039;stride --help&#039;&#039; provides [[STRIDE_Runner#Options | options information]]&lt;br /&gt;
&lt;br /&gt;
If you haven&#039;t done so already, start &amp;lt;tt&amp;gt;TestApp&amp;lt;/tt&amp;gt; running in a separate console window.&lt;br /&gt;
&lt;br /&gt;
Now run stride as shown below and verify summary results:&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The output should look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Interpreting Results ===&lt;br /&gt;
Open &#039;&#039;&#039;TestApp.xml&#039;&#039;&#039; in your browser; this file is created in the directory from which you ran &#039;&#039;&#039;stride&#039;&#039;&#039; (or at the path specified via the &amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt; command line option). If you were connected to the Internet when you ran the tests, the TestApp.xsl file is also generated in that directory. When you open TestApp.xml in a web browser, the XSL is automatically applied to render the report as HTML. If you use Google Chrome, please see [[Browser Compatibility]].&lt;br /&gt;
&lt;br /&gt;
If you&#039;re interested in the details of the tests, please see the test documentation contained in the test report.&lt;br /&gt;
&lt;br /&gt;
=== Test Space Access ===&lt;br /&gt;
&lt;br /&gt;
As part of the training, users implement exercises (test cases).  As each exercise is completed, the results are expected to be uploaded to Test Space. Accessing Test Space (uploading, viewing, etc.) requires a user-name and password. Before working on a training module please confirm that your user account has been set up by [[Test_Space_Setup | logging in]]. &lt;br /&gt;
&lt;br /&gt;
Test Space holds expected results in the form of [[Creating_And_Using_Baselines | baselines]]; these are used to automatically verify whether the exercises have been implemented correctly (at least to some degree). For capturing test results a &#039;&#039;&#039;Training&#039;&#039;&#039; project has been created with the following &#039;&#039;&#039;Spaces&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
   Sandbox     - Results for training setup&lt;br /&gt;
   TestBasic   - Basics training results: TestBasic.cpp | h&lt;br /&gt;
   TestParam   - Parameters training results: TestParam.cpp | h&lt;br /&gt;
   TestFixture - Fixturing training results: TestFixture.cpp | h&lt;br /&gt;
   TestExpect  - Expectations training results: TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime - Runtime API training results: TestRuntime.cpp | h&lt;br /&gt;
   TestFile    - File IO training results: TestFile.cpp | h&lt;br /&gt;
   TestDouble  - Doubling training results: TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To publish your results using the [[STRIDE_Runner | STRIDE Runner]] the following command-line options should be used:&lt;br /&gt;
&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --space &#039;&#039;MODULENAME&#039;&#039; &lt;br /&gt;
  --name &#039;&#039;YOURNAME&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
* Concerning the &#039;&#039;MODULENAME&#039;&#039; option: use the name that corresponds to the training module currently underway (e.g. TestBasic, TestParam, etc.)&lt;br /&gt;
* &#039;&#039;YOURNAME&#039;&#039; should be set to your name (e.g. JohnD); omit spaces from this string&lt;br /&gt;
* If you access the Internet via an HTTP proxy please read [[STRIDE_Runner#Using_a_Proxy | &#039;&#039;&#039;this article&#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
To make it easier for now we recommend that you update your existing option file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) with the following: &lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
=== Publish Test Space Results ===&lt;br /&gt;
&lt;br /&gt;
To complete the setup, publish your results to Test Space. Please make sure to use &#039;&#039;&#039;Sandbox&#039;&#039;&#039; as the space to which your results are uploaded (see below).&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot; --space Sandbox --upload&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
  Uploading to test space...&lt;br /&gt;
&lt;br /&gt;
=== Confirming Setup ===&lt;br /&gt;
&lt;br /&gt;
After you have run and uploaded the results of the training setup, you should confirm the correctness of your work. The training setup results will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your setup:&lt;br /&gt;
# Navigate to the Sandbox result set in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns. If so, you have completed the setup and are ready to move on to the training modules.&lt;br /&gt;
&lt;br /&gt;
[[image:Training_Confirming_Setup.jpg|none|600px| Test Space baseline example]]&lt;br /&gt;
&lt;br /&gt;
== Training ==&lt;br /&gt;
At this point you should be ready to start the actual training. There are &#039;&#039;&#039;7 separate training modules&#039;&#039;&#039; and we recommend the following order:&lt;br /&gt;
&lt;br /&gt;
===Introductory===&lt;br /&gt;
* [[Training_Basics | &#039;&#039;&#039;Basics&#039;&#039;&#039;]] - Covers basics of implementing and executing test cases&lt;br /&gt;
* [[Training_Parameters |&#039;&#039;&#039;Parameters&#039;&#039;&#039;]] - How to pass parameters to a test &lt;br /&gt;
* [[Training_Fixturing | &#039;&#039;&#039;Fixturing&#039;&#039;&#039;]] - Leveraging setup and teardown features&lt;br /&gt;
&lt;br /&gt;
===Advanced===&lt;br /&gt;
* [[Training_Expectations | &#039;&#039;&#039;Expectations&#039;&#039;&#039;]] - Validating code sequencing along with state data&lt;br /&gt;
* [[Training_Runtime_API | &#039;&#039;&#039;Runtime API&#039;&#039;&#039;]] - Using the runtime services to dynamically create test suites / cases&lt;br /&gt;
* [[Training_File_IO | &#039;&#039;&#039;File IO&#039;&#039;&#039;]] - Reading and writing files existing on the host&lt;br /&gt;
* [[Training_Doubling | &#039;&#039;&#039;Doubling&#039;&#039;&#039;]] - Replacing a dependency with a stub, fake, or mock&lt;br /&gt;
&lt;br /&gt;
== Training Confirmation ==&lt;br /&gt;
&lt;br /&gt;
As you run and upload each test unit containing your worked training exercises, you should confirm the correctness of your work.&lt;br /&gt;
&lt;br /&gt;
The correctly worked training test units will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your work:&lt;br /&gt;
# Navigate to your result set in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13915</id>
		<title>Training Doubling</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13915"/>
		<updated>2013-02-02T00:34:16Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: Undo revision 13914 by Jeffs (Talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on explaining how to leverage &#039;&#039;&#039;Test Doubles&#039;&#039;&#039; in the context of executing a test. For an overview of &#039;&#039;&#039;intercepting&#039;&#039;&#039; existing functions please refer to [[Using_Test_Doubles | Test Doubles]]. The module covers the following topics:&lt;br /&gt;
* How to apply pragmas for [[Function_Capturing | function intercepting]]&lt;br /&gt;
* How to decide what kind of &#039;&#039;mangling&#039;&#039; is required&lt;br /&gt;
** &#039;&#039;Definition&#039;&#039; versus &#039;&#039;Reference&#039;&#039;&lt;br /&gt;
** &#039;&#039;Explicit&#039;&#039; versus &#039;&#039;Implicit&#039;&#039;&lt;br /&gt;
* Setting and Resetting the Double implementation&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two new files -- &#039;&#039;&#039;TestDouble.cpp &amp;amp; TestDouble.h&#039;&#039;&#039; -- that implement two Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Each Test Unit has test cases already implemented (used for reference) plus one test method, called &#039;&#039;&#039;Exercise&#039;&#039;&#039;, that you are required to implement. Currently the exercise methods return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Double&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference --run TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 0 failed, 0 in progress, 2 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (based on the &#039;&#039;output&#039;&#039; option). When the xml file is opened in a web browser, the XSL is automatically applied to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;strlen()&#039;&#039; routine:&lt;br /&gt;
*** Add a &#039;&#039;NOTE&#039;&#039; capturing its name when it is called&lt;br /&gt;
*** Use the real &#039;&#039;strlen()&#039;&#039; to return the length of the passed-in string&lt;br /&gt;
** Use &#039;&#039;srEXPECT_EQ()&#039;&#039; to validate that &#039;&#039;sut_strcheck()&#039;&#039; returns correct length &lt;br /&gt;
** Make sure to restore the original routine &lt;br /&gt;
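&lt;br /&gt;
The &#039;&#039;Reference&#039;&#039; exercise pattern -- install a double that records a note, delegates to the real routine, and is restored afterward -- can be sketched in plain Python. All names below are illustrative only; the actual exercise uses the STRIDE C/C++ macros, not this code:&lt;br /&gt;
&lt;br /&gt;

```python
# Illustrative analogy only: stands in for the STRIDE double mechanism.
calls = []                       # analogue of the test log (NOTE entries)

def real_strlen(s):
    return len(s)

strlen_impl = real_strlen        # indirection the code under test calls through

def strlen_double(s):
    calls.append("strlen_double")    # record that the double was called
    return real_strlen(s)            # delegate to the real routine

strlen_impl = strlen_double      # install the double
length = strlen_impl("Exercise Test String 1")
strlen_impl = real_strlen        # restore the original routine

assert length == 22
assert calls == ["strlen_double"]
```
&lt;br /&gt;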
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;sut_strcpy()&#039;&#039; routine:&lt;br /&gt;
*** Log its name when it is called&lt;br /&gt;
*** Validate string passed to &#039;&#039;sut_strsave()&#039;&#039; is received correctly by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
*** Remember that &#039;&#039;sut_strsave()&#039;&#039; calls &#039;&#039;sut_strcpy()&#039;&#039; with the string passed to it&lt;br /&gt;
*** Can use a test macro to validate the string is correctly passed to the double&lt;br /&gt;
*** Call the original function (&#039;&#039;sut_strcpy()&#039;&#039;) with a &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; (e.g. &amp;quot;Exercise Test String 2&amp;quot;)&lt;br /&gt;
**** Call the original within the test double &lt;br /&gt;
**** Make sure to reset the original function within the mock before calling it (i.e. srDOUBLE_RESET)&lt;br /&gt;
** Call &#039;&#039;sut_strsave()&#039;&#039; with a string (e.g. &amp;quot;Exercise Test String 1&amp;quot;)&lt;br /&gt;
** Use &#039;&#039;sut_strget()&#039;&#039; to retrieve a string&lt;br /&gt;
** Compare retrieved string with the &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; inserted by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
** NOTE - this won&#039;t work if you wait until the end of the test to restore the original routine (i.e. srDOUBLE_SET)&lt;br /&gt;
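&lt;br /&gt;
The ordering constraint in the &#039;&#039;Definition&#039;&#039; exercise -- reset to the original routine &#039;&#039;before&#039;&#039; the double calls it, not at the end of the test -- can be sketched in plain Python. All names below are illustrative only; the real exercise uses the STRIDE srDOUBLE_SET / srDOUBLE_RESET macros on &#039;&#039;sut_strcpy()&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;

```python
# Illustrative analogy only: a double that resets the indirection before
# calling the original with a DIFFERENT string.
storage = {"value": ""}

def sut_strcpy(text):                # stands in for the original routine
    storage["value"] = text

strcpy_impl = sut_strcpy             # indirection the code under test calls through

def sut_strsave(text):               # code under test
    strcpy_impl(text)

def sut_strget():
    return storage["value"]

def strcpy_double(text):
    global strcpy_impl
    assert text == "Exercise Test String 1"   # string reached the double intact
    strcpy_impl = sut_strcpy                  # reset FIRST; otherwise the next
    strcpy_impl("Exercise Test String 2")     # call would recurse into the double

strcpy_impl = strcpy_double          # install the double
sut_strsave("Exercise Test String 1")

assert sut_strget() == "Exercise Test String 2"
```
&lt;br /&gt;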
&lt;br /&gt;
* Execute &#039;&#039;Test Double&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 6 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s) publish your results to Test Space. If you have not added test space options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) please see [[Training_Getting_Started#Test_Space_Access| testspace access]]. &lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition --space TestDouble --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to using Test Doubles.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
* [[Using_Test_Doubles | Using Test Doubles]] - Outlines the basic steps to enable a &#039;&#039;double&#039;&#039;&lt;br /&gt;
** Capture the f(x) that is going to be doubled&lt;br /&gt;
** Code the test logic to additionally set the Double and restore original f(x)&lt;br /&gt;
&lt;br /&gt;
* [[Scl_function | Intercepting a function]] - Pragma specifics outline&lt;br /&gt;
** For this exercise, there are &#039;&#039;&#039;NO&#039;&#039;&#039; optional parameters when capturing a f(x) to be &#039;&#039;&#039;intercepted&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Details of the pragma parameters can be understood in the [[Intercept_Module#Interceptor | Intercept Module]] article. Please review in detail the following:&lt;br /&gt;
** &#039;&#039;context&#039;&#039; - This refers to whether the intercept applies at the calling routine or the called routine. &lt;br /&gt;
** &#039;&#039;name-mangling&#039;&#039; - How the function name is switched during compilation&lt;br /&gt;
** &#039;&#039;group-id&#039;&#039; - Used to associate a group of intercepted functions &lt;br /&gt;
&lt;br /&gt;
* Some &#039;&#039;&#039;external links&#039;&#039;&#039; to review:&lt;br /&gt;
** [http://www.martinfowler.com/bliki/TestDouble.html Martin Fowler definition]&lt;br /&gt;
** [http://en.wikipedia.org/wiki/Test_Double Wikipedia definition]&lt;br /&gt;
** [http://msdn.microsoft.com/en-us/magazine/cc163358.aspx MSDN Magazine description]&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Double_Sample | Test Double Sample]] that can be a useful reference. This reference example can be built and executed like all of the samples using the Off-Target environment.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13914</id>
		<title>Training Doubling</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13914"/>
		<updated>2013-02-02T00:31:35Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: Reverted edits by Jeffs (Talk) to last revision by Marku&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on explaining how to leverage &#039;&#039;&#039;Test Doubles&#039;&#039;&#039; in the context of executing a test. For an overview of &#039;&#039;&#039;intercepting&#039;&#039;&#039; existing functions please refer to [[Using_Test_Doubles | Test Doubles]]. The module covers the following topics:&lt;br /&gt;
* How to apply pragmas for [[Function_Capturing | function intercepting]]&lt;br /&gt;
* How to decide what kind of &#039;&#039;mangling&#039;&#039; is required&lt;br /&gt;
** &#039;&#039;Definition&#039;&#039; versus &#039;&#039;Reference&#039;&#039;&lt;br /&gt;
** &#039;&#039;Explicit&#039;&#039; versus &#039;&#039;Implicit&#039;&#039;&lt;br /&gt;
* Setting and Resetting the Double implementation&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two new files -- &#039;&#039;&#039;TestDouble.cpp &amp;amp; TestDouble.h&#039;&#039;&#039; -- that implement two Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Each Test Unit has test cases already implemented (used for reference) plus one test method, called &#039;&#039;&#039;Exercise&#039;&#039;&#039;, that you are required to implement. Currently the exercise methods return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If not already done, create an [[STRIDE_Runner#Options | option file]] (myoptions.txt) using the following content (Windows example)&lt;br /&gt;
  &lt;br /&gt;
  ##### Command Line Options ######&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;%STRIDE_DIR%\SDK\Windows\sample_src\TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all&lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Double&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference --run TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 0 failed, 0 in progress, 2 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (based on the &#039;&#039;output&#039;&#039; option). When the xml file is opened in a web browser, the XSL is automatically applied to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;strlen()&#039;&#039; routine:&lt;br /&gt;
*** Add a &#039;&#039;NOTE&#039;&#039; capturing its name when it is called&lt;br /&gt;
*** Use the real &#039;&#039;strlen()&#039;&#039; to return the length of the passed-in string&lt;br /&gt;
** Use &#039;&#039;srEXPECT_EQ()&#039;&#039; to validate that &#039;&#039;sut_strcheck()&#039;&#039; returns correct length &lt;br /&gt;
** Make sure to restore the original routine &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;sut_strcpy()&#039;&#039; routine:&lt;br /&gt;
*** Log its name when it is called&lt;br /&gt;
*** Validate that the string passed to &#039;&#039;sut_strsave()&#039;&#039; is received correctly by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
*** Remember that &#039;&#039;sut_strsave()&#039;&#039; calls &#039;&#039;sut_strcpy()&#039;&#039; with the string passed to it&lt;br /&gt;
*** A test macro can be used to validate that the string is correctly passed to the double&lt;br /&gt;
*** Call the original function (&#039;&#039;sut_strcpy()&#039;&#039;) with a &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; (e.g. &amp;quot;Exercise Test String 2&amp;quot;)&lt;br /&gt;
**** Call the original from within the test double&lt;br /&gt;
**** Make sure to reset the original function within the mock before calling it (i.e. &#039;&#039;srDOUBLE_RESET&#039;&#039;)&lt;br /&gt;
** Call &#039;&#039;sut_strsave()&#039;&#039; with a string (e.g. &amp;quot;Exercise Test String 1&amp;quot;)&lt;br /&gt;
** Use &#039;&#039;sut_strget()&#039;&#039; to retrieve the saved string&lt;br /&gt;
** Compare the retrieved string with the &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; inserted by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
** NOTE - this won&#039;t work if you leave the double set (i.e. &#039;&#039;srDOUBLE_SET&#039;&#039;) and wait until the end of the test to restore the original routine&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Execute only the &#039;&#039;Test Double&#039;&#039; Test Units&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 6 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Test Space. To simplify this, we recommend updating your existing option file (myoptions.txt) with the following, if not already done:&lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition --space TestDouble --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information relates to using &#039;&#039;test doubles&#039;&#039; and function interception.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
* [[Using_Test_Doubles | Using Test Doubles]] - Outlines the basic steps to enable a &#039;&#039;double&#039;&#039;&lt;br /&gt;
** Capture the function that is going to be doubled&lt;br /&gt;
** Code the test logic to set the double and later restore the original function&lt;br /&gt;
&lt;br /&gt;
* [[Scl_function | Intercepting a function]] - Pragma specifics outline&lt;br /&gt;
** For this exercise, there are &#039;&#039;&#039;NO&#039;&#039;&#039; optional parameters when capturing a function to be &#039;&#039;&#039;intercepted&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Details of the pragma parameters are explained in the [[Intercept_Module#Interceptor | Intercept Module]] article. Please review the following in detail:&lt;br /&gt;
** &#039;&#039;context&#039;&#039; - Whether the function is intercepted at the referencing (caller) side or at its definition&lt;br /&gt;
** &#039;&#039;name-mangling&#039;&#039; - How the function name is switched during compilation&lt;br /&gt;
** &#039;&#039;group-id&#039;&#039; - Used to associate a group of intercepted functions&lt;br /&gt;
&lt;br /&gt;
* Some &#039;&#039;&#039;external links&#039;&#039;&#039; to review:&lt;br /&gt;
** [http://www.martinfowler.com/bliki/TestDouble.html Martin Fowler definition]&lt;br /&gt;
** [http://en.wikipedia.org/wiki/Test_Double Wikipedia definition]&lt;br /&gt;
** [http://msdn.microsoft.com/en-us/magazine/cc163358.aspx MSDN Magazine description]&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* The [[Test_Double_Sample | Test Double Sample]] can be a useful reference. Like all of the samples, it can be built and executed using the Off-Target environment.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13913</id>
		<title>Training Doubling</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13913"/>
		<updated>2013-02-02T00:30:19Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Implement Exercise */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on explaining how to leverage &#039;&#039;&#039;Test Doubles&#039;&#039;&#039; in the context of executing a test. For an overview of &#039;&#039;&#039;intercepting&#039;&#039;&#039; existing functions, please refer to [[Using_Test_Doubles | Test Doubles]]. The module covers the following topics:&lt;br /&gt;
* How to apply pragmas for [[Function_Capturing | function intercepting]]&lt;br /&gt;
* How to decide what kind of &#039;&#039;mangling&#039;&#039; is required&lt;br /&gt;
** &#039;&#039;Definition&#039;&#039; versus &#039;&#039;Reference&#039;&#039;&lt;br /&gt;
** &#039;&#039;Explicit&#039;&#039; versus &#039;&#039;Implicit&#039;&#039;&lt;br /&gt;
* Setting and Resetting the Double implementation&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two new files -- &#039;&#039;&#039;TestDouble.cpp &amp;amp; TestDouble.h&#039;&#039;&#039; -- that implement two Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Each Test Unit has test cases already implemented (for reference) and one test method, called &#039;&#039;&#039;Exercise&#039;&#039;&#039;, that you are required to implement. Currently the Exercise methods return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute only the &#039;&#039;Test Double&#039;&#039; Test Units&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference --run TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 0 failed, 0 in progress, 2 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (per the &#039;&#039;output&#039;&#039; option). When you open the XML file in a web browser, the XSL is applied automatically to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;strlen()&#039;&#039; routine:&lt;br /&gt;
*** Add a &#039;&#039;NOTE&#039;&#039; capturing its name when it is called&lt;br /&gt;
*** Use the real &#039;&#039;strlen()&#039;&#039; to return the length of the passed-in string&lt;br /&gt;
** Use &#039;&#039;srEXPECT_EQ()&#039;&#039; to validate that &#039;&#039;sut_strcheck()&#039;&#039; returns the correct length&lt;br /&gt;
** Make sure to restore the original routine&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;sut_strcpy()&#039;&#039; routine:&lt;br /&gt;
*** Log its name when it is called&lt;br /&gt;
*** Validate that the string passed to &#039;&#039;sut_strsave()&#039;&#039; is received correctly by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
*** Remember that &#039;&#039;sut_strsave()&#039;&#039; calls &#039;&#039;sut_strcpy()&#039;&#039; with the string passed to it&lt;br /&gt;
*** A test macro can be used to validate that the string is correctly passed to the double&lt;br /&gt;
*** Call the original function (&#039;&#039;sut_strcpy()&#039;&#039;) with a &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; (e.g. &amp;quot;Exercise Test String 2&amp;quot;)&lt;br /&gt;
**** Call the original from within the test double&lt;br /&gt;
**** Make sure to reset the original function within the mock before calling it (i.e. &#039;&#039;srDOUBLE_RESET&#039;&#039;)&lt;br /&gt;
** Call &#039;&#039;sut_strsave()&#039;&#039; with a string (e.g. &amp;quot;Exercise Test String 1&amp;quot;)&lt;br /&gt;
** Use &#039;&#039;sut_strget()&#039;&#039; to retrieve the saved string&lt;br /&gt;
** Compare the retrieved string with the &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; inserted by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
** NOTE - this won&#039;t work if you leave the double set (i.e. &#039;&#039;srDOUBLE_SET&#039;&#039;) and wait until the end of the test to restore the original routine&lt;br /&gt;
&lt;br /&gt;
* Execute only the &#039;&#039;Test Double&#039;&#039; Test Units&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 6 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Test Space. If you have not added Test Space options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;), please see [[Training_Getting_Started#Test_Space_Access| Test Space access]].&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition --space TestDouble --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information relates to using &#039;&#039;test doubles&#039;&#039; and function interception.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
* [[Using_Test_Doubles | Using Test Doubles]] - Outlines the basic steps to enable a &#039;&#039;double&#039;&#039;&lt;br /&gt;
** Capture the function that is going to be doubled&lt;br /&gt;
** Code the test logic to set the double and later restore the original function&lt;br /&gt;
&lt;br /&gt;
* [[Scl_function | Intercepting a function]] - Pragma specifics outline&lt;br /&gt;
** For this exercise, there are &#039;&#039;&#039;NO&#039;&#039;&#039; optional parameters when capturing a function to be &#039;&#039;&#039;intercepted&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Details of the pragma parameters are explained in the [[Intercept_Module#Interceptor | Intercept Module]] article. Please review the following in detail:&lt;br /&gt;
** &#039;&#039;context&#039;&#039; - Whether the function is intercepted at the referencing (caller) side or at its definition&lt;br /&gt;
** &#039;&#039;name-mangling&#039;&#039; - How the function name is switched during compilation&lt;br /&gt;
** &#039;&#039;group-id&#039;&#039; - Used to associate a group of intercepted functions&lt;br /&gt;
&lt;br /&gt;
* Some &#039;&#039;&#039;external links&#039;&#039;&#039; to review:&lt;br /&gt;
** [http://www.martinfowler.com/bliki/TestDouble.html Martin Fowler definition]&lt;br /&gt;
** [http://en.wikipedia.org/wiki/Test_Double Wikipedia definition]&lt;br /&gt;
** [http://msdn.microsoft.com/en-us/magazine/cc163358.aspx MSDN Magazine description]&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* The [[Test_Double_Sample | Test Double Sample]] can be a useful reference. Like all of the samples, it can be built and executed using the Off-Target environment.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13912</id>
		<title>Training Getting Started</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13912"/>
		<updated>2013-02-02T00:17:49Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Before Starting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
Our training approach is based on a self-guided tour of the [[STRIDE_Overview#STRIDE_Testing_Features | STRIDE Testing Features]] using reference examples and assigned exercises. The set of examples and the implemented exercises will be built and executed using a standard desktop computer. &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039;, &#039;&#039;reference examples&#039;&#039;, and &#039;&#039;exercises&#039;&#039; have been designed to be as &#039;&#039;&#039;&#039;&#039;simple as possible while sufficiently demonstrating the topic at hand&#039;&#039;&#039;&#039;&#039;. In particular, the &#039;&#039;software under test&#039;&#039; is &#039;&#039;&#039;very light&#039;&#039;&#039; on core application logic -- the focus instead is on the test code that leverages STRIDE to define and execute tests. &lt;br /&gt;
&lt;br /&gt;
The user is expected to work through each of the training modules covering a specific testing feature. Once the exercises are completed (actual test cases implemented), the results are published to [[STRIDE_Overview#STRIDE_Test_Space | Test Space]] (and validated against a pre-created baseline).&lt;br /&gt;
&lt;br /&gt;
The training collateral consists of the following:&lt;br /&gt;
# The [[STRIDE_Overview#STRIDE_Framework | STRIDE Framework]] used to implement and execute tests&lt;br /&gt;
#* The Framework is configured in an [[STRIDE_Off-Target_Environment | Off-Target Environment]] &lt;br /&gt;
# A set of specific [[#Training | Training Modules]] that will guide you through the exercises&lt;br /&gt;
# [[Main_Page | &#039;&#039;Wiki articles&#039;&#039;]] that will provide background material and other technical information&lt;br /&gt;
# [[STRIDE_Overview#STRIDE_Test_Space | STRIDE Test Space]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For more details concerning STRIDE refer to the following:&lt;br /&gt;
* &#039;&#039;[[STRIDE_Overview_Video | Overview screencast]]&#039;&#039;  &lt;br /&gt;
* &#039;&#039;[[What is Unique About STRIDE | What is Unique About STRIDE]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Types of Testing Supported by STRIDE | Types of Testing Supported]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Frequently Asked Questions About STRIDE | Frequently Asked Questions]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;For questions and support [mailto:training@s2technologies.com email us]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Before Starting ==&lt;br /&gt;
Before you can start setting up your environment, you need the following three items:&lt;br /&gt;
* &#039;&#039;&#039;STRIDE Desktop Installation Package&#039;&#039;&#039; (one of the following)&lt;br /&gt;
**&amp;lt;tt&amp;gt;STRIDE_framework-windows_4.3.yy.zip&amp;lt;/tt&amp;gt; (Windows desktop)&lt;br /&gt;
**&amp;lt;tt&amp;gt;STRIDE_framework-linux_4.3.yy.tgz&amp;lt;/tt&amp;gt; (Linux desktop)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Training Source files&#039;&#039;&#039;&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_training_source.zip&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Test Space User Account URL and Logon Credentials&#039;&#039;&#039;&lt;br /&gt;
** Test Space URL (&amp;lt;tt&amp;gt;https://&amp;lt;i&amp;gt;YourCompany&amp;lt;/i&amp;gt;.stridetestspace.com&amp;lt;/tt&amp;gt;)&lt;br /&gt;
** User-name&lt;br /&gt;
** User-password&lt;br /&gt;
&lt;br /&gt;
=== Desktop Setup ===&lt;br /&gt;
The training requires that you install the [[Desktop_Installation | Desktop Framework]] and that the [[STRIDE_Off-Target_Environment | Off-Target Environment]] is setup correctly and verified. &lt;br /&gt;
&lt;br /&gt;
For an overview of the installation steps please refer to [[Installation_Overview | this]] article.&lt;br /&gt;
&lt;br /&gt;
The following steps are required:&lt;br /&gt;
# Install your [[Desktop_Installation | desktop Framework package]]&lt;br /&gt;
# Read about the [[STRIDE_Off-Target_Environment | Off-Target Environment]]&lt;br /&gt;
# Install [[STRIDE_Off-Target_Environment#Host_Compiler | host C++ compiler]] for your desktop (if needed)&lt;br /&gt;
# Use the Off-Target SDK to [[Building_an_Off-Target_Test_App | build a Test App]]&lt;br /&gt;
# [[Building_an_Off-Target_Test_App#Diagnostics | Run STRIDE diagnostics]] with the Test App built&lt;br /&gt;
&lt;br /&gt;
=== Training Setup ===&lt;br /&gt;
The following source code can be found in the &#039;&#039;&#039;STRIDE_training_source.zip&#039;&#039;&#039; file:&lt;br /&gt;
&lt;br /&gt;
   software_under_test.c | h&lt;br /&gt;
   TestBasic.cpp | h&lt;br /&gt;
   TestParam.cpp | h&lt;br /&gt;
   TestFixture.cpp | h&lt;br /&gt;
   TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime.cpp | h&lt;br /&gt;
   TestFile.cpp | h&lt;br /&gt;
   TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039; is contained in the file &#039;&#039;&#039;software_under_test.c | h&#039;&#039;&#039;.  All of the public functions use &#039;&#039;&#039;sut_&#039;&#039;&#039; as a prefix. All training modules test against this file. Although the test examples are contained in C++ files, most of the test logic is written in standard C. &lt;br /&gt;
 &lt;br /&gt;
* Extract STRIDE_training_source.zip into the directory &amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\sample_src&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/sample_src&amp;lt;/tt&amp;gt; for Linux)&lt;br /&gt;
* Rebuild the TestApp following the instructions for [[Building_an_Off-Target_Test_App#Build_Steps | Building a TestApp]].&lt;br /&gt;
* List all [[Test_Units | Test Units]] within the generated database file using the following command:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
The following remoted Functions and Test Units should be displayed:&lt;br /&gt;
 &lt;br /&gt;
   Functions&lt;br /&gt;
     sut_strcpy(char const * input, char * output) : void&lt;br /&gt;
     strlen(char const * _Str) : size_t&lt;br /&gt;
   Test Units&lt;br /&gt;
     TestBasic()&lt;br /&gt;
     TestDouble_Reference()&lt;br /&gt;
     TestDouble_Definition()&lt;br /&gt;
     TestExpect_Seq()&lt;br /&gt;
     TestExpect_Data()&lt;br /&gt;
     TestExpect_Misc()&lt;br /&gt;
     TestFile()&lt;br /&gt;
     TestFixture()&lt;br /&gt;
     TestParam(int data1, int data2, char * szString)&lt;br /&gt;
     TestRuntime_Static()&lt;br /&gt;
     TestRuntime_Dynamic(int NumberOfTestCases)&lt;br /&gt;
&lt;br /&gt;
=== Run Training Tests ===&lt;br /&gt;
Here we will run all tests in the &amp;lt;tt&amp;gt;TestApp.sidb&amp;lt;/tt&amp;gt; database.&amp;lt;ref&amp;gt;Note that the S2 diagnostic tests are treated separately, and are not run unless the &amp;lt;tt&amp;gt;--diagnostics&amp;lt;/tt&amp;gt; option is specified to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* If you haven&#039;t done so already, [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using the SDK makefile&lt;br /&gt;
* Invoke TestApp found in the &#039;&#039;/out/bin&#039;&#039; directory&lt;br /&gt;
* Create an [[Stride_Runner#Options | option file]] (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) that includes the following content in &amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\myoptions.txt&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/myoptions.txt&amp;lt;/tt&amp;gt; for Linux)&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options ######&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;%STRIDE_DIR%\SDK\Windows\sample_src\TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options #####&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;$STRIDE_DIR/SDK/Posix/sample_src/TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all &lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
* If you set up an [[STRIDE_Runner#Environment_Variables | environment variable]] for the &#039;&#039;&#039;device&#039;&#039;&#039; option, it is not required in the option file. Note: command-line options override environment variables.&lt;br /&gt;
* &#039;&#039;stride --help&#039;&#039; provides [[STRIDE_Runner#Options | options information]]&lt;br /&gt;
&lt;br /&gt;
If you haven&#039;t done so already, start &amp;lt;tt&amp;gt;TestApp&amp;lt;/tt&amp;gt; running in a separate console window.&lt;br /&gt;
&lt;br /&gt;
Now run stride as shown below and verify summary results (starting from the &amp;lt;tt&amp;gt;SDK\Windows&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;SDK/Posix&amp;lt;/tt&amp;gt; directory):&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The output should look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Interpreting Results ===&lt;br /&gt;
Open &#039;&#039;&#039;TestApp.xml&#039;&#039;&#039; in your browser; this file is created in the directory from which you ran &#039;&#039;&#039;stride&#039;&#039;&#039; (or at the path specified via the &amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt; command line option). If you were connected to the Internet when you ran the tests, the TestApp.xsl file is also generated in that directory. When you open TestApp.xml in a web browser, the XSL is applied automatically to render HTML. If you use Google Chrome, please see [[Browser Compatibility]].&lt;br /&gt;
&lt;br /&gt;
If you&#039;re interested in the details of the tests, please see the test documentation contained in the test report.&lt;br /&gt;
&lt;br /&gt;
=== Test Space Access ===&lt;br /&gt;
&lt;br /&gt;
As part of the training, users implement exercises (test cases). As each exercise is completed, the results are expected to be uploaded to Test Space. Accessing Test Space (uploading, viewing, etc.) requires a username and password. Before working on a training module, please confirm that your user account has been set up by [[Test_Space_Setup | logging in]].&lt;br /&gt;
&lt;br /&gt;
Test Space holds expected results as [[Creating_And_Using_Baselines | baselines]], which are used to automatically verify that the exercises have been implemented correctly (at least to some degree). For capturing test results, a &#039;&#039;&#039;Training&#039;&#039;&#039; project has been created with the following &#039;&#039;&#039;Spaces&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
   Sandbox     - Results for training setup&lt;br /&gt;
   TestBasic   - Basics training results: TestBasic.cpp | h&lt;br /&gt;
   TestParam   - Parameters training results: TestParam.cpp | h&lt;br /&gt;
   TestFixture - Fixturing training results: TestFixture.cpp | h&lt;br /&gt;
   TestExpect  - Expectations training results: TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime - Runtime API training results: TestRuntime.cpp | h&lt;br /&gt;
   TestFile    - File IO training results: TestFile.cpp | h&lt;br /&gt;
   TestDouble  - Doubling training results: TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To publish your results using the [[STRIDE_Runner | STRIDE Runner]] the following command-line options should be used:&lt;br /&gt;
&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --space &#039;&#039;MODULENAME&#039;&#039; &lt;br /&gt;
  --name &#039;&#039;YOURNAME&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
* Concerning the &#039;&#039;MODULENAME&#039;&#039; option: use the name that corresponds to the training module currently underway (e.g. TestBasic, TestParam)&lt;br /&gt;
* &#039;&#039;YOURNAME&#039;&#039; should be set to your name (e.g. JohnD); omit spaces from this string&lt;br /&gt;
* If you access the Internet via an HTTP proxy please read [[STRIDE_Runner#Using_a_Proxy | &#039;&#039;&#039;this article&#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
To simplify this, we recommend updating your existing option file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) with the following:&lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
=== Publish Test Space Results ===&lt;br /&gt;
&lt;br /&gt;
To complete the setup, publish your results to Test Space. Make sure to use &#039;&#039;&#039;Sandbox&#039;&#039;&#039; as the space to upload your results to (see below).&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot; --space Sandbox --upload&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
  Uploading to test space...&lt;br /&gt;
&lt;br /&gt;
=== Confirming Setup ===&lt;br /&gt;
&lt;br /&gt;
After you have run and uploaded the results of the training setup, you should confirm the correctness of your work. The training setup results will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your setup:&lt;br /&gt;
# Navigate to the &#039;&#039;&#039;Sandbox&#039;&#039;&#039; space in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns. If so, you have completed the setup and are ready to move on to the training modules.&lt;br /&gt;
&lt;br /&gt;
[[image:STRIDE_results_012813.JPG|none|600px| Test Space baseline example]]&lt;br /&gt;
&lt;br /&gt;
== Training ==&lt;br /&gt;
At this point you should be ready to start the actual training. There are &#039;&#039;&#039;7 separate training modules&#039;&#039;&#039; and we recommend the following order:&lt;br /&gt;
&lt;br /&gt;
===Introductory===&lt;br /&gt;
* [[Training_Basics | &#039;&#039;&#039;Basics&#039;&#039;&#039;]] - Covers basics of implementing and executing test cases&lt;br /&gt;
* [[Training_Parameters |&#039;&#039;&#039;Parameters&#039;&#039;&#039;]] - How to pass parameters to a test &lt;br /&gt;
* [[Training_Fixturing | &#039;&#039;&#039;Fixturing&#039;&#039;&#039;]] - Leveraging setup and teardown features&lt;br /&gt;
&lt;br /&gt;
===Advanced===&lt;br /&gt;
* [[Training_Expectations | &#039;&#039;&#039;Expectations&#039;&#039;&#039;]] - Validating code sequencing along with state data&lt;br /&gt;
* [[Training_Runtime_API | &#039;&#039;&#039;Runtime API&#039;&#039;&#039;]] - Using the runtime services to dynamically create test suites / cases&lt;br /&gt;
* [[Training_File_IO | &#039;&#039;&#039;File IO&#039;&#039;&#039;]] - Reading and writing files existing on the host&lt;br /&gt;
* [[Training_Doubling | &#039;&#039;&#039;Doubling&#039;&#039;&#039;]] - Replacing a dependency with a stub, fake, or mock&lt;br /&gt;
&lt;br /&gt;
== Training Confirmation ==&lt;br /&gt;
&lt;br /&gt;
As you run and upload each test unit containing your worked training exercises, you should confirm the correctness of your work.&lt;br /&gt;
&lt;br /&gt;
Correctly worked training test units will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your work:&lt;br /&gt;
# Navigate to your result set in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13911</id>
		<title>Training Getting Started</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13911"/>
		<updated>2013-02-02T00:15:55Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
Our training approach is based on a self-guided tour of the [[STRIDE_Overview#STRIDE_Testing_Features | STRIDE Testing Features]] using reference examples and assigned exercises. The set of examples and the implemented exercises will be built and executed using a standard desktop computer. &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039;, &#039;&#039;reference examples&#039;&#039;, and &#039;&#039;exercises&#039;&#039; have been designed to be as &#039;&#039;&#039;&#039;&#039;simple as possible while sufficiently demonstrating the topic at hand&#039;&#039;&#039;&#039;&#039;. In particular, the &#039;&#039;software under test&#039;&#039; is &#039;&#039;&#039;very light&#039;&#039;&#039; on core application logic -- the focus instead is on the test code that leverages STRIDE to define and execute tests. &lt;br /&gt;
&lt;br /&gt;
The user is expected to work through each of the training modules covering a specific testing feature. Once the exercises are completed (actual test cases implemented), the results are published to [[STRIDE_Overview#STRIDE_Test_Space | Test Space]] (and validated against a pre-created baseline).&lt;br /&gt;
&lt;br /&gt;
The training collateral consists of the following:&lt;br /&gt;
# The [[STRIDE_Overview#STRIDE_Framework | STRIDE Framework]] used to implement and execute tests&lt;br /&gt;
#* The Framework is configured in an [[STRIDE_Off-Target_Environment | Off-Target Environment]] &lt;br /&gt;
# A set of specific [[#Training | Training Modules]] that will guide you through the exercises&lt;br /&gt;
# [[Main_Page | &#039;&#039;Wiki articles&#039;&#039;]] that will provide background material and other technical information&lt;br /&gt;
# [[STRIDE_Overview#STRIDE_Test_Space | STRIDE Test Space]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For more details concerning STRIDE refer to the following:&lt;br /&gt;
* &#039;&#039;[[STRIDE_Overview_Video | Overview screencast]]&#039;&#039;  &lt;br /&gt;
* &#039;&#039;[[What is Unique About STRIDE | What is Unique About STRIDE]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Types of Testing Supported by STRIDE | Types of Testing Supported]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Frequently Asked Questions About STRIDE | Frequently Asked Questions]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;For questions and support [mailto:training@s2technologies.com email us]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Before Starting ==&lt;br /&gt;
Before you can start setting up your environment, you need the following 3 items:&lt;br /&gt;
* &#039;&#039;&#039;STRIDE Desktop Installation Package&#039;&#039;&#039; (one of the following)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-windows_4.3.yy.zip&amp;lt;/tt&amp;gt; (Windows desktop)&lt;br /&gt;
&#039;&#039;or&#039;&#039;&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-linux_4.3.yy.tgz&amp;lt;/tt&amp;gt; (Linux desktop)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Training Source files&#039;&#039;&#039;&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_training_source.zip&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Test Space User Account URL and Logon Credentials&#039;&#039;&#039;&lt;br /&gt;
** Test Space URL (&amp;lt;tt&amp;gt;https://&amp;lt;i&amp;gt;YourCompany&amp;lt;/i&amp;gt;.stridetestspace.com&amp;lt;/tt&amp;gt;)&lt;br /&gt;
** User-name&lt;br /&gt;
** User-password&lt;br /&gt;
&lt;br /&gt;
=== Desktop Setup ===&lt;br /&gt;
The training requires that you install the [[Desktop_Installation | Desktop Framework]] and that the [[STRIDE_Off-Target_Environment | Off-Target Environment]] is set up correctly and verified. &lt;br /&gt;
&lt;br /&gt;
For an overview of the installation steps please refer to [[Installation_Overview | this]] article.&lt;br /&gt;
&lt;br /&gt;
The following steps are required:&lt;br /&gt;
# Install your [[Desktop_Installation | desktop Framework package]]&lt;br /&gt;
# Read about the [[STRIDE_Off-Target_Environment | Off-Target Environment]]&lt;br /&gt;
# Install [[STRIDE_Off-Target_Environment#Host_Compiler | host C++ compiler]] for your desktop (if needed)&lt;br /&gt;
# Use the Off-Target SDK to [[Building_an_Off-Target_Test_App | build a Test App]]&lt;br /&gt;
# [[Building_an_Off-Target_Test_App#Diagnostics | Run STRIDE diagnostics]] with the Test App built&lt;br /&gt;
&lt;br /&gt;
=== Training Setup ===&lt;br /&gt;
The following source code can be found in the &#039;&#039;&#039;STRIDE_training_source.zip&#039;&#039;&#039; file:&lt;br /&gt;
&lt;br /&gt;
   software_under_test.c | h&lt;br /&gt;
   TestBasic.cpp | h&lt;br /&gt;
   TestParam.cpp | h&lt;br /&gt;
   TestFixture.cpp | h&lt;br /&gt;
   TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime.cpp | h&lt;br /&gt;
   TestFile.cpp | h&lt;br /&gt;
   TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039; is contained in the file &#039;&#039;&#039;software_under_test.c | h&#039;&#039;&#039;.  All of the public functions use &#039;&#039;&#039;sut_&#039;&#039;&#039; as a prefix. All training modules test against this file. Although the test examples are contained in C++ files, most of the test logic is written in standard C. &lt;br /&gt;
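For illustration, here is a hedged sketch of what a &#039;&#039;&#039;sut_&#039;&#039;&#039;-prefixed function such as &amp;lt;tt&amp;gt;sut_strcpy&amp;lt;/tt&amp;gt; might look like. This is a hypothetical sketch based only on the remoted signature shown later in this article; the actual implementation in &#039;&#039;&#039;software_under_test.c&#039;&#039;&#039; may differ.&lt;br /&gt;

```c
#include <stddef.h>

/* Hypothetical sketch only: the real software_under_test.c may differ.
 * Copies the NUL-terminated string `input` into the buffer `output`,
 * matching the remoted signature sut_strcpy(char const *, char *).
 * The caller must supply an output buffer large enough for the input. */
void sut_strcpy(char const *input, char *output)
{
    while ((*output++ = *input++) != '\0') {
        /* copy each character; the terminating NUL is copied last */
    }
}
```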
 &lt;br /&gt;
* Extract STRIDE_training_source.zip into the directory &amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\sample_src&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/sample_src&amp;lt;/tt&amp;gt; for Linux)&lt;br /&gt;
* Rebuild the TestApp following the instructions for [[Building_an_Off-Target_Test_App#Build_Steps | Building a TestApp]].&lt;br /&gt;
* List all [[Test_Units | Test Units]] within the generated database file using the following command:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
The following remoted Functions and Test Units should be displayed:&lt;br /&gt;
 &lt;br /&gt;
   Functions&lt;br /&gt;
     sut_strcpy(char const * input, char * output) : void&lt;br /&gt;
     strlen(char const * _Str) : size_t&lt;br /&gt;
   Test Units&lt;br /&gt;
     TestBasic()&lt;br /&gt;
     TestDouble_Reference()&lt;br /&gt;
     TestDouble_Definition()&lt;br /&gt;
     TestExpect_Seq()&lt;br /&gt;
     TestExpect_Data()&lt;br /&gt;
     TestExpect_Misc()&lt;br /&gt;
     TestFile()&lt;br /&gt;
     TestFixture()&lt;br /&gt;
     TestParam(int data1, int data2, char * szString)&lt;br /&gt;
     TestRuntime_Static()&lt;br /&gt;
     TestRuntime_Dynamic(int NumberOfTestCases)&lt;br /&gt;
&lt;br /&gt;
=== Run Training Tests ===&lt;br /&gt;
Here we will run all tests in the &amp;lt;tt&amp;gt;TestApp.sidb&amp;lt;/tt&amp;gt; database.&amp;lt;ref&amp;gt;Note that the S2 diagnostic tests are treated separately, and are not run unless the &amp;lt;tt&amp;gt;--diagnostics&amp;lt;/tt&amp;gt; option is specified to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* If you haven&#039;t done so already, [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using the SDK makefile&lt;br /&gt;
* Invoke TestApp found in the &#039;&#039;/out/bin&#039;&#039; directory&lt;br /&gt;
* Create an [[Stride_Runner#Options | option file]] (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) that includes the following content in &amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\myoptions.txt&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/myoptions.txt&amp;lt;/tt&amp;gt; for Linux)&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options ######&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;%STRIDE_DIR%\SDK\Windows\sample_src\TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options #####&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;$STRIDE_DIR/SDK/Posix/sample_src/TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all &lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
* If you set up an [[STRIDE_Runner#Environment_Variables | environment variable]] for the &#039;&#039;&#039;device&#039;&#039;&#039; option then it is not required in the option file. Note: Command line options override environment variables.  &lt;br /&gt;
* &#039;&#039;stride --help&#039;&#039; provides [[STRIDE_Runner#Options | options information]]&lt;br /&gt;
&lt;br /&gt;
If you haven&#039;t done so already, start &amp;lt;tt&amp;gt;TestApp&amp;lt;/tt&amp;gt; running in a separate console window.&lt;br /&gt;
&lt;br /&gt;
Now run stride as shown below and verify summary results (starting from the &amp;lt;tt&amp;gt;SDK\Windows&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;SDK/Posix&amp;lt;/tt&amp;gt; directory):&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The output should look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Interpreting Results ===&lt;br /&gt;
Open &#039;&#039;&#039;TestApp.xml&#039;&#039;&#039; in your browser; this file is created in the directory from which you ran &#039;&#039;&#039;stride&#039;&#039;&#039; (or at the path specified via the &amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt; command line option). If you were connected to the Internet when you ran the tests, the TestApp.xsl file is also generated in that directory. When you open TestApp.xml in a web browser, the XSL is automatically applied to render the report as HTML. If you use Google Chrome, please see [[Browser Compatibility]].&lt;br /&gt;
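As a rough sketch of the mechanism (assuming standard XSLT processing; the actual generated file header may differ), the browser applies the stylesheet because the report begins with an &amp;lt;tt&amp;gt;xml-stylesheet&amp;lt;/tt&amp;gt; processing instruction such as:&lt;br /&gt;

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch: the header of the actual TestApp.xml may differ. -->
<?xml-stylesheet type="text/xsl" href="TestApp.xsl"?>
```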
&lt;br /&gt;
If you&#039;re interested in the details of the tests, please see the test documentation contained in the test report.&lt;br /&gt;
&lt;br /&gt;
=== Test Space Access ===&lt;br /&gt;
&lt;br /&gt;
As part of the training, users implement exercises (test cases). As each exercise is completed, the results are expected to be uploaded to Test Space. Accessing Test Space (uploading, viewing, etc.) requires a user-name and password. Before working on a training module, please confirm that your user account has been set up by [[Test_Space_Setup | logging in]]. &lt;br /&gt;
&lt;br /&gt;
Test Space holds expected results in the form of [[Creating_And_Using_Baselines | baselines]], which are used to automatically verify whether the exercises have been implemented correctly (at least to some degree). For capturing test results, a &#039;&#039;&#039;Training&#039;&#039;&#039; project has been created with the following &#039;&#039;&#039;Spaces&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
   Sandbox     - Results for training setup&lt;br /&gt;
   TestBasic   - Basics training results: TestBasic.cpp | h&lt;br /&gt;
   TestParam   - Parameters training results: TestParam.cpp | h&lt;br /&gt;
   TestFixture - Fixturing training results: TestFixture.cpp | h&lt;br /&gt;
   TestExpect  - Expectations training results: TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime - Runtime API training results: TestRuntime.cpp | h&lt;br /&gt;
   TestFile    - File IO training results: TestFile.cpp | h&lt;br /&gt;
   TestDouble  - Doubling training results: TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To publish your results using the [[STRIDE_Runner | STRIDE Runner]] the following command-line options should be used:&lt;br /&gt;
&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --space &#039;&#039;MODULENAME&#039;&#039; &lt;br /&gt;
  --name &#039;&#039;YOURNAME&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
* Concerning the &#039;&#039;MODULENAME&#039;&#039; option: use the name that corresponds to the training module currently underway (e.g. TestBasic, TestParam, etc.)&lt;br /&gt;
* &#039;&#039;YOURNAME&#039;&#039; should be set to your name (i.e. JohnD); omit spaces from this string&lt;br /&gt;
* If you access the Internet via an HTTP proxy please read [[STRIDE_Runner#Using_a_Proxy | &#039;&#039;&#039;this article&#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
To make this easier, we recommend for now that you update your existing option file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) with the following:&lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
=== Publish Test Space Results ===&lt;br /&gt;
&lt;br /&gt;
To complete the setup, publish your results to Test Space. Please make sure to use &#039;&#039;&#039;Sandbox&#039;&#039;&#039; as the space to which you upload your results (see below).&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot; --space Sandbox --upload&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
  Uploading to test space...&lt;br /&gt;
&lt;br /&gt;
=== Confirming Setup ===&lt;br /&gt;
&lt;br /&gt;
After you have run and uploaded the results of the training setup, you should confirm the correctness of your work. The training setup results will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your setup:&lt;br /&gt;
# Navigate to the Sandbox result set in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns. If so, you have completed the setup and are ready to move on to the training modules.&lt;br /&gt;
&lt;br /&gt;
[[image:STRIDE_results_012813.JPG|none|600px| Test Space baseline example]]&lt;br /&gt;
&lt;br /&gt;
== Training ==&lt;br /&gt;
At this point you should be ready to start the actual training. There are &#039;&#039;&#039;7 separate training modules&#039;&#039;&#039; and we recommend the following order:&lt;br /&gt;
&lt;br /&gt;
===Introductory===&lt;br /&gt;
* [[Training_Basics | &#039;&#039;&#039;Basics&#039;&#039;&#039;]] - Covers basics of implementing and executing test cases&lt;br /&gt;
* [[Training_Parameters |&#039;&#039;&#039;Parameters&#039;&#039;&#039;]] - How to pass parameters to a test &lt;br /&gt;
* [[Training_Fixturing | &#039;&#039;&#039;Fixturing&#039;&#039;&#039;]] - Leveraging setup and teardown features&lt;br /&gt;
&lt;br /&gt;
===Advanced===&lt;br /&gt;
* [[Training_Expectations | &#039;&#039;&#039;Expectations&#039;&#039;&#039;]] - Validating code sequencing along with state data&lt;br /&gt;
* [[Training_Runtime_API | &#039;&#039;&#039;Runtime API&#039;&#039;&#039;]] - Using the runtime services to dynamically create test suites / cases&lt;br /&gt;
* [[Training_File_IO | &#039;&#039;&#039;File IO&#039;&#039;&#039;]] - Reading and writing files existing on the host&lt;br /&gt;
* [[Training_Doubling | &#039;&#039;&#039;Doubling&#039;&#039;&#039;]] - Replacing a dependency with a stub, fake, or mock&lt;br /&gt;
&lt;br /&gt;
== Training Confirmation ==&lt;br /&gt;
&lt;br /&gt;
As you run and upload each test unit containing your worked training exercises, you should confirm the correctness of your work.&lt;br /&gt;
&lt;br /&gt;
Correctly worked training test units will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your work:&lt;br /&gt;
# Navigate to your result set in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Building_an_Off-Target_Test_App&amp;diff=13910</id>
		<title>Building an Off-Target Test App</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Building_an_Off-Target_Test_App&amp;diff=13910"/>
		<updated>2013-02-01T23:26:56Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Samples (Optional Not for Training Exercises) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Prerequisite ==&lt;br /&gt;
This article guides you through building a test application (&#039;&#039;&#039;TestApp&#039;&#039;&#039;) for the purpose of running sample code using the [[STRIDE Off-Target Environment]]. &lt;br /&gt;
&lt;br /&gt;
It requires an installation of the STRIDE Framework desktop package. If not installed, please see [[Desktop Installation]] for more information. &lt;br /&gt;
&lt;br /&gt;
It also requires that your desktop contains one of the following &#039;&#039;&#039;compilers&#039;&#039;&#039;:&lt;br /&gt;
* For Windows Microsoft Visual Studio 2008 or later is required. If you don&#039;t already have Visual Studio, the free [http://en.wikipedia.org/wiki/Microsoft_Visual_Studio_Express Visual C++ Express] can be used (download [http://www.microsoft.com/express/download/#webInstall here]). &lt;br /&gt;
* For Linux the [http://en.wikipedia.org/wiki/GNU_Compiler_Collection GNU Compiler Collection] (included by default in almost all Linux distros) is required.&lt;br /&gt;
&lt;br /&gt;
== Building a TestApp ==&lt;br /&gt;
&lt;br /&gt;
=== SDK Makefile ===&lt;br /&gt;
The SDK Makefile is set up so that all &amp;lt;tt&amp;gt;.c&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files found in the directory &amp;lt;tt&amp;gt;SDK\Windows\sample_src&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;SDK/Posix/sample_src&amp;lt;/tt&amp;gt; for Linux) are included in the compile and link of the &#039;&#039;&#039;testapp&#039;&#039;&#039; target.&lt;br /&gt;
&lt;br /&gt;
Further--as a pre-compilation step--any &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files found in &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt; are submitted to the [[STRIDE Build Tools]]. This will result in &lt;br /&gt;
* the detection of [[Test_Unit_Pragmas| test pragmas]] used to declare Test Units in these &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files&lt;br /&gt;
* the detection of [[Scl_function | function pragmas]] used to declare remoting of functions also found in &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files&lt;br /&gt;
* the inclusion of metadata into the &amp;lt;tt&amp;gt;sidb&amp;lt;/tt&amp;gt; file created&lt;br /&gt;
* the generation of an [[Intercept Module]] required for executing tests&lt;br /&gt;
&lt;br /&gt;
=== Build Steps ===&lt;br /&gt;
To begin, be sure that TestApp is not running, then perform the following steps:&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;If you experience any build problem please make sure to read [[Troubleshooting Build Problems]] for possible resolution.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
====Linux====&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Build the test app&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd $STRIDE_DIR/SDK/Posix/src&lt;br /&gt;
make testapp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Note that the following artifacts are produced by the build:&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/out/bin/TestApp&amp;lt;/tt&amp;gt;&lt;br /&gt;
: the test application&lt;br /&gt;
;&amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;lt;/tt&amp;gt;&lt;br /&gt;
: the STRIDE interface database file which contains metadata describing the interfaces remoted by the test app (along with other data)&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Windows====&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;If using Microsoft Visual Studio, open a [http://msdn.microsoft.com/en-us/library/ms235639(v=vs.100).aspx Visual Studio Command Prompt] to ensure that the compiler and linker are on your PATH.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Build the test app using the supplied GNU make. (You will get Makefile errors if you use the default make.)&lt;br /&gt;
&amp;lt;source lang=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
cd %STRIDE_DIR%\SDK\Windows\src&lt;br /&gt;
..\bin\make testapp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Note that the following artifacts are produced by the build:&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\out\bin\TestApp.exe&amp;lt;/tt&amp;gt;&lt;br /&gt;
: the test application&lt;br /&gt;
;&amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;lt;/tt&amp;gt;&lt;br /&gt;
: the STRIDE interface database file which contains metadata describing the interfaces remoted by the test app (along with other data)&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;If you prefer to use Visual Studio to build/debug/run your testapp, we provide instructions [[STRIDE_Extensions_for_Visual_Studio|here]] about how to accomplish this.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Diagnostics ==&lt;br /&gt;
The test app we just built does not contain any user tests. At this point it provides a starting point for the tests that we will subsequently add.&lt;br /&gt;
&lt;br /&gt;
However, a set of diagnostic tests that verify operation of the STRIDE runtime itself is always built into the generated TestApp executable. We recommend that you run them now by doing the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Invoke the TestApp. In order to see TestApp&#039;s output, we recommend that you manually open a new console (or Windows equivalent): &lt;br /&gt;
;Linux&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$STRIDE_DIR/SDK/Posix/out/bin/TestApp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
;Windows&lt;br /&gt;
&amp;lt;source lang=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
%STRIDE_DIR%\SDK\Windows\out\bin\TestApp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
(...or launch from the file explorer)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Note TestApp&#039;s output upon startup.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
STRIDE Test Console Application.&lt;br /&gt;
Enter &#039;Ctrl+C&#039; to Quit.&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
Listening on TCP port 8000&lt;br /&gt;
starting up...&lt;br /&gt;
&amp;quot;_srThread&amp;quot; thread started.&lt;br /&gt;
&amp;quot;stride&amp;quot; thread started.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;From a second console window, invoke &amp;lt;tt&amp;gt;[[STRIDE_Runner|stride]]&amp;lt;/tt&amp;gt; as follows, to verify connectivity with the test app and STRIDE runtime operation:&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
;Linux&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
stride --diagnostics --database=&amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot; --device=TCP:localhost:8000 --run=&amp;quot;*&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
;Windows&lt;br /&gt;
&amp;lt;source lang=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
stride --diagnostics --database=&amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot; --device=TCP:localhost:8000 --run=&amp;quot;*&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As the tests run you will see output in both the TestApp (target) and stride (host) console windows.&lt;br /&gt;
&lt;br /&gt;
The host console window output is shown here:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
  runtime version: 4.3.0x &lt;br /&gt;
Executing diagnostics...&lt;br /&gt;
  test unit &amp;quot;Link&amp;quot;&lt;br /&gt;
    Loopback ............&lt;br /&gt;
    Payload Fragmentation&lt;br /&gt;
    Stub-Proxy Deadlock&lt;br /&gt;
    Target Characteristics&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;Stat&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;Time&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 8 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Note the Summary results shown in the host output; all in-use tests should pass.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;To exit TestApp, give the target window focus and enter Ctrl-C (or &#039;q&#039; under Windows).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Samples (Optional, Not for Training Exercises) ==&lt;br /&gt;
&lt;br /&gt;
The initial desktop installation of STRIDE does not set up any source code (with the exception of a set of system diagnostic tests) for automatic inclusion in a test application. The [[Desktop Installation | desktop framework]] distribution, however, comes with a set of [[Samples_Overview | Samples]]. &lt;br /&gt;
&lt;br /&gt;
To demonstrate how to build a sample, we will add the [[Test Intro Sample]], which provides an overview of STRIDE testing techniques. For an overview from a C++ perspective, please see [[Test Intro Cpp Sample]]. &lt;br /&gt;
&lt;br /&gt;
The following steps are applicable for &#039;&#039;&#039;all&#039;&#039;&#039; [[Samples_Overview | Samples]].&lt;br /&gt;
&lt;br /&gt;
To begin, be sure that TestApp is not running, then copy the &amp;lt;tt&amp;gt;.c&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files found in &amp;lt;tt&amp;gt;Samples/test_in_c_cpp/TestIntro&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;SDK/Windows/sample_src&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;SDK/Posix/sample_src&amp;lt;/tt&amp;gt; for Linux).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039; Only files in the sample_src directory will be picked up by the makefile. Files in any subdirectories will be ignored.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Once the files have been copied to &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt;, simply build TestApp as described above. Note that if you previously copied other sample source into this directory, decide whether to remove those files first: any source in this directory at build time will be included in the test app.&lt;br /&gt;
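The copy step above can be sketched as follows for Linux. This is illustrative only: it runs in a temporary directory with stand-in file names, since the real paths depend on where STRIDE is installed on your machine.

```shell
# Illustrative sketch of the copy step, run in a temp dir so it works
# anywhere. In a real install, SRC would be
# $STRIDE_DIR/Samples/test_in_c_cpp/TestIntro and DST would be
# $STRIDE_DIR/SDK/Posix/sample_src (or SDK/Windows/sample_src).
TMP=$(mktemp -d)
SRC="$TMP/TestIntro"
DST="$TMP/sample_src"
mkdir -p "$SRC" "$DST"
touch "$SRC/TestIntro.c" "$SRC/TestIntro.h"   # stand-ins for the sample files
cp "$SRC"/*.c "$SRC"/*.h "$DST"/
ls "$DST"
```

Because the makefile ignores subdirectories, the files must land directly in sample_src, not in a folder beneath it.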
&lt;br /&gt;
=== Running the Tests ===&lt;br /&gt;
Here we will run all tests in the &amp;lt;tt&amp;gt;TestApp.sidb&amp;lt;/tt&amp;gt; database.&amp;lt;ref&amp;gt;Note that the S2 diagnostic tests are treated separately, and are not run unless the &amp;lt;tt&amp;gt;--diagnostics&amp;lt;/tt&amp;gt; option is specified to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Run the TestApp built above in a console window.&lt;br /&gt;
# Invoke &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt; in a separate console window (not the one running TestApp), as shown below, and verify the Summary results.&lt;br /&gt;
&lt;br /&gt;
Here are the command line parameters that we will submit to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--database ./out/TestApp.sidb &lt;br /&gt;
--device TCP:localhost:8000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
* If you set up an [[STRIDE_Runner#Environment_Variables | environment variable]] for the &#039;&#039;&#039;device&#039;&#039;&#039; option, then it is not required in the option file. Note: command line options override environment variables.&lt;br /&gt;
* You may want to create a text file named &#039;&#039;RunTestIntro.txt&#039;&#039; in the &amp;lt;tt&amp;gt;SDK\Windows&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;SDK/Posix&amp;lt;/tt&amp;gt; for Linux) directory as an option file to submit to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;.&lt;br /&gt;
* &#039;&#039;stride --help&#039;&#039; provides [[STRIDE_Runner#Options | options information]]&lt;br /&gt;
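If you take the option-file approach from the list above, creating the file from a shell might look like this. The file name &#039;&#039;RunTestIntro.txt&#039;&#039; and the option values are the examples used on this page, not required names.

```shell
# Sketch only: create the example option file described above.
# RunTestIntro.txt and the option values are this page's examples.
cat > RunTestIntro.txt <<'EOF'
--database ./out/TestApp.sidb
--device TCP:localhost:8000
EOF
cat RunTestIntro.txt
```

The file is then passed to stride with its --options_file option.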
&lt;br /&gt;
If you haven&#039;t done so already, start &amp;lt;tt&amp;gt;TestApp&amp;lt;/tt&amp;gt; running in a separate console window.&lt;br /&gt;
&lt;br /&gt;
Now run stride as follows (starting from the &amp;lt;tt&amp;gt;SDK\Windows&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;SDK/Posix&amp;lt;/tt&amp;gt; directory):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
stride --options_file RunTestIntro.txt --run=&amp;quot;*&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output should look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  test unit &amp;quot;s2_testintro_flist&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;s2_testintro_cclass&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;s2_testintro_testdoubles&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;s2_testintro_testpoints&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;s2_testintro_parameters&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 11 passed, 2 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Interpreting Results ===&lt;br /&gt;
Open &#039;&#039;&#039;TestApp.xml&#039;&#039;&#039; in your browser; this file is created in the directory from which you ran &#039;&#039;&#039;stride&#039;&#039;&#039; (or in the directory specified via the &amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt; command line option). If you were connected to the Internet when you ran the tests, the TestApp.xsl file is also generated in that directory. When you open TestApp.xml in a web browser, the XSL is applied automatically to render the report as HTML. If you use Google Chrome, please see [[Browser Compatibility]].&lt;br /&gt;
&lt;br /&gt;
If you&#039;re interested in the details of the tests, please refer to the test documentation contained in the test report.&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Installation]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Desktop_Installation&amp;diff=13909</id>
		<title>Desktop Installation</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Desktop_Installation&amp;diff=13909"/>
		<updated>2013-02-01T23:25:40Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Perl Installation (Optional Not for Training Exercises) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Installation Packages ==&lt;br /&gt;
Files are installed by unzipping the provided package to your PC. Packages are available targeting the following operating systems (your version number may differ from that shown):&lt;br /&gt;
;Windows (x86)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-windows_4.x.yy.zip&amp;lt;/tt&amp;gt;&lt;br /&gt;
;Linux (x86)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-linux_4.x.yy.tgz&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please see the appropriate installation instructions below.&lt;br /&gt;
&lt;br /&gt;
== Windows Installation ==&lt;br /&gt;
&lt;br /&gt;
=== Unpacking ===&lt;br /&gt;
The following installation example assumes that the installation package is located in your root directory and that the directory &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt; exists. You can choose to install to a different location (all instructions below assume you are installing into &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt;). &lt;br /&gt;
&lt;br /&gt;
The example uses the open source [http://www.7-zip.org/ 7-Zip] utility to unzip the archive.&lt;br /&gt;
&lt;br /&gt;
 cd \stride&lt;br /&gt;
 &amp;quot;\Program Files\7-Zip\7z&amp;quot; x ..\STRIDE_framework-windows_4.x.yy.zip&lt;br /&gt;
&lt;br /&gt;
Once unzipped, files will have been installed under the &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
=== Verify Environment Variables ===&lt;br /&gt;
&lt;br /&gt;
==== Updated PATH ====&lt;br /&gt;
As a final step, you will need to update your &amp;lt;tt&amp;gt;[http://en.wikipedia.org/wiki/Path_(variable) PATH]&amp;lt;/tt&amp;gt; environment variable to include &amp;lt;tt&amp;gt;\stride\bin&amp;lt;/tt&amp;gt;. &lt;br /&gt;
For instructions on modifying it, please see [http://support.microsoft.com/kb/310519 http://support.microsoft.com/kb/310519].&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;Make sure to insert &#039;&#039;&#039;no spaces&#039;&#039;&#039; before and after the semicolon separators (;).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Create/Update STRIDE_DIR====&lt;br /&gt;
&lt;br /&gt;
Verify that the  &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable exists and is set to the root installation directory (&amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt;). If this environment variable does not yet exist, you should create it as a user environment variable.&lt;br /&gt;
&lt;br /&gt;
To confirm installation and display &#039;&#039;help&#039;&#039; run the following command in a console window:&lt;br /&gt;
&lt;br /&gt;
 stride -h&lt;br /&gt;
&lt;br /&gt;
=== Uninstalling ===&lt;br /&gt;
To uninstall STRIDE simply:&lt;br /&gt;
* Remove any reference to &amp;lt;tt&amp;gt;\stride\bin&amp;lt;/tt&amp;gt; in your &amp;lt;tt&amp;gt;PATH&amp;lt;/tt&amp;gt; environment variable. &lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable.&lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
== Linux Installation ==&lt;br /&gt;
&lt;br /&gt;
=== Unpacking ===&lt;br /&gt;
The following installation example assumes that the installation package is located in your home directory and that the directory &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt; exists. You can choose to install to a different location (all instructions below assume you are installing into &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt;). &lt;br /&gt;
&lt;br /&gt;
 cd ~/stride&lt;br /&gt;
 tar -zxvf ../STRIDE_framework-linux_4.x.yy.tgz&lt;br /&gt;
&lt;br /&gt;
Once unzipped, files will have been installed under the &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
=== Verify Environment Variables ===&lt;br /&gt;
&lt;br /&gt;
==== Updated PATH ====&lt;br /&gt;
As a final step, you will need to update your &amp;lt;tt&amp;gt;[http://en.wikipedia.org/wiki/Path_(variable) PATH]&amp;lt;/tt&amp;gt; environment variable to include &amp;lt;tt&amp;gt;~/stride/bin&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
If you use the bash shell, enter the following at a command prompt, or to automatically set at each login, add to your &amp;lt;tt&amp;gt;.bashrc&amp;lt;/tt&amp;gt;:&lt;br /&gt;
 export PATH=$PATH:~/stride/bin&lt;br /&gt;
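To confirm the change took effect in the current shell, something like the following can be used (&amp;lt;tt&amp;gt;~/stride/bin&amp;lt;/tt&amp;gt; is the example install location from this page):

```shell
# Append the STRIDE bin directory to PATH for this shell session,
# then list PATH entries and confirm the new one is present.
# ~/stride/bin is the example install location used on this page.
export PATH="$PATH:$HOME/stride/bin"
echo "$PATH" | tr ':' '\n' | grep stride
```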
&lt;br /&gt;
For other shells, and more information, please see the following articles:&lt;br /&gt;
* [http://www.linuxheadquarters.com/howto/basic/path.shtml http://www.linuxheadquarters.com/howto/basic/path.shtml].&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Environment_variable#UNIX http://en.wikipedia.org/wiki/Environment_variable]&lt;br /&gt;
&lt;br /&gt;
==== Create/Update STRIDE_DIR====&lt;br /&gt;
Verify that the &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable exists and is set to the root installation directory (&amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt;). If this environment variable does not yet exist, set it automatically at each login by adding the following to your &amp;lt;tt&amp;gt;.bashrc&amp;lt;/tt&amp;gt;:&lt;br /&gt;
 export STRIDE_DIR=~/stride&lt;br /&gt;
&lt;br /&gt;
To confirm installation and display &#039;&#039;help&#039;&#039; run the following command in a console window:&lt;br /&gt;
&lt;br /&gt;
 stride -h&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;In a 64-bit environment the above may fail with the following error: &amp;lt;code&amp;gt;&amp;quot;stride: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory.&amp;quot;&amp;lt;/code&amp;gt; To resolve this issue install the appropriate 32-bit compatibility libraries for your Linux distribution:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Debian / Ubuntu&lt;br /&gt;
 sudo apt-get install ia32-libs&lt;br /&gt;
* Fedora / CentOS / RHEL&lt;br /&gt;
 sudo yum -y install glibc.i686 libstdc++.i686&lt;br /&gt;
&lt;br /&gt;
=== Uninstalling ===&lt;br /&gt;
To uninstall STRIDE simply:&lt;br /&gt;
* Remove any reference to &amp;lt;tt&amp;gt;~/stride/bin&amp;lt;/tt&amp;gt; in your &amp;lt;tt&amp;gt;PATH&amp;lt;/tt&amp;gt; environment variable. &lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable.&lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
== Directories and Files ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s not necessary to understand the workings of the STRIDE framework to perform evaluation or training. The desktop package contains an [[STRIDE Off-Target Environment]] that utilizes an SDK set up with appropriate options and settings to enable &amp;quot;out of the box&amp;quot; functionality. A quick orientation to the Off-Target Environment&#039;s directories and files is given below.&lt;br /&gt;
&lt;br /&gt;
If you are interested in the details, consult the articles [[Posix SDK]] and [[Windows SDK]].&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;bin&amp;lt;/tt&amp;gt;===&lt;br /&gt;
This directory contains the [[Build Tools|STRIDE Build Tools]] and the [[STRIDE Runner]].&lt;br /&gt;
&lt;br /&gt;
The build tools are invoked early on in the target software build process to generate special STRIDE artifacts that are used in subsequent build steps and later when running tests against the target. In an Off-Target Environment installation, these files are needed on the host computer since this is where we are building the target application. In a production environment, these files are needed only on the computer that performs the target software build.&lt;br /&gt;
&lt;br /&gt;
The [[STRIDE Runner]] is the program you use to run tests from the host.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;lib&amp;lt;/tt&amp;gt;===&lt;br /&gt;
This directory contains a set of STRIDE-specific core scripting libraries along with prebuilt binaries intended to be used for [[Test Modules Overview|testing in scripts]].&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;Samples&amp;lt;/tt&amp;gt;===&lt;br /&gt;
The Samples directory contains a number of sub-directories, each containing the source for a [[Samples|sample test]].&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;SDK&amp;lt;/tt&amp;gt;===&lt;br /&gt;
This directory contains the sub-directories &amp;lt;tt&amp;gt;GRS&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;Runtime&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;SLAP&amp;lt;/tt&amp;gt;, which contain source code that comprises the STRIDE Runtime. These sources are built into the STRIDE Runtime library as a dependency of the &amp;lt;tt&amp;gt;testapp&amp;lt;/tt&amp;gt; target. (See &amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt; directory below.) &lt;br /&gt;
&lt;br /&gt;
In addition, there is a directory named either &amp;lt;tt&amp;gt;Posix&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;Windows&amp;lt;/tt&amp;gt;, depending on your host operating system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;tt&amp;gt;[[Windows_SDK| SDK\Windows]](&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;[[Posix_SDK| SDK/Posix]]&amp;lt;/tt&amp;gt; )&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This directory (&amp;lt;tt&amp;gt;Windows&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;Posix&amp;lt;/tt&amp;gt;) contains the following directories:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;tt&amp;gt;bin&amp;lt;/tt&amp;gt; &#039;&#039;(Windows only)&#039;&#039;&lt;br /&gt;
: Contains GNU make program files (GNU make is already present on Linux systems)&lt;br /&gt;
*&amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt;&lt;br /&gt;
: This directory is originally empty. The sandbox is set up so that any files in this directory are included in the TestApp build.&lt;br /&gt;
*&amp;lt;tt&amp;gt;settings&amp;lt;/tt&amp;gt;&lt;br /&gt;
: This directory contains a set of &amp;lt;tt&amp;gt;stride.XXX.s2scompile&amp;lt;/tt&amp;gt; files, where &amp;lt;tt&amp;gt;XXX&amp;lt;/tt&amp;gt; corresponds to the target CPU architecture (e.g., X86, ARM). These files, used by the [[s2scompile|STRIDE Compiler]], specify target CPU characteristics (endianness, data sizes, and alignments). On Windows, this directory also contains a set of files for [[STRIDE_Extensions_for_Visual_Studio|use in building target apps with Visual Studio]].&lt;br /&gt;
*&amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt;&lt;br /&gt;
: This directory contains the Makefile used to produce the sandbox TestApp as well as the TestApp sources.&lt;br /&gt;
*&#039;&#039;&amp;lt;tt&amp;gt;out&amp;lt;/tt&amp;gt;&#039;&#039;&lt;br /&gt;
: This directory (and several sub-directories) is created as part of the make process. All of the make targets are written to this directory and its sub-directories.&lt;br /&gt;
&lt;br /&gt;
== Perl Installation (Optional, Not for Training Exercises) ==&lt;br /&gt;
If you intend to use [[Test Modules Overview|STRIDE Script modules]] for testing in script, you will need a recent version of Perl (x86 with threads support) installed. This is &#039;&#039;&#039;NOT&#039;&#039;&#039; required if only tests in C/C++ will be run or to complete the training.&lt;br /&gt;
&lt;br /&gt;
As of this writing, we support only the 32-bit versions 5.8.9, 5.10.x, 5.12.x and 5.14.x of Perl. &lt;br /&gt;
&lt;br /&gt;
=== Windows === &lt;br /&gt;
You must use one of the standard 32-bit Perl distributions from [http://www.activestate.com/activeperl ActiveState].&lt;br /&gt;
&lt;br /&gt;
The following additional (non-standard) Perl packages are also required for full functionality of STRIDE tests in perl:&lt;br /&gt;
&lt;br /&gt;
* [http://search.cpan.org/~smueller/Class-ISA-0.36/lib/Class/ISA.pm Class::ISA]&lt;br /&gt;
* [http://search.cpan.org/~andrewf/Pod-POM-0.27/lib/Pod/POM.pm Pod::POM]&lt;br /&gt;
* [http://search.cpan.org/~andk/Devel-Symdump-2.08/lib/Devel/Symdump.pm Devel::Symdump]&lt;br /&gt;
&lt;br /&gt;
You can easily install these packages using the [http://docs.activestate.com/activeperl/5.10/faq/ActivePerl-faq2.html ppm tool]. If you access the Internet via a proxy, make sure to read [http://docs.activestate.com/activeperl/5.10/faq/ActivePerl-faq2.html#ppm_and_proxies this]. Simple command-line installation of PACKAGE_NAME (the package to install) typically requires just:&lt;br /&gt;
&lt;br /&gt;
 ppm install PACKAGE_NAME&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
We recommend using the standard 32-bit Perl distribution that comes with your Linux distribution. If you need to build Perl from source, make sure to configure &amp;quot;shared library&amp;quot; (&amp;lt;tt&amp;gt;-Duseshrplib&amp;lt;/tt&amp;gt;), &amp;quot;thread support&amp;quot; (&amp;lt;tt&amp;gt;-Duseithreads&amp;lt;/tt&amp;gt;) and no &amp;quot;64-bit support&amp;quot; (&amp;lt;tt&amp;gt;-Uuse64bitint -Uuse64bitall&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The following additional (non-standard) Perl packages are also required for full functionality of STRIDE tests in perl:&lt;br /&gt;
&lt;br /&gt;
* [http://search.cpan.org/~ingy/YAML-LibYAML-0.38/lib/YAML/XS.pm YAML::XS]&lt;br /&gt;
* [http://search.cpan.org/~smueller/Class-ISA-0.36/lib/Class/ISA.pm Class::ISA]&lt;br /&gt;
* [http://search.cpan.org/~andrewf/Pod-POM-0.27/lib/Pod/POM.pm Pod::POM]&lt;br /&gt;
* [http://search.cpan.org/~andk/Devel-Symdump-2.08/lib/Devel/Symdump.pm Devel::Symdump]&lt;br /&gt;
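Before installing, you may want to check which of these modules are already available. A hedged sketch (a module that cannot be loaded, or a missing perl, simply reports &amp;quot;missing&amp;quot;):

```shell
# Check which of the modules listed above are already installed.
# Anything that cannot be loaded (including when perl itself is
# absent) reports "missing" rather than failing the script.
status=""
for m in YAML::XS Class::ISA Pod::POM Devel::Symdump; do
  if perl -M"$m" -e1 2>/dev/null; then
    status="$status $m=ok"
  else
    status="$status $m=missing"
  fi
done
echo "$status"
```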
&lt;br /&gt;
If your Perl is installed in a system directory (&amp;lt;tt&amp;gt;/usr/bin/perl&amp;lt;/tt&amp;gt;, for instance), you will need root access to install shared modules. The simplest method for installing packages is via the [http://www.perl.com/doc/manual/html/lib/CPAN.html CPAN shell]. If you access the Internet via a proxy, make sure to set the appropriate [http://search.cpan.org/dist/CPAN/lib/CPAN.pm#Config_Variables CPAN config variables]. To start the shell in interactive mode:&lt;br /&gt;
&lt;br /&gt;
 sudo perl -MCPAN -eshell&lt;br /&gt;
&lt;br /&gt;
Once in the shell, search for and install the latest stable version of PACKAGE_NAME (the package to install):&lt;br /&gt;
&lt;br /&gt;
 install PACKAGE_NAME&lt;br /&gt;
&lt;br /&gt;
The STRIDE Perl packages also need to load your system&#039;s &#039;&#039;&#039;libperl.so&#039;&#039;&#039; (shared object file) at runtime. Depending on your system, this file should be loadable from a Perl CORE directory or from one of the shared system directories. If you do &#039;&#039;&#039;NOT&#039;&#039;&#039; have this shared library on your system, you might need to install a &#039;&#039;libperl-dev&#039;&#039;, &#039;&#039;perl-devel&#039;&#039; or &#039;&#039;perl-libs&#039;&#039; package to get it. Here is how to do that on some Linux distributions:&lt;br /&gt;
&lt;br /&gt;
* Debian / Ubuntu&lt;br /&gt;
 sudo apt-get install libperl-dev&lt;br /&gt;
* Fedora / CentOS / RHEL&lt;br /&gt;
 sudo yum -y install perl-devel&lt;br /&gt;
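One way to check whether libperl is already present is to ask the dynamic linker cache. This is a sketch under stated assumptions: the location of &amp;lt;tt&amp;gt;ldconfig&amp;lt;/tt&amp;gt; and the cache contents vary by distribution.

```shell
# Ask the dynamic linker cache whether a libperl shared library is
# registered. ldconfig may live in /sbin; both locations are tried,
# and a missing cache is reported rather than treated as an error.
found=$( (ldconfig -p 2>/dev/null || /sbin/ldconfig -p 2>/dev/null) | grep libperl || true )
if [ -n "$found" ]; then
  msg="libperl: found"
else
  msg="libperl: not found in ldconfig cache"
fi
echo "$msg"
```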
&lt;br /&gt;
=== Validation ===&lt;br /&gt;
Once you have installed Perl, we recommend running the following command in a console window:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
stride --device NULL  --diagnostics Perl --output PerlCheck&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If everything was properly set up, you should get the following output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Executing diagnostics...&lt;br /&gt;
  script &amp;quot;diagnostics.pl&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  ---------------------------------------------------------------------&lt;br /&gt;
  Summary: 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition, a report file named &amp;lt;tt&amp;gt;PerlCheck.xml&amp;lt;/tt&amp;gt; will be created in the current directory. If you are interested in the details, you can open that report file in a browser of your choice.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Installation]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Desktop_Installation&amp;diff=13908</id>
		<title>Desktop Installation</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Desktop_Installation&amp;diff=13908"/>
		<updated>2013-02-01T23:25:03Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Perl Installation (Optional) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Installation Packages ==&lt;br /&gt;
Files are installed by unzipping the provided package to your PC. Packages are available targeting the following operating systems (your version number may differ from that shown):&lt;br /&gt;
;Windows (x86)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-windows_4.x.yy.zip&amp;lt;/tt&amp;gt;&lt;br /&gt;
;Linux (x86)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-linux_4.x.yy.tgz&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please see the appropriate installation instructions below.&lt;br /&gt;
&lt;br /&gt;
== Windows Installation ==&lt;br /&gt;
&lt;br /&gt;
=== Unpacking ===&lt;br /&gt;
The following installation example assumes that the installation package is located in your root directory and that the directory &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt; exists. You can choose to install to a different location (all instructions below assume you are installing into &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt;). &lt;br /&gt;
&lt;br /&gt;
The example uses the open source [http://www.7-zip.org/ 7-Zip] utility to unzip the archive.&lt;br /&gt;
&lt;br /&gt;
 cd \stride&lt;br /&gt;
 &amp;quot;\Program Files\7-Zip\7z&amp;quot; x ..\STRIDE_framework-windows_4.x.yy.zip&lt;br /&gt;
&lt;br /&gt;
Once unzipped, files will have been installed under the &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
=== Verify Environment Variables ===&lt;br /&gt;
&lt;br /&gt;
==== Updated PATH ====&lt;br /&gt;
As a final step, you will need to update your &amp;lt;tt&amp;gt;[http://en.wikipedia.org/wiki/Path_(variable) PATH]&amp;lt;/tt&amp;gt; environment variable to include &amp;lt;tt&amp;gt;\stride\bin&amp;lt;/tt&amp;gt;. &lt;br /&gt;
For instructions on modifying it, please see [http://support.microsoft.com/kb/310519 http://support.microsoft.com/kb/310519].&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;Make sure to insert &#039;&#039;&#039;no spaces&#039;&#039;&#039; before and after the semicolon separators (;).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Create/Update STRIDE_DIR====&lt;br /&gt;
&lt;br /&gt;
Verify that the  &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable exists and is set to the root installation directory (&amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt;). If this environment variable does not yet exist, you should create it as a user environment variable.&lt;br /&gt;
&lt;br /&gt;
To confirm installation and display &#039;&#039;help&#039;&#039; run the following command in a console window:&lt;br /&gt;
&lt;br /&gt;
 stride -h&lt;br /&gt;
&lt;br /&gt;
=== Uninstalling ===&lt;br /&gt;
To uninstall STRIDE simply:&lt;br /&gt;
* Remove any reference to &amp;lt;tt&amp;gt;\stride\bin&amp;lt;/tt&amp;gt; in your &amp;lt;tt&amp;gt;PATH&amp;lt;/tt&amp;gt; environment variable. &lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable.&lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
== Linux Installation ==&lt;br /&gt;
&lt;br /&gt;
=== Unpacking ===&lt;br /&gt;
The following installation example assumes that the installation package is located in your home directory and that the directory &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt; exists. You can choose to install to a different location (all instructions below assume you are installing into &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt;). &lt;br /&gt;
&lt;br /&gt;
 cd ~/stride&lt;br /&gt;
 tar -zxvf ../STRIDE_framework-linux_4.x.yy.tgz&lt;br /&gt;
&lt;br /&gt;
Once unzipped, files will have been installed under the &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
=== Verify Environment Variables ===&lt;br /&gt;
&lt;br /&gt;
==== Updated PATH ====&lt;br /&gt;
As a final step, you will need to update your &amp;lt;tt&amp;gt;[http://en.wikipedia.org/wiki/Path_(variable) PATH]&amp;lt;/tt&amp;gt; environment variable to include &amp;lt;tt&amp;gt;~/stride/bin&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
If you use the bash shell, enter the following at a command prompt, or to automatically set at each login, add to your &amp;lt;tt&amp;gt;.bashrc&amp;lt;/tt&amp;gt;:&lt;br /&gt;
 export PATH=$PATH:~/stride/bin&lt;br /&gt;
&lt;br /&gt;
For other shells, and more information, please see the following articles:&lt;br /&gt;
* [http://www.linuxheadquarters.com/howto/basic/path.shtml http://www.linuxheadquarters.com/howto/basic/path.shtml].&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Environment_variable#UNIX http://en.wikipedia.org/wiki/Environment_variable]&lt;br /&gt;
&lt;br /&gt;
==== Create/Update STRIDE_DIR====&lt;br /&gt;
Verify that the &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable exists and is set to the root installation directory (&amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt;). If this environment variable does not yet exist, set it automatically at each login by adding the following to your &amp;lt;tt&amp;gt;.bashrc&amp;lt;/tt&amp;gt;:&lt;br /&gt;
 export STRIDE_DIR=~/stride&lt;br /&gt;
&lt;br /&gt;
To confirm installation and display &#039;&#039;help&#039;&#039; run the following command in a console window:&lt;br /&gt;
&lt;br /&gt;
 stride -h&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;In a 64-bit environment the above may fail with the following error: &amp;lt;code&amp;gt;&amp;quot;stride: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory.&amp;quot;&amp;lt;/code&amp;gt; To resolve this issue install the appropriate 32-bit compatibility libraries for your Linux distribution:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Debian / Ubuntu&lt;br /&gt;
 sudo apt-get install ia32-libs&lt;br /&gt;
* Fedora / CentOS / RHEL&lt;br /&gt;
 sudo yum -y install glibc.i686 libstdc++.i686&lt;br /&gt;
&lt;br /&gt;
=== Uninstalling ===&lt;br /&gt;
To uninstall STRIDE simply:&lt;br /&gt;
* Remove any reference to &amp;lt;tt&amp;gt;~/stride/bin&amp;lt;/tt&amp;gt; in your &amp;lt;tt&amp;gt;PATH&amp;lt;/tt&amp;gt; environment variable. &lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable.&lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
== Directories and Files ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s not necessary to understand the workings of the STRIDE framework to perform evaluation or training. The desktop package contains an [[STRIDE Off-Target Environment]] that utilizes an SDK set up with appropriate options and settings to enable &amp;quot;out of the box&amp;quot; functionality. A quick orientation to the Off-Target Environment&#039;s directories and files is given below.&lt;br /&gt;
&lt;br /&gt;
If you are interested in the details, consult the articles [[Posix SDK]] and [[Windows SDK]].&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;bin&amp;lt;/tt&amp;gt;===&lt;br /&gt;
This directory contains the [[Build Tools|STRIDE Build Tools]] and the [[STRIDE Runner]].&lt;br /&gt;
&lt;br /&gt;
The build tools are invoked early on in the target software build process to generate special STRIDE artifacts that are used in subsequent build steps and later when running tests against the target. In an Off-Target Environment installation, these files are needed on the host computer since this is where we are building the target application. In a production environment, these files are needed only on the computer that performs the target software build.&lt;br /&gt;
&lt;br /&gt;
The [[STRIDE Runner]] is the program you use to run tests from the host.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;lib&amp;lt;/tt&amp;gt;===&lt;br /&gt;
This directory contains a set of STRIDE-specific core scripting libraries along with prebuilt binaries intended to be used for [[Test Modules Overview|testing in scripts]].&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;Samples&amp;lt;/tt&amp;gt;===&lt;br /&gt;
The Samples directory contains a number of sub-directories, each containing the source for a [[Samples|sample test]].&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;SDK&amp;lt;/tt&amp;gt;===&lt;br /&gt;
This directory contains the sub-directories &amp;lt;tt&amp;gt;GRS&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;Runtime&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;SLAP&amp;lt;/tt&amp;gt;, which contain source code that comprises the STRIDE Runtime. These sources are built into the STRIDE Runtime library as a dependency of the &amp;lt;tt&amp;gt;testapp&amp;lt;/tt&amp;gt; target. (See &amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt; directory below.) &lt;br /&gt;
&lt;br /&gt;
In addition, there is a directory named either &amp;lt;tt&amp;gt;Posix&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;Windows&amp;lt;/tt&amp;gt;, depending on your host operating system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;tt&amp;gt;[[Windows_SDK| SDK\Windows]]&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;[[Posix_SDK| SDK/Posix]]&amp;lt;/tt&amp;gt;)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This directory (&amp;lt;tt&amp;gt;Windows&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;Posix&amp;lt;/tt&amp;gt;) contains the following directories:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;tt&amp;gt;bin&amp;lt;/tt&amp;gt; &#039;&#039;(Windows only)&#039;&#039;&lt;br /&gt;
: Contains GNU make program files (GNU make is already present on Linux systems)&lt;br /&gt;
*&amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt;&lt;br /&gt;
: This directory is originally empty. The sandbox is set up so that any files in this directory are included in the TestApp build.&lt;br /&gt;
*&amp;lt;tt&amp;gt;settings&amp;lt;/tt&amp;gt;&lt;br /&gt;
: This directory contains a set of &amp;lt;tt&amp;gt;stride.XXX.s2scompile&amp;lt;/tt&amp;gt; files, where &amp;lt;tt&amp;gt;XXX&amp;lt;/tt&amp;gt; corresponds to the target CPU architecture (e.g., X86, ARM). These files, used by the [[s2scompile|STRIDE Compiler]], specify target CPU characteristics (endianness, data sizes and alignments). On Windows, this directory also contains a set of files for [[STRIDE_Extensions_for_Visual_Studio|use in building target apps with Visual Studio]].&lt;br /&gt;
*&amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt;&lt;br /&gt;
: This directory contains the Makefile used to produce the sandbox TestApp as well as the TestApp sources.&lt;br /&gt;
*&#039;&#039;&amp;lt;tt&amp;gt;out&amp;lt;/tt&amp;gt;&#039;&#039;&lt;br /&gt;
: This directory (and several sub-directories) is created as part of the make process. All of the make targets are written to this directory and its sub-directories.&lt;br /&gt;
&lt;br /&gt;
== Perl Installation (Optional; Not for Training Exercises) ==&lt;br /&gt;
If you intend to use [[Test Modules Overview|STRIDE Script modules]] for testing in script, you will need a recent version of Perl (x86, with threads support) installed. This is &#039;&#039;&#039;NOT&#039;&#039;&#039; required if you will run only C/C++ tests or only complete the training.&lt;br /&gt;
&lt;br /&gt;
As of this writing, we support only the 32-bit versions 5.8.9, 5.10.x, 5.12.x and 5.14.x of Perl. &lt;br /&gt;
&lt;br /&gt;
=== Windows === &lt;br /&gt;
You must use a standard 32-bit Perl distribution from [http://www.activestate.com/activeperl ActiveState].&lt;br /&gt;
&lt;br /&gt;
The following additional (non-standard) Perl packages are also required for full functionality of STRIDE tests in Perl:&lt;br /&gt;
&lt;br /&gt;
* [http://search.cpan.org/~smueller/Class-ISA-0.36/lib/Class/ISA.pm Class::ISA]&lt;br /&gt;
* [http://search.cpan.org/~andrewf/Pod-POM-0.27/lib/Pod/POM.pm Pod::POM]&lt;br /&gt;
* [http://search.cpan.org/~andk/Devel-Symdump-2.08/lib/Devel/Symdump.pm Devel::Symdump]&lt;br /&gt;
&lt;br /&gt;
You can easily install these packages using the [http://docs.activestate.com/activeperl/5.10/faq/ActivePerl-faq2.html ppm tool]. If you access the Internet via a proxy, make sure to read [http://docs.activestate.com/activeperl/5.10/faq/ActivePerl-faq2.html#ppm_and_proxies this]. Command-line installation of PACKAGE_NAME (the package to install) typically requires only:&lt;br /&gt;
&lt;br /&gt;
 ppm install PACKAGE_NAME&lt;br /&gt;
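For instance, the three packages listed above could be installed one at a time (assuming ppm is on your PATH and any proxy settings are already configured):

```shell
# Install the required CPAN packages via ActiveState's ppm.
# Package names are taken from the list above.
ppm install Class::ISA
ppm install Pod::POM
ppm install Devel::Symdump
```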
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
We recommend using the standard 32-bit Perl distribution that comes with your Linux distribution. If you need to build Perl from source, make sure to configure &amp;quot;shared library&amp;quot; (&amp;lt;tt&amp;gt;-Duseshrplib&amp;lt;/tt&amp;gt;) and &amp;quot;thread support&amp;quot; (&amp;lt;tt&amp;gt;-Duseithreads&amp;lt;/tt&amp;gt;), and to disable &amp;quot;64-bit support&amp;quot; (&amp;lt;tt&amp;gt;-Uuse64bitint -Uuse64bitall&amp;lt;/tt&amp;gt;).&lt;br /&gt;
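If you do build from source, a minimal Configure invocation with these options might look like the following. This is only a sketch; it assumes an unpacked Perl source tree, and additional options may be needed on your system.

```shell
# Sketch: configuring a from-source Perl build with the options above.
# -des accepts sensible defaults non-interactively.
sh Configure -des -Duseshrplib -Duseithreads -Uuse64bitint -Uuse64bitall
make
make test
sudo make install
```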
&lt;br /&gt;
The following additional (non-standard) Perl packages are also required for full functionality of STRIDE tests in Perl:&lt;br /&gt;
&lt;br /&gt;
* [http://search.cpan.org/~ingy/YAML-LibYAML-0.38/lib/YAML/XS.pm YAML::XS]&lt;br /&gt;
* [http://search.cpan.org/~smueller/Class-ISA-0.36/lib/Class/ISA.pm Class::ISA]&lt;br /&gt;
* [http://search.cpan.org/~andrewf/Pod-POM-0.27/lib/Pod/POM.pm Pod::POM]&lt;br /&gt;
* [http://search.cpan.org/~andk/Devel-Symdump-2.08/lib/Devel/Symdump.pm Devel::Symdump]&lt;br /&gt;
&lt;br /&gt;
If your Perl is installed in a system directory (&amp;lt;tt&amp;gt;/usr/bin/perl&amp;lt;/tt&amp;gt;, for instance), you will need root access to install shared modules. The simplest way to install packages is via the [http://www.perl.com/doc/manual/html/lib/CPAN.html CPAN shell]. If you access the Internet via a proxy, make sure to set the appropriate [http://search.cpan.org/dist/CPAN/lib/CPAN.pm#Config_Variables CPAN config variables]. To start the shell in interactive mode:&lt;br /&gt;
&lt;br /&gt;
 sudo perl -MCPAN -eshell&lt;br /&gt;
&lt;br /&gt;
Once in the shell, search for and install the latest stable version of PACKAGE_NAME (the package to install):&lt;br /&gt;
&lt;br /&gt;
 install PACKAGE_NAME&lt;br /&gt;
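Alternatively, the four packages listed above can be installed non-interactively from the command line. This sketch assumes the CPAN shell has already been configured and that your Perl is in a system directory (hence the sudo):

```shell
# Non-interactive installation of the packages listed above.
for pkg in YAML::XS Class::ISA Pod::POM Devel::Symdump; do
    sudo perl -MCPAN -e "install('$pkg')"
done
```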
&lt;br /&gt;
The STRIDE Perl packages also need to load your system&#039;s &#039;&#039;&#039;libperl.so&#039;&#039;&#039; (shared object file) at runtime. Depending on your system, this file should be loadable from a Perl CORE directory or from one of the shared system directories. If you &#039;&#039;&#039;DO NOT&#039;&#039;&#039; have this shared library on your system, you might need to install a &#039;&#039;libperl-dev&#039;&#039;, &#039;&#039;perl-devel&#039;&#039; or &#039;&#039;perl-libs&#039;&#039; package to get it. Here is how to install it from the console on some Linux distributions:&lt;br /&gt;
&lt;br /&gt;
* Debian / Ubuntu&lt;br /&gt;
 sudo apt-get install libperl-dev&lt;br /&gt;
* Fedora / CentOS / RHEL&lt;br /&gt;
 sudo yum -y install perl-devel&lt;br /&gt;
&lt;br /&gt;
=== Validation ===&lt;br /&gt;
Once you have installed Perl, we recommend running the following command in a console window:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
stride --device NULL  --diagnostics Perl --output PerlCheck&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If everything is properly set up, you should see the following output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Executing diagnostics...&lt;br /&gt;
  script &amp;quot;diagnostics.pl&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  ---------------------------------------------------------------------&lt;br /&gt;
  Summary: 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition, a report file named &amp;lt;tt&amp;gt;PerlCheck.xml&amp;lt;/tt&amp;gt; will be created in the current directory. For details, open that report file in a browser of your choice.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Installation]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Building_an_Off-Target_Test_App&amp;diff=13907</id>
		<title>Building an Off-Target Test App</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Building_an_Off-Target_Test_App&amp;diff=13907"/>
		<updated>2013-02-01T23:24:12Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Samples */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Prerequisite ==&lt;br /&gt;
This article guides you through building a test application (&#039;&#039;&#039;TestApp&#039;&#039;&#039;) for the purpose of running sample code using the [[STRIDE Off-Target Environment]]. &lt;br /&gt;
&lt;br /&gt;
It requires an installation of the STRIDE Framework desktop package. If not installed, please see [[Desktop Installation]] for more information. &lt;br /&gt;
&lt;br /&gt;
It also requires that your desktop contains one of the following &#039;&#039;&#039;compilers&#039;&#039;&#039;:&lt;br /&gt;
* For Windows, Microsoft Visual Studio 2008 or later is required. If you don&#039;t already have Visual Studio, the free [http://en.wikipedia.org/wiki/Microsoft_Visual_Studio_Express Visual C++ Express] can be used (download [http://www.microsoft.com/express/download/#webInstall here]). &lt;br /&gt;
* For Linux, the [http://en.wikipedia.org/wiki/GNU_Compiler_Collection GNU Compiler Collection] (included by default in almost all Linux distributions) is required.&lt;br /&gt;
&lt;br /&gt;
== Building a TestApp ==&lt;br /&gt;
&lt;br /&gt;
=== SDK Makefile ===&lt;br /&gt;
The SDK Makefile is set up so that all &amp;lt;tt&amp;gt;.c&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;.cpp&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files found in the directory &amp;lt;tt&amp;gt;SDK\Windows\sample_src&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;SDK/Posix/sample_src&amp;lt;/tt&amp;gt; for Linux) are included in the compile and link of the &#039;&#039;&#039;testapp&#039;&#039;&#039; target.&lt;br /&gt;
&lt;br /&gt;
Further, as a pre-compilation step, any &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files found in &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt; are submitted to the [[STRIDE Build Tools]]. This results in:&lt;br /&gt;
* the detection of [[Test_Unit_Pragmas| test pragmas]] used to declare Test Units in these &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files&lt;br /&gt;
* the detection of [[Scl_function | function pragmas]] used to declare remoting of functions also found in &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files&lt;br /&gt;
* the inclusion of metadata in the generated &amp;lt;tt&amp;gt;sidb&amp;lt;/tt&amp;gt; file&lt;br /&gt;
* the generation of an [[Intercept Module]] required for executing tests&lt;br /&gt;
&lt;br /&gt;
=== Build Steps ===&lt;br /&gt;
To begin, be sure that TestApp is not running, then perform the following steps:&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;If you experience any build problems, please read [[Troubleshooting Build Problems]] for possible resolutions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
====Linux====&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Build the test app&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cd $STRIDE_DIR/SDK/Posix/src&lt;br /&gt;
make testapp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Note that the following artifacts are produced by the build:&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/out/bin/TestApp&amp;lt;/tt&amp;gt;&lt;br /&gt;
: the test application&lt;br /&gt;
;&amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;lt;/tt&amp;gt;&lt;br /&gt;
: the STRIDE interface database file which contains metadata describing the interfaces remoted by the test app (along with other data)&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Windows====&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;If using Microsoft Visual Studio, open a [http://msdn.microsoft.com/en-us/library/ms235639(v=vs.100).aspx Visual Studio Command Prompt] to ensure that the compiler and linker are on your PATH.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Build the test app using the supplied GNU make. (You will get Makefile errors if you use the default make.)&lt;br /&gt;
&amp;lt;source lang=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
cd %STRIDE_DIR%\SDK\Windows\src&lt;br /&gt;
..\bin\make testapp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Note that the following artifacts are produced by the build:&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\out\bin\TestApp.exe&amp;lt;/tt&amp;gt;&lt;br /&gt;
: the test application&lt;br /&gt;
;&amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;lt;/tt&amp;gt;&lt;br /&gt;
: the STRIDE interface database file which contains metadata describing the interfaces remoted by the test app (along with other data)&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;If you prefer to use Visual Studio to build/debug/run your testapp, we provide instructions [[STRIDE_Extensions_for_Visual_Studio|here]] about how to accomplish this.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Diagnostics ==&lt;br /&gt;
The test app we just built does not have any user tests in it. At this point it provides a starting point for the tests that we will subsequently add.&lt;br /&gt;
&lt;br /&gt;
However, a set of diagnostic tests that verify operation of the STRIDE runtime itself is always built into the generated TestApp executable. We recommend running them, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Invoke the TestApp. In order to see TestApp&#039;s output, we recommend that you manually open a new console (or Windows equivalent): &lt;br /&gt;
;Linux&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$STRIDE_DIR/SDK/Posix/out/bin/TestApp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
;Windows&lt;br /&gt;
&amp;lt;source lang=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
%STRIDE_DIR%\SDK\Windows\out\bin\TestApp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
(...or launch from the file explorer)&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt; Note TestApp&#039;s output upon startup.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
STRIDE Test Console Application.&lt;br /&gt;
Enter &#039;Ctrl+C&#039; to Quit.&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
Listening on TCP port 8000&lt;br /&gt;
starting up...&lt;br /&gt;
&amp;quot;_srThread&amp;quot; thread started.&lt;br /&gt;
&amp;quot;stride&amp;quot; thread started.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;From a second console window, invoke &amp;lt;tt&amp;gt;[[STRIDE_Runner|stride]]&amp;lt;/tt&amp;gt; as follows, to verify connectivity with the test app and STRIDE runtime operation:&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
;Linux&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
stride --diagnostics --database=&amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot; --device=TCP:localhost:8000 --run=&amp;quot;*&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
;Windows&lt;br /&gt;
&amp;lt;source lang=&amp;quot;dos&amp;quot;&amp;gt;&lt;br /&gt;
stride --diagnostics --database=&amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot; --device=TCP:localhost:8000 --run=&amp;quot;*&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As the tests run you will see output in both the TestApp (target) and stride (host) console windows.&lt;br /&gt;
&lt;br /&gt;
The host console window output is shown here:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
  runtime version: 4.3.0x &lt;br /&gt;
Executing diagnostics...&lt;br /&gt;
  test unit &amp;quot;Link&amp;quot;&lt;br /&gt;
    Loopback ............&lt;br /&gt;
    Payload Fragmentation&lt;br /&gt;
    Stub-Proxy Deadlock&lt;br /&gt;
    Target Characteristics&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;Stat&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;Time&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 8 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Note the Summary results shown in the host output; all in-use tests should pass.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;To exit TestApp, give the target window focus and enter Ctrl+C (or &#039;q&#039; under Windows).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Samples (Optional; Not for Training Exercises) ==&lt;br /&gt;
&lt;br /&gt;
The initial desktop installation of STRIDE does not set up any source code (with the exception of a set of system diagnostic tests) for automatic inclusion in a test application. The [[Desktop Installation | desktop framework]] distribution, however, comes with a set of [[Samples_Overview | Samples]]. &lt;br /&gt;
&lt;br /&gt;
To demonstrate how to build a sample, we will add the [[Test Intro Sample]], which provides an overview of STRIDE testing techniques. For an overview from a C++ perspective, please see [[Test Intro Cpp Sample]]. &lt;br /&gt;
&lt;br /&gt;
The following steps are applicable for &#039;&#039;&#039;all&#039;&#039;&#039; [[Samples_Overview | Samples]].&lt;br /&gt;
&lt;br /&gt;
To begin, be sure that TestApp is not running, then copy the &amp;lt;tt&amp;gt;.c&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;.h&amp;lt;/tt&amp;gt; files found in &amp;lt;tt&amp;gt;Samples/test_in_c_cpp/TestIntro&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;SDK/Windows/sample_src&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;SDK/Posix/sample_src&amp;lt;/tt&amp;gt; for Linux).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; &#039;&#039;Only files in the sample_src directory will be picked up by the makefile. Files in any subdirectories will be ignored.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Once the files have been copied to &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt;, simply build TestApp as described above. Note that if you previously copied other sample source to this directory, you should decide whether to remove those files first: any source in this directory at build time will be included in the test app.&lt;br /&gt;
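On Linux, the copy step might look like the following sketch. The paths assume the default Off-Target Environment layout under $STRIDE_DIR described in Desktop Installation; adjust for the Windows SDK layout as needed:

```shell
# Copy the Test Intro sample sources into the TestApp build directory.
cp "$STRIDE_DIR"/Samples/test_in_c_cpp/TestIntro/*.c \
   "$STRIDE_DIR"/Samples/test_in_c_cpp/TestIntro/*.h \
   "$STRIDE_DIR"/SDK/Posix/sample_src/
```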
&lt;br /&gt;
=== Running the Tests ===&lt;br /&gt;
Here we will run all tests in the &amp;lt;tt&amp;gt;TestApp.sidb&amp;lt;/tt&amp;gt; database.&amp;lt;ref&amp;gt;Note that the S2 diagnostic tests are treated separately, and are not run unless the &amp;lt;tt&amp;gt;--diagnostics&amp;lt;/tt&amp;gt; option is specified to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Run the TestApp built above in a console window.&lt;br /&gt;
# Invoke &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt; in a separate console window (not the one running TestApp), as shown below, and verify the Summary results.&lt;br /&gt;
&lt;br /&gt;
Here are the command line parameters that we will submit to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--database ./out/TestApp.sidb &lt;br /&gt;
--device TCP:localhost:8000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
* If you set up an [[STRIDE_Runner#Environment_Variables | environment variable]] for the &#039;&#039;&#039;device&#039;&#039;&#039; option, then it is not required in the option file. Note: command line options override environment variables.&lt;br /&gt;
* You may want to create a text file named &#039;&#039;RunTestIntro.txt&#039;&#039; in the &amp;lt;tt&amp;gt;SDK\Windows&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;SDK/Posix&amp;lt;/tt&amp;gt; for Linux) directory as an option file to submit to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;.&lt;br /&gt;
* &#039;&#039;stride --help&#039;&#039; provides [[STRIDE_Runner#Options | options information]]&lt;br /&gt;
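The option file can be created from the command line; for example (Linux shell shown, file contents taken from the parameter listing above):

```shell
# Create the stride option file with the parameters shown above.
cat > RunTestIntro.txt <<'EOF'
--database ./out/TestApp.sidb
--device TCP:localhost:8000
EOF
```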
&lt;br /&gt;
If you haven&#039;t done so already, start &amp;lt;tt&amp;gt;TestApp&amp;lt;/tt&amp;gt; running in a separate console window.&lt;br /&gt;
&lt;br /&gt;
Now run stride as follows (starting from the &amp;lt;tt&amp;gt;SDK\Windows&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;SDK/Posix&amp;lt;/tt&amp;gt; directory):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
stride --options_file RunTestIntro.txt --run=&amp;quot;*&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output should look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  test unit &amp;quot;s2_testintro_flist&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;s2_testintro_cclass&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;s2_testintro_testdoubles&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;s2_testintro_testpoints&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  test unit &amp;quot;s2_testintro_parameters&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 11 passed, 2 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Interpreting Results ===&lt;br /&gt;
Open &#039;&#039;&#039;TestApp.xml&#039;&#039;&#039; in your browser; this file is created in the directory from which you ran &#039;&#039;&#039;stride&#039;&#039;&#039; (or the directory specified via the &amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt; command line option). If you were connected to the Internet when you ran the tests, the TestApp.xsl file is also generated in that directory. When you open TestApp.xml in a web browser, the XSL is automatically applied to render HTML. If you use Google Chrome, please see [[Browser Compatibility]].&lt;br /&gt;
&lt;br /&gt;
If you&#039;re interested in the details of the tests, please refer to the test documentation contained in the test report.&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Installation]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Desktop_Installation&amp;diff=13906</id>
		<title>Desktop Installation</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Desktop_Installation&amp;diff=13906"/>
		<updated>2013-02-01T23:18:53Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Perl Installation (Optional) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Installation Packages ==&lt;br /&gt;
Files are installed by unzipping the provided package to your PC. Packages are available targeting the following operating systems (your version number may be different from that shown):&lt;br /&gt;
;Windows (x86)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-windows_4.x.yy.zip&amp;lt;/tt&amp;gt;&lt;br /&gt;
;Linux (x86)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-linux_4.x.yy.tgz&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please see the appropriate installation instructions below.&lt;br /&gt;
&lt;br /&gt;
== Windows Installation ==&lt;br /&gt;
&lt;br /&gt;
=== Unpacking ===&lt;br /&gt;
The following installation example assumes that the installation package is located in your root directory and that the directory &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt; exists. You can choose to install to a different location (all instructions below assume you are installing into &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt;). &lt;br /&gt;
&lt;br /&gt;
The example uses the open source [http://www.7-zip.org/ 7-Zip] utility to unzip the archive.&lt;br /&gt;
&lt;br /&gt;
 cd \stride&lt;br /&gt;
 &amp;quot;\Program Files\7-Zip\7z&amp;quot; x ..\STRIDE_framework-windows_4.x.yy.zip&lt;br /&gt;
&lt;br /&gt;
Once unzipped, files will have been installed under the &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
=== Verify Environment Variables ===&lt;br /&gt;
&lt;br /&gt;
==== Updated PATH ====&lt;br /&gt;
As a final step, you will need to update your &amp;lt;tt&amp;gt;[http://en.wikipedia.org/wiki/Path_(variable) PATH]&amp;lt;/tt&amp;gt; environment variable to include &amp;lt;tt&amp;gt;\stride\bin&amp;lt;/tt&amp;gt;. &lt;br /&gt;
For instructions on modifying it, please see [http://support.microsoft.com/kb/310519 http://support.microsoft.com/kb/310519].&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;Make sure to insert &#039;&#039;&#039;no spaces&#039;&#039;&#039; before and after the semicolon separators (;).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Create/Update STRIDE_DIR====&lt;br /&gt;
&lt;br /&gt;
Verify that the  &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable exists and is set to the root installation directory (&amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt;). If this environment variable does not yet exist, you should create it as a user environment variable.&lt;br /&gt;
&lt;br /&gt;
To confirm installation and display &#039;&#039;help&#039;&#039; run the following command in a console window:&lt;br /&gt;
&lt;br /&gt;
 stride -h&lt;br /&gt;
&lt;br /&gt;
=== Uninstalling ===&lt;br /&gt;
To uninstall STRIDE simply:&lt;br /&gt;
* Remove any reference to &amp;lt;tt&amp;gt;\stride\bin&amp;lt;/tt&amp;gt; in your &amp;lt;tt&amp;gt;PATH&amp;lt;/tt&amp;gt; environment variable. &lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable.&lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;\stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
== Linux Installation ==&lt;br /&gt;
&lt;br /&gt;
=== Unpacking ===&lt;br /&gt;
The following installation example assumes that the installation package is located in your home directory and that the directory &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt; exists. You can choose to install to a different location (all instructions below assume you are installing into &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt;). &lt;br /&gt;
&lt;br /&gt;
 cd ~/stride&lt;br /&gt;
 tar -zxvf ../STRIDE_framework-linux_4.x.yy.tgz&lt;br /&gt;
&lt;br /&gt;
Once extracted, files will have been installed under the &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
=== Verify Environment Variables ===&lt;br /&gt;
&lt;br /&gt;
==== Updated PATH ====&lt;br /&gt;
As a final step, you will need to update your &amp;lt;tt&amp;gt;[http://en.wikipedia.org/wiki/Path_(variable) PATH]&amp;lt;/tt&amp;gt; environment variable to include &amp;lt;tt&amp;gt;~/stride/bin&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
If you use the bash shell, enter the following at a command prompt, or, to set it automatically at each login, add it to your &amp;lt;tt&amp;gt;.bashrc&amp;lt;/tt&amp;gt;:&lt;br /&gt;
 export PATH=$PATH:~/stride/bin&lt;br /&gt;
&lt;br /&gt;
For other shells, and more information, please see the following articles:&lt;br /&gt;
* [http://www.linuxheadquarters.com/howto/basic/path.shtml http://www.linuxheadquarters.com/howto/basic/path.shtml].&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Environment_variable#UNIX http://en.wikipedia.org/wiki/Environment_variable]&lt;br /&gt;
&lt;br /&gt;
==== Create/Update STRIDE_DIR====&lt;br /&gt;
Verify that the &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable exists and is set to the root installation directory (&amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt;). If this environment variable does not yet exist, set it automatically at each login by adding the following to your &amp;lt;tt&amp;gt;.bashrc&amp;lt;/tt&amp;gt;:&lt;br /&gt;
 export STRIDE_DIR=~/stride&lt;br /&gt;
&lt;br /&gt;
To confirm installation and display &#039;&#039;help&#039;&#039; run the following command in a console window:&lt;br /&gt;
&lt;br /&gt;
 stride -h&lt;br /&gt;
&lt;br /&gt;
NOTE: &#039;&#039;In a 64-bit environment the above may fail with the following error: &amp;lt;code&amp;gt;&amp;quot;stride: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory.&amp;quot;&amp;lt;/code&amp;gt; To resolve this issue install the appropriate 32-bit compatibility libraries for your Linux distribution:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Debian / Ubuntu&lt;br /&gt;
 sudo apt-get install ia32-libs&lt;br /&gt;
* Fedora / CentOS / RHEL&lt;br /&gt;
 sudo yum -y install glibc.i686 libstdc++.i686&lt;br /&gt;
&lt;br /&gt;
=== Uninstalling ===&lt;br /&gt;
To uninstall STRIDE simply:&lt;br /&gt;
* Remove any reference to &amp;lt;tt&amp;gt;~/stride/bin&amp;lt;/tt&amp;gt; in your &amp;lt;tt&amp;gt;PATH&amp;lt;/tt&amp;gt; environment variable. &lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;STRIDE_DIR&amp;lt;/tt&amp;gt; environment variable.&lt;br /&gt;
* Remove &amp;lt;tt&amp;gt;~/stride&amp;lt;/tt&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
== Directories and Files ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s not necessary to understand the workings of the STRIDE framework to perform evaluation or training. The desktop package contains a [[STRIDE Off-Target Environment]] that utilizes an SDK set up with appropriate options and settings to enable &amp;quot;out of the box&amp;quot; functionality. A quick orientation to the Off-Target Environment&#039;s directories and files is shown below.&lt;br /&gt;
&lt;br /&gt;
If you are interested in the details, consult the articles [[Posix SDK]] and [[Windows SDK]].&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;bin&amp;lt;/tt&amp;gt;===&lt;br /&gt;
This directory contains the [[Build Tools|STRIDE Build Tools]] and the [[STRIDE Runner]].&lt;br /&gt;
&lt;br /&gt;
The build tools are invoked early on in the target software build process to generate special STRIDE artifacts that are used in subsequent build steps and later when running tests against the target. In an Off-Target Environment installation, these files are needed on the host computer since this is where we are building the target application. In a production environment, these files are needed only on the computer that performs the target software build.&lt;br /&gt;
&lt;br /&gt;
The [[STRIDE Runner]] is the program you use to run tests from the host.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;lib&amp;lt;/tt&amp;gt;===&lt;br /&gt;
This directory contains a set of STRIDE-specific core scripting libraries, along with prebuilt binaries, intended for use in [[Test Modules Overview|testing in scripts]].&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;Samples&amp;lt;/tt&amp;gt;===&lt;br /&gt;
The Samples directory contains a number of sub-directories, each containing the source for a [[Samples|sample test]].&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;tt&amp;gt;SDK&amp;lt;/tt&amp;gt;===&lt;br /&gt;
This directory contains the sub-directories &amp;lt;tt&amp;gt;GRS&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;Runtime&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;SLAP&amp;lt;/tt&amp;gt;, which contain source code that comprises the STRIDE Runtime. These sources are built into the STRIDE Runtime library as a dependency of the &amp;lt;tt&amp;gt;testapp&amp;lt;/tt&amp;gt; target. (See &amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt; directory below.) &lt;br /&gt;
&lt;br /&gt;
In addition, there is a directory named either &amp;lt;tt&amp;gt;Posix&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;Windows&amp;lt;/tt&amp;gt;, depending on your host operating system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;tt&amp;gt;[[Windows_SDK| SDK\Windows]]&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;[[Posix_SDK| SDK/Posix]]&amp;lt;/tt&amp;gt;)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This directory (&amp;lt;tt&amp;gt;Windows&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;Posix&amp;lt;/tt&amp;gt;) contains the following directories:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;tt&amp;gt;bin&amp;lt;/tt&amp;gt; &#039;&#039;(Windows only)&#039;&#039;&lt;br /&gt;
: Contains GNU make program files (GNU make is already present on Linux systems)&lt;br /&gt;
*&amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt;&lt;br /&gt;
: This directory is initially empty. The sandbox is set up so that any files in this directory are included in the TestApp build.&lt;br /&gt;
*&amp;lt;tt&amp;gt;settings&amp;lt;/tt&amp;gt;&lt;br /&gt;
: This directory contains a set of &amp;lt;tt&amp;gt;stride.XXX.s2scompile&amp;lt;/tt&amp;gt; files, where &amp;lt;tt&amp;gt;XXX&amp;lt;/tt&amp;gt; corresponds to the target CPU architecture (e.g. X86, ARM, ...). These files, used by the [[s2scompile|STRIDE Compiler]], specify target CPU characteristics (endianness, data sizes, and alignments). On Windows, this directory also contains a set of files for [[STRIDE_Extensions_for_Visual_Studio|use in building target apps with Visual Studio]].&lt;br /&gt;
*&amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt;&lt;br /&gt;
: This directory contains the Makefile used to produce the sandbox TestApp as well as the TestApp sources.&lt;br /&gt;
*&#039;&#039;&amp;lt;tt&amp;gt;out&amp;lt;/tt&amp;gt;&#039;&#039;&lt;br /&gt;
: This directory (and several sub-directories) is created as part of the make process. All of the make targets are written to this directory and its sub-directories.&lt;br /&gt;
&lt;br /&gt;
== Perl Installation (Optional) ==&lt;br /&gt;
If you intend to use [[Test Modules Overview|STRIDE Script modules]] for testing in scripts, you will need a recent version of Perl (x86, with threads support) installed. Perl is &#039;&#039;&#039;NOT&#039;&#039;&#039; required if you will run only C/C++ tests, nor is it required to complete the training.&lt;br /&gt;
&lt;br /&gt;
As of this writing, we support only the 32-bit Perl versions 5.8.9, 5.10.x, 5.12.x, and 5.14.x. &lt;br /&gt;
&lt;br /&gt;
=== Windows === &lt;br /&gt;
On Windows you must use the standard 32-bit Perl distribution from [http://www.activestate.com/activeperl ActiveState].&lt;br /&gt;
&lt;br /&gt;
The following additional (non-standard) Perl packages are also required for full functionality of STRIDE tests in Perl:&lt;br /&gt;
&lt;br /&gt;
* [http://search.cpan.org/~smueller/Class-ISA-0.36/lib/Class/ISA.pm Class::ISA]&lt;br /&gt;
* [http://search.cpan.org/~andrewf/Pod-POM-0.27/lib/Pod/POM.pm Pod::POM]&lt;br /&gt;
* [http://search.cpan.org/~andk/Devel-Symdump-2.08/lib/Devel/Symdump.pm Devel::Symdump]&lt;br /&gt;
&lt;br /&gt;
You can easily install these packages using the [http://docs.activestate.com/activeperl/5.10/faq/ActivePerl-faq2.html ppm tool]. If you access the Internet via a proxy, make sure to read [http://docs.activestate.com/activeperl/5.10/faq/ActivePerl-faq2.html#ppm_and_proxies this note on ppm and proxies]. To install a package named PACKAGE_NAME from the command line, you typically just need to type:&lt;br /&gt;
&lt;br /&gt;
 ppm install PACKAGE_NAME&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
We recommend using the standard 32-bit Perl distribution that comes with your Linux version. If you need to build Perl from source, make sure to configure it with a shared library (&amp;lt;tt&amp;gt;-Duseshrplib&amp;lt;/tt&amp;gt;), thread support (&amp;lt;tt&amp;gt;-Duseithreads&amp;lt;/tt&amp;gt;), and no 64-bit support (&amp;lt;tt&amp;gt;-Uuse64bitint -Uuse64bitall&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The following additional (non-standard) Perl packages are also required for full functionality of STRIDE tests in Perl:&lt;br /&gt;
&lt;br /&gt;
* [http://search.cpan.org/~ingy/YAML-LibYAML-0.38/lib/YAML/XS.pm YAML::XS]&lt;br /&gt;
* [http://search.cpan.org/~smueller/Class-ISA-0.36/lib/Class/ISA.pm Class::ISA]&lt;br /&gt;
* [http://search.cpan.org/~andrewf/Pod-POM-0.27/lib/Pod/POM.pm Pod::POM]&lt;br /&gt;
* [http://search.cpan.org/~andk/Devel-Symdump-2.08/lib/Devel/Symdump.pm Devel::Symdump]&lt;br /&gt;
&lt;br /&gt;
If your Perl is installed in a system directory (&amp;lt;tt&amp;gt;/usr/bin/perl&amp;lt;/tt&amp;gt;, for instance), you will need root access to install shared modules. The simplest way to install packages is via the [http://www.perl.com/doc/manual/html/lib/CPAN.html CPAN shell]. If you access the Internet via a proxy, make sure to set the appropriate [http://search.cpan.org/dist/CPAN/lib/CPAN.pm#Config_Variables CPAN config variables]. To start the shell in interactive mode:&lt;br /&gt;
&lt;br /&gt;
 sudo perl -MCPAN -eshell&lt;br /&gt;
&lt;br /&gt;
Once in the shell, search for and install the latest stable version of PACKAGE_NAME (the package to install):&lt;br /&gt;
&lt;br /&gt;
 install PACKAGE_NAME&lt;br /&gt;
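To check which of the modules listed above are already available before (or after) installing, you can ask Perl to load each one (a small sketch using only &amp;lt;tt&amp;gt;perl -M&amp;lt;/tt&amp;gt;; no STRIDE components are involved):&lt;br /&gt;

```shell
# Attempt to load each required module; perl exits nonzero if it is absent.
for m in YAML::XS Class::ISA Pod::POM Devel::Symdump; do
    if perl -M"$m" -e 1 2>/dev/null; then
        echo "$m: ok"
    else
        echo "$m: not installed"
    fi
done
```

Any module reported as not installed can then be installed from the CPAN shell as described above.&lt;br /&gt;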
&lt;br /&gt;
The STRIDE Perl packages also need to load your system&#039;s &#039;&#039;&#039;libperl.so&#039;&#039;&#039; (shared object file) at runtime. Depending on your system, this file should be loadable from a Perl CORE directory or from one of the shared system library directories. If you do &#039;&#039;&#039;NOT&#039;&#039;&#039; have this shared library on your system, you may need to install a &#039;&#039;libperl-dev&#039;&#039;, &#039;&#039;perl-devel&#039;&#039;, or &#039;&#039;perl-libs&#039;&#039; package to get it. Here is how to do that from the console on some common Linux distributions:&lt;br /&gt;
&lt;br /&gt;
* Debian / Ubuntu&lt;br /&gt;
 sudo apt-get install libperl-dev&lt;br /&gt;
* Fedora / CentOS / RHEL&lt;br /&gt;
 sudo yum -y install perl-devel&lt;br /&gt;
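To see where your Perl expects its CORE directory (the usual location of &#039;&#039;&#039;libperl.so&#039;&#039;&#039;), you can query the standard &amp;lt;tt&amp;gt;Config&amp;lt;/tt&amp;gt; module (a sketch; on some systems the library lives in a shared system directory instead):&lt;br /&gt;

```shell
# Ask perl for its architecture-specific library directory; the CORE
# subdirectory underneath it is where libperl.so is typically installed.
core="$(perl -MConfig -e 'print $Config{archlib}')/CORE"
echo "CORE directory: $core"
ls "$core"/libperl* 2>/dev/null || echo "no libperl found in $core"
```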
&lt;br /&gt;
=== Validation ===&lt;br /&gt;
Once you have installed Perl, we recommend running the following command in a console window:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
stride --device NULL  --diagnostics Perl --output PerlCheck&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If everything is properly set up, you should see output similar to the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Executing diagnostics...&lt;br /&gt;
  script &amp;quot;diagnostics.pl&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  ---------------------------------------------------------------------&lt;br /&gt;
  Summary: 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition, a report file named &amp;lt;tt&amp;gt;PerlCheck.xml&amp;lt;/tt&amp;gt; will be created in the current directory. For more detail, open that report file in a browser of your choice.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Installation]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Test_Macros&amp;diff=13905</id>
		<title>Training Test Macros</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Test_Macros&amp;diff=13905"/>
		<updated>2013-02-01T19:04:48Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Run and Publish Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module focuses on the basics of writing and executing tests. It covers the following topics:&lt;br /&gt;
* The C++ class [[Test_Units | Test Unit packaging]] option&lt;br /&gt;
* How to leverage [[Test_Macros | Test Macros]]&lt;br /&gt;
* The difference between [[Test_Macros#Notes | Notes]] and [[Test_Log | Test Logs]]&lt;br /&gt;
* Creating [[Test_Documentation_in_C/C%2B%2B | Test Documentation]] via the build process&lt;br /&gt;
* Executing Tests using the [[STRIDE_Runner | Runner]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test unit is implemented in two source files: &#039;&#039;&#039;TestBasic.cpp&#039;&#039;&#039; and &#039;&#039;&#039;TestBasic.h&#039;&#039;&#039;. The comments and descriptions are contained in &#039;&#039;&#039;TestBasic.h&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The Test Unit has test cases already implemented (used for reference) and a test method, called &#039;&#039;&#039;Exercise&#039;&#039;&#039;, that you are required to implement. Currently this method is empty and returns a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using the SDK makefile&lt;br /&gt;
* Invoke TestApp found in the &#039;&#039;/out/bin&#039;&#039; directory&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute the &#039;&#039;Test Basic&#039;&#039; Test Units &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestBasic&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
You can also review the details of the test results in a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the sample_src directory (as determined by the output option). When you open the XML file in a web browser, the XSL stylesheet is automatically applied to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercises ===&lt;br /&gt;
&lt;br /&gt;
Now edit the training source code to complete the following exercises:&lt;br /&gt;
&lt;br /&gt;
;Exercise()&lt;br /&gt;
* &#039;&#039;&#039;Assignment 1:&#039;&#039;&#039; Add an &#039;&#039;srNOTE&#039;&#039; to &amp;lt;tt&amp;gt;Exercise()&amp;lt;/tt&amp;gt; that will add a simple message to the test report (e.g. &amp;quot;Exercise ...&amp;quot;)&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will probably want to read about [[Test Macros]]&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; srTestCaseSetStatus(srTEST_CASE_DEFAULT, srTEST_NOTINUSE, 0) is a placeholder to be removed when you add your test code.&lt;br /&gt;
* &#039;&#039;&#039;Assignment 2:&#039;&#039;&#039; Within &amp;lt;tt&amp;gt;Exercise()&amp;lt;/tt&amp;gt; validate that &amp;lt;tt&amp;gt;sut_mult(1,1)&amp;lt;/tt&amp;gt; does NOT equal &amp;lt;tt&amp;gt;sut_add(1,1)&amp;lt;/tt&amp;gt;&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will use a [[Test Macros | Test Macro]].&lt;br /&gt;
&lt;br /&gt;
=== Check Results ===&lt;br /&gt;
&lt;br /&gt;
* Before you rebuild &amp;lt;tt&amp;gt;TestApp.exe&amp;lt;/tt&amp;gt;, you will need to shut it down by entering &#039;q&#039; in its console window or closing the window directly.&lt;br /&gt;
&lt;br /&gt;
* After rebuilding, invoke &amp;lt;tt&amp;gt;TestApp.exe&amp;lt;/tt&amp;gt; once again.&lt;br /&gt;
&lt;br /&gt;
* Execute only &#039;&#039;TestBasic&#039;&#039; using the stride runner:&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestBasic&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
      &amp;gt; 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* If you already have TestApp.xml open in your browser, you can simply refresh (F5) to view the latest test results.&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Test Space (&amp;lt;tt&amp;gt;--space TestBasic&amp;lt;/tt&amp;gt;). If you have not added Test Space options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;), please see [[Training_Getting_Started#Test_Space_Access| testspace access]].&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestBasic --space TestBasic --upload&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
      &amp;gt; 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ------------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
  Uploading to test space...&lt;br /&gt;
&lt;br /&gt;
=== Validate Uploaded Results ===&lt;br /&gt;
&lt;br /&gt;
Navigate to Test Space using your browser, then validate your results against the pre-configured baseline. For details, please see [[Training_Getting_Started#Confirming_Training_Exercise_Results | Confirming Training Exercise Results]]&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
You can view the results on test space by pointing your browser at &#039;&#039;&#039;&amp;lt;tt&amp;gt;&amp;lt;nowiki&amp;gt;https://yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&amp;lt;/tt&amp;gt;&#039;&#039;&#039;. Log in using the credentials you supplied earlier in the options file.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to Test Unit basics.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Units_Overview | Test Units Overview]] - Provides a general overview of Test Units (i.e. writing tests in C and C++)&lt;br /&gt;
* [[Test_Units |Test Unit Packaging]] - Discusses the three types of packaging that can be used for Test Units (we prefer Test Classes even for C programmers)&lt;br /&gt;
* [[Test_Macros|Test Macros]] - Optional macros that provide shortcuts for testing assertions and automatic report annotation (you will want to use these).&lt;br /&gt;
* [[Test_Macros#Note_Macros|Notes]] - Used to add logging information to your &#039;&#039;&#039;test logic&#039;&#039;&#039; (automatically added to test reports)&lt;br /&gt;
* [[Test_Log|Test Logs]] - Used to add logging information to your &#039;&#039;&#039;source code&#039;&#039;&#039; (added to test reports if enabled)&lt;br /&gt;
* [[Tracing|Tracing]] - The Runner allows tracing on logs and test points.&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Class_Sample|Test Class Sample]] -  Specifically the [[Test_Class_Sample#Basic|Basic Simple]] Test Unit&lt;br /&gt;
* [[Test_Macros_Sample|Test Macro Sample]] - This sample covers simple uses of each of the Test Macros. &lt;br /&gt;
&lt;br /&gt;
Note: the following other Test Unit Samples, related to packaging, are also useful:&lt;br /&gt;
&lt;br /&gt;
* [[Test_CClass_Sample|Test C Class Sample]]&lt;br /&gt;
* [[Test_Function_List_Sample|Test Function List Sample]] &lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Expectations&amp;diff=13904</id>
		<title>Training Expectations</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Expectations&amp;diff=13904"/>
		<updated>2013-02-01T19:04:11Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Run and Publish Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on &#039;&#039;Test Points&#039;&#039; and how to validate them using &#039;&#039;Expectations&#039;&#039;.  The module covers the following topics:&lt;br /&gt;
* Presentation of a [[Expectations | validation]] technique based on &#039;&#039;code sequencing&#039;&#039; and &#039;&#039;state data&#039;&#039;&lt;br /&gt;
* Overview of [[Source_Instrumentation_Overview#Instrumentation | source instrumentation]]&lt;br /&gt;
* Review of [[Test_Point_Testing_in_C/C%2B%2B | expectation tables and predicates]]&lt;br /&gt;
* Example use cases such as concurrent validation, using trigger conditions, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two test files used -- &#039;&#039;&#039;TestExpect.cpp &amp;amp; TestExpect.h&#039;&#039;&#039; -- that implement three Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Seq&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Data&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Misc&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The first two Test Units have two test methods already implemented and one method each, called &#039;&#039;&#039;Exercise&#039;&#039;&#039;, that you are required to implement. The third Test Unit has four test methods already implemented and also has one exercise method to implement. Currently the &#039;&#039;exercise methods&#039;&#039; return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Expectations&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  -----------------------------------------------------------&lt;br /&gt;
  Summary: 7 passed, 1 failed, 0 in progress, 3 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (as determined by the &#039;&#039;output&#039;&#039; option). When you open the XML file in a web browser, the XSL stylesheet is automatically applied to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Seq::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate &#039;&#039;&#039;ALL&#039;&#039;&#039; upper case Test Points (A - I)&lt;br /&gt;
** Use &#039;&#039;Unordered&#039;&#039; and &#039;&#039;Nonstrict&#039;&#039; sequencing&lt;br /&gt;
** Use &#039;&#039;sut_DoSequencing(SEQ_1)&#039;&#039; to generate part of the sequence&lt;br /&gt;
** Use &#039;&#039;sut_start_thread(SEQ_3)&#039;&#039; to generate the rest of the sequence&lt;br /&gt;
** Add a &#039;&#039;NOTE_INFO(..)&#039;&#039; to log that the test is executing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Data::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate the following Test Points &#039;&#039;&#039;{D, G, F, H}&#039;&#039;&#039;&lt;br /&gt;
** Check that &#039;&#039;&#039;F&#039;&#039;&#039; occurs 2 times&lt;br /&gt;
** Use &#039;&#039;Unordered&#039;&#039; sequencing&lt;br /&gt;
** Write a new custom predicate that validates data for both &#039;&#039;&#039;D&#039;&#039;&#039; and &#039;&#039;&#039;H&#039;&#039;&#039; Test Points&lt;br /&gt;
*** Add &#039;&#039;NOTE_INFO(..)&#039;&#039; to capture content of the Test Points&lt;br /&gt;
*** Add extra check for &#039;&#039;&#039;D&#039;&#039;&#039; that the status is &#039;&#039;&#039;GOOD&#039;&#039;&#039;&lt;br /&gt;
*** Confirm that data fields &#039;&#039;&#039;d1&#039;&#039;&#039; and &#039;&#039;&#039;d2&#039;&#039;&#039; are as expected&lt;br /&gt;
*** Pass the expected data fields (for both Test Points) as part of the &#039;&#039;&#039;user&#039;&#039;&#039; data within the setup&lt;br /&gt;
*** Use &#039;&#039;sut_start_thread(SEQ_3)&#039;&#039; to generate the sequence&lt;br /&gt;
** Write another custom predicate for validating data for &#039;&#039;&#039;G&#039;&#039;&#039;&lt;br /&gt;
*** Add &#039;&#039;NOTE_INFO(..)&#039;&#039; to capture content of the Test Point&lt;br /&gt;
*** Validate the expected string using &#039;&#039;&#039;user&#039;&#039;&#039; data&lt;br /&gt;
** Add a &#039;&#039;NOTE_INFO(..)&#039;&#039; to log that the test is executing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Misc::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate 2 sequences using a Trigger in between&lt;br /&gt;
*** Sequence 1 = &#039;&#039;&#039;D&#039;&#039;&#039; &#039;&#039;&#039;E&#039;&#039;&#039; and &#039;&#039;&#039;A&#039;&#039;&#039;&lt;br /&gt;
*** Trigger = &#039;&#039;&#039;C&#039;&#039;&#039;&lt;br /&gt;
*** Sequence 2 =  &#039;&#039;&#039;F&#039;&#039;&#039; and &#039;&#039;&#039;F&#039;&#039;&#039; (2 occurrences)&lt;br /&gt;
** Use &#039;&#039;ANY AT ALL&#039;&#039; special member with trigger&lt;br /&gt;
** Use &#039;&#039;Ordered&#039;&#039; and &#039;&#039;Strict&#039;&#039; sequencing &lt;br /&gt;
** Use &#039;&#039;sut_start_thread(SEQ_2)&#039;&#039; to generate the expected sequences  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Expectations&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
      &amp;gt; 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    -----------------------------------------------------------&lt;br /&gt;
    Summary: 10 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Test Space. If you have not added Test Space options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;), please see [[Training_Getting_Started#Test_Space_Access| testspace access]].&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc --space TestExpect --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to Test Points and Expectations.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Source_Instrumentation_Overview | Instrumentation Overview]] providing high-level concepts of this validation technique&lt;br /&gt;
* [[Test_Point |Test Point]] Macro definition &lt;br /&gt;
* [[Expectations |Expectations]] definition and how to set your &#039;&#039;Expectations&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Point_Sample | Test Point Sample]] - Demonstrates simple technique to monitor and test activity occurring in another thread&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Runtime_API&amp;diff=13903</id>
		<title>Training Runtime API</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Runtime_API&amp;diff=13903"/>
		<updated>2013-02-01T19:03:34Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Run and Publish Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module focuses on leveraging the [[Runtime_Test_Services | Runtime APIs]] in the context of writing a test. The module covers the following topics:&lt;br /&gt;
* How to set [[Runtime_Test_Services#srTestCaseSetStatus | test status]]&lt;br /&gt;
* Setting [[Runtime_Test_Services#srTestCaseSetName | test case name]] and [[Runtime_Test_Services#srTestCaseSetDescription | description]] directly via the API&lt;br /&gt;
* Setting [[Runtime_Test_Services#srTestSuiteSetName | test suite name]] and [[Runtime_Test_Services#srTestSuiteSetDescription | description]] directly via the API&lt;br /&gt;
* Dynamically creating [[Runtime_Test_Services#srTestSuiteAddSuite | test suites]] and [[Runtime_Test_Services#srTestSuiteAddCase | test cases]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two new files used -- &#039;&#039;&#039;TestRuntime.cpp &amp;amp; TestRuntime.h&#039;&#039;&#039; -- that implement two Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Static&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Dynamic&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Both Test Units have test cases already implemented (used for reference), and each has one test method that you are required to implement: &#039;&#039;&#039;Exercise&#039;&#039;&#039; in one and &#039;&#039;&#039;dynamic_Exercise&#039;&#039;&#039; in the other. Currently the exercise methods return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Runtime APIs&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestRuntime_Static --run TestRuntime_Dynamic&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
   test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
     &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
   test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
     &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  ------------------------------------------------------------&lt;br /&gt;
  Summary: 6 passed, 0 failed, 0 in progress, 2 not in use.&lt;br /&gt;
  &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (as determined by the &#039;&#039;output&#039;&#039; option). When you open the XML file in a web browser, the XSL stylesheet is automatically applied to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Static::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate the &#039;&#039;sut_mult()&#039;&#039; routine using some simple data input&lt;br /&gt;
*** &#039;&#039;Hint:&#039;&#039; You will probably want to read about [[Runtime Test Services]]&lt;br /&gt;
** Set the &#039;&#039;test method&#039;&#039; name to &#039;&#039;&#039;Mult&#039;&#039;&#039;&lt;br /&gt;
** Use direct Runtime APIs to:&lt;br /&gt;
*** Set the test case description&lt;br /&gt;
*** Capture logging information via comments&lt;br /&gt;
*** Set the status of the test case&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Dynamic-&amp;gt;Exercise&#039;&#039;&#039;&lt;br /&gt;
** Pass in &#039;&#039;&#039;5&#039;&#039;&#039; via command line for the number of test cases to generate&lt;br /&gt;
** Check using &#039;&#039;&#039;srEXIT_XX&#039;&#039;&#039; that the number of test cases has been passed in correctly&lt;br /&gt;
** Add a new Test Suite using &#039;&#039;&#039;Exercise&#039;&#039;&#039; for the name&lt;br /&gt;
** Write a loop generating a dynamic test case using &#039;&#039;NumberOfTestCases&#039;&#039;&lt;br /&gt;
*** Test Case name shall be &#039;&#039;&#039;Test_n&#039;&#039;&#039; where &amp;quot;n&amp;quot; is the loop count as each case must have a unique name.&lt;br /&gt;
*** Add a description for each test case&lt;br /&gt;
*** Validate &#039;&#039;sut_foo()&#039;&#039; using the loop count&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Note: the &#039;&#039;dynamic&#039;&#039; Test Unit uses &#039;&#039;CClass&#039;&#039; packaging&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Execute the &#039;&#039;Test Runtime API&#039;&#039; Test Units only (note: this requires passing a parameter)&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestRuntime_Static --run TestRuntime_Dynamic(5)&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
      &amp;gt; 10 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ---------------------------------------------------------------------&lt;br /&gt;
    Summary: 13 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Test Space. If you have not added Test Space options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;), please see [[Training_Getting_Started#Test_Space_Access| testspace access]].&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestRuntime_Static --run TestRuntime_Dynamic --space TestRuntime --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to the Runtime Test Services API.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Runtime_Test_Services#C_Test_Functions | Runtime Test Services]] with special attention to the following:&lt;br /&gt;
** Review srTestCaseSetDescription()&lt;br /&gt;
** Review srTestCaseSetStatus()&lt;br /&gt;
** Review srTestCaseSetName()&lt;br /&gt;
** Review srTestCaseAddComment()&lt;br /&gt;
** Review srTestSuiteAddCase()&lt;br /&gt;
** Review srTestSuiteAddSuite()&lt;br /&gt;
** Review srTestSuiteSetDescription()&lt;br /&gt;
** Review srTestSuiteSetName()&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
* [[Test_Class_Sample | Test Class Sample]] - Specifically the [[Test_Class_Sample#Runtime_Services |Runtime Services]] Test Units&lt;br /&gt;
* [[Test_CClass_Sample | Test C Class Sample]] - Specifically the [[Test_CClass_Sample#Runtime_Services | Runtime Services]] Test Units&lt;br /&gt;
* [[Test_Function_List_Sample | Test Function List Sample]] - Specifically the [[Test_Function_List_Sample#Runtime_Services | Runtime Services]] Test Units&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Runtime_API&amp;diff=13902</id>
		<title>Training Runtime API</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Runtime_API&amp;diff=13902"/>
		<updated>2013-02-01T19:03:21Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Run and Publish Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on leveraging the [[Runtime_Test_Services | Runtime APIs]] in the context of writing a test. The module covers the following topics:&lt;br /&gt;
* How to set [[Runtime_Test_Services#srTestCaseSetStatus | test status]]&lt;br /&gt;
* Setting [[Runtime_Test_Services#srTestCaseSetName | test case name]] and [[Runtime_Test_Services#srTestCaseSetDescription | description]] directly via the API&lt;br /&gt;
* Setting [[Runtime_Test_Services#srTestSuiteSetName | test suite name]] and [[Runtime_Test_Services#srTestSuiteSetDescription | description]] directly via the API&lt;br /&gt;
* Dynamically creating [[Runtime_Test_Services#srTestSuiteAddSuite | test suites]] and [[Runtime_Test_Services#srTestSuiteAddCase | test cases]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two new files -- &#039;&#039;&#039;TestRuntime.cpp &amp;amp; TestRuntime.h&#039;&#039;&#039; -- that implement two Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Static&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Dynamic&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Each Test Unit has test cases already implemented (for reference) and one test method that you are required to implement: one is called &#039;&#039;&#039;Exercise&#039;&#039;&#039; and the other is called &#039;&#039;&#039;dynamic_Exercise&#039;&#039;&#039;. Currently the exercise methods return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an options file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]]&lt;br /&gt;
&lt;br /&gt;
* Execute the &#039;&#039;Test Runtime API&#039;&#039; Test Units only&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestRuntime_Static --run TestRuntime_Dynamic&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
   test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
     &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
   test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
     &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  ------------------------------------------------------------&lt;br /&gt;
  Summary: 6 passed, 0 failed, 0 in progress, 2 not in use.&lt;br /&gt;
  &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (per the &#039;&#039;output&#039;&#039; option). When the XML file is opened in a web browser, the XSL stylesheet is applied automatically to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Static::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate the &#039;&#039;sut_mult()&#039;&#039; routine using some simple data input&lt;br /&gt;
*** &#039;&#039;Hint:&#039;&#039; You will probably want to read about [[Runtime Test Services]]&lt;br /&gt;
** Set the &#039;&#039;test method&#039;&#039; name to &#039;&#039;&#039;Mult&#039;&#039;&#039;&lt;br /&gt;
** Use direct Runtime APIs to:&lt;br /&gt;
*** Set the test case description&lt;br /&gt;
*** Capture logging information via comments&lt;br /&gt;
*** Set the status of the test case&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Dynamic-&amp;gt;Exercise&#039;&#039;&#039;&lt;br /&gt;
** Pass in &#039;&#039;&#039;5&#039;&#039;&#039; via command line for the number of test cases to generate&lt;br /&gt;
** Check using &#039;&#039;&#039;srEXIT_XX&#039;&#039;&#039; that the number of test cases has been passed in correctly&lt;br /&gt;
** Add a new Test Suite using &#039;&#039;&#039;Exercise&#039;&#039;&#039; for the name&lt;br /&gt;
** Write a loop generating a dynamic test case using &#039;&#039;NumberOfTestCases&#039;&#039;&lt;br /&gt;
*** The Test Case name shall be &#039;&#039;&#039;Test_n&#039;&#039;&#039;, where &amp;quot;n&amp;quot; is the loop count, since each case must have a unique name.&lt;br /&gt;
*** Add a description for each test case&lt;br /&gt;
*** Validate &#039;&#039;sut_foo()&#039;&#039; using the loop count&lt;br /&gt;
&lt;br /&gt;
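The dynamic loop described above can be sketched as follows. This is a language-neutral illustration only: the actual Test Unit is C/C++ code driving the STRIDE runtime APIs, and every name in the sketch is hypothetical.&lt;br /&gt;

```python
# Illustrative sketch (not STRIDE code): build one uniquely named test case
# per loop iteration, following the Test_n naming rule from the exercise.
def make_dynamic_cases(number_of_test_cases):
    """Return (name, description) pairs, one per generated test case."""
    cases = []
    for n in range(number_of_test_cases):
        # Each case must have a unique name: Test_0, Test_1, ...
        name = "Test_%d" % n
        description = "dynamically generated case %d of %d" % (
            n, number_of_test_cases)
        cases.append((name, description))
    return cases

for name, description in make_dynamic_cases(5):
    print(name, "-", description)
```

In the real exercise each pair would instead be registered through &#039;&#039;srTestSuiteAddSuite()&#039;&#039; / &#039;&#039;srTestSuiteAddCase()&#039;&#039; on the suite named &#039;&#039;&#039;Exercise&#039;&#039;&#039;.&lt;br /&gt;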
&lt;br /&gt;
* Note: the &#039;&#039;dynamic&#039;&#039; Test Unit uses &#039;&#039;CClass&#039;&#039; packaging&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Execute the &#039;&#039;Test Runtime API&#039;&#039; Test Units only (note: requires passing a parameter)&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestRuntime_Static --run TestRuntime_Dynamic(5)&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
      &amp;gt; 10 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ---------------------------------------------------------------------&lt;br /&gt;
    Summary: 13 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Testspace. If you have not added Testspace options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;), please see [[Training_Getting_Started#Test_Space_Access| Testspace access]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestRuntime_Static --run TestRuntime_Dynamic --space TestRuntime --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to passing parameters to Test Units.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Runtime_Test_Services#C_Test_Functions | Runtime Test Services]] with special attention to the following:&lt;br /&gt;
** Review srTestCaseSetDescription()&lt;br /&gt;
** Review srTestCaseSetStatus()&lt;br /&gt;
** Review srTestCaseSetName()&lt;br /&gt;
** Review srTestCaseAddComment()&lt;br /&gt;
** Review srTestSuiteAddCase()&lt;br /&gt;
** Review srTestSuiteAddSuite()&lt;br /&gt;
** Review srTestSuiteSetDescription()&lt;br /&gt;
** Review srTestSuiteSetName()&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
* [[Test_Class_Sample | Test Class Sample]] - Specifically the [[Test_Class_Sample#Runtime_Services |Runtime Services]] Test Units&lt;br /&gt;
* [[Test_CClass_Sample | Test C Class Sample]] - Specifically the [[Test_CClass_Sample#Runtime_Services | Runtime Services]] Test Units&lt;br /&gt;
* [[Test_Function_List_Sample | Test Function List Sample]] - Specifically the [[Test_Function_List_Sample#Runtime_Services | Runtime Services]] Test Units&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13901</id>
		<title>Training Doubling</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13901"/>
		<updated>2013-02-01T19:02:52Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Run and Publish Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on explaining how to leverage &#039;&#039;&#039;Test Doubles&#039;&#039;&#039; in the context of executing a test. For an overview of &#039;&#039;&#039;intercepting&#039;&#039;&#039; existing functions, please refer to [[Using_Test_Doubles | Test Doubles]]. The module covers the following topics:&lt;br /&gt;
* How to apply pragmas for [[Function_Capturing | function intercepting]]&lt;br /&gt;
* How to decide what kind of &#039;&#039;mangling&#039;&#039; is required&lt;br /&gt;
** &#039;&#039;Definition&#039;&#039; versus &#039;&#039;Reference&#039;&#039;&lt;br /&gt;
** &#039;&#039;Explicit&#039;&#039; versus &#039;&#039;Implicit&#039;&#039;&lt;br /&gt;
* Setting and Resetting the Double implementation&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two new files -- &#039;&#039;&#039;TestDouble.cpp &amp;amp; TestDouble.h&#039;&#039;&#039; -- that implement two Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Each Test Unit has test cases already implemented (for reference) and one test method, called &#039;&#039;&#039;Exercise&#039;&#039;&#039;, that you are required to implement. Currently the exercise methods return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an options file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]]&lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Double&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference --run TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 0 failed, 0 in progress, 2 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (per the &#039;&#039;output&#039;&#039; option). When the XML file is opened in a web browser, the XSL stylesheet is applied automatically to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;strlen()&#039;&#039; routine:&lt;br /&gt;
*** Add a &#039;&#039;NOTE&#039;&#039; capturing its name when it is called&lt;br /&gt;
*** Use the real &#039;&#039;strlen()&#039;&#039; to return the length of the passed-in string&lt;br /&gt;
** Use &#039;&#039;srEXPECT_EQ()&#039;&#039; to validate that &#039;&#039;sut_strcheck()&#039;&#039; returns correct length &lt;br /&gt;
** Make sure to restore the original routine &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;sut_strcpy()&#039;&#039; routine:&lt;br /&gt;
*** Log its name when it is called&lt;br /&gt;
*** Validate string passed to &#039;&#039;sut_strsave()&#039;&#039; is received correctly by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
*** Remember that &#039;&#039;sut_strsave()&#039;&#039; calls &#039;&#039;sut_strcpy()&#039;&#039; with the string passed to it&lt;br /&gt;
*** Can use a test macro to validate the string is correctly passed to the double&lt;br /&gt;
*** Call the original function (&#039;&#039;sut_strcpy()&#039;&#039;) with a &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; (e.g. &amp;quot;Exercise Test String 2&amp;quot;)&lt;br /&gt;
**** Call the original within the test double &lt;br /&gt;
**** Make sure to reset the original function within the mock before calling it (i.e. srDOUBLE_RESET)&lt;br /&gt;
** Call &#039;&#039;sut_strsave()&#039;&#039; with a string (e.g. &amp;quot;Exercise Test String 1&amp;quot;)&lt;br /&gt;
** Use &#039;&#039;sut_strget()&#039;&#039; to retrieve a string&lt;br /&gt;
** Compare retrieved string with the &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; inserted by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
** Note: this won&#039;t work if you wait until the end of the test to restore the original routine (i.e. srDOUBLE_SET)&lt;br /&gt;
&lt;br /&gt;
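As a conceptual sketch of the set / call / reset flow above: in STRIDE the double is installed on a captured C function with srDOUBLE_SET and removed with srDOUBLE_RESET, whereas the plain Python below merely rebinds functions, and all names are hypothetical stand-ins.&lt;br /&gt;

```python
# Conceptual stand-in for the double flow: the double logs its call,
# then calls the original with a DIFFERENT string.
calls = []

def real_strcpy(dest, src):
    # Stand-in for the original sut_strcpy(): copy src into dest.
    dest.clear()
    dest.extend(src)

def double_strcpy(dest, src):
    # Test double: log that it was called and with what string.
    calls.append("double_strcpy: " + "".join(src))
    # Call the original from inside the double, but with a different string,
    # mirroring the reset-then-call step in the exercise.
    real_strcpy(dest, list("Exercise Test String 2"))

saved = []
double_strcpy(saved, list("Exercise Test String 1"))  # sut_strsave() analogue
retrieved = "".join(saved)                            # sut_strget() analogue
```

Note how the substitution happens inside the double, before the test later compares the retrieved string; this mirrors why waiting until the end of the test to restore the original routine would not work.&lt;br /&gt;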
&lt;br /&gt;
* Execute &#039;&#039;Test Double&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 6 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Testspace. If you have not added Testspace options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;), please see [[Training_Getting_Started#Test_Space_Access| Testspace access]].&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition --space TestDouble --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to using Test Doubles.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
* [[Using_Test_Doubles | Using Test Doubles]] - Outlines the basic steps to enable a &#039;&#039;double&#039;&#039;&lt;br /&gt;
** Capture the f(x) that is going to be doubled&lt;br /&gt;
** Code the test logic to additionally set the Double and restore original f(x)&lt;br /&gt;
&lt;br /&gt;
* [[Scl_function | Intercepting a function]] - Pragma specifics outline&lt;br /&gt;
** For this exercise, there are &#039;&#039;&#039;NO&#039;&#039;&#039; optional parameters when capturing an f(x) to be &#039;&#039;&#039;intercepted&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Details of the pragma parameters can be understood in the [[Intercept_Module#Interceptor | Intercept Module]] article. Please review in detail the following:&lt;br /&gt;
** &#039;&#039;context&#039;&#039; - This refers to intercepting at the calling or the called routine.&lt;br /&gt;
** &#039;&#039;name-mangling&#039;&#039; - How the function name is switched during compilation&lt;br /&gt;
** &#039;&#039;group-id&#039;&#039; - Used to associate a group of intercepted functions &lt;br /&gt;
&lt;br /&gt;
* Some &#039;&#039;&#039;external links&#039;&#039;&#039; to review:&lt;br /&gt;
** [http://www.martinfowler.com/bliki/TestDouble.html Martin Fowler definition]&lt;br /&gt;
** [http://en.wikipedia.org/wiki/Test_Double Wikipedia definition]&lt;br /&gt;
** [http://msdn.microsoft.com/en-us/magazine/cc163358.aspx MSDN Magazine description]&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Double_Sample | Test Double Sample]] - a useful reference. This example can be built and executed, like all of the samples, using the Off-Target environment.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_File_IO&amp;diff=13900</id>
		<title>Training File IO</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_File_IO&amp;diff=13900"/>
		<updated>2013-02-01T19:02:35Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Run and Publish Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on leveraging [[File_Transfer_Services | File IO]] in the context of a Test Unit executing on a target. The module covers the following:&lt;br /&gt;
* Accessing host-based files&lt;br /&gt;
* Reading and writing content to and from host files&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two test files -- &#039;&#039;&#039;TestFile.cpp &amp;amp; TestFile.h&#039;&#039;&#039; -- that implement one Test Unit:&lt;br /&gt;
* &#039;&#039;&#039;TestFile&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Test Unit has test cases already implemented (used for reference) and has one test method that you are required to implement (called &#039;&#039;&#039;Exercise&#039;&#039;&#039;).  Currently this method is empty and returns a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]]  using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an options file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]]&lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test File&#039;&#039; Test Unit only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestFile&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
      &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (per the &#039;&#039;output&#039;&#039; option). When the XML file is opened in a web browser, the XSL stylesheet is applied automatically to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* Read the data content of the &#039;&#039;TestFileInput.dat&#039;&#039; file created by the previous test method (SDK\Windows or SDK/Posix).&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will probably want to read about [[File Transfer Services]]&lt;br /&gt;
* Make sure to exit the test if an error occurs when opening file(s) (i.e. use &#039;&#039;&#039;srEXIT_XX&#039;&#039;&#039;)&lt;br /&gt;
* Take a summation of the data content using &#039;&#039;sut_add()&#039;&#039; &lt;br /&gt;
* Validate summation with content in &#039;&#039;TestFileSum.dat&#039;&#039; file&lt;br /&gt;
* Add a &#039;&#039;NOTE&#039;&#039; capturing important information such as &#039;&#039;bytes read&#039;&#039;, &#039;&#039;summation&#039;&#039;, etc.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
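The read / sum / validate steps above can be sketched as follows. This is an illustrative stand-in only: the real test method uses the File Transfer Services (&#039;&#039;srFileOpen()&#039;&#039;, &#039;&#039;srFileRead()&#039;&#039;, ...) against the host file system, and the whitespace-separated integer file format assumed here is not specified by this page.&lt;br /&gt;

```python
import os
import tempfile

def checked_sum(input_path, sum_path):
    # Read the data values, total them, and compare with the expected sum.
    with open(input_path) as f:
        values = [int(tok) for tok in f.read().split()]
    with open(sum_path) as f:
        expected = int(f.read().strip())
    total = 0
    for v in values:
        total = total + v  # the real test would accumulate via sut_add()
    return total, expected, total == expected

# Demo with made-up data in a temp directory (file names follow the page).
d = tempfile.mkdtemp()
inp = os.path.join(d, "TestFileInput.dat")
exp = os.path.join(d, "TestFileSum.dat")
with open(inp, "w") as f:
    f.write("1 2 3 4")
with open(exp, "w") as f:
    f.write("10")
total, expected, ok = checked_sum(inp, exp)
```

On success, the real test would record a &#039;&#039;NOTE&#039;&#039; with the bytes read and the summation, then set the test case status.&lt;br /&gt;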
* Execute only &#039;&#039;TestFile&#039;&#039;  &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestFile&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
      &amp;gt; 1 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 1 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
   &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s) publish your results to Test Space.  If you have not added test space options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) please see [[Training_Getting_Started#Test_Space_Access| testspace access]].&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestFile --space TestFile --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to using File Transfer Services from target test code.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
* [[File_Transfer_Services | File Transfer Services]] - important information for using file operations on host files from test code executing on the target&lt;br /&gt;
** Review &#039;&#039;srFileOpen()&#039;&#039; and &#039;&#039;srFileClose()&#039;&#039;&lt;br /&gt;
** Review &#039;&#039;srFileRead()&#039;&#039; and &#039;&#039;srFileWrite()&#039;&#039;&lt;br /&gt;
** Other references as needed&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
* [[File_Services_Sample | File Services Sample]] - presents a few basic examples of how to use the File Transfer Services API to interact with the host file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13899</id>
		<title>Training Doubling</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13899"/>
		<updated>2013-02-01T19:02:00Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Run and Publish Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on explaining how to leverage &#039;&#039;&#039;Test Doubles&#039;&#039;&#039; in the context of executing a test. For an overview of &#039;&#039;&#039;intercepting&#039;&#039;&#039; existing functions, please refer to [[Using_Test_Doubles | Test Doubles]]. The module covers the following topics:&lt;br /&gt;
* How to apply pragmas for [[Function_Capturing | function intercepting]]&lt;br /&gt;
* How to decide what kind of &#039;&#039;mangling&#039;&#039; is required&lt;br /&gt;
** &#039;&#039;Definition&#039;&#039; versus &#039;&#039;Reference&#039;&#039;&lt;br /&gt;
** &#039;&#039;Explicit&#039;&#039; versus &#039;&#039;Implicit&#039;&#039;&lt;br /&gt;
* Setting and Resetting the Double implementation&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two new files -- &#039;&#039;&#039;TestDouble.cpp &amp;amp; TestDouble.h&#039;&#039;&#039; -- that implement two Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Each Test Unit has test cases already implemented (for reference) and one test method, called &#039;&#039;&#039;Exercise&#039;&#039;&#039;, that you are required to implement. Currently the exercise methods return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an options file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]]&lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Double&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference --run TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 0 failed, 0 in progress, 2 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (per the &#039;&#039;output&#039;&#039; option). When the XML file is opened in a web browser, the XSL stylesheet is applied automatically to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;strlen()&#039;&#039; routine:&lt;br /&gt;
*** Add a &#039;&#039;NOTE&#039;&#039; capturing its name when it is called&lt;br /&gt;
*** Use the real &#039;&#039;strlen()&#039;&#039; to return the length of the passed-in string&lt;br /&gt;
** Use &#039;&#039;srEXPECT_EQ()&#039;&#039; to validate that &#039;&#039;sut_strcheck()&#039;&#039; returns correct length &lt;br /&gt;
** Make sure to restore the original routine &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;sut_strcpy()&#039;&#039; routine:&lt;br /&gt;
*** Log its name when it is called&lt;br /&gt;
*** Validate string passed to &#039;&#039;sut_strsave()&#039;&#039; is received correctly by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
*** Remember that &#039;&#039;sut_strsave()&#039;&#039; calls &#039;&#039;sut_strcpy()&#039;&#039; with the string passed to it&lt;br /&gt;
*** Can use a test macro to validate the string is correctly passed to the double&lt;br /&gt;
*** Call the original function (&#039;&#039;sut_strcpy()&#039;&#039;) with a &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; (e.g. &amp;quot;Exercise Test String 2&amp;quot;)&lt;br /&gt;
**** Call the original within the test double &lt;br /&gt;
**** Make sure to reset the original function within the mock before calling it (i.e. srDOUBLE_RESET)&lt;br /&gt;
** Call &#039;&#039;sut_strsave()&#039;&#039; with a string (e.g. &amp;quot;Exercise Test String 1&amp;quot;)&lt;br /&gt;
** Use &#039;&#039;sut_strget()&#039;&#039; to retrieve a string&lt;br /&gt;
** Compare retrieved string with the &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; inserted by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
** Note: this won&#039;t work if you wait until the end of the test to restore the original routine (i.e. srDOUBLE_SET)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Double&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 6 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Testspace. If you have not added Testspace options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;), please see [[Training_Getting_Started#Test_Space_Access| Testspace access]].&lt;br /&gt;
&lt;br /&gt;
  #### Testspace options (partial) ####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition --space TestDouble --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to using Test Doubles.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
* [[Using_Test_Doubles | Using Test Doubles]] - Outlines the basic steps to enable a &#039;&#039;double&#039;&#039;&lt;br /&gt;
** Capture the f(x) that is going to be doubled&lt;br /&gt;
** Code the test logic to additionally set the Double and restore original f(x)&lt;br /&gt;
&lt;br /&gt;
* [[Scl_function | Intercepting a function]] - Pragma specifics outline&lt;br /&gt;
** For this exercise, there are &#039;&#039;&#039;NO&#039;&#039;&#039; optional parameters when capturing an f(x) to be &#039;&#039;&#039;intercepted&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Details of the pragma parameters can be understood in the [[Intercept_Module#Interceptor | Intercept Module]] article. Please review in detail the following:&lt;br /&gt;
** &#039;&#039;context&#039;&#039; - This refers to intercepting either the caller or the called routine. &lt;br /&gt;
** &#039;&#039;name-mangling&#039;&#039; - How the function name is switched during compilation&lt;br /&gt;
** &#039;&#039;group-id&#039;&#039; - Used to associate a group of intercepted functions &lt;br /&gt;
&lt;br /&gt;
* Some &#039;&#039;&#039;external links&#039;&#039;&#039; to review:&lt;br /&gt;
** [http://www.martinfowler.com/bliki/TestDouble.html Martin Fowler definition]&lt;br /&gt;
** [http://en.wikipedia.org/wiki/Test_Double Wikipedia definition]&lt;br /&gt;
** [http://msdn.microsoft.com/en-us/magazine/cc163358.aspx MSDN Magazine description]&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Double_Sample | Test Double Sample]] that can be a useful reference. This reference example can be built and executed like all of the samples using the Off-Target environment.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Runtime_API&amp;diff=13898</id>
		<title>Training Runtime API</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Runtime_API&amp;diff=13898"/>
		<updated>2013-02-01T18:46:38Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Implement Exercise */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on leveraging the [[Runtime_Test_Services | Runtime APIs]] in the context of writing a test. The module covers the following topics:&lt;br /&gt;
* How to set [[Runtime_Test_Services#srTestCaseSetStatus | test status]]&lt;br /&gt;
* Setting [[Runtime_Test_Services#srTestCaseSetName | test case name]] and [[Runtime_Test_Services#srTestCaseSetDescription | description]] directly via the API&lt;br /&gt;
* Setting [[Runtime_Test_Services#srTestSuiteSetName | test suite name]] and [[Runtime_Test_Services#srTestSuiteSetDescription | description]] directly via the API&lt;br /&gt;
* Dynamically creating [[Runtime_Test_Services#srTestSuiteAddSuite | test suites]] and [[Runtime_Test_Services#srTestSuiteAddCase | test cases]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two new files used -- &#039;&#039;&#039;TestRuntime.cpp &amp;amp; TestRuntime.h&#039;&#039;&#039; -- that implement two Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Static&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Dynamic&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Both Test Units have test cases already implemented (used for reference), and each has one test method that you are required to implement: one is called &#039;&#039;&#039;Exercise&#039;&#039;&#039; and the other is called &#039;&#039;&#039;dynamic_Exercise&#039;&#039;&#039;. Currently the exercise methods return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Startup TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Runtime APIs&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestRuntime_Static --run TestRuntime_Dynamic&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
   test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
     &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
   test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
     &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  ------------------------------------------------------------&lt;br /&gt;
  Summary: 6 passed, 0 failed, 0 in progress, 2 not in use.&lt;br /&gt;
  &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results using a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (based on the &#039;&#039;output&#039;&#039; option). When the XML file is opened in a web browser, the XSL stylesheet is automatically applied to render it as HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Static::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate the &#039;&#039;sut_mult()&#039;&#039; routine using some simple data input&lt;br /&gt;
*** &#039;&#039;Hint:&#039;&#039; You will probably want to read about [[Runtime Test Services]]&lt;br /&gt;
** Set the &#039;&#039;test method&#039;&#039; name to &#039;&#039;&#039;Mult&#039;&#039;&#039;&lt;br /&gt;
** Use direct Runtime APIs to:&lt;br /&gt;
*** Set the test case description&lt;br /&gt;
*** Capture logging information via comments&lt;br /&gt;
*** Set the status of the test case&lt;br /&gt;
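The bookkeeping described above can be sketched with a simple model. This is an illustrative Python stand-in only: the real Exercise() is written in C++ and performs these steps through the STRIDE Runtime APIs (srTestCaseSetName, srTestCaseSetDescription, srTestCaseAddComment, srTestCaseSetStatus), and sut_mult() below is a placeholder for the actual SUT routine.

```python
# Illustrative model only; the real exercise calls the STRIDE Runtime APIs
# from C++. sut_mult() is a stand-in for the SUT routine under validation.

def sut_mult(a, b):
    # Stand-in for the SUT routine.
    return a * b

# Record the per-case data the Runtime APIs would set on the device.
case = {"name": "Mult", "description": "Validates sut_mult()", "comments": []}
case["comments"].append(f"sut_mult(3, 4) returned {sut_mult(3, 4)}")
case["status"] = "PASS" if sut_mult(3, 4) == 12 else "FAIL"
```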
&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Dynamic-&amp;gt;Exercise&#039;&#039;&#039;&lt;br /&gt;
** Pass in &#039;&#039;&#039;5&#039;&#039;&#039; via command line for the number of test cases to generate&lt;br /&gt;
** Check using &#039;&#039;&#039;srEXIT_XX&#039;&#039;&#039; that the number of test cases has been passed in correctly&lt;br /&gt;
** Add a new Test Suite using &#039;&#039;&#039;Exercise&#039;&#039;&#039; for the name&lt;br /&gt;
** Write a loop generating a dynamic test case using &#039;&#039;NumberOfTestCases&#039;&#039;&lt;br /&gt;
*** Test Case name shall be &#039;&#039;&#039;Test_n&#039;&#039;&#039; where &amp;quot;n&amp;quot; is the loop count as each case must have a unique name.&lt;br /&gt;
*** Add a description for each test case&lt;br /&gt;
*** Validate &#039;&#039;sut_foo()&#039;&#039; using the loop count&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Note: The &#039;&#039;dynamic&#039;&#039; Test Unit uses &#039;&#039;CClass&#039;&#039; packaging&lt;br /&gt;
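The dynamic generation loop described above can be sketched as follows. This is an illustrative Python model of the logic only; the real C++ test adds the suite and cases through srTestSuiteAddSuite and srTestSuiteAddCase, and generate_cases is a hypothetical name used here for illustration.

```python
# Illustrative model of dynamic test-case generation; the real Test Unit
# uses srTestSuiteAddSuite / srTestSuiteAddCase from C++.

def generate_cases(number_of_test_cases):
    # Add a new suite named "Exercise", then one uniquely named case per
    # loop iteration ("Test_n", where n is the loop count).
    suite = {"name": "Exercise", "cases": []}
    for n in range(1, number_of_test_cases + 1):
        case = {
            "name": f"Test_{n}",
            "description": f"Dynamically generated case {n}",
        }
        suite["cases"].append(case)
    return suite

suite = generate_cases(5)   # 5 is passed via the command line in the exercise
```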
&lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Runtime API&#039;&#039; Test Units only (NOTE - requires passing parameter)&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestRuntime_Static --run &amp;quot;TestRuntime_Dynamic(5)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
      &amp;gt; 10 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ---------------------------------------------------------------------&lt;br /&gt;
    Summary: 13 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Test Space. To make this easier, we recommend updating your existing option file (myoptions.txt) with the following, if you have not already done so: &lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestRuntime_Static --run TestRuntime_Dynamic --space TestRuntime --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to the Runtime APIs used in this module.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Runtime_Test_Services#C_Test_Functions | Runtime Test Services]] with special attention to the following:&lt;br /&gt;
** Review srTestCaseSetDescription()&lt;br /&gt;
** Review srTestCaseSetStatus()&lt;br /&gt;
** Review srTestCaseSetName()&lt;br /&gt;
** Review srTestCaseAddComment()&lt;br /&gt;
** Review srTestSuiteAddCase()&lt;br /&gt;
** Review srTestSuiteAddSuite()&lt;br /&gt;
** Review srTestSuiteSetDescription()&lt;br /&gt;
** Review srTestSuiteSetName()&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
* [[Test_Class_Sample | Test Class Sample]] - Specifically the [[Test_Class_Sample#Runtime_Services |Runtime Services]] Test Units&lt;br /&gt;
* [[Test_CClass_Sample | Test C Class Sample]] - Specifically the [[Test_CClass_Sample#Runtime_Services | Runtime Services]] Test Units&lt;br /&gt;
* [[Test_Function_List_Sample | Test Function List Sample]] - Specifically the [[Test_Function_List_Sample#Runtime_Services | Runtime Services]] Test Units&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_File_IO&amp;diff=13897</id>
		<title>Training File IO</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_File_IO&amp;diff=13897"/>
		<updated>2013-02-01T17:30:37Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Implement Exercise */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on leveraging [[File_Transfer_Services | File IO]] from within a Test Unit executing on a target. The module covers the following:&lt;br /&gt;
* Accessing host-based files&lt;br /&gt;
* Reading and writing content to / from host files&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two test files used -- &#039;&#039;&#039;TestFile.cpp &amp;amp; TestFile.h&#039;&#039;&#039; -- that implement one Test Unit:&lt;br /&gt;
* &#039;&#039;&#039;TestFile&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Test Unit has test cases already implemented (used for reference) and has one test method that you are required to implement (called &#039;&#039;&#039;Exercise&#039;&#039;&#039;).  Currently this method is empty and returns a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]]  using SDK makefile&lt;br /&gt;
* Startup TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test File&#039;&#039; Test Unit only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestFile&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
      &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results using a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (based on the &#039;&#039;output&#039;&#039; option). When the XML file is opened in a web browser, the XSL stylesheet is automatically applied to render it as HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* Read the data content of the &#039;&#039;TestFileInput.dat&#039;&#039; file created by the previous test method (SDK\Windows or SDK/Posix).&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will probably want to read about [[File Transfer Services]]&lt;br /&gt;
* Make sure to exit the test if an error occurs when opening file(s) (i.e. use &#039;&#039;&#039;srEXIT_XX&#039;&#039;&#039;)&lt;br /&gt;
* Compute the summation of the data content using &#039;&#039;sut_add()&#039;&#039;&lt;br /&gt;
* Validate the summation against the content of the &#039;&#039;TestFileSum.dat&#039;&#039; file&lt;br /&gt;
* Add a &#039;&#039;NOTE&#039;&#039; capturing important information such as &#039;&#039;bytes read&#039;&#039;, &#039;&#039;summation&#039;&#039;, etc.&lt;br /&gt;
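The read/sum/validate flow above can be sketched as follows. This is an illustrative Python model: the on-target exercise uses srFileOpen, srFileRead and srFileClose together with sut_add(), and the whitespace-separated integer layout of the .dat files shown here is an assumption.

```python
# Illustrative model of the exercise flow; the real test runs on the target
# and uses the File Transfer Services APIs instead of Python file IO.
import os
import tempfile

def run_exercise(workdir):
    inp = os.path.join(workdir, "TestFileInput.dat")  # written by the previous test method
    exp = os.path.join(workdir, "TestFileSum.dat")
    with open(inp, "w") as f:
        f.write("1 2 3 4\n")
    with open(exp, "w") as f:
        f.write("10\n")
    # Read the input data and sum it (stand-in for accumulating via sut_add()).
    with open(inp) as f:
        data = [int(tok) for tok in f.read().split()]
    total = 0
    for v in data:
        total = total + v
    # Validate the summation against the expected-sum file.
    with open(exp) as f:
        expected = int(f.read().strip())
    return total, expected

total, expected = run_exercise(tempfile.mkdtemp())
```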
 &lt;br /&gt;
&lt;br /&gt;
* Execute only &#039;&#039;TestFile&#039;&#039;  &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestFile&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
      &amp;gt; 1 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 1 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
   &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Test Space. To make this easier, we recommend updating your existing option file (myoptions.txt) with the following, if you have not already done so: &lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestFile --space TestFile --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to performing file IO from test code.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
* [[File_Transfer_Services | File Transfer Services]] - important information for using file operations on host files from test code executing on the target&lt;br /&gt;
** Review &#039;&#039;srFileOpen()&#039;&#039; and &#039;&#039;srFileClose()&#039;&#039;&lt;br /&gt;
** Review &#039;&#039;srFileRead()&#039;&#039; and &#039;&#039;srFileWrite()&#039;&#039;&lt;br /&gt;
** Other references as needed&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
* [[File_Services_Sample | File Services Sample]] - presents a few basic examples of how to use the File Transfer Services API to interact with the host file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Fixturing&amp;diff=13896</id>
		<title>Training Fixturing</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Fixturing&amp;diff=13896"/>
		<updated>2013-01-31T21:25:32Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Run and Publish Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module focuses on &#039;&#039;&#039;Fixturing&#039;&#039;&#039; in the context of the STRIDE Test System. For a high-level overview refer to the following wiki section on [[What_is_Unique_About_STRIDE#Fixturing | fixturing]]. This module covers the following topics:&lt;br /&gt;
* Startup logic at the beginning of a Test Unit&lt;br /&gt;
* [[Test_Unit_Pragmas#Fixturing_Pragmas | Setup]] logic for each test method&lt;br /&gt;
* [[Test_Unit_Pragmas#Fixturing_Pragmas | Teardown]] logic for each test method&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two test files used -- &#039;&#039;&#039;TestFixture.cpp &amp;amp; TestFixture.h&#039;&#039;&#039;. These implement one Test Unit:&lt;br /&gt;
* &#039;&#039;&#039;TestFixture&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Test Unit has one test case already implemented (used for reference) and has &#039;&#039;&#039;two test methods&#039;&#039;&#039; that you are required to implement (called &#039;&#039;&#039;Exercise1&#039;&#039;&#039; and &#039;&#039;&#039;Exercise2&#039;&#039;&#039;).  Currently the &#039;&#039;Exercise1&#039;&#039; method is empty and returns a &#039;&#039;NOT IN USE&#039;&#039; status. The &#039;&#039;Exercise2&#039;&#039; method does not yet exist. &lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Startup TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Fixture&#039;&#039; Test Unit only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestFixture&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
      &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results using a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (based on the &#039;&#039;output&#039;&#039; option). When the XML file is opened in a web browser, the XSL stylesheet is automatically applied to render it as HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestFixture::Exercise1&#039;&#039;&#039;&lt;br /&gt;
** Add a &#039;&#039;NOTE&#039;&#039; that will display the &#039;&#039;name&#039;&#039; of the test.&lt;br /&gt;
** Validate that &#039;&#039;&#039;Sequence2&#039;&#039;&#039; is stored when starting the Thread by checking that &amp;quot;Sequence2&amp;quot; equals MyString[0].&lt;br /&gt;
*** &#039;&#039;HINT&#039;&#039;: The method to focus on in SUT is DoSequencing()&lt;br /&gt;
*** &#039;&#039;HINT&#039;&#039;: m_sequence is incremented as each method goes through Teardown.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestFixture::Exercise2&#039;&#039;&#039;&lt;br /&gt;
** Add new test method declaration called &#039;&#039;Exercise2&#039;&#039; to TestFixture.h&lt;br /&gt;
** Add new test method implementation of &#039;&#039;Exercise2&#039;&#039; to TestFixture.cpp&lt;br /&gt;
** Add a &#039;&#039;NOTE&#039;&#039; that will display the name of the test&lt;br /&gt;
** Validate that &#039;&#039;&#039;Sequence3&#039;&#039;&#039; is persisted when starting the Thread&lt;br /&gt;
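The setup/teardown sequencing these exercises rely on can be modeled as below. This is a simplified Python illustration only (the real Test Unit is C++, and DoSequencing()/MyString live in the SUT); it shows how per-method setup and teardown bracket each test method while state such as m_sequence persists for the life of the unit.

```python
# Simplified model of fixturing: setup runs before each test method,
# teardown after it, and the sequence counter (mirroring m_sequence)
# persists across methods and is incremented in teardown.

class FixtureModel:
    def __init__(self):
        self.sequence = 0              # persists for the life of the unit

    def setup(self):
        self.current = f"Sequence{self.sequence + 1}"

    def teardown(self):
        self.sequence = self.sequence + 1

fixture = FixtureModel()
for _ in range(3):                     # three test methods in the unit
    fixture.setup()
    fixture.teardown()
# By the third method, setup observed "Sequence3".
```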
&lt;br /&gt;
&lt;br /&gt;
* Execute only &#039;&#039;TestFixture&#039;&#039;  &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestFixture&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
   &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s) publish your results to Test Space (--space TestFixture). If you have not added test space options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) please see [[Training_Getting_Started#Test_Space_Access| testspace access]].&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestFixture --space TestFixture --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to fixturing within Test Units.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Unit_Pragmas#Fixturing_Pragmas | Fixturing Pragmas]] -- important information for performing setup and teardown around each executed test method within a Test Unit&lt;br /&gt;
** Review the types of Test Units these pragmas are applicable for &lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Class_Sample | Test Class Sample]] - Specifically the [[Test_Class_Sample#Basic::Fixtures | Basic Fixtures]] Test Unit&lt;br /&gt;
* [[Test_CClass_Sample | Test C Class Sample]] - Specifically the [[Test_CClass_Sample#basic_fixtures | Basic Fixtures]] Test Unit&lt;br /&gt;
* [[Test_Function_List_Sample | Test Function List Sample]]- Specifically the [[Test_Function_List_Sample#basic_fixtures |Basic Fixtures]] Test Unit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Parameters&amp;diff=13895</id>
		<title>Training Parameters</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Parameters&amp;diff=13895"/>
		<updated>2013-01-31T21:24:47Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Run and Publish Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on &#039;&#039;&#039;Parameter passing&#039;&#039;&#039; in the context of the Test Unit. The module covers the following:&lt;br /&gt;
* [[Parameterized_Test_Units | Passing parameters]] via the Runner&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two test files used -- &#039;&#039;&#039;TestParam.cpp &amp;amp; TestParam.h&#039;&#039;&#039;. These implement one Test Unit:&lt;br /&gt;
* &#039;&#039;&#039;TestParam&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Test Unit has test cases already implemented (used for reference) and has one test method that you are required to implement (called &#039;&#039;&#039;Exercise&#039;&#039;&#039;).  Currently this method is empty and returns a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Startup TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Param&#039;&#039; Test Unit only&lt;br /&gt;
: Note that parameters are passed to the test unit from the host command line.&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run &amp;quot;TestParam(9,4,\&amp;quot;STRING1\&amp;quot;)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
      &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results using a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (based on the &#039;&#039;output&#039;&#039; option). When the XML file is opened in a web browser, the XSL stylesheet is automatically applied to render it as HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
;Exercise()&lt;br /&gt;
* Add a &#039;&#039;NOTE&#039;&#039; that will display the &#039;&#039;integer parameters&#039;&#039; passed via command line&lt;br /&gt;
* Add an Assert macro guarding against no parameters being passed (i.e. default value of 0 received)&lt;br /&gt;
* Call &#039;&#039;sut_add()&#039;&#039; with parameters and validate that it returns the expected value&lt;br /&gt;
* Call &#039;&#039;sut_mult()&#039;&#039; with parameters and validate that it returns the expected value&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will probably want to read about [[Parameterized Test Units]]&lt;br /&gt;
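The checks above can be sketched as follows. This is an illustrative Python model; sut_add() and sut_mult() are stand-ins for the real SUT routines, and in the actual C++ test the guard and validations would be expressed with the STRIDE Test Macros rather than plain asserts.

```python
# Illustrative model of Exercise(); the real test is a C++ Test Unit and
# the SUT routines below are stand-ins.

def sut_add(a, b):
    return a + b

def sut_mult(a, b):
    return a * b

def exercise(a=0, b=0):
    # Guard against no parameters being passed (defaults of 0 received).
    assert a != 0 and b != 0, "expected parameters from the command line"
    # Validate the SUT routines against the expected values.
    assert sut_add(a, b) == a + b
    assert sut_mult(a, b) == a * b
    return "passed"
```

Running with parameters, e.g. `exercise(9, 4)`, passes; calling `exercise()` with no arguments trips the guard, mirroring the failing run shown below.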
 &lt;br /&gt;
&lt;br /&gt;
* Execute only &#039;&#039;TestParam&#039;&#039; &#039;&#039;&#039;without&#039;&#039;&#039; parameters &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestParam&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
      &amp;gt; 0 passed, 2 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 0 passed, 2 failed, 0 in progress, 0 not in use.&lt;br /&gt;
   &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Execute only &#039;&#039;TestParam&#039;&#039; &#039;&#039;&#039;with&#039;&#039;&#039; parameters &lt;br /&gt;
  &amp;gt; stride -O myoptions.txt --run &amp;quot;TestParam(9,4,\&amp;quot;STRING1\&amp;quot;)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
   &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s) publish your results to Test Space (--space TestParam). If you have not added test space options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) please see [[Training_Getting_Started#Test_Space_Access| testspace access]].&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run &amp;quot;TestParam(9,4,\&amp;quot;STRING1\&amp;quot;)&amp;quot; --space TestParam --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to passing parameters to Test Units.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Parameterized_Test_Units |Parameterized Test Units]] - Describes how to supply parameters to your Test Units&lt;br /&gt;
* [[STRIDE_Runner | STRIDE Runner]] - Information on command line options&lt;br /&gt;
** Refer to &amp;quot;Test Unit Specification Examples&amp;quot; - specifically examples using parameters&lt;br /&gt;
** Review the &amp;lt;tt&amp;gt;--run [-r] arg&amp;lt;/tt&amp;gt; option&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
* [[Test_Class_Sample | Test Class Sample]] - Specifically the [[Test_Class_Sample#Basic::Parameterized | Parameterized]] Test Unit&lt;br /&gt;
* [[Test_CClass_Sample | Test C Class Sample]] - Specifically the [[Test_CClass_Sample#basic_parameterized | Parameterized]] Test Unit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Test_Macros&amp;diff=13894</id>
		<title>Training Test Macros</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Test_Macros&amp;diff=13894"/>
		<updated>2013-01-31T21:19:56Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module focuses on the basics of writing and executing tests. It covers the following topics:&lt;br /&gt;
* The C++ class [[Test_Units | Test Unit packaging]] option&lt;br /&gt;
* How to leverage [[Test_Macros | Test Macros]]&lt;br /&gt;
* Using [[Test_Macros#Notes | Notes]] and [[Test_Log | Test Logs]], and the difference between them&lt;br /&gt;
* Creating [[Test_Documentation_in_C/C%2B%2B | Test Documentation]] via the build process&lt;br /&gt;
* Executing Tests using the [[STRIDE_Runner | Runner]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test unit is implemented in two source files: &#039;&#039;&#039;TestBasic.cpp&#039;&#039;&#039; and &#039;&#039;&#039;TestBasic.h&#039;&#039;&#039;. The comments and descriptions are contained in &#039;&#039;&#039;TestBasic.h&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The Test Unit has test cases already implemented (used for reference) and has a test method that you are required to implement (called &#039;&#039;&#039;Exercise&#039;&#039;&#039;). Currently this method is empty and returns a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using the SDK makefile&lt;br /&gt;
* Invoke TestApp found in the &#039;&#039;/out/bin&#039;&#039; directory&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute the &#039;&#039;Test Basic&#039;&#039; Test Unit only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestBasic&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
You can also review the details of the test results using a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the sample_src directory (based on the output option). When the XML file is opened in a web browser, the XSL stylesheet is automatically applied to render it as HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercises ===&lt;br /&gt;
&lt;br /&gt;
Now edit the training source code to complete the following exercises:&lt;br /&gt;
&lt;br /&gt;
;Exercise()&lt;br /&gt;
* &#039;&#039;&#039;Assignment 1:&#039;&#039;&#039; Add an &#039;&#039;srNOTE&#039;&#039; to &amp;lt;tt&amp;gt;Exercise()&amp;lt;/tt&amp;gt; that will add a simple message to the test report (e.g. &amp;quot;Exercise ...&amp;quot;)&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will probably want to read about [[Test Macros]]&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; srTestCaseSetStatus(srTEST_CASE_DEFAULT, srTEST_NOTINUSE, 0) is a placeholder to be removed when you add your test code.&lt;br /&gt;
* &#039;&#039;&#039;Assignment 2:&#039;&#039;&#039; Within &amp;lt;tt&amp;gt;Exercise()&amp;lt;/tt&amp;gt; validate that &amp;lt;tt&amp;gt;sut_mult(1,1)&amp;lt;/tt&amp;gt; does NOT equal &amp;lt;tt&amp;gt;sut_add(1,1)&amp;lt;/tt&amp;gt;&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will use a [[Test Macros | Test Macro]].&lt;br /&gt;
&lt;br /&gt;
=== Check Results ===&lt;br /&gt;
&lt;br /&gt;
* Before you rebuild &amp;lt;tt&amp;gt;TestApp.exe&amp;lt;/tt&amp;gt;, you will need to shut it down by entering &#039;q&#039; in its console window or closing the window directly.&lt;br /&gt;
&lt;br /&gt;
* After rebuilding, invoke &amp;lt;tt&amp;gt;TestApp.exe&amp;lt;/tt&amp;gt; once again.&lt;br /&gt;
&lt;br /&gt;
* Execute only &#039;&#039;TestBasic&#039;&#039; using the STRIDE Runner:&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestBasic&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
      &amp;gt; 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* If you already have TestApp.xml open in your browser, simply refresh (F5) to view the latest test results.&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Test Space (&amp;lt;tt&amp;gt;--space TestBasic&amp;lt;/tt&amp;gt;). If you have not yet added Test Space options to your options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;), please see [[Training_Getting_Started#Test_Space_Access| Test Space access]].&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestBasic --space TestBasic --upload&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
      &amp;gt; 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ------------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
  Uploading to test space...&lt;br /&gt;
&lt;br /&gt;
=== Validate Uploaded Results ===&lt;br /&gt;
&lt;br /&gt;
Navigate to Test Space using your browser, then validate your results against the pre-configured baseline. For details, please see [[Training_Getting_Started#Confirming_Training_Exercise_Results | Confirming Training Exercise Results]]&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
You can view the results on test space by pointing your browser at &#039;&#039;&#039;&amp;lt;tt&amp;gt;&amp;lt;nowiki&amp;gt;https://yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&amp;lt;/tt&amp;gt;&#039;&#039;&#039;. Log in using the credentials you supplied earlier in the options file.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to Test Unit basics.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Units_Overview | Test Units Overview]] - Provides a general overview of Test Units (i.e. writing tests in C and C++)&lt;br /&gt;
* [[Test_Units |Test Unit Packaging]] - Discusses the three types of packaging that can be used for Test Units (we prefer Test Classes even for C programmers)&lt;br /&gt;
* [[Test_Macros|Test Macros]] - Optional macros that provide shortcuts for testing assertions and automatic report annotation (you will want to use these).&lt;br /&gt;
* [[Test_Macros#Note_Macros|Notes]] - Used to add logging information to your &#039;&#039;&#039;test logic&#039;&#039;&#039; (automatically added to test reports)&lt;br /&gt;
* [[Test_Log|Test Logs]] - Used to add logging information to your &#039;&#039;&#039;source code&#039;&#039;&#039; (added to test reports if enabled)&lt;br /&gt;
* [[Tracing|Tracing]] - The Runner allows tracing on logs and test points.&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Class_Sample|Test Class Sample]] -  Specifically the [[Test_Class_Sample#Basic|Basic Simple]] Test Unit&lt;br /&gt;
* [[Test_Macros_Sample|Test Macro Sample]] - This sample covers simple uses of each of the Test Macros. &lt;br /&gt;
&lt;br /&gt;
Note - Other useful Test Unit Samples related to packaging include the following:&lt;br /&gt;
&lt;br /&gt;
* [[Test_CClass_Sample|Test C Class Sample]]&lt;br /&gt;
* [[Test_Function_List_Sample|Test Function List Sample]] &lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Test_Macros&amp;diff=13893</id>
		<title>Training Test Macros</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Test_Macros&amp;diff=13893"/>
		<updated>2013-01-31T21:04:56Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Implement Exercises */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module focuses on the basics of writing and executing tests. It covers the following topics:&lt;br /&gt;
* The C++ class [[Test_Units | Test Unit packaging]] option&lt;br /&gt;
* How to leverage [[Test_Macros | Test Macros]]&lt;br /&gt;
* The difference between [[Test_Macros#Notes | Notes]] and [[Test_Log | Test Logs]], and when to use each&lt;br /&gt;
* Creating [[Test_Documentation_in_C/C%2B%2B | Test Documentation]] via the build process&lt;br /&gt;
* Executing Tests using the [[STRIDE_Runner | Runner]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test unit is implemented in two source files: &#039;&#039;&#039;TestBasic.cpp&#039;&#039;&#039; and &#039;&#039;&#039;TestBasic.h&#039;&#039;&#039;. The comments and descriptions are contained in &#039;&#039;&#039;TestBasic.h&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The Test Unit already has several test cases implemented (for reference) and one test method, called &#039;&#039;&#039;Exercise&#039;&#039;&#039;, that you are required to implement. Currently this method is empty and returns a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using the SDK makefile&lt;br /&gt;
* Invoke TestApp found in the &#039;&#039;/out/bin&#039;&#039; directory&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute the &#039;&#039;TestBasic&#039;&#039; Test Unit&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestBasic&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
You can also review the test results in detail using a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml]], which is created in the sample_src directory (as specified by the output option). When you open the XML file in a web browser, the XSL stylesheet is applied automatically to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercises ===&lt;br /&gt;
&lt;br /&gt;
Now edit the training source code to complete the following exercises:&lt;br /&gt;
&lt;br /&gt;
;Exercise()&lt;br /&gt;
* &#039;&#039;&#039;Assignment 1:&#039;&#039;&#039; Add an &#039;&#039;srNOTE&#039;&#039; to &amp;lt;tt&amp;gt;Exercise()&amp;lt;/tt&amp;gt; that will add a simple message to the test report (e.g. &amp;quot;Exercise ...&amp;quot;)&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will probably want to read about [[Test Macros]]&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; srTestCaseSetStatus(srTEST_CASE_DEFAULT, srTEST_NOTINUSE, 0) is a placeholder to be removed when you add your test code.&lt;br /&gt;
* &#039;&#039;&#039;Assignment 2:&#039;&#039;&#039; Within &amp;lt;tt&amp;gt;Exercise()&amp;lt;/tt&amp;gt; validate that &amp;lt;tt&amp;gt;sut_mult(1,1)&amp;lt;/tt&amp;gt; does NOT equal &amp;lt;tt&amp;gt;sut_add(1,1)&amp;lt;/tt&amp;gt;&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will use a [[Test Macros | Test Macro]].&lt;br /&gt;
&lt;br /&gt;
=== Check Results ===&lt;br /&gt;
&lt;br /&gt;
* Before you rebuild &amp;lt;tt&amp;gt;TestApp.exe&amp;lt;/tt&amp;gt;, you will need to shut it down by entering &#039;q&#039; in its console window or closing the window directly.&lt;br /&gt;
&lt;br /&gt;
* After rebuilding, invoke &amp;lt;tt&amp;gt;TestApp.exe&amp;lt;/tt&amp;gt; once again.&lt;br /&gt;
&lt;br /&gt;
* Execute only &#039;&#039;TestBasic&#039;&#039; using the STRIDE Runner:&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestBasic&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
      &amp;gt; 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* If you already have TestApp.xml open in your browser, simply refresh (F5) to view the latest test results.&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Test Space. To simplify setup for now, we recommend updating your existing options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) with the following:&lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestBasic --space TestBasic --upload&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
      &amp;gt; 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ------------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
  Uploading to test space...&lt;br /&gt;
&lt;br /&gt;
=== Validate Uploaded Results ===&lt;br /&gt;
&lt;br /&gt;
Navigate to Test Space using your browser, then validate your results against the pre-configured baseline. For details, please see [[Training_Getting_Started#Confirming_Training_Exercise_Results | Confirming Training Exercise Results]]&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
You can view the results on test space by pointing your browser at &#039;&#039;&#039;&amp;lt;tt&amp;gt;&amp;lt;nowiki&amp;gt;https://yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&amp;lt;/tt&amp;gt;&#039;&#039;&#039;. Log in using the credentials you supplied earlier in the options file.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to Test Unit basics.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Units_Overview | Test Units Overview]] - Provides a general overview of Test Units (i.e. writing tests in C and C++)&lt;br /&gt;
* [[Test_Units |Test Unit Packaging]] - Discusses the three types of packaging that can be used for Test Units (we prefer Test Classes even for C programmers)&lt;br /&gt;
* [[Test_Macros|Test Macros]] - Optional macros that provide shortcuts for testing assertions and automatic report annotation (you will want to use these).&lt;br /&gt;
* [[Test_Macros#Note_Macros|Notes]] - Used to add logging information to your &#039;&#039;&#039;test logic&#039;&#039;&#039; (automatically added to test reports)&lt;br /&gt;
* [[Test_Log|Test Logs]] - Used to add logging information to your &#039;&#039;&#039;source code&#039;&#039;&#039; (added to test reports if enabled)&lt;br /&gt;
* [[Tracing|Tracing]] - The Runner allows tracing on logs and test points.&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Class_Sample|Test Class Sample]] -  Specifically the [[Test_Class_Sample#Basic|Basic Simple]] Test Unit&lt;br /&gt;
* [[Test_Macros_Sample|Test Macro Sample]] - This sample covers simple uses of each of the Test Macros. &lt;br /&gt;
&lt;br /&gt;
Note - Other useful Test Unit Samples related to packaging include the following:&lt;br /&gt;
&lt;br /&gt;
* [[Test_CClass_Sample|Test C Class Sample]]&lt;br /&gt;
* [[Test_Function_List_Sample|Test Function List Sample]] &lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13892</id>
		<title>Training Getting Started</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13892"/>
		<updated>2013-01-31T19:40:46Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Training Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
Our training approach is based on a self-guided tour of the [[STRIDE_Overview#STRIDE_Testing_Features | STRIDE Testing Features]] using reference examples and assigned exercises. The set of examples and the implemented exercises will be built and executed using a standard desktop computer. &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039;, &#039;&#039;reference examples&#039;&#039;, and &#039;&#039;exercises&#039;&#039; have been designed to be as &#039;&#039;&#039;&#039;&#039;simple as possible while sufficiently demonstrating the topic at hand&#039;&#039;&#039;&#039;&#039;. In particular, the &#039;&#039;software under test&#039;&#039; is &#039;&#039;&#039;very light&#039;&#039;&#039; on core application logic -- the focus instead is on the test code that leverages STRIDE to define and execute tests. &lt;br /&gt;
&lt;br /&gt;
The user is expected to work through each of the training modules covering a specific testing feature. Once the exercises are completed (actual test cases implemented), the results are published to [[STRIDE_Overview#STRIDE_Test_Space | Test Space]] (and validated against a pre-created baseline).&lt;br /&gt;
&lt;br /&gt;
The training collateral consists of the following:&lt;br /&gt;
# The [[STRIDE_Overview#STRIDE_Framework | STRIDE Framework]] used to implement and execute tests&lt;br /&gt;
#* The Framework is configured in an [[STRIDE_Off-Target_Environment | Off-Target Environment]] &lt;br /&gt;
# A set of specific [[#Training | Training Modules]] that will guide you through the exercises&lt;br /&gt;
# [[Main_Page | &#039;&#039;Wiki articles&#039;&#039;]] that will provide background material and other technical information&lt;br /&gt;
# Reference [[C/C%2B%2B_Samples | Samples]]&lt;br /&gt;
# [[STRIDE_Overview#STRIDE_Test_Space | STRIDE Test Space]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For more details concerning STRIDE refer to the following:&lt;br /&gt;
* &#039;&#039;[[STRIDE_Overview_Video | Overview screencast]]&#039;&#039;  &lt;br /&gt;
* &#039;&#039;[[What is Unique About STRIDE | What is Unique About STRIDE]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Types of Testing Supported by STRIDE | Types of Testing Supported]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Frequently Asked Questions About STRIDE | Frequently Asked Questions]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;For questions and support [mailto:training@s2technologies.com email us]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Before Starting ==&lt;br /&gt;
Before you can start setting up your environment, you need the following three items:&lt;br /&gt;
* &#039;&#039;&#039;STRIDE Desktop Installation Package&#039;&#039;&#039; (one of the following)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-windows_4.3.yy.zip&amp;lt;/tt&amp;gt; (Windows desktop)&lt;br /&gt;
&#039;&#039;or&#039;&#039;&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-linux_4.3.yy.tgz&amp;lt;/tt&amp;gt; (Linux desktop)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Training Source files&#039;&#039;&#039;&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_training_source.zip&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Test Space User Account URL and Logon Credentials&#039;&#039;&#039;&lt;br /&gt;
** Test Space URL (&amp;lt;tt&amp;gt;https://&amp;lt;i&amp;gt;YourCompany&amp;lt;/i&amp;gt;.stridetestspace.com&amp;lt;/tt&amp;gt;)&lt;br /&gt;
** User-name&lt;br /&gt;
** User-password&lt;br /&gt;
&lt;br /&gt;
=== Desktop Setup ===&lt;br /&gt;
The training requires that you install the [[Desktop_Installation | Desktop Framework]] and that the [[STRIDE_Off-Target_Environment | Off-Target Environment]] is set up correctly and verified.&lt;br /&gt;
&lt;br /&gt;
For an overview of the installation steps please refer to [[Installation_Overview | this]] article.&lt;br /&gt;
&lt;br /&gt;
The following steps are required:&lt;br /&gt;
# Install your [[Desktop_Installation | desktop Framework package]]&lt;br /&gt;
# Read about the [[STRIDE_Off-Target_Environment | Off-Target Environment]]&lt;br /&gt;
# Install [[STRIDE_Off-Target_Environment#Host_Compiler | host C++ compiler]] for your desktop (if needed)&lt;br /&gt;
# Use the Off-Target SDK to [[Building_an_Off-Target_Test_App | build a Test App]]&lt;br /&gt;
# [[Building_an_Off-Target_Test_App#Diagnostics | Run STRIDE diagnostics]] with the Test App built&lt;br /&gt;
&lt;br /&gt;
=== Training Setup ===&lt;br /&gt;
The following source code can be found in the &#039;&#039;&#039;STRIDE_training_source.zip&#039;&#039;&#039; file:&lt;br /&gt;
&lt;br /&gt;
   software_under_test.c | h&lt;br /&gt;
   TestBasic.cpp | h&lt;br /&gt;
   TestParam.cpp | h&lt;br /&gt;
   TestFixture.cpp | h&lt;br /&gt;
   TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime.cpp | h&lt;br /&gt;
   TestFile.cpp | h&lt;br /&gt;
   TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039; is contained in the file &#039;&#039;&#039;software_under_test.c | h&#039;&#039;&#039;. All of its public functions use &#039;&#039;&#039;sut_&#039;&#039;&#039; as a prefix, and all training modules test against this file. Although the test examples are contained in C++ files, most of the test logic is written in standard C.&lt;br /&gt;
 &lt;br /&gt;
* Extract STRIDE_training_source.zip into the directory &amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\sample_src&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/sample_src&amp;lt;/tt&amp;gt; for Linux)&lt;br /&gt;
* Rebuild the TestApp following the instructions for [[Building_an_Off-Target_Test_App#Build_Steps | Building a TestApp]].&lt;br /&gt;
* List all [[Test_Units | Test Units]] within the generated database file using the following command:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
The following remoted Functions and Test Units should be displayed:&lt;br /&gt;
 &lt;br /&gt;
   Functions&lt;br /&gt;
     sut_strcpy(char const * input, char * output) : void&lt;br /&gt;
     strlen(char const * _Str) : size_t&lt;br /&gt;
   Test Units&lt;br /&gt;
     TestBasic()&lt;br /&gt;
     TestDouble_Reference()&lt;br /&gt;
     TestDouble_Definition()&lt;br /&gt;
     TestExpect_Seq()&lt;br /&gt;
     TestExpect_Data()&lt;br /&gt;
     TestExpect_Misc()&lt;br /&gt;
     TestFile()&lt;br /&gt;
     TestFixture()&lt;br /&gt;
     TestParam(int data1, int data2, char * szString)&lt;br /&gt;
     TestRuntime_Static()&lt;br /&gt;
     TestRuntime_Dynamic(int NumberOfTestCases)&lt;br /&gt;
&lt;br /&gt;
=== Run Training Tests ===&lt;br /&gt;
Here we will run all tests in the &amp;lt;tt&amp;gt;TestApp.sidb&amp;lt;/tt&amp;gt; database.&amp;lt;ref&amp;gt;Note that the S2 diagnostic tests are treated separately, and are not run unless the &amp;lt;tt&amp;gt;--diagnostics&amp;lt;/tt&amp;gt; option is specified to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* If you haven&#039;t done so already, [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using the SDK makefile&lt;br /&gt;
* Invoke TestApp found in the &#039;&#039;/out/bin&#039;&#039; directory&lt;br /&gt;
* Create an [[Stride_Runner#Options | options file]] at &amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\myoptions.txt&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/myoptions.txt&amp;lt;/tt&amp;gt; for Linux) with the following content:&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options ######&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;%STRIDE_DIR%\SDK\Windows\sample_src\TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options #####&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;$STRIDE_DIR/SDK/Posix/sample_src/TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all &lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
* If you set up an [[STRIDE_Runner#Environment_Variables | environment variable]] for the &#039;&#039;&#039;device&#039;&#039;&#039; option, it is not required in the options file. Note: command-line options override environment variables.&lt;br /&gt;
* &#039;&#039;stride --help&#039;&#039; provides [[STRIDE_Runner#Options | options information]]&lt;br /&gt;
&lt;br /&gt;
If you haven&#039;t done so already, start &amp;lt;tt&amp;gt;TestApp&amp;lt;/tt&amp;gt; running in a separate console window.&lt;br /&gt;
&lt;br /&gt;
Now run stride as shown below and verify summary results (starting from the &amp;lt;tt&amp;gt;SDK\Windows&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;SDK/Posix&amp;lt;/tt&amp;gt; directory):&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The output should look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Interpreting Results ===&lt;br /&gt;
Open &#039;&#039;&#039;TestApp.xml&#039;&#039;&#039; in your browser; this file is created in the directory from which you ran &#039;&#039;&#039;stride&#039;&#039;&#039; (or at the path specified via the &amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt; command-line option). If you were connected to the Internet when you ran the tests, the TestApp.xsl file is also generated in that directory. When you open TestApp.xml in a web browser, the XSL stylesheet is applied automatically to render HTML. If you use Google Chrome, please see [[Browser Compatibility]].&lt;br /&gt;
&lt;br /&gt;
If you&#039;re interested in the details of the tests, please see the test documentation contained in the test report.&lt;br /&gt;
&lt;br /&gt;
=== Test Space Access ===&lt;br /&gt;
&lt;br /&gt;
As part of the training, users implement exercises (test cases). As each exercise is completed, the results should be uploaded to Test Space. Accessing Test Space (uploading, viewing, etc.) requires a user name and password. Before working on a training module, please confirm that your user account has been set up by [[Test_Space_Setup | logging in]].&lt;br /&gt;
&lt;br /&gt;
Test Space holds the expected results as [[Creating_And_Using_Baselines | baselines]], which are used to verify automatically whether the exercises have been implemented correctly (at least to some degree). For capturing test results, a &#039;&#039;&#039;Training&#039;&#039;&#039; project has been created with the following &#039;&#039;&#039;Spaces&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
   Sandbox     - Results for training setup&lt;br /&gt;
   TestBasic   - Basics training results: TestBasic.cpp | h&lt;br /&gt;
   TestParam   - Parameters training results: TestParam.cpp | h&lt;br /&gt;
   TestFixture - Fixturing training results: TestFixture.cpp | h&lt;br /&gt;
   TestExpect  - Expectations training results: TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime - Runtime API training results: TestRuntime.cpp | h&lt;br /&gt;
   TestFile    - File IO training results: TestFile.cpp | h&lt;br /&gt;
   TestDouble  - Doubling training results: TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To publish your results using the [[STRIDE_Runner | STRIDE Runner]] the following command-line options should be used:&lt;br /&gt;
&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --space &#039;&#039;MODULENAME&#039;&#039; &lt;br /&gt;
  --name &#039;&#039;YOURNAME&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
* For the &#039;&#039;MODULENAME&#039;&#039; option, use the name that corresponds to the training module currently underway (e.g. TestBasic, TestParam, etc.)&lt;br /&gt;
* &#039;&#039;YOURNAME&#039;&#039; should be set to your name (e.g. JohnD); omit spaces from this string&lt;br /&gt;
* If you access the Internet via an HTTP proxy please read [[STRIDE_Runner#Using_a_Proxy | &#039;&#039;&#039;this article&#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
To simplify setup for now, we recommend updating your existing options file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) with the following:&lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
=== Publish Test Space Results ===&lt;br /&gt;
&lt;br /&gt;
To complete the setup, publish your results to Test Space. Make sure to use &#039;&#039;&#039;Sandbox&#039;&#039;&#039; as the space to which you upload your results (see below).&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot; --space Sandbox --upload&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
  Uploading to test space...&lt;br /&gt;
&lt;br /&gt;
=== Confirming Setup ===&lt;br /&gt;
&lt;br /&gt;
After you have run and uploaded the results of the training setup, you should confirm the correctness of your work. The training setup results will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your setup:&lt;br /&gt;
# Navigate to the Sandbox result set in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns. If so, you have completed the setup and are ready to move on to the training modules.&lt;br /&gt;
&lt;br /&gt;
[[image:STRIDE_results_012813.JPG|none|600px| Test Space baseline example]]&lt;br /&gt;
&lt;br /&gt;
== Training ==&lt;br /&gt;
At this point you should be ready to start the actual training. There are &#039;&#039;&#039;7 separate training modules&#039;&#039;&#039; and we recommend the following order:&lt;br /&gt;
&lt;br /&gt;
===Introductory===&lt;br /&gt;
* [[Training_Basics | &#039;&#039;&#039;Basics&#039;&#039;&#039;]] - Covers basics of implementing and executing test cases&lt;br /&gt;
* [[Training_Parameters |&#039;&#039;&#039;Parameters&#039;&#039;&#039;]] - How to pass parameters to a test &lt;br /&gt;
* [[Training_Fixturing | &#039;&#039;&#039;Fixturing&#039;&#039;&#039;]] - Leveraging setup and teardown features&lt;br /&gt;
&lt;br /&gt;
===Advanced===&lt;br /&gt;
* [[Training_Expectations | &#039;&#039;&#039;Expectations&#039;&#039;&#039;]] - Validating code sequencing along with state data&lt;br /&gt;
* [[Training_Runtime_API | &#039;&#039;&#039;Runtime API&#039;&#039;&#039;]] - Using the runtime services to dynamically create test suites / cases&lt;br /&gt;
* [[Training_File_IO | &#039;&#039;&#039;File IO&#039;&#039;&#039;]] - Reading and writing files existing on the host&lt;br /&gt;
* [[Training_Doubling | &#039;&#039;&#039;Doubling&#039;&#039;&#039;]] - Replacing a dependency with a stub, fake, or mock&lt;br /&gt;
&lt;br /&gt;
== Training Confirmation ==&lt;br /&gt;
&lt;br /&gt;
As you run and upload each test unit containing your worked training exercises, you should confirm the correctness of your work.&lt;br /&gt;
&lt;br /&gt;
The correctly worked training test units will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your work:&lt;br /&gt;
# Navigate to your result set in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13891</id>
		<title>Training Getting Started</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13891"/>
		<updated>2013-01-30T22:22:54Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Run Training Tests */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
Our training approach is based on a self-guided tour of the [[STRIDE_Overview#STRIDE_Testing_Features | STRIDE Testing Features]] using reference examples and assigned exercises. The set of examples and the implemented exercises will be built and executed using a standard desktop computer. &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039;, &#039;&#039;reference examples&#039;&#039;, and &#039;&#039;exercises&#039;&#039; have been designed to be as &#039;&#039;&#039;&#039;&#039;simple as possible while sufficiently demonstrating the topic at hand&#039;&#039;&#039;&#039;&#039;. In particular, the &#039;&#039;software under test&#039;&#039; is &#039;&#039;&#039;very light&#039;&#039;&#039; on core application logic -- the focus instead is on the test code that leverages STRIDE to define and execute tests. &lt;br /&gt;
&lt;br /&gt;
The user is expected to work through each of the training modules covering a specific testing feature. Once the exercises are completed (actual test cases implemented), the results are published to [[STRIDE_Overview#STRIDE_Test_Space | Test Space]] (and validated against a pre-created baseline).&lt;br /&gt;
&lt;br /&gt;
The training collateral consists of the following:&lt;br /&gt;
# The [[STRIDE_Overview#STRIDE_Framework | STRIDE Framework]] used to implement and execute tests&lt;br /&gt;
#* The Framework is configured in an [[STRIDE_Off-Target_Environment | Off-Target Environment]] &lt;br /&gt;
# A set of specific [[#Training | Training Modules]] that will guide you through the exercises&lt;br /&gt;
# [[Main_Page | &#039;&#039;Wiki articles&#039;&#039;]] that will provide background material and other technical information&lt;br /&gt;
# Reference [[C/C%2B%2B_Samples | Samples]]&lt;br /&gt;
# [[STRIDE_Overview#STRIDE_Test_Space | STRIDE Test Space]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For more details concerning STRIDE refer to the following:&lt;br /&gt;
* &#039;&#039;[[STRIDE_Overview_Video | Overview screencast]]&#039;&#039;  &lt;br /&gt;
* &#039;&#039;[[What is Unique About STRIDE | What is Unique About STRIDE]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Types of Testing Supported by STRIDE | Types of Testing Supported]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Frequently Asked Questions About STRIDE | Frequently Asked Questions]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;For questions and support [mailto:training@s2technologies.com email us]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Before Starting ==&lt;br /&gt;
Before you can start setting up your environment you need the following 3 items:&lt;br /&gt;
* &#039;&#039;&#039;STRIDE Desktop Installation Package&#039;&#039;&#039; (one of the following)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-windows_4.3.yy.zip&amp;lt;/tt&amp;gt; (Windows desktop)&lt;br /&gt;
&#039;&#039;or&#039;&#039;&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-linux_4.3.yy.tgz&amp;lt;/tt&amp;gt; (Linux desktop)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Training Source files&#039;&#039;&#039;&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_training_source.zip&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Test Space User Account URL and Logon Credentials&#039;&#039;&#039;&lt;br /&gt;
** Test Space URL (&amp;lt;tt&amp;gt;https://&amp;lt;i&amp;gt;YourCompany&amp;lt;/i&amp;gt;.stridetestspace.com&amp;lt;/tt&amp;gt;)&lt;br /&gt;
** User-name&lt;br /&gt;
** User-password&lt;br /&gt;
&lt;br /&gt;
=== Desktop Setup ===&lt;br /&gt;
The training requires that you install the [[Desktop_Installation | Desktop Framework]] and that the [[STRIDE_Off-Target_Environment | Off-Target Environment]] is set up correctly and verified. &lt;br /&gt;
&lt;br /&gt;
For an overview of the installation steps please refer to [[Installation_Overview | this]] article.&lt;br /&gt;
&lt;br /&gt;
The following steps are required:&lt;br /&gt;
# Install your [[Desktop_Installation | desktop Framework package]]&lt;br /&gt;
# Read about the [[STRIDE_Off-Target_Environment | Off-Target Environment]]&lt;br /&gt;
# Install [[STRIDE_Off-Target_Environment#Host_Compiler | host C++ compiler]] for your desktop (if needed)&lt;br /&gt;
# Use the Off-Target SDK to [[Building_an_Off-Target_Test_App | build a Test App]]&lt;br /&gt;
# [[Building_an_Off-Target_Test_App#Diagnostics | Run STRIDE diagnostics]] with the Test App built&lt;br /&gt;
&lt;br /&gt;
=== Training Setup ===&lt;br /&gt;
The following source code can be found in the &#039;&#039;&#039;STRIDE_training_source.zip&#039;&#039;&#039; file:&lt;br /&gt;
&lt;br /&gt;
   software_under_test.c | h&lt;br /&gt;
   TestBasic.cpp | h&lt;br /&gt;
   TestParam.cpp | h&lt;br /&gt;
   TestFixture.cpp | h&lt;br /&gt;
   TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime.cpp | h&lt;br /&gt;
   TestFile.cpp | h&lt;br /&gt;
   TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039; is contained in the file &#039;&#039;&#039;software_under_test.c | h&#039;&#039;&#039;.  All of the public functions use &#039;&#039;&#039;sut_&#039;&#039;&#039; as a prefix. All training modules test against this file. Although the test examples are contained in C++ files, most of the test logic is written in standard C. &lt;br /&gt;
 &lt;br /&gt;
* Extract STRIDE_training_source.zip into the directory &amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\sample_src&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/sample_src&amp;lt;/tt&amp;gt; for Linux)&lt;br /&gt;
* Rebuild the TestApp following the instructions for [[Building_an_Off-Target_Test_App | Building a TestApp]].&lt;br /&gt;
* List all [[Test_Units | Test Units]] within the generated database file using the following command:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
The following remoted Functions and Test Units should be displayed:&lt;br /&gt;
 &lt;br /&gt;
   Functions&lt;br /&gt;
     sut_strcpy(char const * input, char * output) : void&lt;br /&gt;
     strlen(char const * _Str) : size_t&lt;br /&gt;
   Test Units&lt;br /&gt;
     TestBasic()&lt;br /&gt;
     TestDouble_Reference()&lt;br /&gt;
     TestDouble_Definition()&lt;br /&gt;
     TestExpect_Seq()&lt;br /&gt;
     TestExpect_Data()&lt;br /&gt;
     TestExpect_Misc()&lt;br /&gt;
     TestFile()&lt;br /&gt;
     TestFixture()&lt;br /&gt;
     TestParam(int data1, int data2, char * szString)&lt;br /&gt;
     TestRuntime_Static()&lt;br /&gt;
     TestRuntime_Dynamic(int NumberOfTestCases)&lt;br /&gt;
&lt;br /&gt;
=== Run Training Tests ===&lt;br /&gt;
Here we will run all tests in the &amp;lt;tt&amp;gt;TestApp.sidb&amp;lt;/tt&amp;gt; database.&amp;lt;ref&amp;gt;Note that the S2 diagnostic tests are treated separately, and are not run unless the &amp;lt;tt&amp;gt;--diagnostics&amp;lt;/tt&amp;gt; option is specified to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* If you haven&#039;t done so already, [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using the SDK makefile&lt;br /&gt;
* Invoke TestApp found in the &#039;&#039;/out/bin&#039;&#039; directory&lt;br /&gt;
* Create an [[Stride_Runner#Options | option file]] (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) that includes the following content in &amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\myoptions.txt&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/myoptions.txt&amp;lt;/tt&amp;gt; for Linux)&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options ######&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;%STRIDE_DIR%\SDK\Windows\sample_src\TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options #####&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;$STRIDE_DIR/SDK/Posix/sample_src/TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all &lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
* If you set up an [[STRIDE_Runner#Environment_Variables | environment variable]] for the &#039;&#039;&#039;device&#039;&#039;&#039; option then it is not required in the option file. Note: Command line options override environment variables.  &lt;br /&gt;
* &#039;&#039;stride --help&#039;&#039; provides [[STRIDE_Runner#Options | options information]]&lt;br /&gt;
&lt;br /&gt;
If you haven&#039;t done so already, start &amp;lt;tt&amp;gt;TestApp&amp;lt;/tt&amp;gt; running in a separate console window.&lt;br /&gt;
&lt;br /&gt;
Now run stride as shown below and verify summary results (starting from the &amp;lt;tt&amp;gt;SDK\Windows&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;SDK/Posix&amp;lt;/tt&amp;gt; directory):&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The output should look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
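If you drive the runner from a script (for example in a CI job), the counts in the Summary line shown above are easy to extract. The following Python snippet is an illustrative sketch only, not part of STRIDE; the parse_summary helper and its dictionary keys are our own invention:

```python
import re

# Illustrative helper, not part of STRIDE: pull the counts out of the
# runner's summary line, e.g.
#   "Summary: 21 passed, 3 failed, 0 in progress, 11 not in use."
SUMMARY_RE = re.compile(
    r"Summary:\s*(\d+) passed, (\d+) failed, (\d+) in progress, (\d+) not in use"
)

def parse_summary(line):
    """Return a dict of counts, or None if the line is not a summary line."""
    m = SUMMARY_RE.search(line)
    if m is None:
        return None
    passed, failed, in_progress, not_in_use = (int(g) for g in m.groups())
    return {
        "passed": passed,
        "failed": failed,
        "in_progress": in_progress,
        "not_in_use": not_in_use,
    }

counts = parse_summary("Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.")
print(counts["passed"], counts["failed"])  # 21 3
```

A wrapper like this could, for instance, fail a CI build whenever the failed count is nonzero.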
=== Interpreting Results ===&lt;br /&gt;
Open &#039;&#039;&#039;TestApp.xml&#039;&#039;&#039; in your browser; this file is created in the directory from which you ran &#039;&#039;&#039;stride&#039;&#039;&#039; (or at the path specified via the &amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt; command line option). If you were connected to the Internet when you ran the tests, the TestApp.xsl stylesheet is also generated in the same directory. When you open TestApp.xml in a web browser, the XSL is automatically applied to render the report as HTML. If you use Google Chrome, please see [[Browser Compatibility]].&lt;br /&gt;
&lt;br /&gt;
If you&#039;re interested in the details of the tests, please see the test documentation contained in the test report.&lt;br /&gt;
&lt;br /&gt;
=== Test Space Access ===&lt;br /&gt;
&lt;br /&gt;
As part of the training, users implement exercises (test cases). As each exercise is completed, the results are expected to be uploaded to Test Space. Accessing Test Space (uploading, viewing, etc.) requires a user-name and password. Before working on a training module, please confirm that your user account has been set up by [[Test_Space_Setup | logging in]]. &lt;br /&gt;
&lt;br /&gt;
Test Space holds expected results captured as [[Creating_And_Using_Baselines | baselines]], which are used to automatically verify whether the exercises have been implemented correctly (at least to some degree). For capturing test results a &#039;&#039;&#039;Training&#039;&#039;&#039; project has been created with the following &#039;&#039;&#039;Spaces&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
   Sandbox     - Results for training setup&lt;br /&gt;
   TestBasic   - Basics training results: TestBasic.cpp | h&lt;br /&gt;
   TestParam   - Parameters training results: TestParam.cpp | h&lt;br /&gt;
   TestFixture - Fixturing training results: TestFixture.cpp | h&lt;br /&gt;
   TestExpect  - Expectations training results: TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime - Runtime API training results: TestRuntime.cpp | h&lt;br /&gt;
   TestFile    - File IO training results: TestFile.cpp | h&lt;br /&gt;
   TestDouble  - Doubling training results: TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To publish your results using the [[STRIDE_Runner | STRIDE Runner]], use the following command-line options:&lt;br /&gt;
&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --space &#039;&#039;MODULENAME&#039;&#039; &lt;br /&gt;
  --name &#039;&#039;YOURNAME&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
* Concerning the &#039;&#039;MODULENAME&#039;&#039; option: use the name that corresponds to the training module currently underway (e.g. TestBasic, TestParam, etc.)&lt;br /&gt;
* &#039;&#039;YOURNAME&#039;&#039; should be set to your name (e.g. JohnD); omit spaces from this string&lt;br /&gt;
* If you access the Internet via an HTTP proxy please read [[STRIDE_Runner#Using_a_Proxy | &#039;&#039;&#039;this article&#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
To make things easier for now, we recommend that you update your existing option file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) with the following:&lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
=== Publish Test Space Results ===&lt;br /&gt;
&lt;br /&gt;
To complete the setup, publish your results to Test Space. Be sure to upload your results to the &#039;&#039;&#039;Sandbox&#039;&#039;&#039; space (see below).&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot; --space Sandbox --upload&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
  Uploading to test space...&lt;br /&gt;
&lt;br /&gt;
=== Confirming Setup ===&lt;br /&gt;
&lt;br /&gt;
After you have run and uploaded the results of the training setup, you should confirm the correctness of your work. The training setup results will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your setup:&lt;br /&gt;
# Navigate to the Sandbox result set in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns. If so, you have completed the setup and are ready to move on to the training modules.&lt;br /&gt;
&lt;br /&gt;
[[image:STRIDE_results_012813.JPG|none|600px| Test Space baseline example]]&lt;br /&gt;
&lt;br /&gt;
== Training ==&lt;br /&gt;
At this point you should be ready to start the actual training. There are &#039;&#039;&#039;7 separate training modules&#039;&#039;&#039; and we recommend the following order:&lt;br /&gt;
&lt;br /&gt;
===Introductory===&lt;br /&gt;
* [[Training_Basics | &#039;&#039;&#039;Basics&#039;&#039;&#039;]] - Covers basics of implementing and executing test cases&lt;br /&gt;
* [[Training_Parameters |&#039;&#039;&#039;Parameters&#039;&#039;&#039;]] - How to pass parameters to a test &lt;br /&gt;
* [[Training_Fixturing | &#039;&#039;&#039;Fixturing&#039;&#039;&#039;]] - Leveraging setup and teardown features&lt;br /&gt;
&lt;br /&gt;
===Advanced===&lt;br /&gt;
* [[Training_Expectations | &#039;&#039;&#039;Expectations&#039;&#039;&#039;]] - Validating code sequencing along with state data&lt;br /&gt;
* [[Training_Runtime_API | &#039;&#039;&#039;Runtime API&#039;&#039;&#039;]] - Using the runtime services to dynamically create test suites / cases&lt;br /&gt;
* [[Training_File_IO | &#039;&#039;&#039;File IO&#039;&#039;&#039;]] - Reading and writing files existing on the host&lt;br /&gt;
* [[Training_Doubling | &#039;&#039;&#039;Doubling&#039;&#039;&#039;]] - Replacing a dependency with a stub, fake, or mock&lt;br /&gt;
&lt;br /&gt;
== Training Confirmation ==&lt;br /&gt;
&lt;br /&gt;
As you run and upload each test unit containing your worked training exercises, you should confirm the correctness of your work.&lt;br /&gt;
&lt;br /&gt;
The correctly worked training test units will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your work:&lt;br /&gt;
# Navigate to your result set in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13890</id>
		<title>Training Getting Started</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13890"/>
		<updated>2013-01-30T22:18:52Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Training Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
Our training approach is based on a self-guided tour of the [[STRIDE_Overview#STRIDE_Testing_Features | STRIDE Testing Features]] using reference examples and assigned exercises. The set of examples and the implemented exercises will be built and executed using a standard desktop computer. &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039;, &#039;&#039;reference examples&#039;&#039;, and &#039;&#039;exercises&#039;&#039; have been designed to be as &#039;&#039;&#039;&#039;&#039;simple as possible while sufficiently demonstrating the topic at hand&#039;&#039;&#039;&#039;&#039;. In particular, the &#039;&#039;software under test&#039;&#039; is &#039;&#039;&#039;very light&#039;&#039;&#039; on core application logic -- the focus instead is on the test code that leverages STRIDE to define and execute tests. &lt;br /&gt;
&lt;br /&gt;
The user is expected to work through each of the training modules covering a specific testing feature. Once the exercises are completed (actual test cases implemented), the results are published to [[STRIDE_Overview#STRIDE_Test_Space | Test Space]] (and validated against a pre-created baseline).&lt;br /&gt;
&lt;br /&gt;
The training collateral consists of the following:&lt;br /&gt;
# The [[STRIDE_Overview#STRIDE_Framework | STRIDE Framework]] used to implement and execute tests&lt;br /&gt;
#* The Framework is configured in an [[STRIDE_Off-Target_Environment | Off-Target Environment]] &lt;br /&gt;
# A set of specific [[#Training | Training Modules]] that will guide you through the exercises&lt;br /&gt;
# [[Main_Page | &#039;&#039;Wiki articles&#039;&#039;]] that will provide background material and other technical information&lt;br /&gt;
# Reference [[C/C%2B%2B_Samples | Samples]]&lt;br /&gt;
# [[STRIDE_Overview#STRIDE_Test_Space | STRIDE Test Space]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For more details concerning STRIDE refer to the following:&lt;br /&gt;
* &#039;&#039;[[STRIDE_Overview_Video | Overview screencast]]&#039;&#039;  &lt;br /&gt;
* &#039;&#039;[[What is Unique About STRIDE | What is Unique About STRIDE]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Types of Testing Supported by STRIDE | Types of Testing Supported]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Frequently Asked Questions About STRIDE | Frequently Asked Questions]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;For questions and support [mailto:training@s2technologies.com email us]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Before Starting ==&lt;br /&gt;
Before you can start setting up your environment you need the following 3 items:&lt;br /&gt;
* &#039;&#039;&#039;STRIDE Desktop Installation Package&#039;&#039;&#039; (one of the following)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-windows_4.3.yy.zip&amp;lt;/tt&amp;gt; (Windows desktop)&lt;br /&gt;
&#039;&#039;or&#039;&#039;&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-linux_4.3.yy.tgz&amp;lt;/tt&amp;gt; (Linux desktop)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Training Source files&#039;&#039;&#039;&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_training_source.zip&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Test Space User Account URL and Logon Credentials&#039;&#039;&#039;&lt;br /&gt;
** Test Space URL (&amp;lt;tt&amp;gt;https://&amp;lt;i&amp;gt;YourCompany&amp;lt;/i&amp;gt;.stridetestspace.com&amp;lt;/tt&amp;gt;)&lt;br /&gt;
** User-name&lt;br /&gt;
** User-password&lt;br /&gt;
&lt;br /&gt;
=== Desktop Setup ===&lt;br /&gt;
The training requires that you install the [[Desktop_Installation | Desktop Framework]] and that the [[STRIDE_Off-Target_Environment | Off-Target Environment]] is set up correctly and verified. &lt;br /&gt;
&lt;br /&gt;
For an overview of the installation steps please refer to [[Installation_Overview | this]] article.&lt;br /&gt;
&lt;br /&gt;
The following steps are required:&lt;br /&gt;
# Install your [[Desktop_Installation | desktop Framework package]]&lt;br /&gt;
# Read about the [[STRIDE_Off-Target_Environment | Off-Target Environment]]&lt;br /&gt;
# Install [[STRIDE_Off-Target_Environment#Host_Compiler | host C++ compiler]] for your desktop (if needed)&lt;br /&gt;
# Use the Off-Target SDK to [[Building_an_Off-Target_Test_App | build a Test App]]&lt;br /&gt;
# [[Building_an_Off-Target_Test_App#Diagnostics | Run STRIDE diagnostics]] with the Test App built&lt;br /&gt;
&lt;br /&gt;
=== Training Setup ===&lt;br /&gt;
The following source code can be found in the &#039;&#039;&#039;STRIDE_training_source.zip&#039;&#039;&#039; file:&lt;br /&gt;
&lt;br /&gt;
   software_under_test.c | h&lt;br /&gt;
   TestBasic.cpp | h&lt;br /&gt;
   TestParam.cpp | h&lt;br /&gt;
   TestFixture.cpp | h&lt;br /&gt;
   TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime.cpp | h&lt;br /&gt;
   TestFile.cpp | h&lt;br /&gt;
   TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039; is contained in the file &#039;&#039;&#039;software_under_test.c | h&#039;&#039;&#039;.  All of the public functions use &#039;&#039;&#039;sut_&#039;&#039;&#039; as a prefix. All training modules test against this file. Although the test examples are contained in C++ files, most of the test logic is written in standard C. &lt;br /&gt;
 &lt;br /&gt;
* Extract STRIDE_training_source.zip into the directory &amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\sample_src&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/sample_src&amp;lt;/tt&amp;gt; for Linux)&lt;br /&gt;
* Rebuild the TestApp following the instructions for [[Building_an_Off-Target_Test_App | Building a TestApp]].&lt;br /&gt;
* List all [[Test_Units | Test Units]] within the generated database file using the following command:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
The following remoted Functions and Test Units should be displayed:&lt;br /&gt;
 &lt;br /&gt;
   Functions&lt;br /&gt;
     sut_strcpy(char const * input, char * output) : void&lt;br /&gt;
     strlen(char const * _Str) : size_t&lt;br /&gt;
   Test Units&lt;br /&gt;
     TestBasic()&lt;br /&gt;
     TestDouble_Reference()&lt;br /&gt;
     TestDouble_Definition()&lt;br /&gt;
     TestExpect_Seq()&lt;br /&gt;
     TestExpect_Data()&lt;br /&gt;
     TestExpect_Misc()&lt;br /&gt;
     TestFile()&lt;br /&gt;
     TestFixture()&lt;br /&gt;
     TestParam(int data1, int data2, char * szString)&lt;br /&gt;
     TestRuntime_Static()&lt;br /&gt;
     TestRuntime_Dynamic(int NumberOfTestCases)&lt;br /&gt;
&lt;br /&gt;
=== Run Training Tests ===&lt;br /&gt;
Here we will run all tests in the &amp;lt;tt&amp;gt;TestApp.sidb&amp;lt;/tt&amp;gt; database.&amp;lt;ref&amp;gt;Note that the S2 diagnostic tests are treated separately, and are not run unless the &amp;lt;tt&amp;gt;--diagnostics&amp;lt;/tt&amp;gt; option is specified to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* If you haven&#039;t done so already, [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using the SDK makefile&lt;br /&gt;
* Invoke TestApp found in the &#039;&#039;/out/bin&#039;&#039; directory&lt;br /&gt;
* Create an [[STRIDE_Runner#Options | option file]] (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) in the &amp;lt;tt&amp;gt;SDK\Windows&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;SDK/Posix&amp;lt;/tt&amp;gt; directory with the following content.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options ######&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;%STRIDE_DIR%\SDK\Windows\sample_src\TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options #####&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;$STRIDE_DIR/SDK/Posix/sample_src/TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all &lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
* If you set up an [[STRIDE_Runner#Environment_Variables | environment variable]] for the &#039;&#039;&#039;device&#039;&#039;&#039; option, it is not required in the option file. Note: command-line options override environment variables.&lt;br /&gt;
* &#039;&#039;stride --help&#039;&#039; provides [[STRIDE_Runner#Options | options information]]&lt;br /&gt;
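The option file format above is simple: one flag (with its argument) per line, with comment lines beginning with a hash. As a rough illustration only (this is not the runner's actual parser), such a file can be expanded into argv-style tokens:

```python
# Illustrative sketch only -- not the STRIDE runner's actual parser.
# Assumes one "--flag value" pair per line; lines starting with '#' are comments.
def parse_options(text):
    args = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comment lines
        args.extend(line.split(None, 1))  # split the flag from its argument
    return args

opts = """##### Command Line Options #####
--device "TCP:localhost:8000"
--log_level all"""
print(parse_options(opts))
```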
&lt;br /&gt;
If you haven&#039;t done so already, start &amp;lt;tt&amp;gt;TestApp&amp;lt;/tt&amp;gt; running in a separate console window.&lt;br /&gt;
&lt;br /&gt;
Now run stride as shown below and verify summary results (starting from the &amp;lt;tt&amp;gt;SDK\Windows&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;SDK/Posix&amp;lt;/tt&amp;gt; directory):&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The output should look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
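If you want to post-process the console output (for example in a CI script), the final Summary line is easy to pick apart. A minimal sketch, assuming the output format shown above:

```python
import re

# Minimal sketch for extracting counts from the runner's Summary line,
# assuming the console format shown above.
line = "Summary: 21 passed, 3 failed, 0 in progress, 11 not in use."
m = re.search(r"(\d+) passed, (\d+) failed, (\d+) in progress, (\d+) not in use", line)
passed, failed, in_progress, not_in_use = (int(g) for g in m.groups())
print(passed, failed, in_progress, not_in_use)
```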
&lt;br /&gt;
=== Interpreting Results ===&lt;br /&gt;
Open &#039;&#039;&#039;TestApp.xml&#039;&#039;&#039; in your browser; this file is created in the directory from which you ran &#039;&#039;&#039;stride&#039;&#039;&#039; (or at the path specified via the &amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt; command line option). If you were connected to the Internet when you ran the tests, the TestApp.xsl stylesheet is also generated in the same directory; opening TestApp.xml in a web browser automatically applies the XSL to render HTML. If you use Google Chrome, please see [[Browser Compatibility]].&lt;br /&gt;
&lt;br /&gt;
If you&#039;re interested in the details of the tests, please see the test documentation contained in the test report.&lt;br /&gt;
&lt;br /&gt;
=== Test Space Access ===&lt;br /&gt;
&lt;br /&gt;
As part of the training, users implement exercises (test cases). As each exercise is completed, the results are expected to be uploaded to Test Space. Accessing Test Space (uploading, viewing, etc.) requires a username and password. Before working on a training module, please confirm that your user account has been set up by [[Test_Space_Setup | logging in]].&lt;br /&gt;
&lt;br /&gt;
Test Space stores expected results as [[Creating_And_Using_Baselines | baselines]], which are used to automatically verify whether the exercises have been implemented correctly (at least to some degree). To capture test results, a &#039;&#039;&#039;Training&#039;&#039;&#039; project has been created with the following &#039;&#039;&#039;Spaces&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
   Sandbox     - Results for training setup&lt;br /&gt;
   TestBasic   - Basics training results: TestBasic.cpp | h&lt;br /&gt;
   TestParam   - Parameters training results: TestParam.cpp | h&lt;br /&gt;
   TestFixture - Fixturing training results: TestFixture.cpp | h&lt;br /&gt;
   TestExpect  - Expectations training results: TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime - Runtime API training results: TestRuntime.cpp | h&lt;br /&gt;
   TestFile    - File IO training results: TestFile.cpp | h&lt;br /&gt;
   TestDouble  - Doubling training results: TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To publish your results using the [[STRIDE_Runner | STRIDE Runner]], use the following command-line options:&lt;br /&gt;
&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --space &#039;&#039;MODULENAME&#039;&#039; &lt;br /&gt;
  --name &#039;&#039;YOURNAME&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
* For &#039;&#039;MODULENAME&#039;&#039;, use the name that corresponds to the training module currently underway (e.g. TestBasic, TestParam, etc.)&lt;br /&gt;
* &#039;&#039;YOURNAME&#039;&#039; should be set to your name (e.g. JohnD); omit spaces from this string&lt;br /&gt;
* If you access the Internet via an HTTP proxy please read [[STRIDE_Runner#Using_a_Proxy | &#039;&#039;&#039;this article&#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
To make this easier, we recommend updating your existing option file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) with the following:&lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
=== Publish Test Space Results ===&lt;br /&gt;
&lt;br /&gt;
To complete the setup, publish your results to Test Space. Make sure to use &#039;&#039;&#039;Sandbox&#039;&#039;&#039; as the space to which you upload your results (see below).&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot; --space Sandbox --upload&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
  Uploading to test space...&lt;br /&gt;
&lt;br /&gt;
=== Confirming Setup ===&lt;br /&gt;
&lt;br /&gt;
After you have run and uploaded the results of the training setup, you should confirm the correctness of your work. The training setup results will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your setup:&lt;br /&gt;
# Navigate to the Sandbox result set in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns. If so, you have completed the setup and are ready to move on to the training modules.&lt;br /&gt;
&lt;br /&gt;
[[image:STRIDE_results_012813.JPG|none|600px| Test Space baseline example]]&lt;br /&gt;
&lt;br /&gt;
== Training ==&lt;br /&gt;
At this point you should be ready to start the actual training. There are &#039;&#039;&#039;7 separate training modules&#039;&#039;&#039; and we recommend the following order:&lt;br /&gt;
&lt;br /&gt;
===Introductory===&lt;br /&gt;
* [[Training_Basics | &#039;&#039;&#039;Basics&#039;&#039;&#039;]] - Covers basics of implementing and executing test cases&lt;br /&gt;
* [[Training_Parameters |&#039;&#039;&#039;Parameters&#039;&#039;&#039;]] - How to pass parameters to a test &lt;br /&gt;
* [[Training_Fixturing | &#039;&#039;&#039;Fixturing&#039;&#039;&#039;]] - Leveraging setup and teardown features&lt;br /&gt;
&lt;br /&gt;
===Advanced===&lt;br /&gt;
* [[Training_Expectations | &#039;&#039;&#039;Expectations&#039;&#039;&#039;]] - Validating code sequencing along with state data&lt;br /&gt;
* [[Training_Runtime_API | &#039;&#039;&#039;Runtime API&#039;&#039;&#039;]] - Using the runtime services to dynamically create test suites / cases&lt;br /&gt;
* [[Training_File_IO | &#039;&#039;&#039;File IO&#039;&#039;&#039;]] - Reading and writing files existing on the host&lt;br /&gt;
* [[Training_Doubling | &#039;&#039;&#039;Doubling&#039;&#039;&#039;]] - Replacing a dependency with a stub, fake, or mock&lt;br /&gt;
&lt;br /&gt;
== Training Confirmation ==&lt;br /&gt;
&lt;br /&gt;
As you run and upload each test unit containing your worked training exercises, you should confirm the correctness of your work.&lt;br /&gt;
&lt;br /&gt;
The correctly worked training test units will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your work:&lt;br /&gt;
# Navigate to your result set in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Expectations&amp;diff=13887</id>
		<title>Training Expectations</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Expectations&amp;diff=13887"/>
		<updated>2013-01-29T17:44:52Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Implement Exercise */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on &#039;&#039;Test Points&#039;&#039; and how to validate them using &#039;&#039;Expectations&#039;&#039;.  The module covers the following topics:&lt;br /&gt;
* Presentation of a [[Expectations | validation]] technique based on &#039;&#039;code sequencing&#039;&#039; and &#039;&#039;state data&#039;&#039;&lt;br /&gt;
* Overview of [[Source_Instrumentation_Overview#Instrumentation | source instrumentation]]&lt;br /&gt;
* Review of [[Test_Point_Testing_in_C/C%2B%2B | expectation tables and predicates]]&lt;br /&gt;
* Example use cases such as concurrent validation, using trigger conditions, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two test files used -- &#039;&#039;&#039;TestExpect.cpp &amp;amp; TestExpect.h&#039;&#039;&#039; -- that implement three Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Seq&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Data&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Misc&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The first two Test Units each have two test methods already implemented, plus one method called &#039;&#039;&#039;Exercise&#039;&#039;&#039; that you are required to implement. The third Test Unit has four test methods already implemented and likewise has one exercise method to implement. Currently the &#039;&#039;exercise methods&#039;&#039; return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Expectations&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  -----------------------------------------------------------&lt;br /&gt;
  Summary: 7 passed, 1 failed, 0 in progress, 3 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results using a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (based on the &#039;&#039;output&#039;&#039; option). Opening the XML file in a web browser automatically applies the XSL to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Seq::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate &#039;&#039;&#039;ALL&#039;&#039;&#039; upper case Test Points (A - I)&lt;br /&gt;
** Use &#039;&#039;Unordered&#039;&#039; and &#039;&#039;Nonstrict&#039;&#039; sequencing&lt;br /&gt;
** Use &#039;&#039;sut_DoSequencing(SEQ_1)&#039;&#039; to generate part of the sequence&lt;br /&gt;
** Use &#039;&#039;sut_start_thread(SEQ_3)&#039;&#039; to generate the rest of the sequence&lt;br /&gt;
** Add a &#039;&#039;NOTE_INFO(..)&#039;&#039; to log that the test is executing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Data::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate the following Test Points &#039;&#039;&#039;{D, G, F, H}&#039;&#039;&#039;&lt;br /&gt;
** Check that &#039;&#039;&#039;F&#039;&#039;&#039; occurs 2 times&lt;br /&gt;
** Use &#039;&#039;Unordered&#039;&#039; sequencing&lt;br /&gt;
** Write a new custom predicate that validates data for both &#039;&#039;&#039;D&#039;&#039;&#039; and &#039;&#039;&#039;H&#039;&#039;&#039; Test Points&lt;br /&gt;
*** Add &#039;&#039;NOTE_INFO(..)&#039;&#039; to capture content of the Test Points&lt;br /&gt;
*** Add extra check for &#039;&#039;&#039;D&#039;&#039;&#039; that the status is &#039;&#039;&#039;GOOD&#039;&#039;&#039;&lt;br /&gt;
*** Confirm that data fields &#039;&#039;&#039;d1&#039;&#039;&#039; and &#039;&#039;&#039;d2&#039;&#039;&#039; are as expected&lt;br /&gt;
*** Pass the expected data fields (for both Test Points) as part of the &#039;&#039;&#039;user&#039;&#039;&#039; data within the setup&lt;br /&gt;
*** Use &#039;&#039;sut_start_thread(SEQ_3)&#039;&#039; to generate the sequence&lt;br /&gt;
** Write another custom predicate for validating data for &#039;&#039;&#039;G&#039;&#039;&#039;&lt;br /&gt;
*** Add &#039;&#039;NOTE_INFO(..)&#039;&#039; to capture content of the Test Point&lt;br /&gt;
*** Validate the expected string using &#039;&#039;&#039;user&#039;&#039;&#039; data&lt;br /&gt;
** Add a &#039;&#039;NOTE_INFO(..)&#039;&#039; to log that the test is executing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Misc::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate 2 sequences using a Trigger in between&lt;br /&gt;
*** Sequence 1 = &#039;&#039;&#039;D&#039;&#039;&#039; &#039;&#039;&#039;E&#039;&#039;&#039; and &#039;&#039;&#039;A&#039;&#039;&#039;&lt;br /&gt;
*** Trigger = &#039;&#039;&#039;C&#039;&#039;&#039;&lt;br /&gt;
*** Sequence 2 =  &#039;&#039;&#039;F&#039;&#039;&#039; and &#039;&#039;&#039;F&#039;&#039;&#039; (2 occurrences)&lt;br /&gt;
** Use &#039;&#039;ANY AT ALL&#039;&#039; special member with trigger&lt;br /&gt;
** Use &#039;&#039;Ordered&#039;&#039; and &#039;&#039;Strict&#039;&#039; sequencing &lt;br /&gt;
** Use &#039;&#039;sut_start_thread(SEQ_2)&#039;&#039; to generate the expected sequences  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Expectations&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
      &amp;gt; 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    -----------------------------------------------------------&lt;br /&gt;
    Summary: 10 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Test Space. To make this easier, we recommend updating your existing option file (myoptions.txt) with the following, if not already done:&lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc --space TestExpect --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to Test Points and Expectations.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Source_Instrumentation_Overview | Instrumentation Overview]] providing high-level concepts of this validation technique&lt;br /&gt;
* [[Test_Point |Test Point]] Macro definition &lt;br /&gt;
* [[Expectations |Expectations]] definition and how to set your &#039;&#039;Expectations&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Point_Sample | Test Point Sample]] - Demonstrates simple technique to monitor and test activity occurring in another thread&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Fixturing&amp;diff=13886</id>
		<title>Training Fixturing</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Fixturing&amp;diff=13886"/>
		<updated>2013-01-29T17:33:32Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Implement Exercise */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module focuses on &#039;&#039;&#039;Fixturing&#039;&#039;&#039; in the context of the STRIDE Test System. For a high-level overview refer to the following wiki section on [[What_is_Unique_About_STRIDE#Fixturing | fixturing]]. This module covers the following topics:&lt;br /&gt;
* Startup logic at the beginning of a Test Unit&lt;br /&gt;
* [[Test_Unit_Pragmas#Fixturing_Pragmas | Setup]] logic for each test method&lt;br /&gt;
* [[Test_Unit_Pragmas#Fixturing_Pragmas | Teardown]] logic for each test method&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two test files used -- &#039;&#039;&#039;TestFixture.cpp &amp;amp; TestFixture.h&#039;&#039;&#039;. These implement one Test Unit:&lt;br /&gt;
* &#039;&#039;&#039;TestFixture&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Test Unit has one test case already implemented (used for reference) and has &#039;&#039;&#039;two test methods&#039;&#039;&#039; that you are required to implement (called &#039;&#039;&#039;Exercise1&#039;&#039;&#039; and &#039;&#039;&#039;Exercise2&#039;&#039;&#039;).  Currently the &#039;&#039;Exercise1&#039;&#039; method is empty and returns a &#039;&#039;NOT IN USE&#039;&#039; status. The &#039;&#039;Exercise2&#039;&#039; method does not yet exist. &lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Fixture&#039;&#039; Test Unit only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestFixture&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
      &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results using a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (based on the &#039;&#039;output&#039;&#039; option). Opening the XML file in a web browser automatically applies the XSL to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestFixture::Exercise1&#039;&#039;&#039;&lt;br /&gt;
** Add a &#039;&#039;NOTE&#039;&#039; that will display the &#039;&#039;name&#039;&#039; of the test.&lt;br /&gt;
** Validate that &#039;&#039;&#039;Sequence2&#039;&#039;&#039; is stored when starting the Thread by checking that &amp;quot;Sequence2&amp;quot; equals MyString[0].&lt;br /&gt;
*** &#039;&#039;HINT&#039;&#039;: The method to focus on in SUT is DoSequencing()&lt;br /&gt;
*** &#039;&#039;HINT&#039;&#039;: m_sequence is incremented as each method goes through Teardown.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestFixture::Exercise2&#039;&#039;&#039;&lt;br /&gt;
** Add new test method declaration called &#039;&#039;Exercise2&#039;&#039; to TestFixture.h&lt;br /&gt;
** Add new test method implementation of &#039;&#039;Exercise2&#039;&#039; to TestFixture.cpp&lt;br /&gt;
** Add a &#039;&#039;NOTE&#039;&#039; that will display the name of the test&lt;br /&gt;
** Validate that &#039;&#039;&#039;Sequence3&#039;&#039;&#039; is persisted when starting the Thread&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Execute only &#039;&#039;TestFixture&#039;&#039;  &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestFixture&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
   &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Test Space. To make this easier, we recommend updating your existing option file (myoptions.txt) with the following, if not already done:&lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestFixture --space TestFixture --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to fixturing.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Unit_Pragmas#Fixturing_Pragmas | Fixturing Pragmas]] -- important information for setting up and tearing down &#039;&#039;things&#039;&#039; between each executed test method within a Test Unit&lt;br /&gt;
** Review the types of Test Units these pragmas are applicable for &lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Class_Sample | Test Class Sample]] - Specifically the [[Test_Class_Sample#Basic::Fixtures | Basic Fixtures]] Test Unit&lt;br /&gt;
* [[Test_CClass_Sample | Test C Class Sample]] - Specifically the [[Test_CClass_Sample#basic_fixtures | Basic Fixtures]] Test Unit&lt;br /&gt;
* [[Test_Function_List_Sample | Test Function List Sample]]- Specifically the [[Test_Function_List_Sample#basic_fixtures |Basic Fixtures]] Test Unit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Parameters&amp;diff=13885</id>
		<title>Training Parameters</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Parameters&amp;diff=13885"/>
		<updated>2013-01-29T17:20:44Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Implement Exercise */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on &#039;&#039;&#039;Parameter passing&#039;&#039;&#039; in the context of the Test Unit. The module covers the following:&lt;br /&gt;
* [[Parameterized_Test_Units | Passing parameters]] via the Runner&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two test files used -- &#039;&#039;&#039;TestParam.cpp &amp;amp; TestParam.h&#039;&#039;&#039;. These implement one Test Unit:&lt;br /&gt;
* &#039;&#039;&#039;TestParam&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Test Unit has test cases already implemented (used for reference) and has one test method that you are required to implement (called &#039;&#039;&#039;Exercise&#039;&#039;&#039;).  Currently this method is empty and returns a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Param&#039;&#039; Test Unit only&lt;br /&gt;
: Note that parameters are passed to the test unit from the host command line.&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run &amp;quot;TestParam(9,4,\&amp;quot;STRING1\&amp;quot;)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
      &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results using a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (based on the &#039;&#039;output&#039;&#039; option). Opening the XML file in a web browser automatically applies the XSL to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
;Exercise()&lt;br /&gt;
* Add a &#039;&#039;NOTE&#039;&#039; that will display the &#039;&#039;integer parameters&#039;&#039; passed via command line&lt;br /&gt;
* Add an Assert macro guarding against no parameters being passed (i.e. default value of 0 received)&lt;br /&gt;
* Call &#039;&#039;sut_add()&#039;&#039; with parameters and validate that it returns the expected value&lt;br /&gt;
* Call &#039;&#039;sut_mult()&#039;&#039; with parameters and validate that it returns the expected value&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will probably want to read about [[Parameterized Test Units]]&lt;br /&gt;
 &lt;br /&gt;
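The Exercise steps above can be sketched outside of STRIDE as a small, self-contained Python script. Here sut_add and sut_mult are hypothetical stand-ins for the C routines under test, and plain assert statements stand in for the SDK test macros; none of this is the actual STRIDE API.

```python
# Hypothetical stand-ins for the C routines under test (not the SDK code).
def sut_add(a, b):
    return a + b

def sut_mult(a, b):
    return a * b

def exercise(param1=0, param2=0):
    # Guard against no parameters being passed (defaults of 0 received),
    # mirroring the Assert macro step above.
    assert param1 != 0 and param2 != 0, "no parameters were passed"
    # Validate that each routine returns the expected value for (9, 4).
    assert sut_add(param1, param2) == 13
    assert sut_mult(param1, param2) == 36
    return "PASS"

# Parameters as passed on the host command line: TestParam(9, 4, ...)
print(exercise(9, 4))
```

Run with no arguments, the guard trips, which corresponds to the failing run shown below when TestParam is executed without parameters.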
&lt;br /&gt;
* Execute only &#039;&#039;TestParam&#039;&#039; &#039;&#039;&#039;without&#039;&#039;&#039; parameters &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestParam&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
      &amp;gt; 0 passed, 2 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 0 passed, 2 failed, 0 in progress, 0 not in use.&lt;br /&gt;
   &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Execute only &#039;&#039;TestParam&#039;&#039; &#039;&#039;&#039;with&#039;&#039;&#039; parameters &lt;br /&gt;
  &amp;gt; stride -O myoptions.txt --run &amp;quot;TestParam(9,4,\&amp;quot;STRING1\&amp;quot;)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
   &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Testspace. To simplify this step, we recommend updating your existing option file (myoptions.txt) with the following, if you have not already done so: &lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run &amp;quot;TestParam(9,4,\&amp;quot;STRING1\&amp;quot;)&amp;quot; --space TestParam --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to passing parameters to Test Units.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Parameterized_Test_Units |Parameterized Test Units]] - Describes how to supply parameters to your Test Units&lt;br /&gt;
* [[STRIDE_Runner | STRIDE Runner]] - Information on command line options&lt;br /&gt;
** Refer to &amp;quot;Test Unit Specification Examples&amp;quot; - specifically examples using parameters&lt;br /&gt;
** Review the &amp;lt;tt&amp;gt;--run [-r] arg&amp;lt;/tt&amp;gt; option&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
* [[Test_Class_Sample | Test Class Sample]] - Specifically the [[Test_Class_Sample#Basic::Parameterized | Parameterized]] Test Unit&lt;br /&gt;
* [[Test_CClass_Sample | Test C Class Sample]] - Specifically the [[Test_CClass_Sample#basic_parameterized | Parameterized]] Test Unit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13877</id>
		<title>Training Doubling</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Doubling&amp;diff=13877"/>
		<updated>2013-01-28T18:53:08Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Build and Run TestApp */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on explaining how to leverage &#039;&#039;&#039;Test Doubles&#039;&#039;&#039; in the context of executing a test. For an overview of &#039;&#039;&#039;intercepting&#039;&#039;&#039; existing functions, please refer to [[Using_Test_Doubles | Test Doubles]]. The module covers the following topics:&lt;br /&gt;
* How to apply pragmas for [[Function_Capturing | function intercepting]]&lt;br /&gt;
* How to decide what kind of &#039;&#039;mangling&#039;&#039; is required&lt;br /&gt;
** &#039;&#039;Definition&#039;&#039; versus &#039;&#039;Reference&#039;&#039;&lt;br /&gt;
** &#039;&#039;Explicit&#039;&#039; versus &#039;&#039;Implicit&#039;&#039;&lt;br /&gt;
* Setting and Resetting the Double implementation&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two new files used -- &#039;&#039;&#039;TestDouble.cpp &amp;amp; TestDouble.h&#039;&#039;&#039; -- that implement two Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Both Test Units have test cases already implemented (used for reference), and each has one test method that you are required to implement, called &#039;&#039;&#039;Exercise&#039;&#039;&#039;. Currently the exercise methods return a &#039;&#039;NOT IN USE&#039;&#039; status. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Double&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference --run TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 0 failed, 0 in progress, 2 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a web browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (per the &#039;&#039;output&#039;&#039; option). Opening the XML file in a web browser automatically applies the XSL stylesheet to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Reference::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;strlen()&#039;&#039; routine that:&lt;br /&gt;
*** Adds a &#039;&#039;NOTE&#039;&#039; capturing its name when it is called&lt;br /&gt;
*** Uses the real &#039;&#039;strlen()&#039;&#039; to return the length of the passed-in string&lt;br /&gt;
** Use &#039;&#039;srEXPECT_EQ()&#039;&#039; to validate that &#039;&#039;sut_strcheck()&#039;&#039; returns the correct length &lt;br /&gt;
** Make sure to restore the original routine &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestDouble_Definition::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Implement a new &#039;&#039;test double&#039;&#039; for the &#039;&#039;sut_strcpy()&#039;&#039; routine:&lt;br /&gt;
*** Log its name when it is called&lt;br /&gt;
*** Validate string passed to &#039;&#039;sut_strsave()&#039;&#039; is received correctly by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
*** Remember that &#039;&#039;sut_strsave()&#039;&#039; calls &#039;&#039;sut_strcpy()&#039;&#039; with the string passed to it&lt;br /&gt;
*** You can use a test macro to validate that the string is correctly passed to the double&lt;br /&gt;
*** Call the original function (&#039;&#039;sut_strcpy()&#039;&#039;) with a &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; (e.g. &amp;quot;Exercise Test String 2&amp;quot;)&lt;br /&gt;
**** Call the original within the test double &lt;br /&gt;
**** Make sure to reset the original function within the mock before calling it (i.e. srDOUBLE_RESET)&lt;br /&gt;
** Call &#039;&#039;sut_strsave()&#039;&#039; with a string (e.g. &amp;quot;Exercise Test String 1&amp;quot;)&lt;br /&gt;
** Use &#039;&#039;sut_strget()&#039;&#039; to retrieve a string&lt;br /&gt;
** Compare retrieved string with the &#039;&#039;&#039;DIFFERENT STRING&#039;&#039;&#039; inserted by the &#039;&#039;test double&#039;&#039;&lt;br /&gt;
** NOTE - won&#039;t work if you wait until the end of the test to restore the original routine (i.e. srDOUBLE_SET)&lt;br /&gt;
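The set/reset pattern behind both exercises can be pictured with a self-contained sketch. The function-pointer swap below is only a conceptual analogue of the STRIDE srDOUBLE_SET / srDOUBLE_RESET mechanism; every name in it is illustrative, not the SDK API.

```python
# Conceptual sketch of a test double: swap in a replacement, verify,
# then restore the original. This mimics, but is not, the SDK mechanism.
calls = []

def real_strlen(s):
    return len(s)

hooks = {"strlen": real_strlen}

def double_strlen(s):
    calls.append("double_strlen")   # log the double's name when called
    return real_strlen(s)           # delegate to the real routine

def sut_strcheck(s):
    # System under test calls through the hook table.
    return hooks["strlen"](s)

hooks["strlen"] = double_strlen     # set the double (cf. srDOUBLE_SET)
assert sut_strcheck("abcde") == 5   # the double still returns the real length
assert calls == ["double_strlen"]   # and its call was logged
hooks["strlen"] = real_strlen       # restore the original (cf. srDOUBLE_RESET)
assert sut_strcheck("abc") == 3
assert calls == ["double_strlen"]   # no further double calls after the reset
```

The key design point is the same as in the exercises: the restore must happen at the right moment, or later calls keep hitting the double.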
&lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Double&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ---------------------------------------------------------&lt;br /&gt;
    Summary: 6 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Testspace. To simplify this step, we recommend updating your existing option file (myoptions.txt) with the following, if you have not already done so: &lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestDouble_Reference TestDouble_Definition --space TestDouble --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to using Test Doubles.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
* [[Using_Test_Doubles | Using Test Doubles]] - Outlines the basic steps to enable a &#039;&#039;double&#039;&#039;&lt;br /&gt;
** Capture the f(x) that is going to be doubled&lt;br /&gt;
** Code the test logic to additionally set the Double and restore original f(x)&lt;br /&gt;
&lt;br /&gt;
* [[Scl_function | Intercepting a function]] - Pragma specifics outline&lt;br /&gt;
** For this exercise, there are &#039;&#039;&#039;NO&#039;&#039;&#039; optional parameters when capturing a f(x) to be &#039;&#039;&#039;intercepted&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Details of the pragma parameters can be understood in the [[Intercept_Module#Interceptor | Intercept Module]] article. Please review in detail the following:&lt;br /&gt;
** &#039;&#039;context&#039;&#039; - This refers to intercepting the caller or the called routine. &lt;br /&gt;
** &#039;&#039;name-mangling&#039;&#039; - How the function name is switched during compilation&lt;br /&gt;
** &#039;&#039;group-id&#039;&#039; - Used to associate a group of intercepted functions &lt;br /&gt;
&lt;br /&gt;
* Some &#039;&#039;&#039;external links&#039;&#039;&#039; to review:&lt;br /&gt;
** [http://www.martinfowler.com/bliki/TestDouble.html Martin Fowler definition]&lt;br /&gt;
** [http://en.wikipedia.org/wiki/Test_Double Wikipedia definition]&lt;br /&gt;
** [http://msdn.microsoft.com/en-us/magazine/cc163358.aspx MSDN Magazine description]&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Double_Sample | Test Double Sample]] - a useful reference. This example can be built and executed like all of the samples using the Off-Target environment.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_File_IO&amp;diff=13876</id>
		<title>Training File IO</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_File_IO&amp;diff=13876"/>
		<updated>2013-01-28T18:52:42Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Build and Run TestApp */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on leveraging [[File_Transfer_Services | File IO]] from the context of a Test Unit executing on a target. The module covers the following:&lt;br /&gt;
* Accessing host-based files&lt;br /&gt;
* Reading and writing content to and from host files&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two test files used -- &#039;&#039;&#039;TestFile.cpp &amp;amp; TestFile.h&#039;&#039;&#039; -- that implement one Test Unit:&lt;br /&gt;
* &#039;&#039;&#039;TestFile&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Test Unit has test cases already implemented (used for reference) and has one test method that you are required to implement (called &#039;&#039;&#039;Exercise&#039;&#039;&#039;).  Currently this method is empty and returns a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]]  using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test File&#039;&#039; Test Unit only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestFile&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
      &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a web browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (per the &#039;&#039;output&#039;&#039; option). Opening the XML file in a web browser automatically applies the XSL stylesheet to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* Read the data content of the &#039;&#039;TestFileInput.dat&#039;&#039; file created by the previous test method (SDK\Windows or SDK/Posix).&lt;br /&gt;
* Make sure to exit the test if an error occurs when opening file(s) (i.e. use &#039;&#039;&#039;srEXIT_XX&#039;&#039;&#039;)&lt;br /&gt;
* Compute a summation of the data content using &#039;&#039;sut_add()&#039;&#039; &lt;br /&gt;
* Validate the summation against the content of the &#039;&#039;TestFileSum.dat&#039;&#039; file&lt;br /&gt;
* Add a &#039;&#039;NOTE&#039;&#039; capturing important information such as &#039;&#039;bytes read&#039;&#039;, &#039;&#039;summation&#039;&#039;, etc.&lt;br /&gt;
 &lt;br /&gt;
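The read / sum / compare flow can be sketched as a self-contained script. The file names, contents, and plain open() calls here are illustrative only; the real test uses srFileOpen(), srFileRead(), and the related services against host-side .dat files.

```python
# Illustrative sketch of the Exercise flow using ordinary Python file IO.
import os
import tempfile

tmp = tempfile.mkdtemp()
data_path = os.path.join(tmp, "TestFileInput.dat")
sum_path = os.path.join(tmp, "TestFileSum.dat")

# Stand-in input files (in the real exercise these already exist).
with open(data_path, "w") as f:
    f.write("3 7 12 20")
with open(sum_path, "w") as f:
    f.write("42")

with open(data_path) as f:
    raw = f.read()
values = [int(tok) for tok in raw.split()]

total = 0
for v in values:
    total = total + v          # stands in for repeated sut_add() calls

with open(sum_path) as f:
    expected = int(f.read())

# Validate the summation against the stored sum, then log the details.
assert total == expected
print("bytes read:", len(raw), "summation:", total)
```

Note how the early read is the natural place to bail out on error, which is where the srEXIT_XX check belongs in the real test.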
&lt;br /&gt;
* Execute only &#039;&#039;TestFile&#039;&#039;  &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestFile&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
      &amp;gt; 1 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 1 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
   &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Testspace. To simplify this step, we recommend updating your existing option file (myoptions.txt) with the following, if you have not already done so: &lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestFile --space TestFile --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to using File IO from Test Units.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
* [[File_Transfer_Services | File Transfer Services]] - important information for performing file operations on host files from test code executing on the target&lt;br /&gt;
** Review &#039;&#039;srFileOpen()&#039;&#039; and &#039;&#039;srFileClose()&#039;&#039;&lt;br /&gt;
** Review &#039;&#039;srFileRead()&#039;&#039; and &#039;&#039;srFileWrite()&#039;&#039;&lt;br /&gt;
** Other references as needed&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
* [[File_Services_Sample | File Services Sample]] - presents a few basic examples of how to use the File Transfer Services API to interact with the host file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Runtime_API&amp;diff=13875</id>
		<title>Training Runtime API</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Runtime_API&amp;diff=13875"/>
		<updated>2013-01-28T18:52:07Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Build and Run TestApp */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on leveraging the [[Runtime_Test_Services | Runtime APIs]] in the context of writing a test. The module covers the following topics:&lt;br /&gt;
* How to set [[Runtime_Test_Services#srTestCaseSetStatus | test status]]&lt;br /&gt;
* Setting [[Runtime_Test_Services#srTestCaseSetName | test case name]] and [[Runtime_Test_Services#srTestCaseSetDescription | description]] directly via the API&lt;br /&gt;
* Setting [[Runtime_Test_Services#srTestSuiteSetName | test suite name]] and [[Runtime_Test_Services#srTestSuiteSetDescription | description]] directly via the API&lt;br /&gt;
* Dynamically creating [[Runtime_Test_Services#srTestSuiteAddSuite | test suites]] and [[Runtime_Test_Services#srTestSuiteAddCase | test cases]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two new files used -- &#039;&#039;&#039;TestRuntime.cpp &amp;amp; TestRuntime.h&#039;&#039;&#039; -- that implement two Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Static&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Dynamic&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Both Test Units have test cases already implemented (used for reference), and each has one test method that you are required to implement: one is called &#039;&#039;&#039;Exercise&#039;&#039;&#039; and the other is called &#039;&#039;&#039;dynamic_Exercise&#039;&#039;&#039;. Currently the exercise methods return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Runtime APIs&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestRuntime_Static --run TestRuntime_Dynamic&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
   test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
     &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
   test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
     &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  ------------------------------------------------------------&lt;br /&gt;
  Summary: 6 passed, 0 failed, 0 in progress, 2 not in use.&lt;br /&gt;
  &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a web browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (per the &#039;&#039;output&#039;&#039; option). Opening the XML file in a web browser automatically applies the XSL stylesheet to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Static::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate the &#039;&#039;sut_mult()&#039;&#039; routine using some simple data input&lt;br /&gt;
** Set the &#039;&#039;test method&#039;&#039; name to &#039;&#039;&#039;Mult&#039;&#039;&#039;&lt;br /&gt;
** Use direct Runtime APIs to:&lt;br /&gt;
*** Set the test case description&lt;br /&gt;
*** Capture logging information via comments&lt;br /&gt;
*** Set the status of the test case&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestRuntime_Dynamic-&amp;gt;Exercise&#039;&#039;&#039;&lt;br /&gt;
** Pass in &#039;&#039;&#039;5&#039;&#039;&#039; via command line for the number of test cases to generate&lt;br /&gt;
** Check using &#039;&#039;&#039;srEXIT_XX&#039;&#039;&#039; that the number of test cases has been passed in correctly&lt;br /&gt;
** Add a new Test Suite using &#039;&#039;&#039;Exercise&#039;&#039;&#039; for the name&lt;br /&gt;
** Write a loop generating a dynamic test case using &#039;&#039;NumberOfTestCases&#039;&#039;&lt;br /&gt;
*** The Test Case name shall be &#039;&#039;&#039;Test_n&#039;&#039;&#039;, where &amp;quot;n&amp;quot; is the loop count, as each case must have a unique name.&lt;br /&gt;
*** Add a description for each test case&lt;br /&gt;
*** Validate &#039;&#039;sut_foo()&#039;&#039; using the loop count&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Note - the &#039;&#039;dynamic&#039;&#039; Test Unit uses &#039;&#039;CClass&#039;&#039; packaging&lt;br /&gt;
&lt;br /&gt;
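The dynamic test case loop can be sketched as follows. run_dynamic, the suite list, and sut_foo are hypothetical stand-ins; the real code uses srTestSuiteAddSuite, srTestSuiteAddCase, and the other Runtime APIs named above.

```python
# Sketch of dynamically generating uniquely named test cases in a loop.
def sut_foo(n):
    # Hypothetical stand-in for the sut_foo() routine under test.
    return n * 2

def run_dynamic(number_of_test_cases=0):
    # Mirror the srEXIT_XX check: bail out if no count was passed in.
    assert number_of_test_cases != 0, "test case count not passed"
    suite = []   # stands in for the added Test Suite named "Exercise"
    for i in range(1, number_of_test_cases + 1):
        name = "Test_%d" % i                 # each case needs a unique name
        description = "dynamic case %d of %d" % (i, number_of_test_cases)
        passed = (sut_foo(i) == i * 2)       # validate sut_foo() with the loop count
        suite.append((name, description, passed))
    return suite

cases = run_dynamic(5)                       # 5 passed in via the command line
print([c[0] for c in cases])
```

Because every generated case carries its own name and description, the result report lists five distinct cases rather than one aggregated result.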
&lt;br /&gt;
* Execute &#039;&#039;Test Runtime API&#039;&#039; Test Units only (NOTE - requires passing parameter)&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestRuntime_Static --run &amp;quot;TestRuntime_Dynamic(5)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
      &amp;gt; 10 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ---------------------------------------------------------------------&lt;br /&gt;
    Summary: 13 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the Exercise(s), publish your results to Testspace. To simplify this step, we recommend updating your existing option file (myoptions.txt) with the following, if you have not already done so: &lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestRuntime_Static --run &amp;quot;TestRuntime_Dynamic(5)&amp;quot; --space TestRuntime --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to the Runtime Test Services APIs.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Runtime_Test_Services#C_Test_Functions | Runtime Test Services]] with special attention to the following:&lt;br /&gt;
** Review srTestCaseSetDescription()&lt;br /&gt;
** Review srTestCaseSetStatus()&lt;br /&gt;
** Review srTestCaseSetName()&lt;br /&gt;
** Review srTestCaseAddComment()&lt;br /&gt;
** Review srTestSuiteAddCase()&lt;br /&gt;
** Review srTestSuiteAddSuite()&lt;br /&gt;
** Review srTestSuiteSetDescription()&lt;br /&gt;
** Review srTestSuiteSetName()&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
* [[Test_Class_Sample | Test Class Sample]] - Specifically the [[Test_Class_Sample#Runtime_Services |Runtime Services]] Test Units&lt;br /&gt;
* [[Test_CClass_Sample | Test C Class Sample]] - Specifically the [[Test_CClass_Sample#Runtime_Services | Runtime Services]] Test Units&lt;br /&gt;
* [[Test_Function_List_Sample | Test Function List Sample]] - Specifically the [[Test_Function_List_Sample#Runtime_Services | Runtime Services]] Test Units&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Expectations&amp;diff=13874</id>
		<title>Training Expectations</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Expectations&amp;diff=13874"/>
		<updated>2013-01-28T18:51:46Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Build and Run TestApp */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on &#039;&#039;Test Points&#039;&#039; and how to validate them using &#039;&#039;Expectations&#039;&#039;.  The module covers the following topics:&lt;br /&gt;
* Presentation of a [[Expectations | validation]] technique based on &#039;&#039;code sequencing&#039;&#039; and &#039;&#039;state data&#039;&#039;&lt;br /&gt;
* Overview of [[Source_Instrumentation_Overview#Instrumentation | source instrumentation]]&lt;br /&gt;
* Review of [[Test_Point_Testing_in_C/C%2B%2B | expectation tables and predicates]]&lt;br /&gt;
* Example use cases such as concurrent validation, using trigger conditions, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two test files used -- &#039;&#039;&#039;TestExpect.cpp &amp;amp; TestExpect.h&#039;&#039;&#039; -- that implement three Test Units:&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Seq&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Data&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Misc&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The first two Test Units have two test methods already implemented, and each has one method that you are required to implement, called &#039;&#039;&#039;Exercise&#039;&#039;&#039;. The third Test Unit has four test methods already implemented and also has one method to implement as an exercise. Currently the &#039;&#039;exercise methods&#039;&#039; return a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Start up TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Expectations&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  -----------------------------------------------------------&lt;br /&gt;
  Summary: 7 passed, 1 failed, 0 in progress, 3 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a web browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml ]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (per the &#039;&#039;output&#039;&#039; option). Opening the XML file in a web browser automatically applies the XSL stylesheet to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Seq::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate &#039;&#039;&#039;ALL&#039;&#039;&#039; upper case Test Points &lt;br /&gt;
** Use &#039;&#039;Unordered&#039;&#039; and &#039;&#039;Nonstrict&#039;&#039; sequencing&lt;br /&gt;
** Use &#039;&#039;sut_DoSequencing(SEQ_1)&#039;&#039; to generate part of the sequence&lt;br /&gt;
** Use &#039;&#039;sut_start_thread(SEQ_3)&#039;&#039; to generate the rest of the sequence&lt;br /&gt;
** Add a &#039;&#039;NOTE_INFO(..)&#039;&#039; to log that the test is executing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Data::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate the following Test Points &#039;&#039;&#039;{D, G, F, H}&#039;&#039;&#039;&lt;br /&gt;
** Check that &#039;&#039;&#039;F&#039;&#039;&#039; occurs 2 times&lt;br /&gt;
** Use &#039;&#039;Unordered&#039;&#039; sequencing&lt;br /&gt;
** Write a new custom predicate that validates data for both &#039;&#039;&#039;D&#039;&#039;&#039; and &#039;&#039;&#039;H&#039;&#039;&#039; Test Points&lt;br /&gt;
*** Add &#039;&#039;NOTE_INFO(..)&#039;&#039; to capture content of the Test Points&lt;br /&gt;
*** Add extra check for &#039;&#039;&#039;D&#039;&#039;&#039; that the status is &#039;&#039;&#039;GOOD&#039;&#039;&#039;&lt;br /&gt;
*** Confirm that data fields &#039;&#039;&#039;d1&#039;&#039;&#039; and &#039;&#039;&#039;d2&#039;&#039;&#039; are as expected&lt;br /&gt;
*** Pass the expected data fields (for both Test Points) as part of the &#039;&#039;&#039;user&#039;&#039;&#039; data within the setup&lt;br /&gt;
*** Use &#039;&#039;sut_start_thread(SEQ_3)&#039;&#039; to generate the sequence&lt;br /&gt;
** Write another custom predicate for validating data for &#039;&#039;&#039;G&#039;&#039;&#039;&lt;br /&gt;
*** Add &#039;&#039;NOTE_INFO(..)&#039;&#039; to capture content of the Test Point&lt;br /&gt;
*** Validate the expected string using &#039;&#039;&#039;user&#039;&#039;&#039; data&lt;br /&gt;
** Add a &#039;&#039;NOTE_INFO(..)&#039;&#039; to log that the test is executing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestExpect_Misc::Exercise&#039;&#039;&#039;&lt;br /&gt;
** Validate 2 sequences using a Trigger in between&lt;br /&gt;
*** Sequence 1 = &#039;&#039;&#039;D&#039;&#039;&#039;, &#039;&#039;&#039;E&#039;&#039;&#039;, and &#039;&#039;&#039;A&#039;&#039;&#039;&lt;br /&gt;
*** Trigger = &#039;&#039;&#039;C&#039;&#039;&#039;&lt;br /&gt;
*** Sequence 2 =  &#039;&#039;&#039;F&#039;&#039;&#039; and &#039;&#039;&#039;F&#039;&#039;&#039; (2 occurrences)&lt;br /&gt;
** Use &#039;&#039;ANY AT ALL&#039;&#039; special member with trigger&lt;br /&gt;
** Use &#039;&#039;Ordered&#039;&#039; and &#039;&#039;Strict&#039;&#039; sequencing &lt;br /&gt;
** Use &#039;&#039;sut_start_thread(SEQ_2)&#039;&#039; to generate the expected sequences  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Expectations&#039;&#039; Test Units only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
      &amp;gt; 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    -----------------------------------------------------------&lt;br /&gt;
    Summary: 10 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the exercise(s), publish your results to Test Space. If you have not already done so, we recommend updating your existing option file (myoptions.txt) with the following: &lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc --space TestExpect --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to Test Points and Expectations.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Source_Instrumentation_Overview | Instrumentation Overview]] providing high-level concepts of this validation technique&lt;br /&gt;
* [[Test_Point |Test Point]] Macro definition &lt;br /&gt;
* [[Expectations |Expectations]] definition and how to set your &#039;&#039;Expectations&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Point_Sample | Test Point Sample]] - Demonstrates simple technique to monitor and test activity occurring in another thread&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Fixturing&amp;diff=13873</id>
		<title>Training Fixturing</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Fixturing&amp;diff=13873"/>
		<updated>2013-01-28T18:51:14Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Build and Run TestApp */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module focuses on &#039;&#039;&#039;Fixturing&#039;&#039;&#039; in the context of the STRIDE Test System. For a high-level overview refer to the following wiki section on [[What_is_Unique_About_STRIDE#Fixturing | fixturing]]. This module covers the following topics:&lt;br /&gt;
* Startup logic at the beginning of a Test Unit&lt;br /&gt;
* [[Test_Unit_Pragmas#Fixturing_Pragmas | Setup]] logic for each test method&lt;br /&gt;
* [[Test_Unit_Pragmas#Fixturing_Pragmas | Teardown]] logic for each test method&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two test files used -- &#039;&#039;&#039;TestFixture.cpp &amp;amp; TestFixture.h&#039;&#039;&#039;. These implement one Test Unit:&lt;br /&gt;
* &#039;&#039;&#039;TestFixture&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Test Unit has one test case already implemented (used for reference) and has &#039;&#039;&#039;two test methods&#039;&#039;&#039; that you are required to implement (called &#039;&#039;&#039;Exercise1&#039;&#039;&#039; and &#039;&#039;&#039;Exercise2&#039;&#039;&#039;).  Currently the &#039;&#039;Exercise1&#039;&#039; method is empty and returns a &#039;&#039;NOT IN USE&#039;&#039; status. The &#039;&#039;Exercise2&#039;&#039; method does not yet exist. &lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Startup TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Fixture&#039;&#039; Test Unit only &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestFixture&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
      &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (as determined by the &#039;&#039;output&#039;&#039; option). When the XML file is opened in a web browser, the XSL stylesheet is applied automatically to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestFixture::Exercise1&#039;&#039;&#039;&lt;br /&gt;
** Add a &#039;&#039;NOTE&#039;&#039; that will display the &#039;&#039;name&#039;&#039; of the test.&lt;br /&gt;
** Validate that &#039;&#039;&#039;Sequence2&#039;&#039;&#039; is stored when starting the Thread by checking that &amp;quot;Sequence2&amp;quot; equals MyString[0].&lt;br /&gt;
** NOTE: The method to focus on in SUT is DoSequencing()&lt;br /&gt;
** NOTE: m_sequence is incremented as each method goes through Teardown.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TestFixture::Exercise2&#039;&#039;&#039;&lt;br /&gt;
** Add new test method declaration called &#039;&#039;Exercise2&#039;&#039; to TestFixture.h&lt;br /&gt;
** Add new test method implementation of &#039;&#039;Exercise2&#039;&#039; to TestFixture.cpp&lt;br /&gt;
** Add a &#039;&#039;NOTE&#039;&#039; that will display the name of the test&lt;br /&gt;
** Validate that &#039;&#039;&#039;Sequence3&#039;&#039;&#039; is persisted when starting the Thread&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Execute only &#039;&#039;TestFixture&#039;&#039;  &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestFixture&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 3 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
   &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the exercise(s), publish your results to Test Space. If you have not already done so, we recommend updating your existing option file (myoptions.txt) with the following: &lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestFixture --space TestFixture --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to fixturing within Test Units.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Unit_Pragmas#Fixturing_Pragmas | Fixturing Pragmas]] -- important information for setting up and tearing down &#039;&#039;things&#039;&#039; between each executed test method within a Test Unit&lt;br /&gt;
** Review the types of Test Units these pragmas are applicable for &lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Class_Sample | Test Class Sample]] - Specifically the [[Test_Class_Sample#Basic::Fixtures | Basic Fixtures]] Test Unit&lt;br /&gt;
* [[Test_CClass_Sample | Test C Class Sample]] - Specifically the [[Test_CClass_Sample#basic_fixtures | Basic Fixtures]] Test Unit&lt;br /&gt;
* [[Test_Function_List_Sample | Test Function List Sample]]- Specifically the [[Test_Function_List_Sample#basic_fixtures |Basic Fixtures]] Test Unit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Parameters&amp;diff=13872</id>
		<title>Training Parameters</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Parameters&amp;diff=13872"/>
		<updated>2013-01-28T18:50:43Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Build and Run TestApp */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module is focused on &#039;&#039;&#039;Parameter passing&#039;&#039;&#039; in the context of the Test Unit. The module covers the following:&lt;br /&gt;
* [[Parameterized_Test_Units | Passing parameters]] via the Runner&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two test files used -- &#039;&#039;&#039;TestParam.cpp &amp;amp; TestParam.h&#039;&#039;&#039;. These implement one Test Unit:&lt;br /&gt;
* &#039;&#039;&#039;TestParam&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Test Unit has test cases already implemented (used for reference) and has one test method that you are required to implement (called &#039;&#039;&#039;Exercise&#039;&#039;&#039;).  Currently this method is empty and returns a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using SDK makefile&lt;br /&gt;
* Startup TestApp&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute &#039;&#039;Test Param&#039;&#039; Test Unit only&lt;br /&gt;
: Note that parameters are passed to the test unit from the host command line.&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run &amp;quot;TestParam(9,4,\&amp;quot;STRING1\&amp;quot;)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
      &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* Review the details of the test results in a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (as determined by the &#039;&#039;output&#039;&#039; option). When the XML file is opened in a web browser, the XSL stylesheet is applied automatically to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercise ===&lt;br /&gt;
&lt;br /&gt;
;Exercise()&lt;br /&gt;
* Add a &#039;&#039;NOTE&#039;&#039; that will display the &#039;&#039;integer parameters&#039;&#039; passed via command line&lt;br /&gt;
* Add an Assert macro guarding against no parameters being passed (i.e. default value of 0 received)&lt;br /&gt;
* Call &#039;&#039;sut_add()&#039;&#039; with parameters and validate that it returns the expected value&lt;br /&gt;
* Call &#039;&#039;sut_mult()&#039;&#039; with parameters and validate that it returns the expected value&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
* Execute only &#039;&#039;TestParam&#039;&#039; &#039;&#039;&#039;without&#039;&#039;&#039; parameters &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestParam&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
      &amp;gt; 0 passed, 2 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 0 passed, 2 failed, 0 in progress, 0 not in use.&lt;br /&gt;
   &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Execute only &#039;&#039;TestParam&#039;&#039; &#039;&#039;&#039;with&#039;&#039;&#039; parameters &lt;br /&gt;
  &amp;gt; stride -O myoptions.txt --run &amp;quot;TestParam(9,4,\&amp;quot;STRING1\&amp;quot;)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
      &amp;gt; 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 2 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
   &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the exercise(s), publish your results to Test Space. If you have not already done so, we recommend updating your existing option file (myoptions.txt) with the following: &lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run &amp;quot;TestParam(9,4,\&amp;quot;STRING1\&amp;quot;)&amp;quot; --space TestParam --upload&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to passing parameters to Test Units.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Parameterized_Test_Units |Parameterized Test Units]] - Describes how to supply parameters to your Test Units&lt;br /&gt;
* [[STRIDE_Runner | STRIDE Runner]] - Information on command line options&lt;br /&gt;
** Refer to &amp;quot;Test Unit Specification Examples&amp;quot; - specifically examples using parameters&lt;br /&gt;
** Review the &amp;lt;tt&amp;gt;--run [-r] arg&amp;lt;/tt&amp;gt; option&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
* [[Test_Class_Sample | Test Class Sample]] - Specifically the [[Test_Class_Sample#Basic::Parameterized | Parameterized]] Test Unit&lt;br /&gt;
* [[Test_CClass_Sample | Test C Class Sample]] - Specifically the [[Test_CClass_Sample#basic_parameterized | Parameterized]] Test Unit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Test_Macros&amp;diff=13870</id>
		<title>Training Test Macros</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Test_Macros&amp;diff=13870"/>
		<updated>2013-01-28T18:47:47Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Build and Run TestApp */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Objectives ==&lt;br /&gt;
&lt;br /&gt;
This Training Module focuses on the basics of writing and executing tests. It covers the following topics:&lt;br /&gt;
* The C++ class [[Test_Units | Test Unit packaging]] option&lt;br /&gt;
* How to leverage [[Test_Macros | Test Macros]]&lt;br /&gt;
* Using [[Test_Macros#Notes | Notes]] and [[Test_Log | Test Logs]], what is the difference?&lt;br /&gt;
* Creating [[Test_Documentation_in_C/C%2B%2B | Test Documentation]] via the build process&lt;br /&gt;
* Executing Tests using the [[STRIDE_Runner | Runner]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The test unit is implemented in two source files: &#039;&#039;&#039;TestBasic.cpp&#039;&#039;&#039; and &#039;&#039;&#039;TestBasic.h&#039;&#039;&#039;. The comments and descriptions are contained in &#039;&#039;&#039;TestBasic.h&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The Test Unit has test cases already implemented (used for reference) and has a test method that you are required to implement (called &#039;&#039;&#039;Exercise&#039;&#039;&#039;). Currently this method is empty and returns a &#039;&#039;NOT IN USE&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
== Instructions ==&lt;br /&gt;
&lt;br /&gt;
=== Build and Run TestApp ===&lt;br /&gt;
&lt;br /&gt;
* [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using the SDK makefile&lt;br /&gt;
* Invoke TestApp found in the &#039;&#039;/out/bin&#039;&#039; directory&lt;br /&gt;
* If you have not created an option file, please refer to [[Training_Getting_Started#Run_Training_Tests| setup]] &lt;br /&gt;
&lt;br /&gt;
* Execute the &#039;&#039;Test Basic&#039;&#039; Test Units &lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestBasic&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
      &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
You can also review the details of the test results in a browser. Open [[Building_an_Off-Target_Test_App#Interpreting_Results | TestApp.xml]], which can be found in the &#039;&#039;sample_src&#039;&#039; directory (as determined by the &#039;&#039;output&#039;&#039; option). When the XML file is opened in a web browser, the XSL stylesheet is applied automatically to render HTML.&lt;br /&gt;
&lt;br /&gt;
=== Implement Exercises ===&lt;br /&gt;
&lt;br /&gt;
Now edit the training source code to complete the following exercises:&lt;br /&gt;
&lt;br /&gt;
;Exercise()&lt;br /&gt;
* &#039;&#039;&#039;Assignment 1:&#039;&#039;&#039; Add an &#039;&#039;srNOTE&#039;&#039; to &amp;lt;tt&amp;gt;Exercise()&amp;lt;/tt&amp;gt; that will add a simple message to the test report (e.g. &amp;quot;Exercise ...&amp;quot;)&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will probably want to read about [[Test Macros]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Assignment 2:&#039;&#039;&#039; Within &amp;lt;tt&amp;gt;Exercise()&amp;lt;/tt&amp;gt; validate that &amp;lt;tt&amp;gt;sut_mult(1,1)&amp;lt;/tt&amp;gt; does NOT equal &amp;lt;tt&amp;gt;sut_add(1,1)&amp;lt;/tt&amp;gt;&lt;br /&gt;
** &#039;&#039;Hint:&#039;&#039; You will use a [[Test Macros | Test Macro]].&lt;br /&gt;
&lt;br /&gt;
=== Check Results ===&lt;br /&gt;
&lt;br /&gt;
* Before you rebuild &amp;lt;tt&amp;gt;TestApp.exe&amp;lt;/tt&amp;gt;, you will need to shut it down by entering &#039;q&#039; in its console window or closing the window directly.&lt;br /&gt;
&lt;br /&gt;
* After rebuilding, invoke &amp;lt;tt&amp;gt;TestApp.exe&amp;lt;/tt&amp;gt; once again.&lt;br /&gt;
&lt;br /&gt;
* Execute only &#039;&#039;TestBasic&#039;&#039; using the stride runner:&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run TestBasic&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
      &amp;gt; 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    --------------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
&lt;br /&gt;
* If you have TestApp.xml already open in your browser, you can simply refresh (F5) to view the latest test results.&lt;br /&gt;
&lt;br /&gt;
=== Run and Publish Results ===&lt;br /&gt;
&lt;br /&gt;
When you have completed the exercise(s), publish your results to Test Space. We recommend updating your existing option file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) with the following: &lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run TestBasic --space TestBasic --upload&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
    test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
      &amp;gt; 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
    ------------------------------------------------------------&lt;br /&gt;
    Summary: 4 passed, 1 failed, 0 in progress, 0 not in use.&lt;br /&gt;
 &lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
  Uploading to test space...&lt;br /&gt;
&lt;br /&gt;
=== Validate Uploaded Results ===&lt;br /&gt;
&lt;br /&gt;
Navigate to Test Space using your browser, then validate your results against the pre-configured baseline. For details, please see [[Training_Getting_Started#Confirming_Training_Exercise_Results | Confirming Training Exercise Results]].&lt;br /&gt;
&lt;br /&gt;
Note: This space has been set up with a Baseline of [[Training_Getting_Started#Test_Space_Access | &#039;&#039;expected test results&#039;&#039;]] that you can use to validate your results.&lt;br /&gt;
&lt;br /&gt;
You can view the results on test space by pointing your browser at &#039;&#039;&#039;&amp;lt;tt&amp;gt;&amp;lt;nowiki&amp;gt;https://yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&amp;lt;/tt&amp;gt;&#039;&#039;&#039;. Log in using the credentials you supplied earlier in the options file.&lt;br /&gt;
&lt;br /&gt;
== Reference ==&lt;br /&gt;
The following reference information is related to Test Unit basics.&lt;br /&gt;
&lt;br /&gt;
=== Wiki ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Units_Overview | Test Units Overview]] - Provides a general overview of Test Units (i.e. writing tests in C and C++)&lt;br /&gt;
* [[Test_Units |Test Unit Packaging]] - Discusses the three types of packaging that can be used for Test Units (we prefer Test Classes even for C programmers)&lt;br /&gt;
* [[Test_Macros|Test Macros]] - Optional macros that provide shortcuts for testing assertions and automatic report annotation (you will want to use these).&lt;br /&gt;
* [[Test_Macros#Note_Macros|Notes]] - Used to add logging information to your &#039;&#039;&#039;test logic&#039;&#039;&#039; (automatically added to test reports)&lt;br /&gt;
* [[Test_Log|Test Logs]] - Used to add logging information to your &#039;&#039;&#039;source code&#039;&#039;&#039; (added to test reports if enabled)&lt;br /&gt;
* [[Tracing|Tracing]] - The Runner supports tracing of logs and test points.&lt;br /&gt;
&lt;br /&gt;
=== Samples ===&lt;br /&gt;
&lt;br /&gt;
* [[Test_Class_Sample|Test Class Sample]] -  Specifically the [[Test_Class_Sample#Basic|Basic Simple]] Test Unit&lt;br /&gt;
* [[Test_Macros_Sample|Test Macro Sample]] - This sample covers simple uses of each of the Test Macros. &lt;br /&gt;
&lt;br /&gt;
Note - The following Test Unit Samples related to packaging are also useful:&lt;br /&gt;
&lt;br /&gt;
* [[Test_CClass_Sample|Test C Class Sample]]&lt;br /&gt;
* [[Test_Function_List_Sample|Test Function List Sample]] &lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13867</id>
		<title>Training Getting Started</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Getting_Started&amp;diff=13867"/>
		<updated>2013-01-28T18:30:38Z</updated>

		<summary type="html">&lt;p&gt;Jeffs: /* Training Confirmation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
Our training approach is based on a self-guided tour of the [[STRIDE_Overview#STRIDE_Testing_Features | STRIDE Testing Features]] using reference examples and assigned exercises. The set of examples and the implemented exercises will be built and executed using a standard desktop computer. &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039;, &#039;&#039;reference examples&#039;&#039;, and &#039;&#039;exercises&#039;&#039; have been designed to be as &#039;&#039;&#039;&#039;&#039;simple as possible while sufficiently demonstrating the topic at hand&#039;&#039;&#039;&#039;&#039;. In particular, the &#039;&#039;software under test&#039;&#039; is &#039;&#039;&#039;very light&#039;&#039;&#039; on core application logic -- the focus instead is on the test code that leverages STRIDE to define and execute tests. &lt;br /&gt;
&lt;br /&gt;
The user is expected to work through each of the training modules covering a specific testing feature. Once the exercises are completed (actual test cases implemented), the results are published to [[STRIDE_Overview#STRIDE_Test_Space | Test Space]] (and validated against a pre-created baseline).&lt;br /&gt;
&lt;br /&gt;
The training collateral consists of the following:&lt;br /&gt;
# The [[STRIDE_Overview#STRIDE_Framework | STRIDE Framework]] used to implement and execute tests&lt;br /&gt;
# A set of specific [[#Recommend_Order | Training Modules]] that will guide you through the exercises&lt;br /&gt;
# [[Main_Page | &#039;&#039;Wiki articles&#039;&#039;]] that will provide background material and other technical information&lt;br /&gt;
# Reference [[C/C%2B%2B_Samples | Samples]]&lt;br /&gt;
# The [[STRIDE_Off-Target_Environment | Off-Target Environment]] &lt;br /&gt;
# and [[STRIDE_Overview#STRIDE_Test_Space | STRIDE Test Space]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For more details concerning STRIDE refer to the following:&lt;br /&gt;
* &#039;&#039;[[STRIDE_Overview_Video | Overview screencast]]&#039;&#039;  &lt;br /&gt;
* &#039;&#039;[[What is Unique About STRIDE | What is Unique About STRIDE]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Types of Testing Supported by STRIDE | Types of Testing Supported]]&#039;&#039; &lt;br /&gt;
* &#039;&#039;[[Frequently Asked Questions About STRIDE | Frequently Asked Questions]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;For questions and support [mailto:training@s2technologies.com email us]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Before Starting ==&lt;br /&gt;
Before you can start setting up your environment, you need the following three items:&lt;br /&gt;
* &#039;&#039;&#039;STRIDE Desktop Installation Package&#039;&#039;&#039; (one of the following)&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-windows_4.3.yy.zip&amp;lt;/tt&amp;gt; (Windows desktop)&lt;br /&gt;
&#039;&#039;or&#039;&#039;&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_framework-linux_4.3.yy.tgz&amp;lt;/tt&amp;gt; (Linux desktop)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Training Source files&#039;&#039;&#039;&lt;br /&gt;
:&amp;lt;tt&amp;gt;STRIDE_training_source.zip&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Test Space User Account URL and Logon Credentials&#039;&#039;&#039;&lt;br /&gt;
** Test Space URL (&amp;lt;tt&amp;gt;https://&amp;lt;i&amp;gt;YourCompany&amp;lt;/i&amp;gt;.stridetestspace.com&amp;lt;/tt&amp;gt;)&lt;br /&gt;
** Username&lt;br /&gt;
** Password&lt;br /&gt;
&lt;br /&gt;
=== Desktop Setup ===&lt;br /&gt;
The training requires that you install the [[Desktop_Installation | Desktop Framework]] and that the [[STRIDE_Off-Target_Environment | Off-Target Environment]] is set up correctly and verified. &lt;br /&gt;
&lt;br /&gt;
For an overview of the installation steps, refer to the [[Installation_Overview | Installation Overview]] article.&lt;br /&gt;
&lt;br /&gt;
The following steps are required:&lt;br /&gt;
# Install your [[Desktop_Installation | desktop Framework package]]&lt;br /&gt;
# Read about the [[STRIDE_Off-Target_Environment | Off-Target Environment]]&lt;br /&gt;
# Install [[STRIDE_Off-Target_Environment#Host_Compiler | host compiler]] for your desktop&lt;br /&gt;
# Use the Off-Target SDK to [[Building_an_Off-Target_Test_App | build a Test App]]&lt;br /&gt;
# [[Building_an_Off-Target_Test_App#Diagnostics | Run STRIDE diagnostics]] against the Test App you just built&lt;br /&gt;
&lt;br /&gt;
=== Training Setup ===&lt;br /&gt;
The following source code can be found in the &#039;&#039;&#039;STRIDE_training_source.zip&#039;&#039;&#039; file:&lt;br /&gt;
&lt;br /&gt;
   software_under_test.c | h&lt;br /&gt;
   TestBasic.cpp | h&lt;br /&gt;
   TestParam.cpp | h&lt;br /&gt;
   TestFixture.cpp | h&lt;br /&gt;
   TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime.cpp | h&lt;br /&gt;
   TestFile.cpp | h&lt;br /&gt;
   TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;software under test&#039;&#039; is contained in &#039;&#039;&#039;software_under_test.c | h&#039;&#039;&#039;. All of its public functions use the &#039;&#039;&#039;sut_&#039;&#039;&#039; prefix, and all training modules test against this file. Although the test examples are contained in C++ files, most of the test logic is written in standard C. &lt;br /&gt;
&lt;br /&gt;
Once the files are obtained, the next step is to rebuild the TestApp using the training source code. Move &#039;&#039;&#039;all&#039;&#039;&#039; of the files into the directory &amp;lt;tt&amp;gt;%STRIDE_DIR%\SDK\Windows\sample_src&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;$STRIDE_DIR/SDK/Posix/sample_src&amp;lt;/tt&amp;gt; for Linux) and follow the instructions for [[Building_an_Off-Target_Test_App | Building a TestApp]].&lt;br /&gt;
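The Linux copy step can be sketched as shell commands. This is an illustration only: it uses a temporary directory as a stand-in for $STRIDE_DIR and creates just a few placeholder files, so the commands can be tried anywhere without a real STRIDE installation.&lt;br /&gt;

```shell
# Illustration only: a temporary directory stands in for $STRIDE_DIR.
STRIDE_DIR=$(mktemp -d)
mkdir -p "$STRIDE_DIR/SDK/Posix/sample_src"

# Placeholder stand-ins for the extracted STRIDE_training_source.zip contents.
mkdir -p training_src
touch training_src/software_under_test.c training_src/software_under_test.h
touch training_src/TestBasic.cpp training_src/TestBasic.h

# Move all of the training files into the SDK sample source directory.
cp training_src/* "$STRIDE_DIR/SDK/Posix/sample_src/"
ls "$STRIDE_DIR/SDK/Posix/sample_src"
```

In a real setup, extract the full STRIDE_training_source.zip and copy every file before rebuilding the TestApp.&lt;br /&gt;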
&lt;br /&gt;
Once built, list all [[Test_Units | Test Units]] within the generated database file using the following command:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
 &amp;gt; stride --database=&amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot; --list&lt;br /&gt;
&lt;br /&gt;
The following remoted Functions and Test Units should be displayed:&lt;br /&gt;
 &lt;br /&gt;
   Functions&lt;br /&gt;
     sut_strcpy(char const * input, char * output) : void&lt;br /&gt;
     strlen(char const * _Str) : size_t&lt;br /&gt;
   Test Units&lt;br /&gt;
     TestBasic()&lt;br /&gt;
     TestDouble_Reference()&lt;br /&gt;
     TestDouble_Definition()&lt;br /&gt;
     TestExpect_Seq()&lt;br /&gt;
     TestExpect_Data()&lt;br /&gt;
     TestExpect_Misc()&lt;br /&gt;
     TestFile()&lt;br /&gt;
     TestFixture()&lt;br /&gt;
     TestParam(int data1, int data2, char * szString)&lt;br /&gt;
     TestRuntime_Static()&lt;br /&gt;
     TestRuntime_Dynamic(int NumberOfTestCases)&lt;br /&gt;
&lt;br /&gt;
=== Run Training Tests ===&lt;br /&gt;
Here we will run all tests in the &amp;lt;tt&amp;gt;TestApp.sidb&amp;lt;/tt&amp;gt; database.&amp;lt;ref&amp;gt;Note that the S2 diagnostic tests are treated separately, and are not run unless the &amp;lt;tt&amp;gt;--diagnostics&amp;lt;/tt&amp;gt; option is specified to &amp;lt;tt&amp;gt;stride&amp;lt;/tt&amp;gt;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* If you haven&#039;t done so already, [[Building_an_Off-Target_Test_App#Build_Steps | Build TestApp]] using the SDK makefile&lt;br /&gt;
* Invoke the &amp;lt;tt&amp;gt;TestApp&amp;lt;/tt&amp;gt; executable found in the &#039;&#039;/out/bin&#039;&#039; directory&lt;br /&gt;
* Create an [[STRIDE_Runner#Options | option file]] (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) in your SDK directory (SDK\Windows or SDK/Posix) with the following content:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Windows&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options ######&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;%STRIDE_DIR%\SDK\Windows\sample_src\TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Linux&#039;&#039;&#039;&lt;br /&gt;
  ##### Command Line Options #####&lt;br /&gt;
  --device &amp;quot;TCP:localhost:8000&amp;quot;&lt;br /&gt;
  --database &amp;quot;$STRIDE_DIR/SDK/Posix/out/TestApp.sidb&amp;quot;&lt;br /&gt;
  --output &amp;quot;$STRIDE_DIR/SDK/Posix/sample_src/TestApp.xml&amp;quot;&lt;br /&gt;
  --log_level all &lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
* If you set up an [[STRIDE_Runner#Environment_Variables | environment variable]] for the &#039;&#039;&#039;device&#039;&#039;&#039; option, it is not required in the option file. Note: command-line options override environment variables.  &lt;br /&gt;
* Run &#039;&#039;stride --help&#039;&#039; for a summary of the available [[STRIDE_Runner#Options | options]]&lt;br /&gt;
&lt;br /&gt;
If you haven&#039;t done so already, start &amp;lt;tt&amp;gt;TestApp&amp;lt;/tt&amp;gt; running in a separate console window.&lt;br /&gt;
&lt;br /&gt;
Now run stride as shown below and verify summary results (starting from the &amp;lt;tt&amp;gt;SDK\Windows&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;SDK/Posix&amp;lt;/tt&amp;gt; directory):&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The output should look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Interpreting Results ===&lt;br /&gt;
Open &#039;&#039;&#039;TestApp.xml&#039;&#039;&#039; in your browser; this file is created in the directory from which you ran &#039;&#039;&#039;stride&#039;&#039;&#039; (or in the directory specified via the &amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt; command-line option). If you were connected to the Internet when you ran the tests, a TestApp.xsl file is also generated in the same directory. When you open TestApp.xml in a web browser, the XSL stylesheet is automatically applied to render the report as HTML. If you use Google Chrome, please see [[Browser Compatibility]].&lt;br /&gt;
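For a quick pass/fail tally without a browser, the report can also be scanned from the command line. The data format below is a hypothetical stand-in (one test name and status per line), used purely to keep the sketch self-contained; the real TestApp.xml is an XML document, so the patterns would need to be adapted to its actual markup.&lt;br /&gt;

```shell
# Hypothetical stand-in results file; the real TestApp.xml is XML and
# its element and attribute names may differ.
printf 'TestBasic.case1 passed\nTestBasic.case2 failed\nTestFixture.setup passed\n' > results.txt

# Tally lines ending in each status keyword.
passed=$(grep -c 'passed$' results.txt)
failed=$(grep -c 'failed$' results.txt)
echo "passed=$passed failed=$failed"
```

With the stand-in data above, this prints passed=2 failed=1.&lt;br /&gt;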
&lt;br /&gt;
If you&#039;re interested in the details of the tests, please see the test documentation contained in the test report.&lt;br /&gt;
&lt;br /&gt;
=== Test Space Access ===&lt;br /&gt;
&lt;br /&gt;
During the training, users implement exercises (test cases). As each exercise is completed, the results are expected to be uploaded to Test Space. Accessing Test Space (uploading, viewing, etc.) requires a user name and password. Before working on a training module, please confirm that your user account has been set up by [[Test_Space_Setup | logging in]]. &lt;br /&gt;
&lt;br /&gt;
Test Space stores expected results as [[Creating_And_Using_Baselines | baselines]], which are used to automatically verify that the exercises have been implemented correctly (at least to some degree). For capturing test results, a &#039;&#039;&#039;Training&#039;&#039;&#039; project has been created with the following &#039;&#039;&#039;Spaces&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
   Sandbox     - Results for training setup&lt;br /&gt;
   TestBasic   - Basics training results: TestBasic.cpp | h&lt;br /&gt;
   TestParam   - Parameters training results: TestParam.cpp | h&lt;br /&gt;
   TestFixture - Fixturing training results: TestFixture.cpp | h&lt;br /&gt;
   TestExpect  - Expectations training results: TestExpect.cpp | h   &lt;br /&gt;
   TestRuntime - Runtime API training results: TestRuntime.cpp | h&lt;br /&gt;
   TestFile    - File IO training results: TestFile.cpp | h&lt;br /&gt;
   TestDouble  - Doubling training results: TestDouble.cpp | h&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To publish your results using the [[STRIDE_Runner | STRIDE Runner]] the following command-line options should be used:&lt;br /&gt;
&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --space &#039;&#039;MODULENAME&#039;&#039; &lt;br /&gt;
  --name &#039;&#039;YOURNAME&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
* Concerning the &#039;&#039;MODULENAME&#039;&#039; option: use the name that corresponds to the training module currently underway (e.g. TestBasic, TestParam, etc.)&lt;br /&gt;
* &#039;&#039;YOURNAME&#039;&#039; should be set to your name (e.g. JohnD); omit spaces from this string&lt;br /&gt;
* If you access the Internet via an HTTP proxy please read [[STRIDE_Runner#Using_a_Proxy | &#039;&#039;&#039;this article&#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
To simplify things for now, we recommend updating your existing option file (&amp;lt;tt&amp;gt;myoptions.txt&amp;lt;/tt&amp;gt;) with the following: &lt;br /&gt;
&lt;br /&gt;
  #### Test Space options (partial) #####&lt;br /&gt;
  #### Note - make sure to change username, etc. ####&lt;br /&gt;
  --testspace &amp;lt;nowiki&amp;gt;https://username:password@yourcompany.stridetestspace.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  --project Training&lt;br /&gt;
  --name YOURNAME&lt;br /&gt;
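Putting the two fragments together, a complete Windows option file for publishing might look like the following (for Linux, substitute the $STRIDE_DIR/SDK/Posix paths shown earlier; username, password, yourcompany, and YOURNAME are placeholders to replace with your own values):&lt;br /&gt;

```text
##### Command Line Options ######
--device "TCP:localhost:8000"
--database "%STRIDE_DIR%\SDK\Windows\out\TestApp.sidb"
--output "%STRIDE_DIR%\SDK\Windows\sample_src\TestApp.xml"
--log_level all

#### Test Space options #####
--testspace https://username:password@yourcompany.stridetestspace.com
--project Training
--name YOURNAME
```

The --space option is intentionally left to the command line so it can vary per training module.&lt;br /&gt;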
&lt;br /&gt;
=== Publish Test Space Results ===&lt;br /&gt;
&lt;br /&gt;
To complete the setup, publish your results to Test Space: &lt;br /&gt;
&lt;br /&gt;
   &amp;gt; stride --options_file myoptions.txt --run &amp;quot;*&amp;quot; --space Sandbox --upload&lt;br /&gt;
&lt;br /&gt;
  Loading database...&lt;br /&gt;
  Connecting to device...&lt;br /&gt;
  Executing...&lt;br /&gt;
  test unit &amp;quot;TestBasic&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Definition&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestDouble_Reference&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Data&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Misc&amp;quot;&lt;br /&gt;
    &amp;gt; 3 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestExpect_Seq&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFile&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestFixture&amp;quot;&lt;br /&gt;
    &amp;gt; 1 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestParam&amp;quot;&lt;br /&gt;
    &amp;gt; 0 passed, 1 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Dynamic&amp;quot;&lt;br /&gt;
    &amp;gt; 4 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  test unit &amp;quot;TestRuntime_Static&amp;quot;&lt;br /&gt;
    &amp;gt; 2 passed, 0 failed, 0 in progress, 1 not in use.&lt;br /&gt;
  --------------------------------------------------------------------- &lt;br /&gt;
  Summary: 21 passed, 3 failed, 0 in progress, 11 not in use.&lt;br /&gt;
  Disconnecting from device...&lt;br /&gt;
  Saving result file...&lt;br /&gt;
  Uploading to test space...&lt;br /&gt;
&lt;br /&gt;
=== Confirming Setup ===&lt;br /&gt;
&lt;br /&gt;
After you have run and uploaded the results of the training setup, you should confirm the correctness of your work. The training setup results will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your setup:&lt;br /&gt;
# Navigate to the Sandbox result set in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns. If so, you have completed the setup and are ready to move on to the training modules.&lt;br /&gt;
&lt;br /&gt;
[[image:STRIDE_results_012813.JPG|none|600px| Test Space baseline example]]&lt;br /&gt;
&lt;br /&gt;
== Training ==&lt;br /&gt;
At this point you should be ready to start the actual training. There are &#039;&#039;&#039;7 separate training modules&#039;&#039;&#039; and we recommend the following order:&lt;br /&gt;
&lt;br /&gt;
===Introductory===&lt;br /&gt;
* [[Training_Basics | &#039;&#039;&#039;Basics&#039;&#039;&#039;]] - Covers basics of implementing and executing test cases&lt;br /&gt;
* [[Training_Parameters |&#039;&#039;&#039;Parameters&#039;&#039;&#039;]] - How to pass parameters to a test &lt;br /&gt;
* [[Training_Fixturing | &#039;&#039;&#039;Fixturing&#039;&#039;&#039;]] - Leveraging setup and teardown features&lt;br /&gt;
&lt;br /&gt;
===Advanced===&lt;br /&gt;
* [[Training_Expectations | &#039;&#039;&#039;Expectations&#039;&#039;&#039;]] - Validating code sequencing along with state data&lt;br /&gt;
* [[Training_Runtime_API | &#039;&#039;&#039;Runtime API&#039;&#039;&#039;]] - Using the runtime services to dynamically create test suites and cases&lt;br /&gt;
* [[Training_File_IO | &#039;&#039;&#039;File IO&#039;&#039;&#039;]] - Reading and writing files existing on the host&lt;br /&gt;
* [[Training_Doubling | &#039;&#039;&#039;Doubling&#039;&#039;&#039;]] - Replacing a dependency with a stub, fake, or mock&lt;br /&gt;
&lt;br /&gt;
== Training Confirmation ==&lt;br /&gt;
&lt;br /&gt;
As you run and upload each test unit containing your worked training exercises, you should confirm the correctness of your work.&lt;br /&gt;
&lt;br /&gt;
The correctly worked training test units will end up with a mix of passes and failures, so we use a [[Creating_And_Using_Baselines | Test Space baseline]] to get an aggregate comparison between expected and actual results.&lt;br /&gt;
&lt;br /&gt;
To validate your work:&lt;br /&gt;
# Navigate to your result set in Test Space and look under the column labeled &#039;&#039;&#039;&#039;&#039;SEQUENTIAL COMPARISON&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
# Confirm that &#039;&#039;&#039;&#039;&#039;none&#039;&#039;&#039;&#039;&#039; is indicated for each of the sub-columns.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Jeffs</name></author>
	</entry>
</feed>