<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.stridewiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mikee</id>
	<title>STRIDE Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.stridewiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mikee"/>
	<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Special:Contributions/Mikee"/>
	<updated>2026-04-30T08:31:57Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.10</generator>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Expectations_Sample&amp;diff=12930</id>
		<title>Expectations Sample</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Expectations_Sample&amp;diff=12930"/>
		<updated>2010-07-13T23:06:03Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* Tests Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
This example demonstrates a simple technique to monitor and test activity occurring in instrumented source code on the device from test logic implemented on the host. This sample shows a common testing scenario - namely, verifying the behavior of a state machine.&lt;br /&gt;
&lt;br /&gt;
If you are not familiar with test points you may find it helpful to review the [[Test Point]] article before proceeding.&lt;br /&gt;
&lt;br /&gt;
== Source under test ==&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;tt&amp;gt;s2_expectations_source.c / h&amp;lt;/tt&amp;gt; ===&lt;br /&gt;
These files implement a simple state machine that we wish to test. The state machine runs when &#039;&#039;&#039;Exp_DoStateChanges&#039;&#039;&#039; is executed, which is a function we have also instrumented so it can be remotely invoked from the host.&lt;br /&gt;
&lt;br /&gt;
The expected state transitions are as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eSTART -&amp;gt; eIDLE -&amp;gt; eACTIVE -&amp;gt; eIDLE -&amp;gt; eEND&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The states don&#039;t do any work; instead they just sleep() so there&#039;s some time spent in each one.&lt;br /&gt;
&lt;br /&gt;
Each state transition is managed through a call to SetNewState() which communicates the state transition to the test thread using the srTEST_POINT() macro. We also provide an incrementing counter value as data to each of these test points - this data is used for validation in one of our example scenarios. The source under test has also been instrumented with a test point to illustrate the use of JSON as a textual serialization format (which is easily decoded on the host).&lt;br /&gt;
&lt;br /&gt;
== Tests Description ==&lt;br /&gt;
&lt;br /&gt;
=== s2_expectations_testmodule ===&lt;br /&gt;
&lt;br /&gt;
This example implements four tests of the state machine implemented in s2_expectations_source. These tests demonstrate the use of the [[Perl Script APIs]] to validate expectations.&lt;br /&gt;
&lt;br /&gt;
Each test follows the same pattern in preparing and using the test point feature:&lt;br /&gt;
# Call &#039;&#039;&#039;TestPointSetup&#039;&#039;&#039; with the order parameter as well as the expected and unexpected lists. &lt;br /&gt;
# Invoke target processing by calling the remote function &#039;&#039;&#039;Exp_DoStateChanges&#039;&#039;&#039;.&lt;br /&gt;
# Use &#039;&#039;&#039;Check&#039;&#039;&#039; or &#039;&#039;&#039;Wait&#039;&#039;&#039; on the test point object to process the expectations.&lt;br /&gt;
&lt;br /&gt;
We create an &amp;quot;expectation&amp;quot; of activity and then validate the observed activity against the expectation using rules that we specify. If the expectation is met, the test passes; if the expectation is not met, the test fails.&lt;br /&gt;
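&lt;br /&gt;
The fragment below sketches this pattern in perl. The option names and calling conventions shown here are illustrative only - the setup and remote invocation signatures may differ, so consult the [[Perl Script APIs]] article for the actual interface.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# illustrative sketch of the setup / execute / check pattern&lt;br /&gt;
my $tp = TestPointSetup(&lt;br /&gt;
    expected   =&amp;gt; [&#039;START&#039;, &#039;IDLE&#039;, &#039;ACTIVE&#039;, &#039;IDLE&#039;, &#039;END&#039;],  # 1. declare the expectation&lt;br /&gt;
    unexpected =&amp;gt; []);&lt;br /&gt;
Exp_DoStateChanges();    # 2. invoke the remote function under test (proxy call shown schematically)&lt;br /&gt;
$tp-&amp;gt;Check();           # 3. process the expectations (or Wait with a timeout)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;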
&lt;br /&gt;
==== sync_exact ====&lt;br /&gt;
&lt;br /&gt;
This test implements a basic &#039;&#039;&#039;ordered&#039;&#039;&#039; and &#039;&#039;&#039;strict&#039;&#039;&#039; expectation test (ordered/strict&lt;br /&gt;
processing is the default behavior for &#039;&#039;TestPointSetup&#039;&#039;, so there is no need to explicitly&lt;br /&gt;
specify these settings). An ordered/strict test expects the declared events to occur in&lt;br /&gt;
the order specified and with no other intervening events from among those already declared in&lt;br /&gt;
the list.&lt;br /&gt;
&lt;br /&gt;
The test strictness normally applies to the universe of declared test points - that is, the union&lt;br /&gt;
of all points declared in the expected and unexpected lists. However, for this example, we want to&lt;br /&gt;
ensure that &#039;&#039;&#039;no other&#039;&#039;&#039; test points are encountered during the test. As such, we specify an&lt;br /&gt;
unexpected list containing &#039;&#039;&#039;TEST_POINT_EVERYTHING_ELSE&#039;&#039;&#039; which makes this check entirely exclusive&lt;br /&gt;
on the expected list.&lt;br /&gt;
&lt;br /&gt;
==== sync_loose ====&lt;br /&gt;
This test relaxes some of the restrictions of the previous test by&lt;br /&gt;
specifying &#039;&#039;&#039;unordered&#039;&#039;&#039; processing AND removing the &#039;&#039;&#039;unexpected&#039;&#039;&#039; list.&lt;br /&gt;
As a result, this test simply validates that the declared test points&lt;br /&gt;
occur with the specified frequency during the processing. This test &#039;&#039;does not&#039;&#039;&lt;br /&gt;
validate the ordering of the events nor does it exclude other events that are&lt;br /&gt;
outside the expectation universe (that is, test points NOT mentioned in the expectation&lt;br /&gt;
list).&lt;br /&gt;
&lt;br /&gt;
Note that the &#039;&#039;&#039;IDLE&#039;&#039;&#039; testpoint is now included in the expected array only once, but with an &lt;br /&gt;
expected count of 2. This technique is common when using unordered processing.&lt;br /&gt;
&lt;br /&gt;
This test will fail only if the expected testpoints are &lt;br /&gt;
not all seen (the specified number of times) during the processing window.&lt;br /&gt;
&lt;br /&gt;
==== async_loose ====&lt;br /&gt;
This test is identical to the sync_loose test, except that we call &#039;&#039;&#039;Wait()&#039;&#039;&#039;&lt;br /&gt;
and pass a timeout value of 200 milliseconds, which will result in a test failure, as it &lt;br /&gt;
takes approximately 600 milliseconds for the testpoint expectations to be satisfied&lt;br /&gt;
(the source under test includes artificial delays between each state transition). &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;This test is expected to fail&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== check_data ====&lt;br /&gt;
This test is identical to sync_loose, except that we use &#039;&#039;&#039;ordered&#039;&#039;&#039; processing&lt;br /&gt;
for the expectation set &#039;&#039;&#039;and&#039;&#039;&#039; we specify expected data for some of our test points. This &lt;br /&gt;
test will pass only if the test points are seen in the specified order and match both the label &lt;br /&gt;
and data specified.&lt;br /&gt;
&lt;br /&gt;
For the test points with binary data, we have to use the perl [http://perldoc.perl.org/functions/pack.html pack()] function to create a scalar &lt;br /&gt;
value that has the proper bit pattern. Whenever you are validating target data, you will need to &lt;br /&gt;
take into account byte ordering for basic types. Here we assume the target has the same byte &lt;br /&gt;
ordering as the host. A more scalable approach to data validation from the target involves&lt;br /&gt;
using string-based serialization formats (such as JSON).&lt;br /&gt;
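&lt;br /&gt;
For example, a minimal sketch of packing a 32-bit counter value (how the packed scalar is attached to the expectation entry is not shown and follows the [[Perl Script APIs]]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# pack a 32-bit unsigned counter value using the host&#039;s native byte order&lt;br /&gt;
my $expected_data = pack(&#039;L&#039;, 3);&lt;br /&gt;
# if the target byte order differed from the host, an explicit format would be needed&lt;br /&gt;
# instead, e.g. pack(&#039;N&#039;, 3) for big-endian or pack(&#039;V&#039;, 3) for little-endian&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;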
&lt;br /&gt;
==== trace_data ====&lt;br /&gt;
This test is similar to sync_exact, except the expectations are loaded from an&lt;br /&gt;
expected data file that was created using the &amp;lt;tt&amp;gt;--trace&amp;lt;/tt&amp;gt; option on the [[STRIDE Runner]].&lt;br /&gt;
We&#039;ve also removed the unexpected list so that the items in the trace file are not&lt;br /&gt;
treated as an exclusive set.&lt;br /&gt;
&lt;br /&gt;
By default, tests that use an expect data file perform validation based only on &lt;br /&gt;
the test point label. If you want data comparison to be performed, you need to &lt;br /&gt;
specify a global predicate via the &#039;&#039;&#039;predicate&#039;&#039;&#039; option.&lt;br /&gt;
&lt;br /&gt;
==== trace_data_validation ====&lt;br /&gt;
&lt;br /&gt;
This test is identical to &#039;&#039;trace_data&#039;&#039;, but with the addition of data validation&lt;br /&gt;
using the provided [[Perl_Script_APIs#Builtin_Predicates|TestPointDefaultCmp predicate]].&lt;br /&gt;
&lt;br /&gt;
==== trace_data_custom_predicate  ====&lt;br /&gt;
This test is identical to &#039;&#039;trace_data&#039;&#039; except that a custom predicate&lt;br /&gt;
is specified for the data validation. The custom predicate in this example&lt;br /&gt;
just validates binary data using the standard memory comparison and implicitly&lt;br /&gt;
passes for any non-binary payloads.&lt;br /&gt;
&lt;br /&gt;
==== json_data ====&lt;br /&gt;
This test demonstrates the use of [http://www.json.org  JSON] formatted string data from the target in&lt;br /&gt;
predicate validation. The string payload for the test point is decoded&lt;br /&gt;
using the standard perl JSON library and the object&#039;s fields are validated&lt;br /&gt;
in the predicate function above.&lt;br /&gt;
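&lt;br /&gt;
A predicate along these lines could perform that validation (the way the string payload reaches the predicate and the field name used are assumptions for illustration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use JSON;&lt;br /&gt;
&lt;br /&gt;
sub check_json_payload {&lt;br /&gt;
    my ($payload) = @_;               # string payload of the test point (accessor not shown)&lt;br /&gt;
    my $obj = decode_json($payload);  # decode with the standard perl JSON library&lt;br /&gt;
    # field name is illustrative - validate whatever the instrumented code sent&lt;br /&gt;
    return ($obj-&amp;gt;{state} eq &#039;eIDLE&#039;) ? 1 : 0;   # 1 = pass, 0 = fail&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;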
&lt;br /&gt;
==== non_strict_ordered  ====&lt;br /&gt;
This test demonstrates the use of non-strict processing which allows&lt;br /&gt;
other occurrences of test points within your current universe. In this test,&lt;br /&gt;
there are other occurrences of the &#039;&#039;&#039;SET_NEW_STATE&#039;&#039;&#039; event that we have not&lt;br /&gt;
explicitly specified in our expectation list. Had we been using strict&lt;br /&gt;
processing (as is the default), these extra occurrences would have caused&lt;br /&gt;
the test to fail. Because we have specified non-strict processing, the&lt;br /&gt;
test passes.&lt;br /&gt;
&lt;br /&gt;
==== unexpected  ====&lt;br /&gt;
This test demonstrates the use of the unexpected list to ensure&lt;br /&gt;
that specific test points are not hit during the check. In this case,&lt;br /&gt;
we specify JSON_DATA in our unexpected list - however, since this&lt;br /&gt;
test point &#039;&#039;does&#039;&#039; actually occur during our processing, the test fails.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;This test is expected to fail&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== startup_predicate  ====&lt;br /&gt;
This test demonstrates the use of the special &#039;&#039;&#039;TEST_POINT_ANYTHING&#039;&#039;&#039;&lt;br /&gt;
specification to implement startup logic. Sometimes it is desirable&lt;br /&gt;
to suspend processing of your expectation list until a certain event&lt;br /&gt;
occurs. Startup predicate logic allows you to accomplish this, as this&lt;br /&gt;
example demonstrates. In this somewhat contrived example, we run our event&lt;br /&gt;
scenario twice consecutively. The startup logic waits for the &#039;&#039;&#039;END&#039;&#039;&#039; event&lt;br /&gt;
to occur. Once it does, the predicate returns true and the remaining items&lt;br /&gt;
are processed in turn.&lt;br /&gt;
&lt;br /&gt;
==== anything  ====&lt;br /&gt;
Similar to the &#039;&#039;&#039;startup_predicate&#039;&#039;&#039; test, this example demonstrates&lt;br /&gt;
the use of &#039;&#039;&#039;TEST_POINT_ANYTHING&#039;&#039;&#039; in &#039;&#039;the middle&#039;&#039; of an expectation list&lt;br /&gt;
to suspend further processing of the list until a condition is satisfied.&lt;br /&gt;
The proceed condition is enforced by a predicate, thus any expectation&lt;br /&gt;
entries using TEST_POINT_ANYTHING &#039;&#039;&#039;must&#039;&#039;&#039; include a predicate to validate&lt;br /&gt;
the condition. In this example, the TEST_POINT_ANYTHING suspends processing&lt;br /&gt;
of the list until it sees the &#039;&#039;&#039;IDLE&#039;&#039;&#039; event.&lt;br /&gt;
&lt;br /&gt;
==== predicate_validation ====&lt;br /&gt;
This test shows that you can use a custom predicate along with&lt;br /&gt;
&#039;&#039;&#039;TEST_POINT_ANY_AT_ALL&#039;&#039;&#039;. In this way, you can force all validation&lt;br /&gt;
to be handled in your predicate. The predicate should return&lt;br /&gt;
0 to fail the test, 1 to pass the test, or &#039;&#039;&#039;TEST_POINT_IGNORE&#039;&#039;&#039; to&lt;br /&gt;
continue processing other test points.&lt;br /&gt;
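&lt;br /&gt;
A skeleton for such a predicate might look like the following (how the test point label and data are exposed to the predicate is an assumption here, not the documented API):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sub validate_everything {&lt;br /&gt;
    my ($tp) = @_;                              # test point passed to the predicate (assumed shape)&lt;br /&gt;
    return TEST_POINT_IGNORE                    # keep processing other test points&lt;br /&gt;
        unless $tp-&amp;gt;{label} eq &#039;SET_NEW_STATE&#039;;&lt;br /&gt;
    return ($tp-&amp;gt;{data} =~ /eEND/) ? 1 : 0;     # 1 passes the test, 0 fails it&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;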
&lt;br /&gt;
This technique should only be used in cases where the other&lt;br /&gt;
more explicit expectation list techniques are not sufficient.&lt;br /&gt;
&lt;br /&gt;
==== multiple_handlers ====&lt;br /&gt;
&lt;br /&gt;
This test shows how you can use multiple test point setups simultaneously, with each&lt;br /&gt;
processing a different set of expectations. This can be a powerful technique and&lt;br /&gt;
often produces more understandable test scenarios, particularly when you decompose&lt;br /&gt;
your expectations into small, easy-to-digest expectation lists.&lt;br /&gt;
&lt;br /&gt;
In this specific case, we create one handler that expects START and END in&lt;br /&gt;
sequence, and another that just verifies that the IDLE event happens twice. We&lt;br /&gt;
deliberately create both handlers before starting the processing so that&lt;br /&gt;
each handler will have access to the stream of events as they happen during&lt;br /&gt;
processing.&lt;br /&gt;
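&lt;br /&gt;
A minimal sketch (setup option names are illustrative, as in the earlier examples):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# create both handlers before starting the processing so that each one&lt;br /&gt;
# sees the full stream of events&lt;br /&gt;
my $seq  = TestPointSetup(expected =&amp;gt; [&#039;START&#039;, &#039;END&#039;]);                # ordered pair&lt;br /&gt;
my $idle = TestPointSetup(expected =&amp;gt; [&#039;IDLE&#039;, &#039;IDLE&#039;], ordered =&amp;gt; 0);  # IDLE seen twice&lt;br /&gt;
Exp_DoStateChanges();&lt;br /&gt;
$seq-&amp;gt;Check();&lt;br /&gt;
$idle-&amp;gt;Check();&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;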
&lt;br /&gt;
[[Category:Samples]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Expectations&amp;diff=12929</id>
		<title>Expectations</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Expectations&amp;diff=12929"/>
		<updated>2010-07-13T22:25:24Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* Members */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
STRIDE supports &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039;, which is different than Unit Testing or API Testing in that it does not focus on calling functions and validating return values. Behavior Testing validates the &#039;&#039;expected sequencing and state of the software&#039;&#039; executing under normal operating conditions. In order to begin this type of testing, you must &#039;&#039;&#039;(1)&#039;&#039;&#039; instrument the source under test with [[Test Point | Test Points]] and &#039;&#039;&#039;(2)&#039;&#039;&#039; define the &#039;&#039;expectations&#039;&#039; of the Test Points for each test scenario. To learn more about the uniqueness of Behavior Testing [[What_is_Unique_About_STRIDE#STRIDE_includes_behavior-based_testing_techniques | read here]].&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Validation Model&#039;&#039;&#039; is based on validating that the Test Points in the &#039;&#039;Expected List&#039;&#039; have been hit, as well as optionally validating any &#039;&#039;state data&#039;&#039; associated with a Test Point member. Through the use of an &#039;&#039;Unexpected List&#039;&#039;, it&#039;s possible to simultaneously validate that specified Test Points are NOT hit during the execution of the scenario. &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;Expected List&#039;&#039; and &#039;&#039;Unexpected List&#039;&#039; declared for a specific testing scenario define the &#039;&#039;active&#039;&#039; set of Test Points for the system. All other Test Points within the software are not active and have nominal impact. To learn more about testable software builds [[What_is_Unique_About_STRIDE#STRIDE_software_builds_are_both_functional_and_testable | read here]].&lt;br /&gt;
&lt;br /&gt;
== Expected List ==&lt;br /&gt;
&lt;br /&gt;
The first step in defining &#039;&#039;&#039;Expectations&#039;&#039;&#039; is describing the set of Test Points expected to be hit during a test scenario. This is the &#039;&#039;&#039;list&#039;&#039;&#039; of Test Points and the expected number of hits (&#039;&#039;&#039;count&#039;&#039;&#039;) for each of the Test Points. Any expected &#039;&#039;&#039;state data&#039;&#039;&#039; associated with a Test Point that requires validation should be included. &lt;br /&gt;
&lt;br /&gt;
The validation of the &#039;&#039;Expected List&#039;&#039; is done when any of the following conditions are met:&lt;br /&gt;
* The &#039;&#039;Expected List&#039;&#039; sequencing has been met completely&lt;br /&gt;
* The time allocated for validation has expired&lt;br /&gt;
* State data associated with a Test Point is declared invalid&lt;br /&gt;
* An out-of-sequence Test Point has been encountered (a concrete failure is identified)&lt;br /&gt;
* An [[Expectations#Unexpected_List | Unexpected Test Point]] has been encountered&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Expected TEST POINTS list:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;10&amp;quot; style=&amp;quot;align:left;&amp;quot;  &lt;br /&gt;
| &#039;&#039;&#039;Label&#039;&#039;&#039; &lt;br /&gt;
| &#039;&#039;&#039;Count&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Expected State Data&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Name 1 &lt;br /&gt;
|   1&lt;br /&gt;
| &#039;&#039;&amp;lt;describe data payload validation requirements if applicable&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Name 2&lt;br /&gt;
| 1 +&lt;br /&gt;
| &#039;&#039;&amp;lt;describe data payload validation requirements if applicable&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;name ...&#039;&#039;&lt;br /&gt;
|  &#039;&#039;n&#039;&#039;&lt;br /&gt;
| &#039;&#039;&amp;lt;...&amp;gt;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Sequencing Properties ===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;Expected List&#039;&#039; requires defining &#039;&#039;sequencing properties&#039;&#039; used for the validation. This involves establishing how to process the &#039;&#039;&#039;ordering&#039;&#039;&#039; of the Test Points being hit and how &#039;&#039;&#039;strictly&#039;&#039;&#039; to handle duplications of Test Point members.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Expected TEST POINTS sequencing properties:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;10&amp;quot; style=&amp;quot;align:left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;Properties&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039; Description&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Ordered&#039;&#039; &lt;br /&gt;
| Test Points are expected to be hit in the exact order defined in the list&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Unordered&#039;&#039; &lt;br /&gt;
| Test Points can be hit in any order defined in the list &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Strict&#039;&#039;&lt;br /&gt;
| Test Points specified in the list must match the exact count (i.e. no duplication)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Non-Strict&#039;&#039;&lt;br /&gt;
| Test Points specified in the list can be duplicated anywhere in the sequence &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== State Data Validation ===&lt;br /&gt;
&lt;br /&gt;
State data validation for a Test Point member is optional. The &#039;&#039;Expected List&#039;&#039; can associate a &#039;&#039;predicate&#039;&#039;&amp;lt;ref name=&amp;quot;predicate&amp;quot;&amp;gt;Defined as a necessary condition of the Test Point being valid -- logic within the predicate determines if &#039;&#039;valid&#039;&#039;, &#039;&#039;invalid&#039;&#039;, or &#039;&#039;ignored&#039;&#039; &amp;lt;/ref&amp;gt; as a necessary condition for validation of a Test Point member when being processed. The predicate provides an extra level of validation (beyond sequencing) focusing on the state data associated with the Test Point. &lt;br /&gt;
&lt;br /&gt;
The predicate can declare the Test Point:&lt;br /&gt;
* Valid - successful, thus the expected count is decremented&lt;br /&gt;
* Invalid - the expectation is NOT met, indicating failure&lt;br /&gt;
* Ignored - no effect on the validation, continue to wait on the current Test Point&lt;br /&gt;
&lt;br /&gt;
== Unexpected List ==&lt;br /&gt;
&lt;br /&gt;
An &#039;&#039;Unexpected List&#039;&#039; of Test Points can optionally be defined. This list works in conjunction with the [[Expectations#Expected_List | &#039;&#039;Expected List&#039;&#039;]]. The list defines a set of Test Points that are to be treated as failures if any one of them is encountered (hit) during a testing scenario.&lt;br /&gt;
&lt;br /&gt;
This list can be made up of the &#039;&#039; &#039;&#039;&#039;complement&#039;&#039;&#039; &#039;&#039; of the &#039;&#039;Expected List&#039;&#039;, thus including &#039;&#039;all&#039;&#039; Test Points not included in that defined set. This type of setting requires enabling all Test Points defined in the software during the execution of the test.&lt;br /&gt;
&lt;br /&gt;
If the &#039;&#039;Unexpected List&#039;&#039; members intersect with the &#039;&#039;Expected List&#039;&#039; members, that is considered an input error.&lt;br /&gt;
&lt;br /&gt;
== Special Processing ==&lt;br /&gt;
&lt;br /&gt;
=== Members ===&lt;br /&gt;
There is one &#039;&#039;special member&#039;&#039; that can be used for the &#039;&#039;Expected List&#039;&#039; called &#039;&#039;&#039;ANYTHING&#039;&#039;&#039;. This special member value is used to customize the success criteria (i.e. trigger condition) for the entry defined within the &#039;&#039;Expected List&#039;&#039;. It requires a predicate&amp;lt;ref name=&amp;quot;predicate&amp;quot;/&amp;gt; to determine if the Test Point is &#039;&#039;valid, invalid, or ignored&#039;&#039;. The &#039;&#039;&#039;ANYTHING&#039;&#039;&#039; refers to the set of points defined by the &#039;&#039;Expected List&#039;&#039;.  &lt;br /&gt;
&lt;br /&gt;
For an &#039;&#039;&#039;Unordered&#039;&#039;&#039; &#039;&#039;Expected List&#039;&#039;, only one special member can be used; it will be processed first, independent of its location within the list. The &#039;&#039;&#039;Count&#039;&#039;&#039; attribute is also optionally available to be used with the special members, indicating the number of valid returns from the predicate until the next Test Point in the list is validated. &lt;br /&gt;
&lt;br /&gt;
There is one &#039;&#039;special member&#039;&#039; that can be used for the &#039;&#039;Unexpected List&#039;&#039; called &#039;&#039;&#039;EVERYTHING ELSE&#039;&#039;&#039;. This special member can only be used as a &#039;&#039;&#039;single member&#039;&#039;&#039; within an &#039;&#039;Unexpected List&#039;&#039;. It takes the &#039;&#039;complement&#039;&#039; of the set of Test Points defined within the &#039;&#039;Expected List&#039;&#039;. This requires enabling all Test Points defined in the software during the execution of the test. Also, by definition, any Test Point hit during the test that is not in the &#039;&#039;Expected List&#039;&#039; would be deemed a failure.&lt;br /&gt;
&lt;br /&gt;
=== Counts ===&lt;br /&gt;
There is one &#039;&#039;special count&#039;&#039; value that can be associated with a Test Point defined in the &#039;&#039;Expected List&#039;&#039; called &#039;&#039;&#039;ANY COUNT&#039;&#039;&#039;. This special count is used to capture a variable number of Test Point hits -- 0 or more. In essence, the Test Point has no consequence on the &#039;&#039;Expected List&#039;&#039; sequencing, except if a predicate&amp;lt;ref name=&amp;quot;predicate&amp;quot;/&amp;gt; associated with the Test Point returns &#039;&#039;invalid&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
The following use cases provide examples of defining &#039;&#039;&#039;Expectations&#039;&#039;&#039; based on different testing scenarios.&lt;br /&gt;
&lt;br /&gt;
=== Sequencing Properties ===&lt;br /&gt;
Use case examples for validating the [[Expectations#Sequencing_Properties | sequencing]] of Test Points.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expecting an exact ordered sequence of Test Points to be hit&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A,B,C,D]; &#039;&#039;ORDERED, STRICT&#039;&#039;&lt;br /&gt;
  Hits   = [A,B,C,D]   - PASS&lt;br /&gt;
  Hits   = [A,B,D,C]   - FAIL (D hit before C)&lt;br /&gt;
&lt;br /&gt;
Expecting an exact ordered sequence of Test Points but with duplicates &lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A,B,C,D]; &#039;&#039;ORDERED, STRICT&#039;&#039;&lt;br /&gt;
  Hits   = [A,B,B,C,D] - FAIL (B duplicated with strict defined)&lt;br /&gt;
&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A,B,C,D]; &#039;&#039;ORDERED, NON-STRICT&#039;&#039;&lt;br /&gt;
  Hits   = [A,B,B,C,D] - PASS&lt;br /&gt;
  Hits   = [A,C,B,C,D] - PASS&lt;br /&gt;
  Hits   = [A,B,C,B,D] - PASS&lt;br /&gt;
&lt;br /&gt;
Expecting a set of Test Points to be hit, but the order does not matter&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A,B,C,D]; &#039;&#039;UNORDERED, STRICT&#039;&#039;&lt;br /&gt;
  Hits   = [B,A,C,D]   - PASS&lt;br /&gt;
  Hits   = [B,A,C,B,D] - FAIL (B hit twice)&lt;br /&gt;
&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A,B,C,D]; &#039;&#039;UNORDERED, NON-STRICT&#039;&#039;&lt;br /&gt;
  Hits   = [B,A,C,B,D] - PASS &lt;br /&gt;
&lt;br /&gt;
Expecting a set of Test Points to be hit, but order and duplications do not matter&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A,B,C,D]; &#039;&#039;UNORDERED, NON-STRICT&#039;&#039;&lt;br /&gt;
  Hits   = [B,A,C,B,D] - PASS&lt;br /&gt;
  Hits   = [B,A,A,A,D] - FAIL (C never hit)&lt;br /&gt;
&lt;br /&gt;
=== Count Attribute ===&lt;br /&gt;
Use case examples that leverage the [[Expectations#Count | Count]] attribute associated with a list member.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expecting a sequence of Test Points, including multiple hits for 2 members&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A,B:3,C,D:2]; &#039;&#039;ORDERED, STRICT&#039;&#039;&lt;br /&gt;
  Hits   = [A,B,B,B,C,D,D]   - PASS&lt;br /&gt;
  Hits   = [A,B,B,C,B,D,D]   - FAIL (B not hit three times before C)&lt;br /&gt;
&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A,B:3,C,D:2]; &#039;&#039;ORDERED, NON-STRICT&#039;&#039;&lt;br /&gt;
  Hits   = [A,B,B,B,C,B,D,D]  - PASS&lt;br /&gt;
&lt;br /&gt;
Expecting a set of Test Points being hit, including multiple hits (with duplications), but order does not matter&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A,B:3,C,D]; &#039;&#039;UNORDERED, STRICT&#039;&#039;&lt;br /&gt;
  Hits   = [B,A,B,C,B,D]     - PASS&lt;br /&gt;
  Hits   = [B,A,B,B,C,B,D]   - FAIL (B duplicated on 4th hit)&lt;br /&gt;
&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A,B:3,C,D]; &#039;&#039;UNORDERED, NON-STRICT&#039;&#039;&lt;br /&gt;
  Hits   = [B,A,B,B,C,B,D]   - PASS&lt;br /&gt;
&lt;br /&gt;
=== State Data Validation ===&lt;br /&gt;
Use case examples leveraging predicates&amp;lt;ref name=&amp;quot;predicate&amp;quot;/&amp;gt; to [[Expectations#State_Data_Validation | validate state data]].&lt;br /&gt;
&lt;br /&gt;
Simple example associating a predicate with the Test Point named &#039;&#039;A&#039;&#039;.&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A:p1(tp),B,C,D]; &#039;&#039;ORDERED, STRICT&#039;&#039;&lt;br /&gt;
  p1(tp) { return VALID }&lt;br /&gt;
  Hits   = [A,B,C,D]   - PASS&lt;br /&gt;
&lt;br /&gt;
Expecting Test Point &#039;&#039;B&#039;&#039; to be hit 3 times&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A,B:3:p1(tp),C,D]; &#039;&#039;UNORDERED, STRICT&#039;&#039;&lt;br /&gt;
  p1(tp) { return VALID unless 3rd time called, then return INVALID }&lt;br /&gt;
  Hits   = [B,A,B,C,B,D]     - FAIL (predicate returns invalid on 3rd B hit)&lt;br /&gt;
&lt;br /&gt;
=== Special Members ===&lt;br /&gt;
Use case examples using the two [[Expectations#Members | special members]] within an &#039;&#039;Expected List&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Waiting on a single trigger (D) to start the processing of an Expected List&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [ANYTHING:p1(tp),A,B,C:2,D]; &#039;&#039;ORDERED, STRICT&#039;&#039;&lt;br /&gt;
  p1(tp) {return VALID when D hit otherwise IGNORE}&lt;br /&gt;
  Hits   = [A,B,A,D*,A,B,C,C,D] - PASS&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Waiting on multiple triggers - 1st trigger used to start processing and 2nd used midstream&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [ANYTHING:p1(tp),A,B,ANYTHING:p2(tp),C:2,D,E]; &#039;&#039;ORDERED, STRICT&#039;&#039;&lt;br /&gt;
  p1(tp) {return VALID when D hit}&lt;br /&gt;
  p2(tp) {return VALID when E hit}&lt;br /&gt;
  Hits   = [A,B,A,D*,A,B,A,A,B,E*,C,C,D,E] - PASS&lt;br /&gt;
&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [ANY_IN_SET:p1(tp),A,B,C:2,D]; &#039;&#039;UNORDERED, NON-STRICT&#039;&#039;&lt;br /&gt;
  p1(tp) {return VALID when D hit}&lt;br /&gt;
  Hits   = [A,B,A,D*,A,A,A,D,A,C,B,C,D] - PASS&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Special Counts ===&lt;br /&gt;
Use case examples leveraging the [[Expectations#Special_Counts | special count attributes]] within an &#039;&#039;Expected List&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expecting zero or more hits of Test Point A before processing the remaining members. &lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A:ANY_COUNT,B,C,D]; &#039;&#039;ORDERED, STRICT&#039;&#039;&lt;br /&gt;
  Hits   = [B,C,D]     - PASS&lt;br /&gt;
  Hits   = [A,B,C,D]   - PASS&lt;br /&gt;
 &lt;br /&gt;
This test passes &#039;&#039;immediately&#039;&#039; after the A is hit (example of a weird expectation)&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039; = [A,B:ANY_COUNT]; &#039;&#039;ORDERED, STRICT&#039;&#039;&lt;br /&gt;
  Hits   = [A,..]      - PASS&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Miscellaneous ===&lt;br /&gt;
The following are miscellaneous use cases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expecting an exact ordered sequence of Test Points to be hit, but verifying certain Test Points are NOT hit&lt;br /&gt;
  &#039;&#039;Expect&#039;&#039;   = [A,B,C,D]; &#039;&#039;ORDERED, STRICT&#039;&#039;&lt;br /&gt;
  &#039;&#039;Unexpect&#039;&#039; = [x,y]&lt;br /&gt;
  Hits     = [A,B,C,D]    - PASS&lt;br /&gt;
  Hits     = [A,B,B,C,D]  - FAIL&lt;br /&gt;
  Hits     = [A,B,C,y,..] - FAIL&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: Source Instrumentation]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Script_Samples&amp;diff=12923</id>
		<title>Script Samples</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Script_Samples&amp;diff=12923"/>
		<updated>2010-07-09T22:34:03Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These samples are a collection of native source under test and script test code that demonstrates the techniques for creating and executing tests using the STRIDE Framework. These are the introductory samples for the STRIDE Framework that illustrate how to do expectation testing of instrumented source under test using script test modules on the host.&lt;br /&gt;
&lt;br /&gt;
Once you have installed the STRIDE  framework on your host machine, you can easily build and run any combination of  these samples. In each case, you can include the source under test for the sample simply by copying  its source files to the SDK&#039;s [[Framework_Installation#SDK  | sample_src]]  directory and rebuilding the off-target testapp.&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
The  following samples are available to illustrate the use of host scripting  with the STRIDE Framework.&lt;br /&gt;
&lt;br /&gt;
;[[Expectations Sample]]&lt;br /&gt;
: This sample provides some pre-instrumented source code that includes [[Test Point|STRIDE Test Points]] and [[Test Log|Test Logs]]. A single perl test module is included that implements a few examples of expectation testing based on the software under test. The process of including this sample as well as running it and publishing results is covered in [[Running and Publishing the Expectations Sample]], which comprised an earlier step in the sandbox setup.&lt;br /&gt;
&lt;br /&gt;
;[[FileTransfer  Sample]]&lt;br /&gt;
: This sample  shows an example of how you might use helper functions on the target to  invoke [[File Transfer Services|STRIDE File  Transfer services]]. The example is driven by host  script logic that invokes remote target functions to actuate a file  transfer to the target.&lt;br /&gt;
&lt;br /&gt;
;[[FunctionRemoting   Sample]]&lt;br /&gt;
: This sample shows some examples of invoking [[Function_Capturing|remote functions]] for the purpose of fixturing your behavior tests written in script on the host. &lt;br /&gt;
&lt;br /&gt;
[[Category:Samples]]&lt;br /&gt;
[[Category:Tests in Script]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=FunctionRemoting_Sample&amp;diff=12922</id>
		<title>FunctionRemoting Sample</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=FunctionRemoting_Sample&amp;diff=12922"/>
		<updated>2010-07-09T22:30:14Z</updated>

		<summary type="html">&lt;p&gt;Mikee: Created page with &amp;#039;== Introduction ==  This sample demonstrates some common remote function call patterns for the purpose of fixturing your script test scenarios. This sample only demonstrates the …&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates some common remote function call patterns for the purpose of fixturing your script test scenarios. This sample only demonstrates the STRIDE mechanics of qualifying and invoking the remote functions. The example functions here don&#039;t actually implement any fixturing logic - they only verify the parameters passed and returned. This sample is a good place to start if you are interested in creating some functions to provide on-target fixturing for your host-based behavior tests.&lt;br /&gt;
&lt;br /&gt;
== Source under test ==&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;tt&amp;gt;s2_function_remoting_source.c / h&amp;lt;/tt&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
These files declare and implement several example functions taking and returning a variety of input/output data. In particular, we have examples of:&lt;br /&gt;
* void functions&lt;br /&gt;
* integer input/return&lt;br /&gt;
* string input/return&lt;br /&gt;
* structure input/return&lt;br /&gt;
* variable sized integer array input/return&lt;br /&gt;
* variable sized string array input/return&lt;br /&gt;
&lt;br /&gt;
== Tests Description ==&lt;br /&gt;
&lt;br /&gt;
=== void_args_void_return ===&lt;br /&gt;
&lt;br /&gt;
demonstrates calling a function with void args and return value.&lt;br /&gt;
&lt;br /&gt;
=== int_args_int_return ===&lt;br /&gt;
&lt;br /&gt;
demonstrates calling a function with integer arg and return value.&lt;br /&gt;
&lt;br /&gt;
=== string_args_string_return ===&lt;br /&gt;
&lt;br /&gt;
demonstrates calling a function with string arg and return value.&lt;br /&gt;
&lt;br /&gt;
=== struct_args_struct_return ===&lt;br /&gt;
&lt;br /&gt;
demonstrates calling a function with struct arg and return value.&lt;br /&gt;
&lt;br /&gt;
=== intsizedarray_args_intsizedarray_return ===&lt;br /&gt;
&lt;br /&gt;
demonstrates calling a function with a variable-length integer array argument and return value. When using sized pointers, you must assign the size field with the correct number of elements in the array. The return value is a struct which gets mapped to a hashref in perl.&lt;br /&gt;
&lt;br /&gt;
=== stringsizedarray_args_stringsizedarray_return ===&lt;br /&gt;
&lt;br /&gt;
demonstrates calling a function with a variable-length string array argument and return value. When using sized pointers, you must assign the size field with the correct number of elements in the array. The return value is a struct which gets mapped to a hashref in perl.&lt;br /&gt;
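&lt;br /&gt;
For illustration only (the field names and proxy syntax here are hypothetical - the actual remoting interface is described in [[Perl Script APIs]]), a sized string array argument might be prepared like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# hypothetical field names - the key point is that the size member must be set&lt;br /&gt;
# to the number of elements actually present in the array&lt;br /&gt;
my $arg = {&lt;br /&gt;
    size =&amp;gt; 3,&lt;br /&gt;
    data =&amp;gt; [&#039;alpha&#039;, &#039;beta&#039;, &#039;gamma&#039;],&lt;br /&gt;
};&lt;br /&gt;
# the remote function is then invoked with $arg; the struct it returns&lt;br /&gt;
# arrives in perl as a hashref&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;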
&lt;br /&gt;
&lt;br /&gt;
[[Category:Samples]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Frequently_Asked_Questions_About_STRIDE&amp;diff=12913</id>
		<title>Frequently Asked Questions About STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Frequently_Asked_Questions_About_STRIDE&amp;diff=12913"/>
		<updated>2010-06-16T19:26:49Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Integration with STRIDE ==&lt;br /&gt;
&lt;br /&gt;
=== What is the size of the STRIDE Runtime? ===&lt;br /&gt;
&lt;br /&gt;
The [[STRIDE Runtime]] is a source package that supports connectivity with the host system and provides [[Runtime_Test_Services | services for testing]] and [[Source_Instrumentation_Overview | source instrumentation]]. The &#039;&#039;runtime&#039;&#039; is tailored specifically to embedded applications, so overhead is minimal. It consumes very little memory for table and control block storage. Resource usage is [[STRIDE_Runtime#Runtime_Configuration | configurable]] and can be tailored to the limitations of the target platform.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Typical resource usage&lt;br /&gt;
! Aspect !! Resources&lt;br /&gt;
|-&lt;br /&gt;
| Code Space || About 90-130 KB depending on the compiler in use and the level of optimization.&lt;br /&gt;
|-&lt;br /&gt;
| Memory Usage || Configurable, by default set to about 10 KB &lt;br /&gt;
|-&lt;br /&gt;
| Threads || 3 Threads; configurable priority; blocked when inactive&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== What is the processing overhead? ===&lt;br /&gt;
&lt;br /&gt;
The [[STRIDE Runtime]] overhead is minimized by collecting raw data on the target and transferring information to the host for processing in a low-priority task. Processing is only active when executing tests via the [[STRIDE Runner]].&lt;br /&gt;
&lt;br /&gt;
== Source Instrumentation ==&lt;br /&gt;
&lt;br /&gt;
=== What are the advantages of Test Points over logging? ===&lt;br /&gt;
&lt;br /&gt;
[[Test Point | Test Points]] can be used for direct validation during real-time execution. Logging systems, while often useful for historical record, can only be used for post-processing validation at best. What&#039;s more, since there is no standard way to post-process logs, users often rely on manual inspection or non-scalable homegrown solutions for validation. Test Point validation is fully automated using the STRIDE Framework, which provides robust harnessing and reporting. Test Points also can include optional data payloads which can be very useful for validating state when a point is hit. Test Points have potentially lower impact on the system than standard logging since they &#039;&#039;&#039;(1)&#039;&#039;&#039; are only sent from the origin if a test is actively subscribed to the test point (labels are used as a filter at runtime) and &#039;&#039;&#039;(2)&#039;&#039;&#039; test points can be completely disabled (no-opped) in your build by undefining a single preprocessor macro [[Runtime_Integration#STRIDE_Feature_Control | (STRIDE_ENABLED)]].&lt;br /&gt;
&lt;br /&gt;
=== What about source instrumentation bloat? ===&lt;br /&gt;
&lt;br /&gt;
Any mature code base has some level of diagnostic bloat to it. This often takes the form of ad-hoc logging or debug statements. With STRIDE [[Test Point | Test Points]] and [[Test Log | Test Logs]], you open your software to better automated test scenarios. All STRIDE instrumentation takes the form of single line macros, so the amount of instrumentation bloat is no worse than other typical ad-hoc diagnostics. What&#039;s more, the STRIDE macros are all designed to be quickly no-opped in a build via a single preprocessor macro, making it possible to completely eliminate any actual impact on certain builds, if so desired.&lt;br /&gt;
&lt;br /&gt;
=== Are all Test Points active? ===&lt;br /&gt;
&lt;br /&gt;
No. Under normal testing scenarios, only the specific test points that are needed for a test are actually broadcast through the system. We accomplish this by setting test point filters (by label) on the system whenever one of the Test Point setup functions is called (in script or native code). These filters are reset or removed at the end of the test, so in general &#039;&#039;none&#039;&#039; of the test points are actually sent through the system if no test is currently active. That said, there are a few special use cases in which &#039;&#039;all&#039;&#039; test points become active in the system - namely, when [[Tracing | tracing]] is activated on the host runner and when a specific test case uses a set that includes &#039;&#039;TEST_POINT_EVERYTHING_ELSE&#039;&#039;. In general, however, the test points that are actually sent from the system are &#039;&#039;only&#039;&#039; those that are needed to execute the behavior validation for the current test.&lt;br /&gt;
&lt;br /&gt;
=== Will it affect performance? ===&lt;br /&gt;
&lt;br /&gt;
Our experience on a wide range of systems has shown minimal impact from the STRIDE instrumentation. The STRIDE Runtime has been designed to be small, portable, and readily configurable, allowing it to be optimized for the platform&#039;s specific characteristics. &lt;br /&gt;
&lt;br /&gt;
=== Should I leave Test Points in? ===&lt;br /&gt;
&lt;br /&gt;
Yes. Once you have some behavior tests written, it&#039;s worthwhile to maintain that instrumentation and the corresponding tests, which allows you to run the tests on any stride-enabled build. All of the instrumentation macros are easily no-opped via a single preprocessor flag, so you can choose to effectively remove the instrumentation code on select builds (production/release builds, for example). The ultimate value of instrumentation is the continuous feedback you get by regularly executing the automated tests on the build.&lt;br /&gt;
&lt;br /&gt;
== Testing ==&lt;br /&gt;
&lt;br /&gt;
=== What languages are supported? ===&lt;br /&gt;
&lt;br /&gt;
[[Types_of_Testing_Supported#Unit Testing | Unit tests]] and  [[Types_of_Testing_Supported#API_Testing | API Tests]] are written using &#039;&#039;&#039;C and C++&#039;&#039;&#039;. [[Types_of_Testing_Supported#Behavior_Testing | Behavior tests]] can be written using &#039;&#039;&#039;Perl&#039;&#039;&#039; and &#039;&#039;&#039;C/C++&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
=== Is there any alternative to running STRIDE tests with a real device? ===&lt;br /&gt;
&lt;br /&gt;
Yes. Tests can also be built and executed using the [[Off-Target Environment]]. In order for this to work, your device source code must be built along with the test code using the host&#039;s desktop toolchain (MSVC on windows, gcc on linux).&lt;br /&gt;
&lt;br /&gt;
== Test Automation  ==&lt;br /&gt;
&lt;br /&gt;
=== What is continuous integration and why should I care? ===&lt;br /&gt;
&lt;br /&gt;
The key principle of continuous integration is regular testing of your  software--ideally done in an automated fashion. STRIDE tests are  reusable and automated. Over time, these tests accumulate, providing  more and more comprehensive coverage. By automating the execution of  tests and results publication via [[STRIDE Test Space]] with every software build,  development teams gain immediate feedback on defects and the health of  their software. By detecting and repairing defects immediately, the  expense and time involved with correcting bugs is minimized.&lt;br /&gt;
&lt;br /&gt;
=== Does STRIDE support continuous integration? ===&lt;br /&gt;
&lt;br /&gt;
Yes. The [[STRIDE Runner]] provides a straightforward means to connect to the device under test and execute the test cases you&#039;ve implemented using the STRIDE Framework. The runner allows you to configure which tests to run and how to organize the results using subsuites in the report. Since the runner supports an option-based command line interface, this tool is easy to integrate with typical continuous integration servers. Please refer to [[Setting_up_your_CI_Environment|this article]] as well. &lt;br /&gt;
&lt;br /&gt;
=== Where/how do you store test results? ===&lt;br /&gt;
&lt;br /&gt;
When you execute your tests using the [[STRIDE Runner]], upon completion the results are written to an xml file. This xml file uses a custom schema for representing the hierarchy of results (suites, cases, etc.). These files also include a stylesheet specification (which will be written to the same directory as the xml file) that allows them to be viewed as HTML in a browser. You are free to store these files for future use/reference. The runner &#039;&#039;&#039;also&#039;&#039;&#039; supports direct uploading to [[STRIDE Test Space]] which is a hosted web application for viewing and collaborating on your test results. Once you are regularly executing tests, whether automatically or manually, we recommend you use the upload feature to persist and share your test results.&lt;br /&gt;
&lt;br /&gt;
=== Can I get email containing test reports? ===&lt;br /&gt;
&lt;br /&gt;
Yes. If you use [[STRIDE Test Space]] to store your results, you can optionally configure your test space(s) to automatically notify users when new results are uploaded. The email that is generated by Test Space contains only summary information and provides links so that you can view the complete report data.&lt;br /&gt;
&lt;br /&gt;
If you are using a continuous integration server to initiate your testing, it&#039;s likely that it supports different forms of notification when the testing is complete, so it&#039;s often possible to attach the xml report data as part of the CI server notification.&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
&lt;br /&gt;
=== How long does it take to install STRIDE? ===&lt;br /&gt;
&lt;br /&gt;
For standard platforms such as Linux, the installation process varies from a few hours to a couple of days. We provide [[:Category:SDKs|SDK packages]] that work out of the box for our [[Off-Target Environment]] but also serve as a reference for integrators. &lt;br /&gt;
&lt;br /&gt;
For proprietary targets a [[Platform_Abstraction_Layer|Platform Abstraction Layer (PAL)]] is required. The PAL provides the glue between the [[STRIDE Runtime]] and services offered by the OS. It is the only piece of the runtime that is customized between operating systems. The implementation of the PAL ranges from a day to a week depending on the complexity of the OS. There is also a [[Build Integration | build integration]] step. This involves integrating the [[STRIDE Build Tools]] into your software make process. The activity ranges from a single day to several. &lt;br /&gt;
&lt;br /&gt;
=== What kind of training is required? ===&lt;br /&gt;
&lt;br /&gt;
Our training [[Training Overview | approach]] is based on wiki articles, [[Samples | samples]], and leveraging the [[Off-Target Environment]]. The training has been set up as self-guided instruction that can be leveraged for an initial introduction to the technology and on demand for specific topics when required. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Frequently_Asked_Questions_About_STRIDE&amp;diff=12912</id>
		<title>Frequently Asked Questions About STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Frequently_Asked_Questions_About_STRIDE&amp;diff=12912"/>
		<updated>2010-06-16T17:39:58Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* Are all Test Points active? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Integration with STRIDE ==&lt;br /&gt;
&lt;br /&gt;
=== What is the size of the STRIDE Runtime? ===&lt;br /&gt;
&lt;br /&gt;
The [[STRIDE Runtime]] is a source package that supports connectivity with the host system and provides [[Runtime_Test_Services | services for testing]] and [[Source_Instrumentation_Overview | source instrumentation]]. The &#039;&#039;runtime&#039;&#039; is tailored specifically to embedded applications, so overhead is minimal. It consumes very little memory for table and control block storage. Resource usage is [[STRIDE_Runtime#Runtime_Configuration | configurable]] and can be tailored to the limitations of the target platform.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Typical resource usage&lt;br /&gt;
! Aspect !! Resources&lt;br /&gt;
|-&lt;br /&gt;
| Code Space || About 90-130 KB depending on the compiler in use and the level of optimization.&lt;br /&gt;
|-&lt;br /&gt;
| Memory Usage || Configurable, by default set to about 10 KB &lt;br /&gt;
|-&lt;br /&gt;
| Threads || 3 Threads; configurable priority; blocked when inactive&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== What is the processing overhead? ===&lt;br /&gt;
&lt;br /&gt;
The [[STRIDE Runtime]] overhead is minimized by collecting raw data on the target and transferring information to the host for processing in a low-priority task. Processing is only active when executing tests via the [[STRIDE Runner]].&lt;br /&gt;
&lt;br /&gt;
== Source Instrumentation ==&lt;br /&gt;
&lt;br /&gt;
=== What are the advantages of Test Points over logging? ===&lt;br /&gt;
&lt;br /&gt;
[[Test Point | Test Points]] can be used for direct validation during real-time execution. Logging systems, while often useful for historical record, can only be used for post-processing validation at best. What&#039;s more, since there is no standard way to post-process logs, users often rely on manual inspection or non-scalable homegrown solutions for validation. Test Point validation is fully automated using the STRIDE Framework, which provides robust harnessing and reporting. Test Points also can include optional data payloads which can be very useful for validating state when a point is hit. Test Points have potentially lower impact on the system than standard logging since they &#039;&#039;&#039;(1)&#039;&#039;&#039; are only sent from the origin if a test is actively subscribed to the test point (labels are used as a filter at runtime) and &#039;&#039;&#039;(2)&#039;&#039;&#039; test points can be completely disabled (no-opped) in your build by undefining a single preprocessor macro [[Runtime_Integration#STRIDE_Feature_Control | (STRIDE_ENABLED)]].&lt;br /&gt;
&lt;br /&gt;
=== What about source instrumentation bloat? ===&lt;br /&gt;
&lt;br /&gt;
Any mature code base has some level of diagnostic bloat to it. This often takes the form of ad-hoc logging or debug statements. With STRIDE [[Test Point | Test Points]] and [[Test Log | Test Logs]], you open your software to better automated test scenarios. All STRIDE instrumentation takes the form of single line macros, so the amount of instrumentation bloat is no worse than other typical ad-hoc diagnostics. What&#039;s more, the STRIDE macros are all designed to be quickly no-opped in a build via a single preprocessor macro, making it possible to completely eliminate any actual impact on certain builds, if so desired.&lt;br /&gt;
&lt;br /&gt;
=== Are all Test Points active? ===&lt;br /&gt;
&lt;br /&gt;
No. Under normal testing scenarios, only the specific test points that are needed for a test are actually broadcast through the system. We accomplish this by setting test point filters (by label) on the system whenever one of the Test Point setup functions is called (in script or native code). These filters are reset or removed at the end of the test, so in general &#039;&#039;none&#039;&#039; of the test points are actually sent through the system if no test is currently active. That said, there are a few special use cases in which &#039;&#039;all&#039;&#039; test points become active in the system - namely, when [[Tracing | tracing]] is activated on the host runner and when a specific test case uses a set that includes &#039;&#039;TEST_POINT_EVERYTHING_ELSE&#039;&#039;. In general, however, the test points that are actually sent from the system are &#039;&#039;only&#039;&#039; those that are needed to execute the behavior validation for the current test.&lt;br /&gt;
&lt;br /&gt;
=== Will it affect performance? ===&lt;br /&gt;
&lt;br /&gt;
Our experience on a wide range of systems has shown minimal impact from the STRIDE instrumentation. The STRIDE Runtime has been designed to be small, portable, and readily configurable, allowing it to be optimized for the platform&#039;s specific characteristics. &lt;br /&gt;
&lt;br /&gt;
=== Should I leave Test Points in? ===&lt;br /&gt;
&lt;br /&gt;
Yes. Once you have some behavior tests written, it&#039;s worthwhile to maintain that instrumentation and the corresponding tests, which allows you to run the tests on any stride-enabled build. All of the instrumentation macros are easily no-opped via a single preprocessor flag, so you can choose to effectively remove the instrumentation code on select builds (production/release builds, for example). The ultimate value of instrumentation is the continuous feedback you get by regularly executing the automated tests on the build.&lt;br /&gt;
&lt;br /&gt;
== Testing ==&lt;br /&gt;
&lt;br /&gt;
=== What languages are supported? ===&lt;br /&gt;
&lt;br /&gt;
[[Types_of_Testing_Supported#Unit Testing | Unit tests]] and  [[Types_of_Testing_Supported#API_Testing | API Tests]] are written using &#039;&#039;&#039;C and C++&#039;&#039;&#039;. [[Types_of_Testing_Supported#Behavior_Testing | Behavior tests]] can be written using &#039;&#039;&#039;Perl&#039;&#039;&#039; and &#039;&#039;&#039;C/C++&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
=== Can I test using my host computer? ===&lt;br /&gt;
&lt;br /&gt;
Yes. Tests can be implemented and executed using the [[Off-Target Environment]].&lt;br /&gt;
&lt;br /&gt;
== Test Automation  ==&lt;br /&gt;
&lt;br /&gt;
=== What is continuous integration and why should I care? ===&lt;br /&gt;
&lt;br /&gt;
The key principle of continuous integration is regular testing of your  software--ideally done in an automated fashion. STRIDE tests are  reusable and automated. Over time, these tests accumulate, providing  more and more comprehensive coverage. By automating the execution of  tests and results publication via [[STRIDE Test Space]] with every software build,  development teams gain immediate feedback on defects and the health of  their software. By detecting and repairing defects immediately, the  expense and time involved with correcting bugs is minimized.&lt;br /&gt;
&lt;br /&gt;
=== Does STRIDE support continuous integration? ===&lt;br /&gt;
&lt;br /&gt;
Yes. The [[STRIDE Runner]] provides a straightforward means to connect to the device under test and execute the test cases you&#039;ve implemented using the STRIDE Framework. The runner allows you to configure which tests to run and how to organize the results using subsuites in the report. Since the runner supports an option-based command line interface, this tool is easy to integrate with typical continuous integration servers. Please refer to [[Setting_up_your_CI_Environment|this article]] as well. &lt;br /&gt;
&lt;br /&gt;
=== Where/how do you store test results? ===&lt;br /&gt;
&lt;br /&gt;
When you execute your tests using the [[STRIDE Runner]], upon completion the results are written to an xml file. This xml file uses a custom schema for representing the hierarchy of results (suites, cases, etc.). These files also include a stylesheet specification (which will be written to the same directory as the xml file) that allows them to be viewed as HTML in a browser. You are free to store these files for future use/reference. The runner &#039;&#039;&#039;also&#039;&#039;&#039; supports direct uploading to [[STRIDE Test Space]] which is a hosted web application for viewing and collaborating on your test results. Once you are regularly executing tests, whether automatically or manually, we recommend you use the upload feature to persist and share your test results.&lt;br /&gt;
&lt;br /&gt;
=== Can I get email containing test reports? ===&lt;br /&gt;
&lt;br /&gt;
Yes. If you use [[STRIDE Test Space]] to store your results, you can optionally configure your test space(s) to automatically notify users when new results are uploaded. The email that is generated by Test Space contains only summary information and provides links so that you can view the complete report data.&lt;br /&gt;
&lt;br /&gt;
If you are using a continuous integration server to initiate your testing, it&#039;s likely that it supports different forms of notification when the testing is complete, so it&#039;s often possible to attach the xml report data as part of the CI server notification.&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
&lt;br /&gt;
=== How long does it take to install STRIDE? ===&lt;br /&gt;
&lt;br /&gt;
For standard platforms such as Linux, the installation process takes anywhere from a few hours to a couple of days. We provide [[:Category:SDKs|SDK packages]] that work out of the box for our [[Off-Target Environment]] and also serve as a reference for integrators. &lt;br /&gt;
&lt;br /&gt;
For proprietary targets a [[Platform_Abstraction_Layer|Platform Abstraction Layer (PAL)]] is required. The PAL provides the glue between the [[STRIDE Runtime]] and the services offered by the OS. It is the only piece of the runtime that is customized between operating systems. Implementing the PAL takes between a day and a week, depending on the complexity of the OS. There is also a [[Build Integration | build integration]] step, which involves integrating the [[STRIDE Build Tools]] into your software make process. This activity ranges from a single day to several. &lt;br /&gt;
&lt;br /&gt;
=== What kind of training is required? ===&lt;br /&gt;
&lt;br /&gt;
Our training [[Training Overview | approach]] is based on wiki articles, [[Samples | samples]], and the [[Off-Target Environment]]. The training is set up as self-guided instruction that can be used for an initial introduction to the technology and on demand for specific topics when required. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Frequently_Asked_Questions_About_STRIDE&amp;diff=12911</id>
		<title>Frequently Asked Questions About STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Frequently_Asked_Questions_About_STRIDE&amp;diff=12911"/>
		<updated>2010-06-16T17:38:44Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* What are the advantages of Test Points over logging? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Integration with STRIDE ==&lt;br /&gt;
&lt;br /&gt;
=== What is the size of the STRIDE Runtime? ===&lt;br /&gt;
&lt;br /&gt;
The [[STRIDE Runtime]] is a source package that supports connectivity with the host system and provides [[Runtime_Test_Services | services for testing]] and [[Source_Instrumentation_Overview | source instrumentation]]. The &#039;&#039;runtime&#039;&#039; is tailored specifically to embedded applications, so its overhead is minimal. It consumes very little memory for table and control block storage. Resource usage is [[STRIDE_Runtime#Runtime_Configuration | configurable]] and can be tailored to the limitations of the target platform.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Typical resource usage&lt;br /&gt;
! Aspect !! Resources&lt;br /&gt;
|-&lt;br /&gt;
| Code Space || About 90-130 KB, depending on the compiler in use and the level of optimization.&lt;br /&gt;
|-&lt;br /&gt;
| Memory Usage || Configurable, by default set to about 10 KB &lt;br /&gt;
|-&lt;br /&gt;
| Threads || 3 Threads; configurable priority; blocked when inactive&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== What is the processing overhead? ===&lt;br /&gt;
&lt;br /&gt;
The [[STRIDE Runtime]] overhead is minimized by collecting raw data on the target and transferring it to the host for processing in a low-priority task. Processing is only active when executing tests via the [[STRIDE Runner]].&lt;br /&gt;
&lt;br /&gt;
== Source Instrumentation ==&lt;br /&gt;
&lt;br /&gt;
=== What are the advantages of Test Points over logging? ===&lt;br /&gt;
&lt;br /&gt;
[[Test Point | Test Points]] can be used for direct validation during real-time execution. Logging systems, while often useful as a historical record, can only be used for post-processing validation at best. What&#039;s more, since there is no standard way to post-process logs, users often rely on manual inspection or non-scalable homegrown solutions for validation. Test Point validation is fully automated using the STRIDE Framework, which provides robust harnessing and reporting. Test Points can also include optional data payloads, which can be very useful for validating state when a point is hit. Test Points have potentially lower impact on the system than standard logging since they &#039;&#039;&#039;(1)&#039;&#039;&#039; are only sent from the origin if a test is actively subscribed to the test point (labels are used as a filter at runtime) and &#039;&#039;&#039;(2)&#039;&#039;&#039; can be completely disabled (no-opped) in your build by undefining a single preprocessor macro [[Runtime_Integration#STRIDE_Feature_Control | (STRIDE_ENABLED)]].&lt;br /&gt;
&lt;br /&gt;
=== What about source instrumentation bloat? ===&lt;br /&gt;
&lt;br /&gt;
Any mature code base has some level of diagnostic bloat to it. This often takes the form of ad-hoc logging or debug statements. With STRIDE [[Test Point | Test Points]] and [[Test Log | Test Logs]], you open your software to better automated test scenarios. All STRIDE instrumentation takes the form of single line macros, so the amount of instrumentation bloat is no worse than other typical ad-hoc diagnostics. What&#039;s more, the STRIDE macros are all designed to be quickly no-opped in a build via a single preprocessor macro, making it possible to completely eliminate any actual impact on certain builds, if so desired.&lt;br /&gt;
&lt;br /&gt;
=== Are all Test Points active? ===&lt;br /&gt;
&lt;br /&gt;
No. Under normal testing scenarios, only the specific test points that are needed for a test are actually broadcast through the system. We accomplish this by setting test point filters (by label) on the system whenever one of the Test Point setup functions is called (in script or native code). These filters are reset or removed at the end of the test, so in general &#039;&#039;none&#039;&#039; of the test points are actually sent through the system if no test is currently active. That said, there are a few special use cases in which &#039;&#039;all&#039;&#039; test points become active in the system - namely, when [[Tracing | tracing]] is activated on the host runner and when a specific test case uses a set that includes &#039;&#039;TEST_POINT_ANYTHING&#039;&#039; or &#039;&#039;TEST_POINT_EVERYTHING_ELSE&#039;&#039;. In general, however, the test points that are actually sent from the system are &#039;&#039;only&#039;&#039; those that are needed to execute the behavior validation for the current test.&lt;br /&gt;
&lt;br /&gt;
=== Will it affect performance? ===&lt;br /&gt;
&lt;br /&gt;
Our experience on a wide-range of systems has shown minimal impact from the STRIDE instrumentation.  The STRIDE Runtime has been designed to be small, portable, and readily configurable, allowing it to be optimized for the platform&#039;s specific characteristics. &lt;br /&gt;
&lt;br /&gt;
=== Should I leave Test Points in? ===&lt;br /&gt;
&lt;br /&gt;
Yes. Once you have some behavior tests written, it&#039;s worthwhile to maintain that instrumentation and the corresponding tests, which allows you to run the tests on any STRIDE-enabled build. All of the instrumentation macros are easily no-opped via a single preprocessor flag, so you can choose to effectively remove the instrumentation code on select builds (production/release builds, for example). The ultimate value of instrumentation is the continuous feedback you get by regularly executing the automated tests on the build.&lt;br /&gt;
&lt;br /&gt;
== Testing ==&lt;br /&gt;
&lt;br /&gt;
=== What languages are supported? ===&lt;br /&gt;
&lt;br /&gt;
[[Types_of_Testing_Supported#Unit Testing | Unit tests]] and  [[Types_of_Testing_Supported#API_Testing | API Tests]] are written using &#039;&#039;&#039;C and C++&#039;&#039;&#039;. [[Types_of_Testing_Supported#Behavior_Testing | Behavior tests]] can be written using &#039;&#039;&#039;Perl&#039;&#039;&#039; and &#039;&#039;&#039;C/C++&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
=== Can I test using my host computer? ===&lt;br /&gt;
&lt;br /&gt;
Yes. Tests can be implemented and executed using the [[Off-Target Environment]].&lt;br /&gt;
&lt;br /&gt;
== Test Automation  ==&lt;br /&gt;
&lt;br /&gt;
=== What is continuous integration and why should I care? ===&lt;br /&gt;
&lt;br /&gt;
The key principle of continuous integration is regular testing of your software -- ideally done in an automated fashion. STRIDE tests are reusable and automated. Over time, these tests accumulate, providing more and more comprehensive coverage. By automating the execution of tests and results publication via [[STRIDE Test Space]] with every software build, development teams gain immediate feedback on defects and the health of their software. By detecting and repairing defects immediately, the expense and time involved with correcting bugs is minimized.&lt;br /&gt;
&lt;br /&gt;
=== Does STRIDE support continuous integration? ===&lt;br /&gt;
&lt;br /&gt;
Yes. The [[STRIDE Runner]] provides a straightforward means to connect to the device under test and execute the test cases you&#039;ve implemented using the STRIDE Framework. The runner allows you to configure which tests to run and how to organize the results using subsuites in the report. Since the runner supports an option-based command line interface, this tool is easy to integrate with typical continuous integration servers. Please refer to [[Setting_up_your_CI_Environment|this article]] as well. &lt;br /&gt;
&lt;br /&gt;
=== Where/how do you store test results? ===&lt;br /&gt;
&lt;br /&gt;
When you execute your tests using the [[STRIDE Runner]], upon completion the results are written to an xml file. This xml file uses a custom schema for representing the hierarchy of results (suites, cases, etc.). These files also include a stylesheet specification (which will be written to the same directory as the xml file) that allows them to be viewed as HTML in a browser. You are free to store these files for future use/reference. The runner &#039;&#039;&#039;also&#039;&#039;&#039; supports direct uploading to [[STRIDE Test Space]], which is a hosted web application for viewing and collaborating on your test results. Once you are regularly executing tests, whether automatically or manually, we recommend you use the upload feature to persist and share your test results.&lt;br /&gt;
&lt;br /&gt;
=== Can I get email containing test reports? ===&lt;br /&gt;
&lt;br /&gt;
Yes. If you use [[STRIDE Test Space]] to store your results, you can optionally configure your test space(s) to automatically notify users when new results are uploaded. The email that is generated by Test Space contains only summary information and provides links so that you can view the complete report data.&lt;br /&gt;
&lt;br /&gt;
If you are using a continuous integration server to initiate your testing, it&#039;s likely that it supports different forms of notification when the testing is complete, so it&#039;s often possible to attach the xml report data as part of the CI server notification.&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
&lt;br /&gt;
=== How long does it take to install STRIDE? ===&lt;br /&gt;
&lt;br /&gt;
For standard platforms such as Linux, the installation process takes anywhere from a few hours to a couple of days. We provide [[:Category:SDKs|SDK packages]] that work out of the box for our [[Off-Target Environment]] and also serve as a reference for integrators. &lt;br /&gt;
&lt;br /&gt;
For proprietary targets a [[Platform_Abstraction_Layer|Platform Abstraction Layer (PAL)]] is required. The PAL provides the glue between the [[STRIDE Runtime]] and the services offered by the OS. It is the only piece of the runtime that is customized between operating systems. Implementing the PAL takes between a day and a week, depending on the complexity of the OS. There is also a [[Build Integration | build integration]] step, which involves integrating the [[STRIDE Build Tools]] into your software make process. This activity ranges from a single day to several. &lt;br /&gt;
&lt;br /&gt;
=== What kind of training is required? ===&lt;br /&gt;
&lt;br /&gt;
Our training [[Training Overview | approach]] is based on wiki articles, [[Samples | samples]], and the [[Off-Target Environment]]. The training is set up as self-guided instruction that can be used for an initial introduction to the technology and on demand for specific topics when required. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Perl_Script_APIs&amp;diff=12903</id>
		<title>Perl Script APIs</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Perl_Script_APIs&amp;diff=12903"/>
		<updated>2010-06-14T21:00:47Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The STRIDE Framework perl script model requires you to create perl modules (*.pm files) to group your tests. The following documents the API for the STRIDE::Test base class that you use when creating these modules.&lt;br /&gt;
&lt;br /&gt;
== STRIDE::Test ==&lt;br /&gt;
&lt;br /&gt;
This is the base class for test modules. It provides the following methods.&lt;br /&gt;
&lt;br /&gt;
=== Declaring Tests ===&lt;br /&gt;
&lt;br /&gt;
Once you have created a package that inherits from STRIDE::Test, you can declare any subroutine to be a test method by declaring it with the &#039;&#039;: Test&#039;&#039; attribute. In addition to test methods, the following attributes declare other kinds of subroutines:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;subroutine attributes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot; &lt;br /&gt;
! attribute!! description&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Test&lt;br /&gt;
| declares a test method - will be executed automatically when the module is run.&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Test(startup)&lt;br /&gt;
| startup method, called once before any of the test methods have been executed.&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Test(shutdown)&lt;br /&gt;
| shutdown method, called once after all test methods have been executed.&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Test(setup)&lt;br /&gt;
| setup fixture, called before each test method.&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| Test(teardown)&lt;br /&gt;
| teardown fixture, called after each test method.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You are free to declare as many methods as you like with these attributes. When more than one method has been declared with the same attribute, the methods will be called at the appropriate time in the order declared.&lt;br /&gt;
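&lt;br /&gt;
For illustration, a minimal test module using these attributes might look like the following sketch. The package name, method names, and messages are hypothetical, standard Perl inheritance via &amp;lt;tt&amp;gt;use base&amp;lt;/tt&amp;gt; is assumed, and the assertion/logging helpers shown are described later in this article:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
package MySampleTests;&lt;br /&gt;
use strict;&lt;br /&gt;
use warnings;&lt;br /&gt;
use base qw(STRIDE::Test);&lt;br /&gt;
&lt;br /&gt;
# runs once before any of the test methods&lt;br /&gt;
sub startup : Test(startup) { NOTE_INFO(&amp;quot;starting test module&amp;quot;); }&lt;br /&gt;
&lt;br /&gt;
# runs before each test method&lt;br /&gt;
sub setup : Test(setup) { }&lt;br /&gt;
&lt;br /&gt;
# an ordinary test method - executed automatically when the module is run&lt;br /&gt;
sub sanity_check : Test&lt;br /&gt;
{&lt;br /&gt;
    EXPECT_GT(1, 0, message =&amp;gt; &amp;quot;basic sanity check&amp;quot;);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
1;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;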
&lt;br /&gt;
=== Methods ===&lt;br /&gt;
&lt;br /&gt;
These methods are all available in the context of a test module that inherits from STRIDE::Test.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Methods&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot; &lt;br /&gt;
! name !! description&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;TestCase&amp;lt;/source&amp;gt;&lt;br /&gt;
| returns the current test case object.&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;TestSuite&amp;lt;/source&amp;gt;&lt;br /&gt;
| returns the current test suite object.&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;AddAnnotation( &lt;br /&gt;
  test_case =&amp;gt; case,&lt;br /&gt;
  label =&amp;gt; &amp;quot;&amp;quot;, &lt;br /&gt;
  level =&amp;gt; LEVEL,&lt;br /&gt;
  message =&amp;gt; &amp;quot;&amp;quot;)&amp;lt;/source&amp;gt;&lt;br /&gt;
| Adds an annotation to the current test case (or to test_case if that named parameter is provided). A &#039;&#039;label&#039;&#039; can be provided but will default to &amp;quot;Host Annotation&amp;quot;. A &#039;&#039;level&#039;&#039; can be provided as one of the following, but will default to INFO level:&lt;br /&gt;
* &#039;&#039;&#039;ANNOTATION_TRACE&#039;&#039;&#039;&amp;lt;ref name=&amp;quot;const&amp;quot;&amp;gt;symbols like these are exported as perl constants - don&#039;t quote these values when you use them - rather, use the bare symbols and perl will use the constant value we provide&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;ANNOTATION_DEBUG&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;ANNOTATION_INFO&#039;&#039;&#039; &lt;br /&gt;
* &#039;&#039;&#039;ANNOTATION_WARNING&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;ANNOTATION_ERROR&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;ANNOTATION_FATAL&#039;&#039;&#039;&lt;br /&gt;
The &#039;&#039;message&#039;&#039; parameter is optional and specifies the text to use in the annotation description field.&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;AddTestCase(suite)&amp;lt;/source&amp;gt;&lt;br /&gt;
| Creates a new test case in the specified suite (or the current TestSuite(), if none is provided). This also updates the current test case value that is returned by TestCase().&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;AddTestSuite(suite)&amp;lt;/source&amp;gt;&lt;br /&gt;
| Creates a new sub-suite in the specified suite (or the current TestSuite(), if none is provided). This also updates the current test suite value that is returned by TestSuite().&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;Functions&amp;lt;/source&amp;gt;&lt;br /&gt;
| Returns a [[#STRIDE::Function|STRIDE::Function]] object that was initialized with the active database. This object is used for calling captured functions in the database.&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;Constants&amp;lt;/source&amp;gt;&lt;br /&gt;
| Returns a [[#STRIDE::Function|STRIDE::Function-&amp;gt;{constants}]] hash that was initialized with the active database. This object is a tied hash that can be used to retrieve database constant values (macros and enums) at the time of compilation.&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;TestPointSetup( &lt;br /&gt;
  expected =&amp;gt; [], &lt;br /&gt;
  unexpected =&amp;gt; [],&lt;br /&gt;
  ordered  =&amp;gt; 0|1,&lt;br /&gt;
  strict =&amp;gt; 0|1,&lt;br /&gt;
  expect_file =&amp;gt; filepath,&lt;br /&gt;
  predicate =&amp;gt; coderef,&lt;br /&gt;
  replay_file  =&amp;gt; filepath,&lt;br /&gt;
  test_case =&amp;gt; case)&amp;lt;/source&amp;gt;&lt;br /&gt;
| Creates a new instance of [[#STRIDE::TestPoint|STRIDE::TestPoint]], automatically passing the default TestCase() as the test case if none is provided. Options are passed using hash-style arguments. The supported arguments are:&lt;br /&gt;
; ordered : flag that specifies whether or not the expectation set must occur in the order specified. If set to true, the expectation list is treated as an ordered list, otherwise we assume the list is unordered. Default value is true (ordered processing). &lt;br /&gt;
; strict : flag that specifies if the list is exclusive. If strict is specified, then the actual events (within the specified universe of test points) must match the expectation list exactly as specified. If strict processing is disabled, then other test points within the universe are allowed to occur between items specified in the expectation list. The default value is true (strict processing).&lt;br /&gt;
; expected : is an array reference containing elements that are either strings representing the test point labels &#039;&#039;&#039;OR&#039;&#039;&#039; an anonymous sub-array that contains these four items: &#039;&#039;&#039;[label, count, predicate, expected_data]&#039;&#039;&#039;. The four arrayref elements are:&lt;br /&gt;
* &#039;&#039;label&#039;&#039; is the test point label (same as the non-array argument type). The label can also be specified using the predefined constant value &#039;&#039;&#039;TEST_POINT_ANYTHING&#039;&#039;&#039;&amp;lt;ref name=&amp;quot;const&amp;quot;/&amp;gt; to indicate a place in the expectation list where &#039;&#039;&#039;any&#039;&#039;&#039; test point within the current universe is permitted. When this special value is used in the expectation list, a predicate must &#039;&#039;also&#039;&#039; be specified. The test point is only considered satisfied when the predicate returns true. &#039;&#039;&#039;TEST_POINT_ANYTHING&#039;&#039;&#039; can be used to implement, among other things, startup scenarios where you want to defer your expectation list processing until a particular test point and/or data state have been encountered. When specifying &#039;&#039;&#039;TEST_POINT_ANYTHING&#039;&#039;&#039; in an &#039;&#039;&#039;unordered&#039;&#039;&#039; expectation, it is only allowed to appear &#039;&#039;&#039;once&#039;&#039;&#039; in the expectation list and, in that case, it is treated as a startup expectation whereby none of the other test points are processed until the &#039;&#039;&#039;TEST_POINT_ANYTHING&#039;&#039;&#039; expectation has been satisfied (when its predicate returns true).&lt;br /&gt;
* &#039;&#039;count&#039;&#039; is the number of expected occurrences - can be any positive integer value, or the special value &#039;&#039;&#039;TEST_POINT_ANY_COUNT&#039;&#039;&#039;&amp;lt;ref name=&amp;quot;const&amp;quot;/&amp;gt;.&lt;br /&gt;
* &#039;&#039;predicate&#039;&#039; must be a perl coderef for a function to call as a predicate. You are free to define your own predicate function OR use one of the [[#Predicates | standard ones]] provided by STRIDE::Test.&lt;br /&gt;
* &#039;&#039;expected_data&#039;&#039; will be passed as an argument to the predicate - intended to be used to specify expected data. &lt;br /&gt;
If using the array form of expectation, only the label entry is required - the remaining elements are optional.&lt;br /&gt;
; unexpected : is an array reference containing labels that are to be treated as failures if they are encountered. For either &#039;&#039;expected&#039;&#039; or &#039;&#039;unexpected&#039;&#039;, the special value &#039;&#039;&#039;TEST_POINT_EVERYTHING_ELSE&#039;&#039;&#039;&amp;lt;ref name=&amp;quot;const&amp;quot;/&amp;gt; can be used alone to indicate that any test points not explicitly listed in the set are considered part of this set.&lt;br /&gt;
; expect_file : if you have previously captured trace data in a file (using the [[STRIDE Runner]]), you can specify the file as the source of expectations using this parameter. If you specify a trace file, you should NOT also specify the &#039;&#039;&#039;expected&#039;&#039;&#039; items as they will be overridden by the trace file.&lt;br /&gt;
; predicate : is a perl coderef to a default predicate function to use for all expectation items. If any specific entry in the expectation list has a predicate function, the expectation&#039;s predicate will override this global value. By default, no global predicate is assumed and no predicate is called unless specified for each expectation item.&lt;br /&gt;
; replay_file : allows you to specify a trace file as &#039;&#039;&#039;&#039;&#039;input&#039;&#039;&#039;&#039;&#039; to the current test. If specified, the expectation will be validated against the events specified in the file rather than live events generated by a device under test.&lt;br /&gt;
; test_case : allows you to specify a test case to use for reporting the results. This is only useful for advanced users that are generating test cases dynamically within a test method. &lt;br /&gt;
&lt;br /&gt;
The returned object is of type [[#STRIDE::TestPoint|STRIDE::TestPoint]] and has access to all of its member functions.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
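&lt;br /&gt;
As a sketch of how these pieces fit together, the following hypothetical test method sets up an ordered, strict expectation and then waits for it to be satisfied. The labels, the expected data value, and the remotely invoked function name are illustrative only:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
sub connection_sequence : Test&lt;br /&gt;
{&lt;br /&gt;
    # expect these labels in order; DATA_READY must also carry the payload &amp;quot;1&amp;quot;&lt;br /&gt;
    my $tp = TestPointSetup(&lt;br /&gt;
        expected =&amp;gt; [ &amp;quot;INIT_DONE&amp;quot;, [ &amp;quot;DATA_READY&amp;quot;, 1, \&amp;amp;TestPointStrCmp, &amp;quot;1&amp;quot; ], &amp;quot;SHUTDOWN&amp;quot; ],&lt;br /&gt;
        ordered  =&amp;gt; 1,&lt;br /&gt;
        strict   =&amp;gt; 1,&lt;br /&gt;
    );&lt;br /&gt;
&lt;br /&gt;
    # invoke the code under test (DoWork is a hypothetical captured function)&lt;br /&gt;
    Functions-&amp;gt;{async}-&amp;gt;DoWork();&lt;br /&gt;
&lt;br /&gt;
    # process the expectation; the test case fails if it is not met within 10 seconds&lt;br /&gt;
    $tp-&amp;gt;Wait(10000);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;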
&lt;br /&gt;
=== Assertions ===&lt;br /&gt;
&lt;br /&gt;
Each of the following assertion methods is provided for standard comparisons. For each, there are three different types, depending on the desired behavior upon failure: &#039;&#039;&#039;EXPECT&#039;&#039;&#039;, &#039;&#039;&#039;ASSERT&#039;&#039;&#039;, and &#039;&#039;&#039;EXIT&#039;&#039;&#039;. &#039;&#039;&#039;EXPECT&#039;&#039;&#039; checks will fail the current test case but continue executing the test method. &#039;&#039;&#039;ASSERT&#039;&#039;&#039; checks will fail the current test case and exit the test method immediately. &#039;&#039;&#039;EXIT&#039;&#039;&#039; checks will fail the current test case, immediately exit the current test method, AND cease further execution of the test module.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Boolean&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
! macro !! Pass if&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039;_TRUE(&#039;&#039;cond&#039;&#039;);&lt;br /&gt;
| &#039;&#039;cond&#039;&#039; is true&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039;_FALSE(&#039;&#039;cond&#039;&#039;);&lt;br /&gt;
| &#039;&#039;cond&#039;&#039; is false&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Comparison&#039;&#039;&#039;&lt;br /&gt;
|- &lt;br /&gt;
! macro !! Pass if&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039;_EQ(&#039;&#039;val1&#039;&#039;, &#039;&#039;val2&#039;&#039;);&lt;br /&gt;
| &#039;&#039;val1&#039;&#039; == &#039;&#039;val2&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039;_NE(&#039;&#039;val1&#039;&#039;, &#039;&#039;val2&#039;&#039;);&lt;br /&gt;
| &#039;&#039;val1&#039;&#039; != &#039;&#039;val2&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039;_LT(&#039;&#039;val1&#039;&#039;, &#039;&#039;val2&#039;&#039;);&lt;br /&gt;
| &#039;&#039;val1&#039;&#039;&amp;lt;nowiki&amp;gt; &amp;lt; &amp;lt;/nowiki&amp;gt;&#039;&#039;val2&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039;_LE(&#039;&#039;val1&#039;&#039;, &#039;&#039;val2&#039;&#039;);&lt;br /&gt;
| &#039;&#039;val1&#039;&#039;&amp;lt;nowiki&amp;gt; &amp;lt;= &amp;lt;/nowiki&amp;gt;&#039;&#039;val2&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039;_GT(&#039;&#039;val1&#039;&#039;, &#039;&#039;val2&#039;&#039;);&lt;br /&gt;
| &#039;&#039;val1&#039;&#039; &amp;gt; &#039;&#039;val2&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039;_GE(&#039;&#039;val1&#039;&#039;, &#039;&#039;val2&#039;&#039;);&lt;br /&gt;
| &#039;&#039;val1&#039;&#039; &amp;gt;= &#039;&#039;val2&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
For all of the value comparison methods (&#039;&#039;&#039;_EQ&#039;&#039;&#039;, &#039;&#039;&#039;_NE&#039;&#039;&#039;, etc.), the comparison is numeric if both arguments are numeric -- otherwise the comparison is a case sensitive string comparison. If case insensitive comparison is needed, simply wrap both arguments with perl&#039;s builtin &#039;&#039;&#039;lc()&#039;&#039;&#039; (lowercase) or &#039;&#039;&#039;uc()&#039;&#039;&#039; (uppercase) functions.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Predicates&#039;&#039;&#039;&lt;br /&gt;
|- &lt;br /&gt;
! macro !! Pass if&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039;_PRED(&#039;&#039;coderef&#039;&#039;, &#039;&#039;data&#039;&#039;)&lt;br /&gt;
| &#039;&#039;&amp;amp;coderef&#039;&#039;(&#039;&#039;data&#039;&#039;) returns true. The predicate function is specified by &#039;&#039;coderef&#039;&#039; with optional data &#039;&#039;data&#039;&#039;. The predicate can also return the special value &#039;&#039;&#039;TEST_POINT_IGNORE&#039;&#039;&#039;&amp;lt;ref name=&amp;quot;const&amp;quot;/&amp;gt; to indicate that the event should be ignored.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Each of these expectation methods also supports the following optional named arguments: &lt;br /&gt;
; test_case =&amp;gt; case : allows you to apply the check to a test case other than the current default&lt;br /&gt;
; message =&amp;gt; &amp;quot;message&amp;quot; : allows you to specify an additional message to include if the check fails. &lt;br /&gt;
&lt;br /&gt;
Because these arguments are optional, they are passed using named argument (hash-style) syntax after the required parameters that are shown above.&lt;br /&gt;
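&lt;br /&gt;
For example (the values and messages below are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
my $count = 3;   # value obtained by the test (illustrative)&lt;br /&gt;
EXPECT_EQ($count, 3);                                                 # continue on failure&lt;br /&gt;
ASSERT_TRUE($count &amp;gt; 0, message =&amp;gt; &amp;quot;count must be positive&amp;quot;);     # exit test method on failure&lt;br /&gt;
EXPECT_LT($count, 10, message =&amp;gt; &amp;quot;count is out of range&amp;quot;);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;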
&lt;br /&gt;
=== Logging ===&lt;br /&gt;
&lt;br /&gt;
The following methods can be used to annotate a test case. Typically these methods are used to add additional information about the state of the test to the report.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Logging Methods&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot; &lt;br /&gt;
! name !! description&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;NOTE_INFO(message)&amp;lt;/source&amp;gt;&lt;br /&gt;
| creates an info note in your test results report.&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;NOTE_WARN(message)&amp;lt;/source&amp;gt;&lt;br /&gt;
| creates a warning note in your test results report.&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;NOTE_ERROR(message)&amp;lt;/source&amp;gt;&lt;br /&gt;
| creates an error note in your test results report.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Each of these note methods also supports the following optional named arguments: &lt;br /&gt;
; test_case =&amp;gt; case : allows you to add the log to a test case other than the current default&lt;br /&gt;
; file =&amp;gt; file : allows you to attach a file along with the annotation message that is generated for the log message. &lt;br /&gt;
; test_point =&amp;gt; test_point_hashref : If you are annotating your report in the context of a predicate with a specific test point, you might want to specify the test point using this parameter. This will cause your annotation to be grouped in the final report with the annotation message that corresponds to the test point hit message. By default, a host timestamp value is used to generate the NOTE annotation, which generally causes the NOTE annotations to group toward the end of the test case report.&lt;br /&gt;
&lt;br /&gt;
Because these arguments are optional, they are passed using named argument (hash-style) syntax after the required parameters that are shown above.&lt;br /&gt;
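&lt;br /&gt;
For example (the messages and the attached file name are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
NOTE_INFO(&amp;quot;entering the second test phase&amp;quot;);&lt;br /&gt;
# attach a (hypothetical) capture file along with the warning annotation&lt;br /&gt;
NOTE_WARN(&amp;quot;device reported a retry&amp;quot;, file =&amp;gt; &amp;quot;retry_capture.log&amp;quot;);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;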
&lt;br /&gt;
=== Documentation ===&lt;br /&gt;
&lt;br /&gt;
We have preliminary support for documentation extraction in the test modules using the standard perl POD formatting tokens. &lt;br /&gt;
&lt;br /&gt;
The POD that you include in your test module currently must follow these conventions:&lt;br /&gt;
&lt;br /&gt;
* it must begin with a &#039;&#039;head1 NAME&#039;&#039; section and the text of this section must contain the name of the package, preferably near the beginning.&lt;br /&gt;
* a &#039;&#039;head1 DESCRIPTION&#039;&#039; can follow the &#039;&#039;NAME&#039;&#039; section. If provided, it will be used as the description of the test suite created for the test unit.&lt;br /&gt;
* This NAME/DESCRIPTION block must finish with an empty &#039;&#039;head1 METHODS&#039;&#039; section.&lt;br /&gt;
* each of the test methods can be documented by preceding them with a &#039;&#039;head2&#039;&#039; section with the same name as the test method (subroutine name). The text in this section will be used as the testcase description.&lt;br /&gt;
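&lt;br /&gt;
A minimal skeleton that follows these conventions might look like this (the package name, method name, and descriptive text are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
=head1 NAME&lt;br /&gt;
&lt;br /&gt;
MySampleTests - behavior tests for the widget component&lt;br /&gt;
&lt;br /&gt;
=head1 DESCRIPTION&lt;br /&gt;
&lt;br /&gt;
Validates the expected test point sequences for the widget component.&lt;br /&gt;
&lt;br /&gt;
=head1 METHODS&lt;br /&gt;
&lt;br /&gt;
=cut&lt;br /&gt;
&lt;br /&gt;
=head2 connection_sequence&lt;br /&gt;
&lt;br /&gt;
Verifies that the widget states are entered in the documented order.&lt;br /&gt;
&lt;br /&gt;
=cut&lt;br /&gt;
&lt;br /&gt;
sub connection_sequence : Test { }   # test body omitted&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;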
&lt;br /&gt;
=== Predicates ===&lt;br /&gt;
&lt;br /&gt;
STRIDE expectation testing allows you to specify predicate functions for sophisticated data validation. We provide several standard predicates in the STRIDE::Test package, or you are free to define your own predicate functions.&lt;br /&gt;
&lt;br /&gt;
==== Builtin Predicates ====&lt;br /&gt;
The STRIDE::Test library provides a few standard predicates which you are free to use in your expectations:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;prettytable&amp;quot;&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &#039;&#039;&#039;Built-In Predicates&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot; &lt;br /&gt;
! predicate !! description&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;TestPointStrCmp&#039;&#039;&#039;&lt;br /&gt;
| does a case &#039;&#039;&#039;&#039;&#039;sensitive&#039;&#039;&#039;&#039;&#039; comparison of the test point data and the &#039;&#039;expected_data&#039;&#039; (specified as part of the expectation)&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot; &lt;br /&gt;
| &#039;&#039;&#039;TestPointStrCaseCmp&#039;&#039;&#039;&lt;br /&gt;
| does a case &#039;&#039;&#039;&#039;&#039;insensitive&#039;&#039;&#039;&#039;&#039; comparison of the test point data and the &#039;&#039;expected_data&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot; &lt;br /&gt;
| &#039;&#039;&#039;TestPointMemCmp&#039;&#039;&#039;&lt;br /&gt;
| does a bytewise comparison of the test point data and the &#039;&#039;expected_data&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
|-valign=&amp;quot;top&amp;quot; &lt;br /&gt;
| &#039;&#039;&#039;TestPointDefaultCmp&#039;&#039;&#039;&lt;br /&gt;
| pass-through function that calls &#039;&#039;&#039;TestPointMemCmp&#039;&#039;&#039; for binary test point data or &#039;&#039;&#039;TestPointStrCmp&#039;&#039;&#039; otherwise. This is useful as a global predicate since it implements an appropriate default data comparison.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== User Defined Predicates ====&lt;br /&gt;
User defined predicates are subroutines of the following form:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
sub myPredicate &lt;br /&gt;
{&lt;br /&gt;
    my ($test_point, $expected_data) = @_;&lt;br /&gt;
    my $status = 0;&lt;br /&gt;
    # access the test point data as $test_point-&amp;gt;{data}, &lt;br /&gt;
    # and the label as $test_point-&amp;gt;{label}&lt;br /&gt;
&lt;br /&gt;
    # set $status according to whether or not your predicate passes&lt;br /&gt;
    return $status;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The predicate function is passed two arguments: the current test point and the expected data that was specified as part of the expectation. The test point is passed as a reference to a hash with the following fields: &lt;br /&gt;
; label : the test point label&lt;br /&gt;
; data : the data payload for the test point (if any)&lt;br /&gt;
; data_as_hex : an alternate form of the data payload, rendered as a string of hex characters&lt;br /&gt;
; size : the size of the data payload&lt;br /&gt;
; bin : flag indicating whether or not the data payload is binary&lt;br /&gt;
; file : the source file for the test point&lt;br /&gt;
; line :  the line number for the test point&lt;br /&gt;
&lt;br /&gt;
The expected data is passed as a single scalar, but you can use references to compound data structures (hashes, arrays) if you need more complex expected data.&lt;br /&gt;
&lt;br /&gt;
The predicate function should return a true value if it passes, false if not, or &#039;&#039;&#039;TEST_POINT_IGNORE&#039;&#039;&#039;&amp;lt;ref name=&amp;quot;const&amp;quot;/&amp;gt; if the test point should be ignored completely.&lt;br /&gt;
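&lt;br /&gt;
As a concrete (hypothetical) example, the following predicate ignores test points that carry no payload and otherwise requires an exact match against the expected data:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
sub nonEmptyDataMatches&lt;br /&gt;
{&lt;br /&gt;
    my ($test_point, $expected_data) = @_;&lt;br /&gt;
&lt;br /&gt;
    # skip test points that have no data payload at all&lt;br /&gt;
    return TEST_POINT_IGNORE if !$test_point-&amp;gt;{size};&lt;br /&gt;
&lt;br /&gt;
    # pass only when the payload matches the expected data exactly&lt;br /&gt;
    return $test_point-&amp;gt;{data} eq $expected_data;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;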
&lt;br /&gt;
== STRIDE::Function ==&lt;br /&gt;
&lt;br /&gt;
The STRIDE::Function class uses perl &#039;&#039;&#039;AUTOLOAD&#039;&#039;&#039;ing to provide a convenient syntax for making simple function calls and retrieving database constants in perl. Given any properly initialized Function object, any captured function or constant (macro) is available directly as a method or property of the Function object. Constants can also be accessed via the tied hash &#039;&#039;constants&#039;&#039; member (which is exported as &#039;&#039;&#039;Constants&#039;&#039;&#039; in the STRIDE::Test package). &lt;br /&gt;
&lt;br /&gt;
For example, given a database with two functions: &#039;&#039;&#039;int foo(const char * path)&#039;&#039;&#039; and &#039;&#039;&#039;void bar(double value)&#039;&#039;&#039;, and a macro &#039;&#039;&#039;#define MY_PI_VALUE 3.1415927&#039;&#039;&#039;, these methods/constants are invokable using the exported STRIDE::Test Functions and Constants objects:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
my $retval = Functions-&amp;gt;foo(&amp;quot;my string&amp;quot;);&lt;br /&gt;
Functions-&amp;gt;bar(Constants-&amp;gt;{MY_PI_VALUE});&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Asynchronous invocation === &lt;br /&gt;
&lt;br /&gt;
Functions can also be called asynchronously by accessing functions using the {async} delegator within the function object. When invoked this way, the function call will return a handle object that can be used to wait for the function return value - for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
my $h = Functions-&amp;gt;{async}-&amp;gt;foo(&amp;quot;my string&amp;quot;);&lt;br /&gt;
my $retval = $h-&amp;gt;Wait(1000);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Wait&#039;&#039;&#039; function takes one optional argument -- the timeout duration (in milliseconds) that indicates the maximum time to wait for the function to return. If the timeout value is not provided, &#039;&#039;&#039;Wait&#039;&#039;&#039; will wait indefinitely for the function to return. If a timeout is specified and expires before the function returns, the method will &#039;&#039;die&#039;&#039; with a timeout error message - so you might want to wrap your &#039;&#039;&#039;Wait&#039;&#039;&#039; call in an &amp;lt;tt&amp;gt;eval{};&amp;lt;/tt&amp;gt; statement if you want to gracefully handle the timeout condition.&lt;br /&gt;
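&lt;br /&gt;
For example, a timeout can be handled gracefully like this (using the &#039;&#039;&#039;foo&#039;&#039;&#039; function from the example above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
my $handle = Functions-&amp;gt;{async}-&amp;gt;foo(&amp;quot;my string&amp;quot;);&lt;br /&gt;
my $retval = eval { $handle-&amp;gt;Wait(1000) };   # wait at most one second&lt;br /&gt;
if ($@) {&lt;br /&gt;
    # the call did not return in time - report it rather than aborting the module&lt;br /&gt;
    NOTE_WARN(&amp;quot;foo did not return within 1 second&amp;quot;);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;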
&lt;br /&gt;
== STRIDE::TestPoint ==&lt;br /&gt;
&lt;br /&gt;
STRIDE::TestPoint objects are used to create test point expectation tests. These objects are created using the exported TestPointSetup factory function of the STRIDE::Test class. Once a STRIDE::TestPoint object has been created with the desired expectations, two functions can be called:&lt;br /&gt;
&lt;br /&gt;
; Wait(timeout) : This method processes test points that have occurred on the target and assesses pass/failure based on the parameters you provided when creating the TestPoint object. The timeout parameter indicates how long (in milliseconds) to wait for the specified events. If no timeout value is provided, &#039;&#039;&#039;Wait&#039;&#039;&#039; will proceed indefinitely or until a clear pass/failure determination can be made.&lt;br /&gt;
; Check() : This is equivalent to Wait with a very small timeout. As such, it essentially verifies that your specified test points have already been hit.&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Tests in Script]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Source_Instrumentation_Overview&amp;diff=12895</id>
		<title>Source Instrumentation Overview</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Source_Instrumentation_Overview&amp;diff=12895"/>
		<updated>2010-06-11T23:51:07Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Source instrumentation is the process by which developers and domain experts selectively instrument the source under test for the purpose of writing test scenarios against the executing application. Implementing tests that leverage source instrumentation is called &#039;&#039;&#039;Expectation Testing&#039;&#039;&#039;. This validation technique is very useful for verifying proper code sequencing based on the software&#039;s internal design. &lt;br /&gt;
&lt;br /&gt;
Unlike traditional unit testing that drives testing based on input parameters and isolating functionality, expectation testing is executed within a fully functional software build running on a real target platform. Expectation tests are not dependent on input parameters, but often leverage the same types of input/output controls used by functional and black-box testing. &lt;br /&gt;
&lt;br /&gt;
Another unique feature of expectation testing is that domain expertise is not required to implement a test. Developers and domain experts use instrumentation to export design knowledge of the software to the entire team. Furthermore, there is no stubbing required, no special logic to generate input parameters, and no advanced knowledge required of how the application software is coded. &lt;br /&gt;
&lt;br /&gt;
To enable effective test coverage, developers and domain experts are required to insert instrumentation at key locations to gain insight and testability. Here are some general suggested source code areas to consider instrumenting:&lt;br /&gt;
* critical function entry/exit points&lt;br /&gt;
* state transitions&lt;br /&gt;
* critical or interesting data transitions (using optional payload to convey data values)&lt;br /&gt;
* callback routines&lt;br /&gt;
* data persistence&lt;br /&gt;
* error conditions&lt;br /&gt;
&lt;br /&gt;
== Instrumentation ==&lt;br /&gt;
To make the software &#039;&#039;testable&#039;&#039;, the first step in the process is for the experts to selectively insert instrumentation macros -- called [[Test_Point | Test Points]] -- into the source code. Test Points themselves have nominal impact on the performance of the application – they are only active during test data collection&amp;lt;ref name=&amp;quot;n1&amp;quot;&amp;gt; Test data collection is typically implemented in a low-priority background thread. The data is captured in the calling routine&#039;s thread context (no context switch) but processed in the background or on the host. When testing is not active, instrumentation macros return immediately to the caller (i.e. NOP).&amp;lt;/ref&amp;gt;. Test Points contain names and optional payload data. When Test Points are activated, they are collected in the background, along with timing and any associated data. The set of Test Points hit, their order, timing, and data content can all be used to validate that the software is behaving as expected. [[Test_Log | Test Logs]] can also be added to the source code to provide additional information in the context of an executing test. &lt;br /&gt;
&lt;br /&gt;
Here are the requirements for source instrumentation using the STRIDE Framework:&lt;br /&gt;
&lt;br /&gt;
* define the &#039;&#039;&#039;STRIDE_ENABLED&#039;&#039;&#039; preprocessor macro in your build system&lt;br /&gt;
* include the &#039;&#039;&#039;srtest.h&#039;&#039;&#039; header file. This file is included in the [[Distribution_Files#STRIDE_Runtime | Runtime source distribution]]&lt;br /&gt;
* selectively instrument strategic locations in your source code with [[Test_Point | Test Points]] and optionally [[Test_Log | Test Logs]]&lt;br /&gt;
&lt;br /&gt;
== Expectations ==&lt;br /&gt;
&lt;br /&gt;
In addition to instrumenting the source code with Test Points, you must also define the [[Expectations]] of the Test Points. This involves defining the list of Test Points expected to be hit during a given test scenario. An expectation can also include any expected &#039;&#039;&#039;data&#039;&#039;&#039; associated with a Test Point that requires validation.&lt;br /&gt;
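&lt;br /&gt;
As a sketch (using the host-side [[Perl Script APIs]]; the labels and data shown are illustrative), such an expectation might be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
# expect a connect request followed by a successful completion carrying &amp;quot;status=0&amp;quot;&lt;br /&gt;
my $tp = TestPointSetup(&lt;br /&gt;
    expected   =&amp;gt; [ &amp;quot;CONNECT_REQUESTED&amp;quot;,&lt;br /&gt;
                    [ &amp;quot;CONNECT_COMPLETE&amp;quot;, 1, \&amp;amp;TestPointStrCmp, &amp;quot;status=0&amp;quot; ] ],&lt;br /&gt;
    unexpected =&amp;gt; [ &amp;quot;CONNECT_FAILED&amp;quot; ],&lt;br /&gt;
);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;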
&lt;br /&gt;
== Testing ==&lt;br /&gt;
&lt;br /&gt;
Once the source under test has been instrumented and the [[Expectations | expectations]] defined, STRIDE offers a number of techniques that can be used for implementing &#039;&#039;&#039;expectation tests&#039;&#039;&#039;. For non-developers the [[Test_Modules_Overview | STRIDE Scripting Solution]] is recommended. Scripting allows testers to leverage the power of dynamic languages that execute on the host. The framework provides script libraries that automate the behavior validation as well as hooks for customization. The script implementation also reduces software build dependencies since the test code is not part of the device image. Scripting can also leverage [[Function_Capturing | function remoting]] to fully automate test execution by [[Perl_Script_Snippets#Invoking_a_function_on_the_target | invoking functions on the target.]]&lt;br /&gt;
&lt;br /&gt;
For developers writing expectation tests, the [[Test_Units_Overview | STRIDE Test Units]] with test logic implemented in [[Expectation_Tests_in_C/C%2B%2B | C or C++]] is recommended as a starting point.&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Source Instrumentation]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Source_Instrumentation_Overview&amp;diff=12894</id>
		<title>Source Instrumentation Overview</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Source_Instrumentation_Overview&amp;diff=12894"/>
		<updated>2010-06-11T23:43:08Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Source instrumentation is the process by which developers and domain experts selectively instrument the source under test for the purpose of writing test scenarios against the executing application. Implementing tests that leverage source instrumentation is called &#039;&#039;&#039;Expectation Testing&#039;&#039;&#039;. This validation technique is very useful for verifying proper code sequencing based on the software&#039;s internal design. &lt;br /&gt;
&lt;br /&gt;
Unlike traditional unit testing that drives testing based on input parameters and isolating functionality, expectation testing is executed within a fully functional software build running on a real target platform. Expectation tests are not dependent on input parameters, but often leverage the same types of input/output controls used by functional and black-box testing. &lt;br /&gt;
&lt;br /&gt;
Another unique feature of expectation testing is that domain expertise is not required to implement a test. Developers and domain experts use instrumentation to export design knowledge of the software to the entire team. Furthermore, there is no stubbing required, no special logic to generate input parameters, and no advanced knowledge required of how the application software is coded. &lt;br /&gt;
&lt;br /&gt;
To enable effective test coverage, developers and domain experts are required to insert instrumentation at key locations to gain insight and testability. Here are some general suggested source code areas to consider instrumenting:&lt;br /&gt;
* critical function entry/exit points&lt;br /&gt;
* state transitions&lt;br /&gt;
* critical or interesting data transitions (using optional payload to convey data values)&lt;br /&gt;
* callback routines&lt;br /&gt;
* data persistence&lt;br /&gt;
* error conditions&lt;br /&gt;
&lt;br /&gt;
== Instrumentation ==&lt;br /&gt;
To make the software &#039;&#039;testable&#039;&#039;, the first step in the process is for the experts to selectively insert instrumentation macros called [[Test_Point | Test Points]] into the source code. Test Points themselves have nominal impact on the performance of the application – they are only active during test data collection&amp;lt;ref name=&amp;quot;n1&amp;quot;&amp;gt; Test data collection is typically implemented in a low-priority background thread. The data is captured in the calling routine&#039;s thread context (no context switch) but processed in the background. When testing is not active, instrumentation macros return immediately to the caller (i.e. NOP).&amp;lt;/ref&amp;gt;. Test Points contain names and optional payload data. When Test Points are activated, they are collected in the background, along with timing and any associated data. The set of Test Points hit, their order, timing, and data content can all be used to validate that the software is behaving as expected. [[Test_Log | Test Logs]] can also be added to the source code to provide additional information in the context of an executing test. &lt;br /&gt;
&lt;br /&gt;
Here are the requirements for source instrumentation using the STRIDE Framework:&lt;br /&gt;
&lt;br /&gt;
* define the &#039;&#039;&#039;STRIDE_ENABLED&#039;&#039;&#039; preprocessor macro in your build system&lt;br /&gt;
* include the &#039;&#039;&#039;srtest.h&#039;&#039;&#039; header file. This file is included in the [[Distribution_Files#STRIDE_Runtime | Runtime source distribution]]&lt;br /&gt;
* selectively instrument strategic locations in your source code with [[Test_Point | Test Points]] and optionally [[Test_Log | Test Logs]]&lt;br /&gt;
&lt;br /&gt;
== Expectations ==&lt;br /&gt;
In addition to instrumenting the source code with Test Points, defining the [[Expectations]] of the Test Points is required for testing. This involves defining the list of Test Points expected to be hit during a given test scenario. Also included is any expected &#039;&#039;&#039;data&#039;&#039;&#039; associated with a Test Point that requires validation.&lt;br /&gt;
&lt;br /&gt;
== Testing ==&lt;br /&gt;
Once the source under test has been instrumented and the [[Expectations | expectations]] defined, STRIDE offers a number of techniques that can be used for implementing &#039;&#039;&#039;expectation tests&#039;&#039;&#039;. For non-developers the [[Test_Modules_Overview | STRIDE Scripting Solution]] is recommended. Scripting allows testers to leverage the power of dynamic languages that execute on the host. The framework provides script libraries that automate the behavior validation as well as hooks for customization. Scripting also minimizes software build dependencies, since the validation logic is not part of the device image. Scripting can also leverage [[Function_Capturing | function remoting]] to fully automate test execution by [[Perl_Script_Snippets#Invoking_a_function_on_the_target | invoking functions on the target.]]&lt;br /&gt;
&lt;br /&gt;
For developers writing expectation tests, [[Test_Units_Overview | STRIDE Test Units]] with test logic implemented in [[Expectation_Tests_in_C/C%2B%2B | C or C++]] are recommended as a starting point.&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Source Instrumentation]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=What_is_Unique_About_STRIDE&amp;diff=12893</id>
		<title>What is Unique About STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=What_is_Unique_About_STRIDE&amp;diff=12893"/>
		<updated>2010-06-11T23:36:16Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* STRIDE optimizes failure resolution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== STRIDE is a cross-platform framework ==&lt;br /&gt;
The [[Runtime_Reference | &#039;&#039;&#039;STRIDE Runtime&#039;&#039;&#039;]] is written in standard C on top of a simple [[Platform_Abstraction_Layer | &#039;&#039;&#039;platform abstraction layer&#039;&#039;&#039;]] that enables it to work on virtually any target platform. It is delivered as source code to be included in the application&#039;s build system. STRIDE also auto-generates [[Intercept_Module | &#039;&#039;&#039;harnessing and remoting logic&#039;&#039;&#039;]] as source code during the make process, removing any dependencies on specific compilers and/or processors. The transport between the host and target is configurable and supports serial/USB and TCP/IP by default. Custom transports are also available. STRIDE is easily configured for single-process or multi-process environments (e.g. Linux, Windows CE, etc.). Testing can also be conducted using an [[Off-Target_Environment | &#039;&#039;&#039;off-target environment&#039;&#039;&#039;]] which is provided for Windows and Linux host machines. &lt;br /&gt;
&lt;br /&gt;
The cross-platform framework facilitates a unified testing approach for all team members, enabling organizations to standardize on a test workflow that is independent of the target platform being used or what branch of software is being changed.&lt;br /&gt;
&lt;br /&gt;
== STRIDE software builds are both functional and testable ==&lt;br /&gt;
STRIDE enables software builds to be both fully &#039;&#039;functional&#039;&#039; and &#039;&#039;testable&#039;&#039; at the same time. The software works exactly the same as before. Whatever the software image was used for in the past -- system testing, developer debugging, etc. -- is still applicable. The STRIDE [[Test_Units | &#039;&#039;&#039;test logic&#039;&#039;&#039;]] is separated from the application source code and is NOT executed unless invoked via the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]]. The [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]] is only active when executing tests. The impact of built-in testability on the software application is nominal. The application can easily be switched back to a &#039;&#039;non-testable&#039;&#039; build by simply removing the [[Runtime_Integration#STRIDE_Feature_Control | &#039;&#039;&#039;STRIDE_ENABLED&#039;&#039;&#039;]] preprocessor directive. This automatically controls all STRIDE-related source code and macros. There are no changes required to the build process to enable or disable this functionality. &lt;br /&gt;
&lt;br /&gt;
The built-in &#039;&#039;testability&#039;&#039; is similar in concept to a &#039;&#039;debug build&#039;&#039; except the focus is on testing. The &#039;&#039;testable build&#039;&#039; leverages the existing software build process by integrating into the same &#039;&#039;make system&#039;&#039; used by the development team (no one-off or special builds). Automatically included in the build is test automation controllable from the host. When tests are executed, timing analysis is also included with the generated [[Reporting_Model | &#039;&#039;&#039;test report&#039;&#039;&#039;]]. &lt;br /&gt;
Developers can easily pre-flight test their source code changes before &#039;&#039;committing&#039;&#039; them to a baseline. Builds can be [[Setting_up_your_CI_Environment | &#039;&#039;&#039;automatically regression tested&#039;&#039;&#039;]] as part of the daily build process.&lt;br /&gt;
&lt;br /&gt;
== STRIDE facilitates deeper API/Unit Testing ==&lt;br /&gt;
STRIDE offers unique techniques that can be leveraged for deeper API/Unit test coverage. STRIDE provides support for this type of testing [[Test_Units_Overview | &#039;&#039;&#039;in C/C++&#039;&#039;&#039;]]. There are no special APIs required to register tests, suites, etc. Just write your test in any combination of C/C++ and the auto-generated [[Intercept_Module | &#039;&#039;&#039;intercept module&#039;&#039;&#039;]] via the [[Build_Tools | &#039;&#039;&#039;STRIDE build tools&#039;&#039;&#039;]] takes care of everything. Tests from separate teams are automatically aggregated by the system -- no coordination is required. &lt;br /&gt;
&lt;br /&gt;
Writing API/Unit tests in native code is the simplest way to begin validating the software. There is no new language to learn, no proprietary editor, and your normal programming workflow is not interrupted. This type of testing works well for:&lt;br /&gt;
&lt;br /&gt;
* calling APIs directly&lt;br /&gt;
* validating C++ classes&lt;br /&gt;
* isolating modules &lt;br /&gt;
* critical processing / timing&lt;br /&gt;
* and much more ...&lt;br /&gt;
&lt;br /&gt;
For more advanced testing scenarios, dependencies can be [[Using_Test_Doubles | &#039;&#039;&#039;doubled&#039;&#039;&#039;]]. This feature provides a means for intercepting C/C++ global functions on the target and substituting a stub, fake, or mock. The substitution is controllable via the runtime, allowing the software to continue executing normally when not running a test (a plain-C sketch of the idea follows). &lt;br /&gt;
&lt;br /&gt;
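To make the idea concrete, here is a plain hand-written stub for a global C function; it only sketches what doubling a dependency means, whereas STRIDE generates the intercept and switches the double at run time. The function names below are invented for the example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* Dependency the code under test normally calls (e.g. a driver). */&lt;br /&gt;
int read_battery_level(void);&lt;br /&gt;
&lt;br /&gt;
/* Code under test. */&lt;br /&gt;
int battery_is_low(void) { return read_battery_level() &amp;lt; 20; }&lt;br /&gt;
&lt;br /&gt;
/* Hand-written stub standing in for the real driver in this test build. */&lt;br /&gt;
int read_battery_level(void) { return 10; }&lt;br /&gt;
&lt;br /&gt;
int main(void) { return battery_is_low() ? 0 : 1; }  /* exits 0 when the low-battery path is taken */&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;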
[[File_Transfer_Services | &#039;&#039;&#039;File fixturing&#039;&#039;&#039;]] is another technique that can be leveraged to drive better testing. Test code executing on the target platform can perform file operations on the host remotely. This enables opening, reading, and writing host-side files while executing target-based test logic. File fixturing allows external input into the system to be bypassed for more controlled and isolated testing. &lt;br /&gt;
&lt;br /&gt;
There are numerous other features that can be leveraged to facilitate deeper API/Unit test coverage: &lt;br /&gt;
* [[Test_Macros | &#039;&#039;&#039;Assertion macros&#039;&#039;&#039;]]&lt;br /&gt;
* Leveraging [[Test Point Testing in C/C++ | &#039;&#039;&#039;Test Points&#039;&#039;&#039;]] for behavior testing&lt;br /&gt;
* Passing parameters to Test Units&lt;br /&gt;
* Seamless publishing to [[STRIDE_Test_Space | &#039;&#039;&#039;Test Space&#039;&#039;&#039;]]&lt;br /&gt;
* and much [[Test_API | &#039;&#039;&#039; more ... &#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
== STRIDE includes behavior-based testing techniques ==&lt;br /&gt;
STRIDE leverages [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]] to provide &#039;&#039; &#039;&#039;&#039;behavior-based testing techniques&#039;&#039;&#039; &#039;&#039; that can be applied to the executing software application. The execution sequencing of the code, along with data, can be automatically validated based on [[Expectations | &#039;&#039;&#039;expectations&#039;&#039;&#039;]]. Behavior-based testing is different from unit testing or API testing. Behavior testing does not focus on calling functions / methods and validating their return values. Behavior testing validates the &#039;&#039;expected sequencing of the software&#039;&#039; executing under normal operating conditions. This can span threads and process boundaries, and even multiple targets, as the application(s) run. Leveraging macros -- called [[Test_Point | &#039;&#039;&#039;Test Points&#039;&#039;&#039;]] -- domain experts strategically instrument the source under test. Data can also optionally be associated with a &#039;&#039;&#039; &#039;&#039;Test Point&#039;&#039; &#039;&#039;&#039;, which is often critical to validating that the software is behaving as expected. This type of validation can be applied to a wide range of testing scenarios (a minimal sketch follows the list below):&lt;br /&gt;
&lt;br /&gt;
* State Machines&lt;br /&gt;
* Data flow through system components&lt;br /&gt;
* Sequencing between threads&lt;br /&gt;
* Drivers that don&#039;t return values&lt;br /&gt;
* and much more ...&lt;br /&gt;
&lt;br /&gt;
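For example, a state machine transition might be instrumented roughly as follows; the macro call shown is a simplified illustration -- see the [[Test_Point | &#039;&#039;&#039;Test Point&#039;&#039;&#039;]] article for the actual usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* Simplified sketch of instrumenting a state change with a test point. */&lt;br /&gt;
#ifndef srTEST_POINT&lt;br /&gt;
#define srTEST_POINT(label) ((void)0)   /* no-op fallback so this sketch compiles standalone */&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
static int current_state = 0;&lt;br /&gt;
&lt;br /&gt;
static void SetNewState(int new_state)&lt;br /&gt;
{&lt;br /&gt;
    current_state = new_state;&lt;br /&gt;
    srTEST_POINT("state changed");      /* expectations validate the order in which these points fire */&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;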
Another key element to STRIDE based &#039;&#039;behavior testing&#039;&#039; is that the expected code sequencing is automatically validated. This automatic validation can be used for on-going regression testing. The expectations are expressed as simple table definitions along with optional hooks for custom data validation. When failures do occur, context is provided with the file name and associated line number of the unexpected &#039;&#039;&#039;Test Point(s)&#039;&#039;&#039;. The test validation can be implemented in both [[Expectation_Tests_in_C/C%2B%2B | &#039;&#039;&#039;native target code&#039;&#039;&#039;]] and [[Perl_Script_APIs#STRIDE::Test | &#039;&#039;&#039;scripts on the host&#039;&#039;&#039;]]. In either case, the validation is done without impacting the application&#039;s performance (the on-target test code is executed in a background thread and scripts are executed on the host machine). &lt;br /&gt;
&lt;br /&gt;
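Conceptually, an expectation is just an ordered list of test point labels plus optional data checks; the declarations below are a loose sketch of that idea, not the actual STRIDE expectation API:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* Loose conceptual sketch -- not the STRIDE expectation API. */&lt;br /&gt;
static const char *expected_order[] = { "enter idle", "enter active", "enter idle", "shutdown" };&lt;br /&gt;
&lt;br /&gt;
/* Optional hook for validating the data attached to a given test point. */&lt;br /&gt;
static int check_counter(const void *data, unsigned int size)&lt;br /&gt;
{&lt;br /&gt;
    return size == sizeof(int);   /* accept any well-formed counter payload */&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;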
Software Quality Assurance (SQA) can also add behavior-based testing as part of their existing functional / system testing. Because the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]] is a command-line utility, it is easily controlled from existing test infrastructure. SQA can leverage the &#039;&#039; &#039;&#039;&#039;instrumentation&#039;&#039;&#039; &#039;&#039; to create their own [[Reporting_Model#Suites | &#039;&#039;&#039;test suites&#039;&#039;&#039;]] using scripting that executes in concert with existing test automation.&lt;br /&gt;
&lt;br /&gt;
== STRIDE optimizes failure resolution ==&lt;br /&gt;
When executing &#039;&#039; &#039;&#039;&#039;behavior-based tests&#039;&#039;&#039; &#039;&#039; or &#039;&#039; &#039;&#039;&#039;API/unit tests&#039;&#039;&#039; &#039;&#039;, results can be automatically uploaded to [[STRIDE_Test_Space | &#039;&#039;&#039;STRIDE Test Space&#039;&#039;&#039;]] for persistence and analysis. Test result data is uploaded manually (using the web interface) or automatically using the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]]. Hosting test results in a &#039;&#039;central location&#039;&#039; with easy access via a browser enables the entire team to better participate in ensuring quality during the ongoing development process. [[STRIDE_Test_Space#Results_View | &#039;&#039;&#039;Reports&#039;&#039;&#039;]] containing test results, timing, [[Tracing | &#039;&#039;&#039;tracing logs&#039;&#039;&#039;]], and built-in test documentation (supported in both [[Test_API#Test_Documentation | &#039;&#039;&#039;target code&#039;&#039;&#039;]] and [[Perl_Script_APIs#Documentation | &#039;&#039;&#039;script&#039;&#039;&#039;]]), all correlated together, enhance the context for all team members to manage testing and optimize failure resolution. Team collaboration on testing activities is also facilitated with auto-generated email [[Notifications | &#039;&#039;&#039;notifications&#039;&#039;&#039;]] and [[STRIDE_Test_Space#Messages | &#039;&#039;&#039;messaging&#039;&#039;&#039;]]. Emails contain links to test failures where file names and line numbers are provided to aid in resolution. Messaging allows team members to more effectively communicate on the specifics of test results while minimizing information required in traditional emails, status reports, etc. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=What_is_Unique_About_STRIDE&amp;diff=12892</id>
		<title>What is Unique About STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=What_is_Unique_About_STRIDE&amp;diff=12892"/>
		<updated>2010-06-11T23:33:54Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* STRIDE includes behavior-based testing techniques */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== STRIDE is a cross-platform framework ==&lt;br /&gt;
The [[Runtime_Reference | &#039;&#039;&#039;STRIDE Runtime&#039;&#039;&#039;]] is written in standard C on top of a simple [[Platform_Abstraction_Layer | &#039;&#039;&#039;platform abstraction layer&#039;&#039;&#039;]] that enables it to work on virtually any target platform. It is delivered as source code to be included in the application&#039;s build system. STRIDE also auto-generates [[Intercept_Module | &#039;&#039;&#039;harnessing and remoting logic&#039;&#039;&#039;]] as source code during the make process, removing any dependencies on specific compilers and / or processors. The transport between the host and target is configurable and supports serial/USB and TCP/IP by default. Custom transports are also available. STRIDE is easily configured for single process or multiple process environments (i.e. Linux, Windows CE, etc.). Testing can also be conducted using an [[Off-Target_Environment | &#039;&#039;&#039;off-target environment&#039;&#039;&#039;]] which is provided for Windows and Linux host machines. &lt;br /&gt;
&lt;br /&gt;
The cross-platform framework facilitates a unified testing approach for all team members, enabling organizations to standardize on a test workflow that is independent of the target platform being used or what branch of software is being changed.&lt;br /&gt;
&lt;br /&gt;
== STRIDE software builds are both functional and testable ==&lt;br /&gt;
STRIDE enables software builds to be both fully &#039;&#039;functional&#039;&#039; and &#039;&#039;testable&#039;&#039; at the same time. The software works exactly the same as before. Whatever the software image was used for in the past -- system testing, developer debugging, etc. -- is still applicable. The STRIDE [[Test_Units | &#039;&#039;&#039;test logic&#039;&#039;&#039;]] is separated from the application source code and is NOT executed unless invoked via the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]]. The [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]] is only active when executing tests. The impact of built-in testability to the software application is nominal. The application can easily be switched back to a &#039;&#039;non-testable&#039;&#039; build by simply removing the [[Runtime_Integration#STRIDE_Feature_Control | &#039;&#039;&#039;STRIDE_ENABLED&#039;&#039;&#039;]] preprocessor directive. This automatically controls all STRIDE related source code and macros. There are no changes required to the build process to enable or disable this functionality. &lt;br /&gt;
&lt;br /&gt;
The built-in &#039;&#039;testability&#039;&#039; is similar in concept to a &#039;&#039;debug build&#039;&#039; except the focus is on testing. The &#039;&#039;testable build&#039;&#039; leverages the existing software build process by integrating into the same &#039;&#039;make system&#039;&#039; used by the development team (no one-off or special builds). Automatically included in the build is test automation controllable from the host. When tests are executed, timing analysis is also included with the generated [[Reporting_Model | &#039;&#039;&#039;test report&#039;&#039;&#039;]]. &lt;br /&gt;
Developers can easily pre-flight test their source code changes before &#039;&#039;committing&#039;&#039; them to a baseline. Builds can be [[Setting_up_your_CI_Environment | &#039;&#039;&#039;automatically regression tested&#039;&#039;&#039;]] as part of the daily build process.&lt;br /&gt;
&lt;br /&gt;
== STRIDE facilitates deeper API/Unit Testing ==&lt;br /&gt;
STRIDE offers unique techniques that can be leveraged for deeper API/Unit test coverage. STRIDE provides support for this type of testing [[Test_Units_Overview | &#039;&#039;&#039;in C/C++&#039;&#039;&#039;]]. There are no special APIs required to register tests, suites, etc. Just write your tests in any combination of C/C++, and the auto-generated [[Intercept_Module | &#039;&#039;&#039;intercept module&#039;&#039;&#039;]] produced by the [[Build_Tools | &#039;&#039;&#039;STRIDE build tools&#039;&#039;&#039;]] takes care of everything. Tests from separate teams are automatically aggregated by the system -- no coordination is required. &lt;br /&gt;
&lt;br /&gt;
Writing API/Unit tests in native code is the simplest way to begin validating the software. There is no new language to learn, no proprietary editor, and your normal programming workflow is not interrupted. This type of testing works well for:&lt;br /&gt;
&lt;br /&gt;
* calling APIs directly&lt;br /&gt;
* validating C++ classes&lt;br /&gt;
* isolating modules &lt;br /&gt;
* critical processing / timing&lt;br /&gt;
* and much more ...&lt;br /&gt;
&lt;br /&gt;
For more advanced testing scenarios, dependencies can be [[Using_Test_Doubles | &#039;&#039;&#039;doubled&#039;&#039;&#039;]]. This feature provides a means for intercepting C/C++ global functions on the target and substituting a stub, fake, or mock. The substitution is all controllable via the runtime, allowing the software to continue executing normally when not running a test. &lt;br /&gt;
&lt;br /&gt;
[[File_Transfer_Services | &#039;&#039;&#039;File fixturing&#039;&#039;&#039;]] is another technique that can be leveraged to drive better testing. Test code executing on the target platform can perform file operations on the host remotely. This enables opening, reading, and writing host-side files while executing target-based test logic. File fixturing allows external input into the system to be bypassed for more controlled and isolated testing. &lt;br /&gt;
&lt;br /&gt;
There are numerous other features that can be leveraged to facilitate deeper API/Unit test coverage: &lt;br /&gt;
* [[Test_Macros | &#039;&#039;&#039;Assertion macros&#039;&#039;&#039;]]&lt;br /&gt;
* Leveraging [[Test Point Testing in C/C++ | &#039;&#039;&#039;Test Points&#039;&#039;&#039;]] for behavior testing&lt;br /&gt;
* Passing parameters to Test Units&lt;br /&gt;
* Seamless publishing to [[STRIDE_Test_Space | &#039;&#039;&#039;Test Space&#039;&#039;&#039;]]&lt;br /&gt;
* and much [[Test_API | &#039;&#039;&#039; more ... &#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
== STRIDE includes behavior-based testing techniques ==&lt;br /&gt;
STRIDE leverages [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]] to provide &#039;&#039; &#039;&#039;&#039;behavior-based testing techniques&#039;&#039;&#039; &#039;&#039; that can be applied to the executing software application. The execution sequencing of the code, along with data, can be automatically validated based on [[Expectations | &#039;&#039;&#039;expectations&#039;&#039;&#039;]]. Behavior-based testing is different from unit testing or API testing. Behavior testing does not focus on calling functions / methods and validating their return values. Behavior testing validates the &#039;&#039;expected sequencing of the software&#039;&#039; executing under normal operating conditions. This can span threads and process boundaries, and even multiple targets, as the application(s) run. Leveraging macros -- called [[Test_Point | &#039;&#039;&#039;Test Points&#039;&#039;&#039;]] -- domain experts strategically instrument the source under test. Data can also optionally be associated with a &#039;&#039;&#039; &#039;&#039;Test Point&#039;&#039; &#039;&#039;&#039;, which is often critical to validating that the software is behaving as expected. This type of validation can be applied to a wide range of testing scenarios:&lt;br /&gt;
&lt;br /&gt;
* State Machines&lt;br /&gt;
* Data flow through system components&lt;br /&gt;
* Sequencing between threads&lt;br /&gt;
* Drivers that don&#039;t return values&lt;br /&gt;
* and much more ...&lt;br /&gt;
&lt;br /&gt;
Another key element to STRIDE based &#039;&#039;behavior testing&#039;&#039; is that the expected code sequencing is automatically validated. This automatic validation can be used for on-going regression testing. The expectations are expressed as simple table definitions along with optional hooks for custom data validation. When failures do occur, context is provided with the file name and associated line number of the unexpected &#039;&#039;&#039;Test Point(s)&#039;&#039;&#039;. The test validation can be implemented in both [[Expectation_Tests_in_C/C%2B%2B | &#039;&#039;&#039;native target code&#039;&#039;&#039;]] and [[Perl_Script_APIs#STRIDE::Test | &#039;&#039;&#039;scripts on the host&#039;&#039;&#039;]]. In either case, the validation is done without impacting the application&#039;s performance (the on-target test code is executed in a background thread and scripts are executed on the host machine). &lt;br /&gt;
&lt;br /&gt;
Software Quality Assurance (SQA) can also add behavior-based testing as part of their existing functional / system testing. Because the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]] is a command-line utility, it is easily controlled from existing test infrastructure. SQA can leverage the &#039;&#039; &#039;&#039;&#039;instrumentation&#039;&#039;&#039; &#039;&#039; to create their own [[Reporting_Model#Suites | &#039;&#039;&#039;test suites&#039;&#039;&#039;]] using scripting that executes in concert with existing test automation.&lt;br /&gt;
&lt;br /&gt;
== STRIDE optimizes failure resolution ==&lt;br /&gt;
When executing &#039;&#039; &#039;&#039;&#039;behavior-based tests&#039;&#039;&#039; &#039;&#039; or &#039;&#039; &#039;&#039;&#039;API/unit tests&#039;&#039;&#039; &#039;&#039;, results can be automatically uploaded to [[STRIDE_Test_Space | &#039;&#039;&#039;STRIDE Test Space&#039;&#039;&#039;]] for storage and analysis. Result data is uploaded manually (using the web interface) or automatically using the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]]. Hosting test results in a &#039;&#039;central location&#039;&#039; with easy access via a browser enables the entire team to better participate in ensuring quality as part of the development process. [[STRIDE_Test_Space#Results_View | &#039;&#039;&#039;Reports&#039;&#039;&#039;]] containing test results, timing, [[Tracing | &#039;&#039;&#039;tracing logs&#039;&#039;&#039;]], and built-in test documentation (supported in both [[Test_API#Test_Documentation | &#039;&#039;&#039;target code&#039;&#039;&#039;]] and [[Perl_Script_APIs#Documentation | &#039;&#039;&#039;script&#039;&#039;&#039;]]), all correlated together, enhance the context for all team members to manage testing and optimize failure resolution. Team collaboration on testing activities is also facilitated with auto-generated email [[Notifications | &#039;&#039;&#039;notifications&#039;&#039;&#039;]] and [[STRIDE_Test_Space#Messages | &#039;&#039;&#039;messaging&#039;&#039;&#039;]]. Emails contain links to test failures where file names and line numbers are provided to aid in resolution. Messaging allows team members to more effectively communicate on the specifics of test results while minimizing information required in traditional emails, status reports, etc. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=What_is_Unique_About_STRIDE&amp;diff=12891</id>
		<title>What is Unique About STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=What_is_Unique_About_STRIDE&amp;diff=12891"/>
		<updated>2010-06-11T23:14:32Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* STRIDE facilitates deeper API/Unit Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== STRIDE is a cross-platform framework ==&lt;br /&gt;
The [[Runtime_Reference | &#039;&#039;&#039;STRIDE Runtime&#039;&#039;&#039;]] is written in standard C on top of a simple [[Platform_Abstraction_Layer | &#039;&#039;&#039;platform abstraction layer&#039;&#039;&#039;]] that enables it to work on virtually any target platform. It is delivered as source code to be included in the application&#039;s build system. STRIDE also auto-generates [[Intercept_Module | &#039;&#039;&#039;harnessing and remoting logic&#039;&#039;&#039;]] as source code during the make process, removing any dependencies on specific compilers and / or processors. The transport between the host and target is configurable and supports serial/USB and TCP/IP by default. Custom transports are also available. STRIDE is easily configured for single process or multiple process environments (i.e. Linux, Windows CE, etc.). Testing can also be conducted using an [[Off-Target_Environment | &#039;&#039;&#039;off-target environment&#039;&#039;&#039;]] which is provided for Windows and Linux host machines. &lt;br /&gt;
&lt;br /&gt;
The cross-platform framework facilitates a unified testing approach for all team members, enabling organizations to standardize on a test workflow that is independent of the target platform being used or what branch of software is being changed.&lt;br /&gt;
&lt;br /&gt;
== STRIDE software builds are both functional and testable ==&lt;br /&gt;
STRIDE enables software builds to be both fully &#039;&#039;functional&#039;&#039; and &#039;&#039;testable&#039;&#039; at the same time. The software works exactly the same as before. Whatever the software image was used for in the past -- system testing, developer debugging, etc. -- is still applicable. The STRIDE [[Test_Units | &#039;&#039;&#039;test logic&#039;&#039;&#039;]] is separated from the application source code and is NOT executed unless invoked via the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]]. The [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]] is only active when executing tests. The impact of built-in testability to the software application is nominal. The application can easily be switched back to a &#039;&#039;non-testable&#039;&#039; build by simply removing the [[Runtime_Integration#STRIDE_Feature_Control | &#039;&#039;&#039;STRIDE_ENABLED&#039;&#039;&#039;]] preprocessor directive. This automatically controls all STRIDE related source code and macros. There are no changes required to the build process to enable or disable this functionality. &lt;br /&gt;
&lt;br /&gt;
The built-in &#039;&#039;testability&#039;&#039; is similar in concept to a &#039;&#039;debug build&#039;&#039; except the focus is on testing. The &#039;&#039;testable build&#039;&#039; leverages the existing software build process by integrating into the same &#039;&#039;make system&#039;&#039; used by the development team (no one-off or special builds). Automatically included in the build is test automation controllable from the host. When tests are executed, timing analysis is also included with the generated [[Reporting_Model | &#039;&#039;&#039;test report&#039;&#039;&#039;]]. &lt;br /&gt;
Developers can easily pre-flight test their source code changes before &#039;&#039;committing&#039;&#039; them to a baseline. Builds can be [[Setting_up_your_CI_Environment | &#039;&#039;&#039;automatically regression tested&#039;&#039;&#039;]] as part of the daily build process.&lt;br /&gt;
&lt;br /&gt;
== STRIDE facilitates deeper API/Unit Testing ==&lt;br /&gt;
STRIDE offers unique techniques that can be leveraged for deeper API/Unit test coverage. STRIDE provides support for this type of testing [[Test_Units_Overview | &#039;&#039;&#039;in C/C++&#039;&#039;&#039;]]. There are no special APIs required to register tests, suites, etc. Just write your tests in any combination of C/C++, and the auto-generated [[Intercept_Module | &#039;&#039;&#039;intercept module&#039;&#039;&#039;]] produced by the [[Build_Tools | &#039;&#039;&#039;STRIDE build tools&#039;&#039;&#039;]] takes care of everything. Tests from separate teams are automatically aggregated by the system -- no coordination is required. &lt;br /&gt;
&lt;br /&gt;
Writing API/Unit tests in native code is the simplest way to begin validating the software. There is no new language to learn, no proprietary editor, and your normal programming workflow is not interrupted. This type of testing works well for:&lt;br /&gt;
&lt;br /&gt;
* calling APIs directly&lt;br /&gt;
* validating C++ classes&lt;br /&gt;
* isolating modules &lt;br /&gt;
* critical processing / timing&lt;br /&gt;
* and much more ...&lt;br /&gt;
&lt;br /&gt;
For more advanced testing scenarios, dependencies can be [[Using_Test_Doubles | &#039;&#039;&#039;doubled&#039;&#039;&#039;]]. This feature provides a means for intercepting C/C++ global functions on the target and substituting a stub, fake, or mock. The substitution is all controllable via the runtime, allowing the software to continue executing normally when not running a test. &lt;br /&gt;
&lt;br /&gt;
[[File_Transfer_Services | &#039;&#039;&#039;File fixturing&#039;&#039;&#039;]] is another technique that can be leveraged to drive better testing. Test code executing on the target platform can perform file operations on the host remotely. This enables opening, reading, and writing host-side files while executing target-based test logic. File fixturing allows external input into the system to be bypassed for more controlled and isolated testing. &lt;br /&gt;
&lt;br /&gt;
There are numerous other features that can be leveraged to facilitate deeper API/Unit test coverage: &lt;br /&gt;
* [[Test_Macros | &#039;&#039;&#039;Assertion macros&#039;&#039;&#039;]]&lt;br /&gt;
* Leveraging [[Test Point Testing in C/C++ | &#039;&#039;&#039;Test Points&#039;&#039;&#039;]] for behavior testing&lt;br /&gt;
* Passing parameters to Test Units&lt;br /&gt;
* Seamless publishing to [[STRIDE_Test_Space | &#039;&#039;&#039;Test Space&#039;&#039;&#039;]]&lt;br /&gt;
* and much [[Test_API | &#039;&#039;&#039; more ... &#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
== STRIDE includes behavior-based testing techniques ==&lt;br /&gt;
STRIDE leverages [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]] to provide &#039;&#039; &#039;&#039;&#039;behavior-based testing techniques&#039;&#039;&#039; &#039;&#039; that can be applied to the executing software application. The execution sequencing of the code, along with data, can be automatically validated based on [[Expectations | &#039;&#039;&#039;expectations&#039;&#039;&#039;]]. Behavior-based testing is different from unit testing or API testing. Behavior testing does not focus on calling functions / methods and validating their return values. Behavior testing validates the &#039;&#039;expected sequencing of the software&#039;&#039; executing under normal operating conditions. This can span threads and process boundaries, and even multiple targets, as the application(s) run. Leveraging macros -- called [[Test_Point | &#039;&#039;&#039;Test Points&#039;&#039;&#039;]] -- domain experts strategically instrument the source under test. Data can also optionally be associated with a &#039;&#039;&#039; &#039;&#039;Test Point&#039;&#039; &#039;&#039;&#039;, which is often critical to validating that the software is behaving as expected. This type of validation can be applied to a wide range of testing scenarios:&lt;br /&gt;
&lt;br /&gt;
* State Machines&lt;br /&gt;
* Data flow through system components&lt;br /&gt;
* Sequencing between threads&lt;br /&gt;
* Drivers that don&#039;t return values&lt;br /&gt;
* and much more ...&lt;br /&gt;
&lt;br /&gt;
Another key element to STRIDE based &#039;&#039;behavior testing&#039;&#039; is that the expected code sequencing is automatically validated. This automatic validation can be used for on-going regression testing. The expectations are realized by simple table definitions along with hooks for customization related to optional data. When failures do occur, context is provided with the file name and associated line number of the unexpected &#039;&#039;&#039;Test Point(s)&#039;&#039;&#039; behavior. The test validation can be implemented in both [[Expectation_Tests_in_C/C%2B%2B | &#039;&#039;&#039;native target code&#039;&#039;&#039;]] and [[Perl_Script_APIs#STRIDE::Test | &#039;&#039;&#039;scripting on the host&#039;&#039;&#039;]]. In either case, the validation is done without impacting the application&#039;s performance (i.e. target code executed in a background thread and host script executed on desktop). &lt;br /&gt;
&lt;br /&gt;
Software Quality Assurance (SQA) can also add behavior-based testing as part of their existing functional / system testing. Because the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]] is a command-line utility, it is easily controlled from existing test infrastructure. SQA can also leverage the &#039;&#039; &#039;&#039;&#039;instrumentation&#039;&#039;&#039; &#039;&#039; to create their own [[Reporting_Model#Suites | &#039;&#039;&#039;test suites&#039;&#039;&#039;]] using scripting that executes in concert with existing test automation. &lt;br /&gt;
&lt;br /&gt;
== STRIDE optimizes failure resolution ==&lt;br /&gt;
When executing &#039;&#039; &#039;&#039;&#039;behavior-based tests&#039;&#039;&#039; &#039;&#039; or &#039;&#039; &#039;&#039;&#039;API/unit tests&#039;&#039;&#039; &#039;&#039;, results can be automatically uploaded to [[STRIDE_Test_Space | &#039;&#039;&#039;STRIDE Test Space&#039;&#039;&#039;]] for storage and analysis. Result data is uploaded manually (using the web interface) or automatically using the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]]. Hosting test results in a &#039;&#039;central location&#039;&#039; with easy access via a browser enables the entire team to better participate in ensuring quality as part of the development process. [[STRIDE_Test_Space#Results_View | &#039;&#039;&#039;Reports&#039;&#039;&#039;]] containing test results, timing, [[Tracing | &#039;&#039;&#039;tracing logs&#039;&#039;&#039;]], and built-in test documentation (supported in both [[Test_API#Test_Documentation | &#039;&#039;&#039;target code&#039;&#039;&#039;]] and [[Perl_Script_APIs#Documentation | &#039;&#039;&#039;script&#039;&#039;&#039;]]), all correlated together, enhance the context for all team members to manage testing and optimize failure resolution. Team collaboration on testing activities is also facilitated with auto-generated email [[Notifications | &#039;&#039;&#039;notifications&#039;&#039;&#039;]] and [[STRIDE_Test_Space#Messages | &#039;&#039;&#039;messaging&#039;&#039;&#039;]]. Emails contain links to test failures where file names and line numbers are provided to aid in resolution. Messaging allows team members to more effectively communicate on the specifics of test results while minimizing information required in traditional emails, status reports, etc. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=What_is_Unique_About_STRIDE&amp;diff=12890</id>
		<title>What is Unique About STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=What_is_Unique_About_STRIDE&amp;diff=12890"/>
		<updated>2010-06-11T23:05:27Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* STRIDE facilitates deeper API/Unit Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== STRIDE is a cross-platform framework ==&lt;br /&gt;
The [[Runtime_Reference | &#039;&#039;&#039;STRIDE Runtime&#039;&#039;&#039;]] is written in standard C on top of a simple [[Platform_Abstraction_Layer | &#039;&#039;&#039;platform abstraction layer&#039;&#039;&#039;]] that enables it to work on virtually any target platform. It is delivered as source code to be included in the application&#039;s build system. STRIDE also auto-generates [[Intercept_Module | &#039;&#039;&#039;harnessing and remoting logic&#039;&#039;&#039;]] as source code during the make process, removing any dependencies on specific compilers and / or processors. The transport between the host and target is configurable and supports serial/USB and TCP/IP by default. Custom transports are also available. STRIDE is easily configured for single process or multiple process environments (i.e. Linux, Windows CE, etc.). Testing can also be conducted using an [[Off-Target_Environment | &#039;&#039;&#039;off-target environment&#039;&#039;&#039;]] which is provided for Windows and Linux host machines. &lt;br /&gt;
&lt;br /&gt;
The cross-platform framework facilitates a unified testing approach for all team members, enabling organizations to standardize on a test workflow that is independent of the target platform being used or what branch of software is being changed.&lt;br /&gt;
&lt;br /&gt;
== STRIDE software builds are both functional and testable ==&lt;br /&gt;
STRIDE enables software builds to be both fully &#039;&#039;functional&#039;&#039; and &#039;&#039;testable&#039;&#039; at the same time. The software works exactly the same as before. Whatever the software image was used for in the past -- system testing, developer debugging, etc. -- is still applicable. The STRIDE [[Test_Units | &#039;&#039;&#039;test logic&#039;&#039;&#039;]] is separated from the application source code and is NOT executed unless invoked via the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]]. The [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]] is only active when executing tests. The impact of built-in testability to the software application is nominal. The application can easily be switched back to a &#039;&#039;non-testable&#039;&#039; build by simply removing the [[Runtime_Integration#STRIDE_Feature_Control | &#039;&#039;&#039;STRIDE_ENABLED&#039;&#039;&#039;]] preprocessor directive. This automatically controls all STRIDE related source code and macros. There are no changes required to the build process to enable or disable this functionality. &lt;br /&gt;
&lt;br /&gt;
The built-in &#039;&#039;testability&#039;&#039; is similar in concept to a &#039;&#039;debug build&#039;&#039; except the focus is on testing. The &#039;&#039;testable build&#039;&#039; leverages the existing software build process by integrating into the same &#039;&#039;make system&#039;&#039; used by the development team (no one-off or special builds). Automatically included in the build is test automation controllable from the host. When tests are executed, timing analysis is also included with the generated [[Reporting_Model | &#039;&#039;&#039;test report&#039;&#039;&#039;]]. &lt;br /&gt;
Developers can easily pre-flight test their source code changes before &#039;&#039;committing&#039;&#039; them to a baseline. Builds can be [[Setting_up_your_CI_Environment | &#039;&#039;&#039;automatically regression tested&#039;&#039;&#039;]] as part of the daily build process.&lt;br /&gt;
&lt;br /&gt;
== STRIDE facilitates deeper API/Unit Testing ==&lt;br /&gt;
STRIDE offers unique techniques that can be leveraged for deeper API/Unit test coverage. STRIDE provides support for this type of testing [[Test_Units_Overview | &#039;&#039;&#039;in C/C++&#039;&#039;&#039;]]. There are no special APIs required to register tests, suites, etc. Just write your tests in any combination of C/C++, and the auto-generated [[Intercept_Module | &#039;&#039;&#039;intercept module&#039;&#039;&#039;]] produced by the [[Build_Tools | &#039;&#039;&#039;STRIDE build tools&#039;&#039;&#039;]] takes care of everything. Tests from separate teams are automatically aggregated by the system -- no coordination is required. &lt;br /&gt;
&lt;br /&gt;
Writing API/Unit tests in native code is the simplest way to begin validating the software. There is no new language to learn, no proprietary editor, and your normal programming workflow is not interrupted. This type of testing works well for:&lt;br /&gt;
&lt;br /&gt;
* calling APIs directly&lt;br /&gt;
* validating C++ classes&lt;br /&gt;
* isolating modules &lt;br /&gt;
* critical processing / timing&lt;br /&gt;
* and much more ...&lt;br /&gt;
&lt;br /&gt;
For more advanced testing scenarios, dependencies can be [[Using_Test_Doubles | &#039;&#039;&#039;doubled&#039;&#039;&#039;]]. This feature provides a means for intercepting C/C++ global functions on the target and substituting a stub, fake, or mock. The substitution is all controllable via the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]], allowing the software to continue executing normally when not running a test. &lt;br /&gt;
&lt;br /&gt;
[[File_Transfer_Services | &#039;&#039;&#039;File fixturing&#039;&#039;&#039;]] is another technique that can be leveraged to drive better testing. Test code executing on the target platform can perform file operations on the host remotely. This enables opening, reading, and writing host-side files while executing target-based test logic. File fixturing allows external input into the system to be bypassed for more controlled and isolated testing. &lt;br /&gt;
&lt;br /&gt;
There are numerous other features that can be leveraged to facilitate deeper API/Unit test coverage: &lt;br /&gt;
* [[Test_Macros | &#039;&#039;&#039;Assertion macros&#039;&#039;&#039;]]&lt;br /&gt;
* Leveraging [[Test Point Testing in C/C++ | &#039;&#039;&#039;Test Points&#039;&#039;&#039;]] for behavior testing&lt;br /&gt;
* Passing parameters to Test Units&lt;br /&gt;
* Seamless publishing to [[STRIDE_Test_Space | &#039;&#039;&#039;Test Space&#039;&#039;&#039;]]&lt;br /&gt;
* and much [[Test_API | &#039;&#039;&#039; more ... &#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
== STRIDE includes behavior-based testing techniques ==&lt;br /&gt;
STRIDE leverages [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]] to provide &#039;&#039; &#039;&#039;&#039;behavior-based testing techniques&#039;&#039;&#039; &#039;&#039; that can be applied to the executing software application. The execution sequencing of the code, along with data, can be automatically validated based on [[Expectations | &#039;&#039;&#039;expectations&#039;&#039;&#039;]]. Behavior-based testing is different from unit testing or API testing. Behavior testing does not focus on calling functions / methods and validating their return values. Behavior testing validates the &#039;&#039;expected sequencing of the software&#039;&#039; executing under normal operating conditions. This can span threads and process boundaries, and even multiple targets, as the application(s) run. Leveraging macros -- called [[Test_Point | &#039;&#039;&#039;Test Points&#039;&#039;&#039;]] -- domain experts strategically instrument the source under test. Data can also optionally be associated with a &#039;&#039;&#039; &#039;&#039;Test Point&#039;&#039; &#039;&#039;&#039;, which is often critical to validating that the software is behaving as expected. This type of validation can be applied to a wide range of testing scenarios:&lt;br /&gt;
&lt;br /&gt;
* State Machines&lt;br /&gt;
* Data flow through system components&lt;br /&gt;
* Sequencing between threads&lt;br /&gt;
* Drivers that don&#039;t return values&lt;br /&gt;
* and much more ...&lt;br /&gt;
&lt;br /&gt;
Another key element to STRIDE based &#039;&#039;behavior testing&#039;&#039; is that the expected code sequencing is automatically validated. This automatic validation can be used for on-going regression testing. The expectations are realized by simple table definitions along with hooks for customization related to optional data. When failures do occur, context is provided with the file name and associated line number of the unexpected &#039;&#039;&#039;Test Point(s)&#039;&#039;&#039; behavior. The test validation can be implemented in both [[Expectation_Tests_in_C/C%2B%2B | &#039;&#039;&#039;native target code&#039;&#039;&#039;]] and [[Perl_Script_APIs#STRIDE::Test | &#039;&#039;&#039;scripting on the host&#039;&#039;&#039;]]. In either case, the validation is done without impacting the application&#039;s performance (i.e. target code executed in a background thread and host script executed on desktop). &lt;br /&gt;
&lt;br /&gt;
Software Quality Assurance (SQA) can also add behavior-based testing as part of their existing functional / system testing. Because the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]] is a command-line utility, it is easily controlled from existing test infrastructure. SQA can also leverage the &#039;&#039; &#039;&#039;&#039;instrumentation&#039;&#039;&#039; &#039;&#039; to create their own [[Reporting_Model#Suites | &#039;&#039;&#039;test suites&#039;&#039;&#039;]] using scripting that executes in concert with existing test automation. &lt;br /&gt;
&lt;br /&gt;
== STRIDE optimizes failure resolution ==&lt;br /&gt;
When executing &#039;&#039; &#039;&#039;&#039;behavior-based tests&#039;&#039;&#039; &#039;&#039; or &#039;&#039; &#039;&#039;&#039;API/unit tests&#039;&#039;&#039; &#039;&#039;, results can be automatically uploaded to [[STRIDE_Test_Space | &#039;&#039;&#039;STRIDE Test Space&#039;&#039;&#039;]] for storage and analysis. Result data is uploaded manually (using the web interface) or automatically using the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]]. Hosting test results in a &#039;&#039;central location&#039;&#039; with easy access via a browser enables the entire team to better participate in ensuring quality as part of the development process. [[STRIDE_Test_Space#Results_View | &#039;&#039;&#039;Reports&#039;&#039;&#039;]] containing test results, timing, [[Tracing | &#039;&#039;&#039;tracing logs&#039;&#039;&#039;]], and built-in test documentation (supported in both [[Test_API#Test_Documentation | &#039;&#039;&#039;target code&#039;&#039;&#039;]] and [[Perl_Script_APIs#Documentation | &#039;&#039;&#039;script&#039;&#039;&#039;]]), all correlated together, enhance the context for all team members to manage testing and optimize failure resolution. Team collaboration on testing activities is also facilitated with auto-generated email [[Notifications | &#039;&#039;&#039;notifications&#039;&#039;&#039;]] and [[STRIDE_Test_Space#Messages | &#039;&#039;&#039;messaging&#039;&#039;&#039;]]. Emails contain links to test failures where file names and line numbers are provided to aid in resolution. Messaging allows team members to more effectively communicate on the specifics of test results while minimizing information required in traditional emails, status reports, etc. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=What_is_Unique_About_STRIDE&amp;diff=12889</id>
		<title>What is Unique About STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=What_is_Unique_About_STRIDE&amp;diff=12889"/>
		<updated>2010-06-11T23:01:03Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* STRIDE software builds are both functional and testable */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== STRIDE is a cross-platform framework ==&lt;br /&gt;
The [[Runtime_Reference | &#039;&#039;&#039;STRIDE Runtime&#039;&#039;&#039;]] is written in standard C on top of a simple [[Platform_Abstraction_Layer | &#039;&#039;&#039;platform abstraction layer&#039;&#039;&#039;]] that enables it to work on virtually any target platform. It is delivered as source code to be included in the application&#039;s build system. STRIDE also auto-generates [[Intercept_Module | &#039;&#039;&#039;harnessing and remoting logic&#039;&#039;&#039;]] as source code during the make process, removing any dependencies on specific compilers and / or processors. The transport between the host and target is configurable and supports serial/USB and TCP/IP by default. Custom transports are also available. STRIDE is easily configured for single process or multiple process environments (i.e. Linux, Windows CE, etc.). Testing can also be conducted using an [[Off-Target_Environment | &#039;&#039;&#039;off-target environment&#039;&#039;&#039;]] which is provided for Windows and Linux host machines. &lt;br /&gt;
&lt;br /&gt;
The cross-platform framework facilitates a unified testing approach for all team members, enabling organizations to standardize on a test workflow that is independent of the target platform being used or what branch of software is being changed.&lt;br /&gt;
&lt;br /&gt;
== STRIDE software builds are both functional and testable ==&lt;br /&gt;
STRIDE enables software builds to be both fully &#039;&#039;functional&#039;&#039; and &#039;&#039;testable&#039;&#039; at the same time. The software works exactly the same as before. Whatever the software image was used for in the past -- system testing, developer debugging, etc. -- is still applicable. The STRIDE [[Test_Units | &#039;&#039;&#039;test logic&#039;&#039;&#039;]] is separated from the application source code and is NOT executed unless invoked via the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]]. The [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]] is only active when executing tests. The impact of built-in testability to the software application is nominal. The application can easily be switched back to a &#039;&#039;non-testable&#039;&#039; build by simply removing the [[Runtime_Integration#STRIDE_Feature_Control | &#039;&#039;&#039;STRIDE_ENABLED&#039;&#039;&#039;]] preprocessor directive. This automatically controls all STRIDE related source code and macros. There are no changes required to the build process to enable or disable this functionality. &lt;br /&gt;
&lt;br /&gt;
The built-in &#039;&#039;testability&#039;&#039; is similar in concept to a &#039;&#039;debug build&#039;&#039; except the focus is on testing. The &#039;&#039;testable build&#039;&#039; leverages the existing software build process by integrating into the same &#039;&#039;make system&#039;&#039; used by the development team (no one-off or special builds). Automatically included in the build is test automation controllable from the host. When tests are executed, timing analysis is also included with the generated [[Reporting_Model | &#039;&#039;&#039;test report&#039;&#039;&#039;]]. &lt;br /&gt;
Developers can easily pre-flight test their source code changes before &#039;&#039;committing&#039;&#039; them to a baseline. Builds can be [[Setting_up_your_CI_Environment | &#039;&#039;&#039;automatically regression tested&#039;&#039;&#039;]] as part of the daily build process.&lt;br /&gt;
&lt;br /&gt;
== STRIDE facilitates deeper API/Unit Testing ==&lt;br /&gt;
STRIDE offers unique techniques that can be leveraged for deeper API/Unit test coverage. STRIDE provides support for this type of testing [[Test_Units_Overview | &#039;&#039;&#039;in C/C++&#039;&#039;&#039;]]. There are no special APIs required to register tests, suites, etc. Just write your tests in any combination of C/C++, and the auto-generated [[Intercept_Module | &#039;&#039;&#039;intercept module&#039;&#039;&#039;]] produced by the [[Build_Tools | &#039;&#039;&#039;STRIDE build tools&#039;&#039;&#039;]] takes care of everything. Aggregating tests from separate teams is also supported -- no coordination is required. Writing API/Unit tests in native code is the simplest way to begin validating the software. There is no new language to learn, no proprietary editor, and the same workflow used when programming can be leveraged. This type of testing works well for:&lt;br /&gt;
&lt;br /&gt;
* calling APIs directly&lt;br /&gt;
* validating C++ classes&lt;br /&gt;
* isolating modules &lt;br /&gt;
* critical processing / timing&lt;br /&gt;
* and much more ...&lt;br /&gt;
&lt;br /&gt;
For more advanced testing scenarios, dependencies can be [[Using_Test_Doubles | &#039;&#039;&#039;doubled&#039;&#039;&#039;]]. This feature provides a means for intercepting C/C++ global functions on the target and substituting a stub, fake, or mock. The substitution is all controllable via the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]], allowing the software to continue executing normally when not running a test. &lt;br /&gt;
&lt;br /&gt;
[[File_Transfer_Services | &#039;&#039;&#039;File fixturing&#039;&#039;&#039;]] is another technique that can be leveraged to drive better testing. Test code executing on the target platform can perform file operations on the host remotely. This enables opening, reading, and writing host-side files while executing target-based test logic. File fixturing allows external input into the system to be bypassed for more controlled and isolated testing. &lt;br /&gt;
&lt;br /&gt;
There are numerous other features that can be leveraged to facilitate deeper API/Unit test coverage: &lt;br /&gt;
* [[Test_Macros | &#039;&#039;&#039;Assertion macros&#039;&#039;&#039;]]&lt;br /&gt;
* Leveraging [[Test Point Testing in C/C++ | &#039;&#039;&#039;Test Points&#039;&#039;&#039;]] for behavior testing&lt;br /&gt;
* Passing parameters to Test Units&lt;br /&gt;
* Seamless publishing to [[STRIDE_Test_Space | &#039;&#039;&#039;Test Space&#039;&#039;&#039;]]&lt;br /&gt;
* and much [[Test_API | &#039;&#039;&#039; more ... &#039;&#039;&#039;]] &lt;br /&gt;
&lt;br /&gt;
== STRIDE includes behavior-based testing techniques ==&lt;br /&gt;
STRIDE leverages [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]] to provide &#039;&#039; &#039;&#039;&#039;behavior-based testing techniques&#039;&#039;&#039; &#039;&#039; that can be applied to the executing software application. The execution sequencing of the code, along with data, can be automatically validated based on [[Expectations | &#039;&#039;&#039;expectations&#039;&#039;&#039;]]. Behavior-based testing is different from unit testing or API testing. Behavior testing does not focus on calling functions / methods and validating their return values. Behavior testing validates the &#039;&#039;expected sequencing of the software&#039;&#039; executing under normal operating conditions. This can span threads and process boundaries, and even multiple targets, as the application(s) run. Leveraging macros -- called [[Test_Point | &#039;&#039;&#039;Test Points&#039;&#039;&#039;]] -- domain experts strategically instrument the source under test. Data can also optionally be associated with a &#039;&#039;&#039; &#039;&#039;Test Point&#039;&#039; &#039;&#039;&#039;, which is often critical to validating that the software is behaving as expected. This type of validation can be applied to a wide range of testing scenarios:&lt;br /&gt;
&lt;br /&gt;
* State Machines&lt;br /&gt;
* Data flow through system components&lt;br /&gt;
* Sequencing between threads&lt;br /&gt;
* Drivers that don&#039;t return values&lt;br /&gt;
* and much more ...&lt;br /&gt;
&lt;br /&gt;
Another key element to STRIDE based &#039;&#039;behavior testing&#039;&#039; is that the expected code sequencing is automatically validated. This automatic validation can be used for on-going regression testing. The expectations are realized by simple table definitions along with hooks for customization related to optional data. When failures do occur, context is provided with the file name and associated line number of the unexpected &#039;&#039;&#039;Test Point(s)&#039;&#039;&#039; behavior. The test validation can be implemented in both [[Expectation_Tests_in_C/C%2B%2B | &#039;&#039;&#039;native target code&#039;&#039;&#039;]] and [[Perl_Script_APIs#STRIDE::Test | &#039;&#039;&#039;scripting on the host&#039;&#039;&#039;]]. In either case, the validation is done without impacting the application&#039;s performance (i.e. target code executed in a background thread and host script executed on desktop). &lt;br /&gt;
&lt;br /&gt;
Software Quality Assurance (SQA) can also add behavior-based testing as part of their existing functional / system testing. Because the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]] is a command-line utility, it is easily controlled from existing test infrastructure. SQA can also leverage the &#039;&#039; &#039;&#039;&#039;instrumentation&#039;&#039;&#039; &#039;&#039; to create their own [[Reporting_Model#Suites | &#039;&#039;&#039;test suites&#039;&#039;&#039;]] using scripting that executes in concert with existing test automation. &lt;br /&gt;
&lt;br /&gt;
== STRIDE optimizes failure resolution ==&lt;br /&gt;
When executing &#039;&#039; &#039;&#039;&#039;behavior-based tests&#039;&#039;&#039; &#039;&#039; or &#039;&#039; &#039;&#039;&#039;API/unit tests&#039;&#039;&#039; &#039;&#039;, results can be automatically uploaded to [[STRIDE_Test_Space | &#039;&#039;&#039;STRIDE Test Space&#039;&#039;&#039;]] for storage and analysis. Result data is uploaded manually (using the web interface) or automatically using the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]]. Hosting test results in a &#039;&#039;central location&#039;&#039; with easy access via a browser enables the entire team to better participate in ensuring quality as part of the development process. [[STRIDE_Test_Space#Results_View | &#039;&#039;&#039;Reports&#039;&#039;&#039;]] containing test results, timing, [[Tracing | &#039;&#039;&#039;tracing logs&#039;&#039;&#039;]], and built-in test documentation (supported in both [[Test_API#Test_Documentation | &#039;&#039;&#039;target code&#039;&#039;&#039;]] and [[Perl_Script_APIs#Documentation | &#039;&#039;&#039;script&#039;&#039;&#039;]]), all correlated together, enhance the context for all team members to manage testing and optimize failure resolution. Team collaboration on testing activities is also facilitated with auto-generated email [[Notifications | &#039;&#039;&#039;notifications&#039;&#039;&#039;]] and [[STRIDE_Test_Space#Messages | &#039;&#039;&#039;messaging&#039;&#039;&#039;]]. Emails contain links to test failures where file names and line numbers are provided to aid in resolution. Messaging allows team members to more effectively communicate on the specifics of test results while minimizing information required in traditional emails, status reports, etc. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=What_is_Unique_About_STRIDE&amp;diff=12888</id>
		<title>What is Unique About STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=What_is_Unique_About_STRIDE&amp;diff=12888"/>
		<updated>2010-06-11T22:58:46Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* STRIDE is a cross-platform framework */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== STRIDE is a cross-platform framework ==&lt;br /&gt;
The [[Runtime_Reference | &#039;&#039;&#039;STRIDE Runtime&#039;&#039;&#039;]] is written in standard C on top of a simple [[Platform_Abstraction_Layer | &#039;&#039;&#039;platform abstraction layer&#039;&#039;&#039;]] that enables it to work on virtually any target platform. It is delivered as source code to be included in the application&#039;s build system. STRIDE also auto-generates [[Intercept_Module | &#039;&#039;&#039;harnessing and remoting logic&#039;&#039;&#039;]] as source code during the make process, removing any dependencies on specific compilers and / or processors. The transport between the host and target is configurable and supports serial/USB and TCP/IP by default. Custom transports are also available. STRIDE is easily configured for single process or multiple process environments (i.e. Linux, Windows CE, etc.). Testing can also be conducted using an [[Off-Target_Environment | &#039;&#039;&#039;off-target environment&#039;&#039;&#039;]] which is provided for Windows and Linux host machines. &lt;br /&gt;
&lt;br /&gt;
The cross-platform framework facilitates a unified testing approach for all team members, enabling organizations to standardize on a test workflow that is independent of the target platform being used or what branch of software is being changed.&lt;br /&gt;
&lt;br /&gt;
== STRIDE software builds are both functional and testable ==&lt;br /&gt;
STRIDE enables software builds to be both fully &#039;&#039;functional&#039;&#039; and &#039;&#039;testable&#039;&#039; at the same time. The software works exactly the same as before. Whatever the software image was used for in the past -- system testing, developer debugging, etc. -- is still applicable. The STRIDE [[Test_Units | &#039;&#039;&#039;test logic&#039;&#039;&#039;]] is separated from the application source code and is NOT executed unless invoked via the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]]. The [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]] is only active when executing tests. The impact of built-in testability to the software application is nominal. The application can easily be switched back to a &#039;&#039;non-testable&#039;&#039; build by simply removing the [[Runtime_Integration#STRIDE_Feature_Control | &#039;&#039;&#039;STRIDE_ENABLED&#039;&#039;&#039;]] preprocessor directive. This automatically controls all STRIDE related source code and macros. There are no changes required to the build process to enable or disable this functionality. &lt;br /&gt;
&lt;br /&gt;
The built-in &#039;&#039;testability&#039;&#039; is similar in concept to a &#039;&#039;debug build&#039;&#039; except the focus is on testing. The &#039;&#039;testable build&#039;&#039; leverages the existing software build process by integrating into the same &#039;&#039;make system&#039;&#039; used by the development team (no one-off or special builds). Automatically included in the build is test automation controllable from the host. When tests are executed, timing analysis is also included with the generated [[Reporting_Model | &#039;&#039;&#039;test report&#039;&#039;&#039;]]. &lt;br /&gt;
Developers can pre-flight test their source code changes before &#039;&#039;committing&#039;&#039; them to a baseline. Builds can be [[Setting_up_your_CI_Environment | &#039;&#039;&#039;automatically regression tested&#039;&#039;&#039;]] as part of the daily build process. &lt;br /&gt;
&lt;br /&gt;
== STRIDE facilitates deeper API/Unit Testing ==&lt;br /&gt;
STRIDE offers unique techniques that can be leveraged for deeper API/Unit test coverage. STRIDE provides support for this type of testing [[Test_Units_Overview | &#039;&#039;&#039;in C/C++&#039;&#039;&#039;]]. There are no special APIs required to register tests, suites, etc. Just write your tests in any combination of C/C++, and the [[Intercept_Module | &#039;&#039;&#039;intercept module&#039;&#039;&#039;]] auto-generated by the [[Build_Tools | &#039;&#039;&#039;STRIDE build tools&#039;&#039;&#039;]] takes care of the rest. Aggregating tests from separate teams is also supported -- no coordination is required. Writing API/Unit tests in native code is the simplest way to begin validating the software. There is no new language to learn, no proprietary editor, and the same workflow used when programming can be leveraged. This type of testing works well for:&lt;br /&gt;
&lt;br /&gt;
* calling APIs directly&lt;br /&gt;
* validating C++ classes&lt;br /&gt;
* isolating modules &lt;br /&gt;
* critical processing / timing&lt;br /&gt;
* and much more ...&lt;br /&gt;
&lt;br /&gt;
For more advanced testing scenarios, dependencies can be [[Using_Test_Doubles | &#039;&#039;&#039;doubled&#039;&#039;&#039;]]. This feature provides a means for intercepting C/C++ global functions on the target and substituting a stub, fake, or mock. The substitution is fully controllable via the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]], allowing the software to continue executing normally when not running a test. &lt;br /&gt;
&lt;br /&gt;
[[File_Transfer_Services | &#039;&#039;&#039;File fixturing&#039;&#039;&#039;]] is another technique that can be leveraged to drive better testing. Test code executing on the target platform can perform file operations on the host remotely. This enables opening, reading, and writing host files while executing target-based test logic. File fixturing allows external input into the system to be bypassed for more controlled and isolated testing. &lt;br /&gt;
&lt;br /&gt;
There are numerous other features that can be leveraged to facilitate deeper API/Unit test coverage: &lt;br /&gt;
* [[Test_Macros | &#039;&#039;&#039;Assertion macros&#039;&#039;&#039;]]&lt;br /&gt;
* Leveraging [[Test Point Testing in C/C++ | &#039;&#039;&#039;Test Points&#039;&#039;&#039;]] for behavior testing&lt;br /&gt;
* Passing parameters to Test Units&lt;br /&gt;
* Seamless publishing to [[STRIDE_Test_Space | &#039;&#039;&#039;Test Space&#039;&#039;&#039;]]&lt;br /&gt;
* and much [[Test_API | &#039;&#039;&#039; more ... &#039;&#039;&#039;]] &lt;br /&gt;
&lt;br /&gt;
== STRIDE includes behavior-based testing techniques ==&lt;br /&gt;
STRIDE leverages [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]] to provide &#039;&#039; &#039;&#039;&#039;behavior-based testing techniques&#039;&#039;&#039; &#039;&#039; that can be applied to the executing software application. The execution sequencing of the code, along with data, can be automatically validated based on [[Expectations | &#039;&#039;&#039;expectations&#039;&#039;&#039;]]. Behavior-based testing is different than unit testing or API testing. Behavior testing does not focus on calling functions/methods and validating their return values. Behavior testing validates the &#039;&#039;expected sequencing of the software&#039;&#039; executing under normal operating conditions. This can span threads and process boundaries, and even multiple targets, as the application(s) is running. Leveraging macros -- called [[Test_Point | &#039;&#039;&#039;Test Points&#039;&#039;&#039;]] -- domain experts strategically instrument the source under test. Data can also optionally be associated with a &#039;&#039;&#039; &#039;&#039;Test Point&#039;&#039; &#039;&#039;&#039;, which is often critical to validating that the software is behaving as expected. This type of validation can be applied to a wide range of testing scenarios:&lt;br /&gt;
&lt;br /&gt;
* State Machines&lt;br /&gt;
* Data flow through system components&lt;br /&gt;
* Sequencing between threads&lt;br /&gt;
* Drivers that don&#039;t return values&lt;br /&gt;
* and much more ...&lt;br /&gt;
&lt;br /&gt;
Another key element of STRIDE-based &#039;&#039;behavior testing&#039;&#039; is that the expected code sequencing is automatically validated. This automatic validation can be used for on-going regression testing. The expectations are expressed as simple table definitions, along with hooks for customizing the validation of optional data. When failures do occur, context is provided with the file name and associated line number of the unexpected &#039;&#039;&#039;Test Point(s)&#039;&#039;&#039; behavior. The test validation can be implemented in both [[Expectation_Tests_in_C/C%2B%2B | &#039;&#039;&#039;native target code&#039;&#039;&#039;]] and [[Perl_Script_APIs#STRIDE::Test | &#039;&#039;&#039;scripting on the host&#039;&#039;&#039;]]. In either case, the validation is done without impacting the application&#039;s performance (i.e. target code executes in a background thread and host scripts execute on the desktop). &lt;br /&gt;
&lt;br /&gt;
Software Quality Assurance (SQA) can also add behavior-based testing as part of their existing functional/system testing. Because the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]] is a command-line utility, it is easily controlled from existing test infrastructure. SQA can also leverage the &#039;&#039; &#039;&#039;&#039;instrumentation&#039;&#039;&#039; &#039;&#039; to create their own [[Reporting_Model#Suites | &#039;&#039;&#039;test suites&#039;&#039;&#039;]] using scripting that executes in concert with existing test automation.  &lt;br /&gt;
&lt;br /&gt;
== STRIDE optimizes failure resolution ==&lt;br /&gt;
When executing &#039;&#039; &#039;&#039;&#039;behavior-based tests&#039;&#039;&#039; &#039;&#039; or &#039;&#039; &#039;&#039;&#039;API/unit tests&#039;&#039;&#039; &#039;&#039;, results can be automatically uploaded to [[STRIDE_Test_Space | &#039;&#039;&#039;STRIDE Test Space&#039;&#039;&#039;]] for storage and analysis. Result data is uploaded manually (using the web interface) or automatically using the [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]]. Hosting test results in a &#039;&#039;central location&#039;&#039; with easy access via a browser enables the entire team to better participate in ensuring quality as a part of the development process. [[STRIDE_Test_Space#Results_View | &#039;&#039;&#039;Reports&#039;&#039;&#039;]] containing test results, timing, [[Tracing | &#039;&#039;&#039;tracing logs&#039;&#039;&#039;]], and built-in test documentation (supported in both [[Test_API#Test_Documentation | &#039;&#039;&#039;target code&#039;&#039;&#039;]] and [[Perl_Script_APIs#Documentation | &#039;&#039;&#039;script&#039;&#039;&#039;]]), all correlated together, enhance the context for all team members to manage testing and optimize resolving failures. Team collaboration on testing activities is also facilitated with auto-generated email [[Notifications | &#039;&#039;&#039;notifications&#039;&#039;&#039;]] and [[STRIDE_Test_Space#Messages | &#039;&#039;&#039;messaging&#039;&#039;&#039;]]. Emails contain links to test failures where file names and line numbers are provided to aid in resolution. Messaging allows team members to more effectively communicate on the specifics of test results while minimizing information required in traditional emails, status reports, etc. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=STRIDE_Overview&amp;diff=12887</id>
		<title>STRIDE Overview</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=STRIDE_Overview&amp;diff=12887"/>
		<updated>2010-06-11T22:54:16Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;STRIDE™ has been designed specifically for &#039;&#039;on-target white-box testing&#039;&#039;. STRIDE™ is a test system used for validating embedded software executing on-target. The STRIDE™ system consists of a &#039;&#039; &#039;&#039;&#039;Framework&#039;&#039;&#039; &#039;&#039; for testing and a hosted &#039;&#039; &#039;&#039;&#039;Web Application&#039;&#039;&#039; &#039;&#039; for storing and analyzing test results. &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039; &#039;&#039;&#039;Framework&#039;&#039;&#039; &#039;&#039; includes a &#039;&#039;cross-platform&#039;&#039; [[Runtime_Reference | &#039;&#039;&#039;runtime&#039;&#039;&#039;]] source package that supports connectivity with the host system and provides [[Runtime_Test_Services | &#039;&#039;&#039;services for testing&#039;&#039;&#039;]] and [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]]. The &#039;&#039; &#039;&#039;&#039;runtime&#039;&#039;&#039; &#039;&#039; enables testability to be compiled into the embedded software with minimal impact on performance or the size of the application. The &#039;&#039; &#039;&#039;&#039;Framework&#039;&#039;&#039; &#039;&#039; also contains a host-based [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]] for interactive and automated test execution and publishing to our [[STRIDE_Test_Space | &#039;&#039;&#039;hosted web application&#039;&#039;&#039;]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[image:STRIDE 4.2.jpg | 500px]]&lt;br /&gt;
&lt;br /&gt;
STRIDE&#039;s test system is designed to deliver a broad spectrum of &#039;&#039;testing capabilities&#039;&#039; that enable teams to test earlier and more effectively. Developers can implement [[Test_Units_Overview | &#039;&#039;&#039;API and unit tests&#039;&#039;&#039;]] in C/C++ that execute on-target and are controlled from the host. Unique features such as [[File_Transfer_Services | &#039;&#039;&#039;file fixturing&#039;&#039;&#039;]] and [[Using_Test_Doubles | &#039;&#039;&#039;function doubling&#039;&#039;&#039;]] facilitate deeper coverage of your software components. Because the &#039;&#039; &#039;&#039;&#039;Framework&#039;&#039;&#039; &#039;&#039; is &#039;&#039;cross-platform&#039;&#039;, testing can also be executed using off-target environments such as Windows and Linux host machines. The [[Linux SDK | &#039;&#039;&#039;Linux&#039;&#039;&#039;]] and [[Windows SDK | &#039;&#039;&#039;Windows&#039;&#039;&#039;]] &#039;&#039;SDKs&#039;&#039; allow for a seamless transition between the real target and an off-target host environment.  &lt;br /&gt;
&lt;br /&gt;
Beyond traditional developer testing, STRIDE also provides [[Test_Modules_Overview |&#039;&#039;&#039;behavior-based testing techniques&#039;&#039;&#039;]]. Behavior-based testing is different than unit testing or API testing in that it &#039;&#039;does not&#039;&#039; focus on calling functions and validating return values. Behavior testing, rather, validates the &#039;&#039;expected sequencing of the software&#039;&#039; executing under normal operating conditions. This can span threads and process boundaries, and even multiple targets, as the application(s) is running. Developers insert instrumentation [[Test_Point | &#039;&#039;&#039;macros&#039;&#039;&#039;]] to enable this approach, making it especially effective for testing legacy code. This kind of &#039;&#039;expectation-based&#039;&#039; testing can be used to validate broadly scoped, system-wide scenarios, for example:&lt;br /&gt;
&lt;br /&gt;
* Complete data flow through system components&lt;br /&gt;
* Communication between threads, processes, and targets &lt;br /&gt;
* Behavior of stacks, state machines, and drivers&lt;br /&gt;
* … and much more&lt;br /&gt;
&lt;br /&gt;
Once tests have been implemented, they are executed using the &#039;&#039; &#039;&#039;&#039;runner&#039;&#039;&#039; &#039;&#039; and test reports can be automatically uploaded to our &#039;&#039;&#039;web application&#039;&#039;&#039; - [[STRIDE_Test_Space | &#039;&#039;&#039;STRIDE Test Space&#039;&#039;&#039;]]. This application allows all team members to track and collaborate on results. Failure resolution is optimized by centralizing results, providing specific source information related to failures, and sending automatic email notifications for new results. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--hr/--&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font size=&amp;quot;3&amp;quot;&amp;gt; &lt;br /&gt;
&#039;&#039;&#039;For more details refer to the following:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[What is Unique About STRIDE]]&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Types of Testing Supported]]&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Frequently Asked Questions About STRIDE | Frequently Asked Questions]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=STRIDE_Overview&amp;diff=12886</id>
		<title>STRIDE Overview</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=STRIDE_Overview&amp;diff=12886"/>
		<updated>2010-06-11T22:50:35Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;STRIDE™ has been designed specifically for &#039;&#039;on-target white-box testing&#039;&#039;. STRIDE™ is a test system used for validating embedded software executing on-target. The STRIDE™ system consists of a &#039;&#039; &#039;&#039;&#039;Framework&#039;&#039;&#039; &#039;&#039; for testing and a hosted &#039;&#039; &#039;&#039;&#039;Web Application&#039;&#039;&#039; &#039;&#039; for storing and analyzing test results. &lt;br /&gt;
&lt;br /&gt;
The &#039;&#039; &#039;&#039;&#039;Framework&#039;&#039;&#039; &#039;&#039; includes a &#039;&#039;cross-platform&#039;&#039; [[Runtime_Reference | &#039;&#039;&#039;runtime&#039;&#039;&#039;]] source package that supports connectivity with the host system and provides [[Runtime_Test_Services | &#039;&#039;&#039;services for testing&#039;&#039;&#039;]] and [[Source_Instrumentation_Overview | &#039;&#039;&#039;source instrumentation&#039;&#039;&#039;]]. The &#039;&#039; &#039;&#039;&#039;runtime&#039;&#039;&#039; &#039;&#039; enables testability to be compiled into the embedded software with minimal impact on performance or the size of the application. The &#039;&#039; &#039;&#039;&#039;Framework&#039;&#039;&#039; &#039;&#039; also contains a host-based [[Stride_Runner | &#039;&#039;&#039;runner&#039;&#039;&#039;]] for interactive and automated test execution and publishing to our [[STRIDE_Test_Space | &#039;&#039;&#039;hosted web application&#039;&#039;&#039;]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[image:STRIDE 4.2.jpg | 500px]]&lt;br /&gt;
&lt;br /&gt;
STRIDE&#039;s test system is designed to deliver a broad spectrum of &#039;&#039;testing capabilities&#039;&#039; that enable teams to test earlier and more effectively. Developers can implement [[Test_Units_Overview | &#039;&#039;&#039;API and unit tests&#039;&#039;&#039;]] in C/C++ that execute on-target and are controlled from the host. Unique features such as [[File_Transfer_Services | &#039;&#039;&#039;file fixturing&#039;&#039;&#039;]] and [[Using_Test_Doubles | &#039;&#039;&#039;function doubling&#039;&#039;&#039;]] facilitate deeper coverage of your software components. Because the &#039;&#039; &#039;&#039;&#039;Framework&#039;&#039;&#039; &#039;&#039; is &#039;&#039;cross-platform&#039;&#039;, testing can also be executed using off-target environments such as Windows and Linux host machines. The [[Linux SDK | &#039;&#039;&#039;Linux&#039;&#039;&#039;]] and [[Windows SDK | &#039;&#039;&#039;Windows&#039;&#039;&#039;]] &#039;&#039;SDKs&#039;&#039; allow for a seamless transition between the real target and an off-target host environment.  &lt;br /&gt;
&lt;br /&gt;
Beyond traditional developer testing, STRIDE also provides [[Test_Modules_Overview |&#039;&#039;&#039;behavior-based testing techniques&#039;&#039;&#039;]]. Behavior-based testing is different than unit testing or API testing in that it &#039;&#039;does not&#039;&#039; focus on calling functions and validating return values. Behavior testing, rather, validates the &#039;&#039;expected sequencing of the software&#039;&#039; executing under normal operating conditions. This can span threads and process boundaries, and even multiple targets, as the application(s) is running. Developers insert instrumentation [[Test_Point | &#039;&#039;&#039;macros&#039;&#039;&#039;]] to enable this approach, making it especially effective for testing legacy code. This kind of &#039;&#039;expectation-based&#039;&#039; testing can be used to validate more comprehensive, system-wide testing scenarios, for example:&lt;br /&gt;
&lt;br /&gt;
* Data flow through system components&lt;br /&gt;
* Communication between threads, processes, and targets &lt;br /&gt;
* Behavior of stacks, state machines, and drivers&lt;br /&gt;
* … and much more&lt;br /&gt;
&lt;br /&gt;
Once tests have been implemented, they are executed using the &#039;&#039; &#039;&#039;&#039;runner&#039;&#039;&#039; &#039;&#039; and test reports can be automatically uploaded to our &#039;&#039;&#039;Web application&#039;&#039;&#039; - [[STRIDE_Test_Space | &#039;&#039;&#039;STRIDE Test Space&#039;&#039;&#039;]]. This application allows all team members to track and collaborate on results. Failure resolution is optimized by centralizing results, providing specific source information related to failures, and automatic email notification for new results. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--hr/--&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font size=&amp;quot;3&amp;quot;&amp;gt; &lt;br /&gt;
&#039;&#039;&#039;For more details refer to the following:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[What is Unique About STRIDE]]&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Types of Testing Supported]]&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Frequently Asked Questions About STRIDE | Frequently Asked Questions]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Types_of_Testing_Supported_by_STRIDE&amp;diff=12885</id>
		<title>Types of Testing Supported by STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Types_of_Testing_Supported_by_STRIDE&amp;diff=12885"/>
		<updated>2010-06-11T22:40:16Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* Behavior Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
One of the questions that should be asked is &#039;&#039;what is the &#039;&#039;&#039;value&#039;&#039;&#039; of the test?&#039;&#039; If the test does not discover any defects or does not provide ongoing regression, then the value is questionable. Also &#039;&#039;what is the &#039;&#039;&#039;effort&#039;&#039;&#039; in implementing the test?&#039;&#039; STRIDE has been uniquely designed to support maximizing the &#039;&#039; &#039;&#039;&#039;value&#039;&#039;&#039; &#039;&#039; of the test while minimizing the &#039;&#039; &#039;&#039;&#039;effort&#039;&#039;&#039; &#039;&#039; to implement it.&lt;br /&gt;
&lt;br /&gt;
The STRIDE test system supports three general types of testing:&lt;br /&gt;
* &#039;&#039;&#039;Unit Testing&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;API Testing&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Unit Testing ==&lt;br /&gt;
[[Test Units Overview | STRIDE Unit Testing]] supports the model found in typical [http://en.wikipedia.org/wiki/XUnit xUnit-style] testing frameworks. STRIDE also offers a number of [[Test API | additional features]] optimized for testing embedded software. Traditional &#039;&#039;&#039;Unit Testing&#039;&#039;&#039; presents a number of challenges in testing embedded software:&lt;br /&gt;
&lt;br /&gt;
* Testing functions/classes in &#039;&#039;&#039;isolation&#039;&#039;&#039; requires a lot of extra work, especially if your software was not designed upfront for testability&lt;br /&gt;
* Testing legacy software might have limited value, particularly if the software is stable with respect to defects&lt;br /&gt;
* The software is often not well suited for &#039;&#039;&#039;others&#039;&#039;&#039; to participate in the test implementation, since there is too much internal knowledge required to be productive&lt;br /&gt;
* It can be difficult to automate execution of the full set of tests on the real target device&lt;br /&gt;
&lt;br /&gt;
== API Testing ==&lt;br /&gt;
STRIDE supports &#039;&#039;&#039;API Testing&#039;&#039;&#039; by leveraging the same [[Test Units Overview | techniques]] available for Unit Testing. API Testing differs from unit testing in that the tests focus on direct testing of a well-defined interface. This kind of testing is typically easy to scope, depending on the size of the interface, and often has a better &#039;&#039;return-on-effort&#039;&#039;. The design of &#039;&#039;public interfaces&#039;&#039; often lends itself to testing in isolation &#039;&#039;without&#039;&#039; implementing special test logic (i.e. no stubbing required), which makes the test implementation simpler. What&#039;s more, public APIs are most likely documented. As a result, &#039;&#039;non-domain experts&#039;&#039; can more easily participate in the test implementation. Although API Testing often represents a smaller percentage of the software being exercised, the &#039;&#039;&#039;value&#039;&#039;&#039; and &#039;&#039;&#039;effort&#039;&#039;&#039; required are well understood.&lt;br /&gt;
&lt;br /&gt;
== Behavior Testing ==&lt;br /&gt;
STRIDE also supports &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039;, which is different than Unit Testing or API Testing in that it does not focus on calling functions and validating return values. Behavior Testing, rather, validates the &#039;&#039;expected sequencing and state of the software&#039;&#039; executing under normal operating conditions. To learn more about the uniqueness of Behavior Testing [[What_is_Unique_About_STRIDE#STRIDE_includes_behavior-based_testing_techniques | &#039;&#039;&#039;read here&#039;&#039;&#039;]]. We believe that Behavior Testing has a very high &#039;&#039;return-on-effort&#039;&#039; and can be easily deployed into legacy software systems. &lt;br /&gt;
&lt;br /&gt;
Some of the advantages of &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039; are:&lt;br /&gt;
* Tests are performed with fully functional software builds -- there are no code isolation challenges&lt;br /&gt;
* The technique is easy to grasp and is applicable to a large percentage of code bases&lt;br /&gt;
* The instrumentation is non-obtrusive and allows other functional and black-box testing to be executed on the same build&lt;br /&gt;
* It&#039;s easy to add [[Test Point | Test Points]] to provide coverage&lt;br /&gt;
* Allows other technical resources to participate in test implementation&lt;br /&gt;
* Tests can be written using [[Perl_Script_APIs#STRIDE::Test | scripting on the host]] and [[Expectation_Tests_in_C/C%2B%2B | native target code]] &lt;br /&gt;
* Validation is fully automated and thus does not rely on manual interpretation by domain experts&lt;br /&gt;
* Design knowledge is extracted and made available to other team members via [[Source Instrumentation Overview | instrumentation]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Types_of_Testing_Supported_by_STRIDE&amp;diff=12884</id>
		<title>Types of Testing Supported by STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Types_of_Testing_Supported_by_STRIDE&amp;diff=12884"/>
		<updated>2010-06-11T22:36:00Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* Behavior Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
One of the questions that should be asked is &#039;&#039;what is the &#039;&#039;&#039;value&#039;&#039;&#039; of the test?&#039;&#039; If the test does not discover any defects or does not provide ongoing regression, then the value is questionable. Also &#039;&#039;what is the &#039;&#039;&#039;effort&#039;&#039;&#039; in implementing the test?&#039;&#039; STRIDE has been uniquely designed to support maximizing the &#039;&#039; &#039;&#039;&#039;value&#039;&#039;&#039; &#039;&#039; of the test while minimizing the &#039;&#039; &#039;&#039;&#039;effort&#039;&#039;&#039; &#039;&#039; to implement it.&lt;br /&gt;
&lt;br /&gt;
The STRIDE test system supports three general types of testing:&lt;br /&gt;
* &#039;&#039;&#039;Unit Testing&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;API Testing&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Unit Testing ==&lt;br /&gt;
[[Test Units Overview | STRIDE Unit Testing]] supports the model found in typical [http://en.wikipedia.org/wiki/XUnit xUnit-style] testing frameworks. STRIDE also offers a number of [[Test API | additional features]] optimized for testing embedded software. Traditional &#039;&#039;&#039;Unit Testing&#039;&#039;&#039; presents a number of challenges in testing embedded software:&lt;br /&gt;
&lt;br /&gt;
* Testing functions/classes in &#039;&#039;&#039;isolation&#039;&#039;&#039; requires a lot of extra work, especially if your software was not designed upfront for testability&lt;br /&gt;
* Testing legacy software might have limited value, particularly if the software is stable with respect to defects&lt;br /&gt;
* The software is often not well suited for &#039;&#039;&#039;others&#039;&#039;&#039; to participate in the test implementation, since there is too much internal knowledge required to be productive&lt;br /&gt;
* It can be difficult to automate execution of the full set of tests on the real target device&lt;br /&gt;
&lt;br /&gt;
== API Testing ==&lt;br /&gt;
STRIDE supports &#039;&#039;&#039;API Testing&#039;&#039;&#039; by leveraging the same [[Test Units Overview | techniques]] available for Unit Testing. API Testing differs from unit testing in that the tests focus on direct testing of a well-defined interface. This kind of testing is typically easy to scope, depending on the size of the interface, and often has a better &#039;&#039;return-on-effort&#039;&#039;. The design of &#039;&#039;public interfaces&#039;&#039; often lends itself to testing in isolation &#039;&#039;without&#039;&#039; implementing special test logic (i.e. no stubbing required), which makes the test implementation simpler. What&#039;s more, public APIs are most likely documented. As a result, &#039;&#039;non-domain experts&#039;&#039; can more easily participate in the test implementation. Although API Testing often represents a smaller percentage of the software being exercised, the &#039;&#039;&#039;value&#039;&#039;&#039; and &#039;&#039;&#039;effort&#039;&#039;&#039; required are well understood.&lt;br /&gt;
&lt;br /&gt;
== Behavior Testing ==&lt;br /&gt;
STRIDE also supports &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039;, which is different than Unit Testing or API Testing in that it does not focus on calling functions and validating return values. Behavior Testing, rather, validates the &#039;&#039;expected sequencing and state of the software&#039;&#039; executing under normal operating conditions. To learn more about the uniqueness of Behavior Testing [[What_is_Unique_About_STRIDE#STRIDE_includes_behavior-based_testing_techniques | &#039;&#039;&#039;read here&#039;&#039;&#039;]]. We believe that Behavior Testing has a very high &#039;&#039;return-on-effort&#039;&#039; and can be easily deployed into legacy software systems. &lt;br /&gt;
&lt;br /&gt;
Some of the advantages of &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039; are:&lt;br /&gt;
* Tests are performed with fully functional software builds -- there are no code isolation challenges&lt;br /&gt;
* The technique is easy to grasp and is applicable to a large percentage of code bases&lt;br /&gt;
* Can execute with functional and black-box testing&lt;br /&gt;
* It&#039;s easy to add [[Test Point | Test Points]] to provide coverage&lt;br /&gt;
* Others can easily participate in test implementation&lt;br /&gt;
* Tests can be written using [[Perl_Script_APIs#STRIDE::Test | scripting on the host]] and [[Expectation_Tests_in_C/C%2B%2B | native target code]] &lt;br /&gt;
* Validation is fully automated and thus does not rely on manual interpretation by domain experts&lt;br /&gt;
* Design knowledge is extracted and made available to other team members via [[Source Instrumentation Overview | instrumentation]]&lt;br /&gt;
* Very useful for continuous integration&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Types_of_Testing_Supported_by_STRIDE&amp;diff=12883</id>
		<title>Types of Testing Supported by STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Types_of_Testing_Supported_by_STRIDE&amp;diff=12883"/>
		<updated>2010-06-11T22:28:38Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* API Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
One of the questions that should be asked is &#039;&#039;what is the &#039;&#039;&#039;value&#039;&#039;&#039; of the test?&#039;&#039; If the test does not discover any defects or does not provide ongoing regression, then the value is questionable. Also &#039;&#039;what is the &#039;&#039;&#039;effort&#039;&#039;&#039; in implementing the test?&#039;&#039; STRIDE has been uniquely designed to support maximizing the &#039;&#039; &#039;&#039;&#039;value&#039;&#039;&#039; &#039;&#039; of the test while minimizing the &#039;&#039; &#039;&#039;&#039;effort&#039;&#039;&#039; &#039;&#039; to implement it.&lt;br /&gt;
&lt;br /&gt;
The STRIDE test system supports three general types of testing:&lt;br /&gt;
* &#039;&#039;&#039;Unit Testing&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;API Testing&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Unit Testing ==&lt;br /&gt;
[[Test Units Overview | STRIDE Unit Testing]] supports the model found in typical [http://en.wikipedia.org/wiki/XUnit xUnit-style] testing frameworks. STRIDE also offers a number of [[Test API | additional features]] optimized for testing embedded software. Traditional &#039;&#039;&#039;Unit Testing&#039;&#039;&#039; presents a number of challenges in testing embedded software:&lt;br /&gt;
&lt;br /&gt;
* Testing functions/classes in &#039;&#039;&#039;isolation&#039;&#039;&#039; requires a lot of extra work, especially if your software was not designed upfront for testability&lt;br /&gt;
* Testing legacy software might have limited value, particularly if the software is stable with respect to defects&lt;br /&gt;
* The software is often not well suited for &#039;&#039;&#039;others&#039;&#039;&#039; to participate in the test implementation, since there is too much internal knowledge required to be productive&lt;br /&gt;
* It can be difficult to automate execution of the full set of tests on the real target device&lt;br /&gt;
&lt;br /&gt;
== API Testing ==&lt;br /&gt;
STRIDE supports &#039;&#039;&#039;API Testing&#039;&#039;&#039; by leveraging the same [[Test Units Overview | techniques]] available for Unit Testing. API Testing differs from unit testing in that the tests focus on direct testing of a well-defined interface. This kind of testing is typically easy to scope, depending on the size of the interface, and often has a better &#039;&#039;return-on-effort&#039;&#039;. The design of &#039;&#039;public interfaces&#039;&#039; often lends itself to testing in isolation &#039;&#039;without&#039;&#039; implementing special test logic (i.e. no stubbing required), which makes the test implementation simpler. What&#039;s more, public APIs are most likely documented. As a result, &#039;&#039;non-domain experts&#039;&#039; can more easily participate in the test implementation. Although API Testing often represents a smaller percentage of the software being exercised, the &#039;&#039;&#039;value&#039;&#039;&#039; and &#039;&#039;&#039;effort&#039;&#039;&#039; required are well understood.&lt;br /&gt;
&lt;br /&gt;
== Behavior Testing ==&lt;br /&gt;
STRIDE supports &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039; which is different than Unit Testing or API Testing in that it does not focus on calling functions and validating return values. Behavior Testing, rather, validates the &#039;&#039;expected sequencing of the software&#039;&#039; executing under normal operating conditions. To learn more about the uniqueness of Behavior Testing click [[What_is_Unique_About_STRIDE#STRIDE_includes_behavior-based_testing_techniques | &#039;&#039;&#039;here&#039;&#039;&#039;]]. We believe that Behavior Testing has a very high &#039;&#039;return-on-effort&#039;&#039; and can be easily deployed into legacy software systems. &lt;br /&gt;
&lt;br /&gt;
Some of the advantages of &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039;:&lt;br /&gt;
* Test with fully functional software builds (no isolation issues)&lt;br /&gt;
* Applies to a large percentage of the software&lt;br /&gt;
* Can execute with functional and black-box testing&lt;br /&gt;
* Easy to add [[Test Point | Test Points]] to provide coverage&lt;br /&gt;
* Others can easily participate in test implementation&lt;br /&gt;
* Tests can be written using [[Perl_Script_APIs#STRIDE::Test | scripting on the host]] and [[Expectation_Tests_in_C/C%2B%2B | native target code]] &lt;br /&gt;
* Validation is fully automated (not done manually by tribal experts)&lt;br /&gt;
* Design knowledge is extracted and made available to other team members via [[Source Instrumentation Overview | instrumentation]]&lt;br /&gt;
* Very useful for continuous integration&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Types_of_Testing_Supported_by_STRIDE&amp;diff=12882</id>
		<title>Types of Testing Supported by STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Types_of_Testing_Supported_by_STRIDE&amp;diff=12882"/>
		<updated>2010-06-11T22:17:46Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* Unit Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
One of the questions that should be asked is &#039;&#039;what is the &#039;&#039;&#039;value&#039;&#039;&#039; of the test?&#039;&#039; If the test does not discover any defects or does not provide ongoing regression, then the value is questionable. Also &#039;&#039;what is the &#039;&#039;&#039;effort&#039;&#039;&#039; in implementing the test?&#039;&#039; STRIDE has been uniquely designed to support maximizing the &#039;&#039; &#039;&#039;&#039;value&#039;&#039;&#039; &#039;&#039; of the test while minimizing the &#039;&#039; &#039;&#039;&#039;effort&#039;&#039;&#039; &#039;&#039; to implement it.&lt;br /&gt;
&lt;br /&gt;
The STRIDE test system supports three general types of testing:&lt;br /&gt;
* &#039;&#039;&#039;Unit Testing&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;API Testing&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Unit Testing ==&lt;br /&gt;
[[Test Units Overview | STRIDE Unit Testing]] supports the model found in typical [http://en.wikipedia.org/wiki/XUnit xUnit-style] testing frameworks. STRIDE also offers a number of [[Test API | additional features]] optimized for testing embedded software. Traditional &#039;&#039;&#039;Unit Testing&#039;&#039;&#039; presents a number of challenges in testing embedded software:&lt;br /&gt;
&lt;br /&gt;
* Testing functions/classes in &#039;&#039;&#039;isolation&#039;&#039;&#039; requires a lot of extra work, especially if your software was not designed upfront for testability&lt;br /&gt;
* Testing legacy software might have limited value, particularly if the software is stable with respect to defects&lt;br /&gt;
* The software is often not well suited for &#039;&#039;&#039;others&#039;&#039;&#039; to participate in the test implementation, since there is too much internal knowledge required to be productive&lt;br /&gt;
* It can be difficult to automate execution of the full set of tests on the real target device&lt;br /&gt;
&lt;br /&gt;
== API Testing ==&lt;br /&gt;
STRIDE supports &#039;&#039;&#039;API Testing&#039;&#039;&#039; leveraging the same [[Test Units Overview | techniques]] available for Unit Testing. The difference concerns the focus of API Testing, which often has a better &#039;&#039;return-on-effort&#039;&#039;. API Testing traditionally is about driving well-defined public interfaces. The design of &#039;&#039;public interfaces&#039;&#039; typically lends itself to testing in isolation &#039;&#039;without&#039;&#039; implementing special test logic (i.e. no stubbing required). Also public APIs are most likely documented. As a result, &#039;&#039;others&#039;&#039; can more easily participate in the test implementation. Although API Testing often represents a smaller percentage of the software being exercised, the &#039;&#039;&#039;value&#039;&#039;&#039; and &#039;&#039;&#039;effort&#039;&#039;&#039; required is well understood.&lt;br /&gt;
&lt;br /&gt;
== Behavior Testing ==&lt;br /&gt;
STRIDE supports &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039; which is different than Unit Testing or API Testing in that it does not focus on calling functions and validating return values. Behavior Testing, rather, validates the &#039;&#039;expected sequencing of the software&#039;&#039; executing under normal operating conditions. To learn more about the uniqueness of Behavior Testing click [[What_is_Unique_About_STRIDE#STRIDE_includes_behavior-based_testing_techniques | &#039;&#039;&#039;here&#039;&#039;&#039;]]. We believe that Behavior Testing has a very high &#039;&#039;return-on-effort&#039;&#039; and can be easily deployed into legacy software systems. &lt;br /&gt;
&lt;br /&gt;
Some of the advantages of &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039;:&lt;br /&gt;
* Test with fully functional software builds (no isolation issues)&lt;br /&gt;
* Applies to a large percentage of the software&lt;br /&gt;
* Can execute with functional and black-box testing&lt;br /&gt;
* Easy to add [[Test Point | Test Points]] to provide coverage&lt;br /&gt;
* Others can easily participate in test implementation&lt;br /&gt;
* Tests can be written using [[Perl_Script_APIs#STRIDE::Test | scripting on the host]] and [[Expectation_Tests_in_C/C%2B%2B | native target code]] &lt;br /&gt;
* Validation is fully automated (not done manually by tribal experts)&lt;br /&gt;
* Design knowledge is extracted and made available to other team members via [[Source Instrumentation Overview | instrumentation]]&lt;br /&gt;
* Very useful for continuous integration&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Types_of_Testing_Supported_by_STRIDE&amp;diff=12881</id>
		<title>Types of Testing Supported by STRIDE</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Types_of_Testing_Supported_by_STRIDE&amp;diff=12881"/>
		<updated>2010-06-11T22:11:51Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
One of the questions that should be asked is &#039;&#039;what is the &#039;&#039;&#039;value&#039;&#039;&#039; of the test?&#039;&#039; If the test does not discover any defects or does not provide ongoing regression, then the value is questionable. Also &#039;&#039;what is the &#039;&#039;&#039;effort&#039;&#039;&#039; in implementing the test?&#039;&#039; STRIDE has been uniquely designed to support maximizing the &#039;&#039; &#039;&#039;&#039;value&#039;&#039;&#039; &#039;&#039; of the test while minimizing the &#039;&#039; &#039;&#039;&#039;effort&#039;&#039;&#039; &#039;&#039; to implement it.&lt;br /&gt;
&lt;br /&gt;
The STRIDE test system supports three general types of testing:&lt;br /&gt;
* &#039;&#039;&#039;Unit Testing&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;API Testing&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Unit Testing ==&lt;br /&gt;
[[Test Units Overview | STRIDE Unit Testing]] supports the general model found in typical [http://en.wikipedia.org/wiki/XUnit xUnit-style] testing frameworks. STRIDE also offers a number of [[Test API | additional features]] optimized for testing embedded software. But traditional &#039;&#039;&#039;Unit Testing&#039;&#039;&#039; does present a number of challenges for testing embedded software:&lt;br /&gt;
&lt;br /&gt;
* Testing functions/classes in &#039;&#039;&#039;isolation&#039;&#039;&#039; requires a lot of extra work, especially if not designed upfront&lt;br /&gt;
* Testing legacy software might have very limited value (i.e. can&#039;t find any new defects)&lt;br /&gt;
* Typically not well suited for &#039;&#039;&#039;others&#039;&#039;&#039; to participate in the test implementation (too much internal knowledge required)&lt;br /&gt;
* Can be difficult to automate execution of the full set of tests on the real target device&lt;br /&gt;
&lt;br /&gt;
== API Testing ==&lt;br /&gt;
STRIDE supports &#039;&#039;&#039;API Testing&#039;&#039;&#039; leveraging the same [[Test Units Overview | techniques]] available for Unit Testing. The difference concerns the focus of API Testing, which often has a better &#039;&#039;return-on-effort&#039;&#039;. API Testing traditionally is about driving well-defined public interfaces. The design of &#039;&#039;public interfaces&#039;&#039; typically lends itself to testing in isolation &#039;&#039;without&#039;&#039; implementing special test logic (i.e. no stubbing required). Also public APIs are most likely documented. As a result, &#039;&#039;others&#039;&#039; can more easily participate in the test implementation. Although API Testing often represents a smaller percentage of the software being exercised, the &#039;&#039;&#039;value&#039;&#039;&#039; and &#039;&#039;&#039;effort&#039;&#039;&#039; required is well understood.&lt;br /&gt;
&lt;br /&gt;
== Behavior Testing ==&lt;br /&gt;
STRIDE supports &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039; which is different than Unit Testing or API Testing in that it does not focus on calling functions and validating return values. Behavior Testing, rather, validates the &#039;&#039;expected sequencing of the software&#039;&#039; executing under normal operating conditions. To learn more about the uniqueness of Behavior Testing click [[What_is_Unique_About_STRIDE#STRIDE_includes_behavior-based_testing_techniques | &#039;&#039;&#039;here&#039;&#039;&#039;]]. We believe that Behavior Testing has a very high &#039;&#039;return-on-effort&#039;&#039; and can be easily deployed into legacy software systems. &lt;br /&gt;
&lt;br /&gt;
Some of the advantages of &#039;&#039;&#039;Behavior Testing&#039;&#039;&#039;:&lt;br /&gt;
* Test with fully functional software builds (no isolation issues)&lt;br /&gt;
* Applies to a large percentage of the software&lt;br /&gt;
* Can execute with functional and black-box testing&lt;br /&gt;
* Easy to add [[Test Point | Test Points]] to provide coverage&lt;br /&gt;
* Others can easily participate in test implementation&lt;br /&gt;
* Tests can be written using [[Perl_Script_APIs#STRIDE::Test | scripting on the host]] and [[Expectation_Tests_in_C/C%2B%2B | native target code]] &lt;br /&gt;
* Validation is fully automated (not done manually by tribal experts)&lt;br /&gt;
* Design knowledge is extracted and made available to other team members via [[Source Instrumentation Overview | instrumentation]]&lt;br /&gt;
* Very useful for continuous integration&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Overview]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Perl_Script_Snippets&amp;diff=12871</id>
		<title>Perl Script Snippets</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Perl_Script_Snippets&amp;diff=12871"/>
		<updated>2010-06-11T16:33:53Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Below are some brief examples of script test modules that use the [[Perl Script APIs]] for STRIDE. Most of these examples are incomplete and are only intended to give a quick overview of how this API can be used. &lt;br /&gt;
&lt;br /&gt;
With any perl module that you write, we recommend that you perform a quick syntax check before attempting to execute the module using the STRIDE runner. Syntax checking with warnings is easily accomplished by running with the &amp;lt;tt&amp;gt;-cw&amp;lt;/tt&amp;gt; option, e.g.:&lt;br /&gt;
&lt;br /&gt;
  perl -cw MyTests.pm&lt;br /&gt;
&lt;br /&gt;
This will compile the perl script without running it, and it will emit any syntax errors that are found.&lt;br /&gt;
&lt;br /&gt;
== Canonical module format ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
use strict;&lt;br /&gt;
use warnings;&lt;br /&gt;
&lt;br /&gt;
package MyTests;&lt;br /&gt;
use base qw(STRIDE::Test);&lt;br /&gt;
use STRIDE::Test;&lt;br /&gt;
&lt;br /&gt;
sub test_one : Test&lt;br /&gt;
{&lt;br /&gt;
    ASSERT_TRUE(1);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub test_two : Test&lt;br /&gt;
{&lt;br /&gt;
    ASSERT_TRUE(0);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
1;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Subroutine attributes to declare test methods and fixtures ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sub a_test : Test {&lt;br /&gt;
    # test method&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub startup_method : Test(startup) {&lt;br /&gt;
    # startup fixturing&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub shutdown_method : Test(shutdown) {&lt;br /&gt;
    # shutdown fixturing&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub setup_method : Test(setup) {&lt;br /&gt;
    # setup fixturing&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub teardown_method : Test(teardown) {&lt;br /&gt;
    # teardown fixture&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Test module including POD documentation ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
package MyTests;&lt;br /&gt;
use base qw(STRIDE::Test);&lt;br /&gt;
use STRIDE::Test;&lt;br /&gt;
&lt;br /&gt;
=head1 NAME&lt;br /&gt;
&lt;br /&gt;
MyTests - example tests&lt;br /&gt;
&lt;br /&gt;
=head1 DESCRIPTION&lt;br /&gt;
&lt;br /&gt;
This is MyTests, a deeply funky piece of Perl code.&lt;br /&gt;
&lt;br /&gt;
=head1 METHODS&lt;br /&gt;
&lt;br /&gt;
=cut&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=head2 test_one&lt;br /&gt;
&lt;br /&gt;
this is a simple passing test.&lt;br /&gt;
&lt;br /&gt;
=cut&lt;br /&gt;
sub test_one : Test&lt;br /&gt;
{&lt;br /&gt;
    ASSERT_TRUE(1);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=head2 test_two&lt;br /&gt;
&lt;br /&gt;
this is a simple failing test.&lt;br /&gt;
&lt;br /&gt;
=cut&lt;br /&gt;
sub test_two : Test&lt;br /&gt;
{&lt;br /&gt;
    ASSERT_TRUE(0);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
1;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Simple expectation test ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
sub sync_exact : Test {    &lt;br /&gt;
    my $h = TestPointSetup(&lt;br /&gt;
        ordered =&amp;gt; 1,&lt;br /&gt;
        expected =&amp;gt; [&lt;br /&gt;
            &amp;quot;point a&amp;quot;,&lt;br /&gt;
            &amp;quot;point b&amp;quot;,&lt;br /&gt;
            &amp;quot;point c&amp;quot;&lt;br /&gt;
        ],&lt;br /&gt;
        unexpected =&amp;gt; [ TEST_POINT_EVERYTHING_ELSE ]&lt;br /&gt;
    );&lt;br /&gt;
    &lt;br /&gt;
    #...start source under test, if necessary&lt;br /&gt;
&lt;br /&gt;
    # use Check if the events have all happened by the time &lt;br /&gt;
    # you validate the test points. Otherwise use Wait with &lt;br /&gt;
    # a reasonable timeout value.&lt;br /&gt;
    $h-&amp;gt;Check(); &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Expectation test with predicate==&lt;br /&gt;
&lt;br /&gt;
Here&#039;s how to specify a predicate for validating a test point. The predicate &lt;br /&gt;
shown is just a stub and doesn&#039;t do any actual validation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
sub _myPredicate&lt;br /&gt;
{&lt;br /&gt;
    my ($test_point, $expected_data) = @_;&lt;br /&gt;
    # $test_point is a hashref with the following fields:&lt;br /&gt;
    #  label, data, size, bin, file, line&lt;br /&gt;
    # use the test point data to perform validation and return &lt;br /&gt;
    # nonzero value on success, 0 on failure.&lt;br /&gt;
&lt;br /&gt;
    return 1;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub use_a_predicate : Test {&lt;br /&gt;
    # use arrayref notation to specify label, count, predicate and expected data&lt;br /&gt;
    # if you want to specify a predicate. Predicates can be specified for some, all&lt;br /&gt;
    # or none of the test points. The fourth field is optional - it is just passed&lt;br /&gt;
    # to the predicate as the second argument and is typically used to specify some &lt;br /&gt;
    # expected data for the predicate to validate against.    &lt;br /&gt;
    my $h = TestPointSetup(&lt;br /&gt;
        expected =&amp;gt; [&lt;br /&gt;
            [&amp;quot;point a&amp;quot;, 1, \&amp;amp;_myPredicate, 54321],&lt;br /&gt;
            &amp;quot;point b&amp;quot;,&lt;br /&gt;
            [&amp;quot;point c&amp;quot;, 1, \&amp;amp;_myPredicate]&lt;br /&gt;
        ]&lt;br /&gt;
    );&lt;br /&gt;
&lt;br /&gt;
    $h-&amp;gt;Check(); &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Reusing a trace data file as expectation ==&lt;br /&gt;
&lt;br /&gt;
This example shows a simple way to reuse captured trace data from a file as your expectation (this is an alternative to specifying the &amp;lt;tt&amp;gt;expected&amp;lt;/tt&amp;gt; array directly). This example assumes the &#039;&#039;trace_data.yaml&#039;&#039; file lives in the same directory as the test module file, but you are free to change this - just change the &amp;lt;tt&amp;gt;expect_file&amp;lt;/tt&amp;gt; argument accordingly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
# these standard modules are only required for the file &lt;br /&gt;
# path logic used to generate a full file path for trace_file&lt;br /&gt;
use File::Basename;&lt;br /&gt;
use File::Spec;&lt;br /&gt;
&lt;br /&gt;
sub trace_data : Test {    &lt;br /&gt;
    my $h = TestPointSetup(&lt;br /&gt;
        ordered =&amp;gt; 1,&lt;br /&gt;
        expect_file =&amp;gt; File::Spec-&amp;gt;catfile(dirname(__FILE__), &#039;trace_data.yaml&#039;),&lt;br /&gt;
    );&lt;br /&gt;
&lt;br /&gt;
    # make any function calls necessary to start the SUT, &lt;br /&gt;
    # then do a Check or Wait, depending on your scenario&lt;br /&gt;
&lt;br /&gt;
    $h-&amp;gt;Check();   &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Invoking a function on the target ==&lt;br /&gt;
&lt;br /&gt;
=== Standalone two-line script for invoking a remote function ===&lt;br /&gt;
This simple script shows the minimal code required to remotely call a function that has been enabled with [[Function Capturing#Remoting|STRIDE remoting]].&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
use STRIDE;&lt;br /&gt;
$STRIDE::Functions-&amp;gt;MyRemoteFunction();&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
More functions can be added to this script by making additional calls using the $STRIDE::Functions object. If your remote function accepts arguments, they should be added to the function call parameters (see the [[Perl_Script_APIs#STRIDE::Function|Perl API Reference]] for more information).&lt;br /&gt;
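For example, a call to a captured function that takes an integer and a string (the function name and parameter list here are hypothetical) would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
use STRIDE;&lt;br /&gt;
# MyRemoteFunctionWithArgs and its parameter list are hypothetical&lt;br /&gt;
my $result = $STRIDE::Functions-&amp;gt;MyRemoteFunctionWithArgs(42, &amp;quot;input string&amp;quot;);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;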
&lt;br /&gt;
=== Making calls with test modules ===&lt;br /&gt;
&lt;br /&gt;
Within test modules, remote functions can be easily invoked using the exported Functions object. The following two examples illustrate this.&lt;br /&gt;
&lt;br /&gt;
=== Blocking syntax (blocks until function returns) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
my $retval = Functions-&amp;gt;foo(1, &amp;quot;input string&amp;quot;);&lt;br /&gt;
Functions-&amp;gt;bar();&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Asynchronous execution (returns immediately, function continues to execute on device) ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
my $fh = Functions-&amp;gt;{async}-&amp;gt;foo(1, &amp;quot;input string&amp;quot;);&lt;br /&gt;
my $retval = $fh-&amp;gt;Wait(1000); #waits up to one second for the function to return&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Accessing compiler macro values (constants) ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
/* some compilation unit included in the STRIDE compilation process */&lt;br /&gt;
#define SOME_DEFINED_VALUE 42&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
my $value = Constants-&amp;gt;{SOME_DEFINED_VALUE};&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Tests in Script]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Tests_in_Script&amp;diff=12846</id>
		<title>Training Tests in Script</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Tests_in_Script&amp;diff=12846"/>
		<updated>2010-06-10T21:16:11Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* Can I use STRIDE test modules only for expectation testing ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
The STRIDE Framework allows you to write expectation tests that execute on the host while connected to a running device that has been instrumented with [[Test Point|STRIDE Test Points]]. Host-based expectation tests leverage the power of scripting languages (perl is currently supported, others are expected in the future) to quickly and easily write validation logic for the test points on your system. What&#039;s more, since the test logic is implemented and executed on the host, your device software does not have to be rebuilt when you want to create new tests or change existing ones.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding:&lt;br /&gt;
&lt;br /&gt;
* [[Test Modules Overview|Scripting Overview]]&lt;br /&gt;
* [[Perl Script APIs|perl Test Modules]]&lt;br /&gt;
&lt;br /&gt;
== What is an expectation test? ==&lt;br /&gt;
&lt;br /&gt;
An expectation test is a test that validates behavior by verifying the occurrence (or non-occurrence) of specific test points on your running device. The STRIDE Framework makes it easy to define expectation tests &#039;&#039;via a single setup API&#039;&#039; ([[Perl_Script_APIs#Methods|see TestPointSetup]]) . Once defined, expectation tests are executed by invoking a &#039;&#039;wait&#039;&#039; method that evaluates the test points on the device as they occur. The wait method typically blocks until the entire defined expectation set has been satisfied &#039;&#039;&#039;or&#039;&#039;&#039; until a timeout (optional) has been exceeded.&lt;br /&gt;
&lt;br /&gt;
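Putting the pieces of this article together, a minimal expectation test might look like the sketch below. The exact &#039;&#039;&#039;TestPointSetup&#039;&#039;&#039; argument names, the test point labels, and the timeout value are illustrative assumptions -- consult the [[Perl Script APIs]] article for the real signatures. The remote function call used to start the scenario is covered in the next section.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
# Sketch only -- the argument names and labels below are assumed for illustration.&lt;br /&gt;
my $tp = TestPointSetup(&lt;br /&gt;
    ordered  =&amp;gt; 1,                          # assumed flag: require the labels in this order&lt;br /&gt;
    expected =&amp;gt; [ &#039;START&#039;, &#039;IDLE&#039;, &#039;END&#039; ],  # assumed: labels of the expected test points&lt;br /&gt;
);&lt;br /&gt;
Functions-&amp;gt;Exp_DoStateChanges();             # start the target processing scenario&lt;br /&gt;
$tp-&amp;gt;Wait(5000);                             # block until satisfied or a 5 second timeout&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;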
== How do I start my target processing scenario ? ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s often necessary, as part of a test, to start the processing on the device that causes the test scenario to occur. Sometimes processing is invoked via external stimulus (e.g. sending a command to a serial port or sending a network message). Given the [http://search.cpan.org/ wealth of libraries] available for perl, it&#039;s likely that you&#039;ll be able to find modules to help you automate common communication protocols. &lt;br /&gt;
&lt;br /&gt;
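For example, if your scenario is started by a simple TCP command, a few lines using the core IO::Socket::INET module are enough. The host name, port, and command string below are placeholders for whatever control interface your device actually exposes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
use IO::Socket::INET;&lt;br /&gt;
&lt;br /&gt;
# Placeholder address, port, and command -- substitute your device&#039;s control interface.&lt;br /&gt;
my $sock = IO::Socket::INET-&amp;gt;new(&lt;br /&gt;
    PeerAddr =&amp;gt; &#039;device.local&#039;,&lt;br /&gt;
    PeerPort =&amp;gt; 5000,&lt;br /&gt;
    Proto    =&amp;gt; &#039;tcp&#039;,&lt;br /&gt;
) or die &amp;quot;connect failed: $!&amp;quot;;&lt;br /&gt;
print $sock &amp;quot;START_SCENARIO\n&amp;quot;;&lt;br /&gt;
close $sock;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;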
If, on the other hand, the processing can be invoked by direct code paths in your application, you can consider using [[Function_Capturing|function fixturing]] via STRIDE. STRIDE function remoting allows you to specify a set of functions on the device that are to be made available in the host scripting environment for remote execution. This can be a convenient way to expose device application hooks to your host-based tests.&lt;br /&gt;
&lt;br /&gt;
Whatever the approach, we strongly encourage test authors to find ways to minimize the amount of manual interaction required to execute expectation tests. Try to find ways to fully automate the interaction required to run your expectation scenarios. Fully automated tests are more likely to be run regularly and therefore provide a tighter feedback loop for your software quality.&lt;br /&gt;
&lt;br /&gt;
== Can I use STRIDE test modules for any other testing besides expectation validation? ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Yes&#039;&#039;&#039; - STRIDE test modules provide a language-specific way to harness test code. If you have other procedures that can be automated using perl code on the host, then you can certainly use STRIDE test modules to harness that code. In doing so, you will get the reporting conveniences that test modules provide (such as automatic POD documentation extraction and suite/test case generation) - as well as unified reporting with your other STRIDE test cases.&lt;br /&gt;
&lt;br /&gt;
== Sample: Expectations ==&lt;br /&gt;
&lt;br /&gt;
For this training, we will again be using the sample code provided in the [[Expectations Sample]]. This sample demonstrates a number of common expectation test patterns as implemented in perl.&lt;br /&gt;
&lt;br /&gt;
=== Sample Source: test_in_script/Expectations/s2_expectations_testmodules.pm ===&lt;br /&gt;
&lt;br /&gt;
To begin, let&#039;s briefly examine the perl test module that implements the test logic. Open the file in your favorite editor (preferably one that supports perl syntax highlighting) and examine the source code along with a description that is provided at the beginning of each test. Here are some things to observe:&lt;br /&gt;
&lt;br /&gt;
* The package name matches the file name -- &#039;&#039;this is required&#039;&#039;&lt;br /&gt;
* We have included documentation for the module and test cases using standard POD formatting codes. As long as you follow the rules described [[Perl_Script_APIs#Documentation|here]], the STRIDE framework will automatically extract the POD during execution and annotate the report accordingly.&lt;br /&gt;
* Most of the test sequences are initiated via a remote function in the target app (&amp;lt;tt&amp;gt;Exp_DoStateChanges()&amp;lt;/tt&amp;gt;). This function has been [[Function Capturing|captured]] using STRIDE and is therefore available for invocation using the Functions object in the test module. In one case, we also invoke the function asynchronously (see &amp;lt;tt&amp;gt;async_loose&amp;lt;/tt&amp;gt;). Functions are invoked synchronously by default.&lt;br /&gt;
* In the &amp;lt;tt&amp;gt;check_data&amp;lt;/tt&amp;gt; test, we validate integer values coming from the target that were passed as binary payloads to the test point. We use the perl [http://perldoc.perl.org/functions/pack.html pack] function to create a scalar value that matches the data expected from the target (target is the same as the host, in this case). If we were testing against a target with different integer characteristics (size, byte ordering), we would have to adjust our pack statement accordingly to produce a bit pattern that matched the target value(s). In many cases, this binary payload validation proves to be difficult to maintain and this is why &#039;&#039;we typically recommend using string data payloads&#039;&#039; on test points wherever possible. A short &amp;lt;tt&amp;gt;pack&amp;lt;/tt&amp;gt; sketch follows this list.&lt;br /&gt;
&lt;br /&gt;
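As a concrete (if simplified) illustration of the &amp;lt;tt&amp;gt;pack&amp;lt;/tt&amp;gt; approach: the counter value 3 on a little-endian 32-bit target serializes to the bytes &amp;lt;tt&amp;gt;03 00 00 00&amp;lt;/tt&amp;gt;, so the matching scalar can be built as shown below. The template letter is the part you would adjust for a target with different integer characteristics:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
# &#039;V&#039; = unsigned 32-bit little-endian; use &#039;N&#039; for a big-endian target.&lt;br /&gt;
my $expected = pack(&#039;V&#039;, 3);   # bytes: 03 00 00 00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;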
=== Build the  test app ===&lt;br /&gt;
&lt;br /&gt;
So that we can run the sample, let&#039;s now build an off-target test app that contains the source under test -- you can follow the generic steps [[Off-Target_Test_App#Copy_Sample_Source|described here]].  &#039;&#039;&#039;Note:&#039;&#039;&#039; you can copy all of the sample files into the &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt; directory -- although only the source will be compiled into the app, this will make it easier to run the module.&lt;br /&gt;
&lt;br /&gt;
=== Run the sample ===&lt;br /&gt;
&lt;br /&gt;
Now launch the test app (if you have not already) and execute the runner with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=../out/TestApp.sidb --run=s2_expectations_testmodule.pm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(this assumes you are running from the &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt; directory of your off-target SDK. If that&#039;s not the case, you need to change the path to the database and test module files accordingly.)&lt;br /&gt;
&lt;br /&gt;
If you&#039;d like to see how log messages will appear in reports, you can add &amp;lt;tt&amp;gt;--log_level=all&amp;lt;/tt&amp;gt; to this command.&lt;br /&gt;
&lt;br /&gt;
=== Examine the results ===&lt;br /&gt;
&lt;br /&gt;
The runner will produce a results (xml) file once the execution is complete. The file will (by default) have the same name as the database and be located in the current directory - so find the &amp;lt;tt&amp;gt;TestApp.xml&amp;lt;/tt&amp;gt; file and open it with your browser. You can then use the buttons to expand the test suites and test cases. Here are some things to observe about the results:&lt;br /&gt;
&lt;br /&gt;
* there is a single suite called &#039;&#039;&#039;s2_expectations_module&#039;&#039;&#039;. This matches the name given to the test module.&lt;br /&gt;
* the test module&#039;s suite has one annotation - it is a trace file containing all of the test points and logs that were reported to the host during the execution of the test module. This trace file can be useful if you want to get a sequential view of all test points that were encountered during the execution of the module.&lt;br /&gt;
* the test module suite contains 13 test cases -- each one corresponds to a single test case (test function) in the module. The description for each test case was automatically generated from the POD documentation included in the test module file.&lt;br /&gt;
* each test case has a list of several annotations. The first is a simple HTML view of the source code of the test itself. This can be useful for quickly inspecting the code that was used to run the test without having to go to your actual test module implementation file. The remaining annotations contain information about each test point that was hit during processing and any expectation failures or timeouts that were encountered.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Overview_(old)&amp;diff=12784</id>
		<title>Training Overview (old)</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Overview_(old)&amp;diff=12784"/>
		<updated>2010-06-07T21:06:25Z</updated>

		<summary type="html">&lt;p&gt;Mikee: moved ++WIP++ Training Overview to Training Overview&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Our training approach is based on articles (here in the wiki) and on a set of code samples that readily execute in our off-target (desktop) environment. Our training focuses on a self-guided tour of the product using the samples we provide as the primary study material. Please review the sections below before proceeding to the specific training topics.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
In order to build and execute the samples that we use in the training, please complete or verify the following prerequisites&lt;br /&gt;
&lt;br /&gt;
# complete the [[Desktop Installation]]. Be sure to include perl in the installation if you want to do training related to host-based test scripting. Verify that the diagnostics pass after installation. &lt;br /&gt;
# install the desktop development toolchain for your host, as described for [[Desktop_Installation#GCC|linux]] or [[Desktop_Installation#Microsoft_Visual_Studio|windows]].  This is required for building the STRIDE runtime and samples.&lt;br /&gt;
# read the [[STRIDE Overview]] article to familiarize yourself with the high-level approach and components of STRIDE.&lt;br /&gt;
&lt;br /&gt;
== How we train ==&lt;br /&gt;
&lt;br /&gt;
Our training articles are based on a handful of samples that we provide with the off-target framework distribution. The samples are usually self-documented (using doxygen or perldoc) and this content will be attached to the test report whenever a sample is executed. &lt;br /&gt;
&lt;br /&gt;
The samples were created to be as simple as possible while sufficiently demonstrating the topic at hand. In particular, the samples are &#039;&#039;&#039;very light&#039;&#039;&#039; on core application logic (that is, the source under test) -- they focus instead on the code that leverages STRIDE to define and execute tests. As you review the sample code, if you find yourself confused about which code is the test logic and which is the source under test, try reading the source file comments to discern this. If you are still unclear about how the source is organized, feel free to contact us for clarification.&lt;br /&gt;
&lt;br /&gt;
== What you need to do ==&lt;br /&gt;
&lt;br /&gt;
In order to get the full benefit from these training articles, we recommend you do the following:&lt;br /&gt;
&lt;br /&gt;
* follow the wiki links we provide in the training articles. These links provide rich technical information on the topics covered by the training. These are also articles you will likely refer to in the future when you are implementing your own tests.&lt;br /&gt;
* read/review all sample source code prior to running. The samples consist almost entirely of source code, so it makes sense to use a source code editor (one you are familiar with) for this purpose.&lt;br /&gt;
* build and execute the samples using the off-target framework. If you completed your installation as instructed above, it should be fully functional when you do the training.&lt;br /&gt;
* review the reports that are produced when you run the samples. The reports give you a feel for how data is reported in the STRIDE Framework. The extracted documentation is also provided in the report.&lt;br /&gt;
* For most samples, we provide some observations that help summarize aspects of the results that might be of interest to you. These observations are not necessarily comprehensive - in fact, we hope you&#039;ll discover other interesting features in the samples that we haven&#039;t mentioned.&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Overview_(old)&amp;diff=12783</id>
		<title>Training Overview (old)</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Overview_(old)&amp;diff=12783"/>
		<updated>2010-06-07T21:06:01Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Our training approach is based on articles (here in the wiki) and on a set of code samples that readily execute in our off-target (desktop) environment. Our training focuses on a self-guided tour of the product using the samples we provide as the primary study material. Please review the sections below before proceeding to the specific training topics.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
In order to build and execute the samples that we use in the training, please complete or verify the following prerequisites&lt;br /&gt;
&lt;br /&gt;
# complete the [[Desktop Installation]]. Be sure to include perl in the installation if you want to do training related to host-based test scripting. Verify that the diagnostics pass after installation. &lt;br /&gt;
# install the desktop development toolchain for your host, as described for [[Desktop_Installation#GCC|linux]] or [[Desktop_Installation#Microsoft_Visual_Studio|windows]].  This is required for building the STRIDE runtime and samples.&lt;br /&gt;
# read the [[STRIDE Overview]] article to familiarize yourself with the high-level approach and components of STRIDE.&lt;br /&gt;
&lt;br /&gt;
== How we train ==&lt;br /&gt;
&lt;br /&gt;
Our training articles are based on a handful of samples that we provide with the off-target framework distribution. The samples are usually self-documented (using doxygen or perldoc) and this content will be attached to the test report whenever a sample is executed. &lt;br /&gt;
&lt;br /&gt;
The samples were created to be as simple as possible while sufficiently demonstrating the topic at hand. In particular, the samples are &#039;&#039;&#039;very light&#039;&#039;&#039; on core application logic (that is, the source under test) -- they focus instead on the code that leverages STRIDE to define and execute tests. As you review the sample code, if you find yourself confused about which code is the test logic and which is the source under test, try reading the source file comments to discern this. If you are still unclear about how the source is organized, feel free to contact us for clarification.&lt;br /&gt;
&lt;br /&gt;
== What you need to do ==&lt;br /&gt;
&lt;br /&gt;
In order to get the full benefit from these training articles, we recommend you do the following:&lt;br /&gt;
&lt;br /&gt;
* follow the wiki links we provide in the training articles. These links provide rich technical information on the topics covered by the training. These are also articles you will likely refer to in the future when you are implementing your own tests.&lt;br /&gt;
* read/review all sample source code prior to running. The samples consist almost entirely of source code, so it makes sense to use a source code editor (one you are familiar with) for this purpose.&lt;br /&gt;
* build and execute the samples using the off-target framework. If you completed your installation as instructed above, it should be fully functional when you do the training.&lt;br /&gt;
* review the reports that are produced when you run the samples. The reports give you a feel for how data is reported in the STRIDE Framework. The extracted documentation is also provided in the report.&lt;br /&gt;
* For most samples, we provide some observations that help summarize aspects of the results that might be of interest to you. These observations are not necessarily comprehensive - in fact, we hope you&#039;ll discover other interesting features in the samples that we haven&#039;t mentioned.&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Overview_(old)&amp;diff=12782</id>
		<title>Training Overview (old)</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Overview_(old)&amp;diff=12782"/>
		<updated>2010-06-07T20:47:46Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Our training approach is based on articles (here in the wiki) and on a set of code samples that readily execute in our off-target (desktop) environment. Our training focuses on a self-guided tour of the product using the samples we provide as the primary study material. Please review the sections below before proceeding to the specific training topics.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
In order to build and execute the samples that we use in the training, please complete or verify the following prerequisites:&lt;br /&gt;
&lt;br /&gt;
# complete the [[Desktop Installation]]. Be sure to include perl in the installation if you want to do training related to host-based test scripting and verify the diagnostics. &lt;br /&gt;
# install the development toolchain for your host, as described for [[Desktop_Installation#GCC|linux]] or [[Desktop_Installation#Microsoft_Visual_Studio|windows]].  This is required for building the STRIDE runtime and samples.&lt;br /&gt;
# read the [[STRIDE Overview]] article to familiarize yourself with the high-level approach and components of STRIDE.&lt;br /&gt;
&lt;br /&gt;
== What we train on ==&lt;br /&gt;
&lt;br /&gt;
Our training articles are based on a handful of samples that we provide with the off-target framework distribution. The samples are usually self-documented using doxygen, the contents of which will be attached to the test report whenever a sample is executed. &lt;br /&gt;
&lt;br /&gt;
The samples were created to be as simple as possible while still adequately demonstrating the topic at hand. In particular, the samples are &#039;&#039;&#039;very light&#039;&#039;&#039; on core application logic (that is, source under test) -- they focus instead on the code that leverages STRIDE to define and execute tests. As you review the sample code, if you find yourself confused about which code is the test logic and which is the source under test, try reading the source file comments to discern them. If you are still unclear about how the source is organized, feel free to contact us for clarification.&lt;br /&gt;
&lt;br /&gt;
== What you need to do ==&lt;br /&gt;
&lt;br /&gt;
In order to get the full benefit from these training articles, we recommend you do the following:&lt;br /&gt;
&lt;br /&gt;
* follow the wiki links we provide in the training articles. These links provide rich technical information on the topics covered by the training. These are also articles you will likely refer to in the future when you are implementing your own tests.&lt;br /&gt;
* read/review all sample source code prior to running. The samples consist almost entirely of source code, so it makes sense to use a source code editor (one you are familiar with) for this purpose.&lt;br /&gt;
* build and execute the samples using the off-target framework. If you completed your installation as instructed above, it should be fully functional when you do the training.&lt;br /&gt;
* review the reports that are produced when you run the samples. The reports give you a feel for how data is reported in the STRIDE Framework. The extracted documentation is also provided in the report.&lt;br /&gt;
* For most samples, we provide some observations that help summarize aspects of the results that might be of interest to you. These observations are not necessarily comprehensive - in fact, we hope you&#039;ll discover other interesting features in the samples that we haven&#039;t mentioned.&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Overview_(old)&amp;diff=12726</id>
		<title>Training Overview (old)</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Overview_(old)&amp;diff=12726"/>
		<updated>2010-06-04T23:20:06Z</updated>

		<summary type="html">&lt;p&gt;Mikee: Created page with &amp;#039; Our training approach is based on articles (here in the wiki) and on a set of code samples that readily execute in our off-target (desktop) environment. Our training focuses on …&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Our training approach is based on articles (here in the wiki) and on a set of code samples that readily execute in our off-target (desktop) environment. Our training focuses on a self-guided tour of the product using the samples we provide as the primary study material. Please review the sections below before proceeding to the specific training topics.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
In order to build and execute the samples that we use in the training, please complete or verify the following prerequisites:&lt;br /&gt;
&lt;br /&gt;
# complete the [[Desktop Installation]]. Be sure to include perl in the installation if you want to do training related to host-based test scripting and verify the diagnostics. &lt;br /&gt;
# install the development toolchain for your host, as described for [[Desktop_Installation#GCC|linux]] or [[Desktop_Installation#Microsoft_Visual_Studio|windows]].  This is required for building the STRIDE runtime and samples.&lt;br /&gt;
# read the [[STRIDE Overview]] article to familiarize yourself with the high-level approach and components of STRIDE.&lt;br /&gt;
&lt;br /&gt;
== What we train on ==&lt;br /&gt;
&lt;br /&gt;
Our training articles are based on a handful of samples that we provide with the off-target framework. The samples are usually self-documented using doxygen and this content will be attached to the test report whenever a sample is executed. What&#039;s more, these samples were created to be as simple as possible while still adequately demonstrating the topic at hand. In particular, the samples are very light on core application logic and tend to focus instead on the code that leverages STRIDE to define and execute tests. The actual code under test in these samples is very minimal so as not to distract from the core test or instrumentation logic.&lt;br /&gt;
&lt;br /&gt;
== What you need to do ==&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
Need to review test reports&lt;br /&gt;
&lt;br /&gt;
Need to review script / test code&lt;br /&gt;
&lt;br /&gt;
Reminder that the wiki articles provide the technical content  required to really understand the features&lt;br /&gt;
&lt;br /&gt;
Need to review the “Examine the Results” sections that provide a  summary of techniques / approaches&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Running_Tests&amp;diff=12717</id>
		<title>Training Running Tests</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Running_Tests&amp;diff=12717"/>
		<updated>2010-06-04T22:09:58Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
Device connection and STRIDE test execution are handled by the [[STRIDE Runner]] (aka &#039;&#039;&amp;quot;the runner&amp;quot;&#039;&#039;). The runner is a command line tool with a number of options for listing runnable items, executing tests, tracing on test points, and uploading your results. The runner is designed for use in both ad-hoc execution of tests and fully automated CI execution. You&#039;ve already seen some basic test execution scenarios in the other training sections - now we will look explicitly at several of the most common use-cases for the runner.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding: &lt;br /&gt;
&lt;br /&gt;
* [[STRIDE Runner|STRIDE Runner (reference)]]&lt;br /&gt;
* [[Running Tests]]&lt;br /&gt;
* [[Organizing Tests into Suites]]&lt;br /&gt;
* [[Tracing]]&lt;br /&gt;
&lt;br /&gt;
== Build a test app ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s begin by building an off-target test app to use for these examples. The sources we want to include in this app are &amp;lt;tt&amp;gt;test_in_script/Expectations&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;test_in_c_cpp/TestClass&amp;lt;/tt&amp;gt;. Copy these source files to your &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt; directory and follow [[Off-Target_Test_App#Copy_Sample_Source|these instructions for building]].&lt;br /&gt;
&lt;br /&gt;
== Listing items ==&lt;br /&gt;
&lt;br /&gt;
Listing the contents of the database is an easy way to see what test units and fixturing functions are available for execution. Open a command line and try the following&amp;lt;ref name=&amp;quot;database&amp;quot;&amp;gt;These examples assume you are executing the runner from the &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt; directory of your off-target framework. If that&#039;s not the case, you will need to adjust the database path accordingly.&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --database=&amp;quot;../out/TestApp.sidb&amp;quot; --list&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should see output something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Functions&lt;br /&gt;
  Exp_DoStateChanges()&lt;br /&gt;
Test Units&lt;br /&gt;
  s2_testclass::Basic::Exceptions()&lt;br /&gt;
  s2_testclass::Basic::Fixtures()&lt;br /&gt;
  s2_testclass::Basic::Parameterized(char const * szString, unsigned int uExpectedLen)&lt;br /&gt;
  s2_testclass::Basic::Simple()&lt;br /&gt;
  s2_testclass::RuntimeServices::Dynamic()&lt;br /&gt;
  s2_testclass::RuntimeServices::Override()&lt;br /&gt;
  s2_testclass::RuntimeServices::Simple()&lt;br /&gt;
  s2_testclass::RuntimeServices::VarComment()&lt;br /&gt;
  s2_testclass::srTest::Dynamic()&lt;br /&gt;
  s2_testclass::srTest::Simple()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few things to notice: &lt;br /&gt;
* The Functions (if any) are listed before the Test Units.&lt;br /&gt;
* Function and Test Unit arguments (input params) will be shown, if any. The parameter types are shown for each.&lt;br /&gt;
&lt;br /&gt;
== Tracing on test points ==&lt;br /&gt;
&lt;br /&gt;
Tracing using the runner will show any STRIDE Test Points that are generated on the device during the window of time that the runner is connected. If you have test points that are continuously being emitted (for instance, in some background thread), then you can just connect to the device with tracing enabled to see them (you&#039;ll need to specify a &amp;lt;tt&amp;gt;--trace_timeout&amp;lt;/tt&amp;gt; parameter to tell the runner how long to trace for). If your test points require some fixturing to be hit, then you&#039;ll need to specify a script to execute that makes the necessary fixture calls. This is precisely what we did in our previous [[Training_Instrumentation#Build, Run, and Trace|Instrumentation training]]. If you recall from that training, we did the following&amp;lt;ref name=&amp;quot;database&amp;quot;/&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=do_state_changes.pl --trace &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should see output resembling this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  script &amp;quot;C:\s2\seaside\SDK\Windows\src\do_state_changes.pl&amp;quot;&lt;br /&gt;
1032564500 POINT &amp;quot;SET_NEW_STATE&amp;quot; - START [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032564501 POINT &amp;quot;START&amp;quot; [../sample_src/s2_expectations_source.c:63]&lt;br /&gt;
1032574600 POINT &amp;quot;SET_NEW_STATE&amp;quot; - IDLE [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032574601 POINT &amp;quot;IDLE&amp;quot; - 02 00 00 00 [../sample_src/s2_expectations_source.c:78]&lt;br /&gt;
1032584600 POINT &amp;quot;SET_NEW_STATE&amp;quot; - ACTIVE [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032584601 POINT &amp;quot;ACTIVE&amp;quot; - 03 00 00 00 [../sample_src/s2_expectations_source.c:101]&lt;br /&gt;
1032594700 POINT &amp;quot;ACTIVE Previous State&amp;quot; - IDLE [../sample_src/s2_expectations_source.c:103]&lt;br /&gt;
1032594701 POINT &amp;quot;JSON_DATA&amp;quot; - {&amp;quot;string_field&amp;quot;: &amp;quot;a-string-value&amp;quot;, &amp;quot;int_field&amp;quot;: 42, &amp;quot;bool_field&amp;quot;: true, &amp;quot;hex_field&amp;quot;: &amp;quot;0xDEADBEEF&amp;quot;} [../sample_src/s2_expectations_source.c:105]&lt;br /&gt;
1032604800 POINT &amp;quot;SET_NEW_STATE&amp;quot; - IDLE [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032604801 POINT &amp;quot;IDLE&amp;quot; - 04 00 00 00 [../sample_src/s2_expectations_source.c:78]&lt;br /&gt;
1032614800 POINT &amp;quot;SET_NEW_STATE&amp;quot; - END [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032614801 POINT &amp;quot;END&amp;quot; - 05 00 00 00 [../sample_src/s2_expectations_source.c:117]&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  ---------------------------------------------------------------------&lt;br /&gt;
  Summary: 0 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
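The fixture script itself is not reproduced in this article; conceptually, it only needs to invoke the remote function that drives the state machine. A minimal sketch of what a script like &amp;lt;tt&amp;gt;do_state_changes.pl&amp;lt;/tt&amp;gt; might contain is shown below, assuming plain scripts run via &amp;lt;tt&amp;gt;--run&amp;lt;/tt&amp;gt; have access to the same exported Functions object that test modules do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
# Sketch only -- drive the state machine so its test points are emitted while tracing.&lt;br /&gt;
Functions-&amp;gt;Exp_DoStateChanges();&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;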
Now, let&#039;s trace again, but include a filter expression for the test points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride  --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot;  --run=do_state_changes.pl --trace=&amp;quot;ACTIVE.*&amp;quot; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and now you should see fewer trace points emitted:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  script &amp;quot;C:\s2\seaside\SDK\Windows\src\do_state_changes.pl&amp;quot;&lt;br /&gt;
1047379801 POINT &amp;quot;ACTIVE&amp;quot; - 03 00 00 00 [../sample_src/s2_expectations_source.c:101]&lt;br /&gt;
1047389800 POINT &amp;quot;ACTIVE Previous State&amp;quot; - IDLE [../sample_src/s2_expectations_source.c:103]&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  ---------------------------------------------------------------------&lt;br /&gt;
  Summary: 0 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;--trace&amp;lt;/tt&amp;gt; argument accepts a filter expression which takes the form of a [http://en.wikipedia.org/wiki/Regular_expression regular expression] that is applied to the test point label. In this case, we&#039;ve specified a filter that permits any test points that begin with &#039;&#039;ACTIVE&#039;&#039;. Filtering gives you a convenient way to quickly inspect specific behavioral aspects of your instrumented software.&lt;br /&gt;
&lt;br /&gt;
== Organizing with suites ==&lt;br /&gt;
&lt;br /&gt;
Now let&#039;s briefly describe how you can use the runner to organize subsets of test units into suites.  First, let&#039;s run our current set of test units without any explicit suite hierarchy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot;  --run=*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you examine the results file, you will see that this creates the default flat hierarchy with each test unit&#039;s corresponding suite at the root level of the report.&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s try grouping our tests into suites:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;/BasicTests{s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures; s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple}&amp;quot; --run=&amp;quot;/srTest{s2_testclass::srTest::Dynamic; s2_testclass::srTest::Simple}&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now when you view the results, you will see two top-level suites - &#039;&#039;&#039;BasicTests&#039;&#039;&#039; and &#039;&#039;&#039;srTest&#039;&#039;&#039; - and within each of those are the suites for the test units that we assigned to it.&lt;br /&gt;
&lt;br /&gt;
If you plan to use this functionality to organize your tests into sub-suites, we recommend that you create options files to specify test unit groupings. This makes it easier to update and manage the suite hierarchy for your tests.&lt;br /&gt;
&lt;br /&gt;
== Using options files ==&lt;br /&gt;
&lt;br /&gt;
The runner also supports options files which allow you to place commonly used or lengthy command line options in a file. Let&#039;s run the same example as above (using suites for organization) - however, this time we&#039;ll put the &amp;lt;tt&amp;gt;--run&amp;lt;/tt&amp;gt; arguments into a file.&lt;br /&gt;
&lt;br /&gt;
First, let&#039;s create a text file with the &amp;lt;tt&amp;gt;--run&amp;lt;/tt&amp;gt; arguments in it - call it &#039;&#039;run_suites.opt&#039;&#039; and add these lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--run=/BasicTests{s2_testclass::Basic::Exceptions;s2_testclass::Basic::Fixtures; s2_testclass::Basic::Parameterized;s2_testclass::Basic::Simple}&lt;br /&gt;
--run=/srTest{s2_testclass::srTest::Dynamic;s2_testclass::srTest::Simple}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When creating an options file, you can use newlines to separate individual options for easier maintenance and readability. Also, you&#039;ll notice we&#039;ve omitted the quotation marks around the strings we pass to each option, since they are generally not needed in an options file (on the command line they are needed to keep the shell from splitting or misinterpreting the arguments).&lt;br /&gt;
&lt;br /&gt;
Once you have created the options file, you can use it with the runner like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride  --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --options_file=run_suites.opt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you execute this, you should see the same results as in the previous section.&lt;br /&gt;
&lt;br /&gt;
Options files are a convenient way to persist and reuse common runner command line settings and to organize the way in which your individual test units are run. Once many groups are adding tests to your system, options files provide a manageable way to group subsets of tests during execution. Options files also lend themselves to persisting and reusing your test space upload settings (if you are uploading), which typically do not change often.&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Tests_in_C/C%2B%2B&amp;diff=12714</id>
		<title>Training Tests in C/C++</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Tests_in_C/C%2B%2B&amp;diff=12714"/>
		<updated>2010-06-04T21:10:46Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
The STRIDE Framework provides support for implementation of tests in the native C/C++ of the device under test. Once written, these tests are compiled using the device toolchain and are harnessed, via the [[Intercept Module|STRIDE Intercept Module]], into one or more applications under test on the device. These tests have the unique advantage of executing in real-time on the device itself, allowing the tests to operate under actual device conditions during test.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding: &lt;br /&gt;
&lt;br /&gt;
* [[Test Units Overview|Overview]]&lt;br /&gt;
* [[Test Units]]&lt;br /&gt;
* [[Test Macros]]&lt;br /&gt;
* [[Test Point Testing in C/C++|Test Point Testing]]&lt;br /&gt;
* [[Test API | Test APIs]]&lt;br /&gt;
&lt;br /&gt;
== Why would I want to write tests in native code ? ==&lt;br /&gt;
&lt;br /&gt;
Here are some of the scenarios for which on-target test harnessing is particularly advantageous:&lt;br /&gt;
&lt;br /&gt;
* direct API testing. If you want to validate native APIs by driving the APIs directly, native code is the simplest way to do so. STRIDE provides convenient [[Test Macros|assertion macros]] to validate your variable states. API testing can also be combined with native test point tests (using [[Test Point]] instrumentation) to provide deeper validation of expected behavior of the units under test.&lt;br /&gt;
* unit testing of C objects or C++ classes. The STRIDE native tests execute in the same context as the rest of your code, so it&#039;s possible to fully unit test any objects that can be created in your actual application code.&lt;br /&gt;
* validation logic that requires sensitive timing thresholds. Sometimes  it&#039;s only possible to validate tight timing scenarios on-target.&lt;br /&gt;
* high-volume data processing scenarios. In some cases, the volume of data being processed and validated for a particular test scenario is too large to be easily handled by an off-target harness. In that case, native test units provide a convenient way to write tests that validate that data without shipping the data to the host during testing.&lt;br /&gt;
&lt;br /&gt;
What&#039;s more, you might simply &#039;&#039;prefer&#039;&#039; to write your test logic in C or C++ (as opposed to perl on the host). If that&#039;s the case, we don&#039;t discourage you from using a toolset that you&#039;re more comfortable with - particularly if it enables you to start writing tests without a new language learning curve.&lt;br /&gt;
&lt;br /&gt;
== Are there any disadvantages ? ==&lt;br /&gt;
&lt;br /&gt;
Sure. Most of the disadvantages of native on-target tests concern the device build process. If your device build is particularly slow (on the order of hours or days), then adding and running new tests can become a tedious waiting game. Testing is always well served by shorter build cycles, and on-target tests are particularly sensitive to this. &lt;br /&gt;
&lt;br /&gt;
In some cases, the additional code space requirements of native tests are a concern, but this is also mitigated by ever-increasing device storage capacities. On platforms that support multiple processes (e.g. embedded linux or WinMobile), it&#039;s possible to bundle tests into one or more separate test processes, which further mitigates the code space concern by isolating the test code in one or more separate applications.&lt;br /&gt;
&lt;br /&gt;
== Samples ==&lt;br /&gt;
&lt;br /&gt;
For this training, we will be using some of the samples provided in  the [[C/C++_Samples]]. For any sample that we don&#039;t cover here explicitly, feel free to explore the sample yourself. All of the samples can be easily built and executed using the STRIDE [[Off-Target Environment]]. &lt;br /&gt;
&lt;br /&gt;
The first three samples that we cover are introductions to the different test unit packaging mechanisms that we support in STRIDE. A good overview of the pros and cons for each type is presented [[Test_Units#Test_Units|here]]. The fourth sample demonstrates test point testing in native code on target (i.e. both the generation and &#039;&#039;validation&#039;&#039; of the test points are done on target). The last sample covers [[File_Transfer_Services|file transfer services]]. The STRIDE file transfer APIs enable reading/writing of files on the host from the device under test and can be useful for implementing data driven test scenarios (for instance, media file playback).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; each of the three packaging examples includes samples that cover &#039;&#039;basic&#039;&#039; usage and more advanced reporting techniques (&#039;&#039;runtimeservices&#039;&#039;). We recommend for this training that you focus on the &#039;&#039;basic&#039;&#039; samples as they cover the important packaging concepts. The &#039;&#039;runtimeservices&#039;&#039; examples are relevant only if the built-in reporting techniques are not sufficient for your reporting needs.&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestClass ===&lt;br /&gt;
&lt;br /&gt;
This sample shows the techniques available for packaging and writing test units using classes. If you have a C++ capable compiler, we recommend that you use test classes to package your unit tests, even if your APIs under test are C only. Review the source code in the directory and follow the sample description [[Test Class Samples|here]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* all of these example classes have been put into one or more namespaces. This is just for organization purposes, mainly to avoid name collisions when built along with lots of other test classes. Your test classes are &#039;&#039;&#039;not&#039;&#039;&#039; required to be in namespaces, but it can be helpful in avoiding collisions as the number of tests in your system grows.&lt;br /&gt;
* we&#039;ve documented our test classes and methods using [http://www.stack.nl/~dimitri/doxygen/ doxygen] style comments. This documentation is automatically extracted by our tools and added to the results report - more information about this feature is [[Test_API#Test_Documentation|here]]&lt;br /&gt;
* you can optionally write test classes that inherit from a base class that we&#039;ve defined ([[Runtime_Test_Services#class_srTest|stride::srTest]]). We recommend you start by writing your classes this way so that your classes will inherit some methods and members that make some custom reporting tasks simpler.&lt;br /&gt;
* exceptions are generally handled by the STRIDE unit test harness, but can be disabled if your compiler does not support them (see &#039;&#039;s2_testclass_basic_exceptions_tests.h/cpp&#039;&#039;).&lt;br /&gt;
* parameterized tests are supported by test classes as well. In these tests, simple constructor arguments can be passed during execution and are available at runtime to the test unit. The STRIDE infrastructure handles the passing of the arguments to the device and the construction of the test class with these arguments. Parameterization of test classes can be a powerful way to expand your test coverage with data driven test scenarios (varying the input to a single test class).&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestFList ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates a simpler packaging technique that is appropriate for systems that support C compilation only (no C++). [[Test_Units#Test_Units|FLists]] are simply a collection of functions that are called in sequence. There is no shared state or data, unless you arrange to use global data for this purpose. Review the source code in the directory and follow the sample description [[Test Function List Samples|here]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* flist tests support setup/teardown fixturing, but &#039;&#039;&#039;not&#039;&#039;&#039; parameterization or exception handling.&lt;br /&gt;
* we&#039;ve again provided documentation using doxygen formatting for these samples. However, because there is no storage-class entity with which the docs are associated in an FList, there are some restrictions to the documentation, which you can read about [[Test_API#Test_FLists|here]].&lt;br /&gt;
* notice how the [[Scl_test_flist|scl_test_flist pragma]] requires you to both create a name for the test unit (first argument) &#039;&#039;&#039;and&#039;&#039;&#039; explicitly list each test method that is part of the unit. This is one disadvantage of flist over a test class (the latter does not require explicit listing of each test since all conforming public methods are assumed to be test methods).&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestCClass ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates a more sophisticated (and complicated) packaging technique for systems that support C compilation only. [[Test_Units#Test_Units|Test C Classes]] are defined by a structure of function pointers (which may also include data) and an initialization function. Review the source code in the directory and follow the sample description [[Test CClass Samples|here]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* the [[Scl_test_cclass|scl_test_cclass pragma]] requires a structure of function pointers as well as an initialization function that is called prior to running the tests. The initialization function must take the c class structure pointer as its first argument and it will assign values to all the function pointer elements as well as perform any other initialization tasks. The pragma also accepts an optional deinitialization function that will be called after test execution (if provided).&lt;br /&gt;
* we&#039;ve provided documentation using  doxygen formatting for these samples. Because the test functions themselves are bound at runtime, the test documentation must be associated with the function pointer elements in the structure - read more [[Test_API#Test_C-Classes|here]].&lt;br /&gt;
* parameterized tests are also supported by test c classes. Arguments to the initialization function that follow the structure pointer argument are considered constructor arguments and can be passed when running the test.&lt;br /&gt;
* because the test methods are assigned to the structure members at runtime, it&#039;s possible (and recommended) to use statically scoped functions, so as not to pollute the global function space with test functions. That said, you are free to use any functions with matching signatures and linkage, regardless of scope.&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestPoint ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates how to do tests that validate [[Test_Point_Testing_in_C/C++|STRIDE Test Points]] - with native test code. Although test point validation tests can be written in host-based scripting languages as well, sometimes it&#039;s preferable to write (and execute) the test logic in native target code - for instance, when validating large or otherwise complex data payloads. Review the source code in the directory and follow the sample description [[Test Point Samples|here]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* test point tests can be packaged into a harness using any of the [[Test_Units#Test_Units|three types of test units]] that we support. In this case, we used an FList so that the sample could be used on systems that were not c++ capable.&lt;br /&gt;
* one of two methods is used to process the test: [[Test_Point_Testing_in_C/C++#srTestPointWait|srTestPointWait]] or [[Test_Point_Testing_in_C/C++#srTestPointCheck|srTestPointCheck]]. The former is used to process test points as they happen (with a specified timeout) and the latter is used to process test points that have already occurred at the time it is called (post completion check).&lt;br /&gt;
* due to the limitations of c syntax, it can be ugly to create the srTestPointExpect_t data, especially where user data validation is concerned (see the &#039;&#039;CheckData&#039;&#039; example, for instance).&lt;br /&gt;
&lt;br /&gt;
===  test_in_c_cpp/FileServices ===&lt;br /&gt;
&lt;br /&gt;
The FileServices sample demonstrates basic usage of the [[File_Transfer_Services|STRIDE File Transfer APIs]], which provide a way to transfer files between the host and the running device. All file data is transferred using STRIDE messaging between the STRIDE Runner on the host and the STRIDE Runtime on the device, so no additional communication ports are required to use these services. Review the source code in the directory and follow the sample description [[File Services Samples|here]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* most of the functions return an integer status code which should be checked - any nonzero status indicates a failure. We wrote a macro for this sample (&#039;&#039;fsASSERT&#039;&#039;) that checks return codes and adds any errors to the report. You might choose to do something similar, depending on your needs.&lt;br /&gt;
* the file transfer API has both byte and line oriented read/write functions. You can use whichever functions are most appropriate for your needs.&lt;br /&gt;
* this sample uses the local filesystem ([http://en.wikipedia.org/wiki/Stdio stdio]) to write some data to a tempfile - however the stride APIs are themselves buffer/byte oriented and don&#039;t require a local filesystem in general. If your device under test &#039;&#039;does&#039;&#039; have a filesystem, the STRIDE APIs can certainly be used to transfer resources to/from the device filesystem.&lt;br /&gt;
&lt;br /&gt;
=== Build the test app ===&lt;br /&gt;
&lt;br /&gt;
So that we can run these samples, let&#039;s now build an off-target test app that contains the source under test -- you can follow the generic steps [[Off-Target_Test_App#Copy_Sample_Source|described here]]. When copying the source, make sure you take all the source files from each of the samples mentioned above.&lt;br /&gt;
&lt;br /&gt;
=== Run the tests ===&lt;br /&gt;
&lt;br /&gt;
Now launch the test app (if you have not already) and execute the runner  with the following commands: &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Test Class tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures;  s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple&amp;quot; --output=TestClass.xml &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;C Class tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testcclass_basic_fixtures; s2_testcclass_basic_parameterized(\&amp;quot;mystring\&amp;quot;, 8); s2_testcclass_basic_simple&amp;quot; --output=CClass.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;FList tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testflist_basic_fixtures; s2_testflist_basic_simple&amp;quot; --output=FList.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Test Point tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testpoint_basic&amp;quot; --log_level=all --output=TestPoint.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;File Services tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride  --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot;  --run=&amp;quot;s2_fileservices_basic&amp;quot; --output=FileServices.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These commands will produce a distinct result file for each run (per the &#039;&#039;--output&#039;&#039; option above). Open each result file in your browser to peruse the results.&lt;br /&gt;
&lt;br /&gt;
=== Examine the results ===&lt;br /&gt;
&lt;br /&gt;
Open the result files created above and browse the results. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* the test documentation has been extracted (at compile time) and is attached to the results when the tests are executed. Most of the test suites/test cases should have documentation in the description field.&lt;br /&gt;
* the test point tests cases show the test points that were encountered, information about failures (if any) and log messages (the latter only because we included the &amp;lt;tt&amp;gt;--log_level=all&amp;lt;/tt&amp;gt; option when executing the runner).&lt;br /&gt;
* The two parameterized tests -- &#039;&#039;s2_testcclass_basic_parameterized&#039;&#039; and &#039;&#039;s2_testclass::Basic::Parameterized&#039;&#039; -- both pass. We passed explicit arguments to the former (on the stride command line above) while we allowed the default arguments (0/null for all) to be passed to the latter by not explicitly specifying how it was to be called.&lt;br /&gt;
&lt;br /&gt;
Explore all the results and make sure that the results meet your expectations based on the test source that you&#039;ve previously browsed.&lt;br /&gt;
&lt;br /&gt;
=== Other Samples ===&lt;br /&gt;
&lt;br /&gt;
We&#039;ve omitted a few samples from this training that cover more advanced and (perhaps) less widely used features. We encourage you to investigate these samples on your own if you are interested - in particular:&lt;br /&gt;
&lt;br /&gt;
* each of the test packaging samples (&#039;&#039;TestClass&#039;&#039;, &#039;&#039;TestCClass&#039;&#039;, and &#039;&#039;TestFList&#039;&#039;) includes examples of using the runtime test APIs to do more advanced reporting (dynamic suite creation, for example). These techniques are applicable for some data driven test scenarios.&lt;br /&gt;
* the [[Test_Double_Samples|TestDouble sample]] shows how to use STRIDE Test Doubles to replace function dependencies at runtime, typically for the purpose of isolating functions under test. This can be a powerful technique, but also requires some up-front work on your part to enable it, as this sample demonstrates.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Tests_in_C/C%2B%2B&amp;diff=12713</id>
		<title>Training Tests in C/C++</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Tests_in_C/C%2B%2B&amp;diff=12713"/>
		<updated>2010-06-04T20:42:20Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* Run the tests */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
The STRIDE Framework provides support for implementation of tests in the native C/C++ of the device under test. Once written, these tests are compiled using the device toolchain and are harnessed, via the [[Intercept Module|STRIDE Intercept Module]], into one or more applications under test on the device. These tests have the unique advantage of executing in real-time on the device itself, allowing the tests to operate under actual device conditions during test.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding: &lt;br /&gt;
&lt;br /&gt;
* [[Test Units Overview|Overview]]&lt;br /&gt;
* [[Test Units]]&lt;br /&gt;
* [[Test Macros]]&lt;br /&gt;
* [[Test Point Testing in C/C++|Test Point Testing]]&lt;br /&gt;
* [[Test API | Test APIs]]&lt;br /&gt;
&lt;br /&gt;
== Why would I want to write tests in native code ? ==&lt;br /&gt;
&lt;br /&gt;
Here are some of the scenarios for which on-target test harnessing is particularly advantageous:&lt;br /&gt;
&lt;br /&gt;
* direct API testing. If you want to validate native APIs by driving the APIs directly, native code is the simplest way to do so. STRIDE provides convenient [[Test Macros|assertion macros]] to validate your variable states. API testing can also be combined with native test point tests (using [[Test Point]] instrumentation) to provide deeper validation of expected behavior of the units under test.&lt;br /&gt;
* unit testing of C objects or C++ classes. The STRIDE native tests execute in the same context as the rest of your code, so it&#039;s possible to fully unit test any objects that can be created in your actual application code.&lt;br /&gt;
* validation logic that requires sensitive timing thresholds. Sometimes  it&#039;s only possible to validate tight timing scenarios on-target.&lt;br /&gt;
* high-volume data processing scenarios. In some cases, the volume of data being processed and validated for a particular test scenario is too large to be easily handled by an off-target harness. In that case, native test units provide a convenient way to write tests that validate that data without shipping the data to the host during testing.&lt;br /&gt;
&lt;br /&gt;
What&#039;s more, you might simply &#039;&#039;prefer&#039;&#039; to write your test logic in C or C++ (as opposed to perl on the host). If that&#039;s the case, we don&#039;t discourage you from using a toolset that you&#039;re more comfortable with - particularly if it enables you to start writing tests without a new language learning curve.&lt;br /&gt;
&lt;br /&gt;
== Are there any disadvantages? ==&lt;br /&gt;
&lt;br /&gt;
Sure. Most of the disadvantages of native on-target tests concern the device build process. If your device build is particularly slow (on the order of hours or days), then adding and running new tests can become a tedious waiting game. Testing is always well served by shorter build cycles, and on-target tests are particularly sensitive to this. &lt;br /&gt;
&lt;br /&gt;
In some cases, the additional code space requirements of native tests are a concern, but this is also mitigated by ever-increasing device storage capacities. On platforms that support multiple processes (e.g. embedded linux or WinMobile), it&#039;s possible to bundle tests into one or more separate test processes, which further mitigates the code space concern by isolating the test code in one or more separate applications.&lt;br /&gt;
&lt;br /&gt;
== Samples ==&lt;br /&gt;
&lt;br /&gt;
For this training, we will be using some of the samples provided in  the [[C/C++_Samples]]. For any sample that we don&#039;t cover here explicitly, feel free to explore the sample yourself. All of the samples can be easily built and executed using the STRIDE [[Off-Target Environment]]. &lt;br /&gt;
&lt;br /&gt;
The first three samples that we cover are introductions to the different test unit packaging mechanisms that we support in STRIDE. A good overview of the pros and cons for each type is presented [[Test_Units#Test_Units|here]]. The fourth sample demonstrates test point testing in native code on target (i.e. both the generation and &#039;&#039;validation&#039;&#039; of the test points are done on target). The last two samples cover more advanced topics: [[File_Transfer_Services|file transfer services]] and [[Using_Test_Doubles|function doubling]]. STRIDE file transfer enables reading/writing of files on the host from the device under test and can be useful for implementing data driven test scenarios (for instance, media file playback). The function doubling sample covers an advanced testing technique whereby you can temporarily replace specific functions with another version for the purpose of testing.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; each of the three packaging examples includes samples that cover &#039;&#039;basic&#039;&#039; usage and more advanced reporting techniques (&#039;&#039;runtimeservices&#039;&#039;). We recommend for this training that you focus on the &#039;&#039;basic&#039;&#039; samples as they cover the important packaging concepts. The &#039;&#039;runtimeservices&#039;&#039; examples are relevant only if the built-in reporting techniques are not sufficient for your reporting needs.&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestClass ===&lt;br /&gt;
&lt;br /&gt;
This sample shows the techniques available for packaging and writing test units using classes. If you have a C++ capable compiler, we recommend that you use test classes to package your unit tests, even if your APIs under test are C only. Review the source code in the directory and follow the sample description [[Test Class Samples|here]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* all of these example classes have been put into one or more namespaces. This is just for organization purposes, mainly to avoid name collisions when built along with lots of other test classes. Your test classes are &#039;&#039;&#039;not&#039;&#039;&#039; required to be in namespaces, but it can be helpful in avoiding collisions as the number of tests in your system grows.&lt;br /&gt;
* we&#039;ve documented our test classes and methods using [http://www.stack.nl/~dimitri/doxygen/ doxygen] style comments. This documentation is automatically extracted by our tools and added to the results report - more information about this feature is [[Test_API#Test_Documentation|here]]&lt;br /&gt;
* you can optionally write test classes that inherit from a base class that we&#039;ve defined ([[Runtime_Test_Services#class_srTest|stride::srTest]]). We recommend you start by writing your classes this way so that your classes will inherit some methods and members that make some custom reporting tasks simpler.&lt;br /&gt;
* exceptions are generally handled by the STRIDE unit test harness, but can be disabled if your compiler does not support them (see &#039;&#039;s2_testclass_basic_exceptions_tests.h/cpp&#039;&#039;).&lt;br /&gt;
* parameterized tests are supported by test classes as well. In these tests, simple constructor arguments can be passed during execution and are available at runtime to the test unit. The STRIDE infrastructure handles the passing of the arguments to the device and the construction of the test class with these arguments. Parameterization of test classes can be a powerful way to expand your test coverage with data driven test scenarios (varying the input to a single test class).&lt;br /&gt;
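&lt;br /&gt;
Putting the observations above together, here is a minimal sketch of what such a test class might look like. The class, method, and object names are invented for illustration, and the &#039;&#039;scl_test_class&#039;&#039; pragma spelling and the &#039;&#039;srEXPECT_TRUE&#039;&#039; assertion macro are assumptions here - consult the [[Test Class Samples]] and [[Test Macros]] articles for the exact forms used in your STRIDE release.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/// @brief Example test class (all names below are illustrative only).&lt;br /&gt;
namespace s2_training_examples {&lt;br /&gt;
  class CounterTests : public stride::srTest {&lt;br /&gt;
  public:&lt;br /&gt;
    /// Verifies that a freshly constructed counter starts at zero.&lt;br /&gt;
    void testStartsAtZero() {&lt;br /&gt;
      Counter c;                       // hypothetical object under test&lt;br /&gt;
      srEXPECT_TRUE(c.value() == 0);   // assertion macro name assumed&lt;br /&gt;
    }&lt;br /&gt;
    /// Verifies that increment() advances the count by one.&lt;br /&gt;
    void testIncrement() {&lt;br /&gt;
      Counter c;&lt;br /&gt;
      c.increment();&lt;br /&gt;
      srEXPECT_TRUE(c.value() == 1);&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Exposes the class to STRIDE for harnessing; the pragma name and syntax&lt;br /&gt;
// shown are assumptions - see the Test Class Samples article for the real form.&lt;br /&gt;
#pragma scl_test_class(s2_training_examples::CounterTests)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;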
&lt;br /&gt;
=== test_in_c_cpp/TestFList ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates a simpler packaging technique that is appropriate for systems that support C compilation only (no C++). [[Test_Units#Test_Units|FLists]] are simply a collection of functions that are called in sequence. There is no shared state or data, unless you arrange to use global data for this purpose. Review the source code in the directory and follow the sample description [[Test Function List Samples|here]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* flist tests support setup/teardown fixturing, but &#039;&#039;&#039;not&#039;&#039;&#039; parameterization or exception handling.&lt;br /&gt;
* we&#039;ve again provided documentation using doxygen formatting for these samples. However, because there is no storage-class entity with which the docs are associated in an FList, there are some restrictions to the documentation, which you can read about [[Test_API#Test_FLists|here]].&lt;br /&gt;
* notice how the [[Scl_test_flist|scl_test_flist pragma]] requires you to both create a name for the test unit (first argument) &#039;&#039;&#039;and&#039;&#039;&#039; explicitly list each test method that is part of the unit. This is one disadvantage of flist over a test class (the latter does not require explicit listing of each test since all conforming public methods are assumed to be test methods).&lt;br /&gt;
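&lt;br /&gt;
For orientation, here is a rough sketch of an FList declaration. The function names and the fixture arrangement are invented, the &#039;&#039;srEXPECT_TRUE&#039;&#039; macro name is assumed, and the exact &#039;&#039;scl_test_flist&#039;&#039; argument order/syntax is an assumption - the [[Scl_test_flist|pragma reference]] has the authoritative form.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* Illustrative FList: standalone C test functions plus simple fixturing. */&lt;br /&gt;
static int s_fixture_ready = 0;&lt;br /&gt;
&lt;br /&gt;
static void setup(void)    { s_fixture_ready = 1; }&lt;br /&gt;
static void teardown(void) { s_fixture_ready = 0; }&lt;br /&gt;
&lt;br /&gt;
static void testFixtureWasRun(void)&lt;br /&gt;
{&lt;br /&gt;
    srEXPECT_TRUE(s_fixture_ready == 1);   /* assertion macro name assumed */&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
static void testSimpleArithmetic(void)&lt;br /&gt;
{&lt;br /&gt;
    srEXPECT_TRUE(2 + 2 == 4);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
/* The pragma names the test unit and explicitly lists each member function.&lt;br /&gt;
   The argument layout shown here is an assumption. */&lt;br /&gt;
#pragma scl_test_flist(&amp;quot;s2_training_flist&amp;quot;, setup, teardown, testFixtureWasRun, testSimpleArithmetic)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;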
&lt;br /&gt;
=== test_in_c_cpp/TestCClass ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates a more sophisticated (and complicated) packaging technique for systems that support C compilation only. [[Test_Units#Test_Units|Test C Classes]] are defined by a structure of function pointers (which may also include data) and an initialization function. Review the source code in the directory and follow the sample description [[Test CClass Samples|here]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* the [[Scl_test_cclass|scl_test_cclass pragma]] requires a structure of function pointers as well as an initialization function that is called prior to running the tests. The initialization function must take the c class structure pointer as its first argument and it will assign values to all the function pointer elements as well as perform any other initialization tasks. The pragma also accepts an optional deinitialization function that will be called after test execution (if provided).&lt;br /&gt;
* we&#039;ve provided documentation using  doxygen formatting for these samples. Because the test functions themselves are bound at runtime, the test documentation must be associated with the function pointer elements in the structure - read more [[Test_API#Test_C-Classes|here]].&lt;br /&gt;
* parameterized tests are also supported by test c classes. Arguments to the initialization function that follow the structure pointer argument are considered constructor arguments and can be passed when running the test.&lt;br /&gt;
* because the test methods are assigned to the structure members at runtime, it&#039;s possible (and recommended) to use statically scoped functions, so as not to pollute the global function space with test functions. That said, you are free to use any functions with matching signatures and linkage, regardless of scope.&lt;br /&gt;
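&lt;br /&gt;
The structure-plus-initializer shape described above looks roughly like the following sketch. The member names, test bodies, &#039;&#039;srEXPECT_TRUE&#039;&#039; macro, and the &#039;&#039;scl_test_cclass&#039;&#039; argument syntax are all assumptions - see the [[Scl_test_cclass|pragma reference]] and the [[Test CClass Samples|sample description]] for the real declarations.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* Structure of function pointers that defines the test c class (illustrative). */&lt;br /&gt;
typedef struct {&lt;br /&gt;
    /** Verifies some basic behavior of the unit under test. */&lt;br /&gt;
    void (*testBasic)(void);&lt;br /&gt;
    /** Verifies an edge case. */&lt;br /&gt;
    void (*testEdgeCase)(void);&lt;br /&gt;
} s2_training_cclass_t;&lt;br /&gt;
&lt;br /&gt;
/* Statically scoped test functions keep the global namespace clean. */&lt;br /&gt;
static void testBasic(void)    { srEXPECT_TRUE(1 + 1 == 2); }  /* macro name assumed */&lt;br /&gt;
static void testEdgeCase(void) { srEXPECT_TRUE(0 == 0); }&lt;br /&gt;
&lt;br /&gt;
/* Initialization function: takes the c class structure pointer as its first&lt;br /&gt;
   argument and binds the function pointer members before the tests run. */&lt;br /&gt;
static void s2_training_cclass_init(s2_training_cclass_t* self)&lt;br /&gt;
{&lt;br /&gt;
    self-&amp;gt;testBasic    = testBasic;&lt;br /&gt;
    self-&amp;gt;testEdgeCase = testEdgeCase;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
/* Pragma argument order is an assumption - check the scl_test_cclass reference. */&lt;br /&gt;
#pragma scl_test_cclass(s2_training_cclass_t, s2_training_cclass_init)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;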
&lt;br /&gt;
=== test_in_c_cpp/TestPoint ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates how to do tests that validate [[Test_Point_Testing_in_C/C++|STRIDE Test Points]] - with native test code. Although test point validation tests can be written in host-based scripting languages as well, sometimes it&#039;s preferable to write (and execute) the test logic in native target code - for instance, when validating large or otherwise complex data payloads. Review the source code in the directory and follow the sample description [[Test Point Samples|here]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* test point tests can be packaged into a harness using any of the [[Test_Units#Test_Units|three types of test units]] that we support. In this case, we used an FList so that the sample could be used on systems that were not c++ capable.&lt;br /&gt;
* one of two methods is used to process the test: [[Test_Point_Testing_in_C/C++#srTestPointWait|srTestPointWait]] or [[Test_Point_Testing_in_C/C++#srTestPointCheck|srTestPointCheck]]. The former is used to process test points as they happen (with a specified timeout) and the latter is used to process test points that have already occurred at the time it is called (post completion check).&lt;br /&gt;
* due to the limitations of c syntax, it can be ugly to create the srTestPointExpect_t data, especially where user data validation is concerned (see the &#039;&#039;CheckData&#039;&#039; example, for instance).&lt;br /&gt;
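&lt;br /&gt;
As a very rough sketch of the flow: the &#039;&#039;srTestPointExpect_t&#039;&#039; initializer layout and the &#039;&#039;srTestPointWait&#039;&#039; parameters shown below are assumptions, and &#039;&#039;start_state_machine&#039;&#039; is a hypothetical function under test - the [[Test_Point_Testing_in_C/C++|Test Point Testing]] article has the real declarations.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* Illustrative only - field names, argument order, and return handling are assumed. */&lt;br /&gt;
static void testStateMachineTransitions(void)&lt;br /&gt;
{&lt;br /&gt;
    /* Expected test point labels, in the order they should be hit. */&lt;br /&gt;
    srTestPointExpect_t expected[] = {&lt;br /&gt;
        { &amp;quot;START&amp;quot; },&lt;br /&gt;
        { &amp;quot;IDLE&amp;quot; },&lt;br /&gt;
        { &amp;quot;ACTIVE&amp;quot; },&lt;br /&gt;
        { &amp;quot;END&amp;quot; },&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
    start_state_machine();   /* hypothetical code under test */&lt;br /&gt;
&lt;br /&gt;
    /* Process the test points as they happen, with a timeout (units assumed). */&lt;br /&gt;
    srTestPointWait(expected, 4, 5000);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;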
&lt;br /&gt;
===  test_in_c_cpp/FileServices ===&lt;br /&gt;
&lt;br /&gt;
The FileServices sample demonstrates basic usage of the [[File_Transfer_Services|STRIDE File Transfer APIs]], which provide a way to transfer files between the host and the running device. All file data is transmitted using STRIDE messaging between the STRIDE Runner on the host and the STRIDE Runtime on the device, so no additional communication ports are required to use these services. Review the source code in the directory and follow the sample description [[File Services Samples|here]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* most of the functions return an integer status code which should be checked - any nonzero status indicates a failure. We wrote a macro for this sample (&#039;&#039;fsASSERT&#039;&#039;) that checks return codes and adds any errors to the report. You might choose to do something similar, depending on your needs.&lt;br /&gt;
* the file transfer API has both byte and line oriented read/write functions. You can use whichever functions are most appropriate for your needs.&lt;br /&gt;
* this sample uses the local filesystem ([http://en.wikipedia.org/wiki/Stdio stdio]) to write some data to a tempfile - however the stride APIs are themselves buffer/byte oriented and don&#039;t require a local filesystem in general. If your device under test &#039;&#039;does&#039;&#039; have a filesystem, the STRIDE APIs can certainly be used to transfer resources to/from the device filesystem.&lt;br /&gt;
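&lt;br /&gt;
For reference, the return-code-checking pattern behind &#039;&#039;fsASSERT&#039;&#039; looks roughly like the sketch below. The reporting helper and the file-transfer call shown are placeholders invented for illustration - the sample&#039;s own macro adds the error to the STRIDE result report using the actual file transfer and reporting APIs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* Sketch of the pattern only: check a nonzero return code and record the failure. */&lt;br /&gt;
static void fsReportFailure(const char* expr, int status);  /* placeholder reporter */&lt;br /&gt;
&lt;br /&gt;
#define fsASSERT(call) do { int s_ = (call); if (s_ != 0) fsReportFailure(#call, s_); } while (0)&lt;br /&gt;
&lt;br /&gt;
/* Usage (inside a test function) - wrap each file-transfer call so any&lt;br /&gt;
   nonzero status is flagged; the call name here is hypothetical: */&lt;br /&gt;
/*   fsASSERT(transfer_line_to_host(handle, &amp;quot;some log text&amp;quot;)); */&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;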
&lt;br /&gt;
===  test_in_c_cpp/TestDouble ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
=== Build the test app ===&lt;br /&gt;
&lt;br /&gt;
So that we can run these samples, let&#039;s now build an off-target test app that contains the sources under test -- you can follow the generic steps [[Off-Target_Test_App#Copy_Sample_Source|described here]]. When copying the source, make sure you take all the source files from all of the samples mentioned above.&lt;br /&gt;
&lt;br /&gt;
=== Run the tests ===&lt;br /&gt;
&lt;br /&gt;
Now launch the test app (if you have not already) and execute the runner  with the following commands: &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Test Class tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures;  s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple&amp;quot; --output=TestClass.xml &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;C Class tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testcclass_basic_fixtures; s2_testcclass_basic_parameterized(\&amp;quot;mystring\&amp;quot;, 8); s2_testcclass_basic_simple&amp;quot; --output=CClass.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;FList tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testflist_basic_fixtures; s2_testflist_basic_simple&amp;quot; --output=FList.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Test Point tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testpoint_basic&amp;quot; --log_level=all --output=TestPoint.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;File Services tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride  --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot;  --run=&amp;quot;s2_fileservices_basic&amp;quot; --output=FileServices.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These commands will produce distinct result files for each run (per the &#039;&#039;--output&#039;&#039; option above). Open each result file in your browser to peruse the results.&lt;br /&gt;
&lt;br /&gt;
=== Examine the results ===&lt;br /&gt;
&lt;br /&gt;
Open the result files created above and browse the results. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* the test documentation has been extracted (at compile time) and is attached to the results when the tests are executed. Most of the test suites/test cases should have documentation in the description field.&lt;br /&gt;
* the test point test cases show the test points that were encountered, information about failures (if any), and log messages (the latter only because we included the &amp;lt;tt&amp;gt;--log_level=all&amp;lt;/tt&amp;gt; option when executing the runner).&lt;br /&gt;
* The two parameterized tests -- &#039;&#039;s2_testcclass_basic_parameterized&#039;&#039; and &#039;&#039;s2_testclass::Basic::Parameterized&#039;&#039; -- both pass. We passed explicit arguments to the former (on the stride command line above) while we allowed the default arguments (0/null for all) to be passed to the latter by not explicitly specifying how it was to be called.&lt;br /&gt;
&lt;br /&gt;
Explore all the results and make sure that the results meet your expectations based on the test source that you&#039;ve previously browsed.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Tests_in_C/C%2B%2B&amp;diff=12712</id>
		<title>Training Tests in C/C++</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Tests_in_C/C%2B%2B&amp;diff=12712"/>
		<updated>2010-06-04T20:41:58Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* Samples */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
The STRIDE Framework provides support for implementation of tests in the native C/C++ of the device under test. Once written, these tests are compiled using the device toolchain and are harnessed, via the [[Intercept Module|STRIDE Intercept Module]], into one or more applications under test on the device. These tests have the unique advantage of executing in real-time on the device itself, allowing the tests to operate under actual device conditions during test.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding: &lt;br /&gt;
&lt;br /&gt;
* [[Test Units Overview|Overview]]&lt;br /&gt;
* [[Test Units]]&lt;br /&gt;
* [[Test Macros]]&lt;br /&gt;
* [[Test Point Testing in C/C++|Test Point Testing]]&lt;br /&gt;
* [[Test API | Test APIs]]&lt;br /&gt;
&lt;br /&gt;
== Why would I want to write tests in native code? ==&lt;br /&gt;
&lt;br /&gt;
Here are some of the scenarios for which on-target test harnessing is particularly advantageous:&lt;br /&gt;
&lt;br /&gt;
* direct API testing. If you want to validate native APIs by driving the APIs directly, native code is the simplest way to do so. STRIDE provides convenient [[Test Macros|assertion macros]] to validate your variable states. API testing can also be combined with native test point tests (using [[Test Point]] instrumentation) to provide deeper validation of expected behavior of the units under test.&lt;br /&gt;
* unit testing of C objects or C++ classes. The STRIDE native tests execute in the same context as the rest of your code, so it&#039;s possible to fully unit test any objects that can be created in your actual application code.&lt;br /&gt;
* validation logic that requires sensitive timing thresholds. Sometimes  it&#039;s only possible to validate tight timing scenarios on-target.&lt;br /&gt;
* high-volume data processing scenarios. In some cases, the volume of data being processed and validated for a particular test scenario is too large to be easily handled by an off-target harness. In that case, native test units provide a convenient way to write tests that validate that data without shipping the data to the host during testing.&lt;br /&gt;
&lt;br /&gt;
What&#039;s more, you might simply &#039;&#039;prefer&#039;&#039; to write your test logic in C or C++ (as opposed to perl on the host). If that&#039;s the case, we don&#039;t discourage you from using a toolset that you&#039;re more comfortable with - particularly if it enables you to start writing tests without a new language learning curve.&lt;br /&gt;
&lt;br /&gt;
== Are there any disadvantages? ==&lt;br /&gt;
&lt;br /&gt;
Sure. Most of the disadvantages of native on-target tests concern the device build process. If your device build is particularly slow (on the order of hours or days), then adding and running new tests can become a tedious waiting game. Testing is always well served by shorter build cycles, and on-target tests are particularly sensitive to this. &lt;br /&gt;
&lt;br /&gt;
In some cases, the additional code space requirements of native tests are a concern, but this is also mitigated by ever-increasing device storage capacities. On platforms that support multiple processes (e.g. embedded linux or WinMobile), it&#039;s possible to bundle tests into one or more separate test processes, which further mitigates the code space concern by isolating the test code in one or more separate applications.&lt;br /&gt;
&lt;br /&gt;
== Samples ==&lt;br /&gt;
&lt;br /&gt;
For this training, we will be using some of the samples provided in  the [[C/C++_Samples]]. For any sample that we don&#039;t cover here explicitly, feel free to explore the sample yourself. All of the samples can be easily built and executed using the STRIDE [[Off-Target Environment]]. &lt;br /&gt;
&lt;br /&gt;
The first three samples that we cover are introductions to the different test unit packaging mechanisms that we support in STRIDE. A good overview of the pros and cons for each type is presented [[Test_Units#Test_Units|here]]. The fourth sample demonstrates test point testing in native code on target (i.e. both the generation and &#039;&#039;validation&#039;&#039; of the test points are done on target). The last two samples cover more advanced topics: [[File_Transfer_Services|file transfer services]] and [[Using_Test_Doubles|function doubling]]. STRIDE file transfer enables reading/writing of files on the host from the device under test and can be useful for implementing data driven test scenarios (for instance, media file playback). The function doubling sample covers an advanced testing technique whereby you can temporarily replace specific functions with another version for the purpose of testing.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; each of the three packaging examples includes samples that cover &#039;&#039;basic&#039;&#039; usage and more advanced reporting techniques (&#039;&#039;runtimeservices&#039;&#039;). We recommend for this training that you focus on the &#039;&#039;basic&#039;&#039; samples as they cover the important packaging concepts. The &#039;&#039;runtimeservices&#039;&#039; examples are relevant only if the built-in reporting techniques are not sufficient for your reporting needs.&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestClass ===&lt;br /&gt;
&lt;br /&gt;
This sample shows the techniques available for packaging and writing test units using classes. If you have a C++ capable compiler, we recommend that you use test classes to package your unit tests, even if your APIs under test are C only. Review the source code in the directory and follow the sample description [[Test Class Samples|here]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* all of these example classes have been put into one or more namespaces. This is just for organization purposes, mainly to avoid name collisions when built along with lots of other test classes. Your test classes are &#039;&#039;&#039;not&#039;&#039;&#039; required to be in namespaces, but it can be helpful in avoiding collisions as the number of tests in your system grows.&lt;br /&gt;
* we&#039;ve documented our test classes and methods using [http://www.stack.nl/~dimitri/doxygen/ doxygen] style comments. This documentation is automatically extracted by our tools and added to the results report - more information about this feature is [[Test_API#Test_Documentation|here]]&lt;br /&gt;
* you can optionally write test classes that inherit from a base class that we&#039;ve defined ([[Runtime_Test_Services#class_srTest|stride::srTest]]). We recommend you start by writing your classes this way so that your classes will inherit some methods and members that make some custom reporting tasks simpler.&lt;br /&gt;
* exceptions are generally handled by the STRIDE unit test harness, but can be disabled if your compiler does not support them (see &#039;&#039;s2_testclass_basic_exceptions_tests.h/cpp&#039;&#039;).&lt;br /&gt;
* parameterized tests are supported by test classes as well. In these tests, simple constructor arguments can be passed during execution and are available at runtime to the test unit. The STRIDE infrastructure handles the passing of the arguments to the device and the construction of the test class with these arguments. Parameterization of test classes can be a powerful way to expand your test coverage with data driven test scenarios (varying the input to a single test class).&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestFList ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates a simpler packaging technique that is appropriate for systems that support C compilation only (no C++). [[Test_Units#Test_Units|FLists]] are simply a collection of functions that are called in sequence. There is no shared state or data, unless you arrange to use global data for this purpose. Review the source code in the directory and follow the sample description [[Test Function List Samples|here]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* flist tests support setup/teardown fixturing, but &#039;&#039;&#039;not&#039;&#039;&#039; parameterization or exception handling.&lt;br /&gt;
* we&#039;ve again provided documentation using doxygen formatting for these samples. However, because there is no storage-class entity with which the docs are associated in an FList, there are some restrictions to the documentation, which you can read about [[Test_API#Test_FLists|here]].&lt;br /&gt;
* notice how the [[Scl_test_flist|scl_test_flist pragma]] requires you to both create a name for the test unit (first argument) &#039;&#039;&#039;and&#039;&#039;&#039; explicitly list each test method that is part of the unit. This is one disadvantage of flist over a test class (the latter does not require explicit listing of each test since all conforming public methods are assumed to be test methods).&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestCClass ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates a more sophisticated (and complicated) packaging technique for systems that support C compilation only. [[Test_Units#Test_Units|Test C Classes]] are defined by a structure of function pointers (which may also include data) and an initialization function. Review the source code in the directory and follow the sample description [[Test CClass Samples|here]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* the [[Scl_test_cclass|scl_test_cclass pragma]] requires a structure of function pointers as well as an initialization function that is called prior to running the tests. The initialization function must take the c class structure pointer as its first argument and it will assign values to all the function pointer elements as well as perform any other initialization tasks. The pragma also accepts an optional deinitialization function that will be called after test execution (if provided).&lt;br /&gt;
* we&#039;ve provided documentation using  doxygen formatting for these samples. Because the test functions themselves are bound at runtime, the test documentation must be associated with the function pointer elements in the structure - read more [[Test_API#Test_C-Classes|here]].&lt;br /&gt;
* parameterized tests are also supported by test c classes. Arguments to the initialization function that follow the structure pointer argument are considered constructor arguments and can be passed when running the test.&lt;br /&gt;
* because the test methods are assigned to the structure members at runtime, it&#039;s possible (and recommended) to use statically scoped functions, so as not to pollute the global function space with test functions. That said, you are free to use any functions with matching signatures and linkage, regardless of scope.&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestPoint ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates how to do tests that validate [[Test_Point_Testing_in_C/C++|STRIDE Test Points]] - with native test code. Although test point validation tests can be written in host-based scripting languages as well, sometimes it&#039;s preferable to write (and execute) the test logic in native target code - for instance, when validating large or otherwise complex data payloads. Review the source code in the directory and follow the sample description [[Test Point Samples|here]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* test point tests can be packaged into a harness using any of the [[Test_Units#Test_Units|three types of test units]] that we support. In this case, we used an FList so that the sample could be used on systems that were not c++ capable.&lt;br /&gt;
* one of two methods is used to process the test: [[Test_Point_Testing_in_C/C++#srTestPointWait|srTestPointWait]] or [[Test_Point_Testing_in_C/C++#srTestPointCheck|srTestPointCheck]]. The former is used to process test points as they happen (with a specified timeout) and the latter is used to process test points that have already occurred at the time it is called (post completion check).&lt;br /&gt;
* due to the limitations of c syntax, it can be ugly to create the srTestPointExpect_t data, especially where user data validation is concerned (see the &#039;&#039;CheckData&#039;&#039; example, for instance).&lt;br /&gt;
&lt;br /&gt;
===  test_in_c_cpp/FileServices ===&lt;br /&gt;
&lt;br /&gt;
The FileServices sample demonstrates basic usage of the [[File_Transfer_Services|STRIDE File Transfer APIs]], which provide a way to transfer files between the host and the running device. All file data is transmitted using STRIDE messaging between the STRIDE Runner on the host and the STRIDE Runtime on the device, so no additional communication ports are required to use these services. Review the source code in the directory and follow the sample description [[File Services Samples|here]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* most of the functions return an integer status code which should be checked - any nonzero status indicates a failure. We wrote a macro for this sample (&#039;&#039;fsASSERT&#039;&#039;) that checks return codes and adds any errors to the report. You might choose to do something similar, depending on your needs.&lt;br /&gt;
* the file transfer API has both byte and line oriented read/write functions. You can use whichever functions are most appropriate for your needs.&lt;br /&gt;
* this sample uses the local filesystem ([http://en.wikipedia.org/wiki/Stdio stdio]) to write some data to a tempfile - however the stride APIs are themselves buffer/byte oriented and don&#039;t require a local filesystem in general. If your device under test &#039;&#039;does&#039;&#039; have a filesystem, the STRIDE APIs can certainly be used to transfer resources to/from the device filesystem.&lt;br /&gt;
&lt;br /&gt;
===  test_in_c_cpp/TestDouble ===&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
=== Build the test app ===&lt;br /&gt;
&lt;br /&gt;
So that we can run these samples, let&#039;s now build an off-target test app that contains the sources under test -- you can follow the generic steps [[Off-Target_Test_App#Copy_Sample_Source|described here]]. When copying the source, make sure you take all the source files from all of the samples mentioned above.&lt;br /&gt;
&lt;br /&gt;
=== Run the tests ===&lt;br /&gt;
&lt;br /&gt;
Now launch the test app (if you have not already) and execute the runner  with the following commands: &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Test Class tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures;  s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple&amp;quot; --output=TestClass.xml &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;C Class tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testcclass_basic_fixtures; s2_testcclass_basic_parameterized(\&amp;quot;mystring\&amp;quot;, 8); s2_testcclass_basic_simple&amp;quot; --output=CClass.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;FList tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testflist_basic_fixtures; s2_testflist_basic_simple&amp;quot; --output=FList.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Test Point tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testpoint_basic&amp;quot; --log_level=all --output=TestPoint.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;File Services tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride  --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot;  --run=&amp;quot;s2_fileservices_basic&amp;quot; --output=FileServices.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These commands will produce distinct result files for each run (per the &#039;&#039;--output&#039;&#039; option above). Open each result file in your browser to peruse the results.&lt;br /&gt;
&lt;br /&gt;
=== Examine the results ===&lt;br /&gt;
&lt;br /&gt;
Open the result files created above and browse the results. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* the test documentation has been extracted (at compile time) and is attached to the results when the tests are executed. Most of the test suites/test cases should have documentation in the description field.&lt;br /&gt;
* the test point test cases show the test points that were encountered, information about failures (if any), and log messages (the latter only because we included the &amp;lt;tt&amp;gt;--log_level=all&amp;lt;/tt&amp;gt; option when executing the runner).&lt;br /&gt;
* The two parameterized tests -- &#039;&#039;s2_testcclass_basic_parameterized&#039;&#039; and &#039;&#039;s2_testclass::Basic::Parameterized&#039;&#039; -- both pass. We passed explicit arguments to the former (on the stride command line above) while we allowed the default arguments (0/null for all) to be passed to the latter by not explicitly specifying how it was to be called.&lt;br /&gt;
&lt;br /&gt;
Explore all the results and make sure that the results meet your expectations based on the test source that you&#039;ve previously browsed.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Tests_in_C/C%2B%2B&amp;diff=12698</id>
		<title>Training Tests in C/C++</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Tests_in_C/C%2B%2B&amp;diff=12698"/>
		<updated>2010-06-04T00:28:37Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* Run the tests */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
The STRIDE Framework provides support for implementation of tests in the native C/C++ of the device under test. Once written, these tests are compiled using the device toolchain and are harnessed, via the [[Intercept Module|STRIDE Intercept Module]], into one or more applications under test on the device. These tests have the unique advantage of executing in real-time on the device itself, allowing the tests to operate under actual device conditions during test.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding: &lt;br /&gt;
&lt;br /&gt;
* [[Test Units Overview|Overview]]&lt;br /&gt;
* [[Test Units]]&lt;br /&gt;
* [[Test Macros]]&lt;br /&gt;
* [[Test Point Testing in C/C++|Test Point Testing]]&lt;br /&gt;
* [[Test API | Test APIs]]&lt;br /&gt;
&lt;br /&gt;
== Why would I want to write tests in native code? ==&lt;br /&gt;
&lt;br /&gt;
Here are some of the scenarios for which on-target test harnessing is particularly advantageous:&lt;br /&gt;
&lt;br /&gt;
* direct API testing. If you want to validate native APIs by driving the APIs directly, native code is the simplest way to do so. STRIDE provides convenient [[Test Macros|assertion macros]] to validate your variable states. API testing can also be combined with native test point tests (using [[Test Point]] instrumentation) to provide deeper validation of expected behavior of the units under test.&lt;br /&gt;
* unit testing of C objects or C++ classes. The STRIDE native tests execute in the same context as the rest of your code, so it&#039;s possible to fully unit test any objects that can be created in your actual application code.&lt;br /&gt;
* validation logic that requires sensitive timing thresholds. Sometimes  it&#039;s only possible to validate tight timing scenarios on-target.&lt;br /&gt;
* high-volume data processing scenarios. In some cases, the volume of data being processed and validated for a particular test scenario is too large to be easily handled by an off-target harness. In that case, native test units provide a convenient way to write tests that validate that data without shipping the data to the host during testing.&lt;br /&gt;
&lt;br /&gt;
What&#039;s more, you might simply &#039;&#039;prefer&#039;&#039; to write your test logic in C or C++ (as opposed to perl on the host). If that&#039;s the case, we don&#039;t discourage you from using a toolset that you&#039;re more comfortable with - particularly if it enables you to start writing tests without a new language learning curve.&lt;br /&gt;
&lt;br /&gt;
== Are there any disadvantages? ==&lt;br /&gt;
&lt;br /&gt;
Sure. Most of the disadvantages of native on-target tests concern the device build process. If your device build is particularly slow (on the order of hours or days), then adding and running new tests can become a tedious waiting game. Testing is always well served by shorter build cycles, and on-target tests are particularly sensitive to this. &lt;br /&gt;
&lt;br /&gt;
In some cases, the additional code space requirements of native tests are a concern, but this is also mitigated by ever-increasing device storage capacities. On platforms that support multiple processes (e.g. embedded linux or WinMobile), it&#039;s possible to bundle tests into one or more separate test processes, which further mitigates the code space concern by isolating the test code in one or more separate applications.&lt;br /&gt;
&lt;br /&gt;
== Samples ==&lt;br /&gt;
&lt;br /&gt;
For this training, we will be using some of the samples provided in  the [[C/C++_Samples]]. For any sample that we don&#039;t cover here explicitly, feel free to explore the sample yourself. All of the samples can be easily built and executed using the STRIDE [[Off-Target Environment]]. &lt;br /&gt;
&lt;br /&gt;
The first three samples that we cover are introductions to the different test unit packaging mechanisms that we support in STRIDE. A good overview of the pros and cons for each type is presented [[Test_Units#Test_Units|here]]. The last sample we discuss is the TestPoint sample, which demonstrates test point testing in native code on target (i.e. both the generation and &#039;&#039;validation&#039;&#039; of the test points are done on target).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; each of the packaging examples includes samples that cover &#039;&#039;basic&#039;&#039; usage and more advanced reporting techniques (&#039;&#039;runtimeservices&#039;&#039;). We recommend for this training that you focus on the &#039;&#039;basic&#039;&#039; samples as they cover the important packaging concepts. The &#039;&#039;runtimeservices&#039;&#039; examples are relevant only if the built-in reporting techniques are not sufficient for your reporting needs.&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestClass ===&lt;br /&gt;
&lt;br /&gt;
This sample shows the techniques available for packaging and writing test units using classes. If you have a C++ capable compiler, we recommend that you use test classes to package your unit tests, even if your APIs under test are C only. Review the source code in the directory and follow the sample description [[Test Class Samples|here]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* all of these example classes have been put into one or more namespaces. This is just for organization purposes, mainly to avoid name collisions when built along with lots of other test classes. Your test classes are &#039;&#039;&#039;not&#039;&#039;&#039; required to be in namespaces, but it can be helpful in avoiding collisions as the number of tests in your system grows.&lt;br /&gt;
* we&#039;ve documented our test classes and methods using [http://www.stack.nl/~dimitri/doxygen/ doxygen] style comments. This documentation is automatically extracted by our tools and added to the results report - more information about this feature is [[Test_API#Test_Documentation|here]]&lt;br /&gt;
* you can optionally write test classes that inherit from a base class that we&#039;ve defined ([[Runtime_Test_Services#class_srTest|stride::srTest]]). We recommend you start by writing your classes this way so that your classes will inherit some methods and members that make some custom reporting tasks simpler.&lt;br /&gt;
* exceptions are generally handled by the STRIDE unit test harness, but can be disabled if your compiler does not support them (see &#039;&#039;s2_testclass_basic_exceptions_tests.h/cpp&#039;&#039;).&lt;br /&gt;
* parameterized tests are supported by test classes as well. In these tests, simple constructor arguments can be passed during execution and are available at runtime to the test unit. The STRIDE infrastructure handles the passing of the arguments to the device and the construction of the test class with these arguments. Parameterization of test classes can be a powerful way to expand your test coverage with data driven test scenarios (varying the input to a single test class).&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestFList ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates a simpler packaging technique that is appropriate for systems that support C compilation only (no C++). [[Test_Units#Test_Units|FLists]] are simply a collection of functions that are called in sequence. There is no shared state or data, unless you arrange to use global data for this purpose. Review the source code in the directory and follow the sample description [[Test Function List Samples|here]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* flist tests support setup/teardown fixturing, but &#039;&#039;&#039;not&#039;&#039;&#039; parameterization or exception handling.&lt;br /&gt;
* we&#039;ve again provided documentation using doxygen formatting for these samples. However, because there is no storage-class entity with which the docs are associated in an FList, there are some restrictions to the documentation, which you can read about [[Test_API#Test_FLists|here]].&lt;br /&gt;
* notice how the [[Scl_test_flist|scl_test_flist pragma]] requires you to both create a name for the test unit (first argument) &#039;&#039;&#039;and&#039;&#039;&#039; explicitly list each test method that is part of the unit. This is one disadvantage of flist over a test class (the latter does not require explicit listing of each test since all conforming public methods are assumed to be test methods).&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestCClass ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates a more sophisticated (and complicated) packaging technique for systems that support C compilation only. [[Test_Units#Test_Units|Test C Classes]] are defined by a structure of function pointers (which may also include data) and an initialization function. Review the source code in the directory and follow the sample description [[Test CClass Samples|here]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* the [[Scl_test_cclass|scl_test_cclass pragma]] requires a structure of function pointers as well as an initialization function that is called prior to running the tests. The initialization function must take the c class structure pointer as its first argument and it will assign values to all the function pointer elements as well as perform any other initialization tasks. The pragma also accepts an optional deinitialization function that will be called after test execution (if provided).&lt;br /&gt;
* we&#039;ve provided documentation using  doxygen formatting for these samples. Because the test functions themselves are bound at runtime, the test documentation must be associated with the function pointer elements in the structure - read more [[Test_API#Test_C-Classes|here]].&lt;br /&gt;
* parameterized tests are also supported by test c classes. Arguments to the initialization function that follow the structure pointer argument are considered constructor arguments and can be passed when running the test.&lt;br /&gt;
* because the test methods are assigned to the structure members at runtime, it&#039;s possible (and recommended) to use statically scoped functions, so as not to pollute the global function space with test functions. That said, you are free to use any functions with matching signatures and linkage, regardless of scope.&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestPoint ===&lt;br /&gt;
&lt;br /&gt;
The last sample we&#039;ll consider demonstrates how to do tests that validate [[Test_Point_Testing_in_C/C++|STRIDE Test Points]] - with native test code. Although test point tests can be written in host-based scripting languages as well, sometimes it&#039;s preferable to write (and execute) the test logic in native target code - for instance, when validating large or otherwise complex data payloads. Review the source code in the directory and follow the sample description [[Test Point Samples|here]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* test point tests can be packaged into a harness using any of the [[Test_Units#Test_Units|three types of test units]] that we support. In this case, we used an FList so that the sample could be used on systems that were not c++ capable.&lt;br /&gt;
* one of two methods is used to process the test: [[Test_Point_Testing_in_C/C++#srTestPointWait|srTestPointWait]] or [[Test_Point_Testing_in_C/C++#srTestPointCheck|srTestPointCheck]]. The former is used to process test points as they happen (with a specified timeout) and the latter is used to process test points that have already occurred at the time it is called (post completion check).&lt;br /&gt;
* due to the limitations of c syntax, it can be ugly to create the srTestPointExpect_t data, especially where user data validation is concerned (see the &#039;&#039;CheckData&#039;&#039; example, for instance).&lt;br /&gt;
&lt;br /&gt;
=== Build the test app ===&lt;br /&gt;
&lt;br /&gt;
So that we can run these samples, let&#039;s now build an off-target test app that contains the sources under test -- you can follow the generic steps [[Off-Target_Test_App#Copy_Sample_Source|described here]]. When copying the source, make sure you take all the source files from all four of the samples mentioned above.&lt;br /&gt;
&lt;br /&gt;
=== Run the tests ===&lt;br /&gt;
&lt;br /&gt;
Now launch the test app (if you have not already) and execute the runner  with the following commands: &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Test Class tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures;  s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple&amp;quot; --output=TestClass.xml &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;C Class tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testcclass_basic_fixtures; s2_testcclass_basic_parameterized(\&amp;quot;mystring\&amp;quot;, 8); s2_testcclass_basic_simple&amp;quot; --output=CClass.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;FList tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testflist_basic_fixtures; s2_testflist_basic_simple&amp;quot; --output=FList.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Test Point tests&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;s2_testpoint_basic&amp;quot; --log_level=all --output=TestPoint.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands will produce distinct result files for each run (per the &#039;&#039;--output&#039;&#039; option above). Open each result file in your browser to peruse the results.&lt;br /&gt;
&lt;br /&gt;
=== Examine the results ===&lt;br /&gt;
&lt;br /&gt;
Open the &#039;&#039;TestApp.xml&#039;&#039; file and browse the results. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* there are four top-level suites corresponding to the four samples we discussed above. The command arguments we passed to the runner created these top-level suites.&lt;br /&gt;
* the test documentation has been extracted (at compile time) and is attached to the results when the tests are executed. Most of the test suites/test cases should have documentation in the description field.&lt;br /&gt;
* the test point test cases show the test points that were encountered, information about failures (if any), and log messages (the latter only because we included the &amp;lt;tt&amp;gt;--log_level=all&amp;lt;/tt&amp;gt; option when executing the runner).&lt;br /&gt;
* The two parameterized tests -- &#039;&#039;s2_testcclass_basic_parameterized&#039;&#039; and &#039;&#039;s2_testclass::Basic::Parameterized&#039;&#039; -- both pass. We passed explicit arguments to the former (on the stride command line above) while we allowed the default arguments (0/null for all) to be passed to the latter by not explicitly specifying how it was to be called.&lt;br /&gt;
&lt;br /&gt;
Explore all the results and make sure that the results meet your expectations based on the test source that you&#039;ve previously browsed.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Running_Tests&amp;diff=12604</id>
		<title>Training Running Tests</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Running_Tests&amp;diff=12604"/>
		<updated>2010-06-01T18:36:41Z</updated>

		<summary type="html">&lt;p&gt;Mikee: moved ++WIP++ Training Running Tests to Training Running Tests&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
Device connection and STRIDE test execution are handled by the [[STRIDE Runner]] (aka &#039;&#039;&amp;quot;the runner&amp;quot;&#039;&#039;). The runner is a command line tool with a number of options for listing runnable items, executing tests, tracing on test points, and uploading your results. The runner is designed for use in both ad-hoc execution of tests and fully automated CI execution. You&#039;ve already seen some basic test execution scenarios in the other training sections - now we will look explicitly at several of the most common use-cases for the runner.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding: &lt;br /&gt;
&lt;br /&gt;
* [[STRIDE Runner|STRIDE Runner (reference)]]&lt;br /&gt;
* [[Running Tests]]&lt;br /&gt;
* [[Organizing Tests into Suites]]&lt;br /&gt;
* [[Tracing]]&lt;br /&gt;
&lt;br /&gt;
== Build a test app ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s begin by building an off-target test app to use for these examples. The sources we want to include in this app are &amp;lt;tt&amp;gt;test_in_script/Expectations&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;test_in_c_cpp/TestClass&amp;lt;/tt&amp;gt;. Copy these source files to your &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt; directory and follow [[Off-Target_Test_App#Build_Steps|these instructions for building]].&lt;br /&gt;
&lt;br /&gt;
== Listing items ==&lt;br /&gt;
&lt;br /&gt;
Listing the contents of the database is an easy way to see what test units and fixturing functions are available for execution. Open a command line and try the following&amp;lt;ref name=&amp;quot;database&amp;quot;&amp;gt;These examples assume you are executing the runner from the &amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt; directory of your off-target framework. If that&#039;s not the case, you will need to adjust the database path accordingly.&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --database=&amp;quot;../out/TestApp.sidb&amp;quot; --list&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should see output something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Functions&lt;br /&gt;
  Exp_DoStateChanges()&lt;br /&gt;
Test Units&lt;br /&gt;
  s2_testclass::Basic::Exceptions()&lt;br /&gt;
  s2_testclass::Basic::Fixtures()&lt;br /&gt;
  s2_testclass::Basic::Parameterized(char const * szString, unsigned int uExpectedLen)&lt;br /&gt;
  s2_testclass::Basic::Simple()&lt;br /&gt;
  s2_testclass::RuntimeServices::Dynamic()&lt;br /&gt;
  s2_testclass::RuntimeServices::Override()&lt;br /&gt;
  s2_testclass::RuntimeServices::Simple()&lt;br /&gt;
  s2_testclass::RuntimeServices::VarComment()&lt;br /&gt;
  s2_testclass::srTest::Dynamic()&lt;br /&gt;
  s2_testclass::srTest::Simple()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few things to notice: &lt;br /&gt;
* The Functions (if any) are listed before the Test Units.&lt;br /&gt;
* Function and Test Unit arguments (input params) will be shown, if any. The parameter types are shown for each.&lt;br /&gt;
&lt;br /&gt;
== Tracing on test points ==&lt;br /&gt;
&lt;br /&gt;
Tracing using the runner will show any STRIDE Test Points that are generated on the device during the window of time that the runner is connected. If you have test points that are continuously being emitted (for instance, in some background thread), then you can just connect to the device with tracing enabled to see them (you&#039;ll need to specify a &amp;lt;tt&amp;gt;--trace_timeout&amp;lt;/tt&amp;gt; parameter to tell the runner how long to trace for). If your test points require some fixturing to be hit, then you&#039;ll need to specify a script to execute that makes the necessary fixture calls. This is precisely what we did in our previous [[Training_Instrumentation#Build, Run, and Trace|Instrumentation training]]. If you recall from that training, we did the following&amp;lt;ref name=&amp;quot;database&amp;quot;/&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=do_state_changes.pl --trace &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should see output resembling this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  script &amp;quot;C:\s2\seaside\SDK\Windows\src\do_state_changes.pl&amp;quot;&lt;br /&gt;
1032564500 POINT &amp;quot;SET_NEW_STATE&amp;quot; - START [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032564501 POINT &amp;quot;START&amp;quot; [../sample_src/s2_expectations_source.c:63]&lt;br /&gt;
1032574600 POINT &amp;quot;SET_NEW_STATE&amp;quot; - IDLE [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032574601 POINT &amp;quot;IDLE&amp;quot; - 02 00 00 00 [../sample_src/s2_expectations_source.c:78]&lt;br /&gt;
1032584600 POINT &amp;quot;SET_NEW_STATE&amp;quot; - ACTIVE [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032584601 POINT &amp;quot;ACTIVE&amp;quot; - 03 00 00 00 [../sample_src/s2_expectations_source.c:101]&lt;br /&gt;
1032594700 POINT &amp;quot;ACTIVE Previous State&amp;quot; - IDLE [../sample_src/s2_expectations_source.c:103]&lt;br /&gt;
1032594701 POINT &amp;quot;JSON_DATA&amp;quot; - {&amp;quot;string_field&amp;quot;: &amp;quot;a-string-value&amp;quot;, &amp;quot;int_field&amp;quot;: 42, &amp;quot;bool_field&amp;quot;: true, &amp;quot;hex_field&amp;quot;: &amp;quot;0xDEADBEEF&amp;quot;} [../sample_src/s2_expectations_source.c:105]&lt;br /&gt;
1032604800 POINT &amp;quot;SET_NEW_STATE&amp;quot; - IDLE [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032604801 POINT &amp;quot;IDLE&amp;quot; - 04 00 00 00 [../sample_src/s2_expectations_source.c:78]&lt;br /&gt;
1032614800 POINT &amp;quot;SET_NEW_STATE&amp;quot; - END [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032614801 POINT &amp;quot;END&amp;quot; - 05 00 00 00 [../sample_src/s2_expectations_source.c:117]&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  ---------------------------------------------------------------------&lt;br /&gt;
  Summary: 0 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s trace again, but include a filter expression for the test points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride  --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot;  --run=do_state_changes.pl --trace=&amp;quot;ACTIVE.*&amp;quot; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and now you should see fewer trace points emitted:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  script &amp;quot;C:\s2\seaside\SDK\Windows\src\do_state_changes.pl&amp;quot;&lt;br /&gt;
1047379801 POINT &amp;quot;ACTIVE&amp;quot; - 03 00 00 00 [../sample_src/s2_expectations_source.c:101]&lt;br /&gt;
1047389800 POINT &amp;quot;ACTIVE Previous State&amp;quot; - IDLE [../sample_src/s2_expectations_source.c:103]&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  ---------------------------------------------------------------------&lt;br /&gt;
  Summary: 0 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;--trace&amp;lt;/tt&amp;gt; argument accepts a filter expression in the form of a [http://en.wikipedia.org/wiki/Regular_expression regular expression] that is applied to the test point label. In this case, we&#039;ve specified a filter that permits any test points that begin with &#039;&#039;ACTIVE&#039;&#039;. Filtering gives you a convenient way to quickly inspect specific behavioral aspects of your STRIDE-instrumented software.&lt;br /&gt;
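&lt;br /&gt;
For example, to see only the state-transition points labeled &#039;&#039;SET_NEW_STATE&#039;&#039; in the trace above, you could use a filter like this (illustrative only - any regular expression that matches your labels will work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=do_state_changes.pl --trace=&amp;quot;SET_NEW_STATE.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;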
&lt;br /&gt;
== Organizing with suites ==&lt;br /&gt;
&lt;br /&gt;
Now let&#039;s look briefly at how you can use the runner to organize subsets of test units into suites. First, let&#039;s run our current set of test units without any explicit suite hierarchy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot;  --run=*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you examine the results file, you will see that this creates the default flat hierarchy with each test unit&#039;s corresponding suite at the root level of the report.&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s try grouping our tests into suites:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;/BasicTests{s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures; s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple}&amp;quot; --run=&amp;quot;/srTest{s2_testclass::srTest::Dynamic; s2_testclass::srTest::Simple}&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now when you view the results, you will see two top-level suites - &#039;&#039;&#039;BasicTests&#039;&#039;&#039; and &#039;&#039;&#039;srTest&#039;&#039;&#039; - and within those are the suites for the test units that we specified to be within each suite.&lt;br /&gt;
&lt;br /&gt;
If you plan to use this functionality to organize your tests into subsuites, we recommend that you create options files to specify test unit groupings. This makes it easier to update and manage the suite hierarchy for your tests.&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Running_Tests&amp;diff=12603</id>
		<title>Training Running Tests</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Running_Tests&amp;diff=12603"/>
		<updated>2010-06-01T18:36:19Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
Device connection and STRIDE test execution are handled by the [[STRIDE Runner]] (aka &#039;&#039;&amp;quot;the runner&amp;quot;&#039;&#039;). The runner is a command line tool with a number of options for listing runnable items, executing tests, tracing on test points, and uploading your results. The runner is designed for use in both ad-hoc execution of tests and fully automated CI execution. You&#039;ve already seen some basic test execution scenarios in the other training sections - now we will look explicitly at several of the most common use-cases for the runner.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding: &lt;br /&gt;
&lt;br /&gt;
* [[STRIDE Runner|STRIDE Runner (reference)]]&lt;br /&gt;
* [[Running Tests]]&lt;br /&gt;
* [[Organizing Tests into Suites]]&lt;br /&gt;
* [[Tracing]]&lt;br /&gt;
&lt;br /&gt;
== Build a test app ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s begin by building an off-target test app to use for these examples. The sources we want to include in this app are &amp;lt;tt&amp;gt;test_in_script/Expectations&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;test_in_c_cpp/TestClass&amp;lt;/tt&amp;gt;. Copy these source files to your &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt; directory and follow [[Off-Target_Test_App#Build_Steps|these instructions for building]].&lt;br /&gt;
&lt;br /&gt;
== Listing items ==&lt;br /&gt;
&lt;br /&gt;
Listing the contents of the database is an easy way to see what test units and fixturing functions are available for execution. Open a command line and try the following&amp;lt;ref name=&amp;quot;database&amp;quot;&amp;gt;These examples assume you are executing the runner from the &amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt; directory of your off-target framework. If that&#039;s not the case, you will need to adjust the database path accordingly.&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --database=&amp;quot;../out/TestApp.sidb&amp;quot; --list&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should see output something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Functions&lt;br /&gt;
  Exp_DoStateChanges()&lt;br /&gt;
Test Units&lt;br /&gt;
  s2_testclass::Basic::Exceptions()&lt;br /&gt;
  s2_testclass::Basic::Fixtures()&lt;br /&gt;
  s2_testclass::Basic::Parameterized(char const * szString, unsigned int uExpectedLen)&lt;br /&gt;
  s2_testclass::Basic::Simple()&lt;br /&gt;
  s2_testclass::RuntimeServices::Dynamic()&lt;br /&gt;
  s2_testclass::RuntimeServices::Override()&lt;br /&gt;
  s2_testclass::RuntimeServices::Simple()&lt;br /&gt;
  s2_testclass::RuntimeServices::VarComment()&lt;br /&gt;
  s2_testclass::srTest::Dynamic()&lt;br /&gt;
  s2_testclass::srTest::Simple()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few things to notice: &lt;br /&gt;
* The Functions (if any) are listed before the Test Units.&lt;br /&gt;
* Function and Test Unit arguments (input params) will be shown, if any. The parameter types are shown for each.&lt;br /&gt;
&lt;br /&gt;
== Tracing on test points ==&lt;br /&gt;
&lt;br /&gt;
Tracing using the runner will show any STRIDE Test Points that are generated on the device during the window of time that the runner is connected. If you have test points that are continuously being emitted (for instance, in some background thread), then you can just connect to the device with tracing enabled to see them (you&#039;ll need to specify a &amp;lt;tt&amp;gt;--trace_timeout&amp;lt;/tt&amp;gt; parameter to tell the runner how long to trace for). If your test points require some fixturing to be hit, then you&#039;ll need to specify a script to execute that makes the necessary fixture calls. This is precisely what we did in our previous [[Training_Instrumentation#Build, Run, and Trace|Instrumentation training]]. If you recall from that training, we did the following&amp;lt;ref name=&amp;quot;database&amp;quot;/&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=do_state_changes.pl --trace &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should see output resembling this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  script &amp;quot;C:\s2\seaside\SDK\Windows\src\do_state_changes.pl&amp;quot;&lt;br /&gt;
1032564500 POINT &amp;quot;SET_NEW_STATE&amp;quot; - START [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032564501 POINT &amp;quot;START&amp;quot; [../sample_src/s2_expectations_source.c:63]&lt;br /&gt;
1032574600 POINT &amp;quot;SET_NEW_STATE&amp;quot; - IDLE [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032574601 POINT &amp;quot;IDLE&amp;quot; - 02 00 00 00 [../sample_src/s2_expectations_source.c:78]&lt;br /&gt;
1032584600 POINT &amp;quot;SET_NEW_STATE&amp;quot; - ACTIVE [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032584601 POINT &amp;quot;ACTIVE&amp;quot; - 03 00 00 00 [../sample_src/s2_expectations_source.c:101]&lt;br /&gt;
1032594700 POINT &amp;quot;ACTIVE Previous State&amp;quot; - IDLE [../sample_src/s2_expectations_source.c:103]&lt;br /&gt;
1032594701 POINT &amp;quot;JSON_DATA&amp;quot; - {&amp;quot;string_field&amp;quot;: &amp;quot;a-string-value&amp;quot;, &amp;quot;int_field&amp;quot;: 42, &amp;quot;bool_field&amp;quot;: true, &amp;quot;hex_field&amp;quot;: &amp;quot;0xDEADBEEF&amp;quot;} [../sample_src/s2_expectations_source.c:105]&lt;br /&gt;
1032604800 POINT &amp;quot;SET_NEW_STATE&amp;quot; - IDLE [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032604801 POINT &amp;quot;IDLE&amp;quot; - 04 00 00 00 [../sample_src/s2_expectations_source.c:78]&lt;br /&gt;
1032614800 POINT &amp;quot;SET_NEW_STATE&amp;quot; - END [../sample_src/s2_expectations_source.c:49]&lt;br /&gt;
1032614801 POINT &amp;quot;END&amp;quot; - 05 00 00 00 [../sample_src/s2_expectations_source.c:117]&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  ---------------------------------------------------------------------&lt;br /&gt;
  Summary: 0 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s trace again, but include a filter expression for the test points:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride  --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot;  --run=do_state_changes.pl --trace=&amp;quot;ACTIVE.*&amp;quot; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and now you should see fewer trace points emitted:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Loading database...&lt;br /&gt;
Connecting to device...&lt;br /&gt;
Executing...&lt;br /&gt;
  script &amp;quot;C:\s2\seaside\SDK\Windows\src\do_state_changes.pl&amp;quot;&lt;br /&gt;
1047379801 POINT &amp;quot;ACTIVE&amp;quot; - 03 00 00 00 [../sample_src/s2_expectations_source.c:101]&lt;br /&gt;
1047389800 POINT &amp;quot;ACTIVE Previous State&amp;quot; - IDLE [../sample_src/s2_expectations_source.c:103]&lt;br /&gt;
    &amp;gt; 0 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
  ---------------------------------------------------------------------&lt;br /&gt;
  Summary: 0 passed, 0 failed, 0 in progress, 0 not in use.&lt;br /&gt;
&lt;br /&gt;
Disconnecting from device...&lt;br /&gt;
Saving result file...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;--trace&amp;lt;/tt&amp;gt; argument accepts a filter expression in the form of a [http://en.wikipedia.org/wiki/Regular_expression regular expression] that is applied to the test point label. In this case, we&#039;ve specified a filter that permits any test points that begin with &#039;&#039;ACTIVE&#039;&#039;. Filtering gives you a convenient way to quickly inspect specific behavioral aspects of your STRIDE-instrumented software.&lt;br /&gt;
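&lt;br /&gt;
For example, to see only the state-transition points labeled &#039;&#039;SET_NEW_STATE&#039;&#039; in the trace above, you could use a filter like this (illustrative only - any regular expression that matches your labels will work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=do_state_changes.pl --trace=&amp;quot;SET_NEW_STATE.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;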
&lt;br /&gt;
== Organizing with suites ==&lt;br /&gt;
&lt;br /&gt;
Now let&#039;s look briefly at how you can use the runner to organize subsets of test units into suites. First, let&#039;s run our current set of test units without any explicit suite hierarchy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot;  --run=*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you examine the results file, you will see that this creates the default flat hierarchy with each test unit&#039;s corresponding suite at the root level of the report.&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s try grouping our tests into suites:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;/BasicTests{s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures; s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple}&amp;quot; --run=&amp;quot;/srTest{s2_testclass::srTest::Dynamic; s2_testclass::srTest::Simple}&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now when you view the results, you will see two top-level suites - &#039;&#039;&#039;BasicTests&#039;&#039;&#039; and &#039;&#039;&#039;srTest&#039;&#039;&#039; - and within those are the suites for the test units that we specified to be within each suite.&lt;br /&gt;
&lt;br /&gt;
If you plan to use this functionality to organize your tests into subsuites, we recommend that you create options files to specify test unit groupings. This makes it easier to update and manage the suite hierarchy for your tests.&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Tests_in_C/C%2B%2B&amp;diff=12601</id>
		<title>Training Tests in C/C++</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Tests_in_C/C%2B%2B&amp;diff=12601"/>
		<updated>2010-06-01T18:01:54Z</updated>

		<summary type="html">&lt;p&gt;Mikee: moved ++WIP++ Training Tests in C/C++ to Training Tests in C/C++&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
The STRIDE Framework provides support for implementation of tests in the native C/C++ of the device under test. Once written, these tests are compiled using the device toolchain and are harnessed, via the [[Intercept Module|STRIDE Intercept Module]], into one or more applications under test on the device. These tests have the unique advantage of executing in real-time on the device itself, allowing the tests to operate under actual device conditions during test.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding: &lt;br /&gt;
&lt;br /&gt;
* [[Test Units Overview|Overview]]&lt;br /&gt;
* [[Test Units]]&lt;br /&gt;
* [[Test Macros]]&lt;br /&gt;
* [[Test Point Testing in C/C++|Test Point Testing]]&lt;br /&gt;
&lt;br /&gt;
== Why would I want to write tests in native code ? ==&lt;br /&gt;
&lt;br /&gt;
Here are some of the scenarios for which on-target test harnessing is particularly advantageous:&lt;br /&gt;
&lt;br /&gt;
* direct API testing. If you want to validate native APIs by driving the APIs directly, native code is the simplest way to do so. STRIDE provides convenient [[Test Macros|assertion macros]] to validate your variable states. API testing can also be combined with native test point tests (using [[Test Point]] instrumentation) to provide deeper validation of expected behavior of the units under test.&lt;br /&gt;
* unit testing of C objects or C++ classes. The STRIDE native tests execute in the same context as the rest of your code, so it&#039;s possible to fully unit test any objects that can be created in your actual application code.&lt;br /&gt;
* validation logic that requires sensitive timing thresholds. Sometimes  it&#039;s only possible to validate tight timing scenarios on-target.&lt;br /&gt;
* high-volume data processing scenarios. In some cases, the volume of data being processed and validated for a particular test scenario is too large to be easily handled by an off-target harness. In that case, native test units provide a convenient way to write tests that validate that data without shipping the data to the host during testing.&lt;br /&gt;
&lt;br /&gt;
What&#039;s more, you might simply &#039;&#039;prefer&#039;&#039; to write your test logic in C or C++ (as opposed to perl on the host). If that&#039;s the case, we don&#039;t discourage you from using a toolset that you&#039;re more comfortable with - particularly if it enables you to start writing tests without a new language learning curve.&lt;br /&gt;
&lt;br /&gt;
== Are there any disadvantages ? ==&lt;br /&gt;
&lt;br /&gt;
Sure. Most of the disadvantages of native on-target tests concern the device build process. If your device build is particularly slow (on the order of hours or days), then adding and running new tests can become a tedious waiting game. Testing is always well served by shorter build cycles, and on-target tests are particularly sensitive to this. &lt;br /&gt;
&lt;br /&gt;
In some cases, the additional code space requirements of native tests are a concern, but this is also mitigated by ever-increasing device storage capacities. On platforms that support multiple processes (e.g. embedded Linux or WinMobile), it&#039;s possible to bundle tests into one or more separate test processes, which further mitigates the code space concern by isolating the test code in separate applications.&lt;br /&gt;
&lt;br /&gt;
== Samples ==&lt;br /&gt;
&lt;br /&gt;
For this training, we will be using some of the samples provided in  the [[C/C++_Samples]]. For any sample that we don&#039;t cover here explicitly, feel free to explore the sample yourself. All of the samples can be easily built and executed using the STRIDE off-target framework. &lt;br /&gt;
&lt;br /&gt;
The first three samples that we cover are introductions to the different test unit packaging mechanisms that we support in STRIDE. A good overview of the pros and cons for each type is presented [[Test_Units#Test_Units|here]]. The last sample we discuss is the TestPoint sample, which demonstrates test point testing in native code on target (i.e. both the generation and &#039;&#039;validation&#039;&#039; of the test points are done on target).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; each of the packaging examples includes samples that cover &#039;&#039;basic&#039;&#039; usage and more advanced reporting techniques (&#039;&#039;runtimeservices&#039;&#039;). We recommend for this training that you focus on the &#039;&#039;basic&#039;&#039; samples as they cover the important packaging concepts. The &#039;&#039;runtimeservices&#039;&#039; examples are relevant only if the built-in reporting techniques are not sufficient for your reporting needs.&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestClass ===&lt;br /&gt;
&lt;br /&gt;
This sample shows the techniques available for packaging and writing test units using classes. If you have a C++ capable compiler, we recommend that you use test classes to package your unit tests, even if your APIs under test are C only. Review the source code in the directory and follow the sample description [[Test Class Samples|here]]; a brief illustrative sketch also follows the observations below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* all of these example classes have been put into one or more namespaces. This is just for organization purposes, mainly to avoid name collisions when built along with lots of other test classes. Your test classes are &#039;&#039;&#039;not&#039;&#039;&#039; required to be in namespaces, but it can be helpful in avoiding collisions as the number of tests in your system grows.&lt;br /&gt;
* we&#039;ve documented our test classes and methods using [http://www.stack.nl/~dimitri/doxygen/ doxygen] style comments. This documentation is automatically extracted by our tools and added to the results report - more information about this feature is [[Test_API#Test_Documentation|here]]&lt;br /&gt;
* you can optionally write test classes that inherit from a base class that we&#039;ve defined ([[Runtime_Test_Services#class_srTest|stride::srTest]]). We recommend you start by writing your classes this way so that your classes will inherit some methods and members that make some custom reporting tasks simpler.&lt;br /&gt;
* exceptions are generally handled by the STRIDE unit test harness, but can be disabled if your compiler does not support them (see &#039;&#039;s2_testclass_basic_exceptions_tests.h/cpp&#039;&#039;).&lt;br /&gt;
* parameterized tests are supported by test classes as well. In these tests, simple constructor arguments can be passed during execution and are available at runtime to the test unit. The STRIDE infrastructure handles the passing of the arguments to the device and the construction of the test class with these arguments. Parameterization of test classes can be a powerful way to expand your test coverage with data driven test scenarios (varying the input to a single test class).&lt;br /&gt;
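&lt;br /&gt;
To make the packaging concrete, here is a minimal sketch of a test class. It is illustrative only - the class, method, and member names are hypothetical, the method signatures and pass/fail conventions may differ from what the harness actually requires, and the class still needs to be identified to STRIDE as a test unit (see the [[Test Class Samples]] and [[Test Macros]] articles for the exact requirements):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/// @brief Illustrative test class for a hypothetical widget module.&lt;br /&gt;
namespace my_tests&lt;br /&gt;
{&lt;br /&gt;
    class WidgetTests   // optionally derive from stride::srTest&lt;br /&gt;
    {&lt;br /&gt;
    public:&lt;br /&gt;
        /// Constructor arguments (if any) make this a parameterized test unit.&lt;br /&gt;
        WidgetTests(const char* szInput = 0, unsigned int uExpected = 0)&lt;br /&gt;
            : mszInput(szInput), muExpected(uExpected) {}&lt;br /&gt;
&lt;br /&gt;
        /// Each conforming public method is treated as a test case.&lt;br /&gt;
        void Simple()&lt;br /&gt;
        {&lt;br /&gt;
            // exercise the API under test and validate results here,&lt;br /&gt;
            // typically with the STRIDE assertion macros (see [[Test Macros]]).&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        /// Another test case; this one uses the constructor arguments.&lt;br /&gt;
        void Parameterized()&lt;br /&gt;
        {&lt;br /&gt;
            // validate behavior against mszInput / muExpected here.&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    private:&lt;br /&gt;
        const char* mszInput;&lt;br /&gt;
        unsigned int muExpected;&lt;br /&gt;
    };&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;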
&lt;br /&gt;
=== test_in_c_cpp/TestFList ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates a simpler packaging technique that is appropriate for systems that support C compilation only (no C++). [[Test_Units#Test_Units|FLists]] are simply collections of functions that are called in sequence. There is no shared state or data, unless you arrange to use global data for this purpose. Review the source code in the directory and follow the sample description [[Test Function List Samples|here]]; a brief illustrative sketch also follows the observations below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* flist tests support setup/teardown fixturing, but &#039;&#039;&#039;not&#039;&#039;&#039; parameterization or exception handling.&lt;br /&gt;
* we&#039;ve again provided documentation using doxygen formatting for these samples. However, because there is no storage-class entity with which the docs are associated in an FList, there are some restrictions to the documentation, which you can read about [[Test_API#Test_FLists|here]].&lt;br /&gt;
* notice how the [[Scl_test_flist|scl_test_flist pragma]] requires you to both create a name for the test unit (first argument) &#039;&#039;&#039;and&#039;&#039;&#039; explicitly list each test method that is part of the unit. This is one disadvantage of flist over a test class (the latter does not require explicit listing of each test since all conforming public methods are assumed to be test methods).&lt;br /&gt;
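&lt;br /&gt;
Here is a minimal sketch of the flist shape. It is illustrative only - the function names and signatures are hypothetical, and the pragma syntax that binds them into a test unit is documented in [[Scl_test_flist]]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* Illustrative flist: plain functions executed in sequence, with optional&lt;br /&gt;
   setup/teardown fixturing. */&lt;br /&gt;
static void WidgetSetup(void)    { /* per-test setup */ }&lt;br /&gt;
static void WidgetTeardown(void) { /* per-test cleanup */ }&lt;br /&gt;
&lt;br /&gt;
static void WidgetTestOne(void)  { /* exercise and validate (see [[Test Macros]]) */ }&lt;br /&gt;
static void WidgetTestTwo(void)  { /* exercise and validate */ }&lt;br /&gt;
&lt;br /&gt;
/* The scl_test_flist pragma names the test unit and explicitly lists the&lt;br /&gt;
   setup/teardown and test functions above - see [[Scl_test_flist]] for the&lt;br /&gt;
   exact syntax. */&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;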
&lt;br /&gt;
=== test_in_c_cpp/TestCClass ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates a more sophisticated (and complicated) packaging technique for systems that support C compilation only. [[Test_Units#Test_Units|Test C Classes]] are defined by a structure of function pointers (which may also include data) and an initialization function. Review the source code in the directory and follow the sample description [[Test CClass Samples|here]]; a brief illustrative sketch also follows the observations below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* the [[Scl_test_cclass|scl_test_cclass pragma]] requires a structure of function pointers as well as an initialization function that is called prior to running the tests. The initialization function must take the C class structure pointer as its first argument; it assigns values to all the function pointer elements and performs any other initialization tasks. The pragma also accepts an optional deinitialization function that will be called after test execution (if provided).&lt;br /&gt;
* we&#039;ve provided documentation using  doxygen formatting for these samples. Because the test functions themselves are bound at runtime, the test documentation must be associated with the function pointer elements in the structure - read more [[Test_API#Test_C-Classes|here]].&lt;br /&gt;
* parameterized tests are also supported by test c classes. Arguments to the initialization function that follow the structure pointer argument are considered constructor arguments and can be passed when running the test.&lt;br /&gt;
* because the test methods are assigned to the structure members at runtime, it&#039;s possible (and recommended) to use statically scoped functions, so as not to pollute the global function space with test functions. That said, you are free to use any functions with matching signatures and linkage, regardless of scope.&lt;br /&gt;
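&lt;br /&gt;
Here is a minimal sketch of the C class shape described above. It is illustrative only - the names, return types, and signatures are hypothetical, and the pragma syntax that binds the structure and initialization function into a test unit is documented in [[Scl_test_cclass]]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* Illustrative C test class: a structure of function pointers plus an&lt;br /&gt;
   initialization function. */&lt;br /&gt;
typedef struct WidgetCClass&lt;br /&gt;
{&lt;br /&gt;
    /* each function pointer member is a test case */&lt;br /&gt;
    void (*Simple)(void);&lt;br /&gt;
    void (*Boundary)(void);&lt;br /&gt;
} WidgetCClass;&lt;br /&gt;
&lt;br /&gt;
static void WidgetSimple(void)   { /* exercise and validate */ }&lt;br /&gt;
static void WidgetBoundary(void) { /* exercise and validate */ }&lt;br /&gt;
&lt;br /&gt;
/* Initialization function: takes the structure pointer as its first argument&lt;br /&gt;
   and assigns the function pointer members. Additional arguments (if any)&lt;br /&gt;
   act as constructor parameters for parameterized runs. */&lt;br /&gt;
static void WidgetCClass_Init(WidgetCClass* pSelf)&lt;br /&gt;
{&lt;br /&gt;
    pSelf-&amp;gt;Simple   = WidgetSimple;&lt;br /&gt;
    pSelf-&amp;gt;Boundary = WidgetBoundary;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;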
&lt;br /&gt;
=== test_in_c_cpp/TestPoint ===&lt;br /&gt;
&lt;br /&gt;
The last sample we&#039;ll consider demonstrates how to write tests that validate [[Test_Point_Testing_in_C/C++|STRIDE Test Points]] - with native test code. Although test point tests can be written in host-based scripting languages as well, sometimes it&#039;s preferable to write (and execute) the test logic in native target code - for instance, when validating large or otherwise complex data payloads. Review the source code in the directory and follow the sample description [[Test Point Samples|here]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* test point tests can be packaged into a harness using any of the [[Test_Units#Test_Units|three types of test units]] that we support. In this case, we used an FList so that the sample could be used on systems that were not c++ capable.&lt;br /&gt;
* one of two methods is used to process the test: [[Test_Point_Testing_in_C/C++#srTestPointWait|srTestPointWait]] or [[Test_Point_Testing_in_C/C++#srTestPointCheck|srTestPointCheck]]. The former is used to process test points as they happen (with a specified timeout) and the latter is used to process test points that have already occurred at the time it is called (post completion check).&lt;br /&gt;
* due to the limitations of c syntax, it can be ugly to create the srTestPointExpect_t data, especially where user data validation is concerned (see the &#039;&#039;CheckData&#039;&#039; example, for instance).&lt;br /&gt;
&lt;br /&gt;
=== Build the test app ===&lt;br /&gt;
&lt;br /&gt;
So that we can run these samples, let&#039;s now build an off-target test app that contains the source under test -- you can follow the generic steps [[Off-Target_Test_App#Build_Steps|described here]]. When copying the source, make sure you take all the source files from all four of the samples mentioned above.&lt;br /&gt;
&lt;br /&gt;
=== Run the tests ===&lt;br /&gt;
&lt;br /&gt;
Now launch the test app (if you have not already) and execute the runner  with the following command: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;/TestPoint{s2_testpoint_basic}&amp;quot; --run=&amp;quot;/Test FList{s2_testflist_basic_fixtures; s2_testflist_basic_simple}&amp;quot; --run=&amp;quot;/Test C Class{s2_testcclass_basic_fixtures; s2_testcclass_basic_parameterized(\&amp;quot;mystring\&amp;quot;, 8); s2_testcclass_basic_simple}&amp;quot; --run=&amp;quot;/Test Class{s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures;  s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple}&amp;quot; --log_level=all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command line organizes the tests in the four samples above into suites for easier browsing.&lt;br /&gt;
&lt;br /&gt;
=== Examine the results ===&lt;br /&gt;
&lt;br /&gt;
Open the &#039;&#039;TestApp.xml&#039;&#039; file and browse the results. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* there are four top-level suites corresponding to the four samples we discussed above. The command arguments we passed to the runner created these top-level suites.&lt;br /&gt;
* the test documentation has been extracted (at compile time) and is attached to the results when the tests are executed. Most of the test suites/test cases should have documentation in the description field.&lt;br /&gt;
* the test point test cases show the test points that were encountered, information about failures (if any), and log messages (the latter only because we included the &amp;lt;tt&amp;gt;--log_level=all&amp;lt;/tt&amp;gt; option when executing the runner).&lt;br /&gt;
* The two parameterized tests -- &#039;&#039;s2_testcclass_basic_parameterized&#039;&#039; and &#039;&#039;s2_testclass::Basic::Parameterized&#039;&#039; -- both pass. We passed explicit arguments to the former (on the stride command line above) while we allowed the default arguments (0/null for all) to be passed to the latter by not explicitly specifying how it was to be called.&lt;br /&gt;
&lt;br /&gt;
Explore all the results and make sure that the results meet your expectations based on the test source that you&#039;ve previously browsed.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Tests_in_C/C%2B%2B&amp;diff=12600</id>
		<title>Training Tests in C/C++</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Tests_in_C/C%2B%2B&amp;diff=12600"/>
		<updated>2010-06-01T18:01:40Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
The STRIDE Framework provides support for implementation of tests in the native C/C++ of the device under test. Once written, these tests are compiled using the device toolchain and are harnessed, via the [[Intercept Module|STRIDE Intercept Module]], into one or more applications under test on the device. These tests have the unique advantage of executing in real-time on the device itself, allowing the tests to operate under actual device conditions during test.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding: &lt;br /&gt;
&lt;br /&gt;
* [[Test Units Overview|Overview]]&lt;br /&gt;
* [[Test Units]]&lt;br /&gt;
* [[Test Macros]]&lt;br /&gt;
* [[Test Point Testing in C/C++|Test Point Testing]]&lt;br /&gt;
&lt;br /&gt;
== Why would I want to write tests in native code ? ==&lt;br /&gt;
&lt;br /&gt;
Here are some of the scenarios for which on-target test harnessing is particularly advantageous:&lt;br /&gt;
&lt;br /&gt;
* direct API testing. If you want to validate native APIs by driving the APIs directly, native code is the simplest way to do so. STRIDE provides convenient [[Test Macros|assertion macros]] to validate your variable states. API testing can also be combined with native test point tests (using [[Test Point]] instrumentation) to provide deeper validation of expected behavior of the units under test.&lt;br /&gt;
* unit testing of C objects or C++ classes. The STRIDE native tests execute in the same context as the rest of your code, so it&#039;s possible to fully unit test any objects that can be created in your actual application code.&lt;br /&gt;
* validation logic that requires sensitive timing thresholds. Sometimes  it&#039;s only possible to validate tight timing scenarios on-target.&lt;br /&gt;
* high-volume data processing scenarios. In some cases, the volume of data being processed and validated for a particular test scenario is too large to be easily handled by an off-target harness. In that case, native test units provide a convenient way to write tests that validate that data without shipping the data to the host during testing.&lt;br /&gt;
&lt;br /&gt;
What&#039;s more, you might simply &#039;&#039;prefer&#039;&#039; to write your test logic in C or C++ (as opposed to perl on the host). If that&#039;s the case, we don&#039;t discourage you from using a toolset that you&#039;re more comfortable with - particularly if it enables you to start writing tests without a new language learning curve.&lt;br /&gt;
&lt;br /&gt;
== Are there any disadvantages ? ==&lt;br /&gt;
&lt;br /&gt;
Sure. Most of the disadvantages of native on-target tests concern the device build process. If your device build is particularly slow (on the order of hours or days), then adding and running new tests can become a tedious waiting game. Testing is always well served by shorter build cycles, and on-target tests are particularly sensitive to this. &lt;br /&gt;
&lt;br /&gt;
In some cases, the additional code space requirements of native tests are a concern, but this is also mitigated by ever-increasing device storage capacities. On platforms that support multiple processes (e.g. embedded Linux or WinMobile), it&#039;s possible to bundle tests into one or more separate test processes, which further mitigates the code space concern by isolating the test code in separate applications.&lt;br /&gt;
&lt;br /&gt;
== Samples ==&lt;br /&gt;
&lt;br /&gt;
For this training, we will be using some of the samples provided in  the [[C/C++_Samples]]. For any sample that we don&#039;t cover here explicitly, feel free to explore the sample yourself. All of the samples can be easily built and executed using the STRIDE off-target framework. &lt;br /&gt;
&lt;br /&gt;
The first three samples that we cover are introductions to the different test unit packaging mechanisms that we support in STRIDE. A good overview of the pros and cons for each type is presented [[Test_Units#Test_Units|here]]. The last sample we discuss is the TestPoint sample, which demonstrates test point testing in native code on target (i.e. both the generation and &#039;&#039;validation&#039;&#039; of the test points are done on target).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; each of the packaging examples includes samples that cover &#039;&#039;basic&#039;&#039; usage and more advanced reporting techniques (&#039;&#039;runtimeservices&#039;&#039;). We recommend for this training that you focus on the &#039;&#039;basic&#039;&#039; samples as they cover the important packaging concepts. The &#039;&#039;runtimeservices&#039;&#039; examples are relevant only if the built-in reporting techniques are not sufficient for your reporting needs.&lt;br /&gt;
&lt;br /&gt;
=== test_in_c_cpp/TestClass ===&lt;br /&gt;
&lt;br /&gt;
This sample shows the techniques available for packaging and writing test units using classes. If you have a C++ capable compiler, we recommend that you use test classes to package your unit tests, even if your APIs under test are C only. Review the source code in the directory and follow the sample description [[Test Class Samples|here]]; a brief illustrative sketch also follows the observations below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* all of these example classes have been put into one or more namespaces. This is just for organization purposes, mainly to avoid name collisions when built along with lots of other test classes. Your test classes are &#039;&#039;&#039;not&#039;&#039;&#039; required to be in namespaces, but it can be helpful in avoiding collisions as the number of tests in your system grows.&lt;br /&gt;
* we&#039;ve documented our test classes and methods using [http://www.stack.nl/~dimitri/doxygen/ doxygen] style comments. This documentation is automatically extracted by our tools and added to the results report - more information about this feature is [[Test_API#Test_Documentation|here]]&lt;br /&gt;
* you can optionally write test classes that inherit from a base class that we&#039;ve defined ([[Runtime_Test_Services#class_srTest|stride::srTest]]). We recommend you start by writing your classes this way so that your classes will inherit some methods and members that make some custom reporting tasks simpler.&lt;br /&gt;
* exceptions are generally handled by the STRIDE unit test harness, but can be disabled if your compiler does not support them (see &#039;&#039;s2_testclass_basic_exceptions_tests.h/cpp&#039;&#039;).&lt;br /&gt;
* parameterized tests are supported by test classes as well. In these tests, simple constructor arguments can be passed during execution and are available at runtime to the test unit. The STRIDE infrastructure handles the passing of the arguments to the device and the construction of the test class with these arguments. Parameterization of test classes can be a powerful way to expand your test coverage with data driven test scenarios (varying the input to a single test class).&lt;br /&gt;
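&lt;br /&gt;
To make the packaging concrete, here is a minimal sketch of a test class. It is illustrative only - the class, method, and member names are hypothetical, the method signatures and pass/fail conventions may differ from what the harness actually requires, and the class still needs to be identified to STRIDE as a test unit (see the [[Test Class Samples]] and [[Test Macros]] articles for the exact requirements):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/// @brief Illustrative test class for a hypothetical widget module.&lt;br /&gt;
namespace my_tests&lt;br /&gt;
{&lt;br /&gt;
    class WidgetTests   // optionally derive from stride::srTest&lt;br /&gt;
    {&lt;br /&gt;
    public:&lt;br /&gt;
        /// Constructor arguments (if any) make this a parameterized test unit.&lt;br /&gt;
        WidgetTests(const char* szInput = 0, unsigned int uExpected = 0)&lt;br /&gt;
            : mszInput(szInput), muExpected(uExpected) {}&lt;br /&gt;
&lt;br /&gt;
        /// Each conforming public method is treated as a test case.&lt;br /&gt;
        void Simple()&lt;br /&gt;
        {&lt;br /&gt;
            // exercise the API under test and validate results here,&lt;br /&gt;
            // typically with the STRIDE assertion macros (see [[Test Macros]]).&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        /// Another test case; this one uses the constructor arguments.&lt;br /&gt;
        void Parameterized()&lt;br /&gt;
        {&lt;br /&gt;
            // validate behavior against mszInput / muExpected here.&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    private:&lt;br /&gt;
        const char* mszInput;&lt;br /&gt;
        unsigned int muExpected;&lt;br /&gt;
    };&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;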
&lt;br /&gt;
=== test_in_c_cpp/TestFList ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates a simpler packaging technique that is appropriate for systems that support C compilation only (no C++). [[Test_Units#Test_Units|FLists]] are simply collections of functions that are called in sequence. There is no shared state or data, unless you arrange to use global data for this purpose. Review the source code in the directory and follow the sample description [[Test Function List Samples|here]]; a brief illustrative sketch also follows the observations below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* flist tests support setup/teardown fixturing, but &#039;&#039;&#039;not&#039;&#039;&#039; parameterization or exception handling.&lt;br /&gt;
* we&#039;ve again provided documentation using doxygen formatting for these samples. However, because there is no storage-class entity with which the docs are associated in an FList, there are some restrictions to the documentation, which you can read about [[Test_API#Test_FLists|here]].&lt;br /&gt;
* notice how the [[Scl_test_flist|scl_test_flist pragma]] requires you to both create a name for the test unit (first argument) &#039;&#039;&#039;and&#039;&#039;&#039; explicitly list each test method that is part of the unit. This is one disadvantage of flist over a test class (the latter does not require explicit listing of each test since all conforming public methods are assumed to be test methods).&lt;br /&gt;
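&lt;br /&gt;
Here is a minimal sketch of the flist shape. It is illustrative only - the function names and signatures are hypothetical, and the pragma syntax that binds them into a test unit is documented in [[Scl_test_flist]]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* Illustrative flist: plain functions executed in sequence, with optional&lt;br /&gt;
   setup/teardown fixturing. */&lt;br /&gt;
static void WidgetSetup(void)    { /* per-test setup */ }&lt;br /&gt;
static void WidgetTeardown(void) { /* per-test cleanup */ }&lt;br /&gt;
&lt;br /&gt;
static void WidgetTestOne(void)  { /* exercise and validate (see [[Test Macros]]) */ }&lt;br /&gt;
static void WidgetTestTwo(void)  { /* exercise and validate */ }&lt;br /&gt;
&lt;br /&gt;
/* The scl_test_flist pragma names the test unit and explicitly lists the&lt;br /&gt;
   setup/teardown and test functions above - see [[Scl_test_flist]] for the&lt;br /&gt;
   exact syntax. */&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;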
&lt;br /&gt;
=== test_in_c_cpp/TestCClass ===&lt;br /&gt;
&lt;br /&gt;
This sample demonstrates a more sophisticated (and complicated) packaging technique for systems that support C compilation only. [[Test_Units#Test_Units|Test C Classes]] are defined by a structure of function pointers (which may also include data) and an initialization function. Review the source code in the directory and follow the sample description [[Test CClass Samples|here]]; a brief illustrative sketch also follows the observations below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* the [[Scl_test_cclass|scl_test_cclass pragma]] requires a structure of function pointers as well as an initialization function that is called prior to running the tests. The initialization function must take the C class structure pointer as its first argument; it assigns values to all the function pointer elements and performs any other initialization tasks. The pragma also accepts an optional deinitialization function that will be called after test execution (if provided).&lt;br /&gt;
* we&#039;ve provided documentation using  doxygen formatting for these samples. Because the test functions themselves are bound at runtime, the test documentation must be associated with the function pointer elements in the structure - read more [[Test_API#Test_C-Classes|here]].&lt;br /&gt;
* parameterized tests are also supported by test c classes. Arguments to the initialization function that follow the structure pointer argument are considered constructor arguments and can be passed when running the test.&lt;br /&gt;
* because the test methods are assigned to the structure members at runtime, it&#039;s possible (and recommended) to use statically scoped functions, so as not to pollute the global function space with test functions. That said, you are free to use any functions with matching signatures and linkage, regardless of scope.&lt;br /&gt;
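&lt;br /&gt;
Here is a minimal sketch of the C class shape described above. It is illustrative only - the names, return types, and signatures are hypothetical, and the pragma syntax that binds the structure and initialization function into a test unit is documented in [[Scl_test_cclass]]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* Illustrative C test class: a structure of function pointers plus an&lt;br /&gt;
   initialization function. */&lt;br /&gt;
typedef struct WidgetCClass&lt;br /&gt;
{&lt;br /&gt;
    /* each function pointer member is a test case */&lt;br /&gt;
    void (*Simple)(void);&lt;br /&gt;
    void (*Boundary)(void);&lt;br /&gt;
} WidgetCClass;&lt;br /&gt;
&lt;br /&gt;
static void WidgetSimple(void)   { /* exercise and validate */ }&lt;br /&gt;
static void WidgetBoundary(void) { /* exercise and validate */ }&lt;br /&gt;
&lt;br /&gt;
/* Initialization function: takes the structure pointer as its first argument&lt;br /&gt;
   and assigns the function pointer members. Additional arguments (if any)&lt;br /&gt;
   act as constructor parameters for parameterized runs. */&lt;br /&gt;
static void WidgetCClass_Init(WidgetCClass* pSelf)&lt;br /&gt;
{&lt;br /&gt;
    pSelf-&amp;gt;Simple   = WidgetSimple;&lt;br /&gt;
    pSelf-&amp;gt;Boundary = WidgetBoundary;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;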
&lt;br /&gt;
=== test_in_c_cpp/TestPoint ===&lt;br /&gt;
&lt;br /&gt;
The last sample we&#039;ll consider demonstrates how to write tests that validate [[Test_Point_Testing_in_C/C++|STRIDE Test Points]] - with native test code. Although test point tests can be written in host-based scripting languages as well, sometimes it&#039;s preferable to write (and execute) the test logic in native target code - for instance, when validating large or otherwise complex data payloads. Review the source code in the directory and follow the sample description [[Test Point Samples|here]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* test point tests can be packaged into a harness using any of the [[Test_Units#Test_Units|three types of test units]] that we support. In this case, we used an FList so that the sample could be used on systems that were not c++ capable.&lt;br /&gt;
* one of two methods is used to process the test: [[Test_Point_Testing_in_C/C++#srTestPointWait|srTestPointWait]] or [[Test_Point_Testing_in_C/C++#srTestPointCheck|srTestPointCheck]]. The former is used to process test points as they happen (with a specified timeout) and the latter is used to process test points that have already occurred at the time it is called (post completion check).&lt;br /&gt;
* due to the limitations of c syntax, it can be ugly to create the srTestPointExpect_t data, especially where user data validation is concerned (see the &#039;&#039;CheckData&#039;&#039; example, for instance).&lt;br /&gt;
&lt;br /&gt;
=== Build the test app ===&lt;br /&gt;
&lt;br /&gt;
So that we can run these samples, let&#039;s now build an off-target test app that contains the source under test -- you can follow the generic steps [[Off-Target_Test_App#Build_Steps|described here]]. When copying the source, make sure you take all the source files from all four of the samples mentioned above.&lt;br /&gt;
&lt;br /&gt;
=== Run the tests ===&lt;br /&gt;
&lt;br /&gt;
Now launch the test app (if you have not already) and execute the runner  with the following command: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=&amp;quot;/TestPoint{s2_testpoint_basic}&amp;quot; --run=&amp;quot;/Test FList{s2_testflist_basic_fixtures; s2_testflist_basic_simple}&amp;quot; --run=&amp;quot;/Test C Class{s2_testcclass_basic_fixtures; s2_testcclass_basic_parameterized(\&amp;quot;mystring\&amp;quot;, 8); s2_testcclass_basic_simple}&amp;quot; --run=&amp;quot;/Test Class{s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures;  s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple}&amp;quot; --log_level=all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command line organizes the tests in the four samples above into suites for easier browsing.&lt;br /&gt;
&lt;br /&gt;
=== Examine the results ===&lt;br /&gt;
&lt;br /&gt;
Open the &#039;&#039;TestApp.xml&#039;&#039; file and browse the results. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;observations:&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* there are four top-level suites corresponding to the four samples we discussed above. The command arguments we passed to the runner created these top-level suites.&lt;br /&gt;
* the test documentation has been extracted (at compile time) and is attached to the results when the tests are executed. Most of the test suites/test cases should have documentation in the description field.&lt;br /&gt;
* the test point test cases show the test points that were encountered, information about failures (if any), and log messages (the latter only because we included the &amp;lt;tt&amp;gt;--log_level=all&amp;lt;/tt&amp;gt; option when executing the runner).&lt;br /&gt;
* The two parameterized tests -- &#039;&#039;s2_testcclass_basic_parameterized&#039;&#039; and &#039;&#039;s2_testclass::Basic::Parameterized&#039;&#039; -- both pass. We passed explicit arguments to the former (on the stride command line above) while we allowed the default arguments (0/null for all) to be passed to the latter by not explicitly specifying how it was to be called.&lt;br /&gt;
&lt;br /&gt;
Explore all the results and make sure that the results meet your expectations based on the test source that you&#039;ve previously browsed.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Tests_in_Script&amp;diff=12598</id>
		<title>Training Tests in Script</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Tests_in_Script&amp;diff=12598"/>
		<updated>2010-06-01T17:39:23Z</updated>

		<summary type="html">&lt;p&gt;Mikee: moved ++WIP++ Training Tests in Script to Training Tests in Script&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
The STRIDE Framework allows you to write expectation tests that execute on the host while connected to a running device that has been instrumented with [[Test Point|STRIDE Test Points]]. Host-based expectation tests leverage the power of scripting languages (perl is currently supported, others are expected in the future) to quickly and easily write validation logic for the test points on your system. What&#039;s more, since the test logic is implemented and executed on the host, your device software does not have to be rebuilt when you want to create new tests or change existing ones.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding:&lt;br /&gt;
&lt;br /&gt;
* [[Test Modules Overview|Scripting Overview]]&lt;br /&gt;
* [[Perl Script APIs|perl Test Modules]]&lt;br /&gt;
&lt;br /&gt;
== What is an expectation test ? ==&lt;br /&gt;
&lt;br /&gt;
An expectation test is a test that validates behavior by verifying the occurrence (or non-occurrence) of specific test points on your running device. The STRIDE Framework makes it easy to define expectation tests &#039;&#039;via a single setup API&#039;&#039; ([[Perl_Script_APIs#Methods|see TestPointSetup]]). Once defined, expectation tests are executed by invoking a &#039;&#039;wait&#039;&#039; method that evaluates the test points on the device as they occur. The wait method typically blocks until the entire defined expectation set has been satisfied &#039;&#039;&#039;or&#039;&#039;&#039; until an optional timeout has been exceeded.&lt;br /&gt;
&lt;br /&gt;
== How do I start my target processing scenario ? ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s often necessary, as part of test, to start the processing on the device that causes the test scenario to occur. Sometimes processing is invoked via external stimulus (e.g. send a command to a serial port or send a network message). Given the [http://search.cpan.org/ wealth of libraries] available for perl, it&#039;s likely that you&#039;ll be able to find modules to help you in automating common communication protocols. &lt;br /&gt;
&lt;br /&gt;
If, on the other hand, the processing can be invoked by direct code paths in your application, you can consider using [[Function_Capturing|function fixturing]] via STRIDE. STRIDE function remoting allows you to specify a set of functions on the device that are to be made available in the host scripting environment for remote execution. This can be a convenient way to expose device application hooks to the host scripting environment.&lt;br /&gt;
&lt;br /&gt;
Whatever the approach, we strongly encourage test authors to minimize the manual interaction required to execute expectation tests and, wherever possible, to fully automate it. Fully automated tests are more likely to be run regularly and therefore provide a tighter feedback loop for your software quality.&lt;br /&gt;
&lt;br /&gt;
== Can I use STRIDE test modules only for expectation testing ? ==&lt;br /&gt;
&lt;br /&gt;
No - STRIDE test modules provide a language-specific way to harness test code. If you have other procedures that can be automated using perl code on the host, then you can certainly use STRIDE test modules to harness that test code. In doing so, you will get the reporting conveniences that test modules provide (such as automatic POD documentation extraction and suite/test case generation) - as well as unified reporting with your other STRIDE test cases.&lt;br /&gt;
&lt;br /&gt;
== Sample: Expectations ==&lt;br /&gt;
&lt;br /&gt;
For this training, we will again be using the sample code provided in the [[Expectations Sample]]. This sample demonstrates a number of common expectation test patterns as implemented in perl.&lt;br /&gt;
&lt;br /&gt;
=== Sample Source: test_in_script/Expectations/s2_expectations_testmodules.pm ===&lt;br /&gt;
&lt;br /&gt;
To begin, let&#039;s briefly examine the perl test module that implements the test logic. Open the file in your favorite editor (preferably one that supports perl syntax highlighting...) and examine the source code along with the description of each test that is provided at the beginning of each test. Here are some things to observe:&lt;br /&gt;
&lt;br /&gt;
* The package name matches the file name -- &#039;&#039;this is required&#039;&#039;&lt;br /&gt;
* We have included documentation for the module and test cases using standard POD formatting codes. As long as you follow the rules described [[Perl_Script_APIs#Documentation|here]], the STRIDE framework will automatically extract the POD during execution and annotate the report accordingly.&lt;br /&gt;
* Most of the test sequences are initiated via a remote function in the target app (&amp;lt;tt&amp;gt;Exp_DoStateChanges()&amp;lt;/tt&amp;gt;). This function has been [[Function Capturing|captured]] using STRIDE and is therefore available for invocation using the Functions object in the test module. In one case, we also invoke the function asynchronously (see &amp;lt;tt&amp;gt;async_loose&amp;lt;/tt&amp;gt;). Functions are invoked synchronously by default.&lt;br /&gt;
* In the &amp;lt;tt&amp;gt;check_data&amp;lt;/tt&amp;gt; test, we validate integer values coming from the target that were passed as binary payloads to the test point. We use the perl [http://perldoc.perl.org/functions/pack.html pack] function to create a scalar value that matches the data expected from the target (target is the same as the host, in this case). If we were testing against a target with different integer characteristics (size, byte ordering), we would have to adjust our pack statement accordingly to produce a bit pattern that matched the target value(s). In many cases, this binary payload validation proves to be difficult to maintain and this is why &#039;&#039;we typically recommend using string data payloads&#039;&#039; on test points wherever possible.&lt;br /&gt;
&lt;br /&gt;
=== Build the  test app ===&lt;br /&gt;
&lt;br /&gt;
So that we can run the sample, let&#039;s now build an off-target test app that contains the source under test -- you can follow the generic steps [[Off-Target_Test_App#Build_Steps|described here]]. &#039;&#039;&#039;Note:&#039;&#039;&#039; you can copy all of the sample files into the &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt; directory -- although only the source will be compiled into the app, this will make it easier to run the module.&lt;br /&gt;
&lt;br /&gt;
=== Run the sample ===&lt;br /&gt;
&lt;br /&gt;
Now launch the test app (if you have not already) and execute the runner with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=../out/TestApp.sidb --run=../sample_src/s2_expectations_testmodule.pm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(this assumes you are running from the &amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt; directory of your off-target SDK. If that&#039;s not the case, you need to change the path to the database and test module files accordingly.)&lt;br /&gt;
&lt;br /&gt;
If you&#039;d like to see how log messages will appear in reports, you can add &amp;lt;tt&amp;gt;--log_level=all&amp;lt;/tt&amp;gt; to this command.&lt;br /&gt;
&lt;br /&gt;
=== Examine the results ===&lt;br /&gt;
&lt;br /&gt;
The runner will produce a results (xml) file once the execution is complete. The file will (by default) have the same name as the database and be located in the current directory - so find the &amp;lt;tt&amp;gt;TestApp.xml&amp;lt;/tt&amp;gt; file and open it with your browser. You can then use the buttons to expand the test suites and test cases. Here are some things to observe about the results:&lt;br /&gt;
&lt;br /&gt;
* there is a single suite called &#039;&#039;&#039;s2_expectations_module&#039;&#039;&#039;. This matches the name given to the test module.&lt;br /&gt;
* the test module&#039;s suite has one annotation - it is a trace file containing all of the test points and logs that were reported to the host during the execution of the test module. This trace file can be useful if you want to get a sequential view of all test points that were encountered during the execution of the module.&lt;br /&gt;
* the test module suite contains 13 test cases -- each one corresponds to a single test case (test function) in the module. The description for each test case was automatically generated from the POD documentation that was included in the test module file.&lt;br /&gt;
* each test case has several annotations. The first is a simple HTML view of the source code of the test itself. This can be useful for quickly inspecting the code that was used to run the test without having to go to your actual test module implementation file. The remaining annotations contain information about each test point that was hit during processing and any expectation failures or timeouts that were encountered.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Tests_in_Script&amp;diff=12597</id>
		<title>Training Tests in Script</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Tests_in_Script&amp;diff=12597"/>
		<updated>2010-06-01T17:39:09Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
The STRIDE Framework allows you to write expectation tests that execute on the host while connected to a running device that has been instrumented with [[Test Point|STRIDE Test Points]]. Host-based expectation tests leverage the power of scripting languages (perl is currently supported, others are expected in the future) to quickly and easily write validation logic for the test points on your system. What&#039;s more, since the test logic is implemented and executed on the host, your device software does not have to be rebuilt when you want to create new tests or change existing ones.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding:&lt;br /&gt;
&lt;br /&gt;
* [[Test Modules Overview|Scripting Overview]]&lt;br /&gt;
* [[Perl Script APIs|perl Test Modules]]&lt;br /&gt;
&lt;br /&gt;
== What is an expectation test ? ==&lt;br /&gt;
&lt;br /&gt;
An expectation test is a test that validates behavior by verifying the occurrence (or non-occurrence) of specific test points on your running device. The STRIDE Framework makes it easy to define expectation tests &#039;&#039;via a single setup API&#039;&#039; ([[Perl_Script_APIs#Methods|see TestPointSetup]]). Once defined, expectation tests are executed by invoking a &#039;&#039;wait&#039;&#039; method that evaluates the test points on the device as they occur. The wait method typically blocks until the entire defined expectation set has been satisfied &#039;&#039;&#039;or&#039;&#039;&#039; until an optional timeout has been exceeded.&lt;br /&gt;
&lt;br /&gt;
== How do I start my target processing scenario ? ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s often necessary, as part of a test, to start the processing on the device that causes the test scenario to occur. Sometimes processing is invoked via external stimulus (e.g. sending a command to a serial port or sending a network message). Given the [http://search.cpan.org/ wealth of libraries] available for perl, it&#039;s likely that you&#039;ll be able to find modules to help you in automating common communication protocols.&lt;br /&gt;
&lt;br /&gt;
If, on the other hand, the processing can be invoked by direct code paths in your application, you can consider using [[Function_Capturing|function remoting]] via STRIDE. STRIDE function remoting allows you to specify a set of functions on the device that are to be made available in the host scripting environment for remote execution. This can be a convenient way to expose device application hooks to the host scripting environment.&lt;br /&gt;
&lt;br /&gt;
Whatever the approach, we strongly encourage test authors to minimize the amount of manual interaction required to execute expectation tests and, wherever possible, to fully automate their expectation scenarios. Fully automated tests are more likely to be run regularly and therefore provide a tighter feedback loop for your software quality.&lt;br /&gt;
&lt;br /&gt;
== Are STRIDE test modules only for expectation testing ? ==&lt;br /&gt;
&lt;br /&gt;
No - STRIDE test modules provide a language-specific way to harness test code. If you have other procedures that can be automated using perl code on the host, then you can certainly use STRIDE test modules to harness that code. In doing so, you will get the reporting conveniences that test modules provide (such as automatic POD doc extraction and suite/test case generation) as well as unified reporting with your other STRIDE test cases.&lt;br /&gt;
&lt;br /&gt;
== Sample: Expectations ==&lt;br /&gt;
&lt;br /&gt;
For this training, we will again be using the sample code provided in the [[Expectations Sample]]. This sample demonstrates a number of common expectation test patterns as implemented in perl.&lt;br /&gt;
&lt;br /&gt;
=== Sample Source: test_in_script/Expectations/s2_expectations_testmodule.pm ===&lt;br /&gt;
&lt;br /&gt;
To begin, let&#039;s briefly examine the perl test module that implements the test logic. Open the file in your favorite editor (preferably one that supports perl syntax highlighting) and examine the source code along with the description provided at the beginning of each test. Here are some things to observe:&lt;br /&gt;
&lt;br /&gt;
* The package name matches the file name -- &#039;&#039;this is required&#039;&#039;&lt;br /&gt;
* We have included documentation for the module and test cases using standard POD formatting codes. As long as you follow the rules described [[Perl_Script_APIs#Documentation|here]], the STRIDE framework will automatically extract the POD during execution and annotate the report accordingly.&lt;br /&gt;
* Most of the test sequences are initiated via a remote function in the target app (&amp;lt;tt&amp;gt;Exp_DoStateChanges()&amp;lt;/tt&amp;gt;). This function has been [[Function Capturing|captured]] using STRIDE and is therefore available for invocation using the Functions object in the test module. In one case, we also invoke the function asynchronously (see &amp;lt;tt&amp;gt;async_loose&amp;lt;/tt&amp;gt;). Functions are invoked synchronously by default.&lt;br /&gt;
* In the &amp;lt;tt&amp;gt;check_data&amp;lt;/tt&amp;gt; test, we validate integer values coming from the target that were passed as binary payloads to the test point. We use the perl [http://perldoc.perl.org/functions/pack.html pack] function to create a scalar value that matches the data expected from the target (the target is the same as the host, in this case). If we were testing against a target with different integer characteristics (size, byte ordering), we would have to adjust our pack statement accordingly to produce a bit pattern that matched the target value(s). In many cases, this binary payload validation proves difficult to maintain, which is why &#039;&#039;we typically recommend using string data payloads&#039;&#039; on test points wherever possible.&lt;br /&gt;
&lt;br /&gt;
=== Build the test app ===&lt;br /&gt;
&lt;br /&gt;
So that we can run the sample, let&#039;s now build an off-target test app that contains the source under test -- you can follow the generic steps [[Off-Target_Test_App#Build_Steps|described here]]. &#039;&#039;&#039;Note:&#039;&#039;&#039; you can copy all of the sample files into the &amp;lt;tt&amp;gt;sample_src&amp;lt;/tt&amp;gt; directory -- although only the source will be compiled into the app, this will make it easier to run the module.&lt;br /&gt;
&lt;br /&gt;
=== Run the sample ===&lt;br /&gt;
&lt;br /&gt;
Now launch the test app (if you have not already) and execute the runner with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=../out/TestApp.sidb --run=../sample_src/s2_expectations_testmodule.pm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(this assumes you are running from the &amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt; directory of your off-target SDK. If that&#039;s not the case, you need to change the path to the database and test module files accordingly.)&lt;br /&gt;
&lt;br /&gt;
If you&#039;d like to see how log messages will appear in reports, you can add &amp;lt;tt&amp;gt;--log_level=all&amp;lt;/tt&amp;gt; to this command.&lt;br /&gt;
&lt;br /&gt;
=== Examine the results ===&lt;br /&gt;
&lt;br /&gt;
The runner will produce a results (xml) file once the execution is complete. The file will (by default) have the same name as the database and be located in the current directory - so find the &amp;lt;tt&amp;gt;TestApp.xml&amp;lt;/tt&amp;gt; file and open it with your browser. You can then use the buttons to expand the test suites and test cases. Here are some things to observe about the results:&lt;br /&gt;
&lt;br /&gt;
* there is a single suite called &#039;&#039;&#039;s2_expectations_module&#039;&#039;&#039;. This matches the name given to the test module.&lt;br /&gt;
* the test module&#039;s suite has one annotation - it is a trace file containing all of the test points and logs that were reported to the host during the execution of the test module. This trace file can be useful if you want to get a sequential view of all test points that were encountered during the execution of the module.&lt;br /&gt;
* the test module suite contains 13 test cases -- each one corresponds to a single test case (test function) in the module. The description for each test case was automatically generated from the POD documentation that was included in the test module file.&lt;br /&gt;
* each test case has several annotations. The first is a simple HTML view of the source code of the test itself. This can be useful for quickly inspecting the code that was used to run the test without having to go to your actual test module implementation file. The remaining annotations contain information about each test point that was hit during processing and any expectation failures or timeouts that were encountered.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Instrumentation&amp;diff=12595</id>
		<title>Training Instrumentation</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Instrumentation&amp;diff=12595"/>
		<updated>2010-06-01T17:08:34Z</updated>

		<summary type="html">&lt;p&gt;Mikee: moved ++WIP++ Training Instrumentation to Training Instrumentation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
Source instrumentation is the means by which application domain experts strategically instrument the source under test so as to enable themselves or others to write expectation tests against the running code. Source instrumentation is one of the first steps toward enabling expectation testing of your application.&lt;br /&gt;
&lt;br /&gt;
Source instrumentation is accomplished by a set of simple yet powerful macros provided by the STRIDE Runtime library. The macros are easily activated/deactivated for any build through the use of a single preprocessor macro value. When activated, they provide a means to validate the behavior of your application as it is running.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding:&lt;br /&gt;
&lt;br /&gt;
* [[Source Instrumentation Overview|Instrumentation Overview]]&lt;br /&gt;
* [[Test Point|Test Points]]&lt;br /&gt;
* [[Test Log|Test Logs]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Instrument ? ==&lt;br /&gt;
&lt;br /&gt;
As described in [[Source Instrumentation Overview|the overview]], you can begin instrumenting your source code by including the &amp;lt;tt&amp;gt;srtest.h&amp;lt;/tt&amp;gt; header file and then adding &#039;&#039;&#039;srTEST_POINT*&#039;&#039;&#039; and &#039;&#039;&#039;srTEST_LOG*&#039;&#039;&#039; in source locations of interest. These macros will be inactive (no-op) unless &#039;&#039;&#039;STRIDE_ENABLED&#039;&#039;&#039; is defined during compilation.&lt;br /&gt;
&lt;br /&gt;
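As a minimal sketch of this, consider the following (the function and label names are invented for illustration, and we assume here that the basic &#039;&#039;&#039;srTEST_POINT&#039;&#039;&#039; macro takes the test point label as its argument -- see the [[Test Point]] article for the exact macro variants and signatures):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=c&amp;gt;&lt;br /&gt;
#include &amp;lt;srtest.h&amp;gt;  /* STRIDE Runtime instrumentation macros */&lt;br /&gt;
&lt;br /&gt;
/* MotorStart() is a made-up function, shown only to illustrate placement */&lt;br /&gt;
void MotorStart(void)&lt;br /&gt;
{&lt;br /&gt;
    /* record that this code path was reached; the label is a static string&lt;br /&gt;
       chosen by the instrumenter */&lt;br /&gt;
    srTEST_POINT(&amp;quot;MotorStart&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
    /* ... normal processing ... */&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because the macros are no-ops when &#039;&#039;&#039;STRIDE_ENABLED&#039;&#039;&#039; is not defined, instrumentation like this can typically remain in the source for regular (non-instrumented) builds.&lt;br /&gt;
&lt;br /&gt;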
== Where Should I Instrument ? ==&lt;br /&gt;
&lt;br /&gt;
When thinking about &#039;&#039;where&#039;&#039; to instrument your source under test, consider these high-value locations:&lt;br /&gt;
&lt;br /&gt;
* Function entry and exit points. Include call parameters as data if appropriate.&lt;br /&gt;
* State transitions &lt;br /&gt;
* Data transitions (i.e. any point where important data changes value)&lt;br /&gt;
* Error conditions&lt;br /&gt;
* Callback functions&lt;br /&gt;
&lt;br /&gt;
In addition, when you start to instrument your source code, it&#039;s beneficial to pause and consider some of the test cases you expect to validate against the test points you are inserting. For each potential test case, you might also want to consider some of the characteristics you&#039;d use for those tests, as described in [[Expectations|this article]]. The characteristics you expect to apply for various test cases might, for example, inform things like how you label your test points or what kind of data you include.&lt;br /&gt;
&lt;br /&gt;
== What&#039;s The Difference between a Test Point and a Test Log ? ==&lt;br /&gt;
&lt;br /&gt;
[[Test Point|Test Points]] can be used for validation since they are what&#039;s checked when you run a STRIDE expectation test. Test Logs, on the other hand, are purely informational and will be included in the report, according to the log level indicated when the [[Stride_Runner#Options|STRIDE Runner]] was executed. Refer to the [[#Background|background links above]] for more information on each of these instrumentation types.&lt;br /&gt;
&lt;br /&gt;
== What About Data ? ==&lt;br /&gt;
&lt;br /&gt;
Including data with your test points adds another level of power to the validation of your source. Here are some general recommendations for using data effectively:&lt;br /&gt;
&lt;br /&gt;
* Try to use string data payloads wherever possible. String payloads are considerably more human-friendly when viewing test results and they allow for relatively simple string comparison validation.&lt;br /&gt;
* If you need to do complex validation of &#039;&#039;&#039;multi-field data&#039;&#039;&#039; in a test point, consider using an object serialization format such as [http://json.org/ JSON]. Standard formats like this are readily parsable in host scripting languages. If, however, you will &#039;&#039;only&#039;&#039; be writing expectation tests in native code for execution on the target, then string serialization formats might be too cumbersome for validation. In that case, using binary payloads (structures, typically) is reasonable. (A brief sketch of the JSON approach follows this list.)&lt;br /&gt;
* The data payloads for a test point are limited to a fixed size (512 bytes by default, but configurable if needed). If you have large data payloads that you need to validate and you are using host-based script validation logic, consider using [[File Transfer Services]] to read/write the data as files on the host. If, on the other hand, you are using native code for validation, you can independently manage your own buffers of data (heap allocated, for example) for validation and use the test point payloads only to transmit addresses and sizes of the payloads.&lt;br /&gt;
&lt;br /&gt;
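The following sketch illustrates the JSON recommendation above. &#039;&#039;&#039;srTEST_POINT_STR&#039;&#039;&#039; is used here purely as a placeholder for whichever srTEST_POINT* string-data variant your Runtime provides (see the [[Test Point]] article for the actual macro names and signatures); the function and field names are likewise invented for illustration:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=c&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;srtest.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
void ReportAudioConfig(int rate, int channels)&lt;br /&gt;
{&lt;br /&gt;
    char payload[64];  /* well under the default 512-byte payload limit */&lt;br /&gt;
&lt;br /&gt;
    /* serialize the multi-field data as a small JSON string, which is easy&lt;br /&gt;
       to decode in a host-side script */&lt;br /&gt;
    snprintf(payload, sizeof(payload),&lt;br /&gt;
             &amp;quot;{\&amp;quot;rate\&amp;quot;: %d, \&amp;quot;channels\&amp;quot;: %d}&amp;quot;, rate, channels);&lt;br /&gt;
&lt;br /&gt;
    /* srTEST_POINT_STR is a stand-in for the actual string-data macro */&lt;br /&gt;
    srTEST_POINT_STR(&amp;quot;AudioConfig&amp;quot;, payload);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;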
== Sample Code: s2_expectations_source == &lt;br /&gt;
&lt;br /&gt;
For this training, we will simply review some existing sample source code and explain the motivation behind some of the instrumentation. In particular, we will peruse the source code associated with the [[Expectations Sample]].&lt;br /&gt;
&lt;br /&gt;
=== Samples/test_in_script/Expectations/s2_expectations_source.c ===&lt;br /&gt;
&lt;br /&gt;
This is the source under test for a simple example that demonstrates expectation tests where the test logic is written in a script module (perl) that runs on the host. The source under test is a simple state machine and we have chosen to instrument each of the 4 states with one or more test points.  Open the source file in your preferred editor and search for &#039;&#039;&#039;srTEST_POINT&#039;&#039;&#039;. You will see that we have test points in the following functions:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;SetNewState()&#039;&#039;: This is a shared function that is called to effectuate a state change. The test point label is just a static string chosen by the instrumenter and the name of the new state is included as string data on the test point.&lt;br /&gt;
* &#039;&#039;Start()&#039;&#039;: The Start state includes a single test point with no data. The label is obtained from a shared function that returns a string representation for any given state (&#039;&#039;GetStateName()&#039;&#039;).&lt;br /&gt;
* &#039;&#039;Idle()&#039;&#039;: The Idle state records a single test point. The test point label is again obtained from &#039;&#039;GetStateName()&#039;&#039; and the data associated with the test point is a transition count value that the software under test is maintaining. This state function also includes an &#039;&#039;info&#039;&#039; level test log. Since it is an info level log, it will not be captured during testing unless you explicitly set the log level to &#039;&#039;info&#039;&#039; or higher when executing the tests.&lt;br /&gt;
* &#039;&#039;Active()&#039;&#039;: The Active state has three distinct test points. The first point is similar to previous states and records a test point with the name of the current state as a label and transition count as data. The second test point shows another example of including string data in a test point. The third test point shows an example of simple JSON serialization to include several values in a single test point payload.&lt;br /&gt;
* &#039;&#039;Exp_DoStateChanges()&#039;&#039;: This function drives the state transitions and includes one warning log message that is not hit during normal execution of the code.&lt;br /&gt;
* &#039;&#039;End()&#039;&#039;: The End state records a single test point with the state name as label, similar to the other states already mentioned.&lt;br /&gt;
&lt;br /&gt;
=== Build, Run, and Trace ===&lt;br /&gt;
&lt;br /&gt;
So that you can see the trace points and logs in action, we will now build an off-target test app with this sample source included. Follow the [[Off-Target_Test_App#Build_Steps|instructions here]], using the same source files from the Expectation sample that we just reviewed.&lt;br /&gt;
&lt;br /&gt;
Now we want to invoke the function from our source under test using the [[STRIDE Runner]] and request that the runner show a trace of any test points that occur in the test app during execution.&lt;br /&gt;
&lt;br /&gt;
In order to invoke the function that starts our state transitions, let&#039;s create the following two-line perl script (use your favorite editor):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
use STRIDE;&lt;br /&gt;
$STRIDE::Functions-&amp;gt;Exp_DoStateChanges();&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save this file as &amp;lt;tt&amp;gt;do_state_changes.pl&amp;lt;/tt&amp;gt; in the &amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt; directory of your SDK. Now open a command prompt and change to that same directory. Execute the stride runner:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=do_state_changes.pl --trace&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells the runner to execute our script &#039;&#039;and&#039;&#039; to report any test points that are encountered on the device (or our test app, in this case) during the execution of that script. You should see a number of test points reported while the script executes and then some summary information about the tests that were executed (there are no tests executed in this case).&lt;br /&gt;
&lt;br /&gt;
Now let&#039;s include any log messages by specifying the &amp;lt;tt&amp;gt;--log_level&amp;lt;/tt&amp;gt; flag:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot;  --run=do_state_changes.pl --trace --log_level=all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you run this, you should see the same output as before as well as one &#039;&#039;LOG&#039;&#039; entry. If we had many LOG entries and wanted to filter based on log_level, we would change the value passed to &amp;lt;tt&amp;gt;--log_level&amp;lt;/tt&amp;gt; accordingly.&lt;br /&gt;
&lt;br /&gt;
== What next ? ==&lt;br /&gt;
&lt;br /&gt;
Now you&#039;ve seen how instrumentation is strategically placed in source under test and can be traced during execution under the STRIDE Runner. We recommend that you proceed to some of our other training topics to learn how to create tests on the host (in script) or on the target (in native code) that use your instrumentation test points as a means for validation.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Instrumentation&amp;diff=12594</id>
		<title>Training Instrumentation</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Instrumentation&amp;diff=12594"/>
		<updated>2010-06-01T17:07:28Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
Source instrumentation is the means by which application domain experts strategically instrument the source under test so as to enable themselves or others to write expectation tests against the running code. Source instrumentation is one of the first steps toward enabling expectation testing of your application.&lt;br /&gt;
&lt;br /&gt;
Source instrumentation is accomplished by a set of simple yet powerful macros provided by the STRIDE Runtime library. The macros are easily activated/deactivated for any build through the use of a single preprocessor macro value. When activated, they provide a means to validate the behavior of your application as it is running.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding:&lt;br /&gt;
&lt;br /&gt;
* [[Source Instrumentation Overview|Instrumentation Overview]]&lt;br /&gt;
* [[Test Point|Test Points]]&lt;br /&gt;
* [[Test Log|Test Logs]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Instrument ? ==&lt;br /&gt;
&lt;br /&gt;
As described in [[Source Instrumentation Overview|the overview]], you can begin instrumenting your source code by including the &amp;lt;tt&amp;gt;srtest.h&amp;lt;/tt&amp;gt; header file and then adding &#039;&#039;&#039;srTEST_POINT*&#039;&#039;&#039; and &#039;&#039;&#039;srTEST_LOG*&#039;&#039;&#039; in source locations of interest. These macros will be inactive (no-op) unless &#039;&#039;&#039;STRIDE_ENABLED&#039;&#039;&#039; is defined during compilation.&lt;br /&gt;
&lt;br /&gt;
== Where Should I Instrument ? ==&lt;br /&gt;
&lt;br /&gt;
When thinking about &#039;&#039;where&#039;&#039; to instrument your source under test, consider these high-value locations:&lt;br /&gt;
&lt;br /&gt;
* Function entry and exit points. Include call parameters as data if appropriate.&lt;br /&gt;
* State transitions &lt;br /&gt;
* Data transitions (i.e. any point where important data changes value)&lt;br /&gt;
* Error conditions&lt;br /&gt;
* Callback functions&lt;br /&gt;
&lt;br /&gt;
In addition, when you start to instrument your source code, it&#039;s beneficial to pause and consider some of the test cases you expect to validate against the test points you are inserting. For each potential test case, you might also want to consider some of the characteristics you&#039;d use for those tests, as described in [[Expectations|this article]]. The characteristics you expect to apply for various test cases might, for example, inform things like how you label your test points or what kind of data you include.&lt;br /&gt;
&lt;br /&gt;
== What&#039;s The Difference between a Test Point and a Test Log ? ==&lt;br /&gt;
&lt;br /&gt;
[[Test Point|Test Points]] can be used for validation since they are what&#039;s checked when you run a STRIDE expectation test. Test Logs, on the other hand, are purely informational and will be included in the report, according to the log level indicated when the [[Stride_Runner#Options|STRIDE Runner]] was executed. Refer to the [[#Background|background links above]] for more information on each of these instrumentation types.&lt;br /&gt;
&lt;br /&gt;
== What About Data ? ==&lt;br /&gt;
&lt;br /&gt;
Including data with your test points adds another level of power to the validation of your source. Here are some general recommendations for using data effectively:&lt;br /&gt;
&lt;br /&gt;
* Try to use string data payloads wherever possible. String payloads are considerably more human-friendly when viewing test results and they allow for relatively simple string comparison validation.&lt;br /&gt;
* If you need to do complex validation of &#039;&#039;&#039;multi-field data&#039;&#039;&#039; in a test point, consider using an object serialization format such as [http://json.org/ JSON]. Standard formats like this are readily parsable in host scripting languages. If, however, you will &#039;&#039;only&#039;&#039; be writing expectation tests in native code for execution on the target, then string serialization formats might be too cumbersome for validation. In that case, using binary payloads (structures, typically) is reasonable.&lt;br /&gt;
* The data payloads for a test point are limited to a fixed size (512 bytes by default, but configurable if needed). If you have large data payloads that you need to validate and you are using host-based script validation logic, consider using [[File Transfer Services]] to read/write the data as files on the host. If, on the other hand, you are using native code for validation, you can independently manage your own buffers of data (heap allocated, for example) for validation and use the test point payloads only to transmit addresses and sizes of the payloads.&lt;br /&gt;
&lt;br /&gt;
== Sample Code: s2_expectations_source == &lt;br /&gt;
&lt;br /&gt;
For this training, we will simply review some existing sample source code and explain the motivation behind some of the instrumentation. In particular, we will peruse the source code associated with the [[Expectations Sample]].&lt;br /&gt;
&lt;br /&gt;
=== Samples/test_in_script/Expectations/s2_expectations_source.c ===&lt;br /&gt;
&lt;br /&gt;
This is the source under test for a simple example that demonstrates expectation tests where the test logic is written in a script module (perl) that runs on the host. The source under test is a simple state machine and we have chosen to instrument each of the 4 states with one or more test points.  Open the source file in your preferred editor and search for &#039;&#039;&#039;srTEST_POINT&#039;&#039;&#039;. You will see that we have test points in the following functions:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;SetNewState()&#039;&#039;: This is a shared function that is called to effectuate a state change. The test point label is just a static string chosen by the instrumenter and the name of the new state is included as string data on the test point.&lt;br /&gt;
* &#039;&#039;Start()&#039;&#039;: The Start state includes a single test point with no data. The label is obtained from a shared function that returns a string representation for any given state (&#039;&#039;GetStateName()&#039;&#039;).&lt;br /&gt;
* &#039;&#039;Idle()&#039;&#039;: The Idle state records a single test point. The test point label is again obtained from &#039;&#039;GetStateName()&#039;&#039; and the data associated with the test point is a transition count value that the software under test is maintaining. This state function also includes an &#039;&#039;info&#039;&#039; level test log. Since it is an info level log, it will not be captured during testing unless you explicitly set the log level to &#039;&#039;info&#039;&#039; or higher when executing the tests.&lt;br /&gt;
* &#039;&#039;Active()&#039;&#039;: The Active state has three distinct test points. The first point is similar to previous states and records a test point with the name of the current state as a label and transition count as data. The second test point shows another example of including string data in a test point. The third test point shows an example of simple JSON serialization to include several values in a single test point payload.&lt;br /&gt;
* &#039;&#039;Exp_DoStateChanges()&#039;&#039;: This function drives the state transitions and includes one warning log message that is not hit during normal execution of the code.&lt;br /&gt;
* &#039;&#039;End()&#039;&#039;: The End state records a single test point with the state name as label, similar to the other states already mentioned.&lt;br /&gt;
&lt;br /&gt;
=== Build, Run, and Trace ===&lt;br /&gt;
&lt;br /&gt;
So that you can see the trace points and logs in action, we will now build an off-target test app with this sample source included. Follow the [[Off-Target_Test_App#Build_Steps|instructions here]], using the same source files from the Expectation sample that we just reviewed.&lt;br /&gt;
&lt;br /&gt;
Now we want to invoke the function from our source under test using the [[STRIDE Runner]] and request that the runner show a trace of any test points that occur in the test app during execution.&lt;br /&gt;
&lt;br /&gt;
In order to invoke the function that starts our state transitions, let&#039;s create the following two-line perl script (use your favorite editor):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
use STRIDE;&lt;br /&gt;
$STRIDE::Functions-&amp;gt;Exp_DoStateChanges();&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save this file as &amp;lt;tt&amp;gt;do_state_changes.pl&amp;lt;/tt&amp;gt; in the &amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt; directory of your SDK. Now open a command prompt and change to that same directory. Execute the stride runner:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=do_state_changes.pl --trace&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells the runner to execute our script &#039;&#039;and&#039;&#039; to report any test points that are encountered on the device (or our test app, in this case) during the execution of that script. You should see a number of test points reported while the script executes and then some summary information about the tests that were executed (there are no tests executed in this case).&lt;br /&gt;
&lt;br /&gt;
Now let&#039;s include any log messages by specifying the &amp;lt;tt&amp;gt;--log_level&amp;lt;/tt&amp;gt; flag:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot;  --run=do_state_changes.pl --trace --log_level=all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you run this, you should see the same output as before as well as one &#039;&#039;LOG&#039;&#039; entry. If we had many LOG entries and wanted to filter based on log_level, we would change the value passed to &amp;lt;tt&amp;gt;--log_level&amp;lt;/tt&amp;gt; accordingly.&lt;br /&gt;
&lt;br /&gt;
== What next ? ==&lt;br /&gt;
&lt;br /&gt;
Now you&#039;ve seen how instrumentation is strategically placed in source under test and can be traced during execution under the STRIDE Runner. We recommend that you proceed to some of our other training topics to learn how to create tests on the host (in script) or on the target (in native code) that use your instrumentation test points as a means for validation.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Training_Instrumentation&amp;diff=12593</id>
		<title>Training Instrumentation</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Training_Instrumentation&amp;diff=12593"/>
		<updated>2010-06-01T16:45:14Z</updated>

		<summary type="html">&lt;p&gt;Mikee: /* Where Should I Instrument ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
Source instrumentation is the means by which application domain experts strategically instrument the source under test so as to enable themselves or others to write expectation tests against the running code. Source instrumentation is one of the first steps toward enabling expectation testing of your application.&lt;br /&gt;
&lt;br /&gt;
Source instrumentation is accomplished by a set of simple yet powerful macros provided by the STRIDE Runtime library. The macros are easily activated/deactivated for any build through the use of a single preprocessor macro value. When activated, they provide a means to validate the behavior of your application as it is running.&lt;br /&gt;
&lt;br /&gt;
Please review the following reference articles before proceeding:&lt;br /&gt;
&lt;br /&gt;
* [[Source Instrumentation Overview|Instrumentation Overview]]&lt;br /&gt;
* [[Test Point|Test Points]]&lt;br /&gt;
* [[Test Log|Test Logs]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Instrument ? ==&lt;br /&gt;
&lt;br /&gt;
As described in [[Source Instrumentation Overview|the overview]], you can begin instrumenting your source code by including the &amp;lt;tt&amp;gt;srtest.h&amp;lt;/tt&amp;gt; header file and then adding &#039;&#039;&#039;srTEST_POINT*&#039;&#039;&#039; and &#039;&#039;&#039;srTEST_LOG*&#039;&#039;&#039; in source locations of interest. These macros will be inactive (no-op) unless &#039;&#039;&#039;STRIDE_ENABLED&#039;&#039;&#039; is defined during compilation.&lt;br /&gt;
&lt;br /&gt;
== Where Should I Instrument ? ==&lt;br /&gt;
&lt;br /&gt;
When thinking about &#039;&#039;where&#039;&#039; to instrument your source under test, consider these high-value locations:&lt;br /&gt;
&lt;br /&gt;
* Function entry and exit points. Include call parameters as data if appropriate.&lt;br /&gt;
* State transitions &lt;br /&gt;
* Data transitions (i.e. any point where important data changes value)&lt;br /&gt;
* Error conditions&lt;br /&gt;
* Callback functions&lt;br /&gt;
&lt;br /&gt;
In addition, when you start to instrument your source code, it&#039;s beneficial to pause and consider some of the test cases you expect to validate against the test points you are inserting. For each potential test case, you might also want to consider some of the characteristics you&#039;d use for those tests, as described in [[Expectations|this article]]. The characteristics you expect to apply for various test cases might, for example, inform things like how you label your test points or what kind of data you include.&lt;br /&gt;
&lt;br /&gt;
== What&#039;s The Difference between a Test Point and a Test Log ? ==&lt;br /&gt;
&lt;br /&gt;
[[Test Point|Test Points]] can be used for validation since they are what&#039;s checked when you run a STRIDE expectation test. Test Logs, on the other hand, are purely informational and will be included in the report, according to the log level indicated when the [[Stride_Runner#Options|STRIDE Runner]] was executed. Refer to the [[#Background|background links above]] for more information on each of these instrumentation types.&lt;br /&gt;
&lt;br /&gt;
== What About Data ? ==&lt;br /&gt;
&lt;br /&gt;
Including data with your test points adds another level of power to the validation of your source. Here are some general recommendations for using data effectively:&lt;br /&gt;
&lt;br /&gt;
* Try to use string data payloads wherever possible. String payloads are considerably more human-friendly when viewing test results and they allow for relatively simple string comparison validation.&lt;br /&gt;
* If you need to do complex validation of &#039;&#039;&#039;multi-field data&#039;&#039;&#039; in a test point, consider using an object serialization format such as [http://json.org/ JSON]. Standard formats like this are readily parsable in host scripting languages. If, however, you will only be writing expectation tests in native code for execution on the target, then string serialization formats might be too cumbersome for validation. In that case, using binary payloads (structures, typically) is reasonable.&lt;br /&gt;
* TBD&lt;br /&gt;
* TBD&lt;br /&gt;
&lt;br /&gt;
== Sample Code: s2_expectations_source == &lt;br /&gt;
&lt;br /&gt;
For this training, we will simply review some existing sample source code and explain the motivation behind some of the instrumentation. In particular, we will peruse the source code associated with the [[Expectations Sample]].&lt;br /&gt;
&lt;br /&gt;
=== Samples/test_in_script/Expectations/s2_expectations_source.c ===&lt;br /&gt;
&lt;br /&gt;
This is the source under test for a simple example that demonstrates expectation tests where the test logic is written in a script module (perl) that runs on the host. The source under test is a simple state machine and we have chosen to instrument each of the 4 states with one or more test points. Open the source file in your preferred editor and search for &#039;&#039;&#039;srTEST_POINT&#039;&#039;&#039;. You will see that we have test points in the following functions:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;SetNewState()&#039;&#039;: This is a shared function that is called to effectuate a state change. The test point label is just a static string chosen by the instrumenter and the name of the new state is included as string data on the test point.&lt;br /&gt;
* &#039;&#039;Start()&#039;&#039;: The Start state includes a single test point with no data. The label is obtained from a shared function that returns a string representation for any given state (&#039;&#039;GetStateName()&#039;&#039;).&lt;br /&gt;
* &#039;&#039;Idle()&#039;&#039;: The Idle state records a single test point. The test point label is again obtained from &#039;&#039;GetStateName()&#039;&#039; and the data associated with the test point is a transition count value that the software under test is maintaining. This state function also includes an &#039;&#039;info&#039;&#039; level test log. Since it is an info level log, it will not be captured during testing unless you explicitly set the log level to &#039;&#039;info&#039;&#039; or higher when executing the tests (via the STRIDE Runner).&lt;br /&gt;
* &#039;&#039;Active()&#039;&#039;: The Active state has three distinct test points. The first point is similar to previous states and records a test point with the name of the current state as a label and transition count as data. The second test point shows another example of including string data in a test point. The third test point shows an example of simple JSON serialization to include several values in a single test point payload.&lt;br /&gt;
* &#039;&#039;Exp_DoStateChanges()&#039;&#039;: This function drives the state transitions and includes one warning log message that is not hit during normal execution of the code.&lt;br /&gt;
* &#039;&#039;End()&#039;&#039;: The End state records a single test point with the state name as label, similar to the other states already mentioned.&lt;br /&gt;
&lt;br /&gt;
=== Build, Run, and Trace ===&lt;br /&gt;
&lt;br /&gt;
So that you can see the trace points and logs in action, we will now build an off-target test app with this sample source included. Follow the [[Off-Target_Test_App#Build_Steps|instructions here]], using the same source files from the Expectation sample that we just reviewed.&lt;br /&gt;
&lt;br /&gt;
Now we want to invoke the function from our source under test using the [[STRIDE Runner]] and request that the runner show a trace of any test points that occur in the test app during execution.&lt;br /&gt;
&lt;br /&gt;
In order to invoke the function that starts our state transitions, let&#039;s create the following two-line perl script (use your favorite editor):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;perl&amp;quot;&amp;gt;&lt;br /&gt;
use STRIDE;&lt;br /&gt;
$STRIDE::Functions-&amp;gt;Exp_DoStateChanges();&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save this file as &amp;lt;tt&amp;gt;do_state_changes.pl&amp;lt;/tt&amp;gt; in the &amp;lt;tt&amp;gt;src&amp;lt;/tt&amp;gt; directory of your SDK. Now open a command prompt and change to that same directory. Execute the stride runner:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot; --run=do_state_changes.pl --trace&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells the runner to execute our script &#039;&#039;and&#039;&#039; to report any test points that are encountered on the device (or our test app, in this case) during the execution of that script. You should see a number of test points reported while the script executes and then some summary information about the tests that were executed (there are no tests executed in this case).&lt;br /&gt;
&lt;br /&gt;
Now let&#039;s include any log messages by specifying the &amp;lt;tt&amp;gt;--log_level&amp;lt;/tt&amp;gt; flag:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stride --device=&amp;quot;TCP:localhost:8000&amp;quot; --database=&amp;quot;../out/TestApp.sidb&amp;quot;  --run=do_state_changes.pl --trace --log_level=all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you run this, you should see the same output as before as well as one &#039;&#039;LOG&#039;&#039; entry. If we had many LOG entries and wanted to filter based on log_level, we would change the value passed to &amp;lt;tt&amp;gt;--log_level&amp;lt;/tt&amp;gt; accordingly.&lt;br /&gt;
&lt;br /&gt;
== What next ? ==&lt;br /&gt;
&lt;br /&gt;
Now you&#039;ve seen how instrumentation is strategically placed in source under test and can be traced during execution under the STRIDE Runner. We recommend you proceed to some of our other training topics to learn how to create tests on the host (in script) or on the target (in native code) that use your instrumentation test points as a means for validation.&lt;br /&gt;
&lt;br /&gt;
[[Category: Training]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Test_Documentation_in_C/C%2B%2B&amp;diff=12587</id>
		<title>Test Documentation in C/C++</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Test_Documentation_in_C/C%2B%2B&amp;diff=12587"/>
		<updated>2010-05-28T22:40:32Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
The STRIDE Framework provides an integrated solution for automatically extracting documentation for your test units using the well-known [http://www.stack.nl/~dimitri/doxygen/ doxygen format]. In order to enable test unit documentation, here is a summary of the necessary steps:&lt;br /&gt;
* document your test unit source with [http://www.stack.nl/~dimitri/doxygen/docblocks.html doxygen] formatted blocks. Since only header files are typically passed to the [[S2scompile|stride compiler]], we recommend you place your documentation in the header file. Alternatively, if you have an implementation (c or cpp) file with the same basename located in the same directory as your header file, we will search this file for documentation as well.&lt;br /&gt;
* configure your build/make process to call the [[S2scompile|stride compiler]] with the additional &#039;&#039;&#039;--documentation&#039;&#039;&#039; flag. If you are using one of our preconfigured makefiles in a sandbox environment, this option has already been enabled.&lt;br /&gt;
* run your build process to produce a stride database and stride-enabled target application.&lt;br /&gt;
* start your application and execute the [[Stride_Runner|stride runner]] to connect and run the tests.&lt;br /&gt;
* The generated report will contain description information for the test suites and test cases generated from the doxygen blocks.&lt;br /&gt;
&lt;br /&gt;
== Recommendations and Guidelines ==&lt;br /&gt;
As mentioned above, we generally recommend that you document your test units in the header file so as to ensure that the stride toolchain is able to properly correlate test units with the extracted documentation. For simplicity, we also recommend that you place your documentation blocks as close as possible to the documented entity (class, struct, or method) so as to avoid confusion. The following are specific notes about documenting each of the three types of test units that the STRIDE Framework supports.&lt;br /&gt;
&lt;br /&gt;
=== Test Classes ===&lt;br /&gt;
Source documentation is generally straightforward, with doc blocks preceding the corresponding class and method declaration. If you prefer to locate your documentation blocks elsewhere in the header file, use the &#039;&#039;&#039;\class&#039;&#039;&#039; tag to correlate your docs and declared test class.&lt;br /&gt;
&lt;br /&gt;
=== Test C-Classes ===&lt;br /&gt;
Source documentation must relate to the structure that is used as the &amp;quot;C-Object&amp;quot; for the test unit. Since a C-Class uses function pointers to call its individual tests, test method documentation &#039;&#039;&#039;must&#039;&#039;&#039; be associated with the corresponding structure function pointer member. As such, method documentation for C-Classes must be in the header file.&lt;br /&gt;
&lt;br /&gt;
=== Test FLists ===&lt;br /&gt;
FLists have no specific storage entity (class or struct) with which they are associated. As such, the only way to provide unit-level documentation for FLists is to document the source file in which its methods are declared. The unit documentation is associated with its file using the &#039;&#039;&#039;\file&#039;&#039;&#039; tag. Because of this restriction, you will only be able to provide documentation for one FList in each header file - so we recommend that you generally confine each Test FList to its own source file pair (.h and .c). The test methods associated with an FList are documented as expected, with the documentation block preceding the function declaration.&lt;br /&gt;
&lt;br /&gt;
== Supported  Doxygen Tags ==&lt;br /&gt;
&lt;br /&gt;
Doxygen has a rich set of tags and formatting options aimed at comprehensive source documentation. The STRIDE Framework uses doxygen on a per-file basis to extract standalone documentation for individual test units and their methods. As such, we only support a limited set of doxygen features in the code documentation. The STRIDE Framework supports the following doc formatting:&lt;br /&gt;
* custom HTML formatting&lt;br /&gt;
* lists (ordered, unordered, definition)&lt;br /&gt;
* code blocks&lt;br /&gt;
* bold and emphasis text&lt;br /&gt;
More advanced doxygen formatting tags (such as tables and parameter lists) are not supported at this time, but will likely be in future releases.&lt;br /&gt;
&lt;br /&gt;
For more information on Doxygen formatting, see [http://www.doxygen.nl/docblocks.html]&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
&lt;br /&gt;
=== Test Class ===&lt;br /&gt;
&amp;lt;source lang=c&amp;gt;&lt;br /&gt;
#pragma once&lt;br /&gt;
#include &amp;lt;srtest.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
///   \brief Summary description for MyTestClass (optional)&lt;br /&gt;
///&lt;br /&gt;
/// More detailed documentation goes here&lt;br /&gt;
/// This example shows the C++ commenting style; the C style may also be used&lt;br /&gt;
///&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
class MyTestClass{&lt;br /&gt;
  public:&lt;br /&gt;
    /// &lt;br /&gt;
    ///   Description for Test_1 here.  &lt;br /&gt;
    ///&lt;br /&gt;
    bool   Test_1();&lt;br /&gt;
    &lt;br /&gt;
    /// &lt;br /&gt;
    ///   Description for Test_2 here.  &lt;br /&gt;
    ///&lt;br /&gt;
    bool   Test_2();&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
#ifdef _SCL&lt;br /&gt;
#pragma   scl_test_class(MyTestClass)&lt;br /&gt;
#endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Test C-Class ===&lt;br /&gt;
&amp;lt;source   lang=c&amp;gt;&lt;br /&gt;
#pragma once&lt;br /&gt;
#include &amp;lt;srtest.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/*!   \brief Summary description for my_c_class(optional)&lt;br /&gt;
&lt;br /&gt;
More  detailed  documentation goes here &lt;br /&gt;
&lt;br /&gt;
*/&lt;br /&gt;
typedef   struct my_c_class&lt;br /&gt;
{&lt;br /&gt;
    /*! &lt;br /&gt;
    Description   for Test_1 here.  &lt;br /&gt;
    */&lt;br /&gt;
    int     (*Test_1)(struct my_c_class* pcc);&lt;br /&gt;
    &lt;br /&gt;
    /*! &lt;br /&gt;
    Description   for Test_2 here.  &lt;br /&gt;
    */  &lt;br /&gt;
    int     (*Test_2)(struct my_c_class* pcc);&lt;br /&gt;
} my_c_class;&lt;br /&gt;
&lt;br /&gt;
#ifdef __cplusplus&lt;br /&gt;
extern   &amp;quot;C&amp;quot; {&lt;br /&gt;
#endif&lt;br /&gt;
void   my_c_class_init(struct my_c_class* pcc);&lt;br /&gt;
#ifdef   __cplusplus&lt;br /&gt;
}&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
#ifdef _SCL&lt;br /&gt;
#pragma   scl_test_cclass(my_c_class, my_c_class_init)&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Test FList ===&lt;br /&gt;
(source file name is &#039;&#039;my_flist.h&#039;&#039; in the example below)&lt;br /&gt;
&amp;lt;source lang=c&amp;gt;&lt;br /&gt;
#pragma once&lt;br /&gt;
#include &amp;lt;srtest.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#ifdef __cplusplus&lt;br /&gt;
extern   &amp;quot;C&amp;quot; {&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
/*! &lt;br /&gt;
\file   my_flist.h&lt;br /&gt;
\brief Summary description for my_flist(optional)&lt;br /&gt;
    &lt;br /&gt;
More detailed   documentation goes here  &lt;br /&gt;
 &lt;br /&gt;
*/&lt;br /&gt;
&lt;br /&gt;
/*! &lt;br /&gt;
Description   for Test_1 here.  &lt;br /&gt;
*/&lt;br /&gt;
int     Test_1();&lt;br /&gt;
&lt;br /&gt;
/*! &lt;br /&gt;
Description for Test_2 here.  &lt;br /&gt;
*/&lt;br /&gt;
int   Test_2();&lt;br /&gt;
&lt;br /&gt;
#ifdef __cplusplus&lt;br /&gt;
}&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
#ifdef _SCL&lt;br /&gt;
#pragma   scl_test_flist(&amp;quot;my_flist&amp;quot;, \&lt;br /&gt;
    Test_1,\&lt;br /&gt;
    Test_2)&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category: Test Units]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Test_Fixturing_in_C/C%2B%2B&amp;diff=12586</id>
		<title>Test Fixturing in C/C++</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Test_Fixturing_in_C/C%2B%2B&amp;diff=12586"/>
		<updated>2010-05-28T22:40:23Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Test Fixturing? ==&lt;br /&gt;
Recall that generic xUnit testing comprises four discrete phases:&lt;br /&gt;
# Setup&lt;br /&gt;
# Exercise&lt;br /&gt;
# Verify&lt;br /&gt;
# Teardown&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Test fixturing&#039;&#039; refers to the Setup and Teardown phases of testing.&lt;br /&gt;
&lt;br /&gt;
In the &#039;&#039;&#039;Setup&#039;&#039;&#039; phase, we put into place everything that is required to run our test and expect a particular outcome. This includes things like:&lt;br /&gt;
* Acquiring resources such as memory, hardware, etc.&lt;br /&gt;
* Setting up required states such as input files in place, memory filled with a pattern, dependencies initialized, etc.&lt;br /&gt;
&lt;br /&gt;
In the &#039;&#039;&#039;Teardown&#039;&#039;&#039; phase, we clean up the fixturing we did in the Setup phase, leaving the system in a state that is ready to be used by the next test.&lt;br /&gt;
&lt;br /&gt;
== The Importance of Fixturing ==&lt;br /&gt;
The proper use of fixturing can simplify test writing and lead to these benefits:&lt;br /&gt;
&lt;br /&gt;
* Separation of initialization/deinitialization code from your test code&lt;br /&gt;
* Reuse of setup and teardown code within a test unit&lt;br /&gt;
* Simplification of resource cleanup in test methods&lt;br /&gt;
&lt;br /&gt;
== STRIDE Fixturing Resources ==&lt;br /&gt;
===Specifying Fixturing Methods===&lt;br /&gt;
Within your source code, you can optionally specify &#039;&#039;&#039;setup&#039;&#039;&#039; and &#039;&#039;&#039;teardown&#039;&#039;&#039; methods using the [[Test Unit Pragmas|scl pragmas]]:&lt;br /&gt;
&lt;br /&gt;
*[[Scl test setup|scl_test_setup()]]&lt;br /&gt;
*[[Scl test teardown|scl_test_teardown()]]&lt;br /&gt;
&lt;br /&gt;
When declaring these pragmas, you specify 1) the test unit the pragma applies to, and 2) the name of the method that will be called by the STRIDE framework to perform the Setup or Teardown fixturing. If specified, the STRIDE framework will call the Setup method before each test method in the test unit, and the Teardown method after each test method in the test unit.&lt;br /&gt;
&lt;br /&gt;
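For illustration, such a declaration might look like the sketch below. The class name, fixture method names, and pragma argument order shown here are assumptions made for the example -- refer to the pragma articles above and the samples below for the exact syntax.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=c&amp;gt;&lt;br /&gt;
#pragma once&lt;br /&gt;
#include &amp;lt;srtest.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class MyTestClass&lt;br /&gt;
{&lt;br /&gt;
  public:&lt;br /&gt;
    // fixture methods: Setup runs before, and Teardown after, each test method&lt;br /&gt;
    void Setup();&lt;br /&gt;
    void Teardown();&lt;br /&gt;
&lt;br /&gt;
    bool Test_1();&lt;br /&gt;
    bool Test_2();&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
#ifdef _SCL&lt;br /&gt;
#pragma scl_test_class(MyTestClass)&lt;br /&gt;
// assumed argument order: (test unit, fixture method name)&lt;br /&gt;
#pragma scl_test_setup(MyTestClass, Setup)&lt;br /&gt;
#pragma scl_test_teardown(MyTestClass, Teardown)&lt;br /&gt;
#endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;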
You can see examples of fixturing declarations in these test samples:&lt;br /&gt;
*[[Test Class Samples]]&lt;br /&gt;
*[[Test CClass Samples]]&lt;br /&gt;
&lt;br /&gt;
===Advanced Fixturing===&lt;br /&gt;
A common test pattern--especially in the area of multimedia--is to create a test that is parametrized by an input file. The test is run multiple times with a different input file used for each run.&lt;br /&gt;
&lt;br /&gt;
In this case the setup fixturing makes the file data available to the test (typically opening a file on the host, then copying data from host to target), and the teardown fixturing removes any files created on the target and so forth.&lt;br /&gt;
&lt;br /&gt;
STRIDE offers an integrated solution to file fixturing that makes it possible, from your target test code, to specify a host-based file and transfer its data to the target. Refer to the [[File_Services_Samples|File Services Samples]], which demonstrate techniques and syntax for performing basic tasks using the [[File Transfer Services|File Transfer Services API]].&lt;br /&gt;
&lt;br /&gt;
[[Category: Test Units]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Test_API&amp;diff=12585</id>
		<title>Test API</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Test_API&amp;diff=12585"/>
		<updated>2010-05-28T22:38:52Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The APIs for testing in C/C++ are documented in the following articles:&lt;br /&gt;
&lt;br /&gt;
* [[Test Point  Testing in C/C++|Test Point Testing API]] (for writing native test point validation tests)&lt;br /&gt;
* [[Runtime Test Services|Runtime Test Services API]] (for dynamic test result manipulation)&lt;br /&gt;
* [[File  Transfer Services|File Transfer API]] (for host file manipulation from the device under test)&lt;br /&gt;
* [[Test Fixturing in C/C++|Test Fixturing]] (for test unit fixturing in native code)&lt;br /&gt;
* [[Test  Documentation in C/C++|Test Documentation]] (how to document your native test units)&lt;br /&gt;
* [[Using Test Doubles|Test Doubles]] (for advanced dependency interception/function mocking in native code)&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Test_API&amp;diff=12584</id>
		<title>Test API</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Test_API&amp;diff=12584"/>
		<updated>2010-05-28T22:38:37Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Test APIs for testing in C/C++ are documented in the following articles:&lt;br /&gt;
&lt;br /&gt;
* [[Test Point  Testing in C/C++|Test Point Testing API]] (for writing native test point validation tests)&lt;br /&gt;
* [[Runtime Test Services|Runtime Test Services API]] (for dynamic test result manipulation)&lt;br /&gt;
* [[File  Transfer Services|File Transfer API]] (for host file manipulation from the device under test)&lt;br /&gt;
* [[Test Fixturing in C/C++|Test Fixturing]] (for test unit fixturing in native code)&lt;br /&gt;
* [[Test  Documentation in C/C++|Test Documentation]] (how to document your native test units)&lt;br /&gt;
* [[Using Test Doubles|Test Doubles]] (for advanced dependency interception/function mocking in native code)&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Main_Page&amp;diff=12583</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Main_Page&amp;diff=12583"/>
		<updated>2010-05-28T22:32:36Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:#0067A5&amp;quot;&amp;gt; &amp;lt;font size=&amp;quot;5&amp;quot;&amp;gt; Welcome to the STRIDE™ Wiki &amp;lt;/font&amp;gt; &amp;lt;/span&amp;gt; &lt;br /&gt;
&lt;br /&gt;
STRIDE has been designed specifically for testing embedded software On-Target. &lt;br /&gt;
&amp;lt;hr/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For an overview of STRIDE, including screencasts, [[STRIDE Overview| please click here]]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;hr/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;FCK__ShowTableBorders&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
{| class=&amp;quot;FCK__ShowTableBorders&amp;quot; style=&amp;quot;border-right: rgb(187,204,204) 1px solid; border-top: rgb(187,204,204) 1px solid; vertical-align: top; border-left: rgb(187,204,204) 1px solid; border-bottom: rgb(187,204,204) 1px solid&amp;quot; cellspacing=&amp;quot;5&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Source Instrumentation | Instrumentation]] &lt;br /&gt;
&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Tests in Script| Tests in Script]]&lt;br /&gt;
&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Test Units| Tests in C/C++]]&lt;br /&gt;
&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Running Tests | Running Tests]] &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 1&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
* [[Source Instrumentation Overview | Overview]]&lt;br /&gt;
* [[Test Point | Test Points]]&lt;br /&gt;
* [[Test Log | Test Logs]]&lt;br /&gt;
* [[Function_Capturing | Functions]]&lt;br /&gt;
* [[Expectations | Expectations]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 2&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
* [[Test Modules Overview | Overview]]&lt;br /&gt;
* [[Perl Script APIs | Perl Script APIs]]&lt;br /&gt;
* [[Perl Script Snippets]]&lt;br /&gt;
* [[Script Samples | Samples]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 3&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
* [[Test Units Overview | Overview]]&lt;br /&gt;
* [[Test Units]]&lt;br /&gt;
* [[Test Macros | Test Macros]]&lt;br /&gt;
* [[Test API | Test APIs]]&lt;br /&gt;
* [[C/C++ Samples | Samples]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 4&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
* [[Running Tests]]&lt;br /&gt;
* [[Listing Functions and Test Units|Listing Functions/Tests]]&lt;br /&gt;
* [[Tracing | Tracing]]&lt;br /&gt;
* [[Organizing Tests into Suites]]&lt;br /&gt;
* [[Setting up your CI Environment]]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:STRIDE Test Space|Test Space]]&lt;br /&gt;
&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Reference | Reference]]&lt;br /&gt;
&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Installation | Installation]]&lt;br /&gt;
&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Training | Training]]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- ROW 2 --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 5&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
* [[STRIDE_Test_Space|Overview]]&lt;br /&gt;
* [[User Administration]]&lt;br /&gt;
* [[Creating Test Spaces]]&lt;br /&gt;
* [[Uploading Test Results]]&lt;br /&gt;
* [[Notifications]]&lt;br /&gt;
* [[Reporting Entities]]&lt;br /&gt;
* [[Creating And Using Baselines]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 6&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
* [[Stride Runner|STRIDE Runner]]&lt;br /&gt;
* [[Build Tools|STRIDE Build Tools]]&lt;br /&gt;
* [[Runtime Reference|STRIDE Runtime]]&lt;br /&gt;
* [[Platform Abstraction Layer]]&lt;br /&gt;
* [[Intercept Module]]&lt;br /&gt;
* [[Test Unit Pragmas]]&lt;br /&gt;
* [[Reporting Model]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 7&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
* [[Desktop Installation | Desktop]]&lt;br /&gt;
* [[Test Space Setup | Test Space]]&lt;br /&gt;
* [[Off-Target Environment | Off-Target]]&lt;br /&gt;
* [[Integration Overview | Target]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 8&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
* [[Training Overview | Overview]]&lt;br /&gt;
* [[Training Prerequisites | Prerequisites]]&lt;br /&gt;
* [[Training Instrumentation | Instrumentation]]&lt;br /&gt;
* [[Training Tests in Script | Tests in Script]]&lt;br /&gt;
* [[Training Tests in C/C++ | Tests in C/C++]]&lt;br /&gt;
* [[Training Running Tests | Running Tests]]&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Click here for [[:Category:Release Notes| Release Notes]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Main_Page&amp;diff=12582</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Main_Page&amp;diff=12582"/>
		<updated>2010-05-28T22:32:21Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;color:#0067A5&amp;quot;&amp;gt; &amp;lt;font size=&amp;quot;5&amp;quot;&amp;gt; Welcome to the STRIDE™ Wiki &amp;lt;/font&amp;gt; &amp;lt;/span&amp;gt; &lt;br /&gt;
&lt;br /&gt;
STRIDE has been designed specifically for testing embedded software On-Target. &lt;br /&gt;
&amp;lt;hr/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For an overview of STRIDE, including screencasts, [[STRIDE Overview| please click here]]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;hr/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;FCK__ShowTableBorders&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
{| class=&amp;quot;FCK__ShowTableBorders&amp;quot; style=&amp;quot;border-right: rgb(187,204,204) 1px solid; border-top: rgb(187,204,204) 1px solid; vertical-align: top; border-left: rgb(187,204,204) 1px solid; border-bottom: rgb(187,204,204) 1px solid&amp;quot; cellspacing=&amp;quot;5&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Source Instrumentation | Instrumentation]] &lt;br /&gt;
&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Tests in Script| Tests in Script]]&lt;br /&gt;
&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Test Units| Tests in C/C++]]&lt;br /&gt;
&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Running Tests | Running Tests]] &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 1&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
* [[Source Instrumentation Overview | Overview]]&lt;br /&gt;
* [[Test Point | Test Points]]&lt;br /&gt;
* [[Test Log | Test Logs]]&lt;br /&gt;
* [[Function_Capturing | Functions]]&lt;br /&gt;
* [[Expectations | Expectations]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 2&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
* [[Test Modules Overview | Overview]]&lt;br /&gt;
* [[Perl Script APIs | Perl Script APIs]]&lt;br /&gt;
* [[Perl Script Snippets]]&lt;br /&gt;
* [[Script Samples | Samples]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 3&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
* [[Test Units Overview | Overview]]&lt;br /&gt;
* [[Test Units]]&lt;br /&gt;
* [[Test Macros | Test Macros]]&lt;br /&gt;
* [[Test API | Test API]]&lt;br /&gt;
* [[C/C++ Samples | Samples]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 4&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
* [[Running Tests]]&lt;br /&gt;
* [[Listing Functions and Test Units|Listing Functions/Tests]]&lt;br /&gt;
* [[Tracing | Tracing]]&lt;br /&gt;
* [[Organizing Tests into Suites]]&lt;br /&gt;
* [[Setting up your CI Environment]]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:STRIDE Test Space|Test Space]]&lt;br /&gt;
&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Reference | Reference]]&lt;br /&gt;
&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Installation | Installation]]&lt;br /&gt;
&lt;br /&gt;
! style=&amp;quot;border-right: rgb(163,176,191) 1px solid; padding-right: 0.4em; border-top: rgb(163,176,191) 1px solid; padding-left: 0.4em; font-weight: bold; font-size: 120%; background: rgb(206,223,242) 0% 50%; padding-bottom: 0.2em; margin: 0pt; border-left: rgb(163,176,191) 1px solid; color: rgb(0,0,0); padding-top: 0.2em; border-bottom: rgb(163,176,191) 1px solid; text-align: left; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial&amp;quot; | [[:Category:Training | Training]]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- ROW 2 --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 5&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
* [[STRIDE_Test_Space|Overview]]&lt;br /&gt;
* [[User Administration]]&lt;br /&gt;
* [[Creating Test Spaces]]&lt;br /&gt;
* [[Uploading Test Results]]&lt;br /&gt;
* [[Notifications]]&lt;br /&gt;
* [[Reporting Entities]]&lt;br /&gt;
* [[Creating And Using Baselines]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 6&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
* [[Stride Runner|STRIDE Runner]]&lt;br /&gt;
* [[Build Tools|STRIDE Build Tools]]&lt;br /&gt;
* [[Runtime Reference|STRIDE Runtime]]&lt;br /&gt;
* [[Platform Abstraction Layer]]&lt;br /&gt;
* [[Intercept Module]]&lt;br /&gt;
* [[Test Unit Pragmas]]&lt;br /&gt;
* [[Reporting Model]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 7&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; | &lt;br /&gt;
* [[Desktop Installation | Desktop]]&lt;br /&gt;
* [[Test Space Setup | Test Space]]&lt;br /&gt;
* [[Off-Target Environment | Off-Target]]&lt;br /&gt;
* [[Integration Overview | Target]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Cell 8&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; |&lt;br /&gt;
* [[Training Overview | Overview]]&lt;br /&gt;
* [[Training Prerequisites | Prerequisites]]&lt;br /&gt;
* [[Training Instrumentation | Instrumentation]]&lt;br /&gt;
* [[Training Tests in Script | Tests in Script]]&lt;br /&gt;
* [[Training Tests in C/C++ | Tests in C/C++]]&lt;br /&gt;
* [[Training Running Tests | Running Tests]]&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Click here for [[:Category:Release Notes| Release Notes]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
	<entry>
		<id>https://www.stridewiki.com/index.php?title=Test_API&amp;diff=12581</id>
		<title>Test API</title>
		<link rel="alternate" type="text/html" href="https://www.stridewiki.com/index.php?title=Test_API&amp;diff=12581"/>
		<updated>2010-05-28T22:31:47Z</updated>

		<summary type="html">&lt;p&gt;Mikee: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Test Points ==&lt;br /&gt;
&lt;br /&gt;
Refer to [[Test Point Testing in C/C++|this article]].&lt;br /&gt;
&lt;br /&gt;
== Test Fixturing ==&lt;br /&gt;
&lt;br /&gt;
Refer to [[Test Fixturing in C/C++|this article]].&lt;br /&gt;
&lt;br /&gt;
== Test Documentation ==&lt;br /&gt;
&lt;br /&gt;
Refer to [[Test Documentation in C/C++|this article]].&lt;br /&gt;
&lt;br /&gt;
== Runtime Test Services ==&lt;br /&gt;
&lt;br /&gt;
Refer to [[Runtime Test Services|this article]] for a complete listing of the Runtime Test Services APIs.&lt;br /&gt;
&lt;br /&gt;
== File Transfer Services ==&lt;br /&gt;
&lt;br /&gt;
Refer to [[File Transfer Services|this article]] for a complete listing of the File Transfer Services APIs.&lt;br /&gt;
&lt;br /&gt;
== Test Doubles ==&lt;br /&gt;
&lt;br /&gt;
Refer to [[Using Test Doubles|this article]].&lt;br /&gt;
&lt;br /&gt;
[[Category:Test Units]]&lt;/div&gt;</summary>
		<author><name>Mikee</name></author>
	</entry>
</feed>