Perl Script APIs
__NOTOC__
The Stride perl script model requires you to create perl modules (*.pm files) to group your tests. The following documents the API for the STRIDE::Test base class that you use when creating these modules.


= STRIDE::Test =


This is the base class for test modules. It provides the following methods.


== Declaring Tests ==


Once you have created a package that inherits from STRIDE::Test, you can declare any subroutine to be a test method by giving it the '': Test'' attribute. In addition to test methods, the following attributes declare other kinds of subroutines:


{| class="prettytable"
| colspan="2" | '''subroutine attributes'''


|-valign="top"
! attribute !! description


|-valign="top"
| Test
| declares a test method - will be executed automatically when the module is run.


|-valign="top"
| Test(startup)
| startup method, called once before any of the test methods have been executed.


|-valign="top"
| Test(shutdown)
| shutdown method, called once after all test methods have been executed.


|-valign="top"
| Test(setup)
| setup fixture, called before each test method.


|-valign="top"
| Test(teardown)
| teardown fixture, called after each test method.


|}


You are free to declare as many methods as you like with these attributes. When more than one method has been declared with the same attribute, the methods will be called at the appropriate time in the order declared.
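Putting the attributes together, a minimal test module might look like the following sketch (the package name and method names here are hypothetical, chosen only for illustration):

<source lang="perl">
package MyTests;                      # hypothetical module: MyTests.pm
use strict;
use warnings;
use base 'STRIDE::Test';

# called once, before any test method runs
sub Init : Test(startup) { }

# called before / after each individual test method
sub Setup    : Test(setup)    { }
sub Teardown : Test(teardown) { }

# an ordinary test method - runs automatically when the module is run
sub testSomething : Test
{
    my $self = shift;
    # test body goes here
}

# called once, after all test methods have run
sub Done : Test(shutdown) { }

1;
</source>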


== Methods ==


These methods are all available in the context of a test module that inherits from STRIDE::Test.


{| class="prettytable"
| colspan="2" | '''Methods'''


|-valign="top"
! name !! description
|-valign="top"
| <source lang="perl">TestCase</source>
| returns the default test case object. See [[#TestCase|here]] for more info.

|-valign="top"
| <source lang="perl">Remote</source>
| Returns a [[#STRIDE::Remote|STRIDE::Remote]] object that was initialized with the active database. This object is used for accessing remote functions and constant values (macros and enums) that were captured at the time of compilation.

|-valign="top"
| <source lang="perl">GetParam(name, defval)</source>
| returns an input parameter's value associated with the test suite.

|-valign="top"
| <source lang="perl">TestCase AddCase(name, description)</source>
| creates a new test case and adds it to the test suite.

|-valign="top"
| <source lang="perl">TestAnnotation AddAnnotation(level, name, description)</source>
| creates a new test annotation and adds it to the test suite. The value of ''level'' can be one of the predefined constants:
* '''ANNOTATION_TRACE'''<ref name="const">symbols like these are exported as perl constants - don't quote these values when you use them - rather, use the bare symbols and perl will use the constant value we provide</ref>
* '''ANNOTATION_DEBUG'''
* '''ANNOTATION_INFO'''
* '''ANNOTATION_ERROR'''
* '''ANNOTATION_FATAL'''

|-valign="top"
| <source lang="perl">SetData(name, value)</source>
| associates a custom name-value pair with the test suite.


|-valign="top"
| <source lang="perl">TestPointSetup(
   expected => [],
   unexpected => [],
   ordered => 0|1,
   strict => 0|1,
   continue => 0|1,
   expect_file => filepath,
   predicate => coderef,
   replay_file => filepath,
   test_case => case)</source>
| Creates a new instance of [[#STRIDE::TestPoint|STRIDE::TestPoint]], automatically passing the default TestCase() as the test case if none is provided. Options are passed using hash-style arguments. The supported arguments are:
; ordered : flag that specifies whether or not the expectation set must occur in the order specified. If set to true, the expectation list is treated as an ordered list, otherwise we assume the list is unordered. Default value is true (ordered processing).
; strict : flag that specifies if the list is exclusive. If strict is specified, then the actual events (within the specified universe of test points) must match the expectation list exactly as specified. If strict processing is disabled, then other test points within the universe are allowed to occur between items specified in the expectation list. The default value is true (strict processing).
; continue : flag that specifies whether the expectation should continue to process test points even after the specified expectation has been minimally satisfied. Normally, a test will complete as soon as the expectation is satisfied. In some circumstances, however, it is necessary to continue waiting until the full timeout period (specified to the Wait function) before exiting. The default value is 0, indicating that the test will exit as soon as it is satisfied (or otherwise fails).
; expected : an array reference containing elements that are either strings representing the test point labels '''OR''' an anonymous sub-array that contains these four items: '''[label, count, predicate, expected_data]'''. The four elements are:
* ''label'' is the test point label (same as the non-array argument type). The label is also allowed to be specified using one of the predefined constant values: '''TEST_POINT_ANY_IN_SET'''<ref name="const"/> or '''TEST_POINT_ANY_AT_ALL'''. '''TEST_POINT_ANY_IN_SET''' is used to indicate a place in the expectation list where '''any''' test point that's otherwise explicitly defined in the set is permitted. '''TEST_POINT_ANY_AT_ALL''' is used to indicate a place in the expectation where '''any''' test point in the system is allowed. When either special value is used in the expectation list, a predicate must ''also'' be specified. The test point is only considered satisfied when the predicate returns true. These special values can be used to implement, among other things, startup scenarios where you want to defer your expectation list processing until a particular test point and/or data state have been encountered. When specifying either value in an '''unordered''' expectation, it is only allowed to appear '''once''' in the expectation list and, in that case, it is treated as a startup expectation whereby none of the other test points are processed until the startup expectation has been satisfied (when its predicate returns true).
* ''count'' is the number of expected occurrences - can be any positive integer value, or the special value '''TEST_POINT_ANY_COUNT'''<ref name="const"/>.
* ''predicate'' must be a perl coderef for a function to call as a predicate. You are free to define your own predicate function OR use one of the three [[#Predicates|standard ones]] provided by STRIDE::Test.
* ''expected_data'' will be passed as an argument to the predicate - intended to be used to specify expected data.
If using the array form of expectation, only the label entry is required - the remaining elements are optional.
; unexpected : an array reference containing labels that are to be treated as failures if they are encountered. For either ''expected'' or ''unexpected'', the special value '''TEST_POINT_EVERYTHING_ELSE'''<ref name="const"/> can be used alone to indicate that any test points not explicitly listed in the set are considered part of this set.
; expect_file : if you have previously captured YAML trace data in a file (using the [[Stride Runner]]), you can specify the file as the source of expectations using this parameter. If you specify a YAML trace file, you should NOT also specify the '''expected''' items, as they will be overridden by the YAML trace file.
; predicate : a perl coderef to a default predicate function to use for all expectation items. If any specific entry in the expectation list has a predicate function, the expectation's predicate will override this global value. By default, no global predicate is assumed and no predicate is called unless specified for each expectation item.
; replay_file : allows you to specify a YAML trace file as '''''input''''' to the current test. If specified, the expectation will be validated against the events specified in the file rather than live events generated by a device under test.
; test_case : allows you to specify a test case to use for reporting the results. This is only useful for advanced users that are generating test cases dynamically within a test method.


|}
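As an illustration, a test method might set up an expectation and then wait on it; in this sketch the test point labels ('''TP_START''', '''TP_DATA''', '''TP_ERROR''') are hypothetical, and the returned object's '''Wait''' method (with a millisecond timeout, as referenced in the ''continue'' description above) is assumed:

<source lang="perl">
sub testEvents : Test
{
    my $self = shift;

    # expect TP_START once, then exactly two TP_DATA hits, in order;
    # fail if TP_ERROR is ever seen
    my $tp = $self->TestPointSetup(
        expected   => [ 'TP_START', [ 'TP_DATA', 2 ] ],
        unexpected => [ 'TP_ERROR' ],
        ordered    => 1,
        strict     => 1);

    # ... exercise the code under test here ...

    $tp->Wait(5000);    # wait up to 5 seconds for the expectation
}
</source>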


== Assertions ==
 
Each of the following assertion methods is provided for standard comparisons. For each, there are three different types, depending on the desired behavior upon failure: '''EXPECT''', '''ASSERT''', and '''EXIT'''. '''EXPECT''' checks will fail the current test case but continue executing the test method. '''ASSERT''' checks will fail the current test case and exit the test method immediately. '''EXIT''' checks will fail the current test case, immediately exit the current test method AND cease further execution of the test module.

For simplicity, we refer to all the macros using a '''''prefix''''' tag - when using the macros in test code, the '''''prefix''''' should be replaced by one of the following: '''EXPECT''', '''ASSERT''', or '''EXIT''', depending on how the test writer wants failures to be handled.




{| class="prettytable"
| colspan="2" | '''Boolean'''


|-
! macro !! Pass if
|-
| '''''prefix'''''_TRUE(''cond'');
| ''cond'' evaluates to true
|}


{| class="prettytable"
| colspan="2" | '''Comparison'''
|-
! macro !! Pass if


|-
| '''''prefix'''''_EQ(''val1'', ''val2'');
| ''val1'' == ''val2''


|-
| '''''prefix'''''_NE(''val1'', ''val2'');
| ''val1'' != ''val2''
|}


For all of the value comparison methods ('''_EQ''', '''_NE''', etc.), the comparison is numeric if both arguments are numeric -- otherwise the comparison is a case-sensitive string comparison. If case-insensitive comparison is needed, simply wrap both arguments with perl's builtin '''lc()''' (lowercase) or '''uc()''' (uppercase) functions.


{| class="prettytable"
| colspan="2" | '''Predicates'''
|-
! macro !! Pass if


|-
| '''''prefix'''''_PRED(''coderef'', ''data'')
| ''&coderef''(''data'') returns true. The predicate function is specified by ''coderef'' with optional data ''data''. The predicate can also return the special value '''TEST_POINT_IGNORE'''<ref name="const"/> to indicate that the event should be ignored.


|}


Each of these assertion methods also supports the following optional named arguments:
; test_case => case : allows you to apply the check to a test case other than the current default
; message => "message" : allows you to specify an additional message to include if the check fails.


Because these arguments are optional, they are passed using named argument (hash-style) syntax after the required parameters that are shown above.
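For example, a test method might mix the failure behaviors; the values checked here are hypothetical:

<source lang="perl">
sub testMath : Test
{
    my $self = shift;

    my $result = 2 + 2;

    # EXPECT: fails the case but keeps executing the test method
    EXPECT_EQ($result, 4, message => "addition sanity check");

    # ASSERT: fails the case and exits the test method immediately
    ASSERT_TRUE($result > 0);
}
</source>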


== Annotations ==


The following methods can be used to annotate a test case. Typically these methods are used to add additional information about the state of the test to the report.


{| class="prettytable"
| colspan="2" | '''Annotation Methods'''


|-valign="top"
! name !! description


|-valign="top"
| <source lang="perl">NOTE_INFO(message)</source>
| creates an info note in your test results report.


|-valign="top"
| <source lang="perl">NOTE_WARN(message)</source>
| creates a warning note in your test results report.


|-valign="top"
| <source lang="perl">NOTE_ERROR(message)</source>
| creates an error note in your test results report.


|}


Each of these note methods also supports the following optional named arguments:
; test_case => case : allows you to add the note to a test case other than the current default
; file => file : allows you to attach a file along with the annotation message.
; test_point => test_point_hashref : If you are annotating your report in the context of a predicate with a specific test point, you might want to specify the test point using this parameter. This will cause your annotation to be grouped in the final report with the annotation message that corresponds to the test point hit message. By default, a host timestamp value is used to generate the NOTE annotation, which generally causes the NOTE annotations to group toward the end of the test case report.


Because these arguments are optional, they are passed using named argument (hash-style) syntax after the required parameters that are shown above.


== Documentation ==


We have preliminary support for documentation extraction in the test modules using the standard perl POD formatting tokens.


The POD that you include in your test module currently must follow these conventions:


* It must begin with a ''head1 NAME'' section, and the text of this section must contain the name of the package, preferably near the beginning.
* A ''head1 DESCRIPTION'' section can follow the ''NAME'' section. If provided, it will be used as the description of the test suite created for the test unit.
* This NAME/DESCRIPTION block must finish with an empty ''head1 METHODS'' section.
* Each of the test methods can be documented by preceding it with a ''head2'' section with the same name as the test method (subroutine name). The text in this section will be used as the test case description.
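Putting these conventions together, a documented test module might contain POD like the following sketch (the package and method names are hypothetical):

<source lang="perl">
=head1 NAME

MyTests - exercises the widget subsystem

=head1 DESCRIPTION

This text becomes the description of the generated test suite.

=head1 METHODS

=head2 testSomething

This text becomes the description of the testSomething test case.

=cut

sub testSomething : Test
{
    # ...
}
</source>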


== Predicates ==


STRIDE expectation testing allows you to specify predicate functions for sophisticated data validation. We provide several standard predicates in the STRIDE::Test package, or you are free to define your own predicate functions.


=== Builtin Predicates ===
The STRIDE::Test library provides a few standard predicates which you are free to use in your expectations:


{| class="prettytable"
| colspan="2" | '''Built-In Predicates'''


|-valign="top"
! predicate !! description


|-valign="top"
| '''TestPointStrCmp'''
| does a case '''''sensitive''''' comparison of the test point data and the ''expected_data'' (specified as part of the expectation)


|-valign="top"
| '''TestPointStrCaseCmp'''
| does a case '''''insensitive''''' comparison of the test point data and the ''expected_data''


|-valign="top"
| '''TestPointMemCmp'''
| does a bytewise comparison of the test point data and the ''expected_data''


|-valign="top"
| '''TestPointDefaultCmp'''
| pass-through function that calls '''TestPointMemCmp''' for binary test point data or '''TestPointStrCmp''' otherwise. This is useful as a global predicate since it implements an appropriate default data comparison.


|}


=== User Defined Predicates ===
User defined predicates are subroutines of the following form:


<source lang="perl">
sub myPredicate
{
    my ($test_point, $expected_data) = @_;
    my $status = 0;

    # access the test point data as $test_point->{data},
    # and the label as $test_point->{label}

    # set $status according to whether or not your predicate passes
    return $status;
}
</source>


The predicate function is passed two arguments: the current test point and the expected data that was specified as part of the expectation. The test point is a reference to a hash with the following fields:
; label : the test point label
; data : the data payload for the test point (if any)
; data_as_hex : an alternate form of the data payload, rendered as a string of hex characters
; size : the size of the data payload
; bin : flag indicating whether or not the data payload is binary
; file : the source file for the test point
; line : the line number for the test point
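For instance, a predicate that validates the payload against an expected string might look like the following sketch (the expectation entry shown in the comment uses a hypothetical label):

<source lang="perl">
sub checkPayload
{
    my ($test_point, $expected_data) = @_;

    # ignore test points that carry no payload at all
    return TEST_POINT_IGNORE unless defined $test_point->{data};

    # pass only when the payload matches the expected string exactly
    return $test_point->{data} eq $expected_data ? 1 : 0;
}

# used in an expectation (label is hypothetical):
#   expected => [ [ 'TP_DATA', 1, \&checkPayload, 'hello' ] ]
</source>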
 
The expected data is passed as a single scalar, but you can use references to compound data structures (hashes, arrays) if you need more complex expected data.

The predicate function should return a true value if it passes, false if not, or '''TEST_POINT_IGNORE'''<ref name="const"/> if the test point should be ignored completely.

= STRIDE::Remote =

The ''STRIDE::Remote'' class uses perl '''AUTOLOAD'''ing to provide a convenient syntax for making simple function calls and retrieving database constants in perl. Given any properly initialized ''STRIDE::Test'' object, any captured function or constant (macro) is available directly as a method or property of the exported ''Remote'' object. Constants can also be accessed via the tied hash ''constants'' member.

For example, given a database with two functions and a macro:

<source lang="c">
int foo(const char * path);
void bar(double value);

#define MY_PI_VALUE 3.1415927
</source>

In perl, these methods/constants are invokable using the exported ''STRIDE::Test'' Remote object:


<source lang="perl">
<source lang="perl">
my $retval = Functions->foo("my string");
my $retval = Remote->foo("my string");
Functions->bar(Constants->{MY_PI_VALUE});
Remote->bar(Constants->{MY_PI_VALUE});
</source>
</source>


=== Asynchronous invocation ===  
== Asynchronous invocation ==  


Functions can also be called asynchronously by accessing functions using the {async} delegator within the function object. When invoked this way, the function call will return a handle object that can be used to wait for the function return value - for example:
Functions can also be called asynchronously by using the ''async'' delegator within the ''Remote'' object. When invoked this way, the function call will return a handle object that can be used to wait for the function return value - for example:


<source lang="perl">
<source lang="perl">
my $h = Functions->{async}->foo("my string");
my $h = Remote->async->foo("my string");
my $retval = $h->Wait(1000);
my $retval = $h->Wait(1000);
</source>
</source>


The '''Wait''' function takes one optional argument -- the timeout duration (in milliseconds) that indicates the maximum time to wait for the function to return. If the timeout value is not provided, ''Wait'' will wait indefinitely for the function to return. If a timeout is specified and expires before the function returns, the method with ''die'' with a timeout error message - so you might want to wrap your '''Wait''' call in an <tt>eval{};</tt> statement if you want to gracefully handle the timeout condition.
The '''Wait''' function takes one optional argument -- the timeout duration (in milliseconds) that indicates the maximum time to wait for the function to return. If the timeout value is not provided, ''Wait'' will wait indefinitely for the function to return. If a timeout is specified and expires before the function returns, the method with ''die'' with a timeout error message - so you might want to wrap your '''Wait''' call in an <tt>eval{};</tt> statement if you want to gracefully handle the timeout condition.
 
= STRIDE::TestPoint =
 
STRIDE::TestPoint  objects are used to create test point expectation tests. These objects  are created using the exported TestPointSetup factory function of the  STRIDE::Test class. Once a STRIDE::TestPoint object has been created  with the desired expectations, two functions can be called:
 
; Wait(timeout) : This method processes test  points that have occurred on the target and assesses failure based on  the parameters you provided when creating the TestPoint object. The  timeout parameter indicates how long (in milliseconds) to Wait for the  specified events. If no timeout value is provided, '''Wait''' will proceed indefinitely or until a  clear pass/failure determination can be made.
; Check() : this  is equivalent to Wait with a very small timeout. As such, it  essentially verifies that your specified test points have already been  hit.


== STRIDE::TestPoint ==
= Reporting Model =


STRIDE::TestPoint objects are used to create test point expectation tests. These objects are created using the exported TestPointSetup factory function of the STRIDE::Test class. Once a STRIDE::TestPoint object has been created with the desired expectations, two functions can be called:
The STRIDE perl framework includes an implementation of our [[Reporting Model]] that is common across all STRIDE components. The Test module gives explicit access to key elements of the report model, Cases and Annotaions. Here are a description of the methods available for each of these Objects.


; Wait(timeout) : This method processes test points that have occurred on the target and assesses failure based on the parameters you provided when creating the TestPoint object. The timeout parameter indicates how long (in milliseconds) to Wait for the specified events. If no timeout value is provided, '''Wait''' will proceed indefinitely or until a clear pass/failure determination can be made.
== TestCase ==
; Check() : this is equivalent to Wait with a very small timeout. As such, it essentially verifies that your specified test points have already been hit.
 
{|  class="prettytable"
| colspan="2" | '''Methods'''
 
|-valign="top" 
!  name !! description
 
|-valign="top"
| <source lang="perl">SetStatus(status, duration) </source>
| sets the test case status. The value of ''status'' could be one of the predefined constants:
* '''TEST_FAIL'''<ref name="const"/>
* '''TEST_PASS'''
* '''TEST_NOTINUSE'''
* '''TEST_INPROGRESS'''
* '''TEST_DONE''' - applicable to dynamic cases - sets the status to '''pass''' unless already set to '''fail''' or '''not-in-use'''
 
|-valign="top"
| <source lang="perl">TestAnnotation AddAnnotation(level, name, description)</source>
| creates a new test annotation and adds it to the test case. The value of ''level'' could be one of the predefined consttants:
* '''ANNOTATION_TRACE'''<ref name="const"/>
* '''ANNOTATION_DEBUG'''
* '''ANNOTATION_INFO'''
* '''ANNOTATION_WARNING'''
* '''ANNOTATION_ERROR'''
* '''ANNOTATION_FATAL'''
 
|-valign="top"
| <source lang="perl">SetData(name, value)</source>
| associates a custom name-value pair with the test case.
 
|}
 
== TestAnnotation ==
 
{|  class="prettytable"
| colspan="2" | '''Methods'''
 
|-valign="top" 
!  name !! description
 
|-valign="top"
| <source lang="perl">AddComment(label, message)</source>
| adds a new comment to the test annotation.
 
|}


== Notes ==
== Notes ==
<references/>
<references/>
[[Category:Tests in Script]]

Latest revision as of 22:22, 27 July 2016

The Stride perl script model requires you to create perl modules (*.pm files) to group your tests. The following documents the API for the STRIDE::Test base class that you use when creating these modules.

STRIDE::Test

This is the base class for test modules. It provides the following methods.

Declaring Tests

Once you have created a package that inherits from STRIDE::Test, you can declare any subroutine to be a test method by declaring it with the : Test attribute. In addition to test methods, the following attributes declare other kinds of subroutines:

subroutine attributes

Test
declares a test method - will be executed automatically when the module is run.
Test(startup)
startup method, called once before any of the test methods have been executed.
Test(shutdown)
shutdown method, called once after all test methods have been executed.
Test(setup)
setup fixture, called before each test method.
Test(teardown)
teardown fixture, called after each test method.

You are free to declare as many methods as you like with these attributes. When more than one method has been declared with the same attribute, the methods will be called at the appropriate time in the order declared.
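A minimal test module using these attributes might look like the following sketch (the package name, method names, and parameter name are hypothetical, and we assume GetParam is invoked on the test object):

```perl
package MyTests;                       # hypothetical module name
use strict;
use warnings;
use base qw(STRIDE::Test);

# called once before any test methods run
sub Startup : Test(startup) { }

# called before each test method
sub Setup : Test(setup)
{
    my $self = shift;
    # e.g. read a suite input parameter (name is hypothetical)
    $self->{device} = $self->GetParam("device_name", "simulator");
}

# an ordinary test method - runs automatically when the module runs
sub testSomething : Test
{
    my $self = shift;
    # test body goes here
}

# called after each test method
sub Teardown : Test(teardown) { }

# called once after all test methods have run
sub Shutdown : Test(shutdown) { }

1;
```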

Methods

These methods are all available in the context of a test module that inherits from STRIDE::Test.

Methods

TestCase
returns the default test case object. See the TestCase section below for more info.
Remote
returns a STRIDE::Remote object that was initialized with the active database. This object is used for accessing the remote functions and constant values (macros and enums) that were captured at compile time.
GetParam(name, defval)
returns an input parameter's value associated with the test suite.
TestCase AddCase(name, description)
creates a new test case and adds it to the test suite.
TestAnnotation AddAnnotation(level, name, description)
creates a new test annotation and adds it to the test suite. The value of level could be one of the predefined constants:
  • ANNOTATION_TRACE[1]
  • ANNOTATION_DEBUG
  • ANNOTATION_INFO
  • ANNOTATION_WARNING
  • ANNOTATION_ERROR
  • ANNOTATION_FATAL
SetData(name, value)
associates a custom name-value pair with the test suite.
TestPointSetup(
  expected => [],
  unexpected => [],
  ordered => 0|1,
  strict => 0|1,
  continue => 0|1,
  expect_file => filepath,
  predicate => coderef,
  replay_file => filepath,
  test_case => case)
Creates a new instance of STRIDE::TestPoint, automatically passing the default TestCase() as the test case if none is provided. Options are passed using hash-style arguments. The supported arguments are:
ordered
flag that specifies whether or not the expectation set must occur in the order specified. If set to true, the expectation list is treated as an ordered list, otherwise we assume the list is unordered. Default value is true (ordered processing).
strict
flag that specifies if the list is exclusive. If strict is specified, then the actual events (within the specified universe of test points) must match the expectation list exactly as specified. If strict processing is disabled, then other test points within the universe are allowed to occur between items specified in the expectation list. The default value is true (strict processing).
continue
flag that specifies whether the expectation should continue to process test points even after the specified expectation has been minimally satisfied. Normally, a test will complete as soon as the expectation is satisfied. In some circumstances, however, it is necessary to continue waiting until the full timeout period (specified to the Wait function) before exiting. The default value is 0, indicating that the test will exit as soon as it is satisfied (or otherwise fails).
expected
is an array reference whose elements are either strings representing the test point labels OR anonymous sub-arrays that contain these four items: [label, count, predicate, expected_data]. The elements of the four-item form are:
  • label is the test point label (same as the non-array argument type). The label is also allowed to be one of the predefined constant values: TEST_POINT_ANY_IN_SET[1] or TEST_POINT_ANY_AT_ALL. TEST_POINT_ANY_IN_SET is used to indicate a place in the expectation list where any test point that's otherwise explicitly defined in the set is permitted; TEST_POINT_ANY_AT_ALL is used to indicate a place in the expectation where any test point in the system is allowed. When either special value is used in the expectation list, a predicate must also be specified, and the test point is only considered satisfied when the predicate returns true. These special values can be used to implement, among other things, startup scenarios where you want to defer your expectation list processing until a particular test point and/or data state has been encountered. When specifying either value in an unordered expectation, it may appear only once in the expectation list; in that case, it is treated as a startup expectation whereby none of the other test points are processed until the startup expectation has been satisfied (when its predicate returns true).
  • count is the number of expected occurrences - can be any positive integer value, or the special value TEST_POINT_ANY_COUNT[1].
  • predicate must be a perl coderef for a function to call as a predicate. You are free to define your own predicate function OR use one of the standard ones provided by STRIDE::Test.
  • expected_data will be passed as an argument to the predicate - intended to be used to specify expected data.

If using the array form of expectation, only the label entry is required - the remaining elements are optional.

unexpected
is an array reference containing labels that are to be treated as failures if they are encountered. For either expected or unexpected, the special value TEST_POINT_EVERYTHING_ELSE[1] can be used alone to indicate that any test points not explicitly listed in the set are considered part of this set.
expect_file
if you have previously captured YAML trace data in a file (using the Stride Runner), you can specify the file as the source of expectations using this parameter. If you specify a YAML trace file, you should NOT also specify the expected items as they will be overridden by the YAML trace file.
predicate
is a perl coderef to a default predicate function to use for all expectation items. If any specific entry in the expectation list has a predicate function, the expectation's predicate will override this global value. By default, no global predicate is assumed and no predicate is called unless specified for each expectation item.
replay_file
allows you to specify a YAML trace file as input to the current test. If specified, the expectation will be validated against the events recorded in the file rather than live events generated by a device under test.
test_case
allows you to specify a test case to use for reporting the results. This is only useful for advanced users that are generating test cases dynamically within a test method.

The returned object is of type STRIDE::TestPoint and has access to all of its member functions.
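Putting the pieces together, a typical expectation test combines TestPointSetup with Wait. This is a sketch only; the test point labels and timeout are hypothetical:

```perl
sub testStartupSequence : Test
{
    my $self = shift;

    # expect INIT once, then READY exactly twice, in order;
    # fail if ERROR is ever seen
    my $tp = TestPointSetup(
        expected   => [ "INIT", [ "READY", 2 ] ],
        unexpected => [ "ERROR" ],
        ordered    => 1,
        strict     => 1,
    );

    # exercise the code under test here, then wait up to 5 seconds
    # for the expectation to be satisfied
    $tp->Wait(5000);
}
```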

Assertions

Each of the following assertion methods is provided for standard comparisons. For each, there are three different types, depending on the desired behavior upon failure: EXPECT, ASSERT, and EXIT. EXPECT checks will fail the current test case but continue executing the test method. ASSERT checks will fail the current test case and exit the test method immediately. EXIT checks will fail the current test case, immediately exit the current test method AND cease further execution of the test module.

For simplicity, we refer to all the macros using a prefix tag - when using the macros in test code, the prefix should be replaced by one of the following: EXPECT, ASSERT, or EXIT, depending on how the test writer wants failures to be handled.


Boolean

prefix_TRUE(cond);     passes if cond is true
prefix_FALSE(cond);    passes if cond is false

Comparison

prefix_EQ(val1, val2);   passes if val1 == val2
prefix_NE(val1, val2);   passes if val1 != val2
prefix_LT(val1, val2);   passes if val1 < val2
prefix_LE(val1, val2);   passes if val1 <= val2
prefix_GT(val1, val2);   passes if val1 > val2
prefix_GE(val1, val2);   passes if val1 >= val2

For all of the value comparison methods (_EQ, _NE, etc.), the comparison is numeric if both arguments are numeric -- otherwise the comparison is a case sensitive string comparison. If case insensitive comparison is needed, simply wrap both arguments with perl's builtin lc() (lowercase) or uc() (uppercase) functions.

Predicates
prefix_PRED(coderef, data);   passes if calling coderef with data returns true
The predicate function is specified by coderef and is called with the optional data argument. The predicate can also return the special value TEST_POINT_IGNORE[1] to indicate that the event should be ignored.

Each of these expectation methods also supports the following optional named arguments:

test_case => case
allows you to apply the check to a test case other than the current default
message => "message"
allows you to specify an additional message to include if the check fails.

Because these arguments are optional, they are passed using named argument (hash-style) syntax after the required parameters that are shown above.
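Assuming the assertion macros are exported by STRIDE::Test, usage might look like the following sketch:

```perl
sub testComparisons : Test
{
    my $self = shift;

    EXPECT_TRUE(defined $self);
    EXPECT_EQ(42, 42);                        # numeric comparison
    EXPECT_EQ(lc("ABC"), lc("abc"));          # case-insensitive string compare
    EXPECT_GT(10, 5, message => "10 should exceed 5");

    # ASSERT_* would exit this test method immediately on failure
    ASSERT_NE("foo", "bar");
}
```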

Annotations

The following methods can be used to annotate a test case. Typically these methods are used to add additional information about the state of the test to the report.

Annotation Methods

NOTE_INFO(message)
creates an info note in your test results report.
NOTE_WARN(message)
creates a warning note in your test results report.
NOTE_ERROR(message)
creates an error note in your test results report.

Each of these note methods also supports the following optional named arguments:

test_case => case
allows you to add the note to a test case other than the current default
file => file
allows you to attach a file along with the annotation message.
test_point => test_point_hashref
If you are annotating your report in the context of a predicate with a specific test point, you might want to specify the test point using this parameter. This will cause your annotation to be grouped in the final report with the annotation message that corresponds to the test point hit message. By default, a host timestamp value is used to generate the NOTE annotation, which generally causes the NOTE annotations to group toward the end of the test case report.

Because these arguments are optional, they are passed using named argument (hash-style) syntax after the required parameters that are shown above.
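For instance (the message text and file path here are hypothetical):

```perl
sub testWithNotes : Test
{
    my $self = shift;

    NOTE_INFO("starting measurement phase");

    # attach a captured log file to a warning note
    NOTE_WARN("device reported a recoverable fault",
              file => "/tmp/device.log");
}
```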

Documentation

We have preliminary support for documentation extraction in the test modules using the standard perl POD formatting tokens.

The POD that you include in your test module currently must follow these conventions:

  • it must begin with a head1 NAME section and the text of this section must contain the name of the package, preferably near the beginning.
  • a head1 DESCRIPTION can follow the NAME section. If provided, it will be used as the description of the test suite created for the test unit.
  • This NAME/DESCRIPTION block must finish with an empty head1 METHODS section.
  • each of the test methods can be documented by preceding them with a head2 section with the same name as the test method (subroutine name). The text in this section will be used as the testcase description.
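Following these conventions, the POD for a test module might be laid out as in this sketch (the package and method names are hypothetical):

```perl
=head1 NAME

MyTests - tests for the widget subsystem

=head1 DESCRIPTION

This text becomes the description of the test suite
created for the test unit.

=head1 METHODS

=head2 testSomething

This text becomes the description of the testSomething test case.

=cut
```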

Predicates

STRIDE expectation testing allows you to specify predicate functions for sophisticated data validation. We provide several standard predicates in the STRIDE::Test package, or you are free to define your own predicate functions.

Builtin Predicates

The STRIDE::Test library provides a few standard predicates which you are free to use in your expectations:

Built-In Predicates

TestPointStrCmp
does a case-sensitive comparison of the test point data and the expected_data (specified as part of the expectation)
TestPointStrCaseCmp
does a case-insensitive comparison of the test point data and the expected_data
TestPointMemCmp
does a bytewise comparison of the test point data and the expected_data
TestPointDefaultCmp
pass-through function that calls TestPointMemCmp for binary test point data or TestPointStrCmp otherwise. This is useful as a global predicate since it implements an appropriate default data comparison.
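For example (the labels and expected data are hypothetical), a built-in predicate can be supplied globally so that every expectation item's expected_data is checked with an appropriate default comparison:

```perl
my $tp = TestPointSetup(
    predicate => \&TestPointDefaultCmp,   # global default predicate
    expected  => [
        # [label, count, predicate, expected_data]
        [ "STATUS", 1, undef, "OK" ],
        [ "RESULT", 2, \&TestPointStrCaseCmp, "done" ],
    ],
);
$tp->Wait(2000);
```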

User Defined Predicates

User defined predicates are subroutines of the following form:

sub myPredicate
{
    my ($test_point, $expected_data) = @_;
    my $status = 0;

    # access the test point data as $test_point->{data},
    # and the label as $test_point->{label}

    # set $status according to whether or not your predicate passes
    return $status;
}

The predicate function is passed two arguments: the current test point and the expected data that was specified as part of the expectation. The test point data is a reference to a hash with the following fields:

label
the test point label
data
the data payload for the test point (if any)
data_as_hex
an alternate form of the data payload, rendered as a string of hex characters
size
the size of the data payload
bin
flag indicating whether or not the data payload is binary
file
the source file for the test point
line
the line number for the test point

The expected data is passed as a single scalar, but you can use references to compound data structures (hashes, arrays) if you need more complex expected data.

The predicate function should return a true value if it passes, false if not, or TEST_POINT_IGNORE[1] if the test point should be ignored completely.
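For example, a predicate that ignores binary payloads and passes when a text payload starts with an expected prefix could be written as follows (a sketch; prefixPredicate is a hypothetical name, and the hash fields are used per the layout above):

```perl
sub prefixPredicate
{
    my ($test_point, $expected_data) = @_;

    # ignore binary payloads rather than failing on them
    return TEST_POINT_IGNORE if $test_point->{bin};

    # pass when the text payload starts with the expected prefix
    return index($test_point->{data}, $expected_data) == 0;
}
```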

STRIDE::Remote

The STRIDE::Remote class uses perl AUTOLOAD-ing to provide a convenient syntax for making simple function calls and retrieving database constants in perl. Given any properly initialized STRIDE::Test object, any captured function or constant (macro) is available directly as a method or property of the exported Remote object. Constants can also be accessed via the tied hash constants member.

For example, given a database with two functions and a macro:

int foo(const char * path);
void bar(double value);

#define MY_PI_VALUE 3.1415927

In perl these methods/constants are invokable using the exported STRIDE::Test Remote object:

my $retval = Remote->foo("my string");
Remote->bar(Constants->{MY_PI_VALUE});

Asynchronous invocation

Functions can also be called asynchronously by using the async delegator within the Remote object. When invoked this way, the function call will return a handle object that can be used to wait for the function return value - for example:

my $h = Remote->async->foo("my string");
my $retval = $h->Wait(1000);

The Wait function takes one optional argument -- the timeout duration (in milliseconds) that indicates the maximum time to wait for the function to return. If the timeout value is not provided, Wait will wait indefinitely for the function to return. If a timeout is specified and expires before the function returns, the method will die with a timeout error message - so you might want to wrap your Wait call in an eval{}; statement if you want to gracefully handle the timeout condition.

STRIDE::TestPoint

STRIDE::TestPoint objects are used to create test point expectation tests. These objects are created using the exported TestPointSetup factory function of the STRIDE::Test class. Once a STRIDE::TestPoint object has been created with the desired expectations, two functions can be called:

Wait(timeout)
This method processes test points that have occurred on the target and assesses failure based on the parameters you provided when creating the TestPoint object. The timeout parameter indicates how long (in milliseconds) to Wait for the specified events. If no timeout value is provided, Wait will proceed indefinitely or until a clear pass/failure determination can be made.
Check()
this is equivalent to Wait with a very small timeout. As such, it essentially verifies that your specified test points have already been hit.

Reporting Model

The STRIDE perl framework includes an implementation of our Reporting Model that is common across all STRIDE components. The Test module gives explicit access to key elements of the report model: Cases and Annotations. Here is a description of the methods available for each of these objects.

TestCase

Methods

SetStatus(status, duration)
sets the test case status. The value of status could be one of the predefined constants:
  • TEST_FAIL[1]
  • TEST_PASS
  • TEST_NOTINUSE
  • TEST_INPROGRESS
  • TEST_DONE - applicable to dynamic cases - sets the status to pass unless already set to fail or not-in-use
TestAnnotation AddAnnotation(level, name, description)
creates a new test annotation and adds it to the test case. The value of level could be one of the predefined constants:
  • ANNOTATION_TRACE[1]
  • ANNOTATION_DEBUG
  • ANNOTATION_INFO
  • ANNOTATION_WARNING
  • ANNOTATION_ERROR
  • ANNOTATION_FATAL
SetData(name, value)
associates a custom name-value pair with the test case.
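As a sketch of dynamic case reporting (the case name, annotation text, and data values are hypothetical; we assume AddCase is exported like TestPointSetup and that the duration argument to SetStatus is optional):

```perl
sub testDynamicCases : Test
{
    my $self = shift;

    # create a dynamic test case on the suite
    my $case = AddCase("widget_init", "verifies widget initialization");

    # annotate the case and attach a comment to the annotation
    my $note = $case->AddAnnotation(ANNOTATION_INFO, "detail",
                                    "initialization sequence");
    $note->AddComment("step", "widget powered on");

    # tag the case with custom data, then mark it done
    # (pass unless already failed or not-in-use)
    $case->SetData("firmware", "1.2.3");
    $case->SetStatus(TEST_DONE);
}
```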

TestAnnotation

Methods

AddComment(label, message)
adds a new comment to the test annotation.

Notes

  1. Symbols like these are exported as perl constants - don't quote these values when you use them - rather, use the bare symbols and perl will use the constant value we provide