Perl Script APIs
The STRIDE Framework perl script model requires you to create perl modules (*.pm files) to group your tests. The following documents the API for the STRIDE::Test base class that you use when creating these modules.
STRIDE::Test
This is the base class for test modules. It provides the following methods.
Declaring Tests
Once you have created a package that inherits from STRIDE::Test, you can declare any subroutine to be a test method by declaring it with the : Test attribute. In addition to test methods, the following attributes declare other kinds of subroutines:
Subroutine attributes
attribute | description |
---|---|
Test | declares a test method - will be executed automatically when the module is run. |
Test(startup) | startup method, called once before any of the test methods have been executed. |
Test(shutdown) | shutdown method, called once after all test methods have been executed. |
Test(setup) | setup fixture, called before each test method. |
Test(teardown) | teardown fixture, called after each test method. |
You are free to declare as many methods as you like with these attributes. When more than one method has been declared with the same attribute, the methods will be called at the appropriate time in the order declared.
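For example, a minimal test module might look like the following sketch. The package name, subroutine names, and bodies are illustrative, and the exact use/inheritance boilerplate may differ in your environment.

package MyFeatureTests;             # hypothetical module name
use strict;
use warnings;
use base qw(STRIDE::Test);          # inherit from the STRIDE::Test base class

# runs once before any test method
sub Startup : Test(startup)   { }

# run before / after every individual test method
sub Setup    : Test(setup)    { }
sub Teardown : Test(teardown) { }

# ordinary test methods - executed automatically, in declaration order
sub testFirstThing  : Test { }
sub testSecondThing : Test { }

# runs once after all test methods have completed
sub Shutdown : Test(shutdown) { }

1;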
Methods
These methods are all available in the context of a test module that inherits from STRIDE::Test.
name | description |
---|---|
TestCase | returns the default test case object (see the Reporting Model section below for more info). |
TestSuite | returns the default test suite object (see the Reporting Model section below for more info). |
AddAnnotation(test_case => case, label => "", level => LEVEL, message => "") | Adds an annotation to the current test case (or to test_case if that named parameter is provided). A label can be provided but defaults to "Host Annotation". A level can be provided but defaults to INFO. The message parameter is optional and specifies the text to use in the annotation description field. |
AddTestCase(name, suite) | Creates a new test case in the specified suite (or the current test suite, if none is provided). This also updates the current test case value. |
AddTestSuite(name, suite) | Creates a new sub-suite in the specified suite (or the current test suite, if none is provided). This also updates the current test suite value. |
Remote | Returns a STRIDE::Remote object that was initialized with the active database. This object is used to access the remote functions and constant values (macros and enums) that were captured at compile time. |
TestPointSetup(expected => [], unexpected => [], ordered => 0/1, strict => 0/1, continue => 0/1, expect_file => filepath, predicate => coderef, replay_file => filepath, test_case => case) | Creates a new instance of STRIDE::TestPoint, automatically passing the default TestCase() as the test case if none is provided. Options are passed using hash-style arguments, as shown in the signature. If using the array form of expectation, only the label entry is required; the remaining elements are optional. The returned object is of type STRIDE::TestPoint and has access to all of its member functions. |
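For instance, inside a test method you might group some results under an explicit sub-suite and case and then annotate it. The names below are illustrative; the sketch relies on the "current suite/case" behavior described above.

# create a sub-suite of the current suite; it becomes the current test suite
AddTestSuite("timing checks");

# create a case in the (now current) suite; it becomes the current test case
AddTestCase("boot time");

# annotate the current test case (label/level default to "Host Annotation"/INFO)
AddAnnotation(message => "measured on the lab reference target");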
Assertions
Each of the following assertion methods is provided for standard comparisons. For each, there are three variants, depending on the desired behavior upon failure: EXPECT, ASSERT, and EXIT. EXPECT checks will fail the current test case but continue executing the test method. ASSERT checks will fail the current test case and exit the test method immediately. EXIT checks will fail the current test case, immediately exit the current test method, AND cease further execution of the test module.
Boolean
method | Pass if |
---|---|
prefix_TRUE(cond); | cond is true |
prefix_FALSE(cond); | cond is false |
Comparison
method | Pass if |
---|---|
prefix_EQ(val1, val2); | val1 == val2 |
prefix_NE(val1, val2); | val1 != val2 |
prefix_LT(val1, val2); | val1 < val2 |
prefix_LE(val1, val2); | val1 <= val2 |
prefix_GT(val1, val2); | val1 > val2 |
prefix_GE(val1, val2); | val1 >= val2 |
For all of the value comparison methods (_EQ, _NE, etc.), the comparison is numeric if both arguments are numeric -- otherwise the comparison is a case sensitive string comparison. If case insensitive comparison is needed, simply wrap both arguments with perl's builtin lc() (lowercase) or uc() (uppercase) functions.
Predicates
method | Pass if |
---|---|
prefix_PRED(coderef, data) | &coderef(data) returns true. The predicate function is specified by coderef, with optional data passed as data. The predicate can also return the special value TEST_POINT_IGNORE[1] to indicate that the event should be ignored. |
Each of these assertion methods also supports the following optional named arguments:
- test_case => case
- allows you to apply the check to a test case other than the current default
- message => "message"
- allows you to specify an additional message to include if the check fails.
Because these arguments are optional, they are passed using named argument (hash-style) syntax after the required parameters that are shown above.
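For example, assuming the EXPECT_/ASSERT_/EXIT_ variants are available in the test package as described above:

sub testStateString : Test
{
    my $expected = "ready";
    my $actual   = lc("READY");            # case-insensitive comparison via lc()

    # EXPECT: records a failure but keeps executing the test method
    EXPECT_EQ($actual, $expected, message => "state string should match");

    # ASSERT: fails the case and returns from the test method immediately
    ASSERT_TRUE(length($actual) > 0);

    # EXIT: fails the case and stops executing the rest of the test module
    EXIT_NE($actual, "fault");
}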
Annotations
The following methods can be used to annotate a test case. Typically these methods are used to add additional information about the state of the test to the report.
Annotation Methods
name | description |
---|---|
NOTE_INFO(message) | creates an info note in your test results report. |
NOTE_WARN(message) | creates a warning note in your test results report. |
NOTE_ERROR(message) | creates an error note in your test results report. |
Each of these note methods also supports the following optional named arguments:
- test_case => case
- allows you to add the log to a test case other than the current default
- file => file
- allows you to attach a file along with the annotation message that is generated for the log message.
- test_point => test_point_hashref
- If you are annotating your report in the context of a predicate with a specific test point, you might want to specify the test point using this parameter. This will cause your annotation to be grouped in the final report with the annotation message that corresponds to the test point hit message. By default, a host timestamp value is used to generate the NOTE annotation, which generally causes the NOTE annotations to group toward the end of the test case report.
Because these arguments are optional, they are passed using named argument (hash-style) syntax after the required parameters that are shown above.
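For example (the file path and messages are illustrative):

sub testWithNotes : Test
{
    NOTE_INFO("starting measurement run");

    # attach a captured log file to the warning note
    NOTE_WARN("throughput below nominal", file => "/tmp/throughput.log");

    # direct the error note at the current default test case explicitly
    NOTE_ERROR("device reported a fault", test_case => TestCase());
}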
Documentation
We have preliminary support for documentation extraction in the test modules using the standard perl POD formatting tokens.
The POD that you include in your test module currently must follow these conventions:
- it must begin with a head1 NAME section and the text of this section must contain the name of the package, preferably near the beginning.
- a head1 DESCRIPTION can follow the NAME section. If provided, it will be used as the description of the test suite created for the test unit.
- This NAME/DESCRIPTION block must finish with an empty head1 METHODS section.
- each of the test methods can be documented by preceding them with a head2 section with the same name as the test method (subroutine name). The text in this section will be used as the testcase description.
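A sketch of one possible layout that follows these conventions (the names and text are illustrative):

=head1 NAME

MyFeatureTests - exercises the widget configuration API

=head1 DESCRIPTION

This text becomes the description of the test suite created for the test unit.

=head1 METHODS

=cut

=head2 testConfigDefaults

This text becomes the description of the testConfigDefaults test case.

=cut

sub testConfigDefaults : Test
{
    # test body
}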
Predicates
STRIDE expectation testing allows you to specify predicate functions for sophisticated data validation. We provide several standard predicates in the STRIDE::Test package, or you are free to define your own predicate functions.
Builtin Predicates
The STRIDE::Test library provides a few standard predicates which you are free to use in your expectations:
Built-In Predicates
predicate | description |
---|---|
TestPointStrCmp | does a case sensitive comparison of the test point data and the expected_data (specified as part of the expectation) |
TestPointStrCaseCmp | does a case insensitive comparison of the test point data and the expected_data |
TestPointMemCmp | does a bytewise comparison of the test point data and the expected_data |
TestPointDefaultCmp | pass-through function that calls TestPointMemCmp for binary test point data or TestPointStrCmp otherwise. This is useful as a global predicate since it implements an appropriate default data comparison. |
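For example, a built-in predicate can be supplied as the global predicate argument to TestPointSetup. This is a sketch: it assumes the predicate is exported by STRIDE::Test and that the label-only form of the expectation list is acceptable.

# compare all test point data case-insensitively against the expected data
my $tp = TestPointSetup(
    expected  => [ "STATE_CHANGE" ],
    predicate => \&TestPointStrCaseCmp,
);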
User Defined Predicates
User defined predicates are subroutines of the following form:
sub myPredicate
{
my ($test_point, $expected_data) = @_;
my $status = 0;
# access the test point data as $test_point->{data},
# and the label as $test_point->{label}
# set $status according to whether or not your predicate passes
return $status;
}
The predicate function is passed two arguments: the current test point and the expected data that was specified as part of the expectation. The test point data is a reference to a hash with the following fields:
- label
- the test point label
- data
- the data payload for the test point (if any)
- data_as_hex
- an alternate form of the data payload, rendered as a string of hex characters
- size
- the size of the data payload
- bin
- flag indicating whether or not the data payload is binary
- file
- the source file for the test point
- line
- the line number for the test point
The expected data is passed as a single scalar, but you can use references to compound data structures (hashes, arrays) if you need more complex expected data.
The predicate function should return a true value if it passes, false if not, or TEST_POINT_IGNORE[1] if the test point should be ignored completely.
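For example, a predicate might treat its expected data as a hash reference and ignore test points from other source files. This is a sketch: the test point fields follow the hash described above, while checkTemperature and its expected-data layout are illustrative.

sub checkTemperature
{
    my ($test_point, $expected) = @_;

    # ignore test points that were not emitted from the file we care about
    return TEST_POINT_IGNORE if $test_point->{file} ne $expected->{file};

    # pass when the reported value does not exceed the configured limit
    return $test_point->{data} <= $expected->{limit};
}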
STRIDE::Remote
The STRIDE::Remote class uses Perl AUTOLOAD-ing to provide a convenient syntax for making simple function calls and retrieving database constants in Perl. Given any properly initialized STRIDE::Test object, any captured function or constant (macro) is available directly as a method or property of the exported Remote object. Constants can also be accessed via the tied Constants hash.
For example, given a database with two functions and a macro:
int foo(const char * path);
void bar(double value);
#define MY_PI_VALUE 3.1415927
In Perl, these functions and constants can be invoked using the exported STRIDE::Test Remote object:
my $retval = Remote->foo("my string");
Remote->bar(Constants->{MY_PI_VALUE});
Asynchronous invocation
Functions can also be called asynchronously by using the async delegator within the Remote object. When invoked this way, the function call will return a handle object that can be used to wait for the function return value - for example:
my $h = Remote->async->foo("my string");
my $retval = $h->Wait(1000);
The Wait function takes one optional argument: the timeout duration (in milliseconds) that indicates the maximum time to wait for the function to return. If the timeout value is not provided, Wait will wait indefinitely for the function to return. If a timeout is specified and expires before the function returns, the method will die with a timeout error message, so you might want to wrap your Wait call in an eval {} block if you want to handle the timeout condition gracefully.
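For example, to handle a timeout without aborting the test method (using the foo function from the database example above):

my $h = Remote->async->foo("my string");

my $retval = eval { $h->Wait(1000) };      # wait up to one second
if ($@)
{
    NOTE_WARN("foo() did not return within 1 second");
}
else
{
    EXPECT_EQ($retval, 0);
}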
STRIDE::TestPoint
STRIDE::TestPoint objects are used to create test point expectation tests. These objects are created using the exported TestPointSetup factory function of the STRIDE::Test class. Once a STRIDE::TestPoint object has been created with the desired expectations, two functions can be called:
- Wait(timeout)
- This method processes test points that have occurred on the target and assesses failure based on the parameters you provided when creating the TestPoint object. The timeout parameter indicates how long (in milliseconds) to Wait for the specified events. If no timeout value is provided, Wait will proceed indefinitely or until a clear pass/failure determination can be made.
- Check()
- this is equivalent to Wait with a very small timeout. As such, it essentially verifies that your specified test points have already been hit.
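Putting this together, a test method might set up its expectations, exercise the target, and then wait for the test points. The labels below are illustrative, the label-only expectation form is assumed, and foo is the captured function from the example above.

sub testStartupSequence : Test
{
    # expect these two test points to occur, in order
    my $tp = TestPointSetup(
        expected => [ "DRIVER_LOADED", "DEVICE_READY" ],
        ordered  => 1,
    );

    # trigger the behavior on the target
    Remote->foo("start");

    # fail the case if the expected test points are not seen within 5 seconds
    $tp->Wait(5000);
}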
Reporting Model
The STRIDE Perl framework includes an implementation of our Reporting Model that is common across all STRIDE components. The Test module gives explicit access to two key elements of the report model, Cases and Suites. Here is a description of the methods available for each of these objects.
TestSuite
Methods
name | description |
---|---|
SetName | sets the suite's name. |
SetDescription | sets the suite's description. |
AddSuite | adds a new sub test suite. |
AddCase | adds a new sub test case. |
AddAnnotation | adds a new annotation to the suite. |
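For example, the default suite returned by TestSuite() can be renamed and described (the strings are illustrative):

# rename and describe the suite created for this test module
TestSuite()->SetName("widget regression suite");
TestSuite()->SetDescription("nightly checks for the widget driver");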
TestCase
Methods
name | description |
---|---|
getStatus | returns the test case status as one of the following defined constant values: STATUS_FAILED, STATUS_PASSED, STATUS_INPROGRESS, STATUS_NA, STATUS_UNKNOWN. These constants are not exported by default and therefore need to be qualified with the full package scope in order to be used ($STRIDE::Reporter::Case::STATUS_FAILED, for example). |
setStatus | sets the test case status, using one of the predefined constant values listed above. |
getStatusStr | alternative to the getStatus method; returns the status as a simple string value, one of: passed, failed, in_progress, not_applicable, unknown. |
getAnnotations | returns the annotations collection that holds the direct descendant child annotations of this case. |
getComments | returns the comments collection that holds the direct descendant child comments of this case. |
getParent | returns the immediate parent suite of this case; cannot be undef since every test belongs to a suite. |
getDuration | returns the test case duration in milliseconds. |
setDuration | sets the test case duration in milliseconds. |
getStartTime | returns the start time attribute, as a string. |
setStartTime | sets the start time attribute; must be an ISO 8601 formatted string. |
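For example (a sketch; the duration value is illustrative, and setStatus/setDuration are assumed to take the value as their single argument):

my $case = TestCase();                     # the default test case object

# mark the case explicitly and record how long it took
$case->setStatus($STRIDE::Reporter::Case::STATUS_PASSED);
$case->setDuration(1250);                  # milliseconds

NOTE_INFO("final status: " . $case->getStatusStr());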