Test Point
Source instrumentation is the process by which developers and domain experts selectively instrument the source under test so that test scenarios can be written against the executing application. Implementing tests that leverage source instrumentation is called Expectation Testing. This validation technique is particularly useful for verifying proper code sequencing based on the software's internal design. For example:
#include <srtest.h>
...
/* example variables, assumed for illustration */
unsigned char myData[16];
int myVar = 42;

/* a test point with no payload */
srTEST_POINT("first test point");

/* a test point with binary payload */
srTEST_POINT_DATA("second test point", myData, sizeof(myData));

/* a test point with simple string payload */
srTEST_POINT_STR("third test point", "payload with simple string");

/* a test point with formatted string payload */
srTEST_POINT_STR("fourth test point", "payload with format string %d", myVar);

#ifdef __cplusplus
srTEST_POINT_STR("c++ test point", "") << "stream input supported under c++";
#endif
Unlike traditional unit testing, which drives tests from input parameters and isolates functionality, Expectation Testing executes within a fully functional software build running on a real target platform. Test Points are not dependent on input parameters, but tests often leverage the same types of input/output controls used by functional and black-box testing.
Another unique feature of this type of testing is that domain expertise is not required to implement a test. Developers and domain experts use instrumentation to export their design knowledge of the software to the entire team. Furthermore, no stubbing is required, no special logic is needed to generate input parameters, and no advanced knowledge of how the application software is coded is necessary.
To enable effective test coverage, developers and domain experts insert instrumentation at key locations to gain insight and testability. Here are some suggested source code areas to consider instrumenting (a sketch of an instrumented state transition follows the list):
- entry/exit points of critical functions
- state transitions
- critical or interesting data transitions (using optional payload to convey data values)
- callback routines
- data persistence
- error conditions
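As a concrete illustration, the sketch below instruments a state transition in a hypothetical state machine. The type and function names are invented for this example; only the srTEST_POINT macros come from Stride.

#include <srtest.h>

/* hypothetical traffic-light state machine (names are illustrative only) */
typedef enum { STATE_RED, STATE_GREEN, STATE_YELLOW } light_state_t;

static light_state_t current_state = STATE_RED;

void light_set_state(light_state_t next)
{
    /* instrument the state transition; the payload lets a test validate
       which state was entered, not merely that a transition occurred */
    srTEST_POINT_DATA("light state transition", &next, sizeof(next));

    if (next == current_state) {
        /* instrument an interesting error/edge condition */
        srTEST_POINT("redundant state transition");
        return;
    }
    current_state = next;
}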
The steps required to implement an Expectation test are the following:
Instrumentation
To make the software testable, the first step in the process is for the experts to selectively insert instrumentation macros into the source code. Test Points have a nominal impact on the performance of the application as they are only active during test data collection[1]. Test Points contain names and optional payload data. When Test Points are activated, they are collected in the background, along with timing and any associated data. The set of Test Points hit, their order, timing, and data content can all be used to validate that the software is behaving as expected. Test Logs can also be added to the source code to provide additional information in the context of an executing test.
To specify a Test Point, include the srtest.h header file from the Stride Runtime in your compilation unit. The Test Point macros are active only when STRIDE_ENABLED is #defined; it is therefore practical to place these macros in-line in production source. When STRIDE_ENABLED is not #defined, these macros evaluate to nothing, as sketched after the table below.
Test Point Macros

  srTEST_POINT(label)
      label: a pointer to a null-terminated string

  srTEST_POINT_DATA(label, data, size)
      label: a pointer to a null-terminated string
      data: a pointer to a byte sequence
      size: the size of the byte sequence, in bytes

  srTEST_POINT_STR(label, message)
      label: a pointer to a null-terminated string
      message: a pointer to a null-terminated format string
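To illustrate how the macros can evaluate to nothing, here is a minimal sketch of the conditional-compilation pattern; this is a generic illustration, not the actual contents of srtest.h, and sr_testpoint_hit is a hypothetical hook:

#ifdef STRIDE_ENABLED
void sr_testpoint_hit(const char *label);            /* hypothetical runtime hook */
#define srTEST_POINT(label) sr_testpoint_hit(label)
#else
#define srTEST_POINT(label) ((void)0)                /* compiles away to a no-op */
#endif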
Define your Expectations
In addition to instrumenting the source code with Test Points, you must also define the Expectations for those Test Points. This involves listing the Test Points expected to be hit during a given test scenario. Expectations can also include any data associated with a Test Point that requires validation. A purely illustrative sketch of such a list follows.
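Conceptually, an Expectation is an ordered list of Test Point labels, optionally paired with payloads to validate. The sketch below shows one way such a list could be expressed; it is illustrative only and does not reflect Stride's actual expectation format:

/* illustrative only: an ordered expectation list for one test scenario */
typedef struct {
    const char *label;    /* Test Point expected to be hit, in this order */
    const char *payload;  /* expected string payload, or NULL to skip validation */
} expectation_t;

static const expectation_t light_cycle_expectations[] = {
    { "light state transition", NULL },                   /* payload not validated */
    { "third test point", "payload with simple string" }  /* payload validated */
};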
Write the Test Unit
Once the source under test has been instrumented and the Expectations defined, Stride offers a number of techniques for implementing Expectation tests (a rough on-target sketch follows the list):
- Tests can be written in C or C++ and executed on the device under test using the Stride framework.
- Tests can be written in Perl.
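As a rough sketch, an on-target C test might simply drive the instrumented code and let the framework compare the collected Test Points against the declared Expectations. Here, light_set_state is the hypothetical function from the earlier sketch, and the harness details are omitted:

/* illustrative only: the general shape of an on-target Expectation test */
void test_light_cycle(void)
{
    /* exercise the unit under test; the Test Points fire as side effects
       and are collected in the background along with timing and payloads */
    light_set_state(STATE_GREEN);
    light_set_state(STATE_YELLOW);
    light_set_state(STATE_RED);

    /* the framework then validates the collected Test Points (order,
       timing, data) against the Expectations defined for this scenario */
}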
Notes
[1] Test data collection is typically implemented in a low-priority background thread. The data is captured in the calling routine's thread context (no context switch) but processed in the background or on the host. Instrumentation macros return immediately to the caller (i.e., no-op) when testing is not active.