== Introduction ==
Function Lists are abbreviated as ''flist'' in both pragmas (as in [[scl_test_flist]]) and documentation. The Test flist samples pertain to test units that contain lists of functions to be executed. Test FLists are intended as a simple grouping of test functions and are designed to be used with the C language (although they are not restricted from compilation in a C++ environment). Test FLists do not support more advanced usage patterns such as private test data; if you need more advanced functionality, consider using [[Test Class Sample|Test Classes (C++)]] or [[Test CClass Sample|Test C-Classes]].

== Tests Description ==

=== Basic ===
These examples cover the simplest way to code STRIDE scl_test_flist functionality. They use simple [http://en.wikipedia.org/wiki/Plain_Old_Data_Structures POD] return types to indicate status and do not annotate the tests with any rich information (such as comments).

==== basic_simple ====
This example demonstrates passing and failing tests using the primary integer types (int, short, and char), from which pass/fail status is inferred.
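
A minimal sketch of an flist of this kind is shown below. It is illustrative only: the function bodies are invented for this page, and the exact pragma argument format should be checked against the [[scl_test_flist]] reference and the sample source.

<pre>
/* Illustrative sketch -- not the shipped sample source. */

/* Each test is an ordinary C function; its integer return value is used
   to infer the pass/fail status of the test case (see the basic_simple
   sample source for the exact mapping of values to status). */
static int ReturnsNonzero(void)
{
    return 1;   /* nonzero integer result */
}

static int ReturnsZero(void)
{
    return 0;   /* zero integer result */
}

/* The scl_test_flist pragma names the test unit (first argument) and
   explicitly lists every test function that belongs to it. */
#pragma scl_test_flist("basic_simple", ReturnsNonzero, ReturnsZero)
</pre>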

==== basic_fixtures ====
This example shows how to use [[Test Unit Pragmas#Fixturing Pragmas|setup and teardown]] fixtures. The setup and teardown methods are called immediately before and after the execution of each test method, respectively.
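
The sketch below illustrates the fixture idea with a hypothetical global resource that each test expects in a known state. The names are invented for this page, and the setup and teardown functions are associated with the flist through the mechanism described under [[Test Unit Pragmas#Fixturing Pragmas|Fixturing Pragmas]] (not reproduced here).

<pre>
/* Illustrative sketch -- not the shipped sample source. */
static int s_counter;

/* Intended as the setup fixture: runs immediately before each test. */
static void Setup(void)
{
    s_counter = 0;
}

/* Intended as the teardown fixture: runs immediately after each test. */
static void Teardown(void)
{
    /* release or reset anything Setup acquired */
}

static int CounterStartsAtZero(void)
{
    return (s_counter == 0);   /* relies on Setup having already run */
}
</pre>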

=== Runtime Services ===
These examples cover basic usage of our [[Runtime Test Services|Runtime Test Services API]] (as declared in srtest.h).

==== runtimeservices_simple ====
This example shows how to use [[Runtime Test Services#srTestCaseSetStatus|srTestCaseSetStatus]] to set status and [[Runtime Test Services#srTestCaseAddAnnotation|srTestCaseAddAnnotation]] to add a comment.

==== runtimeservices_dynamic ====
This example shows how to use [[Runtime Test Services#srTestSuiteAddCase|srTestSuiteAddCase]], [[Runtime Test Services#srTestCaseAddAnnotation|srTestCaseAddAnnotation]], and [[Runtime Test Services#srTestAnnotationAddComment|srTestAnnotationAddComment]] for dynamic suite, test, and annotation creation in the context of a single test method.

==== runtimeservices_override ====
This example shows how to use [[Runtime Test Services#srTestCaseSetStatus|srTestCaseSetStatus]] to override the status that would otherwise be inferred from the return value.

==== runtimeservices_varcomment ====
This example demonstrates the use of [http://en.wikipedia.org/wiki/Printf printf] style format strings with [[Runtime Test Services#srTestCaseAddAnnotation|srTestCaseAddAnnotation]].

== Run Tests ==
Now launch the test app (if you have not already) and execute the runner with the following command:

''Test FList tests'':
<pre>
stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="s2_testflist_basic_fixtures; s2_testflist_basic_simple" --output=FList.xml
</pre>

Note that the command produces a distinct result file for the run (per the --output option above). Open the result file in your browser to peruse the results.
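
The same --run syntax can be used to execute any subset of the flists in the database. For example, a run of the runtime services flists might look like the following; the unit names shown are assumptions based on the s2_testflist_* naming convention above, so check the database for the exact names.

<pre>
stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="s2_testflist_runtimeservices_simple; s2_testflist_runtimeservices_dynamic" --output=RuntimeServices.xml
</pre>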

== Observations ==
This sample demonstrates a simpler packaging technique that is appropriate for systems that support C compilation only (no C++). [[Test_Units#Test_Units|FLists]] are simply a collection of functions that are called in sequence. There is no shared state or data unless you arrange to use global data for this purpose. Review the source code in the directory and follow the sample description.

The following are some test observations:
* flist tests support setup/teardown fixturing, but '''not''' parameterization or exception handling.
* we've again provided documentation using doxygen formatting for these samples. However, because there is no storage-class entity with which the docs are associated in an FList, there are some restrictions to the documentation, which you can read about [[Test_API#Test_FLists|here]].
* notice how the [[Scl_test_flist|scl_test_flist pragma]] requires you to both create a name for the test unit (first argument) '''and''' explicitly list each test method that is part of the unit. This is one disadvantage of an flist over a test class (the latter does not require explicit listing of each test, since all conforming public methods are assumed to be test methods).

[[Category: Samples]]