Training Tests in C/C++

Background

The STRIDE Framework provides support for implementing tests in the native C/C++ of the device under test. Once written, these tests are compiled using the device toolchain and are harnessed, via the STRIDE Intercept Module, into one or more applications under test on the device. These tests have the unique advantage of executing in real time on the device itself, allowing them to operate under actual device conditions.

Please review the following reference articles before proceeding:

Why would I want to write tests in native code?

Here are some of the scenarios for which on-target test harnessing is particularly advantageous:

  • direct API testing. If you want to validate native APIs by driving the APIs directly, native code is the simplest way to do so. STRIDE provides convenient assertion macros to validate your variable states. API testing can also be combined with native test point tests (using Test Point instrumentation) to provide deeper validation of expected behavior of the units under test.
  • unit testing of C objects or C++ classes. The STRIDE native tests execute in the same context as the rest of your code, so it's possible to fully unit test any objects that can be created in your actual application code.
  • validation logic that requires sensitive timing thresholds. Sometimes it's only possible to validate tight timing scenarios on-target.
  • high-volume data processing scenarios. In some cases, the volume of data being processed and validated for a particular test scenario is too large to be easily handled by an off-target harness. In that case, native test units provide a convenient way to write tests that validate that data without shipping the data to the host during testing.

What's more, you might simply prefer to write your test logic in C or C++ (as opposed to perl on the host). If that's the case, we don't discourage you from using a toolset that you're more comfortable with - particularly if it enables you to start writing tests without a new language learning curve.

Are there any disadvantages?

Sure. Most of the disadvantages of native on-target tests concern the device build process. If your device build is particularly slow (on the order of hours or days), then adding and running new tests can become a tedious waiting game. Testing is always well served by shorter build cycles, and on-target tests are particularly sensitive to this.

In some cases, the additional code space requirements of native tests are a concern, but this is also mitigated by ever-increasing device storage capacities. On platforms that support multiple processes (e.g. embedded Linux or WinMobile), it's possible to bundle tests into one or more separate test processes, which further mitigates the code space concern by isolating the test code in one or more separate applications.

Samples

For this training, we will be using some of the samples provided in the C/C++_Samples. For any sample that we don't cover here explicitly, feel free to explore the sample yourself. All of the samples can be easily built and executed using the STRIDE Off-Target Environment.

The first three samples that we cover are introductions to the different test unit packaging mechanisms that we support in STRIDE. A good overview of the pros and cons for each type is presented here. The last sample we discuss is the TestPoint sample, which demonstrates test point testing in native code on target (i.e. both the generation and validation of the test points are done on target).

Note: each of the packaging examples includes samples that cover basic usage as well as more advanced reporting techniques (runtimeservices). For this training, we recommend that you focus on the basic samples, as they cover the important packaging concepts. The runtimeservices examples are relevant only if the built-in reporting techniques are not sufficient for your reporting needs.

test_in_c_cpp/TestClass

This sample shows the techniques available for packaging and writing test units using classes. If you have a C++ capable compiler, we recommend that you use test classes to package your unit tests, even if your APIs under test are C only. Review the source code in the directory and follow the sample description here.

observations:

  • all of these example classes have been put into one or more namespaces. This is just for organizational purposes, mainly to avoid name collisions when built along with lots of other test classes. Your test classes are not required to be in namespaces, but doing so can help avoid collisions as the number of tests in your system grows.
  • we've documented our test classes and methods using doxygen-style comments. This documentation is automatically extracted by our tools and added to the results report - more information about this feature is here.
  • you can optionally write test classes that inherit from a base class that we've defined (stride::srTest). We recommend you start by writing your classes this way so that they inherit some methods and members that make custom reporting tasks simpler (see the sketch following this list).
  • exceptions are generally handled by the STRIDE unit test harness, but can be disabled if your compiler does not support them (see s2_testclass_basic_exceptions_tests.h/cpp).
  • parameterized tests are supported by test classes as well. In these tests, simple constructor arguments can be passed during execution and are available at runtime to the test unit. The STRIDE infrastructure handles the passing of the arguments to the device and the construction of the test class with these arguments. Parameterization of test classes can be a powerful way to expand your test coverage with data driven test scenarios (varying the input to a single test class).
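
To make the packaging concrete, here is a minimal sketch of a test class along the lines described above. It is not taken from the sample source: the header name, the scl_test_class pragma, the pass/fail convention, and the class and method names are all assumptions for illustration, so consult the sample code for the exact syntax.

#include <srtest.h>

namespace training {

/// @brief Exercises simple arithmetic; conforming public methods are exposed
/// as test cases, and these doxygen comments are extracted into the report.
class BasicMath : public stride::srTest {
public:
    /// Verifies integer addition.
    bool testAddition() { return (2 + 2) == 4; }   // assumed: true reports a pass

    /// Verifies integer subtraction.
    bool testSubtraction() { return (5 - 3) == 2; }
};

}  // namespace training

// Assumed pragma for publishing the class as a test unit; see the sample
// source for the exact syntax STRIDE uses.
#pragma scl_test_class(training::BasicMath)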

test_in_c_cpp/TestFList

This sample demonstrates a simpler packaging technique that is appropriate for systems that support C compilation only (no C++). An FList is simply a collection of functions that are called in sequence. There is no shared state or data, unless you arrange to use global data for this purpose. Review the source code in the directory and follow the sample description here.

observations:

  • flist tests support setup/teardown fixturing, but not parameterization or exception handling.
  • we've again provided documentation using doxygen formatting for these samples. However, because there is no storage-class entity in an FList with which the docs can be associated, there are some restrictions on the documentation, which you can read about here.
  • notice how the scl_test_flist pragma requires you to both create a name for the test unit (first argument) and explicitly list each test function that is part of the unit (see the sketch following this list). This is one disadvantage of an FList over a test class (the latter does not require explicit listing of each test since all conforming public methods are assumed to be test methods).
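
For reference, a minimal FList sketch in plain C follows. The header name, the pass/fail convention, and the exact pragma syntax are assumptions for illustration; only the overall shape (a named unit plus an explicit list of its functions) follows the description above.

#include <srtest.h>

/** Verifies that a counter initializes to zero. */
int training_testInit(void)
{
    int counter = 0;
    return counter == 0;   /* assumed convention: nonzero reports a pass */
}

/** Verifies that incrementing the counter works. */
int training_testIncrement(void)
{
    int counter = 0;
    ++counter;
    return counter == 1;
}

/* Assumed pragma form: the unit name comes first, followed by an explicit
   list of every test function that belongs to the unit. */
#pragma scl_test_flist("training_flist_basic", training_testInit, training_testIncrement)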

test_in_c_cpp/TestCClass

This sample demonstrates a more sophisticated (and complicated) packaging technique for systems that support C compilation only. Test C Classes are defined by a structure of function pointers (which may also include data) and an initialization function. Review the source code in the directory and follow the sample description here.

observations:

  • the scl_test_cclass pragma requires a structure of function pointers as well as an initialization function that is called prior to running the tests. The initialization function must take the C class structure pointer as its first argument; it assigns values to all of the function pointer elements and performs any other initialization tasks. The pragma also accepts an optional deinitialization function that is called after test execution (if provided). A minimal sketch follows this list.
  • we've provided documentation using doxygen formatting for these samples. Because the test functions themselves are bound at runtime, the test documentation must be associated with the function pointer elements in the structure - read more here.
  • parameterized tests are also supported by test c classes. Arguments to the initialization function that follow the structure pointer argument are considered constructor arguments and can be passed when running the test.
  • because the test methods are assigned to the structure members at runtime, it's possible (and recommended) to use statically scoped functions, so as not to pollute the global function space with test functions. That said, you are free to use any functions with matching signatures and linkage, regardless of scope.
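
Here is a minimal sketch of a test C class along the lines described above. The structure layout, the function names, and the pragma argument order are assumptions for illustration; see the sample source for the exact syntax.

#include <srtest.h>

/** Structure of function pointers that defines the test unit. */
typedef struct {
    int (*testOpen)(void);    /**< Verifies the open path.  */
    int (*testClose)(void);   /**< Verifies the close path. */
} TrainingCClass;

/* Statically scoped test functions keep the global function space clean. */
static int doTestOpen(void)  { return 1; }   /* assumed: nonzero reports a pass */
static int doTestClose(void) { return 1; }

/* Initialization function: called before the tests run to bind the function
   pointers. Any arguments after the structure pointer would act as
   constructor parameters for a parameterized run. */
static void TrainingCClass_Init(TrainingCClass* self)
{
    self->testOpen  = doTestOpen;
    self->testClose = doTestClose;
}

/* Assumed pragma form: the structure type and its initialization function
   (an optional deinitialization function could follow). */
#pragma scl_test_cclass(TrainingCClass, TrainingCClass_Init)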

test_in_c_cpp/TestPoint

The last sample we'll consider demonstrates how to write tests that validate STRIDE Test Points with native test code. Although test point tests can be written in host-based scripting languages as well, sometimes it's preferable to write (and execute) the test logic in native target code - for instance, when validating large or otherwise complex data payloads. Review the source code in the directory and follow the sample description here.

observations:

  • test point tests can be packaged into a harness using any of the three types of test units that we support. In this case, we used an FList so that the sample can be used on systems that are not C++ capable.
  • one of two methods is used to process the test: srTestPointWait or srTestPointCheck. The former processes test points as they happen (with a specified timeout), while the latter processes test points that have already occurred at the time it is called (a post-completion check). See the sketch following this list.
  • due to the limitations of C syntax, it can be ugly to create the srTestPointExpect_t data, especially where user data validation is concerned (see the CheckData example, for instance).
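
The sketch below suggests roughly what such a test point check might look like when packaged as an FList function. Everything in it -- the function name, the way srTestPointExpect_t is populated, and the srTestPointWait signature -- is an assumption for illustration; the real declarations live in the STRIDE headers and the sample source.

#include <srtest.h>

int training_testStartupTestPoints(void)
{
    /* Describe the test point(s) we expect the code under test to hit.
       Populating srTestPointExpect_t (especially with user data to validate)
       is where the C syntax gets ugly; the field setup is omitted here. */
    srTestPointExpect_t expected[1];
    /* ... fill in expected[0] with the expected test point label/data ... */

    /* Exercise the code under test here, then wait (with a timeout) for the
       expected test points. srTestPointCheck would instead verify test points
       that have already occurred. The signature shown is an assumption. */
    return srTestPointWait(expected, 1, 5000 /* timeout in ms, assumed */);
}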

Build the test app

So that we can run these samples, let's now build an off-target test app that contains the source under test -- you can follow the generic steps described here. When copying the source, make sure you take all of the source files from all four of the samples mentioned above.

Run the tests

Now launch the test app (if you have not already) and execute the runner with the following commands:

Test Class tests:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures;  s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple" --output=TestClass.xml 

C Class tests:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="s2_testcclass_basic_fixtures; s2_testcclass_basic_parameterized(\"mystring\", 8); s2_testcclass_basic_simple" --output=CClass.xml

FList tests:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="s2_testflist_basic_fixtures; s2_testflist_basic_simple" --output=FList.xml

Test Point tests:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="s2_testpoint_basic" --log_level=all --output=TestPoint.xml

These commands produce a distinct result file for each run (per the --output option above). Open each result file in your browser to peruse the results.

Examine the results

Open each of the result files (TestClass.xml, CClass.xml, FList.xml, and TestPoint.xml) and browse the results.

observations:

  • the result files correspond to the four samples we discussed above; the --run arguments we passed to the runner determined which test units were executed in each run.
  • the test documentation has been extracted (at compile time) and is attached to the results when the tests are executed. Most of the test suites/test cases should have documentation in the description field.
  • the test point test cases show the test points that were encountered, information about any failures, and log messages (the latter only because we included the --log_level=all option when executing the runner).
  • The two parameterized tests -- s2_testcclass_basic_parameterized and s2_testclass::Basic::Parameterized -- both pass. We passed explicit arguments to the former (on the stride command line above) while we allowed the default arguments (0/null for all) to be passed to the latter by not explicitly specifying how it was to be called.

Explore all the results and make sure that the results meet your expectations based on the test source that you've previously browsed.