Training Tests in C/C++

Background

The STRIDE Framework provides support for implementing tests in the native C/C++ of the device under test. Once written, these tests are compiled with the device toolchain and harnessed, via the STRIDE Intercept Module, into one or more applications under test on the device. These tests have the unique advantage of executing in real time on the device itself, allowing them to operate under actual device conditions.

Please review the following reference articles before proceeding:

Why would I want to write tests in native code?

Here are some of the scenarios for which on-target test harnessing is particularly advantageous:

  • direct API testing. If you want to validate native APIs by driving the APIs directly, native code is the simplest way to do so. STRIDE provides convenient assertion macros to validate your variable states. API testing can also be combined with native test point tests (using Test Point instrumentation) to provide deeper validation of expected behavior of the units under test.
  • unit testing of C objects or C++ classes. The STRIDE native tests execute in the same context as the rest of your code, so it's possible to fully unit test any objects that can be created in your actual application code.
  • validation logic that requires sensitive timing thresholds. Sometimes it's only possible to validate tight timing scenarios on-target.
  • high-volume data processing scenarios. In some cases, the volume of data being processed and validated for a particular test scenario is too large to be easily handled by an off-target harness. In that case, native test units provide a convenient way to write tests that validate that data without shipping it to the host during testing.

What's more, you might simply prefer to write your test logic in C or C++ (as opposed to Perl on the host). If that's the case, we don't discourage you from using a toolset that you're more comfortable with - particularly if it enables you to start writing tests without a new language learning curve.

Are there any disadvantages?

Sure. Most of the disadvantages of native on-target tests concern the device build process. If your device build is particularly slow (on the order of hours or days), then adding and running new tests can become a tedious waiting game. Testing is always well served by shorter build cycles, and on-target tests are particularly sensitive to this.

In some cases, the additional code space required by native tests is a concern, but this is mitigated by ever-increasing device storage capacities. On platforms that support multiple processes (e.g. Linux or Windows), it's possible to bundle tests into one or more separate test processes, which further mitigates the code space concern by isolating the test code in separate applications.

Samples

For this training, we will be using some of the samples provided in the C/C++_Samples. For any sample that we don't cover here explicitly, feel free to explore it yourself. All of the samples can be easily built and executed using the STRIDE Off-Target Environment.

The first three samples that we cover are introductions to the different test unit packaging mechanisms that we support in STRIDE. A good overview of the pros and cons for each type is presented here. The fourth sample demonstrates test point testing in native code on target (i.e. both the generation and validation of the test points are done on target). The last sample covers file transfer services. The STRIDE file transfer APIs enable reading/writing of files on the host from the device under test and can be useful for implementing data driven test scenarios (for instance, media file playback).

Note: each of the three packaging examples includes samples that cover basic usage as well as more advanced reporting techniques (runtimeservices). For this training, we recommend that you focus on the basic samples, as they cover the important packaging concepts. The runtimeservices examples are relevant only if the built-in reporting techniques are not sufficient for your reporting needs.

test_in_c_cpp/TestClass

This sample shows the techniques available for packaging and writing test units using classes. If you have a C++ capable compiler, we recommend that you use test classes to package your unit tests, even if your APIs under test are C only. Review the source code in the directory and follow the sample description here.

observations:

  • all of these example classes have been put into one or more namespaces. This is just for organization purposes, mainly to avoid name collisions when built along with lots of other test classes. Your test classes are not required to be in namespaces, but it can be helpful in avoiding collisions as the number of tests in your system grows.
  • we've documented our test classes and methods using doxygen style comments. This documentation is automatically extracted by our tools and added to the results report - more information about this feature is here
  • you can optionally write test classes that inherit from a base class that we've defined (stride::srTest). We recommend you start by writing your classes this way so that your classes will inherit some methods and members that make some custom reporting tasks simpler.
  • exceptions are generally handled by the STRIDE unit test harness, but can be disabled if your compiler does not support them (see s2_testclass_basic_exceptions_tests.h/cpp).
  • parameterized tests are supported by test classes as well. In these tests, simple constructor arguments can be passed during execution and are available at runtime to the test unit. The STRIDE infrastructure handles the passing of the arguments to the device and the construction of the test class with these arguments. Parameterization of test classes can be a powerful way to expand your test coverage with data driven test scenarios (varying the input to a single test class). A minimal sketch of a test class follows this list.
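
To make these observations concrete, here is a hedged sketch of a test class. The class, method, and header names are invented for illustration, and srEXPECT_TRUE is a hypothetical stand-in for the STRIDE assertion macros mentioned earlier - check the shipped sources (e.g. s2_testclass_basic_simple) for the exact macro names and for how the class is registered with the tooling.

// A sketch only -- names below are illustrative, not the actual STRIDE API.
#include <srtest.h>             // header name is an assumption

namespace training              // optional namespace, per the first observation
{
    /// Validates basic widget behavior (doxygen text becomes the report description).
    class WidgetTests : public stride::srTest   // optional base class
    {
    public:
        /// Constructor arguments double as test parameters.
        WidgetTests(int count = 0) : mCount(count) {}

        /// Each conforming public method is treated as a test case.
        void VerifyCount()
        {
            // srEXPECT_TRUE is hypothetical; substitute the assertion
            // macro your STRIDE kit actually provides.
            srEXPECT_TRUE(mCount >= 0);
        }

    private:
        int mCount;
    };
}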

test_in_c_cpp/TestFList

This sample demonstrates a simpler packaging technique that is appropriate for systems that support C compilation only (no C++). An FList is simply a collection of functions that are called in sequence. There is no shared state or data unless you arrange to use global data for this purpose. Review the source code in the directory and follow the sample description here.

observations:

  • flist tests support setup/teardown fixturing, but not parameterization or exception handling.
  • we've again provided documentation using doxygen formatting for these samples. However, because there is no storage-class entity with which the docs are associated in an FList, there are some restrictions to the documentation, which you can read about here.
  • notice how the scl_test_flist pragma requires you both to name the test unit (first argument) and to explicitly list each test function that is part of the unit. This is one disadvantage of an flist compared to a test class (the latter does not require explicit listing of each test, since all conforming public methods are assumed to be test methods). A sketch of an flist follows this list.
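
Here is a hedged sketch of an flist, assuming the pragma takes the unit name followed by the test functions, as described above. The function names, the pass/fail convention, and the exact pragma spelling are assumptions - compare against s2_testflist_basic_simple before relying on them.

/* A sketch only -- conventions below are assumptions, not the actual API. */

/** Verifies that initialization succeeds (doxygen docs attach per the rules above). */
void TestInit(void)
{
    /* signal pass/fail using the mechanism shown in the shipped sample */
}

/** Verifies that shutdown releases all resources. */
void TestShutdown(void)
{
}

/* Name the unit (first argument) and explicitly list each test function. */
#pragma scl_test_flist("my_flist", TestInit, TestShutdown)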

test_in_c_cpp/TestCClass

This sample demonstrates a more sophisticated (and complicated) packaging technique for systems that support C compilation only. Test C Classes are defined by a structure of function pointers (which may also include data) and an initialization function. Review the source code in the directory and follow the sample description here.

observations:

  • the scl_test_cclass pragma requires a structure of function pointers as well as an initialization function that is called prior to running the tests. The initialization function must take the C class structure pointer as its first argument; it assigns values to all the function pointer elements and performs any other initialization tasks. The pragma also accepts an optional deinitialization function that will be called after test execution (if provided).
  • we've provided documentation using doxygen formatting for these samples. Because the test functions themselves are bound at runtime, the test documentation must be associated with the function pointer elements in the structure - read more here.
  • parameterized tests are also supported by test c classes. Arguments to the initialization function that follow the structure pointer argument are considered constructor arguments and can be passed when running the test.
  • because the test methods are assigned to the structure members at runtime, it's possible (and recommended) to use statically scoped functions, so as not to pollute the global function space with test functions. That said, you are free to use any functions with matching signatures and linkage, regardless of scope. A sketch of a test C class follows this list.
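
Here is a hedged sketch of a test C class. The structure layout, function names, and the pragma's argument order are assumptions - consult s2_testcclass_basic_simple for the exact conventions.

/* A sketch only -- the pragma argument order and conventions are assumptions. */
typedef struct MyTestsTag
{
    void (*VerifyOpen)(void);     /* doxygen docs attach to these members */
    void (*VerifyClose)(void);
} MyTests;

/* statically scoped implementations keep test functions out of the global space */
static void VerifyOpenImpl(void)  { /* ... assertions ... */ }
static void VerifyCloseImpl(void) { /* ... assertions ... */ }

/* Initialization function: the structure pointer comes first; the trailing
   arguments (label, count) act as constructor parameters for parameterized runs. */
void MyTestsInit(MyTests* self, const char* label, int count)
{
    (void)label;
    (void)count;
    self->VerifyOpen  = VerifyOpenImpl;
    self->VerifyClose = VerifyCloseImpl;
}

#pragma scl_test_cclass(MyTests, MyTestsInit)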

test_in_c_cpp/TestPoint

This sample demonstrates how to write tests that validate STRIDE Test Points in native test code. Although test point validation tests can be written in host-based scripting languages as well, sometimes it's preferable to write (and execute) the test logic in native target code - for instance, when validating large or otherwise complex data payloads. Review the source code in the directory and follow the sample description here.

observations:

  • test point tests can be packaged into a harness using any of the three types of test units that we support. In this case, we used an FList so that the sample could be used on systems that were not C++ capable.
  • one of two methods is used to process the test points: srTestPointWait or srTestPointCheck. The former is used to process test points as they happen (with a specified timeout) and the latter is used to process test points that have already occurred at the time it is called (a post-completion check).
  • due to the limitations of C syntax, it can be ugly to create the srTestPointExpect_t data, especially where user data validation is concerned (see the CheckData example, for instance). A sketch of a test point check follows this list.
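
Here is a hedged sketch of native test point validation. The srTestPointExpect_t field layout and the argument lists of srTestPointWait/srTestPointCheck below are assumptions made for illustration - the sample source shows the real signatures.

/* A sketch only -- fields and signatures are assumptions, not the real API. */
void TestStartupSequence(void)
{
    /* declare the test points we expect the code under test to hit */
    srTestPointExpect_t expected[] =
    {
        { "BOOT_BEGIN" },     /* field layout is illustrative only */
        { "BOOT_COMPLETE" },
    };

    start_device_boot();      /* hypothetical code-under-test entry point */

    /* wait-style processing: handle the test points as they happen,
       failing if they do not all arrive within the timeout */
    srTestPointWait(expected, 2, 5000 /* timeout; units assumed ms */);

    /* check-style alternative, for test points that have already fired:
       srTestPointCheck(expected, 2); */
}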

test_in_c_cpp/FileServices

The FileServices sample demonstrates basic usage of the STRIDE File Transfer APIs, which provide a way to transfer files between the host and the running device. All file data travels over STRIDE messaging between the STRIDE Runner on the host and the STRIDE Runtime on the device, so no additional communication ports are required to use these services. Review the source code in the directory and follow the sample description here.

observations:

  • most of the functions return an integer status code which should be checked - any nonzero status indicates a failure. We wrote a macro for this sample (fsASSERT) that checks return codes and adds any errors to the report. You might choose to do something similar, depending on your needs.
  • the file transfer API has both byte and line oriented read/write functions. You can use whichever functions are most appropriate for your needs.
  • this sample uses the local filesystem (stdio) to write some data to a tempfile - however, the STRIDE APIs themselves are buffer/byte oriented and don't require a local filesystem in general. If your device under test does have a filesystem, the STRIDE APIs can certainly be used to transfer resources to/from the device filesystem. A sketch of an fsASSERT-style macro follows this list.
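
Finally, here is a hedged sketch of a status-checking macro in the spirit of the sample's fsASSERT. The report_failure helper is hypothetical - the shipped sample adds errors to the results report via the runtime services instead.

/* A sketch only -- report_failure is a hypothetical placeholder. */
#include <stdio.h>

static void report_failure(const char* call, int status)
{
    /* replace with the reporting mechanism used in the shipped sample */
    printf("FAIL: %s returned %d\n", call, status);
}

#define fsASSERT(call)                          \
    do {                                        \
        int status_ = (call);                   \
        if (status_ != 0)                       \
            report_failure(#call, status_);     \
    } while (0)

/* usage: wrap any file transfer call that returns an integer status, e.g.
   fsASSERT(read_line_from_host(...)); -- the call shown is hypothetical */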

Build the test app

So that we can run these samples, let's now build an off-target test app that contains the source under test -- you can follow the generic steps described here. When copying the source, make sure you take all the source files from all five of the samples mentioned above.

Run the tests

Now launch the test app (if you have not already) and execute the runner with the following commands:

Test Class tests:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="s2_testclass::Basic::Exceptions; s2_testclass::Basic::Fixtures;  s2_testclass::Basic::Parameterized; s2_testclass::Basic::Simple" --output=TestClass.xml 

Test FList tests:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="s2_testflist_basic_fixtures; s2_testflist_basic_simple" --output=FList.xml

Test C Class tests:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="s2_testcclass_basic_fixtures; s2_testcclass_basic_parameterized(\"mystring\", 8); s2_testcclass_basic_simple" --output=CClass.xml

Test Point tests:

stride --device="TCP:localhost:8000" --database="../out/TestApp.sidb" --run="s2_testpoint_basic" --log_level=all --output=TestPoint.xml

File Services tests:

stride  --device="TCP:localhost:8000" --database="../out/TestApp.sidb"  --run="s2_fileservices_basic" --output=FileServices.xml


These commands produce a distinct result file for each run (per the --output option above). Open each result file in your browser to peruse the results.

Examine the results

Open the result files created above and browse the results.

observations:

  • the test documentation has been extracted (at compile time) and is attached to the results when the tests are executed. Most of the test suites/test cases should have documentation in the description field.
  • the test point test cases show the test points that were encountered, information about any failures, and log messages (the latter only because we included the --log_level=all option when executing the runner).
  • The two parameterized tests -- s2_testcclass_basic_parameterized and s2_testclass::Basic::Parameterized -- both pass. We passed explicit arguments to the former (on the stride command line above), while the latter received its default arguments (0/null for all) because we did not explicitly specify how it was to be called.

Explore all the results and make sure that the results meet your expectations based on the test source that you've previously browsed.

Other Samples

We've omitted a few samples from this training that cover more advanced and (perhaps) less widely used features. We encourage you to investigate these samples on your own if you are interested - in particular:

  • each of the test packaging samples (TestClass, TestCClass, and TestFList) includes examples of using the runtime test APIs to do more advanced reporting (dynamic suite creation, for example). These techniques are applicable to some data driven test scenarios.
  • the TestDouble sample shows how to use STRIDE Test Doubles to replace function dependencies at runtime, typically for the purpose of isolating functions under test. This can be a powerful technique, but also requires some up-front work on your part to enable it, as this sample demonstrates.