Frequently Asked Questions About STRIDE
__NOTOC__
== Types of Testing Supported ==
One of the first questions to ask is ''what is the '''value''' of the test?'' If the test does not discover any defects or provide ongoing regression coverage, its value is questionable. Another is ''what is the '''effort''' of implementing the test?'' Stride is designed to maximize the '' '''value''' '' of a test while minimizing the '' '''effort''' '' required to implement it.
Stride supports three general types of testing:
* '''Unit Testing'''
* '''API Testing'''
* '''Integration Testing'''
=== Unit Testing ===
'''Unit Testing''' is supported following the model found in typical [http://en.wikipedia.org/wiki/XUnit xUnit-style] testing frameworks.
Traditional '''Unit Testing''' presents a number of challenges in testing embedded software:
* Testing functions/classes in '''isolation''' requires a lot of extra work, especially if your software was not designed upfront for testability
* The software is often not well suited for '''others''' to participate in the test implementation, since there is too much internal knowledge required to be productive
* It can be difficult to automate execution of the full set of tests on the real target device
'''Unit Testing''' of legacy software may have limited value, particularly if the software is already stable with respect to defects. The best ''return-on-effort'' is usually achieved when the focus is on '''brand new''' software components.
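For readers new to the xUnit pattern, the sketch below shows the general shape of such a test in C++: a small fixture with setup/teardown and a few assertion-based cases. It is illustrative only; the class, function, and assertion names are placeholders and are not the actual Stride test macros.

<source lang="cpp">
// Illustrative xUnit-style unit test (placeholder names, not the Stride API).
#include <cassert>

// Unit under test (sample application code, not part of Stride).
static int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

// A test "fixture": per-case setup/teardown plus assertion-based cases.
struct ClampTests
{
    void setUp()    { /* allocate or initialize anything the cases need */ }
    void tearDown() { /* release resources */ }

    void testBelowRange() { assert(clamp(-5, 0, 10) == 0);  }
    void testInRange()    { assert(clamp( 7, 0, 10) == 7);  }
    void testAboveRange() { assert(clamp(42, 0, 10) == 10); }
};

int main()
{
    ClampTests t;
    t.setUp(); t.testBelowRange(); t.tearDown();
    t.setUp(); t.testInRange();    t.tearDown();
    t.setUp(); t.testAboveRange(); t.tearDown();
    return 0;   // all assertions passed
}
</source>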
=== API Testing ===
Stride supports '''API Testing''' by leveraging the same techniques available for Unit Testing.
'''API Testing''' differs from unit testing in that the tests focus on direct testing (calling) of a well-defined interface.
* The design of ''public interfaces'' often lends itself to testing in isolation ''without'' implementing special test logic (i.e. no stubbing required), which makes the test implementation simpler.
* Public APIs are usually documented and, as a result, ''non-domain experts'' can more easily participate in the test implementation.
Although '''API Testing''' often represents a smaller percentage of the software being exercised, this kind of testing is typically well understood, easy to scope, and often has a better ''return-on-effort''.
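To illustrate the difference, the sketch below tests a documented public interface directly, with no stubs or test doubles; the C standard library's <code>strtol()</code> stands in for any well-defined public API. Nothing in it is Stride-specific.

<source lang="cpp">
// Illustrative API test: exercise a public, documented interface directly.
#include <cassert>
#include <cstdlib>

int main()
{
    char* end = nullptr;

    // Nominal case: the documented contract is honored.
    assert(std::strtol("42", &end, 10) == 42);
    assert(*end == '\0');                    // the whole string was consumed

    // Edge case: no digits, so 0 is returned and no characters are consumed.
    const char* bad = "xyz";
    assert(std::strtol(bad, &end, 10) == 0);
    assert(end == bad);

    return 0;
}
</source>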
=== Integration Testing ===
Stride also supports '''Integration Testing''', which differs from Unit Testing and API Testing in that it does not focus simply on calling functions and validating return values. To learn more about some of the unique testing techniques well suited to this type of testing, [[Expectations | '''read here''']].
'''Integration Testing''' focuses on validating a larger scope of the software while executing under normal operating conditions.
* Tests are typically performed with a fully functional software build
* There are minimal code isolation challenges
* Test results provide a sanity check on the health of the software
We believe that '''Integration Testing''' has a very high ''return-on-effort'' and is more applicable to legacy software systems.
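To make the contrast concrete, here is a minimal sketch of behavior validation: the software runs normally, instrumented points report events, and the test checks that the expected events occurred in order. The event labels and the tiny harness are invented for illustration and are not the Stride Expectations API.

<source lang="cpp">
// Illustrative behavior/expectation test (placeholder harness, not Stride).
#include <cassert>
#include <string>
#include <vector>

static std::vector<std::string> observed;   // events reported by the running software

static void onTestPoint(const char* label)  // called from instrumented code
{
    observed.push_back(label);
}

// Software under test, instrumented at interesting states.
static void startupSequence()
{
    onTestPoint("HW_INIT_DONE");
    onTestPoint("DRIVERS_LOADED");
    onTestPoint("APP_READY");
}

int main()
{
    const char* expected[] = { "HW_INIT_DONE", "DRIVERS_LOADED", "APP_READY" };

    startupSequence();                       // exercise normal operation

    assert(observed.size() == 3);            // all expected events occurred...
    for (size_t i = 0; i < 3; ++i)
        assert(observed[i] == expected[i]);  // ...and in the expected order
    return 0;
}
</source>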
== Getting Started ==


=== How long does it take to install STRIDE? ===

==== Off-Target Installation ====
Off-target installation to a desktop or laptop PC takes just minutes; support is offered for Windows and Linux.


==== On-Target Installation ====
For standard embedded platforms such as Linux, Android, and Windows, the installation process varies from a few hours to a couple of days, depending on the complexity of your build environment. We provide SDK packages that serve as a reference for integrators.


For proprietary embedded targets, a custom [[Platform_Abstraction_Layer|Platform Abstraction Layer (PAL)]] is required. The PAL provides the glue between the [[STRIDE Runtime | Stride Runtime]] and the services offered by the OS. It is the only piece of the runtime that is customized between operating systems. Implementing the PAL takes from a day to a week, depending on the complexity of the OS. There is also a [[Build Integration | build integration]] step, which involves integrating the [[STRIDE Build Tools | Stride Build Tools]] into your software make process. This activity ranges from a single day to several days.
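The sketch below suggests the kind of primitives such a layer typically wraps. The function names and signatures are hypothetical placeholders chosen for illustration; they are not the actual Stride PAL interface.

<source lang="cpp">
// Hypothetical PAL interface sketch (placeholder names, not the real Stride PAL).
// A port to a proprietary RTOS implements each function in terms of native OS
// calls; the rest of the runtime stays unchanged.
#include <cstddef>

extern "C" {
    /* memory */
    void*         palMemAlloc(size_t size);
    void          palMemFree(void* ptr);

    /* threads and synchronization */
    int           palThreadCreate(void (*entry)(void*), void* arg, int priority);
    void*         palMutexCreate(void);
    void          palMutexLock(void* mutex);
    void          palMutexUnlock(void* mutex);

    /* time and host transport */
    unsigned long palTickCountMs(void);
    int           palTransportSend(const void* data, size_t length);
    int           palTransportReceive(void* buffer, size_t length);
}
</source>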


=== What kind of training is required? ===


Our training [[Training Overview | approach]] is based on wiki articles, samples, and hands-on use of the [[Stride Sandbox]]. The training is set up for self-guided instruction that can be used both for an initial introduction to the technology and on demand for specific topics when required.


== Integration with Stride ==


=== What is the size of the Stride Runtime? ===


The [[STRIDE Runtime | Runtime]] is a source package that supports connectivity with the host system and provides [[Runtime_Test_Services | services for testing]] and [[Test Macros]]. The ''Runtime'' is tailored specifically to embedded applications, so its overhead is minimal. It consumes very little memory for table and control block storage. Resource usage is [[STRIDE_Runtime#Runtime_Configuration | configurable]] and can be tailored to the limitations of the target platform.


{| class="wikitable"
|+ Typical resource usage
|-
! Aspect !! Resources
|-
| Code Space || About 90-130 KB, depending on the compiler in use and the level of optimization.
|-
| Memory Usage || Configurable; by default set to about 10 KB
|-
| Threads || 3 threads; configurable priority; blocked when inactive
|}
=== What is the processing overhead? ===


The [[STRIDE Runtime | Stride Runtime]] overhead is minimized by collecting raw data on the target and transferring it to the host in a low-priority task for processing. Processing is only active when executing tests via the [[Stride Runner]].
 


== Testing ==
=== What languages are supported? ===


Tests can be written using '''C''', '''C++''', and '''Perl'''.


=== Is there any alternative to running Stride tests with a real device? ===
 
Yes. Tests can also be built and executed using the [[Stride Sandbox]]. In order for this to work, your device source code must be built along with the test code using the host's desktop toolchain (MSVC on Windows, gcc on Linux).


== Test Automation ==
=== What is continuous integration and why should I care? ===


The key principle of continuous integration is regular testing of your software, ideally done in an automated fashion. Stride tests are reusable and automated. Over time, these tests accumulate, providing more and more comprehensive coverage. By automating the execution of tests and the publication of results via [http://www.testspace.com Testspace] with every software build, development teams gain immediate feedback on defects and the health of their software. By detecting and repairing defects immediately, the expense and time involved in correcting bugs are minimized.


=== Does Stride support continuous integration? ===


Yes. The [[Stride Runner]] provides a straightforward means to connect to the device under test and execute the test cases you've implemented using Stride. The runner allows you to configure which tests to run and how to organize the results using sub-suites in the report. Since the runner supports an option-based command line interface, this tool is easy to integrate with typical continuous integration servers.


=== Where/how do you store test results? ===


When you execute your tests using the [[Stride Runner]], the results are written to an xml file upon completion. This xml file uses a custom schema for representing the hierarchy of results (suites, cases, etc.). These files also include a stylesheet specification (which will be written to the same directory as the xml file) that allows them to be viewed as HTML in a browser. You are free to store these files for future use/reference. The runner '''also''' supports direct publishing to [http://www.testspace.com Testspace], which is a hosted web application for viewing and collaborating on your test results. Once you are regularly executing tests, whether automatically or manually, we recommend you use the publish feature to persist and share your test results.


=== Can I get email containing test reports? ===


Yes. If you use [http://www.testspace.com Testspace] to store your results, you can optionally configure your test space(s) to automatically notify users when new results are uploaded. The email generated by Testspace contains only summary information and provides links so that you can view the complete report data.


If you are using a continuous integration server to initiate your testing, it's likely that it supports different forms of notification when the testing is complete, so it's often possible to attach the xml report data as part of the CI server notification.


== Source Instrumentation ==

=== What are the advantages of Test Points over logging? ===

[[Test Point | Test Points]] can be used for direct validation during real-time execution. Logging systems, while often useful as a historical record, can only be used for post-processing validation at best. What's more, since there is no standard way to post-process logs, users often rely on manual inspection or non-scalable homegrown solutions for validation. Test Point validation is fully automated using the STRIDE Framework, which provides robust harnessing and reporting. Test Points can also include optional data payloads, which can be very useful for validating state when a point is hit. Test Points have potentially lower impact on the system than standard logging since they (1) are only sent from the origin if a test is actively subscribed to the test point (labels are used as a filter at runtime) and (2) can be completely disabled (no-opped) in your build by undefining a single preprocessor macro (STRIDE_ENABLED).

=== What about source instrumentation bloat? ===


Any mature code base has some level of diagnostic bloat to it. This often takes the form of ad-hoc logging or debug statements. With Stride [[Test Point | Test Points]] and [[Test Log | Test Logs]], you open your software to better automated test scenarios. All Stride instrumentation takes the form of single-line macros, so the amount of instrumentation bloat is no worse than other typical ad-hoc diagnostics. What's more, the Stride macros are all designed to be quickly no-opped in a build via a single preprocessor macro, making it possible to completely eliminate any actual impact on certain builds, if so desired.
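A rough sketch of the mechanism is shown below, assuming the STRIDE_ENABLED switch mentioned elsewhere on this page; the macro and function names are placeholders rather than the exact Stride spellings.

<source lang="cpp">
// Sketch of compile-time removable instrumentation (placeholder names).
// When STRIDE_ENABLED is not defined, every macro collapses to a no-op, so
// instrumented lines contribute no code or data to the build.

/* placeholder declarations for the runtime hooks used when enabled */
extern "C" void strideEmitTestPoint(const char* label);
extern "C" void strideEmitTestLog(int level, const char* message);

#if defined(STRIDE_ENABLED)
    #define srTEST_POINT(label)     strideEmitTestPoint(label)
    #define srTEST_LOG(level, msg)  strideEmitTestLog((level), (msg))
#else
    #define srTEST_POINT(label)     ((void)0)
    #define srTEST_LOG(level, msg)  ((void)0)
#endif

void motorStart(void)
{
    srTEST_POINT("MOTOR_STARTED");           /* single line; compiles away when disabled */
    srTEST_LOG(1, "starting motor driver");  /* same for logs */
    /* ... normal application logic ... */
}
</source>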


=== Are all Test Points active? ===


No. Under normal testing scenarios, only the specific test points that are needed for a test are actually broadcast through the system. We accomplish this by setting test point filters (by label) on the system whenever one of the Test Point setup functions is called (in script or native code). These filters are reset or removed at the end of the test, so in general ''none'' of the test points are actually sent through the system if no test is currently active. That said, there are a few special use cases in which ''all'' test points become active in the system - namely, when tracing is activated on the host runner and when a specific test case uses a set that includes ''TEST_POINT_EVERYTHING_ELSE''. In general, however, the test points that are actually sent from the system are ''only'' those that are needed to execute the behavior validation for the current test.


=== Will it affect performance? ===


Our experience on a wide range of systems has shown minimal impact from the Stride instrumentation. The Stride Runtime has been designed to be small, portable, and readily configurable, allowing it to be optimized for the platform's specific characteristics.


=== Should I leave Test Points in? ===


Yes. Once you have some behavior tests written, it's worthwhile to maintain that instrumentation and the corresponding tests, which allows you to run the tests on any Stride-enabled build. All of the instrumentation macros are easily no-opped via a single preprocessor flag, so you can choose to effectively remove the instrumentation code on select builds (production/release builds, for example). The ultimate value of instrumentation is the continuous feedback you get by regularly executing the automated tests on the build.
[[Category: Overview]]
