Frequently Asked Questions About STRIDE

<span style="color:#0067A5"> <font size="5"> Frequently Asked Questions about STRIDE™ </font> </span>
__NOTOC__
== Types of Testing Supported ==
One of the questions that should be asked is ''what is the '''value''' of the test?'' If the test does not discover any defects or does not provide ongoing regression, then the value is questionable. Also ''what is the '''effort''' in implementing the test?'' Stride has been uniquely designed to support maximizing the '' '''value''' '' of the test while minimizing the '' '''effort''' '' to implement it.


Stride supports three general types of testing:
* '''Unit Testing'''
* '''API Testing'''
* '''Integration Testing'''


=== Unit Testing ===
'''Unit Testing''' is supported following the model found in typical [http://en.wikipedia.org/wiki/XUnit xUnit-style] testing frameworks.
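As a rough sketch of the xUnit pattern, here is the setup/teardown-plus-assertions shape in plain C. All names below (<code>run_case</code>, <code>fixture</code>, etc.) are illustrative, not the actual Stride test macros:

```c
#include <assert.h>
#include <stdio.h>

/* A minimal xUnit-style pattern in plain C: setup/teardown run around
   every assertion-based test case. Names are illustrative only; this
   is not the actual Stride test API. */

static int fixture;                       /* state shared by the tests */

static void setup(void)    { fixture = 42; }
static void teardown(void) { fixture = 0;  }

/* Test cases return nonzero on pass. */
static int test_fixture_initialized(void) { return fixture == 42; }
static int test_fixture_doubles(void)     { return fixture * 2 == 84; }

/* The runner brackets each case with setup/teardown, as xUnit does. */
static int run_case(const char *name, int (*fn)(void)) {
    setup();
    int ok = fn();
    teardown();
    printf("%s: %s\n", name, ok ? "PASS" : "FAIL");
    return ok;
}
```

In a real test unit, such cases would be registered with the Stride harness rather than a hand-rolled runner.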


Traditional '''Unit Testing''' presents a number of challenges in testing embedded software:


* Testing functions/classes in '''isolation''' requires a lot of extra work, especially if your software was not designed upfront for testability
* The software is often not well suited for '''others''' to participate in the test implementation, since there is too much internal knowledge required to be productive
* It can be difficult to automate execution of the full set of tests on the real target device
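The isolation cost in the first bullet can be sketched in C: to test a function that touches hardware, the dependency must be made injectable so a stub can stand in for the driver. All names here (<code>check_voltage</code>, <code>adc_read_fn</code>) are hypothetical, not part of Stride:

```c
#include <assert.h>

/* Sketch of the isolation problem: check_voltage() cannot be tested
   off-target unless its ADC dependency is injected. Making the reader
   a parameter is exactly the kind of up-front testability work that
   legacy code often lacks. */

typedef int (*adc_read_fn)(void);

/* Code under test: flags an over-voltage condition. */
static int check_voltage(adc_read_fn read_adc, int limit_mv) {
    return read_adc() > limit_mv;
}

/* Stubs standing in for the real ADC driver during the test. */
static int stub_adc_high(void) { return 5100; }   /* millivolts */
static int stub_adc_low(void)  { return 3300; }
```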
 
'''Unit Testing''' of legacy software may have limited value, particularly if the software is stable with respect to defects. The best ''return-on-effort'' usually comes from focusing on '''brand new''' software components.


=== API Testing ===
Stride supports '''API Testing''' by leveraging the same techniques available for Unit Testing.


'''API Testing''' differs from unit testing in that the tests focus on direct testing (calling) of a well-defined interface.
* The design of ''public interfaces'' often lends itself to testing in isolation ''without'' implementing special test logic (i.e. no stubbing required), which makes the test implementation simpler.
* Public APIs are usually documented, so ''non-domain experts'' can more easily participate in the test implementation
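A sketch of the idea: a hypothetical fixed-size stack whose documented public interface can be exercised as-is, with no stubs or special test logic. The API below is invented for illustration:

```c
#include <assert.h>

/* Direct API testing sketch: every behavior of this hypothetical
   public interface is reachable through its documented entry points,
   so no stubbing is needed. */

#define STACK_MAX 8

typedef struct { int items[STACK_MAX]; int top; } stack_t;

void stack_init(stack_t *s) { s->top = 0; }

/* Returns 0 on success, -1 if the stack is full. */
int stack_push(stack_t *s, int v) {
    if (s->top == STACK_MAX) return -1;
    s->items[s->top++] = v;
    return 0;
}

/* Returns 0 on success, -1 if the stack is empty. */
int stack_pop(stack_t *s, int *out) {
    if (s->top == 0) return -1;
    *out = s->items[--s->top];
    return 0;
}
```

A test simply calls the interface and checks return values, including the documented error paths (full and empty).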


Although '''API Testing''' often represents a smaller percentage of the software being exercised, this kind of testing is typically well understood, easy to scope, and often has a better ''return-on-effort''.


=== Integration Testing ===
Stride also supports '''Integration Testing''', which differs from Unit Testing and API Testing in that it does not focus simply on calling functions and validating return values. To learn more about some of the unique testing techniques well suited to this type of testing, [[Expectations | '''read here''']].


'''Integration Testing''' focuses on validating a larger scope of the software while executing under normal operating conditions.


* Tests are typically performed with a fully functional software build
* There are minimal code isolation challenges
* Test results provide a sanity check on the health of the software


We believe that '''Integration Testing''' has a very high ''return-on-effort'' and is more applicable to legacy software systems.


== Getting Started ==


=== How long does it take to install STRIDE? ===


==== Off-Target Installation ====
Off-target installation to a desktop or laptop PC takes just minutes; support is offered for Windows and Linux.


==== On-Target Installation ====
For standard embedded platforms such as Linux, Android, and Windows, installation takes anywhere from a few hours to a couple of days, depending on the complexity of your build environment. We provide SDK packages that serve as a reference for integrators.


For proprietary embedded targets, a custom [[Platform_Abstraction_Layer|Platform Abstraction Layer (PAL)]] is required. The PAL provides the glue between the [[STRIDE Runtime | Stride Runtime]] and services offered by the OS. It is the only piece of the runtime that is customized between operating systems. Implementing the PAL takes from a day to a week, depending on the complexity of the OS. There is also a [[Build Integration | build integration]] step, which involves integrating the [[STRIDE Build Tools | Stride Build Tools]] into your software make process. This activity takes from a single day to several days.


=== What kind of training is required? ===


Our training [[Training Overview | approach]] is based on wiki articles, samples, and the [[Stride Sandbox]]. The material is set up for self-guided instruction: it serves as an initial introduction to the technology and can be consulted on demand for specific topics when required.


== Integration with Stride ==


=== What is the size of the Stride Runtime? ===


The [[STRIDE Runtime |Runtime]] is a source package that supports connectivity with the host system and provides [[Runtime_Test_Services | services for testing]] and [[Test Macros]]. The ''Runtime'' is tailored specifically to embedded applications; overhead is minimal. It consumes very little memory for table and control block storage. Resource usage is [[STRIDE_Runtime#Runtime_Configuration | configurable]] and can be tailored to the limitations of the target platform.
 


{| class="wikitable"
|+ Typical resource usage
! Aspect !! Resources
|-
| Code Space || About 90-130 KB, depending on the compiler in use and the level of optimization
|-
| Memory Usage || Configurable; about 10 KB by default
|-
| Threads || 3 threads; configurable priority; blocked when inactive
|}


=== What is the processing overhead? ===


The [[STRIDE Runtime | Stride Runtime]] overhead is minimized by collecting raw data on the target and transferring it to the host for processing via a low-priority task. Processing is only active when executing tests via the [[Stride Runner]].


== Testing ==


=== What languages are supported? ===


Tests can be written using '''C''', '''C++''', and '''Perl'''.


=== Is there any alternative to running Stride tests with a real device? ===


Yes. Tests can also be built and executed using the [[Stride Sandbox]]. For this to work, your device source code must be built along with the test code using the host's desktop toolchain (MSVC on Windows, gcc on Linux).


== Test Automation  ==


=== What is continuous integration and why should I care? ===


The key principle of continuous integration is regular testing of your software, ideally done in an automated fashion. Stride tests are reusable and automated. Over time, these tests accumulate, providing more and more comprehensive coverage. By automating test execution and results publication via [http://www.testspace.com Testspace] with every software build, development teams gain immediate feedback on defects and the health of their software. By detecting and repairing defects immediately, the expense and time involved in correcting bugs is minimized.


=== Does Stride support continuous integration? ===


Yes. The [[Stride Runner]] provides a straightforward means to connect to the device under test and execute the test cases you've implemented using Stride. The runner lets you configure which tests to run and how to organize the results using sub-suites in the report. Since it supports an option-based command-line interface, it is easy to integrate with typical continuous integration servers.
 
=== Where/how do you store test results? ===
 
When you execute your tests using the [[Stride Runner]], the results are written to an XML file upon completion. This XML file uses a custom schema for representing the hierarchy of results (suites, cases, etc.). These files also include a stylesheet specification (written to the same directory as the XML file) that allows them to be viewed as HTML in a browser. You are free to store these files for future use/reference. The runner '''also''' supports direct publishing to [http://www.testspace.com Testspace], a hosted web application for viewing and collaborating on your test results. Once you are regularly executing tests, whether automatically or manually, we recommend using the publish feature to persist and share your test results.
 
=== Can I get an email containing test reports? ===
 
Yes. If you use [http://www.testspace.com Testspace] to store your results, you can optionally configure your test space(s) to automatically notify users when new results are uploaded. The email generated by Testspace contains only summary information and provides links so that you can view the complete report data.
 
If you are using a continuous integration server to initiate your testing, it likely supports different forms of notification when the testing is complete, so it's often possible to attach the XML report data as part of the CI server notification.
 
== Source Instrumentation ==
 
=== What are the advantages of Test Points over logging? ===
 
[[Test Point | Test Points]] can be used for direct validation during real-time execution. Logging systems, while often useful as a historical record, can only be used for post-processing validation at best. What's more, since there is no standard way to post-process logs, users often rely on manual inspection or non-scalable homegrown solutions for validation. Test Point validation is fully automated using the STRIDE Framework, which provides robust harnessing and reporting. Test Points can also include optional data payloads, which are very useful for validating state when a point is hit. Test Points have potentially lower impact on the system than standard logging since they '''(1)''' are only sent from the origin if a test is actively subscribed to the test point (labels are used as a filter at runtime) and '''(2)''' can be completely disabled (no-opped) in your build by undefining a single preprocessor macro [[Runtime_Integration#STRIDE_Feature_Control | (STRIDE_ENABLED)]].
 
=== What about source instrumentation bloat? ===
 
Any mature code base has some level of diagnostic bloat, often in the form of ad-hoc logging or debug statements. With Stride [[Test Point | Test Points]] and [[Test Log | Test Logs]], you open your software to better automated test scenarios. All Stride instrumentation takes the form of single-line macros, so the amount of instrumentation bloat is no worse than other typical ad-hoc diagnostics. What's more, the Stride macros are all designed to be quickly no-opped in a build via a single preprocessor macro, making it possible to completely eliminate any actual impact on certain builds, if so desired.
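The single-flag no-op technique looks roughly like this in C. <code>STRIDE_ENABLED</code> is the flag named above, but the macro body and helper names are illustrative, not the real Stride implementation:

```c
#include <assert.h>
#include <stdio.h>

/* Sketch of single-flag no-opping: when STRIDE_ENABLED is undefined,
   every instrumentation macro expands to nothing. TEST_POINT and
   test_point_hit are illustrative names only. */

#define STRIDE_ENABLED 1        /* leave undefined in production builds */

#ifdef STRIDE_ENABLED
#define TEST_POINT(label) test_point_hit(label)
#else
#define TEST_POINT(label) ((void)0)   /* compiles away entirely */
#endif

static int points_hit = 0;

static void test_point_hit(const char *label) {
    ++points_hit;
    printf("test point: %s\n", label);
}

/* Application code carries only single-line instrumentation. */
static void send_message(void) {
    /* ... normal message-passing logic ... */
    TEST_POINT("msg_sent");
}
```

With the flag undefined, the same source compiles with zero instrumentation code or data in the binary.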
 
=== Are all Test Points active? ===


No. Under normal testing scenarios, only the specific test points that are needed for a test are actually broadcast through the system. We accomplish this by setting test point filters (by label) on the system whenever one of the Test Point setup functions is called (in script or native code). These filters are reset or removed at the end of the test, so in general ''none'' of the test points are sent through the system if no test is currently active. That said, there are a few special cases in which ''all'' test points become active in the system, namely when tracing is activated on the host runner and when a specific test case uses a set that includes ''TEST_POINT_EVERYTHING_ELSE''. In general, however, the test points actually sent from the system are ''only'' those needed to execute the behavior validation for the current test.
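The label-filtering idea can be sketched as follows. The mechanism shown (a small subscriber table consulted at each hit site) is illustrative, not the actual runtime implementation:

```c
#include <assert.h>
#include <string.h>

/* Sketch of label-based filtering: a test point is only "broadcast"
   when an active test has subscribed to its label, so an idle system
   emits nothing. All names here are hypothetical. */

#define MAX_FILTERS 4

static const char *filters[MAX_FILTERS];
static int filter_count = 0;
static int broadcast_count = 0;

/* Called when a test sets up the test points it expects. */
static void subscribe(const char *label) {
    if (filter_count < MAX_FILTERS)
        filters[filter_count++] = label;
}

/* Called at the end of the test to reset the filters. */
static void unsubscribe_all(void) { filter_count = 0; }

/* What an instrumentation hit site does: send only if subscribed. */
static void test_point(const char *label) {
    for (int i = 0; i < filter_count; ++i) {
        if (strcmp(filters[i], label) == 0) {
            ++broadcast_count;          /* would be sent to the host */
            return;
        }
    }
    /* no subscriber: nothing is sent through the system */
}
```

Because the filters are cleared when the test ends, hit sites outside an active test cost only a failed lookup.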


=== Will it affect performance? ===


Our experience on a wide range of systems has shown minimal impact from the Stride instrumentation. The Stride Runtime has been designed to be small, portable, and readily configurable, allowing it to be optimized for the platform's specific characteristics.


=== Should I leave Test Points in? ===


Yes. Once you have some behavior tests written, it's worthwhile to maintain that instrumentation and the corresponding tests, which allows you to run the tests on any Stride-enabled build. All of the instrumentation macros are easily no-opped via a single preprocessor flag, so you can choose to effectively remove the instrumentation code on select builds (production/release builds, for example). The ultimate value of instrumentation is the continuous feedback you get by regularly executing the automated tests on the build.

[[Category: Overview]]

Latest revision as of 18:35, 7 July 2015
