<span style="color:#0067A5"><font size="5">Frequently Asked Questions about STRIDE™</font></span>
__NOTOC__
== Types of Testing Supported ==
One of the questions that should be asked is ''what is the '''value''' of the test?'' If the test does not discover any defects or does not provide ongoing regression, then the value is questionable. Also ''what is the '''effort''' in implementing the test?'' Stride has been uniquely designed to support maximizing the '' '''value''' '' of the test while minimizing the '' '''effort''' '' to implement it.


Stride supports three general types of testing:
* '''Unit Testing'''
* '''API Testing'''
* '''Integration Testing'''


=== Unit Testing ===
'''Unit Testing''' is supported following the model found in typical [http://en.wikipedia.org/wiki/XUnit xUnit-style] testing frameworks.
 
Traditional '''Unit Testing''' presents a number of challenges in testing embedded software:


* Testing functions/classes in '''isolation''' requires a lot of extra work, especially if your software was not designed upfront for testability
* The software is often not well suited for '''others''' to participate in the test implementation, since there is too much internal knowledge required to be productive
* It can be difficult to automate execution of the full set of tests on the real target device


'''Unit Testing''' of legacy software may have limited value, particularly if the software is stable with respect to defects. The best ''return-on-effort'' is often achieved when focusing on '''brand new''' software components.


=== API Testing ===
Stride supports '''API Testing''' by leveraging the same techniques available for Unit Testing.
'''API Testing''' differs from unit testing in that the tests focus on direct testing (calling) of a well-defined interface.
* The design of ''public interfaces'' often lends itself to testing in isolation ''without'' implementing special test logic (i.e. no stubbing required), which makes the test implementation simpler.
* Public APIs are usually documented, so ''non-domain experts'' can more easily participate in the test implementation.
Although '''API Testing''' often represents a smaller percentage of the software being exercised, this kind of testing is typically well understood, easy to scope, and often has a better ''return-on-effort''.
=== Integration Testing ===
Stride also supports '''Integration Testing''', which differs from Unit Testing and API Testing in that it does not focus simply on calling functions and validating return values. To learn more about some of the unique testing techniques well suited to this type of testing, [[Expectations | '''read here''']].
'''Integration Testing''' focuses on validating a larger scope of the software while executing under normal operating conditions.
* Tests are typically performed with a fully functional software build
* There are minimal code isolation challenges
* Test results provide a sanity check on the health of the software
We believe that '''Integration Testing''' has a very high ''return-on-effort'' and is more applicable to legacy software systems.
== Getting Started ==


=== How long does it take to install STRIDE? ===


==== Off-Target Installation ====
Off-target installation to a desktop or laptop PC takes just minutes; support is offered for Windows and Linux.


==== On-Target Installation ====
For standard embedded platforms such as Linux, Android, and Windows, the installation process takes anywhere from a few hours to a couple of days, depending on the complexity of your build environment. We provide SDK packages that serve as a reference for integrators.


For proprietary embedded targets, a custom [[Platform_Abstraction_Layer|Platform Abstraction Layer (PAL)]] is required. The PAL provides the glue between the [[STRIDE Runtime | Stride Runtime]] and services offered by the OS, and it is the only piece of the runtime that is customized between operating systems. Implementing the PAL takes from a day to a week, depending on the complexity of the OS. There is also a [[Build Integration | build integration]] step, which involves integrating the [[STRIDE Build Tools | Stride Build Tools]] into your software make process. This activity ranges from a single day to several days.


=== What kind of training is required? ===
 
Our training [[Training Overview | approach]] is based on wiki articles, samples, and the [[Stride Sandbox]]. The training is set up for self-guided instruction and can be used both for an initial introduction to the technology and on demand for specific topics as required.
 
== Integration with Stride ==


=== What is the size of the Stride Runtime? ===


The [[STRIDE Runtime |Runtime]] is a source package that supports connectivity with the host system and provides [[Runtime_Test_Services | services for testing]] and [[Test Macros]]. The ''Runtime'' is tailored specifically to embedded applications; overhead is minimal. It consumes very little memory for table and control block storage. Resource usage is [[STRIDE_Runtime#Runtime_Configuration | configurable]] and can be tailored to the limitations of the target platform.


{| class="wikitable"
|+ Typical resource usage
! Aspect !! Resources
|-
| Code Space || About 90-130 KB, depending on the compiler in use and the level of optimization.
|-
| Memory Usage || Configurable, by default set to about 10 KB
|-
| Threads || 3 Threads; configurable priority; blocked when inactive
|}


=== What is the processing overhead? ===


The [[STRIDE Runtime | Stride Runtime]] minimizes overhead by collecting raw data on the target and transferring it to the host in a low-priority task for processing. Processing is only active when executing tests via the [[Stride Runner]].


== Testing ==


=== What languages are supported? ===


Tests can be written using '''C''', '''C++''', and '''Perl'''.


=== Is there any alternative to running Stride tests with a real device? ===


Yes. Tests can also be built and executed using the [[Stride Sandbox]]. In order for this to work, your device source code must be built along with the test code using the host's desktop toolchain (MSVC on Windows, gcc on Linux).


== Test Automation ==


=== What is continuous integration and why should I care? ===


The key principle of continuous integration is regular testing of your software, ideally done in an automated fashion. Stride tests are reusable and automated. Over time, these tests accumulate, providing more and more comprehensive coverage. By automating the execution of tests and results publication via [http://www.testspace.com Testspace] with every software build, development teams gain immediate feedback on defects and the health of their software. By detecting and repairing defects immediately, the expense and time involved in correcting bugs is minimized.


=== Does Stride support continuous integration? ===


Yes. The [[Stride Runner]] provides a straightforward means to connect to the device under test and execute the test cases you've implemented using Stride. The runner allows you to configure which tests to run and how to organize the results using sub-suites in the report. Since the runner supports an option-based command line interface, this tool is easy to integrate with typical continuous integration servers.


=== Where/how do you store test results? ===


When you execute your tests using the [[Stride Runner]], the results are written to an XML file upon completion. This XML file uses a custom schema for representing the hierarchy of results (suites, cases, etc.). These files also include a stylesheet specification (written to the same directory as the XML file) that allows them to be viewed as HTML in a browser. You are free to store these files for future use/reference. The runner '''also''' supports direct publishing to [http://www.testspace.com Testspace], a hosted web application for viewing and collaborating on your test results. Once you are regularly executing tests, whether automatically or manually, we recommend you use the publish feature to persist and share your test results.


=== Can I get email containing test reports? ===


Yes. If you use [http://www.testspace.com Testspace] to store your results, you can optionally configure your test space(s) to automatically notify users when new results are uploaded. The email generated by Testspace contains only summary information and provides links so that you can view the complete report data.


If you are using a continuous integration server to initiate your testing, it's likely that it supports different forms of notification when the testing is complete, so it's often possible to attach the XML report data as part of the CI server notification.


== Source Instrumentation ==


=== What are the advantages of Test Points over logging? ===


[[Test Point | Test Points]] can be used for direct validation during real-time execution. Logging systems, while often useful as a historical record, can only be used for post-processing validation at best. What's more, since there is no standard way to post-process logs, users often rely on manual inspection or non-scalable homegrown solutions for validation. Test Point validation is fully automated using the STRIDE Framework, which provides robust harnessing and reporting. Test Points can also include optional data payloads, which can be very useful for validating state when a point is hit. Test Points have potentially lower impact on the system than standard logging since they '''(1)''' are only sent from the origin if a test is actively subscribed to the test point (labels are used as a filter at runtime) and '''(2)''' can be completely disabled (no-opped) in your build by undefining a single preprocessor macro [[Runtime_Integration#STRIDE_Feature_Control | (STRIDE_ENABLED)]].


=== What about source instrumentation bloat? ===


Any mature code base has some level of diagnostic bloat to it. This often takes the form of ad-hoc logging or debug statements. With Stride [[Test Point | Test Points]] and [[Test Log | Test Logs]], you open your software to better automated test scenarios. All Stride instrumentation takes the form of single-line macros, so the amount of instrumentation bloat is no worse than other typical ad-hoc diagnostics. What's more, the Stride macros are all designed to be quickly no-opped in a build via a single preprocessor macro, making it possible to completely eliminate any actual impact on certain builds, if so desired.


=== Are all Test Points active? ===


No. Under normal testing scenarios, only the specific test points that are needed for a test are actually broadcast through the system. We accomplish this by setting test point filters (by label) on the system whenever one of the Test Point setup functions is called (in script or native code). These filters are reset or removed at the end of the test, so in general ''none'' of the test points are sent through the system if no test is currently active. That said, there are a few special cases in which ''all'' test points become active in the system: when tracing is activated on the host runner, and when a specific test case uses a set that includes ''TEST_POINT_EVERYTHING_ELSE''. In general, however, the test points actually sent from the system are ''only'' those needed to execute the behavior validation for the current test.


=== Will it affect performance? ===


Our experience on a wide range of systems has shown minimal impact from the Stride instrumentation. The Stride Runtime has been designed to be small, portable, and readily configurable, allowing it to be optimized for the platform's specific characteristics.


=== Should I leave Test Points in? ===


Yes. Once you have some behavior tests written, it's worthwhile to maintain that instrumentation and the corresponding tests, which allows you to run the tests on any Stride-enabled build. All of the instrumentation macros are easily no-opped via a single preprocessor flag, so you can choose to effectively remove the instrumentation code on select builds (production/release builds, for example). The ultimate value of instrumentation is the continuous feedback you get by regularly executing the automated tests on the build.

[[Category: Overview]]
