What is Unique About STRIDE
= STRIDE works on virtually any target platform =
STRIDE's cross-platform framework facilitates a ''unified testing approach'' for all team members, enabling organizations to standardize on a test workflow that is independent of the target platform in use and of which branch of software is being changed.
=== Runtime written in standard C ===
The [[Runtime_Reference | '''STRIDE Runtime''']] is written in standard C on top of a simple [[Platform_Abstraction_Layer | '''platform abstraction layer''']] that enables it to work across platforms. It can be configured for a single-process multi-threading environment or for multi-process environments (e.g. Linux, Windows CE, an embedded RTOS). It is delivered as source code to be included in the application's build system. The transport between the host and target is configurable and supports serial/USB and TCP/IP by default; custom transports are also available.
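The sketch below illustrates the general idea of such a porting surface in C. The names are hypothetical and are '''not''' the actual STRIDE platform abstraction interface (see the [[Platform_Abstraction_Layer | '''platform abstraction layer''']] reference for the real hooks); this example assumes a POSIX-style target backed by pthreads and malloc.

<source lang="c">
/* Hypothetical porting-layer sketch -- illustrative only, not the STRIDE PAL API. */
#include <pthread.h>
#include <stdlib.h>

/* Memory services the runtime would call instead of the OS directly. */
void *pal_alloc(size_t size) { return malloc(size); }
void  pal_free(void *ptr)    { free(ptr); }

/* Mutual exclusion, wrapped so only this file knows about pthreads. */
typedef struct { pthread_mutex_t m; } pal_mutex_t;

int pal_mutex_init(pal_mutex_t *mx)   { return pthread_mutex_init(&mx->m, NULL); }
int pal_mutex_lock(pal_mutex_t *mx)   { return pthread_mutex_lock(&mx->m); }
int pal_mutex_unlock(pal_mutex_t *mx) { return pthread_mutex_unlock(&mx->m); }

/* Host<->target transport hooks: a port could back these with a socket,
 * a UART, or USB -- only these two functions change per transport. */
typedef int (*pal_send_fn)(const void *buf, size_t len);
typedef int (*pal_recv_fn)(void *buf, size_t len);

typedef struct {
    pal_send_fn send;
    pal_recv_fn receive;
} pal_transport_t;
</source>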
=== Integrates With Your Existing Make System ===
STRIDE also auto-generates [[Intercept_Module | '''harnessing and remoting logic''']] as source code during the make process, removing any dependencies on specific compilers or processors.
=== Supports Off-Target Testing ===
Testing can also be conducted using an [[Off-Target_Environment | '''Off-Target Environment''']], which is provided for Windows and Linux host machines. The [[Linux SDK | '''Linux''']] and [[Windows SDK | '''Windows''']] ''SDKs'' allow for a seamless transition between the real target and an off-target host environment.
= STRIDE provides built-in automation and reporting =
STRIDE enables software builds to be both fully ''functional'' and ''testable'' at the same time. Built-in ''automation and reporting'' is similar in concept to a ''debug build'', except that the focus is on testing.
=== Testable Builds ===
The ''testable build'' leverages the existing software build process by integrating into the same ''make system'' used by the development team (no one-off or special builds). Test automation, controllable from the host, is automatically included in the build. When tests are executed, a [[Reporting_Model | '''test report''']] is generated on the host (and optionally uploaded to Test Space) that includes detailed test results and timing analysis. Developers can easily pre-flight their source code changes before ''committing'' them to a baseline, and builds can be [[Setting_up_your_CI_Environment | '''automatically regression tested''']] as part of the daily build process.
=== Functional Builds ===
The ''testable software'' is still fully functional and works exactly as before. Whatever the software image was used for in the past -- system testing, developer debugging, etc. -- still applies. The STRIDE [[Test_Units | '''test logic''']] is separated from the application source code and is NOT executed unless invoked via the [[Stride_Runner | '''runner''']]. The [[Source_Instrumentation_Overview | '''source instrumentation''']] is only active when executing tests, so the impact of built-in testability on the software application is nominal.
The application can easily be switched back to a ''non-testable'' build by removing the [[Runtime_Integration#STRIDE_Feature_Control | '''STRIDE_ENABLED''']] preprocessor define and rebuilding. This flag controls all STRIDE-related source code and macros; no changes to the build process are required to enable or disable this functionality.
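The following minimal C sketch shows the conditional-compilation pattern this flag enables. STRIDE_ENABLED is the flag named above, but the APP_TEST_POINT macro and its hook function are hypothetical illustrations, not STRIDE's actual macros.

<source lang="c">
/* Illustrative pattern: when STRIDE_ENABLED is absent, instrumentation
 * compiles to nothing, so the same source yields either a testable image
 * or a plain functional one. */
#ifdef STRIDE_ENABLED
void app_test_point_hook(const char *label);            /* hypothetical hook */
#define APP_TEST_POINT(label) app_test_point_hook(label)
#else
#define APP_TEST_POINT(label) ((void)0)                  /* compiles away entirely */
#endif

int process_message(int id)
{
    APP_TEST_POINT("process_message.enter");  /* zero cost in a non-testable build */
    /* ... normal application logic ... */
    return (id > 0) ? 0 : -1;
}
</source>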
= STRIDE offers testing techniques for deeper coverage =
STRIDE offers numerous testing techniques that enable deeper and more effective testing with less effort.
=== Test Macros ===
[[Test_Macros | Test Macros in native code]] provide one-line shortcuts for validating assertions, with automatic report annotation when a check fails. Test Macros are supported in both C/C++ and [[Perl_Script_APIs#Assertions | our scripting solution]].
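As a rough illustration of the one-line assertion idea, the C sketch below uses an invented macro and reporting function; the real macro names are listed in the [[Test_Macros | Test Macros]] reference. On failure, such a macro records the expression, file, and line in the test report rather than aborting the program.

<source lang="c">
#include <string.h>

/* Hypothetical reporter that the macro would forward results to. */
void report_check(int passed, const char *expr, const char *file, int line);

#define MY_EXPECT_TRUE(cond) report_check((cond) ? 1 : 0, #cond, __FILE__, __LINE__)

void test_copy_config(void)
{
    char dst[16];
    strncpy(dst, "baseline", sizeof dst);
    dst[sizeof dst - 1] = '\0';
    MY_EXPECT_TRUE(strcmp(dst, "baseline") == 0);  /* annotates the report on failure */
}
</source>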
=== File Fixturing ===
[[File_Transfer_Services | '''File fixturing''']] is another technique that can be leveraged to drive better testing. Test code executing on the target platform can perform file operations on the host remotely, enabling target-based test logic to open, read, and write files that live on the host. File fixturing allows external input into the system to be bypassed for more controlled and isolated testing.
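The concept sketch below shows what driving a test from a host-side fixture file might look like. The sr_host_* names and the fixture path are hypothetical stand-ins, not the actual [[File_Transfer_Services | File Transfer Services]] API.

<source lang="c">
#include <stddef.h>

typedef struct host_file host_file_t;

host_file_t *sr_host_fopen(const char *path, const char *mode);      /* hypothetical */
size_t       sr_host_fread(void *buf, size_t size, host_file_t *f);  /* hypothetical */
void         sr_host_fclose(host_file_t *f);                         /* hypothetical */

int test_parser_with_fixture(void)
{
    char input[256] = {0};
    host_file_t *f = sr_host_fopen("fixtures/frame1.bin", "rb");  /* path is illustrative */
    if (!f)
        return -1;                               /* fail if the fixture is missing */
    sr_host_fread(input, sizeof input - 1, f);   /* bytes come from the host machine */
    sr_host_fclose(f);
    /* feed 'input' to the code under test instead of a live data source */
    return 0;
}
</source>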
=== Expectations ===
STRIDE leverages [[Source_Instrumentation_Overview | '''source instrumentation''']] to provide ''[[Expectations | '''Expectations''']]'' as an additional validation technique that can be applied to the executing software application. The execution sequencing of the code, along with state data, can be automatically validated against ''what is expected''. This technique does not focus on calling functions or methods but rather verifies ''code sequencing'', and it can span threads, process boundaries, and even multiple targets while the application is running. Using simple macros -- called [[Test_Point | '''Test Points''']] -- developers strategically instrument the source under test.
The test validation can be implemented in both [[Expectation_Tests_in_C/C%2B%2B | '''C/C++ on the target''']] and [[Perl_Script_APIs#STRIDE::Test | '''Perl script on the host''']]. In either case, the validation is done without impacting the application's performance (the on-target test code is executed in a background thread and scripts are executed on the host machine). When failures do occur, context is provided with the file name and line number of the failed expectation. This type of validation can be applied to a wide range of testing scenarios (a minimal sketch follows the list below):
* State Machines
* Data flow through system components
* Sequencing between threads
* Drivers that don't return values
* and much more ...
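The C sketch below illustrates the instrumentation side of this technique. APP_TEST_POINT is an invented stand-in for the real [[Test_Point | Test Point]] macros; the point is that lightweight markers announce "this code ran, with this data", and a test written in C/C++ or Perl then asserts the expected order and payloads.

<source lang="c">
/* Stub macro for the sketch; the real macro broadcasts the label and data
 * to the runtime when STRIDE is enabled. */
#define APP_TEST_POINT(label, data) ((void)(label), (void)(data))

static int motor_state;

void motor_start(void)
{
    motor_state = 1;
    APP_TEST_POINT("motor.start", &motor_state);  /* expected to fire before motor.stop */
}

void motor_stop(void)
{
    motor_state = 0;
    APP_TEST_POINT("motor.stop", &motor_state);   /* expected to fire after motor.start */
}

/* An expectation over this code would state, in effect: "motor.start" fires,
 * then "motor.stop", each exactly once -- and a failure report pinpoints the
 * file and line of any test point that arrives out of order or not at all. */
</source>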
''Software Quality Assurance (SQA)'' can also leverage [[Expectations | '''Expectations''']] as part of their existing functional / system testing. Because the [[Stride_Runner | '''runner''']] is a command line utility, it is easily controlled from existing test infrastructure. SQA can use the [[Source_Instrumentation_Overview | '''instrumentation''']] to create their own [[Reporting_Model#Suites | '''test suite''']] using scripting that executes in concert with existing test automation.
=== Test Doubles ===
For more advanced testing scenarios, dependencies can be [[Using_Test_Doubles | '''doubled''']]. This feature provides a means for intercepting C/C++ global functions on the target and substituting a stub, fake, or mock. The substitution is controlled via the runtime, allowing the software to continue executing normally when not running a test.
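The concept sketch below shows the kind of fake that could be substituted for a real global function during a test; the interception mechanism itself is provided by the STRIDE runtime and is not shown here. All names are hypothetical.

<source lang="c">
int read_temperature_sensor(void);   /* production dependency (talks to hardware) */

/* A fake with canned readings that a test could swap in for the real
 * function for the duration of the test, then swap back out afterwards. */
static const int fake_readings[] = { 20, 25, 90 };
static unsigned fake_index;

int fake_read_temperature_sensor(void)
{
    int value = fake_readings[fake_index];
    if (fake_index + 1 < sizeof fake_readings / sizeof fake_readings[0])
        fake_index++;                         /* step through the canned sequence */
    return value;
}

/* The code under test reacts to the doubled input exactly as it would to the
 * real sensor, so the over-temperature path can be exercised on demand. */
int fan_should_run(void)
{
    return read_temperature_sensor() > 80;
}
</source>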
=== Other Features ===
There are numerous other features that can be leveraged to facilitate deeper test coverage:
* Remoting C/C++ global functions
* Passing parameters to tests from the host
* Dynamic test/suite creation
* Seamless publishing to [[STRIDE_Test_Space | '''Test Space''']]
* and much [[Test_API | '''more ...''']]
= STRIDE supports test implementation in C/C++ and Script =
The test validation can be implemented in both ''native code'' on the target and ''script'' on the host.
=== C/C++ ===
Writing API/unit tests in [[Test_Units_Overview | '''native code''']] is the simplest way to begin validating the software. There is no new language to learn, no proprietary editor, and your normal programming workflow is not interrupted. No special APIs are required to register tests, suites, etc.: just write your tests in any combination of C/C++, and the auto-generated [[Intercept_Module | '''intercept module''']] produced by the [[Build_Tools | '''STRIDE build tools''']] takes care of everything. Tests from separate teams are automatically aggregated by the system -- no coordination is required.
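A plain C sketch of what such a test might look like follows. How test functions are discovered and harnessed is handled by the STRIDE build tools and intercept module; the naming and pass/fail convention used here are illustrative assumptions, not the required convention (see the [[Test_Units_Overview | Test Units]] documentation for the actual rules).

<source lang="c">
/* Tiny unit under test, inlined so the sketch stands alone. */
static int clamp(int value, int lo, int hi)
{
    return value < lo ? lo : (value > hi ? hi : value);
}

int test_clamp_passes_through_in_range_values(void)
{
    return (clamp(5, 0, 10) == 5) ? 0 : -1;   /* 0 = pass, nonzero = fail */
}

int test_clamp_limits_out_of_range_values(void)
{
    return (clamp(99, 0, 10) == 10 && clamp(-3, 0, 10) == 0) ? 0 : -1;
}
</source>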
This type of testing works well for:
* Calling APIs directly
* Validating C++ classes
* Isolating modules
* Critical processing / timing
* and much more...
=== Perl ===
STRIDE also supports writing tests in [[Test_Modules_Overview | '''Perl script''']]. When writing test scripts, there are minimal dependencies on the software build process. Tests can also validate global functions, set up conditions from the host, etc.
Scripts are well suited for Integration Testing focusing on:
* State Machines
* Data flow through system components
* Sequencing between threads
* and much more...
In either case, the validation is done without impacting the application's performance.
= STRIDE includes Web-based test results management =
When executing '''tests''', results can be uploaded to [[STRIDE_Test_Space | '''STRIDE Test Space''']] for persistence and analysis. Test result data can be uploaded manually (using the web interface) or automatically using the [[Stride_Runner | '''Runner''']].
=== Centralized Hosting ===
Hosting test results in a ''central location'' with easy access via a browser enables the entire team to better participate in ensuring quality during ongoing development. [[STRIDE_Test_Space#Results_View | '''Reports''']] containing test results, timing, [[Tracing | '''tracing logs''']], and built-in test documentation (supported in both [[Test_API#Test_Documentation | '''C/C++''']] and [[Perl_Script_APIs#Documentation | '''Perl''']]), all correlated together, give every team member the context needed to manage testing and resolve failures quickly.
=== Team Collaboration ===
Team collaboration on testing activities is also facilitated with auto-generated email [[Notifications | '''notifications''']] and [[STRIDE_Test_Space#Messages | '''messaging''']]. Emails contain links to test failures, where file names and line numbers are provided to aid in resolution. Messaging allows team members to communicate more effectively on the specifics of test results while minimizing the information required in traditional emails, status reports, etc.
[[Category: Overview]]