Training Expectations
Objectives
This Training Module is focused on Test Points and how to validate them using Expectations. The module covers the following topics:
- Presentation of a validation technique based on code sequencing and state data
- Overview of source instrumentation
- Review of expectation tables and predicates
- Example use cases such as concurrent validation, using trigger conditions, etc.
There are two test files used -- TestExpect.cpp & TestExpect.h -- that implement three Test Units:
- TestExpect_Seq
- TestExpect_Data
- TestExpect_Misc
The first two Test Units each have two test methods already implemented, plus one method, called Exercise, that you are required to implement. The third Test Unit has four test methods already implemented and likewise has one Exercise method for you to implement. Until they are implemented, the Exercise methods return a NOT IN USE status.
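If Test Points are new to you, the underlying idea can be shown outside of STRIDE in a few lines of plain C++: instrumented source code emits named test points as it executes, and a test compares the observed sequence against a declared expectation. The sketch below is an illustration only; the recorder and the emit_test_point() helper are invented names, and the real instrumentation macros and expectation tables are covered in the Reference links at the bottom of this page.

// Illustration only: a from-scratch sketch of the test point idea,
// not the STRIDE API. Names like emit_test_point() are hypothetical.
#include <iostream>
#include <string>
#include <vector>

static std::vector<std::string> g_observed;   // what the instrumented code actually hit

// Stand-in for source instrumentation; real code would use the STRIDE test point macro.
static void emit_test_point(const std::string& name) { g_observed.push_back(name); }

// "Source under test" instrumented with three test points.
static void sut_do_work() {
    emit_test_point("A");
    emit_test_point("B");
    emit_test_point("C");
}

int main() {
    sut_do_work();

    // Expectation: points A, B, C observed in exactly this order (ordered, strict).
    const std::vector<std::string> expected = {"A", "B", "C"};
    const bool passed = (g_observed == expected);

    std::cout << (passed ? "PASS" : "FAIL") << "\n";
    return passed ? 0 : 1;
}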
Instructions
Build and Run TestApp
- Build TestApp using the SDK makefile
- Start up TestApp
- If you have not created an options file, please refer to setup
- Execute Test Expectations Test Units only
> stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc
Loading database...
Connecting to device...
Executing...
  test unit "TestExpect_Seq" > 2 passed, 0 failed, 0 in progress, 1 not in use.
  test unit "TestExpect_Data" > 2 passed, 0 failed, 0 in progress, 1 not in use.
  test unit "TestExpect_Misc" > 3 passed, 1 failed, 0 in progress, 1 not in use.
-----------------------------------------------------------
Summary: 7 passed, 1 failed, 0 in progress, 3 not in use.
Disconnecting from device...
Saving result file...
- Review the details of the test results using a browser. Open TestApp.xml, which can be found in the sample_src directory (based on the output option). When the XML file is opened in a web browser, the XSL stylesheet is automatically applied to render it as HTML.
Implement Exercise
- TestExpect_Seq::Exercise
- Validate ALL upper case Test Points (A - I)
- Use Unordered and Nonstrict sequencing
- Use sut_DoSequencing(SEQ_1) to generate part of the sequence
- Use sut_start_thread(SEQ_3) to generate the rest of the sequence
- Add a NOTE_INFO(..) to log that the test is executing
- Hint: You will probably want to read about Test Point Testing in C/C++; standalone sketches of the sequencing, predicate, and trigger semantics used by these exercises also follow the run output below
- TestExpect_Data::Exercise
- Validate the following Test Points {D, G, F, H}
- Check that F occurs 2 times
- Use Unordered sequencing
- Write a new custom predicate that validates data for both D and H Test Points
- Add NOTE_INFO(..) to capture content of the Test Points
- Add extra check for D that the status is GOOD
- Confirm that data fields d1 and d2 are as expected
- Pass the expected data fields (for both Test Points) as part of the user data within the setup
- Use sut_start_thread(SEQ_3) to generate the sequence
- Write another custom predicate for validating data for G
- Add NOTE_INFO(..) to capture content of the Test Point
- Validate the expected string using user data
- Add a NOTE_INFO(..) to log that the test is executing
- TestExpect_Misc::Exercise
- Validate 2 sequences using a Trigger in between
- Sequence 1 = D E and A
- Trigger = C
- Sequence 2 = F and F (2 occurrences)
- Use ANY AT ALL special member with trigger
- Use Ordered and Strict sequencing
- Use sut_start_thread(SEQ_2) to generate the expected sequences
- Execute Test Expectations Test Units only
> stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc
Loading database...
Connecting to device...
Executing...
  test unit "TestExpect_Seq" > 3 passed, 0 failed, 0 in progress, 0 not in use.
  test unit "TestExpect_Data" > 3 passed, 0 failed, 0 in progress, 0 not in use.
  test unit "TestExpect_Misc" > 4 passed, 1 failed, 0 in progress, 0 not in use.
-----------------------------------------------------------
Summary: 10 passed, 1 failed, 0 in progress, 0 not in use.
Disconnecting from device...
Saving result file...
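For reference before publishing, the following standalone sketches illustrate the matching semantics the three exercises rely on. They are written from scratch for illustration, with invented helper names; they are not the STRIDE expectation API, which is documented on the Test Point Testing in C/C++ and Expectations pages. First, Unordered plus Nonstrict sequencing as used in TestExpect_Seq::Exercise: every expected point (A - I) must be observed, in any order, and extra points are tolerated.

// Illustration of Unordered + Nonstrict matching (not the STRIDE API):
// every expected point must appear in the observed sequence, order does not
// matter, and extra observed points do not cause a failure.
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

static bool unordered_nonstrict_match(const std::vector<std::string>& observed,
                                      const std::vector<std::string>& expected) {
    std::vector<std::string> remaining = observed;
    for (const std::string& point : expected) {
        auto it = std::find(remaining.begin(), remaining.end(), point);
        if (it == remaining.end()) return false;   // an expected point never fired
        remaining.erase(it);                       // consume one occurrence
    }
    return true;                                   // leftover (extra) points are allowed
}

int main() {
    // Observed points as they might arrive from two sources of sequencing
    // (e.g. the main thread plus a second thread), interleaved and with an extra point.
    std::vector<std::string> observed = {"A", "x", "C", "B", "E", "D", "F", "I", "G", "H"};
    std::vector<std::string> expected = {"A", "B", "C", "D", "E", "F", "G", "H", "I"};
    assert(unordered_nonstrict_match(observed, expected));
    return 0;
}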
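Next, the custom predicate idea from TestExpect_Data::Exercise: a predicate inspects the data captured with a Test Point and compares it against expected values supplied as user data. Only the field names (d1, d2, status GOOD) come from the exercise description above; the types and the validate_point_d() function below are invented for illustration.

// Illustration of a data-validating predicate (not the STRIDE predicate API):
// the predicate receives the payload captured with the test point plus the
// expected values passed in as user data, and returns pass/fail.
#include <cassert>
#include <cstdint>

enum Status { GOOD, BAD };

struct PointDPayload {        // data attached to Test Point D (field names from the exercise)
    Status   status;
    uint32_t d1;
    uint32_t d2;
};

struct PointDExpected {       // expected values, supplied as user data during setup
    uint32_t d1;
    uint32_t d2;
};

static bool validate_point_d(const PointDPayload& actual, const PointDExpected& expected) {
    if (actual.status != GOOD) return false;            // extra check: status must be GOOD
    return actual.d1 == expected.d1 && actual.d2 == expected.d2;
}

int main() {
    PointDPayload captured = {GOOD, 10, 20};   // as if captured when D fired
    PointDExpected wanted  = {10, 20};         // as if passed in via user data
    assert(validate_point_d(captured, wanted));
    return 0;
}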
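Finally, the trigger idea from TestExpect_Misc::Exercise: the observed stream is split at the trigger point C, and each half is checked against its own sequence with Ordered and Strict semantics (exact points, exact order, nothing extra). The helper below is an invented illustration of that splitting only; the actual trigger and ANY AT ALL behavior is described on the Expectations page.

// Illustration of validating two sequences separated by a trigger point
// (not the STRIDE trigger / ANY AT ALL API): the observed stream is split at
// the first occurrence of the trigger, and each half is checked with Ordered
// + Strict semantics.
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

using Seq = std::vector<std::string>;

static bool ordered_strict_match(const Seq& observed, const Seq& expected) {
    return observed == expected;   // strict: same points, same order, nothing else
}

static bool match_with_trigger(const Seq& observed, const Seq& first,
                               const std::string& trigger, const Seq& second) {
    auto it = std::find(observed.begin(), observed.end(), trigger);
    if (it == observed.end()) return false;                  // trigger never fired
    return ordered_strict_match(Seq(observed.begin(), it), first) &&
           ordered_strict_match(Seq(it + 1, observed.end()), second);
}

int main() {
    // Sequence 1 = D, E, A; Trigger = C; Sequence 2 = F, F (two occurrences).
    Seq observed = {"D", "E", "A", "C", "F", "F"};
    assert(match_with_trigger(observed, {"D", "E", "A"}, "C", {"F", "F"}));
    return 0;
}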
Run and Publish Results
When you have completed the Exercise(s), publish your results to Test Space. If you have not added Test Space options to your options file (myoptions.txt), please see testspace access.
> stride --options_file myoptions.txt --run TestExpect_Seq --run TestExpect_Data --run TestExpect_Misc --space TestExpect --upload
Note: This space has been set up with a Baseline of expected test results that you can use to validate your results.
Reference
The following reference information is related to Test Points and Expectations.
Wiki
- Instrumentation Overview providing high-level concepts of this validation technique
- Test Point Macro definition
- Expectations definition and how to set your Expectations
Samples
- Test Point Sample - Demonstrates simple technique to monitor and test activity occurring in another thread