STRIDE Test Space
== What is STRIDE Test Space? ==


STRIDE Test Space is a hosted web application for storing and analyzing your test results. Test Space accepts data that results from the execution of STRIDE Test Units or Test Scripts. Data is uploaded to Test Space manually (using the web interface) or automatically from the [[STRIDE Runner]]. Once data is uploaded, it is retained until it is manually removed or automatically deleted (depending on the space configuration).


STRIDE Test Space organizes results in a hierarchy of ''projects'', ''test spaces'' and ''result sets''. Each time you upload data to a test space, a new result set is created (unless you explicitly add your data to an existing one). Within a given result set, test results are further organized into test suites containing test cases. Any number of these organizational entities can be used to create a fluid hierarchy of test results that adapts to shifting needs during a product lifecycle.
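
The hierarchy can be pictured as nested containers. The following Python sketch is illustrative only - the class names and fields are assumptions chosen to mirror the description above, not Test Space's actual data model:

<pre>
# Minimal, hypothetical model of the Test Space hierarchy (illustration only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    name: str
    status: str            # e.g. "passed" or "failed"
    duration_ms: float = 0.0

@dataclass
class TestSuite:
    name: str
    cases: List[TestCase] = field(default_factory=list)
    subsuites: List["TestSuite"] = field(default_factory=list)

@dataclass
class ResultSet:
    # One upload typically creates one result set.
    name: str
    suites: List[TestSuite] = field(default_factory=list)

@dataclass
class TestSpace:
    # Holds sequential result sets that represent the same set of tests.
    name: str
    result_sets: List[ResultSet] = field(default_factory=list)

@dataclass
class Project:
    # Every test space belongs to exactly one project.
    name: str
    spaces: List[TestSpace] = field(default_factory=list)
</pre>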

Test Space is primarily a repository for your test results. Whether you are doing ad-hoc testing with the STRIDE Framework or running fully-automated continuous integration of your STRIDE-enabled code base, Test Space provides a central place to store all the test data that is produced by the tests. Results are uploaded into specific test spaces, which allows the maintainer to control access and notifications for the results.

Test Space also provides easy regression analysis in the form of baseline comparison. Users can create one or more fixed baseline data sets from existing results and further specify that all result sets in a test space should be compared with the fixed set of data. This is sometimes known as "gold standard" comparison and is very useful for detecting regressions in a set of stable tests. Similarly, individual test spaces can be configured to automatically compare each new result set with the previous result. This kind of comparison can also be helpful in detecting regressions in stable code bases. Baseline comparison data can also optionally include timing thresholds so as to enable automatic comparison of test case durations.

Collaboration and communication are built into Test Space in the form of messaging and notifications. Once users are granted access to a specific test space, they have full access to view and manage test results. The test space properties can also be configured to notify all users of potential problems with new result sets - specifically regressions against baselines, log errors, and timing threshold violations (if your baseline was configured to compare durations).

Test Space enables focused communication about test results by allowing users to create simple message threads. Messages can be associated with a test space, a result set, or even with a specific test suite. The latter can be very useful when users need to discuss specific test failures, while messages at the space or result set level might be used, for example, to discuss general trends and goals.

== What can I do with STRIDE Test Space? ==
===Uploading Results===


New result sets can be easily uploaded using the automatic upload feature in the [[Running_Test_Units#Publishing_Results_to_STRIDE_Test_Space|STRIDE Runner]]. You can also upload XML results data files manually using the web interface by navigating to the detail view for a specific test space and clicking the ''upload data'' button. For information on the XML Schema, see [[Test_Space_XML_Schema|here]].
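
If you want to script an upload yourself rather than use the Runner or the web interface, any HTTP client can submit the XML file. The sketch below is a hypothetical illustration only - the endpoint URL, credentials, and form field name are assumptions, not a documented API, and the file must conform to the Test Space XML Schema linked above:

<pre>
# Hypothetical scripted upload of a results XML file (illustration only).
import requests

with open("results.xml", "rb") as f:
    response = requests.post(
        # Hypothetical endpoint - substitute your actual Test Space URL.
        "https://example.testspace.host/projects/myproject/spaces/myspace/result_sets",
        auth=("user", "password"),  # hypothetical credentials
        files={"data": ("results.xml", f, "application/xml")},
    )
response.raise_for_status()  # fail loudly if the upload was rejected
</pre>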


===Viewing Results===
STRIDE Test Space presents several distinct views of your data.

====Overview====
The overview shows any recent activity (within the last 10 days) across all of your test spaces. Noteworthy events include the following:
* a new result set added. Summary stats and any potential problems (errors or setbacks) are reported.
* a new message added
* a new comment added to an existing message
This view is the default view for Test Space. This data is also available as an RSS feed (your browser should detect the feed and allow you to subscribe).
[[Image:Test_space_overview.png|600px]]


====All Projects====


The ''All Projects'' view shows all test spaces to which you have access, grouped by project. This is typically how users will navigate to specific test spaces, although links from events will also take you to specific test spaces. This view shows the space names, result set counts, and two spark-line graphs of the pass and fail count trends. For stable test-beds, these graphs should appear flat - otherwise they will indicate the trend of pass and fail counts for your current development effort.
[[Image:Test_space_projects.png|600px]]


====Project View====
This view is identical to the All Projects view, except it displays test spaces for only one project. You can navigate to this view by clicking on a project name in the All Projects view.

====Space View====
The Space view provides an overview for a specific test space. There are three segments to this view: events, trend graphs, and the result set list. The events section shows the recent events associated with this test space. These are the same events that appear for the given space in the Overview. The trend graphs show bar and line trend charts for the 15 most recent result sets. Both the events segment and the trend graphs can be hidden from view using the corresponding hide buttons.


The last segment is the result set list. This shows all result sets in descending sequence order (most recently added result sets at top). Each row in this tabular view includes the result set name and description, total duration, pass and fail totals and any baseline comparison results (if comparisons are activated for the space).
[[Image:Test_space_space.png|600px]]


====Results View====


Clicking on a result set name will take you to the ''Results View''. The results view shows a list of test suites. The presence of subsuites and test cases within a suite is indicated by the [[Image: arrow_blue_square_right.GIF]] button. Clicking this will cause the immediate children to be displayed. This drilldown button is shown for any suites that contain children. Other icons indicate the presence of other data for display - e.g. the [[Image: Book_blue_closed.JPG]] icon indicates that annotations are present (used for log messages) and the [[Image: plus_expand.GIF]] button indicates that comments can be displayed for the item.


Each row in the display also shows the pass/fail status for test cases, or the total number of each for test suites. Baseline comparison data is shown to the right, if a comparison is set up in the test space's properties. Baseline fields will generally only be displayed when there are differences with the baseline data - these differences can include status and duration (if timing data is present in the baseline).
 
[[Image:Test_space_results.png|600px]]


===Baseline Comparison===
Baseline comparison provides a powerful yet simple means to compare related sets of test results. What's more, it provides a way to measure progress against a defined goal or gold-standard file. Baseline comparisons consider two metrics when comparing: test case status and (optionally) test case duration.

We define two types of baseline comparisons in Test Space: fixed and sequential. A sequential baseline comparison causes each new result set to be compared against the previous result set. A fixed baseline comparison compares every result set against one or more fixed sets of data. These fixed baselines are created by copying results from an existing result set. The creation of fixed baseline data is typically a one-time operation, perhaps with ongoing updates to the baseline data as tests change.

A baseline comparison always considers status. Any difference in status between actual data and baseline is considered either a setback, progress, or - in some cases - a nil difference (for instance, when the status changes to or from "not applicable").

Baseline data can also optionally have duration thresholds set for some or all test cases. For sequential comparisons, the test space properties can be configured so that comparisons apply a fixed percentage threshold (upper and lower bounds) to the previous data when comparing test case durations. For most test cases - especially short tests - duration values are not stable on the order of milliseconds. As such, this sort of gross duration comparison is usually only useful for detecting large changes (say, a factor of one or more) in test case durations.

When creating a fixed baseline from existing data, you can also specify optional upper and lower bounds as percentages used to calculate the baseline duration values. With fixed baselines, these values can be further refined or tuned by editing the baseline data. With both fixed and sequential baselines, if no timing thresholds are specified, timing data is ignored when comparing results.
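
The comparison rules described above can be summarized in a short sketch. This is an illustration of the described behavior only, not Test Space's actual implementation - the function names and status strings are assumptions:

<pre>
# Minimal sketch of baseline comparison rules (illustration only).

def compare_status(actual, baseline):
    """Classify a status difference as progress, setback, or nil."""
    if actual == baseline:
        return "same"
    if "not applicable" in (actual, baseline):
        return "nil"          # e.g. status changed to or from "not applicable"
    return "progress" if actual == "passed" else "setback"

def compare_duration(actual_ms, baseline_ms, lower_pct, upper_pct):
    """Apply percentage bounds (upper and lower) to the baseline duration.

    With no thresholds configured, timing data is ignored entirely.
    """
    if lower_pct is None or upper_pct is None:
        return "ignored"
    low = baseline_ms * (1 - lower_pct / 100.0)
    high = baseline_ms * (1 + upper_pct / 100.0)
    return "within bounds" if low <= actual_ms <= high else "threshold violation"

# Example: a 25% band around a 200 ms baseline flags a 300 ms run.
print(compare_status("failed", "passed"))          # "setback"
print(compare_duration(300.0, 200.0, 25, 25))      # "threshold violation"
</pre>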


===Notifications===
A test space can be configured to notify users of errors (as log error messages) in a result set and/or regressions relative to a baseline (whether fixed or sequential). Regression notifications for status (pass/fail) and duration are separately configurable in the test space's properties.

Users can also be notified when a new message or comment is added somewhere within the test space. Each message thread has separate notification properties that control who will get notified as comments are added to the message.


===Messages===
Messages are light-weight discussion threads with context - they can apply to a specific suite, a result set, or generally to a test space. In the latter case, users can select a title for the message thread, since there can be more than one message thread for a given test space. Messages (and comments thereto) support limited formatting via [http://redcloth.org/hobix.com/textile/ Textile] markup. Messages can be added to result sets and suites by selecting the [[Image:Comments-empty.JPG]] icon. If a message already exists for a particular item, the icon will appear with a black background.


==Glossary of Terms==


===Test Case===
This is the unit of measure for test results - a single pass/fail entity. Test cases can be supplemented with additional information in the form of annotations and comments.


===Test Suite===
A Test Suite is just a grouping of test cases. Test Suites can have descriptions and annotations associated with them, but primarily they serve to group test cases. In the STRIDE Test Framework, each Test Unit creates a suite with the name of the test unit and the tests are placed in this suite.


===Annotation===
Annotations provide additional information about test cases or test suites. Each annotation has a level associated with it as well as a name and description. For this reason, the STRIDE Test Framework maps assert/note messages from [[Test Code Macros]] to annotations.


===Result Set===
A result set is a collection of test suites and test cases that represents a complete set of test results for a given test space. In its simplest form, it is the output from a single execution of the STRIDE Runner for a given set of test units.

===Test Space===
A test space is a logical grouping of test results. Although STRIDE Test Space does not enforce any relationship between result sets, test spaces are only useful for analysis when you use them to hold sequential result set data representing the same set of tests. That is, meaningful comparison between subsequent result sets can only be done when each result set represents execution of the same set of tests.

Each test space has properties that allow you to control who has access, if and how the results are compared to other results, whether to notify users of potential regressions, and how many result sets to keep in the space.


===Project===
A project is a logical grouping of test spaces. Every test space must be assigned to one and only one project.


===Baseline===
A baseline is a copy of some test data (suites and cases) that is maintained for the purpose of comparison with other result sets. This is the primary mechanism in Test Space for determining if test results have regressed.
 
===Messages===
Messages are simple discussion threads that can be attached to test spaces, result sets, or test suites. Any number of messages can be created for a test space, but only one message thread can be created for each result set or test suite. For any message, the initial message can be followed up by sequential comments added by other users.




[[Category: Test Space]]
