A set of new features is being introduced for EIS. What's mainly new is that there is now a table of test results plus additional buttons to start tests on the "Tests" tabpage when viewing a CWS. Additionally, a CWS gets an overall status which is calculated from the statuses of the individual tests. This is shown on the "Tests" tabpage as well as on the "Overview" tabpage. It is recommended to review this overall status and to try to fix potential issues before setting a CWS to "ready for QA". Other action buttons previously present on the "Tests and Actions" tabpage have been moved to the "Overview" tabpage. The "Tests and Actions" tabpage has been renamed to "Tests".
The new features
- A table with results for a number of tests, plus buttons to start tests, on the "Tests" tabpage of a CWS.
- An overall summarized status for the CWS on the "Overview" and "Tests" tabpages of a CWS. The overall status takes into account whether tests are mandatory for all platforms, mandatory on at least one platform, optional, or experimental. In the future, a process is planned that requires this status to be green before a ChildWorkspace is set to state "ready for QA".
- ConvWatch tests, performance tests and AutomationCAT0 tests can be started on dedicated computers provided by Sun Microsystems.
- A MasterWorkspace / BrowseMWS tree view for MasterWorkspaces, with tables of test results for MasterWorkspace milestones (currently containing only external buildbot build results) and buttons to start builds on external buildbots for those milestones.
- For AutomationCAT0 tests, an email is sent to the QARep when the test has finished, with a link to a webpage in QUASTe where the QARep can review the test results. When the AutomationCAT0 test is finished, the status of the test is first set to "finished, review pending" by the testbot and afterwards to "failed" or "ok" by the QARep via a new feature in QUASTe.
- An additional "cwstestresult" script in the development environment which can be used to manually add results of individual experimental tests to a CWS.
- New EIS SOAP interfaces to submit test results, used by the tests that run on dedicated machines, such as performance tests, ConvWatch tests and external buildbot builds.
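The WSDL of the new EIS SOAP interfaces is not reproduced here. As an illustration only, the following sketch builds a SOAP 1.1 envelope for a hypothetical submitTestResult operation using Python's standard library; the operation name, element names, and namespace are assumptions for illustration, not the real EIS API.

```python
# Sketch only: builds a SOAP 1.1 envelope for a *hypothetical*
# submitTestResult operation. Element names and the EIS namespace
# are illustrative assumptions, not the real EIS interface.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
EIS_NS = "urn:eis-test-results-example"  # placeholder namespace

def build_submit_envelope(cws_name: str, testrun_name: str,
                          platform: str, status: str) -> bytes:
    """Return a serialized SOAP envelope carrying one test result."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    req = ET.SubElement(body, f"{{{EIS_NS}}}submitTestResult")
    for tag, value in (("cws", cws_name), ("testrunName", testrun_name),
                       ("platform", platform), ("status", status)):
        ET.SubElement(req, f"{{{EIS_NS}}}{tag}").text = value
    return ET.tostring(envelope, encoding="utf-8")

print(build_submit_envelope("mycws01", "performance", "unxlngi6", "ok").decode())
```

In a real client the envelope would be POSTed to the EIS SOAP endpoint with the appropriate SOAPAction header; the transport is omitted here since the endpoint details are not part of this announcement.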
There are two types of tests
- Quick conformance tests that are run by EIS each time the "Tests" tabpage of a CWS is viewed. These include a test for whether all tasks on the CWS are already fixed, a test for whether the release set at the CWS is allowed, a test for whether all task targets are allowed for the MasterWorkspace of the CWS, a test for whether the developer already called cwslocalize if the CWS is help- or ui-relevant, and a check of the tinderbox status message.
For these tests, the Action column in the table describes what is needed in order to pass the test.
- Longer-running tests which run on dedicated test hardware and transmit their status to EIS via a SOAP interface. These include performance tests, ConvWatch tests, AutomationCAT0 tests and builds on Sun-external buildbots. For these tests, Sun users have a button in the Action column to start the test on the dedicated test machines. Non-Sun users only have a button for starting builds on external buildbots; for the other tests, which run on internally reachable hardware only, they see a message that they can ask a Sun contributor to start the test for them.
The following statuses are available: "failed", "ok", "running", "finished, needs review", "queued", "incomplete", "untested".
The status "queued" is set if a test subsystem knows that the test should run and has put it in a queue with other tests to run via the same subsystem. The status "incomplete" is set if the test consisted of several subtests and the whole test was interrupted before all subtests had been run, or if the test could not be run at all for some reason, e.g. because installation sets were not available.
Meaning of the statuses
|failed||one or more mandatory, QA or optional tests failed|
|ok||all mandatory tests have been run and have an ok result|
|running||not all mandatory tests have been run and at least one of them is in status running|
|finished, needs review||the test finished but needs manual review to qualify the results|
|queued||not all mandatory tests have been run and at least one of them is in status queued|
|incomplete||of all mandatory tests at least one has not even been queued; the developer needs to start at least one more test|
|untested||no mandatory test has been run yet; the developer needs to start all mandatory tests|
To configure how the results of individual tests are counted into the overall summarized result for the CWS, requirement levels are defined.
|Mandatory||A test with this testrunName must always run on all platforms for which it is defined.|
|One Platform||A test with this testrunName must always run on at least one of the platforms for which it is defined.|
|Optional||A test with this testrunName is optional; if it has not been run yet, the overall result can still be green. If it failed, the overall result is failed, though.|
|QA||A test with this testrunName is run by the QA team after the CWS has been set to state "readyForQA". If it failed, the overall result is also failed, but if it has not been run yet, the overall result can still be green.|
|Experimental||A test with this testrunName is experimental only. Its result is not relevant for the overall summarized result at all.|
Note: tests which are flagged experimental in the EIS database do not contribute to the overall status at all.
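The aggregation rules above can be sketched in code. The following is a minimal illustration, assuming each test is a simple (requirement level, status) pair; the precedence of the rules and the treatment of "One Platform" like "Mandatory" are interpretations of the tables above, not EIS's actual implementation.

```python
# Sketch of the overall-status aggregation described above.
# Assumptions (not the real EIS code): each test is a (level, status)
# pair; "Mandatory" and "One Platform" are both treated as mandatory
# here for simplicity, and "finished, needs review" is handled like
# a still-running test.

def overall_status(tests):
    """tests: list of (level, status) tuples.
    level  in {"Mandatory", "One Platform", "Optional", "QA", "Experimental"}
    status in {"failed", "ok", "running", "queued", "incomplete",
               "untested", "finished, needs review"}"""
    # Experimental tests do not contribute to the overall status at all.
    relevant = [(lvl, st) for lvl, st in tests if lvl != "Experimental"]

    # Any failed mandatory, QA or optional test fails the whole CWS.
    if any(st == "failed" for _, st in relevant):
        return "failed"

    mandatory = [st for lvl, st in relevant
                 if lvl in ("Mandatory", "One Platform")]
    if not mandatory or all(st == "untested" for st in mandatory):
        return "untested"
    if all(st == "ok" for st in mandatory):
        return "ok"
    if any(st in ("running", "finished, needs review") for st in mandatory):
        return "running"
    if any(st == "queued" for st in mandatory):
        return "queued"
    # Some mandatory tests were run, at least one was never even queued.
    return "incomplete"
```

Note how an Optional or QA test that has not run leaves the overall status green, while a failed one does not, matching the requirement-level table.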
Note on ConvWatch, Performance and AutomationCAT0 tests
- These tests are executed on dedicated computers provided by Sun Microsystems. They take some time and are queued for execution on the dedicated machines, so if there are other tests in the queue you will have to wait until it is your turn. Performance tests take roughly 15 minutes, ConvWatch tests between two and a half and three hours, and AutomationCAT0 tests even roughly 8 hours.
Note on Buildbots
- Buildbots may reject builds if they know the corresponding milestone is broken. The status is changed to incomplete in this case. The only way to get an OK status for such a CWS is to resync to a newer milestone in which the build breaker known to the buildbot has been fixed.
There is also a new SOAP and command-line API for EIS tests available.
Feedback is being collected on the email@example.com mailing list.