- Overview
- Displaying the test manager
- Features of the test manager
- Element test list management window
- Ribbon options
- Description of a test list and scenario
- Test list description
- Describing a scenario
- Code of scenario
- "Before the test"
- Event "Test scenario"
- Event "After the test"
Manager of automated tests
The test manager is used to:
- display in the editor the list of tests (or scenarios) attached to an element,
- configure the different tests.
Displaying the test manager
The "Project Explorer" pane displays all the existing automated tests on the various project elements ("Tests" folder). The scenarios are grouped by tested element (or by the element that started the automated test) into a test list. For example, the test list named "TEST_FEN_AddressEntry" groups all the test scenarios run from the "FEN_AddressEntry" window.
The test manager allows you to manipulate a test list. To display a test list in the test manager, double-click the test list name (in the "Project Explorer" pane).
Note: You can also view (in the test manager) the list of tests associated with the element currently open in the editor: on the "Automated tests" tab, in the "Tests" group, click "See associated tests".
Features of the test manager
Element test list management window
Below is the window for managing the automated tests of an element:
This window allows you to: - View a test list (the set of scenarios associated with an element) in the left section of the window. The tests (or scenarios) are classified into several categories:
- Stand-alone tests: independent tests. For example, for a stand-alone test on a window, the window is closed at the end of the test.
- Chained tests: tests run one after another. For example, for chained tests on a window, the window remains open at the end of the first test so that the next test can run on the same window.
- Sub-tests: tests not called directly but only via the EmulateWindow function.
- Display the code of a scenario: select the scenario in the tree structure on the left. The corresponding WLanguage code is displayed on the right-hand side of the editor.
From the tree structure on the left, you can also:
- Launch a scenario: select the desired scenario, then select "Launch" in the context menu.
- Display the description of the test list: select the test list, then select "Description" in the context menu.
- Display the description of a scenario: select the scenario, then select "Description" in the context menu.
- Rename or delete a scenario: select the scenario, then select "Rename" or "Delete" in the context menu.
- Create a new scenario ("New scenario" option in the context menu): the scenario created is empty and the corresponding WLanguage code can be entered.
- Create a test group ("New test group" option in the context menu): scenarios can be grouped into a test group by drag and drop.
Icons in the tree structure indicate the status of each test: test passed without error, test modified, or test in error.
Ribbon options
Some features of the test editor are grouped on the "Automated tests" tab of the ribbon. The options are as follows: - "Tests" group options:
- New: Creates a new scenario. It is possible to:
- Record a new scenario: Allows you to create a new scenario in the test list currently open in the test manager. If the test list is associated with a window, the editor indicates that the window test is about to begin and that all the operations performed will be recorded in a scenario.
- New blank scenario: Creates a blank scenario; the corresponding WLanguage code can then be entered.
- Import a Record from the application: Allows you to import a test scenario (wsct file) created by the user into the current project. For more details, see Automated test created by the user.
- Execute test: Executes the current test in the test editor.
- Run all:
- Run all project tests: Allows you to run all defined scenarios on all project elements.
- Run all tests not yet run: Allows you to launch all scenarios that have not yet been run.
Note: A scenario that has been modified and not rerun is considered a test that has not been run.
- Run all tests that have detected errors: Allows you to run only the tests that detected errors.
- Launch automation tests in slow motion: If this option is selected, the scenario execution speed is slowed down.
- Enable dynamic auditing during automatic tests: If this option is selected, dynamic auditing is automatically launched at the start of test scenario execution, and the audit report is displayed at the end of scenario execution.
- Enable strict mode during automatic tests: If this option is selected, in the event of a test error (TestCheck function returning False, errors, assertions, etc.), the test is automatically stopped in the debugger on the current iteration.
- View results: Displays the results of the current test in the "Test results" pane.
- Go to object: Displays, in the corresponding editor, the project element associated with the scenario displayed in the test editor.
- "Automated test" group options:
- Options for the "Code Coverage" group: Allows you to configure the data displayed by the code coverage. For more details, see Code coverage.
Description of a test list and scenario
Test list description
To display the description of a test list: - Select the test list in the test manager's tree structure.
- Select "Description" in the context menu.
The description of a test list allows you to enter information in various tabs: - "General": Allows you to specify:
- the name of the test and a caption.
- the type of tests in the test list:
- Unit tests: automated tests linked to an element (a window, for example).
- Application tests: automated application tests, recorded with WDAutomate. For more details, see Automated tests on an executable.
- "Details": Allows you to specify (for a unit test on a window) the parameters used to open the window for the test (stand-alone tests only).
Indeed, when testing a window that expects parameters, it can be useful to test opening the window with specific parameters. The parameters specified when the test was created are used by default.
Describing a scenario
To display the description of a test scenario: - Select the scenario in the test manager's tree structure.
- Select "Description" in the context menu.
The description of a test scenario allows you to enter information in various tabs: - "General": Allows you to specify the name of the test and a caption.
- "Details": Allows you to specify (in the case of a window):
- Whether the test must trigger an error if the window cannot be closed at the end of the test. When running unit tests on a window, this option is used to check whether the window is closed. If this option is selected, the test is a stand-alone test.
- Whether the test must keep the window open at the end of its execution so that other tests can be run on this window. This option allows you to chain several window tests together. It is automatically checked if another window is opened during a window test. If this option is checked, the test is a chained test.
- Whether the test result depends on a sequence of tests. In this case, the test cannot be run on its own. If this option is checked, the test is a sub-test.
The "Details" tab can also be used to redefine the parameters used to open the window for the test (stand-alone tests only). Indeed, when testing a window that expects parameters, it can be useful to test opening the window with specific parameters. The parameters specified when the test was created are used by default.
Code of scenario
The code of a test scenario can be modified: the scenario code is written in WLanguage and can easily be edited. To access the scenario code, select a scenario in the tree structure displayed on the left. The corresponding WLanguage code is displayed on the right-hand side of the editor. Three code sections are displayed: - the event "Before the test". This event is run before the test.
- the event "Test scenario". This event corresponds to the code run to test the desired element.
- the event "After the test". This event is used to check the test results (according to the data found in a reference file for example). Generally, this event allows you to close and free all the elements open and allocated in the "Before the test" event.
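Schematically, the three events cooperate as sketched below. This sketch only reuses functions shown on this page (TestAddIteration, TestCheck, TestWriteResult); bVerifEmail is a hypothetical procedure assumed to return True for a valid email address, and MyScenario is an illustrative name:
// "Before the test": define the test data set (one call per iteration)
TestAddIteration("test@test.fr", True)
TestAddIteration("test.fr", False)
// "Test scenario": run once per iteration, receiving the iteration values as parameters
PROCEDURE MyScenario(psEMail, pbExpected is boolean)
TestCheck(pbExpected = bVerifEmail(psEMail), "Test failed for " + psEMail)
// "After the test": report and clean up
TestWriteResult(twrInfo, "Email validation scenario finished")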
"Before the test"
The event "Before the test" is used to: - locate the data that will be used for the test,
- pass the necessary parameters to the test,
- define variables and define a set of test data (via TestAddIteration).
Example:
TestAddIteration("test@test.fr", True)
TestAddIteration("non valide", False)
TestAddIteration("", False)
TestAddIteration("test.test.test@test.fr", True)
TestAddIteration("test@test@test.fr", False)
TestAddIteration("test.fr", False)
TestAddIteration("http://www.test.fr", False)
Event "Test scenario"
The event "Test scenario" corresponds to the code run by the test. You can also replace values with parameters (to use a set of test data). In this case, you can declare a procedure in this code in order to specify the expected parameters.
Example 1: Scenario using a test data set and a controller. In this code, the Contrôleur1 variable is defined as a controller (via the <controller> extension attribute). This controller variable can then be used in the test code, for example to check the result of a procedure. In our example, the line of code:
Contrôleur1 = bVerifEmail(psEMail)
runs the bVerifEmail procedure, passing the psEMail parameter, and compares the result with the value defined for the Contrôleur1 variable.
PROCEDURE MonScénario(psEMail, Contrôleur1 is boolean<controller>)
Contrôleur1 = bVerifEmail(psEMail)
Example 2: Scenario using a test data set and the TestCheck function. In this code, the procedure corresponding to the test expects two parameters. The test data set is checked by TestCheck.
PROCEDURE MonScénario(psEMail, Contrôleur1 is boolean)
TestCheck(Contrôleur1 = bVerifEmail(psEMail), "Test error for " + psEMail)
Event "After the test"
The event "After the test" is used to check the test results (for example, against the data found in a reference file). Generally, this event closes and frees all the elements opened and allocated in the "Before the test" event.
Example: Deleting test data. This code checks whether the test address was created and deletes it if necessary.
HReadSeekFirst(ADRESSE, NOM, "TEST")
IF HFound(ADRESSE) = False THEN
TestWriteResult(twrError, "The database was not updated correctly")
ELSE
TestWriteResult(twrInfo, "Database updated")
HDelete(ADRESSE)
TestWriteResult(twrInfo, "Test address deleted")
END