ONLINE HELP
WINDEV, WEBDEV AND WINDEV MOBILE

  • Overview
  • Displaying the test manager
  • Features of the test manager
  • Element test list management window
  • Ribbon options
  • Description of a test list and scenario
  • Test list description
  • Describing a scenario
  • Code of scenario
  • "Before the test"
  • Event "Test scenario"
  • Event "After the test"
WINDEV: Windows, Linux, Java, Reports and Queries, User code (UMC)
WEBDEV: Windows, Linux, PHP, WEBDEV - Browser code
WINDEV Mobile: Android, Android Widget, iPhone/iPad, iOS Widget, Apple Watch, Mac Catalyst
Others: Stored procedures
Overview
The test manager is used to:
  • display in the editor the list of tests (or scenarios) attached to an element.
  • configure the different tests.
The status report of test execution is displayed in the "Results of tests" pane.
Displaying the test manager
The "Project Explorer" pane displays all existing automation tests on the various project elements ("Tests" folder).
The various scenarios are grouped together according to the element tested (or the element that started the automation test) in a test list. For example, the list of tests named "TEST_FEN_AddressEntry" groups together all the test scenarios performed from the "FEN_AddressEntry" window.
The test manager allows you to manipulate a list of tests.
To display a test list in the test manager, double-click the name of the test list (in the "Project Explorer" pane).
Note: You can also view (in the test manager) the list of tests associated with the element currently open in the editor: on the "Automated tests" tab, in the "Tests" group, click "See associated tests".
Features of the test manager

Element test list management window

Below is the window for managing the automated tests of an element:
This window allows you to:
  • View the list of tests (the set of scenarios associated with an element) in the left section of the window. The tests (or scenarios) are classified into several categories:
    • Stand-alone tests: independent tests. For example, for a stand-alone test on a window, the window is closed at the end of the test.
    • Chained tests: tests run one after the other. For example, for chained tests on a window, the window remains open at the end of the first test so that the next test can run on the same window.
    • Sub-tests: tests that are not called directly but only via the EmulateWindow function.
  • Display the code of a scenario. Simply select a scenario in the tree structure on the left; the corresponding WLanguage code is displayed on the right-hand side of the editor.
From the tree structure on the left, you can:
  • Launch a scenario: select the desired scenario in the tree structure on the left, then select "Launch" in the context menu.
  • Display the description of a test list: select the test list in the tree structure on the left, then select "Description" in the context menu.
  • Display the description of a scenario: select the scenario in the tree structure on the left, then select "Description" in the context menu.
  • Rename or delete a scenario: select the scenario in the tree structure on the left, then select "Rename" or "Delete" in the context menu.
  • Create a new scenario ("New scenario" option in the context menu): the scenario created is empty and the corresponding WLanguage code can be entered.
  • Create a test group ("New test group" option in the context menu): scenarios can be grouped into a test group by drag and drop.
An icon next to each test indicates its status:
  • test passed without error,
  • test modified,
  • test in error.
The status report of test execution is displayed in the "Results of tests" pane.

Ribbon options

Some features of the test editor are grouped in the "Automated tests" tab of the ribbon.
The options are as follows:
  • "Tests" group options:
    • New: Creates a new scenario. It is possible to:
      • Record a new scenario: Creates a new scenario in the test list currently open in the test manager. If the test list is associated with a window, the editor indicates that the window test is about to begin and that all the actions performed will be recorded in a scenario.
      • New blank scenario: Creates a blank scenario.
      • Import a recording from the application: Imports a test scenario (wsct file) recorded by the user into the current project. For more details, see Automated test created by the user.
    • Execute test: Executes the current test in the test editor.
    • Run all:
      • Run all project tests: Allows you to run all defined scenarios on all project elements.
      • Run all tests not yet run: Allows you to launch all scenarios that have not yet been run.
        Note: A scenario that has been modified and not run is considered to be a test that has not been passed.
      • Run all tests that have detected errors: Allows you to run only those tests that have detected errors.
      • Launch automation tests in slow motion: If this option is selected, the scenario execution speed is slowed down.
      • Enable dynamic auditing during automatic tests: If this option is selected, dynamic auditing is automatically launched at the start of test scenario execution, and the audit report is displayed at the end of scenario execution.
      • Enable strict mode during automatic tests: If this option is selected, in the event of a test error (TestCheck function returning False, errors, assertions, etc.), the test is automatically stopped in the debugger on the current iteration.
    • View results: Displays the results of the current test in the "Results of tests" pane.
    • Go to object: Displays, in the corresponding editor, the project element associated with the scenario displayed in the test editor.
  • "Automated test" group options:
  • Options for the "Code Coverage" group: Allows you to configure the data displayed by the code coverage. For more details, see Code coverage.
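As an illustration of what strict mode reacts to (bVerifEmail is a hypothetical validation procedure; TestCheck is the standard WLanguage test function used in the examples on this page), a minimal sketch of a failing check might look like this:
// Hypothetical check: if strict mode is enabled and this TestCheck
// call returns False, the test stops in the debugger on the current iteration
TestCheck(bVerifEmail("test@test@test.fr") = False, "Invalid address accepted")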
Description of a test list and scenario

Test list description

To display the description of a test list:
  1. Select the test list in the Test Manager tree structure.
  2. Select "Description" in the context menu.
The description of a test list allows you to enter information in various tabs:
  • "General": Allows you to specify:
    • the name of the test and a caption.
    • the type of tests in the test list:
      • Unit tests: automation tests linked to an element (a window, for example).
      • Application tests: automatic application test, recorded with WDAutomate. For more details, see Automated tests on an executable.
  • "Details": Used to specify (for a unit test on a window) the parameters used to open the window for the test (stand-alone tests only).
    Indeed, when testing a window that expects parameters, it can be useful to open the window with specific parameters. The parameters specified when the test was created are used by default.
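For illustration (the window name and parameter are hypothetical), a window test with opening parameters assumes a window declaration such as:
// Declaration of the hypothetical window WIN_AddressEntry:
// the window expects a customer ID when it is opened.
// The "Details" tab of the test lets you override the value passed for the test.
PROCEDURE WIN_AddressEntry(nCustomerID is int)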

Describing a scenario

To display the description of a test scenario:
  1. Select the scenario in the tree structure of the test manager.
  2. Select "Description" in the context menu.
The description of a test scenario is used to enter information in the different tabs:
  • "General": Allows you to specify the name of the test and a descriptive caption.
  • "Details": Allows you to specify (in the case of a window):
    • Whether the test must trigger an error if the window cannot be closed at the end of the test. When running unit tests on a window, this option is used to check that the window closes. If this option is checked, the test is a stand-alone test.
    • Whether the test must keep the window open at the end of its execution so that other tests can be run on this window. This option allows you to chain several window tests together. It is automatically checked if another window is opened during a window test. If this option is checked, the test is a chained test.
    • Whether the test result depends on a sequence of tests. In this case, the test cannot be run on its own. If this option is checked, the test is a sub-test.
    The "Details" tab can also be used to redefine the parameters used to open the window for the test (stand-alone tests only).
    Indeed, when testing a window that expects parameters, it can be useful to open the window with specific parameters. The parameters specified when the test was created are used by default.
Code of scenario
The code of a test scenario can be modified: it is written in WLanguage and can easily be edited. To access it, simply select a scenario in the tree structure on the left; the corresponding WLanguage code is displayed on the right-hand side of the editor.
Three code sections are displayed:
  • the "Before the test" event, run before the test.
  • the "Test scenario" event, which corresponds to the code run to test the desired element.
  • the "After the test" event, used to check the test results (against the data found in a reference file, for example). Generally, this event closes and frees all the elements opened and allocated in the "Before the test" event.

"Before the test"

The event "Before the test" is used to:
  • locate the data that will be used for the test,
  • pass the necessary parameters to the test,
  • define variables and a set of test data (via TestAddIteration).
Example:
// -- "Before the test" event of a test on a procedure
// Two parameters were defined in the "Test scenario" event
// Add the test data
TestAddIteration("test@test.fr", True)
TestAddIteration("non valide", False)
TestAddIteration("", False)
TestAddIteration("test.test.test@test.fr", True)
TestAddIteration("test@test@test.fr", False)
TestAddIteration("test.fr", False)
TestAddIteration("http://www.test.fr", False)

Event "Test scenario"

The event "Test scenario" corresponds to the code run by the test.
You have the ability to modify the scenario code, to add operations into the test. You have the ability to use the functions specific to the automated tests or the Emulate functions.
You also have the ability to replace values by parameters (to use a set of test data). In this case, you have the ability to declare a procedure in this code, to specify the expected parameters.
Example 1: Scenario using a test data set and a controller
In this code, the Contrôleur1 variable is defined as a controller (via the <Controller> extension attribute). This controller variable can then be used in the test code, for example to check the result of a procedure. In our example, the line of code:
Contrôleur1 = bVerifEmail(psEMail)
runs the bVerifEmail procedure, passing the psEMail parameter, and compares the result to the value defined for the Contrôleur1 variable.
// Definition of two parameters to create a test data set
// The value of these parameters is defined in the "Before the test" event
PROCEDURE MonScénario(psEMail, Contrôleur1 is boolean<controller>)
Contrôleur1 = bVerifEmail(psEMail)
Example 2: Scenario using a test data set and the TestCheck function
In this code, the procedure corresponding to the test expects two parameters. The data set is checked by TestCheck.
// Definition of two parameters to create a test data set
// The value of these parameters is defined in the "Before the test" event
PROCEDURE MonScénario(psEMail, Contrôleur1 is boolean)
TestCheck(Contrôleur1 = bVerifEmail(psEMail), "Test error for " + psEMail)

Event "After the test"

The event "After the test" is used to check the test results (according to the data found in a reference file for example). Generally, this event allows you to close and free all the elements open and allocated in the "Before the test" event.
Example: Deleting test data. This code checks whether the test address was created and deletes it if necessary.
// Check that the address was saved correctly
HReadSeekFirst(ADRESSE, NOM, "TEST")
IF HFound(ADRESSE) = False THEN
	// The test failed
	TestWriteResult(twrError, "The database was not updated correctly")
ELSE
	// Test OK, the record can be deleted
	TestWriteResult(twrInfo, "Database updated")
	// Delete the test address
	HDelete(ADRESSE)
	TestWriteResult(twrInfo, "Test address deleted")
END
Minimum version required
  • Version 14
Last update: 03/28/2025
