Overview
- The test cases specification file
-
File testcases.csv specifies all test cases
related to model coverage criteria: each model coverage goal is identified
by means of a test case tag and a logical formula specifying the
constraint for covering the goal. The current baseline of the benchmark
model specifies the goals for
Basic Control State (BCS) coverage
The goals consist in reaching basic control states in (potentially
hierarchic) state machines of the model.
State Machine Transition (TR) coverage
The goals consist in covering transitions between (higher- or
lower-level) control states of state machines.
MC/DC Coverage
The objective is again to cover state machine transitions, with
the refinement that guard conditions a && b should be
covered with the different cases a && b, a && !b, !a && b,
and guard conditions a || b should be covered with test
cases a && !b, !a && b, !a && !b, as far as possible
(for more complex Boolean expressions the MC/DC criteria are refined
in a natural way).
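As an illustration, the MC/DC cases listed above can be tabulated programmatically. The helper below is a hypothetical sketch, not part of RT-Tester; it simply returns the valuations named in the text for the two basic guard forms:

```python
def mcdc_cases(op):
    """Return the (a, b) valuations required for MC/DC coverage of a
    two-condition guard 'a && b' or 'a || b', as listed above."""
    if op == "&&":
        # a && b, a && !b, !a && b
        return [(True, True), (True, False), (False, True)]
    if op == "||":
        # a && !b, !a && b, !a && !b
        return [(True, False), (False, True), (False, False)]
    raise ValueError(f"unsupported operator: {op}")
```

Each pair of listed valuations differs in exactly one condition while flipping the guard's outcome, which is the essence of the MC/DC criterion.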
All test cases are automatically derived from the model by the RT-Tester
symbolic test case generator and specified in file
testcases.csv with the following syntax:
test-case-tag;[test-case-specification];
The test case tag is a unique identifier carrying the type of coverage
goal (BCS, TR, MCDC) in its name. The test case specification
spec is a logical formula with inputs, outputs, model variables
and basic control states as free variables. It specifies the condition to
be fulfilled by at least one model state, in order to cover the test case.
Interpreting the test case specification as an LTL formula, an implicit "F
(Finally)" has to be added, in the sense that "finally a model state
satisfying spec has to be reached" in order to cover the test case.
For example,
TC-turn_indication-BCS-0026;
["IMR.SystemUnderTest.LampControl.DisplayLights.IC_L.IC_L.OFF"@0];
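Lines of this shape can be split into tag and specification with a small helper. The function below is a hypothetical sketch (not part of RT-Tester), assuming tag and specification of each test case appear on one, possibly wrapped, line of testcases.csv:

```python
import re

# Matches one line of the form:  test-case-tag;[test-case-specification];
LINE_RE = re.compile(r'^(?P<tag>[^;]+);\[(?P<spec>.*)\];$')

def parse_testcase_line(line):
    """Split a testcases.csv line into (tag, specification formula)."""
    m = LINE_RE.match(line.strip())
    if not m:
        raise ValueError(f"malformed test case line: {line!r}")
    return m.group("tag"), m.group("spec")
```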
specifies that finally a model state should be reached where state machine
IMR.SystemUnderTest.LampControl.DisplayLights.IC_L is in
basic control state OFF. Note that for state machine
transition coverage the constraint to be fulfilled is not just "the guard
condition should be true" but "the guard condition should be true AND
((the source location is a basic control state AND the state machine
should be in this control state when the model state is reached) OR (the
source location is a higher-level control state AND the state machine is
in one of its sub-ordinate basic control states)) AND no higher-priority
transition is enabled". This condition is called the trigger condition
of the state machine transition. For example, test case
TC-turn_indication-TR-0047;
[("IMR.SystemUnderTest.CrashFlashing.SendCrashBits.SendCrashBits.IMPACT_CONFIRMED"@0 && (("_timeTick"@0 - "IMR.SystemUnderTest.CrashFlashing.SendCrashBits.t"@0) >= 20))];
specifies that finally a model state should be reached where state machine
IMR.SystemUnderTest.CrashFlashing.SendCrashBits.SendCrashBits
is in control state IMPACT_CONFIRMED and that 20ms have passed
since setting the timer t. Test case
TC-turn_indication-TR-0048;
[(("IMR.SystemUnderTest.CrashFlashing.SendCrashBits.SendCrashBits.IMPACT_CONFIRMED"@0 && (("_timeTick"@0 - "IMR.SystemUnderTest.CrashFlashing.SendCrashBits.t"@0) < 20)) && ("IMR.SystemUnderTest.CrashFlashing.SendCrashBits.ctr"@0 >= 50))];
specifies the trigger condition for the lower-priority transition emanating
from control state IMPACT_CONFIRMED to be covered: the timer
t should not have elapsed yet, and the local variable
ctr should have a value greater than or equal to 50.
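To make the trigger-condition reading concrete, the following hypothetical sketch (not RT-Tester code) evaluates the TR-0048 condition over a model state represented as a plain dictionary, with value 1.0 marking an active basic control state:

```python
# Variable names taken from the test case specification above.
IMPACT = ("IMR.SystemUnderTest.CrashFlashing.SendCrashBits."
          "SendCrashBits.IMPACT_CONFIRMED")
TIMER = "IMR.SystemUnderTest.CrashFlashing.SendCrashBits.t"
CTR = "IMR.SystemUnderTest.CrashFlashing.SendCrashBits.ctr"

def tr_0048_trigger(state):
    """True iff the model state covers TC-turn_indication-TR-0048."""
    in_impact = state.get(IMPACT, 0.0) == 1.0
    timer_not_elapsed = (state["_timeTick"] - state[TIMER]) < 20
    return in_impact and timer_not_elapsed and state[CTR] >= 50
```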
- Coverage of symbolic test cases
-
A symbolic test case TC is covered if the SUT performs a
computation satisfying the LTL formula associated with TC. To this end,
the generator will calculate a sequence of input vectors to the SUT and
associated timing conditions or wait conditions for some expected output
of the SUT, such that this input sequence stimulates a SUT computation
covering TC. If such an input sequence can be found, we say that the
generator discharges the test case: a test procedure executing the
input sequence will drive the SUT through a computation satisfying the TC
specification formula.
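The "Finally" semantics of discharging can be phrased in a few lines. This is an illustrative sketch, with the test case specification modelled as a Python predicate over model states, not the RT-Tester implementation:

```python
def discharges(computation, spec):
    """A symbolic test case is covered iff at least one model state of
    the computation satisfies its specification formula spec."""
    return any(spec(state) for state in computation)
```

For example, a computation whose states never satisfy the predicate leaves the test case undischarged, regardless of its length.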
- Configuration of the test generation process
-
Each test generation benchmark is accompanied by three CSV-files:
- Goals-File
The goals-file specifies all test cases to be covered by the
test procedure which is to be generated. Each line in the goals-file
is formatted like a test case
test-case-tag;[test-case-specification];
as described for file testcases.csv above.
- Configuration-File
The configuration-file lists all model components and allows
generation-dependent activation or de-activation of environment
simulations. In principle, this configuration file can also be used
to de-activate incomplete components during SUT model development or
to de-activate component variants, if the SUT model contains
different alternatives of SUT behavior which are only activated in
specific configurations (so-called 150% models). These
possibilities, however, are not used in the current turn indication
model discussed here. The attributes of configuration-files are
described in the following table:
Column Name | Description
Name | Name of the component, statechart or location, respectively.
Simulation | Marked with an 'X' if this entry belongs to the environment simulation.
Deactivated | Marked with an 'X' if this part of the model is deactivated, so that it will be disregarded during the generation process.
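A sketch of how such a configuration-file might be consumed, assuming ';' as separator and the column names from the table above (the real file layout may differ in detail):

```python
import csv
import io

def active_sut_components(text):
    """Yield names of entries that belong neither to the environment
    simulation nor to the deactivated parts of the model."""
    for row in csv.DictReader(io.StringIO(text), delimiter=";"):
        if row["Simulation"] != "X" and row["Deactivated"] != "X":
            yield row["Name"]
```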
- Signalmap-File
The signalmap-file lists system interface variables and their
admissible value range. The attributes of signalmap-files are
described in the following table:
Column Name | Description
Abstract Signal | The system interface variable.
lower bound | Lower bound of the variable.
upper bound | Upper bound of the variable.
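Reading the signalmap-file into a range table can be sketched as follows; the ';' separator and the signal names in the test are assumptions for illustration only:

```python
import csv
import io

def signal_ranges(text):
    """Map each system interface variable to its admissible
    (lower bound, upper bound) range."""
    ranges = {}
    for row in csv.DictReader(io.StringIO(text), delimiter=";"):
        ranges[row["Abstract Signal"]] = (float(row["lower bound"]),
                                          float(row["upper bound"]))
    return ranges
```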
- Generation results
-
The generation results file contains the test data created in the test
generation run. This comprises the input data to the SUT, the internal
transitions of the model state, and the outputs from the SUT to the test
environment. While the input data has been calculated by the constraint
solver, the internal transitions and (expected) outputs have been
determined by an interpreter executing the model from the current
pre-state with the input data from the solver. The generation results file
has the following structure:
File Section | Description
Signal Specifications | This is the first section of the file, with syntax:
signalspec ::= signalspec { signal_specifications }
signal_specifications ::= empty | signal_specifications , "signal_name" lower_bound upper_bound
signal_name ::= signal name or basic control state name, as used in the model
lower_bound ::= smallest signal value assigned during the test generation (represented as floating point value)
upper_bound ::= biggest signal value assigned during the test generation (represented as floating point value)
Model computation | This section follows after the signal specifications. It contains the complete computation associated with the test generation, structured according to the following syntax:
model_computation ::= empty | model_computation timestep timestamp { state_specification }
state_specification ::= empty | state_specification , "signal_name" value
value ::= current signal value (represented as floating point value); for basic control states, value 1.0 indicates that the associated state machine resides in this state
In addition to the test data created, the full set of test cases
discharged by the generated test data is listed in the format
UNDER CONSTRUCTION
This section contains benchmarks of the classification 'test generation
benchmarks', together with the corresponding run-times of the RT-Tester tool,
whose model-based test generation component RTT-MBT has been used as the
reference tool for creating the tests.
-
The following three tests generate 100% BCS coverage, causing one SUT
reset per test. No backtracking or random simulation techniques were
applied, only the SMT solver accelerated by abstract interpretation. The
whole generation process took 207s. The following tables refer to the
configuration of each test generation process and give short descriptions
of what the generated tests do. Test BCS 001 covers all basic control
states related to crash flashing, Test BCS 002 covers states related to
left/right and emergency flashing, and Test BCS 003 focuses on coverage
of control states related to theft alarms.
-
The following two tests generate 100% state machine transition coverage,
causing one SUT reset per test. No backtracking or random simulation
techniques were applied, only the SMT solver accelerated by abstract
interpretation. The whole generation process took 229s. The following
tables refer to the configuration of each test generation process and give
short descriptions of what the generated tests do.
-
The following tests are run in addition to the state machine transition
coverage tests. Together, they generate 100% MC/DC coverage,
causing one SUT reset per test. No backtracking or random simulation
techniques were applied, only the SMT solver accelerated by abstract
interpretation. The whole generation process took
309s [transition coverage] + 25s [MCDC tests] = 334s. The following
tables refer to the configuration of each test generation process and give
short descriptions of what the generated tests do.
-
UNDER CONSTRUCTION
-
Our application-specific ("user-defined") test cases are symbolic test cases specified in
Linear Temporal Logic (LTL).