diagram approach, while the second group related to “formal approaches and automation” covers formal
models, domain-specific solutions and generation.
Correctness. Regarding the modelling process, several studies propose using conventions in order to
prevent syntactic and semantic errors [P1] [P3] [P38], while Unhelkar [P38] also provides checklists for
verifying the correctness of UML diagrams. Berenbach proposes enforcing some of the conventions by tools
[P3]. Performing inspections [P15] [P22] [P26] and applying model analysis techniques [P40] are also proposed for
quality assurance of models.
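Such conventions can also be checked mechanically. As a minimal illustration (the rule set and model representation below are invented for this sketch and not taken from any primary study), a script might flag elements that violate hypothetical naming rules:

```python
import re

# Hypothetical conventions in the spirit of [P1] [P3] [P38]:
# class names are UpperCamelCase, attribute names are lowerCamelCase.
CLASS_NAME = re.compile(r"^[A-Z][A-Za-z0-9]*$")
ATTR_NAME = re.compile(r"^[a-z][A-Za-z0-9]*$")

def check_conventions(model):
    """Return a list of naming-convention violations for a toy model.

    `model` maps class names to lists of attribute names.
    """
    violations = []
    for cls, attrs in model.items():
        if not CLASS_NAME.match(cls):
            violations.append(f"class '{cls}' is not UpperCamelCase")
        for attr in attrs:
            if not ATTR_NAME.match(attr):
                violations.append(f"attribute '{cls}.{attr}' is not lowerCamelCase")
    return violations

model = {"Customer": ["name", "Address"], "order_item": ["price"]}
for violation in check_conventions(model):
    print(violation)
```

A tool enforcing such rules, as Berenbach suggests, removes the dependence on modellers remembering checklists.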
Regarding formal approaches and automation, Giese and Heldal [P10], Staron et al. [P34] and van Der
Straeten [P36] propose using OCL constraints in order to remove semantic ambiguities from models and
prevent wrong usage of elements. Using stereotypes [P34] and defining formal semantics for elements (for
example in a DSML) are also techniques that remove the ambiguity of informal semantics and thus improve
semantic correctness. McUmber and Cheng introduce a framework for formalizing a subset of UML
diagrams by mapping between metamodels describing UML and the formal language, enabling the use of
model checking and simulation tools, for example SPIN, to check correctness and other quality goals
such as consistency [P24].
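The gain from such a mapping is that exhaustive, tool-supported checks become possible. As a toy illustration of the kind of exhaustive state exploration a model checker like SPIN automates (the transition system here is invented, and real model checkers work on far richer formalisms than this sketch):

```python
def reachable_states(initial, transitions):
    """Exhaustively explore a finite transition system, the kind of
    analysis a model checker automates on formalized UML models."""
    seen, stack = {initial}, [initial]
    while stack:
        state = stack.pop()
        for target in transitions.get(state, []):
            if target not in seen:
                seen.add(target)
                stack.append(target)
    return seen

# A toy state machine; states with no outgoing transitions are
# potential deadlocks, one simple correctness property to check.
transitions = {"idle": ["busy"], "busy": ["idle", "error"]}
deadlocks = [s for s in reachable_states("idle", transitions)
             if not transitions.get(s)]
print(deadlocks)  # ['error']
```

The point of formalization is precisely that such properties can be verified over *all* reachable states rather than sampled by review.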
Completeness. Regarding the modelling process, some research hints that single-diagram approaches
yield fewer defects related to missing elements than multi-diagram approaches [P27].
Conventions provided as guidelines, checklists or best practices in modelling processes are also proposed
to improve completeness of models, as in [P1] [P3] [P25] [P38]. Examples are to be found in Table III.
Tool support is discussed by Berenbach, who proposes defining completeness rules that can be
programmatically verified [P4], while metrics for completeness evaluation are proposed by Berenbach [P4]
and by Lange et al. [P21], who propose defining a special task for completeness evaluation that is performed
based on metrics collected from models.
Regarding formal approaches and automation, generating models from other models is a technique for
achieving completeness with an example given in [P31] where object models are generated from process
models with all the necessary states.
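The idea can be sketched in a few lines (the process and object representations below are invented for illustration and are far simpler than the approach in [P31]):

```python
def states_from_process(steps):
    """Sketch of model-to-model generation: derive a state machine for a
    business object from an ordered list of process steps, so that no
    necessary state is forgotten by hand."""
    # One state per process step, with transitions between consecutive steps.
    transitions = [(a, b) for a, b in zip(steps, steps[1:])]
    return {"states": list(steps), "initial": steps[0], "transitions": transitions}

order_lifecycle = states_from_process(["created", "paid", "shipped", "delivered"])
print(order_lifecycle["transitions"])
```

Because the object model is derived rather than drawn, its completeness with respect to the source model is guaranteed by construction.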
Consistency. Regarding the modelling process, conventions regarding consistency between diagrams
are proposed in [P1] [P3] [P5] [P25] [P38], with examples given in Table III. Gavras et al. propose
defining a traceability strategy in the modelling process, which refers to the ability to establish
relationships between model concepts that represent the same concept in different models [P9].
Regarding formal approaches and automation, consistency has received more attention in literature than
correctness and completeness since it may be improved to a large extent by using tools and formal
languages; for example, inter- and intra-consistency rules may be checked by OCL evaluator tools [P12].
Consistency constraints may also be defined in a DSML [P39], and consistency conditions can be checked
during transformations. By using a formal language, consistency of UML diagrams can be transformed to
well-formedness of the language specifications [P11] [P23]. Finally, Haesen and Snoeck propose using
tools that implement the observer pattern and generate the necessary elements when diagrams are updated,
keeping diagrams consistent with one another [P11].
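To make the notion of an inter-diagram consistency rule concrete, the sketch below hand-codes one such rule in Python (the model representation is invented for this sketch; in practice, OCL evaluators as in [P12] check rules like this against a model repository):

```python
def check_interdiagram_consistency(class_diagram, sequence_messages):
    """Hand-coded inter-diagram consistency rule: every message in a
    sequence diagram must target an existing class and a declared operation.

    `class_diagram` maps class names to sets of operation names;
    `sequence_messages` is a list of (receiver_class, operation) pairs.
    """
    problems = []
    for receiver, operation in sequence_messages:
        if receiver not in class_diagram:
            problems.append(f"class '{receiver}' missing from class diagram")
        elif operation not in class_diagram[receiver]:
            problems.append(f"operation '{receiver}.{operation}' is not declared")
    return problems

classes = {"Order": {"ship"}}
messages = [("Order", "ship"), ("Order", "cancel"), ("Invoice", "send")]
for problem in check_interdiagram_consistency(classes, messages):
    print(problem)
```

Tool support matters here because such rules must be re-evaluated on every model change, which is impractical to do by hand.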
Comprehensibility. Regarding the modelling process, and when it comes to comprehensibility by
humans, we found conventions for naming and structuring of models [P1] [P3] [P25] (since information is
easier to locate in a properly named and well-organized model), aesthetics of diagrams [P1] [P7] [P28]
[P29], closeness to users' views [P3] and to the problem domain [P25], and documentation of models [P1]
[P3]. Examples can be found in Table III. Finally, experiments on the single-diagram approach indicate
that such models are sometimes easier to understand than models in the multi-diagram approach, where
information is spread over several diagrams; comprehensibility is decreased, however, if answering a question
requires information on only one aspect that may be expressed in a single UML diagram [P27] [P30].
Regarding formal approaches and automation, while using formal languages in combination with UML
is proposed to improve consistency, it is also noted that formal models are more difficult for humans
to read; it is therefore better to generate them from informal ones, and constraints should first be written in a
human-readable form [P34]. On the other hand, formality allows simulation and analysis, which may
improve comprehensibility by humans [P22] [P24] [P34]. Using concepts close to the domain, as in a
DSML, is also expected to improve the comprehensibility of models, especially for non-technical experts
[P13] [P33] [P39], and some experiments suggest that stereotyped models are understood better [P16].
Comprehensibility by tools is achieved by giving models a formal or precise semantics, for example
through stereotypes and DSMLs; such semantics is of course also required during transformations.
Some research suggests that problems with comprehensibility by users may indicate poor design [P7]
[P8], while good design also improves comprehensibility. We have not covered design quality in this
review, but this observation is worth keeping in mind.
Confinement. Regarding the modelling process, Berenbach [P3], Mitchell [P25] and Ambler [P1]
have proposed conventions that impact confinement, such as using the right modelling artefacts, keeping
an analysis model free of implementation and design decisions, focusing on correct separation of
concerns (also in [P5]), identifying scope as early as possible, focusing on domain concepts and
starting the modelling process from them, clearly separating models and what they should cover, and
modelling from different views. Many of these conventions may be integrated in a model-based
development process, for example Gavras et al. emphasize defining an activity for selecting modelling
languages and tools appropriate for the domain and the needs of modelling [P9].
Regarding formal approaches and automation, sometimes developing a DSML is the right solution for
an organization [P13] [P33] [P39] since it includes only elements and diagrams necessary for the domain.
Changeability. Maintainability of models and updating them is a major challenge, as it is with
code, especially when models get large and complex, as in many industrial applications. However,
maintenance and evolution of models has not received much attention so far.
Regarding the modelling process, maintenance of models is discussed in Agile Modelling (AM), which has
recommendations such as keeping a single source of information, creating simple content and depicting models simply.
Regarding formal approaches and automation, Jonkers et al. discuss that modelling in a DSML brings
the modelling discipline much closer to domain experts and at the same time enables simpler maintenance
and evolution of such models, which contributes to the agility of the model-driven development [P13]. On
the other hand, Safa writes that updating the metamodel of a DSL and the associated tools with a limited
user base is costly, and as a result the language may lag behind changes in the domain [P33]. We also
think that generating models from other models is a practice that allows keeping models in sync with one
another when changes in one happen.
Many practices regarding maintenance of code also apply to models. For example, models that are well-organized
and well-documented are easier to maintain and update. A model that is easier to
communicate may reduce the "mythical man-month", the time normally taken to learn a new
system, which will improve the cost effectiveness of maintaining systems.
Figure XI summarizes the literature on the impact of the practices on the 6C goals; continuous lines
indicate positive impact, while dashed lines indicate that both positive and negative impact is observed. As depicted in Figure XI, the
proposed practices often impact several quality goals:
- Improving the modelling process may impact all quality goals if proper activities are included.
- Coding conventions or styles have earlier been promoted to improve the quality of code. In order
to improve the uniformity of models and prevent defects, some authors advocate the use of
modelling conventions. Styles, rules and conventions are a kind of best practice proposed to
improve all aspects of model quality and should be included in a model-based development process.
- Using fewer views is proposed to reduce the complexity of modelling and improve
completeness. It may impact comprehensibility by humans positively or negatively.
- Using a formal modelling language improves correctness and consistency of models and may also
improve comprehensibility by humans if models can be simulated. Formal models are, on the
other hand, more difficult for humans to read.
- DSMLs or UML profiles allow developing models with the vocabulary of the domain, which is more
comprehensible for humans. Other advantages are developing models that are formal, more
concise, correct and suitable for code generation. However, updating the language and editors is
difficult, and models should be updated with changes in the metamodel.
- Model-based generation by transformations improves consistency between artefacts, and a
transformation tool can check models for their correctness and completeness during transformation.
Figure XI. The impact of practices on model quality goals. Continuous lines indicate positive impact while dashed
lines indicate that the impact may be positive or negative
In this section we present some observations regarding tool support and quality assurance techniques from
the covered literature. The main methods for detecting errors or assessing quality are:
Inspections: several quality goals such as consistency, completeness and confinement can be
assessed by means of manual human inspections, as proposed in [P3] [P22] [P26], also using
checklists [P38]. Both modelling experts and non-technical experts should be involved in
inspections, especially for evaluating comprehensibility and confinement aspects. The OORT
(Object-Oriented Reading Techniques) are an example of systematic inspection
techniques for inspecting ("comparing") UML diagrams with each other for completeness and
consistency (vertical and horizontal).
Tools for error detection: some authors have developed tools that check models for inconsistency,
incompleteness and incorrectness problems such as naming conflicts, missing associations and
incorrectly defined interfaces. Examples are the DesignAdvisor tool [P3] and SDMetrics [P20].
Tools for model checking based on formal approaches (adhering to rules and constraints, for
example related to consistency and requirement goals) are covered as well, such as the SPIN
model checker.
Collecting metrics from models: Berenbach proposes collecting metrics from models to evaluate
their completeness [P4], and Lange et al. have developed the MetricViewEvolution tool for
collecting metrics and visualizing them [P21]. Saeki and Kaiya propose defining metrics at the
metamodel level [P32].
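A metric-based evaluation can be as simple as counting under-specified elements. A minimal sketch (the concrete metrics and model format are invented here, not taken from [P4] or [P21]):

```python
def completeness_metrics(classes):
    """Toy completeness metrics: the share of classes that declare at
    least one attribute and at least one operation.

    `classes` maps class names to dicts with 'attributes' and 'operations'.
    """
    n = len(classes)
    with_attrs = sum(1 for c in classes.values() if c["attributes"])
    with_ops = sum(1 for c in classes.values() if c["operations"])
    return {"attribute_coverage": with_attrs / n,
            "operation_coverage": with_ops / n}

model = {
    "Order": {"attributes": ["total"], "operations": ["ship"]},
    "Invoice": {"attributes": [], "operations": ["send"]},
}
print(completeness_metrics(model))
```

Collected over time, such numbers make the completeness-evaluation task proposed by Lange et al. repeatable rather than ad hoc.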
For evaluating the usefulness of practices, we found examples of:
Experiments: controlled experiments are often performed in academia, for example related to
using UML profiles [P16] or the single-diagram approach [P27].
Case studies: some industry cases described in studies are actually pilot studies performed in
order to evaluate the usefulness of an approach in a specific context, for example in [P26] and related
to DSMLs [P33] [P39].
Feedback from practitioners: some studies have systematically collected feedback in industrial
cases and analyzed it, such as in [P34].
This article covered mainly two issues: identifying model quality goals (the 6C goals) important in model-based
software development and giving an overview of practices proposed in literature to improve model quality,
both based on the results of a systematic literature review. The question facing us is whether the 6C
classification is more useful than other classifications defined in [P15] [P22] [P26] [P38], such as dividing
quality goals into syntactic, semantic and pragmatic quality. When trying to define what exactly syntactic,
semantic or other quality goals mean in the above studies, the authors often tend to use the same
terminology covered by the 6C goals; i.e., correctness, completeness, consistency etc. There are other
quality goals for models – such as being simple – that are defined as being comprehensible and easy to
change, and which are thus covered by our classification.
In addition to using a simple terminology, the 6C classification is based on the results of a systematic
review of literature on model quality. In our opinion, using the review results provides relevance. For
example, one may speak of consistency as a semantic quality type, while the term "consistency between
models" is better understood by practitioners and is also used in several studies.
Another question is whether we can join any two goals, or remove any, without significant impact on the
discussion, which we believe is not possible. Other quality goals may be added if necessary, and future
research on the subject is needed.
Quality of models is especially important in MDE:
- Since models are transformed into other models, they should be correct. Otherwise, the principle
of "garbage in, garbage out" applies [P40].
- Model completeness is a prerequisite for transformation, consistency checking and
implementation. Coverage is one of the requirements of completeness; for example, that all use
cases are covered in the implementation.
- Consistency between diagrams becomes important if information from separate diagrams should
be combined for the purpose of understanding or generation. Consistency between models of the
same system is important for keeping them in sync for future evolution.
- Comprehensibility, either by tools or humans, is the main reason for doing any modelling.
- MDE often involves developing several models of the system with different purposes. Thus,
models should include information depending on the purpose.
- Finally, models should be easy to change in order to support evolution and maintenance.
Persuading industry to use models depends on whether modellers can change models easily.
In short, it is hard or impossible to create something complete and correct from something incomplete,
erroneous or inconsistent. Persuading industry to use MDE requires taking away some of the burden of
development by providing tools and methodologies that support developing and maintaining high-quality
models. For example, a recent article on model-based testing states that the ultimate success of the
approach relies on the quality of the models that support it, such as having enough detail.
As the results of this review show, several practices are proposed for improving the quality of models
and some studies also include empirical evidence. Studies on the impact of modelling processes on the
quality of models report positive feedback from industry without providing details. While one may find a
lot of literature on the benefits of software processes in general, the relevant tasks and activities for an
MDE approach should be evaluated in industry cases. Studies discussing the benefits of formal models
and generating models or diagrams from other models or diagrams contained only small examples, while
modelling conventions, domain-specific approaches and the single-diagram modelling approach are
supported by industrial cases and student experiments. However, the evidence from the domain-specific
cases supports the benefits of formal methods to some degree since DSMLs are usually formal.
Some practices may be enforced by CASE tools, for example OCL constraints prevent making wrong
choices and tools can prevent syntactic errors and may keep models consistent with one another if
consistency rules are defined and support for checking is provided. Experiments on applying modelling
conventions have confirmed that tool support is important for reducing the effort spent on modelling when
using conventions [P18]. Using the proposed practices, especially when supported by tools, is the
constructive approach, or quality by construction. The second approach to quality is an analytical one,
discussed in Section 6.2, based on static analysis of models: model checking for formal models, collecting
metrics from models or getting feedback via inspections and interviews. Finally, there is also a third,
usage-based approach: for example, the quality of models may be evaluated from the quality of predictions made from
them if they are used for simulation and prediction. However, the studies covered in this review did not
include any examples of using this approach.
8. Conclusions and Future Work
This article reviewed literature on the quality of models to answer three research questions. The results are
summarized below.
RQ1. What quality goals are defined in literature for models in model-based software development?
We identified six model quality goals relevant for model-based software development; i.e.,
correctness, completeness, consistency, comprehensibility by humans and tools, confinement (as having
precise modelling goals and being restricted to them) and changeability (as being easily extensible and
modifiable). While some of these quality goals such as consistency are studied in depth and solutions are
proposed and implemented in tools, others – such as changeability – are less discussed in the covered
literature.
RQ2. What practices are proposed to achieve or improve the above quality goals? We identified
six practices and divided them into two groups. The first group is related to the modelling process and covers
having a model-based development process, using modelling conventions and the single-diagram
approach. The second group is related to formal approaches and automation and covers formal models,
UML profiles and domain-specific modelling languages, and generating models or diagrams from other
models or diagrams. We discussed the impact of the proposed practices on the 6C goals, with examples
and empirical evidence reported in the covered literature.
RQ3. What types of models and modelling approaches are covered in literature? Most of the covered
studies addressed UML models, mainly in approaches where models play a central role in software development;
i.e., on the right-hand side of the spectrum shown in Figure I. However, even when models are merely
sketches, their quality has gained attention, since high-quality models ease communication between
development teams. We also found literature covering UML profiles and domain-specific languages in the
spirit of model-driven engineering.
Empirical evidence in the covered literature is also included in the article. Modelling conventions and
the single-diagram modelling approach have been the subject of student experiments that confirm some
benefits but question others; for example, the impact of conventions depends on the task and on tool support.
The benefits of a model-based development process and of domain-specific modelling approaches (including
UML profiles) are observed in industrial cases, while formal models and generating models or diagrams
from models or diagrams are mostly discussed by examples, and no empirical evidence was detected in the
covered literature. Additional evidence may, however, be detected by performing a review with focus on
empirical studies.
The main purpose of this article has been to provide definitions and classifications that can be part of a
quality model with focus on model quality. We have developed a tool for visual specification of quality
models, as presented in , where we intend to insert the results of this review. The next challenge in
improving model quality is to select quality goals for a given context and to identify practices that may be
applied in that context. Quality goals vary over the lifecycle of a project and for different types of models;
for example, the required degree of formality and detail varies. Models may also be intermediate or
final products of software development. In short, a model should be "fit for the purpose". Thus, a goal-driven
process for selecting quality goals and practices is proposed, which is the subject of our future work.
Other research gaps are identified as well. While traditional quality assurance techniques such as
inspections and measurement are applicable to models, they should be adapted to modelling purposes,
tasks and artefacts involved. Managing changeability and complexity of large and complex models,
keeping them consistent and verifying quality on the model level are challenges in model-driven
engineering that are not yet properly covered.
Performing literature reviews is time consuming and integrating the results is not easy. The main
benefits are however to provide new insight and identify research gaps. One challenge of this review was
selecting a terminology for classifying model quality goals that is based on the existing work and is
considered useful, without being difficult to understand for practitioners. Since our classification is based
on the terminology used in the reviewed literature, we believe that it provides relevance and
understandability. We must further improve the classification by increasing the breadth of the search for studies,
especially with focus on the quality promises of the model-driven engineering approach. We are involved in
the MODELPLEX project, which has the vision to evolve modelling technologies and tools for complex
system development. In MODELPLEX, an empirical research plan is defined in order to evaluate the
impact of modelling technologies and tools on several attributes such as the productivity of software
developers and the quality of models or generated artefacts. The results of the empirical work will be used to
evaluate the quality impact of model-driven engineering and the usefulness of our classification.
Acknowledgements. This work has been funded by the Quality in Model-Driven Engineering project (cf.
http://quality-mde.org/) at SINTEF and the European Commission within the 6th Framework Programme,
project MODELPLEX, contract number 034081 (cf. http://www.modelplex.org). We thank Dr. Marcela
Fabiana Genero Bocco and Dr. Michel Chaudron for their valuable comments and suggestions.
Appendix I- List of Primary Studies Included in the Review
[P1] S.W. Ambler, The Elements of UML 2.0 Style, Cambridge University Press, 2005.
[P2] M.C. Bastarrica, S. Rivas, P.O. Rossel, Designing and implementing a product family of model
consistency checkers, Proc. Workshop on Quality in Modelling (QiM’07) held at MODELS 2007, 2007,