Bork, D., & Roelens, B. (2021). A technique for evaluating and improving the semantic transparency of modeling language notations. Software and Systems Modeling, 20(4), 939–963. https://doi.org/10.1007/s10270-021-00895-w
Keywords: Modeling and Simulation; Software; Modeling language notation; Concrete syntax; Semantic transparency; Empirical evaluation
Abstract:
The notation of a modeling language is of paramount importance for its efficient use and the correct comprehension of created models. A graphical notation, especially for domain-specific modeling languages, should therefore be aligned with the knowledge, beliefs, and expectations of the targeted model users. One quality attributed to notations is their semantic transparency, indicating the extent to which a notation intuitively suggests its meaning to untrained users. Method engineers should thus aim for semantic transparency when realizing intuitively understandable notations. However, notation design is often treated poorly, if at all, in method engineering methodologies. This paper proposes a technique that, based on iterative evaluation and improvement tasks, steers the notation toward semantic transparency. The approach can be applied efficiently to arbitrary modeling languages and allows easy integration into existing modeling language engineering methodologies. We show the feasibility of the technique by reporting on two cycles of Action Design Research, including the evaluation and improvement of the semantic transparency of the Process-Goal Alignment modeling language notation. An empirical evaluation comparing the new notation against the initial one shows the effectiveness of the technique.
Research Areas:
Visual Computing and Human-Centered Technology: 10%; Information Systems Engineering: 90%