Archive for category MDD
(cross-posted from blog.conceptworks.eu)
Due to the increasing use of domain-specific languages (DSLs), the declarative style of modeling is quietly spreading among users of MDE tools. Indeed, it is easy to find examples of declarative DSLs, e.g. at DSM Forums or on this blog. There is, however, a group of users among which the declarative style of modeling has not managed to spread – transformation developers. I am not sure if this has something to do with the group itself or with the fact that the majority of today’s transformation definition languages (TDLs) are still more imperative in style (I am aware of QVT Relations and ATL, but these are rather exceptions than the norm).
There are quite a few good reasons why one would consider using a declarative language for transformation definition: reduction of information content in transformation definitions (and hence higher productivity of transformation developers), more agile DSL evolution, transformation definitions as models, higher compatibility with parallel computing, etc.
Today I would like to share some practical results that illustrate reduction of information content due to use of a declarative language.
CHART vs. Java
The following examples are kindly provided by Maarten de Mol and Arend Rensink from the University of Twente. In the CHARTER project, they are working on certifiable transformation technology for the development of safety-critical embedded systems.
Before proceeding to the examples, here are a few relevant highlights of their technology:
- Partially declarative transformation definition language (CHART): based on graph transformation and intended to be usable by Java programmers.
- Transformation compiler (RDT): given a transformation definition written in CHART, it generates an executable implementation in Java. The produced code runs against and transforms user-provided data.
Figures in the rows of the gallery present the transformation rules findRich and addPicture respectively. Figures in the columns show these rules written in CHART and Java. The important Java methods are match() and update(), which are the translations of the similarly named blocks in the CHART rules.
Figures 1-4: Rules findRich and addPicture written in CHART and Java
In Figure 1, a match block counts 10 LOC against 41 LOC in Java (Figure 2), which constitutes a reduction of information content by 75%. In Figure 3, an update block counts 12 LOC against 65 LOC in Java, an 80% reduction.
Both examples show a significant reduction of information content in CHART rules. The reduction is even stronger if one takes into account that the Java implementations also have to address technical concerns that do not exist in CHART rules. In that case the reduction is 92% (13 LOC vs. 160 LOC for rule findRich).
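The actual findRich and addPicture rules are not reproduced here, but the following hypothetical sketch illustrates where the difference tends to come from: a declarative rule states only the pattern to be found (say, every Person whose accounts together exceed a threshold), whereas a hand-written Java match() must spell out the traversal, aggregation and bookkeeping. All names below (FindRichSketch, Person, Account) are illustrative assumptions, not the actual CHARTER/RDT code.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: what a hand-written match() for a hypothetical findRich rule
// ("every Person whose Accounts together exceed a threshold") might look like in
// plain Java. The declarative CHART rule states just the pattern; the Java version
// must spell out traversal, aggregation and bookkeeping. All types are assumptions.
final class FindRichSketch {
    record Account(double balance) {}
    record Person(String name, List<Account> accounts) {}

    static List<Person> match(List<Person> model, double threshold) {
        List<Person> matches = new ArrayList<>();
        for (Person p : model) {                       // explicit traversal
            double total = 0.0;
            for (Account a : p.accounts()) {           // explicit aggregation
                total += a.balance();
            }
            if (total > threshold) {
                matches.add(p);                        // explicit bookkeeping
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        List<Person> model = List.of(
            new Person("poor", List.of(new Account(10.0))),
            new Person("rich", List.of(new Account(600_000.0), new Account(500_000.0))));
        System.out.println(match(model, 1_000_000.0)); // prints only the "rich" person
    }
}
```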
In the experience of another CHARTER partner, who is evaluating CHART/RDT in practice, a CHART transformation definition counted 1024 lines of code against 8000 in Java, an 87% reduction of information content. The author’s own industrial experiences elsewhere with AToM3 GG rules (declarative) and QVT Operational (imperative) agree with the above results as well.
While the exact reduction numbers are certainly arguable, the overall trend in the above experiences is that use of a declarative TDL can result in a dramatic reduction of information content and a manifold increase in development productivity.
Despite industrial successes of MDE (which are often hidden), it is my experience that model-driven methods have a hard time keeping up as organizations evolve. One factor behind this lag is the slow speed of transformation development. Practical industrial experiences such as the above show that declarative languages have the potential to significantly improve the agility of transformation development.
What are your experiences with declarative TDLs and agile language development? Can you share concrete examples or provide references to declarative TDLs?
de Mol, M.J., Rensink, A. and Hunt, J.J. (2012). Graph Transforming Java Data. In: Proceedings of the 15th International Conference on Fundamental Approaches to Software Engineering (FASE 2012), 26-29 March 2012, Tallinn, Estonia. Lecture Notes in Computer Science. Springer.
(cross-posted from blog.conceptworks.eu)
Recently I came across an interesting article by Jordi Cabot, in which he shares a tool vendor’s experience of high customer acquisition costs for MDD tools. In my opinion the main reason for this is that MDE is still not a mainstream methodology and hence the benefits of MDD tools (and what it takes to reach those benefits) are often not clear to customers. This is an issue I have had to deal with in all MDE projects and currently in the CHARTER project, where Luminis is involved in the industrial evaluation of an MDE tool-chain. In this post I would like to look at the problem from the customer perspective.
Figure 1: Customer processes and MDE tools
Given the large number and diversity of MDE tools (actually too many, one may argue), it is impractical to list the added value cited by their vendors. Moreover, added value is by definition subjective to the customer. Instead I would like to step back and place these tools in the customer’s context. For this I will use a simple classification model that partitions all customer processes, and the tools supporting them, into four quadrants (see the figure). There are two dimensions: uniqueness (core/context) and technology (problem/solution). Core represents unique processes that are specific to a customer, in contrast to the generic processes in Context. The customer’s business processes are found in the category Problem. Solution is reserved for the IT or engineering processes supporting the business processes.
The four quadrants in the figure help to reason about the value of customer processes and tools. The most important are the core quadrants, as these define the customer itself. Depending on the customer’s industry, critical processes are found either in the 1st (service-oriented) or the 3rd quadrant (product-oriented). The context quadrants have the least value; here customers often seek to outsource processes or purchase generic off-the-shelf solutions.
The following is a summary of MDE value for customers, based on my personal experiences in the four quadrants:
- Q4 is the hardest to sell. The reason is that here MDE provides a generic solution for a solution to a business problem. In other words, an unknown (and therefore regarded as risky) MDE tool has to compete against multiple possible solutions that the customer is more comfortable with. These include alternative mainstream technologies, in-house frameworks and run-time platforms, and even business changes that may cut the ground from under whatever foothold a tool vendor has in the customer’s door. A warm reception by end users who love tinkering with their own ad hoc productivity solutions is not guaranteed (not to mention their concern about becoming redundant). Moreover, any MDE gains achieved in Q4 are likely to be watered down by waste in customer processes elsewhere. Industrial practice shows that successful MDE application results in at most 30%-40% faster overall development. These gains are among the lowest in MDE (compare them with the up to 10x acceleration reported in Q1), and they are further reduced in relative terms when contrasted with alternative solutions. Last but not least, the value of the initial gains is the hardest to demonstrate (you may discover that your stakeholders are not even aware of the real business problems), and the same stakeholders have the weakest influence over budget. It really looks like the odds are against MDE tools in this quadrant today. Still, organizations without core processes, e.g. startups or customers developing one-of-a-kind systems, may be interested in MDE tools here.
- Q2: MDE tools are typically not found here.
- Q3: This is the place of unique processes and numerous stakeholders from various domains: engineering disciplines (software development being one of them), architecture and system design, to name a few. Here MDE tools, together with process optimizations, can produce dramatic overall improvements (especially in product-oriented industries), which can be measured and linked to business needs recognizable by the customer. However, processes here are often knowledge-intensive and free-form, which makes the road towards these improvements longer and more complex compared to Q1, where processes are more structured and information-intensive. Off-the-shelf fixed MDE solutions are at a disadvantage here.
- Q1 is the easiest for customer acquisition. Tool vendors can produce proofs of concept in really short sprints (typically up to two weeks), deliver never-before-seen results (e.g. up to 10x faster development), directly relate these results to real business needs (usually falling under the business agility umbrella, but also including the more common time-to-market and cost reduction benefits) and have the shortest lines to budget holders. It is no wonder that vendors in this quadrant report numerous successes and are even able to grow under the currently declining economic conditions.
So what kinds of MDE tools are found in these four quadrants? Figure 1 illustrates my attempt to classify the tools. (This classification provides sufficient coverage but is by no means exhaustive.) The categories are:
- CIM tools are fixed business modelers with interpretation or code generation capabilities. Modifier CIM (Computation Independent Model) stresses that these tools focus on business solutions and isolate users from developing source code. Well-known examples come from Mendix and OutSystems.
- OTS are off-the-shelf applications that support generic business processes. Examples include CRM, ERP, Office and Simulation packages.
- DSM tools are language workbenches that enable customers to define their own domain-specific modeling (DSM) and transformation processes. Typical tools in this group are MetaEdit+, Obeo Designer, oAW, AToM3, etc. Note that while application of DSM tools in quadrants 1 and 2 is not unimaginable, given the adaptability of such tools, there is not much “meta-” variability there and too much integration complexity to justify such application.
- PIM tools are fixed UML modeling tools with (generic) code generators for one or more software technology platforms (e.g. .NET, Java, PHP). The modifier PIM (Platform-Independent Model) indicates that users are at best isolated from the details and differences of the platforms. Numerous examples can be found here.
Since I have mentioned the CHARTER project, it is natural to place its tool-chain (aka CTC) in Figure 1 as well, especially because it presents an exception to the above experiences. CTC provides support for Java source code verification, testing and compilation, which are generic processes, thus placing CTC among the PIM tools in Q4. However, due to the restrictions that safety-critical systems impose on those processes, there are currently no alternatives available. This, coupled with the huge cost of errors in such systems, significantly increases the tool-chain’s value for its potential customers.
In conclusion, I agree with Jordi that the cost of customer acquisition is high. However, the situation is not the same for all MDE tool vendors. Those behind CIM tools enjoy impressive interest from business customers, and their successes are responsible for the current breakthrough in the popularity of MDE. Life is most difficult for PIM tool vendors, as too many factors may work against their tools in the customer context; moreover, they face competition from more versatile DSM tools. Last but not least, DSM tools can provide impressive benefits in Q3 (especially in product-oriented industries), but require that the customer first commits to learning its own, often knowledge-intensive, processes. In my opinion, this quadrant and DSM tools are the stage for the next MDE breakthrough.
A necessary disclaimer is that the above conclusions are based on my own limited experiences. If you are an MDE tool vendor, customer or MDE professional, I would love to hear your opinion!
This month, Luminis has started development of a surveillance use case. The purpose of the case is industrial assessment and validation of tools and technologies developed in the “Critical and High Assurance Requirements Transformed through Engineering Rigor” project (CHARTER). The ultimate goal of CHARTER is to ease, accelerate, and cost-reduce the certification of embedded systems. The CHARTER tool-suite employs real-time Java, Model Driven Development (MDD), rule-based compilation and formal verification. The coming series of articles will describe evaluation experiences in the surveillance use case.
The CHARTER project includes user partners from four key industries: aerospace, automotive, surveillance and medical, each of which develops embedded systems that require high assurance or formal certification in order to meet business or governmental requirements. The four user partners will each validate the CHARTER tools and methodology using industrial applications and actual development scenarios, which will provide feedback for the project and ensure the tools and technologies perform as expected, and deliver the expected improvements in embedded systems development. As part of the evaluation process, metrics will be used to quantify industrial experiences in terms of development effort, cost savings, verification time, etc., to document for others the benefits achieved.
The CHARTER project was established to improve the software development process for developing critical embedded systems. Critical embedded software systems assist, accelerate, and control various aspects of society and are common in cars, aircraft, medical instruments and major industrial and utility plants. These systems are critical to human life and need to be held to the highest standards of performance through formal certification procedures. Improving the quality and robustness of these systems is paramount to their widespread adoption.
While effort reduction and quality increase are both commonly recognized benefits of MDE, the former in particular has become its trademark, thanks to numerous generative uses in model-driven software development (MDSD). Examples include the generation of code and configurations from models written in UML, DSLs and XML.
Figure: The effort in an MDE approach with partial manual coding (adapted from [1])
Generative MDE automates well-defined, routine activities. An effective metric for depicting the economic benefit thereof is effort. The figure above illustrates the effort reduction due to automation and reuse in an MDE approach with partial manual coding. Of course, generative MDE improves quality as well: error reduction, enforced architecture conformance and up-to-date documentation are common factors that have a positive effect on software quality. But usually these are considered the icing on the cake that is effort reduction. In my experience this perspective on the economic value of MDE is common among both customers and MDE professionals. The perspective can be summarized as “the same with less”.
More with the same
Recently a client tasked me, together with its domain experts, to assess the benefits of applying MDE to a difficult process within the organization. Having analyzed the before and after situations, we came up with an estimated economic benefit expressed in effort savings. The estimate was hard to quantify, but “should” have been OK. Although I wanted to share this optimism, I felt that in practice the effort saving would be negligible, if not negative. This paradox was due to the fact that the largest activity in the problem domain was inherently creative and exploratory.
In the figure, the output of a single exploration in this activity is shown as an intermediate result, corresponding to line ad. As the figure suggests, code generation directly from this output is not possible (this happens further downstream in development). You may have noticed that the modelling curve rises more steeply towards point a. This rise occurs because modelling requires an increased level of domain understanding, and more information is needed by semantically rich operations such as simulation, verification, code generation (eventually), etc. On the other hand, the figure shows an effort reduction indicated by distance cd, which is the result of providing end users with proper abstractions, faster access to the right knowledge, separation of concerns, DRY modelling, and maintained consistency and integrity.
While working efficiency per exploration is likely to increase (compare ab and cd), the leading concern is the quality of the output. Here the benefits are early detection of design errors, deep exploration of design choices, better communication and documentation, and maximized reuse of domain-specific platforms in further development. Moreover, the domain experts noted that any saved effort would be re-invested in more alternative explorations in search of a more optimal output. This increased number of explorations is likely to balance out any savings due to higher efficiency.
With these insights, the economic benefits were expressed with quality metrics and linked to different business goals than initially thought.
The described MDE assessment targets a highly creative engineering activity that explores alternative choices. In the extreme case, the main benefit is not effort reduction but increased product and process quality. The icing on the cake is that processable models can open opportunities for generative uses as well.
In my experience, such quality-driven cases, and certainly less extreme ones, are not exotic. In recent years, quite a few MDE projects I have participated in had benefits strongly linked to quality improvement. What are your MDE experiences with creative activities? What were the economic benefits and how were they conveyed?
[1] Thomas Stahl, Markus Voelter, Krzysztof Czarnecki. “Model-Driven Software Development: Technology, Engineering, Management”. Wiley, 1st edition, May 2006.
Blogs by Johan den Haan, Stefano Butti and Jordi Cabot raised interesting discussions about code generation (CG) and model interpretation (MI). One observation I took from these discussions is that MI is still little known. Previously I demonstrated operation of a custom-made model interpreter for a so-called weighbridge domain. Today I would like to share my experience of building this interpreter in a model-driven way.
Two main choices underpin the process and technology used to develop the interpreter:
- The execution semantics of the interpreter is defined within the problem domain itself (the weighbridge in this case), without translating it to another domain (e.g. .NET or Java) as is the case with CG. Such a definition of semantics is also known as operational semantics. The advantage is a reduction of development complexity: of the at least two domains needed for CG, only one, the more abstract one, is sufficient.
- The operational semantics is defined within an MDE framework. This enables customization of the modeling language for problem domains other than that of the weighbridge example. Moreover, transformation capabilities are used to define the operational semantics.
Figure 1: Domain-specific, nested interpretation (DSNI) MDE framework
Figure 1 shows the MDA framework [1] after it has been adapted to reflect the above-mentioned choices. (If you are confused about the difference between MDA and MDE, you might find this article useful.) In contrast to MDA, there is no PIM or PSM model, but a single computation independent model (CIM) written in a DSL. The CIM is both the source and the target of the Transformation Tool, which carries out a transformation classified as same language, same model. The Transformation Definition defines the operational semantics. It is not important whether the Transformation Definition Language (TDL) extends the Metalanguage, as in MDA, or is customizable by means of a meta-specification. Therefore the TDL is omitted from the framework and TDL selection criteria are defined instead (see below). Finally, a new concept, System Context, is connected to the Transformation Tool. This is due to the fact that the interpretation, as a system, exhibits external behavior through communication with other systems.
This approach can be described as nested interpretation, where a domain-specific interpreter is executed (nested) by a generic interpreter. From this perspective, the Transformation Tool assumes the role of the generic interpreter, and the execution of the Transformation Definition fills the role of the domain-specific interpreter.
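To make the division of roles concrete, here is a minimal sketch of nested interpretation in Java, under assumed (hypothetical) interfaces rather than the actual AToM3 or RDT APIs: the generic engine only knows how to keep applying rules to a model, while the domain-specific semantics lives entirely in the rules it is given.

```java
import java.util.List;
import java.util.Optional;

// Minimal sketch of nested interpretation with assumed interfaces: the generic engine
// plays the role of the Transformation Tool, and the rules it is handed play the role
// of the domain-specific interpreter. This is not the AToM3 or RDT API, only an
// illustration of the division of roles.
interface Rule<M> {
    Optional<M> apply(M model);            // a rewritten model if the rule matched, else empty
}

final class GenericEngine<M> {
    /** Generic interpreter: repeatedly applies the first applicable rule until none matches. */
    M run(M model, List<Rule<M>> domainRules) {
        boolean progress = true;
        while (progress) {
            progress = false;
            for (Rule<M> rule : domainRules) {
                Optional<M> next = rule.apply(model);
                if (next.isPresent()) {
                    model = next.get();    // a domain-specific step has been executed
                    progress = true;
                    break;                 // start over from the first (highest-priority) rule
                }
            }
        }
        return model;
    }
}
```

In the weighbridge case described below, the model would be the CIM graph and the rules the GG productions of Table 1; the engine itself never needs to know anything about weighbridges.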
TDL Selection Criteria
Selection criteria for transformation definition language are:
- declarative modeling
- support for domain-specific notation
These criteria help to reduce development complexity and improve communication with problem domain experts.
Selected MD Technology
AToM3 is a free language workbench written in Python and under development at the Modelling, Simulation and Design Lab (MSDL) in the School of Computer Science of McGill University. The workbench closely matches the DSNI framework and meets the TDL selection criteria.
In AToM3, DSLs and models are described as graphs. From a language specification written in the metalanguage (ER formalism), AToM3 generates a tool to visually manipulate (create and edit) models written in the specified DSL. Model transformations are performed by graph rewriting. The transformations themselves can thus be declaratively expressed as graph-grammar models. However, AToM3 provides no communication infrastructure as needed by the framework.
Proof of Concept
As a demonstration, a language specification for the weighbridge domain was defined (see sections domain and weighbridge DSL here) and graph rewriting was used to model the operational semantics (see below). The source code of AToM3 itself, being written in Python, was extended to support web services for communication purposes.
As a blueprint for the operational semantics of the interpreter, we took πDemos [2], a small process-oriented discrete event simulation language. There are a number of πDemos events that change the state of a weighbridge system. For each such event, [2] defines the transitions induced on the system state. While the original work used a functional programming language, this work uses graph rewriting, and a graph grammar (GG) rule is defined per event.
| Priority | GG rule (event) | Description |
| 50 | importProcess | Adds an external vehicle to EL |
| 25 | removeProcess | Deletes a vehicle that has completed its todos (events) |
| 40 | newR | Creates a new weighbridge |
| 40 | decP | Creates a new vehicle class |
| 40 | newP | Creates a vehicle from a vehicle class |
| 40 | getR | Acquires a non-busy weighbridge |
| 40 | blockProcess | Blocks a vehicle acquiring a busy weighbridge |
| 40 | promoteProcess | Unblocks a delayed vehicle |
| 40 | useR | Moves a vehicle on a weighbridge until service is complete |
| 30 | releaseResource | Moves a served vehicle from a weighbridge to EL |
| 41 | putR | Releases an occupied weighbridge |
Table 1: Graph grammar rules of weighbridge events
Table 1 lists the minimum set of required events and their corresponding GG rules. The execution of these rules needs to be globally orchestrated through proper sequencing. The rules, together with the execution sequencing, form an operational semantics model of the interpreter.
For a complete description of the model, please refer to [3]. In the following, we present a detailed description of an example rule, followed by the execution sequencing model.
Example GG Rule
a) Left-hand side (LHS)
b) Right-hand side (RHS)
Figure 2: Subgraphs of the promoteProcess rule
Rule promoteProcess releases a busy weighbridge (the bluish box in Figure 2a) that delays at least one vehicle (the yellow box labelled 5). In the new state, the weighbridge remains busy, and the blocked vehicle (5) is removed from the head of queue Delay and inserted into waiting queue EL.
The rule is executed if:
- The left-hand side (LHS) shown in Figure 2a is matched in the host graph (the CIM model).
- The associated condition is true: the weighbridge in the LHS is the one referred to by the imminent event putR (a todo) in the body (a todo list) of the first vehicle (labelled 21) in queue EL.
If the above holds, the matched part of the CIM model is substituted with the right-hand side (RHS) shown in Figure 2b. Note that the new objects are labelled 10, 11 and 13. The entities and relationships in the RHS are initialized as follows:
- Objects copied from the LHS keep all their properties.
- The imminent event putR (a todo) of the current vehicle (21) is completed.
- All properties of the blocked vehicle (5) are copied to vehicle (10).
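Read operationally, applying the rule is a match-check-rewrite step. The sketch below restates that step in plain Java; the names Vehicle, Weighbridge, EL, Delay and putR follow the text above, but their Java shape (queues and a todo list of event names) is an assumption made for illustration, not the structure of the actual AToM3 host graph.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical restatement of promoteProcess in plain Java. Vehicle, Weighbridge,
// EL and Delay follow the names used in the text; their Java shape is an assumption.
final class PromoteProcessSketch {
    static class Weighbridge {
        boolean busy = true;
        final Deque<Vehicle> delay = new ArrayDeque<>();    // queue Delay: blocked vehicles
    }
    static class Vehicle {
        final Deque<String> todos = new ArrayDeque<>();     // body: the imminent event is the head
        Weighbridge target;                                  // weighbridge referred to by putR
    }

    /** Applies promoteProcess if LHS and condition match; returns true when the model changed. */
    static boolean promoteProcess(Deque<Vehicle> el) {
        Vehicle current = el.peekFirst();                    // first vehicle in EL (label 21)
        if (current == null || !"putR".equals(current.todos.peekFirst())) {
            return false;                                    // condition: imminent event must be putR
        }
        Weighbridge wb = current.target;
        if (wb == null || !wb.busy || wb.delay.isEmpty()) {
            return false;                                    // LHS: a busy weighbridge delaying a vehicle
        }
        current.todos.removeFirst();                         // putR of the current vehicle is completed
        Vehicle unblocked = wb.delay.removeFirst();          // blocked vehicle (5) leaves queue Delay...
        el.addLast(unblocked);                               // ...and is inserted into EL (position assumed)
        return true;                                         // the weighbridge itself remains busy
    }
}
```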
The execution sequencing is based on the next-event approach: the next event to execute is always the imminent event in the body of the current vehicle. Informally, the operational semantics of the execution sequencing is as follows: if EL is empty, the interpreter idles until at least one vehicle is inserted into EL. Such a vehicle becomes the current one. If the body of the current vehicle is empty, then it is removed from EL and EL is examined again. Otherwise, the interpreter executes the GG rule corresponding to the imminent event of the current vehicle, and EL is examined again. Note that whenever the interpreter is idle, EL is being updated with new vehicles that may meanwhile have arrived from the system context.
The execution sequencing is implemented by organizing the GG rules into groups, each group having its own base priority. These groups, in descending order of priority, are: vehicle removal, weighing activities and vehicle arrival. Within a group, each rule is assigned a relative priority. If pattern matching of two or more rules within a group is deterministic on the basis of their LHSs and conditions, then these rules can share the same priority level. Example rule priorities are given in Table 1.
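The following sketch restates this next-event loop in Java over assumed (hypothetical) types: EL is a queue of vehicles, each vehicle carries its body as a todo list of event names, and each event name maps to a step standing in for the corresponding GG rule. In the real interpreter a rule also rewrites the rest of the host graph and at least consumes the imminent todo, which is what the loop below relies on to make progress.

```java
import java.util.Deque;
import java.util.Map;

// Sketch of the next-event sequencing described above, with assumed types. A Step
// stands in for one GG rule of Table 1; executing it is assumed to rewrite the model
// and, at minimum, consume the imminent todo of the current vehicle.
final class NextEventSketch {
    interface Step { void execute(Vehicle current); }

    static class Vehicle {
        final Deque<String> todos;                          // body: the imminent event is the head
        Vehicle(Deque<String> todos) { this.todos = todos; }
    }

    static void run(Deque<Vehicle> el, Map<String, Step> rules) {
        while (!el.isEmpty()) {                             // with an empty EL the interpreter would idle
            Vehicle current = el.peekFirst();               // head of EL becomes the current vehicle
            if (current.todos.isEmpty()) {
                el.removeFirst();                           // completed vehicle leaves EL
                continue;                                   // examine EL again
            }
            String imminent = current.todos.peekFirst();    // next event = imminent event of current
            Step rule = rules.get(imminent);
            if (rule == null) {
                break;                                      // unknown event: stop in this sketch
            }
            rule.execute(current);                          // the GG rule rewrites the model
        }                                                   // ...and EL is examined again
    }
}
```

Note that dispatching purely by event name glosses over the priority-based conflict resolution: in the AToM3 implementation the base and relative priorities of Table 1 decide which rule may fire when several LHSs match the same state.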
The demonstrated development approach is characterized by a very high level of abstraction, direct involvement of problem domain experts and the absence of software development. All these factors contribute to fast development times: the lead time of this one-man project, including research and development, was 3 weeks. Admittedly, tests of the produced model interpreter showed a noticeable performance penalty due to 1) repurposing of an MD technology that was not designed for use as an interpreter and 2) the overhead introduced by nested interpretation. In my opinion there is much room for performance improvement, and I am wondering if MDE can prove useful here again. An important message from this experience is that model interpretation does not have to be the prerogative of big commercial tools and can get closer to code generation in terms of accessibility.
[1] Anneke G. Kleppe, Jos Warmer and Wim Bast. “MDA Explained: The Model Driven Architecture: Practice and Promise”. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, April 2003.
[2] Graham Birtwistle and Chris Tofts. “An operational semantics of process-oriented simulation languages: Part 1 πDemos”. ACM Transactions on Modeling and Computer Simulation, 10(4):299–333, December 1994.
[3] Andriy Levytskyy. “Model Driven Construction and Customization of Modeling & Simulation Web Applications”. PhD thesis, Delft University of Technology, Delft, The Netherlands, January 2009.
In MDD, explicit knowledge of the domain is crucial for the successful development of domain-specific modeling languages (DSMLs). Yet many starting DSL developers lack the skill of domain knowledge conceptualization. The main symptoms are difficulty in coming up with good language concepts and struggling with levels of abstraction.
While language design remains an art, there are a number of paradigms, techniques and guidelines that can make the creation of DSLs easier. These aids are the core of the DSL design tutorial developed at Luminis Software Development.
The tutorial was given for the first time during the PPL2010 conference, which took place on November 17 & 18 at Océ R&D in Venlo, NL. A small group of participants learned the basics of domain analysis, participated in domain definition and implemented a simple metamodel of their own. The general feedback was very positive.
The slides for the tutorial can be downloaded here from the Bits&Chips website.