Tagungsband. Dagstuhl-Workshop MBEES: Modellbasierte Entwicklung eingebetteter Systeme VII


Tagungsband

Dagstuhl-Workshop MBEES: Modellbasierte Entwicklung eingebetteter Systeme VII
(Model-Based Development of Embedded Systems)

fortiss GmbH, Guerickestr., München

Organizing Committee

Holger Giese, Hasso-Plattner-Institut an der Universität Potsdam
Michaela Huhn, TU Clausthal
Jan Philipps, Validas AG
Bernhard Schätz, fortiss GmbH

Program Committee

Dr. Mirko Conrad, The MathWorks GmbH
Prof. Dr. Holger Giese, Hasso-Plattner-Institut an der Universität Potsdam
Dr. Michaela Huhn, TU Clausthal
Dr. Hardi Hungar, OFFIS
Prof. Dr. Stefan Kowalewski, RWTH Aachen
Ulrich Nickel, Hella KGaA Hueck & Co.
Dr. Oliver Niggemann, dSPACE GmbH
Jan Philipps, Validas AG
Dr. Ralf Pinger, Siemens AG
Prof. Dr. Bernhard Rumpe, RWTH Aachen
Dr. Bernhard Schätz, fortiss GmbH
Wladimir Schamai, EADS
Prof. Dr.-Ing. Birgit Vogel-Heuser, TU München
Prof. Dr. Albert Zündorf, Universität Kassel

Table of Contents

Delta Modeling for Software Architectures
  Arne Haber, Holger Rendel, Bernhard Rumpe, Ina Schäfer

Towards Optimization of Design Decisions for Embedded Systems by Exploiting Dependency Relationships
  Matthias Riebisch, Alexander Pacholik, Stephan Bode

Eine durchgängige Entwicklungsmethode von der System-Architektur bis zur Softwarearchitektur mit AUTOSAR
  Jan Meyer, Jörg Holtmann

Beschreibung der Plattformabhängigkeit eingebetteter Applikationen mit Dienstmodellen
  Simon Barner, Andreas Raabe, Christian Buckl, Alois Knoll

Herausforderungen bei der Performanz-Analyse automatisierungstechnischer Kommunikationssysteme
  Dimitry Renzhin, Jens Folmer

Electric/electronic architecture model driven FlexRay configuration
  Matthias Heinz, Martin Hillenbrand, Klaus D. Müller-Glaser

Comparing Continuous Behavior in Model-based Development of Embedded Software
  Jakob Palczynski, Carsten Weise, Sebastian Moj, Stefan Kowalewski

Using Guided Simulation to Assess Driver Assistance Systems
  Martin Fränzle, Tayfun Gezgin, Hardi Hungar, Stefan Puch, Gerald Sauter

FALTER in the Loop: Testing UAV Software in Virtual Environments
  Florian Mutter, Stefanie Gareis, Bernhard Schätz, Andreas Bayha, Franziska Grüneis, Michael Kanis, Dagmar Koss

Quantitative Analysis of UML Models
  Florian Leitner-Fischer, Stefan Leue


Diagnosis in Rail Automation: A Case Study on Megamodels in Practice
  Dennis Klar, Michaela Huhn, Jochen Grühser

Effizientes Erstellen von Simulink-Modellen mit Hilfe eines spezifisch angepassten Layoutalgorithmus
  Lars Kristian Klauske, Christian Dziobek

MeMo Methods of Model Quality
  Wie Hu, Joachim Wegener, Ingo Stürmer, Robert Reichert, Elke Salecker, Sabine Glesner

Ontology-Based Consideration of Electric/Electronic Architectures of Vehicles
  Martin Hillenbrand, Matthias Heinz, Markus Morhard, Jochen Kramer, Klaus D. Müller-Glaser

Requirements, Traceability and DSLs in Eclipse with the Requirements Interchange Format (RIF/ReqIF)
  Andreas Graf, Michael Jastram

Fighting the Modeling Bottleneck: Learning Models for Production Plants
  Oliver Niggemann, Alexander Maier, Asmir Vodenčarević, Bernhard Jantscher

Concatenating Sequence-Based Requirements in Model-Based Testing with State Machines
  Stephan Weißleder, Dehla Sokenou


Dagstuhl-Workshop MBEES: Modellbasierte Entwicklung eingebetteter Systeme VII (Model-Based Development of Embedded Systems)

The catchphrase of model-driven development is by now omnipresent in the development of embedded software. Whereas industrial practice used to focus almost exclusively on the generation of executable, embeddable software from executable models, often referred to as autocoding, other uses of models are increasingly gaining ground in practice. Not least the creation and adaptation of norms and standards that explicitly take model-based development into account, such as ISO 26262, contributes to these other forms of model use: architecture models for documenting design knowledge, test models for increasing the reusability of tests, or platform models for realizing distributed applications are only a few examples. With this, the quality of models is becoming ever more important, as the use of measures such as model inspections or walk-throughs, modeling guidelines, or specialized sublanguages shows.

In sum, models are used primarily for the design and documentation of systems, but the trend towards their use in analysis, operation, maintenance, and diagnosis is growing stronger. The state of the art is clearly approaching the seamless integration of methods and tools in a model-based development process that was envisioned at the first workshop in 2005. Nevertheless, aspects beyond the functional design of embedded systems still receive little attention, such as the extension of models with safety-related aspects like reliability, the derivation of diagnostic models, and safety cases.

As in previous years, this year's contributions again confirm the central thesis of the MBEES workshop series: the models used in a model-based development process must be oriented towards the problem domain rather than the solution domain. This is also evident in the increasing displacement of general-purpose formalisms such as the UML, with its object and component diagrams and its sequence and state diagrams, by so-called domain-specific languages. The improved tool landscape and the strong coupling between application domain and embedded systems have carried specific modeling approaches from the domains of the early adopters in the automotive and aeronautics industries into plant engineering and automation technology, railway technology, robotics, and other application fields.

Experience from these early-adopter domains has shown that a successful model-based approach rests essentially on a working tool landscape that is specifically geared towards the models of the application domain and its embedded systems. Thanks to the much improved tool development landscape and specific modeling approaches, model-based development is spreading increasingly into plant engineering and automation technology, railway technology, robotics, and other application fields.

In keeping with tradition, the papers collected in these proceedings combine established results as well as work in progress, industrial experience, and innovative ideas from this area, achieving an interesting mixture of theoretical foundations and practice-oriented application. Just as with the first six workshops, successfully held from 2005 to 2010, the essential goals of this workshop have thus been achieved:

- exchange on problems and existing approaches between the different disciplines (in particular electrical engineering and information technology, mechanical engineering/mechatronics, and computer science)
- exchange on relevant problems in application/industry and existing approaches in research
- connection to national and international activities (e.g., the IEEE initiative on model-based systems engineering, the GI working group "Modellbasierte Entwicklung eingebetteter Systeme", the GI special interest group "Echtzeitprogrammierung", the MDA initiative of the OMG)

The topic areas for which this workshop is intended are covered very well. The contributions address a wide variety of aspects of model-based development of embedded software systems, among them:

- domain-specific approaches to modeling systems
- models in architecture-centric development
- assessment and improvement of model quality
- model-based validation, verification and diagnosis
- evolution of models
- models and simulation
- tracing in model-based development
- model-supported design space exploration

The organizing committee is convinced that, with participants from industry, tool vendors, and academia, the community building under way since 2005 has been successfully continued. The now seventh MBEES workshop demonstrates that a solid basis exists for the further development of the topic of model-based development of embedded systems.

The high proportion of German researchers and developers in the international conference series on modeling and cyber-physical systems established in recent years shows that the German cooperation in this field has borne fruit.

Conducting a successful workshop is not possible without manifold support. We therefore thank the staff of Schloss Dagstuhl and, of course, our sponsors, Delta Energy Systems GmbH and Validas AG.

Schloss Dagstuhl, February 2011

The organizing committee:
Holger Giese, Hasso-Plattner-Institut an der Universität Potsdam
Michaela Huhn, TU Clausthal
Jan Philipps, Validas AG
Bernhard Schätz, fortiss GmbH

With the support of Dagmar Koss, fortiss GmbH


Within the Gesellschaft für Informatik e.V. (GI), a large number of special interest groups deal explicitly with the modeling of software and information systems. The recently founded cross-sectional committee "Modellierung" of the GI offers the members of these groups, as well as researchers and practitioners not organized in the GI, a forum to jointly discuss current and future topics of modeling research and to stimulate the mutual exchange of experience.

The research group "Grundlagen der Informatik" at the Institute of Computer Science of TU Clausthal works in the field of formal methods for safety-critical, software-controlled systems. This comprises model-based design methods as well as formal validation and verification. Methods and tools are developed for proving functional correctness, but also for analyzing the real-time and fault behavior of complex embedded systems. Quality planning and safety cases are addressed in this context as well. The applicability of the solutions is regularly evaluated in industry-oriented projects, with cooperations so far with the automotive industry and its suppliers, the railway industry, and robotics.

Schloss Dagstuhl was built in 1760 by the then reigning prince Graf Anton von Öttingen-Soetern-Hohenbaldern. The Saarland later acquired the castle to establish the Internationales Begegnungs- und Forschungszentrum für Informatik (International Meeting and Research Center for Informatics). The first seminar took place in August 1990. Every year, scientists from all over the world come to Dagstuhl for seminars and many other events.

fortiss GmbH is an innovation center for software-intensive systems in the form of an affiliated institute (An-Institut) of Technische Universität München. As a research and transfer institute, its focus lies on applied research into future software and system solutions, with an emphasis on embedded and distributed systems as well as information systems. Topics addressed include modeling theories covering functional, timing, and non-functional aspects; tool support for building domain-specific, model-based development tools; architectures, in particular for long-lived and safety-critical systems; and model-based requirements analysis and quality assurance. Solutions are developed primarily in the application fields of the automotive industry, aerospace, automation technology, medical technology, communication technology, public administration, and the health-care sector.

Validas AG is a consulting company in the field of software engineering for embedded systems. Validas AG offers support in all development phases, from requirements engineering to acceptance testing. The selection and introduction of quality-enhancing measures follows the guiding principles of model-based development, end-to-end automation, and scientific foundations.

The particular focus of the "System Analysis and Modeling" group at the Hasso Plattner Institute lies in the area of model-driven software development for software-intensive systems. This comprises the UML-based specification of flexible systems with patterns and components, approaches to the formal verification of these models, and approaches to model synthesis. In addition, model transformations, concepts for code generation for the structure and behavior of models, and, more generally, the problem of model integration in model-driven software development are studied.

The Delta Group, with revenues of 6 billion USD, is the worldwide market leader for customer-specific and standardized power supplies for the computer, telecommunications, and networking industries, as well as a major supplier of video displays and electronic components such as DC fans. Further business areas include industrial automation, chargers and converters for the automotive industry, LED street-lighting systems, and converters and systems for renewable energies such as wind and solar power. The name Delta Energy Systems stands above all for power supply systems and devices with a broad range of applications and an enormous power range. Our long-standing experience in high-power design (> 500 W) and the flexible, needs-oriented configuration of our products make them indispensable building blocks in the smallest systems as well as in complex supercomputer units and storage systems. The networking and telecommunications industry also obtains intelligent power distribution units from us. In the electric vehicle sector, we are increasingly active as a supplier of battery chargers, DC/DC converters, and inverters. Here, and in the wind energy market, for which we develop converters, we are setting accents that are sustainable, environmentally friendly, and future-oriented.

Delta Modeling for Software Architectures

Arne Haber (1), Holger Rendel (1), Bernhard Rumpe (1), Ina Schaefer (2)
(1) Software Engineering, RWTH Aachen University, Germany
(2) Software Systems Engineering Institut, Technische Universität Braunschweig, Germany

Abstract: Architectural modeling is an integral part of modern software development. In particular, diverse systems benefit from precise architectural models since similar components can often be reused between different system variants. However, during all phases of diverse system development, system variability has to be considered and modeled by appropriate means. Delta modeling is a language-independent approach for modeling system variability. A set of diverse systems is represented by a core system and a set of deltas specifying modifications to the core system. In this paper, we give a first sketch of how to apply delta modeling in MontiArc, an existing architecture description language, in order to obtain an integrated modeling language for architectural variability. The developed language, Δ-MontiArc, allows the modular modeling of variable software architectures and supports proactive as well as extractive product line development.

1 Introduction

Today, modeling of software architectures is an integral part of the software development process [MT10]. Dividing a system into small manageable parts provides an overview of the system structure, subcomponent dependencies, and communication paths. Architectural modeling allows reasoning about structural system properties in early development stages. Reusing well-defined modular components reduces development costs and increases product quality. Especially embedded system architectures have to be carefully designed, because resources like bandwidth or memory are restricted, product quantities are high, and faults that are detected late are extremely expensive.

Diversity is prevalent in modern software systems, in particular in the embedded systems domain. Systems exist in many different variants simultaneously in order to adapt to their application context or customer needs. Software product line engineering [PBvdL05] is a commercially successful approach to develop a set of systems with well-defined commonality and variability. Product line engineering benefits from architectural modeling, since common components can be reused in different system variants.

However, in order to develop a software product line, system variability has to be considered in all phases of the development process, including architectural modeling. This means that the variability of the architectures in different system variants has to be modeled by appropriate means.

Delta modeling [CHS10, SBB+10, Sch10] is a transformational approach to represent product variability. It combines the modular representation of changes between system variants with expressive means to capture the influence of product features. In delta modeling, a set of diverse systems is represented by a designated core system and a set of deltas describing modifications to the core system to implement further system variants. A particular product configuration is obtained by applying the changes specified in the applicable deltas to the core system. The concepts of delta modeling are language-independent.

In this paper, we apply delta modeling to represent variability in software architectures that are described by the existing architecture description language MontiArc [HKRR11]. MontiArc is designed to model architectures for asynchronously communicating (logically) distributed systems. In order to express variable MontiArc architectures by delta modeling, the modification operations that can be specified in deltas over architecture descriptions will be defined. An architectural delta can add, remove and modify components and alter the communication structure between these components. This reflects the variability that is induced by different product features. By applying a sequence of deltas to the MontiArc description of a designated core architecture, MontiArc descriptions for architectures of other product variants can be obtained.

The resulting modeling language for architectural variability, Δ-MontiArc, provides means to modularly specify architectural variability by defining architectural deltas. Δ-MontiArc supports the proactive and extractive development [Kru02] of software product line architectures. Hence, it can be used methodologically as an incremental variability model where components are added or replaced by refined and enhanced versions. This is in contrast to annotative approaches [ZHJ03, Gom04, CA05], where a model containing all possible variants is stripped down for a particular feature configuration. Furthermore, existing legacy applications can be transformed into a product line by specifying the architectural deltas that are required to describe the complete product space.

2 Architectural Modeling with MontiArc

Architecture description languages (ADLs) [MT00] support the modeling, design, analysis, and evolution of system architectures. In ADLs, architectures are generally described in terms of components, connectors and communication relationships. ADLs facilitate a high-level description of the system structures in a specific domain and support reasoning about structural system properties [GMW97].

The textual ADL MontiArc [HKRR11] is designed for modeling distributed information flow architectures in which communication is based on asynchronous messages. It is developed using the DSL framework MontiCore [GKR+08] that supports the agile development of textual domain-specific languages. Following [MT00], architectural components in MontiArc are units of computation or storage defining their computational commitments via interfaces.

Figure 1: Architecture of the IntervalControl

These interfaces are the only interaction points of components to provide clear concepts of interaction between entities of computation [BS01].

As an example of a system architecture description in MontiArc, we use an excerpt of an embedded windshield wiper system. The architecture is graphically represented in Figure 1. The component IntervalControl receives an interval selection from the driver and emits a command WipeCmd to control a connected wiper actuator. It is hierarchically decomposed into a subcomponent IntervalCmdProcessor named icp that calculates the desired wiper behavior based on the selection and the vehicle speed. In MontiArc, a subcomponent is either an instance of a component referenced from a separate model, or a local component definition (similar to private inner classes in object-oriented programming languages). The interface definition of the component IntervalControl contains an incoming port IntervalSelection, an incoming port vs, and an outgoing port WipeCmd. All ports are connected to the subcomponent icp using connectors. Ports and subcomponents obey an implicit naming rule. If the used type is unique in the current component definition, the usage of an explicit name is optional and the artifact is implicitly named after its type.

Listing 1 shows the textual description of the above described architecture (cf. Figure 1) in MontiArc syntax. Components are organized in packages (l. 1).

 1 package wipe;
 2
 3 component IntervalControl {
 4   autoconnect port;
 5
 6   port
 7     in IntervalSelection,
 8     in VehicleSpeed vs,
 9     out WipeCmd;
10
11   component IntervalCmdProcessor icp;
12
13   connect vs -> icp.vehiclespeed;
14 }

Listing 1: Structural component IntervalControl in MontiArc syntax

A component definition consists of the keyword component followed by its name and curly brackets that surround the component's architectural elements. As in the graphical notation, the component's interfaces are described by ports (ll. 6-9). The keyword port and the direction (in or out) are followed by a communication data type and the port's name (optional). Subcomponents that instantiate other component definitions are created with the keyword component followed by the subcomponent's type and an optional name (l. 11). Subcomponents that define an inner component are created with the same component definition syntax as described above. MontiArc offers several mechanisms to connect subcomponents: the autoconnect port statement (l. 4) automatically connects all type-compatible ports with the same name. If the autoconnect statement is parametrized with the keyword type, ports with the same type are connected. Using this mechanism, the connector from the outgoing port WipeCmd of component icp to the outgoing port WipeCmd is created in the example. If it is not possible to create all connections automatically, as uniquely identifying names may not always be given, explicit connections can be created using the connect statement (l. 13), connecting one source port with one or more target ports. If a target or source port belongs to a subcomponent, it is qualified with the subcomponent's name. Implicit connectors can always be redefined by explicit connector definitions.
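To make the connection mechanisms just described more tangible, the following small Python sketch mimics the two autoconnect variants (matching by port name vs. by port type) on a drastically simplified port model. It is only an illustration under our own assumptions, not MontiArc's implementation; all data structures and names are invented, and the hierarchy, direction rules, and uniqueness checks of the real language are omitted.

# Simplified, illustrative sketch of the two autoconnect variants described
# above (match by name vs. match by type). This is NOT the MontiArc tool's
# implementation; hierarchy and uniqueness checks are omitted.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Port:
    owner: str       # "self" for the enclosing component, else the subcomponent name
    name: str
    type: str
    direction: str   # "in" or "out"

def autoconnect(ports: List[Port], mode: str = "port") -> List[Tuple[Port, Port]]:
    """Connect data sources to data targets that agree on name ('port' mode)
    or on type ('type' mode)."""
    key = (lambda p: p.name) if mode == "port" else (lambda p: p.type)
    # Inside a component, its own incoming ports and the outgoing ports of its
    # subcomponents act as sources; the converse ports act as targets.
    sources = [p for p in ports if (p.owner == "self") == (p.direction == "in")]
    targets = [p for p in ports if (p.owner == "self") == (p.direction == "out")]
    return [(s, t) for s in sources for t in targets
            if key(s) == key(t) and s.type == t.type and s is not t]

# Ports of the wiper example from Listing 1.
ports = [
    Port("self", "IntervalSelection", "IntervalSelection", "in"),
    Port("self", "vs", "VehicleSpeed", "in"),
    Port("self", "WipeCmd", "WipeCmd", "out"),
    Port("icp", "WipeCmd", "WipeCmd", "out"),
    Port("icp", "vehiclespeed", "VehicleSpeed", "in"),
]
for src, tgt in autoconnect(ports, mode="port"):
    print(f"connect {src.owner}.{src.name} -> {tgt.owner}.{tgt.name}")

Run on the ports of the wiper example, the sketch yields exactly the one connector that the text attributes to autoconnect port (from icp.WipeCmd to the outer WipeCmd), while vs and vehiclespeed still need the explicit connect statement because their names differ.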

3 Delta Modeling

Delta modeling [CHS10, SBB+10, Sch10] is a transformational approach for modular variability modeling. The concepts of delta modeling are language-independent. A delta-oriented product line is split into a core model and a set of model deltas that are developed during domain engineering [PBvdL05]. The core model corresponds to a product for some valid feature configuration. The core model can be developed according to well-established single application engineering principles or derived from an existing legacy system. The variability of the product line is handled by model deltas. The model deltas specify modifications to the core model in order to integrate other product features. The modifications comprise additions of modeling entities, removals of modeling entities, and modifications of modeling entities by changing the internal structure of these entities.

The model deltas contain application conditions determining under which feature configurations the specified modifications have to be carried out. These application conditions are Boolean constraints over the features in the feature model and build the connection between the feature model and the variability of the product artifacts. A model delta does not necessarily refer to exactly one feature, but potentially to a combination of features, which allows handling the influence of feature combinations on the artifacts individually. The number of model deltas that have to be created to cover the complete product space depends on the desired granularity of the application conditions.

A model for a particular feature configuration is obtained by delta application. The modifications specified by the model deltas which have a valid application condition for the respective feature configuration are applied to the core model. To avoid conflicts between modifications targeting the same model entities, a partial order between the model deltas can be defined that determines in which order the model deltas have to be applied if applied together. The partial order captures the necessary dependencies in order to guarantee that for every feature configuration a unique model can be generated. Furthermore, the partial order ensures that a model delta is applicable to the core model or an intermediate model during delta application. Applicability requires that added model entities do not exist and removed and modified entities exist. An intermediate model during delta application may not be well-formed according to the well-formedness rules of the underlying modeling language. However, after all applicable model deltas are applied, the resulting product model must be well-formed.

4 Architectural Variability Modeling in Δ-MontiArc

In order to make the ADL MontiArc presented in Section 2 more amenable to product line engineering, we apply delta modeling as presented in Section 3 to MontiArc. This results in the architectural variability modeling language Δ-MontiArc. The modification operations that can be specified in model deltas in Δ-MontiArc allow the addition, removal and modification of all architectural elements of MontiArc, i.e., subcomponents, associated ports and connectors. The replace operation substitutes a subcomponent by another subcomponent with the same interface. This allows changing the internal realization of a subcomponent.

A MontiCore [GKR+08] grammar defining the syntax of Δ-MontiArc is shown in Listing 2. Based on MontiCore's reuse mechanisms, the existing MontiArc grammar can be extended by the MADelta production defining model deltas for MontiArc descriptions. The production ArcElement for representing architectural elements, like components, ports, or connectors, in MontiArc is also reused. A model delta in Δ-MontiArc starts with the keyword delta (l. 2) followed by the name of the delta. Using the optional after clause (l. 3), a partial order on the application of model deltas can be defined. The keyword after is followed by a list of model delta names. The defined model delta has to be applied after the listed deltas during delta application, if the deltas are applied together. The keyword when is followed by a logical constraint (l. 5) which defines the application condition of the model delta. The Constraint production represents Boolean constraints over the feature model and is omitted for space reasons. The changes of a MontiArc architecture specified by a model delta are defined using the production MADeltaBody (l. 7), which may contain an optional (?) ExpandAutoConStatement and an arbitrary number (*) of statements defining modifications of architectural elements. The ExpandAutoConStatement (ll. 20-21) triggers the MontiArc autoconnect mechanism after the application of delta modifications. The interface Statement is implemented by several productions that allow modifying, adding, or removing ArcElements from the target model (ll. 10-16). The ReplaceStatement (ll. 17-19) removes a subcomponent oldcomp and replaces it with a new subcomponent newcomp.

A Δ-MontiArc product line model consists of the core model, which is a standard MontiArc architecture for the core feature configuration, and a set of model deltas specified using the above syntax.

 1 MADelta =
 2   "delta" Name
 3   ("after" predecessors:qualifiedname
 4     ("," predecessors:qualifiedname)? )?
 5   "when" Constraint MADeltaBody;
 6
 7 MADeltaBody = "{" ExpandAutoConStatement? Statement* "}";
 8
 9
10 interface Statement;
11 ModifyStatement implements Statement =
12   "modify" ArcElement ";";
13 AddStatement implements Statement =
14   "add" ArcElement ";";
15 RemoveStatement implements Statement =
16   "remove" ArcElement ";";
17 ReplaceStatement implements Statement =
18   "replace" oldcomp:arcreference
19   "with" newcomp:arcreference ";";
20 ExpandAutoConStatement =
21   "expand autoconnect;";

Listing 2: MontiCore grammar for the Δ-MontiArc language

A MontiArc architecture for a particular feature configuration is obtained from a Δ-MontiArc product line model as follows: First, determine the model deltas that have a valid application condition in the when clause for the given feature configuration. Second, find a linear order of the applicable deltas that is compatible with the partial order of delta application specified in the after clauses. Third, apply the modifications specified in the model deltas one by one in the given order to the core model. If the expand autoconnect statement is provided, matching ports are connected using the autoconnect mechanism after the architectural modifications specified in the delta have been carried out.

In order to ensure that the application of every model delta is defined, the following conditions must hold for each model delta during delta application. In the current stage of language definition, we use a rather restrictive approach.

(1) A subcomponent named sc must not be added to component c, if c already contains a subcomponent named sc.
(2) A port named p must not be added to component c, if c already contains a port named p.
(3) A connector with the target port tp must not be added to component c, if c already contains a connector with the target tp, or if tp does not exist (as a port in c or a port in a subcomponent of c).
(4) An architectural element named ae must not be removed from component c, if c does not contain an architectural element named ae. A port named p must not be removed from component c, if c contains a connector with p as its source or target.
(5) A subcomponent named sc must not be removed from component c, if c contains a connector that has a port that belongs to sc as its source or target.
(6) An architectural element named ae must not be modified, if an architectural element named ae does not exist.
(7) The operator replace can only substitute a subcomponent sc1 of component c with a subcomponent sc2 (specified by the Target construct), if sc2 has the same interface as sc1. The new component sc2 will be connected in the same way as the component sc1.
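As a purely illustrative sketch (not the authors' generator, which operates on MontiCore abstract syntax trees), the three-step derivation described above can be pictured in Python roughly as follows. The classes ArchModel and Delta and all names are invented for this example, and only a fragment of the applicability conditions is checked.

# Illustrative sketch of the three-step derivation of a product architecture
# from a core model and a set of deltas. All names are invented placeholders.

from dataclasses import dataclass, field
from typing import Callable, FrozenSet, List, Set

@dataclass
class ArchModel:
    # Drastically simplified architecture: just named subcomponents and ports.
    subcomponents: Set[str] = field(default_factory=set)
    ports: Set[str] = field(default_factory=set)

@dataclass
class Delta:
    name: str
    after: Set[str]                           # partial order: apply after these deltas
    when: Callable[[FrozenSet[str]], bool]    # application condition over features
    modify: Callable[[ArchModel], None]       # the add/remove/modify operations

def derive_product(core: ArchModel, deltas: List[Delta],
                   features: FrozenSet[str]) -> ArchModel:
    # Step 1: select the deltas whose 'when' condition holds for the configuration.
    applicable = [d for d in deltas if d.when(features)]
    applicable_names = {d.name for d in applicable}
    # Step 2: linearize compatibly with the 'after' partial order (naive topological sort).
    ordered: List[Delta] = []
    done: Set[str] = set()
    pending = applicable[:]
    while pending:
        ready = [d for d in pending if (d.after & applicable_names) <= done]
        if not ready:
            raise ValueError("cyclic 'after' dependencies")
        ordered.append(ready[0])
        done.add(ready[0].name)
        pending.remove(ready[0])
    # Step 3: apply the modifications one by one to a copy of the core model.
    product = ArchModel(set(core.subcomponents), set(core.ports))
    for d in ordered:
        d.modify(product)
    return product

# Hypothetical usage: a delta that adds a sensor evaluation subcomponent and port.
def add_sensor(m: ArchModel) -> None:
    assert "SensorEval" not in m.subcomponents   # cf. applicability condition (1)
    m.subcomponents.add("SensorEval")
    m.ports.add("SensorStat")

core = ArchModel({"icp"}, {"IntervalSelection", "vs", "WipeCmd"})
d_sensor = Delta("DAddSensor", set(), lambda f: "Sensor" in f, add_sensor)
print(derive_product(core, [d_sensor], frozenset({"Interval", "Sensor"})).subcomponents)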

Figure 2: Feature model for the IntervalControl (Wiper feature diagram with the features Interval, Vehicle_Speed, and Rain_Sensor)

delta DRainSensor when Rain_Sensor {

  expand autoconnect;

  modify component IntervalControl {
    add port in RainSensorStat;
    add component RainEval;
  };

  modify component IntervalCmdProcessor {
    add port in RainIntensity;
  };

}

Listing 3: Model delta adding the Rain Sensor feature

Example

To illustrate architectural variability modeling using Δ-MontiArc, we transform the component IntervalControl presented in Section 2 into a product line of similar components. The product line can be described by the feature model shown in Figure 2. All components have a mandatory feature Interval which allows selecting the interval for the windshield wipers. Optional features are Vehicle Speed, which adjusts the wiping speed based on the vehicle speed, and a Rain Sensor, which determines the wiping based on the current rain intensity. As core model for the delta-oriented product line, we choose the component IntervalControl shown in Listing 1, implementing the features Interval and Vehicle Speed.

Listing 3 shows the model delta DRainSensor adding the Rain Sensor feature. The model delta adds a new component RainEval and an incoming port RainSensorStat for the input from a rain sensor. Furthermore, an incoming port RainIntensity is added to the component IntervalCmdProcessor, receiving the output from the component RainEval. The connections between the added components and ports are created by the expand autoconnect statement. Applying the delta DRainSensor to the core model results in a system with all three features.

In order to obtain a system which only realizes the mandatory feature, we have to provide a model delta that removes the feature Vehicle Speed. This delta DRemSpeed is shown in Listing 4. It removes the ports VehicleSpeed from the sub- and supercomponents of the core model.

delta DRemSpeed when !Vehicle_Speed {

  expand autoconnect;

  modify component IntervalCmdProcessor {
    remove port in VehicleSpeed;
  };

  modify component IntervalControl {
    remove port in VehicleSpeed;
  };

}

Listing 4: Model delta removing the Vehicle Speed feature

The associated connector is also removed due to the usage of expand autoconnect. In order to obtain the fourth possible system variant, realizing the feature Rain Sensor but not the feature Vehicle Speed, both model deltas have to be applied to the core model.

5 Related Work

Existing approaches to express variability in modeling languages can be classified in two main directions [VG07]: annotative and compositional approaches. Annotative approaches consider one model of all products of the product line. Variant annotations, e.g., UML stereotypes [ZHJ03, Gom04] or presence conditions [CA05], define which parts of the model have to be removed to derive a concrete product model. The orthogonal variability model (OVM) [PBvdL05] captures the variability of product line artifacts in a separate variability model where artifact dependencies take the place of annotations. In the Koala component model [vo05], the variability of a component architecture that contains all possible components is expressed by explicit components, called switches. Switches select between component variants in different system configurations, playing the role of annotations.

Compositional approaches associate model fragments with product features that are composed for a particular feature configuration. In [HW07, VG07, NK08], models are constructed by aspect-oriented composition. In [AJTK09], model fragments are composed by model superposition. In feature-oriented model-driven development [SBD07], a product model is composed from a base module and a sequence of feature modules. Additionally, model transformations [HMPO+08, JWEG07] can represent variations of models; e.g., in [TM10, WF02], architectural variability is captured by graph transformation rules. Delta modeling [CHS10, SBB+10, Sch10] can be seen as a transformational approach which provides a modular definition of transformations dependent on feature configurations.

For architectural variability modeling, in [MKM06] a resemblance operator is provided that allows creating a component that is a variant of an existing component by adding, deleting, renaming or replacing component elements. The old and the new component can be used together to build further components. Instead, in delta modeling, the existing component is destroyed since a complete model of the resulting architecture will be generated by delta application. In [HvdH07], architectural variability is represented by change sets containing additions and removals of components and component connections that are applied to a base line architecture. Relationships between change sets specify which change sets may be applied together, similar to application conditions of model deltas. However, the order in which change sets are applied cannot be explicitly specified. Conflicts between change sets have to be resolved by excluding the conflicting combination using a relationship and providing a new change set covering the combination, which may lead to a combinatorial explosion of change sets to represent all possible variants.

6 Conclusion

In this paper, we have presented a first version of Δ-MontiArc to support the flexible modular modeling of architectural variability by extending the existing ADL MontiArc with the concepts of delta modeling. In order to guarantee the uniqueness of the resulting artifacts, we plan to apply the criteria given in [CHS10] to Δ-MontiArc. We expect to get more experience from case studies to investigate how to optimize this integration and compare it with other similar approaches. We are in particular interested in restricted forms of modifications that add and refine architectures. An appropriate refinement calculus for this is given in [PR99]. Additionally, we aim at developing a tool infrastructure for Δ-MontiArc. Common language artifacts, like symbol tables, context condition checkers and model editors, as well as a generator that creates a MontiArc model for a particular feature configuration by delta application, have to be developed. Δ-MontiArc is a promising language for modeling architecture evolution since delta modeling is flexible enough to deal with anticipated and unanticipated variability.

References

[AJTK09] Sven Apel, Florian Janda, Salvador Trujillo, and Christian Kästner. Model Superimposition in Software Product Lines. In International Conference on Model Transformation (ICMT).
[BS01] Manfred Broy and Ketil Stølen. Specification and Development of Interactive Systems. Focus on Streams, Interfaces and Refinement. Springer Verlag Heidelberg.
[CA05] Krzysztof Czarnecki and Michal Antkiewicz. Mapping Features to Models: A Template Approach Based on Superimposed Variants. In GPCE.
[CHS10] D. Clarke, M. Helvensteijn, and I. Schaefer. Abstract Delta Modeling. In Proc. of GPCE.
[GKR+08] Hans Grönniger, Holger Krahn, Bernhard Rumpe, Martin Schindler, and Steven Völkel. MontiCore: a Framework for the Development of Textual Domain Specific Languages. In 30th International Conference on Software Engineering (ICSE 2008), Leipzig, Germany, May 10-18, 2008, Companion Volume, 2008.

[GMW97] David Garlan, Robert T. Monroe, and David Wile. Acme: An Architecture Description Interchange Language. In Proceedings of CASCON 97.
[Gom04] H. Gomaa. Designing Software Product Lines with UML. Addison Wesley.
[HKRR11] Arne Haber, Thomas Kutz, Jan Oliver Ringert, and Bernhard Rumpe. MontiArc Architectural Modeling Of Interactive Distributed Systems. Technical report, RWTH Aachen University (to appear).
[HMPO+08] Ø. Haugen, B. Møller-Pedersen, J. Oldevik, G. Olsen, and A. Svendsen. Adding Standardized Variability to Domain Specific Languages. In SPLC.
[HvdH07] Scott A. Hendrickson and André van der Hoek. Modeling Product Line Architectures through Change Sets and Relationships. In ICSE.
[HW07] F. Heidenreich and Chr. Wende. Bridging the Gap Between Features and Models. In Aspect-Oriented Product Line Engineering (AOPLE 07).
[JWEG07] Praveen K. Jayaraman, Jon Whittle, Ahmed M. Elkhodary, and Hassan Gomaa. Model Composition in Product Lines and Feature Interaction Detection Using Critical Pair Analysis. In MoDELS.
[Kru02] Charles Krueger. Eliminating the Adoption Barrier. IEEE Software, 19(4):29-31.
[MKM06] Andrew McVeigh, Jeff Kramer, and Jeff Magee. Using resemblance to support component reuse and evolution. In SAVCBS, pages 49-56.
[MT00] Nenad Medvidovic and Richard N. Taylor. A Classification and Comparison Framework for Software Architecture Description Languages. IEEE Transactions on Software Engineering.
[MT10] Nenad Medvidovic and Richard N. Taylor. Software architecture: foundations, theory, and practice. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 2, ICSE 10, New York, NY, USA. ACM.
[NK08] N. Noda and T. Kishi. Aspect-Oriented Modeling for Variability Management. In SPLC.
[PBvdL05] K. Pohl, G. Böckle, and F. van der Linden. Software Product Line Engineering - Foundations, Principles, and Techniques. Springer.
[PR99] Jan Philipps and Bernhard Rumpe. Refinement of Pipe And Filter Architectures. In FM 99, LNCS 1708.
[SBB+10] Ina Schaefer, Lorenzo Bettini, Viviana Bono, Ferruccio Damiani, and Nico Tanzarella. Delta-oriented Programming of Software Product Lines. In SPLC.
[SBD07] S. Trujillo, D. Batory, and O. Díaz. Feature Oriented Model Driven Development: A Case Study for Portlets. In ICSE.
[Sch10] Ina Schaefer. Variability Modelling for Model-Driven Development of Software Product Lines. In VaMoS.
[TM10] D. Tamzalit and T. Mens. Guiding Architectural Restructuring through Architectural Styles. In Proc. ECBS 2010. IEEE.
[VG07] Markus Völter and Iris Groher. Product Line Implementation using Aspect-Oriented and Model-Driven Software Development. In SPLC.
[vo05] Rob C. van Ommering. Software Reuse in Product Populations. IEEE Trans. Software Eng., 31(7).
[WF02] Michel Wermelinger and José Luiz Fiadeiro. A graph transformation approach to software architecture reconfiguration. Science of Computer Programming, 44(2).
[ZHJ03] Tewfik Ziadi, Loïc Hélouët, and Jean-Marc Jézéquel. Towards a UML Profile for Software Product Lines. In Product Family Engineering (PFE), 2003.

Towards Optimization of Design Decisions for Embedded Systems by Exploiting Dependency Relationships

Matthias Riebisch, Alexander Pacholik, Stephan Bode
Ilmenau University of Technology, Germany
{matthias.riebisch alexander.pacholik

Abstract: Design decisions for the development of embedded systems demand for a consideration of complex goals and constraints. In order to reduce risks and optimize the design, model-based approaches are needed for an explicit representation of goals and constraints as well as for early assessments. The explicit representation of dependencies is required to make design decisions in a reasonable way. Existing works do not sufficiently support the mapping between problem space and solution space together with a consideration of technological constraints. In this paper the Goal Solution Scheme approach developed for software architectural design is extended for the development of embedded systems, considering specific needs for flexible decisions late in the development process. The adaptation of the approach for the relevant goals and development steps of embedded systems is illustrated by its application in a case study of a complex embedded system project.

1 Introduction

During the last years the complexity of embedded software systems has steadily increased. Embedded systems have to satisfy numerous goals simultaneously. This situation results from the systems' integration into heterogeneous environments regarding both technology and organization. They have become more and more critical for the success of products and services. Furthermore, there is a constant need for optimization and change, together with a high pressure for cost reduction. For example, cost aspects could demand a change from a hardware- to a software-implementation of a feature at a later point in the development process. To manage complexity and risk, and to provide the required flexibility for late changes of implementation decisions, model-based approaches have been introduced. Model-based development processes help to reduce the risks and to increase the efficiency by providing support with methods and tools.

Decisions during the design process play a key role for the satisfaction of the various goals. Unfortunately, there are competing or even conflicting goals. For optimization, all relevant goals have to be satisfied and balanced. However, method and tool support does not cover all types of goals. Furthermore, there are complex dependencies between the decisions, which limit the set of possible alternatives for a decision, the so-called decision space. The missing comprehension of the dependencies hampers decision-making.

To solve this problem, an explicit modelling of these dependencies can provide a base for both effective tool support and the developer's comprehension of the decision space. The modelled information has to cover:

- Goals and preferences. Especially quality goals have to be covered because they can hardly be achieved by later changes of the implementation.
- Constraints. In the case of embedded systems, a high number of constraints have to be met by solution instruments. For example, certain hardware is not supported by a model-based platform. These constraints restrict the decision space.
- Solution instruments. The potential elements of a solution, even partly abstract ones such as patterns and heuristics as well as process patterns such as refactorings, have to be represented with preconditions for their application and with their impact on goals.

Due to the high risk of the design decisions, a goal-oriented, iterative development procedure together with early assessments is required. Model-based approaches enable such assessments and help to minimize the risks. Other means for risk reduction are (a) simulation, especially for the goals performance and computing power, and (b) prediction, especially for performance and reliability.

The contribution of this paper consists in a model-based, goal-oriented approach, which uses dependency relationships to represent a mapping between problem space and solution space. In this paper the formerly introduced Goal Solution Scheme [Bo09] is adapted to the embedded systems domain. For this adaptation, new mechanisms for the selection of a solution and for decision-making, namely preconditions and constraints, are introduced, together with additional goals and solution instruments. For illustration, the application of the scheme is shown by an example from a large-scale case study.

2 State of the Art

Modelling techniques for competing requirements and goals together with their refinement and resolution have been developed in Goal-Oriented Requirements Engineering, such as the NFR framework with the Softgoal Interdependency Graph, the i* notation, and the standardization as the User Requirements Notation (URN) [CP09]. The strengths of these approaches consist in their support for the elicitation of requirements and their priorities, the discovery of conflicts, the conflict resolution and scoping, and the support for the classification of the goals. However, they insufficiently consider the transition to the solution space, because the impact of solutions on goals as weights is not sufficiently represented. Furthermore, constraints for the applicability of solutions are not covered.

Support for multi-criteria decision-making is provided by various approaches developed for economy [Tr00]. Several approaches apply a decision matrix to visualize and compare criteria and options, similar to the House of Quality matrix of the Quality Function Deployment method [BR95]. The strength of these approaches consists in the various ways of visualization, which provide support for a manual selection by weighting different factors.

Unfortunately, the support for a classification of solution alternatives regarding the goals is rather limited, and the consideration of the transition to the solution space by representing the impact of solutions on goals is missing. Furthermore, these approaches are developed to assist human decision-making, and they do not sufficiently support a tool-based or even model-based one.

Support for mapping between problem and solution space is provided to some extent by all design methodologies. Strongly related approaches in the field of software architectural design are QASAR [Bo00] and ADD [BK02]. They do not sufficiently support the establishment of solutions and the explicit representation of dependencies to goals and constraints. A classification of solutions is provided by various catalogue approaches, such as Design Pattern catalogues [GH95], [BM96] or the various component catalogues. Unfortunately, most of them do not provide a classification regarding goals. Furthermore, a representation of preconditions for the applicability needed for tool-based preselection is missing.

For the development of embedded systems, system-level synthesis approaches have been developed. They require as input an executable specification, constraints, and target platform templates; and they apply design space exploration and synthesis to derive an optimally suited hardware architecture and functional deployment [St10]. However, design decisions on this base require a complete tool chain for synthesis together with platform template databases. Incomplete models and uncertainties in the design prevent the application of such techniques.

A broad literature base provides principles and solution instruments as contributions to system development. The principles for performance optimization in embedded systems design [Wa05] constitute an example. The approach presented in this paper extends these works with a classification regarding goals, which enables their inclusion into layers III and IV of the Goal Solution Scheme (see Section 4).

3 Problem Statement

To support design decisions for embedded and software-intensive systems, the model-based approach has to fulfil the following objectives:

- Represent the design space and manage its complexity. Decisions in this domain have to fulfil complex requirements and various mutual constraints. The set of possible solutions is influenced by methods and principles from different research fields.
- Manage situations of missing information. By nature, a design process for a technical system is characterized by incomplete requirements, references to components with yet undefined properties, and missing knowledge about the satisfiability of requirements.
- Provide support for flexibility. Due to the mutual constraints and the need for optimization, decisions are wanted to be postponed as far as possible. This covers decisions on platform technology such as hardware and software.

- Facilitate a goal-oriented evaluation of the solutions. Due to the high risks and the need for optimization, solutions have to be evaluated as early as possible.

The explicit representation of dependencies in the Goal Solution Scheme, together with its integration in a decision-making process, shall fulfil these objectives.

4 Goal Solution Scheme

For a support of goal-oriented development, a mapping between problem space and solution space is necessary. In iterative development processes, the establishment of decisions and the evaluation of their results are firmly related. The Goal Solution Scheme has been developed to represent a mapping between elements of both spaces by explicit relationships. The layers of the scheme (see Figure 1) correspond to stages of the development process and contain the elements of these stages. Each relation between elements expresses a dependency: a change of one element requires changes of its related elements. A weight added to the relationship expresses its impact, which can be positive or negative. As additional relations, preconditions for the applicability of elements of the solution space are managed; however, they are not represented graphically. The layers I and II as well as project constraints represent the problem space, while layers III and IV represent the solution space.

Figure 1: Layers of the Goal Solution Scheme (I Quality Goals, II Subgoals, III Solution Principles, IV Solution Instruments)

Layer I covers the top-level goals, such as security, performance, portability, and maintainability, for embedded systems frequently extended by energy consumption, size, and cost. Layer II represents the subgoals. The transition I-II represents a goal refinement, which is derived from quality models, cost models, or similar. The subgoal level facilitates the resolution of trade-off situations between competing goals. Layer III covers solution principles for a design regarding the different goals. The transition II-III represents the mapping from the problem space to the solution space. The relations represent the impact of a solution principle on the related goals. Examples for positive and negative impact will be discussed in the next section of the paper. Layer IV contains solution instruments at different levels of abstraction. Examples are building blocks, patterns, reference architectures, and tools for analysis and code generation.

The transition III-IV provides a classification of the instruments regarding the principles and thus regarding the goals. The relations represent the impact of a solution principle on the related goals. In this way, layer IV represents the design space as a set of solutions with properties regarding goals and constraints. The information on layers III and IV as well as the impact relations to the layers above have been acquired and incrementally improved during previous projects [Bo09].

As an extension of the Goal Solution Scheme for the embedded systems domain, technological constraints are considered explicitly. It turned out to be necessary to check if the preconditions for the solutions' application are fulfilled. These preconditions are modelled by attributes of solution principles (layer III) and solution instruments (layer IV). They are evaluated during a preselection step by comparison to the constraints of the current design task. Only solution principles and solution instruments with fulfilled preconditions are preselected. In this way the applicable ones are identified. The preselected candidates are then ranked according to their impact on the relevant goals, which is derived from the relationships of the transition II-III-IV of the Goal Solution Scheme. The preselection reduces the number of ranked elements significantly, and thus reduces the complexity of the decision support task.

As a result, a three-step decision-making process is established (see Figure 2), as an extension of earlier works [RW07]. Firstly, the goals are defined based on the preferences of the stakeholders, and the constraints and preconditions are identified. Secondly, the constraints and preconditions are compared in order to preselect a set of candidate solution principles and instruments (layers III and IV). Thirdly, the impact values of the solution principles and instruments and the priorities of the goals are used to calculate weights for the preselected solution instruments to establish a ranking. The resulting ranked list is then presented to the developer as proposed solution instruments.

Figure 2: The decision-making process according to the Goal Solution Scheme

The constraints are identified during the requirements analysis or architectural analysis phase. The preconditions for the application of the solution principles and instruments (layers III and IV) are represented by attributes as part of their description within the scheme. For the sake of effort reduction they are partly generalized for classes of solution instruments, as shown in the case study. The preconditions are not visualized.
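A minimal sketch of the preselection and ranking steps, assuming an invented in-memory representation of scheme elements, could look as follows in Python. The impact weights, preconditions, constraint names, and scores below are placeholders, not data from the paper's case study.

# Illustrative sketch of the preselection and ranking steps of the Goal
# Solution Scheme; all elements, weights, and preconditions are invented.

from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class SolutionInstrument:
    name: str
    impact: Dict[str, float]                       # (sub)goal -> impact weight
    preconditions: Set[str] = field(default_factory=set)

def preselect(instruments: List[SolutionInstrument],
              satisfied_constraints: Set[str]) -> List[SolutionInstrument]:
    """Step 2: keep only instruments whose preconditions the project satisfies."""
    return [i for i in instruments if i.preconditions <= satisfied_constraints]

def rank(instruments: List[SolutionInstrument],
         goal_priorities: Dict[str, float]) -> List[SolutionInstrument]:
    """Step 3: weight the impact values by the goal priorities and sort."""
    def score(i: SolutionInstrument) -> float:
        return sum(goal_priorities.get(g, 0.0) * w for g, w in i.impact.items())
    return sorted(instruments, key=score, reverse=True)

# Hypothetical data, loosely inspired by the case study in the following section.
instruments = [
    SolutionInstrument("RTOS", {"performance": -0.2, "changeability": 0.5},
                       {"rtos_supports_processor"}),
    SolutionInstrument("Static scheduling", {"performance": 0.6, "changeability": 0.3}),
    SolutionInstrument("HDL hardware implementation",
                       {"performance": 0.8, "changeability": 0.4},
                       {"hdl_tool_chain_available"}),
]
priorities = {"performance": 0.66, "changeability": 0.33}
candidates = preselect(instruments, {"rtos_supports_processor"})
for inst in rank(candidates, priorities):
    print(inst.name)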

5 Decision Support

To illustrate the utilization of the Goal Solution Scheme in a design decision, an example from a case study is discussed. The case study deals with the development of a new version of the control unit of a nanopositioning and measuring machine [Mu09]. The fields of application of this nanopositioning machine are semiconductor production, biotechnology research, and nano-scale production and research. The objective of the machine consists in a control of the position and the trajectory of an object with a high precision and a high positioning speed. To achieve a high resolution during motions over long distances, classical motion controllers have to be replaced by dynamic control solutions with a higher control loop frequency, which minimize the dynamic control error by estimating the nonlinear dominant disturbance, e.g. friction and stick-slip motion. As a result, the computing requirements increase by orders of magnitude. The production figures for this machine are low; 100 to 200 units per year can be expected.

Figure 3: Positioning control unit as an embedded system (a distributed embedded system with ADC inputs, measurement correction, topology adaption, a service system, the control, and DAC outputs; * = algorithms to change)

The control unit considered here consists of different parts (Figure 3). There are parts for the correction of the deviation of measured input data, for the adaptation of the topology of the measuring and positioning environment, a service system as an interface for user interaction, and the actual controller. The control unit works with blocks for the various control algorithms. There is a demand for a new version of the machine with an increased data volume, speed, and complexity of the control. This calls for a replacement of the formerly used single-processor platform by a more powerful one. Furthermore, the processor has to provide floating-point operations. The control algorithms are implemented on the basis of models using a tool chain with MATLAB Simulink based code generation [Mu09]. As an alternative, the tool chain supports the derivation of an implementation as programmable hardware via the hardware description language HDL.

For an illustration of the approach, the decision on the platform technology for the control unit implementing the control algorithms is used as an example. For this decision there are three major alternatives:

1. Software implementation using a real-time operating system (RTOS) as platform;
2. Software implementation applying a static scheduling design, without RTOS;
3. Implementation on a heterogeneous platform with programmable hardware, such as FPGA or ASIC.

29 For both alternatives 1 and 2 there is another choice between a multicore processor and a distributed system as processing platform.
Figure 4: Goal Solution Scheme of the case study (partly) (shown in the original as a diagram with the columns I Quality Goals, II Subgoals, III Solution Principles, and IV Solution Instruments; legend: positive impact / negative impact)
The goals and their priorities constitute an input for the considered case study. The goals and subgoals for the implementation of the control unit result from the objectives of the nanopositioning and measuring machine:
a. Efficiency regarding performance: real-time constraints, high positioning speed for a high precision; the most important competitive feature of the system;
b. Changeability: the ability to change or replace the positioning algorithms even late in the development process; highly important for the optimization of the precision and speed of the control;
c. Portability: exchange of the hardware platform, of the computing platform (processor), and of the communication infrastructure; important;
d. Efficiency regarding minimized energy consumption of the control unit: need for additional cooling; medium priority, because the control unit could be placed outside.
Goal (a) performance was given the highest priority by the stakeholders. Goal (b) changeability got a high priority, whereas goals (c) and (d) are not discussed further in this paper. Goal (a) is twice as important as goal (b). Therefore, using the well-known analytic hierarchy process, the resulting priorities are 0.66 for goal (a) and 0.33 for goal (b). During the decision-making, constraints and preconditions are evaluated. Constraints that were identified during requirements analysis and architectural analysis include dependencies between certain development tools, methods, and available hardware.
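To make the derivation of the goal priorities concrete, the following sketch shows how the analytic hierarchy process turns the single pairwise judgement above (goal (a) twice as important as goal (b)) into normalized priorities. The helper is a generic eigenvector computation and not part of the tool support described in this paper; for the two-goal case it yields 0.67 and 0.33 (rounded to 0.66 and 0.33 in the text).

```python
# Illustrative sketch (not from the paper): deriving goal priorities from a
# pairwise comparison matrix with the analytic hierarchy process (AHP).
import numpy as np

def ahp_priorities(pairwise):
    """Principal eigenvector of a reciprocal pairwise-comparison matrix,
    normalized so that the priorities sum to 1."""
    values, vectors = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vectors[:, np.argmax(np.real(values))])
    return principal / principal.sum()

# Goal (a) performance is judged twice as important as goal (b) changeability.
comparison = [[1.0, 2.0],
              [0.5, 1.0]]
for goal, prio in zip(["performance", "changeability"], ahp_priorities(comparison)):
    print(f"{goal}: {prio:.2f}")   # -> performance: 0.67, changeability: 0.33
```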

30 Preconditions control the preselection. For our example, some of the various preconditions for solution instruments have been generalized to classes (see Table 1). According to the constraints of the project and to those derived from previous decisions, a subset of the available solution instruments is preselected for a further evaluation. We removed the ASIC alternative due to cost and risk. Furthermore, we decided to restrict ourselves to target hardware and tool chains that are already available for the development project, more precisely C6000 series floating-point DSPs from Texas Instruments and Virtex V and Spartan III series FPGAs from Xilinx. Therefore only the Code Composer compiler and the ertos operating system alternative remain.
Table 1: Preconditions for the application of solution instruments (examples, generalized)
Solution instrument | Preconditions
RTOS | Support by selected processor
Compiler | Support for selected processor
Profiling | Support by selected processor
Vendor library | Support for selected compiler
Vendor library | Support for selected processor
FPGA | Support by HDL vendor and tool chain
ASIC | Support by HDL vendor and tool chain
ASIC | Cost: minimum number of units
The next step of the application of the Goal Solution Scheme consists in a consideration of the solution principles (layer III) for the discussed goals. Due to space reasons, only a part of the solution principles is covered by the example (see Figure 4). For the subgoal performance we consider the following solution principles as the ones with a positive impact: code optimization, static scheduling design, and parallelization. Related to them there are further solution principles: optimizing compiler, hardware implementation, and distributed and multicore processing. A related solution principle with a negative impact is RTOS because of its overhead. The solution principle performance-oriented coding style is not considered further because the preconditions for its application are not fulfilled. For the goal changeability, the two subgoals platform independence and abstraction are relevant. For platform independence, layer III provides the solution principles model-based code generation, HDL based hardware implementation, and, with a negative impact, code optimization. For abstraction, the solution principles RTOS and static scheduling (with a negative impact) are the related ones.
There is a trade-off situation between the competing goals (or subgoals) performance and changeability, which is represented in the Goal Solution Scheme by mutually negative impact relations on layer III (dashed lines in Figure 4). The explicit representation of the modelled dependencies in the scheme facilitates the resolution of this competition during the decision-making. In our example there are two solution principles with a positive impact on both goals, static scheduling and HDL based hardware implementation. Model-based code generation and HDL based hardware implementation have a positive impact on platform independence and further on changeability because of the support of the tool chain. Static scheduling is supported by model-based code generation due to a derivation of related models by the tool chain.
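A minimal sketch of the two computational steps just described, the precondition-based preselection and the subsequent weighting of impact values by goal priorities, is given below. The constraint flags, precondition thresholds, and impact numbers are invented for illustration only; merely the instrument and goal names follow the case study.

```python
# Illustrative sketch only: precondition-based preselection followed by a
# priority-weighted ranking. Impact values and thresholds are invented.
constraints = {"processor": "C6000", "hdl_toolchain": True, "expected_units": 200}

# Generalized preconditions per instrument, expressed as simple attributes (cf. Table 1).
instruments = {
    "ertos":         {"needs_processor": "C6000"},
    "Code Composer": {"needs_processor": "C6000"},
    "FPGA":          {"needs_hdl_toolchain": True},
    "ASIC":          {"needs_hdl_toolchain": True, "needs_min_units": 10_000},
}

def preselect(attrs, c):
    """An instrument is applicable only if all of its preconditions are met."""
    cpu_ok   = attrs.get("needs_processor") in (None, c["processor"])
    hdl_ok   = not attrs.get("needs_hdl_toolchain") or c["hdl_toolchain"]
    units_ok = c["expected_units"] >= attrs.get("needs_min_units", 0)
    return cpu_ok and hdl_ok and units_ok

candidates = [n for n, attrs in instruments.items() if preselect(attrs, constraints)]
# ASIC drops out: the expected production figures are far below its threshold.

# Ranking: weight the (invented) impact values of the remaining instruments
# with the goal priorities derived above.
priorities = {"performance": 0.66, "changeability": 0.33}
impact = {"FPGA":          {"performance": 0.8,  "changeability": 0.4},
          "ertos":         {"performance": -0.4, "changeability": 0.6},
          "Code Composer": {"performance": 0.5,  "changeability": 0.1}}
scores = {n: sum(impact[n][g] * p for g, p in priorities.items()) for n in candidates}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:+.2f}")   # e.g. FPGA ranked first, ertos last
```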

31 The impact of RTOS on model-based code generation is negative because a special support of a certain RTOS by code generators is rarely available. Furthermore, the relation between model-based code generation and distributed processing shows a negative impact because special support for distribution and configuration by code generators is rarely available. These and the other dependencies lead to higher scores of the solution principles model-based code generation and HDL based hardware implementation compared to other ones such as RTOS. This results in a higher ranking of the related solution instruments as proposed solutions for the design decisions. For the given GSS the impact values between solution instruments and goals have been calculated by backward propagation (for details on the calculation procedure see [BR10]). The resulting matrix contains values between -1 and 1, where the minimum-to-maximum magnitude ratio is 1:5. Thus the resulting matrix is equivalent to a -5 to +5 evaluation scheme. By multiplying the matrix with the goal priorities, the final impact values of the solution instruments result. The final ranking (best to worst) is as follows: static scheduling, FPGA, multicore processing, distributed processing, vendor libraries, profiling, optimizing compiler, custom libraries, and ertos.
6 Conclusion and Future Work
In this paper, decisions for embedded system design are discussed as transitions from problem space to solution space with dependency relationships as links. By an explicit representation of these dependencies in the presented Goal Solution Scheme, the treatment of complex goals, constraints, and dependencies during the decision-making process is simplified. A resolution of competing goals and a simultaneous consideration of multiple design principles and candidate solutions by a developer are facilitated. The application of the Goal Solution Scheme is explained by decision-making examples from a larger case study. As future work, the extension of the Goal Solution Scheme by more solution principles regarding performance, parallelization, and distributed system design is intended. Furthermore, cost as a major goal shall be included. Related to this goal, principles and procedures for economic decisions have to be analyzed, formulated and incorporated into the layers III and IV of the scheme. Other next steps involve the extension of the tool support for the selection of a broader variety of solution instruments regarding their technological constraints, for example of more real-time operating systems and libraries with their options for profiling and code optimization, and of the various programmable hardware technologies and products with their concrete dependency relations to tool support. With a high coverage of dependencies between the relevant solution instruments, the method of goal-oriented design shall be integrated with the concept of system synthesis [St10] for a certain platform.

32 7 Acknowledgements The research on model-based decision methods has been partly funded by the federal state Thuringia and the European Regional Development Fund ERDF through the Thüringer Aufbaubank under grant 2007 FE The project of the nanopositioning and measuring machine has been partly funded by the German Research Council DFG under grant SFB References [BK02] Bass, L.J.; Klein, M.; Bachmann, F.: Quality Attribute Design Primitives and the Attribute Driven Design Method. In (F. van der Linden Ed.): Proc. Workshop Software Product-Family Engineering PFE2001, Springer, Berlin, 2002, pp [Bo00] Bosch, J.: Design and Use of Software Architectures. Addison Wesley, New York, [Bo09] Bode, S.; Fischer, A.; Kühnhauser, W.; Riebisch, M.: Software Architectural Design meets Security Engineering. In: Proc. 16th Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems (ECBS2009), San Francisco, CA, USA, April IEEE CS, 2009, pp [BR10] Bode, S.; Riebisch, M.: Impact Evaluation for Quality-Oriented Architectural Decisions Regarding Evolvability. In: M.A. Babar and I. Gorton (Eds.): 4th European Conference on Software Architecture (ECSA2010), LNCS 6285, Springer, 2010, pp [BR95] Barnett, W.D.; Raja, M.K.: Application of QFD to the software development process. International Journal of Quality & Reliability Management, Vol. 12, No 6, 1995, pp [BM96] Buschmann, F.; Meunier, R.; Rohnert, H.; Sommerlad, P.: Pattern-Oriented Software Architecture, Volume 1: A System of Patterns. Wiley [CP09] Chung, L.; do Prado Leite, J.: On Non-Functional Requirements in Software Engineering. In Borgida, A. et al. (Eds.): Conceptual Modeling Foundations and Applications. Springer, Berlin, 2009, pp [GH95] Gamma, E.; Helm, R.; Johnson, R.; Vlissides, J.: Design Patterns. Elements of Reusable Object-Oriented Software. Addison Wesley, 1995 [Ma10] MATLAB and Simulink, The Mathworks, Inc. 3 Dec 2010 [Mu09] Müller, M.; Amthor, A.; Fengler, W.; Ament, C.: Model-driven Development and Multiprocessor Implementation of a Dynamic Control Algorithm for Nanomeasuring Machines. In: Journal of System and Control Engineering (JSCE), Vol. 223, No 3, 2009, pp , Prof. Engin. Pub., London. [RW07] Riebisch, M.; Wohlfarth, S.: Introducing Impact Analysis for Architectural Decisions. Proc. ECBS2007, IEEE CS, 2007, pp [St10] [Tr00] Streubühr, M.; Gladigau, J.; Haubelt C.; Teich, J.: Efficient Approximately-Timed Performance Modeling for Architectural Exploration of MPSoCs. In Borrione D. (Ed.): Advances in Design Methods from Modeling Languages for Embedded Systems and SoC's, LNEE 63. pp , Springer Netherlands, Triantaphyllou, E.: Multi-Criteria Decision Making: A Comparative Study. Kluwer, Dordrecht, [Wa05] Wasson, C.S.: System Analysis, Design, and Development: Concepts, Principles, and Practices. Wiley, 2005.

33 Eine durchgängige Entwicklungsmethode von der Systemarchitektur bis zur Softwarearchitektur mit AUTOSAR
Jan Meyer und Jörg Holtmann
s-lab Software Quality Lab, Fachgruppe Softwaretechnik, Heinz Nixdorf Institut, Universität Paderborn, Warburger Str. 100, Paderborn [jmeyer
Abstract: Heutige Steuergeräte im Automobilbereich zeichnen sich durch eine hohe Funktionsvielfalt und eine hohe Vernetzung untereinander aus. Dies führt zu immer komplexeren Systemen, wobei auch immer mehr sicherheitskritische Funktionen durch Software realisiert werden. Damit die Qualität der Software entsprechend hoch und zufriedenstellend ist, erfordert die Entwicklung ein systematisches und prozesskonformes Vorgehen. Der für die Softwarearchitektur entwickelte AUTOSAR Standard ist allerdings nicht für die frühen Entwicklungsphasen wie die Anforderungsanalyse und das Systemarchitekturdesign gedacht, welche von Prozessbewertungsmodellen wie Automotive SPICE gefordert werden. Als Lösung für die Analyse bietet sich die Nutzung der Systems Modeling Language (SysML) mit Anpassungen bzw. Erweiterungen an die Bedürfnisse der Automobilindustrie an. Damit aber keine Lücke im Entwicklungsprozess entsteht, wird bei der hier vorgestellten Methode ein wohldefinierter Übergang zur AUTOSAR Architektur, und zwar zur Applikations- und zur Basissoftware, definiert.
1 Motivation und Problemstellung
In der Automobilbranche wird immer mehr Funktionalität durch Software realisiert [Bro06]. Dabei ist zu erkennen, dass die verschiedenen Softwaresysteme immer mehr über heterogene Netzwerke miteinander kommunizieren und letztendlich ein komplexes Netzwerk von Funktionen darstellen [PBKS07]. Früher wurde für jede Funktionalität ein eigenes Steuergerät (Electronic Control Unit - ECU) entwickelt. Durch die Zunahme der Funktionalität ist die Anzahl der Steuergeräte auf deutlich über 60 gestiegen [Vog05]. Gerade in letzter Zeit entsteht neue Funktionalität, z. B. im Bereich der Fahrerassistenzsysteme. Eine Realisierung dieser neuen Systeme ist allein durch die Hinzunahme von neuen Steuergeräten nicht mehr möglich. Vielmehr wird erst durch die Kombination von neuen und bereits existierenden Funktionen, die sich auf vorhandenen Steuergeräten befinden, eine ganz neue Funktionalität realisierbar. (Diese Arbeit entstand teilweise im Rahmen des SPES2020 Projektes, gefördert durch das Bundesministerium für Bildung und Forschung (BMBF), Förderkennzeichen SPES2020, 01IS08045H.) Die Tendenz geht somit immer mehr von einer

34 Steuergeräte- hin zu einer Netzwerk-orientierten Sichtweise. Deshalb spielt die Spezifikation des verteilten Verhaltens, der Kommunikation und der Datenkonsistenz eine immer größere Rolle. Im Bereich der Softwareentwicklung für die automobilen Steuergeräte nimmt der AUTO- SAR Standard an Bedeutung zu. Ein wesentliches Ziel des AUTOSAR Standards und der damit verbundenen Entwicklungsmethodik ist es, die immer weiter steigende (Software- ) Komplexität beherrschbar zu machen. Grundlage hierfür ist eine standardisierte Softwarearchitektur, welche u. a. die Entwicklung von AUTOSAR Softwarekomponenten unterstützt, die flexibel auf verschiedene ECUs verteilt werden können (vgl. Abschnitt 2). Dabei wird eine konkrete Softwarearchitektur gemäß der im Standard definierten Referenzarchitektur erstellt und anschließend der entsprechende Quellcode generiert. Der AUTOSAR Standard lässt aber offen, wie die notwendigen Informationen zur Erstellung der Softwarearchitektur ermittelt werden, da eine explizite Analysephase nicht vorgesehen ist [GHK + 07]. Der Zulieferer erhält vom Automobilhersteller (OEM) eine Vielzahl von Anforderungen und ggf. Modellinformationen. Das Know-How und das Expertenwissen liegen aber bei dem Zulieferer [BS05]. Deshalb sind die Anforderungen an das zu entwickelnde Steuergerät oft ungenau, lückenhaft und zum Teil widersprüchlich [ABNM06]. Von daher muss der Zulieferer zunächst in einer Analysephase die Anforderungen analysieren. Dies wird beispielsweise in dem Vorgehen nach Automotive SPICE gefordert [MHDZ07]. Weiterhin ist der Analyseschritt besonders für sicherheitskritische Funktionen unerlässlich und wird ebenso in der neuen Sicherheitsnorm ISO vorgeschrieben (vgl. [Int10]). Innerhalb der Analysephase können auftretende Missverständnisse mit dem OEM geklärt und ungenaue Anforderungen durch das Expertenwissen ergänzt werden, so dass auf Basis der Anforderungen das Pflichtenheft erstellt werden kann. Für die Analyse der Anforderungen hat sich ein modellbasierter Ansatz durchgesetzt. Während früher die strukturierte Analyse/strukturiertes Design (SA/SD) genutzt wurde, hat sich heutzutage die Verwendung der Systems Modeling Language (SysML) bzw. der Unified Modeling Language (UML) durchgesetzt. Im Automobilbereich kann zu diesem Zwecke aber ebenso die domänenspezifische Sprache EAST-ADL2 verwendet werden [ATE08], die eine Erweiterung der SysML darstellt. Diese modellbasierten Ansätze werden im Entwicklungsprozess vor der Verwendung des AUTOSAR Standards genutzt, um eine Analyse durchzuführen. Damit diese Modellierungssprachen möglichst optimal für den Automobilbereich genutzt werden können, ist eine Anpassung an diesen unerlässlich. So spielt vor allem die Echtzeitfähigkeit bei der Modellierung eine entscheidende Rolle. Um die zeitlichen Informationen in einem formalen und somit analysierbaren Verhaltensmodell zu spezifizieren, haben wir die SysML um Real-Time Statecharts, welche die UML Zustandsautomaten um die Semantik der Timed Automata erweitern, ergänzt [HMSN10]. Für die Verhaltensmodellierung eines automobilen Steuergerätes muss aber ebenso das Betriebssystem mit den verschiedenen Tasks und deren Aktivierung spezifiziert werden. Hierzu wird in diesem Papier eine Modellierungsmöglichkeit dargestellt, wie das Betriebssystem in dem Systemmodell spezifiziert werden kann. Des Weiteren wird ein Konzept dargestellt, wie die Lücke im Entwicklungsprozess zwischen den frühen Modellierungsphasen mit der SysML bzw. 
UML und der Softwarearchitektur im AUTOSAR Standard durch größtenteils automatische Mechanismen geschlossen werden kann. Somit wird

35 der Übergang von der Systemmodellierung hin zur AUTOSAR Softwarearchitektur verbessert. Hierzu werden bereits vorhandene Informationen aus dem Systemmodell in der Softwarearchitektur wiederverwendet. Dabei wird in AUTOSAR nicht nur die Applikationssoftware betrachtet, sondern auch die Erstellung bzw. Konfiguration der Basissoftware. Das Konzept stellt eine werkzeugunterstützte und methodische Vorgehensweise für die Erstellung der Konfigurationsdaten dar, wie sie in [NWWS06] gefordert und in [MS09] bereits in Teilen vorgestellt wurde. Hierdurch wird der Entwicklungsprozess auf der einen Seite deutlich optimiert, auf der anderen Seite werden durch die Wiederverwendung von Daten Redundanzen und somit Fehlerquellen eliminiert und möglichen Inkonsistenzen vorgebeugt.
Das Papier ist wie folgt gegliedert: Nach der eben beschriebenen Motivation und der Problemstellung wird in Kapitel 2 der AUTOSAR Standard genauer erläutert. Hierbei stehen die Referenzarchitektur und das darin beinhaltete Vorgehen im Vordergrund. In Kapitel 3 wird das SysML-Systemmodell mit den notwendigen Erweiterungen vorgestellt, welches die Basis für die Erstellung einer initialen Softwarearchitektur in AUTOSAR darstellt. Anschließend wird der Übergang zu AUTOSAR erläutert. Im Anschluss an diesen Übergang werden verwandte Arbeiten in Kapitel 4 betrachtet, bevor in Kapitel 5 eine Zusammenfassung und ein Ausblick gegeben werden.
2 Der AUTOSAR Standard
Der AUTOSAR Standard [AUT10a] wurde von Automobilherstellern, Zulieferern und Toolherstellern entwickelt, um eine gemeinsame Referenzarchitektur an Stelle der bisher üblichen Standardarchitekturen eines jeden Herstellers zur Softwareentwicklung in einem Automobil zu definieren. Er zeichnet sich durch standardisierte Schnittstellen und Abstraktionsschichten aus und besitzt eine eigene Methodik. Der Standard gewährleistet, dass einzelne Komponenten leichter austauschbar sind bzw. wiederverwendet werden können. Dies wird durch die standardisierten Schnittstellen sowie eine standardisierte Laufzeitumgebung - den Virtual Functional Bus (VFB) und seine technische Realisierung, die Runtime Environment (RTE) - ermöglicht.
Die ECU Architektur nach dem AUTOSAR Standard ist in Abbildung 1 zu sehen.
Abbildung 1: AUTOSAR Architektur (nach [AUT10a])
Die Software, welche die eigentliche Funktionalität umsetzt, befindet sich auf der höchsten Architekturebene (Applikationssoftware). Darunter befindet sich die Runtime Environment (RTE), die von der Basissoftware (BSW) abstrahiert. Die Basissoftware lässt sich in drei verschiedene Schichten (Service-, Abstraktions- und Treiberschicht) und sechs

36 verschiedene Stacks (Service-, Geräte-, Speicher-, Kommunikations- und I/O-Stack sowie die Complex Device Driver) unterteilen. Der Übergang zwischen den einzelnen Schichten ist dabei durch standardisierte Schnittstellen spezifiziert. Beispielsweise werden im Kommunikations-Stack alle Einstellungen für eine korrekte Kommunikation des Systems vorgenommen.
Neben der Einführung der Schnittstellen beinhaltet der AUTOSAR Standard auch eine neue Entwicklungsmethodik (vgl. Abbildung 2). Diese sieht aber keine Anforderungsanalyse vor, wie sie beispielsweise von Automotive SPICE gefordert wird. Vielmehr beruht sie darauf, dass zunächst das System und dann anschließend die einzelnen Steuergeräte (ECUs) konfiguriert werden. Dies geschieht in Beschreibungsdokumenten (System Description bzw. ECU Description) in Form von AUTOSAR XML-Dateien. In der System Description wird dabei nur die Applikationssoftware (logische Architektur) betrachtet, wobei in der ECU Description zusätzlich die Basissoftware und die Konfigurationen zur Erstellung der RTE spezifiziert sind (siehe [AUT10b]).
Abbildung 2: Die AUTOSAR Methodik
Die Informationen für die notwendigen Einstellungen zur Generierung der RTE (z. B. das Internal Behavior (IB) mit implizitem bzw. explizitem Kommunikationsverhalten und die Kapselung in Datenstrukturen bzw. Nutzung von Exclusive Areas, oder bei mehrfacher Instanziierung die Verwendung von Per-Instance-Memories zur Sicherung der Datenkonsistenz) und die Konfiguration der Basissoftware werden von AUTOSAR vorausgesetzt. Dies sind bei den heutigen umfangreichen Produkten aber komplexe Daten und können daher nicht direkt aus den Anforderungen abgeleitet werden. Somit ist eine Anforderungsanalysephase notwendig, um diese Informationen in einer formalen, vollständigen, korrekten und sich nicht widersprechenden Weise vorliegen zu haben. Nach der Konfiguration erzeugen Generatoren den entsprechenden Code (siehe Abbildung 2).
Die Basissoftware macht einen Großteil (projektabhängig bis zu 50%) der Gesamtsoftware aus [Hel10]. Ihre Aufgabe ist es, von der Hardware zu abstrahieren und so die Hardwareunabhängigkeit der Applikationssoftware zu unterstützen. Im AUTOSAR Standard wurden viele Module der BSW vordefiniert. Dies bedeutet, dass die Schnittstellen und die Struktur der Module spezifiziert wurden. Gleichwohl können und müssen aber die Module noch konfiguriert werden, um sie dem jeweiligen System anzupassen (vgl. [AUT10b]). Beispielsweise sind einige Schnittstellen als optional spezifiziert. Es muss projektabhängig entschieden werden, ob sie notwendig sind oder nicht. Durch den AUTOSAR Standard gibt es einen Paradigmenwechsel von einer Codierung hin zu einer Konfiguration der Software. Diese Konfiguration ist, wie die Codierung, durch entsprechende Werkzeuge zu unterstützen oder zu automatisieren, damit eine manuelle Pflege minimiert und daher

37 eine möglichst schnelle und fehlerunanfällige Entwicklung realisiert werden kann.
Abbildung 3: Zusammenspiel Systemmodellierung und AUTOSAR
3 Von der System- zur Softwarearchitektur
In dem hier vorgestellten Ansatz wird die erwähnte Lücke zwischen der Anforderungsanalyse und der Softwarearchitektur mit dem AUTOSAR Standard durch die Modellierungssprachen SysML bzw. UML geschlossen (siehe ebenso [GHK+07]). Die verwendeten Modellierungssprachen bieten weitreichende Design- und Analysemöglichkeiten, um die Lücke zwischen der Anforderungsanalyse und dem Softwaredesign zu schließen. So können die nur informell vorliegenden Anforderungen analysiert und auf ihre Vollständigkeit und Konsistenz überprüft werden. Dabei werden Use-Cases genutzt, die dann durch Aktivitäts- und Sequenzdiagramme weiter verfeinert werden. So ist es möglich, einzelne Szenarien darzustellen, die das zu entwickelnde System erfüllen muss. Dieser Ansatz ist vergleichbar mit dem in [PS09]. Aus den in dieser Analyse gewonnenen Daten lässt sich eine Systemarchitektur in Form von Blockdefinitions- und internen Blockdiagrammen erstellen. Diese Systemarchitektur ist Grundlage für die Entscheidung, welche Elemente in Form von Hardware und welche in Form von Software realisiert werden. Dies wird durch die Stereotypen hardware bzw. software dokumentiert. Zusätzlich werden für die Systemmodellierung bei eingebetteten Systemen aus dem Automobilbereich noch die Kommunikationstopologie, die Verteilung der Software auf die Hardware (Deployment) sowie die Modellierung von Betriebssystemeigenschaften zu einer vollständigen Modellierung benötigt (vgl. Abbildung 3). Wenn das Systemmodell diese Informationen beinhaltet, ist das zu entwickelnde System so detailliert beschrieben, dass auf der einen Seite die getroffenen Design- und Architekturentscheidungen durch eine Simulation bereits in frühen Entwicklungsphasen überprüft und ggf. geeignete Gegenmaßnahmen eingeleitet werden können (vgl. [NMK10]). Dies ermöglicht eine frühere und somit kostengünstigere Fehlerbeseitigung im Vergleich zu einer konventionel-

38 len Überprüfung der Architektur, bei der zunächst große Teile des Systems implementiert und zu einem Prototyp integriert werden müssen (siehe [Ric06]). Zum anderen können Informationen im Softwarearchitekturmodell wiederverwendet werden, was in Kapitel 3.2 erläutert wird.
3.1 Erweiterungen für die Architekturmodellierung
In der Abbildung 3 ist ein durchgehender Entwicklungsprozess von der Systemarchitektur bis zur Implementierung dargestellt. Um alle relevanten Bestandteile eines eingebetteten Systems des Automobilbereichs zu beschreiben, haben wir Erweiterungen an der SysML vorgenommen. In der Kommunikationsmatrix (System Description) befindet sich die Spezifikation der Kommunikationsstruktur innerhalb des Fahrzeugs. In dieser können die ein- und ausgehenden Signale des zu erstellenden Systems identifiziert werden. Gleichzeitig werden hierbei die Versendungszeitpunkte der Bussignale in das Modell importiert und können für spätere Analysen genutzt werden. Damit keine Fehler bei der Übertragung entstehen, haben wir einen automatischen Import und zusätzliche Modellierungsmöglichkeiten entwickelt. Der Import speichert die Daten im Architekturmodell in Schnittstellen, die mit den zusätzlichen Stereotypen (SignalGroup und Signal) versehen sind. Beispielsweise werden die Bussignale Lichtwerte und Fahrdaten mit den Signalen Lichtsensorwert und Klemme 58 bzw. Geschwindigkeit importiert und zu einer Schnittstelle Innenlicht mit den Attributen Lichtsensorwert, Klemme 58 und Geschwindigkeit transformiert (vgl. die linken beiden Boxen in Abbildung 4).
Abbildung 4: Verwendung und Transformation von Kommunikationsdaten
Dabei werden nicht nur die Namen der einzelnen Signale berücksichtigt, sondern ebenso Wertebereiche, Skalierung, Datenlänge etc. Für diesen automatischen Import haben wir die verschiedenen Kommunikationsbeschreibungen (Beschreibung für CAN, LIN und AUTOSAR) analysiert und in einem Meta-Modell zusammengefasst, so dass der Import für verschiedene Kommunikationsbeschreibungen verwendet werden kann.
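Der beschriebene Signal-Import lässt sich vereinfacht wie in der folgenden Skizze darstellen. Die Datenstrukturen und Attributwerte (z. B. die Signallängen) sind frei gewählte Annahmen und kein Auszug aus dem tatsächlichen Werkzeug; gezeigt wird lediglich das Prinzip, Bussignale der Kommunikationsmatrix zu einer Schnittstelle mit den Stereotypen SignalGroup und Signal zusammenzufassen.

```python
# Vereinfachte, hypothetische Skizze des Signal-Imports (kein realer Werkzeug-Code):
# Bussignale der Kommunikationsmatrix werden zu einer Schnittstelle zusammengefasst.
from dataclasses import dataclass, field

@dataclass
class Signal:                       # Stereotyp <<Signal>>
    name: str
    laenge_bit: int                 # weitere Attribute (Wertebereich, Skalierung) analog

@dataclass
class SignalGroup:                  # Stereotyp <<SignalGroup>> = Bussignal
    name: str
    signale: list[Signal] = field(default_factory=list)

@dataclass
class Schnittstelle:                # Ziel im SysML-Architekturmodell
    name: str
    attribute: list[Signal] = field(default_factory=list)

def importiere(gruppen: list[SignalGroup], ziel: str) -> Schnittstelle:
    """Fasst alle Signale der angegebenen Bussignale zu einer Schnittstelle zusammen."""
    schnittstelle = Schnittstelle(ziel)
    for gruppe in gruppen:
        schnittstelle.attribute.extend(gruppe.signale)
    return schnittstelle

# Beispiel aus dem Text; die Bit-Längen sind erfunden.
lichtwerte = SignalGroup("Lichtwerte", [Signal("Lichtsensorwert", 8), Signal("Klemme 58", 1)])
fahrdaten  = SignalGroup("Fahrdaten",  [Signal("Geschwindigkeit", 16)])
innenlicht = importiere([lichtwerte, fahrdaten], "Innenlicht")
print([s.name for s in innenlicht.attribute])
# -> ['Lichtsensorwert', 'Klemme 58', 'Geschwindigkeit']
```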

39 Neben der Verteilung von Software auf Hardware muss der Systemarchitekt ebenso die Betriebssystemeigenschaften (Priorität, Taskdauer, Funktionszuordnung) modellieren, da sie maßgeblichen Einfluss auf die Performance eines Systems haben. Für die Modellierung der Betriebssystemeigenschaften existieren derzeit in SysML keine Beschreibungsmittel. Aus diesem Grund haben wir die SysML erweitert, um die Spezifikation eines OSEK bzw. AUTOSAR Betriebssystems [AUT10a] zu ermöglichen. Es existieren Stereotypen für die Elemente wie Tasks, Alarme, Interrupts, Counter etc. Ein Beispiel hierfür findet sich in Abbildung 5.
Abbildung 5: Modellierung von Betriebssystemeigenschaften
Dort ist ein Task Innenlicht abgebildet, der zyklisch alle 50 ms aufgerufen wird und in dem die beiden Funktionen updatesignals (in der z. B. der Status der Klemme 58 aktualisiert wird) und switchlights nacheinander aufgerufen werden (spezifiziert durch die Guards an den entsprechenden Kontrollflüssen). Der Vorteil der formalen Modellierung der Betriebssysteminformationen ist, dass ausgehend von diesen Informationen eine Simulation durchgeführt werden kann, um die Designentscheidungen zu überprüfen. Das zu Grunde liegende Simulationsmodell wird dabei automatisch aus den im Systemmodell hinterlegten Informationen über die Architektur und die Betriebssystemeigenschaften generiert [NMK10]. Bei einem positiven Ergebnis der Simulation kann ein automatischer Export in die sogenannte OSEK Implementation Language (OIL) durchgeführt werden, damit die Daten im AUTOSAR Softwarearchitekturmodell oder speziellen AUTOSAR Konfigurations-Werkzeugen weiterverwendet werden können (vgl. Abbildung 3).
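Wie ein solcher Export prinzipiell aussehen kann, deutet die folgende Skizze an: Aus den modellierten Betriebssystemeigenschaften des Tasks Innenlicht wird ein OIL-ähnliches Textfragment erzeugt. Das Fragment ist hypothetisch und nicht gegen die OIL-Spezifikation oder das tatsächliche Exportformat des Werkzeugs geprüft; Prioritätswert und Counter-Name sind frei gewählt.

```python
# Hypothetische Skizze eines Exports der modellierten OS-Eigenschaften in ein
# OIL-ähnliches Fragment; Syntax nur angedeutet, Werte (PRIORITY, Counter) sind Annahmen.
from dataclasses import dataclass

@dataclass
class TaskModell:
    name: str
    zyklus_ms: int          # zyklische Aktivierung, z. B. alle 50 ms
    prioritaet: int
    funktionen: list[str]   # Aufrufreihenfolge innerhalb des Tasks

def als_oil(task: TaskModell, counter: str = "SystemTimer") -> str:
    return (
        f"TASK {task.name} {{\n"
        f"  PRIORITY = {task.prioritaet};\n"
        f"  ACTIVATION = 1;\n"
        f"}};  /* Funktionen: {', '.join(task.funktionen)} */\n"
        f"ALARM Alarm_{task.name} {{\n"
        f"  COUNTER = {counter};\n"
        f"  ACTION = ACTIVATETASK {{ TASK = {task.name}; }};\n"
        f"  AUTOSTART = TRUE {{ ALARMTIME = {task.zyklus_ms}; CYCLETIME = {task.zyklus_ms}; }};\n"
        f"}};\n"
    )

innenlicht = TaskModell("Innenlicht", zyklus_ms=50, prioritaet=3,
                        funktionen=["updatesignals", "switchlights"])
print(als_oil(innenlicht))
```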

40 Abbildung 6: AUTOSAR Kommunikations-Stack mit Änderungspunkten (im Original als Grafik; markierte Änderungspunkte: Änderung an der Schnittstelle Innenlicht (Erweiterung um Signal Geschwindigkeit), Nutzung der Schnittstellen- und Kommunikationsdaten (zusätzliche IPDU Fahrdaten), Verwendung der Sender-Receiver-Informationen der Kommunikationsdaten (neuer Routingeintrag für Bussignal Fahrdaten), Verwendung der Größe der Kommunikationsdaten (Umwandlung des Bussignals in eine IPDU))
3.2 Der Übergang zu AUTOSAR
In dem Systemmodell befinden sich alle notwendigen und bereits durch die Simulation überprüften Daten des Systems. Der nächste Schritt ist nun die Übertragung der Informationen aus dem Systemmodell hin zu einem AUTOSAR Softwarearchitekturmodell. Dieser Schritt erfolgt heutzutage noch manuell und ist daher fehleranfällig. Vor allem die Redundanz der Daten in beiden Modellen ist von Nachteil. Zur Steigerung der Qualität und der Produktivität ist es naheliegend, dass auch dieser Schritt automatisiert werden muss. Hierzu kann auf die eben dargestellten Erweiterungen (Kommunikations- und Betriebssystemeigenschaften) zurückgegriffen werden. Aus diesen können Transformationen nach AUTOSAR entwickelt werden, die neben der Abbildung der Struktur [GHN10] auch eine Abbildung des Verhaltens der jeweiligen Komponente nach außen (Internal Behavior) erlauben. Dies ist besonders für die Erstellung der AUTOSAR Runtime Environment (RTE) notwendig, da diese Informationen hierfür benötigt werden. Die Daten sind dabei nicht nur für die Applikationssoftware von Bedeutung, sondern müssen gleichfalls in der Basissoftware (BSW) eingesetzt werden. Dies kann am Beispiel des Communication Stacks (COM-Stack) verdeutlicht werden, den wir durch die Ausnutzung der Informationen im Systemmodell vorkonfigurieren. Die Details sind in Abbildung 6 dargestellt. Hierbei wird ausgehend von dem Beispiel aus Abbildung 4 ein zusätzliches Signal Geschwindigkeit zu der Schnittstelle hinzugefügt. Es sind die Punkte aufgezeigt, an denen Informationen aus dem Systemmodell für die AUTOSAR Architektur eine Rolle spielen. Die Daten werden in den obersten vier Schichten des Stacks benötigt. In den Modulen COM und PduR werden Verteilungsinformationen genutzt. In diesen Modulen werden die Nachrichten für den Bus (in diesem Fall dem CAN-Bus) zusammengestellt und zum Versand bereitgestellt. Daher müssen diese Pakete mit Informationen versehen werden, auf welcher Hardware die jeweiligen Softwarekomponenten verteilt sind. Diese Informationen werden in der ECU-Description in Form einer XML-Datei gespeichert, beispielsweise in den Abschnitten IPDUMapping, IPDUGroup, ISignalItem, CANTPConnectionChannel etc. So werden die Bussignale Lichtwerte und Fahrdaten zu jeweils einer IPDUGroup und die Signale Lichtsensorwert, Klemme 58 und Geschwindigkeit zu einer IPDU (vgl. Abbildung 4). Somit ist eine Abbildung zwischen den Modellen der SysML und der AUTOSAR Spezifikation erstellt worden. So können die Informationen aus dem Verteilungsdiagramm und dem Verhalten der Schnittstellen genutzt werden, um die Werte in der ECU-Description zu setzen und somit eine Teil-Konfiguration der BSW vorzunehmen. Hierdurch bleibt dem Entwickler ein fehler- und arbeitsintensiver Schritt erspart und die Entwicklung kann zusätzlich auch noch beschleunigt werden. Ein ähnliches Vorgehen erfolgt für die Konfiguration des Betriebssystems im System-Stack.
4 Verwandte Arbeiten
Die Kombination von UML/SysML in frühen Entwicklungsphasen wie auch die Nutzung des AUTOSAR Standards in den späteren Entwicklungsphasen wird ebenso von anderen Ansätzen aufgegriffen. In [GB10] wird das Verhalten der einzelnen Komponenten in Statecharts modelliert und dann mit dem AUTOSAR Standard kombiniert. Dies bedeutet, dass der generierte C-Code als Implementation in das AUTOSAR Modell aufgenommen wird. Bei diesem Ansatz steht das Funktionsverhalten im Vordergrund und nicht die Modellierung des Gesamtsystems bzw. der Architektur. Bei der EAST-ADL2 [ATE08] ist vorgesehen, dass zunächst eine Analyse basierend auf einer angepassten SysML durchgeführt wird, bevor ein Übergang zu AUTOSAR erfolgt. Bei diesem Ansatz sind die Übergänge aber manuell vorzunehmen, so dass die in dieser

41 Arbeit vorgestellten Automatisierungen einen Mehrwert darstellen. Des weiteren ist zwar die Modellierung des Betriebssystems in EAST-ADL2 durch den Stereotype Operating- System vorgesehen, jedoch können nicht alle notwendigen Eigenschaften (Verteilung der Funktionen auf die Tasks) modelliert werden. In dem Ansatz [KPLJ08] wird eine Möglichkeit vorgestellt, wie bestehende Applikationssoftware nach AUTOSAR migriert werden kann. In dem Ansatz wird dabei die Konfiguration der Basissoftware teilweise gelöscht, statt weiter verwendet zu werden. Auch in [GHN10] wird dargelegt, wie Informationen aus einem SysML-Modell nach AUTO- SAR und zurück transformiert werden können. Der Ansatz hat den großen Vorteil, dass er in beide Richtungen eine Transformation der Daten ermöglicht. Jedoch wird dort der Schwerpunkt auf die Struktur der Applikationssoftware gelegt. Das Verhalten und die Konfiguration der Basissoftware wurden vernachlässigt. Somit wird nur ein Teil der Daten, die bereits im Systemmodell vorhanden sind genutzt. 5 Zusammenfassung und Ausblick In diesem Papier wurde ein Ansatz zur Nutzung der SysML bzw. UML für die frühen Systementwicklungsphasen vorgestellt. Hierbei sind die Anforderungsanalyse- und die Systemarchitekturdesignphasen ein Schwerpunkt, die durch den AUTOSAR Standard nicht abgedeckt werden [GHK + 07]. Für die Nutzung des SysML Standards ist eine Erweiterung der Systemmodellierung mit SysML notwendig, um alle automobilspezifischen Informationen darzustellen (Kommunikation und Betriebssystem). Mittels dieser Informationen ist eine Verifikation in frühen Entwicklungsphasen durch eine Simulation möglich (siehe [NMK10]). Die hierfür notwendigen Informationen sind ebenso für die Erstellung der Softwarearchitektur in AUTOSAR notwendig. Damit die Daten nicht redundant gehalten werden und somit zu Inkonsistenzen führen, wird eine automatische Teil-Konfiguration der Softwarearchitektur im AUTOSAR vorgestellt. Dabei wird nicht nur die Applikationssoftware betrachtet, sondern ebenso Teile der Basissoftware. Es wird davon ausgegangen, dass zunächst ein solches Systemmodell erstellt werden muss, da die derzeitigen Systeme zu komplex sind und daher eine eingehende Analyse unumgänglich ist. Durch die automatisierte Ermittlung und Übertragung der Daten, entfällt ein manueller und daher auch fehleranfälliger Schritt. Somit wird die Entwicklung beschleunigt. Anhand der Konfiguration des Kommunikations-Stacks und Teilen des System-Stacks (Betriebssystem) wurde die Machbarkeit demonstriert. Jedoch befinden sich noch weitere Informationen in dem Systemmodell, um weitere Basissoftware-Module zumindest teilweise zu konfigurieren (z. B. Speicherbelegung). Dies wird in zukünftigen Arbeiten weiter betrachtet. Literatur [ABNM06] L. Almefelt, F. Berglund, P. Nilsson und J. Malmqvist. Requirements management in practice: findings from an empirical study in the automotive industry. Research in Engineering Design, 17: , 2006.

42 [ATE08] ATESST2 Project. EAST-ADL2 Specification, [AUT10a] AUTOSAR GbR. AUTOSAR Specification, Version 4.0, [AUT10b] AUTOSAR GbR. Specification of ECU Configuration, Version 4.0, [Bro06] M. Broy. Challenges in Automotive Software Engineering. In 28th International Conference on Software Engineering, Seiten 33 42, New York, NY, USA, ACM. [BS05] B. Bouyssounouse und J. Sifakis. Embedded Systems Design. Springer, [GB10] A. Graf und M. Brückner. AUTOSAR und modellbasierte Softwareentwicklung - AUTOSAR-Integration mit Eclipse -. Elektronik Automotive, [GHK + 07] H. Grönniger, J. Hartmann, H. Krahn, S. Kriebel, L. Rothhardt und B. Rumpe. View- Based Modeling of Function Nets. In Proceedings of the 4th Workshop on Objectoriented Modeling of Embedded Real-Time Systems (OMER4), [GHN10] [Hel10] H. Giese, S. Hildebrandt und S. Neumann. Model Synchronization at Work: Keeping SysML and AUTOSAR Models Consistent. In Graph Transformations and Model- Driven Engineering, Jgg of LNCS, Seiten Springer, Hella KGaA Hueck & Co. Fallstudie Komfortsteuergerät des Projektes Software Plattform Embedded Systems 2020, [HMSN10] J. Holtmann, J. Meyer, W. Schäfer und U. Nickel. Eine erweiterte Systemmodellierung zur Entwicklung von softwareintensiven Anwendungen in der Automobilindustrie. In Software Engineering 2010, Jgg. 160 of GI-Edition - LNI. Bonner Köllen Verlag, [Int10] [KPLJ08] International Organization for Standards. ISO/DIS Road vehicles Functional safety, D. Kum, G. Park, S. Lee und W. Jung. AUTOSAR Migration from Existing Automotive Software. International Conference on Control, Automation and Systems 2008, [MHDZ07] M. Müller, K. Hörmann, L. Dittmann und J. Zimmer. Automotive SPICE in der Praxis. dpunkt Verlag, [MS09] J. Meyer und W. Schäfer. Automatische Analyse und Generierung von AUTOSAR- Konfigurationsdaten. MBEES: Modellbasierte Entwicklung eingebetteter Systeme V., Informatik-Bericht , [NMK10] U. Nickel, J. Meyer und T. Kramer. Wie hoch ist die Performance? Automobil- Elektronik, 03:36 38, Juni [NWWS06] A. Nechypurenko, E. Wuchner, J. White und D. Schmidt. Applying Model Intelligence Frameworks for Deployment Problem in Real-Time and Embedded Systems. MODELS, [PBKS07] A. Pretschner, M. Broy, I. Krüger und T. Stauner. Software Engineering for Automotive Systems: A Roadmap. Future of Software Engineering (FOSE 07), [PS09] [Ric06] K. Pohl und E. Sikora. COSMOD-RE: Verzahnung des Architekturentwurfs mit dem Requirements Engineering. OBJEKTspektrum, Architekturen/2009, K. Richter. The AUTOSAR Timing Model Status and Challenges. Leveraging Applications of Formal Methods, Verification and Validation, [Vog05] S. Voget. Future Trends in software architectures for automotive systems. AMAA - Yearbook, 2005.

43 Beschreibung der Plattformabhängigkeit eingebetteter Applikationen mit Dienstmodellen Simon Barner 1, Andreas Raabe 2, Christian Buckl 2 und Alois Knoll Abstract: Modellbasierte Entwicklung ist ein zunehmend populärer Ansatz, um der wachsenden Komplexität eingebetteter Anwendungen zu begegnen. Eine besondere Herausforderung besteht in der Berücksichtigung nicht-funktionaler und zeitlicher Anforderungen (z.b. Kommunikation, Safety, Energieverbrauch), deren Erfüllung ein nahtloses Zusammenspiel der Hardware, der verschiedenen Ebenen der Systemsoftware und der Anwendungssoftware bedingt. Die vorliegende Arbeit schlägt vor, die Abhängigkeiten der Hardware/-Software-Schichten mit einem Dienstmodell zu beschreiben. Dieses Modell umfasst einerseits die Kapselung funktionaler Abhängigkeiten zwischen verschiedenen Systemebenen (Peripherietreiber, Taskausführung, Kommunikation, etc.); andererseits beinhaltet es die Spezifikation nicht-funktionaler Eigenschaften in Form von Garantien durch den Dienstanbieter sowie in Form von Anforderungen durch die Bindung an einen Dienst. Beide Aspekte kommen bei der Verfeinerung einer Applikation bis hin zu einer plattformspezifischen Implementierung zum Tragen. Die spezifizierten nicht-funktionalen Abhängigkeiten können dabei als Ausgangspunkt für Analysen dienen (z.b. Mapping, Scheduling), deren Ergebnisse für die automatische Erzeugung von Programmcode genutzt werden können. Diese Arbeit präsentiert neben einem geeigneten Metamodel auch eine Taxonomie von Diensten für die Entwicklung eingebetteter Systeme und validiert den vorgeschlagenen Ansatz in einer Fallstudie. 1 Einleitung Mehr als 90 Prozent der verkauften Prozessoren kommen in eingebetteten Anwendungen zum Einsatz [Bro05]. Im Bereich der eingebetteten Systeme ist die Variantenvielfalt und Heterogenität der verfügbaren Hardware sowie zugehöriger Abstraktionsschichten, Middlewares und Betriebssysteme eine zentrale Problemstellung, die durch die dadurch bedingte Zunahme des Anteils plattformabhängiger Software verschärft wird [EMD08]. Modellbasierte Softwareentwicklung ist ein möglicher Ansatz, der beschriebenen Komplexität durch Abstraktion und Hierarchiebildung zu begegnen. Da bei eingebetteten Systemen die Berücksichtigung nicht-funktionaler Anforderungen eine zentrale Rolle spielt und deren Erfüllbarkeit eng mit der Plattform verwoben ist, muss die Plattform bei der Systemmodellierung mit in Betracht gezogen werden. Hierbei kann grob zwischen leichtgewichtigen Ansätzen wie etwa der Definition von Profilen für die Unified Modelling Language (UML) und schwergewichtigen Varianten in Form von domänenspezifischen Sprachen (DSLs) unterschieden werden [PHG + 09]. Die vorliegende Arbeit verfolgt das Ziel, eine problemorientierte Sicht auf den Entwurf eingebetteter Systeme zu bieten und umfasst daher sowohl Metamodelle zur Beschrei-

44 bung der Schichtenarchitektur der Plattform (Hardware, Abstraktionsschichten, Middleware, OS) als auch der Applikation. Hierbei steht die Beschreibung des Zusammenwirkens der unterschiedlichen Schichten durch ein Dienstmodell im Vordergrund, mit dem sowohl funktionale Abhängigkeiten als auch nicht-funktionale Anforderungen und Garantien beschrieben werden können. Der Rest dieser Arbeit ist wie folgt aufgebaut: Nach einer Motivation der Untersuchung von Dienstmodellen in Abs. 2 geben wir eine Zusammenfassung des aktuellen Stands der Technik in Abs. 3. Anschließend beschreiben wir die Schichtenarchitektur der Systemmodellierung in Abs. 4. In Abs. 5 definieren wir neben einem geeigneten Metamodell auch eine Taxonomie jener Dienste, die für die Entwicklung eingebetteter Anwendungen relevant sind. In Abs. 6 werden die vorgeschlagenen Konzepte im Rahmen einer Fallstudie auf ihre Anwendbarkeit geprüft und in Abs. 7 nochmals zusammengefasst und bewertet. 2 Anwendbarkeit von Dienstmodellen Im Folgenden wird erläutert, welche Beiträge der vorgeschlagene Ansatz zu verschiedenen Phasen des Entwicklungsprozesses liefert. Auslegung des Gesamtsystems: Eingebettete Systeme kommen in verschiedensten Anwendungsdomänen mit ebenso vielfältigen Anforderungen zum Einsatz. Bei der Auslegung des Gesamtsystems sind insbesondere drei Komponenten zu unterscheiden: Die zugrundeliegende Hardware, die verwendete Middleware und die zu implementierenden Applikationen. Zwischen den Ebenen der Hardware/Software-Schichten kann mit gewissen Einschränkungen Funktionalität verschoben werden. So überwiegen etwa bei hohen Stückzahlen die Hardwarekosten, wohingegen andernfalls die Entwicklungskosten entscheidend sind, so dass zwischen dem Einkaufspreis und der von der Hardware/Middleware-Kombination angebotenen Funktionalität abgewogen werden muss. Das vorgeschlagene Dienstmodell erlaubt eine schrittweise Verfeinerung der Spezifikation im Hinblick auf Lokalisierung von Funktionalität und Zusicherung nichtfunktionaler Eigenschaften. Analyse nicht-funktionaler Eigenschaften: Ein weiteres häufig anzutreffendes Anwendungsszenario ist der Einsatz von commercial-of-the-shelf (COTS) Plattformen. Hier ist vom Systemarchitekten vor allem eine Auswahl aus den angebotenen Hardwarealternativen zu treffen, wobei insbesondere auch die Eigenschaften der jeweils verfügbaren Middlewares zu beachten ist. Eine besondere Schwierigkeit stellt hier die Berücksichtigung jener nicht-funktionalen Anforderungen dar, die entweder auf verschiedenen Ebenen der Hardware/Software-Schichten realisiert werden können, oder die sogar mehrere Ebenen involvieren. Eine zentrale Voraussetzung für die Analyse der Eignung einer Plattformkombination ist demnach die geeignete Darstellung ihrer nicht-funktionalen Zusicherungen sowie die Zuordnung nach deren Ursprung. Software-Synthese: Im diesem Bereich sind zwei weitgehend unabhängige Konzepte umzusetzen. Einerseits muss Funktionalität (je nach Granularität der Anwendungsbeschreibung bspw. Komponenten, Anweisungen, Tasks) jeweils geeigneten Hardwarekomponenten zugewiesen werden (Bindung). Dies bedingt insbesondere eine automatisierte Analyse nicht-funktionaler Eigenschaften sowie die Darstellbarkeit dieser Bindung. Andererseits

45 muss die Verwendung automatisch auflösbar sein, also die entsprechende Funktion auf der dafür vorgesehenen Hardware ausgeführt werden können. Hierzu ist die Bereitstellung der entsprechenden Code-Templates, Compileraufrufen u.ä. notwendig. 3 Stand der Technik Wie von Passerone et al. ausgeführt [PHG + 09] ist modellbasierte Entwicklung ein geeigneter Ansatz, der wachsenden Komplexität eingebetteter Anwendungen zu begegnen. Aufgrund der inhärenten wechselseitigen Abhängigkeit von Ausführungsplattform und Applikation geben wir einen Überblick über existierende Ansätze, mit denen diese präzise beschrieben werden kann und die insbesondere zur Modellierung der Interaktion zwischen den verschiedenen Systemschichten geeignet sind. Kontrakte (contracts) sind ein vielfach verfolgter Ansatz, der auf die Zusicherung funktionaler Eigenschaften zwischen Eiffel-Modulen mit Hilfe von Annahmen und Garantien (require und ensure) zurückgeht [Mey92]. Der Ansatz von Beugnard et al. macht sich dieses Konzept zunutze, um Abhängigkeiten zwischen Komponenten zu beschreiben. Der Fokus liegt dabei auf der Betrachtung syntaktischer Aspekte (IDLs), des Verhaltens (Prä- und Postbedingungen), der Synchronisation zur Koordination von Operationen sowie der Dienstgüte (quality of service) [BJPW99]. In einer Retroperspektive [BJP10] stellen die Autoren sowohl die Bedeutung von Interaktionen (zwischen Komponenten sowie von Komponenten mit der Umwelt) als auch die zentrale Rolle nicht-funktionaler Anforderungen heraus. Beide Aspekte werden u.a. im HRC-Komponentenmodell [Dam05] berücksichtigt, das verschiedene Modellierungssichten mit Hilfe von Kontrakten realisiert [BCF + 08]. Neben der formalen Definition von Kontrakten durch Annahmen (assumptions: Anforderungen von Komponenten an ihre Umgebung) und Versprechen (promises: bei Erfüllung der Annahmen garantierte Eigenschaften) wird mit CSL (contract specification language) eine Patternsprache zur Spezifikation von Echtzeitkontrakten vorgeschlagen [GBC + 08]. Im Projekt FRESCOR 1 werden ähnliche Aspekte behandelt, allerdings liegt der Fokus auf der Aushandlung von Kontrakten zur Laufzeit (Scheduling, Kommunikation). Bei DOL (Distributed Operation Layer) [TBHH07] werden Applikationen als Prozess- Netzwerk-Modell dargestellt. Bei der Plattformmodellierung werden dabei sowohl die Topologie der Hardware als auch die Schichten der Systemsoftwares berücksichtigt. Der Fokus der Arbeit liegt auf der Performanzanalyse auf Systemebene, auf deren Basis Applikationen auf die jeweilige Plattform abgebildet werden. In SysCOLA [WHHW09], einer Kombination der formalen COLA-Komponentensprache und SystemC, kommen COLA-Modelle für die Beschreibung der Applikation sowie von Topologie und Features der abstrakten Plattform zum Einsatz. Das Hauptziel der Arbeit liegt darin, einen virtuellen Prototyp der Ausführungsplattform zur Verfügung zu stellen, mit dem das zu entwickelnde System simuliert werden kann. Hierzu kommt eine Plattform-Abstraktionsschicht (VPAL: virtual plattform abstraction layer) zum Einsatz, die allerdings nur sehr grundlegende Dienste zur Verfügung stellt (Kommunikation, zustandsbehaftete Tasks). 1

46 Hardware-dependent Software (HdS) beschreibt den Softwareanteil der Hardware-/Software-Schichten eingebetteter Anwendungen wie etwa BSPs (board support packages), HALs (hardware abstraction layers) oder Treiber, der nicht nur einen Großteil der Komplexität der Systeme ausmacht, sondern auch wesentlichen Einfluss auf deren Eigenschaften hat [EMD08]. Daher stellt HdS Modelle zur Spezifikation der HW/SW- Schnittstelle (bspw. spezielle Prozessorinstruktionen, memory-mapped I/O) sowie von Kommunikation und Ausführungskontexten zur Verfügung. Während HdS somit zwar Mechanismen zur detaillierten Darstellung des Hardwarezugriffs durch die Software bietet, ist aufgrund der fehlenden Mechanismen zur Beschreibung der Applikation keine ganzheitliche Systemmodellierung möglich. Das MaCC-Framework [PKMM10] ist eine Modellierungssprache zur strukturellen Beschreibung von Systemen. Auf der gewählten Abstraktionsebene werden bspw. Prozessoren, DMA-Controller, Speicher und deren Verbindungen wie etwa Busse, Punkt-zu- Punkt-Verbindungen, Registerfile, etc. strukturell beschrieben. Das Ziel ist es dabei, die Entwicklung von Source-Level-Codeoptimierungen zu unterstützen, die vom Wissen über die Speicherhierarchie profitieren (etwa: Nutzung von Scratchpads), wozu das Framework das Mapping zwischen verschiedenen Adressräumen unterstützt. Der Fokus liegt darauf, einen datenbankartigen Zugriff auf die Systembeschreibung zu bieten, aus der für die Optimierung benötigte Größen abgefragt werden können (bspw. Energie, Latenz, Durchsatz). SysML [OMG10] wird zur Beschreibung komplexer Systeme eingesetzt, die u.a. aus Hardware und Software bestehen können und baut dazu auf einer Teilmenge von UML 2 auf. SysML bietet dabei allerdings mit sog. allocations nur einen sehr generischen Mechanismus zur Beschreibung der Beziehung zwischen verschiedenen Hierarchieebenen. AADL [FGH06] verfolgt ähnliche Ziele, indem es Modellierungskonzepte zur Beschreibung und Analyse von Systemarchitekturen in Form von Komponenten und deren Interaktion bietet, wozu eine Reihe von Abstraktionen von Software-, Hardware- und Systemkomponenten zur Verfügung gestellt werden. 4 Überblick über die Modellierung Mehrschichtige Modelle sind ein Ansatz zur Beschreibung von eingebetteten Anwendungen. Mit ihnen lassen sich neben der eigentlichen Anwendung und den verschiedenen Schichten der Ausführungsplattform insbesondere deren wechselseitige Abhängigkeiten erfassen (vgl. Abs. 3). Im Folgenden wird ein geeignetes Metamodell zur Beschreibung von eingebetteten Systemen skizziert, das neben verschiedenen Komponenten-Metamodellen zur Beschreibung der Systemschichten, über ein Dienst-Metamodell verfügt. Auf letzteres wird in Abs. 5 detailliert eingegangen wird. Das abstrakte Komponenten-Metamodell ist die Basis zur Beschreibung der topologischen Struktur von Systemkomponenten. Es umfasst Komponenten, deren Schnittstelle zur Umgebung durch Ports repräsentiert wird. Die strukturelle Relation zwischen Komponenten wird mit Hilfe von an Ports gebundene Kanäle dargestellt. Hierarchiebildung ermöglicht es, bestehende Komponentenmodelle zu kapseln und wiederzuverwenden. Das Applikations-Metamodell stellt die erste Gruppe der Verfeinerungen des abstrak-

47 Abbildung 1: Metamodell und Taxonomie von Diensten sowie einfache Beispiele (unten). ten Komponenten-Metamodells dar und dient zur Spezifikation von funktionalen Eigenschaften. Beispiele sind Datenflussmodelle wie SDF oder Automatenmodelle wie SFC oder FSM. Details zu beiden Typen finden sich in [BGBK08]. Plattform-Metamodelle sind eine Verfeinerung des abstrakten Komponenten-Metamodells zur Spezifikation der Ausführungsplattform. Dabei dient das topologische Hardware- Metamodell dazu, die Einzelkomponenten einer Hardware-Plattform darzustellen und deren Topologie zu erfassen. Es umfasst daher Basisklassen zur Repräsentation von Prozessoren und Kernen, Bussen, Netzwerken, Speichern sowie Sensoren und Aktuatoren, die zur Beschreibung einer konkreten Hardware entsprechend verfeinert werden. Darüber hinaus werden im Plattformmodell die Komponenten des Softwarestapels beschrieben, etwa Prozesse einer Middleware, Module einer Hardwareabstraktionsschicht oder Treiber eines Betriebssystems. 5 Dienste Um die Abhängigkeiten der im vorangegangenen Abschnitt erwähnten Ebenen der HW/SW-Schichten darstellbar zu machen, schlagen wir im Folgenden ein Dienstmodell vor. Die reine Darstellung dieser Abhängigkeiten nimmt Anleihen am Konzept der Kontrakte (vgl. [Mey92, BJPW99, BCF + 08, GBC + 08]), sowie am HRC-Modell [Dam05]. Die besondere Neuerung unseres Ansatzes ist, dass zusätzlich zur reinen Darstellung der Abhängigkeit die Funktionalität tieferliegender Schichten durch bereitgestellte Codetemplates zugänglich gemacht wird. Dies kann als Interface-Kapselung interpretiert werden. Da nun jede Komponente ihre Interfacedefinition samt Codetemplate in Form eines Dienstes zur Verfügung stellt, ermöglicht dies die Implementierung eines völlig hardwareunabhängigen Codegenerators. Im Folgenden wird ein Metamodell zur Beschreibung solcher Dienste vorgestellt und eine entsprechende Taxonomie angegeben. Dienst-Metamodell. Dienste dienen dazu, die Beziehung zwischen Komponenten zu beschreiben, die in unterschiedlichen Schichten des Systems beschrieben sind (siehe Abb. 1). Sie werden in unserem Metamodell durch die Klasse Service repräsentiert. Komponenten können einerseits eine Menge von Diensten anbieten. Andererseits können sie Dienste

48 anfordern, und so ihre Implementierung auf diesen abstützen. Die Realisierung dieser Beziehungen erfolgt durch eine geeignete Aggregation bzw. Komposition (nicht abgebildet). Falls ein von einer Komponente angebotener Dienst P mit einem von einer Komponente einer anderen Modellierungsebene angeforderten Dienst R, kompatibel ist, so können beide aneinander gebunden werden (ServiceBinding). Dabei ist die syntaktische Verträglichkeit gewährleistet, falls der Typ von R ein Supertyp von P ist, d.h. falls der angebotene Dienst den Anforderungen entspricht bzw. noch weiter verfeinert ist. Eine Bindung repräsentiert somit die durch die Verwendung eines angebotenen Dienstes dargestellte funktionale Abhängigkeit der R-Komponente von der zugrundeliegenden P-Komponente. Zur Realisierung dieser funktionalen Beziehung sind Diensten geeignete Codetemplates zugeordnet, wie bspw. ein Treiberaufruf durch eine Applikationskomponente, die so einen Zugriff auf eine Hardwarekomponente durchführt. Abb. 1 zeigt die vorgeschlagene Taxonomie von Diensten zusammen mit einigen einfachen Beispielen. Um die semantische, sowie die nicht-funktionale Kompatibilität von zu bindenden Diensten nach unterschiedlichen Gesichtspunkten wie etwa Echtzeit, Kommunikation, Safety, Energieverbrauch, etc. beschreiben zu können, wird jedem Dienst eine Beschreibung seines Angebots bzw. seiner Forderung zugeordnet. Hierzu können einerseits direkt formale Spezifikationen wie etwa temporale Logik zur Definition von zeitlichen Bedingungen verwendet werden. Gafni et al. haben für zeitliche Eigenschaften einen vielversprechenden Ansatz vorgestellt, wie mit Hilfe einer Menge von vordefinierten Pattern eine formale, aber dennoch menschenlesbare Codierung der Bedingungen angegeben werden kann [GBC + 08]. Daher erscheint es sinnvoll, diesen Ansatz auch auf andere durch Dienste darstellbare Eigenschaften zu übertragen und geeignete Pattern zu definieren. Nicht-funktionale Eigenschaften von angebotenen Diensten sowie Anforderungen an Dienste werden in Abb. 1 durch die Referenz nfp repräsentiert. Dazu werden Verfeinerungen der Klasse NFP für die Beschreibung verschiedener Aspekte benötigt, die in geeigneten Darstellungen unterschiedlicher Ausdrucksmächtigkeit ausgedrückt werden können. Jedem Diensttyp können so unterschiedliche nicht-funktionale Angebote / Forderungen zugeordnet sein. Typische Beispiele sind: Durchschnittliche Latenz, maximale Latenz, Durchsatz, Fehlerwahrscheinlichkeit und Energieverbrauch. Dienst-Taxonomie. Basisdienste (BasicService) stellen eine Schnittstelle zu grundlegenden, nur in Hardware realisierbaren Primitiven dar (vgl. Abs. 4), die sich wie folgt einteilen lassen: Storage abstrahiert verschiedenartige Speicher (RAM, Flash, etc.). Peripheral fasst die Eigenschaften von Ein-/Ausgabegeräten wie etwa Ports oder A/D bzw. D/A-Wandlern zusammen. Trigger stellen eine Abstraktion von Ereignisquellen dar (Beispiele: Interrupts wie Timer, Peripherieeinheiten, etc.). Synchronization ist eine Basisklasse für Synchronisationsdienste, die direkt von der Hardware zur Verfügung gestellt werden (z.b.: Semaphore, Mutexe,... ). Transport beschreibt Dienste von Hardwarekomponenten, mit denen Daten im System bewegt werden können (Bsp.: Busse, NoC,... ). HardThread abstrahiert Berechnungsdienste, die hardwareseitig durch einen Programmzähler und einen Registersatz zur Verfügung gestellt werden.
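Die folgende, stark vereinfachte Skizze illustriert die beschriebene Bindungsprüfung. Klassennamen, die Beschränkung auf eine einzelne nicht-funktionale Eigenschaft (maximale Latenz) und die Zahlenwerte sind frei gewählte Annahmen und geben nicht das tatsächliche Metamodell wieder; gezeigt wird nur das Prinzip, dass ein angebotener Dienst P an einen angeforderten Dienst R gebunden werden kann, wenn der Typ von R ein Supertyp von P ist und die Zusicherungen die Forderungen erfüllen.

```python
# Minimale Skizze (Annahme, keine Referenzimplementierung) einer Dienstbindung:
# Typverträglichkeit über die Vererbungshierarchie plus Abgleich einer NFP (max. Latenz).
from dataclasses import dataclass
from typing import Optional

class Service: pass
class BasicService(Service): pass
class Transport(BasicService): pass          # Basisdienst: Datentransport (Bus, NoC, ...)
class NoCTransport(Transport): pass          # konkrete Verfeinerung einer Plattformkomponente

@dataclass
class NFP:                                   # nicht-funktionale Zusicherung bzw. Forderung
    max_latenz_us: Optional[float] = None

@dataclass
class Dienst:
    typ: type
    nfp: NFP

def bindbar(angeboten: Dienst, angefordert: Dienst) -> bool:
    typ_ok = issubclass(angeboten.typ, angefordert.typ)   # R muss Supertyp von P sein
    latenz_ok = (angefordert.nfp.max_latenz_us is None or
                 (angeboten.nfp.max_latenz_us is not None and
                  angeboten.nfp.max_latenz_us <= angefordert.nfp.max_latenz_us))
    return typ_ok and latenz_ok

p = Dienst(NoCTransport, NFP(max_latenz_us=50.0))    # Angebot der Plattform
r = Dienst(Transport,    NFP(max_latenz_us=100.0))   # Forderung der Applikation
print(bindbar(p, r))   # True: Typ verfeinert die Forderung, Latenzgarantie genügt
```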

Die eigentliche Taxonomie von Diensten ist jedoch auf zusammengesetzten Diensten (ComposedService) definiert. Diese ordnen sich einer der folgenden Gruppen zu: Communication, IO, Task oder Memory. Ableitungen dieser Klassen fassen Basisdienste zusammen und können so komplexere, entweder in Software oder Hardware realisierte Dienste beschreiben. Diese explizite Modellierung der Fähigkeiten/Forderungen der HW/SW-Schichten ermöglicht es, die Abhängigkeiten zwischen diesen Schichten in der Modellierung zu erfassen. Hierbei wird nur deren statische "verwendet"-Beziehung erfasst. Anhand der Verhaltensbeschreibung des zusammengesetzten Dienstes muss die dynamische Verträglichkeit der Dienstbindung mit der Applikation analysiert sowie für geeignete Arbitrierung gesorgt werden (s.u.). Falls ein Dienst auf verschiedenen Systemebenen realisiert werden kann und er daher durch verschiedene Ausprägungen des gleichen abstrakten Dienstes repräsentiert wird, kann eine nachträgliche Verfeinerung bzw. Verschiebung einer Dienstbindung erfolgen. Ein Beispiel hierfür ist der Einsatz eines Betriebssystems während einer schrittweisen Verfeinerung des Designs, dessen Dienste anstelle des direkten Hardwarezugriffs verwendet werden.

Im unteren Teil von Abb. 1 sind einige einfache Beispiele abgeleiteter Dienste dargestellt. So setzt sich bspw. eine Point-to-Point-Kommunikation aus Synchronisationen und einem Datentransport zusammen. Eine CoRoutine ist ein Task ohne eigenen Kontext, besitzt aber einen Abarbeitungsfaden (HardThread). Ein SoftThread hat darüber hinaus einen eigenen Stack, während ein Process auch einen Heap besitzt. In dieser Darstellung ist ein DMA lediglich ein (Kopier-)Task. Port, IRQ, Semaphore und Mutex sind einfache Dienste, welche Spezialisierungen von Basisdiensten darstellen. Über die reine Darstellung der funktionalen und nicht-funktionalen Kompatibilität und der tatsächlichen Bindung hinaus stellen Dienste der jeweils höheren Schicht im Stapel Codetemplates zur Verfügung, die den Zugriff auf die unterliegende Komponente ermöglichen. Beispiele hierfür sind die Implementierungen von Zugriffsfunktionen für Pins eines Ports oder die Berechnung von physikalischen Adressen, wie in [PKMM10] vorgeschlagen.

Arbitrierung. Da in dieser Arbeit nebenläufige Applikationsmodelle betrachtet werden, muss der Zugriff auf Dienste, die Ressourcen der Ausführungsplattform kapseln, geregelt werden. Eine Möglichkeit besteht darin, dass die Hardware geeignete Arbiter (HWArbitration) bereitstellt. Falls dies nicht der Fall ist, muss der Zugriff auf die Ressource durch geeignete Ausführungspläne (Schedule) geregelt werden (statisch oder dynamisch). Dabei ist zu beachten, dass die gewählte Arbitrierungsstrategie konsistent mit den durch den jeweiligen Dienst beschriebenen nicht-funktionalen Anforderungen sein muss (z.B. worst-case response time).

6 Fallstudie

Das Ziel dieses Abschnittes ist es, die in dieser Arbeit eingeführten Konzepte an einem Beispiel zu validieren. In Abb. 2 ist hierzu das Modell eines eingebetteten Systems dargestellt, dessen Hauptbestandteile das Plattformmodell (unten), ein Anwendungsmodell (oben, rot umrahmte Bereiche) sowie die durch die jeweiligen Dienste spezifizierten wechselseitigen Anforderungen und Garantien sind.
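Das zuvor beschriebene Codetemplate-Konzept lässt sich vereinfacht wie folgt skizzieren; Namen und Platzhaltersyntax sind hypothetisch und kein Bestandteil des vorgestellten Frameworks. Ein Dienst liefert neben seiner Schnittstelle ein Template, das bei der Bindung mit den Werten der bietenden Komponente gefüllt wird, etwa für die Zugriffsfunktion eines Port-Pins:

```java
// Skizze: ein Dienst stellt ein Codetemplate mit Platzhaltern bereit; ein
// hardwareunabhaengiger Generator fuellt die Platzhalter beim Binden mit den
// konkreten Werten der bietenden (Hardware-)Komponente.
import java.util.Map;

class ServiceTemplate {
    final String name;
    final String template; // Platzhalter in der Form ${...}
    ServiceTemplate(String name, String template) { this.name = name; this.template = template; }

    String instantiate(Map<String, String> binding) {
        String code = template;
        for (Map.Entry<String, String> e : binding.entrySet()) {
            code = code.replace("${" + e.getKey() + "}", e.getValue());
        }
        return code;
    }
}

public class TemplateDemo {
    public static void main(String[] args) {
        // Beispiel: Zugriffsfunktion fuer einen Port-Pin; Registername und Pin-Nummer
        // liefert die bietende Komponente bei der Dienstbindung (hypothetische Werte).
        ServiceTemplate portRead = new ServiceTemplate("Port.read",
            "static inline int read_pin(void) { return (${PORT_REG} >> ${PIN}) & 1; }");
        System.out.println(portRead.instantiate(Map.of("PORT_REG", "PORTA", "PIN", "3")));
    }
}
```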

Abbildung 2: Exemplarisches 4-Kern-System mit Anbindung an Sensorik und Aktuatorik.

Die Hardwareplattform besteht dabei aus 4 Rechenkernen (PE: processing element), die jeweils über lokale Speicher (Mem) verfügen und untereinander über ein NoC verbunden sind, auf das sie über ein geeignetes Netzwerkinterface (NI) zugreifen. Eines der PEs ist dabei an einen Eingangsport und einen Interrupt-Controller angebunden; ein anderes verfügt über einen Ausgangsport, über den ein Aktuator angesteuert wird. Wie in Abs. 5 ausgeführt, bieten die Plattformkomponenten geeignete Verfeinerungen der folgenden Basisdienste an, die durch gleichfarbige Pfeile repräsentiert werden:
- Sensorik- und Aktuatorik-Ports: Peripheral
- Interrupt-Controller: Trigger
- Kommunikationspfade im NoC: Transport
- Rechenkerne: HardThread
Im Beispiel sind die Fähigkeiten der Hardware durch die Dienste einer Middleware bzw. eines Betriebssystems weiter abstrahiert:
- Memory, Process: Ausführungskontext für die gewählte Partitionierung des Anwendungsmodells (s.u.).
- Communication: Kommunikation zwischen den Anwendungskomponenten.
Im oberen Teil der Grafik ist das Datenflussmodell einer Anwendung dargestellt, in der ein PI-Regler realisiert ist (je ein Sensor und Aktuator), der gemäß einer gegebenen Partitionierung auf zwei Kernen ausgeführt wird. Im Folgenden wird die Modellierung der Abhängigkeit der Anwendung von der Ausführungsplattform erläutert, die in der Abbildung durch Linien dargestellt wird, die in gefüllten Kreisen enden. Die eigentliche Dienstbindung, d.h. die Erfüllung einer Anforderung durch einen von der Plattform zur Verfügung gestellten Dienst, wird durch ungefüllte Kreise dargestellt. Allen Komponenten des Applikationsmodells ist gemein, dass sie einen Berechnungsdienst (hier: Prozess) sowie Speicher (Memory) benötigen. Die Bindung der durch

die Anwendung angeforderten Plattformdienste stellt ein Mapping der Datenfluss-Komponenten auf Rechenkerne und deren zugehörige lokale Speicher dar. Da einem Kern mehrere Komponenten zugewiesen werden können, wird zur zeitlichen Arbitrierung der HardThreads ein Schedule benötigt. Im Falle des betrachteten synchronen Datenfluss-Modells kann dieser Schedule statisch vorberechnet werden; andere Applikationsmodelle sind dagegen auf die Schedulingdienste eines Betriebssystems angewiesen. Die Kommunikation zwischen den Komponenten des Anwendungsmodells wird über entsprechende Plattformdienste realisiert. Lokale Kommunikation kann bspw. über einen an den jeweiligen Kern angebundenen Speicher realisiert werden. Die Inter-Core-Kommunikation, die durch das Mapping der Anwendung nötig wird, soll im Beispiel fehlertolerant ausgeführt sein. Sie wird hier durch eine Middleware realisiert, die sich der Kommunikationsdienste zweier unabhängiger Pfade im NoC bedient. Bei der Implementierung der Applikation auf einer Plattform, die fehlertolerante Kommunikation in Hardware realisiert, kann dieser Schritt entfallen und die Anforderung der Software direkt erfüllt werden. Der Zugriff auf die Peripherie erfolgt über entsprechende IO-Dienste. Im Beispiel signalisiert der Sensor über einen Interrupt, dass ein neuer Wert anliegt. Daher wird ein Trigger verwendet, um eine Interrupt-Service-Routine an den entsprechenden Kern zu binden.

7 Zusammenfassung und Ausblick

Der Beitrag dieser Arbeit liegt in der Einführung und Definition eines Dienstmodells, das zur Spezifikation der Abhängigkeiten zwischen den verschiedenen Schichten der Systemmodellierung verwendet werden kann. Dabei kann eine Bindung zwischen einer anfordernden und einer bietenden Komponente zustande kommen, falls diese syntaktisch kompatibel sind. Hierzu muss der Nachweis für die in den Diensten spezifizierten nichtfunktionalen Anforderungen erbracht und für eine geeignete Arbitrierung gesorgt werden. Ein wesentlicher Aspekt ist dabei, dass Dienste nicht nur die Verträglichkeit von Bindungen modellieren, sondern durch die Bereitstellung von Codetemplates auch deren Realisierbarkeit gewährleisten. Dies ermöglicht die Implementierung eines vollständig hardwareunabhängigen Codegenerators. Es ist geplant, die Skalierbarkeit des Ansatzes im Rahmen mehrerer realer Anwendungsfälle zu überprüfen. Ein möglicher Anknüpfungspunkt an das in dieser Arbeit vorgeschlagene Framework ist die Integration von formalen Techniken zur Analyse der Verträglichkeit von Dienstbindungen. Ein anderer betrifft die Darstellbarkeit der dabei involvierten nicht-funktionalen Anforderungen, deren Lesbarkeit und Anwendbarkeit eine zentrale Rolle für die Akzeptanz der Methodik spielt. Hierbei erscheint die Definition von patternbasierten Sprachen zur Beschreibung weiterer Aspekte (neben zeitlichen [GBC + 08]) von Interesse.

Literatur

[BCF + 08] Albert Benveniste, Benoît Caillaud, Alberto Ferrari, Leonardo Mangeruca, Roberto Passerone und Christos Sofronis. Multiple Viewpoint Contract-Based Specification and Design. In Frank de Boer, Marcello Bonsangue, Susanne Graf und Willem-Paul de Roever, Hrsg., Formal Methods for Components and Objects, Jgg d. Lecture Notes in Computer Science, Seiten Springer Berlin / Heidelberg, 2008.

52 [BGBK08] Simon Barner, Michael Geisinger, Christian Buckl und Alois Knoll. EasyLab: Model- Based Development of Software for Mechatronic Systems. In IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications, Seiten , Beijing, China, Oktober [BJP10] Antoine Beugnard, Jean-Marc Jezéquel und Noël Plouzeau. Contract aware components, 10 years after. Electronic proceedings in theoretical computer science, 37:1 11, Oktober [BJPW99] Antoine Beugnard, Jean-Marc Jezequel, Noël Plouzeau und Damien Watkins. Making components contract aware. Computer, 32(7):38 45, Juli [Bro05] Manfred Broy. Automotive software and systems engineering. In Proceedings of the 3rd ACM/IEEE International Conference on Formal Methods and Models for Co- Design, MEMOCODE 05, Seiten , Washington, DC, USA, Juli IEEE Computer Society. [Dam05] Werner Damm. Controlling Speculative Design Processes Using Rich Component Models. In Proceedings of the Fifth International Conference on Application of Concurrency to System Design, Seiten , Washington, DC, USA, Juni IEEE Computer Society. [EMD08] Wolfgang Ecker, Wolfgang Müller und Rainer Dömer, Hrsg. Hardware-dependent Software Principles and Practice. Springer, [FGH06] Peter H. Feiler, David P. Gluch und John J. Hudak. The Architecture Analysis & Design Language (AADL): An Introduction. Technical Note CMU/SEI-2006-TN- 011, Carnegie Mellon University, Februar [GBC + 08] Vered Gafni, Albert Benveniste, Benoit Caillaud, Susanne Graf und Bernhard Josko. D Contract Specification Language (CSL). Bericht, SPEEDS project, SPEEDS deliverable D [Mey92] Bertrand Meyer. Applying Design by Contract. Computer, 25:40 51, Oktober [OMG10] Object Management Group. OMG. Systems Modeling Language 1.2, Juni [PHG + 09] Roberto Passerone, Imene Ben Hafaiedh, Susanne Graf, Albert Benveniste, Daniela Cancila, Arnaud Cuccuru, Sebastien Gerard, Francois Terrier, Werner Damm, Alberto Ferrari, Leonardo Mangeruca, Bernhard Josko, Thomas Peikenkamp und Alberto Sangiovanni-Vincentelli. Metamodels in Europe: Languages, Tools, and Applications. IEEE Design & Test of Computers, 26:38 53, Mai [PKMM10] Robert Pyka, Felipe Klein, Peter Marwedel und Stylianos Mamagkakis. Versatile system-level memory-aware platform description approach for embedded MPSoCs. SIGPLAN Not., 45:9 16, April [TBHH07] Lothar Thiele, Iuliana Bacivarov, Wolfgang Haid und Kai Huang. Mapping Applications to Tiled Multiprocessor Embedded Systems. In Proceedings of the Seventh International Conference on Application of Concurrency to System Design, Seiten 29 40, Washington, DC, USA, Juli IEEE Computer Society. [WHHW09] Zhonglei Wang, Andreas Herkersdorf, Wolfgang Haberl und Martin Wechs. SysCO- LA: a framework for co-development of automotive software and system platform. In Proceedings of the 46th Annual Design Automation Conference, DAC 09, Seiten 37 42, New York, NY, USA, Juli ACM.

Herausforderungen bei der Performanz-Analyse automatisierungstechnischer Kommunikationssysteme

D. Renzhin, J. Folmer
Lehrstuhl für Informationstechnik im Maschinenwesen, Technische Universität München
Boltzmannstr. 15, 85748 Garching b. München
{renzhin,

Abstrakt: Die Performanz-Analyse von Ethernet-basierten Netzwerken, die in der Automatisierungstechnik eingesetzt werden, ist durch die gemeinsame Übertragung von echtzeit- und nicht-echtzeitkritischen Daten nur schwer möglich. Es fehlt ein einheitliches und herstellerunabhängiges Beschreibungsmittel, das zum einen die Modellierung von Echtzeitanforderungen zulässt und zum anderen darstellt, was vom Netzwerk technisch möglich ist. In diesem Beitrag wird ein Beschreibungsmittel für die Planung von Netzwerken vorgestellt. Das Beschreibungsmittel wird als Grundlage für die Performanz-Analyse dienen.

1 Einleitung

Die Automatisierungstechnik befindet sich in einem dynamischen Wandel, der eng an den immer schnelleren technischen Fortschritt gekoppelt ist. Die grundlegende Idee, dass für die Automatisierung Aktoren und Sensoren verwendet werden, bleibt unverändert. Hingegen lässt sich durch neue Übertragungstechniken das ganze Konzept der Automatisierung immer wieder aus neuen Blickwinkeln betrachten. Drastisch steigende Anforderungen der Industrie und neue technische Möglichkeiten tragen dazu bei, dass die Kommunikationssysteme immer flexibler, universeller, schneller und sicherer einsetzbar sind, aber damit auch komplexer und schwerer analysierbar werden. Seit Ende der Achtzigerjahre wurde die Entwicklung von Feldbussystemen mit Hilfe vieler Forschungsprojekte in den verschiedensten Ländern, vor allem in Deutschland, in Frankreich und in den USA, vorangetrieben. Die dadurch entstehenden Feldbusse konkurrierten sehr stark, so dass es keinen einheitlichen Standard für netzwerkbasierte Automatisierungssysteme gab. Man sprach sogar von einem sogenannten Feldbuskrieg [Fe02]. Parallel zur Entwicklung von netzwerkbasierten Automatisierungssystemen wurden auch Office-Netzwerke stark vorangetrieben. Daraus resultierte Ethernet, das zu einem weltweit verwendeten Standard geworden ist. Heutzutage werden Ethernet-Anschlüsse in jedem Haus und an jedem PC als Selbstverständlichkeit eingesetzt. Ende der Neunzigerjahre wurde das Potenzial des Ethernets von der Industrie aufgegriffen. Zwei neue Standards, die IEC und die ergänzende IEC, definieren den Begriff Industrial Ethernet als Ethernet-Derivat mit Echtzeiterweiterungen. Die darauf basierenden Systeme verfügen über höhere Funktionalität und Komplexität durch verschiedene

54 dustrial-ethernet-varianten. Sie sind anpassbar an konkrete Anwendungsfälle und erfüllen somit die gewünschten Anforderungen optimal [We10]. 2 Industrial Ethernet Bei Industrial Ethernet handelt es sich um Switched-Ethernet, d.h. die Kommunikation zwischen einzelnen Teilnehmern erfolgt über eine Punkt-zu-Punkt -Verbindung. Das führt dazu, dass kein gemeinsam hörbares Medium mehr existiert. Die einzelnen Datenübertragungsstrecken sind durch Switches verbunden. Solch eine Netzwerkstruktur verhindert Datenkollisionen, was in modernen und schnellen Netzwerken auschlaggebend ist. Die Folge einer Unterteilung in kleine und voneinander unabhängige Netzwerksegmente (Punkt-zu-Punkt) ist der Übersichtsverlust auf das Geschehen in dem gesamten Kommunikationssystem. Kompatibilität mit IEEE802 Echtzeitfächgkeit Besteffort Echtzeit Besteffort Echtzeit Besteffort Echtzeit TCP / UDP IP TCP / UDP IP Prioritizing TCP/UDP IP Scheduling ETH MAC ETH MAC ETH MAC z.b Modbus/IDA, Ethernet/IP FF HSE z.b PROFINET IO z.b PROFINET IRT Abb. 1: Typen von Industrial Ethernet [Jas05]. Industrial Ethernet lässt sich allgemein in drei Gruppen kategorisieren [Jas05], dessen klassische Darstellung in Abbildung 1 zu sehen ist. Bei der Gruppe 1 handelt es sich um eine nicht-echtzeitfähige Kommunikation, die für viele Anwendungsfälle, wie z.b. Mensch-Maschine-Interaktion, ausreichend ist. Die Kommunikation der ersten Gruppe ist eine klassische Kommunikation, die in Office-Netzwerken etabliert ist. Für die Automatisierungsansätze werden die Protokolle TCP/IP oder UDP/IP benutzt. Das führt dazu, dass die Übertragungszeiten sehr stark von der jeweiligen Netz-Topologie, Busauslastung und Speicherkapazität der Switches abhängen. Die zweite Gruppe repräsentiert eine Soft-Real-Time Kommunikation, welche die Übertragung von nichtzeitkritischen Daten und zeitkritischen Real-Time-Daten (RT-Daten) mithilfe von Priorisierung [IEEE802D/Q] vereint. Im Regelfall werden die RT-Daten unabhängig von ISO/OSI Schichten 3-6 direkt an der Schicht 7 Anwendungsschicht der Automatisierungskomponente übertragen. In dieser Gruppe besteht das Problem, dass hochpriorisierte RT-Daten in einem Switch durch ein in Bedienung befindlichen niedrigpriorisierten Frame verzögert werden können. Solche Wartezeiten können bei einem maximal langen Frame mit Industrial Ethernet bis zu 120 µs pro Switch betragen [We10].
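Zur Einordnung dieser Größenordnung eine überschlägige Rechnung; die Annahmen (Store-and-Forward-Vermittlung, Fast Ethernet mit 100 Mbit/s, maximal langer Frame von 1522 Byte, Präambel und Inter-Frame-Gap vernachlässigt) stammen nicht aus dem Beitrag:

\[
t_{\text{Switch}} \approx \frac{L_{\text{Frame}}}{R} = \frac{1522\,\text{Byte} \cdot 8\,\text{Bit/Byte}}{100\,\text{Mbit/s}} \approx 122\,\mu\text{s}
\]

Dies liegt in der im Text genannten Größenordnung von bis zu 120 µs pro Switch.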

55 Die Netzwerkkomponenten die harte Echtzeitanforderungen (Jitter weniger als 1µs) erfüllen müssen, können erst in der Gruppe drei realisiert werden. Die Busschnittstellen für diese Gruppe müssen deutlich leistungsfähiger sein, als die der vorherigen Gruppen. Außerdem erfordert das Scheduling einen zusätzlichen Planungsaufwand. Als Beispiel dazu kann ein genauer Topologie-Plan mit Zeitverzögerungen in Kabeln und Switches dienen oder die Erstellung eines Planes für Sendezeiten, je nach dem wo das Gerät sich im Netz befindet und wie schnell es ist, usw. Eine Voraussetzung für diese Gruppe ist ein reservierter Echtzeitkanal. Es wird eine Zeitspane reserviert in der ausschließlich Echtzeitdaten übertragen werden. Die einfachen Daten werden in den Switches während dieser Zeitspanne zwischengespeichert oder überhaupt verworfen, falls die Zwischenspeicherung nicht möglich ist. Industrial Ethernet erfordert von der ersten zu der dritten Gruppe in zunehmendem Maße technische Erweiterungen, die den Einsatz von Standard-Hardware einschränken. Dies steigert deutlich die Komplexität von Automatisierungssystemen und u.a. die Errichtungskosten. In den realen Anwendungen des Maschinen- und Anlagenbaus wird eine Kombination von in Abbildung 1 dargestellten Gruppen verwendet, wie es folgend anhand eines Fallbeispiels erläutert wird. 3 Industrial-Ethernet-Basierte verteilte Automatisierungssysteme Die Abbildung 2 stellt schematisch den Informationsweg von Daten einer automatisierungstechnischen Anlage dar. Angefangen bei einem technischen Prozess, wie es links in der Abbildung 2 zu sehen ist, nimmt ein Sensor den benötigten Messwert vom technischen Prozess auf und wandelt diesen in ein elektrisches Analog-Signal um. Dann wird das Analog-Signal in ein digitales umgewandelt, in einem Datenpaket verpackt und durch das Netzwerk an den Netzwerkadapter der CPU gesendet. Die Daten werden empfangen, verarbeitet und durch einen ähnlichen Weg zu dem zuständigen Aktor gesendet. D.h. das Signal wird mit jedem einzelnen Schritt im Informationsweg verzögert. Außerdem läuft das Signal mindestens durch zwei unabhängige Netzwerkwege vom Sensor zur CPU, und von der CPU zum Aktor. Dieses Beispiel zeigt wie komplex ein einfach erscheinender Übertragungsablauf ist. Abb. 2: Übertragung von Prozesssignalen zwischen Automatisierungsgerät und technischen Prozess (vgl. [LG99]). Die Architektur eines Fallbeispiels eines verteilten vernetzten Systems ist in Abbildung 3 dargestellt. Die Idee ist, dass von jedem Punkt auf jeden Teilnehmer Zugriffsmöglich-

56 keiten bestehen. Das ermöglicht die Verteilung der Steueraufgaben auf mehrere Steuerungseinheiten, ohne dass es zusätzliche Informationstauschwege und damit verbunden Kosten, Fehler, Inkompatibilitäten usw. realisiert werden müssen. Dies wird erst mit dem Einsatz von Industrial Ethernet ermöglicht. Es ist üblich, dass Industrial Ethernet Geräte einen integrierten zwei-port-switch haben, was das Verschalten der Geräte in einer Linientopologie ermöglicht und einigen Industrial Ethernet Ausprägungen zu Grunde liegt. Eine komplexe vernetzte Anlage der Zukunft wird verschiedene Topologie- und Kommunikations-Typen vereinen. Einerseits werden harte Echtzeit-Kanäle (bspw. für die Prozesswerte) gefordert, anderseits auch Datenaustauschmöglichkeiten mittels Standard-Ethernet (bspw. für die Prozessleitwarte). Endgeräte sind so intelligent geworden, dass die meisten Hersteller zusätzliche Dienste, wie OPC-Server oder Web- Interfaces integrieren. Es wird von einer Anlage ausgegangen, die Geräte beinhaltet, die für langsam ablaufende Prozesse bestimmt sind und keine Hart-Echtzeit-Kanäle nutzen. Außerdem sind nur wenige Geräte integriert die harte Echtzeitanforderungen erfüllen müssen. Abb. 3: Eine Beispielarchitektur des verteilten Automatisierungssystems Die Vorteile des parallelen Betriebes von zeitkritischen und nicht zeitkritischen Datenaustausch stellen eine Herausforderung bei der Netzwerkplanung und Netzwerkanalyse dar. Der Entwurf eines verteilten vernetzen Systems erfordert u.a., dass die Topologie sowie die Parameter des Datenaustauschs wie beispielsweise die Zykluszeiten festgelegt werden. Durch nicht-echtzeit-daten entsteht andererseits ein zusätzliches Datenaufkommen, das in der Regel zeitlich nicht gleichmäßig verteilt ist und bei der Planung nur schwer abzuschätzen ist. Dieses zusätzliche Datenaufkommen kann niedrigpriorisierte

57 Kommunikation verzögern, und somit unerwünschte Funktionsstörungen der Anlage verursachen. Aufgrund der Punkt-zu-Punkt -Verbindung geschieht in einzelnen Teilen der Anlage eine parallele Kommunikation, d.h. es kann nicht genau festgelegt werden in welchem Netzabschnitt sich welches Telegramm befindet. Außerdem werden Telegramme, die zeitgleich an einem Switch von mehreren Kanälen eintreten, zwischengespeichert und erst nacheinander weitergeleitet oder sie werden gar verworfen, falls die Speicherkapazität des Switches erschöpft ist. Das kann dazu führen, dass sämtliche Zeitanforderungen überschritten werden. Heutzutage existiert kein Werkzeug, das die komplette Kommunikation zu jedem Zeitpunkt in jedem Netzwerkteil erfassen könnte [We10]. Falls obengenannte Probleme in den Anlagen auftreten, sind deren Ursachen nur schwer zu entdecken, wenn dies überhaupt möglich ist. Existierende Werkzeuge lassen das Datenaufkommen an einem Punkt des Netzwerkes überwachen, was jedoch keine Aussage zulässt wann, wie, wo und/oder warum ein Datenpaket verzögert wurde. Ein Entwurf eines Werkzeuges, welches den gesamten Netzverkehr zeitgleich überwachen könnte, ist aus den technischen Gründen derzeit nicht möglich [We10]. Außerdem wurde das Zeitverhalten komplexer verteilter Netzwerksysteme nur gering untersucht [Gr07]. Der Einsatz einer geeigneten Modellierung könnte den Anlagenbau schon von der Planungsphase stark unterstützen [Gr07]. Außerdem bietet ein Modell die Möglichkeit der Analyse, die an einer realen Anlage nicht möglich wäre. Aufgrund der Komplexität von verteilten Automatisierungssystemen und nicht-deterministischem Kommunikationsverhalten von nicht-hart-echtzeit-datenübertragung, muss eine geeignete Modellierungssprache ausgewählt werden, die für eine Performanz-Analyse geeignet ist. 4 Modellierung von Automatisierungsarchitekturen In [WV08] wird ein Modellierungsansatz für Netzwerke gezeigt. Durch eine Sammlung von Basiskomponenten, welche die Grundbausteine des Netzwerkdiagramms definieren, wird ein Beschreibungsmittel zur Planung von Netzwerken vorgestellt. Das Beschreibungsmittel ist hersteller- und geräteunabhängig und gewährleistet die logische und physikalische Planung Ethernet-Basierter Netzwerke. Die Verknüpfung der Sensoren und Aktoren, die über einen eigenen Feldbusanschluss verfügen, mit dem technischen Prozess, geschieht über eine strukturierte tabellarische Form. Die Bezeichnungen der Komponenten sind angelehnt an die Bezeichnung aus dem zugrundeliegenden R&I- Fließbild. Das durch die Modellierung entstandene und mit dem technischen System verknüpfte Beschreibungsmittel kann durch Echtzeitanforderungen ergänzt werden. Durch Parametrierung der Grundbausteine können zeitliche Verzögerungen über die Wirkungskette angegeben werden (vgl. [LG99, WV08]). Beispielsweise können dadurch die maximalen Reaktionszeiten, maximalen Totzeiten und Synchronitäten mit einer maximalen Abweichung definiert werden. Als graphisches Element werden die Echtzeitanforderungen mittels Langrunds und den dazugehörigen Parametern direkt im Netzwerkdiagramm eingezeichnet.
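Die genannten Parameter einer solchen Echtzeitanforderung lassen sich, rein illustrativ und mit hypothetischen Namen, etwa so als Datenstruktur fassen:

```java
// Skizze: Echtzeitanforderung entlang einer Wirkungskette mit maximaler
// Reaktionszeit, maximaler Totzeit und zulaessiger Abweichung der Synchronitaet.
// Namen und Zahlenwerte sind hypothetisch und dienen nur der Veranschaulichung.
class Echtzeitanforderung {
    final String wirkungskette;          // z.B. "Sensor-Port -> SPS -> Aktor-Port"
    final double maxReaktionszeitMs;
    final double maxTotzeitMs;
    final double maxSynchronAbweichungMs; // zulaessige Abweichung gleichzeitiger Ausgaben

    Echtzeitanforderung(String wirkungskette, double maxReaktionszeitMs,
                        double maxTotzeitMs, double maxSynchronAbweichungMs) {
        this.wirkungskette = wirkungskette;
        this.maxReaktionszeitMs = maxReaktionszeitMs;
        this.maxTotzeitMs = maxTotzeitMs;
        this.maxSynchronAbweichungMs = maxSynchronAbweichungMs;
    }
}

public class AnforderungDemo {
    public static void main(String[] args) {
        Echtzeitanforderung anf =
            new Echtzeitanforderung("Sensor-Port -> SPS -> Aktor-Port", 50.0, 10.0, 1.0);
        System.out.println("max. Reaktionszeit: " + anf.maxReaktionszeitMs + " ms");
    }
}
```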

Das Beschreibungsmittel dient in diesem Artikel als grundlegendes Beschreibungsmittel für automatisierungstechnische Netzwerke.

5 Modellierung von Kommunikationssystemen

Zur Modellierung von Netzwerken wurde im Rahmen eigener Arbeiten eine angepasste Form des Beschreibungsmittels entwickelt, basierend auf den Arbeiten von [WV08]. In Abbildung 4 wird die Modellierung des Kommunikationssystems der Beispielarchitektur von der Streuung, der Formstraße und der kontinuierlichen Antriebe/Presse des technischen Prozesses verdeutlicht. Zugunsten der Übersichtlichkeit wurden in der Abbildung 4 die Echtzeitanforderungen an das Kommunikationssystem und die Verknüpfung des Kommunikationssystems mit dem technischen System ausgeblendet. Die Operator-Station (OS) ist an oberster Ebene der Modellierung angeordnet. Da es sich bei der Operator-Station um einen nicht-echtzeitfähigen Übertragungskanal handelt, wird dies gekennzeichnet. Die Übertragung von Daten geschieht über einen Ethernet-basierten (ETH) Kommunikationskanal, der über einen Switch mit der SPS (CPU) verbunden ist, wobei die SPS über einen echtzeitfähigen Kommunikationskanal verfügt (PN1). Auf unterster Ebene der Abbildung 4 werden die Schnittstellen zur Feldebene angedeutet. Hierbei werden die Gerätetypen entsprechend ihrer Art gekennzeichnet. Die beiden Module auf der linken Seite der Abbildung 4 sind mit einer Ethernet-basierten Anschaltung versehen. Bei dem dritten Modul von links und dem Modul auf der rechten Seite der Abbildung 4 handelt es sich um Feldgeräte mit überlagerter Netzwerkaufgabe. Die mit IRT gekennzeichneten Module sind Feldgeräte mit echtzeitfähiger Busanschaltung.

Abb. 4: Modellierung (rechts) der Automatisierungsarchitektur (links) - angelehnt an [WV08]

Die Abbildung 4 zeigt, dass das Beschreibungsmittel Strukturierung, Hierarchisierung und zur Darstellung der Bustopologie Übersichtlichkeit bietet. Ebenfalls sind in der

Abbildung kritische Pfade ersichtlich. Ein kritischer Pfad bezeichnet jenen Feldbusstrang, an dem zum einen echtzeitkritische Informationen (harte Echtzeit, gekennzeichnet durch IRT-Geräte) und zum anderen nicht-echtzeitkritische Informationen übertragen werden. Dadurch, dass Echtzeit- und Nicht-Echtzeit-Informationen über einen Feldbusstrang gesendet werden, wird die Übertragungsrate für Nicht-Echtzeit-Informationen durch die festgelegten Zeitrahmen der Echtzeit-Teilnehmer zum Senden und Empfangen reduziert. Da die nicht-echtzeitkritischen Informationen nicht-deterministisch gesendet werden, kann nicht garantiert werden, wann die versendete Information beim Empfänger (in Abbildung 4 die CPU) ankommt. Besonders kritisch ist der Fall, dass alle ETH-Teilnehmer Informationen über den Feldbusstrang senden wollen und dadurch die Leitung belegt ist, so dass manche ETH-Teilnehmer nicht senden können. Die CPU würde durch die fehlenden Nachrichten schlimmstenfalls den Teilnehmer als ausgefallen melden, so dass die Anlage in einen sicheren Zustand gefahren würde, was wirtschaftlichen Schaden durch Produktionsausfall zur Folge hätte. Um die kritischen Pfade in der Planung und Auslegung des Kommunikationssystems zu kennzeichnen, wurde das Beschreibungsmittel von [WV08] angepasst. Die Erweiterung ist in Abbildung 5 abstrahiert durch zwei zusätzliche Parameter an den Feldbussträngen gezeigt. Durch die örtliche Trennung von Operator-Station, SPS (speicherprogrammierbare Steuerung) und dezentraler Peripherie ergibt sich, dass sich u.a. sowohl verschiedene Feldbussysteme als auch unterschiedliche Leitungslängen in der Architektur befinden.

Δt_trans: Übertragungszeit bei gegebener Kabellänge; Δt_Switch: Übertragungszeit bei gegebener Bearbeitungsdauer des Switches
Abb. 5: Modellierung der Beispielarchitektur unter Angabe von Übertragungs- und Bearbeitungszeiten - angelehnt an [WV08]

Somit muss eine durchschnittliche Übertragungszeit (t_trans) sowohl für den Echtzeitkanal als auch für den Nicht-Echtzeitkanal angegeben werden, um kenntlich zu machen, wie viel Bandbreite den Nicht-Echtzeit-Informationen zur Verfügung steht. Damit ergibt sich, dass die Übertragungszeit die Summe der Übertragungszeit des Echtzeitkanals und der Übertragungszeit des Nicht-Echtzeitkanals ist (t_trans = t_trans,rt + t_trans,nrt; rt: real-time, nrt: non-real-time). Dadurch, dass die Informationen über den Nicht-Echtzeitkanal nichtdeterministisch sind, kann eine exakte Übertragungszeit nicht angegeben werden.

Hieraus resultiert, dass eine mittlere Übertragungszeit und eine Abweichung angegeben werden müssen (t_trans,nrt = t_mtrans,nrt ± Δt_trans,nrt; t_mtrans,nrt: mittlere Übertragungszeit, Δt_trans,nrt: zeitliche Abweichung von der mittleren Übertragungszeit). Gleiches gilt für die Bearbeitungszeiten der Switches (t_Switch in Abbildung 5). Wie bei der Übertragungszeit über den Feldbusstrang wird die Verarbeitungszeit des Switches durch den Echtzeitkanal für Nicht-Echtzeit-Informationen eingeschränkt. Wollen mehrere Teilnehmer Nicht-Echtzeit-Informationen versenden, kann es passieren, dass der Switch durch die Menge an Daten überlastet ist (bspw. weil der Puffer im Switch voll ist) und Datenpakete verloren gehen.

Die Übertragungs- und Bearbeitungszeiten manuell zu errechnen, würde einen hohen Zeitaufwand bedeuten und ist oft, wenn überhaupt möglich, nur durch Fachpersonal (Mathematiker, Informatiker etc.) zu bewerkstelligen. Somit müssen Eigenschaften und Verhalten von Netzwerkteilnehmern in die Modellierung integriert werden. Aufgrund der Vielzahl an Varianten von Netzwerkteilnehmern, die auf dem Markt verfügbar sind, ist es wegen der Zeitintensität der Beschreibung kaum möglich, jedes Gerät einzeln mit Verhalten und Struktur in der Modellierung abzubilden.

Abb. 6: Dekomposition eines Netzwerkteilnehmers und Verhaltensmodellierung

Die grundlegende Idee, um die Varianten in der Modellierung abzubilden, ist es, die ganzheitlichen Netzwerkteilnehmer in ihre internen Bestandteile zu zerlegen und dadurch die Dekomposition des Netzwerkteilnehmers abzubilden. Die Netzwerkteilnehmer müssen derart dekomponiert werden, dass Module im Sinne der Modularität und Wiederverwendung entwickelt werden. Durch dieses Vorgehen werden Varianten verschiedener Netzwerkteilnehmer abgebildet. Die einzelnen Komponenten werden attribuiert (bspw. mit der Pufferkapazität) und mit Zeitverhalten versehen. Außerdem weisen die einzelnen Komponenten einen hohen Verzweigungsgrad hinsichtlich des Datenaustauschs untereinander auf. Somit muss eine geeignete Beschreibung der Synchronisation und des Datenaustauschs herangezogen werden. In zukünftigen eigenen Forschungsarbeiten soll herausgearbeitet werden, welches Beschreibungsmittel bzw. welche Kombination von Beschreibungsmitteln für die Modellierung und Performanz-Analyse herangezogen werden kann. Im Folgenden werden Techniken zur Performanz-Analyse vorgestellt.
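Zur Veranschaulichung der oben eingeführten Größen zeigt die folgende Skizze, wie sich aus Übertragungszeiten (t_trans) und Switch-Bearbeitungszeiten (t_Switch), jeweils als Mittelwert mit Abweichung, eine grobe Ende-zu-Ende-Abschätzung entlang eines kritischen Pfades gewinnen lässt. Die Zahlenwerte sind hypothetisch, und es wird vereinfachend angenommen, dass sich die Abweichungen im ungünstigsten Fall addieren:

```java
// Skizze: Ende-zu-Ende-Verzoegerung eines kritischen Pfades als Summe der
// Segment-Uebertragungszeiten und Switch-Bearbeitungszeiten, jeweils als
// Mittelwert mit Abweichung (t = t_m +/- dt) fuer den Nicht-Echtzeitkanal.
final class Delay {
    final double mean;      // mittlere Verzoegerung in µs
    final double deviation; // Abweichung in µs
    Delay(double mean, double deviation) { this.mean = mean; this.deviation = deviation; }
    Delay plus(Delay other) { // Annahme: Abweichungen addieren sich im Worst Case
        return new Delay(mean + other.mean, deviation + other.deviation);
    }
    @Override public String toString() { return mean + " µs ± " + deviation + " µs"; }
}

public class KritischerPfad {
    public static void main(String[] args) {
        Delay[] segmente = {
            new Delay(10.0, 2.0),   // t_trans Feldbusstrang 1 (hypothetisch)
            new Delay(120.0, 60.0), // t_Switch inkl. Wartezeit auf das Echtzeitfenster
            new Delay(8.0, 1.5)     // t_trans Feldbusstrang 2 (hypothetisch)
        };
        Delay gesamt = new Delay(0, 0);
        for (Delay d : segmente) gesamt = gesamt.plus(d);
        System.out.println("Ende-zu-Ende: " + gesamt);
    }
}
```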

6 Herausforderungen bei der Netzwerk-Analyse

Industrielle Anlagen sind aufgrund ihrer Vielzahl an Komponenten und ihrer hohen Sicherheitsanforderungen hochkomplexe Systeme. Eine manuelle Analyse ist aufgrund der hohen Komplexität und des nicht-deterministischen Kommunikationsverhaltens nur schwer realisierbar; für die Verifizierung kommen meist Simulationen zum Einsatz [WV06]. In den Arbeiten von Marsal et al. [Ma05, Ma06] wird die Einsatzmöglichkeit zur Evaluation von Ethernet-basierten Echtzeitsystemen basierend auf Timed Automata (TA) [nach BEH04] und der Simulation mit Colored Petri Nets (CPN) gezeigt. Das Verhalten der Ethernet-Module, der E/A-Geräte und der Switches in der Systemarchitektur wird jeweils mit TA und CPN modelliert. Auf Basis der TAs wird durch Model-Checking geprüft, welche E/A-Scanzyklen realisierbar sind. Variabel sind bei der Verifizierung die Scan-Zyklen, so dass mehrere Verifizierungsdurchläufe durchgeführt werden müssen. Die durch das Model-Checking verifizierten Scanzyklen werden als Parameter für die Simulation verwendet, um die zeitliche Abweichung zwischen dem Versenden und dem Empfang von Daten zwischen den Geräten zu gewinnen. Die Evaluation hat gezeigt, dass beim Model-Checking je nach Kombination der Scanzyklen unerwünschtes Verhalten entsteht und es somit schwierig ist herauszufinden, ob Scanzyklen das gewünschte Verhalten erfüllen oder nicht. Auf Basis der in [Ma05] vorgestellten Arbeiten wurde von Witsch et al. [Wi06] gezeigt, wie Model-Checking für die Performanz-Analyse von Netzwerken verwendet werden kann. Hierbei wurden Richtlinien zur Modellierung von Netzwerk-Teilnehmern mittels TA erarbeitet, um die Zustandsraumexplosion zu reduzieren. Es sind TAs für Puffer, Switch und Ethernet-Koppler für die optimierte Verifizierung erstellt worden. Die erhobenen Resultate sind mit den Resultaten einer Simulation verglichen worden; der Vergleich zeigte eine Kohärenz. Das UML-Profil MARTE wird außerdem zur Modellierung verteilter Netzwerke verwendet. Die Arbeiten von [CJ09] fokussieren sich auf die Extraktion der für die Netzwerkanalyse relevanten Daten aus verschiedenen UML-Diagrammen. Die extrahierten Daten können für analytische Modelle oder für die Simulation verwendet werden.

7 Zusammenfassung & Ausblick

In diesem Beitrag ist ein Beschreibungsmittel zur Planung von automatisierungstechnischen Netzwerken präsentiert worden, mit dem Zeitanforderungen und technisch mögliche Realisierungen von Zeitanforderungen modellierungstechnisch gegenübergestellt werden. Das Beschreibungsmittel soll in zukünftigen Forschungsarbeiten als Grundlage zur Performanz-Analyse von Netzwerken verwendet werden. Aus dem Beschreibungsmittel sollen die notwendigen Informationen extrahiert werden, um an das Netzwerk gestellte Anforderungen automatisch zu verifizieren. Im Fokus der Arbeit steht die Herausforderung, die deterministischen (Echtzeit-) und die nicht-deterministischen (Nicht-Echtzeit-)Übertragungskanäle im gemeinsamen Kontext des Netzwerks automatisch zu analysieren.

62 Literaturverzeichnis [BEH04] Behrmann, G.; David, A.; Larsen, K.G.: A Tutorial on UPPAAL. Dep. of Comp. Science, Aalborg University, Denmark, [CJ09] Chise, C.; Jurca, I.;, Towards early performance assessment based on UML MARTE models for distributed systems. 5th International Symposium on Applied Computational Intelligence and Informatics. SACI '09., pp , 28-29, 2009 [De07] Denis, B.; Ruel, S.; Faure, J.-M.; Marsal, G.; Frey, G.: Measuring the impact of vertical integration on response times in ethernet fieldbuses. In: Emerging Technologies and Factory Automation, ETFA. IEEE Conference on pp , [Fe02] Felser, M.: Vom Feldbus-Krieg zur Feldbus-Koexistenz, Bulletin SEV/VSE 9/2002, felser.ch/download/fe-tr-0202.pdf. [Gr07] Greifeneder, J.: Formale Analyse des Zeitverhaltens Netzwerkbasierter Automatisierungssysteme, Techn. Univ. Kaiserslautern, Dissertation, [Ja05] Jasperneite, J.; Feld, J.: PROFINET: An Integration Platform for heterogeneous Industrial Communication Systems. In: Tagungsband 10th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA) Catania, [LG99] R. Lauber, P. Göhner; Prozessautomatisierung I, 3. Vollst. Überarb. Aufl., Springer, [Ma05] Marsal, G.; Witsch, D.; Denis, B.; Faure, J.-M.; Frey, G.: Evaluation of Real-Time Capabilities of Ethernet-based Automation Systems using Formal Verification and Simulation. In: Proc. 1Çùre Rencontres des Jeunes Chercheurs en Informatique Temps RÇ el 2005, RJCITR'05, Nancy, France, 2005, pp [Ma06] Marsal, G., Denis, B.; Faure, J.-M.; Frey, G.: Evaluation of Response Time in Ethernet-based Automation Systems. In Emerging Technologies and Factory Automation, ETFA '06. IEEE Conference on, pp , [We10] Welter, J.: Diagnosesystem für Industrial Ethernet Netzwerke, Techn. Univ. München, Dissertation, [Wi06] Witsch, D.; Vogel-Heuser, B.; Faure, J.-M.; Marsal, G.: Performance Analysis of Industrial Ethernet Networks By Means Of Timed Model-Checking. In: 12th IFAC Symposium on Information Control Problems in Manufacturing (INCOM), Etienne, France, [WV06] Witsch, D.; Vogel-Heuser, B.: Techniken zur effizienten Verifikation von Echtzeitsystemen durch Model-Checking. In: Proc. 9. Fachtagung mit Tutorium vom Mai 2006 in Braunschweig: Entwurf komplexer Automatisierungssysteme (EKA 2006), [WV08] Witsch, D.; Vogel-Heuser, B.: Modellierungsansatz für Zeitanforderungen und Kommunikationsnetze. In: Automatisierungstechnische Praxis (atp), Vol. 6, 2008.

Electric/electronic architecture model driven FlexRay configuration

Matthias Heinz, Martin Hillenbrand, K.-D. Müller-Glaser
Institute for Information Processing Technology, KIT, Karlsruhe, Germany
{heinz, hillenbrand,

Abstract: The configuration of an automotive FlexRay bus system, with its more than 70 parameters, is a challenging task. Changes in the network topology or in the signal transmissions quickly lead to an inconsistent set of parameters and thus to unstable bus behavior or even an inoperative bus. The manual configuration of FlexRay parameters can be very time-consuming and error-prone. To avoid these problems, we present a method to formally accumulate and derive configuration data based on the electric/electronic architecture model that is developed in the vehicle design phase. Our approach also includes the calculation of FlexRay parameters, frame packing and message scheduling. The data exchange between the EEA model and the configuration tool is based on a customized XML structure covering all information for a complete FlexRay configuration. The overall design flow was tested using various modeling data sets. Based on the calculated parameters, our tool was used to configure real FlexRay controllers, successfully running a hardware network.

1 Introduction

The FlexRay bus system is meanwhile well established with many car manufacturers. Due to its great flexibility, it can fulfill the various communication needs of in-vehicle communication. This flexibility, however, leads to more than 70 parameters which have to be configured to set up a correct and robust FlexRay communication. Setting up these parameters by hand can quickly lead to a faulty configuration. A change in the network topology, for example, affects propagation delays which also have to be taken into account in the parameter calculation. We developed a method to automatically extract a set of input parameters from an electric/electronic architecture (EEA) model. This data is used to feed an automatic FlexRay configuration flow. The EEA models are developed in the concept phase of the vehicle development and contain all necessary information to derive a FlexRay configuration set. The automated calculation allows for fast exploration of different bus system architectures and their corresponding configurations. This paper is organized in eight sections. A short introduction to FlexRay and EEA modeling is given in Sections 2 and 3. Section 4 presents related work on this topic. Sections 5 and 6 explain the extraction of data and the automated configuration of FlexRay parameters. Finally, Section 7 describes the verification of the approach, followed by conclusions and an outlook on future work in Section 8.

64 2 FlexRay Communication on the FlexRay bus is arranged in 64 equally long, repeating cycles. A cycle consists of a static segment, dynamic segment, symbol window and dynamic slot idle phase. The static segment holds a configurable number of slots of equal length, applying a time division multiple access (TDMA) method. Each message (frame) is numbered by a slot-id and assigned to a certain time slot at design time. The number of static slots can be configured from 2 to The dynamic segment, which is optional in the communication cycle, is used to send event triggered messages on the basis of so called mini-slots. In this flexible time division multiple access (FTDMA) schema, the number of occupied minislots varies depending on the length of the transmitted messages. The optional symbol window can be utilized for sending special symbols and is currently not employed. The concluding network idle time is used by the FlexRay communication controllers (CC) for clock synchronization. Configuration of a FlexRay network is done by setting the parameters conform to the specification [Fle05]. Generally, there are two types of FlexRay parameters. Global parameters are set for the entire FlexRay cluster while local parameters are individually set for each single node. The calculation of these parameters is based on more than 60 equations and constraints, which have to be met to allow a successful bus operation. 3 Electric/electronic architecture modeling The electric/electronic architecture (EEA) of a vehicle is developed in the concept phase where architects investigate and optimize different realizations [HHA + 10]. So the EEA model is an essential part of the development process. The EEA modeling tool PREEvision [aqu09] is applied by leading car manufacturers to model and evaluate architectural realization alternatives. We use PREEvision EEA models as basis for our FlexRay configuration flow. Since the PREEvision-Software is based on the Eclipse Framework it can easily be extended by additional software elements. PREEvision provides different perspectives to an EEA (Figure 1). For the FlexRay configuration, the function network, the component network, the wiring harness and the topology hold all relevant data. Applicable elements are ECUs, bus connectors, bus systems, clusters and active stars. Bus connectors hold so called connector descriptors which describe attributes like termination, wake up, etc. Communication is modeled using function networks. Function blocks use ports are connected by interfaces, which are implemented by ports. The interface in between holds a data element which features a certain length and a data type. The function blocks are allocated to ECUs, using mappings. This information later is used to generate signals and signal transmissions between the ECUs. Different send modes allow the assignment to the static segment, the dynamic segment or both. The extraction of modeling data out of PREEvision is done by model queries. Based on the EEA meta model, queries can be executed to search for modeling artifacts. Information

gained by these queries can be used to provide input data for calculation blocks, which can execute Java code (Figure 2). Details on the gaining of data out of an EEA model are presented in [HMG09].

Figure 1: Structure of layered E/E architecture
Figure 2: Model query and calculation block

4 Related work

Currently available commercial tools for FlexRay configuration allow a detailed configuration of all available parameters. These tools, e.g. DaVinci Network Designer FlexRay [Vec09], TTX Plan [TTT09] or FlexConfig [Ebe09], presuppose an in-depth knowledge of the dependencies between FlexRay parameters. Consistency checks keep the parameter set from violating the specified value ranges, but the consistency with a certain kind of topology and communication signals cannot be proven automatically. The user has to set the designated values himself. Several methods for frame packing, scheduling and configuration are presented in the literature. Ideas about the scheduling of the static segment are given in [LGTM09], [PPE + 06], [WT06] and [SS09b]. Some methods, for example given in [SS09b], are not compliant

with today's FlexRay controllers, due to an arbitrary cycle mask. Papers [SS09a] and [PPEP07] deal with the configuration of the dynamic segment. Papers [HE05] and [DMT05] use evolutionary search algorithms to solve the scheduling problem. The listed papers do not give an idea on how to calculate the full set of FlexRay parameters and the configuration for the communication nodes. Furthermore, there is no information on where the necessary data, on which the calculations are based, originates. Our approach covers the complete procedure from the extraction of the data from the EEA model to the overall configuration of the FlexRay parameters, down to the configuration of physical FlexRay controllers.

5 Accumulation and extraction of configuration data

The first step in setting up a FlexRay parameter configuration is to specify a calculation order which is the basis for solving the FlexRay parameter calculation formulas. The calculation order solves the problem of the occurring dependencies between the FlexRay parameters. As a next step, the input data set for the calculation has to be chosen which enables the calculation of all other parameters. The initial data set for the overall configuration flow is extracted from the EEA model using query rules. A number of rules is needed to provide the input data set for the overall calculation: FlexRay clusters, FlexRay bus systems, ECUs, FlexRay controllers, active star couplers, signal transmissions and topology-dependent data.

Figure 3: Model query rule - signal transmissions

Since there are too many query rules to depict all of them, an example of a diagram for extracting the signal transmissions is given in Figure 3. Here a signal transmission is connected to a function block which is assigned to a process unit. This process unit is part of an ECU which has a bus connector connected to a bus system. The signal transmission also belongs to this bus system. Executing this query rule returns all matching signal transmissions and their corresponding bus connectors. For storing the configuration of FlexRay networks, the widely used Field-Bus-Exchange Format (FIBEX) is used [ASA08]. The export from the EEA holds more data than FIBEX can cover. No information about topology-dependent data, controller clock frequencies or signal cycle times can be stored in FIBEX. The AUTomotive Open System ARchitecture

(AUTOSAR) ECU configuration format does hold the communication-dependent ECU parameters, but in contrast does not cover the necessary bus values we need for configuration [AUT08]. To cover all necessary information, a customized exchange format with Extensible Markup Language (XML) serialization has been created to overcome these deficiencies. Several calculation blocks in PREEvision have been used to concatenate the information gained by different query rules to establish the customized XML output file.

Figure 4: Class diagram XML import format

The structure of the data model, given in Figure 4, contains the following classes:
- A comprehensive description of all existing FlexRay clusters. It consists of protocol-dependent data and the information about the used channels.
- Detailed information about all ECUs connected to a FlexRay bus system and their associated communication controllers. It holds protocol-dependent data and information about the connected bus driver.
- Communication elements are given in the form of signals. They hold information about cyclic or event-triggered transmission, the corresponding cycle times and the length of the communication element.
- Information about the topology of the FlexRay cluster is given for the calculation of delay times. This consists of all connections between communication controllers, their length and all active star couplers in between.
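A simplified sketch of how such a data model could be represented in code is given below; the class and field names loosely follow the structure described above and are illustrative only, not the exact exchange format used by the tool chain:

```java
// Sketch only: simplified data classes for clusters, ECUs, controllers, signals and
// connections. Units and fields are assumptions made for illustration purposes.
import java.util.ArrayList;
import java.util.List;

class Cluster {
    String shortName;
    double sampleClockPeriodUs;          // cluster-wide sample clock period
    int coldStartAttempts;
    final List<Ecu> ecus = new ArrayList<>();
}

class Ecu {
    String shortName;
    final List<Controller> controllers = new ArrayList<>();
}

class Controller {
    String shortName;
    double frequencyMHz;                 // controller clock frequency
    String channelRef;                   // FlexRay channel A and/or B
}

class Signal {
    String shortName;
    String txControllerRef;
    String rxControllerRef;
    int bitLength;
    int cycleTimeMs;                     // periodic timing requirement
    String sendMode;                     // static, dynamic, or both
}

class Connection {
    String fromControllerRef;
    String toControllerRef;
    double lengthMeters;                 // used to derive propagation delays
}

class EeArchitectureExport {
    final List<Cluster> clusters = new ArrayList<>();
    final List<Signal> signals = new ArrayList<>();
    final List<Connection> connections = new ArrayList<>();
}
```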

6 FlexRay parameter calculation

Once the relevant data has been accumulated and stored, the computational phase of the FlexRay network setup begins. This uses the work product (XML-based data file) from the previous phase. The following sections describe the necessary steps to calculate the FlexRay parameters.

6.1 Dependencies and calculation sequence

To solve the problem of mutual dependencies between the parameters, we developed a calculation sequence which uses the EEA data as input set and calculates all dependent parameters step by step (Figure 5). The topmost block holds the input parameters, while all lower ones are calculated based on these. Horizontal lines separate sets of parameters which can be calculated concurrently.

Figure 5: Calculation order of global parameters
Figure 6: Calculation order of local parameters

Finishing the calculation of global parameters enables the computation of local parameters (Figure 6). For this, the microtick length of the current controller and the maximum payload length are used as input set, besides the already calculated global parameters.
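The following sketch illustrates the idea of such a fixed calculation sequence, in which every parameter formula only reads values that have already been computed. The class is illustrative only; the two example steps are simplified assumptions (the bit time derived from the sample clock period assuming eight samples per bit, and a maximum bit time assuming a maximum clock deviation of 0.15 %) and do not reproduce the complete set of specification formulas:

```java
// Sketch only (not the tool's actual implementation): parameters are evaluated in a
// precomputed order, so each formula only accesses values that are already available.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

public class CalculationSequence {
    // parameter name -> formula over the values computed so far; names are illustrative
    private final Map<String, Function<Map<String, Double>, Double>> steps = new LinkedHashMap<>();

    public void addStep(String parameter, Function<Map<String, Double>, Double> formula) {
        steps.put(parameter, formula); // insertion order = calculation order
    }

    public Map<String, Double> run(Map<String, Double> inputs) {
        Map<String, Double> values = new LinkedHashMap<>(inputs);
        for (Map.Entry<String, Function<Map<String, Double>, Double>> step : steps.entrySet()) {
            values.put(step.getKey(), step.getValue().apply(values));
        }
        return values;
    }

    public static void main(String[] args) {
        CalculationSequence seq = new CalculationSequence();
        // assumed relations for illustration: 8 samples per bit, 0.15 % max clock deviation
        seq.addStep("gdBit", v -> 8 * v.get("gdSampleClockPeriod"));
        seq.addStep("gdBitMax", v -> v.get("gdBit") * 1.0015);
        Map<String, Double> result = seq.run(Map.of("gdSampleClockPeriod", 0.0125)); // µs
        System.out.println(result);
    }
}
```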

6.2 Frame packing

To calculate the payload length of the static segment, the signals given in the EEA have to be packed together into frames. This frame packing step leads to a bin-packing optimization problem, because the frames should ideally be utilized to 100 %. This can be solved iteratively using all values from the finite set of 1 to byte words. Together with the scheduling of frames into the FlexRay cycles, this problem can be solved optimally. The imported input data set contains signals, a periodic timing requirement and the corresponding sender and receiver ECU. To get an optimal solution when packing these signals into frames, the payload-to-overhead ratio of a frame must be maximized. Dynamic frames are sent on demand by the sender and also consist of a payload and an overhead like static frames. The payload length can, in contrast, be set differently for each frame, based on the 2-byte structure. Since the appearance of dynamic frames cannot be foreseen, the length of the dynamic segment is currently set manually by the bus designer. However, our method allows the frame generation for the dynamic segment to better utilize the 2-byte blocks.

6.3 Message scheduling

Messages in the static segment are sent on the basis of FlexRay cycles. Each static frame is assigned to a free slot in the static segment. A frame can be sent in every cycle, or in every 2^n-th cycle (n = 0..6), using a so-called cycle mask. In addition, an offset can be assigned to a frame to arrange frames relative to each other. According to the specification, each slot can only be used by one controller, although different frames can be sent in different cycles [Fle05]. The key slots, used for initial clock synchronization, should have a safety margin of two slots in between [Rau08]. This guarantees a successful startup, even in the case of completely unsynchronized controller clocks. We developed three different methods to solve the scheduling problem, depending on the cycle times of the signals. The first method to calculate the cycle length uses the greatest common divisor (gcd) and is able to do a scheduling with or without jitter. For this, all cyclically transmitted signals of the import file are analyzed for their greatest common divisor. If the found solution is a valid cycle length between µs, this value is used as starting value. Signals are grouped into frames and the optimum payload length is calculated. The resulting frames are placed in the available free slots, using as few slots as possible. Additional requirements for startup and synchronization frames must also be taken into account here. If all frames are placed, the number of static slots can be calculated. The second method to calculate the cycle length tries to find a solution starting with the shortest signal period. This can be used if no valid gcd can be found. A procedure for shortening the cycle length by a defined value is applied, as in the first algorithm. This inevitably leads to signal jitter. The third method implemented also uses the shortest signal period, but here the cycle time

is lengthened stepwise if not all frames fit into the cycle. All signals featuring a shorter period than the cycle time are placed in the cycle more than once. During scheduling, these frames are prioritized, because they are dependent on certain slots in the schedule. If a valid cycle time holding all frames is calculated and scheduling is finished, the global and local parameter set can be calculated. More information about the algorithms used to perform the frame packing and to calculate the schedule is given in [HHvBMG10]. The message scheduling as well as the frame packing generate dependable solutions that can directly be used for the later bus operation.

6.4 PC application

The overall parameter calculation flow, starting from the import of accumulated content from the EEA to the output of FIBEX and communication controller configuration files, has been integrated in a Windows-based tool [HHvBMG10]. The necessary settings for the different optimization algorithms are user-selectable. The export of controller-specific configuration data, using controller host interface (CHI) files, can be completed using the integrated export.

7 Results

The presented method to accumulate relevant information from an EEA and the configuration of a FlexRay network was successfully proven by using different EEA models. The extraction of the relevant configuration data, featuring some thousand entries for a complex network, is done fault-free and within seconds. The XML-based export file provided the appropriate set of data needed for the FlexRay configuration process. The provided values have successfully been used to generate a correct FlexRay configuration. The overall process only takes minutes and leads to resilient solutions. Since our tool allows the configuration of a set of FlexRay controllers, the results could be tested on real FlexRay hardware. To verify our approach, we used 90 different sets of EEA models realizing different network complexities to test the performance of the calculations. We used a randomly selected rectangular distribution between 1-64 bit data length for the signals, 2-100 ms (2 or 4 ECUs) and ms (8 ECUs) for signal timing requirements, and a cable length for the overall network between 2-10 m. All three algorithms were successfully tested; the results for the first method using the gcd are shown in Figure 7. The utilization of the maximum bandwidth of 20 MBit/s in this example rises with the number of signals transferred per ECU. Doubling the number of ECUs does not double the utilized bandwidth due to the improved distribution of signals enabling a more efficient frame packing.

71 [Figure 7 plots the used bandwidth in MBit/s over the number of signal transmissions per ECU, with one curve each for 2, 4, and 8 ECUs.] Figure 7: Bandwidth utilization for a different number of signals and ECUs 8 Conclusion and future work In this paper we presented a method to accumulate and extract information available in an EEA model and to use it for the configuration of a FlexRay bus system. The necessary input values for the configuration process were extracted and saved in a customized data file. Based on this information, we provided methods to generate an overall FlexRay configuration. This comprises the automatic generation of frames, the creation of a schedule and the calculation of all necessary bus parameters. This automated approach helps to avoid errors and speeds up the whole configuration process, so different bus architectures can be investigated very quickly using the EEA model. The use of EEA information allows setting the configuration parameters according to the modeled topology, signals and ECU parameters, so it can be ensured that the calculated parameters fit the network architecture. The configuration of the length of the dynamic segment and the handling of protocol data units (PDU) are part of our ongoing research. References [aqu09] aquintos GmbH. E/E-Architekturwerkzeug PREEvision. [ASA08] ASAM e.V. FIBEX - Field Bus Exchange Format. Association for Standardisation of Automation and Measuring Systems, version 3.0 edition, January. [AUT08] AUTOSAR Development Partnership. Specification of ECU Configuration, release 3.0 version edition. [DMT05] Shan Ding, Naohiko Murakami, and Hiroyuki Tomiyama. A GA-based scheduling method for FlexRay systems. In EMSOFT 05: Proceedings of the 5th ACM international, pages , New York, NY, USA, ACM.

72 [Ebe09] Eberspächer Electronics GmbH & Co. KG. FlexConfig Developer User Manual. Eberspächer Electronics GmbH & Co. KG, d1v4-f edition, [Fle05] FlexRay Consortium. FlexRay Communications System - Protocol Specification Version 2.1, December Version 2.1 Revision A. [HE05] [HHA + 10] A. Hamann and R. Ernst. TDMA time slot and turn optimization with evolutionary search techniques. In Proc. Design, Automation and Test in Europe, pages Vol. 1, Martin Hillenbrand, Matthias Heinz, Nico Adler, Klaus Müller-Glaser, Johannes Matheis, and Clemens Reichmann. ISO/DIS in the Context of Electric and Electronic Architecture Modeling. In Holger Giese, editor, Architecting Critical Systems, volume 6150 of Lecture Notes in Computer Science, pages Springer Berlin / Heidelberg, [HHvBMG10] Matthias Heinz, Martin Hillenbrand, Patrick von Brunn, and K.-D. Mueller-Glaser. A FlexRay parameter calculation methodology based on the electric/electronic architecture of vehicles. In IFAC Symposium Advances in Automotive Control, Munich, [HMG09] M. Hillenbrand and K.D. Muller-Glaser. An Approach to Supply Simulations of the Functional Environment of ECUs for Hardware-in-the-Loop Test Systems Based on EE-architectures Conform to AUTOSAR. In Rapid System Prototyping, RSP 09. IEEE/IFIP International Symposium on, pages , June [LGTM09] Martin Lukasiewycz, Michael Glaß, Jürgen Teich, and Paul Milbredt. FlexRay schedule optimization of the static segment. In CODES+ISSS 09: Proceedings of the 7th IEEE/ACM international conference on Hardware/software codesign and system synthesis, pages , New York, NY, USA, ACM. [PPE + 06] [PPEP07] T. Pop, P. Pop, P. Eles, Z. Peng, and A. Andrei. Timing analysis of the FlexRay communication protocol. In Real-Time Systems, th Euromicro Conference on, pages 11 pp., T. Pop, P. Pop, P. Eles, and Z. Peng. Bus Access Optimisation for FlexRay-based Distributed Embedded Systems. In Design, Automation & Test in Europe Conference & Exhibition, DATE 07, pages 1 6, April [Rau08] Mathias Rausch. FlexRay : Grundlagen, Funktionsweise, Anwendung; mit 59 Tabellen. Hanser, München, Wien, [SS09a] [SS09b] E.G. Schmidt and K. Schmidt. Message Scheduling for the FlexRay Protocol: The Dynamic Segment. Vehicular Technology, IEEE Transactions on, 58(5): , jun K. Schmidt and E.G. Schmidt. Message Scheduling for the FlexRay Protocol: The Static Segment. Vehicular Technology, IEEE Transactions on, 58(5): , jun [TTT09] TTTech Automotive GmbH. TTX Plan, [Vec09] Vector Informatik GmbH. DaVinci Network Designer FlexRay, V [WT06] E. Wandeler and L. Thiele. Optimal TDMA time slot and cycle length allocation for hard real-time systems. In Proc. Asia and South Pacific Conference on Design Automation, pages 6 pp., 2006.

73 Comparing Continuous Behaviour in Model-based Development of Embedded Software Jacob Palczynski, Carsten Weise, Sebastian Moj, Stefan Kowalewski Lehrstuhl für Informatik 11, RWTH Aachen Abstract: In many embedded systems, continuous signals play an important role as input and output signals. Being the interface to the environment, continuous signals are omnipresent in model-based development of controller software. Validation and verification activities need to compare the behavior of the model with the real system or a reference system. While behavioral equivalences are well established for the pure discrete part, there is no obvious choice for the best way to compare continuous behavior. In this paper, we present an approach to the comparison of continuous signals that avoids pointwise equality of signal values and instead focuses on similarities of the observable signals. 1 Introduction Back-to-back testing is a well known and recommended method for validating code generated automatically from models; test suites can be used even to validate code generators [SWC05]. Two systems, one being the system under test (SuT), the other serving as oracle, are stimulated with the same inputs. Depending on the test objective, the test outcomes, e.g. output, visited states, taken paths, are compared. When the inner structure of a SuT is not the objective but the outside behaviour, the SuT is treated as black box. When it comes to embedded software development, one has to cope with behaviour continuous both in time and value. Signals provided by the controlled system often are not discrete but are continuous originally and quasi-continuous after sampling. Comparison of real values is hard with digital computers. Further, continuous signals are often seen as matching even if they are not pointwise identical. Thus for comparison of the behaviours of two systems exhibited in continuous signals, we have to think about different ways and measures. In fact, there is a need for such tests, since the possibility of different behaviours of both model and generated code is observed in industrial models [PGCN05]. We suggest a similarity relation involving abstract modelling of a system s behaviour. A system is treated as a black box, the model is a set of sequences of the system s observable variables partial functions. On the one hand we have the input signal runs representing the stimuli, on the other hand the reaction is manifested in the output signal sequences. Each collection of input and output signal runs is combined to a so called behaviour, a set of several behaviours forms the system s model. As will become clear, the abstraction comes in with the representation of the signal runs, which we describe based on the mathematical properties of the signal function and omit exact information on time and value.

74 Two systems are similar with respect to an abstract model, if their signal runs obtained by tests or simulation conform to the behaviours defined in the abstract model. A central aspect when putting this approach in practice is the question how to decide whether a signal conforms to an abstract model. In order to tackle this question, we generate reference signals of the single properties of a signal s abstract model and search for occurrences in the sampled continuous signal. We do this by computing the cross correlation of reference and sampled signal, validating these results with mean square error and other metrics and finally checking the order and timing of the found occurrences. The abstract models are inspired by Wiesbrock [GW07, GSW08] who introduced a propertybased modelling of signal runs. This technique originally is used to define test cases, which also cannot have exact stimuli, e.g. test drives with cars. We first used this models for the most abstract level in a modelling hierarchy for control plants [PK09]. Wiesbrock suggested a methodology for comparing signals in [WCFP02] where, amongst others, cross correlation is used as metric. In contrast to our work, the signals are supposed to be discrete in both time and value, sampled with a equidistant rate, and to have the same duration. We consider the reference signals as short in comparison, can deal with variable sampling rates, and we take into account that the underlying signals are continuous. With our work we want to contribute to other existing testing techniques, e.g. [Zan08, BK06]. There rarely the problem of continuous data is approached directly, and rather the data is assumed to be discrete after sampling. In the following section we introduce our approach for evaluation of continuous signal data obtained from back-to-back tests, including formal definitions of the suggested similarity relation. Afterwards we focus on the implementation of our approach in Matlab and discuss some additional challenges, raised by this. We conclude by summing up our results and give an outlook on further work. 2 Similarity with Respect to Abstract Models First we introduce the general idea of our approach for behaviour-based testing of embedded systems. Our method is based on a conformance relation between two systems A and B described by their discrete and continuous behaviour. In this paper, we assume that for the discrete part a suitable conformance relation is given, and focus on the continuous behaviour of a system only. The continuous behaviour of a system is given by a set of signals, in pairs of input/output signals. The basic conformance relation we are using in this paper is a relation between two continuous behaviours involving an abstract behavioural model. While we introduce our notion of input and output signals later in an informal way, we define here the syntax and semantics of behaviour and system. Definition 2.1 (Syntax of behaviour and system). A behaviour is a triple B = (I, O, χ), where I is a set of m inputs, O is a set of n outputs (where m, n N, n > 0), and χ a finite set of interdependencies. A system S is a finite set of behaviours S = {B 1,..., B k k N + } where all behaviours have the same number of in- and outputs.
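Definition 2.1 can be mirrored almost literally as a data structure. The following Python sketch is purely illustrative; the class and field names are ours and do not appear in the paper.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

# A signal run assigns a value to every point in (non-negative) time.
SignalRun = Callable[[float], float]

@dataclass(frozen=True)
class Behaviour:
    inputs: Tuple[str, ...]                          # names of the m input signals
    outputs: Tuple[str, ...]                         # names of the n output signals
    interdependencies: Tuple[Tuple[str, str], ...]   # causal links between input and output events

@dataclass(frozen=True)
class System:
    behaviours: Tuple[Behaviour, ...]                # finite, non-empty set of behaviours

    def __post_init__(self):
        # all behaviours must agree on the number of inputs and outputs
        assert self.behaviours, "a system consists of at least one behaviour"
        m, n = len(self.behaviours[0].inputs), len(self.behaviours[0].outputs)
        assert all(len(b.inputs) == m and len(b.outputs) == n for b in self.behaviours)
```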

75 Figure 1: Left: Similarity conformance; if the behaviours of both systems conform to the abstract behavioural model, their behaviours are similar with regard to this model. Right: Testing using the similarity conformance. Definition 2.2 (Semantics of a behaviour). Let B = (I, O, χ) be a behaviour, where I = {I_1, ..., I_m} and O = {O_1, ..., O_n}. Let V_1^I, ..., V_m^I, V_1^O, ..., V_n^O be suitable value domains for the inputs and outputs, resp. Then the semantics of B, written as b(B), is a set of n + m families of functions, where each family consists of functions R_0^+ → V_i^I resp. R_0^+ → V_j^O. The idea of our semantics is simple: a behaviour induces a number of function families. Each family consists of functions over the positive reals as time domain, one family for each input and each output. The values of the functions depend on the behaviour itself, and will often be real values as well, e.g. when the functions represent velocity, distance, pressure etc. For a behaviour, each of the function families in its semantics is an admissible evolution of its inputs and outputs. With the abstract behavioural models at hand, we can now define our conformance relation. As this is in fact a containment relation, we abuse notation and reuse the usual set inclusion symbol for our relation. Definition 2.3 (Conformance). A behaviour A conforms to a behaviour B (written A ⊆ B), iff A is contained in the set of behaviours allowed by B, b(B): A ⊆ B :⇔ b(A) ⊆ b(B). A behaviour A conforms to a system S (written A ⊆ S) iff there exists a behaviour B in S such that A conforms to B: A ⊆ S :⇔ ∃ B ∈ S : A ⊆ B. This definition can easily be extended to an analogous definition of similarity of two systems showing continuous behaviours. Definition 2.4 (Similarity). Two behaviours A and B are similar w.r.t. a system model S iff both behaviours conform to the same behaviour S' of the behavioural model S: A ∼_S B :⇔ ∃ S' ∈ S : A ⊆ S' ∧ B ⊆ S'. Two systems A and B are similar w.r.t. a system model S, iff for every continuous behaviour A' of A there exists a continuous behaviour B' of B, such that A' ∼_S B', and vice versa. This relation can be exploited in several ways; we will now present two of them. In the model-based development of embedded software we usually have two kinds of artifacts, models and code, where the code is derived from the models manually or automatically. The question whether both of them behave in a similar way can be answered by back-to-back testing, where the code is the SuT, while the model serves as oracle. Since we are

76 dealing with embedded systems, continuous variables are a crucial issue, and they can show different behaviours in both kind of artifacts as was reported, e.g., in [PGCN05]. Still, both behaviours, though different, can be acceptable with regard to a specification. Taking advantage of abstract behavioural models and the relation defined above, we can decide, whether the behaviours of two systems conform to the abstract model and therefore the systems themselves are similar w.r.t. this model (Fig. 1, left). We generalise this scenario to a different one, where we want to validate, whether a system showing continuous behaviour conforms to its specification (Fig. 1, right). From this specification accurate continuous reference signals are created which serve as oracle now. Again, we can take advantage of abstract models and the conformance relation. The continuous behaviour we obtained from the specification can be defined based on abstract models; we already have introduced the use of abstract models for specification in [PK09]. If we can show that the continuous behaviour of a SuT lies in the set of allowed behaviours of the abstract models, we can show that our SuT implements its specification with regard to the abstract model. The test suite with the test cases can be derived from the specification and our test stimuli are sets of continuous signals. For test execution, the input signal is fed to the SuT, and the output of the signal is measured. Then the output signal from the SuT is checked for the conformance with the output signal of the test case specification. The test case is passed when the output signals conform, the test suite is passed when all test cases are passed. In Def. 2.1 and 2.2 it becomes clear that we model every system as a black box and only the defined inputs and outputs can be treated, unknown inputs cannot be processed. Each of a system s behaviours contains two sets of signals, representing inputs and outputs, resp. and the interdependency relation between the signals components. For the signals representation we introduce three different levels of abstraction. On signal function level (SFL), we describe the complete signal by a sequence of function fragments. Time intervals between two elements of a continuous set T are labeled with a function of the form y = f (t). As each of these function fragments has a unique value at each point in time, and intervals do not overlap, the complete signal is fully and uniquely defined on this level. Time metric level (TML) serves as intermediate level for our computations. The intervals are labeled with elements of a set containing natural language representations of mathematical functions properties that have a duration we call such characteristics phases. The information on the duration is not a mandatory part in a TML specification, but we will see that this becomes convenient on the next level. The points in time are also labeled with natural language descriptions of characteristics of mathematical functions, but in contrast to the intervals labels these represent events, i.e. characteristics that take place instantaneously. Event labels can contain information on time, but this information does not refer to the duration but to the allowed times of occurrence instead. On property flow level (PFL), both kinds of labels are attached to the elements of a bipartite graph: two sets containing vertices representing events and phases, respectively. 
Every vertex in such a graph has at most one predecessor and one successor, except the start and end events. On this level ambiguities i.e. nondeterminism in the signal are not limited to the function values only, but can now occur in the timing of the properties as well. Interdependencies connect events of input and output signals and represent causalities.
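As an illustration of a PFL representation, the sketch below models a signal as an alternating chain of events and phases with optional timing annotations and checks whether a set of detected occurrences respects the prescribed order; the timing check of the later sections would work analogously. All names and the concrete encoding are our own assumptions, not the paper's.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Event:
    label: str                                        # e.g. "local maximum"
    window: Optional[Tuple[float, float]] = None      # allowed times of occurrence

@dataclass
class Phase:
    label: str                                        # e.g. "ramp", "constant"
    duration: Optional[Tuple[float, float]] = None    # allowed duration (min, max)

# A PFL signal: start event, then alternating phases and events.
PFLSignal = List[object]

def order_ok(pfl: PFLSignal, occurrences: List[Tuple[str, float]]) -> bool:
    """occurrences: (label, time) pairs found in the sampled signal.
    True iff the labels appear in the order prescribed by the PFL chain."""
    expected = [elem.label for elem in pfl]
    found = [label for label, _ in sorted(occurrences, key=lambda o: o[1])]
    return found == expected
```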

77 Figure 2: Left: Abstraction levels and their associations. Right: Class diagram of abstract model hierarchy. Systems are composed of behaviours which themselves are composed of signals of different abstraction levels. The left side of Fig. 2 illustrates how signals on the different abstraction levels can be associated. Events on PFL are associated with elements of T on TML and SFL, and phases are associated with intervals between two succeeding points in time. Events' and phases' labels are linked to the mathematical functions used on SFL to describe the exact value of the signal. In particular, we can order the abstraction levels, PFL being the most abstract one, SFL the least, and TML lying in between. We omit a formal definition here due to space limitations. Our main goal is to check the similarity of different systems' behaviour based on the input and output signals. Usually the information on the signal has the form of sampled continuous data, so we need to connect this data with our definitions. A straightforward way would be to check whether a whole sampled signal lies in the semantics of an abstract signal. This method is not feasible on higher abstraction levels, since the single properties have to be identified in the signal beforehand. Therefore we have to take a different approach and associate sampled continuous data with models on higher abstraction levels, in particular on PFL. We represent the continuous data (i.e. samples of a continuous signal) as a set of tuples that map real values to specific time points: C = {(t_1, x_1), ..., (t_h, x_h) | t_i ∈ R^+, t_i < t_{i+1}, h ∈ N^+, x_i ∈ R}. Lemmas and corollaries derived from the following theorem allow us to formulate the conformance of signals of different abstraction levels based on their structure. Theorem 2.5. Let C be sampled continuous signal data and P a behaviour on PFL. Then C ⊆ P iff there exists a behaviour R on SFL such that C ∈ b(R) and R ⊆ P. 3 Conformance Check in Practice We now focus on the practical aspects of our approach, in particular implementing the abstract behavioural models and checking the conformance. We will present some results of the latter with the help of an example signal taken from the automotive domain. Since

78 we want to support developers of embedded software, we have chosen a well known tool chain in this domain, Matlab/Simulink [Mat]. This development environment provides all necessary tools for designing control systems, e.g. programming language, graphical modelling environment, and a simulator. Implementing a prototypical version of our tool with Matlab allows us to offer a environment known to the developers and to ease the integration of our methodology in existing processes. Data obtained in simulations can be processed directly, the results are presented to the user in the same environment. For both abstract models and algorithms we choose an object oriented implementation; the simplified class diagram of the abstract models is depicted in Fig. 2, right. The classes are defined according to the formal definitions, thus a system model is composed of at least one behaviour. Each behaviour contains input and output signals and interdependencies between them. A signal can contain representations of every abstraction level, i.e. on PFL build up by events and phases, and absolute time signals on TML and SFL. One observes that it is possible to describe a signal for example not only by one SFL signal, but by several, as long as they do not contradict representations on higher abstraction levels. Additionally, for each property we have to store reference signals and occurrences, i.e. the data needed for our computations and their results. Figure 3: Left: Conformance checking of continuous signal data with an abstract model; continuous data origins from test execution. Right: Detailed work flow of property identification. We check the conformance of sampled continuous data w.r.t. an abstract model following the work flow depicted on the left side of Fig. 3. After generating reference signals for each properties, we identify their occurrences in the sampled signal. If every required property occurs in the signal, we proceed with comparing the order of these occurrences with the order of the properties in the signal s abstract representation. With the order of each signal validated, we can turn on checking the timing of the occurrences; the times at which the properties are calculated to occur have to lie within the intervals allowed

79 by the properties' labels. While the latter two steps can be solved straightforwardly, the identification of occurrences of properties within a given signal is a more difficult task. Again, we omit the formal background for the suggested process and present the work flow itself (Fig. 3, right). Cross correlation is a well known method to measure the similarity of two signals. Matlab provides a built-in function xcorr computing the cross correlation of two arrays: R̂_xy(m) = Σ_{n=0}^{N−m−1} x_{n+m} y_n for m ≥ 0, and R̂_xy(m) = R̂_yx(−m) for m < 0. (1) Thus this implementation relies on discrete data equidistantly distributed in time. When applied to signals of different length sampled with different rates, xcorr's results are not convincing [PWK10]. Using resampled signals yields slightly better results, but still occurrences are not found correctly. We want to deal with continuous data sampled at a non-constant rate due to, e.g., jitter, clock drift, etc. Therefore we suggest to compute the cross correlation according to the definition for continuous functions: R_xy(τ) = lim_{T_F→∞} (1/T_F) ∫_{−T_F/2}^{T_F/2} x(t) y(t − τ) dt, where the integral is abbreviated as I. (2) From both definitions it becomes clear that two functions are shifted against each other, and their products integrated. In our case one of the functions is the sampled signal, the other the reference of the property. In order to use (2), we have to reconstruct the functions of both signals by interpolation. After comparing several methods, we decided to focus on piecewise polynomials to solve this task. For this we use Matlab's interp1 function; the results are the coefficients of piecewise polynomials. These coefficients are constant on intervals lying between the shifted sample times; we use this fact to subdivide I into several I_l. For this we have to compute intervals depending on the shift τ, thus we calculate an increasing sequence T(τ) = (t_0, ..., t_m) with t_l ∈ T_x ∪ (T_y + τ), t_0 = max(min(T_x), min(T_y + τ)), and t_m = min(max(T_x), max(T_y + τ)). T_x and T_y are the sets of sampling times, i.e. T_x = {t | (t, x) ∈ C_x} and T_y = {t | (t, y) ∈ C_y}, C_x and C_y being the sampled data sets of signals x and y, resp. (cf. (2)). We set δ_l = t_{l+1} − t_l and x_i(t) = x_{t_l,i} for t ∈ [t_l, t_{l+1}] (y_{t_l,j} is defined analogously), and sum up the I_l over all intervals. The limit in (2) is not applicable in our case, since our data has limited range in time, and thus we have to substitute it. The term lim_{T_F→∞} 1/T_F is used to scale the result of I, so we have to introduce a different scaling. In contrast to [PWK10], where we chose the finite length of the signal, we here suggest to compute a scaling factor dependent on the reference signal. No signal should be more similar to the reference signal than itself, thus we compute the unscaled cross correlation of the reference signal with itself, R^u_xx(0), called autocorrelation, and use this as scaling factor. In the end we obtain a polynomial of τ with the same degree n as the original interpolation polynomial: R_xy(τ) = (1/R^u_xx(0)) Σ_{l=0}^{m−1} Σ_{s=0}^{n} Σ_{r=s}^{n} Σ_{i=0}^{n} (−1)^r (δ_l^{i+r−s+1}/(i+r−s+1)) x_{t_l,i} y_{t_l−τ,r} τ^s. (3)
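One way to approximate Eq. (2) on finite, irregularly sampled data is to interpolate both signals piecewise linearly and integrate their product numerically over the overlapping time range for each candidate shift, normalising by the reference autocorrelation as described above. The following NumPy sketch does exactly that; it is a simplified numerical stand-in for the closed-form axcorr of Eq. (3), and the function names and parameters are hypothetical.

```python
import numpy as np

def axcorr_lin(t_x, x, t_y, y, taus, n_grid=400):
    """Cross correlation of two irregularly sampled signals for the given shifts.

    Both signals are reconstructed by piecewise linear interpolation and the
    product is integrated over the overlapping time range; the result is
    normalised by the unscaled autocorrelation of the reference y at shift 0."""
    def corr(ta, a, tb, b):
        lo, hi = max(ta[0], tb[0]), min(ta[-1], tb[-1])
        if lo >= hi:
            return 0.0
        grid = np.linspace(lo, hi, n_grid)
        return np.trapz(np.interp(grid, ta, a) * np.interp(grid, tb, b), grid)

    scale = corr(t_y, y, t_y, y)              # unscaled autocorrelation of the reference
    return np.array([corr(t_x, x, t_y + tau, y) for tau in taus]) / scale

# hypothetical usage: a rectangular test signal and a short rising reference
t_sig = np.array([0.0, 0.5, 1.0, 1.5, 2.0]); sig = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
t_ref = np.array([0.0, 0.5, 1.0]);            ref = np.array([0.0, 1.0, 1.0])
print(axcorr_lin(t_sig, sig, t_ref, ref, taus=np.arange(0.0, 1.6, 0.1)))
```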

80 This computation of cross correlation takes place in axcorr, our Matlab implementation of Eq. (3). With the axcorr at hand we determine shifts of the reference signal against the test signal, at which the normalised cross correlation shows values close to 1. Since the values of the cross correlation function indicate the time of a property s occurrences, we have to validate the results of the step mentioned above. Additionally, we cannot conclude from the value how big the differences from the reference are, and, therefore, whether the test signal at that time conforms to the property. We choose to evaluate the deviations with the help of the mean square error, as well as the absolute maximum positive and negative errors. Mean square errors are used to decide at what point of time a property really occurs, and since cross correlation gives a good hint we just have to calculate on certain intervals. For each interval of τ where 0.8 R xy (τ) 1.1 holds we determine the minimal mean square error. In the case that it lies beneath a certain threshold, the occurrence is valid, otherwise it is discarded and not used in the next steps. Currently we are working on a method to determine the value of this threshold based on the abstract signal definition. Having determined possible occurrences of a property p P in the signal, we have to filter those which lie in b (p). This can be achieved by discarding those occurrences at which the absolute errors of sampled signal and reference signal lie outside of the properties semantics. Therefore we have to determine the tolerances of a reference signal when generating it from the property. When all required properties have been found in the test signal, we have to make sure they occur in the correct order and at acceptable times. In order to validate the order of the properties in the test signal, we sort the found occurrences according to the found times. By this we obtain a sequence of properties similar to the one defined in the abstract model of the signal. Comparing the order of both sequences shows whether a test signal might conform to the abstract model. After having validated the order, we still have to take care of the second criterion, timing of the occurrences. The timing information of the test signal s sequence is easy to obtain, but the information of the PFL representation has to be processed to be usable in this step. The information easiest to gain are the intervals in time in which a property has to occur. These intervals are constrained by the durations of phases and interdependencies between properties of inputs and outputs. After these restricting steps we can compare the times of the properties occurrences with the computed intervals in the PFL signal. In our example we restrict ourselves to a system with one behaviour, that consists of one output. Moreover, for brevity we pick one single property to demonstrate the aforementioned concepts. Our example is similar to those presented in [GSW08], where the authors use a velocity control s output; we deal with a vehicle s velocity, too. One of the properties in the PFL model is the ramp with a gradient (vehicle s acceleration a) between 2 and 1, and a duration between 1.9 and 2.1. In order to generate a fitting reference signal we also have to take a look at the predecessor or the successor, where we learn that the starting value of our ramp lies around 16 (vehicle s velocity). 
We choose the following times and functions for our reference signal: T x = {0, 1, 9, 10}, constant value of 16 for t [0, 1] and 0 for t [9, 10], and 2t + 16 for t [1, 9]. With this reference signal at hand, we search for an occurrence of the property it belongs to. From the test we obtained

81 Figure 4: Left: Test output in comparison with SFL model. Right: Analytically computed cross correlation of our example test signal and reference signal, MSE is computed in marked interval. Figure 5: Reference signal shifted against test output (left: after computing cross correlation; right: after additionally computing minimal MSE). sampled data, thus we have to interpolate it first; the best results we got with piecewise linear interpolation. This interpolation technique also supplies the functions we need for computing the cross correlation of reference signal with the test signal. The result of this is shown in Fig. 4, where also the interval 0.8 R xy {τ} 1.1 is marked. As one can see, a peak lies in this interval at a lag of τ = When we take into account the MSE of the surrounding interval, we obtain a minimal MSE at τ = In fact, when we compute the absolute errors at both lags, we get and 0.15 for 25.4, and -.60 and 0.53 for Fig. 5 shows the reference signal shifted by the computed values; the computation of cross correlation gives a very good hint (left), while additionally taking the minimal MSE into account gives an almost perfect match (right). 4 Summary Equality is often a too strong criterion, when it comes to comparison of continuous signals. We suggest a similarity relation which involves a property-based conformance relation of signals and behaviours. Both conformance and similarity relation are defined with respect to an abstract model. Therefore, in this paper, we define a methodology to model systems by their continuous signals behaviour abstracting from exact value and time. Based on this, we introduce the aforementioned relations, which allow us to state whether

82 two systems are similar w.r.t. to an abstract model. For our comparison we use sampled signal data obtained from simulations, tests, etc. of two systems, e.g. in back-to-back tests. A central element for practical usage of this methodology is an adequate process to find properties defined in the abstract model in these sampled signals. We apply an analytical computation of cross correlation on piecewise interpolation polynomials (axcorr) and refine the results by applying further well-established metrics. We demonstrate crucial elements of our work with an example taken from the automotive domain. We continue our work by enhancing the used methods and by integrating the single prototypical elements to an usable tool. References [BK06] [GSW08] [GW07] [Mat] Eckard Bringmann and Andreas Krämer. Systematic testing of the continuous behavior of automotive systems. In Proceedings of the 2006 international workshop on Software engineering for automotive systems, SEAS 06, pages 13 20, New York, NY, USA, ACM. Jürgen Großmann, Ina Schieferdecker, and Hans-Werner Wiesbrock. Modeling Property Based Stream Templates with TTCN-3. In Kenji Suzuki, Teruo Higashino, Andreas Ulrich, and Toru Hasegawa, editors, TestCom/FATES, volume 5047 of LNCS, pages Springer, C. Gips and H.-W. Wiesbrock. Notation und Verfahren zur automatischen Überprüfung von temporalen Signalabhängigkeiten und -merkmalen für modellbasiert entwickelte Software. In MBEES 2007, The MathWorks homepage. [PGCN05] Mario G. Pehinschi, Srikanth Gururajan, Bojan Cukic, and Marcello Napolitano. Investigation of Issues in the Conversion of Simulink Models to ANSI C Code for a Neuronal Network Based Adaptive Control System. Technical report, West Virginia University / NASA IV&V, December [PK09] [PWK10] [SWC05] Jacob Palczynski and Stefan Kowalewski. Early Behaviour Modelling for Control Systems. In UKSIM European Symposium on Computer Modeling and Simulation, volume 0, pages , Los Alamitos, CA, USA, IEEE Computer Society. Jacob Palczynski, Carsten Weise, and Stefan Kowalewski. Testing Continuous Systems Conformance Using Cross Correlation. In Proceedings of the 22nd IFIP International Conference on Testing Software and Systems: Short Papers, pages 31 36, Montréal, CRIM. Ingo Stürmer, Daniela Weinberg, and Mirko Conrad. Overview of existing safeguarding techniques for automatically generated code. SIGSOFT Softw. Eng. Notes, 30(4):1 6, [WCFP02] Hans-Werner Wiesbrock, Mirko Conrad, Ines Fey, and Hartmut Pohlheim. Ein neues automatisiertes Auswerteverfahren für Regressions- und Back-to-Back-Tests eingebetteter Regelsysteme. Softwaretechnik-Trends, 22(3), [Zan08] Justyna Zander-Nowicka. Model-based Testing of Real-Time Embedded Systems in the Automotive Domain. PhD thesis, Technische Universität Berlin, 2008.

83 Using Guided Simulation to Assess Driver Assistance Systems Martin Fränzle 1, Tayfun Gezgin 2, Hardi Hungar 2, Stefan Puch 1, Gerald Sauter 1 1 Carl von Ossietzky Universität Oldenburg, Department of Computer Science, Oldenburg, Germany, 2 OFFIS, Escherweg 2, Oldenburg, Germany, Abstract. The goal of our approach is the model-based prediction of the effects of driver assistance systems. To achieve this we integrate models of a driver and a car within a simulation environment and face the problem of analysing the emergent effects of the resulting complex system with discrete, numeric and probabilistic components. In particular, it is difficult to assess the probability of rare events, though we are specifically interested in critical situations which will be infrequent for any reasonable system. For that purpose, we use a quantitative logic which enables us to specify criticality and other properties of simulation runs. An online evaluation of the logic permits us to define a procedure which guides the simulation towards critical situations and allows to estimate the risk connected with the introduction of the assistance system. 1 Introduction The design of an assistance system in the automotive domain (and elsewhere) requires several exploration and evaluation activities with potential users of the system to assess the effect of the system under development. As a consequence, the development process is difficult to organize, it is expensive and time consuming. The goal of the IMoST 1 project is to reduce the amount of involvement of human test subjects through the introduction of executable models of the driver. These models shall be able to replace the driver in that they are capable of reproducing human behaviour. Combining them with executable models of the car, traffic scenario, and the assistance system, a complete operational representation of the assistance system in its application environment can then be constructed and employed to predict effects of introducing the assistance without having to The research reported here has been mainly performed in the project IMoST which is funded by the Ministry of Science and Culture of Lower Saxonia. 1 The full name of the project is Integrated Modelling for Safe Transportation. Further information can be found in [1] and at the URL uni-oldenburg.de/

84 resort to experiments with humans. While the construction of driver models is a both scientifically and practically challenging task which is addressed in a number of other reports, e.g. [5, 6, 8], in this paper we focus on techniques concerned with using these models, i.e., with evaluating functionality and safety aspects of driving with assistance. The evaluation is performed by studying the emergent behaviour of the integrated models. As the models are rather complex, the main means for assessing them must be simulation, because other analysis methods (e.g., computing all states the model may reach or even formal verification) are only applicable to much simpler classes of systems or smaller models. Of course, the simulation activity must be well organized to produce reliable assessments. Our approach combines a systematic parameter coverage with property-specific guidance. If, for instance, we are interested in a particular aspect of criticality, we start with a function assigning a numeric criticality value to each run. After covering the parameter space roughly, areas where high values have been observed get analysed in more detail. Thus, the simulation proceeds, guided by observations, towards points of interest. In particular, the (hopefully low) probability of situations like those involving a high accident risk can be assessed with much greater accuracy than by simpler procedures. The application scenario on which IMoST develops and tests its approach is that of an advanced driver assistance system (ADAS) supporting the driver in filtering into an expressway, including the gap selection and speed adaptation.this scenario captures one of the most critical expressway manoeuvres. On the other hand, compared to other potentially critical traffic situations (e.g., crossings), it is limited in its variability and is thus suited for developing and assessing a new approach. Variables we considered were the number of other traffic participants, speed differences, and gap sizes. This paper is organized as follows. In the following section we present our simulation platform, i.e. list the components of the considered co-simulation and state how the interaction between all components takes place. In Sec. 3 we give a formalism allowing to specify (among other) safety properties. The succeeding section deals with the online evaluation of the specified formulas. In Sec. 5 we will give a procedure to automatically determine critical situations, and we conclude with Sec Simulation Platform The complete model consists of several software modules. These are provided from different sources: They incorporate a commercial traffic simulation, the models of the driver and the assistance system developed in the project and components for monitoring and recording. The most convenient way to cope with continuing changes of these modules is to refrain from a deep integration into one system and rather combine them via a co-simulation environment. For that purpose, we use a commercial implementation of the IEEE standard 1516 [3] for coupling simulators (HLA, High Level Architecture ). This standard defines

85 how a joint run of different component simulators is orchestrated by a central component (RTI, Run-Time Infrastructure ). The HLA term for a set of combined simulators is federation,andeachpartner is called a federate. HLA offers a time management service which enables to synchronize federates running at different and even variable step resolutions. A federate is time regulating if it influences the advance of other federates, and it is time constrained if its own evolution is restricted by others. Time management permits to keep the data exchange in accordance with the progress of logical time, opposed to best-effort simulation where data are consumed as they become available during simulation. To limit variation between different simulation runs with the same parameters, i.e., to achieve a high degree of reproducibility, we used this time management. For technical reasons, in particular the nature of the commercial traffic simulation software, even this does not suffice for full reproducibility. It is, though, planned to replace that component with another one what we expect to remove these problems. Fig. 1. Architecture of the federated simulation. Fig. 2 depicts the structure of the main components and their integration by the RTI. In particular, the main components are models of the driver and assistance-system (Advanced Driver Assistance System, ADAS) on the left of the figure and a simulation of the ego car, which is the car controlled by the driver model, and the traffic environment on the right. Further components not shown in the figure are property monitors (or observer, see Sec. 3 for details) and a recorder. The co-simulation is executed on a cluster of standard PCs. Each component model evolves in discrete time steps, with update frequency resolutions in the order of 20 to 35 Hz. Synchronisation and data exchange is managed by the RTI. The output behaviour of models of the ADAS, the ego car and the environment depends deterministically on their input, where the traffic environment is parameterized by scripts defining the street layout and number

86 and actions of other cars. A complete run of the scenario consists on average of about 2700 discrete time steps. The IMoST cognitive driver model consists of two parts: 1) The cognitive architecture CASCaS, which integrates task-independent human cognitive processes, e.g. a model of visual perception, declarative and procedural memory models and a processor for task knowledge. 2) A formal model of driving-task specific knowledge (e.g. knowledge about different driving manoeuvres or traffic rules), which is uploaded onto the architecture for simulation purposes. Thus, a cognitive architecture can be understood as a generic interpreter that executes task-specific knowledge in a psychologically plausible way. The driver model incorporates different types of behavioural variation concerning acceleration style (sportive vs. relaxed), gaze strategies (variations in gaze duration and frequency) and safety margins (preferred distance to lead car / rear car), which were assessed through a series of experiments with subjects. During simulation the model probabilistically chooses amongst those different behaviours. These probabilistic capabilities are of specific interest when thinking about guided simulation, because the probabilistic choice will be replaced by a systematic variation of the possible behaviours. 3 Property Specification The properties of interest are defined in a first-order version of linear temporal logic. In their atoms, the formulas may thus refer to attributes of the system constituents like car positions, their speed, visible actions of the driver and so on, complementing discrete observations (e.g., turn indicator, assistance-system signals). The usual temporal operators (always, eventually, unless, until) permit expressing temporal relations of their occurrence. Formulas are evaluated over complete traces which usually originate from the simulation environment but might also be recordings of driver tests. With the temporal operators one can formulate specific requirements on different situations and phases of driving. Extending the usual interpretation of logics, we chose a nonstandard, quantitative semantics [2, 7] which assigns a numerical value to a formula for each trace: A positive number means that the formula is satisfied, and the result value gives the minimal distance in variable values which would have made the formula false (conversely for negative values). Thus, a formula defines a function which assigns a numerical value to each simulation run. Due to this nonstandard interpretation, numerical assessment functions can be expressed in the logic. As an example, we use formulas to describe (aspects of) criticality of simulator runs. With an adequate subformula defining TimeToCollision, computed from distance and relative speed of a leading car, if there is any, the formula □(TimeToCollision > 2.6 sec) (1) expresses that this value never drops below 2.6 seconds (which would be rather safe). Usually, we want more than one aspect evaluated. As an example, also TimeHeadway enters criticality, and an acceptable lower bound on this value

87 in our scenario is 0.3 seconds. To provide an adequate criticality criterion in one formula, both observations are transformed into a risk valuation. The risk is inversely proportional to the respective time values, and with adequate scaling factors we arrive at the following definitions. RiskTTC =_df 2.6 / max(TimeToCollision, 0.01) (2) RiskTHw =_df 0.3 / max(TimeHeadway, 0.01) (3) These formulas yield 1 for the values of 2.6, resp., 0.3, seconds, and higher values (with an upper bound of 100) for tighter situations. A criticality criterion for our scenario is specified by OnAccelLane W (¬OnAccelLane ∧ RiskTTC ≤ 1 ∧ RiskTHw ≤ 1), (4) with W for unless. If this formula evaluates to a positive value for a run, the driver has performed the manoeuvre with sufficient safety margins in every respect. A value of -2 indicates an already severe criticality with only 4.8 meters distance at a velocity difference of 20 km/h (RiskTTC) or 0.1 seconds headway, only 2.2 meters at 80 km/h (RiskTHw). The evaluation of formulas is automated by translating them into monitor programs [4] observing to which degree the property is satisfied or violated. These monitors enter the simulation environment as additional federates. Upon termination of the simulation, each observer provides the numerical evaluation of the property it stands for on the completed run. Thus, the run can be classified as good or bad according to the resulting numbers. Moreover, the observers are capable of computing lower and upper bounds for the final value while the system is evolving. This allows us to identify and stop irrelevant runs at an early stage of their evolution in order to save simulation resources. The evaluation process is presented in more detail in the following section. 4 Online Evaluation of LTL formulas Fig. 2. Refinement of the always operator. The shaded area is the last evaluated frame. Left: Evaluation before refinement; Right: Evaluation after refinement.
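For illustration, the risk valuations (2) and (3) and the quantitative value of an invariant such as formula (1) can be evaluated offline on a recorded trace: the value of an always-formula is the minimal margin over all time steps, so a positive result means the property held throughout the run. The following Python sketch is a hypothetical offline evaluation, not the monitor federates used in the project; the trace values are invented.

```python
def risk_ttc(ttc):  return 2.6 / max(ttc, 0.01)
def risk_thw(thw):  return 0.3 / max(thw, 0.01)

def always_margin(trace, margin):
    """Quantitative value of an invariant: the minimal margin over the trace.
    Positive -> property satisfied on the whole run, negative -> violated."""
    return min(margin(step) for step in trace)

# trace: one dict per simulation step with the observed attributes (hypothetical values)
trace = [{"ttc": 5.0, "thw": 0.9}, {"ttc": 3.1, "thw": 0.5}, {"ttc": 2.8, "thw": 0.4}]

# formula (1): always TimeToCollision > 2.6 s
print(always_margin(trace, lambda s: s["ttc"] - 2.6))            # ~0.2 -> satisfied
# safety margin in terms of the risk valuation: always RiskTTC <= 1
print(always_margin(trace, lambda s: 1.0 - risk_ttc(s["ttc"])))
```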

88 Our goal is to evaluate (linear) temporal-logic formulas during simulation, i.e. over traces while they are evolving. The evaluation has to be performed in realtime in order to provide the result timely. As mentioned in the previous section we evaluate such formulas quantitatively. We face two main problems in this evaluation process: First, the nature of the temporal operators requires that in order to evaluate the truth value (or degree) at one time instant we need the truth values of all future time instants. The second one is that the semantics of such formulas are defined over infinite traces, so we have to find a consistent interpretation on finite traces. The first problem is solved in the manner that traces are safely extrapolated for all future time instants. This is realized by interpreting attributes over intervals rather than single values. For all time instants where a measured value for an attribute is already available we get a singleton interval. If we have no further information about the evolution of an attribute, we have to extrapolate its value for all time instants without measurements with [, ]. By annotating an attribute with a monotonicity information, a tighter extrapolation can be performed: If an attribute is monotonically increasing the last measured value is used as the lower bound for all future time instants. For monotonically decreasing attributes the upper bound is set analogously. Whenever new measurements are available, the values of each previously evaluated time instant are refined. The interval based evaluation also solves the above mentioned second problem. At the end of each simulation run, the given formula f is evaluated by an interval [lb, ub] with lb, ub N which is interpreted as follows: If lb > 0, thenall possible extensions of the observed trace will satisfy f. Analogously, if ub < 0 then all extensions of the trace will not satisfy f. The evaluation may result in an indefinite answer if the interval includes zero. For our implementation it is crucial that the extrapolation for all future time instants can be finitely represented. In the following, we sketch the evaluation procedure. The evaluation of a formula is realized bottom-up (wrt. the formula structure) and in a depth-first search manner (wrt. the states to be checked). The online checking procedure is performed iteratively due to the evolving nature of the trace. During a checking iteration the trace is not changed, i.e. newly measured values are put into a buffer. Before the next iteration starts, the trace is extended accordingly. A checking iteration consists of two steps: Evaluation Phase: In the evaluation phase all subformulas and all states from a start state up to an end state are sequentially computed. The end state is determined by the largest time instant for which a value for all attributes exists. For states beyond this last state the extrapolated interval, as explained above, is used. The evaluated area will be called frame. The start state for the next checking iteration is the last checked state. Refinement Phase: The evaluation of the last frame results in new information of attribute behaviours. States already evaluated in previous iterations can thus be refined. This is done by backward iterating all previously evaluated states until no further refinement on values can be performed.
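The evaluation and refinement phases can be pictured as interval arithmetic over a growing trace. The following sketch is a strongly simplified illustration for a single invariant over a monotonically increasing attribute; it is not the observer implementation of [4], and all names and the threshold are our own assumptions.

```python
import math

def eval_always(samples, bound, increasing=True):
    """Interval evaluation of the invariant 'always v <= bound' on a growing trace.

    samples: measured values so far (one per time step). The unknown future is
    extrapolated: for a monotonically increasing attribute the last measurement
    is a lower bound, so the future margin lies in [-inf, bound - samples[-1]].
    Returns an interval [lb, ub]; lb > 0 proves the property for every extension,
    ub < 0 refutes it, otherwise the verdict is still open."""
    lo_future = -math.inf
    hi_future = (bound - samples[-1]) if increasing else math.inf
    lb, ub = lo_future, hi_future
    # fold the measured prefix backwards: 'always' is the minimum over all time instants
    for v in reversed(samples):
        margin = bound - v              # singleton interval for a measured instant
        lb, ub = min(margin, lb), min(margin, ub)
    return lb, ub

trace = []
for v in [10.0, 14.0, 19.5, 23.0]:          # e.g. vehicle speed, monotonically increasing
    trace.append(v)
    print(eval_always(trace, bound=30.0))   # the upper bound tightens as the trace grows
```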

89 The evaluation of temporal operators is realized according to their recursive characterizations. In the following, the idea of the procedure is shown for the always operator. Let T be an (extrapolated) trace, v be an attribute and i N a time instant. The evaluation of v at time i then is denoted by i N : T ϕ(i) = T ϕ(i) T ϕ(i + 1). (5) In order to be able to compute the formula values correctly, we store the current evaluation results for all time instants of all attributes and subformulas. That this information is needed becomes clear when one considers how to evaluate formulas with nested temporal operators like (a >c 1 ) U (b >c 2 ). The refinement phase concerns the update of these preliminary values, reflecting the effect of tightening intervals extrapolated previously. Note that the final result is the evaluation of the topmost formula operator in the first state. The idea of the refinement process is shown in the example of Fig. 2: In the figure, the truth values starting from state j up to state endstate of both formulas, f 1 = ϕ and f 2 = ϕ are depicted. The left part of the figure illustrates the result after the evaluation phase for the shaded area. The effect of the subsequent refinement phase is demonstrated in the right part of the figure, leading to tighter intervals up to state tracerefinedupto. The information about changed values of subformulas during refinement is of course important for the refinement of the enclosing formula. Formula (i) (ii) (iii) Evaluation Result f 1 = v > [26, D max] f 2 = (d >0 lane = right) [0.1, D max] f 3 = v<30 Ud> [5.49, 5.49] f 4 = v<30 Uf [0.104, D max] Table 1. Online evaluation of formulas in the context of the IMoST scenario. Here d is the distance to the acceleration lane and v the speed of the considered car (in m s ). Both d and v are monotonically increasing. Size of state spaces resulting from (i) no merging, (ii) value threshold and (iii) time threshold. D max is the representation of. An exact evaluation of formulas according to the procedure sketched above is rather costly. In order to reduce the state space, one can merge multiple measured values into one if they are very similar, e.g., their values only differ by a fixed threshold. In practice, this has been proven to be very useful, as measurements arrive with a very high frequency while the corresponding values do only differ slightly. To illustrate the effects of such merging techniques, we have specified and evaluated different formulas which refer to the attributes of the car in our application scenario. The results are illustrated in Table 1. The effects on the state spaces using different merging techniques are shown in the central columns: In Column (i), no merging was applied at all, such that the state spaces grew very fast. E.g., to evaluate formula f 1, 2600 states were processed. In (ii), states

90 where attribute values differed by less than 0.1 were merged. Finally in (iii), the merging was performed with respect to the time stamps of the measurements, i.e., measurements with differences in their time stamps of less than 10 ms were merged. As a result, by using these merging techniques, far fewer states had to be treated by our evaluation procedure, with of course direct effects on the computation time needed for the refinement procedure. The evaluation results of the formulas are listed in the rightmost column of the table. Merging affected the results only negligibly, wherefore we did not list these effects. We will consider how to capitalize in practice on the substantial optimization potential of these techniques. 5 Guided Simulation by Exploring the Behaviour Spectrum It should be obvious that the variability of the scenario is too high to cover all its instances by simulation, let alone by experiments with human subjects. Also, the probabilistic nature of the driver model does complicate matters. To overcome this problem, we propose the following approach: A property of interest is specified by a temporal-logic formula. A batch of simulations is performed with the intent of roughly exploring the spectrum. For that, test points covering the value ranges of the scenario parameters are chosen. This batch of simulations provides a grid of sampling points for the property function from which an approximation of the function is derived by interpolation. At each test point, the probabilistic property function is evaluated either by Monte-Carlo simulation or, more elaborately, by systematically exploring the behaviour spectrum of the driver model (see below for details). Further simulation refines the approximation in areas of interest (e.g., in regions with high function values) by selecting input values accordingly. Via such a guided simulation, maxima (or minima) of the property function can be detected with far fewer simulation runs than by brute force. We have instantiated this approach for our application scenario, where indeed the main factors determining the course a simulation run takes are the parameters of the traffic scenario to be explored and the decisions the driver model takes in reaction to the scenario. We will not describe here the adaptations necessary to cope with the nondeterminism occurring in practice, which results from, e.g., race conditions. Results of an exploration where we applied Monte-Carlo simulation at each test point are depicted in Fig. 3. The property used is defined by formula (4) from Sec. 3. It yields negative values for critical runs, so we seek minima of this function. We considered variations of the attributes v_diff [km/h], dist [m] ∈ {20, 30, 40}, where dist denotes the size of the gap on the right lane of the expressway to be used by the ego car, and v_diff is the speed difference to the cars on the right lane at the point in time when the ego car enters the acceleration

91 Fig. 3. Evaluating criticality at test points in a scenario with parameters v_diff (speed difference to cars on the right lane, [km/h]) and dist (gap size, [m]). lane. Each combination of values of these attributes yields a test point, and 20 simulation runs were performed for each point. Entries in the result matrix are the numbers of unacceptable and severely critical runs, defined by formula values below -10 or in [-10,-2], respectively. For instance, we get 6 unacceptable and 10 severely critical runs for the test point v_diff = 40 km/h and dist = 30 m. These rather high criticalities have their reason in the fact that, with these parameters, the scenario is very demanding and that the driver model we used was not yet fully adapted to the scenario. The right part of Fig. 3 shows how the criticality function is refined in the vicinity of the point with the highest observed criticality. A better evaluation of the property value at a test point can be achieved by replacing the Monte-Carlo simulation by a systematic exploration of the probability space. This is possible in our setup as we can control the probabilistic decisions of the driver model externally. Depending on the history of the simulation, the driver model reaches points where it uses random numbers to choose between different courses of action. Conceptually, this yields a tree of possible runs for each set of scenario parameters. Each node in the tree stands for a probabilistic decision, and each edge is labelled accordingly by a probability. By its nature, this random tree is accessible (only) in a top-down fashion. To explore it, paths are taken systematically. The branch probabilities encountered along the way are multiplied to compute the path probability. If a path probability gets below a minimum threshold fixed at the beginning of the procedure, its further exploration is stopped. Completed runs yield values for the property of interest. These are used to annotate the branches which have been taken with estimations of (maximal) property values and reliability information, guiding the further exploration of the tree. An implementation of this exploration procedure is currently under development, but still in an experimental state. Due to intricacies of the driver model and the simulation environment, further stabilization is needed to get a procedure yielding highly valid and dependable estimations.
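The tree exploration described above amounts to a depth-first traversal that multiplies branch probabilities along a path and prunes paths whose probability falls below the fixed threshold. In the following Python sketch, the driver-model interface (next_choices, run_to_end) is a hypothetical stand-in for the external control of the model's probabilistic decisions; only the pruning and bookkeeping follow the text.

```python
def explore(model, params, path=(), prob=1.0, min_prob=1e-4, results=None):
    """Systematic exploration of the driver model's probabilistic decision tree.

    model.next_choices(params, path) is assumed to return the probabilistic
    alternatives [(decision, probability), ...] at the current point, or [] if
    the run is complete; model.run_to_end(params, path) is assumed to return the
    property value (e.g. the criticality of formula (4)) of the finished run."""
    if results is None:
        results = []
    choices = model.next_choices(params, path)
    if not choices:                      # run finished: record value and path probability
        results.append((model.run_to_end(params, path), prob, path))
        return results
    for decision, p in choices:
        if prob * p < min_prob:          # prune improbable continuations
            continue
        explore(model, params, path + (decision,), prob * p, min_prob, results)
    return results

# hypothetical use at one test point, e.g. weighting property values by path probability:
# runs = explore(driver_model, {"v_diff": 40, "dist": 30})
# expected = sum(v * p for v, p, _ in runs) / sum(p for _, p, _ in runs)
```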

6 Summary

We have presented a way of exploring via simulation the functionality of assistance systems and their effect on safety, given executable behaviour models of the driver and all other constituents of the scenario. The presented criticality-guided simulation explores the complex model and provides the designer with meaningful information on potentially dangerous situations arising from the current ADAS design, and thus increases its quality. Our results on the presented case study indicate that this approach may indeed be helpful to reduce the number of tests with human subjects. The techniques are yet to be explored on a larger scale, which we intend to do in the near future. Also, we will develop and test further techniques for speeding up the simulation and guaranteeing a high reliability of the resulting assessment. In particular, we will use the information about the internal states of the driver and ADAS models for coverage and guidance.

Acknowledgements: We acknowledge the many fruitful discussions and in particular the work of the other participants in the IMoST project and further cooperating projects which provided the models whose behaviour we have set out to explore.

References

1. M. Baumann, H. Colonius, H. Hungar, F. Köster, M. Langner, A. Lüdtke, C. Möbus, J. Peinke, S. Puch, C. Schiessl, R. Steenken, and L. Weber. Integrated modelling for safe transportation - driver modeling and driver experiments. In: Fahrermodellierung in Wissenschaft und Wirtschaft, 2. Berliner Fachtagung für Fahrzeugsmodellierung. VDI Verlag, Düsseldorf.
2. L. de Alfaro, M. Faella, and M. I. A. Stoelinga. Linear and branching metrics for quantitative transition systems. In Proc. 31st Int'l Colloq. on Automata, Languages and Programming (ICALP'04), Turku, Finland, volume 3142 of Lecture Notes in Computer Science, pages 97-109, Berlin, 2004. Springer.
3. IEEE Standard for Modeling and Simulation (M&S). High level architecture - framework and rules. IEEE Computer Society Press, September.
4. T. Gezgin. Observerbasierte on-the-fly Auswertung von QLTL-Formeln innerhalb eines HLA-Simulationsverbundes. Master's thesis, Carl von Ossietzky University Oldenburg.
5. Claus Möbus and Mark Eilers. Further steps towards driver modeling according to the Bayesian programming approach. In Vincent Duffy, editor, Digital Human Modeling, volume 5620 of Lecture Notes in Computer Science. Springer Berlin / Heidelberg.
6. Claus Möbus, Mark Eilers, Hilke Garbe, and Malte Zilinski. Probabilistic and empirical grounded modeling of agents in (partial) cooperative traffic scenarios. In Vincent Duffy, editor, Digital Human Modeling, volume 5620 of Lecture Notes in Computer Science. Springer Berlin / Heidelberg.
7. Gerald Sauter, Henning Dierks, Martin Fränzle, and Michael R. Hansen. Lightweight hybrid model checking facilitating online prediction of temporal properties. In Proceedings of the 21st Nordic Workshop on Programming Theory, NWPT '09, pages 20-22, Kgs. Lyngby, Denmark. Danmarks Tekniske Universitet.
8. L. Weber, M. Baumann, A. Lüdtke, and R. Steenken. Modellierung von Entscheidungen beim Einfädeln auf die Autobahn. To appear: Fortschritts-Berichte VDI: Der Mensch im Mittelpunkt technischer Systeme. 8. Berliner Werkstatt Mensch-Maschine-Systeme, 2009.

93 FALTER in the Loop: Testing UAV Software in Virtual Environments Florian Mutter, Stefanie Gareis, Bernhard Schätz, Andreas Bayha, Franziska Grüneis, Michael Kanis, Dagmar Koss fortiss GmbH Guerickestr. 25, München, Germany Abstract: With the availability of the off-the-shelf quadrocopter platforms, the implementation of autonomous unmanned aerial vehicles (UAV) has substantially been simplified. Such UAVs can explore unaccessible terrain and provide information about the local situation in a specified target area. For the early development of an autonomous aerial vehicle, virtual integration of the system makes it possible to test a software implementation without endangering the hardware or the environment. Key elements of such a virtual test environment for UAV software are the modeling and simulation of the environment and the hardware platform, as well as the integration of the software in the simulation. 1 Introduction The increased computational power available in current embedded processors has drastically simplified the implementation of robotic hardware by providing off-the-shelf basic components with complex control functionality. In the field of UAVs (unmanned aerial vehicles) they have lead to low-cost quadrocopter platforms. UAVs are vehicles that are operated without a pilot on board. UAVs are mostly remotely controlled by a pilot, but increasingly autonomously operating UAVs are explored, specifically UAVs that are used for scientific observations from the air or to explore unaccessible areas [CCC10]. The FALTER project [FAL10] implements an autonomous UAV that is intended to be used indoors. The major use case for the FALTER UAV is the exploration of areas in buildings that are not accessible for humans, e.g., to explore a factory site after an accident. To that end, the FALTER unit is equipped with different sensors to investigate the terrain and (current) situation. Unlike most autonomously operating units, FALTER is designed for the use of indoor exploration, therefore the GPS-based navigation, commonly used in autopilot approaches for UAVs [CCC10], cannot be used. The unit operates on its own after it received the mission data from the mission control. It starts at a given point with a rough floor plan of the building under consideration and tries to find a way to a given target area. After collecting situation data, it returns to the start point and transmits the data to the mission control station. Equipped with a large collection of sensors including gyroscopes, accelerometers for position estimation, and infrared sensors for collision avoidance the FALTER unit autonomously controls the actuators its propellors to achieve the mission goal. Obviously, the development of such an autonomous behavior requires the implementation of complex algorithmic functionally, from collecting the sensor data and controlling the actuators via the execution of flight commands up to the complete (re-)planing of the mission. Testing

such a complex functionality poses threats to both the equipment under development and the test environment and test personnel, since the UAV can move with substantial speed, using propeller rotations of up to 5000 rpm. As the system under development has to operate autonomously, classical stimulus-response black-box testing is not adequate. Instead, the system under test must continuously interact with its environment in a feedback-loop fashion, issuing commands via its actuators based on the data collected via its sensors. To avoid such damage during the development and test of the software and still support in-the-loop testing, virtual integration of the software can be used, i.e., the process of bringing a system into service without actually building the system, by simulating it with all components of interest included. This makes it possible to adequately and safely test the system under development on the software as well as on the hardware level, either by simulating hardware, equipment, and environment ("software-in-the-loop"), or by simulating only (part of) the equipment and the environment ("hardware-in-the-loop").

1.1 Overview and Contribution

In the following, an approach to support early testing of embedded software via a software-in-the-loop virtual integration is presented. The approach consists of (i) the definition of a platform model and an environment model capable of expressing UAVs with range sensors, (ii) the implementation of such a combined platform and environment model using Matlab/Simulink, and (iii) the simulation and visualization of this combined model based on the Simulink VRML extension. To that end, the entire hardware platform, equipment under development, and environment is modeled in software and simulated. This method provides a fast way to get test results, while using the unmodified software under development, and does not need expensive testbed equipment. After relating the presented approach to other research in this field, the remainder of this contribution is structured as follows: Section 2 shows how simulatable models for the hardware platform as well as the environment can be built, with a specific focus on the modeling of ultrasonic distance sensors. Section 3 shows how the software under development is integrated into a simulation of these models based on Matlab/Simulink, and how the execution of the simulation can be visualized. Section 4 gives a short summary and discusses future work.

1.2 Related Work

In-the-loop simulation is commonly used in the embedded industry to reduce development time and cost. To that end, a simulation is constructed by using a plant model to be controlled by the system under development, either explicitly by building a mathematical representation of the plant or by using prerecorded observations of the plant. The work presented here focuses on the construction of an explicit platform and environment model for UAVs that allows the simulation of range sensors to be included. Especially in the virtual integration and simulation of UAV software, the focus is generally put mainly on the modeling of the aerodynamics. Little or no support is offered

for the simulation of the sensor interface specifically used in the autonomous control of UAVs.

Figure 1: The FALTER unit

Therefore, even extensive virtual test beds like [JSPV04] or [Gar08] use simulation engines provided for flight simulators, but provide no facilities to simulate sensors other than gyroscopes, accelerometers, altimeters, GPS, etc., as needed in the FALTER project. As a result, current virtual test beds are mainly suited for detailed tests of low-level control functionality, e.g., of the self-stabilization properties of a flight control, but do not provide the functionalities for in-the-loop testing of autonomous control algorithms. The construction of simulation models of sensors like the ultrasonic sensors used in the FALTER project is generally only found in the field of sensor and/or filter development. Here, the sensor is modeled in very fine-grained detail to describe the signal propagation on the transducer level [Bil03]. However, this level of detail is not suitable to efficiently support in-the-loop simulation of systems with multiple sensors and autonomous behavior, as in the FALTER unit, in a complex environment like a building with several walls. In contrast to the approaches mentioned above, the approach presented here allows functional testing of autonomous UAVs using range sensors to be performed via virtual integration, with a sufficient level of abstraction to enable efficient simulation of the platform under development and of complex environments.

2 Platform and Environment Model

To support a virtual software-in-the-loop test, a model of the FALTER platform, i.e., the unit without the software under test, as well as a model of the environment the unit is operating in, i.e., the building including additional obstacles, are needed. The first model mainly has to deal with the dynamics of a moving UAV, while the second model mainly has to address the aspects of sensing objects. In the following, these two models are described, after a brief overview of the FALTER unit.

2.1 FALTER Overview

The FALTER unit, as shown in Figure 1, is based on two main hardware components. The first one is the quadrocopter platform L3-ME from HiSystems GmbH. The second component is the RoBoard RB-100 from DMP Electronics Inc. The L3-ME is a self-assembly kit for a quadrocopter. The controller that comes with the quadrocopter platform is the preassembled FlightCtrl. It has gyroscopes, a 3D accelerometer and an atmospheric pressure meter installed. The FlightCtrl is responsible for holding the unit in the air.

Figure 2: Axes and Momentums

In the FALTER unit setup, the FlightCtrl receives commands from the RoBoard and translates these commands into signals for every motor. The accelerometer can measure three axes at the same time. For the rotations around the three axes, three gyroscopes are installed. The FlightCtrl has a hold-flight-level mode which is used to keep the unit at a specified flight level. The controller which is responsible for the autonomous flight is a RoBoard RB-100, using a Vortex86DX CPU running at 1000 MHz and 256 MB DRAM. The Pulse Width Modulation interface is used to transmit the commands from the RoBoard to the FlightCtrl. An I2C interface is used to connect the ultrasonic range finders. The three front range finders are used to triangulate the position of an object that is detected in front of the copter. Two sensors face the left and the right side of the copter to measure the distance to objects that the unit is currently passing. Another range finder measures the distance to the ground, supporting the air pressure meter in holding the flight level. One is pointed to the rear of the unit and one to the top, to keep enough distance from the ceiling.

2.2 Platform Model

For the FALTER project, the path planning and sensor data aggregation are the main parts which need to be tested. The project is not concerned with the control algorithms that stabilize the unit or any other low-level control of a quadrocopter; because of that, these aspects are not the key part of the simulation. Therefore, the flight dynamics are not modeled up to a degree of detail corresponding to the real dynamics of the system, but only in a simplified way. The unit is represented as a single point of mass; aerodynamic aspects are not modeled at all. Consequently, the unit can stand perfectly still in the air, without drifting from its position, if no commands are issued. While these simplifications do not reflect the behavior of the unit at a fine-grained level of control, they limit the complexity and make it possible to test the path finding and sensor data fusion algorithms. Due to the simplifications mentioned above, it is possible to use an equations-of-motion block from an off-the-shelf Simulink Aerospace library that does not need to be substantially extended. The six-degrees-of-freedom (6DoF) model represents a single point of mass and reacts to three forces and three moments. The FALTER unit is designed to send the control commands from the high-level controller to the original hardware, which performs the low-level flight stabilization. To limit the complexity of the flight controlling algorithms, the software is modeled with only three degrees of freedom. The unit can move up and down along the z-axis by setting the engines' power. It can yaw around the z-axis, and it can pitch to move forward along the x-axis, as shown in Figure 2.

In the simulation, the yaw command and the engine power are directly converted to a moment around the z-axis and a force along the z-axis of the copter. When a pitch command is sent by the software, the value is used to divide the force that is generated by the engines into two parts: one part along the z-axis of the copter and another along the x-axis. This eliminates the pitch movement that would be seen on the real hardware, but it also reduces the need to simulate the flight stabilization algorithms shipped with the original hardware platform.

2.3 Environment Model

The main sensors of the FALTER unit to determine its position and to detect obstacles are the eight Devantech SRF08 ultrasonic sensors, used to measure ranges from 3 cm to 6 m. They are connected via the I2C bus (Inter-Integrated Circuit bus) and can receive and record up to 17 echo signals. To determine its position in the room, the FALTER unit also uses gyroscopes and the accelerometer. In the simulation, the sensor values for the last two sensor types come directly from the 6DoF model, which is provided by the Aerospace Blockset of Simulink. For the ultrasonic sensors, no off-the-shelf simulation models are provided. The position of an ultrasonic sensor is given by the position of the FALTER unit and an offset that is given by the design of the unit, i.e., the location of the sensor on the unit. The direction of every range finder is calculated from the direction of the copter. The three front range finders head in the same direction as the unit itself. To get the directions of the range finders facing to the sides, the direction of the FALTER unit is rotated around the z-axis of the copter by 90 degrees. The same applies to the ground-facing range finder, which is rotated by 90 degrees around the y-axis of the copter. To simulate an SRF08 range finder, the beam pattern is divided into several single lines, as shown in Figure 3.

Figure 3: Simulated Sensor Beam Top and Side View

The corresponding sensor lines of one range finder are derived from its direction vector. To construct an equivalent mathematical model, each sensor is represented by a collection of vectors, obtained by applying the corresponding rotation matrix to the direction vector. To get a sensor beam pattern that is similar to the pattern of the SRF08 range finder, two circles, each with eight lines, are built around the center beam. That makes a total of 17 lines emanating from the position of the sensor, as shown in Figure 3. The length of every line represents the maximum range and is used to determine whether a facing wall is too far away to be measured.
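To make the construction of the beam lines concrete, the following Python sketch builds the 17 direction vectors of one simulated range finder from its central direction vector by rotating it onto two surrounding circles. The opening half-angles and all helper names are illustrative assumptions and are not taken from the FALTER implementation or the SRF08 data sheet.

```python
import numpy as np

def rotation_matrix(axis, angle_rad):
    """Rotation matrix around an arbitrary unit axis (Rodrigues' formula)."""
    axis = axis / np.linalg.norm(axis)
    x, y, z = axis
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    C = 1.0 - c
    return np.array([[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
                     [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
                     [z*x*C - y*s, z*y*C + x*s, c + z*z*C]])

def beam_lines(direction, max_range, inner_deg=10.0, outer_deg=25.0):
    """17 beam lines of one simulated range finder: the center beam plus two
    circles of eight lines each, tilted by the (assumed) opening half-angles."""
    d = direction / np.linalg.norm(direction)
    # any vector perpendicular to d serves as the tilt axis for the first line
    perp = np.cross(d, [0.0, 0.0, 1.0])
    if np.linalg.norm(perp) < 1e-9:
        perp = np.cross(d, [0.0, 1.0, 0.0])
    lines = [d * max_range]
    for tilt in (np.radians(inner_deg), np.radians(outer_deg)):
        tilted = rotation_matrix(perp, tilt) @ d
        for k in range(8):  # eight lines per circle, spaced 45 degrees apart
            spun = rotation_matrix(d, k * np.pi / 4) @ tilted
            lines.append(spun * max_range)
    return lines  # 1 + 8 + 8 = 17 vectors, each of length max_range

# e.g. the front sensor of a copter heading along the x-axis, 6 m range:
front_beams = beam_lines(np.array([1.0, 0.0, 0.0]), 6.0)
```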

After the sensor beam lines are generated, the intersections with the walls need to be determined. Each wall is represented by a vertex and two direction vectors from that vertex. The length of the direction vectors also gives the length of each edge of a wall. A wall is thus represented by the vertex v and the edges e and f. To calculate the intersection between one sensor line and the wall, the equations of the line and of the plane are equated. Once the intersection point between one sensor line and one wall is known, a check is needed whether the intersection point lies within the bounded wall, because the method above will return an intersection point in all cases where the line is not parallel to the wall. To check whether a point is in the quadrangle given by the two vectors e and f, the dot product is used. For points within the boundary, the distance is calculated and returned. The distance is simply the distance between the position vector of the sensor and the intersection point at the wall. To limit the range of the sensor, the distance is compared to the length of the vector representing the corresponding sensor beam line; if the distance is greater than that length, 0 is returned. This is done for all sensor beam lines. If the intersection point does not lie within the wall or the line is parallel to the wall, 0 is returned. The result of the above calculations is a matrix with the distances to every wall that was in the range of the sensor. This matrix is then minimized, and values are combined if they lie close together. This is done to imitate the result of a real SRF08 range finder. The SRF08 returns only one value, measurement errors aside, if it faces a wall. The more acute the angle between the sensor and the wall gets, the more values are returned. To respect this behavior, all values that differ by not more than 10 cm are combined into one.

3 Implementation

To simulate and visualize the models described in Section 2, Matlab/Simulink is utilized as an implementation framework. To obtain simulatable models, the Simulink core library is applied to formalize the sensor and environment model; the Aerospace Blockset is applied to formalize the dynamics of the platform model. For the virtual integration of the control software into the simulation, the S-function mechanism of Simulink is used. For the visualization of the simulation, the VRML extension of Simulink is used together with standard functionalities provided by Simulink to display simulation results.

3.1 Simulation

Figure 4 shows the overall architecture of the simulation. The simulation is implemented using Simulink blocks, with each block capturing a part of the model under simulation. The model of the environment of the FALTER unit, describing walls and obstacles according to the representation described in Section 2.3, is captured in the blue Environment part. The dynamics of the FALTER unit are captured in the Physical Model, including the 6DoF and flight dynamics part, as described in Section 2.2. The remaining parts of Figure 4 describe the software and hardware parts of the unit relevant for the autonomous flight management. The SRF08 Sensors and Split Signal blocks capture the part of the model dealing with the sensor functionality, using the mathematical representation described in Section 2.3. All the above-mentioned blocks are built using basic blocks from the Simulink core library.
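The computation behind the SRF08 Sensors block can be sketched as follows. The vector names (v, e, f) mirror the wall representation above; the bounds check assumes rectangular walls (e perpendicular to f), and all thresholds and helper names are illustrative, not the actual Simulink implementation.

```python
import numpy as np

def beam_wall_distance(sensor_pos, beam, wall_v, wall_e, wall_f):
    """Distance along one beam line to one bounded wall, or 0.0 if missed.

    The wall is spanned by edges e and f from vertex v; the beam is a vector
    whose length encodes the sensor's maximum range.
    """
    normal = np.cross(wall_e, wall_f)
    denom = np.dot(normal, beam)
    if abs(denom) < 1e-12:
        return 0.0                      # beam parallel to the wall plane
    t = np.dot(normal, wall_v - sensor_pos) / denom
    if t < 0.0 or t > 1.0:
        return 0.0                      # behind the sensor or beyond max range
    hit = sensor_pos + t * beam
    # express the hit point in wall coordinates via dot products
    # (exact for rectangular walls, i.e. e perpendicular to f)
    rel = hit - wall_v
    a = np.dot(rel, wall_e) / np.dot(wall_e, wall_e)
    b = np.dot(rel, wall_f) / np.dot(wall_f, wall_f)
    if not (0.0 <= a <= 1.0 and 0.0 <= b <= 1.0):
        return 0.0                      # intersection outside the bounded wall
    return float(np.linalg.norm(hit - sensor_pos))

def merge_echoes(distances, tolerance=0.10):
    """Combine nearby hits (within 10 cm) to mimic a real SRF08 reading."""
    hits = sorted(d for d in distances if d > 0.0)
    merged = []
    for d in hits:
        if merged and d - merged[-1] <= tolerance:
            continue                    # close to the previous echo: fold it in
        merged.append(d)
    return merged
```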

Figure 4: Overall Architecture of the Simulation Model

The two remaining blocks, Control Task and Flight Management Task, capture the models dealing with the software parts of the FALTER unit, namely the hardware abstraction layer and the flight management software. They are implemented as S-Function blocks.

S-Functions provide a simple mechanism to embed additional functionality into Simulink. S-Function blocks can simply be used like other blocks from the core library to build a simulatable model. In the FALTER project, S-Functions are one key part of the simulation, besides the simulation of the sensors and of the movement of the copter. The code that runs on the RoBoard under RT-Linux is included in the simulation in the form of an S-Function block. Using an S-Function to include the code has the advantage that the code running in the simulation and the code running on the FALTER unit are exactly the same. To integrate the code that runs on the RoBoard, two S-Functions are used: one calls the hardware abstraction layer code and one the flight management code.

The code implementing the hardware abstraction layer is embedded in a dedicated S-Function and called with a sample time of 0.05, corresponding to a period of 50 milliseconds. The control code is intended to run with the same time settings on the RoBoard. A sample time of 0.05 is used to match the real SRF08 range finder's cycle time of 50 ms. The S-Function for the hardware abstraction layer has three interfaces for the interprocess communication with the flight management task: the obstacles that were detected by the SRF08 ultrasonic sensors ("OBSTACLES"), the consolidated environment data collected from the accelerometers and the gyroscopes ("ENV DATA POINTER"), and the control commands for the hardware abstraction layer issued by the flight management code ("CTRL REF"). Furthermore, to communicate with the FALTER unit platform, the corresponding interfaces for reading out the range sensors, accelerometers, and gyroscopes as well as for controlling the unit movements are provided. The S-Function encapsulating the flight management code uses an interface matching the corresponding part of the hardware abstraction layer and is also called every 50 milliseconds. Since the flight management code communicates only with the hardware abstraction layer, it receives or sends no signals to other blocks. Although not shown here, additional signals can be introduced for debugging purposes to access internal data of the software, e.g., the map constructed dynamically in the flight management task. This helps to visualize the actual state of the copter during the virtual execution of the software.

3.2 Visualization

The visualization component of the simulation serves two purposes: it allows the overall behavior in the virtual environment to be observed, and it gives a detailed view on how the FALTER unit perceives its environment through the sensors and how the unit interacts with it through its actuators. Figure 5 shows an example of the concrete user interface used during a simulation run. The interface includes different views of the FALTER unit in a 3D model of the virtual environment, like the top view and the centered view shown in the upper left half of Figure 5. These views can also depict the sensing zones of the range sensors. These visualizations support a straightforward observation of the overall behavior of the unit.
To get a more detailed view on how the unit perceives its environment and how it interacts with it, the elements in the lower left side and the right half can be used. Plots and gauges give direct readings of sensors and actuators, like the measured height or the commanded

speed.

Figure 5: User Interface of the Simulator

Furthermore, as mentioned above but not shown in the user interface in Figure 5, information about the internal state of the FALTER unit can also be visualized during execution. For the implementation of the plots and gauges, core functionalities of Simulink are used. To provide a 3D visualization of the simulation, Simulink 3D Animation is used. Using this framework, the visualization component reads the environment model, creates a VRML representation of the virtual environment, and generates a 3D animation of it including a virtual FALTER unit.

4 Conclusion

The presented approach provides means for software-in-the-loop testing of autonomous UAVs, using virtual integration of the controlling software with a model of the platform and the environment. In contrast to other approaches, it supports an efficient simulation of the behavior of a UAV covering the interaction between the UAV and its environment, including complex sensors like ultrasonic range sensors. It furthermore provides means for the visualization of the results of the simulation. A detailed description of the virtual integration can be found in [Mut10]. Furthermore, detailed descriptions of the hardware abstraction layer, the path planning, and the mission control can be found in [Bay10], [Grü10], and [Kan10], respectively. While the current implementation supports the core features needed for early testing, there are several possibilities for improvement. Currently, the testing scenarios, i.e., the model of the building and of possible obstacles, are constructed manually. Here, as described in [SPR+10], a more automated approach to systematically explore the state space of the planning algorithm could be applied. Furthermore, currently only static objects, walls as well as obstacles, are supported. For a more realistic setting, moving obstacles should also be included. Finally, only box-like objects are supported; furthermore, possible

different reflection properties of different materials are not considered. A more detailed environment model could be supported to allow more realistic simulations, at the cost of adding a substantial computational load to those simulations.

References

[Bay10] Andreas Bayha. FALTER - Flight unit for Autonomous Location and Terrain Exploration: Hardware Abstraction Layer. Bachelor's thesis, Technische Universität München.
[Bil03] Ali Bilgin. A Simulation Model of Indoor Environments for Ultrasonic Sensors. Master's thesis, Bilkent University.
[CCC10] HaiYang Chao, YongCan Cao, and YangQuan Chen. Autopilots for Small Unmanned Aerial Vehicles: A Survey. International Journal of Control, Automation, and Systems, 8.
[FAL10] FALTER project website.
[Gar08] Richard D. Garcia. Designing an Autonomous Helicopter Testbed: From Conception Through Implementation. PhD thesis, University of South Florida.
[Grü10] Franziska Grüneis. FALTER - Flight unit for Autonomous Location and Terrain Exploration: Path Planning. Bachelor's thesis, Technische Universität München.
[JSPV04] Eric N. Johnson, Daniel P. Schrage, J.V.R. Prasad, and George J. Vachtsevanos. UAV Flight Test Programs at Georgia Tech. Technical report, Georgia Institute of Technology.
[Kan10] Michael Kanis. FALTER - Flight unit for Autonomous Location and Terrain Exploration: Mission Control. Bachelor's thesis, Technische Universität München.
[Mut10] Florian Mutter. FALTER - Flight unit for Autonomous Location and Terrain Exploration: Virtual Integration. Bachelor's thesis, Technische Universität München.
[SPR+10] Z. Saigol, F. Py, K. Rajan, C. McGann, J. Wyatt, and R. Dearden. Randomized Testing for Robotic Plan Execution for Autonomous Systems. In Autonomous Underwater Vehicles. IEEE, 2010.

103 Quantitative Analysis of UML Models Florian Leitner-Fischer and Stefan Leue Abstract: When developing a safety-critical system it is essential to obtain an assessment of different design alternatives. In particular, an early safety assessment of the architectural design of a system is desirable. In spite of the plethora of available formal quantitative analysis methods it is still difficult for software and system architects to integrate these techniques into their every day work. This is mainly due to the lack of methods that can be directly applied to architecture level models, for instance given as UML diagrams. Our approach bridges this gap and improves the integration of quantitative safety analysis methods into the development process. We propose a UML profile that allows for the specification of all inputs needed for the analysis at the level of a UML model. The QuantUM tool which we have developed, automatically translates an UML model into an analysis model. Furthermore, the results gained from the analysis are lifted to the level of the UML specification or other high-level formalism to further facilitate the process. Thus the analysis model and the formal methods used during the analysis are hidden from the user. 1 Introduction In a recent joint work with our industrial partner TRW Automotive GmbH we have proven the applicability of probabilistic verification techniques to safety analysis in an industrial setting [AFG + 09]. The analysis approach that we used was that of probabilistic Failure Modes Effect Analysis (pfmea) [GCW07]. The most notable shortcoming of the approach that we observed lies in the missing connection of our analysis to existing high-level architecture models and the modeling languages that they are typically written in. TRW Automotive GmbH, like many other software development enterprises, mostly uses the Unified Modeling Language (UML) [Obj10] for system modeling. But during the pfmea we had to use the language provided by the analysis tool used, in this case the input language of the stochastic model checker PRISM [HKNP06]. This required a manual translation from the design language UML to the formal modeling language PRISM. This manual translation has the following disadvantages: (1) It is a time-consuming and hence expensive process, (2) It is error-prone, since behaviors may be introduced that are not present in the original model. And, (3) the results of the formal analysis may not be easily transferable to the level of the high-level design language. To avoid problems that may result from (2) and (3), additional checks for plausibility have to be made, which again consume time. Some introduced errors may even remain unnoticed.

104 The objective of this paper is to bridge the gap between architectural design and formal stochastic modeling languages so as to remedy the negative implications of this gap listed above. This allows for a more seamless integration of formal dependability and reliability analysis into the design process. We propose an extension of the UML to capture probabilistic and error behavior information that are relevant for a formal stochastic analysis, such as when performing pfmea. All inputs of the analysis can be specified at the level of the UML model. In order to achieve this goal, we present an extension of the Unified Modeling Language that allows for the annotation of UML models with quantitative information, such as for instance failure rates. Additionally, a translation process from UML models to the PRISM language is defined. Our approach can be described by identifying the following steps: Our UML extension is used to annotate the UML model with all information that is needed to perform the analysis. The annotated UML model is then exported in the XML Metadata Interchange (XMI) format [Obj07] which is the standard format for exchanging UML models. Subsequently, our QuantUM Tool parses the generated XMI file and generates the analysis model in the input language of the probabilistic model checker PRISM and the properties to be verified in CSL. For the analysis we use the probabilistic model checker PRISM together with our extension of PRISM [AL08] that allows for the computation of probabilistic counterexamples. The resulting counterexamples can then either be represented as a fault tree that is interpretable on the level of the UML model, or they can be mapped onto a UML sequence diagram which is stored in an XMI file that can be displayed in the UML modeling tool containing the UML model. All analysis steps are fully automated and do not require user interaction. Structure of the paper. The remainder of the paper is structured as follows: In Section 2 we present our QuantUM approach. Section 3 is devoted to the case study of an airbag system. Followed by a discussion of related work in Section 4. We conclude in Section 5. 2 The QuantUM Approach 2.1 Extension of the UML In our approach all inputs of the analysis are specified at the level of a UML model. To facilitate the specification, we propose a quantitative extension of the UML. The annotated

105 model is then automatically translated into the analysis model, and the results of the analysis are subsequently represented on the level of the UML model. The analysis model and the formal methods used during the analysis are hence hidden from the user. In the following we describe our UML profile for quantitative analysis. The dependability terminology that we used here is based on [ALRL04]. UML models consist of two major parts, the structural and the behavioral description of the system. In order to capture the dependency structure of the model, which allows to express how the failure of one component influences the failure of another component, we need to extend the structural description capabilities of the UML. In addition we need to extend the behavioral description to capture the stochastic behavior. In the following we define the stereotypes and their properties that are used to specify the information needed to perform stochastic analysis. QUMComponent The stereotype QUMComponent can be assigned to all UML elements that represent building blocks of the real system, that is classes, components and interfaces. Each element with the stereotype QUMComponent comprises up to one (hierarchical) state machine representing the normal behavior and one to finitely many (hierarchical) state machines representing possible failure patterns. These state machines can be either state machines that are especially constructed for this QUMComponent, or they can be taken from a repository of state machines describing standard failure behaviors. The repository provides state machines for all standard components (e.g., switches) and the reuse of these state machines saves modeling effort and avoids redundant descriptions. In addition, each QUMComponent comprises a list called Rates that contains stochastic rates representing, for instance, failure rates, together with names identifying them. QUMFailureTransition and QUMAbstractFailureTransition In order to capture the operational profile and particularly to allow the specification of quantitative information, such as failure rates, we extend the Transition element used in UML state machines with the stereotypes QUMAbstractFailureTransition and QUMFailureTransition. These stereotypes allow the user to specify transition rates as well as a name for the transition. The specified rates are used as transition rates for the continuous-time Markov chains that are generated for the purpose of stochastic analysis. Transitions with the stereotype QUMAbstractFailureTransition are transitions that do not have a default rate. If a state machine is connected to a QUMComponent element, there has to be a rate in the Rates list of the QUMComponent that has the same name as the QUMAbstractFailureTransition, this rate is then considered for the QUMAbstractFailureTransition. The QUMAbstractFailure- Transition allows to define general state machines in a repository where the rates can be set individually for each component. The normal behavior state machine and all failure pattern state machines are implicitly combined in one hierarchical state machine. The combined state machine is automatically generated by the analysis tool and is not visible to the user. Its semantics can be described as follows: initially, the component executes the normal behavior state machine. If a QUMAbstractFailureTransition is enabled, the component will enter the state machine describing the corresponding failure pattern with the specified rate. The decision, which

106 of the n FailurePatterns is selected, is made by a stochastic race between the transitions. QUMStateConfiguration The QUMStateConfiguration stereotype can be used to assign names to state configurations. In order to do so, the stereotype is assigned to states in the state machines. All QUMStateConfiguration stereotypes with the same name are treated as one state configuration. A state configuration can also be seen as a boolean formula, each state can either be true when the system is in this state or false, when the system is not in this state. The operator variable indicates whether the boolean variables representing the states are connected by an and-operator (AND) or an or-operator (OR). The name of the state configuration is in the model checking process used to identify the state configurations. Due to space restrictions we can not describe the extension in full detail here. In addition to the stereotypes discussed above, we have also defined stereotypes allowing for the specification of repair management, failure propagation and spare handling. For a more detailed description of QuantUM, including a demonstration on how the QuantUM profile can be applied to a given UML model, we refer to [LF10]. 2.2 From UML to PRISM We define the semantics of our extensions by defining rules to translate the UML artifacts that we defined into the input language of the model checker PRISM [HKNP06]. This corresponds to the commonly held idea that the semantics of a UML model is largely defined by the underlying code generator. We base our semantic transformation on the operational UML semantics defined in [LMM + 99]. Besides the analysis model, the properties to be analyzed are important inputs to the analysis. In stochastic model checking, the property that is to be verified is specified using a variant of temporal logic. The temporal logic used here is Continuous Stochastic Logic (CSL) [AKVR96, BHHK03]. We offer two possibilities for property specification: first we automatically generate a set of CSL properties out of the UML model, and second we allow the user to manually specify CSL properties. This has the advantage, of supporting users with no or little knowledge of CSL by the automatic generation but still offers experts the full possibilities of CSL. Due to space restrictions we can not present the translation rules and CSL property generation here and refer to [LF10]. 3 Case Study: Airbag System We have applied our modeling and analysis approach to a case study from the automotive software domain. We performed an analysis of the design of an Electronic Control Unit for an Airbag system that is being developed at TRW Automotive GmbH, see also [AFG + 09]. Note that the used probability values are merely approximate ballpark numbers, since the

actual values are intellectual property of our industrial partner TRW Automotive GmbH that we are not allowed to publish. The airbag system architecture that we consider is depicted in Figure 1. The two acceleration sensors, MainSensor and SafetySensor, measure the acceleration of the car in order to detect front or rear crashes. The acceleration values are read by the microcontroller which performs the crash evaluation. The deployment of the airbag is secured by two redundant protection mechanisms. The Field Effect Transistor (FET) controls the power supply for the airbag squibs. If the Field Effect Transistor is not armed, which means that the FET-Pin is not high, the airbag squib does not have enough electrical power to ignite the airbag. The second protection mechanism is the Firing Application Specific Integrated Circuit (FASIC) which controls the airbag squib. Only if it first receives an arm command and then a fire command from the microcontroller will it ignite the airbag squib. Although airbags save lives in crash situations, they may cause fatal behavior if they are inadvertently deployed. This is because the driver may lose control of the car when this deployment occurs. It is therefore a pivotal safety requirement that an airbag is never deployed if there is no crash situation. In order to analyze whether the considered system architecture, modeled with the CASE tool IBM Rational Software Architect, is safe or not, we annotated the model with our QuantUM extension and performed an analysis with the QuantUM tool.

Figure 1: Class diagram modeling the airbag system.

After all annotations were made, we exported the model into an XMI file, which was then imported by the QuantUM tool and translated into a PRISM model. The import of the XMI file and the translation of the model were completed in less than two seconds. Without the QuantUM tool, this process would require hours of work by a trained engineer. The resulting PRISM model consists of 3249 states and transitions. The QuantUM tool also generated the CSL formula

P=? [true U<=T (inadvertent deployment)]

where inadvertent deployment is replaced by the state formula which identifies all states in the QUMStateConfiguration inadvertent deployment, and T represents the mission time. The mission time specifies the driving time of the car. Since the acceleration value of the sensor state machines is always zero, the formula P=? [true U<=T (inadvertent deployment)] calculates the probability of the airbag being deployed during the driving time T although there is no crash situation. Therefore, if the QuantUM tool is used, the only input which has to be given by the user is the mission time T.
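To make the meaning of such a time-bounded property concrete, the following Python sketch estimates P=? [true U<=T goal] on a small continuous-time Markov chain via uniformization. It is a textbook illustration of what a stochastic model checker computes internally, not the PRISM implementation; the three-state example chain and its rate values are invented for illustration.

```python
import numpy as np
from scipy.stats import poisson

def bounded_reachability(Q, goal_states, T, eps=1e-10):
    """P(reach a goal state within time T) for a CTMC with generator matrix Q.

    Goal states are made absorbing, then the transient distribution at time T
    is computed via uniformization, starting in state 0.
    """
    Q = np.array(Q, dtype=float)
    n = Q.shape[0]
    Q[list(goal_states), :] = 0.0                  # make goal states absorbing
    q = max(-Q[i, i] for i in range(n)) or 1.0     # uniformization rate
    P = np.eye(n) + Q / q                          # DTMC of the uniformized chain
    k_max = int(poisson.isf(eps, q * T)) + 1       # truncation of the Poisson sum
    weights = poisson.pmf(np.arange(k_max + 1), q * T)
    pi = np.zeros(n); pi[0] = 1.0
    result = weights[0] * pi
    for k in range(1, k_max + 1):
        pi = pi @ P
        result = result + weights[k] * pi
    return float(result[list(goal_states)].sum())

# Invented chain: 0 = nominal, 1 = degraded, 2 = "inadvertent deployment"
Q = [[-0.02, 0.015, 0.005],
     [ 0.0, -0.01,  0.01 ],
     [ 0.0,  0.0,   0.0  ]]
print(bounded_reachability(Q, goal_states=[2], T=10.0))
```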

Without the QuantUM tool, the engineers would also have to specify the CSL formula. We computed the probability for the mission times T=10, T=100, and T=1000 and recorded the runtime for the counterexample computation (Runtime CX), the number of paths in the counterexample (Paths in CX), the runtime of the fault tree generation algorithm (Runtime FT) and the number of paths in the fault tree (Paths in FT) in Figure 2. The experiments were performed on a PC with an Intel QuadCore i5 processor with 2.67 GHz and 8 GB of RAM.

T | Runtime CX (approx. min.) | Paths in CX | Runtime FT (sec.) | Paths in FT
Figure 2: Experiment results for T=10, T=100 and T=1000.

Figure 2 shows that the computation of the fault tree is finished in several seconds, whereas the computation of the counterexample takes several minutes. While the different running times of the counterexample computation algorithm seem to be caused by the different values of the mission time T, the variation of the running time of the fault tree computation seems to be due to background processes on the experiment PC. Figure 3 shows the fault tree generated from the counterexample for T=10. While the counterexample consists of 738 paths, the fault tree comprises only 5 paths. It is easy to see by which basic events, and with which probabilities, an inadvertent deployment of the airbag is caused. There is only one single fault that can lead to an inadvertent deployment, namely FASICShortage. The basic event MicroControllerFailure, for instance, can only lead to an inadvertent deployment if it is followed by one of the following sequences of basic events: enablefet, armfasic, and firefasic, or enablefet and FASICStuckHigh. In the fault tree this is expressed by the Priority-AND symbol that connects those events. If the basic event FETStuckHigh occurs prior to the MicroControllerFailure, the sequence armfasic and firefasic occurring after the MicroControllerFailure event suffices. The case study shows that the fault tree is a compact and concise visualization of the counterexample. It allows for an easy identification of the basic events that cause the inadvertent deployment of the airbag, as well as their corresponding probabilities. If the order of the events is important, this can be represented in the fault tree by the PAND gate. Without this automation, one would have to manually compare the order of the events in all 738 paths of the counterexample, which is a tedious and time-consuming task. Figure 4 shows the first part of the UML sequence diagram which visualizes the counterexample for T=10. The generation of the XMI code of the sequence diagram took less than one second. We imported the XMI code into the UML model of the airbag system in the CASE tool IBM Rational Software Architect. This allows us to interpret the counterexample directly in the CASE tool. An additional benefit of our QuantUM implementation is a visualization of the counterex-

109 Figure 3: Fault tree for the QUMStateConfiguration inadvertent deployment (T = 10). ample as sequence diagram in which operation calls can be shown. In the lower altcompartment of the automatically synthesized sequence diagram in Figure 4 for instance, it is easy to see how after a failure of the microcontroller the operations enablefet(), armfasic(), and firefasic() are called.
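The ordering check that the PAND gate automates can be illustrated with a short sketch. The following Python fragment is merely a toy stand-in for the QuantUM fault-tree generation: it groups counterexample paths and reports event pairs that always occur in the same order, which would be candidates for a Priority-AND gate. The example paths only reuse the event names of the case study and are invented.

```python
from collections import defaultdict

def pand_candidates(paths):
    """Report, for every pair of basic events, whether one always precedes the
    other across all given counterexample paths (a candidate for a PAND gate).

    `paths` is a list of event-name sequences, e.g. extracted from the paths
    of a probabilistic counterexample; each event is assumed to occur at most
    once per path.
    """
    observed_orders = defaultdict(set)   # (event_a, event_b) -> {True/False}
    for path in paths:
        index = {event: i for i, event in enumerate(path)}
        events = sorted(index)
        for i, a in enumerate(events):
            for b in events[i + 1:]:
                observed_orders[(a, b)].add(index[a] < index[b])
    # pairs with exactly one observed order are returned with that order
    return {pair: orders.pop()
            for pair, orders in observed_orders.items() if len(orders) == 1}

# toy paths resembling the case study's event names
paths = [
    ["MicroControllerFailure", "enablefet", "armfasic", "firefasic"],
    ["MicroControllerFailure", "enablefet", "FASICStuckHigh"],
    ["FETStuckHigh", "MicroControllerFailure", "armfasic", "firefasic"],
]
print(pand_candidates(paths))
```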

110 Figure 4: First part of the UML sequence diagram for the QUMStateConfiguration inadvertent deployment (T = 10). 4 Related Work The idea of using UML models to derive models for quantitative safety analysis is not new. In [MPB03] the authors present a UML profile for annotating software dependability properties. This annotated model is then transformed into an intermediate model, that is then transformed into Timed Petri Nets. The main drawbacks of this work is that it merely focuses on the structural aspect of the UML model, while the actual behavioral description is not considered. Another shortcoming is the introduction of unnecessary redundant information in the UML model, since sometimes the joint use of more than one stereotype is required. In [BMP09] the authors extend the UML Profile for Modeling and Analysis of Real Time Embedded Systems [Obj08] with a profile for dependability analysis and modeling. While this work is very expressive, it heavily relies on the use of the MARTE profile, which is only supported by very few UML CASE tools. Additionally, the amount of stereotypes, tagged values and annotations that need to be made to the model is very large. Another disadvantage of this approach is that the translation from the annotated UML model into the Deterministic and Stochastic Petri Nets (DSPN) [MC87] used for analysis is carried out manually which is, as we argue above, an error-prone and risky task for large UML models. The work defined in [Jan03] presents a stochastic extension of the Statechart notation, called StoCharts. The StoChart approach suffers from the following limitations. First, it is restricted to the analysis of the behavioral aspects of the system and does not allow for structural analysis. Second, while there exist some

111 tools that allow to draw StoCharts, there is no integration of StoCharts into UML models available. In [BCH + 08] the architecture dependability analysis framework Arcades is presented. While Arcade is very expressive and was applied to hardware, software and infrastructure systems, the main restriction is that it is based on a textual description of the system and hence would require a manual translation process of the UML model to Arcade. We are to the best of our knowledge not aware of any approach, that allows for the automatic generation of the analysis model and the automatic CSL property construction. 5 Conclusion We have presented a UML profile that allows for the annotation of UML models with quantitative information as well as a tool called QuantUM that automatically translates the model into the PRISM language, generates the CSL properties, and automatically performs a probabilistic analysis with PRISM. Furthermore, we have developed a method that allows us to automatically generates a fault tree from a probabilistic counterexample. We also presented a mapping of probabilistic counterexamples to UML sequence diagrams and thus make the counterexamples interpretable inside the UML modeling tool that is being used during design. We have demonstrated the usefulness of our approach on a case study known from the literature. The case study shows that tasks that previously required hours of work of trained engineers, like the translation of an UML model into PRISM or the formulation of CSL formulas, are now fully automated and completed within several seconds. In future work we plan to extend the expressiveness of the QuantUM profile, to integrate methods to further facilitate automatic stochastic property specification, and to apply our approach on other architecture description languages such as, for instance, SysML. References [AFG + 09] [AKVR96] [AL08] Husain Aljazzar, Manuel Fischer, Lars Grunske, Matthias Kuntz, Florian Leitner- Fischer, and Stefan Leue. Safety Analysis of an Airbag System Using Probabilistic FMEA and Probabilistic Counterexamples. In QEST 09: Proceedings of the Sixth International Conference on Quantitative Evaluation of Systems, pages , Los Alamitos, CA, USA, IEEE Computer Society. A. Aziz, K. Sanwal, V. Singhal, and R. K. Brayton. Verifying Continuous-Time Markov Chains. In CAV 96: Proceedings of the 8th International Conference on Computer Aided Verification, volume 1102, pages , New Brunswick, NJ, USA, Springer Verlag LNCS. H. Aljazzar and S. Leue. Debugging of Dependability Models Using Interactive Visualization of Counterexamples. In QEST 08: Proceedings of the Fifth International Conference on the Quantitative Evaluation of Systems, pages IEEE Computer Science Press, 2008.

112 [ALRL04] [BCH + 08] [BHHK03] [BMP09] [GCW07] [HKNP06] [Jan03] [LF10] A. Avižienis, J.C. Laprie, B. Randell, and C. Landwehr. Basic concepts and taxonomy of dependable and secure computing. IEEE transactions on dependable and secure computing, pages 11 33, H. Boudali, P. Crouzen, B.R. Haverkort, M. Kuntz, and M.I.A. Stoelinga. Architectural Dependability Modelling with Arcade. In Proceedings of the 38th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, pages , Christel Baier, Boudewijn Haverkort, Holger Hermanns, and Joost-Pieter Katoen. Model-Checking Algorithms for Continuous-Time Markov Chains. IEEE Transactions on Software Engineering, 29(7), S. Bernardi, J. Merseguer, and D.C. Petriu. A dependability profile within MARTE. Software and Systems Modeling, pages 1 24, Lars Grunske, Robert Colvin, and Kirsten Winter. Probabilistic Model-Checking Support for FMEA. In QEST 07: Proceedings of the Fourth International Conference on Quantitative Evaluation of Systems, pages , Washington, DC, USA, IEEE Computer Society. A. Hinton, M. Kwiatkowska, G. Norman, and D. Parker. PRISM: A Tool for Automatic Verification of Probabilistic Systems. In TACAS 06: Proceedings of the 12th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Lecture Notes in Computer Science, pages Springer, David Nicolaas Jansen. Extensions of statecharts : with probability, time, and stochastic timing. PhD thesis, University of Twente, October Florian Leitner-Fischer. Quantitative Safety Analysis of UML Models. Master s thesis, University of Konstanz, [LMM + 99] D. Latella, I. Majzik, M. Massink, et al. Towards a formal operational semantics of UML statechart diagrams. In IFIP TC6/WG6, volume 1, pages Citeseer, [MC87] M. Marsan and G. Chiola. On Petri nets with deterministic and exponentially distributed firing times. Advances in Petri Nets 1987, pages , [MPB03] I. Majzik, A. Pataricza, and A. Bondavalli. Stochastic dependability analysis of system architecture based on UML models. Architecting dependable systems, page 219, [Obj07] Object Management Group. XML Metadata Interchange (XMI), v December [Obj08] Object Management Group. UML Profile for Modeling and Analysis of Real Time Embedded Systems. June [Obj10] Object Management Group. Unified Modeling Language. Specification v2.3. May 2010.

113 Diagnosis in Rail Automation: A Case Study on Megamodels in Practice Dennis Klar Michaela Huhn Jochen Grühser Department of Informatics Clausthal University of Technology {Dennis.Klar Siemens AG, I MO RA R&D OCS 4 D Braunschweig, Germany Abstract: In industrial automation, not only the diagnosis of complex distributed systems itself poses challenges. Also, the processes of gathering and managing the information on which diagnostic procedures are built, have to be adjusted to meet today s requirements for system availability and cost-effectiveness. We consider model-based system-level diagnosis in rail automation from the viewpoint of megamodeling. In order to streamline the overall diagnostic process, centers of competence are identified for which thoroughly engineered metamodels are provided. Next, strict dependencies are defined between metamodels to foster model transformation and integration. Tight control of the diagnostic process removes redundancies in both activities and artifacts relevant for diagnosis, such as technical documentation, system-level analysis of availability and diagnosability, etc. Finally, explicit megamodeling reveals new potential for automation, e.g. in model instantiation from target configuration, and facilitates additional flexibility in exchanging and combining diagnostic methods for better accuracy or performance. 1 Introduction Only a side issue once, system diagnosis has recently become a key factor in various domains of industrial automation. On the one hand, operating companies demand costeffectiveness for diagnostic processes; on the other hand, the quality of diagnosis as a precondition for high system availability must not decrease, even in a situation of rapidly growing system complexity, interaction, variability, and integration of third-party components. These challenges for system-level diagnosis are to be met in several domains such as rail automation, power supply systems, industrial automation and the automotive domain. Whereas, on the level of components and subsystems, self-diagnosis or monitoring is a domain-specific issue to be solved as part of development, technical and organizational difficulties accumulate on the system level. Technically, diagnosis always has to cope with the reduced observability of single components on the system level. In addition, greater variability and decreased in-house production depth nowadays multiply the possibilities of deviant behavior and complicate the retracing of causal chains to the origin of failure.

114 The organizational challenge is to efficiently gather, adopt, and integrate information that stems from the very different processes of a system s life cycle but is needed for diagnosis and maintenance: Component diagnosis is engineered during component design, composition and filtering of diagnostic information are part of product development, instantiation of the diagnostic system-level model is based on data from the target configuration, and diagnostic reasoning needs feedback from maintenance. In an ongoing project on diagnosis in rail automation in cooperation with Siemens AG 1, we investigate the pre-requisites and potential of new model-based diagnosis concepts. In this paper, we briefly survey system-level approaches in several domains of industrial automation to characterize the specific demands of rail automation. Most notably, an increase in the quality of diagnosis 2 depends not only on new diagnostic algorithms with improved efficiency, but also on the well-planned integration of a comprehensive diagnostic process which comprises the acquisition of diagnostic knowledge, modeling and model management. We argue why concepts from megamodeling constitute a theoretical foundation for such a process and present a fault catalog editor as an initial tool that implements a metamodel supporting a particular subprocess. It shows that, in contrast to other areas of model-based development, the maturity of metamodels is crucial. Not only because of the numerous stakeholders who contribute to the diagnostic process and depend on other tasks, but also because of the long operating period of rail automation systems that may exceed 30 years. This paper is structured as follows: In Section 2 we discuss different application domains. Section 3 introduces model-based diagnosis and causal analysis. In Section 4, we propose a revised diagnostic process and motivate the use of megamodels. Sections 5 and 6 explain diagnostic modeling and integration in more detail. Section 7 concludes this paper. 2 System Level Diagnosis in Industrial Automation We survey several approaches to system-level diagnosis from different domains to expose their strengths and open points. 2.1 Automotive Systems Automotive on-board systems show an enormous increase in complexity due to the fact that a rapidly growing number of interconnected and technologically sophisticated functions are deployed on networks of electronic control units (ECUs). The troubleshooting process of vehicles usually involves a combination of on-board and off-board diagnosis (see [BCF + 08]). Most ECUs possess self-diagnostic capabilities that provide so-called diagnostic trouble codes (DTC) if a pre-defined abnormal behavior is recognized locally. A service technician can access the DTCs and statistical records. However, the challenge 1 Siemens AG, Industry Sector, Mobility Division 2 Quality of diagnosis means the precision of fault localization in relation to effort and computational costs.

is to efficiently infer the origin of failure from the symptoms, i.e. the DTCs and driver observations, because any two instances of the same type of car may significantly differ with respect to their configuration. To reduce the costs of fault localization, new approaches to vehicle diagnosis are built on case-based reasoning (CBR), probabilistic model-based reasoning (MBR) and fault tree (FT) techniques. However, off-board diagnosis on the vehicle level is constructed in a centralized procedure performed by maintenance experts for each car type with all its variability, which is a persistent bottleneck for high-quality, up-to-date diagnosis.

2.2 Power Transmission

In power transmission, an enormous number of components such as power lines, circuit breakers, routing buses, generators and transformers form huge grids [BCG97]. Whenever a short-circuit or any other severe fault occurs, affected parts or line segments will be automatically disconnected to protect the remaining network, while power transmission is rerouted via alternative paths. Multiple interleaved protection systems provide nearly instantaneous reaction and mutual backup. As opposed to other domains, diagnosis may be based on a model of normal behavior and deviations from it, since the components themselves are numerous but simple. Algorithms derived from model checking have been shown to scale up in this domain [LZ03]. Since observability is usually good, failure propagation can be analyzed based on the temporal order of correlated failure messages.

2.3 Plant Automation

In modern processing plants, a process control system (PCS) connects field components and monitors key process indicators (KPI). For instance, in a chemical plant, components to be controlled are pipes, valves, pumps, etc., and quantitative KPI such as flow volume, pressure and temperature are to be managed. In research studies on plant automation [ME09], observability is comparatively good, as intelligent (process-aware) components are directly accessed via field buses and process indicators of otherwise non-intelligent components can be derived from known data and laws of (fluid) physics. In [ME09], a layer-based framework is suggested to provide uniform component models for diagnosis with observable and derived properties. This approach is similar to what we call a state image (see Section 6.1). Challenges in plant automation arise from variable, task-dependent system structures which are addressed by extensive flow path analysis. However, monitoring and interpreting process indicators as well as flow path analysis need not only comprehensive models of the components but also of their interactions, both for normal behavior and for failures. Thus, key issues known from rail automation, (1) on process integration, i.e. how to provide and compose the underlying information models, and (2) on algorithmic scalability, will have to be solved here as well.

116 2.4 Rail Automation Diagnosis in rail automation can be applied to a variety of areas, for example on-board systems of rail vehicles or level-crossing systems. Our ongoing project is concerned with supporting the maintenance of signaling and interlocking equipment. This includes outdoor components such as signals, switches, axle counters or track circuits (for track vacancy detection), cables, and indoor components within the associated interlocking (safe redundant control computers, element operating modules, power supply). Hence, target systems are large and widely distributed installations containing hundreds or thousands of individual elements. A hierarchy of control systems monitors attached components and forwards selected status and fault data to a diagnostic platform. System-level diagnosis analyzes the incoming data to detect failures and identify root causes that explain observed symptoms. The more accurate fault localization is, the less time maintenance technicians will spend on searching for faults. Also, the costly replacement of components based on suspicion is reduced. The enormous size, complexity, spatial distribution and high level of variability in configuration make signaling and interlocking systems a prime example to test the performance and efficiency of any diagnostic approach. We have chosen a variation of model-based diagnosis (see Section 3) based on failure propagation and transformation to address these challenges. 3 Model-based Diagnosis and Causal Analysis Model-based diagnosis (MBD) utilizes explicit models of structure and behavior to analyze a target system based on observable outputs. Classic approaches to MBD model the system s intended (normal) behavior, allowing them either to predict future observations or to retrace and explain current ones. Fault detection searches for discrepancies between expected and observed outputs. Fault isolation analyzes internal structural and functional dependencies to identify those components that, if assumed faulty, can account for all detected discrepancies. In literature, diagnostic approaches use functional models such as discrete-event systems, automata, systems of equations, predicate or temporal logic. Earlier experiments [KH09] showed that these classic, simulation-based approaches are, in principle, restricted to the diagnosis of rather simple components and circuits because of the well-known state explosion problem. Thus, a different approach to modeling and diagnosis is needed that scales up well even to large target systems with complex components as encountered in the rail automation domain. To handle models of the encountered size and complexity, we concentrate on a causal analysis of the faulty behavior only, explaining failure propagation and observable symptoms. This approach reduces not only computational complexity, but also modeling efforts. Causal modeling is prominent in safety analysis, but can be adapted to diagnosis as well. Fault trees [VSD + 02] constitute a top-down, deductive method that causally traces a hazardous event or, more generally, an undesired situation back to its root causes. Although

117 easy to apply and fairly popular, this method struggles with some limitations. For example, redundancies occur as the resulting tree structure cannot represent common causes, which is essential for diagnosis. Compositional extensions such as Cause Effect Graphs [KLM03], which build on directed acyclic graphs instead of trees, remove many of these limitations, yet stay close to the fault tree terminology and methodology. Other safety analysis techniques, namely Failure Propagation and Transformation (FPT) Notation [FM93], FPT Calculus [Wal05] and FPT Analysis [GPM09], analyze causal chains, beginning with the occurrence of faults and modeling abnormal component reactions. These techniques mainly focus on software components and their reaction to data-related failure modes, such as value or timing (omission, commission) errors. Safety analysis stops when it is shown that all failures are successfully detected and mitigated before they can cause any harm (i.e. being propagated to actuator components). From the diagnostic point of view, causal analysis of failure propagation and transformation must continue beyond mitigation and also capture observable outputs such as status and error messages. We therefore extend the concept of FPT to generally consider all kinds of symptoms, so that the analysis is applicable both to software and hardware components and can also account for all possible system observations. 4 Diagnosis of Complex Systems in Rail Automation Besides technical issues, it is important to consider the diagnostic process itself. This process begins as early as the acquisition of diagnostic knowledge and involves many different stakeholders (developers, product management, diagnostic engineers, service and maintenance technicians). With highly distributed expert know-how and a variety of applications, there exists a multitude of partially isolated models that need to be integrated into a common workflow. Collaborative creation and use of diagnostic data are aspects of product data management (PDM). 4.1 Diagnostic Process This project s goal is to revise the diagnostic and documentary processes and better integrate and coordinate them with an in-house development process. A generic development process in the rail automation domain comprises stages of system engineering and product development to deliver installations with diagnostic capabilities (see Fig. 1). The more established the process, the more data will be locally held (models, documentation, databases, experience). Existing data needs to be actively made available to others to avoid redundancies and the inefficient gathering of information. For example, system experts should not be repeatedly interviewed on details of component behavior and interaction in the course of similar tasks, such as the specification of a diagnostic model, the provision of technical documentation and maintenance guides, or analyses of availability or diagnosability.

[Figure 1: Physical view of the diagnostic process, showing the stages system engineering, product development, diagnostic component, and installation (target).]

Consolidating all diagnostic knowledge into one new universal diagnostic model is not helpful either. There are far too many stakeholders involved in the overall diagnostic process, and adding all individually needed information would clutter the model for others. Instead, we aim at identifying centers of competence (see Table 1) based on process structures and reorganizing associated data repositories and dependencies where necessary. What distinguishes these centers is that they are represented by different stakeholders or participants in the diagnostic process. Furthermore, their work within the diagnostic process depends on data that is generated elsewhere and involves a processing step. For example, the models of faulty behavior are not merely part of the system engineer's functional model nor the product of an automated transformation, but rather first-class artifacts that require further analysis and modeling. To support the definition of an integrated diagnostic process we draw on the general idea of artifact models which relate locally produced data and models within a process.

Table 1: Centers of competence and associated diagnostic knowledge
  System Engineering: functional models of components/subsystems
  Diagnostic Engineering: models of faulty component behavior
  Documentation and Editing: manuals, maintenance instructions, localization
  Product Development and Configuration: target configurations
  Diagnostic Platform and Target Installation: target's observable status

4.2 Applying the Concept of Megamodels

Being a relatively new approach to artifact modeling, a megamodel [BJV04] is a combined model that contains other models and metamodels as elements and shows their relationships. Applications are found in various areas, such as model management, matching different views in model-based architecture, or tracking the progress and results of model transformations. Another example as mentioned in [BJV04] relates to tasks, tools and produced artifacts in a software development process. Since this setting is close to ours regarding the diagnostic process, we adopt the concept of megamodels to describe how the individual models of different process steps depend on each other. Furthermore, we regard megamodels as a methodological foundation to foster the merging (weaving) of design models, runtime data and maintenance knowledge. The model management enables both metadata tracking and model traceability. We provide a metamodel for the megamodel itself (see Fig. 2), which builds on the unified

metamodel and its extensions proposed in [HSG10]. It should be noted that, as we describe a process and data exchange, we primarily focus on process metamodels as elements. Models that conform to these metamodels are only included as exemplary tool inputs and outputs. To better match our setting, we have replaced relations regarding transformations with more general ones that allow the use of arbitrary tools. Additionally, we have dropped all support for hierarchies of models and relations as this is not needed in our case.

[Figure 2: An extended metamodel for process-related megamodels, adopted from [HSG10]. It relates the classes Model (with the subclasses M2Model and M1Model), Relation (with the subtypes DependsOn, ConformsTo and InputFrom, where a relation connects one or more models), and Tool, which supports relations and works with models.]

The first step towards the construction of a megamodel for the diagnostic process is to identify the main centers of competence and their associated parts of diagnostic knowledge. All in all, each center of competence shall provide exactly one public metamodel with clearly defined boundaries and dependencies. These metamodels need to be extremely well constructed and usually undergo many iterations to support all required usages. The second step, the definition and formalization of dependencies, is also a matter of understanding and improving existing workflows. Tools (e.g. graphical or textual editors) support active processing tasks, such as the editing or merging of data (see Fig. 3). Additionally, the whole diagnostic process can be implemented as a business process that is aware of roles, activities, data exchange and updates. Figure 4 shows the resulting megamodel for the core part of the diagnostic process.

4.3 Local Metamodels and Supporting Tools

Keeping in mind the fact that model-based diagnosis needs explicit descriptions of the structure and behavior of the target system (see Section 3), the revised diagnostic process introduces a strict division of the diagnostic model into a type model and an instance model. The type model is realized by means of the fault catalog (see Section 5.1) which describes the possible faulty behavior of individual component types of the application domain. It will become the main repository of diagnostic knowledge, supplying documentation/editing, diagnosis and all kinds of static analysis tasks with reliable data. The definition of fault models is a modeling task which will be carried out by experts based on own experience, technical documentation, or interview results. We provide a fault catalog metamodel and a graphical editor tool to facilitate the input of fault catalog definitions.
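To make the megamodel structure of Section 4.2 more concrete, the following sketch shows one conceivable in-memory representation of the metamodel elements from Fig. 2. The class names follow the figure; all fields, constructors and other implementation details are assumptions made purely for illustration and do not reflect the tooling actually used in the project.

    // Illustrative sketch of the megamodel elements from Fig. 2 (not the project's implementation).
    import java.util.ArrayList;
    import java.util.List;

    abstract class Model {
        final String name;
        Model(String name) { this.name = name; }
    }

    class M2Model extends Model {            // a metamodel, e.g. "FaultCatalog:M2"
        M2Model(String name) { super(name); }
    }

    class M1Model extends Model {            // a model conforming to a metamodel, e.g. "fc:M1"
        M1Model(String name) { super(name); }
    }

    abstract class Relation {                // a relation connects one or more models
        final List<Model> connected = new ArrayList<>();
    }

    class DependsOn extends Relation { }     // dependency between two metamodels
    class ConformsTo extends Relation { }    // an M1 model conforms to an M2 metamodel
    class InputFrom extends Relation { }     // a tool reads its input from a model

    class Tool {                             // e.g. "Instantiation:Tool"
        final String name;
        final List<Relation> supports = new ArrayList<>();   // relations the tool supports
        final List<M1Model> worksWith = new ArrayList<>();   // models the tool works with
        Tool(String name) { this.name = name; }
    }

In the megamodel of Fig. 4, for example, an instantiation tool of this shape would take its InputFrom relations from the target configuration and fault catalog models and work with the resulting diagnostic model.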

[Figure 3: Node- and model-centered view of the diagnostic process. It relates system engineering (requirements, design, implementation, functional models), diagnostic engineering (fault catalog modeling, probability analyses, methods and algorithms), documentation and editing (writing, localisation, technical documents, manuals and instructions), product development (product catalogs, sales orders, target configuration, product installations), the diagnostic platform (state image, evaluation, diagnosis and hypothesis computation, component mapping, filter rules) and maintenance (field experience, maintenance instructions and plans) around the target installation, including the feedback paths between them.]

[Figure 4: Megamodel of the core diagnostic process. The metamodels TargetConfig:M2, FaultCatalog:M2, DiagnosticModel:M2 and StateImage:M2 are connected by DependsOn relations; the M1 models tc, fc, dm and si conform to them; the tools Instantiation:Tool and Evaluation:Tool are supported by these relations, take their inputs from the models and work with the resulting models.]

The provision of instance models will be the responsibility of product engineers who are involved in configuration. Each installation in rail automation is configured according to customer orders. In this step, a target configuration is prepared, which lists all needed components and their dependencies. The instantiation task combines fault catalog type data with the structural definitions according to the target configuration. The result is a diagnostic instance model that matches the composition and possible faulty behavior of the target installations. Tool support is still under development, but likely to be a combination of automatic derivation and some manual customization. Both the target configuration and the diagnostic instance model are formalized by metamodel definitions.

The editorial team can now reference component and behavior descriptions in the fault catalog when creating technical documents (manuals, guides) instead of building their own repository of diagnostic knowledge. The main tasks of writing, localizing, compiling and archiving publications remain untouched. The significant part here is the definition of an explicit interface to access and link fault catalog data.

A diagnostic component (see Section 6.2) must continuously analyze the current state

of the target installation to detect failures and compute hypotheses of which components might be faulty. More specifically, available observations of the target system's behavior are compared to the diagnostic instance model and conclusions are derived therefrom. The concept of a state image (see Section 6.1) abstracts away from the underlying component communication and makes all observable status and failure information available in a unified object model. A corresponding metamodel and interface allow diagnostic methods of different kinds to be equally notified about status changes and also to actively query additional information. Lastly, diagnosis results can be returned to and stored directly in the state image, alerting maintenance technicians about possibly faulty components.

Finally, a good process picks up on suggestions for improvements contributed by its participants. The fault catalog provides only an initial model for the faulty behavior of component types. Large systems in rail automation often show complex interactions, especially in the case of failures. Even more so, some observed reactions to faults might be unique to a specific installation, caused by its composition, construction or configuration. Thus, it cannot be guaranteed that either a diagnostic model or a collection of maintenance instructions will always be accurate. Because of this, it is very important to make use of the extensive field experience of maintenance technicians by collecting feedback on errors or additional observations. In the same way, the editorial team and other experts should be able to submit their suggestions. Diagnostic engineers will check all submissions for plausibility and incorporate changes for the next release of the fault catalog.

5 Capturing Diagnostic Knowledge in Models

One of the prominent advantages of the model-based approach to diagnosis is that it helps to capture and manage diagnostic data in an orderly manner. The required time and effort to model a target system's structure and behavior, however, mainly depend on the modeling procedure and the employed kind of models. Retaining practicability is one of the most important preconditions to the success of the revised diagnostic process in an industrial setting. Models must therefore be chosen very carefully to minimize (additional) efforts and allow for the use of synergetic effects. Modeling shall meet the following requirements: using modular descriptions that facilitate composition and reuse, being easy and intuitive to specify, and still allowing conclusions to be reconveyed from the model back into the application domain. As already mentioned, modeling is divided into the creation of type models, which describe component behavior, and instance models, which in turn reflect the structure and dependencies of an actual installation.

5.1 The Fault Catalog Approach

The fault catalog provides a semi-formal basis for the causal specification of the possible faulty behavior of individual types of rail automation components and systems as encountered in target installations. This catalog combines textual behavior descriptions that are close to the application domain with a definition of causal relationships that can be automatically analyzed. As a model repository, the fault catalog can be referred to in diagnostic or other analytic tasks. Catalog data can be entered either using a graphical forms-based tool (see Fig. 5) or in a textual description format.

A fault catalog is a collection of causal models describing the behavior of component types in terms of internal faults and external abnormal influences. Therefore, basic component models, so-called unit models, contain lists of internal faults which will manifest themselves on the component's interfaces in the form of symptoms. Symptoms are deviations from normal behavior. They can be software-induced (see Section 3) but also relate to hardware issues such as electrical or thermal effects. Each unit model also provides interface definitions that, on the one hand, declare possible symptoms and, on the other hand, determine which models may be composed in a specific diagnostic model. Symptoms that originate in one component can affect other components and provoke (abnormal) reactions there. External influences on unit models are represented as incoming symptoms which might produce new symptoms on outgoing interfaces. The mechanisms of failure propagation and symptom transformation are defined by sets of clauses within unit models which specify the mapping between incoming and outgoing symptoms.

[Figure 5: Fault catalog editor tool]
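As an illustration of the unit model concept described above, the following sketch shows one possible data structure for fault catalog entries. All type and field names are invented for this example and are not the fault catalog metamodel actually used in the project.

    // Illustrative sketch of a fault catalog unit model (names and structure are assumptions).
    import java.util.List;
    import java.util.Map;

    record Symptom(String interfaceName, String deviation) { }       // e.g. ("PowerOut", "NoVoltage")

    record Fault(String name, List<Symptom> manifestations) { }      // internal fault and the symptoms it causes

    // A clause maps incoming symptoms on input interfaces to outgoing symptoms
    // (failure propagation and symptom transformation).
    record PropagationClause(List<Symptom> incoming, List<Symptom> outgoing) { }

    record UnitModel(String componentType,
                     List<String> inputInterfaces,
                     List<String> outputInterfaces,
                     List<Fault> internalFaults,
                     List<PropagationClause> clauses) { }

    // A fault catalog is simply a collection of unit models indexed by component type.
    record FaultCatalog(Map<String, UnitModel> unitModels) { }

A unit model of this shape couples the internal faults of a component type with the symptoms they cause and with the clauses that transform incoming symptoms into outgoing ones.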

5.2 Instantiation

Diagnosis depends on a diagnostic model instance that exactly reflects the structure and observable behavior of the target system. Only a consistent model will be able to explain observed symptoms and help detect and localize fault candidates (potentially faulty components). Instantiation is usually guided by a description of the target installation (also called a target configuration, see Section 4.3) as provided by product engineering. The instance model can be interpreted as a construction kit that takes its building blocks from the fault catalog. For every system component, a matching unit model is instantiated, which then carries component-relevant data such as name, ID, location, etc. All unit instances need to be linked according to their real dependencies to ensure that the failure propagation mechanisms function correctly.

6 Integration

The revised diagnostic process needs to be integrated into an existing diagnostic platform. This platform provides access to the target system's observations and presents them in a unified way by means of the state image. Notification about the target's current diagnostic status and changes thereof is sent to the diagnostic component to trigger analyses.

6.1 State Image

The existing central diagnostic platform already interfaces with the target system, but only on the level of raw, component- and vendor-specific data streams from sensors, control units, and other subsystems and components. To add another layer of abstraction from technical details, the state image has been introduced, which provides a unified view on the target system's observable state, including notification about locally detected errors or failures. Furthermore, the state image facilitates modeling as it provides a convenient overview of which data is available for analysis.

The state image is an object structure that manages data objects for each of the installation's observable components. Measured variables (e.g. operating temperature, fan RPM, voltage) and received messages (operational state, fault codes) are equally stored as object properties of a certain data type. An interface is provided for fault detection components for notification about (status) changes and also for random access.

A significant part of the state image contains quantitative data. However, our proposed diagnostic method, using fault catalog models and an analysis of failure propagation and transformation, completely relies on qualitative symptoms. Therefore, an evaluation step becomes necessary that discretizes the continuous values and produces boolean symptoms. Evaluation is a rule-based mechanism which, besides discretization, can also be used to execute a first level of fault detection. If simple analyses on single objects, such as checking lower and upper limits of measured values and issuing warnings, are already covered by evaluation, then model-based diagnosis can concentrate on the complex, component-spanning interactions to identify root causes.
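The evaluation step just described can be pictured as a small set of rules that discretize quantitative state-image properties into boolean symptoms. The following sketch is illustrative only; the property names, thresholds and class layout are assumptions rather than the platform's actual interface.

    // Illustrative sketch of rule-based evaluation on the state image (names and thresholds are assumptions).
    import java.util.HashMap;
    import java.util.Map;

    class StateImageObject {
        final String componentId;
        final Map<String, Double> measured = new HashMap<>();    // e.g. "temperature", "fanRpm"
        final Map<String, Boolean> symptoms = new HashMap<>();   // boolean symptoms written back by evaluation
        StateImageObject(String componentId) { this.componentId = componentId; }
    }

    interface EvaluationRule {
        void evaluate(StateImageObject obj);
    }

    class UpperLimitRule implements EvaluationRule {
        private final String property, symptom;
        private final double limit;
        UpperLimitRule(String property, double limit, String symptom) {
            this.property = property; this.limit = limit; this.symptom = symptom;
        }
        @Override public void evaluate(StateImageObject obj) {
            Double value = obj.measured.get(property);
            if (value != null) {
                obj.symptoms.put(symptom, value > limit);   // discretize: continuous value -> boolean symptom
            }
        }
    }

Such single-object rules correspond to the first level of fault detection mentioned above; the model-based diagnosis then works only on the resulting boolean symptoms.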

6.2 Diagnosis of Failure Propagation and Transformation

The unified representation and evaluation of the state image generally support multiple fault detection and diagnostic mechanisms to work in parallel or in combination. We provide a diagnostic component that directly analyzes causal chains of failure propagation and transformation based on fault catalog definitions and computes hypotheses of possibly faulty components (or combinations). The prerequisites for diagnosis are that a diagnostic instance model and the set of observed symptoms, also called the current symptom pattern, are supplied.

Upon activation, either triggered by status updates or at certain time intervals, the diagnostic component performs the following tasks, which again are typical of classic model-based diagnosis (see Section 3): starting from the current symptom pattern, for each symptom a set of possible causes (faults) is computed and the components in question are flagged as suspects. Then, the direction of search is changed. All previously identified faults are checked as to whether they can account for all observed symptoms, beginning with single faults, then moving on to combinations. Components with remaining potential faults are now flagged as candidates and added to the list of hypotheses, while others are cleared. Multiple candidates can still persist if they cannot be further discerned by available observations. The hypotheses as the diagnostic result will be written back into the state image. New entries in reserved properties will generate warnings alerting maintenance.

Currently, the restoration process relies on maintenance instructions to manually test possibly faulty component candidates. A further improvement of the process seems possible though, for example through the automatic generation of optimal test plans based on the diagnostic model.
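The two search directions of the candidate computation described in Section 6.2 can be sketched as follows for the single-fault case. The data structures and method names are assumptions chosen for illustration, and the handling of fault combinations is only hinted at in a comment.

    // Illustrative sketch of the two-directional candidate computation (all names are assumptions).
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    class CandidateComputation {
        /**
         * faultsExplaining maps each observable symptom to the faults that can cause it;
         * symptomsOf maps each fault to the symptoms it would produce (both derived from the instance model).
         */
        static Set<String> computeCandidates(Set<String> observedSymptoms,
                                             Map<String, Set<String>> faultsExplaining,
                                             Map<String, Set<String>> symptomsOf) {
            // First direction: from each observed symptom to its possible causes ("suspects").
            Set<String> suspects = new HashSet<>();
            for (String symptom : observedSymptoms) {
                suspects.addAll(faultsExplaining.getOrDefault(symptom, Set.of()));
            }
            // Second direction: keep only single faults that account for all observed symptoms ("candidates").
            Set<String> candidates = new HashSet<>();
            for (String fault : suspects) {
                if (symptomsOf.getOrDefault(fault, Set.of()).containsAll(observedSymptoms)) {
                    candidates.add(fault);
                }
            }
            return candidates;   // combinations of faults would be examined next if no single fault suffices
        }
    }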

7 Conclusion

The rail automation domain presents good examples of very large installations that pose extreme challenges to system-level diagnosis. Size in the number of components, complexity in their dependencies, and low observability due to strict architecture are just a few that may be named. We have introduced the fault catalog approach and a variant of model-based diagnosis that implements a causal analysis of failure propagation and symptom transformation. The focus on faulty behavior instead of functional simulation helps to reduce efforts in computation and also in modeling.

Practicability, although often neglected, is a prime requirement in a real-world industrial setting. A diagnostic process comprises much more than just diagnosis. The workflows of many different stakeholders need to be coordinated and supported to avoid redundant and inefficient management of diagnostic knowledge. We have shown how megamodels can help to define the overall process, control data exchange and take tool support into consideration. Furthermore, the revised diagnostic process itself benefits from modeling efficiency and diagnostic quality. Expert knowledge and field experience help to continually improve the diagnostic knowledge base.

Initially, we mainly focused on an optimized diagnostic algorithm. However, now the revised diagnostic process with its strict interfaces and explicit metamodels allows for new flexibility, as diagnostic techniques become easily exchangeable. For example, a slower but exhaustive combinatorial approach may be complemented by a fast heuristic diagnostic method, either in parallel or to prune its computations.

References

[BCF+08] Stefania Bandini, Ettore Colombo, Giuseppe Frisoni, Fabio Sartori, and Joakim Svensson. Case-Based Troubleshooting in the Automotive Context: The SMMART Project. In ECCBR 2008, LNAI 5239, 2008.
[BCG97] Pietro Baroni, Ulrico Canzi, and Giovanni Guida. Fault Diagnosis Through History Reconstruction: An Application to Power Transmission Networks. Expert Systems with Applications, 12(1):37–52, 1997. Special issue on Intelligent Systems in Industry and Business.
[BJV04] Jean Bézivin, Frédéric Jouault, and Patrick Valduriez. On the Need for Megamodels. In Proceedings of the OOPSLA/GPCE: Best Practices for Model-Driven Software Development Workshop, Vancouver, British Columbia, Canada, October 2004.
[FM93] Peter Fenelon and John A. McDermid. An integrated tool set for software safety analysis. Journal of Systems and Software, 21(3), June 1993.
[GPM09] Xiaocheng Ge, Richard F. Paige, and John A. McDermid. Probabilistic Failure Propagation and Transformation Analysis. In Proc. 28th Intl. Conf. on Computer Safety, Reliability, and Security (SAFECOMP 2009), volume 5775 of LNCS, Hamburg, Germany, 2009. Springer-Verlag.
[HSG10] Regina Hebig, Andreas Seibel, and Holger Giese. On the Unification of Megamodels. In Proc. 4th Intl. Workshop on Multi-Paradigm Modeling (MPM'10) at MoDELS 2010, Oslo, Norway, 2010.
[KH09] Dennis Klar and Michaela Huhn. Partial Order Algorithms for Model-based Diagnosis of Discrete Event Systems. In Dagstuhl-Workshop Modellbasierte Entwicklung eingebetteter Systeme (MBEES V), 2009.
[KLM03] Bernhard Kaiser, Peter Liggesmeyer, and Oliver Mäckel. A New Component Concept for Fault Trees. In SCS'03: Proc. 8th Australian Workshop on Safety Critical Systems and Software, pages 37–46, Darlinghurst, Australia, 2003. Australian Computer Society.
[LZ03] Gianfranco Lamperti and Marina Zanella. Diagnosis of Active Systems: Principles and Techniques. Kluwer Academic Publishers, 2003.
[ME09] Martin Mertens and Ulrich Epple. A Layer-based Approach to Build up Diagnosis Applications in Process Industries. In IEEE Intl. Conf. on Control and Automation (ICCA) 2009, 2009.
[VSD+02] William Vesely, Michael Stamatelatos, Joanne Dugan, Joseph Fragola, Joseph Minarick III, and Jan Railsback. Fault Tree Handbook with Aerospace Applications, 2002.
[Wal05] Malcolm Wallace. Modular Architectural Representation and Analysis of Fault Propagation and Transformation. Electr. Notes in Theor. Comp. Sc., 141(3):53–71, December 2005.


Effizientes Erstellen von Simulink Modellen mit Hilfe eines spezifisch angepassten Layoutalgorithmus

Lars Kristian Klauske, Christian Dziobek

Abstract: Im Bereich der automobilen Softwareentwicklung spielt die modellbasierte Entwicklung mit Werkzeugen wie Simulink mittlerweile eine bedeutende Rolle. Da die Anordnung der verwendeten grafischen Elemente für die spätere Lesbarkeit eines Modells eine große Rolle spielt, zählen die derzeit notwendigen zeitaufwändigen Arbeitsschritte beim Platzieren und Vernetzen dieser Elemente zu den aktuellen Herausforderungen beim Bearbeiten von Simulink-Diagrammen. Dieser Beitrag stellt einen aus bestehenden Verfahren weiterentwickelten Layoutalgorithmus sowie dessen spezifische Erweiterungen zur Linienbegradigung unter Einbeziehung der Anpassbarkeit von Blockgrößen in Simulink vor und zeigt einen Ansatz, diesen in den Simulink-Editor einzubinden. Darauf aufbauend wird das Konzept der kontextabhängigen Modellierung auf Simulink-Diagramme übertragen und ihre Umsetzung auf Basis einer neuen Heuristik gezeigt.

1 Einleitung

Aufgrund steigender Komplexität und gestraffter Entwicklungszyklen wird eingebettete Software im Automobilbereich zunehmend in Form von Funktionsmodellen mit Werkzeugen wie z.B. Simulink (The Mathworks) entwickelt. Die Lesbarkeit und auch Anschaulichkeit der grafischen Funktionsmodelle im Vergleich zu textuell entwickelten Programmen gelten als maßgebliche Gründe für ihre zunehmende Verbreitung. Im Gegensatz zu textuellen Editoren fehlen Modelleditoren zumeist selbst einfache Entsprechungen von Funktionen wie Autoformatierung, kontextsensitiver Codevervollständigung oder Refactoring-Operationen, die bereits seit vielen Jahren Stand der Technik bei textuellen Editoren sind. Obwohl die Editoren mittlerweile Funktionen zur automatischen Ausrichtung und Verteilung von Blöcken anbieten, muss zur Gewährleistung der Verständlichkeit noch immer viel Aufwand in das Layout der grafischen Beschreibungselemente investiert werden.

Dieser Beitrag zeigt, wie der Modellierer beim Editieren der grafischen Beschreibungselemente in Simulink-Diagrammen effizient durch den Einsatz neuer Layoutalgorithmen und automatisierter Arbeitsschritte unterstützt werden kann. Obwohl er sich auf die in Simulink zum Einsatz kommenden Blockdiagramme konzentriert, wird erwartet, dass die beschriebenen Anwendungsfälle und Methoden auch auf andere auf Blockdiagrammen basierende Modellierungssprachen und Werkzeuge anwendbar sind. Statecharts, wie beispielsweise Stateflow-Diagramme, werden in diesem Beitrag nicht betrachtet. Simulink wird als bekannt vorausgesetzt.

128 1.1 Anwendungsfall Der Lebenszyklus eines Simulink-Modells ist geprägt von einer Vielzahl von Änderungen am zugehörigen Diagramm. Strukturelle Änderungen (im Gegensatz beispielsweise zur Anpassung von Parametern) machen dabei typischerweise auch Anpassungen des Layouts erforderlich. Zu den häufigsten Anwendungsfällen beim manuellen Modellieren gehört das Erstellen neuer Modellfragmente. Hierbei wechseln sich zwei Phasen ab, wobei das bearbeitete Fragment im Verlauf typischerweise wächst. In der ersten fügt der Modellierer Blöcke oder Verbindungen hinzu, in der zweiten Phase passt er das Layout an, damit das geänderte Fragment verständlich und lesbar bleibt. Für einen ähnlichen Anwendungsfall bei der Modellierung von SyncChart Diagrammen 1 zeigen Untersuchungen (z.b. [Mat10]), dass allein durch die Bereitstellung einer Layoutunterstützung zur Automatisierung der zweiten Phase eine Verkürzung der Modellierungszeiten von bis zu 50% erreicht werden kann. Basierend auf Erfahrungen aus aktuellen Projekten liegt der geschätzte Anteil manueller Layoutverbesserungen an der Modellierungszeit bei Simulink-Diagrammen mit etwa 30% zwar niedriger, ist jedoch immer noch signifikant. 1.2 Lösungsansatz Eine Unterstützung des Modellierers bei diesem Anwendungsfall erfordert zuerst eine Form von effizienter Layoutunterstützung zur unmittelbaren Reduktion des Aufwands beim grafischen Aufbau der Diagramme und darauf aufsetzend die Automatisierung von typischen wiederkehrenden Arbeitsschritten zur weiteren Beschleunigung der Modellerstellung. Die Anwendung der Layout- und Unterstützungsfunktionen muß durch eine intuitive Erreichbarkeit während des Editiervorgangs z.b. durch den Einsatz von kontextsensitiven Menüs sichergestellt werden. Dadurch soll der Fokus bei der Modellerstellung auch für den Modellierer weg vom Design der Darstellung und hin zur Funktionsentwicklung verschoben werden. 2 Layoutunterstützung für Simulink Layouts von Simulink-Modellen sollten gewisse Anforderungen erfüllen, um eine grundlegende Lesbarkeit der Modelle sicherzustellen. Diese finden sich sowohl in unternehmensinternen, wie auch allgemein anerkannten Modellierungsrichtlinien (MAAB, MIS- RA), als auch beim Graphenlayout als typische Optimierungsziele. Die wichtigsten dieser Anforderungen für Simulink-Diagramme sind: Weder Blöcke noch Linien sollen sich 1 SyncCharts sind Diagramme zur Beschreibung reaktiver synchroner Systeme und in ihrer Darstellung ähnlich StateChart Diagrammen.

überschneiden. Blöcke und Linien sollen von links nach rechts orientiert sein; Rückkopplungen bilden eine Ausnahme und sollen in Gegenrichtung verlaufen. Kreuzungen von Linien sollen vermieden werden, und Linien sollen möglichst wenige Knicke aufweisen. Zusätzliche Anforderungen betreffen Beschränkungen der Blockgrößen, spezielle Anordnungen bestimmter Blöcke wie Inports oder Outports oder die Kompaktheit des Layouts.

2.1 Layoutalgorithmus

Simulink-Diagramme stellen sich für Layoutalgorithmen als gerichtete Graphen mit einer Orientierung von links nach rechts, Knoten variabler Größe, rechtwinkligen 1:n-Hyperkanten und Port-Constraints dar. Port-Constraints sind Einschränkungen der Positionen der Ports (der Kontaktstelle zwischen Kanten und Knoten), wobei die Port-Constraints bei Simulink die konkrete Position aller Ports in direkter Abhängigkeit der Blockgröße vorgeben (was die für die Kreuzungsreduktion relevante Reihenfolge der Ports beinhaltet). Der Beitrag von [SFvHM09] stellt im Vergleich mit aktuellen Layoutalgorithmen fest, dass insbesondere Port-Constraints bisher nur unzureichend unterstützt werden, und erweitert die Algorithmen von [GKNV93, San95] insbesondere um einen Ansatz zum Umgang mit Ports fester Reihenfolge. Mit diesem Ansatz wäre es bereits möglich, grundlegende Layouts für Simulink-Diagramme zu erzeugen.

Definitionen

1. Ein gerichteter Graph besteht aus einer Menge von Knoten $V$, Kanten $E \subseteq V^2$ und Ports $P$: $G = (V, E, P)$.
2. Im Verlauf des Algorithmus werden zusätzliche Knoten eingefügt, die besonders gekennzeichnet sind: Dummy-Knoten $D \subseteq V$ und Inverter-Knoten $I \subseteq V$.
3. Ein Port $p = (F, w) \in P$ bezeichnet den Berührungspunkt zwischen einer oder mehreren Kanten ($F \subseteq E$) und einem Knoten $w \in V$. Ein Port ist entweder Eingangsport ($(u, w) \in F$) oder Ausgangsport ($(w, v) \in F$), niemals eine Mischung aus beidem: $u_1 \neq v_2$ und $u_2 \neq v_1$ für alle $e_1 = (u_1, v_1), e_2 = (u_2, v_2) \in F$.
4. Die Menge aller Eingangsports eines Knotens $v$ wird als $P_i(v)$ bezeichnet. Sie sind mit einem Index $i(p_i) \in \{0, \ldots, |P_i(v)| - 1\}$ geordnet, wobei $i(p) \neq i(q)$ für alle $p \neq q \in P_i(v)$. Ausgangsports heißen analog $P_o(v)$, ihr Index $i(p_o)$.
5. Der Eingangsport (Ende) einer Kante $e$ ist $p_i(e)$, der Ausgangsport (Anfang) $p_o(e)$.

Stufen des Algorithmus

Der Algorithmus arbeitet in fünf aufeinander aufbauenden Phasen. Für jede dieser Phasen sind jeweils verschiedene Lösungsansätze bekannt. Initial liegt ein gerichteter Graph $G$ vor, der Zyklen enthalten kann. Mit Hilfe der Definitionen aus 2.1 lassen sich die Phasen des Algorithmus wie folgt darstellen:

1. Zyklenbehandlung: Diese Phase erkennt Kanten, die Zyklen erzeugen (siehe z.B. [TDBET99], S. 294 ff.), und kehrt diese entweder um oder entfernt sie temporär aus dem Graphen, um einen gerichteten azyklischen Graphen zu erhalten.
2. Hierarchisierung: Die Knoten werden so sortiert, dass die Signale im Diagramm von links nach rechts gerichtet sind. Dazu wird allen Knoten ein Ordnungsmerkmal Layer $L(u) \in \mathbb{N}$ hinzugefügt, so dass zunächst für alle Kanten $e = (u, v)$ gilt: $L(u) < L(v)$. Kanten, die mehrere Layer überspannen, werden durch Dummy-Knoten aufgeteilt (siehe [STT80]): Für eine Kante $e_0 = (u, v)$ wird ein Dummy-Knoten $d \in D$ eingefügt und $e_0$ durch $f = (d, v)$ mit $p_i(f) = p_i(e_0)$ sowie $e = (u, d)$ ersetzt, bis am Ende der Stufe für alle Kanten $(u, v)$ gilt: $L(u) = L(v) - 1$.
3. Kreuzungsreduktion: In dieser Phase werden die Knoten jedes Layers durch Zuordnung eines Index ($I(v) \in \mathbb{N}$, $I(u) \neq I(v)$, wenn $L(u) = L(v)$) so geordnet, dass (möglichst) wenige Kreuzungen zwischen den Kanten vorkommen.
4. Knotenpositionierung: Basierend auf den Reihenfolgen innerhalb der Layer werden den Blöcken konkrete Y-Positionen $y(v)$ zugeordnet. Es sind verschiedene Ansätze bekannt, die bei der Zuordnung der Koordinaten üblicherweise zum Ziel haben, Kantenknicke zu minimieren (beispielsweise [STT80, GKNV93, BK02]).
5. Kantenrouting: Zwischen den fertig angeordneten Layern werden in dieser Phase die Kanten gerouted und konkrete X-Positionen für die Knoten zugewiesen. Dummy-Knoten werden in Stützpunkte der jeweiligen Kanten umgewandelt.

2.2 Simulink-Spezifische Erweiterungen

Aus den besonderen Eigenschaften von Simulink-Diagrammen sowie aus Modellierungsrichtlinien ergeben sich zusätzliche Anforderungen und Möglichkeiten beim Erstellen von Layouts, die von bisherigen Algorithmen nicht oder nur unzureichend unterstützt werden. Die zur Unterstützung dieser zusätzlichen Anforderungen entwickelten Erweiterungen des Algorithmus aus Abschnitt 2.1 werden hier erstmalig im Detail vorgestellt.

Phase 1: Zyklenbehandlung

Obwohl bestehende Methoden zur Zyklenerkennung bereits auf Simulink-Diagramme angewendet werden können, wurde der Algorithmus zur Zyklenerkennung aus KLay (Kieler Layouters [FSvH10]) erweitert, um zusätzliche Benutzervorgaben aus Simulink zu berücksichtigen. Hierbei wird ausgenutzt, dass die im Modell als invertiert (links-rechts seitenvertauscht) gesetzten Blöcke von Kanten umgeben sind, die ebenfalls von rechts nach links führen und demnach wie Zyklen zu behandeln sind.

Bei der Umkehr oder Entfernung von Zyklen macht es die feste Reihenfolge und Position der Ports in Simulink-Diagrammen bei bisherigen Ansätzen notwendig, bei der späteren Knotenpositionierung in der Nähe des Start- und Endknotens zusätzlichen Platz vorzuhalten, um die Zykluskante später um den Knoten herum führen zu können.
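Zur Veranschaulichung lässt sich der oben aufgeführte fünfstufige Ablauf als einfache Pipeline skizzieren; Klassen- und Methodennamen sind hier frei gewählt und geben weder die KLay- noch die Layoutserver-Implementierung wieder.

    // Skizze des fünfstufigen Ablaufs (hypothetische Namen, nicht die reale Implementierung).
    class Graph { /* Knoten, Kanten und Ports wie in den Definitionen aus Abschnitt 2.1 */ }

    class LayoutPipeline {
        void layout(Graph g) {
            behandleZyklen(g);        // Phase 1: Kanten umkehren bzw. Inverter-Knoten einfügen
            hierarchisiere(g);        // Phase 2: Layer L(v) zuweisen, lange Kanten durch Dummy-Knoten teilen
            reduziereKreuzungen(g);   // Phase 3: Reihenfolge I(v) je Layer bestimmen
            positioniereKnoten(g);    // Phase 4: y(v) und ggf. Groessen s_y(v) zuweisen
            routeKanten(g);           // Phase 5: X-Positionen und Kantenverlaeufe bestimmen
        }
        private void behandleZyklen(Graph g) { }
        private void hierarchisiere(Graph g) { }
        private void reduziereKreuzungen(Graph g) { }
        private void positioniereKnoten(Graph g) { }
        private void routeKanten(Graph g) { }
    }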

Bei diesen bisherigen Ansätzen entsteht ein Routing der Form: Nahbereich Startknoten → Weg zum Zielknoten → Nahbereich Zielknoten. Hierbei kann es an der Schnittstelle zwischen den drei Sektionen zu zusätzlichen Kantenknicken kommen. Abbildung 1a zeigt beispielhaft das im Bereich um Sw entstehende Routing der Kante Sw → Delay. Diese Knicke können durch einen neuen Ansatz zur Kantenumkehr in vielen Fällen vermieden werden: Zykluskanten werden durch das Einfügen zusätzlicher virtueller Inverter-Knoten umgekehrt. Sei $e = (u, v)$ eine solche Kante, dann wird $e$ aus $G$ entfernt und stattdessen werden zusätzliche (markierte) Knoten $w_l, w_r$ und Kanten $f_l, f_m, f_r$ eingefügt, so dass nach der Transformation für die Kanten $f_l = (w_l, v)$, $f_m = (w_l, w_r)$, $f_r = (u, w_r)$ und für die Ports $p_i(f_l) = p_i(e)$ und $p_o(f_r) = p_o(e)$ gilt. Diese Inverter-Knoten dienen später ähnlich den Dummy-Knoten lediglich als Stützpunkte während der Kreuzungsreduktion und Knotenpositionierung, machen jedoch ein gesondertes Routing im Nahbereich der Knoten unnötig. Sie werden am Ende ebenfalls entfernt und in Stützpunkte umgewandelt. Abbildung 1b zeigt das resultierende Layout.

Phase 2: Hierarchisierung

Im Allgemeinen lassen sich bestehende Ansätze zur Hierarchisierung auch auf Simulink-Diagramme anwenden. Einen auf linearer (Integer-)Programmierung basierenden Ansatz zur Hierarchisierung stellt [GKNV93] vor:

Minimize: $\sum_{e=(u,v) \in E} \bigl(L(v) - L(u) - 1\bigr)$   (1)

Hierbei muss $L(v) \geq 1$ für alle $v \in V$ sowie $L(u) \leq L(v) - 1$ für alle $(u, v) \in E$ gelten. Die Lösung des linearen Programms führt zu einer Hierarchisierung mit minimalen Kantenlängen.

Die Hierarchisierung von Simulink-Diagrammen kann jedoch zusätzlichen Anforderungen aus (optionalen) Modellierungsrichtlinien unterliegen. Diese fordern beispielsweise, dass Simulink-Input-Blöcke am linken Rand, also im ersten Layer, anzuordnen sind. Der vorgestellte Ansatz wurde daher zur Beachtung derartiger Anforderungen um zusätzliche Constraints erweitert. Für das gegebene Beispiel heißt das: Seien $U = \{u_0, \ldots, u_{|U|-1}\} \subseteq V$ diejenigen Knoten, die Input-Blöcke repräsentieren, dann wird zusätzlich gefordert:

$L(u_0) \leq L(v)$ und $L(u_0) = L(u_i)$ für alle $u_i \in U \setminus \{u_0\}$, $u_0 \in U$, $v \in V \setminus U$   (2)

Phase 3: Kreuzungsreduktion

Zur Kreuzungsreduktion unter Berücksichtigung von Ports mit fester Reihenfolge stellt [SFvHM09] einen iterativen Ansatz auf Basis einer modifizierten Barycenter-Heuristik vor. Dieser konnte in Form der Implementierung aus KLay direkt für Simulink-Diagramme verwendet werden.

Phase 4: Knotenpositionierung

In Simulink-Diagrammen sind die Positionen der Ports eines Knotens direkt von dessen Größe abhängig. Verändert man also die Größe eines Knotens, verändern sich die Positionen der Ports und somit auch eventuelle Knicke in den

daran angeschlossenen Kanten. Hierdurch entsteht ein zusätzlicher Freiheitsgrad beim Erstellen von Layouts. Diesen machen sich Modellierer häufig zunutze, indem sie die Größe von Knoten gezielt so anpassen, dass Kantenknicke minimiert werden. Eine Automatisierung lässt sich als Algorithmus für Phase 4 umsetzen, indem den Knoten nicht nur konkrete Positionen $y(v)$, sondern auch neue Größen $s_y(v)$ zugewiesen werden. Ein grober Überblick über den hier erstmals im Detail vorgestellten Algorithmus erschien in [KD10].

Betrachtet man zunächst die Abhängigkeit zwischen Portposition und Knotengröße in Simulink-Diagrammen im Detail, so stellt sich diese als eine nicht stetige Funktion der Knotengröße dar. Zum Zwecke der Kantenbegradigung nutzen wir aus, dass die Funktion im Mittel ansteigt, und stellen sie als näherungsweise linear dar:

$y(p_i(e)) = y(v) + \frac{i(p_i(e))}{|P_i(v)|} \cdot s_y(v)$   (3)

In [GKNV93] wird ein Ansatz vorgestellt, um die Zuordnung der Y-Koordinaten von Knoten mittels linearer Programmierung vorzunehmen. Dabei werden die gesuchten Koordinaten als Variablen des Optimierungsziels gesehen, mit dem Ziel, diese so zu wählen, dass resultierende Kantenknicke möglichst minimal sind. Die Abstände zwischen Knoten bilden die primären Constraints, zusätzliche ergeben sich beispielsweise aus Beschränkungen der Größe des Layouts. Konkrete Lösungen für das lineare (Integer-)Programm, also Koordinaten für die Knoten, können dann mit gängigen Verfahren der linearen Optimierung errechnet werden, wobei selbst frei verfügbare Solver wie lp solve [ENE10] bereits gute Ergebnisse in Bezug auf Geschwindigkeit und Qualität der Lösung erzielen. Gleichung 4 stellt die Zielfunktion dieses Ansatzes dar:

Minimize: $\sum_{e=(u,v) \in E} \omega(e) \cdot \lvert y(v) - y(u) \rvert$   (4)

$\omega(e)$ bezeichnet dabei die Gewichtung der jeweiligen Kante zur bevorzugten Begradigung langer Kanten gemäß [GKNV93]: $\omega(e) = 0$ für Kanten, die Kreuzungen erzeugen oder vom Benutzer als unwichtig gekennzeichnet wurden, $\omega(e) = 1$ für normale Kanten, $\omega(e) = 2$ für Kanten, die an einen echten Knoten anbinden, $\omega(e) = 8$ für Kanten zwischen zwei Dummy-Knoten. Zusätzliche Constraints der Form $y(v) - y(u) \geq \delta(u, v)$ mit $L(u) = L(v)$ und $I(v) = I(u) + 1$ stellen die Einhaltung von Mindestabständen $\delta(u, v)$ zwischen benachbarten Knoten sicher.

Die näherungsweise Linearität der Abhängigkeit zwischen Portposition und Knotengröße ermöglicht es, das Optimierungsziel derart zu erweitern, dass die Knotengröße als weiterer Freiheitsgrad mit in die Optimierung eingeht. Hierzu wurden die Knotenpositionen $y(u), y(v)$ in der Zielfunktion durch die Portpositionen $y(p_{i,o})$ ersetzt. Zusätzlich wurde die Zielfunktion um eine Komponente zur Gewichtung zwischen Kompaktheit (Größe der Knoten und Abstände zwischen den Knoten) und Linienbegradigung erweitert. Gleichung 5 zeigt die derart erweiterte Zielfunktion:

Minimize: $\sum_{e=(u,v) \in E} \omega(e) \cdot \lvert y(p_i(e)) - y(p_o(e)) \rvert + \kappa \sum_{v \in V} \bigl(\sigma \, s_y(v) + y(v)\bigr)$   (5)
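Die Beträge in den Gleichungen 4 und 5 lassen sich, wie bei linearer Programmierung üblich, über Hilfsvariablen linearisieren. Die folgende Skizze deutet dieses Vorgehen solverunabhängig an; Datenstrukturen und Bezeichner sind frei gewählt und entsprechen nicht der tatsächlichen Anbindung an lp solve.

    // Skizze der Linearisierung der Betraege aus Gleichung 5 ueber Hilfsvariablen t_e (hypothetische Namen).
    import java.util.ArrayList;
    import java.util.List;

    class LpSketch {
        record Term(double coeff, String var) { }
        record Constraint(List<Term> lhs, String relation, double rhs) { }

        final List<Term> objective = new ArrayList<>();
        final List<Constraint> constraints = new ArrayList<>();

        /** Fuegt fuer eine Kante e den Zielfunktionsbeitrag omega(e) * |yIn - yOut| hinzu. */
        void addEdgeTerm(String edgeId, String yInVar, String yOutVar, double omega) {
            String t = "t_" + edgeId;                        // Hilfsvariable t_e >= |yIn - yOut|
            // t_e - yIn + yOut >= 0  und  t_e + yIn - yOut >= 0
            constraints.add(new Constraint(List.of(new Term(1, t), new Term(-1, yInVar), new Term(1, yOutVar)), ">=", 0));
            constraints.add(new Constraint(List.of(new Term(1, t), new Term(1, yInVar), new Term(-1, yOutVar)), ">=", 0));
            objective.add(new Term(omega, t));               // omega(e) * t_e in der Zielfunktion
        }

        /** Kompaktheitsterm kappa * (sigma * s_y(v) + y(v)) je Knoten v. */
        void addNodeTerm(String nodeId, double kappa, double sigma) {
            objective.add(new Term(kappa * sigma, "s_" + nodeId));
            objective.add(new Term(kappa, "y_" + nodeId));
        }
    }

Für jede Kante $e$ stellt die Hilfsvariable $t_e$ damit eine obere Schranke für $\lvert y(p_i(e)) - y(p_o(e)) \rvert$ dar; da $t_e$ mit positivem Gewicht in der zu minimierenden Zielfunktion steht, nimmt sie im Optimum genau den Betrag an.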

In Gleichung 5 gewichtet $\kappa$ die Kompaktheit gegen die Linienbegradigung, während $\sigma$ die Blockgrößen innerhalb des Kompaktheitsterms gegenüber den Blockabständen gewichtet (im Prototypen $\sigma = 4$). Anzumerken ist, dass Inverter-Knoten sowie Kanten von und zu Inverter-Knoten keinen Beitrag zur Zielfunktion oder zu Constraints leisten, da ihre konkrete Position nicht benötigt wird. Zusätzliche Skalierungsfaktoren zur Einhaltung des Koordinatenrasters, das sich aus den Simulink-Koordinaten und den Abweichungen durch die Linearisierung ergibt, sowie die notwendigen Ausgleichsvariablen zur Linearisierung des Betrages in der Zielfunktion wurden an dieser Stelle nicht dargestellt.

Das resultierende Layout mit Änderung der Knotengrößen ist in Abbildung 1c am Beispiel des Zählermodells dargestellt. Im Vergleich zu 1b konnten zusätzlich die Kantenknicke der beiden Kanten an den äußeren Ports von Sw entfernt werden.

[Abbildung 1: Vergleich verschiedener Layouts eines Zähler-Modells: (a) Routing nach [SFvHM09], (b) ohne Größenänderung, (c) mit Größenänderung.]

Phase 5: Kantenrouting

Für das Kantenrouting wurde die auf [San04] basierende Implementierung aus KLay verwendet [SFvHM09].

2.3 Implementierung

Der hier vorgestellte Algorithmus wurde auf Basis der KLay-Datenstruktur in Java implementiert und in eine Applikation, den Layoutserver, integriert. Layoutinformationen, Parameter und Kommandos werden XML-kodiert via TCP-Socket zwischen Simulink und dem Layoutserver ausgetauscht. Das Auslesen und Aktualisieren von Layoutinformationen sowie das Kodieren und Dekodieren dieser Informationen übernehmen auf Simulink-Seite Skripte in der Matlab-eigenen Skriptsprache M. Zur Lösung der linearen Programme wurden Adaptoren zu verschiedenen LP-Solvern entwickelt, wobei die Erprobung derzeit ausschließlich mit dem freien LP-Solver lp solve (Mixed Integer Linear Programming Solver) durchgeführt wird. Durch die Wahl der Datenstrukturen und die von Simulink unabhängige Implementierung des Layoutservers konnten bestehende Algorithmen, beispielsweise aus KLay, sowie der LP-Solver leicht eingebunden und gleichzeitig Entwicklung und Debugging einfach gehalten werden.
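Der Datenaustausch zwischen Simulink und dem Layoutserver lässt sich im Kern als einfacher TCP-Dienst skizzieren, der XML-kodierte Anfragen entgegennimmt und beantwortet. Portnummer, Protokolldetails und alle Bezeichner in der folgenden Skizze sind lediglich Annahmen zur Veranschaulichung und nicht die reale Implementierung.

    // Stark vereinfachte Skizze eines Layoutservers (Port, Protokoll und Namen sind Annahmen).
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class LayoutServerSketch {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9999)) {            // Portnummer frei gewaehlt
                while (true) {
                    try (Socket client = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(client.getInputStream(), StandardCharsets.UTF_8));
                         PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                        StringBuilder request = new StringBuilder();        // XML-kodierte Layoutanfrage einlesen
                        String line;
                        while ((line = in.readLine()) != null && !line.isEmpty()) {
                            request.append(line).append('\n');
                        }
                        out.println(computeLayout(request.toString()));     // XML-kodiertes Layout zuruecksenden
                    }
                }
            }
        }

        private static String computeLayout(String requestXml) {
            // Hier wuerden Dekodierung, Layoutberechnung und Kodierung des Ergebnisses stehen.
            return "<layout/>";
        }
    }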

134 In Simulink wurden die Optionen zum Start von Layout- oder Unterstützungsfunktionen (siehe 3.2) und zu deren Beeinflussung direkt in die Menüleiste, sowie die Kontextmenüs der Blöcke integriert. In der praktischen Anwendung stellte sich heraus, dass die Kontrolle wann und in welchem Umfang ein neues Layout erzeugt wird stets beim Anwender liegen sollte. Dazu kann dieser durch gezieltes Auswählen eines Modellbereichs entweder das ganze aktuelle Subsystem oder nur einen Teil (ausgewählter Modellteil) neu anordnen lassen, Rückkopplungen manuell festlegen (Blöcke invertieren), die Gewichtung der Linien für die Begradigung beeinflussen (einzelne Linien abgewählt) und die Veränderbarkeit der Größe für jeden Block einzeln festlegen (Kontextmenü), beispielsweise um ein Verzerren von Blöcken mit Grafiken zu verhindern. 3 Modellierungsunterstützung Mit der Verfügbarkeit von automatischer Layoutunterstützung werden über das Erledigen von Layoutaufgaben hinaus zusätzliche Funktionen zur erleichterten Bearbeitung von Modellen möglich, die ohne Layoutunterstützung nicht sinnvoll umsetzbar wären. Eine Vielzahl zusätzlicher Funktionalitäten, wie beispielsweise das Aufbrechen von Subsystemen oder das Einfügen von Signalen in Busse (beide gezeigt in [KD10]), erscheint hier möglich und nützlich. Für den im Abschnitt 1.1 beschriebenen Arbeitsschritt wurde die nachfolgende Unterstützungsfunktion umgesetzt. 3.1 Strukturbasiertes Editieren Das strukturbasierte Editieren [FvH10, Mat10] greift bekannte Refactoring Konzepte auf und überträgt diese mit Hilfe einer Modelltransformationssprache auf Statechart-Editoren. Das Ziel hierbei ist, die bei der Bearbeitung notwendigen manuellen Arbeitsschritte zu reduzieren. Anstatt per Drag and Drop einzelne Zustände in ein Zustandsdiagramm einzufügen und zu platzieren, werden dem Modellierer komplexere Arbeitsschritte wie beispielsweise Folgezustand einfügen angeboten und mit Hilfe automatischer Layoutverfahren umgesetzt. Im Rahmen vergleichende Experimente wird gezeigt, dass sich durch derartige Funktionen eine deutliche Zeitersparnis beim Modellieren ergeben kann. Für Simulink-Diagramme erscheint eine direkte Übertragung des strukturbasierten Editierens in dieser Form wegen der vielen Freiheitsgrade (Blocktypen, Ports) und der Anzahl der daraus resultierenden verschiedenen Transformationen nicht als sinnvoll. 3.2 Kontextbasiertes Modellieren Um dennoch Funktionen wie Nachfolger einfügen und Vorgänger einfügen umsetzen zu können, wurde ein auf statistischen Referenzdaten basierender Ansatz zur dynamischen Auswahl der im Kontext des aktuellen Blocks sinnvollen Transformationen entwickelt und

über das Kontextmenü des jeweiligen Blocks zugänglich gemacht. Für die Implementierung wurden die notwendigen Daten aus verschiedenen Funktionsmodellen des Fahrzeug-Innenraumbereichs ermittelt und nachträglich überarbeitet, um beispielsweise projektspezifisch verwendete Blöcke auszublenden. Tabelle 1 zeigt beispielhaft die häufigsten ausgangsseitigen Nachbarblöcke von Constant-Blöcken (berücksichtigt wurden 130 ausgangsseitige Nachbarblöcke von 111 Constant-Blöcken aus einem aktuellen Funktionsmodell des Fahrzeug-Innenraumbereichs).

Tabelle 1: Häufige ausgangsseitige Nachbarblöcke von Constant-Blöcken (Blocktyp: Häufigkeit).
  Relational Operator: 28%
  Switch: 18%
  Sum: 8%
  Logical Operator: 7%
  Product: 5%

Auf dieser Basis bietet das Kontextmenü eines Constant-Blocks an, diese Blöcke direkt einzufügen und mit dem gewählten Port zu verbinden. Abbildung 2 zeigt beispielhaft die angebotenen Nachfolgeblöcke für einen Constant-Block, die Auswahl zum Einfügen eines Addierers am Ausgang sowie die Situation nach deren Ausführung mit anschließendem Layout:

[Abbildung 2: Unterstützungsfunktion zum Einfügen von Nachbarblöcken: (a) Kontextmenü (gekürzt) mit dem Pfad "Custom Functions > Add Neighbor at... > Output Port 1 > Sum at Input 1", (b) fertiges Ergebnis.]

So entfällt durch das kontextbasierte Modellieren im Zusammenspiel mit dem Layoutalgorithmus das Nachschlagen in der Blockbibliothek sowie das manuelle Einfügen, Verbinden und Platzieren der Blöcke beim Erstellen häufiger genutzter Konstrukte.

4 Verwandte Arbeiten

Zu den von allgemeinen gerichteten Graphen differenzierenden Merkmalen von Simulink-Diagrammen zählen insbesondere die Port-Constraints, Hyperkanten, orthogonales Kantenrouting und größenveränderliche Knoten. Zudem liegen alle Knoten, Kanten und Ports auf Integer-Koordinaten.

Port-Constraints bei hierarchischen Graphen wurden bereits von [GKNV93, San94] in

136 Form fester Kontaktpunkte zwischen Knoten und Kanten betrachtet. Die zur Anwendung kommenden Methoden sind jedoch insbesondere auf das Zeichnen von Datenstrukturen ausgerichtet und für den allgemeinen Fall von Port-Constraints bei Blockdiagrammen nicht geeignet. Die kommerzielle Layout Bibliothek yfiles (yworks GmbH) unterstützt zwar feste Kontaktpunkte zwischen Knoten und Kanten auch bei Blockdiagrammen, jedoch werden größenveränderliche Knoten mit festem Verhältnis zwischen Portposition und Knotengröße wie bei Simulink-Diagrammen nicht unterstützt. Details der entsprechenden Algorithmen sind nicht veröffentlicht. Gleiches gilt für ILOG JViews [SV01] und Tom Sawyer Visualization (Tom Sawyer Software), die ebenfalls verschiedene Port-Constraints unterstützen. Der Ansatz von [SFvHM09] setzt sowohl Port-Constraints bezüglich der Reihenfolge bei der Kreuzungsreduktion, als auch Hyperkanten bei orthogonalem Kantenrouting für hierarchisch gezeichnete Graphen um und würde sich bereits prinzipiell eignen, um grundlegende Layouts für Simulink-Diagramme zu erzeugen. Das darin vorgestellte Kantenrouting basierend auf [San95] kann bereits lange Kanten bevorzugt begradigen, weist jedoch bei Zyklen in der Nähe der Anfangs- und Endknoten unnötige Kantenknicke auf und betrachtet ebenfalls keine größenveränderlichen Knoten. Der ebenfalls prinzipiell für Blockdiagramme geeignete Ansatz Topology-Shape-Metrics [BNT86, Tam87, TDBB88] umgeht zu Gunsten einer besseren Kreuzungsminimierung die Hierarchisierung mit Hilfe von Planarisierung. Das Problem von Port-Constraints bei der Planarisierung, dem jeweils ersten Schritt dieses Ansatzes, ist jedoch bisher ungelöst, so dass dieser und andere auf Planarisierung basierende Verfahren nicht für Simulink- Diagramme einsetzbar sind. Kräftebasierte Layoutverfahren wie beispielsweise Spring Embedder [Ead84] oder Simulated Annealing [DH96] wären bei Formulierung eines geeigneten Kräftesystems prinzipiell auch für Simulink-Diagramme geeignet. Eine effiziente praktische Umsetzung für Blockdiagramme ist bisher nicht dokumentiert. Auf Layoutautomatisierung aufbauende Funktionen zur Modellierungsunterstützung, sowie deren Auswirkungen, werden beispielsweise als Structural Editing für Statecharts in [Pro08, FvH10] vorgestellt. Eine Übertragung auf Simulink-Diagramme wurde bisher nicht veröffentlich. 5 Zusammenfassung In diesem Beitrag wurde ein Werkzeug zum automatischen Layout von Simulink-Diagrammen, mit einem speziell auf diesen Diagrammtyp zugeschnittenen Layoutalgorithmus, vorgestellt. Der Algorithmus enthält neue Ansätze zum Umgang mit Zyklen, kann Modellierungsrichtlinien zu Knotenpositionen während der Hierarchisierung berücksichtigen und zur Begradigung von Kanten nicht nur die Positionen von Knoten verändern, sondern auch deren Größe anpassen. Des Weiteren wurde mit der kontextbasierten Modellierungsunterstützung das Prinzip des strukturbasierten Editierens auf Simulink-Diagramme übertragen. Im praktischen Einsatz des hier vorgestellten Werkzeugs in der Serienentwicklung von Fahrzeugfunktionen konnten bereits erste Erfahrungen gesammelt werden:

Durch den Einsatz des Layouters können die manuellen Layoutanpassungen tatsächlich deutlich reduziert werden und bei vielen Diagrammen sogar ganz entfallen. Ein vollständiger Verzicht auf manuelle Korrekturen, z.B. um semantische Zusammenhänge darzustellen bzw. deren Darstellung zu erhalten, kann derzeit jedoch noch nicht erreicht werden, da das automatische Layout bisher ausschließlich auf der Struktur des Modells basiert.

Bei Diagrammen mit komplex verknüpften Kanten (ab ca. 100 Kanten) weisen automatisch erstellte Layouts durch den hierarchischen Ansatz zum Teil unnötig viele Kantenkreuzungen auf, was die Lesbarkeit im Vergleich zu manuell erstellten Layouts reduziert. Weniger komplexe Diagramme, wie z.B. typische Schnittstellen, in denen lediglich Busse erzeugt oder aufgelöst werden, können hingegen auch noch mit deutlich mehr Kanten (über 300) gut lesbar angeordnet werden. Dies zeigt sich insbesondere in Verbindung mit Patterngeneratoren in der Serienentwicklung von Fahrzeugfunktionen, die einfach strukturierte Modellteile mit vielen Kanten, wie Schnittstellen und Funktionsrahmen, automatisch erzeugen. Dabei wird das hier vorgestellte Werkzeug verwendet, um das Layout der generierten Modellteile zu erzeugen - eine Aufgabe, die früher für jeden Patterngenerator statisch implementiert werden musste. Durch die Trennung der Generierung von Struktur- und Layoutinformationen hat sich die Entwicklung derartiger Generatoren deutlich vereinfacht.

Die Dauer des gesamten Layoutvorgangs wird derzeit noch hauptsächlich von der Anbindung des Werkzeugs (bis zu 90% der Gesamtdauer) bestimmt. Die Laufzeit des eigentlichen Algorithmus wird maßgeblich vom Solver in Stufe 4 bestimmt und hängt bei praxisrelevanten Modellen weniger von der Größe der Diagramme ab als vielmehr von deren Struktur. So wird das Layout einer generierten Schnittstelle mit 61 Knoten und 60 Kanten in 80 ms erzeugt, während das Diagramm aus Abbildung 1c mit 7 Knoten und 7 Kanten im Mittel 220 ms benötigt (gemessen auf einem Intel Core2 T, Win XP SP3, Matlab). Für typische Modelle liegen die Laufzeiten zwischen 100 und 300 ms.

Zu den relevanten weiteren Schritten zählen insbesondere Verbesserungen beim Layouten von Diagrammen mit komplex verknüpften Kanten. Möglicherweise ist hierfür die Erweiterung von auf Planarisierung basierenden Algorithmen zum Umgang mit Port-Constraints zielführend. Die Reduktion des Overheads zur Anbindung des Layouters sowie die Verbesserung der Benutzerschnittstelle für Unterstützungsfunktionen sind weitere wichtige Schritte, die zumindest teilweise durch eine direkte Integration der Funktionen in den Editor erreicht werden könnten.

Literatur

[BK02] Ulrik Brandes und Boris Köpf. Fast and Simple Horizontal Coordinate Assignment. In Graph Drawing 2001, LNCS. Springer, 2002.
[BNT86] B. Becker, E. Nardelli und R. Tamassia. A Layout Algorithm for Data-Flow Diagrams. IEEE Trans. Softw. Eng., 12(4), 1986.

138 [DH96] R. Davidson und D. Harel. Drawing Graphs Nicely Using Simulated Annealing. ACM Trans. Graph., 15(4), 1996.

[Ead84] Peter Eades. A Heuristic for Graph Drawing. Congressus Numerantium, 42, 1984.

[ENE10] Kjell Eikland, Peter Notebaert und Juergen Ebert. Introduction to lp_solve. Online, 2010.

[FSvH10] Hauke Fuhrmann, Miro Spönemann und Reinhard von Hanxleden. Kiel Integrated Environment for Layout, 2010.

[FvH10] Hauke Fuhrmann und Reinhard von Hanxleden. Taming Graphical Modeling. In Proceedings of the ACM/IEEE 13th International Conference on Model Driven Engineering Languages and Systems (MoDELS 2010), LNCS. Springer, 2010.

[GKNV93] Emden R. Gansner, Eleftherios Koutsofios, Stephen C. North und Kiem-Phong Vo. A Technique for Drawing Directed Graphs. IEEE Transactions on Software Engineering, 19, 1993.

[KD10] Lars K. Klauske und Christian Dziobek. Improving Modeling Usability: Automated Layout Generation for Simulink. In MathWorks Automotive Conference, 2010.

[Mat10] Michael Matzen. A Generic Framework for Structure-Based Editing of Graphical Models in Eclipse. Diploma thesis, Christian-Albrechts-Universität zu Kiel, 2010.

[Pro08] Steffen Prochnow. Efficient development of complex statecharts. Dissertation, Institut für Informatik, Christian-Albrechts-Universität zu Kiel, 2008.

[San94] Georg Sander. Graph layout through the VCG tool. Bericht A03/94, Universität des Saarlandes, FB 14 Informatik, Saarbrücken, October 1994.

[San95] Georg Sander. A fast heuristic for hierarchical Manhattan layout. In Proceedings of the Symposium on Graph Drawing, LNCS. Springer, 1995.

[San04] Georg Sander. Layout of Directed Hypergraphs with Orthogonal Hyperedges. In Graph Drawing 2003, LNCS. Springer, 2004.

[SFvHM09] Miro Spönemann, Hauke Fuhrmann, Reinhard von Hanxleden und Petra Mutzel. Port Constraints in Hierarchical Layout of Data Flow Diagrams. In Proceedings of the 17th International Symposium on Graph Drawing (GD 09). Springer, 2009.

[STT80] Kozo Sugiyama, Shojiro Tagawa und Mitsuhiko Toda. Methods for Visual Understanding of Hierarchical System Structures. IEEE Transactions on Systems, Man and Cybernetics, 11(2).

[SV01] Georg Sander und Adrian Vasiliu. The ILOG JViews graph layout module. In Proceedings of the 9th International Symposium on Graph Drawing, LNCS. Springer-Verlag, 2001.

[Tam87] R. Tamassia. On Embedding a Graph in the Grid with the Minimum Number of Bends. SIAM J. Comput., 16(3), 1987.

[TDBB88] Roberto Tamassia, Giuseppe Di Battista und Carlo Batini. Automatic Graph Drawing and Readability of Diagrams. IEEE Transactions on Systems, Man and Cybernetics, 18, 1988.

[TDBET99] Ioannis G. Tollis, Giuseppe Di Battista, Peter Eades und Roberto Tamassia. Graph Drawing: Algorithms for the Visualization of Graphs. Alan Apt, July 1999.

139 MeMo Methods of Model Quality

Wei Hu, Joachim Wegener (Berner & Mattner Systemtechnik GmbH)
Ingo Stürmer (Model Engineering Solutions GmbH)
Robert Reicherdt, Elke Salecker, Sabine Glesner (Technische Universität Berlin)

Abstract: Model-driven development as implemented by the Simulink-Stateflow-TargetLink tool chain facilitates the efficient development of software for embedded processors. But only a few automated quality assurance techniques comparable to those known from traditional software development can be applied in early phases of the development process. This is a serious problem since the generated software, especially in the automotive area, has to fulfill very high safety requirements. In this paper, we present our project Methods of Model Quality¹, in which we develop automated quality assurance methods for early development phases to improve the current unsatisfactory situation. These methods comprise static analyses for domain-specific error detection, analyses to identify the most error-prone model parts through model metrics, and furthermore slicing techniques to support analyses and result visualization. To estimate model maintainability and changeability, a quality model including architecture and design analyses is proposed as well. The expected results of our project will help to reduce development time and costs as well as to improve code quality and reliability.

1 Introduction

Model-based development is now common practice within a wide range of automotive embedded software development projects. Following the model-based approach means focusing on graphical models as the central development artifact to specify, design and implement software. These models are usually realized using data-flow- and control-flow-oriented modeling languages such as MATLAB/Simulink and Stateflow by The MathWorks [Mat10]. This modeling language is widespread in the automotive domain, where controller models are used to specify, design and implement software and serve as a basis for embedded controller code generation (engine control, central locking system, etc.). Code generators make it possible to automatically generate efficient C code directly from these models. Due to the maturity of the available code generators and the application of model-based code generation in combination with model-based testing, it is meanwhile also common practice to use Simulink controller models for safety-relevant applications (e.g. automatic braking systems, X-by-wire systems). The model-based code generation approach implies that the quality and complexity of the models used for code

¹ The project is funded by the Investitionsbank Berlin (IBB) within the EFRE program.

140 generation have a direct influence on the quality of the generated C code. Since the controller models used for code generation are of increasing complexity, analytical as well as constructive quality assurance methods have to be carried out as early as possible. With the methods and tools available so far, quality aspects such as maintainability, reuse and extendibility of huge controller models can only be addressed in a limited way. One reason for this is the fact that metrics and methods are not sufficiently available, e.g. to structure the models. The MeMo project (MeMo: Methods of Model Quality) focuses on quality assurance methods in order to increase the maturity of controller models used for serial production purposes. Three main topics are addressed within MeMo:

1. Development of slicing techniques for analyzing Simulink models
2. Data- and control-flow-driven error analysis methods for Simulink
3. Analysis of the model's design and its model architecture

These topics are presented in the following chapters.

2 Slicing

Slicing is used to simplify a program according to a slicing criterion, which is usually a program point and a set of variables of interest. All statements that do not affect the slicing criterion are removed from the program such that its semantics with respect to the slicing criterion is preserved. Slicing is applied to support tasks such as debugging or reducing the state space for model checking. Since [Wei81] introduced program slicing, a lot of program slicing techniques have been developed. Most of these techniques use dependence graphs, which are directed, rooted graphs with nodes representing program statements and edges representing direct dependencies between those statements. An overview of program slicing is given in [Tip95]. The concept of slicing has been extended to slice graphical notations. Androutsopoulos presented an approach [ACH+09] for slicing Extended Finite State Machines. Wang presented an approach for slicing UML State Charts [WDQ02] by transforming them into Extended Hierarchical Automata (EHA) [MLS97]. To the best of our knowledge, neither algorithms for slicing Simulink nor for slicing Stateflow are known in the literature. For slicing models containing Simulink as well as Stateflow parts, we plan to develop an integrated slicing approach that will be based on a suitable definition of a slicing criterion and on the construction of dependence graphs for both Simulink and Stateflow model parts. For Simulink parts we define the slicing criterion C(B,S) as a block B and a set S of incoming or outgoing signals of this block. For Stateflow parts the slicing criterion C(S,T,V) is a set of states S, a set of transitions T and a set of variables V. For a given slicing criterion we want to remove all Simulink and Stateflow elements of a model that do not influence the elements included in the slicing criterion.
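To make the intended use of a slicing criterion concrete, the following sketch computes a backward slice on a toy block dependence graph. It is not the project's implementation; the graph encoding (blocks as nodes, with an edge list per block naming the blocks whose data or control outputs it depends on) and the block names are illustrative assumptions.

```python
# Minimal sketch of a backward slice on a block dependence graph.
# depends_on[b] lists the blocks whose outputs (data or control) influence block b.
# The block names and the tiny example model are made up for illustration.

def backward_slice(depends_on, criterion_block):
    """Return all blocks that may influence the criterion block (including itself)."""
    slice_blocks = set()
    worklist = [criterion_block]
    while worklist:
        block = worklist.pop()
        if block in slice_blocks:
            continue
        slice_blocks.add(block)
        # Follow dependence edges backwards from the criterion.
        worklist.extend(depends_on.get(block, []))
    return slice_blocks

if __name__ == "__main__":
    # Toy model: Saturation depends on Gain and on a Switch; Display is unrelated
    # and must therefore not appear in the slice for criterion block Out1.
    depends_on = {
        "Out1":       ["Saturation"],
        "Saturation": ["Gain", "Switch"],
        "Switch":     ["In2", "Constant"],
        "Gain":       ["In1"],
        "Display":    ["In3"],
    }
    print(sorted(backward_slice(depends_on, "Out1")))
    # -> ['Constant', 'Gain', 'In1', 'In2', 'Out1', 'Saturation', 'Switch']
```

Blocks outside the returned set are exactly those that a slicer may remove without affecting the behaviour relevant to the criterion.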

141 In Simulink, data dependencies are explicitly represented as signal lines. Control dependencies are either explicit, as for For, If or Switch-Case blocks, or implicit, as for Switch or Multiport Switch blocks and conditional subsystems. Regarding blocks as statements and signals as variables, we can apply the dependence graph construction approach known from programming languages. We construct the dependence graph for a Simulink model by mapping blocks to nodes and signals to edges. The set of edges must be extended with all identified control dependencies. How dependence graphs for Stateflow models can be constructed is less obvious due to the more complex dependencies between the elements of a Stateflow model part. Stateflow is to some extent a Statechart derivative but also contains portions of flowcharts. Wang et al. defined a set of dependence relations on EHAs that describe different kinds of control and data dependence. They also use synchronization dependence to handle events. These dependence relations still need further investigation with respect to the semantics of Stateflow but seem to be applicable.

In our project we plan to use Simulink and Stateflow model slicing to support static analyses and to improve result visualization. In software engineering, for example in the automotive industry, variants are used to model highly configurable software systems. We plan to use slicing to remove inactive variants before we start our static analyses, to reduce the number of unjustified error messages due to over-approximation caused by interdependencies between mutually exclusive variants. Often the source of an error does not lie at the point where the error is found but in preceding blocks. Hence our visualization will use slicing to highlight the preceding blocks that are relevant to the signal values of the block where an error has been detected. Both identified application fields for slicing are relevant to the achievement of our overall project goals.

3 Static Analyses for Simulink and Stateflow Models

Static analysis tools like Coverity or Polyspace are integrated into the software development process in many companies. They help to detect failures early in the development cycle and can thus considerably reduce development costs. A good survey of tools used in industrial projects can be found in [EN08]. These tools detect source-code-specific errors, e.g. memory leaks or null pointer dereferences, that are difficult to detect through code reviews or testing. Analyzing code generated from MATLAB Simulink/Stateflow models with these tools is cumbersome, since errors detected in the generated code must be manually traced back into the model. Moreover, the failure checks implemented by these tools are not well suited to such models: many of them are not relevant for this class of software systems, while domain-specific errors are not checked at all. In our project we plan to develop failure analyses that focus on errors specific to MATLAB Simulink/Stateflow models. Therefore, in the first phase of our project we conducted a study 1) to identify relevant error classes and 2) to identify criteria that a failure analysis must fulfil to guarantee its applicability to real-world models and its acceptance by practitioners. Based on our results we have defined the following failure classes, for which we plan to develop analyses:

142 1. illegal arithmetic operations
2. incomplete or inaccessible model parts
3. improper fixed-point data type scaling

To the first failure class belong errors that are known from traditional static source code analysis, for example division by zero or range violations. Range violations in MATLAB Simulink/Stateflow models may appear at the control inputs of multiport switch blocks. The control input value is used as data port index and can have any arbitrary, user-defined value. It may originate from any other part of the model, which can easily lead to invalid run-time values and hence to unexpected behaviour of the model. Errors of the second class are comparable to uninitialized variable values or dead code. The third class of errors results from the fact that the floating-point data types used in a model must be mapped onto fixed-point data types for code generation if the target processor does not support floating-point operations. This can result in an unintended loss of precision that leads to unexpected model behaviour.

During our evaluation of static analysis methods we came to the conclusion that the acceptance of a static analysis tool is greatly influenced by its scalability. For our project this implies that our methods must be capable of dealing with models that comprise up to blocks. Even if we can expect a certain number of blocks to be irrelevant for the behaviour of the modeled system (e.g. In/Outport blocks), we expect scalability problems for the analysis of the run-time behaviour comparable to those known from code analysis. We plan to exploit properties such as explicit range values or loop bounds that are specific to the considered class of systems. Another important aspect is the false positive rate of the tool, which indicates the number of unjustified error messages. Since all reported error candidates must be checked manually, a great number of spurious error messages can have a contrary effect, i.e. the deployment of the tool will increase development time while the model quality does not improve. To avoid this problem we can benefit from the experience with static analyses for source code and adapt successfully employed techniques such as error message filtering.

4 Model Quality Assessment

In a model-based development process the system model plays a central role for requirements, verification and validation, testing, and ultimately product deployment with or without automatic code generation. Although MATLAB/Simulink/Stateflow has established itself as the de facto industry standard for model-based development, especially in the automotive branch, there exist only a few works concerning model quality. The modeling guidelines of the MathWorks Automotive Advisory Board (MAAB) [Boa07] are probably the most elaborate guidelines for MATLAB/Simulink/Stateflow models in practice. They provide, however, no concrete methods for the evaluation. Automatic inspection of some modeling rules, e.g. naming conventions and parameter configurations, has been successfully realized in different commercial tools. Unfortunately, these guideline checks do

143 not provide information about the internal quality of a model either. Deissenboeck et al. [DHJ+08] and Pham et al. [PNN+09] demonstrated methods to search for model clones, i.e. copy-and-paste model parts with or without modifications, which are a well-known bad practice for software maintainability. Stürmer et al. [SPR10] presented an approach based on Halstead metrics to assess Simulink/Stateflow model complexity. These works have established a solid foundation for further research on model quality. Within the MeMo project, we are developing a quality model that illuminates different internal quality characteristics with respect to changeability and maintainability and includes metrics with which the quality attributes can be evaluated quantitatively or qualitatively. By means of the metrics assessment we are then able to deliver an inner picture of the model quality.

Basically, there are three steps to evaluate the quality of a software product: first, define and specify the quality characteristics; second, develop metrics based on the characteristics; and third, evaluate the metrics and interpret the result. Starting from ISO 9126 [ISO], which defines a general quality model for internal and external quality of software products, we propose a quality model with seven internal quality characteristics for MATLAB/Simulink/Stateflow models: Readability/Understandability, Changeability/Extensibility, Analysability, Testability, Reusability, Stability and Conformity. These quality features are further divided into subcharacteristics; for each subcharacteristic, metrics are developed for the assessment. Two main points among the above quality features are model architecture and design. Model architecture is defined as the hierarchical organization of the different model components, and model design considers the realization of the model components, e.g. which blocks to use and how they are to be configured. In order to assess model architecture, it is essential to compare the existing structure with a "good" one, which might be obtained via clustering. To cluster a model means to decompose it into meaningful modules and components² according to the intra- and inter-relations within and between the model blocks. Based on the dependence graphs mentioned in section 2, we might find an optimal partition of the graph, and thus ultimately a good model architecture, by considering the principle of high cohesion and low coupling. As for the design analyses, we concentrate on the problems of model simplification and change impact analysis, which will be carried out using both aforementioned clustering and slicing techniques.

5 Conclusion

In our project MeMo we address research questions that are of high practical relevance due to the increasing importance of model-based development in safety-critical areas. The expected results will make it possible to integrate automatic quality assurance techniques early in

² For clarity we define "component" as a bundle of Simulink/Stateflow blocks (including subsystems) which realize only one single (often simple) functionality or represent one physical element; a "module" may contain one or several components, whereas "model" is a generic term for component(s), module(s) and block(s).

144 the development cycle and hence will help to improve the quality and reliability of the implemented systems. The cooperation of industrial partners and academic researchers within our project has two main advantages with a positive impact on the project's success. Due to our cooperation we have case studies from industrial development projects available, which considerably facilitates the evaluation of our methods. Furthermore, our cooperation speeds up both the knowledge transfer from research to industry and the feedback on the developed approaches.

References

[ACH+09] Kelly Androutsopoulos, David Clark, Mark Harman, Zheng Li und Laurence Tratt. Control Dependence for Extended Finite State Machines. In Marsha Chechik und Martin Wirsing, Hrsg., FASE, Lecture Notes in Computer Science. Springer, 2009.

[Boa07] MathWorks Automotive Advisory Board. Control Algorithm Modeling Guidelines Using MATLAB, Simulink, and Stateflow, Version 2.1, 2007.

[DHJ+08] F. Deissenboeck, B. Hummel, E. Jürgens, B. Schätz, S. Wagner, J.-F. Girard und S. Teuchert. Clone detection in automotive model-based development. In Proceedings of the 30th International Conference on Software Engineering, ICSE 08, New York, NY, USA, 2008. ACM.

[EN08] Pär Emanuelsson und Ulf Nilsson. A Comparative Study of Industrial Static Analysis Tools. Electr. Notes Theor. Comput. Sci., 217:5-21, 2008.

[ISO] ISO/IEC 9126-1:2001. Software engineering - Product quality - Part 1: Quality model.

[Mat10] The MathWorks, 2010.

[MLS97] Erich Mikk, Yassine Lakhnech und Michael Siegel. Hierarchical Automata as Model for Statecharts. In R. K. Shyamasundar und Kazunori Ueda, Hrsg., ASIAN, Lecture Notes in Computer Science. Springer, 1997.

[PNN+09] Nam H. Pham, Hoan Anh Nguyen, Tung Thanh Nguyen, Jafar M. Al-Kofahi und Tien N. Nguyen. Complete and accurate clone detection in graph-based models. In Proceedings of the 31st International Conference on Software Engineering, ICSE 09, Washington, DC, USA, 2009. IEEE Computer Society.

[SPR10] I. Stürmer, H. Pohlheim und T. Rogier. Berechnung und Visualisierung der Modellkomplexität bei der modellbasierten Entwicklung sicherheits-relevanter Software. In Automotive - Safety & Security 2010. Shaker Verlag, June 2010.

[Tip95] Frank Tip. A survey of program slicing techniques. J. Prog. Lang., 3(3), 1995.

[WDQ02] Ji Wang, Wei Dong und Zhichang Qi. Slicing Hierarchical Automata for Model Checking UML Statecharts. In Chris George und Huaikou Miao, Hrsg., ICFEM, Lecture Notes in Computer Science. Springer, 2002.

[Wei81] Mark Weiser. Program slicing. In Proceedings of the 5th International Conference on Software Engineering, ICSE 81, Piscataway, NJ, USA, 1981. IEEE Press.

145 Ontology-Based Consideration of Electric/Electronic Architectures of Vehicles

M. Hillenbrand*, M. Heinz*, M. Mohrhard**, J. Kramer**, K. D. Müller-Glaser*
Institute for Information Processing Technology, KIT, Karlsruhe, Germany
*{hillenbrand, heinz, **{markus.mohrhard,

Abstract: The development of electric/electronic architectures for vehicles has to consider numerous requirements. As current architecture development tools follow the meta modeling approach and the closed world assumption of model-driven engineering, they do not satisfactorily support rule-based checking in terms of meeting abstract requirements. We developed a procedure to transform electric/electronic architecture descriptions of vehicles into the open world assumption of the ontology description language OWL. Considering them as ontologies allows enrichment with semantic knowledge, which opens up the possibility of logical inference and reasoning. We show that conformance to complex concepts can be inferred from a semantically revised meta model equivalent.

1 Introduction

Recalls of vehicles due to defective safety-relevant systems negatively influence the trust of customers in the automotive industry. In 2010, original equipment manufacturers had to recall more than 8.5 million vehicles due to malfunctioning systems. Can manufacturers master the steadily increasing complexity of vehicle functions? The functionalities of a vehicle are fulfilled by mutually interacting embedded systems. A premium car comprises up to 80 electronic control units (ECUs), each of which is itself an embedded system. The modeling of the electric/electronic architecture (EEA) captures the interaction of this complex network of ECUs, sensors and actuators. To stay on top of the increasing complexity of safety-relevant and advanced driver assistance systems, it is strictly required to assess and evaluate these systems, their safety mechanisms and their mutual interaction against all relevant requirements. These comprise equipment features as well as norms and standards. The international standard for the functional safety of road vehicles, ISO 26262 [Int10], whose publication is expected in 2011, has a serious impact on the development of safety-relevant automotive systems with electric / electronic / programmable electronic parts or subsystems. Its consideration in the overall EEA development supports the design of safe vehicles from the beginning. This standard contains abstract descriptions of requirements and applicable design patterns. Their

146 consideration during the development and modeling of the EEA is required. An extension of the modeling paradigm in the domain of EEAs towards the open world assumption (OWA) [AZW06] and ontologies offers wider possibilities for modeling meaning and knowledge, for checking against abstract requirements, and for inference and reasoning.

The domain-specific EEA modeling and evaluation tool PREEvision [aqu09] is by now well established in the automotive domain. It covers the EEA development in the concept phase of the vehicle development. PREEvision follows the meta modeling approach of model-driven engineering (MDE) [SVEH07]. A PREEvision EEA instance model (EEA-IM) of a premium car comprises about artifacts. There are numerous influences on the development of EEAs. The proper selection of architectural realization alternatives is often based on experience and common sense, both of which are difficult to formalize [Sow06] and capture. To determine and accumulate information about existing EEA models, PREEvision offers to specify and execute query rules on the instance model (IM). These feature conjunction chains (AND) of IM artifact types and value assignments. The specification of cardinalities or the combination of query rules by applying logical operators like NOT, OR or XOR is not, or not satisfactorily, supported. Checking for semantics and reasoning are also not supported in the current version of the tool.

The MDE follows a top-down approach. Classes, associations and attributes of a meta model (MM) are developed as a specification of possible instances. As instances can only be set up according to the requirements and constraints of the MM, they are always correct or true in terms of the MM. Therewith the MDE follows the paradigm of the closed world assumption (CWA) [AZW06]. Ontologies are a modeling paradigm that specifies concepts by the sets of their individuals; it therefore follows the open world assumption (OWA). Ontologies are applied to capture and relate knowledge. In this paper, we suggest and demonstrate the extension of the EEA modeling perspective towards the OWA. This means transforming EEAs from the CWA of the MDE to the OWA of ontologies. We implemented a set of four transformations for this purpose. Additionally, we extended an open source transformation to facilitate the transformation of the EEA-MM and EEA-IMs to their ontology equivalent. This equivalent uses the Web Ontology Language (OWL). The enrichment of the transformation results with additional information and knowledge facilitates inference and reasoning. Therewith it extends the current possibilities of PREEvision.

The idea to extend and support MDE-based modeling with a possibility to model knowledge is not new. The ModelCVS project applied an open knowledge base to support the bridging of models between different representation formats and tools [KRS05]. GeneralStore is a comparable approach to transform models in order to exchange them between different tools [RKGMG04]. The query engine of PREEvision represents the advancement of the left-hand side of the model-to-model transformation applied by GeneralStore [RKGMG04]. [GRK+09] applies ontologies in order to support the overall design process of automotive systems with captured knowledge. Compared to these, our approach does not focus on the semantic background and transformability of different formats of models. We support

147 developers of electric/electronic architectures of vehicles with a possibility to extend their perspectives on the modeled content. As this extension goes into the field of ontologies, it entails the possibility to facilitate powerful model queries, to capture the architects' thoughts, and to extend the actual data formats by additional concepts.

Basics on MDE and ontologies as well as their similarities and differences are presented in section 2. The concept of the paper is described in section 3. The required transformation (method) from EEA-MM and EEA-IMs to EEA ontologies is explained in section 4. Transformation results and associated constraints are presented in section 5. The handling of ontology-based EEAs is discussed in section 6. We summarize the paper in section 7 and give an outlook on future work.

2 Approaches in Modeling

According to [HC67], a model is a simplified abstraction of reality. This simple and general definition holds for both the MDE and ontologies. In the domain of computer-aided engineering, we have to differentiate between three characteristics of modeling [GDD09]. These characteristics are independent of the purpose of the model. The first characteristic is the meaning of the reality that we abstract in a model. The second characteristic is the language or meta model, which is human-understandable and used to express or capture the abstraction or the model (UML, PREEvision, etc.). The third characteristic is the computer-processable and storable representation of the model in the form of a serialization (XML, XMI, etc.). In the following, the modeling paradigms MDE and ontologies are explained and compared. They are both modeling approaches and address the explained characteristics. They describe how to capture and abstract reality and feature an XML-based serialization. But their development strategies can be considered as being inverse.

2.1 Model Driven Engineering and the EEA Modeling Tool PREEvision

The standard meta pyramid from the Object Management Group (OMG) depicts the four levels of the MDE approach. It was originally published in the ISO Information Resource Dictionary System (IRDS) standard [II90]. The content of each level is a model (or instance) of its overlying level. The levels of the meta pyramid are: M0 (objects as instances in the real world), M1 (model), M2 (meta model or language) and M3 (meta meta model or language description). The frameworks Meta Object Facility (MOF) [OMG10a] and Ecore are established language descriptions on the M3 level. The main field of application of MDE is engineering in terms of developing something new. Such a development starts with designing (if not yet available) a description format or language (MM). This description format or language can then be used to express the things to develop. PREEvision is an example of such a language. Due to the top-down approach of the MDE from abstract to concrete, the IM can only contain artifacts and relations whose expression is designated by the MM. Also conformance is derived top-down. The

148 MDE has a closed world assumption (CWA), as models are only valid if they completely conform to the MM. While models have semantic meaning to the user, computers can only process models based on their syntax. The syntactical representation of a model uses symbols from the vocabulary specified by the applied modeling language. In addition, rules for the computer-processable handling of symbols can be specified. They capture the semantics of symbols and therewith realize a semantically predefined vocabulary. The intensional semantics of the MDE comprises inheritance, instantiation and value assignment to attributes of instances. The levels M1 to M3 have human-understandable representations (UML or PREEvision diagrams). These representations usually have textual, XML-based equivalents (serializations) that make it possible to store and process the models on computers. The XML Metadata Interchange format (XMI) can be applied to serialize MOF-based and Ecore-based MMs or IMs. An XMI file itself is a model on the M1 level of the XML grammar on the M2 level. This in turn is derived from the Extended Backus-Naur Form (EBNF) language description on the M3 level. A stack of modeling levels (referring to the meta pyramid), with a particular meta meta model on the M3 level, forms a modeling space (MS) [GDD09]. For human and computer processability of a PREEvision model, we need a MOF modeling space for the PREEvision representation and an EBNF modeling space for its XMI serialization. They are considered as being orthogonal.

PREEvision is a tool for diagram-based modeling and evaluation of EEAs for vehicles. It follows the MDE paradigm and features a domain-specific MM, which is based on MOF version 1.3. The MM of PREEvision comprises about 1200 meta classes, about 800 associations and about 800 attributes. In combination, they cover a wide range of characteristics of vehicle EEAs. It is currently neither possible nor necessary to make user-based extensions of the PREEvision MM. For the accumulation of model content, PREEvision features query rules. They facilitate conjunction (AND) chains of IM artifact types and value assignments. The combination of rules with logical operators (NOT, OR, etc.) is not supported. The tool is not designed to capture semantic information or knowledge about artifacts or relations. However, instances of the PREEvision MM class Requirement can be individually labeled and mapped to numerous other artifacts. So they can be used to establish semantic networks ([Sow91]) among other artifacts. The meaning of such networks is based on user definitions (terms, symbols, naming, etc.).

2.2 The Origin of Modeling and Ontologies

Contrary to the top-down approach of the MDE, the origin of natural science lay in observing things that existed in reality and abstracting them into categories and concepts [Sow00]. This was followed by the specification of relations between categories and individuals (things in reality). The procedure facilitated the capturing of knowledge about the real world and allowed deductive reasoning and inference (called syllogism in [AS09]). This original paradigm of modeling is covered by ontologies. Ontology means the study

149 of the nature of being, existence and reality. Ontologies have a philosophical background and have been applied in computer science in the fields of artificial intelligence and the semantic web [HKRS08]. According to [BS05], "An ontology is a hierarchically structured set of concepts describing a specific domain of knowledge that can be used to create a knowledge base...". The term concept corresponds to the term class in the MDE. As an ontology describes and abstracts things from reality, it can be considered as a model. The main difference between ontology engineering (OE) and MDE is the development strategy. Though real things (individuals) are captured, categorized and mutually related in OE, its strategy is bottom-up. The fact that an individual does not conform to any existing category does not imply that it cannot exist. It just means that a suitable category has not yet been specified. We say OE has an open world assumption (OWA). OE can be supported by graphical representations like UML profiles [GDD09] and tools like the ontology editor and knowledge base framework Protégé [GMF+03].

2.3 Ontology Description Languages and Tools

There exist several approaches to model and express knowledge and semantics in ontologies. These approaches aim for a formal representation of ontologies as well as computer-based processability. Therefore, they usually apply a specific vocabulary of semantically predefined symbols. The Resource Description Framework (RDF) [KCM04] is a language to describe concepts and their mutual relations as nodes and edges of a graph. A statement is interpreted as a triple of subject-predicate-object. Subject and object are the nodes of the triple; the predicate, as the mutual relation, is the edge of the triple. RDF Schema (RDFS) adds expressions to the vocabulary of RDF to express terminological knowledge; examples are the specification of the domain and range of a relation. RDFS therewith forms a simple ontology language. The Web Ontology Language (OWL) [MvH09] extends the vocabulary of RDFS. OWL forms a domain-comprehensive vocabulary of semantically predefined symbols. The entire vocabulary of OWL features three partial languages; OWL DL (description logic) is the most expressive one that is still decidable. Relations between the terms of an ontology language vocabulary are based on logic languages (propositional logic [GDD09], first-order predicate logic and description logics) and therefore have a formal, mathematical basis. This allows for a logical analysis of the modeled content and, beyond that, for inference and reasoning [HKRS08]. The tool Protégé offers logical inference in the form of reasoners (e.g. FaCT++). We apply them to subjoin individuals (objects) to complex concepts (classes) if they fulfill their sufficient conditions. SPARQL is a powerful query language for RDF-based graphs. It allows specifying search patterns. Its syntax is similar to SQL (Structured Query Language) and it contains numerous features (union (OR), optional, filter, boolean, comparison and arithmetic operations, etc.). SPARQL is supported by the command line tool Pellet, which works directly on an OWL-based input file.

Ontologies are mainly applied to maintain the overview and interrelation between numerous

150 individuals and concepts, e.g. in medicine (Unified Medical Language System [LHM93]) or lexicons (WordNet [Fel98]). However, the AUDI AG has started to use an ontology-based system to model the behavior of ECUs and their interaction via complex relations and rules [SCA+07].

3 Transformation of EEA Models Towards Ontologies

While working on the development and design of complex and comprehensive architectures, one realizes that many decisions are based on experience and common sense. Thoughts about circumstances that emerge from particular perspectives on the modeled content, or from the interpretation of the modeled content, also influence the design. So the already modeled content, in terms of specific combinations of model artifacts, has an impact on design decisions. A decision made earlier in the development of an architecture can, for example, constrain design options in a following development step. It is not an easy task to maintain an overview in a model comprising about artifacts. It is important to be able to easily browse the modeled content for information on which to base future design decisions. Even the formulation of the corresponding question is not trivial. PREEvision supports only queries consisting of conjunction chains, so the actual question requires rewriting before it can be executed on a model. The possibility to use a powerful query language like SPARQL would make this rewriting unnecessary and allow architects to directly specify complex queries, resulting in meaningful sets of fulfilling model artifacts. This cannot be accomplished by the Object Constraint Language (OCL), which is originally intended to specify invariant conditions that must hold for the system being modeled [OMG10b]. In addition, we consider it helpful for sustainable modeling not only to model the realizable artifacts (real-world things) in an architectural model. Extensions should comprise thoughts, advice, annotations, ideas, concepts, relations and dependencies in order to capture the knowledge of architects and the circumstances that lead to design decisions. We found that a domain ontology modeled in the Web Ontology Language (OWL) features the demanded criteria. We claim that the process of developing electric/electronic architectures (EEA) of vehicles benefits from the additional consideration of the EEA according to the open world assumption. The representation of the EEA as an ontology based on OWL results in the following advantages:

- Application of the powerful query language SPARQL
- Adding and relating concepts to the EEA that can cover and express statements beyond the possibilities of the actual approach
- Possibility for reasoning based on the modeled content of the EEA instance model
- Possibility to discover unobvious design and specification errors

The consideration and enrichment of EEA meta model (EEA-MM) and EEA-IM content according to the OWA requires a transformation strategy from the MDE to OE. An EEA-IM is an M1 model according to the standard meta pyramid. It contains artifacts that are

151 instances of classes from the overlying EEA-MM. From the perspective of ontologies, the EEA-MM is a domain ontology (DO), while the EEA-IM is a set of individuals. We call this set the individual ontology (IO). Just as an EEA-IM references the EEA-MM, the individuals from the EEA-IO reference the concepts from the EEA-DO. Accordingly, we developed two independent transformation processes, from the meta model (M2) to the domain ontology (MMToDO) and from the instance model (M1) to the instance ontology (IMToIO). Our approach uses the classes and individuals of an EEA as a set of input statements. Their representation / serialization is transformed to a serialization that is based on the ontology language OWL. The resulting ontology is extended by additional statements in order to describe and infer knowledge. So we start OE halfway, with the basic concepts and relations already described. The approach therewith combines the strengths of both the MDE and OE.

4 Transformations on M1 and on M2 Level

This chapter describes the realization of the transformation processes. PREEvision saves the EEA-IMs and the EEA-MM as XMI serializations. We therefore use XSLT (Extensible Stylesheet Language Transformations) as the language to describe most of the necessary transformations. For the transformation of the EEA-MM, we use and customize the open source project emftriple, which is still under development. It provides a set of Eclipse plug-ins to help bridge the EMF (Eclipse Modeling Framework) and semantic web technologies such as OWL, RDF and SPARQL. From all emftriple features, we only adapt the transfer of Ecore-based meta models to their OWL-based counterpart.

Figure 1: Transformation Workflow MMToDO (transformation flow from the MOF 1.3-based PREEvision EEA meta model via an Ecore-based EEA meta model to the OWL-based EEA domain ontology EeaDo.owl, spanning the PREEvision, EMF and ontology modeling spaces)

Fig. 1 depicts the transformation process MMToDO. It comprises three modeling spaces (MS) according to the applied M3 levels. The MS on the right depicts the standard meta pyramid as applied by PREEvision. The EEA-MM (version with serialization based on XMI version 1.1) is based on MOF 1.3 as the language description on the M3 level. The EEA-IM is based on the applied EEA-MM. It also features a serialization according to XMI version 1.1. The MS in the middle depicts the levels M3 and M2 of the EMF meta pyramid.

152 This uses Ecore as the language description. The stack on the left shows the MS of the applied ontology description. The EEA ontology (EEA-O) will comprise the EEA-DO and the EEA-IO. MMToDO comprises three transformations. The applied subset of emftriple facilitates an ATL-based (Atlas Transformation Language) transformation. As input, it requires an Ecore-based MM. Therefore we transform the MOF-based EEA-MM to an Ecore-based MM with the transformation EeaM2MofToEcore.xslt. This transformation therewith actually represents an MM-to-MM transformation. Packages of the MOF-based EEA-MM do not feature the attribute URI, which is required for unique identification in ontologies. EeaM2MofInsertUri.xslt, the first transformation of MMToDO, adds this attribute to each package. The second transformation, EeaM2MofToEcore.xslt, transforms the output of the transformation EeaM2MofInsertUri.xslt into its Ecore-based equivalent. There are special features in this transformation:

Associations and References: In the XMI serialization of MOF 1.3, a reference is part of a class and points to the opposite association end of the corresponding association. In Ecore, there are no associations, only references. As in the MOF XMI serialization, they are part of the class. References in Ecore are always unidirectional. The type eOpposite can be used to model information concerning the bidirectional character of an association.

Classes: In MOF XMI, elements are identifiable by a document-wide unique XMI ID (e.g. a111). This ID is used when referencing superclasses. In Ecore XMI, the path is used as reference (e.g. #//mm/eea/core/eecomposite). As a consequence, the transformation resolves the MOF XMI IDs to their path.

Datatypes: Many datatypes of the EEA-MM correspond to Java datatypes. They are not derived from MOF. However, some of them are not needed in the OWL-based EEA-DO. The following list states the transformation of datatypes, where "→" means "is transformed to": Integer → EInt, Boolean → EBoolean, String → EString, Double → EDouble, {Hexadecimal, Number, Text, File, RGB} → EString, Enumeration → EEnum.

The third transformation is handled by a customized version of emftriple. This comprises the proper transformation of the Ecore attribute eOpposite and of enumerations as well as the differentiation between elements with equal names in the transformation input file. As OWL does not support packages, each package is transformed to a separate OWL file. The EEA-DO currently comprises 49 ontologies. The following transformations of Ecore types to OWL attributes are performed: EClass → owl:Class, EAttribute → owl:DataProperty, EReference → owl:ObjectProperty, EPackage → separate ontology, Enumeration → owl:oneOf and owl:Literal, eOpposite → owl:InverseObjectProperty. All OWL attributes refer to the OWL RDF/XML serialization.

Fig. 2 shows the transformation process IMToIO. It comprises two transformations. The actual transformation is handled by EeaM1MofToOwl.xslt. It transforms an EEA-IM (e.g. EeaM1Mof.eea) to an EEA-IO (EeaIo.owl). During this transformation, EeaM1MofToOwl.xslt determines the namespace according to the class (M2) of an element (M1) to transform. Fast access to the EEA-MM content is required to achieve a fast transformation. Browsing the XMI serialization of the EEA-MM (eea300.xml, the meta model of PREEvision) is time-consuming because it requires recursive processing.

153 Figure 2: Transformation Workflow IMToIO (transformation flow from the MOF 1.3-based PREEvision EEA instance model, via a summarized EEA meta model, to the OWL-based EEA instance ontology EeaIo.owl, which references the EEA domain ontology EeaDo.owl)

We transform the file eea300.xml to a serialization that is easier to browse (EeaM2Summary.xml). This serializes the content of eea300.xml in a flat hierarchy of its classes and the appropriate references. Although we have to apply an additional transformation, the saving in processing time and the reduced error-proneness are worth it. The following transformations of EE artifacts to OWL attributes are performed: object (artifact) → owl:NamedIndividual; relations between classes → owl:ClassAssertion, owl:DataPropertyAssertion or owl:ObjectPropertyAssertion. The transformation of enumerations (*) is currently under development. All OWL attributes refer to the OWL/XML serialization.

5 Results

Table 1 shows an overview of the files (transformations and models) involved in the transformation processes. The two horizontal areas of the table refer to the transformation from EEA-MM to EEA-DO and the transformation from EEA-IM to EEA-IO. We only apply the Ecore-to-OWL transformation from the open source project emftriple. This is represented by ecore2owl.atl (637 LoC (lines of code) after customization) and ecore2owlhelpers.atl. As OWL does not feature packages, EeaM2Owl.xml represents the EEA-DO as a summarization of 49 ontologies (*.owl files). The EEA-IO EeaM1Owl.owl (**) is the transformation result of the official demo model of PREEvision v3.0. The complexity of this demo model is equivalent to a compact car. Its XML serialization file comprises about 2.5 million lines. This large number results from special features of ontologies that require additional assertions (e.g. inequality of classes or individuals), as well as from the applied serialization type (OWL/XML). Compared to the OWL RDF/XML serialization, the OWL/XML serialization allows for a straightforward text-based transformation. To handle this file in the ontology tool Protégé, we apply a 64-bit version of Windows and 4 GB of RAM allocated to Protégé.
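The blow-up of the instance ontology mentioned above can be illustrated with a small sketch: every instance-model artifact becomes an owl:NamedIndividual with class and property assertions, and making individuals mutually distinct adds pairwise inequality statements on top. This is a simplified illustration, not the project's XSLT; the artifact, class and property names are invented, and the output is shown as plain subject-predicate-object triples rather than the OWL/XML serialization.

```python
# Sketch: one EEA-IM artifact expands into several OWL-style assertions,
# and pairwise "different individuals" statements grow quadratically.
# All identifiers below are invented for illustration.
from itertools import combinations

def artifact_to_triples(artifact_id, meta_class, properties):
    subject = "ex:" + artifact_id
    triples = [
        (subject, "rdf:type", "owl:NamedIndividual"),   # individual assertion
        (subject, "rdf:type", "ex:" + meta_class),       # class assertion
    ]
    for prop, obj in properties:                          # property assertions
        triples.append((subject, "ex:" + prop, obj))
    return triples

def inequality_triples(artifact_ids):
    """Pairwise distinctness of individuals, one source of the large file size."""
    return [("ex:" + a, "owl:differentFrom", "ex:" + b)
            for a, b in combinations(artifact_ids, 2)]

if __name__ == "__main__":
    triples = artifact_to_triples("Ecu_42", "ElectronicComposite",
                                  [("hasConnector", "ex:Connector_7")])
    triples += inequality_triples(["Ecu_42", "Connector_7", "Sensor_3"])
    for s, p, o in triples:
        print(s, p, o, ".")
```

For a demo model with many artifacts, this per-artifact expansion, combined with the verbose OWL/XML serialization, is what drives the file size into the millions of lines.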

154
Name                        Type     Size     LoC
EeaM2Mof.xml                EEA-MM   2455 kb
EeaM2MofInsertUri.xslt      Transf.  3 kb     46
EeaM2MofToEcore.xslt        Transf.  15 kb    236
EeaM2Ecore.xml              EEA-MM   688 kb   7634
ecore2owl.atl               Transf.  20 kb    621/637
ecore2owlhelpers.atl        Transf.  13 kb    372
EeaM2Owl.xml                EEA-DO   1663 kb
EeaM2Summary.xml            EEA-MM   3971 kb
EeaM2Summary.xslt           Transf.  6 kb     109
EeaM1Mof.eea/EeaM1Mof.xml   EEA-IM   kb
EeaM1ToOwl.xslt             Transf.  24 kb    330
EeaM1Owl.owl**              EEA-IO   kb

Table 1: Comparison of Transformations, Sources and Output Files

6 Handling of Generated Ontology

After the transformation of the EEA-MM and the EEA-IM, the modeled content is present as an EEA ontology (EEA-O). This allows the application of SPARQL, which corresponds to the architects' need to precisely specify logically combined concepts in order to gain and accumulate specific information about the EEA on which they can base design decisions. The ontology lifecycle also begins: during it, the EEA-O is enriched with semantic information and knowledge. The adding of information can and should be done by any role that states requirements towards the EEA (safety experts, software experts, wiring experts, etc.). The additional information is basically correlated to the concepts of the domain ontology (DO), which is itself transformed from the PREEvision meta model. With each meta model release, this requires a new transformation of the PREEvision MM to its DO equivalent and a revision according to the status of the previous DO. The revision of the domain ontology / adding of concepts comprises three steps: (1) correcting simplifications that were made during the transformation, (2) adding expressive terms to existing artifacts, and (3) developing a semantic superstructure. The first two steps are required to be able to use the ontological representation of the EEA. The third develops over time, holding and representing the knowledge of the modeling parties.

Revising addresses relations, e.g. owl:ObjectProperty. As predicates, they are the transformation results of associations. In the EEA-MM, associations feature role names and usually lack meaningful association names. In an ontology, a predicate has to be

155 a uniquely identifiable resource. Adding meaningful identifiers contributes to the comprehensibility of the modeled content, e.g. adding the identifier "has connector" to the property transformed from the association with the PREEvision-specific role names eelogicalconnectors and electroniccomposite.

The adding of statements addresses the modeling of meaning and knowledge that is not covered by the EEA-MM. This addresses the second demand from section 3, which states the possibility to capture and relate ideas and concepts that lead or led to design decisions or might provide useful information for future design decisions. The modeling of knowledge has two basic manifestations:

- Specification of the sufficient conditions of specific concepts in the domain ontology (DO), without information about the presence of instances of these concepts. Subsequent inference identifies resources from the instance ontology (IO) as instances of the specific concepts if they fulfill their sufficient conditions. This results in the addition of statements that relate the determined instances to the specific concepts.

- Semantic detailing of concepts and their application in the IO. Inference again results in additional statements. This time, they represent reasoned relations. These relations were not explicitly modeled and are therefore not obvious, but they can be derived from the semantics that are preallocated to the relations of the available statements.

We demonstrate the latter type by the following example. According to section 2.3, we consider the EEA-O as a set of statements, i.e. as a graph with nodes and edges. Some of the resources are semantically predefined symbols from the vocabulary of an ontology language (e.g. rdfs:, owl:). Others are from a domain-specific namespace (ex:). Reasoners add statements to the graph based on specific conditions on existing statements. The conditional adding of statements is captured in the rules that are applied by the reasoner. As an example, we consider the following two rules (Rule 1, Rule 2) from the formal semantics of RDFS [HKRS08].

Rule 1:
    u a v .    a rdfs:subPropertyOf b .
    -----------------------------------
    u b v .

Rule 2:
    a rdf:type rdfs:ContainerMembershipProperty .
    ----------------------------------------------
    a rdfs:subPropertyOf rdfs:member .

Above the horizontal line, each rule specifies the statement or statements the graph should be browsed for. In case of a hit, the statement below the line is added. The statements are presented in N-Triples notation, e.g. {u a v .} specifies a subject u that is related to an object v by a property a. Rule 1 adds a statement according to the inference: if u is related to v by a, and a is a sub-property of b, then u is also related to v by b. Rule 2 adds a statement specifying that a property a is a sub-property of the property rdfs:member if a is an instance of the type rdfs:ContainerMembershipProperty. We assume the graph depicted in Fig. 3, which represents an academic model of concepts and relations serving as an example; it has no correlation to any past or actual vehicle, nor to any vehicle currently under development.

156 Figure 3: Example Knowledge Modeling and Inference (domain ontology with the concepts ex:System and ex:SafetyClassification and the properties ex:hasOptionalEquipment, ex:alternativeFor and ex:conformsToSameSafetyRequirements; instance ontology with the individuals ex:VehicleVariant_XY, ex:ESP and ex:ABS)

In the part of Fig. 3 showing the IO, the graph states that ex:ESP is an alternative equipment to ex:ABS by applying the property ex:alternativeFor. In the domain ontology, this property is further specified as a sub-property of the property ex:conformsToSameSafetyRequirements, which relates instances of the type ex:System (domain restriction of the property) to instances of the type ex:System (range restriction). Possible instances of the domain must have a relation to one of the safety classifications ex:ASILB or ex:ASILC, while possible instances of the range must have a relation to the safety classification ex:ASILB. According to the definition of RDFS, all instances of the resource rdfs:ContainerMembershipProperty are some kind of contained-in relation. According to the shown graph, this includes the property ex:conformsToSameSafetyRequirements. We consider the following two statements as being added to the initial set of statements (according to Fig. 3) by the execution of a reasoner.

Statement 1: ex:ESP ex:conformsToSameSafetyRequirements ex:ABS .
Statement 2: ex:conformsToSameSafetyRequirements rdfs:subPropertyOf rdfs:member .

Statement 1 is added according to Rule 1. It provides additional information about the subject and object of the relation: both are instances of ex:System and have a relation to a safety classification. The possible literals for the safety classifications of subject and object of this relation also state that being an alternative for something means featuring (at least) the same safety classification. Statement 2 is added according to Rule 2. This rule states that a subject of the predicate ex:conformsToSameSafetyRequirements has the object of the predicate as a member. According to Fig. 3 this means that ex:ABS is contained in ex:ESP.

The reasoned information could also be derived in the domain of the MDE by the specification and execution of query rules on a corresponding instance model. However, the specification of the corresponding query rules requires detailed knowledge about the underlying structure of the model and the specific characteristics of what should be discovered by the execution of the query rules. The difference to the ontological consideration lies in the position of the knowledge representation: in the MDE consideration, it lies with the query rules; in the ontological consideration, it lies with the DO. The latter facilitates the

157 specification, representation and availability of knowledge in the description of mutual relations between concepts that can be instantiated in the IO.

7 Summary and Outlook

The ontology representation of the EEA allows for a sustainable development and assessment of EEAs. The claim is supported by the possibility to:

- describe and revise EEAs based on a vocabulary of semantically predefined symbols,
- perform inference and reasoning,
- perform comprehensive, graph-based queries of the EEA with SPARQL,
- enrich the model with knowledge.

We realized two processes to transform the EEA meta model and EEA instance models to an ontological representation based on the Web Ontology Language (OWL). We also discussed the relevance of the ontological consideration of EEAs by demonstrating the application of reasoners. In the future, we will collect experience in dealing with EEA-Os and reasoning.

References

[aqu09] aquintos. EE-Architekturwerkzeug PREEvision. Technical report, aquintos GmbH, 2009.

[AS09] Aristotle and Gisela Striker. Aristotle: Prior Analytics, Book I: Translated with an Introduction and Commentary (Clarendon Aristotle). Oxford University Press, 2009.

[AZW06] Uwe Assmann, Steffen Zschaler, and Gerd Wagner. Ontologies for Software Engineering and Software Technology. Springer-Verlag, Berlin Heidelberg, 2006.

[BS05] Eva Blomqvist and Kurt Sandkuhl. Patterns in Ontology Engineering: Classification of Ontology Patterns. In ICEIS 2005 - 7th International Conference on Enterprise Information Systems, 2005.

[Fel98] Christiane Fellbaum. WordNet - An Electronic Lexical Database. The MIT Press, 1998.

[GDD09] Dragan Gasevic, Dragan Djuric, and Vladan Devedzic. Model Driven Engineering and Ontology Development. Springer, Berlin Heidelberg, 2009.

[GMF+03] John H. Gennari, Mark A. Musen, Ray W. Fergerson, William E. Grosso, Monica Crubézy, Henrik Eriksson, Natalya F. Noy, and Samson W. Tu. The evolution of Protégé: an environment for knowledge-based systems development. Int. J. Hum.-Comput. Stud., 58(1):89-123, 2003.

[GRK+09] J. Gacnic, J. Rataj, F. Köstner, H. Jost, and D. Beisel. DeSCAS Design Process Model for Automotive Systems - Development Streams and Ontologies. SAE International, 2009.
[HC67] Peter Haggett and Richard J. Chorley. Models in Geography, chapter Models, paradigms and the new geography. Methuen and Co. Ltd., 1967.
[HKRS08] Pascal Hitzler, Markus Krötzsch, Sebastian Rudolph, and York Sure. Semantic Web: Grundlagen. Springer, Berlin, 2008.
[II90] ISO and IEC. Information technology - Information resource dictionary system (IRDS), 1990.
[Int10] International Organization for Standardization. ISO/DIS Road vehicles - Functional Safety. Technical report, International Organization for Standardization, 2010.
[KCM04] Graham Klyne, Jeremy J. Carroll, and Brian McBride. Resource Description Framework (RDF): Concepts and Abstract Syntax. Technical report, World Wide Web Consortium (W3C), 2004.
[KRS05] G. Kramler, W. Retschitzegger, and W. Schwinger. ModelCVS - A Semantic Infrastructure for Model-based Tool Integration. Technical report, 2005.
[LHM93] D. Lindberg, B. Humphreys, and A. McCray. The Unified Medical Language System. Methods of Information in Medicine, 32(4), 1993.
[MvH09] Deborah L. McGuinness and Frank van Harmelen. OWL Web Ontology Language Overview. Technical report, World Wide Web Consortium (W3C), 2009.
[OMG10a] OMG. MOF (Meta-Object Facility). Technical report, Object Management Group, 2010.
[OMG10b] OMG. Object Constraint Language, Version 2.2. Technical report, Object Management Group, 2010.
[RKGMG04] Clemens Reichmann, Markus Kühl, Philipp Graf, and Klaus D. Müller-Glaser. GeneralStore - A CASE-Tool Integration Platform Enabling Model Level Coupling of Heterogeneous Designs for Embedded Electronic Systems. In Proceedings of the 11th IEEE International Conference and Workshop on Engineering of Computer-Based Systems, pages 225, Washington, DC, USA, 2004. IEEE Computer Society.
[SCA+07] Thomas Syldatke, Willy Chen, Jürgen Angele, Andreas Nierlich, and Mike Ullrich. How Ontologies and Rules Help to Advance Automobile Development. Technical report, Audi AG Ingolstadt, Achievo Inproware GmbH Ingolstadt, ontoprise GmbH Karlsruhe, 2007.
[Sow91] John F. Sowa. Principles of Semantic Networks. Morgan Kaufmann, 1991.
[Sow00] John F. Sowa. Knowledge Representation: Logical, Philosophical, and Computational Foundations. Brooks Cole Publishing, Pacific Grove, CA, USA, 2000.
[Sow06] John F. Sowa. The Challenge of Knowledge Soup. In Research Trends in Science, Technology and Mathematics Education, 2006.
[SVEH07] T. Stahl, M. Völter, S. Efftinge, and A. Haase. Modellgetriebene Softwareentwicklung: Techniken, Engineering, Management. dpunkt.verlag, Heidelberg, 2007.

Requirements, Traceability and DSLs in Eclipse with the Requirements Interchange Format (RIF/ReqIF)

Andreas Graf, Michael Jastram

Abstract: Requirements engineering (RE) is a crucial aspect in systems development and an area of ongoing research and process improvement. However, unlike in modelling, there has been no established standard that activities could converge on. In recent years, the emerging Requirements Interchange Format (RIF/ReqIF) gained more and more visibility in industry, and research projects are starting to investigate this standard. To avoid redundant efforts in implementing the standard, the VERDE and Deploy projects cooperate to provide a stable common basis for RIF/ReqIF that can be leveraged by other research projects as well. In this paper, we present an Eclipse-based, extensible implementation of a RIF/ReqIF-based requirements editing platform. In addition, we are concerned with two related aspects of RE that take advantage of the common platform. First, how can the quality of requirements be improved by replacing or complementing natural language requirements with formal approaches such as domain-specific languages or models? Second, how can we establish robust traceability that links requirements, model constructs and other artefacts of the development process? We present two approaches to traceability and two approaches to modelling. We believe that our research represents a significant contribution to the existing tooling landscape, as it is the first clean-room implementation of the RIF/ReqIF standard. We believe that it will help reduce gaps in often heterogeneous tool chains and inspire new conceptual work and new tools.

1 Introduction

Requirements engineering (RE) is an important aspect of the systems engineering process [Poh07]. In this paper, we explore two aspects of the role of RE in systems engineering. First, we present a framework that allows better integration on the tool level by using the emerging RIF/ReqIF standard [OMG]. Second, we explore integration on the process level by looking at traceability to more formal system representations.

We describe a tool platform for managing natural language requirements that is suited for integration in system development environments. Our aim is to make integration in existing environments easy, to increase the confidence that the quality of the requirements is adequate, and to ensure that their relationship to other system elements is understood. We achieve this by providing a robust, extensible software platform for managing requirements. The platform is built on the emerging RIF/ReqIF standard, which facilitates integration with tool chains used in industry.

We demonstrate the extensibility of the platform by presenting two approaches to traceability and two approaches to modelling, implemented on top of the tooling platform.

The capturing of requirements that conform to a set of quality criteria is regarded as essential for good requirements [Poh07]. A subset of these criteria can be addressed by extending or replacing natural language with more formal notations [GJGZ00]. Formal notations come in a variety of flavours, including mathematical notations (Section 4.1) and domain-specific languages (Section 4.3).

Regarding the significance of RIF/ReqIF and our first clean-room implementation of the standard, we draw a comparison to model-driven software development: after the specification of UML, a lot of publications and work concentrated on that standard, paving the way for low-cost and open-source tools. We believe that our open-source clean-room implementation of the standard based on Eclipse can serve as the basis for both innovative conceptual work and new tools.

We introduce two European research projects that address these challenges with tooling based on Eclipse and the requirements standard RIF/ReqIF.

1.1 The Verde and Deploy Research Projects

The research project Verde has the aim of providing a universal tool platform for the verification- and validation-oriented development of embedded systems. The focus is on the integration of the tools which are already in use at the industrial partners. Verde develops new tools and methods in the areas where there are gaps in existing tool chains and procedures.

Deploy is a European Commission Information and Communication Technologies FP7 project. Its goal is to make major advances in engineering methods for dependable systems through the deployment of formal engineering methods. Deploy uses the Event-B [Abr10] formal method as a basis (Section 4.1), for which tool support in the form of the Rodin platform [ABHV06] exists.

1.2 Related Work

We use natural language requirements (NLRs) in our work. While requirements can be stored in forms other than natural language, it is the most natural way for the customer to express their perception of the model [Ger97]. NLRs can also be processed (semi-)automatically [AG06], which we won't pursue in this paper.

The issue of traceability has been analysed in depth by Gotel et al. [GF94]. Our research falls into the area of post-requirements specification traceability.

Abrial [Abr06] recognizes the problem of the transition from informal user requirements

to a formal specification. He suggests constructing a formal model for the user requirements, but acknowledges that such a model would still require informal requirements to get started. He covers this approach in [Abr10].

For traceability between natural language requirements and formal Event-B models, ProR currently supports the WRSPM reference model [GJGZ00]. This model was attractive because it deliberately left enough room to be tailored to specific needs, as opposed to more concrete methods like Problem Frames [Jac01] or KAOS [DDMvL97]. It is also more formal and complete than the functional-documentation model [PM95], another well-known approach.

Another approach to requirements used in industry is to graft requirements on top of another meta-model. An example is SysML, where requirements are represented with the stereotype Requirement. However, this has the drawback that not all meta-models support extensions [CM10]. In addition, traceability is limited to artefacts that can be represented within the model.

We are aware of a number of open-source system development environments that support natural language requirements. These include Topcased [FGC+06] and UniCase [BCHK07]. Also, SysML [Wei07] introduces a requirement diagram. However, the data structures for requirements used by these tools and standards lack the richness of the RIF data model. In fact, only commercial tools seem to offer data models that come close to the richness of the RIF data model. These include IBM Rational DOORS and Visure IRQA.

Existing traceability solutions can be found as an integral component of a tool or as external add-ons. Topcased and UniCase both support the former approach, which is also used by some commercial tools. Some commercial traceability tools assume a heterogeneous tool chain. They attempt to map the meta-models of the tools by providing tool- or model-specific adapters (e.g. Reqtify).

1.3 Structure of this Paper

In Section 2, we provide an overview of the RIF/ReqIF Requirements Interchange Format. Section 3 describes the Eclipse-based tool and application framework that we created. Section 4 describes the integration of other elements of the development process using domain-specific languages (DSLs), formal Event-B models and traceability constructs. It shows how the tool platform facilitates the integration. Section 5 concludes and provides an outlook on our future plans.

2 The RIF/ReqIF Requirements Interchange Format

RIF/ReqIF [OMG] is an emerging standard for requirements exchange, driven by the German automotive industry. It consists of a data model and an XML-based format for persistence.

2.1 History of the RIF/ReqIF Standard

RIF was created in 2004 by the Herstellerinitiative Software, a body of the German automotive industry that oversees vendor-independent collaboration. Within a few years, it evolved to the current version 1.2. The format gained traction in the industry, and a number of commercial tools support it. For instance, Rational DOORS claims to support it, and we are aware of the commercial synchronization tools Atego Exerpt and enso ReqIF Server. We are aware of at least one German car manufacturer where the use of RIF plays a strategic role.

In 2010, the Object Management Group (OMG) took over the standardization process and released the ReqIF 1.0 RFC (Request For Comments). The name was changed to prevent confusion with the Rule Interchange Format, a W3C standard, while the version number was reset to 1.0. Our tool environment is currently based on RIF 1.2; support for ReqIF 1.0 is planned.

2.2 The Content and Structure of a ReqIF Model

In general terms, a ReqIF model contains attributed requirements that are connected with attributed links. The requirements can be arbitrarily grouped into document-like constructs. In the following, we point out a few key model features:

- A SpecObject represents a requirement. A SpecObject has a number of AttributeValues, which hold the actual content of the SpecObject.
- SpecObjects are organized in Specifications, which are hierarchical structures holding SpecHierarchy elements. Each SpecHierarchy refers to exactly one SpecObject. This way, the same SpecObject can be referenced from various SpecHierarchies.
- ReqIF contains a sophisticated data model for Datatypes, support for permission management, facilities for grouping data and hooks for tool extensions.

The details can be found in the ReqIF specification [OMG].
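To make these containment relationships concrete, the following sketch reduces them to plain Java classes. It is a deliberately simplified, hypothetical rendering for illustration only, not the RIF/ReqIF EMF API of the tool platform; all class and field names below are assumptions.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical, heavily simplified rendering of the ReqIF core concepts.
class SpecObject {
    // AttributeValues hold the actual content of the SpecObject (e.g. "ID", "Description").
    final Map<String, String> attributeValues = new LinkedHashMap<>();
}

class SpecHierarchy {
    SpecObject object;                                        // refers to exactly one SpecObject
    final List<SpecHierarchy> children = new ArrayList<>();  // hierarchical structure
}

class Specification {
    // A Specification is a document-like, hierarchical grouping of SpecHierarchy elements.
    final List<SpecHierarchy> roots = new ArrayList<>();
}

public class ReqIfSketch {
    public static void main(String[] args) {
        SpecObject req = new SpecObject();
        req.attributeValues.put("ID", "REQ-1");
        req.attributeValues.put("Description", "The cruise control shall maintain the set speed.");

        SpecHierarchy node = new SpecHierarchy();
        node.object = req;
        Specification spec = new Specification();
        spec.roots.add(node);

        // The same SpecObject may be referenced from several Specifications.
        Specification secondView = new Specification();
        SpecHierarchy alias = new SpecHierarchy();
        alias.object = req;
        secondView.roots.add(alias);

        System.out.println(spec.roots.get(0).object.attributeValues.get("ID"));  // prints REQ-1
    }
}
```

In the actual standard, AttributeValues are governed by the Datatypes mentioned above rather than plain strings; the sketch only illustrates how SpecHierarchies let one SpecObject appear in several Specifications.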

3 Tool Implementation

The Verde project produced an EMF-based (Eclipse Modeling Framework) implementation of the RIF 1.2 data model, which represents the foundation for both the Verde and the Deploy projects. The two projects provide their own GUIs, adapted to the corresponding approach. Due to the reliance on EMF, both tool solutions can be installed into any Eclipse-based system for deployment. The Verde project will publish its tool as part of the Yakindu project, which is already available as an open-source project.

3.1 ProR Requirements Engineering Platform

The tool from the Deploy project is called ProR and is available for download. It is a full-featured editor for RIF documents. The GUI of ProR is shown in Figure 1. The left pane shows projects that can be expanded to show their content. The view in the middle shows the RIF Specifications, which can be customized to show only selected Attributes in a table view. For the selected SpecObject (the selected row), all Attributes are shown in the Properties View at the bottom. Values can be edited in the Properties View or directly in the main view. On the right side is an outline view that allows an alternative navigation of the model.

Figure 1 also shows what an integration with other tools may look like. Here we see the Rodin integration: variables from the formal model are recognized and rendered in colour in the main view (see also Section 4.1).

3.2 ProR in the Field

We are currently evaluating ProR in the Deploy project, where we model a Cruise Control system (for cars). The system is described by 46 requirements and modelled in Event-B in three refinement levels. As this project is not yet concluded, we cannot report traceability statistics yet. Further field studies are planned.
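Because both tools rely on the generic EMF infrastructure mentioned at the start of this section, a RIF model that they produce can in principle be read with standard EMF resource handling. The following sketch assumes a plain XMI serialization, a hypothetical file name and extension, and omits the registration of the generated RIF package (whose name is not given here); it illustrates the EMF mechanism, not the actual loading code of ProR or the Verde tooling.

```java
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class LoadRifModelSketch {
    public static void main(String[] args) {
        ResourceSet resourceSet = new ResourceSetImpl();
        // Assumption: the model is serialized as XMI and uses the file extension "rif".
        resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
                .put("rif", new XMIResourceFactoryImpl());
        // In a real setup the generated RIF/ReqIF EPackage must also be registered
        // (omitted here because its name is tool-specific).
        Resource resource = resourceSet.getResource(
                URI.createFileURI("requirements.rif"), true);
        for (EObject root : resource.getContents()) {
            System.out.println("Loaded root element: " + root.eClass().getName());
        }
    }
}
```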

Figure 1: The ProR GUI (here shown running inside the Rodin Platform)

4 Traceability and Integration with Models

In this section, we show the integration of requirements with two modelling approaches, Event-B (Section 4.1) and DSLs (Section 4.3). We also show two approaches to traceability, using RIF SpecRelations (Section 4.1) and external Tracepoints (Section 4.2).

Traditionally, requirements are expressed in natural language. However, natural language suffers from the disadvantage of not being precise enough and impedes (automated) analysis. Accordingly, requirements engineering recommends finding clearer wording by means of formal notations [CM99]. The engineer can directly write down the requirements in a textual formal language or complement them with other models. To assure that the requirements are fully implemented in the system, it is necessary to trace them during the whole development process [GF94]. The effect of a requested change in any of the artefacts can be assessed by following the traces up and down the process.

4.1 Traceability to Event-B Models via WRSPM

In the Deploy project, we demonstrate our ideas using Event-B, a formalism and method for discrete systems modelling. Event-B is a state-based modelling method. The choice of Event-B over similar methods [BS03, Jon90] is mostly motivated by the built-in formal refinement support and the availability of the Rodin tool [ABHV06] for experimentation with our approach.

Event-B models are characterized by proof obligations. Proof obligations serve to verify properties of the model. To a large degree, such properties originate in requirements that the model is intended to realize. Eventually, we expect that by verifying the formal model we have also established that the requirements to which these properties correspond are satisfied.

We developed an approach to traceability between natural language requirements and Event-B models based on WRSPM [JHLR10]. WRSPM is a reference model for applying formal methods to the development of user requirements and their reduction to a behavioural system specification [GJGZ00]. WRSPM distinguishes between artefacts and phenomena. Phenomena describe the state space (and state transitions) of the domain and system, while artefacts represent constraints on the state space and the state transitions. The artefacts are broadly classified into groups that pertain mostly to the system versus those that pertain mostly to the environment.

Our goal is to establish requirements traceability from natural language requirements to an Event-B formal model, using WRSPM to provide structure to both the requirements and the model. To achieve this, we introduce a realize relationship that we refine into different types of traces, as well as criteria to verify that the traces have been established correctly. This allows us to justify that the formal model realizes the requirements.

We created tool support by using the SpecRelations construct in RIF. ProR runs as a feature directly inside Rodin. The integration code is contained in its own plugin, which keeps ProR and Rodin independent. Via this plugin, ProR supports highlighting of formal model elements directly in the requirements text, as can be seen in Figure 1. It also supports the creation of SpecRelations between formal model elements and requirements in ProR. Formal Event-B elements have a corresponding proxy SpecObject in the RIF model that is automatically synchronized with the Event-B model. The integration is currently manual, via drag and drop. We plan semi-automatic creation and maintenance of the SpecRelations in the future.

4.2 Tracepoint Approach to Traceability

The general concept of traceability in VERDE led to the decision to implement traceability that is independent of the types of artefacts involved. Since Eclipse-based models are usually built on the Eclipse meta-model Ecore/EMF [SBPM09], VERDE implements a generic solution for the traceability of EMF-based elements (see Figure 2). The core data structure is a mapping table with three elements: source element, target element and arbitrary additional information. The elements are identified by a data structure, the so-called Tracepoint. The inner structure of a Tracepoint depends on the structure of the meta-model that is being traced, but is hidden from the traceability infrastructure. For instance, a UML model element may be identified by a unique ID; a C source file, by contrast, could be identified by file path and name.
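As a minimal illustration of this mapping table, the following sketch uses hypothetical class and field names (the VERDE implementation itself is not detailed here): a Tracepoint wraps a meta-model-specific identifier that stays opaque to the infrastructure, and each table entry records source, target and arbitrary additional information.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the Tracepoint idea; all names are illustrative only.
final class Tracepoint {
    // The inner structure depends on the traced meta-model (e.g. a UML element ID
    // or a file path) and is opaque to the traceability infrastructure.
    private final String kind;
    private final String identifier;

    Tracepoint(String kind, String identifier) {
        this.kind = kind;
        this.identifier = identifier;
    }

    @Override
    public String toString() {
        return kind + ":" + identifier;
    }
}

final class TraceLink {
    final Tracepoint source;
    final Tracepoint target;
    final String info;   // arbitrary additional information, e.g. the link type

    TraceLink(Tracepoint source, Tracepoint target, String info) {
        this.source = source;
        this.target = target;
        this.info = info;
    }
}

public class TraceTableSketch {
    public static void main(String[] args) {
        List<TraceLink> traceTable = new ArrayList<>();
        // A requirement (identified by its ID) is traced to a UML element and to a C file.
        Tracepoint requirement = new Tracepoint("reqif", "REQ-1");
        Tracepoint umlElement  = new Tracepoint("uml", "_a7Jf3Qk2EeC");        // made-up unique ID
        Tracepoint sourceFile  = new Tracepoint("c", "src/cruise_control.c");  // made-up path

        traceTable.add(new TraceLink(requirement, umlElement, "realizes"));
        traceTable.add(new TraceLink(requirement, sourceFile, "implemented-by"));

        for (TraceLink link : traceTable) {
            System.out.println(link.source + " -> " + link.target + " (" + link.info + ")");
        }
    }
}
```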

Figure 2: Tracepoint traceability

The basic implementation supplies an assignment window that integrates into existing editors (such as the UML editor or a C IDE). This allows the user to navigate and edit the traceability relationship. While we just demonstrated traceability downstream (i.e. towards the implementation), it could also be used upstream (i.e. towards goals). For instance, project management could provide traceability to milestones that are part of the project plan.

4.3 Integration of Domain-Specific Languages

The possibility to specify requirements with textual domain-specific languages (DSLs) and to trace these to development artefacts is one of the foundations of the VERDE project. A textual DSL is a machine-processable language that is designed to express concepts of a specific domain. The concepts and notations used correspond to the way of thinking of the stakeholders concerned with these aspects, while still forming a formal notation.

In the Verde requirements editor, the open-source tool Xtext [EV06] is used. The editor for the DSLs integrates itself directly into the requirements tool. The introduction of Xtext allows any project to define its own grammar and modelling. Project partners can design and evaluate new formal notations for the specification of requirements, in particular non-functional requirements such as memory usage and timing.
