16. Workshop Software-Reengineering und -Evolution

16. Workshop Software-Reengineering und -Evolution of the GI Special Interest Group Software Reengineering (SRE)
6. Workshop Design for Future of the GI Working Group Langlebige Softwaresysteme (L2S2)
Bad Honnef, April 2014

16th Workshop Software-Reengineering und -Evolution (WSRE) of the GI Special Interest Group Software Reengineering (SRE) and 6th Workshop Design for Future 2014 of the GI Working Group Langlebige Softwaresysteme (L2S2)

The Workshops Software Reengineering (WSR) at the Physikzentrum Bad Honnef were initiated with the first WSR in 1999 by Jürgen Ebert and Franz Lehner in order to establish a German-language discussion forum alongside the successful international conferences in the reengineering field (such as WCRE and CSMR). This year, we have for the first time explicitly added the topic of software evolution to the title, to address a broader audience and draw attention to the workshop; accordingly, the new acronym is WSRE.

The goal of the meetings remains, as before, to get to know each other and thereby create a direct basis for cooperation, so that the field experiences further consolidation and development. Through the active and growing participation of many researchers and practitioners, the WSRE has established itself as the central reengineering conference in the German-speaking area. It continues to be run as a low-cost workshop without a budget of its own. Please help to keep the WSRE successful by pointing interested colleagues and acquaintances to it.

Building on the successful WSR meetings of the first years, the GI Special Interest Group Software Reengineering, which maintains its own web presence, was founded in 2004. Besides the WSR(E), the group has since organized related events on special topics. Since 2010, the Working Group Langlebige Softwaresysteme (L2S2), with its Design for Future (DFF) workshops, has likewise been attached to the Reengineering group owing to the topical proximity. Since then, a joint workshop of WSR and DFF has taken place every two years, as it does this year. This combination is intended to foster the exchange between the two groups. While DFF focuses on maintainable architectures, the WSRE continues to address the general topic of reengineering in all its facets.

May 2014, Physikzentrum Bad Honnef

The WSRE remains the central conference series of the Special Interest Group Software Reengineering. It offers a variety of current reengineering topics, covering scientific and practical information needs alike. This year again features contributions on a broad spectrum of reengineering topics. The organizers thank all contributors for their commitment, in particular the speakers and authors. Our thanks also go to the staff of the Physikzentrum Bad Honnef, who, as always, knew how to create a pleasant and trouble-free environment for the workshop.

For the FG SRE: Volker Riediger, Universität Koblenz-Landau; Jochen Quante, Robert Bosch GmbH, Stuttgart; Jens Borchers, Steria Mummert, Hamburg; Jan Jelschen, Universität Oldenburg

For the AK L2S2: Stefan Sauer, s-lab, Universität Paderborn; Benjamin Klatt, FZI Karlsruhe; Thomas P. Ruhroth, TU Dortmund

Quality in Real Time with Teamscale
Nils Göde, Lars Heinemann, Benjamin Hummel, Daniela Steidl
CQSE GmbH, Lichtenbergstr. 8, Garching bei München
{goede, heinemann, hummel,

Abstract
Existing tools for static quality analysis operate in batch mode. Each run of the analysis takes a certain amount of time, so developers are often already occupied with other topics by the time the results become available. Moreover, the separate runs make it impossible to distinguish between old and new quality deficits, a prerequisite for quality improvement in practice. In this article we present the tool Teamscale, which reliably tracks quality deficits throughout the evolution of the analyzed system. Thanks to its incremental operation, analysis results are available a few seconds after a commit, so quality can be monitored and controlled in real time.

1 Introduction
A variety of static analysis tools already exists for uncovering quality deficits in software systems, among them ConQAT [5], SonarQube [8], and the Bauhaus Suite [4]. Although these tools offer a comprehensive range of analyses, they share two problems. First, the tools are executed in batch mode: in consecutive runs, the complete system is re-analyzed even if only a few parts have changed. In the time that passes between starting the analysis and the availability of the results, developers typically move on to other topics. As a consequence, the detected quality deficits often receive no immediate attention in practice, and the probability that such a deficit is fixed at a later point is very low. The second major problem is that running the analyses separately for different revisions does not allow quality deficits to be tracked reliably over time. One can try to map the results of different runs onto each other afterwards, but in most cases this yields inaccurate or incomplete results due to missing information (e.g. renamed files); the mapping also costs additional time. For continuous quality improvement it is necessary that developers are informed about new problems without noticeable delay after making their changes. It is equally important that a quality deficit is only marked as new if this is actually the case and it is not a long-standing legacy problem. Both the timing problem and the tracking of quality deficits are solved by the tool Teamscale [6, 7]. (The work underlying this article was funded by the German Federal Ministry of Education and Research under grant EvoCon, 01IS12034A. The responsibility for the content of this publication lies with the authors.)

2 Teamscale
Teamscale is an incremental analysis tool that supports continuous quality monitoring and improvement. Teamscale analyzes a commit within seconds and thus informs in real time about how the current changes have affected the quality, in particular the maintainability, of the source code. Teamscale stores the complete quality history of the system, which makes it possible to find out with minimal effort when and how quality deficits were introduced. Through the use of various heuristics, deficits are tracked even when they move between methods or files [10]. The reliable distinction between new and old quality deficits makes it possible to concentrate on recently introduced deficits. A web interface and IDE plug-ins make this information available to developers and other stakeholders.

Architecture. Teamscale is a client-server application; the architecture is sketched in Figure 1. Teamscale connects directly to the version control system (Subversion, Git, or Microsoft TFS) or to the file system. Teamscale supports, among others, Java, C#, C/C++, JavaScript, and ABAP, as well as projects combining several of these languages.

Figure 1: Teamscale architecture

In addition, information about bugs and change requests can be integrated; among others, Jira, Redmine, and Bugzilla are supported. The incremental analysis engine builds on the tool ConQAT [5] and is triggered for every commit in the connected version control system. Only the files affected by the changes are re-analyzed, so the results are available within seconds. Because every single commit is analyzed, quality deficits can be tracked precisely along the changes to the code. The results are stored together with the history of each file in a NoSQL database, such as Apache Cassandra [11], and are made available to clients through a REST interface.

Clients. Teamscale includes a JavaScript-based web client that offers various views of the evolution of the system and its quality, in order to serve different roles (e.g. developer or project manager). The views comprise an overview of the commits and their effects on quality, an overview of the code (in whole or in part), a delta view for comparison with an earlier state of the code, and a freely configurable dashboard. In addition, Teamscale provides plug-ins for Eclipse and Visual Studio. Quality deficits are annotated on the code within the IDE, so developers can inspect and fix new quality deficits directly in their development environment.

3 Analyses
Teamscale implements a variety of well-known quality analyses. The central analyses it supports are listed below.
Structural metrics. Teamscale collects central structural metrics such as the length of files, the length of methods, and the nesting depth of the code.
Clone detection. Teamscale performs an incremental clone detection to find redundant code sections created by copy and paste.
Code anomalies. Teamscale examines the system for code anomalies, such as violations of coding guidelines and common bug patterns. External tools such as FindBugs [1], PMD [2], and StyleCop [3] can also be integrated and their results imported.
Architecture conformance. If a suitable architecture specification is available that describes the components of the system and their dependencies, Teamscale can compare it with the implementation to uncover deviations.
Commenting. Teamscale includes a comprehensive analysis of the comments in the code [9]. Among other things, it checks whether certain rules are followed (e.g. is every class commented) and whether comments are trivial or inconsistent.
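To illustrate the incremental scheme, the following minimal sketch shows how a per-commit analysis that re-analyzes only changed files might be organized. This is not Teamscale's actual implementation; all names (IncrementalAnalyzer, Commit, and so on) are hypothetical.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a per-commit incremental analysis loop: only the
// files touched by a commit are re-analyzed; findings for all other files
// are carried over from the previous revision.
class IncrementalAnalyzer {

    // Findings per file path, carried forward between revisions.
    private final Map<String, List<String>> findingsByFile = new HashMap<>();

    // Called once for every commit delivered by the VCS connector.
    void onCommit(Commit commit) {
        for (String path : commit.changedFiles()) {
            // Re-analyze only the changed file; all other entries stay valid.
            findingsByFile.put(path, analyzeFile(commit.contentOf(path)));
        }
        for (String path : commit.deletedFiles()) {
            findingsByFile.remove(path);
        }
        // Persist the snapshot under the commit id so that the full
        // quality history remains queryable later.
        store(commit.id(), findingsByFile);
    }

    private List<String> analyzeFile(String content) {
        return List.of(); // placeholder for the actual checks
    }

    private void store(String commitId, Map<String, List<String>> snapshot) {
        // placeholder for writing to the findings store
    }
}

interface Commit {
    String id();
    List<String> changedFiles();
    List<String> deletedFiles();
    String contentOf(String path);
}
```

Because only the touched entries are recomputed, the cost of a run scales with the size of the change rather than with the size of the system, which is what makes results within seconds of a commit plausible.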
4 Evaluation
A first evaluation of Teamscale was conducted as a survey among professional developers at one of our industrial evaluation partners [7]. According to the developers, Teamscale gives them a good overview of the quality of their code and makes it easy to separate current problems from legacy problems. A more comprehensive evaluation is still pending.

5 Summary
Teamscale solves two major problems of existing analysis tools. Thanks to its incremental approach, analysis results are available in real time, so developers are always informed about the effects of their current changes and can remove newly introduced quality deficits right away. In addition, Teamscale offers complete tracking of quality deficits through the history, so old and new quality deficits can be distinguished reliably and the cause of problems can be determined with minimal effort.

References
[1] FindBugs.
[2] PMD.
[3] StyleCop.
[4] Axivion GmbH. Bauhaus Suite.
[5] CQSE GmbH. ConQAT.
[6] CQSE GmbH. Teamscale.
[7] L. Heinemann, B. Hummel, and D. Steidl. Teamscale: Software quality control in real-time. In Proceedings of the 36th International Conference on Software Engineering, 2014. Accepted for publication.
[8] SonarSource S.A. SonarQube.
[9] D. Steidl, B. Hummel, and E. Juergens. Quality analysis of source code comments. In Proceedings of the 21st IEEE International Conference on Program Comprehension, 2013.
[10] D. Steidl, B. Hummel, and E. Juergens. Incremental origin analysis of source code files. In Proceedings of the 11th Working Conference on Mining Software Repositories, 2014. Accepted for publication.
[11] The Apache Software Foundation. Apache Cassandra.

Assessing Third-Party Library Usage in Practice
Veronika Bauer, Technische Universität München
Florian Deissenboeck, Lars Heinemann, CQSE GmbH
{deissenboeck,

Abstract
Modern software systems build on a significant number of external libraries to deliver feature-rich and high-quality software in a cost-efficient and timely manner. As a consequence, these systems contain a considerable amount of third-party code. External libraries thus have a significant impact on maintenance activities in the project. However, most approaches that assess the maintainability of software systems largely neglect this factor. Hence, risks may remain unidentified, threatening the ability to effectively evolve the system in the future. We propose a structured approach to assess the third-party library usage in software projects and identify potential problems. Industrial experience strongly influences our approach, which we designed in a lightweight way to enable easy adoption in practice.

1 Introduction
A plethora of external software libraries form a significant part of modern software systems [3]. Consequently, external libraries and their usage have a significant impact on the maintenance of the including software. Unfortunately, third-party libraries are often neglected in quality assessments of software, leading to unidentified risks for the future evolution of the software. Based on industry needs, we propose a structured approach for the systematic assessment of third-party library usage in software projects. The approach is supported by a comprehensive assessment model relating key characteristics of software library usage to development activities. The model defines how different aspects of library usage influence the activities and thus allows one to assess if and to what extent the usage of third-party libraries impacts the development activities of a given project. Furthermore, we provide guidance for executing the assessment in practice, including tool support.

2 Assessment model
The proposed assessment model is inspired by activity-based quality models [2]. The model contains entities, the objects we observe in the real world, and attributes, the properties that an entity possesses [4]. Entities are structured in a hierarchical manner to foster completeness. The combination of one or more entities and an attribute is called a fact. Facts are expressed as [Entities ATTRIBUTE]. To express the impact of a fact, the model relates the fact to a development activity. This relation can either be positive, i.e. the fact eases the affected activity, or negative, i.e. the fact impedes the activity. Impacts are expressed as [Entity ATTRIBUTE] →+/− [Activity]. Each impact is backed by a justification, which provides the rationale for its inclusion in the model. We quantify facts with the three-value ordinal scale {low, medium, high}. To assess the impact on the activities, we use the three-value scale {bad, satisfactory, good}. If the impact is positive, there is a straightforward mapping: low → bad, medium → satisfactory, high → good. If, for example, the fact [Library PREVALENCE] is rated high, the effect on the activity migrate is good, as the impact relation [Library PREVALENCE] →+ [Migrate] is positive: a high prevalence of a library usually gives rise to alternative implementations of the required functionality. If the impact relation is negative, the mapping is turned around: low → good, medium → satisfactory, high → bad.
The assessment of a single library thus results in a mapping between the activities and the {bad, satisfactory, good} scale. To aggregate the results, we count the occurrences of each value at the leaf activities. Hence, the assessment of a library finally results in a mapping from {bad, satisfactory, good} → ℕ₀. We do not aggregate the results into a single number.

Activities. With one exception, the activities included in the model are typical for maintenance. We consider Modify, Understand, Migrate, Protect, and Distribute.

Metrics. The model quantifies each fact with one or more metrics¹. As an example, to quantify the extent of vulnerabilities of a library, we measure the number of known critical issues in the bug database of the library. Some of the facts cannot be measured directly, as they depend on many aspects. For instance, the maturity of a library cannot be captured with a single metric but must be judged according to several criteria by an expert. We do not employ an automatic, e.g. threshold-based, mapping from metric values to the {low, medium, high} scale but rely fully on the experts' capabilities.

¹ The list of metrics, their description and assignment to facts are detailed in [1].
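As a worked illustration of this mapping and aggregation, consider the following sketch; the type and method names are ours, not part of the published model.

```java
import java.util.EnumMap;
import java.util.Map;

// Sketch of the fact-to-activity rating and its aggregation.
enum FactRating { LOW, MEDIUM, HIGH }

enum ActivityRating { BAD, SATISFACTORY, GOOD }

class Assessment {

    // Positive impacts map low->bad, medium->satisfactory, high->good;
    // negative impacts invert the mapping.
    static ActivityRating rate(FactRating fact, boolean positiveImpact) {
        switch (fact) {
            case LOW:    return positiveImpact ? ActivityRating.BAD : ActivityRating.GOOD;
            case MEDIUM: return ActivityRating.SATISFACTORY;
            default:     return positiveImpact ? ActivityRating.GOOD : ActivityRating.BAD;
        }
    }

    // Aggregation as described: count the occurrences of each rating at the
    // leaf activities instead of collapsing them into a single number.
    static Map<ActivityRating, Integer> aggregate(Iterable<ActivityRating> leafRatings) {
        Map<ActivityRating, Integer> counts = new EnumMap<>(ActivityRating.class);
        for (ActivityRating r : leafRatings) {
            counts.merge(r, 1, Integer::sum);
        }
        return counts;
    }
}
```

For instance, rate(FactRating.HIGH, true) yields GOOD, matching the [Library PREVALENCE] example above.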

Table II: Example for assessment aggregation. For each activity (Modify, Understand, Migrate, Protect, Distribute) and overall, the assessment counts the number of good (G), satisfactory (S), and bad (B) impacts.

Figure 1: Example of the assessment overview of a library. The library's characteristics sufficiently support the activity modify; however, it incurs more risks than benefits for the activities migrate and distribute.

3 Tool support
The assessment model includes five metrics that can be automatically determined by static code analyses. Tool support for the assessment is implemented in Java on top of the open source software quality assessment toolkit ConQAT, a modular toolkit for creating quality dashboards which integrate the results of multiple quality analyses. The current implementation is targeted at analyzing the library usage of Java systems, but it could be adapted to other programming languages that have a library reuse concept and for which a parser API in Java is available. The complete tool support is available as a ConQAT extension and can be downloaded as a self-contained bundle including ConQAT. The analysis requires the source and byte code of the project as well as the included libraries as input; the output is a set of HTML and data files showing the metric values in a tabular fashion. The analysis traverses, for each library, the abstract syntax tree (AST) of each class and determines all method calls to external libraries. For each library, it determines the following five metrics (see also Table I): the number of API method calls, the number of distinct API method calls, the percentage of affected classes, the scatteredness of the API calls, and the percentage of API utilization. The number of total and distinct API method calls as well as the percentage of affected classes are aggregated during the AST traversal. The scatteredness metric requires more computation: it expresses the degree of distribution of API calls over the system structure. API calls within one package are considered local. We would expect local calls for libraries with specific functionality, e.g. networking or image rendering; these would be expected to be concentrated in small parts of the system. Contrarily, libraries providing cross-cutting functionality such as logging would be expected to be called from a large portion of the system, therefore exhibiting a high scatteredness value. We compute scatteredness as the sum of the distances between all pairs of nodes in the package tree with calls to a specific API, where the distance of two nodes in the package tree is given by the sum of the distance from each node to their least common ancestor. It is important to note that, since the scatteredness metric depends on the system structure (i.e. the depth of the package tree), its values cannot be compared in a meaningful way across different software systems. The percentage of API utilization is computed as the fraction between the number of distinct API methods called and the total number of API methods in the library.

4 Assessment process
Our assessment process provides guidance to operationalize the model for assessing library usage in a specific software project. When assessing a real-world project, the sheer number of libraries requires a way to address the most relevant libraries first. Therefore, the first step of the process structures and ranks the libraries according to their entangledness with the system: the number of total method calls to a library allows ranking all external libraries according to the strength of their direct relations to the system; the number of distinct method calls adds information about the implicit entangledness of libraries and system; the scatteredness of method calls describes whether the usage of the library is concentrated in a specific part of the system or scattered across it; and the percentage of affected classes gives a complementary image of the impact a migration could have on the system. This pre-selection directs the effort of the second step of our process: the expert assessment of the affected libraries. The automated analyses provide the information that can be extracted from the source code; the expert then evaluates the remaining metrics, for which he or she requires detailed knowledge about the project and its domain and needs to research detailed information about the libraries. The last step collects the results in an assessment report: subsequent to the assessment, a report can be generated from our model, containing the detailed information for each library in textual and tabular form (see Figure 1).

5 Case study
To show the applicability of our approach, we performed a case study on a real-world software system of azh Abrechnungs- und IT-Dienstleistungszentrum für Heilberufe GmbH, a customer of CQSE GmbH. The analyzed system is a distributed billing application with a distinct data entry component running on a J2EE application server, which is accessed from around 350 fat clients (based on Java Swing). The system's source code comprises about 3.5 MLOC, and the system's files include 87 Java Archive files (JARs). We executed our assessment approach on the study object and recorded our observations during the process. We presented our results to the stakeholders of the company and qualitatively captured their feedback using the following guiding questions: Does the report contain the central libraries? Does the assessment conform to the stakeholders' intuition? Are important aspects missing in the assessment? Were parts of the assessment result surprising? The pre-selection step revealed that the system's source code directly calls methods from only some of the 87 JAR files included by the project, and that the extent of entangledness between the system and these libraries differs significantly, as illustrated in Figure 3(a): for some libraries, only one method is called, while for others there are several thousand method calls, indicating their difference in importance for the project. Also, the degree of scatteredness varies significantly, as shown in Figure 3(b). Note that the long tail of libraries with only one method call or a scatteredness of 1 or 0 is represented by blanks in Figures 3(a) and 3(b), as they are not visible due to the log scale. The results indicate that our approach gives a comprehensive overview of the external library usage of the analyzed system: it outlines which maintenance activities are supported, and to which degree, by the employed libraries. Furthermore, the semi-automated pre-selection allowed for a significant reduction of the time required by the expert assessment.

Remarks
This paper presents a condensed version of previous work [1], published at the International Conference on Software Maintenance.

References
[1] V. Bauer, L. Heinemann, and F. Deissenboeck. A Structured Approach to Assess Third-Party Library Usage. In ICSM '12, 2012.
[2] F. Deissenboeck, S. Wagner, M. Pizka, S. Teuchert, and J. Girard. An activity-based quality model for maintainability. In ICSM '07, 2007.
[3] L. Heinemann, F. Deissenboeck, M. Gleirscher, B. Hummel, and M. Irlbeck. On the Extent and Nature of Software Reuse in Open Source Java Projects. In ICSR '11, 2011.
[4] B. Kitchenham, S. Pfleeger, and N. Fenton. Towards a framework for software measurement validation. IEEE Transactions on Software Engineering, 21(12), 1995.
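The scatteredness computation described in Section 3 can be made concrete. Assuming packages are identified by dotted names and distances are taken to the least common ancestor in the package tree, as described, a direct quadratic implementation might look as follows; the class and method names are ours.

```java
import java.util.List;

// Sketch of the scatteredness metric: packages are identified by dotted
// names, and the distance between two packages is the sum of their
// distances to the least common ancestor in the package tree.
class Scatteredness {

    static int compute(List<String> packagesWithApiCalls) {
        int sum = 0;
        for (int i = 0; i < packagesWithApiCalls.size(); i++) {
            for (int j = i + 1; j < packagesWithApiCalls.size(); j++) {
                sum += distance(packagesWithApiCalls.get(i).split("\\."),
                                packagesWithApiCalls.get(j).split("\\."));
            }
        }
        return sum;
    }

    private static int distance(String[] a, String[] b) {
        // Depth of the least common ancestor: length of the shared prefix.
        int lca = 0;
        while (lca < a.length && lca < b.length && a[lca].equals(b[lca])) {
            lca++;
        }
        // Distance of each node to the LCA, summed.
        return (a.length - lca) + (b.length - lca);
    }
}
```

For example, Scatteredness.compute(List.of("com.app.net", "com.app.ui")) yields 2, since both packages sit one level below their common ancestor com.app; calls concentrated in one subtree produce small values, while calls spread over distant subtrees produce large ones.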

Quality Measurement Scenarios in Software Migration
Gaurav Pandey, Jan Jelschen, Dilshodbek Kuryazov, Andreas Winter
Carl von Ossietzky Universität, Oldenburg, Germany

Abstract. Legacy systems are migrated to newer technology to keep them maintainable and to meet new requirements. To aid choosing between migration and redevelopment, a quality prognosis of the migrated software, compared with the legacy system, is required. Moreover, as the driving forces behind a migration effort differ, migration tooling has to be tailored according to project-specific needs, to produce a migration result meeting significant quality criteria. Available metrics may not all be applicable identically to both legacy and migrated systems, e.g. because of paradigm shifts during migration. To this end, this paper identifies three scenarios for utilizing quality measurement in a migration project.

1 Introduction
Migration, i.e. transferring legacy systems into new environments and technologies without changing their functionality, is a key technique of software evolution [4]. It removes the cost and risk of developing a new system from scratch and allows modernization of the system to continue. However, it needs to be found out whether the conversion leads to a change in internal software quality. To decide between software migration and redevelopment, quality measurement and comparison of legacy and migrated systems is required. Moreover, a migration project requires an especially tailored toolchain [3]. To choose the tools to carry out an automatic migration, the quality of the migrated code needs to be assessed against the combination of involved tools. The identification of project-specific quality criteria and corresponding metrics for quality comparison can be achieved with advice from project experts. However, in a language-based migration, e.g. from COBOL to Java, there is a shift from the procedural to the object-oriented paradigm. This can limit the usability of a metric, as its validity and interpretation might not hold on both platforms. For example, metrics calculating object-oriented properties like inheritance or encapsulation can be used on migrated Java code but not on COBOL source code. To overcome this, a strategy regarding the utilization and comparison of metrics in migration is required. To this end, this paper identifies quality measurement scenarios with suitable metrics, enabling quality calculation in different situations. The next two sections explain the Q-MIG project and the measurement scenarios and are followed by a conclusion.

2 Q-MIG Project
The Q-MIG project (Quality-driven software MIGration)¹ is a joint venture of pro et con Innovative Informatikanwendungen GmbH, Chemnitz, and Carl von Ossietzky University's Software Engineering Group. Q-MIG is aimed at advancing a toolchain for automated software migration [2]. To aid in deciding for or against a migration, selecting a migration strategy, and tailoring the toolchain and individual tools, the toolchain is to be complemented with a quality control center measuring, comparing, and predicting internal quality of software systems under migration. The project aims at enabling quality-driven decisions on migration strategies and tooling [6]. To achieve this, the Goal/Question/Metric approach [1] is used. The goal is to measure and compare the quality of the software before and after migration, to enable migration decisions and toolchain selection.
The questions are the quality criteria based on which the quality assessment and comparison need to be carried out. The Q-MIG project considers internal quality attributes, i.e. it focuses on the quality criteria maintainability and transferability in terms of the ISO quality standard [5]. Moreover, expert advice is taken for selecting and identifying criteria relevant for software migrations. For example, maintainability-related metrics are important in a project that needs to keep on evolving, but not when the migrated project is meant to be an interim solution until a redeveloped system can replace it. Then, to measure the quality criteria, metrics need to be identified. However, a metric that is valid for the legacy code might not be valid for the migrated code and vice versa. In order to identify the metrics for quality criteria calculation, the metrics are categorized by the use case in which they can be utilized. To achieve this, scenarios for quality comparison and toolchain component selection are defined in Section 3.

¹ Q-MIG is funded by the Central Innovation Program SME of the German Federal Ministry of Economics and Technology BMWi (KF KM3).

3 Measurement Scenarios
This section presents the quality measurement scenarios that utilize the quality metrics according to the properties measured and their applicability to the legacy and migrated platforms. While the first two scenarios facilitate the quality comparison between the legacy and the migrated systems, the third scenario is particularly useful for selecting components of the migration toolchain. While the Q-MIG project focuses on quality measurement of a COBOL-to-Java migration, the essence of the scenarios presented remains the same for other combinations of platforms.

Same Interpretation and Implementation: This scenario facilitates quality comparison of legacy code (COBOL) and migrated code (Java) to help in project planning. It is achieved by utilizing the quality metrics that are valid and have the same implementations and interpretations on both platforms, hence allowing for direct quality comparison between the systems. For example, Lines of Code, measuring the size of the project, is calculated identically for COBOL and Java (in some cases Lines of Code can be platform-specific, requiring adaptations like Function Point Analysis). Similarly, Number of GOTOs, Comments Percentage, Cyclomatic Complexity (McCabe metric), and Duplicates Percentage can be calculated for both languages in the same fashion.

Same Interpretation, Different Implementation: In this scenario, metrics that have different implementations but the same interpretation in legacy and target code are utilized for quality comparison. COBOL and Java code differ in their constructs and building blocks, so certain metrics can have the same interpretation but different ways of calculation on the two platforms. For example, cohesion is the degree of independence between the building blocks of a system; it can be calculated on the COBOL code considering procedures as building blocks, while in the migrated Java code classes take that role. The two calculations provide comparable metrics, hence enabling quality comparison. Similarly, other metrics that might not have exactly the same implementation for COBOL and Java can be utilized for quality comparison. Some metrics conforming to this scenario are Halstead's metrics (because they use operators and operands, which differ among the languages), Average Complexity per Unit, and Average Unit Size.

Target-Specific Metrics: In this scenario, metrics that are specific to the target platform Java (and may not be applicable to the COBOL legacy code) are utilized for toolchain selection and improvement. For example, the metric Depth of Inheritance can be calculated for Java but not for COBOL (procedural languages have no inheritance). Also, the value of the metric can change when components of the migration toolchain are exchanged or additional reengineering steps are added. This allows using these metrics to choose a suitable toolchain by analyzing how the quality of the migrated software changes with respect to the chosen components. In a one-to-one migration from COBOL to Java that introduces no restructuring, however, the Depth of Inheritance value would not change with respect to the migration tools, because such a migration will not introduce inheritance in the target code. The source code can nevertheless be refactored before migration, and an analysis of the metrics against the combination of refactoring tools allows the selection of the components of the refactoring toolchain. This scenario allows the metrics relevant to the Q-MIG project and applicable to Java code to be utilized for selecting the migration and refactoring tools. Here, various object-oriented metrics are used, such as Number of Classes, representing the level of abstraction in the code; Attribute Hiding Factor and Method Hiding Factor, which calculate the percentages of hidden attributes and methods, respectively, and are related to modifiability; and Average Number of Methods per Class, which measures the complexity of the code. The metrics applicable in the previous two scenarios can also be used here, as they are applicable to the migrated code; the reverse might not be true.
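One simple way to operationalize these scenarios in a quality control center is to tag each metric with the platforms on which it is valid and to derive comparability from the tags. The following sketch is our illustration, not Q-MIG's actual tooling.

```java
import java.util.EnumSet;
import java.util.Set;

// Sketch of tagging metrics with the platforms they are valid on and
// deriving the scenario from the tags.
enum Platform { COBOL, JAVA }

class MigrationMetric {

    final String name;
    final Set<Platform> applicableTo;

    MigrationMetric(String name, Set<Platform> applicableTo) {
        this.name = name;
        this.applicableTo = applicableTo;
    }

    // Direct pre-/post-migration comparison requires validity on both sides
    // (scenarios 1 and 2); otherwise the metric is target-specific.
    boolean comparableAcrossMigration() {
        return applicableTo.containsAll(EnumSet.of(Platform.COBOL, Platform.JAVA));
    }

    public static void main(String[] args) {
        MigrationMetric loc = new MigrationMetric("Lines of Code", EnumSet.allOf(Platform.class));
        MigrationMetric dit = new MigrationMetric("Depth of Inheritance", EnumSet.of(Platform.JAVA));
        System.out.println(loc.name + " comparable: " + loc.comparableAcrossMigration()); // true
        System.out.println(dit.name + " comparable: " + dit.comparableAcrossMigration()); // false
    }
}
```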
4 Conclusion
This paper identified three scenarios for measuring and comparing internal quality of software systems under migration, paired with applicable metrics. The scenarios stress the challenge of comparing quality measurements in the context of paradigm shifts, e.g. when migrating from procedural COBOL to object-oriented Java. They distinguish pre-/post-migration comparison, which assesses the suitability of migrating, from comparing migration results under different toolchain configurations, which improves the tools and tailors the toolchain to project-specific needs. Further steps in the project include the design and evaluation of a quality model by identifying relevant quality criteria and making them measurable using appropriate metrics, with the scenarios providing an initial structure.

References
[1] V. R. Basili, G. Caldiera, and H. D. Rombach. The goal question metric approach. In Encyclopedia of Software Engineering. Wiley, 1994.
[2] C. Becker and U. Kaiser. Test der semantischen Äquivalenz von Translatoren am Beispiel von CoJaC. Softwaretechnik-Trends, 32(2), 2012.
[3] J. Borchers. Erfahrungen mit dem Einsatz einer Reengineering Factory in einem großen Umstellungsprojekt. HMD, 34(194):77-94, March 1997.
[4] A. Fuhr, A. Winter, U. Erdmenger, T. Horn, U. Kaiser, V. Riediger, and W. Teppe. Model-Driven Software Migration - Process Model, Tool Support and Application. In A. D. Ionita, M. Litoiu, and G. Lewis, editors, Migrating Legacy Applications: Challenges in Service Oriented Architecture and Cloud Computing Environments. IGI Global, Hershey, PA, USA.
[5] ISO/IEC. Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models. Technical report.
[6] J. Jelschen, G. Pandey, and A. Winter. Towards quality-driven software migration. In Proceedings of the 1st Collaborative Workshop on Evolution and Maintenance of Long-Living Systems, 2014.

Semi-automated decision making support for undocumented evolutionary changes
Jan Ladiges, Alexander Fay, Automation Technology Institute, Helmut Schmidt University, Holstenhofweg 85, Hamburg, Germany, {ladiges,
Christopher Haubeck, Winfried Lamersdorf, Distributed Systems and Information Systems, University of Hamburg, Vogt-Kölln-Straße 30, Hamburg, Germany, {haubeck,

1 Introduction
Long-living systems evolve under boundary conditions as diverse as the systems themselves. In the industrial practice of the production automation domain, for example, adaptations and even the initial engineering of control software are often performed without a formalized requirement specification [1]. Nevertheless, operators must decide during operation whether such an undocumented change is consistent with the (informal) specification, since changed behavior can also occur due to unintended side effects of changes or due to other influences (like wear and tear). In addition, the system behavior strongly depends on both the software and the physical plant. Accordingly, approaches are needed to extract requirements from the interdisciplinary system behavior and present them to the operator in a suitable format. The FYPA²C project (Forever Young Production Automation with Active Components) aims to extract behavior related to non-functional requirements (NFRs) by monitoring and analyzing signal traces of production systems, taking the specific boundary conditions of the production automation domain into account.

2 The Evolution Support Process
The assumption of this approach is that the externally measured signal traces of programmable logic controllers (PLCs) provide a basis to capture the NFRs on the system. Fig. 1 shows how low-level data (the signals) can be lifted to high-level NFR-related information. First, the signal traces created during (simulated) usage scenarios are used to automatically generate and adapt dynamic knowledge models. Such models are, e.g., timed automata learned by the algorithm described by Schneider et al. in [2]. Each model expresses specific aspects of the system and serves as a documentation of the underlying process. An analysis of these models can provide NFR-related properties of the system in order to evaluate the influences of changes. Such properties are, e.g., the throughput rate or the routing flexibility (see [3]).

Figure 1: Process of extracting system properties

Similar work has been done in [4]. There, automata are generated from test cases which are, e.g., derived from design models; an invariant analysis then allows extracting functional requirements which can be monitored. However, the FYPA²C approach assumes that no formal models or test cases are present, and it aims at the extraction of NFRs. Since not every I/O signal of a PLC carries information about the needed aspects, a selection of the signals has to be made. Therefore, signals are enriched with semantics describing which kind of information a signal provides. A signal stemming from a sensor which identifies the material of a workpiece (e.g. a capacitive sensor distinguishing workpieces) would get the semantic workpiece identification. Note that enriching signals is a rather simple step compared to creating, e.g., design models. Since a monitoring system cannot decide whether a performed change and its influences on the NFRs are

intended (or at least acceptable), a practical semi-automated evolution support process with a user in the loop is used. First, an anomaly detection engine detects whenever a behavior is observed that contradicts the knowledge models and can therefore indicate an evolutionary change. In the case of timed automata, the anomaly detection method presented in [2] is used. This anomaly is, in a first step, reported to the user. At this point, only the actual anomaly, the context it occurs in, and a limited amount of current properties and probable influences can be reported, since only influences on the already observed scenarios can be considered; deductions about the overall properties are very restricted at this point. If a decision cannot be made here, the changed behavior is added to the concerned knowledge models in order to evaluate the effects on the system properties in detail. This is done by an analysis based on the extracted scenarios, applied on the plant or a simulation. The advantage of these steps is that the operator can be informed based on the overall NFR-related properties of the system. As a reaction, the change can be reverted if unintended, or, if it is intended, the adapted scenarios and models can be treated as valid.

Figure 2: Semi-automated evolution support process

If there is no possibility for a proactive determination of the system properties (no simulation and no availability of the system for tests), adapting the models during operation is the only remaining option, and just the already observed changes can be evaluated. When an unacceptable influence is observed, the operator can react accordingly. However, the scenarios observed after the change can be compared to the stored ones in order to estimate the completeness of the adapted knowledge models. To be more precise, consider the following simple example: A conveyor system is responsible for transporting workpieces to a machine located at the end of the conveyor system. Workpieces are detected by light barriers at both ends of all conveyors. A requirement on the throughput rate demands that the transport does not take longer than 60 seconds. A PLC collects the signals stemming from the light barriers, starts the transport when a workpiece reaches the first conveyor, and stops it when the workpiece reaches the machine. The conveyor speed can be parameterized within the PLC program. A timed automaton (as a knowledge model) represents the transportation and is learned from the observed signal traces by the learning algorithm in [2]. The automaton should include only signals related to the transportation; therefore, all I/O signals of the PLC are enriched with simple semantics, and the learning algorithm is applied only to signals with the given semantic workpiece detection, i.e., all signals stemming from light barriers. Accordingly, an analysis of the automaton enables deducing the transporting times by aggregating the transition times. Due to maintenance, the motors of the conveyors are exchanged for motors with a higher slip, resulting in a slower transportation. Unfortunately, the operator did not adapt the parameters in the PLC. During the first run of the plant, the slower transportation is detected as a time anomaly and reported to the operator after the workpiece has passed the first conveyor. The operator can now decide whether the anomaly is intended (or at least acceptable) or not. If he is not able to make this decision, for example due to the high complexity of the conveyor system, he can declare the anomaly as uncertain, and the knowledge model is further adapted during the transportation until a deduction about the fulfillment or violation of the throughput requirement can be made. If the requirement is violated, the operator can react accordingly by changing the parameters in the PLC code.
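Stripped of the timed-automaton machinery, the throughput check of the running example amounts to something like the following sketch. The barrier identifiers are invented for illustration; in the actual approach, the bound is checked against the transition times of the learned automaton rather than hard-coded.

```java
// Minimal sketch of the throughput check in the running example: flag a time
// anomaly when the transport between the first light barrier and the machine
// exceeds the required 60 seconds.
class ThroughputMonitor {

    private static final long MAX_TRANSPORT_MILLIS = 60_000;
    private Long entryTimestamp; // workpiece seen at the first conveyor

    // Fed only with signals carrying the semantic "workpiece detection".
    void onSignal(String barrierId, long timestampMillis) {
        if ("barrier.first".equals(barrierId)) {
            entryTimestamp = timestampMillis;
        } else if ("barrier.machine".equals(barrierId) && entryTimestamp != null) {
            long duration = timestampMillis - entryTimestamp;
            if (duration > MAX_TRANSPORT_MILLIS) {
                // Reported to the operator, who decides whether the change
                // behind the anomaly is intended or must be reverted.
                System.out.println("Time anomaly: transport took " + duration + " ms");
            }
            entryTimestamp = null;
        }
    }
}
```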
References
[1] G. Frey and L. Litz. Formal methods in PLC programming. In Intl. Conf. on Systems, Man, and Cybernetics, vol. 4.
[2] S. Schneider, L. Litz, and M. Danancher. Timed residuals for fault detection and isolation in discrete event systems. In Workshop on Dependable Control of Discrete Systems.
[3] J. Ladiges, C. Haubeck, A. Fay, and W. Lamersdorf. Operationalized Definitions of Non-Functional Requirements on Automated Production Facilities to Measure Evolution Effects with an Automation System. In Intl. Conf. on Emerging Technologies and Factory Automation.
[4] C. Ackermann, R. Cleaveland, S. Huang, A. Ray, C. Shelton, and E. Latronico. Automatic requirement extraction from test cases. In Intl. Conf. on Runtime Verification, 2010.

Checkable Code Decisions to Support Software Evolution
Martin Küster, Klaus Krogmann
FZI Forschungszentrum Informatik, Haid-und-Neu-Str., Karlsruhe, Germany

1 Introduction
For the evolution of software, understanding the context, i.e. the history and rationale of the existing artifacts, is crucial to avoid ignorant surgery [3], i.e. modifications to the software without understanding its design intent. Existing work on recording architecture decisions has mostly focused on architectural models. We extend this to code models and introduce a catalog of code decisions that can be found in object-oriented systems. With the presented approach, we make it possible to record design decisions that are concerned with the decomposition of the system into interfaces, classes, and references between them, or with how exceptions are handled. Furthermore, we indicate how decisions on the usage of Java frameworks (e.g. for dependency injection) can be recorded. All decision types presented are supplied with OCL constraints to check the validity of the decision based on the linked code model. We hope to solve a problem of all long-lived systems: that late modifications are not in line with the initial design of the system and that decisions are (unconsciously) overruled. The problem is that developers will not check all decisions taken in earlier stages, nor whether the current implementation still complies with them. Automating the validation of a large set of decisions, as presented in this work, is a key factor for more conscious evolution of software systems.

2 Decision Catalog
We developed an extensive catalog of recurring design decisions in Java-based systems. Some of the decision types are listed in Table 1, together with the associated constraints (in natural language) that are checked by the OCL interpreter. Due to space restrictions, we cannot go into detail on each decision type. Elements of all object-oriented languages, such as class declarations including generalizations and interface implementations, are covered, as are member declarations such as field and method declarations. To show that the approach is not restricted to elementary decisions in object-oriented systems, we give more complex decision types, such as wrapper exception or code clone. Especially code clones can be acceptable if the decision model records the intention of the developer that cloned the code. Important framework-specific decision types are left out of the discussion: those for dependency injection (vs. constructor usage) and those for special classes (e.g. Bean classes). They require more complex linkage (not only to Java code, but also to configuration files in XML); the mechanism to state the decision invariant, however, is exactly the same.

Figure 1: MarshallingDecision and related artifacts. Code Decision Metamodel (CDM) elements shaded in grey, Java code model elements shaded in white.

Fig. 1 gives a model diagram of the decision type MarshallingDecision. The decision states which mechanism is used to marshal a class (using standard serialization or hand-written externalization).

3 Automatic Checks of Decisions
We propose a tight integration with models of Java code constructed in textual modeling IDEs. For that, we operate on the code not on a textual level, but on a model level based on EMFText. This enables linkage between decision models and code models. Typical difficulties of models linking into code, esp.
dangling references caused by saving and regenerating the code model from the textual representation, are solved by anchoring the decision in the code using Java annotations. The reference is established by comparing the id of the anchor in the code with the id of the decision. This kind of linkage is stable even in the presence of complex code modifications, such as combinations of moving, renaming, or deleting and re-inserting fragments. The decision types are equipped with OCL constraints. These constraints use the linked code elements to check whether the defined design decision still holds in the current implementation. For example, given the MarshallingDecision from Fig. 1, the OCL constraint will check whether the class referenced by clazz (derived reference) implements java.io.Externalizable (if this mechanism is chosen).
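To make the anchoring idea concrete, the following sketch shows what such an annotation and an annotated class could look like. The annotation name and id format are hypothetical; the presented tooling operates on EMF-based code models, so this only illustrates the linkage and the constraint that a MarshallingDecision would check.

```java
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical anchor annotation: the id links a code element to a decision
// in the decision model, where an OCL constraint is evaluated against the
// code model element carrying the anchor.
@Retention(RetentionPolicy.CLASS)
@interface DecisionAnchor {
    String id();
}

// Example: a MarshallingDecision with mechanism "externalization" would be
// violated as soon as this class stops implementing java.io.Externalizable.
@DecisionAnchor(id = "decision-42")
class Order implements Externalizable {

    private String customer;

    public Order() {
        // Externalizable requires a public no-argument constructor.
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(customer);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        customer = in.readUTF();
    }
}
```

Because the annotation travels with the class through moves and renames, the decision-to-code link survives modifications that would break purely positional references.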

Table 1: Extract of the catalog of discussed code decisions (code decision element: description of constraint).

Object Creation: objects of the designated type are created only via the defined way.
Inheritance / Abstraction: class extends the indicated class or one of its subclasses / is abstract.
Cardinalities and Order: field is of the respective type (Set, SortedSet, List, Collection).
Composition: container class has a reference to the part class; the part class is instantiated and the reference is set within the constructor of the container class or as part of the static initializer; in the bi-directional case, the part class holds a reference to the container class, too.
Field Initialization: all fields are initialized (only!) as defined.
Marshalling (interface example): marshalled class must implement the specified interface.
Wrapper Exception: class E must extend Exception; methods containing code causing a library exception must throw the user-defined exception and must not throw the library exception.
Code Clones: code was copied from the indicated method; according to the clone type, clones may differ no more than defined: exact clones must stay exact, syntactically identical clones may not contain modified fragments.
Utility Class: class must be final, has an (empty) private constructor, and provides only static methods.
Singleton (pattern example): contains a private static final field with a self-reference and a public static synchronized method getting the reference.

4 Related Work and Conclusion
The initial idea of recording decisions during the design of object-oriented systems is from Potts and Bruns [4]. The process of object-oriented analysis is captured in a decision-based methodology by Barnes and Hartrum [1], capturing the argumentation of encapsulation or decomposition. For architectural models of software, the need to collect the set of decisions that led to the architectural design was first pointed out by Jansen and Bosch [2]. In this paper, we presented a novel approach to model-based documentation of recurring object-oriented design decisions. We outlined an extract of our catalog of decision types in object-oriented systems. All decisions are equipped with OCL constraints. Applied to existing code, these decision types make it possible to check whether a defined decision still holds in the current implementation or whether it is violated. Currently, we are re-engineering a commercial financial software system. This real-world case study helps to complete the catalog and to evaluate the benefits of the model-based approach (checking, finding rationales and intent, and linking to drivers of decisions) during the evolution phase.

References
[1] P. D. Barnes and T. C. Hartrum. A Decision-Based Methodology For Object-Oriented Design. In Proc. IEEE 1989 National Aerospace and Electronics Conference. IEEE Computer Society Press, 1989.
[2] A. Jansen and J. Bosch. Software Architecture as a Set of Architectural Design Decisions. In 5th Working IEEE/IFIP Conference on Software Architecture (WICSA '05). IEEE, 2005.
[3] D. L. Parnas. Software Aging. In Proc. 16th International Conference on Software Engineering (ICSE '94), 1994.
[4] C. Potts and G. Bruns.
Recording the Reasons for Design Decisions. In Proc. 10th International Conference on Software Engineering (ICSE 1988). IEEE Computer Society Press, 1988.

Guidance for Design Rationale Capture to Support Software Evolution
Mathias Schubanz 1, Andreas Pleuss 2, Howell Jordan 2, Goetz Botterweck 2
1 Brandenburg University of Technology, Cottbus - Senftenberg, Germany
2 Lero, The Irish Software Engineering Research Centre, Limerick, Ireland
M.Schubanz@b-tu.de, {Andreas.Pleuss, Howell.Jordan, Goetz.Botterweck}@lero.ie

Abstract
Documenting design rationale (DR) helps to preserve knowledge over long time spans, to diminish software erosion, and to ease maintenance and refactoring. However, the use of DR in practice is still limited. One reason for this is the lack of concrete guidance for capturing DR. This paper provides a first step towards identifying DR questions that can guide DR capture and discusses required future research.

Introduction
Software continuously evolves. Over time this leads to software erosion, resulting in significant costs when dealing with legacy software. Documenting design rationale (DR) can help developers to deal with the complexity of software maintenance and software evolution [4, 6]. DR reflects the reasoning (i.e., the "Why?") underlying a certain design. It requires designers to explicate their tacit knowledge about the given context, their intentions, and the alternatives considered [1]. On the one hand, this helps to increase software quality and prevent software erosion through its capabilities to 1) enable communication amongst team members [6], 2) support impact analyses [7], and 3) prevent engineers from repeating errors or entering dead-end paths [1]. On the other hand, DR supports refactoring long-living systems to perform the leap towards new platforms or technologies without introducing errors due to missing knowledge about previous decisions. In general, once documented, DR can support software development in many ways, including debugging, verification, development automation, and software modification [4]. This has been confirmed in industrial practice (e.g., [2, 5]).

Problem
Despite its potential benefits, systematic use of DR has not found its way into wider industrial practice. Burge [3] outlines that the lack of industrial application is due to the uncertainty connected with DR usage: there are too many barriers to capturing DR, accompanied by uncertainty about its potential payoff, as DR often unfolds its full potential late in the software lifecycle. The problem of DR elicitation has been described many times [1, 4, 6]. For instance, engineers might not collect the right information [6]. Based on the statement that DR answers questions [4], this could be due to posing the wrong questions, or no questions at all. General questions in the literature, such as "Why was a decision made?", are rather unspecific and ambiguous. This can easily lead to over- or underspecified DR and compromise a developer's motivation. A first approach to guide DR capture has been proposed by Bass et al. [1]. They provide general guidelines on how to capture DR, such as "Document the decision, the reason or goal behind it, and the context for making the decision." However, even considering those guidelines, general questions (e.g., "Why?") alone are not sufficient to cover all relevant aspects and guide developers. Our goal is to provide better support for software evolution by leveraging the benefits of DR management. Hence, we aim to integrate guidance for DR elicitation into software design and implementation. For this, we aim to identify concrete, specific DR questions that guide engineers in capturing DR and can be used as a basis for building relevant tool support.
To the best of our knowledge, concrete DR questions to ask developers have not yet been investigated in a systematic way. Until now, there is only exemplary usage of DR questions in the literature. We aim to provide a first step in this paper by analysing the DR questions found in the literature so far. For this we perform the following steps: (1) we perform a literature analysis and systematically collect DR questions; (2) we normalize the collected questions by rephrasing them; (3) we structure them according to common decision-making principles. As a result, we suggest a first set of DR questions as a basis towards guiding engineers in capturing DR. The remainder of this paper describes this analysis and the resulting set of DR questions, and subsequently discusses the required future work.

Question Elicitation
To derive a set of specific DR questions to support software evolution, we reviewed existing knowledge in the DR-related literature in a systematic way. We collected all questions that we found in the literature, generalized and structured them, and eliminated duplicates. Based on an extensive literature review, we found concrete questions for DR capturing in 19 literature sources, for instance "What does the hardware need to do?", "What other alternatives were considered?", or "How did other people deal with this problem?". This resulted in 150 questions that we collected in a spreadsheet. In the next step, we normalised the questions: sorting the questions reveals the different interrogatives used. Most questions are "how?" (24), "what?" (73), and "why?" (24) questions. The 29 other questions could

14 Model Element Decision Option Selected Option Rejected Option Action Add/Change Artefact Judgement Consequence Open Issue Decision # Question Response Type #1 What is the purpose of the decision? Text #2 What triggered the decision to be taken? Text #3 When will the decision be realized? Text #4 What are the options? Option[] #5 What are the actions to be done? Action[] #6 What judgements have been made on this option? Judgement[] #7 What are the anticipated consequences of this option? Consequence[] #8 Who is responsible? Text #9 Why was this alternative selected? Text #10 Why was this alternative not selected? Text #11 What artefacts will be added/changed? Text/Link #12 What other artefacts are related to this addition/change? Text/Link #13 What is the status before the action? Text/Link #14 Why is the new/changed artefact specified in this way? Text #15 Who are the intended users of the new/changed artefact? Text #16 How should the new/changed artefact be used? Text What are the criteria according to which this judgement is #17 made? Criterion #18 Who provided the judgement? What are the anticipated scenarios in which this #19 consequence may occur? Scenario[] #20 What are open issues associated with this consequence? Open Issue[] What are risks and conflicts associated with this #21 consequence? Text #22 What needs to be done? Text #23 Who will be responsible? Text #24 When will it need to be addressed? Text #25 What are the current criteria for success? Criterion[] #26 What are the intended future scenarios? Scenario[] Context Criterion #27 Which stakeholders does this criterion represent? Text Scenario #28 What events could trigger this scenario? Text Table 1: Refined set of questions found in the literature. be rephrased to start with an interrogative. Based on that it seems useful to consider also the other main interrogatives ( who?, when? and where? ) and we added them as generic questions to the overall set. In several iterations we then rephrased each question in a more generic way using one of the interrogatives and removed redundancies which resulted in 47 questions. We then further selected, summarized, and rephrased the questions according the guidelines from [1], resulting in a set of 28 questions, as shown in Table 1. As DR is closely related to decision making concepts, the resulting questions can be grouped according to them. For instance, some questions refer to available options and others to the consequences of a decision. We structure them in a data model with, e.g., Option and Consequence as entities, and links between them. Table 1 shows the resulting questions (middle column), structured by the identified entities (left column). The right column indicates the response type, which is either text, alink to a development artefact (e.g., a design model), or a reference to another entity. The resulting data model could be implemented as a tool-supported data model or metamodel. Research Agenda Besides a few successful cases (e.g., [2, 5]) the use of DR in industrial practise is still an exception. One reason is the lack of information about the practitioners concrete needs. First work has been conducted by Burge [3] and Tang et al. [8] each performing a survey on the expectations and needs in relation to DR usage. They found that practitioners consider DR as important, but also that there is a lack of methodology and tool support. They also stress the need for more empirical work to close this gap. We think that there is no one-fits-all approach. 
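For concreteness, the following minimal sketch shows how two of the entities from Table 1 could be encoded; the question-to-entity grouping follows the table, while all identifiers and types are our own illustrative assumptions rather than the authors' design:

# Sketch of two Table 1 entities as a tool-supported data model.
# Field comments map fields to question numbers; all names are assumed.
from dataclasses import dataclass, field

@dataclass
class Option:
    judgements: list = field(default_factory=list)    # #6, references Judgement
    consequences: list = field(default_factory=list)  # #7, references Consequence
    responsible: str = ""                              # #8

@dataclass
class Decision:
    purpose: str = ""                                  # #1
    trigger: str = ""                                  # #2
    realized_when: str = ""                            # #3
    options: list = field(default_factory=list)        # #4, references Option
    actions: list = field(default_factory=list)        # #5, references Action

A metamodel implementation would mirror the same entities and references.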
Research Agenda
Besides a few successful cases (e.g., [2, 5]), the use of DR in industrial practice is still an exception. One reason is the lack of information about practitioners' concrete needs. First work has been conducted by Burge [3] and Tang et al. [8], each performing a survey on the expectations and needs in relation to DR usage. They found that practitioners consider DR important, but also that there is a lack of methodology and tool support. They also stress the need for more empirical work to close this gap. We think that there is no one-size-fits-all approach. Therefore, Table 1 is just a first step towards overcoming the uncertainty connected to DR usage. As we intend to provide concrete guidance to designers for capturing DR, the concrete 1) application domains, 2) team structures, and 3) employed development processes need to be considered. Thus, to successfully guide designers when capturing DR, further work on eliciting the questions to be answered needs to be conducted under consideration of these three dimensions. If this is not done carefully, DR questions will remain on an abstract level. Hence, they would merely serve as a guideline for DR capture (similar to [1]) instead of concrete guidance for DR capture. In future work we intend to gain more insights into the process of DR documentation by taking the three dimensions from above into account. Software engineering in regulated domains, including certification-oriented processes, seems a promising candidate, as the need for careful documentation is already well established there and first successful industry cases exist (e.g., [2]). Hence, we aim to focus on the automotive domain as a first starting point and intend to create questionnaires and perform interviews with practitioners at corresponding industry partners.

Acknowledgments
This work was supported, in part, by Science Foundation Ireland grant 10/CE/I1855 to Lero, the Irish Software Engineering Research Centre.

References
[1] L. Bass, P. Clements, R. L. Nord, and J. A. Stafford. Capturing and using rationale for a software architecture. In Rationale Management in Software Engineering. Springer, 2006.
[2] R. Bracewell, K. Wallace, M. Moss, and D. Knott. Capturing design rationale. Computer-Aided Design, 41(3), 2009.
[3] J. E. Burge. Design rationale: Researching under uncertainty. AI EDAM (Artificial Intelligence for Engineering Design, Analysis and Manufacturing), 22(4), 2008.
[4] J. E. Burge, J. M. Carroll, R. McCall, and I. Mistrík. Rationale-Based Software Engineering. Springer, 2008.
[5] E. J. Conklin and K. C. B. Yakemovic. A process-oriented approach to design rationale. Human-Computer Interaction, 6, 1991.
[6] A. H. Dutoit, R. McCall, I. Mistrík, and B. Paech. Rationale management in software engineering: Concepts and techniques. In Rationale Management in Software Engineering. Springer, 2006.
[7] J. Liu, X. Hu, and H. Jiang. Modeling the evolving design rationale to achieve a shared understanding. In CSCWD, 2012.
[8] A. Tang, M. A. Babar, I. Gorton, and J. Han. A survey of architecture design rationale. Journal of Systems and Software, 79(12), 2006.

Parsing Variant C Code: An Evaluation on Automotive Software
Robert Heumüller
Universität Magdeburg, Magdeburg, Germany, Robert.Heumueller@st.ovgu.de
Jochen Quante and Andreas Thums
Robert Bosch GmbH, Corporate Research, Stuttgart, Germany, {Jochen.Quante, Andreas.Thums}@de.bosch.com

Abstract
Software product lines are often implemented using the C preprocessor. Different features are selected based on macros; the corresponding code is activated or deactivated using #if. Unfortunately, C preprocessor constructs are not parseable in general, since they break the syntactical structure of C code [1]. This imposes a severe limitation on software analyses: they usually cannot be performed on unpreprocessed C code. In this paper, we discuss how and to what extent large parts of the unpreprocessed code can be parsed anyway, and what the results can be used for.

1 Approaches
C preprocessor (Cpp) constructs are not part of the C syntax. Code therefore has to be preprocessed before a C compiler can process it; only preprocessed code conforms to C syntax. In order to perform analyses on unpreprocessed code, this code has to be made parseable first. Several approaches have been proposed for that:

Extending a C parser. Preprocessor constructs are added at certain points in the syntax. This requires that these constructs are placed in a way compatible with the C syntax. However, preprocessor constructs can be added anywhere, so this approach cannot cover all cases [1].

Extending a preprocessor parser. The C snippets inside preprocessor conditionals are parsed individually, e.g., using island grammars [4]. This approach is quite limited, because the context is missing, which is often important for decisions during parsing.

Analyzing all variants separately and merging the results. This approach can build on existing analysis tools. However, for a large number of variance points, it is not feasible due to the exponential growth in the number of variants.

Replacing Cpp with a better alternative. A different language for expressing conditional compilation and macros was, for example, proposed by McCloskey et al. [3]. Such a language can be designed to be better analyzable and to integrate better with C. However, migrating a whole code base to a new preprocessing language is a huge effort.

We chose to base our work on the first approach. We took ANTLR's standard ANSI C grammar and extended it with preprocessor commands in well-formed places. This way, we were already able to process about 90% of our software. In order to further increase the share of successfully processable files, it was necessary to discover where this approach failed, and to come up with a strategy for dealing with these failures. An initial regex-based evaluation indicated that the two main reasons for failures were a) the existence of conditional branches with incomplete syntax units, and b) the use of troublesome macros.

2 Normalization
To deal with incomplete conditional branches, we implemented a pre-preprocessor as proposed by Garrido et al. [1]. The idea is to transform preprocessor constructs that break the C structure into semantically equivalent code that fits into the C structure. The transformation basically adds code to the conditional code until the conditional sits at an allowed position. Figure 1 shows a typical example of unparseable code and its normalized equivalent. The code is read into a tree that corresponds to the hierarchy of the input's conditional compilation directives.
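As a rough sketch of this step (our own simplification, not the paper's implementation), the directive hierarchy can be recovered with a single stack-based pass:

# Minimal sketch: build a tree mirroring the nesting of conditional
# compilation directives. #else/#elif handling is omitted for brevity.
def build_cpp_tree(lines):
    root = {"cond": None, "children": []}
    stack = [root]
    for line in lines:
        s = line.strip()
        if s.startswith("#if"):            # matches #if, #ifdef, #ifndef
            node = {"cond": s, "children": []}
            stack[-1]["children"].append(node)
            stack.append(node)
        elif s.startswith("#endif"):
            stack.pop()
        else:                              # plain code line
            stack[-1]["children"].append(s)
    return root

tree = build_cpp_tree(["#ifdef a", "if (cond) {", "#endif", "foo();"])
print(tree)  # conditional node with 'if (cond) {' inside, 'foo();' outside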
The normalization can then be performed on this tree using a simple fix-point algorithm (a toy sketch of the loop follows the list):
1. Find a Cpp conditional node with incomplete C syntax units in at least one of its branches. Incompleteness is checked based on token black and white lists. For example, a syntactical unit may not start with tokens like else or &&.
2. Copy missing tokens from before/after the conditional into all of the conditional's branches. This way, some code is duplicated, but the resulting code becomes parseable by the extended parser.
3. Delete the copied tokens at their original location.
4. Repeat until convergence.
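A toy version of this fix-point loop, with deliberately crude token lists and a completeness check of our own invention, reproduces the described behavior on a single conditional:

# Toy fix-point normalization for a single Cpp conditional. The real tool
# works on the whole directive tree; these data structures are stand-ins.
BAD_START = {"else", "&&", "||"}     # a syntax unit may not start with these

def is_incomplete(branch):
    if not branch:
        return False
    open_braces = branch.count("{") - branch.count("}")
    return open_braces > 0 or branch[0] in BAD_START

def normalize(before, branches, after):
    changed = True
    while changed:                   # step 4: repeat until convergence
        changed = False
        if any(is_incomplete(b) for b in branches):       # step 1
            if after:                # step 2: copy a following token into
                tok = after.pop(0)   # all branches; step 3: drop the original
                for b in branches:
                    b.append(tok)
                changed = True
            elif before:             # or copy a preceding token
                tok = before.pop()
                for b in branches:
                    b.insert(0, tok)
                changed = True
    return branches

# '#ifdef a' guarding only 'if (cond) {' (cf. Figure 1):
print(normalize([], [["if", "(cond)", "{"], []], ["foo();", "}"]))
# -> [['if', '(cond)', '{', 'foo();', '}'], ['foo();', '}']]
# The else branch now holds the dangling "foo(); }" that pruning removes.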

[Figure 1: Normalization and pruning example; three code panels: Original, Normalized, Pruned]

The copying step introduces a lot of infeasible paths and redundant conditions in the code. For example, the normalized code in Figure 1 contains many lines that the compiler will never see: they are not reachable because of the nested check of the negated condition. Such infeasible paths may even contain syntax errors, like foo();} in the example. Such irrelevant parts are thrown away in a postprocessing step (pruning). It symbolically evaluates the conditions, identifies contradictions and redundancy, and removes the corresponding elements.

3 Macros and User-Defined Types
In unpreprocessed code, macro and type definitions are often not available. They are usually only resolved via included header files, and this inclusion is done by the preprocessor. Therefore, our parser cannot differentiate between macros and user-defined types or functions. Kästner et al. [2] have solved this problem by implementing a partial preprocessor that preprocesses #include directives and macros, but keeps conditional compilation. We decided to use a different approach: we added a further preprocessing step that collects all macro definitions and type declarations from the entire code base. This information is then used by the parser to decide whether an identifier is a macro statement, expression, or call, or whether it is a user-defined type. Additionally, naming conventions are exploited in certain cases.

4 Results
The approach was evaluated on an engine control software of about 1.5 MLOC. It consists of about 6,700 source files and contains about 150 variant switching macros. The following shares of files could be successfully parsed due to the different parts of the approach:
- 90% could be parsed by simply extending the parser to deal with preprocessor constructs in certain well-formed positions.
- 4% were gained by providing the parser with precollected macro and type information.
- 3% were gained by normalization.
- 1% was gained by adding information about type naming conventions.
In summary, the share of code that can now be parsed could be increased from 90% to 98% at an acceptable cost. This enables meaningful analyses, for example collecting metrics on variance. These can in future be used to come up with improved variance concepts. Another use case is checking whether the product line code complies with the architecture. We are also considering transforming #if constructs into corresponding dynamic checks to allow using static analysis tools like Polyspace on the entire product line at once.

References
[1] A. Garrido and R. Johnson. Analyzing multiple configurations of a C program. In Proc. of 21st Int'l Conf. on Software Maintenance (ICSM), 2005.
[2] C. Kästner, P. G. Giarrusso, and K. Ostermann. Partial preprocessing C code for variability analysis. In Proc. of 5th Workshop on Variability Modeling of Software-Intensive Systems, 2011.
[3] B. McCloskey and E. Brewer. ASTEC: A new approach to refactoring C. In Proc. of the 13th Int'l Symp. on Foundations of Software Engineering (ESEC/FSE), pages 21-30, 2005.
[4] L. Moonen. Generating robust parsers using island grammars. In Proc. of 8th Working Conference on Reverse Engineering (WCRE), pages 13-22, 2001.

Consolidating Customized Product Copies to Software Product Lines
Benjamin Klatt, Klaus Krogmann
FZI Research Center for Information Technology, Haid-und-Neu-Str., Karlsruhe, Germany
Christian Wende
DevBoost GmbH, Erich-Ponto-Str. 19, Dresden, Germany

1 Introduction
Reusing existing software solutions as the initial point for new projects is a frequent approach in the software business. Copying existing code and adapting it to customer-specific needs allows for flexible and efficient software customization in the short term. But in the long term, a Software Product Line (SPL) approach with a single code base and explicitly managed variability reduces maintenance effort and eases the instantiation of new products. However, consolidating customized copies into an SPL afterwards is not trivial and requires a lot of manual effort. For example, identifying the relevant differences between customized copies requires reviewing a lot of code. State-of-the-art software difference analysis neither considers characteristics specific to copy-based customizations nor supports further interpretation of the differences found (e.g., relating thousands of low-level code changes). Furthermore, deriving a reasonable variability design requires experience and is not a software developer's everyday task. In this paper, we present our product copy consolidation approach for software developers. It contributes i) a difference analysis adapted to code copy differencing, ii) a variability analysis to identify related differences, and iii) the derivation of a reasonable variability design.

2 Consolidation Process
As illustrated in Figure 1, consolidating customized product copies into a single-code-base SPL encompasses three main steps: Difference Analysis, Variability Design, and the Consolidation Refactoring of the original implementations. These steps are related to typical tasks involved in software maintenance, but adapted to the specific needs of a consolidation. As summarized by Pigoski [2] (p. 6-4), developers spend 40%-60% of their maintenance effort on program comprehension, i.e., difference analysis in our approach. This is a major part of a consolidation process, but it is also the least supported one. In the following sections, we provide further details on the different steps of the consolidation process.

Acknowledgment: This work was supported by the German Federal Ministry of Education and Research (BMBF), grant No. 01IS13023 A-C.

[Figure 1: Consolidation Process; from the original product and its customized copies 1 and 2, via Difference Analysis, Variability Design, and Consolidation Refactoring, to a Software Product Line]

3 Difference Analysis
We have developed a customized difference analysis approach that is adapted to the needs of product-line consolidation in three directions: respecting code structures, providing a strict (Boolean) change classification, and respecting coding guidelines for copy-based customization, if available. Today's code comparison solutions do not always respect syntactic code structures. This leads to identified differences that might cut across two method bodies. In our approach, we detect differences on extracted syntax models. This allows us to precisely identify changed software elements and to detect relations between them later on. Furthermore, we filter code elements not relevant for the software's behavior (e.g., code comments or layout information).
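As a toy illustration of why syntax-model differencing helps (using Python's ast module as a stand-in for the extracted syntax models; this is not the authors' tooling), comparing trees instead of text makes comment- and layout-only changes invisible:

# Comments and whitespace never reach the syntax tree, so only
# behavioural differences remain visible in a comparison.
import ast

original = """
def price(net):  # net amount
    return net * 1.19
"""

customized_copy = """
def price(net):
    # reformatted copy, behaviourally identical
    return net   *   1.19
"""

same = ast.dump(ast.parse(original)) == ast.dump(ast.parse(customized_copy))
print(same)  # True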
However, we strictly detect any changes of elements in scope and prefer false-positively detected changes (i.e., they can be ignored later on) over risking the loss of behavioral differences. Coding guidelines can include specific rules for code copying. For example, developers might be asked to introduce customer-specific suffixes into code unit names or to introduce "extends" relationships to the original code. Since these customization guidelines are vital for aligning different product copies, we also feed them into the difference analysis.

4 Variability Analysis
Once all differences are detected, it is important to identify those related to each other. Related differences tend to contribute to the same customization and thus might need to be part of the same variant later on. In our approach, we derive a Variation Point Model (VPM) from the differences detected before. The VPM contains variation points (VPs), each referencing a code location that contains one of the differences.
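To make this structure tangible, here is a minimal, hypothetical sketch of a VPM together with one simple, location-based aggregation recommendation; all identifiers are illustrative assumptions, not the authors' API:

# Hypothetical shape of a variation point model (VPM) plus one basic
# relationship analysis: grouping variation points that share a file.
from dataclasses import dataclass, field

@dataclass
class Variant:
    copy_id: str                    # which product copy the alternative stems from
    code: str                       # the differing code fragment

@dataclass
class VariationPoint:
    location: str                   # e.g. "Billing.java:42"
    variants: list = field(default_factory=list)

def recommend_aggregations(vps):
    """Recommend merging variation points located in the same file."""
    groups = {}
    for vp in vps:
        groups.setdefault(vp.location.split(":")[0], []).append(vp)
    return [g for g in groups.values() if len(g) > 1]

Each such recommendation would then be accepted or declined interactively, as described next.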

At each VP, the code alternatives of the difference are referenced by variant elements. Starting with this fine-grained model, we analyze the VPs to identify related ones and recommend reasonable aggregations. Recommending and applying aggregations is an iterative process that continues until the person responsible for the consolidation is satisfied with the VPs (i.e., the variability design). With each iteration, it is their decision to accept or decline the recommended aggregations. This allows them to consider organizational aspects, such as a decision not to consolidate specific code copies. The variation point relationship analysis itself combines basic analyses, each able to identify a specific type of relationship (e.g., VP location, similar terms used in the code, common modifications, or program dependencies). Based on the identified relationships, reasonable aggregations are recommended. The basic analyses can be individually combined to match project-specific needs (e.g., indicators for code belonging together).

5 Consolidation Refactoring
As a final step, the code copies' implementations must be transformed into a single code base according to the chosen variability design and the selected variability realization techniques. As opposed to traditional refactorings (i.e., not changing the external behavior of the software), consolidation refactorings might extend (i.e., change) the external behavior. The underlying goal of consolidation refactoring is to keep each individual variant/product copy functional. However, new functional combinations enabled by introducing variability are considered valid consolidation refactorings. To implement consolidation refactorings, we are working on i) a refactoring method that explicitly distinguishes between introducing variability and restructuring code, and ii) specific refactoring automation to introduce variability mechanisms. The former focuses on guidelines and decision support. The latter is about novel refactoring specifications using well-known formalization concepts, such as the refactoring patterns described by Fowler et al. [3] or the refactoring role model defined by Reimann et al. [6]. Based on this formalization, we will automate the refactoring specifications to reduce the probability of errors compared to manual refactoring.

6 Existing Consolidation Approaches
SPLs and variability are established research topics nowadays. However, only a few existing approaches target the consolidation of customized code copies into an SPL with a single code base. Rubin et al. [7] have developed a conceptual framework for merging customized product variants in general. They focus on the model level, but their general high-level algorithm matches our approach. In [8], Schütz presents a consolidation process, describes state-of-the-art capabilities, and argues for the need for automation such as we target. In a similar way, others, like Alves et al. [1], focus on refactoring existing SPLs, but have also identified the lack of support for consolidating customized product copies and the necessity for automation. Koschke et al. [5] presented an approach for consolidating customized product copies by assigning features to module structures and thus identifying differences between the customized copies. Their approach is complementary to ours and could be used as an additional variability analysis if suitable module descriptions are available.

7 Prototype & Research Context
In our previous work [4], we presented the idea of tool support for evolutionary SPL development.
Meanwhile, we are working on the integration with state-of-the-art development environments. Furthermore, in the project KoPL, we refine and enhance the approach for industrial applicability. This encompasses adapting the analysis for use by software developers in terms of the required input and the result presentation. Furthermore, extension points are introduced to support additional types of software artifacts, analyses, and variability mechanisms. Currently, a prototype of the analysis part is already available and has been evaluated with an open-source case study based on ArgoUML-SPL and an industrial case study. The refactoring is in a design state and will be addressed later in the project. As lessons learned: strong input on how the desired SPL characteristics should look (e.g., realization techniques or quality attributes) improves the approach; we call this an SPL Profile. Furthermore, the first step of understanding is the most crucial one for a consolidation.

References
[1] V. Alves, R. Gheyi, T. Massoni, U. Kulesza, P. Borba, and C. Lucena. Refactoring product lines. In Proceedings of GPCE 2006. ACM.
[2] P. Bourque and R. Dupuis. Guide to the Software Engineering Body of Knowledge. IEEE, 2004.
[3] M. Fowler, K. Beck, J. Brant, and W. Opdyke. Refactoring: Improving the Design of Existing Code. Addison-Wesley Professional, 1999.
[4] B. Klatt and K. Krogmann. Towards tool-support for evolutionary software product line development. In Proceedings of WSR.
[5] R. Koschke, P. Frenzel, A. P. J. Breu, and K. Angstmann. Extending the reflexion method for consolidating software variants into product lines. Software Quality Journal, 2009.
[6] J. Reimann, M. Seifert, and U. Aßmann. On the reuse and recommendation of model refactoring specifications. Software & Systems Modeling, 12(3), 2012.
[7] J. Rubin and M. Chechik. A framework for managing cloned product variants. In Proceedings of ICSE 2013. IEEE.
[8] D. Schütz. Variability reverse engineering. In Proceedings of EuroPLoP.

Variability Realization Improvement of Software Product Lines
Bo Zhang
Software Engineering Research Group, University of Kaiserslautern, Kaiserslautern, Germany
Martin Becker
Fraunhofer Institute for Experimental Software Engineering (IESE), Kaiserslautern, Germany

Abstract: As a software product line evolves both in space and in time, variability realizations tend to erode in the sense that they become overly complex to understand and maintain. To address this challenge, various tactics are proposed to deal both with eroded variability realizations in the existing product line and with variability realizations that tend to erode in the future. Moreover, a variability improvement process is presented that incorporates these tactics against realization erosion and can be applied in different scenarios.

1 Introduction
Nowadays, successful software product lines are often developed in an incremental way, in which the variability artifacts evolve both in space and in time. During product line evolution, variability realizations tend to become more and more complex over time. For instance, in variability realizations using conditional compilation, variation points implemented as #ifdef blocks tend to be nested, tangled, and scattered across core code assets [2]. Moreover, fine-grained variability realizations are often insufficiently documented in variability specifications (e.g., a feature model), which makes the specifications untraceable or even inconsistent with their realizations [3]. As a result, it is an increasing challenge to understand and maintain the complex variability realizations during product line evolution, a phenomenon known as variability realization erosion [4]. In this paper, four countermeasure tactics are introduced to deal with either variability erosion in the existing product line or variability realizations that tend to erode in the future. Moreover, a variability realization improvement process is presented that incorporates these tactics against realization erosion and can be applied in different scenarios. Following this improvement process, we have analyzed the evolution of a large industrial product line (31 versions over four years) and conducted quantitative code measurement [4]. Finally, we have detected six types of erosion symptoms in existing variability realizations and predicted realizations that tend to erode in the future.

2 Variability Improvement Tactics
In order to address the practical problem of variability erosion, different countermeasures can be taken. Avizienis et al. [1] have introduced four tactics to attain software dependability: fault tolerance, fault removal, fault forecasting, and fault prevention. Similarly, these tactics can also be used for coping with variability erosion, as shown in Table 1. Each tactic should be adopted depending on the product line context and business goals. While the tactics of tolerance and removal deal with erosion in current variability realizations, the tactics of forecasting and prevention target variability realizations that tend to erode as the product line evolves along its current trend.

Table 1. Variability Realization Improvement Tactics

Problem         | Tactic      | Type
Current erosion | Tolerance   | analytical
                | Removal     | reactive
Future erosion  | Forecasting | analytical
                | Prevention  | proactive

[Figure 1: Extracted Variability Realization Elements]
Since one cause of variability erosion is the lack of sufficient variability documentation, the tactic of tolerance is to understand variability realizations by extracting a variability reflexion model, which documents the various variability realization elements as well as their inter-dependencies. Figure 1 shows variability realization elements using conditional compilation, which can be automatically extracted into a hierarchical structure. The variability reflexion model does not focus on a specific eroded variability code element, but helps to understand fine-grained variability elements, especially for product line maintenance. This tolerance tactic is an analytical approach because it does not change any existing product line artifact. In contrast, the tactic of removal is to identify and fix eroded elements in existing variability realizations, which is a reactive improvement approach. Besides tackling existing erosion, the tactic of forecasting is to predict future erosion trends and their likely consequences based on the current product line evolution trends (also an analytical approach). If the prediction of future erosion turns out to be non-trivial, the tactic of prevention should be applied as a proactive approach to avoid erosion and its consequences in the future. While the tactics of tolerance and forecasting are both analytical, the other two tactics (i.e., removal and prevention) need to change, and quality-assure, product line realizations, which requires additional effort.

3 Variability Improvement Process
Given the four aforementioned tactics, a variability realization improvement process is presented to investigate the variability erosion problem and carry out the relevant countermeasures against variability realization erosion. The improvement process contains four activities (Monitor, Analyze, Plan, and Execute), as shown in Figure 2. The four countermeasure tactics are carried out in one or multiple of these activities.

[Figure 2: Variability Realization Improvement Process]

The first activity is Monitor, which extracts a variability reflexion model from the variability realizations and product configurations (Tolerance tactic). Then, in the activity Analyze, the extracted variability reflexion model is analyzed to identify various realization erosion symptoms (part of the Removal tactic) and to predict future erosion trends (Forecasting tactic). Both activities have been conducted in an industrial case study in our previous work [4]. In the third activity, Plan, countermeasures against those erosion symptoms are designed to either fix eroded variability elements (Removal tactic) or prevent future erosion (Prevention tactic). Finally, in the fourth activity, Execute, the designed countermeasures of erosion removal or prevention are executed. While the activities Monitor and Execute can be fully automated, depending on the variability realization techniques, the activities Analyze and Plan are technique-independent and require domain knowledge. Since the four tactics differ in their applicability with respect to variability realization improvement, a product line organization can selectively apply one or multiple tactics for different improvement purposes. As shown in Figure 2, the improvement process begins with the activity Monitor, and the derived variability reflexion model is the basis of all following activities. In other words, the tactic of tolerance, carried out in the Monitor activity, is a prerequisite for all other tactics. Based on the variability reflexion model, a product line organization can decide to either identify and fix eroded variability elements in existing realizations (Removal tactic) or predict and avoid variability erosion in future realizations (Forecasting and Prevention tactics).
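As a rough illustration of the automatable part of this process (the regex, the names, and the nesting threshold below are our assumptions, not taken from the paper), a crude Monitor/Analyze pass over conditional-compilation code could look like this:

# Illustrative sketch only: extract conditional-compilation variation
# points (Monitor) and flag deep nesting as a possible erosion symptom
# (Analyze). The threshold is an assumed, tunable parameter.
import re

def monitor(source):
    """Collect (#if-condition, nesting depth) pairs: a flat stand-in for
    the variability reflexion model."""
    elements, depth = [], 0
    for line in source.splitlines():
        s = line.strip()
        if re.match(r"#\s*if", s):        # #if / #ifdef / #ifndef
            depth += 1
            elements.append((s, depth))
        elif re.match(r"#\s*endif", s):
            depth -= 1
    return elements

def analyze(elements, max_depth=2):
    """Report elements whose nesting exceeds the threshold."""
    return [e for e in elements if e[1] > max_depth]

code = "#ifdef FEATURE_A\n#ifdef FEATURE_B\n#ifdef DEBUG\nlog();\n#endif\n#endif\n#endif\n"
print(analyze(monitor(code)))   # [('#ifdef DEBUG', 3)]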
4 Conclusion
This paper introduces a product line improvement process containing four tactics with different application scenarios to deal with variability realization erosion, either at present or in the future.

References
[1] A. Avizienis, J. C. Laprie, B. Randell, and C. Landwehr. Basic concepts and taxonomy of dependable and secure computing. IEEE Transactions on Dependable and Secure Computing, vol. 1, no. 1, Jan. 2004.
[2] J. Liebig, S. Apel, C. Lengauer, C. Kästner, and M. Schulze. An analysis of the variability in forty preprocessor-based software product lines. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1, ser. ICSE '10. New York, NY, USA: ACM, 2010.
[3] T. Patzke. Sustainable evolution of product line infrastructure code. Ph.D. dissertation.
[4] B. Zhang, M. Becker, T. Patzke, K. Sierszecki, and J. E. Savolainen. Variability evolution and erosion in industrial product lines: a case study. In Proceedings of the 17th International Software Product Line Conference, ser. SPLC '13. New York, NY, USA: ACM, 2013.


Mehr

NEWSLETTER. FileDirector Version 2.5 Novelties. Filing system designer. Filing system in WinClient

NEWSLETTER. FileDirector Version 2.5 Novelties. Filing system designer. Filing system in WinClient Filing system designer FileDirector Version 2.5 Novelties FileDirector offers an easy way to design the filing system in WinClient. The filing system provides an Explorer-like structure in WinClient. The

Mehr

IATUL SIG-LOQUM Group

IATUL SIG-LOQUM Group Purdue University Purdue e-pubs Proceedings of the IATUL Conferences 2011 IATUL Proceedings IATUL SIG-LOQUM Group Reiner Kallenborn IATUL SIG-LOQUM Group Reiner Kallenborn, "IATUL SIG-LOQUM Group." Proceedings

Mehr

Exercise (Part XI) Anastasia Mochalova, Lehrstuhl für ABWL und Wirtschaftsinformatik, Kath. Universität Eichstätt-Ingolstadt 1

Exercise (Part XI) Anastasia Mochalova, Lehrstuhl für ABWL und Wirtschaftsinformatik, Kath. Universität Eichstätt-Ingolstadt 1 Exercise (Part XI) Notes: The exercise is based on Microsoft Dynamics CRM Online. For all screenshots: Copyright Microsoft Corporation. The sign ## is you personal number to be used in all exercises. All

Mehr

Praktikum Entwicklung Mediensysteme (für Master)

Praktikum Entwicklung Mediensysteme (für Master) Praktikum Entwicklung Mediensysteme (für Master) Organisatorisches Today Schedule Organizational Stuff Introduction to Android Exercise 1 2 Schedule Phase 1 Individual Phase: Introduction to basics about

Mehr

Contents. Interaction Flow / Process Flow. Structure Maps. Reference Zone. Wireframes / Mock-Up

Contents. Interaction Flow / Process Flow. Structure Maps. Reference Zone. Wireframes / Mock-Up Contents 5d 5e 5f 5g Interaction Flow / Process Flow Structure Maps Reference Zone Wireframes / Mock-Up 5d Interaction Flow (Frontend, sichtbar) / Process Flow (Backend, nicht sichtbar) Flow Chart: A Flowchart

Mehr

Level 2 German, 2016

Level 2 German, 2016 91126 911260 2SUPERVISOR S Level 2 German, 2016 91126 Demonstrate understanding of a variety of written and / or visual German texts on familiar matters 2.00 p.m. Tuesday 29 November 2016 Credits: Five

Mehr

Ressourcenmanagement in Netzwerken SS06 Vorl. 12,

Ressourcenmanagement in Netzwerken SS06 Vorl. 12, Ressourcenmanagement in Netzwerken SS06 Vorl. 12, 30.6.06 Friedhelm Meyer auf der Heide Name hinzufügen 1 Prüfungstermine Dienstag, 18.7. Montag, 21. 8. und Freitag, 22.9. Bitte melden sie sich bis zum

Mehr

p^db=`oj===pìééçêíáåñçêã~íáçå=

p^db=`oj===pìééçêíáåñçêã~íáçå= p^db=`oj===pìééçêíáåñçêã~íáçå= How to Disable User Account Control (UAC) in Windows Vista You are attempting to install or uninstall ACT! when Windows does not allow you access to needed files or folders.

Mehr

Group and Session Management for Collaborative Applications

Group and Session Management for Collaborative Applications Diss. ETH No. 12075 Group and Session Management for Collaborative Applications A dissertation submitted to the SWISS FEDERAL INSTITUTE OF TECHNOLOGY ZÜRICH for the degree of Doctor of Technical Seiences

Mehr

Technische Universität Kaiserslautern Lehrstuhl für Virtuelle Produktentwicklung

Technische Universität Kaiserslautern Lehrstuhl für Virtuelle Produktentwicklung functions in SysML 2.0 La Jolla, 22.05.2014 12/10/2015 Technische Universität Kaiserslautern Lehrstuhl für Virtuelle Produktentwicklung Dipl. Wirtsch.-Ing. Christian Muggeo Dipl. Wirtsch.-Ing. Michael

Mehr

GridMate The Grid Matlab Extension

GridMate The Grid Matlab Extension GridMate The Grid Matlab Extension Forschungszentrum Karlsruhe, Institute for Data Processing and Electronics T. Jejkal, R. Stotzka, M. Sutter, H. Gemmeke 1 What is the Motivation? Graphical development

Mehr

Die "Badstuben" im Fuggerhaus zu Augsburg

Die Badstuben im Fuggerhaus zu Augsburg Die "Badstuben" im Fuggerhaus zu Augsburg Jürgen Pursche, Eberhard Wendler Bernt von Hagen Click here if your download doesn"t start automatically Die "Badstuben" im Fuggerhaus zu Augsburg Jürgen Pursche,

Mehr

Exercise (Part V) Anastasia Mochalova, Lehrstuhl für ABWL und Wirtschaftsinformatik, Kath. Universität Eichstätt-Ingolstadt 1

Exercise (Part V) Anastasia Mochalova, Lehrstuhl für ABWL und Wirtschaftsinformatik, Kath. Universität Eichstätt-Ingolstadt 1 Exercise (Part V) Notes: The exercise is based on Microsoft Dynamics CRM Online. For all screenshots: Copyright Microsoft Corporation. The sign ## is you personal number to be used in all exercises. All

Mehr

RESI A Natural Language Specification Improver

RESI A Natural Language Specification Improver Universität Karlsruhe (TH) Forschungsuniversität gegründet 1825 RESI A Natural Language Specification Improver Dipl. Inform. Sven J. Körner Torben Brumm Prof. Dr. Walter F. Tichy Institute for Programming

Mehr

Englisch-Grundwortschatz

Englisch-Grundwortschatz Englisch-Grundwortschatz Die 100 am häufigsten verwendeten Wörter also auch so so in in even sogar on an / bei / in like wie / mögen their with but first only and time find you get more its those because

Mehr

FEM Isoparametric Concept

FEM Isoparametric Concept FEM Isoparametric Concept home/lehre/vl-mhs--e/cover_sheet.tex. p./26 Table of contents. Interpolation Functions for the Finite Elements 2. Finite Element Types 3. Geometry 4. Interpolation Approach Function

Mehr

Contract Based Design

Contract Based Design Contract Based Design The Problem + = How can we avoid this in complex software and systems? How do we describe what we want? Requirement or Specification: REQ-1: The two traffic lights must not be green

Mehr