16. Workshop Software-Reengineering und -Evolution (WSRE)


16. Workshop Software-Reengineering und -Evolution (WSRE) of the GI Special Interest Group Software Reengineering (SRE)
6. Workshop Design for Future (DFF) of the GI Working Group Long-Living Software Systems (L2S2)
Bad Honnef, April 2014

The Workshops on Software Reengineering (WSR) at the Physikzentrum Bad Honnef were initiated with the first WSR in 1999 by Jürgen Ebert and Franz Lehner, in order to establish a German-language discussion forum alongside the successful international conferences in the reengineering field (such as WCRE and CSMR). This year, for the first time, we have explicitly added the topic of software evolution to the title, in order to address a broader audience and raise awareness of the workshop; the new acronym is accordingly WSRE.

The goal of the meetings remains for participants to get to know one another and thereby create a direct basis for cooperation, so that the field continues to consolidate and develop. Through the active and growing participation of many researchers and practitioners, the WSRE has established itself as the central reengineering conference in the German-speaking region. It continues to be run as a low-cost workshop without a budget of its own. Please help to keep the WSRE successful by pointing interested colleagues and acquaintances to it.

Building on the successful WSR meetings of the first years, the GI Special Interest Group Software Reengineering was founded in 2004 and maintains its own web presence. Since then, the group has organized not only the WSR(E) but also related events on specialized topics. Since 2010, the Working Group Long-Living Software Systems (L2S2), with its Design for Future (DFF) workshops, has also been attached to the reengineering group because of the thematic proximity. Since then, a joint workshop of WSR and DFF has taken place every two years, as it does this year. This combination is intended to foster the exchange between the two communities.

While the DFF focuses on maintainable architectures, the WSRE continues to address the general topic of reengineering in all its facets. The WSRE remains the central conference series of the Special Interest Group Software Reengineering. It offers a wide range of current reengineering topics that serve scientific and practical information needs alike. This year, there are again contributions covering a broad spectrum of reengineering topics. The organizers thank all contributors for their commitment, in particular the presenters and authors. Our thanks also go to the staff of the Physikzentrum Bad Honnef, who, as always, knew how to create a pleasant and trouble-free environment for the workshop.

For the FG SRE: Volker Riediger, Universität Koblenz-Landau; Jochen Quante, Robert Bosch GmbH, Stuttgart; Jens Borchers, Steria Mummert, Hamburg; Jan Jelschen, Universität Oldenburg

For the AK L2S2: Stefan Sauer, s-lab, Universität Paderborn; Benjamin Klatt, FZI Karlsruhe; Thomas P. Ruhroth, TU Dortmund

Quality in Real Time with Teamscale
Nils Göde, Lars Heinemann, Benjamin Hummel, Daniela Steidl
CQSE GmbH, Lichtenbergstr. 8, Garching bei München

Abstract. Existing tools for static quality analysis operate in batch mode. Each run of the analysis takes a certain amount of time, which means that developers are often already occupied with other tasks by the time the results become available. Moreover, because of the separate runs, old and new quality deficits cannot be distinguished, yet this is a basic prerequisite for quality improvement in practice. In this article we present the tool Teamscale, which reliably tracks quality deficits throughout the evolution of the analyzed system. Thanks to its incremental mode of operation, analysis results are available a few seconds after a commit, so that quality can be monitored and controlled in real time.

1 Introduction
A variety of static analysis tools already exist for uncovering quality deficits in software systems, among them ConQAT [5], SonarQube [8], and the Bauhaus Suite [4]. Although these tools offer an extensive selection of analyses, they share two problems. First, the tools are executed in batch mode: in consecutive runs, the complete system is re-analyzed, even if only a few parts have changed. In the time that passes between starting the analysis and the availability of its results, developers often move on to other tasks. As a consequence, the uncovered quality deficits frequently receive no immediate attention in practice, and the probability that such a deficit will be fixed at a later point in time is very low.

The work underlying this article was funded by the German Federal Ministry of Education and Research under grant number 01IS12034A (EvoCon). The responsibility for the content of this publication lies with the authors.

The second major problem is that the separate execution of the analyses for different versions does not allow quality deficits to be tracked reliably over time. One can try, after the fact, to map the results of different analysis runs onto each other, but in most cases this leads to inaccurate or incomplete results because of missing information (e.g. renamed files). Moreover, the mapping takes additional time. For continuous quality improvement, developers must be informed about new problems without appreciable delay after making their changes. It is equally important that a quality deficit is only marked as new if that is actually the case and it is not a long-standing legacy problem. Both the timing problem and the tracking of quality deficits are solved by the tool Teamscale [6, 7].

2 Teamscale
Teamscale is an incremental analysis tool that supports continuous quality monitoring and improvement. Teamscale analyzes a commit within seconds and thus provides real-time feedback on how the latest changes have affected the quality, in particular the maintainability, of the source code. Teamscale stores the complete quality history of the system, making it possible to find out with minimal effort when and how quality deficits were introduced. By employing various heuristics, deficits are tracked even when they move between methods or files [10]. The reliable distinction between new and old quality deficits makes it possible to concentrate on recently introduced deficits. A web interface and IDE plug-ins make the information available to developers and other stakeholders.

Architecture
Teamscale is a client-server application; its architecture is sketched in Figure 1. Teamscale connects directly to the version control system (Subversion, Git, or Microsoft's TFS) or to the file system. It supports, among others, Java, C#, C/C++, JavaScript, and ABAP, as well as projects combining several of these languages.
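The incremental principle described here — caching per-file results and re-analyzing only files touched by a commit — can be illustrated with a minimal sketch. The class and parameter names below are illustrative assumptions, not Teamscale's actual API:

```python
import hashlib

class IncrementalAnalyzer:
    """Sketch: cache per-file findings, recompute only for changed files."""

    def __init__(self, analyze_file):
        self.analyze_file = analyze_file  # maps file content -> analysis result
        self.hashes = {}                  # path -> content hash
        self.findings = {}                # path -> cached analysis result

    def on_commit(self, changed_files):
        """changed_files: dict mapping path -> new content (None = deleted)."""
        for path, content in changed_files.items():
            if content is None:           # file was deleted in this commit
                self.hashes.pop(path, None)
                self.findings.pop(path, None)
                continue
            digest = hashlib.sha1(content.encode()).hexdigest()
            if self.hashes.get(path) != digest:  # re-analyze only real changes
                self.hashes[path] = digest
                self.findings[path] = self.analyze_file(content)
        # all unchanged files keep their cached results
        return self.findings
```

Because unchanged files are served from the cache, the cost of a commit is proportional to the size of the change, not the size of the system — which is what makes results within seconds feasible.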

Figure 1: Teamscale architecture

In addition, information about bugs and change requests can be integrated; among the supported issue trackers are Jira, Redmine, and Bugzilla. The incremental analysis engine builds on the tool ConQAT [5] and is triggered for every commit in the connected version control system. Only the files affected by the changes are re-analyzed, so the results become available within seconds. Because every single commit is analyzed, quality deficits can be tracked precisely along the changes to the code. The results are stored, together with the history of every file, in a NoSQL database such as Apache Cassandra [11] and made available to clients via a REST interface.

Clients
Teamscale includes a JavaScript-based web client that offers different views of the evolution of the system and its quality, in order to serve different roles (e.g. developers or project leads). The views comprise an overview of the commits and their impact on quality, an overview of the code (in whole or in part), a delta view for comparison with an earlier state of the code, and a freely configurable dashboard. In addition, Teamscale provides plug-ins for Eclipse and Visual Studio; quality deficits are annotated on the code in the IDE, so developers can inspect and fix new quality deficits directly in their development environment.

3 Analyses
Teamscale implements a variety of well-known quality analyses. The central analyses are the following.

Structural metrics. Teamscale collects central structural metrics such as file length, method length, and the nesting depth of the code.

Clone detection. Teamscale performs an incremental clone detection to identify redundant code fragments created by copy and paste.

Code anomalies. Teamscale examines the system for code anomalies such as violations of coding guidelines and common defect patterns. In addition, external tools such as FindBugs [1], PMD [2], and StyleCop [3] can be integrated and their results imported.

Architecture conformance. If a suitable architecture specification is available that describes the components of the system and their dependencies, Teamscale can compare it with the implementation to uncover deviations.

Commenting. Teamscale includes an extensive analysis of the comments in the code [9]. Among other things, it checks whether certain guidelines are met (e.g. whether every class is commented) and whether comments are trivial or inconsistent.

4 Evaluation
A first evaluation of Teamscale was conducted as a survey among professional developers at one of our industrial evaluation partners [7]. According to the developers, Teamscale gives them a good overview of the quality of their code and makes it easy to separate current problems from legacy problems. A more comprehensive evaluation is still pending.

5 Summary
Teamscale solves two major problems of existing analysis tools. Thanks to its incremental approach, analysis results are available in real time, so developers are always informed about the effects of their latest changes and can immediately remove newly introduced quality deficits. In addition, Teamscale tracks quality deficits completely throughout the history, which makes it possible to reliably distinguish old from new quality deficits and to determine the cause of problems with minimal effort.

References
[1] FindBugs.
[2] PMD.
[3] StyleCop. https://stylecop.codeplex.com.
[4] Axivion GmbH. Bauhaus Suite.
[5] CQSE GmbH. ConQAT.
[6] CQSE GmbH. Teamscale.
[7] L. Heinemann, B. Hummel, and D. Steidl. Teamscale: Software quality control in real-time. In Proceedings of the 36th International Conference on Software Engineering, 2014. Accepted for publication.
[8] SonarSource S.A. SonarQube.
[9] D. Steidl, B. Hummel, and E. Juergens. Quality analysis of source code comments. In Proceedings of the 21st IEEE International Conference on Program Comprehension, 2013.
[10] D. Steidl, B. Hummel, and E. Juergens. Incremental origin analysis of source code files. In Proceedings of the 11th Working Conference on Mining Software Repositories, 2014. Accepted for publication.
[11] The Apache Software Foundation. Apache Cassandra.

Assessing Third-Party Library Usage in Practice
Veronika Bauer, Technische Universität München
Florian Deissenboeck, Lars Heinemann, CQSE GmbH

Abstract. Modern software systems build on a significant number of external libraries to deliver feature-rich and high-quality software in a cost-efficient and timely manner. As a consequence, these systems contain a considerable amount of third-party code, and external libraries thus have a significant impact on maintenance activities in a project. However, most approaches that assess the maintainability of software systems largely neglect this factor. Hence, risks may remain unidentified, threatening the ability to effectively evolve the system in the future. We propose a structured approach to assess the third-party library usage in software projects and identify potential problems. Industrial experience strongly influenced our approach, which we designed in a lightweight way to enable easy adoption in practice.

1 Introduction
A plethora of external software libraries form a significant part of modern software systems [3]. Consequently, external libraries and their usage have a significant impact on the maintenance of the software that includes them. Unfortunately, third-party libraries are often neglected in quality assessments of software, leading to unidentified risks for the future evolution of the software. Based on industry needs, we propose a structured approach for the systematic assessment of third-party library usage in software projects. The approach is supported by a comprehensive assessment model relating key characteristics of software library usage to development activities. The model defines how different aspects of library usage influence the activities and thus makes it possible to assess if, and to what extent, the usage of third-party libraries impacts the development activities of a given project. Furthermore, we provide guidance for executing the assessment in practice, including tool support.

2 Assessment model
The proposed assessment model is inspired by activity-based quality models [2]. The model contains entities, the objects we observe in the real world, and attributes, the properties that an entity possesses [4]. Entities are structured in a hierarchical manner to foster completeness. The combination of one or more entities and an attribute is called a fact; facts are expressed as [Entity ATTRIBUTE]. To express the impact of a fact, the model relates the fact to a development activity. This relation can either be positive, i.e. the fact eases the affected activity, or negative, i.e. the fact impedes the activity. Impacts are expressed as [Entity ATTRIBUTE] -(+/-)-> [Activity]. Each impact is backed by a justification, which provides the rationale for its inclusion in the model. We quantify facts with the three-value ordinal scale {low, medium, high}. To assess the impact on the activities, we use the three-value scale {bad, satisfactory, good}. If the impact is positive, there is a straightforward mapping: low -> bad, medium -> satisfactory, high -> good. If, for example, the fact [Library PREVALENCE] is rated high, the effect on the activity migrate is good, as the impact relation [Library PREVALENCE] -(+)-> [Migrate] is positive: a high prevalence of a library usually gives rise to alternative implementations of the required functionality. If the impact relation is negative, the mapping is reversed: low -> good, medium -> satisfactory, high -> bad. The assessment of a single library thus results in a mapping between the activities and the {bad, satisfactory, good} scale. To aggregate the results, we count the occurrences of each value at the leaf activities. Hence, the assessment of a library finally results in a mapping from {bad, satisfactory, good} to the natural numbers. We do not aggregate the results into a single number.

Activities. With one exception, the activities included in the model are typical for maintenance.
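The mapping and aggregation rules above can be made concrete with a short sketch. The Python representation — rating strings and (rating, positive?) pairs — is an illustrative assumption, not the paper's tooling:

```python
from collections import Counter

# Ordinal scales from the assessment model, index-aligned for the mapping
RATING = {"low": 0, "medium": 1, "high": 2}
EFFECT = ["bad", "satisfactory", "good"]

def effect(fact_rating, positive):
    """Map a fact rating to its effect on an activity.

    Positive impacts map low->bad, medium->satisfactory, high->good;
    negative impacts reverse the mapping (low->good, high->bad)."""
    i = RATING[fact_rating]
    return EFFECT[i] if positive else EFFECT[2 - i]

def assess(impacts):
    """impacts: list of (fact_rating, positive?) pairs for one library.

    Returns the counts of bad/satisfactory/good effects, deliberately
    without collapsing them into a single number."""
    return Counter(effect(rating, positive) for rating, positive in impacts)
```

For instance, a library with a high prevalence (positive impact on migrate), a medium documentation quality (positive impact on understand), and a high number of vulnerabilities (negative impact on protect) would be assessed as `assess([("high", True), ("medium", True), ("high", False)])`, yielding one good, one satisfactory, and one bad effect.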
We consider Modify, Understand, Migrate, Protect, and Distribute.

Metrics. The model quantifies each fact with one or more metrics.¹ As an example, to quantify the extent of vulnerabilities of a library, we measure the number of known critical issues in the bug database of the library. Some of the facts cannot be measured directly, as they depend on many aspects. For instance, the maturity of a library cannot be captured with a single metric but must be judged according to several criteria by an expert. We do not employ an automatic, e.g. threshold-based, mapping from metric values to the {low, medium, high} scale but rely fully on the experts' capabilities. The justification for each impact increases the confirmability of the model and of the assessments based on the model; the complete list of impacts is available online.

¹ The list of metrics, their description, and their assignment to facts are detailed in [1].

3 Tool support
The assessment model includes five metrics that can be automatically determined by static code analyses. Tool support for the assessment is implemented in Java on top of the open source software quality assessment toolkit ConQAT; the complete tool support is available as a ConQAT extension. The current implementation is targeted at analyzing the library usage of Java systems but could be adapted to other programming languages that have a library reuse concept and for which a parser API in Java is available. The analysis requires the source and byte code of the project as well as the included libraries as input; the output is a set of HTML and data files showing the metric values in tabular fashion. The analysis traverses, for each library, the abstract syntax tree (AST) of each class and determines all method calls to external libraries. For each library, it determines the following five metrics: the number of API method calls, the number of distinct API methods called, the percentage of affected classes, the scatteredness of the API calls, and the percentage of API utilization.

The number of total and distinct method calls to a library as well as the percentage of affected classes are aggregated during the AST traversal. The scatteredness metric requires more computation: it expresses the degree of distribution of API calls over the system structure. API calls within one package are considered local. We would expect local calls for libraries with specific functionality, e.g. networking or image rendering, which should be concentrated in small parts of the system. Contrarily, libraries providing cross-cutting functionality such as logging would be expected to be called from a large portion of the system and therefore to exhibit a high scatteredness value. We compute scatteredness as the sum of the distances between all pairs of nodes in the package tree with calls to a specific API; the distance of two nodes in the package tree is given by the sum of the distances from each node to their least common ancestor. It is important to note that, since the scatteredness metric depends on the system structure (i.e. the depth of the package tree), its values cannot be compared in a meaningful way across different software systems. The percentage of API utilization is computed as the fraction of the number of distinct API methods called over the total number of API methods in the library.

4 Assessment process
Our assessment process provides guidance for operationalizing the model when assessing library usage in a specific software project. When assessing a real-world project, the sheer number of libraries requires a way to address the most relevant libraries first. Therefore, the first step of the process structures and ranks the libraries according to their entangledness with the system. This pre-selection directs the effort of the second step, the expert assessment of the libraries: the automated analyses provide the information that can be extracted from the source code, and the expert evaluates the remaining metrics, for which he or she requires detailed knowledge about the project and its domain and needs to research detailed information about the libraries. The last step collects the results in an assessment report, generated from the model, which contains the detailed information for each library in textual and tabular form (see Figure 1 and Table II).

Figure 1: Example of the assessment overview of a library. The library's characteristics sufficiently support the activity modify; however, it incurs more risks than benefits for the activities migrate and distribute.

Table II: Example for assessment aggregation across the activities Modify, Understand, Migrate, Protect, and Distribute; overall, the example library accumulates 5 satisfactory and 7 bad impacts. Legend: G = number of good impacts, S = number of satisfactory impacts, B = number of bad impacts.

5 Case study
To show the applicability of our approach, we performed a case study on a real-world software system of azh Abrechnungs- und IT-Dienstleistungszentrum für Heilberufe GmbH, a customer of CQSE GmbH. The analyzed system is a distributed billing application with a distinct data entry component running on a J2EE application server that is accessed from around 350 fat clients (based on Java Swing). The system's source code comprises about 3.5 MLOC, and the system's files include 87 Java Archive files (JARs). The pre-selection revealed that the entangledness between the system and these 87 libraries differs significantly: for some libraries only one method is called, while for others there are several thousand method calls, indicating their differing importance for the project. The degree of scatteredness also varies significantly. We then executed our assessment approach on the study object, recorded our observations, presented the results to the stakeholders of the company, and qualitatively captured their feedback using the following guiding questions: Does the report contain the central libraries? Does the assessment conform to the stakeholders' intuition? Are important aspects missing in the assessment? Were parts of the assessment result surprising? The results indicate that our approach gives a comprehensive overview of the external library usage of the analyzed system: it outlines which maintenance activities are supported, and to which degree, by the employed libraries. Furthermore, the semi-automated pre-selection allowed a significant reduction of the time required for the expert assessment.

Remarks
This paper presents a condensed version of previous work [1], published at the International Conference on Software Maintenance.

References
[1] V. Bauer, L. Heinemann, and F. Deissenboeck. A Structured Approach to Assess Third-Party Library Usage. In ICSM '12, 2012.
[2] F. Deissenboeck, S. Wagner, M. Pizka, S. Teuchert, and J. Girard. An activity-based quality model for maintainability. In ICSM '07, 2007.
[3] L. Heinemann, F. Deissenboeck, M. Gleirscher, B. Hummel, and M. Irlbeck. On the Extent and Nature of Software Reuse in Open Source Java Projects. In ICSR '11, 2011.
[4] B. Kitchenham, S. Pfleeger, and N. Fenton. Towards a framework for software measurement validation. IEEE Transactions on Software Engineering, 21(12), 1995.
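The scatteredness and API-utilization computations described above admit a compact sketch. Treating package names as dotted paths is an assumption for illustration; the paper's implementation works on the AST:

```python
from itertools import combinations

def lca_depth(a, b):
    """Depth of the least common ancestor of two dotted package paths."""
    pa, pb = a.split("."), b.split(".")
    d = 0
    while d < min(len(pa), len(pb)) and pa[d] == pb[d]:
        d += 1
    return d

def distance(a, b):
    """Package-tree distance: steps from each node up to the common ancestor."""
    d = lca_depth(a, b)
    return (len(a.split(".")) - d) + (len(b.split(".")) - d)

def scatteredness(packages):
    """Sum of pairwise tree distances over all packages calling the API."""
    return sum(distance(a, b) for a, b in combinations(sorted(set(packages)), 2))

def api_utilization(distinct_called, total_api_methods):
    """Fraction of a library's API methods that the system actually calls."""
    return distinct_called / total_api_methods
```

A library called only from `com.app.net` yields a scatteredness of 0, while calls spread over distant packages accumulate large pairwise distances — mirroring the expectation that cross-cutting libraries such as logging score high.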

Quality Measurement Scenarios in Software Migration
Gaurav Pandey, Jan Jelschen, Dilshodbek Kuryazov, Andreas Winter
Carl von Ossietzky Universität, Oldenburg, Germany

Abstract. Legacy systems are migrated to newer technology to keep them maintainable and to meet new requirements. To aid choosing between migration and redevelopment, a quality prognosis for the migrated software, compared with the legacy system, is required. Moreover, as the driving forces behind a migration effort differ, migration tooling has to be tailored according to project-specific needs, to produce a migration result meeting the relevant quality criteria. Available metrics may not all be applicable identically to both legacy and migrated systems, e.g. because of paradigm shifts during migration. To this end, this paper identifies three scenarios for utilizing quality measurement in a migration project.

1 Introduction
Migration, i.e. transferring legacy systems to new environments and technologies without changing their functionality, is a key technique of software evolution [4]. It avoids the cost and risk of developing a new system from scratch and allows the modernization of the system to continue. However, it must be determined whether the conversion leads to a change in internal software quality. To decide between software migration and redevelopment, quality measurement and comparison of the legacy and migrated systems is required. Moreover, a migration project requires an especially tailored toolchain [3]. To choose the tools for carrying out an automatic migration, the quality of the migrated code needs to be assessed against the combination of tools involved. The identification of project-specific quality criteria and corresponding metrics for quality comparison can be achieved with advice from project experts. However, in a language-based migration, e.g. from COBOL to Java, there is a shift from the procedural to the object-oriented paradigm. This can limit the usability of a metric, as its validity and interpretation might not hold on both platforms. For example, metrics calculating object-oriented properties like inheritance or encapsulation can be used on the migrated Java code but not on the COBOL source code. To overcome this, a strategy regarding the utilization and comparison of metrics in a migration is required. To this end, this paper identifies quality measurement scenarios with suitable metrics, enabling quality calculation in different situations. The next two sections explain the Q-MIG project and the measurement scenarios; they are followed by a conclusion.

2 Q-MIG Project
The Q-MIG project (Quality-driven software MIGration)¹ is a joint venture of pro et con Innovative Informatikanwendungen GmbH, Chemnitz, and Carl von Ossietzky University's Software Engineering Group. Q-MIG is aimed at advancing a toolchain for automated software migration [2]. To aid in deciding for or against a migration, selecting a migration strategy, and tailoring the toolchain and individual tools, the toolchain is to be complemented with a quality control center measuring, comparing, and predicting the internal quality of software systems under migration. The project aims at enabling quality-driven decisions on migration strategies and tooling [6]. To achieve this, the Goal/Question/Metric approach [1] is used. The goal is to measure and compare the quality of the software before and after migration, to enable migration decisions and toolchain selection. The questions are the quality criteria on the basis of which the quality assessment and comparison is to be carried out. The Q-MIG project considers internal quality attributes, i.e. it focuses on the quality criteria maintainability and transferability in terms of the ISO quality standard [5]. Moreover, expert advice is taken for selecting and identifying the criteria relevant for software migrations. For example, maintainability-related metrics are important in a project that needs to keep evolving, but not when the migrated system is meant to be an interim solution until a redeveloped system can replace it. Then, to measure the quality criteria, metrics need to be identified. However, a metric that is valid for the legacy code might not be valid for the migrated code, and vice versa. In order to identify the metrics for calculating the quality criteria, the metrics are categorized according to the use case in which they can be utilized. To achieve this, scenarios for quality comparison and toolchain component selection are defined in Section 3.

3 Measurement Scenarios
This section presents the quality measurement scenarios, which utilize the quality metrics according to the properties measured and their applicability to the legacy and migrated platforms.

¹ Q-MIG is funded by the Central Innovation Program SME of the German Federal Ministry of Economics and Technology, BMWi (KF KM3).

8 the legacy and the migrated systems, the third scenario is particularly useful for selecting components of the migration toolchain. While the Q-MIG project focuses on quality measurement of a COBOL to Java migration, the essence of the scenarios presented remains the same for other combinations of platforms. Same Interpretation and Implementation: This scenario facilitates quality comparison of legacy code (COBOL) and migrated code (Java) to help in project planning. It is achieved by utilizing the quality metrics that are valid and have the same implementations and interpretations in both platforms, and hence allowing for direct quality comparison between the systems. For example, Lines of Code, measuring the size of the project, is calculated identically for COBOL and Java (In some cases Lines of Code can be platform specific requiring adaptations like Function Point Analysis). Similarly Number of GOTOs, Comments Percentage, Cyclomatic Complexity (Mc Cabe Metric) and Duplicates Percentage can be calculated for both languages in the same fashion. Same Interpretation Di erent Implementation: In this scenario the metrics that have di erent implementations but same interpretation in legacy and target code are utilized for quality comparison. COBOL and Java codes are di erent in construct and the building blocks. So, certain metrics can have same interpretation but di erent ways of calculation in the platforms. For example, Cohesion is the degree of independence between the building blocks of a system. So, it can be calculated in the COBOL code considering procedures as building blocks, while in the migrated Java code they can be represented by classes. The two calculations can provide comparable metrics, hence enabling quality comparison. Similarly, other metrics can be utilized for quality comparison, that might not have exactly the same implementation for COBOL and Java. 
Some metrics conforming to this scenario are: Halstead's metrics (because they use operators and operands, which differ between the languages), Average Complexity per Unit, and Average Unit Size.

Target-Specific Metrics: In this scenario, metrics that are specific to the target platform Java (and may not be applicable to the COBOL legacy code) are utilized for toolchain selection and improvement. For example, the metric Depth of Inheritance can be calculated for Java, but not for COBOL (procedural languages have no inheritance). Also, the value of such a metric can change when components of the migration toolchain are exchanged or additional reengineering steps are applied. This allows using the metrics to choose a suitable toolchain by analyzing how the quality of the migrated software changes with respect to the chosen components. However, in a one-to-one migration from COBOL to Java that introduces no restructuring, the Depth of Inheritance value would not change with respect to the migration tools, because such a migration does not introduce inheritance in the target code. The source code can, however, be refactored before migration, and an analysis of the metrics against the combination of refactoring tools allows the selection of the components of the refactoring toolchain. This scenario allows the metrics relevant to the Q-MIG project and applicable to Java code to be utilized for selecting the migration and refactoring tools. Here, various object-oriented metrics are used, like Number of Classes, representing the level of abstraction in the code; Attribute Hiding Factor and Method Hiding Factor, which calculate the percentages of hidden attributes and methods, respectively, and are related to modifiability; and Average Number of Methods per Class, which indicates the complexity of the code. Also, the metrics applicable in the previous two scenarios can be used here, as they are applicable to the migrated code; the reverse, however, might not be true.
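For the third scenario, a target-specific metric like Depth of Inheritance can be compared across toolchain configurations. A minimal sketch (the class hierarchies are invented stand-ins for two toolchain outputs):

```python
def depth_of_inheritance(cls, parents):
    """Depth of a class in the inheritance tree; parents maps each
    class to its superclass (None for root classes)."""
    depth = 0
    while parents.get(cls) is not None:
        cls = parents[cls]
        depth += 1
    return depth

# hypothetical outputs of two toolchain configurations for one COBOL program
one_to_one = {"Prog": None}                    # no restructuring: flat classes
restructured = {"Prog": "Base", "Base": None}  # refactoring introduced a hierarchy

print(max(depth_of_inheritance(c, one_to_one) for c in one_to_one))      # 0
print(max(depth_of_inheritance(c, restructured) for c in restructured))  # 1
```

A non-zero depth in the second output signals that the restructuring steps actually introduced object-oriented abstraction, which a one-to-one migration cannot.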
4 Conclusion

This paper identified three scenarios for measuring and comparing the internal quality of software systems under migration, paired with applicable metrics. The scenarios stress the challenge of comparing quality measurements in the context of paradigm shifts, e.g. when migrating from procedural COBOL to object-oriented Java. They distinguish pre-/post-migration comparison, to assess the suitability of migrating, from comparing migration results obtained with different toolchain configurations, to improve the tools and tailor the toolchain to project-specific needs. Further steps in the project include the design and evaluation of a quality model by identifying relevant quality criteria and making them measurable using appropriate metrics, with the scenarios providing an initial structure.

References

[1] V. R. Basili, G. Caldiera, and H. D. Rombach. The goal question metric approach. In Encyclopedia of Software Engineering. Wiley, 1994.
[2] C. Becker and U. Kaiser. Test der semantischen Äquivalenz von Translatoren am Beispiel von CoJaC. Softwaretechnik-Trends, 32(2), 2012.
[3] J. Borchers. Erfahrungen mit dem Einsatz einer Reengineering Factory in einem großen Umstellungsprojekt. HMD, 34(194):77-94, March 1997.
[4] A. Fuhr, A. Winter, U. Erdmenger, T. Horn, U. Kaiser, V. Riediger, and W. Teppe. Model-Driven Software Migration: Process Model, Tool Support and Application. In A. D. Ionita, M. Litoiu, and G. Lewis, editors, Migrating Legacy Applications: Challenges in Service Oriented Architecture and Cloud Computing Environments. IGI Global, Hershey, PA, USA.
[5] ISO/IEC. Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models. Technical report.
[6] J. Jelschen, G. Pandey, and A. Winter. Towards quality-driven software migration. In Proceedings of the 1st Collaborative Workshop on Evolution and Maintenance of Long-Living Systems, 2014.

Semi-automated Decision Making Support for Undocumented Evolutionary Changes

Jan Ladiges, Alexander Fay
Automation Technology Institute, Helmut Schmidt University, Holstenhofweg 85, Hamburg, Germany
{ladiges,

Christopher Haubeck, Winfried Lamersdorf
Distributed Systems and Information Systems, University of Hamburg, Vogt-Kölln-Straße 30, Hamburg, Germany
{haubeck,

1 Introduction

Long-living systems evolve under boundary conditions as diverse as the systems themselves. In the industrial practice of the production automation domain, for example, adaptations and even the initial engineering of control software are often performed without a formalized requirement specification [1]. Nevertheless, operators must decide during operation whether such an undocumented change is consistent with the (informal) specification, since changed behavior can also occur due to unintended side effects of changes or due to other influences (like wear and tear). In addition, the system behavior strongly depends on both the software and the physical plant. Accordingly, approaches are needed to extract requirements from the interdisciplinary system behavior and present them to the operator in a suitable format. The FYPA²C project (Forever Young Production Automation with Active Components) aims to extract behavior related to non-functional requirements (NFRs) by monitoring and analyzing signal traces of production systems. In doing so, the specific boundary conditions of the production automation domain are to be considered.

2 The Evolution Support Process

The assumption of this approach is that the externally measured signal traces of programmable logic controllers (PLCs) provide a basis to capture the NFRs on the system. Fig. 1 shows how low-level data (the signals) can be lifted to high-level NFR-related information. First, the signal traces created during (simulated) usage scenarios are used to automatically generate and adapt dynamic knowledge models.
Such models are, e.g., timed automata learned by the algorithm described by Schneider et al. in [2]. Each model expresses specific aspects of the system and serves as a documentation of the underlying process. An analysis of these models can provide NFR-related properties of the system in order to evaluate the influences of changes. Such properties are, e.g., the throughput rate or the routing flexibility (see [3]).

Figure 1: Process of extracting system properties

Similar work has been done in [4]. Here, automata are generated out of test cases which are, e.g., derived from design models. An invariant analysis allows for extracting functional requirements which can be monitored. However, the FYPA²C approach assumes that no formal models or test cases are present, and it aims at the extraction of NFRs. Since not every I/O signal of a PLC includes information about the needed aspects, a selection of the signals has to be made. Therefore, signals are enriched with semantics. The semantics describe which kind of information is given by the signal. A signal stemming from a sensor which identifies the material of a workpiece (e.g. a capacitive sensor distinguishing workpieces) would get the semantic workpiece identification. Note that enriching signals is a rather simple step compared to creating, e.g., design models. Since a monitoring system cannot decide whether a performed change and its influences on the NFRs are

intended (or at least acceptable), a practical semi-automated evolution support process with a user in the loop is used. At first, an anomaly detection engine detects whenever a behavior is observed that contradicts the knowledge models and, therefore, can indicate an evolutionary change. In the case of timed automata, the anomaly detection method presented in [2] is used. This anomaly is, in a first step, reported to the user. At this point, only the actual anomaly, the context it occurs in, and a limited amount of current properties and probable influences can be reported, since only influences on the already observed scenarios can be considered. Deductions on the overall properties are very restricted at this point. If a decision cannot be made here, the changed behavior is added to the concerned knowledge models in order to evaluate the effects on the system properties in detail. This is done by an analysis based on the extracted scenarios, which are applied on the plant or a simulation. The advantage of these steps is that the operator can be informed based on the overall NFR-related properties of the system. As a reaction, the change can be reverted if unintended or, if it is intended, the adapted scenarios and models can be treated as valid.

Figure 2: Semi-automated evolution support process

If there is no possibility for a proactive determination of the system properties (no simulation available and the system not available for tests), an adaptation of the models during operation is the only remaining option, and just the already observed changes can be evaluated. When an unacceptable influence is observed, the operator can react accordingly. However, the scenarios observed after the change can be compared to the stored ones in order to estimate the completeness of the adapted knowledge models.
To be more precise, consider the following simple example: A conveyor system is responsible for transporting workpieces to a machine located at the end of the conveyor system. Workpieces are detected by light barriers at both ends of all conveyors. A requirement on the throughput rate demands that the transport does not take longer than 60 seconds. A PLC collects the signals stemming from the light barriers, starts the transport when a workpiece reaches the first conveyor, and stops it when the workpiece reaches the machine. The conveyor speed can be parameterized within the PLC program. A timed automaton (as a knowledge model) represents the transportation and is learned from the observed signal traces by the learning algorithm in [2]. The automaton should include only signals related to the transportation. Therefore, all I/O signals of the PLC are enriched with simple semantics, and the learning algorithm is applied only to signals with the given semantic workpiece detection, i.e., all signals stemming from light barriers. Accordingly, an analysis of the automaton enables deducing the transportation times by aggregating the transition times. Due to maintenance, the motors of the conveyors are replaced by motors with a higher slip, resulting in a slower transportation. Unfortunately, the operator did not adapt the parameters in the PLC. During the first run of the plant, the slower transportation is detected as a time anomaly and reported to the operator after the workpiece has passed the first conveyor. The operator can now decide whether the anomaly is intended (or at least acceptable) or not. If he is not able to make this decision, for example due to the high complexity of the conveyor system, he can declare the anomaly as uncertain, and the knowledge model is further adapted during the transportation until a deduction about the fulfillment or violation of the throughput requirement can be made.
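To make the example concrete, here is a minimal stand-in (not the learning algorithm of [2]; the signal names and times are invented) that learns per-transition timing bounds from light-barrier traces, aggregates them into a worst-case transport time, and flags an observation outside the learned bounds as a time anomaly:

```python
def learn_time_bounds(traces):
    """Learn (min, max) duration bounds for each observed transition.
    A trace is a list of (signal, timestamp) pairs."""
    bounds = {}
    for trace in traces:
        for (a, t1), (b, t2) in zip(trace, trace[1:]):
            lo, hi = bounds.get((a, b), (float("inf"), 0.0))
            bounds[(a, b)] = (min(lo, t2 - t1), max(hi, t2 - t1))
    return bounds

# observed traces of workpiece-detection signals (hypothetical)
traces = [
    [("lb1", 0.0), ("lb2", 14.0), ("lb3", 27.0), ("machine", 47.0)],
    [("lb1", 0.0), ("lb2", 12.0), ("lb3", 25.0), ("machine", 44.0)],
]
model = learn_time_bounds(traces)

# throughput property: worst-case transport time, aggregated over transitions
worst_case = sum(hi for (_, hi) in model.values())
print(worst_case <= 60.0)  # True: requirement fulfilled

# after the motor exchange: lb1 -> lb2 now takes 21 s -> time anomaly
lo, hi = model[("lb1", "lb2")]
print(21.0 > hi)  # True: report the anomaly to the operator
```

The real approach works on timed automata rather than flat transition maps, but the principle is the same: the learned model both documents the process and serves as the baseline for anomaly detection.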
If the requirement is violated, the operator can react accordingly by changing the parameters in the PLC code.

References

[1] G. Frey and L. Litz. Formal methods in PLC programming. In Int'l Conf. on Systems, Man, and Cybernetics, vol. 4.
[2] S. Schneider, L. Litz, and M. Danancher. Timed residuals for fault detection and isolation in discrete event systems. In Workshop on Dependable Control of Discrete Systems.
[3] J. Ladiges, C. Haubeck, A. Fay, and W. Lamersdorf. Operationalized Definitions of Non-Functional Requirements on Automated Production Facilities to Measure Evolution Effects with an Automation System. In Int'l Conf. on Emerging Technologies and Factory Automation.
[4] C. Ackermann, R. Cleaveland, S. Huang, A. Ray, C. Shelton, and E. Latronico. Automatic requirement extraction from test cases. In Int'l Conf. on Runtime Verification, 2010.

Checkable Code Decisions to Support Software Evolution

Martin Küster, Klaus Krogmann
FZI Forschungszentrum Informatik, Haid-und-Neu-Str , Karlsruhe, Germany

1 Introduction

For the evolution of software, understanding of the context, i.e. the history and rationale of the existing artifacts, is crucial to avoid ignorant surgery [3], i.e. modifications to the software without understanding its design intent. Existing work on recording architecture decisions has mostly focused on architectural models. We extend this to code models and introduce a catalog of code decisions that can be found in object-oriented systems. With the presented approach, we make it possible to record design decisions that are concerned with the decomposition of the system into interfaces, classes, and references between them, or with how exceptions are handled. Furthermore, we indicate how decisions on the usage of Java frameworks (e.g. for dependency injection) can be recorded. All decision types presented are supplied with OCL constraints to check the validity of a decision based on the linked code model. We hope to solve a problem of all long-lived systems: that late modifications are not in line with the initial design of the system and that decisions are (unconsciously) overruled. The problem is that developers will not check all decisions taken in earlier stages, nor whether the current implementation still complies with them. Automation of the validation of a large set of decisions, as presented in this work, is a key factor for more conscious evolution of software systems.

2 Decision Catalog

We developed an extensive catalog of recurring design decisions in Java-based systems. Some of the decision types are listed in Table 1. It lists the decision types and the associated constraints (in natural language) that are checked by the OCL interpreter. Due to space restrictions, we cannot go into the details of each decision type.
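To give a flavor of what "checkable" means here before detailing the catalog: the following toy sketch (plain Python standing in for the OCL-on-code-model machinery; all names are invented) records a decision with an invariant over a simple code model and validates it:

```python
# toy code model: classes with implemented interfaces and anchor ids
code_model = {
    "Order": {"implements": ["java.io.Externalizable"], "anchor": "d42"},
}

# a MarshallingDecision-like record: linked by anchor id, with an invariant
decision = {
    "id": "d42",
    "invariant": lambda cls: "java.io.Externalizable" in cls["implements"],
}

def check(decision, model):
    """Resolve the anchored class by id, then evaluate the invariant."""
    for cls in model.values():
        if cls["anchor"] == decision["id"]:
            return decision["invariant"](cls)
    return False  # dangling decision: no matching anchor in the code

print(check(decision, code_model))  # True
```

In the actual approach the invariants are OCL constraints over an EMF-based Java code model and the anchors are Java annotations, but the check has the same shape: resolve the link, then evaluate the recorded constraint against the current code.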
Elements of all object-oriented languages, such as class declarations (including generalizations and interface implementations), are covered, as well as member declarations, such as field and method declarations. To show that the approach is not restricted to elementary decisions in object-oriented systems, we give more complex decision types, such as wrapper exception or code clone. Especially code clones can be acceptable if the decision model records the intention of the developer that cloned the code.

Figure 1: MarshallingDecision and related artifacts. Code Decision Metamodel (CDM) elements shaded in grey, Java code model elements shaded in white.

Important framework-specific decision types are left out of the discussion: those for dependency injection (vs. constructor usage) and those for special classes (e.g. Bean classes). They require more complex linkage (not only to Java code, but also to configuration files in XML). The mechanism to state the decision invariant, however, is exactly the same. Fig. 1 gives a model diagram of the decision type MarshallingDecision. The decision states which mechanism is used to marshal a class (standard serialization or hand-written externalization).

3 Automatic Checks of Decisions

We propose a tight integration with models of Java code constructed in textual modeling IDEs. For that, we operate on the code not on a textual level, but on a model level based on EMFText. This enables linkage between decision models and code models. Typical difficulties of models linking into code, esp. dangling references caused by saving and regenerating the code model from the textual representation, are solved by anchoring the decision in the code using Java annotations. The reference is established by comparing the id of the anchor in the code with the id of the decision. This kind of linkage is stable even in the presence of complex code modifications, such as combinations of moving, renaming, or deleting and

re-inserting fragments.

Table 1: Extract of the catalog of discussed code decisions.
Object Creation: objects of the designated type are created only via the defined way.
Inheritance / Abstraction: class extends the indicated class or one of its subclasses / is abstract.
Cardinalities and Order: field is of the respective type (Set, SortedSet, List, Collection).
Composition: container class has a reference to the part class; the part class is instantiated and the reference is set within the constructor of the container class or as part of the static initializer; in the bi-directional case, the part class holds a reference to the container class, too.
Field Initialization: all fields are initialized (only!) as defined.
Marshalling (interface example): marshalled class must implement the specified interface.
Wrapper Exception: class E must extend Exception; methods containing code causing a library exception must throw the user-defined exception and must not throw the library exception.
Code Clones: code was copied from the indicated method according to the clone type; clones may differ no more than defined: exact clones must stay exact, syntactically identical clones may not contain modified fragments.
Utility Class: class must be final, has an (empty) private constructor, and provides only static methods.
Singleton (pattern example): contains a private static final field with a self-reference and a public static synchronized method getting the reference.

The decision types are equipped with OCL constraints. These constraints use the linked code elements to check whether the defined design decision still holds in the current implementation. For example, given the MarshallingDecision from Fig. 1, the OCL will check whether the class referenced by clazz (a derived reference) implements java.io.Externalizable (if this mechanism is chosen).

4 Related Work and Conclusion

The initial ideas of recording decisions during the design of object-oriented systems are from Potts and Bruns [4].
The process of object-oriented analysis is captured in a decision-based methodology by Barnes and Hartrum [1], capturing the argumentation of encapsulation or decomposition. For architectural models of software, the need to collect the set of decisions that led to the architectural design was first pointed out by Jansen and Bosch [2]. In this paper we presented a novel approach to model-based documentation of recurring object-oriented design decisions. We outlined an extract of our catalog of decision types in object-oriented systems. All decisions are equipped with OCL constraints. If applied to existing code, these types make it possible to check whether a defined decision still holds in the current implementation or whether it is violated. Currently, we are re-engineering a commercial financial software system. This real-world case study helps to complete the catalog and evaluate the benefits of the model-based approach, namely checking decisions and recovering rationale, intent, and links to decision drivers during the evolution phase.

References

[1] P. D. Barnes and T. C. Hartrum. A Decision-Based Methodology For Object-Oriented Design. In Proc. IEEE 1989 National Aerospace and Electronics Conference. IEEE Computer Society Press.
[2] A. Jansen and J. Bosch. Software Architecture as a Set of Architectural Design Decisions. In 5th Working IEEE/IFIP Conference on Software Architecture (WICSA'05). IEEE, 2005.
[3] D. L. Parnas. Software Aging. In Proc. 16th International Conference on Software Engineering (ICSE'94), 1994.
[4] C. Potts and G. Bruns. Recording the Reasons for Design Decisions. In Proc. 10th International Conference on Software Engineering (ICSE 1988). IEEE Computer Society Press, 1988.

Guidance for Design Rationale Capture to Support Software Evolution

Mathias Schubanz 1, Andreas Pleuss 2, Howell Jordan 2, Goetz Botterweck 2
1 Brandenburg University of Technology, Cottbus - Senftenberg, Germany
2 Lero - The Irish Software Engineering Research Centre, Limerick, Ireland
{Andreas.Pleuss, Howell.Jordan,

Abstract

Documenting design rationale (DR) helps to preserve knowledge over long periods of time, to diminish software erosion, and to ease maintenance and refactoring. However, the use of DR in practice is still limited. One reason for this is the lack of concrete guidance for capturing DR. This paper provides a first step towards identifying DR questions that can guide DR capturing and discusses required future research.

Introduction

Software continuously evolves. Over time this leads to software erosion, resulting in significant costs when dealing with legacy software. Documenting design rationale (DR) can help developers to deal with the complexity of software maintenance and software evolution [4, 6]. DR reflects the reasoning (i.e., the "Why?") underlying a certain design. It requires designers to explicate their tacit knowledge about the given context, their intentions, and the alternatives considered [1]. On the one hand, this helps to increase software quality and prevent software erosion through the capabilities to 1) enable communication amongst team members [6], 2) support impact analyses [7], and 3) prevent engineers from repeating errors or entering dead-end paths [1]. On the other hand, DR supports refactoring long-living systems to perform the leap towards new platforms or technologies without introducing errors due to missing knowledge about previous decisions. In general, once documented, DR can support software development in many ways, including debugging, verification, development automation, or software modification [4]. This has been confirmed in industrial practice (e.g., [2, 5]).
Problem

Despite its potential benefits, systematic use of DR has not found its way into wider industrial practice. Burge [3] outlines that the lack of industrial application is due to the uncertainty connected to DR usage. There are too many barriers to capturing DR, accompanied by uncertainty about its potential payoff, as DR often unfolds its full potential only late in the software lifecycle. The problem of DR elicitation has been described many times [1, 4, 6]. For instance, engineers might not collect the right information [6]. Based on the statement that DR answers questions [4], this could be due to posing the wrong questions, or none at all. General questions in the literature, such as "Why was a decision made?", are rather unspecific and ambiguous. This can easily lead to over- or underspecified DR and compromise a developer's motivation. A first approach to guide DR capture has been proposed by Bass et al. [1]. They provide general guidelines on how to capture DR, such as "Document the decision, the reason or goal behind it, and the context for making the decision." However, considering those guidelines, general questions (e.g., "Why?") alone are not sufficient to cover all relevant aspects and guide developers. Our goal is to provide better support for software evolution by leveraging the benefits of DR management. Hence, we aim to integrate guidance for DR elicitation into software design and implementation. For this, we aim to identify concrete, specific DR questions that guide engineers in capturing DR and can be used as a basis for building relevant tool support. To the best of our knowledge, concrete DR questions to ask developers have not been investigated in a systematic way yet. Until now, there is only exemplary usage of DR questions in the literature. In this paper we aim to provide a first step by analysing the DR questions that can be found in the literature to date.
For this we perform the following steps: (1) We perform a literature analysis and systematically collect DR questions. (2) We normalize the collected questions by rephrasing them. (3) We structure them in accordance with common decision-making principles. As a result, we suggest a first set of DR questions as a basis towards guiding engineers in capturing DR. In the remainder of this paper we describe this analysis and the resulting set of DR questions, and subsequently discuss the required future work.

Question Elicitation

To derive a set of specific DR questions to support software evolution, we reviewed the existing knowledge in DR-related literature in a systematic way. We collected all questions that we found in the literature, generalized and structured them, and eliminated duplicates. Based on an extensive literature review, we found concrete questions for DR capturing in 19 literature sources, for instance "What does the hardware need to do?", "What other alternatives were considered?", or "How did other people deal with this problem?". This resulted in 150 questions that we collected in a spreadsheet. In the next step, we normalised the questions: sorting the questions reveals the different interrogatives used. Most questions are "how?" (24), "what?" (73), and "why?" (24) questions. The 29 other questions could

be rephrased to start with an interrogative. Based on that, it seemed useful to also consider the other main interrogatives ("who?", "when?", and "where?"), and we added them as generic questions to the overall set.

Table 1: Refined set of questions found in the literature, grouped by model element (Decision, Option, Selected Option, Rejected Option, Action, Add/Change Artefact, Judgement, Consequence, Open Issue, Context, Criterion, Scenario); the response type is given in parentheses.
#1 What is the purpose of the decision? (Text)
#2 What triggered the decision to be taken? (Text)
#3 When will the decision be realized? (Text)
#4 What are the options? (Option[])
#5 What are the actions to be done? (Action[])
#6 What judgements have been made on this option? (Judgement[])
#7 What are the anticipated consequences of this option? (Consequence[])
#8 Who is responsible? (Text)
#9 Why was this alternative selected? (Text)
#10 Why was this alternative not selected? (Text)
#11 What artefacts will be added/changed? (Text/Link)
#12 What other artefacts are related to this addition/change? (Text/Link)
#13 What is the status before the action? (Text/Link)
#14 Why is the new/changed artefact specified in this way? (Text)
#15 Who are the intended users of the new/changed artefact? (Text)
#16 How should the new/changed artefact be used? (Text)
#17 What are the criteria according to which this judgement is made? (Criterion)
#18 Who provided the judgement?
#19 What are the anticipated scenarios in which this consequence may occur? (Scenario[])
#20 What are open issues associated with this consequence? (Open Issue[])
#21 What are risks and conflicts associated with this consequence? (Text)
#22 What needs to be done? (Text)
#23 Who will be responsible? (Text)
#24 When will it need to be addressed? (Text)
#25 What are the current criteria for success? (Criterion[])
#26 What are the intended future scenarios? (Scenario[])
#27 Which stakeholders does this criterion represent? (Text)
#28 What events could trigger this scenario? (Text)
In several iterations, we then rephrased each question in a more generic way using one of the interrogatives and removed redundancies, which resulted in 47 questions. We then further selected, summarized, and rephrased the questions according to the guidelines from [1], resulting in a set of 28 questions, as shown in Table 1. As DR is closely related to decision-making concepts, the resulting questions can be grouped according to them. For instance, some questions refer to available options and others to the consequences of a decision. We structure them in a data model with, e.g., Option and Consequence as entities, and links between them. Table 1 shows the resulting questions, structured by the identified entities. The response type of each question is either text, a link to a development artefact (e.g., a design model), or a reference to another entity. The resulting structure could be implemented as a tool-supported data model or metamodel.

Research Agenda

Besides a few successful cases (e.g., [2, 5]), the use of DR in industrial practice is still an exception. One reason is the lack of information about practitioners' concrete needs. First work has been conducted by Burge [3] and Tang et al. [8], each performing a survey on the expectations and needs in relation to DR usage. They found that practitioners consider DR important, but also that there is a lack of methodology and tool support. They also stress the need for more empirical work to close this gap. We think that there is no one-size-fits-all approach. Therefore, Table 1 is just a first step towards overcoming the uncertainty connected to DR usage. As we intend to provide concrete guidance to designers for capturing DR, the concrete 1) application domains, 2) team structures, and 3) employed development processes need to be considered.
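The entity structure behind Table 1 could be sketched as a small data model; the field names below are our own, derived from the questions and response types in the table, not a prescribed schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Consequence:
    description: str
    scenarios: List[str] = field(default_factory=list)    # #19
    open_issues: List[str] = field(default_factory=list)  # #20

@dataclass
class Option:
    description: str
    selected: Optional[bool] = None   # selected vs. rejected option
    rationale: str = ""               # #9 / #10: why (not) selected
    consequences: List[Consequence] = field(default_factory=list)  # #7

@dataclass
class Decision:
    purpose: str                      # #1
    trigger: str = ""                 # #2
    options: List[Option] = field(default_factory=list)   # #4

d = Decision(purpose="Choose marshalling mechanism",
             options=[Option("Externalizable", selected=True,
                             rationale="finer control over the byte format")])
print(d.options[0].selected)  # True
```

Each question in Table 1 then becomes a field to fill in, so tool support can prompt for exactly the missing answers instead of a generic "Why?".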
Thus, to successfully guide designers when capturing DR, further work to elicit the questions to be asked needs to be carried out under consideration of these three dimensions. If this is not done carefully, DR questions will remain on an abstract level; hence, they would merely serve as a guideline for DR capture (similar to [1]) instead of providing concrete guidance. In future work we intend to gain more insights into the process of DR documentation by taking the three dimensions from above into account. Software engineering in regulated domains, including certification-oriented processes, seems to be a promising candidate, as the need for careful documentation is already well established there and first successful industry cases exist (e.g. [2]). Hence, we aim to focus on the automotive domain as a first starting point and intend to create questionnaires and perform interviews with practitioners at corresponding industry partners.

Acknowledgments

This work was supported, in part, by Science Foundation Ireland grant 10/CE/I1855 to Lero - the Irish Software Engineering Research Centre.

References

[1] L. Bass, P. Clements, R. L. Nord, and J. A. Stafford. Capturing and using rationale for a software architecture. In Rationale Management in Software Engineering. Springer.
[2] R. Bracewell, K. Wallace, M. Moss, and D. Knott. Capturing design rationale. Computer-Aided Design, 41(3).
[3] J. E. Burge. Design rationale: Researching under uncertainty. AI EDAM (Artificial Intelligence for Engineering Design, Analysis and Manufacturing), 22(4):311.
[4] J. E. Burge, J. M. Carroll, R. McCall, and I. Mistrík. Rationale-Based Software Engineering. Springer.
[5] E. J. Conklin and K. C. B. Yakemovic. A process-oriented approach to design rationale. Human-Computer Interaction, 6, 1991.
[6] A. H. Dutoit, R. McCall, I. Mistrík, and B. Paech. Rationale Management in Software Engineering: Concepts and Techniques.
In Rationale Management in Software Engineering. Springer.
[7] J. Liu, X. Hu, and H. Jiang. Modeling the evolving design rationale to achieve a shared understanding. In CSCWD, 2012.
[8] A. Tang, M. A. Babar, I. Gorton, and J. Han. A survey of architecture design rationale. Journal of Systems and Software, 79(12), 2006.

Parsing Variant C Code: An Evaluation on Automotive Software

Robert Heumüller
Universität Magdeburg, Magdeburg, Germany

Jochen Quante and Andreas Thums
Robert Bosch GmbH, Corporate Research, Stuttgart, Germany
{Jochen.Quante,

Abstract

Software product lines are often implemented using the C preprocessor. Different features are selected based on macros; the corresponding code is activated or deactivated using #if. Unfortunately, C preprocessor constructs are not parseable in general, since they break the syntactical structure of C code [1]. This imposes a severe limitation on software analyses: they usually cannot be performed on unpreprocessed C code. In this paper, we discuss how and to what extent large parts of the unpreprocessed code can be parsed anyway, and what the results can be used for.

1 Approaches

C preprocessor (Cpp) constructs are not part of the C syntax. Code therefore has to be preprocessed before a C compiler can process it; only preprocessed code conforms to C syntax. In order to perform analyses on unpreprocessed code, this code has to be made parseable first. Several approaches have been proposed for that:

Extending a C parser. Preprocessor constructs are added at certain points in the syntax. This requires that these constructs are placed in a way compatible with the C syntax. However, preprocessor constructs can be added anywhere, so this approach cannot cover all cases [1].

Extending a preprocessor parser. The C snippets inside preprocessor conditionals are parsed individually, e.g. using island grammars [4]. This approach is quite limited, because the context is missing, which is often important for decisions during parsing.

Analyzing all variants separately and merging the results. This approach can build on existing analysis tools. However, for a large number of variance points, it is not feasible due to the exponential growth in the number of variants.

Replacing Cpp with a better alternative.
A different language for expressing conditional compilation and macros was, for example, proposed by McCloskey et al. [3]. Such a language can be designed to be better analyzable and to integrate better with C. However, it is a huge effort to change a whole code base to a new preprocessing language.

We chose to base our work on the first approach. We took ANTLR's standard ANSI C grammar and extended it by preprocessor commands in well-formed places. This way, we were already able to process about 90% of our software. In order to further increase the amount of successfully processable files, it was necessary to discover where this approach failed and to come up with a strategy for dealing with these failures. An initial regex-based evaluation indicated that the two main reasons for failures were a) the existence of conditional branches with incomplete syntax units, and b) the use of troublesome macros.

2 Normalization

To be able to deal with incomplete conditional branches, we implemented a pre-preprocessor as proposed by Garrido et al. [1]. The idea is to transform preprocessor constructs that break the C structure into semantically equivalent code that fits into the C structure. The transformation basically adds code to the conditional code until the condition is at an allowed position. Figure 1 shows a typical example of unparseable code and its normalized equivalent. The code is read into a tree that corresponds to the hierarchy of the input's conditional compilation directives. The normalization can then be performed on this tree using a simple fix-point algorithm:

1. Find a Cpp conditional node with incomplete C syntax units in at least one of its branches. Incompleteness is checked based on token black and white lists. For example, a syntactical unit may not start with tokens like else or &&.

2. Copy missing tokens from before/after the conditional into all of the conditional's branches.
This way, some code is duplicated, but the resulting code becomes parseable by the extended parser.

3. Delete the copied tokens at their original location.
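The token-copying idea can be sketched as a toy (a hedged illustration, not the authors' implementation: the incompleteness check is reduced to brace balancing, and only a single conditional with an implicit #else branch is handled, whereas the real pre-preprocessor works on a tree of Cpp conditionals with token black and white lists):

```python
def branch_is_incomplete(tokens):
    """A branch counts as 'incomplete' if its braces do not balance."""
    depth = 0
    for t in tokens:
        if t == "{":
            depth += 1
        elif t == "}":
            depth -= 1
    return depth != 0

def normalize(branch, after):
    """Copy tokens following the conditional into both branches until the
    #ifdef branch forms a complete syntax unit; the copies are removed
    from their original location."""
    then_branch, else_branch = list(branch), []
    while branch_is_incomplete(then_branch) and after:
        tok = after.pop(0)
        then_branch.append(tok)
        else_branch.append(tok)
    return then_branch, else_branch, after

# '#ifdef A  if (cond) {  #endif  foo(); }' -- the brace opened inside the
# conditional is closed outside of it, so the branch is incomplete.
then_b, else_b, rest = normalize(
    ["if", "(", "cond", ")", "{"], ["foo", "(", ")", ";", "}"])
print(then_b)  # ['if', '(', 'cond', ')', '{', 'foo', '(', ')', ';', '}']
print(else_b)  # ['foo', '(', ')', ';', '}']
```

The duplicated tokens in the implicit #else branch are exactly the kind of infeasible, possibly ill-formed code (foo();}) that the pruning step removes again.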

4. Repeat until convergence.

Figure 1: Normalization and pruning example (the same #ifdef-conditional if statement, shown side by side in its original, normalized, and pruned form).

This step introduces a lot of infeasible paths and redundant conditions in the code. For example, the code in Figure 1 contains many lines that the compiler will never see: they are not reachable because of the nested check of the negated condition. Such infeasible paths may even contain syntax errors, like foo();} in the example. Such irrelevant parts are thrown away in a postprocessing step (pruning). It symbolically evaluates the conditions, identifies contradictions and redundancy, and removes the corresponding elements.

3 Macros and User-Defined Types

In unpreprocessed code, macro and type definitions are often not available. They are usually only resolved via included header files, and this inclusion is done by the preprocessor. Therefore, our parser cannot differentiate between macros and user-defined types or functions. Kästner et al. [2] have solved this problem by implementing a partial preprocessor that preprocesses #include and macros but keeps conditional compilation. We decided to use a different approach: we added a further preprocessing step that collects all macro definitions and type declarations from the entire code base. This information is then used by the parser to decide whether an identifier is a macro statement, expression, or call, or whether it is a user-defined type. Additionally, naming conventions are exploited in certain cases.

4 Results

The approach was evaluated on an engine control software of about 1.5 MLOC. It consists of about 6,700 source files and contains about 150 variant switching macros. The following shares of files could be successfully parsed due to the different parts of the approach: 90% could be parsed by simply extending the parser to be able to deal with preprocessor constructs in certain well-formed positions. 4% were gained by providing the parser with precollected macro and type information. 3% were gained by normalization. 1% was gained by adding information about type naming conventions.

In summary, the share of code that can now be parsed could be increased from 90% to 98% at an acceptable cost. This enables meaningful analyses, for example collecting metrics on variance. These can in future be used to come up with improved variance concepts. Another use case is checking whether the product line code complies with the architecture. We also think about transforming #if constructs into corresponding dynamic checks, to allow using static analysis tools like Polyspace on the entire product line at once.

References

[1] A. Garrido and R. Johnson. Analyzing multiple configurations of a C program. In Proc. of 21st Int'l Conf. on Software Maintenance (ICSM), 2005.
[2] C. Kästner, P. G. Giarrusso, and K. Ostermann. Partial preprocessing C code for variability analysis. In Proc. of 5th Workshop on Variability Modeling of Software-Intensive Systems, 2011.
[3] B. McCloskey and E. Brewer. ASTEC: A new approach to refactoring C. In Proc. of 13th Int'l Symp. on Foundations of Software Engineering (ESEC/FSE), pages 21–30, 2005.
[4] L. Moonen. Generating robust parsers using island grammars. In Proc. of 8th Working Conference on Reverse Engineering (WCRE), pages 13–22.

Consolidating Customized Product Copies to Software Product Lines

Benjamin Klatt, Klaus Krogmann, FZI Research Center for Information Technology, Haid-und-Neu-Str., Karlsruhe, Germany
Christian Wende, DevBoost GmbH, Erich-Ponto-Str. 19, Dresden, Germany

1 Introduction

Reusing existing software solutions as the initial point for new projects is a frequent approach in the software business. Copying existing code and adapting it to customer-specific needs allows for flexible and efficient software customization in the short term. But in the long term, a Software Product Line (SPL) approach with a single code base and explicitly managed variability reduces maintenance effort and eases instantiation of new products. However, consolidating custom copies into an SPL afterwards is not trivial and requires a lot of manual effort. For example, identifying relevant differences between customized copies requires reviewing a lot of code. State-of-the-art software difference analysis neither considers characteristics specific to copy-based customizations nor supports further interpretation of the differences found (e.g., relating thousands of low-level code changes). Furthermore, deriving a reasonable variability design requires experience and is not a software developer's everyday task. In this paper, we present our product copy consolidation approach for software developers. It contributes i) a difference analysis adapted for code copy differencing, ii) a variability analysis to identify related differences, and iii) the derivation of a reasonable variability design.

2 Consolidation Process

As illustrated in Figure 1, consolidating customized product copies into a single-code-base SPL encompasses three main steps: Difference Analysis, Variability Design, and the Consolidation Refactoring of the original implementations. These steps are related to typical tasks involved in software maintenance, but adapted to the specific needs of a consolidation. As summarized by Pigoski [2] (p. 6-4), developers spend 40% to 60% of their maintenance effort on program comprehension, i.e., difference analysis in our approach. This is a major part of a consolidation process, but it is also the least supported one. In the following sections, we provide further details on the different steps of the consolidation process.

Acknowledgment: This work was supported by the German Federal Ministry of Education and Research (BMBF), grant No. 01IS13023 A-C.

Figure 1: Consolidation Process (from the original product and its customized copies via Difference Analysis, Variability Design, and Consolidation Refactoring to a Software Product Line).

3 Difference Analysis

We have developed a customized difference analysis approach that is adapted to the needs of product-line consolidation in three directions: respecting code structures, providing strict (Boolean) change classification, and respecting coding guidelines for copy-based customization if available. Today's code comparison solutions do not always respect syntactic code structures. This leads to identified differences that might cut across two methods' bodies. In our approach, we detect differences on extracted syntax models. This allows us to precisely identify changed software elements and to detect relations between them later on. Furthermore, we filter code elements that are not relevant for the software's behavior (e.g., code comments or layout information). However, we strictly detect any changes of elements in the scope and prefer falsely detected changes (they can be ignored later on) over losing behavioral differences. Coding guidelines can include specific rules for code copying. For example, developers might be asked to introduce customer-specific suffixes to code unit names or to introduce extend-relationships to the original code. Since these customization guidelines are vital for aligning different product copies, we also feed them into the difference analysis.
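As a rough sketch of differencing on syntax models rather than raw text (our illustration, not the authors' tooling; Python's ast module stands in here for the paper's extracted syntax models, and all names are ours):

```python
import ast

def syntax_fingerprints(source):
    """Map each top-level function to a layout- and comment-insensitive
    dump of its syntax tree."""
    tree = ast.parse(source)
    return {node.name: ast.dump(node, include_attributes=False)
            for node in tree.body if isinstance(node, ast.FunctionDef)}

def diff_copies(original_src, copy_src):
    """Return names of functions whose syntax model differs between the
    original product and a customized copy."""
    orig = syntax_fingerprints(original_src)
    copy = syntax_fingerprints(copy_src)
    return sorted(name for name in orig.keys() | copy.keys()
                  if orig.get(name) != copy.get(name))

original = "def pay(x):\n    return x * 2\n\ndef log(m):\n    print(m)\n"
# The copy merely reformats log() (no behavioral change) but customizes pay().
copy = "def pay(x):\n    return x * 2 + 1\n\ndef log( m ):\n    print(m)\n"
print(diff_copies(original, copy))  # ['pay']
```

Because the comparison runs on the syntax model, the reformatted log() produces no difference, while the behavioral change in pay() is strictly reported; a falsely reported difference would simply survive into the later, human-reviewed analysis steps.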
4 Variability Analysis

Having detected all differences, it is important to identify those related to each other. Related differences tend to contribute to the same customization and thus might need to be part of the same variant later on. In our approach, we derive a Variation Point Model (VPM) from the differences detected before. The VPM contains variation points (VPs), each referencing a code location containing one of the differences.

At each VP, the code alternatives of the difference are referenced by variant elements. Starting with this fine-grained model, we analyze the VPs to identify related ones and recommend reasonable aggregations. Recommending and applying aggregations is an iterative process that continues until the person responsible for the consolidation is satisfied with the VPs (i.e., the variability design). With each iteration, it is their decision to accept or decline the recommended aggregations. This allows them to consider organizational aspects, such as decisions not to consolidate specific code copies. The variation point relationship analysis itself combines basic analyses, each able to identify a specific type of relationship (e.g., VP location, similar terms used in the code, common modifications, or program dependencies). Based on the identified relationships, reasonable aggregations are recommended. Basic analyses can be individually combined to match project-specific needs (e.g., indicators for code belonging together).

5 Consolidation Refactoring

As a final step, the code copies' implementation must be transformed into a single code base according to the chosen variability design and the selected variability realization techniques. As opposed to traditional refactorings (which do not change the external behavior of software), consolidation refactorings might extend (i.e., change) the external behavior. The underlying goal of consolidation refactoring is to keep each individual variant/product copy functional. However, new functional combinations enabled by introducing variability are considered valid consolidation refactorings. To implement consolidation refactorings, we are working on i) a refactoring method that explicitly distinguishes between introducing variability and restructuring code, and ii) specific refactoring automation to introduce variability mechanisms. The former focuses on guidelines and decision support.
The latter is about novel refactoring specifications using well-known formalization concepts, such as the refactoring patterns described by Fowler et al. [3] or the refactoring role model defined by Reimann et al. [6]. Based on this formalization, we will automate the refactoring specifications to reduce the probability of errors compared to manual refactoring.

6 Existing Consolidation Approaches

SPLs and variability are established research topics nowadays. However, only a few existing approaches target the consolidation of customized code copies into an SPL with a single code base. Rubin et al. [7] have developed a conceptual framework for merging customized product variants in general. They focus on the model level, but their general high-level algorithm matches our approach. In [8], Schütz presents a consolidation process, describes state-of-the-art capabilities, and argues for the need for automation such as we target. In a similar way, others like Alves et al. [1] focus on refactoring existing SPLs, but have also identified the lack of support for consolidating customized product copies and the necessity for automation. Koschke et al. [5] presented an approach for consolidating customized product copies by assigning features to module structures and thus identifying differences between the customized copies. Their approach is complementary to ours and could be used as an additional variability analysis if suitable module descriptions are available.

7 Prototype & Research Context

In our previous work [4], we presented the idea of tool support for evolutionary SPL development. Meanwhile, we are working on the integration with state-of-the-art development environments. Furthermore, in the project KoPL, we refine and enhance the approach for industrial applicability. This encompasses the adaptation of the analysis to be used by software developers in terms of required input and result presentation.
Furthermore, extension points are introduced to support additional types of software artifacts, analyses, and variability mechanisms. Currently, a prototype of the analysis part is already available and has been evaluated with an open source case study based on ArgoUML-SPL and an industrial case study. The refactoring is in a design state and will be addressed later in the project. Lessons learned so far: a clear specification of what the desired SPL characteristics should look like (e.g., realization techniques or quality attributes) improves the approach; we call this an SPL Profile. Furthermore, the first step of understanding is the most crucial one for a consolidation.

References

[1] V. Alves, R. Gheyi, T. Massoni, U. Kulesza, P. Borba, and C. Lucena. Refactoring product lines. In Proceedings of GPCE 2006. ACM.
[2] P. Bourque and R. Dupuis. Guide to the Software Engineering Body of Knowledge. IEEE, 2004.
[3] M. Fowler, K. Beck, J. Brant, and W. Opdyke. Refactoring: Improving the Design of Existing Code. Addison-Wesley Professional, 1999.
[4] B. Klatt and K. Krogmann. Towards Tool-Support for Evolutionary Software Product Line Development. In Proceedings of WSR.
[5] R. Koschke, P. Frenzel, A. P. J. Breu, and K. Angstmann. Extending the reflexion method for consolidating software variants into product lines. Software Quality Journal.
[6] J. Reimann, M. Seifert, and U. Aßmann. On the reuse and recommendation of model refactoring specifications. Software & Systems Modeling, 12(3), 2012.
[7] J. Rubin and M. Chechik. A Framework for Managing Cloned Product Variants. In Proceedings of ICSE 2013. IEEE.
[8] D. Schütz. Variability Reverse Engineering. In Proceedings of EuroPLoP.

Variability Realization Improvement of Software Product Lines

Bo Zhang, Software Engineering Research Group, University of Kaiserslautern, Kaiserslautern, Germany
Martin Becker, Fraunhofer Institute for Experimental Software Engineering (IESE), Kaiserslautern, Germany

Abstract: As a software product line evolves both in space and in time, variability realizations tend to erode in the sense that they become overly complex to understand and maintain. To address this challenge, various tactics are proposed to deal both with eroded variability realizations in the existing product line and with variability realizations that tend to erode in the future. Moreover, a variability improvement process is presented that combines these tactics against realization erosion and can be applied in different scenarios.

1 Introduction

Nowadays, successful software product lines are often developed in an incremental way, in which the variability artifacts evolve both in space and in time. During product line evolution, variability realizations tend to become more and more complex over time. For instance, in variability realizations using conditional compilation, variation points implemented as #ifdef blocks tend to be nested, tangled, and scattered in core code assets [2]. Moreover, fine-grained variability realizations are often insufficiently documented in variability specifications (e.g., a feature model), which makes the specifications untraceable or even inconsistent with their realizations [3]. As a result, it is an increasing challenge to understand and maintain the complex variability realizations in product line evolution, which is known as variability realization erosion [4]. In this paper, four countermeasure tactics are introduced to deal with both variability erosion in the existing product line and variability realizations that tend to erode in the future.
Moreover, a variability realization improvement process is presented that combines these tactics against realization erosion and can be applied in different scenarios. Following this improvement process, we have analyzed the evolution of a large industrial product line (31 versions over four years) and conducted quantitative code measurement [4]. Finally, we have detected six types of erosion symptoms in existing variability realizations and predicted realizations that tend to erode in the future.

2 Variability Improvement Tactics

In order to address the practical problem of variability erosion, different countermeasures can be conducted. Avizienis et al. [1] have introduced four tactics to attain software dependability: fault tolerance, fault removal, fault forecasting, and fault prevention. Similarly, these tactics can also be used for coping with variability erosion, as shown in Table 1. Each tactic should be adopted depending on the product line context and business goals. While the tactics of tolerance and removal deal with erosion in current variability realizations, the tactics of forecasting and prevention target variability realizations that tend to erode as the product line evolves with the current trend.

Table 1. Variability Realization Improvement Tactics

Problem          | Tactic      | Type
Current erosion  | Tolerance   | analytical
Current erosion  | Removal     | reactive
Future erosion   | Forecasting | analytical
Future erosion   | Prevention  | proactive

Since one cause of variability erosion is the lack of sufficient variability documentation, the tactic of tolerance is to understand variability realizations by extracting a variability reflexion model, which documents various variability realization elements as well as their inter-dependencies. Figure 1 shows variability realization elements using conditional compilation, which can be automatically extracted into a hierarchical structure.

Figure 1. Extracted Variability Realization Elements.

The variability reflexion model does not focus on a specific eroded variability code element, but helps to understand fine-grained variability elements, especially for product line maintenance. This tolerance tactic is an analytical approach because it does not change any existing product line artifact. On the contrary, the tactic of removal is to identify and fix eroded elements in existing variability realizations, which is a reactive improvement approach. Besides tackling existing erosion, the tactic of forecasting is to predict future erosion trends and their likely consequences based on the current product line evolution trends (also an analytical approach). If the prediction of future erosion turns out to be non-trivial, the tactic of prevention should be conducted as a proactive approach to avoid erosion and its consequences in the future. While the tactics of tolerance and forecasting are both analytical, the other two tactics (i.e., removal and prevention) need to change and quality-assure product line realizations with additional effort.

3 Variability Improvement Process

Given the four aforementioned tactics, a variability realization improvement process is presented to investigate the variability erosion problem and conduct relevant countermeasures against variability realization erosion. The improvement process contains four activities (Monitor, Analyze, Plan, and Execute), as shown in Figure 2. The four countermeasure tactics are conducted in one or multiple activities.

Figure 2. Variability Realization Improvement Process.

The first activity is Monitor, which extracts a variability reflexion model from variability realizations and product configurations (Tolerance tactic).
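For conditional compilation, the extraction behind the Monitor activity can be sketched roughly as follows (a simplified illustration under our own naming, not the authors' tool; #elif/#else branches and the mapping to product configurations are omitted):

```python
def extract_ifdef_tree(lines):
    """Return a nested list of (condition, children) pairs, one per
    #if*/#endif block, mirroring the hierarchy of variation points."""
    root = []
    stack = []          # parent child-lists of currently open conditionals
    children = root
    for line in lines:
        stripped = line.strip()
        if stripped.startswith(("#ifdef", "#ifndef", "#if")):
            node = (stripped, [])
            children.append(node)
            stack.append(children)
            children = node[1]      # descend into the new conditional
        elif stripped.startswith("#endif"):
            children = stack.pop()  # close the conditional, ascend again
    return root

src = """
#ifdef FEATURE_A
  #ifdef FEATURE_B
  int b;
  #endif
#endif
#ifdef FEATURE_C
int c;
#endif
""".splitlines()
print(extract_ifdef_tree(src))
# [('#ifdef FEATURE_A', [('#ifdef FEATURE_B', [])]), ('#ifdef FEATURE_C', [])]
```

The resulting nesting mirrors the hierarchy of variation points, which the later activities then examine for erosion symptoms such as deeply nested or scattered #ifdef blocks.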
Then, in the activity Analyze, the extracted variability reflexion model is analyzed to identify various realization erosion symptoms (part of the Removal tactic) and to predict future erosion trends (Forecasting tactic). Both activities have been conducted in an industrial case study in our previous work [4]. In the third activity, Plan, countermeasures against those erosion symptoms are designed to either fix eroded variability elements (Removal tactic) or prevent future erosion (Prevention tactic). Finally, in the fourth activity, Execute, the designed countermeasures of erosion removal or prevention are executed. While the activities Monitor and Execute can be fully automated, depending on the variability realization techniques, the activities Analyze and Plan are technique-independent and require domain knowledge. Since the four tactics have different applicability with respect to variability realization improvement, a product line organization can selectively conduct one or multiple tactics for different improvement purposes. As shown in Figure 2, the improvement process begins with the activity Monitor, and the derived variability reflexion model is the basis of all following activities. In other words, the tactic of tolerance conducted in the Monitor activity is a prerequisite for all other tactics. Based on the variability reflexion model, a product line organization can decide to either identify and fix eroded variability elements in existing realizations (Removal tactic) or predict and avoid variability erosion in future realizations (Forecasting and Prevention tactics).

4 Conclusion

This paper introduces a product line improvement process containing four tactics with different application scenarios to deal with variability realization erosion, either at present or in the future.

References

[1] A. Avizienis, J. C. Laprie, B. Randell, and C. Landwehr. "Basic concepts and taxonomy of dependable and secure computing," IEEE Transactions on Dependable and Secure Computing, vol. 1, no. 1, Jan. 2004.
[2] J. Liebig, S. Apel, C. Lengauer, C. Kästner, and M. Schulze. "An analysis of the variability in forty preprocessor-based software product lines," in Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1, ser. ICSE '10. New York, NY, USA: ACM, 2010.
[3] T. Patzke. "Sustainable evolution of product line infrastructure code," Ph.D. dissertation.
[4] B. Zhang, M. Becker, T. Patzke, K. Sierszecki, and J. E. Savolainen. "Variability evolution and erosion in industrial product lines: a case study," in Proceedings of the 17th International Software Product Line Conference, ser. SPLC '13. New York, NY, USA: ACM, 2013.


ReadMe zur Installation der BRICKware for Windows, Version 6.1.2. ReadMe on Installing BRICKware for Windows, Version 6.1.2 ReadMe zur Installation der BRICKware for Windows, Version 6.1.2 Seiten 2-4 ReadMe on Installing BRICKware for Windows, Version 6.1.2 Pages 5/6 BRICKware for Windows ReadMe 1 1 BRICKware for Windows, Version

Mehr

A central repository for gridded data in the MeteoSwiss Data Warehouse

A central repository for gridded data in the MeteoSwiss Data Warehouse A central repository for gridded data in the MeteoSwiss Data Warehouse, Zürich M2: Data Rescue management, quality and homogenization September 16th, 2010 Data Coordination, MeteoSwiss 1 Agenda Short introduction

Mehr

Funktionale Sicherheit ISO 26262 Schwerpunkt Requirements Engineering,

Funktionale Sicherheit ISO 26262 Schwerpunkt Requirements Engineering, Funktionale Sicherheit ISO 26262 Schwerpunkt Requirements Engineering, Manfred Broy Lehrstuhl für Software & Systems Engineering Technische Universität München Institut für Informatik ISO 26262 Functional

Mehr

Ingenics Project Portal

Ingenics Project Portal Version: 00; Status: E Seite: 1/6 This document is drawn to show the functions of the project portal developed by Ingenics AG. To use the portal enter the following URL in your Browser: https://projectportal.ingenics.de

Mehr

DATA ANALYSIS AND REPRESENTATION FOR SOFTWARE SYSTEMS

DATA ANALYSIS AND REPRESENTATION FOR SOFTWARE SYSTEMS DATA ANALYSIS AND REPRESENTATION FOR SOFTWARE SYSTEMS Master Seminar Empirical Software Engineering Anuradha Ganapathi Rathnachalam Institut für Informatik Software & Systems Engineering Agenda Introduction

Mehr

Distributed testing. Demo Video

Distributed testing. Demo Video distributed testing Das intunify Team An der Entwicklung der Testsystem-Software arbeiten wir als Team von Software-Spezialisten und Designern der soft2tec GmbH in Kooperation mit der Universität Osnabrück.

Mehr

Praktikum Entwicklung von Mediensystemen mit ios

Praktikum Entwicklung von Mediensystemen mit ios Praktikum Entwicklung von Mediensystemen mit ios WS 2011 Prof. Dr. Michael Rohs michael.rohs@ifi.lmu.de MHCI Lab, LMU München Today Heuristische Evaluation vorstellen Aktuellen Stand Software Prototyp

Mehr

Wie agil kann Business Analyse sein?

Wie agil kann Business Analyse sein? Wie agil kann Business Analyse sein? Chapter Meeting Michael Leber 2012-01-24 ANECON Software Design und Beratung G.m.b.H. Alser Str. 4/Hof 1 A-1090 Wien Tel.: +43 1 409 58 90 www.anecon.com office@anecon.com

Mehr

Software Engineering verteilter Systeme. Hauptseminar im WS 2011 / 2012

Software Engineering verteilter Systeme. Hauptseminar im WS 2011 / 2012 Software Engineering verteilter Systeme Hauptseminar im WS 2011 / 2012 Model-based Testing(MBT) Christian Saad (1-2 students) Context Models (e.g. State Machines) are used to define a system s behavior

Mehr

Working Sets for the Principle of Least Privilege in Role Based Access Control (RBAC) and Desktop Operating Systems DISSERTATION

Working Sets for the Principle of Least Privilege in Role Based Access Control (RBAC) and Desktop Operating Systems DISSERTATION UNIVERSITÄT JOHANNES KEPLER LINZ JKU Technisch-Naturwissenschaftliche Fakultät Working Sets for the Principle of Least Privilege in Role Based Access Control (RBAC) and Desktop Operating Systems DISSERTATION

Mehr

Instruktionen Mozilla Thunderbird Seite 1

Instruktionen Mozilla Thunderbird Seite 1 Instruktionen Mozilla Thunderbird Seite 1 Instruktionen Mozilla Thunderbird Dieses Handbuch wird für Benutzer geschrieben, die bereits ein E-Mail-Konto zusammenbauen lassen im Mozilla Thunderbird und wird

Mehr

CHAMPIONS Communication and Dissemination

CHAMPIONS Communication and Dissemination CHAMPIONS Communication and Dissemination Europa Programm Center Im Freistaat Thüringen In Trägerschaft des TIAW e. V. 1 CENTRAL EUROPE PROGRAMME CENTRAL EUROPE PROGRAMME -ist als größtes Aufbauprogramm

Mehr

Seminar in Requirements Engineering

Seminar in Requirements Engineering Seminar in Requirements Engineering Vorbesprechung Frühjahrssemester 2010 22. Februar 2010 Prof. Dr. Martin Glinz Dr. Samuel Fricker Eya Ben Charrada Inhalt und Lernziele Software Produktmanagement Ziele,

Mehr

GIPS 2010 Gesamtüberblick. Dr. Stefan J. Illmer Credit Suisse. Seminar der SBVg "GIPS Aperitif" 15. April 2010 Referat von Stefan Illmer

GIPS 2010 Gesamtüberblick. Dr. Stefan J. Illmer Credit Suisse. Seminar der SBVg GIPS Aperitif 15. April 2010 Referat von Stefan Illmer GIPS 2010 Gesamtüberblick Dr. Stefan J. Illmer Credit Suisse Agenda Ein bisschen Historie - GIPS 2010 Fundamentals of Compliance Compliance Statement Seite 3 15.04.2010 Agenda Ein bisschen Historie - GIPS

Mehr

Infrastructure as a Service (IaaS) Solutions for Online Game Service Provision

Infrastructure as a Service (IaaS) Solutions for Online Game Service Provision Infrastructure as a Service (IaaS) Solutions for Online Game Service Provision Zielsetzung: System Verwendung von Cloud-Systemen für das Hosting von online Spielen (IaaS) Reservieren/Buchen von Resources

Mehr

Frequently asked Questions for Kaercher Citrix (apps.kaercher.com)

Frequently asked Questions for Kaercher Citrix (apps.kaercher.com) Frequently asked Questions for Kaercher Citrix (apps.kaercher.com) Inhalt Content Citrix-Anmeldung Login to Citrix Was bedeutet PIN und Token (bei Anmeldungen aus dem Internet)? What does PIN and Token

Mehr

Scrum @FH Biel. Scrum Einführung mit «Electronical Newsletter» FH Biel, 12. Januar 2012. Folie 1 12. Januar 2012. Frank Buchli

Scrum @FH Biel. Scrum Einführung mit «Electronical Newsletter» FH Biel, 12. Januar 2012. Folie 1 12. Januar 2012. Frank Buchli Scrum @FH Biel Scrum Einführung mit «Electronical Newsletter» FH Biel, 12. Januar 2012 Folie 1 12. Januar 2012 Frank Buchli Zu meiner Person Frank Buchli MS in Computer Science, Uni Bern 2003 3 Jahre IT

Mehr

Challenges and solutions for field device integration in design and maintenance tools

Challenges and solutions for field device integration in design and maintenance tools Integrated Engineering Workshop 1 Challenges and solutions for field device integration in design and maintenance tools Christian Kleindienst, Productmanager Processinstrumentation, Siemens Karlsruhe Wartungstools

Mehr

on Software Development Design

on Software Development Design Werner Mellis A Systematic on Software Development Design Folie 1 von 22 How to describe software development? dimensions of software development organizational division of labor coordination process formalization

Mehr

Patentrelevante Aspekte der GPLv2/LGPLv2

Patentrelevante Aspekte der GPLv2/LGPLv2 Patentrelevante Aspekte der GPLv2/LGPLv2 von RA Dr. Till Jaeger OSADL Seminar on Software Patents and Open Source Licensing, Berlin, 6./7. November 2008 Agenda 1. Regelungen der GPLv2 zu Patenten 2. Implizite

Mehr

Implementierung von IEC 61508

Implementierung von IEC 61508 Implementierung von IEC 61508 1 Qualität & Informatik -www.itq.ch Ziele Verständnis für eine mögliche Vorgehensweise mit IEC 61508 schaffen Bewusstes Erkennen und Behandeln bon Opportunitäten unmittelbaren

Mehr

The Single Point Entry Computer for the Dry End

The Single Point Entry Computer for the Dry End The Single Point Entry Computer for the Dry End The master computer system was developed to optimize the production process of a corrugator. All entries are made at the master computer thus error sources

Mehr

EEX Kundeninformation 2007-09-05

EEX Kundeninformation 2007-09-05 EEX Eurex Release 10.0: Dokumentation Windows Server 2003 auf Workstations; Windows Server 2003 Service Pack 2: Information bezüglich Support Sehr geehrte Handelsteilnehmer, Im Rahmen von Eurex Release

Mehr

AS Path-Prepending in the Internet And Its Impact on Routing Decisions

AS Path-Prepending in the Internet And Its Impact on Routing Decisions (SEP) Its Impact on Routing Decisions Zhi Qi ytqz@mytum.de Advisor: Wolfgang Mühlbauer Lehrstuhl für Netzwerkarchitekturen Background Motivation BGP -> core routing protocol BGP relies on policy routing

Mehr

A Requirement-Oriented Data Quality Model and Framework of a Food Composition Database System

A Requirement-Oriented Data Quality Model and Framework of a Food Composition Database System DISS. ETH NO. 20770 A Requirement-Oriented Data Quality Model and Framework of a Food Composition Database System A dissertation submitted to ETH ZURICH for the degree of Doctor of Sciences presented by

Mehr

Employment and Salary Verification in the Internet (PA-PA-US)

Employment and Salary Verification in the Internet (PA-PA-US) Employment and Salary Verification in the Internet (PA-PA-US) HELP.PYUS Release 4.6C Employment and Salary Verification in the Internet (PA-PA-US SAP AG Copyright Copyright 2001 SAP AG. Alle Rechte vorbehalten.

Mehr

Praktikum Entwicklung Mediensysteme (für Master)

Praktikum Entwicklung Mediensysteme (für Master) Praktikum Entwicklung Mediensysteme (für Master) Organisatorisches Today Schedule Organizational Stuff Introduction to Android Exercise 1 2 Schedule Phase 1 Individual Phase: Introduction to basics about

Mehr

Software-Architecture Introduction

Software-Architecture Introduction Software-Architecture Introduction Prof. Dr. Axel Böttcher Summer Term 2011 3. Oktober 2011 Overview 2 hours lecture, 2 hours lab sessions per week. Certificate ( Schein ) is prerequisite for admittanceto

Mehr

IDRT: Unlocking Research Data Sources with ETL for use in a Structured Research Database

IDRT: Unlocking Research Data Sources with ETL for use in a Structured Research Database First European i2b2 Academic User Meeting IDRT: Unlocking Research Data Sources with ETL for use in a Structured Research Database The IDRT Team (in alphabetical order): Christian Bauer (presenter), Benjamin

Mehr

CMMI for Embedded Systems Development

CMMI for Embedded Systems Development CMMI for Embedded Systems Development O.Univ.-Prof. Dipl.-Ing. Dr. Wolfgang Pree Software Engineering Gruppe Leiter des Fachbereichs Informatik cs.uni-salzburg.at Inhalt Projekt-Kontext CMMI FIT-IT-Projekt

Mehr

Prof. Dr. Margit Scholl, Mr. RD Guldner Mr. Coskun, Mr. Yigitbas. Mr. Niemczik, Mr. Koppatz (SuDiLe GbR)

Prof. Dr. Margit Scholl, Mr. RD Guldner Mr. Coskun, Mr. Yigitbas. Mr. Niemczik, Mr. Koppatz (SuDiLe GbR) Prof. Dr. Margit Scholl, Mr. RD Guldner Mr. Coskun, Mr. Yigitbas in cooperation with Mr. Niemczik, Mr. Koppatz (SuDiLe GbR) Our idea: Fachbereich Wirtschaft, Verwaltung und Recht Simple strategies of lifelong

Mehr

PRESS RELEASE. Kundenspezifische Lichtlösungen von MENTOR

PRESS RELEASE. Kundenspezifische Lichtlösungen von MENTOR Kundenspezifische Lichtlösungen von MENTOR Mit Licht Mehrwert schaffen. Immer mehr Designer, Entwicklungsingenieure und Produktverantwortliche erkennen das Potential innovativer Lichtkonzepte für ihre

Mehr

Sprint 1 -> 2 Bridge (20150108)

Sprint 1 -> 2 Bridge (20150108) Sprint 1 WK49-WK50 Prerequisites: MDM4 API documentation MDM4 business object model Eclipse tooling definitions (maven as build) Common Goals: Define MDM API as a valid component and its position MDM API

Mehr

Integration of D-Grid Sites in NGI-DE Monitoring

Integration of D-Grid Sites in NGI-DE Monitoring Integration of D-Grid Sites in NGI-DE Monitoring Steinbuch Centre for Computing Foued Jrad www.kit.edu D-Grid Site Monitoring Status! Prototype D-Grid Site monitoring based on Nagios running on sitemon.d-grid.de

Mehr

USBASIC SAFETY IN NUMBERS

USBASIC SAFETY IN NUMBERS USBASIC SAFETY IN NUMBERS #1.Current Normalisation Ropes Courses and Ropes Course Elements can conform to one or more of the following European Norms: -EN 362 Carabiner Norm -EN 795B Connector Norm -EN

Mehr

Model-based Development of Hybrid-specific ECU Software for a Hybrid Vehicle with Compressed- Natural-Gas Engine

Model-based Development of Hybrid-specific ECU Software for a Hybrid Vehicle with Compressed- Natural-Gas Engine Model-based Development of Hybrid-specific ECU Software for a Hybrid Vehicle with Compressed- Natural-Gas Engine 5. Braunschweiger Symposium 20./21. Februar 2008 Dipl.-Ing. T. Mauk Dr. phil. nat. D. Kraft

Mehr

JONATHAN JONA WISLER WHD.global

JONATHAN JONA WISLER WHD.global JONATHAN WISLER JONATHAN WISLER WHD.global CLOUD IS THE FUTURE By 2014, the personal cloud will replace the personal computer at the center of users' digital lives Gartner CLOUD TYPES SaaS IaaS PaaS

Mehr

Prediction Market, 28th July 2012 Information and Instructions. Prognosemärkte Lehrstuhl für Betriebswirtschaftslehre insbes.

Prediction Market, 28th July 2012 Information and Instructions. Prognosemärkte Lehrstuhl für Betriebswirtschaftslehre insbes. Prediction Market, 28th July 2012 Information and Instructions S. 1 Welcome, and thanks for your participation Sensational prices are waiting for you 1000 Euro in amazon vouchers: The winner has the chance

Mehr

Labour law and Consumer protection principles usage in non-state pension system

Labour law and Consumer protection principles usage in non-state pension system Labour law and Consumer protection principles usage in non-state pension system by Prof. Dr. Heinz-Dietrich Steinmeyer General Remarks In private non state pensions systems usually three actors Employer

Mehr

Extract of the Annotations used for Econ 5080 at the University of Utah, with study questions, akmk.pdf.

Extract of the Annotations used for Econ 5080 at the University of Utah, with study questions, akmk.pdf. 1 The zip archives available at http://www.econ.utah.edu/ ~ ehrbar/l2co.zip or http: //marx.econ.utah.edu/das-kapital/ec5080.zip compiled August 26, 2010 have the following content. (they differ in their

Mehr

KURZANLEITUNG. Firmware-Upgrade: Wie geht das eigentlich?

KURZANLEITUNG. Firmware-Upgrade: Wie geht das eigentlich? KURZANLEITUNG Firmware-Upgrade: Wie geht das eigentlich? Die Firmware ist eine Software, die auf der IP-Kamera installiert ist und alle Funktionen des Gerätes steuert. Nach dem Firmware-Update stehen Ihnen

Mehr

Künstliche Intelligenz

Künstliche Intelligenz Künstliche Intelligenz Data Mining Approaches for Instrusion Detection Espen Jervidalo WS05/06 KI - WS05/06 - Espen Jervidalo 1 Overview Motivation Ziel IDS (Intrusion Detection System) HIDS NIDS Data

Mehr

Is forecast accuracy a good KPI for forecasting in retail? ISF 2010 Roland Martin, SAF AG Simulation, Analysis and Forecasting San Diego, June 2010

Is forecast accuracy a good KPI for forecasting in retail? ISF 2010 Roland Martin, SAF AG Simulation, Analysis and Forecasting San Diego, June 2010 Is forecast accuracy a good KPI for forecasting in retail? ISF 2010 Roland Martin, SAF AG Simulation, Analysis and Forecasting San Diego, June 2010 Situation in Retail and Consequences for Forecasters

Mehr

Lesen Sie die Bedienungs-, Wartungs- und Sicherheitsanleitungen des mit REMUC zu steuernden Gerätes

Lesen Sie die Bedienungs-, Wartungs- und Sicherheitsanleitungen des mit REMUC zu steuernden Gerätes KURZANLEITUNG VORAUSSETZUNGEN Lesen Sie die Bedienungs-, Wartungs- und Sicherheitsanleitungen des mit REMUC zu steuernden Gerätes Überprüfen Sie, dass eine funktionsfähige SIM-Karte mit Datenpaket im REMUC-

Mehr

Ways and methods to secure customer satisfaction at the example of a building subcontractor

Ways and methods to secure customer satisfaction at the example of a building subcontractor Abstract The thesis on hand deals with customer satisfaction at the example of a building subcontractor. Due to the problems in the building branch, it is nowadays necessary to act customer oriented. Customer

Mehr

Possible Solutions for Development of Multilevel Pension System in the Republic of Azerbaijan

Possible Solutions for Development of Multilevel Pension System in the Republic of Azerbaijan Possible Solutions for Development of Multilevel Pension System in the Republic of Azerbaijan by Prof. Dr. Heinz-Dietrich Steinmeyer Introduction Multi-level pension systems Different approaches Different

Mehr

SARA 1. Project Meeting

SARA 1. Project Meeting SARA 1. Project Meeting Energy Concepts, BMS and Monitoring Integration of Simulation Assisted Control Systems for Innovative Energy Devices Prof. Dr. Ursula Eicker Dr. Jürgen Schumacher Dirk Pietruschka,

Mehr

DIGICOMP OPEN TUESDAY AKTUELLE STANDARDS UND TRENDS IN DER AGILEN SOFTWARE ENTWICKLUNG. Michael Palotas 7. April 2015 1 GRIDFUSION

DIGICOMP OPEN TUESDAY AKTUELLE STANDARDS UND TRENDS IN DER AGILEN SOFTWARE ENTWICKLUNG. Michael Palotas 7. April 2015 1 GRIDFUSION DIGICOMP OPEN TUESDAY AKTUELLE STANDARDS UND TRENDS IN DER AGILEN SOFTWARE ENTWICKLUNG Michael Palotas 7. April 2015 1 GRIDFUSION IHR REFERENT Gridfusion Software Solutions Kontakt: Michael Palotas Gerbiweg

Mehr

Introduction to the diploma and master seminar in FSS 2010. Prof. Dr. Armin Heinzl. Sven Scheibmayr

Introduction to the diploma and master seminar in FSS 2010. Prof. Dr. Armin Heinzl. Sven Scheibmayr Contemporary Aspects in Information Systems Introduction to the diploma and master seminar in FSS 2010 Chair of Business Administration and Information Systems Prof. Dr. Armin Heinzl Sven Scheibmayr Objective

Mehr

Security Patterns. Benny Clauss. Sicherheit in der Softwareentwicklung WS 07/08

Security Patterns. Benny Clauss. Sicherheit in der Softwareentwicklung WS 07/08 Security Patterns Benny Clauss Sicherheit in der Softwareentwicklung WS 07/08 Gliederung Pattern Was ist das? Warum Security Pattern? Security Pattern Aufbau Security Pattern Alternative Beispiel Patternsysteme

Mehr

Challenges in Systems Engineering and a Pragmatic Solution Approach

Challenges in Systems Engineering and a Pragmatic Solution Approach Pure Passion. Systems Engineering and a Pragmatic Solution Approach HELVETING Dr. Thomas Stöckli Director Business Unit Systems Engineering Dr. Daniel Hösli Member of the Executive Board 1 Agenda Different

Mehr

Projektrisikomanagement im Corporate Risk Management

Projektrisikomanagement im Corporate Risk Management VERTRAULICH Projektrisikomanagement im Corporate Risk Management Stefan Friesenecker 24. März 2009 Inhaltsverzeichnis Risikokategorien Projekt-Klassifizierung Gestaltungsdimensionen des Projektrisikomanagementes

Mehr

Operational Excellence with Bilfinger Advanced Services Plant management safe and efficient

Operational Excellence with Bilfinger Advanced Services Plant management safe and efficient Bilfinger GreyLogix GmbH Operational Excellence with Bilfinger Advanced Services Plant management safe and efficient Michael Kaiser ACHEMA 2015, Frankfurt am Main 15-19 June 2015 The future manufacturingplant

Mehr

Isabel Arnold CICS Technical Sales Germany Isabel.arnold@de.ibm.com. z/os Explorer. 2014 IBM Corporation

Isabel Arnold CICS Technical Sales Germany Isabel.arnold@de.ibm.com. z/os Explorer. 2014 IBM Corporation Isabel Arnold CICS Technical Sales Germany Isabel.arnold@de.ibm.com z/os Explorer Agenda Introduction and Background Why do you want z/os Explorer? What does z/os Explorer do? z/os Resource Management

Mehr

Mash-Up Personal Learning Environments. Dr. Hendrik Drachsler

Mash-Up Personal Learning Environments. Dr. Hendrik Drachsler Decision Support for Learners in Mash-Up Personal Learning Environments Dr. Hendrik Drachsler Personal Nowadays Environments Blog Reader More Information Providers Social Bookmarking Various Communities

Mehr

The poetry of school.

The poetry of school. International Week 2015 The poetry of school. The pedagogy of transfers and transitions at the Lower Austrian University College of Teacher Education(PH NÖ) Andreas Bieringer In M. Bernard s class, school

Mehr

Vertrauen ist gut. Dr. Florian Deißenböck. BITKOM Arbeitskreis Software Engineering. 8. Oktober 2014. Continuous Quality in Software Engineering

Vertrauen ist gut. Dr. Florian Deißenböck. BITKOM Arbeitskreis Software Engineering. 8. Oktober 2014. Continuous Quality in Software Engineering Vertrauen ist gut Dr. Florian Deißenböck BITKOM Arbeitskreis Software Engineering 8. Oktober 2014 Continuous Quality in Software Engineering Software Engineering Governance is the set of structures, processes

Mehr

Using TerraSAR-X data for mapping of damages in forests caused by the pine sawfly (Dprion pini) Dr. Klaus MARTIN klaus.martin@slu-web.

Using TerraSAR-X data for mapping of damages in forests caused by the pine sawfly (Dprion pini) Dr. Klaus MARTIN klaus.martin@slu-web. Using TerraSAR-X data for mapping of damages in forests caused by the pine sawfly (Dprion pini) Dr. Klaus MARTIN klaus.martin@slu-web.de Damages caused by Diprion pini Endangered Pine Regions in Germany

Mehr

Role Play I: Ms Minor Role Card. Ms Minor, accountant at BIGBOSS Inc.

Role Play I: Ms Minor Role Card. Ms Minor, accountant at BIGBOSS Inc. Role Play I: Ms Minor Role Card Conversation between Ms Boss, CEO of BIGBOSS Inc. and Ms Minor, accountant at BIGBOSS Inc. Ms Boss: Guten Morgen, Frau Minor! Guten Morgen, Herr Boss! Frau Minor, bald steht

Mehr

SECURING PROCESSES FOR OUTSOURCING INTO THE CLOUD

SECURING PROCESSES FOR OUTSOURCING INTO THE CLOUD SECURING PROCESSES FOR OUTSOURCING INTO THE CLOUD Sven Wenzel 1, Christian Wessel 1, Thorsten Humberg 2, Jan Jürjens 1,2 1 2 SecGov, 19.4.2012 Overview Toolsupport: analysis analysis analysis 2 Computing

Mehr

Safer Software Formale Methoden für ISO26262

Safer Software Formale Methoden für ISO26262 Safer Software Formale Methoden für ISO26262 Dr. Stefan Gulan COC Systems Engineering Functional Safety Entwicklung Was Wie Wie genau Anforderungen Design Produkt Seite 3 Entwicklung nach ISO26262 Funktionale

Mehr

PART 3: MODELLING BUSINESS PROCESSES EVENT-DRIVEN PROCESS CHAINS (EPC)

PART 3: MODELLING BUSINESS PROCESSES EVENT-DRIVEN PROCESS CHAINS (EPC) Information Management II / ERP: Microsoft Dynamics NAV 2009 Page 1 of 5 PART 3: MODELLING BUSINESS PROCESSES EVENT-DRIVEN PROCESS CHAINS (EPC) Event-driven Process Chains are, in simple terms, some kind

Mehr

Config & Change Management of Models

Config & Change Management of Models Config & Change Management of Models HOOD GmbH Keltenring 7 82041 Oberhaching Germany Tel: 0049 89 4512 53 0 www.hood-group.com -1- onf 2007 -Config & Change Management of models Speaker HOOD Group Keltenring

Mehr

(Prüfungs-)Aufgaben zum Thema Scheduling

(Prüfungs-)Aufgaben zum Thema Scheduling (Prüfungs-)Aufgaben zum Thema Scheduling 1) Geben Sie die beiden wichtigsten Kriterien bei der Wahl der Größe des Quantums beim Round-Robin-Scheduling an. 2) In welchen Situationen und von welchen (Betriebssystem-)Routinen

Mehr

Universität Dortmund Integrating Knowledge Discovery into Knowledge Management

Universität Dortmund Integrating Knowledge Discovery into Knowledge Management Integrating Knowledge Discovery into Knowledge Management Katharina Morik, Christian Hüppe, Klaus Unterstein Univ. Dortmund LS8 www-ai.cs.uni-dortmund.de Overview Integrating given data into a knowledge

Mehr

Smart Import for supplier projects

Smart Import for supplier projects Release July 2014 Smart Import for supplier Wizard for receiving supplier in the versiondog system Tool for automated import, versioning and Check-In of files edited externally Enhanced user New-look overview

Mehr

IDS Lizenzierung für IDS und HDR. Primärserver IDS Lizenz HDR Lizenz

IDS Lizenzierung für IDS und HDR. Primärserver IDS Lizenz HDR Lizenz IDS Lizenzierung für IDS und HDR Primärserver IDS Lizenz HDR Lizenz Workgroup V7.3x oder V9.x Required Not Available Primärserver Express V10.0 Workgroup V10.0 Enterprise V7.3x, V9.x or V10.0 IDS Lizenz

Mehr

Highlights versiondog 3.1

Highlights versiondog 3.1 Highlights versiondog 3.1 Release June 2014 Smart Import for supplier Wizard for receiving supplier in the versiondog system Tool for automated import, versioning and Check-In of files edited externally

Mehr

Delivering services in a user-focussed way - The new DFN-CERT Portal -

Delivering services in a user-focussed way - The new DFN-CERT Portal - Delivering services in a user-focussed way - The new DFN-CERT Portal - 29th TF-CSIRT Meeting in Hamburg 25. January 2010 Marcus Pattloch (cert@dfn.de) How do we deal with the ever growing workload? 29th

Mehr

Kybernetik Intelligent Agents- Decision Making

Kybernetik Intelligent Agents- Decision Making Kybernetik Intelligent Agents- Decision Making Mohamed Oubbati Institut für Neuroinformatik Tel.: (+49) 731 / 50 24153 mohamed.oubbati@uni-ulm.de 03. 07. 2012 Intelligent Agents Environment Agent Intelligent

Mehr