Tagungsband zur ÖGBMT-Jahrestagung 2014


UMIT - Lecture Notes in Biomedical Computer Science and Mechatronics
Volume 4
University for Health Sciences, Medical Informatics and Technology

Tagungsband zur ÖGBMT-Jahrestagung 2014
Christian Baumgartner, Winfried Mayr (Hrsg.)

ISBN:
Sept. 2014, Hall in Tirol


UMIT - Lecture Notes in Biomedical Computer Science and Mechatronics
Volume 4
Series Editor: Department für Biomedizinische Informatik und Mechatronik


Christian Baumgartner, Winfried Mayr
Herausgeber

Tagungsband zur ÖGBMT-Jahrestagung 2014

Univ.-Prof. DI Dr. Christian Baumgartner
Institut für Elektrotechnik und Biomedizinische Technik
Department für Biomedizinische Informatik und Mechatronik
UMIT - Private Universität für Gesundheitswissenschaften, Medizinische Informatik und Technik
6060 Hall in Tirol, Österreich

Univ.-Prof. DI DDr. Winfried Mayr
Institut für Biomedizinische Technik und Physik
Medizinische Universität Wien
Währinger Gürtel, Wien, Österreich

UMIT - Lecture Notes in Biomedical Computer Science and Mechatronics
ISBN:
Institut für Elektrotechnik und Biomedizinische Technik (IEBE), UMIT, Hall in Tirol
Umschlaggestaltung: IEBE, Theresa Rienmüller
Layout: IEBE, Theresa Rienmüller
Druck und Bindung: druck.at

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks.

Vorwort

Sehr geehrte Tagungsteilnehmerinnen, sehr geehrte Tagungsteilnehmer,

nach zwei Jahren heißen wir Sie wieder zur Jahrestagung der Österreichischen Gesellschaft für Biomedizinische Technik (ÖGBMT) an der Privaten Universität für Gesundheitswissenschaften, Medizinische Informatik und Technik (UMIT) in Hall in Tirol herzlich willkommen. Gemeinsam mit der Standortagentur Tirol haben wir für Sie diese Zwei-Tages-Veranstaltung organisiert. Schwerpunkte der wissenschaftlichen Sitzungen umfassen die Bereiche biomedizinische Bild- und Signalverarbeitung, Bioinformatik, Modellbildung und Simulation des Herzens, Medical Devices und klinische Anwendungen, Health Care Technology u. a. Für die besten Masterarbeiten und Dissertationen werden im Rahmen des Studierendenwettbewerbes Preise von der Fachgesellschaft vergeben, die die hervorragenden Leistungen und die hohe Qualität der verfassten Arbeiten unseres wissenschaftlichen Nachwuchses würdigen.

Unter dem Motto "Faszination Medizin und Technik: Angreifen - Ausprobieren - Staunen - Lernen" werden die Tagungsteilnehmer und die Bevölkerung die Möglichkeit haben, im Rahmen der "Medizintechnik-Schau zum Anfassen" in der Burg Hasegg/Münze Hall Forschungseinrichtungen und Medizintechnikunternehmen und deren jeweilige Forschungsprojekte, Produkte und Dienstleistungen näher kennenzulernen. Für Kinder und Jugendliche finden erstmals "Medtech 4 Kids"-Workshops ("Kinder-Uni") zu spannenden Themen aus der Medizintechnik statt. Dieses Programm wird ergänzt durch FI(M)T-Workshops (Frauen in die Medizintechnik), die speziell an junge Frauen gerichtet sind.

Im Besonderen bedanken wir uns beim Land Tirol und der UMIT für die finanzielle Unterstützung dieser Jahrestagung. Im Namen der ÖGBMT, der Standortagentur Tirol und der UMIT wünschen wir Ihnen somit einen spannenden, abwechslungsreichen und inspirierenden Aufenthalt bei der ÖGBMT-Jahrestagung und dem Tiroler Medizintechnikforum 2014!

Univ.-Prof. Dr. Christian Baumgartner
Univ.-Prof. DDr. Winfried Mayr
Hall in Tirol, am


Inhaltsverzeichnis/Table of Contents

Session 1: Biomedizinische Bild- und Signalverarbeitung

ELASTISCHE CT CONE-BEAM CT REGISTRIERUNG ALS BASIS FÜR DIE BILDBASIERTE ANALYSE DER BESTRAHLUNGSQUALITÄT
P. Raudaschl, R. Schubert, P. Eichberger, P. Lukas

NON-INVASIVE QUANTIFICATION OF MYOCARDIAL BLOOD FLOW FOR THE DIAGNOSIS OF CAD USING DYNAMIC CT
T. Rienmüller, M. Handler, M. Toifl, C. Baumgartner, R. Rienmüller

VALIDATION OF JOINT RESTORATION OF BI-CONTRAST MRI DATA FOR INTENSITY NON-UNIFORMITIES
S. Hadjidemetriou, R. Schubert

QUALITÄTSBETRACHTUNG DER WAVELET- UND RMS ANALYSE IN ABHÄNGIGKEIT DER AUFNAHMEFREQUENZ VON EMG SIGNALEN
L. Wiedemann, T. Haftner, B. Pobatschnig, M. Faulhuber, M. Reichel

GRUNDFREQUENZBESTIMMUNG IN SPRACHSIGNALEN DURCH ADAPTIVE KREUZKORRELATION
M. Staudacher, V. Steixner, A. Griessner, C. Zierhofer

VALIDATION OF A FLEXIBLE OPTIMAL CONTROL APPROACH FOR RF-PULSE-DESIGN INCLUDING RELAXATION EFFECTS AND SAR
C. S. Aigner, C. Clason, A. Rund, R. Stollberger

Session 2: Studierendenwettbewerb Teil 1

VIRTUAL REALITY IN DER TELE-NEUROREHABILITATION
M. Kaindl

ebalance - DEVELOPMENT AND USABILITY EVALUATION OF A COMPUTER GAME FOR REHABILITATIVE TRAINING
J. Flandorfer

ROBOTERUNTERSTÜTZTE MEDIKATION ÄLTERER PERSONEN IM HÄUSLICHEN UMFELD
M. Schweitzer, A. Hoerbst

QUANTITATIVE MEASUREMENT OF LEFT VENTRICULAR MYOCARDIAL PERFUSION BASED ON DYNAMIC CT SCANS
M. Toifl

KRAFTAUFTEILUNG DER QUADRIZEPSMUSKULATUR BEI ISOKINETISCHEM TRAINING AN DER BEINPRESSE - SIMULATIONSSTUDIE MIT OPENSIM
G. Schneider, M. Krenn, J. Cvecka, M. Sedliak, S. Loefler, H. Kern, W. Mayr

Inhaltsverzeichnis/Table of Contents

Session 3: Studierendenwettbewerb Teil 2

COMPLIANCE MONITORING FÜR DAS ELEKTROSTIMULATIONS-TRAINING IN DER HEIMTHERAPIE
M. Hendling, M. Krenn, M. Haller, S. Loefler, H. Kern, W. Mayr

SELEKTIVITÄT DER TRANSKUTANEN ELEKTROSTIMULATION DER LUMBALEN HINTERWURZELN BEI VERÄNDERUNG DER STIMULATIONSHÖHE
A. Toth, M. Krenn, S.M. Danner, U.S. Hofstoetter, K. Minassian, W. Mayr

PREVENTING FALLS: MISSION POSSIBLE! AN ICT APPROACH TO ASSESS FALL RISK IN OLDER PEOPLE
Andreas Ejupi

SIMULATION AND EVALUATION OF THE CRYOABLATION PROCEDURE FOR TREATMENT OPTIMIZATION OF CARDIAC ARRHYTHMIAS
Michael Handler

BIG DATA IM KRANKENHAUS - RAHMENKONZEPT UND ARCHITEKTUR FÜR DIE SEKUNDÄRNUTZUNG KLINISCHER ROUTINEDATEN
W.O. Hackl, E. Ammenwerth

Session 4/5: Minisymposium Modellbildung und Simulation des Herzens

MULTISCALE MULTIPHYSICS MODELING OF TOTAL CARDIAC FUNCTION: FROM BASIC SCIENCE TO CLINICAL APPLICATIONS
C. Augustin, C. Costa, F. Campos, A. Crozier, A. Neic, A.J. Prassl, E. Hofer, G. Plank

INFLUENCE OF VARIATIONS IN THE ANGLE OF DIFFERENT EXCITATION DIRECTIONS IN ISOTROPIC CARDIOMYOCYTE MONOLAYERS
R. Kienast, M. Stöger, M. Handler, F. Hanser, C. Baumgartner

SIMULATING EFFECTS OF INCREASED HEAT TRANSFER SURFACES BETWEEN APPLICATOR TIP AND REFRIGERANT IN CARDIAC CRYOABLATION
M. Handler, G. Fischer, R. Kienast, C. Baumgartner

Inhaltsverzeichnis/Table of Contents

Session 6: Health Care Technology und Biomedizinische Informatik

HYBRID MODELING A NEW PROSPECT FOR HEALTHCARE SYSTEMS SIMULATIONS
W. Siegl, A. Lassnig, J. Schröttner

INTEGRATED CARE IN HEART FAILURE TREATMENT A MODELLING SETUP COMBINING ESTABLISHED CONCEPTS
A. Lassnig, W. Siegl, J. Schröttner

HerzMobil TIROL mhealth TELEMONITORING EINGEBETTET IN EIN HERZINSUFFIZIENZ-NETZWERK
S. Welte, P. Kastner, G. Pölzl, A. von der Heidt, R. Modre-Osprian

A COMBINED APPROACH FOR SIMILARITY SEARCH AND ANALYSIS IN BIOCHEMICAL MOLECULAR DATABASES
M. Popovscaia, C. Baumgartner

INTEGRATION OF NGS DATA AND IMAGES OF TISSUE SECTIONS FOR PERSONALIZED ONCOLOGY
M. Baldauf, A. Dander, M. Sperk, S. Pabinger, Z. Trajanoski

Session 7: Medical Devices und Anwendungen

MRI SAFETY OF DEEP BRAIN STIMULATOR PATIENTS
A. Tilp, N. Leitgeb

EFFECTS OF FINE STRUCTURE STIMULATION ON PITCH PERCEPTION
A. Krenmayr, V. Steixner, R. Schatzer, M. Staudacher, A. Griessner, C. Zierhofer

LABORTESTUNG EINES IMPLANTIERBAREN TELEMETRIESYSTEMS FÜR CHRONISCHE EMG-AUFNAHMEN IM LANGZEITEXPERIMENT
L. Kneisz, E. Unger, H. Lanmüller, W. Mayr

ZUSAMMENLEGUNG VON INSULINGABE UND GLUKOSEMESSUNG IM FETTGEWEBE ZUR VERBESSERUNG DER THERAPIE VON DIABETES
M. Tschaikner, M. Jungklaus, B. Lehki, M. Ellmerer, H. Scharfetter, T. Pieber, W. Regittnig


Biomedizinische Bild- und Signalverarbeitung


ELASTISCHE CT CONE-BEAM CT REGISTRIERUNG ALS BASIS FÜR DIE BILDBASIERTE ANALYSE DER BESTRAHLUNGSQUALITÄT

P. Raudaschl 1, R. Schubert 1, P. Eichberger 2, P. Lukas 2
1 Institut für Biomedizinische Bildanalyse, UMIT, Österreich
2 Universitätsklinik für Strahlentherapie u. Radioonkologie, Medizinische Universität Innsbruck

Abstract: Das Auftreten strahleninduzierter Sekundärmalignome hängt stark von der Bestrahlungsdosis und deren Verteilung ab. Um die geplante Bestrahlung mit der tatsächlich applizierten Bestrahlung vergleichen zu können, müssen Planungs-CT und Verifikations-CBCT aufeinander registriert werden. In dieser Arbeit wird eine dafür entwickelte Registrierungspipeline vorgestellt, die aus mehreren Vorverarbeitungsschritten und einer elastischen Hauptregistrierung besteht.

Keywords: Strahlentherapie, Sekundärmalignom, elastische Registrierung, Mutual Information, CBCT

Einleitung

Krebs ist die zweithäufigste Todesursache in Österreich, und es ist zu erwarten, dass die Zahl der an Krebs erkrankten Menschen in Zukunft ansteigen wird. Deshalb ist die Verbesserung der Krebstherapie von größter Bedeutung. Vor allem auf dem Gebiet der Bestrahlungstherapie gibt es kontinuierlich Weiterentwicklungen. Neuartige Bestrahlungstechnologien wie IMRT und VMAT in Kombination mit einer detaillierten Bestrahlungsplanung ermöglichen eine effektive und zielgerichtete Bestrahlung des Tumors. Der Nachteil dieser neuen Bestrahlungstechnologien ist jedoch, dass ein größeres Volumen des Körpers mit einer minimalen Strahlendosis belastet wird. Verschiedene Studien kamen zum Schluss, dass das Auftreten eines strahleninduzierten Sekundärmalignoms stark von der Bestrahlungsdosis und deren Verteilung abhängig ist [1-4]. Deshalb ist die quantitative und qualitative Analyse der Bestrahlungsqualität sowie die Analyse der Langzeitauswirkungen von Minimaldosisbelastungen das Hauptziel des Oncotyrol-Projekts SEMPER [5]. Im Folgenden wird die dafür entwickelte CT CBCT Registrierungspipeline vorgestellt, die die Basis für die oben angeführten Analysen ist.

Methoden

Die Bestrahlungsqualität wird durch Vergleich von geplanter mit tatsächlich applizierter Bestrahlung analysiert. Dies wird durch Vergleich von Planungs-CT Bilddaten (pct) mit Cone-Beam CT Bilddaten (CBCT), die unmittelbar vor der jeweiligen Bestrahlungssitzung akquiriert werden, und den Bestrahlungsplanungsdaten realisiert. Die Grundlage für diese Analysen ist eine exakte Bildregistrierung zwischen pct und CBCTs. Aufgrund der unterschiedlichen Charakteristika der Bilddaten sind mehrere Vorverarbeitungsschritte nötig:

1. Histogramm-basierte Intensitätsanpassung zwischen pct und CBCT (siehe Abb. 1)
2. Median-Filterung des CBCT zur Unterdrückung von hochfrequentem Bildrauschen
3. Rigide, intensitätsbasierte Vorregistrierung

Abb. 1: Vergleich CBCT (links) u. pct (rechts). Die Intensitätsunterschiede sind deutlich erkennbar.

Nach diesen Vorverarbeitungsschritten wird die Hauptregistrierung zwischen pct und CBCT durchgeführt. Bei der CT CBCT Registrierung werden vorwiegend elastische Bildregistrierungsmethoden angewendet [6, 7]. In dieser Arbeit wurde eine Variante des B-Spline Free Form Deformation Algorithmus adaptiert [8]. Als Metrik wurde die Mutual Information (MI) verwendet. Die Registrierung wurde mit dem Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS-B) Algorithmus optimiert. Die Genauigkeit der Registrierung wurde einerseits visuell bewertet, andererseits auch quantitativ analysiert.
Der Korrelationskoeffizient zwischen rigide und elastisch registriertem pct mit dem CBCT wurde berechnet. Zudem wurden drei verschiedene anatomische Strukturen in den Bilddaten segmentiert, um die Registrierungsgenauigkeit zu beurteilen (Femurkopf, Steißbein, Lendenwirbel L5). Dafür wurde der Dice-Koeffizient als Ähnlichkeitsmaß verwendet. Für die Analysen stand ein Datensatz eines Patienten mit Prostatakarzinom zur Verfügung. Dieser bestand aus einem pct und fünf Verifikations-CBCTs. Für die Registrierung wurden die Bildverarbeitungsframeworks ITK [9] und Plastimatch [10] verwendet. Zur Bearbeitung und Visualisierung der Bilder kamen I-Presp [11] und Slicer [12] zur Anwendung.
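Eine minimale Python-Skizze der beschriebenen Pipeline mit SimpleITK könnte wie folgt aussehen; die Originalarbeit verwendet ITK und Plastimatch, daher sind Dateinamen, Filterradius, B-Spline-Gitterauflösung und Optimierer-Parameter hier frei gewählte Annahmen, keine Originalwerte.

import SimpleITK as sitk

# Bilddaten laden (Dateinamen sind Platzhalter)
pct = sitk.ReadImage("pct.nii.gz", sitk.sitkFloat32)
cbct = sitk.ReadImage("cbct.nii.gz", sitk.sitkFloat32)

# Schritt 1: Histogramm-basierte Intensitätsanpassung pct -> CBCT
pct_m = sitk.HistogramMatching(pct, cbct, numberOfHistogramLevels=256,
                               numberOfMatchPoints=7)

# Schritt 2: Medianfilter gegen hochfrequentes Bildrauschen im CBCT
cbct_f = sitk.Median(cbct, [2, 2, 2])

# Schritt 3: rigide, intensitätsbasierte Vorregistrierung (MI-Metrik)
rig = sitk.ImageRegistrationMethod()
rig.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
rig.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
rig.SetInitialTransform(sitk.CenteredTransformInitializer(
    cbct_f, pct_m, sitk.Euler3DTransform()))
rig.SetInterpolator(sitk.sitkLinear)
t_rigid = rig.Execute(cbct_f, pct_m)

# Hauptregistrierung: B-Spline Free Form Deformation mit Mutual Information,
# optimiert mit L-BFGS-B (Gitterauflösung 8x8x8 ist eine Annahme)
ffd = sitk.ImageRegistrationMethod()
ffd.SetInitialTransform(sitk.BSplineTransformInitializer(cbct_f, [8, 8, 8]),
                        inPlace=True)
ffd.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
ffd.SetMetricMovingInitialTransform(t_rigid)  # rigide Vorregistrierung anwenden
ffd.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                         numberOfIterations=100)
ffd.SetInterpolator(sitk.sitkLinear)
t_elastic = ffd.Execute(cbct_f, pct_m)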

Ergebnisse

In Abb. 2 ist das Ergebnis der rein rigiden und der elastischen Registrierung als Überlagerung zwischen registriertem pct und CBCT beispielhaft zu sehen. Es ist zu erkennen, dass die elastische Registrierung bei externen Konturen, knöchernen Strukturen und Weichteilgewebe bessere Ergebnisse liefert.

Abb. 2: Ergebnis der rigiden (links) und elastischen (rechts) Registrierung als Überlagerung von registriertem pct und CBCT. Rote Einfärbung entspricht dem Ausmaß des Registrierungsfehlers.

Die durchschnittliche Korrelation zwischen den fünf CBCTs und dem elastisch registrierten pct ist (± 0.049). Die Korrelation liegt somit im Bereich der als Referenz verwendeten CBCT-CBCT Selbstkorrelation mit einer 1 mm translationalen Verschiebung in alle Richtungen (0.923 ±, siehe Tab. 1).

Tab. 1: Durchschnittlicher Korrelationskoeffizient (± Standardabweichung) zwischen fünf CBCTs und registriertem pct. (o./m. I. = ohne/mit Intensitätsanpassung)

Registrierung | Fixed | Moving | Korrelation
Nur rigid (o. I.) | CBCT | pct | ±
Nur rigid (m. I.) | CBCT | pct | ±
Rigid + elastisch | CBCT | pct | ±
1 mm Translation | CBCT | CBCT | ±

In Tab. 2 ist der durchschnittliche Dice-Koeffizient von den verschiedenen extrahierten anatomischen Strukturen angeführt. Es ist daraus ersichtlich, dass die elastische Registrierung mit rigider Vorregistrierung bei allen Strukturen den höchsten Dice-Koeffizienten liefert.

Tab. 2: Durchschnittlicher Dice-Koeffizient (± Standardabweichung) verschiedener anatomischer Strukturen aus registriertem pct und fünf CBCTs.

Struktur | Registrierung | Dice-Koeffizient
li. Femurkopf | rigid | ±
Steißbein | rigid | ±
Wirbel L5 | rigid | ±
li. Femurkopf | rigid+elastisch | ±
Steißbein | rigid+elastisch | ±
Wirbel L5 | rigid+elastisch | ±

Diskussion und Ausblick

Die Ergebnisse zeigen, dass die entwickelte Registrierungspipeline eine genaue CT-CBCT Registrierung liefert, vor allem im Vergleich mit einer rein rigiden Registrierung. Es ist zu erwarten, dass durch eine HU-mapping-basierte Intensitätsanpassung eine weitere Erhöhung der Registrierungsgenauigkeit erreicht werden kann. Durch Analysen mit mehreren Datensätzen und weiteren anatomischen Strukturen wird dieser Ansatz untersucht.

Literatur

[1] Trott K.-R., Can we reduce the incidence of secondary primary malignancies occurring after radiotherapy?, Radioth. & Oncology, vol. 91, pp. 1-3, 2009
[2] Tubiana M., Can we reduce the incidence of secondary primary malignancies occurring after radiotherapy? A critical review, Radioth. & Oncology, vol. 91, pp. 4-15, 2009
[3] Brenner D.J., Curtis R.E., Hall E.J., Ron E., Second malignancies in prostate carcinoma patients after radiotherapy compared with surgery, Cancer, vol. 88, pp , 2000
[4] Boice J.D., Jr, Day N.E., Anderson A., et al.: Second cancers following radiation treatment for cervical cancer: An international collaboration among cancer registries, J Natl. Cancer Inst., vol. 74, pp , 1985
[5] SEMPER: SEcondary Malignoma - Prospective Evaluation of the Radiotherapeutic dose distribution as the cause for induction, Oncotyrol, ekte/area-4-hta-and-bioinformatics/245/, 2014
[6] Lawson J.D., Schreibmann E., Jani A.B., Fox T., Quantitative evaluation of a cone-beam computed tomography-planning computed tomography deformable image registration method for adaptive radiation therapy, J Appl. Clin. Med. Phys., vol. 8(4), pp , 2007
[7] Hou J., Guerrero M., Chen W., D'Souza W.D., Deformable planning CT to cone-beam CT image registration in head-and-neck cancer, Med. Phys., vol.
38(4), pp , 2011
[8] Rueckert D., Sonoda L.I., Hayes C., Hill D.L., Leach M.O., Hawkes D.J., Non-rigid registration using free-form deformations: application to breast MR images, IEEE Trans. Med. Imaging, vol. 18(8), pp , 1999
[9] Kitware, Medical Image Segmentation and Registration With ITK - Overview: ITK Registration Methods, onmethodsoverview.pdf, 2014
[10] Plastimatch,
[11] Fritscher K., Development of a software framework for preprocessing and level-set segmentation of medical image data. Institute for Biomedical Image Analysis, UMIT, 2004
[12] Fedorov A. et al.: 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network. Magn. Reson. Imaging, vol. 30(9), pp , November
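Die beiden oben verwendeten Bewertungsmaße lassen sich kompakt nachbilden; die folgende Python-Skizze (NumPy) zeigt Korrelations- und Dice-Koeffizient, wobei Funktions- und Variablennamen frei gewählt sind.

import numpy as np

def korrelationskoeffizient(bild_a, bild_b):
    # Pearson-Korrelation zweier gleich großer Bildvolumina (als Arrays)
    return float(np.corrcoef(bild_a.ravel(), bild_b.ravel())[0, 1])

def dice_koeffizient(maske_a, maske_b):
    # Dice-Ähnlichkeit zweier Binärmasken, z. B. segmentierter Femurköpfe
    a, b = maske_a.astype(bool), maske_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())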

NON-INVASIVE QUANTIFICATION OF MYOCARDIAL BLOOD FLOW FOR THE DIAGNOSIS OF CAD USING DYNAMIC CT

T. Rienmüller 1, M. Handler 1, M. Toifl 1, R. Rienmüller 2, C. Baumgartner 1
1 Institute of Electrical and Biomedical Engineering, UMIT, Hall in Tirol, Austria
2 Bakulev Scientific Center of Cardiovascular Surgery, Moscow, Russia

Abstract: Multislice CT is considered an upcoming technology for the non-invasive quantification of myocardial blood flow in patients with Coronary Artery Disease (CAD). For this purpose, the heart of the patient is imaged in 3D for several heart beats after contrast agent administration. This approach requires the definition of a specific workflow taking into account the examination protocol, medical image processing to register the 4D data sets as well as signal processing of the obtained contrast enhancement-time curves. In addition to a methodology of how a perfusion workflow can be set up, this work outlines challenges on the way to the final quantitative perfusion result.

Keywords: Myocardial Perfusion, 4D CT-scans, Medical Image Processing

Introduction

Hypoperfusion represents a major threat to the human heart. Indeed, insufficient oxygen supply of the myocardial tissue may lead to cellular dysfunction and have life threatening consequences. Thus, quantitative computation of myocardial perfusion (MPF) is an essential prerequisite to provide specific diagnoses and take appropriate therapeutic management (medication, surgery) in patients with CAD. In the literature, different approaches and technologies to estimate MPF are described, such as Magnetic Resonance Imaging (MRI), scintigraphic methods or echocardiography [1]. By the development of fast multi-slice CT scanners, it has become possible to quantitatively measure MPF using CT technology. Static perfusion scans allow for a qualitative evaluation of MPF only and require an elaborate timing protocol in order to robustly assess the extent of a perfusion defect [2]. Quantitative determination of MPF, on the other hand, necessitates the recording of volumetric CT data sets over several heartbeats (4D data). In this context, the workflow from examination protocol/image acquisition to the final quantitative perfusion result comprises several challenges and different aspects that need to be taken into account. ECG-gating, for example, restricts the image acquisition to a specific cardiac phase, which significantly reduces radiation dose and limits motion artifacts. However, for an evaluation on a per voxel basis, spatial misalignments that occur even in the case of highly accurate prospective triggering reduce the accuracy of the quantitative analysis [3]. In this work we propose an entire MPF estimation workflow that is based on a detailed evaluation of the specific tasks in phantom, animal as well as patient studies.

Method

Figure 1 shows the needed steps for the quantitative determination of MPF using dynamic CT scans together with the required tasks.

Figure 1: MPF estimation workflow

Dynamic CT-scans: The examination protocol comprises the application of contrast agent, scanning and image reconstruction and processing. In a first step, we performed a phantom study simulating perfusion measurements over 30 seconds and 12 slices, confirming the general suitability of ECG-triggered scans using a multi-row detector [4]. In a second phantom study, we compared different reconstruction kernels and iterative reconstruction (IR) approaches examining changes in absolute HU values and standard deviations.
Different aspects of contrast agent application (amount, concentration as well as injection speed) were investigated in an animal study [5,6]. The obtained contrast enhancement time (CET) curves (see Fig. 2) were analyzed regarding noise and shape in order to obtain a suitable scan protocol whilst minimizing the number of scans and thus radiation dose for the patient.

Medical Image Processing: 4D data sets put high demands on medical image processing: In the native/low contrast enhanced scans, the separation between myocardial tissue and ventricle is generally not possible. We therefore developed a semi-automatic segmentation process that incorporates temporal variations of the HU values to distinguish different cardiac areas in a preprocessing step and perform the segmentation on the preprocessed data [3]. Even in case of highly accurate ECG-gating and a stable heart beat, spatial misalignments between different time steps occur. For quantitative estimation of MPF on a per voxel basis, the volumetric scans acquired at each heart beat must be registered to each other. The main challenge is the change in contrast enhancement over time.
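As an illustration of the preprocessing idea, the following Python sketch labels cardiac regions by their temporal HU variation; the array layout and both thresholds are assumptions chosen for illustration, not the authors' implementation.

import numpy as np

def rough_region_split(volumes_4d, blood_thr=80.0, myo_thr=15.0):
    # volumes_4d: HU values of shape (time, z, y, x). Voxels in the
    # ventricles vary strongly during bolus passage, myocardium varies
    # moderately, background hardly at all; thresholds are illustrative.
    std_map = volumes_4d.std(axis=0)      # per-voxel temporal variation
    blood_pool = std_map > blood_thr
    myocardium = (std_map > myo_thr) & ~blood_pool
    return blood_pool, myocardium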

Figure 2: HU values with fitted CET curve

We compared purely HU-value based approaches to local phase based approaches (see [7]) in order to overcome this problem.

Model-based MPF estimation: We analyzed different model curves to approximate the raw HU data (see Fig. 2) and implemented different deconvolution methods in order to overcome beam hardening artifacts due to high contrast enhancement in the right ventricle and rather different shapes of the CET curves depending on the health status as well as physiological parameters (heart rate, blood pressure). In a last step, we compared different empirical and model-based approaches for quantitative estimation of MPF (see [1,8]).

Results

Final Perfusion Estimation Approach: After a detailed evaluation of different approaches in all necessary processing stages, we finally propose the following examination protocol for the quantitative estimation of MPF:

Generation of 4D image data sets: Based on the evaluation of CET curves (animal/human studies), we propose to perform at least three scans before the contrast agent bolus enters the left ventricle (LV) in order to obtain a reliable base line (see Fig. 2). The number of needed total scans depends mainly on the heart rate of the patient, which will be registered simultaneously. Generally, it should not be lower than 30 scans in order to robustly estimate the wash-out of contrast media. The amount of contrast media is calculated individually based on the patient's weight in order to minimize the dose and thus beam hardening artifacts. To prevent the bolus from tearing apart, a constant injection rate of 4 ml/s is ensured using an automated injector. Based on comparison of phantom data/animal studies, our perfusion framework relies on iterative CT reconstruction (IR) algorithms, since for perfusion estimation on a per voxel basis, the signal to noise ratio is too low using filtered back projection (FBP) reconstruction only (see Fig. 3).

Medical Image Processing: For the estimation of global MPF, we semi-automatically segment the myocardial tissue based on temporal variations of the HU-values. In order to enable MPF estimation on a per voxel basis and include the borders of the tissue, the images must be well registered. We implemented a local phase based algorithm enabling the registration of images with different contrast enhancement [7].

Model-based MPF: Systems theoretic approaches that compute the transfer function between the CET curve of the LV and the myocardium (Fig. 4) were found to be more reliable [1,8] and allow for an automated MPF estimation.

Figure 3: Boxplots of HU value distribution for different reconstruction modalities. IR methods lead to significant noise reduction.

Figure 4: MPF model

The results of an animal study based on this workflow were compared to previous data obtained by Electron Beam Computed Tomography (EBCT) and show similar results [5].

Conclusion

This work proposes an image and signal processing framework spanning from the image acquisition and examination protocol to the final voxel based quantitative myocardial perfusion estimation. Several tasks that need to be considered during this process were assessed by different phantom and animal studies. In the next step, different models for specific diseases of the human heart will be assessed in more detail.

Acknowledgements

Supported by the Science Fund Tirol (TWF).

References

[1] Toifl, M. (2014). Quantitative Estimation of Left Ventricular Myocardial Perfusion Using Contrast Agent Computed Tomography.
Master's Thesis. UMIT
[2] Ho, K.-T., Chua, K.-C., Klotz, E., & Panknin, C. (2010). Stress and Rest Dynamic Myocardial Perfusion Imaging by Evaluation of Complete Time-Attenuation Curves With Dual-Source CT. JACC: Cardiovascular Imaging, 3 (8),
[3] Isola, A. A., Schmitt, H., van Stevendaal, et al. (2011). Image Registration and Analysis for Quantitative Myocardial Perfusion: Application to Dynamic Circular Cardiac CT. Physics in Medicine and Biology, 56 (18),
[4] Handler, M., Rienmüller, T., Ourednicek, P., et al. (2013). Assessment of HU-value stability in dynamic CT-scans for quantitative estimation of myocardial perfusion. European Congress of Radiology (ECR 2013). Wien.
[5] Makarenko, V., Rienmüller, T., Handler, M., et al. (2013). Feasibility of rotational 256 row CT in measuring myocardial perfusion in anaesthetized dogs. European Congress of Radiology (ECR 2013), EPOS. Wien.
[6] Rienmüller, T., Handler, M., Ourednicek, P., et al. (2013). Contrast agent concentration of mgI/ml allows for quantitative estimation of myocardial perfusion in dogs. European Congress of Radiology (ECR 2013), EPOS. Wien.
[7] Janssens, G., Jacques, L., Orban de Xivry, J., et al. (2011). Diffeomorphic Registration of Images with Variable Contrast Enhancement. International Journal of Biomedical Imaging.
[8] Rienmüller, T., Baumgartner, C., Handler, M., et al. (2013). Quantitative estimation of left ventricular myocardial perfusion based on dynamic CT scans. BMT Graz.
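As one plausible instance of a model curve for the CET data, the following Python sketch fits a gamma-variate function, a shape commonly used for first-pass contrast curves; the model choice, sample values and start parameters are assumptions for illustration, not necessarily the model used in this work.

import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, t0, a, alpha, beta):
    # gamma-variate shape; zero before the bolus arrival time t0
    dt = np.clip(t - t0, 0.0, None)
    return a * dt ** alpha * np.exp(-dt / beta)

t = np.arange(0.0, 30.0, 1.0)                  # one scan per second, 30 s
hu = gamma_variate(t, 5.0, 2.0, 2.5, 3.0) \
     + np.random.normal(0.0, 5.0, t.size)      # synthetic noisy CET samples

p0 = (4.0, 1.0, 2.0, 2.0)                      # initial guess for the fit
params, _ = curve_fit(gamma_variate, t, hu, p0=p0, maxfev=10000)
cet_fit = gamma_variate(t, *params)

Fitting such a smooth model to the raw HU samples (cf. Figure 2) is what makes a subsequent deconvolution against the LV input curve numerically tractable.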

VALIDATION OF JOINT RESTORATION OF BI-CONTRAST MRI DATA FOR INTENSITY NON-UNIFORMITIES

Stathis Hadjidemetriou and Rainer Schubert
Institute of Biomedical Image Analysis (IBIA), UMIT, A-6060 Hall in Tirol, Austria

Abstract: The Radio-Frequency (RF) field in MRI is in practice inhomogeneous and leads to non-biological intensity non-uniformities across an image [1,2]. Moreover, patient imaging includes various contrasts with different non-uniformities. The method presented and evaluated performs a post-acquisition joint restoration of two such images. It restores the marginal and joint statistics of the images. It also enforces other regularity constraints. The effectiveness of the method has been validated with BrainWeb MRI data using the Coefficient of Joint Variation (CJV) between the intensity statistics of regions of different tissues [3].

Keywords: Anatomic MRI, multi-contrast MRI, intensity non-uniformity, image restoration.

Introduction

The analysis of MRI data is hampered by non-biological intensity non-uniformities that stem from the inhomogeneity of the RF field and its interaction with the subject. A patient imaging protocol includes multiple contrasts suffering from different non-uniformities. There have been attempts to calibrate for the RF non-uniformities with parameterized acquisitions [2], which, however, prolong imaging. Several post-acquisition restoration methods have been proposed for images of specific anatomies or statistics [2,4]. A post-acquisition method for joint image restoration preserves the differential structure and enforces smooth non-uniformities [5]. The method presented performs joint restoration of two images of the same anatomy with different contrasts. It uses the auto-co-occurrence and the joint-co-occurrence statistics of two images [6,7]. These statistics are restored with a Wiener filter. Additional data regularities and the valid signal regions of the images are considered. The method is non-parametric over statistics and contrasts of tissues in images. Its effectiveness and accuracy has been demonstrated by validating it with BrainWeb MRI images [3]. This has been achieved with the Coefficient of Joint Variation, CJV, between the intensity statistics of regions of different tissues.

Methods

Data description: The image datasets consist of five pairs of $T_1$ and $T_2$ images from the BrainWeb database [3]. The images are corrupted with simulated non-uniformities of levels B=40%-60%-80%-100% and noise of levels N=3%-5%.

Intensity restoration: The data are two 3D co-registered images $I_i(x)$, i=0,1. They correspond to different assumed multiplicative non-uniformity fields $B_i$, i=0,1, corrupting underlying anatomic images $I_{A,i}$, i=0,1, respectively. Each image is also corrupted by Rayleigh noise, $N_i$, to give $I_i = B_i \circ I_{A,i} + N_i$, i=0,1, where $\circ$ is the voxelwise Hadamard product. The statistics are the intensity co-occurrences $C_{ij}$ of intensities $u_0$ and $u_1$ over a local spherical neighborhood $\Delta x \in D$ of radius $\rho$:

$$C_{ij}(I_i, I_j, u_0, u_1) = \int_x \int_{\Delta x \in D} \mathbf{1}_{I_i(x) = u_0}\, \mathbf{1}_{I_j(x + \Delta x) = u_1}\, d\Delta x\, dx.$$

They give the marginal auto-co-occurrences for i=j and the joint-co-occurrences for i≠j [6,7]. The statistics of $I_{A,i}$ and $B_i$ are assumed independent. Thus, the statistics of their product are the convolution of the statistics of $I_{A,i}$ with the Point Spread Functions (PSFs) corresponding to $B_i$. The latter are non-stationary Gaussian distributions, $G(\sigma)$. The distortions of the auto-co-occurrences are expressed in polar coordinates.
The multiplicative spatial non-uniformity results in a standard deviation of the radial distortion that is linearly proportional to the radial coordinate, $\sigma_r \propto r$. The angular PSF $G(\sigma_{\varphi_i})$ is largest along the diagonal and zero along the axes. The standard deviation of the distortion of the joint-co-occurrences is proportional to the Cartesian coordinates, $\sigma_{u_i} \propto u_i$. The restoration separates $I_i$ into the products of $B_i$ and $I_{A,i}$. It is iterative, t, with coordinate descent along the regularity constraints. An iteration provides $B_{i,t}$ and $I_{A,i,t}$, i=0,1. The co-occurrences are restored with non-stationary Wiener filtering, $f = G/(|G|^2 + \nu)$. The restored statistics are forced back to the images. The auto-co-occurrence statistics provide restored intensities in polar form $(r_i', \varphi_i')$ and restoration matrix $R^s_{i,t}(r, \varphi)$ as:

$$(r_i', \varphi_i') = \frac{\big(C_{ii}\, f_i(\sigma_{r_i}, \sigma_{\varphi_i})\big) * (r_i, \varphi_i)}{\big(C_{ii}\, f_i(\sigma_{r_i}, \sigma_{\varphi_i})\big) * (1, 1)}, \qquad R^s_{i,t}(r, \varphi) = \frac{r_i'}{r_i},$$

where * is convolution. The restoration of the joint-co-occurrences provides updated coordinates $(u_0', u_1') = (C_{01} f_{01}(\sigma_{u_0}, \sigma_{u_1}) * (u_0, u_1)) / (C_{01} f_{01}(\sigma_{u_0}, \sigma_{u_1}) * (1, 1))$ and restoration matrices $R^b_{i,t}(u_0, u_1) = u_i'/u_i$. The three restoration matrices are back-projected to the images to provide an initial estimate of the restorations

$$W_{i,t}(x) = \tfrac{1}{2}\, \mathbb{E}_{\Delta x \in D}\big( R^s_{i,t}(I_i(x), I_i(x + \Delta x)) + R^b_{i,t}(I_0(x), I_1(x + \Delta x)) \big), \quad i = 0, 1.$$
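The following Python sketch illustrates how a joint co-occurrence histogram of two co-registered images can be accumulated over a small offset neighborhood; the binning, the neighborhood shape (Chebyshev instead of spherical) and the sizes are simplifying assumptions, not the paper's implementation.

import numpy as np

def joint_cooccurrences(img0, img1, bins=64, radius=1):
    # Joint co-occurrence histogram C_01 of two co-registered 2D images:
    # counts intensity pairs (img0[x], img1[x+dx]) for all offsets dx within
    # a Chebyshev radius, a simplified stand-in for the spherical
    # neighborhood D of radius rho used above.
    lo = min(img0.min(), img1.min())
    hi = max(img0.max(), img1.max())
    edges = np.linspace(lo, hi, bins)
    q0 = np.clip(np.digitize(img0, edges) - 1, 0, bins - 1)
    q1 = np.clip(np.digitize(img1, edges) - 1, 0, bins - 1)
    C = np.zeros((bins, bins), dtype=np.int64)
    H, W = q0.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            a = q0[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
            b = q1[max(0, dy):H + min(0, dy), max(0, dx):W + min(0, dx)]
            np.add.at(C, (a.ravel(), b.ravel()), 1)
    return C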

The estimates $W_{i,t}$ are Gaussian filtered spatially with $G(\sigma_s)$ to enforce smooth non-uniformities, $W_{i,t} = W_{i,t} * G(\sigma_s)$, i=0,1. The restored images are also adjusted to preserve their $L_1$ norm, $\|I_{i,t}\|_1 = \|I_i\|_1$, i=0,1. The iterations end when the entropy of $C_{ii}$, $S_{i,t}$, i=0,1, and of $C_{01}$, $S_{01,t}$, satisfy $(S_{i,t+1} - S_{i,t})/S_{i,t} > 0.2$, for i=0,1,01, or at a maximum number $t_{max} = 100$.

Implementation: The method is in C++. The $B_i$ exist over the signal regions that are identified. The minimum signal intensities for $I_i$, i=0,1, depend on the variances of the Rayleigh noise. The dynamic ranges also exclude high intensity motion artifacts. The signal regions are then denoised topologically. The estimated $B_i$ are extrapolated to the entire images. The adjustable parameters of the method are $\sigma_{r_1} = \sigma_{r_2}$ and $\sigma_{s_1} = \sigma_{s_2}$. The $\sigma_{r_i}$, i=0,1, are a fraction of the dynamic range and $\sigma_{u_i} = (1/\sqrt{2})\,\sigma_{r_i}$. The spatial Gaussian filtering is separable. The iterations start with an under-estimate of $\sigma_{r_i}$ and an over-estimate of $\sigma_{s_i}$, namely $\sigma_{r_i} = 2\%$, $\sigma_{\varphi_i} = 4$, and $\sigma_{s_i} = 140$ pixels.

Validation: The intensity uniformity is quantified as the contrast between the intensity statistics of the Gray Matter (GM) and of the White Matter (WM) regions, in particular, with the mean values of the tissue intensity statistics, $\mu_{GM}$ and $\mu_{WM}$, as well as their standard deviations $\sigma_{GM}$ and $\sigma_{WM}$. They give the Coefficient of Joint Variation (CJV) [6]:

$$\mathrm{CJV}(GM, WM) = \frac{\sigma_{GM} + \sigma_{WM}}{|\mu_{GM} - \mu_{WM}|}.$$

It is computed for the $T_1$w and the $T_2$w images to give $CJV_{T1}$ and $CJV_{T2}$, respectively. An effective restoration decreases the value of CJV. A ratio of the CJV of the restored image, $CJV_{I_t}$, to the original image, $CJV_{I_0}$, below unity indicates an effective restoration.

Experimental Results

The values of the CJV for the $T_1$ image, $CJV_{T1}$, and for the $T_2$ image, $CJV_{T2}$, are in Table 1. The table shows the values of the original corrupted image pairs and the values of the intensity restored images. In parentheses are the ratios of the CJVs of the restored to the corresponding original images, which are always less than unity. They demonstrate the improvement that is greater for higher magnitude non-uniformities such as in high field MRI. The noise level also affects the restoration. The method removes non-uniformity and also preserves tissue contrast. An example of the analysis of images of a BrainWeb data set with non-uniformity of 40% and noise of 5% is in Figure 1, which shows slices from the original and restored images as well as their co-occurrences. The cerebellum tissues become brighter and similar in intensity to those of the cerebrum. The restored statistical distributions are also sharper and as expected preserve tissue contrasts.

Table 1: The CJVs for intensity statistics of GM and WM tissues (original vs. joint co-occurrence restoration; N = noise level, RF = non-uniformity level). In parentheses are the ratios of the restored to the original CJVs. Values less than unity indicate improvement.

N | RF | T1 | T2 | T1 | T2
(0.92) | 1.02 (0.95)
(0.97) | 1.25 (0.99)
(0.76) | 1.38 (0.63)
(0.76) | 1.38 (0.63)
(0.76) | 1.38 (0.63)

Discussion

The method allows non-parametric representations for both non-uniformities and anatomic statistics. It is thus robust to variations in anatomy, pathology, and contrasts. It has been validated with BrainWeb data using the CJV between the intensity statistics of different tissues. The results demonstrate the effectiveness of the method, which can shorten the patient scanning time required for calibration.

Figure 1: Original and restored $T_1$w and $T_2$w images and their statistics.
The restored statistics are sharper and preserve tissue contrasts.

Literature

[1] Sled, J., Zijdenbos, A., Evans, A., IEEE Trans. on Med. Imag., vol. 17(1), pp , 1998
[2] Noterdaeme, O., Brady, M., Proc. of ISBI, pp , 2008
[3] Collins, D., Zijdenbos, A., et al., IEEE Trans. on Med. Imag., vol. 17(3), pp , 1998
[4] Belaroussi, B., Milles, et al., MedImA, vol. 10, pp , 2006
[5] Fan, A., Wells III, et al., Proc. of IPMI, Vol. Springer LNCS 2732, pp , 2003
[6] Hadjidemetriou, S., Studholme, et al., MedImA, vol. 13(1), pp , 2009
[7] Hadjidemetriou, S., Buechert, M., et al., Proc. of IPMI, Vol. Springer LNCS 6801, pp ,
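A minimal Python sketch of the validation measure defined above, assuming binary GM/WM masks are available:

import numpy as np

def cjv(image, gm_mask, wm_mask):
    # Coefficient of Joint Variation between gray- and white-matter
    # intensities; cjv(restored)/cjv(original) < 1 indicates an
    # effective restoration.
    gm = image[gm_mask.astype(bool)]
    wm = image[wm_mask.astype(bool)]
    return (gm.std() + wm.std()) / abs(gm.mean() - wm.mean())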

QUALITÄTSBETRACHTUNG DER WAVELETANALYSE IN ABHÄNGIGKEIT DER AUFNAHMEFREQUENZ VON EMG SIGNALEN

L.G. Wiedemann 1, T.S. Haftner 1, B. Pobatschnig 1, M. Faulhuber 2, M. Reichel 1
1 FH Technikum Wien, Österreich
2 Fakultät für Mathematik, Universität Wien, Österreich

Abstract: Im Rahmen dieser Studie werden 4kHz- mit 1kHz-Daten im Frequenz- und Amplitudenbereich verglichen. Dazu finden die Wavelet-Transformation von von Tscharner (2000) und die Root-Mean-Square-(RMS-)Analyse Anwendung. Die Untersuchung wird sowohl anhand künstlich erzeugter Signale als auch bei isometrischen und dynamischen EMG-Messungen von sechs Muskeln durchgeführt. Die relativen Differenzen zwischen den beiden Methoden liegen im Mittel zwischen 0,65% ± 3,60% und 1,37% ± 0,48%. Es zeigen sich demnach relativ geringe Unterschiede im Vergleich der 4kHz- und 1kHz-Daten. Weitere Studien sollten auch Signale mit höheren mittleren Frequenzen analysieren.

Keywords: Aufnahmefrequenz, Wavelet-Transformation, RMS, EMG, Mittlere Frequenz

Einleitung

Um die Datenakquirierung beliebiger Signale so genau wie möglich zu gewährleisten, ist die geeignete Wahl der Abtastfrequenz des Messgeräts ein unumgängliches Kriterium. Dabei ist das Beachten des Nyquist-Shannon-Theorems notwendig. Beim Messen von EMG-Daten sollen laut Merletti [4] Frequenzen von maximal 500Hz für die Signalanalyse berücksichtigt werden. Dies bedeutet, dass für das Messen von EMG-Daten eine Abtastfrequenz von mindestens 1000Hz benötigt wird. Bisher wurden in der Praxis jedoch vermehrt Geräte mit höheren Abtastfrequenzen vermarktet. Ob dies zu einer realitätsgetreueren Darstellung des Signals führt, wurde in vergangenen Studien kontrovers diskutiert [3,5]. In der folgenden Studie wird erwartet, dass die Reduktion der Abtastfrequenz von 4kHz auf 1kHz eine mittlere relative Differenz unter 5% in Bezug auf den quadratischen Mittelwert und die mittlere Frequenz bedeutet. Eine weitere Annahme besteht darin, dass durch entsprechende Interpolation ("Upsamplen") von 1kHz auf 4kHz zusätzliche Stützstellen entstehen, die eine bessere Reproduzierbarkeit eines originalen 4kHz-Signals widerspiegeln.

Methoden

In Bezug auf die Fragestellung wird ein künstliches Signal mit einem kontinuierlichen Frequenzanstieg von 6,90Hz bis 395,46Hz und einer Abtastrate von 4kHz in Matlab (R2010b) erzeugt. Diese Frequenzgrenzen beziehen sich auf die minimale und maximale "center-frequency" der Wavelets, die in der Filterbank von von Tscharner [6] angegeben sind. In dieser Filterbank werden 11 verschiedene Wavelets definiert, die für die EMG-Frequenzanalyse entwickelt wurden. Die erzeugten Signale werden auf 1kHz "downgesampled", indem jeder vierte Wert des Datensatzes einen neuen Datensatz bildet. Für eine weitere Analyse werden diese Daten wieder auf 4kHz interpoliert. Diese Vorgehensweise wird in weiterer Folge als "re-upgesampled" bezeichnet. Die künstlichen Ausgangssignale werden sowohl den 1kHz-Daten als auch den re-upgesampelten (4kHz) Signalen gegenübergestellt. Mittels Wavelet-Transformation und RMS-Analyse wird der mittlere Frequenz- (MF) und Amplitudenverlauf der Originalsignale und der bearbeiteten Signale analysiert (mit einer Fensterbreite von 40ms) und in weiterer Folge miteinander verglichen. Zudem wird die elektrische Aktivität einzelner Muskeln - M. vastus lateralis, M. gastrocnemius medialis, M. semitendinosus, M. biceps brachii, M. triceps brachii, M.
pectoralis major - isometrisch in verschiedenen MVC-Positionen und während dynamischer Bewegungen (Liegestütz und Drop Jump) mit Hilfe des DELSYS-Systems (Aufnahmefrequenz 4kHz) aufgezeichnet. Die Auswahlkriterien der Bewegungsarten und Muskeln beziehen sich auf eine möglichst hochfrequente Muskelaktivität während der Bewegungen [2]. Es wurden je drei dynamische und isometrische Messungen (5s) pro Muskel untersucht. Bei den dynamischen Messungen werden die Muskeln nur während der Dauer ihrer Aktivierung analysiert. Die Fensterbreite für die Berechnung der MF und RMS wird bei isometrischen Bewegungen mit 100ms und bei dynamischen Messungen mit 40ms festgelegt. Für die statistische Analyse wird eine Abwandlung des Bland-Altman-Diagramms verwendet [1].

Ergebnisse

Die Frequenz-Untersuchung der EMG-Signale bei den isometrischen Messungen zeigt eine mittlere relative Differenz von 1,37% ± 0,48%. Die maximale mittlere Frequenz beträgt 195,4Hz (Abb. 1).

Das 95%-Konfidenzintervall reicht von 0,44% bis 2,31%.

Abbildung 1: Bland-Altman-Plot. x-Achse: mittlere Frequenz der Originaldaten; y-Achse: relative Differenzen der mittleren Frequenzen der 4kHz- und 1kHz-Daten.

Im Folgenden werden zwei Tabellen (Tab. 2 und Tab. 3) mit den Vergleichswerten der mittleren Frequenzen (F) und Amplituden (A) bei den jeweiligen Bewegungsarten bzw. des künstlichen Signals angegeben.

Tabelle 2: Mittelwert der relativen Differenz (Mw.), Standardabweichung (Std.) und 95%-Konfidenzintervall (K. Int.) der EMG-Signale.

Bewegungsart | Mw. [%] | Std. [%] | K. Int. [%]
Isometrisch (F) | 1,37 | 0,48 | 0,44 bis 2,31
Isometrisch (A) | 0,80 | 1,25 | -1,65 bis 3,24
Dynamisch (F) | 1,19 | 1,33 | -1,43 bis 3,80
Dynamisch (A) | 0,65 | 3,60 | -6,40 bis 7,71

Tabelle 3: Mittelwert der absoluten Differenzen (Mw.), Standardabweichung (Std.) und 95%-Konfidenzintervall (K. Int.) der künstlichen Signale. Die originalen 4kHz-Daten wurden den 1kHz- und den interpolierten (ip) 4kHz-Daten gegenübergestellt.

Künstliches Signal | Mw. [Hz] | Std. [Hz] | K. Int. [Hz]
4 kHz vs 1 kHz | 0,63 | 0,22 | 0,20 bis 1,07
4 kHz vs 4 kHz (ip) | 0,01 | 0,03 | -0,05 bis 0,06

Diskussion

Die Ergebnisse dieser Studie zeigen, dass eine Abtastfrequenz von 1kHz für die vorliegenden EMG-Signale ausreichend ist. Durch eine Interpolation des Datensatzes auf 4kHz kann eine Minimierung des Fehlers im Frequenzbereich erreicht werden. So beträgt der absolute Mittelwert der Differenzen des 1kHz-Signals 0,63Hz ± 0,22Hz und des interpolierten 4kHz-Signals 0,01Hz ± 0,03Hz. Es wird angenommen, dass aufgrund der Interpolation mehr Stützstellen für die Wavelets entstehen und somit die Rekonstruktion im Frequenzbereich verbessert wird. Aus diesem Grund werden im weiteren Verlauf die originalen EMG-Signale lediglich mit den re-upgesampelten Datensätzen verglichen. Bei der Gegenüberstellung der 4kHz- mit den re-upgesampelten EMG-Daten weicht der Mittelwert der Differenzen um höchstens 1,37% in Relation zu den Frequenzen und Amplituden der originalen Daten ab. Diese Ergebnisse bestätigen die Vermutung, dass die mittleren Differenzen um weniger als 5% durch die Wahl einer geringeren Aufnahmefrequenz von 1kHz im Vergleich zu 4kHz abweichen. Diese 5%-Grenze wird bei der RMS-Untersuchung dynamischer EMG-Signale zwar überschritten (Tab. 2), jedoch zeigt sich eine sehr geringe mittlere Differenz von 0,65% zum Originalsignal. In Anlehnung an die maximal ermittelte Frequenz (195,4Hz) ist anzunehmen, dass höhere Frequenzanteile im EMG-Signal mit höheren Informationsverlusten im Frequenz- und Amplitudenbereich einhergehen. Dieser Umstand sollte in weiteren Studien analysiert werden. Zudem wären Informationen bezüglich der Auswirkungen weiterer Abtastraten in zukünftigen Studien sinnvoll. Die Untersuchungen dieser Studie weisen darauf hin, dass keine höheren Aufnahmefrequenzen als 1kHz für EMG-Analysen notwendig sind.

Literatur

[1] Bland, J., & Altman, D. (2010). Statistical methods for assessing agreement between two methods of clinical measurement. International Journal of Nursing Studies, 47,
[2] Illbeigi, S., van Gheluwe, B. (2011). Electromyographic wavelet analysis of lower extremity muscles during sprint start and two subsequent steps. Portuguese Journal of Sport Science, 11(2),
[3] Ives, J.C. & Wigglesworth, J.K. (2003). Sampling rate effects on surface EMG timing and amplitude measures. Clinical Biomechanics, 18(6),
[4] Merletti, R. (1999). Standards for Reporting EMG Data. Journal of Electromyography and Kinesiology, 9(1), III-IV.
[5] Siemienski, A., Kebel, A. & Klajner, P. (2006).
Fatigue independent amplitude-frequency correlations in EMG signals. Zeszyty Naukowe Katedry Mechaniki Stosowanej, 26,
[6] von Tscharner, V. (2000). Intensity analysis in time-frequency space of surface myoelectric signals by wavelets of specified resolution. Journal of Electromyography and Kinesiology, 10,
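Das beschriebene Down- und Re-Upsampling sowie die fensterweise RMS-Berechnung lassen sich wie folgt skizzieren (Python mit NumPy/SciPy); Interpolationsart und Funktionsnamen sind frei gewählte Annahmen, nicht die Matlab-Originalimplementierung.

import numpy as np
from scipy.interpolate import interp1d

def downsample_4k_auf_1k(emg_4k):
    # 1kHz-Datensatz: jeder vierte Wert des 4kHz-Signals
    return emg_4k[::4]

def reupsample_1k_auf_4k(emg_1k, n_out):
    # Interpolation ("re-upsamplen") der 1kHz-Daten zurück auf 4kHz
    t_1k = np.arange(emg_1k.size) / 1000.0
    t_4k = np.arange(n_out) / 4000.0
    f = interp1d(t_1k, emg_1k, kind="cubic", bounds_error=False,
                 fill_value="extrapolate")
    return f(t_4k)

def rms_verlauf(signal, fs, fenster_ms=40.0):
    # fensterweiser quadratischer Mittelwert
    # (40 ms bei dynamischen, 100 ms bei isometrischen Messungen)
    n = int(fs * fenster_ms / 1000.0)
    bloecke = signal[:signal.size // n * n].reshape(-1, n)
    return np.sqrt((bloecke ** 2).mean(axis=1))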

GRUNDFREQUENZBESTIMMUNG IN SPRACHSIGNALEN DURCH ADAPTIVE KREUZKORRELATION

M. Staudacher, V. Steixner, A. Griessner, C. Zierhofer
Institut für Mechatronik, Universität Innsbruck, Österreich

Abstract: Der vorliegende Artikel präsentiert einen Algorithmus zur Grundfrequenzbestimmung in Sprachsignalen. Dazu wird ein Segment des Sprachsignales extrahiert und mit dem laufenden Signal korreliert. Sobald eine periodische Wiederholung anhand der Korrelationswerte detektiert wird, kann ein Schätzwert für die Grundfrequenz berechnet werden. Nach jeder Berechnung eines Schätzwertes wird das extrahierte Segment durch jenes ersetzt, das den Ausschnitt des Sprachsignals enthält, welcher für die Berechnung der letzten Korrelation verwendet wurde. Das Ergebnis ist ein Algorithmus, der nach einer kurzen Zeitverzögerung den aktuellen Wert der Tonhöhe mit einer geringen Fehlerrate bestimmt.

Keywords: Grundfrequenzbestimmung, Signalverarbeitung, Kreuzkorrelation, Autokorrelation

Einleitung

Eine schnelle Detektion stimulationsrelevanter Daten in Cochlea-Implantaten ist eine Grundvoraussetzung, um auftretende Zeitverzögerungen so kurz wie möglich zu halten. Dies dient beispielsweise dem Ziel einer möglichst synchronen audiovisuellen Wahrnehmung [1] und dem Vermeiden störender Zeitverzögerungen direkter und gefilterter Audiosignale bei einseitig Ertaubten. Wird eine Tonhöhenbestimmung in die Signalverarbeitung integriert, wird an diese ebenso die Forderung einer möglichst schnellen und exakten Abschätzung der auftretenden Grundfrequenz gestellt. Viele Algorithmen zur Grundfrequenzbestimmung analysieren ein Sprachsignal, indem sie dieses in Segmente einer bestimmten Länge unterteilen und die Annahme treffen, dass die Frequenz innerhalb des Segmentes stationär ist [2]. Die Grundfrequenz zu jedem Segment wird anschließend z.B. durch Autokorrelation bestimmt. Dazu muss das Segment zumindest doppelt so lang wie die Periode der niedrigsten zu bestimmenden Frequenz sein [2,3]. Sollen etwa Frequenzen ab 50 Hz detektiert werden, beträgt die Mindestlänge des Segmentes 40 ms. In einer Echtzeitanwendung tritt daher eine Zeitverzögerung zwischen der aktuellen Tonhöhe und dem berechneten Schätzwert auf, welche der Länge des Segmentes entspricht.

Methode

Der vorliegende Algorithmus zur Grundfrequenzbestimmung beruht auf einer adaptiven Kreuzkorrelation (adaptive cross correlation, ACC). Dazu wird ein Segment der Länge L_W des Sprachsignales extrahiert und mit dem laufenden Sprachsignal korreliert. Die berechneten Korrelationswerte sind ein Maß für die Übereinstimmung des Segmentes mit dem jeweiligen Teil des Sprachsignales, das zur Berechnung der Korrelation verwendet wurde. Liegt ein periodisches Signal vor, weist das Korrelationssignal theoretisch nach einer Periode ein ausgeprägtes Maximum auf. Dieses Maximum wird von einem Spitzenwertdetektor gesucht, der durch eine Zeitkonstante τ charakterisiert wird. Der Abstand zweier Maxima im Korrelationssignal ist ein Maß für die Periodendauer und somit für die Grundfrequenz zwischen diesen Punkten. Nachdem ein Schätzwert für die Grundfrequenz berechnet werden konnte, wird das Segment durch den Ausschnitt des Sprachsignales ersetzt, welcher zur Berechnung der letzten Korrelation herangezogen wurde. Die weitere Berechnung erfolgt anschließend mit diesem neuen Segment, bis wieder ein Maximum in den Korrelationswerten detektiert wird. Eine detailliertere Beschreibung erfolgt in [4].
Da die Korrelation eines Segmentes mit dem Sprachsignal berechnet wird, beträgt die notwendige Länge des Segmentes einmal die Periodendauer der niedrigsten zu detektierenden Frequenz. Zur Bestimmung der Grundfrequenz ist somit diese Länge zuzüglich der auftretenden Periode notwendig. Es resultiert ein Algorithmus, der ein Signal mit Werten aus der Vergangenheit vergleicht und die Tonhöhe abschätzt, sobald eine Periode des Signals detektiert wurde. Dies ermöglicht es, die Grundfrequenz mit einer geringen Zeitverzögerung zur tatsächlichen Tonhöhe zu berechnen. Als Mittel zur Bewertung der Fehlerrate von Verfahren zur Grundfrequenzbestimmung werden öffentlich verfügbare Sprachdatenbanken [5, 6] herangezogen. Diese enthalten Sprachsequenzen mit verschiedenen Sprechern und Referenzwerte zur Tonhöhe. Die berechneten Schätzwerte werden mit den Referenzwerten der Datenbanken verglichen, was die Angabe eines relativen Fehlers ermöglicht. Als Fehler (gross error) gewertet werden Resultate, die mehr als 20 % vom Referenzwert der Datenbank abweichen. Um einen möglichst objektiven Fehlerwert zu berechnen, der auch einen Vergleich zwischen unterschiedlichen Datenbanken zulässt, werden ausschließlich stimmhafte Sequenzen der Sprachsignale zur Analyse herangezogen.
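Das Grundprinzip - Kreuzkorrelation eines Segments mit dem laufenden Signal, Maximumssuche und adaptives Ersetzen des Segments - lässt sich stark vereinfacht wie folgt skizzieren (Python); der Spitzenwertdetektor mit Zeitkonstante τ ist hier durch eine einfache Maximumssuche ersetzt, alle Namen und Parameter sind frei gewählte Annahmen.

import numpy as np

def acc_grundfrequenz(signal, fs, f_min=50.0):
    # Segment der Länge einer Periode von f_min wird mit dem laufenden
    # Signal kreuzkorreliert; die Lage des Korrelationsmaximums liefert
    # die Periodendauer, danach wird das Segment adaptiv ersetzt.
    lw = int(fs / f_min)                 # Segmentlänge L_W
    segment = signal[:lw]
    f0 = []
    pos = 0
    while pos + 2 * lw < signal.size:
        lauf = signal[pos + 1:pos + 2 * lw]
        korr = np.correlate(lauf, segment, mode="valid")
        lag = int(np.argmax(korr)) + 1   # Abstand des Maximums in Samples
        f0.append(fs / lag)              # Periodendauer -> Grundfrequenz
        pos += lag
        segment = signal[pos:pos + lw]   # Segment durch letzten Ausschnitt ersetzen
    return np.array(f0)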

Ergebnisse

Für die folgenden Untersuchungen wurde das Sprachsignal bandpassgefiltert (Butterworth-Filter 6. Ordnung, Grenzfrequenzen 50 Hz / 500 Hz, Flankensteilheit ±18 dB/oct). Zusätzlich erfolgt eine Höhenabsenkung ab 50 Hz um -6 dB/oct, um höherharmonische Anteile des Sprachsignales zu dämpfen. Abbildung 1 zeigt die Anzahl der Fehler als Funktion der Segmentlänge L_W und der Zeitkonstante τ des Spitzenwertdetektors für die beschriebene Implementierung des ACC-Algorithmus und als zusätzliche Variante ohne Höhenabsenkung (o). Grundlage für die Evaluierung bildet die Keele-Datenbank [5]. Wie der Abbildung entnommen werden kann, existieren sowohl für die Länge des Segmentes als auch für die Zeitkonstante optimale Werte, um den Rechenfehler des Algorithmus möglichst gering zu halten. Des Weiteren wird durch die Höhenabsenkung eine deutliche Verbesserung der Fehlerrate erreicht.

Abbildung 1: Fehler (gross errors) als Funktion der Segmentlänge und der Zeitkonstante.

Durch eine Evaluierung wie in Abb. 1 dargestellt konnte zu jeder Segmentlänge eine optimale Zeitkonstante bestimmt werden. Aufbauend auf diesen Ergebnissen erfolgt in Abb. 2 ein Vergleich des ACC-Algorithmus mit einem Algorithmus auf Basis von Autokorrelation und dem SHRP-Algorithmus (subharmonic to harmonic ratio) (o), der im Frequenzbereich nach der auftretenden Grundfrequenz sucht. Die Abbildung zeigt die Resultate für die Keele-Datenbank. Zur Kontrolle der Ergebnisse erfolgte dieselbe Auswertung ebenfalls mit der CSTR-Datenbank [6], was qualitativ zum selben Ergebnis führte. Wie zu sehen ist, zeigt der ACC-Algorithmus eine niedrige und nahezu konstante Fehlerrate über den gesamten untersuchten Bereich der Segmentlängen.

Abbildung 2: Vergleich der Fehlerrate (gross errors) des ACC-Algorithmus, Autokorrelations-Algorithmus und SHRP-Algorithmus als Funktion der Segmentlänge.

Diskussion

Wie in der Beschreibung der Methode erläutert wurde, berechnet der ACC-Algorithmus einen Schätzwert für die Grundfrequenz in Sprachsignalen, sobald eine Periode des Signals bestimmt wurde. Dazu beträgt die notwendige Länge des Segmentes lediglich einmal die Periodendauer der niedrigsten zu detektierenden Frequenz. Der Vergleich zu anderen Algorithmen zeigt eine niedrige Fehlerrate bis hin zur kürzesten untersuchten Segmentlänge. Dies ermöglicht es, die Grundfrequenzen in Sprachsignalen gleichzeitig mit einer geringen Zeitverzögerung durch die Verwendung kurzer Segmente und mit einer niedrigen Fehlerrate zu berechnen.

Literatur

[1] E. Freeman, A. Ipser, A. Palmbaha, D. Paunoiu, P. Brown, C. Lambert, A. Leff, J. Driver, Sight and sound out of synch: Fragmentation and renormalisation of audiovisual integration and subjective timing, in: Cortex, Vol. 49, 2013, pp.
[2] W. Hess, Pitch Determination of Speech Signals: Algorithms and Devices, Springer-Verlag, Heidelberg, Germany,
[3] S. Hande, M. Shah, Pitch estimation, in: International Conference on Computational & Network Technologies (CNT), Pune,
[4] M. Staudacher, V. Steixner, A. Griessner, C. Zierhofer, Time domain pitch determination pitch picker, in: CI 2014, Munich,
[5] F. Plante, G. Meyer, W. Ainsworth, A pitch extraction reference database, in: ESCA EUROSPEECH '95, 4th European Conf. on Speech Communication and Technology, Madrid, 1995, pp.
[6] P. Bagshaw, Fundamental Frequency Determination Algorithm (FDA) Evaluation Database, Centre for Speech Technology Research, University of Edinburgh.

VALIDATION OF A FLEXIBLE OPTIMAL CONTROL APPROACH FOR RF-PULSE-DESIGN INCLUDING RELAXATION EFFECTS AND SAR

Christoph Stefan Aigner 1, Christian Clason 2, Armin Rund 3, and Rudolf Stollberger 1
1 Institute of Medical Engineering, Graz University of Technology, Graz, Austria
2 Faculty of Mathematics, University of Duisburg-Essen, Germany
3 Institute for Mathematics and Scientific Computing, University of Graz, Graz, Austria

Abstract: Radio frequency (RF) pulses are essential in MRI to excite and alter magnetization. We present and validate a flexible approach based on optimal control of the full time-dependent Bloch equation, including relaxation effects. A globally convergent trust-region Newton method with exact derivatives via adjoint calculus allows the efficient computation of optimal pulses. The results are validated on a 3T scanner and demonstrate the ability to generate optimized pulses for arbitrary flip angles and arbitrary slice profiles.

Keywords: MRI, RF-pulse-design, slice selective excitation, optimal control theory

Introduction

For many applications in MRI there is still a demand for the optimization of slice selective RF pulses. Inhomogeneous RF fields and the restrictions of the Specific Absorption Rate (SAR) are a challenge for RF pulse design at high field strength. Short pulses in fast imaging typically suffer from a bad slice profile. Computing RF pulse shapes with a good slice profile, low SAR requirement and robustness against RF inhomogeneities is therefore still important and becomes critical for quantitative methods. Conventional design is based on simplifications of the Bloch equation, i.e. the small tip angle or the hard pulse approximation [1]. Iterative methods [1-3], including optimal control (OC) [4-7], are used to improve the excitation process. However, their computational effort and difficult implementation limit a general application. We present and validate a generalized approach based on optimal control of the full time-dependent Bloch equations that is able to handle arbitrary flip angles and target slice profiles. Furthermore, the flexibility of the formulation allows inclusion of relaxation effects and models the SAR by adding a regularization term in the cost function.

Methods

Theory: The optimal control approach to pulse design consists in minimizing the functional

$$J(M, B_1) = \frac{1}{2} \int_{-a}^{a} \left| M(T, z) - M_d(z) \right|^2 dz + \frac{\alpha}{2} \int_0^T \left| B_1(t) \right|^2 dt,$$

where the three-dimensional magnetization vector $M(t,z) = (M_x(t,z), M_y(t,z), M_z(t,z))$ is the solution of the time-dependent Bloch equation

$$\partial_t M(t,z) = A(B(t,z))\, M(t,z) + b(z), \qquad M(0,z) = M_0,$$

and describes the total precession of the nuclear magnetization in an external magnetic field B at the end of the excitation time T for every point z along the slice direction from -a to a. The relaxation term $b(z)$, together with the relaxation rates contained in the system matrix A, accounts for the initial magnetization and the longitudinal and transversal relaxation of the nuclear magnetization. $M_d(z)$ is the desired slice profile and $B(t,z) = (B_{1,x}(t), B_{1,y}(t), G_z z)$ includes the scaling and the shape of the RF pulse $B_1(t) = (B_{1,x}(t), B_{1,y}(t))$ to be optimized and the prescribed slice selective gradient. The last term in J incorporates the desire for minimal SAR of the optimized pulse and is weighted with α for a trade-off between the slice profile accuracy and the required pulse power. The optimal control minimizing J can be computed using a globally convergent trust-region Newton method [8] with a matrix-free iterative solution of the Newton step, where the gradient and application of the Hessian are calculated using the adjoint calculus, i.e., by solving in each step forward and backward Bloch equations.
Since it is essential to have accurate derivative information, this is done via a numerical simulation of the full Bloch equation, where the time-stepping schemes are chosen such that the discretized derivatives coincide with the derivatives of the discretized functional (adjoint consistency).

Implementation: The described approach was implemented in MATLAB (The MathWorks, Inc., Natick, USA) and used to compute optimized pulses for a rectangular slice with a thickness of 20mm and a flip angle of 90° for a total excitation time of T=2.56ms, starting from a zero initial guess. The slice selection gradients were taken from a gradient echo sequence on a 3T MR scanner (Magnetom Skyra, Siemens Healthcare, Erlangen, Germany).

Validation: The results of the numerical optimization for the first profile were verified on the above mentioned scanner using a body coil and a cylinder phantom with a diameter of 140mm, a length of 400mm and relaxation times T1 = 102ms, T2 = 81ms and T2* = 70ms.

Results

Figures 1-3 show the results of the numerical optimization for the rectangular target profile. The optimized pulse $B_1(t) = (B_{1,x}(t), B_{1,y}(t))$ is given in Figure 1. It can be seen that $B_{1,x}(t)$ is similar, but not identical, to a standard sinc shape, and that $B_{1,y}(t)$ is nearly zero, which is expected due to the symmetry of the prescribed slice profile. Figure 2 contains the corresponding slice profile $M(T,z)$ obtained from a numerical solution of the Bloch equation and the experimental results (small crosses), which confirm the simulation. It shows an excitation with a steep transition between the in- and out-of-slice regions and a homogeneous flip angle distribution across the target slice. The phantom image is shown in Figure 3, where the slice profile in Figure 2 is indicated in red.

Figure 1: Numerically optimized RF pulse ($B_{1,x}$ and $B_{1,y}$).

Figure 2: Comparison of the simulated magnetization pattern (solid line) with experimental data (crosses).

Figure 3: Reconstructed experimental phantom image using the optimized pulse shown in Figure 1.

Discussion

The results indicate that the proposed approach allows a problem specific optimization for 1D excitation. Due to the flexibility of the optimal control formulation, it is possible to include additional robustness (e.g., with respect to B1 or B0 inhomogeneities) as well as joint optimization of pulse and gradient shape including slew rate limitations.

Acknowledgement

This work is funded and supported by the Austrian Science Fund (FWF) in the context of project "SFB F " (Mathematical Optimization and Applications in Biomedical Sciences).

References

[1] Pauly J et al. Parameter Relations for the Shinnar-Le Roux Selective Excitation Pulse Design Algorithm. IEEE Trans Med Imaging. 1991; 10:
[2] Gezelter JD, Freeman R. Use of Neural Networks to Design Shaped Radiofrequency Pulses. J Magn Reson. 1990; 90:
[3] Buonocore MH. RF pulse design using the inverse scattering transform. Magn Reson Med. 1993; 29:
[4] Conolly S et al. Optimal Control Solutions to the Magnetic Resonance Selective Excitation Problem. IEEE Trans Med Imaging. 1986; 5:
[5] Lapert et al. Exploring the Physical Limits of Saturation Contrast in Magnetic Resonance Imaging. Sci. Rep. 2012; 2: 589
[6] Xu D et al. Designing Multichannel, Multidimensional, Arbitrary Flip Angle RF Pulses Using an Optimal Control Approach. Magn Reson Med. 2008; 59:
[7] Vinding MS et al. Fast numerical design of spatial-selective RF pulses in MRI using Krotov and quasi-Newton based optimal control methods. JCP 2012; 137:
[8] Steihaug. Conjugate gradient method and trust regions in large scale optimization. SINUM 1983; 20:
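The slice profile M(T, z) compared in Figure 2 can in principle be reproduced by forward-simulating the Bloch equation for a given pulse; the following Python sketch uses a hard-pulse rotation scheme with per-step relaxation decay, which is a simplifying assumption for illustration and not the adjoint-consistent time-stepping used by the authors.

import numpy as np

GAMMA = 2 * np.pi * 42.577e6  # proton gyromagnetic ratio [rad/s/T]

def simulate_slice_profile(b1, dt, gz, z_positions, t1=0.102, t2=0.081):
    # Forward-simulate M(T, z) for a complex RF shape b1 [T] and a constant
    # slice gradient gz [T/m]; relaxation times default to the phantom
    # values stated above.
    e1, e2 = np.exp(-dt / t1), np.exp(-dt / t2)
    profiles = []
    for z in z_positions:
        m = np.array([0.0, 0.0, 1.0])
        for b in b1:
            # effective field in the rotating frame: (Re b1, Im b1, gz*z)
            beff = np.array([b.real, b.imag, gz * z])
            norm = np.linalg.norm(beff)
            if norm > 0:
                phi = GAMMA * norm * dt          # rotation angle per step
                n = beff / norm                  # rotation axis
                m = (np.cos(phi) * m + np.sin(phi) * np.cross(n, m)
                     + (1 - np.cos(phi)) * n * np.dot(n, m))
            m *= np.array([e2, e2, e1])          # relaxation decay per step
            m[2] += 1 - e1                       # regrowth toward M0 = 1
        profiles.append(m)
    return np.array(profiles)

Feeding the optimized pulse shape (discretized over T = 2.56 ms) into such a simulator would yield a magnetization pattern comparable to the simulated curve in Figure 2.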

Student Competition Part 1


VIRTUAL REALITY IN TELE-NEUROREHABILITATION

M. Kaindl 1
1 FH Technikum Wien, Austria

Abstract
In this project, a system called Sirius move is developed that enables patients suffering from deficiencies of neurological abilities and motor functions after a brain trauma to continue therapeutic training at home under professional supervision. After a stroke or other major brain trauma, the patient can use the system to regain abilities, with a focus on coordination of the upper extremities. The system enables a professional to prepare an appropriate set of tasks for each patient and to receive a report on the training results. The tasks consist of paths the therapist draws, including a start point and an end point. The patient projects the tasks onto a suitable wall and follows the path with hand movements. The movement is tracked using a camera in front of the patient. The values measured are accuracy, velocity and level of tremor. The main goal of the system is to keep the patients training their abilities while keeping the workload for the therapists as low as possible.

Keywords: motion analysis, training, tracking, rehabilitation, stroke

Introduction
After a major injury to the brain, it is necessary to take immediate care of the patient in a medical institution. During the first phases of the trauma, the patient needs frequent training to regain motor and cognitive functions [1]. After the acute phase of the injury, patients need to be released from the hospital as soon as possible. Reasons are the lack of financial and personnel resources in the healthcare sector [2], but also the higher risk of patients being exposed to bacteria and viruses found in hospitals [3]. In a successful concept, the patients must not be left on their own after discharge. An environment can be created that combines the advantages of personal surroundings with those of professional supervision. Exercises can be analysed by a therapist to review the patient's progress. Like almost every person, patients need someone they know to review their progress. As with a music teacher, it is often hardly necessary to teach; being there is enough for the exercises to be done. The system enables a professional to draw a graph on the computer that the patient then follows with arm movements in front of a suitable wall, onto which the graph is projected (Figure 1).

Figure 1: Setting of the training

Methods
Hardware: The necessary hardware for the patient includes a camera with a USB connection to a computer and a projector connected to the same computer. An internet connection is required for the download of new task sets and the upload of the report. The professional likewise needs a computer with an internet connection. For the non-real-time data transmission, space on a server is needed that can be reached by the computers of both participants of the system.

Software: The software is implemented in Java (Sun Microsystems). ReacTIVision is used as a framework for motion tracking [4]. The printed fiducial markers provided by the project are applied to the hand of the patient (Figure 2). The framework provides the software with the position, velocity and angle of the fiducial.

Figure 2: Fiducial marker of the amoeba set provided by ReacTIVision

Motion analysis: The therapist receives the average distance to the path in pixels for comparison, and also an illustration of the movement including the distances, measured as lines between the hand and the closest reference point along the path (Figure 3). Through a shifting range of eight reference points, self-crossing paths are also analysed correctly. The mean distance to the path is multiplied by a scalar, calculated from the difference in length between trajectory and path, to take detours and short cuts into consideration (a sketch of this distance computation follows the reference list below).

Figure 3: Connections to the closest reference point along the path

The velocity is illustrated by pointers of a certain length and direction to assess the quality of the movement (Figure 4).

Figure 4: Illustration of velocity along the path

Tremor is illustrated in Figure 5. Lines which are relatively ordered indicate steady movement, while disordered lines indicate tremor.

Figure 5: Illustration of tremor

Results
Two different pieces of software were developed. The therapist receives a GUI optimised for fast use, based on interfaces known from established drawing software. The patient's part is clearly arranged and easy to use, with only two functions visible to the user (Figure 6).

Figure 6: System overview. Top left: therapist's GUI; top right: patient's GUI

Discussion
The system provides the important factors of computer-aided therapy: motivation, feedback, documentation, repetition and objectivity, while still leaving room for the therapist's own approaches and ideas. Patients may not be able to install the system without help and may also require guidance during their first attempts. The fiducial markers need to face the camera in front of the patient, which is a difficult task for some users.

Acknowledgement
Special thanks go to my supervisors FH-Prof. DI Dr. Lars Mehnen and FH-Prof. DI Dr. Johannes Martinek from FH Technikum Wien.

Literature
[1] J. Mehrholz, Frühphase Schlaganfall: Physiotherapie und medizinische Versorgung, Kapitel 4, Thieme Verlag, Stuttgart, Germany, 2008.
[2] Statistik Austria, Statistisches Jahrbuch Österreichs, Kapitel 3.01.
[3] R. Thomas, MRSA in der Neurologischen Frührehabilitation: Eine Bestandsaufnahme zur Inzidenz, Prävalenz und Morbidität, Neurologie & Rehabilitation, 19(2), Hippocampus Verlag, Bad Honnef, Germany, 2013.
[4] R. Bencina, M. Kaltenbrunner, S. Jordà, Improved topological fiducial tracking in the reacTIVision system, Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, IEEE Computer Society, Washington, DC, 2005.
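As a rough illustration of the scoring described above, the following Python sketch computes the mean hand-to-path distance over a sliding window of eight reference points and scales it by the trajectory/path length ratio. It is illustrative only: the system itself is written in Java, and the window-advancing rule and scaling factor here are assumptions, not the author's implementation.

```python
import numpy as np

def path_length(points):
    """Total polyline length of an (N, 2) array of points."""
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

def deviation_score(trajectory, path, window=8):
    """Mean distance of hand positions to the reference path.

    A window of `window` reference points advances along the path as the
    hand moves, so self-crossing paths are matched to the correct segment.
    The mean distance is scaled by the trajectory/path length ratio to
    penalise detours and short cuts (scaling rule assumed here).
    """
    start, dists = 0, []
    for p in trajectory:
        seg = path[start:min(start + window, len(path))]
        d = np.linalg.norm(seg - p, axis=1)
        k = int(np.argmin(d))
        dists.append(d[k])
        start = max(0, min(start + k, len(path) - window))  # advance the window
    scale = path_length(trajectory) / max(path_length(path), 1e-9)
    return float(np.mean(dists)) * max(scale, 1.0 / scale)

# Example: a straight path versus a slightly wavy hand trajectory
path = np.column_stack([np.linspace(0, 100, 50), np.zeros(50)])
traj = np.column_stack([np.linspace(0, 100, 50), 3 * np.sin(np.linspace(0, 6, 50))])
print(round(deviation_score(traj, path), 2))
```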

EBALANCE - DEVELOPMENT AND USABILITY EVALUATION OF A COMPUTER GAME FOR REHABILITATIONAL TRAINING

J. Flandorfer 1
1 University of Applied Sciences Technikum Wien, Vienna, Austria

Abstract
During physical rehabilitation, repetitive practice of a limited number of exercises is meant to improve dynamic balance and motor recovery. A different approach is to use virtual reality training and interactive games to retrain postural stability. The aim of this work is to find a way to use the eshoe system, developed by the Austrian research institute CEIT RALTEC (Schwechat, Austria), for training purposes in physical rehabilitation, and to develop a computer game that can be controlled via movements detected by the eshoe insoles. The game is then evaluated by means of usability tests with users aged between 23 and 27 years without any motor deficiencies. The results of the usability tests are analyzed, recommendations for future development are given, and the game is improved according to the recommendations. Overall, the test subjects show a good reception of the system. However, the usability tests show that adaptations have to be made to the layout as well as to the motion detection.

Keywords: balance, motor deficiency, virtual reality, rehabilitation, training

Introduction
The mobile motion measurement system eshoe was developed at the Austrian research institute CEIT RALTEC. They created an orthopedic insole instrumented with several sensors and an embedded system for measuring gait parameters directly in the shoe, without being bound to a laboratory. Sensors in the insole collect pressure data, the angle of the foot, and acceleration and angular velocity along three axes. Stroke is frequent and often the cause of major motor deficiencies, instabilities and balance problems [1]. An improvement of these postural instabilities can be achieved by conventional rehabilitation interventions such as movement re-education, therapeutic exercises, or training with walking aids [2]. Although such conventional training practices often achieve the objective of retraining postural stability, studies suggest that this kind of physical rehabilitation does not evoke engagement or commitment to the training on the side of the patient [3]. Studies indicate that introducing games in rehabilitation processes is at least as effective as engaging only in conventional therapy [3]. Including suitable games also supports the adherence of the patient to the rehabilitation process and encourages him or her to comply with the therapy plan.

The game aims at training balance and weight shift. From a set of images, one image is selected. The player has to search through the same set of images to find the matching image. As soon as the player has selected and confirmed the matching image, another one is randomly selected from the set and the player has to search once again. When the time runs out, the correctly retrieved images are counted and the score is displayed. The game features four levels of difficulty with different playing times and different movements to execute the game's functions.

Materials and Methods
eshoe: In order to collect motion measurement data, the application has to be connected to two eshoe insoles per recording.
One insole holds four FSR sensors (force sensing resistors; A401, Tekscan, South Boston, US) measuring the pressure occurring beneath the heel, metatarsal head I, metatarsal head IV and the toes, an acceleration sensor (ADXL346, Analog Devices, Norwood, US) for determining the acceleration of the foot along three axes, and a gyroscope (ITG-3200, InvenSense, Sunnyvale, US) measuring the angular velocity along three axes [4]. The Bluetooth module (KC-22 Bluetooth OEM Micro Module) sends the data collected from the sensors by the microcontroller (PIC24FJ256GB206T-I/MR, Microchip Technology Inc., Arizona, US) to the connected computer.
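To make the insole's sensor interface concrete, the following Python sketch classifies a toe stand versus a heel stand from the four FSR channels of one insole. Everything here is an assumption for illustration: the channel order, scaling and threshold are invented and do not reproduce the eshoe firmware or protocol.

```python
import numpy as np

# Channel order assumed for illustration: heel, metatarsal I, metatarsal IV, toes
HEEL, MET1, MET4, TOES = range(4)

def classify_stance(fsr, ratio=2.0):
    """Classify a posture from one insole's four FSR pressure readings.

    fsr   : array-like of four non-negative pressure values
    ratio : forefoot/rearfoot dominance needed to call a toe or heel stand
            (threshold chosen arbitrarily for this sketch)
    """
    fsr = np.asarray(fsr, dtype=float)
    fore = fsr[MET1] + fsr[MET4] + fsr[TOES]
    rear = fsr[HEEL]
    if fore > ratio * max(rear, 1e-6):
        return "toe_stand"
    if rear > ratio * max(fore, 1e-6):
        return "heel_stand"
    return "flat"

print(classify_stance([5.0, 40.0, 35.0, 20.0]))  # -> toe_stand
print(classify_stance([80.0, 10.0, 8.0, 2.0]))   # -> heel_stand
```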

ebalance: In the following description of the game, the term user describes the person setting up the game (e.g. the therapist), whereas the term player refers to the person actually playing the game and performing the movements. Before the actual game can be started, the user has to choose the level of difficulty, the set of images to be used and the insole size the player wears. When the user presses the start button, the Bluetooth module connects to the insoles and the game starts. The goal of the game is for the player to scroll through a collection of images in search of a specific image displayed above the image collection. Figure 1 shows the game window containing the image collection in the form of an image strip. The image in the middle, framed by the red square, is the currently selected image, and the one on the top right represents the image which has to be found.

Figure 1: Game window of ebalance with the score and the time left displayed.

Usability testing: To evaluate the game, a concept for usability tests was developed in which eight subjects aged between 23 and 27 test the program, assessing its overall look and feel as well as the controls, the intuitiveness of the movements and the feedback of the game. For the initial usability tests, a user group of physically healthy people was preferred over presumably vulnerable elderly people. The subjects are given an overview of the settings by a test supervisor. Afterwards, each level of difficulty is played through two times, followed by a questionnaire gathering information on the look and clearness of the game as well as the usage and easiness of the controls. Factors influencing the evaluation can be divided into objective and subjective factors, the objective factor being the points reached by the player, as opposed to the usability questionnaire as the subjective factor.

Usability questionnaire: The main section of the questionnaire is divided into three subsections. The first subsection asks whether the subjects have already had experience with either usability tests or motion control systems; the test subjects are also asked whether they have received and understood the spoken instructions and whether they consider the game's graphics and layout clear, structured and of adequate size. In the next subsection, the player is asked whether the instructions window at the beginning of the game was understood, whether he or she could easily control the game and whether it requires physical or mental effort. The last subsection includes questions on whether it was fun playing the game and whether he or she would be likely to continue playing the game. The supervisor then fills in the last section of the questionnaire with his or her observations of the test.

Results
The majority of the test subjects find the screen layout to be neatly arranged; only one test subject believes the graphics should be a bit larger. The instructions on the instructions window could not be understood or were misleading for four test persons. Still, seven subjects state that it was easy for them to learn how to control the system, and all of them consider the game clear and understandable. According to the answers of the test subjects, it can be assumed that the level of difficulty "easy" was received quite negatively. The levels of difficulty "easyplus", "medium" and "hard", however, were assessed rather positively by the test subjects. The results indicate a good reception of the control motions in all levels of difficulty. Four test persons state that moderate physical and mental effort is required during gameplay.
The overall reaction to the game is that it was fun to play, but only five of the test subjects see themselves continuing to play the game on a regular basis. The observations of the supervisor indicate that the test persons could easily understand the spoken instructions but showed some confusion over the instructions window.

Discussion
The usability tests generally show a positive reception of the game. However, adaptations to the motions of the difficulty "easy" (toe stand and heel stand) were indicated and are addressed in the subsequent re-design stage. Additional sensors, namely the gyroscope to detect a tilt of the foot, are now used for the detection of these motions. Another improvement would be to use more pressure sensors. During the test execution it was observed that it is of significant importance for the detection of the motions that the insoles used are of a smaller size than the test subject's feet. The test persons showed a much higher effort in performing the motions when their feet were the same size as or smaller than the insoles in their shoes. This might be due to the arrangement of the pressure sensors in the insoles. The usability tests also show that changes to the images on the instruction screen have to be made in order to make them more self-explanatory. This issue is likewise addressed and solved in the subsequent redesign stage.

Literature
[1] Ringelstein E et al.: Der ischämische Schlaganfall: eine praxisorientierte Darstellung von Pathophysiologie, Diagnostik und Therapie.
[2] Weiss R: Physical Therapy Aide: A Worktext.
[3] Rajaratnam B et al.: Does the Inclusion of Virtual Reality Games within Conventional Rehabilitation Enhance Balance Retraining after a Recent Episode of Stroke? Rehabilitation Research and Practice, vol. 2013.
[4] David V, Jagos H, Litzenberger S, Reichel M: Instrumented insole for mobile and long distance motion pattern measurement.

ROBOT-ASSISTED MEDICATION SUPPORT FOR OLDER PERSONS IN THE HOME ENVIRONMENT

M. Schweitzer, A. Hoerbst
Research Group for eHealth and Telemedicine, University for Health Sciences, Medical Informatics and Technology (UMIT), Austria

Abstract
Age-related problems with the everyday medication of older persons lead to decreasing compliance. Technical aids such as humanoid robots have a high potential to support older people in daily life. Powerful series-produced humanoid systems with price-oriented technology already exist, but it is unclear whether these systems are suitable for personally supporting the medication process. In this work, a medication support system based on the robot platform NAO was developed, tested under laboratory conditions and evaluated. The system shows promising results on the technical/functional level, but studies in real-world settings are still needed.

Keywords: Ambient Assisted Living, humanoid service robotics, medication management

Introduction
Home care for the elderly is gaining importance due to rising life expectancy and the increasing occurrence of multimorbid conditions. A considerable problem is the daily intake of the prescribed medication. Multi-drug regimens, also known as polypharmacy, combined with age-related impairments of cognitive, visual, and fine and gross motor abilities, lead to decreasing compliance. Consequently, only about 50% of all chronically ill older patients take their medication as prescribed [1]. Ambient Assisted Living (AAL) offers suitable technical solutions to support the everyday medication process of a person in need of care. Systems such as smartphone apps, intelligent pill dispensers and medication reminders [2] are powerful, but often form independent special-purpose systems that cannot be interconnected, or only with considerable effort. Humanoid service robots have a very high potential as everyday assistance. Besides prototypical high-tech solutions from science and research, series-produced service robots with price-oriented technology already exist. It is, however, not known whether, or to what extent, such a series-produced robot system is suitable as everyday support for an older person. The aim of this diploma thesis was to design a system for medication support of older persons in the home environment based on the series-produced humanoid service robot NAO [3], to implement it prototypically and to examine its technical and functional suitability under laboratory conditions.

Methods
A medication management system (MMS) for NAO was developed, focusing on the administrative medication process at home. The system was implemented purely in software, without hardware modifications of the NAO robot. Requirements for AAL and for electronic medication support were gathered by a review of the scientific literature and validated by including the expert opinion of a general practitioner. From the identified requirements, a middleware for NAO for the dynamic and flexible use of different AAL services was designed and implemented as a prototype.
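A minimal Python sketch of such a service middleware is given below. It is purely illustrative: the class and method names are invented and do not reproduce the authors' implementation. Services register themselves with the middleware, which dispatches user requests to the matching service:

```python
class AALMiddleware:
    """Toy registry that dispatches named requests to registered AAL services."""

    def __init__(self):
        self._services = {}

    def register(self, name, service):
        self._services[name] = service

    def dispatch(self, name, *args, **kwargs):
        service = self._services.get(name)
        if service is None:
            raise KeyError(f"no AAL service registered under '{name}'")
        return service(*args, **kwargs)


# Hypothetical medication-reminder service registered as one of several AAL services
def medication_reminder(patient, drug, time):
    return f"{patient}, please take {drug} at {time}."

mw = AALMiddleware()
mw.register("reminder", medication_reminder)
print(mw.dispatch("reminder", "Mrs. Example", "Aspirin 100 mg", "08:00"))
```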
In the next step, a prototypical service supporting the medication process was developed on top of the middleware. The service consists of a basic framework that was iteratively and agilely extended by functions. For the identification of medications, image-based approaches of feature recognition (pattern recognition), text recognition (OCR) and barcode scanning were implemented. A drug-interaction check was integrated by implementing the SIS database of the Austrian Chamber of Pharmacists. The prototypical MMS was tested comprehensively by unit and integration tests. Quantitative tests of the medication identification under different lighting conditions, with varying distance, orientation and degree of occlusion of the medication package, provided information about the recognition performance of the system. Finally, a realistic test scenario was carried out in a laboratory environment.

Results
From the problems identified in the literature and the expert interviews, requirements were formulated and incorporated into the system design and implementation.

Figure 1: Layers of the architecture with the technologies involved (left) and short descriptions (right): Choregraphe (system frontend; communication with the user), AAL middleware (use of the adaptive AAL structure), NAO module (interface between Choregraphe and MMS), MMS in Python (central MMS with its functionalities) and data storage in SQLite/XML (persistent storage of the data), connected via function calls.

The resulting prototype is able to identify medications by image and barcode recognition, remind the patient of intake times, check the medication for interactions and for deviations from the prescription plan, and document the complete course of intakes in a medication history. Communication between user and system takes place via speech recognition and speech output. Besides the actual interaction between user and robot, the physician can also view and edit the generated documentation via a remote connection to the system. In addition, the medication of an ELGA-conformant CDA document can be read automatically and imported into the MMS. The 5-layer software architecture used (Fig. 1) allows flexible adaptation of the software: data storage, medication management and robot platform (Choregraphe) can be exchanged, provided that suitable interfaces are available. The basic framework of the MMS consists of a Python class structure that models the complete medication process. The NAO module forms the interface between the MMS and the robot system or AAL middleware. Drug databases and the persistence of the class structure with prescription plan, history and patient data are embedded in NAO by means of SQLite and XML. The barcode scanner was implemented in a separate NAO module using an open-source library. For the feature-based object recognition, the NAO standard module ALVisionRecognition was used. Image features and barcodes are mapped to the medication via its unique central pharmaceutical number (Pharmazentralnummer, PZN).

Discussion
The requirements of the MMS were gathered from the scientific literature as completely as possible. Especially in the field of Ambient Assisted Living, where no standardized terms or MeSH terms exist yet, some relevant literature may have remained hidden. By involving the expert, the identified approaches could be validated and extended.

Table 1: Excerpt from the medication recognition tests. Recognition rates were determined at distances of 15 cm, 30 cm and 50 cm to the robot's camera under three lighting conditions (bright surroundings, dark surroundings, weak 20 W illumination); 19 medications were each tested 3 times.

With regard to the aim of the work, the technical evaluation of the prototypical system, this was considered sufficient. For a generalization of the concept, however, the approach would have to be extended to further experts and stakeholders. The tests showed that NAO, together with the developed MMS, forms a suitable basis for supporting the everyday medication process. Given the cameras available in NAO, image-based methods were deliberately retained for medication identification. OCR did not yield usable results, so this approach was not pursued further.
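The barcode-to-medication mapping can be illustrated with a small Python sketch. The schema and data below are invented for illustration only; the actual system uses the SIS database of the Austrian Chamber of Pharmacists, whose structure is not reproduced here.

```python
import sqlite3

# Toy schema standing in for the embedded drug database
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE drugs(pzn TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE interactions(pzn_a TEXT, pzn_b TEXT, note TEXT);
    INSERT INTO drugs VALUES ('1234567', 'Drug A'), ('7654321', 'Drug B');
    INSERT INTO interactions VALUES ('1234567', '7654321', 'example interaction');
""")

def drug_for_barcode(pzn):
    """Map a scanned PZN (unique central pharmaceutical number) to a drug name."""
    row = con.execute("SELECT name FROM drugs WHERE pzn = ?", (pzn,)).fetchone()
    return row[0] if row else None

def check_interactions(pzn, current_medication):
    """Return notes for known interactions between the scanned drug and the plan."""
    hits = []
    for other in current_medication:
        row = con.execute(
            "SELECT note FROM interactions WHERE (pzn_a = ? AND pzn_b = ?) "
            "OR (pzn_a = ? AND pzn_b = ?)", (pzn, other, other, pzn)).fetchone()
        if row:
            hits.append((other, row[0]))
    return hits

print(drug_for_barcode('1234567'))                 # -> Drug A
print(check_interactions('1234567', ['7654321']))  # -> [('7654321', 'example interaction')]
```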
The feature-based object recognition achieved the best results: at a distance of 15 cm, all test medications were recognized, and under poor lighting only one test failed (Tab. 1). The learning phase required by the feature-based method is not needed for barcode scanning, where the PZN can be derived unambiguously via the database used. However, NAO's simple camera hardware (720p resolution, fixed focus) requires a very precise positioning of the barcode in front of the camera. Physical assistance is not possible due to the dimensions of the NAO hardware. Speech control is in principle well suited for technically inexperienced users, and the recognition of predefined terms gives good results in a quiet environment. Remote access to the medication data and documentation enables flexible medication changes by the physician or nursing staff. With the developed MMS, promising results in supporting the medication process were achieved using NAO. The entire prototype has, however, only been tested under laboratory conditions so far; complementary field tests in a real environment are needed to assess its practical suitability.

Literature
[1] Murray MD, Morrow DG, Weiner M, Clark D, Tu W, Deer MM, et al. A conceptual framework to study medication adherence in older adults. Am J Geriatr Pharmacother. 2004; 2.
[2] U.S. Department of Health and Human Services, Assistant Secretary for Planning and Evaluation. Report to Congress: Ageing Services Technology. June 2012.
[3] Aldebaran Robotics. [Online]. Available: [Accessed: 12-May-2014].

QUANTITATIVE MEASUREMENT OF LEFT VENTRICULAR MYOCARDIAL PERFUSION BASED ON DYNAMIC CT SCANS

M. Toifl 1
1 Institute of Electrical and Biomedical Engineering, UMIT, Austria

Abstract
Knowledge of myocardial blood perfusion is essential for the adequate therapy of cardiovascular diseases. Building on the various methods brought up in the past decades, a framework based on contrast-agent-supported computed tomography was set up to provide clinicians with absolute myocardial perfusion values. Since the underlying processing tasks need to be evaluated systematically, this work examines the involved signal processing steps in detail. The results help to improve the automatic perfusion calculation tool by outlining the impact of the processing methods on the absolute perfusion value.

Keywords: computed tomography, quantitative perfusion, signal processing, curve fitting

Introduction
The highly important role of absolute myocardial perfusion (MPF) estimation derives from the increasing demands for therapeutic interventions in cardiovascular diseases. One promising technology is the assessment of the perfusion status by means of contrast-agent-supported computed tomography (CT). To obtain clinically applicable perfusion values, a workflow is set up that includes image acquisition, preprocessing, curve fitting and perfusion calculation. After accomplishing these tasks, the impact of the various data processing steps on the quantitative estimation of myocardial perfusion is evaluated.

Methods
The quantification of MPF relies on the venous injection of an X-ray contrast agent, which mixes with the blood on its path to the heart. The passage of the contrast agent bolus through the left ventricular cavity and the myocardium is traced by dynamic CT scans. Once the CT images are recorded, both the ventricular and the myocardial CT value curves (Fig. 1) are generated from the segmented image stack. A quantitative estimation of myocardial perfusion is then possible by examining various parameters of the attenuation curves (shape, height, upslope) using model-based and empirical methods.

Fig. 1: Example data set, showing a ventricular (left) and a myocardial (right) CT value curve (attenuation in HU over time in s).

Preprocessing: The first part of the workflow includes filtering together with equidistant sampling, to enable the application of convolution methods that allow step responses to be simulated.

Curve fitting: In order to increase the robustness of the perfusion calculation against outliers, the data are fitted by three different model curves: the Weibull curve [4], a low-pass of $n$-th order (Eq. 1, [1]) and the gamma-variate curve [2]. The low-pass model of 4th order in particular proved profitable due to its easy transformability into the Laplace domain:

$f(t) = a\,t^{n-1} e^{-bt}, \qquad F(s) = \frac{a\,(n-1)!}{(s+b)^{n}} = \frac{6a}{(s+b)^{4}} \text{ for } n = 4. \quad (1)$
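A hedged Python sketch of such a model fit for the 4th-order low-pass of Eq. 1 is shown below (the study used a MATLAB toolchain; SciPy, the synthetic data and the parameter values here are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def lowpass4(t, a, b):
    """4th-order low-pass model f(t) = a * t**3 * exp(-b*t), i.e. n = 4 in Eq. 1."""
    return a * t**3 * np.exp(-b * t)

# Synthetic noisy attenuation curve standing in for a myocardial CT value curve
t = np.linspace(0, 40, 200)                      # time [s]
rng = np.random.default_rng(0)
y = lowpass4(t, 0.5, 0.3) + rng.normal(0, 0.2, t.size)

(a, b), _ = curve_fit(lowpass4, t, y, p0=(1.0, 0.1))
sse = float(np.sum((y - lowpass4(t, a, b))**2))  # squared-error sum, the fit criterion
print(f"a = {a:.3f}, b = {b:.3f}, SSE = {sse:.2f}")
```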
Perfusion calculation: Two main methods are used to compute absolute perfusion values in the myocardium: the upslope method, proposed by K.A. Miles [3], and a convolution approach, suggested by C. Baumgartner [1] and based on assumptions formulated by Rumberger [5]. While the upslope method is of an empirical nature, the current project pursues the convolution method, a system-theoretic approach that also facilitates further improvements of this technique towards local perfusion assessment. The system-theoretic concept is illustrated in Fig. 2, where the ventricular attenuation curve u(t) is considered as the input function. The output function y(t) is represented by the myocardial attenuation curve, and the transfer function h(t) between input and output is computed either by symbolic evaluation (low-pass model) or by numerical estimation using the ARX model (auto-regressive model with exogenous inputs, Eq. 2):

$y(t) + a_1 y(t-1) + \dots + a_{na} y(t-na) = b_1 u(t-nk) + \dots + b_{nb} u(t-nb-nk+1) + e(t) \quad (2)$

Fig. 2: System-theoretic model y(t) = u(t) * h(t), with the ventricular input curve u(t), the myocardial output curve y(t) and the transfer function h(t) between them.
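The ARX estimation and the subsequent bolus-response simulation can be sketched in Python as follows. This is a least-squares toy version with invented model orders and synthetic curves, not the study's MATLAB system-identification tooling:

```python
import numpy as np

def arx_fit(u, y, na=2, nb=2, nk=1):
    """Least-squares ARX fit of Eq. 2: y(t) + a1*y(t-1) + ... = b1*u(t-nk) + ..."""
    n0 = max(na, nb + nk - 1)
    rows, rhs = [], []
    for t in range(n0, len(y)):
        past_y = [-y[t - i] for i in range(1, na + 1)]
        past_u = [u[t - nk - j] for j in range(nb)]
        rows.append(past_y + past_u)
        rhs.append(y[t])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return theta[:na], theta[na:]   # (a-coefficients, b-coefficients)

def simulate(a, b, u, nk=1):
    """Simulate the fitted ARX model's response to an input sequence u."""
    y = np.zeros(len(u))
    for t in range(len(u)):
        acc = sum(bj * u[t - nk - j] for j, bj in enumerate(b) if t - nk - j >= 0)
        acc -= sum(ai * y[t - i] for i, ai in enumerate(a, start=1) if t - i >= 0)
        y[t] = acc
    return y

dt = 0.1
t = np.arange(0, 40, dt)
u = 100 * t**2 * np.exp(-t)                       # synthetic ventricular input curve
y = 0.05 * np.convolve(u, np.exp(-0.5 * t))[:len(t)] * dt  # synthetic myocardial output
a, b = arx_fit(u, y)
bolus = np.zeros_like(t); bolus[0] = 1.0          # short imaginary bolus
h = simulate(a, b, bolus)
mpf = h.max() / (np.sum(u) * dt)  # peak response / area under input; needs calibration
print(f"relative perfusion index: {mpf:.4f}")
```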

The transfer function is then used to simulate the transfer characteristics of a short imaginary contrast agent bolus (duration 0.1 s). By dividing the response peak height by the area under the ventricular attenuation curve, the perfusion is calculated in absolute values (ml/100g/min). The assessment of all steps of the processing pipeline was the main task, providing insight into the behaviour of the selected algorithms with respect to their feasibility and reliability. Seven test data sets (named KV to YN), provided by a preliminary study on human males, were used for evaluation.

Results
The parametrisation for sampling and filtering was optimised at the beginning of the workflow, where a smooth and shape-preserving processing of the raw curves is needed. For the subsequent curve fitting operations, the squared error sum between the parametrised model curve and the fitted data is used as the minimisation criterion. It is thus possible to estimate which model curve fits a certain data set best, since all observed curves have visibly differing shapes. The final part of the work comprises the comparison of the two selected perfusion approaches (upslope method, convolution method). Figures 3 and 4 summarise the absolute perfusion results for the two methods.

Fig. 3: MPF results (ml/100g/min) calculated by the upslope method for the data sets KV to YN; the results depend on the curve fitting approach applied to the ventricular and myocardial curves (combinations of Weibull (W), low-pass (L) and gamma-variate (G) fits).

Discussion
Using the upslope method, it was demonstrated that the absolute perfusion values depend strongly on the model curve shape and the quality of its fitting. Fig. 3 includes all combinations of automatic fittings of the Weibull function (W), the low-pass (L) and the gamma-variate (G) model. The results show that some combinations (especially myocardial gamma-variate fitting) lead to a misestimation of the perfusion. Consequently, it is strongly advised to take the quality of fit into consideration, as well as the purpose of the model curve used. The convolution methods for the perfusion calculation also yielded variable perfusion values. It was found that some factors influence the computation significantly: the curve shape, the smoothness, and the parametrisation of the transfer function estimation (e.g., the numbers of poles and zeros). The results of the automatic perfusion computation are illustrated in Fig. 4, where the absolute perfusion values are shown for all three methods (symbolic low-pass of 4th order, ARX estimation, ARX estimation after gamma-variate fit).

Fig. 4: MPF results (ml/100g/min) calculated by the three convolution approaches (LP4, ARX, gamma-ARX) for the data sets KV to YN.

All methods show a rather good coherence within the data, except for the data sets KV and ME with the gamma-ARX method. Future research activities will comprise improvements in the choice of algorithms and in the intelligent recognition of the best-performing configuration of processing methods. This work has shown that there is potential in the improvement of the convolution techniques, while the refinement of the upslope method made it ready to serve as a reference standard for evaluation.
Acknowledgements
Supported by the Science Fund of the Land Tyrol (TWF).

Literature
[1] Baumgartner C. Messung der cerebralen Durchblutung mittels Elektronenstrahl-Computertomographie. Dissertation, University of Technology, Graz, Austria.
[2] Madsen MT. A simplified formulation of the gamma variate function. Physics in Medicine and Biology. 1992; 37(7).
[3] Miles KA. Measurement of tissue perfusion by dynamic computed tomography. Br J Radiol. 1992; 64(761).
[4] The MathWorks, Inc.: Curve Fitting and Distribution Fitting, products/statistics/examples.html?file=/products/demos/shipping/stats/cfitdfitdemo.html.
[5] Rumberger JA, et al. Use of Ultrafast Computed Tomography to Quantitate Regional Myocardial Perfusion: A Preliminary Report. J Am Coll Cardiol; 9(1).

FORCE DISTRIBUTION OF THE QUADRICEPS MUSCLES DURING ISOKINETIC LEG PRESS TRAINING - A SIMULATION STUDY WITH OPENSIM

G. Schneider 1, M. Krenn 1, J. Cvecka 2, M. Sedliak 2, S. Loefler 3, H. Kern 3,4, W. Mayr 1
1 Zentrum für Medizinische Physik und Biomedizinische Technik, Medizinische Universität Wien, Austria
2 Institute of Physical Education and Sports, Comenius University Bratislava, Slovakia
3 Ludwig Boltzmann Institut für Elektrostimulation und physikalische Rehabilitation, Austria
4 Institut für Physikalische Medizin und Rehabilitation, Wilhelminenspital, Austria

Abstract
The leg press is often used in muscle therapy and provides a joint-sparing form of movement. A computer-controlled leg press makes it possible to control load and movement patterns specifically and to adapt them to the patient. In this simulation study, the generated forces and activation characteristics of the quadriceps muscles were simulated with OpenSim (Stanford University). Empirically determined movement and force data served as the basis of the simulation, in order to obtain a realistic picture of the generated muscle forces. The results show that the musculus vastus lateralis contributes the largest share of force, and that the percentage force distribution among the individual muscle heads depends on the knee angle. The simulation results allow an improved control of the muscle load.

Keywords: OpenSim, simulation, thigh muscles, leg press, muscle activity

Introduction
With a computer-controlled leg press, specific loading and leg movement patterns for training the leg muscles can be realised. Besides the usual isometric and isokinetic movement forms, this special leg press offers isotonic and vibration training modes. The aim of the present study was to develop a powerful simulation of the muscle forces and muscle activation arising in the lower extremities during training, in particular in the quadriceps muscle group of the left leg.

Methods
The leg press was developed at the Institute of Physical Education and Sports of Comenius University in Bratislava, Slovakia [4]. The movement was isokinetic, with different generated external forces. For the simulation of muscle force and activation, the simulation software OpenSim (NIH Center for Biomedical Computation, Stanford University; for modelling and simulating biomechanical problems) was used [1]. In the presented study, the equilibrium musculotendon model according to Millard was applied [2]. The model was fed with empirical movement and force data. For this purpose, one subject (male, 28 years) performed an isokinetic movement with the left lower extremity on the leg press. The determined external forces (contact forces on the sled) were post-processed and divided into constant concentric and eccentric phases; reaction forces of different magnitude on the sled of the leg press were determined in this way. The eccentric maximal force was 1480 N, and under concentric load a maximum of 1100 N was reached. In order to avoid numerical problems caused by the sharp turning points of the position-time course and the step-like force-time curve, the transition from the concentric to the eccentric phase, and vice versa, was smoothed by overlaying a ramp function.
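Such a ramp overlay can be sketched in a few lines of Python (the study's preprocessing was done in MATLAB; ramp duration and force values here are illustrative): the step between the concentric and eccentric force levels is replaced by a linear ramp.

```python
import numpy as np

def ramp_transitions(force, ramp_samples=50):
    """Replace abrupt level changes in a piecewise-constant force signal with
    linear ramps of about `ramp_samples` samples to avoid numerical issues."""
    out = force.astype(float).copy()
    steps = np.flatnonzero(np.diff(force) != 0)   # indices of level changes
    for s in steps:
        lo = max(s - ramp_samples // 2, 0)
        hi = min(s + ramp_samples // 2, len(force) - 1)
        out[lo:hi + 1] = np.linspace(force[lo], force[hi], hi - lo + 1)
    return out

# Example: concentric phase at 1100 N followed by an eccentric phase at 1480 N
f = np.concatenate([np.full(500, 1100.0), np.full(500, 1480.0)])
smooth = ramp_transitions(f, ramp_samples=100)
```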
Figure 1: (A) OpenSim model with the quadriceps muscles displayed; overlaps show the range of motion. (B) Simulated knee angle during isokinetic movement (black) and externally applied force at the sole of the foot (red); the dotted line marks the knee angle used for the comparisons in Figs. 2 and 3.

In preparation for the simulation, the movement data were converted with Matlab (MathWorks, Inc., Natick, MA, USA) into position coordinates of the marker points fixed at the foot of the model (Fig. 1A). These subsequently served as input for computing the respective joint angles for every time segment of the movement (inverse kinematics, IK). The resulting angles define the movement of the model and, like the previously processed force data, serve in a next step as the basis for the Computed Muscle Control analysis (CMC) [3]. The simulation yielded the activation and force courses of the leg muscles as a function of the applied external force and the prescribed movement. In the following, only the musculus vastus lateralis (VL), musculus vastus intermedius (VI), musculus vastus medialis (VM) and musculus rectus femoris (RF) are considered. Muscle force and activation were evaluated during the concentric and eccentric phases at knee angles of -78° and -68°, with mean values computed over the three trials.

Results
An increase of the forces generated at the sled of the leg press is reflected in a clear increase of the individual muscle forces in the thigh. The comparison of the quadriceps muscle heads (Fig. 2) shows that the largest share (~40%) is carried by the VL. The shares of VI, VM and RF are similar, each contributing about 20%. Comparing the eccentric and concentric phases, an increase of the muscle forces of about 20% was observed at a knee angle of -78°, and of about 40% at -68°. The activation of the VL depends above all on the direction of movement (Fig. 3), but differences between the two examined knee angles of -68° and -78° are also visible.

Figure 2: Distribution of the muscle forces in the quadriceps muscles at 75% maximal force and knee angles of -68° (red) and -78° (blue)

Figure 3: Activation of the VL as a function of the external force at knee angles of -68° (red) and -78° (blue)

Discussion
The main share of the force development in the quadriceps muscles during leg press training is realised by the VL. Studies [5, 6] document the significant difference between the VM force and the other quadriceps muscle heads. The distribution between VM and VL can be influenced by the angle of the foot relative to the tibia. A lower muscle activation at clearly higher force production in the eccentric phase [4] is also reflected in this simulation.

Acknowledgements
The study was supported by the European Regional Development Fund, EU Interreg IVa project N00033 (MOBIL - Mobility of Elderly).

Literature
[1] Delp SL, Anderson FC, Arnold AS, et al. OpenSim: Open-source Software to Create and Analyze Dynamic Simulations of Movement. IEEE Transactions on Biomedical Engineering, 2007.
[2] Millard M, Uchida T, et al. Flexing computational muscle: modeling and simulation of musculotendon dynamics. Journal of Biomechanical Engineering, 135(2), 2013.
[3] Hicks J. User's Guide - OpenSim 3.0, 2011.
[4] Cvecka J, Hamar D, Trimmel L, Vogelauer M, Bily W, Löfler S, Kern H. Einfluss von serial stretch loading auf die Effektivität des isokinetischen Krafttrainings. Basic Applied Myology, 19(4), 2009.
[5] Shweta S, Priyaranjan M, Sandhu JS. Peak torque and IEMG activity of quadriceps femoris muscle at three different knee angles in a collegiate population, 9(1), 2011.
[6] Alkner BA, Tesch PA, Berg HE. Quadriceps EMG/force relationship in knee extension and leg press. Medicine & Science in Sports & Exercise, 32(2), 2000.

Student Competition Part 2


COMPLIANCE MONITORING FOR ELECTRICAL STIMULATION TRAINING IN HOME THERAPY

M. Hendling 1, M. Krenn 1, M. Haller 1, S. Loefler 2, H. Kern 2,3, W. Mayr 1
1 Zentrum für Medizinische Physik und Biomedizinische Technik, Medizinische Universität Wien, Austria
2 Ludwig Boltzmann Institut für Elektrostimulation und physikalische Rehabilitation, Austria
3 Institut für Physikalische Medizin und Rehabilitation, Wilhelminenspital, Austria

Abstract
Muscle training by means of neuromuscular electrical stimulation as home therapy requires sufficient monitoring. In the course of a clinical study, the anterior thigh muscles of a group of seniors were stimulated over a period of 9 weeks. The compliance evaluation of 10 subjects (7 female) is presented here. The compliance software was implemented in Visual Studio C# and evaluates the stimulation parameters of a custom-built voltage-controlled stimulator, recorded during training. Across all subjects, a mean stimulation voltage of 20.47 V and a mean current of 56.02 mA were measured. Monitoring of the home training is an important factor for the quantitative evaluation of the study outcome.

Keywords: evaluation software, stimulator design

Introduction
Many studies show that muscle strength decreases on average by up to 40% with age [1], as measured by isometric contraction of the knee extensors when comparing 25- and 80-year-old subjects. Muscle training can lead to a considerable improvement of muscle strength, power and functional abilities of older persons [2,3]. Neuromuscular electrical stimulation also achieves positive effects and, in the case of reduced mobility, contributes to an increase in muscle strength [4,5]. The aim of this study was to capture the intensity of home therapy by electrical stimulation on the basis of compliance data.

Methods
The stimulation training was carried out on the anterior thigh muscles with a custom-built, programmable, voltage-controlled stimulator [6]. The system consists of a control unit and two channels. Using a serial bus system and individually developed software, the stimulation parameters and the training protocol were programmed. The individual training series of both channels were stored on a Secure Digital (SD) card. During training, the amplitude could be adjusted freely. The compliance data comprised the set amplitude value as well as the measured voltage and current pulses. In particular the current information, such as the delivered charge, constitutes an important parameter for monitoring muscle activation with voltage-controlled stimulators. For the analysis, evaluation software was implemented in Visual Studio C# (Microsoft, Redmond, USA). The most important compliance measures were the mean stimulation voltage and current. The compliance of 10 participants (7 female) of an ongoing study was used for the evaluation. Training lasted 9 weeks, with 2 sessions per week in the first 2 weeks and 3 sessions per week for the remaining time. A session consisted of 3 series; one series lasted 6 minutes in the first 2 weeks and 10 minutes thereafter, i.e. 75 contractions per leg (stimulation time: 3.5 s, pause: 4 s).
The stimulation pulses were rectangular, biphasic and voltage-controlled, with a pulse width of 2 x 300 µs and a rate of 60 pulses per second.

Results
Across all participants, a mean measured voltage of 20.47 V and a mean measured current of 56.02 mA were obtained (Table 1). The evaluation of the complete training of subject A219 (Figure 2) showed fluctuations of voltage and current of +/- 17% and +/- 14%, respectively. One stimulation pulse was recorded per burst (Figure 1-A,B). Figure 1-C shows the manually set amplitude of each burst during one training series (10 minutes, 75 bursts).

Table 1: Mean (m.) voltage and current (standard deviation) over all training sessions of all participants.

Subject   | m. voltage (V) | m. current (mA)
A219      | 17.54 (2.98)   | 57.10 (7.92)
A220      | 12.61 (1.95)   | 43.53 (10.47)
A224      | 30.22 (8.12)   | 71.85 (18.06)
A227      | 22.20 (3.44)   | 48.75 (5.88)
A230      | 22.35 (3.95)   | 49.02 (5.56)
B009      | 27.97 (2.72)   | 53.73 (6.71)
B017      | 15.93 (2.83)   | 54.06 (4.93)
B018      | 16.62 (3.76)   | 58.89 (5.02)
B019      | 20.22 (2.47)   | 62.50 (5.27)
B020      | 19.05 (2.97)   | 60.81 (7.06)
Mean (SD) | 20.47 (5.43)   | 56.02 (8.11)
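Since the delivered charge is named as a key control parameter for voltage-controlled stimulation, the following Python sketch shows how mean voltage, mean current and charge per phase could be derived from the logged pulse samples. The data and the sampling layout are synthetic; this is not the study's C# evaluation software.

```python
import numpy as np

FS = 83_300        # sampling rate of the logged pulses [Hz], per Figure 1
PHASE_S = 300e-6   # phase width of the biphasic pulse: 2 x 300 us

def pulse_stats(current_a, voltage_v):
    """Mean absolute voltage/current over one recorded pulse and charge per phase.

    Charge per phase is approximated as the integral of the positive
    current phase (rectangular biphasic pulse assumed).
    """
    dt = 1.0 / FS
    charge_c = float(np.sum(np.clip(current_a, 0.0, None)) * dt)
    return (float(np.mean(np.abs(voltage_v))),
            float(np.mean(np.abs(current_a))),
            charge_c)

# Synthetic rectangular biphasic pulse: +56 mA then -56 mA, 300 us each phase
n = int(PHASE_S * FS)
i = np.concatenate([np.full(n, 0.056), np.full(n, -0.056)])
u = np.concatenate([np.full(n, 20.5), np.full(n, -20.5)])
v_mean, i_mean, q = pulse_stats(i, u)
print(f"{v_mean:.1f} V, {i_mean * 1e3:.1f} mA, {q * 1e6:.1f} uC per phase")
```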

Discussion
Monitoring of home therapy is important for evaluating the success of a study. The results provide important information about how the user handles the stimulator: incorrect operation and insufficiently intense electrical stimulation training can be detected. The precise measurement of stimulation voltage and current also makes it possible to detect electrode wear.

Acknowledgements
The study was supported by the European Regional Development Fund, EU Interreg IVa project N00033 (MOBIL - Mobility of Elderly).

Figure 1: Stimulation current (A) and voltage (B), sampled at 83.3 kS/s, and the set amplitude value (C) of one training series. Three pulses (burst 1, burst 38, burst 75) are shown. The data refer to training series 48 of participant A219.

Figure 2: Mean amplitude of the measured voltage (A) and current (B). The data refer to the first 48 training series (16 days) of participant A219.

Literature
[1] Lauretani F, Russo CR, Bandinelli S, et al. Age-associated changes in skeletal muscles and their effect on mobility: an operational diagnosis of sarcopenia. J Appl Physiol, vol. 95, no. 5.
[2] Zampieri S, Pietrangelo L, Loefler S, Fruhmann H, Vogelauer M, Burggraf S, et al., Kern H. Lifelong physical exercise delays age-associated skeletal muscle decline. The Journals of Gerontology Series A: Biological Sciences and Medical Sciences, 2014.
[3] Melov S, Tarnopolsky MA, et al. Resistance Exercise Reverses Aging in Human Skeletal Muscle. PLoS One, vol. 2, no. 5.
[4] Kern H, Loefler S, Hofer C, Vogelauer M, Burggraf S, et al., Zampieri S. FES Training in Aging: interim results show statistically significant improvements in mobility and muscle fiber size. European Journal of Translational Myology, 22(1-2): 61-67, 2012.
[5] Bax L, Staes F, Verhagen A. Does neuromuscular electrical stimulation strengthen the quadriceps femoris? Sports Medicine, vol. 35, no. 3.
[6] Krenn M, Haller M, Bijak M, Unger E, Hofer C, Kern H, Mayr W. Safe Neuromuscular Electrical Stimulator Designed for the Elderly. Artificial Organs, vol. 35, no. 3.

SELECTIVITY OF TRANSCUTANEOUS ELECTRICAL STIMULATION OF THE LUMBAR POSTERIOR ROOTS WHEN VARYING THE STIMULATION LEVEL

A. Toth 1, M. Krenn 1, S.M. Danner 1,2, U.S. Hofstoetter 1, K. Minassian 1, W. Mayr 1
1 Zentrum für Medizinische Physik und Biomedizinische Technik, Medizinische Universität Wien, Austria
2 Institut für Analysis und Scientific Computing, Technische Universität Wien, Austria

Abstract
Epidural and transcutaneous electrical stimulation of the lumbar spinal cord can elicit monosynaptic posterior root-muscle (PRM) reflexes. The aim of this study was to demonstrate the selectivity of the activation of reflex responses in the quadriceps (Q) and triceps surae (TS) during transcutaneous stimulation. For this purpose, the rostro-caudal stimulation position was varied using an electrode array. The array with 7 levels was placed on the back at the height of the T11/T12 vertebral bodies. The reflexes were measured electromyographically in the different muscle groups of both legs. The different stimulation levels allowed a selective stimulation and an influence on the recruitment pattern: on average, Q was activated earlier from more rostral stimulation levels and TS from more caudal positions.

Keywords: posterior root-muscle reflexes, spinal cord stimulation, electrode array

Introduction
The applications of epidural and transcutaneous spinal cord stimulation in rehabilitation are manifold and range from the control of spasticity [1,2] to the improvement of motor function and the generation of bilateral extension and rhythmic movement patterns [3]. To further improve the possibilities of the transcutaneous variant, a stimulation effect that is as selective as possible is necessary, among other things. In the present work, this selectivity was tested by means of the so-called posterior root-muscle (PRM) reflexes [4]. Our particular interest lay in the recruitment of the quadriceps (Q) and triceps surae (TS), two muscle groups with separate innervation zones, namely L2-L4 for Q and L5-S2 for TS. Our working hypothesis was that stimulation at more rostral levels would preferentially elicit PRM reflexes in Q, while more caudal positions would rather lead to responses in TS.

Figure 1: Electrode array. Each of the 7 levels consists of three combined Ag/AgCl electrodes.

Methods
Ten healthy subjects (5 women) were recruited for the study. All measurements were performed in the supine position.

Stimulation setup: The stimulation setup consisted of three electrodes with a diameter of 1 cm per level (T-60, Leonhard Lang GmbH, AT), which were placed on the back, one level above the other, up to 4 cm rostral and 8 cm caudal with respect to the vertebral bodies T11-T12 (Fig. 1). Two large counter-electrodes on the abdomen (8 x 13 cm, STIMEX, schwa-medico GmbH, GER) completed the setup. Biphasic stimulation pulses with a pulse width of 1 ms per phase were applied via a current-controlled stimulator (Stimulette R2X, Schuhfried Medizintechnik GmbH, AT).

Data recording: Muscle responses of the quadriceps (Q) and triceps surae (TS) were recorded.

Measurement protocol: Stimulation intensities of up to 125 mA, depending on the subject's tolerance, were applied in 5 mA steps. Per intensity, a double stimulus with an inter-stimulus interval of 30 ms was applied, followed by 3 single stimuli at intervals of 5 s.
The double pulses were used to verify the stimulation of afferent fibres and to detect possible direct activation of efferent fibres and exclude it from further analysis [4].

Data analysis: Peak-to-peak (PTP) amplitudes of the reflex responses were measured. Recruitment curves of the PTP amplitudes were generated and the area under them calculated. The data analysis was performed offline with Matlab R2013a (MathWorks Inc., Natick, MA, USA).
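A sketch of this analysis in Python is given below (the study used Matlab; the array shapes, the synthetic data and the trapezoidal integration rule are illustrative assumptions):

```python
import numpy as np

def peak_to_peak(emg_window):
    """Peak-to-peak amplitude of one EMG reflex-response window."""
    return float(np.max(emg_window) - np.min(emg_window))

def recruitment_area(intensities_ma, responses):
    """Area under the recruitment curve (PTP amplitude vs. stimulation
    intensity), integrated with the trapezoidal rule."""
    ptp = np.array([peak_to_peak(r) for r in responses])
    return float(np.trapz(ptp, intensities_ma)), ptp

# Example: simulated responses growing with intensity (5 mA steps up to 60 mA)
rng = np.random.default_rng(1)
intensities = np.arange(5, 65, 5)
responses = [np.tanh(0.05 * i) * np.sin(np.linspace(0, np.pi, 50)) +
             rng.normal(0, 0.01, 50) for i in intensities]
area, ptp = recruitment_area(intensities, responses)
print(f"recruitment area: {area:.2f} (a.u. * mA)")
```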

Results
On average, stimulation at the more rostral levels elicited PRM reflexes in Q at lower thresholds, while the preferential activation of TS was found at the more caudally located levels (-4 to -8 cm relative to level 0). Level 0 was characterised by an approximately simultaneous activation of both muscle groups. The averaged areas under the recruitment curves revealed a clear difference between the two muscle groups (Fig. 2). Depending on the stimulation level, a predominant stimulation of the upper and/or lower posterior roots can therefore be assumed. In the supine position, the highest amplitudes were obtained for Q at -2 cm and for TS at +6 cm. Figure 3 illustrates the selective activation of Q and TS at the different stimulation levels.

Figure 2: Mean values of the recruitment area, normalised to the maximum over all levels, of all subjects (n=20) for Q and TS at stimulation levels from -8 cm to +4 cm around T11-T12. The error bars show the standard deviation.

Figure 3: Selective muscle recruitment at the 7 different levels, shown for one subject at a stimulation intensity of 55 mA.

Discussion
It could be shown that a selective stimulation close to the activation thresholds is possible. From this, a predominant stimulation of the lumbar spinal cord segments L2-L4 or L5-S2, depending on the stimulation level, can be inferred. Varying the stimulation level thus also shifts the electric field according to the anatomical conditions.

Acknowledgements
This study was supported by the Wings for Life Spinal Cord Research Foundation, project number WFL AT007/11, and by the Vienna Science and Technology Fund (WWTF), project number LS11-057.

Literature
[1] Pinter MM, Gerstenbrand F, Dimitrijevic MR. Epidural electrical stimulation of posterior structures of the human lumbosacral cord: 3. Control of spasticity. Spinal Cord, 38(9).
[2] Hofstoetter US, McKay WB, Tansey KE, Mayr W, Kern H, Minassian K. Modification of spasticity by transcutaneous spinal cord stimulation in individuals with incomplete spinal cord injury. J Spinal Cord Med, 37(2): 202-11, 2014.
[3] Dimitrijevic MR, Gerasimenko Y, Pinter MM. Evidence for a spinal central pattern generator in humans. Annals of the New York Academy of Sciences, 860(1).
[4] Minassian K, Persy I, Rattay F, Dimitrijevic MR, Hofer C, Kern H. Posterior root-muscle reflexes elicited by transcutaneous stimulation of the human lumbosacral cord. Muscle Nerve, 35(3).

PREVENTING FALLS: MISSION POSSIBLE! AN ICT APPROACH TO ASSESS FALL RISK IN OLDER PEOPLE

Andreas Ejupi 1,2
1 Austrian Institute of Technology, Assistive Healthcare Information Technology Group, Austria
2 Vienna University of Technology, Austria

Abstract
Falls are common and the leading cause of injury-related death and hospitalisation in old age. Due to limited health care resources, less expensive and objective fall risk assessments are required. A novel sensor-based fall risk test battery was developed using a combination of a 3D-depth sensor and a 3D-accelerometer. Preliminary results demonstrate the feasibility of the assessment, which was conducted with 135 community-living older people in a laboratory setting and 30 older people in a regular daily-life setting. Temporal and spatial fall risk parameters were automatically extracted from the sensor signals to identify people at high risk of falling.

Keywords: falls, sensor-based fall risk assessment, accelerometer, Microsoft Kinect, exergaming

Introduction
Falls in older people are common and a major public health problem. More than 30% of people older than 65 years, and more than 50% in the age group above 80 years, fall at least once a year [1]. Falls are the most frequent cause of injury-related hospitalization [2]. Globally, about 37.3 million falls require medical attention every year, and many of them are fatal. Falls can be attributed to a wide variety of causes, with poor balance, limited mobility and slow reactions being commonly reported [1-2]. For a targeted and tailored fall prevention program, it is necessary to identify people at high risk and to accurately determine their individual fall risk factors first. Standard clinical fall risk assessments are often described as subjective and qualitative [3]. Because of limited health care resources, objective test equipment (e.g. force platforms or electronic walkways) is not always available. In addition, such clinical assessments usually have to be administered by a trained health professional. Quick, easy-to-administer and simple tests are needed which can be applied by the individual to assess fall risk on a regular basis. My work focuses on the feasibility of sensor-based fall risk assessments. Low-cost and portable sensors hold great promise for more objective, regular and task-specific assessments in clinical and daily-life settings. Figure 1 shows the application areas of sensor-based fall risk assessments and lists the requirements for these new technological tests.

Figure 1: Application areas of sensor-based fall risk assessments and requirements for in-home assessments [4]

Methods
Within the project istoppfalls [5], a directed routine fall risk assessment (further referred to as the istoppfalls assessment) has been developed, aiming at a test which can be used in a semi-supervised clinical setting and an unsupervised home setting. In the assessment, which uses exergaming technologies, the participant is represented as an avatar on the TV screen and can control the avatar's movement via the Microsoft Kinect (Fig. 2). The istoppfalls assessment consists of 1) three balance tests (semi-/near-/full tandem test), in which participants have to stand for 30 seconds under differently challenging conditions, 2) two reaction tests, in which participants have to react as fast as possible, and 3) a five-times sit-to-stand test. Data are recorded with a 3D-depth sensor (i.e. Microsoft Kinect) and a body-worn 3D-accelerometer (Philips Research).
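To give a flavour of the parameter extraction, the Python sketch below derives a postural sway path length from 30 Hz Kinect skeleton data and the duration of the five-times sit-to-stand test from vertical joint positions. It is an illustration only: the study's analysis software is in MATLAB, and the joint choice, thresholds and synthetic data here are invented.

```python
import numpy as np

FS_KINECT = 30  # Hz, skeleton sampling rate of the Microsoft Kinect

def sway_path_length(com_xy):
    """Total horizontal path length of a centre-of-mass proxy (e.g. a spine
    joint) during a 30 s balance test; com_xy has shape (N, 2) in metres."""
    return float(np.sum(np.linalg.norm(np.diff(com_xy, axis=0), axis=1)))

def sit_to_stand_time(spine_y, stand_frac=0.9):
    """Time until the fifth upward crossing of a standing-height threshold,
    counted from the start of the recording (toy criterion)."""
    y = np.asarray(spine_y, dtype=float)
    thresh = y.min() + stand_frac * (y.max() - y.min())
    stands = np.flatnonzero((y[1:] >= thresh) & (y[:-1] < thresh))
    if len(stands) < 5:
        return None
    return float(stands[4] / FS_KINECT)

# Example: synthetic 5x sit-to-stand trace of the spine height [m]
t = np.arange(0, 20, 1 / FS_KINECT)
spine_y = 0.85 + 0.25 * (np.sin(2 * np.pi * t / 4 - np.pi / 2) > 0)
sway = sway_path_length(np.column_stack([0.005 * np.sin(t), 0.005 * np.cos(t)]))
print(f"sway path: {sway:.3f} m, 5x sit-to-stand: {sit_to_stand_time(spine_y)} s")
```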
For my doctoral thesis I tested 135 community-dwelling older adults living in the area of Sydney, Australia. The sensor-based istoppfalls-assessment and standard clinical fall risk assessments (e.g. Physiological Profile Assessment, Timed Up and Go) were conducted with the participants visiting the laboratory.

Figure 2: Schematic representation of the sensor-based istoppfalls-assessment [6]

The inclusion criteria were living in the community, aged 65 years or older and being ambulant with or without the use of a walking aid. The exclusion criteria were: medically unstable, suffering from major cognitive impairment (Mini-Cog < 3), neurodegenerative disease or color blindness. In addition to the laboratory tests, a subsample of 30 participants got the istoppfalls-assessment system installed in their homes and were instructed to perform the unsupervised assessment on a regular basis over the next 4 months. This research is ongoing and will be running until August.

Data acquisition and processing
During the istoppfalls-assessment, skeleton data of anatomical landmarks were recorded using the Microsoft Kinect Software Development Kit at a sampling rate of 30 Hz. In addition, 3D-accelerometer (ADXL362, ±8 g) data were acquired at a sampling rate of 50 Hz with a wearable device from Philips Research Europe. Analysis software was developed in MATLAB (R2013b) to automatically extract fall-risk-related temporal and spatial parameters from the sensor signals for each test.

Results
Preliminary results demonstrate the feasibility of the sensor-based istoppfalls-assessment to discriminate between fallers and non-fallers. 44 participants had a fall within the past 12 months and 91 did not have a fall. Temporal and spatial parameters were automatically extracted from the Microsoft Kinect and accelerometer signals. Based on the laboratory assessment, fallers showed significantly slower reaction times (+150 ms) compared to the non-fallers in the reaction tests [6]. For the sit-to-stand test, fallers needed longer to complete the five transitions (+2000 ms). More fallers were not able to stand for 30 seconds in the balance tests. To date, 23 (out of 30) participants performed the istoppfalls-assessment 38 times on their own in the first two months after installation.

Discussion
The feasibility of a sensor-based, low-cost and portable fall risk assessment to identify people at high risk of falling has been shown. The istoppfalls-assessment is simple, quick and easy to administer and can therefore be used in a semi-supervised clinical or unsupervised home setting. Recent technological fall risk studies have used accelerometers in laboratory settings before [3]. In this research the Microsoft Kinect, a commercially available consumer device, and a 3D-accelerometer were used in combination with an assessment based on exergaming technology. With this combination, temporal and spatial parameters were extracted using specifically developed software and algorithms. The preliminary findings are in agreement with the literature on falls and ageing: a body of literature demonstrates that slower reactions, a longer sit-to-stand time and increased postural sway are related to a higher risk of falling [1-2,4]. Future research will examine the predictive ability of the extracted fall risk parameters to identify people at high risk of falling from the described unsupervised home assessment data.

Literature
[1] M. E. Tinetti and C. Kumar, The Patient Who Falls, Journal of the American Medical Association, vol. 303, no. 3, pp ,
[2] K. Delbaere, J. C. T. Close, J. Heim, P. S. Perminder, H. Brodaty, M. J. Slavin, N. A. Kochan and S. R. Lord, A multifactorial approach to understanding fall risk in older people, Journal of the American Geriatrics Society, vol. 58, no. 9, pp ,
[3] J. Howcroft, J. Kofman, and E. D. Lemaire, Review of fall risk assessment in geriatric populations using inertial sensors, vol. 10, no. 1, p. 91,
[4] A. Ejupi, S. R. Lord and K. Delbaere, New methods for fall risk prediction, Current Opinion in Clinical Nutrition and Metabolic Care, forthcoming,
[5] istoppfalls, a fall prevention project. [Online]. Available:
[6] A. Ejupi, M. Brodie, Y. J. Gschwind, D. Schoene, S. R. Lord and K. Delbaere, Choice Stepping Reaction Time test using Exergame technology for fall risk assessment in older people, in review,
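The kind of automatic parameter extraction described in this contribution can be sketched as follows. This is purely illustrative and not the project's MATLAB implementation; the signal, the resting level and the threshold are hypothetical.

```python
import numpy as np

def sit_to_stand_duration(acc_vertical, fs=50.0, rest_g=1.0, thresh=0.08):
    """Estimate the duration of one sit-to-stand transfer from the vertical
    acceleration (in g): the span where the signal deviates from the resting
    gravity level by more than `thresh`. A deliberately simple heuristic."""
    active = np.abs(acc_vertical - rest_g) > thresh
    idx = np.flatnonzero(active)
    if idx.size == 0:
        return 0.0
    return (idx[-1] - idx[0]) / fs  # seconds from first to last active sample

# Hypothetical 50 Hz recording: 1 s rest, a 1.5 s transfer, 1 s rest.
fs = 50
t = np.arange(0, 3.5, 1 / fs)
acc = np.ones_like(t)                        # resting at 1 g
burst = (t >= 1.0) & (t < 2.5)
acc[burst] += 0.3 * np.sin(2 * np.pi * (t[burst] - 1.0) / 1.5)

print(f"estimated transfer duration: {sit_to_stand_duration(acc, fs):.2f} s")
```

For the five times sit-to-stand test, the same idea would be applied across five consecutive transfers and the total time compared between fallers and non-fallers.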

SIMULATION AND EVALUATION OF THE CRYOABLATION PROCEDURE FOR TREATMENT OPTIMIZATION OF CARDIAC ARRHYTHMIAS

Michael Handler
Institute of Electrical and Biomedical Engineering, UMIT, 6060 Hall i. T., Austria

Abstract
Cardiac cryoablation is a minimally invasive procedure for the treatment of cardiac arrhythmias that cools the responsible tissue to freezing temperatures. Computer simulations can be used to analyze and improve different cryoablation scenarios and possible applicator geometries. In this thesis a new simulation framework was created that considers the characteristic phases of the refrigerant and the blood stream surrounding the applicator during cardiac cryoablation; it was verified by in-vivo measurements and data from the literature. Using this framework, realistic temperature distributions of simulated cardiac cryoablation scenarios can be obtained and valuable recommendations for clinical applications can be made.

Keywords
Cardiac cryoablation, simulation, finite element method, applicator geometries, temperature profiles

Introduction
Cardiac cryoablation is a minimally invasive procedure for the treatment of cardiac arrhythmias in which the tissue responsible for the arrhythmia is cooled to freezing temperatures. Advantages of cardiac cryoablation compared to the commonly applied radiofrequency (RF) ablation by heat include less endothelial disruption, cryoadhesion of the applicator to the target ablation zone, and the ability to evaluate ablation sites before the actual treatment (cryomapping). However, longer intervention durations and higher recurrence rates compared to RF ablation have been reported [1]. To analyze and improve different cryoablation scenarios and the influence of variations in applicator geometries, computer simulations can be used prior to expensive and time-consuming validation experiments. Compared to simulations of ablation procedures in other fields of cryosurgery, different cooling techniques have to be considered in the simulation of cardiac cryoablation, as well as the direct contact with streaming blood surrounding the applicator.

Methods
In this thesis a simulation environment based on the finite element method was created to simulate the temperature distribution of the applicator, the treated myocardial tissue and the surrounding blood during cardiac cryoablation scenarios. To account for heat contributions by perfusion and metabolism in living tissue, Pennes' bioheat equation was selected, in combination with the effective heat capacity model to capture the phase change of tissue/blood over a given temperature range [2]. To reduce the loss of cooling capacity to the surrounding blood stream for a loop-shaped applicator variant [3], the effects of different insulation layers were analyzed, considering the anisotropic thermal properties of the applicator and the insulation layer. Stationary temperature distributions were used to evaluate the cooling flux between the applicator and both tissue and blood [4] (see Figure 1).

Figure 1: Simulated temperature distribution of a loop-shaped applicator with an outer insulation layer [4]. Isolines are plotted in 10 °C steps (cyan isoline: 0 °C). The same colormap as in Figure 3 was used.

For a tip applicator, characteristic temperature profiles at the tip and at the epicardium as well as refrigerant flow rates during cardiac cryoablation were obtained from in-vivo experiments.
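For reference, Pennes' bioheat equation and the effective heat capacity model mentioned above can be written in their standard textbook form (cf. [2]; this notation is not quoted from the thesis itself). Here T is the tissue temperature, rho and c the tissue density and specific heat, k the thermal conductivity, w_b the blood perfusion rate, rho_b, c_b and T_b the density, specific heat and arterial temperature of blood, q_m the metabolic heat source, and L the latent heat released over the phase transition range [T_l, T_u].

```latex
% Pennes' bioheat equation: conduction + perfusion + metabolic heat
\[
\rho c \,\frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \, \nabla T \right)
  + \rho_b c_b w_b \left( T_b - T \right)
  + q_m
\]

% Effective heat capacity model: the latent heat L is distributed
% over the phase transition temperature range [T_l, T_u]
\[
c_{\mathrm{eff}}(T) =
\begin{cases}
  c(T) + \dfrac{L}{T_u - T_l}, & T_l \le T \le T_u, \\
  c(T), & \text{otherwise}
\end{cases}
\]
```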
The recorded flow rates were used to estimate the cooling capacities of the characteristic phases of the refrigerant, which were subsequently applied as time- and temperature-dependent boundary conditions at the inner surface of the applicator. The temperature recordings were used to verify the temperature distribution provided by the simulation. In addition, material properties and boundary conditions of the model were adapted based on the given in-vivo data [5]. Effects of different tip applicator positionings during the cryoablation procedure were analyzed, and temperature profiles at the applicator tip [6] and at the epicardium were used to verify the simulations.

The created model was used to analyze transmural myocardial temperature profiles for different ablation scenarios with multiple freeze-thaw cycles, focusing on the different effects responsible for tissue injury [7] (see Figure 2).

Figure 2: Transmural temperature profiles over time at different depths below the tip applicator for a double freeze-thaw cryoablation protocol of 300 s freezing with an interim thawing phase of 10 s after a 150 s first freezing phase [7]. The green lines represent the -20 °C, -10 °C and 0 °C isolines from bottom to top. The same colormap as in Figure 3 was used.

Additionally, a new tip applicator prototype was simulated to analyze the effects of a design modification intended to increase the surface for heat exchange with the refrigerant [8].

Figure 3: Simulated temperature distribution of a tip applicator prototype after 4 min of freezing [8]. Isolines are plotted in 10 °C steps. The outer and inner white lines are the 0 °C and -10 °C isolines.

Results
Simulated temperature profiles showed a high correlation with temperature data registered at the applicator tip in in-vivo experiments; additionally, similarities between the ice ball dimensions in the simulation and lesion dimensions reported in the literature could be verified [5]. For different applicator positionings during a simulated cryoablation scenario, large variations in ice ball dimensions were revealed; however, only small variations in the temperature profiles of the applicator tip were seen in the simulation [6], which is consistent with findings from the literature. Investigating simulated transmural temperature profiles for double freeze-thaw scenarios, variations in the effects responsible for cell death were identified depending on the durations of the freeze-thaw phases [7]. Modifications of applicator geometries could be assessed (insulation layers [4] and dimensioning of refrigerant outlets [3] on loop-shaped applicators; increase of the heat exchange surface between refrigerant and tip applicator [8]) and reasonably discussed.

Discussion
In this work a powerful simulation framework was developed and evaluated, enabling the simulation of realistic temperature distributions in cardiac cryoablation scenarios, from which valuable recommendations for clinical application can be derived. Although exact dimensions of ablated tissue could not yet be extracted from simulated temperature profiles, lethal temperature boundaries known from the literature could be successfully verified. Results of this work are already being applied to other research questions focusing on the simulation of electrophysiological changes at cell death during cardiac cryoablation.

Acknowledgement
This work is funded by the K-Regio project of the Standortagentur Tirol, Innsbruck, Austria and by the European Regional Development Fund (ERDF).

References
[1] B. Schwagten, Y. Van Belle and L. Jordaens, "Cryoablation: how to improve results in atrioventricular nodal reentrant tachycardia ablation?," Europace, vol. 12, no. 11, pp , November
[2] J. Liu, "Bioheat transfer model," John Wiley & Sons, Inc.,
[3] M. Seger, G. Fischer, M. Handler, M. Stöger, C. Nowak, F. Hintringer, G. Klima and C. Baumgartner, "Achieving elongated lesions employing cardiac cryoablation: A preclinical evaluation study," Cryobiology, no. 65, pp ,
[4] M. Handler, G. Fischer, M. Seger, R. Kienast, A. Schütte and C. Baumgartner, "Effect of insulating layers in loop applicators for cardiac cryoablation," in BIOTECHNO 2012: The Fourth International Conference on Bioinformatics, Biocomputational Systems and Biotechnologies,
[5] M. Handler, G. Fischer, M. Seger, R. Kienast, C.-N. Nowak, D. Pehböck, F. Hintringer and C. Baumgartner, "Computer simulation of cardiac cryoablation: Comparison with in vivo data," Medical Engineering & Physics, no. 35, pp ,
[6] M. Handler, R. Kienast, G. Fischer, M. Seger, C.-N. Nowak, M. Popovscaia, F. Hanser and C. Baumgartner, "Effects of tip-applicator positioning on recorded temperature profiles in cryoablation procedures estimated by finite element modeling," Biomed Tech 2013,
[7] M. Handler, G. Fischer, R. Kienast, M. Seger, F. Hanser and C. Baumgartner, "Simulation and evaluation of freeze-thaw cryoablation scenarios for the treatment of cardiac arrhythmias," Physics in Medicine and Biology, 2014 (submitted).
[8] M. Handler, G. Fischer, R. Kienast and C. Baumgartner, "Simulating effects of increased heat transfer surfaces between applicator tip and refrigerant in cardiac cryoablation," in ÖGBMT 2014, Hall in Tyrol, 2014 (submitted).

BIG DATA IN THE HOSPITAL - A FRAMEWORK AND ARCHITECTURE FOR THE SECONDARY USE OF ROUTINE CLINICAL DATA

W.O. Hackl 1, E. Ammenwerth 1
1 Institut für Biomedizinische Informatik, UMIT - Private Universität für Gesundheitswissenschaften, Medizinische Informatik und Technik, Hall in Tirol, Austria

Abstract
Based on theoretical considerations and literature studies, a general framework and process model as well as a concrete guideline for the integration and exploitation of routine clinical data for cross-patient analyses were developed. These were validated and refined in a comprehensive case study, in which the theoretical concepts were also put into practice: a Nursing Data Mart and a Nursing Intelligence System were built, which have since been adopted into routine use at the management level and enable the cross-patient analysis of currently more than 30 million records from several years of nursing process documentation for a wide variety of questions.

Keywords
Secondary use, routine clinical data, clinical data integration, clinical intelligence

Introduction
The turn of the 21st century marked the transition from the technological age to the information age. Never before has it been so easy to generate, communicate and process data in order to derive information and new knowledge from them. Data are therefore called the "gold of the future", and the term BIG DATA, expressing the high expectations attached to the use of the enormous data collections now available, is on everyone's lips [1]. In medicine and nursing, a vast wealth of routine data accumulates during diagnostic, therapeutic and follow-up processes; so far, these data have been used almost exclusively casuistically, i.e. for the individual treatment case. It is expected, however, that these data still hold a great deal of untapped potential for answering a wide variety of questions (e.g. quality management, medical controlling, clinical research, process management and optimization, staff planning, resource allocation, etc.) [2,3]. For such secondary-use purposes, however, the data must be prepared appropriately. Methods from the business domain (e.g. data warehousing and business intelligence) can be employed for this [4,5]. The clinical context, however, is so complex that existing methods are only partially suitable, and there are hardly any comprehensive, systematic and, above all, robust and practicable approaches to the successful implementation of secondary use of treatment-process-related routine clinical data [6]. The aim of the dissertation presented here [7] was therefore to develop a scientifically sound yet practical and comprehensive approach to the exploitation and secondary use of routine data from clinical and nursing process documentation.

Methods
Figure 1 gives an overview of the individual objectives and the methodological approach.

Figure 1: Objectives and approach

Starting from theoretical considerations and literature studies, a first framework was developed, which served as the basis for a case study conducted at a large Austrian university hospital. In the case study, concrete concepts for clinical data warehousing and the secondary use of routine nursing data were developed and implemented as prototypes. Prototype development followed a spiral process model: results from the individual spiral cycles were fed back in feedback loops and used to validate and refine the concepts. Finally, a concrete guideline for the exploitation and secondary use of routine nursing data was produced.

Results
Framework and process model: A general framework and process model for the secondary use of data from treatment-process-related routine clinical documentation was developed. It provides an overview of the types or classes of questions that can be answered with routine clinical data, describes requirements for analysis tools, and explains in great detail the steps necessary to make routine data usable for cross-patient questions.

Data warehouse and secondary-use concept: The secondary-use concept describes the concrete analysis questions and the requirements for an analysis tool, and selects the nursing data relevant for answering these questions. In the data warehouse concept, a Nursing Data Mart (NDM) was designed. In developing the data model for the NDM, care was taken that, on the one hand, the concrete questions can be answered while, on the other hand, the model remains as flexible as possible and can be extended at any time, so that it will also support the answering of further, as yet unknown questions in the future.

Developed prototypes: The concepts developed in the case study were also implemented prototypically using open-source components. Both the NDM and the web-based Nursing Intelligence System (NIS) designed for data analysis have reached a level of maturity that makes them usable in practice. Meanwhile, data from other institutions of the hospital operator of the case study are also being integrated. The NDM and NIS have been adopted into routine use at the management level and enable the cross-patient analysis of currently more than 30 million records from several years of nursing process documentation from different institutions for a wide variety of questions.

Guideline for the secondary use of nursing data: This guideline concretizes the general framework and process model for use in the application domain of nursing. In addition to a detailed description of all necessary steps, it contains numerous suggestions for possible nursing-specific secondary analyses, including the data elements required for them. The guideline is designed so that it can be applied directly in other institutions.

Discussion
In contrast to existing approaches, which mainly address technical or organizational aspects, the framework takes the particularities of the clinical context into account and can now also be used in other institutions, with other conditions and clinical processes, and for other underlying questions and source data. For this, it must of course be adapted to the respective situation. What such an adaptation can look like was shown with the guideline for the exploitation and secondary use of routine data from nursing documentation: it builds directly on the framework and concretizes it for the specific application domain of nursing. The guideline is directly applicable without major adaptation steps and can be used to build a Nursing Data Mart and Nursing Intelligence System in any hospital in which (semi-)structured electronic nursing documentation is in use. The practicability of the developed concepts was demonstrated in the case study. Beyond that, the developed approach is already being used in a research project in which a long-term registry is being designed that will incorporate routine clinical data in order to investigate the effects of certain radiotherapeutic procedures on the development of secondary malignancies. The present work can thus contribute to enabling and intensifying the secondary use of routine clinical data in health care institutions.

Acknowledgement
Parts of this work were funded by the Tiroler Wissenschaftsfonds (GZ: , ).

Literature
[1] Mayer-Schönberger V, Cukier K. Big Data: A Revolution that Will Transform how We Live, Work, and Think. Houghton Mifflin Harcourt;
[2] Aller RD. The clinical laboratory data warehouse. An overlooked diamond mine. Am J Clin Pathol. Dec;120(6):
[3] De Lusignan S, Liaw ST, Michalakidis G, Jones S. Defining datasets and creating data dictionaries for quality improvement and research in chronic disease using routinely collected data: an ontology-driven approach. Inform Prim Care. 2012;19(3):
[4] Li P, Wu T, Chen M, Zhou B, Xu WG. A study on building data warehouse of hospital information system. Chin Med J (Engl). Aug;124(15):
[5] Stolba N, Tjoa AM. The Relevance of Data Warehousing and Data Mining in the Field of Evidence-based Medicine to Support Healthcare Decision Making. International Journal of Computer Systems Science & Engineering. 2007;3(3):143-8
[6] Samaha TR, Croll PR. A Data Warehouse Architecture for Clinical Data Warehousing. In: Roddick F, Warren JR, editors. Proceedings Australasian Workshop on Health Knowledge Management and Discovery (HKMD 2007). CRPIT; Ballarat, Victoria: Australian Computer Society; p.
[7] Hackl WO. Erschließung und Sekundärnutzung von Routinedaten aus der klinischen und pflegerischen Prozessdokumentation: Ein Rahmenkonzept, Vorgehensmodell und Leitfaden. Dissertationsschrift (Dr. techn.). UMIT, Hall in Tirol:
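To make the data mart idea concrete, the following sketch sets up a minimal star schema of the kind a Nursing Data Mart might use: one fact table for documented nursing activities surrounded by dimension tables. It is an illustration only, not the schema developed in the dissertation; all table and column names are hypothetical.

```python
import sqlite3

# Minimal star schema for cross-patient analyses (names are hypothetical).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_patient (patient_key INTEGER PRIMARY KEY, sex TEXT, birth_year INTEGER);
CREATE TABLE dim_unit    (unit_key    INTEGER PRIMARY KEY, unit_name TEXT);
CREATE TABLE dim_date    (date_key    INTEGER PRIMARY KEY, iso_date TEXT);
CREATE TABLE fact_nursing_activity (
    patient_key  INTEGER REFERENCES dim_patient(patient_key),
    unit_key     INTEGER REFERENCES dim_unit(unit_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    activity     TEXT,
    duration_min REAL
);
""")

con.execute("INSERT INTO dim_patient VALUES (1, 'f', 1942)")
con.execute("INSERT INTO dim_unit VALUES (1, 'Ward A')")
con.execute("INSERT INTO dim_date VALUES (20140101, '2014-01-01')")
con.execute("INSERT INTO fact_nursing_activity VALUES (1, 1, 20140101, 'mobilisation', 15.0)")

# A typical cross-patient management query: nursing effort per ward and day.
for row in con.execute("""
    SELECT u.unit_name, d.iso_date, SUM(f.duration_min)
    FROM fact_nursing_activity f
    JOIN dim_unit u ON u.unit_key = f.unit_key
    JOIN dim_date d ON d.date_key = f.date_key
    GROUP BY u.unit_name, d.iso_date
"""):
    print(row)
```

The dimensional layout is what keeps such a model extensible: new analysis questions can usually be served by adding dimensions or facts without touching the existing tables.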

Minisymposium: Modeling and Simulation of the Heart

Lectures

MULTISCALE MULTIPHYSICS MODELING OF TOTAL CARDIAC FUNCTION: FROM BASIC SCIENCE TO CLINICAL APPLICATIONS (G. Plank)
HIGH-PERFORMANCE SIMULATIONS FOR CARDIAC ELECTROMECHANICAL MODELS (C. Augustin)
MULTISCALE MODELING OF CALCIUM-MEDIATED PREMATURE VENTRICULAR COMPLEXES (F. Campos)
PREDICTING RESPONSE TO CARDIAC RESYNCHRONISATION THERAPY THROUGH COMPUTATIONAL MODELLING (A. Crozier)
AN EFFICIENT FINITE ELEMENT APPROACH FOR MODELING FIBROTIC CLEFTS IN THE HEART (C. Mendonca Costa)
SCALABLE ACCELERATED ITERATIVE SOLVERS FOR CARDIAC ELECTRO-MECHANICS (A. Neic)
EXPERIMENTAL VALIDATION OF COMPUTATIONAL MODELS OF CARDIAC ELECTROMECHANICS (E. Hofer, A. J. Prassl)


MULTISCALE MULTIPHYSICS MODELING OF TOTAL CARDIAC FUNCTION: FROM BASIC SCIENCE TO CLINICAL APPLICATIONS

C. Augustin 1, C. Costa 1, F. Campos 1, A. Crozier 1,2, A. Neic 1, A.J. Prassl 1, E. Hofer 1, G. Plank 1,3
1 Institut für Biophysik, Medizinische Universität Graz, Austria
2 Biomedical Engineering Department, King's College London, UK
3 Oxford e-Science Research Centre, University of Oxford, UK

Abstract
The heart is a highly complex multiphysics organ whose main function is to propel blood around the circulatory system, thus providing oxygen and metabolites to the organs. Despite the wealth of data available today in the era of postgenomic biology, and the significant advances made in clinical imaging which provide image datasets at an unprecedented resolution, using such data to improve the treatment of cardiovascular disease remains challenging. Mathematical models of integrated cardiac function are a promising approach to harnessing such data for gaining better insight into the complex interplay of biological processes across different spatial and temporal scales as well as between the different physics involved. However, the methodological frameworks required for performing advanced experiments are vastly complex, and numerous methodological issues have to be addressed to make modeling an additional modality applicable in clinical scenarios for optimizing therapies and predicting outcomes. This minisymposium will address selected methodological as well as applied topics in both basic and clinical research.

Keywords
Cardiac modeling, electrophysiology, biomechanics, numerical methods, scientific computing

Introduction
The physiological function of the heart is regulated by a cascade of processes in which a propagating electrical wavefront controls mechanical contraction and relaxation. Any disturbance in this highly ordered sequence of events may trigger severe malfunctions, either with immediately lethal consequences, such as the formation of reentrant fibrillatory activation patterns which precede sudden cardiac death, or by initiating a progressive degradation of pumping efficiency which eventually leads to severe morbidity and mortality, as is common with pathologies such as heart failure. Today, in the era of postgenomic biology, a wealth of biological data is available to research which can be used to unravel the mechanisms underlying cardiac function in health and disease. However, a key challenge in harnessing these data, gathered at various scales of biological organization, lies in the complexity of cardiac function, which emerges from complex interactions between processes within and across the hierarchical levels of organization. This is reflected in cause-effect relationships which are difficult, if not impossible, to dissect by reasoning alone. Comprehensive multiscale modeling frameworks are deemed a promising approach, as they facilitate mechanistic inquiries into these causal relationships. A further challenging problem is to gain insight into mechanisms governed by interactions across different physics. This is of critical clinical importance for treating complex cardiovascular diseases such as heart failure, where electrophysiological, mechanical and fluidic components are factors contributing to disease progression. In such scenarios, multiphysics models may be of great utility due to their ability to link data on electrophysiology, anatomy and pathological substrate alterations such as infarct scars or fibrosis, mechanical performance, fluid flow and fluid-structure interaction into a coherent representation of a patient's cardiovascular system. It is anticipated that such personalized multiphysics models will play a critical role as a clinical tool for the planning of therapies as well as for outcome prediction [1]. While cardiac modeling is among the most promising research approaches for addressing these challenges, the methodological frameworks needed for building robust and efficient environments for in-silico experimentation are vastly demanding (Fig. 1). The major topics research is currently focusing on are: 1) various complementary imaging and other diagnostic modalities have to be combined to provide the basic information for characterizing the anatomy, structure and function of the heart; 2) a complex set of processing steps, such as multimodal registration and segmentation, is necessary to extract information from image datasets which provides a basis for model parametrization; 3) in a global data assimilation process, the extracted information has to be combined with a priori knowledge to parametrize model components which cannot be measured directly; 4) organ domains have to be tessellated [2] and model equations have to be discretized and solved, requiring cutting-edge numerical methods to allow for sufficiently short simulation-analysis cycles [3,4]; 5) verification of modeling software and validation of model outputs by comparing in-silico predictions with experimental or clinical evidence is key for future clinical applications [5].

Figure 1: Components of an in-silico experimentation environment: electrophysiological model for computing electrical activation and repolarization sequences, excitation-contraction coupling (ECC) and mechano-electric feedback (MEF); structural mechanical deformation model linked to a lumped model of the cardiovascular system; postprocessing tools for computing the electrocardiogram (ECG), magnetocardiogram (MCG) and optical maps to facilitate comparison with clinical and experimental recordings.

Objectives of the Mini-Symposium
The mini-symposium on multiscale-multiphysics modeling of total cardiac function reviews current trends and showcases recent applications and methodological developments in computational cardiology, including combined experimental and theoretical work on model validation.

Acknowledgement
This research is supported by the Austrian Science Fund Grant F3210-N18, the National Heart, Lung, and Blood Institute Grant RO1-HL and the European Commission FP7 Grant CardioProof.

Literature
[1] N. Smith, A. de Vecchi, et al. euheart: personalized and integrated cardiac care using patient-specific cardiovascular modelling. Interface Focus, 1(3): ,
[2] A.J. Prassl, F. Kickinger, H. Ahammer, E. Hofer, J.E. Schneider, E.J. Vigmond, N.A. Trayanova, and G. Plank. Automatically generated, anatomically accurate meshes for the cardiac bidomain equations. IEEE Trans. Biomed. Eng., 56(5): ,
[3] S. A. Niederer, L. Mitchell, N.P. Smith, and G. Plank. Simulating human cardiac electrophysiology on clinical time-scales. Front Physiol, 2:14,
[4] A. Neic, M. Liebmann, E. Hoetzl, L. Mitchell, E.J. Vigmond, G. Haase, and G. Plank. Accelerating cardiac bidomain simulations using graphics processing units. IEEE Trans Biomed Eng, 59(8): ,
[5] G. Plank, A. J. Prassl, R. Arnold, Y. Rezk, T. E. Fastl, E. Hofer, C. M. Augustin. Multiscale-multiphysics models of ventricular electromechanics - computational modeling, parametrization and experimental validation. XIII Mediterranean Conference on Medical and Biological Engineering and Computing 2013, 41: ,
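As background to the electrophysiology component discussed in this overview, the cardiac bidomain equations referenced above (cf. [2]) can be stated in their standard form; this is the textbook formulation, not an excerpt from the abstract. Here phi_i and phi_e are the intra- and extracellular potentials, V_m = phi_i - phi_e the transmembrane voltage, sigma_i and sigma_e the conductivity tensors, beta the membrane surface-to-volume ratio, C_m the membrane capacitance, and eta the state variables of the ionic model.

```latex
% Bidomain equations (standard formulation)
\[
\begin{aligned}
\nabla \cdot \left( \sigma_i \nabla \phi_i \right) &= \beta I_m, \\
\nabla \cdot \left( \sigma_e \nabla \phi_e \right) &= -\beta I_m, \\
I_m &= C_m \frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}(V_m, \eta).
\end{aligned}
\]
```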

INFLUENCE OF VARIATIONS IN THE ANGLE OF DIFFERENT EXCITATION DIRECTIONS IN ISOTROPIC CARDIOMYOCYTE MONOLAYERS

R. Kienast 1, M. Stöger 1,2, M. Handler 1, F. Hanser 1, C. Baumgartner 1
1 Institute of Electrical and Biomedical Engineering, UMIT, Hall in Tyrol, Austria
2 Division of Internal Medicine III / Cardiology, Medical University Innsbruck, Austria

Abstract
As previously demonstrated, signal recordings of randomly grown isotropic cardiac monolayers using microelectrode arrays (MEAs) are highly sensitive to alternating spread directions. However, the influence of the angle between two wavefront propagations on changes in the electrophysiology of isotropic cardiac cell preparations is unknown. We therefore determined the angle between endogenously active pacemaker centers in n=7 primary chicken cardiomyocyte monolayers and calculated relative changes for each of the three investigated intrinsic features of field potentials (FPs), FP_rise, FP_MIN and FP_dur, as well as for the conduction velocity (CV). We could demonstrate that even small changes in the angle of excitation have a significant influence on the considered FP features and CV.

Keywords
Cardiomyocyte cell layers, spread dependency, microelectrode arrays

Introduction
In a previous work we demonstrated the influence of the excitation direction on alterations of extracellularly recorded field potentials (FPs) based on local anisotropies in myocardial monolayers using microelectrode arrays (MEAs) [1]. It is, however, not evident how sensitively these electrophysiological alterations respond to changes of the excitation entry point into the observed electrode registration area of the cell layer. This leads to the following scientific question: do small alterations in the angle of the excitation wavefront have less impact on cardiac electrical activity than large ones? To examine this entry-point-dependent response, detailed information on the entry angle of the excitation wavefront relative to the observed region was calculated using an in-house developed framework. Using this approach we were able to demonstrate that even small changes in the entry point of excitation lead to significant electrophysiological changes at single points of a randomly grown myocardial cell layer.

Methods
Cell cultivation: As previously described in [2], the ventricles of twelve-day-old chicken embryos were used for cell preparations. The cellular suspension was plated onto commercially available planar MEAs integrated in a culture dish (Multi Channel Systems, Reutlingen, Germany). Based on this procedure, n = 7 primary, firmly attached and spontaneously beating cell cultures comprising at least two endogenously active pacemakers were obtained to examine the influence of changes in the excitation direction.

MEA recording: Each culture dish with a built-in MEA was mounted on a recording system (Multi Channel Systems, Reutlingen, Germany) allowing data to be registered simultaneously from all 60 channels at a temperature of 37 °C. The sampling frequency was set to 20 kHz at a bandwidth of 1 Hz to 3 kHz.

Intrinsic FP features: To examine electrophysiological alterations at the cellular level we investigated the following FP features according to Halbach et al. [3]: FP_rise, which correlates with the action potential (AP) upstroke time; FP_MIN, which depends on the Na+ current; and FP_dur, which is closely related to the AP repolarization time. Additionally, we considered the CV locally at each single electrode.

Data analysis: Data were analyzed offline with an in-house developed software tool based on MATLAB (The Mathworks, Natick, MA, USA). In a first step, this tool analyzes each MEA channel separately and extracts the relevant intrinsic features of the recorded FPs. In a second step, to enable the reconstruction of single excitation wavefronts, each detected FP is assigned to a single contraction of the cell layer, and the CV is calculated at each single electrode. Thirdly, an in-house developed algorithm, recently described in [2], estimates the spatio-temporal distribution of all active pacemaker centers inside and outside of the MEA registration area. Finally, the angles of the active pacemakers relative to each single electrode of the MEA were calculated. The data were then arranged in a dataset structure allowing effective analysis. To examine the angular dependency, the angle between each pair of endogenous pacemaker centers and the change of the respective feature were obtained and sorted into bins of 10° each. Angles up to 150° were detected. Due to varying values of the investigated features between different myocardial cell layers, relative changes were used to allow for comparisons (Figure 1).
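To make the three FP features tangible, the following sketch extracts them from a synthetic field potential trace. The feature definitions are deliberately simplified and the signal, thresholds and parameters are hypothetical; this is not the authors' MATLAB tool.

```python
import numpy as np

def fp_features(fp, fs=20_000):
    """Extract simplified intrinsic features from one field potential trace.
    Illustrative definitions only, with hypothetical thresholds."""
    i_min = int(np.argmin(fp))                     # index of the negative peak
    fp_min = float(fp[i_min])                      # FP_MIN (sodium-current peak)
    i_start = int(np.argmax(fp < 0.1 * fp_min))    # onset: first 10%-of-peak crossing
    fp_rise = (i_min - i_start) / fs * 1e3         # FP_rise in ms (onset to minimum)
    # FP_dur: time from the minimum until the trace settles near baseline
    settled = np.flatnonzero(np.abs(fp[i_min:]) < 0.05 * abs(fp_min))
    fp_dur = settled[0] / fs * 1e3 if settled.size else float("nan")
    return fp_rise, fp_min, fp_dur

# Hypothetical 20 kHz trace: sharp negative deflection plus a slow late wave.
fs = 20_000
t = np.arange(0, 0.4, 1 / fs)
fp = -np.exp(-((t - 0.05) / 0.002) ** 2) + 0.1 * np.exp(-((t - 0.15) / 0.03) ** 2)

rise_ms, fp_min, dur_ms = fp_features(fp, fs)
print(f"FP_rise = {rise_ms:.2f} ms, FP_MIN = {fp_min:.2f}, FP_dur = {dur_ms:.2f} ms")
```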

Results
Conduction velocity (CV): Figure 2a shows a boxplot of the relative changes of CV in relation to the angle between different pacemaker centers. No correlation between variations in the entry angle and CV alterations could be observed. However, small angles (<10°) seem to have a slightly smaller impact on CV than angles >10°.

FP_rise: The median variation is in the range of about 11% to 13%; however, a clear relationship between the excitation angle and the change of the FP_rise time cannot be confirmed (see Figure 2b).

Fig. 1: A schematic illustration of a randomly grown myocardial cell layer with multiple endogenous pacemaker centers outside the MEA registration area. The zoomed area illustrates an example of an angle between two pacemakers.

FP_MIN: No significant correlation between the angles and FP_MIN was detected. However, analogous to CV, a slightly smaller variation compared to other angles was observed in the range of 0° to 10°.

FP_dur: In [1] we did not find a correlation between FP_dur and the excitation direction. Interestingly, we also did not observe an angle-dependent variation here. The median variation of FP_dur is only about 1%, independent of the angle between different endogenous pacemakers.

Discussion
The hypothesis of a correlation between changes of the excitation wave entry point and changes in the FP morphology has to be rejected. This result confirms that a distinct local anisotropic structure exists in a randomly grown isotropic cell layer. As a consequence, even slight changes in the excitation direction lead to significant electrophysiological alterations in an isotropic cardiac monolayer and need to be considered in signal analysis.

Fig. 2: Variation of (a) CV and (b) FP_rise plotted over the angle between different wavefront excitation patterns.

Grants
This work was funded by the K-Regio project of the Standortagentur Tirol, Innsbruck, Austria and by the European Regional Development Fund (ERDF).

References
[1] R. Kienast, M. Stöger, M. Handler, F. Hanser, and C. Baumgartner, Alterations of field potentials in isotropic cardiomyocyte cell layers induced by multiple endogenous pacemakers under normal and hypothermal conditions, Am. J. Physiol. Heart Circ. Physiol., published 1 August
[2] R. Kienast, M. Handler, M. Stöger, G. Fischer, F. Hanser, and C. Baumgartner, A system for analysing thermally-induced effects of propagation direction dependent features in field potentials of cardiomyocyte monolayers using multi-electrode arrays, in Electrocardiology 2013 - Proceedings of the 40th International Congress on Electrocardiology, Veda Publishing House, pp , 2014
[3] M. Halbach, U. Egert, J. Hescheler, and K. Banach, Estimation of action potential changes from field potential recordings in multicellular mouse cardiac myocyte cultures, Cell. Physiol. Biochem., vol. 13, no. 5, pp ,

SIMULATING EFFECTS OF INCREASED HEAT TRANSFER SURFACES BETWEEN APPLICATOR TIP AND REFRIGERANT IN CARDIAC CRYOABLATION

M. Handler 1, G. Fischer 2, R. Kienast 1, C. Baumgartner 1
1 Institute of Electrical and Biomedical Engineering, UMIT, 6060 Hall i. T., Austria
2 AFreeze GmbH, 6020 Innsbruck, Austria

Abstract
The simulation of cardiac cryoablation scenarios using different applicator variants allows for an investigation of effects caused by structural differences of the examined applicator modalities. In this study two similar variants of a tip applicator model are analyzed using the finite element method. The applicator variants differ in the heat transfer surface between the active tip of the applicator and the refrigerant, caused by an extension of the applicator tip into the boiling chamber. Temperature fields and temperature profiles obtained by the simulation show a significant enhancement of the cooling power of the variant with the extension in the boiling chamber, confirming a substantial increase in efficiency by an enlarged heat transfer surface.

Keywords
Cardiac cryoablation, modeling, simulation, finite element method, applicator geometry

Introduction
Cardiac cryoablation is a minimally invasive intervention for the treatment of cardiac arrhythmias. A cryoapplicator is positioned close to the tissue responsible for the arrhythmia and cools it down to freezing temperatures. For an optimal treatment of diagnosed arrhythmias, existing applicator variants are being continuously improved: applicator geometries are adapted and modified to their field of application to allow for effective treatments with an efficient usage of the supplied refrigerant (e.g. [1]). In this work the temperature distribution of a cryoablation scenario is simulated using a newly developed tip applicator (for a schematic model see Figure 1). The active part of the applicator, the gold tip, exhibits only a small heat transfer surface at the outside of the applicator to reduce the loss of cooling power to the blood stream. To increase the heat transfer from the refrigerant to the active part of the applicator, and consequently its cooling performance, the gold tip is extended into the boiling chamber to enlarge the heat transfer surface. To estimate the effects of cryoapplicator geometry modifications, realistic simulations of cryoablation scenarios with different applicator variants can be performed using the finite element method [1]. In this study the temperature distributions of two variants of the tip applicator, without and with a tip extension into the boiling chamber, are investigated by simulating a common ablation protocol of 4 min freezing.

Figure 1: Geometrical models of the simulated applicator variants. a: Model without tip extension into the boiling chamber. b: Model with tip extension into the boiling chamber.

Methods
To simulate the effects of the probe extension in the boiling chamber of the applicator, two simplified geometrical models, i.e. without and with extension, were created using the real dimensions of the applicator (see Figure 1). In the model, both applicator variants are positioned 1 mm deep in the myocardium, which is 6 mm thick. To simulate the heat contribution by the blood stream around the applicator during cryoablation, a blood layer of 3 mm thickness was added to the model. To simulate the temperature distribution of the applicator, the myocardial tissue and the blood layer, Pennes' bioheat equation was used [2]. For the consideration of latent heat during the freezing of myocardial tissue and blood, the effective heat capacity model was applied [3], using a phase transition temperature range between 0 °C and -10 °C. Thermal properties for blood and tissue and terms for the heat contribution by blood perfusion and metabolism were used as described in our previous work [4]. The specific heat capacity and density of gold (applicator tip and extension), tungsten and polyurethane (tube) were selected from [5]. The thermal conductivity of the tungsten-polyurethane mixture of the applicator tube was measured using tube samples. To consider the different phases of the refrigerant nitrous oxide, time- and temperature-dependent Neumann and Cauchy boundary conditions were applied at the inner boundaries of the tip, the tip extension and the first 4 mm of the tube, similarly as described in [4]. Adaptations were made to incorporate the lower maximal refrigerant flow rate of 91 mg/s (2.75 standard liters per minute) and a linear reduction of the cooling capacity at the inside of the tube with increasing distance from the applicator tip.

Results
Figure 2 shows the temperature distributions for both simulated variants, with and without tip extension. The increase in cooling power of the applicator variant with tip extension is clearly seen from the distribution of the depicted isolines. To compare the temperatures of the applicator tip and at equal distances in the myocardium, temperature profiles were extracted at the highlighted positions in Figure 2 (see Figure 3). The applicator tip reaches a temperature of C in the simulation with tip extension, whereas only C is reached at the end of the 4 min of freezing in the simulation without tip extension. The depth of the -10 °C isoline (fully developed ice ball at the lower phase transition temperature) is approximately 1 mm greater in the simulation with tip extension (2.81 mm below the tip) compared to the simulation without tip extension (1.82 mm below the tip).

Figure 3: Temperature profiles at the applicator tip (black) and at 1 mm (blue), 3 mm (green) and 5 mm distance from the tip (red). Dashed lines: model without tip extension into the boiling chamber. Solid lines: model with tip extension into the boiling chamber.

Figure 2: Temperature distributions after 240 s of freezing. Isolines denote 10 °C steps. The white isolines are the -10 °C (closer to the tip) and 0 °C isolines (more distant from the tip) and enclose the volume of phase transition (effective heat capacity model [3]). a: Model without tip extension into the boiling chamber. b: Model with tip extension into the boiling chamber. Colored dots: locations of the extracted temperature profiles in Figure 3.

Discussion
The results of the simulated variants clearly show the positive effect of the tip extension in the boiling chamber. The significantly lower tip temperature of the variant with the gold extension causes lower temperatures with higher cooling rates in the myocardium close to the applicator, factors which verifiably increase cell death during cryoablation.

Acknowledgement
This work is funded by the K-Regio project of the Standortagentur Tirol, Innsbruck, Austria and by the European Regional Development Fund (ERDF).

References
[1] M. Seger, G. Fischer, M. Handler, M. Stöger, C. Nowak, F. Hintringer, G. Klima and C. Baumgartner, "Achieving elongated lesions employing cardiac cryoablation: A preclinical evaluation study," Cryobiology, no. 65, pp ,
[2] H. H. Pennes, "Analysis of tissue and arterial blood temperatures in the resting human forearm," Journal of Applied Physiology, vol. 1, no. 2, pp ,
[3] J. Liu, "Bioheat transfer model," John Wiley & Sons, Inc.,
[4] M. Handler, G. Fischer, M. Seger, R. Kienast, C.-N. Nowak, D. Pehböck, F. Hintringer and C. Baumgartner, "Computer simulation of cardiac cryoablation: Comparison with in vivo data," Medical Engineering & Physics, no. 35, pp ,
[5] H. Kuchling, Taschenbuch der Physik, Fachbuchverlag Leipzig im Carl Hanser Verlag,
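The role of the effective heat capacity model in such freezing simulations can be sketched with a toy one-dimensional finite-difference computation. This is purely illustrative and far simpler than the 3D finite element model used in the paper; the fixed tip temperature, material constants and grid are hypothetical round numbers, not the study's parameters.

```python
import numpy as np

# Toy 1D freezing model: a 6 mm myocardium slab; the node at x=0 is held at a
# fixed (hypothetical) applicator temperature. Latent heat is handled via an
# effective heat capacity spread over the -10..0 °C transition range.
k = 0.5          # W/(m K), thermal conductivity (illustrative)
rho = 1060.0     # kg/m^3, tissue density (illustrative)
c = 3600.0       # J/(kg K), specific heat outside the transition range
L = 250e3        # J/kg, latent heat (illustrative)
T_l, T_u = -10.0, 0.0

n, dx = 61, 1e-4          # 6 mm slab, 0.1 mm grid spacing
T = np.full(n, 37.0)      # start at body temperature
T[0] = -60.0              # hypothetical constant tip temperature
dt = 0.005                # s, small enough for explicit stability

def c_eff(temp):
    """Effective heat capacity: adds L/(T_u - T_l) inside the transition range."""
    out = np.full_like(temp, c)
    out[(temp >= T_l) & (temp <= T_u)] += L / (T_u - T_l)
    return out

for _ in range(int(240 / dt)):                     # 4 min of freezing
    lap = (T[:-2] - 2 * T[1:-1] + T[2:]) / dx**2   # discrete second derivative
    T[1:-1] += dt * k * lap / (rho * c_eff(T[1:-1]))
    T[-1] = 37.0                                   # far boundary at body temperature

depth_mm = np.argmax(T > -10.0) * dx * 1e3         # first node warmer than -10 °C
print(f"-10 °C isotherm depth after 240 s: ~{depth_mm:.1f} mm")
```

The extra capacity inside the transition range slows the advance of the freezing front, which is exactly why the latent heat treatment matters when comparing ice ball depths between applicator variants.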

Health Care Technology and Biomedical Informatics


HYBRID MODELING - A NEW PROSPECT FOR HEALTHCARE SYSTEMS SIMULATIONS

W. Siegl 1, A. Lassnig 1, J. Schröttner 1
1 Institute of Health Care Engineering, Graz University of Technology, Austria

Abstract
By now, simulation modeling is a well-established method for planning and decision making in the healthcare domain. Taking advantage of all three main simulation paradigms, namely System Dynamics, Discrete Event and Agent Based modeling, this paper proposes a concept for a comprehensive hybrid model of the healthcare system. Outlining an example in the domain of heart failure treatment, it is shown that such models are powerful tools for healthcare planning.

Keywords
Multi-paradigm modeling, agent based, system dynamics, discrete event

Introduction
Over the past years, simulation modeling has become a well-established tool for healthcare decision making and planning. It offers solutions for problems where other methods, such as analytical calculations, fail, or where obtaining the solution in the real world is not possible, dangerous or expensive (e.g. building up healthcare infrastructure just for testing purposes). By now, numerous models covering a wide variety of issues can be found in the literature. These models usually use one of the three main simulation techniques, and the choice is usually based on the characteristics of the underlying problem to be analyzed. The popularity of simulation modeling is based, among other things, on the fact that nowadays even standard consumer computers have enough computational power to run models of intermediate complexity in reasonable time. This progress in the hardware domain allows more and more complex models to be created. One of these new prospects, namely the possibility to combine different simulation techniques into a so-called hybrid model, is discussed in this paper.

Methods
Generally, three main simulation techniques can be applied. Firstly, the System Dynamics (SD) approach describes a complex system at a high abstraction level. Such models typically consist of stock and flow variables. A stock, i.e. an accumulation, describes the current state of the system, while flows define the change of a stock over time. Through their interdependence, these two elements characterize a complex system by means of feedback loops. For example, the population of a country can be modeled as a stock, while births and deaths are the respective flow variables. Secondly, the Discrete Event (DE) technique is frequently used to describe a system from the process point of view. A system is modeled as a sequence of operations performed on entities. A patient's treatment process, for example, can easily be modeled using DE, where the patient is represented by an entity that experiences a sequence of administrative and medical operations (e.g. waiting for a nurse, moving to the examination room, receiving a treatment, ...). Finally, the most detailed models can be built using the Agent Based (AB) approach. The overall model behavior is discovered by knowing, or rather simulating, the behavior of single objects, the so-called agents, in a given environment. The overall result emerges from the interaction of these agents with each other or with their environment. A simple use case, for example, would be to model patients and doctors as agents that are distributed over a region.
A patient-agent can then be designed in such a way that, if it needs medical treatment, it automatically visits the nearest doctor on duty. The concept of hybrid modeling now takes advantage of all the techniques explained above. The main idea is to model every part of a system with the most suitable simulation paradigm. Based on this idea, preexisting non-hybrid models at the Institute of Health Care Engineering were combined into a large-scale healthcare model, and a general hybrid modeling concept was defined (a minimal coupling sketch is given below).

Results
Similar to Djanatliev et al. [1], the following five core components were found to be essential for the creation of a comprehensive large-scale healthcare model (cf. Fig. 1): demographics, disease spread, treatment process(es), reimbursement system, and geography. The demographics module predicts the population development broken down by age and sex, as these are key parameters for the prevalence of diseases. For this purpose, cohort component models, like [2], are well established and can easily be implemented using the System Dynamics technique.
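The following sketch shows, under strong simplifications, how an SD population stock can drive a downstream event-based module. It is a schematic illustration of the coupling idea only, not code from the cited models; all rates and parameters are hypothetical.

```python
import heapq
import random

# --- System Dynamics part: one population stock with birth/death flows ---
population = 1_000_000.0
birth_rate, death_rate = 0.009, 0.010   # per person-year (hypothetical)
incidence_rate = 0.002                  # new heart failure cases per person-year

# --- Discrete Event part: a minimal event queue of patient treatments ---
events = []  # (time, description)

random.seed(1)
dt = 1.0  # one-year SD time step
for year in range(5):
    population += dt * (birth_rate - death_rate) * population  # stock update
    new_cases = int(incidence_rate * population * dt)          # SD -> DE coupling
    for _ in range(min(new_cases, 3)):  # schedule a few sampled cases per year
        onset = year + random.random()
        heapq.heappush(events, (onset, "patient enters treatment process"))

while events:
    t, what = heapq.heappop(events)
    print(f"t = {t:5.2f} y: {what}")
print(f"population after 5 years: {population:,.0f}")
```

In a full hybrid model, the scheduled entities would enter a detailed treatment process (DE) or be instantiated as geographically located patient-agents (AB), while usage statistics flow back to the reimbursement module.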

Figure 1: Simplified model structure outlining the core modules at different abstraction levels

The disease spread module uses prevalence values to determine the affected population at the simulation start, and incidence rates during the course of a simulation run. This module can, with some limitations, be realized at a high abstraction level using SD, since the inclusion of individual risk factors (e.g. body weight, smoking, etc.) through probability distributions is possible. The disease spread module is important because it defines the interface between the demographics module and the treatment process module, and the geography module, respectively. In a first step we coupled our population model [2] to our DE heart failure treatment model [4] by adjusting the number of entities according to the population size and incidence rate. In a second step, geographic aspects should be taken into account using AB simulation. Our design plans that for every new incidence a patient-agent is created, which not only carries personal parameters (age, sex, etc.) but also contains information on its geographic location, i.e. its hometown. When medical providers, ranging from general practitioners to hospitals, are also modeled as agents, a patient-agent can seek a provider that fulfills its needs (feasible treatment for a specific disease, reasonable waiting time, reachability). Under certain circumstances, e.g. for high-risk patients or rural regions without appropriate healthcare infrastructure, the telemedicine option included in [4] can be applied as an alternative. Finally, statistics on usage, collected by the treatment process module, are handed over to the finance module, which calculates the arising healthcare costs based on the implemented reimbursement system (e.g. the Austrian DRG (diagnosis-related groups) system).

Discussion
The model concept presented in this paper describes how different modeling techniques can be combined to build a comprehensive healthcare model for a specific disease or even for a whole healthcare system. The five core components should be implemented as separate modules with appropriate interfaces, to ensure the extensibility and changeability of single aspects (e.g. a new reimbursement system). However, it must be kept in mind that the coupling between the single modules is crucial. Although the concept presented above seems straightforward, additional feedback loops must be taken into account. For example, in long-term simulations, the change in mortality due to different treatments of diseases cannot be neglected. Moreover, further work should be dedicated to modeling the influence of geography on disease spread, in order to be able to reproduce the epidemic behavior of contagious diseases. Taking all these possibilities into account, this first approach shows that large-scale healthcare models have the potential to be a powerful tool for future healthcare planning and decision making.

References
[1] Djanatliev, A. et al.: Hybrid simulation with loosely coupled system dynamics and agent-based models for Prospective Health Technology Assessments, Proceedings of the 2012 Winter Simulation Conference, 9-12 Dec 2012, Berlin, Germany. Edited by Laroque C. et al.
[2] Schröttner, J. et al.: A population prospect for future health care models based on a system dynamics model, IFMBE Proceedings of the European Medical & Biological Engineering Conference, Nov 2008, Antwerp, Belgium. Edited by Vander Sloten, J. et al., pp ,
[3] Booth, H.: Demographic forecasting: 1980 to 2005 in review, Int J Forecasting, vol. 22, pp , 2006
[4] Schröttner, J., Lassnig, A.: Simulation model for cost estimation of integrated care concepts of heart failure patients, Health Economics Review, vol. 3:26, November

63 INTEGRATED CARE IN HEART FAILURE TREATMENT A MODELLING SETUP COMBINING ESTABLISHED CONCEPTS A. Lassnig 1, W. Siegl 1, J. Schröttner 1 1 Institute of Health Care Engineering with European Notified Body of Medical Devices, Graz University of Technology, Austria Abstract With the double aging effect and its ensuing increase of chronic illnesses, new challenges for public health care systems arise. To overcome future financial burdens, new strategies and methods of care have to be considered. Using the Discrete Event simulation methodology a comprehensive model was developed to analyse and simulate an integrated care concept for the treatment of heart failure. Relations between the objects and their implementation are described and discussed. The developed model is the first tool to combine the most common concepts of care in heart failure treatment and allows their evaluation through comprehensive scenarios. Keywords heart failure, discrete event, integrated care, telemedicine, model Introduction The double aging effect and its ensuing increase of chronic illnesses causes new challenges for public healthcare systems. A prominent example is heart failure which is one of the major causes for hospitalisations of people above the age of 65 years [1]. To overcome the future financial burdens, public healthcare systems will have to focus on the development of new strategies and methods of care. Integrated care concepts, such as the telemedical treatment of heart failure patients, are potential steps to future improvement. However, they have to be evaluated in terms of practicability, medical and economic outcomes. Models are powerful instruments to pursue these purposes and allow the investigation of potential solutions and difficulties. This work focuses on the simulation of common integrated care concepts for the treatment of heart failure based on a telemonitoring system in combination with a disease management program. Methods The chosen environment for the modelling process was AnyLogic Version 6.9, a simulation tool that supports the most common simulation methodologies, namely Discrete Event, System Dynamics and Agent Based modelling. To translate different treatment concepts of heart failure into the digital world, it is important to choose a degree of abstraction which is sensitive enough to deliver significant results, but does not require too detailed thus unavailable data. For the simulation the discrete event approach was chosen to reproduce the clinical pathway of heart failure patients. This technique has a middle to low degree of abstraction and uses objects and resources to describe the event driven procedures. [2] Results To evaluate the integrated care concept a model for the treatment of heart failure patients was build. As an important criterion to distinguish the overall state of health, patients were classified accordingly to guidelines of the New York Heart Association. Each of the NYHA classes (I-IV) is integrated into the model and represented by an individual process which is further split into two blocks, separating the outpatient and inpatient care [3]. Figure 1 illustrates a simplified chart of patient flows in the model. 
Figure 1: Overview of patient flows for outpatient and inpatient care [4] For the outpatient care the general practitioner (GP), the specialist and the ambulance were simulated as objects with distinct features, mainly varying in associated costs per visit for each NYHA patient, the frequency of visits and the potential improvement or deterioration of health, thus the class change and mortality rates. In case of a hospitalisation, emergency and normal transports have been taken into account. [3] The inpatient care regards stays in normal and intensive care, with the option of additional individual medical procedures. Calculations are based on the Austrian DRG system and consider procedure-oriented diagnosis-related case groups with daily and procedure components. Transgressions of the common interval for the length of stay of heart failure patients account for a major part of the overall costs. Therefore instead of medians, probability density functions were implemented to realistically simulate the outcome. [3] The telemonitoring system, the data centre, the attending physician, the home nurse and the hospital are key 61

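As an illustration of how such allocation rules and interaction probabilities can be expressed, consider the following sketch. All thresholds and probabilities are invented placeholders, not values from the model:

import random

random.seed(7)

# Hypothetical allocation rule: telemonitoring only for NYHA III/IV patients.
def allocate_telemonitoring(nyha_class: int) -> bool:
    return nyha_class >= 3

# Assumed daily probability that transmitted data triggers a GP alert,
# and assumed effect of the resulting medication adaptation.
P_ALERT = {1: 0.002, 2: 0.005, 3: 0.010, 4: 0.020}   # per NYHA class
P_IMPROVE_AFTER_ALERT = 0.6

def simulate_day(nyha_class: int, monitored: bool) -> int:
    """Return the (possibly changed) NYHA class after one monitored day."""
    if monitored and random.random() < P_ALERT[nyha_class]:
        if random.random() < P_IMPROVE_AFTER_ALERT and nyha_class > 1:
            return nyha_class - 1   # successful treatment adaptation
    return nyha_class

nyha = 3
monitored = allocate_telemonitoring(nyha)
for day in range(365):
    nyha = simulate_day(nyha, monitored)
print("NYHA class after one year:", nyha)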
Discussion
The flow-chart nature of the discrete event model greatly facilitated adaptations to incorporate different methods of care. For the first time, a model was developed that encompasses the most common treatment approaches for heart failure and considers the various relationships between the involved objects. Beneficial effects of patient training by the nurse, structured telephone support and clinic-based outpatient programs [6], as well as new drug therapies and other approaches in the field, can also be simulated and discussed in terms of health and economic outcomes, extending the number of potential scenarios.

As for most models, limitations had to be addressed, mainly concerning the quality of data. The lower the degree of abstraction, the better the knowledge about the system has to be. For the inpatient care, detailed data were obtained through various sources such as the Austrian DRG system, Statistics Austria [5] and ongoing study results, all enabling the development of realistic scenarios. Outpatient care remains a factor of uncertainty, since data sources are limited or generally unavailable. The influence of the combination of a telemedical system with a disease management program on treatment outcomes still has to be evaluated through clinical studies. Individual investigations of each method of treatment are available, with diverging results [6, 7].
The ultimate objective is to find a sustainable solution in the form of an integrated care concept that manages to combine the advantages of outpatient and inpatient care to improve treatment outcomes in terms of health and costs. The built model is a powerful instrument to evaluate all common approaches in the treatment of heart failure patients. In future work, a first insight into the efficiency of the integrated care concept will be presented through simulation results and serve as a basis for decision making in healthcare.

Bibliography
[1] Zannad, F., Agrinier, N., Alla, F.: Heart failure burden and therapy, Europace, 11(Suppl 5):1-9, 2009
[2] Borshchev, A.: The Big Book of Simulation Modeling, AnyLogic North America, 1-612, 2013
[3] Schröttner, J., Lassnig, A.: Simulation model for cost estimation of integrated care concepts of heart failure patients, Health Economics Review, 3:26, 2013
[4] Lassnig, A., Schröttner, J.: Comparison of telemedical and conventional treatment of heart failure patients considering different approaches to inhospital stay, in Proceedings of BMT 2013, Graz, Austria, September 2013
[5] Statistics Austria: Statistisches Jahrbuch 2014, Wien: Verlag Österreich GmbH, 2014
[6] Feltner, C., Jones, C.D., Cené, C.W., Zheng, Z.J., Sueta, C.A., Coker-Schwimmer, E.J.L., Arvanitis, M., Lohr, K.N., Middleton, J.C., Jonas, D.E.: Transitional Care Interventions to Prevent Readmissions for Persons With Heart Failure, Ann Intern Med, 160, 2014
[7] Andrikopoulou, E., Abbate, K., Whellan, D.J.: Conceptual Model for Heart Failure Disease Management, Canadian Journal of Cardiology, 30, 2014

HERZMOBIL TIROL mHEALTH TELEMONITORING EMBEDDED IN A HEART FAILURE NETWORK

S. Welte 1, P. Kastner 1, G. Pölzl 2, A. von der Heidt 2, R. Modre-Osprian 1
1 AIT Austrian Institute of Technology GmbH, Austria
2 Medical University of Innsbruck, Austria

Abstract The treatment and care of patients suffering from heart failure (HF) involves different groups of people, such as clinicians, cardiologists, internists, general practitioners and nurses. A major challenge lies in the coordinated care and communication among those involved. Experience with mHealth-based telemonitoring to date shows that the telemedical care of HF patients should be complemented by a dedicated care network. Within a first feasibility study, such an HF network was established in Innsbruck. Building on this, the network is being evaluated in the greater Innsbruck area within HerzMobil Tirol.

Keywords telemedicine, mHealth, heart failure, patient care

Introduction
At least 28 million people in the greater European region suffer from heart failure (HF), making it one of the most common chronic diseases among the elderly population of developed countries. A study in the USA found that the number of patients will rise by 25% over the next 20 years [1]. Despite improvements in the medical therapy of HF patients, the re-hospitalisation rate is very high: within a period of 6 months after inpatient treatment, between 30 and 50% of HF patients are readmitted to hospital, which in Austria amounts to a correspondingly large number of hospital stays per year [2]. Telemedical care programs can reduce the re-hospitalisation rate [3]. The diverging results of HF telemonitoring studies [1, 4] can partly be explained by differences in the patient collectives, in adherence or in the technologies used. An essential factor, however, is also the care program and the collaboration of the stakeholders involved along the patients' treatment pathway. Within a study, the technical feasibility of 6 months of telemedical care of HF patients in a real care environment is to be demonstrated in an HF network consisting of an HF telemonitoring coordinator, 5 general practitioners, one clinician and one mobile nurse in Innsbruck (Figure 1).

Methods
Figure 1: HF network stakeholders

The integration of mobile devices into the treatment of HF patients opens up new possibilities for therapy optimisation. For example, the patients' vital parameters can be observed remotely, and a picture of the patients' adherence to the drug therapy can be obtained. For this purpose, patients are provided with a blood pressure monitor, a scale and a mobile phone for the duration of the telemonitoring phase. An application installed on the mobile phone allows the patient to read the measured data from the devices via NFC technology and to send them to the data centre [5]. In addition to the measured data, various questions on medication and well-being can be answered via the touchscreen of the mobile phone. Between April 2012 and September 2013, ten patients were included in the system and monitored for the feasibility study.
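A minimal sketch of what such a daily transmission could look like is given below. Field names, the alert threshold and the transport are assumptions for illustration, not the HerzMobil implementation:

import json
from datetime import date

# Hypothetical daily record as read from the NFC-enabled devices.
record = {
    "patient_id": "demo-001",
    "date": date.today().isoformat(),
    "weight_kg": 82.4,
    "systolic_mmhg": 138,
    "diastolic_mmhg": 84,
    "medication_taken": True,
    "wellbeing_score": 2,   # answered on the phone's touchscreen
}

def weight_alert(today_kg, yesterday_kg, threshold_kg=2.0):
    """Flag a conspicuous weight gain, a common HF warning sign (threshold assumed)."""
    return (today_kg - yesterday_kg) >= threshold_kg

payload = json.dumps(record)
# In a real system the payload would now be sent to the telemonitoring
# data centre, e.g. over an encrypted connection; here we only print it.
print(payload)
print("alert:", weight_alert(record["weight_kg"], 80.1))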
A concluding survey of the actors in the HF network regarding their personal assessment of the technical feasibility was conducted by means of interviews.

Results
The patients participating in the study cumulatively transmitted data to the telemonitoring centre on 2501 days. This corresponds to 82.2 telemonitoring months or 6.9 telemonitoring years. 9 of 10 patients could be cared for telemedically for longer than 6 months (see Table 1). Patient 5 was admitted to a nursing facility and ended participation in the study.

Table 1: Transmission period per patient

Patient      Months   Days
Patient 1    8.5      258
Patient 2    10.5     320
Patient 3    10.1     308
Patient 4    7.8      238
Patient 5    2.5      75
Patient 6    8.8      268
Patient 7    6.9      210
Patient 8    8.1      246
Patient 9    10.5     320
Patient 10   8.5      258

The interviews with the involved actors conducted within the project showed that, from their personal point of view, a clear medical benefit in terms of therapy adjustment, optimisation and adherence is recognisable. The strength of the heart failure network lies, besides the integrated mHealth telemonitoring technology, above all in the ambulatory care of the patients by the nurses. The continuous transmission of the health data relevant for therapy and the good care by the nurses give the participating patients a feeling of safety in dealing with their disease.

The feasibility study showed that the care of HF patients by means of mHealth-based telemonitoring embedded in an HF care network is technically feasible in a real care environment. Building on these results, the heart failure network Tirol was extended to 4 hospitals (Innsbruck, Hall in Tirol, Natters and Hochzirl) of TILAK - Tiroler Landeskrankenanstalten GmbH within a continuing proof-of-concept study. In these hospitals, 7 clinicians and 3 registered nurses have so far included 67 patients, of whom 36 are currently still actively registered in the system. These patients are monitored by 10 general practitioners in the Innsbruck Land region and cared for by the above-mentioned registered nurses during the telemonitoring phase.
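The aggregate figures follow directly from the per-patient transmission days in Table 1; a short sketch of the arithmetic:

# Transmission days per patient, taken from Table 1.
days = [258, 320, 308, 238, 75, 268, 210, 246, 320, 258]

total_days = sum(days)                    # 2501
months = total_days / (365 / 12)          # 82.2 telemonitoring months
years = total_days / 365                  # 6.9 telemonitoring years

print(total_days, round(months, 1), round(years, 1))
# Patients monitored for longer than 6 months (~183 days):
print(sum(1 for d in days if d > 183))    # 9 of 10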
Discussion
Collaborative heart failure care with mobile-phone-based telemonitoring is a valuable instrument for increasing efficiency and cost-effectiveness in cross-sectoral, integrated care. If the experience from the proof-of-concept study proves positive, an extension of HerzMobil Tirol to the whole federal state of Tyrol is possible. The care network developed with HerzMobil Tirol, including its communication infrastructure (a modular, multi-indication telemedicine platform), can in principle also be transferred to the care of other chronic diseases (e.g. diabetes mellitus, hypertension, coronary heart disease, arrhythmias). In this way, sustainable care structures for an integrated, continuous healthcare of chronic diseases could be provided.

Acknowledgement
Our thanks go to the members of the quality circle Herzinsuffizienz Herzmobil, who worked closely with us on the HerzMobil Tirol project, and to UMIT for conducting the interviews.

References
[1] M.A. Konstam, Does Home Monitoring Heart Failure Care Improve Patient Outcomes? Home Monitoring Should Be the Central Element in an Effective Program of Heart Failure Disease Management, Circulation 125 (2012)
[2] E. Baldaszti, Jahrbuch der Gesundheitsstatistik 2012, Statistik Austria, Wien, 2013
[3] D. Scherr, P. Kastner, A. Kollmann, A. Hallas, J. Auer, H. Krappinger, H. Schuchlenz, G. Stark, W. Grander, G. Jakl, G. Schreier, F.M. Fruhwald, MOBITEL Investigators, Effect of Home-Based Telemonitoring Using Mobile Phone Technology on the Outcome of Heart Failure Patients After an Episode of Acute Decompensation: Randomized Controlled Trial, J Med Internet Res 11 (2009), e34
[4] A.S. Desai, Does Home Monitoring Heart Failure Care Improve Patient Outcomes? Home Monitoring Heart Failure Care Does Not Improve Patient Outcomes: Looking Beyond Telephone-Based Disease Management, Circulation 125 (2012)
[5] J. Morak, H. Kumpusch, D. Hayn, R. Modre-Osprian, G. Schreier, Design and evaluation of a telemonitoring concept based on NFC enabled mobile phones and sensor devices, IEEE Trans Inf Technol Biomed 16 (2012)

A COMBINED APPROACH FOR SIMILARITY SEARCH AND ANALYSIS IN BIOCHEMICAL MOLECULAR DATABASES

M. Popovscaia 1, C. Baumgartner 1
1 Institute of Electrical and Biomedical Engineering, UMIT - The Health and Life Sciences University, Hall in Tirol, Austria

Abstract Searching modern large databases of various types of molecules is often challenging and computationally expensive. In this work we propose a strategy for similarity search in molecular databases, namely clustering structurally similar molecules. The strategy combines topological indices, whose calculation is polynomial in time, with machine learning techniques and statistical methods. Our approach allowed us to construct different clusters of structurally similar molecules under various preliminary settings and conditions.

Keywords Molecule (Molecular Compound), Database Search, Topological Index, Machine Learning.

Introduction
Nowadays a large number of molecular compounds are stored in dedicated databases. The amount of data stored in these databases makes important operations, such as searching for similar compounds within the database or comparing newly discovered compounds with existing ones, extremely complex or even impossible to perform manually. Numerous similarity measures have been developed for this purpose. However, their application is not straightforward and is usually computationally expensive. In addition, the effectiveness of the same similarity measure may vary from case to case. There is therefore a need to develop new automated tools and strategies for fast and intelligent search and analysis in large molecular databases. The work presented here is part of our multistep framework for similarity search in databases of molecular compounds. It combines topological indices, which are easy to compute, with machine learning techniques and statistical methods. Our strategy starts by transforming each molecule from the dataset under study into a molecular graph and then representing it as a vector with the help of topological indices chosen from three particular classes. We then use statistical and machine learning methods to cluster structurally similar molecular graphs. Choosing the topological indices from different classes and setting the machine learning parameters in a specific way allowed us to construct clusters of structurally similar molecular compounds.

Methods
Dataset. For our experimental work, a dataset of 100 organic molecules was chosen from the AR3982 database. AR3982 is a Molfile database created in [1] by filtering out isomorphic graphs from the Ames genotoxicity database. For each molecule, information about its name, atoms, bonds, coordinates and connectivity is provided.

Computational representation of the molecules. Molecules from the dataset were represented in a specific vector form. First, for each molecule we used the chemical information from the Molfile to construct a molecular graph whose nodes are the atoms and whose edges are the bonds of the molecule. For each molecular graph we constructed representative vectors based on the values of topological indices. In more detail, we designed and analysed three cases of study. A total of 30 topological indices [2] were chosen from three different groups, 10 indices in each group: distance-based, entropy-based and eigenvalue-based indices.
Using topological indices from these three different groups allowed us in every case to obtain different structural information about the molecular graphs. In the first case, for each graph we calculated all ten topological indices from Group 1 and defined the corresponding vector. These vectors we considered to be the numerical representatives of the molecular graphs, and subsequently of the corresponding molecular compounds. We repeated the process of vector construction in the second and third cases in the same manner, using the topological indices from Group 2 and Group 3, respectively. We therefore obtained three vector representations for each graph in the dataset: three 10-dimensional vectors, each based on a specific group of indices and containing specific structural information.

Cluster analysis. The first step of our approach consisted in performing principal component regression analysis (PCR) to reduce the dimension of the constructed vectors and to filter out insignificant topological indices. In the next step we applied the k-means clustering method to form the clusters. The parameter k was set from 5 to 13. In the final step we analysed the selected clusters.
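The pipeline, molecular graph to index vector to dimensionality reduction to k-means, can be sketched in a few lines. This is a toy illustration only: a handful of simple descriptors stand in for the 30 indices, and plain PCA stands in for the PCR step used in the paper:

import networkx as nx
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def descriptor_vector(g: nx.Graph) -> list:
    """Toy stand-in for one group of topological indices (selection assumed)."""
    return [
        nx.wiener_index(g),          # distance based: sum of all shortest paths
        nx.diameter(g),              # distance based
        g.number_of_nodes(),
        g.number_of_edges(),
        len(nx.cycle_basis(g)),      # number of independent cycles
    ]

# Toy "molecular graphs": linear (path) and ring (cycle) graphs of varying size.
graphs = [nx.path_graph(n) for n in range(5, 10)] + \
         [nx.cycle_graph(n) for n in range(5, 10)]

X = np.array([descriptor_vector(g) for g in graphs], dtype=float)

# Dimensionality reduction followed by k-means clustering.
X_red = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_red)
print(labels)   # path-like and cycle-containing graphs should separate

The example mirrors the observation in the Results below that linear and loop-containing graphs tend to fall into different clusters.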

Results
Following the described scheme, we represented molecules as vectors. Using PCR we filtered out the insignificant topological indices: in the first case the dimension of the vectors was reduced from 10 to 6, in the second case from 10 to 5, and in the third case from 10 to 7. To the reduced vectors we applied the k-means method. Table 1 shows the results of this analysis, namely the accuracy of clustering in all three cases and for each value of k. We generally obtained high accuracy values; the highest values were obtained in the second case, where the entropy-based topological indices were used for the construction of the representative vectors. According to these results and the subsequent statistical and visual analysis, we concluded that the best partitioning was obtained when setting the parameter k from 9 to 13. Note that the clusters contain different numbers of graphs. In almost all generated clusters we were able to identify and confirm structurally similar molecular graphs.

Table 1: Results of the k-means data analysis (clustering accuracy per case and per value of k).

For example, Fig. 1 shows clusters No. 3 and No. 4 from the second case with k=13. Both clusters contain molecular graphs of similar dimensions and structure. Molecular graphs in the third cluster (Fig. 1, left) tend to have a more linear structure with few or no closed loops. The graphs in the fourth cluster (Fig. 1, right), on the other hand, have at least 2 loops and similar dimensions as well. In another example (Fig. 2), one cluster from the second case (entropy-based topological indices, cluster No. 3, left) and another cluster from the third case (eigenvalue-based topological indices, cluster No. 10, right) are shown. The left cluster contains all four graphs of the right cluster (the left cluster being less specific in this case); all graphs in the right cluster have no cycles, a linear structure and not more than one branch. These examples suggest that, depending on the scope of the search, one may choose different groups of topological indices and set different values of the parameter k to construct meaningful clusters of structurally similar biomolecules.

Figure 1: Clusters No. 3 (left) and No. 4 (right), second case (entropy-based topological indices), k=13.

Figure 2: Cluster No. 3, second case (entropy-based topological indices), k=13 (left); cluster No. 10, third case (eigenvalue-based topological indices), k=13 (right).

Discussion
The aim of our work was to develop new, computationally inexpensive and fast methods for finding structurally similar biochemical molecules in large databases. By combining topological indices, machine learning techniques and statistical methods, we were able to form clusters of structurally similar molecular graphs very quickly. These clusters are formed in a reasonable way when the different initial parameters are set appropriately. This leads us to the conclusion that, depending on the scope of the search, we can select the topological indices and set the necessary calculation parameters in such a way that meaningful clusters of structurally similar biomolecules are constructed.

Acknowledgement
We would like to thank Prof. Dr. habil. M. Dehmer for providing us with the AR3982 database.

Bibliography
[1] M. Dehmer, N. Barbarini et al.: A large scale analysis of information-theoretic network complexity measures using chemical structures, PLoS ONE, 4:e8057, 2009
[2] R. Todeschini, V. Consonni: Molecular Descriptors for Chemoinformatics, Wiley-VCH, Weinheim, second, revised and enlarged edition, 2009

INTEGRATION OF NGS DATA AND IMAGES OF TISSUE SECTIONS FOR PERSONALIZED ONCOLOGY

M. Baldauf 1,2, A. Dander 1,2, M. Sperk 1, S. Pabinger 1,3, Z. Trajanoski 1,2
1 Division for Bioinformatics, Biocenter, Innsbruck Medical University, Innsbruck, Austria
2 Oncotyrol GmbH, Center for Personalized Cancer Medicine, Innsbruck, Austria
3 AIT - Austrian Institute of Technology, Health & Environment Department, Molecular Diagnostics, Vienna, Austria

Abstract Advances in next-generation sequencing (NGS) and in the processing of whole-slide bioimages facilitate the development of personalized oncology. The open source bioinformatics platform Personalized Oncology Suite (POS) enables the integration of clinical data, NGS data, whole-slide bioimages, and publicly available information. POS is a scalable and flexible web-based platform, offering different modules for data up- and download, visualization techniques, as well as collaboration features.

Keywords Personalized oncology, Data integration, Next-generation sequencing, Bioimaging

Background
Technological advances in NGS, as well as the development of devices for scanning whole-slide bioimages from tissue sections and of image analysis software for the quantification of tumor-infiltrating lymphocytes (TILs), allow, for the first time, the development of personalized cancer immunotherapies that target patient-specific mutations [1]. However, the real value of these disparate datasets can only be truly exploited when the data is integrated. In our experience, it is of utmost importance to establish a local database hosting only the necessary data. This approach implies that pre-processed and normalized data are stored in a dedicated relational database, whereas primary data are archived at separate locations including public repositories [2]. To this end, a database integrating clinical, NGS, and bioimaging data would be extremely helpful for clinical cancer research and, in the near future, also for routine applications in medical oncology. As there is currently, to the best of our knowledge, no application that supports this integration, we developed the bioinformatics platform Personalized Oncology Suite (POS) to overcome this bottleneck and support the development of this exciting field.

Methods
Designed as a web-based platform, the Personalized Oncology Suite (POS) is based on the Java Enterprise Edition 6 (J2EE 6) technology stack and relies on the JBoss Application Server. The modular architecture of the web application, in combination with its open source license (GNU AGPL), enables the community to easily modify and extend the application with further functionalities regarding additional data sources, visualizations, user interface languages, or other key components. Utilized libraries comprise PrimeFaces and PrimeFaces Extensions for the creation of JSF components, whereas Hibernate Validator provides input validation for user entries. Apache CODI integrates additional Java Bean scopes, and the Guava libraries were chosen to support POS with a set of helpful functionalities regarding collections. The underlying database is based on a data warehouse schema promoting a simple integration of information extracted from different sources. Object-relational mapping for interaction with the PostgreSQL database is provided via EclipseLink. Through the addition of this abstraction layer, the database back-end can be easily changed.
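POS realizes this abstraction with EclipseLink in Java; purely as an illustration of the idea, the same back-end-swapping pattern can be sketched in Python with SQLAlchemy (the entity and column names here are hypothetical, not the POS schema):

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Patient(Base):
    """Hypothetical entity; the real POS schema follows a data warehouse design."""
    __tablename__ = "patient"
    id = Column(Integer, primary_key=True)
    uicc_stage = Column(String)

# Swapping the back-end is a one-line change of the connection URL:
engine = create_engine("sqlite:///:memory:")
# engine = create_engine("postgresql://user:pw@host/posdb")  # production-style URL

Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Patient(uicc_stage="II"))
    session.commit()

The one-line engine swap mirrors the role of the ORM abstraction layer described above.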
Results
The Personalized Oncology Suite (POS) is an application facilitating the combination of clinical and biological data into one integrated solution. Its objective is achieved through the consistent implementation of various data entry possibilities, modules for data up- and download, and different visualization techniques. Clinical data comprise, on the one hand, information about the cancer patients themselves, excluding patient-identifying data. On the other hand, POS integrates information from the routinely used UICC/TNM staging system. Furthermore, POS is designed to support the evaluation of the immunoscore, an innovative staging system based on the quantitative determination of CD3+ and CD8+ T cells within different locations of a tumor [3]. Descriptive plots provide insights into the distributions of these scores and feature a comparison of different participating institutes. In addition to manual data input, clinical data and staging information can be uploaded to POS in CSV format and exported as XLS, CSV, and PDF files.

Additionally, somatic mutations, identified within patient samples in an upstream step by the use of NGS and specialized applications [4], can be integrated into POS. For this purpose, the common variant call format (VCF) is supported, enabling a batch upload of thousands of mutations from several patients at once. As the visualization of identified mutations tremendously supports their interpretation, POS includes the genome browser Gviz [5].
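What a batch VCF upload has to parse can be sketched in a few lines; the single record below is a made-up example following the standard VCF column layout (CHROM, POS, ID, REF, ALT, QUAL, FILTER, INFO):

# Minimal VCF parsing sketch; a production upload would use a tested
# library and validate against the VCF specification.
vcf_text = """##fileformat=VCFv4.1
#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO
7\t140453136\t.\tA\tT\t60\tPASS\tSOMATIC
"""

def parse_vcf(text):
    variants = []
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip meta-information and header lines
        chrom, pos, vid, ref, alt = line.split("\t")[:5]
        variants.append({"chrom": chrom, "pos": int(pos),
                         "ref": ref, "alt": alt})
    return variants

print(parse_vcf(vcf_text))
# [{'chrom': '7', 'pos': 140453136, 'ref': 'A', 'alt': 'T'}]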

It visualizes mutations in the genomic context, including publicly available gene models, and supports the selection of several patients at once, displaying a comparable view of the mutations for each patient in separate tracks.

Besides somatic mutations, whole-slide bioimages from tissue sections can also be connected to patient data. As these images comprise several gigapixels, their file size is accordingly large. Hence, POS does not store the raw image files itself. In order to integrate whole-slide bioimages, POS facilitates the exchange of imaging data by incorporating several distributed instances of the bioimage management application Bisque [6]. For this purpose, POS is able to access already uploaded images within a connected Bisque instance. POS also offers an image upload module that facilitates a direct upload of these huge images. Additionally, a standalone application was developed that enables a tailored batch upload of numerous images at once. When using the direct upload module or the standalone image upload application, proprietary image formats are converted to the widely used OME-TIFF file format [7]. POS supports the following common formats: CZI, NDPI, OME-TIFF, SCN, SVS, TIFF (Trestle), VMS, VSI, and ZVI. The creation of a tiled image pyramid is performed by the connected Bisque instances. This kind of image processing is important to display the image within the POS image viewer (Fig. 1) in a Google-Maps-like manner. The image viewer is based on an adapted version of the interactive JavaScript widget PanoJS3, fetching only the tiles of the currently displayed region from Bisque. The connections to Bisque instances are administered by POS, whereas tailored Java servlets manage the encrypted communication between POS and Bisque. As POS holds confidential and patient-related data, the application is secured by an authorization and authentication system. Furthermore, POS provides an intelligent logging functionality, exception handling, and logical integrity due to input validation. A use case for such an integrated data management system comprises the identification of high-risk cancer patients through histological and genomic features and their tailored treatment based on their molecular characterization.

Discussion
POS is a web-based application combining clinical and biomolecular data, including NGS data and whole-slide bioimages. The integration of heterogeneous biological datasets from multiple sources enables views from different perspectives in a single system. Its modular and flexible architecture and its release under an open source license facilitate extensions and adaptations to different innovative requirements in a simple manner. The web application provides an intuitive user interface for upload, download, manipulation, and visualization of all integrated data types. POS is an effective solution for current challenges in clinical cancer research and a possible future routine application.

References
[1] J. Couzin-Frankel, Cancer Immunotherapy, Science, vol. 342, no. 6165, Dec. 2013
[2] H. Hackl et al., Information technology solutions for integration of biomolecular and clinical data in the identification of new cancer biomarkers and targets for therapy, Pharmacol. Ther., vol. 128, no. 3, Dec. 2010
[3] J. Galon et al., Cancer classification using the Immunoscore: a worldwide task force, J. Transl. Med., vol. 10, no. 1, p. 205, Oct. 2012
[4] S. Pabinger et al., A survey of tools for variant analysis of next-generation genome sequencing data, Brief. Bioinform., vol. 15, no. 2, Mar. 2014
[5] F. Hahne et al., Gviz: Plotting data and annotation information along genomic coordinates, R package
[6] K. Kvilekval et al., Bisque: a platform for bioimage analysis and management, Bioinformatics, vol. 26, no. 4, Feb. 2010
[7] M. Linkert et al., Metadata matters: access to image data in the real world, J. Cell Biol., vol. 189, no. 5, May 2010

Figure 1: POS user interface showing the whole-slide bioimage viewer. It provides a quick view on a large image by dynamically fetching and stitching the currently displayed image tiles from a connected Bisque instance.
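The tile fetching behind such a viewer boils down to simple pyramid arithmetic. The sketch below shows how the visible tiles at a zoom level can be determined; the tile size and the index scheme are assumptions for illustration, not Bisque's actual interface:

import math

TILE = 256  # assumed tile edge length in pixels

def visible_tiles(view_x, view_y, view_w, view_h, level, full_w, full_h):
    """Tile indices covering a viewport at a given pyramid level.
    Level 0 is the full-resolution image; each level halves both dimensions."""
    scale = 2 ** level
    level_w, level_h = full_w // scale, full_h // scale
    x0, y0 = view_x // TILE, view_y // TILE
    x1 = min(math.ceil((view_x + view_w) / TILE), math.ceil(level_w / TILE))
    y1 = min(math.ceil((view_y + view_h) / TILE), math.ceil(level_h / TILE))
    return [(level, tx, ty) for ty in range(y0, y1) for tx in range(x0, x1)]

# A 4-gigapixel slide viewed in a 1024x768 window at pyramid level 4:
print(visible_tiles(2048, 1024, 1024, 768, 4, 65536, 65536))
# Only these few tiles are requested, never the whole multi-gigabyte image.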

Medical Devices and Applications


MRI SAFETY OF DEEP BRAIN STIMULATOR PATIENTS

A. Tilp 1, N. Leitgeb 1
1 Institute of Health Care Engineering with European Notified Body of Medical Devices, Graz University of Technology, Graz, Austria

Abstract Magnetic resonance imaging (MRI) of patients with implanted deep brain stimulators (DBS) may cause health risks due to enhanced tissue heating. This paper evaluates the MRI-induced specific absorption rate (SAR) and temperature elevation due to the radio frequency fields emitted by two birdcage coils at a frequency of 128 MHz. Investigations were made on a modified anatomical numerical human model with an implanted DBS. The results indicate that, even when the patient-related partial-body limit of 3.2 W/kg for the head-averaged SAR is met, adverse tissue heating occurs at an implanted DBS.

Keywords deep brain stimulator, MRI, specific absorption rate, tissue heating, patient safety

Introduction
Deep brain stimulators (DBS) are an alternative to medication in treating patients suffering from Parkinson's disease, movement disorders and medically intractable tremor [1, 2] as well as psychiatric disorders [3]. MRI is applied for diagnosis as well as for placing and verifying the proper position of the implanted electrode(s). Manufacturer's safety guidelines restrict MR imaging of such patients to systems up to 1.5 T only, which apply 64 MHz radio frequency (RF) electromagnetic fields (EMF) [4]. In contrast, retrospective studies on performed scans, partly disregarding this advice, report no MRI-related injuries [3, 5], and an expert group expressed concerns that such a precaution may be too restrictive [2]. The aim of this paper is to analyse the specific absorption rate (SAR) and temperature elevation due to the RF field of MRI scanners at 128 MHz (3 T) by numerical modelling.

Methods
Simulation platform: All simulations were performed with a commercial software package (SEMCAD X V14.8). Boundary surfaces of the simulation space were totally absorbing. Thermal simulation was performed with the steady-state solver and tissue boundary conditions according to Neufeld [6].

Model: A unilateral monopolar DBS, consisting of a metallic can with a dielectric header and an insulated electrode lead, was implanted into the anatomical model of the adult male Duke [7]. The stimulator can was placed pectorally below the left clavicle. The lead was guided along subcutaneous tissues. The lead tip was placed in the hypothalamus (Figure 1).

Figure 1: Deep brain stimulator integrated in the human model "Duke"

To simulate the RF EMF generated by the MRI scanner, two birdcage coils (head coil, body coil) with 16 rungs were used (Figure 2). Dielectric material property values were taken from the SEMCAD X material database, which is based on results from Gabriel et al. [8]. All metallic objects were modelled as perfect electric conductors. The relative permittivity of the lead insulation and of the header of the stimulator case was set to 3.

Figure 2: Birdcage coils used for MR imaging; head coil (left) and body coil (right)

SAR: The input power was scaled so as to induce a whole-head averaged SAR of 3.2 W/kg [9] in a reference patient without implant. SAR averaged over 1 g, 10 g and the whole body was calculated.

Temperature: Exposure to MRI RF EMF lasts long enough to justify a steady-state approach. Hence, specific heat capacity, thermal conductivity, heat generation rate, perfusion and blood heat capacity of the tissues were considered constant.
The temperature elevation at equilibrium (calculated with input power = 0 W) was subtracted from the results for each voxel.
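The power normalisation step can be made concrete with a small sketch: local SAR follows from the simulated E-field as SAR = sigma * |E|^2 / (2 * rho), and since SAR scales linearly with input power, one factor enforces the reference exposure. All field and tissue values below are placeholders, not simulation output:

import numpy as np

rng = np.random.default_rng(0)

# Placeholder voxel data standing in for field-solver output on a head region:
E_peak = rng.uniform(0, 60, size=1000)   # peak E-field magnitude, V/m
sigma = 0.5                              # tissue conductivity, S/m (assumed)
rho = 1050.0                             # tissue density, kg/m^3 (assumed)
mass = np.full(1000, 1e-6)               # voxel mass, kg (assumed uniform)

sar = sigma * E_peak**2 / (2 * rho)      # local SAR per voxel, W/kg
head_avg = np.sum(sar * mass) / np.sum(mass)

# One scaling factor brings the head-averaged SAR to the 3.2 W/kg reference:
scale = 3.2 / head_avg
print(head_avg, (sar * scale * mass).sum() / mass.sum())  # second value: 3.2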

Results
Tissue heating is caused by RF EMF absorption and by EMF-induced eddy currents within the metallic parts of the implant, which is the most important safety issue for patients with DBS [10]. The presence of the DBS during MRI at 128 MHz leads, for both types of coils, to local SAR values that exceed the safety limit SAR_10g = 20 W/kg [9]. The metallic implant changes the spatial SAR distribution rather than the whole-body SAR value. The patient's heat load and the affected volume depend on the coil type; both are larger with the body coil. The spatial SAR distributions for the head coil and the body coil, with and without the DBS implant, are shown in Figure 3.

Figure 3: SAR distribution with and without a DBS, at a head coil (left) and a body coil (right)

Without the DBS, at both coils the highest temperature elevation is encountered in the superficial tissues. As expected, in patients with an implanted DBS the maximal tissue heating is located near the electrode tip. It amounts to 13.6 °C and 7.2 °C for the head coil and the body coil, respectively. These temperature elevations are reached after about 20 min, which corresponds to usual MRI scan times.

Discussion
At maximal permissible power levels, local SAR limits are exceeded, and tissue heating is high enough to cause adverse thermal effects. It must be pointed out that the calculations do not yet consider the pulsation of RF fields and the different duty cycles. Therefore, the results are worst-case estimations. Manufacturer's safety recommendations limit the head-averaged SAR to 0.1 W/kg [4]. This is conservative enough to meet the SAR safety limits and to keep the highest temperature elevation well below 1 °C. Further investigations which account for RF EMF pulsation are needed.

References
[1] C. C. McIntyre, M. Savasta, L. Kerkerian-Le Goff and J. L. Vitek, Uncovering the mechanism(s) of action of deep brain stimulation - activation, inhibition or both, Clinical Neurophysiology, 115, 2004
[2] J. M. Bronstein, Deep Brain Stimulation for Parkinson Disease - An Expert Consensus and Review of Key Issues, Archives of Neurology, 68(2), 2011
[3] E. Pereira, D. Nandi and T. Aziz, Deep Brain Stimulation: an underused panacea?, Advances In Clinical Neuroscience And Rehabilitation
[4] Medtronic, MRI Guidelines for Medtronic Deep Brain Stimulation Systems, [Online]. Available: uments/dbs-mri-gdlns.pdf. [Accessed ]
[5] P. S. Larson, R. M. Richardson, P. A. Starr and A. J. Martin, Magnetic Resonance Imaging of Implanted Deep Brain Stimulators: Experience in a Large Series, Stereotactic and Functional Neurosurgery, 86:92-100, 2008
[6] E. Neufeld, High Resolution Hyperthermia Treatment Planning, Dissertation, ETH Zürich
[7] A. Christ et al., The Virtual Family - development of surface-based anatomical models of two adults and two children for dosimetric simulations, Phys. Med. Biol., 55:N23-N38, 2010
[8] S. Gabriel, R. Lau and C. Gabriel, The dielectric properties of biological tissues: III. Parametric models for the dielectric spectrum of tissues, Phys Med Biol, 41, 1996
[9] IEC, IEC 60601-2-33: Medical Electrical Equipment - Part 2-33: Particular requirements for basic safety and essential performance of magnetic resonance equipment for medical diagnosis, International Electrotechnical Commission, Geneva, Switzerland
[10] J. A. Nyenhuis, S.-M. Park, R. Kamondetdacha, A. Amjad, F. G. Shellock and A. R. Rezai, MRI and Implanted Medical Devices: Basic Interactions With an Emphasis on Heating, IEEE Transactions on Device and Materials Reliability, 5, 2005


Mehr

Workflows, Ansprüche und Grenzen der GNSS- Datenerfassung im Feld

Workflows, Ansprüche und Grenzen der GNSS- Datenerfassung im Feld Workflows, Ansprüche und Grenzen der GNSS- Datenerfassung im Feld Alexander Fischer Senior Application Engineer Asset Collection & GIS 1 Leica Zeno GIS Agenda Erfassung im Feld VS Erfassung im Office Validierung

Mehr

Critical Chain and Scrum

Critical Chain and Scrum Critical Chain and Scrum classic meets avant-garde (but who is who?) TOC4U 24.03.2012 Darmstadt Photo: Dan Nernay @ YachtPals.com TOC4U 24.03.2012 Darmstadt Wolfram Müller 20 Jahre Erfahrung aus 530 Projekten

Mehr

Prediction Market, 28th July 2012 Information and Instructions. Prognosemärkte Lehrstuhl für Betriebswirtschaftslehre insbes.

Prediction Market, 28th July 2012 Information and Instructions. Prognosemärkte Lehrstuhl für Betriebswirtschaftslehre insbes. Prediction Market, 28th July 2012 Information and Instructions S. 1 Welcome, and thanks for your participation Sensational prices are waiting for you 1000 Euro in amazon vouchers: The winner has the chance

Mehr

Künstliche Intelligenz

Künstliche Intelligenz Künstliche Intelligenz Data Mining Approaches for Instrusion Detection Espen Jervidalo WS05/06 KI - WS05/06 - Espen Jervidalo 1 Overview Motivation Ziel IDS (Intrusion Detection System) HIDS NIDS Data

Mehr

Business-centric Storage How appliances make complete backup solutions simple to build and to sell

Business-centric Storage How appliances make complete backup solutions simple to build and to sell Business-centric Storage How appliances make complete backup solutions simple to build and to sell Frank Reichart Sen. Dir. Prod. Marketing Storage Solutions 0 The three horrors of data protection 50%

Mehr

Scrum @FH Biel. Scrum Einführung mit «Electronical Newsletter» FH Biel, 12. Januar 2012. Folie 1 12. Januar 2012. Frank Buchli

Scrum @FH Biel. Scrum Einführung mit «Electronical Newsletter» FH Biel, 12. Januar 2012. Folie 1 12. Januar 2012. Frank Buchli Scrum @FH Biel Scrum Einführung mit «Electronical Newsletter» FH Biel, 12. Januar 2012 Folie 1 12. Januar 2012 Frank Buchli Zu meiner Person Frank Buchli MS in Computer Science, Uni Bern 2003 3 Jahre IT

Mehr

KURZANLEITUNG. Firmware-Upgrade: Wie geht das eigentlich?

KURZANLEITUNG. Firmware-Upgrade: Wie geht das eigentlich? KURZANLEITUNG Firmware-Upgrade: Wie geht das eigentlich? Die Firmware ist eine Software, die auf der IP-Kamera installiert ist und alle Funktionen des Gerätes steuert. Nach dem Firmware-Update stehen Ihnen

Mehr

A central repository for gridded data in the MeteoSwiss Data Warehouse

A central repository for gridded data in the MeteoSwiss Data Warehouse A central repository for gridded data in the MeteoSwiss Data Warehouse, Zürich M2: Data Rescue management, quality and homogenization September 16th, 2010 Data Coordination, MeteoSwiss 1 Agenda Short introduction

Mehr

Introducing PAThWay. Structured and methodical performance engineering. Isaías A. Comprés Ureña Ventsislav Petkov Michael Firbach Michael Gerndt

Introducing PAThWay. Structured and methodical performance engineering. Isaías A. Comprés Ureña Ventsislav Petkov Michael Firbach Michael Gerndt Introducing PAThWay Structured and methodical performance engineering Isaías A. Comprés Ureña Ventsislav Petkov Michael Firbach Michael Gerndt Technical University of Munich Overview Tuning Challenges

Mehr

Lufft UMB Sensor Overview

Lufft UMB Sensor Overview Lufft Sensor Overview Wind Radiance (solar radiation) Titan Ventus WS310 Platinum WS301/303 Gold V200A WS300 WS400 WS304 Professional WS200 WS401 WS302 Radiance (solar radiation) Radiation 2 Channel EPANDER

Mehr

Distributed testing. Demo Video

Distributed testing. Demo Video distributed testing Das intunify Team An der Entwicklung der Testsystem-Software arbeiten wir als Team von Software-Spezialisten und Designern der soft2tec GmbH in Kooperation mit der Universität Osnabrück.

Mehr

Zugangsvoraussetzungen für Airworthiness Review Staff gem. Part-M.A.707

Zugangsvoraussetzungen für Airworthiness Review Staff gem. Part-M.A.707 1) Zusammenfassung der relevanten Part-M Paragraphen und AMC M.A.707 Airworthiness review staff (a) To be approved to carry out reviews, an approved continuing management organisation shall have appropriate

Mehr

Plan für heute. Vorlesungstermine. CG1 & CG2 Vorlesungsthemen. Anwendungsgebiete. Warum Computer Grafik? Computergrafik 1&2 SS 2010

Plan für heute. Vorlesungstermine. CG1 & CG2 Vorlesungsthemen. Anwendungsgebiete. Warum Computer Grafik? Computergrafik 1&2 SS 2010 Plan für heute Computergrafik 1&2 SS 2010 http://www.icg.tu-graz.ac.at/courses/cgcv Organisation der Vorlesung Anwendungen der Computergrafik Konzepte der Computergrafik Display Technologies Prof. Institut

Mehr

Austria Regional Kick-off

Austria Regional Kick-off Austria Regional Kick-off Andreas Dippelhofer Anwendungszentrum GmbH Oberpfaffenhofen (AZO) AZO Main Initiatives Andreas Dippelhofer 2 The Competition SPOT THE SPACE RELATION IN YOUR BUSINESS 3 Global

Mehr

SEKTION 43 F - Baader f/2 Highspeed- Filter und Filtersätze

SEKTION 43 F - Baader f/2 Highspeed- Filter und Filtersätze 1 von 5 13.08.2014 12:35 SEKTION 43 F - Baader f/2 Highspeed- Filter und Filtersätze BAADER FILTER und SPANNUNGSFREI GEFASSTE FILTER - od warum dürfen Baader Filter in Ihren Fassungen "klappern" Here we

Mehr

Cloud Architektur Workshop

Cloud Architektur Workshop Cloud Architektur Workshop Ein Angebot von IBM Software Services for Cloud & Smarter Infrastructure Agenda 1. Überblick Cloud Architektur Workshop 2. In 12 Schritten bis zur Cloud 3. Workshop Vorgehensmodell

Mehr

DIGICOMP OPEN TUESDAY AKTUELLE STANDARDS UND TRENDS IN DER AGILEN SOFTWARE ENTWICKLUNG. Michael Palotas 7. April 2015 1 GRIDFUSION

DIGICOMP OPEN TUESDAY AKTUELLE STANDARDS UND TRENDS IN DER AGILEN SOFTWARE ENTWICKLUNG. Michael Palotas 7. April 2015 1 GRIDFUSION DIGICOMP OPEN TUESDAY AKTUELLE STANDARDS UND TRENDS IN DER AGILEN SOFTWARE ENTWICKLUNG Michael Palotas 7. April 2015 1 GRIDFUSION IHR REFERENT Gridfusion Software Solutions Kontakt: Michael Palotas Gerbiweg

Mehr

(Prüfungs-)Aufgaben zum Thema Scheduling

(Prüfungs-)Aufgaben zum Thema Scheduling (Prüfungs-)Aufgaben zum Thema Scheduling 1) Geben Sie die beiden wichtigsten Kriterien bei der Wahl der Größe des Quantums beim Round-Robin-Scheduling an. 2) In welchen Situationen und von welchen (Betriebssystem-)Routinen

Mehr

1.9 Dynamic loading: τ ty : torsion yield stress (torsion) τ sy : shear yield stress (shear) In the last lectures only static loadings are considered

1.9 Dynamic loading: τ ty : torsion yield stress (torsion) τ sy : shear yield stress (shear) In the last lectures only static loadings are considered 1.9 Dynaic loading: In the last lectures only static loadings are considered A static loading is: or the load does not change the load change per tie N Unit is 10 /sec 2 Load case Ι: static load (case

Mehr

Virtual PBX and SMS-Server

Virtual PBX and SMS-Server Virtual PBX and SMS-Server Software solutions for more mobility and comfort * The software is delivered by e-mail and does not include the boxes 1 2007 com.sat GmbH Kommunikationssysteme Schwetzinger Str.

Mehr

Background for Hybrid Processing

Background for Hybrid Processing Background for Hybrid Processing Hans Uszkoreit Foundations of LST WS 04/05 Scope Classical Areas of Computational Linguistics: computational morphology, computational syntax computational semantics computational

Mehr

Delivering services in a user-focussed way - The new DFN-CERT Portal -

Delivering services in a user-focussed way - The new DFN-CERT Portal - Delivering services in a user-focussed way - The new DFN-CERT Portal - 29th TF-CSIRT Meeting in Hamburg 25. January 2010 Marcus Pattloch (cert@dfn.de) How do we deal with the ever growing workload? 29th

Mehr

Restschmutzanalyse Residual Dirt Analysis

Restschmutzanalyse Residual Dirt Analysis Q-App: Restschmutzanalyse Residual Dirt Analysis Differenzwägeapplikation, mit individueller Proben ID Differential weighing application with individual Sample ID Beschreibung Gravimetrische Bestimmung

Mehr

Aber genau deshalb möchte ich Ihre Aufmehrsamkeit darauf lenken und Sie dazu animieren, der Eventualität durch geeignete Gegenmaßnahmen zu begegnen.

Aber genau deshalb möchte ich Ihre Aufmehrsamkeit darauf lenken und Sie dazu animieren, der Eventualität durch geeignete Gegenmaßnahmen zu begegnen. NetWorker - Allgemein Tip 618, Seite 1/5 Das Desaster Recovery (mmrecov) ist evtl. nicht mehr möglich, wenn der Boostrap Save Set auf einem AFTD Volume auf einem (Data Domain) CIFS Share gespeichert ist!

Mehr

Versorgungsforschung: ein MUSS für Interdisziplinarität?

Versorgungsforschung: ein MUSS für Interdisziplinarität? Versorgungsforschung: ein MUSS für Interdisziplinarität? Tanja Stamm Medizinische Universität Wien, Abt. f. Rheumatologie FH Campus Wien, Department Gesundheit, Ergotherapie & Health Assisting Engineering

Mehr

PRESS RELEASE. Kundenspezifische Lichtlösungen von MENTOR

PRESS RELEASE. Kundenspezifische Lichtlösungen von MENTOR Kundenspezifische Lichtlösungen von MENTOR Mit Licht Mehrwert schaffen. Immer mehr Designer, Entwicklungsingenieure und Produktverantwortliche erkennen das Potential innovativer Lichtkonzepte für ihre

Mehr

Power, space heat and refurbishment: Tools and methods to include energy in urban planning. Thomas Hamacher

Power, space heat and refurbishment: Tools and methods to include energy in urban planning. Thomas Hamacher Power, space heat and refurbishment: Tools and methods to include energy in urban planning Thomas Hamacher Problem Statement 2 New developments Urban planning Smaller scale units in the infrastructure

Mehr

Implementation und Test eines Optimierungsverfahrens zur Lösung nichtlinearer Gleichungen der Strukturmechanik

Implementation und Test eines Optimierungsverfahrens zur Lösung nichtlinearer Gleichungen der Strukturmechanik Bauhaus-Universität Weimar Fakultät Bauingenieurwesen Institut für Strukturmechanik Diplomarbeit Implementation und Test eines Optimierungsverfahrens zur Lösung nichtlinearer Gleichungen der Strukturmechanik

Mehr

Abteilung Internationales CampusCenter

Abteilung Internationales CampusCenter Abteilung Internationales CampusCenter Instructions for the STiNE Online Enrollment Application for Exchange Students 1. Please go to www.uni-hamburg.de/online-bewerbung and click on Bewerberaccount anlegen

Mehr

Mit Legacy-Systemen in die Zukunft. adviion. in die Zukunft. Dr. Roland Schätzle

Mit Legacy-Systemen in die Zukunft. adviion. in die Zukunft. Dr. Roland Schätzle Mit Legacy-Systemen in die Zukunft Dr. Roland Schätzle Der Weg zur Entscheidung 2 Situation Geschäftliche und softwaretechnische Qualität der aktuellen Lösung? Lohnen sich weitere Investitionen? Migration??

Mehr

Challenges in Systems Engineering and a Pragmatic Solution Approach

Challenges in Systems Engineering and a Pragmatic Solution Approach Pure Passion. Systems Engineering and a Pragmatic Solution Approach HELVETING Dr. Thomas Stöckli Director Business Unit Systems Engineering Dr. Daniel Hösli Member of the Executive Board 1 Agenda Different

Mehr

PCIe, DDR4, VNAND Effizienz beginnt im Server

PCIe, DDR4, VNAND Effizienz beginnt im Server PCIe, DDR4, VNAND Effizienz beginnt im Server Future Thinking 2015 /, Director Marcom + SBD EMEA Legal Disclaimer This presentation is intended to provide information concerning computer and memory industries.

Mehr

How to access licensed products from providers who are already operating productively in. General Information... 2. Shibboleth login...

How to access licensed products from providers who are already operating productively in. General Information... 2. Shibboleth login... Shibboleth Tutorial How to access licensed products from providers who are already operating productively in the SWITCHaai federation. General Information... 2 Shibboleth login... 2 Separate registration

Mehr

Veräußerung von Emissionsberechtigungen in Deutschland

Veräußerung von Emissionsberechtigungen in Deutschland Veräußerung von Emissionsberechtigungen in Deutschland Monatsbericht September 2008 Berichtsmonat September 2008 Die KfW hat im Zeitraum vom 1. September 2008 bis zum 30. September 2008 3,95 Mio. EU-Emissionsberechtigungen

Mehr