Felix Thielke
at Fraunhofer MEVIS
SPIE Involvement:
Author | Instructor
Publications (3)

Proceedings Article | 3 April 2023 | Presentation + Paper
Felix Thielke, Farina Kock, Annika Hänsch, Nasreddin Abolmaali, Andrea Schenk, Hans Meine
Proceedings Volume 12464, 124640B (2023) https://doi.org/10.1117/12.2654157
KEYWORDS: Computed tomography, Veins, Liver, Arteries, Deep learning, Volume segmentation

Proceedings Article | 4 April 2022 | Poster + Presentation + Paper
Farina Kock, Grzegorz Chlebus, Felix Thielke, Andrea Schenk, Hans Meine
Proceedings Volume 12033, 120331O (2022) https://doi.org/10.1117/12.2607253
KEYWORDS: Arteries, Liver, Convolutional neural networks, Neural networks, Image segmentation, Computed tomography

Proceedings Article | 4 April 2022 | Poster + Paper
Felix Thielke, Farina Kock, Annika Hänsch, Joachim Georgii, Nasreddin Abolmaali, Itaru Endo, Hans Meine, Andrea Schenk
Proceedings Volume 12032, 120323E (2022) https://doi.org/10.1117/12.2612526
KEYWORDS: Liver, Image segmentation, Convolution, Reconstruction algorithms, Neural networks, Veins

Course Instructor
SC1235: Introduction to Medical Image Analysis Using Convolutional Neural Networks
Segmentation, detection, and classification are major tasks in medical image analysis and image understanding. Medical imaging researchers make heavy use of recent developments in machine learning, and with deep learning methods they achieve significantly better results on many real-world problems than with previous solutions. The course aims to enable students and professionals to apply deep learning methods to their own data and problems. Using an interactive programming environment, course participants will explore all required steps in practice and learn tools and techniques from data preparation to result interpretation. We will work on example data and train models to segment anatomical structures, to detect abnormalities, and to classify them. Simple methods to explain predictions and assess network uncertainty will also be discussed briefly. Participants will work in a prepared online environment providing selected deep learning toolkit installations, example data, and fully functional skeleton code as a basis for their own experiments.
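As a rough illustration of the kind of workflow this description mentions (not course material), the following minimal PyTorch sketch trains a tiny fully convolutional network for binary segmentation on random stand-in tensors; all layer sizes, shapes, and data here are illustrative assumptions.

```python
# Minimal illustrative sketch: training a tiny fully convolutional network
# for binary segmentation on random stand-in data (assumes PyTorch is installed).
import torch
import torch.nn as nn

# Tiny fully convolutional network: input image -> per-pixel foreground logit
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),  # one output channel: foreground logit
)

criterion = nn.BCEWithLogitsLoss()          # per-pixel binary cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: 4 grayscale "images" and binary "masks" of size 64x64
images = torch.randn(4, 1, 64, 64)
masks = (torch.rand(4, 1, 64, 64) > 0.5).float()

for step in range(10):                      # a few training steps
    optimizer.zero_grad()
    logits = model(images)                  # forward pass: per-pixel logits
    loss = criterion(logits, masks)         # compare with ground-truth masks
    loss.backward()                         # backpropagate
    optimizer.step()                        # update weights

# At inference time, threshold the sigmoid output to obtain a binary mask
predicted_mask = torch.sigmoid(model(images)) > 0.5
```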
SC1324: Transformers: A Powerful Tool for Image Analysis and Generation
Compared to the tremendous success that transformer-based models such as GPT-4, Bard, or Llama have had in text analysis, machine translation, and de-novo text generation, it took longer for transformers to enter the scene of image analysis and image generation, mainly because of their high demands on compute and data. More successful approaches to training transformers have led to a change that will likely be as impactful as the introduction of CNNs for image classification. The first (still limited) “image foundation models” have already been published, and in medical image analysis several attempts are being made to parallel the semantic understanding and emergent capabilities seen in large language models. In this course, we explain the conceptual model and elementary mathematics of the attention mechanism underlying transformers. You will learn in theory and explore in hands-on work the reasons for their modeling capacity and understand why this creates the need for larger training datasets. The course traces the development of transformers for image analysis tasks and shows ways to pre-train transformers on weak or unlabeled data. It concludes with examples of applications in medical image analysis tasks.
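As a brief, non-authoritative sketch of the attention mechanism referenced above (not course material), the following NumPy snippet implements scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, for a single head on small random matrices; the token count and embedding size are illustrative assumptions.

```python
# Minimal sketch of scaled dot-product attention (illustrative only).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

# Illustrative shapes: 5 tokens (e.g. image patches), 8-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 8))
K = rng.standard_normal((5, 8))
V = rng.standard_normal((5, 8))

out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (5, 8): one updated representation per token
```

The 1/sqrt(d_k) scaling keeps the dot products from growing with the embedding dimension, which would otherwise push the softmax into near-one-hot regions and shrink its gradients.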