Talks and presentations

Can Synthetic Images Improve CNN Performance in Wound Image Classification?

June 01, 2023

Talk, Medical Informatics Europe 2023 Conference, Gothenburg, Sweden

For artificial intelligence (AI) based systems to become clinically relevant, they must perform well. Machine learning (ML) based AI systems require large amounts of labelled training data to reach this level. When such data are scarce, Generative Adversarial Networks (GANs) are a standard tool for synthesising artificial training images to augment the data set. We investigated the quality of synthetic wound images with respect to two aspects: (i) improvement of wound-type classification by a Convolutional Neural Network (CNN) and (ii) how realistic such images look to clinical experts (n = 217). Concerning (i), the results show a slight classification improvement; however, the connection between classification performance and the size of the artificial data set remains unclear. Regarding (ii), although the GAN could produce highly realistic images, the clinical experts took them for real in only 31% of cases. We conclude that image quality may play a more significant role than data set size in improving the CNN-based classification result.
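A minimal sketch of this kind of augmentation experiment, assuming scikit-learn is available; random feature vectors stand in for wound images, and noise-jittered copies of the training set stand in for GAN-generated samples — this is an illustration of the comparison, not the study's code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for flattened image features: two classes, 200 samples.
X = rng.normal(size=(200, 64))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: train on the "real" data only.
base = LogisticRegression(max_iter=1000).fit(X_train, y_train)
base_acc = base.score(X_test, y_test)

# "Synthetic" samples: jittered copies stand in for GAN output here.
X_syn = X_train + 0.1 * rng.normal(size=X_train.shape)
X_aug = np.vstack([X_train, X_syn])
y_aug = np.concatenate([y_train, y_train])

# Augmented run: same classifier, real + synthetic training set.
aug = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
aug_acc = aug.score(X_test, y_test)
print(base_acc, aug_acc)
```

Comparing `base_acc` and `aug_acc` on the same hold-out set mirrors the study's question of whether adding synthetic samples helps the classifier.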

Automatic Wound Type Classification with Convolutional Neural Networks

August 01, 2022

Talk, 20th International Conference on Informatics, Management and Technology in Healthcare, Athens, Greece

Chronic wounds are ulcerations of the skin that fail to heal because of an underlying condition such as diabetes mellitus or venous insufficiency. Timely identification of the underlying condition is crucial for healing, but it requires expert knowledge that is unavailable in some care situations. Here, artificial intelligence technology may support clinicians. In this study, we explore the performance of a deep convolutional neural network in classifying diabetic foot and venous leg ulcers from wound images. We trained a convolutional neural network on 863 cropped wound images. On a hold-out test set of 80 images, the model yielded an F1-score of 0.85 on the cropped images and 0.70 on the full images. This study shows promising results; however, the model must be trained on more wound images and wound types before it can be applied in clinical practice.
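The hold-out evaluation above can be sketched with scikit-learn's `f1_score`; the labels below are randomly generated placeholders, not the study's data:

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(42)

# Hypothetical hold-out set of 80 images, matching the size in the study.
# 0 = diabetic foot ulcer, 1 = venous leg ulcer (labels are made up).
y_true = rng.integers(0, 2, size=80)
# Simulated predictions that agree with the truth about 85% of the time.
y_pred = np.where(rng.random(80) < 0.85, y_true, 1 - y_true)

f1 = f1_score(y_true, y_pred)
print(round(f1, 2))
```

The F1-score balances precision and recall, which is why it is preferred over raw accuracy when the two wound classes are not perfectly balanced.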

Klix Project

July 01, 2021

Talk, Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany

Adequately measuring the quality of face recognizers and quantifying improvements is crucial for their further development. A popular dataset for evaluation is LFW (Labeled Faces in the Wild). While this dataset is widely accepted in the community as a benchmark, it could be argued that it is saturated, with recent models achieving over 99% accuracy. This is problematic because improvements from this point on must be very small increments, making the effects of changes to the recognizers hard to quantify. We introduced a novel dataset, "Labeled Children in the Wild" (LCW), with a structure similar to LFW so that it can serve as a drop-in replacement. The data are selected to be more difficult, featuring additional challenges that can occur in recognition scenarios: strong age differences, ranging from early childhood to very old age, heavy use of costumes and makeup, and a wide variety of source media. The dataset contains photographs as well as frames from films. The images were recorded under heterogeneous conditions between 1870 and 2019, covering a wide range of scenarios, camera technology and image quality. More information can be found here.

Face Recognition Using VGG16

March 01, 2021

Talk, Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany

VGG16 is a convolutional neural network (CNN) architecture that was originally designed for image classification tasks. However, its deep architecture and large number of parameters make it well suited for feature extraction in various computer vision applications, including face recognition and facial expression analysis. More information can be found here.

Malaria parasite detection in Giemsa-stained blood cell images

March 01, 2013

Talk, Machine Vision and Image Processing (MVIP) Conference, Zanjan, Iran

This research presents a method for detecting malaria parasites in blood samples stained with Giemsa. To increase detection accuracy, a red blood cell mask is extracted first, because most malaria parasites reside in red blood cells. Then, the stained elements of the blood, such as red blood cells, parasites and white blood cells, are extracted. Next, the red blood cell mask is overlaid on the extracted stained elements to isolate the candidate parasites. Finally, color histogram, granulometry, gradient and flat texture features are extracted and used as classifier inputs. Five classifiers were compared: support vector machines (SVM), nearest mean (NM), K nearest neighbors (KNN), 1-NN and Fisher. The K nearest neighbors classifier achieved the best accuracy, at 91%.
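The final classification step can be sketched with scikit-learn's `KNeighborsClassifier`; the feature vectors below are random placeholders standing in for the extracted color histogram, granulometry, gradient and flat texture features, not the paper's data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Toy stand-in for candidate-object feature vectors.
# Labels: 1 = parasite, 0 = not a parasite (synthetic rule, made up).
X = rng.normal(size=(300, 20))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# KNN was the best of the five classifiers compared in the paper.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
acc = knn.score(X_te, y_te)
print(round(acc, 2))
```

In the paper's setting the same fit/score pattern would be repeated for each of the five classifiers, with accuracy on a held-out set deciding the winner.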