Open-source technologies and solutions have paved the way for making science accessible worldwide. Motivated to contribute to open-source methods, our research presents a complete workflow for building a microscope using 3D printing and readily accessible optical components to collect images of biological samples. These images are then classified using machine learning algorithms, both to illustrate the effectiveness of this method and to expose the difficulty of classifying visually similar images. A second outcome of this research is OPEN-BIOset, an openly accessible dataset of the collected images, made available to the machine learning community for future research. The research adopts the OpenFlexure Delta Stage microscope (https://openflexure.org/), which allows motorised control and maximum sample stability during imaging. A Raspberry Pi camera images the samples in a transmission-based illumination setup. The collected imaging data are catalogued and organised for classification using TensorFlow. Using visual interpretation, we created subsets of the samples to experiment towards the best classification results. We found that by removing visually similar samples, categorical accuracies of 99.9% and 99.59% were achieved on the training and testing sets, respectively. Our research demonstrates the efficacy of open-source tools and methods. Future work will use higher-resolution images for classification, and other microscopy modalities will be realised based on the OpenFlexure microscope.
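As an illustration of the kind of TensorFlow pipeline the abstract describes, below is a minimal sketch of loading a class-per-folder image dataset and training a small classifier. The directory layout, image size, batch size, and CNN architecture are assumptions for illustration; the abstract does not specify the network or preprocessing actually used.

```python
# Minimal sketch of a TensorFlow/Keras classification pipeline for
# microscope images, loosely following the workflow described above.
# The "OPEN-BIOset/..." paths, 128x128 input size, and layer choices
# are assumptions, not the authors' actual configuration.
import tensorflow as tf

IMG_SIZE = (128, 128)  # assumed input resolution

# Assumed directory layout: one sub-folder per sample class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "OPEN-BIOset/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "OPEN-BIOset/test", image_size=IMG_SIZE, batch_size=32)

num_classes = len(train_ds.class_names)

# A small CNN classifier; purely illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes),  # logits, one per class
])

# Categorical accuracy on training/testing splits is the metric the
# abstract reports (99.9% and 99.59% after removing similar samples).
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Removing visually similar classes before training, as the abstract describes, would simply mean pruning the corresponding class sub-folders before building the datasets above.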
Supervised deep learning algorithms are redefining the state of the art for object detection and classification. However, training these algorithms requires extensive datasets that are typically expensive and time-consuming to collect. In the field of defence and security, this can become impractical when data is of a sensitive nature, such as infrared imagery of military vessels. Consequently, algorithm development and training are often conducted in synthetic environments, but this brings into question the generalisability of the solution to real-world data. In this paper we investigate training deep learning algorithms for infrared automatic target recognition without using real-world infrared data. A large synthetic dataset of infrared images of maritime vessels in the long-wave infrared waveband was generated using target-missile engagement simulation software and ten high-fidelity computer-aided design models. Multiple approaches to training a YOLOv3 architecture were explored and subsequently evaluated using a video sequence of real-world infrared data. Experiments demonstrated that supplementing the training data with a small sample of semi-labelled pseudo-IR imagery caused a marked improvement in performance. Despite the absence of real infrared training data, high average precision and recall scores of 99% and 93%, respectively, were achieved on our real-world test data. To further the development and benchmarking of automatic target recognition algorithms, this paper also contributes our dataset of photo-realistic synthetic infrared images.
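For readers unfamiliar with how detection precision and recall figures like those above are typically produced, below is a minimal sketch of IoU-based matching between predicted and ground-truth boxes. The (x1, y1, x2, y2) box format, the 0.5 IoU threshold, and the greedy one-to-one matching rule are common conventions assumed here, not details taken from the paper.

```python
# Minimal sketch of IoU-based detection scoring of the kind used to
# produce precision/recall figures for a detector such as YOLOv3.
# Box format, IoU threshold, and matching rule are assumed conventions.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(preds: List[Box], truths: List[Box],
                     thresh: float = 0.5) -> Tuple[float, float]:
    """Greedily match each prediction to at most one ground-truth box."""
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, thresh
        for i, t in enumerate(truths):
            overlap = iou(p, t)
            if i not in matched and overlap >= best_iou:
                best, best_iou = i, overlap
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 1.0
    recall = tp / len(truths) if truths else 1.0
    return precision, recall

# Example: one true positive and one false positive against a single
# ground-truth box -> precision 0.5, recall 1.0.
print(precision_recall([(0, 0, 10, 10), (50, 50, 60, 60)],
                       [(1, 1, 11, 11)]))
```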