The interface of deep learning and imaging has seen extraordinary progress in the past few years, as computational power now enables image processing that can exceed human capability. Much of the recent work at this interface applies variants of convolutional neural networks to a wide variety of tasks, including image enhancement, style transfer and labelling. However, whilst deep learning can unlock extremely powerful capabilities, the collection and processing of appropriate training data remains a significant challenge. In this talk, a brief tutorial on the practical application of neural networks for image processing will be presented, followed by experimental results associated with optical and scanning electron microscopy. The focus of the talk will be the demonstration of image enhancement of optical microscopy from 20x to 1500x magnification, whilst simultaneously identifying the objects present and hence enabling automated labelling, colour enhancement and removal of specific objects in the magnified image.
We demonstrate the application of deep learning for the identification of particles, directly from their backscattered light. The particles were illuminated using a single-mode fibre-coupled laser light source and the scattered light was collected by a 30-core optical fibre. The technique enabled identification of the specific species of pollen grains with an accuracy of ~97%, even in the presence of high levels of background light equivalent to daytime sunlight. In addition, the technique determined the distance between the fibre tip and the particles with an accuracy of ± 6 µm.
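The approach described above combines classification (pollen species) and regression (fibre-particle distance) from a single scattered-light measurement. A minimal sketch of how such a multi-task network could be structured is shown below; this is an illustrative assumption, not the authors' code, and the layer sizes, input resolution and species count are placeholders.

```python
import torch
import torch.nn as nn

class ScatterNet(nn.Module):
    """Illustrative multi-task CNN: one shared feature extractor, two heads.
    Input is assumed to be a 64x64 greyscale image of the backscattered
    light collected by the multicore fibre (sizes are placeholders)."""

    def __init__(self, n_species: int = 5):
        super().__init__()
        # Shared convolutional feature extractor
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        feat_dim = 32 * 16 * 16  # 64x64 input halved twice by pooling
        # Head 1: pollen-species logits (softmax applied in the loss)
        self.species = nn.Linear(feat_dim, n_species)
        # Head 2: fibre-particle distance regression (micrometres)
        self.distance = nn.Linear(feat_dim, 1)

    def forward(self, x):
        f = self.features(x)
        return self.species(f), self.distance(f)

net = ScatterNet()
logits, dist = net(torch.randn(1, 1, 64, 64))
```

Training would jointly minimise a cross-entropy loss on the species head and a mean-squared-error loss on the distance head, which is one common way to obtain both outputs from one measurement.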
Materials processing using femtosecond laser pulses offers the potential for high-precision manufacturing. However, owing to the associated nonlinear processes, even small levels of experimental noise (e.g. instability in laser power, or unexpected debris) can result in substantial deviations from the desired machined structures, and there is therefore much interest in the development of closed-loop feedback processes. Recent advances in neural-network algorithms, and in particular convolutional neural networks (CNNs), have driven rapid progress in the field. Here, we present the first demonstration of a CNN that identifies the experimental parameters solely from images captured by a camera observing the sample during laser machining. We show that the CNN was able to accurately determine the laser fluence, the number of pulses and the material being machined.
Although there are many other computational approaches to image-based feedback, the CNN approach has the significant advantage that it works purely as a pattern-recognition device, and hence requires minimal human input with regard to the physics underlying the laser machining process. It therefore avoids the need for a comprehensive programmatic description of the nonlinear interaction between laser light and material. Training took one hour, and processing a single image to identify the experimental parameters took approximately 30 milliseconds, demonstrating the potential for a CNN to act as the central component of a real-time feedback system for laser machining, in which undesired or incorrect machining is compensated for immediately.
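With ~30 ms inference per image, one plausible use of the parameter estimates is a simple proportional correction applied between pulses. The sketch below is a hypothetical illustration of that idea: `predict_parameters` stands in for the trained CNN (here a stub so the code runs), and the function names, target fluence and gain are all assumptions, not the authors' implementation.

```python
def predict_parameters(image):
    """Stand-in for CNN inference: returns an estimated
    (fluence in J/cm^2, pulse count, material) from a camera image."""
    return 2.1, 100, "silicon"  # fixed stub values for illustration

def corrected_power(power, image, target_fluence=2.0, gain=0.5):
    """Proportional correction of laser power from the CNN's fluence
    estimate: reduce power when measured fluence exceeds the target."""
    fluence, _pulses, _material = predict_parameters(image)
    error = (fluence - target_fluence) / target_fluence
    return power * (1.0 - gain * error)

power = 1.0
power = corrected_power(power, image=None)
# stub fluence 2.1 is 5% above target, so power drops by gain * 5% = 2.5%
```

In a real loop this would run once per camera frame, so the correction keeps pace with the millisecond-scale inference time quoted above.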
Predictive visualisation for laser-processing of materials can be challenging, as the nonlinear interaction of light and matter is complicated to model, particularly when scaling up from atom-level to bulk material. Here, we demonstrate a predictive visualisation approach that uses a pair of neural networks (NNs) that are trained using data obtained from laser machining using a digital micromirror device (DMD) acting as an intensity spatial light modulator. The DMD enables laser machining using many beam shapes, and hence can be used to produce significant amounts of training data for NNs. Here, the training data corresponds to hundreds of DMD patterns (i.e. beam shapes) and their associated images and 3D depth profiles. The trained NNs are able to generate a surface image and 3D depth profile, showing what the ablated surface would look like, for a wide range of ablating beam shapes. The predicted visualisations are remarkably effective and almost indistinguishable from real experimental data in appearance.
Such a NN approach has considerable advantages over modelling techniques that start from first principles (i.e. light-atom interaction), since no understanding of the underlying physical processes is needed: the NN instead learns directly from observation of labelled experimental data. We will show that the NN learns key optical properties, such as diffraction, the nonlinear interaction of light and matter, and the statistical distribution of debris and burring of the material, entirely without human assistance. This offers a new paradigm in predictive capability that could be applied to almost any manufacturing process.
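The mapping described above, from a binary DMD pattern to a predicted surface image, is an image-to-image translation task. A minimal convolutional encoder-decoder of the kind often used for such tasks is sketched below; the architecture, layer sizes and input resolution are illustrative assumptions rather than the authors' networks, and a second network of the same shape would output the 3D depth profile.

```python
import torch
import torch.nn as nn

class Pattern2Surface(nn.Module):
    """Illustrative encoder-decoder: downsample the DMD pattern to a
    compact representation, then upsample to a predicted surface image."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, dmd_pattern):
        return self.decoder(self.encoder(dmd_pattern))

net = Pattern2Surface()
# Binary-valued DMD pattern in, greyscale surface prediction out
surface = net(torch.rand(1, 1, 64, 64))
```

Trained on pairs of DMD patterns and experimental images, such a network can only reproduce effects present in the training data, which is consistent with the observation above that diffraction, debris and burring are learned purely from labelled examples.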