Ear infections are exceedingly common, yet challenging to diagnose correctly. The diagnosis requires a clinician (such as a physician, nurse practitioner, or physician assistant) to use an otoscope and inspect the eardrum (i.e., tympanic membrane). Once visualized, the clinician must rely on clinical judgment to determine the presence of changes typically associated with an ear infection, such as eardrum color and/or position. Research has, however, consistently demonstrated systematic failures among clinicians in correctly diagnosing and managing ear infections. With recent advances in pattern recognition techniques, including deep learning, there has been increasing interest in automating the diagnosis of ear infections. While some previous studies have successfully applied machine learning to classify eardrum photos, these methods were developed and evaluated in non-real-world settings and used single, crisp, still-shot photos of the eardrum that would be labor-intensive to acquire from uncooperative pediatric patients. In contrast to previous work, we present a deep anomaly-detection-based method that flags otoscopy video sequences as normal or abnormal, achieving a promising first step towards automated analysis of otoscopy video for in-clinic or at-home screening.
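The anomaly-detection idea above can be illustrated with a minimal sketch: score each video frame against a model of "normal" eardrum appearance and flag the whole sequence when enough frames look anomalous. The feature representation, score threshold, and voting fraction below are illustrative assumptions, not the authors' actual pipeline (which uses a deep model); a simple z-score distance stands in for a deep model's anomaly score.

```python
import numpy as np

def frame_scores(frames, normal_mean, normal_std):
    """Per-frame anomaly score: z-score distance of each frame's feature
    vector from the 'normal' training distribution (an illustrative
    stand-in for a deep model's reconstruction error)."""
    z = (frames - normal_mean) / normal_std
    return np.sqrt((z ** 2).mean(axis=1))

def flag_sequence(frames, normal_mean, normal_std,
                  score_thresh=3.0, vote_frac=0.3):
    """Flag an otoscopy sequence as abnormal when more than `vote_frac`
    of its frames exceed the per-frame score threshold."""
    scores = frame_scores(frames, normal_mean, normal_std)
    return bool((scores > score_thresh).mean() > vote_frac)

# Toy demo: 'normal' frames drawn from the training distribution,
# 'abnormal' frames shifted well away from it.
rng = np.random.default_rng(0)
mean, std = np.zeros(16), np.ones(16)
normal_seq = rng.normal(0.0, 1.0, size=(40, 16))
abnormal_seq = rng.normal(5.0, 1.0, size=(40, 16))
print(flag_sequence(normal_seq, mean, std))    # expect False
print(flag_sequence(abnormal_seq, mean, std))  # expect True
```

Sequence-level voting is what distinguishes this from still-photo classification: no single crisp frame is required, only enough informative frames across the video.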
Optical countermeasures are widely used nowadays, and a laser is often employed as the optical source. Unfortunately, such a laser beam can become severely distorted by optical turbulence when propagating through the atmosphere, resulting in effects such as beam spreading, beam wander, irradiance fluctuations, and loss of spatial coherence. These effects can be (partially) overcome using knowledge of the atmospheric conditions, as well as techniques to correct for amplitude and phase distortions. Our research focuses on the characterization of the atmospheric conditions using adaptive optics and an in-house-developed multi-aperture transmissometer, as well as on a plenoptic sensor that uses phase-distortion algorithms to compensate for effects caused by (strong) turbulence conditions.
Laser beams used in many open-space applications, such as defense, optical communication, and remote sensing, are subject to turbulence distortions that disrupt the intended beam profiles at the end of propagation. To guide the transmitted beam properly through an open-space channel, adaptive optics (AO) is often used to implement beam corrections based on the reciprocity principle. Specifically, if the wave distortion from a remote spot can be determined and field-conjugated at the transmitter, the transmitted light will focus onto the same spot at the receiver. Many experiments have demonstrated this principle using a cooperative laser guide star on the target plane. However, finding or creating a well-defined guide star is impractical in real-world applications. The next best beacon choice is temporal glint signals, which are relatively refined in geometry and brighter than ambient target illumination. To date, the best approach to extracting information from arbitrary glint signals to instruct AO correction remains unknown. We propose the plenoptic sensor technique to extract phase distortion information from glint signals with minimal loss of information. In addition, as the addressed turbulence channel is typically a lateral path near the ground, we also validate the plenoptic sensor's ability to reveal the anisotropic state of the turbulence.
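The reciprocity argument can be sketched numerically: if the phase distortion accumulated along the path is measured and its conjugate is pre-applied to the outgoing field, the turbulence phase cancels and the beam arrives with a flat, focusable wavefront. The single thin phase screen below is a simplifying assumption standing in for the full turbulence channel; the on-axis focused intensity (a Strehl-like metric) is estimated from the mean complex field.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

# Random phase screen standing in for the accumulated turbulence
# distortion along the path (single thin-screen assumption).
phi = rng.uniform(-np.pi, np.pi, size=(n, n))

# Uncorrected beam: a flat wavefront picks up the turbulence phase.
uncorrected = np.exp(1j * phi)

# Pre-conjugated beam: the transmitter applies exp(-i*phi) before
# launch, so the same screen restores a flat wavefront at the far end.
corrected = np.exp(-1j * phi) * np.exp(1j * phi)

# On-axis focused intensity ~ |mean field|^2: near 1 for the corrected
# beam, near 1/N (speckle level) for the uncorrected one.
strehl_corrected = abs(corrected.mean()) ** 2
strehl_uncorrected = abs(uncorrected.mean()) ** 2
print(strehl_corrected, strehl_uncorrected)
```

In practice the hard part is obtaining phi without a cooperative guide star, which is exactly the role the plenoptic sensor plays for glint signals.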
Both the plenoptic sensor and the light field camera can be used to correct images distorted by turbulence. The underlying principle involves using the redundant light field information collected by these devices to discriminate and suppress random distortion in the images. The light field camera records the multiple light rays that converge to each spatial point on the image plane, whereas the plenoptic sensor records multiple views per sub-angular space. Correspondingly, the image filters and synthesis methods used in the two approaches are significantly different. To the best of the authors' knowledge, we are the first to build a hybrid system to compare the two devices' effectiveness for imaging through turbulence. We show through analysis and case-by-case experimental studies that the turbulence scenarios suited to a plenoptic sensor and to a light field camera are significantly different. Based on our studies, we summarize rules of thumb for the judicious use of light field technology in imaging through turbulence.
We present an experimental evaluation of a multi-aperture laser transmissometer system that profiles long-term laser beam statistics over long paths. While the system was originally designed to measure the aerosol extinction rate, the beam profiling capabilities of the transmissometer also allow experimental observations of Gaussian beam statistics in weak and strong turbulence. Additionally, measurement of the long-term beam spread at the receiver allows the system to estimate a path-averaged Cn2, including in strong-turbulence regimes where scintillometers suffer saturation effects. We also present a phase-frequency correlation technique for synchronizing with transmitter ON/OFF modulation in the presence of ambient background light. In application, our ruggedized, weather-resistant laser transmissometer system offers significant advantages for the measurement and study of aerosol concentration, absorption, scattering, and turbulence properties over multi-kilometer paths, which are crucial for directed energy systems, ground-level free-space optical communication systems, environmental monitoring, and weather forecasting.
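As an illustration of the beam-spread route to Cn2, one widely used weak-turbulence relation for a Gaussian beam writes the long-term spot size as W_LT^2 = W^2 (1 + 1.33 sigma_R^2 Lambda^(5/6)), with Rytov variance sigma_R^2 = 1.23 Cn2 k^(7/6) L^(11/6) and receiver beam parameter Lambda = 2L/(k W^2). Inverting this for Cn2 from a measured W_LT is sketched below; the relation and all numbers are illustrative assumptions, and the system's actual estimator (valid into strong turbulence) may differ.

```python
import numpy as np

def cn2_from_beam_spread(w_lt, w_diff, wavelength, path_len):
    """Invert the weak-turbulence long-term beam spread relation
    W_LT^2 = W^2 (1 + 1.33 * sigma_R^2 * Lambda^(5/6)) for Cn2, where
    sigma_R^2 = 1.23 * Cn2 * k^(7/6) * L^(11/6) is the Rytov variance
    and Lambda = 2L / (k W^2) the receiver beam parameter.

    w_lt   : measured long-term beam radius at the receiver [m]
    w_diff : diffraction-limited (vacuum) beam radius there [m]
    wavelength, path_len in meters.  Returns Cn2 in m^(-2/3).
    """
    k = 2 * np.pi / wavelength
    lam = 2 * path_len / (k * w_diff ** 2)       # beam parameter Lambda
    spread_ratio = (w_lt / w_diff) ** 2 - 1.0    # turbulence-induced term
    sigma_r2 = spread_ratio / (1.33 * lam ** (5 / 6))
    return sigma_r2 / (1.23 * k ** (7 / 6) * path_len ** (11 / 6))

# Illustrative round trip: pick a Cn2, generate the implied W_LT,
# then recover Cn2 from it.
wavelength, L, w = 1.55e-6, 2000.0, 0.05
k = 2 * np.pi / wavelength
cn2_true = 1e-14
sigma_r2 = 1.23 * cn2_true * k ** (7 / 6) * L ** (11 / 6)
lam = 2 * L / (k * w ** 2)
w_lt = w * np.sqrt(1 + 1.33 * sigma_r2 * lam ** (5 / 6))
print(cn2_from_beam_spread(w_lt, w, wavelength, L))  # recovers ~1e-14
```

Because the estimate is driven by long-term (time-averaged) spot size rather than intensity fluctuations, it does not saturate the way scintillation-based Cn2 estimates do in strong turbulence.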