To function at the same operational tempo as human teammates on the battlefield in a robust and resilient manner, autonomous systems must assess and manage risk as it pertains to vehicle navigation. Risk comes in multiple forms, associated with both specific and uncertain terrains, environmental conditions, and nearby actors. In this work, we present a risk-aware path planning method that handles the first form, incorporating perception uncertainty over terrain types to trade off between exploration and exploitation behaviors. The uncertainty from machine-learned terrain segmentation models is used to generate a layered terrain map that associates every grid cell with its label uncertainty over the semantic classes. The risk term increases when differently traversable semantic classes (e.g., tree and grass) are associated with the same cell. We show that adjusting risk tolerances allows the planner to recognize and generate paths through materials like tall grass that have historically been ruled out when considering geometry alone. A risk-aware planner can also trigger an exploratory behavior that gathers more information to reduce uncertainty over terrain categorizations. Most existing methods for incorporating risk simply avoid regions of uncertainty, whereas here the vehicle can determine whether the risk is too high after new observation and investigation. This also allows the autonomous system to decide to ask a human teammate for help to reduce uncertainty and make progress toward the goal. We demonstrate the approach on a ground robot autonomously navigating a wooded environment, both in simulation and in the real world.
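The core idea above, that label uncertainty between differently traversable classes raises risk, can be sketched as a per-cell risk term. This is a minimal illustration, not the paper's actual formulation: the class list, traversability costs, and the expected-cost-plus-variance form are all assumptions introduced here.

```python
import numpy as np

# Hypothetical traversability costs per semantic class (0 = free, 1 = lethal);
# class order: [grass, tree, dirt]. Values are illustrative, not from the paper.
TRAVERSABILITY = np.array([0.2, 1.0, 0.1])

def cell_risk(class_probs, traversability=TRAVERSABILITY):
    """Risk for one grid cell given per-class label probabilities.

    Expected traversability cost plus a spread term that grows when
    probability mass is split across classes with very different costs
    (e.g. tree vs. grass), so ambiguous cells score riskier than
    confidently labeled ones.
    """
    class_probs = np.asarray(class_probs, dtype=float)
    expected = float(class_probs @ traversability)
    # Variance of the traversability cost under the label distribution.
    spread = float(class_probs @ (traversability - expected) ** 2)
    return expected + spread

# A cell confidently labeled grass scores low risk, while a cell split
# evenly between grass and tree scores higher than either class alone.
confident = cell_risk([0.95, 0.03, 0.02])
ambiguous = cell_risk([0.5, 0.5, 0.0])
```

Raising or lowering a threshold on this quantity is one way a planner could trade off exploitation (drive only through low-risk cells) against exploration (enter ambiguous cells to observe them and collapse the uncertainty).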
Organoids are multicellular structures grown in the lab that resemble tissues or organs of the body. We recently generated human kidney organoids compatible with high throughput screening for developmental and disease phenotypes. Accurately segmenting large-scale image collections of organoids remains a challenge. We investigated automated segmentation of these structures using both conventional image processing algorithms and two different deep convolutional neural network architectures. Our dataset consisted of multi-channel images of organoids in 384-well plates, labeling distal tubules, proximal tubules, and podocytes as distinct segments. These images were used either for training and validation, or for testing. Each image was initially subjected to automated segmentation using a customized CellProfiler workflow. Separately, we performed semantic organoid segmentation using a Residual UNet (ResUNet) architecture, and instance organoid segmentation using a Mask R-CNN (MRCNN) architecture. For the latter, we compared model performance after initializing network weights in three different ways: randomly, using ResNet-50 weights pre-trained on the COCO dataset, and using ResUNet weights pre-trained on organoid images. Using ResUNet or randomly initializing MRCNN backbone weights provided improved semantic segmentation compared to using precomputed weights from COCO or ResUNet, or to using the CellProfiler workflow. Conversely, using precomputed weights to initialize MRCNN provided better instance segmentation accuracy and sensitivity than random initialization. Our findings provide a basis for automated segmentation of organoids with convolutional neural networks, to aid in high throughput screening for compounds relevant to renal phenotypes.
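Comparing segmentation outputs from the CellProfiler workflow, ResUNet, and Mask R-CNN typically relies on an overlap metric such as intersection-over-union. The abstract does not state which metric was used, so the following is only a generic sketch of how such masks are scored:

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-union between two binary segmentation masks.

    Returns 1.0 for two empty masks (perfect agreement on "nothing here").
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# One of the two predicted pixels overlaps the single target pixel.
score = iou([[1, 1], [0, 0]], [[1, 0], [0, 0]])  # 1 intersecting / 2 in union
```

For instance segmentation, the same per-mask IoU is usually combined with a matching step between predicted and ground-truth organoid instances before computing accuracy and sensitivity.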
Simulations of flatbed scanners can shorten the development cycle of new designs, estimate image quality, and lower manufacturing costs. In this paper, we present a flatbed scanner simulation of a strobe RGB scanning method that investigates the effect of sensor height on color artifacts. The image chain model from the remote sensing community was adapted and tailored to fit flatbed scanning applications. This model allows the user to study the relationship between various internal elements of the scanner and the final image quality. Modeled parameters include sensor height, intensity and duration of the illuminant, scanning rate, sensor aperture, detector modulation transfer function (MTF), and motion blur created by the movement of the sensor during the scanning process. These variables are modeled mathematically using Fourier analysis, functions that model the physical components, convolutions, sampling theorems, and gamma corrections. Special targets used to validate the simulation include a single-frequency pattern, a radial chirp-like pattern, and a high-resolution scanned document. The simulation is demonstrated to model the scanning process effectively on both a theoretical and an experimental level.
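In an image chain model, the individual degradations (aperture, detector, motion blur) each contribute an MTF, and the system MTF is their product in the frequency domain. A minimal sketch of that composition, with parameter names and the rectangular-aperture/linear-smear forms assumed for illustration:

```python
import numpy as np

def aperture_mtf(f, width):
    """MTF of a rectangular detector aperture of the given width.

    np.sinc is the normalized sinc, sin(pi*x)/(pi*x), so this is
    |sinc(width * f)| with f in cycles per unit length.
    """
    return np.abs(np.sinc(width * f))

def motion_mtf(f, smear):
    """MTF of linear motion blur over a smear distance during exposure,
    also a sinc in the direction of sensor travel."""
    return np.abs(np.sinc(smear * f))

def system_mtf(f, width, smear):
    """Component MTFs multiply in the frequency domain, because the
    corresponding blurs convolve in the spatial domain."""
    return aperture_mtf(f, width) * motion_mtf(f, smear)
```

At zero frequency every component MTF is 1 (no loss of mean signal), and the aperture term has its first null at f = 1/width, which is why larger apertures or longer smears suppress fine detail sooner.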
Developing precise and low-cost spatial localization algorithms is an essential component of autonomous navigation systems. Data collection must be detailed enough to distinguish unique locations, yet coarse enough to enable real-time processing. Active proximity sensors such as sonar and rangefinders have been used for interior localization, but sonar sensors are generally coarse and rangefinders are generally expensive. Passive sensors such as video cameras are low cost and feature-rich, but suffer from high dimensionality and excessive bandwidth. This paper presents a novel approach to indoor localization using a low-cost video camera and a spherical mirror. Omnidirectionally captured images undergo normalization and unwarping to a canonical representation more suitable for processing. Training images, along with indoor maps, are fed into a semi-supervised linear extension of graph embedding, a manifold learning algorithm, to learn a low-dimensional surface that represents the interior of a building. The manifold surface descriptor is used as a semantic signature for particle filter localization. Test frames are conditioned, mapped to the low-dimensional surface, and then localized via an adaptive particle filter algorithm; the particles are temporally filtered to produce the final localization estimate. The proposed method, termed omnivision-based manifold particle filters, reduces convergence lag and increases overall efficiency.
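The particle-filter step described above, weighting hypothesized locations by how well their manifold descriptors match the test frame, can be sketched as a single measurement update. The Gaussian likelihood, the discrete map locations, and all names here are illustrative assumptions, not the paper's adaptive filter:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_particles(particles, weights, descriptor, map_descriptors, sigma=0.5):
    """One measurement update over discrete map locations.

    `particles` are indices into `map_descriptors`; each particle is
    reweighted by how close its location's manifold descriptor lies to
    the test frame's descriptor (Gaussian likelihood), then the set is
    resampled so particles concentrate where the likelihood is high.
    """
    dists = np.linalg.norm(map_descriptors[particles] - descriptor, axis=1)
    weights = weights * np.exp(-0.5 * (dists / sigma) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    # After resampling, weights reset to uniform.
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Temporal filtering of the resampled set across frames (e.g. averaging the surviving locations) would then yield the final localization estimate.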
Three-dimensional textural and volumetric image analysis holds great potential for understanding the image data produced by multi-photon microscopy. In this paper, an algorithm that quantitatively analyzes the texture and morphology of vasculature in engineered tissues is proposed. The investigated 3D artificial tissues consist of Human Umbilical Vein Endothelial Cells (HUVECs) embedded in collagen and exposed to two regimes of ultrasound standing wave fields under different pressure conditions. Textural features were evaluated using the normalized Gray-Level Co-occurrence Matrix (GLCM) combined with Gray-Level Run Length Matrix (GLRLM) analysis. To minimize error resulting from any possible volume rotation and to provide a comprehensive textural analysis, an averaged version of nine GLCM and GLRLM orientations is used. To evaluate volumetric features, an automatic threshold using the gray-level mean value is utilized. Results show that our analysis is able to differentiate among the exposed samples, owing to morphological changes induced by the standing wave fields. Furthermore, we demonstrate that providing more textural parameters than are currently reported in the literature enhances the quantitative understanding of the heterogeneity of artificial tissues.
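A GLCM counts how often pairs of gray levels co-occur at a fixed spatial offset; averaging the matrices over several offsets (nine 3D orientations in the work above) reduces sensitivity to volume rotation. A minimal 2D sketch with one offset and one derived feature, for illustration only:

```python
import numpy as np

def glcm(image, offset=(0, 1), levels=4, normalize=True):
    """Gray-level co-occurrence matrix for one (row, col) offset.

    Entry (i, j) counts how often gray level i has gray level j at the
    given offset; normalizing turns counts into joint probabilities.
    """
    image = np.asarray(image)
    dr, dc = offset
    m = np.zeros((levels, levels), dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r, c], image[r2, c2]] += 1
    if normalize:
        m /= m.sum()
    return m

def contrast(p):
    """GLCM contrast feature: sum over (i, j) of p(i, j) * (i - j)^2.

    High values mean adjacent voxels often differ strongly in gray level,
    i.e. a locally heterogeneous texture.
    """
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())
```

The 3D case extends the offset to (dz, dr, dc) triples, and GLRLM analysis complements this by counting runs of equal gray levels rather than pairs.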