This PDF file contains the front matter associated with SPIE Proceedings Volume 13037, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
An objective, quantitative method for assessing image interpretability for machine learning (ML) would be a valuable tool to support sensor design, collection management, and algorithm selection. The National Imagery Interpretability Rating Scale (NIIRS) has served as a useful standard for image analysis in support of intelligence, surveillance, and reconnaissance (ISR) missions. However, NIIRS focuses on human perception, and empirical studies have demonstrated a tenuous relationship, at best, between NIIRS and observed performance for ML algorithms. We propose a new approach that approximates the Bayes error for object classification to establish an upper bound on ML performance for a given set of imagery. The process starts with high-fidelity signatures from the object classes of interest. Degrading these signatures through an emulation of the sensor’s image chain produces signatures consistent with observed imagery from that sensor. Various distance metrics quantify the separability between specific object classes. We demonstrate a resampling technique to approximate the Bayes error, which is the theoretical limit for performance. This approach provides a quantitative measure that is independent of any specific machine learning model or methodology.
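The resampling idea described above can be sketched via the classical Cover-Hart result, which asymptotically brackets the Bayes error between half the nearest-neighbor error and the nearest-neighbor error itself. The snippet below is a minimal illustration under that assumption; the use of 1-NN over bootstrap-style resamples and all function names are ours, not the paper's.

```python
import numpy as np

def nn_error_resampled(X, y, n_rounds=50, sample_frac=0.8, seed=0):
    """Mean 1-NN leave-one-out error over bootstrap-style resamples."""
    rng = np.random.default_rng(seed)
    n = len(X)
    errors = []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=int(sample_frac * n), replace=False)
        Xs, ys = X[idx], y[idx]
        # pairwise squared distances; mask self-matches on the diagonal
        d = ((Xs[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d, np.inf)
        pred = ys[d.argmin(axis=1)]
        errors.append((pred != ys).mean())
    return float(np.mean(errors))

def bayes_error_bracket(X, y, **kw):
    """Cover-Hart bracket: e_nn / 2 <= Bayes error <= e_nn (asymptotically)."""
    e_nn = nn_error_resampled(X, y, **kw)
    return e_nn / 2, e_nn
```

For well-separated signatures the bracket collapses toward zero; for signatures degraded through a sensor image chain it widens, which is the kind of model-independent separability measure the abstract aims at.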
The Segment Anything Model (SAM) has demonstrated exceptional capabilities for object segmentation in various settings. In this work, we focus on the remote sensing domain and examine whether SAM’s performance can be improved for overhead imagery and geospatial data. Our evaluation indicates that directly applying the pretrained SAM model to aerial imagery does not yield satisfactory performance due to the domain gap between natural and aerial images. To bridge this gap, we utilize three parameter-efficient fine-tuning strategies and evaluate SAM’s performance across a set of diverse benchmarks. Our results show that while a vanilla SAM model lacks the intrinsic ability to generate accurate masks for smaller objects often found in overhead imagery, fine-tuning greatly improves performance and produces results comparable to current state-of-the-art techniques.
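Parameter-efficient fine-tuning of the kind evaluated above typically freezes the pretrained weights and learns only a small additive update. The sketch below shows one common variant, a LoRA-style low-rank adapter on a single linear layer; it is a generic illustration, not the authors' implementation, and the class name is ours.

```python
import numpy as np

class LoRALinear:
    """Frozen dense layer plus a trainable low-rank update: W + scale * B @ A."""
    def __init__(self, W, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                          # frozen pretrained weight, (out, in)
        out_dim, in_dim = W.shape
        self.A = rng.normal(0, 0.01, size=(rank, in_dim))   # trainable down-projection
        self.B = np.zeros((out_dim, rank))                  # trainable up-projection, zero init
        self.scale = alpha / rank

    def __call__(self, x):
        # with B == 0 this is exactly the pretrained layer
        return x @ (self.W + self.scale * self.B @ self.A).T
```

Because B is zero-initialized, the adapted layer reproduces the frozen model exactly at the start of fine-tuning; only A and B, a small fraction of the parameters, receive gradients.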
The Morroa aquifer plays a crucial role supplying drinking water to around one million residents across the Sucre, Córdoba, and Bolívar departments in Colombia. However, it faces severe water stress, ranking as the second most overexploited aquifer globally according to recent research using the Groundwater Footprint (GF) indicator. This situation threatens the sustainability of the aquifer and the well-being of the region's inhabitants who rely on it. To tackle this challenge, CARSUCRE, the entity responsible for aquifer management, has implemented various strategies. These include establishing a monitoring network with piezometers to track static and dynamic aquifer levels and conducting civil works to redirect rainfall runoff towards artificial recharge projects. Yet, the impact of vegetation variations in the recharge areas on aquifer levels remains uncertain due to many different factors, such as drought, heavy rainfall, and economic changes. This research introduces a methodology that leverages remote sensing data, particularly high-resolution (3 m) images from the Planet platform, combined with land cover analysis in piezometer influence areas. The primary aim is to assess how changes in vegetation affect both static and dynamic levels of the Morroa aquifer and then identify strategies to enhance land cover and improve water capture. The results show a significant correlation between NDVI, EVI, and LULC for the aquifer recharge zone, with an average of 0.858 across all applied tools. These findings provide valuable information for the management and preservation of this vital water resource in the region.
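The vegetation indices named in the results can be computed directly from surface-reflectance bands. The following minimal sketch assumes reflectance arrays for the blue, red, and NIR bands; the EVI coefficients shown are the standard MODIS values, which may differ from what the authors used.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the standard MODIS coefficients."""
    nir, red, blue = (a.astype(float) for a in (nir, red, blue))
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)
```

Per-pixel index values inside each piezometer's influence area could then be correlated against measured static and dynamic levels, in the spirit of the reported results.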
Many water bodies play a crucial role as receivers of several urban basins within the water system of a city. These urban basins often face challenges of pollution and reduced water flow, as in the case of the Juan Angola channel in Cartagena, Colombia. Current remote sensing strategies using Landsat and Sentinel-2 satellite imagery lack the spatial resolution necessary to adequately study such water bodies. In contrast, higher-spatial-resolution data, such as PlanetScope, provide better spatial and temporal detail. Nevertheless, PlanetScope does not offer the same spectral resolution as Landsat and Sentinel-2, requiring further processing to extract relevant information. In this paper, we used PlanetScope satellite images, processed through computer vision techniques, to analyze the evolution of the Juan Angola channel, Laguna del Cabrero, and Chambacú over time. Our approach involved extracting water areas from PlanetScope images and comparing them over different periods. Preliminary findings revealed noticeable variations in the area of the channel due to factors such as rainfall and possible illegal human encroachment, as well as an increase in contamination levels observed by means of the Normalized Difference Turbidity Index (NDTI). The PlanetScope images enabled a more detailed time-series analysis of different hydrographic areas, which is particularly pertinent for the Juan Angola channel.
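Two of the computations described, extracting water area and estimating turbidity, reduce to simple band arithmetic. The sketch below assumes PlanetScope green, red, and NIR reflectance arrays and the common NDWI/NDTI band ratios; the threshold and function names are illustrative, not the paper's.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI; positive values typically indicate open water."""
    green, nir = green.astype(float), nir.astype(float)
    return (green - nir) / np.clip(green + nir, 1e-6, None)

def ndti(red, green):
    """Normalized Difference Turbidity Index; higher values suggest more turbid water."""
    red, green = red.astype(float), green.astype(float)
    return (red - green) / np.clip(red + green, 1e-6, None)

def water_area_m2(green, nir, pixel_size_m=3.0, threshold=0.0):
    """Count NDWI-positive pixels and convert to area (PlanetScope pixels are ~3 m)."""
    mask = ndwi(green, nir) > threshold
    return float(mask.sum()) * pixel_size_m ** 2
```

Repeating the area computation per acquisition date yields the kind of time series of channel extent the abstract compares across periods.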
Maintaining safe transportation infrastructure networks such as roadways benefits from image surveillance. One promising technology is 3D LiDAR scanning, for which this paper presents the Slope LiDAR Embankment (SLidE) dataset. This paper highlights 3D LiDAR exploitation methods for expansive clay terrains across different seasons at a specific site along the Terry Road exit from I-20 westbound in Jackson, Mississippi. The analysis helps to understand the impact of seasonal moisture variation on slope stability, with a particular focus on the implications of climate change. Expansive clays, known for their shrink-swell behavior in response to moisture changes, pose significant geotechnical challenges, especially under the evolving conditions brought about by extreme weather. By capturing dynamic soil behavior through seasonal 3D scanning, the results provide insights into these soils' volumetric changes and deformation patterns at the monitored location, underscoring the critical influence of moisture dynamics on soil and slope stability. The proposed LiDAR 3D scan processing methodology is designed to reduce the computational load of analyzing large datasets. Moreover, this work shares the SLidE dataset. SLidE serves as a valuable resource for researchers and practitioners in the field, enhancing data processing efficiency and enabling real-time monitoring and rapid response to potential geotechnical failures. Results indicate a notable trend where the slope, subject to expansive clay dynamics, tends to revert to its normal structural state during the fall/winter months.
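A standard way to reduce the computational load of large terrestrial LiDAR scans, in the spirit of the processing methodology described, is voxel-grid downsampling, which replaces all points falling in a voxel with their centroid. This is a generic sketch, not the paper's pipeline:

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Reduce an (n, 3) LiDAR point cloud by averaging points within each voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # group points by voxel key and average each group's coordinates
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True,
                               return_counts=True)
    inv = inv.ravel()  # flatten for NumPy versions where inverse keeps dims
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inv, points)
    return sums / counts[:, None]
```

Downsampled clouds from different seasons can then be compared far more cheaply, which is what makes repeated seasonal change analysis of a slope tractable.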
The interpretability of an image indicates its potential information value. Historically, the National Imagery Interpretability Rating Scale (NIIRS) has been the standard for quantifying the interpretability of an image. With the growing reliance on machine learning (ML) for image analysis, NIIRS fails to capture the image quality attributes relevant to ML. Empirical studies have demonstrated that the relationship between NIIRS and ML performance is weak at best. In this study, we explore several image characteristics through the relationship between the training data and the test data using two standard ML frameworks: TensorFlow and Detectron2. We employed quantitative methods to measure color diversity, edge density, and image texture as ways to characterize the training and test sets. A series of experiments demonstrates the utility of these measures. The results suggest that each of the proposed methods quantifies an aspect of image difficulty for the ML method. Performance is generally better for test sets with lower levels of color diversity, edge density, and texture. In addition, the experiments suggest that training on higher-complexity imagery yields more resilient models. Future studies will assess the relationship among these image features and explore methods for extending them.
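Measures like those named above can be approximated with short NumPy routines. The sketch below shows plausible versions of two of them, gradient-based edge density and color-histogram entropy; the exact definitions the study used are not specified here, so treat these as illustrative.

```python
import numpy as np

def edge_density(gray, threshold=0.1):
    """Fraction of pixels whose finite-difference gradient magnitude exceeds a threshold."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    return float((mag > threshold).mean())

def color_diversity(rgb, bins=8):
    """Shannon entropy (bits) of a coarse 3D color histogram of an 8-bit RGB image."""
    q = (rgb.reshape(-1, 3) * (bins - 1) / 255).round().astype(int)
    codes = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    p = np.bincount(codes, minlength=bins ** 3).astype(float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())
```

Scoring both the training and test sets with such measures makes the train/test relationship quantitative: a flat, single-color chip scores zero on both, while cluttered scenes score high.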
4S – Silversword Software and Services, LLC (4S) is developing a novel, robust Free Space Optical (FSO) communications technology entitled Through the Air Link Optical Component (TALOC), applicable for use on military aircraft, both manned and unmanned. The basic concept involves three components: a tracking and acquisition system, a dedicated communications system, and a retroreflector. Operations involve scanning the horizon with a fan of light, searching for a retro-reflection from another TALOC unit. Once a reflection of the scanning array is located, tracking begins by continued scanning around the spatial location of the acquired reflection. As soon as tracking is established, communication begins, utilizing a separate, dedicated light source. Tracking continues throughout the communications period to ensure a solid connection and reduce dropouts. TALOC uses solid state scanning to facilitate tracking and acquisition. Figure 1 shows the next-generation, multi-wavelength TALOC concept.
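The acquisition-to-communication sequence described (scan, detect a retro-reflection, track, communicate over a dedicated source, and fall back to scanning on dropout) can be summarized as a small state machine. The sketch below is our reading of that sequence, not 4S's control logic; all names are hypothetical.

```python
from enum import Enum, auto

class LinkState(Enum):
    SCANNING = auto()        # sweeping the horizon with the fan of light
    TRACKING = auto()        # scanning around an acquired retro-reflection
    COMMUNICATING = auto()   # dedicated comms source active, tracking maintained

def step(state, reflection_detected, lock_held):
    """One control tick of the simplified acquisition/tracking loop."""
    if state is LinkState.SCANNING:
        return LinkState.TRACKING if reflection_detected else LinkState.SCANNING
    if not lock_held:                      # dropout: fall back to a full scan
        return LinkState.SCANNING
    if state is LinkState.TRACKING:
        return LinkState.COMMUNICATING     # lock established, start comms
    return LinkState.COMMUNICATING         # keep communicating while locked
```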
Image exploitation has evolved with a growing reliance on machine learning (ML). With increased reliance on ML come questions about performance, reliability, and trust. When an imagery analyst performs analysis without the assistance of an ML model, confidence depends on the quality of the underlying imagery and the expertise of the analyst. The National Imagery Interpretability Rating Scale (NIIRS) maps task complexity to image quality based on human cognition. Thus, NIIRS is an accepted standard for the information potential of an image for human analysis. Empirical analysis indicates that NIIRS is, at best, a partial indicator of expected performance for ML. This paper explores several factors that can affect the quality of ML-based results:
• The image quality as assessed in the ML context,
• The scene complexity of the image,
• The ML architecture, and
• The relationship between the training imagery and the mission imagery.
This paper will explore each of these factors and discuss their importance. We conclude with a set of recommendations for future research.
Coupling (semi-)autonomous drones with on-ground personnel can help improve metrics like mission safety, task effectiveness, and task completion time. However, in order for a drone to be an effective companion, it needs to be able to make intelligent decisions about what to do in a partially observable and dynamic environment in light of uncertainty and multiple competing criteria. One simple example is where and how to move. These kinds of continuous or waypoint-based decisions vary greatly from task to task, such as building a 3D map of an area, getting a minimum number of pixels on objects for automatic target detection, or exploring an area around a search team. While it is possible to implement each behavior from scratch, we discuss a flexible and extensible framework that allows the specification of dynamic, controlled, and explainable behaviors based on multi-criteria decision making (MCDM), an aggregation task over different UFOMap voxel map layers. While we currently employ specific layers such as drone position, time since a voxel was last observed, minimum distance to a voxel, and exploration fringe, additional future layers present the opportunity to create more complex and novel behaviors. Through testing with simulated flights, we have demonstrated that such an approach is feasible for the construction of useful semi-autonomous behaviors in the pursuit of human-robot teaming.
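The MCDM aggregation over voxel-map layers can be illustrated with a weighted sum of min-max-normalized layer scores, one common aggregation choice. The layer names and weights below are illustrative; UFOMap stores such values per voxel, but here plain arrays stand in.

```python
import numpy as np

def aggregate_layers(layers, weights):
    """Weighted-sum MCDM over min-max-normalized voxel-layer scores."""
    total = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, w in weights.items():
        layer = layers[name].astype(float)
        span = layer.max() - layer.min()
        # normalize each layer to [0, 1] so weights are comparable
        norm = (layer - layer.min()) / span if span > 0 else np.zeros_like(layer)
        total += w * norm
    return total

def best_voxel(layers, weights):
    """Index of the voxel maximizing the aggregated utility."""
    score = aggregate_layers(layers, weights)
    return np.unravel_index(np.argmax(score), score.shape)
```

Swapping the weight table changes the behavior, for example weighting time-since-observed highly produces an exploration behavior, which is the kind of flexibility the framework targets.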