KEYWORDS: 3D modeling, Point clouds, Data modeling, RGB color model, Cameras, Error analysis, Data acquisition, Color, Crop monitoring, Atmospheric modeling
Leaf area in agricultural crops is a crucial indicator for understanding growth conditions and assessing photosynthetic efficiency. Traditional methods for measuring leaf area are often destructive: leaves or entire plants are cut and measured manually. These methods not only reduce the yield of the destroyed crops but also require significant time and labor. To address these challenges, this study developed a method for monitoring plant growth using RGBD cameras, specifically the RealSense L515 and the iPhone 14 Pro, which can capture the three-dimensional structure of plants. The proposed method extracts plant portions from the acquired 3D point cloud data using color features and cluster classification. Subsequently, 3D models are created, and leaf area is estimated from the surface area of these models. The experiments were conducted using artificial plants. The results showed that the method using the RealSense L515 sensor achieved an average absolute error rate as low as 6.6%, while the iPhone 14 Pro had an average absolute error rate of 10.8%. Although the RealSense L515 demonstrated better accuracy, the iPhone 14 Pro proved to be reasonably usable even in outdoor environments.
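A minimal sketch of this pipeline, assuming an Open3D point cloud and a simple green-dominance rule as the color criterion (the specific thresholds, file names, and clustering parameters below are illustrative assumptions, not the authors' implementation):

```python
# Hypothetical sketch: extract plant points by color, cluster them, and estimate leaf area.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("plant_scan.ply")  # placeholder RGBD-derived point cloud

# Keep points whose green channel dominates (simple greenness criterion, an assumption).
colors = np.asarray(pcd.colors)
green_idx = np.where((colors[:, 1] > colors[:, 0]) & (colors[:, 1] > colors[:, 2]))[0]
plant = pcd.select_by_index(green_idx)

# Cluster to separate the plant from residual background points (DBSCAN), keep the largest cluster.
labels = np.array(plant.cluster_dbscan(eps=0.02, min_points=20))
largest = np.argmax(np.bincount(labels[labels >= 0]))
plant = plant.select_by_index(np.where(labels == largest)[0])

# Build a surface mesh and use its area as a leaf-area estimate.
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(plant, alpha=0.03)
print("Estimated leaf area [m^2]:", mesh.get_surface_area())
```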
KEYWORDS: Point clouds, Machine learning, Data modeling, Matrices, Airborne laser technology, Laser systems engineering, Laser scattering, Education and training, Data analysis, Time metrology
In recent years, water-related accidents caused by torrential rain have occurred frequently. Visually searching for persons requiring rescue from the coast or a riverbank is challenging, and because of water currents and underwater topography, searching from a boat is also difficult. This research aims to develop a safe, wide-area, and accurate target search method using point cloud data acquired from a drone. The authors focus on a LiDAR system called Airborne Laser Bathymetry (ALB), which is specialized for underwater observation. In particular, green laser ALB can obtain underwater topography data because it is equipped not only with the near-infrared laser used in conventional land surveying but also with a green visible laser for observation in relatively shallow water. The purpose of this study is to identify the water surface, underwater topography, and underwater floating objects such as algae from green laser ALB point cloud data using machine learning methods. For machine learning, we use PointNet++, a network effective for point cloud processing, and an SVM (Support Vector Machine), which is specialized for two-class classification. PointNet++ addresses the limitations of the earlier PointNet by sampling local features based on point cloud distance and density. In the proposed method, PointNet++ takes the three-dimensional coordinates X, Y, and Z as input and separates the points into three classes: water surface, underwater topography, and floating objects. Then, by inputting the Z-coordinate data and backscatter data (intensity) into the SVM, persons requiring rescue can be detected from among the floating objects.
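A minimal sketch of the second stage, assuming scikit-learn and placeholder training samples (the feature values, labels, and kernel settings below are illustrative assumptions, not the authors' data):

```python
# Hypothetical sketch: two-class SVM on height (Z) and backscatter intensity features,
# applied to points that PointNet++ has already labeled as "floating object".
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder training data: columns are [Z, intensity]; label 1 = person, 0 = other (e.g. algae).
X_train = np.array([[-0.3, 120.0], [-0.2, 115.0], [-1.5, 40.0], [-1.8, 35.0]])
y_train = np.array([1, 1, 0, 0])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

# Classify new floating-object points from the green laser ALB point cloud.
X_new = np.array([[-0.4, 118.0], [-1.6, 38.0]])
print(clf.predict(X_new))  # expected: [1, 0]
```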
Various natural disasters occur on the earth. In Japan, heavy rains and earthquakes have caused particularly severe damage, and we focus on the landslides they trigger. This study proposes a landslide detection method using synthetic aperture radar (SAR). SAR observes with microwaves, which are reflected according to the properties of materials on the earth's surface. In addition, both amplitude and phase information can be obtained from the microwaves and used for various analyses. SAR is often used for disaster detection, mostly by detecting changes caused by the disaster: examples include change detection from differences in reflection intensity, analysis of terrain deformation from phase differences, and material detection from polarization properties. These approaches require multiple SAR acquisitions. However, when a disaster occurs, the damaged area must be detected rapidly. For this reason, this study investigates a method for detecting the damaged area from a single SAR acquisition. As the research method, instance segmentation is conducted using YOLOv8. The SAR data used in the experiments were obtained for the Noto Peninsula earthquake, which occurred on January 1, 2024, in the Noto region of Ishikawa Prefecture and caused extensive damage. Images of landslide areas were extracted from the SAR data, annotated, and used to train a YOLOv8 instance segmentation model, whose performance was then evaluated on test data.
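A minimal sketch of such a training run with the Ultralytics YOLOv8 API, assuming annotated SAR image chips described by a placeholder dataset config (file names, model size, and hyperparameters are assumptions, not the authors' setup):

```python
# Hypothetical sketch: training a YOLOv8 instance segmentation model on annotated SAR image chips.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")  # pretrained segmentation weights as a starting point (assumption)

# "landslide_sar.yaml" is a placeholder dataset config pointing to the annotated SAR chips.
model.train(data="landslide_sar.yaml", epochs=100, imgsz=640)

metrics = model.val()                      # evaluate on the validation split
results = model.predict("test_scene.png")  # segment landslide regions in a held-out SAR image
```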
Ground deformation can be detected by processing SAR (Synthetic Aperture Radar) phase data acquired at different times. However, because SAR observes only the change in distance between the satellite and the ground surface, it is difficult to determine the direction of ground deformation, and SAR observation results differ from the actual amount of deformation, so on-site field observation is still required. This study aims to estimate ground deformation over a wide area using satellite SAR data, understand the disaster situation quickly, and reduce the risk of secondary damage associated with on-site field observation. In this paper, Interferometric SAR (InSAR) analysis is applied to C-band SAR data from the Sentinel-1 satellite to estimate ground deformation caused by the 2016 Kumamoto earthquake. A 2.5-dimensional analysis is conducted by combining the InSAR results of the ascending and descending orbits, and the direction of earthquake-induced ground deformation is visualized using displacement vectors. Furthermore, changes in land cover, classified according to surface vegetation and geology, are detected by machine-learning-based time-series analysis of optical images obtained from Sentinel-2. The results show that accurately understanding the damage situation over a wide area is very effective for estimating landslides and speeding up disaster response, such as evacuation.
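A minimal sketch of the 2.5-dimensional decomposition, under the common simplification that the north-south component is neglected and the SAR is right-looking (the incidence angles and LOS displacement values below are placeholders, not the paper's results):

```python
# Hypothetical sketch: combine ascending and descending LOS displacements into
# quasi east-west and vertical components (2.5-D analysis), ignoring north-south motion.
import numpy as np

theta_asc = np.deg2rad(39.0)  # incidence angles are placeholders
theta_dsc = np.deg2rad(36.0)

# Projection of (east, up) motion onto each line of sight; signs assume right-looking geometry.
A = np.array([[ np.sin(theta_asc), np.cos(theta_asc)],   # ascending track
              [-np.sin(theta_dsc), np.cos(theta_dsc)]])  # descending track

d_los = np.array([0.031, -0.012])  # placeholder LOS displacements [m] from InSAR

east, up = np.linalg.solve(A, d_los)
print(f"east-west: {east:.3f} m, vertical: {up:.3f} m")
```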
In recent years, natural disasters have caused serious damage; landslides triggered by earthquakes are particularly damaging. However, it is difficult to predict when and where natural disasters will occur, so this study addresses early detection of landslides. SAR (Synthetic Aperture Radar) is a remote sensing technology that uses microwaves and can observe day and night in all weather conditions. However, SAR data are grayscale images that are difficult to analyze without specialized knowledge. We therefore use machine learning to detect disaster-related changes that appear in SAR data. Two machine learning models for image-to-image translation, pix2pix and pix2pixHD, are considered. The objective of this study is to detect surface changes by generating pseudo-optical images from SAR data using machine learning. The two models were trained, and then test images and actual disaster data were input. Simple terrain, such as forest-only areas, was generated with high accuracy, but complex terrain was difficult to generate. For the actual disaster data, features resembling disaster-induced changes appeared in the converted images. However, we found it difficult to distinguish bare ground from grassland in the output images. In the future, the combination of data used for training needs to be reconsidered.
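A minimal sketch of the inference step, assuming a pix2pix-style generator has already been trained and exported to TorchScript (the model file, normalization, and image names are placeholders, not the authors' setup):

```python
# Hypothetical sketch: running a trained pix2pix-style generator to turn a SAR
# grayscale patch into a pseudo-optical RGB image.
import numpy as np
import torch
from PIL import Image

generator = torch.jit.load("sar2optical_generator.pt").eval()  # placeholder TorchScript export

# Load the SAR patch and scale it to [-1, 1], the usual pix2pix input range.
sar = np.array(Image.open("sar_patch.png").convert("L"), dtype=np.float32) / 127.5 - 1.0
x = torch.from_numpy(sar)[None, None]          # shape (1, 1, H, W)

with torch.no_grad():
    fake_optical = generator(x)                # shape (1, 3, H, W), values in [-1, 1]

rgb = ((fake_optical[0].permute(1, 2, 0).numpy() + 1.0) * 127.5).astype(np.uint8)
Image.fromarray(rgb).save("pseudo_optical.png")
```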