The Φsat-2 mission from the European Space Agency (ESA) is part of the Φsat mission lineup, which addresses innovative mission concepts that make use of advanced onboard processing, including artificial intelligence. Φsat-2 is based on a 6U CubeSat with a medium-high resolution VIS/NIR multispectral payload (eight bands plus NIR), combined with a hardware-accelerated unit capable of running several AI applications throughout the mission lifetime. As images are acquired, and after the application of dTDI processing, the raw data is transferred through SpaceWire to a payload pre-processor, where Level L1B products are produced. At this stage, radiometric and geometric processing are carried out in conjunction with georeferencing. Once the data is pre-processed, it is fed to the AI processor through the primary computer and made available to the onboard applications; orchestration is handled by a dedicated version of the NanoSat MO Framework. The following applications are currently baselined, and two additional ones will be selected via a dedicated AI Challenge by Q3 2023: SAT2MAP, for autonomous detection of streets during emergency scenarios; the Cloud Detection application and service, for data reduction; the Autonomous Vessel Awareness application, which detects and classifies vessel types; and the deep compression application (CAE), which reduces the amount of acquired data to improve mission effectiveness.
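The onboard data flow described above can be illustrated with a minimal, hypothetical sketch. The function names, the toy cloud-detection application, and the orchestration dictionary below are illustrative assumptions only; they do not reflect the actual Φsat-2 flight software or the NanoSat MO Framework API.

```python
import numpy as np

# Hypothetical stand-ins for the onboard stages described above: pre-processing
# to an L1B-like product, followed by dispatching the product to registered
# AI applications. All names and parameters are illustrative assumptions.

def produce_l1b(raw_cube: np.ndarray) -> np.ndarray:
    """Pre-processor stage: simplified per-band radiometric normalization."""
    cube = raw_cube.astype(np.float32)
    return (cube - cube.min(axis=(0, 1))) / (np.ptp(cube, axis=(0, 1)) + 1e-6)

def cloud_mask(l1b_cube: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Toy cloud-detection app: flag pixels that are bright across all bands."""
    return l1b_cube.mean(axis=2) > threshold

# Orchestration sketch: run every registered application on the L1B product.
applications = {"CloudDetection": cloud_mask}

raw = np.random.randint(0, 4096, size=(256, 256, 8), dtype=np.uint16)  # 8-band frame
l1b = produce_l1b(raw)
results = {name: app(l1b) for name, app in applications.items()}
print({name: mask.mean() for name, mask in results.items()})           # cloud fraction
```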
Hyperspectral image analysis has been attracting research attention in a variety of fields. Since the size of hyperspectral data cubes can easily reach gigabytes, their efficient transfer, manual delineation, and intrinsic heterogeneity have become serious obstacles to building ground-truth datasets in emerging scenarios. Therefore, applying supervised learners to hyperspectral classification and segmentation remains a difficult yet very important task in practice, as segmentation is a pivotal step in extracting useful information about the scanned area from such highly dimensional data. We tackle this problem using self-organizing maps and exploit an unsupervised algorithm for segmenting such imagery. The experimental study, performed over two benchmark hyperspectral scenes and backed up with a sensitivity analysis, showed that our technique is flexible enough to be applied for this purpose, delivers reliable segmentations, and offers fast operation.
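A minimal, self-contained sketch of self-organizing-map-based unsupervised segmentation of a hyperspectral cube is shown below. It is not the authors' implementation: the 1-D map topology, the hyperparameters, and the random stand-in cube are assumptions made for illustration.

```python
import numpy as np

def train_som(pixels, n_units=16, iters=5000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny 1-D SOM on pixel spectra (N x B array)."""
    rng = np.random.default_rng(seed)
    weights = pixels[rng.choice(len(pixels), n_units)].astype(np.float64)
    grid = np.arange(n_units)                                # 1-D map for simplicity
    for t in range(iters):
        x = pixels[rng.integers(len(pixels))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))    # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood function
        weights += lr * h[:, None] * (x - weights)
    return weights

def segment(cube, weights):
    """Assign every pixel of an H x W x B cube to its closest SOM unit."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    labels = np.argmin(((flat[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
    return labels.reshape(h, w)

cube = np.random.rand(64, 64, 103)           # stand-in for a hyperspectral scene
som = train_som(cube.reshape(-1, 103))
seg = segment(cube, som)                      # per-pixel cluster labels
```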
In this paper, a novel method of impulsive noise removal in color images is presented. The proposed filtering design is based on a new measure of pixel similarity, which takes into account the structure of the local neighborhood of the pixels being compared. Thus, the new distance measure can be regarded as an extension of the reachability distance used in the construction of the local outlier factor, widely used in big data analysis. Using the new similarity measure, an extension of the classic Vector Median Filter (VMF) has been developed. The new filter is extremely robust to outliers introduced by the impulsive noise, retains details, and has the unique ability to sharpen image edges. Using the structure of the developed filter, a new impulse detector has been constructed. The cumulated sum of the smallest reachability distances in the filtering window serves as a robust measure of pixel outlyingness. In this way, a pixel is treated as corrupted if a predefined threshold is exceeded and is replaced by the average of the pixels found to belong to the original, pristine image; otherwise, the processed pixel is retained. This structure is similar to the Fast Averaging Peer Group Filter; however, the incorporation of the reachability measure makes the technique more robust. The new filtering design can be applied in real-time scenarios, as its computational efficiency is comparable with that of the standard VMF, which is fast enough to be used for the enhancement of video sequences. The new filter operates in a 3×3 filtering window; however, information acquired from a larger window is also processed. The source of the additional information is the local neighborhood of the pixels, which is used to determine the novel reachability measure. The experiments performed on a large database of color images show that the new filter surpasses existing designs, especially in the case of highly polluted images. The robust reachability measure ensures that clusters of impulses are removed, as not only the pixels but also their neighborhoods are considered. The novel measure of dissimilarity can also be used in other tasks whose main goal is the detection of outliers.
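The switching scheme described above can be sketched as follows. This simplified variant uses plain RGB Euclidean distances rather than the paper's neighborhood-based reachability measure, and it replaces a detected impulse with the average of the neighbors closest to the window's channel-wise median as a crude stand-in for the set of pristine pixels; the threshold and alpha values are assumptions.

```python
import numpy as np

def denoise(img, alpha=4, threshold=120.0):
    """img: H x W x 3 uint8 color image; returns a filtered copy."""
    out = img.astype(np.float64).copy()
    pad = np.pad(img.astype(np.float64), ((1, 1), (1, 1), (0, 0)), mode="edge")
    h, w, _ = img.shape
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 3, x:x + 3].reshape(9, 3)
            center = win[4]
            neighbors = np.delete(win, 4, axis=0)
            d = np.linalg.norm(neighbors - center, axis=1)
            score = np.sort(d)[:alpha].sum()          # trimmed sum of smallest distances
            if score > threshold:                     # treat the pixel as an impulse
                med = np.median(neighbors, axis=0)    # robust reference color
                order = np.argsort(np.linalg.norm(neighbors - med, axis=1))
                out[y, x] = neighbors[order[:alpha]].mean(axis=0)
    return out.astype(np.uint8)

noisy = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
clean = denoise(noisy)
```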
Data augmentation is a popular technique which helps improve the generalization capabilities of deep neural networks, and it can be perceived as implicit regularization. It is widely adopted in scenarios where acquiring high-quality training data is time-consuming and costly, with hyperspectral satellite imaging (HSI) being a real-life example. In this paper, we investigate data augmentation policies (exploiting various techniques, including generative adversarial networks applied to synthesize artificial HSI data) which help improve the generalization of deep neural networks (and other supervised learners) by increasing the representativeness of training sets. Our experimental study performed over HSI benchmarks showed that hyperspectral data augmentation boosts the classification accuracy of the models without sacrificing their real-time inference speed.
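A lightweight sketch of simple spectral augmentation policies (noise injection, per-band scaling, and intra-class mixing) is given below. The GAN-based generation investigated in the paper is omitted here, and the parameter values, array shapes, and the random stand-in data are illustrative assumptions.

```python
import numpy as np

def augment(spectra, labels, noise_std=0.01, scale_range=0.05, seed=0):
    """spectra: N x B array of pixel spectra; returns augmented copies and labels."""
    rng = np.random.default_rng(seed)
    n, b = spectra.shape
    noisy = spectra + rng.normal(0.0, noise_std, size=(n, b))              # additive noise
    scaled = spectra * rng.uniform(1 - scale_range, 1 + scale_range, (n, b))
    mixed = np.empty_like(spectra)                                         # intra-class mixing
    for i in range(n):
        same = np.flatnonzero(labels == labels[i])
        j = rng.choice(same)
        lam = rng.uniform(0.3, 0.7)
        mixed[i] = lam * spectra[i] + (1 - lam) * spectra[j]
    return np.vstack([noisy, scaled, mixed]), np.tile(labels, 3)

X = np.random.rand(100, 103)                 # e.g., 103-band spectra (stand-in data)
y = np.random.randint(0, 9, size=100)
X_aug, y_aug = augment(X, y)
```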
Recent advancements in single-image super-resolution reconstruction (SRR) are attributed primarily to convolutional neural networks (CNNs), which effectively learn the relation between low- and high-resolution images and allow for obtaining high-quality reconstruction within seconds. SRR from multiple images benefits from information fusion, which improves the reconstruction outcome compared with example-based methods. On the other hand, multiple-image SRR is computationally more demanding, mainly due to the required subpixel registration of the input images. Here, we explore how to exploit CNNs in multiple-image SRR and demonstrate that a competitive reconstruction outcome can be obtained within seconds.
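The classical multiple-image baseline behind the discussion above can be sketched as subpixel registration followed by shift-and-add fusion on an upsampled grid; in the CNN-based variant, the fusion step would be replaced by a learned network. The 2× upscaling factor and the random stand-in frames below are assumptions.

```python
import numpy as np
from scipy.ndimage import shift, zoom
from skimage.registration import phase_cross_correlation

def fuse(lr_frames, scale=2):
    """Register low-resolution frames to the first one and fuse them on an HR grid."""
    ref = lr_frames[0]
    upsampled = []
    for frame in lr_frames:
        (dy, dx), _, _ = phase_cross_correlation(ref, frame, upsample_factor=20)
        aligned = shift(frame, (dy, dx), order=3)         # subpixel alignment
        upsampled.append(zoom(aligned, scale, order=3))   # place on the HR grid
    return np.mean(upsampled, axis=0)                      # simple shift-and-add fusion

frames = [np.random.rand(64, 64) for _ in range(5)]        # stand-in LR observations
sr = fuse(frames)                                           # 128 x 128 reconstruction
```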
In this paper, a novel approach to the enhancement of color images corrupted by impulsive noise is presented. The proposed algorithm first calculates, for every image pixel, the distances in the RGB color space to all elements belonging to the filtering window. Then, the sum of a specified number of the smallest distances, which serves as a measure of pixel similarity, is calculated. This generalization of the Rank-Ordered Absolute Difference (ROAD) is robust to outliers, as the high distances are not considered when calculating the measure. Next, for each pixel, a neighbor with the smallest ROAD value is searched for. If such a pixel is found, the filtering window is moved to the new position and again a neighbor with a ROAD value lower than the current one is sought. If one is encountered, the window is moved again; otherwise, the process is terminated and the starting pixel is replaced with the last pixel in the path formed by the iterative window-shifting procedure. The comparison with filters intended for the removal of noise in color images revealed excellent properties of the new enhancement technique. It is very fast, as the ROAD values can be pre-computed and the formation of the paths requires only comparisons of scalar values. The proposed technique can be applied for the restoration of color images distorted by impulsive noise and can also be used as a method of edge sharpening. Its low computational complexity also allows for its application in the processing of video sequences.
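The path-forming idea described above can be sketched as follows: ROAD values are pre-computed for every pixel, and each pixel then slides towards neighbors with lower ROAD until a local minimum is reached. This sketch operates on a grayscale image for brevity (the paper works in the RGB space), and the value of alpha is an assumption.

```python
import numpy as np

def road_map(img, alpha=4):
    """Pre-compute, for every pixel, the sum of the alpha smallest 3x3 neighbor distances."""
    pad = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    road = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 3, x:x + 3].ravel()
            d = np.abs(np.delete(win, 4) - win[4])
            road[y, x] = np.sort(d)[:alpha].sum()
    return road

def path_filter(img, alpha=4):
    road = road_map(img, alpha)
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            cy, cx = y, x
            while True:                                    # follow decreasing ROAD values
                ys = slice(max(cy - 1, 0), min(cy + 2, h))
                xs = slice(max(cx - 1, 0), min(cx + 2, w))
                block = road[ys, xs]
                ny, nx = np.unravel_index(np.argmin(block), block.shape)
                ny, nx = ny + ys.start, nx + xs.start
                if road[ny, nx] >= road[cy, cx]:           # local minimum reached
                    break
                cy, cx = ny, nx
            out[y, x] = img[cy, cx]                        # replace with the path endpoint
    return out

noisy = np.random.randint(0, 256, size=(64, 64)).astype(np.uint8)
restored = path_filter(noisy)
```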
Computed tomography (CT) imaging has become an indispensable modality exploited across a vast spectrum of clinical indications for diagnosis and follow-up, alongside various image-guided procedures, especially in patients with lung cancer. Accurate lung segmentation from whole-body CT scans is an initial, yet extremely important step in such procedures. Therefore, fast and robust (against low-quality data) segmentation techniques are being actively developed. In this paper, we propose a new real-time algorithm for segmenting lungs from whole-body CT scans. Our method benefits from both 2D and 3D analysis of CT images, coupled with several fast pruning strategies to remove false-positive tissue areas, including the trachea and bronchi. We also developed a new approach for separating the lungs which exploits spatial analysis of lung candidates. Our algorithms were implemented in Adaptive Vision Studio (AVS), a visual-programming software suite based on the data-flow paradigm. Although AVS is extensively used in machine-vision industrial applications (it is equipped with a range of highly optimized image-processing routines), we showed it can be easily utilized in general data analysis applications, including medical imaging. An experimental study performed on a benchmark dataset manually annotated by an experienced reader revealed that our algorithm is very fast (the average processing time of an entire CT series is less than 1.5 seconds), and it is competitive against the state of the art, delivering high-quality and consistent results (DICE was above 0.97 for both lungs; 0.96 for the left and 0.95 for the right lung after separation). The quantitative analysis was backed up with a thorough qualitative investigation (including 2D and 3D visualizations) and statistical tests.
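A crude sketch of the thresholding and connected-component stages that typically underlie such lung segmentation is given below. The HU threshold, the minimum component volume, and the pruning heuristic are illustrative assumptions, not the AVS-based algorithm published in the paper.

```python
import numpy as np
from scipy import ndimage

def segment_lungs(volume_hu, air_threshold=-320, min_voxels=50_000):
    """volume_hu: 3-D array of Hounsfield units; returns a boolean lung mask."""
    body_air = volume_hu < air_threshold                       # air-like voxels
    labels, _ = ndimage.label(body_air)
    # Discard background air connected to the volume border.
    border_labels = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel(),
        labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
    mask = np.isin(labels, border_labels, invert=True) & (labels > 0)
    # Keep only large components (lung candidates), pruning small airways and noise.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_voxels) + 1
    lungs = np.isin(labels, keep)
    return ndimage.binary_closing(lungs, iterations=2)         # smooth the boundary

ct = np.random.randint(-1000, 400, size=(64, 128, 128)).astype(np.int16)  # stand-in HU volume
lung_mask = segment_lungs(ct)
```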
Cortical surface extraction from magnetic resonance (MR) scans is a preliminary, yet crucial step in brain segmentation and analysis. Although there are many algorithms that address this problem, they often sacrifice execution speed for accuracy or depend on many parameters that have to be tuned manually by an experienced practitioner. Therefore, fast, accurate, and autonomous cortical surface extraction algorithms are in high demand, and they are being actively developed to enable clinicians to appropriately plan a treatment pathway and quantify response in patients with brain lesions based on precise image analysis. In this paper, we present an automated approach for cortical surface extraction from MR images based on 3D image morphology, connected component labeling, and edge detection. Our technique allows for real-time processing of MR scans: an average study of 102 slices, each 512×512 pixels, takes approximately 768 ms to process (about 7 ms per slice) with known parameters. To automate the process of tuning the algorithm parameters, we developed a genetic algorithm for this task. An experimental study performed using real-life MR brain images revealed that the proposed algorithm offers very high-quality cortical surface extraction, works in real time, and is competitive with the state of the art.
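A simplified sketch of a morphology, connected-component, and edge-detection chain of the kind described above is shown below. The intensity threshold and the opening size are exactly the sort of parameters the paper tunes with a genetic algorithm; the fixed values, the boundary-voxel edge definition, and the small random stand-in volume are assumptions.

```python
import numpy as np
from scipy import ndimage

def extract_cortical_surface(volume, threshold=0.35, opening_iters=2):
    """volume: 3-D MR intensity array; returns a boolean surface (boundary voxel) mask."""
    mask = volume > threshold * volume.max()               # rough tissue threshold
    mask = ndimage.binary_opening(mask, iterations=opening_iters)
    labels, n = ndimage.label(mask)                        # 3-D connected components
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    brain = labels == (np.argmax(sizes) + 1)               # keep the largest component
    brain = ndimage.binary_fill_holes(brain)
    return brain ^ ndimage.binary_erosion(brain)           # edge: boundary voxels

mr = np.random.rand(32, 128, 128).astype(np.float32)       # stand-in volume (smaller than a real study)
surface = extract_cortical_surface(mr)
```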