Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and
computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we
introduce a high-fidelity, scalable, adaptive, and runtime-efficient 2D crack/fracture simulation system by applying the
mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of
shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local
multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides an efficient neighbor-retrieval mechanism for mesh element splitting and merging, with the minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors, including impact angle, impact energy, and material properties, are taken into account to derive the criteria of crack initiation, propagation, and termination, leading to realistic fractal-like rubble/fragment formation. These parameters serve as variables of probabilistic models of crack/shard formation, making the proposed solution highly adaptive by allowing machine learning mechanisms to learn the optimal values of these variables from prior benchmark data generated by off-line physics-based simulation solutions that produce accurate fractures/shards, albeit at a far from real-time pace. Crack/fracture simulations have been conducted for various load impacts with different initial locations and impulse scales. The simulation results demonstrate that the proposed system can realistically and efficiently simulate 2D crack phenomena (such as window shattering and shard generation), with diverse potential in military and civil M&S applications such as training and mission planning.
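The recursive refinement described in this abstract can be illustrated with a short sketch. This is a simplified stand-in for the Peano-Cesaro scheme, not the authors' implementation: each right triangle is bisected at the midpoint of its hypotenuse, and repeating the split produces the fractal sweep and a binary decomposition tree of mesh elements.

```python
# Hypothetical sketch: recursive hypotenuse bisection of right triangles,
# the splitting rule underlying Peano-Cesaro style refinement. Each
# triangle is stored as (apex, a, b), where (a, b) is the hypotenuse.

def bisect(tri):
    """Split a triangle at the midpoint of its hypotenuse."""
    apex, a, b = tri
    mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    # The midpoint becomes the right-angle apex of both children.
    return (mid, apex, a), (mid, b, apex)

def refine(tri, depth):
    """Return the leaf triangles after `depth` uniform bisections."""
    if depth == 0:
        return [tri]
    left, right = bisect(tri)
    return refine(left, depth - 1) + refine(right, depth - 1)

# A unit square covered by two right triangles, refined three levels deep:
# each level doubles the element count, so 2 * 2**3 = 16 leaves.
square = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)),
          ((1.0, 1.0), (0.0, 1.0), (1.0, 0.0))]
leaves = [t for root in square for t in refine(root, 3)]
```

In the full system the two children of each split would be stored as tree nodes, giving the binary decomposition tree used for neighbor retrieval during element splitting and merging.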
Realistic building damage simulation has a significant impact on modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation
should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions
between flying rubble and its surrounding entities. However, none of the existing building damage simulation systems faithfully achieves the degree of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity, and runtime-efficient explosion simulation system to
realistically simulate destruction to buildings. In the proposed system, a family of novel blast models is applied
to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The
system also takes rubble pile formation into account, applies a generic and scalable multi-component-based object representation to describe scene entities, and uses a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and
scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results
demonstrate that the proposed system has the capability to realistically simulate rubble generation, rubble flyout
and their primary and secondary impacts on surrounding objects, including buildings, structures, vehicles,
and pedestrians in clusters of sequential and parallel damage events.
Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made in automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision remains far superior to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered
semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical
layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges
the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through
utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and
used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-
Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogeneous regions based on
low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region
dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local
multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient
neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been
conducted in the maritime image environment, where the segmented layered semantic objects include the basic-level objects (i.e., sky/land/water) and deeper-level objects on the sky/land/water surfaces. Experimental results demonstrate that the proposed algorithm can robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.
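The top-down decomposition stage described above can be illustrated with a deliberately simplified sketch. The paper splits Peano-Cesaro triangles driven by low-level cues; this stand-in splits axis-aligned quadrants whenever the intensity range of a region exceeds a threshold. The toy image and threshold are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of homogeneity-driven top-down decomposition:
# recursively split a region while its low-level cue (raw intensity
# range, here) says it is inhomogeneous.

def split(img, x, y, w, h, thresh):
    """Recursively split region (x, y, w, h) of a 2D intensity list
    until its intensity range <= thresh; return the leaf regions."""
    vals = [img[j][i] for j in range(y, y + h) for i in range(x, x + w)]
    if max(vals) - min(vals) <= thresh or (w == 1 and h == 1):
        return [(x, y, w, h)]
    hw, hh = max(w // 2, 1), max(h // 2, 1)
    regions = []
    for qx, qy, qw, qh in ((x, y, hw, hh),
                           (x + hw, y, w - hw, hh),
                           (x, y + hh, hw, h - hh),
                           (x + hw, y + hh, w - hw, h - hh)):
        if qw > 0 and qh > 0:
            regions += split(img, qx, qy, qw, qh, thresh)
    return regions

# 4x4 toy image: bright top half, dark bottom half. One split yields
# four homogeneous quadrants.
img = [[200, 200, 200, 200],
       [200, 200, 200, 200],
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
leaves = split(img, 0, 0, 4, 4, thresh=20)
```

The bottom-up stage of the algorithm would then compare adjacent leaves (via the decomposition tree's neighbor retrieval) using the ergodicity-based dissimilarity measure to build the layered map.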
To protect naval and commercial ships from attack by terrorists and pirates, it is important to have automatic surveillance
systems able to detect, identify, track, and alert the crew to small watercraft that might harbor malicious intent,
while ruling out non-threat entities. Radar systems have limitations on the minimum detectable range and lack high-level
classification power. In this paper, we present an innovative Automated Intelligent Video Surveillance System for Ships
(AIVS3) as a vision-based solution for ship security. Capitalizing on advanced computer vision algorithms and practical
machine learning methodologies, the developed AIVS3 is not only capable of efficiently and robustly detecting,
classifying, and tracking various maritime targets, but also able to fuse heterogeneous target information to interpret
scene activities, associate targets with levels of threat, and issue the corresponding alerts/recommendations to the man-in-
the-loop (MITL). AIVS3 has been tested in various maritime scenarios and has shown accurate and effective threat-detection performance. By reducing the reliance on human eyes to monitor cluttered scenes, AIVS3 will save manpower while increasing the accuracy of detecting and identifying asymmetric attacks for ship protection.
Content-based video retrieval from archived image/video is a very attractive capability of modern intelligent video
surveillance systems. This paper presents an innovative Semantic-Based Video Indexing and Retrieval (SBVIR) software
toolkit to help users of intelligent video surveillance to easily and rapidly search the content of large video archives to
conduct video-based forensic and image intelligence. Tailored for maritime environment, SBVIR is suited for
surveillance applications in harbors, on seashores, or around ships. The system comprises two major modules: a video analytics module that performs automatic target detection, tracking, classification, and activity recognition; and a retrieval module that performs data indexing and information retrieval. SBVIR is capable of robustly detecting and tracking objects from multiple cameras under conditions of dynamic water backgrounds and illumination changes. The system provides
hierarchical target classification among a large ontology of watercraft classes, and is capable of recognizing a variety of
boat activities. Video retrieval is achieved with both query-by-keyword and query-by-example. Users can query video
content using semantic concepts selected from a large dictionary of objects and activities, display the history linked to a
given target/activity, and search for anomalies. The user can interact with the system and provide feedback to tune it for improved accuracy and relevance of the retrieved data. SBVIR has been tested in real maritime surveillance scenarios and shown to generate highly semantic metadata tags that can be used during retrieval to provide users with relevant and accurate data in real time.
Image segmentation plays an important role in medical image analysis and visualization since it greatly aids clinical diagnosis. Although many algorithms have been proposed, it is still challenging to achieve automatic clinical segmentation, which requires both speed and robustness. Automatically segmenting the vertebral column in Magnetic Resonance Imaging (MRI) images is extremely challenging, as variations in soft-tissue contrast and radio-frequency (RF) inhomogeneities cause image intensity variations. Moreover, little work has been done in this area. We propose a generic,
slice-independent, learning-based method to automatically segment the vertebrae in spinal MRI images. A main feature of
our contributions is that the proposed method is able to segment multiple images of different slices simultaneously. Our
proposed method also has the potential to be imaging modality independent as it is not specific to a particular imaging
modality. The proposed method consists of two stages: candidate generation and verification. The candidate generation
stage is aimed at obtaining the segmentation through the energy minimization. In this stage, images are first partitioned
into a number of image regions. Then, a Support Vector Machine (SVM) is applied to those pre-partitioned image regions to obtain the class-conditional distributions, which are then fed into an energy function and optimized with the graph-cut
algorithm. The verification stage applies domain knowledge to verify the segmented candidates and reject unsuitable ones.
Experimental results show that the proposed method is very efficient and robust with respect to image slices.
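The candidate-generation stage (class-conditional scores fed into an energy function minimized by a graph cut) can be sketched in a self-contained way. Everything here is a hedged stand-in: the per-pixel foreground probabilities are hard-coded toy values in place of the SVM outputs, the smoothness weight is arbitrary, and a textbook Edmonds-Karp max-flow replaces the specialized graph-cut solvers used in practice.

```python
import math
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dict-of-dicts capacity graph.

    Mutates `cap` into the residual graph and returns the set of
    nodes reachable from `s` in it (the source side of the min cut)."""
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return set(parent)
        # Bottleneck along the augmenting path, then push flow.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        f = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= f
            cap[v][u] = cap[v].get(u, 0.0) + f

def segment(prob_fg, smoothness=0.5):
    """Binary labelling of a grid of foreground probabilities via min cut."""
    rows, cols = len(prob_fg), len(prob_fg[0])
    cap = {'s': {}, 't': {}}
    for r in range(rows):
        for c in range(cols):
            cap[(r, c)] = {}
    for r in range(rows):
        for c in range(cols):
            p = min(max(prob_fg[r][c], 1e-6), 1.0 - 1e-6)
            cap['s'][(r, c)] = -math.log(1.0 - p)  # paid if labelled background
            cap[(r, c)]['t'] = -math.log(p)        # paid if labelled foreground
            for dr, dc in ((0, 1), (1, 0)):        # 4-neighbour smoothness
                nr, nc = r + dr, c + dc
                if nr < rows and nc < cols:
                    cap[(r, c)][(nr, nc)] = smoothness
                    cap[(nr, nc)][(r, c)] = smoothness
    reachable = max_flow(cap, 's', 't')
    return [[1 if (r, c) in reachable else 0 for c in range(cols)]
            for r in range(rows)]

# Toy 3x3 grid of "SVM" foreground probabilities (illustrative values only).
probs = [[0.9, 0.9, 0.2],
         [0.9, 0.8, 0.1],
         [0.2, 0.1, 0.1]]
labels = segment(probs, smoothness=0.5)
```

Cutting a pixel's t-side edge corresponds to labelling it foreground and its s-side edge to labelling it background, so minimizing the cut minimizes the sum of negative-log-likelihood unaries plus the smoothness penalty along the label boundary.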
Image segmentation plays an important role in medical image analysis and visualization since it greatly aids clinical diagnosis. Although many algorithms have been proposed, it is challenging to achieve automatic clinical organ segmentation, which requires both speed and robustness. Automatically segmenting cardiac Magnetic Resonance Imaging (MRI) images is extremely challenging due to cardiac-motion artifacts and the characteristics of MRI. Moreover, many existing algorithms are specific to a particular view of cardiac MRI images. We propose a generic, view-independent, learning-based method to automatically segment cardiac MRI images,
which uses machine learning techniques and the geometric shape information. A main feature of our contribution
is the fact that the proposed algorithm can use a training set containing a mix of various views and is able to
successfully segment any given view. The proposed method consists of four stages. First, we partition the input
image into a number of image regions based on their intensity characteristics. Then, we calculate the pre-selected
feature descriptions for each generated region and use a trained classifier to learn the conditional probabilities
for every pixel based on the calculated features. In this paper, we use the Support Vector Machine (SVM) to
train our classifier. The learned conditional probabilities of every pixel are then fed into an energy function
to segment the input image. We optimize our energy function with graph cuts. Finally, domain knowledge is
applied to verify the segmentation. Experimental results show that this method is very efficient and robust with
respect to image views, slices and motion phases. The method also has the potential to be imaging modality
independent as the proposed algorithm is not specific to a particular imaging modality.
The automatic detection of Regions of Interest (ROI) is an active research area in the design of machine vision systems. Bottom-up image processing algorithms that predict human eye fixations or focus of attention and extract the relevant embedded information content of images have been widely applied in this area, especially in mobile robot navigation. Text that appears in images carries large quantities of useful information. Furthermore, many potential landmarks in mobile robot navigation contain text, such as nameplates and information signs, so scene text is an important feature to extract. In this paper, we propose a simple and fast text localization algorithm based on a zero-crossing operator, which can effectively detect text-based features in an indoor environment for mobile robot navigation. The method builds on the observation that high local spatial variance is one of the distinguishing characteristics of text: text in images has distinct intensity/color differences relative to its neighbourhood background and appears in clusters with uniform inter-character distance. Computing the spatial variance along a text line therefore yields a large value, while the spatial variance in the background is fairly low. Experimental results show that calculating the spatial variance to detect text-based landmarks in real time is an effective and efficient method for mobile robot navigation.
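The variance cue at the heart of this abstract is easy to sketch. The sliding-window size, threshold, and toy image below are illustrative assumptions (the paper's zero-crossing operator is not reproduced here); the point is simply that windows on a text line show much higher spatial variance than flat background.

```python
# Hypothetical sketch: sliding-window spatial variance along scan lines,
# flagging high-variance windows as candidate text regions.

def line_variances(row, win):
    """Variance of each length-`win` window along one scan line."""
    out = []
    for i in range(len(row) - win + 1):
        w = row[i:i + win]
        mean = sum(w) / win
        out.append(sum((v - mean) ** 2 for v in w) / win)
    return out

def text_candidates(img, win=4, thresh=500.0):
    """(row, window-start column) pairs whose variance exceeds the threshold."""
    hits = []
    for r, row in enumerate(img):
        for c, var in enumerate(line_variances(row, win)):
            if var > thresh:
                hits.append((r, c))
    return hits

# Row 0 is flat background; row 1 alternates dark/bright strokes,
# a crude stand-in for characters along a text line.
img = [[120, 121, 119, 120, 120, 121, 120, 119],
       [20, 240, 20, 240, 20, 240, 20, 240]]
hits = text_candidates(img)
```

Only the alternating row produces high-variance windows, so all candidates fall on the simulated text line; a real system would then group such windows into text-line boxes.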