With advances in wireless technology and miniature cameras, Wireless Capsule Endoscopy (WCE), which combines the two, enables a physician to examine a patient's digestive system without performing a surgical procedure. Although WCE is a technical breakthrough that allows physicians to visualize the entire small bowel noninvasively, viewing the resulting video takes 1-2 hours. This is very time consuming for the gastroenterologist: not only does it limit the wide application of this technology, but it also incurs considerable cost. It is therefore important to automate the process so that clinicians can focus only on events of interest. As an extension of our previous work on characterizing the motility of the digestive tract in WCE videos, we propose a new assessment system for energy-based event detection (EG-EBD) to classify events in WCE videos. The system first extracts general features of a WCE video that characterize intestinal contractions in the digestive organs. Event boundaries are then identified using a High Frequency Content (HFC) function, and the resulting segments are classified into WCE events using special features. In this system, we focus on three events: entering the duodenum, entering the cecum, and active bleeding. The assessment system can easily be extended to detect further WCE events, such as finer-grained organ segmentation and additional diseases, by introducing new special features. In addition, the system assigns every WCE image a score for each event; using these scores, it helps a specialist speed up the diagnosis process.
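The HFC-based boundary step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each frame has already been reduced to a magnitude spectrum (here a short toy list), uses the weighted-energy form of HFC common in onset detection, and flags a boundary wherever HFC jumps by more than a caller-chosen threshold. The function names are hypothetical.

```python
def hfc(spectrum):
    # High Frequency Content: weight each bin's energy by its bin index,
    # so energy arriving in high-frequency bins dominates the score.
    return sum(k * (mag ** 2) for k, mag in enumerate(spectrum))

def event_boundaries(spectra, threshold):
    # Flag frame n as a candidate event boundary when its HFC rises by
    # more than `threshold` over frame n-1 (a simple novelty detector).
    values = [hfc(s) for s in spectra]
    return [n for n in range(1, len(values))
            if values[n] - values[n - 1] > threshold]

# Toy run: frame 2 suddenly gains high-frequency energy.
spectra = [[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 2, 2]]
boundaries = event_boundaries(spectra, 5.0)
```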
In this paper, we present a new graph-based query language and its query processing for a Graph-based Video
Database Management System (GVDBMS). Although extensive research has proposed various query languages
for video databases, most of them are limited in handling general-purpose video queries: each
method handles only a specific data model, query type, or application. In order to develop a general-purpose video
query language, we first produce Spatio-Temporal Region Graph (STRG) for each video, which represents spatial
and temporal information of video objects. An STRG data model is generated from the STRG by exploiting
an object-oriented model. Based on the STRG data model, we propose a new graph-based query language named
STRG-QL, which supports various types of video queries. To process the proposed STRG-QL, we introduce a
rule-based query optimization that considers the characteristics of video data, i.e., the hierarchical correlations
among video segments. The results of our extensive experimental study show that the proposed STRG-QL is
promising in terms of accuracy and cost.
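A spatio-temporal region graph of the kind described above can be sketched in a few lines. The class and method names below (`STRG`, `add_spatial`, `query_spatial`) are illustrative inventions, not the paper's data model or STRG-QL syntax: nodes are (object, frame) pairs, edges carry either a spatial relation within a frame or a temporal "next" link across consecutive frames, and the query method mimics a simple STRG-QL selection.

```python
class STRG:
    """Minimal Spatio-Temporal Region Graph sketch (names hypothetical)."""

    def __init__(self):
        self.edges = []  # (src_node, dst_node, kind, label)

    def add_spatial(self, frame, a, b, relation):
        # Spatial edge between two objects within the same frame.
        self.edges.append(((a, frame), (b, frame), "spatial", relation))

    def add_temporal(self, obj, frame):
        # Temporal edge linking an object's node to its next-frame node.
        self.edges.append(((obj, frame), (obj, frame + 1), "temporal", "next"))

    def query_spatial(self, relation):
        # Analogue of a simple selection query: which object pairs
        # satisfy `relation`, and in which frame?
        return [(src[0], dst[0], src[1])
                for src, dst, kind, label in self.edges
                if kind == "spatial" and label == relation]

# Toy graph: a car is left of a person in frame 0, and persists to frame 1.
g = STRG()
g.add_spatial(0, "car", "person", "left_of")
g.add_temporal("car", 0)
matches = g.query_spatial("left_of")
```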
As a result of advances in skin imaging technology and the development of suitable image processing
techniques during the last decade, there has been a significant increase of interest in the computer-aided
diagnosis of melanoma. Automated border detection is one of the most important steps in this procedure,
since the accuracy of the subsequent steps crucially depends on it. In this paper, a fast and unsupervised
approach to border detection in dermoscopy images of pigmented skin lesions based on the Statistical
Region Merging algorithm is presented. The method is tested on a set of 90 dermoscopy images. The
border detection error is quantified by a metric in which a set of dermatologist-determined borders is
used as the ground-truth. The proposed method is compared to six state-of-the-art automated methods
(optimized histogram thresholding, orientation-sensitive fuzzy c-means, gradient vector flow snakes,
dermatologist-like tumor extraction algorithm, mean shift clustering, and the modified JSEG method)
and borders determined by a second dermatologist. The results demonstrate that the presented method
achieves both fast and accurate border detection in dermoscopy images.
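The Statistical Region Merging algorithm that the method builds on can be sketched as follows. This is a simplified rendition of Nock and Nielsen's SRM on a tiny grayscale image, not the paper's tuned implementation: 4-connected pixel pairs are visited in order of intensity difference, and two regions merge when the difference of their means falls within a statistical bound b(R); the parameter defaults and the exact form of b(R) here are simplifications.

```python
import math

def srm_segment(img, q=32.0, g=256.0, delta=None):
    """Simplified Statistical Region Merging on a 2-D grayscale image
    given as a list of lists; returns one region label per pixel."""
    h, w = len(img), len(img[0])
    n = h * w
    if delta is None:
        delta = 1.0 / (6.0 * n * n)
    parent = list(range(n))                 # union-find forest
    size = [1] * n                          # region sizes
    total = [float(img[i // w][i % w]) for i in range(n)]  # region sums

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def bound(region):
        # Statistical deviation bound b(R); simplified constant factors.
        return g * math.sqrt(math.log(2.0 / delta) / (2.0 * q * size[region]))

    # Visit 4-neighbour pixel pairs in order of intensity difference.
    pairs = []
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                pairs.append((abs(img[y][x] - img[y][x + 1]), i, i + 1))
            if y + 1 < h:
                pairs.append((abs(img[y][x] - img[y + 1][x]), i, i + w))
    pairs.sort()

    for _, a, b in pairs:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        mean_a, mean_b = total[ra] / size[ra], total[rb] / size[rb]
        # SRM merge predicate: means are statistically indistinguishable.
        if abs(mean_a - mean_b) <= math.sqrt(bound(ra) ** 2 + bound(rb) ** 2):
            parent[rb] = ra
            size[ra] += size[rb]
            total[ra] += total[rb]

    return [find(i) for i in range(n)]

# Toy image: a dark left half and a bright right half.
labels = srm_segment([[10, 10, 200, 200],
                      [10, 10, 200, 200]])
```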
Data mining techniques have been applied in video databases to identify various patterns or groups. Clustering analysis is used to find the patterns and groups of moving objects in video surveillance systems. Most existing clustering methods focus on finding an optimal overall partitioning. However, these approaches cannot provide meaningful descriptions of the clusters. They are also not well suited to moving-object databases, since video data have spatial and temporal characteristics as well as high-dimensional attributes. In this paper, we propose a model-based conceptual clustering (MCC) of moving objects in video surveillance, based on formal concept analysis. Our proposed MCC consists of three steps: 'model formation', 'model-based concept analysis', and 'concept graph generation'. The generated concept graph provides conceptual descriptions of moving objects. To assess the proposed approach, we conduct comprehensive experiments with artificial and real video surveillance data sets. The experimental results indicate that our MCC outperforms two other methods, namely generality-based and error-based conceptual clustering algorithms, in terms of concept quality.
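Formal concept analysis, the core of the MCC approach, can be illustrated with a brute-force sketch. Assuming a toy context that maps each moving object to a set of attributes (all names here are hypothetical), every subset of objects is closed into a formal concept, i.e. a maximal (extent, intent) pair. This exponential enumeration is only viable for small contexts and stands in for the paper's more refined concept-analysis step.

```python
from itertools import combinations

def derive_attrs(objs, context):
    # Attributes shared by every object in `objs`; for the empty set,
    # the intent is (by convention) the full attribute set.
    sets = [context[o] for o in objs]
    if not sets:
        return {a for attrs in context.values() for a in attrs}
    return set.intersection(*sets)

def derive_objs(attrs, context):
    # Objects that possess every attribute in `attrs`.
    return {o for o, a in context.items() if attrs <= a}

def concepts(context):
    # Close every subset of objects into a formal concept; deduplicate
    # by extent, since each extent determines the concept uniquely.
    seen, out = set(), []
    objs = list(context)
    for r in range(len(objs) + 1):
        for subset in combinations(objs, r):
            intent = derive_attrs(set(subset), context)
            extent = derive_objs(intent, context)
            if frozenset(extent) not in seen:
                seen.add(frozenset(extent))
                out.append((extent, intent))
    return out

# Toy context: three tracked objects with motion attributes.
context = {
    "obj1": {"fast", "north"},
    "obj2": {"fast", "south"},
    "obj3": {"slow", "north"},
}
lattice = concepts(context)
```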
Advances in video technology are being incorporated into today’s healthcare practice. For example, colonoscopy is an important screening tool for colorectal cancer. Colonoscopy allows for the inspection of the entire colon and provides the ability to perform a number of therapeutic operations during a single procedure. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. Other endoscopic procedures include upper gastrointestinal endoscopy, enteroscopy, bronchoscopy, cystoscopy, and laparoscopy. However, a significant number of out-of-focus frames are included in this type of video, since current endoscopes are equipped with a single, wide-angle lens that cannot be focused. The out-of-focus frames hold no useful information. To reduce the burden of further processing, such as computer-aided image analysis or examination by human experts, these frames need to be removed. We call an out-of-focus frame a non-informative frame and an in-focus frame an informative frame. We propose a new technique to classify video frames into these two classes using a combination of the Discrete Fourier Transform (DFT), texture analysis, and K-Means clustering. The proposed technique evaluates the frames without any reference image and does not need any predefined threshold value. Our experimental studies indicate that it achieves over 96% on four different performance metrics (i.e., precision, sensitivity, specificity, and accuracy).
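The feature-plus-clustering idea can be sketched without any predefined threshold, as the abstract emphasizes. This is an illustrative stand-in, not the paper's method: a 1-D DFT (computed naively for clarity) yields a high-frequency energy ratio per frame (blurred frames concentrate spectral energy near DC, giving a low ratio), and a plain two-cluster 1-D k-means separates informative from non-informative frames. The texture-analysis features are omitted, and all names are hypothetical.

```python
import cmath

def hf_energy_ratio(frame):
    # Fraction of spectral energy outside the lowest bins for a 1-D
    # intensity signal (e.g. one flattened frame row).  Naive O(n^2) DFT.
    n = len(frame)
    spec = [sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]
    energy = [abs(c) ** 2 for c in spec]
    low = sum(energy[:2]) + sum(energy[-1:])   # DC plus lowest frequencies
    return 1.0 - low / sum(energy)

def two_means(values, iters=20):
    # Plain 1-D 2-means standing in for the K-Means step: splits frames
    # into low- and high-feature clusters with no fixed threshold.
    c = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[abs(v - c[0]) > abs(v - c[1])].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return [int(abs(v - c[0]) > abs(v - c[1])) for v in values]

# Toy frames: an alternating (sharp) signal vs. a smooth (blurred) ramp.
sharp = [0, 10, 0, 10, 0, 10, 0, 10]
blurred = [0, 1, 2, 3, 4, 5, 6, 7]
ratios = [hf_energy_ratio(sharp), hf_energy_ratio(blurred)]
labels = two_means(ratios)
```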