Chronic wounds affect millions of people around the world. In particular, elderly persons in home care may develop decubitus ulcers, and mobile image acquisition and analysis can provide valuable assistance here. We have developed a system for wound capture using mobile devices such as smartphones. Photographs are acquired with the device's integrated camera and then calibrated and processed to determine the extent of the various tissue types present in a wound, i.e., necrotic, sloughy, and granular tissue. A random forest classifier based on various color and texture features is used for this task. These features comprise Sobel, Hessian, membrane projections, variance, mean, median, anisotropic diffusion, and bilateral as well as Kuwahara filters. The resulting probability output is thresholded using Otsu's method. The similarity between manual ground-truth labeling and the classification output is measured, and the results are compared to those achieved with a simple color-thresholding technique as well as with an SVM classifier. The fast random forest was found to produce better results. Its performance improves further when the method is applied only to the wound regions, with the background subtracted. Mean similarity is 0.89, 0.39, and 0.44 for necrotic, sloughy, and granular tissue, respectively. Although the training phase is time-consuming, the trained classifier runs fast enough to be deployed on the mobile device itself. This will allow comprehensive monitoring of skin lesions and wounds.
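The Otsu thresholding step mentioned above is standard enough to sketch. The following minimal NumPy implementation (function and variable names are illustrative, not from the paper) binarizes a classifier probability map by maximizing between-class variance:

```python
import numpy as np

def otsu_threshold(prob_map, bins=256):
    """Otsu's method on a probability map in [0, 1]: pick the threshold
    that maximizes the between-class variance of the two resulting classes."""
    hist, bin_edges = np.histogram(prob_map.ravel(), bins=bins, range=(0.0, 1.0))
    hist = hist.astype(float)
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2.0

    weight1 = np.cumsum(hist)                        # pixels at or below each bin
    weight2 = np.cumsum(hist[::-1])[::-1]            # pixels at or above each bin
    mean1 = np.cumsum(hist * bin_centers) / np.maximum(weight1, 1e-12)
    mean2 = np.cumsum((hist * bin_centers)[::-1])[::-1] / np.maximum(weight2, 1e-12)

    # between-class variance for a split between bin i and bin i+1
    variance = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2
    return bin_centers[:-1][np.argmax(variance)]

# binarize a synthetic bimodal probability map into tissue / non-tissue
rng = np.random.default_rng(0)
prob = np.concatenate([rng.normal(0.2, 0.05, 500),
                       rng.normal(0.8, 0.05, 500)]).clip(0.0, 1.0)
t = otsu_threshold(prob)
mask = prob > t
```

In the paper's pipeline the random forest emits one probability map per tissue class; this scalar version would be applied to each map independently.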
Overflows in urban drainage structures, or sewers, must be prevented in time to avoid their undesirable consequences. This requires an effective monitoring system able to measure volumetric flow in sewers. Existing state-of-the-art technologies are not robust against harsh sewer conditions and therefore incur high maintenance expenses. With the goal of fully automatic, robust, and non-contact volumetric flow measurement in sewers, we propose an original vision-based system for volumetric flow monitoring. In contrast to existing video-based monitoring systems, we introduce a second camera into the setup and exploit stereo vision, aiming at automatic calibration to real-world coordinates. Flow depth is estimated as the difference between the distance from the camera to the water surface and the distance from the camera to the canal's bottom. The camera-to-water distance is recovered automatically using large-scale stereo matching, while the distance to the canal's bottom is measured once upon installation. Surface velocity is calculated using cross-correlation template matching: individual natural particles in the flow are detected and tracked through a sequence of images recorded over a fixed time interval. With the water level and the surface velocity estimated, and knowing the geometry of the canal, we calculate the discharge. A preliminary evaluation showed an average depth error of 3 cm and an average surface-velocity error of 5 cm/s. Due to the experimental design, these errors are rough estimates: at each acquisition session the reference depth was measured only once, although the variation in volumetric flow and the gradual transitions between the automatically detected values indicate that the actual depth varied. We will address this issue in the next experimental session.
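The final discharge computation described above reduces to simple geometry once depth and surface velocity are known. The sketch below assumes a rectangular canal cross-section and a surface-to-mean velocity coefficient; both the geometry and the 0.85 coefficient are illustrative assumptions, not values from the abstract:

```python
def flow_depth(dist_to_bottom_m, dist_to_water_m):
    """Depth = fixed camera-to-bottom distance (measured at installation)
    minus the stereo-estimated camera-to-water distance."""
    return dist_to_bottom_m - dist_to_water_m

def discharge(depth_m, surface_velocity_mps, canal_width_m, velocity_coeff=0.85):
    """Volumetric flow Q = A * v_mean, approximating the mean velocity as a
    fixed fraction of the measured surface velocity (assumed coefficient)."""
    area = canal_width_m * depth_m  # wetted cross-section of a rectangular canal
    return area * velocity_coeff * surface_velocity_mps

d = flow_depth(2.00, 1.70)    # 0.30 m of water
q = discharge(d, 0.50, 1.20)  # discharge in m^3/s
```

For a real sewer, the rectangular `area` term would be replaced by the canal's actual cross-section profile evaluated at the measured depth.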
KEYWORDS: Imaging systems, 3D modeling, 3D image processing, RGB color model, Cameras, Wound healing, Tissues, Picture Archiving and Communication System, Image segmentation, Skin
The state-of-the-art method of wound assessment is a manual, imprecise, and time-consuming procedure. Performed by clinicians, it has limited reproducibility and accuracy, high time consumption, and high costs. Novel technologies such as laser scanning microscopy, multi-photon microscopy, optical coherence tomography, and hyperspectral imaging, as well as devices relying on structured-light sensors, make accurate wound assessment possible. However, such methods are limited by high costs and may lack portability and availability. In this paper, we present a low-cost wound assessment system and architecture for fast and accurate cutaneous wound assessment using inexpensive consumer smartphone devices. Computer vision techniques are applied either on the device or on a server to reconstruct wounds as dense 3D models, generated from images taken with the built-in single camera of a smartphone. The system architecture includes imaging (smartphone), processing (smartphone or PACS), and storage (PACS) devices. It supports tracking over time through alignment of 3D models, color correction using a reference color card placed in the scene, and automatic segmentation of wound regions. Using our system, we are able to detect and document quantitative characteristics of chronic wounds, including size, depth, volume, and rate of healing, as well as qualitative characteristics such as color, presence of necrosis, and type of involved tissue.
Digital cameras are nowadays widely used for photographic documentation in the medical sciences. However, the color reproduction of the same object varies under different illumination and lighting conditions. This variation in color representation is problematic when the images are used for segmentation and measurements based on color thresholds. In this paper, motivated by the photographic follow-up of chronic wounds, we assess the impact of (i) gamma correction, (ii) white balancing, (iii) background unification, and (iv) reference-card-based color correction. Automatic gamma correction, a nonlinear color transform, and white balancing are applied to support the calibration procedure. For unevenly illuminated images, non-uniform illumination correction is applied. In the last step, we apply colorimetric calibration using a reference color card of 24 patches with known colors: a lattice detection algorithm locates the card, and the least-squares algorithm is applied for affine color calibration in the RGB model. We tested the algorithm on images under seven different types of illumination, with and without flash, using three different off-the-shelf cameras including smartphones. We analyzed the spread of the color values of a selected color patch before and after calibration, and additionally assessed the individual contribution of each step of the calibration process. Using all steps, we achieved up to an 81% reduction in the standard deviation of color-patch values in the resulting images compared to the original images. This supports both manual and automatic quantitative wound assessment with off-the-shelf devices.
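The affine RGB calibration step named in the abstract can be sketched directly: stack each measured patch color with a constant 1 and solve a least-squares system mapping the 24 measured patches onto their known reference values. Function names and the synthetic distortion are illustrative, not from the paper:

```python
import numpy as np

def fit_affine_color_correction(measured_rgb, reference_rgb):
    """Least-squares affine map in RGB: corrected = rgb @ M + t, fitted on
    the 24 patches of the reference color card."""
    A = np.column_stack([measured_rgb, np.ones(len(measured_rgb))])  # (24, 4)
    X, *_ = np.linalg.lstsq(A, reference_rgb, rcond=None)            # (4, 3)
    return X  # rows 0..2 form the 3x3 linear part, row 3 is the offset

def apply_color_correction(X, rgb):
    return np.column_stack([rgb, np.ones(len(rgb))]) @ X

# synthetic check: distort known reference patches with a known affine map,
# then verify the fitted correction recovers them
rng = np.random.default_rng(1)
reference = rng.uniform(0, 255, size=(24, 3))
M = np.array([[0.9, 0.05, 0.0], [0.0, 1.1, 0.02], [0.03, 0.0, 0.95]])
measured = reference @ M + np.array([5.0, -3.0, 2.0])
X = fit_affine_color_correction(measured, reference)
corrected = apply_color_correction(X, measured)  # ≈ reference
```

With 24 patches and only 12 free parameters, the fit is overdetermined in practice, which gives some robustness to noise on individual patches.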
Quantitative light-induced fluorescence (QLF) is widely used to assess tooth damage due to decalcification. In digital photographs, decalcification appears as white spot lesions, i.e., white spots on the tooth surface. We propose a novel multimodal registration approach for matching digital photographs with QLF images of decalcified teeth. The registration is based on the idea of contour-to-pixel matching: the curve representing the shape of the tooth is extracted from the QLF image by contour segmentation using binarization and morphological processing. This curve is then aligned to the photograph with a non-rigid variational registration approach. Thus, the registration problem is formulated as a minimization problem whose objective function consists of a data term and a regularizer for the deformation. To construct the data term, the photograph is classified pointwise into tooth and non-tooth regions; the signed distance function of the tooth region then measures the mismatch between curve and photograph. As regularizer, a higher-order, linear-elastic prior is used. The resulting minimization problem is solved numerically using bilinear finite elements for the spatial discretization and the Gauss-Newton algorithm. The evaluation is based on 150 image pairs, with an average of 5 teeth captured from each of 32 subjects. All registrations were confirmed as correct by a dental expert. The contour-to-pixel method can be used directly in 3D for surface-to-voxel tasks.
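The full variational registration is beyond a short sketch, but the signed-distance data term can be illustrated in isolation. The code below assumes a rasterized binary tooth mask and contour sample points; it uses a brute-force distance computation adequate for small images (a distance transform would be used in practice), and all names are hypothetical:

```python
import numpy as np

def signed_distance(mask):
    """Signed distance to the region boundary: negative inside, positive outside."""
    # boundary = region pixels with at least one non-region 4-neighbour
    pad = np.pad(mask, 1, constant_values=False)
    interior = (pad[1:-1, 2:] & pad[1:-1, :-2] &
                pad[2:, 1:-1] & pad[:-2, 1:-1])
    by, bx = np.nonzero(mask & ~interior)
    ys, xs = np.indices(mask.shape)
    dist = np.sqrt((ys[..., None] - by) ** 2 +
                   (xs[..., None] - bx) ** 2).min(axis=-1)
    return np.where(mask, -dist, dist)

def data_term(sdf, contour_xy):
    """Sum of squared signed distances at (rounded) contour sample positions:
    zero when the curve lies exactly on the tooth boundary."""
    xs = np.round(contour_xy[:, 0]).astype(int)
    ys = np.round(contour_xy[:, 1]).astype(int)
    return float((sdf[ys, xs] ** 2).sum())

# toy example: square "tooth" region; on-boundary samples score 0,
# a misplaced sample is penalized by its squared distance to the boundary
mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True
sdf = signed_distance(mask)
on_boundary = data_term(sdf, np.array([[5.0, 5.0], [10.0, 5.0], [14.0, 14.0]]))
misplaced = data_term(sdf, np.array([[7.0, 7.0]]))
```

In the paper's formulation this mismatch is evaluated under the sought deformation and balanced against the linear-elastic regularizer, which is what the Gauss-Newton iterations minimize.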