The tongue is an important organ of the oral cavity. It provides information about oral and general physical conditions, and it is also one of the references for traditional Chinese medicine diagnosis. Tongue segmentation is a crucial stage in computer-assisted tongue diagnostic systems. Existing tongue image segmentation methods are trained on standard datasets and cannot generalize without a large amount of training data from different sources, making them difficult to adapt to mobile devices. A new method for automatically segmenting tongue images is proposed that combines traditional image processing with small-sample deep learning. In a complicated scene, a Yolo-V5 object detection module is employed to locate the tongue region. The color of this region is then adjusted toward a unified Gaussian distribution to minimize the negative impact of color variation on segmentation. The normalized region is fed into an enhanced Unet with RFB modules and an attention mechanism for precise segmentation, and residual noise is removed by a combined morphological operation. Compared with a single segmentation network, this technique improves segmentation performance on non-standard tongue photographs taken by mobile devices by 8% to 10%; the average DSC and IoU on the non-standard dataset are 95.62% and 91.70%, respectively. Owing to its improved multi-environment robustness, the proposed technique is expected to be applicable in both stationary and mobile computer-assisted tongue diagnostic equipment.
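The color-normalization step described above can be sketched as channel-wise statistics matching: each channel of the detected tongue region is shifted and scaled so its intensity distribution matches a shared target Gaussian. This is a minimal illustration, not the paper's implementation; the function name and the target mean/standard deviation values are assumptions chosen for the example.

```python
import numpy as np

def normalize_to_gaussian(region, target_mean=128.0, target_std=40.0):
    """Match each channel of an 8-bit RGB region to a unified Gaussian
    N(target_mean, target_std**2) by mean/std alignment, then clip back
    to the valid intensity range. Illustrative sketch only."""
    region = region.astype(np.float64)
    out = np.empty_like(region)
    for c in range(region.shape[-1]):
        ch = region[..., c]
        std = ch.std()
        if std < 1e-6:
            # Flat channel: nothing to rescale, just recenter it.
            out[..., c] = target_mean
        else:
            out[..., c] = (ch - ch.mean()) / std * target_std + target_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

Aligning every input to the same target distribution reduces the color variability between cameras and lighting conditions before the image reaches the segmentation network.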
Coronary stents improve blood flow by keeping narrowed vessels open, but small stent cells that overlay a side branch may cause restenosis and obstruct blood flow to the side branch. There is an increasing demand for precise measurement of the stent coverage of side branches for outcome evaluation and clinical research. Capturing micrometer-resolution images, intravascular optical coherence tomography (IVOCT) allows proper visualization of the stent struts, which can subsequently be used for coverage measurement. In this paper, a new approach to computing the stent coverage of side branches in IVOCT image sequences is presented. The amount of stent coverage of a side branch is determined by the ostial area of the stent cells that cover it. First, the stent struts and guide wires are detected to reconstruct the irregular stent surface, and the stent cell contours are generated to segment their coverage area on that surface. Next, the covered side branches are detected and their lumen contours are projected onto the stent surface to delineate the side branch areas. By assessing the common parts between the stent cell areas and the side branch areas, the stent cell coverage of side branches can be computed. An evaluation on a phantom data set showed that the average error of the stent coverage of side branches is 8.9% ± 7.0%. The utility of the approach for in-vivo data was also demonstrated on 12 clinical IVOCT image sequences.
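The core measurement, the common part between a stent cell area and the projected side-branch area, reduces to a set intersection on the (unrolled) stent surface. A minimal sketch, assuming both areas have already been rasterized as boolean masks on a common grid; the mask representation and the function name are illustrative assumptions, not the paper's method.

```python
import numpy as np

def coverage_fraction(cell_mask, branch_mask):
    """Fraction of the side-branch ostial area covered by a stent cell,
    computed on 2D boolean masks defined on the unrolled stent surface.
    Returns 0.0 when the branch mask is empty."""
    branch_area = branch_mask.sum()
    if branch_area == 0:
        return 0.0
    overlap = np.logical_and(cell_mask, branch_mask).sum()
    return float(overlap) / float(branch_area)
```

Rasterizing both contours onto the same grid turns the area computation into simple pixel counting, at a precision bounded by the grid resolution.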
Intravascular optical coherence tomography (IVOCT) provides very high-resolution cross-sectional image sequences of vessels and has been rapidly adopted for stent implantation and follow-up evaluation. Given the large number of stent struts in a single image sequence, only automated detection methods are feasible. In this paper, we present an automated stent strut detection technique that requires neither lumen nor vessel wall segmentation. To detect strut-pixel candidates, both global intensity histograms and local intensity profiles of the raw polar images are used. Gaussian smoothing is applied, followed by specified Prewitt compass filters, to detect the trailing shadow of each strut; the shadow edge positions guide the clustering of strut-pixel candidates. Finally, a 3D guide wire filter is applied to remove the guide wire from the detection results. For validation, two experts marked 6738 struts in 1021 frames from 10 IVOCT image sequences in a one-year follow-up study. The struts were labeled as malapposed, apposed, or covered, together with the image quality (high, medium, or low). The inter-observer agreement was 96%. The algorithm was validated for different combinations of strut status and image quality. Compared to the manual results, 93% of the struts were correctly detected by the new method, and the lowest accuracy for any combination was 88%, showing robustness across different situations. The presented method detects struts automatically regardless of strut status or image quality, and can be used for quantitative measurement, 3D reconstruction, and visualization of implanted stents.
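The shadow-edge step above relies on directional Prewitt filtering. As a sketch, one orientation of the Prewitt compass family (vertical edges, which in a polar image correspond to the lateral borders of a strut's trailing shadow) can be applied by direct convolution; the kernel choice and the naive loop are illustrative, not the paper's specified filter bank.

```python
import numpy as np

def prewitt_vertical_edges(img):
    """Convolve a 2D grayscale image with the vertical-edge Prewitt
    kernel (one compass orientation). Edge padding keeps the output
    the same size as the input. Illustrative sketch only."""
    k = np.array([[-1, 0, 1],
                  [-1, 0, 1],
                  [-1, 0, 1]], dtype=np.float64)
    h, w = img.shape
    pad = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            # Correlate the 3x3 neighborhood with the kernel.
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * k)
    return out
```

A full compass implementation would rotate the kernel through the remaining orientations and keep the maximum response per pixel; in practice a library routine such as `scipy.ndimage.prewitt` replaces the explicit loop.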