In real road environments, traffic signs occupy only a small proportion of the scene and are difficult to detect, and the background is complex and changeable, which leads to low detection precision and poor robustness. This paper proposes a traffic sign detection algorithm, Yolov5-PB, based on an improved Yolov5. The algorithm uses ParC-Net as the backbone feature extraction network to fully extract target feature information, and adopts the BiFPN structure to enhance the network's ability to fuse features of multi-scale traffic signs. Experiments on the TT100K and GTSDB traffic sign datasets show that the mAP@0.5 of the Yolov5-PB algorithm reaches 85.6% and 72.2%, respectively, which are 3.1% and 19.2% higher than Yolov5, and that its precision reaches 85.3% and 71.6%, respectively, which are 0.3% and 21.8% higher than Yolov5. Furthermore, compared with current mainstream object detection algorithms, the proposed algorithm achieves better detection precision while meeting real-time detection requirements, and exhibits better robustness.
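As context for the fusion step mentioned in this abstract, the sketch below shows the fast normalized (weighted) fusion that a BiFPN node applies to multi-scale features. It is a minimal PyTorch illustration; the module name, channel count, and tensor shapes are assumptions for demonstration and not the authors' implementation.

```python
import torch
import torch.nn as nn

class FastNormalizedFusion(nn.Module):
    """Weighted fusion at one BiFPN node: learnable, non-negative
    per-input weights normalized to (approximately) sum to 1."""
    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, features):
        # features: list of tensors with identical shape (B, C, H, W)
        w = torch.relu(self.weights)          # keep weights non-negative
        w = w / (w.sum() + self.eps)          # fast normalization (no softmax)
        return sum(wi * fi for wi, fi in zip(w, features))

# Usage: fuse two scales already resized to the same resolution.
fuse = FastNormalizedFusion(num_inputs=2)
p4_in = torch.randn(1, 128, 40, 40)
p5_up = torch.randn(1, 128, 40, 40)           # upsampled deeper feature map
p4_td = fuse([p4_in, p5_up])
```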
Radiological imaging and image interpretation for clinical decision making are largely specific to each body region, such as head & neck, thorax, abdomen, pelvis, and extremities. To automate image analysis and ensure consistency of results, standardized definitions of body regions and of the various anatomic objects, tissue regions, and zones within them become essential. Assuming that a standardized definition of body regions is available, a fundamental early step in automated image and object analytics is to automatically trim the given image stack into image volumes that exactly satisfy the body region definition. This paper presents a solution to this problem based on the concept of virtual landmarks and evaluates it on whole-body positron emission tomography/computed tomography (PET/CT) scans. The method first selects a (set of) reference object(s), segments it (them) roughly, and identifies virtual landmarks for the object(s). The geometric relationship between these landmarks and the boundary locations of body regions in the craniocaudal direction is then learned through a neural network regressor, and the locations are predicted. Based on low-dose unenhanced CT images from 180 near-whole-body PET/CT scans (including 34 whole-body PET/CT scans), the mean localization error for the superior thoracic (TS) and inferior thoracic (TI) boundaries, expressed as a number of slices (slice spacing ≈ 4 mm), is found to be 3 and 2 slices when the skeleton is used as the reference object and 3 and 5 slices when the pleural spaces are used, or approximately 13 and 10 mm (skeleton) and 10.5 and 20 mm (pleural spaces), respectively. Improvements of this performance via optimal selection of objects and virtual landmarks, as well as other object analytics applications, are currently being pursued.
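The mapping from virtual-landmark geometry to craniocaudal boundary locations described above is, in effect, a multi-output regression. The sketch below illustrates one plausible setup with scikit-learn; the feature layout (flattened landmark coordinates), network size, and data split are assumptions for illustration, not the authors' configuration, and the arrays are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: one row per scan = flattened (x, y, z) coordinates of the virtual
#    landmarks of the reference object (e.g., skeleton or pleural spaces).
# y: craniocaudal locations of the body-region boundaries to predict,
#    e.g., columns [TS, TI] in mm or slice index.
rng = np.random.default_rng(0)
n_scans, n_landmarks = 180, 20
X = rng.normal(size=(n_scans, n_landmarks * 3))   # placeholder features
y = rng.normal(size=(n_scans, 2))                 # placeholder targets

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X[:150], y[:150])                       # train on most scans
pred_ts_ti = model.predict(X[150:])               # predict TS/TI for held-out scans
print(pred_ts_ti.shape)                           # (30, 2)
```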
Much has been published on finding landmarks on object surfaces in the context of shape modeling. While this is still an open problem, many of the challenges of past approaches can be overcome by removing the restriction that landmarks must lie on the object surface. The virtual landmarks we propose may reside inside, on the boundary of, or outside the object, and are tethered to the object. Our solution is simple and recursive in nature, proceeding from global features at the initial levels to increasingly local features at later levels to detect landmarks. Principal component analysis (PCA) is used as an engine to recursively subdivide the object region. The object itself may be represented in binary or fuzzy form or with gray values. The method is illustrated in 3D space (although it generalizes readily to spaces of any dimensionality) on four objects (liver, trachea and bronchi, and outer boundaries of the left and right lungs along the pleura) derived from computed tomography (CT) image data sets of the thorax and abdomen of 5 patients. The virtual landmark identification approach seems to work well on different structures in different subjects and seems to detect landmarks that are homologously located in different samples of the same object. The approach guarantees that virtual landmarks are invariant to translation, scaling, and rotation of the object/image. Landmarking techniques are fundamental for many computer vision and image processing applications, and we are currently exploring the use of virtual landmarks in automatic anatomy recognition and object analytics.
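The recursive PCA subdivision described in this abstract can be sketched as follows. This is a simplified illustration that assumes a binary 3D mask as input and takes subregion centroids as the virtual landmarks; it is not the authors' exact formulation.

```python
import numpy as np

def virtual_landmarks(mask: np.ndarray, levels: int = 2) -> np.ndarray:
    """Recursively subdivide a binary 3D object with PCA and collect
    subregion centroids as virtual landmarks (global -> local)."""
    pts = np.argwhere(mask > 0).astype(float)   # (N, 3) voxel coordinates
    landmarks = []

    def recurse(points: np.ndarray, level: int) -> None:
        if points.shape[0] == 0:
            return
        center = points.mean(axis=0)
        landmarks.append(center)                # centroid = landmark at this level
        if level == levels:
            return
        # Principal axes of the current subregion.
        _, _, axes = np.linalg.svd(points - center, full_matrices=False)
        proj = (points - center) @ axes.T       # coordinates in the PCA frame
        # Split into 2^3 = 8 octants by the sign along each principal axis.
        codes = (proj > 0).astype(int) @ np.array([4, 2, 1])
        for code in range(8):
            recurse(points[codes == code], level + 1)

    recurse(pts, 1)
    return np.array(landmarks)                  # one (z, y, x) landmark per visited subregion

# Example on a toy ellipsoidal mask.
zz, yy, xx = np.mgrid[:40, :60, :50]
mask = ((zz - 20) / 15) ** 2 + ((yy - 30) / 25) ** 2 + ((xx - 25) / 20) ** 2 <= 1
print(virtual_landmarks(mask, levels=2).shape)
```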
To reduce cupping artifacts and enhance contrast resolution in cone-beam CT (CBCT), this paper introduces a new approach that combines blind deconvolution with a level set method. The proposed method operates directly on the reconstructed image, requires no additional physical equipment, and is easily implemented with a single-scan acquisition. The results demonstrate that the algorithm is practical and effective for reducing cupping artifacts and enhancing contrast resolution, preserves the quality of the reconstructed image, and is very robust.
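The abstract does not give implementation details; as a rough illustration of the blind-deconvolution half of the approach, the sketch below alternates Richardson-Lucy updates of the latent image and the point spread function on a single reconstructed slice (the level set step is omitted). The PSF initialization, window cropping, and iteration counts are assumptions, not the authors' method.

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_richardson_lucy(observed, psf_size=7, n_outer=10, n_inner=5, eps=1e-12):
    """Alternating Richardson-Lucy updates of the latent slice and the PSF."""
    observed = np.asarray(observed, dtype=float)
    image = np.full(observed.shape, observed.mean())      # latent image estimate
    psf = np.ones((psf_size, psf_size)) / psf_size**2     # flat initial PSF

    for _ in range(n_outer):
        for _ in range(n_inner):                           # update image, PSF fixed
            blurred = fftconvolve(image, psf, mode="same")
            ratio = observed / (blurred + eps)
            image *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
        for _ in range(n_inner):                           # update PSF, image fixed
            blurred = fftconvolve(image, psf, mode="same")
            ratio = observed / (blurred + eps)
            correction = fftconvolve(ratio, image[::-1, ::-1], mode="same")
            # keep only the central psf_size x psf_size window of the correction
            cy, cx = np.array(correction.shape) // 2
            h = psf_size // 2
            psf *= correction[cy - h:cy + h + 1, cx - h:cx + h + 1]
            psf /= psf.sum() + eps                         # PSF stays normalized
    return image, psf

# Usage on one non-negative reconstructed CBCT slice (hypothetical array):
# deconvolved_slice, psf_estimate = blind_richardson_lucy(slice_2d)
```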