Active contour-based methods are widely used in the image segmentation field. They perform semiautomatic region identification by partitioning the image content, mainly into foreground and background. Nevertheless, accurate delimitation remains an important challenge, one that usually depends on how close the initial contour is placed to the object of interest (OI). Many applications of active contours require user interaction to provide prior information about the initial position as a first step, which makes the tool heavily dependent on a manual process. This paper describes how to overcome this limitation by incorporating the expertise provided by the training stage of a Convolutional Neural Network (CNN). Although CNN methods typically require a large dataset or data augmentation techniques to improve their results, the combined proposal accomplishes a presegmentation task with a reduced number of images to obtain the assumed locations of each OI. These results are used to initialize a multiphase active contour model that follows a level set scheme, leading to a smoother multiregion segmentation with less effort. Experiments compare this approach against classic contour initialization techniques and show the benefits of our proposal.
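The general scheme can be sketched with a toy example: a binary presegmentation mask (standing in for the CNN output) is converted into a signed distance level set, which then evolves under a Chan–Vese-style region update. The function names, parameters, and the single-phase simplification are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed distance from a binary mask (toy-scale only).

    Positive inside the mask, negative outside; serves as the initial
    level set function phi seeded by a presegmentation.
    """
    inside = np.stack(np.nonzero(mask), axis=1)
    outside = np.stack(np.nonzero(~mask), axis=1)
    phi = np.zeros(mask.shape)
    for (i, j), is_in in np.ndenumerate(mask):
        pts = outside if is_in else inside  # distance to the opposite region
        d = np.sqrt(((pts - [i, j]) ** 2).sum(axis=1)).min()
        phi[i, j] = d if is_in else -d
    return phi

def chan_vese_step(phi, img, dt=0.5, lam=1.0):
    """One Chan-Vese-style update: push phi toward the region whose
    mean intensity better explains each pixel."""
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0    # mean inside the curve
    c2 = img[~inside].mean() if (~inside).any() else 0.0  # mean outside
    force = -lam * ((img - c1) ** 2 - (img - c2) ** 2)
    return phi + dt * force

# Bright square on dark background; the "presegmentation" mask only
# roughly covers it, and the evolution recovers the full object.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
mask = np.zeros((16, 16), dtype=bool)
mask[5:11, 5:11] = True
phi = signed_distance(mask)
for _ in range(10):
    phi = chan_vese_step(phi, img)
```

After a few iterations the zero level set expands from the rough initialization to the actual object boundary, which is the benefit an automatic presegmentation-based initialization aims for.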
Texture is one of the most important cues used by the human visual system (HVS) to distinguish different objects in a scene. Early bio-inspired methods for texture segmentation partition an image into distinct regions by setting a criterion based on their frequency response and local properties, and then performing a grouping task. Nevertheless, correct texture delimitation remains an important challenge in image segmentation. The aim of this study is to propose a novel approach that discriminates different textures by comparing internal and external image content along a set of evolving curves. We propose a multiphase formulation with an active contour model applied to the highest-energy coefficients generated by the Hermite transform (HT). Local texture features such as scale and orientation are reflected in the HT coefficients, which guide the evolution of each curve. This process leads to the enclosure of similar characteristics in a region associated with a level set function. The efficiency of our proposal is evaluated on a variety of synthetic images and real textured scenes.
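To illustrate how Hermite coefficients reflect local orientation, the sketch below builds Gaussian-windowed Hermite kernels and compares the energies of the first-order horizontal and vertical coefficients on a two-texture image: on vertically striped texture the horizontal-derivative coefficient dominates, and vice versa. Kernel size, `sigma`, and the function names are illustrative choices, not the paper's parameters.

```python
import numpy as np

def hermite_kernel(n, sigma=1.2, radius=3):
    """Sampled order-n Hermite function: H_n(x/sigma) times a Gaussian window."""
    x = np.arange(-radius, radius + 1, dtype=float)
    h = np.polynomial.hermite.hermval(x / sigma, [0] * n + [1])  # H_n
    k = h * np.exp(-(x / sigma) ** 2)
    return k / np.sqrt((k ** 2).sum())

def hermite_coeff(img, m, n, sigma=1.2, radius=3):
    """Separable 2D coefficient: order n along x (rows), order m along y (cols)."""
    kx = hermite_kernel(n, sigma, radius)
    ky = hermite_kernel(m, sigma, radius)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kx, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, ky, mode='same'), 0, tmp)

# Left half: vertical stripes (varies along x); right half: horizontal stripes.
img = np.zeros((32, 64))
img[:, :32] = (np.arange(32) % 4 < 2).astype(float)[None, :]
img[:, 32:] = (np.arange(32) % 4 < 2).astype(float)[:, None]

Ex = hermite_coeff(img, 0, 1) ** 2  # energy of horizontal first-order coefficient
Ey = hermite_coeff(img, 1, 0) ** 2  # energy of vertical first-order coefficient

# Compare mean energies away from borders and the texture boundary.
left = (slice(4, 28), slice(4, 28))
right = (slice(4, 28), slice(36, 60))
```

In a multiphase scheme, energy maps like `Ex` and `Ey` would serve as the feature channels whose per-region statistics drive each curve's evolution.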
Periodic variations in patterns within a group of pixels provide important information about the surface of interest and can be used to identify objects or regions. Hence, a proper analysis can extract particular features according to specific image properties. Recently, texture analysis using orthogonal polynomials has gained attention, since polynomials characterize the pseudo-periodic behavior of textures through the projection of the pattern of interest onto a group of kernel functions. However, the maximum polynomial order is often linked to the size of the texture, which in many cases implies complex calculations and introduces instability at higher orders, leading to computational errors. In this paper, we address this issue and explore a pre-processing stage to compute the optimal size of the window of analysis, called a “texel.” We propose Haralick-based metrics to find the main oscillation period, such that it represents the fundamental texture and captures the minimum information sufficient for classification tasks. This procedure avoids the computation of large polynomials and substantially reduces the feature space with small classification errors. Our proposal is also compared against different fixed-size windows. We also show similarities between full-image representations and texel-based ones, in terms of visual structures and feature vectors, using two different orthogonal bases: Tchebichef and Hermite polynomials. Finally, we assess the performance of the proposal on well-known texture databases from the literature.
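The texel-size idea can be illustrated with one Haralick-style statistic: GLCM contrast at a horizontal offset d, which equals the mean squared grey-level difference over co-occurring pixel pairs and dips at multiples of the texture period. The first offset minimizing it is taken as the fundamental period. This particular metric and the function names are illustrative; the paper's pre-processing stage may combine other Haralick descriptors.

```python
import numpy as np

def glcm_contrast(img, d):
    """GLCM contrast at horizontal offset d, computed directly as the
    mean of (i - j)^2 over all pixel pairs (i, j) separated by d columns."""
    a, b = img[:, :-d], img[:, d:]
    return ((a - b) ** 2).mean()

def texel_period(img, max_d=None):
    """Smallest horizontal offset minimizing the contrast, taken as the
    fundamental oscillation period (texel width) along x."""
    if max_d is None:
        max_d = img.shape[1] // 2
    scores = [glcm_contrast(img, d) for d in range(1, max_d + 1)]
    return 1 + int(np.argmin(scores))  # argmin returns the first minimum

# Synthetic ramp texture repeating every 5 columns.
img = np.tile((np.arange(40) % 5) / 4.0, (20, 1))
period = texel_period(img)
```

Restricting the polynomial analysis window to this estimated period is what keeps the required polynomial order, and hence the feature space, small.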