1. Introduction

Most endoscopes have a short focal length and a wide field of view (FOV) in order to observe a broad area with minimal moving or bending of the endoscope, at the expense of severe geometric distortion. The distorted images can adversely affect the accuracy of size and shape estimations. Therefore, a global standard providing a quantitative and simple method to evaluate endoscope distortion is essential. However, such a standard to characterize all possible types of endoscope distortion, along with instructions on how to evaluate and present the distortion results, has yet to be developed, which makes it difficult to accurately evaluate the quality of new and existing endoscopic technologies. This in turn delays the availability of technologically superior endoscopes in the market. Such a domino effect can be avoided by developing a consistent and accurate standardized method to characterize endoscope distortion.

1.1. Endoscope Distortion

Optical aberrations fall into two main types: chromatic aberrations and monochromatic aberrations. The former arise from the fact that the refractive index is a function of wavelength. The latter occur even with quasimonochromatic light and fall into two subgroups: monochromatic aberrations that deteriorate the image, making it unclear (e.g., spherical aberration, coma, and astigmatism), and monochromatic aberrations that deform the image (e.g., Petzval field curvature and distortion).1 In this paper, we focus on the monochromatic aberrations that deform the image. We call such aberrations geometric distortions. A geometric distortion is a deviation from the rectilinear projection, a projection in which straight lines in a scene remain straight in the image. While similar distortions can also be seen in displays (display distortion, especially in cathode ray tube displays), we mainly focus on the geometric distortions caused by geometric optics.
Among different types of geometric distortions, radial distortions are the most commonly encountered and most severe. They cause an inward (barrel distortions) or outward (pincushion distortions) displacement of a given image point along the radial direction from its undistorted location (Fig. 4). A radial distortion can also be a combination of both barrel and pincushion distortions, which is called a mustache (or wave) distortion. In an image with a radial distortion, a straight line that runs through the image center (usually also being the center of distortion) remains straight. Since most radial distortions are circularly symmetric (i.e., rotationally symmetric with respect to any angle), or approximately so, arising from the circular symmetry of the optical imaging systems, a circle that is concentric with the image center remains a circle in its image, although its radius may be affected. Some complex distortions include both radial and tangential components, i.e., a given image point displaces along both radial (radial distortion) and tangential (tangential distortion) directions (Fig. 1). Such distortions, called radial-tangential distortions in this paper, include decentering distortions and thin prism distortions.2,3 Unless otherwise specified, distortions hereafter mentioned in this paper mean radial geometric distortions—the focus of this paper. Endoscopes usually have severe barrel distortions. 
An endoscope needs a short focal length and a wide FOV in order to observe a broad area with minimal moving or bending of the endoscope, which is essential for steady and smooth manipulation because of the restricted space, the limited degrees of freedom of movement, and the limitation in hand-eye coordination during surgical cases.4 To meet these requirements, lenses used in endoscopes usually have a short focal length (a few millimeters only) and a wide FOV (ranging from 100 to 170 deg), which inevitably causes severe distortions.5 Typically, endoscopes exhibit barrel distortions. Occasionally, an endoscope exhibits a mustache distortion that varies between barrel and pincushion across the image, mostly because a mathematical algorithm is used to correct distortion at the maximum image height or in other parts of the image. Since endoscope distortions can negatively affect size estimation and feature-identification-related diagnosis,5–7 quantitative evaluation of endoscope distortions and proper understanding of the evaluation results are essential.

1.2. Need for an Endoscope Distortion Evaluation Method

The millions of endoscopic procedures conducted monthly in the United States for a wide range of indications are driving the advancement of endoscopic imaging technology. With new diagnostic capabilities and more complex optical designs, technological advances in endoscopes promise significant improvements in both safety and effectiveness. Endoscope optical performance (OP) can be evaluated by OP characteristics (OPCs), including resolution, distortion, FOV, direction of view, depth of field, optimal working distance, image noise, detection uniformity, veiling glare, and so on. Current consensus standards provide limited information on validated and quantitative test methods for assessing endoscope OP. There is no standardized method to evaluate endoscope distortions.
An international standard specifies methods of determining distortion in optical systems.8 The methods require the use of complex devices, such as an autocollimator, or an instrument to measure the object and image pupil field angles and heights. While the standard provides complex equations, it does not clarify how the distortion results should be presented and evaluated. Also, the picture height distortion value mentioned in the standard is insufficient for the evaluation of severe barrel or pincushion distortions and fails for the evaluation of mustache distortions. The definitions of angular magnification and lateral magnification in this standard are based only on a small area near the optical axis of the test specimen, and cannot be extended to endoscopes whose magnification changes significantly within the FOV. The endoscope working group (WG) of the International Organization for Standardization (ISO), ISO/TC172/SC5/WG6, develops and oversees endoscope standards (the ISO 8600 series) that cover the endoscope OPCs of FOV, direction of view, and optical resolution. However, an endoscope distortion standard has not yet been developed by this WG. While endoscopic technology is developing fast, the regulatory science for endoscope OP evaluation has been unable to keep pace. Every year, the U.S. Food and Drug Administration receives a large number of endoscope submissions for premarket notification or premarket approval. However, the evaluation of new video endoscopic equipment is difficult because of the lack of objective OP standards. The industry lacks consensus standards on objective test methods to evaluate distortion. The resulting patchwork of tests conducted by different device manufacturers leads to delays in bringing important endoscopic technology to market and may allow the clearance of a less optically robust system that negatively impacts patient care.
In this paper, we aimed to establish a quantitative, objective, and simple distortion evaluation method for endoscopes, with the goal of applying the method in an international endoscope standard. We reviewed common methods described in prior journal articles for distortion evaluation of an optical imaging system and analyzed the relationships among these methods. Based on the review, a quantitative test method for assessing endoscope radial distortion was developed and validated based on the local magnification concept. The method will help facilitate performance characterization and device intercomparison for a wide variety of standards and endoscopic imaging products. The method has the potential to facilitate the product development and regulatory assessment processes in a least burdensome approach by reducing the workload on both endoscope manufacturers and the regulatory agency. As a result, novel, high-quality endoscopic systems can be brought to market swiftly. The method can also be used to facilitate the rapid identification and understanding of the causes of poor endoscope performance, and benefit quality control during manufacturing as well as quality assurance during clinical practice.

2. Review of Common Methods for Distortion Evaluation

In this section, common methods for distortion evaluation are reviewed. The distortion pattern on an image sensor might not be the same as shown on a display device because of the effects of hardware, such as a cathode ray tube (CRT), or software, such as an image processing algorithm. For simplicity, this paper focuses on distortions of digital images from an image sensor, whether or not they have been processed. However, the methods can also be extended to evaluate display distortions. Theoretically, a geometric distortion might include both radial and tangential components, i.e., a given image point displaces along both radial (radial distortion) and tangential (tangential distortion) directions (Fig.
1). Such distortions, called radial-tangential distortions in this paper, include decentering distortions and thin prism distortions.2,3 A radial-tangential distortion can be evaluated by comparing the positions of two-dimensional (2-D) points on the distorted image with their positions on an ideally nondistorted image. It can be described with a 2-D matrix showing the relative position change of each point as a function of coordinates. In an optical imaging system, the tangential component of a geometric distortion is essentially caused by imperfect circular symmetry. However, an optical imaging system manufactured in accordance with the present state of the art has negligible tangential distortion.8,9 Therefore, this paper focuses only on radial distortions.

2.1. Picture Height Distortion and Related Methods

There are several methods for distortion evaluation. The picture height distortion method (D_PH, where D means distortion) is defined by the European Broadcasting Union (EBU)10 and recommended by the ISO 9039 International Standard.8 It quantifies the bending of the image of a horizontal straight line that is tangent to the circumscribed (for barrel distortion) or inscribed (for pincushion distortion) rectangle of the distorted image (Fig. 2). As shown in Fig. 2, it is calculated as

D_PH = (dH / H) x 100%,

with dH being half the total bend of the line image and H being the height of the circumscribed (for barrel distortion) or inscribed (for pincushion distortion) rectangle of the distorted image. D_PH values are negative for barrel distortions and positive for pincushion distortions. The reported value should be the mean value over all four corners. While D_PH was initially defined for the vertical direction, it is applicable to horizontal distortion as well. D_PH is also called the television (TV) distortion method (D_TV) or the traditional TV distortion method.
The term TV distortion has been used because such geometric distortion was often observed on a traditional CRT television due to the effect of internal or external magnetic fields, or because this method is often used to evaluate the distortion on a display device. While CRT televisions are almost obsolete, the term TV distortion is still widely used, though its meaning is no longer tied to television. An open standard for self-regulation of mobile imaging device manufacturers, named Standard Mobile Imaging Architecture (SMIA),11,12 defines a distortion evaluation method in a similar way as D_PH. We call this method the SMIA TV distortion method (D_SMIA) to distinguish it from D_PH or D_TV. D_SMIA can be calculated as

D_SMIA = (2 dH / H) x 100%.

The reported value should also be the mean value over all four corners. Obviously, the D_SMIA value is twice as large as the D_PH value for the same distorted image, i.e., D_SMIA = 2 x D_PH. Another distortion evaluation method is presented in Fig. 3.13 If we draw a straight line connecting the two ends of a curved line (the image of a straight line in the target), its length is L, and the largest distance from this drawn line to any point on the curved image line is dh [Fig. 3(a)]; the distortion (denoted D_LD here) is then defined as

D_LD = (dh / L) x 100%.

As opposed to D_PH, the D_LD values are positive for barrel distortions and negative for pincushion distortions. Otherwise, the definition of D_LD is similar to that of D_PH. The absolute value of dh in D_LD is the same as that of the total bend 2 dH in the D_PH method for the lower horizontal edge of a distorted image. Comparing Figs. 2 and 3 gives the relation between D_LD and D_PH: they differ only in sign convention and in using the line length L instead of the picture height H as the denominator. Since the D_LD method has no significant advantage over the D_PH method, we do not recommend it for distortion evaluation. The aforementioned distortion evaluation methods calculate the largest positional error of barrel or pincushion distortions over the whole image. They are meaningful only if the optical system has a steadily increasing distortion (barrel or pincushion distortion) from the image center to the edges.
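The definitions above can be sketched numerically. A minimal illustration, assuming the reconstructed formulas; the function names and sample numbers are ours, not from the standards:

```python
def picture_height_distortion(delta_h, h):
    """EBU picture height distortion: 100 * dH / H.

    delta_h: half the total bend of the edge line (negative for barrel),
    h: height of the circumscribed (barrel) or inscribed (pincushion) rectangle.
    """
    return 100.0 * delta_h / h

def smia_tv_distortion(delta_h, h):
    """SMIA TV distortion: twice the picture height distortion for the same image."""
    return 200.0 * delta_h / h

# Hypothetical barrel-distorted image: half-bend of -5 pixels on a 480-pixel height
d_ph = picture_height_distortion(-5.0, 480.0)
d_smia = smia_tv_distortion(-5.0, 480.0)
```

Both functions report the mean over the four corners in practice; the factor-of-two relation D_SMIA = 2 x D_PH holds for any input.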
For a complex distortion pattern, it is impossible to evaluate the distortion in detail with a single value, since the value might be misleading. Taking a mustache distortion as an example, it is possible that the image displays little or virtually zero distortion at the edges as measured by any of these methods, but maximum distortion at midfield. These methods also depend on the aspect ratio of the distorted image. The EBU defines D_PH for the case of a 4:3 aspect ratio, the ratio of the traditional television and computer monitor standards. However, there are other widely used aspect ratios, such as the 3:2 ratio of classic 35 mm still camera film and the 16:9 ratio of HD video. We cannot directly compare the distortion values of two images with different aspect ratios.

2.2. Radial Distortion Method

Another distortion evaluation method is based on comparing the radii of the distorted (r_d) and undistorted (r_u) images. It is assumed that the distortion close to the optical center is zero. Therefore, an undistorted image can be calculated based on the information at the center of the distorted image. The distorted image is then evaluated against the undistorted image as a reference along the radial direction. Since this method can be applied to any radial distortion, we call it the radial distortion method (D_rad). As shown in Fig. 4,

D_rad = [(r_d - r_u) / r_u] x 100%,

where r_d is the distance of a point in the distorted image from the image center and r_u is the distance of the corresponding point in the calculated undistorted image from the image center.14,15 The point can come from any location in the distorted image except the image center, where r_u becomes infinitely small, although Fig. 4 shows only the top-right corner as an example. If the absolute values of r_d and r_u are magnified at the same scale, the distortion evaluation results are not affected. D_rad can be used to evaluate complex distortions (e.g., mustache distortions) with a distortion profile along a radius line.
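The D_rad definition translates directly into code. A minimal sketch, with hypothetical radii:

```python
def radial_distortion(r_d, r_u):
    """D_rad = 100 * (r_d - r_u) / r_u.

    Negative values indicate barrel distortion (the point is pulled inward),
    positive values indicate pincushion distortion. Undefined at r_u = 0.
    """
    return 100.0 * (r_d - r_u) / r_u
```

For example, a point that should sit 10 grid units from the center but images at 9 units has D_rad = -10%, a barrel displacement.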
Mustache distortions can be caused by countermeasures in the optical design or by an image processing algorithm intended to limit or remove distortion. If we calculate D_rad along a diagonal from the image center to a corner, we can obtain a curve of D_rad versus r_u or r_d, as shown in Fig. 5. For a simple barrel or pincushion distortion, we can identify a barrel distortion if the distortion value is negative and a pincushion distortion if it is positive. However, this criterion fails for identifying a mustache distortion. The key point is that the identification of a distortion type should not depend on the sign of the distortion value but on the slope of the radial distortion curve. Typical radial distortion curves are shown in Fig. 5. Generally, these distortion curves start at zero, matching the assumption that the distortion close to the optical center is zero. For a barrel distortion [Fig. 5(a)], the curve slope is always negative; therefore, the distortion values are also negative. For a pincushion distortion [Fig. 5(b)], the curve slope is always positive, resulting in positive distortion values. For a mustache distortion, the curve has both positive and negative slopes in different regions. In Fig. 5(c), the curve has a negative slope below a certain radius and a positive slope above it. This means that the image has a barrel distortion inside that radius and a pincushion distortion outside it, even though the distortion values remain negative there. A higher absolute slope value means a more pronounced distortion at that radius. For a simple barrel or pincushion distortion, the absolute value of the radial distortion calculated from an image corner is usually larger than that of D_PH or D_SMIA.15 This can be explained theoretically with the barrel distortion in Fig. 4 as an example. In Fig. 4(a), points A and B are, respectively, the middle point and right corner of the upper edge of the undistorted image, with their distance to the image center in the vertical direction being y_u. In Fig.
4(b), points A' and B' are the images of A and B, with their distances to the image center in the vertical direction being y_A and y_B, respectively. y_A is larger than y_B for a barrel distortion and smaller for a pincushion distortion. D_PH at corner B' can be calculated from y_A and y_B, and D_rad for corner B' can be calculated from the positions of B and B'. For a barrel distortion, both A' and B' are displaced toward the image center in the vertical direction by the amounts (y_A - y_u) and (y_B - y_u), respectively, with the negative values signifying a barrel distortion. At the same time, B' moved by the amount (y_B - y_A) in the vertical direction toward the image center relative to A'. Therefore, the equation of D_PH evaluates only the position of B' relative to A'. On the other hand, the displacement underlying D_rad can be split into a horizontal component and a vertical component. The vertical component is the movement of B' and can also be expressed as (y_B - y_u) = (y_B - y_A) + (y_A - y_u), which is the sum of the movement of B' relative to A' and the movement of A'. Since the values of y_A and y_u are close, D_PH roughly captures only the movement of B' relative to A' in the vertical component of D_rad for corner B, without considering the movement of A' in the vertical direction or the movement of B' in the horizontal direction. That explains why the absolute D_PH value is usually smaller than the absolute D_rad value. A similar conclusion can be obtained for a pincushion distortion. The main problem with the D_rad method is that the values of r_u are not available, since the undistorted image does not physically exist. The r_u values can be approximated from the data at the center of the distorted image, with the assumption that there is no or little distortion near the optical axis.16 Taking the image of a grid target as an example, if we know that one grid (the reference grid) at the center of the distorted image corresponds to p pixels on the image sensor, then a point n grids from the image center in the undistorted image should be n x p pixels from the center, based on which we can calculate the undistorted image of the grid target and then the D_rad values. However, there is a problem with this approach.
The size of the assumed undistorted area in the distorted image will affect the final results for a severe distortion, as seen in most endoscopic images. If the reference grid is too big, it might itself have been distorted. Figure 6(a) shows a barrel-distorted image. From the image center to the right edge in the horizontal direction, we obtained the distance (r_d) of each cross-point from the center. By assuming that there was no distortion between point 0 and point 1 (0–1), point 0 and point 2 (0–2), or point 0 and point 3 (0–3), respectively, we calculated the undistorted distance (r_u) of the cross-points under these different assumptions and obtained the three curves shown in Fig. 6(b). For a barrel distortion, the distortion values should monotonically decrease with radius, with the maximum value of zero at the center. However, the 0–2 and 0–3 curves in Fig. 6(b) show positive values at shorter radial distances, which indicates that the assumptions of no distortion from point 0 to point 2 or from point 0 to point 3 are less accurate than the no-distortion assumption from point 0 to point 1; a bigger assumed nondistortion area causes a bigger error. On the other hand, if the assumed nondistortion area is too small, the reading error for r_d at the center is amplified by the large number of grids at farther radial distances when calculating r_u.

3. Development of the Local Magnification Method for Distortion Evaluation of Endoscopes

Distortion evaluation is closely related to camera calibration techniques—a rather mature area for optical imaging devices.17,18 While these techniques are useful for image correction with the help of powerful computational capability, the two- or three-dimensional image data as well as the transform matrices are complex, lack direct physical meaning, and are hard to understand for most users. Therefore, these calibration methods are not a good choice for a consensus evaluation method that could potentially be adopted by an international standard.
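The dependence on the assumed nondistortion area can be reproduced with a toy model. The cubic barrel function below is illustrative only, not the measured endoscope data:

```python
import numpy as np

def estimated_distortion(k, n_max=5):
    """Distortion curve when the first k grids from the center are assumed undistorted.

    Toy barrel model: a point n grid units from the center images at
    r_d(n) = n - 0.01 * n**3 (arbitrary units).
    """
    n = np.arange(1, n_max + 1, dtype=float)
    r_d = n - 0.01 * n**3
    per_grid = (k - 0.01 * k**3) / k      # reference spacing taken from grids 0..k
    r_u_est = n * per_grid                # estimated undistorted radii
    return 100.0 * (r_d - r_u_est) / r_u_est
```

With k = 1 the curve starts at zero and decreases monotonically, as a barrel distortion should; with k = 3 the near-center values turn spuriously positive, mimicking the 0–2 and 0–3 curves of Fig. 6(b).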
The local magnification method we developed is mathematically and experimentally simple, and can describe the distortion characteristics of an endoscope better than the commonly used methods. The method can also provide valuable information to help a physician interpret a distorted medical image.

3.1. Experimental Measurements

We established a distortion evaluation method using an endoscopic system (Olympus EVIS EXERA II) that includes a high-intensity xenon light source (CLV-180), a gastrointestinal videoscope (GIF-H180), and a video system center (CV-180). This system has the common barrel distortion seen in most endoscopes. A series of grid targets with three different grid sizes was designed and printed with a laser printer. The total size of each target was large enough to cover the whole FOV. The target during tests should be planar and able to move along the optical axis. To hold the endoscope securely in place, a customized mold with adjustable height was used. The direction of the endoscope distal end was adjusted with a fiber optic positioner (Newport, FPR2-C1A). The setup was adjusted so that the endoscope optical axis was perpendicular to the test target and aligned with the test target center (Fig. 7). The criteria for acceptable adjustment are as follows: (1) the center of the target, i.e., the point on the target located at the FOV center, was located at the center of the captured image; and (2) two pairs of centrally symmetric points on the target (marked in Fig. 7) were also centrally symmetric in the image. It was assumed that the distortion center (i.e., where the optical axis passes through the image plane) overlaps with the image center. This assumption is evaluated in Sec. 4.1. For the first criterion, the target was positioned at a given distance to take its image. The image was then analyzed with software (e.g., MATLAB) to find the target center in the image.
The distance from the target center to the image center was calculated as sqrt(dx^2 + dy^2), in units of pixels, where dx and dy are the horizontal and vertical pixel offsets. The distance was controlled to within a small fraction of the picture height. For the second criterion, the same method was used to make sure that the midpoint between each pair of points was within the same fraction of the picture height from the target center in the image. The two criteria were satisfied through an iterative trial-and-error process.

3.2. Local Magnification Method to Quantify Distortion

To avoid the aforementioned problem of calculating r_u in Sec. 2.2, we propose to evaluate radial distortions with a new approach—the local magnification (LM) method. For a small object (e.g., a small cross) placed at a local point on the test target, the ratio of the object length on the image sensor (or on a display device fed from the sensor) to its actual length on the test target is called LM. The term "local magnification" is borrowed from the IEC 1262-4 Standard, which addresses determination of image distortion in electro-optical x-ray image intensifiers.19 In that standard, discrete LM values are obtained by measuring size changes in small hash marks, and their accuracy can be affected by the sizes of the marks. In this paper, LM is expressed with an equation that can accurately and continuously express distortion at any location in the FOV. LM can be separated into the local radial magnification (LM_r) and the local tangential magnification (LM_t). LM_r is the local magnification of a small one-dimensional (1-D) object oriented radially toward the FOV center. LM_t is the local magnification of a small 1-D object oriented tangentially to the radial direction. Figure 8 shows the LM_r and LM_t of a cross-shaped object (ideally, the object should be infinitely small) with radially oriented arm length w_u and tangentially oriented arm length h_u at radius r_u on the test target. The object image is located at radius r_d in the target image, with corresponding arm lengths w_d and h_d.
Then the LM_r and LM_t at the cross-point can be calculated as LM_r = w_d/w_u and LM_t = h_d/h_u, where w and h denote the radially and tangentially oriented arm lengths on the target (subscript u) and in the image (subscript d). Assuming the distortion is radial, data from any straight line crossing the image center can be used to evaluate the distortion of the whole image if the target is well aligned with the device. We used the horizontal line from the image center to the right edge [Fig. 6(a)] as an example to explain the method. Since the distance from the image center to the right edge is not the maximum radius in the whole image, the final evaluation results mainly reflect the distortion characteristics in a circular area with a radius equal to the distance from the image center to the right edge. Other straight lines (e.g., a vertical or diagonal line) crossing the image center can also be used to cover a bigger circular area or obtain more accurate results. The method is described in detail as follows. After proper alignment, an image of the target at an established distance from the endoscope is taken [Fig. 6(a)]. The horizontal line from the image center to the right edge is then used to analyze the radial distortion. Following this, the coordinates of each cross-point on the line are read with image analysis software, and the distances (r_d) of the points from the image center are calculated in terms of pixels. The actual distances (r_u) of these cross-points from the center on the target can also be obtained. For simplicity, r_u is recorded as the number of grids from the center to each cross-point (Table 1), instead of measuring the actual distances. The matrix of r_u values is then mapped to the matrix of r_d values.

Table 1. Example of evaluating geometric distortion.
The r_u value at the image edge is needed in order to evaluate the distortion at the edge. In most cases, however, the image near the edge is blurred due to severe distortion and vignetting effects. Additionally, the edge may not lie exactly on a cross-point, in which case the number of cross-points from the center to the edge is not an integer. Based on the available r_d and r_u data from the cross-points, a polynomial equation of r_u as a function of r_d is calculated. The maximum pixel number from the image center to the image edge (i.e., half the picture width, 640 in our images) is then used as r_d to calculate r_u at the edge (bold numbers in Table 1). The value of 12.2 grids is obtained in this example. Both r_u and r_d are normalized, setting their maximum values to 1 (Table 1). From the curve of normalized r_d versus normalized r_u (Fig. 9), or vice versa, a polynomial fitting equation of r_d as a function of r_u (or r_u as a function of r_d) is created to define the relation between them. The normalized polynomial function can be easily converted to other scales. Assume the function has a polynomial form9,20,21 of

y = sum_{i=0}^{n} a_i x^i,  (5)

where n is the degree of the polynomial equation, x represents the normalized r_u, and y represents the normalized r_d. The degree-zero term (the constant term a_0) is assumed to be zero, since we assume that the centers of the distorted and undistorted images overlap. If x and y need to be scaled to X and Y, so that x = a_u X and y = a_d Y, then

Y = (1/a_d) sum_{i=0}^{n} a_i (a_u X)^i.  (6)

Equations (6) and (5) are the same if a_u = 1 and a_d = 1.

3.2.1. Local radial magnification method

If X and Y are the actual lengths of r_u and r_d, LM_r is defined as follows from the derivative of Eq. (6):

LM_r = dY/dX = (a_u/a_d) sum_{i=1}^{n} i a_i (a_u X)^(i-1).  (7)

Substituting x = a_u X in the above equation, we get

LM_r = (a_u/a_d) sum_{i=1}^{n} i a_i x^(i-1).  (8)

Taking the data in Table 1 as an example, we can get a fitting equation based on Eq. (5), with the normalized r_u as x and the normalized r_d as y. Degree 5 is the lowest degree of the polynomial fitting equation for which the R-squared value of the fit meets the requirement. The target grid size is 3 mm, so the maximum r_u is 36.6 mm (12.2 x 3 mm) and a_u is 0.0273 per mm (1/36.6).
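Eqs. (5)–(8) translate directly into code. The sketch below fits the zero-constant polynomial by least squares and evaluates the local radial magnification from its derivative, together with the tangential ratio y/x for comparison; the synthetic cubic curve and the scales are illustrative, not the Table 1 fit:

```python
import numpy as np

def fit_distortion_poly(x_norm, y_norm, degree=5):
    """Fit y = a1*x + ... + a_deg*x^deg with the constant term fixed at zero [Eq. (5)]."""
    V = np.vander(x_norm, degree + 1)[:, :-1]      # Vandermonde without the constant column
    a, *_ = np.linalg.lstsq(V, y_norm, rcond=None)
    return np.concatenate([a, [0.0]])              # descending powers, for np.polyval

def lm_r(coeffs, x, a_u, a_d):
    """Local radial magnification, (a_u/a_d) * dy/dx [Eq. (8)]."""
    return (a_u / a_d) * np.polyval(np.polyder(coeffs), x)

def lm_t(coeffs, x, a_u, a_d):
    """Local tangential magnification, (a_u/a_d) * y/x (undefined at x = 0)."""
    return (a_u / a_d) * np.polyval(coeffs, x) / x

# Synthetic barrel curve y = x - 0.2*x**3 sampled at ten cross-points
x = np.linspace(0.1, 1.0, 10)
c = fit_distortion_poly(x, x - 0.2 * x**3, degree=3)
```

Because the synthetic data come from a cubic with zero constant term, the least-squares fit recovers it exactly, and lm_r falls off faster than lm_t toward the edge, as in Fig. 11.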
Assuming the image sensor has a pixel size of 2.8 um, the maximum r_d is 1.792 mm (640 x 2.8 um) and a_d is 0.5580 per mm (1/1.792). Substituting these scales into Eq. (7) gives the LM_r equation for these data, from which the LM_r value at each cross-point can be calculated, as shown in Table 1. The LM_r data can be normalized so that the maximum magnification, at the image center, is one. The LM_r and normalized LM_r versus normalized r_u curves are shown in Fig. 10, from which we can see that the two curves can be made to overlap completely by adjusting the scale of the coordinates.

3.2.2. Local tangential magnification

LM_t as defined in Fig. 8 is equal to the ratio of two circumferences with r_d and r_u as radii, respectively: LM_t = 2 pi r_d / (2 pi r_u) = r_d / r_u. The equation for LM_t can be derived from Eq. (6). If X and Y are the actual lengths of r_u and r_d, we define LM_t as

LM_t = Y/X = (a_u/a_d) sum_{i=1}^{n} a_i (a_u X)^(i-1).  (9)

Substituting x = a_u X in the above equation, we get

LM_t = (a_u/a_d) sum_{i=1}^{n} a_i x^(i-1).  (10)

There is no LM_t value at r_u = 0, because the ratio of two circumferences that are both zero cannot be calculated. Again, take the data in Table 1 as an example. Assuming a_u is 0.0273 and a_d is 0.5580, we can get the LM_t equation based on Eqs. (8) and (10), from which the LM_t and normalized LM_t data can be calculated as shown in Table 1. Based on Table 1, we can compare LM_r and LM_t as shown in Fig. 11. While we use the normalized r_u as the x axis, the x axis can also be the normalized r_d, depending on the requirement. From Fig. 11, LM_r and LM_t are the same near the center, when the normalized r_u is less than 0.2. However, LM_r decreases faster than LM_t with increasing r_u, which indicates that the image of an object is compressed more in the radial direction than in the tangential direction when the normalized r_u is greater than 0.2. This property is important for a physician interpreting an endoscopic image.

3.3. Deriving D_rad and D_PH (or D_SMIA) from LM

The local magnification method can also help improve the traditional way of calculating D_rad and D_PH. Assume that X and Y are the actual sizes of the undistorted and distorted images, respectively. (Please note that X represents the actual size of the target in Sec. 3.2, but the equations are exactly the same.) The radial distortion equation can be obtained from Eq.
(6) as

D_rad = [(Y - X)/X] x 100% = [(a_u/a_d)(y/x) - 1] x 100%.  (11)

As is apparent from the above equation, there is no D_rad value at x = 0. Substituting y from Eq. (5) into the above equation, we get

D_rad = [(a_u/a_d) sum_{i=1}^{n} a_i x^(i-1) - 1] x 100%.  (12)

Taking the data in Table 1 as an example, we assume the normalized r_d as Y (then a_d = 1) and the undistorted image to have a maximum diameter of 1.71, so a_u is 0.585 (1/1.71). (Assuming the center grid of the distorted image is undistorted, the undistorted image diameter follows from scaling the center grid size by the 12.2 grids to the edge.) Therefore, the D_rad values can be calculated (Table 1) based on Eqs. (8) and (12). While an image can be magnified to different sizes by changing the values of a_u and a_d, the ratio of Y to X, and hence D_rad, should remain constant. We can also calculate the traditionally used D_PH or D_SMIA. Take the barrel distortion in Fig. 4(b) as an example. Assume that the distorted image is W wide and H high and that we have obtained the two functions y = f(x) and x = g(y), with x and y being normalized values. Then the D_PH or D_SMIA of a barrel distortion can be calculated with the procedure shown in Fig. 12. For a pincushion distortion, a similar method can be used. The advantage of calculating D_PH or D_SMIA with this method is that it does not need lines that are tangent to the image boundaries [for barrel distortion, Fig. 13(a)] or whose end points overlap with the image corners (for pincushion distortion). For example, while Fig. 13(a) is an ideal image to analyze in the traditional way, Fig. 13(b) is not, because no lines are tangent to the edges. However, the method in Fig. 12 works for both images. Most importantly, D_PH or D_SMIA can be directly calculated if the polynomial equations describing the relationship between r_u and r_d are known.

4. Discussion

4.1. Assumptions of Circular Symmetry and Overlap of the Image Center with the Distortion Center

As mentioned before, we assumed that the endoscope is circularly symmetric and, therefore, that the tangential component of the distortion can be ignored. We also assumed that the distortion center overlaps with the image center. These assumptions can be verified. In Sec. 3, we used the data from the image center to the right edge of Fig.
6(a) to demonstrate the distortion evaluation methods and their results (Table 1). Similar results can also be obtained from the data along any other radius. If the endoscope is circularly symmetric and the distortion center coincides with the image center, the distortion results based on data from different radii should be close. Four sets of data along four radii, i.e., from the image center to the right, left, top, and bottom edges, were obtained from Fig. 6(a) to derive the normalized versus normalized curves (Fig. 14). The data were normalized with the longest radius as one. In Fig. 14, the four curves overlap, indicating that the assumption of a circularly symmetric optical system without tangential distortion is correct and that the distortion center coincides with the image center.

4.2. Accuracy of the Local Magnification Method

To evaluate the accuracy of the obtained results, we applied them in a MATLAB routine to correct a distorted image from the same endoscope. The distorted and corrected images are shown in Fig. 15. Since the points selected to establish the distortion function did not cover the four corners of the image, the equation covers only a circular area [Fig. 6(a)] whose radius is the distance from the image center to the farthest point. Therefore, only the part of the image located within this circle was corrected, and the four corners outside the circle were discarded. Overall, the correction removed the vast majority of the distortion originally present. Some errors were still present near the image boundary. The main reason for these errors is that the coordinate reading for a point far from the center is less accurate than for a point near the center, because of the smaller magnification, lower resolution, and dimmer illumination toward the edge. The accuracy can be improved by adjusting the illuminating light.
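The correction step described above can be sketched in code. The following is a minimal Python/NumPy illustration, not the MATLAB routine used in the study: it assumes a polynomial `poly_u_to_d` mapping normalized undistorted radius to normalized distorted radius has already been fitted (the function name and nearest-neighbor resampling are choices made here for brevity), and, as in Fig. 15, it leaves the uncalibrated corners outside the circle black.

```python
import numpy as np

def correct_radial_distortion(img, poly_u_to_d, r_max):
    """Undo radial distortion by inverse mapping.

    poly_u_to_d: polynomial coefficients (highest degree first, numpy.polyval
                 convention) mapping normalized undistorted radius to
                 normalized distorted radius.
    r_max:       radius in pixels of the calibrated circular region.
    Pixels outside the calibrated circle are left black.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0        # distortion center = image center
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cx, ys - cy
    r_u = np.hypot(dx, dy) / r_max               # normalized undistorted radius
    inside = r_u <= 1.0                          # only the calibrated circle
    r_d = np.polyval(poly_u_to_d, r_u)           # normalized distorted radius
    scale = np.where(r_u > 0, r_d / np.maximum(r_u, 1e-12), 1.0)
    src_x = np.clip(np.round(cx + dx * scale).astype(int), 0, w - 1)
    src_y = np.clip(np.round(cy + dy * scale).astype(int), 0, h - 1)
    out = np.zeros_like(img)
    out[inside] = img[src_y[inside], src_x[inside]]  # nearest-neighbor resampling
    return out
```

For each undistorted output pixel, the routine evaluates the forward model (undistorted radius to distorted radius) and samples the distorted image there; for a barrel distortion, where the distorted radius is smaller, this pulls pixels outward as expected.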
4.3. Number of Data Points Needed for Distortion Evaluation and the Formats of Polynomial Equations

The effect of the number of grids imaged from the image center to the edge (i.e., the number of points used to derive the polynomial equation of normalized versus normalized ) on distortion evaluation was studied. A grid target with a grid size of 3 mm was placed 6.4 cm away from the endoscope distal end, perpendicular to the optical axis of the endoscope. The distorted image of the target was used to analyze the effect of the number of data points on the distortion curves/equations. From the image center to the right edge, 34 radial distances were obtained from 34 cross-points. From these 34 data points, 18, 12, 8, 6, and 5 points were selected, respectively, with roughly even distribution, to obtain the normalized versus normalized curves shown in Fig. 16. In the figure, all the curves overlap with the curve based on all 34 data points. This means that, provided the equation format is correct and the cross-points on the target image can be read clearly, the minimum number of data points for distortion evaluation can be as small as the number of unknown constant parameters in the distortion equation [e.g., the five parameters of in Eq. (5) with set to zero]. For all the normalized and non-normalized fittings, a polynomial fitting equation of degree 5 is accurate enough for most severe barrel or pincushion distortions, with the -squared value . However, the actual degree of a fitting equation can be chosen flexibly based on the required -squared value. For example, for the endoscope we evaluated, the degree can be 2, 3, and 4 if the required -squared values are 0.9898, 0.9987, and 0.9998, respectively. A higher degree may be needed for more complex distortions. The equations can include all the terms from degree 0 up to the degree of the equation, or only some of the terms (e.g., the constant term is not necessarily zero, and the equation can contain only odd-degree terms22).
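The degree-versus-goodness-of-fit trade-off described above can be reproduced with a short script. The sketch below uses synthetic data (a sine-based, barrel-like curve standing in for the measured normalized radii; it is not the measured data) and `numpy.polyfit`; it also shows that six roughly evenly spaced points, the number of unknown coefficients of a degree-5 polynomial, already reproduce the full 34-point fit.

```python
import numpy as np

# Synthetic stand-in for the 34 measured points: normalized undistorted radii
# r_u and barrel-compressed distorted radii r_d (a sine curve loosely mimics
# the orthographic-type mapping discussed in Sec. 4.4).
r_u = np.linspace(0.0, 1.0, 34)
r_d = np.sin(1.0 * r_u) / np.sin(1.0)

def r_squared(y, y_fit):
    """Coefficient of determination of a fit."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# R-squared rises with polynomial degree; degree 5 is ample for this curve.
for degree in (2, 3, 4, 5):
    c = np.polyfit(r_u, r_d, degree)
    print(f"degree {degree}: R^2 = {r_squared(r_d, np.polyval(c, r_u)):.6f}")

# Six roughly evenly spaced points suffice to determine a degree-5 fit.
c_6pt = np.polyfit(r_u[::6], r_d[::6], 5)
print("max deviation vs. all 34 points:",
      np.max(np.abs(np.polyval(c_6pt, r_u) - r_d)))
```

The same loop applied to real cross-point readings would reproduce the degree choices quoted in the text for a given required R-squared.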
4.4. Projection Methods of Endoscopes

Distortion is the consequence of the projection method used in an optical design. Theoretically, the distortion pattern can be derived from the projection method, which, however, is usually unknown to consumers. On the other hand, if the distortion pattern is known, the projection method can be derived inversely. Most consumer cameras have rectilinear lenses based on the perspective projection (also called gnomonic projection), which renders a straight line in the object space as a straight line in the image. The perspective projection obeys the mapping function r = f·tan(θ), where θ is the angle between the optical axis and the line from an object point to the entrance pupil center, r is the distance from the image of the object point to the image center, and f is the focal length of the optical system.15 For a 2-D object that is perpendicular to the optical axis of the camera, perspective projection can produce an image that faithfully reflects the geometry of the object. However, it is difficult to make a rectilinear lens with more than 100 deg of FOV. Therefore, other projection methods (Fig. 17), such as stereographic [r = 2f·tan(θ/2)], equidistant (r = f·θ), equisolid angle [r = 2f·sin(θ/2)], and orthographic/orthogonal [r = f·sin(θ)] projections, are used to design lenses with a wide FOV, such as fisheye lenses and the lenses in endoscopes.23 The projection method of an endoscope can be derived from its distortion evaluation results. The 34 sets of data in Sec. 4.3 were used as an example for deriving the projection method of an endoscope. These data were obtained from 33 cross-points plus the image center in the distorted image of a grid target with a 3-mm grid size. The distance (H) from each cross-point to the target center can be calculated by multiplying the grid size of 3 mm by the grid number. The distance (D) from the target to the distal end of the endoscope is 6.39 cm. Then the angle θ was calculated for each cross-point with the equation θ = arctan(H/D).
Strictly speaking, D should be the distance from the target to the endoscope entrance pupil, which is not necessarily at the endoscope distal end. However, if the distance from the distal end to the entrance pupil is much smaller than the distance from the target to the distal end, the distal-end location can be used to approximate the entrance pupil location. The distance from the entrance pupil to the distal end should be considered when the distance from the target to the distal end is short, or for special designs where the distal end is not a lens, such as a capsule endoscope. We thus obtained 34 θ values, including the 0 deg at the target center, with the maximum angle being 0.9976 rad (57 deg) for the 33rd cross-point from the target center. We also had 34 normalized r values [the same as the normalized values in Fig. 17(b)] corresponding to these angles. We used normalized r because we did not know the image sensor size needed to calculate the actual r values. We thus obtained a curve of normalized r versus θ for the endoscope. To determine the projection method of the endoscope, we normalized the values in Fig. 17(a) for θ from 0 to 0.9976 rad and compared these curves with the curve from the endoscope data, as shown in Fig. 17(b). From Fig. 17(b), the endoscope adopted the orthographic/orthogonal projection in its design, since the measured curve almost overlaps with the curve of r = f·sin(θ). This projection can achieve a bigger FOV with a given image sensor size than the other curves in Fig. 17. We can also obtain other optical parameters of the endoscope in the process of analyzing its projection method. The endoscope FOV in the horizontal direction is twice the maximum angle, i.e., 1.98 rad (114 deg). If we knew the size of the image sensor, we could also calculate the actual values of r and, in turn, the focal length using the equation f = r/sin(θ).
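The curve-matching step can be sketched as follows: compare the measured normalized r-versus-θ curve against each candidate mapping function, normalized the same way, and keep the best match by RMS error. The measured curve is replaced here by synthetic data generated from an orthographic lens (an assumption for illustration, mirroring the finding above); `identify_projection` is a helper name introduced for this sketch.

```python
import numpy as np

# Candidate mapping functions r(theta) from Sec. 4.4, with f = 1.
PROJECTIONS = {
    "perspective":   lambda t: np.tan(t),
    "stereographic": lambda t: 2.0 * np.tan(t / 2.0),
    "equidistant":   lambda t: t,
    "equisolid":     lambda t: 2.0 * np.sin(t / 2.0),
    "orthographic":  lambda t: np.sin(t),
}

def identify_projection(theta, r_norm):
    """Return the candidate whose normalized curve best matches the data."""
    best, best_err = None, np.inf
    for name, fn in PROJECTIONS.items():
        model = fn(theta)
        model = model / model[-1]        # normalize so r = 1 at the max angle
        err = np.sqrt(np.mean((model - r_norm) ** 2))
        if err < best_err:
            best, best_err = name, err
    return best, best_err

# Hypothetical measured curve: 34 angles up to 0.9976 rad, radii normalized to
# the largest reading; generated here from an orthographic lens as a stand-in.
theta = np.linspace(0.0, 0.9976, 34)
r_measured = np.sin(theta) / np.sin(0.9976)
name, err = identify_projection(theta, r_measured)
print(name, err)                          # best-matching projection, RMS error
fov = 2.0 * theta[-1]                     # horizontal FOV = twice the max angle
```

With real cross-point readings in place of the synthetic `r_measured`, the same comparison recovers the projection method, and `fov` gives the horizontal field of view in radians.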
5. Conclusions

In this paper, we reviewed specific test methods for radial distortion evaluation and developed an objective and quantitative test method—the local magnification method—based on well-defined experimental and data processing steps to evaluate the radial distortion over the whole FOV of an endoscopic imaging system. To the best of our knowledge, this is the first time the local magnification method has been introduced to evaluate endoscope distortion. Our results showed that this method can describe the radial distortion of a traditional endoscope with a high degree of precision. Additionally, the image correction results showed that the local magnification method was accurate in correcting distorted images. The local magnification method overcomes the error of estimating an ideal image inherent in the traditional distortion evaluation method and also has advantages over other distortion evaluation methods. Commonly used distortion evaluation methods such as the picture height distortion and radial distortion methods are integral methods, because they evaluate distortion according to the distance between two points separated by a relatively large distance. The local magnification method, on the other hand, is a differential method that gives distortion results at any given local point. The local magnification has a clear physical meaning: for an infinitesimally small object placed at a local point in the object space, the ratio of its length in the image (on a sensor or any display format) to its actual length is its local magnification. Therefore, the size information at each local point can be easily interpreted without having to consider the information at other points. This feature can directly help a physician estimate the size of a lesion during diagnosis. The local magnification method is also inclusive, in the sense that it can be used to derive other distortion parameters.
Based on the local magnification data, the picture height distortion and radial distortion data can be derived. A well-designed setup and procedure are essential for accurate distortion measurement. The key points are: (1) the test target should be planar; (2) the optical axis of the endoscope should be perpendicular to the test target and aligned with the target center; and (3) the measuring distance should be chosen within the depth of field so as to obtain sufficient data to derive the fitting equations while avoiding the large reading error caused by high magnification at a close distance and edge-blurred grids at a large distance. The endoscope's own light source was used in our study. To obtain better illumination, in terms of uniformity and intensity, and thus reduce the reading error from distorted images, external light sources are recommended. Also, the endoscope used in this study has a prime lens (i.e., a fixed-focal-length lens). For medical devices with a zoom lens, the distortion should be determined as a function of the focal length. Our results showed that a polynomial equation of degree 5 can describe well the radial distortion curve of a traditional endoscope with severe barrel distortion. The image correction results showed that our local magnification method was accurate for distortion evaluation. The method can be applied to evaluate medical devices with different distortion patterns (barrel, pincushion, mustache, and so on). While the equation format for other distortion patterns might be different, the derivation method would be the same. In sum, the local magnification method is a quantitative and objective distortion evaluation method for endoscopes. It has significant benefits over the existing standards: it is mathematically easy to understand and experimentally simple. It also has a clear physical meaning that could potentially help a physician interpret the size of a lesion from a distorted image.
Therefore, it is a good candidate for an international endoscope standard, with the potential to facilitate the product development and regulatory assessment processes in a least burdensome approach by reducing the burden on both endoscope manufacturers and the regulatory agency. As a result, high-quality endoscopic systems can be brought to market swiftly. The method can also be used to facilitate the rapid identification and understanding of the causes of poor endoscope performance, and to benefit quality control during manufacturing as well as quality assurance during clinical use. While this study was based on endoscope imaging, the developed methods can be extended to any circularly symmetric imaging device. Software based on this paper will soon be developed and will be available to the public upon request.

References

E. Hecht, “More on geometrical optics,” in Optics, p. 253, Pearson Education Inc., San Francisco (2002).
J. Y. Weng et al., “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992). http://dx.doi.org/10.1109/34.159901
S. S. Beauchemin and R. Bajcsy, “Modelling and removing radial and tangential distortions in spherical lenses,” in Multi-Image Analysis, pp. 1–21, Springer, Berlin Heidelberg (2001).
E. Kobayashi et al., “A wide-angle view endoscope system using wedge prisms,” in Third Int. Conf. on Medical Image Computing and Computer-Assisted Intervention, pp. 661–668 (2000).
M. Liedlgruber et al., “Statistical analysis of the impact of distortion (correction) on an automated classification of celiac disease,” in Int. Conf. on Digital Signal Processing, pp. 1–6 (2011).
A. Sonnenberg et al., “How reliable is determination of ulcer size by endoscopy?,” Br. Med. J. 2(6201), 1322–1324 (1979). http://dx.doi.org/10.1136/bmj.2.6201.1322
S. H. Park et al., “Polyp measurement reliability, accuracy, and discrepancy: optical colonoscopy versus CT colonography with pig colonic specimens,” Radiology 244(1), 157–164 (2007). http://dx.doi.org/10.1148/radiol.2441060794
“ISO 9039: Optics and photonics—Quality evaluation of optical systems—Determination of distortion,” Geneva, Switzerland (2008).
C. Zhang et al., “Nonlinear distortion correction in endoscopic video images,” in Proc. of 2000 Int. Conf. on Image Processing, pp. 439–442 (2000).
Measurement and Analysis of the Performance of Film and Television Camera Lenses, European Broadcasting Union, Geneva, Switzerland (1995).
SMIA 1.0 Part 5: Camera Characterization Specification (2004). http://read.pudn.com/downloads95/doc/project/382834/SMIA/SMIA_Characterisation_Specification_1.0.pdf (accessed 2016).
DxO Labs, “DxOMark measurements for lenses and camera sensors,” http://www.dxomark.com/About/In-depth-measurements/Measurements/Distortion (accessed April 2016).
Image Engineering, “What is lens geometric distortion?” (2011). http://www.image-engineering.de/library/technotes/752-what-is-lens-geometric-distortion (accessed July 2016).
B. Hönlinger and H. H. Nasse, “Distortion,” Zeiss (2009). https://www.alpa.ch/_files/Zeiss%20About%20Lens%20Distortion%20cln33.pdf (accessed April 2016).
E. P. Efstathopoulos et al., “A protocol-based evaluation of medical image digitizers,” Br. J. Radiol. 74(885), 841–846 (2001). http://dx.doi.org/10.1259/bjr.74.885.740841
R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Robot. Autom. 3(4), 323–344 (1987). http://dx.doi.org/10.1109/JRA.1987.1087109
Z. Zhang, “Flexible camera calibration by viewing a plane from unknown orientations,” in Proc. of the Seventh IEEE Int. Conf. on Computer Vision, pp. 666–673 (1999). http://dx.doi.org/10.1109/ICCV.1999.791289
IEC 1262-4: Medical Electrical Equipment—Characteristics of Electro-Optical X-Ray Image Intensifiers—Part 4: Determination of the Image Distortion, The International Electrotechnical Commission, Geneva, Switzerland (1994).
J. P. Barreto et al., “Non parametric distortion correction in endoscopic medical images,” in 3DTV Conf., pp. 1–4 (2007).
H. Haneishi et al., “A new method for distortion correction of electronic endoscope images,” IEEE Trans. Med. Imaging 14(3), 548–555 (1995). http://dx.doi.org/10.1109/42.414620
D. C. Brown, “Close-range camera calibration,” Photogramm. Eng. 37, 855–866 (1971).
J. Kannala and S. S. Brandt, “A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses,” IEEE Trans. Pattern Anal. Mach. Intell. 28(8), 1335–1340 (2006). http://dx.doi.org/10.1109/TPAMI.2006.153
Biography

Quanzeng Wang received his PhD in chemical and biomolecular engineering from the University of Maryland at College Park in 2009. He is a scientist and engineer in the Center for Devices and Radiological Health of the U.S. Food and Drug Administration. His research interests include optical spectroscopy and imaging, tissue optics, fiber optics, optical diagnostics, computational biophotonics, image quality, and thermography.

Wei-Chung Cheng received his PhD in electrical engineering from the University of Southern California in 2003. He was an assistant professor in the Department of Photonics, National Chiao-Tung University, Taiwan, before joining the U.S. Food and Drug Administration. He is a color scientist in the Center for Devices and Radiological Health. His current research interests include color science, applied vision, and medical imaging systems.

Nitin Suresh is an MS student in the Department of Electrical and Computer Engineering at the University of Maryland, College Park. His research interests include signal and image processing, pattern recognition, and machine learning.

Hong Hua is a professor with the College of Optical Sciences (OSC), University of Arizona. She has over 20 years of experience in researching and developing advanced display and imaging technologies. As the principal investigator of the 3-D Visualization and Imaging Systems Laboratory (3DVIS Lab), her current research interests include various head-worn displays and 3-D displays, endoscopy, microscopy, optical engineering, biomedical imaging, and virtual and augmented reality technologies. She is a fellow of SPIE.