1. Introduction

Optical tomography1–8 is a technique for reconstructing the inside of an object by illuminating it with a light probe and observing the light penetrating through the object. In contrast to x-ray computed tomography, safer tomographic methods are in demand, and scattering optical tomography methods have recently been attracting computer vision researchers' attention.9–15 We investigate a recently proposed optical tomography with discretized path integral.12,13,15 This method takes advantage of the path integral formulation and casts the inverse problem of optical tomography as a constrained nonlinear least-squares optimization problem. It therefore benefits from a variety of optimization techniques, which is not the case for voting11,14 or genetic algorithms.10 Previous work solved the constrained optimization problem by using the log-barrier (LB) interior point method16 with inner loops of Newton's method12 or a quasi-Newton method.15 It has been shown that optical tomography with discretized path integral produces better estimation results than a standard diffuse optical tomography (DOT);15 however, its high computation cost is an obstacle to further development.

In this paper, we make two contributions to tackle this problem. First, we introduce the primal-dual (PD) approach to solve the constrained optimization, because the PD interior point method is known to be more efficient than the LB method.16 Second, we propose new formulations of the equations. The main bottleneck of the previous approaches12,13,15 is the computation of the Jacobian and Hessian, which becomes increasingly demanding as the number of paths grows. Our new formulations are equivalent to the previous ones but much more efficient. Numerical simulations show that the proposed approach accelerates the estimation by a factor of 100. (Conference versions of this paper were presented previously.17,18 This paper extends those versions with an extended description of the PD approach and the efficient formulations, and with additional experiments using optimized codes and comparisons.)

There exist a number of acceleration methods for tomographic reconstruction, such as dimension reduction,19 power acceleration,20 and, most importantly, graphics processing units (GPUs).21 GPUs have been used for accelerating a forward problem22 and forward and backward projections.23–25 In addition, thanks to the recent progress of general-purpose computing on GPUs, GPUs have become popular for speeding up iterative computations for solving compressive sensing formulations26 and large linear systems.27,28 We have not implemented the proposed approach on GPUs in this paper; however, the use of GPUs would be beneficial for the proposed formulations because computing a Jacobian matrix can be further accelerated.29

In the following, we briefly summarize the constrained problem to be solved in Sec. 2. Then in Sec. 3, we develop the PD method to solve the problem by taking the structure of the problem into account. In Sec. 4, we derive new efficient formulations to compute the Jacobian and Hessian, with comparisons to and discussions of the previous formulations. We show simulation results in Sec. 5 to demonstrate the improvement of the proposed method in terms of computation cost.

2. Preliminary

In this section, we briefly review the formulation of optical tomography with a discretized path integral. Details can be found in Ref. 15. We are interested in estimating the extinction coefficients of each voxel in a two-dimensional (2-D) medium as shown in Fig. 1.
Extinction coefficients represent how much light attenuates at each point. We follow the 2-D layer model,15 that is, we assume layered scattering with the following properties. Suppose that the 2-D medium has a layered structure and is discretized into voxels on an $n \times n$ grid; it has $n$ layers of voxels. Therefore, the problem is to estimate the $N = n^2$ extinction coefficients of the voxels in the grid. With this layered medium, we use an observation model of the light transport between a light source and a detector: emitting light to each of the voxels at the top layer and capturing light from each voxel at the bottom layer (see Fig. 2). More specifically, the light source point is located on the boundary of the top surface of the voxels in the top layer. The detector point is located on the boundary of the bottom surface of the voxels in the bottom layer. Then, forward scattering happens layer by layer; light is scattered at the center of a voxel in a layer and travels to the center of a voxel in the next (lower) layer. By connecting the centers of voxels of successive layers, we obtain a path of light scattering connecting the light source and the detector.

Let $s$ and $d$ be the voxel indices of the light source and detector locations, respectively. By restricting the light paths only to those connecting $s$ and $d$, the observed light at the detector is the sum of the contributions of all paths connecting $s$ and $d$. This is written as follows:

$I(s,d) = I_0 \sum_{p=1}^{M_{sd}} a_p \exp(-\boldsymbol{\ell}_p^\top \mathbf{x}), \quad (1)$

where $\mathbf{x} \in \mathbb{R}^N$ is the vector to be estimated, and each element is the extinction coefficient of a voxel, as shown in Fig. 2. Vector $\boldsymbol{\ell}_p$ represents a complete path connecting $s$ and $d$, and each element is the length of the part of the path segment passing through the corresponding voxel. Factor $a_p$ encodes the scattering coefficients and the phase function, $I_0$ is the intensity of the light source, and $M_{sd}$ is the number of paths connecting $s$ and $d$. Parameters $a_p$, $\boldsymbol{\ell}_p$, $I_0$, and $M_{sd}$ are given and fixed. The problem is to estimate $\mathbf{x}$ based on observations $\tilde{I}(s,d)$.

By changing the positions of the light source $s$ and the detector $d$, we obtain a set of observations $\{\tilde{I}(s,d)\}$, resulting in the following nonlinear least-squares problem under box constraints for the extinction coefficients to be positive:

$\min_{\mathbf{x}} \; f(\mathbf{x}) = \frac{1}{2} \sum_{s,d} \left( I(s,d) - \tilde{I}(s,d) \right)^2 \quad \text{s.t.} \quad x_\ell \le x_i \le x_u, \quad (2)$

where $f$ is the cost function, and $x_\ell$ and $x_u$ are lower and upper bounds, respectively. The box constraints are due to the nature of the extinction coefficient being positive (i.e., $x_\ell > 0$) and to the numerical stability of excluding unrealistically large values.
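As a quick illustration of the observation model, the following MATLAB sketch evaluates Eq. (1) for a single source-detector pair. All sizes and inputs below are placeholder assumptions of ours, not values or code from the paper.

```matlab
% Forward model of Eq. (1) for one source-detector pair (s, d).
% Minimal sketch; sizes and random inputs are placeholders.
n    = 24;                      % grid is n x n (Sec. 5 uses 24 x 24)
N    = n * n;                   % number of voxels
M_sd = 500;                     % number of paths connecting s and d
I0   = 1.0;                     % light source intensity
x    = 1.2 * ones(N, 1);        % extinction coefficients (to be estimated)
a_sd = rand(M_sd, 1);           % per-path factors a_p (given and fixed)
L_sd = sprand(M_sd, N, n / N);  % row p holds the path lengths l_p (given)

% Eq. (1): I(s,d) = I0 * sum_p a_p * exp(-l_p' * x)
I_sd = I0 * (a_sd' * exp(-L_sd * x));
```

The cost of Eq. (2) then sums $\frac{1}{2}(I(s,d) - \tilde{I}(s,d))^2$ over all source-detector pairs and is minimized under the box constraints.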
3. Primal-Dual Interior Point Method of the Inverse Problem

Here, we develop a PD interior point method to solve the inverse problem [Eq. (2)]. It is an inequality-constrained optimization with box constraints, to which a standard PD method16 can be applied straightforwardly. However, we can exploit the structure of the box constraints; hence, we derive an efficient algorithm by using the problem structure.

3.1. Primal-Dual Method

We first rewrite the inequality-constrained problem as an equivalent problem with equality constraints by introducing slack variables $\mathbf{z}$ as follows:

$\min_{\mathbf{x},\mathbf{z}} \; f(\mathbf{x}) \quad \text{s.t.} \quad \mathbf{g}(\mathbf{x}) - \mathbf{z} = \mathbf{0}, \;\; \mathbf{z} \ge \mathbf{0}, \quad (3)$

where $\mathbf{g}$ is a vector of the box constraints

$\mathbf{g}(\mathbf{x}) = \begin{pmatrix} \mathbf{x} - x_\ell \mathbf{1} \\ x_u \mathbf{1} - \mathbf{x} \end{pmatrix} \ge \mathbf{0}. \quad (4)$

Here, $g_i$ is the $i$'th constraint, and $\mathbf{1}$ is a vector of ones.

The Lagrangian of the above problem is

$L(\mathbf{x}, \mathbf{z}, \boldsymbol{\lambda}) = f(\mathbf{x}) - \boldsymbol{\lambda}^\top (\mathbf{g}(\mathbf{x}) - \mathbf{z}), \quad (5)$

where $\boldsymbol{\lambda}$ is a vector of Lagrange multipliers or dual variables.

The KKT conditions of Eq. (3) with duality gap $\mu$ are written as

$\nabla f(\mathbf{x}) - A^\top \boldsymbol{\lambda} = \mathbf{0}, \quad Z\boldsymbol{\lambda} - \mu\mathbf{1} = \mathbf{0}, \quad \mathbf{g}(\mathbf{x}) - \mathbf{z} = \mathbf{0}, \quad \mathbf{z}, \boldsymbol{\lambda} \ge \mathbf{0}, \quad (6)$

where $Z = \mathrm{diag}(\mathbf{z})$, $\Lambda = \mathrm{diag}(\boldsymbol{\lambda})$, and

$A = \nabla \mathbf{g}(\mathbf{x}) = \begin{pmatrix} I \\ -I \end{pmatrix}. \quad (7)$

Here, $I$ is an $N \times N$ identity matrix.

To solve the system of the KKT conditions by using Newton's method, we have the following system of equations:

$\begin{pmatrix} H & 0 & -A^\top \\ 0 & \Lambda & Z \\ A & -I & 0 \end{pmatrix} \begin{pmatrix} \Delta\mathbf{x} \\ \Delta\mathbf{z} \\ \Delta\boldsymbol{\lambda} \end{pmatrix} = -\begin{pmatrix} \nabla f(\mathbf{x}) - A^\top\boldsymbol{\lambda} \\ Z\boldsymbol{\lambda} - \mu\mathbf{1} \\ \mathbf{g}(\mathbf{x}) - \mathbf{z} \end{pmatrix}, \quad (8)$

where $H$ is the Hessian of the Lagrangian (here, $H = \nabla^2 f$ because the constraints are linear).

3.2. Solving the System Efficiently

The matrix in Eq. (8) is of size $5N \times 5N$, which is sparse but large and computationally expensive to solve. We, therefore, develop an efficient way to solve the system by using the problem structure. First, the system is explicitly written as follows:

$H\Delta\mathbf{x} - A^\top\Delta\boldsymbol{\lambda} = -(\nabla f(\mathbf{x}) - A^\top\boldsymbol{\lambda}),$
$\Lambda\Delta\mathbf{z} + Z\Delta\boldsymbol{\lambda} = -(Z\boldsymbol{\lambda} - \mu\mathbf{1}),$
$A\Delta\mathbf{x} - \Delta\mathbf{z} = -(\mathbf{g}(\mathbf{x}) - \mathbf{z}). \quad (9)$

Substituting the last equation into the second one yields

$\Delta\boldsymbol{\lambda} = Z^{-1}(\mu\mathbf{1} - Z\boldsymbol{\lambda}) - Z^{-1}\Lambda\left(A\Delta\mathbf{x} + \mathbf{g}(\mathbf{x}) - \mathbf{z}\right), \quad (10)$

and substituting this into the first equation gives

$\left(H + A^\top Z^{-1}\Lambda A\right)\Delta\mathbf{x} = -\nabla f(\mathbf{x}) + A^\top\left(\mu Z^{-1}\mathbf{1} - Z^{-1}\Lambda(\mathbf{g}(\mathbf{x}) - \mathbf{z})\right). \quad (11)$

Here, we define $\mathbf{d} = \boldsymbol{\lambda} \circ (\mathbf{1}/\mathbf{z})$, where $\circ$ is the Hadamard (element-wise) product, and $\mathbf{1}/\mathbf{z}$ is the vector of element-wise reciprocals of $\mathbf{z}$. Then, we have

$\left(H + A^\top \mathrm{diag}(\mathbf{d}) A\right)\Delta\mathbf{x} = -\nabla f(\mathbf{x}) + A^\top\left(\mu(\mathbf{1}/\mathbf{z}) - \mathbf{d}\circ(\mathbf{g}(\mathbf{x}) - \mathbf{z})\right). \quad (12)$

By exploiting the structure of matrix $A$ and defining $\mathbf{d} = (\mathbf{d}_\ell^\top, \mathbf{d}_u^\top)^\top$ to split a vector into two parts corresponding to the lower and upper bound constraints, we have

$A^\top \mathrm{diag}(\mathbf{d}) A = \mathrm{diag}(\mathbf{d}_\ell + \mathbf{d}_u). \quad (13)$

Similarly, we define $\mathbf{r} = \mu(\mathbf{1}/\mathbf{z}) - \mathbf{d}\circ(\mathbf{g}(\mathbf{x}) - \mathbf{z})$ to simplify the right-hand side, split it as $\mathbf{r} = (\mathbf{r}_\ell^\top, \mathbf{r}_u^\top)^\top$, and then

$A^\top\mathbf{r} = \mathbf{r}_\ell - \mathbf{r}_u. \quad (14)$

Now, the solution is given by

$\Delta\mathbf{x} = \left(H + \mathrm{diag}(\mathbf{d}_\ell + \mathbf{d}_u)\right)^{-1}\left(-\nabla f(\mathbf{x}) + \mathbf{r}_\ell - \mathbf{r}_u\right), \quad (15)$

which involves an inversion of size $N \times N$ only.

3.3. Update Variables

Once $\Delta\mathbf{x}$, $\Delta\mathbf{z}$, and $\Delta\boldsymbol{\lambda}$ are obtained, we then estimate the step lengths to update the parameters.16 The maxima of the step lengths are given by the following rule:

$\alpha_s^{\max} = \max\{\alpha \in (0,1] : \mathbf{z} + \alpha\Delta\mathbf{z} \ge (1-\tau)\mathbf{z}\}, \quad \alpha_\lambda^{\max} = \max\{\alpha \in (0,1] : \boldsymbol{\lambda} + \alpha\Delta\boldsymbol{\lambda} \ge (1-\tau)\boldsymbol{\lambda}\}, \quad (16)$

with $\tau \in (0,1)$ (for example, $\tau = 0.995$). This prevents the variables $\mathbf{z}$ and $\boldsymbol{\lambda}$ from prematurely approaching the lower boundary of zero.

Next, we perform a backtracking line search30 to estimate acceptable step lengths $\alpha_s$ and $\alpha_\lambda$. To this end, we use the following exact merit function with $\nu > 0$:

$\phi_\nu(\mathbf{x}, \mathbf{z}) = f(\mathbf{x}) + \nu\left\|\mathbf{g}(\mathbf{x}) - \mathbf{z}\right\|_1, \quad (17)$

and impose a sufficient decrease requirement

$\phi_\nu(\mathbf{x} + \alpha_s\Delta\mathbf{x}, \mathbf{z} + \alpha_s\Delta\mathbf{z}) \le \phi_\nu(\mathbf{x}, \mathbf{z}) + \eta\,\alpha_s\,D\phi_\nu(\Delta\mathbf{x}, \Delta\mathbf{z}), \quad (18)$

where $D\phi_\nu(\Delta\mathbf{x}, \Delta\mathbf{z})$ denotes the directional derivative of $\phi_\nu$ in the direction $(\Delta\mathbf{x}, \Delta\mathbf{z})$, and $\eta \in (0,1)$. The step lengths $\alpha_s$ and $\alpha_\lambda$ are found in the ranges $(0, \alpha_s^{\max}]$ and $(0, \alpha_\lambda^{\max}]$ so that Eq. (18) is satisfied.

Then, the parameters $\mathbf{x}$, $\mathbf{z}$, and $\boldsymbol{\lambda}$ are updated as

$\mathbf{x} \leftarrow \mathbf{x} + \alpha_s\Delta\mathbf{x}, \quad \mathbf{z} \leftarrow \mathbf{z} + \alpha_s\Delta\mathbf{z}, \quad \boldsymbol{\lambda} \leftarrow \boldsymbol{\lambda} + \alpha_\lambda\Delta\boldsymbol{\lambda}. \quad (19)$

Once the following error function is smaller than a given threshold, the PD interior point method stops:

$E(\mathbf{x}, \mathbf{z}, \boldsymbol{\lambda}; \mu) = \max\left\{\left\|\nabla f(\mathbf{x}) - A^\top\boldsymbol{\lambda}\right\|, \left\|Z\boldsymbol{\lambda} - \mu\mathbf{1}\right\|, \left\|\mathbf{g}(\mathbf{x}) - \mathbf{z}\right\|\right\}. \quad (20)$

Algorithm 1 summarizes the PD interior point method developed above. Note that the Hessian $H$ can be approximated by a quasi-Newton update at each iteration, instead of the full Hessian used by Newton's method. We will compare Newton's method and the quasi-Newton method in the experiments section.

Algorithm 1: Primal-dual interior point with line search.
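To make the reduced step concrete, here is a minimal MATLAB sketch of one PD Newton step following Eqs. (9)–(16). The function name and interface are ours, not the authors' implementation; the Hessian Hf and gradient gf of f at x are assumed to be precomputed, and z and lam must be strictly positive 2N-vectors.

```matlab
function [dx, dz, dlam, a_s, a_lam] = pd_step(Hf, gf, x, z, lam, xl, xu, mu, tau)
% One primal-dual Newton step for Eq. (2), following Secs. 3.2 and 3.3.
N = numel(x);
g = [x - xl; xu - x];             % box constraints g(x) >= 0, Eq. (4)
d = lam ./ z;                     % d = lambda o (1/z)
r = mu ./ z - d .* (g - z);       % r as defined before Eq. (14)
dl = d(1:N);   du = d(N+1:end);
rl = r(1:N);   ru = r(N+1:end);

% reduced N x N system of Eq. (15)
dx = (Hf + diag(dl + du)) \ (-gf + rl - ru);
Adx = [dx; -dx];                  % A * dx with A = [I; -I]
dlam = r - lam - d .* Adx;        % back-substitute into Eq. (10)
dz = Adx + (g - z);               % back-substitute into the last of Eq. (9)

% fraction-to-boundary rule of Eq. (16)
a_s = 1;   k = dz < 0;
if any(k), a_s = min(1, min(-tau * z(k) ./ dz(k))); end
a_lam = 1; k = dlam < 0;
if any(k), a_lam = min(1, min(-tau * lam(k) ./ dlam(k))); end
end
```

In Algorithm 1, the returned step lengths would then be shrunk by the backtracking line search of Eqs. (17) and (18) before the update of Eq. (19).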
4. Efficient Formulations

The most computationally intensive parts of the PD algorithm shown above are the computation of the Hessian for Newton's method and of the Jacobian for both Newton's and quasi-Newton methods. We propose here efficient formulations of the Hessian and Jacobian of the problem, whose computational cost is much smaller than that of the naive formulations used in the previous approaches. First, we show the naive and old formulations of the Hessian and Jacobian, then introduce our new formulations, followed by discussions of the computational cost.

4.1. Previous Old Formulations for the Inverse Problem

Here, we show how the previous approaches12,13,15 proceed. We call these the old formulations.

4.1.1. Jacobian: old formulation

Remember that the objective function to be minimized is

$f(\mathbf{x}) = \frac{1}{2}\sum_{s,d}\left(I_0\sum_{p=1}^{M_{sd}} a_p \exp(-\boldsymbol{\ell}_p^\top\mathbf{x}) - \tilde{I}(s,d)\right)^2. \quad (21)$

The gradient of $f$ is given as follows by taking the derivative of the objective function:

$\nabla f(\mathbf{x}) = -I_0\sum_{s,d}\left(I_0\sum_p a_p e^{-\boldsymbol{\ell}_p^\top\mathbf{x}} - \tilde{I}(s,d)\right)\sum_p a_p e^{-\boldsymbol{\ell}_p^\top\mathbf{x}}\boldsymbol{\ell}_p. \quad (22)$

To simplify the equation, the following notations are introduced:

$c_p = a_p \exp(-\boldsymbol{\ell}_p^\top\mathbf{x}), \quad (23)$

$\mathcal{L}_{sd} = \{\boldsymbol{\ell}_1, \ldots, \boldsymbol{\ell}_{M_{sd}}\}. \quad (24)$

Now, $f$, the gradient, and the Hessian are rewritten as follows:

$f(\mathbf{x}) = \frac{1}{2}\sum_{s,d}\left(I_0\sum_p c_p - \tilde{I}(s,d)\right)^2, \quad (25)$

$\nabla f(\mathbf{x}) = I_0\sum_{s,d}\left[\tilde{I}(s,d)\sum_{\boldsymbol{\ell}_p\in\mathcal{L}_{sd}} c_p\boldsymbol{\ell}_p - I_0\Big(\sum_p c_p\Big)\Big(\sum_{\boldsymbol{\ell}_p\in\mathcal{L}_{sd}} c_p\boldsymbol{\ell}_p\Big)\right], \quad (26)$

$\nabla^2 f(\mathbf{x}) = I_0\sum_{s,d}\left[I_0\Big(\sum_p c_p\boldsymbol{\ell}_p\Big)\otimes\Big(\sum_p c_p\boldsymbol{\ell}_p\Big) + \Big(I_0\sum_p c_p - \tilde{I}(s,d)\Big)\sum_p c_p\,\boldsymbol{\ell}_p\otimes\boldsymbol{\ell}_p\right], \quad (27)$

where $\sum_{\boldsymbol{\ell}_p\in\mathcal{L}_{sd}}$ stands for the sum over the elements of the container [Eq. (24)] of vectors, and $\otimes$ denotes the tensor product.

4.2. Proposed New Efficient Formulation

The problem with the previous old formulations of the Jacobian and Hessian shown above is that the computation cost increases as the number of paths increases. As discussed later, the computation cost is quadratic in the number of paths per element on average. The idea of the proposed formulation is to exploit the properties of the exponential function and its derivative in the problem. As shown below, the computation cost can be reduced to linear in the number of paths per element on average.

4.2.1. Jacobian: new formulation

First, we rewrite the cost function as follows:

$f(\mathbf{x}) = \frac{1}{2}\sum_{s,d} r_{sd}(\mathbf{x})^2, \quad (28)$

where $r_{sd}$ is a residual

$r_{sd}(\mathbf{x}) = I_0\sum_{p=1}^{M_{sd}} a_p\exp(-\boldsymbol{\ell}_p^\top\mathbf{x}) - \tilde{I}(s,d). \quad (29)$

Now we use the chain rule of differentiation, and we have

$\nabla f(\mathbf{x}) = \sum_{s,d} r_{sd}(\mathbf{x})\,\nabla r_{sd}(\mathbf{x}), \quad (30)$

where

$\nabla r_{sd}(\mathbf{x}) = -I_0\sum_p a_p e^{-\boldsymbol{\ell}_p^\top\mathbf{x}}\boldsymbol{\ell}_p. \quad (31)$

Here, we define

$L_{sd} = (\boldsymbol{\ell}_1, \ldots, \boldsymbol{\ell}_{M_{sd}})^\top \quad (32)$

[note that this is not the same as the container $\mathcal{L}_{sd}$ defined in the previous approaches above, which is a structure used in MATLAB codes to store the vectors $\boldsymbol{\ell}_p$; here, $L_{sd}$ is an $M_{sd} \times N$ matrix], which stacks $M_{sd}$ vectors of dimension $N$, and $\mathbf{v}_{sd}$ is a coefficient vector with elements $a_p e^{-\boldsymbol{\ell}_p^\top\mathbf{x}}$. Therefore,

$\nabla r_{sd}(\mathbf{x}) = -I_0 L_{sd}^\top\mathbf{v}_{sd}. \quad (33)$
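As a concrete illustration of Eqs. (28)–(33), the following MATLAB sketch accumulates the gradient over source-detector pairs. The data layout (a cell array paths with fields L and a, observations I_obs, and N, I0, and x in scope) is our own assumption, not the authors' code.

```matlab
% Gradient of f via the new formulation, Eqs. (30)-(33).
grad = zeros(N, 1);
for k = 1:numel(paths)
    L = paths{k}.L;                 % M_k x N path-length matrix L_sd
    v = paths{k}.a .* exp(-L * x);  % coefficient vector v_sd
    r = I0 * sum(v) - I_obs(k);     % residual r_sd, Eq. (29)
    dr = -I0 * (L' * v);            % grad of r_sd, Eq. (33)
    grad = grad + r * dr;           % chain rule, Eq. (30)
end
```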
4.2.2. Discussion

Suppose that the expectation of the number of paths is $\bar{M}$. Then, the proposed new formulation of the Jacobian above takes $O(\bar{M}N)$ operations for the coefficient vector $\mathbf{v}_{sd}$ and $O(\bar{M}N)$ operations for the product $L_{sd}^\top\mathbf{v}_{sd}$, for each $s$ and $d$. In total, it takes $O(\bar{M}N)$ operations to compute the $N$ elements of the Jacobian, or $O(\bar{M})$ operations per element. In contrast, for each $s$ and $d$, the previous old formulation Eq. (26) needs $O(\bar{M})$ operations per element for the first term and $O(\bar{M}^2)$ operations per element for the second term. In total, it takes $O(\bar{M}^2 N)$ operations to compute the $N$ elements of the Jacobian, or $O(\bar{M}^2)$ operations per element. The difference is mainly caused by the second term of Eq. (26). In summary, the proposed new formulation has a cost of $O(\bar{M})$ operations per element, whereas the previous old formulation has a cost of $O(\bar{M}^2)$ operations per element. Table 1 summarizes the discussion above.

Table 1: Comparison of the new and old formulations for computing the Jacobian.
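The source of the gap in Table 1 can be seen in a toy MATLAB experiment of our own (not the paper's code): evaluating a product of per-path sums as an explicit double sum over paths costs $O(\bar{M}^2)$, while the factored form costs $O(\bar{M})$.

```matlab
% Toy illustration of the per-element cost gap in Table 1.
M = 2000;
c = rand(M, 1);                 % per-path weights c_p
l = rand(M, 1);                 % one component of the path vectors

tic;
s_old = 0;
for p = 1:M
    for q = 1:M
        s_old = s_old + c(p) * c(q) * l(q);  % double sum: O(M^2)
    end
end
t_old = toc;

tic;
s_new = sum(c) * sum(c .* l);   % factored single sums: O(M)
t_new = toc;
fprintf('double sum: %.3f s, factored: %.6f s, |diff| = %.2e\n', ...
    t_old, t_new, abs(s_old - s_new));
```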
4.2.3. Hessian: new formulation

In the same manner, we can derive the Hessian as follows. From the Jacobian [Eq. (30)], we obtain the Hessian by using the chain rule of differentiation:

$\nabla^2 f(\mathbf{x}) = \sum_{s,d}\left(\nabla r_{sd}(\mathbf{x})\,\nabla r_{sd}(\mathbf{x})^\top + r_{sd}(\mathbf{x})\,\nabla^2 r_{sd}(\mathbf{x})\right), \quad (34)$

where

$\nabla^2 r_{sd}(\mathbf{x}) = I_0 L_{sd}^\top V_{sd} L_{sd}, \quad V_{sd} = \mathrm{diag}(\mathbf{v}_{sd}). \quad (35)$

Now, the Hessian can be written as follows:

$\nabla^2 f(\mathbf{x}) = \sum_{s,d}\left(I_0^2 (L_{sd}^\top\mathbf{v}_{sd})(L_{sd}^\top\mathbf{v}_{sd})^\top + r_{sd}(\mathbf{x})\, I_0 L_{sd}^\top V_{sd} L_{sd}\right). \quad (36)$

Note that $L_{sd}^\top V_{sd} L_{sd}$ should not be evaluated by forming $V_{sd}$ explicitly, because it involves a large $M_{sd} \times M_{sd}$ matrix, which is computationally intensive to compute; instead, the rows of $L_{sd}$ are scaled element-wise by $\mathbf{v}_{sd}$.
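A matching MATLAB sketch of Eqs. (34)–(36), under the same assumed data layout as the gradient sketch in Sec. 4.2.1; note that diag(v_sd) is applied by scaling the rows of L_sd rather than being formed explicitly.

```matlab
% Hessian of f via the new formulation, Eqs. (34)-(36).
H = zeros(N, N);
for k = 1:numel(paths)
    L = paths{k}.L;
    v = paths{k}.a .* exp(-L * x);
    r = I0 * sum(v) - I_obs(k);
    dr = -I0 * (L' * v);            % grad of r_sd, as in the gradient sketch
    d2r = I0 * (L' * (v .* L));     % L' * diag(v) * L by row scaling
    H = H + dr * dr' + r * d2r;     % Eq. (34)
end
```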
4.2.4. Discussion

By reusing $L_{sd}$ and $\mathbf{v}_{sd}$ computed for the Jacobian, the new formulation of the Hessian needs $O(\bar{M}N^2)$ operations for each $s$ and $d$. In total, it takes $O(\bar{M}N^2)$ operations to compute the $N^2$ elements of the Hessian, or $O(\bar{M})$ operations per element. Contrarily, the previous formulation [Eq. (27)] involves double sums over paths for each element; in total, it takes $O(\bar{M}^2)$ operations to compute a single element of the Hessian. In summary, the proposed new formulation has a cost of $O(\bar{M})$ operations per element, whereas the previous old formulation has a cost of $O(\bar{M}^2)$ operations per element. Table 2 summarizes the discussion above.

Table 2: Comparison of the new and old formulations for computing the Hessian.
5. Numerical Simulation

In this section, we report the results obtained by simulations using the proposed method, comparing the PD and LB interior point methods, as well as the old and new formulations of the Jacobian and Hessian. Since the mathematical model we used to describe the light transport in the forward problem is exactly the same as the model in the previous work,15 we use the same setup as follows. For the 2-D layered medium, the grid size was set to $24 \times 24$ with square voxels of size 1 mm, i.e., the medium is 24 mm × 24 mm. The values of the extinction coefficients are set between 1.05 and 1.55 (mm⁻¹), and the lower and upper bounds ($x_\ell$ and $x_u$) are set to 1.0 and 2.0 (mm⁻¹), respectively. The initial guess is 1.001 for all elements of $\mathbf{x}$, as well as of $\mathbf{z}$ and $\boldsymbol{\lambda}$. The parameters used in Algorithm 1 ($\mu$, $\tau$, $\eta$, and $\nu$) are fixed throughout the experiments.

5.1. Estimation Quality

The ground truth and the estimated results are shown in Fig. 3. The matrix plots in the top row of this figure represent five different media (a)–(e) used for the simulation, which were also used in the previous work.15 Note that medium (e) is the Shepp–Logan phantom.31 Each voxel is shaded in gray according to the value of its extinction coefficient. The following rows show the estimated results of different combinations of LB or PD methods, old or new formulations, and Newton's or quasi-Newton methods. The proposed method is PD-new-Newton/quasi-Newton, that is, the PD method with Newton's or the quasi-Newton method using the proposed new formulation. The row LB-old-quasi-Newton corresponds to the previous work15 that uses the LB method with the quasi-Newton method and the old formulation, and the row LB-old-Newton corresponds to another prior work.12

As we can see, the results of the different combinations look almost the same for each of the five media. This observation is also validated by the root-mean-square error (RMSE) shown in Table 3. The RMSE values of all combinations are more or less the same, while some variation appears due to the different update rules of Newton's and quasi-Newton methods and the different stopping conditions of the LB and PD methods.

Table 3: RMSEs and computation time for the numerical simulations for five different types of media (a)–(e) with a grid size of 24×24, for different combinations of LB or PD methods, old or new formulations, and Newton's or quasi-Newton methods. Each computation time shows the mean and standard deviation of 10 trials, except for the combinations of "old-Newton." Note that the RMSE values are exactly the same over the 10 trials. Results of DOT methods are shown for comparison.
5.2. Estimation Time

The main goal of this paper is to develop an efficient way to solve the inverse problem. Table 3 shows the computation cost of the different combinations. All experiments were performed on a Linux workstation (two Intel Xeon E5-2630 2.4 GHz CPUs, 16 physical cores in total, with 256 GB memory). We implemented the method in MATLAB R2017a and did not explicitly use the parallel computing toolbox of MATLAB, except for the Hessian computation of LB/PD-old-Newton due to its slow computation. Parallel matrix multiplications are, however, automatically performed by MATLAB. Table 3 shows the computation time of the different combinations in seconds. We report the average and standard deviation of 10 trials, except for the cases of LB/PD-old-Newton, which show the processing time of a single trial.

For any combination, our proposed new formulation is much faster than the old formulation. Uses of Newton's method greatly benefit from the efficient Hessian computation, and the computation time is reduced by more than a factor of 100. However, the new formulation does not help to reduce the computation cost of the quasi-Newton methods as much, and the reduction is a factor of just 2 or 3. This is due to the fact that the quasi-Newton method needs only gradient vectors, whose computation is of the order of $N$ (the number of voxels) and is not dominant in the total computation cost. In contrast, the Hessian computation in Newton's method is of size $N \times N$, which is quite large compared to the gradient. Our new formulation is, therefore, more beneficial when Newton's method is used.

With the quasi-Newton method, the PD approach seems to be comparable to the LB method. Comparing the rows LB/PD-new-quasi-Newton, LB is faster than PD for the denser media (c), (d), and (e). This might be caused by the different ways of approximation in the quasi-Newton method: for the LB method, the gradient is modified by the approximated Hessian, while for the PD method, the approximated Hessian is used in the matrix to be solved, resulting in an update rule for $\Delta\mathbf{x}$ regularized by the diagonal elements of $\mathrm{diag}(\mathbf{d}_\ell + \mathbf{d}_u)$ in Eq. (15).

Except for the simplest medium (a), the fastest combination is PD-new-Newton, which is proposed in this paper. This is due to the fast convergence of Newton's method compared to the quasi-Newton method, and also to the fact that the PD method needs fewer iterations than the LB method. As the qualities of the results are almost the same, as discussed above, PD-new-Newton is the best choice when the working memory is large enough to store the Hessian. Otherwise, LB/PD-new-quasi-Newton should be used.

5.3. Comparison

We compare our methods to a standard DOT implemented in the Electrical Impedance Tomography and Diffuse Optical Tomography Reconstruction Software (EIDORS)32,33 in the same setting as the previous work:15 a medium of size 24 mm × 24 mm with the five media (a)–(e). For solving DOT with EIDORS, we used triangular elements. For the boundary conditions, we placed 48 light sources and detectors at equal intervals around the medium. We used several different solvers and priors: the Gauss–Newton method34 with Laplace, NOSER,35 and Tikhonov priors, and the PD method with a total variation prior.

Due to the diffusion approximation of DOT, the results in Fig. 4 for DOT with the Gauss–Newton method are blurred, and those for DOT with PD have a tendency to overestimate the areas of high coefficient values. In contrast, the results of PD-new-Newton (and the other combinations in Fig. 3) are clearer and sharper for all combinations. This observation is also validated by the RMSE shown in Table 3.
The RMSE values of PD-new-Newton are smaller than those of DOT for all five media. The obvious drawback of PD-new-Newton is its high computation cost: it is slower by a factor of 10 compared to DOT with the PD method, and by a factor of 100 compared to DOT with the Gauss–Newton method. A large part of the computation cost comes from the computation of the Hessian and Jacobian, which depends on the number of paths. Further reduction of the computation cost is left for future work.

6. Conclusion

In this paper, we have proposed a PD approach to optical tomography with a discretized path integral, along with efficient formulations for computing the Hessian and Jacobian. Numerical simulation examples with 2-D layered media demonstrate that the proposed method, called PD-new-Newton in the experiments, performs faster than the previous work (LB-old-quasi-Newton), while the estimated extinction coefficients of both methods are comparable. Compared to DOT, the proposed method works more slowly but produces better estimation results in terms of RMSE.

Acknowledgments

This work was supported in part by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant No. JP26280061.

References
1. S. R. Arridge and M. Schweiger, "Image reconstruction in optical tomography," Phil. Trans. R. Soc. B 352(1354), 717–726 (1997). http://dx.doi.org/10.1098/rstb.1997.0054
2. S. R. Arridge and J. C. Hebden, "Optical imaging in medicine: II. Modelling and reconstruction," Phys. Med. Biol. 42(5), 841–853 (1997). http://dx.doi.org/10.1088/0031-9155/42/5/008
3. J. C. Hebden, S. R. Arridge, and D. T. Delpy, "Optical imaging in medicine: I. Experimental techniques," Phys. Med. Biol. 42(5), 825–840 (1997). http://dx.doi.org/10.1088/0031-9155/42/5/007
4. S. R. Arridge, "Optical tomography in medical imaging," Inverse Probl. 15, R41–R93 (1999). http://dx.doi.org/10.1088/0266-5611/15/2/022
5. S. R. Arridge and J. C. Schotland, "Optical tomography: forward and inverse problems," Inverse Probl. 25(12), 123010 (2009). http://dx.doi.org/10.1088/0266-5611/25/12/123010
6. G. Bal, "Inverse transport theory and applications," Inverse Probl. 25(5), 053001 (2009). http://dx.doi.org/10.1088/0266-5611/25/5/053001
7. K. Ren, "Recent developments in numerical techniques for transport-based medical imaging methods," Commun. Comput. Phys. 8, 1–50 (2010). http://dx.doi.org/10.4208/cicp.220509.200110a
8. A. Charette, J. Boulanger, and H. K. Kim, "An overview on recent radiation transport algorithm development for optical tomography imaging," J. Quant. Spectrosc. Radiat. Transfer 109(17–18), 2743–2766 (2008). http://dx.doi.org/10.1016/j.jqsrt.2008.06.007
9. Y. Mukaigawa, R. Raskar, and Y. Yagi, "Analysis of scattering light transport in translucent media," IPSJ Trans. Comput. Vision Appl. 3, 122–133 (2011). http://dx.doi.org/10.2197/ipsjtcva.3.122
10. Y. Dobashi et al., "An inverse problem approach for automatically adjusting the parameters for rendering clouds using photographs," ACM Trans. Graph. 31(6), 1 (2012). http://dx.doi.org/10.1145/2366145
11. Y. Ishii et al., "Scattering tomography by Monte Carlo voting," in IAPR Int. Conf. on Machine Vision Applications (2013).
12. T. Tamaki et al., "Multiple-scattering optical tomography with layered material," in 2013 Int. Conf. on Signal-Image Technology & Internet-Based Systems, 93–99 (2013). http://dx.doi.org/10.1109/SITIS.2013.26
13. B. Yuan et al., "Layered optical tomography of multiple scattering media with combined constraint optimization," in 2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV), 1–6 (2015). http://dx.doi.org/10.1109/FCV.2015.7103735
14. R. Akashi et al., "Scattering tomography using ellipsoidal mirror," in 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV 2015), 1–5 (2015).
15. B. Yuan et al., "Optical tomography with discretized path integral," J. Med. Imaging 2, 033501 (2015). http://dx.doi.org/10.1117/1.JMI.2.3.033501
16. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge (2004).
17. B. Yuan et al., "Optical tomography with discretized path integrals: a comparison with log-barrier and primal-dual methods," in Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV 2016), 378–382 (2016).
18. T. Tamaki et al., "Efficient formulations of optical tomography with discretized path integral," in 23rd International Workshop on Frontiers of Computer Vision (FCV 2017), 1–5 (2017).
19. G. Zhang et al., "Acceleration of dynamic fluorescence molecular tomography with principal component analysis," Biomed. Opt. Express 6, 2036 (2015). http://dx.doi.org/10.1364/BOE.6.002036
20. H. M. Huang and I. T. Hsiao, "Accelerating an ordered-subset low-dose X-ray cone beam computed tomography image reconstruction with a power factor and total variation minimization," PLoS One 11(4), e0153421 (2016). http://dx.doi.org/10.1371/journal.pone.0153421
21. G. Pratx and L. Xing, "GPU computing in medical physics: a review," Med. Phys. 38(5), 2685–2697 (2011). http://dx.doi.org/10.1118/1.3578605
22. J. Lobera et al., "High performance computing for a 3-D optical diffraction tomographic application in fluid velocimetry," Opt. Express 23(4), 4021 (2015). http://dx.doi.org/10.1364/OE.23.004021
23. S. Ha et al., "GPU-accelerated forward and back-projections with spatially varying kernels for 3D DIRECT TOF PET reconstruction," IEEE Trans. Nucl. Sci. 60(1), 166–173 (2013). http://dx.doi.org/10.1109/TNS.2012.2233754
24. F. Xu and K. Mueller, "Accelerating popular tomographic reconstruction algorithms on commodity PC graphics hardware," IEEE Trans. Nucl. Sci. 52(3), 654–663 (2005). http://dx.doi.org/10.1109/TNS.2005.851398
25. V. G. Nguyen and S. J. Lee, "GPU-accelerated iterative reconstruction from Compton scattered data using a matched pair of conic projector and backprojector," Comput. Methods Programs Biomed. 131, 27–36 (2016). http://dx.doi.org/10.1016/j.cmpb.2016.04.012
26. R. Liu, Y. Luo, and H. Yu, "GPU-based acceleration for interior tomography," IEEE Access 2, 841–855 (2014). http://dx.doi.org/10.1109/ACCESS.2014.2349000
27. M. Schweiger, "GPU-accelerated finite element method for modelling light transport in diffuse optical tomography," Int. J. Biomed. Imaging 2011, 1–11 (2011). http://dx.doi.org/10.1155/2011/403892
28. S. Q. Zheng et al., "A distributed multi-GPU system for high speed electron microscopic tomographic reconstruction," Ultramicroscopy 111(8), 1137–1143 (2011). http://dx.doi.org/10.1016/j.ultramic.2011.03.015
29. A. Borsic, E. A. Attardo, and R. J. Halter, "Multi-GPU Jacobian accelerated computing for soft-field tomography," Physiol. Meas. 33, 1703–1715 (2012). http://dx.doi.org/10.1088/0967-3334/33/10/1703
30. J. Nocedal and S. J. Wright, Numerical Optimization, 2nd ed., Springer, New York (2006).
31. L. A. Shepp and B. F. Logan, "The Fourier reconstruction of a head section," IEEE Trans. Nucl. Sci. 21, 21–43 (1974). http://dx.doi.org/10.1109/TNS.1974.6499235
32. A. Adler and W. R. Lionheart, "EIDORS: towards a community-based extensible software base for EIT," in 6th Conf. on Biomedical Applications of Electrical Impedance Tomography, 1–4 (2005).
33. A. Adler and W. R. Lionheart, "Uses and abuses of EIDORS: an extensible software base for EIT," Physiol. Meas. 27(5), S25 (2006). http://dx.doi.org/10.1088/0967-3334/27/5/S03
34. A. Adler and R. Guardo, "Electrical impedance tomography: regularized imaging and contrast detection," IEEE Trans. Med. Imaging 15, 170–179 (1996). http://dx.doi.org/10.1109/42.491418
35. M. Cheney et al., "NOSER: an algorithm for solving the inverse conductivity problem," Int. J. Imaging Syst. Technol. 2(2), 66–75 (1990). http://dx.doi.org/10.1002/(ISSN)1098-1098
Biography

Bingzhi Yuan received his BE degree in software engineering from Beijing University of Posts and Telecommunications, China, and his ME degree in engineering from Hiroshima University, Japan, in 2010 and 2013, respectively. Currently, he is a PhD student at Hiroshima University.

Toru Tamaki received his BE, ME, and PhD degrees in information engineering from Nagoya University, Japan, in 1996, 1998, and 2001, respectively. After serving as an assistant professor at Niigata University, Japan, from 2001 to 2005, he is currently an associate professor in the Department of Information Engineering, Graduate School of Engineering, Hiroshima University, Japan. His research interests include computer vision and image recognition.

Bisser Raytchev received his PhD in informatics from Tsukuba University, Japan, in 2000. After being a research associate at NTT Communication Science Labs and AIST, he is presently an assistant professor in the Department of Information Engineering, Graduate School of Engineering, Hiroshima University, Japan. His current research interests include computer vision, pattern recognition, high-dimensional data analysis, and image processing.

Kazufumi Kaneda received his BE, ME, and DE degrees from Hiroshima University, Japan, in 1982, 1984, and 1991, respectively. He is a professor in the Department of Information Engineering, Graduate School of Engineering, Hiroshima University, which he joined in 1986. He was a visiting researcher at Brigham Young University from 1991 to 1992. His research interests include computer graphics and scientific visualization.