Most computers have long been equipped with floating-point processing capabilities, providing an easy,
brute-force solution to machine computation errors that requires no specific tailoring of the computation
in nearly all situations. However, the computation required for adaptive optics real-time control in 30-50 meter
telescopes is demanding enough to strain conventional von Neumann processors, even if Moore's Law remains valid
over the coming years. Field-Programmable Gate Arrays (FPGAs) have been proposed as a viable alternative to cope
with such computational needs [1,2], but, at least with today's chips, they require fixed-point arithmetic instead.
It is therefore important to evaluate to what extent the accuracy and stability of the control system are affected by this limitation.
This paper presents simulation and laboratory results comparing the two arithmetics, specifically
evaluated in an adaptive optics system. The real-time controller has been modeled as a black box that takes
the wavefront sensor camera digital output data as input, provides a digital output to the actuators of the
deformable mirror, and internally computes all outputs from the inputs. The MATLAB fixed-point library has been
used to evaluate the effect of different precision lengths (5-10 fractional bits) on the computation of the
Shack-Hartmann subaperture centroid, in comparison with the reference 64-bit floating-point arithmetic and with
the noise floor of the real system. We conclude that the effect of limited precision can be overcome by adequately
selecting the number of fractional bits used in the representation and tailoring that number to the needs of each
step of the algorithm.
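The kind of comparison described above can be sketched in a few lines of Python (used here as a stand-in for the MATLAB fixed-point library; the function names, the center-of-gravity centroid formula, and the synthetic Gaussian spot are illustrative assumptions, not the paper's actual code): a subaperture centroid is computed in 64-bit floating point, then rounded onto a fixed-point grid with a configurable number of fractional bits, and the resulting error is measured against the reference.

```python
import numpy as np

def quantize(x, frac_bits):
    """Round a value onto a fixed-point grid with `frac_bits` fractional
    bits (step = 2**-frac_bits). Only rounding is modeled; overflow and
    saturation of the integer part are ignored for simplicity."""
    step = 2.0 ** -frac_bits
    return np.round(np.asarray(x, dtype=float) / step) * step

def centroid(intensity, frac_bits=None):
    """Center-of-gravity centroid (cx, cy) of one Shack-Hartmann
    subaperture image. With frac_bits=None the computation is plain
    float64; otherwise the result is quantized to that many fractional
    bits, mimicking a fixed-point output register."""
    img = np.asarray(intensity, dtype=float)
    ny, nx = img.shape
    total = img.sum()
    cx = (img.sum(axis=0) * np.arange(nx)).sum() / total
    cy = (img.sum(axis=1) * np.arange(ny)).sum() / total
    if frac_bits is not None:
        cx, cy = quantize(cx, frac_bits), quantize(cy, frac_bits)
    return cx, cy

if __name__ == "__main__":
    # Synthetic Gaussian spot at a sub-pixel position (illustrative only).
    ny = nx = 16
    y, x = np.mgrid[0:ny, 0:nx]
    spot = np.exp(-((x - 7.3) ** 2 + (y - 8.6) ** 2) / (2 * 2.0 ** 2))
    ref = centroid(spot)                    # 64-bit reference
    for b in range(5, 11):                  # 5-10 fractional bits
        cxq, cyq = centroid(spot, frac_bits=b)
        err = max(abs(cxq - ref[0]), abs(cyq - ref[1]))
        print(f"{b} fractional bits: max centroid error {err:.6f} px")
```

With this rounding model the quantization error of each coordinate is bounded by half the grid step, 2**-(frac_bits+1), which is the quantity to be weighed against the noise floor of the real system.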