This article analyzes the measurement performance of a 3D full-field imaging system based on grating projection and active triangulation. We first derive the exact mathematical relationship between the height of an object's surface, the phase, and the parameters of the experimental setup; this relationship can be used to obtain the precise shape of an object. We then investigate in detail how inaccuracies in the determination of the system's parameters affect the measurement results. Finally, we evaluate the measurement performance in experiments with simulated data.
The reconstruction of three-dimensional objects and environments is an increasingly important topic in computer vision, animation, virtual reality (VR) and e-commerce [1]. Many methods are used to obtain digital models of objects and environments, including stereo triangulation of image pairs, optical flow methods for video streams [2], and optical full-field methods [3] and [4]. Because they are non-contact, fast, and amenable to automatic processing, optical full-field methods are widely used. A typical full-field technique is fringe projection [5], [6] and [7]. With this technique, the phase of the projected fringes, which encodes the object's height, is first estimated; a phase-to-depth conversion is then applied to recover the height. Many researchers have studied the linear relationship between phase and depth using the triangulation method [5], [6] and [7].
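As an illustration of such a linear phase-to-depth conversion, the sketch below assumes the widely used small-height approximation Δφ ≈ 2π f0 L h / L0 (valid when h ≪ L0), with f0 = 1/P0; the symbols follow the system parameters defined later in this paper, and the specific numbers in the usage example are hypothetical.

```python
import numpy as np

def height_linear(delta_phi, L, L0, P0):
    """Linear phase-to-height approximation (valid when h << L0).

    delta_phi : phase difference (rad) between object and reference plane
    L  : distance between the camera imaging center and the projector center
    L0 : distance between the imaging center and the reference plane
    P0 : period of the projected fringes on the reference plane
    """
    f0 = 1.0 / P0  # carrier frequency of the fringes
    return L0 * delta_phi / (2.0 * np.pi * f0 * L)

# hypothetical setup: L = 100, L0 = 500, P0 = 10 (arbitrary length units)
h = height_linear(0.2 * np.pi, L=100.0, L0=500.0, P0=10.0)
```

Inverting the same approximation, a phase difference of 0.2π in this hypothetical setup corresponds to a height of 5 units.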
Fringe projection systems mostly use crossed-optical-axes geometry. As a result, the relationship between the phase and the depth is not linear, because the grating observed by the camera is irregular, with its frequency varying along the X-axis. Several approaches have been explored to correct this distortion. Cuevas [8] proposed a method to calculate an object's depth distribution from the demodulated phase using a radial basis function neural network. In another application, Cuevas [9] used a multi-layer neural network to determine the depth of an object through path-independent integration of the recovered phase information. Knopf [10] and [11] described two approaches that automatically map the measured image coordinates to 3D object coordinates for shape reconstruction: one [10] uses a radial basis function neural network and the other [11] a Bernstein basis function (BBF) neural network. Ganotra [12] studied the direct reconstruction of an object without the intermediate step of phase plane calculation. Quan [13] achieved an implicit conversion between phase and depth by accurately shifting a reference plane by a known distance. Windecker [14] used calibration at different focal positions and a low-order polynomial to approximate the sensitivity field, which establishes the conversion between phase and height. Sutton [15] established a phase–height transformation for each pixel by fitting a third-order polynomial to N pairs of known distances and phases.
All these approaches establish either a linear relationship or an implicit conversion between the phase and the height. Neither gives an exact representation of the phase–height relationship in terms of the parameters of the system. It is therefore difficult to acquire highly accurate spatial position data and, especially, to evaluate how each systematic parameter contributes to the measurement error.
The existing approaches therefore do not fully support an analysis of the influence of systematic errors in a 3D imaging system. This paper provides, for the first time, the exact mathematical relationship needed for that analysis. Using Fourier transform profilometry (FTP), we retrieve the phase information corresponding to the height of an object. We then describe a procedure for obtaining a mathematical expression for the height in terms of the phase and the parameters of the system. These parameters are: the distance, L, between the imaging center of the camera and the projection center of the projector; the distance, L0, between the imaging center and a reference plane; the angle, θ, between the optical axes of the projector and the camera; and the period, P0, of the fringes.
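The FTP phase-retrieval step can be sketched as follows: transform the fringe pattern to the frequency domain, isolate the positive carrier lobe with a band-pass filter, transform back, and take the phase difference with the reference pattern. This is a minimal one-dimensional sketch of the standard FTP procedure, not the paper's implementation; the filter parameters are illustrative assumptions.

```python
import numpy as np

def ftp_phase(fringe, ref, f0, bw):
    """Recover the unwrapped phase difference between a deformed fringe
    pattern and a reference pattern by Fourier transform profilometry.

    fringe, ref : 1-D intensity profiles sampled along x
    f0 : carrier frequency (cycles/sample)
    bw : half-width of the band-pass window around f0 (cycles/sample)
    """
    n = fringe.size
    freqs = np.fft.fftfreq(n)
    # band-pass filter keeping only the positive carrier lobe
    window = (np.abs(freqs - f0) <= bw).astype(float)

    def analytic(signal):
        # complex signal whose angle carries the modulated phase
        return np.fft.ifft(np.fft.fft(signal) * window)

    dphi = np.angle(analytic(fringe) * np.conj(analytic(ref)))
    return np.unwrap(dphi)  # remove 2*pi discontinuities

# synthetic check: a Gaussian phase bump of 1 rad on a 32-cycle carrier
x = np.arange(512)
f0 = 32.0 / 512.0
phi = np.exp(-((x - 256.0) / 40.0) ** 2)
ref = np.cos(2 * np.pi * f0 * x)
fringe = np.cos(2 * np.pi * f0 * x + phi)
rec = ftp_phase(fringe, ref, f0=f0, bw=0.02)
```

With these settings the recovered phase closely tracks the simulated bump; in a real system the band-pass width must be chosen against the spectrum of the deformed fringes.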
The principle and configuration of the system are described in Section 2. Section 3 explores the exact mathematical relationship among the height, the phase and the parameters of the system. Section 4 reports the analysis of the systematic errors introduced by inaccuracies in the determination of the parameters involved in the measurement. The experimental results using the simulated system and some concluding remarks are provided in Sections 5 and 6, respectively.
To obtain the height information accurately, we explore in detail the mathematical relationship between the height, the phase and the structural parameters of a 3D imaging system, which, to the best of our knowledge, is defined here for the first time. Setting aside optical distortion and electronic noise, this relationship yields the exact height. The measurement errors in the height are also analyzed in detail with respect to every parameter of the system.
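Such a parameter-wise error analysis can be illustrated numerically. The paper's exact expression (which also involves θ) is derived later; as a stand-in, the sketch below uses the classical crossed-axes relation h = L0·Δφ / (Δφ − 2π f0 L) and estimates, by direct re-evaluation, the height error caused by a small relative error in each parameter. All numeric values are hypothetical.

```python
import numpy as np

def height(dphi, L, L0, P0):
    # classical crossed-optical-axes phase-to-height relation;
    # a stand-in for the exact expression derived in this paper
    f0 = 1.0 / P0
    return L0 * dphi / (dphi - 2.0 * np.pi * f0 * L)

def sensitivity(dphi, L, L0, P0, rel_err=0.01):
    """Height error caused by a 1% relative error in each parameter,
    estimated by perturbing one parameter at a time."""
    h = height(dphi, L, L0, P0)
    return {
        "L":  height(dphi, L * (1 + rel_err), L0, P0) - h,
        "L0": height(dphi, L, L0 * (1 + rel_err), P0) - h,
        "P0": height(dphi, L, L0, P0 * (1 + rel_err)) - h,
    }

# hypothetical setup: L = 100, L0 = 500, P0 = 10; this dphi maps to h = 5
dphi = -100.0 * np.pi / 495.0
h = height(dphi, L=100.0, L0=500.0, P0=10.0)
errs = sensitivity(dphi, L=100.0, L0=500.0, P0=10.0)
```

Because h is proportional to L0 in this relation, a 1% error in L0 produces exactly a 1% height error, whereas the errors from L and P0 depend nonlinearly on the operating point, which is why a term-by-term analysis is needed.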
In practice, 3D imaging systems contain many sources of error, such as optical distortion (radial and tangential distortion, etc.) of the camera and projector, additive noise in the electronics, and geometrical aberrations. Image capture errors and the manufacturing quality of the gratings are further inherent problems. These problems, however, are beyond the scope of this paper; future work is needed to calibrate the system quickly and effectively.