A three-step system calibration procedure with error compensation for 3D shape measurement
System calibration, which usually involves complicated and time-consuming procedures, is crucial for any vision-based three-dimensional (3D) shape measurement system. A novel improved method is proposed for accurate calibration of such a measurement system. The system accuracy is improved by considering the nonlinear measurement error created by the difference between the system model and the real measurement environment. We use the Levenberg-Marquardt optimization algorithm to compensate for this error and obtain a good result. The improved method achieves a 50% improvement in re-projection accuracy compared with our previous method. The measurement accuracy is maintained well within 1.5% of the overall measurement depth range.
OCIS codes: 120.0120, 100.0100, 150.0150.
doi: 10.3788/COL20100801.0033.
Optical non-contact three-dimensional (3D) shape measurement techniques based on computer vision are used in many applications[1]. Common techniques include stereo vision[2], laser scanning[3], structured light[4], and interferometry[5]. Among these, structured-light-based techniques are increasingly used due to their excellent characteristics. The key to accurate measurement of the 3D shape is the accurate calibration of the system parameters[6]. Many calibration methods already exist, mostly focusing on distortion and aberration[7-10]. As we know, good system performance is not the simple accumulation of the performances of the various components. The measurement results are affected by many factors, such as the system model and the operating environment. Therefore, we propose an overall system calibration algorithm which considers the measurement system structure and compensates the real measurement error. We pay attention only to the input and output data, and treat the whole measurement system as a black box to perform the optimization.
Calibration based on the system configuration has been deeply studied[4,11,12], while our systematic calibration method is based on real measurement error compensation. A high-precision standard block is measured and the measurement error is expressed in terms of the system parameters. This error is then taken as the optimization objective function to acquire highly accurate system parameters.
Structured light measurement is a non-contact optical measurement method based on active triangulation. A projector projects a highly robust time-space coded light pattern, and the object space is divided into numerous measurement regions with unique codes. The object's coordinates can be computed using the triangular geometric relation[4,13]. The specific schematic is shown in Fig. 1. A 3D point pw = (xw, yw, zw) is transformed into the camera and projector pixel coordinates p″ and p′, respectively. (uc, vc) of p″ = (uc, vc, code) is the camera pixel coordinate, and (up, vp) of p′ = (up, vp, code) is the projector pixel coordinate. (uc, vc) and up are matched by the same "code". So we can calculate the object's 3D coordinates by the following formulas according to the principle of photogrammetry[1]:
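As a concrete illustration of this triangulation step, the sketch below (our own illustrative code, not the authors' implementation; all matrices and pixel values are hypothetical) recovers a 3D point from a camera pixel (uc, vc) and the matched projector column up by stacking the two camera ray constraints and the one projector stripe-plane constraint into a homogeneous linear system:

```python
import numpy as np

def triangulate(Pc, Pp, uc, vc, up):
    """Recover a 3D point from a camera pixel (uc, vc) and the projector
    column up matched through the stripe code. Pc and Pp are the 3x4
    projection matrices (intrinsics times extrinsics) of camera and projector."""
    A = np.array([
        uc * Pc[2] - Pc[0],   # camera u constraint
        vc * Pc[2] - Pc[1],   # camera v constraint
        up * Pp[2] - Pp[0],   # projector contributes one equation (1D stripes)
    ])
    # Solve A @ [x, y, z, 1]^T = 0 in the least-squares sense via SVD
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

The singular vector associated with the smallest singular value gives the homogeneous solution; dividing by its last component yields metric coordinates.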
where Θc and Θccd are the camera’s extrinsic and internal parameters to be calibrated, Θp and Θdmd are the projector’s extrinsic and internal parameters to be calibrated, respectively.
In most conventional calibration methods, the system calibration is divided into two separate procedures:
camera calibration and projector calibration. The camera calibration is accomplished on reference data composed of the 3D reference points and their two-dimensional (2D) pixel coordinates extracted from the images. Unlike the camera calibration, the projector calibration normally prepares the reference data by projecting an extra calibration pattern with known 2D references onto the calibration artifact in different poses and obtaining the 3D correspondences with the aid of the calibrated camera[7-9]. In this way, the camera calibration error unavoidably affects the reliability of the projector reference data and the accuracy of the projector calibration. This problem was solved in our previous work[14] by a system parameter adjustment based on the strong relation of the system structure. With our previous two-step calibration, we achieved a precision of 0.06 mm. We find that this calibration procedure is an inverse procedure of the measurement: it merely minimizes the sum of the re-projection errors of all the reference points onto the camera and projector image planes. In this letter, we develop a novel method to minimize the error of the real measurement, which we take as the third step of system calibration. Since the previous method is the basis of this letter, we first describe it briefly in the following parts.
Four of the most frequently used metrics for evaluating the system calibration accuracy are adopted. Their notations are given below; the details can be found in the literature[7,9,15]: the error of distorted pixel coordinates (EDI), the error of undistorted pixel coordinates (EUDI), the distance with respect to the optical ray (EORD), and the normalized calibration error (NCE).
In the structured light system, the projector cannot capture images as the camera does, so the projector image coordinates are acquired through the stripe images captured by the camera. The following describes in detail the problem that arises when the projector is calibrated. Firstly, the calibration plate image is captured to calibrate the camera. In order to obtain the same point's image pixel coordinate in the projector, the calibration plate is located, and the code stripes are projected onto it. Then an array (uci, vci, codei) is acquired, where (uci, vci) denotes the image pixel coordinates, and codei is the code of this point. However, we find that the coding images are confused by the background image (circle point image) used for camera calibration. This confusion, illustrated in Fig. 2(a), could seriously affect the decoding and the accuracy of the projector calibration.
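For reference, the per-pixel stripe code in such time-space coding schemes is commonly obtained from Gray-code stripe patterns; the sketch below assumes such a Gray-code scheme (the letter does not specify the exact code) and converts a stack of thresholded bit images into integer stripe codes:

```python
import numpy as np

def decode_gray(bit_images):
    """Decode a stack of thresholded Gray-code stripe images into a per-pixel
    integer stripe code. Assumed (hypothetical) layout: bit_images[k] holds the
    k-th pattern, most significant bit first, with values 0 or 1."""
    bits = np.asarray(bit_images, dtype=np.uint32)
    code = bits[0].copy()          # Gray-to-binary: b0 = g0
    binary = bits[0].copy()
    for g in bits[1:]:
        binary = binary ^ g        # b_k = b_{k-1} XOR g_k
        code = (code << 1) | binary
    return code
```

Each decoded integer identifies one projector stripe, which supplies the "code" used to match camera and projector coordinates.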
Zhang et al. described a novel method to solve this problem[11] which takes advantage of the spectral response of the camera. As the responses of a black/white (B/W) camera to red and blue colors are similar, the B/W camera sees only a uniform board (in the ideal case) when the calibration plate is illuminated by white light, as illustrated in Fig. 2(b). So when we calibrate the camera, the blue and red calibration plate illustrated in Fig. 3(a) is illuminated with red light, and a gray image with good contrast is acquired, as illustrated in Fig. 3(b). Then, when we calibrate the projector, the calibration plate is illuminated with white light, and a stripe image immune to the background is acquired, as illustrated in Fig. 2(b).
In this system, the camera is calibrated first. On the calibration plate shown in Fig. 3(a), there are 11×12 calibration points with known 3D coordinates. The plate can be shifted to different preset locations along the known z-axis of the object space to form a non-planar measurement space, as shown in Fig. 4. We use the image coordinates (uci, vci) captured by the camera and the known 3D coordinates to calibrate the camera.
Our calibration algorithm is based on Tsai's multi-plane calibration method[7] with some aspects improved. Firstly, some points near the principal point, where distortion is minimal, are used to estimate the initial values using the linear least squares algorithm. Secondly, distortion factors including radial and decentering distortions are introduced. Thirdly, we use the nonlinear Levenberg-Marquardt (L-M) optimization algorithm[16] to optimize the overall system and overcome the disadvantages of traditional local optimization algorithms. We use the same algorithm to calibrate the projector, with a small difference from the camera calibration because the stripes projected by the projector encode only one-dimensional (1D) coordinates. After this first calibration step, we obtain the calibration results. The calibration errors of the camera and projector over 10 tests are shown in Fig. 5. Figure 5(a) shows the camera calibration errors: EDI and EUDI are about 0.40 pixels, EORD is about 0.18 mm, and NCE is about 0.90. Figure 5(b) shows the projector calibration errors: EDI and EUDI are about 0.45 pixels, EORD is about 0.22 mm, and NCE is about 1.10. The accuracy is low, so further calibration is needed.
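The L-M refinement of the intrinsic parameters can be sketched as follows. The reduced three-parameter pinhole model (focal f and center (u0, v0), no distortion), the synthetic points, and all numeric values are illustrative assumptions, with SciPy's implementation standing in for the algorithm of Ref. [16]:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic 3D reference points and their pixel projections (hypothetical values)
rng = np.random.default_rng(0)
pts3d = rng.uniform([-50, -50, 400], [50, 50, 500], size=(30, 3))
f_true, u0_true, v0_true = 1200.0, 640.0, 512.0

def project(params, pts):
    """Pinhole projection with focal f and principal point (u0, v0)."""
    f, u0, v0 = params
    return np.column_stack([f * pts[:, 0] / pts[:, 2] + u0,
                            f * pts[:, 1] / pts[:, 2] + v0])

pixels = project([f_true, u0_true, v0_true], pts3d)

def residuals(params):
    # Re-projection residuals over all reference points, flattened for L-M
    return (project(params, pts3d) - pixels).ravel()

# Levenberg-Marquardt refinement from a rough initial guess
sol = least_squares(residuals, x0=[1000.0, 600.0, 500.0], method='lm')
```

In the full method the parameter vector additionally carries the extrinsics and the radial and decentering distortion coefficients, but the residual structure is the same.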
According to the specific characteristics of the structured light system, both devices (camera and projector) view the object scene at the same time, and the image information for calibrating both devices is acquired by the single camera. It is therefore possible to use a unique device coordinate system[4], to apply only one transformation from the object coordinate system to the device coordinate system, and to relate both devices by a rigid transformation. We take the results of the first step as the initial estimate of the second step calibration and optimize the objective function shown at the bottom of Fig. 6, which gives a brief flow chart of our two-step system calibration algorithm[14]. The strong relation between the coordinates of the two devices ensures a more precise conversion from the phase to 3D coordinates. The advantage of this optimization function, in comparison with other structured light system approaches, is the simultaneous estimation of the parameters using the unique coordinate system as a rigidity constraint. This constraint bounds the solution space, reducing the risk of erroneous estimations. After this optimization, the measurement accuracy is significantly improved, as shown in Fig. 7.
After the second nonlinear minimization with the L-M optimization method, we acquire better system parameters. However, due to the large number of unknowns and the ill conditioning of the problem, the search for the global minimum may be difficult and may become trapped in a local minimum. In addition, the optimization objective functions are based only on the 2D image pixel errors computed from the 3D coordinates, such as EDI and EUDI, whereas the actual measurement process is the reverse of this process. So it is essential to compensate the nonlinear 3D error of the conversion from the 2D images into 3D coordinates. In this measurement system, we measure a standard block and use the measurement error to compensate some of the parameters of the system. Figure 8(a) shows the standard block, with the distance between each pair of adjacent faces being d0 = 2.5±0.002 mm, and Fig. 8(b) shows the point cloud measured with our system.
The third step calibration is the improved procedure of system calibration. At first, we select many points Q = (q1, q2, · · · , qm) from face 1 of the measurement point cloud in Fig. 8(b) and fit a plane equation f(Q); then, many points P = (p1, p2, · · · , pn) uniformly sampled from face 2 are used to calculate the distance from every point of P to the plane f(Q); thirdly, we obtain the average distance from face 2 to face 1: d1 = (1/n) Σ_{i=1}^{n} g(pi, f(Q)). In the same way, we take the average distance from face 1 to face 2 as d2; at last, we take dc = (d1 + d2)/2 as the final real measurement value. The difference between the measurement value dc and the true value d0 is the absolute measurement error Derror given by
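The plane fitting and symmetric distance computation described above can be sketched as follows (the function names are our own, and g(·) is taken to be the point-to-plane distance):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point array: returns a unit
    normal n and offset d such that n . x + d = 0 on the plane."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]                      # direction of least variance
    return n, -n @ centroid

def mean_face_distance(face_a, face_b):
    """Average distance from the points of face_b to the plane fitted to face_a."""
    n, d = fit_plane(face_a)
    return np.abs(face_b @ n + d).mean()

def block_distance(face1, face2):
    """Symmetric estimate dc = (d1 + d2) / 2 as in the third calibration step."""
    d1 = mean_face_distance(face1, face2)
    d2 = mean_face_distance(face2, face1)
    return 0.5 * (d1 + d2)
```

Averaging the two one-sided distances makes the estimate insensitive to which face the plane is fitted to first.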
where p(x, y, z), q(x, y, z) = f(Θdmd, Θccd, Θk) (k = 1, 2, · · · , N) are the measured coordinates of the standard block, and Φ = [Θccd, Θdmd, Θ1, Θ2, · · · , ΘN] are the system parameters to be adjusted. The L-M optimization algorithm[16] is used to optimize the objective function Derror, and the best parameters of the entire system are acquired.
We find that the experimental results are not very good and may even be wrong if all the parameters of the system are optimized at once. The reason is that our mathematical model is based on single-component functions and the system function is not an implicit equation. So we use the method proposed in Ref. [9] to perform the optimization. Firstly, we fix the initial estimates of the focal length f and the center pixel (u0, v0), minimize the function Derror, and optimize the other system parameters. Then, with the other system parameters fixed at their current estimates, we optimize the focal length f and the center pixel (u0, v0), so that a cycle is established. The procedure terminates after a certain number of iterations, and we obtain the best system parameters.
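This alternating cycle can be sketched generically as below; the two-group residual interface is hypothetical, with SciPy's L-M solver standing in for the implementation, and the parameter grouping (focal/center versus the rest) follows the description above:

```python
import numpy as np
from scipy.optimize import least_squares

def alternating_refine(group_a0, group_b0, residual_fn, n_cycles=5):
    """Alternate between two parameter groups, holding one fixed while the
    other is refined with Levenberg-Marquardt. residual_fn(a, b) returns the
    residual vector for the full parameter set (hypothetical interface)."""
    a = np.asarray(group_a0, dtype=float)   # e.g. focal f and center (u0, v0)
    b = np.asarray(group_b0, dtype=float)   # e.g. the remaining parameters
    for _ in range(n_cycles):
        # Step 1: fix group a, refine group b
        b = least_squares(lambda x: residual_fn(a, x), b, method='lm').x
        # Step 2: fix group b, refine group a
        a = least_squares(lambda x: residual_fn(x, b), a, method='lm').x
    return a, b
```

Freezing one group at a time keeps each subproblem well conditioned, which is the point of the cyclic scheme borrowed from Ref. [9].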
In order to evaluate the performance improvement, we calibrate the system with the previous method and with the new algorithm, respectively. Figures 7(a) and (b) show the calibration accuracy of the camera and projector with the previous method. For example, the camera calibration errors EDI, EUDI, EORD, and NCE are about 0.2 pixels, 0.2 pixels, 0.1 mm, and 0.6, respectively. With the new improved calibration algorithm, we obtain better calibration precision, as shown in Figs. 7(c) and (d): EDI and EUDI are about 0.1 pixels, EORD is about 0.04 mm, and NCE is about 0.2. The calibration accuracy is greatly improved by minimizing the 3D measurement error.
In order to clearly display the advantages of the improved algorithm, we show the re-projection calibration error calculated with our proposed new method and with the previous method. Figures 9(a) and (b) show the re-projection error of the reference points onto the camera image plane. The re-projection error of the camera is 0.1204±0.0760 pixels, about half the previous error (0.2334±0.1141 pixels). This indicates that our improved method outperforms the previous method.
Compared with the first two steps, we find that the 3D errors EORD and NCE are largely reduced by our new algorithm. Figures 10(a) and (b) show the NCE error produced by the previous method and by our improved method. The error shown in Fig. 10(a) is 0.6269, much larger than the 0.1818 of our new method. This is because the third step calibration focuses on reducing the nonlinear measurement error in the real measurement environment.
In order to verify the feasibility and precision of our improved calibration algorithm, we measured a gauge using our measurement system calibrated with the new method, and compared the result with that of a coordinate measurement machine (CMM). The charge-coupled device (CCD) camera in our system is an AM1300 made by the JIAHENGZHONGZI company, with a resolution of 1280 × 1024 pixels. The projector is an NEC50+. The measurement range of our system is about 200 × 150 × 30 (mm), computed according to the components' viewing range and depth of field. The distance between each pair of adjacent faces of the standard gauge is 2.5000 mm, and the flatness of the gauge surface is 0.0002. Firstly, the gauge block was measured by an industrial CMM; the distances between faces 1 and 2 and between faces 2 and 3 were 2.4984 and 2.5003 mm, respectively, and the flatness of face 1 was measured to be 0.0017. Then the standard gauge was measured by our system. The results are shown in Fig. 11(a). The measured data were fitted by reverse-engineering software; the distances between faces 1 and 2 and between faces 2 and 3 were 2.462 and 2.531 mm, respectively, as shown in Fig. 11(b), and the flatness was about 0.02, as shown in Fig. 11(c). It can be seen that our measurement results and those of the CMM are at the same level. Our new calibration precision of 0.03 mm is a sizeable improvement of 50% over the previous precision of 0.06 mm. As verified by the experimental results, the surface measurement accuracy can be maintained well within 1.5% of the overall measurement range with the proposed system calibration method.
In conclusion, an accurate system calibration algorithm is proposed for a camera-projector measurement system based on structured light. According to the optical characteristics, we use a novel method to overcome the internal restrictive conditions of structured light calibration and greatly improve the system accuracy. A novel three-step calibration method is used to calibrate the system and compensate the measurement error in the real environment. We simulate the real measurement process, and the experimental results show that the new calibration algorithm greatly improves the robustness of high-precision calibration. In comparison with the two-step method, about 40%-50% of the measurement error can be effectively reduced when the proposed method is applied, and the absolute measurement accuracy is about 0.03 mm.
This work was supported partially by the National “863” Program of China (No. 2005AA420240) and the Doctoral Foundation of the Ministry of Education of China (No. 20070287055).