# Robot Vision Part 1 : Camera Calibration

## January 12, 2020

Any visual processing pipeline needs an accurate image from the camera; distorted input propagates errors downstream. This is very important for robots during navigation and also for rendering virtual elements over real-world objects (Augmented Reality). The source code for this project can be found here.

The image on the left shows the distortion caused by a wide-angle lens; its undistorted version is on the right.

# Why Camera Calibration?

Most cameras on the market follow the pinhole model, in which 3D objects are projected onto a 2D surface (the camera sensor). However, this projection isn’t always perfect: real lenses introduce radial and tangential distortion. Thus we need to make some adjustments to the camera image.

These distortions can be corrected using the Brown–Conrady model. Let $(x_i, y_i)$ be a point in the image and $r$ be its radial distance from the center. Then the radial correction is:

$$x_{rad} = x_i\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \qquad y_{rad} = y_i\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$$

the tangential correction is:

$$x_{tan} = 2 p_1 x_i y_i + p_2 (r^2 + 2 x_i^2) \qquad y_{tan} = p_1 (r^2 + 2 y_i^2) + 2 p_2 x_i y_i$$

And the corrected coordinates:

$$x_{corrected} = x_{rad} + x_{tan} \qquad y_{corrected} = y_{rad} + y_{tan}$$

Thus, finding the five distortion coefficients $(k_1, k_2, k_3, p_1, p_2)$ will let us undistort the image.
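As a sketch, the correction above can be applied point by point in normalized image coordinates. The function below is a minimal illustration of the model; in practice the coefficient values come from calibration, so any values passed to it here are hypothetical.

```python
def apply_distortion_model(x, y, k1, k2, k3, p1, p2):
    """Brown-Conrady model: map an ideal normalized point (x, y)
    to its distorted position using the five coefficients."""
    r2 = x * x + y * y  # squared radial distance from the center
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_corr = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_corr = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_corr, y_corr
```

With all five coefficients at zero the point is unchanged, which is a quick sanity check that the model reduces to the ideal pinhole projection.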

# Getting Things Ready

We will be using a 7x4 chessboard pattern (that is, 7x4 inner corners) for this process, as it is very easy to come by. The chessboard is displayed on my mobile phone screen, gotta go green. Take a few pictures of the chessboard with your camera (at least 10). You must have OpenCV 4.1 or higher and Python 3. Pickle will be required for storing our calibration data for future use.

# Calibrating the Camera

For the purpose of demonstration I am using only one image. In OpenCV, camera calibration can be done easily with cv2.calibrateCamera( objpoints, imgpoints, gray.shape[::-1], None, None ), which maps the 3D object points of the real chessboard to the 2D image points of the camera image and returns ret (the RMS reprojection error), mtx (the camera matrix, i.e. the intrinsic parameters), dist (the distortion coefficients), and rvecs and tvecs (the per-view rotation and translation vectors).

So first we import the libraries and generate the 3D objpoints (object points), considering the chessboard to be kept stationary on the XY plane, i.e. Z = 0 always.
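A minimal sketch of that setup (the 7x4 board size matches the pattern used in this post; the square size is left as 1 unit, which is fine because calibration only needs relative geometry):

```python
import numpy as np

# 3D coordinates of the board's 7x4 inner corners on the Z = 0 plane:
# (0,0,0), (1,0,0), ..., (6,3,0)
objp = np.zeros((7 * 4, 3), np.float32)
objp[:, :2] = np.mgrid[0:7, 0:4].T.reshape(-1, 2)

objpoints = []  # 3D points in real-world space (one copy of objp per image)
imgpoints = []  # matching 2D corner detections in the image plane
```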

Next we read in our images and find the chessboard corners using cv2.findChessboardCorners(img, (7, 4), None).

Upon successful detection, found is returned as True. The corner locations can then be refined to sub-pixel accuracy using cv2.cornerSubPix() and drawn over the image with cv2.drawChessboardCorners().

Now we can use these points to calculate the distortion coefficients and undistort the image with cv2.undistort().

Let’s use pickle.dump() to store the calibration data for future use.
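For example (the variable names mtx and dist match those returned by cv2.calibrateCamera; the dictionary layout and helper name are just one reasonable choice):

```python
import pickle

def save_calibration(mtx, dist, path='calibrated.pickle'):
    """Store the camera matrix and distortion coefficients on disk."""
    with open(path, 'wb') as f:
        pickle.dump({'mtx': mtx, 'dist': dist}, f)
```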

In upcoming posts we’ll just load the distortion coefficients dist and the camera matrix mtx from calibrated.pickle.
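Loading them back is the mirror image of saving (assuming the same dictionary layout used when the file was written):

```python
import pickle

def load_calibration(path='calibrated.pickle'):
    """Read back the camera matrix and distortion coefficients."""
    with open(path, 'rb') as f:
        data = pickle.load(f)
    return data['mtx'], data['dist']
```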