Camera Calibration Guide with Matlab Code

In this article, I will explain the step-by-step process of performing camera calibration with MATLAB code. I wrote this code while taking a Computer Vision course, to better understand the academic lessons on the intrinsic and extrinsic calibration parameters of a camera. With the help of this code, one can even calibrate a mobile camera.

Book

I followed the book ‘Computer Vision: A Modern Approach (1st Edition)’ by Forsyth & Ponce to learn about the various intrinsic/extrinsic camera parameters. One can visit my article ‘Computer Vision Resources for a Beginner’ to learn about the best resources/books for learning Computer Vision.

Perspective Projection Matrix

It is a matrix which contains all the intrinsic and extrinsic parameters, and it has the following form:

$$M = \begin{bmatrix} \alpha\, r_1^T - \alpha\cot\theta\, r_2^T + u_0\, r_3^T & \alpha\, t_x - \alpha\cot\theta\, t_y + u_0\, t_z \\ \dfrac{\beta}{\sin\theta}\, r_2^T + v_0\, r_3^T & \dfrac{\beta}{\sin\theta}\, t_y + v_0\, t_z \\ r_3^T & t_z \end{bmatrix}$$

where $(\alpha, \beta)$ are the magnifications in pixel units, $\theta$ is the skew angle, $(u_0, v_0)$ is the position (in pixel units) of the principal point, $(r_1^T, r_2^T, r_3^T)$ are the three rows of the rotation matrix, and $(t_x, t_y, t_z)$ are the coordinates of the translation vector. If $P$ is the homogeneous coordinate vector of a point in the world coordinate system, then the corresponding homogeneous image point is

$$p = MP.$$

We have to convert the homogeneous coordinates to non-homogeneous ones to obtain the image point in pixels. If $m_1^T$, $m_2^T$ and $m_3^T$ are the rows of the matrix $M$, then the image point is

$$u = \frac{m_1^T \cdot P}{m_3^T \cdot P}, \qquad v = \frac{m_2^T \cdot P}{m_3^T \cdot P}.$$
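
To make these equations concrete, here is a minimal MATLAB sketch of my own (all numeric values are made up for illustration) that assembles $M = K\,[R \; t]$ and projects a world point to pixel coordinates:

% toy example (all values hypothetical, for illustration only)
alpha = 3400; beta = 3400;      % magnifications in pixel units
theta = 90;                     % skew angle in degrees
u0 = 1500; v0 = 2000;           % principal point in pixels

K = [alpha, -alpha*cotd(theta), u0; ...
     0,     beta/sind(theta),   v0; ...
     0,     0,                  1];

R = eye(3);                     % hypothetical rotation: none
t = [0; 0; 100];                % hypothetical translation along z

M = K*[R, t];                   % the 3x4 perspective projection matrix

P = [10; 20; 0; 1];             % a world point in homogeneous coordinates
p = M*P;                        % homogeneous image point p = M*P
u = p(1)/p(3)                   % u = (m1'*P)/(m3'*P)
v = p(2)/p(3)                   % v = (m2'*P)/(m3'*P)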

Step – 1: Points for Correspondence

As there are 5 intrinsic parameters and 6 extrinsic parameters, we require a minimum of 6 point correspondences to evaluate the perspective projection matrix. We can't choose all of these points to lie on a single line or plane, as such a degenerate configuration gives no unique solution for the projection matrix. In the absence of a calibration rig, I used a corner of a room and took a photo of it using my mobile camera. One can note that the encircled points in the photo lie on three different planes, with no more than 3 points on a single plane.

[Figure: 3D scene used for camera calibration, with the correspondence points encircled]

I then established the point correspondences between world coordinates and image coordinates. The unit for the world coordinates does not matter as long as it is used uniformly.

N = 6; % number of points for correspondence

% each column is one coordinate
WldPts = zeros(3,N);
Img1Pts = zeros(2,N);

% points as per my camera calibration:
% matrix for world coordinates; each point is a column vector
WldPts(1,:) = [58.6, 58.6, 0, 0, 0, 58.6]; % x coordinates
WldPts(2,:) = [0, 0, 0, 58.6, 58.6, 58.6]; % y coordinates
WldPts(3,:) = [0, 10, 10, 10, 0, 0];       % z coordinates

% corresponding points in image 1 (in pixels)
Img1Pts(1,:) = [212, 126, 1618, 2979, 2899, 1265];   % x coordinates
Img1Pts(2,:) = [2177, 1913, 1372, 2071, 2344, 3518]; % y coordinates
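
As a quick sanity check on the degeneracy requirement mentioned above, one can verify that the chosen world points do not all lie on a single plane. This small snippet is my own addition, not part of the book's recipe:

% sanity check (my addition): if all world points lay on one plane, the
% 4xN matrix of homogeneous coordinates would have rank <= 3
H = [WldPts; ones(1,N)];
if rank(H) < 4
    warning('Degenerate configuration: all world points lie on one plane.');
end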

Step – 2: Evaluating Perspective Projection Matrix

The second step is to convert the projection equations into a system of homogeneous linear equations and solve it for the projection matrix. If $n$ points are chosen for correspondence, we get the following system of equations (refer to Eq. (5.3.1) in the book):-

$$\begin{bmatrix} P_1^T & 0^T & -u_1 P_1^T \\ 0^T & P_1^T & -v_1 P_1^T \\ \vdots & \vdots & \vdots \\ P_n^T & 0^T & -u_n P_n^T \\ 0^T & P_n^T & -v_n P_n^T \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \end{bmatrix} = \mathcal{P}\, m = 0.$$

Next, we have to perform a Singular Value Decomposition (SVD) of the matrix $\mathcal{P}^T\mathcal{P}$ (as in the code below) and take the eigenvector corresponding to the lowest eigenvalue as the solution for $m$.

% build the 2N x 12 system matrix P from the correspondences
P = zeros(2*N,12);

for i = 1:N
    P(2*i-1,:) = [WldPts(:,i)', 1, 0,0,0,0, -Img1Pts(1,i)*WldPts(:,i)', -Img1Pts(1,i)];
    P(2*i,:)   = [0,0,0,0, WldPts(:,i)', 1, -Img1Pts(2,i)*WldPts(:,i)', -Img1Pts(2,i)];
end

% the eigenvector of P'*P with the smallest eigenvalue solves P*m = 0
[U,S,V] = svd(P'*P);
indx = 12;
eigval_min = S(indx,indx); % smallest eigenvalue (residual of the fit)
m_vect = V(:,indx);

% reshape the 12-vector m into the 3x4 projection matrix M
M = zeros(3,4);
M(1,1:4) = m_vect(1:4);
M(2,1:4) = m_vect(5:8);
M(3,1:4) = m_vect(9:12);
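
Before recovering the individual parameters, it is worth sanity-checking $M$ by reprojecting the world points and comparing against the measured pixel coordinates. This check is my own addition; it assumes MATLAB R2016b or later for the implicit expansion in the element-wise division:

% reprojection check (my addition): project the world points with the
% estimated M and compute the RMS pixel error against Img1Pts
proj = M*[WldPts; ones(1,N)];           % homogeneous image points
proj = proj(1:2,:) ./ proj(3,:);        % convert to pixel coordinates
rms_err = sqrt(mean(sum((proj - Img1Pts).^2, 1))) % RMS error in pixels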

Step – 3: Recovering Parameters

The next step is to recover the intrinsic and extrinsic parameters from the perspective projection matrix. For that, I used Eqs. (5.3.5) – (5.3.8) from the book.

a1 = M(1,1:3)';
a2 = M(2,1:3)';
a3 = M(3,1:3)';

% epsilon = +-1: since a row of the rotation matrix has unit magnitude,
% a^2 = 1 only fixes a up to sign. With epsilon = -1, a different set of
% values for the rotation matrix and translation vector is obtained.
epsilon = 1;
prow = epsilon/norm(a3); % the scale factor rho

r3 = prow*a3;
u0 = prow^2*(a1'*a3);
v0 = prow^2*(a2'*a3);

tmp1 = cross(a1,a3);
tmp2 = cross(a2,a3);

tmp3 = (tmp1'*tmp2)/(norm(tmp1)*norm(tmp2));

% as theta lies between 0 and 90 degrees, cos(theta) is always positive
tmp3 = abs(tmp3);

theta = acosd(tmp3);

% the signs of alpha & beta depend on the sign of the focal length of the
% lens; assuming a positive focal length
alpha = prow^2*norm(tmp1)*sind(theta);
beta = prow^2*norm(tmp2)*sind(theta);

r1 = tmp2/norm(tmp2);
r2 = cross(r3,r1);

% intrinsic parameters matrix
K = [alpha, -alpha*cotd(theta), u0; ...
     0,     beta/sind(theta),   v0; ...
     0,     0,                  1];

% rotation matrix
R = [r1'; r2'; r3'];

% translation vector
tz = M(3,4)*prow;
ty = (M(2,4)*prow - v0*tz)*sind(theta)/beta;
tx = (M(1,4)*prow - u0*tz + alpha*cotd(theta)*ty)/alpha;
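
As one last consistency check (again my own addition, not from the book), the recovered parameters can be reassembled: $\rho M$ should reproduce $K\,[R \; t]$ almost exactly.

% consistency check (my addition): rho*M should equal K*[R t]
t = [tx; ty; tz];
residual = max(max(abs(prow*M - K*[R, t]))) % should be close to zero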

Verifying the Results

I obtained the following result for the image shown above:-

$$\alpha = 3368.83, \quad \beta = 3385.01, \quad \theta = 89.43^\circ,$$
$$u_0 = 1523.64, \quad v_0 = 2112.19,$$
$$R = \begin{bmatrix} 0.7857 & 0.6184 & 0.0172 \\ 0.3838 & 0.4654 & 0.7975 \\ 0.4852 & 0.6332 & 0.6030 \end{bmatrix},$$
$$t = [3.6982, \; 20.4628, \; 136.0939]^T.$$

In order to cross-check the results, one has to observe the following (a quick MATLAB sketch of these checks follows the list):-

  1. The skew angle $\theta$ should be close to $90^\circ$ but less than it.
  2. The aspect ratio $\alpha/\beta$ should be close to one, though this also depends on the camera.
  3. The principal point $(u_0, v_0)$ should be close to half of the image size, i.e., near the image center.
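
A minimal sketch of these checks in MATLAB, assuming hypothetical image dimensions (substitute the actual pixel size of your photo):

% cross-checks (my addition); imgW and imgH are hypothetical placeholders
% for the actual pixel dimensions of the calibration photo
imgW = 3000; imgH = 4000;
theta                                 % 1. skew angle, close to 90 degrees
aspect_ratio = alpha/beta             % 2. should be close to 1
offset = [u0 - imgW/2, v0 - imgH/2]   % 3. principal point offset from center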

That was all I had to write about camera calibration. I hope this article has been of use to you. If that is the case, then you can donate to support this website!
