giant.camera_models.brown_model

This module provides a subclass of CameraModel that implements the Brown camera model, which adds basic distortion corrections to the Pinhole model.

Theory

The Brown camera model is the pinhole camera model combined with a lens distortion model which projects any point along a ray emanating from the camera center (origin of the camera frame) to the same 2D point in an image. Given some 3D point (or direction) expressed in the camera frame, \(\mathbf{x}_C\), the Brown model is defined as

\[\begin{split}&\mathbf{x}_I = \frac{1}{z_C}\left[\begin{array}{c} x_C \\ y_C \end{array}\right] \\ &r = \sqrt{x_I^2 + y_I^2} \\ &\Delta\mathbf{x}_I = (k_1r^2+k_2r^4+k_3r^6)\mathbf{x}_I + \left[\begin{array}{c} 2p_1x_Iy_I+p_2(r^2+2x_I^2) \\ p_1(r^2+2y_I^2) + 2p_2x_Iy_I \end{array}\right] \\ &\mathbf{x}_P = \left[\begin{array}{ccc} f_x & \alpha & p_x \\ 0 & f_y & p_y\end{array}\right] \left[\begin{array}{c} (1+a_1T+a_2T^2+a_3T^3)(\mathbf{x}_I+\Delta\mathbf{x}_I) \\ 1 \end{array}\right]\end{split}\]

where \(\mathbf{x}_I\) are the image frame coordinates for the point (pinhole location), \(r\) is the radial distance from the principal point of the camera to the gnomic location of the point, \(k_{1-3}\) are radial distortion coefficients, \(p_{1-2}\) are tip/tilt/prism distortion coefficients, \(\Delta\mathbf{x}_I\) is the distortion for point \(\mathbf{x}_I\), \(f_x\) and \(f_y\) are the focal length divided by the pixel pitch in the \(x\) and \(y\) directions respectively expressed in units of pixels, \(\alpha\) is an alpha term for non-rectangular pixels, \(p_x\) and \(p_y\) are the location of the principal point of the camera in the image expressed in units of pixels, \(T\) is the temperature of the camera, \(a_{1-3}\) are temperature dependence coefficients, and \(\mathbf{x}_P\) is the pixel location of the point in the image. For a more thorough description of the Brown camera model, check out this memo.
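The equations above translate almost directly into code. The following is a minimal NumPy sketch of the forward model, not GIANT's actual BrownModel implementation; the parameter names simply mirror the symbols in the equations.

```python
import numpy as np

def brown_project(x_c, fx, fy, px, py, alpha=0.0,
                  k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0,
                  a1=0.0, a2=0.0, a3=0.0, temperature=0.0):
    """Project a 3D point expressed in the camera frame to a pixel location."""
    x_c = np.asarray(x_c, dtype=float)
    # pinhole projection: perspective divide by the z component
    x_i = x_c[:2] / x_c[2]
    # squared radial distance from the principal point in the image frame
    r2 = x_i[0]**2 + x_i[1]**2
    # radial distortion plus the tip/tilt/prism terms
    radial = (k1*r2 + k2*r2**2 + k3*r2**3) * x_i
    tangential = np.array([2*p1*x_i[0]*x_i[1] + p2*(r2 + 2*x_i[0]**2),
                           p1*(r2 + 2*x_i[1]**2) + 2*p2*x_i[0]*x_i[1]])
    x_d = x_i + radial + tangential
    # temperature scaling followed by the affine map into pixel coordinates
    scale = 1 + a1*temperature + a2*temperature**2 + a3*temperature**3
    return np.array([fx*scale*x_d[0] + alpha*scale*x_d[1] + px,
                     fy*scale*x_d[1] + py])
```

A point on the boresight (\([0, 0, 1]\)) has no distortion and lands exactly on the principal point, matching the doctest later in this page.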

Speeding up the camera model

One of the most common functions of the camera model is to relate pixels in a camera to unit vectors in the 3D camera frame. This is done extensively throughout GIANT, particularly when ray tracing. Unfortunately, this transformation is iterative (there isn’t an analytic solution), which can make things a little slow, particularly when you need to do the transformation for many pixel locations.
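To make the iterative nature of this inversion concrete, here is a minimal fixed-point sketch for undoing the distortion (the step that has no analytic solution). This is an illustration of the technique, not GIANT's actual solver, which may use a different iteration or stopping criterion.

```python
import numpy as np

def distortion(x, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """The Brown distortion term for a gnomic location x = [x_I, y_I]."""
    r2 = x[0]**2 + x[1]**2
    radial = (k1*r2 + k2*r2**2 + k3*r2**3) * x
    tangential = np.array([2*p1*x[0]*x[1] + p2*(r2 + 2*x[0]**2),
                           p1*(r2 + 2*x[1]**2) + 2*p2*x[0]*x[1]])
    return radial + tangential

def undistort(x_d, max_iter=50, tol=1e-14, **coefs):
    """Invert the distortion by fixed-point iteration: x <- x_d - Delta(x)."""
    x_d = np.asarray(x_d, dtype=float)
    x = x_d.copy()  # initial guess: the distorted location itself
    for _ in range(max_iter):
        x_new = x_d - distortion(x, **coefs)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x
```

Because the distortion is small relative to the gnomic location, the iteration converges in a handful of steps, but doing it for millions of rays still adds up.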

In order to speed up this transformation we can precompute it for each pixel in the detector and for a user-specified range of temperatures, and then use bilinear interpolation for whatever pixel/temperature combinations we need later. While this is an approximation, it is dramatically faster than the full iterative transformation and, based on testing, is accurate to a few thousandths of a pixel, which is more than sufficient for nearly every use case. The BrownModel and its subclasses make precomputing the transformation, and using the precomputed transformation, as easy as calling prepare_interp() once. Any subsequent call to a method that needs the transformation from pixels to gnomic locations (on the way to unit vectors) will then use the precomputed transformation unless specifically requested otherwise. In addition, once the prepare_interp() method has been called, if the resulting camera object is saved to a file, either with the camera_model save()/load() functions or another serialization method like pickle/dill, then the precomputed transformation is saved and loaded as well, so that it truly only needs to be computed once.
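The precompute-then-interpolate idea can be illustrated in isolation. The sketch below tabulates a toy pixel-to-gnomic map on a coarse grid and answers later queries by bilinear interpolation; it is not the prepare_interp() implementation (whose signature and grid choices are not shown on this page), just the underlying technique.

```python
import numpy as np

def make_bilinear_lookup(func, cols, rows):
    """Tabulate func(u, v) on a pixel grid and return a bilinear interpolant.

    func maps a (column, row) pixel location to a 2-element gnomic location.
    """
    table = np.array([[func(u, v) for u in cols] for v in rows])  # (n_rows, n_cols, 2)

    def interp(u, v):
        # locate the grid cell containing (u, v)
        i = int(np.clip(np.searchsorted(cols, u) - 1, 0, len(cols) - 2))
        j = int(np.clip(np.searchsorted(rows, v) - 1, 0, len(rows) - 2))
        tu = (u - cols[i]) / (cols[i + 1] - cols[i])
        tv = (v - rows[j]) / (rows[j + 1] - rows[j])
        # blend the four surrounding precomputed values
        return ((1 - tu) * (1 - tv) * table[j, i]
                + tu * (1 - tv) * table[j, i + 1]
                + (1 - tu) * tv * table[j + 1, i]
                + tu * tv * table[j + 1, i + 1])

    return interp
```

Because the pixel-to-gnomic map is smooth and nearly linear over a single grid cell, the interpolation error stays tiny, which is why the precomputed table loses so little accuracy in practice.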

Since precomputing the transformation can take a fairly long time, it is not always worthwhile to do so. Typically, if you have a camera model that you will be using again and again (as is common in most operations and analysis cases), you should precompute the transformation and save the resulting camera object to a file that is then used for future work. This is usually best done at the end of a calibration script (for a real camera), or in a stand-alone script that defines the camera, precomputes the transformation, and then saves it off (for a synthetic camera used in analysis). If you are just doing a quick analysis and don't need the camera model repeatedly or for any heavy-duty ray tracing, then it is recommended that you not precompute the transformation.

Whether you precompute the transformation or not, the use of the camera model should appear unchanged beyond computation time.

Use

This is a concrete implementation of a CameraModel, so to use this class you simply need to initialize it with the proper values. Typically these values come either from the physical dimensions of the camera or from a camera calibration routine performed to refine the values using observed data (see the calibration sub-package for details). For instance, say we have a camera with an effective focal length of 10 mm, a pixel pitch of 2.2 um, and a detector size of 1024x1024, with radial distortion coefficients of 1e-2, -2e-3, and 3e-4. We could then create a model for this camera as

>>> from giant.camera_models import BrownModel
>>> model = BrownModel(fx=10/2.2e-3, fy=10/2.2e-3,
...                    n_rows=1024, n_cols=1024, px=(1024-1)/2, py=(1024-1)/2,
...                    k1=1e-2, k2=-2e-3, k3=3e-4)

Note that we did not set the field of view, but it is automatically computed for us based on the prescribed camera model.

>>> model.field_of_view
9.048754127234737
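As a rough sanity check on this value (a back-of-the-envelope sketch, not GIANT's internal computation, which also accounts for the distortion coefficients and the exact corner definition), the half-diagonal field of view is approximately the angle between the boresight and the ray through a corner pixel:

```python
import numpy as np

fx = 10 / 2.2e-3                       # focal length over pixel pitch, in pixels
half_diagonal = 511.5 * np.sqrt(2.0)   # principal point to a corner pixel, in pixels
fov_deg = np.degrees(np.arctan(half_diagonal / fx))
```

This pinhole-only estimate comes out within a hundredth of a degree of the reported value; the small difference is the distortion correction this sketch ignores.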

In addition, we can now use our model to project points

>>> model.project_onto_image([0, 0, 1])
array([511.5, 511.5])

or to determine the unit vector through a pixel

>>> model.pixels_to_unit([[0, 500], [0, 100]])
array([[-0.11110425, -0.00251948],
       [-0.11110425, -0.09015368],
       [ 0.9875787 ,  0.99592468]])

Classes

BrownModel

This class provides an implementation of the Brown camera model for projecting 3D points onto images and performing camera calibration.