giant.camera_models.owen_model

This module provides a subclass of CameraModel that implements the Owen (or JPL) camera model, a pinhole model augmented with a lens distortion model.

Theory

The Owen camera model is the pinhole camera model combined with a lens distortion model which projects any point along a ray emanating from the camera center (origin of the camera frame) to the same 2D point in an image. Given some 3D point (or direction) expressed in the camera frame, \(\mathbf{x}_C\), the Owen model is defined as

\[\begin{split}&\mathbf{x}_I = \frac{f}{z_C}\left[\begin{array}{c} x_C \\ y_C \end{array}\right] \\ &r = \sqrt{x_I^2 + y_I^2} \\ &\Delta\mathbf{x}_I = \left(\epsilon_2r^2+\epsilon_4r^4+\epsilon_5y_I+\epsilon_6x_I\right)\mathbf{x}_I+ \left(\epsilon_1r + \epsilon_3r^3\right)\left[\begin{array}{c} -y_I \\ x_I \end{array}\right] \\ &\mathbf{x}_P = \left[\begin{array}{ccc} k_x & k_{xy} & p_x \\ k_{yx} & k_y & p_y\end{array}\right] \left[\begin{array}{c} (1+a_1T+a_2T^2+a_3T^3)(\mathbf{x}_I+\Delta\mathbf{x}_I) \\ 1 \end{array}\right]\end{split}\]

where \(\mathbf{x}_I\) are the image frame coordinates for the point (pinhole location), \(f\) is the focal length of the camera in units of distance, \(r\) is the radial distance from the principal point of the camera to the gnomic location of the point, \(\epsilon_2\) and \(\epsilon_4\) are radial distortion coefficients, \(\epsilon_5\) and \(\epsilon_6\) are tip/tilt/prism distortion coefficients, \(\epsilon_1\) and \(\epsilon_3\) are pinwheel distortion coefficients, \(\Delta\mathbf{x}_I\) is the distortion for point \(\mathbf{x}_I\), \(k_x\) and \(k_y\) are the pixel pitch values in units of pixels/distance in the \(x\) and \(y\) directions respectively, \(k_{xy}\) and \(k_{yx}\) are alpha terms for non-rectangular pixels, \(p_x\) and \(p_y\) are the location of the principal point of the camera in the image expressed in units of pixels, \(T\) is the temperature of the camera, \(a_{1-3}\) are temperature dependence coefficients, and \(\mathbf{x}_P\) is the pixel location of the point in the image. For a more thorough description of the Owen camera model, check out this memo.
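As a concrete illustration, the projection equations above can be evaluated directly with NumPy. The following is an independent sketch, not GIANT's implementation; every coefficient value is a made-up placeholder (chosen to mirror the example model used later in this document).

```python
import numpy as np

# Sketch of the Owen model forward projection defined above.
# All parameter values are illustrative placeholders, not GIANT defaults.
f = 10.0                                  # focal length, mm
e1, e2, e3, e4, e5, e6 = 0.0, 1e-5, 0.0, 0.0, 0.0, 0.0  # distortion coefficients
kx = ky = 1 / 2.2e-3                      # pixel pitch, pixels/mm
kxy = kyx = 0.0                           # non-rectangular pixel terms
px = py = (1024 - 1) / 2                  # principal point, pixels
a1 = a2 = a3 = 0.0                        # temperature dependence coefficients
T = 0.0                                   # camera temperature


def project(x_c):
    """Project a 3D point expressed in the camera frame to a pixel location."""
    x_c = np.asarray(x_c, dtype=float)
    x_i = f * x_c[:2] / x_c[2]            # pinhole (gnomic) projection
    r = np.hypot(*x_i)                    # radial distance from the principal point
    radial = e2 * r**2 + e4 * r**4 + e5 * x_i[1] + e6 * x_i[0]
    pinwheel = e1 * r + e3 * r**3
    dx = radial * x_i + pinwheel * np.array([-x_i[1], x_i[0]])  # distortion
    temp = 1 + a1 * T + a2 * T**2 + a3 * T**3                   # temperature scaling
    k = np.array([[kx, kxy, px], [kyx, ky, py]])                # intrinsic matrix
    return k @ np.append(temp * (x_i + dx), 1)


print(project([0, 0, 1]))  # the boresight maps to the principal point
```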

Speeding up the camera model

One of the most common functions of the camera model is to relate pixels in a camera to unit vectors in the 3D camera frame. This is done extensively throughout GIANT, particularly when ray tracing. Unfortunately, this transformation is iterative (there isn’t an analytic solution), which can make things a little slow, particularly when you need to do the transformation for many pixel locations.
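To make the iterative nature of the inversion concrete, here is a sketch of one common approach: a fixed-point iteration on a toy radial distortion model, solving for the undistorted location that reproduces the observed one. This is illustrative only, not GIANT's actual algorithm, and the coefficient value is invented.

```python
import numpy as np

# Toy second-order radial distortion (coefficient value is made up).
e2 = 1e-5


def distort(x_i):
    """Distortion applied to a gnomic location (radial term only here)."""
    r = np.hypot(*x_i)
    return e2 * r**2 * x_i


def undistort(g, tol=1e-12, max_iter=50):
    """Iteratively solve x + distort(x) = g; there is no analytic solution."""
    x = np.asarray(g, dtype=float).copy()
    for _ in range(max_iter):
        x_new = g - distort(x)            # fixed-point update
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return x


g = np.array([1.0, 0.5])                  # observed (distorted) gnomic location
x = undistort(g)
print(x + distort(x))                     # round-trips back to g
```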

In order to speed up this transformation we can precompute it for each pixel in a detector and for a range of temperatures specified by a user, and then use bilinear interpolation to compute the location of future pixel/temperature combinations we need. While this is an approximation, it saves significant time compared to going through the full iterative transformation, and based on testing it is accurate to a few thousandths of a pixel, which is more than sufficient for nearly every use case. The OwenModel and its subclasses make precomputing the transformation, and using the precomputed transformation, as easy as calling prepare_interp() once. Future calls to any method that needs the transformation from pixels to gnomic locations (on the way to unit vectors) will then use the precomputed transformation unless specifically requested otherwise. In addition, once the prepare_interp() method has been called, if the resulting camera object is saved to a file, either using the camera_model save()/load() functions or another serialization method like pickle/dill, then the precomputed transformation will also be saved and loaded, so that it truly only needs to be computed once.
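The precomputation strategy can be sketched independently of GIANT: evaluate an expensive per-pixel transformation once on a regular grid, then answer later queries by bilinear interpolation. The stand-in transformation and grid spacing below are arbitrary; GIANT's prepare_interp() handles the real model (plus a temperature axis) internally.

```python
import numpy as np

# Stand-in for the slow iterative pixel-to-gnomic transformation.
def slow_transform(u, v):
    return np.array([np.sin(u / 200.0), np.cos(v / 200.0)])


# Precompute the transformation on a regular grid of pixel locations.
step = 8
us = np.arange(0, 1024 + step, step)
vs = np.arange(0, 1024 + step, step)
table = np.array([[slow_transform(u, v) for v in vs] for u in us])


def fast_transform(u, v):
    """Approximate slow_transform via bilinear interpolation of the table."""
    i = min(int(u // step), len(us) - 2)
    j = min(int(v // step), len(vs) - 2)
    fu = (u - us[i]) / step               # fractional position within the cell
    fv = (v - vs[j]) / step
    return ((1 - fu) * (1 - fv) * table[i, j] + fu * (1 - fv) * table[i + 1, j]
            + (1 - fu) * fv * table[i, j + 1] + fu * fv * table[i + 1, j + 1])


err = np.abs(fast_transform(511.3, 640.7) - slow_transform(511.3, 640.7))
print(err)  # small interpolation error for a smooth transformation
```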

Since precomputing the transformation can take a fairly long time, it is not always worth doing. Typically, if you have a camera model that you will be using again and again (as is the case in most operations and analysis work) then you should precompute the transformation and save the resulting camera object to a file that is then used for future work. This is usually best done at the end of a calibration script (for a real camera), or, for a synthetic camera used in analysis, in a stand-alone script that defines the camera, precomputes the transformation, and then saves it off. If you are just doing a quick analysis and don't need the camera model repeatedly or for any heavy-duty ray tracing, then it is recommended that you not precompute the transformation.

Whether you precompute the transformation or not, the use of the camera model should appear unchanged beyond computation time.

Use

This is a concrete implementation of a CameraModel, therefore to use this class you simply need to initialize it with the proper values. Typically these values come from either the physical dimensions of the camera, or from a camera calibration routine performed to refine the values using observed data (see the calibration sub-package for details). For instance, say we have a camera which has an effective focal length of 10 mm, a pixel pitch of 2.2 um, and a 1024x1024 detector with second-order radial distortion. We could then create a model for this camera as

>>> from giant.camera_models import OwenModel
>>> model = OwenModel(focal_length=10, kx=1/2.2e-3, ky=1/2.2e-3, radial2=1e-5,
...                   n_rows=1024, n_cols=1024, px=(1024-1)/2, py=(1024-1)/2)

Note that we did not set the field of view, but it is automatically computed for us based on the prescribed camera model.

>>> model.field_of_view
9.050773898292755

In addition, we can now use our model to project points

>>> model.project_onto_image([0, 0, 1])
array([511.5, 511.5])

or to determine the unit vector through a pixel

>>> model.pixels_to_unit([[0, 500], [0, 100]])
array([[-0.1111288 , -0.00251967],
       [-0.1111288 , -0.09016027],
       [ 0.98757318,  0.99592408]])

Classes

OwenModel

This class provides an implementation of the Owen camera model for projecting 3D points onto images and performing camera calibration.