ellipse_matching
This module provides the capability to locate the relative position of a regular target body (well modelled by a triaxial ellipsoid) by matching the observed ellipse of the limb in an image with the ellipsoid model of the target.
Description of the Technique
Ellipse matching is a form of OpNav which produces a full 3DOF relative position measurement between the target and the camera. Conceptually, it determines the range by comparing the observed size of the target in the image to the known size of the target in 3D space, and it locates the center of the target in the image by fitting an ellipse to the observed limb. This makes it a powerful measurement type: unlike cross correlation, it is insensitive to errors in the a priori knowledge of the range to the target; it provides more information to a filter than just the bearing to the target; and it is more computationally efficient. That being said, the line-of-sight/bearing component of the estimate is generally slightly less accurate than cross correlation (when there is good a priori knowledge of the shape and the range to the target). This is because ellipse matching only makes use of the visible limb, while cross correlation makes use of all of the visible target.
While conceptually the ellipse matching algorithm computes both a bearing and a range measurement, in actuality a single 3DOF position estimate is computed in a least squares sense, not 2 separate measurements. The steps to extract this measurement are:

1. Identify the observed illuminated limb of the target in the image being processed using ImageProcessing.identify_subpixel_limbs()

2. Solve the least squares problem

\[\begin{split}\left[\begin{array}{c}\bar{\mathbf{s}}'^T_1 \\ \vdots \\ \bar{\mathbf{s}}'^T_m\end{array}\right] \mathbf{n}=\mathbf{1}_{m\times 1}\end{split}\]

where \(\bar{\mathbf{s}}'_i=\mathbf{B}\mathbf{s}_i/\left\|\mathbf{B}\mathbf{s}_i\right\|\), \(\mathbf{s}_i\) is a unit vector in the camera frame through an observed limb point in the image (computed using pixels_to_unit()), \(\mathbf{B}=\mathbf{Q}\mathbf{T}^C_P\), \(\mathbf{Q}=\text{diag}(1/a, 1/b, 1/c)\), \(a\), \(b\), and \(c\) are the lengths of the principal semi-axes of the triaxial ellipsoid representing the target, and \(\mathbf{T}^C_P\) is the rotation matrix from the camera frame to the principal frame of the target shape.

3. Compute the position of the target in the camera frame using

\[\mathbf{r}=-(\mathbf{n}^T\mathbf{n}-1)^{-1/2}\,\mathbf{T}_C^P\mathbf{Q}^{-1}\mathbf{n}\]

where \(\mathbf{r}\) is the position of the target in the camera frame, \(\mathbf{T}_C^P=(\mathbf{T}^C_P)^T\) is the rotation from the principal frame of the target ellipsoid to the camera frame, and all else is as defined previously.
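One way to see where the least squares system and the position formula come from (a brief sketch; the symbol \(\mathbf{c}\) is introduced here for the scaled target center and is not used elsewhere in this document): in the scaled space the target becomes a unit sphere centered at \(\mathbf{c}=\mathbf{B}\mathbf{r}\). Each unit vector \(\bar{\mathbf{s}}'_i\) through a limb point is tangent to this sphere, and the tangent length from the camera is \(\sqrt{\mathbf{c}^T\mathbf{c}-1}\), so

\[\bar{\mathbf{s}}'^T_i\mathbf{c}=\sqrt{\mathbf{c}^T\mathbf{c}-1}\quad\Longrightarrow\quad\bar{\mathbf{s}}'^T_i\left(\frac{\mathbf{c}}{\sqrt{\mathbf{c}^T\mathbf{c}-1}}\right)=1\]

Identifying \(\mathbf{n}=\mathbf{c}/\sqrt{\mathbf{c}^T\mathbf{c}-1}\) gives the least squares system, and since \(\mathbf{n}^T\mathbf{n}-1=(\mathbf{c}^T\mathbf{c}-1)^{-1}\), the scaled center is \(\mathbf{c}=\pm(\mathbf{n}^T\mathbf{n}-1)^{-1/2}\,\mathbf{n}\). Mapping back through \(\mathbf{B}^{-1}\) yields the position expression, with the sign fixed by the frame and line-of-sight conventions.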
Further details on the algorithm can be found here.
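As an illustration, the measurement extraction can be sketched with NumPy on synthetic data. This does not use GIANT's API: the ellipsoid, rotation, position, and limb rays below are all fabricated for the example, and the positive root is taken in the final step because the synthetic limb rays point from the camera toward the target (sign conventions vary with how the frames and line-of-sight vectors are defined).

```python
import numpy as np

# --- setup: a synthetic triaxial ellipsoid target (all values assumed) ---
a, b, c = 3.0, 2.0, 1.5                   # principal semi-axes
Q = np.diag([1.0 / a, 1.0 / b, 1.0 / c])  # scaling that maps the ellipsoid to a unit sphere
theta = 0.3                               # arbitrary principal-to-camera rotation about z
T_cam_from_principal = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
r_true = np.array([0.5, -0.2, 25.0])      # true target position in the camera frame

# center of the scaled ("unit sphere") target in the scaled principal frame
center_s = Q @ T_cam_from_principal.T @ r_true
rho = np.linalg.norm(center_s)

# --- synthesize limb rays: unit vectors tangent to the unit sphere ---
# a tangent ray u from the origin satisfies u . center_s = sqrt(rho**2 - 1)
zhat = center_s / rho
e1 = np.cross(zhat, [1.0, 0.0, 0.0])
e1 /= np.linalg.norm(e1)
e2 = np.cross(zhat, e1)
cos_t, sin_t = np.sqrt(rho**2 - 1.0) / rho, 1.0 / rho
phis = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
u = (cos_t * zhat[None, :]
     + sin_t * (np.cos(phis)[:, None] * e1 + np.sin(phis)[:, None] * e2))

# corresponding camera-frame limb unit vectors s_i
s = (T_cam_from_principal @ np.linalg.inv(Q) @ u.T).T
s /= np.linalg.norm(s, axis=1, keepdims=True)

# --- step 2: scale, renormalize, and solve the least squares problem ---
s_bar = (Q @ T_cam_from_principal.T @ s.T).T
s_bar /= np.linalg.norm(s_bar, axis=1, keepdims=True)
n_vec, *_ = np.linalg.lstsq(s_bar, np.ones(len(s_bar)), rcond=None)

# --- step 3: recover the target position in the camera frame ---
scale = 1.0 / np.sqrt(n_vec @ n_vec - 1.0)
r_est = scale * (T_cam_from_principal @ np.linalg.inv(Q) @ n_vec)

print(np.round(r_est, 6))  # recovers approximately [0.5, -0.2, 25.0]
```

With noise-free synthetic limb points the least squares solve is exact up to floating point error; with real extracted limbs the residuals of this system drive the accuracy of the estimate.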
Note

This implements limb based OpNav for regular bodies. For irregular bodies, like asteroids and comets, see limb_matching.
Typically this technique is used once the body is fully resolved in the image (around at least 50 pixels in apparent diameter) and then can be used as long as the limb is visible in the image.
Tuning
There are a few parameters to tune for this method. The main thing that may make a difference is the choice and tuning of the limb extraction routines. There are 2 categories of routines you can choose from. The first is image processing, where the limbs are extracted using only the image and the sun direction. To tune the image processing limb extraction routines you can adjust the following ImageProcessing settings:

Parameter | Description
---|---
denoise_flag | A flag specifying whether to apply the denoising routine to the image before extracting the limbs
image_denoising | The routine to use to attempt to denoise the image
subpixel_method | The subpixel method to use to refine the limb points

Other tunings are specific to the subpixel method chosen and are discussed in image_processing.
The other option for limb extraction is limb scanning. In limb scanning predicted illumination values based on the
shape model and a prior state are correlated with extracted scan lines to locate the limbs in the image. This technique
can be quite accurate (if the shape model is accurate) but is typically much slower and the extraction must be repeated
each iteration. The general tunings to use for limb scanning are from the LimbScanner class:

Parameter | Description
---|---
number_of_scan_lines | The number of limb points to extract from the image
scan_range | The extent of the limb to use centered on the sun line in radians (should be <= np.pi/2)
number_of_sample_points | The number of samples to take along each scan line
There are a few other things that can be tuned but they generally have limited effect. See the LimbScanner class for more details.
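The core idea behind limb scanning, correlating a predicted illumination profile (from the shape model and the a priori state) against an observed scan line to locate the limb, can be illustrated with a minimal 1D sketch. The profiles here are synthetic step edges invented for the example; GIANT's actual implementation works on scan lines extracted from the image and predictions rendered from the shape model.

```python
import numpy as np

n = 100

# observed scan line: dark sky, then the bright target; the true limb
# (the dark-to-bright edge) falls at sample 60
observed = np.zeros(n)
observed[60:] = 1.0

# predicted illumination profile from the a priori state, limb at sample 50
predicted = np.zeros(n)
predicted[50:] = 1.0

# zero-mean cross correlation; the peak lag gives the shift between the
# predicted and observed limb locations along the scan line
corr = np.correlate(observed - observed.mean(),
                    predicted - predicted.mean(), mode="full")
shift = int(np.argmax(corr)) - (n - 1)

print(shift)  # 10: the observed limb is 10 samples beyond the prediction
```

Repeating this along many scan lines radiating through the target yields the set of observed limb points that feed the least squares solve described above.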
In addition, there is one knob that can be tweaked on the class itself.

Parameter | Description
---|---
extraction_method | Chooses the limb extraction method to be image processing or limb scanning.
Beyond this, you only need to ensure that you have a fairly accurate ellipsoid model of the target, the knowledge of the sun direction in the image frame is good, and the knowledge of the rotation between the principal frame and the camera frame is good.
Use
The class provided in this module is usually not used by the user directly; instead it is typically accessed through the RelativeOpNav class using the identifier ellipse_matching. For more details on using the RelativeOpNav interface, please refer to the relnav_class documentation. For more details on using the technique class directly, as well as a description of the details dictionaries produced by this technique, refer to the following class documentation.
Classes

EllipseMatching | This class implements GIANT's version of limb based OpNav for regular bodies.
LimbExtractionMethods | This enumeration provides the valid options for the limb extraction methods that can be used on the image.
LimbScanner | This class is used to extract limbs from an image and pair them to surface points on the target.