limb_matching

This module provides the capability to locate the relative position of any target body by matching the observed limb in an image with the shape model of the target.

Description of the Technique

Limb matching is a form of OpNav that produces a full 3DOF relative position measurement between the target and the camera. It is a sister technique of ellipse matching, extended to general bodies. It works by matching observed limb points in an image to surface points on the shape model and then solving the PnP problem (essentially triangulation). This makes it a powerful measurement: it is less sensitive to errors in the a priori knowledge of the range to the target than cross correlation, it provides more information to a filter than just the bearing to the target, and it is more computationally efficient. That being said, the line-of-sight/bearing component of the estimate is generally slightly less accurate than cross correlation (when there is good a priori knowledge of the shape and the range to the target), because limb matching only makes use of the visible limb, while cross correlation makes use of all of the visible target.

Because matching the observed limb to a surface point is not a well-defined problem for general (non-ellipsoidal) bodies, this technique is iterative: it re-pairs the observed limb points with surface points as the relative position between the target and the camera is refined. In addition, the limb pairing process needs the a priori bearing of the target to be fairly close to the actual location of the target in the image. Therefore, the algorithm generally proceeds as follows:

  1. If requested, identify the center of the target in the image using a moment algorithm (moment_algorithm) and shift the target's a priori state to lie along the line of sight identified by the moment algorithm.

  2. Identify the observed illuminated limb of the target in the image being processed using ImageProcessing.identify_subpixel_limbs() or LimbScanner.

  3. Pair the extracted limb points to possible surface points on the target shape using the current estimate of the state

  4. Solve a linear least squares problem to update the state

  5. Repeat steps 2-4 until convergence or until the maximum number of iterations is exceeded.
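The moment algorithm in step 1 amounts to a center-of-brightness computation from image moments. A minimal sketch, assuming an image already thresholded so only the target is bright (the example image and values are illustrative, not GIANT's moment_algorithm implementation):

```python
import numpy as np

# Toy 8x8 image with a bright rectangular "target"; a real image would be
# thresholded first so that only the target contributes to the moments.
image = np.zeros((8, 8))
image[2:5, 3:7] = 10.0

rows, cols = np.indices(image.shape)
m00 = image.sum()                        # zeroth moment (total brightness)
center_col = (cols * image).sum() / m00  # first moments give the centroid
center_row = (rows * image).sum() / m00

print(center_col, center_row)  # 4.5 3.0
```

The a priori state is then shifted so the predicted target center falls on this observed centroid.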

Further details on the algorithm can be found in the LimbMatching class documentation.
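The pair-then-solve iteration in steps 3-5 can be sketched with a toy 2D, translation-only analogue. This is a simplifying assumption: GIANT estimates the full 3DOF relative position, and its limb pairing is more sophisticated than the nearest-neighbor matching used here on a half-circle "shape model".

```python
import numpy as np

# "Shape model": the illuminated half of a unit circle, sampled as surface points.
angles = np.linspace(0.0, np.pi, 50)
model = np.stack([np.cos(angles), np.sin(angles)], axis=1)

true_offset = np.array([0.3, -0.2])
observed = model + true_offset  # "extracted" limb points in the image

estimate = np.zeros(2)  # deliberately wrong a priori relative state
for _ in range(20):
    predicted = model + estimate
    # step 3: pair each observed limb point with the closest predicted surface point
    distances = np.linalg.norm(observed[:, None, :] - predicted[None, :, :], axis=2)
    pairs = distances.argmin(axis=1)
    # step 4: for a translation-only state the linear least-squares update
    # is just the mean pairing residual
    update = (observed - predicted[pairs]).mean(axis=0)
    estimate += update
    # step 5: stop on convergence
    if np.linalg.norm(update) < 1e-12:
        break

print(estimate)  # converges to true_offset
```

The pairing is recomputed every pass, which is why a reasonable a priori bearing matters: a badly wrong initial pairing can pull the solution away from the truth.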

Note

This implements limb based OpNav for irregular bodies. For regular bodies, like planets and moons, see ellipse_matching, which will be more efficient and more accurate.

Typically this technique is used once the body is fully resolved in the image (at least about 50 pixels in apparent diameter), and it can be used as long as the limb is visible in the image. For accurate results it requires an accurate shape model of the target, at least up to an unknown scale. The technique can also be sensitive to errors in the knowledge of the relative orientation of the target frame to the image frame, so you need a fairly good estimate of the target's pole and spin state. If you don't have these things the technique may still work, but with degraded results. For very irregular bodies (bodies that are not mostly convex) the technique may depend more heavily on a decent a priori relative state between the camera and the target, since if the initial limb pairing is very far off it may never recover.

Tuning

There are a few parameters to tune for this method. The main thing that may make a difference is the choice and tuning of the limb extraction routines. There are two categories of routines you can choose from. The first is image processing, where the limbs are extracted using only the image and the sun direction. To tune the image processing limb extraction you can adjust the following ImageProcessing settings:

  ImageProcessing.denoise_flag: A flag specifying whether to apply denoise_image() to the image before attempting to locate the limbs.

  ImageProcessing.image_denoising: The routine used to denoise the image.

  ImageProcessing.subpixel_method: The subpixel method used to refine the limb points.

Other tunings are specific to the subpixel method chosen and are discussed in image_processing.
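The core idea behind any subpixel limb refinement is that the limb crossing falls between pixel samples of the brightness profile. The sketch below is a deliberately simple illustration (threshold crossing with linear interpolation), not one of GIANT's actual subpixel methods, which are more sophisticated:

```python
import numpy as np

# Brightness samples along a scan crossing from dark sky onto the lit limb.
profile = np.array([0.0, 0.0, 5.0, 60.0, 95.0, 100.0, 100.0])

threshold = 0.5 * profile.max()          # 50% of peak brightness
i = int(np.argmax(profile > threshold))  # first sample above the threshold
# Linearly interpolate between samples i-1 and i for the subpixel crossing.
fraction = (threshold - profile[i - 1]) / (profile[i] - profile[i - 1])
limb_location = (i - 1) + fraction

print(limb_location)  # ~2.818 samples along the scan, between pixels 2 and 3
```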

The other option for limb extraction is limb scanning, in which predicted illumination values based on the shape model and the a priori state are correlated with extracted scan lines to locate the limbs in the image. This technique can be quite accurate (if the shape model is accurate), but it is typically much slower and the extraction must be repeated each iteration. The general tunings for limb scanning come from the LimbScanner class:

  LimbScanner.number_of_scan_lines: The number of limb points to extract from the image.

  LimbScanner.scan_range: The extent of the limb to use, centered on the sun line, in radians (should be <= np.pi/2).

  LimbScanner.number_of_sample_points: The number of samples to take along each scan line.

There are a few other things that can be tuned but they generally have limited effect. See the LimbScanner class for more details.
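The correlation at the heart of limb scanning can be illustrated with a 1D toy: a predicted illumination profile (dark sky, then lit surface) is slid along an observed scan line, and the peak correlation lag gives the limb offset. The profiles below are illustrative, not GIANT's rendered illumination values:

```python
import numpy as np

# Predicted illumination along one scan line from the shape model and the
# a priori state, and an observed scan line whose limb actually lies
# 3 samples further along.
predicted = np.r_[np.zeros(10), np.ones(10)]
observed = np.r_[np.zeros(13), np.ones(7)]

# Cross-correlate the mean-removed profiles; the peak lag is the limb shift.
correlation = np.correlate(observed - observed.mean(),
                           predicted - predicted.mean(), mode="full")
shift = int(correlation.argmax()) - (len(predicted) - 1)

print(shift)  # the observed limb is offset by 3 samples from the prediction
```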

In addition, there are a few knobs that can be tweaked on the class itself.

  LimbMatching.extraction_method: Chooses the limb extraction method: image processing or limb scanning.

  LimbMatching.max_iters: The maximum number of iterations to perform.

  LimbMatching.recenter: A flag specifying whether to use a moment algorithm to set the initial guess at the line of sight to the target. If your a priori state knowledge is poor enough that the predicted location of the target falls outside of the observed target in the image, then you should set this to True.

  LimbMatching.discard_outliers: A flag specifying whether to remove outliers at each iteration step. Generally this should be left set to True.
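One common way to screen pairing residuals for outliers is a median-absolute-deviation (MAD) test; the sketch below uses that approach as an illustrative assumption, since the documentation above does not specify GIANT's exact rejection rule:

```python
import numpy as np

# Norms of the limb-pairing residuals from one iteration; the last value is a
# bad pairing that would corrupt the least-squares update if kept.
residual_norms = np.array([0.9, 1.1, 1.0, 0.95, 1.05, 1.02, 6.0])

median = np.median(residual_norms)
# 1.4826 scales the MAD to match a standard deviation for Gaussian noise.
mad = 1.4826 * np.median(np.abs(residual_norms - median))
inliers = np.abs(residual_norms - median) <= 3.0 * mad

print(residual_norms[inliers])  # the 6.0 outlier is discarded
```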

Beyond this, you only need to ensure that you have a fairly accurate shape model of the target, good knowledge of the sun direction in the image frame, and good knowledge of the rotation between the principal frame and the camera frame.

Use

The class provided in this module is usually not used directly by the user; instead, it is typically interfaced with through the RelativeOpNav class using the identifier limb_matching. For more details on using the RelativeOpNav interface, please refer to the relnav_class documentation. For more details on using the technique class directly, as well as a description of the details dictionaries produced by this technique, refer to the following class documentation.

Classes

LimbMatching

This class implements GIANT's version of limb based OpNav for irregular bodies.