moment_algorithm

This module provides a class which implements a moment-based (center of illumination) center finding RelNav technique.

Description of the Technique

The moment algorithm is the technique that you typically use when your target begins to become resolved in your images, but you still don’t have an accurate shape model for doing a more advanced technique like limb_matching or cross_correlation. Generally, it is only used for a short while, when the target is between roughly 5 and 100 pixels in apparent diameter, as you attempt to build a shape model of the target so that you can begin using the more advanced and more accurate techniques; however, there is no hard limit on when you can and can’t use this technique. You can even use it when the target is still unresolved or when the target is very large in the image, but in these cases (as in most cases) there are much more accurate methods that can be used.

In order to extract the center finding observables from this method, a few steps are followed. First, we predict roughly how many pixels we expect the illuminated portion of our target to subtend based on the a priori scene knowledge, assuming a spherical target. We then use this predicted area to set the minimum number of connected pixels that will be considered a possible target in the image (this can be turned off using the use_apparent_area option). Next, we segment the image into foreground/background objects using the segment_image() method from the image processing class. For each target in the image we are processing, we then identify the segmented object closest to the target’s predicted location and assume that this is the location of the target in the actual image (if you have multiple targets in an image, it is somewhat important that your a priori scene is at least moderately accurate to ensure that this pairing works correctly). Finally, we take the foreground objects around the identified segment (to account for portions of the target that may be separated from the main clump of illumination, such as along the limb) and compute the center of illumination using a moment algorithm. The center of illumination is then corrected for phase angle effects (if requested) and the resulting center-of-figure measurements are stored.
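As a rough, self-contained sketch of the final step, the center of illumination can be computed from the zeroth and first order image moments of the pixels belonging to the identified foreground segments. The snippet below uses plain NumPy and is an illustration only, not GIANT's internal implementation:

    import numpy as np

    def center_of_illumination(image, foreground_mask):
        """Compute the DN-weighted centroid (center of illumination) of the
        foreground pixels using zeroth and first order image moments."""
        rows, cols = np.nonzero(foreground_mask)
        dn = image[rows, cols].astype(float)

        m00 = dn.sum()            # zeroth moment (total illumination)
        m10 = (cols * dn).sum()   # first moment along the column (x) axis
        m01 = (rows * dn).sum()   # first moment along the row (y) axis

        # centroid in (x, y) = (column, row) pixel coordinates
        return m10 / m00, m01 / m00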

Tuning

There are a few things that can be tuned when using this technique. The first set is the tuning parameters used to segment an image into foreground/background objects, which come from the ImageProcessing class. These are:

ImageProcessing.otsu_levels
    The number of levels to attempt to segment the histogram into using multi-level Otsu thresholding.

ImageProcessing.minimum_segment_area
    The minimum size of a segment for it to be considered a foreground object. This can be determined automatically using the use_apparent_area flag of this class.

ImageProcessing.minimum_segment_dn
    The minimum DN value for a segment to be considered foreground. This can be used to help separate background segments that are slightly brighter due to stray light or other noise issues.

For more details on using these attributes see the ImageProcessing.segment_image() documentation.
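As a concrete configuration sketch, these attributes might be set on the ImageProcessing instance before it is given to this technique. The values below are arbitrary placeholders rather than recommended settings, and default construction of ImageProcessing is assumed:

    from giant.image_processing import ImageProcessing

    image_processing = ImageProcessing()

    # number of levels to use when splitting the histogram with multi-level Otsu thresholding
    image_processing.otsu_levels = 2

    # ignore connected blobs smaller than this many pixels (this may be overridden automatically
    # when use_apparent_area is left on for the moment algorithm)
    image_processing.minimum_segment_area = 50

    # require at least this DN for a segment to be treated as foreground
    image_processing.minimum_segment_dn = 200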

In addition, there are some tuning parameters on this class itself. The first is the search radius, which is controlled by the search_distance attribute. This should be a number or None. If it is not None, then the distance from the centroid of the nearest segment to the predicted target location must be less than this value. Therefore, you should set this value to account for the expected center-of-figure to center-of-brightness shift as well as the uncertainty in the a priori location of the target in the scene, while being careful not to set too large a value when there are multiple targets in the scene, to avoid ambiguity. If this is None, then the closest segment is always paired with the target (no search region is considered), unless the segment has already been paired to another target in the scene.
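Conceptually, the search radius acts as a simple gate on the pairing between a target and the nearest segment, roughly like the following hypothetical check (simplified, not GIANT's actual code):

    import numpy as np

    def accept_pairing(predicted_location, segment_centroid, search_distance):
        """Return True if the nearest segment is close enough to the predicted
        target location to be paired with it."""
        if search_distance is None:
            return True  # no gating: the closest unused segment is always paired
        separation = np.linalg.norm(np.asarray(segment_centroid, dtype=float)
                                    - np.asarray(predicted_location, dtype=float))
        return separation < search_distance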

This technique can predict what the minimum segment area should be in the image using the predicted apparent area for each target. This can be useful to automatically set ImageProcessing.minimum_segment_area based on the targets and their a priori locations in the camera frame. Because this is just an approximation, a margin of safety is included through apparent_area_margin_of_safety, which shrinks the predicted apparent area to account for the spherical-target assumption and possible errors in the a priori scene information. You can turn this feature off and use the minimum segment area that is already set by setting use_apparent_area to False.
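The intent can be illustrated with a toy calculation for a spherical target. This is a hypothetical sketch of the idea only; the exact relationship between the predicted area and apparent_area_margin_of_safety is described in the class documentation:

    import numpy as np

    apparent_radius_pixels = 20.0           # predicted apparent radius from the a priori scene
    apparent_area_margin_of_safety = 2.0    # factor used to shrink the prediction for safety

    predicted_area = np.pi * apparent_radius_pixels ** 2   # full illuminated disk, in pixels
    minimum_segment_area = predicted_area / apparent_area_margin_of_safety

    print(f"minimum segment area of roughly {minimum_segment_area:.0f} pixels")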

Whether the phase correction is applied is controlled by the boolean flag apply_phase_correction. The information passed to the phase correction routines is controlled by the phase_correction_type and brdf attributes.

Use

The class provided in this module is usually not used by the user directly; instead, it is typically interfaced with through the RelativeOpNav class using the identifier moment_algorithm. For more details on using the RelativeOpNav interface, please refer to the relnav_class documentation. For more details on using the technique class directly, as well as a description of the details dictionaries produced by this technique, refer to the MomentAlgorithm class documentation below.
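A typical invocation through RelativeOpNav might look like the sketch below. The moment_algorithm_kwargs argument and the moment_algorithm_estimate() method follow the naming pattern GIANT uses for registered RelNav techniques, but they are assumptions here and should be checked against the relnav_class documentation; the camera and scene objects are placeholders that you must build for your own data:

    from giant.relative_opnav import RelativeOpNav

    # camera and scene are assumed to be an already populated GIANT Camera and Scene;
    # constructing them is covered elsewhere in the documentation
    relnav = RelativeOpNav(
        camera,
        scene,
        moment_algorithm_kwargs={
            "search_distance": 50,              # pixels; None disables the search-radius gate
            "use_apparent_area": True,          # predict minimum_segment_area from the a priori scene
            "apparent_area_margin_of_safety": 2.0,
            "apply_phase_correction": True,     # shift the center of brightness toward the center of figure
        },
    )

    # apply the moment algorithm to every image currently turned on in the camera
    relnav.moment_algorithm_estimate()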

Classes

MomentAlgorithm

This class implements GIANT’s version of moment-based center finding for extracting bearing measurements to resolved or unresolved targets in an image.