LimbMatching

giant.relative_opnav.estimators.limb_matching:

class giant.relative_opnav.estimators.limb_matching.LimbMatching(scene, camera, image_processing, limb_scanner=None, extraction_method=LimbExtractionMethods.EDGE_DETECTION, state_atol=1e-06, state_rtol=0.0001, residual_atol=1e-10, residual_rtol=0.0001, max_iters=10, recenter=True, discard_outliers=True, create_gif=True, gif_file='limb_match_summary_{}_{}.gif', interpolator=scipy.interpolate.RegularGridInterpolator)

This class implements GIANT's version of limb-based OpNav for irregular bodies.

The class provides an interface to perform limb-based OpNav for each target body that is predicted to be in an image. It does this by looping through each requested target object contained in the Scene.target_objs attribute. For each target, the algorithm:

  1. Places the target along the line of sight identified from the image using the moment_algorithm, if requested.

  2. Extracts observed limb points from the image and pairs them with the target based on the expected apparent diameter of the target and the extent of the identified limbs.

  3. Identifies which points on the surface of the target likely correspond to the identified limb points in the image.

  4. Computes the update to the relative position between the target and the camera that better aligns the observed limbs with the predicted limb points on the target surface.

Steps 2-4 are repeated until convergence, divergence, or the maximum number of iterations is reached.

In step 3, the pairs of image limb points and surface points are filtered for outliers using the get_outliers() function, if requested via the discard_outliers attribute.

The convergence for the technique is controlled through the parameters max_iters, state_rtol, state_atol, residual_rtol, and residual_atol. If the fit diverges or is unsuccessful for any reason, iteration will stop and the observed limb points and relative position will be set to NaN.
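
To make the interplay of these tolerances concrete, the following sketch shows one way the four settings could combine into a stopping test. This is illustrative only, not GIANT's actual implementation; the names check_convergence, update, state, current_ssr, and prior_ssr are hypothetical.

    import numpy as np

    def check_convergence(update, state, current_ssr, prior_ssr,
                          state_atol=1e-6, state_rtol=1e-4,
                          residual_atol=1e-10, residual_rtol=1e-4):
        # state convergence: the latest update to the state is small in an
        # absolute or a relative sense
        state_converged = ((np.abs(update) < state_atol).all() or
                           (np.abs(update) / np.abs(state) < state_rtol).all())
        # residual convergence: the sum of squared residuals stopped changing
        residual_change = abs(current_ssr - prior_ssr)
        residual_converged = (residual_change < residual_atol or
                              residual_change / abs(prior_ssr) < residual_rtol)
        return state_converged or residual_converged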

When all of the required data has been successfully loaded into an instance of this class, the estimate() method is used to perform the estimation for the requested image. The results are stored in the observed_bearings attribute for the observed limb locations and in the observed_positions attribute for the estimated relative position between the target and the camera. In addition, the predicted locations of the limbs for each target are stored in the computed_bearings attribute, and the a priori relative position between the target and the camera is stored in the computed_positions attribute. Finally, the details about the fit are stored as a dictionary in the appropriate element of the details attribute. Specifically, these dictionaries will contain the following keys:

  • 'Jacobian': The Jacobian matrix from the last completed iteration. Only available if the fit was successful.

  • 'Inlier Ratio': The ratio of inliers to outliers for the last completed iteration. Only available if the fit was successful.

  • 'Covariance': The 3x3 covariance matrix for the estimated relative position in the camera frame, based on the residuals. Only available if the fit was successful.

  • 'Number of iterations': The number of iterations it took for the system to converge. Only available if the fit was successful.

  • 'Surface Limb Points': The surface points that correspond to the limb points, expressed in the target-fixed, target-centered frame.

  • 'Failed': A human readable message describing what caused the fit to fail. Only present if the fit failed, so you can check for failure with something like 'Failed' in limb_matching.details[target_ind].

  • 'Prior Residuals': The sum of squares of the residuals from the prior iteration. Only available if the fit failed due to divergence.

  • 'Current Residuals': The sum of squares of the residuals from the current iteration. Only available if the fit failed due to divergence.

Warning

Before calling the estimate() method, be sure that the scene has been updated to correspond to the correct image time. This class does not update the scene automatically.
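
A minimal usage sketch, assuming that scene, camera, and image_processing have been configured elsewhere and that Camera.images and Scene.update behave as their names suggest:

    from giant.relative_opnav.estimators.limb_matching import LimbMatching

    # scene, camera, and image_processing are assumed to be built elsewhere
    limb_matching = LimbMatching(scene, camera, image_processing)

    for image in camera.images:
        scene.update(image)  # update the scene to the image time first
        limb_matching.estimate(image)

        for target_ind, details in enumerate(limb_matching.details):
            if details is None:
                continue  # this target was not processed
            if 'Failed' in details:
                print('target {} failed: {}'.format(target_ind, details['Failed']))
            else:
                # estimated relative position in km in the camera frame
                rel_pos = limb_matching.observed_positions[target_ind]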

Parameters:
  • scene (Scene) – The Scene object containing the target, light, and obscuring objects.

  • camera (Camera) – The Camera object containing the camera model and the images to be utilized.

  • image_processing (ImageProcessing) – The ImageProcessing object to be used to process the images.

  • limb_scanner (LimbScanner | None) – The LimbScanner object containing the limb scanning settings.

  • extraction_method (LimbExtractionMethods) – The method to use to extract the observed limbs from the image. Should be 'LIMB_SCANNING' or 'EDGE_DETECTION'. See LimbExtractionMethods for details.

  • state_atol (float) – The absolute tolerance for the state convergence criterion: (np.abs(update) < state_atol).all().

  • state_rtol (float) – The relative tolerance for the state convergence criterion: (np.abs(update) / state < state_rtol).all().

  • residual_atol (float) – The absolute tolerance for the residual convergence criterion.

  • residual_rtol (float) – The relative tolerance for the residual convergence criterion.

  • max_iters (int) – The maximum number of iterations to perform for the iterative horizon relative navigation.

  • recenter (bool) – A flag specifying whether to first compute a fast, rough estimate of the target's center-of-figure using the moment algorithm.

  • discard_outliers (bool) – A flag specifying whether to identify and discard outliers using the Median Absolute Deviation (a sketch of this approach follows the parameter list).

  • create_gif (bool) – A flag specifying whether to build a gif of the iterations.

  • gif_file (str) – The file to save the gif to, optionally with 2 positional format arguments for the image date and the target name being processed.

  • interpolator (type) – The type of image interpolator to use if the extraction method is set to LIMB_SCANNING.
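
As a rough illustration of the Median Absolute Deviation screening enabled by discard_outliers, consider the following sketch (this is not GIANT's get_outliers() implementation):

    import numpy as np

    def mad_outliers(residuals, threshold=3.0):
        # flag residuals more than `threshold` robust standard deviations
        # from the median
        median = np.median(residuals)
        mad = np.median(np.abs(residuals - median))
        # 1.4826 scales the MAD to be consistent with the standard deviation
        # for normally distributed data
        return np.abs(residuals - median) > threshold * 1.4826 * mad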

technique: str = 'limb_matching'

The name of the technique identifier in the RelativeOpNav class.

observable_type: List[RelNavObservablesType] = [RelNavObservablesType.LIMB, RelNavObservablesType.RELATIVE_POSITION]

The type of observables this technique generates.

property camera: Camera

The camera instance that represents the camera used to take the images we are performing Relative OpNav on.

This is the source of the camera model, and may be used for other information about the camera as well. See the Camera class documentation for details.

generates_templates: bool = False

A flag specifying whether this RelNav estimator generates and stores templates in the templates attribute.

relnav_handler: Callable | None = None

A custom handler for doing estimation/packaging the results into the RelativeOpNav instance.

Typically this should be None, unless the observable_type is set to RelNavObservablesType.CUSTOM, in which case this must be a function whose first and only positional argument is the RelativeOpNav instance that this technique was registered to, and which accepts 2 keyword arguments, image_ind and include_targets, which should be used to control which image/target is processed.

If observable_type is not RelNavObservablesType.CUSTOM then this is ignored whether it is None or not.
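
Based on the description above, a conforming custom handler would look something like the following (the handler name and body are hypothetical):

    def my_relnav_handler(relnav, image_ind=None, include_targets=None):
        # relnav is the RelativeOpNav instance this technique was registered
        # to; image_ind and include_targets control which image/target is
        # processed
        ...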

property scene: Scene

The scene which defines the a priori locations of all targets and light sources with respect to the camera.

You can assume that the scene has been updated for the appropriate image time inside of the class.

computed_positions: List[NONEARRAY]

A list of the computed relative position between the target and the camera in the image frame where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays or None if the target wasn't considered or this type of measurement is not applicable. If numpy arrays, they should contain the relative position between the camera and the target as a length 3 array with units of kilometers in the camera frame. This does not need to be populated for all RelNav techniques.

This is where you should store results for RELATIVE-POSITION techniques.

computed_bearings: List[NONEARRAY]

A list of the computed (predicted) bearings in the image where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays or None if the target wasn't considered for some reason. If numpy arrays, they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, and CONSTRAINT techniques.

observed_positions: List[NONEARRAY]

A list of the observed relative position between the target and the camera in the image frame where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays or None if the target wasn't considered or this type of measurement is not applicable. If numpy arrays, they should contain the relative position between the camera and the target as a length 3 array with units of kilometers in the camera frame. This does not need to be populated for all RelNav techniques.

This is where you should store results for RELATIVE-POSITION techniques.

observed_bearings: List[NONEARRAY]

A list of the observed bearings in the image where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays or None if the target wasn't considered for some reason. If numpy arrays, they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, and CONSTRAINT techniques.
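
For example, after estimate() has run, the stored results for the first target could be inspected as follows. This is a sketch; the exact array shapes are assumptions based on the descriptions above.

    import numpy as np

    bearings = limb_matching.observed_bearings[0]  # pixel locations, (x, y)
    rel_pos = limb_matching.observed_positions[0]  # length-3, km, camera frame
    if bearings is not None and not np.isnan(bearings).any():
        print('observed limb point array shape:', bearings.shape)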

extraction_method: LimbExtractionMethods

The method to use to extract observed limb points from the image.

The valid options are provided in the LimbExtractionMethods enumeration.

interpolator: type

The type of interpolator to use for the image.

This is ignored if the extraction_method is not set to 'LIMB_SCANNING'.

limbs_camera: List[NONEARRAY]

The limb surface points with respect to the center of the target.

Until estimate() is called this list will be filled with None.

Each element of this list corresponds to the same element in the Scene.target_objs list.

recenter: bool

A flag specifying whether to locate the center of the target using a moment algorithm before beginning.

If the a priori knowledge of the bearing to the target is poor (outside of the body), then this flag will help to correct the initial error. See the moment_algorithm module for details.

details: List[Dict[str, Any]]

A list of dictionaries containing the details of the fit for each target, where each element corresponds to the same element in the Scene.target_objs list. Specifically, these dictionaries may contain the following keys:

  • 'Jacobian': The Jacobian matrix from the last completed iteration. Only available if the fit was successful.

  • 'Inlier Ratio': The ratio of inliers to outliers for the last completed iteration. Only available if the fit was successful.

  • 'Covariance': The 3x3 covariance matrix for the estimated relative position in the camera frame, based on the residuals. Only available if the fit was successful.

  • 'Number of iterations': The number of iterations it took for the system to converge. Only available if the fit was successful.

  • 'Surface Limb Points': The surface points that correspond to the limb points, expressed in the target-fixed, target-centered frame.

  • 'Failed': A human readable message describing what caused the fit to fail. Only present if the fit failed, so you can check for failure with something like 'Failed' in limb_matching.details[target_ind].

  • 'Prior Residuals': The sum of squares of the residuals from the prior iteration. Only available if the fit failed due to divergence.

  • 'Current Residuals': The sum of squares of the residuals from the current iteration. Only available if the fit failed due to divergence.

Summary of Methods

compute_jacobian

This method computes the linear change in the measurements (the distance between the predicted and observed limb points and the scan center) with respect to a change in the state vector.
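
The documentation does not state how these partials are formed, but as a generic illustration, a forward-difference approximation of such a Jacobian could be computed as follows (measure and state are hypothetical stand-ins for the measurement function and state vector):

    import numpy as np

    def numeric_jacobian(measure, state, step=1e-6):
        # forward-difference Jacobian of measure(state) -> m-vector with
        # respect to the n-vector state
        base = measure(state)
        jacobian = np.zeros((base.size, state.size))
        for ind in range(state.size):
            perturbed = state.copy()
            perturbed[ind] += step
            jacobian[:, ind] = (measure(perturbed) - base) / step
        return jacobian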

estimate

This method identifies the position of each target in the camera frame using limb matching.

extract_and_pair_limbs

Extract limb points from an image and pair them with the surface points on a target that created them.

reset

This method resets the observed/computed attributes, the details attribute, and the gif attributes to None.

target_generator

This method returns a generator which yields target_index, target pairs that are to be processed based on the input include_targets.
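
Conceptually, this generator behaves like the following sketch (assuming include_targets is an optional sequence of booleans; this is not the actual implementation):

    def target_generator(self, include_targets=None):
        # yield (target_index, target) pairs for each requested target
        for target_ind, target in enumerate(self.scene.target_objs):
            if include_targets is None or include_targets[target_ind]:
                yield target_ind, target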