LimbMatching.estimate

giant.relative_opnav.estimators.limb_matching

LimbMatching.estimate(image, include_targets=None)

This method identifies the position of each target in the camera frame using limb matching.

This method first extracts limb observations from the image and matches them to the targets in the scene. Then, for each target, the position is estimated from the limb observations as follows: the observed limb locations are paired with the surface locations on the target that could have produced them given the current estimate of the state (pair_limbs()), and the state vector is then updated in a least-squares fashion based on the residuals between the extracted and predicted limbs. This process is repeated until convergence or until the maximum number of iterations is reached, as sketched below.
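The core of that iteration is a Gauss-Newton style least-squares loop. The following is a minimal conceptual sketch of such a loop, not GIANT's actual implementation: the pair_limbs and predict_limbs callables, their signatures, and the state parameterization are all hypothetical stand-ins for the corresponding internal steps.

    import numpy as np

    def estimate_position_sketch(observed_limbs, pair_limbs, predict_limbs,
                                 initial_state, max_iterations=10, tolerance=1e-6):
        """Conceptual Gauss-Newton loop for limb-based position estimation.

        observed_limbs: (2, n) array of extracted limb points in the image.
        pair_limbs:     hypothetical callable(state, observed_limbs) -> (2, n)
                        observed points reordered to match the predicted points.
        predict_limbs:  hypothetical callable(state) -> ((2, n) predicted limb
                        points, (2*n, 3) Jacobian with respect to the state).
        """
        state = np.asarray(initial_state, dtype=float)

        for _ in range(max_iterations):
            # pair each observed limb point with the surface point that could
            # have produced it under the current state estimate
            paired = pair_limbs(state, observed_limbs)

            # predict the limb locations (and their sensitivity) for this state
            predicted, jacobian = predict_limbs(state)

            # stack the 2D point residuals into a single residual vector
            residuals = (paired - predicted).ravel(order="F")

            # least-squares state update from the limb residuals
            update, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
            state = state + update

            if np.linalg.norm(update) < tolerance:
                break  # converged

        return state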

Optionally, if the create_gif flag is set to True, this class will also create a gif showing how the predicted limb locations change at each iteration.
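A hedged example of enabling that option, assuming create_gif is a plain boolean attribute on the estimator instance:

    limb_matching.create_gif = True  # assumed boolean attribute on the estimator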

Warning

Before calling this method, be sure that the scene has been updated to correspond to the correct image time. This method does not update the scene automatically.
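For example, a minimal usage sketch, assuming the scene exposes an update method that accepts the image (the exact mechanism for synchronizing the scene to the image time depends on your setup):

    scene.update(image)            # assumed: bring target/camera states to the image time
    limb_matching.estimate(image)  # limb matching now runs against the correct geometry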

Parameters:
  • image (OpNavImage) – The image the limb matching algorithm should be applied to, as an OpNavImage

  • include_targets (List[bool] | None) – An argument specifying which targets should be processed for this image. If None, then all targets are processed (no, the irony is not lost on me…)
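For instance, a hypothetical call that processes only the first of two targets in the scene (the variable names are illustrative):

    # assuming the scene contains two targets; process only the first
    limb_matching.estimate(image, include_targets=[True, False])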