SurfaceFeatureNavigation.estimate

giant.relative_opnav.estimators.sfn.sfn_class:

SurfaceFeatureNavigation.estimate(image, include_targets=None)

This method identifies the locations of surface features in the image through cross correlation of rendered templates with the image.

This method first checks to ensure that the appropriate correlator is set on the image_processing instance (which should be the sfn_correlator()). If it is not, a warning is printed and we set the correlator to be the sfn_correlator(), since this is required for surface feature navigation. Don't worry, we'll put things back the way they were when we're done :).
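
A minimal sketch of that check-and-restore pattern is shown below; the helper name, the image_processing object, and the work callable are hypothetical stand-ins used purely for illustration, not the actual implementation:

```python
import warnings
from typing import Any, Callable


def ensure_sfn_correlator(image_processing: Any, sfn_correlator: Callable,
                          work: Callable[[], None]) -> None:
    """Hypothetical helper: swap in the required correlator, run, then restore."""
    original = image_processing.correlator

    if original is not sfn_correlator:
        # warn that the wrong correlator was configured, as described above
        warnings.warn('image_processing.correlator is not the sfn_correlator; '
                      'switching to it for surface feature navigation')
        image_processing.correlator = sfn_correlator

    try:
        work()  # extract the features with the correct correlator in place
    finally:
        image_processing.correlator = original  # put things back the way they were
```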

This method also identifies the index into the Camera.images list for the image being processed. This is done by first checking identity (looking for the exact same object). If that fails, we fall back to checking equality (all of the pixel data is exactly the same); however, this could lead to a false pairing in some degenerate instances. As long as you are using this method as intended (and not copying/modifying the image array from the camera before sending it to this method), the identity check should work. We do this so that we can relocate each target to lie along the line of sight vector found by center finding (if done/provided) before looking for features; therefore, if you aren't seeding your SFN with center finding results, you don't need to worry about this.
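
For illustration, the identity-then-equality lookup could look like the following sketch (find_image_index is a hypothetical helper, not GIANT's code):

```python
import numpy as np


def find_image_index(images: list, image: np.ndarray) -> int:
    """Hypothetical sketch of the identity-then-equality lookup described above."""
    # first pass: identity, which finds the exact same object and cannot
    # produce a false pairing
    for index, candidate in enumerate(images):
        if candidate is image:
            return index

    # second pass: equality of the pixel data, which can falsely pair
    # distinct images whose arrays happen to match exactly
    for index, candidate in enumerate(images):
        if np.array_equal(candidate, image):
            return index

    raise ValueError('image was not found in the images list')
```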

Once the initial preparation is complete, for each requested target that is a FeatureCatalogue we seek feature locations that are visible in the image. This is done by first predicting which features from the catalogue should be visible in the image, using the a priori relative state knowledge between the camera and the feature catalogue together with the FeatureCatalogue.feature_finder function (usually an instance of VisibleFeatureFinder). Once the potentially visible features have been determined, we render a predicted template of each feature using a single bounce ray tracer. We then do spatial cross correlation between the template and the image within a specified search region (if the search region is too large we attempt global frequency correlation first) to generate a correlation surface. From this correlation surface we identify the peak and use it to locate the center of the feature in the image.

Once this has been completed for all potentially visible features for a given target, we optionally attempt to solve a PnP problem to refine the relative position and orientation of the camera with respect to the target based on the observed feature locations. If we successfully solve the PnP problem, we then iterate one more time through the entire process (but not the PnP solver), and the results of that final pass become the observed locations stored in the observed_bearings attribute.
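
As a conceptual sketch of the correlation step, the following uses plain (unnormalized) cross correlation from scipy as a stand-in for GIANT's sfn correlator, turning a correlation surface peak into a feature center estimate within a search region:

```python
import numpy as np
from scipy.signal import correlate2d


def locate_template_center(search_region: np.ndarray,
                           template: np.ndarray) -> tuple:
    """Conceptual sketch: locate a template center via a correlation surface peak.

    Plain cross correlation stands in here for GIANT's normalized spatial
    correlator; this is an illustration, not the library's implementation.
    """
    # slide the template over the search region to build the correlation surface
    surface = correlate2d(search_region, template, mode='valid')

    # the peak of the surface gives the best-fit top-left template placement
    peak_row, peak_col = np.unravel_index(np.argmax(surface), surface.shape)

    # convert the top-left placement into the template center location
    center_row = peak_row + (template.shape[0] - 1) / 2
    center_col = peak_col + (template.shape[1] - 1) / 2
    return center_row, center_col
```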

More details about many of these steps can be found in the render() and pnp_solver() methods.

Warning

Before calling this method be sure that the scene has been updated to correspond to the correct image time. This method does not update the scene automatically.

Parameters:
  • image (OpNavImage) – The image to locate the targets in

  • include_targets (List[bool] | None) – A list of booleans specifying whether to process the corresponding target in Scene.target_objs, or None. If None, all targets are processed.
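
Putting the warning and the parameters together, a typical call might look like the following sketch (sfn and image are assumed to come from your own setup, and Scene.update is assumed to place the scene at the image epoch):

```python
# usage sketch only: `sfn` is assumed to be a configured
# SurfaceFeatureNavigation instance and `image` an OpNavImage from its camera
sfn.scene.update(image)  # estimate() will not update the scene for you

# process only the first target in Scene.target_objs (assuming two targets here)
sfn.estimate(image, include_targets=[True, False])
```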