EllipseMatching

giant.relative_opnav.estimators.ellipse_matching:

class giant.relative_opnav.estimators.ellipse_matching.EllipseMatching(scene, camera, image_processing, limb_scanner=None, extraction_method=LimbExtractionMethods.EDGE_DETECTION, interpolator=scipy.interpolate.RegularGridInterpolator, recenter=True)

This class implements GIANT's version of limb-based OpNav for regular bodies.

The class provides an interface to perform limb-based OpNav for each target body that is predicted to be in an image. It does this by looping through each requested target object contained in the Scene.target_objs attribute. For each target, the algorithm:

  1. If using limb scanning to extract the limbs, and recenter is requested, identifies the center of brightness for each target using the moment_algorithm and moves the a priori target so that it lies along that line of sight

  2. Extracts the observed limbs from the image and pairs them to the target

  3. Estimates the relative position between the target and the camera using the observed limbs and the steps discussed in the :mod:`.ellipse_matching` documentation

  4. Uses the estimated position to get the predicted limb surface location and predicted limb locations in the image

When all of the required data has been successfully loaded into an instance of this class, the estimate() method is used to perform the estimation for the requested image. The results are stored into the observed_bearings attribute for the observed limb locations and the observed_positions attribute for the estimated relative position between the target and the camera. In addition, the predicted location for the limbs for each target are stored in the computed_bearings attribute and the a priori relative position between the target and the camera is stored in the computed_positions attribute. Finally, the details about the fit are stored as a dictionary in the appropriate element in the details attribute. Specifically, these dictionaries will contain the following keys.

'Covariance'
    The 3x3 covariance matrix for the estimated relative position in the camera frame based on the residuals. This key is only present if the fit was successful.

'Surface Limb Points'
    The surface points that correspond to the limb points in the target-fixed, target-centered frame.

'Failed'
    A message indicating why the fit failed. This key is only present if the fit failed (so you can check something like 'Failed' in limb_matching.details[target_ind] to see whether the fit failed). The message is a human-readable description of what caused the failure.
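The failure check described above can be sketched with plain dictionaries standing in for elements of the details list (the dictionary contents here are illustrative stand-ins, not values produced by GIANT):

```python
# Plain dicts standing in for elements of the details attribute.
details_success = {
    "Covariance": [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
    "Surface Limb Points": [(1.2, 0.1, 0.0)],
}
details_failure = {"Failed": "No limb points could be paired to the target"}

def fit_succeeded(target_details):
    """Return True when the fit for this target did not fail.

    The 'Failed' key is only present when the fit failed, so its absence
    indicates success.
    """
    return "Failed" not in target_details

print(fit_succeeded(details_success))   # True
print(fit_succeeded(details_failure))   # False
```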

Warning

Before calling the estimate() method be sure that the scene has been updated to correspond to the correct image time. This class does not update the scene automatically.

Parameters:
  • scene (Scene) – The Scene object containing the target, light, and obscuring objects.

  • camera (Camera) – The Camera object containing the camera model and the images to be utilized.

  • image_processing (ImageProcessing) – The ImageProcessing object to be used to process the images.

  • limb_scanner (LimbScanner | None) – The LimbScanner object containing the limb scanning settings.

  • extraction_method (LimbExtractionMethods | str) – The method to use to extract the observed limbs from the image. Should be 'LIMB_SCANNING' or 'EDGE_DETECTION'. See LimbExtractionMethods for details.

  • interpolator (type) – The type of image interpolator to use if the extraction method is set to LIMB_SCANNING.

  • recenter (bool) – A flag specifying whether to first estimate the center of the target using the moment algorithm to get a fast, rough estimate of the center of figure.

technique: str = 'ellipse_matching'

The name of the technique identifier in the RelativeOpNav class.

property camera: Camera

The camera instance that represents the camera used to take the images we are performing Relative OpNav on.

This is the source of the camera model, and may be used for other information about the camera as well. See the Camera class documentation for details.

generates_templates: bool = False

A flag specifying whether this RelNav estimator generates and stores templates in the templates attribute.

relnav_handler: Callable | None = None

A custom handler for doing estimation/packaging the results into the RelativeOpNav instance.

Typically this should be None, unless the observable_type is set to RelNavObservablesType.CUSTOM, in which case this must be a function whose first and only positional argument is the RelativeOpNav instance that this technique was registered to, and which accepts two keyword arguments, image_ind and include_targets, that control which image/target is processed.

If observable_type is not RelNavObservablesType.CUSTOM then this is ignored whether it is None or not.

property scene: Scene

The scene which defines the a priori locations of all targets and light sources with respect to the camera.

You can assume that the scene has been updated for the appropriate image time inside of the class.

computed_positions: List[Sequence | ndarray | None]

A list of the computed relative position between the target and the camera in the image frame where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays or None if the target wasn't considered or this type of measurement is not applicable. If numpy arrays, they should contain the relative position between the camera and the target as a length-3 array with units of kilometers in the camera frame. This does not need to be populated for all RelNav techniques.

This is where you should store results for RELATIVE-POSITION techniques.

computed_bearings: List[Sequence | ndarray | None]

A list of the computed (predicted) bearings in the image where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays or None if the target wasn't considered for some reason. If numpy arrays, they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, and CONSTRAINT techniques.

observed_positions: List[Sequence | ndarray | None]

A list of the observed relative position between the target and the camera in the image frame where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays or None if the target wasn't considered or this type of measurement is not applicable. If numpy arrays, they should contain the relative position between the camera and the target as a length-3 array with units of kilometers in the camera frame. This does not need to be populated for all RelNav techniques.

This is where you should store results for RELATIVE-POSITION techniques.

observed_bearings: List[Sequence | ndarray | None]

A list of the observed bearings in the image where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays or None if the target wasn't considered for some reason. If numpy arrays, they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, and CONSTRAINT techniques.
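The bookkeeping for these four result lists can be sketched with plain Python stand-ins (the names mirror the attributes documented here; the values are illustrative, not produced by GIANT):

```python
# Illustrative stand-ins for the per-target result lists described above.
# Each list has one element per entry in Scene.target_objs; an element is
# None when the corresponding target was not processed.

target_objs = ["target_0", "target_1"]          # stand-in for Scene.target_objs

# Predicted and observed limb pixel locations as (x, y) pairs, and the
# a priori / estimated camera-to-target positions (km, camera frame).
computed_bearings = [[(100.2, 250.1), (101.7, 248.3)], None]
observed_bearings = [[(100.9, 250.4), (102.1, 248.0)], None]
computed_positions = [(12.0, -3.5, 850.0), None]
observed_positions = [(12.4, -3.2, 849.1), None]

for ind, target in enumerate(target_objs):
    if observed_positions[ind] is None:
        print(f"{target}: not processed")
    else:
        print(f"{target}: estimated position {observed_positions[ind]}")
```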

observable_type: List[RelNavObservablesType] = [RelNavObservablesType.LIMB, RelNavObservablesType.RELATIVE_POSITION]

The type of observables this technique generates.

extraction_method: LimbExtractionMethods

The method to use to extract observed limb points from the image.

The valid options are provided in the LimbExtractionMethods enumeration.

interpolator: type

The type of interpolator to use for the image.

This is ignored if the extraction_method is not set to 'LIMB_SCANNING'.

limbs_camera: List[Sequence | ndarray | None]

The limb surface points with respect to the center of the target.

Until estimate() is called this list will be filled with None.

Each element of this list corresponds to the same element in the Scene.target_objs list.

recenter: bool

A flag specifying whether to locate the center of the target using a moment algorithm before beginning.

If the a priori knowledge of the bearing to the target is poor (outside of the body) then this flag will help to correct the initial error. See the moment_algorithm module for details.
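The moment-based recentering relies on the standard center-of-brightness computation (first-order image moments divided by the zeroth moment). A minimal sketch on a plain 2-D list, independent of GIANT's moment_algorithm module:

```python
def center_of_brightness(image):
    """Compute the center of brightness (x, y) = (col, row) of a 2-D image
    using image moments: x_c = sum(x*I)/sum(I), y_c = sum(y*I)/sum(I)."""
    m00 = m10 = m01 = 0.0
    for row, line in enumerate(image):
        for col, intensity in enumerate(line):
            m00 += intensity
            m10 += col * intensity
            m01 += row * intensity
    return m10 / m00, m01 / m00

# A small synthetic "image" with all brightness concentrated at (col=2, row=1).
image = [[0, 0, 0, 0],
         [0, 0, 4, 0],
         [0, 0, 0, 0]]
print(center_of_brightness(image))  # (2.0, 1.0)
```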

details: List[Dict[str, Any]]

A list of dictionaries giving details about the fit for each target, where each element corresponds to the same element in the Scene.target_objs list. Each dictionary may contain the following keys.

'Covariance'
    The 3x3 covariance matrix for the estimated relative position in the camera frame based on the residuals. This key is only present if the fit was successful.

'Surface Limb Points'
    The surface points that correspond to the limb points in the target-fixed, target-centered frame.

'Failed'
    A message indicating why the fit failed. This key is only present if the fit failed (so you can check something like 'Failed' in limb_matching.details[target_ind] to see whether the fit failed). The message is a human-readable description of what caused the failure.

Summary of Methods

estimate

This method identifies the position of each target in the camera frame using ellipse matching.

extract_and_pair_limbs

Extract and pair limb points in an image to the surface point on a target that created it.

reset

This method resets the observed/computed attributes, the details attribute, and the limb attributes to None.

target_generator

This method returns a generator which yields target_index, target pairs that are to be processed based on the input include_targets.
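The include_targets filtering can be sketched as a simple generator. This is a hypothetical reimplementation under the assumption that include_targets is either None (process every target) or a sequence of booleans aligned with target_objs; the target names are made up for illustration:

```python
def target_generator(target_objs, include_targets=None):
    """Yield (target_index, target) pairs for the targets to be processed.

    Assumes include_targets is None (process everything) or a sequence of
    booleans aligned with target_objs, mirroring the documented behavior.
    """
    for ind, target in enumerate(target_objs):
        if include_targets is None or include_targets[ind]:
            yield ind, target

targets = ["itokawa", "bennu", "ryugu"]
print(list(target_generator(targets, include_targets=[True, False, True])))
# [(0, 'itokawa'), (2, 'ryugu')]
```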