RelNavEstimator

giant.relative_opnav.estimators.estimator_interface_abc:

class giant.relative_opnav.estimators.estimator_interface_abc.RelNavEstimator(scene, camera, image_processing, **kwargs)[source]

This is the abstract base class for all RelNav techniques in GIANT that work with the RelativeOpNav user interface.

A RelNav technique in GIANT is something that extracts observables of targets from an image. Usually these targets are planetary or natural bodies, not stars or man-made objects (though it is possible to have a man-made object be a target). There are different variations on what exactly is extracted from the image, but most boil down into either a bearing measurement (the pixel location of the target in the image), a relative position measurement (the vector from the camera to the target in the camera frame), or a constraint measurement (paired bearing measurements of the same target in different images).

This class serves as a prototype for implementing a new RelNav technique in GIANT. It defines an abstract method, estimate(), which is the primary interface GIANT will use to apply the technique to extract observables from an image. It also defines some class attributes that should be overridden by subclasses, including technique, observable_type, relnav_handler, and generates_templates, which determine how the RelativeOpNav class interacts with the new technique. Finally, it defines some instance attributes, image_processing, scene, camera, computed_bearings, observed_bearings, computed_positions, observed_positions, templates, and details, which should be used for retrieving/storing information during the fit. This class also defines a concrete method, reset(), which is used by the RelativeOpNav class to prepare the instance of the technique for a new image/target pair.

For more details on defining/registering a new technique using this class as a template/super class, see the estimators documentation.

Note

Because this is an ABC, you cannot create an instance of this class.
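To make the ABC behavior concrete, here is a minimal sketch of the subclassing pattern using a stand-in base class (RelNavEstimatorSketch and MyCenterFinding are hypothetical names invented for illustration; the real base class is RelNavEstimator from giant.relative_opnav.estimators.estimator_interface_abc):

```python
from abc import ABC, abstractmethod


# Stand-in for RelNavEstimator, illustrating the ABC behavior only.
class RelNavEstimatorSketch(ABC):
    technique = None  # defaults to the defining module's name when None

    def __init__(self, scene, camera, image_processing, **kwargs):
        self.scene = scene
        self.camera = camera
        self.image_processing = image_processing

    @abstractmethod
    def estimate(self, image, include_targets=None):
        """Extract observables from the image for the requested targets."""


# Attempting to instantiate the ABC itself raises TypeError.
try:
    RelNavEstimatorSketch(None, None, None)
except TypeError as err:
    print("cannot instantiate ABC:", err)


# A concrete technique overrides estimate() (and should set technique).
class MyCenterFinding(RelNavEstimatorSketch):
    technique = "my_center_finding"

    def estimate(self, image, include_targets=None):
        pass  # real extraction work would go here


MyCenterFinding(None, None, None)  # instantiates fine
```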

Parameters:
  • scene (Scene) – The Scene object containing the target, light, and obscuring objects.

  • camera (Camera) – The Camera object containing the camera model and images to be utilized.

  • image_processing (ImageProcessing) – The ImageProcessing object to be used to process the images.

  • kwargs – Extra arguments that the technique may need for settings/other things. These are not used by this class itself, but are accepted so that subclasses can define additional settings without changing the interface.

technique: str | None = None

The name for the technique for registering with RelativeOpNav.

If None then the name will default to the name of the module where the class is defined.

This should typically be all lowercase and should not include any spaces or special characters except for _, as it will be used to make attribute/method names. (That is, MyEstimator.technique.isidentifier() should evaluate to True.)
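The identifier requirement can be checked with Python's built-in str.isidentifier() (the technique names below are made up for illustration):

```python
# Technique names are used to build attribute/method names on RelativeOpNav,
# so they must be valid Python identifiers: no spaces, underscores allowed.
print("moment_algorithm".isidentifier())  # True  - usable as a technique name
print("moment algorithm".isidentifier())  # False - contains a space
print("2d_correlator".isidentifier())     # False - starts with a digit
```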

observable_type: List[RelNavObservablesType | str] | RelNavObservablesType | str | None = None

The types of observables that are generated by this technique.

This should be a list of RelNavObservablesType enum values that specify what type(s) of observables this technique generates. It can also be None, in which case the default type of RelNavObservablesType.CENTER_FINDING will be assumed, or it can be a single string or RelNavObservablesType value if only one type of observable is generated.

If this is RelNavObservablesType.CUSTOM, then it must be the only type, and you must also define a class attribute relnav_handler: a function whose first and only positional argument is the RelativeOpNav instance that this technique was registered to and which accepts two keyword arguments, image_ind and include_targets, that control which image/target is processed.

relnav_handler: Callable | None = None

A custom handler for doing estimation/packaging the results into the RelativeOpNav instance.

Typically this should be None, unless observable_type is set to RelNavObservablesType.CUSTOM, in which case this must be a function whose first and only positional argument is the RelativeOpNav instance that this technique was registered to and which accepts two keyword arguments, image_ind and include_targets, that control which image/target is processed.

If observable_type is not RelNavObservablesType.CUSTOM, then this is ignored whether it is None or not.
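The required handler signature can be sketched as follows (the function name and body are hypothetical; only the calling convention, one positional argument plus the image_ind and include_targets keyword arguments, is prescribed):

```python
def my_custom_handler(relnav, image_ind=None, include_targets=None):
    # relnav is the RelativeOpNav instance this technique was registered to.
    # image_ind selects which image to process, and include_targets controls
    # which targets in the scene are considered.  A real handler would run
    # the estimation here and store the observables back onto relnav; this
    # sketch just echoes its arguments.
    return image_ind, include_targets


# The function would then be set as the technique's relnav_handler class
# attribute so RelativeOpNav can call it.
print(my_custom_handler(None, image_ind=2, include_targets=[True]))
```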

generates_templates: bool = False

A flag specifying whether this RelNav estimator generates and stores templates in the templates attribute.

computed_bearings: List[Sequence | ndarray | None]

A list of the computed (predicted) bearings in the image where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays, or None if the target wasn't considered for some reason. If numpy arrays, they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, CONSTRAINT techniques.

observed_bearings: List[Sequence | ndarray | None]

A list of the observed bearings in the image where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays, or None if the target wasn't considered for some reason. If numpy arrays, they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, CONSTRAINT techniques.
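As an illustration of the storage convention (all numbers below are made up), a technique processing a three-target scene might fill these lists as:

```python
import numpy as np

# Element i of each list corresponds to Scene.target_objs[i].
n_targets = 3
computed_bearings = [None] * n_targets
observed_bearings = [None] * n_targets

# Target 0 was found: pixel locations are stored as (x, y), i.e. (col, row).
computed_bearings[0] = np.array([512.3, 498.7])  # predicted location
observed_bearings[0] = np.array([514.1, 497.2])  # observed location

# Targets 1 and 2 were not considered (e.g. out of view) -> left as None.

residual = observed_bearings[0] - computed_bearings[0]
print(residual)  # observed-minus-computed pixel residual in (x, y)
```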

computed_positions: List[Sequence | ndarray | None]

A list of the computed relative position between the target and the camera in the image frame where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays, or None if the target wasn't considered or this type of measurement is not applicable. If numpy arrays, they should contain the relative position between the camera and the target as a length-3 array with units of kilometers in the camera frame. This does not need to be populated for all RelNav techniques.

This is where you should store results for RELATIVE-POSITION techniques.

observed_positions: List[Sequence | ndarray | None]

A list of the observed relative position between the target and the camera in the image frame where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays, or None if the target wasn't considered or this type of measurement is not applicable. If numpy arrays, they should contain the relative position between the camera and the target as a length-3 array with units of kilometers in the camera frame. This does not need to be populated for all RelNav techniques.

This is where you should store results for RELATIVE-POSITION techniques.

templates: List[Sequence | ndarray | None]

A list of rendered templates generated by this technique.

The list elements should be numpy arrays, or None if the target wasn't considered or this type of measurement is not applicable. If numpy arrays, they should contain the templates rendered for the target as 2D arrays.

If generates_templates is False this can be ignored.

details: List[Any | None]

This attribute should provide details from applying the technique to each target in the scene.

The list should be the same length as Scene.target_objs. Typically, if the technique was not applied to some of the targets, the corresponding elements should be None. Beyond that, each element of details should typically contain a dictionary providing information about the results that is not strictly needed for understanding what happened; however, this is not required, and you can use whatever structure you want to convey the information. Whatever you do, you should clearly document the structure for each technique so that the user knows what to expect.
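For example, a hypothetical center-finding technique might populate details like this for a three-target scene where the second target was skipped (the dictionary keys are illustrative only; each technique defines and documents its own structure):

```python
# Hypothetical details list; element i corresponds to Scene.target_objs[i].
details = [
    {"Correlation Score": 0.97, "Failed": False},
    None,  # technique was not applied to this target
    {"Correlation Score": 0.41, "Failed": True},
]

# Downstream code can skip targets the technique did not process.
processed = [d for d in details if d is not None]
print(len(processed))  # prints 2
```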

property scene: Scene

The scene which defines the a priori locations of all targets and light sources with respect to the camera.

You can assume that the scene has been updated for the appropriate image time inside of the class.

property camera: Camera

The camera instance that represents the camera used to take the images we are performing Relative OpNav on.

This is the source of the camera model, and may be used for other information about the camera as well. See the Camera documentation for details.

Summary of Methods

estimate

This method should apply the technique to a specified image for all targets specified in include_targets.

reset

This method resets the observed/computed attributes as well as the details attribute to have None for each target in scene.

target_generator

This method returns a generator which yields target_index, target pairs that are to be processed based on the input include_targets.
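The behavior of target_generator can be sketched as a plain generator over the scene's target list (a possible implementation under assumed semantics for include_targets, not the actual GIANT source):

```python
def target_generator(target_objs, include_targets=None):
    # Yield (target_index, target) pairs for each target to be processed.
    # include_targets is assumed to be either None (process every target) or
    # a boolean sequence the same length as target_objs.
    for ind, target in enumerate(target_objs):
        if include_targets is None or include_targets[ind]:
            yield ind, target


# Stand-in target list for illustration.
targets = ["itokawa", "didymos", "dimorphos"]
print(list(target_generator(targets, include_targets=[True, False, True])))
# [(0, 'itokawa'), (2, 'dimorphos')]
```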