RelativeOpNav¶
giant.relative_opnav.relnav_class:
- class giant.relative_opnav.relnav_class.RelativeOpNav(camera, scene, extended_body_cutoff=3, save_templates=False, image_processing=None, image_processing_kwargs=None, cross_correlation=None, cross_correlation_kwargs=None, unresolved=None, unresolved_kwargs=None, ellipse_matching=None, ellipse_matching_kwargs=None, limb_matching=None, limb_matching_kwargs=None, moment_algorithm=None, moment_algorithm_kwargs=None, sfn=None, sfn_kwargs=None, **kwargs)[source]¶
This class serves as the main user interface for performing relative optical navigation.

The class acts as a container for the Camera, ImageProcessing, and Scene instances, as well as for instances of all of the registered RelNav techniques. By default the registered RelNav techniques are XCorrCenterFinding mapped to cross_correlation, EllipseMatching mapped to ellipse_matching, LimbMatching mapped to limb_matching, MomentAlgorithm mapped to moment_algorithm, and UnresolvedCenterFinding mapped to unresolved. Besides storing all of these objects, it handles data transfer and collection between them. Therefore, in general this class will be the exclusive interface for doing Relative OpNav.

For each registered technique, this class provides a few useful capabilities. First, it creates a property that returns the current instance of the class that implements the technique, making it easy to inspect and modify its settings. Second, it provides a {technique}_estimate method which can be used to apply the technique to specific or all image/target pairs. These _estimate methods also handle providing the appropriate data to the initialized objects and collecting and storing the resulting data. Finally, for each registered technique this class provides the opportunity to pass either a pre-initialized instance of the object as a keyword argument (using {technique}=instance) or the keyword arguments to use to initialize the instance (using {technique}_kwargs=dict) as part of the __init__ method for this class.

This class also provides a simple method for automatically determining which RelNav technique to use based on the expected apparent diameter of a target in the image, as well as the type of the shape representing the target in the scene. This method, auto_estimate(), is generally sufficient for most missions doing typical RelNav work and makes doing RelNav easy.

For most RelNav types, the results will be collected and stored in center_finding_results, relative_position_results, landmark_results, limb_results, and saved_templates, depending on the type of RelNav used (where each type is stored is described in the class documentation for the technique). In addition, each technique can store more details about what occurred in the fit in the {technique}_details attributes, which are lists of lists where the outer list corresponds to the images and the inner lists correspond to the targets in the scene. Typically these details are stored as dictionaries with descriptive key names to indicate what each value means, but they can technically be any Python object. The documentation for each technique describes what is included in its details output.

When initializing this class, most of the initial options can be set using the *_kwargs inputs with dictionaries specifying the keyword arguments and values. Alternatively, you can provide already initialized instances of the objects if you want a little more control or want to use a subclass instead of the registered class itself. See the documentation for the registered techniques and the ImageProcessing class for more details about what settings can be specified at initialization.

It is possible to register new techniques to use with this class, which will automatically create many of the benefits just discussed. For details on how to do this, refer to the relnav_class, relnav_estimators, and register() documentation.

- Parameters:
- camera (Camera) – The Camera containing the camera model and images to be analyzed.
- scene (Scene) – The Scene describing the a priori knowledge of the relative state between the camera and the targets.
- extended_body_cutoff (Real) – The apparent diameter threshold in pixels at which auto_estimate() will switch from using unresolved techniques to using resolved techniques for extracting observables from the images.
- save_templates (bool) – A flag specifying whether to save the templates generated for cross-correlation based techniques to the saved_templates attribute.
- image_processing (ImageProcessing | None) – An already initialized instance of ImageProcessing (or a subclass). If not None then image_processing_kwargs are ignored.
- image_processing_kwargs (dict | None) – The keyword arguments to pass to the ImageProcessing class constructor. These are ignored if argument image_processing is not None.
- cross_correlation (XCorrCenterFinding | None) – An already initialized instance of XCorrCenterFinding (or a subclass). If not None then cross_correlation_kwargs are ignored.
- cross_correlation_kwargs (dict | None) – The keyword arguments to pass to the XCorrCenterFinding class constructor. These are ignored if argument cross_correlation is not None.
- unresolved (UnresolvedCenterFinding | None) – An already initialized instance of UnresolvedCenterFinding (or a subclass). If not None then unresolved_kwargs are ignored.
- unresolved_kwargs (dict | None) – The keyword arguments to pass to the UnresolvedCenterFinding class constructor. These are ignored if argument unresolved is not None.
- ellipse_matching (EllipseMatching | None) – An already initialized instance of EllipseMatching (or a subclass). If not None then ellipse_matching_kwargs are ignored.
- ellipse_matching_kwargs (dict | None) – The keyword arguments to pass to the EllipseMatching class constructor. These are ignored if argument ellipse_matching is not None.
- limb_matching (LimbMatching | None) – An already initialized instance of LimbMatching (or a subclass). If not None then limb_matching_kwargs are ignored.
- limb_matching_kwargs (dict | None) – The keyword arguments to pass to the LimbMatching class constructor. These are ignored if argument limb_matching is not None.
- moment_algorithm (MomentAlgorithm | None) – An already initialized instance of MomentAlgorithm (or a subclass). If not None then moment_algorithm_kwargs are ignored.
- moment_algorithm_kwargs (dict | None) – The keyword arguments to pass to the MomentAlgorithm class constructor. These are ignored if argument moment_algorithm is not None.
- sfn (SurfaceFeatureNavigation | None) – An already initialized instance of SurfaceFeatureNavigation (or a subclass). If not None then sfn_kwargs are ignored.
- sfn_kwargs (dict | None) – The keyword arguments to pass to the SurfaceFeatureNavigation class constructor. These are ignored if argument sfn is not None.
- kwargs – Extra arguments for other registered RelNav techniques. These should take the same form as above ({technique_name}={technique_instance} or {technique_name}_kwargs=dict()). Any that are not supplied default to None.
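The instance-or-kwargs convention described above can be sketched with stand-in classes. RelativeOpNavSketch and this XCorrCenterFinding stand-in are illustrative only, not the actual GIANT implementation:

```python
from typing import Optional


class XCorrCenterFinding:
    """Stand-in for a registered RelNav technique class."""

    def __init__(self, search_region: int = 10, grid_size: int = 1):
        self.search_region = search_region
        self.grid_size = grid_size


class RelativeOpNavSketch:
    """Minimal sketch of the instance-or-kwargs initialization convention."""

    def __init__(self,
                 cross_correlation: Optional[XCorrCenterFinding] = None,
                 cross_correlation_kwargs: Optional[dict] = None):
        if cross_correlation is not None:
            # a pre-initialized instance wins and the kwargs are ignored
            self.cross_correlation = cross_correlation
        else:
            # otherwise build a fresh instance from the supplied kwargs
            self.cross_correlation = XCorrCenterFinding(**(cross_correlation_kwargs or {}))


# both styles yield a configured technique instance
by_kwargs = RelativeOpNavSketch(cross_correlation_kwargs={"search_region": 50})
by_instance = RelativeOpNavSketch(cross_correlation=XCorrCenterFinding(search_region=50))
```

The same pattern repeats for every registered technique, which is why each one appears twice in the signature.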
- center_finding_results: ndarray¶
This array contains center finding results after a center finding technique is used for each image in the camera.
The array is an n×m array with RESULTS_DTYPE, where n is the number of images in the camera (all images, not just those that are turned on) and m is the number of targets in the scene. It is initialized to a zero array. The best way to check whether an entry is still empty is that all of its string columns will have zero length. If a method fails for a particular image/target pair, the predicted columns will instead be filled with NaN.
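These storage conventions can be sketched with a hypothetical stand-in for RESULTS_DTYPE; the field names below are illustrative, the real ones are defined in GIANT:

```python
import numpy as np

# hypothetical stand-in for RESULTS_DTYPE; the actual field names live in GIANT
RESULTS_DTYPE = np.dtype([("predicted", np.float64, (2,)),
                          ("observed", np.float64, (2,)),
                          ("type", "U20")])

n_images, n_targets = 3, 2  # all images in the camera x targets in the scene
results = np.zeros((n_images, n_targets), dtype=RESULTS_DTYPE)

# an entry is still empty while its string columns have zero length
is_empty = results["type"][0, 0] == ""

# a failed fit is flagged by filling the predicted column with NaN
results["predicted"][1, 0] = np.nan
```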
- relative_position_results: ndarray¶
This array contains relative position results after a relative position technique is used for each image in the camera.
The array is an n×m array with RESULTS_DTYPE, where n is the number of images in the camera (all images, not just those that are turned on) and m is the number of targets in the scene. It is initialized to a zero array. The best way to check whether an entry is still empty is that all of its string columns will have zero length. If a method fails for a particular image/target pair, the predicted columns will instead be filled with NaN.
- landmark_results: List[List[Sequence | ndarray | None]]¶
This list of lists contains landmark results for each image/target in the scene after a landmark technique is used.
The list of lists is n×m, where n is the number of images in the camera (all images, not just those that are turned on) and m is the number of targets in the scene. Each element is initialized to None and is only filled in when a landmark technique is applied to that image/target combination. After a landmark technique has been applied, the element will be a 1D numpy array with dtype RESULTS_DTYPE whose size equals the number of processed landmarks for that image/target pair. Each element can (and likely will) have a different length.
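The list-of-lists layout can be sketched as follows; the dtype fields shown are illustrative stand-ins for GIANT's actual RESULTS_DTYPE:

```python
import numpy as np

n_images, n_targets = 3, 2
# every image/target slot starts as None until a landmark technique fills it
landmark_results = [[None] * n_targets for _ in range(n_images)]

# illustrative dtype; one row per processed landmark in the image/target pair
landmark_dtype = np.dtype([("predicted", np.float64, (2,)),
                           ("observed", np.float64, (2,))])

# e.g. 5 landmarks were processed for image 0 / target 1
landmark_results[0][1] = np.zeros(5, dtype=landmark_dtype)
```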
- limb_results: List[List[Sequence | ndarray | None]]¶
This list of lists contains limb results for each image/target in the scene after a limb technique is used.
The list of lists is n×m, where n is the number of images in the camera (all images, not just those that are turned on) and m is the number of targets in the scene. Each element is initialized to None and is only filled in when a limb technique is applied to that image/target combination. After a limb technique has been applied, the element will be a 1D numpy array with dtype RESULTS_DTYPE whose size equals the number of processed limbs for that image/target pair. Each element can have a different length.
- saved_templates¶
This list of lists contains the templates generated by many of the techniques for inspection.
The list of lists is n×m, where n is the number of images in the camera (all images, not just those that are turned on) and m is the number of targets in the scene. Each element is initialized to None and is only filled in when a method that generates a template is applied to that image/target pair. Each filled element is generally either a 2D numpy array containing the template (when doing center finding) or a list of numpy arrays containing the templates for each landmark (when doing landmark navigation).
- save_templates: bool¶
This flag specifies whether to save rendered templates from techniques that rely on cross-correlation.
While it can be nice to have the templates, especially for generating summary displays or visually investigating whether results are reasonable, they can take up a lot of memory, so consider this before turning the option on.
- extended_body_cutoff: float¶
The apparent diameter of a target in pixels when we should switch from using unresolved techniques to resolved techniques for center finding.
This is only used in auto_estimate() and is described further in that documentation.
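The cutoff logic can be sketched as follows; the helper names and the small-angle camera model are assumptions for illustration, not GIANT's actual implementation:

```python
import math


def apparent_diameter_pixels(radius_km: float, distance_km: float,
                             focal_length_mm: float, pixel_pitch_mm: float) -> float:
    """Approximate apparent diameter of a spherical target in pixels."""
    angular_diameter = 2.0 * math.atan(radius_km / distance_km)  # radians
    # project the angular size through the focal length onto the detector
    return angular_diameter * focal_length_mm / pixel_pitch_mm


def choose_technique(apparent_diameter: float, extended_body_cutoff: float = 3.0) -> str:
    # below the cutoff the target is effectively a point source
    return "unresolved" if apparent_diameter < extended_body_cutoff else "resolved"


# a 0.5 km radius body at 10,000 km with a 100 mm lens and 10 micron pixels
diameter = apparent_diameter_pixels(0.5, 10000.0, 100.0, 0.01)
```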
- unresolved_details¶
This attribute stores details from the unresolved technique for each image/target pair that has been processed.

The details are stored as a list of lists of objects (typically dictionaries) where each element of the outer list corresponds to the same element number in the Camera.images list and each element of the inner list corresponds to the same element number in the Scene.target_objs list.

If an image/target pair has not been processed by the unresolved technique then the corresponding element will still be set to None.

For a description of what the provided details include, see the UnresolvedCenterFinding.details documentation.
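The indexing convention shared by all of the {technique}_details attributes can be sketched like this (the dictionary keys shown are purely illustrative):

```python
n_images, n_targets = 4, 2
# details mirror the ordering of Camera.images (outer) and Scene.target_objs (inner)
unresolved_details = [[None] * n_targets for _ in range(n_images)]

# after processing image 2 against target 0, the slot typically holds a dict
unresolved_details[2][0] = {"summed DN": 1042.0, "peak SNR": 25.3}  # illustrative keys

# slots that are still None have simply not been processed by this technique
unprocessed = [(i, j) for i in range(n_images) for j in range(n_targets)
               if unresolved_details[i][j] is None]
```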
- cross_correlation_details¶
This attribute stores details from the cross_correlation technique for each image/target pair that has been processed.

The details are stored as a list of lists of objects (typically dictionaries) where each element of the outer list corresponds to the same element number in the Camera.images list and each element of the inner list corresponds to the same element number in the Scene.target_objs list.

If an image/target pair has not been processed by the cross_correlation technique then the corresponding element will still be set to None.

For a description of what the provided details include, see the XCorrCenterFinding.details documentation.
- sfn_details¶
This attribute stores details from the sfn technique for each image/target pair that has been processed.

The details are stored as a list of lists of objects (typically dictionaries) where each element of the outer list corresponds to the same element number in the Camera.images list and each element of the inner list corresponds to the same element number in the Scene.target_objs list.

If an image/target pair has not been processed by the sfn technique then the corresponding element will still be set to None.

For a description of what the provided details include, see the SurfaceFeatureNavigation.details documentation.
- limb_matching_details¶
This attribute stores details from the limb_matching technique for each image/target pair that has been processed.

The details are stored as a list of lists of objects (typically dictionaries) where each element of the outer list corresponds to the same element number in the Camera.images list and each element of the inner list corresponds to the same element number in the Scene.target_objs list.

If an image/target pair has not been processed by the limb_matching technique then the corresponding element will still be set to None.

For a description of what the provided details include, see the LimbMatching.details documentation.
- ellipse_matching_details¶
This attribute stores details from the ellipse_matching technique for each image/target pair that has been processed.

The details are stored as a list of lists of objects (typically dictionaries) where each element of the outer list corresponds to the same element number in the Camera.images list and each element of the inner list corresponds to the same element number in the Scene.target_objs list.

If an image/target pair has not been processed by the ellipse_matching technique then the corresponding element will still be set to None.

For a description of what the provided details include, see the EllipseMatching.details documentation.
- property camera: Camera¶
The camera instance to perform OpNav on.
This should be an instance of the Camera class or one of its subclasses.

See the Camera class documentation for more details.
- property image_processing: ImageProcessing¶
The ImageProcessing instance to use when doing image processing on the images.

This must be an instance of the ImageProcessing class.

See the ImageProcessing class documentation for more details.
- property model: CameraModel¶
This alias returns the current camera model from the camera attribute.
It is provided for convenience since the camera model is used frequently.
- moment_algorithm_details¶
This attribute stores details from the moment_algorithm technique for each image/target pair that has been processed.

The details are stored as a list of lists of objects (typically dictionaries) where each element of the outer list corresponds to the same element number in the Camera.images list and each element of the inner list corresponds to the same element number in the Scene.target_objs list.

If an image/target pair has not been processed by the moment_algorithm technique then the corresponding element will still be set to None.

For a description of what the provided details include, see the MomentAlgorithm.details documentation.
- property scene: Scene¶
This property stores the scene describing the a priori conditions that the camera observed.
This is used to communicate both where to expect the target in the image and how to render what we think the target should look like for techniques that use cross-correlation. For more details see the scene documentation.
- property cross_correlation: XCorrCenterFinding¶
The XCorrCenterFinding instance to use when extracting center finding observables from images using cross-correlation.

This should be an instance of the XCorrCenterFinding class or a subclass.

See the XCorrCenterFinding documentation for more details.
- property ellipse_matching: EllipseMatching¶
The EllipseMatching instance to use when extracting center finding observables from images using ellipse matching.

This should be an instance of the EllipseMatching class or a subclass.

See the EllipseMatching documentation for more details.
- property limb_matching: LimbMatching¶
The LimbMatching instance to use when extracting center finding observables from images using limb matching.

This should be an instance of the LimbMatching class or a subclass.

See the LimbMatching documentation for more details.
- property moment_algorithm: MomentAlgorithm¶
The MomentAlgorithm instance to use when extracting center finding observables from images using the moment algorithm.

This should be an instance of the MomentAlgorithm class or a subclass.

See the MomentAlgorithm documentation for more details.
- property unresolved: UnresolvedCenterFinding¶
The UnresolvedCenterFinding instance to use when extracting center finding observables from images of unresolved targets.

This should be an instance of the UnresolvedCenterFinding class or a subclass.

See the UnresolvedCenterFinding documentation for more details.
Summary of Methods
- This is essentially an alias to the …
- This method attempts to automatically determine the best RelNav technique to use for each image/target pair considered out of the most common RelNav techniques.
- This method loops over and applies the …
- This method extracts observables from image(s) using the requested worker and stores them according to the types of observables generated by the technique.
- This method loops over and applies the …
- This method loops over and applies the …
- This method loops over and applies the …
- This is the default setup for processing an image through a RelNav technique and storing the results for a single image.
- This class method registers a new RelNav technique with the RelativeOpNav class.
- This method replaces the existing image processing instance with a new instance using the initial …
- This method loops over and applies the …
- This method updates the attributes of the …
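The dynamic registration performed by register() can be sketched generically. RelNavHost and DemoTechnique below are invented stand-ins, not GIANT classes; the sketch only shows how a class method could attach a property and a {technique}_estimate method at registration time:

```python
class RelNavHost:
    """Minimal sketch of dynamic RelNav technique registration."""

    _registered: dict = {}

    @classmethod
    def register(cls, name, technique_cls):
        # remember the technique class so __init__ can default-construct it
        cls._registered = {**cls._registered, name: technique_cls}
        attr = "_" + name

        # expose the stored instance through a property named after the technique
        setattr(cls, name, property(lambda self: getattr(self, attr)))

        # create a {technique}_estimate method that delegates to the instance
        def estimate(self, image):
            return getattr(self, attr).estimate(image)

        setattr(cls, name + "_estimate", estimate)

    def __init__(self, **kwargs):
        for name, technique_cls in self._registered.items():
            instance = kwargs.get(name)
            instance_kwargs = kwargs.get(name + "_kwargs") or {}
            # a pre-initialized instance wins; otherwise build one from the kwargs
            setattr(self, "_" + name,
                    instance if instance is not None else technique_cls(**instance_kwargs))


class DemoTechnique:
    """Invented technique with a single tunable setting."""

    def __init__(self, window=8):
        self.window = window

    def estimate(self, image):
        return ("center", image)


RelNavHost.register("demo", DemoTechnique)
nav = RelNavHost(demo_kwargs={"window": 16})
```

After registration, `nav.demo` returns the configured instance and `nav.demo_estimate(...)` delegates to it, mirroring the property and {technique}_estimate conveniences described above.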