RelativeOpNav¶
giant.relative_opnav.relnav_class:
- class giant.relative_opnav.relnav_class.RelativeOpNav(
      camera,
      scene,
      extended_body_cutoff=3,
      save_templates=False,
      cross_correlation=None,
      cross_correlation_options=None,
      unresolved=None,
      unresolved_options=UnresolvedCenterFindingOptions(phase_correction_type=PhaseCorrectionType.SIMPLE, brdf=McEwenIllumination(), search_distance=15, apply_phase_correction=False, point_of_interest_finder_options=None),
      ellipse_matching=None,
      ellipse_matching_options=EllipseMatchingOptions(extraction_method=LimbExtractionMethods.EDGE_DETECTION, limb_edge_detection_options=None, limb_scanner_options=None, recenter=True),
      limb_matching=None,
      limb_matching_options=LimbMatchingOptions(extraction_method=LimbExtractionMethods.EDGE_DETECTION, limb_edge_detection_options=None, limb_scanner_options=None, state_atol=1e-06, state_rtol=0.0001, residual_atol=1e-10, residual_rtol=0.0001, max_iters=10, recenter=True, discard_outliers=True, create_gif=False, gif_file='limb_match_summary_{}_{}.gif'),
      moment_algorithm=None,
      moment_algorithm_options=MomentAlgorithmOptions(phase_correction_type=PhaseCorrectionType.SIMPLE, brdf=McEwenIllumination(), use_apparent_area=True, apparent_area_margin_of_safety=2, search_distance=None, apply_phase_correction=True, image_segmenter_options=None),
      constraint_matching=None,
      constraint_matching_options=None,
      sfn=None,
      sfn_options=SurfaceFeatureNavigationOptions(brdf=McEwenIllumination(), rays=None, grid_size=1, template_overflow_bounds=-1, peak_finder=quadric_peak_finder_2d, min_corr_score=0.3, blur=True, search_region=None, run_pnp_solver=False, pnp_ransac_iterations=0, second_search_region=None, measurement_sigma=1, position_sigma=None, attitude_sigma=None, state_sigma=None, max_lsq_iterations=None, lsq_relative_error_tolerance=1e-08, lsq_relative_update_tolerance=1e-08, cf_results=None, cf_index=None, show_templates=False),
      **kwargs,
  )[source]¶
This class serves as the main user interface for performing relative optical navigation.
The class acts as a container for the Camera, ImageProcessing, and Scene instances, as well as for instances of all of the registered RelNav techniques. By default the registered RelNav techniques are XCorrCenterFinding as cross_correlation, EllipseMatching as ellipse_matching, LimbMatching as limb_matching, MomentAlgorithm as moment_algorithm, and UnresolvedCenterFinding as unresolved. Besides storing all of these objects, it handles data transfer and collection between them. Therefore, in general this class will be the exclusive interface for doing relative OpNav.

For each registered technique, this class provides a few useful capabilities. First, it creates a property that returns the current instance of the class implementing the technique, making it easy to edit or modify its settings. Second, it provides a {technique}_estimate method which can be used to apply the technique to specific or all image/target pairs; these _estimate methods also handle providing the appropriate data to the underlying objects and collecting and storing their results. Finally, for each registered technique this class provides the opportunity to pass either a pre-initialized instance of the object as a keyword argument (using {technique}=instance) or the options to use to initialize the instance (using {technique}_options=UserOptions) as part of the __init__ method for this class.

This class also provides a simple method for automatically determining which RelNav technique to use, based on the expected apparent diameter of a target in the image as well as the type of the shape representing the target in the scene. This method, auto_estimate(), is generally sufficient for most missions doing typical RelNav work and makes doing RelNav easy.

For most RelNav types, the results will be collected and stored in the center_finding_results, relative_position_results, landmark_results, limb_results, and saved_templates attributes, depending on the type of RelNav used (where each type is stored is described in the class documentation for the technique). In addition, each technique can store more details about what occurred during the fit in the {technique}_details attributes, which are lists of lists where the outer list corresponds to the images and the inner lists correspond to the targets in the scene. Typically these details are stored as dictionaries with descriptive key names indicating what each value means, but they can technically be any Python object. The documentation for each technique describes what is included in its details output.

When initializing this class, most of the initial options can be set using the *_options inputs with the corresponding options objects specifying the settings to apply. Alternatively, you can provide already-initialized instances of the objects if you want a little more control or want to use a subclass instead of the registered class itself (a short sketch follows the parameter list below). See the documentation for the registered techniques and the ImageProcessing class for more details about what settings can be specified at initialization.

It is possible to register new techniques to use with this class, which will automatically create many of the benefits just discussed. For details on how to do this, refer to the relnav_class, relnav_estimators, and register() documentation.

- Parameters:
camera (Camera) – The Camera containing the camera model and images to be analyzed.
scene (Scene) – The Scene describing the a priori knowledge of the relative state between the camera and the targets.
extended_body_cutoff (float) – The apparent diameter threshold in pixels at which auto_estimate() will switch from using unresolved techniques to using resolved techniques for extracting observables from the images.
save_templates (bool) – A flag specifying whether to save the templates generated for cross-correlation based techniques to the saved_templates attribute.
cross_correlation (XCorrCenterFinding | None) – An already initialized instance of XCorrCenterFinding (or a subclass). If not None then cross_correlation_options is ignored.
cross_correlation_options (XCorrCenterFindingOptions | None) – The options to pass to the XCorrCenterFinding class constructor. These are ignored if argument cross_correlation is not None.
unresolved (UnresolvedCenterFinding | None) – An already initialized instance of UnresolvedCenterFinding (or a subclass). If not None then unresolved_options is ignored.
unresolved_options (UnresolvedCenterFindingOptions | None) – The options to pass to the UnresolvedCenterFinding class constructor. These are ignored if argument unresolved is not None.
ellipse_matching (EllipseMatching | None) – An already initialized instance of EllipseMatching (or a subclass). If not None then ellipse_matching_options is ignored.
ellipse_matching_options (EllipseMatchingOptions | None) – The options to pass to the EllipseMatching class constructor. These are ignored if argument ellipse_matching is not None.
limb_matching (LimbMatching | None) – An already initialized instance of LimbMatching (or a subclass). If not None then limb_matching_options is ignored.
limb_matching_options (LimbMatchingOptions | None) – The options to pass to the LimbMatching class constructor. These are ignored if argument limb_matching is not None.
moment_algorithm (MomentAlgorithm | None) – An already initialized instance of MomentAlgorithm (or a subclass). If not None then moment_algorithm_options is ignored.
moment_algorithm_options (MomentAlgorithmOptions | None) – The options to pass to the MomentAlgorithm class constructor. These are ignored if argument moment_algorithm is not None.
constraint_matching (ConstraintMatching | None) – An already initialized instance of ConstraintMatching (or a subclass). If not None then constraint_matching_options is ignored.
constraint_matching_options (ConstraintMatchingOptions | None) – The options to pass to the ConstraintMatching class constructor. These are ignored if argument constraint_matching is not None.
sfn (SurfaceFeatureNavigation | None) – An already initialized instance of SurfaceFeatureNavigation (or a subclass). If not None then sfn_options is ignored.
sfn_options (SurfaceFeatureNavigationOptions | None) – The options to pass to the SurfaceFeatureNavigation class constructor. These are ignored if argument sfn is not None.
kwargs – Extra arguments for other registered RelNav techniques. These should take the same form as above ({technique_name}={technique_instance} or {technique_name}_options={TechniqueName}Options()). Any that are not supplied are defaulted to None.
- center_finding_results: ndarray¶
This array contains center finding results after a center finding technique is used for each image in the camera.
The array is an n×m array with dtype RESULTS_DTYPE, where n is the number of images in the camera (all images, not just those that are turned on) and m is the number of targets in the scene. It is initialized to a zero array; the best way to check whether an entry is still empty is that all of its string columns will have zero length. If a method fails for a particular image/target pair, the predicted column is instead filled with NaN.
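Because unfilled entries are identified by zero-length string columns, a generic check can discover the string fields from the structured dtype rather than hard-coding names (no RESULTS_DTYPE field name is assumed here except predicted, which is mentioned above). A minimal sketch, which applies equally to relative_position_results:

    import numpy as np

    def entry_is_empty(entry: np.void) -> bool:
        """True if a RESULTS_DTYPE entry was never filled in (all string fields empty)."""
        string_fields = [name for name, (field_dtype, *_) in entry.dtype.fields.items()
                         if field_dtype.kind in ("U", "S")]
        return all(len(entry[name]) == 0 for name in string_fields)

    def entry_failed(entry: np.void) -> bool:
        """True if the technique ran but failed (predicted column filled with NaN)."""
        return bool(np.isnan(entry["predicted"]).any())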
- relative_position_results: ndarray¶
This array contains relative position results after a relative position technique is used for each image in the camera.
The array is an n×m array with dtype RESULTS_DTYPE, where n is the number of images in the camera (all images, not just those that are turned on) and m is the number of targets in the scene. It is initialized to a zero array; the best way to check whether an entry is still empty is that all of its string columns will have zero length. If a method fails for a particular image/target pair, the predicted column is instead filled with NaN.
- landmark_results: List[List[ndarray[tuple[Any, ...], dtype[_ScalarT]] | None]]¶
This list of lists contains landmark results for each image/target in the scene after a landmark technique is used.
The list of lists is n×m, where n is the number of images in the camera (all images, not just those that are turned on) and m is the number of targets in the scene. Each element is initialized to None and is only filled in when a landmark technique is applied to the image/target combination. After a landmark technique has been applied, the result will be a 1D numpy array with dtype RESULTS_DTYPE whose size equals the number of processed landmarks in that image/target pair, so each element can (and likely will) have a different length.
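Since each filled element is a 1D structured array whose length varies with the number of processed landmarks, access is naturally a guarded double loop. A short sketch continuing the relnav instance from the earlier example (the same pattern applies to limb_results):

    for image_ind, per_target in enumerate(relnav.landmark_results):
        for target_ind, landmark_obs in enumerate(per_target):
            if landmark_obs is None:
                continue  # this pair has not been processed by a landmark technique
            print(f"image {image_ind}, target {target_ind}: "
                  f"{landmark_obs.size} landmark observations")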
- limb_results: List[List[ndarray[tuple[Any, ...], dtype[_ScalarT]] | None]]¶
This list of lists contains limb results for each image/target in the scene after a limb technique is used.
The list of lists is n×m, where n is the number of images in the camera (all images, not just those that are turned on) and m is the number of targets in the scene. Each element is initialized to None and is only filled in when a limb technique is applied to the image/target combination. After a limb technique has been applied, the result will be a 1D numpy array with dtype RESULTS_DTYPE whose size equals the number of processed limbs in that image/target pair, so each element can have a different length.
- saved_templates: list[list[None | ndarray[tuple[Any, ...], dtype[float64]]]]¶
This list of lists contains the templates generated by many of the techniques for inspection.
The list of lists is n×m, where n is the number of images in the camera (all images, not just those that are turned on) and m is the number of targets in the scene. Each element is initialized to None and is only filled in when a method that generates a template is applied to the image/target pair. Each filled element is generally stored either as a 2D numpy array containing the template (for center finding techniques) or as a list of numpy arrays containing the templates for each landmark (for landmark navigation).
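For a quick visual check of a rendered template (here for the first image/target pair, assuming a center finding technique so the element is a single 2D array rather than a list), something like the following works:

    import matplotlib.pyplot as plt

    template = relnav.saved_templates[0][0]
    if template is not None and not isinstance(template, list):
        plt.imshow(template, cmap="gray")
        plt.title("Rendered template for image 0, target 0")
        plt.show()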
- save_templates: bool¶
This flag specifies whether to save rendered templates from techniques that rely on cross-correlation.
While it can be nice to have the templates, especially for generating summary displays or for visually checking that results are reasonable, they can take up a lot of memory, so consider this carefully before turning this option on.
- extended_body_cutoff: float¶
The apparent diameter of a target, in pixels, at which to switch from unresolved techniques to resolved techniques for center finding.
This is only used in auto_estimate() and is further described in that documentation.
- unresolved_details¶
This attribute stores details from the unresolved technique for each image/target pair that has been processed.
The details are stored as a list of lists of objects (typically dictionaries), where each element of the outer list corresponds to the same element number in the Camera.images list and each element of the inner list corresponds to the same element number in the Scene.target_objs list.
If an image/target pair has not been processed by the unresolved technique then the corresponding element will still be set to None.
For a description of what the provided details include, see the UnresolvedCenterFinding.details documentation.
- cross_correlation_details¶
This attribute stores details from the cross_correlation technique for each image/target pair that has been processed.
The details are stored as a list of lists of objects (typically dictionaries), where each element of the outer list corresponds to the same element number in the Camera.images list and each element of the inner list corresponds to the same element number in the Scene.target_objs list.
If an image/target pair has not been processed by the cross_correlation technique then the corresponding element will still be set to None.
For a description of what the provided details include, see the XCorrCenterFinding.details documentation.
- sfn_details¶
This attribute stores details from the sfn technique for each image/target pair that has been processed.
The details are stored as a list of lists of objects (typically dictionaries), where each element of the outer list corresponds to the same element number in the Camera.images list and each element of the inner list corresponds to the same element number in the Scene.target_objs list.
If an image/target pair has not been processed by the sfn technique then the corresponding element will still be set to None.
For a description of what the provided details include, see the SurfaceFeatureNavigation.details documentation.
- limb_matching_details¶
This attribute stores details from the limb_matching technique for each image/target pair that has been processed.
The details are stored as a list of lists of objects (typically dictionaries), where each element of the outer list corresponds to the same element number in the Camera.images list and each element of the inner list corresponds to the same element number in the Scene.target_objs list.
If an image/target pair has not been processed by the limb_matching technique then the corresponding element will still be set to None.
For a description of what the provided details include, see the LimbMatching.details documentation.
- ellipse_matching_details¶
This attribute stores details from the ellipse_matching technique for each image/target pair that has been processed.
The details are stored as a list of lists of objects (typically dictionaries), where each element of the outer list corresponds to the same element number in the Camera.images list and each element of the inner list corresponds to the same element number in the Scene.target_objs list.
If an image/target pair has not been processed by the ellipse_matching technique then the corresponding element will still be set to None.
For a description of what the provided details include, see the EllipseMatching.details documentation.
- moment_algorithm_details¶
This attribute stores details from the moment_algorithm technique for each image/target pair that has been processed.
The details are stored as a list of lists of objects (typically dictionaries), where each element of the outer list corresponds to the same element number in the Camera.images list and each element of the inner list corresponds to the same element number in the Scene.target_objs list.
If an image/target pair has not been processed by the moment_algorithm technique then the corresponding element will still be set to None.
For a description of what the provided details include, see the MomentAlgorithm.details documentation.
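All of the {technique}_details attributes share this outer-image, inner-target layout, so one inspection pattern serves them all. A sketch, assuming the stored details object is a dictionary as is typical:

    details = relnav.cross_correlation_details[0][0]
    if details is None:
        print("image 0 / target 0 has not been processed by cross_correlation")
    else:
        # details dictionaries use descriptive key names for each value
        for key, value in details.items():
            print(f"{key}: {value}")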
- property scene: Scene¶
This property stores the scene describing the a priori conditions that the camera observed.
This is used to communicate both where to expect the targets in the image and how to render what we think each target should look like for techniques that use cross-correlation. For more details see the scene documentation.
- property cross_correlation: XCorrCenterFinding¶
The XCorrCenterFinding instance to use when extracting center finding observables from images using cross-correlation.
This should be an instance of the XCorrCenterFinding class or a subclass.
See the XCorrCenterFinding documentation for more details.
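Because this property returns the live worker instance, settings can be adjusted in place between runs. In this sketch the attribute names (min_corr_score, search_region) are assumptions mirroring the option names shown in the class signature above, so verify them against your GIANT version:

    # tighten the correlation acceptance threshold and widen the search window,
    # then re-run the technique on the requested image/target pairs
    relnav.cross_correlation.min_corr_score = 0.5   # assumed attribute name
    relnav.cross_correlation.search_region = 100    # assumed attribute name
    relnav.cross_correlation_estimate()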
- property ellipse_matching: EllipseMatching¶
The EllipseMatching instance to use when extracting observables from images by fitting ellipses to limb points extracted from the image.
This should be an instance of the EllipseMatching class or a subclass.
See the EllipseMatching documentation for more details.
- property limb_matching: LimbMatching¶
The LimbMatching instance to use when extracting observables from images by matching extracted limb points to the target shape.
This should be an instance of the LimbMatching class or a subclass.
See the LimbMatching documentation for more details.
- property camera: Camera¶
The camera instance to perform OpNav on.
This should be an instance of the Camera class or one of its subclasses.
See the Camera class documentation for more details.
- property model: CameraModel¶
This alias returns the current camera model from the camera attribute.
It is provided for convenience since the camera model is used frequently.
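For example, projecting an a priori point through the camera model reads more naturally through this alias than through relnav.camera.model (this assumes the usual GIANT CameraModel.project_onto_image projection method):

    import numpy as np

    # pixel location of a point 500 km down the camera boresight
    boresight_point = np.array([0.0, 0.0, 500.0])
    pixel_location = relnav.model.project_onto_image(boresight_point)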
- property moment_algorithm: MomentAlgorithm¶
The MomentAlgorithm instance to use when extracting center finding observables from images using image moments (center of brightness).
This should be an instance of the MomentAlgorithm class or a subclass.
See the MomentAlgorithm documentation for more details.
- property unresolved: UnresolvedCenterFinding¶
The UnresolvedCenterFinding instance to use when extracting center finding observables for small, unresolved targets using centroiding.
This should be an instance of the UnresolvedCenterFinding class or a subclass.
See the UnresolvedCenterFinding documentation for more details.
Summary of Methods

- auto_estimate(): This method attempts to automatically determine the best RelNav technique to use for each image/target pair considered, out of the most common RelNav techniques.
- {technique}_estimate() (cross_correlation_estimate(), unresolved_estimate(), ellipse_matching_estimate(), limb_matching_estimate(), moment_algorithm_estimate(), sfn_estimate()): These generated methods loop through the requested image/target pairs, applying the corresponding technique to each and storing the results.
- The default estimator: This method extracts observables from image(s) using the requested worker and stores them according to the types of observables generated by the technique. It is the default setup for processing an image through a RelNav technique and storing the results for a single image.
- register(): This class method registers a new RelNav technique with the RelativeOpNav class (see the sketch below).
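As a sketch of what registration looks like, the following stub follows the RelNavEstimator interface described in the relnav_estimators documentation; the import path, the RelNavObservablesType enum, and the class attributes shown are assumptions to verify against your GIANT version:

    from giant.relative_opnav import RelativeOpNav
    from giant.relative_opnav.estimators import RelNavEstimator, RelNavObservablesType


    class MyCenterFinding(RelNavEstimator):
        """A do-nothing center finding technique that illustrates registration."""

        technique = "my_center_finding"  # becomes the attribute/method prefix
        observable_type = [RelNavObservablesType.CENTER_FINDING]
        generates_templates = False

        def estimate(self, image, include_targets=None):
            # a real technique would fill in the observed/computed bearings here
            ...


    RelativeOpNav.register(MyCenterFinding)

    # instances now expose relnav.my_center_finding, relnav.my_center_finding_estimate(),
    # and relnav.my_center_finding_details automatically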