SurfaceFeatureNavigation

giant.relative_opnav.estimators.sfn.sfn_class:

class giant.relative_opnav.estimators.sfn.sfn_class.SurfaceFeatureNavigation(scene, camera, options=None)[source]

This class implements surface feature navigation using normalized cross correlation template matching for GIANT.

All of the steps required for performing surface feature navigation are handled by this class, including the identification of visible features in the image, the rendering of the templates for each feature, the actual cross correlation, the identification of the peaks of the correlation surfaces, and optionally the solution of a PnP problem based on the observed feature locations in the image. This is all handled in the estimate() method and is performed for each requested target. Note that targets must have shapes of FeatureCatalog to use this class.

When all of the required data has been successfully loaded into an instance of this class, the estimate() method is used to perform the estimation for the requested image. The results are stored in the observed_bearings attribute for the observed template-center locations. In addition, the predicted location of the center of each template is stored in the computed_bearings attribute. Finally, the details about the fit are stored as a dictionary in the appropriate element of the details attribute. Specifically, these dictionaries will contain the following keys.

Key

Description

'Correlation Scores'

The correlation score at the peak of the correlation surface for each feature as a list of floats. The corresponding element will be 0 for any features that were not found. Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. If no potential visible features were expected in the image then this is not available.

'Visible Features'

The list of feature indices (into the FeatureCatalog.features list) that were looked for in the image. Each element of this list corresponds to the corresponding element in the templates list. If no potential visible features were expected in the image then this is not available.

'Correlation Peak Locations'

The locations of the correlation peaks before they are corrected to find the location of the feature in the image, as a list of size 2 numpy arrays. Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. Any features that were not found in the image have np.nan for their values. If no potential visible features were expected in the image then this is not available.

'Correlation Surfaces'

The raw correlation surfaces as 2D arrays of shape 2*search_region+1 x 2*search_region+1. Each pixel in the correlation surface represents a shift between the predicted and expected location, according to sfn_correlator(). Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. If no potential visible features were expected in the image then this is not available.

'Target Template Coordinates'

The location of the center of each feature in its corresponding template. Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. If no potential visible features were expected in the image then this is not available.

'Intersect Masks'

The boolean arrays, the same shape as each rendered template, with True where a ray through that pixel struck the surface of the template and False otherwise. Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. If no potential visible features were expected in the image then this is not available.

'Space Mask'

The boolean array the same shape as the image specifying which pixels of the image we thought were empty space with a True and which we thought were on the body with a False. If no potential visible features were expected in the image then this is not available.

'PnP Solution'

A boolean indicating whether the PnP solution was successful (True) or not. This is only available if a PnP solution was attempted.

'PnP Translation'

The solved-for translation in the original camera frame that minimizes the residuals in the PnP solution as a length 3 array with units of kilometers. This is only available if a PnP solution was attempted and the PnP solution was successful.

'PnP Rotation'

The solved-for rotation of the original camera frame that minimizes the residuals in the PnP solution as a Rotation. This is only available if a PnP solution was attempted and the PnP solution was successful.

'PnP Position'

The solved-for relative position of the target in the camera frame after the PnP solution is applied as a length 3 numpy array in km.

'PnP Orientation'

The solved-for relative orientation of the target frame with respect to the camera frame after the PnP solution is applied as a Rotation.

'Failed'

A message indicating why the SFN fit failed. This will only be present if the SFN fit failed (so you could do something like 'Failed' in sfn.details[target_ind] to check whether something failed). The message should be a human readable description of what caused the failure.
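
As an illustration, a minimal sketch of inspecting these details after estimate() has been called (target_ind is a placeholder index into Scene.target_objs):

    # illustrative only: assumes sfn.estimate(image) has already been run
    details = sfn.details[target_ind]

    if details is not None:
        if 'Failed' in details:
            print('SFN failed:', details['Failed'])
        elif 'Correlation Scores' in details:
            for feat_ind, score in zip(details['Visible Features'],
                                       details['Correlation Scores']):
                print(f'feature {feat_ind}: peak score {score:.3f}')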

Warning

Before calling the estimate() method be sure that the scene has been updated to correspond to the correct image time. This class does not update the scene automatically.
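
For illustration, a minimal usage sketch (assuming the usual GIANT pattern where iterating over a Camera yields (index, image) pairs for turned-on images and Scene.update places the scene at the image time; scene and camera are placeholders):

    from giant.relative_opnav.estimators.sfn.sfn_class import SurfaceFeatureNavigation

    sfn = SurfaceFeatureNavigation(scene, camera)

    for image_ind, image in camera:
        scene.update(image)  # place the targets/light source at the image time
        sfn.estimate(image)  # the scene is not updated automatically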

Parameters:
  • scene (Scene) – The scene describing the a priori locations of the targets and the light source.

  • camera (Camera) – The Camera object containing the camera model and images to be analyzed.

  • options (SurfaceFeatureNavigationOptions | None) – A dataclass specifying the options to set for this instance.
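
A hedged example of building the options dataclass (assuming it is importable from this module and that its fields mirror the attributes documented below):

    from giant.relative_opnav.estimators.sfn.sfn_class import (
        SurfaceFeatureNavigation, SurfaceFeatureNavigationOptions)

    options = SurfaceFeatureNavigationOptions(min_corr_score=0.5,
                                              search_region=50,
                                              run_pnp_solver=True,
                                              pnp_ransac_iterations=100)

    sfn = SurfaceFeatureNavigation(scene, camera, options=options)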

property camera: Camera

The camera instance that represents the camera used to take the images we are performing Relative OpNav on.

This is the source of the camera model, and may be used for other information about the camera as well. See the Camera class for details.

relnav_handler: Callable | None = None

A custom handler for doing estimation/packaging the results into the RelativeOpNav instance.

Typically this should be None, unless the observable_type is set to RelNavObservablesType.CUSTOM, in which case this must be a function where the first and only positional argument is the RelativeOpNav instance that this technique was registered to, and there are two keyword arguments, image_ind and include_targets, which should be used to control which image/target is processed.

If observable_type is not RelNavObservablesType.CUSTOM then this is ignored whether it is None or not.

property scene: Scene

The scene which defines the a priori locations of all targets and light sources with respect to the camera.

You can assume that the scene has been updated for the appropriate image time inside of the class.

details: List[Any | None]

This attribute should provide details from applying the technique to each target in the scene.

The list should be the same length as Scene.target_objs. Typically, if the technique was not applied for some of the targets then the details for the corresponding element should be None. Beyond that, each element of the details should typically contain a dictionary providing information about the results that is not strictly needed for understanding what happened; however, this is not required and you can use whatever structure you want to convey the information. Whatever you do, however, you should clearly document it for each technique so that the user knows what to expect.

observed_bearings: List[NONEARRAY]

A list of the observed bearings in the image where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays or None if the target wasn't considered for some reason. If numpy arrays they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, CONSTRAINT techniques.

computed_bearings: List[NONEARRAY]

A list of the computed (predicted) bearings in the image where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays or None if the target wasn't considered for some reason. If numpy arrays they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, CONSTRAINT techniques.

templates: list[None | DOUBLE_ARRAY | list[None | DOUBLE_ARRAY]]

A list of rendered templates generated by this technique.

The list elements should be numpy arrays or None if the target wasn't considered or this type of measurement is not applicable. If numpy arrays they should contain the templates rendered for the target as 2D arrays.

If generates_templates is False this can be ignored.

brdf: IlluminationModel

The illumination model that transforms the geometric ray tracing results (see ILLUM_DTYPE) into intensity values. Typically this is one of the options from the illumination module.

rays: Rays | None | List[Rays | None] = None

The rays to use when rendering the template. If None then the rays required to render the template will be automatically computed. Optionally, a list of Rays objects where each element corresponds to the rays to use for the corresponding template in the Scene.target_objs list. Typically this should be left as None.

grid_size: int = 1

The subsampling to use per pixel when rendering the template. This should be the number of sub-pixels per side of a pixel (that is, if grid_size=3 then subsampling will be in an equally spaced 3x3 grid -> 9 sub-pixels per pixel). If rays is not None then this is ignored.

template_overflow_bounds: int = -1

The number of pixels to render in the template that overflow outside of the camera field of view. Set to a number less than 0 to accept all overflow pixels in the template. Set to a number greater than or equal to 0 to limit the number of overflow pixels.

peak_finder(surface, fit_size=1, blur=True, shift_limit=3)

This function returns a numpy array containing the (x, y) location of the maximum surface value which corresponds to the peak of the fitted quadric surface to subpixel accuracy.

First, this function calls pixel_level_peak_finder_2d() to identify the pixel location of the peak of the correlation surface. It then fits a 2D quadric to the pixels around the peak and solves for the center of the quadric, which is taken as the location of the peak. The quadric equation that is fit is

\[z = Ax^2+By^2+Cxy+Dx+Ey+F\]

where \(z\) is the height of the correlation surface at location \((x,y)\), and \(A\) through \(F\) are the coefficients to be fit. The fit is performed in an algebraic least squares sense. The location of the peak of the surface is then given by:

\[\begin{split}\left[\begin{array}{c}x_p \\ y_p\end{array}\right] = \frac{1}{4AB-C^2}\left[\begin{array}{c} CE-2BD\\ CD-2AE\end{array}\right]\end{split}\]

where \((x_p,y_p)\) is the location of the peak.

If the peak is invalid because it is too close to the edge, the fit failed, or the parabolic fit moved the peak too far from the pixel level peak then the result is returned as NaNs.
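
The following is a simplified numpy sketch of the quadric fit described above (illustrative only; the Gaussian blur, edge handling, and the shift-limit check are omitted):

    import numpy as np

    def quadric_peak(surface, fit_size=1):
        # pixel-level peak of the correlation surface
        peak_row, peak_col = np.unravel_index(np.argmax(surface), surface.shape)

        # pixels in a (2*fit_size+1) square around the pixel-level peak
        deltas = np.arange(-fit_size, fit_size + 1)
        dx, dy = np.meshgrid(deltas, deltas)
        x = (peak_col + dx).ravel()
        y = (peak_row + dy).ravel()
        z = surface[y, x]

        # algebraic least squares fit of z = A*x**2 + B*y**2 + C*x*y + D*x + E*y + F
        coefs = np.column_stack([x * x, y * y, x * y, x, y, np.ones(x.size)])
        a, b, c, d, e, _ = np.linalg.lstsq(coefs, z, rcond=None)[0]

        # stationary point of the fitted quadric: the sub-pixel peak (x_p, y_p)
        return np.array([c * e - 2 * b * d, c * d - 2 * a * e]) / (4 * a * b - c ** 2)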

Parameters:
  • surface (ArrayLike) – The surface, or image, to use

  • fit_size (int) – Number of pixels around the peak that are used in fitting the paraboloid

  • blur (bool) – A flag to indicate whether to apply Gaussian blur to the correlation surface to filter out high frequency noise

  • shift_limit (int) – The maximum allowed distance between the pixel-level peak and the fitted peak for the fitted peak to be accepted

Returns:

The (x, y) location corresponding to the peak of the fitted quadric surface to subpixel accuracy

Raises:

ValueError – If the provided surface is not 2 dimensional

Return type:

ndarray[tuple[Any, …], dtype[float64]]

min_corr_score: float = 0.3

The minimum correlation score to accept for something to be considered found in an image. The correlation score is the Pearson Product Moment Coefficient between the image and the template. This should be a number between -1 and 1, and in nearly every case a number between 0 and 1. Setting this to -1 essentially turns the minimum correlation score check off.
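
For reference, the score being thresholded has the form of the Pearson product-moment coefficient between the template and an equally sized image patch; a minimal sketch (not the sfn_correlator() implementation):

    import numpy as np

    def pearson_score(template, patch):
        # normalized cross correlation score in [-1, 1]
        t = template.ravel() - template.mean()
        p = patch.ravel() - patch.mean()
        return float(t @ p / (np.linalg.norm(t) * np.linalg.norm(p)))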

blur: bool = True

A flag to perform a Gaussian blur on the correlation surface before locating the peak to remove high frequency noise

search_region: int | None = None

The number of pixels to search around the a priori predicted center for the peak of the correlation surface. If None then searches the entire correlation surface.

run_pnp_solver: bool = False

A flag specifying whether to use the PnP solver to correct errors in the initial relative state between the camera and the target body

pnp_ransac_iterations: int = 0

The number of RANSAC iterations to attempt in the PnP solver. Set to 0 to turn off the RANSAC component of the PnP solver.

second_search_region: int | None = None

The distance around the nominal location to search for each feature in the image after correcting errors using the PnP solver.

measurement_sigma: SCALAR_OR_ARRAY = 1

The uncertainty to assume for each measurement in pixels. This is used to set the relative weight between the observed landmarks and the a priori knowledge in the PnP problem. See the measurement_sigma documentation for a description of valid inputs.

position_sigma: SCALAR_OR_ARRAY | None = None

The uncertainty to assume for the relative position vector in kilometers. This is used to set the relative weight between the observed landmarks and the a priori knowledge in the PnP problem. See the position_sigma documentation for a description of valid inputs. If the state_sigma input is not None then this is ignored.

attitude_sigma: SCALAR_OR_ARRAY | None = None

The uncertainty to assume for the relative orientation rotation vector in radians. This is used to set the relative weight between the observed landmarks and the a priori knowledge in the PnP problem. See the attitude_sigma documentation for a description of valid inputs. If the state_sigma input is not None then this is ignored.

state_sigma: NONEARRAY = None

The uncertainty to assume for the relative position vector and orientation rotation vector in kilometers and radians respectively. This is used to set the relative weight between the observed landmarks and the a priori knowledge in the PnP problem. See the state_sigma documentation for a description of valid inputs. If this input is not None then the attitude_sigma and position_sigma inputs are ignored.
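
Schematically (a simplified scalar-sigma illustration, not necessarily the exact cost used by the solver), these sigmas weight the PnP cost roughly as

\[J = \sum_i\frac{\left\|\mathbf{z}_i-\hat{\mathbf{z}}_i\right\|^2}{\sigma_{\text{meas}}^2}+\frac{\left\|\delta\mathbf{t}\right\|^2}{\sigma_{\text{pos}}^2}+\frac{\left\|\delta\boldsymbol{\theta}\right\|^2}{\sigma_{\text{att}}^2}\]

where \(\mathbf{z}_i\) and \(\hat{\mathbf{z}}_i\) are the observed and predicted pixel locations of feature \(i\), and \(\delta\mathbf{t}\) and \(\delta\boldsymbol{\theta}\) are the position and rotation-vector corrections, so smaller sigmas pull the solution more strongly toward the corresponding information.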

max_lsq_iterations: int | None = None

The maximum number of iterations to make in the least squares solution to the PnP problem.

lsq_relative_error_tolerance: float = 1e-08

The relative tolerance in the residuals to signal convergence in the least squares solution to the PnP problem.

lsq_relative_update_tolerance: float = 1e-08

The relative tolerance in the update vector to signal convergence in the least squares solution to the PnP problem.

cf_results: np.ndarray | None = None

A numpy array containing the center finding residuals for the target that the feature catalog is a part of. If present this is used to correct errors in the a priori line of sight to the target before searching for features in the image.

show_templates: bool = False

A flag to show the rendered templates for each feature “live”. This is useful for debugging but in general should not be used.

observable_type: List[RelNavObservablesType | str] | RelNavObservablesType | str | None = [RelNavObservablesType.LANDMARK]

This technique generates LANDMARK bearing observables to the center of landmarks in the image.

generates_templates: bool = True

A flag specifying that this RelNav estimator generates and stores templates in the templates attribute.

technique: str | None = 'sfn'

The name for the technique for registering with RelativeOpNav.

If None then the name will default to the name of the module where the class is defined.

This should typically be all lowercase and should not include any spaces or special characters except for _ as it will be used to make attribute/method names. (That is MyEstimator.technique.isidentifier() should evaluate True).

cf_index: List[int] | None = None

A list that maps the feature catalogs contained in the scene (in order) to the appropriate column of the cf_results matrix. If left blank the mapping is assumed to be in like order.

visible_features: List[List[int] | None]

This variable records which features are predicted to be visible in the image.

Each visible feature is identified by its index in the FeatureCatalog.features list.

Summary of Methods

apply_options

Apply the provided options as attributes of this instance.

compute_rays

This method computes the required rays to render a given feature based on the current estimate of the location and orientation of the feature in the image.

estimate

This method identifies the locations of surface features in the image through cross correlation of rendered templates with the image.

pnp_solver

This method attempts to solve for an update to the relative position/orientation of the target with respect to the image based on the observed feature locations in the image.

render

This method renders each visible feature for the current target according to the current estimate of the relative position/orientation between the target and the camera using single bounce ray tracing.

reset

This method resets the observed/computed attributes as well as the details attribute to have None for each target in the scene.

target_generator

This method returns a generator which yields target_index, target pairs that are to be processed based on the input include_targets.