SurfaceFeatureNavigation

giant.relative_opnav.estimators.sfn.sfn_class:

class giant.relative_opnav.estimators.sfn.sfn_class.SurfaceFeatureNavigation(scene, camera, image_processing, options=None, brdf=None, rays=None, grid_size=1, peak_finder=<function quadric_peak_finder_2d>, min_corr_score=0.5, blur=True, search_region=10, run_pnp_solver=False, pnp_ransac_iterations=0, second_search_region=None, measurement_sigma=1, position_sigma=None, attitude_sigma=None, state_sigma=None, max_lsq_iterations=None, lsq_relative_error_tolerance=1e-08, lsq_relative_update_tolerance=1e-08, cf_results=None, cf_index=None, show_templates=False)

This class implements surface feature navigation using normalized cross correlation template matching for GIANT.

All of the steps required for performing surface feature navigation are handled by this class, including the identification of visible features in the image, the rendering of the templates for each feature, the actual cross correlation, the identification of the peaks of the correlation surfaces, and optionally the solution of a PnP problem based on the observed feature locations in the image. This is all handled in the estimate() method and is performed for each requested target. Note that targets must have shapes of FeatureCatalogue to use this class.

When all of the required data has been successfully loaded into an instance of this class, the estimate() method is used to perform the estimation for the requested image. The observed center-of-template locations are stored in the observed_bearings attribute, and the predicted location of the center of each template is stored in the computed_bearings attribute. Finally, the details about the fit are stored as a dictionary in the appropriate element of the details attribute. Specifically, these dictionaries will contain the following keys.

'Correlation Scores'

The correlation score at the peak of the correlation surface for each feature as a list of floats. The corresponding element will be 0 for any features that were not found. Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. If no potential visible features were expected in the image then this is not available.

'Visible Features'

The list of feature indices (into the FeatureCatalogue.features list) that were looked for in the image. Each element of this list corresponds to the corresponding element in the templates list. If no potential visible features were expected in the image then this is not available.

'Correlation Peak Locations'

The location of the correlation peaks before they are corrected to find the location of the feature in the image, as a list of length 2 numpy arrays. Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. Any features that were not found in the image have np.nan for their values. If no potential visible features were expected in the image then this is not available.

'Correlation Surfaces'

The raw correlation surfaces as 2D arrays of shape (2*search_region+1) x (2*search_region+1). Each pixel in the correlation surface represents a shift of the template relative to its predicted location, according to sfn_correlator(). Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. If no potential visible features were expected in the image then this is not available.

'Target Template Coordinates'

The location of the center of each feature in its corresponding template. Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. If no potential visible features were expected in the image then this is not available.

'Intersect Masks'

The boolean arrays, the same shapes as each rendered template, with True where a ray through that pixel struck the surface of the template and False otherwise. Each element of this list corresponds to the feature according to the corresponding element in the 'Visible Features' list. If no potential visible features were expected in the image then this is not available.

'Space Mask'

The boolean array, the same shape as the image, specifying which pixels of the image we thought were empty space with a True and which we thought were on the body with a False. If no potential visible features were expected in the image then this is not available.

'PnP Solution'

A boolean indicating whether the PnP solution was successful (True) or not. This is only available if a PnP solution was attempted.

'PnP Translation'

The solved for translation in the original camera frame that minimizes the residuals in the PnP solution as a length 3 array with units of kilometers. This is only available if a PnP solution was attempted and the PnP solution was successful.

'PnP Rotation'

The solved for rotation of the original camera frame that minimizes the residuals in the PnP solution as a Rotation. This is only available if a PnP solution was attempted and the PnP solution was successful.

'PnP Position'

The solved for relative position of the target in the camera frame after the PnP solution is applied as a length 3 numpy array in km.

'PnP Orientation'

The solved for relative orientation of the target frame with respect to the camera frame after the PnP solution is applied as a Rotation.

'Failed'

A message indicating why the SFN estimation failed. This will only be present if the SFN fit failed (so you can do something like 'Failed' in sfn.details[target_ind] to check whether something failed). The message should be a human readable description of what caused the failure.

Warning

Before calling the estimate() method be sure that the scene has been updated to correspond to the correct image time. This class does not update the scene automatically.
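
A minimal usage sketch follows (the scene, camera, and image_processing objects are assumed to already exist; the assumptions that scene.update(image) places the scene at the image time and that iterating over the camera yields index/image pairs for images that are turned on reflect the usual GIANT pattern but are illustrative, not definitive):

    from giant.relative_opnav.estimators.sfn.sfn_class import SurfaceFeatureNavigation

    sfn = SurfaceFeatureNavigation(scene, camera, image_processing,
                                   run_pnp_solver=True, search_region=20)

    for ind, image in camera:  # assuming iteration yields (index, image) pairs
        scene.update(image)    # update the scene to the image time first (see the warning above)
        sfn.estimate(image)

        for target_ind, details in enumerate(sfn.details):
            if details is None:
                continue  # this target was not considered
            if 'Failed' in details:
                print('target {} failed: {}'.format(target_ind, details['Failed']))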

Parameters:
  • scene (Scene) – The scene describing the a priori locations of the targets and the light source.

  • camera (Camera) – The Camera object containing the camera model and images to be analyzed.

  • image_processing (ImageProcessing) – An instance of ImageProcessing. This is used for denoising the image and for generating the correlation surface using the denoise_image() and correlate() methods, respectively.

  • options (SurfaceFeatureNavigationOptions | None) – A dataclass specifying the options to set for this instance. If provided, it takes precedence over all keyword arguments, therefore it is not recommended to mix methods (see the sketch following this parameter list).

  • brdf (IlluminationModel | None) – The illumination model that transforms the geometric ray tracing results (see ILLUM_DTYPE) into intensity values. Typically this is one of the options from the illumination module.

  • rays (Rays | None | List[Rays]) – The rays to use when rendering the template. If None then the rays required to render the template will be automatically computed. Optionally, a list of Rays objects where each element corresponds to the rays to use for the corresponding template in the Scene.target_objs list. Typically this should be left as None.

  • grid_size (int) – The subsampling to use per pixel when rendering the template. This should be the number of sub-pixels per side of a pixel (that is, if grid_size=3 then subsampling will be in an equally spaced 3x3 grid -> 9 sub-pixels per pixel). If rays is not None then this is ignored.

  • peak_finder (Callable[[ndarray, bool], ndarray]) – The peak finder function to use. This should be a callable that takes in a 2D surface as a numpy array and returns the (x,y) location of the peak of the surface.

  • min_corr_score (float) – The minimum correlation score to accept for something to be considered found in an image. The correlation score is the Pearson Product Moment Coefficient between the image and the template. This should be a number between -1 and 1, and in nearly every case a number between 0 and 1. Setting this to -1 essentially turns the minimum correlation score check off.

  • blur (bool) – A flag to perform a Gaussian blur on the correlation surface before locating the peak to remove high frequency noise.

  • search_region (int) – The number of pixels to search around the a priori predicted center for the peak of the correlation surface. If None then searches the entire correlation surface.

  • run_pnp_solver (bool) – A flag specifying whether to use the PnP solver to correct errors in the initial relative state between the camera and the target body.

  • pnp_ransac_iterations (int) – The number of RANSAC iterations to attempt in the PnP solver. Set to 0 to turn off the RANSAC component of the PnP solver.

  • second_search_region (int | None) – The distance around the nominal location to search for each feature in the image after correcting errors using the PnP solver.

  • measurement_sigma (Sequence | ndarray | Real) – The uncertainty to assume for each measurement in pixels. This is used to set the relative weight between the observed landmarks and the a priori knowledge in the PnP problem. See the measurement_sigma documentation for a description of valid inputs.

  • position_sigma (Sequence | ndarray | Real | None) – The uncertainty to assume for the relative position vector in kilometers. This is used to set the relative weight between the observed landmarks and the a priori knowledge in the PnP problem. See the position_sigma documentation for a description of valid inputs. If the state_sigma input is not None then this is ignored.

  • attitude_sigma (Sequence | ndarray | Real | None) – The uncertainty to assume for the relative orientation rotation vector in radians. This is used to set the relative weight between the observed landmarks and the a priori knowledge in the PnP problem. See the attitude_sigma documentation for a description of valid inputs. If the state_sigma input is not None then this is ignored.

  • state_sigma (Sequence | ndarray | None) – The uncertainty to assume for the relative position vector and orientation rotation vector in kilometers and radians respectively. This is used to set the relative weight between the observed landmarks and the a priori knowledge in the PnP problem. See the state_sigma documentation for a description of valid inputs. If this input is not None then the attitude_sigma and position_sigma inputs are ignored.

  • max_lsq_iterations (int | None) – The maximum number of iterations to make in the least squares solution to the PnP problem.

  • lsq_relative_error_tolerance (float) – The relative tolerance in the residuals to signal convergence in the least squares solution to the PnP problem.

  • lsq_relative_update_tolerance (float) – The relative tolerance in the update vector to signal convergence in the least squares solution to the PnP problem.

  • cf_results (Sequence | ndarray | None) – A numpy array containing the center finding residuals for the target that the feature catalogue is a part of. If present this is used to correct errors in the a priori line of sight to the target before searching for features in the image.

  • cf_index (List[int] | None) – A list that maps the feature catalogues contained in the scene (in order) to the appropriate column of the cf_results matrix. If left blank the mapping is assumed to follow the same order.

  • show_templates (bool) – A flag to show the rendered templates for each feature “live”. This is useful for debugging but in general should not be used.
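
As noted for the options parameter, the estimator can be configured either through a SurfaceFeatureNavigationOptions dataclass or through keyword arguments, and mixing the two is not recommended since the dataclass takes precedence. A sketch of both styles (assuming the dataclass exposes fields named like the keyword arguments above):

    from giant.relative_opnav.estimators.sfn.sfn_class import (
        SurfaceFeatureNavigation, SurfaceFeatureNavigationOptions)

    # configure through the options dataclass...
    opts = SurfaceFeatureNavigationOptions(run_pnp_solver=True, pnp_ransac_iterations=100,
                                           search_region=15, second_search_region=5)
    sfn = SurfaceFeatureNavigation(scene, camera, image_processing, options=opts)

    # ...or pass the same settings directly as keyword arguments
    sfn = SurfaceFeatureNavigation(scene, camera, image_processing,
                                   run_pnp_solver=True, pnp_ransac_iterations=100,
                                   search_region=15, second_search_region=5)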

observable_type: List[RelNavObservablesType] = [<RelNavObservablesType.LANDMARK: 'LANDMARK'>]

This technique generates LANDMARK bearing observables to the center of landmarks in the image.

generates_templates: bool = True

A flag specifying that this RelNav estimator generates and stores templates in the templates attribute.

technique: str = 'sfn'

The name for the technique for registering with RelativeOpNav.

If None then the name will default to the name of the module where the class is defined.

This should typically be all lowercase and should not include any spaces or special characters except for _ as it will be used to make attribute/method names. (That is MyEstimator.technique.isidentifier() should evaluate True).
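
Because the technique name is used to build attribute/method names, its validity can be checked directly:

    assert SurfaceFeatureNavigation.technique.isidentifier()  # 'sfn' -> True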

run_pnp_solver: bool

This turns on/off the PnP solver that attempts to estimate the best position/pointing for each image based on the observed landmark locations in the image.

The PnP solver is useful if you expect a large initial error in your camera position, particularly radially, since it can help to reduce biases that can occur based on distortion fields and scale errors.

property camera: Camera

The camera instance that represents the camera used to take the images we are performing Relative OpNav on.

This is the source of the camera model, and may be used for other information about the camera as well. See the Camera class documentation for details.

relnav_handler: Callable | None = None

A custom handler for doing estimation/packaging the results into the RelativeOpNav instance.

Typically this should be None, unless the observable_type is set to RelNavObservablesType.CUSTOM, in which case this must be a function where the first and only positional argument is the RelativeOpNav instance that this technique was registered to, and there are two keyword arguments, image_ind and include_targets, which should be used to control which image/target is processed.

If observable_type is not RelNavObservablesType.CUSTOM then this is ignored whether it is None or not.

property scene: Scene

The scene which defines the a priori locations of all targets and light sources with respect to the camera.

You can assume that the scene has been updated for the appropriate image time inside of the class.

observed_bearings: List[NONEARRAY]

A list of the observed bearings in the image where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays or None if the target wasn't considered for some reason. If numpy arrays they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, CONSTRAINT techniques.

computed_bearings: List[NONEARRAY]

A list of the computed (predicted) bearings in the image where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays or None if the target wasn't considered for some reason. If numpy arrays they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, CONSTRAINT techniques.
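
Because the two lists are stored element for element, post-fit residuals can be formed by differencing them. A sketch, assuming each element is a 2xN array of landmark locations with np.nan for features that were not found:

    import numpy as np

    for observed, computed in zip(sfn.observed_bearings, sfn.computed_bearings):
        if observed is None or computed is None:
            continue  # this target was not considered
        residuals = np.asarray(observed) - np.asarray(computed)  # (x, y) pixel residuals
        print(np.nanmean(residuals, axis=-1))  # average residual, ignoring unfound features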

pnp_ransac_iterations: int

The number of RANSAC iterations to go through in the PnP solver, if being used.

The RANSAC algorithm can help to reject outliers from the PnP solution, making it much more useful, therefore it is strongly encouraged to use it. To disable the RANSAC algorithm set this to 0.

second_search_region: int

The second search region is used as the search region after completing a PnP iteration.

This is useful to reject outliers from the set of landmarks found after the PnP iteration since they will not be run through the PnP solver/RANSAC algorithms.

Typically this should be no less than 3 pixels to ensure that we have enough room to identify the peak of the correlation surface.

measurement_sigma: SCALAR_OR_ARRAY

The 1 sigma uncertainty for the measurement as either a scalar (same uncertainty for each axis, no correlation), a 1D array of length 2 (different uncertainty for each axis x, y), or a 2D array of shape 2x2 (full measurement covariance matrix).

This measurement sigma is applied to every measurement equally in the PnP solver and is used to weight the observations with respect to the a priori state. The default is 1 pixel for each axis.
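
The three accepted forms, as a sketch (the values are purely illustrative):

    import numpy as np

    sfn.measurement_sigma = 0.5                     # scalar: 0.5 pixel sigma on each axis
    sfn.measurement_sigma = np.array([0.5, 1.0])    # per-axis (x, y) sigmas in pixels
    sfn.measurement_sigma = np.array([[0.25, 0.01],
                                      [0.01, 1.00]])  # full 2x2 measurement covariance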

position_sigma: SCALAR_OR_ARRAY | None

The 1 sigma uncertainty of the a priori relative position between the camera and the target in units of kilometers.

This can be provided as a scalar, in which case the sigma is applied to each axis of the position vector, a 1D array of length 3 where the sigma is specified per axis (x,y,z), or as a 2D array of shape 3x3 where the full position covariance matrix is provided.

This sigma is applied to the state update portion of the PnP solver to weight the a priori state vs the measurement observations.

If this is set to None then we will check if state_sigma is not None and use that instead. If state_sigma is also None we will assume this to be 1 kilometer for each axis. If both state_sigma and this attribute are not None, state_sigma will take precedence.

attitude_sigma: SCALAR_OR_ARRAY | None

The 1 sigma uncertainty of the a priori relative orientation between the camera and the target in units of radians.

The relative orientation is expressed as a length 3 rotation vector (a unit vector specifying the rotation axis multiplied by the angle to rotate about that axis in radians).

This can be provided as a scalar, in which case the sigma is applied to each axis of the rotation vector, a 1D array of length 3 where the sigma is specified per axis (x,y,z), or as a 2D array of shape 3x3 where the full rotation covariance matrix is provided.

This sigma is applied to the state update portion of the PnP solver to weight the a priori state vs the measurement observations.

If this is set to None then we will check if state_sigma is not None and use that instead. If state_sigma is also None we will assume this to be 0.02 degrees for each axis of the rotation vector. If both state_sigma and this attribute are not None, state_sigma will take precedence.

state_sigma: NONEARRAY

The 1 sigma uncertainty of the a priori relative state (position+orientation) between the camera and the target.

The state vector is the concatenation of the relative position vector in kilometers, and a length 3 rotation vector (a unit vector specifying the rotation axis multiplied by the angle to rotate about that axis in radians).

This can be provided as a 1D length 6 array, in which case the first 3 elements are applied to the relative position with units of kilometers and the last 3 elements are applied to the rotation vector with units of radians, or as a 2D 6x6 state covariance matrix.

This sigma is applied to the state update portion of the PnP solver to weight the a priori state vs the measurement observations.

If this is set to None then we will use position_sigma and attitude_sigma instead. If this is not None then it will take precedence over anything set in position_sigma and attitude_sigma.
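
The precedence between the three sigma attributes can be summarized as follows (the values are purely illustrative; note that 0.02 degrees is roughly 3.5e-4 radians):

    import numpy as np

    # state_sigma, when set, supersedes position_sigma and attitude_sigma:
    sfn.state_sigma = np.array([1.0, 1.0, 5.0,              # position sigmas in km
                                3.5e-4, 3.5e-4, 3.5e-4])    # rotation vector sigmas in rad

    # an equivalent specification with state_sigma left unset:
    sfn.state_sigma = None
    sfn.position_sigma = np.array([1.0, 1.0, 5.0])  # km
    sfn.attitude_sigma = 3.5e-4                     # rad, applied to each axis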

max_lsq_iterations: int | None

The maximum number of iteration steps to take when trying to solve the PnP problem.

If this is set to None then the maximum number of steps will be set to approximately 100*(N+7), where N is the number of measurements (2 times the number of landmarks observed). Typically things converge much faster than this, however.

This is passed to the max_nfev argument in the least_squares solver from scipy.
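
A sketch of how the default is formed and how these settings are forwarded to scipy (the mapping of lsq_relative_update_tolerance to xtol is inferred from its description below, not confirmed from the source):

    # with n_landmarks observed features there are N = 2*n_landmarks scalar measurements
    n_landmarks = 25
    n_measurements = 2 * n_landmarks
    max_nfev = (100 * (n_measurements + 7) if sfn.max_lsq_iterations is None
                else sfn.max_lsq_iterations)

    # roughly equivalent to the internal call:
    # scipy.optimize.least_squares(residual_function, initial_state, max_nfev=max_nfev,
    #                              ftol=sfn.lsq_relative_error_tolerance,
    #                              xtol=sfn.lsq_relative_update_tolerance)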

lsq_relative_error_tolerance: float

The relative error tolerance condition at which the least squares solution is considered final.

This essentially stops solving the linearized least squares problem once the change in the error (residuals) from one iteration to the next is less than this value times the actual measurement values.

This is passed to the ftol argument in the least_squares solver from scipy.

cf_results: NONEARRAY

A numpy array specifying the location of the center of figure of the target body in each image.

This is used to correct any large shift errors in the image plane before attempting to find features on the target. This can be very useful, as surface feature navigation typically works best when the a priori errors are small. The way it is used is that, after updating the scene for the current image, the location of each target being estimated is shifted to lie along the line of sight vector specified by this input at the same distance as before.

This should be a 2D numpy array of shape at least n_images x n_feature_catalogues, where n_images is the number of images in the camera and n_feature_catalogues is the number of feature catalogues in the scene, with a dtype of RESULTS_DTYPE. By default it is assumed that each column of this array corresponds to each feature catalogue in the scene in order; however, you can change this using the cf_index attribute.

If this is set to None then the a priori knowledge of the relative state between the camera and the target is not modified.

cf_index: List[int]

The key to map each feature catalogue included in the scene (in order) to the corresponding column of the cf_results when correcting gross shifts in the a priori knowledge of the camera relative to the target.

By default this assumes that each column of the cf_results array corresponds to each feature catalogue in the scene in order. This works well in many cases where you’re considering only a single target in the image (and thus have the global shape as the first target and the feature catalogue as the second target), but if you have multiple bodies then you may need to modify this. For instance, say you have 2 bodies in the images. In your scene you have the global shape of the first body as the first target, the feature catalogue for the first body as the second target, the global shape of the second body as the third target, and the feature catalogue for the second body as the fourth target. In this case you would want to make this attribute [0, 2] to map the first feature catalogue to the first target’s results and the second catalogue to the third target’s results. We admit this is slightly confusing, so best practice is to use separate scenes for global shapes and feature catalogues, using the same order in each.

If cf_results is None then this attribute is ignored.
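
A sketch of the two body example above, where center_finding_results is a hypothetical array of center finding results with one column per target in the scene:

    # scene.target_objs: [body 1 global shape, body 1 feature catalogue,
    #                     body 2 global shape, body 2 feature catalogue]
    sfn.cf_results = center_finding_results  # shape (n_images, n_targets) with RESULTS_DTYPE
    sfn.cf_index = [0, 2]  # body 1 catalogue <- column 0, body 2 catalogue <- column 2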

show_templates: bool

This flag specifies whether to show the templates/images as they are correlated.

This can be useful for debugging (if you aren’t finding any features in images for instance) but also is unsuitable for any type of batch processing, therefore you should rarely use this option.

visible_features: List[List[int] | None]

This attribute indicates which features are predicted to be visible in the image.

Each visible feature is identified by its index in the FeatureCatalogue.features list.

details: List[Dict[str, Any]]

A list of dictionaries containing the details of the estimation for each target in the scene. The keys stored in each dictionary and their meanings are described in the table in the class description above.

Summary of Methods

apply_options

This method applies the input options to the current instance.

compute_rays

This method computes the required rays to render a given feature based on the current estimate of the location and orientation of the feature in the image.

estimate

This method identifies the locations of surface features in the image through cross correlation of rendered templates with the image.

pnp_solver

This method attempts to solve for an update to the relative position/orientation of the target with respect to the image based on the observed feature locations in the image.

render

This method renders each visible feature for the current target according to the current estimate of the relative position/orientation between the target and the camera using single bounce ray tracing.

reset

This method resets the observed/computed attributes as well as the details attribute to have None for each target in the scene.

target_generator

This method returns a generator which yields target_index, target pairs that are to be processed based on the input include_targets.