sfn_class

This module provides the capability to locate surface features of a target in an image using 2D cross-correlation.

Description of the Technique

When a target grows to the point where we can begin to distinguish individual features on its surface in the images, we typically consider switching from navigating on just the center of figure of the target to navigating on these features. There are a number of reasons for this. First, as the body grows in the field of view, errors in the shape model begin contributing larger and larger errors to the center-finding results. Second, having multiple observations per image instead of a single one puts a stronger constraint on the location of the camera at each image, allowing us to more accurately estimate the trajectory of the camera through time.

One of the most common ways of extracting observations of each feature is through cross correlation, using a technique very similar to that described in cross_correlation. Essentially, we render what we think each feature will look like based on the current knowledge of the relative position and orientation of the camera with respect to each feature (the features are stored in a special catalogue called a FeatureCatalogue). We then take the rendered template and use normalized cross correlation to identify the location of the feature in the image. After we have identified the features in the image, we optionally solve a Perspective-n-Point (PnP) problem to refine our knowledge of the spacecraft state and then repeat the process to correct any errors in the observations created by errors in our initial state estimates.
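
To make the correlation metric concrete, the following is a minimal numpy sketch of the normalized cross correlation score for a single alignment of a template with an equally-sized image region. It is purely illustrative (the function name ncc_score is ours); GIANT's actual implementation, which additionally supports ignoring selected pixels, is sfn_correlator():

import numpy as np

def ncc_score(image_region, template):
    # remove the means and normalize by the standard deviations so that the
    # score is insensitive to brightness and contrast differences
    a = image_region - image_region.mean()
    b = template - template.mean()
    denominator = np.sqrt((a * a).sum() * (b * b).sum())
    if denominator == 0:
        return 0.0  # a constant region or template carries no information
    # the resulting score lies between -1 and 1, with 1 indicating a perfect match
    return float((a * b).sum() / denominator)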

In more detail, GIANT implements this using the following steps:

  1. Identify which features we think should be visible in the image using the FeatureCatalogue.feature_finder.

  2. For each feature we predict should be visible, render the template based on the a priori relative state between the camera and the feature using a single-bounce ray trace and the routines from ray_tracer.

  3. Perform 2D normalized cross correlation between each template and the image for every alignment of the template center within a user-specified search region. We usually do this in the spatial domain so that we can include information about which pixels we want to consider when computing the correlation scores, as described in sfn_correlator().

  4. Locate the peaks of the correlation surfaces (optionally locate the subpixel peak by fitting a 2D quadric to the correlation surface, as sketched in the example following this list).

  5. Correct the located peaks based on the location of the center-of-feature in the template to get the observed center-of-feature in the image.

  6. Optionally solve the PnP problem for the best shift/rotation of the feature locations in the camera frame to minimize the residuals between the predicted feature locations and the observed feature locations. Once complete, update the knowledge of the relative position/orientation of the camera with respect to the target and repeat all steps except this one to correct for errors introduced by a priori state knowledge errors.
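
As an illustration of the subpixel peak fit in step 4, the following is a minimal numpy sketch that fits a 2D quadric to the 3x3 neighborhood around the integer peak of a correlation surface and solves for where the gradient of the fit vanishes. It is purely illustrative and is not GIANT's actual peak_finder:

import numpy as np

def quadric_subpixel_peak(corr_surface):
    # integer peak of the correlation surface
    row, col = np.unravel_index(np.argmax(corr_surface), corr_surface.shape)
    nrows, ncols = corr_surface.shape
    if not (0 < row < nrows - 1 and 0 < col < ncols - 1):
        return float(col), float(row)  # peak on the border; no neighborhood to fit
    # 3x3 neighborhood around the peak and the corresponding pixel offsets
    y, x = np.mgrid[-1:2, -1:2]
    x, y = x.ravel(), y.ravel()
    z = corr_surface[row - 1:row + 2, col - 1:col + 2].ravel()
    # least squares fit of z = a*x**2 + b*y**2 + c*x*y + d*x + e*y + f
    coef_matrix = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(coef_matrix, z, rcond=None)[0]
    # the peak is where the gradient of the quadric vanishes (this assumes the
    # fitted surface is concave near the peak)
    dx, dy = np.linalg.solve([[2 * a, c], [c, 2 * b]], [-d, -e])
    return col + dx, row + dy  # subpixel (x, y) location of the peak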

Tuning

There are a few more tuning options in SFN versus normal cross correlation. The first, and likely most important, tuning is for identifying potentially visible features in an image. For this, you want to set the FeatureCatalogue.feature_finder attribute to something that will correctly determine which features are possibly visible (typically an instance of VisibleFeatureFinder). We discuss the tuning for the VisibleFeatureFinder here, though you could conceivably use something else if you desired.

VisibleFeatureFinder.off_boresight_angle_maximum
    The maximum angle between the boresight of the camera and the feature location in the camera frame, in degrees.

VisibleFeatureFinder.gsd_scaling
    The permissible ratio of the camera ground sample distance to the feature ground sample distance.

VisibleFeatureFinder.reflectance_angle_maximum
    The maximum angle between the viewing vector and the average normal vector of the feature, in degrees.

VisibleFeatureFinder.incident_angle_maximum
    The maximum angle between the incoming light vector and the average feature normal vector, in degrees.

VisibleFeatureFinder.percent_in_fov
    The minimum percentage of the feature that must fall within the field of view.

VisibleFeatureFinder.feature_list
    A list of feature names to consider.

When tuning the feature finder, you generally want to select only features that are likely to correlate well in the image, so that you don't waste time considering features that won't work for one reason or another. All of these parameters can contribute to this, but two of the most important are gsd_scaling, which should typically be around 2, and off_boresight_angle_maximum, which should typically be just a little larger than the half-diagonal field of view of the detector, both to avoid overflowing values in the projection computation and to avoid processing features that are far outside the field of view. Note that because you set the feature finder on each feature catalogue, you can have different tunings for different feature catalogues (if you have multiple in a scene).
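
For instance, a feature finder tuned along these lines might be set up as follows. The keyword names come from the table above, but the numeric values are illustrative only and should be chosen for your camera and catalogue, and 'features.pickle' is a hypothetical file containing a pickled FeatureCatalogue:

>>> import pickle
>>> from giant.relative_opnav.estimators.sfn.surface_features import VisibleFeatureFinder
>>> with open('features.pickle', 'rb') as in_file:
...     fc = pickle.load(in_file)  # the FeatureCatalogue for the target
>>> fc.feature_finder = VisibleFeatureFinder(fc,
...                                          off_boresight_angle_maximum=15.0,
...                                          gsd_scaling=2.0,
...                                          reflectance_angle_maximum=70.0,
...                                          incident_angle_maximum=70.0,
...                                          percent_in_fov=50.0)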

Next we have the parameters that control the actual rendering/correlation for each feature. These are the same as for cross_correlation.

brdf
    The bidirectional reflectance distribution function used to compute the expected illumination of a ray based on the geometry of the scene.

grid_size
    The size of the grid to use for subpixel sampling when rendering the templates.

peak_finder
    The function to use to detect the peaks of the correlation surfaces.

blur
    A flag specifying whether to blur the correlation surfaces to reduce high frequency noise before identifying the peak.

search_region
    The search region, in pixels, around the a priori predicted center of each feature within which to search for the peak of the correlation surface.

min_corr_score
    The minimum correlation score to accept as a successful identification. Correlation scores range from -1 to 1, with 1 indicating perfect correlation.

Of these options, most make only small changes to the results. The two that can occasionally make large changes are search_region and blur. In general, search_region should be set a few pixels larger than the expected uncertainty in the camera/feature relative state. Since we are doing spatial correlation here, we typically want this number to be as small as possible for efficiency while still capturing the actual peak. The blur attribute can also help avoid mistaken correlations (where, for example, only empty space is aligned). Finally, min_corr_score can generally be left at the default, though if you have poor a priori knowledge of either the shape model or the relative position of the features then you may need to decrease it somewhat.
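
As a concrete illustration, these settings can be supplied through the SurfaceFeatureNavigationOptions dataclass described under Classes below. This is a sketch, assuming the dataclass fields match the parameter names in the table and that the class is importable from this module; the values shown are illustrative only:

>>> from giant.relative_opnav.estimators.sfn.sfn_class import SurfaceFeatureNavigationOptions
>>> options = SurfaceFeatureNavigationOptions(grid_size=3,
...                                           search_region=10,
...                                           blur=True,
...                                           min_corr_score=0.5)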

The last set of tuning parameters to consider are those for the PnP solver. They are as follows:

run_pnp_solver
    A flag specifying whether to run the PnP solver.

pnp_ransac_iterations
    The number of RANSAC iterations to attempt in the PnP solver.

second_search_region
    The search region, in pixels, around the predicted center of each feature within which to search for the peak of the correlation surface after a PnP solution has been performed.

measurement_sigma
    The uncertainty to assume for each measurement.

position_sigma
    The uncertainty to assume in the a priori relative position vector between the camera and the features, in kilometers.

attitude_sigma
    The uncertainty to assume in the a priori relative orientation between the camera and the features, in degrees.

state_sigma
    The uncertainty to assume in the relative position and orientation between the camera and the features (overrides the individual position and attitude sigmas).

max_lsq_iterations
    The maximum number of iterations to attempt for convergence in the linearized least squares solution of the PnP problem.

lsq_relative_error_tolerance
    The maximum change in the residuals from one iteration to the next before the PnP solution is considered converged.

lsq_relative_update_tolerance
    The maximum change in the update vector from one iteration to the next before the PnP solution is considered converged.

cf_results
    A numpy array of observed centers of figure for each target in the image (for instance, from cross_correlation), used to set the a priori relative state between the camera and the feature catalogue.

cf_index
    The mapping of feature catalogue number to column of the cf_results array.

All of these options can be important. First, unless you have very good a priori state knowledge, you should probably turn the PnP solver on. Because of the way SFN works, errors in your a priori state can lead to significant errors in the observed feature locations, and the PnP solver can correct many of them. If you do turn the PnP solver on, then the rest of these options become important. The pnp_ransac_iterations should typically be set to something around 100-200, especially if you expect there to be outliers (which there usually are). The second_search_region should be set to capture the expected uncertainty after the PnP solution (typically just the uncertainty in the camera model and in the feature locations themselves); something around 5 usually works well. The *_sigma attributes control the relative weighting between the a priori state and the observed locations, which can be important to get a good PnP solution. The max_lsq_iterations, lsq_relative_error_tolerance, and lsq_relative_update_tolerance options can play an important role in getting the PnP solver to converge, though the defaults are generally decent. Finally, cf_results and cf_index can help to decrease errors in the a priori relative state knowledge, which in some cases can be critical to successfully identifying features.
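
Putting these recommendations together, a PnP-enabled tuning might look like the following sketch (same assumptions as the previous example; the values are illustrative, with position_sigma in kilometers and attitude_sigma in degrees):

>>> from giant.relative_opnav.estimators.sfn.sfn_class import SurfaceFeatureNavigationOptions
>>> options = SurfaceFeatureNavigationOptions(run_pnp_solver=True,
...                                           pnp_ransac_iterations=150,
...                                           second_search_region=5,
...                                           measurement_sigma=1.0,
...                                           position_sigma=1.0,
...                                           attitude_sigma=0.05)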

Use

The class provided in this module is usually not used directly by the user; instead, it is typically interfaced with through the RelativeOpNav class using the identifier sfn. For more details on using the RelativeOpNav interface, please refer to the relnav_class documentation. For more details on using the technique class directly, as well as a description of the details dictionaries produced by this technique, refer to the class documentation below.
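
A typical invocation through that interface might look like the following sketch, assuming camera and scene are already-configured GIANT Camera and Scene objects and that RelativeOpNav exposes the technique through an sfn_estimate method as described in the relnav_class documentation:

>>> from giant.relative_opnav import RelativeOpNav
>>> relnav = RelativeOpNav(camera, scene)  # camera/scene assumed built elsewhere
>>> relnav.sfn_estimate()  # extract feature observations from the images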

One implementation detail we do want to note is that you should set the FeatureCatalogue.feature_finder attribute on your feature catalogue before using this class. For instance, if your catalogue is stored in a file called 'features.pickle':

>>> import pickle
>>> from giant.relative_opnav.estimators.sfn.surface_features import VisibleFeatureFinder
>>> with open('features.pickle', 'rb') as in_file:
...     fc = pickle.load(in_file)  # type: FeatureCatalogue
>>> fc.feature_finder = VisibleFeatureFinder(fc, gsd_scaling=2.5)

Classes

SurfaceFeatureNavigation

This class implements surface feature navigation using normalized cross correlation template matching for GIANT.

SurfaceFeatureNavigationOptions

This dataclass serves as one way to control the settings for the SurfaceFeatureNavigation class.