LimbMatching

giant.relative_opnav.estimators.limb_matching
This class implements GIANT's version of limb-based OpNav for irregular bodies.

The class provides an interface to perform limb-based OpNav for each target body that is predicted to be in an image. It does this by looping through each requested target object contained in the Scene.target_objs attribute. For each target, the algorithm:

1. Places the target along the line of sight identified from the image using the moment_algorithm, if requested
2. Extracts observed limb points from the image and pairs them with the target based on the expected apparent diameter of the target and the extent of the identified limbs
3. Identifies which points on the surface of the target likely correspond to the identified limb points in the image
4. Computes the update to the relative position between the target and the camera that better aligns the observed limbs with the predicted limb points on the target surface

Steps 2-4 are repeated until convergence, divergence, or the maximum number of iterations is reached, as sketched below.
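To make the loop concrete, here is a heavily simplified, illustrative sketch of the iteration. The names iterate_limb_matching, extract_and_pair, and solve_position_update are hypothetical stand-ins for GIANT's internals, not part of its API, and the residual-based convergence checks are omitted for brevity:

    import numpy as np

    def iterate_limb_matching(extract_and_pair, solve_position_update,
                              relative_position, max_iters=10,
                              state_atol=1e-10, state_rtol=1e-10):
        # Illustrative Gauss-Newton style loop mirroring steps 2-4 above.
        for iteration in range(max_iters):
            # Steps 2-3: paired observed limb points and predicted surface limb points
            observed, predicted = extract_and_pair(relative_position)

            # Step 4: position update that better aligns observed and predicted limbs
            update = solve_position_update(observed, predicted)
            relative_position = relative_position + update

            # Stop when the state stops changing (cf. state_atol/state_rtol);
            # real code also checks residual_rtol/residual_atol and divergence
            if np.linalg.norm(update) <= state_atol + state_rtol * np.linalg.norm(relative_position):
                return relative_position, iteration + 1

        return relative_position, max_iters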
In step 3, the image limb to surface point pairs are filtered for outliers using the get_outliers() function, if requested with the discard_outliers attribute.

The convergence of the technique is controlled through the parameters max_iters, state_rtol, state_atol, residual_rtol, and residual_atol. If the fit diverges or is unsuccessful for any reason, iteration will stop and the observed limb points and relative position will be set to NaN.

When all of the required data has been successfully loaded into an instance of this class, the estimate() method is used to perform the estimation for the requested image. The results are stored in the observed_bearings attribute for the observed limb locations and in the observed_positions attribute for the estimated relative position between the target and the camera. In addition, the predicted locations of the limbs for each target are stored in the computed_bearings attribute, and the a priori relative position between the target and the camera is stored in the computed_positions attribute. Finally, details about the fit are stored as a dictionary in the appropriate element of the details attribute. Specifically, these dictionaries will contain the following keys:
'Jacobian'
    The Jacobian matrix from the last completed iteration. Only available if the fit was successful.

'Inlier Ratio'
    The ratio of inliers to outliers for the last completed iteration. Only available if the fit was successful.

'Covariance'
    The 3x3 covariance matrix for the estimated relative position in the camera frame, based on the residuals. Only available if the fit was successful.

'Number of iterations'
    The number of iterations it took the system to converge. Only available if the fit was successful.

'Surface Limb Points'
    The surface points that correspond to the limb points, expressed in the target-fixed, target-centered frame.

'Failed'
    A message indicating why the fit failed. This will only be present if the fit failed, so you can check for failure with something like 'Failed' in limb_matching.details[target_ind]. The message should be a human-readable description of what caused the failure.

'Prior Residuals'
    The sum of squares of the residuals from the prior iteration. Only available if the fit failed due to divergence.

'Current Residuals'
    The sum of squares of the residuals from the current iteration. Only available if the fit failed due to divergence.
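For example, after estimate() has run you can use these keys to distinguish successful fits from failures. The key and attribute names below come from the table above, while limb_matching and target_ind are assumed to be your estimator instance and a target index of your choosing:

    details = limb_matching.details[target_ind]

    if 'Failed' in details:
        # the fit failed; the message is a human-readable description of the cause
        print('Limb matching failed:', details['Failed'])
        if 'Current Residuals' in details:
            # only present when the failure was due to divergence
            print('residual growth:', details['Prior Residuals'],
                  '->', details['Current Residuals'])
    else:
        # the fit succeeded, so the quality metrics are available
        print('converged in', details['Number of iterations'], 'iterations')
        print('inlier ratio:', details['Inlier Ratio'])
        covariance = details['Covariance']  # 3x3, camera frame, from the residuals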
Warning

Before calling the estimate() method, be sure that the scene has been updated to correspond to the correct image time. This class does not update the scene automatically.

Parameters:

- scene (Scene) – The Scene object containing the target, light, and obscuring objects.
- camera (Camera) – The Camera object containing the camera model and images to be utilized.
- image_processing – The ImageProcessing object to be used to process the images.
- options (LimbMatchingOptions | None) – A dataclass specifying the options to set for this instance.
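As a minimal usage sketch, assuming scene, camera, and image_processing have already been constructed, and following the typical GIANT pattern of iterating over the camera and updating the scene to each image time before estimating:

    from giant.relative_opnav.estimators.limb_matching import LimbMatching

    limb_matching = LimbMatching(scene, camera, image_processing)

    for image_ind, image in camera:
        # per the warning above: the scene must reflect the image time first
        scene.update(image)

        limb_matching.estimate(image)

        for target_ind in range(len(scene.target_objs)):
            position = limb_matching.observed_positions[target_ind]
            if position is not None:
                print(image_ind, target_ind, position)  # km, camera frame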
Summary of Attributes

technique
    The name of the technique identifier in the RelativeOpNav class.
observable_type
    The type of observables this technique generates.
camera
    The camera instance that represents the camera used to take the images we are performing Relative OpNav on. This is the source of the camera model, and may be used for other information about the camera as well. See the Camera property for details.
generates_templates
    A flag specifying whether this RelNav estimator generates and stores templates in the templates attribute.
relnav_handler
    A custom handler for performing the estimation and packaging the results into the RelativeOpNav instance.

    Typically this should be None, unless the observable_type is set to RelNavObservablesType.CUSTOM, in which case this must be a function whose first and only positional argument is the RelativeOpNav instance that this technique was registered to, and which accepts two keyword arguments, image_ind and include_targets, which should be used to control which image/target is processed.

    If observable_type is not RelNavObservablesType.CUSTOM then this is ignored, whether it is None or not.
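For illustration, a custom handler therefore needs the following shape (the body here is a placeholder):

    def my_custom_handler(relnav, image_ind=None, include_targets=None):
        # relnav is the RelativeOpNav instance this technique was registered to;
        # image_ind and include_targets control which image/target is processed.
        ...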
scene
    The scene which defines the a priori locations of all targets and light sources with respect to the camera. You can assume that the scene has been updated for the appropriate image time inside of the class.
details
    This attribute should provide details from applying the technique to each target in the scene. The list should be the same length as Scene.target_objs. Typically, if the technique was not applied for some of the targets then the details for the corresponding element should be None. Beyond that, each element of details should typically contain a dictionary providing information about the results that is not strictly needed for understanding what happened; however, this is not required and you can use whatever structure you want to convey the information. Whatever you do, you should clearly document it for each technique so that the user knows what to expect.
computed_positions
    A list of the computed relative position between the target and the camera in the image frame, where each element corresponds to the same element in the Scene.target_objs list.

    The list elements should be numpy arrays or None if the target wasn't considered or this type of measurement is not applicable. If numpy arrays, they should contain the relative position between the camera and the target as a length-3 array with units of kilometers in the camera frame. This does not need to be populated for all RelNav techniques.

    This is where you should store results for RELATIVE-POSITION techniques.
computed_bearings
    A list of the computed (predicted) bearings in the image, where each element corresponds to the same element in the Scene.target_objs list.

    The list elements should be numpy arrays or None if the target wasn't considered for some reason. If numpy arrays, they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

    This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, and CONSTRAINT techniques.
observed_positions
    A list of the observed relative position between the target and the camera in the image frame, where each element corresponds to the same element in the Scene.target_objs list.

    The list elements should be numpy arrays or None if the target wasn't considered or this type of measurement is not applicable. If numpy arrays, they should contain the relative position between the camera and the target as a length-3 array with units of kilometers in the camera frame. This does not need to be populated for all RelNav techniques.

    This is where you should store results for RELATIVE-POSITION techniques.
observed_bearings
    A list of the observed bearings in the image, where each element corresponds to the same element in the Scene.target_objs list.

    The list elements should be numpy arrays or None if the target wasn't considered for some reason. If numpy arrays, they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

    This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, and CONSTRAINT techniques.
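Because the computed and observed lists are aligned with Scene.target_objs and both store (x, y) pixel locations (or None), post-fit bearing residuals can be formed with a sketch like the following (limb_matching is assumed to be your estimator instance):

    residuals = []
    for computed, observed in zip(limb_matching.computed_bearings,
                                  limb_matching.observed_bearings):
        if computed is None or observed is None:
            residuals.append(None)  # target not considered in this image
        else:
            residuals.append(observed - computed)  # pixel residuals, (x, y)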
extraction_method
    The method to use to extract the observed limbs from the image. Should be 'LIMB_SCANNING' or 'EDGE_DETECTION'. See LimbExtractionMethods for details.
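For example, the extraction method can be selected through the options dataclass. This sketch assumes LimbMatchingOptions exposes an extraction_method field and that both classes live in this module:

    from giant.relative_opnav.estimators.limb_matching import (
        LimbExtractionMethods, LimbMatching, LimbMatchingOptions)

    options = LimbMatchingOptions(extraction_method=LimbExtractionMethods.EDGE_DETECTION)
    limb_matching = LimbMatching(scene, camera, image_processing, options=options)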
use_moment_algorithm
    A flag specifying whether to estimate the center using the moment algorithm to get a fast rough estimate of the center-of-figure.
limbs_camera
    The limb surface points with respect to the center of the target.

    Until estimate() is called this list will be filled with None. Each element of this list corresponds to the same element in the Scene.target_objs list.
Summary of Methods
compute_jacobian
    This method computes the linear change in the measurements (the distance between the predicted and observed limb points and the scan center) with respect to a change in the state vector.

estimate
    This method identifies the position of each target in the camera frame using limb matching.

extract_and_pair_limbs
    Extract limb points from an image and pair them to the surface points on the target that created them.

reset
    This method resets the observed/computed attributes, the details attribute, and the gif attributes to their initial values.

target_generator
    This method returns a generator which yields target_index, target pairs that are to be processed based on the input include_targets.
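For instance, a custom processing loop might consume the generator like this (illustrative; it assumes target_generator accepts the include_targets sequence described above, and process is a placeholder for your own work):

    for target_ind, target in limb_matching.target_generator(include_targets=None):
        # None means process every target; otherwise only the requested ones
        process(target_ind, target)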