MomentAlgorithm

giant.relative_opnav.estimators.moment_algorithm
This class implements GIANT's version of moment based center finding for extracting bearing measurements to resolved or unresolved targets in an image.
The class provides an interface to perform moment based center finding for each target body that is predicted to be in an image. It does this by looping through each requested target object contained in the Scene.target_objs attribute. For each target, the algorithm:

1. Predicts the location of the target in the image using the a priori knowledge of the scene.
2. Predicts the apparent area of the target in the scene assuming a spherical target.
3. Segments the image into foreground/background objects, using the smallest expected apparent area of all targets as the minimum segment area. This is done using ImageProcessing.segment_image().
4. Identifies the closest foreground segment to the predicted target location that is also within the user specified search radius. If the closest segment is also the closest segment for another target in the image, then both targets are recorded as not found. If no segments are within the search radius of the predicted target center, then the target is marked as not found.
5. Takes the foreground objects around the identified segment and finds the centroid of the illuminated areas using a moment algorithm to compute the observed center of brightness.
6. If requested, corrects the observed center of brightness to the observed center of figure using compute_phase_correction().
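The center-of-brightness step relies on standard image moments. As an illustrative sketch (not GIANT's actual implementation), the centroid of the illuminated pixels can be computed from the zeroth and first moments of the intensity values:

```python
import numpy as np


def center_of_brightness(image: np.ndarray, foreground: np.ndarray) -> np.ndarray:
    """Centroid of the illuminated pixels via image moments (illustrative).

    image: 2D array of pixel intensities.
    foreground: boolean mask of pixels considered foreground.
    Returns the (x, y) (that is, (col, row)) center of brightness.
    """
    rows, cols = np.nonzero(foreground)
    weights = image[rows, cols].astype(float)
    m00 = weights.sum()               # zeroth moment: total brightness
    x = (weights * cols).sum() / m00  # first moment in x divided by m00
    y = (weights * rows).sum() / m00  # first moment in y divided by m00
    return np.array([x, y])
```

Because the centroid is brightness weighted, brighter pixels pull the result toward them, which is why a phase correction may be needed to move from the center of brightness toward the center of figure.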
For more details on the image segmentation, along with possible tuning parameters, refer to the ImageProcessing.segment_image() documentation.

The search radius is controlled by the search_distance attribute. This should be a number or None. If it is not None, then the distance from the centroid of the nearest segment to the predicted target location must be less than this value. Therefore, you should set this value to account for the expected center-of-figure to center-of-brightness shift as well as the uncertainty in the a priori location of the target in the scene, while being careful not to set too large a value if there are multiple targets in the scene, to avoid ambiguity. If it is None, then the closest segment is always paired with the target (there is no search region considered) unless the segment has already been paired to another target in the scene.

This technique can predict what the minimum segment area should be in the image using the predicted apparent areas for each target. This can be useful to automatically set ImageProcessing.minimum_segment_area based on the targets and their a priori locations in the camera frame. Because this is only an approximation, a margin of safety is included through apparent_area_margin_of_safety, which is used to shrink the predicted apparent area to account for the assumption of a spherical target and possible errors in the a priori scene information. You can turn off this feature and just use the set minimum segment area by setting use_apparent_area to False.

Whether the phase correction is applied is controlled by the boolean flag apply_phase_correction. The information passed to the phase correction routines is controlled by the phase_correction_type and brdf attributes.

When all of the required data has been successfully loaded into an instance of this class, the estimate() method is used to extract the observed centers of the target bodies predicted to be in the requested image. The results are stored in the observed_bearings attribute. In addition, the predicted location for each target is stored in the computed_bearings attribute. Finally, the details about the fit are stored as a dictionary in the appropriate element of the details attribute. Specifically, these dictionaries will contain the following keys:

'Fit'
    The fit moment object. Only available if the fit was successful.
'Phase Correction'
    The phase correction vector used to convert from the center of brightness to the center of figure. This will only be available if the fit was successful. If apply_phase_correction is False, then this will be an array of zeros.
'Observed Area'
    The area (number of pixels that were considered foreground) observed for this target. This is only available if the fit was successful.
'Predicted Area'
    The area (number of pixels that were considered foreground) predicted for this target. This is only available if the fit was successful.
'Failed'
    A message indicating why the fit failed. This will only be present if the fit failed (so you can do something like 'Failed' in moment_algorithm.details[target_ind] to check whether something failed). The message should be a human readable description of what caused the failure.
'Found Segments'
    All of the segments that were found in the image. This is a tuple of all of the returned values from ImageProcessing.segment_image(). This is only included if the fit failed for some reason.

Warning

Before calling the estimate() method, be sure that the scene has been updated to correspond to the correct image time. This class does not update the scene automatically.

Parameters:
- scene (Scene) – The Scene object containing the target, light, and obscuring objects.
- camera (Camera) – The Camera object containing the camera model and images to be utilized.
- image_processing (ImageProcessing) – The ImageProcessing object to be used to process the images.
- use_apparent_area (bool) – A boolean flag specifying whether to predict the minimum apparent area we should consider when segmenting the image into foreground/background objects.
- apparent_area_margin_of_safety (Real) – The margin of safety used to decrease the predicted apparent area to account for errors in the a priori scene/shape model as well as errors introduced by assuming a spherical object. The predicted apparent area will be divided by this number and then supplied as the minimum_segment_area attribute. This should always be >= 1.
- search_distance (int | None) – The search radius around the predicted centers in which to search for the observed centers of the target objects. This is used as a limit, so that if the closest segmented object to a predicted target location is farther than this, the target is treated as not found. Additionally, if multiple segmented regions fall within this distance of the target, then we treat it as ambiguous and not found.
- apply_phase_correction (bool) – A boolean flag specifying whether to apply the phase correction to the observed center of brightness to get closer to the center of figure, based on the predicted apparent diameter of the object.
- phase_correction_type (PhaseCorrectionType | str) – The type of phase correction to use. Should be one of the PhaseCorrectionType enum values.
- brdf (IlluminationModel | None) – The illumination model used to compute the illumination values if the RASTERED phase correction type is used. If the RASTERED phase correction type is not used, this is ignored. If this is left as None and the RASTERED phase correction type is used, this will default to the McEwen model, McEwenIllumination.
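As a rough sketch of how the apparent-area prediction and the margin of safety interact, the following assumes small-angle pinhole projection of a spherical target (the function name and inputs are hypothetical, not GIANT's API):

```python
import math


def predicted_minimum_segment_area(radius: float, distance: float,
                                   focal_length_pix: float,
                                   margin_of_safety: float = 2.0) -> float:
    """Approximate minimum segment area in pixels for a spherical target.

    radius and distance must share units; focal_length_pix is the focal
    length expressed in pixels; margin_of_safety should always be >= 1.
    """
    radius_pix = focal_length_pix * radius / distance  # projected radius in pixels
    apparent_area = math.pi * radius_pix ** 2          # projected disk area
    # shrink the prediction to account for shape and a priori scene errors
    return apparent_area / margin_of_safety
```

A larger margin of safety lowers the threshold, making it less likely that a real target is rejected as background at the cost of admitting more spurious foreground segments.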
The name of the technique identifier in the RelativeOpNav class.
The type of observables this technique generates.
Half of the distance to search around the predicted centers for the observed centers of the target objects in pixels.

This is also used to identify ambiguous target-to-segmented-area pairings. That is, if 2 segmented areas are within this value of the predicted center of figure for a target, then that target is treated as not found and a warning is printed.

If this is None, then the closest segmented object from the image to the predicted center of figure of the target in the image is always chosen.
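The pairing rule described here can be sketched as follows (illustrative only; the function name and return convention are hypothetical):

```python
import numpy as np


def pair_target_to_segment(predicted_center, segment_centroids, search_distance):
    """Return the index of the segment paired with the target, or None.

    A target is treated as not found when no segment centroid lies within
    search_distance of the predicted center, or when more than one does
    (an ambiguous pairing).
    """
    dists = np.linalg.norm(np.asarray(segment_centroids, dtype=float)
                           - np.asarray(predicted_center, dtype=float), axis=1)
    if search_distance is None:
        return int(np.argmin(dists))  # no search region: always take the closest
    within = np.nonzero(dists <= search_distance)[0]
    if within.size != 1:
        return None  # nothing close enough, or ambiguous (2+ candidates)
    return int(within[0])
```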
A boolean flag specifying whether to apply the phase correction or not.
The camera instance that represents the camera used to take the images we are performing Relative OpNav on.
This is the source of the camera model, and may be used for other information about the camera as well. See the Camera property for details.
A flag specifying whether this RelNav estimator generates and stores templates in the templates attribute.
A custom handler for doing estimation/packaging the results into the RelativeOpNav instance.

Typically this should be None, unless the observable_type is set to RelNavObservablesType.CUSTOM, in which case this must be a function where the first and only positional argument is the RelativeOpNav instance that this technique was registered to, and there are 2 keyword arguments, image_ind and include_targets, which should be used to control which image/target is processed.

If observable_type is not RelNavObservablesType.CUSTOM, then this is ignored whether it is None or not.
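A skeleton matching the calling convention just described might look like this (the function name and body are purely illustrative):

```python
def my_custom_handler(relnav, image_ind=None, include_targets=None):
    """Illustrative custom handler.

    The first (and only) positional argument is the RelativeOpNav instance
    this technique was registered to; image_ind and include_targets are the
    keyword arguments that control which image/target is processed.
    """
    # a real handler would perform the estimation here and store the
    # results back onto the relnav instance
    return image_ind, include_targets
```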
The scene which defines the a priori locations of all targets and light sources with respect to the camera.
You can assume that the scene has been updated for the appropriate image time inside of the class.
A list of the observed bearings in the image where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays, or None if the target wasn't considered for some reason. If they are numpy arrays, they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, CONSTRAINT techniques.
A list of the computed (predicted) bearings in the image where each element corresponds to the same element in the Scene.target_objs list.

The list elements should be numpy arrays, or None if the target wasn't considered for some reason. If they are numpy arrays, they should contain the pixel locations as (x, y) or (col, row). This does not always need to be filled out.

This is where you should store results for CENTER-FINDING, LIMB, LANDMARK, CONSTRAINT techniques.
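For example, observed-minus-computed residuals can be formed from the two lists while skipping targets that were not found (the bearing values below are hypothetical):

```python
import numpy as np

# hypothetical results for two targets: (x, y) pixel locations, or None if not found
observed_bearings = [np.array([102.4, 215.9]), None]
computed_bearings = [np.array([101.9, 216.3]), np.array([400.0, 12.0])]

# observed minus computed, with None propagated for targets that were not found
residuals = [
    obs - com if obs is not None and com is not None else None
    for obs, com in zip(observed_bearings, computed_bearings)
]
```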
A boolean flag specifying whether to use the predicted apparent area (number of pixels) of the illuminated target in the image to threshold what is considered a foreground object in the image.
The margin of safety used to decrease the predicted apparent area for each target.
This value should always be >= 1, as the predicted area is divided by this to get the effective minimum apparent area for the targets. This is included to account for errors in the a priori scene/shape model for the targets as well as the errors introduced by assuming spherical targets. Since there is only one margin of safety for all targets in a scene, you should set this based on the expected worst case for all of the targets.
'Fit'
    The fit moment object. Only available if the fit was successful.
'Phase Correction'
    The phase correction vector used to convert from the center of brightness to the center of figure. This will only be available if the fit was successful. If apply_phase_correction is False, then this will be an array of zeros.
'Observed Area'
    The area (number of pixels that were considered foreground) observed for this target. This is only available if the fit was successful.
'Predicted Area'
    The area (number of pixels that were considered foreground) predicted for this target. This is only available if the fit was successful.
'Failed'
    A message indicating why the fit failed. This will only be present if the fit failed (so you can do something like 'Failed' in moment_algorithm.details[target_ind] to check whether something failed). The message should be a human readable description of what caused the failure.
'Found Segments'
    All of the segments that were found in the image. This is a tuple of all of the returned values from ImageProcessing.segment_image(). This is only included if the fit failed for some reason.
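Checking for a failed target therefore reduces to a membership test on the appropriate dictionary (the dictionaries below are hypothetical examples of the keys described above):

```python
# hypothetical details for two targets after a call to estimate()
details = [
    {"Fit": "moment-object", "Phase Correction": [0.0, 0.0],
     "Observed Area": 1250, "Predicted Area": 1400},
    {"Failed": "No segments found within the search distance.",
     "Found Segments": ()},
]

# indices of targets whose fit failed
failed_targets = [ind for ind, d in enumerate(details) if "Failed" in d]
```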
Summary of Methods

- The method computes the phase correction assuming a spherical target.
- This method extracts the observed center of figure for each requested target object from the supplied image.
- This method computes the phase correction by raster rendering the target to determine the offset from the center of illumination to the center of figure.
- This method resets the observed/computed attributes as well as the details attribute to have
- This method computes the simple phase correction assuming the target is a sphere.
- This method returns a generator which yields target_index, target pairs that are to be processed based on the input