ConstraintMatching

giant.relative_opnav.estimators.constraint_matching

class giant.relative_opnav.estimators.constraint_matching.ConstraintMatching(scene, camera, options=None)[source]

This class implements constraint matching in GIANT.

See the module documentation or the attribute and method documentation for more details.

Warning

While this technique is functional, it has undergone less development and testing than other GIANT techniques and there could therefore be some undiscovered bugs. Additionally, the documentation needs a little more massaging. PRs are welcome…

Parameters:
  • scene (Scene) – The scene describing the a priori locations of the targets and the light source.

  • camera (Camera) – The Camera object containing the camera model and images to be analyzed.

  • options (ConstraintMatchingOptions | None) – A dataclass specifying the options to set for this instance.
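A minimal construction sketch (hedged: the import location of ConstraintMatchingOptions and its field names are assumed to mirror the attributes documented below; scene and camera are pre-built Scene and Camera instances):

    from giant.relative_opnav.estimators.constraint_matching import (
        ConstraintMatching,
        ConstraintMatchingOptions,
    )

    # assumed: the options dataclass exposes fields matching the attributes below
    options = ConstraintMatchingOptions()
    options.min_constraints = 10  # e.g. require more matches before declaring success
    matcher = ConstraintMatching(scene, camera, options=options)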

observable_type: List[RelNavObservablesType | str] | RelNavObservablesType | str | None = [RelNavObservablesType.CONSTRAINT]

This technique generates CONSTRAINT bearing observables.

property camera: Camera

The camera instance that represents the camera used to take the images we are performing Relative OpNav on.

This is the source of the camera model, and may be used for other information about the camera as well. See the Camera class documentation for details.

property scene: Scene

The scene which defines the a priori locations of all targets and light sources with respect to the camera.

You can assume that the scene has been updated for the appropriate image time inside of the class.

generates_templates: bool = True

A flag specifying that this RelNav estimator generates and stores templates in the templates attribute.

relnav_handler: Callable | None = None

A custom handler for doing estimation/packaging the results into the RelativeOpNav instance.

Typically this should be None, unless observable_type is set to RelNavObservablesType.CUSTOM. In that case this must be a function whose first and only positional argument is the RelativeOpNav instance that this technique was registered to, and which accepts two keyword arguments, image_ind and include_targets, that control which image/target is processed.

If observable_type is not RelNavObservablesType.CUSTOM then this attribute is ignored, whether it is None or not.
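For reference, a skeleton of the custom-handler signature described above (the body is illustrative only; it applies only when observable_type is RelNavObservablesType.CUSTOM):

    def custom_handler(relnav, image_ind=None, include_targets=None):
        # relnav is the RelativeOpNav instance this technique was registered to.
        # Use image_ind and include_targets to control which image/target is
        # processed, then package the results back onto relnav (body omitted).
        ...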

match_against_template: bool = True

A flag to match keypoints between the image and a rendered template.

match_across_images: bool = False

A flag to match keypoints across multiple images.

min_constraints: int = 5

The minimum number of matched constraints required for constraint matching to be considered successful.

brdf: IlluminationModel

The illumination model that transforms the geometric ray tracing results (see ILLUM_DTYPE) into intensity values. Typically this is one of the options from the illumination module.
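For example, assuming McEwenIllumination from the giant.ray_tracer.illumination module (substitute whichever model fits your target):

    from giant.ray_tracer.illumination import McEwenIllumination

    matcher.brdf = McEwenIllumination()  # convert ray-trace geometry into intensities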

rays: Rays | None | List[Rays | None] = None

The rays to use when rendering the template. If None, the rays required to render the template will be computed automatically. Alternatively, a list of Rays objects may be provided, where each element gives the rays to use for the corresponding template in the Scene.target_objs list. Typically this should be left as None.

grid_size: int = 1

The subsampling to use per pixel when rendering the template. This should be the number of sub-pixels per side of a pixel (that is, if grid_size=3 then subsampling will be done on an equally spaced 3x3 grid -> 9 sub-pixels per pixel). If rays is not None then this is ignored.
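The number of rays traced per pixel therefore grows quadratically with grid_size:

    for grid_size in (1, 2, 3):
        print(f"grid_size={grid_size} -> {grid_size ** 2} sub-pixels per pixel")
    # grid_size=1 -> 1, grid_size=2 -> 4, grid_size=3 -> 9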

template_overflow_bounds: int = -1

The number of pixels to render in the template that overflow outside of the camera field of view. Set to a number less than 0 to accept all overflow pixels in the template. Set to a number greater than or equal to 0 to limit the number of overflow pixels.

max_time_difference: datetime.timedelta | None = None

The maximum time difference between image observation dates for keypoints to be matched between images, specified as a datetime.timedelta. If None, no maximum time difference is applied.
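For example, to also match keypoints across images taken within half an hour of each other (a sketch using the attribute names documented on this page):

    from datetime import timedelta

    matcher.match_across_images = True
    matcher.max_time_difference = timedelta(minutes=30)  # only pair images observed within 30 minutes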

feature_matcher: FeatureMatcher

The feature matcher instance to use.

details: List[Any | None]

This attribute should provide details from applying the technique to each target in the scene.

The list should be the same length as Scene.target_objs. Typically, if the technique was not applied to some of the targets, the corresponding elements should be None. Each of the remaining elements should typically contain a dictionary providing information about the results that is not strictly needed to understand what happened; however, this is not required, and you can use whatever structure you want to convey the information. Whatever you choose, document it clearly for each technique so the user knows what to expect.
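A hypothetical inspection loop (the structure of each non-None element is technique-specific and not fully specified here):

    for target_index, detail in enumerate(matcher.details):
        if detail is None:
            print(f"target {target_index}: not processed")
        else:
            print(f"target {target_index}: {detail}")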

compute_constraint_position(image, include_targets=None)[source]

Trace rays from the camera to the target to roughly estimate the location on the target model that corresponds to each constraint.

Parameters:
  • image (OpNavImage) – The image to locate the targets in.

  • include_targets (list[bool] | None) – A list of booleans specifying whether to process the corresponding target in Scene.target_objs, or None. If None, all targets are processed.
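A usage sketch, assuming image is an OpNavImage from the camera and a two-target scene:

    # process only the first target in Scene.target_objs
    matcher.compute_constraint_position(image, include_targets=[True, False])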

Summary of Methods

compute_rays

This method computes the required rays to render a given target based on the location of the target in the image.

render

This method returns the computed illumination values for the given target and the (sub)pixels that each illumination value corresponds to.

reset

This method resets the observed/computed attributes as well as the details attribute to None for each target in the scene.

target_generator

This method returns a generator which yields target_index, target pairs that are to be processed based on the input include_targets.

match_image_to_template

Matches keypoints between an image and a rendered template.

match_keypoints_across_images

Matches keypoints across different images.

compute_constraint_position

Trace rays from the camera to the target to roughly estimate the location on the target model that corresponds to each constraint.

estimate

Do the estimation according to the current settings.
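Putting it together, a minimal end-to-end sketch (hedged: estimate is assumed to follow the standard GIANT RelNavEstimator signature estimate(image, include_targets=None), and iterating a Camera is assumed to yield (index, image) pairs for the turned-on images):

    for ind, image in camera:
        matcher.reset()          # clear observed/computed results and details
        matcher.estimate(image)  # match constraints for every target in the scene
        for target_index, detail in enumerate(matcher.details):
            if detail is not None:
                print(f"image {ind}: matched constraints for target {target_index}")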