constraint_matching
This module implements opportunistic feature matching between images or between an image and a template.
Description of the Technique
Constraint matching, or opportunistic feature matching, refers to identifying the same “feature” in two different images, or between an image and a rendered template.
Between Images
When doing constraint matching between two images, the observations are not tied to known points on the surface of the target. This is therefore a less accurate technique (at least from an overall orbit determination perspective) than traditional terrain relative navigation, as the information we extract only pertains to the change in the location and orientation of the camera from one image to the next (and in general is just an estimate of the direction of motion due to scale ambiguities).
That being said, this can still be a powerful measurement type, particularly when fused with other data sets that can resolve the scale ambiguity. Additionally, it can be used even at unmapped bodies, which can dramatically reduce the time required to be able to operate at a new target.
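One way to make the scale ambiguity concrete is the standard epipolar-geometry relation (background material from computer vision, not something specific to this module): for calibrated, matched rays x_1 and x_2 to the same surface point in two images,

    x_2^\top E \, x_1 = 0, \qquad E = [t]_\times R,

where R is the rotation and t the translation between the two camera poses. The constraint is homogeneous in E, so scaling t by any nonzero factor leaves it unchanged, which is why only the direction of motion (together with the relative orientation) is observable from the matches alone.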
Between an Image and a Template
When doing constraint matching between an image and a template, we can actually tie the observations back to a known map of the body (whatever map the template was rendered from). This means the measurements can generally be treated the same as regular TRN/SFN measurements, though, if your template comes from a global shape model, you should expect the resulting feature locations to be less accurate than they would be from detailed maps built specifically for navigation. Another difference from regular TRN/SFN is that you are unlikely to observe the exact same "feature" multiple times in different images; rather, each observation will be slightly different. This removes some information from the orbit determination process, as each observation is unique and mostly unrelated to any other observation (whereas in traditional TRN/SFN we can do things like estimate both the spacecraft state and the feature locations, since we receive multiple observations of the same feature from different images).
The primary benefit of constraint matching in this case is again that you can begin doing a form of TRN much earlier in operations, before there has been time to make a detailed set of maps for navigation purposes.
Tuning
There are several tuning parameters that can impact the performance of constraint matching, as outlined in the ConstraintMatchingOptions class. In general though, the most critical tuning parameters are the choice of the feature_matcher and the max_time_difference, with the feature_matcher being the more important of the two. GIANT ships with several "hand tuned" feature matchers, including SIFT and ORB. These can perform well between a template and an image, or between two images where the illumination conditions are relatively similar (with SIFT generally outperforming ORB), but they will struggle with large changes in illumination conditions. Optionally, you can install the open source implementation of RoMa (as described in roma_matcher), which is a machine learning based technique for matching images. This model, even without additional fine tuning, has shown excellent performance even under challenging illumination condition changes, and it also outperforms the hand-tuned features in cases where they are well suited.
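As a rough illustration of this tuning, the options might be configured along the following lines. This is a minimal sketch only: the import path, the form of the feature_matcher argument, and the type expected by max_time_difference are assumptions and should be checked against the ConstraintMatchingOptions documentation.

    # Hypothetical sketch -- verify the real import path, argument forms, and types
    # against the ConstraintMatchingOptions documentation before use.
    from datetime import timedelta

    from giant.relative_opnav.estimators.constraint_matching import ConstraintMatchingOptions  # assumed path

    options = ConstraintMatchingOptions(
        feature_matcher="SIFT",  # assumed form: may instead expect a matcher object (ORB or a RoMa matcher are other choices)
        max_time_difference=timedelta(hours=2),  # assumed type: only pair images taken within two hours of each other
    )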
Use
The class provided in this module is usually not used by the user directly; instead, it is usually interfaced with through the RelativeOpNav class using the identifier constraint_matching. For more details on using the RelativeOpNav interface, please refer to the relnav_class documentation. For more details on using the technique class directly, as well as a description of the details dictionaries produced by this technique, refer to the following class documentation.
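As a rough sketch of the RelativeOpNav workflow (the import path, the constructor arguments, the keyword used to pass the options, and the generated constraint_matching_estimate method name follow GIANT's usual conventions but are assumptions here and should be checked against the relnav_class documentation):

    # Hypothetical sketch -- names follow GIANT's usual conventions but are assumptions.
    from giant.relative_opnav import RelativeOpNav  # assumed import path

    # `camera` and `scene` are assumed to be previously constructed GIANT Camera and
    # Scene objects holding the images to process and the target model.
    relnav = RelativeOpNav(
        camera,
        scene,
        constraint_matching_options=options,  # keyword name assumed; `options` as in the earlier sketch
    )

    # Techniques registered with RelativeOpNav are typically applied through a
    # generated <identifier>_estimate method, here using the constraint_matching identifier.
    relnav.constraint_matching_estimate()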
Warning
While this technique is functional, it has undergone less development and testing than other GIANT techniques and there could therefore be some undiscovered bugs. Additionally, the documentation needs a little more massaging. PRs are welcome…
Classes
ConstraintMatchingOptions
    This dataclass serves as one way to control the settings for the ConstraintMatching class.
ConstraintMatching
    This class implements constraint matching in GIANT.