estimator_interface_abc
This module defines the abstract base class (ABC) for defining Relative OpNav techniques that will work with the RelativeOpNav class.

This ABC provides a design guide/requirement for building a new RelNav technique that can easily be registered with the RelativeOpNav class. In general, when you define a new RelNav technique, you should subclass this ABC, both (a) to ensure you implement all of the required methods and instance/class attributes, and (b) to get the concrete implementations this class provides, which are generally applicable.
You only need to worry about this ABC when you are defining a new technique. If you are using an existing technique, like cross_correlation, ellipse_matching, limb_matching, moment_algorithm, or unresolved, then you can ignore this class and just read the documentation for those techniques.
Use
To implement a fully functional RelNav technique, you must implement the following class attributes:

| Class Attribute | Description |
| --- | --- |
|  | A string that gives the name of the technique. This should be an "identifier", which means it should contain only letters, numbers, and the underscore character, and should not start with a number. This will be used in registering the class to define the property that points to the instance of this technique, as well as the … |
|  | A list of … giving the basic types of observables this technique generates. |
|  | A boolean flag specifying whether this technique generates templates and stores them in the … |
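As a concrete illustration, the sketch below shows how such class attributes might be declared on a new technique. The attribute names used here (`technique`, `observable_type`, `generates_templates`) are assumptions for illustration, since the actual names are elided from the table above:

```python
# A minimal sketch of the class attributes a new technique might declare.
# The attribute names (technique, observable_type, generates_templates)
# are illustrative assumptions, not confirmed names from the table above.

class MyCenterFinding:
    # must be a valid identifier (letters/numbers/underscores, not starting
    # with a number) because it is used to name attributes created on the
    # RelativeOpNav class during registration
    technique = "my_center_finding"

    # the basic type(s) of observables this technique generates
    observable_type = ["CENTER-FINDING"]

    # whether this technique generates and stores templates for each target
    generates_templates = True
```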
You should also typically implement the following instance attributes:

| Instance Attribute | Description |
| --- | --- |
|  | The instance of the image processing class to use when working with the images. |
|  | The instance of the … |
|  | The instance of the … |
|  | The attribute in which to store computed (predicted) bearing measurements as (x, y) in pixels. This is a list whose length is the number of targets in the scene; when a target is processed, the technique should put the bearing measurements into the appropriate index. For center-finding type measurements, these will be single (x, y) pairs. For landmark/limb type measurements, these will be an nx2 array of (x, y) pairs, one for each landmark or feature. |
|  | The attribute in which to store observed bearing measurements as (x, y) in pixels. This is a list whose length is the number of targets in the scene; when a target is processed, the technique should put the bearing measurements into the appropriate index. For center-finding type measurements, these will be single (x, y) pairs. For landmark/limb type measurements, these will be an nx2 array of (x, y) pairs, one for each landmark or feature. |
|  | The attribute in which to store computed (predicted) position measurements as (x, y, z) in kilometers in the camera frame. This is a list whose length is the number of targets in the scene; when a target is processed, the technique should put the predicted position into the appropriate index. |
|  | The attribute in which to store observed (measured) position measurements as (x, y, z) in kilometers in the camera frame. This is a list whose length is the number of targets in the scene; when a target is processed, the technique should put the measured position into the appropriate index. |
|  | The attribute in which templates should be stored for each target, if templates are used by the technique. This is a list whose length is the number of targets in the scene; when a target is processed, the template(s) generated for that target should be stored in the appropriate element. For center-finding type techniques the templates are 2D numpy arrays. For landmark type techniques the templates are usually lists of 2D numpy arrays, where each list element corresponds to the template for the corresponding landmark. |
|  | This attribute can be used to store extra information about what happened when the technique was applied. This should be a list whose length is the number of targets in the scene; when a target is processed, the details should be saved to the appropriate element of the list. Usually each element takes the form of a dictionary and contains things like the uncertainty of the measured value (if known), the correlation score (if correlation was used), or other pieces of information that are not directly needed but may give context to a user or another program. Because this is freeform, for the most part GIANT will just copy this list where it belongs and will not actually inspect the contents. To use the contents you will either need to inspect them yourself or write custom code for them. |
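The per-target list pattern shared by these attributes can be sketched as follows. The attribute names mirror the descriptions above but are assumptions for illustration:

```python
# Sketch of initializing the per-target storage lists described above:
# one slot per target in the scene, filled in as each target is processed.
# The attribute names are illustrative assumptions.

class TechniqueStorage:
    def __init__(self, number_of_targets):
        self.computed_bearings = [None] * number_of_targets
        self.observed_bearings = [None] * number_of_targets
        self.computed_positions = [None] * number_of_targets
        self.observed_positions = [None] * number_of_targets
        self.templates = [None] * number_of_targets
        self.details = [None] * number_of_targets

store = TechniqueStorage(3)
# center-finding style result for the second target: a single (x, y) pair
store.observed_bearings[1] = (320.5, 240.25)
# freeform context about the fit, e.g. a correlation score
store.details[1] = {"correlation_score": 0.97}
```

Unprocessed targets keep `None` in their slots, so downstream code can tell which targets actually produced measurements.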
Finally, you must implement the following method:

| Method | Description |
| --- | --- |
|  | This method should use the defined technique to extract observables from the image, depending on the type of the observables generated. This is also where the computed (predicted) observables should be generated and stored, as well as fleshing out the … |
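The contract of this method can be sketched as follows: process each requested target, store both observed and computed values in the appropriate slots, and record details. Everything here, including the simple intensity-weighted centroid used as a stand-in extraction step, is an illustrative assumption rather than GIANT's actual implementation:

```python
# Sketch of the single required estimation method: for each requested
# target, extract an observed bearing from the image and record details.
# The intensity-weighted centroid is a stand-in for a real technique's
# extraction step; all names are illustrative assumptions.

class MomentCenterFinding:
    def __init__(self, number_of_targets, predicted_centers):
        # a priori predicted (x, y) centers, one per target
        self.computed_bearings = list(predicted_centers)
        self.observed_bearings = [None] * number_of_targets
        self.details = [None] * number_of_targets

    def estimate(self, image, include_targets=None):
        # image: 2D list of pixel intensities; include_targets: indices
        # of the targets to process (all targets if None)
        targets = (include_targets if include_targets is not None
                   else range(len(self.observed_bearings)))
        total = sum(sum(row) for row in image)
        x = sum(c * v for row in image for c, v in enumerate(row)) / total
        y = sum(r * v for r, row in enumerate(image) for v in row) / total
        for ind in targets:
            self.observed_bearings[ind] = (x, y)
            self.details[ind] = {"total_intensity": total}

# single bright pixel at row 2, column 3 -> centroid (x=3.0, y=2.0)
image = [[0.0] * 5 for _ in range(5)]
image[2][3] = 2.0
tech = MomentCenterFinding(1, [(2.5, 2.5)])
tech.estimate(image)
```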
If you implement a class that contains all of these things, then you will have successfully defined a new RelNav technique that can be registered with the RelativeOpNav class. Whether that technique actually works or not is up to you, however.
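To make the registration idea concrete, here is a generic sketch of what registering a technique could look like, based on the description above: the technique name becomes a property pointing at the technique instance, plus a companion estimation method (spelled `<name>_estimate` here as an assumption). This mirrors the described behavior only; it is not GIANT's actual implementation:

```python
# A generic registration sketch: the technique's name defines a property
# that points to the stored instance, plus a companion estimation method.
# The Navigator class and "<name>_estimate" spelling are assumptions.

class Navigator:
    _registry = {}

    @classmethod
    def register(cls, technique_cls):
        name = technique_cls.technique
        if not name.isidentifier():
            raise ValueError("technique name must be a valid identifier")
        cls._registry[name] = technique_cls
        # property returning the stored instance of this technique
        setattr(cls, name, property(lambda self, _n=name: self._instances[_n]))
        # companion method that applies the technique to an image
        def runner(self, image, _n=name):
            return self._instances[_n].estimate(image)
        setattr(cls, name + "_estimate", runner)
        return technique_cls

    def __init__(self):
        self._instances = {n: t() for n, t in self._registry.items()}

@Navigator.register
class Unresolved:
    technique = "unresolved"
    def estimate(self, image):
        return ("processed", image)

nav = Navigator()
# nav.unresolved is the Unresolved instance;
# nav.unresolved_estimate(img) applies it to an image
```

This is why the technique name must be a valid identifier: it is used directly as an attribute name on the registering class.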
For more details on defining/registering a new technique, as well as an example, refer to the estimators documentation.
Classes

| Class | Description |
| --- | --- |
|  | This is the abstract base class for all RelNav techniques in GIANT that work with the … |
|  | This enumeration provides options for the basic types of observables generated in Relative OpNav. |
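An illustrative sketch of such an enumeration is shown below, with member names inferred from the measurement types discussed above (center finding, landmark, limb, and relative position); the actual member names in GIANT are not confirmed here:

```python
from enum import Enum, auto

# Illustrative sketch of an observables-type enumeration; the member
# names are assumptions inferred from the measurement types described
# in the tables above.
class ObservablesType(Enum):
    CENTER_FINDING = auto()     # a single (x, y) bearing per target
    LANDMARK = auto()           # an nx2 array of (x, y) bearings
    LIMB = auto()               # bearings along the observed limb
    RELATIVE_POSITION = auto()  # an (x, y, z) position in the camera frame
```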