relnav_class
This module provides a subclass of the OpNav class for performing relative OpNav.
Interface Description
In GIANT, Relative OpNav refers to the process of identifying targets of interest in an image. These targets can be natural bodies, surface features on natural bodies, or even man-made objects. Typically, identifying these targets in an image produces line-of-sight or bearing measurements to the target in the image, which, when coupled with knowledge of the camera's inertial pointing (possibly from the stellar_opnav module), gives inertial bearing measurements that can be ingested in a navigation filter. A few techniques produce different types of observations, and these are discussed in more detail in the documentation for the appropriate techniques.
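The conversion from a bearing measurement in the image to an inertial line-of-sight vector mentioned above can be sketched with a simple pinhole camera model. This is a conceptual illustration only, not GIANT's camera model: the function name, the pinhole parameterization, and the example numbers are all assumptions made for this sketch.

```python
import numpy as np


def pixel_to_inertial_los(pixel, focal_length_px, principal_point, rot_camera_to_inertial):
    """Convert a pixel bearing observation into an inertial unit line-of-sight vector.

    Assumes an ideal pinhole camera with +z along the boresight and x/y
    aligned with the detector columns/rows (illustrative only).
    """
    # direction in the camera frame from the homogeneous pinhole model
    x = (pixel[0] - principal_point[0]) / focal_length_px
    y = (pixel[1] - principal_point[1]) / focal_length_px
    los_camera = np.array([x, y, 1.0])
    los_camera /= np.linalg.norm(los_camera)

    # rotate into the inertial frame using the camera's inertial pointing
    return rot_camera_to_inertial @ los_camera


# a target observed exactly at the principal point lies along the boresight
los = pixel_to_inertial_los((512.0, 512.0), 1000.0, (512.0, 512.0), np.eye(3))
```

In practice the rotation matrix would come from the attitude solution (for example, from stellar OpNav), and a real camera model would also account for distortion.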
The RelativeOpNav class is the primary interface for performing relative OpNav in GIANT, and in general is what the user will interact with to process images. It provides direct access to all of the estimators for the different types of RelNav so that their settings can be edited, and additionally provides {technique}_estimate methods which process each requested image by updating the scene to reflect the a priori conditions at the image time, applying the specified technique to the image to extract the observables, and then storing the extracted observations and details about those observations for you. The typical naming scheme for these methods is {module_name}_estimate, for instance cross_correlation_estimate. In addition to these methods, this class also provides an auto_estimate() method, which attempts to automatically pick the appropriate RelNav technique for each image based on the type of target being identified and the a priori knowledge of the apparent diameter of the object in the image. Specifically, this method chooses from the 5 most typical RelNav techniques: unresolved, moment_algorithm, cross_correlation, limb_matching, and sfn. More details on how this decision is made are provided in the auto_estimate() documentation. For typical users, this method is all they will need for doing RelNav; however, the lower-level methods for forcing a particular technique are provided for more advanced analysis.
For example, we could do something like the following (from the directory containing sample_data as generated by a call to generate_sample_data):
>>> import pickle
>>> from giant.relative_opnav import RelativeOpNav
>>> with open('sample_data/camera.pickle', 'rb') as input_file:
... camera = pickle.load(input_file)
>>> with open('sample_data/kdtree.pickle', 'rb') as input_file:
... target = pickle.load(input_file)
>>> from giant.scripts.generate_sample_data import (target_position, target_orientation,
... sun_position, sun_orientation)
>>> from giant.ray_tracer.scene import Scene, SceneObject
>>> from giant.ray_tracer.shapes import Point
>>> camera.only_short_on()
>>> scene = Scene(camera, SceneObject(target, position_function=target_position, orientation_function=target_orientation, name='Itokawa'),
... light_obj=SceneObject(Point, position_function=sun_position, orientation_function=sun_orientation, name='Sun'))
>>> my_relnav = RelativeOpNav(camera, scene)
>>> my_relnav.auto_estimate()
This generates RelNav observables for each short exposure image in the camera.
Extending RelativeOpNav With New Techniques
In addition to the built-in techniques from GIANT, it is possible to extend the RelativeOpNav object with new techniques using the RelativeOpNav.register() class method/decorator. Using this method to register a new technique creates all the typical attributes/methods for the technique in the RelativeOpNav class without having to subclass it, including {technique}_estimate and {technique}_details, replacing {technique} with the name of the technique. It will also package the results for you into the appropriate attribute (center_finding_results, relative_position_results, landmark_results, limb_results, and saved_templates) depending on the type of observables generated.
Therefore, to register a new technique we could do something like
@RelativeOpNav.register
class MyNewTechnique(RelNavEstimator):

    technique = "my_new_technique"
    observable_type = [RelNavObservablesType.CENTER_FINDING, RelNavObservablesType.RELATIVE_POSITION]
    generates_templates = False

    def estimate(self, image, include_targets=None):
        # do the thing
        self.computed_bearings = [np.zeros(2) for _ in range(len(self.scene.target_objs))]
        self.computed_positions = [np.zeros(3) for _ in range(len(self.scene.target_objs))]
        self.observed_bearings = [np.zeros(2) for _ in range(len(self.scene.target_objs))]
        self.observed_positions = [np.zeros(3) for _ in range(len(self.scene.target_objs))]
        self.details = [{'status': "we did the thing!"} for _ in range(len(self.scene.target_objs))]
which would register MyNewTechnique under the name my_new_technique so that we could do something like relnav.my_new_technique_estimate(), where relnav is an instance of RelativeOpNav. Note that the registration must be done before creating an instance of RelativeOpNav. Therefore, the code containing the above example would need to be imported before initializing the RelativeOpNav instance.
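The register-before-instantiation requirement follows from how this kind of class-method/decorator registration pattern typically works: the decorator attaches a new {technique}_estimate method to the class at import time. The following is a minimal, self-contained sketch of the pattern in plain Python; it is NOT GIANT's actual implementation, and the Navigator and MyTechnique names are invented for illustration.

```python
class Navigator:
    """Stand-in for a class that supports decorator-based technique registration."""

    @classmethod
    def register(cls, estimator_cls):
        name = estimator_cls.technique

        def estimate_method(self, *args, **kwargs):
            # construct the registered estimator and delegate to it
            return estimator_cls().estimate(*args, **kwargs)

        # attach {technique}_estimate to the class itself, so the method
        # exists on every instance created after registration
        setattr(cls, f"{name}_estimate", estimate_method)
        return estimator_cls


@Navigator.register
class MyTechnique:
    technique = "my_technique"

    def estimate(self, image):
        return f"estimated {image}"


nav = Navigator()  # created *after* registration, so the method is available
result = nav.my_technique_estimate("img_001")
```

Because setattr mutates the class object, instances created before the decorated class is imported would still gain the method in this sketch; the stricter before-instantiation rule stated above is GIANT's documented requirement and should be followed.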
For a more general description of the steps needed to perform relative navigation, refer to the relative_opnav documentation. For a more in-depth examination of the RelativeOpNav class, continue through the following class documentation. For more details on adding new techniques to the RelativeOpNav class, see the relnav_estimators documentation.
Classes
RelativeOpNav – This class serves as the main user interface for performing relative optical navigation.
Constants
The numpy structured datatype used to package most RelNav observables.
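To illustrate what "packaging observables in a numpy structured datatype" means in general, here is a small self-contained sketch. The field names and layout below are invented for illustration; GIANT's actual dtype and its fields may differ.

```python
import numpy as np

# Illustrative only: a structured dtype resembling how per-target RelNav
# observables might be packaged (NOT GIANT's actual dtype definition).
observable_dtype = np.dtype([
    ('target', 'U32'),                   # target name
    ('observed_bearing', 'f8', (2,)),    # measured (x, y) location in pixels
    ('predicted_bearing', 'f8', (2,)),   # a priori predicted (x, y) location
])

# one record per target; zeros() initializes numeric fields to 0.0
obs = np.zeros(1, dtype=observable_dtype)
obs[0]['target'] = 'Itokawa'
obs[0]['observed_bearing'] = (100.5, 200.25)

# structured arrays allow field-wise arithmetic, e.g. measurement residuals
residual = obs[0]['observed_bearing'] - obs[0]['predicted_bearing']
```

Structured dtypes let a single flat array carry heterogeneous per-observation fields, which is convenient for dumping results to disk or feeding them to a filter.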