# Copyright 2021 United States Government as represented by the Administrator of the National Aeronautics and Space
# Administration. No copyright is claimed in the United States under Title 17, U.S. Code. All Other Rights Reserved.
"""
This module provides a subclass of the :class:`.OpNav` class for performing relative OpNav.
Interface Description
---------------------
In GIANT, Relative OpNav refers to the process of identifying targets of interest in an image. These targets can be
natural bodies, surface features on natural bodies, or even man-made objects. Typically, the result of identifying
these targets in images is line-of-sight or bearing measurements to the target in the image, which, when coupled with
the knowledge of the camera inertial pointing (possibly from the :mod:`.stellar_opnav` module) gives inertial bearing
measurements that can be ingested in a navigation filter. A few techniques produce different types of
observations; these are discussed in more detail in the documentation for the appropriate techniques.
The :class:`RelativeOpNav` class is the primary interface for performing relative OpNav in GIANT, and in general is what
the user will interact with to process images. It provides direct access to all of the estimators for the different
types of RelNav so that their settings can be edited, and additionally provides ``{technique}_estimate`` methods which process each
requested image by updating the :attr:`~.RelativeOpNav.scene` to reflect the **a priori** conditions at the image time,
applying the specified technique to the image to extract the observables, and then storing the extracted observations
and details about those observations for you. The typical naming scheme for these methods is ``{module_name}_estimate``, for
instance ``cross_correlation_estimate``. In addition to these methods, this class also provides an :meth:`auto_estimate`
method, which attempts to automatically pick the appropriate RelNav technique to use for each image based on the type of
target being identified and the **a priori** knowledge of the apparent diameter of the object in the image.
Specifically, this method chooses from the 5 most typical RelNav techniques, :mod:`.unresolved`,
:mod:`.moment_algorithm`, :mod:`.cross_correlation`, :mod:`.limb_matching`, and :mod:`.sfn`. More details on how this
decision is made are provided in the :meth:`.auto_estimate` documentation. For typical users, this method is all that
they will need for doing RelNav; however, the lower-level methods that force the choice of technique are provided for more
advanced analysis.
For example, we could do something like the following (from the directory containing ``sample_data`` as generated by a
call to :mod:`.generate_sample_data`):
>>> import pickle
>>> from giant.relative_opnav import RelativeOpNav
>>> with open('sample_data/camera.pickle', 'rb') as input_file:
...     camera = pickle.load(input_file)
>>> with open('sample_data/kdtree.pickle', 'rb') as input_file:
...     target = pickle.load(input_file)
>>> from giant.scripts.generate_sample_data import (target_position, target_orientation,
...                                                 sun_position, sun_orientation)
>>> from giant.ray_tracer.scene import Scene, SceneObject
>>> from giant.ray_tracer.shapes import Point
>>> camera.only_short_on()
>>> scene = Scene(camera, SceneObject(target, position_function=target_position,
...                                   orientation_function=target_orientation, name='Itokawa'),
...               light_obj=SceneObject(Point, position_function=sun_position,
...                                     orientation_function=sun_orientation, name='Sun'))
>>> my_relnav = RelativeOpNav(camera, scene)
>>> my_relnav.auto_estimate()
to generate RelNav observables for each short-exposure image in the camera.
Extending RelativeOpNav With New Techniques
-------------------------------------------
In addition to the built-in techniques from GIANT, it is possible to extend the :class:`.RelativeOpNav` object with new
techniques using the :meth:`.RelativeOpNav.register` class method/decorator. Using this method to register a new
technique creates all of the typical attributes/methods for the technique in the :class:`.RelativeOpNav` class without
having to subclass it, including ``{technique}_estimate`` and ``{technique}_details``, where ``{technique}`` is replaced
with the name of the technique. It will also package the results for you into the appropriate attribute
(:attr:`.center_finding_results`, :attr:`.relative_position_results`, :attr:`.landmark_results`, :attr:`.limb_results`,
or :attr:`.saved_templates`) depending on the type of observables generated.
Therefore, to register a new technique we could do something like
.. code::

    @RelativeOpNav.register
    class MyNewTechnique(RelNavEstimator):

        technique = "my_new_technique"
        observable_type = [RelNavObservablesType.CENTER_FINDING, RelNavObservablesType.RELATIVE_POSITION]
        generates_templates = False

        def estimate(self, image, include_targets=None):
            # do the thing
            self.computed_bearings = [np.zeros(2) for _ in range(len(self.scene.target_objs))]
            self.computed_positions = [np.zeros(3) for _ in range(len(self.scene.target_objs))]
            self.observed_bearings = [np.zeros(2) for _ in range(len(self.scene.target_objs))]
            self.observed_positions = [np.zeros(3) for _ in range(len(self.scene.target_objs))]
            self.details = [{'status': "we did the thing!"} for _ in range(len(self.scene.target_objs))]
which would register ``MyNewTechnique`` to name ``my_new_technique`` so that we could do something like
``relnav.my_new_technique_estimate()`` where ``relnav`` is an instance of :class:`RelativeOpNav`. Note that the
registration must be done before creating an instance of :class:`RelativeOpNav`. Therefore, the code containing the
above example would need to be imported before initializing the :class:`RelativeOpNav` instance.
For a more general description of the steps needed to perform relative navigation, refer to the :mod:`.relative_opnav`
documentation. For a more in-depth examination of the :class:`RelativeOpNav` class, continue through the following
class documentation. For more details on adding new techniques to the :class:`RelativeOpNav` class, see the
:mod:`.relnav_estimators` documentation.
"""
import time
from copy import deepcopy
from warnings import warn
from functools import partialmethod
from typing import Optional, List, Type, Iterable, Tuple, Any, Dict, Union
import numpy as np
from giant.opnav_class import OpNav
from giant.camera import Camera
from giant.image_processing import ImageProcessing
from giant.ray_tracer.scene import Scene, SceneObject
from giant.ray_tracer.shapes import Ellipsoid
from giant.image import OpNavImage
from giant._typing import Real, NONEARRAY, PATH, ARRAY_LIKE_2D
from giant.relative_opnav.estimators import RelNavObservablesType, RelNavEstimator
from giant.relative_opnav.estimators.limb_matching import LimbMatching
from giant.relative_opnav.estimators.ellipse_matching import EllipseMatching
from giant.relative_opnav.estimators.moment_algorithm import MomentAlgorithm
from giant.relative_opnav.estimators.unresolved import UnresolvedCenterFinding
from giant.relative_opnav.estimators.cross_correlation import XCorrCenterFinding
from giant.relative_opnav.estimators.sfn import FeatureCatalogue
from giant.relative_opnav.estimators.sfn import SurfaceFeatureNavigation
RESULTS_DTYPE = np.dtype([('predicted', np.float64, (3,)), ('measured', np.float64, (3,)), ('type', 'U3'),
                          ('observation date', 'datetime64[us]'), ('landmark id', np.int64),
                          ('target position', np.float64, (3,)), ('target name', 'U64'),
                          ('target body', 'U64')])
"""
The numpy structured datatype used to package most RelNav observables.
For an overview of how structured data types work in numpy, refer to https://numpy.org/doc/stable/user/basics.rec.html
The following table describes the typical purpose of each field. Occasionally these conventions may not be followed,
in which case the deviation will be clearly noted in the documentation for the RelNav type.
================ ================ ======================================================================================
Field Type Description
================ ================ ======================================================================================
predicted 3 element double The predicted observable based on the a priori state knowledge. For most bearing type
measurements, only the first 2 components of the array are used and are (x, y) or
(column, row) in units of pixels. For most vector measurements (3DP) all 3 components
are used and are (x, y, z) in kilometers.
measured 3 element double The measured observable from the image. For most bearing type
measurements, only the first 2 components of the array are used and are (x, y) or
(column, row) in units of pixels. For most vector measurements (3DP) all 3 components
are used and are (x, y, z) in kilometers.
type 3 characters The type of the measurement. Options typically are ``'cof'`` to indicate a bearing
measurement to the center-of-figure of the target, ``'lim'`` to indicate a bearing
measurement to a point on the limb of the target, ``'lmk'`` to indicate a bearing
measurement to a surface feature on the target, ``'3dp'`` to indicate a relative
position measurement from the center of figure of the target to the camera, and
``'con'`` to indicate an image constraint measurement where the same feature is
observed in 2 different images.
observation date datetime The date/time of the observation. This is the same as :attr:`.Image.observation_date`
but is included here for convenience (so you don't need the full image to know when
the observation was taken).
landmark id integer An identifier for the landmark that was targeted. For measurements to the center of
figure of the target this is always 0. For measurements to landmarks and limbs, this
will be the index into the list of targets. For image constraint measurements, this
will be the key to use to pair the observations together.
target position 3 element double The location of the target in the target-fixed frame. For center of figure
measurements this will always be zero. For landmark and limb measurements it will be
the target-fixed location in kilometers. For image constraint measurements, it will
generally be zero, unless the constraint is generated from a known feature in which
case it will be the target-fixed location in kilometers.
target name str A string giving the name of the target. For center of figure measurements this
will be ``'{} cof'`` where {} is replaced with the name of the target from the
:attr:`.SceneObject.name` attribute. For limb observations, it will be
``'LIMB{:.4g}|{:.4g}|{:.4g}'`` where the 3 ``{:.4g}`` will be replaced with the target-fixed
location of the limb. For landmark observations, it will be the name of the landmark
observed from the :attr:`.SurfaceFeature.name` attribute. For image constraint
observations it will be ``'CONST{}'`` where ``'{}'`` will be replaced by the key
number for the constraint, unless the constraint was generated from a known feature,
in which case it will be the feature name.
target body str The name of the body hosting the target, retrieved from the :attr:`.SceneObject.name`
attribute.
================ ================ ======================================================================================
"""
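As a concrete illustration of this structured dtype, the following standalone numpy sketch recreates it, fills in a single center-of-figure observation, and uses the zero-length-string check to find which entries have been populated. The variable names and values here are illustrative only, not part of GIANT.

```python
import numpy as np

# recreate the structured dtype documented above (same fields as RESULTS_DTYPE)
results_dtype = np.dtype([('predicted', np.float64, (3,)), ('measured', np.float64, (3,)),
                          ('type', 'U3'), ('observation date', 'datetime64[us]'),
                          ('landmark id', np.int64), ('target position', np.float64, (3,)),
                          ('target name', 'U64'), ('target body', 'U64')])

# a 2x1 array (2 images, 1 target), zero-initialized as RelativeOpNav does
results = np.zeros((2, 1), dtype=results_dtype)

# fill one entry as a center-of-figure bearing measurement: only the first two
# components of the bearing vectors are used, as (column, row) in pixels
results[0, 0]['predicted'][:2] = [250.5, 300.25]
results[0, 0]['measured'][:2] = [251.0, 299.75]
results[0, 0]['type'] = 'cof'
results[0, 0]['target name'] = 'Itokawa cof'
results[0, 0]['target body'] = 'Itokawa'

# an entry is still empty if all of its string columns have zero length
filled = np.char.str_len(results['type']) > 0
print(filled.sum())  # -> 1
```

Because the bearing and vector fields share fixed 3-element subarrays, a single dtype can hold every measurement type, which is what lets the class store heterogeneous results in one array.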
class RelativeOpNav(OpNav):
"""
This class serves as the main user interface for performing relative optical navigation.
The class acts as a container for the :class:`.Camera`, :class:`.ImageProcessing`, and :class:`.Scene` instances as
well as instances of all of the registered RelNav techniques. By default the registered RelNav techniques are
:class:`.XCorrCenterFinding` to :attr:`cross_correlation`, :class:`.EllipseMatching` to :attr:`ellipse_matching`,
:class:`.LimbMatching` to :attr:`limb_matching`, :class:`.MomentAlgorithm` to :attr:`.moment_algorithm`, and
:class:`.UnresolvedCenterFinding` to :attr:`.unresolved`. Besides storing all of these objects, it handles data
transfer and collection between the different objects. Therefore, in general this class will be the exclusive
interface for doing Relative OpNav.
For each registered technique, this class provides a few useful capabilities. First, it creates a property that
returns the current instance of the class that implements the technique to make it easy to edit/modify properties.
Second, it provides a ``{technique}_estimate`` method which can be used to apply the technique to specific or all
image/target pairs. These ``_estimate`` methods also handle collecting and storing the data from the initialized
objects as well as providing the appropriate data to the objects. Finally, for each registered technique this
class provides the opportunity to pass either a pre-initialized instance of the object as a key word argument
(using ``{technique}=instance``) or the keyword arguments to use to initialize the instance (using
``{technique}_kwargs=dict``) as part of the ``__init__`` method for this class.
This class also provides a simple method for automatically determining which RelNav technique to use based on the
expected apparent diameter of a target in the image, as well as the type of the shape representing the target in the
scene. This method, :meth:`auto_estimate`, is generally sufficient for most missions doing typical
RelNav work and makes doing RelNav straightforward.
For most RelNav types, the results will be collected and stored in the :attr:`center_finding_results`,
:attr:`relative_position_results`, :attr:`landmark_results`, :attr:`limb_results` and :attr:`saved_templates`,
depending on the type of RelNav used (where each type is stored will be described in the class documentation for the
technique). In addition, each technique can store more details about what occurred in the fit to the
``{technique}_details`` attributes which are lists of lists where the outer list corresponds to the images and the
inner lists correspond to the targets in the scene. Typically these details are stored as dictionaries with
detailed key names to indicate what each value means, but they can technically be any python object. The
documentation for each technique will describe what is included in the details output.
When initializing this class, most of the initial options can be set using the ``*_kwargs`` inputs with
dictionaries specifying the keyword arguments and values. Alternatively, you can provide already initialized
instances of the objects if you want a little more control or want to use a subclass instead of the registered
class itself. You should see the documentation for the registered techniques and the :class:`.ImageProcessing`
class for more details about what settings can be specified at initialization.
It is possible to register new techniques to use with this class, which will automatically create many of the
benefits just discussed. For details on how to do this, refer to the :mod:`.relnav_class`,
:mod:`.relnav_estimators`, and :meth:`register` documentation for details.
"""
_registered_techniques = {"cross_correlation": XCorrCenterFinding,
"ellipse_matching": EllipseMatching,
"limb_matching": LimbMatching,
"moment_algorithm": MomentAlgorithm,
"unresolved": UnresolvedCenterFinding} # type: Dict[str, Type[RelNavEstimator]]
"""
This dictionary contains all registered techniques with the RelativeOpNav class.
The dictionary maps technique name to the class that implements the techniques, and is updated by the
:meth:`register` method/decorator. A user will not typically interact with this directly.
"""
_builtins = ["cross_correlation",
"ellipse_matching",
"limb_matching",
"moment_algorithm",
"unresolved"] # type: List[str]
"""
The list contains the built in techniques that are pre-registered with the RelativeOpNav class.
It is included simply to check what has already been initialized so that we don't duplicate effort in the init
method. While these could all be treated as a typical registered technique, they are handled specially for tab
completion and static type checking purposes.
"""
def __init__(self, camera: Camera, scene: Scene, extended_body_cutoff: Real = 3,
save_templates: bool = False,
image_processing: Optional[ImageProcessing] = None, image_processing_kwargs: Optional[dict] = None,
cross_correlation: Optional[XCorrCenterFinding] = None,
cross_correlation_kwargs: Optional[dict] = None,
unresolved: Optional[UnresolvedCenterFinding] = None, unresolved_kwargs: Optional[dict] = None,
ellipse_matching: Optional[EllipseMatching] = None, ellipse_matching_kwargs: Optional[dict] = None,
limb_matching: Optional[LimbMatching] = None, limb_matching_kwargs: Optional[dict] = None,
moment_algorithm: Optional[MomentAlgorithm] = None, moment_algorithm_kwargs: Optional[dict] = None,
sfn: Optional[SurfaceFeatureNavigation] = None, sfn_kwargs: Optional[dict] = None,
**kwargs):
"""
:param camera: The :class:`.Camera` containing the camera model and images to be analyzed
:param scene: The :class:`.Scene` describing the a priori knowledge of the relative state between the camera and
the targets
:param extended_body_cutoff: The apparent diameter threshold in pixels at which :meth:`auto_estimate` will
switch from using unresolved techniques to using resolved techniques for
extracting observables from the images.
:param save_templates: A flag specifying whether to save the templates generated for cross-correlation based
techniques to the :attr:`saved_templates` attribute.
:param image_processing: An already initialized instance of :class:`.ImageProcessing` (or a subclass). If not
``None`` then ``image_processing_kwargs`` are ignored.
:param image_processing_kwargs: The keyword arguments to pass to the :class:`.ImageProcessing` class
constructor. These are ignored if argument ``image_processing`` is not ``None``
:param cross_correlation: An already initialized instance of :class:`.XCorrCenterFinding` (or a subclass). If
not ``None`` then ``cross_correlation_kwargs`` are ignored.
:param cross_correlation_kwargs: The keyword arguments to pass to the :class:`.XCorrCenterFinding` class
constructor. These are ignored if argument ``cross_correlation`` is not ``None``
:param unresolved: An already initialized instance of :class:`.UnresolvedCenterFinding` (or a subclass). If
not ``None`` then ``unresolved_kwargs`` are ignored.
:param unresolved_kwargs: The keyword arguments to pass to the :class:`.UnresolvedCenterFinding` class
constructor. These are ignored if argument ``unresolved`` is not ``None``
:param ellipse_matching: An already initialized instance of :class:`.EllipseMatching` (or a subclass).
If not ``None`` then ``ellipse_matching_kwargs`` are ignored.
:param ellipse_matching_kwargs: The keyword arguments to pass to the :class:`.EllipseMatching` class
constructor. These are ignored if argument ``ellipse_matching`` is not ``None``
:param limb_matching: An already initialized instance of :class:`.LimbMatching` (or a subclass). If not
``None`` then ``limb_matching_kwargs`` are ignored
:param limb_matching_kwargs: The keyword arguments to pass to the :class:`.LimbMatching` class constructor.
These are ignored if argument ``limb_matching`` is not ``None``.
:param moment_algorithm: An already initialized instance of :class:`.MomentAlgorithm` (or a subclass). If not
``None`` then ``moment_algorithm_kwargs`` are ignored.
:param moment_algorithm_kwargs: The keyword arguments to pass to the :class:`.MomentAlgorithm` class
constructor. These are ignored if argument ``moment_algorithm`` is not
``None``.
:param sfn: An already initialized instance of :class:`.SurfaceFeatureNavigation` (or a subclass). If not
``None`` then ``sfn_kwargs`` are ignored.
:param sfn_kwargs: The keyword arguments to pass to the :class:`.SurfaceFeatureNavigation` class constructor.
These are ignored if argument ``sfn`` is not ``None``.
:param kwargs: Extra arguments for other registered RelNav techniques. These should take the same form as above
(``{technique_name}={technique_instance}`` or ``{technique_name}_kwargs=dict()``). Any that are
not supplied are defaulted to ``None``.
"""
# make a call to the super class
super().__init__(camera, image_processing=image_processing, image_processing_kwargs=image_processing_kwargs)
# initialize the auto update flag to false. It will be set with the scene
self._auto_update = False
# set the scene, which also sets the _auto_update attribute
self._scene = None
self.scene = scene
# initialize the variables to store the results
self.center_finding_results = np.zeros((len(self.camera.images), len(self.scene.target_objs)),
dtype=RESULTS_DTYPE) # type: np.ndarray
"""
This array contains center finding results after a center finding technique is used for each image in the
camera.
The array is a nxm array with the :attr:`.RESULTS_DTYPE`, where n is the number of images in the camera (all
images, not just turned on) and m is the number of targets in the scene. This is initialized to a zero array.
An entry is still empty if all of its string columns have zero length. If a
method fails for a particular image/target, the array will instead be filled with NaN for the predicted column.
"""
self.relative_position_results = np.zeros((len(self.camera.images), len(self.scene.target_objs)),
dtype=RESULTS_DTYPE) # type: np.ndarray
"""
This array contains relative position results after a relative position technique is used for each image in the
camera.
The array is a nxm array with the :attr:`.RESULTS_DTYPE`, where n is the number of images in the camera (all
images, not just turned on) and m is the number of targets in the scene. This is initialized to a zero array.
An entry is still empty if all of its string columns have zero length. If a
method fails for a particular image/target, the array will instead be filled with NaN for the predicted column.
"""
self.landmark_results = [[None] * len(self.scene.target_objs)
for _ in range(len(self.camera.images))] # type: List[List[NONEARRAY]]
"""
This list of lists contains landmark results for each image/target in the scene after a landmark technique is
used.
The list of lists is nxm, where n is the number of images in the camera (all images, not just turned on) and m
is the number of targets in the scene. Each list element is initialized to ``None`` and is only filled in when
a landmark technique is applied to the image/target combination. The result after a landmark technique has been
applied will be a 1D numpy array with dtype :attr:`.RESULTS_DTYPE` with a size equal to the number of processed
landmarks in the image/target pair. Each element can (and likely will) have a different length.
"""
self.constraint_results = [[None] * len(self.scene.target_objs)
for _ in range(len(self.camera.images))] # type: List[List[NONEARRAY]]
"""
This list of lists contains image constraint results for all images/targets in the scene after an image
constraint technique is used.
The list of lists is nxm, where n is the number of images in the camera (all images, not just turned on) and m
is the number of targets in the scene. Each list element is initialized to ``None`` and is only filled in when
an image constraint technique is applied to the image/target combination. The result after an image constraint
technique has been applied will be a 1D numpy array with dtype :attr:`.RESULTS_DTYPE` with a size equal to the
number of matched features in the image to other images. The pairs can be retrieved using the ``landmark id``
column of each element. The same landmark id indicates the same landmark, even in different images. You can
also use the function :func:`.pair_constraints_from_results` which will take in this attribute and return lists
with all of the paired observations. Each element can (and likely will) have a different length.
"""
self.limb_results = [[None] * len(self.scene.target_objs)
for _ in range(len(self.camera.images))] # type: List[List[NONEARRAY]]
"""
This list of lists contains limb results for each image/target in the scene after a limb technique is
used.
The list of lists is nxm, where n is the number of images in the camera (all images, not just turned on) and m
is the number of targets in the scene. Each list element is initialized to ``None`` and is only filled in when
a limb technique is applied to the image/target combination. The result after a limb technique has been
applied will be a 1D numpy array with dtype :attr:`.RESULTS_DTYPE` with a size equal to the number of processed
limbs in the image/target pair. Each element can have a different length.
"""
self.saved_templates = [[None] * len(self.scene.target_objs) for _ in range(len(self.camera.images))]
"""
This list of lists contains the templates generated by many of the techniques for inspection.
The list of lists is nxm, where n is the number of images in the camera (all images, not just turned on) and m
is the number of targets in the scene. Each list element is initialized to ``None`` and is only filled in when
a method that generates a template is applied to the image/target pair. Each element is generally stored as
either a 2D numpy array containing the template (if doing center finding), or as a list of numpy arrays
containing the templates for each landmark (if doing landmark navigation)
"""
self.save_templates = save_templates # type: bool
"""
This flag specifies whether to save rendered templates from techniques that rely on cross-correlation.
While it can be nice to have the templates, especially for generating summary displays or trying to investigate
whether results are reasonable visually, they can take up a lot of memory, so be sure to consider this before
turning this option on.
"""
self.extended_body_cutoff = float(extended_body_cutoff) # type: float
"""
The apparent diameter of a target, in pixels, at which to switch from using unresolved techniques to
resolved techniques for center finding.
This is only used in :meth:`auto_estimate` and is further described in that documentation.
"""
# store the classes that do the hard work
# unresolved
self._unresolved = unresolved
if self._unresolved is None:
if unresolved_kwargs is not None:
self._unresolved = UnresolvedCenterFinding(self.scene, self._camera, self._image_processing,
**unresolved_kwargs)
else:
self._unresolved = UnresolvedCenterFinding(self.scene, self._camera, self._image_processing)
self.unresolved_details = [[None] * len(self.scene.target_objs) for _ in range(len(self.camera.images))]
"""
This attribute stores details from the :mod:`.unresolved` technique for each image/target pair that has
been processed.
The details are stored as a list of lists of objects (typically dictionaries), where each element of the outer list
corresponds to the same element number in the :attr:`.Camera.images` list and each element of the inner list
corresponds to the same element number in the :attr:`.Scene.target_objs` list.
If an image/target pair has not been processed by the :mod:`.unresolved` technique then the corresponding
element will still be set to ``None``.
For a description of what the provided details include, see the :attr:`.UnresolvedCenterFinding.details`
documentation.
"""
# cross correlation
self._cross_correlation = cross_correlation
if self._cross_correlation is None:
if cross_correlation_kwargs is not None:
self._cross_correlation = XCorrCenterFinding(self.scene, self._camera, self._image_processing,
**cross_correlation_kwargs)
else:
self._cross_correlation = XCorrCenterFinding(self.scene, self._camera, self._image_processing)
self.cross_correlation_details = [[None] * len(self.scene.target_objs) for _ in range(len(self.camera.images))]
"""
This attribute stores details from the :mod:`.cross_correlation` technique for each image/target pair that has
been processed.
The details are stored as a list of lists of objects (typically dictionaries), where each element of the outer list
corresponds to the same element number in the :attr:`.Camera.images` list and each element of the inner list
corresponds to the same element number in the :attr:`.Scene.target_objs` list.
If an image/target pair has not been processed by the :mod:`.cross_correlation` technique then the corresponding
element will still be set to ``None``.
For a description of what the provided details include, see the :attr:`.XCorrCenterFinding.details`
documentation.
"""
# sfn
self._sfn = sfn
if self._sfn is None:
if sfn_kwargs is not None:
self._sfn = SurfaceFeatureNavigation(self.scene, self._camera, self._image_processing,
**sfn_kwargs)
else:
self._sfn = SurfaceFeatureNavigation(self.scene, self._camera, self._image_processing)
self.sfn_details = [[None] * len(self.scene.target_objs) for _ in range(len(self.camera.images))]
"""
This attribute stores details from the :mod:`.sfn` technique for each image/target pair that has
been processed.
The details are stored as a list of lists of objects (typically dictionaries), where each element of the outer list
corresponds to the same element number in the :attr:`.Camera.images` list and each element of the inner list
corresponds to the same element number in the :attr:`.Scene.target_objs` list.
If an image/target pair has not been processed by the :mod:`.sfn` technique then the corresponding
element will still be set to ``None``.
For a description of what the provided details include, see the :attr:`.SurfaceFeatureNavigation.details`
documentation.
"""
# limb matching
self._limb_matching = limb_matching
if self._limb_matching is None:
if limb_matching_kwargs is not None:
self._limb_matching = LimbMatching(self.scene, self._camera, self._image_processing,
**limb_matching_kwargs)
else:
self._limb_matching = LimbMatching(self.scene, self._camera, self._image_processing)
self.limb_matching_details = [[None] * len(self.scene.target_objs) for _ in range(len(self.camera.images))]
"""
This attribute stores details from the :mod:`.limb_matching` technique for each image/target pair that has
been processed.
The details are stored as a list of lists of objects (typically dictionaries), where each element of the outer list
corresponds to the same element number in the :attr:`.Camera.images` list and each element of the inner list
corresponds to the same element number in the :attr:`.Scene.target_objs` list.
If an image/target pair has not been processed by the :mod:`.limb_matching` technique then the corresponding
element will still be set to ``None``.
For a description of what the provided details include, see the :attr:`.LimbMatching.details` documentation.
"""
# ellipse matching
self._ellipse_matching = ellipse_matching
if self._ellipse_matching is None:
if ellipse_matching_kwargs is not None:
self._ellipse_matching = EllipseMatching(self.scene, self._camera, self._image_processing,
**ellipse_matching_kwargs)
else:
self._ellipse_matching = EllipseMatching(self.scene, self._camera, self._image_processing)
self.ellipse_matching_details = [[None] * len(self.scene.target_objs) for _ in range(len(self.camera.images))]
"""
This attribute stores details from the :mod:`.ellipse_matching` technique for each image/target pair that has
been processed.
The details are stored as a list of lists of objects (typically dictionaries), where each element of the outer list
corresponds to the same element number in the :attr:`.Camera.images` list and each element of the inner list
corresponds to the same element number in the :attr:`.Scene.target_objs` list.
If an image/target pair has not been processed by the :mod:`.ellipse_matching` technique then the corresponding
element will still be set to ``None``.
For a description of what the provided details include, see the :attr:`.EllipseMatching.details` documentation.
"""
# moment algorithm
self._moment_algorithm = moment_algorithm
if self._moment_algorithm is None:
if moment_algorithm_kwargs is not None:
self._moment_algorithm = MomentAlgorithm(self.scene, self._camera, self._image_processing,
**moment_algorithm_kwargs)
else:
self._moment_algorithm = MomentAlgorithm(self.scene, self._camera, self._image_processing)
self.moment_algorithm_details = [[None] * len(self.scene.target_objs) for _ in range(len(self.camera.images))]
"""
This attribute stores details from the :mod:`.moment_algorithm` technique for each image/target pair that has
been processed.
The details are stored as a list of lists of objects (typically dictionaries) where each element of the outer list
corresponds to the same element number in the :attr:`.Camera.images` list and each element of the inner list
corresponds to the same element number in the :attr:`.Scene.target_objs` list.
If an image/target pair has not been processed by the :mod:`.moment_algorithm` technique then the corresponding
element will still be set to ``None``.
For a description of what the provided details include, see the :attr:`.MomentAlgorithm.details` documentation.
"""
for technique, technique_obj in self._registered_techniques.items():
if technique in self._builtins: # we've already dealt with these so skip them
continue
internal = '_' + technique
setattr(self, internal, kwargs.get(technique))
if getattr(self, internal) is None:
technique_kwargs = kwargs.get(technique+'_kwargs')
if technique_kwargs is not None:
setattr(self, internal, technique_obj(self.scene, self._camera, self._image_processing,
**technique_kwargs))
else:
setattr(self, internal, technique_obj(self.scene, self._camera, self._image_processing))
# make the details list for this technique
details = f'{technique}_details'
setattr(self, details, [[None] * len(self.scene.target_objs) for _ in range(len(self.camera.images))])
@property
def scene(self) -> Scene:
"""
This property stores the scene describing the a priori conditions that the camera observed.
This is used to communicate both where to expect the target in the image, as well as how to render what we think
the target should look like for techniques that use cross correlation. For more details see the :mod:`.scene`
documentation.
"""
return self._scene
@scene.setter
def scene(self, val: Scene):
# check to be sure this is a Scene
if isinstance(val, Scene):
self._scene = val
self._auto_update = False
# figure out if any of the objects can be placed automatically
for t in val.target_objs:
if t.position_function is not None and t.orientation_function is not None:
self._auto_update = True
break
else:
raise ValueError("scene must be a Scene or subclass")
@property
def cross_correlation(self) -> XCorrCenterFinding:
"""
The ``XCorrCenterFinding`` instance to use when extracting center finding observables from images using
cross-correlation.
This should be an instance of the :class:`.XCorrCenterFinding` class or a subclass.
See the :class:`.XCorrCenterFinding` documentation for more details.
"""
return self._cross_correlation
@cross_correlation.setter
def cross_correlation(self, val: XCorrCenterFinding):
if not isinstance(val, XCorrCenterFinding):
warn("The cross_correlation object should probably subclass XCorrCenterFinding. "
"We'll assume you know what you're doing for now but see the XCorrCenterFinding documentation for "
"details")
self._cross_correlation = val
@property
def ellipse_matching(self) -> EllipseMatching:
"""
The ``EllipseMatching`` instance to use when extracting relative position and limb observables from images using
ellipse matching.
This should be an instance of the :class:`.EllipseMatching` class or a subclass.
See the :class:`.EllipseMatching` documentation for more details.
"""
return self._ellipse_matching
@ellipse_matching.setter
def ellipse_matching(self, val: EllipseMatching):
if not isinstance(val, EllipseMatching):
warn("The ellipse_matching object should probably subclass EllipseMatching. "
"We'll assume you know what you're doing for now but see the EllipseMatching documentation for details")
self._ellipse_matching = val
@property
def limb_matching(self) -> LimbMatching:
"""
The ``LimbMatching`` instance to use when extracting relative position and limb observables from images using
limb matching.
This should be an instance of the :class:`.LimbMatching` class or a subclass.
See the :class:`.LimbMatching` documentation for more details.
"""
return self._limb_matching
@limb_matching.setter
def limb_matching(self, val: LimbMatching):
if not isinstance(val, LimbMatching):
warn("The limb_matching object should probably subclass LimbMatching. "
"We'll assume you know what you're doing for now but see the LimbMatching documentation for details")
self._limb_matching = val
@property
def moment_algorithm(self) -> MomentAlgorithm:
"""
The ``MomentAlgorithm`` instance to use when extracting center finding observables from images using the moment
algorithm.
This should be an instance of the :class:`.MomentAlgorithm` class or a subclass.
See the :class:`.MomentAlgorithm` documentation for more details.
"""
return self._moment_algorithm
@moment_algorithm.setter
def moment_algorithm(self, val: MomentAlgorithm):
if not isinstance(val, MomentAlgorithm):
warn("The moment_algorithm object should probably subclass MomentAlgorithm. "
"We'll assume you know what you're doing for now but see the MomentAlgorithm documentation for details")
self._moment_algorithm = val
@property
def unresolved(self) -> UnresolvedCenterFinding:
"""
The ``UnresolvedCenterFinding`` instance to use when extracting center finding observables of unresolved targets
from images.
This should be an instance of the :class:`.UnresolvedCenterFinding` class or a subclass.
See the :class:`.UnresolvedCenterFinding` documentation for more details.
"""
return self._unresolved
@unresolved.setter
def unresolved(self, val: UnresolvedCenterFinding):
if not isinstance(val, UnresolvedCenterFinding):
warn("The unresolved object should probably subclass UnresolvedCenterFinding. "
"We'll assume you know what you're doing for now but see the UnresolvedCenterFinding documentation for "
"details")
self._unresolved = val
def add_images(self, data: Union[Iterable[Union[PATH, ARRAY_LIKE_2D]], PATH, ARRAY_LIKE_2D],
parse_data: bool = True, preprocessor: bool = True):
"""
This is essentially an alias to the :meth:`.Camera.add_images` method, but it also expands various lists to
account for the new number of images.
When you have already initialized a :class:`RelativeOpNav` class you should *always* use this method to add
images for consideration.
The lists that are extended by this method are:
* :attr:`center_finding_results`
* :attr:`relative_position_results`
* :attr:`limb_results`
* :attr:`landmark_results`
* :attr:`saved_templates`
* All ``{technique}_details`` attributes
See :meth:`.Camera.add_images` for a description of the valid input for ``data``
:param data: The image data to be stored in the :attr:`.images` list
:param parse_data: A flag to specify whether to attempt to parse the metadata automatically for the images
:param preprocessor: A flag to specify whether to run the preprocessor after loading an image.
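As a quick self-contained sketch of the bookkeeping (the variable names here are illustrative, not part of the
GIANT API), each per-image list simply grows by one row of ``None`` placeholders per added image, with one slot
per target:

.. code::

    n_targets = 2
    details = [[None] * n_targets]           # one row per existing image
    new_images = ['img1.fits', 'img2.fits']  # stand-ins for new images
    for _ in new_images:
        details.append([None] * n_targets)   # one placeholder per target
    print(len(details))  # now 3 rows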
"""
super().add_images(data, parse_data=parse_data, preprocessor=preprocessor)
if isinstance(data, (list, tuple)):
generator = data
n_images = len(data)
else:
generator = (lambda: (yield data))()
n_images = 1
number_of_targets = len(self.scene.target_objs)
self.center_finding_results = np.vstack([self.center_finding_results,
np.zeros((n_images, number_of_targets), dtype=RESULTS_DTYPE)])
self.relative_position_results = np.vstack([self.relative_position_results,
np.zeros((n_images, number_of_targets), dtype=RESULTS_DTYPE)])
for _ in generator:
self.limb_results.append([None] * number_of_targets)
self.landmark_results.append([None] * number_of_targets)
self.saved_templates.append([None] * number_of_targets)
for technique in self._registered_techniques.keys():
# update all the details lists
getattr(self, f'{technique}_details').append([None] * number_of_targets)
@staticmethod
def _resolve_technique_name(worker: Union[RelNavEstimator, Type[RelNavEstimator]]) -> str:
"""
This method determines the name of the technique from the worker class or an instance of the worker class
:raises ValueError: if the technique name is not an identifier
:param worker: The worker class or an instance of the worker class
:return: the name of the technique
"""
# figure out the name for this technique by checking the technique class attribute
technique = getattr(worker, 'technique', None)
if technique is None:
# if the technique attribute wasn't set then use the defining module name
technique = worker.__module__.split('.')[-1]
if not technique.isidentifier():
raise ValueError(f'Technique name {technique} is not a valid identifier')
return technique
@classmethod
def register(cls, technique_class: Type[RelNavEstimator]) -> Type[RelNavEstimator]:
"""
This class method registers a new RelNav technique with the RelativeOpNav class.
This method is intended to be used as a decorator, therefore typical use would look like
.. code::

    @RelativeOpNav.register
    class MyNewClass(RelNavEstimator):
        pass
When the module containing this code is imported, MyNewClass will be registered with the :class:`RelativeOpNav`
class. It is important to note that registration must be done before creating an instance of
:class:`RelativeOpNav`, so ensure that at some point the module with the new technique is imported.
Alternatively you can use this as a standalone function call
.. code::

    RelativeOpNav.register(MyNewClass)
before creating an instance of :class:`RelativeOpNav`, but to do this you would have had to import the module
containing the new technique anyway so the decorator method would have been fine as well.
Registering a technique does a number of things. First, it creates a new property that returns the instance
of the class that implements the technique. The property is named after either the class attribute
``technique`` of the new technique's class, if it is set, or the name of the module that defines the new technique.
Second, the new technique is registered so that when initializing the :class:`.RelativeOpNav` object, you can
specify the technique name as a keyword argument to provide a pre-initialized instance of the technique class, or
you can specify ``{technique}_kwargs`` to specify the keyword arguments to pass to the class at
initialization.
Third, a new method is created called ``{technique}_estimate`` where ``{technique}`` is replaced by the
technique name. This method, like the others named like it, will apply the technique to all images currently
turned on in the camera for all targets and store the results.
There are a few ways that this registration is controlled. The first way you can control it is by specifying a
name for the new technique as a class attribute of the class that implements the new technique. This attribute
should be called ``technique`` and should be a string.
The second control is implemented through the class attribute ``observable_type``. This attribute controls how
the ``{technique}_estimate`` method is built and should be a string, a value from
:class:`.RelNavObservablesType`, or a container of those values. These values determine how the results from the
new technique are interpreted and packaged. There are some common built-in types that can be combined together
for most RelNav observables (``'CENTER-FINDING'``, ``'LIMB'``, ``'LANDMARK'``, ``'RELATIVE-POSITION'``). Setting
these allows the :class:`.RelativeOpNav` class to automatically handle packaging the results from the
estimation, so long as you follow the conventions laid out in the :mod:`.relative_opnav.estimators`
documentation. Alternatively, you can specify this to be ``'CUSTOM'`` (``'CUSTOM'`` cannot be an element in
a container, it must be on its own), in which case the estimate method to use can be supplied using the
``relnav_handler`` class attribute, which should point to a function that takes a single argument, the
current instance of the :class:`.RelativeOpNav` class. Typically, however, if you have to write a custom
estimator method then there is minimal benefit to registering the new technique with the RelNav class.
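The core of this mechanism can be sketched with plain Python. In this self-contained toy (the class names are
illustrative and not part of GIANT), a ``property`` exposes the technique instance and a
:func:`functools.partialmethod` builds the ``{technique}_estimate`` method:

.. code::

    from functools import partialmethod

    class ToyEstimator:
        technique = 'toy'
        def estimate(self):
            return 'estimated'

    class ToyNav:
        _registered_techniques = {}

        @classmethod
        def register(cls, technique_class):
            name = technique_class.technique
            internal = '_' + name
            cls._registered_techniques[name] = technique_class
            # expose the stored instance through a property named after the technique
            setattr(cls, name, property(lambda self: getattr(self, internal)))
            # build the {technique}_estimate method from a default estimator
            setattr(cls, name + '_estimate', partialmethod(cls.default_estimator, internal))
            return technique_class

        def default_estimator(self, worker_name):
            return getattr(self, worker_name).estimate()

    ToyNav.register(ToyEstimator)
    nav = ToyNav()
    nav._toy = ToyEstimator()
    print(nav.toy_estimate())  # estimated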
All of this assumes that the class describing the new technique is built according to the template provided by
:class:`.RelNavEstimator`. For more details on building/registering a new technique refer to the
:mod:`.relative_opnav.estimators` documentation. This also assumes that the technique has not already been
registered. If the technique has already been registered either a warning will be printed to screen, or an
error raised (if the technique was registered with a different handler class first).
:raises ValueError: If the technique name is not a valid identifier
:raises ValueError: If the technique has already been registered with a different class
:raises ValueError: If a custom type is chosen but the :attr:`~.RelNavEstimator.relnav_handler` is ``None``
.. note::
This does not modify the implementation of the RelNav technique class in any way, it modifies the
:class:`RelativeOpNav` class itself. The returned class object will be exactly the same as what was input.
:param technique_class: A class representing the new technique to use built according to the outline in
:mod:`.relative_opnav.estimators`.
"""
# figure out the name of the technique
technique = cls._resolve_technique_name(technique_class)
# set the internal name for the technique by prepending an underscore
internal = '_' + technique
# check if the technique has already been registered and either error or just return
if technique in cls._registered_techniques:
current_class = cls._registered_techniques[technique]
if current_class is technique_class:
warn(f'Technique {technique} already registered')
return technique_class
else:
raise ValueError(f'Technique {technique} already defined with {current_class}. '
f'Cannot override registered classes. Please choose a new name for the '
f'technique by setting the technique attribute of the new class.')
# register the new technique in the class dictionary
cls._registered_techniques[technique] = technique_class
# define the getter/setter for the technique instance
def getter(self: cls) -> technique_class:
return getattr(self, internal)
def setter(self: cls, val: technique_class):
if not isinstance(val, technique_class):
warn(f"The {technique} object should probably subclass {technique_class.__name__}. "
f"We'll assume you know what you're doing for now but see the {technique_class.__name__} "
f"documentation for details")
setattr(self, internal, val)
# get the type of relnav this technique implements
observable_type = getattr(technique_class, 'observable_type')
if observable_type is None:
observable_type = [RelNavObservablesType.CENTER_FINDING]
elif isinstance(observable_type, str):
observable_type = [RelNavObservablesType(observable_type.upper())]
elif isinstance(observable_type, RelNavObservablesType):
observable_type = [observable_type]
else:
observable_type = list(
map(lambda x: RelNavObservablesType(x.upper()) if isinstance(x, str) else RelNavObservablesType(x),
observable_type))
# set the documentation for the property and make the property
doc = (f"The {technique_class.__name__} instance to use when extracting "
f"[{', '.join(map(str, observable_type))}] observables from images using {technique}.\n\n"
f"This should be an instance of the {technique_class.__name__} class. You can edit the class "
f"attributes using this property.")
setattr(cls, technique, property(getter, setter, doc=doc))
# check if this is a custom type, and if so handle it
if RelNavObservablesType.CUSTOM in observable_type:
if len(observable_type) > 1:
raise ValueError('observable_type must only have 1 element if it is CUSTOM')
custom_handler = getattr(technique_class, 'relnav_handler', None)
if custom_handler is None:
raise ValueError('Unable to register technique {} '
'which needs a custom handler but did not provide one'.format(technique))
setattr(cls, technique+'_estimate', custom_handler)
else:
# otherwise use the default
setattr(cls, technique+'_estimate', partialmethod(cls.default_estimator, internal, observable_type))
return technique_class
def _package_center_finding(self, worker: RelNavEstimator, image_ind: int, target_ind: int, target: SceneObject,
image: OpNavImage):
"""
This stores center finding results in the :attr:`center_finding_results` attribute from the provided worker.
:param worker: The worker that extracted the center finding observables
:param image_ind: The index of the image the observables were extracted from
:param target_ind: The index of the target the observables are of
:param target: The target the observables are of
:param image: The image the observables were extracted from
"""
# check if this wasn't processed for this target
if worker.observed_bearings[target_ind] is None:
return
self.center_finding_results[image_ind,
target_ind] = (np.hstack([worker.computed_bearings[target_ind], [0]]), # predicted
np.hstack([worker.observed_bearings[target_ind], [0]]), # measured
'cof', # type
image.observation_date, # observation date
target_ind, # landmark id
np.zeros(3, dtype=np.float64), # target position
'{} COF'.format(target.name), # target name
target.name) # target body
def _package_relative_position(self, worker: RelNavEstimator, image_ind: int, target_ind: int, target: SceneObject,
image: OpNavImage):
"""
This stores relative position results in the :attr:`relative_position_results` attribute from the provided worker.
:param worker: The worker that extracted the center finding observables
:param image_ind: The index of the image the observables were extracted from
:param target_ind: The index of the target the observables are of
:param target: The target the observables are of
:param image: The image the observables were extracted from
"""
# check if this wasn't processed for this target
if worker.observed_positions[target_ind] is None:
return
self.relative_position_results[image_ind,
target_ind] = (worker.computed_positions[target_ind], # predicted
worker.observed_positions[target_ind], # measured
'pos', # type
image.observation_date, # observation date
target_ind, # landmark id
np.zeros(3, dtype=np.float64), # target position
'{} POS'.format(target.name), # target name
target.name) # target body
def _package_limbs(self, worker: RelNavEstimator, image_ind: int, target_ind: int, target: SceneObject,
image: OpNavImage):
"""
This stores limb results in the :attr:`limb_results` attribute from the provided worker.
:param worker: The worker that extracted the center finding observables
:param image_ind: The index of the image the observables were extracted from
:param target_ind: The index of the target the observables are of
:param target: The target the observables are of
:param image: The image the observables were extracted from
"""
# check if the target was processed
if worker.observed_bearings[target_ind] is None:
return
limbs_camera: NONEARRAY = getattr(worker, 'limbs_camera', [None]*len(self.scene.target_objs))[target_ind]
if limbs_camera is None:
raise ValueError('Unable to package limb observations. The worker must provide a '
'limbs_camera attribute and fill it out when a target is processed')
# store the limb observations for each limb used in the estimation
limb_obs = []
for pred, obs, limb_position in zip(worker.computed_bearings[target_ind].T,
worker.observed_bearings[target_ind].T,
limbs_camera.T):
limb_position_body = target.orientation.matrix.T @ limb_position
limb_obs.append((np.hstack([pred, [0]]), np.hstack([obs, [0]]), 'lim', image.observation_date, 0,
limb_position_body, 'LIMB{:.4g}|{:.4g}|{:.4g}'.format(*limb_position_body), target.name))
self.limb_results[image_ind][target_ind] = np.array(limb_obs, dtype=RESULTS_DTYPE)
def _package_landmarks(self, worker: RelNavEstimator, image_ind: int, target_ind: int, target: SceneObject,
image: OpNavImage):
"""
This stores landmark results in the :attr:`landmark_results` attribute from the provided worker.
:param worker: The worker that extracted the center finding observables
:param image_ind: The index of the image the observables were extracted from
:param target_ind: The index of the target the observables are of
:param target: The target the observables are of
:param image: The image the observables were extracted from
"""
# check if this wasn't processed for this target
if worker.observed_bearings[target_ind] is None:
return
# get the identified landmarks list this way to catch if it isn't defined or wasn't filled out
visible_features = getattr(worker, 'visible_features', [None]*len(self.scene.target_objs))[target_ind]
if visible_features is None:
raise ValueError('Unable to package landmark observations. The worker must provide a '
'visible_features attribute and must set it to contain a list of '
'identified landmark indices for each processed target')
# store the landmark observations for each landmark used in the estimation
landmark_res = []
for pred, obs, landmark in zip(worker.computed_bearings[target_ind].T,
worker.observed_bearings[target_ind].T,
visible_features):
feature = target.shape.features[landmark]
landmark_res.append((np.hstack([pred, [0]]), np.hstack([obs, [0]]), 'lmk', image.observation_date,
landmark, feature.body_fixed_center, feature.name, target.name))
self.landmark_results[image_ind][target_ind] = np.array(landmark_res, dtype=RESULTS_DTYPE)
def _package_templates(self, worker: RelNavEstimator, image_ind: int, target_ind: int):
"""
This stores the templates generated by the provided worker in the :attr:`saved_templates` attribute.
:param worker: The worker that extracted the observables
:param image_ind: The index of the image the observables were extracted from
:param target_ind: The index of the target the observables are of
"""
if not worker.generates_templates:
raise ValueError("The worker doesn't generate templates which are required to package templates")
self.saved_templates[image_ind][target_ind] = deepcopy(worker.templates[target_ind])
def _package_details(self, worker: RelNavEstimator, image_ind: int, target_ind: int):
"""
This stores the details of the fit retrieved from the :attr:`.RelNavEstimator.details` property of the worker.
The details are stored in ``{worker_name}_details`` where ``{worker_name}`` is replaced with the technique name
of the worker class. The details are specific to each technique so see the corresponding documentation.
:param worker: The worker that extracted the observables
:param image_ind: The index of the image the observables were extracted from
:param target_ind: The index of the target the observables are of
"""
worker_name = self._resolve_technique_name(worker)
details_list = getattr(self, f'{worker_name}_details') # type: List[List[Optional[Any]]]
if details_list is None:
raise ValueError(f"Somehow we ended up without a list of details for {worker_name}")
details_list[image_ind][target_ind] = deepcopy(worker.details[target_ind])
def process_image(self, worker: RelNavEstimator, image_ind: int, image: OpNavImage,
observable_type: List[RelNavObservablesType], include_targets: Optional[List[bool]] = None):
"""
This is the default setup for processing an image through a RelNav technique and storing the results for a
single image.
First, the RelNav technique instance is applied to the image using the :meth:`.RelNavEstimator.estimate` method.
Then, the results from processing the image are stored depending on what types of observables were generated by
the technique, as controlled by the ``observable_type`` input.
:param worker: The RelNav technique instance that is to be applied to the image.
:param image_ind: The index of the image in the :attr:`.Camera.images` list being processed
:param image: The :class:`.OpNavImage` being processed
:param observable_type: The type of measurements that are generated by this technique as a list of
:class:`.RelNavObservablesType` values
:param include_targets: The targets to process for this image as a list of bools the same length as
:attr:`.Scene.target_objs`. If ``None`` then all targets are processed.
"""
# extract the observables
worker.estimate(image, include_targets=include_targets)
# store the observables based on what is generated
for target_ind, target in enumerate(self.scene.target_objs):
if (include_targets is not None) and (not include_targets[target_ind]):
continue
# the following are independent if statements so multiple can execute
if RelNavObservablesType.CENTER_FINDING in observable_type:
self._package_center_finding(worker, image_ind, target_ind, target, image)
if RelNavObservablesType.RELATIVE_POSITION in observable_type:
self._package_relative_position(worker, image_ind, target_ind, target, image)
if RelNavObservablesType.LIMB in observable_type:
self._package_limbs(worker, image_ind, target_ind, target, image)
if RelNavObservablesType.LANDMARK in observable_type:
self._package_landmarks(worker, image_ind, target_ind, target, image)
if self.save_templates and worker.generates_templates:
self._package_templates(worker, image_ind, target_ind)
# package the details
self._package_details(worker, image_ind, target_ind)
def default_estimator(self, worker_name: str, observable_type: List[RelNavObservablesType],
image_ind: Optional[int] = None, include_targets: Optional[List[bool]] = None):
"""
This method extracts observables from image(s) using the requested worker and stores them according to the types
of observables generated by the technique.
This method typically is not called directly by the user but is made public for documentation purposes when
registering a new RelNav technique. Typically, a :func:`.partialmethod` is created from this method where the
two positional arguments are filled in by the :meth:`register` method. This partial method is then stored
as a method ``{technique}_estimate``, where ``{technique}`` is the name of the technique that is being applied,
and it is through that method that a user generally interacts with the technique. The resulting partial method
still takes two keyword arguments, ``image_ind`` and ``include_targets``, which can be used to apply the
technique to a particular image (instead of all of the turned on images) and to control which targets in the
scene are processed using this technique.
For more details about what is happening internally in this method see the :meth:`process_image` documentation,
which is called for each image processed by this method. For more details about registering a new RelNav
technique and using this method to apply it to images, refer to the :mod:`~.relative_opnav.estimators`
documentation.
:param worker_name: The name of the technique to be applied to the images
:param observable_type: The type of observables generated by this technique as a list of
:class:`.RelNavObservablesType` objects.
:param image_ind: An index specifying which image to process or ``None`` to indicate that all turned on images
should be processed.
:param include_targets: A list of booleans specifying which targets in the scene should be processed or ``None``
to indicate that all targets in the scene should be processed
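The resulting method shape can be sketched in isolation (toy names only, assuming nothing about GIANT itself):

.. code::

    from functools import partialmethod

    class ToyNav:
        def default_estimator(self, worker_name, observable_type, image_ind=None, include_targets=None):
            return worker_name, image_ind, include_targets

        # what register() effectively creates for a technique named 'toy'
        toy_estimate = partialmethod(default_estimator, '_toy', ['CENTER-FINDING'])

    nav = ToyNav()
    print(nav.toy_estimate(image_ind=2))  # ('_toy', 2, None)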
"""
# get the actual worker
worker_inst = getattr(self, worker_name, None) # type: RelNavEstimator
# something was wrong with the name
if worker_inst is None:
raise ValueError('No technique {} found'.format(worker_name))
generator, num_images = self._prepare_generator(image_ind)
# process every image we have to
processed_image = 1
for image_index, image in generator:
print('\n-----------------------------------', flush=True)
print(f'Processing Image {processed_image} of {num_images}', flush=True)
start = time.time()
self._update_scene(image)
self.process_image(worker_inst, image_index, image, observable_type, include_targets=include_targets)
# call reset to make sure we don't cross the streams
worker_inst.reset()
print(f'Processed image {processed_image} of {num_images} in {(time.time()-start):.3f} seconds', flush=True)
processed_image += 1
def _update_scene(self, image: OpNavImage):
"""
This helper method updates the scene if possible
"""
# update the scene if possible
if self._auto_update:
if not hasattr(self.scene, 'update'):
raise AttributeError('Somehow we ended up with a scene without an update method')
self.scene.update(image)
else:
warn("This scene cannot be automatically updated, so the a priori conditions may not reflect the image time")
def _prepare_generator(self, image_ind: Optional[int]) -> Tuple[Iterable[Tuple[int, OpNavImage]], int]:
"""
This helper method simply prepares a generator/the number of images in the generator for processing images.
If ``image_ind`` is None then this returns the camera object itself, which works as a generator, along with the
sum of the :attr:`.Camera.image_mask` attribute as the number of images to process. If ``image_ind`` is an
integer, then this will return a generator that yields only the requested image (plus its index) and 1 as the
number of images to process.
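The single image case relies on a small trick: "calling" a lambda that contains a ``yield`` produces a one-shot
generator. A self-contained sketch (the image list here is a stand-in):

.. code::

    images = ['a', 'b', 'c']  # stand-ins for OpNavImages
    image_ind = 1
    # call the lambda to get a generator yielding a single (index, image) pair
    generator = (lambda: (yield image_ind, images[image_ind]))()
    print(list(generator))  # [(1, 'b')]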
:param image_ind: The image that is to be processed or ``None`` to process all turned on images in the camera
:return: The iterable that yields (image_index, image) pairs and the number of images in that iterable
"""
if image_ind is not None:
# make a generator expression that returns the image to be processed and its index.
# note that we need to "call" the lambda expression to get a generator
generator = (lambda: (yield image_ind, self.camera.images[image_ind]))()
# set the number of images to be 1
num_images = 1
else:
# just use the camera itself as the generator
generator = self.camera
# the number of images is the sum of the image mask
num_images = sum(self.camera.image_mask)
return generator, num_images
def auto_estimate(self, image_ind: Optional[int] = None, include_targets: Optional[List[bool]] = None):
r"""
This method attempts to automatically determine the best RelNav technique to use for each image/target pair
considered out of the most common RelNav techniques, :mod:`.unresolved`, :mod:`.ellipse_matching`,
:mod:`.cross_correlation`, and :mod:`.sfn`.
The decision of which technique to use is made as follows:
#. The apparent diameter of the target in pixels in the image under consideration is predicted either using the
bounding box for the target or the circumscribing sphere, if available.
#. If the predicted apparent diameter of the target is less than the :attr:`extended_body_cutoff` value then the
target/image pair is processed using the :mod:`.unresolved` RelNav technique
#. If the predicted apparent diameter of the target is greater than the :attr:`extended_body_cutoff` value then
the target/image pair is processed based on the type of object it is
- If the target is a :class:`.Ellipsoid` then the target is processed using :mod:`.ellipse_matching`
- If the target is a :class:`.FeatureCatalogue` then the target is processed using :mod:`.sfn`
- Otherwise the target is processed using :mod:`.cross_correlation`
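The selection logic reduces to a simple cascade, sketched here as a standalone function (the cutoff value and
boolean flags are illustrative stand-ins for the real apparent diameter and shape checks):

.. code::

    def pick_technique(apparent_diameter, is_ellipsoid, is_feature_catalogue, extended_body_cutoff=5.0):
        if apparent_diameter < extended_body_cutoff:
            return 'unresolved'
        if is_ellipsoid:
            return 'ellipse_matching'
        if is_feature_catalogue:
            return 'sfn'
        return 'cross_correlation'

    print(pick_technique(2.0, True, False))     # unresolved
    print(pick_technique(100.0, False, False))  # cross_correlation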
:param image_ind: An index specifying which image to process or ``None`` to indicate that all turned on images
should be processed.
:param include_targets: A list of booleans specifying which targets in the scene should be processed or ``None``
to indicate that all targets in the scene should be processed
"""
        generator, num_images = self._prepare_generator(image_ind)

        # initialize a counter for telling the user the status
        image_number = 1

        # loop through each image in the camera that is turned on and process it
        for image_index, image in generator:

            print('\n-----------------------------------', flush=True)
            print(f'Processing Image {image_number} of {num_images}', flush=True)

            start = time.time()

            # update the scene to reflect the a priori conditions at the current image time
            self._update_scene(image)

            # loop through each object in the scene and determine what type of solution to use
            for target_ind, target in enumerate(self.scene.target_objs):

                if (include_targets is not None) and not include_targets[target_ind]:
                    # don't process this target if the user told us not to
                    continue

                target_mask = [False] * len(self.scene.target_objs)
                target_mask[target_ind] = True

                apparent_diameter = target.get_apparent_diameter(self.camera.model, image=image_index,
                                                                 temperature=image.temperature)

                # check if this size is less than the extended body cutoff
                if apparent_diameter < self.extended_body_cutoff:  # if this is unresolved use unresolved
                    self.process_image(self._unresolved, image_index, image,
                                       [RelNavObservablesType.CENTER_FINDING],
                                       include_targets=target_mask)
                    self._unresolved.reset()  # reset so we don't pollute other images/targets

                elif isinstance(target.shape, Ellipsoid):  # if this is an ellipsoid use ellipse matching
                    self.process_image(self._ellipse_matching, image_index, image,
                                       [RelNavObservablesType.RELATIVE_POSITION, RelNavObservablesType.LIMB],
                                       include_targets=target_mask)
                    self._ellipse_matching.reset()  # reset so we don't pollute other images/targets

                elif isinstance(target.shape, FeatureCatalogue):  # if this is a feature catalogue try SFN
                    self.process_image(self._sfn, image_index, image,
                                       [RelNavObservablesType.RELATIVE_POSITION,
                                        RelNavObservablesType.LANDMARK],
                                       include_targets=target_mask)
                    self._sfn.reset()  # reset so we don't pollute other images/targets

                else:  # otherwise, default to cross-correlation for resolved tessellated objects
                    self.process_image(self._cross_correlation, image_index, image,
                                       [RelNavObservablesType.CENTER_FINDING],
                                       include_targets=target_mask)
                    self._cross_correlation.reset()  # reset so we don't pollute other images/targets

            print(f'Image {image_number} of {num_images} done in {(time.time() - start):.3f} seconds', flush=True)

            image_number += 1
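The technique selection performed above can be summarized as a small standalone function. This is a sketch for illustration only, not GIANT code: the class names are stand-ins for GIANT's shape classes, and the default cutoff value is arbitrary (in GIANT it comes from the :attr:`extended_body_cutoff` setting).

```python
# Standalone sketch (hypothetical names, not GIANT code) of the technique
# selection used by auto_estimate: targets whose predicted apparent diameter
# falls below the cutoff are treated as unresolved; resolved targets dispatch
# on their shape representation.

class Ellipsoid:  # stand-in for GIANT's Ellipsoid shape class
    pass

class FeatureCatalogue:  # stand-in for GIANT's FeatureCatalogue class
    pass

def choose_technique(apparent_diameter_pix: float, shape: object,
                     extended_body_cutoff: float = 5.0) -> str:
    """Return the RelNav technique name for one image/target pair."""
    if apparent_diameter_pix < extended_body_cutoff:
        return 'unresolved'          # too small to resolve any shape detail
    if isinstance(shape, Ellipsoid):
        return 'ellipse_matching'    # limb-based fit for ellipsoidal bodies
    if isinstance(shape, FeatureCatalogue):
        return 'sfn'                 # surface feature navigation
    return 'cross_correlation'       # generic template correlation

print(choose_technique(2.0, Ellipsoid()))          # unresolved
print(choose_technique(50.0, Ellipsoid()))         # ellipse_matching
print(choose_technique(50.0, FeatureCatalogue()))  # sfn
print(choose_technique(50.0, object()))            # cross_correlation
```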
    def unresolved_estimate(self, image_ind: Optional[int] = None, include_targets: Optional[List[bool]] = None):
        """
        This method applies the :mod:`.unresolved` technique to the requested image/target pairs.

        This method does nothing to check whether it makes sense to apply the unresolved technique to each pair; it
        simply attempts to, and if something fails the results are recorded appropriately. If you are looking for
        something that will attempt to automatically choose the right technique to use for each image/target pair,
        see the :meth:`auto_estimate` method instead.

        The results from applying the :mod:`.unresolved` technique to each image/target pair are stored in the
        :attr:`center_finding_results` attribute.

        This method dispatches to the :meth:`default_estimator` method, which provides more details on what is
        happening.

        :param image_ind: An index specifying which image to process, or ``None`` to indicate that all turned-on
                          images should be processed.
        :param include_targets: A list of booleans specifying which targets in the scene should be processed, or
                                ``None`` to indicate that all targets in the scene should be processed
        """

        self.default_estimator('_unresolved', [RelNavObservablesType.CENTER_FINDING], image_ind, include_targets)
    def cross_correlation_estimate(self, image_ind: Optional[int] = None,
                                   include_targets: Optional[List[bool]] = None):
        """
        This method applies the :mod:`.cross_correlation` technique to the requested image/target pairs.

        This method does nothing to check whether it makes sense to apply the cross correlation technique to each
        pair; it simply attempts to, and if something fails the results are recorded appropriately. If you are
        looking for something that will attempt to automatically choose the right technique to use for each
        image/target pair, see the :meth:`auto_estimate` method instead.

        The results from applying the :mod:`.cross_correlation` technique to each image/target pair are stored in
        the :attr:`center_finding_results` attribute. If :attr:`save_templates` is set to ``True`` then this will
        also save the rendered templates to the :attr:`saved_templates` attribute. Finally, this will save fit
        information for each image/target pair to the :attr:`cross_correlation_details` attribute.

        This method dispatches to the :meth:`default_estimator` method, which provides more details on what is
        happening.

        :param image_ind: An index specifying which image to process, or ``None`` to indicate that all turned-on
                          images should be processed.
        :param include_targets: A list of booleans specifying which targets in the scene should be processed, or
                                ``None`` to indicate that all targets in the scene should be processed
        """

        self.default_estimator('_cross_correlation', [RelNavObservablesType.CENTER_FINDING], image_ind,
                               include_targets)
    def ellipse_matching_estimate(self, image_ind: Optional[int] = None,
                                  include_targets: Optional[List[bool]] = None):
        """
        This method applies the :mod:`.ellipse_matching` technique to the requested image/target pairs.

        This method does nothing to check whether it makes sense to apply the ellipse matching technique to each
        pair; it simply attempts to, and if something fails the results are recorded appropriately. If you are
        looking for something that will attempt to automatically choose the right technique to use for each
        image/target pair, see the :meth:`auto_estimate` method instead.

        The results from applying the :mod:`.ellipse_matching` technique to each image/target pair are stored in
        the :attr:`relative_position_results` and :attr:`limb_results` attributes. Finally, this will save fit
        information for each image/target pair to the :attr:`ellipse_matching_details` attribute.

        This method dispatches to the :meth:`default_estimator` method, which provides more details on what is
        happening.

        :param image_ind: An index specifying which image to process, or ``None`` to indicate that all turned-on
                          images should be processed.
        :param include_targets: A list of booleans specifying which targets in the scene should be processed, or
                                ``None`` to indicate that all targets in the scene should be processed
        """

        self.default_estimator('_ellipse_matching',
                               [RelNavObservablesType.RELATIVE_POSITION, RelNavObservablesType.LIMB],
                               image_ind, include_targets)
    def limb_matching_estimate(self, image_ind: Optional[int] = None, include_targets: Optional[List[bool]] = None):
        """
        This method applies the :mod:`.limb_matching` technique to the requested image/target pairs.

        This method does nothing to check whether it makes sense to apply the limb matching technique to each pair;
        it simply attempts to, and if something fails the results are recorded appropriately. If you are looking
        for something that will attempt to automatically choose the right technique to use for each image/target
        pair, see the :meth:`auto_estimate` method instead.

        The results from applying the :mod:`.limb_matching` technique to each image/target pair are stored in the
        :attr:`relative_position_results` and :attr:`limb_results` attributes. Finally, this will save fit
        information for each image/target pair to the :attr:`limb_matching_details` attribute.

        This method dispatches to the :meth:`default_estimator` method, which provides more details on what is
        happening.

        :param image_ind: An index specifying which image to process, or ``None`` to indicate that all turned-on
                          images should be processed.
        :param include_targets: A list of booleans specifying which targets in the scene should be processed, or
                                ``None`` to indicate that all targets in the scene should be processed
        """

        self.default_estimator('_limb_matching',
                               [RelNavObservablesType.RELATIVE_POSITION, RelNavObservablesType.LIMB],
                               image_ind, include_targets)
    def moment_algorithm_estimate(self, image_ind: Optional[int] = None,
                                  include_targets: Optional[List[bool]] = None):
        """
        This method applies the :mod:`.moment_algorithm` technique to the requested image/target pairs.

        This method does nothing to check whether it makes sense to apply the moment algorithm technique to each
        pair; it simply attempts to, and if something fails the results are recorded appropriately. If you are
        looking for something that will attempt to automatically choose the right technique to use for each
        image/target pair, see the :meth:`auto_estimate` method instead.

        The results from applying the :mod:`.moment_algorithm` technique to each image/target pair are stored in
        the :attr:`center_finding_results` attribute. Finally, this will save fit information for each image/target
        pair to the :attr:`moment_algorithm_details` attribute.

        This method dispatches to the :meth:`default_estimator` method, which provides more details on what is
        happening.

        :param image_ind: An index specifying which image to process, or ``None`` to indicate that all turned-on
                          images should be processed.
        :param include_targets: A list of booleans specifying which targets in the scene should be processed, or
                                ``None`` to indicate that all targets in the scene should be processed
        """

        self.default_estimator('_moment_algorithm', [RelNavObservablesType.CENTER_FINDING], image_ind,
                               include_targets)
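The ``{technique}_estimate`` methods above all share one pattern: each forwards a private estimator attribute name plus the observable types that technique produces to :meth:`default_estimator`. A minimal standalone sketch of that dispatch pattern (plain strings in place of GIANT's estimator objects and ``RelNavObservablesType`` members; not GIANT code):

```python
# Minimal sketch (not GIANT code) of the dispatch pattern shared by the
# {technique}_estimate methods: a registry mapping each technique name to the
# private estimator attribute and the observable types it produces.

TECHNIQUE_DISPATCH = {
    'unresolved':        ('_unresolved',        ['CENTER_FINDING']),
    'cross_correlation': ('_cross_correlation', ['CENTER_FINDING']),
    'ellipse_matching':  ('_ellipse_matching',  ['RELATIVE_POSITION', 'LIMB']),
    'limb_matching':     ('_limb_matching',     ['RELATIVE_POSITION', 'LIMB']),
    'moment_algorithm':  ('_moment_algorithm',  ['CENTER_FINDING']),
}

def estimate(technique: str):
    """Look up the estimator attribute and observable types for a technique.

    A real implementation would then call
    ``self.default_estimator(attr, observables, image_ind, include_targets)``;
    here we just return the lookup result.
    """
    attr, observables = TECHNIQUE_DISPATCH[technique]
    return attr, observables

print(estimate('ellipse_matching'))
# ('_ellipse_matching', ['RELATIVE_POSITION', 'LIMB'])
```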