estimators

This package provides an abstract base class that defines the interface GIANT expects for Relative OpNav techniques, as well as concrete implementations of some of the most commonly used RelNav techniques.

Description of the Problem

In Relative OpNav, we are trying to extract observables of the targets we see in an image so that we can pass them to some external filter to use in estimating the relative state between the camera (the spacecraft that hosts the camera) and the observed targets. Usually this takes the form of line-of-sight/bearing measurements to a target, whether that be the center-of-figure of the target or a landmark on the surface of the target, although other observables are possible for some techniques.

Because there are many different types of RelNav techniques in the world, and more are being added frequently, we do not try to implement them all here. Rather, we provide a subset of estimators that are the ones that are most commonly used, at least in our experience. In many cases, these provided estimators will be enough to extract plenty of information from images for estimating the relative state between the camera and the observed targets.

In some cases, however, you may need a different or more advanced technique, or you may simply like to try new things. To accommodate this, we have made adding new RelNav techniques to GIANT easy, allowing you to follow a simple architecture to define a new technique and register it for use, automatically creating a lot of the boilerplate code that is shared by many RelNav techniques for you. This is done through the RelNavEstimator abstract base class, and the process is described in detail in the following section.

Adding a New RelNav Technique

As mentioned, adding a new RelNav technique to GIANT is pretty simple if you follow the architecture laid out in the RelNavEstimator abstract base class. This involves defining some class and instance attributes as well as a few methods for our new technique that GIANT expects a RelNav estimator to expose. Once we have done that (and implemented the details of our new technique), we can simply wrap the class with the RelativeOpNav.register() decorator to register the new technique and prepare it for use.

Specifically, in our new technique, we should define the following class attributes

Class Attribute

Description

technique

A string that gives the name of the technique. This should be an “identifier”, which means it should contain only letters, numbers, and the underscore character, and should not start with a number. This will be used in registering the class to define the property that points to the instance of this technique, as well as the {technique}_estimate method and {technique}_details attribute.

observable_type

A list of RelNavObservablesType values that specify what types of observables are generated by the new technique. This controls how the results from the new technique are retrieved and stored by the RelativeOpNav class.

generates_templates

A boolean flag specifying whether this technique generates templates and stores them in the templates attribute. If this is True, then RelativeOpNav may store the templates for further investigation by copying the templates attribute.

When defining the observable_type attribute, we need to be careful to list only the types of observables that our technique actually generates. If we specify that the technique generates bearing measurements (CENTER_FINDING, LIMB, LANDMARK, CONSTRAINT), we should be sure to populate the computed_bearings and observed_bearings instance attributes. Similarly, if we specify that it generates RELATIVE_POSITION measurements, we should be sure to populate the computed_positions and observed_positions instance attributes described below. If our technique generates multiple types of bearing measurements (e.g. CENTER_FINDING and LIMB), then we will unfortunately need to use the CUSTOM observables type instead to ensure that the appropriate data is retrieved.
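
For instance, the class attributes for a hypothetical technique that produces both center-finding and limb bearings (and therefore must fall back to the CUSTOM type) might look something like the following sketch. The class and technique names here are purely illustrative and not part of GIANT.

from giant.relative_opnav.estimators import RelNavEstimator, RelNavObservablesType


class CenterAndLimbExample(RelNavEstimator):  # a hypothetical technique, shown only to illustrate the attributes
    # the identifier used to generate the RelativeOpNav attribute/method names
    technique = "center_and_limb_example"
    # both center-finding and limb bearings are produced, so we must declare CUSTOM
    observable_type = [RelNavObservablesType.CUSTOM]
    # nothing is stored in the templates attribute
    generates_templates = False
    # the estimate method and the rest of the implementation are omitted from this sketch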

We should also define/use the following instance attributes

Instance Attribute

Description

image_processing

The instance of the image processing class to use when working with the images.

scene

The instance of the Scene class that defines the a priori knowledge of the location/orientation of the targets in the camera frame. When you are using your custom class with the RelativeOpNav class, you can assume that the scene has already been set up appropriately for each image.

camera

The instance of the Camera class which contains the camera model as well as the images.

computed_bearings

The attribute in which to store computed (predicted) bearing measurements as (x, y) in pixels. This is a list the length of the number of targets in the scene, and when a target is processed, it should put the bearing measurements into the appropriate index. For center finding type measurements, these will be single (x, y) pairs. For landmark/limb type measurements, these will be an nx2 array of (x, y) pairs for each landmark or feature.

observed_bearings

The attribute in which to store observed bearing measurements as (x, y) in pixels. This is a list the length of the number of targets in the scene, and when a target is processed, it should put the bearing measurements into the appropriate index. For center finding type measurements, these will be single (x, y) pairs. For landmark/limb type measurements, these will be an nx2 array of (x, y) pairs for each landmark or feature.

computed_positions

The attribute in which to store computed (predicted) position measurements as (x, y, z) in kilometers in the camera frame. This is a list the length of the number of targets in the scene, and when a target is processed, it should put the predicted position into the appropriate index.

observed_positions

The attribute in which to store observed (measured) position measurements as (x, y, z) in kilometers in the camera frame. This is a list the length of the number of targets in the scene, and when a target is processed, it should put the measured position into the appropriate index.

templates

The attribute in which templates should be stored for each target if templates are used for the technique. This is a list the length of the number of targets in the scene, and when a target is processed, it should have the template(s) generated for that target stored in the appropriate element. For center finding type techniques the templates are 2D numpy arrays. For landmark type techniques the templates are usually lists of 2D numpy arrays, where each list element corresponds to the template for the corresponding landmark.

details

This attribute can be used to store extra information about what happened when the technique was applied. This should be a list the length of the number of targets in the scene, and when a target is processed, the details should be saved to the appropriate element in the list. Usually each element takes the form of a dictionary and contains things like the uncertainty of the measured value (if known), the correlation score (if correlation was used), or other pieces of information that are not necessarily directly needed, but which may give context to a user or another program. Because this is free-form, for the most part GIANT will just copy this list where it belongs and will not actually inspect the contents. To use the contents you will either need to inspect them yourself or write custom code for them.
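
As an illustration only (GIANT does not mandate any particular keys since the contents are free-form), a details entry for a single target might look like the following sketch.

import numpy as np

# a hypothetical, free-form details entry for one target
example_details_entry = {'Correlation Score': 0.97,                # if correlation was used
                         'Uncertainty': np.diag([0.25, 0.25]),     # pixel covariance, if known
                         'Notes': 'terminator clipped by image edge'}

details = [example_details_entry]  # one element per target in the scene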

Finally, we should define (or override) the following methods

Method

Description

estimate()

This method should use the defined technique to extract observables from the image, depending on the type of the observables generated. This is also where the computed (predicted) observables should be generated and stored, and where the details and templates lists should be filled out, if applicable. This method should be capable of applying the technique to all targets in the scene, or to a specifically requested target.

reset()

This method is called after the estimate method, once all required data has been extracted and stored from the class, to prepare the class for the next image/target processing. You should use this method as an opportunity to remove any old data that you don't want to potentially carry over to a new image/target pair. Typically, you can leave this method alone as it already handles the most likely source of issues, but in some cases you may want to handle even more with it.
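
Putting these pieces together, a skeleton for a new technique might look like the following sketch. The names are illustrative, and it is assumed that the base class reset() clears the standard attribute lists, as described above.

from giant.relative_opnav.estimators import RelNavEstimator, RelNavObservablesType


class SkeletonTechnique(RelNavEstimator):  # an illustrative skeleton, not a working technique
    technique = "skeleton"
    observable_type = [RelNavObservablesType.CENTER_FINDING]
    generates_templates = False

    def estimate(self, image, include_targets=None):
        # extract the observables for each requested target here, storing the predicted locations in
        # self.computed_bearings, the measured locations in self.observed_bearings, and anything extra
        # in self.details
        raise NotImplementedError

    def reset(self):
        # let the base class clear the standard lists, then clear any technique-specific state
        super().reset()
        self._scratch_data = None  # hypothetical extra state specific to this technique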

Once you have defined these things, simply wrap the class with the RelativeOpNav.register() decorator (or call the decorator on the uninitialized class object) and then you will be good to go.

As an example, let's build a new RelNav technique that uses image moments to compute the center-of-figure of a target in an image (note that a moment based algorithm already exists in moment_algorithm and this will be a much simpler technique). First, we need to import the RelNavEstimator and RelNavObservablesType classes. We also need to import the RelativeOpNav class so that we can register our new technique.

from giant.relative_opnav.estimators import RelNavEstimator, RelNavObservablesType
from giant.relative_opnav.relnav_class import RelativeOpNav
from giant.point_spread_functions import Moment
import numpy as np  # we need numpy for the geometry and centroiding computations below
import cv2  # we need cv2 for the connected-components statistic constants used below

Now, we can define our class and the class/instance attributes we will use

@RelativeOpNav.register  # use the register decorator to register this new technique
# subclass the relnav estimator to get some of the concrete implementations it provides
class MomentCenterFindingSimple(RelNavEstimator):
    technique = "simple_moments"  # the name that will be used to identify the technique in the RelativeOpNav class
    observable_type = [RelNavObservablesType.CENTER_FINDING]  # we only generate center finding observables
    generates_templates = False  # we don't generate templates for this technique.

    def __init__(self, scene, camera, image_processing, use_apparent_area=True,
                 apparent_area_margin_of_safety=2, search_range=None):
        # let the super class prep most of our instance attributes
        super().__init__(scene, camera, image_processing)

        # store and or apply any extra options here
        # this flag tells us to use the apparent diameter to predict the size
        self.use_apparent_area = use_apparent_area

        # this fudge factor is used to account for the fact that things aren't spherical and don't project to
        # circles in most cases even if they are.
        self.apparent_area_margin_of_safety = apparent_area_margin_of_safety

        # specify the search range for trying to pair the identified segments with the a priori projected
        # locations of the targets
        self.search_range = search_range
        if self.search_range is None:
            self.search_range = max(self.camera.model.n_rows, self.camera.model.n_cols)

Now, we need to continue our class definition by defining the estimate method. Since this generates center finding observables we need to be sure to populate both the computed_bearings and observed_bearings attributes, as well as the details attribute. The details of what exactly we’re doing for the technique here are out of scope and are further addressed in the moment_algorithm documentation.

def estimate(self, image, include_targets=None):

    image_processing_original_segment_area = self.image_processing.minimum_segment_area
    # use the phase angle to predict the minimum size of a blob to expect assuming a spherical target.
    # because many targets aren't spherical we give a factor of safety setting the minimum size to half the
    # predicted area for each target.
    if self.use_apparent_area:
        minimum_area = None
        # do it for each target and take the minimum one
        for target_ind, target in self.target_generator(include_targets):
            # compute the phase angle
            phase = self.scene.phase_angle(target_ind)

            # predict the apparent diameter in pixels
            apparent_diameter = target.get_apparent_diameter(self.camera.model, temperature=image.temperature)

            apparent_radius = apparent_diameter/2

            # compute the predicted area in pixels assuming a projected circle for the illuminated limb and an
            # ellipse for the terminator
            if phase <= np.pi/2:
                predicted_area = np.pi*apparent_radius**2/2*(1+np.cos(phase))
            else:
                predicted_area = np.pi*apparent_radius**2/2*(1-np.cos(phase))

            # apply the margin of safety
            predicted_area /= self.apparent_area_margin_of_safety

            # store it if it is smaller
            if minimum_area is None:
                minimum_area = predicted_area
            else:
                minimum_area = min(predicted_area, minimum_area)

        # set the minimum segment area for image processing
        self.image_processing.minimum_segment_area = minimum_area

    # segment our image using Otsu/connected components
    segments, foreground, segment_stats, segment_centroids = self.image_processing.segment_image(image)

    # process each target using the concrete target_generator method from the super class
    for target_ind, target in self.target_generator(include_targets):

        # predict the location of the center of figure by projecting the target location onto the image plane
        # we assume that the scene has been updated to reflect the image time correctly and everything is already
        # in the camera frame.
        self.computed_bearings[target_ind] = self.camera.model.project_onto_image(target.position.ravel(),
                                                                                  temperature=image.temperature)

        # figure out which segment is closest
        closest_ind = None
        closest_distance = None
        for segment_ind, centroid in enumerate(segment_centroids):

            distance = np.linalg.norm(centroid - self.computed_bearings[target_ind])

            if closest_ind is None:
                if distance < self.search_range:
                    closest_ind = segment_ind
                    closest_distance = distance
            else:
                if distance < closest_distance:
                    closest_ind = segment_ind
                    closest_distance = distance

        # if nothing met the tolerance throw an error
        if closest_ind is None:
            raise ValueError(f"No segments were found within the search range.  for target {target_ind}"
                             f"Please try adjusting your parameters and try again")

        # now, get the observed centroid
        # extract the region around the blob from the found segment.  Include some extra pixels to capture things
        # like the terminator.  Use a fudge factor of 1 tenth of the sqrt of the area with a minimum of 10
        fudge_factor = max(np.sqrt(segment_stats[closest_ind, cv2.CC_STAT_AREA])*0.1, 10)
        top_left = np.floor(segment_stats[closest_ind, [cv2.CC_STAT_TOP, cv2.CC_STAT_LEFT]] -
                            fudge_factor).astype(int)
        bottom_right = np.ceil(top_left + segment_stats[closest_ind, [cv2.CC_STAT_HEIGHT, cv2.CC_STAT_WIDTH]] +
                               2*fudge_factor).astype(int)

        use_image = np.zeros(image.shape, dtype=bool)
        use_image[top_left[0]:bottom_right[0], top_left[1]:bottom_right[1]] = \
            foreground[top_left[0]:bottom_right[0], top_left[1]:bottom_right[1]]

        # get the x/y pixels where we need to include in the centroiding
        y, x = np.where(use_image)

        # do the moment fit using the Moment "PSF"
        fit = Moment.fit(x.astype(np.float64), y.astype(np.float64), image[use_image].astype(np.float64))

        # store the fit in case people want to inspect it more closely
        self.details[target_ind] = {'fit object': fit,
                                    'minimum segment area': self.image_processing.minimum_segment_area}

        # store the location of the centroid (typically we would phase correct this but not in this example)
        self.observed_bearings[target_ind] = fit.centroid

    # reset the image processing minimum segment area in case we messed with it
    self.image_processing.minimum_segment_area = image_processing_original_segment_area

And that's it, we've now implemented a basic moment algorithm for RelNav. This new technique can be accessed from the RelativeOpNav class through the simple_moments attribute and applied to images through the simple_moments_estimate method. We can also configure our new technique directly through RelativeOpNav by supplying simple_moments_kwargs={'use_apparent_area': True, 'apparent_area_margin_of_safety': 1.5, 'search_range': 200} as a keyword argument to the RelativeOpNav initialization. Finally, we can retrieve the details about our moment algorithm fit through the simple_moments_details attribute.
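
For example, assuming camera and scene are already-constructed Camera and Scene instances (and that the RelativeOpNav constructor takes them as its first two arguments), usage might look something like the following sketch.

from giant.relative_opnav.relnav_class import RelativeOpNav

# the keyword/attribute names below follow from technique = "simple_moments"
relnav = RelativeOpNav(camera, scene,
                       simple_moments_kwargs={'use_apparent_area': True,
                                              'apparent_area_margin_of_safety': 1.5,
                                              'search_range': 200})

relnav.simple_moments_estimate()  # apply our new technique to the images

# inspect the free-form details our technique recorded for each image/target pair
print(relnav.simple_moments_details)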

Adding a New Technique With a Custom Handler

Occasionally, you may need to implement a new RelNav type that doesn't work like the others. Perhaps it doesn't generate bearing, position, or constraint measurements but something else entirely. Or perhaps it needs more than just the image, camera, and scene to extract the measurements from the image. If this is the case, then you have two options for proceeding.

The first option is to define a custom handler for the RelativeOpNav class to use in place of the default_estimator(). This custom handler should be a function that accepts at minimum the RelNav instance as the first argument (essentially it should be a method for the RelativeOpNav class, but defined outside of the class definition). In addition, it typically should have two optional arguments, image_ind and include_targets, which can be used to control which image/target pairs the technique is applied to, although that is strictly a convention and not a requirement. Inside the function, you should handle preparing the scene and calling the estimate method of your technique for each image requested (note that you can use the methods/attributes that exist in the RelativeOpNav class to help you do some common things). You should also handle storing the results for each image/target pair that is processed. You may need to do some extra work at the beginning to check whether an instance attribute already exists on the RelNav instance, using getattr and setattr. Finally, you should ensure that the RelNavEstimator.observable_type class attribute is set to only CUSTOM. This should then allow you to register the technique with the RelativeOpNav class and use it just like any other registered technique.
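
A minimal sketch of such a handler is shown below. It assumes the technique was registered under the name my_custom; the my_custom_results storage attribute is invented here for illustration, and the camera iteration and scene update calls are assumed to follow the usual GIANT conventions.

def my_custom_handler(self, image_ind=None, include_targets=None):
    # self is the RelativeOpNav instance this handler is attached to.
    # create a hypothetical storage attribute on the RelNav instance the first time through
    if getattr(self, "my_custom_results", None) is None:
        setattr(self, "my_custom_results", {})

    # loop over the turned-on images in the camera (assumed to yield (index, image) pairs)
    for ind, image in self.camera:
        if image_ind is not None and ind != image_ind:
            continue

        # put the scene at the image time (assumed to be handled by the Scene class)
        self.scene.update(image)

        # apply the registered technique instance to this image
        self.my_custom.estimate(image, include_targets=include_targets)

        # copy out whatever free-form results the technique produced, then reset it for the next image
        self.my_custom_results[ind] = list(self.my_custom.details)
        self.my_custom.reset()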

While that is one option, it may not be the best choice in a special case like this. The primary benefit of registering a new technique with the RelativeOpNav class is that it generates a lot of the boilerplate code for preparing the scene and storing the results for you, but by writing your own handler you largely bypass this benefit. Therefore, it may be better to just use your new class directly in a script instead of registering it, unless you are looking to share it with others and want to ensure a standard interface. In this case you can use the default_estimator() method as a template for making your script, and you don't need to bother with the RelativeOpNav class at all.
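
For instance, a sketch of using the MomentCenterFindingSimple class from the previous section directly in a script, without registering it, might look like the following (again assuming camera, scene, and image_processing are already-constructed GIANT objects).

technique = MomentCenterFindingSimple(scene, camera, image_processing, search_range=200)

for ind, image in camera:       # assumed to yield (index, image) pairs for turned-on images
    scene.update(image)         # put the scene at the image time
    technique.estimate(image)   # extract the observed and computed bearings

    print(ind, technique.observed_bearings)

    technique.reset()           # clear per-image state before the next image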

Modules

cross_correlation

This module provides the capability to locate the center-of-figure of a target in an image using 2D cross-correlation.

ellipse_matching

This module provides the capability to locate the relative position of a regular target body (well modelled by a triaxial ellipsoid) by matching the observed ellipse of the limb in an image with the ellipsoid model of the target.

estimator_interface_abc

This module defines the abstract base class (abc) for defining Relative OpNav techniques that will work with the RelativeOpNav class.

limb_matching

This module provides the capability to locate the relative position of any target body by matching the observed limb in an image with the shape model of the target.

moment_algorithm

This module provides a class which implements a moment based (center of illumination) center finding RelNav technique.

sfn

This subpackage provides the requisite classes and functions for performing surface feature navigation in GIANT.

unresolved

This module provides a class which implements an unresolved center finding RelNav technique along with a new meta class that adds concrete center-of-brightness to center-of-figure correction methods.