OpNavImage

giant.image:

class giant.image.OpNavImage(data, observation_date=None, rotation_inertial_to_camera=None, temperature=None, position=None, velocity=None, exposure_type=None, saturation=None, file=None, parse_data=False, exposure=None, dark_pixels=None, instrument=None, spacecraft=None, target=None, pointing_post_fit=False)[source]

This is a subclass of a numpy array for images, which adds to the ndarray class various parameters necessary for the GIANT algorithms, as well as some helper methods for loading in an image.

The OpNavImage class is primarily a numpy ndarray which stores the illumination values of an image. In addition to the illumination data (which is used as you would normally use a numpy array) this class has some extra attributes which are used throughout the GIANT routines. These attributes are metadata for the image, including the location from which the image was taken, the observation_date the image was taken, the camera used to take the image, the spacecraft hosting that camera, the attitude of the camera at the time the image was taken, the velocity of the camera at the time the image was taken, the file the image was loaded from, and the exposure length used to generate the image.

The OpNavImage class also provides helper methods for loading the data from an image file. There is the load_image() static method, which will read in a number of standard image formats and return a numpy array of the illumination data. There is also the parse_data() method, which attempts to extract pertinent information about an image to fill the metadata of the OpNavImage class. The parse_data() method is not implemented here and must be implemented by the user (by subclassing the OpNavImage class) if it is to be used. It is possible to use the OpNavImage class without subclassing it and defining parse_data() by manually specifying the required metadata in the class initialization and setting the parse_data flag to False, but in general this is not recommended.

You can initialize this class by either passing in a path to the image file name (recommended) or by passing in an array-like object of the illumination data. The metadata can be specified as keyword arguments to the class initialization or can be loaded by overriding the parse_data() method.

Note that if you have overridden the parse_data() method, specified parse_data=True, and also specified one of the other optional inputs, what you specified manually will overwrite anything filled in by parse_data().
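To make the "numpy array plus metadata" design concrete, here is a minimal, self-contained sketch of the ndarray-subclass pattern such a class relies on. The class name MetaImage and the implementation details are illustrative assumptions, not GIANT's actual code; only the metadata attribute names mirror the documentation above.

```python
import numpy as np


# Illustrative sketch only: a minimal ndarray subclass carrying metadata,
# mirroring the pattern an OpNavImage-like class uses.  Not GIANT's code.
class MetaImage(np.ndarray):

    def __new__(cls, data, observation_date=None, temperature=None):
        # view the input data as this subclass, then attach metadata
        obj = np.asarray(data).view(cls)
        obj.observation_date = observation_date
        obj.temperature = temperature
        return obj

    def __array_finalize__(self, obj):
        # called by numpy on views and slices so metadata survives indexing
        if obj is None:
            return
        self.observation_date = getattr(obj, 'observation_date', None)
        self.temperature = getattr(obj, 'temperature', None)


img = MetaImage([[1, 2], [3, 4]], temperature=20.0)
row = img[0]  # slicing preserves the metadata via __array_finalize__
```

The key design point is `__array_finalize__`: without it, any numpy operation that creates a view or copy would silently drop the metadata.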

Parameters:
  • data (Path | str | Sequence[Sequence] | ndarray) – The image data to be formed into an OpNavImage either as a path to an image file or the illumination data directly

  • observation_date (datetime | None) – The observation_date the image was captured

  • rotation_inertial_to_camera (Rotation | Sequence | ndarray | None) – the rotation to go from the inertial frame to the camera frame at the time the image was taken

  • temperature (Real | None) – The temperature of the camera when the image was captured

  • position (Sequence | ndarray | None) – the inertial position of the camera at the time the image was taken

  • velocity (Sequence | ndarray | None) – The inertial velocity of the camera at the time the image was taken

  • exposure_type (ExposureType | str | None) – The type of exposure for the image (‘short’ or ‘long’)

  • saturation (Real | None) – The saturation level for the image

  • file (Path | str | None) – The file the illumination data came from. Generally required if parse_data is to be used and the data was entered as an array-like value

  • parse_data (bool) – A flag specifying whether to try the parse_data method. The parse_data method must be defined by the user.

  • exposure (Real | None) – The exposure time used to capture the image. This isn’t actually used in GIANT (the exposure_type attribute is used instead); it is provided for convenience and for manual inspection

  • dark_pixels (Sequence | ndarray | None) – An array of dark pixels to be used in estimating the noise level of the image (this generally refers to a set of pixels that are active but specifically not exposed to light)

  • instrument (str | None) – The camera used to capture the image. This is not used internally by GIANT and is provided for convenience and for manual inspection

  • spacecraft (str | None) – The spacecraft hosting the camera. This is not used internally by GIANT and is provided for convenience and for manual inspection

  • target (str | None) – The target that the camera is pointed towards. This is not used internally by GIANT and is provided for convenience and for manual inspection

  • pointing_post_fit (bool) – A flag specifying whether the attitude for this image has been estimated (True) or not (False)

exposure: Real | None

The exposure length of the image (usually in seconds).

This attribute is provided for documentation and convenience, but typically isn’t used directly by core GIANT functions. Instead, the exposure_type attribute is used. For an example of how one might use this attribute, see the getting started page for more details.

dark_pixels: numpy.ndarray | None

A numpy array of “dark” (active but covered) pixels from a detector.

This array (if set) is used in the find_poi_in_roi() method to determine a rough noise level in the entire image. It typically is set to a region of the detector which contains active pixels but is covered and not exposed to any illumination sources, if such a region exists. If a region like this does not exist in your detector then leave this set to None.
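As a hedged illustration of how a dark-pixel region translates into a rough noise level, one common estimator is simply the standard deviation of the covered pixels. The internal estimator used by find_poi_in_roi() may differ; this is only a sketch with simulated data.

```python
import numpy as np

# Simulate a covered detector region: a bias level plus read noise.
# (Values are invented for illustration.)
rng = np.random.default_rng(0)
dark_pixels = rng.normal(loc=100.0, scale=5.0, size=500)

# A simple rough noise estimate: the standard deviation of the dark pixels.
noise_level = float(np.std(dark_pixels))
```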

Note that if your detector has these pixels, you should probably crop them out of the image data stored in this class (though this is not technically necessary).

instrument: str | None

A string specifying the instrument that this image comes from.

This attribute is provided for documentation and convenience, but isn’t used directly by core GIANT functions. For an example of how one might use this attribute, see the getting started page for more details.

spacecraft: str | None

A string specifying the spacecraft hosting the instrument that this image comes from.

This attribute is provided for documentation and convenience, but isn’t used directly by core GIANT functions. For an example of how one might use this attribute, see the getting started page for more details.

target: str | None

A string specifying the target observed by the instrument in this image.

This attribute is provided for documentation and convenience, but isn’t used directly by core GIANT functions. For an example of how one might use this attribute, see the getting started page for more details.

pointing_post_fit: bool

A flag specifying whether the attitude for this image has been estimated (True) or not.

If this flag is True, then the attitude has been estimated either using observations of stars, or by updating the attitude of a short exposure image from the attitude of a long exposure image that has been estimated using stars. This is primarily used for informational purposes, though it is also used as a check in the Camera.update_short_attitude() method.

property observation_date: datetime | None

The observation_date specifies when the image was captured (normally set to the middle of the exposure period).

This is used for tagging observations with timestamps, updating attitude knowledge in short exposure images using long exposure images, and updating a scene to how it is expected to be at the time an image is captured.

Typically this attribute is a python datetime object; however, you can make it a different object if you want, as long as that object implements the isoformat, __add__, and __sub__ methods. You can also set this attribute to None, but this will break some functionality in GIANT, so it is not recommended. If you really want to set this to something else you will need to set the _observation_date attribute directly, but again this is likely to break other functionality in GIANT and is not recommended.
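To illustrate the stated interface requirement (isoformat, __add__, and __sub__), here is a hypothetical wrapper around a datetime that satisfies it. The class TaggedDate is invented for illustration and is not part of GIANT.

```python
from datetime import datetime, timedelta


# Hypothetical stand-in for observation_date: wraps a datetime and provides
# the three methods the documentation requires of a replacement object.
class TaggedDate:
    def __init__(self, dt, note=""):
        self.dt = dt
        self.note = note  # extra metadata carried alongside the timestamp

    def isoformat(self):
        return self.dt.isoformat()

    def __add__(self, other):
        # adding a timedelta yields a new TaggedDate
        return TaggedDate(self.dt + other, self.note)

    def __sub__(self, other):
        # subtracting another TaggedDate yields a timedelta
        if isinstance(other, TaggedDate):
            return self.dt - other.dt
        return TaggedDate(self.dt - other, self.note)
```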

property rotation_inertial_to_camera: Rotation | None

The rotation_inertial_to_camera attribute encodes the rotation to transform from the inertial frame to the camera frame at the time of the image.

This is used extensively throughout GIANT. It is updated when using stars to estimate an updated attitude, when doing relative navigation to predict where points in the scene project to points in the image, and also in relative navigation to predict where points in the image project to in inertial space.

This attribute should be set to a Rotation object, or something that the Rotation object can interpret. When you set this value, it will be converted to a Rotation object. You can also set this attribute to None, but this will break a significant portion of the functionality in GIANT, so it is not recommended. If you really want to set this to something else you will need to set the _rotation_inertial_to_camera attribute directly, but again this is likely to break other functionality in GIANT and is not recommended.

property velocity: ndarray

The velocity attribute encodes the inertial velocity of the camera at the time the image was captured.

This must be the inertial velocity with respect to the solar system barycenter and is used to compute the stellar aberration correction for stars and targets. To ignore stellar aberration you can set this to the zero vector.

This attribute should be set to a length-3 array-like object and it will be converted into a double numpy ndarray. If you try to set this value to None then it will be reset to a vector of zeros.
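As a sketch of why the velocity matters, here is the standard first-order stellar aberration correction: the apparent line of sight is shifted by v/c. This is a textbook formula for illustration, not necessarily the exact correction GIANT applies internally; note also how the zero vector leaves directions unchanged, matching the note above.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s


def aberrate(unit_dir, velocity_km_s):
    """First-order stellar aberration: shift a line-of-sight unit vector by v/c."""
    shifted = np.asarray(unit_dir, dtype=np.float64) + \
        np.asarray(velocity_km_s, dtype=np.float64) / C_KM_S
    return shifted / np.linalg.norm(shifted)


u = np.array([0.0, 0.0, 1.0])           # true direction to a star
apparent = aberrate(u, [30.0, 0.0, 0.0])  # ~30 km/s, roughly Earth orbital speed
```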

property position: ndarray

The position attribute encodes the inertial position of the camera at the time the image was captured.

Typically this is the inertial position from the solar system barycenter to the spacecraft and is used when updating a Scene to place objects in the camera frame at the time of the image. You can optionally put this in another frame or with another central body, as long as you know what you are doing and understand how the Scene works.

This attribute should be set to a length-3 array-like object and it will be converted into a double numpy ndarray. If you try to set this value to None then it will be reset to a vector of zeros.

property exposure_type: ExposureType | None

The exposure type specifies what type of processing to use on this image.

Short exposure images are used for relative navigation like center finding. Long exposure images are used for star-based navigation like attitude estimation. Dual exposure images are used for both star-based and relative navigation.

This property should be set to an ExposureType value or a string. It can also be set to None, but this can break some other functionality of GIANT, so it is not recommended.
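The short/long/dual distinction above can be sketched with a stand-in enum. The ExposureType defined here is a hypothetical mirror of the documented values; the real enum lives in GIANT and may differ in detail.

```python
from enum import Enum


# Hypothetical stand-in mirroring the documented exposure types.
class ExposureType(Enum):
    SHORT = "short"
    LONG = "long"
    DUAL = "dual"


def uses_stars(exposure_type):
    # per the text above, long and dual images feed star-based navigation
    return exposure_type in (ExposureType.LONG, ExposureType.DUAL)


def uses_relnav(exposure_type):
    # short and dual images feed relative navigation like center finding
    return exposure_type in (ExposureType.SHORT, ExposureType.DUAL)
```

Because the enum values are strings, setting the property from a string like 'short' can be handled with a simple `ExposureType("short")` lookup, which is one plausible reason the property accepts either form.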

property temperature: Real

The temperature of the camera at the time the image was captured.

This property is used by the camera model to apply temperature-dependent focal length changes. It should be a Real value convertible to a float using the float function. You can set this value to None, but it will break things in the camera model, so that is not recommended.
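To illustrate what a temperature-dependent focal length change can look like, here is a sketch using a simple linear correction. The form and coefficients are invented for illustration; GIANT's camera models define their own temperature dependence.

```python
# Hedged sketch: scale a nominal focal length by a linear function of
# temperature.  f0 and a1 are invented illustrative values, not GIANT's.
def focal_length(temperature_c, f0=100.0, a1=1e-5):
    """Focal length (mm) with a linear temperature correction."""
    return f0 * (1.0 + a1 * float(temperature_c))
```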

property saturation: Real

The saturation value of the camera.

This attribute is used when determining if a pixel is saturated or not in image processing. It may be set to a very high number to effectively ignore the check.
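A minimal sketch of the saturation check described above: compare pixel values against the saturation level, and note how a very high level effectively disables the mask. The comparison operator used here is an assumption about the general idea, not a quote of GIANT's image processing code.

```python
import numpy as np

# Toy image with two pixels at or above a saturation level of 250.
image = np.array([[10.0, 250.0],
                  [255.0, 42.0]])

saturation = 250.0
saturated = image >= saturation  # boolean mask of saturated pixels

# Setting saturation very high makes the mask all-False, effectively
# ignoring the check, as the documentation notes.
never_saturated = image >= 1e30
```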

Summary of Methods

load_image

This method reads in a number of standard image formats using OpenCV and pyfits and converts the result to grayscale if it is in color.
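As a hedged sketch of the color-to-grayscale step, here is the common luminosity-weighted conversion on an RGB array. The exact weights and library calls load_image uses (via OpenCV and pyfits) may differ; this only illustrates the operation with numpy.

```python
import numpy as np


def to_grayscale(rgb):
    """Collapse an (H, W, 3) RGB array to (H, W) using luminosity weights.

    Already-grayscale (H, W) input is passed through unchanged.
    The 0.299/0.587/0.114 weights are the common ITU-R BT.601 luma weights,
    assumed here for illustration.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    if rgb.ndim == 2:  # already grayscale
        return rgb
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights
```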

parse_data

This method should fill in the metadata for an OpNavImage.
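Since parse_data must be supplied by the user, here is a hypothetical sketch of the kind of header-to-metadata extraction a subclass override might perform. A real override would read the header from the image file (for example a FITS header via pyfits); here a plain dict stands in, and the keyword names (DATE-OBS, EXPTIME, INSTRUME) are common FITS conventions assumed for illustration.

```python
from datetime import datetime


def parse_header(header):
    """Extract OpNavImage-style metadata from a header mapping.

    Hypothetical helper for illustration: a real parse_data override would
    pull the header from the image file and set the attributes on self.
    """
    return {
        "observation_date": datetime.fromisoformat(header["DATE-OBS"]),
        "exposure": float(header["EXPTIME"]),
        "instrument": header.get("INSTRUME"),
    }
```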