giant.calibration¶
This package provides the required routines and objects to identify stars in an image and then estimate attitude, camera pointing alignment, and geometric camera model calibration using the observed stars.
Description¶
In GIANT, calibration refers primarily to the process of using identified stars in multiple images to estimate the
geometric camera calibration and camera frame alignment. There are many different sub-steps that need to be performed
for this, particularly with respect to identifying the stars in the image, which can lead to cluttered scripts and hard
to maintain code when everything is thrown together manually. Luckily, GIANT has done most of this nitty-gritty work
for us by creating a simple, single interface in the Calibration
class.
The Calibration
class is a subclass of the StellarOpNav
class, which provides the functionality for
identifying stars and estimating updated attitude information for single images. In addition to the
StellarOpNav
functionality, the Calibration
class also provides the interfaces for using identified
stars in multiple images to estimate updates to the geometric camera model (camera calibration,
estimate_calibration()
) and the alignment between the camera frame and a base frame
(estimate_static_alignment()
and estimate_temperature_dependent_alignment()
).
While these methods make it easy to get everything packaged appropriately, GIANT also exposes all of the substeps to you
if you need them to do a more advanced analysis.
This package level documentation focuses specifically on using the Calibration
class along with tips for
successfully doing camera calibration and alignment. For more details about what exactly is happening, refer to the
documentation for the submodules from this package.
Tuning for Successful Calibration¶
As with stellar_opnav
, tuning the Calibration
class is both science and art. Indeed, tuning for
calibration is nearly the same as tuning for stellar OpNav, therefore we urge you to start with the
stellar_opnav
documentation before proceeding with this documentation. Once you are familiar with tuning for
stellar OpNav, then tuning for calibration will be fairly straightforward.
There are two main differences between tuning for calibration and tuning for stellar OpNav. First, with calibration we
typically are considering many different view conditions across various temperatures and with various amounts of stray
light, which may make it difficult to find a single tuning that works to ID stars in all images under consideration.
The best way to work around this issue is to group the images into similar exposure times, temperatures, and stray light
patterns and figure out tuning for each of these groups independently using the recommended steps in
stellar_opnav
. The Calibration
class makes this process easy by providing the method
add_images()
which, when coupled with calls to Camera.all_on()
and Camera.all_off()
makes it easy to add/consider groups of images one by one, storing the results of images that have already been
processed.
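The grouping itself is simple bookkeeping. Below is a minimal pure-Python sketch of the idea, assuming hypothetical image metadata and binning thresholds (the GIANT-specific calls appear only as comments, since the exact workflow depends on your data):

```python
from itertools import groupby

# Hypothetical image metadata: (file name, exposure time [s], temperature [C])
images = [
    ("img0", 0.1, 20.0), ("img1", 0.1, 20.5),
    ("img2", 1.0, 20.2), ("img3", 1.0, -5.0),
]

def group_key(meta):
    """Group by exposure time and a coarse 10-degree temperature bin so each
    group sees similar star signal and thermal state."""
    _, exposure, temperature = meta
    return exposure, round(temperature / 10.0)

# groupby requires the input to be sorted by the same key
groups = {}
for key, members in groupby(sorted(images, key=group_key), key=group_key):
    groups[key] = [name for name, _, _ in members]

# Each group would then be tuned and processed independently, e.g.:
#   camera.all_off()          # turn off images already considered
#   cal.add_images(group)     # newly added images are turned on
#   ...tune, then cal.id_stars(), cal.sid_summary(), cal.estimate_attitude()...
```

Results for groups that have already been processed are retained by the Calibration class, so each new group only needs its own tuning pass.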
The second main difference between calibration and stellar OpNav is that in calibration, particularly for the camera
model estimation, we typically want as many stars as possible extracted from each image instead of just finding the
brightest stars in the image. The best way to handle this is to work iteratively: first tune for getting just the
bright stars and estimate an update to the attitude (and possibly the camera model if it had a poor initial guess);
then, once you have a better a priori, turn off the RANSAC feature of the StarID
class and identify dimmer
stars. Once this has been done you can then re-estimate an update to the camera model (and maybe the pointing for each
image).
The only other real tuning that might need to be done is choosing which parameters are estimated as part of the
geometric camera model calibration, which is done through the CameraModel.estimation_parameters
attribute. For
many of the camera models, some of the parameters are highly correlated with each other and it is not recommended to
attempt to simultaneously estimate them unless you have a very large dataset to help break the correlation (for instance
misalignment and the principal point for the camera can be highly correlated unless you have a lot of images at
different viewing conditions). Further details about good subsets of elements to estimate in calibration are included
with the documentation for the camera models provided with GIANT.
Example¶
Below shows how calibration can be used to identify stars, estimate attitude corrections, estimate a camera model, and
estimate alignment. It assumes that the generate_sample_data
script has already been run and that the
sample_data
directory is in the current working directory. For an in depth example using real images see the
tutorial.
>>> import pickle
>>> from pathlib import Path
>>> # use pathlib and pickle to get the data
>>> data = Path.cwd() / "sample_data" / "camera.pickle"
>>> with data.open('rb') as pfile:
...     camera = pickle.load(pfile)
>>> # import the stellar opnav class
>>> from giant.calibration.calibration_class import Calibration
>>> # import the default catalogue
>>> from giant.catalogues.giant_catalogue import GIANTCatalogue
>>> # set the estimation parameters for the camera model
>>> camera.model.estimation_parameters = ["fx", "fy", "px", "py", "k1"]
>>> # form the calibration object
>>> cal = Calibration(camera, image_processing_kwargs={'centroid_size': 1, 'poi_threshold': 10},
...                   star_id_kwargs={'max_magnitude': 5, 'tolerance': 20})
>>> # identify stars
>>> cal.id_stars()
>>> cal.sid_summary() # print a summary of the star id results
>>> # estimate an update the attitude for each image
>>> cal.estimate_attitude()
>>> # update the star id settings
>>> cal.star_id.max_magnitude = 5.5
>>> cal.star_id.tolerance = 2
>>> cal.star_id.max_combos = 0
>>> # identify stars again to get dimmer stars
>>> cal.id_stars()
>>> # import the visualizer to look at the results
>>> from giant.stellar_opnav.visualizer import show_id_results
>>> show_id_results(cal)
>>> # estimate the geometric camera model
>>> cal.estimate_calibration()
>>> cal.calib_summary()
- class giant.calibration.Calibration(camera, use_weights=False, image_processing=None, image_processing_kwargs=None, star_id=None, star_id_kwargs=None, alignment_base_frame_func=None, attitude_estimator=None, attitude_estimator_kwargs=None, static_alignment_estimator=None, static_alignment_estimator_kwargs=None, temperature_dependent_alignment_estimator=None, temperature_dependent_alignment_estimator_kwargs=None, calibration_estimator=None, calibration_estimator_kwargs=None)[source]¶
Bases:
StellarOpNav
This class serves as the main user interface for performing geometric camera calibration and camera frame attitude alignment.
The class acts as a container for the Camera, ImageProcessing, stellar_opnav.estimators, and calibration.estimators objects and also passes the correct and up-to-date data from one object to the other. In general, this class will be the exclusive interface to the mentioned objects and models for the user.
This class provides a number of features that make doing stellar OpNav and camera calibration/alignment easy. The first is that it provides aliases to the image processing, star id, attitude estimation, calibration estimation, and alignment estimation objects. These aliases make it easy to quickly change/update the various tuning parameters that are necessary to make star identification and calibration a success. In addition to providing convenient access to the underlying settings, some of these aliases also update internal flags that specify whether individual images need to be reprocessed, saving computation time when you’re trying to find the best tuning.
This class also provides simple methods for performing star identification, attitude estimation, camera calibration, and alignment estimation after you have set the tuning parameters. These methods (id_stars(), sid_summary(), estimate_attitude(), estimate_calibration(), calib_summary(), estimate_static_alignment(), and estimate_temperature_dependent_alignment()) combine all of the required steps into a few simple calls, and pass the resulting data from one object to the next. They also store off the results of the star identification in the queried_catalogue_star_records, queried_catalogue_image_points, queried_catalogue_unit_vectors, ip_extracted_image_points, ip_image_illums, ip_psfs, ip_stats, ip_snrs, unmatched_catalogue_image_points, unmatched_image_illums, unmatched_psfs, unmatched_stats, unmatched_snrs, unmatched_catalogue_star_records, unmatched_catalogue_unit_vectors, unmatched_extracted_image_points, matched_catalogue_image_points, matched_image_illums, matched_psfs, matched_stats, matched_snrs, matched_catalogue_star_records, matched_catalogue_unit_vectors_inertial, matched_catalogue_unit_vectors_camera, and matched_extracted_image_points
attributes, enabling more advanced analysis to be performed external to the class.
This class stores the updated attitude solutions in the image objects themselves, allowing you to directly pass your images from stellar OpNav to the relative_opnav routines with updated attitude solutions. It also stores the estimated camera model in the original camera model itself, and stores the estimated alignments in the static_alignment and temperature_dependent_alignment attributes. Finally, this class respects the image_mask attribute of the Camera object, only considering images that are currently turned on.
When initializing this class, most of the initial options can be set using the *_kwargs inputs with dictionaries specifying the keyword arguments and values. Alternatively, you can provide already initialized instances of the ImageProcessing, AttitudeEstimator, StarID, CalibrationEstimator, StaticAlignmentEstimator, or TemperatureDependentAlignmentEstimator classes or subclasses if you want a little more control. You should see the documentation for these classes for more details on what you can do with them.
- Parameters:
camera (Camera) – The Camera object containing the camera model and images to be utilized
use_weights (bool) – A flag specifying whether to use weighted estimation for attitude, alignment, and calibration
alignment_base_frame_func (Callable | None) – A callable object which returns the orientation of the base frame (the frame the camera frame alignment is estimated with respect to) relative to the inertial frame for a given date.
image_processing (ImageProcessing | None) – An already initialized instance of ImageProcessing (or a subclass). If not None then image_processing_kwargs are ignored.
image_processing_kwargs (dict | None) – The keyword arguments to pass to the ImageProcessing class constructor. These are ignored if argument image_processing is not None.
star_id (StarID | None) – An already initialized instance of StarID (or a subclass). If not None then star_id_kwargs are ignored.
star_id_kwargs (dict | None) – The keyword arguments to pass to the StarID class constructor as a dictionary. These are ignored if argument star_id is not None.
attitude_estimator (AttitudeEstimator | None) – An already initialized instance of AttitudeEstimator (or a subclass). If not None then attitude_estimator_kwargs are ignored.
attitude_estimator_kwargs (dict | None) – The keyword arguments to pass to the DavenportQMethod constructor as a dictionary. If argument attitude_estimator is not None then this is ignored.
static_alignment_estimator (StaticAlignmentEstimator | None) – An already initialized instance of StaticAlignmentEstimator (or a subclass). If not None then static_alignment_estimator_kwargs are ignored.
static_alignment_estimator_kwargs (dict | None) – The keyword arguments to pass to the StaticAlignmentEstimator constructor as a dictionary. If argument static_alignment_estimator is not None then this is ignored.
temperature_dependent_alignment_estimator (TemperatureDependentAlignmentEstimator | None) – An already initialized instance of TemperatureDependentAlignmentEstimator (or a subclass). If not None then temperature_dependent_alignment_estimator_kwargs are ignored.
temperature_dependent_alignment_estimator_kwargs (dict | None) – The keyword arguments to pass to the TemperatureDependentAlignmentEstimator constructor as a dictionary. If argument temperature_dependent_alignment_estimator is not None then this is ignored.
calibration_estimator (CalibrationEstimator | None) – An already initialized instance of CalibrationEstimator (or a subclass). If not None then calibration_estimator_kwargs are ignored.
calibration_estimator_kwargs (dict | None) – The keyword arguments to pass to the IterativeNonlinearLSTSQ constructor as a dictionary. If argument calibration_estimator is not None then this is ignored.
- alignment_base_frame_func: Callable | None¶
A callable object which returns the orientation of the base frame (the frame the camera frame alignment is estimated with respect to) relative to the inertial frame for a given date.
This is used on calls to estimate_static_alignment() and estimate_temperature_dependent_alignment() to determine the base frame the alignment is being done with respect to. Typically this returns something like the orientation of the spacecraft body frame with respect to the inertial frame (inertial to spacecraft body) or another camera frame.
- static_alignment: Rotation | None¶
The static alignment as a
Rotation
object.
This will be None until the estimate_static_alignment() method is called, at which point it will contain the estimated alignment.
- temperature_dependent_alignment: NONEARRAY¶
The temperature dependent alignment as a 3x2 numpy array.
The temperature dependent alignment array is stored such that the first column is the static offset for the alignment, the second column is the temperature dependent slope, and each row represents the euler angle according to the requested order (so if the requested order is 'xyx' then the rotation from the base frame to the camera frame at temperature t can be computed using:
>>> from giant.rotations import euler_to_rotmat, Rotation
>>> import numpy as np
>>> temperature_dependent_alignment = np.arange(6).reshape(3, 2)  # temp array just to demonstrate
>>> t = -22.5  # temp temperature just to demonstrate
>>> angles = temperature_dependent_alignment @ [1, t]
>>> order = 'xyx'
>>> rotation_base_to_camera = Rotation(euler_to_rotmat(angles, order))
- property model: CameraModel¶
This alias returns the current camera model from the camera attribute.
It is provided for convenience since the camera model is used frequently.
- property calibration_estimator: CalibrationEstimator¶
The calibration estimator to use when estimating the geometric calibration
This should typically be a subclass of the
CalibrationEstimator
meta class. See the
estimators
documentation for more details.
- property static_alignment_estimator: StaticAlignmentEstimator¶
The static alignment estimator to use when estimating the static alignment
This should typically be a subclass of the
StaticAlignmentEstimator
class. See the
estimators
documentation for more details.
- property temperature_dependent_alignment_estimator: TemperatureDependentAlignmentEstimator¶
The temperature dependent alignment estimator to use when estimating the temperature dependent alignment
This should typically be a subclass of the
TemperatureDependentAlignmentEstimator
class. See the
estimators
documentation for more details.
- estimate_calibration()[source]¶
This method estimates an updated camera model using all stars identified in all images that are turned on.
For each turned on image in the camera attribute, this method provides the calibration_estimator with the matched_extracted_image_points, the matched_catalogue_unit_vectors_camera, and optionally the matched_weights_picture if use_weights is True. The estimate() method is then called and the resulting updated camera model is stored in the model attribute. Finally, the updated camera model is used to update the following:
matched_catalogue_image_points
queried_catalogue_image_points
unmatched_catalogue_image_points
For a more thorough description of the calibration estimation routines see the calibration.estimators documentation.
Warning
This method overwrites the camera model information in the camera attribute and does not save old information anywhere. If you want this information saved be sure to store it yourself.
- Return type:
None
- estimate_static_alignment()[source]¶
This method estimates a static (not temperature dependent) alignment between a base frame and the camera frame over multiple images.
This method uses the alignment_base_frame_func to retrieve the rotation from the inertial frame to the base frame the alignment is to be done with respect to for each image time. The inertial matched catalogue unit vectors are then rotated into the base frame. Then, the matched image points-of-interest are converted to unit vectors in the camera frame. These two sets of unit vectors are then provided to the static_alignment_estimator and its estimate() method is called to estimate the alignment between the frames. The resulting alignment is stored in the static_alignment attribute.
Note that to do alignment, the base frame and the camera frame should generally be fixed with respect to one another. This means that you can’t do alignment with respect to something like the inertial frame in general, unless your camera is magically fixed with respect to the inertial frame.
Generally, this method should be called after you have estimated the geometric camera model, because the geometric camera model is used to convert the observed pixel locations in the image to unit vectors in the camera frame (using pixels_to_unit()).
Note
This method will attempt to account for misalignment estimated along with the camera model when performing the estimation; however, this is not recommended. Instead, once you have performed your camera model calibration, you should consider resetting the camera model misalignment to 0 and then calling
estimate_attitude()
before a call to this function.- Return type:
None
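Conceptually, this step solves Wahba's problem between the base-frame and camera-frame unit vectors. The following self-contained NumPy sketch uses the SVD solution to Wahba's problem on synthetic data; it illustrates the underlying math only and is not GIANT's implementation:

```python
import numpy as np

# Synthetic unit vectors expressed in the base frame (columns are paired
# observations gathered across images)
rng = np.random.default_rng(0)
b = rng.normal(size=(3, 10))
b /= np.linalg.norm(b, axis=0)

# A known base-to-camera alignment to recover (rotation about z)
angle = 0.3
true_T = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
a = true_T @ b  # the same directions expressed in the camera frame

# SVD solution to Wahba's problem: find T minimizing sum ||a_i - T b_i||^2
B = a @ b.T                          # attitude profile matrix
U, _, Vt = np.linalg.svd(B)
d = np.linalg.det(U) * np.linalg.det(Vt)
T = U @ np.diag([1.0, 1.0, d]) @ Vt  # enforce a proper rotation (det = +1)
```

With noiseless, consistent vectors the recovered T matches the true alignment to machine precision.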
- use_weights: bool¶
A flag specifying whether to compute weights/use them in the attitude estimation routine
- scene: Scene | None¶
Optionally, a scene defining the targets that may be in the FOV of the camera, used to reject points of interest that fall on a body so they are not identified as stars.
If None then no attempt is made to reject points that might be interior to a body. If not None then we will attempt to reject these points using a priori knowledge.
- process_stars: List[bool]¶
This list contains a boolean specifying whether the corresponding image needs to be processed using image processing again.
This typically is automatically updated and you shouldn’t have to worry about it. It is included for speed.
- estimate_temperature_dependent_alignment()[source]¶
This method estimates a temperature dependent (not static) alignment between a base frame and the camera frame over multiple images.
This method uses the alignment_base_frame_func to retrieve the rotation from the inertial frame to the base frame the alignment is to be done with respect to for each image time. Then, the rotation from the inertial frame to the camera frame is retrieved for each image from the Image.rotation_inertial_to_camera attribute (which is updated by a call to estimate_attitude()). These frame definitions are then provided to the temperature_dependent_alignment_estimator whose estimate() method is then called to estimate the temperature dependent alignment. The estimated alignment is then stored as a 3x2 numpy array where the first column is the static offset for the alignment, the second column is the temperature dependent slope, and each row represents the euler angle according to the requested order (so if the requested order is 'xyx' then the rotation from the base frame to the camera frame at temperature t can be computed using:
>>> from giant.rotations import euler_to_rotmat, Rotation
>>> from giant.calibration.calibration_class import Calibration
>>> cal = Calibration()
>>> cal.estimate_temperature_dependent_alignment()
>>> t = -22.5
>>> angles = cal.temperature_dependent_alignment @ [1, t]
>>> order = cal.temperature_dependent_alignment_estimator.order
>>> rotation_base_to_camera = Rotation(euler_to_rotmat(angles, order))
This example is obviously incomplete but gives the concept of how things could be used.
Note that to do alignment, the base frame and the camera frame should generally be fixed with respect to one another (with the exception of small variations with temperature). This means that you can’t do alignment with respect to something like the inertial frame in general, unless your camera is magically fixed with respect to the inertial frame.
Generally, this method should be called after you have estimated the attitude for each image, because the estimated image pointing is used to estimate the alignment. As such, only images where there are successfully matched stars are used in the estimation.
Note
This method will attempt to account for misalignment estimated along with the camera model when performing the estimation; however, this is not recommended. Instead, once you have performed your camera model calibration, you should consider resetting the camera model misalignment to 0 and then calling
estimate_attitude()
before a call to this function.- Return type:
None
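The estimation described above amounts to a linear least-squares fit of an offset and a temperature slope for each Euler angle. The following NumPy sketch uses synthetic angle/temperature data (it is not the GIANT estimator) to show how the 3x2 array arises:

```python
import numpy as np

# Camera temperatures at each image time and a known linear model to recover:
# each row of the model is [static offset, temperature slope] for one angle
temps = np.array([-20.0, -5.0, 10.0, 25.0])
true_model = np.array([[1.0e-3, 2.0e-6],
                       [-5.0e-4, -1.0e-6],
                       [2.0e-3, 3.0e-6]])

# Simulated per-image Euler angles (3 x n) following the linear model
design = np.vstack([np.ones_like(temps), temps])  # 2 x n: [1, t] columns
angles = true_model @ design

# Least-squares fit of [offset, slope] per angle -> the 3x2 alignment array
solution, *_ = np.linalg.lstsq(design.T, angles.T, rcond=None)
temperature_dependent_alignment = solution.T  # 3 x 2

# Evaluating the model at a temperature gives the Euler angles there
t = -22.5
predicted_angles = temperature_dependent_alignment @ [1.0, t]
```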
- reset_calibration_estimator()[source]¶
This method replaces the existing calibration estimator instance with a new instance using the initial calibration_estimator_kwargs argument passed to the constructor.
A new instance of the object is created, therefore there is no backwards reference whatsoever to the state before a call to this method.
- update_calibration_estimator(calibration_estimator_update=None)[source]¶
This method updates the attributes of the calibration_estimator attribute.
See the calibration.estimators documentation for accepted attribute values.
If a supplied attribute is not found in the calibration_estimator attribute then this will print a warning and ignore the attribute. Any attributes that are not supplied are left alone.
- Parameters:
calibration_estimator_update (dict | None) – A dictionary of attribute->value pairs to update the
calibration_estimator
attribute with
- reset_static_alignment_estimator()[source]¶
This method replaces the existing static alignment estimator instance with a new instance using the initial static_alignment_estimator_kwargs argument passed to the constructor.
A new instance of the object is created, therefore there is no backwards reference whatsoever to the state before a call to this method.
- update_static_alignment_estimator(alignment_estimator_update=None)[source]¶
This method updates the attributes of the static_alignment_estimator attribute.
See the calibration.estimators documentation for accepted attribute values.
If a supplied attribute is not found in the static_alignment_estimator attribute then this will print a warning and ignore the attribute. Any attributes that are not supplied are left alone.
- Parameters:
alignment_estimator_update (dict | None) – A dictionary of attribute->value pairs to update the
static_alignment_estimator
attribute with
- reset_temperature_dependent_alignment_estimator()[source]¶
This method replaces the existing temperature dependent alignment estimator instance with a new instance using the initial temperature_dependent_alignment_estimator_kwargs argument passed to the constructor.
A new instance of the object is created, therefore there is no backwards reference whatsoever to the state before a call to this method.
- update_temperature_dependent_alignment_estimator(temperature_dependent_alignment_estimator_update=None)[source]¶
This method updates the attributes of the temperature_dependent_alignment_estimator attribute.
See the calibration.estimators documentation for accepted attribute values.
If a supplied attribute is not found in the temperature_dependent_alignment_estimator attribute then this will print a warning and ignore the attribute. Any attributes that are not supplied are left alone.
- Parameters:
temperature_dependent_alignment_estimator_update (dict | None) – A dictionary of attribute->value pairs to update the
temperature_dependent_alignment_estimator
attribute with
- reset_settings()[source]¶
This method resets all settings to their initially provided values (at class construction)
Specifically, the following are reset
star_id
image_processing
attitude_estimator
calibration_estimator
static_alignment_estimator
temperature_dependent_alignment_estimator
In each case, a new instance of the object is created supplying the corresponding _kwargs argument supplied when this class was initialized.
This is simply a shortcut to calling the reset_XXX methods individually.
- update_settings(star_id_update=None, image_processing_update=None, attitude_estimator_update=None, calibration_estimator_update=None, static_alignment_estimator_update=None, temperature_dependent_alignment_estimator_update=None)[source]¶
This method updates all settings to their provided values
Specifically, the following are updated depending on the input
star_id
image_processing
attitude_estimator
calibration_estimator
static_alignment_estimator
temperature_dependent_alignment_estimator
In each case, the existing instance is modified in place with the attributes provided. Any attributes that are not specified are left as is.
This is simply a shortcut to calling the
update_XXX
methods individually.- Parameters:
star_id_update (dict | None) – The updates to star_id.
attitude_estimator_update (dict | None) – The updates to attitude_estimator.
image_processing_update (dict | None) – The updates to image_processing.
calibration_estimator_update (dict | None) – The updates to calibration_estimator.
static_alignment_estimator_update (dict | None) – The updates to static_alignment_estimator.
temperature_dependent_alignment_estimator_update (dict | None) – The updates to temperature_dependent_alignment_estimator.
- calib_summary(measurement_covariance=None)[source]¶
This prints a summary of the results of calibration to the screen
The resulting summary displays the labeled covariance matrix, followed by the labeled correlation coefficients, followed by the state parameters and their formal uncertainty.
One optional input can be used to specify the uncertainty on the measurements if weighted estimation wasn’t already used, to ensure the post-fit covariance has the proper scaling.
Note that if multiple misalignments were estimated in the calibration, only the first is printed in the correlation and covariance matrices. For all misalignments, the values are replaced with NaN.
- Parameters:
measurement_covariance (Sequence | ndarray | Real | None) – The covariance for the measurements either as a nxn matrix or as a scalar.
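If you want to reproduce the printed correlation coefficients yourself from a covariance matrix, the conversion is straightforward. The covariance values below are made up purely for illustration:

```python
import numpy as np

# A made-up 2x2 post-fit covariance for two parameters (say fx and fy)
covariance = np.array([[4.0, 1.0],
                       [1.0, 9.0]])

# Formal (1-sigma) uncertainties are the square roots of the diagonal
sigma = np.sqrt(np.diag(covariance))

# Correlation coefficients: rho_ij = C_ij / (sigma_i * sigma_j)
correlation = covariance / np.outer(sigma, sigma)
```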
- limit_magnitude(min_magnitude, max_magnitude, in_place=False)[source]¶
This method removes stars from the matched_... attributes that are not within the provided magnitude bounds.
This method should be used rarely, as you can typically achieve the same functionality by using the StarID.max_magnitude and StarID.min_magnitude attributes before calling id_stars(). The most typical use case for this method is when you have already completed a full calibration and you now either want to filter out some of the stars for plotting purposes, or you want to filter out some of the stars to do an alignment analysis, where it is generally better to use only well exposed stars since fewer are needed to fully define the alignment.
When you use this method, by default it will edit and return a copy of the current instance, preserving the original. If you are using many images with many stars in them, however, this can use a large amount of memory, so you can optionally specify in_place=True to modify the current instance in place. Note however that this is not a reversible operation (that is, you cannot get back to the original state) so be cautious about using this option.
- Parameters:
min_magnitude (float) – The minimum star magnitude to accept (recall that minimum magnitude limits the brightest stars)
max_magnitude (float) – The maximum star magnitude to accept (recall that maximum magnitude limits the dimmest stars)
in_place – A flag specifying whether to work on a copy or the original
- Returns:
The edited Calibration instance (either a copy or a reference)
- Return type:
Calibration
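The filtering itself is a boolean mask over the matched attributes. A small NumPy sketch of the idea (the arrays here are illustrative stand-ins, not the actual internal storage):

```python
import numpy as np

# Hypothetical matched star magnitudes and 2 x n matched image points
magnitudes = np.array([2.1, 4.7, 6.3, 8.0])
image_points = np.arange(8, dtype=float).reshape(2, 4)

# Keep stars inside the bounds; smaller magnitudes are brighter, so
# min_magnitude trims the brightest stars and max_magnitude the dimmest
min_magnitude, max_magnitude = 3.0, 7.0
keep = (magnitudes >= min_magnitude) & (magnitudes <= max_magnitude)

filtered_magnitudes = magnitudes[keep]
filtered_points = image_points[:, keep]
```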
- class giant.calibration.DavenportQMethod(target_frame_directions=None, base_frame_directions=None, weighted_estimation=False, weights=1)[source]¶
Bases:
AttitudeEstimator
This class estimates the rotation quaternion that best aligns unit vectors from one frame with unit vectors in another frame using Davenport’s Q-Method solution to Wahba’s problem.
This class is relatively easy to use. When you initialize the class, simply specify the target_frame_directions unit vectors (\(\textbf{a}_i\) from the estimators documentation) as a 3xn array of vectors (each column is a vector) and the base_frame_directions unit vectors (\(\textbf{b}_i\) from the estimators documentation) as a 3xn array of vectors (each column is a vector). Here the target_frame_directions unit vectors are expressed in the end frame (the frame you want to rotate to) and the base_frame_directions unit vectors are expressed in the starting frame (the frame you want to rotate from). You can also leave these inputs as None and then set the attributes directly. Each column of target_frame_directions and base_frame_directions should correspond to each other as a pair (i.e. column 1 in target_frame_directions is paired with column 1 in base_frame_directions).
Optionally, either at initialization or by setting the attributes, you can set the weighted_estimation and weights values to specify whether to use weighted estimation or not, and what weights to use if you are using weighted estimation. When performing weighted estimation you should set weighted_estimation to True and specify weights to be a length n array of the weights to apply to each unit vector pair.
Once the appropriate values are set, the estimate() method can be called to compute the attitude quaternion that best aligns the two frames. When the estimate() method completes, the solved-for rotation can be found as a Rotation object in the rotation attribute of the class. In addition, the formal post-fit covariance matrix of the estimate can be found in the post_fit_covariance attribute. Note that, as with all attitude quaternions, the post-fit covariance matrix will be rank deficient since there are only 3 true degrees of freedom.
A description of the math behind the DavenportQMethod solution can be found here.
- Parameters:
target_frame_directions (Sequence | ndarray | None) – A 3xn array of unit vectors expressed in the camera frame
base_frame_directions (Sequence | ndarray | None) – A 3xn array of unit vectors expressed in the catalogue frame corresponding to the target_frame_directions unit vectors
weighted_estimation (bool) – A flag specifying whether to weight the estimation routine by unit vector pairs
weights (Sequence | ndarray | Real) – The weights to apply to the unit vectors if the weighted_estimation flag is set to True.
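For reference, Davenport's Q-Method is compact enough to sketch directly in NumPy. This illustrative implementation (synthetic data, vector-first quaternion [x, y, z, s]; a sketch of the math, not GIANT's code) builds the attitude profile matrix, forms the symmetric 4x4 K matrix, and takes the eigenvector with the largest eigenvalue as the optimal quaternion:

```python
import numpy as np

# Synthetic paired unit vectors: b in the base frame, a = T b in the target
rng = np.random.default_rng(1)
b = rng.normal(size=(3, 20))
b /= np.linalg.norm(b, axis=0)
angle = 0.5
true_T = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(angle), -np.sin(angle)],
                   [0.0, np.sin(angle),  np.cos(angle)]])
a = true_T @ b
w = np.ones(b.shape[1])  # uniform weights

# Attitude profile matrix B = sum_i w_i a_i b_i^T and the 4x4 K matrix
B = (w * a) @ b.T
z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
K = np.zeros((4, 4))
K[:3, :3] = B + B.T - np.trace(B) * np.eye(3)
K[:3, 3] = z
K[3, :3] = z
K[3, 3] = np.trace(B)

# The optimal quaternion is the eigenvector with the largest eigenvalue
# (eigh returns eigenvalues in ascending order)
_, eigvecs = np.linalg.eigh(K)
x, y, zq, s = eigvecs[:, -1]

# Convert the quaternion back to the rotation matrix it represents
T = np.array([
    [1 - 2 * (y * y + zq * zq), 2 * (x * y + s * zq), 2 * (x * zq - s * y)],
    [2 * (x * y - s * zq), 1 - 2 * (x * x + zq * zq), 2 * (y * zq + s * x)],
    [2 * (x * zq + s * y), 2 * (y * zq - s * x), 1 - 2 * (x * x + y * y)],
])
```

The quaternion-to-matrix conversion is quadratic in the quaternion, so the eigenvector's sign ambiguity does not matter.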
- target_frame_directions: ndarray¶
The unit vectors in the target frame as a 3xn array (\(\mathbf{a}_i\)).
Each column should represent the pair of the corresponding column in
base_frame_directions
.
- base_frame_directions: ndarray¶
The unit vectors in the base frame as a 3xn array (\(\mathbf{b}_i\)).
Each column should represent the pair of the corresponding column in
target_frame_directions
.
- weights: ndarray¶
A length n array of the weights to apply if weighted_estimation is True. (\(w_i\))
Each element should represent the pair of the corresponding column in target_frame_directions and base_frame_directions.
- weighted_estimation: bool¶
A flag specifying whether to use weights in the estimation of the rotation.
- rotation: Rotation | None¶
The solved-for rotation that best aligns the base_frame_directions and target_frame_directions after calling estimate().
This rotation goes from the base frame to the target frame.
If
estimate()
has not been called yet then this will be set toNone
.
- compute_residuals()[source]¶
This method computes the residuals between the aligned unit vectors according to Wahba’s problem definitions.
If the updated attitude has been estimated (rotation is not None) then this method computes the post-fit residuals; otherwise it computes the pre-fit residuals. The residuals are computed according to

\[r_i=\frac{1}{2}\left\|\mathbf{a}_i-\mathbf{T}\mathbf{b}_i\right\|^2\]

where \(r_i\) is the residual, \(\mathbf{a}_i\) is the camera direction unit vector, \(\mathbf{b}_i\) is the database direction unit vector, and \(\mathbf{T}\) is the solved-for rotation matrix from the catalogue frame to the camera frame, or the identity matrix if the rotation hasn't been solved for yet.

The output will be a length n array with each element representing the residual for the corresponding unit vector pair.
- Returns:
The residuals between the aligned unit vectors
- Return type:
ndarray
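The residual formula above is straightforward to express in NumPy. This is an illustrative sketch with assumed names, not the GIANT code:

```python
import numpy as np

def wahba_residuals(target_dirs, base_dirs, rotation=None):
    """Compute r_i = 0.5 * ||a_i - T b_i||**2 for each unit vector pair.

    If no rotation has been solved for yet (rotation is None), the identity
    matrix is used, giving the pre-fit residuals."""
    rotation = np.eye(3) if rotation is None else np.asarray(rotation)
    difference = np.asarray(target_dirs) - rotation @ np.asarray(base_dirs)
    return 0.5 * np.sum(difference * difference, axis=0)
```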
- estimate()[source]¶
This method solves for the rotation matrix that best aligns the unit vectors in base_frame_directions with the unit vectors in target_frame_directions using Davenport's Q-Method solution to Wahba's Problem.

Once the appropriate attributes have been set, simply call this method with no arguments and the solved-for rotation will be stored in the rotation attribute as a Rotation object.
- Return type:
None
- property post_fit_covariance: ndarray¶
This returns the post-fit covariance after calling the estimate() method as a 4x4 numpy array.

This should only be called after the estimate() method has been called; otherwise it raises a ValueError.
- class giant.calibration.StarID(model, extracted_image_points=None, catalogue=None, max_magnitude=7, min_magnitude=-10, max_combos=100, tolerance=20, a_priori_rotation_cat2camera=None, ransac_tolerance=5, second_closest_check=True, camera_velocity=None, camera_position=None, unique_check=True, use_mp=False, lost_in_space_catalogue_file=None)[source]¶
Bases:
object
The StarID class operates on the result of image processing algorithms to attempt to match image points of interest with catalogue star records.
This is a necessary step in all forms of stellar OpNav and is a critical component of GIANT.
In general, the user will not directly interface with the StarID class and will instead use the StellarOpNav class. Below we give a brief description of how to use this class directly for users who are just curious or who need more direct control over the class.

There are a couple of things that the StarID class needs to operate. The first is a camera model, which should be a subclass of CameraModel. The camera model is used both to project catalogue star locations onto the image and to generate unit vectors through the image points of interest in the camera frame. The next thing the StarID class needs is a star catalogue to query. This should come from the catalogues package and provides all of the necessary information for retrieving and projecting the expected stars in an image. Both the star catalogue and camera model are generally set at the construction of the class and apply to every image being considered, so they are rarely updated. The camera model is stored in the model attribute and is also specified as the first positional argument for the class constructor. The catalogue is stored in the catalogue attribute and can also be specified in the class constructor as a keyword argument of the same name.

The StarID class also needs some information about the current image being considered. This information includes the points of interest for the image that need to be matched to stars, the a priori attitude of the image, and the position/velocity of the camera at the time the image was captured. The points of interest are generally returned from the ImageProcessing routines, although they don't need to be. The camera attitude, position, and velocity are generally passed from the OpNavImage metadata. The image attitude is used for querying the catalogue and rotating the catalogue stars into the image frame. The camera position and velocity are used for correcting the star locations for parallax and stellar aberration. The camera position and velocity are not required but are generally recommended as they will give a more accurate representation. All of these attributes need to be updated for each new image being considered (the StarID class does not directly operate on the OpNavImage objects). The image points of interest are stored and updated in the extracted_image_points attribute, the camera attitude is stored in the a_priori_rotation_cat2camera attribute, and the camera position and velocity are stored in the camera_position and camera_velocity attributes, respectively. They can also be specified in the class constructor as keyword arguments of the same name.

Finally, there are a number of tuning parameters that need to be set. These parameters are discussed in depth in the Tuning Parameters Table.
When everything is correctly set in an instance of StarID, generally all that needs to be called is the id_stars() method, which accepts the observation date of the image being considered as an optional epoch keyword argument. This method will go through the whole process detailed above, storing the results in a number of attributes that are detailed below.

Warning

This class will load data for the lost in space catalogue. The lost in space catalogue is a pickle file. Pickle files can be used to execute arbitrary code, so you should never open one from an untrusted source. While this code should only be reading pickle files generated by GIANT itself, which are safe, you should verify that the lost_in_space_catalogue_file and the file it points to have not been tampered with to be absolutely sure.
- Parameters:
model (CameraModel) – The camera model to use to relate vectors in the camera frame with points on the image
extracted_image_points (Sequence | ndarray | None) – A 2xn array of the image points of interest to be identified. The first row should correspond to the y locations (rows) and the second row should correspond to the x locations (columns).
catalogue (Catalogue | None) – The catalogue object to use to query for potential stars in an image.
max_magnitude (Real) – the maximum magnitude to return when querying the star catalogue
min_magnitude (Real) – the minimum magnitude to return when querying the star catalogue
max_combos (int) – The maximum number of random samples to try in the RANSAC routine
tolerance (Real) – The maximum distance between a catalogue star and an image point of interest for a potential pair to be formed before the RANSAC algorithm
a_priori_rotation_cat2camera (Rotation | None) – The rotation matrix to go from the inertial frame to the camera frame
ransac_tolerance (Real) – The maximum distance between a catalogue star and an image point of interest after correcting the attitude for a pair to be considered an inlier in the RANSAC algorithm.
second_closest_check (bool) – A flag specifying whether to reject pairs where 2 catalogue stars are close to an image point of interest
camera_velocity (Sequence | ndarray | None) – The velocity of the camera in km/s with respect to the solar system barycenter in the inertial frame at the time the image was taken
camera_position (Sequence | ndarray | None) – The position of the camera in km with respect to the solar system barycenter in the inertial frame at the time the image was taken
unique_check (bool) – A flag specifying whether to allow a single catalogue star to be potentially paired with multiple image points of interest
use_mp (bool) – A flag specifying whether to use the multi-processing library to accelerate the RANSAC algorithm
lost_in_space_catalogue_file (Path | str | None) – The file containing the lost in space catalogue
- model: CameraModel¶
The camera model which relates points in the camera frame to points in the image and vice-versa.
- camera_position: ndarray¶
The position of the camera with respect to the solar system barycenter in the inertial frame at the time the image was captured as a length 3 numpy array of floats.
Typically this is stored in the
OpNavImage.position
attribute
- camera_velocity: ndarray¶
The velocity of the camera with respect to the solar system barycenter in the inertial frame at the time the image was captured as a length 3 numpy array of floats.
Typically this is stored in the
OpNavImage.velocity
attribute
- extracted_image_points: ndarray¶
A 2xn array of the image points of interest to be paired with catalogue stars.

The first row should correspond to the x locations (columns) and the second row should correspond to the y locations (rows). Typically this is retrieved from a call to ImageProcessing.locate_subpixel_poi_in_roi().
- catalogue: Catalogue¶
The star catalogue to use when pairing image points with star locations.
This typically should be a subclass of the Catalogue class. It defaults to the GIANTCatalogue.
- a_priori_rotation_cat2camera: Rotation¶
This contains the a priori rotation knowledge from the catalogue frame (typically the inertial frame) to the camera frame at the time of the image.
This typically is stored as the
OpNavImage.rotation_inertial_to_camera
attribute.
- max_magnitude: Real¶
The maximum star magnitude to query from the star catalogue.
This specifies how dim stars are expected to be in the
extracted_image_points
data set. This is typically dependent on both the detector and the exposure length of the image under consideration.
- min_magnitude: Real¶
The minimum star magnitude to query from the star catalogue.
This specifies how bright stars are expected to be in the extracted_image_points data set. This is typically dependent on both the detector and the exposure length of the image under consideration.

Generally this should be left alone unless you are worried about over-exposed stars (in which case ImageProcessing.reject_saturation may be more useful) or you are doing some special analysis.
- tolerance: Real¶
The maximum distance in units of pixels between a projected catalogue location and an extracted image point for a possible pairing to be made for consideration in the RANSAC algorithm.
- max_combos: int¶
The maximum number of random combinations to try in the RANSAC algorithm.
If the total possible number of combinations is less than this attribute then an exhaustive search will be performed instead.
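The random-sample versus exhaustive-search decision can be sketched as follows. This is a standalone illustration with assumed helper names, not GIANT's actual implementation:

```python
import itertools
import math
import random

def candidate_combinations(n_pairs, sample_size=4, max_combos=100, seed=0):
    """Return index combinations for the RANSAC sampler: every combination if
    there are at most max_combos of them, otherwise max_combos random ones."""
    total = math.comb(n_pairs, sample_size)
    if total <= max_combos:
        # exhaustive search: try every possible sample exactly once
        return list(itertools.combinations(range(n_pairs), sample_size))
    rng = random.Random(seed)
    combos = set()
    while len(combos) < max_combos:
        combos.add(tuple(sorted(rng.sample(range(n_pairs), sample_size))))
    return list(combos)
```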
- ransac_tolerance: Real¶
The tolerance that is required after correcting for attitude errors for a pair to be considered an inlier in the RANSAC algorithm in units of pixels.
This should always be less than the
tolerance
attribute.
- second_closest_check: bool¶
A boolean specifying whether to ignore extracted image points where multiple catalogue points are within the specified tolerance.
- unique_check: bool¶
A boolean specifying whether to ignore possible catalogue to image point pairs where multiple image points are within the specified tolerance of a single catalogue point.
- use_mp: bool¶
A boolean flag specifying whether to use multi-processing to speed up the RANSAC process.
If this is set to True then all available CPU cores will be utilized to parallelize the RANSAC algorithm computations. For small combinations, the overhead associated with this can swamp any benefit that may be realized.
- queried_catalogue_image_points: Sequence | ndarray | None¶
A 2xn numpy array of points containing the projected image points for all catalogue stars that were queried from the star catalogue with x (columns) in the first row and y (rows) in the second row.
Each column corresponds to the same row in queried_catalogue_star_records.

Until project_stars() is called this will be None.
- queried_catalogue_star_records: DataFrame | None¶
A pandas DataFrame of all the catalogue star records that were queried.
See the Catalogue class for a description of the columns of the dataframe.

Until project_stars() is called this will be None.
- queried_catalogue_unit_vectors: Sequence | ndarray | None¶
A 3xn numpy array of unit vectors in the inertial frame for all catalogue stars that were queried from the star catalogue.
Each column corresponds to the same row in queried_catalogue_star_records.

Until project_stars() is called this will be None.
- queried_weights_inertial: Sequence | ndarray | None¶
This contains the formal total uncertainty for each unit vector from the queried catalogue stars.
Each element in this array corresponds to the same row in queried_catalogue_star_records.

Until id_stars() is called this will be None.
- queried_weights_picture: Sequence | ndarray | None¶
This contains the formal total uncertainty for each projected pixel location from the queried catalogue stars in units of pixels.
Each element in this array corresponds to the same row in queried_catalogue_star_records.

Until id_stars() is called this will be None.
- unmatched_catalogue_image_points¶
A 2xn numpy array of points containing the projected image points for all catalogue stars that were not matched with an extracted image point, with x (columns) in the first row and y (rows) in the second row.
Each column corresponds to the same row in unmatched_catalogue_star_records.

Until id_stars() is called this will be None.
- unmatched_catalogue_star_records: Sequence | ndarray | None¶
A pandas DataFrame of all the catalogue star records that were not matched to an extracted image point in the star identification routine.
See the Catalogue class for a description of the columns of the dataframe.

Until id_stars() is called this will be None.
- unmatched_catalogue_unit_vectors: Sequence | ndarray | None¶
A 3xn numpy array of unit vectors in the inertial frame for all catalogue stars that were not matched to an extracted image point in the star identification routine.
Each column corresponds to the same row in unmatched_catalogue_star_records.

Until id_stars() is called this will be None.
- unmatched_extracted_image_points: Sequence | ndarray | None¶
A 2xn array of the image points of interest that were not paired with a catalogue star in the star identification routine.
The first row corresponds to the x locations (columns) and the second row corresponds to the y locations (rows).
Until id_stars() is called this will be None.
- unmatched_weights_inertial: Sequence | ndarray | None¶
This contains the formal total uncertainty for each unit vector from the queried catalogue stars that were not matched with an extracted image point.
Each element in this array corresponds to the same row in unmatched_catalogue_star_records.

Until id_stars() is called this will be None.
- unmatched_weights_picture: Sequence | ndarray | None¶
This contains the formal total uncertainty for each projected pixel location from the queried catalogue stars that were not matched with an extracted image point in units of pixels.
Each element in this array corresponds to the same row in unmatched_catalogue_star_records.

Until id_stars() is called this will be None.
- matched_catalogue_image_points: Sequence | ndarray | None¶
A 2xn numpy array of points containing the projected image points for all catalogue stars that were matched with an extracted image point, with x (columns) in the first row and y (rows) in the second row.
Each column corresponds to the same row in matched_catalogue_star_records.

Until id_stars() is called this will be None.
- matched_catalogue_star_records: Sequence | ndarray | None¶
A pandas DataFrame of all the catalogue star records that were matched to an extracted image point in the star identification routine.
See the Catalogue class for a description of the columns of the dataframe. Each row of the dataframe corresponds to the same column index in matched_extracted_image_points.

Until id_stars() is called this will be None.
- matched_catalogue_unit_vectors: Sequence | ndarray | None¶
A 3xn numpy array of unit vectors in the inertial frame for all catalogue stars that were matched to an extracted image point in the star identification routine.
Each column corresponds to the same row in matched_catalogue_star_records.

Until id_stars() is called this will be None.
- matched_extracted_image_points: Sequence | ndarray | None¶
A 2xn array of the image points of interest that were paired with a catalogue star in the star identification routine.
The first row corresponds to the x locations (columns) and the second row corresponds to the y locations (rows).
Each column corresponds to the same row in matched_catalogue_star_records for its pairing.

Until id_stars() is called this will be None.
- matched_weights_inertial: Sequence | ndarray | None¶
This contains the formal total uncertainty for each unit vector from the queried catalogue stars that were matched with an extracted image point.
Each element in this array corresponds to the same row in matched_catalogue_star_records.

Until id_stars() is called this will be None.
- matched_weights_picture: Sequence | ndarray | None¶
This contains the formal total uncertainty for each projected pixel location from the queried catalogue stars that were matched with an extracted image point in units of pixels.
Each element in this array corresponds to the same row in matched_catalogue_star_records.

Until id_stars() is called this will be None.
- lis_catalogue: Tuple[cKDTree | None, Sequence | ndarray | None]¶
The lost in space catalogue.
Contains a scipy cKDTree of hash codes as the first element and a numpy array of star ids for each hash code as the second element.
Warning
The lost in space catalogue is a pickle file. Pickle files can be used to execute arbitrary code, so you should never open one from an untrusted source.
- query_catalogue(epoch=datetime.datetime(2000, 1, 1, 0, 0))[source]¶
This method queries stars from the catalogue within the field of view.
The stars are queried such that any stars within 1.3 times the CameraModel.field_of_view radial distance of the camera frame z axis (converted to right ascension and declination) are returned between min_magnitude and max_magnitude. The queried stars are updated to the epoch value using proper motion and are stored in the queried_catalogue_star_records attribute as a pandas DataFrame. For more information about this format see the Catalogue class documentation.

The epoch input should be either a Python datetime object representing the UTC time or a float value of the MJD years.

In general, this method does not need to be directly called by the user as it is automatically called in the project_stars() method.
- Parameters:
epoch (datetime | Real) – The new epoch to move the stars to using proper motion
- compute_pointing()[source]¶
This method computes the right ascension and declination of an axis of the camera frame in units of degrees.
The pointing is computed by extracting the camera frame z axis expressed in the inertial frame from the a_priori_rotation_cat2camera attribute and then converting that axis to a right ascension and declination. The conversion to right ascension and declination is given as

\[\begin{split}ra=\text{atan2}(\mathbf{c}_{yI}, \mathbf{c}_{xI})\\ dec=\text{asin}(\mathbf{c}_{zI})\end{split}\]

where atan2 is the quadrant-aware arc tangent function, asin is the arc sine, and \(\mathbf{c}_{jI}\) is the \(j^{th}\) component of the camera frame axis expressed in the inertial frame.

In general this method is not used directly by the user as it is automatically called in the query_catalogue() method.
- Returns:
The right ascension and declination of the specified axis in the inertial frame as a tuple (ra, dec) in units of degrees.
- Return type:
Tuple[float, float]
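The boresight-to-pointing conversion can be written directly from the formulas above. This is an illustrative sketch with an assumed function name, not GIANT's code:

```python
import numpy as np

def pointing_ra_dec(boresight_inertial):
    """Convert a camera z-axis unit vector expressed in the inertial frame
    to right ascension and declination in degrees."""
    x, y, z = boresight_inertial
    right_ascension = np.degrees(np.arctan2(y, x))  # quadrant-aware
    declination = np.degrees(np.arcsin(z))
    return right_ascension, declination
```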
- project_stars(epoch=datetime.datetime(2000, 1, 1, 0, 0), compute_weights=False, temperature=0, image_number=0)[source]¶
This method queries the star catalogue for predicted stars within the field of view and projects those stars onto the image using the camera model.
The star catalogue is queried using the query_catalogue() method and the stars are updated to the epoch specified by epoch using the proper motion from the catalogue. The epoch should be specified as either a datetime object representing the UTC time the stars should be transformed to, or a float value representing the MJD year. The queried pandas DataFrame containing the star catalogue records is stored in the queried_catalogue_star_records attribute.

After the stars are queried from the catalogue, they are converted to inertial unit vectors and corrected for stellar aberration and parallax using the camera_position and camera_velocity values. The corrected inertial vectors are stored in the queried_catalogue_unit_vectors attribute.

Finally, the unit vectors are rotated into the camera frame using the a_priori_rotation_cat2camera attribute, and then projected onto the image using the model attribute. The projected points are stored in the queried_catalogue_image_points attribute.

If requested, the formal uncertainties for the catalogue unit vectors and pixel locations are computed and stored in the queried_weights_inertial and queried_weights_picture attributes. These are computed by transforming the formal uncertainty on the right ascension, declination, and proper motion specified in the star catalogue into the proper frame.

In general this method is not called directly by the user and is instead called in the id_stars() method.
- Parameters:
epoch (datetime | Real) – The epoch to get the star locations for
compute_weights (bool) – A boolean specifying whether to compute the formal uncertainties for the unit vectors and the pixel locations of the catalogue stars.
temperature (Real) – The temperature of the camera at the time of the image being processed
image_number (int) – The number of the image being processed
- solve_lis(epoch=datetime.datetime(2000, 1, 1, 0, 0), temperature=0, image_number=0)[source]¶
Solves the lost in space problem (no a priori knowledge) for the orientation between the catalogue and camera frames.
The lost in space problem is solved by first generating hash codes of observed possible star quads in an image using _generate_hash(). The hash codes are compared with a precomputed database of hash codes (see build_lost_in_space_catalogue) to identify the closest matches. The closest matches are then used to make a guess at the rotation from the catalogue frame to the camera frame, and the usual star ID routines (id_stars()) are called using the guess as the a priori attitude knowledge. The number of identified stars found using the usual methods is then compared with the best number of stars found so far, and if more stars are found the rotation is kept as the best available. This is done using the settings already provided to the class, so you need to ensure that you have a good setup even when solving the lost in space problem. This continues until all possible hash code pairs have been considered, or until a pair produces an a priori attitude that successfully identifies half of the queried stars from the catalogue in the FOV of the camera and one quarter of the possible stars.

The result is saved to the a_priori_rotation_cat2camera attribute and then the usual star ID routines are run again to finish off the identification.
- Parameters:
epoch (datetime | Real) – the epoch of the image
temperature (Real) – the temperature of the camera at the time the image was captured
image_number (int) – The number of the image being processed
- Returns:
The boolean index into the image points that met the original pairing criterion, and a second boolean index into the result from the previous boolean index that extracts the image points that were successfully matched in the RANSAC algorithms
- Return type:
Tuple[ndarray | None, ndarray | None]
- id_stars(epoch=datetime.datetime(2000, 1, 1, 0, 0), compute_weights=False, temperature=0, image_number=0, lost_in_space=False)[source]¶
This method attempts to match the image points of interest with catalogue stars.
The id_stars() method is the primary interface of the StarID class. It performs all the tasks of querying the star catalogue, performing the initial pairing using a nearest neighbor search, refining the initial pairings with the second_closest_check and unique_check options, and passing the refined pairings to the RANSAC routines. The matched and unmatched catalogue stars and image points of interest are stored in the appropriate attributes.

This method also returns a boolean index into the image points of interest vector, which extracts the image points that met the initial match criterion, and another boolean index into the image points of interest which extracts the image points of interest that were matched by the RANSAC algorithms. This can be used to select the appropriate metadata about catalogue stars or stars found in an image that isn't explicitly considered by this class (as is done in the StellarOpNav class), but if you do not have extra information you need to keep in sync, then you can ignore the output.

If requested, the formal uncertainties for the catalogue unit vectors and pixel locations are computed and stored in the queried_weights_inertial and queried_weights_picture attributes. These are computed by transforming the formal uncertainty on the right ascension, declination, and proper motion specified in the star catalogue into the proper frame.
- Parameters:
epoch (datetime | Real) – The new epoch to move the stars to using proper motion
compute_weights (bool) – a flag specifying whether to compute weights for the attitude estimation and calibration estimation.
temperature (Real) – The temperature of the camera at the time of the image being processed
image_number (int) – The number of the image being processed
lost_in_space (bool) – A flag specifying whether the lost in space algorithm needs to be used
- Returns:
The boolean index into the image points that met the original pairing criterion, and a second boolean index into the result from the previous boolean index that extracts the image points that were successfully matched in the RANSAC algorithms
- Return type:
Tuple[ndarray | None, ndarray | None]
- ransac(image_locs, catalogue_dirs, temperature, image_number)[source]¶
This method performs RANSAC on the image poi-catalogue location pairs.
The RANSAC algorithm is described below:

1. The pairs are randomly sampled for 4 star pairs.
2. The sample is used to estimate a new attitude for the image using the DavenportQMethod routines.
3. The new solved-for attitude is used to re-rotate and project the catalogue stars onto the image.
4. The new projections are compared with their matched image points and the number of inlier pairs (pairs whose distance is less than some ransac threshold) are counted.
5. The number of inliers is compared to the maximum number of inliers found by any sample to this point (set to 0 if this is the first sample) and:
   - if there are more inliers:
     - the maximum number of inliers is set to the number of inliers generated for this sample
     - the inliers for this sample are stored as correctly identified stars
     - the sum of the squares of the distances between the inlier pairs for this sample is stored
   - if there are an equivalent number of inliers to the previous maximum, then the sum of the squares of the distances between the pairs of inliers is compared to the previous sum of squares, and if the new sum of squares is less:
     - the maximum number of inliers is set to the number of inliers generated for this sample
     - the inliers are stored as correctly identified stars
     - the sum of the squares of the distances between the inlier pairs is stored

Steps 1-5 are repeated for a number of iterations, and the final set of stars stored as correctly identified stars become the identified stars for the image.
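The sample-scoring loop in steps 1-5 can be illustrated with a simplified, self-contained RANSAC that fits a 2D translation instead of an attitude. The model and names here are illustrative only, not GIANT's:

```python
import numpy as np

def ransac_translation(points_a, points_b, tolerance, iterations=100, seed=0):
    """Toy RANSAC mirroring the loop described above: points_b ~ points_a + t
    with some corrupted pairs. Each iteration fits t from a random sample of
    pairs, counts inliers within `tolerance`, and keeps the sample with the
    most inliers (ties broken by the smaller sum of squared residuals)."""
    rng = np.random.default_rng(seed)
    n = points_a.shape[1]
    best_count, best_ssq, best_inliers = 0, np.inf, None
    for _ in range(iterations):
        # steps 1-2: fit the model (here a translation) from a random sample
        sample = rng.choice(n, size=2, replace=False)
        t = np.mean(points_b[:, sample] - points_a[:, sample], axis=1, keepdims=True)
        # steps 3-4: apply the model to all pairs and count inliers
        residuals = points_b - (points_a + t)
        dist2 = np.sum(residuals**2, axis=0)
        inliers = dist2 < tolerance**2
        count, ssq = int(inliers.sum()), float(dist2[inliers].sum())
        # step 5: keep the best sample, breaking ties by sum of squares
        if count > best_count or (count == best_count and ssq < best_ssq):
            best_count, best_ssq, best_inliers = count, ssq, inliers
    return best_inliers
```

The same keep-the-best bookkeeping applies when the fitted model is an attitude and the residuals are reprojection distances in pixels.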
In order to use this method, the image_locs input and the catalogue_dirs input should represent the initial pairings between the image points found using image processing and the predicted catalogue star unit vectors in the inertial frame. The columns in these 2 arrays should represent the matched pairs (that is, column 10 of image_locs should correspond to column 10 of catalogue_dirs).

This method returns the paired image locations and catalogue directions from the best RANSAC iteration and the boolean index into the input arrays that extracts these values.

In general this method is not used directly by the user and is instead called as part of the id_stars() method.
- Parameters:
image_locs (ndarray) – The image points of interest that met the initial matching criteria as a 2xn array
catalogue_dirs (ndarray) – The catalogue inertial unit vectors that met the initial matching criteria, in the same order as the image_locs input, as a 3xn array
temperature (Real) – The temperature of the camera at the time of the image being processed
image_number (int) – The number of the image being processed
- Returns:
The matched image points of interest, the matched catalogue unit vectors, and the boolean index that represents these arrays
- Return type:
Tuple[ndarray, ndarray, ndarray]
- ransac_iter_test(iter_num)[source]¶
This performs a single ransac iteration.
See the ransac() method for more details.
- Parameters:
iter_num (int) – the iteration number for retrieving the combination to try
- Returns:
the number of inliers for this iteration, the image location inliers for this iteration, the catalogue direction inliers for this iteration, the boolean index for the inliers for this iteration, and the sum of the squares of the residuals for this iteration
- Return type:
Tuple[int, Sequence | ndarray | None, Sequence | ndarray | None, Sequence | ndarray | None, Sequence | ndarray | None]
- class giant.calibration.CalibrationEstimator[source]¶
Bases:
object
This abstract base class serves as the template for implementing a class for doing camera model estimation in GIANT.
Camera model estimation in GIANT is primarily handled by the Calibration class, which does the steps of extracting observed stars in an image, pairing the observed stars with a star catalogue, and then passing the observed star-catalogue star pairs to a subclass of this meta-class, which estimates an update to the camera model in place (the input camera model is modified, not a copy). In order for this to work, this ABC defines the minimum required interfaces that the Calibration class expects for an estimator.

The required interface that the Calibration class expects consists of a few readable/writeable properties and a couple of standard methods, as defined below. Beyond that the implementation is left to the user.

If you are just doing a typical calibration, then you probably need not worry about this ABC and can instead use one of the 2 concrete classes defined in this module, which work well in nearly all cases. If you do have a need to implement your own estimator, then you should subclass this ABC and study the concrete classes from this module for an example of what needs to be done.
Note
Because this is an ABC, you cannot create an instance of this class (it will raise a TypeError).
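The ABC pattern described here can be illustrated generically. These are stand-in names for a minimal example, not GIANT's actual interface:

```python
from abc import ABC, abstractmethod

class EstimatorTemplate(ABC):
    """A minimal stand-in for an estimator ABC: subclasses must provide the
    properties and methods the driver class expects."""

    @property
    @abstractmethod
    def successful(self) -> bool:
        """Read-only flag indicating whether the last fit succeeded."""

    @abstractmethod
    def estimate(self) -> None:
        """Update the model in place and set the success flag."""

class DummyEstimator(EstimatorTemplate):
    """A trivial concrete implementation satisfying the interface."""

    def __init__(self):
        self._successful = False

    @property
    def successful(self):
        return self._successful

    def estimate(self):
        # a real estimator would update the camera model in place here
        self._successful = True
```

Attempting EstimatorTemplate() raises TypeError, while DummyEstimator() instantiates normally because it implements every abstract member.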
The camera model that is being estimated.
Typically this should be a subclass of CameraModel.

This should be a read/write property.
- abstract property successful: bool¶
A boolean flag indicating whether the fit was successful or not.
If the fit was successful this should return True, and False otherwise.

This should be a read-only property.
- abstract property weighted_estimation: bool¶
A boolean flag specifying whether to do weighted estimation.
If set to True, the estimator should use the provided measurement weights in measurement_covariance during the estimation process. If set to False, then no measurement weights should be considered.

This should be a read/write property.
- abstract property measurement_covariance: Sequence | ndarray | Real | None¶
A square numpy array containing the covariance matrix for the measurements.
If weighted_estimation is set to True then this property will contain the measurement covariance matrix as a square, full rank, numpy array. If weighted_estimation is set to False then this property may be None and should be ignored.
This should be a read/write property.
- abstract property a_priori_state_covariance: Sequence | ndarray | None¶
A square numpy array containing the covariance matrix for the a priori estimate of the state vector.
This is only considered if weighted_estimation is set to True and if CameraModel.use_a_priori is set to True; otherwise it is ignored. If both are set to True then this should be set to a square, full rank, lxl numpy array, where l=len(model.state_vector), containing the covariance matrix for the a priori state vector. The order of the parameters in the state vector can be determined from CameraModel.get_state_labels().
This should be a read/write property.
- abstract property measurements: Sequence | ndarray | None¶
A 2xn numpy array of the observed pixel locations for stars across all images.
Each column of this array will correspond to the same column of the camera_frame_directions concatenated down the last axis. (That is measurements[:, i] <-> np.concatenate(camera_frame_directions, axis=-1)[:, i].)
This will always be set before a call to estimate().
This should be a read/write property.
- abstract property camera_frame_directions: List[ndarray | List[List]] | None¶
A length m list of unit vectors in the camera frame as numpy arrays for m images corresponding to the measurements attribute.
Each element of this list corresponds to a unique image that is being considered for estimation and the corresponding element in the temperatures list. Each column of this concatenated array will correspond to the same column of the measurements array. (That is np.concatenate(camera_frame_directions, axis=-1)[:, i] <-> measurements[:, i].)
Any images for which no stars were identified (for any number of reasons) will have a list of empty arrays in the corresponding element of this list (that is camera_frame_directions[i] == [[], [], []] where i is an image with no measurements identified). These will be automatically dropped by numpy's concatenate, but are included to notify the user which temperatures to use.
This will always be set before a call to estimate().
This should be a read/write property.
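The column correspondence, and the automatic dropping of images with no identified stars, can be demonstrated with plain numpy (the vectors below are made-up illustrative data):

```python
import numpy as np

# hypothetical data: image 0 has two stars, image 1 has none, image 2 has one
camera_frame_directions = [
    np.array([[0.0, 0.1],
              [0.0, 0.0],
              [1.0, 0.99]]),  # 3x2 unit vectors from image 0
    [[], [], []],             # image 1: no stars identified (converts to shape 3x0)
    np.array([[0.2],
              [0.0],
              [0.98]]),       # 3x1 unit vector from image 2
]

# the empty 3x0 element is dropped automatically during concatenation,
# leaving one column per identified star across all images
stacked = np.concatenate(camera_frame_directions, axis=-1)  # shape (3, 3)
```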
- abstract property temperatures: List[Real] | None¶
A length m list of temperatures of the camera for each image being considered in estimation.
Each element of this list corresponds to a unique image that is being considered for estimation and the corresponding element in the camera_frame_directions list.
This will always be set before a call to estimate() (although sometimes it may be a list of all zeros if temperature data is not available for the camera).
This should be a read/write property.
- abstract property postfit_covariance: Sequence | ndarray | None¶
The post-fit state covariance matrix, taking into account the measurement covariance matrix (if applicable).
This returns the post-fit state covariance matrix after a call to estimate(). The covariance matrix will be ordered according to estimation_parameters, and if weighted_estimation is True it will take the measurement covariance matrix into account. If weighted_estimation is False, then this will return the post-fit state covariance matrix assuming no measurement weighting (that is, a measurement covariance matrix of the identity matrix). If estimate() has not been called yet then this will return None.
This is a read-only property.
- abstract property postfit_residuals: Sequence | ndarray | None¶
The post-fit observed-computed measurement residuals as a 2xn numpy array.
This returns the post-fit observed minus computed measurement residuals after a call to estimate(). If estimate() has not been called yet then this will return None.
This is a read-only property.
- abstract estimate()[source]¶
Estimates an updated camera model that better transforms the camera frame directions into pixel locations to minimize the residuals between the observed and the predicted star locations.
Typically, upon successful completion, the updated camera model is stored in the model attribute, successful should return True, and postfit_residuals and postfit_covariance should both be not None. If estimation is unsuccessful, then successful should be set to False; everything else will be ignored, so you can do whatever you want with it.
- Return type:
None
- abstract reset()[source]¶
This method resets all of the data attributes to their default values to prepare for another estimation.
This should reset the data attributes to their default values (typically None) to ensure that data from one estimation doesn't get mixed with data from a subsequent estimation. You may also choose to reset some other attributes depending on the implementation of the estimator.
- Return type:
None
- class giant.calibration.IterativeNonlinearLSTSQ(model=None, weighted_estimation=False, max_iter=20, residual_atol=1e-10, residual_rtol=1e-10, state_atol=1e-10, state_rtol=1e-10, measurements=None, camera_frame_directions=None, measurement_covariance=None, a_priori_state_covariance=None, temperatures=None)[source]¶
Bases:
CalibrationEstimator
This concrete estimator implements iterative non-linear least squares for estimating an updated camera model.
Iterative non-linear least squares estimation is done by estimating updates to the “state” vector (in this case the camera model parameters being updated) iteratively. At each step, the system is linearized about the current estimate of the state and the additive update is estimated. This iteration is repeated until convergence (or divergence) based on the pre/post update residuals and the update vector itself.
The state vector that is being estimated by this class is controlled by the CameraModel.estimation_parameters attribute of the provided camera model. This class does not actually use the CameraModel.estimation_parameters attribute directly, since it is handled by the CameraModel.compute_jacobian() and CameraModel.apply_update() methods of the provided camera model internally, but it is mentioned here to show how to control what exactly is being estimated.
Because this class linearizes about the current estimate of the state, it requires an initial guess for the camera model that is "close enough" to the actual model to ensure convergence. Defining "close enough" in any broad sense is impossible, but based on experience, using the manufacturer-defined specs for focal length/pixel pitch and assuming no distortion is generally "close enough", even for cameras with heavy distortion (star identification may require a better initial model than this anyway).
As this class converges the state estimate, it updates the supplied camera model in place; therefore, if you wish to keep a copy of the original camera model, you should manually create a copy of it before calling the estimate() method on this class.
In the estimate() method, convergence is checked on both the sum of squares of the residuals and the update vector for the state. That is, convergence is reached when either of
\begin{gather*} \left\|\mathbf{r}_{pre}^T\mathbf{r}_{pre} - \mathbf{r}_{post}^T\mathbf{r}_{post}\right\| \le(a_r+r_r\mathbf{r}_{pre}^T\mathbf{r}_{pre}) \\ \text{all}\left[\left\|\mathbf{u}\right\|\le(a_s+r_s\mathbf{s}_{pre})\right] \end{gather*}
is True. Here \(\mathbf{r}_{pre}\) is the nx1 vector of residuals before the update is applied, \(\mathbf{r}_{post}\) is the nx1 vector of residuals after the update is applied, \(a_r\) is the residual_atol absolute residual tolerance, \(r_r\) is the residual_rtol relative residual tolerance, \(\mathbf{u}\) is the update vector, \(\text{all}\) indicates that the contained expression is True for all elements, \(a_s\) is the state_atol absolute tolerance for the state vector, \(r_s\) is the state_rtol relative tolerance for the state vector, and \(\mathbf{s}_{pre}\) is the state vector before the update is applied. Divergence is only checked on the sum of squares of the residuals; that is, divergence is occurring when
\[\mathbf{r}_{pre}^T\mathbf{r}_{pre} < \mathbf{r}_{post}^T\mathbf{r}_{post}\]
where all quantities are as defined before. If a case is diverging, then a warning will be printed, the iteration will cease, and successful will be set to False.
Typically this class is not used directly by the user; instead it is used internally by the Calibration class, which handles data preparation for you. If you wish to use this class externally from the Calibration class, you must first set
- measurement_covariance if weighted_estimation is True
- a_priori_state_covariance if use_a_priori is set to True for the camera model
according to their documentation. Once those have been set, you can perform the estimation using estimate(), which will iterate until convergence (or divergence). If the fit successfully converges, successful will be set to True, and the attributes postfit_covariance and postfit_residuals will both return numpy arrays instead of None. If you wish to use the same instance of this class to do another estimation, you should call reset() before setting the new data to ensure that data is not mixed between estimation runs and all flags are set correctly.
- Parameters:
model (CameraModel | None) – The camera model instance to be estimated, set with an initial guess of the state.
weighted_estimation (bool) – A boolean flag specifying whether to do weighted estimation. True indicates that the measurement weights (and the a priori state covariance, if applicable) should be used in the estimation.
max_iter (int) – The maximum number of iteration steps to attempt to reach convergence. If convergence has not been reached after attempting max_iter steps, a warning will be raised that the model has not converged and successful will be set to False.
residual_atol (float) – The absolute convergence tolerance criteria for the sum of squares of the residuals
residual_rtol (float) – The relative convergence tolerance criteria for the sum of squares of the residuals
state_atol (float) – The absolute convergence tolerance criteria for the elements of the state vector
state_rtol (float) – The relative convergence tolerance criteria for the elements of the state vector
measurements (Sequence | ndarray | None) – A 2xn numpy array of measurement pixel locations to be fit to
camera_frame_directions (List[ndarray | List[List]] | None) – A length m list of 3xj numpy arrays or a 3x1 list of empty lists, where m is the number of unique images the data comes from (and is the same length as temperatures) and j is the number of measurements from each image. A list of empty lists indicates that no measurements were identified for the corresponding image.
measurement_covariance (Sequence | ndarray | Real | None) – An optional nxn numpy array containing the covariance matrix for the ravelled measurement vector (in fortran order such that the ravelled measurement vector is [x1, y1, x2, y2, … xk, yk] where k=n//2)
a_priori_state_covariance (Sequence | ndarray | None) – An optional lxl numpy array containing the covariance matrix for the a priori estimate of the state, where l is the number of parameters in the state vector. This is used only if CameraModel.use_a_priori is set to True. The length of the state vector can be determined by len(CameraModel.state_vector).
temperatures (List[Real] | None) – A length m list of floats containing the camera temperature at the time of each corresponding image. These may be used by the CameraModel to perform temperature dependent estimation of parameters like the focal length, depending on what is set for CameraModel.estimation_parameters.
- max_iter: int¶
The maximum number of iteration steps to attempt for convergence
- residual_atol: float¶
The absolute tolerance for the sum of square of the residuals to indicate convergence
- residual_rtol: float¶
The relative tolerance for the sum of square of the residuals to indicate convergence
- state_atol: float¶
The absolute tolerance for the state vector to indicate convergence
- state_rtol: float¶
The relative tolerance for the state vector to indicate convergence
- property model: CameraModel | None¶
The camera model that is being estimated.
Typically this should be a subclass of CameraModel.
- property successful: bool¶
A boolean flag indicating whether the fit was successful or not.
If the fit was successful this should return True, and False otherwise. A fit is defined as successful if the convergence criteria were reached before the maximum number of iterations. Divergence and non-convergence are both considered an unsuccessful fit, resulting in this being set to False.
- property weighted_estimation: bool¶
A boolean flag specifying whether to do weighted estimation.
If set to True, the estimator will use the provided measurement weights in measurement_covariance during the estimation process. If set to False, then no measurement weights will be considered.
- property measurement_covariance: Sequence | ndarray | Real | None¶
A square numpy array containing the covariance matrix for the measurements or a scalar containing the variance for all of the measurements.
If weighted_estimation is set to True then this property will contain the measurement covariance matrix as a square, full rank, numpy array, or the measurement variance as a scalar float. If weighted_estimation is set to False then this property may be None and will be ignored.
If specified as a scalar, it is treated as the variance for each measurement (that is, cov = v*I(n,n) where cov is the covariance matrix, v is the specified scalar variance, and I(n,n) is an nxn identity matrix) in a memory efficient way.
- Raises:
ValueError – When attempting to set an array that does not have the proper shape for the measurements vector
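The memory saving can be illustrated with the weighted normal equations: when the covariance is a scalar variance v times the identity, the term J^T cov^-1 J collapses to (J^T J)/v, so the nxn identity matrix never has to be built. A numpy sketch with made-up numbers (not GIANT's internal code):

```python
import numpy as np

rng = np.random.default_rng(0)
jacobian = rng.normal(size=(10, 3))  # hypothetical measurement Jacobian (n=10, l=3)
v = 0.25                             # scalar measurement variance

# explicit form: build cov = v * I(n, n) and invert it
cov = v * np.eye(10)
explicit = jacobian.T @ np.linalg.inv(cov) @ jacobian

# memory efficient form: fold the scalar variance in directly, no nxn matrix needed
efficient = jacobian.T @ jacobian / v
```

Both forms give the same normal equations matrix, but the second never allocates the nxn covariance.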
- property a_priori_state_covariance: Sequence | ndarray | None¶
A square numpy array containing the covariance matrix for the a priori estimate of the state vector.
This is only considered if weighted_estimation is set to True and if CameraModel.use_a_priori is set to True; otherwise it is ignored. If both are set to True then this should be set to a square, full rank, lxl numpy array, where l=len(model.state_vector), containing the covariance matrix for the a priori state vector. The order of the parameters in the state vector can be determined from CameraModel.get_state_labels().
- Raises:
ValueError – If the shape of the input matrix is not appropriate for the size of the state vector
- property measurements: Sequence | ndarray | None¶
A 2xn numpy array of the observed pixel locations for stars across all images.
Each column of this array corresponds to the same column of the camera_frame_directions concatenated down the last axis. (That is measurements[:, i] <-> np.concatenate(camera_frame_directions, axis=-1)[:, i].)
This must always be set before a call to estimate().
- property camera_frame_directions: List[ndarray | List[List]] | None¶
A length m list of unit vectors in the camera frame as numpy arrays for m images corresponding to the measurements attribute.
Each element of this list corresponds to a unique image that is being considered for estimation and the corresponding element in the temperatures list. Each column of this concatenated array will correspond to the same column of the measurements array. (That is np.concatenate(camera_frame_directions, axis=-1)[:, i] <-> measurements[:, i].)
Any images for which no stars were identified (for any number of reasons) will have a list of empty arrays in the corresponding element of this list (that is camera_frame_directions[i] == [[], [], []] where i is an image with no measurements identified). These will be automatically dropped by numpy's concatenate, but are included to notify the user which temperatures/misalignments to use.
This must always be set before a call to estimate().
- property temperatures: List[Real] | None¶
A length m list of temperatures of the camera for each image being considered in estimation.
Each element of this list corresponds to a unique image that is being considered for estimation and the corresponding element in the camera_frame_directions list.
This must always be set before a call to estimate() (although sometimes it may be a list of all zeros if temperature data is not available for the camera).
- property postfit_covariance: Sequence | ndarray | None¶
The post-fit state covariance matrix, taking into account the measurement covariance matrix (if applicable).
This returns the post-fit state covariance matrix after a call to estimate(). The covariance matrix will be ordered according to estimation_parameters, and if weighted_estimation is True it will take the measurement covariance matrix into account. If weighted_estimation is False, then this will return the post-fit state covariance matrix assuming no measurement weighting (that is, a measurement covariance matrix of the identity matrix). If estimate() has not been called yet or the fit was unsuccessful then this will return None.
- property postfit_residuals: Sequence | ndarray | None¶
The post-fit observed-computed measurement residuals as a 2xn numpy array.
This returns the post-fit observed minus computed measurement residuals after a call to estimate(). If estimate() has not been called yet or the fit was unsuccessful then this will return None.
- reset()[source]¶
This method resets all of the data attributes to their default values to prepare for another estimation.
Specifically, the data attributes are reset to their default values (typically None). This also clears the caches for some internally used attributes.
- Return type:
None
- compute_residuals(model=None)[source]¶
This method computes the observed minus computed residuals for the current model (or an input model).
The residuals are returned as a 2xn numpy array where n is the number of stars observed with units of pixels.
The computed values are determined by calls to model.project_onto_image for the camera_frame_directions for each image.
- Parameters:
model (CameraModel | None) – An optional model to compute the residuals with. If None, then model will be used.
- Returns:
The observed minus computed residuals as a numpy array
- Return type:
ndarray
- estimate()[source]¶
Estimates an updated camera model that better transforms the camera frame directions into pixel locations to minimize the residuals between the observed and the predicted star locations.
Upon successful completion, the updated camera model is stored in the model attribute, successful will return True, and postfit_residuals and postfit_covariance will both be not None. If estimation is unsuccessful, then successful will be set to False.
The estimation is done using nonlinear iterative least squares, as discussed in the class documentation (IterativeNonlinearLSTSQ).
- Raises:
ValueError – if model, measurements, or camera_frame_directions are None.
- Return type:
None
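The linearize/update/check-convergence loop described in the class documentation can be sketched on a toy one-parameter problem, fitting y = exp(a*x). This is an illustrative stand-in using the documented residual/state convergence tests, not GIANT's implementation:

```python
import numpy as np

def gauss_newton(x, y, a0, max_iter=20, residual_atol=1e-10, residual_rtol=1e-10,
                 state_atol=1e-10, state_rtol=1e-10):
    """Toy iterative nonlinear least squares for the model y = exp(a*x)."""
    a = np.array([a0], dtype=float)
    for _ in range(max_iter):
        r_pre = y - np.exp(a[0] * x)              # pre-update residuals
        jac = (x * np.exp(a[0] * x))[:, None]     # d(model)/da as an nx1 Jacobian
        update, *_ = np.linalg.lstsq(jac, r_pre, rcond=None)
        a_new = a + update
        r_post = y - np.exp(a_new[0] * x)         # post-update residuals
        pre_ss, post_ss = r_pre @ r_pre, r_post @ r_post
        # converged if either the residual sum of squares or the state stops changing
        if (abs(pre_ss - post_ss) <= residual_atol + residual_rtol * pre_ss
                or np.all(np.abs(update) <= state_atol + state_rtol * np.abs(a))):
            return a_new, True
        if post_ss > pre_ss:                      # diverging: residuals grew
            return a, False
        a = a_new
    return a, False

x = np.linspace(0.0, 1.0, 20)
y = np.exp(0.5 * x)                               # noise-free truth with a = 0.5
a_est, successful = gauss_newton(x, y, a0=0.3)
```

Starting from a "close enough" guess of 0.3, the iteration converges to the true value of 0.5 well within the iteration limit.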
- class giant.calibration.LMAEstimator(model=None, weighted_estimation=False, max_iter=20, residual_atol=1e-10, residual_rtol=1e-10, state_atol=1e-10, state_rtol=1e-10, max_divergence_steps=5, measurements=None, camera_frame_directions=None, measurement_covariance=None, a_priori_state_covariance=None, temperatures=None)[source]¶
Bases:
IterativeNonlinearLSTSQ
This implements a Levenberg-Marquardt Algorithm estimator, which is analogous to a damped iterative non-linear least squares.
This class is nearly exactly the same as IterativeNonlinearLSTSQ, except that it adds damping to the update step of the iterative non-linear least squares algorithm and allows a few diverging steps in a row, during which the damping parameter is updated, before failing. The number of diverging steps that are allowed is controlled by the max_divergence_steps setting. This represents the only difference from the IterativeNonlinearLSTSQ interface from the user's perspective.
In general, this algorithm will arrive at the same answer as the IterativeNonlinearLSTSQ algorithm, but at a slower convergence rate. In a few cases, however, this estimator can be more robust to initial guess errors, achieving convergence when the standard iterative nonlinear least squares diverges. Therefore, it is likely best to start with the IterativeNonlinearLSTSQ class and only switch to this one if you experience convergence issues.
The implementation of the LMA in this class is inspired by https://link.springer.com/article/10.1007/s40295-016-0091-3
- Parameters:
model (CameraModel | None) – The camera model instance to be estimated, set with an initial guess of the state.
weighted_estimation (bool) – A boolean flag specifying whether to do weighted estimation. True indicates that the measurement weights (and the a priori state covariance, if applicable) should be used in the estimation.
max_iter (int) – The maximum number of iteration steps to attempt to reach convergence. If convergence has not been reached after attempting max_iter steps, a warning will be raised that the model has not converged and successful will be set to False.
residual_atol (float) – The absolute convergence tolerance criteria for the sum of squares of the residuals
residual_rtol (float) – The relative convergence tolerance criteria for the sum of squares of the residuals
state_atol (float) – The absolute convergence tolerance criteria for the elements of the state vector
state_rtol (float) – The relative convergence tolerance criteria for the elements of the state vector
max_divergence_steps (int) – The maximum number of steps in a row that can diverge before breaking iteration
measurements (Sequence | ndarray | None) – A 2xn numpy array of measurement pixel locations to be fit to
camera_frame_directions (List[ndarray | List[List]] | None) – A length m list of 3xj numpy arrays or a 3x1 list of empty lists, where m is the number of unique images the data comes from (and is the same length as temperatures) and j is the number of measurements from each image. A list of empty lists indicates that no measurements were identified for the corresponding image.
measurement_covariance (Sequence | ndarray | Real | None) – An optional nxn numpy array containing the covariance matrix for the ravelled measurement vector (in fortran order such that the ravelled measurement vector is [x1, y1, x2, y2, … xk, yk] where k=n//2)
a_priori_state_covariance (Sequence | ndarray | None) – An optional lxl numpy array containing the covariance matrix for the a priori estimate of the state, where l is the number of parameters in the state vector. This is used only if CameraModel.use_a_priori is set to True. The length of the state vector can be determined by len(CameraModel.state_vector).
temperatures (List[Real] | None) – A length m list of floats containing the camera temperature at the time of each corresponding image. These may be used by the CameraModel to perform temperature dependent estimation of parameters like the focal length, depending on what is set for CameraModel.estimation_parameters.
- max_iter: int¶
The maximum number of iteration steps to attempt for convergence
- residual_atol: float¶
The absolute tolerance for the sum of square of the residuals to indicate convergence
- residual_rtol: float¶
The relative tolerance for the sum of square of the residuals to indicate convergence
- state_atol: float¶
The absolute tolerance for the state vector to indicate convergence
- state_rtol: float¶
The relative tolerance for the state vector to indicate convergence
- max_divergence_steps: int¶
The maximum number of steps in a row that can diverge before breaking iteration
- estimate()[source]¶
This method estimates the postfit residuals based on the model, weight matrix, LMA coefficient, etc. Convergence is achieved once the standard deviation of the computed residuals is less than the absolute tolerance, or the difference between the prefit and postfit residuals is less than the relative tolerance.
- Return type:
None
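The damping and bounded-divergence behavior can be sketched on the same kind of toy exponential fit (an illustrative stand-in, not the GIANT estimator):

```python
import numpy as np

def lma(x, y, a0, max_iter=50, max_divergence_steps=5, tol=1e-12):
    """Toy Levenberg-Marquardt fit of y = exp(a*x): a damped Gauss-Newton step
    that tolerates a limited run of diverging (rejected) steps."""
    a = float(a0)
    lam = 1e-3                                    # damping parameter
    diverging = 0
    for _ in range(max_iter):
        r = y - np.exp(a * x)
        jac = x * np.exp(a * x)                   # one-parameter Jacobian
        update = (jac @ r) / ((jac @ jac) * (1.0 + lam))  # damped normal equation
        if abs(update) <= tol:                    # converged: step is negligible
            return a + update, True
        r_new = y - np.exp((a + update) * x)
        if r_new @ r_new > r @ r:                 # diverging step: reject it,
            diverging += 1                        # raise the damping, and retry
            lam *= 10.0
            if diverging > max_divergence_steps:
                return a, False
            continue
        diverging = 0                             # accepted step: relax the damping
        lam = max(lam / 10.0, 1e-12)
        a += update
    return a, False

x = np.linspace(0.0, 1.0, 20)
y = np.exp(0.5 * x)
a_lma, ok = lma(x, y, a0=0.0)
```

With light damping the step behaves like Gauss-Newton; when a step would increase the residual sum of squares, it is rejected and the damping is raised instead of immediately declaring failure.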
- class giant.calibration.StaticAlignmentEstimator(frame1_unit_vecs=None, frame2_unit_vecs=None)[source]¶
Bases:
object
This class estimates a static attitude alignment between one frame and another.
The static alignment is estimated using Davenport's Q-Method solution to Wahba's problem, via the DavenportQMethod class. To use, simply specify the unit vectors from the base frame and the unit vectors from the target frame, and then call estimate(). The estimated alignment from frame 1 to frame 2 will be stored as a Rotation object in alignment.
In general this class should not be used directly; instead you should use the Calibration class and its estimate_static_alignment() method, which will handle the set up and tear down of this class for you.
For more details about the algorithm used, see the DavenportQMethod documentation.
- Parameters:
frame1_unit_vecs (Sequence | ndarray | None) – Unit vectors in the base frame as a 3xn array where each column is a unit vector.
frame2_unit_vecs (Sequence | ndarray | None) – Unit vectors in the destination (camera) frame as a 3xn array where each column is a unit vector
- frame1_unit_vecs: Sequence | ndarray | None¶
The base frame unit vectors.
Each column of this 3xn matrix should correspond to the same column in the frame2_unit_vecs attribute.
Typically this data should come from multiple images to ensure a good alignment can be estimated over time.
- frame2_unit_vecs: Sequence | ndarray | None¶
The target frame unit vectors.
Each column of this 3xn matrix should correspond to the same column in the frame1_unit_vecs attribute.
Typically this data should come from multiple images to ensure a good alignment can be estimated over time.
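Davenport's Q-Method is compact enough to sketch with numpy. This is an illustrative implementation of the published algorithm (build the attitude profile matrix, form the symmetric 4x4 K matrix, and take the eigenvector of its largest eigenvalue as the quaternion), not the GIANT DavenportQMethod class:

```python
import numpy as np

def davenport_q_method(frame1_unit_vecs, frame2_unit_vecs, weights=None):
    """Return the rotation matrix taking frame 1 vectors into frame 2."""
    a = np.asarray(frame1_unit_vecs, dtype=float)   # 3xn base frame vectors
    b = np.asarray(frame2_unit_vecs, dtype=float)   # 3xn target frame vectors
    w = np.ones(a.shape[1]) if weights is None else np.asarray(weights, dtype=float)
    profile = (w * b) @ a.T                         # B = sum_i w_i b_i a_i^T
    sigma = np.trace(profile)
    z = np.array([profile[1, 2] - profile[2, 1],
                  profile[2, 0] - profile[0, 2],
                  profile[0, 1] - profile[1, 0]])
    k = np.zeros((4, 4))                            # Davenport's symmetric K matrix
    k[:3, :3] = profile + profile.T - sigma * np.eye(3)
    k[:3, 3] = k[3, :3] = z
    k[3, 3] = sigma
    q = np.linalg.eigh(k)[1][:, -1]                 # eigenvector of the largest eigenvalue
    qv, qs = q[:3], q[3]                            # vector and scalar quaternion parts
    cross = np.array([[0.0, -qv[2], qv[1]],
                      [qv[2], 0.0, -qv[0]],
                      [-qv[1], qv[0], 0.0]])
    return (qs**2 - qv @ qv) * np.eye(3) + 2.0 * np.outer(qv, qv) - 2.0 * qs * cross
```

Given two frames related by a known rotation, the method recovers it; note that q and -q produce the same rotation matrix, so the sign ambiguity of the eigenvector is harmless.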
- class giant.calibration.TemperatureDependentAlignmentEstimator(frame_1_rotations=None, frame_2_rotations=None, temperatures=None, order='xyz')[source]¶
Bases:
object
This class estimates a temperature dependent attitude alignment between one frame and another.
The temperature dependent alignment is found by fitting linear temperature dependent Euler angles (or Tait-Bryan angles) to transform from the first frame to the second. That is
\[\mathbf{T}_B=\mathbf{R}_m(\theta_m(t))\mathbf{R}_n(\theta_n(t))\mathbf{R}_p(\theta_p(t))\mathbf{T}_A\]
where \(\mathbf{T}_B\) is the target frame, \(\mathbf{R}_i\) is the rotation matrix about the \(i^{th}\) axis, \(\mathbf{T}_A\) is the base frame, and \(\theta_i(t)\) are the linear angles.
This fit is done in a least squares sense by computing the values for \(\theta_i(t)\) across a range of temperatures (by estimating the attitude for multiple single images) and then solving the system
\[\begin{split}\left[\begin{array}{cc} 1 & t_1 \\ 1 & t_2 \\ \vdots & \vdots \\ 1 & t_n \end{array}\right] \left[\begin{array}{ccc} \theta_{m0} & \theta_{n0} & \theta_{p0} \\ \theta_{m1} & \theta_{n1} & \theta_{p1}\end{array}\right] = \left[\begin{array}{ccc}\vphantom{\theta}^0\theta_m &\vphantom{\theta}^0\theta_n &\vphantom{\theta}^0\theta_p\\ \vdots & \vdots & \vdots \\ \vphantom{\theta}^k\theta_m &\vphantom{\theta}^k\theta_n &\vphantom{\theta}^k\theta_p\end{array}\right]\end{split}\]where \(\vphantom{\theta}^k\theta_i\) is the measured Euler/Tait-Bryan angle for the \(k^{th}\) image.
In general a user should not use this class directly; instead the Calibration.estimate_temperature_dependent_alignment() method should be used, which handles the proper setup.
- Parameters:
frame_1_rotations (Iterable[Rotation] | None) – The rotation objects from the inertial frame to the base frame
frame_2_rotations (Iterable[Rotation] | None) – The rotation objects from the inertial frame to the target frame
temperatures (List[Real] | None) – The temperature of the camera corresponding to the times the input rotations were estimated.
order (str) – The order of the rotations to perform according to the convention in
quaternion_to_euler()
- frame_1_rotations: Iterable[Rotation] | None¶
An iterable containing the rotations from the inertial frame to the base frame for each image under consideration.
- frame_2_rotations: Iterable[Rotation] | None¶
An iterable containing the rotations from the inertial frame to the target frame for each image under consideration.
- temperatures: List[Real] | None¶
A list containing the temperatures of the camera for each image under consideration
- order: str¶
The order of the Euler angles according to the convention in
quaternion_to_euler()
- angle_m_offset: float | None¶
The estimated constant angle offset for the m rotation axis in radians.
This will be None until estimate() is called.
- angle_m_slope: float | None¶
The estimated angle temperature slope for the m rotation axis in radians.
This will be None until estimate() is called.
- angle_n_offset: float | None¶
The estimated constant angle offset for the n rotation axis in radians.
This will be None until estimate() is called.
- angle_n_slope: float | None¶
The estimated angle temperature slope for the n rotation axis in radians.
This will be None until estimate() is called.
- angle_p_offset: float | None¶
The estimated constant angle offset for the p rotation axis in radians.
This will be None until estimate() is called.
- angle_p_slope: float | None¶
The estimated angle temperature slope for the p rotation axis in radians.
This will be None until estimate() is called.
- estimate()[source]¶
This method estimates the linear temperature dependent alignment as 3 linear temperature dependent Euler angles according to order.
This is done by first converting the relative rotation from the base frame to the target frame into Euler angles for each image under consideration, and then performing a linear least squares estimate of the temperature dependence. The resulting fit is stored in the angle_..._... attributes in units of radians.
- Raises:
ValueError – if any of temperatures, frame_1_rotations, or frame_2_rotations are still None
- Return type:
None
Modules
This module provides a subclass of the
This module provides the ability to estimate geometric camera models as well as static and temperature dependent attitude alignment based off of observations of stars in monocular images.
This module provides utilities for visually inspecting calibration and alignment results.