SurfaceFeatureNavigation.pnp_solver

giant.relative_opnav.estimators.sfn.sfn_class:

SurfaceFeatureNavigation.pnp_solver(target_ind, image, image_ind)

This method attempts to solve for an update to the relative position/orientation of the target with respect to the image based on the observed feature locations in the image.

We solve the PnP problem here looking for an update (i.e. we are not trying to solve the lost-in-space problem). This is done for two reasons. First, it reflects what we are actually doing: in order to find the surface features in the image we already need reasonable a priori knowledge of the relative position/orientation between the camera and the target, so we might as well use it (it also makes things simpler). Second, the PnP problem can be fickle, especially when the observations are noisy (as they can sometimes be if our a priori knowledge was not the greatest). By solving for an update we force the solution to stay near the a priori knowledge, which helps to prevent outlandish solutions even when there are very few points available.

Because we are doing an update PnP solution, we simply solve a nonlinear least squares problem for the translation and rotation (cast as a 3 element rotation vector) that minimizes the error between the predicted feature locations in the image and the observed feature locations. This is generally robust and fast, especially since we can easily compute the analytic Jacobian for the least squares problem. Specifically, the problem we are solving is

\[\begin{split}\mathbf{r}_i = \mathbf{y}_i-f\left((\mathbf{I}-[\boldsymbol{\delta\theta}\times]) (\mathbf{x}_i+\mathbf{d})\right) \\ \min_{\mathbf{d},\boldsymbol{\delta\theta}}\left\{\sum_i \mathbf{r}_i^T\mathbf{r}_i\right\}\end{split}\]

where \(\mathbf{d}\) is the shift, \(\boldsymbol{\delta\theta}\) is the rotation vector, \(\mathbf{y}_i\) is the observed location of the \(i^{th}\) feature, \(\mathbf{x}_i\) is the location of the \(i^{th}\) feature center in the current a priori camera frame, \(\mathbf{I}\) is a 3x3 identity matrix, and \(f(\bullet)\) is the function which transforms points in the camera frame to points in the image (CameraModel.project_onto_image()).
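
For illustration, here is a minimal sketch of the residual above solved with scipy.optimize.least_squares (the real pnp_solver supplies its own analytic Jacobian and handles misalignment/temperature internally). The variables apriori_camera_points (3xN), observed_pixels (2xN), and camera_model are assumed to already exist; only CameraModel.project_onto_image() is taken from the documentation above.

    import numpy as np
    from scipy.optimize import least_squares

    def skew(vector):
        # cross-product (skew-symmetric) matrix of a length 3 vector
        return np.array([[0., -vector[2], vector[1]],
                         [vector[2], 0., -vector[0]],
                         [-vector[1], vector[0], 0.]])

    def residuals(state, camera_points, observed_pixels, camera_model):
        # state is [d (3), delta_theta (3)]; compute y_i - f((I - [dtheta x])(x_i + d))
        shift, dtheta = state[:3], state[3:]
        updated = (np.eye(3) - skew(dtheta)) @ (camera_points + shift.reshape(3, 1))
        predicted = camera_model.project_onto_image(updated)  # 2xN pixel locations
        return (observed_pixels - predicted).ravel()

    # solve for the 6 element update starting from "no change"
    # (assumed variables: apriori_camera_points, observed_pixels, camera_model)
    fit = least_squares(residuals, np.zeros(6),
                        args=(apriori_camera_points, observed_pixels, camera_model))
    shift, rotation_vector = fit.x[:3], fit.x[3:]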

In addition, we provide the option to use RANSAC when solving the PnP problem to attempt to reject outliers in the solution. This is very useful and should typically be used when possible (RANSAC will not be used when the number of observed features is less than 8). We implement a typical RANSAC algorithm: we choose a subsample of the available measurements, compute the PnP solution using the sample, and then compute the statistics/number of inliers from the full set of measurements given the solution from the sample, keeping the solution with the most inliers and the lowest residual statistics.
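
A rough sketch of this RANSAC outer loop is shown below. The helper solve_update_pnp (returning the shift and a 3x3 rotation matrix estimated from a subset of the measurements) is hypothetical and stands in for the least squares update described above; the sample size, iteration count, and inlier tolerance are placeholder values, and the real pnp_solver manages the sampling, thresholds, and statistics itself.

    import numpy as np

    def ransac_pnp(camera_points, observed_pixels, camera_model,
                   sample_size=6, iterations=100, inlier_tolerance=1.0):
        # solve_update_pnp is a hypothetical helper performing the least squares
        # update from a subset and returning (shift, 3x3 rotation matrix)
        rng = np.random.default_rng()
        n_points = camera_points.shape[1]
        best_inliers, best_solution = -1, (None, None)

        for _ in range(iterations):
            # solve the update PnP problem using a random subsample of the features
            sample = rng.choice(n_points, size=sample_size, replace=False)
            shift, rotation = solve_update_pnp(camera_points[:, sample],
                                               observed_pixels[:, sample], camera_model)

            # score the sample solution against the full set of measurements
            updated = rotation @ (camera_points + shift.reshape(3, 1))
            errors = np.linalg.norm(
                observed_pixels - camera_model.project_onto_image(updated), axis=0)
            n_inliers = int((errors < inlier_tolerance).sum())

            if n_inliers > best_inliers:
                best_inliers, best_solution = n_inliers, (shift, rotation)

        return best_solution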

To apply the results of this method to a scene object, call object.translate(shift) and then object.rotate(rotation), where object is the scene object, shift is the shift vector from this method, and rotation is the rotation from this method (note the order: translate first, then rotate).
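
For example, a typical (hypothetical) usage might look like the following, where sfn is a configured SurfaceFeatureNavigation instance and scene_object is the target's scene object:

    shift, rotation = sfn.pnp_solver(target_ind, image, image_ind)

    if shift is not None:
        scene_object.translate(shift)   # apply the shift first
        scene_object.rotate(rotation)   # then apply the rotation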

Parameters:
  • target_ind (int) – The index of the target that we are currently doing the PnP problem with respect to

  • image (OpNavImage) – The image that we are currently solving the PnP problem for

  • image_ind (int) – The index of the image that we are solving the PnP problem for. (This only really matters if you still have per-image misalignments in your camera model for some reason)

Returns:

The best fit shift as a length 3 numpy array in km and the rotation as a Rotation from the current camera frame to the new camera frame if successful, or None, None if a solution could not be found

Return type:

Tuple[ndarray, Rotation]