LimbScanner

class giant.image_processing.limb_scanning.LimbScanner(scene, camera, options)[source]

This class is used to extract limbs from an image and pair them to surface points on the target.

This is done by first determining the surface points on the limb based on the shape model, the scan center vector, and the sun direction vector. Once these surface points have been identified (using Shape.find_limbs()), they are projected onto the image to generate the predicted limb locations in the image. The image is then sampled along the scan line through each predicted limb location and the scan center location in the image, using the image_interpolator input, to get the observed intensity line. In addition, the scan line is rendered using ray tracing to generate the predicted intensity line. The predicted and extracted intensity lines are then compared using cross correlation to find the shift that best aligns them. This shift is applied to the predicted limb locations in the image along the scan line to get the extracted limb locations in the image. This is all handled by the extract_limbs() method.
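The shift-finding step described above can be sketched with plain numpy (the function name and synthetic intensity lines here are illustrative, not part of GIANT's API): the observed intensity line is cross correlated with the predicted one, and the location of the correlation peak gives the shift along the scan line.

```python
import numpy as np


def find_shift(predicted, extracted):
    """Return the sample shift that best aligns extracted with predicted."""
    # remove the means so the correlation peak reflects the edge alignment
    p = predicted - predicted.mean()
    e = extracted - extracted.mean()
    corr = np.correlate(e, p, mode="full")  # 1D cross correlation line
    # in full mode, index (p.size - 1) corresponds to zero lag
    return int(np.argmax(corr)) - (predicted.size - 1)


x = np.arange(100)
predicted = (x > 50).astype(float)  # synthetic dark-to-bright limb edge
extracted = (x > 57).astype(float)  # same edge observed 7 samples later

shift = find_shift(predicted, extracted)  # recovers the 7 sample offset
```

In GIANT the peak of the correlation line is additionally located to subpixel accuracy with a peak finder, rather than with a plain argmax as in this sketch.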

There are a few tuning options for this class. The first collection affects the scan lines that are used to extract the limb locations from the image. The number_of_scan_lines option sets the number of generated scan lines and directly corresponds to the number of limb points that will be extracted from the image. The scan_range attribute sets the angular extent about the sun direction vector over which these scan lines will be evenly distributed. Finally, the number_of_sample_points option specifies how many samples to take along each scan line for both the extracted and predicted intensity lines and roughly determines how accurate the resulting limb locations will be. (Generally, a higher number leads to higher accuracy, although this is also limited by the resolution of the image and of the shape model itself. A higher number will also make things take longer.)

In addition to the control over the scan lines, you can adjust the brdf, which is used to generate the predicted intensity lines (although this will generally not make much difference), and you can change which peak finder is used to find the subpixel peaks of the correlation lines.
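A common form of subpixel peak finder (sketched here with plain numpy; this is an illustrative example, not necessarily GIANT's implementation) fits a parabola through the maximum correlation sample and its two neighbors and returns the vertex of that parabola:

```python
import numpy as np


def parabolic_peak(line):
    """Subpixel peak of a 1D line via a parabola fit through the
    maximum sample and its two immediate neighbors."""
    i = int(np.argmax(line))
    if i == 0 or i == line.size - 1:
        return float(i)  # no neighbor on one side; fall back to the integer peak
    y0, y1, y2 = line[i - 1], line[i], line[i + 1]
    # vertex of the parabola through (-1, y0), (0, y1), (1, y2)
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)


# a smooth synthetic peak whose true maximum lies at x = 30.25
x = np.arange(64, dtype=float)
line = np.exp(-0.5 * ((x - 30.25) / 3.0) ** 2)

peak = parabolic_peak(line)  # close to 30.25, despite integer sampling
```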

This technique requires decent a priori knowledge of the relative state between the target and the camera for it to work. At minimum, it requires that the scan center fall inside both the observed target location in the image and the target shape model placed at the current relative position in the scene. If this isn't guaranteed by your knowledge, then you can use something like the moment_algorithm to correct the gross errors in your a priori knowledge, as is done by LimbMatching.

Generally you will not use this class directly, as it is used by the LimbMatching class. If you want to use it for some other purpose, however, simply provide the required initialization parameters, then use extract_limbs() to extract the limbs from the image.

Parameters:
  • scene (Scene) – The scene containing the target(s) and the light source

  • camera (Camera) – The camera containing the camera model

  • options (None | LimbScannerOptions) – The options structure to configure the class with

scene: Scene

The scene containing the target(s) and the light source

camera: Camera

The camera containing the camera model

predicted_illums: NONEARRAY

The predicted intensity lines from rendering the scan lines.

This will be a number_of_scan_lines by number_of_sample_points 2d array where each row is a scan line.

This will be None until extract_limbs() is called

extracted_illums: NONEARRAY

The extracted intensity lines from sampling the image.

This will be a number_of_scan_lines by number_of_sample_points 2d array where each row is a scan line.

This will be None until extract_limbs() is called

correlation_lines: NONEARRAY

The correlation lines resulting from doing 1D cross correlation between the predicted and extracted scan lines.

This will be a number_of_scan_lines by number_of_sample_points 2d array where each row is a correlation line.

This will be None until extract_limbs() is called

correlation_peaks: NONEARRAY

The peaks of the correlation lines.

This will be a number_of_scan_lines length 1d array where each element is the peak of the corresponding correlation line.

This will be None until extract_limbs() is called

predict_limbs(scan_center, line_of_sight_sun, target, camera_temperature)[source]

Predict the limb locations for a given target in the camera frame.

This is done by

  1. Get the angle between the illumination vector and the x axis of the image

  2. Generate number_of_scan_lines scan angles evenly distributed between the sun angle - scan_range/2 and the sun angle + scan_range/2

  3. Convert the image scan line directions into directions in the camera frame

  4. Use Shape.find_limbs() to find the limbs of the target given the scan center and the scan directions in the camera frame

The limbs will be returned as a 3xn array in the camera frame.

This method is automatically called by extract_limbs() and will almost never be used directly; however, it is exposed for the adventurous types.
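Steps 1 and 2 above can be sketched with plain numpy (all names and values here are illustrative, not GIANT's API): the sun angle in the image is recovered with an arctangent, and the scan angles are spread evenly across scan_range about it.

```python
import numpy as np

# hypothetical sun line of sight in the image (pixels)
line_of_sight_sun = np.array([3.0, 4.0])
# step 1: angle between the illumination vector and the image x axis
sun_angle = np.arctan2(line_of_sight_sun[1], line_of_sight_sun[0])

number_of_scan_lines = 5
scan_range = np.pi / 2  # total angular extent (radians)

# step 2: scan angles evenly distributed between sun_angle - scan_range/2
# and sun_angle + scan_range/2
scan_angles = np.linspace(sun_angle - scan_range / 2,
                          sun_angle + scan_range / 2,
                          number_of_scan_lines)

# unit scan line directions in the image, one column per scan line
scan_dirs = np.stack([np.cos(scan_angles), np.sin(scan_angles)])
```

Note that the middle scan angle coincides with the sun angle itself, so the central scan line always runs along the illumination direction.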

Parameters:
  • scan_center (ndarray) – the beginning of the scan in the image (pixels)

  • line_of_sight_sun (ndarray) – the line of sight to the sun in the image (pixels)

  • target (SceneObject) – The target the limbs are to be predicted for

  • camera_temperature (float) – The temperature of the camera

Returns:

The predicted limb locations in the camera frame

Return type:

tuple[ndarray, ndarray, ndarray]

extract_limbs(image_interpolator, camera_temperature, target, scan_center, line_of_sight_sun)[source]

This method extracts limb points in an image and pairs them to surface points that likely generated them.

This is completed through the use of 1D cross correlation.

  1. The predicted limb locations in the image and the scan lines are determined using predict_limbs()

  2. Scan lines are generated along the scan directions and used to create extracted intensity lines, by sampling the image, and predicted intensity lines, by rendering the results of a ray trace along each scan line.

  3. The predicted and extracted intensity lines are cross correlated in one dimension using fft_correlator_1d()

  4. The peak of each correlation line is found using peak_finder.

  5. The peak of each correlation line is translated into a shift between the predicted and extracted limb locations in the image and used to compute the extracted limb location.

The resulting predicted surface points, predicted image points, observed image points, and scan directions in the camera frame are then all returned as numpy arrays.
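Step 5 above amounts to moving each predicted limb point along its scan direction by the shift recovered from the correlation peak. A minimal numpy sketch (the variable names, values, and the sample_spacing scale factor are illustrative assumptions, not GIANT's API):

```python
import numpy as np

predicted_limb = np.array([[210.0], [118.0]])  # 2x1 predicted limb (pixels)
scan_dir = np.array([[0.8], [0.6]])            # unit scan direction in the image
shift = 2.5                                    # correlation peak offset (samples)
sample_spacing = 1.0                           # assumed pixels per scan-line sample

# move the predicted limb along the scan direction by the recovered shift
observed_limb = predicted_limb + shift * sample_spacing * scan_dir
# observed_limb is [[212.0], [119.5]]
```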

Parameters:
  • image_interpolator (Callable[[ndarray[tuple[Any, ...], dtype[_ScalarT]]], ndarray[tuple[Any, ...], dtype[_ScalarT]]]) – A callable which returns the interpolated image values for provided [y,x] locations in the image

  • camera_temperature (float) – The temperature of the camera in degrees at the time the image was captured

  • target (SceneObject) – The target we are looking for limb points for

  • scan_center (ndarray) – The center where all of our scan lines will start

  • line_of_sight_sun (ndarray) – The line of sight of the sun in the image

Returns:

The predicted surface points in the camera frame as a 3xn array, the predicted limbs in the image as a 2xn array, the observed limbs in the image as a 2xn array, and the scan directions in the camera frame as a 3xn array of unit vectors where n is the number_of_scan_lines

Return type:

tuple[ndarray, ndarray, ndarray, ndarray]

Methods

__call__

Call self as a function.

predict_limbs

Predict the limb locations for a given target in the camera frame.

extract_limbs

This method extracts limb points in an image and pairs them to surface points that likely generated them.