refine_pose

Skill class for ai.intrinsic.refine_pose skill.

Refines poses of world objects based on the image from the provided camera.

Each object pose in the belief world must be close to the pose of the object in the real world for the algorithm to work. The skill does not require any pose estimator training; the refinement is done using the 3D object model only.

Ideally, each object is clearly visible in the camera view and occupies a large amount of the image. The smaller an object appears in the camera view, the less accurate the result will be. The camera must be correctly calibrated (position in space as well as the intrinsic parameters).

The refinement works in a coarse-to-fine manner: it starts with a downsampled version of the image to compute the alignment, then proceeds to finer versions of the image until the original resolution is reached. This is controlled by the parameter refinement.num_iterations_per_level. For example, when specifying [20, 10, 5] on a [100, 100] image, the algorithm performs 5 refinement steps on the [25, 25] version of the image, 10 on [50, 50], and finally 20 on [100, 100]. By default, the skill directly updates the pose of the objects if the refinement was successful. Alternatively, one can set the skip_pose_update parameter to true (which leaves the objects at their original pose) and use the return value of the skill, which contains the refined poses.
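The mapping between num_iterations_per_level and pyramid levels can be sketched as follows. This is an illustrative model only, assuming each level halves the image size; pyramid_schedule is a hypothetical helper, not part of the skill's API.

```python
def pyramid_schedule(image_size, num_iterations_per_level):
    """Return (size, iterations) pairs, coarsest level first.

    num_iterations_per_level is ordered finest-first, matching the
    skill parameter: entry 0 applies to the full-resolution image.
    Assumes each pyramid level halves the image size.
    """
    levels = []
    for level, iterations in enumerate(num_iterations_per_level):
        scale = 2 ** level
        size = (image_size[0] // scale, image_size[1] // scale)
        levels.append((size, iterations))
    # Coarse-to-fine: run the most downsampled level first.
    return list(reversed(levels))
```

With the values from the example above, pyramid_schedule((100, 100), [20, 10, 5]) yields the schedule [((25, 25), 5), ((50, 50), 10), ((100, 100), 20)].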

Occlusion masks: In some cases the object may be partially occluded by a known object; a common example case is a gripper being visible in a camera that is mounted on the robot flange. The caller may specify a list of occluder objects which will mask out these regions of the image during refinement.

Example settings for refinement:

# Performs 10 iterations on the two coarsest levels and 20 iterations on
# the finest level.
refinement.num_iterations_per_level = [20, 10, 10]
# Requires 0.9 as the minimum score. Refinement below this score will
# result in a skill failure.
refinement.score_threshold = 0.9
# Recommended setting. Aligns color and normal edges to the edges in the
# image.
refinement.model_modality_type = COLOR_AND_NORMALS

Prerequisites

This skill does not have any prerequisites.

Usage Example

This skill does not have any usage example yet.
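In the absence of an official example, a hypothetical sketch using the Intrinsic Python solution-building library might look as follows. The import path, the connect_to_selected_solution() entry point, and the way skills, world objects, and camera resources are referenced are assumptions about the SDK and may differ between versions; the object and resource names are placeholders.

```
# Hypothetical sketch — module paths, skill invocation style, and all
# names below are assumptions, not verified against a specific SDK release.
from intrinsic.solutions import deployments

solution = deployments.connect_to_selected_solution()
skills = solution.skills

# Refine the pose of a workpiece using the wrist camera; keep the
# belief world unchanged and read the result from the return value.
refine = skills.ai.intrinsic.refine_pose(
    objects=[solution.world.workpiece],        # placeholder object name
    skip_pose_update=True,
    camera=solution.resources.wrist_camera,    # placeholder resource name
)
```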

Parameters

objects

Objects whose poses will be refined. The corresponding object poses will be updated if refinement of all objects was successful (unless skip_pose_update is set).

refinement

Refinement options. Uses reasonable default options if this field is not specified.

occluders

Optional list of world objects which may occlude parts of the image. The provided objects will be rendered from the current camera view into an occlusion mask which the pose estimator uses to mask out the occluded regions.

skip_pose_update

If set, the objects' poses won't be updated. The caller will receive the refined poses through the return value.

image_intensity_corrections

List of image intensity corrections to be applied to the acquired image before running pose estimation. The corrections are applied in the order in which they are specified.
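Since the corrections are order-dependent, their sequential application can be sketched as follows. This is an illustrative model only; the skill's actual correction types are defined by its parameter message and are not shown here.

```python
def apply_corrections(image, corrections):
    """Apply per-pixel intensity corrections sequentially, in order.

    Each correction is modeled as a function on a single intensity
    value. Order matters: e.g. doubling then adding an offset differs
    from adding the offset first.
    """
    for correct in corrections:
        image = [[correct(value) for value in row] for row in image]
    return image
```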

capture_data

If specified, the capture results for pose estimation will be retrieved from the corresponding kvstores (contained in capture_data) instead of captured directly from the cameras.

Capabilities

camera

Resource with capability CameraConfig

Returns

refined_poses

Refined pose for each input object in the same order.

Error Code

The skill does not have error codes yet.