collect_calibration_data
Skill class for the ai.intrinsic.collect_calibration_data skill.
Performs the data collection for intrinsic and/or camera-to-robot calibration for a single camera and a single robot.
The skill moves the robot to the provided waypoints and estimates the pose of the calibration object at each waypoint. The collected data is returned as a HandEyeCalibrationRequest, which can then be used to perform the calibration with the CalibrateHandEye skill.
The following two typical cases are supported:
- 'STATIONARY_CAMERA': The camera has a fixed position in the workcell and the calibration object is mounted on the robot's flange.
- 'MOVING_CAMERA': The camera is mounted on the robot's flange and the calibration object is placed in the workcell.
The data collection can be performed with or without collision detection. The mode without collision detection is useful when the model of the world is not accurate but the provided waypoints are known to be safely reachable.
This skill depends on the update_robot_joint_positions, move_robot, and estimate_and_update_pose skills. Make sure these skills have been installed as part of your application before running this skill.
Prerequisites
This skill does not have any prerequisites.
Usage Example
This skill does not have any usage example yet.
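Since no official example exists yet, the following is a hypothetical Python sketch. The make_waypoints helper is purely illustrative, and the commented-out invocation at the bottom (the deployments module path, the skill accessor, and the executive call) is an assumption about the Solutions Python client, not the official API; the parameter names match the Parameters section above, but the exact message types may differ in your SDK version.

```python
# Hypothetical sketch only: the skill invocation below is commented out and
# its module paths / accessors are assumptions, not the documented API.
import math


def make_waypoints(num_views: int, radius_rad: float = 0.2) -> list[list[float]]:
    """Build illustrative joint-space waypoints around a base pose.

    Generates small offsets on the last two joints so the camera (or the
    calibration pattern, depending on the calibration case) observes the
    calibration object from varied viewing angles.
    """
    base = [0.0, -1.57, 1.57, 0.0, 1.57, 0.0]  # example 6-DOF joint pose
    waypoints = []
    for i in range(num_views):
        angle = 2 * math.pi * i / num_views
        wp = list(base)
        wp[4] += radius_rad * math.cos(angle)
        wp[5] += radius_rad * math.sin(angle)
        waypoints.append(wp)
    return waypoints


if __name__ == "__main__":
    waypoints = make_waypoints(10)
    # Assumed invocation pattern (module paths and accessors may differ):
    # from intrinsic.solutions import deployments
    # solution = deployments.connect_to_selected_solution()
    # skill = solution.skills.ai.intrinsic.collect_calibration_data(
    #     pose_estimator=...,            # id of a pose estimator in the solution
    #     calibration_case="MOVING_CAMERA",
    #     calibration_object=...,        # id of the calibration pattern object
    #     waypoints=waypoints,
    #     minimum_margin=0.05,           # meters; raise if the world is uncertain
    # )
    # hand_eye_calibration_request = solution.executive.run(skill)
    print(len(waypoints))
```

The resulting HandEyeCalibrationRequest would then be passed on to the CalibrateHandEye skill, as described above.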
Parameters
pose_estimator
Id of the pose estimator to use.
pattern_detection_config
Pattern detection configuration to use if the samples should include pattern detections. This is required if the samples are also used for intrinsic calibration.
calibration_case
Has to be one of 'STATIONARY_CAMERA' or 'MOVING_CAMERA'.
calibration_object
Uniquely identifies the object used for calibration. Typically a calibration pattern, although with an appropriate pose estimator any kind of object can be used.
waypoints
Robot waypoints.
minimum_margin
Minimum margin between the moving object (the calibration pattern in the 'STATIONARY_CAMERA' case, the camera in the 'MOVING_CAMERA' case) and all other world objects.
Set this parameter to a higher value if you are unsure about the exact positions of the objects in your world.
disable_collision_checking
Set to true to run without collision checking. This enables cases where the exact camera position is unknown, or the world is not accurately modelled.
ensure_same_branch_ik
If true, this restricts the sampled poses to the same IK branch of the robot.
motion_type
Determines the type of robot motion used to collect the data. Defaults to a planned move.
skip_return_to_base_between_waypoints
If true, the robot does not return to the base pose between the individual waypoints. Using the return-to-base strategy can help increase robustness, e.g. in the presence of a dress pack.
arm_part
Name of the ICON arm part to control. If not provided and only a single arm part is present in the ICON instance, that part is used.
use_unified_calibration_service
TODO(b/441268424): Remove this field once the unified calibration service is fully rolled out.
Capabilities
camera
Resource with capability CameraConfig
robot
Resource having all of the following capabilities:
- Icon2Connection
- Icon2PositionPart
Returns
hand_eye_calibration_request
The collected calibration data as a HandEyeCalibrationRequest, which can be used to perform the calibration with the CalibrateHandEye skill. TODO(b/441268424): Remove this field once the unified calibration service is fully rolled out.
Error Codes
This skill does not have any error codes yet.